I was asked to develop a data reduction pipeline for a telescope and am facing a problem.
I have the standard lamp spectra and need to identify the emission lines in order to find a wavelength solution to apply to the spectra taken.
I managed to find the signal peaks and calculate their centroids; I am unsure, however, how to match the centroids I found against a list of the emission lines of the lamps.
I basically have to do what "identify" in IRAF does, but in Python. I know there have been some packages that do this (like specutils), but they are outdated and rely on deprecated libraries.
Any tip on where to go or what to do to have a centroid-wavelength match would be greatly appreciated.
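Lacking a drop-in identify replacement, one workable approach is to start from a rough linear dispersion guess, match each centroid to its nearest catalogue line within a tolerance, and refit. A minimal sketch (the function name, the guess parameters w0/dpix, and the tolerance are illustrative assumptions, not an existing API):

```python
import numpy as np

def match_lines(centroids_pix, linelist, w0, dpix, tol=2.0):
    """Match pixel centroids to lamp lines, given a rough linear guess
    lambda ~ w0 + dpix * pixel (e.g. from the instrument manual).
    Returns matched (pixel, wavelength) pairs and a refined
    polynomial wavelength solution."""
    centroids_pix = np.asarray(centroids_pix, dtype=float)
    linelist = np.sort(np.asarray(linelist, dtype=float))

    # Rough wavelength of each centroid under the linear guess.
    approx = w0 + dpix * centroids_pix

    pairs = []
    for pix, lam in zip(centroids_pix, approx):
        i = np.argmin(np.abs(linelist - lam))   # nearest catalogue line
        if abs(linelist[i] - lam) < tol:        # accept only close matches
            pairs.append((pix, linelist[i]))

    pix, lam = np.array(pairs).T
    # Refine: low-order polynomial fit, pixel -> wavelength.
    coeffs = np.polyfit(pix, lam, deg=min(3, len(pairs) - 1))
    return pairs, np.poly1d(coeffs)
```

In practice one would iterate: fit, widen the matched set with the improved solution, and reject outliers by residual, much as identify does interactively.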
Wavelength Calibration in Python - Line Identification algorithm - Astronomy
I used refspectra (noao.onedspec.refspec) to associate reference arcs with star spectra (Some stars have two arcs, which are averaged together)
To do this more easily, I used a cl file.
This didn't work ('not reference spectra') because I had not used identify yet. After some IRAF problems (and a new version of IRAF being installed), this finally worked.
I manually ran identify on the following files, to figure out how to do it.
I then used reidentify, with the first file (1feb05.0038.0037.ms.fits) as a template, to calibrate the rest. I checked the RMS for each fit, and if it was unusually large (over 0.5; the points were generally scattered randomly), I checked the fit and identified or deleted lines.
I then ran refspec, successfully this time.
I ran dispcor to set the wavelength scale for the stars and found the dispersion to be 0.75 per pixel; there was a jump to zero on the left end, so I ran it again starting at 3511 instead of 3500.
The new names are w + the original name.
w1feb05.0042.ms.fits and w1feb05.0043.ms.fits are flat lines (?)
(after using the combined arcs, these spectra were normal)
w1feb05.0056.ms.fits is junk
w1feb05.0058.ms.fits is very noisy (this image was out of focus - railroad tracks)
Using the log from the last run of dispcor, I found arcs with similar pixel shifts (i.e. they are shifted only a little from each other) and used imcombine to combine these arcs.
I used reidentify to redo the fits for these summed images.
I then again ran refspec using a new .cl file to reassign reference images for the stars whose arcs were summed, then ran dispcor again with the new references.
Several spectra had jumps to zero before 3515, so I reran dispcor starting at 3515. There was a jump to zero at the end as well, so I set the end wavelength to 5497.
Ran dispcor again, changing the following parameters:
(w1 = 3515.) Starting wavelength
(w2 = 5497.) Ending wavelength
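For reference, the linear output grid implied by these settings can be reproduced in a couple of lines (a sketch only; dispcor itself would instead tweak the dispersion slightly to end exactly at w2):

```python
import numpy as np

# Output grid implied by the dispcor settings above (w1, w2 in Angstroms;
# dw is the 0.75 per-pixel dispersion found earlier).
w1, w2, dw = 3515.0, 5497.0, 0.75

# Largest grid starting at w1 with step dw that stays within w2.
npix = int((w2 - w1) / dw) + 1
wave = w1 + dw * np.arange(npix)   # wavelength of each output pixel
```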
I used hedit to set the observatory to mmto (from MMT) to avoid entering this every time.
Command line interface
pygwinc provides a command line interface that can be used to calculate and plot the various canonical IFO noise budgets described above. For most distributions this should be available via gwinc at the command line, or python3 -m gwinc otherwise:
Custom budgets may also be processed by providing the path to the budget module/package:
Budget plots can be saved in various formats (.png, .svg, .pdf):
Or trace data can be saved to an HDF5 file:
Trace HDF5 files can also be plotted directly:
The --range option can be used to include the values of various inspiral ranges for the overall noise in the output.
IFO parameters can be manipulated from the command line with the --ifo option:
You can also dump the IFO parameters to a YAML-formatted parameter file:
Stand-alone YAML files assume the nominal 'aLIGO' budget description.
The command line interface also includes an "interactive" mode which provides an IPython shell for interacting with a processed budget:
See command help for more info:
For custom plotting, parameter optimization, etc. all functionality can be accessed directly through the gwinc library interface:
A default frequency array is used, but alternative frequencies can be provided to load_budget() either in the form of a numpy array:
or frequency specification string ('FLO:[NPOINTS:]FHI'):
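The 'FLO:[NPOINTS:]FHI' form can be illustrated with a small parser (a sketch of the behavior described, not pygwinc's own code; the default point count here is an assumption):

```python
import numpy as np

def parse_freq_spec(spec, default_npoints=3000):
    """Parse a 'FLO:[NPOINTS:]FHI' frequency specification into a
    log-spaced frequency array."""
    parts = spec.split(':')
    if len(parts) == 2:                      # 'FLO:FHI'
        flo, fhi = map(float, parts)
        npoints = default_npoints
    elif len(parts) == 3:                    # 'FLO:NPOINTS:FHI'
        flo, npoints, fhi = float(parts[0]), int(parts[1]), float(parts[2])
    else:
        raise ValueError(f"bad frequency spec: {spec!r}")
    return np.logspace(np.log10(flo), np.log10(fhi), npoints)
```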
The load_budget() function takes most of the same inputs as the command line interface (e.g. IFO names, budget module paths, YAML parameter files), and returns the instantiated Budget object defined in the specified budget module (see budget interface below). The budget's ifo gwinc.Struct is available in the budget.ifo attribute.
The budget run() method calculates all budget noises and the noise total and returns a BudgetTrace object with freq, psd, and asd properties. The budget sub-traces are available through a dictionary interface (trace['QuantumVacuum']) and via attributes (trace.QuantumVacuum).
The budget freq and ifo attributes can be updated at run time by passing them as keyword arguments to the run() method:
The algorithm implemented here works as follows: For each RV shift to be considered, the wavelength axis of the template is shifted, either linearly or using a proper Doppler shift depending on the mode. The shifted template is then linearly interpolated at the wavelength points of the observation (spectrum) to calculate the cross-correlation function.
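The loop described above can be sketched directly with numpy (a simplified stand-in for this routine, assuming the default "doppler" mode and omitting the skipedge/edgeTapering options):

```python
import numpy as np

def crosscorr_rv(wave_obs, flux_obs, wave_tmpl, flux_tmpl,
                 rvmin, rvmax, drv):
    """For each trial RV, Doppler-shift the template wavelength axis,
    linearly interpolate the template onto the observed wavelengths,
    and accumulate the cross-correlation sum."""
    c = 299792.458                             # speed of light [km/s]
    rvs = np.arange(rvmin, rvmax + drv / 2, drv)
    cc = np.empty_like(rvs)
    for i, rv in enumerate(rvs):
        shifted = wave_tmpl * (1.0 + rv / c)   # Doppler-shifted template axis
        interp = np.interp(wave_obs, shifted, flux_tmpl)
        cc[i] = np.sum(flux_obs * interp)      # cross-correlation value
    return rvs, cc
```

A positive RV at the peak of cc means the template had to be red-shifted to match the observation, consistent with the sign convention described below.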
The wavelength axis of the observation.
The flux axis of the observation.
The wavelength axis of the template.
The flux axis of the template.
rvmin : float
Minimum radial velocity for which to calculate the cross-correlation function [km/s].
rvmax : float
Maximum radial velocity for which to calculate the cross-correlation function [km/s].
drv : float
The width of the radial-velocity steps to be applied in the calculation of the cross-correlation function [km/s].
mode : string, optional
The mode determines how the wavelength axis will be modified to represent an RV shift. If "lin" is specified, a mean wavelength shift is calculated based on the mean wavelength of the observation, and the wavelength axis is shifted by that amount. If "doppler" is specified (the default), the wavelength axis is properly Doppler-shifted.
skipedge : int, optional
If larger than zero, the specified number of bins is skipped from the beginning and end of the observation. This may be useful if the template does not provide sufficient coverage of the observation.
edgeTapering : float or tuple of two floats
If not None, the method will "taper off" the edges of the observed spectrum by multiplying with a sine function. If a float is specified, it defines the width (in wavelength units) used for tapering on both sides. If different tapering widths shall be used, a tuple of two (positive) numbers must be given, specifying the widths to be used on the low- and high-wavelength ends. If a nonzero skipedge is given, it is applied first. Edge tapering can help to avoid edge effects (see, e.g., Gullberg and Lindegren 2002, A&A 390).
The routine returns the RV axis of the cross-correlation function and the cross-correlation function itself. The radial velocities refer to a shift of the template, i.e., positive values indicate that the template has been red-shifted and negative values a blue-shift of the template; the numbers are given in km/s. An algorithm for finding the extreme points by parabolic approximation is also provided (quadExtreme()).
A Spectroscopy and Astrophysics Library for Python 3
This Python package is an expanding code base for doing computational astronomy, particularly spectroscopy. It contains both a Spectrum class for handling spectra as objects (with +, -, *, /, etc. operations defined) and a growing suite of analysis tools.
Quick note: the subpackage astrolibpy was not developed by me. It was coded by Sergey Koposov (@segasai) at Cambridge (then, at least). I found it useful for performing velocity corrections on my spectroscopic data. I've modified several modules so that they can be imported and used in Python 3.x. See his README file.
SLiPy is split into several components. The principal component is the subpackage SLiPy itself, which contains all the relevant functionality. Further, Data is a package I'm working on that will provide an API for searching astronomical data archives in a simple way. The other two subpackages, Framework and astrolibpy, are of utility to the project but not necessarily intended for export. As stated previously, astrolibpy was not developed by me, only modified. I'm not going to document its usage here. Its name is unfortunate for me, as it is a bit overdone with the convention I was already using, but for consistency I will keep it as it was from the author.
The following modules are elevated to the package level and are available to import:
SLiPy
|Module||Functions/Classes|
|Spectrum||WaveVector, Spectrum|
|Fits||Find, RFind, GetData, Header, Search, PositionSort|
|Simbad||Position, Distance, Sptype, IDList|
|Correlate||XCorr|
|Telluric||Correct|
|Velocity||HelioCorrect, BaryCorrect, IrafInput|
|Observatory||Observatory|
|Plot||SPlot, Iterate|
|Profile||Select, Fit, Extract|
|Montage||SolveGrid, Mosaic, SubField, Field|
SLiPy/Data
|Module||Functions/Classes|
|Elodie||Archive, Script, Download|
|Atomic||Ion, IonManager|
To install SLiPy, there is no setup procedure. Simply download the package, either by clicking on the download link for a tar or zip archive or by cloning it.
Extract its contents to a directory wherever you like (ostensibly named slipy, but you can actually call this library whatever you want, because all the imports are relative). Then add the parent directory to your PYTHONPATH if it isn't already. For example:
SLiPy attempts to catch all foreseeable exceptions and re-throw them under a common handle with a human-readable message. There is a unique exception class for every module, all derived from a common SlipyError. The naming convention is for a module's exception to be named after the module with the addition of the word Error, so the Fits module will throw a FitsError. These are meant to handle errors internally; the user need not worry about these in an interactive session. Inside of scripts, however, you might catch SlipyError. For finer control, catch the individual exceptions (e.g., Fits.FitsError).
If you use SLiPy or have your own code related to spectroscopy or computing for astronomy and think it would be a useful addition (or you find a bug/mistake) I'm more than open to suggested contributions/additions.
Geoffrey Lentner, B.S.
Graduate Research Assistant
Department of Physics & Astronomy
University of Louisville
Significant intellectual contributions have been made by my thesis advisor, specifically in terms of the science behind much of this package.
James Lauroesch, Ph.D.
Department of Physics & Astronomy
University of Louisville
If you have made use of SLiPy in your project/research, you can acknowledge your use in the following ways:
This research has made use of SLiPy, an open source spectroscopy and astrophysics library for Python 3 (G. Lentner, 2015).
If your code either makes use of or borrows from SLiPy, a good way to reference this is with a shield in your README file.
The above badge is generated using the following snippet
Objects for representing astronomical data. Currently, this includes the Spectrum class and its helper function WaveVector.
WaveVector ( rpix, rval, delt, npix ):
Construct a numpy array of wavelength values, where rpix is the reference pixel index, rval is the wavelength at the reference pixel, delt is the resolution (delta lambda), and npix is the length of the desired array.
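A sketch of that construction (assuming the FITS convention of 1-indexed reference pixels, which is what the crpix1/crval1/cdelt1 keywords below imply):

```python
import numpy as np

def wave_vector(rpix, rval, delt, npix):
    """Wavelength at (1-indexed) pixel i is rval + (i - rpix) * delt."""
    return rval + (np.arange(1, npix + 1) - rpix) * delt
```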
class Spectrum ( *args, **kwargs ):
The Spectrum object is a container class for a data array and its corresponding wavelength array.
There are a few ways to create a Spectrum. If a single string argument is given, it is taken as a file name and used to read in data from a FITS file. With the keyword argument wavecal set as True (the default case), elements are read from the header to create a corresponding wavelength array to go with the data.
|wavecal||True||fit wavelength vector to data|
|crpix1||'crpix1'||reference pixel header keyword|
|crval1||'crval1'||value at reference pixel|
|cdelt1||'cdelt1'||resolution (delta lambda)|
|xunits||'Angstrom'||units for wavelength array|
|yunits||''||units for data array|
If an array-like object is given, it is converted to a numpy array and taken as the spectrum data. A wavelength array will be generated that is simply an index (pixel) count. But if a second argument is provided, it will serve as the wavelength array. These must be equal in length however.
Units will be imposed for these arrays. When initialized from a file, the default units are Angstrom and dimensionless_unscaled for the wavelength and data arrays respectively. Alternatives can be applied by providing the keyword arguments xunits and yunits. If initialized via an array-like object, dimensionless_unscaled will only be applied if no units are detected. If no wavelength array is provided, the generated wavelength array will have pixel units. Units are again only applied if none are detected for the given array.
Addition, subtraction, multiplication, and division (including in-place operations, e.g., '+=') are defined for both spectrum-on-spectrum operations as well as scalar operations. When two spectra are operated on, the LHS spectrum takes precedence. One of the spectra must be contained within the other (i.e., their wavelength domain is either equal to or entirely interior to the other). The outer spectrum is resampled onto the inner spectrum's pixel space and the operation is applied element-wise. To state it briefly, only the affected region of the LHS spectrum is operated on. This makes units dangerous for multiplication and division. Only in the case of multiplying/dividing spectra with equivalent wavelength arrays will the units be properly applied; otherwise, the RHS units will be ignored. Considering the physical context within which it makes sense to apply these operations to spectra, this is justified; the data will have dimensionless units in all likelihood. For scalar addition and subtraction, if no units are given, the units are implied to be the same as that of the data.
The comparison operations ( >, <, >=, <=, ==, !=, &, ^, | ) are defined to return a binary Spectrum of dimensionless 1's and 0's. The same convention applies as above. Either the LHS or RHS must be contained within the other, and the LHS is compared on the overlapping regions. Data outside this overlapping region is returned as False.
The shift operations ('>>' and '<<') are defined to mean a shift in the spectrum. Whereas addition and subtraction operate on the data, these operations apply to the wave array. Spectrum << 2 * u.Angstrom says to blue-shift the spectrum by 2 Angstroms. If the RHS is a dimensionless number, the wavelength units are implied. One can also shift a spectrum by another spectrum (e.g., spectrumA >> spectrumB), where only the wave arrays are operated on. In these cases, they should have the same number of pixels! Finally, to get a variable shift across the spectrum without creating a whole spectrum with junk data, you can shift using a numpy array (also of equal length). If no units are detected, the wavelength units will be implied.
Also, you can access the spectrum data via "index" notation. The behavior is not the same though. See the below interactive demonstration.
If given a single argument, it is taken to be a Spectrum object, and self is resampled onto the pixel space of the other spectrum. Otherwise, three arguments are expected. The first and second arguments should define the lower and upper wavelength values of a domain, respectively. The third argument should be the number of elements (pixels) for the new domain. Think numpy.linspace(). kind is passed to scipy.interpolate.interp1d.
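The three-argument form can be sketched with numpy.interp standing in for the interpolation (SLiPy itself passes kind to scipy.interpolate.interp1d; this simplified version is always linear):

```python
import numpy as np

def resample(wave, data, low, high, npix):
    """Build a new wavelength domain with numpy.linspace and
    interpolate the data onto it."""
    new_wave = np.linspace(low, high, npix)
    return new_wave, np.interp(new_wave, wave, data)
```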
Given a Spectrum, other, contained within the wavelength domain of self, replace all pixels in the overlapping region with that of an interpolation built on other. kind is passed to interp1d.
Essentially a wrapper to deepcopy(). To say SpectrumA = SpectrumB implies that SpectrumA is SpectrumB. If you want to create a new spectrum equal to another, say SpectrumA = SpectrumB.copy()
Manipulate FITS files. Import data into Spectrum objects. Filter results by right ascension and declination. Grab header elements. Search for attributes of the data such as distance, spectral type, etc.
Find (toplevel = './', pattern = '*.fits'):
Search for file paths below toplevel fitting pattern. Returns a list of string values.
RFind (toplevel = './', pattern = '*.fits'):
Recursively search for file paths below toplevel fitting pattern. Returns a list of string values.
GetData ( *files, **kwargs):
Import data from FITS files. Returns a list of Spectrum objects.
Header ( filename, keyword = None, **kwargs):
Retrieve keyword from the FITS header in file filename. The return type depends on the value retrieved. If no keyword is given, the entire header object is returned.
Search ( *files, **kwargs):
Extract object names from Fits files and use Simbad module to resolve the attribute (a required keyword argument) from the SIMBAD astronomical database. Currently available attributes are 'Position', 'Distance', 'Sptype', and 'IDList'. Returns a list of results (type depends on the values).
PositionSort ( center, radius, *files, **kwargs ):
Return a list of files from files that lie in a radius (in decimal degrees) from center, based on the ra (right ascension) and dec (declination).
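The geometric test behind this can be sketched as follows (an illustration only: SLiPy reads ra and dec from the FITS files, whereas here hypothetical (ra, dec) pairs are passed in directly):

```python
import numpy as np

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in decimal degrees, via the spherical
    law of cosines (adequate at these tolerances)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

def position_sort(center, radius, positions):
    """Keep entries whose (ra, dec) lies within `radius` degrees of
    `center` (both in decimal degrees)."""
    ra0, dec0 = center
    return [p for p in positions
            if angular_separation(ra0, dec0, p[0], p[1]) <= radius]
```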
This module allows the user to query the SIMBAD astronomical database from inside Python or shell commands/scripts. Its four current major functions, Position, Distance, Sptype, and IDList, return real variables with appropriate types ready for use.
Position ( identifier, **kwargs ):
Return the right ascension and declination in decimal degrees of identifier as a pair.
Distance ( identifier, **kwargs ):
Return the distance in parsecs to identifier.
Sptype ( identifier, **kwargs ):
Return the spectral type of identifier as resolved by SIMBAD.
IDList ( identifier, **kwargs ):
Return a list of alternate IDs for identifier.
Correlation functions for astronomical data.
XCorr ( spectrumA, spectrumB, **kwargs ):
The function returns an integer value representing the best shift within a lag based on the computed RMS of each configuration.
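That search can be sketched as a brute-force loop over integer shifts (an illustration of the idea, not SLiPy's implementation; equal-length data arrays are assumed):

```python
import numpy as np

def xcorr(a, b, lag=25):
    """Try integer shifts in [-lag, +lag] and return the one that
    minimizes the RMS difference over the overlapping pixels."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    best_shift, best_rms = 0, np.inf
    for s in range(-lag, lag + 1):
        if s >= 0:
            d = a[s:] - b[:n - s]          # overlap for a shifted right
        else:
            d = a[:n + s] - b[-s:]         # overlap for a shifted left
        rms = np.sqrt(np.mean(d ** 2))
        if rms < best_rms:
            best_shift, best_rms = s, rms
    return best_shift
```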
Removal of atmospheric absorption lines in spectra.
Correct ( spectrum, *calibration, **kwargs ):
Perform a telluric correction on spectrum with one or more calibration spectra. If more than one calibration spectrum is provided, the one with the best fit after performing both a horizontal cross correlation and a vertical amplitude fit is used. The spectrum and all the calibration spectra must have the same number of pixels (elements). If a horizontal shift in the calibration spectra is appropriate, only the corresponding range of the spectrum is divided out!
Notice: Your spectra must be continuum-normalized for this to work!
|lag||25||pixel range to shift over|
|range||(0.5, 2.0, 151)||numpy.linspace for amplitude fitting|
Figure 1: The above figure is an example of applying the Telluric.Correct() algorithm to a spectrum. In this case, six spectra of Regulus from the Elodie archive were used as calibration spectra.
Radial velocity corrections for 1D spectra.
HelioCorrect ( observatory, *spectra, **kwargs ):
Perform heliocentric velocity corrections on spectra based on the observatory parameters (longitude, latitude, altitude) and the member attributes ra (right ascension), dec (declination), and jd (Julian date) of the spectra. These should all have units.
BaryCorrect ( observatory, *spectra, **kwargs ):
Perform barycentric velocity corrections on spectra based on the observatory parameters (longitude, latitude, altitude) and the member attributes ra (right ascension), dec (declination), and jd (Julian date) of the spectra. These should all have units.
IrafInput ( *files, **kwargs ):
Build an input file for IRAF's rvcorrect task.
files should be a list of FITS file names to build the output table for. The user can optionally specify a toplevel directory to search for files under. If outfile is given, write results to the file.
Define observatory parameters similar to the IRAF task. All observatories should follow the following pattern. The user can add as many as they like to this module. I welcome suggestions.
There are currently 69 defined observatories:
|Class ID||Observatory Name|
|OHP||The Observatoire de Haute-Provence, France.|
|KPNO||Kitt Peak National Observatory|
|CTIO||Cerro Tololo Interamerican Observatory|
|LASILLA||European Southern Observatory: La Silla.|
|PARANAL||European Southern Observatory: Paranal|
|LAPALMA||Roque de los Muchachos, La Palma.|
|MSO||Mt. Stromlo Observatory|
|SSO||Siding Spring Observatory|
|LCO||Las Campanas Observatory|
|MTBIGELOW||Catalina Observatory: 61 inch telescope|
|DAO||Dominion Astrophysical Observatory|
|SPM||Observatorio Astronomico Nacional, San Pedro Martir.|
|TONA||Observatorio Astronomico Nacional, Tonantzintla.|
|PALOMAR||The Hale Telescope|
|NOV||National Observatory of Venezuela|
|BMO||Black Moshannon Observatory|
|BAO||Beijing XingLong Observatory|
|KECK||W. M. Keck Observatory|
|EKAR||Mt. Ekar 182 cm. Telescope|
|APO||Apache Point Observatory|
|VBO||Vainu Bappu Observatory|
|IAO||Indian Astronomical Observatory, Hanle|
|ORO||Oak Ridge Observatory|
|LNA||Laboratorio Nacional de Astrofisica - Brazil|
|SAAO||South African Astronomical Observatory|
|CASLEO||Complejo Astronomico El Leoncito, San Juan.|
|BOSQUE||Estacion Astrofisica Bosque Alegre, Cordoba.|
|ROZHEN||National Astronomical Observatory Rozhen - Bulgaria.|
|IRTF||NASA Infrared Telescope Facility|
|BGSUO||Bowling Green State Univ Observatory.|
|DSAZ||Deutsch-Spanisches Observatorium Calar Alto - Spain.|
|CA||Calar Alto Observatory|
|HOLI||Observatorium Hoher List (Universitaet Bonn) - Germany.|
|LMO||Leander McCormick Observatory|
|FMO||Fan Mountain Observatory|
|WHITIN||Whitin Observatory, Wellesley College|
|OSN||Observatorio de Sierra Nevada|
|GEMINI NORTH||Gemini North Observatory|
|GEMINI SOUTH||Gemini South Observatory|
|ESONTT||European Southern Observatory, NTT, La Silla.|
|ESO36M||European Southern Observatory, 3.6m Telescope, La Silla.|
|ESOVLT||European Southern Observatory, VLT, Paranal.|
|SLN||SLN - Catania Astrophysical Observatory.|
|EUO||Ege University Observatory|
|TNO||Turkiye National Observatory|
|TUG||TUBITAK National Observatory, Turkey.|
|MGO||Mount Graham Observatory|
|ARIES||Aryabhatta Research Institute of Observational Sciences.|
|OALP||Observatorio Astronomico de La Plata|
|OLIN||Connecticut College - Olin Observatory|
|SOAR||Southern Astrophysical Research Telescope.|
|HET||McDonald Observatory - Hobby-Eberly Telescope.|
|JCDO||Jack C. Davis Observatory, Western Nevada College|
|LNO||Langkawi National Observatory|
Convenient wrapper to matplotlib for plotting spectra. A SPlot is simply a manager of figure attributes, letting you quickly go from looking at one spectrum to another. One can also overlay spectra.
class SPlot ( spectrum, **kwargs ):
Spectrum Plot - Create figure of spectrum.
|marker||'b-'||marker for data|
|label||'spectrum'||name of object|
|usetex||False||render with pdflatex|
The following member functions call their matplotlib.pyplot equivalents: **xlim**, **ylim**, **xlabel**, **ylabel**, **title**, **legend**, **text**, **grid**, **close**, **tight_layout**.
When these functions are called, the arguments are passed to matplotlib; however, the calls are remembered, so when you go to draw the figure again, you are back where you left off.
Rebuild and render the figure.
Re-render (refresh) the figure, without clearing the axis.
Clear all the calls to text( ) from the figure.
Switch either on or off (value = True | False) the horizontal offset.
Switch either on or off (value = True | False) the vertical offset.
Given one or more splots, overlay the figures.
Reassign the values for the markers in the figure. The number of arguments must equal the number of spectra in the figure.
Restore the figure to only the original spectrum.
Save the figure to a file called filename. The file format is derived from the extension of filename.
Iterate( *splots, **kwargs ):
Iterate through splots to inspect data while the user marks spectra of interest. The function returns a list of keepers.
|keep||'name'||alternative is 'plot'|
Profile fitting tasks for spectra.
Select ( splot ):
Select points from the splot. This should be of type SPlot (or it can optionally be a Spectrum type, for which a SPlot will be created). The splot will be rendered and the user clicks on the figure. When finished, return to the terminal prompt. A dictionary is returned with two entries, wave and data, representing the x-y locations selected by the user. This can always be retrieved later by accessing the module member Profile.selected.
While the user makes selections, temporary markers appear on the figure indicating the data point that was just selected. If a mark does not appear, try moving the cursor slightly and clicking again. Even if the line goes through that point, there might not actually be data there.
AutoFit ( splot, function = InvertedLorentzian, params = None)
Given a splot of type Plot.SPlot, the user selects four points on the spectrum and a parameterized function is fit (an inverted Lorentzian by default). Optionally, splot can be of type Spectrum, and a basic SPlot will be created for you. If the user gives an alternative function, params (parameters) must be provided. params is to be the first guess, p0, given to scipy.optimize.curve_fit; the user can provide them explicitly, or in the form of functions with the template f(x, y), where x and y are the wave and data arrays (respectively) extracted between the two inner points selected by the user.
InvertedLorentzian is defined in SLiPy.Algorithms.Functions. The user does not need to provide params for the default behavior.
Figure 2: The above figure was generated by running the above code snippet.
Extract ( splot, kernel = Gaussian, **kwargs):
Select locations in the splot figure, expected to be of type SPlot. Exactly four points should be selected. These are used to extract a line profile from the spectrum plotted in the splot figure. The inner section is used for the line, and the outer intervals are used to model the continuum; these, respectively, are both returned as Spectrum objects. The gap is jumped using 1D interpolation (scipy.interpolate.interp1d). In order to get a smooth continuum and error estimates, this function uses non-parametric kernel regression (a.k.a. kernel smoothing) to fit a curve through the data marked as continuum. By default, the kernel function is the Gaussian. The user can supply a different, user-defined kernel function, but its template must match that of SLiPy.Algorithms.Functions.Gaussian. See the SLiPy.Algorithms.KernelFit module to better understand how this happens. The bandwidth is the standard deviation of the Gaussian: a length scale that defines how influential data is at a particular distance. The default bandwidth will almost certainly be too large (resulting in a largely flat result at the average of the data points). As the bandwidth decreases to smaller than the resolution of the data, you will essentially be performing linear interpolation (no smoothing). You want to choose a length that is larger than the small-scale variation in the noise, but smaller than the large-scale variation in the overall continuum level.
The rms can optionally be computed between the continuum data and the curve fit to it. This rms value is used to scale the error profile that is returned for the line extracted. Explicitly, the error for a line extraction is error_spectrum = continuum_rms * I_0 / I, where I is the line data and I_0 is the interpolated continuum over the gap.
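The kernel-smoothing step can be sketched as Nadaraya-Watson regression with a Gaussian kernel (an illustration of the idea behind SLiPy.Algorithms.KernelFit, not its code):

```python
import numpy as np

def kernel_smooth(x, y, bandwidth):
    """Each output point is a weighted mean of all the data, with
    Gaussian weights falling off on the length scale `bandwidth`
    (the Gaussian standard deviation)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)
```

As described above, a very large bandwidth drives the result toward the flat mean of the data, while a bandwidth below the data's resolution reduces to no smoothing at all.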
Intensity calibration fundamentals
Intensity calibration is even more challenging. The data recorded by a spectrometer is never the true spectrum; rather, it is modified by every optical element along the light path, including lenses, mirrors, the diffraction grating, and the detector. Each of these elements has its own spectral response. Figure 3 shows the responses of a typical holographic diffraction grating (green curve) and a back-thinned CCD camera (black curve). A spectrum recorded with these components will include their instrumental artifacts. Researchers know that spectra should be normalized to remove the instrumental contribution, but because this process is tedious and difficult, it is rarely done. Choquette has outlined the technique in Applied Spectroscopy (2007, 61: 117).
Typical instrumental intensity artifacts are shown in Figure 4. These spectra show photoluminescence from zinc oxide recorded on a high-resolution spectrometer. The red curve is the true spectrum; the blue and green curves show spurious peaks arising from discontinuous changes in grating diffraction efficiency as the grating is rotated around its axis (i.e., as the wavelength is scanned across the focal plane).
The spectra presented in Figure 5 are of an incandescent lamp. The black curve shows the theoretical blackbody spectrum; the three colored curves show the measured data, with a sharp feature that arises from a Wood’s anomaly.
Figure 6 shows Raman spectra of stearic acid, a common component of pharmaceutical tablets. The spectrum rendered in red represents raw data, while the spectrum rendered in orange has been corrected. The excitation wavelength is 785 nm; the C-H stretching peaks at 2900 cm−1 fall at 1015 nm, where the detector’s quantum efficiency is rapidly decreasing to zero. Intensity calibration restores the proper peak-height ratio while flattening the baseline, making it much easier to quantify the fraction of stearic acid in a mixture.
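The normalization described above amounts to dividing out the instrument's spectral response, which can be estimated from a calibration source whose true spectrum is known (e.g. a standard lamp). A minimal sketch under that assumption — hypothetical function names, and real calibrations must also handle dark subtraction and zeros in the response:

```python
import numpy as np

def instrument_response(measured_standard, true_standard):
    """Estimate the wavelength-dependent instrument response from a
    calibration source whose true spectrum is known.  Both arrays must
    be sampled on the same wavelength grid."""
    return np.asarray(measured_standard, float) / np.asarray(true_standard, float)

def intensity_correct(measured, response):
    """Remove the instrumental contribution from a measured spectrum by
    dividing out the response estimated above."""
    return np.asarray(measured, float) / response
```

Because the same response multiplies every spectrum taken through the same optical path, a single response curve from the standard lamp corrects all subsequent measurements.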
We have implemented a simple iterative algorithm to select ThAr lines for use in calibrating echelle spectrographs. The algorithm combines the expected line properties (e.g. degree of blending), based on the input line-list, with properties of the ThAr spectrum observed on the spectrograph of interest (e.g. resolution, relative line intensities, etc.). We emphasize that although this paper has focused on the application of this algorithm to VLT/UVES spectra, the algorithm can be applied to any other spectrograph; all that is required is a ThAr spectrum taken on that spectrograph and a reduction code which allows repeatable wavelength calibration with different ThAr line-lists.
Using the most recent and reliable ThAr laboratory wavelengths, including those of LP07, we have produced a new ThAr line-list which, we argue, should be used to calibrate UVES spectra in preference to the list distributed by ESO. The main advantages of the former over the latter are: (i) more homogeneous input line catalogues with mutually consistent wavelength scales; (ii) more accurate and precise laboratory wavelengths; (iii) only unblended lines appear in the final list; and (iv) only lines which are observed to be useful in UVES spectra are included. Indeed, the wavelength calibration residuals achieved using the new UVES line-list (rms ∼ 35 m s−1) are more than a factor of 3 better than those achieved with the ESO or DCH98 list (rms ∼ 130 m s−1) – see Fig. 8. An assumption one makes in using our new line-list to calibrate UVES spectra is that the relative line intensities (i.e. lamp conditions like pressure, current, etc.) are similar to those in the sample UVES ThAr spectra used in our selection algorithm. Ideally, however, the selection algorithm should be included in the wavelength calibration procedure itself, thus producing the best selection of ThAr lines for each individual ThAr exposure being reduced.
The new line-list has been used to probe distortions of the wavelength scale introduced into UVES spectra which were originally wavelength calibrated with the ESO line-list. The distortions include sharp, short-range corrugations with peak-to-peak amplitude up to ∼75 m s−1 and longer range variations between +30 m s−1 and −30 m s−1 (Fig. 9). These distortions have serious implications in the context of current UVES estimates of possible variations in the fine-structure constant, especially those of Chand et al. (2004) – see Figs 10 and 11. For example, if one employs the common Mg/Fe ii combination of transitions to derive Δα/α, then the errors in the ESO line-list cause a spurious value of (Δα/α)ThAr ≈ +0.4 × 10−5 for absorption redshifts 1.22 ≲ zabs ≲ 1.35 and (Δα/α)ThAr ≈ −0.3 × 10−5 for 1.38 ≲ zabs ≲ 1.75. Although Chand et al. (2004) claim an overall precision in Δα/α of 0.06 × 10−5 from 23 Mg/Fe ii absorbers (but see Murphy et al. 2007b, c), a single absorber at zabs = 1.2433 strongly dominates their sample, with an individual error on Δα/α of 0.1 × 10−5. However, at this redshift, and considering which transitions are used to derive Δα/α in this system, we find that a spurious systematic shift of (Δα/α)ThAr ≈ +0.4 × 10−5 has affected this system. Once such wavelength calibration corrections are applied to the entire sample, the null result of Δα/α = (−0.06 ± 0.06) × 10−5 claimed by Chand et al. (2004) becomes Δα/α = (−0.17 ± 0.06) × 10−5 – a 2.5σ ‘detection’. The new ThAr line-list we present here should aid in avoiding such important yet elementary errors in future analyses of UVES spectra.
Resolving power R ≡ λ/FWHM, where FWHM is the instrumental profile's full width at half-maximum.
The included script fits_to_csv.py can be used as a command line tool to extract spectral data from fits files to csv, similar to the "Save Spectral Data as Text File" function in the original software.
I recommend taking exposures using SBIG's CCDops software, using the procedure given in the spectrograph manual. This software will only accept images with dimensions 121x648 pixels, which in CCDops means setting resolution to 1xN and vertical binning to 4.
Usage: fits_to_csv mercury_line_pixel [path] [nm]
The csv files are created in the same directory as the images, with the same names. The x-value at which the 5461 A mercury line appears in the image must be provided. If a path to a directory is specified, extracts data from all FITS files in that directory. If a filename is specified instead, extracts data from that file only. Omitting a path extracts data from all files in the current working directory. If the argument "nm" is provided, outputs wavelength in nm instead of A.
The pixel value at which the 5461 A mercury line appears in the image depends on the calibration process. Consult the spectrograph manual for how to identify this line.
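Under the assumption of an approximately linear dispersion, the pixel-to-wavelength conversion anchored at the Hg line can be sketched as below. This is an illustration, not the script's actual code, and the default dispersion value is a placeholder; the real figure must come from your spectrograph's calibration:

```python
def pixel_to_wavelength(pixels, hg_line_pixel, dispersion=2.0):
    """Convert pixel indices to wavelengths in Angstroms, assuming a
    linear dispersion (Angstroms per pixel) and anchoring the scale at
    the 5461 A mercury line.  The default dispersion of 2.0 A/pixel is
    a placeholder, not the instrument's measured value."""
    return [5461.0 + (p - hg_line_pixel) * dispersion for p in pixels]
```

For example, with the Hg line found at pixel 337, pixel 337 maps to 5461 A and each step in pixel index shifts the wavelength by one dispersion unit.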
fits_to_csv 337 image.FIT Extracts data from image.FIT and creates image.csv in the same directory.
fits_to_csv 337 Astronomy\images Extracts data from all fits files in Astronomy\images, creating separate csv files for each in that directory.
fits_to_csv 337 "C:\Documents\Astronomy\Spectrograph images\image.fit" nm Extracts data from image.fit, with the wavelength in nanometers.
fits_to_csv 337 nm Extracts data from all fits files in the current working directory, with the wavelength in nanometers.
For more advanced use cases, such as getting binned data, cropping to a specific part of the image (the default crop matches the original software's, pixel 55 to 65), or getting the data in a different format, the Python package can be used to build other tools. I'm working on better documentation; for now, the comments in the code will have to suffice.
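As an illustration of the kind of tool one might build on top of the package, here is a sketch of the "binned data" operation in plain Python/NumPy. It is not part of the package's actual API; the function name and conventions (average wavelengths, sum counts) are ours:

```python
import numpy as np

def bin_spectrum(wavelengths, counts, factor):
    """Bin a 1-D spectrum by an integer factor: average the wavelengths
    and sum the counts within each bin.  Trailing samples that do not
    fill a complete bin are dropped."""
    n = (len(counts) // factor) * factor
    w = np.asarray(wavelengths[:n], float).reshape(-1, factor).mean(axis=1)
    c = np.asarray(counts[:n], float).reshape(-1, factor).sum(axis=1)
    return w, c
```

Summing (rather than averaging) the counts preserves total flux per bin, which keeps Poisson noise estimates straightforward.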
Less than 4GB RAM
That's tricky because you won't have much free RAM after you deduct the size of all the CCDData objects and what your OS and the other processes need. In that case it would be best to process, for example, 5 bias images at a time and then combine the results of those first combinations. That works because you use average as the combining method (it wouldn't work with median), since (A+B+C+D)/4 is equal to ((A+B)/2 + (C+D)/2)/2.
That would be (I haven't actually checked this code, so please inspect it carefully before you run it):
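A plain-NumPy sketch of that chunked-averaging strategy (using in-memory arrays rather than CCDData objects and ccdproc; not the original code referred to above) could look like this:

```python
import numpy as np

def combine_average_chunked(frames, chunk=5):
    """Average a list of equal-sized image arrays a few at a time to
    limit peak memory use.  The chunks must all be the same size, so
    that the average of chunk-averages equals the overall average --
    this is the property that makes the trick valid for 'average' but
    not for 'median' combination."""
    n = len(frames)
    assert n % chunk == 0, "use a chunk size that divides the frame count"
    # average each chunk, then average the (much smaller) set of partials
    partials = [np.mean(frames[i:i + chunk], axis=0) for i in range(0, n, chunk)]
    return np.mean(partials, axis=0)
```

In a real pipeline, each chunk's frames would be loaded from disk, combined, and released before the next chunk is read, so only `chunk` frames plus the partial results are held in memory at once.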