What is Astro-image processing ?
For some basic information on astro-image processing, see the ESA web page here
Basically it means extracting the maximum information from the captured image by working with the camera's 'RAW' data. RAW data will exhibit all sorts of defects that would normally be 'processed out' (i.e. hidden) by the camera when converting to RGB. In working with RAW we have to do that processing ourselves. In addition, the long exposures used in astrophotography will reveal defects that would never show up when taking 'normal' (sub-second) photos.
Why are Dark Frames required ?
All CMOS sensors are very susceptible to random 'noise', especially thermal (heat) noise. The longer the exposure, the more noise is accumulated. In any long exposure, the background will soon 'brighten up' to the point where it is grey/green - or at least 'non-black'. Whilst noise is essentially random, each individual pixel will 'accumulate' noise at a roughly consistent rate, and that rate is very dependent on temperature.
Some sensors (mainly CCD types) also suffer from 'amp glow' i.e. heat from other circuit elements on the sensor chip - such as the read-out shift registers and amplifiers. This warms up the pixels 'nearby' faster and they thus accumulate noise faster than those further away.
To reduce the effect of noise on the final image, a 'Dark Frame' (i.e. an image taken with the lens cap on) is exposed for exactly the same length of time (and same ISO etc) as the 'real' image and subtracted from it.
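As an illustration, here is a minimal sketch of that subtraction on the RAW pixel data (assuming both frames have already been decoded into numpy arrays; how they are loaded is left out, and the names are illustrative):

import numpy as np

def subtract_dark(light, dark):
    """Subtract a dark frame from a light frame, pixel by pixel.

    Both arrays are assumed to hold the camera's RAW (Bayer mosaic) data
    for frames taken with the same exposure time and ISO.
    """
    # Work in floating point so the subtraction cannot wrap around,
    # then clip: a pixel cannot meaningfully go below zero.
    result = light.astype(np.float64) - dark.astype(np.float64)
    return np.clip(result, 0, None)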
Noise is temperature dependent, a doubling of temperature leading to 4x as much noise. Since few cameras come with temperature sensors, the only way to 'guarantee' the same conditions is to take the Dark Frame at least 'as close as possible' in time to the 'real' image.
TIP - letting your camera cool down before taking your shots is just as important as letting your scope cool down ! Reducing the camera temperature from an indoor 20 degrees C to an outdoor 10 degrees C will more than halve the noise (on a really cold night, getting it down to 5 degrees C will more than halve it again).
Canon Cameras have a built-in Dark Frame exposure and subtraction capability (called 'noise reduction') that results in a doubling of total exposure time. This should be used when just a few long exposures are to be taken - however if an extended sequence of exposures is to be taken, it will be faster to take one Dark at the start and one at the end of the sequence (and 'interpolate', if necessary, between the two).
Note that the DeepSkyStacker software treats multiple Dark Frames as something to be 'stacked' before use (and thus 'noise reduced'). If (all) your Dark Frames are exposed at the same temperature as (all) the actual images, this approach is valid; however, the more 'correct' approach is to subtract the dark frame that was taken 'closest in time' (and thus closest in temperature) to each actual image.
When it is necessary to calculate a dark frame between two other dark frames taken at different temperatures, the 'interpolation' formula should be based on a 'temp squared' (or log) function (see Note 1 at end).
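In practice the sensor temperatures are rarely known, but as a rough sketch of such an interpolation under the 'temp squared' assumption above (all names are illustrative, and the two darks are assumed to have been taken at known, different temperatures):

import numpy as np

def interpolate_dark(dark1, temp1, dark2, temp2, temp):
    """Estimate a dark frame at temperature 'temp' from two darks taken
    at temperatures temp1 and temp2 (all in the same units)."""
    # Weighting based on the 'temp squared' assumption: accumulated noise
    # is taken to scale with the square of the temperature.
    w = (temp ** 2 - temp1 ** 2) / (temp2 ** 2 - temp1 ** 2)
    return (1.0 - w) * dark1.astype(np.float64) + w * dark2.astype(np.float64)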
TIP - each sensor pixel will have a different 'characteristic' response and thus a different level of noise susceptibility - some pixels are more noisy, others less. To avoid 'stacking up' a 'more noisy' area, offset the image on the sensor between shots - modern stacking software will cope OK and this will 'average out' the noise better. Offsetting will also help you cope with 'bad pixels' (see below).
How to use Dark Flat Frames / Bias Frames / Offset Frames ?
DeepSkyStacker supports an additional type of 'subtract' frame known as a Bias or Offset Frame.
The intent is to remove any 'offset' or 'bias' introduced by the sensor (CCD or CMOS chip) readout process.
Since any Bias will be present in ALL images, including Dark Frames, it will be subtracted from the image when the Dark Frames are subtracted :-)
Flat Frames will also contain any Bias .. and when the Flat Frames are used to 'flatten' the image, this will result in the Bias being 'added back in'. Thus Bias Frames should only be subtracted from Flat Frames (and not from anything else).
To create a Bias Frame, cover the camera lens and take the shortest possible exposure (it may be 1/4000s or 1/8000s depending on your camera) at the same ISO setting as the Flat Frames (and all the other images).
Note that if multiple Bias Frames are available, DeepSkyStacker will combine them automatically to create and use a clean Master Bias frame.
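DeepSkyStacker handles this internally, but as a sketch of what the combination amounts to (assuming the bias frames have already been loaded as numpy arrays; the names are illustrative):

import numpy as np

def make_master_bias(bias_frames):
    """Combine several bias frames into a single master bias."""
    # Take the per-pixel median across the stack - this rejects the odd
    # outlier read-out value better than a simple mean would.
    stack = np.stack([b.astype(np.float64) for b in bias_frames])
    return np.median(stack, axis=0)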
Bad-pixel mapping and replacement.
Bad pixels are ones that are stuck 'on' or 'off' - 'stuck on' pixels are often called 'hot' pixels, whilst those 'stuck off' are often called 'dark pixels'.
Bad pixel replacement should be done after the Dark Frame subtraction (if done before, all the Dark Frames must also be "badpixel processed").
The first step is to 'map' (generate a list of) the badpixels (see Flat Frames).
The images are then processed by replacing the listed 'hot' or 'dark' pixels with the 'average' of the surrounding pixels (from the matching colours in the 'next door' Bayer matrix 2x2 grids).
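A minimal sketch of that replacement, assuming a standard 2x2 Bayer mosaic so that the nearest same-colour neighbours lie two pixels away in each direction (the names are illustrative):

import numpy as np

def replace_bad_pixels(raw, bad_pixels):
    """raw: the 2D Bayer mosaic array; bad_pixels: list of (row, col) positions."""
    out = raw.astype(np.float64)
    rows, cols = out.shape
    for r, c in bad_pixels:
        # The nearest pixels of the *same* colour sit two steps away in the
        # Bayer mosaic, so average the in-bounds neighbours at distance 2.
        neighbours = [out[r + dr, c + dc]
                      for dr, dc in ((-2, 0), (2, 0), (0, -2), (0, 2))
                      if 0 <= r + dr < rows and 0 <= c + dc < cols]
        out[r, c] = sum(neighbours) / len(neighbours)
    return out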
Why use Flat Fields / Flat Frames ?
Flat Frames are used to correct for vignetting and uneven field illumination created by dust or smudges in your optical path. It is thus vital to expose your Flat Frame with the camera in the exact same position as used for the real images (as well as the same settings, especially ISO setting, and focus position).
A Flat Field is an image of an evenly illuminated white surface. The exposure time needs to be set so that the 'average' image pixel intensity recorded is about 50% - this can be done manually, by 'trial & error', or the camera can be set to 'Av mode' and allowed to choose its own exposure time.
The easiest / simplest approach is to put a white T shirt over your telescope aperture and smooth out any creases. Then shoot something bright such as a flash, a bright white light or the sky at dawn.
To minimise the effects of any random noise, a number of Flats can be taken - DeepSkyStacker will combine them automatically to create and use a clean 'master flat frame'.
Using flats for badpixel mapping
Our first use of a Flat is to find and map the badpixels. Since all pixels should be 'about' 50%, any found to be significantly lower ('0%' = dark) or significantly higher ('100%' = hot) must be 'bad' - and these should be listed for replacement.
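A sketch of that mapping step follows (the 0.2x / 1.8x thresholds are arbitrary illustrative values, not values from the text, and the names are illustrative):

import numpy as np

def map_bad_pixels(master_flat, low=0.2, high=1.8):
    """Return a list of (row, col) positions whose flat-field response is
    far from the overall average."""
    flat = master_flat.astype(np.float64)
    mean = flat.mean()
    # 'Dark' pixels respond far below the average, 'hot' pixels far above.
    rows, cols = np.where((flat < low * mean) | (flat > high * mean))
    return list(zip(rows.tolist(), cols.tolist()))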
Using flats for uneven light distribution compensation
Flats can be used to compensate for the uneven light distribution caused by the telescope optics, and, to some extent, differences in the individual pixel sensitivities.
If a 'web cam' (or other small area CCD) is being used, then the image will be taken from the centre of the field of view and further processing using Flat Fields may, to a large extent, be skipped.
However if a Digital SLR camera is used, then, unless every image is cropped so that only the centre of the sensor is ever used, it is almost guaranteed that the telescope optics will affect the distribution of light across the sensor.
The longer the Flat exposure time, the more noise will be present and thus the more difficult it will be to separate out light distribution/sensitivity effects from noise (including 'amp glow').
Thus the Flat Frame should be taken at fast exposure times (and need be done only once). The Camera's own built in Dark Frame 'noise reduction' mode (if it has one) should be used when taking the Flat.
Next, badpixel replacement must be applied to the Flat. The Flat Field can then be used on the images.
To perform the compensation, the Flat Field is used to calculate 'multiplication constants' to be applied to the RAW data as follows :-
If the goal is to generate a final RGB image ... then for each set of colour sub-pixels (R, G & B), calculate the average across the entire sensor array. For each individual pixel, calculate the individual multiplication constant needed to adjust the Flat Field detected pixel to the R, G or B average (as appropriate).
If the goal is to use the RAW data as a high resolution B&W image, then calculate the 'average' for the entire array. Then for each individual pixel, calculate the multiplication constant needed to adjust that pixel's level to the 'average' (this compensates for the difference in sensitivity to 'white' light caused by the Bayer Matrix colour filters).
The Flat Field adjustment multipliers can then be applied to each 'real' image pixel. Note that, to minimise any loss of data, the output of this calculation will 'boost' the pixel data from 12 bit to 32 bit resolution (in fact, we really need 34 bit data, however the limit for TIFF/FITS data (and ease of software processing) is 32 bits per pixel and this will have to do).
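As a sketch of the RGB-image case above, assuming an ordinary 2x2 Bayer layout (the exact colour positions depend on the camera, but the calculation is the same for each of the four positions; the names are illustrative):

import numpy as np

def flat_field_multipliers(master_flat):
    """Per-pixel multipliers normalising each Bayer colour plane of the
    flat to that plane's own average."""
    flat = master_flat.astype(np.float64)
    mult = np.empty_like(flat)
    # Treat the four Bayer positions (e.g. R, G, G, B for an RGGB sensor)
    # separately, so each pixel is compared only with its own colour.
    for dr in (0, 1):
        for dc in (0, 1):
            plane = flat[dr::2, dc::2]
            mult[dr::2, dc::2] = plane.mean() / plane
    return mult

def apply_flat(raw_image, multipliers):
    # Multiplying in floating point 'boosts' the data beyond its original
    # 12 bit resolution, as described above.
    return raw_image.astype(np.float64) * multipliers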
What is Binning ?
When the individual pixel area is smaller than the optics limit, Binning should be used. This is the process of adding adjacent pixels together to form a 'super pixel'. If the optics cannot make use of the full spatial resolution of the sensor, this is worth doing since (like stacking) it also improves the signal to noise ratio.
'Binning' can also be used as an opportunity to increase the dynamic range of the data .. 12 bit data that is 2x2 'binned' means that 4 pixels are added together to form one new pixel - and 4 x 12 bit pixels thus become one 14 bit pixel.
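A sketch of 2x2 binning on a single colour plane (binning the raw Bayer mosaic directly would mix colours, so the plane is assumed to have been extracted first, and to have even dimensions):

import numpy as np

def bin_2x2(plane):
    """Sum each 2x2 block of a single colour plane into one 'super pixel'."""
    # Four 12 bit values summed can need up to 14 bits, so accumulate in a
    # wider integer type before adding the four sub-sampled views together.
    p = plane.astype(np.uint32)
    return p[0::2, 0::2] + p[1::2, 0::2] + p[0::2, 1::2] + p[1::2, 1::2]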
What is Stacking ?
Again, if we work with the RAW data, 'Stacking' reduces noise because the 'real' (signal) data always reinforces itself (it adds up linearly with the number of frames), whilst the 'random' noise does not always reinforce (it grows only roughly with the square root of the number of frames).
By doing this to the 12 bit RAW data, any signal ('real' data) that differs from noise by only a few parts in 4096 will be gradually 'accumulated' until it stands out from the noise. If stacking is done in 8 bit RGB, it is necessary for the difference between signal and noise to be at least 16 parts in 4096 (i.e. 1 part in 256) - any difference of less than 1 in 256 will have zero effect on the output.
If Flat Field processing has been applied, we will be stacking 32 bit data. To avoid loss of resolution during calculations, all the images to be stacked should be processed in a single pass (intermediate results i.e. running totals, can be kept in memory at higher resolution and only reduced to 32 bits when the final stacked result has to be written back to disk as a TIFF/FITS file).
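A sketch of that single-pass accumulation (how the calibrated frames are loaded is left out; they are assumed to arrive one at a time as numpy arrays, and the names are illustrative):

import numpy as np

def stack_frames(frames):
    """frames: an iterable yielding each calibrated frame as a numpy array."""
    total = None
    count = 0
    for frame in frames:
        # Keep the running total in 64 bit floating point so no resolution
        # is lost while it grows.
        f = frame.astype(np.float64)
        total = f if total is None else total + f
        count += 1
    # Only reduce to 32 bits when the final result is written back out.
    return (total / count).astype(np.float32)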
What are stretch & floor cut-off ?
Still working with the maximum resolution data, transfer functions (such as dynamic range stretching or gamma correction) & then the final background 'floor level cut off' (in PhotoShop, set the 'black level' on the RAW data) can be applied.
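A sketch of one such transfer function followed by a floor cut-off (the gamma and floor values here are purely illustrative and would in practice be tuned by eye):

import numpy as np

def stretch_and_cut(data, gamma=0.5, floor=0.02):
    """Apply a simple gamma 'stretch' then a background floor cut-off."""
    d = data.astype(np.float64)
    # Normalise to the 0..1 range, apply the stretch curve, then force
    # everything below the chosen floor level to pure black.
    d = (d - d.min()) / (d.max() - d.min())
    d = d ** gamma
    d[d < floor] = 0.0
    return d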
When to do RGB conversion ?
RGB conversion leads to irreversible data loss. Starting with the sub-pixels in the Bayer Matrix (12 bit, 4096 levels), interpolation is used to generate component RGB (3x 8 bit, 256 levels) pixels.
Thus RGB conversion should be done ONLY after ALL the sub-pixel level processing is complete.
RGB conversion is essentially a subjective process. The user will have to experiment with the 'curves' used in the conversion process to obtain the best visual results - the 'gamma' curve used to map the 12 (or 32) bit data onto 8 bits should be adjusted to maximise the detail.
In some circumstances, e.g. nebula images, it may be necessary to process the raw data twice - once for the central 'core' and again for the surrounding 'clouds' .. a 'mask' function is then used over the 'blown out' central core to 'allow in' the version re-processed for higher detail in the core.
Note that the photographic 'HDR' function is essentially what is required to add the two images together.
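A sketch of that masked combination, assuming the two processed versions are already aligned and scaled to the same range, and that the mask is a 0..1 array (the names are illustrative):

import numpy as np

def blend_with_mask(core_version, cloud_version, mask):
    """Combine two processings of the same (aligned) image using a mask
    that is 1 over the blown-out core and 0 elsewhere."""
    # Where the mask is set, take the version re-processed for core detail;
    # elsewhere keep the version processed for the faint surrounding clouds.
    return mask * core_version + (1.0 - mask) * cloud_version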
Notes.
Note 1) The human perception of brightness is well approximated by Stevens' power law, which over a reasonable range is close to logarithmic (as described by the Weber-Fechner law), which is one reason that logarithmic measures of light intensity are often used.