Fourier transform spectrometers vs. grating spectrometers

Introduction

This article compares some of the less obvious details of Fourier transform spectrometers and classical grating spectrometers. It contains some apparently not-widely-known observations I made recently, but is probably not interesting to you if you are not into spectroscopy. This is also why I will skip explaining how the two spectrometers work.

The article will first compare the two types of spectrometers in terms of signal-to-noise ratio, and then in terms of spectral resolution (which also has implications for SNR, as discussed below).

Signal-to-Noise ratio (SNR) considerations

The scanning issue

The most obvious way to lose SNR is to throw away energy. A scanning grating spectrometer with a slit blocking part of the spectrum throws away energy, and with it SNR. There is not much to discuss here; it is just worth mentioning that the Fourier transform (FT) spectrometer is inherently not a scanning spectrometer in this sense: even though it scans over the interference pattern, at each point in time all available power interferes to produce the output signal and is thus used.

Scanning spectrometers always lose a factor sqrt(N) in SNR over non-scanning spectrometers, where N is the number of scan points acquired, if the noise is shot noise (see below); or a factor N in SNR if the noise is detector noise.
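
Here is a small numerical sketch of the shot-noise case (in Python with numpy; all numbers are made up for illustration), showing the roughly sqrt(N) SNR ratio between the two approaches:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100            # number of spectral elements / scan points
    rate = 1e6         # assumed photon rate per spectral element
    total_time = 1.0   # total measurement time, arbitrary units

    # Non-scanning: every spectral element is observed for the full time.
    parallel = rng.poisson(rate * total_time, size=N)
    snr_parallel = parallel.mean() / parallel.std()

    # Scanning: each spectral element only gets 1/N of the total time.
    scanning = rng.poisson(rate * total_time / N, size=N)
    snr_scanning = scanning.mean() / scanning.std()

    print(snr_parallel / snr_scanning, np.sqrt(N))  # both close to 10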

The detector noise versus shot noise issue

From now on, we assume we don’t have scanning spectrometers, because they are out of this game anyway. Assuming all available power is detected at each point in time, the noise can come from two main sources: the signal itself, and the detector.

If the wavelength is long (longer than visible light, i.e. IR and beyond), there is typically a lot of detector noise. In this case it is, funnily enough, desirable to have as few detectors as possible: each detector produces its own share of noise, and the more of them you have, the more noise power you accumulate relative to the power in your signal. Since the noise powers add, the noise amplitude grows with the square root of the detector count; a CCD or CMOS chip with a million pixels will have a thousand times as much noise as a single detector pixel.

This is an advantage for the Fourier transform spectrometer, since it inherently has only a single detector. It implies a gain in SNR of a factor sqrt(N), where N is the number of detectors it replaces. This advantage is historically probably the main reason Fourier transform spectroscopy is used so much in the infrared domain.

The advantage is lost as soon as shot noise (photon counting noise), i.e. noise inherent to the signal, dominates over detector noise; in that case it does not matter how many detectors you have. This is roughly the situation for visible light and shorter wavelengths.
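
A minimal sketch of the two regimes (the per-pixel read-noise figure is an arbitrary assumption): under detector noise, the single detector wins by sqrt(N); under shot noise, the pixel count drops out entirely:

    import numpy as np

    signal = 1e4        # total detected signal, arbitrary units
    read_noise = 50.0   # assumed detector noise amplitude per pixel

    for n_pixels in (1, 10**6):
        # Detector-noise-limited: every pixel adds independent noise;
        # the noise powers add, so the amplitude grows as sqrt(n_pixels).
        snr_detector = signal / (read_noise * np.sqrt(n_pixels))

        # Shot-noise-limited: the noise is sqrt(total counts), regardless
        # of how many pixels those counts are spread over.
        snr_shot = signal / np.sqrt(signal)

        print(n_pixels, snr_detector, snr_shot)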

The noise redistribution and temporal stability issues

Since, for an FT spectrometer, each point of the interferogram contains an interference contribution from every spectral component of the analyzed light, noise is distributed across the whole spectrum and does not stay localized at the wavelength producing it (think of a strong spectral peak with amplitude noise). This is especially a concern for long scan times when the source intensity is not stable. It also causes noise (at least the shot noise) from strong peaks to leak into relatively quiet regions of the spectrum, reducing dynamic range. The problem can be mitigated by doing quick scans (averaging them later if required), or by additionally measuring a phase-shifted reference channel.
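
To illustrate the redistribution, here is a small simulation sketch (numbers made up): a single strong line with 1% amplitude noise during the scan leaves a noise floor across the entire computed spectrum, not just around the line:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096
    k_line = 200   # the strong line sits exactly on FFT bin 200

    # Source amplitude fluctuates by 1% over the course of the scan.
    amplitude = 1.0 + 0.01 * rng.standard_normal(n)
    interferogram = amplitude * np.cos(2 * np.pi * k_line * np.arange(n) / n)

    spectrum = np.abs(np.fft.rfft(interferogram))
    # The fluctuation of the one line shows up everywhere in the spectrum.
    print(spectrum[k_line], spectrum[1000:].mean())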

Limits on resolution: two of them simple, one complicated

A limit on resolution is set in both systems by comparing the wavelength to the “sweep dimension”: for the grating spectrometer, that is the width of the grating; for the FT spectrometer, it is the maximum distance the mirror is moved. The achievable fractional resolution is simply the wavelength divided by this sweep dimension. The underlying reason for this limitation is the position-momentum uncertainty relation from quantum mechanics.
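
A quick worked example with assumed numbers:

    # Fractional resolution ~ wavelength / sweep dimension, as stated above.
    wavelength = 1.55e-6    # 1550 nm, an arbitrary example wavelength
    mirror_travel = 0.01    # 1 cm of mirror travel in the FT spectrometer

    fractional_resolution = wavelength / mirror_travel  # ~1.6e-4
    delta_lambda = fractional_resolution * wavelength   # ~0.24 nm
    print(fractional_resolution, delta_lambda)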

Another limit is imposed by the energy-time uncertainty relation, but in this case it is typically not interesting; even very short integration times already render it negligible.

There is, however, yet another limit on achievable resolution, which is much more delicate in nature: it is caused by beam quality. When feeding the spectrometer with perfectly parallel light from a point source, such as approximated by a good-quality laser beam, the limitation disappears. However, when the source is not point-like, there is a certain disorder in the beam; it is not possible to make it perfectly parallel, and for the grating spectrometer it is easy to see how this hurts resolution. In fact, this is a fundamental problem: this kind of spectrometer works by detecting some property derived from one component of the photon momentum and then computing the photon energy from that component, assuming the propagation direction is known. Any disorder in the beam violates this assumption and worsens the result.

Slits and energy losses

Beam quality can be arbitrarily improved by placing a narrow slit in front of the spectrometer, as is typically done. This comes with the disadvantage of removing large amounts of energy from the beam, so you have to find a trade-off between resolution and signal-to-noise ratio. It is not possible to improve beam quality losslessly: that would imply a decrease of the phase-space volume occupied by the beam, brought about by an arrangement of passive components, which is forbidden by thermodynamics. The same observation shows up, for example, in the theory of Gaussian beams.
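
A toy illustration of that trade-off (numbers assumed; shot-noise-limited SNR taken as the square root of the transmitted power):

    import numpy as np

    base_resolution = 1000.0   # assumed resolving power at full slit width
    base_power = 1.0           # transmitted power at full slit width

    for narrowing in (1, 2, 4, 8):
        resolution = base_resolution * narrowing  # resolution goes up...
        power = base_power / narrowing            # ...transmitted power down
        relative_snr = np.sqrt(power)             # shot-noise-limited SNR
        print(narrowing, resolution, power, relative_snr)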

The reason for the error introduced by beam disorder

In the case of the grating spectrometer, the grating distributes photons on the detector screen depending on the z component of their momentum (z being the propagation direction of the beam). However, the x component of the momentum (x being the direction perpendicular to the beam that is cut by the slit) also spreads photons out on the screen, even if they all have the same energy; imagine photons hitting a mirror at different angles. This spread is proportional to the x component, and with that, proportional to the beam spread angle.

In the case of the Fourier transform spectrometer, the error is introduced by differences in path length before the two components interfere. Unless the two arm lengths are exactly equal, the path length difference will depend on the ray angle. However, and this is the insight which motivated me to write this post: because it depends on the cosine of that angle, and cos(theta) is approximately 1 - theta^2/2 for small angles, the extra path length difference is proportional to the square of the ray misalignment angle. The Fourier transform spectrometer has (more or less by chance, I would say) geometrically aligned the error such that it causes minimum effect on the result.
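
A quick numerical comparison of the two error behaviours (the angles are arbitrary examples):

    import numpy as np

    # Relative error for a small ray misalignment angle theta:
    # linear in theta for the grating (via the sine), quadratic for the
    # FT spectrometer (via the cosine).
    for theta_deg in (0.1, 0.5, 1.0, 2.0):
        theta = np.radians(theta_deg)
        grating_error = np.sin(theta)   # ~ theta
        ft_error = 1 - np.cos(theta)    # ~ theta**2 / 2
        print(theta_deg, grating_error, ft_error)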

The Fourier transform spectrometer’s undocumented advantage

Update: This observation has actually been known since 1960 and is called the “Jacquinot advantage”.

What does this mean? It is simple: for both spectrometers, you gain spectral resolution proportional to the inverse diameter of the slit or pinhole, respectively, at least up to the point where your resolution limit is dominated by the size of your “sweep dimension”. Half the slit width, twice the spectral resolution. There is a significant difference, though: the grating spectrometer uses a one-dimensional slit, while the Fourier transform spectrometer uses a two-dimensional round pinhole. Making the slit twice as wide trades a factor 2 in resolution for a factor 2 in signal quality; making the pinhole twice as wide trades a factor 2 in resolution for a factor 4 in signal quality. To phrase it cleverly: the extra dimension in the pinhole geometry (circle vs. slit, making the area proportional to the square of the width rather than to the width itself) is exactly nature’s way of handing you the “square” gain in error described in the previous paragraph.
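
In numbers (the aperture scaling factor k is arbitrary):

    # Throughput when opening the aperture by a factor k:
    # a slit grows linearly in area, a round pinhole quadratically.
    for k in (1, 2, 4):
        slit_throughput = k          # slit area ~ width (length is fixed)
        pinhole_throughput = k * k   # pinhole area ~ diameter squared
        print(k, slit_throughput, pinhole_throughput)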

Thus, effectively, for non-point sources which cannot be used to produce a perfectly parallel beam, a Fourier transform spectrometer has much more light available at the same resolution than a grating spectrometer, which means a better signal-to-noise ratio.

Note: If you know of any resources already documenting this observation and the reason behind it, I’d be thankful for a comment.
