Laser wavelengths ranging from ultra-violet through visible to near infra-red can be used for Raman spectroscopy. Typical examples include (but are not limited to):
The choice of laser wavelength has an important impact on experimental capabilities:
Raman scattering intensity is proportional to λ⁻⁴, where λ is the laser wavelength. Thus an infra-red laser (e.g., 1064 nm) results in a decrease in scattering intensity by a factor of 15 or more when compared with blue/green visible lasers.
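The λ⁻⁴ dependence is easy to check numerically; a minimal sketch, using the 532 nm and 1064 nm wavelengths mentioned in this section purely as example values:

```python
def relative_raman_intensity(wavelength_nm, reference_nm=532.0):
    """Raman scattering intensity relative to a reference laser,
    using the 1/lambda^4 dependence (all other factors held equal)."""
    return (reference_nm / wavelength_nm) ** 4

# A 1064 nm NIR laser versus a 532 nm green laser:
print(relative_raman_intensity(1064.0))  # -> 0.0625, i.e. 16x weaker
```

Because 1064 nm is exactly twice 532 nm, the ratio works out to 2⁴ = 16, consistent with the "factor of 15 or more" quoted above.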
The diffraction-limited laser spot diameter can be calculated according to the equation diameter = 1.22 λ/NA, where λ is the wavelength of the laser and NA is the numerical aperture of the microscope objective being used. For example, with a 532 nm laser and a 100x objective with NA 0.90, the theoretical spot diameter will be 0.72 µm; with the same objective, a 785 nm laser would yield a theoretical spot diameter of 1.1 µm. Thus, achievable spatial resolution is partially dependent on choice of laser.
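The spot-size equation above can be sketched in a few lines (the laser wavelengths and NA are the example values from this section):

```python
def spot_diameter_um(wavelength_nm, numerical_aperture):
    """Diffraction-limited laser spot diameter: d = 1.22 * lambda / NA.
    Wavelength is given in nm; result is returned in micrometres."""
    return 1.22 * (wavelength_nm / 1000.0) / numerical_aperture

# 532 nm and 785 nm lasers with a 0.90 NA objective:
print(round(spot_diameter_um(532, 0.90), 2))  # -> 0.72 um
print(round(spot_diameter_um(785, 0.90), 2))  # -> 1.06 um (~1.1 um)
```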
Optimization of results based on sample behavior
Ultra-violet (UV) lasers for Raman spectroscopy typically include laser wavelengths ranging from 244 nm through to 355 nm.
Theoretically, UV Raman spectroscopy is no different from standard analysis using visible laser wavelengths. In practice, however, there are a number of difficulties and disadvantages which must be considered.
Near infra-red (NIR) lasers for Raman typically include a range of wavelengths greater than 700 nm, such as 785 nm, 830 nm, 980 nm and 1064 nm. The key reason for the use of NIR Raman is for fluorescence suppression, but there are a number of drawbacks which must be considered. Whilst at times NIR Raman is invaluable, it should certainly not be considered the best solution for every sample.
There are two main classes of filters used for Raman spectroscopy. These optical components are placed in the Raman beam path, and are used to selectively block the laser line (Rayleigh scatter) while allowing the Raman scattered light through to the spectrometer and detector. Each laser wavelength requires an individual filter. The two main types, edge (long-pass) filters and notch filters, can both be used without user intervention or optimization.
A CCD (Charge Coupled Device) is a silicon-based multichannel array detector of UV, visible and near-infrared light. CCDs are used for Raman spectroscopy because they are extremely sensitive to light (and thus suitable for analysis of the inherently weak Raman signal), and allow multichannel operation (which means that the entire Raman spectrum can be detected in a single acquisition). CCDs are widely used, not least as the sensors in digital cameras, but versions for scientific spectroscopy are of a considerably higher grade to give the best possible sensitivity, uniformity and noise characteristics.
CCD detectors are typically one-dimensional (linear) or two-dimensional (area) arrays of thousands or millions of individual detector elements (also known as pixels). Each element interacts with light to build up a charge: the brighter the light, and/or the longer the interaction, the more charge is registered. At the end of the measurement, read-out electronics transfer the charge from the elements, and each individual charge reading is measured.
In a typical Raman spectrometer, the Raman scattered light is dispersed using the diffraction grating, and this dispersed light is then projected onto the long axis of the CCD array. The first element will detect light from the low cm⁻¹ edge of the spectrum, the second element will detect light from the next spectral position, and so on; the last element will detect light from the high cm⁻¹ edge of the spectrum.
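Each detector element thus sees a particular scattered wavelength, which is converted to a Raman shift using the standard definition. A minimal sketch of that conversion (the 532 nm laser and 563 nm detected wavelength are illustrative values, not tied to any particular instrument):

```python
def raman_shift_cm1(laser_nm, scattered_nm):
    """Raman shift in cm^-1 from the laser and scattered wavelengths:
    shift = 1e7 * (1/laser_nm - 1/scattered_nm)."""
    return 1e7 * (1.0 / laser_nm - 1.0 / scattered_nm)

# Light detected at 563 nm under 532 nm excitation:
print(round(raman_shift_cm1(532.0, 563.0)))  # -> 1035 (cm^-1)
```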
CCDs require some degree of cooling to make them suitable for high grade spectroscopy. Typically this is done using either Peltier (thermoelectric) cooling (suitable for temperatures down to -90 °C) or liquid nitrogen cryogenic cooling. Most Raman systems use Peltier cooled detectors, but for certain specialized applications, liquid nitrogen cooled detectors still have advantages.
An Electron Multiplying CCD (EMCCD) is a special type of CCD detector which uses on-chip electron multiplication to enhance the spectrum quality when extremely low signal levels are present. This enhancement is particularly valuable when the Raman signal is very weak, since the electron multiplication process can result in good spectrum quality, unlike the conventional CCD where only a few of the stronger features can just be observed above the noise. The benefits of EM gain are clearly obvious in fast Raman spectral imaging, where the necessary short integration times can often result in signals which are barely visible above the noise when measured with a conventional CCD.
The EMCCD has two readout registers on the chip – a conventional register and an electron multiplying (EM) register. In the EM register, the clocking voltages used are higher than for conventional clocking, causing the electrons to acquire sufficient energy that impact ionization can occur. At this point, extra electrons are produced and stored in the next pixel. There is only a small probability of electrons acquiring sufficient energy for impact ionization to occur (thus creating additional electrons), but since the readout register has many elements within it, significant gain factors are possible (up to ~1000x). The key benefit of an EMCCD is that the amplification occurs before readout of the signal, which means that the signal is not readout noise limited. In other words, through amplification, the signal is raised well above the noise floor which is largely determined by the noise of the readout electronics (pre-amplifier and A/D converter).
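The compounding effect described above can be modelled with a simple mean-gain formula: if each register element multiplies the signal by (1 + p), where p is the small impact-ionization probability, the overall gain is (1 + p) raised to the number of stages. A sketch under that assumption; the probability and register length below are illustrative values, not specifications of any particular detector:

```python
def em_gain(p_ionization, n_stages):
    """Mean EM register gain: each of n_stages elements multiplies the
    charge by (1 + p), where p is the impact-ionization probability."""
    return (1.0 + p_ionization) ** n_stages

# Even a ~1% probability per stage compounds to a large overall gain:
print(round(em_gain(0.012, 590)))  # on the order of 1000x
```

This illustrates why gain factors of up to ~1000x are achievable even though the per-stage multiplication probability is small.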
Spectral resolution is the ability to resolve spectral features and bands into their separate components. The spectral resolution required by the analyst or researcher depends upon the application involved. For example, routine analysis for basic sample identification typically requires low/medium resolution. In contrast, characterization of polymorphs and crystallinity often requires high resolution, since these phenomena exhibit only very subtle changes in the Raman spectrum, which would not be visible in a low resolution experiment.
Spectral resolution is an important experimental parameter. If the resolution is too low, spectral information will be lost, preventing correct identification and characterization of the sample. If the resolution is too high, total measurement time can be longer than necessary. What makes resolution “too low” or “too high” depends upon the particular application, and what information is desired from the experiment.
Typically, low/medium resolution is suitable for basic chemical identification, and distinguishing different materials. Higher resolution becomes necessary to characterize more subtle spectral features – for example, minor changes in the shape or position of a peak. There are a number of chemical phenomena which cause such subtle spectral changes:
Spectral resolution in a dispersive Raman spectrometer is determined by four main factors. In the discussions below, the effect of each factor is considered under the assumption that all other factors remain unchanged. In real life all of these factors can exist in many varied permutations, which makes direct comparison of a system’s performance and capabilities difficult.
Spectrometer focal length
The longer the focal length (i.e., the distance between the dispersing grating and detector) of the spectrometer, the higher the spectral resolution. Typical Raman spectrometers have focal lengths ranging from 200 mm (for low/medium resolution) through to 800 mm and higher (for high resolution). It is sometimes forgotten that a long focal length spectrometer is not limited to high resolution work only: with a suitable choice of gratings (see below), a high resolution spectrometer can be run in a low resolution mode. In this way, it is ideally suited for low/medium resolution analysis for routine screening, and yet can also offer high resolution analysis for more specialized applications.
The higher the groove density of the grating (typically measured as number of grooves per millimeter), the higher the spectral resolution. Typical gratings used for Raman vary from perhaps 300 gr/mm (low resolution) through to 1800 gr/mm (high resolution). More specialized gratings (including 2400 gr/mm and 3600 gr/mm) are also available, but have certain limitations, and should not be considered general purpose. The use of higher groove density gratings cannot be applied ad infinitum to increase spectral resolution, since they have fixed practical and physical limits linked with the spectrometer itself. Thus, gratings provide an initial way to improve resolution, but once their limit is reached, it is necessary to move to a longer focal length spectrometer.
The dispersing power of a grating/spectrometer pair can usually be considered constant in terms of wavelength. However, Raman spectra use an energy related unit (Raman shift, or wavenumber, cm⁻¹), which means that the spectral resolution decreases as the laser excitation is changed from infrared to visible to ultra-violet wavelengths. As an example, if a 600 gr/mm grating is used with an infrared laser, a 1200 gr/mm or 1800 gr/mm grating will be required with a green laser to achieve a similar resolution.
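This effect can be quantified: the number of wavenumbers spanned by a fixed wavelength interval grows at shorter wavelengths as d(ṽ)/dλ = 10⁷/λ². A quick sketch using the 785 nm and 532 nm example lasers from this section:

```python
def cm1_per_nm(wavelength_nm):
    """Wavenumbers (cm^-1) spanned by 1 nm of spectrum near wavelength_nm,
    from the derivative of 1e7/lambda with respect to lambda."""
    return 1e7 / wavelength_nm ** 2

# The same 1 nm interval covers ~2.2x more cm^-1 at 532 nm than at 785 nm,
# so a green laser needs roughly double the groove density for equal resolution:
print(round(cm1_per_nm(532.0) / cm1_per_nm(785.0), 2))  # -> 2.18
```

The ratio of roughly 2.2 is consistent with moving from a 600 gr/mm grating in the infrared to a 1200 gr/mm (or higher) grating in the green.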
Most systems have a single detector, so in practice the user does not control this factor. However, it should be noted that different detectors can be configured with different pixel sizes: the smaller the pixel, the higher the achievable spectral resolution.
A Raman microscope combines a Raman spectrometer with a standard optical microscope. The excitation laser beam is focused through the microscope to create a micro-spot with a diameter on the order of 0.5-10 µm. The Raman signal from the sample is collected from a similar area, passes back through the microscope into the spectrometer, and is there analyzed for spectral information.
The Raman microscope allows Raman spectroscopy to be performed with microscopic spatial resolution. Thus it opens up a new dimension in chemical analysis:
Some Raman microscopes do not have confocal optics. Simply adding a microscope assists in giving lateral (XY) spatial resolution, but does not give depth (Z) spatial resolution. For this, confocal optics are required. There are several methods in use today, some truly confocal, others pseudo-confocal, which work with varying success. With a true confocal design (which incorporates a fully adjustable confocal pinhole aperture), micron depth resolution is possible, allowing individual layers of a sample to be discretely analyzed.