Infrared detectors are in general used to detect, image, and measure patterns of the thermal radiation that all objects emit. Early devices consisted of single detector elements that relied on a change in the temperature of the detector. The earliest thermal detectors were thermocouples and bolometers, both still in use today. Thermal detectors are generally sensitive to all infrared wavelengths and operate at room temperature, but under these conditions they have relatively low sensitivity and slow response.
Photon detectors were developed to improve sensitivity and response time, and they have been extensively developed since the 1940's. Lead sulfide (PbS) was the first practical IR photon detector; it is sensitive to infrared wavelengths up to ~3 µm.
Beginning in the late 1940's and continuing into the 1950's, a wide variety of new materials were developed for IR sensing. Lead selenide (PbSe), lead telluride (PbTe), and indium antimonide (InSb) extended the spectral range beyond that of PbS, providing sensitivity in the 3-5 µm medium wavelength infrared (MWIR) atmospheric window.
The end of the 1950's saw the introduction of the first semiconductor alloys, drawn from the III-V, IV-VI, and II-VI material systems. These alloys allowed the bandgap of the semiconductor, and hence its spectral response, to be custom tailored for specific applications. Mercury cadmium telluride (HgCdTe, commonly called MCT), a II-VI material, has today become the most widely used of the tunable bandgap materials.
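To illustrate how alloy composition tailors the spectral response, the sketch below estimates the cutoff wavelength of Hg(1-x)Cd(x)Te as a function of cadmium fraction x, using the widely cited Hansen empirical bandgap fit and the relation λc (µm) ≈ 1.24 / Eg (eV). The coefficients and the exact compositions shown are illustrative assumptions, not values taken from this text:

```python
def hgcdte_bandgap_eV(x, T=77.0):
    """Empirical bandgap (eV) of Hg(1-x)Cd(x)Te at cadmium fraction x
    and temperature T in kelvin (Hansen-style fit; coefficients assumed)."""
    return (-0.302 + 1.93 * x - 0.810 * x**2 + 0.832 * x**3
            + 5.35e-4 * T * (1.0 - 2.0 * x))

def cutoff_wavelength_um(x, T=77.0):
    """Cutoff wavelength in micrometres: lambda_c ~ 1.24 / E_g."""
    return 1.24 / hgcdte_bandgap_eV(x, T)

if __name__ == "__main__":
    # Sweeping x moves the response from LWIR toward SWIR at 77 K.
    for x in (0.20, 0.30, 0.40):
        print(f"x = {x:.2f}: cutoff ~ {cutoff_wavelength_um(x):.1f} um at 77 K")
```

Raising the Cd fraction widens the bandgap and shortens the cutoff, which is why a single material system can serve LWIR, MWIR, and shorter-wavelength applications.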
As photolithography became available in the early 1960's, it was applied to make IR sensor arrays. Linear array technology was first demonstrated in PbS, PbSe, and InSb detectors. Photovoltaic (PV) detector development began with the availability of single crystal InSb material.
In the late 1960's and early 1970's, "first generation" linear arrays of intrinsic MCT photoconductive detectors were developed. These allowed LWIR forward looking infrared (FLIR) systems to operate at 80 K with a single-stage cryoengine, making them much more compact, lighter, and significantly lower in power consumption.
The 1970's witnessed a mushrooming of IR applications, combined with the start of high volume production of first generation sensor systems using linear arrays.
At the same time, other significant detector technology developments were taking place. Silicon technology spawned novel platinum silicide (PtSi) detector devices which have become standard commercial products for a variety of MWIR high resolution applications.
The invention of charge coupled devices (CCDs) in the late 1960's made it possible to envision "second generation" detector arrays coupled with on-focal-plane electronic analog signal readouts which could multiplex the signal from a very large array of detectors. Early assessment of this concept showed that photovoltaic detectors such as InSb, PtSi, and MCT, or high impedance photoconductors such as PbSe, PbS, and extrinsic silicon, were promising candidates because they had impedances suitable for interfacing with the FET input of readout multiplexers. Photoconductive (PC) MCT was not suitable due to its low impedance. Therefore, in the late 1970's through the 1980's, MCT technology efforts focused almost exclusively on PV device development because of the need for low power and high impedance for interfacing to readout input circuits in large arrays. This effort has been paying off in the 1990's with the birth of second generation IR detectors, which provide large 2D arrays in both linear and staring formats. Linear arrays use time delay and integration (TDI) for scanning systems; staring arrays come in square and rectangular formats.
Monolithic extrinsic silicon detectors were first demonstrated in the mid 1970's. The monolithic extrinsic silicon approach was subsequently set aside because the process of integrated circuit fabrication degraded the detector quality. Monolithic PtSi detectors, however, in which the detector can be formed after the readout is processed, are now widely available.