Practical Considerations in Computational Sensing

Computational sensing as a field is continuing to grow at a rapid pace. The number of journal publications related to computational sensing has steadily increased every year since 2008. There is now a major Optical Society of America (OSA) meeting dedicated to computational sensing, as well as textbooks dedicated to its study and development. While it is a powerful approach to radically new sensor architectures, textbooks and papers tend to focus on architecture concepts and positive results. Little attention is given to the practical issues one faces when implementing computational sensors.

For example, calibration is a major topic in this dissertation. In many talks and papers on computational sensing, calibration is barely mentioned or relegated to a minor paragraph. Calibration is the process of quantifying the response of a sensor in order to produce an accurate forward model. While many traditional sensors also require calibration, computational sensors tend to be more sensitive to calibration error, for two main reasons. The first is that non-isomorphic measurements require a computational step to solve an inverse problem; the algorithms rely on accurate knowledge of the forward model to separate the contribution of the instrument from the contribution of the signal-of-interest. The second is the lack of redundancy in compressive measurement data. The redundancies that are typically deemed wasteful in traditional sensing can be used by post-processing algorithms to solve the inverse problem robustly and to correct for missing or corrupted data. In compressive sensing, only a few numbers are used to represent many, and if those few numbers are misinterpreted due to poor calibration, the performance of the estimation algorithm can degrade drastically. I will illustrate the calibration challenges in several computational sensors in this dissertation. Calibration has become a major drawback in compressive sensing: a consumer cannot be expected to recalibrate every time the instrument is physically bumped or the air temperature or pressure changes, and in high-dimensional compressive sensors such as hyperspectral imagers, calibration can take hours.

Another issue encountered in multiplexed sensing is a lack of dynamic range. As the amount of light sensed by the detector increases, it becomes more difficult to discern differences between signals. For example, a single sinusoidally amplitude-modulated signal consists of a DC offset plus the modulation itself. The pertinent information is encoded in the modulation, so being able to resolve the peak-to-peak difference is important. Now imagine adding another amplitude-modulated signal with a different frequency. Physically, the number of electrons in each pixel well increases, while the total modulation tends to "average" out. The situation gets worse as the number of signals per measurement increases. This is one of the potential issues faced by single-pixel compressive sensing architectures. The SCOUT architecture attempts to alleviate this by "spreading" the photons onto more pixels for each compressive measurement.
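As a minimal numerical sketch of this dynamic-range effect (the frequencies, amplitudes, and signal counts below are arbitrary illustrative choices, not parameters of any instrument described in this dissertation), one can sum several amplitude-modulated signals on a single detector and watch the informative modulation shrink relative to the accumulated DC level:

```python
import numpy as np

# Sketch: sum N amplitude-modulated signals on one detector and compare the
# peak-to-peak modulation with the accumulated DC level. Frequencies and
# amplitudes are arbitrary illustrative choices.
t = np.linspace(0.0, 1.0, 2000)
rng = np.random.default_rng(0)

for n_signals in (1, 4, 16, 64):
    freqs = rng.uniform(5.0, 50.0, size=n_signals)        # arbitrary frequencies
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n_signals)
    # Each signal is a DC offset of 1 plus a unit-amplitude modulation.
    total = sum(1.0 + np.cos(2.0 * np.pi * f * t + p)
                for f, p in zip(freqs, phases))
    dc = total.mean()
    peak_to_peak = total.max() - total.min()
    print(f"N={n_signals:3d}  DC={dc:7.1f}  p-p={peak_to_peak:6.1f}  "
          f"p-p/DC={peak_to_peak / dc:.3f}")
```

In this toy example the DC level grows linearly with the number of multiplexed signals while the peak-to-peak modulation grows much more slowly, so a detector with fixed well depth and bit depth devotes an ever smaller fraction of its dynamic range to the information-bearing part of the measurement.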

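Returning to the calibration issue raised earlier in this section, a similarly small sketch (the matrix size and error levels below are arbitrary assumptions, not data from any instrument in this dissertation) shows how a slightly wrong forward model is amplified when the inverse problem is solved:

```python
import numpy as np

# Sketch: a multiplexed measurement y = A_true @ x inverted with a slightly
# miscalibrated forward model A_assumed. Sizes and error levels are arbitrary.
rng = np.random.default_rng(1)
n = 64
x = rng.random(n)                          # ground-truth signal
A_true = rng.random((n, n))                # "true" multiplexing weights
y = A_true @ x                             # noiseless multiplexed measurement

for calib_error in (0.0, 0.001, 0.01, 0.05):
    # Forward model the reconstruction believes in: true model plus error.
    A_assumed = A_true + calib_error * rng.standard_normal((n, n))
    x_hat = np.linalg.solve(A_assumed, y)  # invert with the assumed model
    rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
    print(f"calibration error level {calib_error:5.3f} -> "
          f"relative reconstruction error {rel_err:.3f}")
```

Even in this idealized, noiseless setting, a fraction of a percent of unmodeled error in the forward matrix translates into a much larger error in the estimate, because every reconstructed value depends on many multiplexed measurements.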
Another hurdle in the implementation of practical computational sensing is the need for prior knowledge. For example, in compressive imaging one must assume the signal is sparse in some basis. Fortunately, most realistic objects can be treated as such in many commonly used bases, such as wavelet bases. However, sometimes one needs to image something that is difficult to represent in any known basis, and one must resort to generating training data. Training requires generating many different examples of the signal, which becomes expensive in both time and computation. Another example of the need for prior knowledge is the AFSSI-C, a computational spectral classifier that requires knowledge of the standard deviation of the noise distribution to perform spectral classification in as few measurements as possible.

Reducing the number of detector elements is often a goal in computational sensing. A notable example is the single-pixel camera. However, the single-pixel camera requires several time-sequential measurements: each measurement displays a different DMD pattern to create a randomly encoded measurement. The drawback of this architecture is that one must point the camera at the object scene until enough measurements have been collected for proper reconstruction. A complication arises when the architecture is used to image temporally varying object scenes. One must display the DMD patterns faster and reduce the exposure time to keep up, and as a result the SNR may begin to degrade. A possible way to mitigate this issue is to do all the encoding in parallel. However, a completely parallel approach would require a lens, a DMD (or coded aperture), and a detector pixel for each measurement. Since each lens uses a different entrance pupil, each detector pixel would have a different view of the object scene, which drastically scales the complexity of the architecture and algorithms. In this dissertation, I will discuss a compromise to parallel coding, in two different computational sensors, by using a common entrance pupil and a CCD.

Many of the optimal measurement codes contain both positive and negative measurement weights. In reality, with incoherent light one is unable to make negative measurements. One is often forced to record two sets of measurements and subtract the negative-weight set from the positive-weight set, which adds an additional noise term to each effective measurement. Algorithms engineered to solve inverse problems also often fail to account for the non-negativity of many physical quantities. For example, in spectral unmixing the problem is to solve for the concentration of each material given a mixed spectrum; non-negative fractional abundances that sum to one are a physical requirement. However, there is a lack of sparsity-promoting algorithms that can enforce both the non-negativity and the sum-to-one constraint.
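A minimal sketch of such a constrained solve, using a generic SLSQP solver from SciPy rather than any algorithm developed in this dissertation (the endmember spectra and abundances below are synthetic), looks as follows:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: linear spectral unmixing with non-negativity and sum-to-one
# constraints, using a generic constrained solver. Endmember spectra and
# true abundances are synthetic illustrative data.
rng = np.random.default_rng(2)
n_bands, n_materials = 100, 4
E = rng.random((n_bands, n_materials))                # endmember spectra (columns)
f_true = np.array([0.5, 0.3, 0.2, 0.0])               # true fractional abundances
y = E @ f_true + 0.01 * rng.standard_normal(n_bands)  # noisy mixed spectrum

def objective(f):
    return np.sum((E @ f - y) ** 2)                   # least-squares data fit

result = minimize(
    objective,
    x0=np.full(n_materials, 1.0 / n_materials),       # start from a uniform mixture
    bounds=[(0.0, 1.0)] * n_materials,                # non-negativity
    constraints=[{"type": "eq", "fun": lambda f: f.sum() - 1.0}],  # sum to one
    method="SLSQP",
)
print("estimated abundances:", np.round(result.x, 3))
```

A generic solver of this kind enforces the physical constraints, but it does nothing to promote sparsity in the abundances, which is exactly the gap noted above.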
A major issue in both the theoretical and experimental compressive sensing communities is the lack of code design schemes. For the most part, random measurement techniques are dominant because they obey well-known theoretical results that guarantee reconstruction with high probability. However, designed codes have been shown to outperform random codes in various applications. Intuitively, designed codes that take into account prior knowledge of the physical limitations of the sensing task and additional statistical assumptions about the signal-of-interest should be able to outperform random measurements. For example, I will demonstrate in the AFSSI-C that adaptively designed Principal Component Analysis (PCA) codes dramatically outperform random codes in low-SNR environments.
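As a rough sketch of the underlying idea, and not of the adaptive algorithm used in the AFSSI-C itself, principal-component codes can be computed from a library of training spectra and compared against random codes; the training library below is synthetic:

```python
import numpy as np

# Sketch: derive measurement codes from the principal components of a training
# library of spectra and compare them with random codes. The library is a
# synthetic placeholder, not AFSSI-C data.
rng = np.random.default_rng(3)
n_spectra, n_channels, n_codes = 200, 128, 8

# Build a synthetic library that lies near a low-dimensional subspace,
# mimicking the structure real spectral libraries tend to have.
wavelengths = np.linspace(0.0, 1.0, n_channels)
basis = np.stack([np.exp(-0.5 * ((wavelengths - c) / 0.08) ** 2)
                  for c in (0.2, 0.4, 0.6, 0.8)])
weights = rng.random((n_spectra, 4))
library = weights @ basis + 0.02 * rng.standard_normal((n_spectra, n_channels))

centered = library - library.mean(axis=0)             # remove the mean spectrum
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pca_codes = vt[:n_codes]                              # top principal components
random_codes = rng.standard_normal((n_codes, n_channels))

def captured_variance(codes):
    """Fraction of the library's variance captured by a set of codes."""
    q, _ = np.linalg.qr(codes.T)                      # orthonormalize the codes
    proj = centered @ q                               # project library onto them
    return proj.var(axis=0).sum() / centered.var(axis=0).sum()

print("variance captured by PCA codes:   ", round(captured_variance(pca_codes), 3))
print("variance captured by random codes:", round(captured_variance(random_codes), 3))
```

Because the principal components align with the directions in which the training library actually varies, a handful of PCA codes capture most of the usable signal variance, whereas random codes spread the same measurement budget over directions that carry little information.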
