\documentclass[a4paper]{report}
\usepackage{amsmath,mathrsfs,amsfonts}
\usepackage{mathtools}
\usepackage[a4paper,margin=2.7cm,tmargin=2.5cm,bmargin=2.5cm]{geometry}
\usepackage{mdframed}
\newcommand{\nexercise}[0]{\arabic{exercises}\addtocounter{exercises}{1}}
\addtocounter{chapter}{1}
\makeatletter
\renewcommand{\thesection}{\@arabic\c@section}
\renewcommand{\thefigure}{\@arabic\c@figure}
\makeatother
\begin{document}
\newcounter{exercises}
\addtocounter{exercises}{1}
\newmdenv[linewidth=2pt,
frametitlerule=true,
roundcorner=10pt,
linecolor=red,
leftline=false,
rightline=false,
skipbelow=12pt,
skipabove=12pt,
nobreak=true,
innerbottommargin=7pt,
]{exercisebox}
\begin{center}
\textbf{\Large{Detectors and noise}}
\end{center}
\section{Photodetectors}
\subsection{Converting photons into bits}
Let us first consider the steps involved in converting photons falling onto a detector into image data.
\begin{itemize}
\item \textbf{Photoelectric effect:} the probability that an absorbed photon will be converted into a photoelectron (or initiate a photochemical reaction, in the case of photographic film) is the \textit{quantum efficiency} of the detector.
\item \textbf{Amplification:} the charge associated with the generated photoelectrons is converted into a voltage that we can measure.
\item \textbf{Analog-to-digital conversion:} the voltage is measured by an ADC, giving rise to binary data that can be stored or manipulated by computers. The \textit{bit depth} of the digitizer will determine the maximum image value that can be acquired.
\end{itemize}
\begin{exercisebox}[frametitle={Exercise \nexercise: Quantum efficiency of different materials}]
In recent decades, photomultiplier tubes and silicon based detectors (photodiodes and cameras) have replaced film as the primary means for measuring light.
How do you think the quantum efficiency of these materials compares?
Which is the lowest and which is the highest?
\end{exercisebox}
\subsection{CCD and CMOS cameras}
Both CCD and CMOS detectors rely on the photoelectric effect to detect photons but differ in how the generated photoelectrons are amplified and counted.
In a CCD camera, photoelectrons can be transferred around the chip and are eventually amplified serially.
In a CMOS camera, each pixel has electronics that can amplify the charge, making much higher frame rates possible.
CMOS cameras can operate in \textit{global} or \textit{rolling} shutter mode.
When using a rolling shutter (which can also help increase frame rate), care has to be taken to avoid rolling shutter artefacts.
Both CCD and CMOS sensors can hold a finite number of photoelectrons within each pixel before saturation. This number is referred to as the \textit{full well capacity} or \textit{well depth}.
\subsection{Photomultiplier tubes}
Photomultiplier tubes have a single photosensitive element -- the photocathode, equivalent to a single pixel of a camera chip.
The image has to be constructed serially, e.g. by scanning.
Photoelectrons emitted by the photocathode are amplified through a cascade of secondary-emission stages (dynodes).
The stochastic nature of this amplification leads to some variability in the photocurrent generated by individual photons.
PMT signals are typically measured using one of two methods:
\begin{itemize}
\item \textbf{Current integration:} the total photocurrent measured for each pixel is summed up using analogue electronics, and then digitized.
\item \textbf{Photon counting:} events associated with individual photons are detected and counted, disregarding the amplitude of individual events.
\end{itemize}
Photon counting is only feasible and useful in the low light regime, when overlap between events arising from different photons is unlikely.
As discussed earlier, PMTs have lower achievable quantum efficiencies than silicon-based detectors.
Their advantage comes from the fact that the amplification cascade allows them to reliably detect single photons above noise levels (which we will discuss shortly).
In addition, they have a large surface area, which can be useful, as you'll see when designing and building the light path for two-photon microscopes later.
Different regions of the photocathode can differ in their sensitivity, which has to be taken into account when designing microscopes.
\section{Sources of noise in imaging}
\subsection{Read-out noise}
Read-out noise originates from the electronics that convert photoelectrons into binary data.
Since read-out noise fluctuations can be both positive and negative, it is important to apply an offset to avoid clipping negative values, which would result in a biased measurement.
Read-out noise is typically specified in units of charge -- i.e. multiples of the charge of the electron.
Together with the full well capacity it determines the \textit{dynamic range} of the sensor -- the maximum number of readily distinguishable gray levels.
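As a worked example, the dynamic range follows directly from these two numbers. The values below are invented for illustration, not taken from any particular sensor's specification:

```python
# Dynamic range from full well capacity and read-out noise.
# The numbers below are illustrative, not from a real spec sheet.
import math

full_well = 30000    # full well capacity, in electrons
read_noise = 10      # RMS read-out noise, in electrons

dynamic_range = full_well / read_noise             # distinguishable gray levels
dynamic_range_db = 20 * math.log10(dynamic_range)  # often quoted in dB
bits_needed = math.log2(dynamic_range)             # bit depth needed to sample it

print(dynamic_range)                # 3000.0
print(round(dynamic_range_db, 1))   # 69.5
print(round(bits_needed, 1))        # 11.6
```

Note that a 12-bit digitizer would comfortably sample this sensor's dynamic range, while an 8-bit one would waste most of it.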
\begin{exercisebox}[frametitle={Exercise \nexercise: Picking optimal gain}]
How should we pick the amplification and digitization gain to maximize dynamic range, given the level of read-out noise and a fixed bit depth? E.g. what gain should we pick if our RMS read-out noise is $10\,e^-$?
\end{exercisebox}
\subsection{Shot noise}
Shot noise is a direct consequence of the quantum nature of light. While we may be interested in light intensity (e.g. as a proxy for the concentration of a fluorescent protein), we only observe discrete photons.
While the underlying \textit{rate} of generation and detection of photons may be constant, the number of photons we observe in a given time window will vary, following the Poisson distribution.
The Poisson distribution has an interesting property in that its mean, often written as $\lambda$, corresponding in this case to the underlying rate of photons, is equal to its variance:
\begin{equation}
\lambda = \sigma^2
\end{equation}
Or, if you prefer the more formal language of expected values,
\begin{align}
\lambda = \mathbb{E}[x] = \mathrm{var}(x) = \mathbb{E}[(x - \mathbb{E}[x])^2],
\end{align}
where $x$ is the Poisson distributed photon counts.
This means that as the average number of photon events increases, for example with light intensity or exposure time, so does the variability of photon counts due to shot noise.
So why then do we have the intuition that images ``become less noisy'' as we increase the amount of light?
The reason is that although the variance increases, the signal-to-noise ratio (SNR) also goes up.
SNR is typically quantified as the mean of the signal divided by its standard deviation.
Since the standard deviation of the photon count due to shot noise is equal to the square root of the variance, it increases more slowly than the mean.
Therefore, the SNR grows in proportion to the square root of the mean photon count:
\begin{equation}
SNR = \frac{\lambda}{\sigma} = \frac{\lambda}{\sqrt{\lambda}} = \sqrt{\lambda}
\end{equation}
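This square-root behaviour is easy to verify numerically. The sketch below (assuming numpy is available) draws Poisson counts at several rates and compares the empirical SNR to $\sqrt{\lambda}$:

```python
# Empirical check that shot-noise-limited SNR equals sqrt(lambda).
import numpy as np

rng = np.random.default_rng(0)
for lam in [10, 100, 1000]:
    counts = rng.poisson(lam, size=100_000)   # photon counts per frame
    snr = counts.mean() / counts.std()        # empirical mean / std
    print(lam, round(snr, 2), round(np.sqrt(lam), 2))
```

A hundredfold increase in light yields only a tenfold improvement in SNR, which is why long exposures give diminishing returns.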
\begin{exercisebox}[frametitle={Exercise \nexercise: Why can’t we see the Milky Way from London?}]
Both signal (stars) and background (sky glow from light pollution) contain shot noise, and both contribute to the variance of the observed photon counts:
\begin{align}
\sigma_{total}^2 &= \sigma_{signal}^2 + \sigma_{background}^2 \\
&= \lambda_{signal} + \lambda_{background}
\end{align}
However, only the signal contributes to the numerator, so increasing it still improves the SNR:
\begin{align}
SNR &= \frac{\lambda_{signal}}{\sigma_{total}} \\
&= \frac{\lambda_{signal}}{\sqrt{\lambda_{signal} + \lambda_{background}}}
\end{align}
\end{exercisebox}
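The effect of the background term in the exercise above can be made concrete with a short calculation. The rates below are arbitrary illustration values:

```python
# SNR of a fixed signal as the shot-noise background grows,
# using SNR = lam_signal / sqrt(lam_signal + lam_background).
import math

lam_signal = 100.0
for lam_background in [0.0, 100.0, 10_000.0]:
    snr = lam_signal / math.sqrt(lam_signal + lam_background)
    print(lam_background, round(snr, 2))
```

With no background the SNR is $\sqrt{100} = 10$; once the background dominates, the same signal is buried in the background's shot noise, just as stars are lost in the sky glow over London.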
\subsection{Dark noise}
Dark noise arises from generation of electrons in a photon-independent fashion.
Since it is also a Poisson process, it has the same statistical properties as shot noise.
Dark noise can be controlled by cooling the sensor.
\subsection{Estimating gain using shot noise}
While the number of photons detected by the camera follows the Poisson distribution, the camera does not output raw photon counts.
Instead, the image values it generates will be proportional to the actual number of photons, with a proportionality constant we refer to as the gain:
\begin{equation}
\underbracket{c}_{\mathclap{\text{image value}}} = \alpha \overbracket{p}^{\mathclap{\text{photon count}}},
\end{equation}
where $\alpha$ is the total gain.
The units of gain are gray values per photon.
\subsubsection{Proof using expected values}
Before we continue, we need to arm ourselves with the definition of \textit{expectation} or \textit{expected value} of a random variable.
The expectation of $f(x)$, a function of a discrete random variable $x$ with a probability distribution $p(x)$ is given by
\begin{equation}
\mathbb{E}[f] = \sum_{x} p(x) f(x).
\end{equation}
In the simplest case, if $f(x) = x$, then $\mathbb {E} [x] = \sum _{x} p(x) x$.
In practice, we often do not have access to the full probability distribution $p(x)$ and have to resort to approximating the expectation as the empirical mean:
\begin{equation}
\mathbb{E}[x] \simeq \frac{1}{N} \sum_{n=1}^{N} x_n.
\end{equation}
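A quick numerical illustration (assuming numpy): the empirical mean of many Poisson draws converges to the true expectation $\lambda$:

```python
# Monte Carlo approximation of an expectation by the empirical mean.
import numpy as np

rng = np.random.default_rng(1)
lam = 4.2                            # true expectation of the distribution
x = rng.poisson(lam, size=100_000)   # samples x_1 ... x_N
empirical_mean = x.sum() / x.size    # (1/N) * sum over samples
print(round(empirical_mean, 2))      # close to 4.2
```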
\begin{exercisebox}[frametitle={Exercise \nexercise: Expectation example}]
If heads is 1 and tails is 0, what is the expected value of a coin flip? What about a six-sided dice roll?
\end{exercisebox}
A key property of expectations is their linearity. That is
\begin{align}
\mathbb{E}[x+y] &= \mathbb{E}[x] + \mathbb{E}[y], \textrm{and } \\
\mathbb{E}[ax] & = a \mathbb{E}[x], \label{eq:lin}
\end{align}
where $a$ is a constant (i.e. not a random variable). The last piece of information we need is the definition of variance in terms of expectations. It is the expected value of the squared deviations from the mean, $\mathbb{E}[x]$:
\begin{equation}
\mathrm{var}(x) = \mathbb{E}[(x - \mathbb{E}[x])^2] = \mathbb{E}[x^2] - \mathbb{E}[x]^2
\end{equation}
The first identity is true by definition, and the second can be easily demonstrated by expanding the square. With definitions out of the way, we can now express the variance of image values in terms of its mean.
\begin{align}
\mathrm{var}(c) & = \mathbb{E}[(c - \mathbb{E}[c])^2] \\
& = \mathbb{E}[(\alpha p - \mathbb{E}[\alpha p])^2] \\
& = \mathbb{E}[(\alpha p - \alpha \mathbb{E}[p] )^2] \label{eq:ref1} \\
& = \alpha^2 \mathbb{E}[(p - \mathbb{E}[p])^2] \label{eq:ref2} \\
& = \alpha^2 \mathrm{var}(p) = \alpha^2 \mathbb{E}[p] \label{eq:ref3} \\
& = \alpha\mathbb{E}[c].
\end{align}
Eqs. \ref{eq:ref1} and \ref{eq:ref2} took advantage of the linear properties of expectations, specifically Eq. \ref{eq:lin}, and Eq. \ref{eq:ref3} of the fact that $p$ is Poisson distributed, and hence its variance and expected value are equal. Thus we see that the variance of image values due to shot noise increases proportionally to its mean, with a constant of proportionality equal to the gain $\alpha$, the conversion factor from photoelectrons to gray levels.
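This mean--variance relationship can be checked by simulation; the gain $\alpha$ below is an arbitrary made-up value:

```python
# Check that var(c) = alpha * E[c] when c = alpha * p, p ~ Poisson.
import numpy as np

rng = np.random.default_rng(2)
alpha = 2.5                           # made-up gain, gray values per photon
p = rng.poisson(50, size=200_000)     # photon counts
c = alpha * p                         # image values
print(round(c.var() / c.mean(), 2))   # ratio should be close to alpha = 2.5
```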
\subsubsection{Contribution of read-out noise}
The derivation is a little more tiresome if we take into account the fact that the measured image values are the sum of the amplified photon counts and read-out noise $\epsilon$:
\begin{equation}
c = \alpha p + \epsilon.
\end{equation}
Here $\epsilon$ is a new random variable, statistically independent of $p$. With this revised definition,
\begin{align}
\mathrm{var}(c) & = \mathbb{E}[(\alpha p + \epsilon - \mathbb{E}[\alpha p + \epsilon ])^2] \\
& = \mathbb{E}[(\alpha p + \epsilon - \alpha \mathbb{E}[p] - \mathbb{E}[\epsilon])^2].
\end{align}
Expanding the square and collecting all the terms that depend linearly on $\epsilon$ and then expanding the expectation, we get:
\begin{align}
\mathrm{var}(c) & = \mathbb{E}[\alpha^2 p^2 - 2\alpha^2 p\mathbb{E}[p] +\alpha^2\mathbb{E}[p]^2 + \epsilon^2 + \mathbb{E}[\epsilon]^2 + \epsilon(\ldots)]\\
& = \alpha^2 \mathbb{E}[(p - \mathbb{E}[p])^2] + \mathbb{E}[\epsilon^2] + \mathbb{E}[\epsilon]^2 + \mathbb{E}[\epsilon]\mathbb{E}[(\ldots)] \\
& = \alpha^2 \mathbb{E}[(p - \mathbb{E}[p])^2] + \mathrm{var}(\epsilon),
\end{align}
where we took advantage of the fact that $\mathbb{E}[\epsilon^2] - \mathbb{E}[\epsilon]^2 = \mathrm{var}(\epsilon)$, that $\mathbb{E}[\epsilon]=0$, and that $p$ and $\epsilon$ are independent. Finally, noting that $\mathbb{E}[(p - \mathbb{E}[p])^2] = \mathrm{var}(p)$,
\begin{align}
\mathrm{var}(c) & = \alpha^2 \mathrm{var}(p) + \mathrm{var}(\epsilon) \\
& = \alpha^2 \mathbb{E}[p] + \mathrm{var}(\epsilon) \\
& = \alpha\mathbb{E}[c] + \mathrm{var}(\epsilon)
\end{align}
So, if we plot the variance of image values versus its mean, the intercept will be the read-out noise variance -- in units of gray levels. We can use our estimate of the gain $\alpha$ to convert it into photoelectrons and compare it with the value given in the spec sheet. Note that we assumed that the offset has been corrected, otherwise $\mathbb{E}[\epsilon] \neq 0$.
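The whole procedure can be rehearsed in simulation before you try it on a real camera. The gain and read-noise values below are invented for illustration:

```python
# Mean-variance ("photon transfer") simulation: c = alpha * p + epsilon.
# Fitting variance against mean recovers the gain (slope) and the
# read-out noise variance (intercept), both in gray-level units.
import numpy as np

rng = np.random.default_rng(3)
alpha, sigma_read = 2.0, 6.0              # invented gain and read noise
means, variances = [], []
for lam in [20, 50, 100, 200, 500, 1000]:
    p = rng.poisson(lam, size=200_000)                # photon counts
    eps = rng.normal(0.0, sigma_read, size=200_000)   # offset-corrected read noise
    c = alpha * p + eps                               # image values
    means.append(c.mean())
    variances.append(c.var())

slope, intercept = np.polyfit(means, variances, 1)
print(round(slope, 2))                # estimate of the gain alpha
print(round(np.sqrt(intercept), 1))   # read-out noise RMS, in gray levels
```

The fitted slope recovers the gain and the square root of the intercept recovers the read noise, mirroring the analysis you will perform in the practical below.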
\section{Noise Practical}
In this practical you will measure the noise characteristics of your camera.
You will have control over acquisition parameters such as offset and gain.
It is important that you set these to reasonable values before you begin.
The relevant settings in \textit{pylon} software are:
\begin{itemize}
\item \textbf{Pixel format:} determines the bit depth. Set to \textbf{12 Bits/Pixel}.
\item \textbf{Black level:} sets the camera offset and is 0 by default. Set it to a positive value to make sure you are not clipping -- use the histogram feature with the camera cap closed to check.
\item \textbf{Gain:} self-explanatory, set to its minimum value -- 0 dB -- to start with.
\item \textbf{Exposure auto:} set to ``Off''.
\item \textbf{Exposure time:} self-explanatory.
\end{itemize}
Set the user level to Expert and use the textbox to search for them.
\begin{exercisebox}[frametitle={Exercise \nexercise: Measure read-out noise}]
Acquire a sequence of images ($\sim100$) from the camera with NO light hitting the detector.
Hint: Cover the camera and shorten the exposure time as much as possible ($\sim1$ ms). Measure the standard deviation of each pixel’s value across the movie (use ImageJ).
What is the average noise (standard deviation in counts) across pixels?
\end{exercisebox}
\begin{exercisebox}[frametitle={Exercise \nexercise: Measure dark counts}]
Acquire images from the camera with NO light hitting the detector using a range of short and long exposure times.
Measure the mean number of counts in each image. Do the counts increase with increasing exposure time? How many counts/pixel/second do you observe?
Hint: Remember to subtract the ‘Bias Level’, the average value (in counts) appearing in the image following a 0 second exposure. (This is an ‘offset’ added to each image by the camera manufacturers and does not reflect the incident light level.)
\end{exercisebox}
\begin{exercisebox}[frametitle={Exercise \nexercise: Compute the dynamic range of the camera}]
First determine the camera’s full well capacity (saturation value) in counts after bias removal.
Use a long exposure and expose the camera to a bright light source. What value are ‘saturated pixels’ assigned?
Use the formula below to compute the camera’s \textit{dynamic range} (the number of individually discernible light levels) for a fixed exposure.
\begin{center}
Dynamic Range = (Saturation $-$ Dark counts) / Read-out noise
\end{center}
Note: Use bias subtracted values for both Saturation and Dark counts.
Compare your results to the camera’s technical specification sheet.
\end{exercisebox}
\begin{exercisebox}[frametitle={Exercise \nexercise: Measure shot noise}]
Acquire sequences of images ($\sim100$) with the camera exposed to a wide range of brightness values. You can do this in two ways:
\begin{itemize}
\item Use a range of pixel intensities within an image: structured sample illuminated at a fixed intensity. Use ImageJ to measure the mean and standard deviation for each pixel across the image sequence.
\item Use a range of illumination intensities: same pixel, across different LED currents. Use ImageJ or your favourite programming language to measure the mean and standard deviation of a select pixel across the images acquired at various illumination intensities.
\end{itemize}
Use 1 second exposure time for either option. The illumination light must be stable; use an LED in an enclosed container (to avoid background changes).
Plot the variance, i.e. standard deviation squared (Y axis) vs. mean (X axis) for different brightness levels. Fit a line.
Determine the detector’s gain (counts per photoelectron):
Hint: Remember to subtract out the ‘Bias Level’ measured above.
Note: The slope of the line will be the “Gain” of your camera.
Where does the fit line intersect the y-axis? Does this make sense?
\end{exercisebox}
\end{document}