import os
import warnings

import numpy as np
import polars as pl

import scipy.optimize
import scipy.stats as st

import tqdm

import bebi103

import bokeh.io
import bokeh.plotting

bokeh.io.output_notebook()
In our previous lesson on numerical maximum likelihood estimation, we computed an MLE for the parameters of a Negative Binomial model for mRNA counts for Nanog using the Singer, et al. dataset. Here, we will compute the MLEs for the other three genes as well and further compute confidence intervals for these MLEs.
38.1 MLE for all genes
We start by writing a function to compute the maximum likelihood estimate for our model. We have a set of Negative-Binomially distributed i.i.d. counts, parametrized by \(\alpha\) and \(b=1/\beta\), and we wish to compute the MLE. We worked out the code previously, so we can write the functions here.
def log_like_iid_nbinom(params, n):
    """Log likelihood for i.i.d. NBinom measurements, parametrized
    by alpha, b=1/beta."""
    alpha, b = params

    if alpha <= 0 or b <= 0:
        return -np.inf

    return np.sum(st.nbinom.logpmf(n, alpha, 1 / (1 + b)))


def mle_iid_nbinom(n):
    """Perform maximum likelihood estimates for parameters for i.i.d.
    NBinom measurements, parametrized by alpha, b=1/beta"""
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")

        res = scipy.optimize.minimize(
            fun=lambda params, n: -log_like_iid_nbinom(params, n),
            x0=np.array([3, 3]),
            args=(n,),
            method='Powell',
        )

    if res.success:
        return res.x
    else:
        raise RuntimeError('Convergence failed with message', res.message)
Now, we just have to load in the data and compute the MLEs.
# Load DataFrame (data_path points to the directory containing the data set)
df = pl.read_csv(
    os.path.join(data_path, 'singer_transcript_counts.csv'), comment_prefix='#'
)

genes = ["Nanog", "Prdm14", "Rest", "Rex1"]

# Data frame to store results
df_mle = pl.DataFrame(schema=[('gene', str), ('parameter', str), ('mle', float)])

# Perform MLE for each gene
for gene in genes:
    mle = mle_iid_nbinom(df[gene].to_numpy())
    sub_df = pl.DataFrame(
        {'gene': [gene] * 2, 'parameter': ['alpha', 'b'], 'mle': mle}
    )
    df_mle = pl.concat([df_mle, sub_df])
Let’s take a look at the results.
df_mle
shape: (8, 3)
┌──────────┬───────────┬───────────┐
│ gene     ┆ parameter ┆ mle       │
│ ---      ┆ ---       ┆ ---       │
│ str      ┆ str       ┆ f64       │
╞══════════╪═══════════╪═══════════╡
│ "Nanog"  ┆ "alpha"   ┆ 1.263097  │
│ "Nanog"  ┆ "b"       ┆ 69.347842 │
│ "Prdm14" ┆ "alpha"   ┆ 0.552886  │
│ "Prdm14" ┆ "b"       ┆ 8.200636  │
│ "Rest"   ┆ "alpha"   ┆ 4.530335  │
│ "Rest"   ┆ "b"       ┆ 16.543054 │
│ "Rex1"   ┆ "alpha"   ┆ 1.634562  │
│ "Rex1"   ┆ "b"       ┆ 84.680915 │
└──────────┴───────────┴───────────┘
38.2 Bootstrap confidence intervals
Remember that the MLE is a random variable, so we can compute confidence intervals for it. There are several approaches to computing confidence intervals for maximum likelihood estimates. We will explore two methods. They differ in how we use our model to generate bootstrap samples.
In the case of the mRNA counts, the data were generated out of an unknown distribution \(F(n)\). We do not know what it is, though we model it as Negative Binomial to compute an MLE. (We could choose other models; some will be closer to \(F(n)\) than others.) In the first method, to generate a bootstrap sample, we sample out of the empirical distribution \(\hat{F}(n;\mathbf{n})\). We then use the Negative Binomial model to obtain MLEs for the burst frequency \(\alpha\) and burst size \(b\). These MLEs constitute our bootstrap replicates for the MLE, and we can compute confidence intervals from the replicates.
In the second method, instead of approximating the generative distribution \(F(n)\) with \(\hat{F}(n;\mathbf{n})\), we approximate the generative distribution with the model distribution, in this case a Negative Binomial, parametrized by the MLE. In other words, \(F(n) \approx F_\mathrm{NBinom}(n; \alpha^*, b^*)\). We then sample out of the model distribution to generate our bootstrap samples and compute the replicates from these. This is called the parametric bootstrap and is the more widely used approach. It operates under the assumption that our model is indeed a good approximation of the generative distribution.
The first method is nonparametric in that we use the plug-in principle to approximate the generative distribution (and only use the model to compute the statistical functional), so we will refer to it as the nonparametric bootstrap for the MLE. I have not seen that terminology widely used, though, probably because of confusion with the fact that a parametric model is used to compute the MLE.
38.2.1 Nonparametric MLE confidence interval
For the nonparametric MLE confidence interval, we simply draw a bootstrap sample out of the original data set, as we have been doing, and then compute the MLE. Let’s do that. We will use tqdm to give us a progress bar, since this calculation takes a few minutes.
rng = np.random.default_rng(3252)


def draw_bs_sample(data):
    """Draw a bootstrap sample from a 1D data set."""
    return rng.choice(data, size=len(data))


def draw_bs_reps_mle(mle_fun, data, args=(), size=1, progress_bar=False):
    """Draw nonparametric bootstrap replicates of maximum likelihood
    estimator.

    Parameters
    ----------
    mle_fun : function
        Function with call signature mle_fun(data, *args) that computes
        a MLE for the parameters
    data : one-dimensional Numpy array
        Array of measurements
    args : tuple, default ()
        Arguments to be passed to `mle_fun()`.
    size : int, default 1
        Number of bootstrap replicates to draw.
    progress_bar : bool, default False
        Whether or not to display progress bar.

    Returns
    -------
    output : numpy array
        Bootstrap replicates of MLEs.
    """
    if progress_bar:
        iterator = tqdm.tqdm(range(size))
    else:
        iterator = range(size)

    return np.array([mle_fun(draw_bs_sample(data), *args) for _ in iterator])
With these functions in hand, we can draw our bootstrap replicates. Note that the replicates will be returned as a two-dimensional array, with the first column being samples of \(\alpha^*\) and the second being samples of \(b^*\). We will do this for Nanog first.
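The cell that draws the replicates is sketched below. The choice of 10,000 replicates is ours; more replicates give smoother percentile estimates at the cost of a longer computation.

bs_reps = draw_bs_reps_mle(
    mle_iid_nbinom,
    df["Nanog"].to_numpy(),
    size=10000,
    progress_bar=True,
)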
To get the confidence intervals, we can take the percentiles as we have done before. Note that we can use the axis kwarg to enable computing confidence intervals for both parameters in one statement.
conf_int = np.percentile(bs_reps, [2.5, 97.5], axis=0)

# Take a look
conf_int
So, the confidence interval for \(\alpha\) lies between 1.06 and 1.55, and that for \(b\) lies between 57.1 and 82.3.
38.2.2 Confidence regions
The confidence intervals we have just reported are the confidence intervals for the respective parameters, but we really should define a confidence region, which is the multidimensional extension of a confidence interval. (A confidence interval is the special case of a confidence region with only one parameter.)
Indeed, the confidence interval for \(\alpha\) comes from percentiles computed from the probability density function \(f(\alpha^*;\mathbf{n})\). (Strictly speaking, this is a probability mass function in this case because there are a finite number of unique bootstrap samples, but we do not need to worry about that distinction because there are a lot(!) of unique bootstrap samples.) This PDF is computed from the joint PDF of the MLEs by marginalizing over \(b^*\),

\(f(\alpha^*;\mathbf{n}) = \int \mathrm{d}b^*\,f(\alpha^*, b^*;\mathbf{n})\).
The marginalization was done implicitly by just considering the bootstrap samples of the individual parameters, but bootstrap replicate \(i\) of \(\alpha^*\) is related to bootstrap replicate \(i\) of \(b^*\) because they were found together in a maximization of the likelihood function. We might want to plot the joint distribution. One way to visualize this is to just plot all of the bootstrap replicates.
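A simple way to do this is with a scatter plot of the replicates. The plot below is a minimal sketch; the figure dimensions and glyph styling are arbitrary choices.

p = bokeh.plotting.figure(
    frame_width=300,
    frame_height=300,
    x_axis_label="α*",
    y_axis_label="b*",
)

# Each point is one bootstrap replicate of the MLE pair (α*, b*)
p.scatter(bs_reps[:, 0], bs_reps[:, 1], size=2, alpha=0.3)

bokeh.io.show(p)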
The plot reveals a correlation between our MLE estimates for \(\alpha\) and \(b\). We can also visualize \(f(\alpha^*, b^*;\mathbf{n})\), along with the marginal distributions, using the bebi103.viz.corner() function.
# Package replicates in data frame for plotting
df_res = pl.DataFrame(data=bs_reps, schema=["α*", "b*"])

p_corner = bebi103.viz.corner(
    samples=df_res,
    parameters=["α*", "b*"],
    show_contours=True,
    levels=[0.95],
)

bokeh.io.show(p_corner)
The plots on the diagonal contain the histograms of the bootstrap replicates of the MLE. The off-diagonal plot shows all of the bootstrap replicates of the MLE with a contour plot overlaid. By setting levels=[0.95], we have asked for a 95% confidence region; approximately 95% of all bootstrap replicates lie within the contour. This is not exact, though, as some smoothing is done to generate the contour. The smoothing is necessary for confidence regions in two or more dimensions because we have a finite number of samples.
An important note: these are not distributions of the true parameter values. In a frequentist setting, that has no meaning. These are distributions of the maximum likelihood estimates for the parameters, where we have estimated the generative distribution using the empirical distribution.
It is also useful to note that we can access the contour lines directly using the bebi103.viz.contour_lines_from_samples() function and can overlay them on the samples. We may also find such contours useful in other custom visualizations.
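As a sketch of how this might look: we assume here that bebi103.viz.contour_lines_from_samples() accepts the samples of each parameter along with a list of levels and returns lists of x- and y-coordinates for the contour paths, which Bokeh's multi_line() can render.

# Extract the 95% contour lines from the bootstrap replicates
# (assumed signature: samples of each parameter plus a levels keyword)
xs, ys = bebi103.viz.contour_lines_from_samples(
    bs_reps[:, 0], bs_reps[:, 1], levels=[0.95]
)

p = bokeh.plotting.figure(
    frame_width=300,
    frame_height=300,
    x_axis_label="α*",
    y_axis_label="b*",
)

p.scatter(bs_reps[:, 0], bs_reps[:, 1], size=2, alpha=0.3)

# Overlay the contour lines on the scatter plot of replicates
p.multi_line(xs, ys, line_color="tomato", line_width=2)

bokeh.io.show(p)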
38.2.3 Parametric MLE confidence interval
We repeat the calculation for a parametric confidence interval. As we write a function for this, we need to include an mle_fun as for draw_bs_reps_mle(), but we also need to provide a function that draws new data sets out of the parametric model.
def draw_parametric_bs_reps_mle(
    mle_fun, gen_fun, data, args=(), size=1, progress_bar=False
):
    """Draw parametric bootstrap replicates of maximum likelihood
    estimator.

    Parameters
    ----------
    mle_fun : function
        Function with call signature mle_fun(data, *args) that computes
        a MLE for the parameters
    gen_fun : function
        Function to randomly draw a new data set out of the model
        distribution parametrized by the MLE. Must have call
        signature `gen_fun(*params, size)`.
    data : one-dimensional Numpy array
        Array of measurements
    args : tuple, default ()
        Arguments to be passed to `mle_fun()`.
    size : int, default 1
        Number of bootstrap replicates to draw.
    progress_bar : bool, default False
        Whether or not to display progress bar.

    Returns
    -------
    output : numpy array
        Bootstrap replicates of MLEs.
    """
    params = mle_fun(data, *args)

    if progress_bar:
        iterator = tqdm.tqdm(range(size))
    else:
        iterator = range(size)

    return np.array(
        [mle_fun(gen_fun(*params, size=len(data), *args)) for _ in iterator]
    )
To draw parametric bootstrap replicates, we need only provide the function to generate samples from the model distribution. We directly use the Negative Binomial functionality of the Numpy random module.
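A sketch of the generative function and the call to draw the replicates follows; as before, the choice of 10,000 replicates is ours. Numpy's negative_binomial() is parametrized by the number of successes and the success probability, so we pass alpha and 1/(1+b). We then compute the confidence interval by taking percentiles of the replicates, as in the nonparametric case.

def gen_nbinom(alpha, b, size):
    """Draw a data set of `size` counts out of the Negative Binomial
    model parametrized by alpha and b = 1/beta."""
    return rng.negative_binomial(alpha, 1 / (1 + b), size=size)


bs_reps_parametric = draw_parametric_bs_reps_mle(
    mle_iid_nbinom,
    gen_nbinom,
    df["Nanog"].to_numpy(),
    size=10000,
    progress_bar=True,
)

# 95% confidence intervals from the parametric bootstrap replicates
conf_int_parametric = np.percentile(bs_reps_parametric, [2.5, 97.5], axis=0)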
We get confidence intervals similar to those from the nonparametric case. We can also look at the corner plot.
# Package replicates in data frame for plotting
df_res = pl.DataFrame(data=bs_reps_parametric, schema=["α*", "b*"])

p = bebi103.viz.corner(
    samples=df_res,
    parameters=["α*", "b*"],
    show_contours=True,
    levels=[0.95],
)

bokeh.io.show(p)
The corner plot is indeed similar to the nonparametric case.
We could compute confidence intervals for the other genes, but we will not do that here to save time, since each computation takes a few minutes. In the next part of this lesson, we will parallelize the calculations to make them faster.