leaderbot.models.BradleyTerry.fisher
- BradleyTerry.fisher(w=None, epsilon=1e-08, order=2)
Observed Fisher information matrix.
- Parameters:
- w : array_like, default=None
Parameters \(\boldsymbol{\theta}\). If None, the pre-trained parameters are used, provided the model is already trained.
- epsilon : float, default=1e-8
The step size used in the finite differencing method to estimate derivatives.
- order : {2, 4}, default=2
Order of finite differencing:
2: Second order central difference.
4: Fourth order central difference.
- Returns:
- J : numpy.ndarray
The observed Fisher information matrix of size \(m \times m\) where \(m\) is the number of parameters.
- Raises:
- RuntimeWarning
If loss is nan.
- RuntimeError
If the model is not trained and the input w is set to None.
See also
loss
Log-likelihood (loss) function.
Notes
The observed Fisher information matrix is the negative of the Hessian of the log-likelihood function. Namely, if \(\boldsymbol{\theta}\) is the array of all \(m\) parameters, then the observed Fisher information is the matrix \(\mathcal{J}\) of size \(m \times m\) given by
\[\mathcal{J}(\boldsymbol{\theta}) = - \nabla \nabla^{\intercal} \ell(\boldsymbol{\theta}),\]
where \(\ell(\boldsymbol{\theta})\) is the log-likelihood function (see loss()).
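In practice the Hessian is estimated numerically; the epsilon and order arguments control the step size and the order of the central finite differences. The sketch below is a generic illustration on a toy quadratic log-likelihood, not leaderbot's internal implementation, showing how a second-order central difference recovers the observed Fisher information:

>>> import numpy as np
>>> # Toy log-likelihood with known curvature: -Hessian = diag(2, 3)
>>> loglike = lambda t: -0.5 * (2.0 * t[0]**2 + 3.0 * t[1]**2)
>>> theta, eps = np.array([0.1, -0.2]), 1e-4
>>> m = theta.size
>>> J = np.empty((m, m))
>>> for i in range(m):
...     for j in range(m):
...         ei = np.zeros(m)
...         ej = np.zeros(m)
...         ei[i] = eps
...         ej[j] = eps
...         # Second-order central difference of the (i, j) second derivative
...         d2 = (loglike(theta + ei + ej) - loglike(theta + ei - ej)
...               - loglike(theta - ei + ej) + loglike(theta - ei - ej))
...         J[i, j] = -d2 / (4.0 * eps ** 2)
...
>>> np.round(J, 4)  # approximately [[2, 0], [0, 3]]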
Examples
>>> from leaderbot.data import load
>>> from leaderbot.models import Davidson

>>> # Create a model
>>> data = load()
>>> model = Davidson(data)

>>> # Generate an array of parameters
>>> import numpy as np
>>> w = np.random.randn(model.n_param)

>>> # Fisher information for the given input parameters
>>> J = model.fisher(w)

>>> # Fisher information for the trained parameters
>>> model.train()
>>> J = model.fisher()
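A common follow-up, not shown in the docstring itself, is to use the observed Fisher information to quantify uncertainty in the trained parameters: its inverse approximates the asymptotic covariance of the maximum-likelihood estimate. A minimal sketch, assuming the matrix J from the trained model above is invertible:

>>> # Asymptotic covariance and standard errors of the trained parameters
>>> cov = np.linalg.inv(J)
>>> std_err = np.sqrt(np.diag(cov))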