leaderbot.models.Davidson.train

Davidson.train(init_param=None, method='BFGS', max_iter=1500, tol=1e-08)

Tune the model parameters using maximum likelihood estimation.

Parameters:
init_param : array_like, default=None

Initial parameters. If None, an initial guess is used based on the cumulative counts between agent matches.

method : str, default='BFGS'

Optimization method.

  • 'BFGS': local optimization (best method overall)

  • 'L-BFGS-B': local optimization (best method for all ScaledRIJ models)

  • 'CG': local optimization

  • 'Newton-CG': local optimization (most accurate method, but slow)

  • 'TNC': local optimization (least accurate method)

  • 'Nelder-Mead': local optimization (slow)

  • 'Powell': local optimization (often does not converge)

  • 'shgo': hybrid global and local optimization (slow)

  • 'basinhopping': hybrid global and local optimization (slow)

See scipy.optimize for further details on each of the above methods.

max_iter : int, default=1500

Maximum number of iterations.

tol : float, default=1e-8

Convergence tolerance of the optimization.
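For instance, assuming a Davidson model has been created as in the Examples section below, the optimizer and its stopping criteria can be chosen through the parameters above (the argument values here are purely illustrative, not recommendations):

>>> # Train with the L-BFGS-B optimizer, a larger iteration budget,
>>> # and a tighter tolerance (illustrative values)
>>> model.train(method='L-BFGS-B', max_iter=3000, tol=1e-10)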

See also

predict

Predict probabilities based on given parameters.

Notes

The trained parameters are available as the param attribute.

Examples

>>> from leaderbot.data import load
>>> from leaderbot.models import Davidson

>>> # Create a model
>>> data = load()
>>> model = Davidson(data)

>>> # Train the model
>>> model.train()

>>> # Make inference
>>> prob = model.infer()
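
Building on the example above, the fitted parameters mentioned in the Notes can be inspected through the param attribute; one possible use, sketched here as an illustration, is to pass them back as the initial guess for a refined fit (the exact shape and contents of param depend on the model):

>>> # Inspect the parameters obtained by maximum likelihood estimation
>>> model.param

>>> # Optionally, reuse them as the initial guess for a refined fit
>>> # with a more accurate (but slower) optimizer
>>> model.train(init_param=model.param, method='Newton-CG')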