Importance Sampling Algorithms
Importance sampling in MonteCarloX is built around local proposals + acceptance rules. The defining feature is a discrete-step update loop (proposal → decision → update).
While this is most commonly used for equilibrium sampling, the same machinery can be used in non-equilibrium protocols by changing parameters or target weights over steps.
Mental model
Each step is:
- propose a local change
- compute a local log-ratio (or local energy difference)
- accept/reject via the algorithm
- update counters and measure if needed
The core API function is `accept!`.
Target distribution and acceptance rule
Let $\pi(x)$ be the target density (or mass function) on the state space.
- Bayesian example:
\[\pi(\theta) = p(\theta \mid \mathcal{D}).\]
- Statistical-mechanics microstate example:
\[\pi(x) = p(x \mid \beta) = \frac{e^{-\beta E(x)}}{Z(\beta)}.\]
Metropolis-Hastings acceptance is
\[\alpha(x \to x') = \min\!\left(1, \frac{\pi(x')\, q(x \mid x')}{\pi(x)\, q(x' \mid x)}\right).\]
For symmetric local proposals $q$, this reduces to $\min\!\left(1, \pi(x') / \pi(x)\right)$.
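Computed in log space, the symmetric-proposal rule is a clamped exponential of the log-ratio; a minimal plain-Julia sketch (independent of the MonteCarloX API):

```julia
# Metropolis acceptance probability for a symmetric proposal:
# alpha = min(1, pi(x') / pi(x)), computed in log space for numerical stability.
logpi(x) = -0.5 * x^2   # standard normal target (up to an additive constant)

function acceptance_probability(logpi, x_new, x_old)
    min(1.0, exp(logpi(x_new) - logpi(x_old)))
end

acceptance_probability(logpi, 0.0, 1.0)  # uphill move: probability 1.0
acceptance_probability(logpi, 1.0, 0.0)  # downhill move: exp(-0.5) ≈ 0.607
```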
Unified view: Bayesian and statistical-physics targets
The same sampler is used in both domains by changing only the callable log target score:
- Bayesian inference:
  `logweight(theta) = logposterior(theta) = loglikelihood(theta) + logprior(theta)`
- Statistical mechanics:
  `logweight(x) = -beta * E(x)`
In MonteCarloX this callable is carried by the algorithm as the ensemble object and can also be accessed as `logweight(alg)`.
Both views are important:
- `ensemble(alg)` is the architecture-level object
- `logweight(alg)` is the acceptance-rule interpretation
They refer to the same callable value.
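A hedged sketch of the two conventions as ordinary Julia callables (the toy densities and the helper names `loglikelihood`, `logprior`, and `E` here are illustrative, not package API):

```julia
# Bayesian target: log-posterior as sum of log-likelihood and log-prior.
loglikelihood(theta) = -0.5 * (theta - 1.0)^2   # toy Gaussian likelihood
logprior(theta)      = -0.5 * theta^2           # toy Gaussian prior
logposterior(theta)  = loglikelihood(theta) + logprior(theta)

# Statistical-mechanics target: Boltzmann log-weight at inverse temperature beta.
E(x) = x^2                # toy energy function
beta = 2.0
logweight_boltzmann(x) = -beta * E(x)

# Either callable can serve as the sampler's log target score.
```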
Metropolis
When to use it
- default first choice for equilibrium sampling
- simple and robust
Acceptance intuition
- always accept moves toward larger target weight
- accept less favorable moves with probability `exp(log_ratio)`
Minimal usage

```julia
using Random
using MonteCarloX

rng = MersenneTwister(1)
logdensity(x) = -0.5 * x^2
alg = Metropolis(rng, logdensity)

x = 0.0
for _ in 1:20_000
    x_new = x + randn(alg.rng)
    x = accept!(alg, x_new, x) ? x_new : x
end
println(acceptance_rate(alg))
```

Bayesian example (primary)
Coin-flip posterior with local random-walk proposals on $\theta$:
- $\pi(\theta) = p(\theta \mid \text{data}) \propto p(\text{data} \mid \theta)\, p(\theta)$
- implementation target: `logposterior(theta)`
Repository example: examples/bayesian_coin_flip.ipynb
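As a self-contained illustration of the same computation, here is a plain-Julia random-walk sampler for a coin-flip posterior (toy counts; not the notebook's code, and no MonteCarloX calls):

```julia
using Random
using Statistics

# Coin-flip posterior: uniform Beta(1,1) prior, binomial likelihood
# for k heads in n flips; random-walk Metropolis on theta in (0, 1).
function sample_coin(k, n; nsteps=50_000, rng=MersenneTwister(4))
    logpost(t) = (0.0 < t < 1.0) ? k * log(t) + (n - k) * log(1 - t) : -Inf
    theta = 0.5
    samples = Float64[]
    for _ in 1:nsteps
        prop = theta + 0.1 * randn(rng)        # symmetric local proposal
        if log(rand(rng)) < logpost(prop) - logpost(theta)
            theta = prop                       # accept
        end
        push!(samples, theta)
    end
    samples
end

mean(sample_coin(7, 10))  # ≈ (7+1)/(10+2) ≈ 0.67 for the uniform prior
```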
Statistical-mechanics example (secondary)
Ising microstate sampling:
- state $x$ = spin configuration
- target $\pi(x) = e^{-\beta E(x)} / Z(\beta)$
- local proposal: spin flip
Repository example: examples/spin_systems/metropolis_ising2D.ipynb
Energy-variable view (same physics):
\[p(E \mid \beta) = \frac{\Omega(E)\, e^{-\beta E}}{Z(\beta)},\]
where $\Omega(E)$ is the density of states.
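A plain-Julia sketch of single-spin-flip Metropolis on a small 2D lattice (periodic boundaries; illustrative only, not the package implementation from the notebook):

```julia
using Random

# One Metropolis sweep over a 2D Ising lattice with periodic boundaries.
function metropolis_sweep!(spins::Matrix{Int}, beta, rng)
    L = size(spins, 1)
    for _ in 1:length(spins)
        i, j = rand(rng, 1:L), rand(rng, 1:L)
        # Energy change for flipping spin (i, j): dE = 2 s_ij * sum(neighbours)
        nn = spins[mod1(i + 1, L), j] + spins[mod1(i - 1, L), j] +
             spins[i, mod1(j + 1, L)] + spins[i, mod1(j - 1, L)]
        dE = 2 * spins[i, j] * nn
        # Accept if dE <= 0, otherwise with probability exp(-beta * dE)
        if dE <= 0 || rand(rng) < exp(-beta * dE)
            spins[i, j] = -spins[i, j]
        end
    end
    spins
end

rng = MersenneTwister(5)
spins = rand(rng, [-1, 1], 8, 8)
metropolis_sweep!(spins, 0.4, rng)
```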
Glauber
Same proposal style as Metropolis, but uses logistic acceptance. Useful when that acceptance rule is the natural one for your dynamics or modeling convention.
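The two acceptance rules can be compared directly as functions of the log-ratio (plain Julia, illustrative):

```julia
# Acceptance probability as a function of the log-ratio
p_metropolis(lr) = min(1.0, exp(lr))       # Metropolis: clamped exponential
p_glauber(lr)    = 1 / (1 + exp(-lr))      # Glauber: logistic

p_metropolis(0.0)  # 1.0: neutral moves always accepted
p_glauber(0.0)     # 0.5: Glauber accepts neutral moves half the time
```

Note that the logistic curve lies at or below the Metropolis curve for every log-ratio, so Glauber dynamics accepts no more often than Metropolis at the same target.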
HeatBath
Draws from local conditional probabilities instead of accept/reject. For Ising-like models this often means directly sampling local spin values from conditional weights.
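For a single Ising spin in a local field, the conditional draw looks like this (plain-Julia sketch, not package API; the probability 1/(1 + e^{-2βh}) follows from the two-state Boltzmann weights):

```julia
using Random

# Heat-bath update for one Ising spin in local field h (sum of neighbours):
# draw s = +1 with its conditional Boltzmann probability,
# independent of the spin's previous value.
function heatbath_spin(beta, h, rng)
    p_up = 1 / (1 + exp(-2 * beta * h))   # P(s = +1 | neighbours)
    rand(rng) < p_up ? 1 : -1
end

rng = MersenneTwister(6)
heatbath_spin(1.0, 4, rng)  # strong positive field: +1 with probability ≈ 0.9997
```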
Generalized ensembles
These methods adapt or use non-canonical weights to improve exploration.
Multicanonical
- keeps a histogram of visited bins
- updates tabulated log-weights from histogram information
- useful for broad energy exploration and barrier crossing
```julia
using Random
using MonteCarloX

lw = BinnedObject(-20:2:20, 0.0)
alg = Multicanonical(MersenneTwister(2), lw)
set!(ensemble(alg), -10:2:10, x -> 0.0)
# run your update loop with accept!(alg, x_new, x_old)
# then call update!(ensemble(alg))
```

Wang-Landau
- updates the log-density-of-states estimate at visited bins
- progressively refines the modification factor (`logf` via `update!(ensemble(alg))` between stages)
```julia
using Random
using MonteCarloX

lw = BinnedObject(-20:2:20, 0.0)
alg = WangLandau(MersenneTwister(3), lw; logf=1.0)
# in your loop: accept!(alg, x_new, x_old)
# between stages: update!(ensemble(alg))
```

Choosing quickly
- Start with `Metropolis` for standard equilibrium sampling.
- Use `HeatBath` when conditional local probabilities are natural and cheap.
- Use `Multicanonical` or `WangLandau` when canonical sampling gets stuck or explores too narrowly.
Example map (algorithm ↔ application)
- Bayesian scalar posterior (`Metropolis`): `examples/bayesian_coin_flip.ipynb`
- Bayesian regression posterior (`Metropolis`): `examples/house_price_prediction.ipynb`
- Canonical spin sampling (`Metropolis`): `examples/spin_systems/metropolis_ising2D.ipynb`
- Generalized-ensemble exploration (`Multicanonical`, `WangLandau`): `examples/spin_systems/muca_ising2D.ipynb`, `examples/muca_LDT_gaussian_rngs.ipynb`
API reference
MonteCarloX.AbstractImportanceSampling — Type
`AbstractImportanceSampling <: AbstractAlgorithm`

Base type for importance-sampling algorithms.
MonteCarloX.AbstractMetropolis — Type
`AbstractMetropolis <: AbstractImportanceSampling`

Base type for Metropolis-family samplers where acceptance is naturally computed from a local state difference (e.g. ΔE).
MonteCarloX.AbstractHeatBath — Type
`AbstractHeatBath <: AbstractAlgorithm`

Base type for heat-bath style algorithms.
MonteCarloX.ImportanceSampling — Type
`ImportanceSampling <: AbstractImportanceSampling`

Generic importance-sampling algorithm that operates on full-state acceptance arguments (`x_new`, `x_old`) using a callable ensemble.
The callable may be a function or a log-weight object and should return a scalar score such as a log density / log weight.
Conceptual API expectations:
- `ensemble(alg)` returns the architectural ensemble object carried by the algorithm.
- `logweight(alg)` returns a callable score object/function derived from that ensemble.
- Acceptance logic uses score differences from `logweight`.
Unified view:
- Bayesian inference: `logweight(theta) = logposterior(theta)`
- Statistical mechanics: `logweight(x) = -beta * E(x)`
Both are represented identically as ensemble-provided logweight callables.
MonteCarloX.ensemble — Method
`ensemble(alg::AbstractImportanceSampling)`

Return the ensemble object carried by an importance-sampling algorithm.
This is the canonical accessor in the ensemble-first API. Operationally, this object defines the logweight used in acceptance.
MonteCarloX.logweight — Method
`logweight(alg::AbstractImportanceSampling)`

Return the algorithm ensemble via a logweight-oriented alias. Equivalent to `ensemble(alg)`.
Use this accessor when reasoning about acceptance formulas.
MonteCarloX.logweight — Method
`logweight(ens::AbstractEnsemble)`

Return a callable logweight object/function for an ensemble. Concrete ensembles must implement this.
MonteCarloX.logweight — Method
`logweight(ens::AbstractEnsemble, x)`

Evaluate the logweight on state/value `x`. Concrete ensembles should provide this directly or rely on the callable from `logweight(ens)`.
MonteCarloX.Metropolis — Type
`Metropolis <: AbstractMetropolis`

Metropolis algorithm for importance sampling.
The Metropolis algorithm samples from a probability distribution proportional to exp(log_weight) using an accept/reject criterion.
Unified view:
- Bayesian inference: `logweight(theta) = logposterior(theta)`
- Statistical mechanics: `logweight(x) = -beta * E(x)`
Both are passed as the same callable ensemble score. In other words, the algorithm ensemble defines the operative logweight.
Fields
- `rng::AbstractRNG`: random number generator
- `ensemble`: callable ensemble score (function or weight object)
- `steps::Int`: total number of steps attempted
- `accepted::Int`: number of accepted steps
Examples
```julia
# Create with Boltzmann weight
alg = Metropolis(Random.default_rng(), β=2.0)

# Create with Bayesian log-posterior
logposterior(theta) = loglikelihood(theta) + logprior(theta)
alg = Metropolis(Random.default_rng(), logposterior)

# Create with custom callable score
alg = Metropolis(Random.default_rng(), x -> -0.5 * x^2)

# Create with a weight object
ens = BoltzmannEnsemble(1.5)
alg = Metropolis(Random.default_rng(), ens)

# Create with a tabulated or custom ensemble
ens = FunctionEnsemble(x -> -0.5 * x^2)
alg = Metropolis(Random.default_rng(), ens)
```

MonteCarloX.Glauber — Type
`Glauber <: AbstractMetropolis`

Glauber sampler with logistic acceptance rule.

Uses the same proposal interface and log-ratio as Metropolis-family algorithms, but acceptance is:

`p_accept = 1 / (1 + exp(-log_ratio))`

MonteCarloX.HeatBath — Type
`HeatBath <: AbstractHeatBath`

Heat-bath sampler parameters.
For Ising-like systems, updates draw directly from the local conditional probability using inverse temperature β.
MonteCarloX.accept! — Function
`accept!(alg::AbstractImportanceSampling, x_new, x_old)`

Evaluate the acceptance criterion for importance sampling from log-weight differences.
Updates step and acceptance counters in the algorithm. Returns true if the move is accepted based on the Metropolis criterion:
- Accept if log_ratio > 0 (new state has higher weight)
- Accept with probability exp(log_ratio) otherwise
This is the core accept/reject step used by all importance sampling algorithms.
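The documented criterion corresponds to a small reference function (a sketch of the rule, not the package implementation):

```julia
using Random

# Metropolis accept/reject from a log-ratio:
# always accept uphill moves; otherwise accept with probability exp(log_ratio).
function metropolis_accept(rng, log_ratio)
    log_ratio > 0 && return true
    rand(rng) < exp(log_ratio)
end

rng = MersenneTwister(7)
metropolis_accept(rng, 0.5)   # uphill move: always true
```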
`accept!(alg::AbstractMetropolis, delta_state)`

Metropolis-family acceptance using a local state difference.
`accept!(alg::ImportanceSampling{<:WangLandauEnsemble}, x_new, x_old)`

Perform Metropolis acceptance and apply Wang-Landau local adaptation at the visited state by updating the tabulated logweight.
MonteCarloX.acceptance_rate — Function
`acceptance_rate(alg::AbstractImportanceSampling)`

Calculate the acceptance rate of the algorithm.
Returns the fraction of accepted moves: accepted/steps. Returns 0.0 if no steps have been attempted yet.
MonteCarloX.reset! — Method
`reset!(alg::AbstractImportanceSampling)`

Reset step and acceptance counters to zero.
Useful when you want to measure acceptance rate for a specific run phase without previous history.
MonteCarloX.Multicanonical — Function
`Multicanonical([rng,] bins; init=0.0)`

Create a generic ImportanceSampling algorithm with a MulticanonicalEnsemble built from `bins`.
If rng is not provided, the global RNG will be used.
`Multicanonical([rng,] ens::MulticanonicalEnsemble)`

Wrap an existing multicanonical logweight in a generic ImportanceSampling algorithm.
If rng is not provided, the global RNG will be used.
MonteCarloX.WangLandau — Function
`WangLandau([rng,] bins_or_logweight; init=0.0, logf=1.0)`

Create a generic ImportanceSampling algorithm with a WangLandauEnsemble built from `bins_or_logweight`. If `rng` is not provided, the global RNG will be used.
MonteCarloX.update! — Method
`update!(ens::AbstractEnsemble, args...; kwargs...)`

Perform in-place adaptation/update of an ensemble. Concrete ensembles must specialize this when supported.
MonteCarloX.update! — Method
`update!(e::WangLandauEnsemble; power=0.5)`

Update the Wang-Landau schedule by scaling the modification factor: `logf <- power * logf`, with default `power = 0.5`.
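Under this schedule the modification factor decays geometrically across stages; a small illustrative helper (`logf_schedule` is not package API):

```julia
# Wang-Landau modification-factor schedule: logf <- power * logf each stage.
# With the default power = 0.5, the factor halves between stages.
logf_schedule(logf0, nstages; power=0.5) = [logf0 * power^k for k in 0:nstages-1]

logf_schedule(1.0, 4)  # [1.0, 0.5, 0.25, 0.125]
```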