Misc

StateSpaceDynamics.FilterSmoothType

" FilterSmooth{T<:Real}

A mutable structure for storing smoothed estimates and associated covariance matrices in a filtering or smoothing algorithm.

Type Parameters

  • T<:Real: The numerical type used for all fields (e.g., Float64, Float32).

Fields

  • x_smooth::Matrix{T}: The smoothed state estimates over time. Each column typically represents the state vector at a given time step.

  • p_smooth::Array{T, 3}: The posterior covariance matrices, with dimensions (latent_dim, latent_dim, time_steps).

  • E_z::Array{T, 3}: The expected latent states, size (state_dim, T, n_trials).

  • E_zz::Array{T, 4}: The expected value of z_t * z_t', size (state_dim, state_dim, T, n_trials).

  • E_zz_prev::Array{T, 4}: The expected value of z_t * z_{t-1}', size (state_dim, state_dim, T, n_trials).

Example

```julia
# Initialize a FilterSmooth object with Float64 element type
filter = FilterSmooth{Float64}(
    x_smooth = zeros(10, 100),
    p_smooth = zeros(10, 10, 100),
    E_z = zeros(10, 5, 100),
    E_zz = zeros(10, 10, 5, 100),
    E_zz_prev = zeros(10, 10, 5, 100),
)
```

StateSpaceDynamics.ForwardBackwardType
ForwardBackward{T<:Real}

A mutable struct that encapsulates the forward–backward algorithm outputs for a hidden Markov model (HMM).

Fields

  • loglikelihoods::Matrix{T}: Matrix of log-likelihoods for each observation and state.
  • α::Matrix{T}: The forward probabilities (α) for each time step and state.
  • β::Matrix{T}: The backward probabilities (β) for each time step and state.
  • γ::Matrix{T}: The state occupancy probabilities (γ) for each time step and state.
  • ξ::Array{T,3}: The pairwise state occupancy probabilities (ξ) for consecutive time steps and state pairs.

Typically, α and β are computed by the forward–backward algorithm to find the likelihood of an observation sequence. γ and ξ are derived from these calculations to estimate how states transition over time.
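
As a rough illustration of the γ step, here is a minimal sketch assuming α and β are kept in the probability (not log) domain, with states along rows and time steps along columns:

```julia
# Sketch only: the package may store α and β in log space and/or with a
# different orientation; here rows index states and columns index time steps.
function state_occupancy(α::Matrix{Float64}, β::Matrix{Float64})
    γ = α .* β                # unnormalized occupancy of each state at each time
    γ ./= sum(γ; dims=1)      # normalize over states at every time step
    return γ
end
```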

StateSpaceDynamics.ProbabilisticPCAType
mutable struct ProbabilisticPCA

Probabilistic PCA model from Bishop's Pattern Recognition and Machine Learning.

Fields

  • W: Weight matrix that maps from latent space to data space.
  • σ²: Noise variance.
  • μ: Mean of the data.
  • k: Number of latent dimensions.
  • D: Number of features.
  • z: Latent variables.
StateSpaceDynamics.ProbabilisticPCAMethod

ProbabilisticPCA(; W::Matrix{<:AbstractFloat}, σ²::AbstractFloat, μ::Matrix{<:AbstractFloat}, k::Int, D::Int)

Constructor for ProbabilisticPCA model.

# Args:

- W::Matrix{<:AbstractFloat}: Weight matrix that maps from latent space to data space.

- σ²::AbstractFloat: Noise variance

- μ::Matrix{<:AbstractFloat}: Mean of the data

- k::Int: Number of latent dimensions

- D::Int: Number of features

# Example:

```julia
# PPCA with unknown parameters
ppca = ProbabilisticPCA(k=1, D=2)

# PPCA with known parameters
ppca = ProbabilisticPCA(W=rand(2, 1), σ²=0.1, μ=rand(2), k=1, D=2)
```

StateSpaceDynamics.E_StepMethod
E_Step(ppca::ProbabilisticPCA, X::Matrix{<:AbstractFloat})

Expectation step of the EM algorithm for PPCA. See Bishop's Pattern Recognition and Machine Learning for more details.

Args:

  • ppca::ProbabilisticPCA: PPCA model
  • X::Matrix{<:AbstractFloat}: Data matrix

Examples:

ppca = ProbabilisticPCA(k=1, D=2)
E_Step(ppca, rand(10, 2))
StateSpaceDynamics.M_Step!Method
M_Step!(model::ProbabilisticPCA, X::Matrix{<:AbstractFloat}, E_z::Matrix{<:AbstractFloat}, E_zz::Array{<:AbstractFloat, 3})

Maximization step of the EM algorithm for PPCA. See Bishop's Pattern Recognition and Machine Learning for more details.

Args:

  • model::ProbabilisticPCA: PPCA model
  • X::Matrix{<:AbstractFloat}: Data matrix
  • E_z::Matrix{<:AbstractFloat}: E[z]
  • E_zz::Array{<:AbstractFloat, 3}: E[zz']

Examples:

ppca = ProbabilisticPCA(k=1, D=2)
E_z, E_zz = E_Step(ppca, rand(10, 2))
M_Step!(ppca, rand(10, 2), E_z, E_zz)
StateSpaceDynamics.fit!Function
fit!(model::ProbabilisticPCA, X::Matrix{<:AbstractFloat}, max_iter::Int=100, tol::AbstractFloat=1e-6)

Fit the PPCA model to the data using the EM algorithm.

Args:

  • model::ProbabilisticPCA: PPCA model
  • X::Matrix{<:AbstractFloat}: Data matrix
  • max_iter::Int: Maximum number of iterations
  • tol::AbstractFloat: Tolerance for convergence

Examples:

ppca = ProbabilisticPCA(k=1, D=2)
fit!(ppca, rand(10, 2))
StateSpaceDynamics.loglikelihoodMethod
loglikelihood(model::ProbabilisticPCA, X::Matrix{<:AbstractFloat})

Calculate the log-likelihood of the data given the PPCA model.

Args:

  • model::ProbabilisticPCA: PPCA model
  • X::Matrix{<:AbstractFloat}: Data matrix

Examples:

ppca = ProbabilisticPCA(k=1, D=2)
loglikelihood(ppca, rand(10, 2))
StateSpaceDynamics.GaussianHMMMethod
GaussianHMM(; K::Int, output_dim::Int, A::Matrix{<:Real}=initialize_transition_matrix(K), πₖ::Vector{Float64}=initialize_state_distribution(K))

Create a Hidden Markov Model with Gaussian Emissions

Arguments

  • K::Int: The number of hidden states
  • output_dim::Int: The dimensionality of the observation
  • A::Matrix{<:Real}=initialize_transition_matrix(K): The transition matrix of the HMM (defaults to random initialization)
  • πₖ::Vector{Float64}=initialize_state_distribution(K): The initial state distribution of the HMM (defaults to random initialization)

Returns

  • ::HiddenMarkovModel: Hidden Markov Model Object with Gaussian Emissions

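A minimal usage sketch, keeping the default (random) transition matrix and initial state distribution:

```julia
# Hypothetical example: 3 hidden states emitting 2-dimensional Gaussian observations.
hmm = GaussianHMM(K=3, output_dim=2)
```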

StateSpaceDynamics.SwitchingBernoulliRegressionMethod
SwitchingBernoulliRegression(; K::Int, input_dim::Int, include_intercept::Bool=true, β::Vector{<:Real}=if include_intercept zeros(input_dim + 1) else zeros(input_dim) end, λ::Float64=0.0, A::Matrix{<:Real}=initialize_transition_matrix(K), πₖ::Vector{Float64}=initialize_state_distribution(K))

Create a Switching Bernoulli Regression Model

Arguments

  • K::Int: The number of hidden states.
  • input_dim::Int: The dimensionality of the input data.
  • include_intercept::Bool=true: Whether to include an intercept in the regression model (defaults to true).
  • β::Vector{<:Real}: The regression coefficients (defaults to zeros).
  • λ::Float64=0.0: Regularization parameter for the regression (defaults to zero).
  • A::Matrix{<:Real}=initialize_transition_matrix(K): The transition matrix of the HMM (defaults to random initialization).
  • πₖ::Vector{Float64}=initialize_state_distribution(K): The initial state distribution of the HMM (defaults to random initialization).

Returns

  • ::HiddenMarkovModel: A Switching Bernoulli Regression Model
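
A minimal usage sketch, passing only the required keyword arguments and leaving β, λ, A, and πₖ at their defaults:

```julia
# Hypothetical example: 2 hidden states and 3 input features, with an intercept.
model = SwitchingBernoulliRegression(K=2, input_dim=3)
```
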
StateSpaceDynamics.SwitchingGaussianRegressionMethod
SwitchingGaussianRegression(; 
    K::Int,
    input_dim::Int,
    output_dim::Int,
    include_intercept::Bool = true,
    β::Matrix{<:Real} = if include_intercept
        zeros(input_dim + 1, output_dim)
    else
        zeros(input_dim, output_dim)
    end,
    Σ::Matrix{<:Real} = Matrix{Float64}(I, output_dim, output_dim),
    λ::Float64 = 0.0,
    A::Matrix{<:Real} = initialize_transition_matrix(K),
    πₖ::Vector{Float64} = initialize_state_distribution(K)
)

Create a Switching Gaussian Regression Model

Arguments

  • K::Int: The number of hidden states.
  • input_dim::Int: The dimensionality of the input features.
  • output_dim::Int: The dimensionality of the output predictions.
  • include_intercept::Bool: Whether to include an intercept in the regression model (default is true).
  • β::Matrix{<:Real}: The regression coefficients (defaults to zeros based on input_dim and output_dim).
  • Σ::Matrix{<:Real}: The covariance matrix of the Gaussian emissions (defaults to an identity matrix).
  • λ::Float64: The regularization parameter for the regression (default is 0.0).
  • A::Matrix{<:Real}: The transition matrix of the Hidden Markov Model (defaults to random initialization).
  • πₖ::Vector{Float64}: The initial state distribution of the Hidden Markov Model (defaults to random initialization).

Returns

  • ::HiddenMarkovModel: A Switching Gaussian Regression Model
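
A minimal usage sketch relying on the defaults for β, Σ, λ, A, and πₖ:

```julia
# Hypothetical example: 2 hidden states, 3 input features, and 1 output dimension.
model = SwitchingGaussianRegression(K=2, input_dim=3, output_dim=1)
```
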
StateSpaceDynamics.block_tridgmMethod
block_tridgm(main_diag::Vector{Matrix{T}}, upper_diag::Vector{Matrix{T}}, lower_diag::Vector{Matrix{T}}) where {T<:Real}

Construct a block tridiagonal matrix from three vectors of matrices.

Arguments

  • main_diag::Vector{Matrix{T}}: Vector of matrices for the main diagonal.
  • upper_diag::Vector{Matrix{T}}: Vector of matrices for the upper diagonal.
  • lower_diag::Vector{Matrix{T}}: Vector of matrices for the lower diagonal.

Returns

  • A sparse matrix representing the block tridiagonal matrix.

Throws

  • ErrorException if the lengths of upper_diag and lower_diag are not one less than the length of main_diag.
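
A usage sketch with illustrative block sizes, respecting the length requirement above (off-diagonals one block shorter than the main diagonal):

```julia
# Hypothetical example: three 2×2 main-diagonal blocks and two 2×2 off-diagonal
# blocks assemble into a 6×6 sparse block tridiagonal matrix.
main  = [randn(2, 2) for _ in 1:3]
upper = [randn(2, 2) for _ in 1:2]
lower = [randn(2, 2) for _ in 1:2]
M = block_tridgm(main, upper, lower)
```
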
StateSpaceDynamics.block_tridiagonal_inverseMethod
block_tridiagonal_inverse(A, B, C)

Compute the inverse of a block tridiagonal matrix.

Arguments

  • A: Lower diagonal blocks.
  • B: Main diagonal blocks.
  • C: Upper diagonal blocks.

Returns

  • λii: Diagonal blocks of the inverse.
  • λij: Off-diagonal blocks of the inverse.

Notes: This implementation is from the paper:

"An Accelerated Lambda Iteration Method for Multilevel Radiative Transfer” Rybicki, G.B., and Hummer, D.G., Astronomy and Astrophysics, 245, 171–181 (1991), Appendix B.

StateSpaceDynamics.euclidean_distanceMethod
euclidean_distance(a::AbstractVector{Float64}, b::AbstractVector{Float64})

Calculate the Euclidean distance between two points.

Arguments

  • a::AbstractVector{Float64}: The first point.
  • b::AbstractVector{Float64}: The second point.

Returns

  • The Euclidean distance between a and b.
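
A quick check on a 3-4-5 right triangle:

```julia
euclidean_distance([0.0, 0.0], [3.0, 4.0])   # 5.0
```
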
StateSpaceDynamics.gaussian_entropyMethod
gaussian_entropy(H::Symmetric{T}) where T <: Real

Calculate the entropy of a Gaussian distribution with Hessian (i.e. negative precision) matrix H.

Arguments

  • H::Symmetric{T}: The Hessian matrix.

Returns

  • The entropy of the Gaussian distribution.
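
For reference, with H the negative precision of a d-dimensional Gaussian (so Σ = (−H)⁻¹), the differential entropy is ½ (d log(2πe) − logdet(−H)); the sketch below evaluates that identity for a standard 2-D Gaussian (the package's return value may differ by constant terms):

```julia
using LinearAlgebra

H = Symmetric(-Matrix{Float64}(I, 2, 2))   # standard 2-D Gaussian: Σ = I, so H = -I
d = size(H, 1)
0.5 * (d * log(2π * ℯ) - logdet(-H))       # ≈ 2.8379, the entropy of N(0, I₂)
```
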
StateSpaceDynamics.kmeans_clusteringFunction
kmeans_clustering(data::Matrix{<:Real}, k_means::Int, max_iters::Int=100, tol::Float64=1e-6)

Perform K-means clustering on the input data.

Arguments

  • data::Matrix{<:Real}: The input data matrix where each row is a data point.
  • k_means::Int: The number of clusters.
  • max_iters::Int=100: Maximum number of iterations.
  • tol::Float64=1e-6: Convergence tolerance.

Returns

  • A tuple containing the final centroids and cluster labels for each data point.
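
A usage sketch, assuming the returned tuple is ordered as (centroids, labels) per the description above:

```julia
# Hypothetical example: cluster 100 two-dimensional points into 3 groups.
data = randn(100, 2)                        # each row is a data point
centroids, labels = kmeans_clustering(data, 3)
```
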
StateSpaceDynamics.kmeans_clusteringFunction
kmeans_clustering(data::Vector{Float64}, k_means::Int, max_iters::Int=100, tol::Float64=1e-6)

Perform K-means clustering on vector data.

Arguments

  • data::Vector{Float64}: The input data vector.
  • k_means::Int: The number of clusters.
  • max_iters::Int=100: Maximum number of iterations.
  • tol::Float64=1e-6: Convergence tolerance.

Returns

  • A tuple containing the final centroids and cluster labels for each data point.
StateSpaceDynamics.kmeanspp_initializationMethod
kmeanspp_initialization(data::Matrix{<:Real}, k_means::Int)

Perform K-means++ initialization for cluster centroids.

Arguments

  • data::Matrix{<:Real}: The input data matrix where each row is a data point.
  • k_means::Int: The number of clusters.

Returns

  • A matrix of initial centroids for K-means clustering.
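
A usage sketch for seeding K-means with K-means++ centroids:

```julia
# Hypothetical example: choose 3 initial centroids from a 100 × 2 data matrix.
init_centroids = kmeanspp_initialization(randn(100, 2), 3)
```
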
StateSpaceDynamics.kmeanspp_initializationMethod
kmeanspp_initialization(data::Vector{Float64}, k_means::Int)

Perform K-means++ initialization for cluster centroids on vector data.

Arguments

  • data::Vector{Float64}: The input data vector.
  • k_means::Int: The number of clusters.

Returns

  • A matrix of initial centroids for K-means clustering.
StateSpaceDynamics.logisticMethod
logistic(x::Real)

Calculate the logistic function in a numerically stable way.

Arguments

  • x::Real: The input value.

Returns

  • The result of the logistic function applied to x.
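
One common numerically stable formulation branches on the sign of x so that exp never receives a large positive argument; this is only a sketch and not necessarily the package's exact code:

```julia
# Sketch of a numerically stable logistic (illustrative, not the package's source).
stable_logistic(x::Real) = x ≥ 0 ? 1 / (1 + exp(-x)) : exp(x) / (1 + exp(x))

stable_logistic(0.0)      # 0.5
stable_logistic(-1000.0)  # 0.0, computed without overflow
```
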
StateSpaceDynamics.make_posdef!Method
make_posdef!(A::Matrix{T}) where {T}

Ensure that a matrix is positive definite by adjusting its eigenvalues.

Arguments

  • A::Matrix{T}: The input matrix.

Returns

  • A positive definite matrix derived from A.
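
One common way to perform such an adjustment is to clamp the eigenvalues to a small positive floor; the non-mutating sketch below illustrates the idea (the package's in-place version may differ in details):

```julia
using LinearAlgebra

# Illustrative (non-mutating) eigenvalue clamping; not the package's exact code.
function clamp_to_posdef(A::Matrix{Float64}; floor=1e-8)
    vals, vecs = eigen(Symmetric(A))
    return vecs * Diagonal(max.(vals, floor)) * vecs'
end
```
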
StateSpaceDynamics.row_matrixMethod
row_matrix(x::AbstractVector)

Convert a vector to a row matrix.

Arguments

  • x::AbstractVector: The input vector.

Returns

  • A row matrix (1 × n) containing the elements of x.
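
For example:

```julia
row_matrix([1.0, 2.0, 3.0])   # a 1×3 row matrix containing 1.0, 2.0, 3.0
```
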
StateSpaceDynamics.stabilize_covariance_matrixMethod
stabilize_covariance_matrix(Σ::Matrix{<:Real})

Stabilize a covariance matrix by ensuring it is symmetric and positive definite.

Arguments

  • Σ::Matrix{<:Real}: The input covariance matrix.

Returns

  • A stabilized version of the input covariance matrix.
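
A usage sketch on a covariance estimate that has drifted slightly from exact symmetry due to floating-point error:

```julia
# Hypothetical example: repair a numerically asymmetric covariance estimate.
Σ = [1.0 0.5000000001; 0.4999999999 1.0]
Σ_stable = stabilize_covariance_matrix(Σ)
```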