
Introduction
This page provides an overview of the Generalized Quantum Episodic Memory (GQEM) model and its implementation in the Julia package QuantumEpisodicMemoryModels. The GQEM is a quantum model of recognition memory which accounts for phenomena that pose challenges to many alternative models. These phenomena include subadditivity (a.k.a. overdispersion) and violations of the law of total probability (LOTP). In what follows, we will introduce a recognition memory task used to study subadditivity and violations of the LOTP, describe the mechanics of the GQEM model, and illustrate some basic functionality provided by this package.
Task
In the recognition memory task, participants study a list of items (e.g., pictures or words) during the learning phase. Subsequently, in the test phase, participants distinguish between previously studied items and two types of new items. The three item types are:
- $O$: an old item defined as an item in the study list
- $R$: a new item defined as an item that is related to an item in the study list
- $U$: a new item that is not related to any items in the study list
Participants complete the test phase under one of four between-subjects conditions:
- $V$: respond yes to old items (verbatim)
- $G$: respond yes to new, related items (gist)
- $V \cup G$: respond yes to old items or new, related items (verbatim + gist)
- $U$: respond yes to new, unrelated items (unrelated)
Subadditivity
Classical probability theory requires mutually exclusive and exhaustive events to sum to 1. In the recognition memory task above, subjects are instructed to respond yes to items in three mutually exclusive and exhaustive categories: gist (G), verbatim (V), and new unrelated (U). Thus, for a given item type $i \in \{O,R,U \}$, the judgments summed across the three conditions should be:
\[\Pr(Y_G = 1 \mid i) + \Pr(Y_V = 1 \mid i) + \Pr(Y_U = 1 \mid i) = 1\]
Subadditivity, which occurs when the sum exceeds 1, is frequently observed in recognition memory decisions.
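As a minimal numerical sketch (the proportions below are hypothetical, not output from the package), the check amounts to summing the yes rates across the verbatim, gist, and unrelated conditions for a single item type:

# Hypothetical yes-response proportions for one item type (e.g., related items),
# one per instruction condition: verbatim, gist, and unrelated new.
p_V, p_G, p_U = 0.45, 0.58, 0.20

# Classical probability requires the sum to equal 1;
# a sum greater than 1 indicates subadditivity.
total = p_V + p_G + p_U    # 1.23
total > 1                  # true, indicating subadditivity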
Model Description
The goal of this section is to introduce the mechanics and concepts underlying the GQEM. We begin by loading the QuantumEpisodicMemory package along with packages for plotting and LaTeX support for mathematical notation.
using LaTeXStrings
using QuantumEpisodicMemory
using Plots
using Random
Random.seed!(407)

Quantum cognition distinguishes between two types of representations: compatible and incompatible. Compatible representations can be evaluated simultaneously within the same basis. For example, if you can simultaneously think and reason about your political beliefs and those of your friend, you are using a compatible representation. The joint probability distribution is represented with a common basis. However, if you cannot represent the beliefs simultaneously, the beliefs are incompatible. As a consequence, they must be evaluated sequentially using a different basis for each. The bases are defined in the same representational space, but are related to each other through a rotation. Conceptually, this is analogous to shifting one's perspective to reason about another's political beliefs.
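To make the notion of incompatibility concrete in linear-algebra terms, the sketch below builds projectors onto two basis vectors related by a rotation and checks whether they commute. This is a plain-Julia illustration rather than package functionality, and the angle is a placeholder:

using LinearAlgebra

# A 2D rotation matrix (counterclockwise rotation by θ radians).
rotate(θ) = [cos(θ) -sin(θ); sin(θ) cos(θ)]

# Verbatim basis vector and a gist basis vector obtained by rotating it
# (placeholder angle chosen for illustration).
V = [1.0, 0.0]
G = rotate(-0.5) * V

# Projectors onto each basis vector.
P_V = V * V'   # |V⟩⟨V|
P_G = G * G'   # |G⟩⟨G|

# For incompatible representations, the order of evaluation matters:
# the projectors do not commute.
P_V * P_G ≈ P_G * P_V   # false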
Bases
The GQEM model assumes that the gist $(G)$, verbatim $(V)$, and unrelated $(U)$ features of an item are incompatible. For this reason, the features are represented in $\mathbb{R}^2$ with respect to their own bases:
- Verbatim basis: $\boldsymbol{\chi}_V = \{ \ket{V} = [1,0]^{\top}, \ket{V}^{\perp} = [0,1]^{\top} \}$
- Gist basis: $\boldsymbol{\chi}_G = \{ \ket{G}, \ket{G}^{\perp} \}$
- New Unrelated basis: $\boldsymbol{\chi}_U = \{ \ket{U}, \ket{U}^{\perp} \}$
Note that all quantities discussed below are defined relative to the verbatim basis $\boldsymbol{\chi}_V$, which is arbitrarily anchored to the standard position.
State Vectors
Upon viewing an old, new related, or new unrelated item, a person enters a superposition state defined by the corresponding state vector:
- Old state vector: $\ket{\psi_O}$
- New related state vector: $\ket{\psi_R}$
- New unrelated state vector: $\ket{\psi_U}$
Parameters
The GQEM consists of 5 parameters, which describe the angles between the standard basis $\boldsymbol{\chi}_V = \{ \ket{V} = [1,0]^{\top}, \ket{V}^{\perp} = [0,1]^{\top} \}$ and the other two bases and the three state vectors. The parameters are defined as follows:
- $\theta_G$: angle between basis $\boldsymbol{\chi}_V$ and $\boldsymbol{\chi}_G$ in radians
- $\theta_U$: angle between basis $\boldsymbol{\chi}_V$ and $\boldsymbol{\chi}_U$ in radians
- $\theta_{\psi_O}$: angle between basis $\boldsymbol{\chi}_V$ and state vector $\ket{\psi_O}$ in radians
- $\theta_{\psi_R}$: angle between basis $\boldsymbol{\chi}_V$ and state vector $\ket{\psi_R}$ in radians
- $\theta_{\psi_U}$: angle between basis $\boldsymbol{\chi}_V$ and state vector $\ket{\psi_U}$ in radians
Response Probabilities
The purpose of this section is to provide a geometric illustration of computing response probabilities with the GQEM model. In the code block below, we begin by setting the value of each parameter and creating a model object.
θG = -.5
θU = 2
θψO = .90
θψR = .20
θψU = -1.5

Next, we pass the parameters to the GQEM constructor as keyword arguments (order does not matter).
dist = GQEM(; θG, θU, θψO, θψR, θψU)

GQEM
┌───────────┬───────┐
│ Parameter │ Value │
├───────────┼───────┤
│ θG │ -0.5 │
│ θU │ 2.0 │
│ θψO │ 0.9 │
│ θψR │ 0.2 │
│ θψU │ -1.5 │
└───────────┴───────┘
The figure below illustrates geometrically how response probabilities are generated from the GQEM model. In this example, we assume that a person was placed in the gist condition and is in the superposition state for new related items, $\ket{\psi_R}$. The probability of responding yes is found by projecting $\ket{\psi_R}$ onto the basis vector $\ket{G}$. In the figure below, the red vector represents the superposition state $\ket{\psi_R}$, the green vector represents the projection of $\ket{\psi_R}$ onto $\ket{G}$, and the dashed black line is perpendicular to the projection.
plot(dist, θψR, θG; state_label = L"\psi_R")
The superposition state for related items is obtained by rotating the verbatim basis vector.
\[\ket{\psi_R} = \mathbb{U}(\theta_{\psi_R}) \ket{V}\]
Similarly, the basis state for gist instructions is obtained by rotating the verbatim basis vector.
\[\ket{G} = \mathbb{U}(\theta_{G}) \ket{V}\]
The projector matrix for basis vector $\ket{G}$ is defined as:
\[\mathbf{P} = \ket{G} \bra{G}\]
The probability of responding yes given a related word is defined as the squared magnitude of the projection of $\ket{\psi_R}$ onto $\ket{G}$:
\[\Pr(Y_G = 1 \mid R) = \lVert \mathbf{P} \ket{\psi_R} \rVert^2\]
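These steps can be reproduced with a few lines of plain linear algebra. The sketch below illustrates the equations using the parameter values set above and the standard counterclockwise rotation convention; it is not the package's internal implementation:

using LinearAlgebra

# A 2D rotation matrix (counterclockwise rotation by θ radians).
rotate(θ) = [cos(θ) -sin(θ); sin(θ) cos(θ)]

# Verbatim basis vector and the parameter values set above.
V = [1.0, 0.0]
θG, θψR = -0.5, 0.20

# Rotate |V⟩ to obtain the related-item state vector and the gist basis vector.
ψR = rotate(θψR) * V
G = rotate(θG) * V

# Projector onto |G⟩ and the squared magnitude of the projection.
P = G * G'                  # |G⟩⟨G|
prob_yes = norm(P * ψR)^2   # ≈ 0.585

In two dimensions this reduces to $\Pr(Y_G = 1 \mid R) = \cos^2(\theta_{\psi_R} - \theta_G) \approx 0.585$, which matches the gist/related entry of the prediction table shown in the Model Usage section below.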
Model Usage
Predictions
Response Probabilities
The predicted response probabilities are computed via the function compute_preds as shown below. The predictions can be piped to the function to_table to provide row and column names.
preds = compute_preds(dist) |> to_table

4×3 Named Matrix{Float64}
condition ╲ word type │ old related unrelated
──────────────────────┼───────────────────────────────────
gist │ 0.0288888 0.584984 0.291927
verbatim │ 0.386399 0.96053 0.00500375
gist+verbatim │ 0.252098 0.680375 0.454676
unrelated new │ 0.205749 0.0516208 0.876951

Subadditivity
As shown below, the model predicts subadditivity for related and unrelated new items, but not for old items.
sum(preds[["gist", "verbatim", "unrelated new"],:], dims = 1)

1×3 Named Matrix{Float64}
condition ╲ word type │ old related unrelated
──────────────────────┼────────────────────────────────
sum(condition) │ 0.621037 1.59713 1.17388

Generate Data
The code block below demonstrates how to generate yes-response counts based on 100 simulated trials for each combination of condition and item type.
n_trials = 100
data = rand(dist, n_trials)

4×3 Matrix{Int64}:
2 62 29
42 98 0
23 77 43
23 6 88

As before, we can display names for rows and columns to aid in the interpretation of the data.
to_table(data)

4×3 Named Matrix{Int64}
condition ╲ word type │ old related unrelated
──────────────────────┼────────────────────────────────
gist │ 2 62 29
verbatim │ 42 98 0
gist+verbatim │ 23 77 43
unrelated new │ 23 6 88

Log Likelihood
Finally, the code block below shows how to compute the log likelihood of the data using the function logpdf.
logpdf(dist, n_trials, data)

4×3 Matrix{Float64}:
-1.45426 -2.75625 -2.43545
-2.75376 -1.90373 -0.501631
-2.49201 -4.31977 -2.64426
-2.53489 -1.86612 -2.10887
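If the cells of this matrix are treated as independent (one yes count per combination of condition and item type), the joint log likelihood of the full data set is their sum. A minimal follow-up under that assumption:

# Joint log likelihood across all conditions and item types,
# assuming independence between cells.
total_LL = sum(logpdf(dist, n_trials, data))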
References

Trueblood, J. S., & Hemmer, P. (2017). The generalized quantum episodic memory model. Cognitive Science, 41(8), 2089-2125.