Hide code cell content
import mmf_setup; mmf_setup.nbinit()
import os
from pathlib import Path
FIG_DIR = Path(mmf_setup.ROOT) / '../Docs/_build/figures/'
os.makedirs(FIG_DIR, exist_ok=True)
import logging; logging.getLogger("matplotlib").setLevel(logging.CRITICAL)
%matplotlib inline
import numpy as np, matplotlib.pyplot as plt
try: from myst_nb import glue
except ImportError: glue = None
import manim.utils.ipython_magic
!manim --version


Hamiltonian Mechanics#

Recall that with Lagrangian mechanics, one recovers Newton’s laws as a principle of extremal action:

\[\begin{gather*} S[\vect{q}] = \int_{t_0}^{t_1}\!\!\d{t}\; L(\vect{q}, \dot{\vect{q}}, t), \qquad \delta S = 0, \qquad \diff{}{t}\pdiff{L}{\dot{\vect{q}}} = \pdiff{L}{\vect{q}}. \end{gather*}\]

The quantities \(p_i = \partial L/\partial\dot{q}_i\) are called the conjugate momenta. Expressed in this form, we have:

\[\begin{gather*} \vect{p} = \pdiff{L}{\dot{\vect{q}}}, \qquad \diff{\vect{p}}{t} = \pdiff{L}{\vect{q}}. \end{gather*}\]

The idea of Hamiltonian mechanics is to effect a Legendre transformation, replacing the coordinates \(\dot{\vect{q}}\) with the conjugate momenta \(\vect{p}\):

\[\begin{gather*} \vect{p} = \pdiff{L}{\dot{\vect{q}}}, \qquad H(\vect{p}, \vect{q}, t) = \underbrace{\vect{p}\cdot\dot{\vect{q}}}_{ \mathclap{\vect{p}\cdot\dot{\vect{q}}(\vect{q}, \vect{p}, t)} } - \overbrace{L}^{\mathclap{ L\bigl(\vect{q}, \dot{\vect{q}}(\vect{q}, \vect{p}, t), t\bigr) }}. \end{gather*}\]

Now we have Hamilton’s equations of motion:

\[\begin{gather*} \dot{\vect{q}} = \pdiff{H}{\vect{p}}, \qquad \dot{\vect{p}} = -\pdiff{H}{\vect{q}}. \end{gather*}\]

Phase Flow#

Although one can study dynamics in configuration space by plotting the velocity versus the position \((\vect{q}, \dot{\vect{q}})\), there are advantages to considering dynamics in phase space \((\vect{q}, \vect{p})\), replacing the velocity with the generalized momentum. Specifically, Hamiltonian dynamics preserves area in phase space, which is not necessarily true if one uses the velocity.

Hamilton’s equations define phase flow. Let \(\vect{y}(0) = (\vect{q}_0, \vect{p}_0)\) be a point in phase space. The phase flow is defined by the trajectory \(\vect{y}(t)\) through phase space which satisfies Hamilton’s equations:

\[\begin{gather*} \diff{}{t}\vect{y}(t) = \begin{pmatrix} \mat{0} & \mat{1}\\ -\mat{1} & \mat{0} \end{pmatrix} \cdot \pdiff{H(\vect{y})}{\vect{y}}. \end{gather*}\]

Example#

Here we implement a simple example of a pendulum of mass \(m\) hanging down distance \(r\) from a pivot point in a gravitational field \(g>0\) with coordinate \(\theta\) so that the mass is at \((x, z) = (r\sin\theta, -r\cos\theta)\) and \(\theta=0\) is the downward equilibrium position:

\[\begin{gather*} L(\theta, \dot{\theta}, t) = \frac{m}{2}r^2\dot{\theta}^2 + mgr\cos\theta\\ p_{\theta} = \pdiff{L}{\dot{\theta}} = mr^2 \dot{\theta}, \qquad \dot{\theta}(\theta, p_{\theta}, t) = \frac{p_{\theta}}{mr^2},\\ H(\theta, p_{\theta}) = p_{\theta}\dot{\theta} - L = \frac{p_{\theta}^2}{2mr^2} - mgr\cos\theta,\\ \vect{y} = \begin{pmatrix} \theta\\ p_{\theta} \end{pmatrix},\qquad \dot{\vect{y}} = \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \cdot \begin{pmatrix} \pdiff{H}{\theta}\\ \pdiff{H}{p_{\theta}} \end{pmatrix} = \begin{pmatrix} p_{\theta}/mr^2\\ -mgr\sin\theta \end{pmatrix}. \end{gather*}\]
Hide code cell source
plt.rcParams['figure.dpi'] = 300
from scipy.integrate import solve_ivp

alpha = 0.0
m = r = g = 1.0
w = np.sqrt(g / r)   # Angular frequency of small oscillations.
T = 2*np.pi / w   # Period of small oscillations.


def f(t, y):
    # We will simultaneously evolve N points.
    N = len(y)//2
    theta, p_theta = np.reshape(y, (2, N))
    dy = np.array([p_theta/m/r**2*np.exp(-alpha*t),
                   -m*g*r*np.sin(theta)*np.exp(alpha*t)])
    return dy.ravel()


# Start with a circle of points in phase space centered here
def plot_set(y, dy=0.1, T=T, phase_space=True, N=10, Nt=5, c='C0', 
             Ncirc=1000, max_step=0.01, _alpha=0.7, 
             fig=None, ax=None):
    """Plot the phase flow of a circle centered about y0.
    
    Arguments
    ---------
    y : (theta0, ptheta_0)
        Center of initial circle.
    dy : float
        Radius of initial circle.
    T : float
        Time to evolve to.
    phase_space : bool
        If `True`, plot the "physical" momentum $P_{\theta} = p_{\theta}e^{-\alpha t}$
        on the vertical axis; if `False`, plot the canonical phase space
        $(q, p)$.
    N : int
        Number of points to show on circle and along path.
    Nt : int
        Number of images along trajectory to show.
    c : color
        Color.
    _alpha : float
        Transparency of regions.
    max_step : float
        Maximum spacing for times dt.
    Ncirc : int
        Minimum number of points in circle.
    """
    global alpha
    skip = int(np.ceil(Ncirc / N))
    N_ = N * skip
    th = np.linspace(0, 2*np.pi, N_ + 1)[:-1]
    z = dy * np.exp(1j*th) + np.asarray(y, dtype=float).view(dtype=complex)
    y0 = np.ravel([z.real, z.imag])

    skip_t = int(np.ceil(T / max_step / Nt))
    Nt_ = Nt * skip_t + 1
    t_eval = np.linspace(0, T, Nt_)
    res = solve_ivp(f, t_span=(0, T), y0=y0, t_eval=t_eval)
    assert Nt_ == len(res.t)
    thetas, p_thetas = res.y.reshape(2, N_, Nt_)
    ylabel = r"$p_{\theta}$"
    if phase_space:
        ylabel = r"$P_{\theta}=p_{\theta}e^{-\alpha t}$"
        p_thetas *= np.exp(-alpha * res.t)
    
    if ax is None:
        fig, ax = plt.subplots()

    ax.plot(thetas[::skip].T, p_thetas[::skip].T, "-k", lw=0.1)
    for n in range(Nt+1):
        tind = n*skip_t
        ax.plot(thetas[::skip, tind], p_thetas[::skip, tind], '.', ms=0.5, c=c)
        ax.fill(thetas[:, tind], p_thetas[:, tind], c=c, alpha=_alpha)
    ax.set(xlabel=r"$\theta$", ylabel=ylabel, aspect=1)
    return fig, ax

# Version without damping for now
alpha = 0.0
xlim = (-3, 14)
ylim = (-2, 3)

fig, ax = plt.subplots(figsize=(10, 5))

ys = np.linspace(0.25, 3.5, 13)

for n, y in enumerate(ys):
    plot_set(y=(y, y), c=f"C{n}", phase_space=False, ax=ax)

ax.set(xlim=xlim, ylim=ylim)
plt.tight_layout()
fig.savefig(FIG_DIR / "phase_space_pendulum_no_damping.svg")
display(fig)
plt.close(fig)

# Version with damping for use elsewhere.
xlim = (-1.5, 14)
fig, ax = plt.subplots(figsize=(10, 10 * abs(np.diff(ylim)/np.diff(xlim))[0]))

alpha = 0.3

for n, y in enumerate(ys):
    plot_set(y=(y, y), c=f"C{n}", ax=ax, T=1.3*T, phase_space=True)
ax.set(ylabel=r"$p_{\theta}$", xlim=xlim, ylim=ylim, aspect='auto')

plt.tight_layout()
fig.savefig(FIG_DIR / "phase_space_pendulum_damping.svg")
plt.close(fig)
[Figure: phase flow of the undamped pendulum in phase space \((\theta, p_\theta)\); circles of initial conditions shear along the trajectories while preserving their enclosed area.]

Liouville’s Theorem#

An interesting property of phase flow is that it conserves areas in phase space. To see this, recall that a mapping \(\vect{y} \mapsto \vect{f}(\vect{y})\) changes areas by a factor of the determinant of the Jacobian of the transformation:

\[\begin{gather*} \det \mat{J} = \begin{vmatrix} \pdiff{f_0}{y_0} & \pdiff{f_0}{y_1} & \cdots\\ \pdiff{f_1}{y_0} & \pdiff{f_1}{y_1} & \cdots\\ \vdots & \vdots \end{vmatrix}. \end{gather*}\]

Liouville’s theorem states that if the divergence of \(\vect{f}(\vect{y})\) vanishes, \(\vect{\nabla}_{\vect{y}}\cdot \vect{f}(\vect{y}) = 0\), then the flow generated by \(\dot{\vect{y}} = \vect{f}(\vect{y})\) preserves volumes. This follows since the infinitesimal map \(\vect{y} \mapsto \vect{y} + \vect{f}(\vect{y})\d{t}\) has \(\det\mat{J} = 1 + \vect{\nabla}_{\vect{y}}\cdot\vect{f}\,\d{t} + \order(\d{t}^2)\). Hamilton’s equations satisfy this condition automatically: the divergence is \(\partial^2 H/\partial \vect{q}\,\partial \vect{p} - \partial^2 H/\partial \vect{p}\,\partial \vect{q}\), and the mixed partial derivatives cancel.
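We can check this numerically for the pendulum. The following is a minimal, self-contained sketch (with \(m = r = g = 1\)): evolve a circle of initial conditions under the phase flow and monitor the area of the resulting polygon with the shoelace formula. The area stays constant up to discretization and integration error.

# Numerical check of Liouville's theorem for the undamped pendulum (m = r = g = 1):
# evolve a loop of initial conditions and monitor the enclosed area.
import numpy as np
from scipy.integrate import solve_ivp

def f_pendulum(t, y):
    theta, p_theta = np.reshape(y, (2, -1))
    return np.ravel([p_theta, -np.sin(theta)])

def shoelace_area(theta, p):
    """Area enclosed by the closed polygon with vertices (theta, p)."""
    return 0.5 * abs(np.sum(theta * np.roll(p, -1) - np.roll(theta, -1) * p))

th = np.linspace(0, 2*np.pi, 200, endpoint=False)
y0 = np.ravel([1.0 + 0.1*np.cos(th), 1.0 + 0.1*np.sin(th)])  # circle of area pi/100
ts = np.linspace(0, 10.0, 21)
res = solve_ivp(f_pendulum, t_span=(0, 10.0), y0=y0, t_eval=ts,
                rtol=1e-10, atol=1e-12)
areas = [shoelace_area(*res.y[:, n].reshape(2, -1)) for n in range(len(ts))]
print(min(areas), max(areas))  # both ≈ pi/100 ≈ 0.0314: the area is conserved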

A Hamiltonian with Damping#

A common misconception is that damping or dissipation cannot be included in a Hamiltonian or Lagrangian framework. Consider the following:

\[\begin{gather*} H(q, p) = \frac{p^2}{2m}e^{-\alpha t} + V(q)e^{\alpha t}\\ \dot{p} = - V'(q) e^{\alpha t}, \qquad \dot{q} = \frac{p}{m}e^{-\alpha t}, \qquad p = m\dot{q} e^{\alpha t},\\ \begin{aligned} m\ddot{q} &= \dot{p}e^{-\alpha t} - m\alpha \dot{q}\\ &= - V'(q) e^{\alpha t}e^{-\alpha t} - m\alpha \dot{q}\\ &= - V'(q) - m\alpha \dot{q}.\end{aligned} \end{gather*}\]

This is just Newtonian dynamics for a particle of mass \(m\) in a potential \(V(q)\) with a drag term \(-m\alpha v\) such as might be seen for a particle falling through the atmosphere.

It is instructive to consider this in the Lagrangian framework:

\[\begin{gather*} L(q, \dot{q}, t) = p\dot{q} - H = \overbrace{\left(\frac{m \dot{q}^2}{2} - V(q)\right)}^{L_{0}(q, \dot{q})}e^{\alpha t}. \end{gather*}\]

Here we see that the factor \(\lambda(t) = e^{\alpha t}\) simply multiplies the Lagrangian. In the action, this can be interpreted as a rescaling of time \(\d{\tau} = e^{\alpha t}\d{t}\):

\[\begin{gather*} S[q] = \int L_0(q, \dot{q})\; \underbrace{e^{\alpha t} \d{t}}_{\d{\tau}}. \end{gather*}\]

This presents an analogy with cosmology, where a universe decelerating as it expands experiences an effective cooling from the dissipative term introduced by this rescaling of time.

We now demonstrate Liouville’s theorem. Note that this applies only to the canonically conjugate variables \((q, p)\) that enter the Hamiltonian (top plot below), but not to the conventional coordinates \((q, m\dot{q})\), for which the damping results in contraction.

The key is that the canonical momentum is not \(m\dot{q}\), but instead

\[\begin{gather*} p = \pdiff{L}{\dot{q}} = e^{\alpha t} m \dot{q}. \end{gather*}\]

The extra time-dependent factor \(e^{\alpha t}\) exactly counters this contraction to ensure Liouville’s theorem. However, now the phase space dynamics are not independent of time.

Hide code cell source
fig, axs = plt.subplots(
    2, 1, figsize=(20, 12), sharex=True, 
    gridspec_kw=dict(height_ratios=(3.5, 1)))

# Saved version with damping.
fig0, ax0 = plt.subplots(figsize=(10,5))

alpha = 0.3

for n, y in enumerate(np.linspace(0.25, 1.75, 6)):
    plot_set(y=(y, y), c=f"C{n}", ax=axs[0], T=1.3*T, phase_space=False)
    for _ax in [ax0, axs[1]]:
        plot_set(y=(y, y), c=f"C{n}", ax=_ax, T=1.3*T)
    
axs[0].set(ylim=(-6, 7))
axs[0].set(title=fr"$\alpha = {alpha}$, $T=1.3\times 2\pi \sqrt{{r/g}}$");

#fig0.savefig(FIG_DIR / "phase_space_pendulum_damping.svg")
plt.close(fig0)
[Figure: damped pendulum flow. Top: canonical variables \((\theta, p_\theta)\), where areas are preserved. Bottom: physical variables \((\theta, P_\theta = p_\theta e^{-\alpha t})\), where areas contract.]

Dispersion and Relativity#

A useful application of the Hamiltonian framework arises when you know the dispersion relation \(E(p)\) – the kinetic energy as a function of the momentum. Simply apply Hamilton’s equations of motion:

\[\begin{gather*} H(q, p) = E(p) + V(q), \\ \dot{q} = \pdiff{H}{p} = E'(p), \\ \dot{p} = -\pdiff{H}{q} = -V'(q) = F(q). \end{gather*}\]

Group Velocity#

If you have studied waves, you may recognize the second equation \(\dot{q} = E'(p)\) as the group velocity of a wave, which is the derivative of the dispersion relationship. The last equation \(\dot{p} = F(q) = -V'(q)\) is just Newton’s law.

Effective Mass#

We can get the more conventional form of Newton’s law by taking the time derivative of the second equation and using the chain rule:

\[\begin{gather*} \ddot{q} = \diff{}{t}E'(p) = E''(p)\dot{p} = \overbrace{E''(p)}^{m^{-1}}F(q). \end{gather*}\]

This shows us that the effective mass is the inverse of the second derivative of the dispersion: \(m = 1/E''(p)\). I.e., the curvature of the dispersion defines the mass.
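For example, here is a quick symbolic check (a sympy sketch of our own): the Newtonian dispersion gives \(m^* = m\) for all \(p\), while the relativistic dispersion used below gives \(m^* = \gamma^3 m\), reducing to \(m\) at \(p = 0\).

# Effective mass m* = 1/E''(p) for two dispersions (sympy sketch).
import sympy as sp

p, m, c = sp.symbols('p m c', positive=True)

E_newton = p**2 / (2 * m)
print(1 / sp.diff(E_newton, p, 2))  # m: the effective mass is p-independent

E_rel = sp.sqrt(p**2 * c**2 + m**2 * c**4)
m_eff = sp.simplify(1 / sp.diff(E_rel, p, 2))  # = E^3/(m^2 c^6) = gamma^3 m
print(sp.simplify(m_eff.subs(p, 0)))           # m: the rest mass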

Constant Force#

There is a peculiar special case which we can solve easily: that of a particle under a constant force \(F\):

\[\begin{gather*} V'(q) = -F, \qquad V(q) = -Fq. \end{gather*}\]

The solution has two arbitrary constants: the initial momentum \(p_0\) and a position offset \(a\). Since \(\dot{p} = F\) gives \(p = p_0 + Ft\), integrating \(\dot{q} = E'(p_0 + Ft)\) yields

\[\begin{gather*} q(t) = \frac{E(p_0 + Ft)}{F} + a. \end{gather*}\]

Here the trajectory of the particle has the same shape as the dispersion.

Special Relativity#

For a specific example, recall that in special relativity, the energy of a particle of rest mass \(m\) and momentum \(p\) is

\[\begin{gather*} E(p) = \sqrt{p^2c^2 + m^2c^4} \end{gather*}\]

where \(c\) is the speed of light, which is the same for all inertial observers.

Rindler Coordinates#

Combining the relativistic dispersion and constant force result gives us the relativistic motion of a constantly accelerating object. Choosing the constants so that we start at time \(t=0\) at rest \(v_0 = p_0 = 0\) and at position \(q_0\), we have

\[\begin{gather*} q(t) = \sqrt{t^2c^2 + h^2} + (q_0 - h), \qquad h = \frac{mc^2}{F}, \end{gather*}\]

where \(h\) is the natural length scale. Choosing units so that \(h = c = 1\):

\[\begin{gather*} q(t) = q_0 + \underbrace{\sqrt{1 + t^2}}_{\gamma} - 1. \end{gather*}\]

With some work, this allows one to consider physics from the perspective of a constantly accelerating observer using Rindler coordinates: the relativistic equivalent of a constant gravitational field. Even though this is purely special relativity, the Rindler frame has some very interesting properties, including the existence of an event horizon a distance \(h\) below the observer. If you drop a watch, it will approach this horizon, slowing both the rate at which it falls and the rate at which its clock runs, never actually falling across the horizon: just like a black hole.
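This is easy to verify numerically. A minimal sketch with \(m = c = F = 1\) (so \(h = 1\)): integrating Hamilton's equations \(\dot{q} = E'(p)\), \(\dot{p} = F\) from rest reproduces \(q(t) = q_0 + \sqrt{1 + t^2} - 1\).

# Numerical check of the Rindler trajectory (units with m = c = F = 1, h = 1).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    q, p = y
    return [p / np.sqrt(p**2 + 1),  # dq/dt = E'(p): the group velocity, < c = 1
            1.0]                    # dp/dt = F

q0 = 0.0
ts = np.linspace(0, 10, 101)
res = solve_ivp(rhs, (0, 10), [q0, 0.0], t_eval=ts, rtol=1e-10, atol=1e-12)
exact = q0 + np.sqrt(1 + ts**2) - 1
print(np.max(np.abs(res.y[0] - exact)))  # ~1e-8: matches the analytic result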

Relativistic Lagrangian#

One might like to try to figure out what Lagrangian gives rise to an equation with dispersion \(E(p)\) by effecting the Legendre transformation:

\[\begin{gather*} L(q, \dot{q}, t) = \dot{q}p - H(q, p, t). \end{gather*}\]

We must now solve for \(p(\dot{q})\) by inverting \(\dot{q} = E'(p)\):

\[\begin{gather*} L(q, \dot{q}, t) = \dot{q}p(\dot{q}) - E\Bigl(p(\dot{q})\Bigr) - V(q). \end{gather*}\]

Note that, in general, this does not have the form \(L = T - V\) (kinetic minus potential) unless \(E(p) \propto p^2\), i.e., as for Newtonian mechanics.

Inverting the relationship for the group velocity \(\dot{q} = E'(p)\) to obtain the momentum, we have

\[\begin{gather*} \dot{q} = E'(p) = \frac{pc^2}{\sqrt{p^2c^2 + m^2c^4}}, \qquad p = m\dot{q}\frac{1}{\sqrt{1 - \frac{\dot{q}^2}{c^2}}}. \end{gather*}\]

Thus, we recover the well-known relativistic relationship:

\[\begin{gather*} p = \gamma m \dot{q}, \qquad \gamma = \frac{1}{\sqrt{1-\beta^2}}, \qquad \beta = \frac{\dot{q}}{c}, \end{gather*}\]

in terms of the Lorentz factor \(\gamma(\beta)\) that relates the rate of change of time \(t\) in the inertial frame to proper time \(\tau\) in the co-moving frame:

\[\begin{gather*} \gamma = \diff{t}{\tau}. \end{gather*}\]

Collecting and simplifying we find the correct Lagrangian:

\[\begin{gather*} L(q, \dot{q}, t) = -\frac{m c^2}{\gamma(\dot{q})} - V(q). \end{gather*}\]

This makes more sense when we consider the action and introduce proper time \(\tau\) such that \(1/\gamma = \d{\tau}/\d{t}\):

\[\begin{gather*} I = \int_{t_0}^{t_1} L \d{t} = \int_{t_0}^{t_1} \left(-mc^2\diff{\tau}{t} - V(q)\right) \d{t}\\ = -mc^2\int_{\tau_0}^{\tau_1}\d{\tau} - \int_{t_0}^{t_1}V(q)\d{t}. \end{gather*}\]

Note that the relativistic Lagrangian does not have the form \(T - V\). Instead, the action corresponding to the kinetic piece becomes manifestly Lorentz invariant: it is simply the proper time of a clock moving with the observer, multiplied by appropriate dimensional factors.
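These manipulations are easily verified symbolically. In the following sympy sketch we parameterize the motion by the rapidity \(s\) (our own device here, so that \(p = mc\sinh s\), \(E = mc^2\cosh s\), and \(\gamma = \cosh s\) simplify cleanly):

# Symbolic check (sympy sketch) of the relativistic Legendre transformation.
import sympy as sp

s, m, c = sp.symbols('s m c', positive=True)  # s is the rapidity
v = c * sp.tanh(s)         # velocity
p = m * c * sp.sinh(s)     # = gamma m v, the canonical momentum
E = m * c**2 * sp.cosh(s)  # = sqrt(p^2 c^2 + m^2 c^4)
gamma = sp.cosh(s)

print(sp.simplify(E**2 - (p**2 * c**2 + m**2 * c**4)))  # 0: the dispersion holds
print(sp.simplify(sp.diff(E, s) / sp.diff(p, s) - v))   # 0: dE/dp = v
print(sp.simplify(v * p - E + m * c**2 / gamma))        # 0: L = vp - E = -mc^2/gamma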

Canonical Transformations#

The power of the Hamiltonian framework is that it allows one to express the most general possible coordinate transformations that preserve the structure of Hamilton’s equations. These are called canonical transformations, and can be checked using the Poisson bracket, which we now define.

Poisson Bracket#

Given a set of canonical variables: coordinates \(\vect{q}\) and associated conjugate momenta \(\vect{p}\), one can define the Poisson bracket \(\{\cdot, \cdot\}\):

\[\begin{gather*} \{f, g\}_{qp} = \sum_{i}\left(\pdiff{f}{q_i}\pdiff{g}{p_i} - \pdiff{g}{q_i}\pdiff{f}{p_i}\right) \end{gather*}\]

Do It! Prove the following properties of the Poisson bracket:

\[\begin{align*} \{f, g\} &= -\{g, f\}, \tag{Anticommutativity}\\ \{af + bg, h\} &= a\{f, h\} + b\{g, h\}, \tag{Bilinearity}\\ \{fg, h\} &= \{f, h\}g + f\{g, h\}, \tag{Leibniz' rule}\\ 0 &= \underbrace{\{f, \{g, h\}\} + \{g, \{h, f\}\} + \{h, \{f, g\}\}}_{ \{f, \{g, h\}\} + \text{cyclic permutations}}. \tag{Jacobi identity} \end{align*}\]

Note that these are the same properties shared by the matrix commutator \([\mat{A}, \mat{B}] = \mat{A}\mat{B} - \mat{B}\mat{A}\).

Note that

\[\begin{gather*} \{q_i, p_j\}_{qp} = \delta_{ij}, \qquad \{q_i, q_j\}_{qp} = \{p_i, p_j\}_{qp} = 0. \end{gather*}\]

Canonical Quantization

Once canonically conjugate variables \(q\) and \(p\) have been identified, one can seek linear operators \(\op{q}\) and \(\op{p}\) whose commutator satisfies an analogous commutation relation:

\[\begin{gather*} \{q, p\}_{qp} \rightarrow \frac{[\op{q}, \op{p}]}{\I\hbar}, \qquad \{q_i, p_j\}_{qp} = \delta_{ij} \rightarrow [\op{q}_{i}, \op{p}_{j}] = \I\hbar\delta_{ij}. \end{gather*}\]

Once such operators are found, the theory can be canonically quantized by replacing classical observables, such as the Hamiltonian \(H(q, p)\), with the corresponding quantum operators:

\[\begin{gather*} \op{H} = H(\op{q}, \op{p}). \end{gather*}\]

This works for observables that are quadratic in the coordinates and momenta, but can fail in general, especially if there are higher-order terms in both \(p\) and \(q\), where the ordering of the operators becomes ambiguous. For more information, see the associated discussion with the Moyal bracket, which generalizes the Poisson bracket.

Canonically Conjugate Variables#

Starting from a Lagrangian framework, we can identify canonically conjugate variables \(q_i\) and \(p_i = \partial L/\partial \dot{q}_i\). The Poisson bracket allows us to check whether or not new coordinates \(\vect{Q}(\vect{q}, \vect{p}, t)\) and \(\vect{P}(\vect{q}, \vect{p}, t)\) are canonically conjugate: they are if they satisfy

\[\begin{gather*} \{Q_i, P_j\}_{qp} = \delta_{ij}, \qquad \{Q_i, Q_j\}_{qp} = \{P_i, P_j\}_{qp} = 0. \end{gather*}\]

Note that these relations hold for any set of conjugate pairs \((\vect{q}, \vect{p})\) and \((\vect{Q}, \vect{P})\), so we can suppress the subscript \(\{\cdot, \cdot\}_{qp} \equiv \{\cdot, \cdot\}_{QP} \equiv \{\cdot, \cdot\}\): the Poisson bracket expresses an operation that does not fundamentally depend on the coordinates. (See the Poisson bracket in coordinate-free language)
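These checks are mechanical, so they are easy to automate. Here is a small sympy sketch (the helper `poisson` is our own) that verifies the Jacobi identity on some test functions and checks that \(Q = \arctan(q/p)\), \(P = (q^2+p^2)/2\) form a canonical pair (these are essentially the angle and action variables of the harmonic oscillator discussed below):

# A sympy Poisson-bracket helper for one degree of freedom (sketch).
import sympy as sp

q, p = sp.symbols('q p', real=True)

def poisson(f, g):
    """Poisson bracket {f, g}_{qp}."""
    return sp.simplify(sp.diff(f, q) * sp.diff(g, p)
                       - sp.diff(g, q) * sp.diff(f, p))

# Q = atan(q/p), P = (q^2 + p^2)/2 is canonical: {Q, P} = 1.
Q, P = sp.atan(q / p), (q**2 + p**2) / 2
print(poisson(Q, P))  # 1

# Jacobi identity for some arbitrary test functions.
f, g, h = q**2 * p, sp.sin(q) + p, q * p**3
print(sp.simplify(poisson(f, poisson(g, h)) + poisson(g, poisson(h, f))
                  + poisson(h, poisson(f, g))))  # 0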

Evolution as a Canonical Transformation#

One very important set of canonically conjugate variables is defined by evolution. Note that evolution defines a map from phase space into itself:

\[\begin{gather*} \begin{pmatrix} q_0 \\ p_0 \end{pmatrix} \rightarrow \begin{pmatrix} q(q_0, p_0, t) \\ p(q_0, p_0, t) \end{pmatrix}. \end{gather*}\]

I.e., if we take our initial conditions \((q_0, p_0)\) at time \(t=t_0\) as a point in phase space, then evolving these for time \(t\) gives a new point in phase space \((Q, P)\), which is where the initial point ends up at time \(t_0 + t\). This evolution defines a canonical transformation from the old coordinates \((q, p) \equiv (q_0, p_0)\):

\[\begin{gather*} Q(q_0, p_0, t) = q(q_0, p_0, t_0+t), \qquad P(q_0, p_0, t) = p(q_0, p_0, t_0+t). \end{gather*}\]

Generating Functions#

Canonical transformations can be generated by specifying an arbitrary generating function of one old and one new coordinate. There are thus four types of such functions:

\[\begin{gather*} F_1(q, Q, t), \qquad F_2(q, P, t), \qquad F_3(p, Q, t), \qquad F_4(p, P, t). \end{gather*}\]

From these, one can generate the omitted coordinates and the new Hamiltonian as follows:

\[\begin{align*} p &= +\pdiff{}{q}F_1(q, Q, t), & P &= -\pdiff{}{Q}F_1(q, Q, t),\\ p &= +\pdiff{}{q}F_2(q, P, t), & Q &= +\pdiff{}{P}F_2(q, P, t),\\ q &= -\pdiff{}{p}F_3(p, Q, t), & P &= -\pdiff{}{Q}F_3(p, Q, t),\\ q &= -\pdiff{}{p}F_4(p, P, t), & Q &= +\pdiff{}{P}F_4(p, P, t). \end{align*}\]

Or, more compactly:

\[\begin{align*} p &= +\pdiff{}{q}F, & P &= -\pdiff{}{Q}F,\\ q &= -\pdiff{}{p}F, & Q &= +\pdiff{}{P}F. \end{align*}\]

Note that these equations give implicit relationships between the old and new coordinates which must be solved to complete the transformation. Once this is done, the new Hamiltonian in all cases is:

\[\begin{gather*} H'(Q, P, t) = H(q, p, t) + \pdiff{F}{t}, \end{gather*}\]

which will give Hamilton’s equations

\[\begin{gather*} \dot{Q} = \pdiff{H'(Q, P, t)}{P}, \qquad \dot{P} = -\pdiff{H'(Q, P, t)}{Q}. \end{gather*}\]

The general approach for deriving these is to note that the most general transformation from \((q, p)\) to \((Q, P)\) that preserves the equations of motion obtained by extremizing the action consists of three things: 1) scaling the Lagrangian (which just changes the unit of time), 2) effecting a coordinate change, and 3) adding a total derivative:

\[\begin{gather*} L(q, \dot{q}, t) = \alpha L'(Q, \dot{Q}, t) + \diff{F_1(q, Q, t)}{t}. \end{gather*}\]

The scaling just changes the units of time, so we neglect this (\(\alpha = 1\)). It is the addition of a total derivative that provides additional freedom over the standard Lagrangian approach of writing the new coordinates as a simple function of the old \(Q=Q(q, t)\).

Expressing this relationship in terms of differentials, and introducing the Hamiltonians, we have

\[\begin{align*} \d{F_1} &= (L - L')\d{t} = (p\dot{q} - H)\d{t} - (P\dot{Q} - H')\d{t}\\ &= p\d{q} - P\d{Q} + (H' - H)\d{t} ,\\ \d{F_1(q, Q, t)} &= \pdiff{F_1}{q}\d{q} + \pdiff{F_1}{Q}\d{Q} + \pdiff{F_1}{t}\d{t}. \end{align*}\]

Equating these, and collecting the differentials, gives our formula above for the canonical transformation generated by \(F_1(q, Q, t)\). To obtain the relationship for another function, say \(F_2(q, P, t)\), we need to switch \(-P\d{Q}\) to \(Q\d{P}\), which can be done by adding \(\d{(QP)} = Q\d{P} + P\d{Q}\). I.e., \(F_2 = F_1 + QP\):

\[\begin{align*} \d{F_2} = \d{(F_1 + QP)} &= p\d{q} + Q\d{P} + (H' - H)\d{t},\\ \d{F_2(q, P, t)} &= \pdiff{F_2}{q}\d{q} + \pdiff{F_2}{P}\d{P} + \pdiff{F_2}{t}\d{t}. \end{align*}\]

This accounts for the sign change.
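As a concrete illustration (a minimal example of our own choosing): the type-1 function \(F_1(q, Q) = qQ\) generates the transformation that exchanges the roles of coordinates and momenta, \(Q = p\), \(P = -q\), which is easily checked to be canonical.

# The generating function F1(q, Q) = q Q swaps coordinates and momenta (sketch).
import sympy as sp

q, p, Q = sp.symbols('q p Q', real=True)

F1 = q * Q
print(sp.diff(F1, q))   # Q: from p = dF1/dq, the new coordinate is Q = p
print(-sp.diff(F1, Q))  # -q: from P = -dF1/dQ, the new momentum is P = -q

# Check that (Q, P) = (p, -q) is canonical: {Q, P}_{qp} = 1.
Qn, Pn = p, -q
bracket = sp.diff(Qn, q) * sp.diff(Pn, p) - sp.diff(Pn, q) * sp.diff(Qn, p)
print(sp.simplify(bracket))  # 1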

Hamilton-Jacobi Equation#

One important transformation is to find a generating function \(F_{1}(q, Q, t) = S(q, Q, t)\) such that \(H' = 0\). This transformation defines coordinates which are completely independent of time \(\dot{Q} = \dot{P} = 0\). The conditions \(H'(Q, P, t) = H(q, p, t) + S_{,t}\) and \(p = S_{,q}\) define the Hamilton-Jacobi equation:

\[\begin{gather*} H\left(q, \pdiff{S(q, Q, t)}{q}, t\right) + \pdiff{S(q, Q, t)}{t} = 0. \end{gather*}\]

Do It! Show that the classical action \(S(q, Q, t)\) is a solution.

Show that the classical action for a physical trajectory \(q(t)\) with \(q(t_0) = q_0\) and \(q(t_1) = q_1\),

\[\begin{gather*} S(q_0, t_0; q_1, t_1) = \int_{t_0}^{t_1} L(q, \dot{q}, t)\d{t}, \qquad \diff{}{t}\pdiff{L(q, \dot{q}, t)}{\dot{q}} = \pdiff{L(q, \dot{q}, t)}{q}, \end{gather*}\]

gives a solution \(S(q, Q, t)\) to the Hamilton-Jacobi equation with \(Q=q_0\), and \(t=t_1\).
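As a sanity check, here is a sympy sketch for the free particle (\(V = 0\)), whose classical action between \(Q\) at time \(0\) and \(q\) at time \(t\) is \(S(q, Q, t) = m(q-Q)^2/2t\):

# The free-particle classical action solves the Hamilton-Jacobi equation
# (S_q)^2/(2m) + S_t = 0  (sympy sketch).
import sympy as sp

q, Q, p = sp.symbols('q Q p', real=True)
t, m = sp.symbols('t m', positive=True)

S = m * (q - Q)**2 / (2 * t)
print(sp.simplify(sp.diff(S, q)**2 / (2 * m) + sp.diff(S, t)))  # 0

# The implicit relation p = dS/dq recovers the classical trajectory:
print(sp.solve(sp.Eq(p, sp.diff(S, q)), q))  # [Q + p*t/m]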

Hamilton’s Principal and Characteristic Functions#

The typical approach for solving the Hamilton-Jacobi equation is to look for separability. For example, if the Hamiltonian is time-independent, then we can write

\[\begin{gather*} S(\vect{q}, \vect{Q}, t) = W(\vect{q}, \vect{Q}) + f(t), \qquad H\left(\vect{q}, \pdiff{W(\vect{q}, \vect{Q})}{\vect{q}}\right) = -\pdiff{f(t)}{t} = \text{const.} = E. \end{gather*}\]

The left-hand-side contains no time, while the right-hand-side contains no \(\vect{q}\) and \(\vect{Q}\). Thus, the two sides must both be constant. That constant is the conserved quantity implied by Noether’s theorem when one has time-translation invariance. In most cases, that constant is the energy \(E\) of the system, so we use this notation and solve for \(f(t) = -Et\):

\[\begin{gather*} S(\vect{q}, \vect{Q}, t) = W(\vect{q}, \vect{Q}) - Et. \end{gather*}\]

This differentiates between the time-dependent action \(S(\vect{q}, \vect{Q}, t)\), which is called Hamilton’s principal function, and the time-independent action \(W(\vect{q}, \vect{Q}) = S + Et\), which is called Hamilton’s characteristic function.

Action-Angle Coordinates#

Another related set of coordinates is action-angle coordinates. We will give the general formulation below, but the basic idea applies to time-independent Hamiltonians where the Hamilton-Jacobi equation is separable so that the motion can be considered quasi-periodic. I.e., each of the coordinates executes either a libration \(q_k(t+T_k) = q_k(t)\) or a rotation, where the coordinate is periodic like an angle: \(q_k + Q_k \equiv q_k\) for some period \(Q_k\). In both cases, we can define the action over a complete period/cycle:

\[\begin{gather*} I_k(E) = \oint p_k\,\frac{\d{q_k}}{2\pi}, \end{gather*}\]

where the integral is over the path defined by the condition of constant energy \(E=E(q_k, p_k)\). The energy of such a system is a function only of these actions

\[\begin{gather*} H'(\vect{I}) = E(\vect{I}), \end{gather*}\]

hence the corresponding canonical coordinates (the angle coordinates) have constant velocity:

\[\begin{gather*} \dot{\theta}_{k} = \pdiff{H'(\vect{I})}{I_k} = \text{const.},\qquad \theta_k(t) = \theta_k(0) + \dot{\theta}_{k} t. \end{gather*}\]

The geometric structure is one of an \(n\)-torus in phase space defined by the constant energy manifold \(E = E(\vect{q}, \vect{p}) = E(\vect{I})\). The periodic trajectories are orbits around these tori, and the angle variables specify where on the orbits the system is at a given time, parameterized in such a way as to advance at a constant rate in time.

The process of finding these is straightforward (a numerical sketch follows the list):

  1. Calculate the new generalized momenta \(\vect{I}\) – the action variables:

    \[\begin{gather*} I_k = \oint p_k\,\frac{\d{q_k}}{2\pi} = \tfrac{1}{2\pi}\oint p_{k}\dot{q}_k \, \d{t}. \end{gather*}\]
  2. Express the original Hamiltonian in terms of these \(H'(\vect{I}) = E(\vect{I})\).

  3. Differentiate to get the frequencies:

    \[\begin{gather*} \omega_k = \pdiff{H'}{I_k}, \qquad \theta_k = \theta_k(0) + \omega_k t. \end{gather*}\]
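Here is a numerical sketch of this recipe for the librating pendulum from the example above (\(m = r = g = 1\), so \(H = p_\theta^2/2 - \cos\theta\)): compute \(I(E)\) by quadrature, differentiate numerically to get \(\omega = (\d{I}/\d{E})^{-1}\), and compare with the exact frequency \(\omega = 2\pi/T\), \(T = 4K(k^2)\), \(k^2 = (1+E)/2\), from the complete elliptic integral.

# Action-angle recipe for the pendulum H = p^2/2 - cos(theta) with m = r = g = 1.
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

def I_of_E(E):
    """Step 1: action I(E) = (1/2pi) oint p dtheta for a libration, -1 < E < 1."""
    theta_max = np.arccos(-E)  # turning point, where p = 0
    p = lambda theta: np.sqrt(2 * (E + np.cos(theta)))
    return 4 * quad(p, 0, theta_max)[0] / (2 * np.pi)  # four quarter-cycles

# Step 3: omega = dE/dI, computed here with a centered difference.
E, dE = 0.5, 1e-5
omega = 2 * dE / (I_of_E(E + dE) - I_of_E(E - dE))

omega_exact = 2 * np.pi / (4 * ellipk((1 + E) / 2))  # T = 4K(k^2), k^2 = (1+E)/2
print(omega, omega_exact)  # agree to ~5 digits; both below the small-oscillation 1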

Action-Angle Generating Function#

These coordinates are generated by the type-2 generating function \(W(\vect{q}, \vect{I})\) called Hamilton’s characteristic function

\[\begin{gather*} W(\vect{q}, \vect{I}) = S(q, \vect{I}, t) + E(\vect{I}) t. \end{gather*}\]

For a single coordinate, the action-angle coordinates can be generated from the following type-2 function with \(P = I\) being the area in phase space (divided by \(2\pi\) for our units):

\[\begin{gather*} W(q, I) = \int_0^{q}p(q, I)\d{q},\\ Q = \theta = \pdiff{W}{I} = \int_0^{q}\pdiff{p(q, I)}{I}\d{q},\qquad p = \pdiff{W}{q} = p(q, I),\qquad H' = H. \end{gather*}\]

For applications to Canonical Perturbation Theory, we would like to have a form of action-angle variables where the transformed Hamiltonian \(H' = 0\). To do this, note that the Hamiltonian is a constant function of the action variables, \(H'(\vect{I}) = E\), and use the type-2 generating function

\[\begin{gather*} S(\vect{q}, \vect{I}, t) = W(\vect{q}, \vect{I}) - H'(\vect{I})t, \end{gather*}\]

which is Hamilton’s principal function as before. This defines the following coordinates

\[\begin{gather*} \vect{Q} = \pdiff{S}{\vect{I}} = \overbrace{ \pdiff{W}{\vect{I}} - \pdiff{H'}{\vect{I}}t }^{\vect{\theta}(t) - \vect{\omega} t} = \vect{\theta}_0. \end{gather*}\]

Recall that the angle variables march forward with constant angular velocity \(\vect{\omega}\):

\[\begin{gather*} \vect{\theta}(t) = \vect{\theta}_0 + \vect{\omega} t. \end{gather*}\]

The new coordinates are just the (constant) starting angles \(\vect{\theta}_0\).

In summary, we have two transformations:

\[\begin{align*} F_2(\vect{q}, \vect{I}) &= W(\vect{q}, \vect{I}): & (\vect{q}, \vect{p}) &\rightarrow (\vect{\theta}, \vect{I}), \tag{Action-Angle}\\ F_2(\vect{q}, \vect{I}, t) &= S(\vect{q}, \vect{I}, t): & (\vect{q}, \vect{p}) &\rightarrow (\vect{\theta}_0, \vect{I}), \tag{Hamilton-Jacobi}\\ \end{align*}\]

where

\[\begin{gather*} S(\vect{q}, \vect{I}, t) = W(\vect{q}, \vect{I}) - E(\vect{I})t, \qquad \vect{\theta}(t) = \vect{\theta}_0 + \vect{\omega}t, \end{gather*}\]

and \(\vect{I}\), \(\vect{\theta}_0\), and \(\vect{\omega}\) are constants.

Formalism#

The formalism for a system with \(n\) degrees of freedom starts with identifying \(n\) integrals of motion \(F_{k}(\vect{q}, \vect{p})\) with \(F_1(\vect{q}, \vect{p}) = H(\vect{q}, \vect{p})\) which are in involution with each other, meaning that their Poisson brackets vanish:

\[\begin{gather*} \{F_i, F_j\}_{qp} = 0. \end{gather*}\]

Liouville proved the following about the level set

\[\begin{gather*} M_{\vect{f}} = \{(\vect{q}, \vect{p}) \quad |\quad \vect{F}(\vect{q}, \vect{p}) = \vect{f}\}. \end{gather*}\]
  1. \(M_{\vect{f}}\) is a smooth manifold, invariant under phase flow with the Hamiltonian \(H = F_1\).

  2. If the manifold \(M_{\vect{f}}\) is compact and connected, then it is diffeomorphic to the \(n\)-dimensional torus

    \[\begin{gather*} T^{n} = \{\vect{\phi} \mod 2\pi\} \end{gather*}\]
  3. The phase flow with Hamiltonian \(H = F_1\) determines conditionally periodic motion on \(M_{\vect{f}}\). I.e.:

    \[\begin{gather*} \dot{\vect{\phi}} = \vect{\omega}(\vect{f}). \end{gather*}\]
  4. The canonical equations can be integrated by quadratures. (I.e., although we only have \(n\) integrals, we can get all \(2n\) integrals of motion.)

The angle coordinates will be

\[\begin{gather*} \vect{\phi}(t) = \vect{\phi}(0) + \vect{\omega}(\vect{f}) t \end{gather*}\]

but we must still identify the conjugate momenta \(\vect{I}\), which will not in general be the functions \(\vect{F}\). They are given, as discussed above, by computing the action around the periodic orbits (though it should not be obvious from this discussion that these do indeed do the job).

Canonical Perturbation Theory#

To organize the perturbative calculation, one can turn to canonical perturbation theory which maintains the canonical relationship between variables as we add perturbative corrections.

We consider a Hamiltonian system with conjugate variables \((\vect{q}, \vect{p})\):

\[\begin{gather*} H(\vect{q}, \vect{p}, t) = H_0(\vect{q}, \vect{p}, t) + \epsilon H_1(\vect{q}, \vect{p}, t). \end{gather*}\]

We now assume that a solution to the Hamiltonian problem \(H_0(\vect{q}, \vect{p}, t)\) is known such that we can effect a canonical transform via the generating function \(F_2(\vect{q}, \vect{P}, t)\) to a new set of conjugate variables \((\vect{Q}, \vect{P})\) with Hamiltonian \(K_0(\vect{Q}, \vect{P}, t)\):

\[\begin{gather*} p_i = \pdiff{F_2}{q_i}, \\ Q_i = \pdiff{F_2}{P_i}, \\ K_0 = H_0 + \pdiff{F_2}{t} = 0. \end{gather*}\]

Effecting the same transform on the full Hamiltonian \(H\) gives:

\[\begin{gather*} K = \epsilon H_1, \\ \dot{\vect{Q}} = \pdiff{K}{\vect{P}} = \epsilon \pdiff{H_1}{\vect{P}}, \\ \dot{\vect{P}} = -\pdiff{K}{\vect{Q}} = -\epsilon \pdiff{H_1}{\vect{Q}}. \end{gather*}\]

This is still exact, but note that the motion is now of order \(\epsilon\).

The first approach, explained in [Goldstein et al., 2000], is to use these equations to recursively build successive approximations. Thus, substituting the order \(\epsilon^0\) solution on the right-hand-side of these equations gives a linear system that can be integrated to obtain the order \(\epsilon^1\) approximation. The order \(\epsilon^1\) solution can then be inserted to compute the order \(\epsilon^2\) approximation, etc.

Explicitly, let \(\vect{y} = (\vect{Q}, \vect{P})\). The full solution will have the form:

\[\begin{gather*} \vect{y} = \vect{y}_0 + \epsilon \vect{y}_1 + \epsilon^2 \vect{y}_2 + \cdots,\\ \dot{\vect{y}}_{n+1} = \begin{pmatrix} \mat{0}& \mat{1}\\ -\mat{1} & \mat{0} \end{pmatrix} \cdot \left.\pdiff{H_1(\vect{y}, t)}{\vect{y}}\right|_{\vect{y} = \vect{y}_{n}}. \end{gather*}\]

In the case of a single degree of freedom \(\vect{y} = (Q, P)\) we have

\[\begin{gather*} \begin{pmatrix} Q(t)\\ P(t) \end{pmatrix} = \begin{pmatrix} Q_0\\ P_0 \end{pmatrix} + \epsilon \begin{pmatrix} Q_1(t)\\ P_1(t) \end{pmatrix} + \epsilon^2 \begin{pmatrix} Q_2(t)\\ P_2(t) \end{pmatrix} +\cdots,\\ \begin{pmatrix} \dot{Q}_{n+1}(t)\\ \dot{P}_{n+1}(t) \end{pmatrix} = \left. \begin{pmatrix} \pdiff{H_1(Q, P, t)}{P}\\ -\pdiff{H_1(Q, P, t)}{Q} \end{pmatrix} \right|_{Q=Q_n(t), P=P_n(t)}. \end{gather*}\]

Note that if \(H_1(Q,P)\) is independent of time, then, since \(Q_0\) and \(P_0\) are constant, \((Q_1, P_1)\) is linear in time, \((Q_2, P_2)\) is quadratic, etc., and all integrals are trivial.
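Here is a numerical sketch of the first iteration of this scheme for an example of our own: \(H_0 = (p^2+q^2)/2\) with the assumed perturbation \(H_1 = q^4/4\). The \(H_0\) flow is a rotation in phase space, so the transformation to constant \((Q, P)\) is explicit, \(q = Q\cos t + P \sin t\), \(p = P\cos t - Q\sin t\), and \(K = \epsilon H_1 = \epsilon q^4(Q, P, t)/4\):

# First-order canonical perturbation theory for H = (p^2 + q^2)/2 + eps q^4/4 (sketch).
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

eps, Q0, P0 = 0.01, 1.0, 0.0
t = np.linspace(0, 20, 2001)

# Zeroth order: (Q, P) = (Q0, P0) are constant and q = Q cos(t) + P sin(t).
q0_t = Q0 * np.cos(t) + P0 * np.sin(t)

# First order: integrate dQ1/dt = dH1/dP = q^3 sin(t), dP1/dt = -dH1/dQ = -q^3 cos(t)
# along the zeroth-order solution.
Q1 = cumulative_trapezoid(q0_t**3 * np.sin(t), t, initial=0)
P1 = cumulative_trapezoid(-q0_t**3 * np.cos(t), t, initial=0)
q_pert = (Q0 + eps * Q1) * np.cos(t) + (P0 + eps * P1) * np.sin(t)

# Compare with direct integration of the full equations of motion.
res = solve_ivp(lambda t, y: [y[1], -y[0] - eps * y[0]**3],
                (0, 20), [Q0, P0], t_eval=t, rtol=1e-10, atol=1e-12)
print(np.max(np.abs(q0_t - res.y[0])),    # O(eps) error of the zeroth order
      np.max(np.abs(q_pert - res.y[0])))  # much smaller O(eps^2) error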

Important

Note: By itself, applying this formalism for canonical perturbation theory does not ensure that you won’t have secular terms. I.e. if you try to apply these equations to the transformation to constant coordinates \((\vect{q}_0, \vect{p}_0)\) generated by \(F_2(\vect{q}, \vect{q}_0, t) = S(\vect{q}_0, t_0; \vect{q}, t)\), you will generally end up with linearly increasing perturbations.

You must also use action-angle coordinates as generated by the \(F_2(\vect{q}, \vect{I}, t) = S(\vect{q}, \vect{I}, t)\) described above. In this case, the secular terms give rise to changes in the angular frequency \(\vect{\omega}\), preserving the periodic nature of the solutions.

For details, see Worked Example: The Pendulum.

Adiabatic Invariants#

For 1D systems, the action variable \(I\) is an adiabatic invariant, which we explain here by way of an example. Consider a harmonic oscillator with a time-dependent frequency that varies slowly:

\[\begin{gather*} H(q, p, t) = \frac{p^2}{2m} + \frac{m}{2}\omega^2(t) q^2, \qquad \ddot{q} = -\omega^2(t) q. \end{gather*}\]

Since the Hamiltonian now depends on time, the energy will not be conserved as we change the frequency, i.e., work may be done on the system. However, if we vary \(\omega(t)\) slowly enough, then we might expect that the energy will return if we take the system slowly (adiabatically) away from \(\omega_0\) but then slowly bring it back. The question is then, how does the energy depend on \(\omega(t)\) in the adiabatic limit?

For constant \(\omega\) we can find the action-angle variables (see the example above):

\[\begin{gather*} I = \frac{E}{\omega}, \qquad \phi = \phi_0 + \omega t, \end{gather*}\]

and we shall now show that \(I\) is an adiabatic invariant, meaning that it is constant up to quadratic corrections, \(I = I_0 + O(\dot{\omega}^2)\), for small \(\dot{\omega}\). Thus, the answer to the question posed above is:

\[\begin{gather*} E(t) = I\omega(t) + O(\dot{\omega}^2). \end{gather*}\]

Warning

Exactly what slowly or adiabatic means is quite subtle. See [Wells and Siklos, 2006] for a discussion. Averaging over cycles plays a key role that we have not discussed here.
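A numerical sketch illustrates the invariance: sweep \(\omega(t)\) slowly from \(1\) to \(2\) (the smooth cosine ramp below is our assumed profile) and compare \(E\) and \(I = E/\omega\) before and after. The energy roughly doubles while the action is nearly unchanged.

# Adiabatic invariance of I = E/w for q'' = -w^2(t) q with a slow sweep (sketch).
import numpy as np
from scipy.integrate import solve_ivp

m, T_sweep = 1.0, 200.0

def w(t):  # slow ramp of omega from 1 to 2, with dw/dt = 0 at both endpoints
    return 1.0 + (1 - np.cos(np.pi * np.clip(t / T_sweep, 0, 1))) / 2

def rhs(t, y):
    q, p = y
    return [p / m, -m * w(t)**2 * q]

res = solve_ivp(rhs, (0, T_sweep), [1.0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)

def E_and_I(t):
    q, p = res.sol(t)
    E = p**2 / (2 * m) + m * w(t)**2 * q**2 / 2
    return E, E / w(t)

print(E_and_I(0.0), E_and_I(T_sweep))  # E: 0.5 -> ~1.0 while I stays ~0.5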

WKB Approximation#

The traditional approach for the WKB approximation of quantum mechanics is to express the wavefunction as follows, then insert it into the Schrödinger equation:

\[\begin{align*} \psi(x, t) &= \exp\left\{\frac{i}{\hbar}W(x,t)\right\},\\ \psi'(x, t) &= \frac{i}{\hbar}W'(x, t)\psi(x, t),\\ \psi''(x, t) &= \left(\frac{i}{\hbar}W''(x, t) - \frac{\bigl(W'(x, t)\bigr)^2}{\hbar^2}\right)\psi(x, t),\\ 0 &= \left(\frac{(W')^2 - \I\hbar W''}{2m} + V + \dot{W}\right)\psi. \end{align*}\]

Expanding \(W\) in powers of \(\hbar\), we have the following lowest two orders:

\[\begin{align*} &W(x,t) = S(x,t) - \I\hbar \log A + \order(\hbar^2),\\ &W'(x,t) = S'(x,t) - \I\hbar \frac{A'}{A} + \order(\hbar^2),\\ &W''(x,t) = S''(x,t) - \I\hbar \frac{A A'' - (A')^2}{A^2} + \order(\hbar^2),\\ \text{Order $\hbar^0$:}\quad &\frac{(S')^2}{2m} + V(x,t) + \dot{S} = H(x, S', t) + \dot{S} = 0,\\ \text{Order $\hbar^1$:}\quad &\frac{S''}{2m} + \frac{A'S'}{mA} + \frac{\dot{A}}{A} = 0,\\ &\psi_{WKB}(x, t) = A(x,t)\exp\left\{\frac{i}{\hbar}S(x,t)\right\},\\ \end{align*}\]

\(\order(\hbar^0)\): Hamilton-Jacobi Equation#

The order \(\hbar^0\) equation is the well-known Hamilton-Jacobi equation, which is satisfied by the classical action as a function of initial and final states.

\(\order(\hbar^1)\): Continuity Equation#

The order \(\hbar^1\) equation is the well-known continuity equation, expressing the conservation of particle number or probability. To see this, multiply through by \(2A^2\):

\[\begin{gather*} \frac{A^2 S'' + 2S'A'A}{m} + 2\dot{A}A = 0\\ \left(\frac{S'}{m}A^2\right)' + \pdiff{A^2}{t} = 0. \end{gather*}\]

This is the 1-dimensional form of the familiar continuity equation, once we identify \(S'\) as the momentum, and \(A^2\) as the probability density:

\[\begin{gather*} \vect{\nabla}\cdot\vect{j} + \dot{n} = 0,\\ n \equiv A^2 = \abs{\psi}^2, \qquad \vect{v} \equiv \frac{\vect{p}}{m} = \frac{\vect{\nabla} S}{m}, \qquad \vect{j} = n\vect{v}. \end{gather*}\]
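This rearrangement is mechanical and easily checked with sympy (a sketch using symbolic functions \(S(x,t)\) and \(A(x,t)\)):

# Check (sympy sketch): 2A^2 x (order-hbar^1 equation) = (S' A^2/m)' + d(A^2)/dt.
import sympy as sp

x, t, m = sp.symbols('x t m', positive=True)
S, A = sp.Function('S')(x, t), sp.Function('A')(x, t)

order_h1 = (sp.diff(S, x, 2) / (2 * m)
            + sp.diff(A, x) * sp.diff(S, x) / (m * A)
            + sp.diff(A, t) / A)
continuity = sp.diff(sp.diff(S, x) * A**2 / m, x) + sp.diff(A**2, t)
print(sp.simplify(continuity - 2 * A**2 * order_h1))  # 0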

Exercise

Show that the order \(\hbar^1\) equation is satisfied by

\[\begin{gather*} A(x, t) = \sqrt{-\frac{\partial^2 S(x, t; x_0, t_0) / (2\pi \I\hbar)} {\partial x\partial x_0}} \end{gather*}\]

as given by the path integral formulation. Note: this determines the dependence of \(A(x, t)\) on \(x\) and \(t\) only up to an overall constant, which must be adjusted to normalize the wavefunction.

Exercise: Geometry

Note that since \(S' = p\), the factor \(A\) can be expressed as

\[\begin{gather*} n(x, t) \propto A^2(x, t) \propto \pdiff{p(x, t;x_0, t_0)}{x_0}. \end{gather*}\]

Explain this geometrically using Liouville’s theorem and the requirement that classical evolution conserves particle number

\[\begin{gather*} n_0(x_0, t_0)\d{x_0} = n(x, t)\d{x} \end{gather*}\]

for fixed initial condition \(p_0(x_0)\).

WKB: Path Integral Formulation#

In quantum mechanics, one can use the Feynman path-integral approach to construct the propagator (here expressed in terms of position-to-position transitions):

\[\begin{gather*} \newcommand{\S}{\mathcal{S}} U(q, t; q_0, t_0) = \int \mathcal{D}[q]\; \exp\left\{\frac{\I}{\hbar}S[q]\right\},\\ \psi(q, t) = \int U(q, t;q_0, t_0)\psi(q_0, t_0)\d{q_0}. \end{gather*}\]

where the integral is over all paths that start at \(q(t_0) = q_0\) and end at \(q(t) = q\), and \(S[q]\) is the classical action

\[\begin{gather*} S[q] = \int_{t_0}^{t}\d{t}\; L(q, \dot{q}, t). \end{gather*}\]


The WKB approximation relies on the idea that classical trajectories, for which \(S'[q_{\mathrm{cl}}] = 0\) – the famous principle of extremal action – dominate the propagator, and uses the expansion of the action

\[\begin{gather*} S[q+\xi] = S[q] + S'[q]\cdot \xi + \frac{1}{2!}S''[q]\cdot\xi\xi + \frac{1}{3!}S'''[q]\cdot\xi\xi\xi + \cdots. \end{gather*}\]

The WKB approximation amounts to considering all classical trajectories with appropriate boundary conditions, performing the path integral over \(\xi\), and dropping terms of order \(\order(\xi^3)\) and higher to obtain:

\[\begin{gather*} U_{WKB}(q, t; q_0, t_0) = \int \mathcal{D}[\xi]\; \exp\left\{\frac{\I}{\hbar}\left( S[q_{\mathrm{cl}}] + \frac{1}{2}S''[q_{\mathrm{cl}}]\cdot\xi\xi\right)\right\}\\ = \sqrt{\frac{-\partial^2 S / (2\pi \I \hbar)} {\partial q_{\mathrm{cl}}(t)\partial q_{\mathrm{cl}}(t_0)}} \exp\left\{\frac{\I}{\hbar}\S(q_{\mathrm{cl}}(t),t;q_{\mathrm{cl}}(t_0),t_0)\right\}, \end{gather*}\]

where \(\S = \S(q,t;q_0,t_0)\) is the classical action with \(q=q_{\mathrm{cl}}(t)\) and \(q_0 = q_{\mathrm{cl}}(t_0)\) are the final and initial points of the classical trajectory. The key point here is that all of the information about the propagator in this approximation is contained in the classical action \(\S(q,t;q_0,t_0)\), sometimes called Hamilton’s principal function.

Once the path integrals over \(\xi\) have been done, everything is expressed in terms of the classical trajectory \(q_{\mathrm{cl}}(t)\) and we shall drop the “cl” subscript in what follows.

(Note: if there are multiple trajectories that satisfy the boundary conditions, then they should be added, giving rise to quantum interference patterns.)

Similar results can be obtained for momentum-to-position transitions if the initial state is expressed in terms of momentum; however, in this case, since the boundary conditions are no longer the same, we must use a different form of \(\S(q,t;p_0,t_0)\):

\[\begin{gather*} \S(q,t;p_0,t_0) = p_0q_0(q,t;p_0,t_0) + \S\Bigl(q,t;q_0(q,t;p_0,t_0),t_0\Bigr),\\ U_{WKB}(q, t; p_0, t_0) = \sqrt{\frac{\partial^2 \S}{\partial q_{\mathrm{cl}}(t)\partial p_{\mathrm{cl}}(t_0)}} \exp\left\{\frac{\I}{\hbar}\S(q_{\mathrm{cl}}(t),t;p_{\mathrm{cl}}(t_0),t_0)\right\}. \end{gather*}\]

Examples#

Hide code cell content
from scipy.special import airy

m = hbar = g = 1
t0 = 0
z0 = 0
t = t0 + np.linspace(0, 6, 1000)[1:]
z = z0 - g*t**2/2
xi = (hbar**2/m**2/2/g)**(1/3)
x = (z-z0) / xi

phi = m/hbar*(
    (z-z0)**2/2/(t-t0) - g/2*(z+z0)*(t-t0) - g**2*(t-t0)**3/24
)
psi = np.sqrt(m/2/np.pi/hbar/1j/(t-t0))*np.exp(1j*phi)
Ai, Aip, Bi, Bip = airy(z/xi)
fig, ax = plt.subplots(figsize=(5, 2.5))
ax.plot(x, Ai, label=r"$\mathrm{Ai}(z/\xi)$")
ax.plot(x, psi.real * Ai[-1] /np.real(psi)[-1], label=r"$\Re \psi_{WKB}$")
y = np.cos((-x)**(3/2)*2/3 - np.pi/4)/abs(x)**(1/4)/np.sqrt(np.pi)
ax.plot(x, y, ":", 
        label=r"$\cos(\frac{2}{3}(-x)^{3/2} + \frac{\pi}{4})/\sqrt{\pi}|x|^{1/4}$")
ax.set(ylim=(-0.5, 1), xlabel=r"$z/\xi$", ylabel=r"$\psi$",
       title=r"$\xi=\sqrt[3]{\hbar^2/(2m^2g)}$")
ax.legend();
if glue: glue("fig:airy", fig, display=False);
[Figure: the exact \(\mathrm{Ai}(z/\xi)\), the WKB approximation \(\Re\psi_{WKB}\), and the asymptotic form, which agree away from the turning point at \(z = 0\).]