TABLE OF CONTENTS
economics
- Cox-Ingersoll-Ross CIR
- Equation of Exchange
- Geometric Brownian Motion SDEs
- Ito Lemma
- Risk-Neutral Valuation and Power Contracts
- Sharpe Ratio
- Stock Prices as Geometric Brownian Motion
physics
- Electric Oscillations
- General Relativity
- Maxwell Equations Integral
- Mechanic Oscillations
- Waves In Long Conductors
math
- Arithmetic and Geometric Progressions
- Bessel Equation
- Binomial Expansion
- Complex Numbers
- Convergence of Series
- De Moivre's Theorem
- Determinants of a Matrix
- Diffusion Conduction Equation
- Eigenvalues and Eigenvectors
- Exponential Distribution
machine learning
- Average Treatment Effect ATE
- Bound on Target Domain Error
- Conditional Average Treatment Effect CATE
- Diffusion Model Forward Process
- Diffusion Model Reverse Process
- Diffusion Model Variational Lower Bound
- Diffusion Model Variational Lower Bound Loss
- Domain Adaptation H-Divergence
- Domain-Adversarial Neural Networks DANN
- Generative Adversarial Networks GAN
EQUATION LIST
-
Cox-Ingersoll-Ross CIR
#Financial #Economics
$$\mathrm{d} r(t) = a[b - r(t)] \mathrm{d} t + \sigma \sqrt{r(t)} \mathrm{d} Z(t) \\ P(r, t, T) = A(T-t)e^{-rB(T-t)} \\ \gamma = \sqrt{(a-\bar{\phi})^{2} + 2 \sigma^{2}} \\ q(r, t, T) = \sigma \sqrt{r} B(T-t) \\ \text{yield to maturity as } T \to \infty:\ \frac{2ab}{a - \bar{\phi} + \gamma}$$
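A minimal NumPy sketch of the short-rate SDE above, using an Euler-Maruyama step with the rate floored at zero inside the square root; all parameter values are illustrative, not calibrated.

```python
# Euler-Maruyama simulation of the CIR short rate:
# dr = a(b - r) dt + sigma * sqrt(r) dZ
import numpy as np

def simulate_cir(r0, a, b, sigma, T, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.empty(n_steps + 1)
    r[0] = r0
    for i in range(n_steps):
        dz = rng.normal(0.0, np.sqrt(dt))
        # max(r, 0) guards the square root against small negative excursions
        r[i + 1] = r[i] + a * (b - r[i]) * dt + sigma * np.sqrt(max(r[i], 0.0)) * dz
    return r

path = simulate_cir(r0=0.05, a=0.2, b=0.04, sigma=0.1, T=10.0, n_steps=1000)
print(path[-1])  # simulated short rate at time T
```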
READ MORE -
Ito Lemma
#Financial #Economics
$$\mathrm{d}X(t) = a(t, X(t)) \mathrm{d}t + b(t, X(t))\mathrm{d} Z(t) \\ Y(t) = f(t, X(t)) \\ \mathrm{d} Y(t) = f_{t}(t, X(t))\mathrm{d}t + f_{x}(t, X(t))\mathrm{d} X(t) + \frac{1}{2} f_{xx}(t, X(t))[\mathrm{d}X(t)]^{2} \\ [\mathrm{d} X(t)]^{2} = b^{2}(t, X(t))\mathrm{d} t$$
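A minimal numerical check of the lemma, assuming the simple case f(x) = x² and arithmetic Brownian motion dX = μ dt + σ dZ, for which Itô gives E[X(T)²] = (x₀ + μT)² + σ²T; parameter values are illustrative.

```python
# Monte Carlo check of Ito's lemma for f(x) = x**2, dX = mu dt + sigma dZ
import numpy as np

rng = np.random.default_rng(1)
x0, mu, sigma, T, n_steps, n_paths = 1.0, 0.3, 0.5, 2.0, 400, 20000
dt = T / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

print(np.mean(x**2))                      # Monte Carlo estimate of E[X(T)^2]
print((x0 + mu * T) ** 2 + sigma**2 * T)  # value implied by Ito's lemma
```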
READ MORE -
Risk-Neutral Valuation and Power Contracts
#Financial #Economics
$$\frac{\mathrm{d}S(t)}{S(t)} = (r - \delta) \mathrm{d}t + \sigma \mathrm{d} \tilde{Z}(t) \\ \tilde{Z}(t) = Z(t) + \phi t \\ V(S(t), t) = e^{-r(T-t)} E^{*}[V(S(T), T) | S(t)] \\ F^{p}_{t, T}(S^{a}) = S^{a}(t) e ^{ (-r + a(r-\delta) + \frac{1}{2} a(a-1)\sigma^{2})(T-t)}$$
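A minimal sketch comparing the closed-form prepaid forward price on S^a with a risk-neutral Monte Carlo estimate; all parameter values are illustrative.

```python
# Prepaid forward on S**a: closed form vs. risk-neutral Monte Carlo
import numpy as np

S0, r, delta, sigma, a, T = 100.0, 0.05, 0.02, 0.3, 2.0, 1.0

closed_form = S0**a * np.exp((-r + a * (r - delta) + 0.5 * a * (a - 1) * sigma**2) * T)

rng = np.random.default_rng(2)
z = rng.normal(size=200_000)
ST = S0 * np.exp((r - delta - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
monte_carlo = np.exp(-r * T) * np.mean(ST**a)   # discounted risk-neutral expectation

print(closed_form, monte_carlo)  # the two estimates should be close
```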
READ MORE -
Stock Prices as Geometric Brownian Motion
#Financial #Economics
$$\frac{\mathrm{d}S(t)}{S(t)} = (a - \delta) \mathrm{d}t + \sigma \mathrm{d}Z(t) \\ S(t) = S(0) e^{(a - \delta - \frac{\sigma^{2}}{2})t + \sigma Z(t)} \\ \mathrm{d}[\ln S(t)] = (a - \delta - \frac{\sigma^{2}}{2}) \mathrm{d}t + \sigma \mathrm{d} Z(t) \\ \ln S(t) \sim \mathcal{N}\left( \ln S(0) + (a - \delta - \frac{\sigma^2}{2})t,\ \sigma^{2}t\right)$$
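A minimal sketch: exact simulation of the GBM solution above and a check that ln S(T) has the stated mean and variance; parameter values are illustrative.

```python
# Exact GBM simulation and lognormality check
import numpy as np

S0, alpha, delta, sigma, T = 100.0, 0.08, 0.0, 0.25, 1.0
rng = np.random.default_rng(3)
z = rng.normal(size=100_000)
ST = S0 * np.exp((alpha - delta - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

print(np.mean(np.log(ST)), np.log(S0) + (alpha - delta - 0.5 * sigma**2) * T)  # mean of ln S(T)
print(np.var(np.log(ST)), sigma**2 * T)                                        # variance of ln S(T)
```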
READ MORE -
Electric Oscillations
#physics #oscillations
$$\text{Impedance} \\ Z=R+iX \\ \text{Series connection} \\ V=IZ, Z_{\rm tot}=\sum_i Z_i~,~~L_{\rm tot}=\sum_i L_i~,~~ \frac{1}{C_{\rm tot}}=\sum_i\frac{1}{C_i}~,~~Q=\frac{Z_0}{R}~,~~ Z=R(1+iQ\delta) \\ \text{Parallel connection} \\ \frac{1}{Z_{\rm tot}}=\sum_i\frac{1}{Z_i}~,~~ \frac{1}{L_{\rm tot}}=\sum_i\frac{1}{L_i}~,~~ C_{\rm tot}=\sum_i C_i~,~~Q=\frac{R}{Z_0}~,~~ Z=\frac{R}{1+iQ\delta}$$
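A minimal sketch of the series and parallel impedance rules for an R-L-C combination, using Python complex numbers; component values and the drive frequency are illustrative.

```python
# Series vs. parallel impedance of R, L, C at a single angular frequency
import numpy as np

R, L, C = 50.0, 1e-3, 1e-6            # ohms, henries, farads
omega = 2 * np.pi * 5e3               # angular frequency in rad/s

Z_R, Z_L, Z_C = R, 1j * omega * L, 1.0 / (1j * omega * C)

Z_series = Z_R + Z_L + Z_C                         # Z_tot = sum_i Z_i
Z_parallel = 1.0 / (1 / Z_R + 1 / Z_L + 1 / Z_C)   # 1/Z_tot = sum_i 1/Z_i

print(abs(Z_series), abs(Z_parallel))
```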
READ MORE -
Maxwell Equations Integral
#physics #maxwell #electricity #magnetism
$$\oiint (\vec{D}\cdot \vec{n}) \mathrm{d}^{2}A=Q_{\text{free,included}}\\ \oiint (\vec{B}\cdot \vec{n}) \mathrm{d}^{2}A=0 \\ \oint \vec{E}\cdot \mathrm{d}\vec{s}=-\frac{\mathrm{d}\Phi}{\mathrm{d}t}\\ \oint \vec{H}\cdot \mathrm{d}\vec{s}=I_{\text{free,included}}+\frac{\mathrm{d}\Psi }{\mathrm{d}t}$$
READ MORE -
Mechanic Oscillations
#physics #mechanic #oscillations
$$m\ddot{x}=F(t)-k\dot{x}-Cx \\ F(t)=\hat{F}\cos(\omega t) \\ -m\omega^2 x=F-Cx-ik\omega x \\ \omega_0^2=C/m \\ x=\frac{F}{m(\omega_0^2-\omega^2)+ik\omega} \\ \dot{x}=\frac{F}{i\sqrt{Cm}\delta+k} \\ \delta=\frac{\omega}{\omega_0}-\frac{\omega_0}{\omega} \\ Z=F/\dot{x} \\ Q=\frac{\sqrt{Cm}}{k}$$
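A minimal sketch of the steady-state response x = F / (m(ω₀² − ω²) + ikω) over a frequency sweep, with k the damping coefficient and C the spring constant as in the formulas above; numerical values are illustrative.

```python
# Steady-state amplitude of the driven, damped oscillator and its quality factor
import numpy as np

m, k, C, F = 1.0, 0.2, 25.0, 1.0        # mass, damping, spring constant, force amplitude
omega0 = np.sqrt(C / m)                 # natural frequency
omega = np.linspace(0.1, 2 * omega0, 5)

x = F / (m * (omega0**2 - omega**2) + 1j * k * omega)
print(omega0, np.abs(x))                # amplitude is largest near omega0 for small damping
print(np.sqrt(C * m) / k)               # quality factor Q
```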
READ MORE -
Fourier Series
#math #fourier series
$$y(x)=c_{0}+\sum^{M}_{m=1}c_{m}\cos mx+\sum^{M^{'}}_{m=1}s_{m}\sin mx \\ c_{0}=\frac{1}{2\pi}\int^{\pi}_{-\pi}y(x) \mathrm{d} x \\ c_{m}=\frac{1}{\pi}\int^{\pi}_{-\pi}y(x) \cos mx \mathrm{d} x \\ s_{m}=\frac{1}{\pi}\int^{\pi}_{-\pi}y(x) \sin mx \mathrm{d} x$$
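A minimal sketch computing the coefficients above numerically for a square wave on [-π, π]; since the function is odd, c₀ and c_m come out near zero and s_m ≈ 4/(mπ) for odd m.

```python
# Numerical Fourier coefficients of a square wave via Riemann sums
import numpy as np

x = np.linspace(-np.pi, np.pi, 20000, endpoint=False)
dx = x[1] - x[0]
y = np.sign(np.sin(x))                              # square wave

c0 = np.sum(y) * dx / (2 * np.pi)
print("c0 ~", round(c0, 4))
for m in range(1, 6):
    cm = np.sum(y * np.cos(m * x)) * dx / np.pi
    sm = np.sum(y * np.sin(m * x)) * dx / np.pi
    print(m, round(cm, 4), round(sm, 4))            # sm ~ 4/(m*pi) for odd m
```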
READ MORE -
Gamma Distribution
#Math #Statistics
$$\Gamma \left( \alpha \right) = \int_0^\infty s^{\alpha - 1} e^{ - s} \mathrm{d}s \\ P(x) = \frac{x^{\alpha-1} e^{-\frac{x}{\theta}}}{\Gamma(\alpha) \theta^{\alpha}} \\ \mu = \alpha \theta \\ \sigma^{2} = \alpha \theta^{2} \\ \gamma_{1} = \frac{2}{\sqrt{\alpha}} \\ \gamma_{2} = \frac{6}{\alpha}$$
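A minimal sketch evaluating the density above on a grid and checking its normalisation, mean and variance against μ = αθ and σ² = αθ²; the shape and scale values are illustrative.

```python
# Gamma(alpha, theta) density, checked against the moment formulas
import numpy as np
from math import gamma

alpha, theta = 3.0, 2.0

def gamma_pdf(x):
    return x ** (alpha - 1) * np.exp(-x / theta) / (gamma(alpha) * theta ** alpha)

x = np.linspace(1e-6, 80, 200_000)
dx = x[1] - x[0]
pdf = gamma_pdf(x)

print(np.sum(pdf) * dx)                                          # ~ 1 (normalisation)
print(np.sum(x * pdf) * dx, alpha * theta)                       # mean vs. alpha*theta
print(np.sum((x - alpha * theta) ** 2 * pdf) * dx, alpha * theta ** 2)  # variance vs. alpha*theta^2
```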
READ MORE -
Power Series for Complex Variables
#math #complex variables
$$e^{z}=1+z+\frac{z^{2}}{2!}+\frac{z^{3}}{3!}+...+\frac{z^{n}}{n!}+...\\ \sin z=z-\frac{z^{3}}{3!}+\frac{z^{5}}{5!}-...\\ \cos z=1-\frac{z^{2}}{2!}+\frac{z^{4}}{4!}-...\\ \ln (1+z)=z-\frac{z^{2}}{2}+\frac{z^{3}}{3}-...\\ (1+z)^{n}=1+nz+\frac{n(n-1)}{2!}z^{2}+\frac{n(n-1)(n-2)}{3!}z^{3}+...$$
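A minimal sketch summing the truncated series for exp, sin, and cos at a complex argument and comparing against the cmath reference values; the argument and truncation length are arbitrary.

```python
# Truncated power series at a complex point vs. cmath
import cmath
from math import factorial

z = 0.3 + 0.4j
N = 20

exp_series = sum(z**n / factorial(n) for n in range(N))
sin_series = sum((-1) ** k * z ** (2 * k + 1) / factorial(2 * k + 1) for k in range(N))
cos_series = sum((-1) ** k * z ** (2 * k) / factorial(2 * k) for k in range(N))

print(exp_series, cmath.exp(z))
print(sin_series, cmath.sin(z))
print(cos_series, cmath.cos(z))
```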
READ MORE -
Power Series with Real Variables
#math #power #series
$$e^{x}=1+x+\frac{x^{2}}{2!}+...+\frac{x^{n}}{n!}+... \\ \ln(1+x) = x - \frac{x^{2}}{2} + \frac{x^{3}}{3} - ... + (-1)^{n+1}\frac{x^{n}}{n} +... \\ \cos(x) = \frac{e^{ix}+e^{-ix}}{2}=1-\frac{x^{2}}{2!}+\frac{x^{4}}{4!}-\frac{x^{6}}{6!}+...\\ \sin(x) = \frac{e^{ix}-e^{-ix}}{2i}=x-\frac{x^{3}}{3!}+\frac{x^{5}}{5!}-...$$
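A minimal check of the ln(1+x) series with its general term (−1)^{n+1} xⁿ/n, compared against math.log for a point inside the radius of convergence |x| < 1.

```python
# Partial sums of the ln(1+x) series vs. math.log
import math

x = 0.5
series = sum((-1) ** (n + 1) * x**n / n for n in range(1, 60))
print(series, math.log(1 + x))   # should agree to high precision
```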
READ MORE -
Spherical Harmonics Equation
#math #spherical harmonics
$$\left[\frac{1}{\sin \theta} \frac{\partial}{\partial \theta}\left(\sin \theta \frac{\partial}{\partial \theta}\right) + \frac{1}{\sin^{2} \theta} \frac{\partial^{2}}{\partial \phi^{2}} \right] Y^{m}_{l} + l(l+1) Y^{m}_{l}=0 \\ Y^{m}_{l}(\theta,\phi)=\sqrt{\frac{2l+1}{4 \pi} \frac{(l-|m|)!}{(l+|m|)!}}P^{m}_{l}(\cos \theta) e^{im \phi} \times \begin{cases}(-1)^{m} & m\ge 0 \\ 1 & m <0 \end{cases}$$
READ MORE -
Gaussian Process
#math #gaussian process
$$\log p(y|X) \propto -[y^{T}(K + \sigma^{2}I)^{-1}y+\log|K + \sigma^{2}I|] \\ f(X)=[f(x_{1}),f(x_{2}),...,f(x_{N})]^{T} \sim \mathcal{N}(\mu, K_{X,X}) \\ f_{*}|X_{*},X,y \sim \mathcal{N}(\mathbb{E}(f_{*}),\text{cov}(f_{*})) \\ \text{cov}(f_{*})=K_{X_{*},X_{*}}-K_{X_{*},X}[K_{X,X}+\sigma^{2}I]^{-1}K_{X,X_{*}}$$
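A minimal NumPy sketch of exact GP regression following the posterior expressions above, with an assumed RBF kernel, a noisy sine as training data, and the log marginal likelihood computed up to constants.

```python
# Exact GP posterior mean / covariance with an RBF kernel
import numpy as np

def rbf(A, B, lengthscale=1.0):
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = np.linspace(0, 5, 20)
y = np.sin(X) + 0.1 * rng.normal(size=X.size)   # noisy observations
X_star = np.linspace(0, 5, 100)                 # test inputs
noise = 0.1**2                                  # sigma^2

K = rbf(X, X) + noise * np.eye(X.size)          # K_{X,X} + sigma^2 I
K_s = rbf(X_star, X)                            # K_{X*,X}
K_ss = rbf(X_star, X_star)                      # K_{X*,X*}

mean_star = K_s @ np.linalg.solve(K, y)                 # E(f_*)
cov_star = K_ss - K_s @ np.linalg.solve(K, K_s.T)       # cov(f_*)

sign, logdet = np.linalg.slogdet(K)
log_ml = -(y @ np.linalg.solve(K, y) + logdet)          # log p(y|X) up to constants
print(mean_star[:3], log_ml)
```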
READ MORE -
Bound on Target Domain Error
#machine learning #transfer learning
$$\epsilon_{T}(h) \le \hat{\epsilon}_{S}(h) + \sqrt{\frac{4}{m}(d \log \frac{2em}{d} + \log \frac{4}{\delta })} + d_{\mathcal{H}}(\tilde{\mathcal{D}}_{S}, \tilde{\mathcal{D}}_{T}) + \lambda \\ \lambda = \lambda_{S} + \lambda_{T}$$
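A minimal sketch plugging illustrative numbers into the bound; the VC dimension d, sample size m, confidence δ, empirical divergence, and combined error λ are all assumed values, not results.

```python
# Numeric evaluation of the target-error bound for assumed quantities
import numpy as np

eps_S_hat, d, m, delta = 0.08, 10, 5000, 0.05   # source error, VC dim, samples, confidence
d_H, lam = 0.15, 0.02                           # empirical H-divergence and lambda = lambda_S + lambda_T

complexity = np.sqrt(4 / m * (d * np.log(2 * np.e * m / d) + np.log(4 / delta)))
bound = eps_S_hat + complexity + d_H + lam
print(bound)                                    # upper bound on the target error
```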
READ MORE -
Diffusion Model Reverse Process
#machine learning #diffusion
$$p_\theta(\mathbf{x}_{0:T}) = p(\mathbf{x}_T) \prod^T_{t=1} p_\theta(\mathbf{x}_{t-1} \vert \mathbf{x}_t) \\ p_\theta(\mathbf{x}_{t-1} \vert \mathbf{x}_t) = \mathcal{N}(\mathbf{x}_{t-1}; \boldsymbol{\mu}_\theta(\mathbf{x}_t, t), \boldsymbol{\Sigma}_\theta(\mathbf{x}_t, t))$$
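A minimal sketch of ancestral sampling through the reverse chain, assuming a placeholder `predict_mean` function in place of a trained μ_θ network and a fixed variance Σ_θ = β_t I (one common choice); everything here is illustrative structure, not a trained model.

```python
# Ancestral sampling x_T -> x_0 with a placeholder mean predictor
import numpy as np

T, dim = 50, 2
betas = np.linspace(1e-4, 0.02, T)
rng = np.random.default_rng(0)

def predict_mean(x_t, t):
    # stand-in for mu_theta(x_t, t); a real model would be a learned network
    return (1.0 - betas[t]) * x_t

x = rng.normal(size=dim)                      # x_T ~ N(0, I)
for t in reversed(range(T)):
    mean = predict_mean(x, t)
    noise = rng.normal(size=dim) if t > 0 else 0.0
    x = mean + np.sqrt(betas[t]) * noise      # sample x_{t-1} ~ N(mean, beta_t I)
print(x)
```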
READ MORE -
Diffusion Model Variational Lower Bound
#machine learning #diffusion
$$\begin{aligned} - \log p_\theta(\mathbf{x}_0) &\leq - \log p_\theta(\mathbf{x}_0) + D_\text{KL}(q(\mathbf{x}_{1:T}\vert\mathbf{x}_0) \| p_\theta(\mathbf{x}_{1:T}\vert\mathbf{x}_0) ) \\ &= -\log p_\theta(\mathbf{x}_0) + \mathbb{E}_{\mathbf{x}_{1:T}\sim q(\mathbf{x}_{1:T} \vert \mathbf{x}_0)} \Big[ \log\frac{q(\mathbf{x}_{1:T}\vert\mathbf{x}_0)}{p_\theta(\mathbf{x}_{0:T}) / p_\theta(\mathbf{x}_0)} \Big] \\ &= -\log p_\theta(\mathbf{x}_0) + \mathbb{E}_q \Big[ \log\frac{q(\mathbf{x}_{1:T}\vert\mathbf{x}_0)}{p_\theta(\mathbf{x}_{0:T})} + \log p_\theta(\mathbf{x}_0) \Big] \\ &= \mathbb{E}_q \Big[ \log \frac{q(\mathbf{x}_{1:T}\vert\mathbf{x}_0)}{p_\theta(\mathbf{x}_{0:T})} \Big] \\ \text{Let }L_\text{VLB} &= \mathbb{E}_{q(\mathbf{x}_{0:T})} \Big[ \log \frac{q(\mathbf{x}_{1:T}\vert\mathbf{x}_0)}{p_\theta(\mathbf{x}_{0:T})} \Big] \geq - \mathbb{E}_{q(\mathbf{x}_0)} \log p_\theta(\mathbf{x}_0) \end{aligned}$$
READ MORE -
Diffusion Model Variational Lower Bound Loss
#machine learning #diffusion
$$\begin{aligned} L_\text{VLB} &= L_T + L_{T-1} + \dots + L_0 \\ \text{where } L_T &= D_\text{KL}(q(\mathbf{x}_T \vert \mathbf{x}_0) \parallel p_\theta(\mathbf{x}_T)) \\ L_t &= D_\text{KL}(q(\mathbf{x}_t \vert \mathbf{x}_{t+1}, \mathbf{x}_0) \parallel p_\theta(\mathbf{x}_t \vert\mathbf{x}_{t+1})) \text{ for }1 \leq t \leq T-1 \\ L_0 &= - \log p_\theta(\mathbf{x}_0 \vert \mathbf{x}_1) \end{aligned}$$
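A minimal sketch of the building block behind each L_t term: the closed-form KL divergence between two diagonal Gaussians; the means and variances below are illustrative stand-ins for q(x_t | x_{t+1}, x_0) and p_θ(x_t | x_{t+1}).

```python
# KL divergence between diagonal Gaussians, the per-step term in L_VLB
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

mu_q, var_q = np.array([0.5, -0.2]), np.array([0.30, 0.40])
mu_p, var_p = np.array([0.4, -0.1]), np.array([0.35, 0.45])
print(kl_diag_gaussians(mu_q, var_q, mu_p, var_p))
```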
READ MORE -
Domain-Adversarial Neural Networks DANN
#machine learning #transfer learning
$$\min [\frac{1}{m}\sum^{m}_{i=1}\mathcal{L}(f(\textbf{x}^{s}_{i}),y_{i})+\lambda \max(-\frac{1}{m}\sum^{m}_{i=1}\mathcal{L}^{d}(o(\textbf{x}^{s}_{i}),1)-\frac{1}{m^{'}}\sum^{m^{'}}_{i=1}\mathcal{L}^{d}(o(\textbf{x}^{t}_{i}),0))]$$
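A minimal sketch that only evaluates the objective above for given predictor outputs, with binary cross-entropy standing in for both losses; the arrays are random stand-ins for model outputs, and no training or gradient reversal is performed here.

```python
# Evaluating the DANN minimax objective for illustrative predictions
import numpy as np

def xent(p, y):
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
f_src = rng.uniform(size=8)            # label predictions f(x^s_i)
y_src = rng.integers(0, 2, size=8)     # source labels y_i
o_src = rng.uniform(size=8)            # domain predictions o(x^s_i), labelled 1
o_tgt = rng.uniform(size=8)            # domain predictions o(x^t_i), labelled 0

lam = 0.1
task_loss = xent(f_src, y_src).mean()
domain_term = -xent(o_src, np.ones(8)).mean() - xent(o_tgt, np.zeros(8)).mean()
# outer min over the label predictor; the domain term is maximised over o
objective = task_loss + lam * domain_term
print(objective)
```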
READ MORE -
Graph Attention Network GAT
#machine learning #graph #GNN
$$h=\{\vec{h_{1}},\vec{h_{2}},...,\vec{h_{N}}\}, \\ \vec{h_{i}} \in \mathbb{R}^{F} \\ W \in \mathbb{R}^{F \times F^{'}} \\ e_{ij}=a(W\vec{h_{i}},W\vec{h_{j}}) \\ k \in \mathcal{N}_{i},\text{ neighbourhood nodes}\\ \alpha_{ij}=\text{softmax}_{j}(e_{ij})=\frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_{i}} \exp(e_{ik})}$$
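A minimal NumPy sketch of the attention coefficients for one node: scores e_ij from an assumed concatenation-based score vector (the LeakyReLU used in the GAT paper is omitted for brevity), normalised with a softmax over the neighbourhood; all weights and features are random placeholders.

```python
# Attention scores e_ij and softmax-normalised alpha_ij for one node
import numpy as np

rng = np.random.default_rng(0)
F_in, F_out, N = 4, 3, 5
h = rng.normal(size=(N, F_in))          # node features h_i
W = rng.normal(size=(F_in, F_out))      # shared linear transform
a = rng.normal(size=2 * F_out)          # assumed score vector for a(Wh_i, Wh_j)

Wh = h @ W
i, neighbours = 0, [1, 2, 4]            # neighbourhood N_i of node 0
e = np.array([a @ np.concatenate([Wh[i], Wh[j]]) for j in neighbours])
alpha = np.exp(e) / np.exp(e).sum()     # softmax_j(e_ij)
print(alpha)
```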
READ MORE -
GraphSage
#machine learning #graph #GNN
$$h^{0}_{v} \leftarrow x_{v} \\ \textbf{for } k \in \{1,2,...,K\} \textbf{ do}\\ \quad \textbf{for } v \in V \textbf{ do} \\ \quad\quad h^{k}_{N_{v}} \leftarrow \textbf{AGGREGATE}_{k}(h^{k-1}_{u}, u \in N(v)); \\ \quad\quad h^{k}_{v} \leftarrow \sigma (W^{k} \textbf{concat}(h^{k-1}_{v},h^{k}_{N_{v}})) \\ \quad \textbf{end} \\ \quad h^{k}_{v}=h^{k}_{v}/||h^{k}_{v}||_{2},\forall v \in V \\ \textbf{end} \\ z_{v} \leftarrow h^{K}_{v} \\ J_{\textbf{z}_{u}}=-\log (\sigma (\textbf{z}_{u}^{T}\textbf{z}_{v})) - Q \mathbb{E}_{v_{n} \sim p_n(v)} \log(\sigma (-\textbf{z}_{u}^{T}\textbf{z}_{v_{n}}))$$
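A minimal runnable version of the forward pass above on a toy graph, assuming a mean aggregator, tanh as the nonlinearity, and random weight matrices; the unsupervised loss J is not computed here.

```python
# GraphSAGE-style forward pass with a mean aggregator on a 4-node toy graph
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))                      # node features x_v
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}     # neighbourhoods N(v)
K, d = 2, 3
W = [rng.normal(size=(2 * d, d)) for _ in range(K)]  # placeholder weights W^k

h = X.copy()
for k in range(K):
    h_new = np.empty_like(h)
    for v in range(4):
        h_nv = np.mean(h[adj[v]], axis=0)                     # AGGREGATE_k over N(v)
        h_new[v] = np.tanh(np.concatenate([h[v], h_nv]) @ W[k])
    h = h_new / np.linalg.norm(h_new, axis=1, keepdims=True)  # l2 normalisation
z = h                                                         # final embeddings z_v
print(z)
```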
READ MORE -
Model-Agnostic Meta-Learning MAML
#machine learning #meta learning
$$\min_{\theta} \sum_{\mathcal{T}_{i} \sim p(\mathcal{T})} \mathcal{L}_{\mathcal{T}_{i}}(f_{\theta^{'}_{i}}) = \sum_{\mathcal{T}_{i} \sim p(\mathcal{T})} \mathcal{L}_{\mathcal{T}_{i}}(f_{\theta - \alpha \nabla_{\theta} \mathcal{L}_{\mathcal{T}_{i}} (f_{\theta}) })$$
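A minimal sketch of the inner/outer loop on assumed 1-D quadratic tasks L_i(θ) = (θ − c_i)², where both the adaptation gradient and the meta-gradient are available in closed form; learning rates and tasks are illustrative.

```python
# MAML on 1-D quadratic tasks with analytic gradients
import numpy as np

tasks = np.array([1.0, 2.0, 3.0])     # each task i has optimum c_i
alpha, beta, theta = 0.1, 0.05, 0.0   # inner lr, outer lr, meta-parameters

for _ in range(200):
    meta_grad = 0.0
    for c in tasks:
        theta_i = theta - alpha * 2 * (theta - c)      # inner adaptation: theta'_i
        # d/dtheta of L_i(theta_i), chain rule through theta_i
        meta_grad += 2 * (theta_i - c) * (1 - 2 * alpha)
    theta -= beta * meta_grad                          # outer (meta) update
print(theta)   # settles near the mean of the task optima
```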
READ MORE -
Variational AutoEncoder VAE
#machine learning #VAE
$$\log p_{\theta}(x)=\mathbb{E}_{q_{\phi}(z|x)}[\log p_{\theta}(x)] \\ =\mathbb{E}_{q_{\phi}(z|x)}[\log \frac{p_{\theta}(x,z)}{p_{\theta}(z|x)}] \\ =\mathbb{E}_{q_{\phi}(z|x)}[\log [\frac{p_{\theta}(x,z)}{q_{\phi}(z|x)} \times \frac{q_{\phi}(z|x)}{p_{\theta}(z|x)}]] \\ =\mathbb{E}_{q_{\phi}(z|x)}[\log \frac{p_{\theta}(x,z)}{q_{\phi}(z|x)} ] +D_{KL}(q_{\phi}(z|x) || p_{\theta}(z|x))$$
READ MORE -