Cheatsheet of LaTeX Code for the Most Popular Natural Language Processing Equations

Sequential Labeling

  • Hidden Markov Model

    Latex Code
                Q=\{q_{1},q_{2},...,q_{N}\}, V=\{v_{1},v_{2},...,v_{M}\} \\
                I=\{i_{1},i_{2},...,i_{T}\},O=\{o_{1},o_{2},...,o_{T}\} \\
                A=[a_{ij}]_{N \times N}, a_{ij}=P(i_{t+1}=q_{j}|i_{t}=q_{i}) \\
                B=[b_{j}(k)]_{N \times M},b_{j}(k)=P(o_{t}=v_{k}|i_{t}=q_{j}) \\
            
    Explanation

    Q denotes the set of hidden states and V denotes the set of observation symbols. Given a state sequence I of length T and an observation sequence O of length T, the Hidden Markov Model (HMM) uses the transition matrix A to store the transition probabilities a_{ij} and the observation matrix B to store the observation probabilities b_{j}(k).
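
    Example (Python)

    A minimal sketch of the forward algorithm for the model above, showing how A and B score an observation sequence. The two-state values and the uniform initial distribution pi are toy assumptions, not part of the cheatsheet.

            import numpy as np

            # Toy HMM: N=2 hidden states, M=2 observation symbols (assumed values).
            A = np.array([[0.7, 0.3],
                          [0.4, 0.6]])   # a_ij = P(i_{t+1}=q_j | i_t=q_i)
            B = np.array([[0.9, 0.1],
                          [0.2, 0.8]])   # b_j(k) = P(o_t=v_k | i_t=q_j)
            pi = np.array([0.5, 0.5])    # initial distribution (not defined above; assumed uniform)

            def forward(obs):
                """Forward algorithm: P(O | A, B, pi), summing over all state paths."""
                alpha = pi * B[:, obs[0]]          # alpha_1(j) = pi_j * b_j(o_1)
                for o in obs[1:]:
                    alpha = (alpha @ A) * B[:, o]  # alpha_{t+1}(j) = sum_i alpha_t(i) a_ij b_j(o_{t+1})
                return alpha.sum()

            print(forward([0, 1, 0]))   # likelihood of observing v_1, v_2, v_1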

  • Conditional Random Field

    Latex Code
            P(y|x)=\frac{1}{Z(x)}\exp\left(\sum_{i,k}\lambda_{k}t_{k}(y_{i-1},y_{i},x,i)+\sum_{i,l}\mu_{l}s_{l}(y_{i},x,i)\right) \\
            Z(x)=\sum_{y}\exp\left(\sum_{i,k}\lambda_{k}t_{k}(y_{i-1},y_{i},x,i)+\sum_{i,l}\mu_{l}s_{l}(y_{i},x,i)\right)
            
    Explanation

    P(y|x) denotes the conditional probability of label sequence y given input x under the linear-chain Conditional Random Field (CRF). t_{k} denotes a transition feature function defined on edges (adjacent labels), s_{l} denotes a state feature function defined on nodes (individual labels), and \lambda_{k} and \mu_{l} denote the corresponding weight coefficients. Z(x) is the normalization factor, summing over all possible label sequences y.
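
    Example (Python)

    A minimal brute-force sketch of the linear-chain CRF probability above. The tables trans and emit stand in for the weighted feature sums (\sum_{k}\lambda_{k}t_{k} and \sum_{l}\mu_{l}s_{l}); their values and the 2-label, length-3 setup are illustrative assumptions. Real implementations compute Z(x) with the forward algorithm rather than by enumeration.

            from itertools import product

            import numpy as np

            # trans[a, b] plays the role of sum_k lambda_k * t_k(y_{i-1}=a, y_i=b, x, i);
            # emit[i, b] plays the role of sum_l mu_l * s_l(y_i=b, x, i). Toy values.
            trans = np.array([[ 0.8, -0.3],
                              [ 0.1,  0.5]])
            emit = np.array([[ 1.2, -0.4],
                             [-0.1,  0.9],
                             [ 0.3,  0.2]])

            def score(y):
                """Unnormalized log-score of label sequence y: edge terms + node terms."""
                s = emit[0, y[0]]
                for i in range(1, len(y)):
                    s += trans[y[i - 1], y[i]] + emit[i, y[i]]
                return s

            # Z(x): sum over all 2^3 label sequences (exponential; fine for a toy example).
            Z = sum(np.exp(score(y)) for y in product(range(2), repeat=3))
            y = (0, 1, 1)
            print(np.exp(score(y)) / Z)   # P(y|x) exactly as in the formula above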

  • Transformer

    Latex Code
            Attention(Q, K, V) = \text{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V
            
    Explanation

    Q, K and V denote the query, key and value matrices, and d_{k} is the dimension of the key vectors. Each row of the softmax output is a set of attention weights over the keys, so the result is a weighted sum of the value vectors. Dividing by \sqrt{d_{k}} keeps the dot products in a numerically stable range before the softmax.

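    Example (Python)

    A minimal NumPy sketch of scaled dot-product attention matching the formula above; the random 4-query, 6-key setup is an illustrative assumption.

            import numpy as np

            def attention(Q, K, V):
                """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
                d_k = K.shape[-1]
                scores = Q @ K.T / np.sqrt(d_k)                   # (n_q, n_k) similarity logits
                weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
                weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
                return weights @ V                                # weighted sum of value rows

            rng = np.random.default_rng(0)
            Q = rng.normal(size=(4, 8))   # 4 queries of dimension d_k = 8
            K = rng.normal(size=(6, 8))   # 6 keys
            V = rng.normal(size=(6, 8))   # 6 values
            print(attention(Q, K, V).shape)   # (4, 8): one output vector per query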