
Cheatsheet of LaTeX Code for the Most Popular Machine Learning Equations
rockingdingo 20220918 #GAN #VAE #KLDivergence #Wasserstein #Mahalanobis In this blog, we summarize the LaTeX code for the most popular machine learning equations, including various distance measures, generative models, and more. There are many distance measures between data distributions, including KL Divergence, JS Divergence, Wasserstein Distance (Optimal Transport), Maximum Mean Discrepancy (MMD), and so on. We provide the LaTeX code for these machine learning models in the following sections. In the second section, we also provide the LaTeX code for generative models, including Generative Adversarial Networks (GAN), Variational AutoEncoder (VAE), and Diffusion Models (DDPM).
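As a small sample of the cheatsheet's style, the KL Divergence between two discrete distributions P and Q (a standard definition, not notation specific to this post) can be written in LaTeX as:

```latex
D_{\mathrm{KL}}(P \parallel Q) = \sum_{x \in \mathcal{X}} P(x) \log \frac{P(x)}{Q(x)}
```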
READ MORE 
LaTeX Code for Diffusion Model Equations
rockingdingo 20220918 #Diffusion #VAE #GAN #Generative Models In this blog, we summarize the LaTeX code of the equations for Diffusion Models, which are among the top-performing generative models alongside GANs, VAEs, and flow-based models. The basic idea of diffusion models is to inject random noise into the feature vector in the forward process, modeled as a Markov chain, and then gradually reconstruct the feature vector in the reverse process for generation. See the following blog post for more details: Weng, Lilian. (Jul 2021). What are diffusion models? Lil'Log. lilianweng.github.io/posts/2021-07-11-diffusion-models/
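As an example of the equations covered, the forward (noise-injection) step of a DDPM is commonly written as a Gaussian transition with variance schedule \beta_t (standard notation, following the referenced post):

```latex
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right)
```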
READ MORE 
Cheatsheet of LaTeX Code for Reinforcement Learning Equations
rockingdingo 20220718 #rl #reinforcement learning In this blog, we summarize the LaTeX code of the most fundamental equations of reinforcement learning (RL). This blog covers many topics, including the Bellman Equation, Markov Decision Process (MDP), Partially Observable Markov Decision Process (POMDP), DQN, A3C, etc.
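For instance, the Bellman expectation equation for the state-value function under a policy \pi (standard RL notation) is:

```latex
V^{\pi}(s) = \sum_{a} \pi(a \mid s) \sum_{s'} P(s' \mid s, a)\left[R(s, a, s') + \gamma V^{\pi}(s')\right]
```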
READ MORE 
Cheatsheet of LaTeX Code for Financial Engineering and Quantitative Equations
rockingdingo 20220718 #financial engineering #blackscholes In this blog, we summarize the LaTeX code of the most popular equations for financial engineering. We cover important topics, including the Black-Scholes formula, Value at Risk (VaR), etc.
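As a sample, the Black-Scholes price of a European call option (standard notation: spot S_t, strike K, risk-free rate r, volatility \sigma, maturity T) is:

```latex
C(S_t, t) = S_t N(d_1) - K e^{-r(T-t)} N(d_2), \qquad
d_1 = \frac{\ln(S_t/K) + (r + \sigma^2/2)(T-t)}{\sigma\sqrt{T-t}}, \qquad
d_2 = d_1 - \sigma\sqrt{T-t}
```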
READ MORE 
Cheatsheet of LaTeX Code for Graph Neural Network (GNN) Equations
rockingdingo 20220717 #graph neural network #gnn #gcn #gat #graphsage In this blog, we summarize the LaTeX code of the equations of Graph Neural Network (GNN) models, which are useful as a quick reference for your research. For common notation, we denote the graph as G=(V,E), where V is the set of nodes with |V|=N and E is the set of edges with |E|=N_e. A denotes the adjacency matrix. For each node v, we use h_v and o_v as the hidden state and the output vector of that node.
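Using this notation, a typical example from the cheatsheet is the layer-wise propagation rule of a Graph Convolutional Network (GCN), where \tilde{A} = A + I adds self-loops and \tilde{D} is its degree matrix (standard GCN notation):

```latex
H^{(l+1)} = \sigma\!\left(\tilde{D}^{-\frac{1}{2}}\, \tilde{A}\, \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right), \qquad \tilde{A} = A + I
```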
READ MORE 
Cheatsheet of LaTeX Code for Transfer Learning Equations
rockingdingo 20220717 #machine learning #transfer learning #domain adaptation #Domain-Adversarial Neural Networks In this blog, we summarize the LaTeX code of the most fundamental equations of transfer learning (TL). Different from multi-task learning, transfer learning models aim to achieve the best performance on the target domain (minimizing target-domain test error), not on the source domain. Typical transfer learning methods include domain adaptation (DA), feature subspace alignment, etc. In this post, we discuss TL equations in more detail, covering subareas such as domain adaptation, H-divergence, and Domain-Adversarial Neural Networks (DANN), which are useful as a quick reference for your research.
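As one example, the DANN training objective couples a label-prediction loss L_y and a domain-classification loss L_d through a shared feature extractor \theta_f, minimized over (\theta_f, \theta_y) and maximized over \theta_d (standard DANN formulation):

```latex
E(\theta_f, \theta_y, \theta_d) = \sum_{i=1}^{n} L_y^{i}(\theta_f, \theta_y) - \lambda \sum_{i=1}^{N} L_d^{i}(\theta_f, \theta_d)
```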
READ MORE 
Cheatsheet of LaTeX Code for Kernel Methods and Gaussian Processes
rockingdingo 20220711 #kernel #svm #gaussian process #gp #deep kernel learning In this blog, we summarize the LaTeX code of the most popular kernel methods and Gaussian Process models, including Support Vector Machine (SVM), Gaussian Process (GP), and Deep Kernel Learning (DKL).
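For example, a Gaussian Process prior with an RBF kernel of length scale \ell (standard notation) is written as:

```latex
f(x) \sim \mathcal{GP}\big(m(x),\ k(x, x')\big), \qquad
k(x, x') = \exp\!\left(-\frac{\lVert x - x' \rVert^2}{2\ell^2}\right)
```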
READ MORE 
Cheatsheet of LaTeX Code for Multi-Task Learning Equations
rockingdingo 20220711 #mtl #multitask learning #mmoe #ple In this blog, we summarize the LaTeX code of the most fundamental equations of multi-task learning (MTL) and transfer learning (TL). Multi-task learning aims to optimize N related tasks simultaneously and achieve the best overall trade-off between them. Typical network structures include shared-bottom models, Cross-Stitch Networks, Multi-Gate Mixture of Experts (MMoE), Progressive Layered Extraction (PLE), the Entire Space Multi-Task Model (ESMM), etc. In the following sections, we discuss MTL equations in more detail, which is useful for your quick reference.
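As a representative example, the MMoE prediction for task k mixes n shared experts f_i with a task-specific softmax gate g^k (notation as in the MMoE paper):

```latex
y_k = h^{k}\!\left(f^{k}(x)\right), \qquad
f^{k}(x) = \sum_{i=1}^{n} g^{k}(x)_i\, f_i(x), \qquad
g^{k}(x) = \mathrm{softmax}\!\left(W_{gk}\, x\right)
```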
READ MORE 
Cheatsheet of LaTeX Code for the Most Popular Causal Inference and Uplift Modelling Equations
rockingdingo 20220710 #causal inference #uplift modelling #auuc #qini Cheatsheet of LaTeX Code for the Most Popular Causal Inference and Uplift Modelling Equations
READ MORE 
Cheatsheet of LaTeX Code for the Most Popular Recommendation and Advertisement Ranking Module Equations
rockingdingo 20220620 #recommendation #advertisement #ranking #sequential modelling Ranking is a crucial part of modern commercial recommendation and advertisement systems. It aims to solve the problem of accurate click-through rate (CTR) prediction. In this article, we provide some of the most popular ranking equations of commercial recommendation and ads systems.
READ MORE 
20 Tricks to Tell if a Rolex Watch is Real or Fake
fashion_watch 20220605 #ROLEX #FASHION #AIRKING #GMTMASTERII #YACHTMASTERII #SUBMARINER #DAYDATE 20 Tricks to Tell if a Rolex Watch is Real or Fake
READ MORE 
Cheatsheet of LaTeX Code for the Most Popular Natural Language Processing Equations
rockingdingo 20220503 #nlp #latex #bert Cheatsheet of LaTeX Code for the Most Popular Natural Language Processing Equations
READ MORE 
Cross-Domain Recommendation in Commercial Recommendation Systems, with Applications of MMD and Wasserstein Distance
rockingdingo 20210725 #cross domain recommendation #mmd #wasserstein Cross-Domain Recommendation in Commercial Recommendation Systems, with Applications of MMD and Wasserstein Distance
READ MORE 
Deep Candidate Generation (DeepMatch) Algorithms in Recommendation
rockingdingo 20210725 #deep candidate generation #deepmatch #recommendation #vector retrieval In this post, we talk about some real-world applications of deep candidate generation (vector-retrieval) models in the matching stage of a recommendation scenario. A commercial recommendation system recommends items to each user from a pool of tens of millions, and the recommendation process usually consists of two stages. The first is the candidate generation (matching) stage, in which a few hundred candidates are selected from the pool of all candidate items. The second is the ranking stage, in which these hundreds of items are scored and sorted by a ranking model, and the top-rated items are displayed to users.
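The two-stage pipeline described above can be sketched with a toy inner-product retriever; all names, the embedding size, and the 200-candidate cutoff are illustrative assumptions, not details from the post:

```python
import numpy as np

def generate_candidates(user_vec, item_vecs, k):
    """Matching stage: retrieve the top-k items by inner product
    between the user vector and every item vector in the pool."""
    scores = item_vecs @ user_vec
    top_k = np.argsort(-scores)[:k]          # indices sorted by score, descending
    return top_k, scores[top_k]

def rank(candidate_ids, candidate_scores, ranking_model):
    """Ranking stage: re-score the few hundred candidates with a
    (here hypothetical) heavier ranking model, then sort by that score."""
    ranked = sorted(zip(candidate_ids, map(ranking_model, candidate_scores)),
                    key=lambda pair: -pair[1])
    return [item_id for item_id, _ in ranked]

# Toy pool of 10,000 items with 8-dimensional embeddings.
rng = np.random.default_rng(0)
item_vecs = rng.normal(size=(10_000, 8))
user_vec = rng.normal(size=8)

cand_ids, cand_scores = generate_candidates(user_vec, item_vecs, k=200)
final = rank(cand_ids, cand_scores, ranking_model=lambda s: 2.0 * s)
print(len(cand_ids), len(final))  # 200 candidates in, 200 ranked items out
```

In a real system the matching stage would use an approximate nearest-neighbor index rather than a brute-force dot product, and the ranking model would be a learned CTR model rather than a lambda.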
READ MORE 
TensorFlow Parallelism: Multi-core and Multi-threading
rockingdingo 20191001 #tensorflow #parallelism #multicore #multithread Training deep neural network models with TensorFlow takes a long time, so parallelizing the computation is an important way to speed it up. TensorFlow provides several methods to run a program in parallel; when using them, you need to consider questions such as whether to compute on the CPU or the GPU, how many CPU cores to use for parallel computation, and how resources are allocated when building the Graph. Below, taking a Linux multi-core CPU environment as an example, we introduce several common methods to speed up your TensorFlow program.
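One common knob is the session thread configuration; the fragment below is a minimal sketch (the specific thread counts are illustrative assumptions, and the `tf.compat.v1` spelling assumes the TF 1.x-style session API of the post's era):

```python
import tensorflow as tf

# intra_op controls parallelism inside a single op (e.g. a large matmul);
# inter_op controls how many independent ops may run concurrently.
config = tf.compat.v1.ConfigProto(
    intra_op_parallelism_threads=4,   # cores used within one op
    inter_op_parallelism_threads=2,   # independent ops run in parallel
    device_count={"CPU": 1},
)
sess = tf.compat.v1.Session(config=config)
```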
READ MORE 
Calling Pretrained Models from the TensorFlow C++ API and Building for Production
rockingdingo 20181101 #tensorflow #cpp #c++ #build #nlp #deep learning This post works through training a model offline with TensorFlow Python scripts and then, in a production environment, calling the pretrained model directly from C++ code to make predictions, without writing a custom inference function. Because TensorFlow currently provides only a limited C++ API, it draws on several existing write-ups and records the pitfalls encountered along the way. It includes a simple demo of an ANN model classifying the Iris dataset.
READ MORE