
In this blog, we will summarize the LaTeX code of the most fundamental equations of transfer learning (TL). Different from multi-task learning, transfer learning models aim to achieve the best performance on the target domain (minimizing the target-domain test error), rather than on the source domain. Typical transfer learning methods include domain adaptation (DA), feature sub-space alignment, etc. In this post, we will discuss TL equations in more detail, covering sub-areas such as domain adaptation, H-divergence, and Domain-Adversarial Neural Networks (DANN), which are useful as a quick reference for your research.
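
As a quick illustration, below is a minimal LaTeX sketch of two of the standard definitions mentioned above: the H-divergence between a source distribution \mathcal{D}_S and a target distribution \mathcal{D}_T, and the DANN training objective with feature-extractor parameters \theta_f, label-predictor parameters \theta_y, and domain-classifier parameters \theta_d. The symbol names follow the common convention in the literature and are not necessarily the exact notation used in the detailed sections.

% H-divergence: largest disagreement of any hypothesis h in class H on source vs. target
d_{\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) = 2 \sup_{h \in \mathcal{H}} \left| \Pr_{x \sim \mathcal{D}_S}[h(x)=1] - \Pr_{x \sim \mathcal{D}_T}[h(x)=1] \right|

% DANN objective: label loss L_y on n labeled source samples minus weighted domain loss L_d
% on all N samples (source plus n' target), traded off by \lambda
E(\theta_f, \theta_y, \theta_d) = \frac{1}{n} \sum_{i=1}^{n} L_y^{i}(\theta_f, \theta_y) - \lambda \left( \frac{1}{n} \sum_{i=1}^{n} L_d^{i}(\theta_f, \theta_d) + \frac{1}{n'} \sum_{i=n+1}^{N} L_d^{i}(\theta_f, \theta_d) \right)

% Saddle-point training: minimize over feature extractor and label predictor, maximize over domain classifier
\hat{\theta}_f, \hat{\theta}_y = \arg\min_{\theta_f, \theta_y} E(\theta_f, \theta_y, \hat{\theta}_d), \quad \hat{\theta}_d = \arg\max_{\theta_d} E(\hat{\theta}_f, \hat{\theta}_y, \theta_d)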


In this blog, we will summarize the LaTeX code of the most fundamental equations of multi-task learning (MTL) and transfer learning (TL). Multi-task learning aims to optimize N related tasks simultaneously and achieve an overall trade-off between the tasks. Typical network structures include shared-bottom models, the Cross-Stitch Network, Multi-Gate Mixture of Experts (MMoE), Progressive Layered Extraction (PLE), the Entire Space Multi-Task Model (ESMM), etc. In the following sections, we will discuss MTL equations in more detail, which is useful as a quick reference.
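
For example, here is a minimal LaTeX sketch of the per-task prediction functions of two of the architectures listed above, shared-bottom and MMoE. Here f denotes the shared bottom network, f_i the i-th expert, g^k the gating network of task k, and h^k the task-specific tower; these symbols are illustrative and may differ from the notation used in the detailed sections.

% Shared-bottom: every task k applies its own tower h^k on top of one shared network f
y_k = h^{k}\left( f(x) \right)

% MMoE: a per-task softmax gate g^k mixes the outputs of n expert networks f_i
y_k = h^{k}\left( \sum_{i=1}^{n} g^{k}(x)_i \, f_i(x) \right), \quad g^{k}(x) = \mathrm{softmax}\left( W_{g^{k}} x \right)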
