190828

August 28th, 2019

Today’s work

  1. Convex Optimization Course - Duality;
    weak and strong duality
  2. IEEE Kaggle - Data Organization;
    convert the data according to the Data Description - link
  3. Study LightGBM with the video presented by Mateusz Susik from McKinsey - link

XGBoost by Tianqi Chen - link

XGBoost -> LightGBM

Update:

Duality -

  1. Form the Lagrangian

    $\mathcal{L}(x,\lambda,\nu) = f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x) + \sum_{i=1}^{p} \nu_i h_i(x)$

  2. Set the gradient with respect to $x$ equal to zero to minimize $\mathcal{L}$ over $x$

    $\nabla_x \mathcal{L} = 0$

  3. Plug the minimizing $x$ back into $\mathcal{L}$ to get the Lagrangian dual function:

    $g(\lambda, \nu) = \inf_{x\in D} \mathcal{L}(x, \lambda, \nu)$

    which is a concave function and can be $-\infty$ for some $\lambda, \nu$.

    The Lagrangian dual function is concave because, for each fixed $x$, $\mathcal{L}(x, \lambda, \nu)$ is an affine function of $(\lambda, \nu)$, and the pointwise infimum of a family of affine functions is concave.

    For $\lambda \succeq 0$, $g(\lambda, \nu)$ gives a lower bound on the optimal value of the primal problem (weak duality). We want to maximize this lower bound to get the tightest bound, and since $g$ is concave, maximizing it is a convex optimization problem.
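As a quick sanity check of steps 1-3 (a standard textbook example, not from today's lecture), take the problem of minimizing $x^\top x$ subject to $Ax = b$:

  1. $\mathcal{L}(x, \nu) = x^\top x + \nu^\top (Ax - b)$
  2. $\nabla_x \mathcal{L} = 2x + A^\top \nu = 0 \;\Rightarrow\; x = -\tfrac{1}{2} A^\top \nu$
  3. $g(\nu) = \mathcal{L}\!\left(-\tfrac{1}{2} A^\top \nu,\, \nu\right) = -\tfrac{1}{4} \nu^\top A A^\top \nu - b^\top \nu$

Since $A A^\top$ is positive semidefinite, $g(\nu)$ is a concave quadratic, and maximizing it over $\nu$ is an unconstrained convex problem.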

LightGBM vs. XGBoost (both offer a histogram-based implementation)

One of the methods used in LightGBM, and its biggest advantage, is Gradient-based One-Side Sampling (GOSS). The idea is to concentrate on data points with large gradients and ignore most data points with small gradients (a small gradient means the point is close to a local minimum, i.e. its residual/loss is already small), which makes training much faster.
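Below is a rough NumPy sketch of the GOSS idea (my own illustration following the description in the LightGBM paper, not the library's actual code); the sampling ratios `a`, `b` and the `(1 - a) / b` re-weighting are the parameters described there:

```python
import numpy as np

def goss_sample(gradients, a=0.2, b=0.1, rng=None):
    """Sketch of Gradient-based One-Side Sampling.

    Keeps the a*100% of instances with the largest |gradient|, randomly
    samples b*100% of the remaining (small-gradient) instances, and
    up-weights the sampled small-gradient instances by (1 - a) / b so the
    estimated information gain stays approximately unbiased.
    """
    rng = np.random.default_rng(rng)
    n = len(gradients)
    top_n = int(a * n)
    rand_n = int(b * n)

    # Indices sorted by |gradient|, largest first
    order = np.argsort(-np.abs(gradients))
    top_idx = order[:top_n]            # large-gradient points: always kept
    rest_idx = order[top_n:]           # small-gradient points: subsampled
    sampled_rest = rng.choice(rest_idx, size=rand_n, replace=False)

    used_idx = np.concatenate([top_idx, sampled_rest])
    weights = np.ones(len(used_idx))
    weights[top_n:] = (1.0 - a) / b    # compensate for the dropped points

    return used_idx, weights

# Example usage with fake gradients
grads = np.random.randn(1000)
idx, w = goss_sample(grads, a=0.2, b=0.1, rng=0)
print(len(idx), w[:3], w[-3:])
```

Because only the large-gradient points plus a small random slice of the rest are used when building each tree, the per-iteration cost drops to roughly a fraction $(a + b)$ of the full data while the split-gain estimates stay close to what the full sample would give.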
