LightGBM is a gradient boosting framework based on decision trees that can be used for ranking, classification, and a variety of other machine learning tasks.

Because it is built on tree algorithms, it grows each tree leaf-wise, splitting the leaf with the best fit, unlike other boosting methods that grow trees level-wise (depth-wise). As a consequence, the leaf-wise approach reduces more loss per split than the level-wise technique, often resulting in substantially higher accuracy than other boosting strategies. It's also remarkably quick, which is why it's called "Light".

Leaf-wise growth increases tree complexity and may result in overfitting; this can be mitigated with the max_depth parameter, which limits how deep splitting can go.

Pros of LightGBM

  • Faster training speed and efficiency: LightGBM uses a histogram-based approach that buckets continuous feature values into discrete bins, which speeds up training.
  • Lower memory usage: replacing continuous values with discrete bins reduces memory consumption.
  • Higher accuracy than most other boosting methods: the leaf-wise split strategy builds more complex trees than level-wise splitting, which is the main factor behind its accuracy. It can, however, lead to overfitting, which can be prevented by limiting the max_depth parameter.
  • Scales to large datasets: compared to XGBoost, it performs similarly well on large datasets while requiring significantly less training time.
  • Supports parallel learning.


It’s critical to become familiar with the basic parameters of any algorithm you work with. LightGBM’s documentation lists over 100 parameters, but you don’t need to learn them all. Let’s have a look at the most important ones.

Control Parameters

  • max_depth: limits how deep a tree can grow. Reduce it if your model appears to be overfitting.
  • min_data_in_leaf: the minimum number of records a leaf must contain; also used to prevent overfitting.
  • feature_fraction: the fraction of features randomly selected for building trees in each iteration. A value of 0.7 means 70% of the features are used.
  • bagging_fraction: the fraction of the data used in each iteration; frequently used to speed up training and avoid overfitting.
  • early_stopping_round: training stops if the validation metric has not improved in the last early_stopping_round rounds, cutting off unnecessary iterations.
  • lambda_l1 / lambda_l2: specify regularization; values typically range from 0 to 1.
  • min_gain_to_split: the minimum gain required to make a split, used to control the number of splits in a tree.

Essential Parameters

  • task: describes what will be done with the data, either learning from it (train) or making predictions on it (predict).
  • boosting: indicates the type of algorithm, e.g. gbdt (the default), dart, or rf.
  • objective (also called application): determines whether regression or classification is performed. Regression is the default in LightGBM.


Parameter tuning is a critical step data scientists take to achieve pinpoint accuracy, get results quickly, and avoid overfitting.

If you want excellent accuracy, do the following:

  • Use a lower learning rate with a large number of iterations.
  • Give max_bin a large value.
  • Assign a large value to num_leaves.
  • Increase the amount of training data.
  • Use categorical features directly.

If you need to get things moving faster, do the following:

  • Give max_bin a small value.
  • Use bagging by setting bagging_fraction and bagging_freq.
  • Enable feature sub-sampling by setting feature_fraction.
  • Use save_binary to speed up data loading in future runs.

When it comes to obtaining rapid, high-accuracy results, LightGBM is regarded as a very fast algorithm and one of the most widely used in machine learning. The LightGBM manual lists over 100 parameters to choose from.
