
Releases: dmlc/xgboost

Release candidate of version 3.0.0

26 Feb 07:04
4bfd4bf
Pre-release

See #11286.

2.1.4 Patch Release

06 Feb 18:23
62e7923

The 2.1.4 patch release incorporates the following fixes on top of the 2.1.3 release:

  • XGBoost is now compatible with scikit-learn 1.6 (#11021, #11162)
  • Build wheels with CUDA 12.8 and enable Blackwell support (#11187, #11202)
  • Adapt to RMM 25.02 logger changes (#11153)

Full Changelog: v2.1.3...v2.1.4

Additional artifacts:

You can verify the downloaded packages by running the following command on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
b6ce5870d03cc1233cad5ff8460f670a2aff78625adfb578c0b9eec3b8b88406  xgboost-2.1.4.tar.gz
9780ba8314824eac7b8565cc2af8ea692fd4898712052a49132ac3fdf7c0ab2b  xgboost_r_gpu_linux_2.1.4.tar.gz

Experimental binary packages for R with CUDA enabled

  • xgboost_r_gpu_linux_2.1.4.tar.gz: Download

Source tarball

2.1.3 Patch release

26 Nov 10:21
600be4d

The 2.1.3 patch release makes the following bug fixes:

  • [pyspark] Support large model size (#10984).
  • Fix rng for the column sampler (#10998).
  • Handle cudf.pandas proxy objects properly (#11014).

Additional artifacts:

You can verify the downloaded packages by running the following command on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
90b1b7b770803299b337dd9b9206760d9c16f418403c77acce74b350c6427667  xgboost-2.1.3.tar.gz
96b41da84769920408c5733d05fa2d56b53feeefd209e3d96842cf9c266e27ea  xgboost_r_gpu_linux_2.1.3.tar.gz

Experimental binary packages for R with CUDA enabled

  • xgboost_r_gpu_linux_2.1.3.tar.gz: Download

Source tarball

2.1.2 Patch Release

23 Oct 14:31
f199039

The 2.1.2 patch release makes the following bug fixes:

  • Clean up and modernize release-artifacts.py (#10818)
  • Fix ellpack categorical feature with missing values. (#10906)
  • Fix unbiased ltr with training continuation. (#10908)
  • Fix potential race in feature constraint. (#10719)
  • Fix boolean array for arrow-backed DF. (#10527)
  • Ensure that pip check does not fail due to a bad platform tag (#10755)
  • Check cub errors (#10721)
  • Limit the maximum number of threads. (#10872)
  • Fixes for large size clusters. (#10880)
  • POSIX compliant poll.h and mmap (#10767)

Additional artifacts:

You can verify the downloaded packages by running the following command on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
a84fc7d9846c24659a2ad16788a7eefa9640b19eea9bbc65f30e0a9d53c52453  xgboost-2.1.2.tar.gz
999eff38533ea79ab3a1f0da524c54f6d0abd2ef220b6dbb9ba1331703e898bc  xgboost_r_gpu_linux_2.1.2.tar.gz

Experimental binary packages for R with CUDA enabled

  • xgboost_r_gpu_linux_2.1.2.tar.gz: Download

Source tarball

2.1.1 Patch Release

30 Jul 22:22
9c9db12

The 2.1.1 patch release makes the following bug fixes:

In addition, it contains several enhancements:

Full Changelog: v2.1.0...v2.1.1

Additional artifacts:

You can verify the downloaded packages by running the following command on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
eddbc5200b7c5210f2b8974b9d2a0328a30753416bfb81fdaf5040f4f7abb222  xgboost-2.1.1.tar.gz
3ba5a6e0c609bd5cc0a667d83c57457c06778bece50863e58c8bc1b4eb415fc6  xgboost_r_gpu_linux_2.1.1.tar.gz

Experimental binary packages for R with CUDA enabled

  • xgboost_r_gpu_linux_2.1.1.tar.gz: Download

Source tarball

Release 2.1.0 stable

20 Jun 07:46
213ebf7

2.1.0 (2024 Jun 20)

We are thrilled to announce the XGBoost 2.1 release. This note will start by summarizing some general changes and then highlighting specific package updates. As we are working on a new R interface, this release will not include the R package. We'll update the R package as soon as it's ready. Stay tuned!

Networking Improvements

An important piece of ongoing work for XGBoost, which we've been collaborating on, is supporting resilience for improved scaling and federated learning on various platforms. The existing networking library in XGBoost, adopted from the RABIT project, could no longer meet the feature demand. We've revamped the RABIT module in this release to pave the way for future development. We chose an in-house implementation over an existing library because development is active, with frequent new feature requests such as loading extra plugins for federated learning. The new implementation features:

  • Both CPU and GPU communication (based on NCCL).
  • A reusable tracker for both the Python package and JVM packages. With the new release, the JVM packages no longer require Python as a runtime dependency.
  • Supports federated communication patterns for both CPU and GPU.
  • Supports timeout. The high-level interface parameter is currently hard-coded to 30 minutes, which we plan to improve.
  • Supports significantly more data types.
  • Supports thread-based workers.
  • Improved handling for worker errors, including better error messages when one of the peers dies during training.
  • Works with IPv6. Currently, this is only supported by the dask interface.
  • Built-in support for various operations like broadcast, allgatherV, allreduce, etc.

Related PRs (#9597, #9576, #9523, #9524, #9593, #9596, #9661, #10319, #10152, #10125, #10332, #10306, #10208, #10203, #10199, #9784, #9777, #9773, #9772, #9759, #9745, #9695, #9738, #9732, #9726, #9688, #9681, #9679, #9659, #9650, #9644, #9649, #9917, #9990, #10313, #10315, #10112, #9531, #10075, #9805, #10198, #10414).

The existing option of using MPI in RABIT is removed in the release. (#9525)

NCCL is now fetched from PyPI.

In previous versions, XGBoost statically linked NCCL, which significantly increased the binary size and led to hitting the PyPI repository limit. The new release instead loads NCCL dynamically from an external source, reducing the binary size. For the PyPI package, the nvidia-nccl-cu12 package is fetched during installation. With more downstream packages reusing NCCL, we expect user environments to become slimmer in the future as well. (#9796, #9804, #10447)

Parts of the Python package now require glibc 2.28+

Starting from 2.1.0, XGBoost Python package will be distributed in two variants:

  • manylinux_2_28: for recent Linux distros with glibc 2.28 or newer. This variant comes with all features enabled.
  • manylinux2014: for old Linux distros with glibc older than 2.28. This variant does not support GPU algorithms or federated learning.

The pip package manager will automatically choose the correct variant depending on your system.

Starting from May 31, 2025, we will stop distributing the manylinux2014 variant and exclusively distribute the manylinux_2_28 variant. We made this decision so that our CI/CD pipeline won't have to depend on software components that have reached end-of-life (such as CentOS 7). We strongly encourage everyone to migrate to a recent Linux distro in order to use future versions of XGBoost.

Note. If you want to use GPU algorithms or federated learning on an older Linux distro, you have two alternatives:

  1. Upgrade to a recent Linux distro with glibc 2.28+. OR
  2. Build XGBoost from source.
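A quick way to check which variant applies to your machine is to inspect the glibc version. The following is a minimal sketch (the helper name is ours, not part of XGBoost):

```python
import platform

def supports_manylinux_2_28(glibc_version: str) -> bool:
    """True if the given glibc version string is 2.28 or newer."""
    major, minor = (int(x) for x in glibc_version.split(".")[:2])
    return (major, minor) >= (2, 28)

libc, version = platform.libc_ver()
if libc == "glibc":
    if supports_manylinux_2_28(version):
        print("pip will select the manylinux_2_28 wheel (all features)")
    else:
        print("pip will fall back to manylinux2014 (no GPU / federated learning)")
else:
    print("not a glibc-based Linux; wheel selection differs")
```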

Multi-output

We continue the work on multi-target and vector leaf in this release:

  • Revise the support for custom objectives with a new API, XGBoosterTrainOneIter. This new function supports strided matrices and CUDA inputs. In addition, custom objectives now return the correct shape for prediction. (#9508)
  • The hinge objective now supports multi-target regression (#9850)
  • Fix the gain calculation with vector leaf (#9978)
  • Support graphviz plot for multi-target tree. (#10093)
  • Fix multi-output with alternating strategies. (#9933)

Please note that the feature is still in progress and not suitable for production use.
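As a sketch of what a multi-target custom objective looks like (the function and names here are ours, not part of the API): with multi-target data, predictions arrive as a 2-d array, and the objective returns gradients and Hessians of the same shape.

```python
import numpy as np

# Multi-target squared error: gradients and Hessians keep the
# (n_samples, n_targets) shape of the predictions.
def multi_squared_error(predt: np.ndarray, y: np.ndarray):
    grad = predt - y
    hess = np.ones_like(predt)
    return grad, hess
```

In the Python package this would be passed via the `obj` argument of `xgb.train`, with labels reshaped to match `predt`; treat the exact wiring as an assumption and consult the multi-output demos for the supported pattern.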

Federated Learning

Progress has been made on federated learning with improved support for column-split, including the following updates:

Ongoing work for SYCL support.

XGBoost is developing a SYCL plugin for SYCL devices, starting with the hist tree method. (#10216, #9800, #10311, #9691, #10269, #10251, #10222, #10174, #10080, #10057, #10011, #10138, #10119, #10045, #9876, #9846, #9682) XGBoost can now run inference on SYCL devices, and work on adding SYCL support for training is ongoing.

Looking ahead, we plan to complete the training in the coming releases and then focus on improving test coverage for SYCL, particularly for Python tests.

Optimizations

  • Implement the column sampler in CUDA for GPU-based tree methods. This speeds up training when column sampling is employed. (#9785)
  • CMake LTO and CUDA arch (#9677)
  • Small optimization to external memory with a thread pool. This reduces the number of threads launched during iteration. (#9605, #10288, #10374)

Deprecation and breaking changes

Package-specific breaking changes are outlined in respective sections. Here we list general breaking changes in this release:

  • The command line interface is deprecated due to the increasing complexity of the machine learning ecosystem. Building a machine learning model using a command shell is no longer feasible and could mislead newcomers. (#9485)
  • Universal binary JSON is now the default format for saving models (#9947, #9958, #9954, #9955). See #7547 for more info.
  • XGBoosterGetModelRaw is now removed, after being deprecated in 1.6. (#9617)
  • Drop support for loading remote files. Users are encouraged to use dedicated libraries to fetch remote content. (#9504)
  • Remove the dense libsvm parser plugin. This plugin was never tested or documented. (#9799)
  • XGDMatrixSetDenseInfo and XGDMatrixSetUIntInfo are now deprecated. Use the array interface based alternatives instead.

Features

This section lists some new features that are general to all language bindings. For package-specific changes, please visit respective sections.

  • Adopt a new XGBoost logo (#10270)
  • Native XGBoost now supports the dataframe data format. This improvement enhances performance and reduces memory usage when working with dataframe-based structures such as pandas, Arrow, and R dataframes. (#9828, #9616, #9905)
  • Change default metric for gamma regression to deviance. (#9757)
  • Normalization for learning to rank is now optional with the introduction of the new lambdarank_normalization parameter. (#10094)
  • Contribution prediction with QuantileDMatrix on CPU. (#10043)
  • XGBoost on macOS no longer bundles the OpenMP runtime. Users can install the latest runtime from their package manager of choice. (#10440) In addition, JVM packages on macOS are now built with OpenMP support. (#10449)
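For example, the new normalization switch sits alongside the other learning-to-rank parameters. A minimal sketch (the parameter values here are illustrative, not recommendations):

```python
# Illustrative ranking configuration; pass this dict as `params` to xgb.train.
params = {
    "objective": "rank:ndcg",
    "lambdarank_pair_method": "topk",   # pre-existing 2.0 parameter
    "lambdarank_normalization": False,  # new in 2.1; normalization is on by default
}
```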

Bug fixes

  • Fix training with categorical data from external memory. (#10433)
  • Fix compilation with CTK-12. (#10123)
  • Fix inconsistent runtime library on Windows. (#10404)
  • Fix default metric configuration. (#9575)
  • Fix feature names with special characters. (#9923)
  • Fix global configuration for external memory training. (#10173)
  • Disable column sample by node for the exact tree method. (#10083)
  • Fix the FieldEntry constructor specialization syntax error (#9980)
  • Fix pairwise objective with NDCG metric along with custom gain. (#10100)
  • Fix the default value for lambdarank_pair_method. (#10098)
  • Fix UBJSON with boolean values. No existing code is affected by this fix. (#10054)
  • Be more lenient on floating point errors for AUC. This prevents the AUC > 1.0 error. (#10264)
  • Check support status for categorical features. This prevents gblinear from treating categorical features as numerical. (#9946)

Document

Here is a list of documentation changes not specific to any XGBoost package.

Python package

  • Dask

    Other than the changes in networking, we have some optimizations and document updates in dask:

      • Filter models on workers instead of clients; this prevents an OOM error on the client machine. (#9518)
      • Users are now encouraged to use from xgboost import dask instead of import xgboost.dask to avoid pulling in unnecessary dependencies for non-dask users. (#9742)
      • Add seed to demos. (#10009)
      • New document for using dask XGBoost with Kubernetes. (#10271)
      • Workaround potentially unaligned pointer from an empty partition. (#10418)
      • Workaround a ...

Release candidate of version 2.1.0

31 May 17:39
7de9112
Pre-release

See #10356 for details.

2.0.3 Patch Release

19 Dec 10:07
82d846b

The 2.0.3 patch release makes the following bug fixes:

Full Changelog: v2.0.2...v2.0.3

Additional artifacts:

You can verify the downloaded packages by running the following command on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
7c4bd1cf6162d335fd20a8168a54dd11508342f82fbf381a80c02ac57be0bce4  xgboost-2.0.3.tar.gz
d0c3499504133a8ea0043da2974c51cc71aae792f0719080bc227d7add8fb881  xgboost_r_gpu_win64_2.0.3.tar.gz
ee47da5b21231965b1f054d191a5418543377f4ba0d0615a593a6f99d1832ca1  xgboost_r_gpu_linux_2.0.3.tar.gz

Experimental binary packages for R with CUDA enabled

  • xgboost_r_gpu_linux_2.0.3.tar.gz: Download
  • xgboost_r_gpu_win64_2.0.3.tar.gz: Download

2.0.2 Patch Release

09 Nov 20:24
41ce8f2

The 2.0.2 patch release makes the following bug fixes:

  • [jvm-packages] Add Scala version suffix to the xgboost-jvm package (#9776). The JVM packages had incorrect metadata, and the 2.0.2 patch release fixes it.
  • [backport] Fix using categorical data with the ranker. (#9753)

2.0.1 Patch Release

24 Oct 08:56
a408254

This is a patch release for bug fixes.

Bug fixes

In addition, this is the first release where the JVM package is distributed with native support for Apple Silicon.

Additional artifacts:

You can verify the downloaded packages by running the following command on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
529e9d0f88c2a7abae833f05b7d1e7e7ce01de20481ea60f6ebb6eb7fc96ba69  xgboost.tar.gz
25342c91e7cda98b1362b70282b286c2e4f3e996b518fb590c1303f53f39f188  xgboost_r_gpu_win64_2.0.1.tar.gz
3d8cde1160ab135c393b8092ce0475709dff318024022b735a253d968f9711b3  xgboost_r_gpu_linux_2.0.1.tar.gz

Experimental binary packages for R with CUDA enabled

  • xgboost_r_gpu_linux_2.0.1.tar.gz: Download
  • xgboost_r_gpu_win64_2.0.1.tar.gz: Download

Source tarball