AutoML-Benchmark

A Performance Benchmark of Different AutoML Frameworks


Frameworks

The benchmark compares the following frameworks: AutoML_Alex, AutoGluon, H2o, Auto-sklearn, Auto_ml, TPOT, and the gradient-boosting baselines CatBoost and LightGBM.

Benchmark Settings

  • Evaluation repeated 5 times (5 K-folds); a sketch of the evaluation loop follows the server details below
  • Time limit: 1 hour per fold
  • Datasets with 1000 or more rows/examples
  • Each framework runs in Docker

Server:

AWS m5d.4xlarge (16 vCPU, 64 GB RAM, 2×300 GB NVMe SSD)
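
A minimal sketch of the evaluation loop implied by the settings above (not the repository's actual runner): stratified 5-fold cross-validation scored with ROC AUC, with a hypothetical `fit_with_time_limit` callable standing in for a framework's training call under the 1-hour budget.

```python
# Sketch only: assumes numpy arrays X, y and a hypothetical
# fit_with_time_limit(X_train, y_train, time_limit_sec) that trains one
# framework under the time budget and returns a fitted model.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def evaluate_framework(fit_with_time_limit, X, y, time_limit_sec=3600, seed=42):
    """Return (mean AUC, std AUC) over the 5 folds for one framework/dataset."""
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    fold_scores = []
    for train_idx, test_idx in cv.split(X, y):
        model = fit_with_time_limit(X[train_idx], y[train_idx], time_limit_sec)
        proba = model.predict_proba(X[test_idx])[:, 1]  # positive-class probability
        fold_scores.append(roc_auc_score(y[test_idx], proba))
    return float(np.mean(fold_scores)), float(np.std(fold_scores))
```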

Binary-Classification

The score is the sum, over all datasets, of each framework's reverse position in the per-dataset ranking (the bigger, the better); a small example of this scoring follows the table:


| Framework    | Score |
|--------------|-------|
| AutoML_Alex  | 79    |
| AutoGluon    | 74    |
| H2o          | 54    |
| CatBoost     | 52    |
| Auto_ml      | 41    |
| Auto-sklearn | 37    |
| LightGBM     | 36    |
| TPOT         | 23    |
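
The exact tie-handling of the reverse-position score is not spelled out here; the sketch below assumes the common convention that, on each dataset, the best framework receives N points (N = number of frameworks), the second best N−1, and so on, with points summed over all datasets. Under this assumed convention, 8 frameworks and 11 datasets give a maximum possible score of 88, consistent with the table above.

```python
# Sketch of the "sum of reverse positions" score under the assumed convention.
def reverse_position_scores(auc_by_dataset):
    """auc_by_dataset: {dataset: {framework: auc}} -> {framework: total score}."""
    totals = {}
    for scores in auc_by_dataset.values():
        ranked = sorted(scores, key=scores.get, reverse=True)  # best AUC first
        n = len(ranked)
        for position, framework in enumerate(ranked, start=1):
            totals[framework] = totals.get(framework, 0) + (n - position + 1)
    return totals

# Toy example with two datasets and three frameworks:
example = {
    "credit-g": {"A": 0.80, "B": 0.78, "C": 0.77},
    "phoneme":  {"A": 0.96, "B": 0.97, "C": 0.95},
}
print(reverse_position_scores(example))  # {'A': 5, 'B': 5, 'C': 2}
```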

Datasets (Binary-Classification)

Datasets were chosen to have 1000 or more rows/examples and to pose a problem not already solved with an AUC of 0.99. Each can be fetched by its OpenML ID (see the snippet after the table).

| Name                   | OpenML ID | Features | Rows   |
|------------------------|-----------|----------|--------|
| adult                  | 179       | 14       | 48842  |
| Amazon_employee_access | 4135      | 9        | 32769  |
| bank-marketing         | 1461      | 16       | 45211  |
| Click_prediction_small | 1226      | 11       | 798964 |
| credit-g               | 31        | 20       | 1000   |
| eeg-eye-state          | 1471      | 14       | 14980  |
| electricity            | 151       | 8        | 45312  |
| kc1                    | 1067      | 20       | 2109   |
| mozilla4               | 1046      | 5        | 15545  |
| phoneme                | 1489      | 5        | 5404   |
| qsar-biodeg            | 1494      | 41       | 1055   |
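
As an illustrative loading path, each dataset can be pulled by its OpenML ID with scikit-learn's `fetch_openml`; the repository itself may obtain the data differently.

```python
# Illustration only: load one benchmark dataset by its OpenML ID.
from sklearn.datasets import fetch_openml

dataset = fetch_openml(data_id=31, as_frame=True)  # 31 = credit-g
X, y = dataset.data, dataset.target
print(X.shape)  # roughly (1000, 20), matching the table above
```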

AUC on each dataset (each cell is the mean ± standard deviation over the folds; a sketch of how such cells can be computed follows the table):

| Framework | adult | amazon | bank-marketing | click_predict | credit-g | eeg-eye-state | electricity | kc1 | mozilla4 | phoneme | qsar-biodeg |
|---|---|---|---|---|---|---|---|---|---|---|---|
| AutoML_Alex | 0.9160 ± 0.0033 | 0.8687 ± 0.0139 | 0.9371 ± 0.0032 | 0.7223 ± 0.0060 | 0.8011 ± 0.0229 | 0.9968 ± 0.0004 | 0.9753 ± 0.0055 | 0.8394 ± 0.0232 | 0.9887 ± 0.0017 | 0.9643 ± 0.0019 | 0.9353 ± 0.0140 |
| TPOT | 0.9126 ± 0.0026 | 0.7895 ± 0.0339 | 0.8492 ± 0.0070 | 0.7114 ± 0.0045 | 0.7816 ± 0.0189 | 0.5000 ± 0.0000 | 0.7721 ± 0.0741 | 0.8012 ± 0.0153 | 0.9734 ± 0.0016 | 0.9630 ± 0.0030 | 0.9338 ± 0.0491 |
| H2o | 0.9143 ± 0.0020 | 0.8551 ± 0.0030 | 0.9371 ± 0.0037 | 0.7206 ± 0.0041 | 0.7765 ± 0.0479 | 0.9887 ± 0.0016 | 0.9842 ± 0.0006 | 0.8230 ± 0.0316 | 0.9832 ± 0.0029 | 0.9632 ± 0.0075 | 0.9338 ± 0.0175 |
| Auto-sklearn | 0.9112 ± 0.0031 | 0.5000 ± 0.0000 | 0.9345 ± 0.0045 | 0.7046 ± 0.0064 | 0.7798 ± 0.0373 | 0.9926 ± 0.0026 | 0.9652 ± 0.0021 | 0.8246 ± 0.0227 | 0.9813 ± 0.0030 | 0.9589 ± 0.0037 | 0.9328 ± 0.0145 |
| AutoGluon | 0.9148 ± 0.0032 | 0.8577 ± 0.0124 | 0.9401 ± 0.0034 | 0.7159 ± 0.0074 | 0.7801 ± 0.0249 | 0.9993 ± 0.0002 | 0.9886 ± 0.0006 | 0.8286 ± 0.0265 | 0.9850 ± 0.0024 | 0.9686 ± 0.0023 | 0.9377 ± 0.0082 |
| Auto_ml | 0.9147 ± 0.0033 | 0.8286 ± 0.0143 | 0.9035 ± 0.0058 | 0.7188 ± 0.0066 | 0.7925 ± 0.0227 | 0.5000 ± 0.0000 | 0.9617 ± 0.0018 | 0.7940 ± 0.0267 | 0.9786 ± 0.0040 | 0.9666 ± 0.0030 | 0.9312 ± 0.0098 |
| LightGBM | 0.9144 ± 0.0037 | 0.8463 ± 0.0113 | 0.9365 ± 0.0034 | 0.7160 ± 0.0057 | 0.7795 ± 0.0274 | 0.9685 ± 0.0041 | 0.9545 ± 0.0026 | 0.7749 ± 0.0246 | 0.9799 ± 0.0024 | 0.9532 ± 0.0029 | 0.9358 ± 0.0073 |
| CatBoost | 0.9150 ± 0.0030 | 0.8467 ± 0.0090 | 0.9379 ± 0.0040 | 0.7191 ± 0.0058 | 0.7837 ± 0.0222 | 0.9823 ± 0.0023 | 0.9563 ± 0.0034 | 0.8224 ± 0.0226 | 0.9789 ± 0.0022 | 0.9530 ± 0.0035 | 0.9362 ± 0.0093 |
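
As an illustration of how such "mean ± std" cells can be produced from raw per-fold results, the sketch below assumes a hypothetical long-format pandas DataFrame with columns `framework`, `dataset`, `fold`, and `auc`.

```python
# Sketch only: aggregate per-fold AUC into a framework × dataset summary table.
import pandas as pd

def summarize(results: pd.DataFrame) -> pd.DataFrame:
    # Mean and standard deviation of AUC over the folds, per framework/dataset.
    agg = results.groupby(["framework", "dataset"])["auc"].agg(["mean", "std"])
    # Format each cell as "mean ± std" and pivot datasets into columns.
    cells = agg.apply(lambda row: f"{row['mean']:.4f} ± {row['std']:.4f}", axis=1)
    return cells.unstack("dataset")
```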

Boxplot Scores

[Per-dataset box plots of the fold AUC scores for each framework; one figure per dataset.]
