
Table 4 Macro-AUC obtained by 5×5-fold cross-validation over Data-201706 for the nine competing methods

From: HPOAnnotator: improving large-scale prediction of HPO annotations by low-rank approximation with HPO semantic similarities and multiple PPI networks

| Method | [1-10] | [11-30] | [31-100] | [101-300] | [≥301] |
|---|---|---|---|---|---|
| LR | 0.526 | 0.553 | 0.633 | 0.735 | 0.755 |
| BiRW | 0.608 | 0.854 | 0.875 | 0.835 | 0.815 |
| OGL | 0.586 | 0.670 | 0.788 | 0.812 | 0.806 |
| DLP | 0.622 | 0.880 | 0.914 | 0.863 | 0.834 |
| NMF | 0.649 | 0.908 | 0.942 | 0.948 | 0.911 |
| NMF-PPN | 0.651 | 0.911 | 0.943 | 0.951 | 0.916 |
| NMF-NHPO | 0.653 | 0.919 | 0.946 | 0.947 | 0.919 |
| AiPA | 0.654 | 0.922 | 0.943 | 0.957 | **0.931** |
| HPOAnnotator | **0.655** | **0.925** | **0.947** | **0.958** | **0.931** |

  1. The method that performs best in terms of this evaluation metric is shown in boldface
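The table reports macro-AUC, i.e., the AUC computed per HPO term and then averaged, estimated under repeated k-fold cross-validation. The following is a minimal illustrative sketch (not the authors' code) of how such a score could be computed with scikit-learn; `predict_scores`, `X`, and `Y` are hypothetical placeholders for a prediction method, a protein feature matrix, and a binary protein-by-HPO-term annotation matrix.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score


def cv_macro_auc(X, Y, predict_scores, n_splits=5, seed=0):
    """Estimate macro-AUC of a multi-label predictor by k-fold CV.

    predict_scores(X_train, Y_train, X_test) is assumed to return a
    score matrix of shape (len(X_test), n_hpo_terms).
    """
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, test_idx in kf.split(X):
        scores = predict_scores(X[train_idx], Y[train_idx], X[test_idx])
        # average='macro': AUC is computed for each HPO term separately
        # and then averaged, so rare and common terms count equally.
        # (Terms with no positive protein in the test fold would raise an
        # error and would need to be filtered out in practice.)
        aucs.append(roc_auc_score(Y[test_idx], scores, average="macro"))
    return float(np.mean(aucs))
```

A 5×5-fold estimate, as in the table, would correspond to running this procedure with five different random seeds and averaging the five resulting scores.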