
Table 3 Results on the eight evaluation criteria obtained by 5×5-fold cross-validation over Data-201706 for the nine competing methods

From: HPOAnnotator: improving large-scale prediction of HPO annotations by low-rank approximation with HPO semantic similarities and multiple PPI networks

| Method | AUC | AUPR | micro-AUC | micro-AUPR | macro-AUC | macro-AUPR | leaf-AUC | leaf-AUPR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LR | 0.775 | 0.028 | 0.760 | 0.072 | 0.579 | 0.052 | 0.532 | 0.020 |
| BiRW | 0.875 | 0.066 | 0.826 | 0.096 | 0.732 | 0.056 | 0.597 | 0.031 |
| OGL | 0.785 | 0.051 | 0.776 | 0.078 | 0.603 | 0.034 | 0.536 | 0.014 |
| DLP | 0.902 | 0.073 | 0.875 | 0.100 | 0.736 | 0.094 | 0.659 | 0.055 |
| NMF | 0.961 | 0.496 | 0.900 | 0.273 | 0.753 | 0.139 | 0.701 | 0.089 |
| NMF-PPN | 0.963 | 0.525 | 0.902 | 0.281 | 0.756 | 0.142 | 0.703 | 0.089 |
| NMF-NHPO | 0.965 | 0.541 | 0.903 | 0.290 | 0.756 | 0.144 | 0.702 | 0.094 |
| AiPA | 0.970 | 0.559 | 0.905 | 0.295 | **0.760** | 0.146 | 0.705 | 0.096 |
| HPOAnnotator | **0.971** | **0.562** | **0.907** | **0.296** | **0.760** | **0.152** | **0.706** | **0.097** |

  1. The best value for each evaluation metric is shown in boldface
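The column headers distinguish micro- from macro-averaged scores: micro-averaging pools every (gene, HPO term) prediction into a single ranking before computing AUC/AUPR, while macro-averaging scores each HPO term separately and averages the per-term results. A minimal sketch of that distinction, assuming scikit-learn's `average="micro"`/`average="macro"` conventions match the paper's definitions (the toy matrices and scores below are illustrative, not from the study):

```python
# Sketch (not the paper's code): micro- vs macro-averaged AUC and AUPR
# on a small multi-label example, using scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Toy annotation matrix: rows = genes, columns = HPO terms (1 = annotated).
Y_true = np.array([
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
])

# Hypothetical predicted association scores from some model.
Y_score = np.array([
    [0.9, 0.2, 0.8],
    [0.1, 0.7, 0.3],
    [0.8, 0.6, 0.2],
    [0.3, 0.1, 0.9],
])

# micro: flatten all (gene, term) pairs into one ranking, then score once.
micro_auc = roc_auc_score(Y_true, Y_score, average="micro")
micro_aupr = average_precision_score(Y_true, Y_score, average="micro")

# macro: score each HPO term (column) independently, then take the mean.
macro_auc = roc_auc_score(Y_true, Y_score, average="macro")
macro_aupr = average_precision_score(Y_true, Y_score, average="macro")

print(micro_auc, micro_aupr, macro_auc, macro_aupr)
```

Micro-averaging weights frequent terms more heavily (every pair counts equally), whereas macro-averaging gives each term equal weight regardless of how many genes it annotates; the leaf-AUC/leaf-AUPR columns restrict evaluation to leaf terms of the HPO hierarchy.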