
Table 1 Comparison of model architectures and settings across three deep learning-based cancer survival prognosis approaches

From: Deep learning-based cancer survival prognosis from RNA-seq data: approaches and evaluations

| Property | Cox-nnet | DeepSurv | AECOX |
| --- | --- | --- | --- |
| Deep Learning Architecture | Single-layer neural network | Multi-layer neural network | Multi-layer autoencoder neural network |
| Deep Learning Programming Framework | Theano | Theano, Lasagne | PyTorch |
| Hyper-parameters | L2 regularization weight λ | Learning rate; number of hidden layers; hidden layer sizes; learning rate decay; momentum; L2 regularization weight λ; dropout rate | Learning rate; autoencoder input-output error weight λ1; L1 regularization weight λ2; L2 regularization weight λ3; dropout rate; number of hidden layers; regularization method |
| Hyper-parameter Search Method | Line search | Sobol solver | Sobol solver |
| Hyper-parameter Search Iterations | 12 | 100 | 100 |
| Maximum Epochs | 4000 | 500 | 300 |
| Number of Hidden Layers | 1 | 1, 2, 3, or 4 | 0, 2, 4, 6, or 8 |
| Last Hidden Layer Size | Integer in range [131, 135] | Integer in range [30, 50] | 16 |
| Regularization Methods | L1, L2, dropout | L2, dropout | Dropout, L1, L2, elastic net |
| Basic Objective (Loss) Function | Negative Cox partial log-likelihood, shared by all three models (equation below the table) | (shared) | (shared) |
| Optimization Method | Nesterov accelerated gradient descent | Stochastic gradient descent (SGD) | Adaptive moment estimation (Adam) |
| Network Architecture | (Input Layer) – (Hidden Layer, tanh) – (Hazard Ratio) | (Input Layer) – (Hidden Layer, ReLU/SELU) – … – (Hidden Layer, ReLU/SELU) – (Hazard Ratio) | (Input Layer) – (Hidden Layers, ReLU/Dropout) – (Code) – (Hidden Layers, ReLU/Dropout) – (Output Layer); (Code, tanh) – (Hazard Ratio) |

The basic objective shared by the three models is the negative logarithm of the Cox proportional-hazards partial likelihood:

\( \hat{\Theta}=\operatorname{argmin}_{\Theta}\left\{-\sum_{i:{C}_i=1}\left(\sum_{k=1}^{K}{\beta}_k{X}_{ik}-\log \sum_{j:{Y}_j\ge {Y}_i}{\theta}_j\right)\right\},\qquad {\theta}_j=\exp \left(\sum_{k=1}^{K}{\beta}_k{X}_{jk}\right) \)

where \( C_i \) is the event indicator (1 if the event was observed, 0 if censored), \( Y_i \) the observed survival or censoring time, \( X_{ik} \) the inputs to the final Cox layer, and \( \beta_k \) its weights. Code sketches of this loss and of the AECOX architecture string follow the table.