 Research
 Open Access
A categorical network approach for discovering differentially expressed regulations in cancer
BMC Medical Genomics volume 6, Article number: S1 (2013)
Abstract
Background
The problem of efficient utilization of genome-wide expression profiles for the identification and prediction of complex disease conditions is both important and challenging. Polygenic pathologies such as most types of cancer involve dysregulation of many interacting genes, which has prompted the search for suitable statistical models for their representation. By accounting for changes in gene regulation between comparable conditions, graphical statistical models are expected to improve prediction precision.
Methods
In comparison problems with two or more experimental conditions, we represent the classes by categorical Bayesian networks that share one and the same graph structure but have class-specific probability parameters. The graph structure is learned by a score-based procedure that maximizes the difference between class probabilities using a suitable measure of divergence. The proposed framework includes an indirect model selection by adhering to a principle of optimal class separation and identifies interactions presenting significant differences between the compared conditions.
Results
We evaluate the performance of the new model against some benchmark algorithms such as support vector machine, penalized linear regression and linear Gaussian networks. The classifiers are compared by prediction accuracy across 15 different data sets from breast, lung, gastric and renal cancer studies. In addition to the demonstrated strong performance against the competitors, the proposed method is able to identify disease-specific changes in gene regulation which are inaccessible to other approaches. The latter is illustrated by analyzing some gene interactions differentiating adenocarcinoma and squamous cell lung cancers.
Introduction
High-throughput technologies such as microarrays supply means for genome-wide observation of cell samples and provide unique opportunities for studying complex heterogeneous diseases. It is understood, for example, that the highly polygenic pathology of cancers involves not single gene mutations but alterations in multiple genetic pathways [1]. Even cancer subtypes with a common origin can be driven by very different dysregulations at the gene interaction level [2]. Computational analysis of high-throughput genetic data thus requires adequate multivariate statistical models with the capacity to study gene regulations at the system level. Graphical models such as Bayesian networks have been proposed for describing cell signaling processes [3] and for the analysis of expression data [4], to mention but a few applications, and have been accepted as important tools in the field of systems biology.
We present a categorical Bayesian network framework based on an original learning method for the analysis of gene expression data, in particular for the classification of gene expression profiles coming from different populations. Typical applications include diagnostic tests for disease conditions and differentiation between disease subtypes. More formally, we assume we are given a sample of n microarrays measuring the expression levels of N (potentially thousands of) genes or gene probes under two different experimental conditions. Usually n is much smaller than N. We are interested in designing a methodology for setting these two conditions apart and for assigning gene profiles to their originating classes.
Many classical approaches such as linear discriminant analysis are ill-suited for large-N, small-n settings. Other models, such as LASSO [5] and support vector machines (SVM) [6], either disregard possible gene associations or defy explicit interpretation. In contrast, Bayesian network (BN) models are able to identify associated genes and parsimoniously describe the global gene interaction structure [4, 7, 8]. BNs have been recognized as a worthwhile alternative to more traditional state-of-the-art models in terms of discrimination and classification power [9, 10], but their widespread application is nevertheless not evident.
A major issue in applying BNs to the analysis of gene expression data is choosing the complexity of the underlying graph structure. Simple models may understate the complexity of the observed gene system. On the other hand, overly complex ones often overfit the data and, as a result, diminish the prediction power. A standard approach to this model selection problem employs the Bayesian paradigm and performs maximum a posteriori (MAP) estimation [11]. Since the posterior is usually not available in closed form, the MAP approach has to be implemented by computationally expensive Monte Carlo procedures [9] or by heuristic algorithms that approximate MAP [10]. Moreover, the efficiency of MAP in the context of model selection depends crucially on the selected prior. The so-called constraint-based learning methods such as the PC algorithm [12] also require setting additional parameters (the α-level for the conditional independence tests in PC) in order to choose the 'right' complexity of the model. Similarly, score-based learning methods such as penalized maximum likelihood estimation [13] rely on parameters controlling the penalization as a function of complexity. Therefore, in single-population (single-class) settings, model selection seems inevitably to involve some input or control external to the data itself.
In the theory of statistical learning there are two standard approaches for choosing model selection parameters. One is based on large-sample asymptotic properties of the estimator and ensures that the latter is consistent. The other accepted practice is to follow a data-driven cross-validation (CV) procedure. Both approaches, however, have disadvantages: the former may suffer from lack of optimality in finite sample settings, while the CV approach can be computationally prohibitive for the purpose of network learning. In addition, some authors have raised questions about the theoretical justification of CV [14]. Our approach is motivated by the intuitive expectation that in two-class (or multi-class) problems, model selection can be more easily resolved in a self-contained manner. We propose a categorical BN framework with a score-based learning algorithm that includes the class membership information in the optimization function. It addresses the model selection problem by choosing networks that provide optimal class separation. Our methodology can be applied to gene expression data of reasonable size and, as we show, is not only effective in gene profile classification but can also provide important insights into the functionality of the observed biological systems.
Categorical Bayesian networks (CBNs) represent associations between discrete random variables through directed acyclic graphs. In contrast to linear Gaussian BNs, CBNs are capable of representing nonlinear relationships between their node variables. Although applying CBNs to continuous gene expression data involves a loss of information in the necessary discretization step, CBNs often benefit from a more faithful representation of the observed gene interactions than linear BNs (see Figure 1 below for examples). A number of existing methods exploit CBNs to mitigate gene expression noise and improve classification accuracy [15, 16]. In the context of microarray data, another advantage of discretization is its robustness to the so-called lab or batch effect inherent to many multi-laboratory studies [17].
The paper is organized as follows. We start with a brief introduction to CBNs and the maximum likelihood (ML) principle for CBN estimation, and formulate a novel scoring function as an alternative to the standard log-likelihood function used in ML. Our discriminating function is based on the Kullback-Leibler (KL) divergence between conditional probability tables (Eq. (3) below). For given two-class training data, we reconstruct a CBN that includes only those gene connections that present significant class differences and thus reveal the implicated gene interaction changes. We then describe a classification algorithm that models the observed conditions using the already estimated graph structure. The representing CBNs are distinguished by their class-specific probability tables. As usual, the class assignment of new observations is based on the likelihoods of the estimated class CBNs.
In the Results section, the proposed method is evaluated on 15 microarray data sets (6 breast cancer, 3 lung cancer, 3 gastric cancer and 3 renal cancer studies) grouped in pairs by phenotypic and class criteria. The performance of 4 algorithms (the proposed one, SVM, LASSO and a linear Gaussian BN classifier based on the PC algorithm for structure learning) is compared using sets of differentially expressed genes as well as a collection of gene pathways from the KEGG database. Compatible but distinct data sets are chosen as (training, test) pairs for evaluation. The proposed classifier demonstrates favorable prediction performance across the selected data set pairs. Finally, we illustrate the analytical and interpretational merits of our methodology by focusing on the lung cancer data sets and inspecting some regulations that play a significant role in distinguishing adenocarcinoma from squamous cell lung cancers.
Methods
Our methodology was first introduced in [18]; below we present it in more detail.
Regarding notation, we use capital letters to denote model parameters and random variables, and lowercase letters for the realizations of these variables. Let $X_i$, i = 1, ..., N, be categorical random variables. A categorical Bayesian network (G, P) with nodes $X_i$ is a probability model based on a directed acyclic graph (DAG) G and a conditional probability table P defined as follows. G can be described by a collection of sets $Pa_i$ such that for each i, $Pa_i$, called the parent set of $X_i$, comprises all $X_j$ for which there is a directed edge from $X_j$ to $X_i$ in G. We use index k to denote the categorical levels of $X_i$ and multi-index j for the combinations of parent states of $X_i$, and with slight abuse of notation write $k \in X_i$ and $j \in Pa_i$. The second component of a CBN is the table P of conditional probabilities $Pr(X_i \mid Pa_i)$. Let $P_{i,kj} \equiv Pr(X_i = k \mid Pa_i = j)$. For each i and j, the multinomial distribution of $(X_i \mid Pa_i = j)$ is defined by the probability vector $P_{i,j} \equiv (P_{i,kj})_{k \in X_i}$, $\sum_{k \in X_i} P_{i,kj} = 1$. Then we have $P = \{P_{i,j}\}_{i,j}$.
With $[X_i]$ and $[Pa_i]$ we denote the number of states of $X_i$ and $Pa_i$, respectively, and with $|Pa_i|$ the number of parents of $X_i$. Clearly, $[Pa_i] = \prod_{X_a \in Pa_i} [X_a]$. The complexity of the CBN (G, P) is given by $df(G) = \sum_{i=1}^{N} [Pa_i]\,([X_i] - 1)$ and equals the degrees of freedom for defining the probability table P.
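As an illustration, the complexity df(G) can be computed directly from the parent sets and the per-node category counts. The sketch below is ours; the function and variable names are illustrative and not taken from any package.

```python
from math import prod

def degrees_of_freedom(parents, n_cats):
    """df(G) = sum over nodes i of [Pa_i] * ([X_i] - 1).
    parents: node -> list of parent nodes; n_cats: node -> number of categories."""
    df = 0
    for node, pa in parents.items():
        pa_states = prod(n_cats[a] for a in pa)  # [Pa_i]; empty product is 1
        df += pa_states * (n_cats[node] - 1)
    return df

# Example: the chain X1 -> X2 -> X3 with 3 categories per node.
parents = {"X1": [], "X2": ["X1"], "X3": ["X2"]}
n_cats = {"X1": 3, "X2": 3, "X3": 3}
print(degrees_of_freedom(parents, n_cats))  # 1*2 + 3*2 + 3*2 = 14
```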
For any DAG G there is a node order Ω, also called a causal order, such that the parents of each node precede that node in Ω, a fact that we write symbolically as $X_{\Omega(1)} \prec X_{\Omega(2)} \prec \dots \prec X_{\Omega(N)}$. Formally, Ω is a permutation of the indices 1, ..., N such that for any i, $Pa_{\Omega(i)} \subset \{X_{\Omega(1)}, X_{\Omega(2)}, \dots, X_{\Omega(i-1)}\}$. For any order Ω, we denote by $\mathcal{G}(\Omega)$ the class of DAGs compatible with Ω.
Learning categorical Bayesian networks from twoclass data
We approach CBN learning from the perspective of classification problems where the observations are assumed to come from two different classes. We describe an algorithm that utilizes the class label information to find a DAG attaining maximum class discrimination with respect to a suitable measure. The essential component of our method is the graph structure estimation since the optimal conditional probability table can be easily inferred for any given DAG.
Let $\{x^s\}_{s=1}^n$ be an n-sample of independent observations on $\{X_i\}_{i=1}^N$, and let each observation $x^s$ have a label $c^s \in \{0, 1\}$ that assigns it to one of the two classes; the $c^s$'s are assumed to be observations on a binary random variable C. We denote the labeled sample by $D_n = \{(x^s, c^s)\}_{s=1}^n$.
The log-likelihood function of a CBN (G, P) with nodes $\{X_i\}_{i=1}^N$ with respect to the unlabeled sample $\{x^s\}_{s=1}^n$ is

$$l(G, P \mid D_n) = \frac{1}{n} \sum_{i=1}^{N} \sum_{j \in Pa_i} \sum_{k \in X_i} n_{i,kj} \log P_{i,kj}, \qquad (1)$$
where $n_{i,kj} \equiv \sum_{s=1}^{n} 1_{\{x_i^s = k,\; pa_i^s = j\}}$ and $n_{i,j} \equiv \sum_{k} n_{i,kj}$. Note that for each i we have $\sum_{j \in Pa_i} n_{i,j} = n$. For a fixed DAG G, the MLE $\widehat{P}$ of P is obtained by maximizing $l(G, P \mid D_n)$ as a function of P

$$l(G \mid D_n) \equiv \max_P\, l(G, P \mid D_n) = \frac{1}{n} \sum_{i=1}^{N} \sum_{j \in Pa_i} \sum_{k \in X_i} n_{i,kj} \log \widehat{P}_{i,kj}, \qquad (2)$$
where $\widehat{P}_{i,kj} \equiv n_{i,kj}/n_{i,j}$ is the point estimate of $P_{i,kj}$. It is a well known fact that by increasing the complexity of G, that is, by adding new edges to G, the likelihood (2) can only increase. Therefore the MLE solution for G based on (1) will tend to overfit the training data. The latter can be overcome if, instead of the log-likelihood, one optimizes a scoring function of the form $l(G \mid D_n) - \lambda_n\, df(G)$ [13], where $\lambda_n$ is a penalization parameter indexed by the sample size n. Some standard choices include BIC, $\lambda_n = 0.5\log(n)/n$, and AIC, $\lambda_n = 1/n$; however, no 'universally' optimal penalization criterion is available. As we have already commented in the introduction, data-driven approaches for selecting $\lambda_n$ such as CV are not necessarily optimal and their efficiency in two-class discrimination problems is unclear. In contrast, our approach is based on a scoring function, very similar to the likelihood ratio test, that can be used to learn an optimal graph structure without involving additional penalization parameters.
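The point estimates $\widehat{P}_{i,kj} = n_{i,kj}/n_{i,j}$ are simply empirical conditional frequencies, as the following minimal sketch illustrates for a single node. The helper name `cpt_mle` is ours, not part of the sdnet package.

```python
from collections import Counter

def cpt_mle(child_vals, parent_states):
    """P_hat_{i,k|j} = n_{i,kj} / n_{i,j}: the empirical frequency of child
    level k within each observed parent configuration j."""
    joint = Counter(zip(parent_states, child_vals))  # n_{i,kj}
    marginal = Counter(parent_states)                # n_{i,j}
    return {(j, k): cnt / marginal[j] for (j, k), cnt in joint.items()}

# Toy example: one parent with two observed configurations.
cpt = cpt_mle(["a", "b", "a", "a"], [(0,), (0,), (1,), (1,)])
print(cpt)  # {((0,), 'a'): 0.5, ((0,), 'b'): 0.5, ((1,), 'a'): 1.0}
```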
Similarly to $P_{i,kj}$, let us define the conditional probabilities pertaining to the first experimental class, $P^0_{i,kj} = Pr(X_i = k \mid Pa_i = j, C = 0)$, and let $\widehat{P}^0_{i,kj}$ be the corresponding point estimators as in (2), that is, $\widehat{P}^0_{i,kj} = n^0_{i,kj}/n^0_{i,j}$, where $n^0_{i,kj} \equiv \sum_{s=1}^{n} 1_{\{x_i^s = k,\; pa_i^s = j,\; c^s = 0\}}$. We then consider the statistic

$$\mathcal{R}(G \mid D_n) \equiv \frac{1}{n} \sum_{i=1}^{N} \sum_{j \in Pa_i} \sum_{k \in X_i} n_{i,kj} \log \frac{\widehat{P}_{i,kj}}{\widehat{P}^0_{i,kj}} \qquad (3)$$
and introduce the scoring function

$$\mathcal{S}(G \mid D_n) \equiv \frac{\mathcal{R}(G \mid D_n)}{df(G)}, \qquad (4)$$
the intuition behind which is given below. Given a collection $\mathcal{G}$ of DAGs with nodes $\{X_i\}_{i=1}^N$, we propose the following estimator of G

$$\widehat{G} = \arg\max_{G \in \mathcal{G}} \mathcal{S}(G \mid D_n), \qquad (5)$$
which we shall tentatively refer to as the BNKL estimator.
Equivalently, $\mathcal{R}$ can be expressed as

$$\mathcal{R}(G \mid D_n) = \sum_{i=1}^{N} \sum_{j \in Pa_i} \frac{n_{i,j}}{n}\, d_{KL}\!\left(\widehat{P}_{i,j},\, \widehat{P}^0_{i,j}\right),$$
where $d_{KL}$ denotes the Kullback-Leibler (KL) divergence between the multinomial distributions $\widehat{P}_{i,j} = \{\widehat{P}_{i,kj}\}_k$ and $\widehat{P}^0_{i,j} = \{\widehat{P}^0_{i,kj}\}_k$. The optimization problem (5) aims at finding a DAG that achieves maximum class separation with respect to the accumulated KL divergence between the node conditional probability tables. Note that we always have $\mathcal{R}(G \mid D_n) \ge 0$. Moreover, if the $\widehat{P}^0_{i,j}$ are uniform distributions, that is, $\widehat{P}^0_{i,j} = (1/[X_i])_{k=1}^{[X_i]}$, then (3) reduces to (1) up to an additive constant due to the equality

$$\frac{1}{n} \sum_{i=1}^{N} \sum_{j \in Pa_i} \sum_{k \in X_i} n_{i,kj} \log\!\left(\widehat{P}_{i,kj}\, [X_i]\right) = l(G \mid D_n) + \sum_{i=1}^{N} \log [X_i].$$
We can therefore view $\mathcal{R}(G \mid D_n)$ as an extension of the maximum log-likelihood $l(G \mid D_n)$ to two-class problems.
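For a fixed DAG, evaluating the score amounts to accumulating weighted KL terms over all (i, j) slots and dividing by df(G). A minimal sketch, assuming the counts and estimated tables are already available (all names are illustrative, not from the sdnet package):

```python
import math

def kl(p, q):
    """KL divergence d_KL(p, q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def bnkl_score(node_tables, n, df_g):
    """node_tables: iterable of (n_ij, P_hat_ij, P0_hat_ij) triples over all
    node/parent-configuration slots (i, j); n: total sample size."""
    r = sum(n_ij / n * kl(p, p0) for n_ij, p, p0 in node_tables)  # R(G | D_n)
    return r / df_g                                               # S(G | D_n), Eq. (4)

# Identical pooled and class tables give zero separation.
print(bnkl_score([(10, [0.5, 0.5], [0.5, 0.5])], n=10, df_g=2))  # 0.0
```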
For a fixed DAG G, the statistic $2n\mathcal{R}$ is in fact equivalent to the likelihood ratio chi-squared statistic (also known as the G² statistic) applied to $n\widehat{P}$ and $n\widehat{P}^0$, viewed as observed and expected counts, respectively. Not surprisingly then, under the null hypothesis $H_0: P_{i,j} = P^0_{i,j}$ for all i, j, the statistic $2n\kappa\,\mathcal{R}(G \mid D_n)$ is asymptotically χ² distributed with df(G) degrees of freedom, where κ = Pr(C = 0)/Pr(C = 1) is the odds of the first class (a formal proof of this fact is beyond the scope of this article).
The role of the factor df(G) in the denominator of (4) is to assist model selection. From an information-theoretical perspective, df(G) represents the amount of memory required for storing all the states of a CBN with DAG G. Since $\mathcal{R}(G \mid D_n)$ measures the class differences with respect to G, we can think of the scoring function (4) as an estimate of the degree of class separation per unit complexity. Let $\mathcal{R}(G)$ be the population version of (3) obtained by replacing $\widehat{P}$ with the population probabilities P, that is

$$\mathcal{R}(G) = \sum_{i=1}^{N} \sum_{j \in Pa_i} Pr(Pa_i = j)\, d_{KL}\!\left(P_{i,j},\, P^0_{i,j}\right).$$
We say that $G_0$ achieves most efficient class separation in $\mathcal{G}$ if

$$\frac{\mathcal{R}(G_0)}{df(G_0)} \ge \frac{\mathcal{R}(G)}{df(G)} \quad \text{for all } G \in \mathcal{G}. \qquad (6)$$
Then, provided that $G_0$ is the unique maximizer, it can easily be shown that $\widehat{G}$ is a consistent estimator of $G_0$, a claim that makes (5) a sound statistical procedure.
We proceed to some computational aspects of problem (5). Because $\mathcal{S}(G \mid D_n)$ is usually a highly non-regular function (non-smooth and non-convex), finding the optimal DAG essentially requires an exhaustive search in $\mathcal{G}$. In order to make the problem computationally manageable we thus need to apply some strong restrictive conditions on $\mathcal{G}$. First, we assume that the parent set sizes are bounded above by a constant M. The value of M should depend on the sample size n used for estimation, and we do not recommend M > 2 unless n is in the hundreds. Second, we limit the search in (5) to DAGs compatible with a fixed but optimally chosen node order. An intuitive causality argument suggests that if we believe the node conditional distributions are set largely independently of each other, then for a regulation X_1 → X_2 it seems more plausible for X_2 to have a higher between-class marginal difference than X_1. If we accept this argument, we would be inclined to assume that:
nodes situated upstream in the network present smaller marginal between-class differences than nodes situated downstream, (7)

where 'upstream' is understood as earlier in the node order of the DAG. In the context of gene regulations we paraphrase this principle in 'target' and 'biomarker' terminology as follows: 'target' genes present lower differential expression in comparison to the 'biomarker' genes and are thus situated upstream in the regulation network with respect to the latter. Hereafter we refer to an order satisfying (7) as an order of increasing differential expression, or IDE; see Algorithm 1 below for its estimation.
Formally, for the purpose of solving (5), we consider collections of DAGs of the form

$$\mathcal{G}(\Omega, M) = \left\{G \in \mathcal{G}(\Omega) : |Pa_i| \le M \text{ for all } i\right\}, \qquad (8)$$
where Ω satisfies (7). In the actual data analyses below we use M = 2 as a compromise between the degree of network connectivity and computational complexity. For classes $\mathcal{G}(\Omega, M)$, the optimal DAG $\widehat{G}$ can be found by an efficient exhaustive search with polynomial complexity. In fact, BN estimation restricted to classes of DAGs of type (8) is not new and can be traced back to [19]. The BNKL algorithm is implemented in the sdnet package for R [20]. Below, we present average times of BNKL estimation of random CBNs with different sizes N, 3 categories per node, maximum parent size M = 2 and sample size n = 250 (Table 1).
The computational times are concordant with the theoretical complexity of the algorithm O(nN^{M + 1}).
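Under the restriction to a fixed node order and a bounded parent size, a per-node scan over predecessor subsets yields the stated polynomial complexity. The sketch below is a simplified illustration that optimizes each node's contribution separately, with a placeholder `node_score`; the actual sdnet implementation optimizes the global ratio (4) and may therefore differ.

```python
from itertools import combinations

def best_parents(order, M, node_score):
    """order: nodes in (estimated) IDE order; node_score(node, parents) -> float.
    Each node scans all predecessor subsets of size <= M: an O(N^(M+1)) search."""
    dag = {}
    for idx, node in enumerate(order):
        candidates = order[:idx]  # DAG compatible with the order: parents precede
        best = max(
            (s for m in range(M + 1) for s in combinations(candidates, m)),
            key=lambda s: node_score(node, s),
        )
        dag[node] = list(best)
    return dag
```

With M fixed, each node evaluates at most O(N^M) candidate parent sets, matching the O(nN^(M+1)) figure once the per-set scoring cost over the sample is included.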
A network model for classification of gene expression profiles
We return to the main goal of this investigation: developing a CBN-based classifier for two-class problems. We have shown, Eq. (5), how to choose a graph structure that achieves optimal separation of a labeled sample. We use the estimated structure as a common DAG of two CBNs that model the two classes with distinct probability tables. This approach, 'one DAG, two probability tables', has previously been adopted by other BN-based classifiers [21].
Gene expression data is acquired by a multi-stage process, the result of which is a set of continuous variables representing the expression levels of pre-specified gene probes. Since the CBN is a discrete model, the initial step in our inference framework involves discretization: any sample $\{y^s\}_{s=1}^n$ of observations on the gene nodes $\{X_i\}_{i=1}^N$ is transformed into a categorical sample $\{x^s\}_{s=1}^n$. Gene expression levels are often discretized into 3 categories: 'underexpressed', 'baseline' and 'overexpressed' [16]. Although more sophisticated procedures are certainly possible, in our experiments we employ a 3-level uniform discretization as follows. After excluding 5% of the most extreme values, a standard precaution against outliers, the range of the y's is divided into 3 equal intervals and an observation y is assigned a categorical value x according to the interval into which y falls. The uniform discretization is simple to implement and has good performance in practice. We emphasize that, as should be the case in all well designed training → test prediction studies, the discretization parameters (cutoff points) are determined strictly from the training sample and are then used to discretize the test sample.
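The 3-level uniform discretization can be sketched as follows; the rough order-statistic percentile computation is an assumption on our part, standing in for whatever quantile routine is actually used.

```python
def uniform_cutoffs(train_values):
    """Cutoffs mu_1, mu_2 as in Algorithm 1, step 1(b): trim 2.5% per tail,
    then split the remaining range [q1, q2] into 3 equal intervals."""
    xs = sorted(train_values)
    q1 = xs[int(0.025 * (len(xs) - 1))]  # rough 2.5th percentile
    q2 = xs[int(0.975 * (len(xs) - 1))]  # rough 97.5th percentile
    return q1 + (q2 - q1) / 3, q1 + 2 * (q2 - q1) / 3

def discretize(y, cutoffs):
    """Map a continuous value to 0/1/2: underexpressed / baseline / overexpressed."""
    mu1, mu2 = cutoffs
    return 0 if y < mu1 else (1 if y < mu2 else 2)

# Training data determines the cutoffs; test values reuse them.
cuts = uniform_cutoffs(list(range(101)))
print([discretize(v, cuts) for v in (10, 50, 90)])  # [0, 1, 2]
```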
More formally, we assume that: (i) the class samples D_0 = D ∩ {c = 0} and D_1 = D ∩ {c = 1} come from two CBNs, (G_0, P^0) and (G_0, P^1), with a common DAG G_0 and probability tables P^0 and P^1; (ii) G_0 is efficient in the sense of (6); (iii) G_0 is compatible with an IDE order Ω and has a maximum parent set size of M. Since G_0 is unknown in advance, assumptions (ii) and (iii) cannot be checked. Instead, (ii) and (iii) should be considered technical assumptions specifying the properties of the estimated networks. All prerequisites being set, we propose Algorithm 1: its first part estimates G_0, P^0 and P^1, while the second performs classification of test samples.
Algorithm 1 BNKL Classification

1.
Training. Input: continuous labeled training sample ${\left\{\left({y}^{s},{c}^{s}\right)\right\}}_{s=1}^{n}$.

(a)
Node order estimation, IDE (7): For each i, perform a t-test on the y_i's comparing the 2 classes. Set $\widehat{\Omega}$ to be the order of decreasing t-test p-values.

(b)
Uniform discretization: For each i, set µ_{ik} = q_1 + k(q_2 − q_1)/3, k = 1, 2, where q_1 and q_2 are the 2.5th and 97.5th percentiles of the training observations y_i. Discretize the y_i's into 3 categories using the cutoff points µ_{i1} and µ_{i2}.

(c)
Find the optimal DAG $\widehat{G}$ in $\mathcal{G}(\widehat{\Omega})$ according to Eq. (5).

(d)
Define CBNs $\left(\hat{G},{\widehat{P}}^{0}\right)$ and $\left(\hat{G},{\widehat{P}}^{1}\right)$ by estimating the classspecific conditional probability tables ${\widehat{P}}^{0}\left(\hat{G}{D}_{0}\right)$ and ${\widehat{P}}^{1}\left(\hat{G}{D}_{1}\right)$ as in Eq. (2).


2.
Prediction. Input: continuous test observation z.

(a)
For each i, discretize ${z}_{i}\mapsto {x}_{i}$ using the training cutoff points µ_{i1 }and µ_{i2}.

(b)
Calculate the log-likelihoods $l_0 = l(\widehat{G}, \widehat{P}^0 \mid x)$ and $l_1 = l(\widehat{G}, \widehat{P}^1 \mid x)$ according to Eq. (1).

(c)
Assign z to the class with the greater log-likelihood.

To avoid numerical instabilities in Algorithm 1, all zero slots in the estimated conditional probabilities $\widehat{P}^0$ and $\widehat{P}^1$ are reset to a minimum positive value of 1/(3n) (the resolution of a training sample of size n for populating a 3-nomial distribution) and then renormalized so that $\sum_k \widehat{P}_{i,kj} = 1$ for all i and j ∈ Pa_i.
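A minimal sketch of this smoothing step, assuming 3 categories per node (the function name is ours):

```python
def smooth(p_vec, n):
    """Floor zero slots at 1/(3n), then renormalize the 3-nomial vector to sum to 1."""
    floor = 1.0 / (3 * n)
    q = [max(p, floor) for p in p_vec]
    total = sum(q)
    return [v / total for v in q]

print(smooth([0.0, 0.5, 0.5], n=100))  # zero slot floored, vector renormalized
```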
Figure 1 shows some examples of gene expression discretization and representation of gene interactions with 3-nomial distributions. For instance, the probabilities $P_{ij} = Pr(X_{AKT1} = i \mid X_{PIK3R3} = j)$ of the regulation PIK3R3→AKT1 are P_{i1} = (0.4, 0.4, 0.2), P_{i2} = (0.23, 0.54, 0.23) and P_{i3} = (0.17, 0.42, 0.41). The probabilities corresponding to the first class only are $P^0_{i1} = (0.14, 0.72, 0.14)$, $P^0_{i2} = (0.28, 0.14, 0.57)$ and $P^0_{i3} = (0.25, 0.5, 0.25)$. A formal test for linear association between the two genes fails to detect a significant class difference (p-value = 0.18). The relatively large KL-divergence score between P and P^0, however, indicates a significant class difference (p-value ≈ 0). The examples also illustrate different types of regulations, such as activation (PIK3R3→AKT1, LAMB1→TRAF3) and inhibition (FHIT→LAMB1). The BNKL model, recall, is designed to detect changes in the interactions. For example, FHIT→LAMB1 is apparently inhibitory for the second class (in blue) but neutral for the first (in red), and BNKL perceives that difference.
Benchmark classifiers
To evaluate the performance of the BNKL algorithm we compare it to 3 well-established classification methods. We consider SVM with a Gaussian kernel as implemented in the e1071 package for R. The kernel parameter γ is tuned for optimality via CV on the training data. The benchmark performance of SVM is well established [22]. Our second choice is LASSO, an algorithm based on l1-penalized linear regression that is applied as follows. The expectation of the binary class variable is assumed to be a linear combination of a given set of gene covariates. LASSO then selects a subset of significant predictor genes using an l1-norm-based penalization criterion and discards the rest. The sum of squared errors is used as the classification criterion. We use the implementation of the algorithm provided by the lars package for R.
The third reference classifier, PC, employs a linear BN model as follows: (1) a DAG $\widehat{G}$ is fitted to the combined sample D_0 ∪ D_1 using the PC algorithm [23] with a Gaussian test for conditional independence at α-level 0.05 (see the pcalg package for R); thus a parent set Pa_i is selected for each i; (2) for each i, two distinct sets of $(Y_i \mid Pa_i)$-regression parameters are estimated, one per class; (3) test samples are classified according to the conditional likelihoods. Note that SVM, LASSO and PC, in contrast to BNKL, are applied directly to the continuous observations on $(X_i)_{i=1}^N$.
Results
This section complements the preliminary results presented in [18] and provides a comprehensive evaluation of BNKL on a diverse collection of gene expression data. We consider 15 samples stratified in 5 groups by cell type and disease condition. Table 2 provides a detailed description of the microarrays, including their reference number, platform and class sample sizes. We have 2 breast cancer data sets with subjects grouped by survival status and 4 others using estrogen receptor (ER) status as the classification criterion. Also considered are 3 lung cancer data sets comparing adenocarcinoma and squamous cell carcinoma, as well as 3 gastric and 3 renal cancer samples of disease vs. control. All expression data sets are obtained from the Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) and, prior to the classification analysis, are preprocessed by the following three steps. First, the raw expression sets are normalized using the RMA procedure [24]. Then, the probe expression levels across sample records are standardized. If $\{y_i^s\}_{s=1}^n$ are the expression levels of the i-th probe, the standardization is performed according to the formula $\tilde{y}_i^s = \sqrt{n}\,(y_i^s - \mu_i)/sd_i$, where $\mu_i$ and $sd_i$ are the sample mean and standard deviation of the $y_i^s$'s. Standardization is intended to account for gross disparities in the expression levels of probes coming from different data sets which cannot be handled by the normalization procedure. The latter is especially true for microarrays produced on different platforms, such as KDN2 and KDN3. Finally, for each pair considered for across-data-set classification, we perform consolidation by subsetting to a common set of gene probes. Again, this step is needed in order to be able to compare between different platforms; for example, while GPL570 can accommodate up to 54675 probes, GPL96 is limited to 22283 probes.
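The standardization step is a direct transcription of the formula above; the sketch below is ours (function name illustrative).

```python
from math import sqrt
from statistics import mean, stdev

def standardize(values):
    """Per-probe standardization: y~ = sqrt(n) * (y - mu) / sd across sample records."""
    mu, sd = mean(values), stdev(values)
    n = len(values)
    return [sqrt(n) * (y - mu) / sd for y in values]
```

Applied per probe, this centers each expression vector at zero with a scale that grows with the sample size, which is what allows probes from different data sets to be compared on a common footing.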
For brevity, hereafter we shall refer to gene probes simply as genes.
We carry out two classification strategies by applying the considered algorithms to two categories of gene subsets: (1) differentially expressed (DE) genes and (2) a collection of curated gene pathways. Below we give more details on these two approaches. The algorithms' performance is evaluated by across-data-set prediction for pairs of compatible data sets. The first prediction score we use is the balanced accuracy given by ACC = 0.5(TP/P + TN/N), where P and N are the numbers of test observations in the two classes, while TP and TN are the numbers of observations correctly assigned to the first and second class, respectively. The 'random guess' procedure thus has an accuracy of 0.5 on average, and so does any algorithm that assigns all observations to one class. As a second criterion we employ the area under the curve of the sensitivity TPR = TP/(TP+FN) against the false positive rate FPR = FP/(FP+TN), known as AUC. An AUC of 1 represents perfect class separation.
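For concreteness, a minimal sketch of the balanced accuracy computation, with class 1 arbitrarily treated as 'positive' (labels and function name are our choices):

```python
def balanced_accuracy(y_true, y_pred):
    """ACC = 0.5 * (TP/P + TN/N); labels are 0/1, class 1 treated as positive."""
    P = sum(1 for t in y_true if t == 1)
    N = sum(1 for t in y_true if t == 0)
    TP = sum(1 for t, p in zip(y_true, y_pred) if t == p == 1)
    TN = sum(1 for t, p in zip(y_true, y_pred) if t == p == 0)
    return 0.5 * (TP / P + TN / N)

# An all-positive guesser scores 0.5 regardless of class imbalance.
print(balanced_accuracy([1, 1, 0, 0], [1, 1, 1, 1]))  # 0.5
```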
Tables 3, 4 and 5 present prediction results for 16 compatible data set pairs. We calculate the pairwise ACC and AUC scores as follows. For a pair (A, B) with sample sizes n_A and n_B, respectively, we perform the classifications A → B (A training, B test) and B → A (B training, A test), and calculate the corresponding ACC and AUC scores. We then report the overall scores by weighting the individual scores according to the test sample sizes,

$$score(A, B) = \frac{n_B\, score(A \to B) + n_A\, score(B \to A)}{n_A + n_B}.$$
Class prediction using differentially expressed genes
A standard practice in comparative microarray studies is to perform discrimination analysis employing biomarkers: genes manifesting differential expression between contrasting experimental conditions. A DE-based approach can be implemented by some routine statistical procedures such as linear discriminant analysis, logistic regression and, from the algorithms discussed above, SVM and LASSO. All these methods, however, are essentially univariate, for they do not account for possible interactions among the biomarker genes. In contrast, the proposed BNKL method, along with PC, selects and accounts for significant gene interactions. It is an open question whether in practice discrimination analysis actually benefits from employing interaction models. The results presented below partially address this question.
We implement the following DE-based test framework. For each training set, we identify DE genes by performing two-sample t-tests and then order the genes by increasing p-value. We then select the top 25, 50, 75 and 100 DE genes as features and supply them to each classifier. For more robust performance evaluation, overall ACC and AUC scores are formed by averaging the scores achieved on the 4 DE sets defined above. Since the selected genes are highly discriminating, we expect all classifiers to achieve their highest potential prediction scores. In particular, we consider the performance of LASSO to be representative of the best prediction accuracy a routine biomarker approach would attain.
Table 3 presents the prediction scores using the top 100 DE genes as described. Listed are ACC and AUC for each data set pair as well as the overall average scores and total ranks. The latter are obtained as follows. For each data set pair (table row) the classifiers are ranked from 1 (lowest) to 4 (highest) according to their scores and then the ranks in each column are summed to obtain the total ranks. In terms of ACC, BNKL most often achieves best accuracy and has the best total rank of 61, followed by LASSO with rank 42. With respect to AUC, the difference between BNKL and LASSO is similarly prominent, rank 52 vs. 42. In terms of average performance BNKL also achieves the best ACC and AUC scores. These results clearly indicate the potential value of incorporating BNKL in a biomarker framework.
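The total-rank computation described above can be sketched as follows (ties are ignored for simplicity, an assumption on our part):

```python
def total_ranks(score_rows):
    """score_rows: one list of classifier scores per data-set pair (table row).
    Rank within each row from 1 (lowest) to len(row) (highest), then sum columns."""
    totals = [0] * len(score_rows[0])
    for row in score_rows:
        order = sorted(range(len(row)), key=lambda c: row[c])
        for rank, col in enumerate(order, start=1):
            totals[col] += rank
    return totals

# Two pairs, three classifiers: per-row ranks are summed column-wise.
print(total_ranks([[0.9, 0.7, 0.8], [0.6, 0.8, 0.7]]))  # [4, 4, 4]
```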
Pathway-based classification
In the field of systems biology, pathways have been introduced as means for linking the functionality of groups of genes to specific biological processes. Well established methodologies such as Gene Set Enrichment Analysis (GSEA) [40], employ pathways as functional units to differentiate between experimental populations. In the context of CBN learning, we utilize pathways as priors to facilitate inference and lessen the computational complexity. First, BNKL learning benefits from the limited number of genes in the pathways.
Second, since the genes in the pathways are putatively related, it is reasonable to presume class differences in their interactions. Note that when no significant interactions are detected, BNKL is essentially equivalent to a naive classifier.
In this second validation scenario, we consider a collection of manually curated pathways based on expert knowledge and existing literature, obtained from the Kyoto Encyclopedia of Genes and Genomes (KEGG, http://www.genome.jp/kegg/pathway.html). To limit the computational cost, we consider only pathways of size less than 400. The resulting collection contains 225 gene pathways of variable size, from 10 to 389 genes. We apply BNKL and the benchmark classifiers to each selected pathway and record the achieved ACC and AUC scores. Since for a particular sample phenotype or disease condition only a limited number of genes may show expression activity, we cannot expect all pathways to perform equally well in terms of prediction power. We therefore propose to select the top 10% of best-performing pathways for each classifier and report their average prediction scores.
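A minimal sketch of the top-10% selection rule (the function name is ours): for the 225 pathways considered here, the rule averages the best 23 per-pathway scores.

```python
import numpy as np

def top_decile_mean(pathway_scores):
    """Mean prediction score over the best-performing 10% of pathways
    (at least one pathway is always kept)."""
    s = np.sort(np.asarray(pathway_scores, dtype=float))[::-1]
    k = max(1, int(np.ceil(0.1 * len(s))))
    return s[:k].mean()
```

For example, with 10 pathway scores the rule keeps only the single best score, and with 225 scores it averages the top 23.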
Table 4 shows the prediction scores for each sample pair and the average score and total rank of each classifier. The BNKL classifier achieves the highest overall ACC score of 0.80. On the other hand, the AUC score of 0.87 for BNKL is slightly lower than LASSO's 0.88, though not significantly so, as we show in the comparison tests below. We thus conclude that the pathway-based performances of BNKL and LASSO, unlike in the DE scenario, are similar.
In Table 5 we compare the algorithms' performance in terms of ACC based on the pathway scores as follows. The pathway ACCs of the benchmark classifiers are subtracted from those of BNKL, and a Mann-Whitney test is then applied to each of the resulting 3 sets of 225 (the number of tested pathways) differences. A significantly positive median difference indicates better performance of BNKL; a negative one favors the competing classifier. As shown, BNKL performs significantly better than SVM for 10 out of the 16 data set pairs. In the BNKL vs. LASSO comparison, BNKL is significantly better in 8 cases, while LASSO in 4. At the other end, the PC-based algorithm presents the lowest performance among the 4 classifiers.
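The per-pathway comparison can be sketched as below. As a simplified stand-in for the paper's exact procedure, we report the median per-pathway difference and apply scipy's two-sample Mann-Whitney test to the two classifiers' pathway score sets; the synthetic scores are ours.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_classifiers(acc_a, acc_b):
    """Median per-pathway ACC difference (a minus b) and the two-sided
    Mann-Whitney p-value comparing the two score distributions."""
    acc_a, acc_b = np.asarray(acc_a), np.asarray(acc_b)
    med_diff = float(np.median(acc_a - acc_b))
    _, p = mannwhitneyu(acc_a, acc_b, alternative='two-sided')
    return med_diff, float(p)

# Synthetic scores for 225 pathways: classifier A is uniformly 0.05 better
rng = np.random.default_rng(1)
base = rng.uniform(0.6, 0.9, size=225)
med, p = compare_classifiers(base + 0.05, base)
```

A positive median with a small p-value corresponds to the "significantly better" entries of Table 5.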
Comparative performance summary with discussion
In the next table we conveniently summarize the detailed results presented in Tables 3 and 4. We report the mean ACC and AUC scores along with the standard deviations in parentheses.
In addition, Figure 2 visualizes the above scores in the form of bar plots. The best overall ACC (0.82) and AUC (0.91) scores are achieved by BNKL using the top 100 DE genes. Interestingly, the top 10% pathways' scores of SVM and LASSO are slightly better than their DE-based scores.
We also perform a more formal comparison using the ACC and AUC differences between BNKL and the 3 benchmark algorithms. We subtract the scores of SVM, LASSO and PC from those of BNKL, and report the median differences and, in parentheses, the p-values corresponding to Mann-Whitney tests hypothesizing equal scores. Note that positive differences are in favor of BNKL and those significant at the 0.05 level are emphasized.
The above results demonstrate the strong comparative performance of BNKL especially in terms of ACC.
The prediction scores of BNKL and LASSO are in close range. It is noticeable that in the pathway scenario BNKL loses the clear performance gain it has over LASSO in the DE case. The most probable explanation is that LASSO performs active model selection by discarding all insignificant genes of a given pathway as model covariates, and this pruning improves its prediction power. On the other hand, BNKL, although focused on choosing the most significant regulations, does not exclude even those genes which are found to be in no interaction with the rest. As a result, including insignificant genes in the log-likelihood (1) actually hampers the prediction power of BNKL. The problem is not observable in the DE scenario, where only highly discriminating genes are used. We believe this limitation of BNKL can and should be addressed in future versions of the algorithm. Another difference between BNKL and the other 3 methods, the effect of which is yet to be investigated in detail, is the additional discretization step involved in BNKL. Employing more sophisticated discretization procedures that better represent the marginal distributions of gene expression values is likely to improve the performance of BNKL.
As a final comment, among the 4 algorithms PC trails behind with the lowest scores, which we attribute to its insufficient model selection: close inspection shows that PC fits overly complex networks (data not shown), thus overfitting the training data and degrading its prediction performance.
Differential regulation analysis of two types of lung cancer
In a comprehensive study [2] of squamous cell lung carcinoma (SQCC), the importance of several genes implicated in the disease condition has been reported, among them TP53, CDKN2A, PIK3CA, RAS (HRAS and KRAS), EGFR and NOTCH1. These genes are involved in cell cycle control, apoptosis and cell differentiation, and possibly express distinct alteration patterns in SQCC in comparison to adenocarcinoma, the other most common type of lung cancer. The pathway-based analysis presented below corroborates these findings and serves as a validation of the proposed BNKL methodology.
Table 6 shows the KEGG pathway-based ACC scores for the (LNG1, LNG3) pair along with the top 16 pathways with the best performance achieved by any one of the 4 classifiers. Small cell lung cancer, Wnt signaling and Bile secretion are among the best-performing pathways. In Figure 3 we show some of the BNKL-estimated DAGs overlaid on the original, curated KEGG pathways. The edges of the BNKL networks are color-coded in red and blue to differentiate the class regulations. For the purpose of illustration, different isoforms or versions of a gene are represented by one node, which may result in loops seemingly incompatible with the original DAGs. As seen, the BNKL networks are relatively sparse in comparison to the curated KEGG networks, for, recall, only associations with significant class differences are picked up by BNKL. The plots also highlight a key feature of the presented framework: identification of differentially expressed gene regulations that reveal easy-to-interpret functional changes between disease conditions. We proceed with some more details.
First we observe that BNKL often gives shortcut representations of actual indirect associations, connecting with directed edges genes which lie at the ends of regulation cascades in the curated pathways. For example, in the Small cell lung cancer pathway there is a long chain of regulations connecting the ECM receptor LAMB1 and TRAF1, which is represented by a single directed edge in BNKL: inhibition in the first class (red tee arrowhead) and activation in the second (blue arrowhead). LAMB1→BIRC3 is another example of such an association shortcut. The edges in the Bile secretion pathway are mostly indirect regulations. In the Wnt signaling pathway the WNT16→CTNNB1 edge selected by BNKL is a shortcut for the regulation chain WNT16→FZD10→DVL1→GSK3B→CTNNB1. In other cases, however, BNKL draws edges between genes which are known to interact directly, such as PIK3R3→AKT1 in the Small cell lung cancer pathway and GNAS→ADCY6 in the Gap junction pathway. As a side note, the active presence of PIK3R3 in the estimated BNKL network is in agreement with the already established characteristic role of the PIK3 gene family in SQCC [41].
Next we inspect more closely the Gap junction pathway, which regulates intercellular communication and is involved in tumor progression. It has been reported in [42] that the expression of one of the key genes involved in this pathway, GJA1, which encodes the connexin43 protein, is reduced in human and mouse lung carcinoma cells. According to the curated KEGG pathway, tubulin-beta proteins (TUBB and TUBA) bind to connexin43, and the expression of the latter is inhibited by MAPK7. In the BNKL-reconstructed network there is an indirect inhibition of MAPK7 by GNAQ which is stronger in the case of adenocarcinoma. Moreover, TUBB6 is strongly associated with TUBA1B, inhibits PRKACA and expresses differential regulation of KRAS (activation in case of adenocarcinoma and inhibition in case of SQCC), thus emphasizing the importance of the regulation changes in tubulin-beta for distinguishing the two types of lung cancer. Another notable differential interaction selected by BNKL is GNAS→ADCY6 (activation in adenocarcinoma and suppression in SQCC), while, according to KEGG, in normal cells GNAS, the gene encoding the G-protein, has a stimulating effect on ADCY6. An indirect association between EGFR and ADCY6 is also detected. We recall that EGFR is a recognized oncogene and is being investigated as a potential therapeutic target [41].
Finally we identify and report the most connected genes in the BNKL-reconstructed pathways. For this purpose, we integrate all estimated KEGG pathways and for each gene count the number of directed edges (in- and out-bound) to other genes. We then rank the genes according to the accumulated counts to obtain the most connected ones; see Figure 4. These are the genes with the most marked involvement in differentiating adenocarcinoma and SQCC. Among them are CDC42 (cell division control protein), PRKACA (cell signaling), CTNNB1 (cell adhesion), CHP2 (cell proliferation and tumor growth), PIK3R1 (cell proliferation and survival) and KRAS (a known oncogene and potential lung cancer drug target), which play key roles in cell-to-cell signaling as well as cell growth, arrest and death.
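The degree-counting step can be sketched as below; the edge lists are illustrative stand-ins, not the actual estimated networks:

```python
from collections import Counter

def gene_connectivity(pathway_edges):
    """Count directed edges (in- and out-bound) per gene across a
    collection of estimated pathway networks, each given as a list of
    (parent, child) edges, and return genes ranked by accumulated degree."""
    degree = Counter()
    for edges in pathway_edges:
        for parent, child in edges:
            degree[parent] += 1
            degree[child] += 1
    return degree.most_common()

# Two hypothetical reconstructed pathways as (parent, child) edge lists
nets = [[("GNAS", "ADCY6"), ("TUBB6", "KRAS"), ("TUBB6", "PRKACA")],
        [("KRAS", "PIK3R1"), ("TUBB6", "TUBA1B")]]
ranking = gene_connectivity(nets)
```

In this toy example TUBB6 tops the ranking with degree 3; in the paper the analogous ranking over all estimated KEGG pathways yields the genes shown in Figure 4.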
Conclusion
Many of the problems accompanying the analysis of gene expression profiles are caused by technological noise, platform- and lab-related bias, and small sample size. Categorical Bayesian networks mitigate some of these problems by providing noise and bias reduction through discretization, the ability to handle nonlinear gene interaction effects, and efficient multivariate model representation. We have developed a framework for discrimination analysis, BNKL, based on the reconstruction of an optimal graph structure from two-class labeled data. The proposed score-based learning algorithm uses a KL-divergence criterion to maximize the observed class separation. The extensive analysis performed on real data has demonstrated the competitiveness of our approach with respect to some established classification algorithms. The distinctive advantage of BNKL, its utility in discovering differentially expressed regulations between comparable conditions, has been applied to discriminating cancer subtypes. In particular, we have utilized BNKL to model the difference between adenocarcinoma and squamous cell lung cancers.
Understandably, the BNKL classifier is limited by the computational complexity of its learning algorithm, and its direct application to multi-thousand gene sets can be prohibitive. In our experiments we have restrained the complexity by using manually curated pathways and subsets of differentially expressed genes. However, a whole-genome analysis can also be achieved by restricting the number of allowed parents for each gene node. Potential parents can be selected according to the degree of association with the child genes or using prior information such as the KEGG pathway database of gene interactions. The current software realization of the algorithm [20] allows for implementation of such strategies.
We want to point out other application possibilities of BNKL beyond the microarray expression data used in this study. Next-generation sequencing technologies provide an ample source of new genetic samples. For example, single-nucleotide polymorphism (SNP) samples, being genuinely discrete, can be utilized immediately. Adapting BNKL to new data modalities and extending its area of application is a subject of ongoing investigation.
Abbreviations
Bayesian Networks (BN), Categorical Bayesian Networks (CBN), Directed Acyclic Graph (DAG), Cross-Validation (CV), Maximum Likelihood (ML), Support Vector Machines (SVM), Kullback-Leibler (KL), Differential Expression (DE), Increasing Differential Expression (IDE).
References
1. Kreeger PK, Lauffenburger DA: Cancer systems biology: a network modeling perspective. Carcinogenesis. 2010, 31 (1): 2-8. 10.1093/carcin/bgp261.
2. The Cancer Genome Atlas Research Network: Comprehensive genomic characterization of squamous cell lung cancers. Nature. 2012, 489 (7417): 519-25. 10.1038/nature11404.
3. Woolf PJ, et al: Bayesian analysis of signaling networks governing embryonic stem cell fate decisions. Bioinformatics. 2005, 21 (6): 741-53. 10.1093/bioinformatics/bti056.
4. Friedman N, Linial M, Nachman I, Pe'er D: Using Bayesian networks to analyze expression data. J Comput Biol. 2000, 7: 601-620. 10.1089/106652700750050961.
5. Tibshirani R: Regression shrinkage and selection via the LASSO. J Royal Stat Soc B. 1996, 58: 267-288.
6. Cortes C, Vapnik V: Support-vector networks. Machine Learning. 1995, 20: 273-297.
7. Spirtes P, Glymour C, Scheines R, Kauffman S, Aimale V, Wimberly F: Constructing Bayesian network models of gene expression networks from microarray data. Proc Atl Symp on Comp Biology. 2000, 15.
8. Imoto S, Higuchi T, Goto H, Tashiro K, Kuhara S: Combining microarrays and biological knowledge for estimating gene networks via Bayesian networks. IEEE Comp Sys Bioinformatics. 2003, 2: 104-113.
9. Ibrahim J, Chen M, Gray R: Bayesian Models for Gene Expression With DNA Microarray Data. JASA. 2002, 97 (457): 88-99. 10.1198/016214502753479257.
10. Helman P, Veroff R, Atlas S, Willman C: A Bayesian Network Classification Methodology for Gene Expression Data. J Comput Biol. 2004, 11 (4): 581-615. 10.1089/cmb.2004.11.581.
11. Heckerman D, Geiger D, Chickering D: Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning. 1995, 10: 197-243.
12. Spirtes P, Glymour C, Scheines R: Causation, Prediction, and Search. The MIT Press. 2000, 2.
13. Buntine W: A guide to the literature on learning graphical models. IEEE Transactions on Knowledge and Data Engineering. 2006, 8 (2).
14. Braga-Neto U: Fads and fallacies in the name of small-sample microarray classification. IEEE Sig Proc Mag. 2007, 24: 91-99.
15. Shmulevich I, Zhang W: Binary analysis and optimization-based normalization of gene expression data. Bioinformatics. 2002, 18 (4): 555-565. 10.1093/bioinformatics/18.4.555.
16. Parmigiani G, Garrett E, Anbazhagan R, Gabrielson: A statistical framework for expression-based molecular classification in cancer. J Royal Stat Soc B. 2002, 64 (4): 717-736. 10.1111/1467-9868.00358.
17. Zilliox MJ, Irizarry RA: A gene expression bar code for microarray data. Nat Methods. 2007, 4: 911-913. 10.1038/nmeth1102.
18. Balov N: A discrete Bayesian network framework for discrimination of gene expression profiles. Bioinformatics and Biomedicine (BIBM), 2012 IEEE International Conference on: 4-7 October 2012. 2012, 1-7. 10.1109/BIBM.2012.6392692.
19. Cooper G, Herskovitz E: A Bayesian method for the induction of probabilistic networks from data. Machine Learning. 1992, 9: 330-347.
20. Balov N: sdnet: Soft Discretization-based Bayesian Network Inference. CRAN. 2012, R package version 1.01.7.
21. Friedman N, Geiger D, Goldszmidt M: Bayesian network classifiers. Machine Learning. 1997, 29: 131-163. 10.1023/A:1007465528199.
22. Statnikov A, Aliferis CF, Tsamardinos I: A comprehensive evaluation of multicategory classification methods for microarray gene expression cancer diagnosis. Bioinformatics. 2005, 21 (5): 631-643. 10.1093/bioinformatics/bti033.
23. Kalisch M, Bühlmann P: Estimating high-dimensional directed acyclic graphs with the PC-algorithm. J Machine Learning Research. 2007, 8: 613-636.
24. Irizarry RA, Hobbs B, Collin F, Beazer-Barclay YD, Antonellis KJ, Scherf U, Speed TP: Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics. 2003, 4 (2): 249-64. 10.1093/biostatistics/4.2.249.
25. Pawitan Y, Bjöhle J, Amler L, Borg AL, et al: Gene expression profiling spares early breast cancer patients from adjuvant therapy: derived and validated in two population-based cohorts. Breast Cancer Res. 2005, 7 (6): R953-64. 10.1186/bcr1325.
26. Miller LD, Smeds J, George J, Vega VB, et al: An expression signature for p53 status in human breast cancer predicts mutation status, transcriptional effects, and patient survival. PNAS. 2005, 102 (38): 13550-5. 10.1073/pnas.0506230102.
27. Sotiriou C, Wirapati P, Loi S, Harris A, et al: Gene expression profiling in breast cancer: understanding the molecular basis of histologic grade to improve prognosis. J Natl Cancer Inst. 2006, 98 (4): 262-72. 10.1093/jnci/djj052.
28. Desmedt C, Piette F, Loi S, Wang Y, et al: Strong time dependence of the 76-gene prognostic signature for node-negative breast cancer patients in the TRANSBIG multicenter independent validation series. Clin Cancer Res. 2007, 13 (11): 3207-14. 10.1158/1078-0432.CCR-06-2765.
29. Dedeurwaerder S, Desmedt C, Calonne E, Singhal SK, et al: DNA methylation profiling reveals a predominant immune component in breast cancers. EMBO Mol Med. 2011, 3 (12): 726-41. 10.1002/emmm.201100801.
30. Wang Y, Klijn JG, Zhang Y, Sieuwerts AM, et al: Gene-expression profiles to predict distant metastasis of lymph-node-negative primary breast cancer. Lancet. 2005, 365 (9460): 671-9. 10.1016/S0140-6736(05)17947-1.
31. Kuner R, Muley T, Meister M, Ruschhaupt M, et al: Global gene expression analysis reveals specific patterns of cell junctions in non-small cell lung cancer subtypes. Lung Cancer. 2009, 63 (1): 32-8. 10.1016/j.lungcan.2008.03.033.
32. Sanchez-Palencia A, Gomez-Morales M, Gomez-Capilla JA, Pedraza V, et al: Gene expression profiling reveals novel biomarkers in non-small cell lung cancer. Int J Cancer. 2011, 129 (2): 355-64. 10.1002/ijc.25704.
33. Starczynowski DT, Lockwood WW, Deléhouzée S, Chari R, et al: TRAF6 is an amplified oncogene bridging the RAS and NF-kB pathways in human lung cancer. J Clin Invest. 2011, 121 (10): 4095-105. 10.1172/JCI58818.
34. Cheng L, Wang P, Yang S, Yang Y, et al: Identification of genes with a correlation between copy number and expression in gastric cancer. BMC Med Genomics. 2012, 5: 14. 10.1186/1755-8794-5-14.
35. Cui J, Chen Y, Chou WC, Sun L, et al: An integrated transcriptomic and computational analysis for biomarker identification in gastric cancer. Nucleic Acids Res. 2011, 39 (4): 1197-207. 10.1093/nar/gkq960.
36. Wu Y, Grabsch H, Ivanova T, Tan IB, et al: Comprehensive genomic meta-analysis identifies intratumoural stroma as a predictor of survival in patients with gastric cancer. Gut. 2013, 62: 1100-1111. 10.1136/gutjnl-2011-301373.
37. Jones J, Otu H, Spentzos D, Kolia S, et al: Gene signatures of progression and metastasis in renal cell cancer. Clin Cancer Res. 2005, 11 (16): 5730-9. 10.1158/1078-0432.CCR-04-2225.
38. Ding Y, Huang D, Zhang Z, Smith J, et al: Combined gene expression profiling and RNAi screening in clear cell renal cell carcinoma identify PLK1 and other therapeutic kinase targets. Cancer Res. 2011, 71 (15): 5225-34. 10.1158/0008-5472.CAN-11-0076.
39. Varela I, Tarpey P, Raine K, Huang D, et al: Exome sequencing identifies frequent mutation of the SWI/SNF complex gene PBRM1 in renal carcinoma. Nature. 2011, 469 (7331): 539-42. 10.1038/nature09639.
40. Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, Paulovich A, Pomeroy SL, Golub TR, Lander ES, Mesirov JP: Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. PNAS. 2005, 102: 15545-15550. 10.1073/pnas.0506580102.
41. Angulo B, Suarez-Gauthier A, Lopez-Rios F, Medina PP, Conde E, Tang M, Soler G, Lopez-Encuentra A, Cigudosa JC, Sanchez-Cespedes M: Expression signatures in lung cancer reveal a profile for EGFR-mutant tumours and identify selective PIK3CA overexpression by gene amplification. J Pathol. 2008, 214 (3): 347-56. 10.1002/path.2267.
42. Ruch RJ, Porter S, Koffler LD, Dwyer-Nield LD, Malkinson AM: Defective gap junctional intercellular communication in lung cancer: loss of an important mediator of tissue homeostasis and phenotypic regulation. Exp Lung Res. 2001, 27 (3): 231-43. 10.1080/019021401300053984.
Acknowledgements
The present article is based on "A discrete Bayesian network framework for discrimination of gene expression profiles", by Nikolay Balov, which appeared in the 2012 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). The author acknowledges the partial support of NIH grant K99LM009477 from the National Library of Medicine. The content however is solely the responsibility of the author and does not represent the official views of the National Library of Medicine or the National Institutes of Health.
Declarations
The publication costs of this article were funded by the corresponding author.
This article has been published as part of BMC Medical Genomics Volume 6 Supplement 3, 2013: Selected articles from the IEEE International Conference on Bioinformatics and Biomedicine 2012: Medical Genomics. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcmedgenomics/supplements/6/S3.
Additional information
Competing interests
The author declares that there are no competing interests.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Keywords
 Support Vector Machine
 Directed Acyclic Graph
 Differentially Expressed
 Differentially Expressed Gene
 Prediction Score