
The advantages and disadvantages of these models can be categorized according to nine different criteria: ease of model building, ability to detect complex relationships between predictor variables and outcome, ability to detect implicit interactions among predictor variables, generalizability and overfitting, discrimination ability, computational considerations, ease of sharing the models with other researchers, generation of confidence intervals, and ease of clinical interpretation (Table). Building an ANN requires less domain knowledge than does building a logistic regression model. A variety of software with user-friendly interfaces is available that can be used to quickly build an ANN without the need to understand the inherent structure of the network. Logistic regression models are more challenging to construct because they require expert domain knowledge. Logistic regression models support better clinical or real-life inference than do ANNs. Logistic regression models readily identify the variables that are most predictive of outcome on the basis of the coefficients and the corresponding odds ratios (6,26). The odds ratios can be interpreted as the relative increase or decrease in the probability of an outcome given a change in the predictor variables. In contrast, the parameters of ANNs do not carry any real-life interpretation. In ANNs, inputs and outputs are not related in a form that the human user can understand, which is why ANNs are commonly called black boxes. In general, ANNs can be thought of as a generalization of logistic regression models (26,28,29). The main advantage of ANNs over logistic regression models lies in their hidden layers of nodes. In fact, a special ANN with no hidden node has been shown to be identical to a logistic regression model (29). ANNs are particularly useful when there are implicit interactions and complex relationships in the data, whereas logistic regression models are the better choice when one needs to draw statistical inferences from the output. In medical diagnosis, neither model can replace the other, but the two may be used complementarily to aid in decision making. Both models have the potential to help physicians with respect to understanding cancer risk factors, risk estimation, and diagnosis.
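A minimal sketch, not taken from the article, may help make two of these points concrete: that logistic regression coefficients translate into interpretable odds ratios, and that a "network" with no hidden layer and a sigmoid output computes the same function as a logistic regression model. The synthetic data and the use of scikit-learn are assumptions made purely for illustration.

```python
# Illustrative sketch (assumed setup, not the article's experiment):
# 1) exp(coefficient) gives the odds ratio for each predictor.
# 2) A zero-hidden-layer "ANN" with a sigmoid output is the same model
#    as logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # three hypothetical predictors
logit = 0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.1 * X[:, 2]
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)

# Odds ratios: multiplicative change in the odds of the outcome per
# one-unit increase in the corresponding predictor.
odds_ratios = np.exp(model.coef_.ravel())
print("odds ratios:", odds_ratios)

# The same prediction written as a network with no hidden nodes:
# output = sigmoid(w . x + b), identical to the fitted logistic regression.
w, b = model.coef_.ravel(), model.intercept_[0]
p_net = 1 / (1 + np.exp(-(X @ w + b)))
p_lr = model.predict_proba(X)[:, 1]
print("max difference:", np.max(np.abs(p_net - p_lr)))  # ~0 up to rounding
```

Once hidden nodes are added, the network can represent interactions and nonlinearities that the plain logistic regression form cannot, which is the trade-off the paragraph above describes.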
System modeling with neuro-fuzzy systems involves two contradictory requirements: interpretability versus accuracy. The pseudo outer-product (POP) rule identification algorithm used in the family of pseudo outer-product-based fuzzy neural networks (POPFNN) suffers from an exponential increase in the number of identified fuzzy rules and in computational complexity arising from high-dimensional data. This decreases the interpretability of the POPFNN in linguistic fuzzy modeling. This article proposes a novel rough set-based pseudo outer-product (RSPOP) algorithm that integrates the sound concept of knowledge reduction from rough set theory with the POP algorithm. The proposed algorithm not only performs feature selection through the reduction of attributes but also extends the reduction to rules without redundant attributes. As many possible reducts exist in a given rule set, an objective measure is developed for POPFNN to correctly identify the reducts that improve the inferred consequence. Experimental results are presented using published data sets and a real-world application involving highway traffic flow prediction to evaluate the effectiveness of using the proposed algorithm to identify fuzzy rules in the POPFNN using the compositional rule of inference and singleton fuzzifier (POPFNN-CRI(S)) architecture. Results showed that the proposed rough set-based pseudo outer-product algorithm reduces computational complexity, improves the interpretability of neuro-fuzzy systems by identifying significantly fewer fuzzy rules, and improves the accuracy of the POPFNN.
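As a rough intuition for the "knowledge reduction" idea that RSPOP borrows from rough set theory, the toy sketch below searches for reducts: minimal subsets of condition attributes that still classify a small, made-up decision table consistently. It is not the published RSPOP procedure (which further extends reduction to the rules themselves and uses an objective measure to choose among multiple reducts); the table, function names, and brute-force search are assumptions for illustration only.

```python
# Toy rough-set attribute reduction (illustrative only, not the RSPOP algorithm).
from itertools import combinations

# Each row: condition attribute values + a decision (rule consequent) label.
rows = [
    ((0, 1, 0), "low"),
    ((0, 1, 1), "low"),
    ((1, 0, 0), "high"),
    ((1, 1, 0), "high"),
    ((0, 0, 1), "low"),
]
n_attrs = 3

def consistent(attr_subset):
    """True if rows agreeing on the chosen attributes never disagree on the decision."""
    seen = {}
    for values, decision in rows:
        key = tuple(values[i] for i in attr_subset)
        if key in seen and seen[key] != decision:
            return False
        seen[key] = decision
    return True

def minimal_reducts():
    """Smallest attribute subsets that preserve the classification (the reducts)."""
    for size in range(1, n_attrs + 1):
        found = [s for s in combinations(range(n_attrs), size) if consistent(s)]
        if found:
            return found
    return [tuple(range(n_attrs))]

print("reducts:", minimal_reducts())  # e.g. [(0,)] -> attribute 0 alone suffices
```

Dropping the redundant attributes from every rule in this spirit is what lets the approach shrink the rule base and keep the linguistic model interpretable while, as the abstract reports, improving accuracy.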
