k-relevance vectors: Considering relevancy beside nearness

(2021) k-relevance vectors: Considering relevancy beside nearness. Applied Soft Computing. ISSN 1568-4946 (print), 1872-9681 (online)

Full text not available from this repository.

Abstract

This study combines two learning paradigms: the k-nearest neighbor (k-NN) rule, a memory-based learning paradigm, and the relevance vector machine (RVM), a statistical learning paradigm. The purpose is to improve the performance of the k-NN rule by selecting important features with a sparse Bayesian learning method. The combination is performed in kernel space and is called k-relevance vector (k-RV). The proposed model significantly prunes irrelevant features. Combining k-NN and RVM also yields a new similarity measure for the k-NN rule, called k-relevancy, which considers "relevancy" in the feature space alongside "nearness" in the input space. We also introduce a new parameter, responsible for early stopping of the RVM iterations, that is able to improve the classification accuracy. Extensive experiments are conducted on several classification datasets from the University of California Irvine (UCI) repository and two real datasets from the computer vision domain. The performance of k-RV is highly competitive with several state-of-the-art methods in terms of classification accuracy. © 2021 Elsevier B.V. All rights reserved.
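To make the pipeline the abstract describes more concrete, below is a minimal, simplified sketch: a sparse Bayesian model assigns per-feature relevance, near-zero features are pruned, and k-NN then measures nearness in the relevance-weighted space. This is not the authors' implementation: it works in the input space rather than kernel space, scikit-learn's ARDRegression stands in for the RVM's sparse Bayesian learning, and the pruning threshold and dataset are hypothetical choices made only for illustration.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import ARDRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Sparse Bayesian relevance estimation: the ARD prior drives weights of
# irrelevant features toward zero, loosely analogous to RVM pruning.
ard = ARDRegression().fit(X_tr, y_tr)
relevance = np.abs(ard.coef_)

# Prune features whose relevance is negligible (hypothetical threshold).
keep = relevance > 1e-3 * relevance.max()

# "k-relevancy" in spirit: nearness computed in the relevance-weighted
# feature space instead of the raw input space.
weights = relevance[keep]
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_tr[:, keep] * weights, y_tr)
print("weighted k-NN accuracy:", knn.score(X_te[:, keep] * weights, y_te))

The design point this illustrates is the one the abstract emphasizes: the neighbor search is unchanged, but the metric it runs over is shaped by a sparse relevance estimate, so irrelevant features no longer contribute to "nearness".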

Item Type: Article
Keywords: Nearest neighbor rule; Relevance vector machine; Sparsity; Sparse Bayesian learning; Nearest-neighbor classification; Extreme learning machine; Diagnosis system
Journal or Publication Title: Applied Soft Computing
Journal Index: ISI
Volume: 112
Identification Number: https://doi.org/10.1016/j.asoc.2021.107762
ISSN: 1568-4946 (print); 1872-9681 (online)
Depositing User: Zahra Otroj
URI: http://eprints.mui.ac.ir/id/eprint/17497