TY - GEN

T1 - A comparison of two algorithms for predicting the condition number

AU - Han, Dianwei

AU - Zhang, Jun

PY - 2007

Y1 - 2007

N2 - We present experimental results comparing the Modified K-Nearest Neighbor (MkNN) algorithm with the Support Vector Machine (SVM) in the prediction of condition numbers of sparse matrices. The condition number of a matrix is an important measure in numerical analysis and linear algebra. However, the direct computation of the condition number of a matrix is very expensive in terms of CPU and memory cost, and becomes prohibitive for large matrices. We use data mining techniques to estimate the condition number of a given sparse matrix. In our previous work, we used the Support Vector Machine (SVM) to predict condition numbers. While SVM is considered a state-of-the-art classification/regression algorithm, kNN is usually used for collaborative filtering tasks. Since prediction can also be interpreted as a classification/regression task, virtually any supervised learning algorithm (such as kNN) can be applied. Experiments are performed on a publicly available dataset. We conclude that Modified kNN (MkNN) performs much better than SVM on this particular dataset.

AB - We present experimental results comparing the Modified K-Nearest Neighbor (MkNN) algorithm with the Support Vector Machine (SVM) in the prediction of condition numbers of sparse matrices. The condition number of a matrix is an important measure in numerical analysis and linear algebra. However, the direct computation of the condition number of a matrix is very expensive in terms of CPU and memory cost, and becomes prohibitive for large matrices. We use data mining techniques to estimate the condition number of a given sparse matrix. In our previous work, we used the Support Vector Machine (SVM) to predict condition numbers. While SVM is considered a state-of-the-art classification/regression algorithm, kNN is usually used for collaborative filtering tasks. Since prediction can also be interpreted as a classification/regression task, virtually any supervised learning algorithm (such as kNN) can be applied. Experiments are performed on a publicly available dataset. We conclude that Modified kNN (MkNN) performs much better than SVM on this particular dataset.

UR - http://www.scopus.com/inward/record.url?scp=47349133199&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=47349133199&partnerID=8YFLogxK

U2 - 10.1109/ICMLA.2007.3

DO - 10.1109/ICMLA.2007.3

M3 - Conference contribution

AN - SCOPUS:47349133199

SN - 0769530699

SN - 9780769530697

T3 - Proceedings - 6th International Conference on Machine Learning and Applications, ICMLA 2007

SP - 223

EP - 228

BT - Proceedings - 6th International Conference on Machine Learning and Applications, ICMLA 2007

T2 - 6th International Conference on Machine Learning and Applications, ICMLA 2007

Y2 - 13 December 2007 through 15 December 2007

ER -