BoostingTree: parallel selection of weak learners in boosting, with application to ranking

Kocsis, Levente and György, A and Nándoriné Bán, Andrea (2013) BoostingTree: parallel selection of weak learners in boosting, with application to ranking. MACHINE LEARNING, 93 (2-3). pp. 293-320. ISSN 0885-6125 MTMT:2360229; doi:10.1007/s10994-013-5364-5




Boosting algorithms have been found successful in many areas of machine learning and, in particular, in ranking. For typical classes of weak learners used in boosting (such as decision stumps or trees), a large feature space can slow down the training, while a long sequence of weak hypotheses combined by boosting can result in a computationally expensive model. In this paper we propose a strategy that builds several sequences of weak hypotheses in parallel, and extends the ones that are likely to yield a good model. The weak hypothesis sequences are arranged in a boosting tree, and new weak hypotheses are added to promising nodes (both leaves and inner nodes) of the tree using some randomized method. Theoretical results show that the proposed algorithm asymptotically achieves the performance of the base boosting algorithm applied. Experiments are provided in ranking web documents and move ordering in chess, and the results indicate that the new strategy yields better performance when the length of the sequence is limited, and converges to similar performance as the original boosting algorithms otherwise. © 2013 The Author(s).
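The core idea above — maintaining several weak-hypothesis sequences as a tree and extending promising nodes (leaves or inner nodes) with a randomized rule — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the node-selection rule (a softmax over training losses), the stump-style weak learners, and the toy squared-loss data are all assumptions made for the example.

```python
import math
import random

class Node:
    """A node in the boosting tree; the path from the root to this node
    is one sequence of weak hypotheses (one candidate boosted model)."""
    def __init__(self, hypothesis=None, parent=None, loss=float("inf")):
        self.hypothesis = hypothesis   # weak learner added at this node (None at root)
        self.parent = parent
        self.children = []
        self.loss = loss               # training loss of the sequence ending here

    def sequence(self):
        node, seq = self, []
        while node.parent is not None:
            seq.append(node.hypothesis)
            node = node.parent
        return list(reversed(seq))

def boosting_tree(boost_step, loss_fn, budget, rng):
    """Grow a tree of weak-hypothesis sequences in parallel.

    boost_step(sequence, rng) -> a new weak hypothesis extending `sequence`
    loss_fn(sequence)         -> training loss of the combined model
    """
    root = Node(loss=loss_fn([]))
    nodes = [root]
    for _ in range(budget):
        # Randomized selection over ALL nodes (leaves and inner nodes):
        # lower-loss sequences are more likely to be extended.
        best = min(n.loss for n in nodes)
        weights = [math.exp(-(n.loss - best)) for n in nodes]
        node = rng.choices(nodes, weights=weights, k=1)[0]
        h = boost_step(node.sequence(), rng)
        child = Node(h, parent=node)
        child.loss = loss_fn(child.sequence())
        node.children.append(child)
        nodes.append(child)
    return min(nodes, key=lambda n: n.loss)   # best sequence found

# Toy usage: 1D squared-loss boosting with randomized residual-fitting stumps.
xs = list(range(10))
ys = [x % 3 for x in xs]

def predict(seq, x):
    return sum(h(x) for h in seq)

def loss_fn(seq):
    return sum((predict(seq, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def boost_step(seq, rng):
    # Fit a stump with a random split point to the current residuals.
    residuals = [y - predict(seq, x) for x, y in zip(xs, ys)]
    t = rng.choice(xs)
    left = [r for x, r in zip(xs, residuals) if x <= t]
    right = [r for x, r in zip(xs, residuals) if x > t]
    lv = sum(left) / len(left) if left else 0.0
    rv = sum(right) / len(right) if right else 0.0
    return lambda x, t=t, lv=lv, rv=rv: lv if x <= t else rv

best = boosting_tree(boost_step, loss_fn, budget=30, rng=random.Random(0))
```

With a fixed budget of weak-learner fits, the tree hedges across several sequences instead of committing to one greedy chain, which mirrors the paper's observed advantage when the sequence length is limited.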

Item Type: ISI Article
Uncontrolled Keywords: ranking, random search, boosting
Subjects: Q Science > QA Mathematics and Computer Science > QA75 Electronic computers. Computer science / computing, computer science
Divisions: ?? R104a ??
SWORD Depositor: MTMT Injector
Depositing User: EPrints Admin
Date Deposited: 05 Feb 2014 12:32
Last Modified: 16 May 2014 11:49
