Andrews S, Tsochantaridis I and Hofmann T. Support vector machines for multiple-instance learning. In Advances in Neural Information Processing Systems 15, Cambridge, MA: MIT Press, 2003; 561–8.
Settles B, Craven M and Ray S. Multiple-instance active learning. In Advances in Neural Information Processing Systems 20, Cambridge, MA: MIT Press, 2008; 1289–96.
Jorgensen Z, Zhou Y and Inge M. A multiple instance learning strategy for combating good word attacks on spam filters. J Mach Learn Res 2008; 8: 993–1019.
Fung G, Dundar M and Krishnapuram B et al. Multiple instance learning for computer aided diagnosis. In Advances in Neural Information Processing Systems 19, Cambridge, MA: MIT Press, 2007; 425–32.
Viola P, Platt J and Zhang C. Multiple instance boosting for object detection. In Advances in Neural Information Processing Systems 18, Cambridge, MA: MIT Press, 2006; 1419–26.
Felzenszwalb PF, Girshick RB and McAllester D et al. Object detection with discriminatively trained part-based models. IEEE Trans Pattern Anal Mach Intell 2010; 32: 1627–45.
Zhu J-Y, Wu J and Xu Y et al. Unsupervised object class discovery via saliency-guided multiple class learning. IEEE Trans Pattern Anal Mach Intell 2015; 37: 862–75.
Babenko B, Yang MH and Belongie S. Robust object tracking with online multiple instance learning. IEEE Trans Pattern Anal Mach Intell 2011; 33: 1619–32.
Wei X-S and Zhou Z-H. An empirical study on image bag generators for multi-instance learning. Mach Learn 2016; 105: 155–98.
Liu G, Wu J and Zhou ZH. Key instance detection in multi-instance learning. In 4th Asian Conference on Machine Learning, Singapore, 2012; 253–68.
Xu X and Frank E. Logistic regression and boosting for labeled bags of instances. In 8th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 2004; 272–81.
Chen Y, Bi J and Wang JZ. MILES: multiple-instance learning via embedded instance selection. IEEE Trans Pattern Anal Mach Intell 2006; 28: 1931–47.
Weidmann N, Frank E and Pfahringer B. A two-level learning method for generalized multi-instance problems. In 14th European Conference on Machine Learning, Cavtat-Dubrovnik, Croatia, 2003; 468–79.
Long PM and Tan L. PAC learning axis-aligned rectangles with respect to product distributions from multiple-instance examples. Mach Learn 1998; 30: 7–21.
Auer P, Long PM and Srinivasan A. Approximating hyper-rectangles: learning and pseudo-random sets. J Comput Syst Sci 1998; 57: 376–88.
Blum A and Kalai A. A note on learning from multiple-instance examples. Mach Learn 1998; 30: 23–9.
Sabato S and Tishby N. Homogeneous multi-instance learning with arbitrary dependence. In 22nd Conference on Learning Theory, Montreal, Canada, 2009.
Frénay B and Verleysen M. Classification in the presence of label noise: a survey. IEEE Trans Neural Netw Learn Syst 2014; 25: 845–69.
Angluin D and Laird P. Learning from noisy examples. Mach Learn 1988; 2: 343–70.
Blum A, Kalai A and Wasserman H. Noise-tolerant learning, the parity problem, and the statistical query model. J ACM 2003; 50: 506–19.
Gao W, Wang L and Li YF et al. Risk minimization in the presence of label noise. In 30th AAAI Conference on Artificial Intelligence, Phoenix, AZ, 2016; 1575–81.
Brodley CE and Friedl MA. Identifying mislabeled training data. J Artif Intell Res 1999; 11: 131–67.
Muhlenbach F, Lallich S and Zighed DA. Identifying and handling mislabelled instances. J Intell Inform Syst 2004; 22: 89–109.
Brabham DC. Crowdsourcing as a model for problem solving: an introduction and cases. Convergence 2008; 14: 75–90.
Sheng VS, Provost FJ and Ipeirotis PG. Get another label? Improving data quality and data mining using multiple, noisy labelers. In 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Las Vegas, NV, 2008; 614–22.
Snow R, O’Connor B and Jurafsky D et al. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In 2008 Conference on Empirical Methods in Natural Language Processing, Honolulu, HI, 2008; 254–63.
Raykar VC, Yu S and Zhao LH et al. Learning from crowds. J Mach Learn Res 2010; 11: 1297–322.
Whitehill J, Ruvolo P and Wu T et al. Whose vote should count more: optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems 22, Cambridge, MA: MIT Press, 2009; 2035–43.
Raykar VC and Yu S. Eliminating spammers and ranking annotators for crowdsourced labeling tasks. J Mach Learn Res 2012; 13: 491–518.
Wang W and Zhou ZH. Crowdsourcing label quality: a theoretical analysis. Sci China Inform Sci 2015; 58: 1–12.
Dekel O and Shamir O. Good learners for evil teachers. In 26th International Conference on Machine Learning, Montreal, Canada, 2009; 233–40.
Urner R, Ben-David S and Shamir O. Learning from weak teachers. In 15th International Conference on Artificial Intelligence and Statistics, La Palma, Canary Islands, 2012; 1252–60.
Wang L and Zhou ZH. Cost-saving effect of crowdsourcing learning. In 25th International Joint Conference on Artificial Intelligence, New York, NY, 2016; 2111–7.