
Vithursan Thangarasa

Originally from Toronto, Canada, and currently based in the San Francisco Bay Area, I am deeply passionate about neural network compression, large-scale foundation models, and enhancing the efficiency of training large neural networks, with a keen interest in generative AI.


On November 28, 2017, I gave a talk to the Machine Learning Research Group (MLRG) at the University of Guelph on different methods for teaching neural networks how to learn efficiently and effectively. In the talk, I reviewed several methods from the literature, such as curriculum learning [1], variants of self-paced learning [2, 3], machine teaching [5], and meta-learning [4].
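
To give a flavor of one of these ideas, below is a minimal sketch of the alternating minimization behind self-paced learning [2]: keep only the examples whose current loss falls below a pace threshold, fit the model on that easy subset, then relax the threshold so harder examples are gradually admitted. The toy data, the `lam` threshold, and the `growth` schedule are illustrative choices of mine, not details from the paper or the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: a clean linear signal plus a few corrupted "hard" points.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=200)
y[:20] += rng.normal(scale=5.0, size=20)   # make the first 20 examples hard

w = np.zeros(5)   # model parameters
lam = 0.5         # pace parameter: admit examples with loss below lam
growth = 1.3      # schedule for raising lam each round

for step in range(30):
    losses = (X @ w - y) ** 2          # per-example loss under the current model
    v = losses < lam                   # binary self-paced weights v_i in {0, 1}
    if v.any():
        # Refit on the currently "easy" subset only (least squares here).
        w, *_ = np.linalg.lstsq(X[v], y[v], rcond=None)
    lam *= growth                      # anneal the pace so harder examples enter
```

In the original formulation of [2], an example is admitted when its loss is below 1/K for a weight K that is annealed over rounds; raising `lam` here plays the same role.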


You can download a PDF of my Google Slides.




More details coming soon…


References

[1] Bengio, Yoshua, Louradour, Jerome, Collobert, Ronan, and Weston, Jason. Curriculum learning. In Proceedings of the 26th International Conference on Machine Learning (ICML), pp. 41–48. ACM, 2009.


[2] Kumar, M. Pawan, Packer, Benjamin, and Koller, Daphne. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems (NIPS), pp. 1189–1197, 2010.


[3] Jiang, Lu, Meng, Deyu, Yu, Shoou-I, Lan, Zhenzhong, Shan, Shiguang, and Hauptmann, Alexander. Self-paced learning with diversity. In Advances in Neural Information Processing Systems (NIPS), pp. 2078–2086, 2014.


[4] Thrun, Sebastian. Lifelong learning algorithms. In Learning to Learn, pp. 181–209. Springer US, Boston, MA, 1998. ISBN 978-1-4615-5529-2.


[5] Zhu, Xiaojin. Machine teaching: An inverse problem to machine learning and an approach toward optimal education. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI), pp. 4083–4087, 2015.