Information theory of algorithms: How precisely should we compute?
An information theoretic framework for algorithm analysis.
TU Wien, Campus Favoritenstraße
Seminarraum FAV 05
1040 Vienna, Favoritenstraße 9-11
Stiege 1, 5. Stock, Gelber Bereich, HA0503
If you would like to attend, please register by sending an e-mail to email@example.com with the subject: “Guest Talk 05.11.2015: Joachim M. Buhmann”.
Algorithms map input spaces to output spaces, where the inputs may be affected by fluctuations. Besides run time and memory consumption, an algorithm can be characterized by its sensitivity to the signal in the input and its robustness to input fluctuations. The achievable precision of an algorithm, i.e., the attainable resolution in output space, is determined by its capability to extract predictive information from the input relative to its output. I will present an information theoretic framework for algorithm analysis in which an algorithm is characterized as the computational evolution of a (possibly contracting) posterior distribution over the output space. The tradeoff between precision and stability is controlled by an input-sensitive generalization capacity (GC). GC measures how much the posteriors of two different problem instances generated by the same data source agree despite the noise in the input. Thereby, GC objectively ranks different algorithms for the same data processing task based on the bit rate of their respective capacities. Information theoretic algorithm selection is demonstrated for minimum spanning tree algorithms and for greedy MaxCut algorithms. The method can rank centroid-based and spectral clustering methods, e.g., k-means, pairwise clustering, normalized cut, adaptive ratio cut, and dominant set clustering. Applications include cortex parcellation based on diffusion tensor imaging and the analysis of gene expression data.
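The posterior-agreement idea in the abstract can be illustrated with a toy sketch. It assumes a Gibbs posterior p(c) ∝ exp(−β · cost(c)) over a small candidate output set and an agreement score of the form log2(|C| · Σ_c p1(c) p2(c)); both are illustrative assumptions, not necessarily the speaker's exact definitions. Two independently perturbed instances of the same problem then yield a score between 0 bits (uniform posteriors, no information extracted) and log2 |C| bits (both posteriors concentrated on the same output):

```python
import math

def gibbs_posterior(costs, beta):
    """Gibbs posterior over candidate outputs: p(c) proportional to exp(-beta * cost(c))."""
    weights = [math.exp(-beta * r) for r in costs]
    z = sum(weights)
    return [w / z for w in weights]

def agreement_bits(p1, p2):
    """Toy posterior-agreement score in bits: log2(|C| * sum_c p1(c) * p2(c)).

    Equals log2(|C|) when both posteriors concentrate on the same output,
    and 0 when either posterior is uniform over the |C| candidates.
    """
    overlap = sum(a * b for a, b in zip(p1, p2))
    return math.log2(len(p1) * overlap)

# Two noisy instances of the same toy problem: costs of four candidate
# outputs under two independently perturbed inputs (hypothetical numbers).
costs_instance1 = [0.2, 1.0, 1.1, 1.3]
costs_instance2 = [0.3, 1.2, 0.9, 1.4]

beta = 3.0  # inverse "computational temperature": larger beta, sharper posteriors
p1 = gibbs_posterior(costs_instance1, beta)
p2 = gibbs_posterior(costs_instance2, beta)
print(f"agreement: {agreement_bits(p1, p2):.2f} bits")
```

In this reading, an algorithm (or a choice of β) that concentrates its posterior on outputs that survive the input noise earns a higher bit rate, which is the quantity used to rank algorithms for the same task.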
Joachim M. Buhmann heads the Institute for Machine Learning in the Computer Science Department at ETH Zurich. He has been a full professor for Information Science and Engineering since October 2003. Since 2014, he has served as Vice Rector for Education at ETH Zurich. He studied physics at the Technical University Munich and obtained his PhD in Theoretical Physics. As a postdoc and research assistant professor, he spent 1988-92 at the University of Southern California, Los Angeles, and the Lawrence Livermore National Laboratory. He held a professorship for applied computer science at the University of Bonn, Germany from 1992 to 2003. His research interests span the areas of pattern recognition and data analysis, including machine learning, statistical learning theory, and information theory. Application areas of his research include image analysis, medical imaging, acoustic processing, and bioinformatics. Since 2009, he has served as president of the German Pattern Recognition Society (DAGM e.V.).
This talk is organized by the Computer Graphics Group at the Institute of Computer Graphics and Algorithms.
- Prof. Dr. Joachim M. Buhmann, Information Science and Engineering Group, Department of Computer Science, ETH Zurich, Switzerland