

Editor: Wu Qin    Posted: 2019-09-09

Title: The changing face of the curse of dimensionality in machine learning

Speaker: Prof. Josef Kittler

Date & Time: 2019-09-09, 2:00 PM

Location: B222

Abstract: The dimensionality of sensor-data representation in machine perception has always been an important design factor. Since the infancy of Artificial Intelligence, the importance of dimensionality reduction has been motivated biologically, by emulating the sensory information-processing architecture of the human perceptual system; philosophically, by the Ugly Duckling theorem; performance-wise, by its impact on the generalisation properties of machine learning systems; and computationally, from the perspective of engineering efficiency. These arguments have inspired vigorous research in feature selection and its dimensionality-reducing mapping counterpart, feature extraction. For a while, with the advent of kernel methods, developed as an integral part of Support Vector Machine learning, the achievements of dimensionality reduction were dismissed and the arguments reversed: kernel functions map the input stimulus to a space of infinite dimensionality, which undermines the very concept of dimensionality reduction. Gradually, however, the negative impact of noise, nuisance variables and irrelevant information on the performance of machine learning systems came to be recognised, and this reinvigorated the quest for informative descriptors. Recent developments in deep learning have continued this quest by seeking a minimal representation of the input data that is sufficient for the task at hand. A dilemma remains, however. Practically, it is not feasible to construct a task-dependent deep neural network (DNN) for each decision-making problem; if, on the other hand, we train a network to serve a range of tasks, its representational capacity for each individual task will not be minimal. We propose a dimensionality reduction method that dynamically selects features for each task, and demonstrate the merits of this approach in an application to visual object tracking in video.
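The abstract's central theme, mapping high-dimensional sensor data to a compact, informative representation, can be illustrated with the classical linear case: principal component analysis computed via the SVD. This is only a generic sketch of dimensionality reduction, not the dynamic, task-dependent feature-selection method the talk proposes (whose details are not given here); the function name and data are illustrative.

```python
import numpy as np

def pca_reduce(X, k):
    """Project data X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                 # centre each feature
    # SVD of the centred data; rows of Vt are the principal directions,
    # ordered by decreasing explained variance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                    # (n_samples x k) reduced representation

# Toy example: reduce 10-dimensional samples to 2 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Z = pca_reduce(X, 2)
print(Z.shape)  # (100, 2)
```

The first retained component carries at least as much variance as the second, which is the sense in which such a projection keeps the "most informative" directions while discarding the rest.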

Biography: Prof. Josef Kittler is a Fellow of the Royal Academy of Engineering, a Fellow of the International Association for Pattern Recognition (IAPR), and a former President of the IAPR. He received the B.A., Ph.D., and D.Sc. degrees from the University of Cambridge. He is currently a Distinguished Professor of machine intelligence with the Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, U.K., which he founded in 1986. He has authored the textbook Pattern Recognition: A Statistical Approach and over 600 scientific papers, which have been cited more than 60,000 times. His research interests include biometrics, video and image database retrieval, medical image analysis, and cognitive vision.

Host: Wu Xiaojun