Applied Mathematics Seminar

The Department of Mathematics at Western Michigan University will present an applied mathematics seminar on Thursdays.

Day and time: Thursdays, 11 to 11:50 a.m.
Location: 6625 Everett Tower

Oct. 5

Deep learning neural networks: paradigm and mathematical foundations II, presented by Yuri Ledyaev, Ph.D., Department of Mathematics, Western Michigan University

Abstract: In this second talk we discuss the relation between optimization and the training of Deep Learning networks, convolutional neural networks, and applications of Deep Learning neural networks in computer vision, speech recognition and natural language processing.

Significant progress during the last decade in developing Machine Learning methodology (especially its Deep Learning version) for such applications as speech recognition, machine translation, computer vision, bioinformatics, etc. has been manifested in such successful commercial products as Apple's Siri, Microsoft's Cortana, Amazon's Alexa and Google Assistant. Deep Learning neural networks are characterized by multi-layer structures of intermediate knowledge representations, which are built as a result of training the networks on sets of raw data and consecutive testing.

Our discussion is based on the recent books "Fundamentals of Deep Learning: Designing Next-Generation Machine Intelligence Algorithms" by N. Buduma and N. Locascio (2017) and "Deep Learning" by I. Goodfellow, Y. Bengio and A. Courville (2016).

Sept. 28

Deep learning neural networks: paradigm and mathematical foundations I, presented by Yuri Ledyaev, Ph.D., Department of Mathematics, Western Michigan University

Abstract: Significant progress during the last decade in developing Machine Learning methodology (especially its Deep Learning version) for such applications as speech recognition, machine translation, computer vision, bioinformatics, etc. has been manifested in such successful commercial products as Apple's Siri, Microsoft's Cortana, Amazon's Alexa and Google Assistant. Deep Learning neural networks are characterized by multi-layer structures of intermediate knowledge representations, which are built as a result of training the networks on sets of raw data and consecutive testing.

Currently, in spite of the impressive achievements of Deep Learning in many applications, there is no general mathematical theory which explains the efficiency of this approach for these applications, or its failure in some cases (for example, the over-fitting phenomenon).

In this series of two talks we discuss basic characteristics of the design of Deep Learning neural networks and the variety of mathematical methods used in this design.
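As a concrete illustration of the ideas the talks cover, the following sketch trains a minimal two-layer neural network by gradient descent on the XOR problem, a classic task that no single linear layer can fit but one hidden layer can. This is an illustrative example only, not material from the talks; all hyperparameters (hidden width, learning rate, iteration count) are arbitrary choices.

```python
# Minimal two-layer network trained by gradient descent (backpropagation).
# Illustrative sketch only: architecture and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets: not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units: the "intermediate representation".
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                     # hidden representation
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    return h, p

losses = []
lr = 0.5
for _ in range(2000):
    h, p = forward(X)
    losses.append(float(np.mean((p - y) ** 2)))  # mean-squared error
    # Backpropagation: chain rule applied layer by layer.
    dp = 2 * (p - y) / len(X) * p * (1 - p)      # gradient at the output
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = dp @ W2.T * (1 - h ** 2)                # gradient into hidden layer
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    # Gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

The training loss decreases as the hidden layer learns a representation in which the two classes become separable; the same loop, scaled up in depth and data, is the core of the training procedure the talks discuss.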

Our discussion is based on the recent books "Fundamentals of Deep Learning: Designing Next-Generation Machine Intelligence Algorithms" by N. Buduma and N. Locascio (2017) and "Deep Learning" by I. Goodfellow, Y. Bengio and A. Courville (2016).

Sept. 21

Analysis and Applied Mathematics Seminar organizational meeting presented by Yuri Ledyaev, Ph.D., Department of Mathematics, Western Michigan University

Abstract: This is a brief organizational meeting to discuss the program for the Analysis Seminar (also offered as MATH 6900, Seminar in Applied Mathematics) for the Fall 2017 semester.

Everyone interested in analysis, applied mathematics and their applications (in particular, optimization, control theory, differential equations, mathematical finance, mathematical biology, etc.) is invited.