Generative Adversarial Networks: Used for learning to understand and generate random samples from a data distribution

Virtual: https://events.vtools.ieee.org/m/289241

Prerequisites: You do not need to have attended the earlier talks. If you know zero math and zero machine learning, this talk is for you: Jeff will do his best to explain fairly hard mathematics to you. If you know a bunch of math and/or a bunch of machine learning, these talks are also for you: Jeff tries to spin the ideas in new ways.

Longer Abstract: Suppose you have a distribution of random images of cats, and suppose you want to learn a neural network that takes uniformly random bits as input and outputs an image of a cat drawn from this same distribution. One fun consequence is that this neural network won't be perfect, so it will output images of "cats" that it has never seen before. You can also make small changes to the network's input bits and watch how the resulting image of a cat changes. The way we do this is with Generative Adversarial Networks, which are formed by having two competing agents. The task of the first agent, as described above, is to output random images of cats. The task of the second is to discern whether a given image was produced by the true random distribution or by the first agent. By competing, they learn. If we have more time in the talk, we will also discuss Convolutional & Recurrent Networks, which are used for learning images and sound that are invariant over location and time.
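The two-agent game described above can be sketched in a few dozen lines. The toy below is illustrative, not from the talk: it replaces the cat images with a 1-D "real" distribution (a Gaussian), and uses the simplest possible generator and discriminator (a linear map and a logistic classifier) so the competing gradient updates can be written by hand. All names and hyperparameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator G(z) = a*z + b turns random noise z into a sample;
# discriminator D(x) = sigmoid(w*x + c) guesses "real" vs "fake".
a, b = 1.0, 0.0                   # generator parameters
w, c = 0.1, 0.0                   # discriminator parameters
lr, batch = 0.02, 64
real_mean, real_std = 4.0, 1.0    # the target distribution (stand-in for "cats")

for step in range(3000):
    # --- discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    x_real = rng.normal(real_mean, real_std, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # hand-derived gradients of -log D(real) - log(1 - D(fake))
    gw = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    gc = np.mean(-(1 - d_real) + d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- generator step: try to fool D (non-saturating loss -log D(fake)) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    dx = -(1 - d_fake) * w          # dLoss/dx_fake, back through G
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# After training, the generator's samples should drift toward the real distribution.
z = rng.normal(0.0, 1.0, 10_000)
samples = a * z + b
print(f"generated mean {samples.mean():.2f}, std {samples.std():.2f}")
```

In a real GAN both players are deep networks and the samples are images, but the alternating update pattern is exactly this: one gradient step to sharpen the discriminator, one to make the generator's output harder to distinguish.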