BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IEEE Toronto Section - ECPv6.15.17//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.ieeetoronto.ca
X-WR-CALDESC:Events for IEEE Toronto Section
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20200101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20211116T170000
DTEND;TZID=UTC:20211116T183000
DTSTAMP:20260418T064341Z
CREATED:20211030T112019Z
LAST-MODIFIED:20211216T081306Z
UID:10000478-1637082000-1637087400@www.ieeetoronto.ca
SUMMARY:Generalizing from Training Data
DESCRIPTION:Prerequisites: You do not need to have attended the earlier talks. If you know zero math and zero machine learning\, then this talk is for you. Jeff will do his best to explain fairly hard mathematics to you. If you know a bunch of math and/or a bunch of machine learning\, then these talks are for you. Jeff tries to spin the ideas in new ways. Longer Abstract: There is some theory. If a machine is found that gives the correct answers on the randomly chosen training data without simply memorizing\, then we can prove that with high probability this same machine will also work well on never-before-seen instances drawn from the same distribution. The easy proof requires D>m\, where m is the number of bits needed to describe your learned machine and D is the number of training data items. A much harder proof (which we likely won’t cover) requires only D>VC\, where VC is the VC-dimension (Vapnik–Chervonenkis) of your machine. The second requirement is easier to meet because VC<m. Speaker(s): Prof. Jeff Edmonds\, Virtual: https://events.vtools.ieee.org/m/287720
URL:https://www.ieeetoronto.ca/event/generalizing-from-training-data/
LOCATION:Virtual: https://events.vtools.ieee.org/m/287720
CATEGORIES:Instrumentation & Measurement,Women in Engineering
END:VEVENT
END:VCALENDAR