  • Artificially Intelligent Imaging (AI2): System to Circuit to Device Level Implementations of Smart CMOS Imaging, A Generalized Approach for Non-Application Specific Intelligence Design (NAS-ID)

    Room ENGLG 05 George Vari Engineering Building

    August 11, 2016 at 1:00 p.m. Dr. Faycal Saffih, Department of Electrical Engineering, UAE University, will be presenting "Artificially Intelligent Imaging (AI2): System to Circuit to Device Level Implementations of Smart CMOS Imaging, A Generalized Approach for Non-Application Specific Intelligence Design (NAS-ID)".

    Speaker: Dr. Faycal Saffih, Assistant Professor, Department of Electrical Engineering, UAE University
    Day & Time: Thursday, August 11, 2016, 1:00 p.m. – 2:00 p.m.
    Location: Room ENGLG 05, George Vari Engineering Building, Department of Electrical and Computer Engineering, Ryerson University
    Contact: Dimitri Androutsos

    Abstract: In this talk we will present the development of intelligence (vs. intelligent) implementations from top-down and bottom-up approaches, and from electrical engineering design and biological biomimicry to solid-state physics prediction. Smart CMOS imaging is the application of choice where these multi-disciplinary studies interact, suggesting a novel research approach for designing the intelligent devices needed in a variety of advanced technological systems and applications, such as biomedical and renewable energy devices, to name a few.

    Biography: Dr. Fayçal Saffih (IEEE Member since 2000) received the B.Sc. degree (with Best Honors) in Solid-State Physics from the University of Sétif-1, Sétif, Algeria, in 1996, the M.Sc. degree in Digital Neural Networks from the Physics Department, University of Malaya, Kuala Lumpur, Malaysia, in 1998, and the Ph.D. degree in Smart CMOS Imaging from the Electrical and Computer Engineering Department, University of Waterloo, Waterloo, ON, Canada. Over a decade-long journey between academia and industry, Dr. Saffih enriched his experience across many dimensions, spanning microelectronics from devices up to systems and industry from R&D departments to entrepreneurship and start-ups, from the western USA (OR) to Singapore's prestigious A*STAR Agency for Science, Technology and Research. Recently, Dr. Saffih has ventured into renewable energy research and business, starting with a Stanford certification in 2013, and he is currently undertaking an online program from Renewables Academy (RENAC), Germany. Dr. Faycal Saffih is currently an assistant professor in the Electrical Engineering Department of the UAE University and a regular visiting scholar at the University of Waterloo and the University of Alberta, among others. His research is on intelligence extraction and its implementation in devices and systems, particularly smart CMOS image sensors.

  • Perspectives of Automatic Speech Recognition (ASR) Technology

    TRS 1129, Ryerson University, Toronto

    Thursday, October 27, 2016. Prof. Sadaoki Furui, IEEE Fellow and President of the Toyota Technological Institute at Chicago, will be presenting "Perspectives of Automatic Speech Recognition (ASR) Technology".

    Speaker: Prof. Sadaoki Furui, IEEE Fellow, President of Toyota Technological Institute at Chicago
    Day & Time: Thursday, October 27, 2016, 12:00 p.m. – 1:00 p.m.
    Location: TRS 1129, Ryerson University, Toronto

    Abstract: DNNs (Deep Neural Networks) based on "deep learning" have significantly raised automatic speech recognition (ASR) performance over the past several years. This talk gives an overview of the major DNN-based techniques successfully used in acoustic and language modeling for ASR. However, what we can do with ASR technology is still very limited, and we still have many challenges that cannot be solved simply by relying on the capability of DNNs. Data sparseness is one of the most difficult problems in constructing ASR systems, since speech is highly variable and it is too costly to construct annotated "big speech data" covering all possible variations. We need to focus on how to collect rich and effective speech databases covering a wide range of variations, on active learning for automatically selecting data for annotation (a minimal illustrative sketch follows this entry), on cheap, fast and good-enough transcription, and on efficient supervised, semi-supervised, or unsupervised training/adaptation based on advanced machine learning techniques. We also need to extend current efforts, think deeply about and analyze how human beings recognize and understand speech, and implement various knowledge sources in ASR systems using machine learning techniques to achieve innovations. This talk focuses on my personal perspectives for the future of speech recognition research.

    Biography: Sadaoki Furui received the B.S., M.S., and Ph.D. degrees from the University of Tokyo, Japan, in 1968, 1970, and 1978, respectively. Since joining the Nippon Telegraph and Telephone Corporation (NTT) Labs in 1970, he has worked on speech analysis, speech recognition, speaker recognition, speech synthesis, speech perception, and multimodal human-computer interaction. From 1978 to 1979, he was a visiting researcher at AT&T Bell Laboratories, Murray Hill, New Jersey. He was a Research Fellow and the Director of Furui Research Laboratory at NTT Labs. He became a Professor at the Tokyo Institute of Technology in 1997, where he served as Dean of the Graduate School of Information Science and Engineering and Director of the University Library. He was given the title of Professor Emeritus and became a Professor at the Academy for Global Leadership in 2011. He is now serving as President of the Toyota Technological Institute at Chicago (TTI-C). He has authored or coauthored around 1,000 published papers and books. He was elected a Fellow of the IEEE, the Acoustical Society of America (ASA), the Institute of Electronics, Information and Communication Engineers of Japan (IEICE), and the International Speech Communication Association (ISCA). He received the Paper Award and the Achievement Award from the IEEE Signal Processing Society, the IEICE, and the Acoustical Society of Japan (ASJ). He received the ISCA Medal for Scientific Achievement and the IEEE James L. Flanagan Speech and Audio Processing Award. He received the NHK (Japan Broadcasting Corporation) Broadcast Cultural Award and the Okawa Prize. He also received the Achievement Award from the Minister of Science and Technology and the Minister of Education, Japan, and the Purple Ribbon Medal from the Japanese Emperor.
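    The abstract above mentions active learning for automatically selecting data for annotation. Below is a minimal, hedged sketch of one common form of that idea (uncertainty sampling): rank unlabeled utterances by the recognizer's confidence and send the least confident ones for transcription. The confidence values, utterance names, and annotation budget are illustrative assumptions, not part of the talk.

    ```python
    # Hedged sketch of uncertainty-sampling-based data selection for annotation.
    import numpy as np

    def select_for_annotation(utterance_ids, confidences, budget):
        """Pick the `budget` utterances with the lowest recognizer confidence."""
        order = np.argsort(confidences)           # ascending: least confident first
        return [utterance_ids[i] for i in order[:budget]]

    # Toy usage: six utterances with posterior-based confidences from a first-pass decode.
    ids = ["utt1", "utt2", "utt3", "utt4", "utt5", "utt6"]
    conf = np.array([0.93, 0.41, 0.78, 0.55, 0.97, 0.62])
    print(select_for_annotation(ids, conf, budget=2))  # ['utt2', 'utt4']
    ```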

  • IEEE Signal Processing Society (SPS) Winter School on Distributed Signal Processing for Secure Cyber Physical Systems

    Concordia

    November 2–4, the IEEE Signal Processing Society (SPS) is hosting a Winter School on Distributed Signal Processing for Secure Cyber Physical Systems at Concordia.

    Speakers: This event consists of presentations given by internationally well-known distinguished speakers, including members of the IEEE Signal Processing Society Board of Governors, six IEEE Fellows, and a notable industry presentation from PwC's Cybersecurity & Privacy practice in Canada, as follows: Prof. Ali Sayed (UCLA, President-Elect of IEEE SPS); Prof. Georgios Giannakis (IEEE Fellow, University of Minnesota); Prof. Pramod Varshney (IEEE Fellow, Syracuse University); Prof. Deepa Kundur (IEEE Fellow, University of Toronto); Prof. Anna Scaglione (IEEE Fellow, Arizona State University); Prof. Tongwen Chen (IEEE Fellow, University of Alberta); Prof. Mark Coates (McGill University); and Mr. Sajith Nair, Partner in PwC's Cybersecurity & Privacy in Canada.

    About the Event: This is a unique opportunity for Concordia's students and researchers working in, or interested in, security and signal processing to learn more about state-of-the-art research, to talk in person with elite and internationally well-known researchers, and to start building the basis for future research collaborations.

    Register: Please check the School's homepage for the call for participation (CFP), biographies of the invited speakers, and registration details: https://users.encs.concordia.ca/~i-sip/s3pcps2016/

  • Regularization by Denoising (RED)

    University of Toronto, Bahen Center (Room BA 5281)

    Thursday, April 13, 2017 at 10:00 a.m. Dr. Peyman Milanfar, leader of the Computational Imaging team in Google Research, will be presenting an IEEE Signal Processing Society Distinguished Lecture, "Regularization by Denoising (RED)".

    Day & Time: Thursday, April 13, 2017, 10:00 a.m. – 11:00 a.m.
    Speaker: Dr. Peyman Milanfar, Leader of the Computational Imaging team in Google Research; Visiting Faculty, Electrical Engineering Department, UC Santa Cruz
    Location: University of Toronto, Bahen Center (Room BA 5281), 40 St. George Street, Toronto, ON M5S 2E4, https://goo.gl/maps/7ick2cparLF2
    Contact: Mehrnaz Shokrollahi
    Organizers: IEEE Signal Processing Chapter, Toronto Section

    Abstract: Image denoising is the most fundamental problem in image enhancement, and it is largely solved: it has reached impressive heights in performance and quality, almost as good as it can ever get. But interestingly, it turns out that we can solve many other problems using the image denoising "engine". I will describe the Regularization by Denoising (RED) framework: using the denoising engine to define the regularization of any inverse problem. The idea is to define an explicit image-adaptive regularization functional directly using a high-performance denoiser. Surprisingly, the resulting regularizer is guaranteed to be convex, and the overall objective functional is explicit, clear and well-defined. With complete flexibility to choose the iterative optimization procedure for minimizing this functional, RED can incorporate any image denoising algorithm as a regularizer, treats general inverse problems very effectively, and is guaranteed to converge to the globally optimal result. (A minimal illustrative sketch of the RED iteration follows this entry.)

    Biography: Peyman leads the Computational Imaging/Image Processing team in Google Research. Prior to this, he was a Professor of Electrical Engineering at UC Santa Cruz from 1999 to 2014, where he is now visiting faculty. He was Associate Dean for Research at the School of Engineering from 2010 to 2012. From 2012 to 2014 he was on leave at Google X, where he helped develop the imaging pipeline for Google Glass. Peyman received his undergraduate education in electrical engineering and mathematics from the University of California, Berkeley, and the MS and PhD degrees in electrical engineering from the Massachusetts Institute of Technology. He holds 11 US patents, several of which are commercially licensed. He founded MotionDSP in 2005. He has been a keynote speaker at numerous technical conferences, including the Picture Coding Symposium (PCS), SIAM Imaging Sciences, SPIE, and the International Conference on Multimedia and Expo (ICME). Along with his students, he has won several best paper awards from the IEEE Signal Processing Society. He is a Fellow of the IEEE "for contributions to inverse problems and super-resolution in imaging."
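    As a companion to the abstract above, here is a minimal sketch of a RED-style gradient iteration for the objective 0.5*||Hx - y||^2 + lam * 0.5 * x^T (x - f(x)), whose regularizer gradient reduces to x - f(x) under RED's assumptions (local homogeneity and a symmetric denoiser Jacobian). The Gaussian-blur "denoiser", step size, and operator names are illustrative stand-ins, not the speaker's implementation.

    ```python
    # Hedged sketch of the RED (Regularization by Denoising) gradient iteration.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def denoiser(x):
        # Stand-in denoising engine; RED allows plugging in any high-performance denoiser.
        return gaussian_filter(x, sigma=1.0)

    def red_gradient_descent(y, forward_op, adjoint_op, lam=0.1, mu=0.1, n_iter=200):
        """Minimize 0.5*||H x - y||^2 + lam * 0.5 * x^T (x - f(x)) by gradient descent."""
        x = adjoint_op(y)  # simple initialization
        for _ in range(n_iter):
            data_grad = adjoint_op(forward_op(x) - y)   # H^T (H x - y)
            reg_grad = x - denoiser(x)                  # RED regularizer gradient
            x = x - mu * (data_grad + lam * reg_grad)
        return x

    # Toy usage: denoising (H = identity) of a noisy synthetic image.
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    restored = red_gradient_descent(noisy, lambda x: x, lambda x: x)
    ```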

  • Biomedical Signal and Image Analysis Workshop

    ENG 102, George Vari Engineering and Computing Centre, 245 Church Street, Toronto

    Wednesday, May 24, 2017 at 9:15 a.m. The IEEE Signal Processing Chapter, Toronto Section, the IEEE Engineering in Medicine and Biology Society, Toronto Chapter, and the Signal Analysis Research (SAR) Lab, Ryerson University, will be presenting a series of sessions, "Biomedical Signal and Image Analysis Workshop".

    Day & Time: Wednesday, May 24, 2017; Morning Session: 9:15 a.m. – 12:30 p.m.; Afternoon Session: 1:15 p.m. – 4:30 p.m.

    Speakers:
    Dr. Rangaraj M. Rangayyan, ranga@ucalgary.ca, Department of Electrical & Computer Engineering, University of Calgary, AB, Canada
    Dr. Sridhar Krishnan, krishnan@ryerson.ca, Department of Electrical & Computer Engineering, Ryerson University, ON, Canada
    Dr. April Khademi, akhademi@ryerson.ca, Department of Electrical & Computer Engineering, Ryerson University, ON, Canada
    Dr. Karthi Umapathy, karthi@ee.ryerson.ca, Department of Electrical & Computer Engineering, Ryerson University, ON, Canada
    Dr. Naimul Khan, n77khan@ryerson.ca, Department of Electrical & Computer Engineering, Ryerson University, ON, Canada
    Dr. Teodiano Bastos, teodiano@gmail.com, Departamento de Engenharia Elétrica, Universidade Federal do Espírito Santo, Vitoria, Brasil

    Location: ENG 102, George Vari Engineering and Computing Centre, 245 Church Street, Toronto, Ontario M5B 2K3, Ryerson University, https://goo.gl/maps/2qLpvJKgkYw
    Contact: Mehrnaz Shokrollahi, Yashodhan Athavale
    Organizers: Signal Analysis Research (SAR) Lab, Ryerson University; IEEE Signal Processing Chapter, Toronto Section; IEEE Engineering in Medicine and Biology Society, Toronto Chapter

    Morning Session:
    9:15 a.m. Welcome remarks
    9:30 a.m. Talk M1: Color Image Processing with Biomedical Applications – Dr. Raj Rangayyan, U of Calgary
    10:45 a.m. – 11:00 a.m. Break
    11:00 a.m. Talk M2: Medical Image Analysis Techniques for Radiology and Pathology Images – Dr. April Khademi, Ryerson Univ.
    11:45 a.m. Talk M3: Biomedical Signal Processing for Cardiac Arrhythmias – Dr. Karthi Umapathy, Ryerson Univ.

    Afternoon Session:
    1:15 p.m. Talk A1: Wearables, IoT and Analytics for Connected Healthcare – Dr. Sri Krishnan, Ryerson Univ.
    2:00 p.m. Talk A2: Assistive Technologies and BCI for Rehab Applications – Dr. Teodiano Bastos, UFES, Brazil
    2:45 p.m. – 3:00 p.m. Break
    3:00 p.m. Talk A3: Interactive Machine Learning for Biomedical Signal and Image Analysis – Dr. Naimul Khan, Ryerson Univ.
    3:45 p.m. – 4:30 p.m. Open think-tank discussions on challenges and opportunities facing this field in the era of big data, AI, and translational research – moderated by S. Krishnan

    Biographies:

    Rangaraj M. Rangayyan is a Professor Emeritus in the Department of Electrical and Computer Engineering (ECE) at the University of Calgary. Dr. Rangayyan received his Ph.D. in Electrical Engineering from the Indian Institute of Science in 1980. He has over 35 years of experience as a professor at the University of Calgary and the University of Manitoba. His research interests include digital signal and image processing, biomedical signal and image analysis, and computer-aided diagnosis. Dr. Rangayyan is the author of two well-cited textbooks, "Biomedical Signal Analysis" (IEEE/Wiley, 2002, 2015) and "Biomedical Image Analysis" (CRC, 2005). He has published over 430 papers in journals and conferences and coauthored several books. He has supervised and co-supervised 17 Doctoral theses, 27 Master's theses, and more than 50 researchers at various levels. He has been recognized with the 2013 IEEE Canada Outstanding Engineer Medal and the IEEE Third Millennium Medal (2000), and has been elected Fellow, IEEE (2001); Fellow, Engineering Institute of Canada (2002); Fellow, American Institute for Medical and Biological Engineering (2003); Fellow, SPIE (2003); Fellow, Society for Imaging Informatics in Medicine (2007); Fellow, Canadian Medical and Biological Engineering Society (2007); Fellow, Canadian Academy of Engineering (2009); and Fellow, Royal Society of Canada. He has lectured in more than 20 countries and has held visiting professorships at more than 15 universities worldwide. He has been invited as a Distinguished Lecturer by the IEEE EMBS in Toronto and as an invited lecturer at the IEEE International Summer School in France.

    Sridhar (Sri) Krishnan is a Professor in the Department of Electrical and Computer Engineering (ECE) and the Associate Dean of Research, Development and External Partnerships for the Faculty of Engineering and Architectural Science (FEAS) at Ryerson University. He is also a Canada Research Chair in Biomedical Signal Analysis. Dr. Krishnan received his Ph.D. in ECE from the University of Calgary in 1999. Dr. Krishnan's research interests include adaptive signal representations and analysis and their applications in biomedicine, multimedia (audio), and biometrics. He has published over 280 papers in refereed journals and conferences, filed 8 invention disclosures, and has been granted one US patent. He has received over 20 awards and certificates of appreciation for his contributions in research and innovation. Dr. Krishnan has been invited to present at more than 30 international conferences and workshops. He has supervised and trained 10 postdoctoral fellows, 9 Doctoral theses, 29 Master's theses, 9 Master's projects, 39 Research Assistants (RAs), and 17 Visiting RAs. Dr. Krishnan is a Fellow of the Canadian Academy of Engineering. He is also the Co-Director of the Institute for Biomedical Engineering, Science and Technology (iBEST) and an Affiliate Scientist at the Keenan Research Centre at St. Michael's Hospital, Toronto.

    Karthi Umapathy is an Associate Professor in the Department of Electrical and Computer Engineering (ECE) at Ryerson University. Dr. Umapathy received his Ph.D. in ECE from the University of Western Ontario in 2006. During his graduate studies he held the prestigious NSERC CGS and PGS awards. He was an inaugural Ryerson postdoctoral fellow and was also the recipient of the Heart & Stroke Richard Lewar Centre of Excellence research fellowship award. Dr. Umapathy's research interests include biomedical signal and image analysis, time-frequency analysis, digital signal processing, cardiac electrophysiology, and magnetic resonance imaging. One of his recent projects involves studying the electrical activity on the surface of the human heart during ventricular fibrillation to reduce sudden cardiac death in North America. Dr. Umapathy brings with him a vast knowledge of Magnetic Resonance Imaging (MRI) from his work at Philips Medical Systems India. As the Area Manager and Country Specialist for Philips, he led many successful MRI projects in India and Japan.

    April Khademi recently joined Ryerson University as an Assistant Professor in the Department of Electrical and Computer Engineering (ECE). Dr. Khademi received her Ph.D. in Biomedical Engineering from the University of Toronto. Dr. Khademi's research interests include medical image analysis techniques for radiology and pathology images, generalized grayscale and colour image processing methodologies, biomedical signal processing, machine learning, personalized medicine, computer-aided diagnosis, Big Data analytics, Magnetic Resonance Imaging, and digital pathology. Dr. Khademi was previously an Assistant Professor in Biomedical Engineering at the University of Guelph and the Senior Scientist and Innovation Specialist at PathCore Inc. She also brings industry and healthcare experience from her work at GE Healthcare, the Toronto Rehabilitation Institute, and Sunnybrook Health Sciences Centre. Dr. Khademi is the recipient of more than 10 awards, including the Governor General's Gold Medal for her Master's thesis and the prestigious NSERC CGS-D3. She has over 40 publications and has been invited to speak at more than 25 conferences, seminars and workshops.

    Naimul Khan recently joined Ryerson University as an Assistant Professor in the Department of Electrical and Computer Engineering (ECE). Dr. Khan received his Ph.D. in ECE from Ryerson University in 2014. Dr. Khan's research interests include designing interactive methods for visual computing that can bridge the gap between end-users and systems. He has contributed to the fields of machine learning, computer vision, and medical imaging. Dr. Khan was previously a research engineer at Sunnybrook Research Institute and an R&D Manager at AWE Company Ltd. At AWE, he led the Fort York Time Tablet project in partnership with the City of Toronto to create an augmented reality exhibit of the history of the Fort; the project garnered significant media and public attention. Dr. Khan is the recipient of several awards, including the OCE TalentEdge Postdoctoral Fellowship, the Ontario Graduate Scholarship, and the Queen Elizabeth II Graduate Scholarship in Science & Technology.

    Teodiano Bastos is a Full Professor in the Department of Electrical Engineering at the Universidade Federal do Espírito Santo and a Level 1 Researcher at CNPq. Dr. Bastos received his Ph.D. in Electrical and Electronic Engineering from the Universidad Complutense de Madrid, Spain, in 1994. Dr. Bastos' research interests are in electronic measurement and control systems, including sensors, control, mobile robots, industrial robotics, rehabilitation robotics, assistive technology, and biological signal processing. Dr. Bastos has over 500 publications in journals, conferences, and books.

  • Iris Matching and De-Duplication of Voter Registration Lists

    Room BA-4287, Bahen Centre for Information Technology, University of Toronto, M5S 2E4

    Friday, April 13, 2018 at 10:00 a.m. Schubmehl-Prein Professor Kevin W. Bowyer will be presenting an IEEE Signal Processing Society Distinguished Lecture, "Iris Matching and De-Duplication of Voter Registration Lists".

    Day & Time: Friday, April 13, 2018, 10:00 a.m. – 11:00 a.m.
    Speaker: Schubmehl-Prein Professor Kevin W. Bowyer, Department of Computer Science & Engineering, University of Notre Dame, IN, US
    Location: Room BA-4287, University of Toronto, http://map.utoronto.ca/building/080
    Contact: Mehrnaz Shokrollahi, Yashodhan Athavale
    Organizer: IEEE Signal Processing Chapter, Toronto Section

    Abstract: Fingerprint, face and iris are widely used as biometrics to verify a person's identity. One important application of biometrics is to ensure that each person is enrolled only once on a list of eligible voters. Keeping someone from voting multiple times under different identities is referred to as "de-duplicating" the voting register. This talk will present results of a de-duplication trial performed for the country of Somaliland. The talk will cover how iris recognition works, what level of matching accuracy can be expected, what the matching accuracy suggests in terms of the expected number of false matches and false non-matches, and some "special case" example images. (You should not need any prior experience with iris recognition to understand this talk.) A minimal illustrative sketch of iris-code matching and the false-match arithmetic follows this entry.

    Biography: Kevin Bowyer is the Schubmehl-Prein Family Professor of Computer Science and Engineering at the University of Notre Dame and also serves as Director of International Summer Engineering Programs. Professor Bowyer's research interests range broadly over computer vision and pattern recognition, including biometrics and data mining. Professor Bowyer received a 2014 Technical Achievement Award from the IEEE Computer Society, with the citation "For pioneering contributions to the science and engineering of biometrics". Professor Bowyer is a Fellow of the IEEE, "for contributions to algorithms for recognizing objects in images"; a Fellow of the IAPR, "for contributions to computer vision, pattern recognition and biometrics"; and a Golden Core Member of the IEEE Computer Society. Professor Bowyer is serving as General Chair of the 2019 IEEE Winter Conference on Applications of Computer Vision; has served as Editor-in-Chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence and as Editor-in-Chief of the IEEE Biometrics Compendium; and is currently serving on the editorial board of IEEE Access. Professor Bowyer's most recent book is the Handbook of Iris Recognition, edited with Dr. Mark Burge.
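    As a companion to the abstract above, here is a minimal sketch of the two quantities it refers to: the masked fractional Hamming distance commonly used to compare binary iris codes, and the expected number of false matches when de-duplicating N enrollees by comparing every pair. The 2048-bit code size and the example false match rate are illustrative assumptions, not figures from the Somaliland trial.

    ```python
    # Hedged sketch: iris-code comparison and false-match arithmetic for de-duplication.
    import numpy as np

    def fractional_hamming_distance(code_a, code_b, mask_a, mask_b):
        """Fraction of disagreeing bits over the jointly valid (unmasked) bits."""
        valid = mask_a & mask_b
        n_valid = valid.sum()
        if n_valid == 0:
            return 1.0  # nothing to compare
        return np.count_nonzero((code_a ^ code_b) & valid) / n_valid

    def expected_false_matches(n_enrollees, false_match_rate):
        """De-duplication compares every pair once: N*(N-1)/2 cross comparisons."""
        n_pairs = n_enrollees * (n_enrollees - 1) // 2
        return n_pairs * false_match_rate

    # Toy usage with random 2048-bit codes and an assumed per-comparison FMR.
    rng = np.random.default_rng(1)
    a = rng.integers(0, 2, 2048, dtype=np.uint8)
    b = rng.integers(0, 2, 2048, dtype=np.uint8)
    m = np.ones(2048, dtype=np.uint8)
    print(fractional_hamming_distance(a, b, m, m))   # ~0.5 for unrelated irises
    print(expected_false_matches(1_000_000, 1e-11))  # expected false matches at that FMR
    ```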

  • Improving Speech Understanding in the Real-World for Hearing Devices: Solutions, Challenges and Opportunities

    Room BA 1230, University of Toronto

    Thursday, April 18, 2019 at 4:00 p.m. Dr. Tao Zhang, Director of the Signal Processing Research Department, will be presenting "Improving Speech Understanding in the Real-World for Hearing Devices: Solutions, Challenges and Opportunities".

    Day & Time: Thursday, April 18, 2019, 4:00 p.m. – 5:00 p.m.
    Speaker: Dr. Tao Zhang, Director of the Signal Processing Research Department, Starkey Hearing Technologies
    Organizers: IEEE Signal Processing Chapter, Toronto Section
    Location: Room BA 1230, University of Toronto
    Contact: Mehrnaz Shokrollahi, Yashodhan Athavale, Michael Zara

    Abstract: The cocktail party problem has remained one of the most challenging problems for hearing aids even after decades of extensive research. In this talk, we will review our research on cutting-edge single-microphone speech enhancement, with emphasis on deep learning-based approaches (a minimal sketch of the basic mask-based pipeline follows this entry). We will introduce and discuss our research on multi-microphone speech enhancement, with an emphasis on robust and real-time algorithms. We will present our latest research on multimodal speech enhancement, which considers brain signals (i.e., EEG) and microphone signals in a single joint-optimization framework. Finally, we will discuss the challenges and opportunities in deploying these algorithms in practice. We will present our perspectives on future research directions, especially in the areas of individualization and customization using big data and machine learning.

    Biography: Tao Zhang received his B.S. degree in physics from Nanjing University, Nanjing, China, in 1986, his M.S. degree in electrical engineering from Peking University, Beijing, China, in 1989, and his Ph.D. degree in speech and hearing science from the Ohio State University, Columbus, OH, USA, in 1995. He joined the Advanced Research Department at Starkey Laboratories, Inc. as a Senior Research Scientist in 2001, managed the DSP department from 2004 to 2008 and the Signal Processing Research Department from 2008 to 2014. Since 2014, he has been Director of the Signal Processing Research Department at Starkey Hearing Technologies, a global leader in providing innovative hearing technologies. He has received many prestigious awards at Starkey, including the Inventor of the Year Award, the Mount Rainier Best Research Team Award, the Most Valuable Idea Award, the Outstanding Technical Leadership Award and the Engineering Service Award. He is a senior member of the IEEE, the Signal Processing Society and the Engineering in Medicine and Biology Society. He serves on the IEEE AASP Technical Committee, the industrial relationship committee and the IEEE ComSoc North America Region Board. He is an IEEE SPS Distinguished Industry Speaker, the IEEE SPS Industry Convoy for the United States (Regions 1–6) and the Chair of the IEEE Twin Cities Signal Processing and Communication Chapter. His current research interests include audio, acoustic and speech signal processing and machine learning; multimodal signal processing and machine learning for hearing enhancement and health and wellness monitoring; psychoacoustics, room and ear canal acoustics; ultra-low-power real-time embedded system design; and device-phone-cloud ecosystem design. He has authored and coauthored 120+ presentations and publications, received 20+ approved patents and has an additional 30+ patents pending.
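    As a companion to the abstract above, here is a minimal sketch of the generic single-microphone, mask-based enhancement pipeline: STFT, time-frequency gain, inverse STFT. The Wiener-style gain rule below stands in for the deep network discussed in the talk; the frame length and noise-estimation heuristic are illustrative assumptions, not Starkey's algorithms.

    ```python
    # Hedged sketch of mask-based single-channel speech enhancement.
    import numpy as np
    from scipy.signal import stft, istft

    def enhance(noisy, fs=16000, nperseg=512):
        f, t, spec = stft(noisy, fs=fs, nperseg=nperseg)
        power = np.abs(spec) ** 2
        # Crude noise PSD estimate: average of the lowest-energy 10% of frames per bin.
        n_low = max(1, power.shape[1] // 10)
        noise_psd = np.sort(power, axis=1)[:, :n_low].mean(axis=1, keepdims=True)
        # Wiener-like gain; a trained DNN would replace this rule with a learned mask.
        gain = np.clip(1.0 - noise_psd / np.maximum(power, 1e-12), 0.1, 1.0)
        _, enhanced = istft(gain * spec, fs=fs, nperseg=nperseg)
        return enhanced

    # Toy usage: a sinusoid in white noise, sampled at 16 kHz for one second.
    fs = 16000
    tvec = np.arange(fs) / fs
    noisy = np.sin(2 * np.pi * 440 * tvec) + 0.3 * np.random.default_rng(0).standard_normal(fs)
    clean_est = enhance(noisy, fs)
    ```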

  • MIMO Signalling: Knowing the Classics Can Make a Difference

    Room BA-2135, University of Toronto

    Thursday, June 6, 2019 at 10:00 a.m. Prof. Wing-Kin (Ken) Ma, Chinese University of Hong Kong, will be presenting an IEEE Signal Processing Society Distinguished Lecture, "MIMO Signalling: Knowing the Classics Can Make a Difference".

    Day & Time: Thursday, June 6, 2019, 10:00 a.m. – 11:00 a.m.
    Speaker: Prof. Wing-Kin (Ken) Ma, Chinese University of Hong Kong
    Organizers: IEEE Signal Processing Chapter, Toronto Section; IEEE Communications Chapter, Toronto Section
    Location: Room BA-2135, University of Toronto, http://map.utoronto.ca/building/080
    Contact: Mehrnaz Shokrollahi, Yashodhan Athavale, Michael Zara

    Abstract: In this talk the speaker will share two stories of how his research benefited from learning the basics. The first story concerns physical-layer multicasting, a topic that has been dominated by beamforming and optimization techniques. We will see how the classical concept of using channel coding to fight fast-fading effects gives a spark to rethink multicasting, and how that leads to a stochastic beamforming approach that goes beyond what beamforming alone achieves. The second story considers one-bit massive MIMO precoding, an emerging and challenging topic. Current research on this topic mostly focuses on optimization, often in a sophisticated, if not complicated, manner. We will see how the traditional idea of Sigma-Delta modulation for DAC of temporal signals can be transferred to the spatial case, leading to one-bit massive MIMO precoding solutions that are simple and keep quantization error well under control. (A minimal illustrative sketch of spatial Sigma-Delta quantization follows this entry.)

    Biography: Wing-Kin (Ken) Ma is a Professor with the Department of Electronic Engineering, The Chinese University of Hong Kong. His research interests lie in signal processing, optimization and communications. His most recent research focuses on two distinct topics, namely, structured matrix factorization for data science and remote sensing, and MIMO transceiver design and optimization. Dr. Ma is active in the Signal Processing Society. He has served as an editor for several journals, e.g., Senior Area Editor of the IEEE Transactions on Signal Processing and Lead Guest Editor of a special issue of the IEEE Signal Processing Magazine, to name a few. He is currently a member of the Signal Processing for Communications and Networking (SPCOM) Technical Committee. He received the 2013–2014 Research Excellence Award from CUHK, the 2015 IEEE Signal Processing Magazine Best Paper Award, the 2016 IEEE Signal Processing Letters Best Paper Award, and the 2018 IEEE Signal Processing Society Best Paper Award. He is an IEEE Fellow and is currently an IEEE SPS Distinguished Lecturer.
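    As a companion to the abstract above, here is a minimal sketch of the classical idea the second story builds on: first-order Sigma-Delta modulation, here run across the antenna (spatial) index to produce one-bit outputs while feeding the quantization error forward to the next antenna. The scaling and complex sign convention are illustrative assumptions, not the speaker's precoding design.

    ```python
    # Hedged sketch of first-order spatial Sigma-Delta one-bit quantization.
    import numpy as np

    def spatial_sigma_delta_one_bit(x):
        """First-order Sigma-Delta quantizer applied over the antenna index.

        x : complex precoded vector (one entry per antenna), assumed small in magnitude
            so that the shaped error stays bounded.
        Returns a vector with entries in {+-1 +- 1j}/sqrt(2).
        """
        q = np.zeros_like(x)
        err = 0.0 + 0.0j
        for n in range(len(x)):
            u = x[n] + err                                      # feed back previous error
            q[n] = (np.sign(u.real) + 1j * np.sign(u.imag)) / np.sqrt(2)
            err = u - q[n]                                      # error shaped onto the next antenna
        return q

    # Toy usage: quantize a precoded vector for a 64-antenna array.
    rng = np.random.default_rng(0)
    x = 0.3 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
    x_q = spatial_sigma_delta_one_bit(x)
    ```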

  • GPT-3 for Vision

    On Wednesday, October 7, 2020 at 2:00 p.m., Dr. Ehsan Kamalinejad will present "GPT-3 for Vision".

    Day & Time: Wednesday, October 7, 2020, 2:00 p.m. – 3:00 p.m.
    Speaker: Ehsan Kamalinejad, PhD; Co-Founder & CTO at Visual One; Associate Professor at Cal State East Bay University; former Senior Machine Learning Scientist at Apple; San Francisco, USA
    Organizer: IEEE Toronto Signal Processing Chapter
    Location: Virtual (Google Meet)
    Contact: Mehrnaz Shokrollahi

    Abstract: Deep learning in computer vision (CV) has proved to be very effective in solving many real-world problems. However, while the raw volume of research on standard CV problems (such as ImageNet object classification/detection) has exploded, measurable progress in these fields has slowed down. Additionally, there are many real-world problems in vision that are simply not compatible with current approaches. This demands a new wave of problem statements in CV (and a new set of benchmarks). This talk focuses on one important set of such problem statements. We propose that many real-world problems in vision are "event recognition" problems. We introduce a concrete definition for the event recognition problem. We will see that this definition of event detection precludes large sample sets; hence, these events need to be recognized from very few samples. We start by reviewing the current literature and propose some promising directions for approaching this problem. At the end we show some demos from our recent effort on wrestling with this very challenging problem. Our solution can be best described as the "vision counterpart of the GPT-3 few-shot learner". (A minimal illustrative sketch of few-shot recognition follows this entry.)

    Register: Please check back soon for the registration link.

    Biography: Ehsan Kamalinejad (EK) is a senior machine learning engineer. He is currently working on Visual One, a Y Combinator-backed startup he co-founded. Before that he worked for several years at Apple and Amazon as a staff machine learning engineer. Ehsan holds a faculty position as an associate professor at Cal State East Bay University. He received his PhD from the University of Toronto. He has more than 7 years of experience delivering machine learning products in computer vision and natural language processing. His current project, Visual One, is about bringing next-level intelligence to surveillance cameras.
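    As a companion to the abstract above, here is a minimal sketch of a generic few-shot recognizer: class prototypes averaged from a handful of support embeddings, and nearest-prototype classification of a query. The random "embeddings" stand in for features from a pretrained vision backbone; this is a textbook-style illustration, not the Visual One system.

    ```python
    # Hedged sketch of prototype-based few-shot classification.
    import numpy as np

    def build_prototypes(support_embeddings, support_labels):
        """Average the support embeddings of each class into a single prototype."""
        classes = np.unique(support_labels)
        return classes, np.stack([support_embeddings[support_labels == c].mean(axis=0)
                                  for c in classes])

    def classify(query_embedding, classes, prototypes):
        """Nearest-prototype decision in embedding space (Euclidean distance)."""
        dists = np.linalg.norm(prototypes - query_embedding, axis=1)
        return classes[np.argmin(dists)]

    # Toy usage: two event classes, five support "shots" each, 128-d embeddings.
    rng = np.random.default_rng(0)
    emb = np.concatenate([rng.normal(0, 1, (5, 128)), rng.normal(3, 1, (5, 128))])
    labels = np.array([0] * 5 + [1] * 5)
    classes, protos = build_prototypes(emb, labels)
    print(classify(rng.normal(3, 1, 128), classes, protos))  # expected: 1
    ```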

  • Machine Learning and Digital Signal Processing Applications in Online Video Platforms

    On Friday, November 20, 2020 at 2:30 p.m., Mehrdad Fatourechi will present "Machine Learning and Digital Signal Processing Applications in Online Video Platforms".

    Day & Time: Friday, November 20, 2020, 2:30 p.m. – 4:00 p.m.
    Speaker: Mehrdad Fatourechi, PhD
    Organizer: IEEE Signal Processing Chapter, Toronto Section
    Location: This event will be hosted on Google Meet; Meeting ID: meet.google.com/yej-opbp-uxo; Phone: (US) +1 617-675-4444, PIN: 974 200 026 6220#
    Contact: Mehrnaz Shokrollahi

    Abstract: In the past 15 years, we have seen exponential growth in online video platforms such as YouTube, Instagram, Netflix, and TikTok, amongst others. In this talk, we will look at some of the challenges these platforms have been facing and how machine learning and digital signal processing are playing important roles in addressing these challenges. We will focus on discussing three areas: (1) content discovery and SEO optimization; (2) establishing trust and safety; and (3) protecting the rights of content owners. We will also discuss some of the areas that are currently open for future research.

    Register: Registration is not required.

    Biography: Mehrdad is the VP of Engineering of BroadbandTV, a media-tech company that is advancing the world through the creation, distribution, management, and monetization of content. Mehrdad is currently responsible for managing the research and development (R&D) and IT departments. When he joined BBTV in March 2010, he was initially responsible for managing the research team; his role later expanded to lead the entire engineering department. Under his leadership, BBTV's tech team has become one of the leading and most innovative teams in the digital video space, building several internal and external products (including VISO Catalyst, VISO Collab, VISO Prism, VISO NOVI, and VISO Mine) as well as filing several patents. Mehrdad has an in-depth knowledge of digital signal processing, machine learning, and pattern recognition algorithms. He holds a PhD in Electrical Engineering from the University of British Columbia (UBC), where he was nominated for NSERC's Doctoral Prize Award. He is an author on more than 30 journal and conference papers with a focus on pattern recognition, machine learning and intelligent algorithms. He previously held positions in the tech/education industry, including roles as a research associate and sessional lecturer at UBC, as well as consulting with several companies (INETCO, BC Mining Research, and STC Enterprises). He was the co-chair of the IEEE Signal Processing Chapter in Vancouver for two years.

  • Digital Health – Role of Biomedical Signal Analysis

    Room: ENGLG24, Bldg: George Vari Engineering and Computing Centre, ENG, 245 Church St, Toronto, Ontario, Canada, M5B 1Z4

    This talk will focus on the role of digital technology in providing a more patient-centric and proactive healthcare system. Following a motivational introduction to wearables and their role in providing a connected digital healthcare system, specific requirements for signal analysis and machine learning will be discussed. Case-study examples from innovation projects in baby heart rate monitoring, continuous vital signs analysis and mental health applications will be presented as translational aspects of the research and development done at the Signal Analysis Research Lab at Toronto Metropolitan University. (A minimal illustrative sketch of one such wearable-signal task follows this entry.)

    Speaker(s): Dr. Sri Krishnan
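    As a companion to the talk description above, here is a minimal sketch of one wearable-signal task it mentions, estimating heart rate from a photoplethysmogram: band-pass filter the signal, detect pulse peaks, and convert the mean inter-beat interval to beats per minute. The filter settings and the synthetic test signal are illustrative assumptions, not the SAR Lab's pipeline.

    ```python
    # Hedged sketch of heart-rate estimation from a PPG signal.
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def heart_rate_bpm(ppg, fs):
        # Band-pass around typical pulse frequencies (0.5-4 Hz, i.e. 30-240 bpm).
        b, a = butter(2, [0.5, 4.0], btype="band", fs=fs)
        filtered = filtfilt(b, a, ppg)
        peaks, _ = find_peaks(filtered, distance=int(0.3 * fs))  # >= 0.3 s between beats
        intervals = np.diff(peaks) / fs                          # inter-beat intervals in seconds
        return 60.0 / intervals.mean()

    # Toy usage: synthetic 75-bpm pulse (1.25 Hz) with noise, sampled at 100 Hz for 30 s.
    fs = 100
    t = np.arange(0, 30, 1 / fs)
    ppg = np.sin(2 * np.pi * 1.25 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
    print(round(heart_rate_bpm(ppg, fs)))  # ~75
    ```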

  • Advances in Neuroscience at UFES/Brazil

    Room: 105, Bldg: Eric Palin Hall (EPH), 87 Gerrard St E, Toronto, Ontario, Canada, M5B 2M2

    This seminar will cover topics including:
    - Devices for Blind People, Amputees, People with Severe Disability
    - Control of Appliances Through sEMG and EOG, Rehabilitation Through Serious Games
    - Use of Internet of Things (IoT) for Human Activity Recognition (HAR) Based on Convolutional Neural Network (CNN)
    - Robots for Interaction with Children with ASD and Down Syndrome
    - Respiratory Rate Estimation Through Deep Learning Applied to Photoplethysmogram
    - COVID Detection Through Recurrent Neural Networks (RNN) and Deep Learning (DL)
    - Several Applications with Brain-Computer Interfaces (BCIs) Based on Electroencephalography (EEG)

    Speaker(s): Dr. Teodiano Freire Bastos-Filho