Chinagraph 2018特邀报告（Keynote Speech）
报告摘要：Current interaction with machine learning systems is mostly batch-based: a system takes a set of annotated data as input and returns a trained model. This workflow is problematic because the annotation process is labor-intensive and the training process works as a black box, making it difficult to understand and control. We are developing techniques to address these problems by introducing rich visual interaction techniques that help users understand and control machine learning systems. In this talk, I will present some of the results from our group in computer graphics.
个人简介：Takeo Igarashi is a professor in the Department of Creative Informatics at the University of Tokyo. He received his PhD from the Department of Information Engineering, the University of Tokyo, in 2000. His research interests are user interfaces and interactive computer graphics. He is known as the inventor of the sketch-based modeling system Teddy, and received the Significant New Researcher Award at SIGGRAPH 2006. He currently directs the JST CREST “HCI for Machine Learning” project.
个人简介：Yuanqing Li is Dean of the School of Automation Science and Engineering at South China University of Technology, Director of the university's Center for Brain-Computer Interfaces and Brain Information Processing, professor, doctoral supervisor, IEEE Fellow, recipient of the National Science Fund for Distinguished Young Scholars, Changjiang Scholar Distinguished Professor of the Ministry of Education, national-level candidate of the National Talents Project, and a member of Guangdong Province's program for cultivating one hundred outstanding Nanyue talents. His honors include one Second Prize of the State Natural Science Award, one First Prize of the Ministry of Education Natural Science Award, and two First Prizes of the Guangdong Provincial Natural Science Award. He serves as an associate editor of four SCI-indexed international journals, including IEEE Trans. on Fuzzy Systems and IEEE Trans. on Human-Machine Systems. He received his bachelor's degree from the Department of Mathematics, Wuhan University, in 1988, his master's degree from the Department of Mathematics, South China Normal University, in 1994, and his PhD from the Department of Automatic Control Engineering, South China University of Technology, in 1997. Since 2000, his research has focused on independent component analysis and blind source separation, sparse coding, semi-supervised machine learning, EEG and fMRI signal analysis, brain-computer interfaces, and audiovisual integration in the brain. To date, he has published more than 100 papers, including over 40 long papers in IEEE Trans. on Information Theory, IEEE Trans. on Signal Processing, IEEE Trans. on Biomedical Engineering, IEEE Signal Processing Magazine, and Proceedings of the IEEE, and more than 30 papers in renowned international journals and leading conferences such as Cerebral Cortex, Human Brain Mapping, NeuroImage, Scientific Reports, Machine Learning, Pattern Recognition, Neural Computation, Journal of Neural Engineering, and NIPS. He holds more than 30 granted patents. Projects he has led include a National Key R&D Program project, a key project of the National Natural Science Foundation of China, a Ministry of Science and Technology 863 project, and a Guangdong Natural Science Foundation team project. He serves as an executive council member of the Chinese Association of Automation, vice chair of the Hybrid Intelligence Committee of the Chinese Association for Artificial Intelligence, and vice chair of the Medical Neural Engineering Committee of the Chinese Society of Biomedical Engineering.
报告摘要：Over the last few decades, successes in the field of visual computing have revolutionized our digital visual experience --- from special effects in Hollywood movies and face recognition on smartphones to the stunning promise offered by VR/AR goggles. Yet in this grand picture, one piece remains missing --- the sound. Our real world has never been silent: not only is it colorful to our eyes, its sound is also rich and vivid to our ears. In current paradigms, visual computing is often performed in isolation from its audio counterpart. In this talk, I will propose audiovisual computing, a research area that renders, analyzes, and processes audiovisual information. I will first introduce our recent work in this area on physics-based audiovisual models built from first principles, and then illustrate audiovisual processing using our work on 360 videos. In the second part of the talk, I will discuss the implications of these models for improving the physical world --- namely, how to harness computational audiovisual models to enable tangible forms and objects that offer unprecedented new functionality.
个人简介：Changxi Zheng is currently an Associate Professor in the Computer Science Department at Columbia University. He received his Ph.D. from Cornell University with the Best Dissertation Award and his B.S. from Shanghai Jiaotong University. He has served as an associate editor of ACM Transactions on Graphics, was conference chair of SCA 2017, has won an NSF CAREER Award, and was named one of Forbes’ “30 under 30” in science and healthcare in 2013. He received Best Paper Awards at SCA 2016 and UIST 2017, among others. Changxi's current research lies on the boundary between computer science, applied physics, and mathematics. He is particularly interested in developing simulation methods for complex physical systems, computational models for optical and acoustic sensing, and the computational design of structures and materials. Technologies developed by his group have been adopted by several companies, such as Adobe, Weta Digital, and SoftBank, and his work has been profiled in major media outlets, including NPR, BBC, Forbes, WIRED, and Popular Science.
报告摘要：The importance of simulation, in both robotics and, more recently, autonomous driving, is increasingly recognized. In this talk, I will present the fairly extensive line of simulation research at Baidu's Robotics and Autonomous Driving Lab (RAL), from low-level sensor simulation, such as LiDAR, to high-level behavior simulation, such as drivers and pedestrians. These simulation tools are designed either to produce an abundant amount of annotated data for training deep neural networks, or to directly provide an end-to-end environment for testing all aspects of the movement capabilities of robots and autonomous vehicles.
个人简介：Ruigang Yang is currently Chief Scientist for 3D Vision at Baidu Research, where he leads the Robotics and Autonomous Driving Lab (RAL). Before joining Baidu, he was a full professor of Computer Science at the University of Kentucky. He obtained his PhD degree from the University of North Carolina at Chapel Hill and his MS degree from Columbia University. His research interests span computer graphics and computer vision, in particular 3D reconstruction and 3D data analysis. He has published over 100 papers, which, according to Google Scholar, have received over 10,000 citations with an h-index of 48 (as of 2017). He has received a number of awards, including a US NSF CAREER Award in 2004 and a Dean's Research Award in 2013. He is currently an associate editor of IEEE TPAMI and a senior member of the IEEE.
报告摘要：With the advent of color video cameras, RGB multispectral video spanning the visible electromagnetic spectrum (400 nm - 700 nm) is now commonplace. Near-, mid-, and far-infrared (IR) hyperspectral imaging, which captures a portion of the spectrum invisible to humans (700 nm - 1 mm), has additionally proven indispensable in many scenarios, such as night vision, video surveillance, and terrain sensing via satellites. Both multispectral and hyperspectral forms of imagery are now seeing use for navigation and control in automobiles, transport vehicles, and even drones. Such spectral imagery enables the simultaneous prediction of the visible and invisible geometry and material properties of complex scenes. This talk will dwell on the successes and current challenges of state-of-the-art machine learning algorithms for spectral denoising, semantic segmentation, data fusion, and super-resolution.
个人简介：Chandrajit Bajaj is the director of the Center for Computational Visualization in the Institute for Computational Engineering and Sciences (ICES) and a Professor of Computer Science at the University of Texas at Austin. Bajaj holds the Computational Applied Mathematics Chair in Visualization. He is also an affiliate faculty member in Mathematics, Computational Neuroscience, and Electrical Engineering. He is currently on the editorial boards of the International Journal of Computational Geometry and Applications and ACM Computing Surveys, and is a past editorial board member of the SIAM Journal on Imaging Sciences. He was awarded a Distinguished Alumnus Award from the Indian Institute of Technology, Delhi (IIT Delhi). He is also a Fellow of the American Association for the Advancement of Science (AAAS), a Fellow of the Association for Computing Machinery (ACM), a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and a Fellow of the Society for Industrial and Applied Mathematics (SIAM).