In his talk, "AI in 2023: Exciting Developments and Heightened Risks," Dr. Steve Kramer will give an introduction to AI for non-practitioners and then highlight current use cases in generative AI, computer vision, natural language processing, time series forecasting, anomaly detection, reinforcement learning, and recommender systems where AI-based systems already perform well or show significant promise to do so in the near future.
Some key examples include creative writing and video generation, drug discovery, robotics, language understanding, climate change mitigation, and supply chain optimization. A major focus will be on recent generative AI models such as Stable Diffusion and ChatGPT that have attracted broad attention worldwide.
Kramer will also discuss the significant risks of AI systems related to incorrectness, bias, fairness, privacy, fraud, cybersecurity, and misinformation/disinformation, as well as current efforts in algorithmic accountability and AI ethics to minimize negative impacts or harms.
There will be plenty of accessible content for those newer to AI, as well as pointers to technical details for those with strong AI backgrounds who want to learn about key recent research developments.
TICKET INFO
Admission is free.