
Austin Forum on Technology & Society presents AI in 2023: Exciting Developments and Heightened Risks

Photo courtesy of Austin Forum on Technology & Society

In the AI in 2023: Exciting Developments and Heightened Risks talk, Dr. Steve Kramer will give an introduction to AI for non-practitioners, then highlight current use cases in generative AI, computer vision, natural language processing, time series forecasting, anomaly detection, reinforcement learning, and recommender systems where AI-based systems already perform well or show significant promise of doing so in the near future.

Some key examples include creative writing and video generation, drug discovery, robotics, language understanding, climate change mitigation, and supply chain optimization. A major focus will be on recent generative AI models like Stable Diffusion and ChatGPT that have attracted broad attention across the world.

Kramer will also discuss the significant risks of AI systems related to incorrectness, bias, fairness, privacy, fraud, cybersecurity, and misinformation/disinformation, as well as current efforts in algorithmic accountability and AI ethics to minimize negative impacts and harms.

There will be plenty of accessible content for those newer to AI as well as pointers to technical details for those with strong AI backgrounds who want to learn about key recent research developments.

WHEN

April 4, 2023

WHERE

Austin Central Library, Austin Public Library
710 W Cesar Chavez St, Austin, TX 78701, USA
https://www.austinforum.org/april-4-2023.html

TICKET INFO

Admission is free.

All events are subject to change due to weather or other concerns. Please check with the venue or organization to ensure an event is taking place as scheduled.