The symposium will engage attendees through a variety of sessions designed to encourage both learning and dialogue:
Invited speakers: Distinguished scholars in human-AI collaboration will provide keynote talks, setting the context for exploring trust, transparency, and responsible design across application areas.
Oral presentations and poster sessions: Accepted research papers and posters will showcase new insights and innovations, focusing on enhancing communication, transparency, and ethical standards in human-AI interactions. Neuro-symbolic approaches and applications of LLMs, VR/AR, and NLP in collaborative settings are welcome.
Tutorials: Tutorials will offer hands-on instruction in key technologies that support human-AI collaboration, including:
Explainable AI Techniques: Sessions on using interpretability methods like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) for understanding AI outputs.
Human-Centred LLM and Neuro-Symbolic Design: Instruction on prompt engineering, symbolic rule integration, and ethical design strategies to align AI responses with user expectations.
Collaborative Machine Learning Platforms: Tutorials on platforms like TensorFlow Federated for privacy-preserving, decentralised collaborative learning.
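To give a flavour of the explainability material, the idea behind model-agnostic methods such as LIME and SHAP can be sketched in a few lines: perturb individual features and measure how the model's output changes. The sketch below is illustrative only (it is not the SHAP or LIME API; the real libraries fit local surrogate models or average over feature coalitions):

```python
# Minimal perturbation-based feature attribution, in the spirit of
# LIME/SHAP: replace each feature with a baseline value and measure
# how much the model's output changes. Illustrative only.

def attribute(model, x, baseline):
    """Return a per-feature attribution score for input x."""
    reference = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]          # knock out feature i
        scores.append(reference - model(perturbed))
    return scores

# Toy linear "model": output = 2*x0 + 0.5*x1 - x2
model = lambda v: 2 * v[0] + 0.5 * v[1] - 1 * v[2]
print(attribute(model, [1.0, 4.0, 2.0], [0.0, 0.0, 0.0]))
# For a linear model the scores recover each term: [2.0, 2.0, -2.0]
```

For a linear model this occlusion-style score recovers each term's contribution exactly; the tutorial covers how SHAP and LIME generalise the same intuition to non-linear models.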
Panel session: A focused discussion on the “Impact of diversity of human and computational intelligence on the design of hybrid intelligent systems” will allow panellists to address critical questions such as:
How can the diversity in human cognitive abilities and AI models be leveraged to enhance the performance and adaptability of hybrid intelligent systems?
What are the challenges and best practices in integrating diverse human inputs and AI capabilities within hybrid systems?
How can we ensure that hybrid intelligent systems are designed ethically, respecting diverse human perspectives while mitigating biases in AI?
Workshops: Intensive workshops will explore specific methodologies and technologies for human-AI collaboration. Topics include:
Bias Mitigation and Fairness in AI Models: Approaches for identifying and reducing bias in AI outputs, with tools and best practices for promoting fairness.
Explainability Tools for User Trust: Techniques for applying explainability methods, including symbolic rule-based explanations, to improve user confidence in AI-driven applications.
RecSim and Neuro-Symbolic Approaches for User Modelling: This workshop will cover Google’s RecSim for creating simulations that assess and optimise recommendation systems alongside neuro-symbolic approaches to model user behaviour.
Human Augmentation Technologies: A session focusing on AI-driven advancements in human augmentation, such as cognitive prosthetics, AI-assisted diagnostics, and performance enhancement tools.
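As an indication of the kind of hands-on exercise the bias-mitigation workshop involves, one common fairness check is demographic (statistical) parity: comparing positive-outcome rates across groups. A self-contained sketch, using invented data purely for illustration:

```python
# Demographic-parity check: compare a classifier's positive-decision
# rate across two groups. A gap near 0 suggests parity; a large gap
# flags the model for further review. Data below is synthetic.

def positive_rate(predictions):
    return sum(predictions) / len(predictions)

def parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in positive-outcome rates between groups."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# 1 = positive decision (e.g. loan approved), 0 = negative
group_a = [1, 1, 1, 1, 0, 1, 1, 0]   # 6/8 -> 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 2/8 -> 0.25
print(parity_gap(group_a, group_b))  # 0.5 -- a large gap worth investigating
```

Demographic parity is only one of several competing fairness criteria (others include equalised odds and calibration); the workshop discusses when each is appropriate and the tooling that computes them at scale.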
Roundtable Discussion: Exploring the benefits and challenges of establishing an international network for Human-AI collaboration. A proposal will be presented to attendees for feedback and possible adoption.
Schedule
Day 1
Morning (Opening & paper presentations)
8:00 – 8:45: Registration and Welcome
9:00 – 10:00: Keynote on human-AI collaboration – Vision, Challenges, and Opportunities
10:00 – 11:20: Oral session 1: Human-AI interaction. Four selected papers (20 minutes each) focusing on foundational concepts of human-AI collaboration, user-centred design, and building trust in AI systems.
11:20 – 11:45: Networking coffee break
11:45 – 13:15: Tutorial session 1: Explainable AI techniques. Hands-on tutorial on explainability methods such as SHAP and LIME to improve the transparency and interpretability of AI systems, followed by Q&A.
Afternoon (Oral presentations and tutorials)
14:00 – 15:20: Oral session 2: Explainability and interpretability. Four selected papers (20 minutes each) discussing techniques for improving explainability in AI, including symbolic knowledge integration, model transparency, and ethical considerations in interpretability.
15:20 – 15:45: Networking coffee break
15:45 – 17:15: Tutorial session 2: Human-centred LLM and neuro-symbolic design. Tutorial on combining symbolic knowledge with neural methods, covering prompt engineering, symbolic rule integration, and ethical design strategies.
Day 2
Morning (Oral and poster sessions)
9:00 – 10:00: Keynote on Explainable AI
10:00 – 11:20: Oral session 3: Applications of human-AI collaboration. Four selected papers (20 minutes each) on real-world applications of AI in healthcare, education, and the creative industries, with a focus on domain-specific challenges and solutions for human-AI interaction.
11:20 – 11:45: Networking coffee break
11:45 – 13:15: Poster session and networking. Posters showcasing research on NLP applications, user adaptation, cognitive aspects of collaborative AI, and practical challenges.
Afternoon (Workshop and social event)
14:00 – 15:20: Workshop 1: Bias mitigation and fairness in AI models. Workshop covering practical tools and strategies to identify and mitigate bias in collaborative AI applications.
15:20 – 15:45: Networking coffee break
16:00 – 18:00: Social event – visiting the Future Museum
Day 3
Morning (Workshop, panel, and roundtable sessions)
9:00 – 10:30: Workshop 2: RecSim and user modelling. Workshop on Google’s RecSim for recommendation-system simulation, with the aim of improving user-interaction modelling.
10:35 – 11:00: Coffee break
11:00 – 12:30: Panel session. Theme: “Impact of diversity of human and computational intelligence on hybrid intelligent systems.” Panellists will debate how this diversity shapes the design of hybrid intelligent systems.
13:00 – 14:00: Roundtable discussion: Establishing an international network on “Human-AI collaboration.”