Track Description
Human-Centered AI (HCAI) is an interdisciplinary field dedicated to developing artificial intelligence systems that prioritise human needs, values, agency, and well-being at every stage of design, development, deployment, and evaluation. Rather than treating AI as a purely technical artefact optimised for performance metrics, HCAI places humans (individuals, communities, and society) at the core, ensuring that AI augments human capabilities, preserves meaningful control through human oversight, fosters trust, and contributes positively to ethical, inclusive, and sustainable outcomes.
This track bridges technical AI/ML advances with insights from human-computer interaction (HCI), cognitive science, the social sciences, ethics, law, and domain-specific applications. The focus is on creating responsible, trustworthy, and equitable AI that operates in real-world contexts, addressing both opportunities (e.g., enhanced decision-making, creativity, and collaboration) and challenges (e.g., bias, opacity, loss of agency, and societal impact).
In 2026, the track particularly highlights emerging themes such as bidirectional human-AI alignment, agentic and embodied AI systems, explainability in the era of generative and autonomous agents, participatory and culturally grounded design, long-term human well-being, and governance for scalable, adaptive human-AI partnerships.
Topics of Interest
Submissions are welcome on (but not limited to) the following areas:
Human-AI Interaction and Collaboration
- Guidelines, patterns, and principles for effective human-AI interaction;
- Human oversight, control, and agency in agentic/autonomous AI systems;
- Co-creative, collaborative, and augmented reasoning processes.
Design and Development Methodologies
- Human-centered design thinking, participatory design, and co-design for AI;
- Frameworks for responsible and trustworthy AI (fairness, accountability, transparency, robustness);
- User-centered evaluation and studies of human-AI systems;
- Adaptive and evolving AI systems that learn from human feedback and behaviour.
Explainability, Trust, and Transparency
- Human-centered explainable AI;
- Building and measuring trust, user confidence, and perceived fairness.
Ethical, Social, and Societal Dimensions
- Alignment with human values, norms, and cultural contexts;
- Fairness, bias mitigation, inclusivity, and equity;
- Privacy, data governance, and socio-technical impacts;
- Long-term effects on human cognition, well-being, critical thinking, and skills.
Emerging Frontiers
- Bidirectional human-AI coevolution and alignment;
- AI agents and multi-agent systems with human-in-the-loop;
- Integration of cognitive science and embodied cognition into foundational AI/ML;
- Responsible innovation, policy, auditing, and regulation in dynamic AI ecosystems.
Real-World Applications and Case Studies
- Practical examples showcasing how human-centered AI drives improved outcomes in healthcare, education, governance, and other domains.
Track Chairs
- Joana Campos - Instituto Superior Técnico
- Pedro Martins - Universidade de Coimbra
- Paulo Novais - Universidade do Minho