This theme examines the interplay between intelligent technologies and society, ensuring innovation aligns with human values. We study how AI and robotics influence social systems and human behaviour, exploring the social, ethical, legal and economic implications of these technologies. The goal is to design AI-enabled systems that positively support our sociotechnical environment. By drawing on insights from social sciences and engineering, our researchers promote responsible AI – developing frameworks, design principles and policies that foster transparency, fairness and public trust.
This theme spans three subthemes of multidisciplinary research
This subtheme aims to ensure that artificial intelligence (AI) technologies are developed and deployed responsibly, transparently, and equitably. It addresses the societal, legal, and ethical implications of AI, focusing on trust, accountability, and public interest. This aligns with our broader strategy to lead in digital transformation and interdisciplinary innovation by integrating technical excellence with social responsibility. The research brings together experts from engineering, law, social sciences, and business to examine how AI impacts institutions, education, policy, and civic life. It supports the development of governance frameworks that promote fairness, mitigate harm, and foster public trust in AI systems.
We are conducting multi-disciplinary research into the regulation, transparency, and ethical deployment of AI across sectors. Projects include analysing how AI affects trust in social institutions, shaping laws and policies to ensure AI serves the public good, and developing participatory frameworks for decision-making in education and civic technology.
Researchers are also exploring algorithmic bias, data justice, and the role of AI in democratic processes. Through collaborations with government, industry, and civil society, we aim to influence policy, set best practices, and train future leaders who combine technical expertise with ethical awareness.
This research aims to analyse and improve the ethical and governance frameworks surrounding artificial intelligence, with a focus on trust, transparency, and accountability in real-world applications, by combining technical innovation with legal, social, and policy expertise. This ensures AI systems are fair, responsible, and aligned with public values, leading to everyday benefits like safer healthcare technologies, unbiased decision-making tools, and more trustworthy digital services.
Professor , Dr
This subtheme focuses on designing intelligent systems that are socially aware, ethically grounded, and responsive to human needs. It aims to understand how people perceive, collaborate with, and make decisions alongside AI technologies, ensuring that these systems foster trust, transparency, and meaningful engagement. This research is driven by the belief that AI should augment human capabilities rather than replace them. This aligns with our overarching strategy to lead in responsible innovation and interdisciplinary research.
To achieve these objectives, researchers are conducting empirical studies and developing interactive systems that explore human-AI collaboration across domains such as healthcare, education, and creative industries. Projects include examining how generative AI influences creativity and design thinking, how moral values shape judgments of AI behaviour, and how fairness is perceived in algorithmic decision-making.
This research aims to improve human-AI interaction with a focus on building trust, transparency, and effective collaboration between people and intelligent systems, by designing adaptive and user-aware interfaces. This enhances usability and confidence in AI technologies, with broader impacts such as AI assistants that clearly explain decisions and wearable devices that respond to individual needs in everyday life.
Professor , Professor , Associate Professor , Dr , Dr Wanchun Liu, Dr
This subtheme aims to critically examine the growing ubiquity of AI-powered systems across social, environmental, and economic spheres, and their influence on employment, equity, governance, and cultural norms. This aligns with our broader strategy to create a digital, sustainable, and healthier future by integrating engineering excellence with societal impact.
To achieve these goals, researchers are exploring human-machine interaction, social robotics, and the design of AI systems that support sociotechnical environments. Projects investigate how intelligent devices influence human behaviour and infrastructure, and how to design systems that foster trust, transparency, and ethical governance. This includes examining the boundaries of collaborative robotics, the societal framing of AI technologies, and their legislative and economic implications.
This research aims to analyse how AI and automation influence societal structures and everyday life, with a focus on equity, employment, and cultural transformation, by studying the integration of intelligent devices across social, environmental, and economic domains. This approach informs public debate and policy, helping communities adapt to technological change and harness AI for social good, such as ensuring fair access to jobs, ethical use of data, and inclusive digital services.
Professor , Associate Professor , Dr Wanchun Liu, Dr