Dr. Nopasit Chakpitak
Title: Digital Transformation in Aerospace Industry
Abstract
Digital Transformation has moved beyond computerization and digitization over the last three decades. Internet convergence is bringing Artificial Intelligence, the Internet of Things, Virtual Reality and Blockchain into real-world commercial applications. Web3 Digital Assets with AI and Web4 Digital Twins will shift the human-computer interface to the Internet from 4G/5G mobile devices to Mixed Reality eyeglasses (Virtual Reality, Augmented Reality and video), linking the physical and virtual worlds, the Universe and the Metaverse, or knowledge and imagination. AI, Computer Vision and GPU technologies play dominant roles in visualization and computation.
In the aviation industry, digital towers, satellite navigation and the Low Altitude Economy are major disruptive technology developments. Science, Technology, Engineering and Mathematics (STEM) education is promoted by the United Nations, particularly for Gen-Z, to drive technology innovation for sustainable development. The International Civil Aviation Organization (ICAO), a United Nations agency, runs the Next Generation of Aviation Professionals (NGAP) programme to attract and prepare talent from high school and university onwards for the advanced aerospace industry. A change management case study from the ICAO APAC ANSP Committee illustrates the AeroThai journey.
Background
Dr. Nopasit Chakpitak is a researcher and consultant in Knowledge Engineering and Management. During 2022-2025, as CEO of Aeronautical Radio of Thailand, he fully engaged Artificial Intelligence, Satellite Navigation and the Low Altitude Economy in the aerospace industry. His AI and Knowledge Engineering application domains include the power industry, asset management and industrial clusters. He was a professional efficiency and Control & Information engineer at the Electricity Generating Authority of Thailand, with expertise in process control, Distributed Control Systems and Supervisory Control and Data Acquisition. After starting up his own software company, he joined Chiang Mai University to establish the department of computer engineering and the Northern THAISARN Internet and SchoolNet. After completing his doctorate in AI at the University of Strathclyde, UK, he became the founding dean of the College of Arts, Media and Technology, the software engineering school at Chiang Mai University. He is now an assistant professor in Knowledge Management and Software Engineering. His consultancy work is mainly in Knowledge Management implementation for government agencies and state and private enterprises, using Knowledge Engineering methodology to design and implement AI systems. He has also been involved in several European-funded projects in ICT and its applications. His recent work is on Smart Cities, including Smart Transportation, Smart Tourism, Smart Education, Smart Energy, etc., together with relevant disruptive technologies: Digital Economy, Smart City, Big Data, FinTech, Ecosystems, Innovation, Tech Startups, Intellectual Property and Cyber Law. He established the International College of Digital Innovation to promote international tech startups in Thailand, and he also promotes digital transformation, including AI, the Internet of Things and Blockchain, in Thailand. His current work focuses on Web3 digital asset and Web4 digital twin technology for the Metaverse.
He is currently working on Maintenance, Repair and Overhaul from a predictive maintenance perspective within an Enterprise Asset Management framework.
Professor Naeem Ramzan
Title: From Emotion Recognition to Biometric Identification through Physiological Signals
Abstract
Emotion plays a major role in our lives. It can manifest in certain physical behaviour that can facilitate identification. Speech and facial expressions, for example, are the simplest channels for detecting emotions. However, emotion recognition via physiological signals (EEG, ECG) has shown great potential compared with other methods, because the person producing the signal cannot manipulate or control the output; the data therefore accurately reflect the sensation experienced by the producer. The first part of the lecture will present a method that combines connectivity-based and channel-based features with a selection method that considerably reduces the dimensionality of the data and allows for efficient classification of different emotions via physiological signals.
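As a rough illustration of the pipeline the abstract describes (combining two feature families, filtering down to a low-dimensional subset, then classifying), the sketch below uses synthetic data, a simple mean-difference filter and a nearest-centroid classifier. All names, dimensions and methods here are illustrative assumptions, not the speaker's actual technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 60 trials, channel-based features (e.g. band power per
# electrode) concatenated with connectivity-based features (e.g. pairwise
# correlations). Labels are two affective states.
n_trials = 60
channel_feats = rng.normal(size=(n_trials, 32))
connectivity_feats = rng.normal(size=(n_trials, 96))
X = np.hstack([channel_feats, connectivity_feats])  # combined feature vector
y = np.repeat([0, 1], n_trials // 2)

# Shift class-1 trials so the classes are separable on a few features.
X[y == 1, :8] += 1.5

def select_top_k(X, y, k):
    """Rank features by absolute difference of class means (a simple
    filter-style selection) and keep the k highest-scoring ones."""
    score = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
    return np.argsort(score)[::-1][:k]

idx = select_top_k(X, y, k=10)  # dimensionality: 128 -> 10
X_sel = X[:, idx]

# Nearest-centroid classification on the reduced feature set.
centroids = np.stack([X_sel[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

preds = np.array([predict(x) for x in X_sel])
accuracy = (preds == y).mean()
print(f"selected {len(idx)} of {X.shape[1]} features, "
      f"training accuracy = {accuracy:.2f}")
```

The point of the sketch is the shape of the pipeline: concatenate feature families, select a small discriminative subset, then classify in the reduced space.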
Electroencephalography (EEG) signals have shown great potential in the field of biometrics; however, there is no common test bed that makes it possible to easily compare the performance of different techniques.
In this lecture, we present a dataset specifically designed to allow researchers to attempt new biometric approaches that use EEG signals captured with relatively inexpensive consumer-grade devices. The proposed dataset contains EEG recordings and responses from 21 individuals, captured under 12 different stimuli across multiple sessions. The selected stimuli include traditional approaches as well as stimuli that aim to elicit concrete affective states, in order to facilitate future studies of the influence of emotions on EEG signals in the context of biometrics. The captured data were checked for consistency, and a performance study was carried out to establish a baseline for the tasks of subject verification and identification.
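The two baseline tasks mentioned above, subject identification (1-of-N matching against enrolled templates) and verification (1-to-1 comparison against a claimed identity), can be sketched with a toy evaluation. The feature vectors, cosine similarity measure and dimensions below are invented for illustration and are not the dataset's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 21 subjects, one fixed-length EEG feature vector per
# session; one enrollment and one probe session per subject.
n_subjects, feat_dim = 21, 64
subject_means = rng.normal(size=(n_subjects, feat_dim))
enroll = subject_means + 0.3 * rng.normal(size=(n_subjects, feat_dim))
probe = subject_means + 0.3 * rng.normal(size=(n_subjects, feat_dim))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Identification (1-of-N): assign each probe to the most similar template.
sims = np.array([[cosine(p, e) for e in enroll] for p in probe])
rank1 = (sims.argmax(axis=1) == np.arange(n_subjects)).mean()

# Verification (1-to-1): accept a claimed identity if similarity exceeds a
# threshold; here we just compare genuine and impostor score distributions.
genuine = sims[np.arange(n_subjects), np.arange(n_subjects)]
impostor = sims[~np.eye(n_subjects, dtype=bool)]
print(f"rank-1 identification: {rank1:.2f}, "
      f"genuine mean {genuine.mean():.2f} vs impostor mean {impostor.mean():.2f}")
```

Separating genuine from impostor similarity scores is exactly what a verification threshold exploits, while rank-1 accuracy summarises the identification task.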
The last part of the lecture will focus on how different emotional states affect EEG-based biometrics. The variation exhibited between the brain signals (EEG) of different people makes such signals especially suitable for biometric user identification. However, the characteristics of these signals are also influenced by the user’s current condition, including his/her affective state. In this lecture, we analyse the significance of the affect-related component of brain signals within the subject-identification context. Consistent results are obtained across three different public datasets, suggesting that the dominant component of the signal is subject-related, but that the affective state also makes a contribution that affects identification accuracy.
Background
Professor Naeem Ramzan (S’04, M’08, SM’13) received the M.Sc. degree in telecommunications from the University of Brest, France, in 2004 and the Ph.D. degree in electronics engineering from Queen Mary University of London, London, U.K., in 2008. He is currently a Full Professor of Artificial Intelligence and Computer Engineering at the University of the West of Scotland. Before that, he was a senior research fellow and lecturer at Queen Mary University of London from 2008 to 2012. He is Director of the Artificial Intelligence, Virtual Communication & Network Institute and Chair of the Affective and Human Computing for Smart Environment (AHCSE) Research Centre.
He is a Fellow of the Royal Society of Edinburgh, a Senior Member of the IEEE, a Senior Fellow of the Higher Education Academy (HEA), co-chair of the MPEG HEVC verification (AHG5) group and a voting member of the British Standards Institution (BSI). In addition, he holds key roles in the Video Quality Experts Group (VQEG), including Co-chair of the Ultra High Definition (UltraHD) group, Co-chair of the Visually Lossless Quality Analysis (VLQA) group and Co-chair of the Psycho-Physiological Quality Assessment (PsyPhyQA) group. He has been a lead researcher in various nationally and EU-sponsored multimillion-funded international research projects. His research interests are cross-disciplinary and industry-focused, and include video processing, analysis and communication; video quality evaluation; brain-inspired multi-modal cognitive technology; big data analytics; affective computing; IoT/smart environments; natural multi-modal human-computer interaction; and eHealth/connected health. He has a global collaborative research network spanning both academia and key industrial players. He has been the lead supervisor or supervisor for about 30 postdoctoral research fellows and PhD research students, and six PhD students supervised by him have successfully completed their degrees in the UK. He has published more than 250 articles in peer-reviewed journals, conferences and book chapters, including standardisation contributions. One of his papers received the 2016 Best Paper Award of the IEEE Transactions on Circuits and Systems for Video Technology, and three of his conference papers were selected for best student paper awards in 2015/2016.
He was awarded the Scottish Knowledge Exchange Champion award in 2020 and 2023, and is the only academic in Scotland to have received this award twice. He received STARS (Staff Appreciation and Recognition Scheme) awards in 2014 and 2016 for “Outstanding Research and Knowledge Exchange” (University of the West of Scotland) and Contribution Reward Scheme awards in 2009 and 2011 for outstanding research and teaching activities (Queen Mary University of London). He has chaired, co-chaired or organised more than 25 workshops, special sessions and tracks at international conferences.
Apart from his research work at UWS, he led a team of young and enthusiastic lecturers to develop a highly innovative portfolio of postgraduate programmes, including MSc Advanced Computing, MSc Big Data, MSc IoT and MSc eHealth. Advanced computing and networking technologies such as app development, advanced data science, intelligent systems, IoT and cloud computing are taught across these programmes.
Dr. Ray Holder
Title: No app, no hardware, no limits…WebXR
Description
What if immersive experiences didn’t require expensive tech or complicated downloads?
Join Dr. Ray Holder, Head of XR at Smartify, in a session that challenges the status quo of AR by showcasing the potential of WebXR – delivering accessible and scalable experiences straight from the browser.
Building upon a two-year research project with the University of the West of Scotland, the session will explore:
- How do the evolving capabilities of WebXR compare with established platforms?
- How can museums and cultural institutions leverage browser-based XR to engage audiences without barriers?
- What do upcoming WebXR projects signal for the future of immersive storytelling?
Background
Ray is a BAFTA-winning XR professional with over 25 years of experience in visual communications, encompassing roles as a graphic designer, web developer, digital artist, 3D artist, animator, and VFX artist. Ray holds a PhD in Computer Science with a focus on immersive technologies, completed while teaching creative computing and industry-related subjects.
Notably, Raymond was the VFX artist for a short film that won the Scottish BAFTA in 2017. Over the past two years, he has been a key figure in a Knowledge Transfer Partnership (KTP) with Smartify and the University of the West of Scotland, leading the development and expansion of Smartify’s XR capabilities. His work has driven innovative XR solutions that have elevated user engagement in the heritage, arts, and cultural sectors, earning recognition and nominations for prestigious awards.
As Head of XR at Smartify, he has driven successful collaborations with sites such as Tower Bridge, Cutty Sark, and Historic Royal Palaces. With a proven track record of delivering impactful XR solutions and a deep understanding of the intersection between technology and user experience, he continues to push boundaries in immersive technology.