DSAI 2023: 1st International Conference on Data Science & Artificial Intelligence
Posters
1. Adaptive Learning Models for Detecting Traffic Anomalies
Authors: Manish Taneja, Matthew N. Dailey
Abstract:
In this research, we develop a new finite mixture model and provide a mathematical proof of its properties. The mixture model is implemented to determine the physical characteristics of a traffic scene for anomaly detection. The model uses an Expectation-Maximization (EM) approach to estimate traffic lane centers from two-dimensional Cartesian coordinates representing the spatio-temporal positions of vehicles passing through a traffic scene.
A traffic scene is determined in a two-step process:
1. An unsupervised approach for the initial estimation of the number of lanes in the region of interest, using the Hough transform technique on Gaussian-noisy data.
2. Estimation of the parameters of each lane using the proposed finite mixture model.
The model can be applied to diverse traffic scenes from around the world by capturing video of vehicular motion at traffic scenes. It can also be used more generally for unsupervised estimation of clusters via multi-curve regression.
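As a concrete illustration of the second step, the following is a minimal EM sketch for a one-dimensional Gaussian mixture over vehicles' lateral positions, where each component mean plays the role of a lane center. It is a simplified stand-in (1D positions, lane count k taken from step 1), not the authors' multi-curve model.

```python
# Minimal EM sketch: 1D Gaussian mixture over vehicles' lateral positions,
# with each component mean acting as an estimated lane center.
import numpy as np

def em_lane_centers(x, k, iters=100):
    """x: lateral positions of vehicle detections; k: lane count from step 1."""
    mu = np.linspace(x.min(), x.max(), k)        # initial lane centers
    sigma = np.full(k, x.std() / k)              # initial lane widths
    pi = np.full(k, 1.0 / k)                     # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each lane for each observation
        d = (x[:, None] - mu[None, :]) / sigma[None, :]
        p = pi * np.exp(-0.5 * d**2) / (sigma * np.sqrt(2 * np.pi))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, centers, and widths
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu)**2).sum(axis=0) / nk)
    return mu, sigma, pi

# Example: synthetic detections from two lanes centered near 1.8 m and 5.4 m
x = np.concatenate([np.random.normal(1.8, 0.4, 500),
                    np.random.normal(5.4, 0.4, 500)])
print(em_lane_centers(x, k=2)[0])   # recovered lane centers
```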
2. Revealing the Hidden Potential Peatlands of the Samar Island, Philippines: An Application of Earth Data Science for Sustainable Environment
Authors: Marichu Itang, Roderica Goles
Abstract:
Peatlands, rich in decomposing organic matter and crucial for carbon capture, play a significant role in climate mitigation and provide various ecological benefits, but they are now subject to both natural and anthropogenic disruptions. Their importance has been recognized by the global scientific community; however, the mapping, measurement, and understanding of peatlands in the Philippines are still at an early stage. Mapping these important ecosystems accurately remains a substantial challenge: inventory is hampered by the risk and cost of surveys based on traditional mapping methods, and the Philippines contains lesser-known, poorly documented, and relatively disturbed peat zones. This study investigated the existence of tropical peatlands on Samar Island, Philippines through remote sensing technology. The researchers utilized a quantitative exploratory study approach to realize the objectives of the paper, using both optical and radar satellite data: Sentinel-1, Sentinel-2, and the SRTM DEM. The training points for peatlands were gathered in the Leyte Sab-a Peatland Forest, located in a neighboring province, as this land type holds similar characteristics within the region. The gathered data were ingested and analyzed in Google Earth Engine using the Random Forest classifier. The results support the alternative hypothesis that tropical peatlands exist in the region of interest. Further, the classification using Sentinel-1, Sentinel-2, and SRTM DEM data achieved an overall accuracy of 95.25% and a kappa coefficient of 94.41%. The study also concluded that elevation serves as the most important band in detecting and mapping tropical peatlands. This research provides upscaled information on peatlands to address their critical state, especially concerning management and preservation. However, it also notes the necessity of localized training samples to reduce bias in the Random Forest algorithm and of a validation survey of the detected peatland sites.
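The classification workflow described above can be sketched in the Google Earth Engine Python API roughly as follows. The region geometry, date range, band selection, and the training-point asset are hypothetical placeholders, not the study's actual configuration.

```python
# Hedged sketch of a Random Forest land-cover classification in the Google
# Earth Engine Python API, combining Sentinel-2 optical bands, Sentinel-1
# radar bands, and SRTM elevation, as in the workflow described above.
import ee
ee.Initialize()

roi = ee.Geometry.Rectangle([124.5, 11.0, 125.5, 12.5])  # placeholder bounds

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
        .filterBounds(roi).filterDate('2022-01-01', '2022-12-31')
        .median())
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
        .filterBounds(roi).filterDate('2022-01-01', '2022-12-31')
        .select(['VV', 'VH']).median())
dem = ee.Image('USGS/SRTMGL1_003').select('elevation')

stack = s2.select(['B2', 'B3', 'B4', 'B8']).addBands(s1).addBands(dem)

# `training_points` would be a FeatureCollection with a 'class' property,
# e.g. points digitized over the Leyte Sab-a Peatland Forest (hypothetical ID).
training_points = ee.FeatureCollection('users/example/peatland_training')
samples = stack.sampleRegions(collection=training_points,
                              properties=['class'], scale=30)

clf = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=samples, classProperty='class',
    inputProperties=stack.bandNames())
classified = stack.classify(clf)   # per-pixel peatland / non-peatland map
```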
3. Cross-Attention based Late-Fusion for Medical Visual Question Answering
Authors: Aiman Lameesa, Chaklam Silpasuwanchai, Md. Sakib Bin Alam
Abstract:
Image and question matching is greatly important in Medical Visual Question Answering (MVQA) in order to accurately measure the visual-semantic similarity between an image and a question. However, recent state-of-the-art methods focus solely on contrastive learning between the entire image and the question words. In contrast, we propose a novel Cross-Attention based Late Fusion (CALF) network for MVQA tasks that combines image and question features in a unified deep model. In our proposed method, we use self-attention to effectively leverage intra-modality relationships within each modality and implement cross-attention to emphasize the inter-modality relationships between image regions and question words. By combining both intra-modality and inter-modality relationships, our proposed model significantly improves the performance of MVQA. Experimental results on benchmark datasets such as SLAKE demonstrate that our proposed approach outperforms the existing state-of-the-art methods in MVQA tasks.
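The attention pattern described (self-attention within each modality, cross-attention between image regions and question words, then late fusion) can be sketched in PyTorch as below. The feature dimensions, head counts, pooling, and classification head are assumptions, not the paper's exact architecture.

```python
# Illustrative sketch of self-attention per modality plus cross-attention
# between image regions and question words, with late fusion for answering.
import torch
import torch.nn as nn

class CrossAttentionLateFusion(nn.Module):
    def __init__(self, dim=512, heads=8, num_answers=200):
        super().__init__()
        self.img_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_answers)

    def forward(self, img_feats, txt_feats):
        # Intra-modality relationships within each modality
        img, _ = self.img_self(img_feats, img_feats, img_feats)
        txt, _ = self.txt_self(txt_feats, txt_feats, txt_feats)
        # Inter-modality relationships: question words attend to image regions
        fused, _ = self.cross(txt, img, img)
        # Late fusion: pool each stream and concatenate before classification
        pooled = torch.cat([fused.mean(dim=1), img.mean(dim=1)], dim=-1)
        return self.classifier(pooled)

model = CrossAttentionLateFusion()
logits = model(torch.randn(4, 49, 512),   # 49 image-region features per image
               torch.randn(4, 20, 512))   # 20 question-token embeddings
```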
4. Supporting Instructors’ Intervention in Computer Programming Courses: A Learning Analytics Approach
Authors: Piriya Utamachant, Chutiporn Anutariya
Abstract:
The high non-progress rates of students in programming courses are a recognized issue worldwide [1]. Besides effective instructional design and delivery, it is essential to employ efficient instructional intervention strategies to support students’ learning in unforeseen situations. However, implementing effective intervention is always a challenge [2]. Firstly, each course is characterized by its distinct conditions, such as objectives, knowledge content, teaching methods, student profiles, and learning environments, which can pose difficulties for instructors in obtaining dependable and current information for decision-making. Secondly, intervention is known to be an ongoing process that necessitates iterative development to identify the most effective strategies to enhance student learning [3]. However, evaluating the extent to which an applied intervention works is even more complicated and requires extensive effort, often making it infeasible for most courses. This study proposes a learning analytics approach that can address the aforementioned challenges. The first part of the analytics assesses students’ learning deficiency gaps, covering both behavioral and comprehension aspects, and suggests lists of students who require attention, informing instructors’ intervention decisions. The second part systematically evaluates the effectiveness of the applied intervention by comparing the improvement in deficiency gaps of intervened students. Our analytics approach was tested over one semester of an undergraduate Java programming course with 253 students and 2 instructors. The outcomes were satisfactory. A total of 12 complete intervention cycles were successfully performed with minimal instructor effort. Our approach consistently generated lists of lagging students for targeted intervention decisions. The effectiveness of these interventions was evaluated quantitatively, providing meaningful results that enabled instructors to refine their approach throughout the course period.
5. Enhancing Sleep Apnea Diagnosis: Evaluating Wellue O2 Ring for Wearable Device-Based Detection
Authors: Kristina Thapa, Chutiporn Anutariya
Abstract:
Sleep apnea is a common sleep disorder that can have significant negative consequences, including cognitive impairment, excessive sleepiness, and depression. However, it often goes undiagnosed, which delays treatment. Standard diagnostic procedures, such as polysomnography (PSG), are complex and expensive and require specialized facilities and personnel. To address this problem, various wearable devices have been proposed to diagnose sleep apnea at patients’ homes. This study, in collaboration with Thammasat Hospital, aimed to identify the most suitable model for the detection of sleep apnea using a low-cost pulse oximetry device, the Wellue O2 ring. In the study, patients were required to wear a Wellue O2 ring and polysomnography sensors while sleeping in the Thammasat Hospital Sleep Lab. The polysomnography result was used as ground truth. The data collected from the O2 ring was then analyzed by three selected models: a rule-based model, an SVM (Support Vector Machine), and a 1D CNN (Convolutional Neural Network). The rule-based model applied the rule that if the patient’s SpO2 drops more than 3% from baseline for more than 10 seconds, it is considered an event. This model achieved 73.5% accuracy. It was able to detect apnea events accompanied by oxygen desaturation; however, it could not detect events in which the oxygen level did not drop. Thus, SpO2 data alone could mislead the model. Additional patient-specific information, such as motion, pulse rate, and demographic data, was therefore incorporated as feature variables of the datasets for the machine learning and deep learning models. The prediction accuracy of the 1D CNN-LSTM model was 67%. The SVM model, however, was found to be 92% accurate, a significant improvement compared to the previous models. These results highlight the potential of the Wellue O2 ring as a possible device for the detection of sleep apnea. Compared to PSG, the Wellue O2 ring offers a more affordable and convenient option, making it easier for patients to wear during sleep. It is important to note that not all provinces in Thailand have access to PSG facilities. In such provinces, the proposed approach can be utilized as a preliminary screening tool for sleep apnea.
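The stated rule (flag an event when SpO2 drops more than 3% below baseline for longer than 10 seconds) can be made concrete with a small sketch like the following. The sampling rate and the use of the nightly median as the baseline are assumptions, not details from the study.

```python
# Minimal sketch of the rule-based desaturation detector described above.
import numpy as np

def detect_desaturation_events(spo2, fs=1.0, drop=3.0, min_sec=10.0):
    """spo2: SpO2 samples (%); fs: samples per second (assumed 1 Hz)."""
    baseline = np.median(spo2)                 # assumed baseline definition
    below = spo2 < (baseline - drop)           # >3% below baseline
    events, start = [], None
    for i, flag in enumerate(below):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / fs > min_sec:     # sustained for >10 s
                events.append((start / fs, i / fs))   # (start_s, end_s)
            start = None
    if start is not None and (len(below) - start) / fs > min_sec:
        events.append((start / fs, len(below) / fs))
    return events

# Example: a night fragment with one 15-second dip from 97% to 92%
spo2 = np.array([97] * 60 + [92] * 15 + [97] * 60, dtype=float)
print(detect_desaturation_events(spo2))   # [(60.0, 75.0)]
```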
6. Integrating User-centred Design into Healthcare Data Analytics and Visualisation for Fall Detection
Authors: Parkpoom Wisedsri, Chutiporn Anutariya
Abstract:
Falls among elderly and disabled individuals are a crucial concern. When they are home alone, there might not be anyone around to assist. Therefore, a video-analytics-based fall detection system is essential to detect any falls and to immediately alert caregivers or healthcare professionals to provide the necessary care. Typically, such a system monitors elderly and disabled individuals in a household area and can detect not only falls but also other common activities such as sitting, standing, walking, and lying down. Proper analysis and visualization of these activities would allow healthcare professionals, including physiotherapists, to gain a deeper understanding of the behavioural patterns of elderly and disabled individuals. This can enable them to identify any concerns and/or give useful recommendations based on the activity summary and behavioural patterns found; preventive measures can also be implemented. This research aims to address this essential need by proposing a web-based architectural design together with a UI/UX design of such data analytics and visualisation services, employing a user-centred approach. Development of a prototype system is also part of the research. The methodology begins with a comprehensive literature review to establish a theoretical foundation. It then adopts a user-centric approach through iterative development, incorporating feedback from medical experts. The Software Usability Measurement Inventory (SUMI) principles are applied for usability evaluation. The architectural design is optimized to connect seamlessly with the fall detection system’s data source, and usability is rigorously assessed to ensure practical effectiveness. This methodology combines theoretical and practical elements for the development. The research then presents a prototype to visualize activities of the elderly and disabled, focusing on fall activity and alerts. Using a user-centred approach, medical experts are engaged throughout the whole research cycle. Key requirements include analysing individual activities, identifying fall incidents, and ensuring quick emergency responses. To develop the web service, we design a framework as a blueprint that manages data flow from the edge-care cloud computing layer to the web service. The web service acts as the data transmission endpoint, receiving HTTP POST requests structured in JSON format following a NoSQL data model. When a fall is detected, the data is sent promptly, alongside a 10-minute data package covering non-fall activities such as walking, sitting, and standing. FastAPI, a Python library, handles the data, while data storage relies on the Firebase real-time cloud database. Vue and Chart.js form the front end, synchronized with the data processing layer. The presentation layer primarily handles front-end functions, including activity summary visualisation. The challenge is converting the data source into visual components on the website. The initial design applies a scatter plot to present activity type against a time series; a timeline of historical activity is also beneficial for understanding activity patterns. This research also conducted a usability evaluation with medical experts, yielding positive feedback on the prototype’s user-friendliness and efficiency. However, suggestions for improvement include implementing a video playback feature, improving visualisations of environmental factors related to falls, and adding a summary feature for the activity analysis of elderly and disabled individuals. These insights will guide enhancements toward better fall-prevention recommendations for the elderly and disabled.
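A hedged sketch of the ingestion endpoint described above: a FastAPI service receiving fall and activity events as JSON over HTTP POST. The route, field names, and the note about Firebase persistence are illustrative assumptions, not the project's actual schema.

```python
# Illustrative FastAPI endpoint for fall/activity event ingestion.
from datetime import datetime
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ActivityEvent(BaseModel):
    resident_id: str
    activity: str            # e.g. "fall", "walking", "sitting", "standing"
    confidence: float
    timestamp: datetime

@app.post("/events")
async def ingest_event(event: ActivityEvent):
    # In the described system, a detected fall triggers an immediate alert
    # to caregivers; here we only flag it.
    alert = event.activity == "fall"
    # Persisting to the Firebase real-time database would happen here,
    # e.g. via firebase_admin's db.reference("events").push(event.dict()).
    return {"received": True, "alert": alert}
```

Run with `uvicorn app:app` (assuming this file is `app.py`); the front end would then poll or subscribe to the stored events for visualisation.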
7. A Comprehensive Review of Local-Level Structural Damage Evaluation in Reinforced Concrete Structures Using Nonlinear Finite Element Analysis
Author: Sajedur Rahman
Abstract:
Existing reinforced concrete (RC) structures in Old Dhaka, Bangladesh need to be assessed for structural damage to enhance building resilience. Traditional damage index studies calculate damage parameters on a global basis, using section forces and displacements in place of stress and strain, without considering the details of local member damage. This is not suitable for the RC structures of Old Dhaka, as they may exhibit high levels of nonlinearity due to factors such as poor engineering practices and dilapidated conditions. Therefore, a new damage index is needed to capture nonlinear damage behavior at the element level and provide a more complete assessment of damage using the full range of analysis results. This study applies a new damage index method, recently developed by the Japan Society of Civil Engineers (JSCE), to calculate the damage index of the structural components of existing RC structures under several ground motions, using incremental dynamic analysis (IDA) for local site conditions. Nonlinear finite element analysis is carried out, and the damage index is identified at the element level. This can contribute to predicting progressive damage and help identify the critical sections in a structure. Previous literature, which has shown a significant relation between the new damage index and load-displacement behavior in several small-scale numerical studies, will be discussed, supporting the index’s validity. Furthermore, this study will propose a method to validate the new damage index against a frame structure whose crack propagation has been analyzed in experimental works from the literature. All the proposed damage index analyses will be based on the 3D Applied Element Method, implemented in FORTRAN 95. The study will contribute to the development of strategies for mitigating the impact of earthquakes, both by reducing seismic hazards and their potential consequences.
8. ENAN: Enhancement of Data Management Through Academic Credential – Capability Maturity Model in Higher Education
Authors: Pricilla Faye T. Simon, Chutiporn Anutariya
Abstract:
This study presents the development and application of an Academic Credential-Capability Maturity Model (AC-CMM) tailored for higher education institutions (HEIs) to address the prevalent issues of fraud and the lack of authentication measures in academic credentialing. In our society, academic credentials are seen as indicators of an individual’s skills and knowledge acquired during university education, and they are also a requirement for employment. This has led to prevalent falsification of documents. Additionally, challenges in the authentication process have made it difficult to identify fraudulent academic credentials. The responsibility of issuing secure and easily verifiable credentials therefore falls on HEIs. However, the lack of an existing framework to guide institutions on academic credential systems makes it difficult for them to implement a robust and reliable academic credential management system.
The AC-CMM was developed to assess the current maturity level of an institution’s academic credential management system. The framework is predicated on the Capability Maturity Model Integration developed by the Software Engineering Institute (CMM-SEI), which is used to guide organizations in software development. The AC-CMM is structured across five levels – Initial, Emerging, Defined, Managed, Leading – that represent different stages of maturity in managing academic credentials, with each level defined by specific Key Process Areas (KPAs). These KPAs describe the essential elements of effective processes and encompass related activities, methods, materials, and deliverables. Each level outlines the progression of technology adoption and processes: the ‘Initial’ level is characterized by manual, error-prone operations, while the ‘Leading’ level is marked by global collaboration and cutting-edge technological implementations. The model utilizes a detailed questionnaire to determine an institution’s maturity stage. Furthermore, an institution’s strengths and weaknesses can be identified using this tool, which in turn helps the university address issues in those specific areas. To assess the AC-CMM’s relevance and effectiveness, the maturity model assessment technique by Awasthy et al. (2018) was employed.
By applying this model, universities are equipped with a robust framework to assess and enhance their academic credential systems. It aims to address the challenges faced by institutions in credential management, offering a roadmap for improvement that culminates in future-ready, secure, and innovative credentialing practices.
9. A Comparative Analysis of Proactive Auto-Scaling Strategies
Authors: Somesh Rao Coka, Chantri Polprasert
Abstract:
Proactive autoscaling in server and cloud environments is a critical challenge, particularly for high-traffic websites where predicting and managing web traffic is essential to ensuring seamless performance and resource efficiency. The traditional reactive approach to scaling, which adjusts resources in response to traffic changes, often leads to either resource underutilization or overutilization. The proactive autoscaling problem involves anticipating future traffic loads accurately and scaling server resources accordingly, ahead of time. This approach requires highly accurate forecasting models that can predict traffic patterns based on historical data. This research contributes to solving this problem by evaluating and demonstrating the performance of Transformer models in predicting web traffic more accurately than traditional models like ARIMA. Our evaluation involved a comparative analysis of the Transformer and ARIMA models, utilising one month of web traffic data from NASA’s server log. The findings indicate that the Transformer model offers more precise predictions than its ARIMA counterpart, as evidenced by the Mean Absolute Error (MAE) metrics: the ARIMA model recorded an MAE of 14.07, whereas the Transformer model exhibited a lower MAE of 10.35. MAE is expressed on the same scale as the target being predicted, which makes it easy to interpret. The range of our data lies from 0 to 405 requests per minute, so an MAE of 10.35 means the model’s predictions deviate from the actual values by 10.35 requests per minute on average. This represents a relatively small error, about 2.55% of the total range (10.35/405 ≈ 0.0255). The cost of this accuracy is the increased computational complexity and resource requirements associated with the Transformer model. Despite this, its superior performance in handling large datasets with complex patterns makes it a more suitable choice for predictive autoscaling in dynamic environments. The purpose of the analysis is to guide the selection of an optimal predictive autoscaling strategy, thereby enhancing resource management and system performance. In conclusion, this study shows that the Transformer offers a more dynamic and precise approach for predictive autoscaling in complex and large-scale data environments, outperforming the ARIMA model. Still, there is no one-size-fits-all solution, as indicated by the differences in the models’ performance: the selection of a model depends on application-specific requirements, data characteristics, and operational constraints. Researchers and practitioners in the fields of autoscaling and cloud computing are anticipated to gain valuable knowledge from the results of this study.
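A small sketch makes the reported comparison concrete: MAE on the same scale as the target (requests per minute), also expressed as a fraction of the observed 0-405 range. The series below are placeholders, not the NASA log data.

```python
# Computing MAE and its share of the target range, as in the discussion above.
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

y_true = np.array([120, 180, 240, 300, 260])         # requests per minute
y_arima = np.array([130, 170, 260, 310, 240])        # placeholder forecasts
y_transformer = np.array([125, 176, 248, 296, 252])  # placeholder forecasts

for name, pred in [("ARIMA", y_arima), ("Transformer", y_transformer)]:
    err = mae(y_true, pred)
    print(f"{name}: MAE={err:.2f} ({err / 405:.2%} of the 0-405 range)")
```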
10. Enhancing Upper Limb Prosthetic Functionality via TinyML-Powered Design, EMG, and 3D-Printed Components
Authors: Srijan Ghimire, Attaphongse Taparugssanagorn
Abstract:
This ongoing research investigates the integration of TinyML algorithms, 3D printing techniques, and electromyography (EMG) signals to develop low-cost, low-power prosthetics with advanced upper limb functionality. The study emphasizes creating prosthetic devices with minimal power consumption, achieved through the implementation of power-efficient machine learning (ML) models capable of running on low-power microcontrollers.
A crucial aspect of the research involves the utilization of a 1D convolutional neural network (1D CNN) model, selected for its power efficiency. The TensorFlow library is employed for model development, and the resulting model is converted to TensorFlow Lite format. Subsequently, this model is further transformed into a C array model to enable deployment on an ESP32 microcontroller, emphasizing the adaptability of advanced ML algorithms to low-power, memory-constrained devices.
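The conversion pipeline described above (Keras model, then TensorFlow Lite, then a C array for the ESP32) can be sketched as follows. The network shape, EMG window length, and gesture count are assumptions, not the project's exact model.

```python
# Hedged sketch: small 1D CNN -> TensorFlow Lite -> C array header for an MCU.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 1)),              # 128-sample EMG window
    tf.keras.layers.Conv1D(8, 5, activation='relu'),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(16, 5, activation='relu'),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(4, activation='softmax'),     # e.g. 4 hand gestures
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Convert to TensorFlow Lite, with default optimizations to shrink the model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Emit a C array (equivalent to `xxd -i model.tflite`) for the ESP32 firmware
with open('emg_model.h', 'w') as f:
    f.write('const unsigned char emg_model[] = {\n  ')
    f.write(', '.join(str(b) for b in tflite_model))
    f.write('\n};\n')
    f.write(f'const unsigned int emg_model_len = {len(tflite_model)};\n')
```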
In addition to integrating TinyML algorithms, the research incorporates 3D printing technology to affordably manufacture personalized prosthetic components. This approach addresses budgetary constraints and enhances the accessibility of prosthetic technology. The research methodology includes comprehensive testing for functionality, power efficiency, usability, and cost-effectiveness through simulations and prototype development. The study focuses on the unique challenges of implementing ML in a low-power and memory-constrained environment. The chosen approach aims to demonstrate the feasibility of deploying a power-efficient ML model on an ESP32 microcontroller, showcasing the potential for real-time interpretation of EMG signals within the prosthetic system.
The outcomes of this research are anticipated to contribute significantly to the field, providing a viable solution for individuals reliant on prosthetic devices.
11. Exploring the Role of Artificial Intelligence in Marketing and Sales: A Study on Natural Language Processing and Natural Language Generation
Authors: Issaret Prachitmutita, Chutiporn Anutariya
Abstract:
This research paper conducts a thorough investigation into the role of Artificial Intelligence (AI), specifically focusing on Natural Language Processing (NLP) and Natural Language Generation (NLG) in the realms of marketing and sales. The aim is to present a well-rounded view of various AI technologies and their roles in augmenting aspects like customer relationship management, content creation, personalization, and communication while also critically examining the limitations and ethical concerns that accompany their usage. Our research approach is grounded in a detailed literature review and meticulous data analysis, where we closely examined relevant publications from 2015 onward, highlighting the role of AI in marketing and sales.
The findings of this study underscore the pivotal function of NLP and NLG in extracting valuable business insights, streamlining processes, and improving customer interactions. NLP’s capabilities extend to areas such as text and speech recognition, machine translation, sentiment analysis, and enhancing chatbot communications. NLG, in contrast, is applied in creating both written and verbal narratives, playing a significant role in conversational commerce and boosting the efficiency of communications.
Despite these technological advancements, our study brings to light ongoing challenges like concerns over transparency, personalization, emotional intelligence, and privacy. These issues underscore the necessity for continuous improvement and ethical consideration in AI technology to ensure alignment with human needs and values.
One of the critical discoveries of our research is the escalating significance of AI-driven personalization within marketing and sales. The changing dynamics highlight an urgent need for advancements in NLP and NLG systems, especially in data management and adherence to privacy regulations, as well as in enhancing the understanding and generation of natural language.
Additionally, this research categorizes AI applications in marketing and sales into several key areas: customer acquisition, understanding, communication, activation, retention, and business support. We delve deeper into various facets of marketing technology, such as automation, intelligent features, and integrated systems, each playing a crucial role at different stages of the customer journey, from pre-purchase to post-purchase.
This study also sheds light on the limitations and ethical considerations in AI implementation, including potential biases, lack of empathy in B2B relations, and the challenges of integrating emotional intelligence. Addressing these concerns is vital for maintaining consumer trust and satisfaction.
In conclusion, this paper makes a significant contribution to the understanding of AI’s applications in marketing and sales. It offers a balanced perspective on the advantages and challenges of AI, guiding businesses in making informed decisions about incorporating AI into their strategies. The findings highlight the need for future developments in AI, mainly focusing on enhancing NLP and NLG systems, improving data management, and considering privacy aspects to elevate the marketing and sales industry further.