DI-Artículos
Permanent URI for this collection: https://hdl.handle.net/10953/218
Browsing DI-Artículos by author "Albahri, Ahmed Shihab"
Showing 1 - 3 of 3
Item
A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications (Springer, 2023-04-14)
Alzubaidi, Laith; Bai, Jinshuai; Al-Sabaawi, Aiman; Santamaria, José; Albahri, Ahmed Shihab; Al-dabbagh, Bashar Sami Nayyef; Fadhel, Mohammed A.; Manoufali, Mohammed; Zhang, Jinglan; Al-Timemy, Ali H.; Duan, Ye; Abdullah, Amjed; Farhan, Laith; Lu, Yi; Gupta, Ashish; Albu, Felix; Abbosh, Amin; Gu, Yuantong

Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have too little data to train DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with extensive background knowledge; this annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically, and, in general, more data yields a better DL model, although performance also depends on the application. This issue is the main barrier keeping many applications from adopting DL, since having sufficient data is the first step toward any successful and trustworthy DL application. This paper presents a holistic survey of state-of-the-art techniques for training DL models under three challenges: small datasets, imbalanced datasets, and lack of generalization. The survey starts by listing the learning techniques. Next, the types of DL architectures are introduced. After that, state-of-the-art solutions to the lack of training data are listed, such as Transfer Learning (TL), Self-Supervised Learning (SSL), Generative Adversarial Networks (GANs), Model Architecture (MA), Physics-Informed Neural Network (PINN), and Deep Synthetic Minority Oversampling Technique (DeepSMOTE).
These solutions are followed by tips on the data acquisition needed prior to training, as well as recommendations for ensuring the trustworthiness of the training dataset. The survey ends with a list of applications that suffer from data scarcity; for each, several alternatives for generating more data are proposed, including Electromagnetic Imaging (EMI), Civil Structural Health Monitoring, Medical Imaging, Meteorology, Wireless Communications, Fluid Mechanics, Microelectromechanical Systems, and Cybersecurity. To the best of the authors' knowledge, this is the first review to offer a comprehensive overview of strategies for tackling data scarcity in DL.

Item
A Systematic Review of Trustworthy and Explainable Artificial Intelligence in Healthcare: Assessment of Quality, Bias Risk, and Data Fusion (Elsevier, 2023-08-10)
Albahri, Ahmed Shihab; Duhaim, Ali M.; Fadhel, Mohammed A.; Alnoor, Alhamzah; Baqer, Noor S.; Alzubaidi, Laith; Albahri, Osamah S.; Alamoodi, Abdullah Hussein; Bai, Jinshuai; Salhi, Asma; Santamaria, José; Ouyang, Chun; Gupta, Ashish; Gu, Yuantong; Deveci, Muhammet

In the last few years, the healthcare trend of embracing artificial intelligence (AI) has dramatically changed the medical landscape. Medical centres have adopted AI applications to increase the accuracy of disease diagnosis and mitigate health risks. AI applications have changed rules and policies related to healthcare practice and work ethics. However, building trustworthy and explainable AI (XAI) in healthcare systems is still in its early stages. Specifically, the European Union has stated that AI must be human-centred and trustworthy, whereas in the healthcare sector, low methodological quality and high bias risk have become major concerns.
This study offers a systematic review of the trustworthiness and explainability of AI applications in healthcare, incorporating the assessment of quality, bias risk, and data fusion to supplement previous studies and provide more accurate and definitive findings. In total, 64 recent contributions on the trustworthiness of AI in healthcare were identified from multiple databases (i.e., ScienceDirect, Scopus, Web of Science, and IEEE Xplore) using a rigorous literature search method and selection criteria. The selected papers were organised into a coherent, systematic classification of seven categories: explainable robotics, prediction, decision support, blockchain, transparency, digital health, and review. This paper presents a systematic and comprehensive analysis of earlier studies and opens the door to future work by discussing the challenges, motivations, and recommendations in depth. A systematic science-mapping analysis was also performed to reorganise and summarise the results of earlier studies and address the issues of trustworthiness and objectivity. Moreover, this work provides decisive evidence for the trustworthiness of AI in healthcare by presenting eight state-of-the-art critical analyses of the most relevant research gaps.
In addition, to the best of our knowledge, this study is the first to investigate the feasibility of utilising trustworthy and XAI applications in healthcare by incorporating data fusion techniques and connecting various important pieces of information from available healthcare datasets and AI algorithms.

Item
Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements (Wiley, Hindawi, 2023-10-26)
Alzubaidi, Laith; Al-Sabaawi, Aiman; Bai, Jinshuai; Dukhan, Ammar; Alkenani, Ahmed H.; Al-Asadi, Ahmed; Alwzwazy, Haider A.; Manoufali, Mohammed; Fadhel, Mohammed A.; Albahri, Ahmed Shihab; Moreira, Catarina; Ouyang, Chun; Zhang, Jinglan; Santamaria, José; Salhi, Asma; Hollman, Freek; Gupta, Ashish; Duan, Ye; Rabczuk, Timon; Abbosh, Amin; Gu, Yuantong

Given the tremendous potential and influence of artificial intelligence (AI) and algorithmic decision-making (DM), these systems have found wide-ranging applications across diverse fields, including education, business, healthcare, government, and justice. While AI and DM offer significant benefits, they also carry the risk of unfavourable outcomes for users and society. As a result, ensuring the safety, reliability, and trustworthiness of these systems becomes crucial. This article provides a comprehensive review of the synergy between AI and DM, focussing on the importance of trustworthiness. The review addresses the following four key questions, guiding readers towards a deeper understanding of this topic: (i) why do we need trustworthy AI? (ii) what are the requirements for trustworthy AI? In line with this second question, the key requirements that establish the trustworthiness of these systems are explained, including explainability, accountability, robustness, fairness, acceptance of AI, privacy, accuracy, reproducibility, and human agency and oversight. (iii) how can we have trustworthy data?
and (iv) what are the priorities in terms of trustworthy requirements for challenging applications? Regarding this last question, six different applications are discussed, including trustworthy AI in education, environmental science, 5G-based IoT networks, robotics for architecture, engineering and construction, financial technology, and healthcare. The review emphasises the need to address trustworthiness in AI systems before their deployment in order to achieve the goal of AI for good. An example is provided that demonstrates how trustworthy AI can be employed to eliminate bias in human resources management systems. The insights and recommendations presented in this paper will serve as a valuable guide for AI researchers seeking to achieve trustworthiness in their applications.
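The third item above closes with an example of trustworthy AI eliminating bias in human resources management. A common first step in such a fairness audit is to compare selection rates across demographic groups and compute the disparate-impact ratio (the "four-fifths rule"). The sketch below is a minimal, self-contained illustration of that check; the data, group labels, and 0.8 threshold are hypothetical and are not taken from the reviewed papers.

```python
# Illustrative fairness check for a hypothetical hiring pipeline.
# Decisions are encoded as 1 = hired, 0 = rejected.

def selection_rate(decisions):
    """Fraction of positive (hire) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (in (0, 1])."""
    rate_a = selection_rate(group_a)
    rate_b = selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

if __name__ == "__main__":
    # Hypothetical screening outcomes for two demographic groups.
    group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6/8 hired -> rate 0.75
    group_b = [1, 0, 0, 1, 0, 0, 0, 0]   # 2/8 hired -> rate 0.25
    ratio = disparate_impact(group_a, group_b)
    print(f"disparate impact = {ratio:.2f}")
    # A ratio below 0.8 flags potential adverse impact under the
    # four-fifths rule and calls for review before deployment.
    if ratio < 0.8:
        print("potential bias: review the model before deployment")
```

A check like this is only a screening signal, which is consistent with the article's broader point: fairness is one of several trustworthiness requirements (alongside explainability, accountability, robustness, and human oversight) that should be assessed before an AI system is deployed.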