Artificial Intelligence Trends and Tools for Improving Women’s Health

*Shivangi Mishra
Medicine, School Of Allied Health Sciences And Management, India

*Corresponding Author:
Shivangi Mishra
Medicine, School Of Allied Health Sciences And Management, India
Email: mishrashivang29@gmail.com

Published on: 2024-04-03

Abstract

For decades, women’s health has faced significant challenges, including underrepresentation in research, limited access to specialized care, and a persistent gender gap in diagnosis and treatment. However, a wave of innovation powered by artificial intelligence (AI) is poised to revolutionize the landscape, offering personalized solutions and improved healthcare experiences for women across all phases of life. This in-depth exploration delves into the evolving landscape of AI in women’s health. This review highlights prominent trends, showcases innovative tools and startups driving positive change, and discusses the potential impact on various aspects of well-being. From personalized care and early disease detection to mental health support and improved access to information, AI promises to transform women’s healthcare experiences.

Keywords

Artificial intelligence, Women’s healthcare, Personalized care

Introduction

Women worldwide encounter distinct health challenges from adolescence through menopause and into senescence, encompassing menstruation, fertility, pregnancy, childbirth, chronic diseases, and mental health. Conventional healthcare institutions, however, frequently fail to meet, and often simply overlook, these unique needs. This is where AI comes into play, offering a potent approach built on its capacity to evaluate enormous volumes of information, spot trends, and customize findings. By utilizing AI-powered solutions, women can make more informed decisions about their bodies, understand their health better, and receive specialized therapies across their lifespan [1]. FemTech Expo is a worldwide series of virtual events designed for innovators within the healthcare and women’s health sectors [2]. The FemTech industry, comprising technology and software companies dedicated to addressing women's health needs, is projected to reach a value of US$1.07 trillion by 2026, growth driven at least in part by heightened investment in telehealth and remote monitoring tools [3]. "Augmented intelligence" and "actionable insights" complement human effort rather than replacing it, emphasizing purpose and collaboration. Despite the escalating global population, women often experience medical neglect during menstruation and pregnancy, resulting in avoidable yet prevalent fatalities. AI and machine learning (ML) algorithms play a pivotal role in raising awareness among women of common yet under-recognized health issues such as anemia, a significant problem in India that contributes to female mortality [4]. ML algorithms, deep learning (DL) neural networks, and automatic feature extraction tools help address the biases and discrimination prevalent in medical consultations for women across economic strata in both urban and rural settings. To assist health professionals in making better decisions, AI combines data from many structured and unstructured sources, can reason at a semantic level, and can be applied in computer vision, text comprehension, conversational systems, and multimodal applications [5]. AI can augment human capacities, particularly in general care, imaging, and cancer detection.

Areas of Impact

AI holds enormous potential in the domains of medicine and healthcare. It is crucial for diagnostic and prognostic assessments of medical data. With the advancement of AI technology, algorithms for medical research, diagnosis, and therapy have been developed. With the ability to learn and correct themselves, AI systems may continuously improve through feedback mechanisms, making them an invaluable tool for physicians, especially in critical care settings such as surgical operations. AI applications in healthcare fall into two categories: natural language processing techniques that extract information from unstructured sources like clinical notes and medical literature, and ML approaches that use structured data analysis to predict illness prognosis, especially in light of genetic effects [6,7].

Women's health AI research has mostly addressed diseases impacting the cardiovascular system, breasts, bones, cervix, and endometrium. Artificial neural networks (ANNs) and classification and regression trees have been used to predict endometrial cancer in postmenopausal women and to assess the role of different human papillomavirus types in cervical dysplasia recurrence risk [8,9]. While this review acknowledges the broader spectrum of AI applications in women's health, including reproductive health, maternal care, fertility tracking, pregnancy monitoring, and chronic disease management, it emphasizes a focused examination of personalized care, early disease detection, and mental health support. This selective approach allows a deeper analysis of specific challenges and opportunities within these critical areas, providing actionable insights for improving women's health outcomes.

Personalized care

By leveraging AI algorithms, healthcare providers can tailor treatment plans and interventions to individual women's needs, considering factors such as genetic predispositions, lifestyle habits, and personal preferences. This personalized approach enhances the effectiveness of healthcare interventions, leading to better health outcomes and improved quality of life [10].

A prediction system for breast cancer was created utilizing 38,444 mammography images from 9,611 women. The algorithm accurately detects biopsy malignancy and breast cancer using ML and DL. The system demonstrates the promise of AI in healthcare by assisting radiologists in identifying breast cancer. The algorithm was tested on 2,548 women and verified on 1,055 women as part of the study. The system, which had an AUC of 0.91, specificity of 77.3%, and sensitivity of 87%, detected 48% of false-negative results on mammograms. When trained only on clinical data, the model outperformed the Gail model by a large margin, with an AUC of 0.78 vs 0.54. At present, the ML-DL model provides only a global probability for the entire breast; it does not localize the finding (Figure 1) [11].
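
As a minimal illustration of how such screening metrics are derived, the sketch below trains a generic classifier on synthetic, imbalanced data and reports AUC, sensitivity, and specificity with scikit-learn; the data, model, and decision threshold are placeholders, not the linked-record system described above.

```python
# Minimal sketch (not the cited study's code): computing AUC, sensitivity,
# and specificity for a binary cancer-risk classifier. All data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)  # imbalanced, like screening data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]          # malignancy probability
preds = (probs >= 0.5).astype(int)               # illustrative threshold

tn, fp, fn, tp = confusion_matrix(y_te, preds).ravel()
print("AUC        :", round(roc_auc_score(y_te, probs), 3))
print("Sensitivity:", round(tp / (tp + fn), 3))  # recall on cancers
print("Specificity:", round(tn / (tn + fp), 3))  # recall on non-cancers
```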

With the use of data collection and analytics tools such as high-throughput genotyping and electronic health records, scientists can use real-world clinical and biomarker data to create novel phenotypes through precision medicine. When paired with knowledge from electronic health records, these traits can enhance the effectiveness of personalized care by validating medicines or improving disease diagnosis [12]. In one study, data from 257 infertile couples who underwent 426 IVF/ICSI cycles between 2010 and 2017 were analyzed. Using t-tests or χ2 tests, the researchers associated 118 factors for each cycle with live-birth outcomes. An ANN was then built using the parameters that reached statistical significance. Ten random splits were used for cross-validation. Cumulative sensitivity and specificity for the ANN were 76.7% and 73.4%, respectively (Figure 2) [8].
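
The sketch below mirrors that validation style in spirit only: a small multilayer-perceptron stand-in for the ANN, ten stratified splits, and pooled sensitivity and specificity. The feature matrix is synthetic and the architecture is an assumption, not taken from the cited study.

```python
# Hedged sketch: ANN-style classifier on tabular features with 10-fold
# cross-validation, reporting pooled sensitivity and specificity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=426, n_features=20, random_state=1)

pipe = make_pipeline(StandardScaler(),
                     MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                   random_state=1))

tn = fp = fn = tp = 0
for tr_idx, te_idx in StratifiedKFold(n_splits=10, shuffle=True,
                                      random_state=1).split(X, y):
    pipe.fit(X[tr_idx], y[tr_idx])
    cm = confusion_matrix(y[te_idx], pipe.predict(X[te_idx]))
    tn += cm[0, 0]; fp += cm[0, 1]; fn += cm[1, 0]; tp += cm[1, 1]

print("cumulative sensitivity:", round(tp / (tp + fn), 3))
print("cumulative specificity:", round(tn / (tn + fp), 3))
```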

Radiogenomics is an emerging field within precision medicine that aims to correlate cancer imaging characteristics with gene expression, helping predict a patient's susceptibility to radiation therapy-induced toxicity. Recent studies have used convolutional neural network frameworks to predict genetic markers in gliomas based on magnetic resonance imaging data. AI has also been used to uncover radiogenomic connections in various cancers. A recent study focused on classifying tumors into the Luminal A subtype using magnetic resonance image patches. Three DL strategies were employed: learning from scratch, transfer learning, and using off-the-shelf deep features. Network architectures such as GoogleNet, VGG, and CIFAR were used, with test AUCs ranging from 0.46 to 0.63 [13,14]. Challenges in this domain include the demanding nature of training convolutional neural networks from scratch and insufficient data availability, particularly in medical imaging (Figure 3) [15-18].
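
For readers unfamiliar with the transfer-learning strategy named above, the following sketch shows the usual recipe with PyTorch and torchvision: freeze an ImageNet-pretrained ResNet-18 backbone and retrain only a new two-class head on image patches. The backbone choice, the patch loader, and the class labels are assumptions for illustration; the cited work used GoogleNet, VGG, and CIFAR architectures.

```python
# Illustrative transfer-learning sketch (one of the three strategies named):
# reuse a pretrained backbone, train only a new classification head.
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():       # freeze pretrained feature extractor
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # e.g. Luminal A vs. other
backbone = backbone.to(device)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# `train_loader` is a hypothetical DataLoader yielding (patch, label) batches
# from an MRI patch dataset; it is not defined in the cited work or here.
def train_one_epoch(train_loader):
    backbone.train()
    for patches, labels in train_loader:
        patches, labels = patches.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(backbone(patches), labels)
        loss.backward()
        optimizer.step()
```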

AI-assisted monitoring is used to assess intrapartum stress during labor, aiding in the decision between cesarean section and normal vaginal delivery. A system was developed to reduce manual errors in interpreting cardiotocography data by modeling fetal heart rate and uterine contraction signals. In high-risk deliveries, a combination of cardiotocography and ST waveform analysis was used to facilitate timely interventions, reducing fetal morbidity and mortality. Comparative studies using decision tree, ANN, and discriminant analysis classifiers showed that ANN classifiers achieved an overall accuracy of 97.78%, followed by decision tree and discriminant analysis with accuracies of 86.36% and 82.1%, respectively, while support vector machine (SVM) and genetic algorithm classifiers achieved 99.3% and 100% accuracy, respectively, on skewed datasets. However, relying solely on accuracy may be inadequate for imbalanced or skewed datasets (Figure 4) [19].
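
The caveat about skewed datasets can be made concrete in a few lines: a degenerate classifier that always predicts the majority class still posts high raw accuracy, which is why balanced accuracy or per-class sensitivity is a safer summary. The numbers below are invented for illustration.

```python
# Small illustration of the accuracy caveat on a skewed dataset.
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, recall_score

y_true = np.array([0] * 950 + [1] * 50)   # 95% "normal", 5% "pathological" traces
y_pred = np.zeros_like(y_true)            # degenerate model: always predicts "normal"

print("accuracy           :", accuracy_score(y_true, y_pred))           # 0.95
print("balanced accuracy  :", balanced_accuracy_score(y_true, y_pred))  # 0.50
print("sensitivity (class 1):", recall_score(y_true, y_pred, pos_label=1))  # 0.0
```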

A study employed a naive Bayes (NB) classifier with four feature selection techniques (mutual information, correlation-based selection, ReliefF, and information gain) to classify fetal state. ReliefF is a feature selection method in ML that evaluates the relevance of features by comparing their values with those of nearest neighbors to distinguish between instances of different classes. The combination of the NB classifier with features selected by ReliefF yielded the best results, achieving 93.97% accuracy, 91.58% sensitivity, and 95.79% specificity. However, further evaluation using independent data and clinical trials is needed to fully assess the study's value. Future work could involve using open datasets with similar study designs (Figure 4) [20].
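
A rough sketch of such a feature-selection-plus-NB pipeline is shown below. Because ReliefF is not part of scikit-learn, this illustration uses mutual information (another of the four techniques named) with SelectKBest and a Gaussian NB on synthetic cardiotocography-like features; it is not the study's code.

```python
# Sketch: feature selection followed by a naive Bayes classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a cardiotocography feature table.
X, y = make_classification(n_samples=2000, n_features=21, n_informative=8,
                           random_state=0)

pipe = make_pipeline(SelectKBest(mutual_info_classif, k=10), GaussianNB())
scores = cross_val_score(pipe, X, y, cv=10, scoring="accuracy")
print("mean CV accuracy:", scores.mean().round(3))
```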

Early disease detection

AI-powered tools enable early detection of various health conditions, including breast cancer, cervical cancer, and cardiovascular diseases, among others. Through advanced imaging techniques, predictive analytics, and pattern recognition algorithms, AI facilitates the timely identification of potential health risks, allowing for prompt intervention and treatment initiation, thereby reducing morbidity and mortality rates [21].

The integration of AI into maternal healthcare is gaining traction among pregnant women and offers promising prospects for enhancing maternal health outcomes. AI's accuracy in assessing gestational periods can help reduce neonatal fatalities and diseases by leveraging predictive values such as amniotic fluid and maternal blood metabolic profiles, electrohysterogram images, ultrasound imaging of the cervix, and biological profiles. A survey among pregnant women indicated a 69% acceptance rate towards AI in maternal healthcare, with AI emerging as a significant determinant of acceptability. Advanced algorithms, including neural networks, SVM, Random Forest, AdaBoost, and multi-layer perceptrons, predicted gestational diabetes with sensitivity exceeding 70%, most notably with a Random Forest model using predictors such as fasting plasma glucose, highlighting its efficacy even in resource-constrained settings [22].
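
A hedged sketch of that kind of gestational-diabetes model follows: a Random Forest over a few invented routine predictors, with fasting plasma glucose among them, reporting sensitivity and feature importances. Column names and data are illustrative assumptions, not the cited study's variables.

```python
# Sketch: Random Forest for gestational diabetes prediction on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "fasting_plasma_glucose": rng.normal(5.0, 0.8, n),   # mmol/L, invented
    "age": rng.integers(20, 45, n),
    "bmi": rng.normal(26, 4, n),
})
# Synthetic label loosely tied to glucose so the example has signal.
df["gdm"] = (df["fasting_plasma_glucose"] + rng.normal(0, 0.5, n) > 5.6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="gdm"), df["gdm"],
                                          stratify=df["gdm"], random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("sensitivity:", round(recall_score(y_te, rf.predict(X_te)), 3))
print(dict(zip(X_tr.columns, rf.feature_importances_.round(3))))
```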

Breast cancer detection

Prominent success has been achieved in the early detection of cervical cancer and the screening for hepatitis through the application of Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) architectures within the AI framework tailored for women's healthcare. In this continuum, AI algorithms, leveraging DL principles and advanced pattern recognition, exhibit substantial promise in augmenting the precision and expeditiousness of early detection and diagnosis of breast cancer. The proficiency of these algorithms, particularly in the analysis of medical imaging such as mammograms, often rivals or surpasses the discernment capabilities of human experts [23].

Interpreting radiological and pathological tests presents challenges due to inherent differences and variations. Inter-reader agreement varies between 75% and 88%, especially for specific diagnoses. Experience level affects diagnostic accuracy, with more experienced radiologists demonstrating higher sensitivity in identifying breast abnormalities. Training new radiologists is becoming increasingly difficult due to rising workloads and a shortage of experienced mentors. This variability affects image interpretation and disease management and can delay diagnosis and treatment decisions [24, 25].

In breast pathology, AI algorithms are utilized for cancer detection, classification, histologic grading, lymph node metastasis detection, biomarker quantification, and predicting genetic abnormalities such as BRCA mutation. AI-CAD (computer-aided detection) development involves collecting a dataset representative of the target population and imaging device, with human readers labeling lesions in mammograms. The AI-CAD system autonomously learns features during training and undergoes internal validation to prevent overfitting, achieving high cancer detection rates and specificity and outperforming traditional CAD. This technology improves accuracy and efficiency, reduces diagnostic variability, potentially alleviates radiologist burden, and enables timely diagnoses. AI can be integrated into 2D breast screening workflows, either as a standalone system or concurrently with AI-CAD [26, 27].

In the fight against breast cancer, iBreastExam offers a ray of hope, especially for women in regions lacking access to traditional mammograms. This innovative, handheld device utilizes painless electronic palpation to detect potential abnormalities in breast tissue, making it a non-invasive and accessible alternative. iBreastExam shines thanks to its affordability, eliminating the cost barrier faced by many. It also boasts the potential for earlier detection compared to mammograms, potentially saving lives. However, it's important to remember that it's not a replacement for traditional diagnostic methods and should be used in conjunction with other techniques for comprehensive breast cancer screening [28].

Numerous studies have concentrated on creating AI algorithms for breast cancer detection and classification. A study introduced a CNN model capable of classifying patches containing invasive ductal carcinoma from whole slide images (WSI) of breast cancer and assessing the degree of infiltration and extent of invasive foci using the ConvNet classifier. Similarly, another research presented a DL model achieving an average accuracy of 93.2% across eight classes (four benign and four malignant) in a test dataset (Figure 5) [29].
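
The sketch below shows the general shape of such a patch-level classifier: a small convolutional network in PyTorch that maps RGB tiles cut from a WSI to two classes. The architecture and the 50 x 50 pixel patch size are assumptions for illustration and do not reproduce the cited ConvNet.

```python
# Minimal sketch of a patch-level CNN classifier for WSI tiles (illustrative only).
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, 3, H, W) RGB patches
        return self.classifier(self.features(x).flatten(1))

model = PatchCNN()
dummy_patches = torch.randn(8, 3, 50, 50)   # stand-in tiles cut from a WSI
print(model(dummy_patches).shape)           # -> torch.Size([8, 2])
```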

The breast cancer histology (BACH) challenge aimed to automate breast tissue histology classification from hematoxylin and eosin (H&E)-stained microscopic images and WSIs. The best-performing model achieved pathologist-level accuracy, with AI assistance increasing average accuracy from 80% to 88% and improving interobserver concordance from 83% to 90% (Figure 6) [30].

The Galen breast AI model uses breast biopsy samples to detect and classify cancer cells into different subtypes. Its AUC values on a large-scale dataset were reported as 99% and 98% for invasive carcinomas (Figure 7) [31].

A study demonstrated that an ML classifier using nuclear shape, texture, and architecture can predict Oncotype DX (ODX) assay recurrence risk categories for early-stage ER-positive breast cancer patients with an accuracy of 75%-86%. Another study on H&E tissue microarray cohorts showed that a model using nuclear shape and orientation stratified short- and long-term survival outcomes (Figure 7) [32, 33].

A DL algorithm with a Faster R-CNN architecture and ResNet-101 backbone showed high accuracy in mitosis counting tasks and reduced pathologists' reading time by 27.8%. Another study introduced a fully automated system for region of interest (ROI) identification, mitosis counting from WSIs, and tumor proliferation prediction, outperforming previous methods on the Tumor Proliferation Assessment Challenge 2016 (TUPAC16) dataset (Figure 7) [34, 35].

One study developed a model using imaging biomarkers in mammograms (MMGs) that accurately predicts parenchymal patterns and cancer occurrence in high-risk individuals. This model can predict short- and long-term risk using MMGs from a single time point. Mirai, an AI model, outperformed previous DL models in identifying five-year breast cancer risk and high-risk patients across diverse populations [36, 37].

The main concern is the generalizability, or robustness, of AI models, which refers to their consistent performance across different datasets. Strategies to improve model robustness include training on data spanning diverse preanalytical and analytic factors, but acquiring large-scale datasets with manual annotations remains a challenge in AI algorithm development [38].

Ovarian cancer detection

Annually, over 2,000 women undergo exploratory surgery for suspicious masses, resulting in almost 300,000 new ovarian cancer diagnoses. Despite ultrasound models showing promise, accurately diagnosing ovarian cancer preoperatively remains challenging. AI, particularly DL, shows the potential to address these diagnostic challenges through advancements in image recognition tasks [39-42]. AI applications in women’s healthcare are emerging but essential for diagnosis, detection, and prognosis. They enhance treatment and wellness services for women of all ages.

A novel image diagnosis system is proposed for classifying ovarian cysts in color ultrasound images, utilizing a fusion of image features derived from both deep learning networks and texture descriptors. Initially, ultrasound images are enhanced to improve training dataset quality, and rotation-invariant uniform local binary pattern features are extracted as low-level texture features. Subsequently, high-level features extracted from a fine-tuned GoogleNet neural network and the low-level uniform local binary pattern features are normalized and concatenated to form fusion features, capturing both semantic context and texture patterns in the image. These fusion features are then input to a cost-sensitive Random Forest classifier for classification into "malignant" and "benign" categories. The deep neural network's high-level features represent visual features of the lesion region, while the low-level texture features describe edges, direction, and intensity distribution (Figure 8) [43].
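
To make the fusion idea concrete, the sketch below computes rotation-invariant uniform LBP histograms with scikit-image, concatenates them with placeholder high-level features (standing in for fine-tuned GoogleNet activations), and fits a cost-sensitive Random Forest via class_weight="balanced". Everything numeric here is synthetic.

```python
# Sketch of low-level/high-level feature fusion feeding a cost-sensitive forest.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(gray_img, P=8, R=1):
    """Rotation-invariant uniform LBP histogram (P + 2 bins)."""
    codes = local_binary_pattern(gray_img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

rng = np.random.default_rng(0)
n_images = 200
low_level = np.stack([
    lbp_histogram((rng.random((64, 64)) * 255).astype(np.uint8))  # stand-in ROIs
    for _ in range(n_images)
])
deep_features = rng.random((n_images, 128))       # placeholder CNN activations
X = np.hstack([low_level, deep_features])         # fused feature vectors
y = rng.integers(0, 2, n_images)                  # 0 = benign, 1 = malignant

clf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```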

A study involving 202 patients with ovarian tumors, including 53 with cancer, 23 with borderline malignant tumors, and 126 with benign tumors, used five ML classifiers to predict disease severity from 16 features drawn from blood tests, patient background, and imaging tests. The XGBoost algorithm had the highest accuracy, at 80%. The study found that AI could potentially aid in predicting the pathological diagnosis of ovarian cancer from preoperative examinations, with the ranking of predictive features differing depending on whether the correlation coefficient, regression coefficient, or Random Forest feature importance was used (Figure 8) [44].
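
The following sketch reproduces only the shape of that tabular workflow, not its data: an XGBoost classifier over a handful of invented blood-test, background, and imaging features, reporting accuracy and feature importances.

```python
# Hedged sketch of a gradient-boosted tabular model; feature names are invented.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 202
df = pd.DataFrame({
    "CA125": rng.lognormal(3.5, 1.0, n),          # tumor marker, synthetic
    "CEA": rng.lognormal(0.5, 0.6, n),
    "age": rng.integers(25, 80, n),
    "tumor_diameter_mm": rng.normal(70, 25, n),
})
df["malignant"] = (np.log(df["CA125"]) + rng.normal(0, 1, n) > 4.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="malignant"),
                                          df["malignant"], random_state=42)
model = XGBClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
print(dict(zip(X_tr.columns, model.feature_importances_.round(3))))
```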

Another study utilized various models, including CNNs, SVMs, and random forests. CNN architectures included GoogleNet, VGG16, VGG19, InceptionV3, ResNet18, ResNet34, ResNet50, and Mask R-CNN. Novel CNNs used multiple standardized blocks of convolutional, normalization, activation, and pooling layers, with attention modules included in some. One study generated a novel architecture using topology optimization [45-47].

Most models used were deterministic, though hidden Markov trees, probabilistic boosting trees, and Gaussian mixture models were also employed. Images were typically analyzed at a single resolution. Multiphoton microscopy and wide-field fluorescence imaging have been proposed for ovarian imaging to capture specific biomarkers and provide high-resolution images. A linear discriminant analysis classification algorithm achieved accuracies of 66.66%, 87.50%, and 62.5% for genotype, age, and treatment, respectively (Figure 9) [48, 49].

Most AI research in ovarian cancer histopathology is at high risk of bias due to limited data and weak validation methods. Researchers often use a single train-test data split without accounting for overfitting and model optimism. This is common in gynecological AI research using other data types. Recent reviews highlight poor clinical utility due to predominantly retrospective studies using limited data and weak validation, which risk model performance being overestimated [50].

Anaemia detection

A study adopted a three-phase methodology: dataset gathering involving palm images, preprocessing including image extraction and augmentation, and segmentation of the ROI in images to obtain components of the CIE L*a*b* color space. Subsequently, models for anemia detection were developed using various algorithms, including CNN, KNN, NB, SVM, and Decision Tree. The experiment initially employed 527 samples, augmented to 2,635 through rotation, flipping, and translation, then randomly divided into training (70%), validation (10%), and testing (20%) sets. Results demonstrate the effectiveness of palm-based anemia detection, with NB achieving 99.96% accuracy, CNN reaching 99.92%, and SVM showing the lowest accuracy at 96.34% (Figure 10) [51-53].
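
A minimal sketch of the color-space step is given below: each (synthetic) palm ROI is converted to CIE L*a*b* with scikit-image, summarized per channel, and passed to simple NB and SVM classifiers. A real pipeline would first segment the palm ROI and would also include the CNN, KNN, and Decision Tree models named above.

```python
# Sketch: L*a*b* channel summaries from ROIs feeding simple classifiers.
import numpy as np
from skimage.color import rgb2lab
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def lab_summary(rgb_roi):
    """Mean and std of the L*, a*, b* channels of an RGB ROI."""
    lab = rgb2lab(rgb_roi)
    return np.concatenate([lab.mean(axis=(0, 1)), lab.std(axis=(0, 1))])

n = 300
rois = rng.random((n, 32, 32, 3))                 # stand-in palm ROIs
X = np.stack([lab_summary(roi) for roi in rois])
y = rng.integers(0, 2, n)                         # 0 = non-anemic, 1 = anemic

for name, clf in [("NB", GaussianNB()), ("SVM", SVC())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```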

AI is a fast-developing field, and one of its best-known subfields is DL, which narrows the scope of AI for focused research and development. A study distinguished between iron deficiency anemia and thalassemia trait by classifying data from the Medical Laboratory Service Centre, Mahidol University, using a variety of ML techniques. In this task, the Decision Tree method, using seven features provided by the hematology analyzer, achieved a high accuracy of 98.03% [54].

A study involving 342 patients, including 152 with beta-thalassemia-type anemia and 190 with iron deficiency anemia, used extreme learning machines and regularized extreme learning machines to analyze anemia cases. The study considered various variables, including gender, and found that the regularized extreme learning machine approach had a 95.59% accuracy rate. Other AI methods used in the study included SVM, NB, Decision Tree (DT), KNN, Multi-layer Perceptron, hybrid classifier ML, Average Ensemble, Genetic Algorithm-CNN, Genetic Algorithm Stacked-Encoder, and Random Forest [55].

A study aimed to detect anemia using images of the lip mucosa, a thin skin tissue, non-invasively and in the home environment using ML. Data was collected from 138 patients, including 100 women and 38 men. Six ML algorithms were used: ANN, DT, KNN, Logistic Regression, NB, and SVM. The ML algorithm was used to analyze and classify images of the lip mucosa quickly and accurately, potentially increasing the efficiency of anemia screening programs. The results showed that NB had the highest accuracy (96%), followed by DT, KNN, and ANN at 93% (Figure 11) [56, 57].

AI faces challenges in imaging and diagnostic protocols due to a lack of standardization, making it difficult for algorithms to accurately analyze medical images and identify potential abnormalities. Additionally, there is a lack of high-quality training data, and institutional restrictions on data sharing may limit access to image data, potentially making results less accurate or generalizable [58].

Mental health support

AI technologies play a crucial role in addressing mental health issues among women by offering personalized interventions, remote monitoring solutions, and mental health assessment tools. Chatbots, virtual assistants, and AI-based therapy platforms provide accessible and stigma-free support for women experiencing mental health challenges, promoting early intervention, and improving overall well-being [59].

Emotions, as intricate interplays of feelings and thoughts, serve as crucial semantic components in identifying various emotional states.

Contemporary digital wearable devices, including voice assistant devices, smartwatches, and smartphones, are classified as edge devices and are utilized by women, primarily focusing on monitoring emotions linked with physical fitness. A study introduced the Contextual Emotional Classifier (CEC) model that employs AI learning on edge devices to analyze emotional data gathered from diverse sources. The CEC model utilizes contextual computation to correlate emotional data from different devices, providing real-time alerts for dynamic mood swings, upcoming unplanned activities, and task management. Performance evaluation of the CEC model involves training emotional sentences collected by edge devices, employing a hybrid model consisting of a Multinomial Naive Bayes Classifier, Logistic Regression, Random Forest Classifier, and SVM. Classification metrics such as precision and recall are used to measure the model's performance, represented through confusion matrices depicting true and false positives and negatives [60].
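
A hedged sketch of such a hybrid text classifier appears below: TF-IDF features feeding a hard-voting ensemble of Multinomial NB, logistic regression, Random Forest, and a linear SVM, evaluated with a confusion matrix and precision/recall. The tiny corpus is invented and the pipeline is only an approximation of the CEC model, not its published implementation.

```python
# Sketch: voting ensemble over TF-IDF features for emotional sentence classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report, confusion_matrix

texts = ["I feel calm and rested today", "Everything is overwhelming right now",
         "Had a lovely walk this morning", "I can't stop worrying about work",
         "Feeling hopeful about the week", "I am exhausted and anxious"]
labels = ["positive", "negative", "positive", "negative", "positive", "negative"]

ensemble = VotingClassifier([
    ("nb", MultinomialNB()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("svm", LinearSVC()),
], voting="hard")

pipe = make_pipeline(TfidfVectorizer(), ensemble).fit(texts, labels)
preds = pipe.predict(texts)
print(confusion_matrix(labels, preds))
print(classification_report(labels, preds))
```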

A study published in Frontiers in Global Women's Health found that mothers who were highly engaged with the AI mental health app Wysa experienced a 12.7% reduction in depressive symptoms. Many transitioned from moderately severe depression to moderate depression. The app allowed mothers to express their emotions, stressors, and needs for support, reframe their thoughts, and cope with their challenges [61].

Ema is an empathetic AI companion created by women for those who identify as women or have uteruses. It supports women on a social, psychological, and physical level to ease the transition into womanhood [62].

Improved access to information

AI-driven information systems and digital health platforms empower women with access to accurate, reliable, and culturally sensitive health information. Natural language processing algorithms enable intelligent search functionalities, personalized recommendations, and interactive educational resources, facilitating informed decision-making and proactive health management [63].

AI exhibits the potential to identify critical transient events pivotal for embryo implantation, thereby enhancing the likelihood of successful pregnancies. The amalgamation of robotics, virtual reality, and augmented reality, intertwined with high-powered AI algorithms, has manifested notable progress in refining surgical precision, postoperative care, reproductive technologies, and medical imaging, including simulations for laparoscopic and robotic surgeries [9]. Women may now access trustworthy and culturally appropriate information on a variety of health subjects, from menstrual health and fertility management to prenatal care and menopause.

In bustling metropolitan environments characterized by a fast-paced lifestyle, women engrossed in their daily routines and professional commitments often neglect proactive health management and routine health screenings. AI-powered chatbots present a convenient, time-efficient, and informative solution, delivering advice on sexual health, menstrual cycles, family planning, and specific conditions such as PCOS, PCOD, and thyroid disorders. These AI chatbots operate on generative AI tools, employing a rule-based framework of learning, language models, and Natural Language Processing. An exemplary instance is the chatbot named "Just Ask" [64].

The United Nations Population Fund (UNFPA) has introduced a chatbot in India, termed 'Just Ask,' representing an AI digital engagement platform designed for adolescents and young adults. Its primary objective is to facilitate learning about sexual and reproductive health and rights. Developed by UNFPA sexual and reproductive health experts, the chatbot has been launched in collaboration with the National Health Mission and the Department of Public Health and Family Welfare, Government of Madhya Pradesh, India. Functioning as a secure, personalized, and non-judgmental space, the chatbot caters to adolescents and young adults in India aged 15 - 29. It serves as a platform for individuals to explore information, gain awareness of their rights, dispel myths, and access services about sexual and reproductive health and rights [65].

AI-powered platforms like Clue (https://helloclue.com/) provide easily accessible, customized information that removes socioeconomic and geographic obstacles to healthcare information and enables women to make decisions about their bodies and well-being [66]. ANNs have emerged as robust algorithms capable of modeling complex nonlinear relationships between input and output variables, and their application in healthcare decision-making has been extensively documented [67].

Increasing awareness and preparedness during female gestational periods, as evaluated by the accuracy of AI, can aid in reducing neonatal fatalities and diseases. Common predictive values utilized in algorithms include metabolic profiles of amniotic fluid and maternal blood, electrohysterogram images, ultrasound imaging of the cervix, and biological profiles [68].

The Evie Ring, a new entrant in the women's health tracking arena, is a Food and Drug Administration-approved medical-grade sensor that aims to empower users with personalized insights by monitoring sleep, heart rate, blood oxygen, skin temperature, and menstrual cycles. Available in three finishes, this sleek and discreet ring connects to an app, offering a holistic view of the user's health by interpreting the interplay between various metrics [69].

New moms often face anxiety about their milk supply, unsure whether their babies are getting enough to thrive. Coroflo, a game-changing MedTech company, tackles this concern with the world's first breastfeeding monitor. This innovative device attaches discreetly to the breast as a nipple shield, housing a sensor that measures milk flow in real time. The data transmits seamlessly to a smartphone app, making information such as the volume consumed, feeding duration, and estimated calorie intake easily accessible to mothers. Additionally, the data helps track the baby's growth and identify potential feeding issues, offering valuable information for data-driven decisions [70].

Conclusion

While AI holds immense potential for improving women's health, several ethical considerations require careful attention. Data privacy and security are paramount, ensuring women's sensitive health information remains protected. Algorithmic bias must be actively addressed to prevent discriminatory outcomes based on factors like race, ethnicity, or socioeconomic status. Accessibility and inclusivity are crucial, ensuring AI tools are developed and deployed in a way that truly caters to the diverse needs of all women. Finally, transparency and education are essential to build trust and empower women to understand and harness the full potential of AI for their health and well-being. The integration of AI into women's health is still in its early stages, but the potential for positive change is vast. As research and development accelerate, we can expect to see even more innovative tools and applications emerge. Collaborative efforts between researchers, developers, healthcare providers, and women themselves will be crucial for ensuring responsible and equitable development of AI solutions that truly empower women to take control of their health and well-being.

Acknowledgments

The author expresses special gratitude to Dr. Praveen Thakur for his valuable guidance and support in completing the manuscript.

Conflict Of Interest

The author declares no competing interests in this work.

References

  1. Korytnikova, Elena (2023) Artificial Intelligence And Women's Health: Innovations, Challenges, And Ethical Considerations. Adv Clin Med Res 4: 1-6. https://doi.org/10.52793/ACMR.2023.4(3)-59
  2. https://www.womenofwearables.com/blogwrite/femtech-expo-a-global-virtual-program-for-femtech-and-health-tech-innovators
  3. Empowering women in health technology (2022) Lancet Digit Health 4(3): e149. https://doi.org/10.1016/S2589-7500(22)00028-0
  4. Sharif N, Das B, Alam A (2023) Prevalence of anemia among reproductive women in different social group in India: Cross-sectional study using nationally representative data. PLoS One 18(2): e0281015. https://doi.org/10.1371/journal.pone.0281015
  5. Johnson KB, Wei WQ, Weeraratne D, Frisse ME, Misulis K, et al. (2021) Precision medicine, AI, and the future of personalized health care. Clin Transl Sci 14(1): 86-93. https://doi.org/10.1111/cts.12884
  6. Togunwa TO, Babatunde AO, Abdullah KU (2023) Deep hybrid model for maternal health risk classification in pregnancy: Synergy of ANN and random forest. Frontiers in Artificial Intelligence. 6:1213436. https://doi.org/10.3389/frai.2023.1213436
  7. Yoldemir T (2020) Artificial intelligence and women’s health. Climacteric 23(1): 1-2. https://doi.org/10.1080/13697137.2019.1682804
  8. Vogiatzi P, Pouliakis A, Siristatidis C (2019) An artificial neural network for the prediction of assisted reproduction outcome. J Assist Reprod Genet 36(7): 1441-1448. https://doi.org/10.1007/s10815-019-01498-7
  9. Afaq M, Abraham DE, Patel SH, Al-Dhoon AD, Arshad Z (2023) Empowering Women's Health: A Global Perspective on Artificial Intelligence and Robotics. Cureus 15(11): e49611. https://doi.org/10.7759/cureus.49611
  10. https://www.wionews.com/entertainment/lifestyle/news-ai-and-machine-learning-in-womens-health-management-pioneering-the-future-636254
  11. Akselrod-Ballin A, Chorev M, Shoshan Y, Spiro A, Hazan A, et al. (2019) Predicting breast cancer by applying deep learning to linked health records and mammograms. Radiology 292(2): 331-342. https://doi.org/10.1148/radiol.2019182622
  12. Rajkomar A, Oren E, Chen K, Dai AM, Hajaj N, et al. (2018) Scalable and accurate deep learning with electronic health records. NPJ Digit Med 1(1): 18. https://doi.org/10.1038/s41746-018-0029-1
  13. Szegedy C, Liu W, Jia Y (2015) Going deeper with convolutions. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit 1-9. https://doi.org/10.1109/CVPR.2015.7298594
  14. Guerra E, de Lara J, Malizia A, Díaz P (2009) Supporting user-oriented analysis for multi-view domain-specific visual languages. Inf Softw Technol 51(4): 769-784. https://doi.org/10.1016/j.infsof.2008.09.005
  15. Zhu Z, Albadawy E, Saha A, Zhang J, Harowicz MR, et al. (2019) Deep learning for identifying radiogenomic associations in breast cancer. Comput Biol Med 109: 85-90. https://doi.org/10.1016/j.compbiomed.2019.04.018
  16. Bibault JE, Giraud P, Housset M, Durdux C, Taieb J, et al. (2018) Deep learning and radiomics predict complete response after neo-adjuvant chemoradiation for locally advanced rectal cancer. Sci Rep 8(1): 12611. https://doi.org/10.1038/s41598-018-30657-6
  17. Trivizakis E, Papadakis GZ, Souglakos I, Papanikolaou N, Koumakis L, et al. (2020) Artificial intelligence radiogenomics for advancing precision and effectiveness in oncologic care. Int J oncol 7(1): 43-53. https://doi.org/10.3892/ijo.2020.5063
  18. Mazurowski MA, Zhang J, Grimm LJ, Yoon SC, Silber JI (2014) Radiogenomic analysis of breast cancer: luminal B molecular subtype is associated with enhancement dynamics at MR imaging. Radiology 273(2): 365-372. https://doi.org/10.1148/radiol.14132641
  19. Omo-Aghoja L (2014) Maternal and fetal acid-base chemistry: A major determinant of perinatal outcome. Ann Med Health Sci Res 4(1): 8-17. https://doi.org/10.4103/2141-9248.126602
  20. Pinto P, Bernardes J, Costa-Santos C, Amorim-Costa C, Silva M, et al. (2014) Development and evaluation of an algorithm for computer analysis of maternal heart rate during labor. Comput Biol Med 49: 30-35. https://doi.org/10.1016/j.compbiomed.2014.03.007
  21. https://www.echelon.health/the-role-of-ai-in-early-disease-detection/
  22. Korytnikova, Elena (2023) Artificial Intelligence And Women's Health: Innovations, Challenges, And Ethical Considerations. Jour Clin Med Res 4: 1-6. https://doi.org/10.52793/ACMR.2023.4(3)-59
  23. https://www.frontiersin.org/articles/10.3389/frai.2023.1213436
  24. Agner SC, Rosen MA, Englander S (2014) Computerized Image Analysis for Identifying Triple-Negative Breast Cancers and Differentiating Them from Other Molecular Subtypes of Breast Cancer on Dynamic Contrast-enhanced MR Images: A Feasibility Study. Radiology 272(1): 91-99. https://doi.org/10.1148/radiol.14121031
  25. Sung JS, Jochelson MS, Brennan S (2013) MR imaging features of triple-negative breast cancers. Breast J 19(6): 643-649. https://doi.org/10.1111/tbj.12182
  26. Lee SE, Yoon JH, Hong H, Son NH, Kim EK (2023) One-on-one comparison between conventional CAD and AI-CAD applied to screening mammography. J Korean Soc Breast Screen 20: 19-29
  27. Han Z, Wei B, Zheng Y, Yin Y, Li K, et al. (2017) Breast cancer multi-classification from histopathological images with structured deep learning model. Sci Rep 7(1): 4172.
  28. https://www.ibreastexam.com/
  29. Cruz-Roa A, Gilmore H, Basavanhally A, Feldman M, Ganesan S, et al. (2017) Accurate and reproducible invasive breast cancer detection in whole-slide images: A Deep Learning approach for quantifying tumor extent. Sci Rep 7(1): 46450. https://doi.org/10.1038/srep46450
  30. Polónia A, Campelos S, Ribeiro A, Aymore I, Pinto D, et al. (2021) Artificial intelligence improves the accuracy in histologic classification of breast lesions. Am J Clin Pathol 155(4): 527-536. https://doi.org/10.1093/ajcp/aqaa151
  31. Sandbank J, Bataillon G, Nudelman A, Krasnitsky I, Mikulinsky R, et al. (2022) Validation and real-world clinical application of an artificial intelligence algorithm for breast cancer detection in biopsies. NPJ Breast Cancer 8(1): 129. https://doi.org/10.1038/s41523-022-00496-w
  32. Romo-Bucheli D, Janowczyk A, Gilmore H, Romero E, Madabhushi A (2016) Automated tubule nuclei quantification and correlation with oncotype DX risk categories in ER+ breast cancer whole slide images. Sci Rep 6(1): 32706. https://doi.org/10.1038/srep32706
  33. Whitney J, Corredor G, Janowczyk A, Ganesan S, Doyle S, et al. (2018) Quantitative nuclear histomorphometry predicts oncotype DX risk categories for early stage ER+ breast cancer. BMC Cancer 18: 1-5. https://doi.org/10.1186/s12885-018-4448-9
  34. Nateghi R, Danyali H, Helfroush MS (2021) A deep learning approach for mitosis detection: Application in tumor proliferation prediction from whole slide images. Artif Intell Med 114: 102048. https://doi.org/10.1016/j.artmed.2021.102048
  35. Li C, Wang X, Liu W, Latecki LJ (2018) DeepMitosis: Mitosis detection via deep detection, verification and segmentation networks. Med Image Anal 45: 121-33. https://doi.org/10.1016/j.media.2017.12.002
  36. Lee H, Kim J, Park E, Kim M, Kim T, et al. (2023) Enhancing breast cancer risk prediction by incorporating prior images. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 389-398. https://doi.org/10.1007/978-3-031-43904-9_38
  37. Kim KH, Nam H, Lim E, Ock CY (2021) Development of AI-powered imaging biomarker for breast cancer risk assessment on the basis of mammography alone. J Clin Oncol 39(15): 10519. https://doi.org/10.1200/JCO.2021.39.15_suppl.10519
  38. Ahn JS, Shin S, Yang SA, Park EK, Kim KH, et al. (2023) Artificial Intelligence in Breast Cancer Diagnosis and Personalized Medicine. J Breast Cancer 26(5): 405-435. https://doi.org/10.1200/JCO.2021.39.15
  39. van Nagell Jr JR, Miller RW (2016) Evaluation and management of ultrasonographically detected ovarian tumors in asymptomatic women. Obstet Gynecol 127(5): 848-858. https://doi.org/10.1097/AOG.0000000000001384
  40. http://gco.iarc.fr/
  41. Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, et al. (2019) Artificial intelligence in cancer imaging: Clinical challenges and applications. Cancer J Clin 69(2): 127-157. https://doi.org/10.3322/caac.21552
  42. Kermany DS, Goldbaum M, Cai W, Valentim CC, Liang H, et al. (2018) Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172(5): 1122-1131. https://doi.org/10.1016/j.cell.2018.02.010
  43. Zhang L, Huang J, Liu L. (2019) Improved Deep Learning Network Based in Combination with Cost-sensitive Learning for Early Detection of Ovarian Cancer in Color Ultrasound Detecting System. J Med Syst 43(251). https://doi.org/10.1007/s10916-019-1356-8
  44. Akazawa M, Hashimoto K (2020) Artificial Intelligence in Ovarian Cancer Diagnosis. Anticancer Res 40(8): 4795-4800. https://doi.org/10.21873/anticanres.14482
  45. Breen J, Allen K, Zucker K, Adusumilli P, Scarsbrook A, et al. (2023) Artificial intelligence in ovarian cancer histopathology: A systematic review. NPJ Precis Oncol 7(1): 83. https://doi.org/10.1038/s41698-023-00432-6
  46. Du Y, Zhang R, Zargari A, Thai TC, Gunderson CC, et al. (2018) Classification of tumor epithelium and stroma by exploiting image features learned by deep convolutional neural networks. Ann Biomed Eng 46: 1988-1999. https://doi.org/10.1007/s10439-018-2095-6
  47. Yokomizo R, Lopes TJ, Takashima N, Hirose S, Kawabata A, et al. (2022) O3c glass-class: A machine-learning framework for prognostic prediction of ovarian clear-cell carcinoma. Bioinform Biol Insights. https://doi.org/10.1177/11779322221134312
  48. Elie N, Giffard F, Blanc-Fournier C, Morice PM, Brachet PE, et al. (2022) Impact of automated methods for quantitative evaluation of immunostaining: Towards digital pathology. Front Oncol 12: 931035. https://doi.org/10.3389/fonc.2022.931035
  49. Liu T, Su R, Sun C, Li X, Wei L (2022) EOCSA: Predicting prognosis of epithelial ovarian cancer with whole slide histopathological images. Expert Syst Appl 206: 117643. https://doi.org/10.1016/j.eswa.2022.117643
  50. Ziyambe B, Yahya A, Mushiri T, Tariq MU, Abbas Q, (2023) A deep learning framework for the prediction and diagnosis of ovarian cancer in pre-and post-menopausal women. Diagnostics 13(10): 1703. https://doi.org/10.3390/diagnostics13101703
  51. Laengsri V, Shoombuatong W, Adirojananon W, Nantasenamat C, Prachayasittikul V, et al. (2019) ThalPred: A web-based prediction tool for discriminating thalassemia trait and iron deficiency anemia. BMC Med Inform Decis Mak 19: 1-4. https://doi.org/10.1186/s12911-019-0929-2
  52. El-kenawy ES, SM E (2019) A machine learning model for hemoglobin estimation and anemia classification. Int J Comput Sci Inf Secur 17(2): 100-108.
  53. Appiahene P, Asare JW, Donkoh ET, Dimauro G, Maglietta R (2023) Detection of iron deficiency anemia by medical images: A comparative study of machine learning algorithms. BioData Min 16(1): 2. https://doi.org/10.1186/s13040-023-00319-z
  54. Çil B, Ayyıldız H, Tuncer T (2020) Discrimination of β-thalassemia and iron deficiency anemia through extreme learning machine and regularized extreme learning machine based decision support system. Med Hypotheses 138: 109611. https://doi.org/10.1016/j.mehy.2020.109611
  55. Khan JR, Chowdhury S, Islam H, Raheem E (2019) Machine learning algorithms to predict the childhood anemia in Bangladesh. J Data Sci 17(1): 195-218. https://doi.org/10.6339/JDS.201901_17(1).0009
  56. Mannino RG, Myers DR, Tyburski EA, Caruso C, Boudreaux J, et al. (2018) Smartphone app for non-invasive detection of anemia using only patient-sourced photos. Nature Commun 9(1): 4924. https://doi.org/10.1038/s41467-018-07262-2
  57. Mahmud S, Dönmez T, Mansour M, Kutlu M, Freeman C (2023) Anemia detection through non-invasive analysis of lip mucosa images. Front Big Data 6: 1335213. https://doi.org/10.3389/fdata.2023.1241899
  58. Saputra DC, Sunat K, Ratnaningsih T (2023) A new artificial intelligence approach using extreme learning machine as the potentially effective model to predict and analyze the diagnosis of anemia. Healthcare 11(5): 697. https://doi.org/10.3390/healthcare11050697
  59. Rincon JA, Julian V, Carrascosa C (2020) Towards the edge intelligence: Robot assistant for the detection and classification of human emotions. In: Highlights in Practical Applications of Agents, Multi-Agent Systems, and Trustworthiness. 18(2020): 31-41. https://doi.org/10.1007/978-3-030-51999-5_3/COVER
  60. Ganesan V, Ramasamy V, Manoj C, Tejaswi T (2023) Contextual Emotional Classifier: An Advanced AI-Powered Emotional Health Ecosystem for Women Utilizing Edge Devices. Traitement du Signal 40(6). https://doi.org/10.18280/ts.400613
  61. https://www.digitalhealth.net/2023/11/ai-chatbot-reduces-depression-in-prenatal-and-postnatal-women/
  62. https://www.emaapp.co/
  63. Wilmink G, Dupey K, Alkire S, Grote J, Zobel G, et al. (2020) Artificial intelligence–powered digital health platform and wearable devices improve outcomes for older adults in assisted living communities: Pilot intervention study. JMIR Aging 3(2): e19554. https://doi.org/10.2196/19554
  64. https://thecsrjournal.in/chatbot-just-ask-sexual-health-madhya-pradesh/
  65. https://www.unfpa.org/updates/india-unfpa-launches-just-ask-chatbot-sexual-and-reproductive-health-and-rights
  66. https://helloclue.com/
  67. Kilicarslan S, Celik M, Sahin ? (2021) Hybrid models based on genetic algorithm and deep learning algorithms for nutritional Anemia disease classification. Biomed Signal Process Control 63: 102231. https://doi.org/10.1016/j.bspc.2020.102231
  68. Togunwa TO, Babatunde AO, Abdullah KU (2023) Deep hybrid model for maternal health risk classification in pregnancy: Synergy of ANN and random forest. Front. Artif. Intell 6: 1213436. https://doi.org/10.3389/frai.2023.1213436
  69. https://eviering.com/
  70. https://www.corobaby.com/