Browsing by Author "Van Eden, Beatrice"
Now showing 1 - 12 of 12
Item: Anomaly detection monitoring system for healthcare (2021-01)
Boloka, Tlou J; Crafford, Gerhardus J; Mokuwe, Mamuku W; Van Eden, Beatrice
Most developing countries suffer from inadequate healthcare facilities and a lack of medical practitioners, as many practitioners emigrate to developed countries. The outbreak of the COVID-19 pandemic has left these countries more vulnerable to the worst outcomes of the pandemic. This creates the need for a system that continuously monitors patient status and detects how physiological variables will change over time. Such a system would reduce the mortality rate and lessen the need for medical practitioners to monitor patients continuously. In this work, we show how an autoencoder and extreme gradient boosting can be combined to forecast a patient's physiological variables and detect anomalies and their level of divergence. Accurate detection of current and future anomalies will enable medical practitioners to take remedial action at the right time and possibly save lives. (A code sketch of one possible pairing of these models appears after the place-recognition entry later in this listing.)

Item: CHAMP: A bespoke integrated system for mobile manipulation (RobMech, 2014-11)
Van Eden, Beatrice; Rosman, Benjamin S; Withey, Daniel J; Ratshidaho, T; Keaikitse, M; Masha, D; Kleinhans, A; Shaik, A
Mobile manipulation is a robotics paradigm with the potential to make major contributions to a number of important domains. Although some mobile manipulators are commercially available, bespoke systems can be assembled from existing, separate mobile, manipulation, and vision components. This has the benefit of reusing existing hardware, at a lower cost, to produce a customised platform. In this paper we introduce CHAMP, the CSIR Hybrid Autonomous Manipulation Platform, and describe the integration of a Barrett Whole Arm Manipulator, a PowerBot AGV, and the necessary sensors. The described integration covers both the hardware and the software.

Item: A comparison of visual place recognition methods using a mobile robot in an indoor environment (2023-11)
Van Eden, Beatrice; Botha, Natasha; Rosman, B
Spatial awareness is an important competence for a mobile robotic system. A robot needs to localise and interpret its context to provide any meaningful service. With current deep learning tools and readily available sensors, visual place recognition is a first step towards identifying the environment and bringing a robot closer to spatial awareness. In this paper, we implement place recognition on a mobile robot using a deep learning approach. For simple place classification, where the task involves classifying images into a limited number of categories, all three architectures (VGG16, Inception-v3 and ResNet50) perform well. Considering their respective pros and cons, the choice of architecture may depend on available computational resources and deployment constraints.
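The place-recognition comparison above evaluates VGG16, Inception-v3 and ResNet50 on a small set of indoor place categories. The abstract gives no implementation detail, so the following is a minimal transfer-learning sketch assuming a frozen ImageNet-pretrained ResNet50 backbone; the class count, directory path and training settings are all assumptions, not details from the paper. Swapping in VGG16 or InceptionV3 only changes the backbone and its preprocess_input call.

```python
# Hypothetical sketch: an ImageNet-pretrained ResNet50 backbone with a small
# classification head for a handful of indoor place categories. The directory
# layout, class count and training settings are invented.
from tensorflow import keras

NUM_PLACES = 4  # assumed number of place categories (e.g. office, kitchen, ...)

# Labelled images assumed to live in data/places/train/<class_name>/*.jpg
train_ds = keras.utils.image_dataset_from_directory(
    "data/places/train",
    image_size=(224, 224),
    batch_size=32,
)

base = keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3),
)
base.trainable = False  # keep the ImageNet features frozen; train only the head

inputs = keras.Input(shape=(224, 224, 3))
x = keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
outputs = keras.layers.Dense(NUM_PLACES, activation="softmax")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```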
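The anomaly-detection entry at the top of this listing merges an autoencoder with extreme gradient boosting to forecast physiological variables and flag anomalies. How the two models are merged is not specified in the abstract; the sketch below shows one plausible pairing, where the autoencoder's reconstruction error flags anomalous vital-sign windows and an XGBoost regressor forecasts the next reading. The window length, network sizes, threshold and synthetic data are all invented.

```python
# Hypothetical sketch: reconstruction-error anomaly flagging plus a one-step
# forecast of a physiological variable. Window length, layer sizes, threshold
# and the synthetic "vital sign" series are all invented for illustration.
import numpy as np
import xgboost as xgb
from tensorflow import keras

WINDOW = 16  # number of past readings per sample (assumed)

# Synthetic heart-rate-like series standing in for real patient data.
rng = np.random.default_rng(0)
series = 70 + 5 * np.sin(np.linspace(0, 50, 2000)) + rng.normal(0, 1, 2000)
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y_next = series[WINDOW:]                   # reading right after each window
Xn = (X - series.mean()) / series.std()    # normalised windows for the autoencoder

# Small dense autoencoder over fixed-length windows.
autoencoder = keras.Sequential([
    keras.Input(shape=(WINDOW,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(WINDOW),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(Xn, Xn, epochs=5, batch_size=64, verbose=0)

# Reconstruction error as the anomaly score; threshold taken from training data.
errors = np.mean((autoencoder.predict(Xn, verbose=0) - Xn) ** 2, axis=1)
threshold = np.percentile(errors, 99)

# Extreme gradient boosting forecasts the next reading from the same window.
forecaster = xgb.XGBRegressor(n_estimators=200, max_depth=4)
forecaster.fit(X, y_next)

latest = X[-1:]
print("anomalous:", bool(errors[-1] > threshold),
      "| next-value forecast:", float(forecaster.predict(latest)[0]))
```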
Item: Design of a 3D-printed test rig for micro aerial robotics platforms (2024-12)
Botha, Natasha; De Ronde, Willis; Van Eden, Beatrice
During the development of an aerial robotic platform, it is necessary to characterise the flight controller to ensure flight stability. Even though this can be done through open-source software such as Betaflight, flight testing can result in crashes. To prevent this, this paper presents the design and manufacture of a test rig for testing and characterising the flight controller reliably. A 4-degree-of-freedom (DOF) test rig was developed to allow for vertical motion and to characterise the yaw, pitch, and roll angles. It was constructed entirely using additive manufacturing (AM). An iterative design approach was used to improve the design after practical testing with a micro aerial robotic platform. This approach significantly enhanced the design of the ball joint and linear shaft, improving the performance of the micro aerial robotic platform when used with the test rig.

Item: Enhancing indoor place classification for mobile robots using RGB-D data and deep learning architectures (2024-12)
Van Eden, Beatrice; Botha, Natasha
Place classification is crucial for a robot's ability to make high-level decisions. When a robot can identify its operating environment, it can provide more appropriate services. This capability is similar to how humans use their understanding of their surroundings to make informed decisions about appropriate actions. Depth data offers valuable spatial information that can enhance place classification on a robot; however, mobile robot applications more commonly rely on RGB data rather than RGB-D data for classifying indoor places. This study demonstrates that incorporating depth information improves the classification of indoor places using a mobile robot. Data were collected from a mobile robot, and indoor scenes were classified based on RGB and RGB-D inputs. The performance of the VGG16, Inception-v3, and ResNet50 architectures was first compared using RGB data alone, and depth information was subsequently fused with these RGB models. Experiments on the mobile robot showed that classification accuracy improved when depth data were included. In the experiment, the robot created a map of the indoor environment and identified four different rooms on the map using the trained models, demonstrating the enhanced classification capability achieved by incorporating depth information. (An illustrative fusion sketch follows the SEM image entry below.)

Item: Image processing towards the automated identification of nanoparticles in SEM images (IEEE, 2019-01)
Botha, Gerda N; Wessels, Gert JC; Botha, Natasha; Van Eden, Beatrice
SEM images are crucial in the characterisation of material properties, yet they can be very hard to interpret without prior knowledge of the material. This paper discusses a pre-processing method for assisting convolutional neural networks in identifying the presence of nanoparticles in composite SEM images. The pre-processing method is developed using a synthetic SEM image.
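The SEM entry above develops its pre-processing on a synthetic SEM image, but the abstract does not describe the method itself. The snippet below only illustrates the general workflow under assumed details: generating a noisy synthetic particle image, then applying a simple denoise-and-threshold step before the image would be handed to a convolutional neural network. Particle counts, noise levels and filter settings are invented and are not the paper's method.

```python
# Hypothetical sketch: build a noisy synthetic "SEM-like" image with bright
# disc-shaped particles, then denoise and threshold it so candidate particles
# stand out for a downstream CNN. Sizes, counts and noise levels are invented.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
H = W = 256
image = np.zeros((H, W), dtype=float)

# Scatter a few bright circular "nanoparticles" onto a dark background.
yy, xx = np.mgrid[0:H, 0:W]
for _ in range(20):
    cy, cx = rng.integers(0, H), rng.integers(0, W)
    r = rng.integers(3, 8)
    image[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1.0

# Add background texture and noise, roughly mimicking SEM artefacts.
noisy = image + 0.2 * rng.normal(size=(H, W)) + 0.1 * np.sin(xx / 15.0)

# Simple pre-processing: Gaussian denoising followed by a global threshold.
smoothed = ndimage.gaussian_filter(noisy, sigma=1.5)
mask = smoothed > smoothed.mean() + 2 * smoothed.std()

labels, count = ndimage.label(mask)
print("candidate particle regions:", count)
```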
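The RGB-D place-classification entry above fuses depth information with the RGB backbones, but the fusion strategy is not spelled out in the abstract. The sketch below shows one common choice, late fusion of a frozen ResNet50 RGB feature vector with a small convolutional branch over the depth channel; all shapes, layer sizes and the class count are assumptions rather than the paper's design.

```python
# Hypothetical sketch: late fusion of RGB features (frozen ResNet50) with a
# small CNN branch over the depth channel. All sizes are illustrative.
from tensorflow import keras

NUM_PLACES = 4  # assumed number of indoor place categories

rgb_in = keras.Input(shape=(224, 224, 3), name="rgb")
depth_in = keras.Input(shape=(224, 224, 1), name="depth")

rgb_backbone = keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")
rgb_backbone.trainable = False  # reuse ImageNet features unchanged
rgb_feat = rgb_backbone(keras.applications.resnet50.preprocess_input(rgb_in))

# Lightweight branch for the depth map.
d = keras.layers.Conv2D(16, 5, strides=2, activation="relu")(depth_in)
d = keras.layers.Conv2D(32, 3, strides=2, activation="relu")(d)
d = keras.layers.GlobalAveragePooling2D()(d)

# Late fusion: concatenate RGB and depth features, then classify.
fused = keras.layers.Concatenate()([rgb_feat, d])
fused = keras.layers.Dense(128, activation="relu")(fused)
out = keras.layers.Dense(NUM_PLACES, activation="softmax")(fused)

model = keras.Model([rgb_in, depth_in], out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```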
Item: Material selection and optimisation of a 3D-printed indoor aerial robotics platform (2024-12)
De Ronde, Willis; Botha, Natasha; Van Eden, Beatrice; Tshabalala, Lerato C
Both aerial robotic platforms and additive manufacturing (AM) have become more affordable to consumers. Indoor aerial robotic platforms are typically small and lightweight, while AM is renowned for creating small, high-strength prototypes and components. This paper discusses the material selection and structural optimisation of a 3D-printed indoor aerial robotic platform. Three commonly used AM materials were compared using finite element analysis (FEA): acrylonitrile butadiene styrene (ABS), polyethylene terephthalate glycol (PETG), and Nylon. Nylon was found to offer the best strength-to-weight ratio. The aerial robotic frame was optimised using an iterative design approach and prior knowledge of the breaks observed during flight crashes. A dynamic FEA was performed to simulate a drop test from a height of one metre and compare the optimised design with the previous frame design. The redesign led to a 13.67 % decrease in weight and an 11.78 % decrease in the stress of the aerial robotic frame. This not only demonstrates the effectiveness of design optimisation, but also highlights the commitment to producing more efficient, reliable and sustainable designs.

Item: An overview of robot vision (2019-01)
Van Eden, Beatrice; Rosman, Benjamin S
Robot vision is an interdisciplinary field that deals with how robots can be made to gain high-level understanding from digital images or videos. Understanding an image at the pixel level often does not provide enough information for decision making and action taking; in such cases, higher-level semantic information that describes the image is required. This helps the robot to accomplish complex tasks that require visual understanding. For robots to add value, they need to be sufficiently effective at executing tasks in different settings. Despite many impressive advances in robot vision, robots still lack the ability to function as humans do in complex environments. Importantly, this includes being able to interpret and understand the perceptual complexities of the world. Robot vision depends on ideas from both computer vision and machine learning. In this paper we provide an overview of the advances in these disciplines and how they contribute to robot vision.

Item: Prototype design of an aerial robotic platform for indoor applications (2023-11)
Botha, Natasha; Van Eden, Beatrice; Lehman, Lodewyk; Verster, Jacobus J
There is increased interest in using aerial robotic platforms for indoor applications in industrial and manufacturing environments. One such example is stocktaking in warehouses, where other mobile robotic systems are not efficient as they are unable to reach higher shelves or operate at the same speed as an aerial robotic platform. In this paper we discuss a prototype design for an aerial robotic platform able to operate in GPS-denied areas, such as indoor warehouses. Suitable choices for the hardware and avionics are proposed based on the system and user requirements. The final design weighs 194 g, costs R7 354.30, and has an estimated flight time of 5.5 min, which is within the system requirements.
Item: Robots for disaster management (2018-05)
Van Eden, Beatrice; Rosman, Benjamin S
Recent years have seen many deadly natural disasters, including hurricanes, earthquakes, flooding and landslides. Search and rescue efforts have saved numerous lives, but many others were lost. At the same time, robotic technology is becoming more widespread and brings with it the potential to assist in these search and rescue scenarios. Despite many impressive advances, robots still lack the ability to function as humans do in complex environments. Importantly, this includes being able to interpret and understand the complexities of the world as humans do. This short paper explains our first steps towards better robot cognition for use in search and rescue scenarios. To this end, we focus particularly on the ability to understand the robot's current surroundings. Our setup involves collecting data from a mobile robot moving between three different settings, and using this data to train a neural network to identify the current setting. The robot will then be able to roam around an environment and identify the three settings, marking them on the map it creates of the environment.

Item: Simulating object handover between collaborative robots (2023-11)
Van Eden, Beatrice; Botha, Natasha
Collaborative robots are adopted in the drive towards Industry 4.0 to automate manufacturing while retaining a human workforce. This area of research is known as human-robot collaboration (HRC) and focuses on understanding the interactions between the robot and a human. During HRC the robot is often programmed to perform a predefined task; however, this is not achievable when working in a dynamic and unstructured environment. To this end, machine learning is commonly employed to train the collaborative robot to autonomously execute a collaborative task. Most current research is concerned with HRC; however, for the smart factory of the future, investigating an autonomous collaborative task between two robots is pertinent. In this paper deep reinforcement learning (DRL) is used to teach two collaborative robots to hand over an object in a simulated environment. The simulation environment was developed using PyBullet and OpenAI Gym. Three DRL algorithms and three different reward functions were investigated. The results clearly indicated that PPO was the best-performing DRL algorithm, as it provided the highest reward output, indicating that the robots were learning how to perform the task even though they were not successful. A discrete reward function with reward shaping, which incentivises the cobot to perform the desired actions and incremental goals (picking up the object, lifting the object and transferring the object), provided the best overall performance. (An illustrative reward-shaping sketch appears at the end of this listing.)

Item: Towards better flood risk management using a Bayesian network approach (2022-11)
Wessels, Gert J; Botha, Natasha; Koen, Hildegarde S; Van Eden, Beatrice
After years of drought, the rainy season is always welcomed. Unfortunately, it can also herald widespread flooding, which can result in loss of livelihood, property, and human life. In this study a Bayesian network is used to develop a flood prediction model for a Tshwane catchment area prone to flash floods. This causal model was chosen due to a shortage of flood data. The Bayesian network was evaluated by environmental domain experts and implemented in Python through pyAgrum. Three what-if scenarios were used to verify the model and the probability estimates, which were based on expert knowledge. The model was then used to predict low and high rainfall scenarios: it predicted no flooding events for the low rainfall scenario, and flooding events, especially around the rivers, for the high rainfall scenario. The model therefore behaves as expected.
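The flood-risk entry above implements its Bayesian network in Python through pyAgrum, with probabilities elicited from domain experts. The fragment below is a heavily reduced illustration of that workflow using typical pyAgrum calls; the variables, structure and probabilities are invented, and the paper's actual network and conditional probability tables come from its experts, not from this sketch.

```python
# Hypothetical sketch: a tiny expert-elicited Bayesian network in pyAgrum.
# Variables, structure and probabilities are invented for illustration only.
import pyAgrum as gum

bn = gum.BayesNet("flood_sketch")
rain = bn.add(gum.LabelizedVariable("Rainfall", "rainfall level", ["low", "high"]))
river = bn.add(gum.LabelizedVariable("NearRiver", "close to a river", ["no", "yes"]))
flood = bn.add(gum.LabelizedVariable("Flood", "flash flood occurs", ["no", "yes"]))
bn.addArc(rain, flood)
bn.addArc(river, flood)

# Prior and conditional probabilities (stand-ins for expert estimates).
bn.cpt(rain).fillWith([0.8, 0.2])
bn.cpt(river).fillWith([0.6, 0.4])
bn.cpt(flood)[{"Rainfall": "low",  "NearRiver": "no"}]  = [0.99, 0.01]
bn.cpt(flood)[{"Rainfall": "low",  "NearRiver": "yes"}] = [0.95, 0.05]
bn.cpt(flood)[{"Rainfall": "high", "NearRiver": "no"}]  = [0.80, 0.20]
bn.cpt(flood)[{"Rainfall": "high", "NearRiver": "yes"}] = [0.40, 0.60]

# A "what-if" query: probability of flooding near a river under high rainfall.
ie = gum.LazyPropagation(bn)
ie.setEvidence({"Rainfall": "high", "NearRiver": "yes"})
ie.makeInference()
print(ie.posterior("Flood"))
```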
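The object-handover entry above trains two simulated robots with PPO in PyBullet and OpenAI Gym, and reports that a discrete, shaped reward over incremental goals works best. The toy Gym-style environment below illustrates only that reward-shaping idea; it replaces the PyBullet physics and robot models with an abstract three-stage task, and every state, action and reward value is an assumption for illustration, not the paper's setup.

```python
# Hypothetical sketch: a toy Gym-style environment mimicking the staged
# handover reward described in the abstract (grasp -> lift -> transfer).
# The real study used PyBullet robot models; this sketch drops the physics
# so the discrete reward-shaping idea stands alone.
import gym
import numpy as np
from gym import spaces


class ToyHandoverEnv(gym.Env):
    """Stages: 0 = reach/grasp, 1 = lift, 2 = transfer to the second robot."""

    def __init__(self):
        super().__init__()
        self.action_space = spaces.Discrete(3)   # attempt grasp / lift / pass
        self.observation_space = spaces.Box(0.0, 2.0, shape=(1,), dtype=np.float32)
        self.stage = 0

    def reset(self):
        self.stage = 0
        return np.array([self.stage], dtype=np.float32)

    def step(self, action):
        reward = -0.01                            # small time penalty per step
        # Discrete shaped rewards for completing each incremental goal.
        if action == self.stage:
            self.stage += 1
            reward += 1.0                         # bonus for reaching a sub-goal
        done = self.stage == 3
        if done:
            reward += 5.0                         # handover completed
        obs = np.array([min(self.stage, 2)], dtype=np.float32)
        return obs, reward, done, {}


# Quick random rollout; in the paper's setting a PPO agent would be trained.
env = ToyHandoverEnv()
obs, total = env.reset(), 0.0
for _ in range(50):
    obs, r, done, _ = env.step(env.action_space.sample())
    total += r
    if done:
        break
print("episode return:", round(total, 2))
```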