Research Publications/Outputs
Browsing Research Publications/Outputs by cluster: "Defence and Security"
Now showing 1 - 20 of 162
Item: Academic and skills credentialing using distributed ledger technology (DLT) and W3C Standards: Technology assessment (2022-12). Mthethwa, Sthembile; Pretorius, Morne.
The ongoing push for the 4th industrial revolution is setting the stage to digitise, persist and verify identity along with credentials. Academic and skills credentials are currently verified manually and have much scope for automation using cryptographic techniques, but require standardisation to facilitate future systems interoperability. The Distributed Ledger Technology (DLT) and World Wide Web Consortium (W3C) Verifiable Credentials (VC) standards present the possibility to achieve this credential verification automation. To accomplish this, an understanding of various DLTs and of the requirements for a viable skills tracking system is important. Therefore, this research aims to assess the selected DLTs against the assessment criteria presented, and an analysis has been completed to determine which DLT is suitable for the proposed system. The DLTs are assessed in terms of their ability to support the rapid prototyping of such a system, and recommendations are provided to guide a future development path from the perspective of standards compliance. We conclude that few DLTs possess the maturity to provide proper requirements coverage due to the emergent nature of the DLT space. Additionally, this paper presents the high-level requirements to achieve a minimally viable solution that can demonstrate such digital credential verification in the academic and skills tracking context.
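The paper's own system is not reproduced here; as a minimal sketch of the W3C VC data-model shape that such credential verification builds on, the snippet below assembles an academic credential and runs a toy integrity check. All identifiers are hypothetical, and the hash-based "proof" stands in for the cryptographic proof suites used in real deployments.

```python
# Illustrative sketch only: a minimal W3C Verifiable Credential payload for an
# academic credential, with a toy hash "proof". Real deployments use the W3C VC
# Data Model with cryptographic proof suites and a DLT or other verifiable data
# registry; the DIDs and values here are hypothetical.
import json, hashlib

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "AcademicCredential"],
    "issuer": "did:example:university-123",          # hypothetical issuer DID
    "issuanceDate": "2022-12-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:learner-456",             # hypothetical subject DID
        "qualification": "National Certificate: Information Technology",
    },
}

def toy_fingerprint(doc: dict) -> str:
    """Stand-in for a real proof: hash of the canonicalised JSON payload."""
    return hashlib.sha256(json.dumps(doc, sort_keys=True).encode()).hexdigest()

credential["proof"] = {
    "type": "ToyHashProof2022",
    "digest": toy_fingerprint({k: v for k, v in credential.items() if k != "proof"}),
}

# A verifier would recompute the digest (or verify the signature) and check the
# issuer against a trusted registry anchored on the chosen DLT.
recomputed = toy_fingerprint({k: v for k, v in credential.items() if k != "proof"})
print("integrity check:", recomputed == credential["proof"]["digest"])
```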
Item: Acceleration of hidden Markov model fitting using graphical processing units, with application to low-frequency tremor classification (2021-11). Stoltz, M; Stoltz, George G; Obara, K; Wang, T; Bryant, D.
Hidden Markov models (HMMs) are general-purpose models for time-series data widely used across the sciences because of their flexibility and elegance. Fitting HMMs can often be computationally demanding and time consuming, particularly when the number of hidden states is large or the Markov chain itself is long. Here we introduce a new Graphical Processing Unit (GPU)-based algorithm designed to fit long-chain HMMs, applying our approach to a model for low-frequency tremor events. Even on a modest GPU, our implementation resulted in an increase in speed of several orders of magnitude compared to the standard single-processor algorithm. This permitted a full Bayesian inference of uncertainty related to model parameters and forecasts based on posterior predictive distributions. Similar improvements would be expected for HMM models given a large number of observations and moderate state spaces ( states with current hardware). We discuss the model, general GPU architecture and algorithms and report performance of the method on a tremor dataset from the Shikoku region, Japan. The new approach led to improvements in both computational performance and forecast accuracy, compared to existing frequentist methodology.
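The authors' GPU algorithm is not reproduced here; as a rough sketch of the computation being accelerated, the following NumPy code evaluates the scaled forward recursion that dominates HMM likelihood evaluation. The two-state "quiet/tremor" toy model and all values are made-up examples.

```python
# Minimal sketch (not the authors' GPU implementation): the scaled forward
# recursion whose repeated evaluation dominates HMM fitting. On a GPU the same
# per-step matrix-vector products can be batched/vectorised; here plain NumPy
# is used and all model values are illustrative.
import numpy as np

def hmm_log_likelihood(log_obs_probs, trans, init):
    """log p(observations) for an HMM.

    log_obs_probs: (T, K) array, log p(y_t | state k)
    trans:         (K, K) transition matrix, rows sum to 1
    init:          (K,) initial state distribution
    """
    alpha = init * np.exp(log_obs_probs[0])
    log_lik = 0.0
    for t in range(len(log_obs_probs)):
        if t > 0:
            alpha = (alpha @ trans) * np.exp(log_obs_probs[t])
        scale = alpha.sum()                 # rescale to avoid underflow
        log_lik += np.log(scale)
        alpha = alpha / scale
    return log_lik

# Toy example: 2 hidden states ("quiet" vs "tremor"), Gaussian emissions.
rng = np.random.default_rng(0)
y = rng.normal(size=1000)
means, sds = np.array([0.0, 2.0]), np.array([1.0, 1.0])
log_obs = -0.5 * ((y[:, None] - means) / sds) ** 2 - np.log(sds * np.sqrt(2 * np.pi))
trans = np.array([[0.99, 0.01], [0.05, 0.95]])
print(hmm_log_likelihood(log_obs, trans, init=np.array([0.5, 0.5])))
```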
Item: Adding up the numbers: COVID-19 in South Africa (2022-06). Suliman, Ridhwaan; Mtsweni, Jabu S.
The SARS-CoV-2 pandemic has wreaked havoc globally, with over half a billion people infected and millions of lives lost. The pandemic has also interrupted every aspect of our lives, with most governments imposing various interventions and restrictions on people's movement and behaviour to minimise the impact of the virus and save lives. The debate among scholars on the effectiveness of the interventions and restrictions, particularly in the context of a developing country like South Africa, continues. The data and scientific evidence indicate that non-pharmaceutical interventions, and particularly their implementation and the adherence thereto, may have been ineffective in terms of containment in the South African context and had minimal impact in stopping the spread of the SARS-CoV-2 virus.

Item: Advancing cybersecurity capabilities for South African organisations through R&D (2022-03). Dawood, Zubeida C; Mkuzangwe, Nenekazi NP.
Cyber-attacks are on the rise in South Africa. Seeing that there are over 38 million Internet users in the country, this is no surprise. The South African government has published the National Cybersecurity Policy Framework (NCPF) and the Protection of Personal Information Act (POPIA) to move towards mitigating cyber threats arising from the increasing presence of South African organisations and citizens in cyberspace. This demonstrates that there is a need for organisations to have a clear roadmap to implement and improve their own cybersecurity capabilities. South African organisations need to take a proactive stance on cybersecurity because businesses rely heavily on technology for day-to-day operations. Currently, cyber-attacks cost South African organisations over R2 billion, and the work-from-home arrangements that most organisations have implemented will only worsen the situation. While a cybersecurity roadmap will differ in every organisation based on the organisation's vision, goals and objectives, along with its information technology (IT) and operations technology (OT), a starting point is perhaps the identification of key research and development (R&D) areas, together with key activities, that organisations can focus on in order to improve their cybersecurity capabilities. Cybersecurity capabilities are tools that organisations use to strengthen themselves and protect against potential cyber threats. The purpose of this study was to investigate R&D areas that organisations should invest in to improve their cybersecurity capabilities. There are various subfields in cybersecurity that organisations can explore to advance their cybersecurity capabilities. Five integral R&D dimensions were identified together with key activities, and these are presented and discussed. A conceptual framework is also presented which maps the R&D dimensions and activities to the main pillars of cybersecurity, i.e., People, Processes and Technology. South African organisations could reference the framework and adapt it to their business needs to protect themselves against potential cyber threats.

Item: An aerodynamic CFD analysis of inlet swirl in a micro-gas turbine combustor (2023-07). Meyers, Bronwyn C; Grobler, Jan-Hendrik; Snedden, GC.
A combustor was designed for a 200 N micro-gas turbine [1, 2] using the NREC preliminary combustor design method [1, 2, 3]. During the design process, there are various aspects for which there are no definitive methodologies for specifying the design detail, such as the design of the hole-sets, and multiple options can be derived that satisfy the required mass flow split and pressure drop for a particular hole-set.
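The NREC method itself is not reproduced here. As a hedged illustration of the hole-set sizing trade-off the abstract refers to, the sketch below uses the standard incompressible orifice relation to link a hole-set's share of the mass flow split, a target liner pressure drop and a hole diameter; all numbers are illustrative assumptions.

```python
# Illustrative first-pass sizing sketch, not the NREC method: for a given liner
# hole-set, the standard incompressible orifice relation
#     m_dot = Cd * A * sqrt(2 * rho * dP)
# links the mass flow assigned to the hole-set (from the required flow split)
# to the total hole area for a target pressure drop. All numbers are made up.
import math

def hole_diameter(m_dot, n_holes, dP, rho, Cd=0.6):
    """Diameter [m] of each hole in a set of n_holes passing m_dot [kg/s] at dP [Pa]."""
    area_total = m_dot / (Cd * math.sqrt(2.0 * rho * dP))
    area_hole = area_total / n_holes
    return 2.0 * math.sqrt(area_hole / math.pi)

# e.g. 20% of a 0.1 kg/s core flow through 12 primary holes at a 3% drop of 300 kPa
d = hole_diameter(m_dot=0.02, n_holes=12, dP=0.03 * 300e3, rho=3.0, Cd=0.6)
print(f"hole diameter ~ {d * 1000:.2f} mm")
```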
Item: Age invariant face recognition methods: A review (2021-12). Baruni, Kedimotse P; Mokoena, Nthabiseng ME; Veeraragoo, Mahalingam; Holder, Ross P.
Face recognition is one of the biometric technologies most used in surveillance and law enforcement for identification and verification. However, face recognition remains a challenge in verifying and identifying individuals due to significant facial appearance discrepancies caused by age progression, especially in applications that verify individuals from their passports or driving licenses and in finding missing children after decades. The most critical step in Age-Invariant Face Recognition (AIFR) is extracting rich discriminative age-invariant features for each individual in face recognition applications. The variation of facial appearance across aging can be addressed using three classes of methods, namely generative (aging simulation), discriminative (feature-based) and deep neural network methods. This work reviews and compares state-of-the-art AIFR methods, covering the work that has been done during the pre-processing and feature extraction stages to minimize the effect of aging and to extract rich discriminative age-invariant features from facial images of individuals (subjects) captured at different ages, together with the shortfalls and advantages of these methods. The novelty of this work lies in analyzing the state-of-the-art work done during the pre-processing and/or feature extraction stages to minimize the difference between query and enrolled face images captured over age progression.

Item: Algebraic analysis of Toeplitz decorrelation techniques for direction-of-arrival estimation (2019-11). Shafuda, F; McDonald, Andre M; Van Wyk, MA; Versfeld, J.
In this paper, we investigate the correlation Toeplitz (CTOP) and averaging Toeplitz (AVTOP) decorrelation techniques, as applied to direction-of-arrival (DOA) estimation of coherent narrowband sources with the multiple signal classification (MUSIC) algorithm. Numerical studies suggest that CTOP leads to more accurate DOA estimation than AVTOP; however, no theoretical motivation for this performance gap has yet been presented. In this paper, we derive expressions for the Toeplitz matrices produced by the CTOP and AVTOP techniques, for a scenario involving a three-element uniform linear array and two coherent source signals in additive white Gaussian noise. These expressions lead to the claim that the accuracy of the CTOP technique can be attributed to its retention of source DOA information as independent sums (i.e. in a superposition form) in the Toeplitz matrix. The claim is supported by an investigation of the MUSIC spectra corresponding to the distinct Toeplitz matrices.
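As a hedged illustration of the techniques named above (not the paper's algebraic derivation), the sketch below forms an AVTOP-style Toeplitz matrix by averaging the diagonals of a sample covariance matrix for an assumed three-element half-wavelength array with two coherent sources, and then scans the MUSIC pseudo-spectrum. The scenario parameters are illustrative assumptions.

```python
# Hedged sketch: diagonal-averaging (AVTOP-style) Toeplitzification of a sample
# covariance for a 3-element half-wavelength ULA with two coherent sources,
# followed by a MUSIC pseudo-spectrum scan. All parameters are assumed.
import numpy as np

def steering(theta_deg, n_elem=3):
    k = np.arange(n_elem)
    return np.exp(-1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

rng = np.random.default_rng(1)
n_elem, n_snap = 3, 500
s = np.exp(1j * 2 * np.pi * rng.random(n_snap))                       # source 1
x = np.outer(steering(-10), s) + np.outer(steering(25), 0.8 * s)      # coherent pair
x += 0.1 * (rng.standard_normal((n_elem, n_snap))
            + 1j * rng.standard_normal((n_elem, n_snap)))
R = x @ x.conj().T / n_snap

# AVTOP-style Toeplitzification: average each diagonal of R (Hermitian Toeplitz).
diag_means = [np.mean(np.diag(R, k)) for k in range(n_elem)]
T = np.array([[diag_means[abs(i - j)].conj() if i > j else diag_means[abs(i - j)]
               for j in range(n_elem)] for i in range(n_elem)])

# MUSIC: noise subspace from the smallest eigenvalue(s), then scan the spectrum.
eigval, eigvec = np.linalg.eigh(T)
En = eigvec[:, :n_elem - 2]                                           # 2 sources assumed
angles = np.arange(-90.0, 90.5, 0.5)
spec = np.array([1.0 / np.real(steering(a).conj() @ En @ En.conj().T @ steering(a))
                 for a in angles])
peaks = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
top = peaks[np.argsort(spec[peaks])[-2:]]
print("MUSIC peaks (deg):", np.sort(angles[top]))
```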
Item: An analysis of a cryptocurrency giveaway scam: Use case (2024-06). Botha, Johannes G; Leenen, L.
A giveaway scam is a type of fraud leveraging social media platforms and phishing campaigns. These scams have become increasingly common and are now also prevalent in the crypto community, where attackers attempt to gain crypto-enthusiasts' trust with the promise of high-yield giveaways. Giveaway scams target individuals who lack technical familiarity with the blockchain. They take on various forms, often presenting as genuine cryptocurrency giveaways endorsed by prominent figures or organizations within the blockchain community. Scammers entice victims by promising substantial returns on a nominal investment. Victims are manipulated into sending cryptocurrency under the pretext of paying for "verification" or "processing fees". However, once the funds have been sent, the scammers disappear and leave victims empty-handed. This study employs essential blockchain tools and techniques to explore the mechanics of giveaway scams. A crucial aspect of an investigation is to meticulously trace the movement of funds within the blockchain so that illicit gains resulting from these scams can be tracked. At some point a scammer wants to "cash out" by transferring the funds to an off-ramp, for example an exchange. If the investigator can establish a link to such an exchange, the identity of the owner of the cryptocurrency address could be revealed. However, in organised scams, criminals make use of mules and do not use their own identities. The authors of this paper select a use case and then illustrate a comprehensive approach to investigating the selected scam. This paper contributes to the understanding and mitigation of giveaway scams in the cryptocurrency realm. By leveraging the mechanics of blockchain technology, dissecting scammer tactics, and utilizing investigative techniques and tools, the paper aims to contribute to the protection of investors, the industry, and the overall integrity of the blockchain ecosystem. This research sheds light on the intricate workings of giveaway scams and proposes effective strategies to counteract them.

Item: An analysis of crypto scams during the Covid-19 pandemic: 2020-2022 (2023-03). Botha, Johannes G; Botha-Badenhorst, Danielle P; Leenen, L.
Blockchain and cryptocurrency adoption has increased significantly since the start of the Covid-19 pandemic. This adoption rate has overtaken the Internet adoption rate of the 90s and early 2000s, but as a result, the instances of crypto scams have also increased. The types of crypto scams reported are typically giveaway scams, rug pulls, phishing scams, impersonation scams, Ponzi schemes, as well as pump and dumps. The US Federal Trade Commission (FTC) reported that in May 2021 the number of crypto scams was twelve times higher than in 2020, and the total loss increased by almost 1000%. The FTC also reported that Americans lost more than $80 million to cryptocurrency investment scams from October 2019 to October 2020, with victims between the ages of 20 and 39 representing 44% of the reported cases. Social media has become the go-to place for scammers, where attackers hack pre-existing profiles and ask targets' contacts for payments in cryptocurrency. In 2020, both Joe Biden's and Bill Gates' Twitter accounts were hacked, and the hacker posted tweets promising that for all payments sent to a specified address, double the amount would be returned; this case of fraud was responsible for $100,000 in losses. A similar scheme using Elon Musk's Twitter account resulted in losses of nearly $2 million. This paper analyses the most significant blockchain and cryptocurrency scams since the start of the Covid-19 pandemic, with the aim of raising awareness and contributing to protection against attacks. Even though the blockchain is a revolutionary technology with numerous benefits, it has also given rise to an international crisis that cannot be ignored.

Item: An analysis of the MTI crypto investment scam (2023-06). Botha, Johannes G; Pederson, T; Leenen, L.
Since the start of the Covid-19 pandemic, blockchain and cryptocurrency adoption has increased significantly. The adoption rate of blockchain-based technologies has surpassed the Internet adoption rate of the 90s and early 2000s. As this industry has grown significantly, so too have the instances of crypto scams. Numerous cryptocurrency scams exist to exploit users. The generally limited understanding of how cryptocurrencies operate has increased the possible number of scams, which rely on people's misplaced sense of trust and desire to make money quickly and easily. As such, investment scams have also been growing in popularity. Mirror Trading International (MTI) was named South Africa's biggest crypto scam in 2020, resulting in losses of $1.7 billion. It is also one of the largest reported international crypto investment scams. This paper focuses on a specific aspect of the MTI scam: an analysis of the fund movements on the blockchain from the perpetrators and the members who benefited the most from the scam. The authors used various Open-Source Intelligence (OSINT) tools, alongside QLUE, as well as news articles and blockchain explorers. These tools and techniques are used to follow the money trail on the blockchain, in search of possible mistakes made by the perpetrator. This could include instances where some personal information might have been leaked. With such disclosed personal information, OSINT tools and investigative techniques can be used to identify the criminals. With the CEO of MTI having been arrested and the case currently before a South African court, this paper also presents investigative processes that could be followed. Thus, the focus of this paper is to follow the money and consequently propose a process for an investigator to investigate crypto crimes and scams on the blockchain. As the adoption of blockchain technologies continues to increase at unprecedented rates, it is imperative to produce investigative toolkits and use cases to help reduce the time spent trying to catch bad actors within the generally anonymous realm of cryptocurrencies.

Item: Application of geospatial data in cyber security (2022-06). Veerasamy, Namosha; Yoolla, Yaseen; Dawood, Zubeida C.
Geospatial data is often perceived as only being related to maps, compasses and locations. However, the application areas of geospatial data are far wider and even extend to the field of cybersecurity. Not only can geospatial data show points of interest and emerging network traffic conditions, it can also model cybercrime growth patterns and indicate affected areas as well as the emergence of certain types of cyber threats. Geospatial data can feed into intelligence systems, help with analysis and information sharing, and help create situational awareness. This is particularly useful in the area of cyber security. Geospatial data is very powerful and can help to prioritise cyber threats and identify critical areas of concern. Previously, geospatial data was primarily used by militaries, intelligence agencies, weather services or traffic control. Currently, the application of geospatial data has multiplied, and it spans many more industries and sectors. So too for cyber security, geospatial data has a wide range of uses.
It may be difficult to find patterns or trends in large data sets. However, the graphic capabilities of geo-mapping help present data in a more digestible manner. This may help analysts identify emerging issues, threats and target areas. In this paper, the usefulness of geospatial data for cyber security is explored. The paper covers a framework of the key application areas that geospatial data can serve in the field of cyber security. The ten application areas covered in the paper are: tracking, data analysis, visualisation, situational awareness, cyber intelligence, collaboration, improved response to cyber threats, decision-making, cyber threat prioritisation, and protection of cyber infrastructure. The aim is that, through this paper, the application areas of geospatial data will become more widely adopted.

Item: Apportioning human-induced and climate-induced land degradation: A Case of the Greater Sekhukhune District Municipality (2023-03). Kgaphola, Motsoko J; Ramoelo, Abel; Odindi, J; Mwenge Kahinda, Jean-Marc; Seetal, Ashwin R.
Land degradation (LD) is a global issue that affects the sustainability and livelihoods of approximately 1.5 billion people, especially in arid/semi-arid regions. Hence, identifying and assessing LD and its driving forces (natural and anthropogenic) is important in order to design and adopt appropriate sustainable land management interventions. Therefore, using vegetation as a proxy for LD, this study aimed to distinguish anthropogenic from rainfall-driven LD in the Greater Sekhukhune District Municipality from 1990 to 2019. It is widely established that rainfall correlates highly with vegetation productivity. A linear regression was performed between the Normalized Difference Vegetation Index (NDVI) and rainfall. Human-induced LD was then distinguished from rainfall-driven LD using the spatial residual trend (RESTREND) method and the Mann–Kendall (MK) trend test. RESTREND results showed that 11.59% of the district was degraded due to human activities such as overgrazing and injudicious rangeland management, while about 41.41% was degraded due to seasonal rainfall variability and an increasing frequency of droughts. Climate variability affected vegetation cover and contributed to different forms of soil erosion and gully formation. These findings provide relevant spatial information on rainfall- and human-induced LD, which is useful for policy formulation and the design of LD mitigation measures in semi-arid regions.
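As a minimal illustration of the RESTREND idea described above (not the study's full workflow or significance testing), the sketch below regresses a single pixel's NDVI series on rainfall and applies a basic Mann-Kendall statistic to the residuals. All data are synthetic.

```python
# Minimal per-pixel sketch of the RESTREND idea (not the study's full workflow):
# regress seasonal NDVI on rainfall, then test the regression residuals for a
# monotonic trend with a simple Mann-Kendall statistic. A markedly negative
# residual trend suggests degradation not explained by rainfall. Data synthetic.
import numpy as np

def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of signs of all pairwise differences."""
    s = 0
    for i in range(len(series) - 1):
        s += np.sign(series[i + 1:] - series[i]).sum()
    return s

def restrend(ndvi, rain):
    slope, intercept = np.polyfit(rain, ndvi, 1)      # NDVI ~ rainfall
    residuals = ndvi - (slope * rain + intercept)
    return mann_kendall_s(residuals)

# Synthetic 30-year pixel: NDVI tracks rainfall but declines steadily (human pressure).
rng = np.random.default_rng(42)
years = np.arange(1990, 2020)
rain = 500 + 150 * rng.standard_normal(len(years))
ndvi = 0.0005 * rain + 0.3 - 0.004 * (years - 1990) + 0.01 * rng.standard_normal(len(years))
print("Mann-Kendall S on residuals:", restrend(ndvi, rain))   # strongly negative here
```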
Item: Approaches to Building a Smart Community: An Exploration Through the Concept of the Digital Village (Cambridge Scholars Publishing, 2021-09). Phahlamohlaka, Letlibe J.
The unique approaches proposed in this book are 'glocal' in character, as they draw on the experiences of South Africans to address the global issue of 'smart communities'. The book blends together social and technical aspects, and presents the experiences of a range of community practitioners, academics, architects and engineers.

Item: Aspects of Wind Tunnel Testing: Practices (2024). Morelli, Mauro F.
This presentation focuses on various aspects of wind tunnels: their testing practices, types, processes, balances, and model design and procurement.

Item: Assessing the quality of acquired images to improve ear recognition for children (2023-06). Ntshangase, Cynthia S; Ndlovu, Lungisani; Stofile, Akhona.
The use of biometrics to secure the identity of children is an ongoing research topic worldwide. In the recent past, it has been realized that one of the promising biometrics is the shape of the ear, especially for children. This is because most of their other biometrics change as they grow. However, there are shortcomings involved when using ear recognition in children, usually caused by the surrounding environment, and children can at times be uncooperative, for example by moving during image acquisition. Consequently, the quality of acquired images might be affected by issues such as partial occlusions, blurriness, sharpness, and illumination. Therefore, in this paper, a method of image quality assessment is proposed. This method detects whether images are affected by partial occlusions, blurriness, sharpness, or illumination, and assesses the quality of the image to improve ear recognition for children. In this paper, four different test experiments were performed using the AIM database, the IIT Delhi ear database, and ear images collected by Council for Scientific and Industrial Research (CSIR) researchers. The Gabor filter and Scale Invariant Feature Transform (SIFT) feature comparison methods were used to assess the quality of images. The experimental results showed that images with partial ear occlusions yield fewer than 16 key points, resulting in low identification accuracy. Blurriness and sharpness were measured using the sharpness value of the image: if the sharpness value is below 13, the image is blurry, while if the sharpness value is greater than 110, the image quality affects the extracted features and reduces the identification accuracy. Furthermore, it was discovered that the level of illumination in the image varies, and a strong illumination effect (values above 100) affects the features and reduces the identification rate. The overall experimental evaluations demonstrated that image quality assessment is critical in improving ear recognition accuracy.
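As a hedged sketch of the kind of quality checks described above, the code below counts SIFT keypoints, scores sharpness and checks illumination with OpenCV. The thresholds (16 keypoints, sharpness 13 and 110, illumination 100) come from the abstract, but the specific sharpness measure (variance of the Laplacian) and the use of mean grey level for illumination are assumptions, not necessarily the paper's exact metrics.

```python
# Hedged sketch of image quality gates for ear images: count SIFT keypoints
# (too few may indicate partial occlusion), score sharpness, check illumination.
# Variance-of-Laplacian and mean grey level are assumed stand-in measures.
import cv2

def assess_ear_image(path):
    grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if grey is None:
        raise FileNotFoundError(path)

    keypoints = cv2.SIFT_create().detect(grey, None)
    sharpness = cv2.Laplacian(grey, cv2.CV_64F).var()
    illumination = float(grey.mean())

    issues = []
    if len(keypoints) < 16:
        issues.append("possible partial occlusion (few keypoints)")
    if sharpness < 13:
        issues.append("blurry")
    elif sharpness > 110:
        issues.append("over-sharp / noisy")
    if illumination > 100:
        issues.append("strong illumination effect")
    return {"keypoints": len(keypoints), "sharpness": sharpness,
            "illumination": illumination, "issues": issues}

# Usage idea: only enrol or match images that pass the checks.
# report = assess_ear_image("ear_001.png")   # hypothetical file name
# if not report["issues"]: proceed with recognition
```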
Item: Assessment of homegarden agroforestry for sustainable land management intervention in a degraded landscape in South Africa (2021-12). Musvoto, Constansia D; Kgaphola, Motsoko J; Mwenge Kahinda, Jean-Marc.
Agroforestry-based sustainable land management (SLM) interventions provide opportunities for tackling land degradation and its associated socio-economic issues. Agroforestry is not a guaranteed SLM fix, as not every agroforestry practice is automatically relevant to every context. It is critical to identify key considerations for ensuring a good fit between agroforestry and the receiving environment. This study identifies and analyses key factors for assessing the context-specific suitability of an agroforestry practice for SLM, using a case study of homegarden agroforestry in a degraded catchment. An analysis of biophysical and socio-economic characteristics of the catchment, covering land degradation, SLM aspirations of residents, agriculture and agroforestry activities, was conducted through literature review, field observations, GIS and remote sensing, stakeholder engagement and a questionnaire survey. Considerations in agroforestry practice assessment for SLM include agricultural and SLM objectives, which at our study site were increased crop production and arresting soil erosion. Availability of requisite resources, namely land, water and fencing; stakeholder interest in tree and/or crop planting; the species of interest; and the suitability of the species for the biophysical conditions should also be assessed. We propose a framework for systematically working through the relevant factors and assessing the suitability of an agroforestry practice for SLM intervention in a specific context. Based on the framework, homegarden agroforestry is an appropriate SLM intervention as it could meet stakeholders' SLM and agricultural objectives. Identification and systematic assessment of relevant factors are critical for ensuring the acceptability of an agroforestry practice in a locality and the sustainability of associated SLM interventions.

Item: BFO Classifier: Aligning domain ontologies to BFO (2022-08). Emeruem, C; Keet, CM; Dawood, Zubeida C; Wang, S.
Foundational ontologies are known to have a steep learning curve, which hampers their casual use by domain ontology developers for domain ontology development. Foundational ontology developers have not provided methods or tools to lower the barriers to uptake beyond offering, at best, a computational version. We investigate an approach to bridge this gap through the development of a decision diagram for BFO, which offers the modeller a series of questions with closed answer options in order to arrive, step-wise, at a suitable entity to align the domain entity to. This diagram was implemented in a tool, the BFO Classifier, that keeps track of the question-and-answer trace, and with the click of a button the alignment axiom can be added to the ontology. It was evaluated with two BFO-aligned ontologies, which showed that in at least half.
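The BFO Classifier's actual decision diagram and questions are not reproduced here; the toy sketch below only illustrates the idea of a question-and-answer trace that walks to a candidate BFO class and emits a subsumption alignment axiom. The questions and the two-level tree are simplified assumptions; the class names are from BFO 2.0 and the domain class is hypothetical.

```python
# Toy sketch of the decision-diagram idea (not the BFO Classifier itself): walk
# a small yes/no tree to a candidate BFO class, keep the question-answer trace,
# and emit a SubClassOf alignment axiom as text. The tree is a simplification.
DECISION_TREE = {
    "question": "Does the entity unfold in time (e.g. a process or event)?",
    "yes": "bfo:Occurrent",
    "no": {
        "question": "Can it exist on its own, without depending on another entity?",
        "yes": "bfo:IndependentContinuant",
        "no": "bfo:SpecificallyDependentContinuant",
    },
}

def classify(node, answers):
    """Follow pre-recorded yes/no answers down the tree; return (class, trace)."""
    trace = []
    while isinstance(node, dict):
        answer = answers[len(trace)]
        trace.append((node["question"], answer))
        node = node[answer]
    return node, trace

domain_class = "onto:SurgicalProcedure"          # hypothetical domain entity
bfo_class, trace = classify(DECISION_TREE, answers=["yes"])
for q, a in trace:
    print(f"Q: {q}  A: {a}")
print(f"Alignment axiom: {domain_class} SubClassOf {bfo_class}")
```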
Item: Biases and debiasing of decisions in ageing military systems (2019-09). Pelser, Winnie C.
Many of the administrative decisions that must be made in a military environment are complex and rely on a rational analysis of situations. Decisions within the domain of ageing systems are particularly difficult and often riddled with different biases. This paper investigates why rational thinking is not always the norm, and suggests possible ways to assist decision making. A few biases are identified, and available debiasing techniques are discussed. It was found that research in this field is limited and must be expanded in order to ensure optimal decision making.

Item: Biaxial estimation of biomechanical constitutive parameters of passive porcine sclera soft tissue (2022-02). Ndlovu, Z; Desai, D; Pandelani, Thanyani A; Ngwangwa, H; Nemavhola, F.
This study assesses the modelling capabilities of four constitutive hyperelastic material models to fit the experimental data of passive porcine sclera soft tissue. It further estimates the material parameters and discusses their applicability to a finite element model by examining the statistical dispersion measured through the standard deviation. Fifteen sclera tissues were harvested from pigs slaughtered at an abattoir and were subjected to equi-biaxial testing. The results show that all four material models yielded very good fits, with correlations above 96%. The polynomial (anisotropic) model gave the best correlation of 98%. However, the estimated material parameters varied widely from one test to another, such that there would be a need to normalise the test data to avoid long optimisation processes after applying the average material parameters to finite element models. For application of the estimated material parameters to finite element models, there would also be a need to consider normalising the test data to reduce the search region for the optimisation algorithms. Although the polynomial (anisotropic) model yielded the best correlation, it was found that the Choi-Vito model had the least variation in the estimated material parameters, thereby making it an easier option for applying its material parameters to a finite element model and requiring minimal effort in the optimisation procedure. For the porcine sclera tissue, it was found that the anisotropy was more influenced by the fiber-related properties than by the background material matrix-related properties.
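As an illustration of the fitting procedure only (none of the paper's four anisotropic models is reproduced), the sketch below fits a simple incompressible Mooney-Rivlin stand-in to synthetic equi-biaxial stress-stretch data by least squares. Parameter values and "measurements" are made up.

```python
# Minimal illustration of the parameter-fitting step, not the paper's models:
# for equi-biaxial stretch lambda of an incompressible isotropic Mooney-Rivlin
# material, the Cauchy stress is
#     sigma = 2 * (lambda^2 - lambda^-4) * (C1 + C2 * lambda^2).
# Fit C1, C2 to noisy synthetic data with least squares and report a fit score.
import numpy as np
from scipy.optimize import least_squares

def equibiaxial_stress(params, lam):
    c1, c2 = params
    return 2.0 * (lam**2 - lam**-4) * (c1 + c2 * lam**2)

# Synthetic "experiment": assumed true parameters plus measurement noise.
rng = np.random.default_rng(3)
lam = np.linspace(1.0, 1.15, 20)                    # stretch ratios
sigma_meas = equibiaxial_stress([0.08, 0.02], lam) + 0.002 * rng.standard_normal(lam.size)

fit = least_squares(lambda p: equibiaxial_stress(p, lam) - sigma_meas, x0=[0.01, 0.01])
residual = sigma_meas - equibiaxial_stress(fit.x, lam)
r2 = 1.0 - residual.var() / sigma_meas.var()
print("fitted C1, C2 (MPa):", fit.x, " R^2:", round(r2, 4))
```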