
dc.contributor.author Ngxande, Mkhuseli
dc.contributor.author Tapamo, J
dc.contributor.author Burke, Michael
dc.date.accessioned 2019-05-07T06:53:39Z
dc.date.available 2019-05-07T06:53:39Z
dc.date.issued 2019-01
dc.identifier.citation Ngxande, M., Tapamo, J., and Burke, M. 2019. DepthwiseGANs: Fast training generative adversarial networks for realistic image synthesis. SAUPEC/RobMech/PRASA 2019, Bloemfontein, South Africa, 28 - 31 January 2019, 6pp. en_US
dc.identifier.isbn 978-1-7281-0369-3
dc.identifier.isbn 978-1-7281-0370-9
dc.identifier.uri https://arxiv.org/abs/1903.02225
dc.identifier.uri https://ieeexplore.ieee.org/document/8704766
dc.identifier.uri DOI: 10.1109/RoboMech.2019.8704766
dc.identifier.uri http://hdl.handle.net/10204/10983
dc.description Copyright: IEEE 2019. This is the accepted version of the published item. en_US
dc.description.abstract Recent work has shown significant progress in the direction of synthetic data generation using Generative Adversarial Networks (GANs). GANs have been applied in many fields of computer vision including text-to-image conversion, domain transfer, super-resolution, and image-to-video applications. In computer vision, traditional GANs are based on deep convolutional neural networks. However, deep convolutional neural networks can require extensive computational resources because they are based on multiple operations performed by convolutional layers, which can consist of millions of trainable parameters. Training a GAN model can be difficult and it takes a significant amount of time to reach an equilibrium point. In this paper, we investigate the use of depthwise separable convolutions to reduce training time while maintaining data generation performance. Our results show that a DepthwiseGAN architecture can generate realistic images in shorter training periods when compared to a StarGAN architecture, but that model capacity still plays a significant role in generative modelling. In addition, we show that depthwise separable convolutions perform best when only applied to the generator. For quality evaluation of generated images, we use the Frechet Inception Distance (FID), which compares the similarity between the generated image distribution and that of the training dataset. en_US
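
The abstract's central idea, replacing standard convolutions with depthwise separable convolutions to cut parameter counts and training time, can be illustrated with a short sketch. The snippet below is not the authors' DepthwiseGAN generator; it is a minimal comparison in PyTorch (an assumed framework, with illustrative channel sizes not taken from the paper) showing how a depthwise convolution (groups equal to the input channels) followed by a 1x1 pointwise convolution replaces a standard convolution with far fewer trainable parameters.

import torch
import torch.nn as nn

in_ch, out_ch, k = 128, 256, 3  # illustrative sizes, not from the paper

# Standard convolution: one dense k x k kernel per (input, output) channel pair.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=1)

# Depthwise separable convolution: a per-channel k x k filter (depthwise step)
# followed by a 1 x 1 convolution that mixes information across channels.
depthwise_separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=k, padding=1, groups=in_ch),  # depthwise
    nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
)

def count_params(module):
    return sum(p.numel() for p in module.parameters())

x = torch.randn(1, in_ch, 64, 64)
assert standard(x).shape == depthwise_separable(x).shape  # same output shape
print(count_params(standard))             # 295,168 parameters
print(count_params(depthwise_separable))  # 34,304 parameters (~8.6x fewer)

For the evaluation metric named in the abstract, the Frechet Inception Distance is conventionally defined (standard formula, not quoted from the paper) over the means and covariances of Inception-network activations for real (r) and generated (g) images:

FID = \|\mu_r - \mu_g\|_2^2 + \mathrm{Tr}\left(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\right)

Lower FID indicates that the generated image distribution lies closer to the training distribution.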
dc.language.iso en en_US
dc.publisher IEEE en_US
dc.relation.ispartofseries Worklist;22307
dc.subject Depthwise Separable Convolution en_US
dc.subject Frechet Inception Distance en_US
dc.subject FID en_US
dc.subject Generative Adversarial Networks en_US
dc.subject GANs en_US
dc.subject Synthetic Data en_US
dc.title DepthwiseGANs: Fast training generative adversarial networks for realistic image synthesis en_US
dc.type Presentation en_US

