ENHANCED IMAGE STEGANOGRAPHY WITH KERAS-STEGANOGAN: A TENSORFLOW-BASED GAN
DOI: https://doi.org/10.32782/2786-9024/v3i5(37).344525

Keywords: SteganoGAN, Keras, TensorFlow, image steganography, GAN, encoder, critic network, payload capacity, residual, dense, decoder, adversarial learning, hidden message, similarity metrics.

Abstract
Image steganography is the process of embedding secret information within digital images such that the very presence of the message remains undetectable. Recent advances in deep learning, particularly in generative adversarial networks (GANs), have significantly improved both the payload capacity and perceptual quality of steganographic systems. The original SteganoGAN, implemented in PyTorch, achieved state-of-the-art performance by embedding up to 4.4 bits per pixel while maintaining strong resistance to steganalysis methods. However, the influence of the critic network on steganographic quality and learning stability remains insufficiently explored. This paper presents Keras-SteganoGAN, a TensorFlow-based reimplementation and extension of SteganoGAN, designed to systematically analyze the role of the critic in adversarial steganographic training. Two variants of the model, one with a critic and one without, were trained and compared across three encoder architectures: basic convolutional, residual, and dense. Each configuration was trained for five epochs with message depths ranging from 1 to 6 bits per pixel, allowing a comprehensive study of the trade-offs between payload capacity, image distortion, and decoding accuracy. Quantitative evaluation was conducted using standard image quality and steganographic metrics, including PSNR, SSIM, RS-BPP, and decoder accuracy. The results indicate that including a critic improves perceptual quality and visual similarity at lower payloads, but its contribution diminishes as the message depth increases. These findings provide new insight into the interaction between encoder complexity, critic dynamics, and steganographic performance, and offer guidance for the design of future GAN-based steganography systems.
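To make the evaluation setup concrete, the following is a minimal TensorFlow/Keras sketch written from the description above rather than from the Keras-SteganoGAN source code. The layer counts, the function and variable names, the assumption that images are normalized to [0, 1], and the assumption that the decoder returns raw logits are illustrative choices, not details taken from the paper.

# Minimal sketch: a basic convolutional encoder and the metrics named in the
# abstract (PSNR, SSIM, decoder accuracy, RS-BPP). All names and sizes here
# are illustrative assumptions, not the paper's implementation.
import tensorflow as tf

def basic_encoder(data_depth, hidden=32):
    """Basic convolutional encoder: concatenates a (H, W, data_depth) binary
    message tensor with the cover image and produces a same-size stego image."""
    cover = tf.keras.Input(shape=(None, None, 3))
    message = tf.keras.Input(shape=(None, None, data_depth))
    x = tf.keras.layers.Concatenate()([cover, message])
    for _ in range(3):
        x = tf.keras.layers.Conv2D(hidden, 3, padding="same")(x)
        x = tf.keras.layers.LeakyReLU()(x)
    stego = tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    return tf.keras.Model([cover, message], stego, name="basic_encoder")

def evaluate_batch(cover, stego, message, decoded_logits, data_depth):
    """Metrics used in the comparison. cover/stego are float tensors in [0, 1]
    of shape (B, H, W, 3); message is a binary tensor of shape
    (B, H, W, data_depth); decoded_logits is the raw decoder output."""
    psnr = tf.reduce_mean(tf.image.psnr(cover, stego, max_val=1.0))
    ssim = tf.reduce_mean(tf.image.ssim(cover, stego, max_val=1.0))
    # Bit-level decoding accuracy.
    decoded_bits = tf.cast(decoded_logits > 0.0, tf.float32)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(decoded_bits, message), tf.float32))
    # Reed-Solomon bits per pixel as defined for SteganoGAN:
    # RS-BPP = data_depth * (2 * accuracy - 1), the effective payload once
    # error-correction overhead for the measured bit accuracy is accounted for.
    rs_bpp = data_depth * (2.0 * accuracy - 1.0)
    return {"psnr": psnr, "ssim": ssim, "accuracy": accuracy, "rs_bpp": rs_bpp}

Under this definition, a message depth of 6 with perfect decoding corresponds to an RS-BPP of 6, while chance-level decoding yields an RS-BPP of 0, which is why RS-BPP and decoder accuracy are reported together across the depth range.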
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.