
intsystems/GenAdvAttacks

Title


Author Name Surname
Consultant Name Surname, PhD/DSc
Advisor Name Surname, PhD/DSc

Assets

Abstract

Deep learning methods are widely used for time series analysis in domains such as healthcare, finance, energy systems, and environmental monitoring. However, deep neural networks remain vulnerable to adversarial attacks, in which small input perturbations cause significant degradation in predictive performance. Commonly used gradient-based adversarial attacks (e.g., iterative first-order methods) are computationally expensive: they require repeated backpropagation through the victim model to compute input gradients and many iterative refinement steps to obtain high-quality perturbations under the chosen constraint set.
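To make the cost structure of such baselines concrete, here is a minimal, self-contained sketch of an iterative first-order (PGD-style) attack under an L-infinity constraint. The victim here is a toy linear regressor standing in for a deep network, and all names and hyperparameters are illustrative assumptions, not this repository's implementation; note that every step recomputes the input gradient, which is exactly the per-step backpropagation cost the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical victim: a linear regressor f(x) = w @ x, standing in for a deep net.
w = np.array([0.5, -1.0, 2.0])

def loss(x, y):
    return (w @ x - y) ** 2

def input_grad(x, y):
    # Analytic gradient of the squared error with respect to the input x
    # (a deep net would obtain this via backpropagation at every step).
    return 2.0 * (w @ x - y) * w

def pgd_attack(x, y, eps=0.1, alpha=0.02, steps=20):
    """Iterative first-order attack under an L-inf ball of radius eps."""
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start inside the ball
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(x_adv, y))  # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)               # project back
    return x_adv

x = np.array([1.0, 0.5, -0.5])
y = w @ x  # exact label, so the clean loss is zero
x_adv = pgd_attack(x, y)
print(loss(x, y), loss(x_adv, y))  # the attack drives the loss up from 0
```

The `steps` gradient computations per example are what the generative approach below amortizes into a single forward pass at attack time.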

In this work, we propose model-based adversarial attacks in which perturbations are generated by a neural network. During training, the generative model learns to produce perturbations that maximize the loss of a frozen target model. We introduce both single-step and iterative generative attack schemes and evaluate them on multiple time series datasets and model architectures. Experimental results demonstrate that the proposed approach achieves performance comparable to classical gradient-based attacks while offering substantial computational advantages.
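As a hypothetical illustration of the training scheme described above (not the repository's actual architecture), the following NumPy sketch trains a one-layer tanh "generator" to emit bounded perturbations that maximize the loss of a frozen linear victim. The generator, victim, budget, and optimizer are all simplifying assumptions; the point is the structure: gradients flow to the generator's weights during training, while attacking a new input afterwards costs only one forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen victim: a linear regressor, standing in for the target deep model.
w = np.array([0.5, -1.0, 2.0])

def victim_loss(x, y):
    return (w @ x - y) ** 2

EPS = 0.1  # L-inf budget: tanh squashing keeps every perturbation inside it

# Generator: a single linear layer V followed by EPS * tanh(.).
V = rng.normal(scale=0.1, size=(3, 3))

def generate(x):
    return EPS * np.tanh(V @ x)

# Training data: clean inputs with exact labels, so the clean loss is zero.
X = rng.normal(size=(32, 3))
Y = X @ w

lr = 0.5
for _ in range(300):
    grad_V = np.zeros_like(V)
    for x, y in zip(X, Y):
        h = V @ x
        delta = EPS * np.tanh(h)
        r = w @ (x + delta) - y
        # d(victim loss)/dV via the chain rule through tanh.
        grad_V += np.outer(2.0 * r * w * EPS * (1.0 - np.tanh(h) ** 2), x)
    V += lr * grad_V / len(X)  # gradient *ascent*: maximize the frozen victim's loss

adv_loss = np.mean([victim_loss(x + generate(x), y) for x, y in zip(X, Y)])
print(adv_loss)  # well above the clean loss of 0.0
```

This is the single-step scheme; an iterative variant would feed `x + generate(x)` back into the generator for a few refinement rounds, still without querying the victim's gradients at attack time.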

Citation

If you find our work helpful, please cite us.

@article{citekey,
    title={Title},
    author={Name Surname and Name Surname (consultant) and Name Surname (advisor)},
    year={2025}
}

License

Our project is MIT licensed. See LICENSE for details.
