Deep learning is the foundation for various applications, including decision support, fraud detection, text categorization, machine translation, market research, and customer segmentation. Despite this widespread use, deep learning models are frequently vulnerable to adversarial examples, in which legitimate inputs are manipulated in subtle and often imperceptible ways. Even the most sophisticated models can be fooled by inputs that differ only slightly from their originals, with perturbations that a human observer would not notice. Exposing models to such maliciously crafted adversarial examples can improve their robustness. Our research proposes an adversarial attack that serves as a simple yet effective framework for generating adversarial text. We successfully attacked three pre-trained models, the powerful BERT, long short-term memory (LSTM) networks, and the widely used convolutional neural network (CNN), on five text classification datasets from different domains. Extensive experiments and comparisons against existing benchmarks show that our proposed attack recipe is more effective at generating successful adversarial examples for NLP. More specifically, we first identify the words that are most important to the target model's prediction, then iteratively replace them with the most semantically similar and grammatically plausible substitutes until the prediction changes.
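
The last sentence describes a greedy word-importance-then-substitution loop. The following is a minimal Python sketch of that general procedure, not the paper's actual implementation: the helpers `predict`, `label_of`, `candidates`, and `is_fluent` are hypothetical placeholders standing in for a victim classifier, a synonym source, and a semantic/grammar filter.

```python
# Hypothetical sketch of a greedy word-importance substitution attack.
# All helper callables are assumed inputs, not a specific library API.
from typing import Callable, List


def word_importance(words: List[str],
                    predict: Callable[[str], float]) -> List[int]:
    """Rank word positions by how much deleting each word lowers the
    victim model's confidence in its original prediction."""
    base = predict(" ".join(words))
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append(base - predict(reduced))
    # Most important (largest confidence drop) positions first.
    return sorted(range(len(words)), key=lambda i: scores[i], reverse=True)


def greedy_attack(text: str,
                  predict: Callable[[str], float],
                  label_of: Callable[[str], int],
                  candidates: Callable[[str], List[str]],
                  is_fluent: Callable[[str], bool]) -> str:
    """Replace important words with semantically similar, fluent
    substitutes until the predicted label flips (or words run out)."""
    words = text.split()
    original_label = label_of(text)
    for i in word_importance(words, predict):
        current_conf = predict(" ".join(words))
        best, best_drop = None, 0.0
        for sub in candidates(words[i]):
            trial = " ".join(words[:i] + [sub] + words[i + 1:])
            if not is_fluent(trial):
                continue  # reject substitutes that break grammar/semantics
            drop = current_conf - predict(trial)
            if drop > best_drop:
                best, best_drop = sub, drop
        if best is not None:
            words[i] = best
            if label_of(" ".join(words)) != original_label:
                break  # prediction changed: adversarial example found
    return " ".join(words)
```

In this sketch, `predict` returns the model's confidence in the original label for a given sentence, and each position is attacked at most once in order of importance, which keeps the number of perturbed words small.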