Deep learning methods have proven effective for malware detection. However, recent studies show that deep learning models are vulnerable to adversarial attacks, so malware detection models based on deep learning face the threat of adversarial examples. As a popular case of adversarial examples, adversarial images are usually generated by adding imperceptible perturbations to original images. When applying the same idea to Windows PE (Portable Executable) malware, the perturbations must not destroy the original file structure, and the original functionality of the PE malware must be preserved. Most existing Windows adversarial malware generation methods are derived from adversarial image methods with adaptive modifications, such as inserting perturbations into the slack space of the PE file. The two settings share some similarities but also differ in many ways, so directly applying adversarial image methods to malware degrades both efficiency and fooling rate. In this paper, we address these issues by proposing a method for generating Windows adversarial malware in PE format based on prototype samples of deep learning models. A prototype sample is the most typical sample of a given category for the classification model. Exploiting this characteristic, bytes from the prototype samples are added as perturbations to the malware samples, which quickly yields adversarial malware capable of fooling the target model. The proposed method is evaluated on a real-world malware dataset, and promising results show that it fools deep-learning-based malware detection models at a high rate.
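To make the perturbation idea concrete, the following is a minimal sketch, not the authors' implementation: bytes taken from a benign prototype sample are appended to the end of the malware file (the overlay region past the last section), which leaves the PE headers and code untouched so the program still runs. The function name `craft_adversarial` and the byte budget are hypothetical choices for illustration.

```python
def craft_adversarial(malware: bytes, prototype: bytes, budget: int) -> bytes:
    """Append up to `budget` bytes copied from a benign prototype sample
    to the end (overlay) of the malware file.

    Appending past the last section preserves the original structure and
    functionality of the PE file, since loaders ignore overlay data.
    """
    perturbation = prototype[:budget]   # bytes drawn from the prototype sample
    return malware + perturbation       # original content is fully preserved


# Toy usage with synthetic byte strings standing in for real PE files.
malware = b"MZ" + b"\x90" * 64     # fake PE-like payload
prototype = b"MZ" + b"\x00" * 256  # fake benign prototype sample
adv = craft_adversarial(malware, prototype, budget=128)
```

In practice, the appended bytes would be chosen (or iteratively refined) so that the detector's score for the modified file crosses the benign decision boundary.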