Federated learning is a recent branch of machine learning research that opens up many new opportunities. Its objective is to bring together several entities so that their locally trained models can be merged into a so-called global model. This merged model respects data confidentiality and makes it possible to combine similar models developed on more varied data and in different contexts. Continuous learning, which consists of retraining a model on new data, is key to keeping models up to date. However, prior work shows that continuous learning carries a risk of forgetting tasks learned from past data. This leads to a key question: can the global model in federated learning be designed to reduce the tendency for task forgetting? The present study investigates federated learning models augmented with a core memory backup to determine their impact on task forgetting, and compares these federated models with centralized learning models. To do so, the study uses an Artificial Neural Network dedicated to the classification of machinery fault anomalies from bearing failure data. Multiple anomaly distributions encountered in production were evaluated, including clustered, periodic, progressive, and random types. The results demonstrate the enhanced performance of federated learning, with on average 20% to 50% better accuracy than centralized learning under continuous learning, depending on the anomaly distribution. The federated learning model with memory is also shown to improve the stability of the results.
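The two mechanisms the abstract combines can be sketched in a few lines. The snippet below is a minimal illustration, not the study's actual implementation: it assumes FedAvg-style weighted parameter averaging for the global model merge, and a reservoir-sampling replay buffer as one plausible reading of the "core memory backup" (the abstract does not specify the mechanism). The names `federated_average` and `RehearsalMemory` are hypothetical.

```python
import numpy as np


def federated_average(client_weights, client_sizes):
    """Merge per-client parameter lists into a global model
    (FedAvg-style: mean weighted by local sample counts).
    This averaging rule is an assumption, not the paper's stated method."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]


class RehearsalMemory:
    """Fixed-size buffer of past samples, replayed alongside new data
    during retraining to mitigate task forgetting (hypothetical sketch)."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.samples = []
        self.seen = 0
        self.rng = np.random.default_rng(seed)

    def add(self, x, y):
        # Reservoir sampling: keeps a uniform random subset of all
        # samples seen so far, within the fixed capacity.
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append((x, y))
        else:
            j = self.rng.integers(0, self.seen)
            if j < self.capacity:
                self.samples[j] = (x, y)

    def replay_batch(self, new_x, new_y):
        # Mix stored past samples into the incoming batch before a
        # continuous-learning update, so old tasks stay represented.
        mem_x = [s[0] for s in self.samples]
        mem_y = [s[1] for s in self.samples]
        return list(new_x) + mem_x, list(new_y) + mem_y


# Example: two clients (1 and 3 local samples) each hold one weight vector.
g = federated_average([[np.ones(2)], [np.full(2, 3.0)]], [1, 3])

mem = RehearsalMemory(capacity=2)
for i in range(5):
    mem.add(np.array([float(i)]), i % 2)
bx, by = mem.replay_batch([np.array([10.0])], [1])
```

In this sketch each retraining round would first call `replay_batch` locally, train on the mixed batch, and then send the updated weights to the server for `federated_average`.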