Federated Learning (FL) has gained prominence as a decentralized, privacy-preserving paradigm that enables multiple clients to collaboratively train a machine learning model under the supervision of a central server, without sharing their data. By design, FL is a solution for data privacy but not model privacy. Recent attacks, such as gradient reconstruction attacks (GRAs), have shown precisely that privacy is compromised when an attacker knows the model parameters a client sends to the server. In the literature, these privacy issues are mainly explored when clients compute a single gradient descent step on their data (FedSGD). In a more realistic scenario, clients compute several gradient descent steps (FedAvg). This protocol adds intermediate computation steps that are unknown to the attacker, making GRAs less successful. In this paper, we introduce a new regularizer that makes GRAs more effective under FedAvg. We support our discussion with experiments in computer vision.
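
To make the FedSGD/FedAvg distinction concrete, below is a minimal sketch on a toy linear model (all names, e.g. `client_update_fedavg` and `local_steps`, are illustrative assumptions, not the paper's method). It contrasts the single-step client update, which directly exposes the true gradient to an attacker, with the multi-step local update, whose intermediate iterates stay hidden.

```python
import numpy as np

def grad(w, X, y):
    """Gradient of the mean squared error 0.5 * ||Xw - y||^2 / n."""
    return X.T @ (X @ w - y) / len(y)

def client_update_fedsgd(w, X, y, lr=0.1):
    # FedSGD: one local gradient step; the update the server receives
    # is exactly -lr times the true gradient, which a GRA can exploit.
    return w - lr * grad(w, X, y)

def client_update_fedavg(w, X, y, lr=0.1, local_steps=5):
    # FedAvg: several local steps; only the final weights are sent,
    # so the intermediate iterates are unknown to an attacker.
    for _ in range(local_steps):
        w = w - lr * grad(w, X, y)
    return w

rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 4)), rng.normal(size=32)
w0 = np.zeros(4)

update_sgd = client_update_fedsgd(w0, X, y) - w0  # -lr * true gradient
update_avg = client_update_fedavg(w0, X, y) - w0  # compound of 5 hidden steps
# The FedAvg update is generally not a rescaled single gradient:
print(np.allclose(5 * update_sgd, update_avg))    # False
```

The final check illustrates why GRAs degrade under FedAvg: the observed update no longer equals a (scaled) gradient of the loss at the shared model, so attacks designed for FedSGD invert the wrong quantity.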