All the reviewers find the paper of interest and worth publishing. Two of them raise some minor issues that the authors should address in the final version. For example, Assumption 1 should be better justified, and it should be clarified that the setup is not completely distributed. The reviewers also give some suggestions to clarify the algorithm and, more generally, the presentation of the paper. Overall, this is a solid conference paper.

Reviewer 4 of CDC 18 submission 1525

Comments to the author
======================

This paper presents a privacy-preserving distributed optimization algorithm with a specific client-server architecture. The paper explains the setup well. The results are stated without proofs here, but the overall idea seems clear. I have a few minor comments:

1. In Algorithm 1, I would write steps 6, 7, and 8 as cases of a single computation of $u$. Otherwise it gets confusing, as one can read them as different values of $u$ being calculated in the same algorithm.

2. What exactly does $\xi$ represent in line 9 of Algorithm 1? It is never introduced.

3. The authors use the term "parameters" often in the paper. Do they mean $x$ by that?

4. Is equation (2) a definition of M, or does it somehow imply that the multiplicative perturbations are equal? The wording is misleading here.

5. The authors should explain the steps of the secure consensus scheme for the sake of completeness.

6. The authors should define exactly what an honest-but-curious adversary is!

Finally, please edit the paper carefully, for example for typographical errors.

Reviewer 2 of CDC 18 submission 1525

Comments to the author
======================

This paper presents a privacy-preserving algorithm for distributed optimization using (mainly) noisy gradients. The topic (privacy in distributed optimization) is interesting. However, I have a few concerns:

1. The algorithm is not very innovative. Compared to the authors' previous work "Distributed optimization for client-server architecture with negative gradient weights", it adds additive noise and directly uses the consensus scheme from "Privacy-preserving methods for sharing financial risk exposures".

2. The assumptions are not well supported. It is not clear how to satisfy Assumption 1. In the paper, the graph of servers is assumed to be fully connected, which is not "distributed" in the usual sense.

3. The main results have no complete proofs (which is understandable).

4. There is no formal statement regarding privacy. The discussion is not very convincing either.

Reviewer 3 of CDC 18 submission 1525

Comments to the author
======================

The paper proposes a first-order algorithm for convex distributed optimization ("distributed learning") that ensures privacy of the local information while assuring global optimality. The methodology relies on the projected gradient algorithm (a generic sketch of this update is given after the reviews). The main results of the paper are presented in Theorems 1 and 2, and consist in proving (under boundedness and Lipschitz continuity of the gradients of the local objectives $f_i(x)$) that the sequence generated by the proposed algorithm converges to the set of optimal solutions $X^*$ with probability 1.

The paper is well written and the presentation is solid. The arguments used are well explained, even though some proofs were omitted for brevity. For this reason, I believe the paper is suitable for publication in the conference.

More comments and suggestions are in the attached file.
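
For reference, the projected gradient update that Reviewer 3's summary refers to takes, in its generic textbook form, the shape below. The step size $\alpha_k$, the convex feasible set $X$, and the aggregated gradient are standard placeholders here, not a reproduction of the paper's actual Algorithm 1 with its privacy perturbations:

$$x_{k+1} = \Pi_X\!\left(x_k - \alpha_k \sum_{i=1}^{n} \nabla f_i(x_k)\right), \qquad \Pi_X(y) = \arg\min_{z \in X} \|z - y\|_2,$$

where $\Pi_X$ denotes Euclidean projection onto $X$. Under boundedness and Lipschitz continuity of the gradients and a diminishing step size ($\sum_k \alpha_k = \infty$, $\sum_k \alpha_k^2 < \infty$), iterates of this form converge to the optimal set $X^*$; when the gradients are observed with zero-mean noise, convergence holds with probability 1, which matches the shape of the claims in Theorems 1 and 2.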