We can think of two reasons for using distributed machine learning: because you have to, and because you want to (hoping it will be faster). Only the first is a good reason.
Distributed computation is generally hard because it adds an extra layer of complexity and communication overhead. The ideal case is scaling linearly with the number of nodes, but that is rarely achieved in practice. There is mounting evidence that, very often, one big machine, or even a laptop, outperforms a cluster.