Nonsmooth optimization (NSO) refers to the general problem of minimizing (or maximizing) functions that are typically not differentiable at their minimizers (maximizers). NSO problems arise in many application areas: for instance, in economics, mechanics, engineering, control theory, optimal shape design, machine learning, and data mining, including cluster analysis and classification. Most of these problems are large-scale, and constantly growing database sizes, for example in clustering and classification, make them even more challenging to solve. NSO problems are in general difficult to solve even when the problem is small and convex. In this chapter we recall two numerical methods for solving large-scale nonconvex NSO problems: the limited memory bundle algorithm (LMBM) and the diagonal bundle method (D-BUNDLE). We also recall the convergence properties of these algorithms. Numerical experiments with problems of up to a million variables indicate that the methods are also usable in real-world applications with big data sets.
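To make the setting concrete, the following is a minimal sketch of a nonsmooth convex problem solved with a basic subgradient method; this is an illustrative toy example only, not the LMBM or D-BUNDLE methods discussed in this chapter. The particular objective, the Polyak step size, and the assumption that the optimal value f* = 0 is known are all choices made for this sketch.

```python
import numpy as np

def f(x):
    # Toy nonsmooth convex objective f(x) = max_i |x_i - i|;
    # it is not differentiable at its minimizer x* = (1, 2, ..., n).
    return np.max(np.abs(x - np.arange(1, x.size + 1)))

def subgrad(x):
    # One valid subgradient at x: the gradient of the piece |x_i - i|
    # that attains the maximum (ties broken arbitrarily by argmax).
    r = x - np.arange(1, x.size + 1)
    i = np.argmax(np.abs(r))
    g = np.zeros_like(x)
    g[i] = np.sign(r[i])
    return g

def subgradient_method(x0, f_star=0.0, iters=100):
    # Polyak step size (f(x) - f*) / ||g||^2, usable here only
    # because the optimal value f* = 0 is known for this toy problem.
    x = x0.astype(float)
    for _ in range(iters):
        fx = f(x)
        if fx <= f_star:      # already optimal; avoid a zero subgradient
            break
        g = subgrad(x)
        x = x - (fx - f_star) / (g @ g) * g
    return x

x = subgradient_method(np.zeros(5))
print(f(x))  # → 0.0
```

Note that such a simple scheme degrades quickly as the dimension grows, since each step uses a single subgradient; bundle-type methods such as LMBM and D-BUNDLE instead aggregate subgradient information from several points, which is what makes them effective on the large-scale problems considered here.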