When knowledge of a situation is imprecise, it is natural to consider one alternative (or solution) better than another only if it is better under every possible situation. In practice, this means that solutions will sometimes be only partially ordered, which in turn means that there may be more than one undominated solution. While identifying and representing such sets of undominated solutions in usual learning settings can be done by sheer enumeration, this is no longer true in combinatorial settings (multi-label problems, ranking problems, graph optimisation problems, etc.). Using the framework of imprecise probabilities, we illustrate this issue on different problems (mainly multi-label and learning-to-rank settings) and discuss the associated optimisation problems. We finish by pointing out further domains where the same kind of issue may arise.
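As a hypothetical illustration of the dominance notion above (not taken from the paper), consider the simple case where each candidate's imprecisely known value is an interval: one candidate dominates another if its worst case beats the other's best case (interval dominance). The names, intervals, and interval model below are all assumptions for the sketch; the undominated set is then obtained by pairwise comparison, which is exactly the enumeration that stops scaling in combinatorial settings.

```python
# Hypothetical sketch: a candidate's imprecise value is an interval (lo, hi).
# Candidate a dominates b if a is better in every possible situation,
# which under this interval model means a's worst case exceeds b's best case.

def dominates(a, b):
    """a strictly dominates b iff a's lower bound exceeds b's upper bound."""
    return a[0] > b[1]

def undominated(solutions):
    """Return the candidates not dominated by any other (the maximal set)."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Overlapping intervals stay incomparable, so several candidates survive.
cands = [(0.6, 0.9), (0.5, 0.8), (0.1, 0.3)]
print(undominated(cands))  # the first two overlap, so both remain
```

In a combinatorial setting such as multi-label classification with m labels there are 2^m candidate labelings, so this pairwise enumeration is infeasible and dedicated optimisation procedures are needed.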