The human decision-making process works in a way close to that of a logistic regression. Sure, it is parallel, sometimes non-linear and extremely high-dimensional. Still, when it comes to regular decisions, it often ends up in the same setup.

This process is exposed to the same dangers of bias and overfitting. We (or at least I) can naturally feel when we are biased, whether due to a lack of information or to analysis paralysis. There are pathological cases too, described in detail as the Dunning–Kruger effect.

On the other hand, we hardly notice when our decision making gets overfitted. We have a solution for every situation we tackle, we see no better solutions, and we are assured there are none. Yet from time to time other solutions come into place that are more effective and were simply not visible before.

The mathematical way to deal with this is regularisation. What it basically does is let the learning converge more slowly, keeping it from over-optimising and pushing it to focus on higher-level features. What would regularisation mean for our day-to-day lives? Probably getting rest, switching to another task (losing the context of the previous one and becoming a little less effective at it for a while), or, as described in this post by sorhed.
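To make the mechanical side of the analogy concrete, here is a minimal sketch of L2-regularised logistic regression trained with plain gradient descent. It is only an illustration: the function names, the toy data and the penalty strength are all assumptions for the example, not anything from the original analogy. The point is the `lam * w` term, which pulls every weight back towards zero on each step, so the model cannot latch onto noise.

```python
import numpy as np

# Minimal sketch: logistic regression with an L2 penalty.
# The penalty shrinks the weights on every update, so the fit
# converges more slowly and keeps only the strongest features.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lam=0.0, lr=0.1, steps=2000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y) + lam * w  # data gradient + L2 penalty
        w -= lr * grad
    return w

# Toy data: two informative features, eight pure-noise features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=200) > 0).astype(float)

w_plain = fit_logistic(X, y, lam=0.0)  # no regularisation
w_reg = fit_logistic(X, y, lam=1.0)    # with regularisation

print("noise-feature weights, lam=0:", np.round(w_plain[2:], 2))
print("noise-feature weights, lam=1:", np.round(w_reg[2:], 2))
```

Running it, the unregularised fit assigns noticeable weight to the noise features (an overfitted, over-confident solution), while the regularised one keeps them close to zero, which is exactly the "do not over-optimise, keep the high-level picture" behaviour described above.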