So, what basically happens here is this: what we call a "climate model" is a very complex computation that takes a huge number of input values (say, weather data from several decades) and outputs either a single value (say, the expected average temperature of the next decade) or a small set of values (for example, a temperature range that we expect to hold with a certainty of X%).
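To make that input/output shape concrete, here is a deliberately over-simplified sketch. The function name, its "sensitivity" knob and the formula are invented purely for illustration; real climate models are large physics simulations, not a few lines of code.

```python
# A toy stand-in for the idea: many input values go in, a point estimate and
# a rough uncertainty range come out. Everything here is made up.
import numpy as np

def toy_model(monthly_temps: np.ndarray, sensitivity: float = 1.0) -> tuple[float, float]:
    """Map decades of past temperatures to (predicted mean, +/- range) for the next decade."""
    recent = monthly_temps[-120:]                       # the last ten years of monthly values
    trend = (recent.mean() - monthly_temps.mean()) * sensitivity
    return recent.mean() + trend, 2.0 * recent.std()    # point estimate and a crude range
```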
Scientists have a good idea of what factors into such a model in general, but they aren't completely sure how to weight and connect all the different factors. To deal with this problem, they don't just pick an arbitrary set of values and formulas for their model and pretend they are correct. Instead, they run the model repeatedly with slightly different values on historical input data and compare the results with outcomes that are already known (for example, the average temperatures of the 80s, 90s and 00s) to see which version of the model fits best.
Only then do they run their model on current data to try predicting the future.
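That tuning loop could, very roughly, look like this. The `calibrate` function and all the data names are hypothetical, and a grid search over a single parameter stands in for what is in reality a far more elaborate calibration:

```python
# Hypothetical hindcast calibration: run a model repeatedly with slightly
# different parameter values on historical inputs, score each run against
# decades whose outcomes are already known, and keep the best-fitting value.
import numpy as np

def calibrate(model, past_inputs: list[np.ndarray], known_outcomes: list[float],
              candidates: np.ndarray) -> float:
    """Return the parameter value whose hindcasts best match known history."""
    def squared_error(param: float) -> float:
        return sum((model(x, param)[0] - y) ** 2          # [0] = the point estimate
                   for x, y in zip(past_inputs, known_outcomes))
    return min(candidates, key=squared_error)

# Usage sketch, reusing the toy_model above with made-up data names:
# best = calibrate(toy_model, inputs_before_each_known_decade,
#                  observed_decade_means, np.linspace(0.0, 2.0, 41))
# forecast, spread = toy_model(all_data_up_to_now, best)   # only now predict the future
```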
If this way of doing things has worked reasonably well in the past, but now suddenly predicts something different from what it predicted before, the question becomes: what has changed? Apparently it is not the process that has changed, so it must be the input data.
In the best case, this is an example of "overfitting" - meaning that the most recent input data contains just enough "outliers" that they appear relevant to the model, even though they will later turn out to be irrelevant noise.
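A tiny synthetic example of that effect (all numbers invented): a flexible model tuned on data whose last few points are outliers will usually score worse on the years held back for checking than a simpler one.

```python
# Synthetic illustration of overfitting: a few unusual values at the end of
# the training data look like signal to a flexible model, which then
# typically does worse on the held-back years than a simple straight line.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(40, dtype=float)
temps = 0.02 * years + rng.normal(0.0, 0.05, size=years.size)
temps[32:35] += 0.4                          # "outliers" in the most recent training years

train_x, train_y = years[:35], temps[:35]    # data the models are tuned on
test_x, test_y = years[35:], temps[35:]      # years held back for checking

for degree in (1, 4):
    coeffs = np.polyfit(train_x, train_y, degree)
    held_out_error = np.mean((np.polyval(coeffs, test_x) - test_y) ** 2)
    print(f"polynomial degree {degree}: held-out error {held_out_error:.4f}")
```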
In the worst case, the change in the input data is not some spurious error, but an actual trend that is only now becoming apparent. In that case, the new models might be just as correct as their predecessors always were.
Finding out which of the two it is means comparing the former and current models not only by their outcomes, but by the actual computation they perform. This can be a non-trivial task, especially if certain forms of machine learning were used to create the models.
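One crude way to compare the computations rather than just the outputs is a finite-difference sensitivity probe: nudge each input a little and record how strongly the output reacts, then compare those profiles between the old and the new model. This is only a sketch; the model interface and all the names are assumptions, and real model intercomparison is far more involved.

```python
# Crude "compare the computation" probe: estimate, for each input, how much a
# small nudge changes the model's output. Two models can agree on a prediction
# while having very different sensitivity profiles, which would show up here.
import numpy as np

def sensitivities(model, baseline: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Finite-difference estimate of the output's sensitivity to each input."""
    base = model(baseline)
    result = np.empty(baseline.size)
    for i in range(baseline.size):
        nudged = baseline.copy()
        nudged[i] += eps
        result[i] = (model(nudged) - base) / eps
    return result

# Usage sketch (old_model, new_model and reference_inputs are hypothetical):
# old_profile = sensitivities(old_model, reference_inputs)
# new_profile = sensitivities(new_model, reference_inputs)
# disagreement = np.linalg.norm(old_profile - new_profile)
```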