Truth 03

Is there an ultimate Truth? Does it matter?

Most would argue that what’s true is true: that there does indeed exist an ultimate, quintessential underlying reality. But is this a valid understanding?

In my studies of mathematics, especially probability and statistics, we assume that a true parameter value exists. This is a fiction, a device for doing theory and developing methods: it lets us show theoretically that one method of estimating the parameter has better or worse properties than another. The overall effect, though, is to assume that something true exists, which we then choose to model imperfectly.
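To make the fiction concrete, here is a minimal sketch (the parameter value and noise level are my own assumptions): we posit a true mean, then show by simulation that the sample mean recovers it. This is exactly the kind of property the theory certifies, and it only makes sense because a fixed truth was assumed in the first place.

```python
import random

random.seed(0)

TRUE_MU = 5.0  # the posited "true" parameter value (an assumption of the sketch)

# Repeat the experiment many times: draw a sample, compute the sample mean.
estimates = []
for _ in range(2000):
    sample = [random.gauss(TRUE_MU, 2.0) for _ in range(50)]
    estimates.append(sum(sample) / len(sample))

# Averaged over repetitions, the estimator centers on TRUE_MU. That is what
# it means, in the theory, for the method to "work well": it recovers the
# truth we assumed to exist.
mean_of_estimates = sum(estimates) / len(estimates)
print(abs(mean_of_estimates - TRUE_MU) < 0.1)
```

The whole demonstration presupposes a fixed TRUE_MU; remove that assumption and there is nothing for the estimator to converge to.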

I disagree with this assumption! Instead, I think the universe is in constant flux and unevenly distributed. This implies that there is no ultimate truth, and that the truth we observe today may be materially different tomorrow. This is perhaps a difficult concept for people accustomed to working with concrete objects and models in the real world. But not to worry: this conception of truth is not needed for 99% of the thinking we do.

So when might we want to assume a different model of truth? Advanced physics, for one. Our understanding of the universe rests on the cosmological principle: Newton first asserted the notion that the spatial distribution of matter in the universe is homogeneous and isotropic when viewed on a large enough scale. Whether the universe is in fact homogeneous and isotropic, or instead inhomogeneous and anisotropic, remains an open question in physics.

This has recently become topical in advanced physics and cosmology, especially with results concerning dark matter, backreaction, and quanta. I will not discuss that here (I’m not a good physicist), but you may search the issue if interested.

We also need to concern ourselves with this model of truth when we consider artificial intelligence. If a model takes a fixed truth as its objective, it may discard highly relevant but discordant data. As the complexity of our AI systems increases, we will not be able to understand their conclusions directly; instead we must know that the methods on which the AI rests are sound and comprehensive.
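As a hypothetical illustration of that failure mode (the threshold and the shift are my own assumptions, not a description of any real system): a model that treats its fitted parameters as the fixed truth, and filters out anything discordant with them, will reject nearly every observation from a genuinely changed world.

```python
import random

random.seed(1)

# "Truth" fitted on historical data and then frozen.
belief_mu, belief_sigma = 0.0, 1.0

def accept(x):
    # Discard any point more than 3 sigma from the assumed truth.
    return abs(x - belief_mu) <= 3 * belief_sigma

# The world shifts: new observations now center on 6, not 0.
new_regime = [random.gauss(6.0, 1.0) for _ in range(100)]
kept = [x for x in new_regime if accept(x)]

# Nearly every new-regime observation is thrown away as an "outlier",
# so the model's frozen truth is never revised.
print(len(kept), "of", len(new_regime), "new-regime points survive the filter")
```

The discarded points were the most informative ones: they were the evidence that the truth had moved.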

This is a field that does matter to me. In practice, for supervised methods, the sheer expanse of data is the larger issue. For unsupervised methods, we are generally trying to get a sense of the data or highlight patterns, which likely does not need our expanded notion of truth. Last, we consider reinforcement learning, which I suggest may be highly sensitive to unevenness in the underlying universe; you might consider this the effect of the long tail. I would say the inhomogeneous-anisotropic worldview matters most for reinforcement learning, and in particular for the probabilistic graphical models I am most interested in.
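To sketch why reinforcement learning is sensitive to such drift (the drift model here is a toy assumption of mine): a value estimate that averages all history assumes one fixed truth and converges to the average past, while the constant-step-size update commonly used for nonstationary problems weights recent rewards more and tracks the current truth.

```python
import random

random.seed(2)

avg_estimate, step_estimate = 0.0, 0.0
n = 0
ALPHA = 0.1  # constant step size: recent rewards carry more weight

for t in range(1000):
    true_mean = 5.0 * t / 999               # the "truth" drifts from 0 up to 5
    reward = random.gauss(true_mean, 1.0)
    n += 1
    avg_estimate += (reward - avg_estimate) / n        # equal weight to all history
    step_estimate += ALPHA * (reward - step_estimate)  # recency-weighted

# The full-history average lands near the average truth over the whole run
# (about 2.5), while the recency-weighted estimate ends near the current
# truth (about 5).
print(round(avg_estimate, 1), round(step_estimate, 1))
```

An agent whose value estimates assume a fixed truth is, in effect, always acting on a world that no longer exists.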