A very rough draft of a formalization of science, largely based on the system of science established by Karl Popper.
Many scientists reject philosophy of science, and indeed philosophy as a whole, considering it unimportant. But science itself is a subset of philosophy, and specifically of metaphysics; even more specifically, it is a subset of metaphysics founded on empirical investigation. However, because science education includes neither a thorough history nor a thorough philosophy of science, there is often a great deal of ambiguity in discussions of science. Even among scientists, there seem to be disagreements about and misunderstandings of what constitutes science. The following is an axiomatic formulation of science, working largely from Karl Popper’s falsification criterion.
Is science inductive or deductive? The goal of science, as it stands now, is to take a model and attempt to falsify it. We falsify a model by generating a prediction (stating an outcome of an experiment) and showing that the actual outcome differs from the expected one. This deductive process was established by Karl Popper as a rejection of earlier inductive descriptions of science. However, how do we come up with a model in the first place? Very often, we make observations and generate a model which explains them. This occurs outside the realm of science, but is still intimately connected to it.
(1) The universe is logically consistent.
(2) Empirical evidence can exist.
(3) Empirical observations, barring human error, are accurate. (The universe is not trying to trick us)
(3b provisional) Unless there is evidence to the contrary, human error has not occurred.
(4) There exists N, a collection of laws, such that each law l in N is true and cannot be expressed as a logical consequence of any subset of N other than {l} itself. The laws in N are the fundamental laws of nature.
(5) A model (theory), which has been accurate at making predictions in the past, will continue to be accurate at making predictions.
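The independence condition in assumption (4) can be stated more formally. The following is a sketch in my own notation, not the author's, where S ⊨ l means that l is a logical consequence of S:

```latex
\exists N \;\; \forall l \in N : \quad
  l \text{ is true}
  \;\wedge\;
  \forall S \subseteq N \setminus \{l\} : \; S \nvDash l
```

That is, every fundamental law is true, and none is derivable from the remaining laws; N is a minimal, mutually independent set.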
S1 science requires an additional assumptive layer, which is one reason why Hume and Popper, among others, had issues with it. However, even with the added assumption, neither S0 nor S1 results in the logical conclusion of “truth.”
Alternatively, the inductive process can be seen as the “constructive” phase of science. During the constructive phase, observations and hypotheses are generated and then generalized into a theory. But that theory has not been tested in any way; it is in its infancy. It is still a theory, however, so long as it is a model which can be used to make predictions, and whose predictions could potentially falsify it.
Karl Popper defined science based on the falsifiable theory. In order to formalize the falsification process, I need to define an experiment and an outcome, and write a model in terms of whether or not an outcome is necessary or possible for a given experiment. So we have a collection of experiments E and a collection of models M.
What does it mean to be falsified? It means that the observed outcome of an experiment is either impossible under the model, or that an outcome which is necessary according to the model did not occur. Falsification is essentially a statistical version of proof by contradiction. As such, the theory, in falsification, starts as just an assumption. And just as with proof by contradiction, a failure to falsify the theory tells us nothing, or at least very little, about the truth of the theory in question.
What does it mean to make a prediction using the model? It means showing that an outcome of a given experiment is necessary, according to the model.
Origin of Theories
The origin of theories is often described as inductive, and included within the scientific method. While it is true that we often base theories off of observations, this is not always the case. We can produce theories purely from mathematical deduction, which is often the case in theoretical physics. We could come up with a theory at random. Or we could produce a theory based on what we would like to be true. For instance, if we wanted time travel to exist, we could work on constructing theoretical physical models which allow for time travel, and then go about testing those theories using science. Given the diverse nature of theory development, some being very subjective and random, it does not make sense to call the process of theory development itself science. Theory development and science are just two closely related topics.
The use of the term “law” is largely historical. It is often used to refer to specific mathematical relationships, which have been tested repeatedly. However, a law is a theory. It is a rather simple form of theory in fact, as it can be directly expressed in mathematical terms. The issue with the term is that laws are given undue reverence. A law is still a theory.
Saying that an outcome of an experiment is necessary means that, according to the model, p(outcome) = 1. Saying that it is possible means that p(outcome) > 0, and saying that it is impossible means that p(outcome) = 0. Meanwhile, an outcome is almost certain to happen if st(p(outcome)) = 1, and almost certain not to happen if st(p(outcome)) = 0, where st denotes the standard part of a hyperreal probability and p(outcome) is neither 0 nor 1.
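The modal vocabulary above, and the falsification criterion it supports, can be sketched in code. This is a minimal illustration with hypothetical names, using ordinary floats in place of the model's probabilities; the infinitesimal gap between "necessary" and "almost certain" cannot be represented with floats, so only the exact cases are handled.

```python
def modality(p: float) -> str:
    """Classify an outcome's modal status under a model, given p = p(outcome)."""
    if p == 1.0:
        return "necessary"
    if p == 0.0:
        return "impossible"
    return "possible"

def is_falsified(predicted: dict, observed) -> bool:
    """predicted maps each outcome to its probability under the model;
    observed is the outcome that actually occurred."""
    # Falsified if the observed outcome was impossible under the model...
    if predicted.get(observed, 0.0) == 0.0:
        return True
    # ...or if some other outcome was necessary (p = 1) but did not occur.
    return any(p == 1.0 and o != observed for o, p in predicted.items())
```

For example, a model that assigns probability 1 to "heads" is falsified by observing "tails", while a fair-coin model is consistent with either observation (which, as argued above, tells us little about its truth).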
There is a further distinction to keep in mind: physical states versus our perception of those states, and the true outcome of an experiment versus our perception of that outcome.
Claims: in science we can only make “factual” claims within the framework of our current models. In other words, we can say that a claim is consistent with the framework or that it is inconsistent with it.
Consider X, a collection of independent variables, I, the collection of independent variable states, and O, the outcome space.
An experiment E is the collection <X, I, O>.
A theory T is a function from I to P, where P is a hyperreal probability space assigning probabilities to O.
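These definitions can be sketched as types. The names below are my own, and plain floats stand in for the hyperreal probability space:

```python
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Tuple

Outcome = str
State = Dict[str, float]  # one assignment of values to the independent variables

@dataclass(frozen=True)
class Experiment:
    X: FrozenSet[str]          # the independent variables
    I: Tuple[State, ...]       # the admissible independent-variable states
    O: FrozenSet[Outcome]      # the outcome space

# A theory maps each independent-variable state to a probability
# distribution over O (a plain Dict[Outcome, float] here, standing in
# for a hyperreal probability space).
Theory = Callable[[State], Dict[Outcome, float]]

# Example: a theory of a fair coin, which ignores its independent variables.
fair_coin: Theory = lambda state: {"heads": 0.5, "tails": 0.5}
```

The design choice worth noting is that a theory is a function of the independent-variable state, not of the experiment as a whole: the experiment fixes what can be varied and observed, while the theory supplies a distribution for each setting.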
With this, and many other formulations, we can know nothing, or at least very little, about how likely a theory is to be true; we can only learn how likely it is to be false. This is discomforting, but that is just the way it is. Unless we can narrow the space of possible theories down to a finite number and assign a proper distribution to that space, we cannot know anything about the relative likelihood of competing theories.
Let us consider a very basic example. Suppose we have one independent variable, x, and one dependent variable, y, and that our theory is y = x^2. Then we can produce some hypotheses, such as: if x = 1, then y = 1; if x = 2, then y = 4; and so on. How certain are we that y = x^2 is the correct model? The question comes down to two things: how many other models could be consistent with the observed data, and how many potential data points there can be. Suppose x can only be a natural number. That severely limits the number of potential values, but even then, no matter how many hypotheses we test, the data we have is only an infinitesimal fraction of all potential results. Only if there are finitely many alternative theories, and we can falsify all of them, can we really learn much about how true the theory is. At most, Bayesian inference can, in such instances, move the probability of a theory being true up by some infinitesimal amount.
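The underdetermination point in this example can be made concrete. Below is an illustrative sketch (not from the original): a rival model that adds a polynomial term vanishing at every tested point agrees with y = x^2 on all the data collected so far, so that data alone cannot decide between the two models.

```python
def model_a(x: int) -> int:
    """The theory under test: y = x^2."""
    return x ** 2

def model_b(x: int) -> int:
    """A rival theory that agrees with model_a wherever we happened to
    test (x = 1, 2, 3), because the added term vanishes at those points."""
    return x ** 2 + (x - 1) * (x - 2) * (x - 3)

tested = [1, 2, 3]
assert all(model_a(x) == model_b(x) for x in tested)  # indistinguishable on the data

# Yet the models diverge at the very next untested point:
print(model_a(4), model_b(4))  # 16 versus 22
```

Since infinitely many such rivals can be constructed for any finite data set, passing every test so far leaves the space of surviving theories as large as ever.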
This is also why a theory is never “proven.” Given any model m, we would need to show that, for every experiment e, the outcome of e that actually occurs is possible under m, and that if an outcome is necessary under m, it occurs.
When can we make predictions in science? And when is making a prediction beyond science?