If we want to get really smart, we will have to leave no stone unturned. The foundations of calculus have been debated for some 23 centuries (from Archimedes to the Non-Standard Analysis of the 1960s). I cut the Gordian knot in a way never seen before. Nietzsche claimed he “philosophized with a hammer”; I prefer the sword. Watch me apply it to calculus.

I read in the recent (2013) MIT Press book “The Outer Limits Of Reason”, written by a research mathematician, that “all of calculus is based on the modern notions of infinity” (Yanofsky, p. 66). That’s a widely held opinion among mathematicians.

Yet, this essay demonstrates that this opinion is silly.

Instead, calculus can be made, just as well, in finite mathematics.

This is not surprising: Fermat invented calculus around 1630 CE, while Cantor made a theory of infinity only some 260 years later. That means calculus made sense without infinity. (Newton used this geometric calculus, which is reasonable, with any reasonable function; it’s rendered fully rigorous for all functions by what’s below. Roll over, Weierstrass: you were all, people, too smart by half!)

If one uses the notion of Greatest Number, all computations of calculus have to become finite (as there is only a finite number of numbers, hey!).

The switch to finitude changes much of mathematics, physics and philosophy. Yet, it has strictly no effect on computation with machines, which, de facto, already operate in a finite universe.

The first part gives generalities on calculus, for those who don’t know much; mathematicians can skip it. The second part is my original contribution to calculus (using only high school math!).

WHAT’S CALCULUS?

Calculus is a non trivial, but intuitive notion. It started in Antiquity by measuring fancy (but symmetric) volumes. This is what Archimedes was doing.

In the Middle Ages, it became more serious. Shortly after the roasting of Jeanne d’Arc, southern French engineers invented field guns (this mobile artillery, plus the annihilation of the longbow archers, is what turned the fortunes of the South against the London-Paris polity, and extended the so-called “Hundred Years’ War” by another 400 years). Computing trajectories became of the essence. Gunners could see that Buridan had been right, and that Aristotle’s physics was wrong.

Calculus allowed one to compute the trajectory of a cannon ball from its initial speed and orientation (the speed keeps changing, because air resistance itself varies with speed, so it’s tricky). Another thing calculus could do was to compute the area below a curve, and relate curve and area. The point? Sometimes one is known, and not the other. Higher dimensional versions exist (then one relates surfaces with volumes).
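
Here is a minimal finite-step sketch of the gunners’ problem (an illustration only, not a historical method; the drag coefficient, muzzle speed, elevation and step size are invented values for the demo):

```python
# Finite-step (Euler) integration of a cannon ball with speed-dependent drag.
# Every number below is an assumption chosen purely for illustration.
from math import cos, sin, radians, hypot

g = 9.81           # gravity, m/s^2
k = 0.0001         # quadratic drag per unit mass (invented value)
dt = 0.01          # finite time step -- the discrete 'h' this essay is about

speed, angle = 300.0, radians(45.0)            # muzzle speed and elevation (invented)
x, y = 0.0, 0.0
vx, vy = speed * cos(angle), speed * sin(angle)

while y >= 0.0:                                # march forward until the ball lands
    v = hypot(vx, vy)                          # current speed
    ax = -k * v * vx                           # drag opposes motion and grows with speed
    ay = -g - k * v * vy
    vx, vy = vx + ax * dt, vy + ay * dt        # one finite step
    x, y = x + vx * dt, y + vy * dt

print(f"range with drag ≈ {x:.0f} m; without drag it would be {speed**2 * sin(2*angle) / g:.0f} m")
```

Notice that only finitely many steps, each of finite size dt, are ever taken; that is exactly the situation the rest of this essay formalizes.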

Thanks to the philosopher and captain Descartes, inventor of analytic geometry, all this could be put into algebraic expressions.

Example: the shape of a sphere is known (by its definition); calculus allows one to compute its volume. Or one can compute where the maximum, or an inflection point, of a curve is, etc.

Archimedes made the first computations for simple cases, like the sphere, with slices. He sliced up the object he wanted, and approximated its shape by easy-to-compute slices, some bigger, some smaller than the object itself (these are now called Riemann sums, after the 19th-century mathematician, but they ought to be named after Archimedes, who truly invented them, 22 centuries earlier). As he let the thickness of the slices go to zero, Archimedes got the volume of the shape he wanted.
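
For the curious, here is what that slicing looks like when carried out with a strictly finite number of slices (a minimal sketch, not Archimedes’ actual argument): the volume of a unit sphere approximated by n thin cylindrical slices.

```python
# Approximate the volume of a sphere by stacking n thin cylindrical slices.
# With finitely many slices the result is never exactly 4*pi/3, but it gets
# as close as the finite budget of slices allows.
from math import pi

def sphere_volume_by_slices(n: int, radius: float = 1.0) -> float:
    dz = 2.0 * radius / n                           # thickness of each slice
    total = 0.0
    for i in range(n):
        z = -radius + (i + 0.5) * dz                # height of the slice's midpoint
        slice_radius_sq = radius * radius - z * z   # squared radius of that circular slice
        total += pi * slice_radius_sq * dz          # volume of the thin cylinder
    return total

for n in (10, 100, 10_000):
    print(n, "slices:", sphere_volume_by_slices(n), "   exact:", 4 * pi / 3)
```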

As the slices got thinner and thinner, there were more and more of them. From that came the idea that calculus NEEDED the infinite to work (and by a sort of infection, all of mathematics and logic was viewed as having to do with infinity). As I will show, that’s not true.

Calculus also allows one to introduce differential equations, in which a process is computed from what drives its evolution.

Fermat demonstrated the fundamental theorem of calculus: the integral is the area below a curve, and differentiating that integral gives the curve back; otherwise said, differentiating and integrating are inverse operations of each other (up to constants).
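
In modern notation (a standard statement, added here for readers who want the formulas; it is not spelled out above):

\[
\frac{d}{dx}\int_a^x f(t)\,dt \;=\; f(x),
\qquad
\int_a^b F'(x)\,dx \;=\; F(b) - F(a),
\]

which is the precise sense in which differentiation and integration undo each other, up to constants.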

Then arrived Newton and Leibniz. Newton went on with the informal, intuitive Archimedes-Fermat approach, what one should call the GEOMETRIC CALCULUS. It’s clearly rigorous enough (the twisted counterexamples devised in the nineteenth century became an entire industry, and graduate students in math have to learn them; Fermat, Leibniz and Newton, though, would have pretty much shrugged them off, saying the spirit of calculus was violated by this hair-splitting!).

Leibniz tried to introduce “infinitesimals”. Bishop Berkeley was delighted to point out that these made no sense. It would take “Model Theory”, a discipline from mathematical logic, to make the “infinitesimals” logically consistent (Robinson’s Non-Standard Analysis). However, the top mathematician Alain Connes is scornful of infinitesimals, stressing that nobody can point one out. But I have the same objection to… irrational numbers. Point at pi for me, Alain. Well, you can’t. My point entirely, making your point irrelevant.

FINITUDE

Yes, Alain Connes, infinitesimals cannot be pointed at. Actually, there are no points in the universe: so says Quantum physics. The Quantum says: all dynamics is waves, and waves point only vaguely.

However, Alain, I have the same objection to most numbers used in present-day mathematics. (Actually, the set of numbers I believe exist has measure zero relative to the set of so-called “real” numbers, which are anything but real, from my point of view!)

As I have explained in GREATEST NUMBER, the finite amount of energy at our disposal within our spacetime horizon reduces the number of symbols we can use to a finite number. Once we have used the last symbol, there is nothing more we can say. At some point, the expression N + 1 cannot be written. Let’s symbolize by # the largest number. Then 1/# is the smallest (positive) number. (Actually, (# – 1)/# is the fraction with the largest components.)
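
A minimal toy rendition of that arithmetic (a sketch only; a small stand-in value plays the role of #, since the real # would be astronomically larger):

```python
# Toy model of the finite number line described above; SHARP stands in for '#'.
from fractions import Fraction

SHARP = 1000                                      # stand-in for '#' (the real one is vastly larger)

smallest_positive = Fraction(1, SHARP)            # 1/# : the smallest positive number
largest_components = Fraction(SHARP - 1, SHARP)   # (# - 1)/# : the fraction with the largest allowed parts

print(smallest_positive)     # 1/1000
print(largest_components)    # 999/1000
```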

Thus, there are only so many symbols one can actually use in the usual computation of a derivative (as computers know well). Archimedes could have used only so many slices. (The whole infinity thing started with Zeno and his turtle, and the ever-thinner slices of Archimedes; the Quantum changes the whole thing.)

Let’s go concrete: computing the derivative of x → xx. It’s obtained by taking what the mathematician Cauchy, circa 1820, called the “limit” of the ratio ((x + h)(x + h) – xx)/h. Geometrically this is the slope of the line through the points (x, xx) and (x + h, (x + h)(x + h)) of the x → xx curve. That slope is (2x + h). Then Cauchy said: “Let h tend to zero; in the limit h is zero, so we find 2x.” In my case, h can only take finitely many values, each smaller than the last, but they stop. So ultimately, the slope is 2x + 1/#. (Not, as Cauchy had it, 2x.)
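
A minimal numerical check of that difference-quotient argument (a sketch; here the smallest step is simply whichever h we stop at, standing in for 1/#):

```python
# Chord slope of f(x) = x*x between x and x + h: algebraically it equals 2x + h.
def chord_slope(x: float, h: float) -> float:
    return ((x + h) * (x + h) - x * x) / h

x = 3.0
for h in (1.0, 0.5, 0.1, 0.01, 0.001):
    print(f"h = {h:<6g}  chord slope = {chord_slope(x, h):.6f}")
# Every slope we can actually compute is 2x + h for some finite h we wrote down;
# the value 2x itself is only ever the limit, never one of the computed slopes.
```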

Of course, the computer making the computation itself occupies some spacetime energy, and thus can never get to 1/# (as it monopolizes some of the matter used for the symbols). In other words, as far as any machine is concerned, 1/# = 0! In other words, 1/# is… infinitesimal.
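
An ordinary computer already behaves this way, with machine precision playing, by rough analogy, the role of 1/# (this is an analogy using ordinary IEEE-754 doubles, not the # defined above):

```python
# Shrink h by halving until the machine literally cannot tell x + h from x.
import sys

x = 1.0
h = 1.0
while x + h != x:          # keep halving while the sum is still distinguishable from x
    h /= 2.0

print("first h with x + h == x :", h)                        # about 1.1e-16 for x = 1.0
print("machine epsilon         :", sys.float_info.epsilon)   # about 2.2e-16
```

Below that threshold the machine treats h as zero, which is the practical meaning of 1/# = 0 for any finite computer.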

This generalizes to all of calculus. Thus calculus is left intact by finitude.

Patrice Ayme

Note: Cauchy, a prolific and major mathematician, but also an upright, fanatical Catholic who refused for decades to take an oath to the government, condemning his career, would have found it natural to believe in infinity, the latter being the very definition of god.
