Calculus is the mathematical study of change. It has two major branches: differential calculus and integral calculus. The former concerns rates of change and the slopes of curves, whereas the latter concerns the accumulation of quantities and areas under or between curves. Calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz, and was instrumental in enabling Newton to formulate his laws of motion. Despite its obvious success in describing the motion of objects, however, differential calculus in particular came in for some criticism. Perhaps the most prominent critic was the 18th-century Irish philosopher and bishop George Berkeley, who attacked differential calculus in his 1734 pamphlet The Analyst.
Berkeley’s argument was that the method relied on infinitesimals, which were treated as quantities that are simultaneously zero and non-zero. To see this, recall that to take the derivative of a function f() defined on the real numbers, we first calculate the quantity [f(x+h)-f(x)]/h, then evaluate the result at h = 0. The quantity h is an infinitesimal of the kind Berkeley was referring to. In the first step it cannot be zero, since we divide by h, and division by zero is not allowed; but then in the second step we set h equal to zero! You can see where Berkeley was coming from with his critique. However, it is now generally accepted that Berkeley’s criticism was answered by the rigorous development of limits in the 19th century.
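To see the tension numerically, here is a brief sketch (the function f(x) = x² and the sample point are hypothetical choices of mine): the quotient [f(x+h)-f(x)]/h equals 2x+h, which approaches 2x as h shrinks, yet is undefined at h = 0 itself.

```python
# Difference quotient for f(x) = x^2 at x = 3 (illustrative example).
# Algebraically [(x+h)^2 - x^2]/h = 2x + h, so the quotient approaches
# the derivative 2x as h shrinks -- but h can never actually be zero.
def f(x):
    return x ** 2

x = 3.0
for h in [1.0, 0.1, 0.01, 0.001]:
    q = (f(x + h) - f(x)) / h
    print(h, q)  # approaches 2x = 6 as h shrinks toward 0
```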
The solution mathematicians came up with was to define the derivative as the limit of [f(x+h)-f(x)]/h as h ‘tends to zero’. This was made precise using the ‘epsilon-delta’ definition of a limit, which I won’t go into here. Suffice it to say that the epsilon-delta definition relies on the existence of infinite sets. In a previous blog post I criticized the assumption that infinite sets exist – the so-called ‘axiom of infinity’ – on materialist grounds. A critic of this position might argue that removing infinite sets from mathematics would remove our ability to rigorously define the derivative of a function using limits; and they would be right. But there is an alternative formulation of calculus which obviates the need for such a definition altogether.
Discrete calculus is an analogue of calculus for functions defined on discrete domains. In the remainder of this blog post I will go through some of the basics. Consider a function f() defined on the finite domain X = {0,1,…,N}. The discrete derivative of f() is defined by Df(x) = f(x+1)-f(x); note that Df() is defined on {0,1,…,N-1}, since the definition requires x+1 to lie in X. The discrete derivative is linear: D(af+bg) = aDf + bDg for all integer constants a,b and functions f,g defined on X. We can derive a discrete analogue of the product rule: D(fg)(x) = f(x+1)Dg(x)+Df(x)g(x). We can also derive a discrete analogue of the quotient rule (provided g() is never zero): D(f/g)(x) = [Df(x)g(x)-f(x)Dg(x)]/[g(x)g(x+1)]. Recall that in standard (continuous) calculus, d(x^n)/dx = nx^(n-1). In discrete calculus the analogous rule is Dx^(n) = nx^(n-1), where x^(n) denotes the ‘falling power’: x^(n) = x(x-1)…(x-n+1).
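To make these definitions concrete, here is a minimal Python sketch (the names D and falling are my own, not anything standard) that checks the product rule and the falling-power rule at a sample point.

```python
# Discrete derivative: Df(x) = f(x+1) - f(x)
def D(f):
    return lambda x: f(x + 1) - f(x)

# Falling power x^(n) = x(x-1)...(x-n+1)
def falling(x, n):
    p = 1
    for k in range(n):
        p *= x - k
    return p

f = lambda x: x ** 2
g = lambda x: 3 * x + 1
x = 4

# Product rule: D(fg)(x) = f(x+1)Dg(x) + Df(x)g(x)
lhs = D(lambda t: f(t) * g(t))(x)
rhs = f(x + 1) * D(g)(x) + D(f)(x) * g(x)
print(lhs == rhs)  # True

# Power rule with falling powers: D x^(n) = n x^(n-1)
n = 3
print(D(lambda t: falling(t, n))(x) == n * falling(x, n - 1))  # True
```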
Euler’s number, e, is the number with the property that d(e^x)/dx = e^x. The discrete analogue of e is 2, as D(2^x) = 2^(x+1)-2^x = 2^x. The discrete integral is simply a sum: ∑a→b f(x) = f(a)+f(a+1)+…+f(b-1); note that the sum does not include f(b). The fundamental theorem of discrete calculus follows immediately from the definition, since the sum telescopes: ∑a→b Df(x) = f(b)-f(a). It is straightforward to determine from the fundamental theorem that ∑a→b x^(n) = (b^(n+1)-a^(n+1))/(n+1), where these are again falling powers. Note that it follows from the product rule above that Df(x)g(x) = D(fg)(x)-f(x+1)Dg(x). Integrating (summing) both sides between a and b, we obtain a discrete analogue of the integration by parts formula: ∑a→b Df(x)g(x) = f(b)g(b)-f(a)g(a)-∑a→b f(x+1)Dg(x). This formula allows us to do more advanced integrations.
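A quick sketch of the discrete integral and the fundamental theorem, again with names of my own choosing (dsum for the discrete integral):

```python
# Discrete derivative: Df(x) = f(x+1) - f(x)
def D(f):
    return lambda x: f(x + 1) - f(x)

# Discrete integral: sum of f(x) for x = a, ..., b-1 (f(b) excluded)
def dsum(f, a, b):
    return sum(f(x) for x in range(a, b))

f = lambda x: x ** 3
a, b = 2, 7

# Fundamental theorem: the sum of Df from a to b telescopes to f(b) - f(a)
print(dsum(D(f), a, b) == f(b) - f(a))  # True

# The discrete analogue of e is 2: D(2^x) = 2^x
g = lambda x: 2 ** x
print(all(D(g)(x) == g(x) for x in range(10)))  # True
```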
We can also define discrete second derivatives. The obvious definition applies the first derivative twice: D²f(x) = D(Df)(x) = f(x+2)-2f(x+1)+f(x). There is an alternative definition, though. Letting D+f(x) = f(x+1)-f(x) and D−f(x) = f(x)-f(x-1), we can set D²f(x) = D+(D−f)(x) = D−(D+f)(x) = f(x+1)-2f(x)+f(x-1). This definition has the advantage of being symmetric around x. In continuous calculus, sin() and cos() are functions f() with the property that d²f/dx² = -f. To find discrete analogues of these, we must find functions f() such that D²f = -f. Using the symmetric definition, this means solving f(x+1)-2f(x)+f(x-1) = -f(x), which simplifies to f(x+1)-f(x)+f(x-1) = 0, or f(x+1) = f(x)-f(x-1). Setting f(0) = 0 and f(1) = 1, we get the sequence (0,1,1,0,-1,-1,0,1,…); this is the discrete analogue of sin(). Setting f(0) = 1 and f(1) = 0, we get (1,0,-1,-1,0,1,1,0,…); this is the discrete analogue of cos().
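The recurrence above is easy to iterate in code. The following sketch (solve is my own name for the helper) generates the two sequences and checks the symmetric second-derivative property on the interior points.

```python
# Iterate the recurrence f(x+1) = f(x) - f(x-1) from two starting values
def solve(f0, f1, n):
    seq = [f0, f1]
    while len(seq) < n:
        seq.append(seq[-1] - seq[-2])
    return seq

dsin = solve(0, 1, 8)  # discrete analogue of sin()
dcos = solve(1, 0, 8)  # discrete analogue of cos()
print(dsin)  # [0, 1, 1, 0, -1, -1, 0, 1]
print(dcos)  # [1, 0, -1, -1, 0, 1, 1, 0]

# Symmetric second derivative: f(x+1) - 2f(x) + f(x-1) = -f(x)
print(all(dsin[x + 1] - 2 * dsin[x] + dsin[x - 1] == -dsin[x]
          for x in range(1, 7)))  # True
```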
Let dsin() and dcos() denote these discrete analogues of sin() and cos(). Then from the definitions, we have D+(dsin(x)) = dcos(x) and D+(dcos(x)) = -dsin(x+1). Similarly, we have D−(dsin(x)) = dcos(x-1) and D−(dcos(x)) = -dsin(x). These are analogous to the relations between sin() and cos() and their derivatives in standard calculus. Thus, we have successfully defined discrete analogues of derivatives (including the product and quotient rules), integrals (including integration by parts), second derivatives, Euler’s number, and the trigonometric functions sin() and cos(). There are yet more analogous definitions that can be made, but I will leave these for a future blog post.
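As a sanity check, the forward-derivative relations can be verified numerically on the first few terms; solve below is my own helper for iterating the recurrence f(x+1) = f(x)-f(x-1).

```python
# Iterate the recurrence f(x+1) = f(x) - f(x-1) from two starting values
def solve(f0, f1, n):
    seq = [f0, f1]
    while len(seq) < n:
        seq.append(seq[-1] - seq[-2])
    return seq

dsin = solve(0, 1, 10)
dcos = solve(1, 0, 10)

# D+(dsin(x)) = dcos(x)
print(all(dsin[x + 1] - dsin[x] == dcos[x] for x in range(9)))       # True
# D+(dcos(x)) = -dsin(x+1)
print(all(dcos[x + 1] - dcos[x] == -dsin[x + 1] for x in range(9)))  # True
```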