The really basic thing about calculus that we are not taught in Pakistan is that differentiation is tantamount to computing point-by-point differences, and integration is exactly equivalent to accumulating point-by-point sums of a function. We are instead misled into believing that integration is merely the area under the curve, and from that point onwards our understanding of calculus remains obscure. All the other methods, such as separation of variables or integration by parts, are really trivial, if not needless, once this is grasped. Numerical methods are taught to do calculus on computers, and they too only make sense to the human mind if it is known in advance that we are actually computing sums and differences. Everything else naturally builds upon these ideas. But alas, once a student leaves the calculus classroom, he keeps on babbling about the rate of change and the area under the curve without really knowing where they come from.
Having said this, I would also like to share that I have always wondered what the inventors of calculus (Newton, Leibniz, Fermat) must have been thinking when they proposed it. I believe they must have thought of it as differences and sums (of a time series, or of the values of Y as a function of X, the domain). After all, how else did they manage to bring about a paradigm shift in mathematics, giving it the leap from ordinary arithmetic, which we all become familiar with in our early years at school, to calculus, which literally has us crumble and shatters our self-confidence? The truth is that the only students confident about calculus are those who merely memorize various methods for computing derivatives and integrals (the chain rule, integration by parts). They excel at solving numerical problems without a clue about the underlying meaning of the solutions. This is how our textbooks are designed, at a stage when the mind of the student is most fertile and he is ready to take off in life. Instead, the mind is obfuscated by learning how to solve numerical problems.
Consider this example: the derivative of F at t+1 literally means computing the difference between the values F(t+1) and F(t). This naturally leads to the rate of change, for all it is telling us is how much F has changed in moving from t to t+1.
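The point can be seen in a few lines of Python. This is only an illustrative sketch; the choice of F(t) = t² and the sample points are my own, not from the text:

```python
import numpy as np

# Sample F(t) = t^2 at the integer points t = 0, 1, ..., 9.
t = np.arange(10)
F = t**2

# "Differentiation as point-by-point differences": the forward
# difference F(t+1) - F(t) is the discrete rate of change of F.
dF = np.diff(F)  # [F(1)-F(0), F(2)-F(1), ...]

print(dF)  # [1 3 5 7 9 11 13 15 17], roughly the derivative 2t
```

The differences of consecutive squares are the odd numbers, which track the derivative 2t of t², exactly the "how much has F changed from t to t+1" reading above.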
Similarly, the integral of F up to t+1 means the accumulated sum of F up to t+1. In other words, it is the sum of the values of F at all values of t up to t+1. That's it! That is all one needs to know to master calculus; the rest is just chit-chat and icing on the cake. As a matter of fact, the remaining methods (such as the chain rule or integration by parts) are needed only to hone the student's problem-solving skills.
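The running-sum view of integration can likewise be sketched numerically. Again the specific function f(t) = 2t and the step size are my own illustrative choices:

```python
import numpy as np

# Integrate f(t) = 2t from 0 to 1 by accumulating point-by-point sums.
dt = 0.001
t = np.arange(0, 1, dt)
f = 2 * t

# "Integration as an accumulated sum": the running sum of f(t)*dt
# approximates the antiderivative F(t) = t^2 at every point.
F = np.cumsum(f) * dt

print(F[-1])  # close to 1.0, the exact integral of 2t from 0 to 1
```

Nothing more is happening in a Riemann sum than this: multiply each value by the small step and keep a running total.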
Transform mathematics also becomes truly understandable once we come to grips with sums (in integration). Consider the Laplace and Fourier transforms. I have likewise wondered how these two giant mathematicians could have proposed something so bizarre. The integrals of both transforms look daunting, and a student can really sweat if he is jumped straight from a calculus class into integral transforms (a module that was once taught by Dr. Maud at LUMS, and I couldn't get a clue as to what it was all about, although I have immense respect for him).
I think that Fourier and Laplace took it for granted that integration is tantamount to computing sums, and given this, they proposed their transforms. Once we begin to develop a worldview of sums, we recognize that both transforms are merely inner products of two vectors, and hence simply correlation equations. That's it. Once we understand that, we simply see how the Fourier transform computes spectral components from a time-domain signal. Unless summing is understood, this level of profound imagination is really hard to develop. Again, the only students who do well in such courses are those who can precog the exam and prepare the numericals accordingly. They may sound very sharp and smart, but in reality they know nothing about the mechanics of the transforms.
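The "transforms are just inner products" claim can be made concrete with the discrete Fourier transform. Below, each spectral coefficient is computed as a plain sum of point-by-point products (a correlation of the signal with a complex sinusoid), and checked against NumPy's FFT. The test signal, a 3-cycle sine over 32 samples, is my own choice:

```python
import numpy as np

def dft(x):
    """Discrete Fourier transform written as plain inner products:
    each coefficient X[k] is the sum of x[n] times a complex sinusoid,
    i.e. the correlation of the signal with that frequency."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N))
                     for k in range(N)])

# Illustrative signal: 3 cycles of a sine over 32 samples.
N = 32
t = np.arange(N) / N
x = np.sin(2 * np.pi * 3 * t)

X = dft(x)
# The inner-product sums agree with NumPy's FFT.
print(np.allclose(X, np.fft.fft(x)))  # True
```

Seen this way, there is no mystery: the transform correlates the signal with each candidate frequency, and a large inner product simply means that frequency is present.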