I’ve not had enough time to do maths properly for the past… well, quite a long while, actually, but in particular I got myself a REAL JOB in INDUSTRY, in fact as a welder, which takes up twelve or thirteen hours most days (ten of them are spent Working)… Anyways, I’d Like To Let the Tau Fans Know Something about Circles in Industry.
Nobody measures the radius of an Industrial Circle. Never. Radii are not a Thing. It’s Diameters, if possible, that we measure, and if a circle is Unfinished, we compare it with a template. At least, most of the time. I imagine a Machinist working at a Lathe may well consider truly radial measurements, but we don’t have any genuine lathes in our plant, and that’s Just Fine.
And the Reason that no-one measures the Radius of an Industrial Circle is that it’s basically impossible. You see, to measure the radius, you have to locate the center. This is, of course, doable in principle, but pretty much a waste of time, because the center is located at the intersection of DIAMETERS. And if you have a diameter, you may as well measure THAT.
Some Caveats: the principal method of producing one of our Industrial Circles is by rolling (that is, incrementally locally bending) some material beyond its elastic/plastic transition. Sometimes it’s feasible to do this end-to-end and then check: have I closed a circle? Sometimes this is not feasible, and instead the ends of the material to be bent circlewise must first be matched to a template. But there are other ways — with a lathe, for instance, one may cut both external and internal circles. One will have a decent estimate of the radial location of the cutter, but one will still measure a diameter. But SOMETIMES, particularly in preparing the templates we have mentioned (sometimes to cut out holes to admit passage of other Industrial Circles), we will draw a circle using a compass, and then cut along the line. This is the ONLY case in which the radius is more easily known than the diameter, and it’s when we LEAST care about the circumference produced. For the compassed circles we use most, the center is promptly thrown away, because it would make the template too large and unwieldy.
That is all, you may now return to your what-have-you.
Dear Mr. Haran,
I’m grateful to you and correspondent Eischen (and Conway) for putting the name of Faulhaber on a calculation which heretofore I’d only known quoted, without attribution, by Heinrich Dörrie in Triumph der Mathematik (which I’ve only read in translation). However, I’m most frustrated that neither Dörrie nor Eischen gives any satisfying motivation for why the postulate should work.
For bystanders still catching up, this postulate is that if one defines a sequence of numbers $B_k$ “by expanding” $$(B-1)^{k+1} = B^{k+1}$$ and transcribing exponents to subscripts… one finds that the differences $$ (n + B)^{k+1} - B^{k+1} $$ similarly treated are equal to cumulative power sums, $$(k+1) \sum_{j \leq n} j^k$$
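For anyone who wants to push buttons: here is a small Python sketch (function names and structure mine, not Eischen’s) that carries out the expand-and-transcribe recipe with exact rationals, and checks that the umbral difference really does produce the power sums:

```python
from fractions import Fraction
from math import comb

def bernoulli(count):
    """B_0..B_count from the umbral relation (B-1)^m = B^m:
    expanding and transcribing exponents to subscripts, the B_m terms
    cancel and leave sum_{j<m} C(m,j) (-1)^(m-j) B_j = 0, which we
    solve for B_{m-1}.  (This convention gives B_1 = +1/2.)"""
    B = [Fraction(1)]
    for m in range(2, count + 2):
        # the coefficient of B_{m-1} is C(m, m-1) * (-1)^1 = -m
        rest = sum(comb(m, j) * Fraction(-1) ** (m - j) * B[j]
                   for j in range(m - 1))
        B.append(rest / m)
    return B

def power_sum(k, n):
    """((n+B)^(k+1) - B^(k+1)) / (k+1), treated umbrally."""
    B = bernoulli(k + 1)
    total = sum(comb(k + 1, j) * B[j] * Fraction(n) ** (k + 1 - j)
                for j in range(k + 1))
    return total / (k + 1)

# the umbral expression really does reproduce 1^k + 2^k + ... + n^k
for k in range(6):
    for n in range(8):
        assert power_sum(k, n) == sum(Fraction(j) ** k for j in range(1, n + 1))
```

(The convention baked into $(B-1)^{k+1}=B^{k+1}$ yields $B_1 = +\frac12$, which is the convention used throughout below.)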
So the calculation is doable. My Beef is Dilemmimorphic: Either the notational abuse of $(n + B)^k$ suggests that $B$ should be Some Kind Of Linear Operator, in which case what is it? Or else there’s an Amazing Coincidence being Overlooked!
It’s a comparative Triviality that the power sums $\sum_1^N n^k$ should be polynomials in $N$, and that the leading term be $\frac{1}{k+1} N^{k+1}$, so indeed it is perfectly reasonable to consider coefficients $B_{k,j}$ defined by $$ \sum_1^N n^k = \frac{1}{k+1} \sum \binom{k+1}{j} B_{k,j} N^{k+1-j} $$ BUT WHY SHOULD WE ASSUME that in fact $B_{k,j}$ depends only on $j$? That’s STAGE MAGIC, and the fact that indeed it somehow works does not explain “where it comes from” (Eischen’s favourite phrase on the matter).
So, in my customary way of starting with the actual problem and throwing at it what seems to me the minimum of thought, let’s first explicate that “comparative triviality”: the polynomials $p_k(j) = \binom{j+k}{j}$ are integral generators for the Integer-valued polynomials, and are recursively definable as iterated cumulative sums of the constant polynomial $p_0 \equiv 1$: $$\binom{j+k+1}{j} = \binom{j+k}{j} + \binom{j+k}{j-1}.$$ Hence, cumulative sums of any polynomial, written in the binomial basis, can be obtained just by incrementing the index (the sums here starting at $j=0$, which makes the identity exact): $$\sum_{j=0}^N \sum_n a_n p_n(j) = \sum_n a_n p_{n+1}(N)$$
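A quick button-pushing check of that incrementing claim (this is just the hockey-stick identity, exact on the nose when the sum starts at $j=0$):

```python
from math import comb

def p(k, j):
    # the binomial basis polynomial p_k(j) = C(j+k, j)
    return comb(j + k, j)

# cumulative sums in the binomial basis are computed by incrementing
# the index: sum_{j=0}^N p_k(j) = p_{k+1}(N)
for k in range(6):
    for N in range(10):
        assert sum(p(k, j) for j in range(N + 1)) == p(k + 1, N)
```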
Next, cumulative sums are themselves defined by induction: $“\sum_{j=1}^0” P(j) = 0$ and $\sum_{j=1}^{N+1} P(j) = P(N+1) + \sum_{j=1}^N P(j)$, or said differently, by the Difference equation $$ SP(N+1) - SP(N) = P(N+1).$$ In other words we are trying to solve the Difference Equations $$ S_k(N) - S_k(N-1) = N^k,$$ but in the basis of Monomials $N^j$ instead of Binomials $p_j(N)$.
The binomial theorem, $$ (x+y)^k = \sum \binom{k}{j} x^{k-j} y^j $$ makes the Taylor-MacLaurin formula a Theorem for polynomials $$ (x+y)^k = \sum y^j \frac{1}{j!} \frac{d^j}{dx^j} x^k $$ which is fruitfully abbreviated $$ P(x+y) = e^{y\, d/dx} P(x) $$ the Backwards Difference, then, is similarly $$ P(x) - P(x-1) = (1 - e^{- d/dx}) P(x) $$
Shall we say, The kernel of the Backward Difference is reasonably well understood? The differential operator is a retraction of the Integral operator $\int_0$, so the Taylor-MacLaurin formula provides us also a section for the Backward Difference operator, $$ 1-e^{-d/dx} = \frac{d}{dx} + A\frac{d^2}{dx^2} $$ where, for now, the main point is that the unbounded-degree differential operator $A$ commutes with $d/dx$, so that, for example $$ (1 - e^{-d/dx}) \left(\int_0 \sim dx - A + A^2 \frac{d}{dx} - A^3\frac{d^2}{dx^2} + - \cdots \right) P(x) = P(x)$$
Of course, there are various paths to the power series, other than via expansion of the powers of $A$, but there is a (Laurent) power series $$ \frac{1}{1-e^{-t}} = \frac{1}{2}\coth(\frac{t}{2})+\frac{1}{2} = \frac{1}{t} + \sum \frac{B_j}{j!} t^{j-1} $$ where $B_j$ are the faBulous Bernoulli numbers.
In any case, applied to simple powers, $$ \left( \int_0 \sim dx + \frac{1}{2} + \sum_{j=2}^{\infty} \frac{B_j}{j!} \frac{d^{j-1}}{dx^{j-1}} \right) x^k = \frac{1}{k+1} x^{k+1} + \sum_{j=1}^{k} \frac{k!}{j!(k-j+1)!} x^{k-j+1} B_j \\ {} = \frac{1}{k+1} \sum_{j=0}^{k} \binom{k+1}{j} B_j x^{k+1-j} $$ Finally, the power sum polynomials $S_k$ vanish both at zero (formally an empty sum) and at $-1$ (since $S_k(0) - S_k(-1) = 0^k$), so that in particular, $$ \sum_{j=0}^k \binom{k+1}{j} B_j (-1)^{k-j} = 0$$ THAT’S WHERE THIS IS COMING FROM.
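To see the operator formula in action: the following Python sketch (coefficient-list polynomials; helper names mine) applies $\int_0 + \frac12 + \sum_{j\ge2}\frac{B_j}{j!}\frac{d^{j-1}}{dx^{j-1}}$ to $x^k$, truncating the operator sum at $j=k$ as the displayed right-hand side does (that truncation is what pins down the constant term, i.e. $S_k(0)=0$), and checks the result against brute force:

```python
from fractions import Fraction
from math import comb

# polynomials as coefficient lists, c[i] = coefficient of x^i
def deriv(c):
    return [i * c[i] for i in range(1, len(c))]

def integ0(c):
    return [Fraction(0)] + [c[i] / (i + 1) for i in range(len(c))]

def bernoulli(count):
    # umbral recursion: sum_{j<m} C(m,j)(-1)^(m-j) B_j = 0, so B_1 = +1/2
    B = [Fraction(1)]
    for m in range(2, count + 2):
        B.append(sum(comb(m, j) * Fraction(-1) ** (m - j) * B[j]
                     for j in range(m - 1)) / m)
    return B

def S(k):
    """apply  integral + 1/2 + sum_{2<=j<=k} (B_j/j!) d^(j-1)/dx^(j-1)  to x^k"""
    B = bernoulli(k + 1)
    xk = [Fraction(0)] * k + [Fraction(1)]
    out = integ0(xk)
    out[k] += Fraction(1, 2)      # the +1/2 (i.e. B_1) term
    d = deriv(xk)                 # holds d^(j-1) x^k, starting at j=2
    fact = 2                      # holds j!, starting at 2!
    for j in range(2, k + 1):
        for i in range(len(d)):
            out[i] += B[j] * d[i] / fact
        d = deriv(d)
        fact *= j + 1
    return out                    # coefficients of the power-sum polynomial S_k

def evalp(c, x):
    return sum(ci * Fraction(x) ** i for i, ci in enumerate(c))

for k in range(1, 7):
    Sk = S(k)
    # S_k really sums k-th powers, and vanishes at 0 and at -1
    for n in range(1, 9):
        assert evalp(Sk, n) == sum(Fraction(j) ** k for j in range(1, n + 1))
    assert evalp(Sk, 0) == 0 and evalp(Sk, -1) == 0
```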
by which I mean: it’s obviously not “Pascal’s” “Triangle”; That is: it’s not just the fact that (in commutative algebra) there are “binomial coefficients”; nor even that, for reasons of applicable combinatorics, the binomial coefficients get to be called “en-choose-kay”. If anything is The Binomial Theorem, it’s the coincidence of the binomial coefficients and certain fractions involving factorials.
And while I’m as happy as the next fellow to agree that the number of subsets of a given size out of a set of the given size is equal to the number of permutations of the whole set modulo the permutations that fix the prefix/suffix partition at a fixed index… there are still more ways to interpret that equation than “precommutative terms in an algebraic expression, modulo the commutativity relations”.
That is, once we’ve decided that there ARE Binomial Coefficients, and that they are Integers, we can choose any argument that gives us those integers, even if it doesn’t look like it need give integers. The underlying combinatorics, even, may be identical, but where we apply them doesn’t have to be “the terms of an algebraic expression”.
In other words, we can read the Binomial Theorem as saying
$$ \frac{(a+b)^n}{n!} = \sum_{k+l=n} \frac{a^k}{k!}\frac{b^l}{l!} $$ just as well as anything else; and we can construe the left hand side as the volume of an $n$-simplex with axes $(a+b)$, and the right hand side similarly as a sum of suitable products of simplices.
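Pushing buttons again: a tiny Python check of that divided-powers reading. (The simplex model in the comment, ordered coordinates in an interval, is my choice of concrete picture, not anything from above.)

```python
from fractions import Fraction
from math import factorial

def simplex_vol(side, n):
    # volume of the n-simplex {0 <= x_1 <= ... <= x_n <= side},
    # which is side^n / n!
    return Fraction(side) ** n / factorial(n)

# (a+b)^n / n!  =  sum over k+l=n of (a^k/k!)(b^l/l!)
a, b = Fraction(3), Fraction(5)
for n in range(8):
    assert simplex_vol(a + b, n) == sum(
        simplex_vol(a, k) * simplex_vol(b, n - k) for k in range(n + 1))
```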
So I wrote a p3/java sketch to see what that looks like. Incidentally, I’ve often told myself that I write code that is basically terrible to maintain. Can’t seem to break those habits…
So, these Elliptic Curve things — It’s become very natural, post-Asteroids™, to think of “Torus” as $\mathbb{C}/\mathbb{Z}^2$, and so a function on an elliptic curve is “the same as” a doubly-periodic function on $\mathbb{C}$; it breaks some of the symmetry of this picture, but there is something marvelous that happens when one views the Torus instead as the quotient of a Cylinder and — because $\exp$ teaches us how to view $\mathbb{C}^\times$ as a cylinder — considers functions on a Torus as functions on $\mathbb{C}^\times$ with a discrete scaling-invariance.
Let $ 0 \lt |q| \lt 1 $, and for a warm-up, verify that the product
$$ R_q(z) = \prod_{n\in\mathbb{N}} (1 + q^{2n+1} z) $$ is absolutely convergent, with zeros $-\frac{1}{q^{2n+1}}$, and has a nice scaling property:
$$ R_q(z/q^2) = \frac{q+z}{q} R_q(z) $$ Check that similarly,
$$ R_q(\frac{q^2}{z}) = \frac{z}{q+z} R_q(\frac{1}{z}) $$ so that the product
$$ \Theta(q,z) = R_q(z) R_q(1/z) $$
has an even nicer scaling property,
$$ \Theta(q,z/q^2) = \frac{z}{q} \Theta(q,z) $$
and the fraction, even better:
$$\Psi(q,z) = \frac{\Theta(q,z)}{\Theta(q,-z)} = - \frac{\Theta(q,q^2z)}{\Theta(q,-q^2z)} = \Psi(q,z/q^4) $$
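All of which is checkable by truncating the products; a Python sketch (function names and truncation depths are mine):

```python
def R(q, z, terms=200):
    # truncated product R_q(z) = prod_{n>=0} (1 + q^(2n+1) z)
    out = 1.0 + 0j
    for n in range(terms):
        out *= 1 + q ** (2 * n + 1) * z
    return out

def Theta(q, z, terms=200):
    return R(q, z, terms) * R(q, 1 / z, terms)

def Psi(q, z, terms=200):
    return Theta(q, z, terms) / Theta(q, -z, terms)

q = 0.4
z = 0.7 + 0.3j
# R_q(z/q^2) = ((q+z)/q) R_q(z)
assert abs(R(q, z / q ** 2) - (q + z) / q * R(q, z)) < 1e-9
# Theta(q, z/q^2) = (z/q) Theta(q, z)
assert abs(Theta(q, z / q ** 2) - z / q * Theta(q, z)) < 1e-9
# Psi(q, z/q^4) = Psi(q, z)
assert abs(Psi(q, z / q ** 4) - Psi(q, z)) < 1e-9
```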
Furthermore, for straightforward reasons (it is symmetric under $z \leftrightarrow 1/z$), $\Theta(q,z)$ has critical points at $z=\pm 1$, so that the fraction $\Psi(q,z)$ has critical points at even powers of $q$, $z = q^{2m}$.
All of the preceding should be Routine. As the title of this post is meant to suggest, what we’ve called $\Theta$ is more like the star of this show than is $\Psi$, although it’s not really the famous $\theta$… we’ll get to $\theta$ in a minute. And the genuine star $\theta$ will give us the same $\Psi$, in some sense.
The way $\Theta$ is defined, it should also have a very good Laurent Series Expansion, but (here’s the trick) for now we’re only going to worry about the second variable:
$$ \Theta(q,z) = \sum_{n\in\mathbb{Z}} a_n(q) z^n $$
because the scaling property from above,
$$ \sum_{n\in\mathbb{Z}} a_n(q) (z/q^2)^n = \frac{z}{q} \sum_{n\in\mathbb{Z}} a_n(q) z^n $$
is telling us that
$$ q^{2n-1} a_{n-1}(q) = a_n(q) $$ so that, inductively,
$$ a_n(q) = q^{n^2} a_0(q) .$$
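One can watch this induction happen numerically: approximate the Laurent coefficients $a_n(q)$ by a discrete Fourier sum over the unit circle (a Python sketch; the sample and truncation counts are arbitrary choices of mine):

```python
import cmath

def Theta(q, z, terms=60):
    # truncated product R_q(z) R_q(1/z)
    out = 1.0 + 0j
    for n in range(terms):
        w = q ** (2 * n + 1)
        out *= (1 + w * z) * (1 + w / z)
    return out

def laurent_coeff(q, n, samples=512):
    # a_n(q) = (1/2pi) * integral of Theta(q, e^(it)) e^(-int) dt,
    # approximated by a discrete sum over the unit circle
    total = 0j
    for m in range(samples):
        t = 2 * cmath.pi * m / samples
        total += Theta(q, cmath.exp(1j * t)) * cmath.exp(-1j * n * t)
    return total / samples

q = 0.3
a0 = laurent_coeff(q, 0)
for n in range(1, 4):
    # the scaling property forces a_n(q) = q^(n^2) a_0(q)
    assert abs(laurent_coeff(q, n) - q ** (n * n) * a0) < 1e-8
```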
It follows that $\Psi(q,z)$ can alternatively be written
$$ \Psi(q,z) = \frac{\sum_{n\in\mathbb{Z}} q^{n^2} z^n }{\sum_{n\in\mathbb{Z}} (-1)^n q^{n^2} z^n }$$ and the first bit of magic is this: we really have no right to expect that any Laurent series defined by a nice-looking product should be so very sparse in any of its variables, but only square powers of $q$ need appear in this fraction-of-series representation of $\Psi$.
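And the sparse fraction can be checked against the product form directly, since the unknown common factor $a_0(q)$ cancels from the ratio; a Python sketch:

```python
def Psi_product(q, z, terms=60):
    # Theta(q,z)/Theta(q,-z) as truncated products
    num, den = 1.0 + 0j, 1.0 + 0j
    for n in range(terms):
        w = q ** (2 * n + 1)
        num *= (1 + w * z) * (1 + w / z)
        den *= (1 - w * z) * (1 - w / z)
    return num / den

def Psi_series(q, z, terms=30):
    # ratio of the sparse series: only square powers of q appear
    num = 1 + sum(q ** (n * n) * (z ** n + z ** (-n))
                  for n in range(1, terms))
    den = 1 + sum((-1) ** n * q ** (n * n) * (z ** n + z ** (-n))
                  for n in range(1, terms))
    return num / den

q, z = 0.35, 0.8 + 0.5j
assert abs(Psi_product(q, z) - Psi_series(q, z)) < 1e-9
```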
The Really Magical $\theta$ then is defined to be this sparse series $$\theta(q,z) = \sum_{n\in\mathbb{Z}} q^{n^2} z^n $$ But that’s just a hint of the Magic!
from the MIT AI Lab memo “HAKMEM”…
Let us Name some Maps. “The” “Landen” transform, $v\mapsto \frac{1}{\sqrt{k}} \frac{2v}{1+v^2}$, is a double-cover of the elliptic curve $y^2=(k-x^2)(1-kx^2)$ by the elliptic curve $u^2=(l-v^2)(1-lv^2)$, where $$ l = L(k) = \frac{k^2}{(1+\sqrt{1-k^2})^2} ,$$ which defines $L$, one of our Important Named Maps.
Similarly, the transposition $v \mapsto \frac{1-v}{1+v}$ is an isomorphism between the elliptic curves $y^2=(k-x^2)(1-kx^2)$ and $u^2=(r-v^2)(1-rv^2)$, with
$$ r = T(k) = \left(\frac{1-\sqrt{k}}{1+\sqrt{k}}\right)^2 $$ which defines $T$, another Important Named Map.
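A numerical check of that transposition, in Python: pulling the first-kind differential back along $x = \frac{1-v}{1+v}$ gives a constant multiple of the corresponding differential for modulus $r = T(k)$. (Over the real sample points below the constant comes out imaginary, the real branch intervals being interchanged, so everything is done in complex arithmetic.)

```python
import cmath

def T(k):
    s = k ** 0.5
    return ((1 - s) / (1 + s)) ** 2

def first_kind(k, x):
    # dx-coefficient of  dx / sqrt((k - x^2)(1 - k x^2))
    return 1 / cmath.sqrt((k - x * x) * (1 - k * x * x))

def pulled_back(k, v):
    # substitute x = (1 - v)/(1 + v), so dx = -2/(1+v)^2 dv; drop the sign
    x = (1 - v) / (1 + v)
    return first_kind(k, x) * 2 / (1 + v) ** 2

k = 0.25
r = T(k)
# the ratio of the pullback to  dv / sqrt((r - v^2)(1 - r v^2))
# should be the same constant at every sample point
ratios = [pulled_back(k, v) / first_kind(r, v) for v in (0.05, 0.1, 0.2, 0.3)]
for rho in ratios[1:]:
    assert abs(rho - ratios[0]) < 1e-9
```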
About two and a half Pt.s ago, we pointed out that $L(v)^2 + L(T(v))^2 = 1$, a Curious Circumstance Indeed.
It’s easier with a well-understood Computer Algebra Assistant, but, well… that’s very Woostery of me, starting in the wrong place.
One develops a conviction that the simple maps representing double-covers of elliptic curves are “the right” way to think about Landen Transforms, and then one gets lost looking for “the right” way to understand the next trick in these terms. It’s easy to get lost.
If we choose to normalize our Elliptic Curves in terms of the modulus $k$ as defined by $y^2 = (k-x^2)(1-kx^2)$, then writing in terms of $x$ there are roughly two globally-available double-covering maps… one could say there’s only one, up to twists and re-orientations, but … I digress. There’s the Möbius-translated squaring map $$ x\mapsto \frac{x^2 - \lambda}{1-\lambda x^2} \tag{Sq1}$$
and there is the conjugate of this map by the transposition $u\mapsto \frac{1-u}{1+u}$, which one calculates is
$$ x\mapsto \rho \frac{2x}{1+x^2} \tag{Sq2} $$ The particular relations between $\rho,\lambda,k$ will depend on whether one is mapping from or to the curve with modulus $k$.
In one sense, it does not matter whether we now use $\mathrm{Sq1}$ or $\mathrm{Sq2}$; however, the algebra is much simpler using $\mathrm{Sq2}$, specifically the substitution
$$ x = \frac{1}{\sqrt{k}} \frac{2u}{1+u^2} $$
which is a double cover mapping to a curve with modulus $k$, from one with modulus… something that you can work out if you really must. Well, we already know what happens to our favourite measure,
$$ \frac{dx}{\sqrt{(k-x^2)(1-kx^2)}} $$
when pulling-back along this substitution,
$$ \frac{dx}{\sqrt{(k-x^2)(1-kx^2)}} = \frac{2\,du}{\sqrt{k^2u^4-(4-2k^2)u^2+k^2}} $$ (if you ask maxima to do this substitution, it’ll ALMOST tell you that, except that it’ll mention an extra pair of branch points at $u=\pm 1$ … which I believe I’ve mentioned before…)
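One can button-check this pullback in Python, comparing the substituted first-kind integrand against $2\,du/\sqrt{k^2u^4-(4-2k^2)u^2+k^2}$ at a few sample points (chosen so everything stays real):

```python
import math

def integrand(k, x):
    # dx-coefficient of the first-kind differential
    return 1 / math.sqrt((k - x * x) * (1 - k * x * x))

def sub(k, u):
    # x = (1/sqrt(k)) * 2u/(1+u^2)
    return 2 * u / (math.sqrt(k) * (1 + u * u))

def dsub(k, u):
    # dx/du = (2/sqrt(k)) (1-u^2)/(1+u^2)^2
    return 2 * (1 - u * u) / (math.sqrt(k) * (1 + u * u) ** 2)

def transformed(k, u):
    # claimed pullback: 2 du / sqrt(k^2 u^4 - (4-2k^2) u^2 + k^2)
    return 2 / math.sqrt(k ** 2 * u ** 4 - (4 - 2 * k ** 2) * u ** 2 + k ** 2)

k = 0.5
for u in (0.05, 0.1, 0.15, 0.2, 0.25):
    lhs = integrand(k, sub(k, u)) * dsub(k, u)
    assert abs(lhs - transformed(k, u)) < 1e-9
```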
So it’s maybe a Good Idea to see what happens under this substitution to other elliptic integrands. How about this one:
$$ \sqrt{\frac{1-kx^2}{k-x^2}} dx = \frac{1-kx^2}{\sqrt{(k-x^2)(1-kx^2)}} dx ?$$ They call that one “second kind”. And the Answer, up to some Branching Considerations:
$$ \frac{2(1-u^2)^2\,du}{(1+u^2)^2\sqrt{k^2u^4-(4-2k^2)u^2+k^2}} $$
Hm. That may not look like Progress, but in fact It Is!
The first opacity to overcome here is that the fraction we have acquired,
$$\frac{2(1-u^2)^2}{(1+u^2)^2} $$ … it really wants to be a Polynomial… let’s do Partial Fraction decomposition.
$$ \frac{2(1-u^2)^2}{(1+u^2)^2} = \frac{8}{(1+u^2)^2} - \frac{8}{1+u^2} + 2 $$ The second is: the conceit of this game is that we’re now allowed to Integrate $\frac{1}{\sqrt{P(x)}}$. We also know how to differentiate. A perfectly good thing to Differentiate is $$ \sqrt{P(x)} $$! You can probably tell that this doesn’t immediately help, so instead, differentiate
products $$ \frac{x^m}{Q(x)} \sqrt{P(x)} $$ and in particular you find
$$ \frac{d}{du} \frac{u\sqrt{k^2u^4-(4-2k^2)u^2+k^2}}{1+u^2} = \left(\frac{8}{(1+u^2)^2} -\frac{8}{1+u^2} +k^2 u^2+k^2\right)\frac{1}{\sqrt{k^2u^4-(4-2k^2)u^2+k^2}} $$ and this is great!
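Both the partial-fraction step and that differentiation can be button-checked in Python (the derivative via a symmetric difference quotient; sample points mine, chosen to keep $P$ positive):

```python
import math

def P(k, u):
    return k ** 2 * u ** 4 - (4 - 2 * k ** 2) * u ** 2 + k ** 2

def f(k, u):
    # the product to differentiate: u sqrt(P) / (1 + u^2)
    return u * math.sqrt(P(k, u)) / (1 + u * u)

def rhs(k, u):
    s = 1 + u * u
    return (8 / s ** 2 - 8 / s + k ** 2 * u ** 2 + k ** 2) / math.sqrt(P(k, u))

k = 0.5
h = 1e-6
for u in (0.05, 0.12, 0.2, 0.25):
    s = 1 + u * u
    # partial fractions: 2(1-u^2)^2/(1+u^2)^2 = 8/(1+u^2)^2 - 8/(1+u^2) + 2
    assert abs(2 * (1 - u * u) ** 2 / s ** 2 - (8 / s ** 2 - 8 / s + 2)) < 1e-12
    # the derivative identity, numerically
    num_deriv = (f(k, u + h) - f(k, u - h)) / (2 * h)
    assert abs(num_deriv - rhs(k, u)) < 1e-6
```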
The derivative and the transformed Second Kind Integrand have exactly proportional singularities, at least at $\pm i$, where they’re new. Or, to put it differently, Landen-transforming the Second Kind Integrand gives a sum of another Second-Kind integrand, a First-kind integrand, and a derivative (whose integral we know trivially).