# 19th Century Precursors of Polynomial Methods

These days, I’m spending a lot of time revising my book about polynomial methods and incidences (an older draft is available here). This led me to the following conclusion: I am not sure what classifies a proof as a polynomial method. A Wikipedia page states

“… the polynomial method is an algebraic approach to combinatorics problems that involves capturing some combinatorial structure using polynomials and proceeding to argue about their algebraic properties.”

I like this statement a lot, but I still don’t know whether some arguments should be considered polynomial methods or not. There’s probably no good answer. Still, in this post, I want to go over a case that I find interesting: if you have an inclusive view, you could claim that polynomial methods started in the 19th century. Anurag Bishnoi made a wonderful list of early polynomial method results, but it only goes back to 1975.

We can consider results such as the Monge–Cayley–Salmon theorem. However, for simplicity, let’s stick to shorter and more elementary proofs. Pascal’s theorem is from 1640. Two centuries later, Plücker provided an alternative proof of this theorem, based on studying basic properties of polynomials. For points ${a,b\in {\mathbb R}^2}$, we denote the line incident to $a$ and $b$ as $\overline{ab}$. For a polynomial $f$, we denote as ${\bf V}(f)$ the variety defined by $f$ (that is, the set of points on which $f$ vanishes).

Pascal’s Theorem. Let $\gamma\subset {\mathbb R}^2$ be an ellipse, parabola, or hyperbola. Consider six distinct points $a,b,c,d,e,f\in \gamma$. Then the three points $\overline{ab}\cap \overline{de}$, $\overline{cd}\cap \overline{af}$, and $\overline{ef}\cap \overline{bc}$ are collinear.

Proof. Note that $\gamma$ is an irreducible curve of degree 2. Let $h\in {\mathbb R}[x,y]$ be a degree 2 polynomial that satisfies $\gamma = {\bf V}(h)$. We define two more curves:

$L_1 = \overline{ab}\cup \overline{cd} \cup \overline{ef},$

$L_2 = \overline{de}\cup \overline{af} \cup \overline{bc}.$

Both $L_1$ and $L_2$ are curves of degree 3, since each can be defined as the product of three linear polynomials. Let $g_1,g_2\in {\mathbb R}[x,y]$ be degree 3 polynomials that satisfy $L_1 = {\bf V}(g_1)$ and $L_2 = {\bf V}(g_2)$. Let $p$ be an arbitrary point of $\gamma$ that is not one of $a,b,c,d,e,f$. There exists $\alpha\in {\mathbb R}$ such that the polynomial $g = g_1 + \alpha g_2$ vanishes on $p$. Indeed, one can think of $g(p)$ as a linear polynomial in $\alpha$, and set $\alpha$ to be the unique root of this polynomial. (This root exists since $g_2(p)\neq 0$: each line of $L_2$ intersects the irreducible degree 2 curve $\gamma$ in at most two points, and these are among $a,b,c,d,e,f$.) We note that $g$ is of degree 3 and vanishes on $a,b,c,d,e,f,p$.

By Bézout’s theorem, either $g$ and $h$ have a common factor or

$|{\bf V}(g)\cap {\bf V}(h)| \le \deg g \cdot \deg h = 3 \cdot 2 = 6.$

Since both $g$ and $h$ vanish on the seven points $a,b,c,d,e,f,p$, we get that the two polynomials have a common factor. Since $h$ is irreducible, $h$ is a factor of $g$. This implies that ${\bf V}(g)$ is the union of $\gamma$ and a line $\ell$. The curve $\gamma$ cannot contain the three intersection points $\overline{ab}\cap \overline{de}$, $\overline{cd}\cap \overline{af}$, and $\overline{ef}\cap \overline{bc}$: for example, a point of $\overline{ab}\cap \gamma$ must be $a$ or $b$, while a point of $\overline{de}\cap \gamma$ must be $d$ or $e$. Thus, these three points are contained in $\ell$. That is, the three intersection points are collinear. $\Box$
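Pascal’s theorem is easy to sanity-check numerically. The following sketch (in Python with NumPy; the choice of six points on the unit circle and the tolerance are mine, not from the post) builds the lines through the relevant pairs of points in homogeneous coordinates, intersects them, and checks that the three intersection points are collinear.

```python
import numpy as np

def line(p, q):
    # Homogeneous coordinates of the line through two points of the plane.
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def meet(l1, l2):
    # Intersection point of two lines, normalized back to affine coordinates.
    x, y, w = np.cross(l1, l2)
    return np.array([x / w, y / w])

# Six distinct points on the unit circle (an ellipse).
angles = [0.1, 0.9, 1.7, 2.9, 4.0, 5.3]
a, b, c, d, e, f = [np.array([np.cos(t), np.sin(t)]) for t in angles]

p1 = meet(line(a, b), line(d, e))
p2 = meet(line(c, d), line(a, f))
p3 = meet(line(e, f), line(b, c))

# Collinearity: the determinant of the three points in homogeneous form vanishes.
det = np.linalg.det(np.array([[*p1, 1.0], [*p2, 1.0], [*p3, 1.0]]))
print(abs(det) < 1e-6)  # → True
```

Any other choice of six distinct angles works equally well, since the theorem holds for every labeling of six points on a conic.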

Plücker’s proof is a simple application of Bézout’s theorem. Should it be considered a polynomial method? Purists might argue that polynomial method proofs should involve asymptotically large configurations: the problem should involve $n$ objects and a polynomial whose degree depends on $n$. But this is not the case for some recent results. For example, the following proof seems to be generally considered a polynomial method; it is also presented as one in Larry Guth’s book about polynomial methods.

A regulus is the union of all lines in ${\mathbb R}^3$ that intersect three pairwise-skew lines $\ell_1,\ell_2,\ell_3$. That is, no two lines of $\ell_1,\ell_2,\ell_3$ are parallel and no two intersect. One example of a regulus is the hyperbolic paraboloid ${\bf V}(z-xy)$, depicted in the figure below. This surface is the union of all lines of the form ${\bf V}(x-c,z-cy)$ where $c\in {\mathbb R}$. It is also the union of all lines of the form ${\bf V}(y-c,z-cx)$. The lines in each of these families are pairwise-skew. We obtain ${\bf V}(z-xy)$ by taking the union of all lines that intersect three lines of the form ${\bf V}(y-c,z-cx)$. This is the family of lines of the form ${\bf V}(x-c,z-cy)$. Similarly, we obtain ${\bf V}(z-xy)$ by taking the union of all lines that intersect three lines of the form ${\bf V}(x-c,z-cy)$.
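The two line families of ${\bf V}(z-xy)$ can be checked directly: substituting a parametrization of each line into $z-xy$ gives zero identically, and a line from one family meets each line of the other family in a single point. A small sketch (Python; the random sampling and function names are my own):

```python
import random

# The hyperbolic paraboloid V(z - xy).
def surface(x, y, z):
    return z - x * y

def line_family1(c, t):
    # A point of the line V(x - c, z - c*y): x fixed to c, y = t free.
    return (c, t, c * t)

def line_family2(c, t):
    # A point of the line V(y - c, z - c*x): y fixed to c, x = t free.
    return (t, c, c * t)

random.seed(0)
checks = []
for _ in range(100):
    c, t = random.uniform(-5, 5), random.uniform(-5, 5)
    checks.append(abs(surface(*line_family1(c, t))) < 1e-12)
    checks.append(abs(surface(*line_family2(c, t))) < 1e-12)

# A line of one family meets a line of the other: both pass through (c, d, c*d).
checks.append(line_family1(2.0, 3.0) == line_family2(3.0, 2.0))
print(all(checks))  # → True
```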

Lemma. Every regulus in ${\mathbb R}^3$ is contained in an irreducible variety that is defined by a polynomial of degree two.

Proof. Consider a regulus $S\subset{\mathbb R}^3$ that is defined by the three pairwise-skew lines $\ell_1,\ell_2$, and $\ell_3$. Let ${\mathcal P}$ be a set of nine points that is obtained by arbitrarily choosing three points from each of $\ell_1,\ell_2$, and $\ell_3$. Since $\binom{3+2}{3} = 10 > 9$ (the left-hand side is the number of monomials of degree at most two in three variables), a standard parameter-counting argument implies that there exists a nonzero $f\in {\mathbb R}[x,y,z]$ of degree at most two that vanishes on ${\mathcal P}$. We apply Bézout’s theorem in a plane that contains $\ell_1$ and is not contained in ${\bf V}(f)$. (Such a plane exists, since ${\bf V}(f)$ contains at most two planes, while infinitely many planes contain $\ell_1$.) That is, we consider a linear bivariate polynomial that defines $\ell_1$ and a polynomial that defines the intersection of ${\bf V}(f)$ with the plane. Since $\ell_1$ intersects ${\bf V}(f)$ in at least three points, Bézout’s theorem implies that $\ell_1 \subset {\bf V}(f)$. For the same reason, we also have that $\ell_2,\ell_3\subset {\bf V}(f)$. If ${\bf V}(f)$ is a plane or a union of two planes, then at least two of $\ell_1,\ell_2,\ell_3$ are contained in a common plane. This would contradict the assumption that the three lines are pairwise-skew. We conclude that ${\bf V}(f)$ is an irreducible variety of degree two.

Consider a line $\ell'$ that intersects all three lines $\ell_1,\ell_2,\ell_3$. The three intersection points are distinct since $\ell_1,\ell_2,\ell_3$ are disjoint. Since $\ell'$ intersects ${\bf V}(f)$ in at least three points, Bézout’s theorem implies that $\ell' \subset {\bf V}(f)$. Since $S$ is the union of all lines that intersect $\ell_1,\ell_2,\ell_3$, we get that $S\subseteq {\bf V}(f)$. $\Box$
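The parameter-counting step is concrete enough to run. The sketch below (Python with NumPy; the three skew lines and the sample points are my own choice, taken from the regulus ${\bf V}(z-xy)$ discussed above) sets up the $9\times 10$ linear system over the ten monomials of degree at most two, extracts a null vector via an SVD, and checks that the resulting polynomial vanishes at fresh points of all three lines, as the Bézout step guarantees.

```python
import numpy as np

# The ten monomials of degree at most two in x, y, z (hence binom(5,3) = 10).
def monomials(p):
    x, y, z = p
    return [1.0, x, y, z, x*x, y*y, z*z, x*y, x*z, y*z]

# Three pairwise-skew lines of the regulus V(z - xy): x = c, z = c*y.
def point(c, t):
    return (c, t, c * t)

# Nine points: three arbitrary points on each of the three lines.
pts = [point(c, t) for c in (1.0, 2.0, 3.0) for t in (-1.0, 0.5, 2.0)]
M = np.array([monomials(p) for p in pts])  # a 9 x 10 homogeneous linear system

# A nonzero null vector of M gives a degree-2 polynomial vanishing on all nine points.
coeffs = np.linalg.svd(M)[2][-1]
f = lambda p: float(np.dot(coeffs, monomials(p)))

# By the Bézout step, f must vanish on the *entire* lines, not just the samples.
residuals = [abs(f(point(c, 7.0))) for c in (1.0, 2.0, 3.0)]
print(all(r < 1e-9 for r in residuals))  # → True
```

For these particular lines the null space is spanned by (a scalar multiple of) $z-xy$, so the recovered quadric is the hyperbolic paraboloid itself.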

So, can we start saying that polynomial methods go back to the 1840s?

## 4 thoughts on “19th Century Precursors of Polynomial Methods”

1. kodlu |

nice post. I think you have made the case that the method goes back to the 19th century quite well. btw, where was the photo at the end taken?

• This is from MoMath – the National Museum of Mathematics in the US. They have been running a lot of nice online events during the pandemic.

2. In favor of the more purist point of view is that if you allow for fixed-degree polynomials, then you might feel obliged to include many things that people usually do not include. For instance, rank arguments in which the entries depend on an affine function (e.g., Fisher’s inequality) are rarely considered applications of the polynomial method. For degree 2 it is a mixed bag.

Maybe at least any polynomial argument which involves a degree $\geq 3$ expression should count. That would still include the Bézout argument above, but it excludes trivial cases.