# Polynomial partitioning: the basics (part 1)

Guth and Katz’s seminal work provided an almost tight bound for Erdős’s planar distinct distances problem. One can regard the proof of this bound as consisting of four main tools:

(i) A reduction from the distinct distances problem to a problem about line intersections in ${\mathbb R}^3$. This part is referred to as the Elekes-Sharir reduction or as the Elekes-Sharir framework.
(ii) The introduction of polynomial partitioning.
(iii) Applying 19th century analytic geometry tools that are related to ruled surfaces, such as flecnode polynomials.
(iv) The polynomial technique presented in Guth and Katz’s previous paper. (That is, relying on the existence of a polynomial of small degree that vanishes on a given point set to reduce incidence problems between points and lines to the interactions between the lines and the algebraic variety that is the zero set of the polynomial.)

We already discussed the Elekes-Sharir framework from item (i) here, here, and here (and due to several new developments, I might write more posts about this subject soon). This series of posts surveys the polynomial partitioning technique from item (ii). Basically, this is a technique for partitioning a $d$-dimensional Euclidean space into a small number of “well behaved” cells. While the technique was originally introduced by Guth and Katz to solve the planar distinct distances problem, it already has several additional applications and extensions: a slightly improved upper bound for the three-dimensional unit distances problem (see here and here), improved bounds for various incidence problems (e.g., see here, here, and here), an alternative proof for the existence of a spanning tree with a low crossing number, and even a new range searching algorithm. Several recent works survey the basics of polynomial partitioning such as Zeev Dvir’s survey, Larry Guth’s lecture notes, this paper by Kaplan, Matoušek, and Sharir, and this blog post by Terry Tao.

In what follows, we regard the dimension $d$ of the ambient space as a (small) constant, and ignore the dependence on $d$ of the various constants of proportionality in the bounds. Consider a set $\cal P$ of $m$ points in ${\mathbb R}^d$. Given a polynomial $f \in {\mathbb R}[x_1,\ldots, x_d]$, we define the zero set of $f$ to be $Z(f) = \{ p \in {\mathbb R}^d \mid f(p)=0 \}$. For $1 < r \le m$, we say that $f \in {\mathbb R}[x_1,\ldots, x_d]$ is an $r$-partitioning polynomial for $\cal P$ if no connected component of ${\mathbb R}^d \setminus Z(f)$ contains more than $m/r$ points of $\cal P$. Notice that there is no restriction on the number of points of $\cal P$ that lie in $Z(f)$. In the following figure, the blue curve is the zero set of a 5-partitioning polynomial, since $m=10$ and every cell contains at most two points.
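
The defining property is easy to verify in a toy case. The following sketch (a made-up example, not part of the original argument) takes $f(x,y)=xy$, whose zero set is the union of the two coordinate axes; for a polynomial that is a product of linear factors, every cell of ${\mathbb R}^2\setminus Z(f)$ is determined by the sign pattern of the factors, so sign vectors serve as cell labels.

```python
# A toy check of the r-partitioning property for f(x, y) = x * y, whose zero
# set is the union of the two coordinate axes. For a product of linear
# factors, each cell of R^2 \ Z(f) is determined by the sign pattern of the
# factors, so sign vectors serve as cell labels.
from collections import Counter

def sign(t):
    return (t > 0) - (t < 0)

def cell_label(p):
    """Sign pattern of the factors x and y at p; a zero entry means p lies on Z(f)."""
    x, y = p
    return (sign(x), sign(y))

# a made-up point set with two points in each open quadrant
points = [(1, 2), (2, 1), (-1, 3), (-3, 2), (-2, -2), (-1, -1), (3, -1), (2, -3)]
m, r = len(points), 4  # we verify that f is a 4-partitioning polynomial

counts = Counter(cell_label(p) for p in points if 0 not in cell_label(p))
assert all(c <= m / r for c in counts.values())  # no cell exceeds m/r points
```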

The following result is due to Guth and Katz.

Theorem 1 (Polynomial partitioning). Let $\cal P$ be a set of $m$ points in ${\mathbb R}^d$. Then for every $1< r \le m$, there exists an $r$-partitioning polynomial $f \in {\mathbb R}[x_1,\ldots, x_d]$ of degree $O(r^{1/d})$.
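
Before proving the theorem (in the next post), one can get a feel for the degree bound on a special point set. The sketch below, for a hypothetical $10\times 10$ grid in the plane, builds a $25$-partitioning polynomial of degree $8=O(25^{1/2})$ as a product of four vertical and four horizontal lines; this is just an illustration for a grid, not the general ham-sandwich construction of Guth and Katz.

```python
# Illustration of the degree bound in Theorem 1 for d = 2 on a special case:
# a 10x10 grid (a made-up example, not the general ham-sandwich construction).
# The product of 4 vertical and 4 horizontal lines splits the grid into
# 25 cells of 4 points each: a 25-partitioning polynomial of degree 8 = O(25^{1/2}).
from collections import Counter
from itertools import product

points = list(product(range(10), repeat=2))  # the 10x10 integer grid, m = 100
m, r = len(points), 25

cuts = [1.5, 3.5, 5.5, 7.5]  # cut lines chosen to avoid the grid points
degree = 2 * len(cuts)       # one linear factor per vertical/horizontal line

def cell(p):
    """Index of the cell of R^2 \\ Z(f) containing the point p."""
    x, y = p
    return (sum(x > c for c in cuts), sum(y > c for c in cuts))

counts = Counter(map(cell, points))
assert degree == 8 and len(counts) == r
assert all(c == m // r for c in counts.values())  # every cell holds exactly m/r points
```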

In the remainder of this post we show how to apply the polynomial partitioning theorem by presenting an algebraic proof for the Szemerédi-Trotter theorem (for the planar point-line incidence problem). In the following two posts, we prove the polynomial partitioning theorem and compare polynomial partitioning to the previous partitioning technique called cuttings. Parts of these posts are loosely based on parts of this paper by Kaplan, Matoušek, and Sharir. I also plan to discuss, in future posts, more advanced topics related to polynomial partitionings, such as the techniques of using a second partitioning polynomial (see here and here) and of using constant sized partitioning polynomials (introduced by Solymosi and Tao). Thanks to Micha Sharir for helping with the preparation of all three posts.

## An algebraic proof of the Szemerédi-Trotter theorem

Let $\cal P$ be a set of $m$ distinct points and let $\cal L$ be a set of $n$ distinct lines, both in the plane ${\mathbb R}^2$. An incidence is a pair $(p,\ell)\in {\cal P}\times{\cal L}$ such that the point $p$ is contained in the line $\ell$. We denote by $I({\cal P},{\cal L})$ the number of incidences in ${\cal P}\times{\cal L}$. We put $I(m,n) = \max_{{\cal P},{\cal L}} I({\cal P},{\cal L})$ where the maximum is taken over all sets $\cal P$ of $m$ points and all sets $\cal L$ of $n$ lines.

Theorem 2 (Szemerédi-Trotter). $I(m,n) = \Theta(m^{2/3}n^{2/3}+m+n)$.

The lower bound in the theorem was originally derived by Erdős (e.g., see this survey, which is also a good reference for additional information about incidence theory). We now prove the upper bound by using polynomial partitioning (a slightly different proof can be found here).

The following lemma from extremal graph theory is very useful in incidence theory (we only present a special case; for more details, see Section 4.5 in the book Lectures on discrete geometry by Matoušek).

Lemma 1 (Kővári–Sós–Turán). Let $G$ be a bipartite graph with vertex sets $V_1,V_2$ such that $|V_1| = m$ and $|V_2| = n$. If $G$ does not contain a copy of $K_{r,s}$ (a complete bipartite graph with vertex sets of sizes $r$ and $s$) such that the $r$ vertices belong to $V_1$ and the $s$ vertices belong to $V_2$, then the number of edges in $G$ is $O(mn^{1-1/r} + n)$, where the constant of proportionality depends on $r$ and $s$.

The incidence graph $G$ of $\cal P$ and $\cal L$ is a bipartite graph on the two vertex sets $V_1,V_2$ such that $V_1$ contains a vertex for every point in $\cal P$ and $V_2$ contains a vertex for every line in $\cal L$. There is an edge in $G$ between the vertices $v_1\in V_1$ and $v_2 \in V_2$ if and only if the point that corresponds to $v_1$ is incident to the line that corresponds to $v_2$. An example is depicted in the following figure.

Since two distinct lines have at most one point in common, $G$ does not contain a copy of $K_{2,2}$. Thus, according to Lemma 1, $G$ has $O(mn^{1/2} + n)$ edges. Since there is a bijection between the edges of $G$ and the incidences in ${\cal P}\times{\cal L}$, we have

$I({\cal P},{\cal L})=O(mn^{1/2} + n).\ \ \ \ \ \ (1)$

This is still far from the bound $I({\cal P},{\cal L})=O(m^{2/3}n^{2/3}+m+n)$ implied by Theorem 2 (unless $m=O(n^{1/2})$, in which case both bounds are $O(n)$).
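
To make these quantities concrete, here is a brute-force count for a made-up configuration (a $3\times 3$ grid and $21$ lines); it verifies both that the incidence graph is $K_{2,2}$-free and that the incidence count is well within the bound $(1)$.

```python
# Brute-force incidence counting for a made-up configuration: the 3x3 integer
# grid and 21 lines y = a*x + b. Exact rational arithmetic avoids any
# floating-point issues in the containment test.
from fractions import Fraction as F
from itertools import combinations, product

points = [(F(x), F(y)) for x, y in product(range(3), repeat=2)]   # m = 9
lines = [(F(a), F(b)) for a in (-1, 0, 1) for b in range((-2), 5)]  # n = 21

def on(p, l):
    """Is the point p incident to the line y = a*x + b?"""
    (x, y), (a, b) = p, l
    return y == a * x + b

incidences = sum(1 for p in points for l in lines if on(p, l))

# two distinct lines share at most one point, so the incidence graph is K_{2,2}-free
for l1, l2 in combinations(lines, 2):
    assert sum(1 for p in points if on(p, l1) and on(p, l2)) <= 1

m, n = len(points), len(lines)
assert incidences <= m * n ** 0.5 + n  # consistent with bound (1)
```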

A common trick in incidence theory is to obtain a weak bound on the number of incidences, as we just did, and then to “amplify” it. This is done by partitioning the plane (or, more generally, the space) into “well behaved” cells, and then applying the weak bound in each cell. Intuitively, when considering the bound $O(mn^{1/2}+n)$ from $(1)$, we can say that every point of $\cal P$ contributes to the bound an amount equal to the square root of $|{\cal L}|$. By partitioning the plane into cells and applying $(1)$ in each cell, each point of $\cal P$ contributes only the square root of the number of lines that cross the relevant cell. Thus, we expect such a partitioning to imply an improved bound. An example is depicted in the following figure, where the blue dashed lines are the lines of $\cal L$ and the red curves form the partitioning of the plane.

Before Guth and Katz introduced polynomial partitioning, the partitioning of the ambient space was commonly done by using cuttings (e.g., see Chapter 4 of the book Lectures on discrete geometry by Matoušek). As already mentioned, in one of the following posts we will discuss the differences and similarities between these two techniques.

For simplicity, we only consider the case where $m=n$. That is, we consider a point set $\cal P$ and a set of lines $\cal L$ such that $|{\cal P}|=|{\cal L}|=n$ (the general case can be proved in a similar manner). To partition the plane into cells, we construct an $n^{2/3}$-partitioning polynomial $f$ for $\cal P$. According to Theorem 1, the degree of $f$ is $O(n^{1/3})$ and every cell in ${\mathbb R}^2\setminus Z(f)$ is an open set that contains at most $n/n^{2/3} = n^{1/3}$ points of $\cal P$. Let $s$ denote the number of cells in (i.e., connected components of) ${\mathbb R}^2\setminus Z(f)$.

We denote by ${\cal P}_0 = Z(f) \cap {\cal P}$ the set of points of ${\cal P}$ that are contained in $Z(f)$. Similarly, we denote by ${\cal L}_0$ the set of lines that are fully contained in $Z(f)$. For $1\le i \le s$, let ${\cal P}_i$ denote the set of points that are contained in the $i$-th cell and let ${\cal L}_i$ denote the set of lines that intersect the $i$-th cell. Notice that

$I({\cal P},{\cal L}) = I({\cal P}_0,{\cal L}_0) + I({\cal P}_0,{\cal L}\setminus {\cal L}_0) + \sum_{i=1}^s I({\cal P}_i,{\cal L}_i).$

We bound each of these three expressions separately. Factoring $f$ into irreducible factors, we see that $f$ has $O(n^{1/3})$ linear factors (since its degree is $O(n^{1/3})$), and every line fully contained in $Z(f)$ corresponds to such a factor. Thus, $Z(f)$ fully contains only $O(n^{1/3})$ lines; that is, $|{\cal L}_0|=O(n^{1/3})$. Since each of the at most $n$ points of ${\cal P}_0$ is incident to at most $|{\cal L}_0|$ lines of ${\cal L}_0$, this immediately implies

$I({\cal P}_0,{\cal L}_0) = O(n\cdot n^{1/3}) = O(n^{4/3}).$

Consider a line $\ell \in {\cal L} \setminus {\cal L}_0$ that is defined as $Z(y-(ax+b))$ (for constants $a,b$; vertical lines can be handled symmetrically, by exchanging the roles of $x$ and $y$). Notice that there is a bijection between the points of $\ell \cap Z(f)$ and the roots of the univariate polynomial $f(x,ax+b)$. Since $\ell$ is not fully contained in $Z(f)$, this polynomial is not identically zero, and its degree is $O(n^{1/3})$. Thus, there are $O(n^{1/3})$ intersection points between $\ell$ and $Z(f)$, which in turn implies

$I({\cal P}_0,{\cal L} \setminus {\cal L}_0) = O(n\cdot n^{1/3}) = O(n^{4/3}).$
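
The substitution argument can be checked concretely. In the sketch below (a made-up example), $f$ is a degree-$3$ polynomial given as a product of three linear factors, and we intersect its zero set with the line $y=x+1$; exact rational arithmetic confirms that the number of intersection points is at most $\deg f$.

```python
# A toy check of the key step above: a line not contained in Z(f) meets Z(f)
# in at most deg(f) points. Here f is a (made-up) product of three linear
# factors, so substituting y = a*x + b into each factor gives the roots directly.
from fractions import Fraction as F

# each triple (u, v, w) encodes the linear factor u*x + v*y + w of f
factors = [(F(-2), F(1), F(0)), (F(1), F(1), F(-4)), (F(0), F(1), F(-5))]

a, b = F(1), F(1)  # the line y = a*x + b, not contained in Z(f)

intersections = set()
for u, v, w in factors:
    # substitute y = a*x + b into u*x + v*y + w = 0  ->  (u + v*a)*x + (v*b + w) = 0
    coef, const = u + v * a, v * b + w
    if coef != 0:
        xr = -const / coef
        intersections.add((xr, a * xr + b))

deg_f = len(factors)
assert len(intersections) <= deg_f  # at most deg(f) intersection points
```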

It remains to bound $\sum_{i=1}^s I({\cal P}_i,{\cal L}_i)$, for which we require an upper bound on the number of cells $s$.

Theorem 3 (Warren). Given a polynomial $f \in {\mathbb R}[x_1,\ldots, x_d]$ of degree $k$, the number of connected components of ${\mathbb R}^d \setminus Z(f)$ is $O\left((2k)^d\right)$.
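
In the plane, Warren's bound can be sanity-checked against the exact cell count of a line arrangement: $k$ lines in general position determine exactly $1 + k + \binom{k}{2}$ cells, which is indeed $O((2k)^2)$. The sketch below counts cells of a made-up random arrangement by sampling sign vectors (valid for products of linear factors; the sampled count is a lower bound on the true cell count).

```python
# Sanity check of Warren's theorem for d = 2: for f a product of k lines in
# general position, the number of cells is exactly 1 + k + k*(k-1)/2 <= (2k)^2.
# We approximate the cell count of a made-up random arrangement by sampling
# sign vectors on a grid; sampling can only undercount cells.
import random

random.seed(1)
k = 6
# k random lines u*x + v*y + w = 0 (in general position with probability 1)
factors = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
           for _ in range(k)]

def label(x, y):
    return tuple(u * x + v * y + w > 0 for u, v, w in factors)

# every sampled label corresponds to some cell, so the count below is a lower bound
labels = {label(x / 10, y / 10) for x in range(-200, 201) for y in range(-200, 201)}
cells = len(labels)

assert cells <= 1 + k + k * (k - 1) // 2 <= (2 * k) ** 2
```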

References for the theorem can be found here and here. In our case, Theorem 3 implies $s = O((n^{1/3})^2) = O(n^{2/3})$. We set $m_i=|{\cal P}_i|\le n^{1/3}$ and $n_i = |{\cal L}_i|$. Every line of ${\cal L} \setminus {\cal L}_0$ intersects $Z(f)$ in $O(n^{1/3})$ points, and thus crosses $O(n^{1/3})$ cells (to pass from one cell to another, the line must cross $Z(f)$); lines of ${\cal L}_0$ do not intersect any cell. Hence, $\sum_{i=1}^s n_i= O(n\cdot n^{1/3}) = O(n^{4/3})$. According to the Cauchy-Schwarz inequality, we have

$\sum_{i=1}^s n_i^{1/2} \le \sqrt{\left(\sum_{i=1}^s n_i\right)\left(\sum_{i=1}^s 1\right)} = O\left(\sqrt{n^{4/3}\cdot n^{2/3}}\right) = O(n).$
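
This application of Cauchy-Schwarz (with the vectors $(\sqrt{n_1},\ldots,\sqrt{n_s})$ and $(1,\ldots,1)$) can be sanity-checked numerically on random hypothetical cell counts:

```python
# Numeric sanity check of the Cauchy-Schwarz step: for any nonnegative n_i,
# sum_i n_i^{1/2} <= sqrt((sum_i n_i) * s). The cell counts are random and
# purely hypothetical.
import math
import random

random.seed(0)
for _ in range(100):
    s = random.randint(1, 50)
    n_i = [random.randint(0, 1000) for _ in range(s)]
    lhs = sum(math.sqrt(v) for v in n_i)
    rhs = math.sqrt(sum(n_i) * s)
    assert lhs <= rhs + 1e-9  # Cauchy-Schwarz holds in every trial
```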

Recalling the bound from $(1)$, we obtain

$\sum_{i=1}^s I({\cal P}_i,{\cal L}_i) = O\left(\sum_{i=1}^s \left(m_in_i^{1/2}+n_i\right)\right) = O\left(n^{1/3}\sum_{i=1}^s n_i^{1/2} + n^{4/3}\right) = O(n^{4/3}),$

which completes the proof.