Standalone guide — no advanced maths needed. Consolidates the full story: the problem, the paper, PackingStar, human value, and whether AI can “prove” results.
For writing your SCMP letter: focus on humans in the loop.
Sources: paper (arXiv:2511.13391), PackingStar GitHub, Ref/SCMPreport.md, Ref/PKU_wechat.
In 3D: How many identical balls can touch one central ball of the same size without overlapping?
Think of a pomegranate: replace seeds with balls. How many can “kiss” the centre? Answer in 3D: 12 (proven in 1953).
The same question is asked in higher dimensions (4D, 5D, … 24D, 25D, …). In many dimensions we only know a range: “at least this many” and “at most that many.” The exact number is known only in a few special dimensions (1, 2, 3, 4, 8, 24).
Why it matters: It links to how we pack information in codes and signals (e.g. data storage, telecoms).
1. Historical researchers — Newton, Gregory, Hilbert, and later mathematicians who proved exact answers in some dimensions. They posed the problem and set the standard of proof.
2. Researchers on the problem — Mathematicians who used lattices, codes, and clever structures to raise the "at least this many" bounds in many dimensions. PackingStar had to match their known results before claiming new ones.
3. Researchers on the game and AI — The team that turned the problem into a two-player game, designed the "filler" and "corrector" (inspired by a mathematician manually removing bad balls), and still does the verification and interpretation of what the AI finds.
All three show that humans are indispensable: in history, in prior maths, and in designing and interpreting AI.
In most dimensions we don’t know the exact kissing number: high-dimensional space defies everyday intuition, and the hand-built lattices and codes that produced earlier records can only go so far. Humans pushed the frontier for centuries; AI steps in where both intuition and hand-built methods hit their limits.
Core idea: Instead of placing balls in high-dimensional space, the team works with a “cosine matrix”, a table recording, for every pair of outer balls, the cosine of the angle between their directions from the centre. Filling this table under strict rules is equivalent to finding a valid packing, and the representation is numerically stable and fits modern hardware (GPUs).
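To make the rule concrete, here is a minimal sketch (in Python, not the team’s code) of what a valid cosine matrix must satisfy under the textbook reformulation: the directions to the outer centres are unit vectors, two outer balls avoid overlapping exactly when the cosine of the angle between their directions is at most 1/2, and a proposed table of cosines is realisable in dimension d exactly when it is symmetric, has ones on the diagonal, and is positive semidefinite with rank at most d. The function name and tolerance below are illustrative choices, not from the paper.

```python
import numpy as np

def is_valid_cosine_matrix(G, dim, tol=1e-9):
    """Sketch of the textbook constraints on a kissing-number cosine matrix.

    A table G of pairwise cosines is realisable in `dim` dimensions exactly when:
      1. G is symmetric with ones on the diagonal (each direction is a unit vector);
      2. every off-diagonal entry is <= 1/2 (pairwise angle >= 60 degrees, so no overlap);
      3. G is positive semidefinite with rank <= dim (a genuine Gram matrix in R^dim).
    """
    G = np.asarray(G, dtype=float)
    n = len(G)
    if not np.allclose(G, G.T, atol=tol):
        return False
    if not np.allclose(np.diag(G), 1.0, atol=tol):
        return False
    if np.any(G[~np.eye(n, dtype=bool)] > 0.5 + tol):
        return False
    eigvals = np.linalg.eigvalsh(G)
    if eigvals.min() < -tol:                      # must be positive semidefinite
        return False
    return int(np.sum(eigvals > tol)) <= dim      # numerical rank fits the dimension

# Toy check in 2D: six directions 60 degrees apart (the known answer, 6).
angles = np.arange(6) * np.pi / 3
directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)
print(is_valid_cosine_matrix(directions @ directions.T, dim=2))  # True
```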
Filler: adds entries to the table to try to fit more balls. The bigger the valid table, the better.
Corrector: removes or fixes bad entries, like “one bad ball ruins the soup.” This idea came from watching a mathematician manually delete bad balls to get a known result.
Both are trained with reinforcement learning. The team also breaks successful tables into “fragments” and reuses them in new runs (like cell division). There’s no big training dataset—the system explores from scratch (“discovery from nothing”).
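For intuition only, here is a toy greedy version of the add/remove interplay described above. It is not the PackingStar system: the real filler and corrector are learned policies acting on the cosine matrix, whereas this sketch works directly with unit vectors, fills by accepting random directions that keep every pairwise cosine at most 1/2, and corrects by deleting whichever vector is involved in the most violations. All names and parameters here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fits(v, kept, tol=1e-12):
    """A new unit direction fits if its cosine with every kept direction is <= 1/2."""
    return all(float(np.dot(u, v)) <= 0.5 + tol for u in kept)

def toy_filler(kept, dim, attempts=4000):
    """Toy 'filler': propose random unit directions and keep each one that fits."""
    for _ in range(attempts):
        v = rng.normal(size=dim)
        v /= np.linalg.norm(v)
        if fits(v, kept):
            kept.append(v)
    return kept

def toy_corrector(vectors, tol=1e-12):
    """Toy 'corrector': while some pair violates the 60-degree rule, delete the
    vector involved in the most violations ('one bad ball ruins the soup')."""
    vectors = list(vectors)
    while True:
        n = len(vectors)
        bad = [sum(np.dot(vectors[i], vectors[j]) > 0.5 + tol for j in range(n) if j != i)
               for i in range(n)]
        if not bad or max(bad) == 0:
            return vectors
        vectors.pop(int(np.argmax(bad)))

# Toy run in 3D: clean up a random cloud of directions, then try to grow it.
cloud = [v / np.linalg.norm(v) for v in rng.normal(size=(40, 3))]
cleaned = toy_corrector(cloud)
grown = toy_filler(cleaned, dim=3)
print(len(cleaned), len(grown))
```

Even this crude loop shows the division of labour: the filler grows the table, the corrector keeps it valid. In PackingStar both roles are learned with reinforcement learning rather than hard-coded.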
Results: New “at least this many” records in dimensions 25–31, better rational constructions in 13D, and 6000+ new configurations; all checked and listed in standard databases (see GitHub).
Two different things are often both called “proof” in everyday language. Here they are kept separate.
The first, checking that a given configuration is valid: yes, AI and code can do this. Given a specific arrangement of N balls, we can check: do they all touch the centre? Do any two overlap? That’s a finite calculation. The PackingStar team does exactly this with scripts (verify_coordinates.py, verify_cosmatrix.py on GitHub). So in the sense of “prove this configuration is valid,” the result can be verified, and the team has done it; configurations are accepted into official databases.
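As a sketch of what such a finite check involves (this is not the team’s verify_coordinates.py; the function name and tolerance are invented for illustration): place the central unit ball at the origin; then every outer centre must lie at distance exactly 2 from the origin, and every pair of outer centres must be at distance at least 2.

```python
import numpy as np
from itertools import permutations, product

def verify_kissing_configuration(centers, tol=1e-9):
    """Sketch of the finite check: with the central unit ball at the origin,
      - every outer centre lies at distance exactly 2 (it touches the central ball);
      - every pair of outer centres is at distance >= 2 (no two outer balls overlap)."""
    centers = np.asarray(centers, dtype=float)
    if not np.allclose(np.linalg.norm(centers, axis=1), 2.0, atol=tol):
        return False
    n = len(centers)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centers[i] - centers[j]) < 2.0 - tol:
                return False
    return True

# The 12 contact points of the face-centred-cubic arrangement in 3D (the classical answer, 12).
points = set()
for perm in permutations((1.0, 1.0, 0.0)):
    for signs in product((1.0, -1.0), repeat=3):
        points.add(tuple(s * x for s, x in zip(signs, perm)))
fcc = np.sqrt(2.0) * np.array(sorted(points))
print(len(fcc), verify_kissing_configuration(fcc))  # 12 True
```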
The second, proving that no better configuration exists: PackingStar does not do this. Showing “no arrangement can have more than N balls” is a different task: you must argue logically that every possible arrangement is at most N, not just check one arrangement. That’s what mathematicians mean by a “rigorous proof” of the kissing number. The paper says the 25D configuration “may represent the optimal … arrangement, although a rigorous mathematical proof is still lacking.” So: AI finds and we can verify good configurations; proving they are the best possible still needs human mathematics (and possibly other tools).
PackingStar is built to find good configurations, not to write or check formal mathematical proofs. Other AI systems and tools do focus on proof; examples are listed below.
So: PackingStar finds configurations and we verify them with scripts; formal proof tools (Lean, Isabelle, Coq) plus AI (e.g. AlphaProof, Lean Copilot) could in future help turn a human argument into a machine-checked proof that a kissing number is optimal. Right now, for the new PackingStar bounds, that step is still missing—which is why the paper says “rigorous mathematical proof is still lacking.”
See e.g. AlphaProof (DeepMind), Lean Copilot (arXiv:2404.12534), and proof assistants Lean, Isabelle, Coq.
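For readers who have never seen a proof assistant, the difference from script-based checking is that a proof assistant verifies every logical step of an argument, not just a finite computation. The toy Lean 4 snippet below (unrelated to kissing numbers, purely illustrative) shows the mechanism: the statement is accepted only if the proof that follows it is complete.

```lean
-- A toy machine-checked statement in Lean 4 (nothing to do with kissing numbers):
-- the checker accepts the file only because the proof is complete and correct.
theorem twelve_le_thirteen : (12 : Nat) ≤ 13 := by
  decide
```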
1. Hook. Refer to the SCMP report on the Chinese team’s progress on the “kissing number” problem with AI (PackingStar).
2. Main argument. Humans in the loop: (a) problem and proofs from Newton and Hilbert; (b) decades of mathematicians pushed bounds; (c) the team embedded human intuition in the design (e.g. corrector from “delete bad balls”) and still rely on people to verify and interpret. AI can verify that a configuration is valid (and the team does this); proving it’s the best possible still needs humans—and in future, tools like AlphaProof or Lean could help turn such proofs into machine-checked form.
3. Close. When we use AI for research, including maths, keep humans at the centre: posing questions, designing methods, and checking meaning—and, where needed, completing the proof.
I refer to your report on the Chinese team’s use of AI to advance the “kissing number” problem (“Romance between scientists and AI,” 19 February).
The lesson is the enduring value of humans. The problem comes from Newton and Hilbert; exact answers in some dimensions were proved only after centuries of work. Later researchers used lattices and codes to push bounds—and the PackingStar team had to reproduce those results before claiming new ones. The “corrector” was inspired by a mathematician manually deleting bad entries; verification and interpretation still rest with people. AI can verify that an arrangement is valid (the team does this); proving it is the best possible still needs human mathematics—tools like AlphaProof or Lean may one day help. As the team said, it is “human and machine exploring the universe together.”
When we use AI for research, including maths, we should keep humans in the loop: posing questions, designing methods, and—where needed—completing the proof.
(About 148 words.)
Ref: Ref/SCMPreport.md; work/ideas4letters.md, moreIdeas.md.