Duality is one of the most – if not the most – fundamental principles in our universe. It’s a principle that, once it’s in your gut, transforms the way you approach equations and algebraic concepts, putting a perimeter around each problem rather than letting the edges disappear into the obscure distance.
However, unless you’ve taken specific classes, or read graduate-level textbooks, it’s unlikely that you’ve heard much about it.
The reasons for this aren’t clear.
Perhaps it’s because, as Michael Atiyah put it in a lecture he gave on the topic in 2007 [1]:
…duality in mathematics is not a theorem, but a ‘principle’…
supposedly implying that duality acts more like a relational operator than a rule and that, as an operator, it typically isn’t needed by undergraduates – evidenced by how Atiyah then devotes the rest of his lecture to giving examples of this principle in practice: Poincaré Duality, Pontryagin Duality, Hodge Theory, Yang-Mills Theory, etc – most of which land well inside the ‘advanced’ side of mathematics.
Either way, because of its status as a ‘principle’, duality tends not to be defined concretely but rather discussed, which turns it into a vague concept that one comes to appreciate through example.
I’m going to break that trend, however, and start this article by offering you my own definition of duality – well, actually, I’m going to define it under the name Closed Duality so as not to ruffle any feathers of those who may have more general notions of the concept – and then we’ll explore it a little.
Closed Duality: an involutory pairing between the elements of a closed system. Here, ‘involutory’ means that if $a=dual(b)$ then $dual(a)=b$, and ‘closed’ means that all operations on elements defined by the system result in other elements that are also well-defined within the system.
That’s my definition. Now, let’s explore.
1. Holes and Pegs ¶

The simplest and most fundamental example of duality is the binomial coefficient, which says that if you have $n$ holes and $p$ identical pegs, then there are
\begin{equation} \frac{n!}{(n-p)!p!} \tag{1.1} \label{eq:1.1} \end{equation}ways to arrange the pegs in the holes.
However, there is a symmetry right down the middle of this combinatoric problem. That is, for a given number of pegs, the number of unfilled holes which are left over is $u=(n-p)$ and, when substituted into (1.1), we find that there are
\begin{equation} \frac{n!}{(n-u)!u!}=\frac{n!}{(n-(n-p))!(n-p)!}=\frac{n!}{p!(n-p)!} \tag{1.2} \label{eq:1.2} \end{equation}ways to arrange them – the same number as in (1.1). Thus, the problem of arranging unfilled holes is the same problem as arranging the pegs, just seen from a different perspective.
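If you’d like to see this symmetry numerically rather than on paper, here’s a throwaway Python sketch (nothing beyond the standard library; the choice of $n=10$ is arbitrary) that checks the peg and unfilled-hole counts agree:

```python
from math import comb  # comb(n, p) = n! / ((n - p)! * p!)

n = 10  # total number of holes: the fixed quantity that closes the system
for p in range(n + 1):
    u = n - p  # unfilled holes left over for a given number of pegs
    # Arranging p pegs is the same problem as arranging u unfilled holes
    assert comb(n, p) == comb(n, u)
    print(f"pegs={p:2d}  unfilled={u:2d}  arrangements={comb(n, p)}")
```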
This is the essence of closed duality. The system is closed because of the fixed number of total holes, and precisely because of that closure a combinatoric symmetry arises.
Though simple, this combinatoric duality is arguably the underlying mechanism at the heart of many more complex examples, so it is worth keeping one foot on it at all times.
2. Zero and Infinity ¶
In the last example we saw that two quantities (holes and pegs) formed a duality in the trade-off between who was who.
This idea can be generalized into a more geometric setting by considering two participants, who we’ll call $A$ and $B$, and we’ll imagine that they are sharing some fixed integer quantity
\begin{equation} N=a+b \tag{2.1} \label{eq:2.1} \end{equation}of which $A$ has $a$ units and $B$ has $b$ units; $a$ and $b$ can take on any non-negative integer values, as long as they sum to $N$.
One way to represent a given pairing of $a$ and $b$ is in the form of a tuple $[a:b]$. For example, for $N=3$ we could have
$$ [0:3],[1:2],[2:1],[3:0] $$As you would expect by intuition, there is a line of symmetry down the center of an ordered list of these tuples such that on one side $A$ has everything, in the middle $A$ and $B$ have near-equal amounts, and on the other side $B$ has everything.
Another way we could represent these sets of tuples so that the same symmetry is apparent is by stating the amounts as a rational number $\lambda=\frac{a}{b}$, which would give us values
$$ \frac{0}{3},\frac{1}{2},\frac{2}{1},\frac{3}{0} $$These two representations can be depicted on a graph like so:

The key observations to make about the above are the following:
- If we switch the roles of $A$ and $B$ the representations are just mirror images.
- In the rational representation, depending on which perspective we take, one of the points becomes $\lambda=0$ and the other becomes $\lambda=\infty$.
The latter observation gives rise to the notions of a point at zero and a point at infinity, which form a duality with respect to the rational numbers.
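As a small illustration, here’s a quick Python sketch (my own toy code, with division by zero treated explicitly as the point at infinity) showing that swapping the roles of $A$ and $B$ sends $\lambda$ to $1/\lambda$, exchanging the point at zero with the point at infinity:

```python
N = 3
pairs = [(a, N - a) for a in range(N + 1)]  # every way of splitting N between A and B

def ratio(a, b):
    return a / b if b != 0 else float("inf")  # treat b = 0 as the point at infinity

for a, b in pairs:
    lam = ratio(a, b)       # lambda from A's perspective
    swapped = ratio(b, a)   # lambda after A and B trade roles: 1 / lambda
    print(f"[{a}:{b}]  lambda={lam}  swapped={swapped}")
# [0:3] and [3:0] trade places: lambda = 0 becomes lambda = inf, and vice versa
```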
At this point I haven’t shown you anything too complex, but these simple examples are the building blocks for what’s to come.
3. Symmetry and Anti-Symmetry ¶
There is another way we could have represented the two-participant system from the previous section, namely by using sines and cosines.
These functions represent the ratio of the quantities $a$ and $b$ via a phase parameter $\theta$, such that the conservation of the total amount (here normalized to $N=1$) is ensured by the identity
\begin{equation} cos^2(\theta) + sin^2(\theta)=1 \tag{3.1} \label{eq:3.1} \end{equation}In this view, $cos(\theta)$ is essentially playing the role of $A$ and $sin(\theta)$ is essentially playing the role of $B$, and as $\theta$ increases from $0$ to $\frac{\pi}{2}$, what $A$ has gets passed into possession of $B$.
Whilst it is instructive to think about things this way, I introduce these functions as a stepping stone to their more analytical counterpart – the exponential function – which can be defined like so:
\begin{equation} exp(X) = \lim_{n \to \infty} \Biggl(I + \frac{X}{n}\Biggr)^n \tag{3.2} \label{eq:3.2} \end{equation}where $I$ is taken to be a ‘do nothing’ action – also known as ‘the identity action’ (or $1$) – and $X$ is some other commutative action.
The exponential function, by this definition, represents a push away from a stationary position (represented by $I$) by an action $X$, and it is calculated by splitting the action $X$ into a very large (in the limit, infinite) number of smaller perturbations, which are performed successively rather than all at once.
In general, taking the $n_{th}$ power of a sum of two commutative terms gives the following
\begin{equation} (x+y)^n = \sum^{n}_{k=0} \binom{n}{k}x^{k}y^{n-k} \tag{3.3} \label{eq:3.3} \end{equation}(notice how the binomial coefficient is ultimately the thing moderating the ratios of the various powers – hence why I’ve gone into these specifics)
Applying this to the exponential function we recover the familiar expansion
\begin{equation} exp(X) = \lim_{n \to \infty}\sum^{n}_{k=0}\binom{n}{k}\Biggl(\frac{X}{n}\Biggr)^k = \frac{1}{0!} + \frac{X}{1!} + \frac{X^2}{2!} + \ldots + \frac{X^k}{k!} + \ldots \tag{3.4} \label{eq:3.4} \end{equation}You often see the exponential function stated directly in the form above (3.4), which can lead one to view it as some abstract, infinite entity, dependent upon irrational numbers. But its essence is perfectly well understood without invoking infinity – it’s just the repeated application of $(I + \frac{X}{n})$ a large number of times. There’s no reason for $X$ to be a number; it could be an action, a function, or a matrix.
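To make the ‘repeated small pushes’ reading concrete, here’s a quick numerical sketch (plain Python, using an ordinary number for the action rather than a matrix, with the values of $x$ and $n$ chosen arbitrarily) showing that $(1 + x/n)^n$ creeps up on the series in (3.4):

```python
import math

x = 0.7  # the total 'push' away from the identity
for n in (10, 100, 10_000, 1_000_000):
    repeated = (1 + x / n) ** n  # n successive small perturbations, as in (3.2)
    print(f"n={n:>9}  (1 + x/n)^n = {repeated:.8f}")

series = sum(x**k / math.factorial(k) for k in range(20))  # truncated expansion (3.4)
print(f"series sum           = {series:.8f}")
print(f"math.exp(x)          = {math.exp(x):.8f}")
```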
One such commutative matrix we could choose is
\begin{equation} J = \begin{bmatrix}0 & 1\\-1 & 0\end{bmatrix} \tag{3.5} \label{eq:3.5} \end{equation}representing the action of $B$ passing ‘stuff’ forward to $A$ and $A$ returning an equal amount to $B$ – which, indicated by the minus sign, arrives at $B$ from behind, essentially capturing something akin to a conveyor belt running between $A$ and $B$ (figure 3.1).

Setting
\begin{equation} X = J\theta \tag{3.6} \label{eq:3.6} \end{equation}and substituting into the exponential function (3.4), we can use it to represent the action of $A$ pushing a proportion of ‘stuff’, $\theta$, round the conveyor belt to $B$. Notably, we find that
\begin{equation} exp(J\theta) = I\,sym(\theta) + J\,asym(\theta) \tag{3.7} \label{eq:3.7} \end{equation}where $I$ is the identity between $A$ and $B$ and
\begin{equation} sym(\theta) = \frac{1}{0!} - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \ldots \tag{3.8} \label{eq:3.8} \end{equation}\begin{equation} asym(\theta) = \frac{\theta}{1!} - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \ldots \tag{3.9} \label{eq:3.9} \end{equation}which you will no doubt recognize as Euler’s formula:
\begin{equation} exp(i\theta) = cos(\theta) + i\,sin(\theta) \tag{3.10} \label{eq:3.10} \end{equation}However, I’ve chosen to represent it in the way I have in order to demonstrate that, at its core, this exchange between cosine and sine is intrinsically combinatoric and, more than that, to demonstrate how even powers of $J\theta$ become associated with the point at zero, while odd powers become associated with the point at infinity.
Even and odd powers likewise correspond to changes which are, respectively, symmetric and anti-symmetric about the point at zero (hence my giving them names $sym$ and $asym$).
Put in a less rigorous, but intuitive, way: from the perspective of each participant, quantities which vary symmetrically have a sense of being associated with oneself, whereas quantities which vary anti-symmetrically have a sense of being associated with change.
We are observing a duality between symmetry and anti-symmetry.
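If you’d rather not take the series manipulation on trust, here’s a small numpy sketch (the angle and the number of steps are arbitrary choices of mine) that builds $exp(J\theta)$ from repeated small pushes and compares it against the split in (3.7):

```python
import numpy as np

I = np.eye(2)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])  # the 'conveyor belt' operator from (3.5)

theta = 0.8
n = 1_000_000
pushed = np.linalg.matrix_power(I + J * theta / n, n)  # the limit definition (3.2)

claimed = I * np.cos(theta) + J * np.sin(theta)  # the symmetric/anti-symmetric split (3.7)

print(np.round(pushed, 6))
print(np.round(claimed, 6))
print("max difference:", np.abs(pushed - claimed).max())
```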
An interesting example of this at work is in Independent Component Analysis (presented here in a form taken from Kutz, 2013, p.419 [2]):
If we have two distributions lying on top of one another that we wish to tease apart, one way to do this is to try to reverse-engineer the process by which the data ended up as it is; the first step of which is to rotate the data so that (as best as possible) it aligns with our axes.

One way to achieve this is to use calculus to find the point where the variance of the data around the axes is minimized. When we do so we find that the angle ($\phi$) by which we need to rotate the data is given by
\begin{equation} \frac{1}{2}tan(2\phi) = \frac{ \sum^{N}_{i=0}X_{A}(i)X_{B}(i) }{ \sum^{N}_{i=0}\bigl(X_{A}(i)^2-X_{B}(i)^2\bigr) } \tag{3.11} \label{eq:3.11} \end{equation}that is, we find that $\phi$ is given by the ratio of anti-symmetric (in the numerator) to symmetric (in the denominator) relationships between the points relative to the axes.
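Here’s a minimal numeric sketch of (3.11) in action; the synthetic data, the variable names, and the use of arctan2 are my own choices, not anything taken from Kutz’s code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D data: independent, axis-aligned spread, then rotated by a 'hidden' angle
hidden = 0.6
R = np.array([[np.cos(hidden), -np.sin(hidden)],
              [np.sin(hidden),  np.cos(hidden)]])
raw = rng.normal(size=(2, 2000)) * np.array([[3.0], [0.5]])  # anisotropic, axis-aligned
X_A, X_B = R @ raw                                           # the mixed observations

# Equation (3.11): tan(2*phi) = 2 * sum(X_A * X_B) / sum(X_A^2 - X_B^2)
num = 2 * np.sum(X_A * X_B)    # anti-symmetric (cross) relationship
den = np.sum(X_A**2 - X_B**2)  # symmetric (self) relationship
phi = 0.5 * np.arctan2(num, den)

# Rotate back by phi and check the cross term has been squeezed out
unmixed = np.array([[np.cos(phi),  np.sin(phi)],
                    [-np.sin(phi), np.cos(phi)]]) @ np.vstack([X_A, X_B])
print("recovered angle:", phi, " hidden angle:", hidden)
print("cross term after rotation:", np.sum(unmixed[0] * unmixed[1]))
```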
The tools of calculus are unquestionably of huge value, and it’s by no means the case that every tool has to deliver insight to be of use, but examples like this do motivate one to question whether there may be alternative paths to the solutions of linear problems that we may be overlooking.
This connection between calculus and duality is something we’ll be exploring a little further in the next section…
4. Position and Momentum ¶
In the last section we showed how the duality of sines and cosines arises from combinatoric origins, and ended by questioning whether invoking duality could help us bypass some of our calculus-based habits of reasoning.
While I can do no justice to the depth of this question in an article of this size, we can explore it a little.
The rule for performing differentiation on polynomials, taught in all high-schools, is
\begin{equation} \frac{dx^n}{dx}=nx^{n-1} \tag{4.1} \label{eq:4.1} \end{equation}If we apply this to the exponential expansion (3.4) we get
\begin{equation} \frac{d}{dx}e^{cx}=ce^{cx} \tag{4.2} \label{eq:4.2} \end{equation}which is the familiar result that ‘exponentials are their own derivative’ (up to scale). When this is evaluated around the point at zero we thus find that
\begin{equation} \left.\frac{d}{dx}e^{cx}\right|_{x=0}=c \tag{4.3} \label{eq:4.3} \end{equation}and, in the case of our ‘conveyor-belt’ equation (3.7), we get
\begin{equation} \left.\frac{d}{d\theta}e^{J\theta}\right|_{\theta=0}=J \tag{4.4} \label{eq:4.4} \end{equation}which is interesting… It’s telling us that the derivative at a point acts in the direction of ‘greatest duality’ from that point – the direction which is most anti-symmetric, relatively speaking – direction $J$.
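A quick finite-difference check of (4.4), with the step size picked by hand rather than with any rigour:

```python
import numpy as np

I = np.eye(2)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def exp_J(theta):
    # Closed form of exp(J*theta) from (3.7): I*cos(theta) + J*sin(theta)
    return I * np.cos(theta) + J * np.sin(theta)

h = 1e-6
derivative_at_zero = (exp_J(h) - exp_J(-h)) / (2 * h)  # central difference at theta = 0
print(np.round(derivative_at_zero, 6))  # recovers J, the most anti-symmetric direction
```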
Another interesting thing we can do is substitute the derivative operator itself into the exponential equation:
\begin{equation} exp\Bigl(\Delta\theta\frac{d}{d\theta}\Bigr) = \sum^{\infty}_{k=0}\frac{1}{k!}(\Delta\theta)^k\frac{d^k}{d\theta^k} \tag{4.5} \label{eq:4.5} \end{equation}This result is well known as the ‘Taylor series’ – a tool for smooth approximation.
It shouldn’t be too surprising that we get this result, as this was pretty much our definition of the exponential function in the first place – i.e. you push the system away from the point at zero by an amount $x$ by breaking the journey up into tiny pieces.
However, the equation in the form of (4.5) is hiding the fact that we’re really just calculating
\begin{equation} exp\Bigl(\Delta\theta\frac{d}{d\theta}\Bigr) = \lim_{n \to \infty} \Bigl(1 + \delta\theta\frac{d}{d\theta}\Bigr)^n \equiv \Bigl(I + \delta\theta J\Bigr)^n, \qquad \delta\theta=\frac{\Delta\theta}{n} \tag{4.6} \label{eq:4.6} \end{equation}It seems then that what the Taylor approximation is really telling us is that, whenever we traverse a geometry, we can break our journey up into tiny steps, such that each step is a simple linear exchange between symmetric and anti-symmetric spaces.
(Notice here that I haven’t explicitly mentioned differentiation or infinitesimals, I’ve only mentioned the idea of there being an exchange between what, at a given point, feels like two independent spaces).
This is an idea that forms the conceptual bedrock of fields such as differential geometry and Hamiltonian mechanics.
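Here’s a sketch of (4.5) read as a ‘shift’ operator (numpy’s polynomial helpers, with the test polynomial and the step chosen arbitrarily): summing scaled derivatives at one point reproduces the function’s value a step away.

```python
import math
from numpy.polynomial import Polynomial

f = Polynomial([1.0, -2.0, 0.5, 3.0, -0.25])  # an arbitrary smooth test function
theta0, dtheta = 1.3, 0.4

# exp(dtheta * d/dtheta) applied to f at theta0, term by term as in (4.5)
shifted = f(theta0)  # the k = 0 term
for k in range(1, f.degree() + 1):
    shifted += dtheta**k / math.factorial(k) * f.deriv(k)(theta0)

print(shifted, f(theta0 + dtheta))  # the two agree exactly for a polynomial
```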
So, with that in mind, let’s get to the idea that I really want to talk about in this section…
Let’s say that you have some mechanical system and that you want to make predictions about it. In that case, the only way you can do that is if the system follows predictable rules.
However, ‘predictable’ is a vague word.
In more concrete terms, what we mean is that the geometry at one point can’t be connected to the geometry at another point via some black-box that scrambles everything around. There has to be a deterministic through-line.
For example, if you were to draw a set of grid lines on your geometry, then these grid lines must squash and rotate in predictable ways – otherwise the system becomes degenerate, grid-lines merge or split (figure 4), and predicting unique outcomes becomes impossible.

Now imagine that we have a set of points moving through the geometry.
The density of grid lines would then reflect something akin to ‘velocity’. That is, if the gaps between grid lines actually correspond to some fixed distance, then from the perspective of a point flying over the grid in some consistent manner, the stretching and squashing of the grid would actually correspond to the point gaining and losing velocity.
If we say that the distance between the points is
$$ \Delta x $$and that the density of grid lines at a point is
$$ \Delta p $$then $x$ and $p$ are known respectively as the canonical coordinate and canonical momentum , and there is a key rule which they must obey in order for the geometry to flow as we would like.
This rule is known as Liouville’s theorem, and states that
\begin{equation} \Delta x \Delta p=const \tag{4.7} \label{eq:4.7} \end{equation}In other words, changes in the space between states should always be equally balanced by changes in the density of the geometry.
To understand this intuitively, think about a spring, which oscillates faster or slower in opposition to how stretched it is. Or think about the coolant in a refrigerator, which gets hotter/colder as it contracts/expands.
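To put a number on the spring example, here’s a rough sketch (a unit-mass harmonic oscillator with hand-picked constants) showing a small patch of phase space keeping its area $\Delta x\,\Delta p$ as it flows:

```python
import numpy as np

def flow(x, p, t, omega=1.0):
    """Exact phase-space flow of a unit-mass harmonic oscillator."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    return c * x + (s / omega) * p, -omega * s * x + c * p

# A small patch of nearby states: a rectangle of sides dx by dp
x0, p0, dx, dp = 1.0, 0.0, 1e-3, 2e-3
corners = [(x0, p0), (x0 + dx, p0), (x0, p0 + dp)]

for t in (0.0, 0.7, 1.4, 2.1):
    moved = np.array([flow(x, p, t) for x, p in corners])
    e1, e2 = moved[1] - moved[0], moved[2] - moved[0]   # the two edge vectors
    area = abs(e1[0] * e2[1] - e1[1] * e2[0])           # area of the evolved patch
    print(f"t={t:.1f}  patch area = {area:.3e}   (initial dx*dp = {dx * dp:.3e})")
```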
While Liouville’s theorem is a cornerstone result of mechanics and geometry, we could possibly have surmised its existence based on what we know from our exponential results. Writing Liouville’s theorem another way, we get that
\begin{equation} \Delta p=\frac{const}{\Delta x} \implies \Delta x\frac{const}{\Delta x}=const \tag{4.8} \label{eq:4.8} \end{equation}which is reminiscent of our understanding that changes along the point at zero (e.g. say the $A$ direction) should be compensated by changes along the relative point at infinity (the direction of the differential). I.e. that
$$ dA\frac{d}{dA}=constant $$Of course, the nuts and bolts are somewhat different, but the underlying principle of duality is reaching out to us.
In the next section, we’ll see how more complicated, multi-dimensional derivatives are made simpler still by embracing duality in a recursive manner.
5. Space and Time ¶
In the previous sections I’ve been deliberately vague in using the word ‘stuff’ to describe the nature of the quantity that $A$ and $B$ are exchanging. If you’ve never opened a book on differential geometry or abstract algebra (which is fine, it’s no doubt the case for most people studying applied mathematics) then there’s a good chance that you have a habit of thinking that words like ‘stuff’ and variables like $x$, $y$, and $z$ are there to be replaced solely by numerical quantities like $93$, $34$, or $8$.
But this is a narrow perspective.
Take our conveyor belt operator $J$; if I hadn’t said that $J$ is equal to matrix (3.5) you would probably have had no sense of how to use it in an equation. But, by defining it as a matrix, it inherited an agreed-upon set of conventions of addition and multiplication that made it compatible with the exponential equation. You might have even recognized that $J$ is essentially equivalent to the imaginary number $i$… because that’s what it is. The imaginary number $i$, like $J$, is just a representation of an exchange within a system with two degrees of freedom.
The problem with numbers like $i$ though, is that we lose sense of what degrees of freedom they’re referring to. In one situation you might be using $i$ to refer to an exchange between $A$ and $B$, whereas in another you might be referring to $C$ and $D$. If you only talk about $i$ in your equations then you are forced to make that context clear either outside of your equations (i.e. in prose) or else you have to extend your algebraic language to capture it.
Alternatively we could invent some replacement notation for $i$ altogether, which is the approach of mathematical fields like Geometric Algebra and, to some extent, differential forms. In Geometric Algebra we would capture the notion of $A$ exchanging with $B$ like so:
$$ J=A \wedge B $$Similarly $C$ and $D$ could be captured as a separate symbol
$$ K=C \wedge D $$And the two could be used together in expressions such as
$$ 10JKJ - 2K^2J^2 $$Or, importantly, we could even treat these new algebraic symbols as building blocks in their own right, and compose them to make even larger algebraic constructs, e.g.
$$ H=J \wedge K=(A \wedge B) \wedge (C \wedge D) $$I’m skipping over a lot of nuance here, because the thing I want you to focus on is the idea that duality need not be something that acts on a flat plane. It can form nested structures.
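Before moving on, here’s a small sketch of the $J=A \wedge B$ idea using one concrete stand-in for the wedge, the antisymmetrized outer product. This is only one of several possible representations, and it only covers the flat two-dimensional case rather than the nested one:

```python
import numpy as np

def wedge(u, v):
    """Antisymmetrized outer product: a simple matrix stand-in for u ^ v."""
    return np.outer(u, v) - np.outer(v, u)

A = np.array([1.0, 0.0])  # basis direction for participant A
B = np.array([0.0, 1.0])  # basis direction for participant B

J = wedge(A, B)
print(J)            # [[0, 1], [-1, 0]]: the conveyor-belt operator (3.5)
print(J @ J)        # [[-1, 0], [0, -1]]: squares to -I, just like i
print(wedge(A, A))  # the wedge of anything with itself vanishes
```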
One of the most influential examples of this is the Electromagnetic field.
The dynamics of the Electromagnetic field were originally published in (Maxwell, 1865) [3] as a set of four differential equations of the electric field $E$ and the magnetic field $B$. In a vacuum (i.e. with no exogenous currents/charges, and in units where $c=1$) they can be written in terms of one time variable $t$, and three space variables $x$, $y$, and $z$ as
\begin{equation} \nabla \bullet B=0 \tag{5.1} \label{eq:5.1} \end{equation}\begin{equation} \nabla \bullet E=0 \tag{5.2} \label{eq:5.2} \end{equation}\begin{equation} \frac{\partial E}{\partial t} = \nabla \times B \tag{5.3} \label{eq:5.3} \end{equation}\begin{equation} \frac{\partial B}{\partial t} = -\nabla \times E \tag{5.4} \label{eq:5.4} \end{equation}The equations have largely been taught in this form since 1865. However, in the early 1900s, in step with advances in differential geometry, alternative formulations of the equations emerged. (These formulations appear to be attributed to (Weber, 1900) [4] and (Weyl, 1918) [5]; however, the geometric algebra version I’m about to show you is attributed to (Hestenes, 1966) [6]). The idea is that, if we are in a reference frame defined by the time axis $t$, and if we free ourselves to create higher-order algebraic objects from our space and time dimensions
\begin{equation} \sigma_{tx}, \sigma_{ty}, \sigma_{tz}, \sigma_{xy}, \sigma_{yz}, \sigma_{zx} \tag{5.5} \label{eq:5.5} \end{equation}(where $\sigma_{tx}=t \wedge x$)
and along with these we create a space-time conveyor belt operator
\begin{equation} J_{EM}= t \wedge x \wedge y \wedge z \tag{5.6} \label{eq:5.6} \end{equation}then we find that the electric field becomes responsible for ‘time-like’ components, while the magnetic field becomes responsible for space-like components:
\begin{equation} E = e_x\sigma_{tx} + e_y\sigma_{ty} + e_z\sigma_{tz} \tag{5.7} \label{eq:5.7} \end{equation}\begin{equation} B = b_x\sigma_{yz} + b_y\sigma_{zx} + b_z\sigma_{xy} \tag{5.8} \label{eq:5.8} \end{equation}More than that, we can write the electromagnetic field and Maxwell’s vacuum equations in a much simpler form (with $I_{EM}$ being the identity):
\begin{equation} F = I_{EM}E + J_{EM}B \tag{5.9} \label{eq:5.9} \end{equation}\begin{equation} \nabla F = 0 \tag{5.10} \label{eq:5.10} \end{equation}Notice here the similarity between the exponential expansion in (3.7) and equation (5.9). It is telling us that the electric and magnetic fields are bound by duality; that is, if we change our perspective, ‘stuff’ belonging to one suddenly becomes ‘stuff’ belonging to the other – something that isn’t at all obvious from Maxwell’s equations, as important as they are.

What’s more, the ‘stuff’ that they’re exchanging isn’t something we would think of as being quantifiable in units or parcels. The ‘conveyor belt’ in this case is $J_{EM}$, which we defined in terms of four dimensions. It’s a duality built on top of smaller dualities (figure 5).
But look at how, by embracing this nested structure, the same equations that were describing smaller systems rose to the surface.
This will be the theme of our final section.
6. Spotting the signs ¶
In the previous section I spoke about how complex numbers can refer to different things in different contexts, and how if a language isn’t equipped to capture that context it can end up leaking into prose, or being compensated for by layers of conceptually orthogonal notation.
In this final section I want to end by providing you with a list of concepts/notation that you’ll often see appearing in dual pairs – that is, in a form indicating balance such as (where $*$ indicates $dual$)
\begin{equation} A + A^* = 0 \tag{6.1} \label{eq:6.1} \end{equation}or
\begin{equation} A \times A^* = const \tag{6.2} \label{eq:6.2} \end{equation}Some of these concepts are already well understood (or even explicitly defined) as being dual, whereas others are more subtle (less rigorous), and appear here more as an invitation to the reader to reevaluate the way they approach new equations.
With that in mind, my list goes as follows:
- Arithmetic
  - Plus & Minus
  - Numerator & Denominator
  - Clockwise & Anti-clockwise (for modulo arithmetic)
- Algebraic
  - Zero & infinity
  - Positive exponents & Negative exponents
  - Matrices & Adjoint Matrices (this includes matrices as representations of complex numbers and quaternions, whose adjoints are given by their conjugates)
  - Bra & Ket vectors – i.e. Hilbert-space vectors and their adjoints
  - Isomorphism domains & images
- Geometric
  - Configuration-space & tangent-space
  - Points & lines
  - Coordinates & conjugate momentum
  - Symmetry & Anti-Symmetry
  - Rotational & Translational transformations
  - Self-Interaction & interaction with forcing terms
(Anyone who has read even as far as Poincaré Duality knows that I’m leaving a lot off of this list, and that I’m being somewhat crude with what I have included; but that’s fine, I’m just lighting a beacon here.)
With respect to the final couple of items on the list, I’d like to point readers towards a challenge paper – ‘Lie Groups as Spin Groups’ (Doran et al., 1993) [7]. It personally took me a huge effort to understand, and I’m still absorbing it, so don’t expect it to make sense without a lot of background reading, but it has been a great source of insight that I can’t pass up sharing.
The essence of the paper is that, if higher-order operators (such as those we formed in the previous section, $J$, $K$, $I_{EM}$ etc.) are to form a self-consistent/closed algebra within a given geometry, then they can only be one of two types – which the authors derive and label type $E$ and type $F$.
Though the paper itself doesn’t give concrete physical examples of these operators in practice, I have personally found myself mentally partitioning terms in equations into these two camps and noticing a strong correlation between the split of $E$ and $F$ and the kinds of symmetries I’ve listed above.
There’s clearly so much more to be discovered in the realm of duality.
Conclusion ¶
As I mentioned at the start of this article, Duality is a concept which is most often discussed in higher-level classes and, because of that, there is an enormous amount I’ve left out.
However, that doesn’t mean that you can’t understand what duality is and why it might be so important.
If you’ve understood how duality falls out of combinatoric problems, and if you’ve understood how the exponential function creates a deep link between combinatoric problems and geometry, then you’ve understood the heart of the matter.
Duality goes hand-in-hand with symmetry, it provides a left for every right, a backwards for every forwards. It gives us a sense for how the world around us flows.
If you know how to look for it, you’ll never see the world the same again.
Bibliography ¶
- 1: Atiyah, M. F. (2007, December). Duality in Mathematics and Physics. https://fme.upc.edu/ca/arxius/butlleti-digital/riemann/071218_conferencia_atiyah-d_article.pdf
- 2: Kutz, J. N. (2013). Data-driven modeling & scientific computation: Methods for complex systems & big data (First edition). Oxford University Press.
- 3: Maxwell, J. C. (1865). A Dynamical Theory of the Electromagnetic Field. Philosophical Transactions of the Royal Society of London.
- 4: Weber, H. M. (1900). Die Partiellen Differential-Gleichungen der Mathematischen Physik. Braunschweig, F. Vieweg und Sohn.
- 5: Weyl, H. (1918). Space, Time, Matter.
- 6: Hestenes, D. (1966). Space-Time Algebra.
- 7: Doran, C., Hestenes, D., Sommen, F., & Acker, N. (1993). Lie groups as spin groups. Journal of Mathematical Physics, 34. https://doi.org/10.1063/1.530050
Mechanics/Liouville’s theorem:
- Arnold, V. I., Vogtmann, K., & Weinstein, A. (2013). Mathematical Methods of Classical Mechanics. Springer. https://public.ebookcentral.proquest.com/choice/publicfullrecord.aspx?p=5575726
- Cline, D., & Sarkis, M. (2019). Variational principles in classical mechanics.
Geometric Algebra:
- Hestenes, D. (2003). New foundations for classical mechanics (Repr). Kluwer Acad. Publ.
- Doran, C., & Lasenby, A. N. (2003). Geometric algebra for physicists. Cambridge University Press.
Links ¶
Other: ¶
- Much of my fascination with this topic was inspired by Norman Wildberger’s array of internet publications. So I urge anyone to check out his website, which contains links to his YouTube channel, along with his book, and other projects he’s working on.
- I’d also like to mention a couple of publications on projective geometry – an old framework for geometry which is rich in duality, and is currently undergoing a small renaissance. I myself am still working through these, but they have so far provided me with much inspiration:
- Coxeter, H. S. M. (1987). Projective geometry (2nd ed). Springer-Verlag. (As far as I can tell this is actually freely available in PDF format, but I won’t post a direct link just in case).
- Hestenes, D., & Ziegler, R. (1991). Projective geometry with Clifford algebra. Acta Applicandae Mathematicae, 23, 25–63. https://doi.org/10.1007/BF00046919