What day of the week was it?

Among the more esoteric forms of nerd-entertainment is the “day of the week” problem. Specifically, given a date from history, can you determine the day of the week on which it fell? For example, humans first landed on the moon on July 20, 1969, and unless you are old enough and have a great memory, you probably didn’t know that this was a Sunday. Of course you can always look it up somewhere (TimeAndDate.com is my personal favorite), but the challenge is to figure it out in your head.

In one extreme example, it’s said that the mathematician John H. Conway has a password-protection system on his computer that asks him to identify, in rapid succession, the days of the week for three randomly-generated dates. How does he (or anyone) do it?

Well, there are a few convenient and peculiar facts about our calendar that can help us do the math. First, there is a set of “reference dates” that all fall on the same day of the week every year. The easiest to remember are 4/4, 6/6, 8/8, 10/10, and 12/12. For the odd-numbered months, we have some nice dates that fall on that same day of the week: 5/9, 7/11, 9/5, and 11/7. (Check out this year’s calendar if you don’t believe me.) Notice that we don’t have any dates for January, February, or March. This is unfortunate, and is partly the result of having a leap day inserted every four years. But there are ways around that problem. I use the dates 1/9 and 2/6, along with 3/0 (i.e., the last day of February), to fill the gap. Be careful, though: we need to treat January and February of year x as technically being part of year x-1, while the 3/0 date belongs to year x.

Thus, once you know the reference day (the day of the week on which all the reference dates fall) for a given year, you can figure out the day of the week for any date that year by counting whole weeks and leftover days from a reference date. For example, July 20 is 7/20, which is 9 days after 7/11 (a Friday in 1969). So, 7/20 is one week and 2 days later, on a Sunday.

Next, we need to have a way of finding the reference day from a given year. As you might expect, that reference day moves each year, though this movement is complicated by that leap day every four years. An easy mnemonic is “12 years is but a day”. That is, if you move 12 years forward (say, from July 20 1969 to July 20 1981), the day of the week will move exactly one day forward (from Sunday to Monday). If you take into account that the day of the week skips one day forward in non-leap years and two days in leap years, you can determine the day of the week of a date in any different year. Of course, you can do all of this in reverse if you want to move backward in time.

And that’s it! We can now combine the “reference dates”, “12 years is but a day”, and “skip 1–2 days per year” rules to determine the day of the week for historical dates. For example, suppose we want to know what day of the week the stock market crashed in the U.S.—10/29/1929. Here are the steps:

  • The reference day for 2016 is Monday.
  • This means that 10/10/2016 is Monday, so that 10/29/2016 is 2 weeks and 5 days ahead—i.e., a Saturday.
  • Back up seven multiples of 12 from 2016 to 1932 (since 2016 – 7•12 = 1932), so the day of the week shifts 7 days back (to where you started)—still a Saturday.
  • Back up three more years, keeping in mind that you passed the leap day on Feb 29 1932, so you move back four days from Saturday—now on a Tuesday.
  • There you have it—Oct 29 1929 was a Tuesday (which is why it’s often referred to as Black Tuesday!).
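If you’d rather let a computer grind through these rules, here’s a rough sketch of the whole procedure in Python. It anchors everything to 2016’s reference day (Monday), and all the function names are my own, not anything standard:

```python
from datetime import date, timedelta

def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def reference_day(year, anchor_year=2016, anchor_day=0):
    """Reference day of the week for a year (0 = Monday ... 6 = Sunday).

    Moving from one year to the next shifts the reference day forward
    one day, or two days if the new year is a leap year."""
    shift = 0
    lo, hi = sorted((year, anchor_year))
    for y in range(lo + 1, hi + 1):
        shift += 2 if is_leap(y) else 1
    if year < anchor_year:
        shift = -shift
    return (anchor_day + shift) % 7

def reference_date(year, month):
    """The reference date falling in this month, as an actual date."""
    if month == 3:
        return date(year, 3, 1) - timedelta(days=1)  # "3/0" = last day of February
    days = {1: 9, 2: 6, 4: 4, 5: 9, 6: 6, 7: 11, 8: 8, 9: 5, 10: 10, 11: 7, 12: 12}
    return date(year, month, days[month])

def day_of_week(d):
    """Day of the week for a Gregorian date (0 = Monday ... 6 = Sunday)."""
    # January and February count as part of the previous year.
    ref_year = d.year - 1 if d.month <= 2 else d.year
    offset = (d - reference_date(d.year, d.month)).days
    return (reference_day(ref_year) + offset) % 7
```

Feeding it the examples from this post, `day_of_week(date(1969, 7, 20))` comes out to Sunday and `day_of_week(date(1929, 10, 29))` to Tuesday, so the hand method and the sketch agree.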

As long as you know the reference day for the current year, you can apply these rules to go forward or backward to any date in the Gregorian calendar. In my case, my birthday happens to fall on that reference day of the week, but any of the reference dates will work. (And, for the calendar nerds out there: for historical events, you need to know whether the country in question was using the Gregorian or the Julian calendar, which is a whole other can of worms.)

Thanks for reading! I’ll be back for more at an unspecified date in the future.


On the Fundamental Groups of Nations (Part II)

Hi everyone! I’ve been gone for a long while. Basically, we had a baby, then I got busy with full time work, and life changed. I won’t promise regular postings on this site, but when something interests me I will try to drop it here for you to enjoy.

To follow up on my last pre-blackout-period post, the weird boundaries and enclave/exclave situations with borders around the world led to some interesting math, but it looks like India and Bangladesh got there before I could tell you about it! Specifically, they signed a treaty last year to swap various enclaves and exclaves to simplify the border. So we say goodbye to the world’s only third-order exclave and look forward to less strife and confusion at the border.

That’s where I’ll leave this story. I hope to continue with other math topics moving forward—stay tuned for some peculiar math in the Gregorian calendar!

On the Fundamental Groups of Nations (Part I)

In which we apply mathematical topology techniques to national borders.

Welcome back! In my last post, we toured the globe to find some quirky exclaves and enclaves, including a doughnut-shaped enclave on the Arabian peninsula. I darkly hinted that these examples were only the tip of the iceberg. Now it’s time to see exactly how complicated the borders can get!

Before getting into any of the math, let’s look at the most infamous examples. First, we have the Baarle-Hertog enclaves, exclaves of Belgium that are enclaved within the Netherlands. They’re a mess.

However, this mess doesn’t hold a candle to the haphazard jumble of exclaves that occur along the India-Bangladesh border:

Perhaps unsurprisingly, there is a rather convoluted history to go along with the convoluted border. The Economist had a nice article on the matter in 2011: apparently, the confusion dates back three centuries, though how exactly it got started is up for debate. The best part of this: there’s a piece of India inside of a piece of Bangladesh, inside of a piece of India, which is itself inside of Bangladesh! This is a double-doughnut hole, and it happens at Dahala Khagrabari. Check it out!

Now, my main interest is the notion of path-connectedness. Here’s a mathematical definition (paraphrased from Kosniowski):

A space X is path-connected if, given two points a and b in X, there is a continuous mapping f from the interval [0, 1] into X for which f(0) = a and f(1) = b.

And now a more colloquial definition, with geography in mind:

A region is path-connected if it’s always possible to travel between any two points in the region without traveling through foreign territory.

This gives us a way to assess whether or not an object has been broken into pieces—in geographical terms, all the exclaves of a country are just the path-connected chunks of that country. (Note that according to this definition, even the “main” part of the country counts as an exclave.)

To get a handle on enclaves, we need a mathematical way to represent the holes within a country (or more specifically, the allowable path-types in each exclave). There’s a mathematical object that can do this, called the fundamental group. With apologies to the topologists, I will be using drastically simplified language for it. (For the curious: read more about it here.)

For our purposes, the fundamental group of a region can be written as a list of non-negative integers; the number in position n (starting at n=0) tells you how many exclaves have n holes in them. For example, a country made up of two exclaves, one having no holes and the other having 2 holes, would have fundamental group (1, 0, 1, 0, 0, …). We know that, eventually, we’ll run through all the exclaves, so this list can be truncated to (1, 0, 1). Notice that if you add up all the numbers in the list, you get the total number of exclaves.

So, this mathematical analysis can be summed up nicely as follows:

  • Find the number of exclaves (including the main body of the country).
  • Classify each of the exclaves according to the number of enclaves that they surround.
  • Write the country’s fundamental group as a list, where the nth number in the list (beginning with n=0) gives the number of exclaves with n holes in them.

Here are a couple of simple examples.

South Africa is a single piece, and surrounds one enclave (the nation of Lesotho).

Its fundamental group can be written as (0, 1). Incidentally, since Lesotho is made up of a single piece that surrounds no enclaves, its fundamental group is (1). Clearly, most countries will have a fundamental group of (1).

The United Arab Emirates is broken into two pieces: the main body of the country, and the second-order enclave contained inside the doughnut-shaped piece of Oman.

Since that doughnut-shaped piece of Oman is enclaved within the UAE, we end up with a fundamental group of (1, 1). Oman itself has a small exclave on the Persian Gulf, in addition to the doughnut-shaped piece and the main part of the country, so its fundamental group is (2, 1).
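To mechanize the three-step recipe above: once you can list, for each exclave, how many enclaves it surrounds, the rest is bookkeeping. Here’s a quick sketch in Python (the function name is my own invention):

```python
def fundamental_group(holes_per_exclave):
    """Given the number of holes (surrounded enclaves) in each exclave,
    return the list whose nth entry counts the exclaves with n holes."""
    counts = [0] * (max(holes_per_exclave) + 1)
    for holes in holes_per_exclave:
        counts[holes] += 1
    return counts

# The examples from this post:
fundamental_group([1])        # South Africa: one piece, one hole      -> [0, 1]
fundamental_group([0])        # Lesotho: one piece, no holes           -> [1]
fundamental_group([1, 0])     # UAE: main body + inner exclave         -> [1, 1]
fundamental_group([0, 0, 1])  # Oman: main, Gulf exclave, doughnut     -> [2, 1]
```

As noted above, summing the list recovers the total number of exclaves, so `sum(fundamental_group([0, 0, 1]))` gives Oman’s three pieces.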

Exclaves, Enclaves, and Doughnuts

In which we explore the strange paths taken by national borders around the world. Also, doughnuts. Mmmm, doughnuts…

Most national borders that exist today are the result of many decades (or even centuries) of redrawing, and these redrawings were often entangled with the wars, treaties, or imperial ambitions of the time. Sometimes, the border is the result of a very clear principle—for example, the US-Canada border runs neatly along the 49th parallel for much of its length (though this article shows how it’s not quite that simple). Other times, it’s the result of a long and tortuous process—compare this map of the Holy Roman Empire in 1786 with the simpler subdivisions of the modern state of Germany. [Side note: all maps from this point onward are taken from Google Maps.]

In this post, I’ll share some examples of unusual national borders—just to get the lay of the land. In the next post, I’ll examine some of the most complicated examples, and begin to apply a mathematical concept (the fundamental group) that, hopefully, will clarify the messy business of national borders.

Before going any further, though, some terminology is in order.

Enclave: an enclave is any portion of a state/province/country that is completely surrounded by another state/province/country. One good example of an enclave is the entire nation of Lesotho, which is “enclaved” within South Africa.

Exclave: an exclave is a portion of a state/province/country that is separated from the main part by the territory of one or more other states/provinces/countries. A good example of this is the Kentucky Bend, a portion of the US state of Kentucky that is surrounded by Missouri and Tennessee.

An interesting side note: the western borders of Kentucky and Tennessee are defined by the Mississippi River, following the course it ran when the border was originally defined. Over the years, most rivers will change their course in multiple places, which means that many states have small chunks that lie on the opposite side of the Mississippi River (go explore the above map to see what I mean). These are called pene-exclaves, since it isn’t a border that separates them from the rest of their state, but rather a geographic feature (in this case, the river).

Warning! These definitions are not mutually exclusive. Some, but not all, exclaves are also enclaves. To avoid confusion, I’ll just say exclave to mean one portion of a country that’s separated from the main part, and enclave to mean a country that’s completely surrounded by a single other country.

Another Warning! These definitions depend on which country you’re referring to. One example of this is the Spanish town of Llívia (see map below); it is enclaved within France, but it is an exclave of Spain.

Let’s see what else is out there! We continue our tour through Europe:

Enclaved Countries. There are a number of microstates in Europe, but only two of them are true enclaves: San Marino and Vatican City, both surrounded by the Italian Republic. These two, along with Lesotho, give us all of the world’s enclaved countries.

Alpine Villages. The Alps are host to two interesting exclaves, both surrounded by Switzerland. One is Campione d’Italia, an Italian town surrounded by the southern Swiss canton of Ticino. The other is Büsingen am Hochrhein, a German town surrounded by Schaffhausen canton.

The Politics of Railroads. The Belgium/Germany border is host to a strange series of exclaves.

As you can see from the map, there are five German exclaves surrounded by Belgium (one of which is just a house and its yard), barely separated from their homeland by two Belgian-owned roads and a rail line running along their eastern edge. Apparently, this entire thread of territory was once a rail line (the Vennbahn) that Germany ceded to Belgium as part of the Treaty of Versailles. The roads intersect with German roads and highways, but are still Belgian. It’s really strange—go poke around the map.

Doughnuts. Lest we spend all of this post in Europe, the Arabian peninsula is host to a fascinating little enclave/exclave situation. The village of Nahwa straddles the border of Oman and the United Arab Emirates (UAE), which isn’t all that unusual in itself. However, the UAE portion is part of an enclave, surrounded by Omani territory which is itself surrounded by UAE territory, thus creating a doughnut-shaped chunk of Oman inside of the UAE.

The innermost UAE territory is an example of a second-order exclave: an exclave within an exclave. This might seem to be the height of absurdity when it comes to national borders, but it turns out we’re only getting started. In my next post, we will hit the accelerator and see how complicated things can really get—including a look at the world’s only third-order exclave.

See ya next time!

Physical Analogies: Making the Inconceivable Conceivable

In which we explore some of the more unusual attempts to make huge numbers more understandable.

In the modern world, there’s always a problem when explaining the realm of the very large or the very small to a general audience. When you’re faced with the fact that the sun is 333,000 times more massive than the Earth, which itself is billions of times more massive than the largest objects a person would come across in a typical day, you need to find creative ways to bridge the cognitive gap.

Often, this ends up producing some silly results. I’ve been collecting examples for several months now, and now it’s time to share! I will go in order from least silly to most silly.

1. The Astronomical Unit. In Mark Anderson’s book, The Day The World Discovered The Sun, the author recounts an analogy used by John Lathrop in his 1814 work “Lectures on Natural Philosophy” to put the Earth-Sun distance into context. According to Lathrop, the distance from the Earth to the Sun is “…so prodigious that a cannon ball going at the rate of 8 miles in a minute would be more than 22 years in traveling from our globe to the central and solar luminary of its orbit.” This analogy seems mildly silly, but only because a cannon ball is an archaic reference to most modern readers.

2. Precision of Atomic Clocks. This one is a case where the very large is used to explain something very small. Back in January, NPR aired a story “Tickety-Tock! An Even More Accurate Atomic Clock.” The reporter, Nell Greenfieldboyce, described a new atomic clock “…that would neither gain nor lose a second in 5 billion years.” This is just a way of contextualizing the fact that the clock is accurate to 0.2 nanoseconds over the course of a year. However, the report ends by musing that, one day, atomic clocks could be so accurate that they’d lose only 1 second every 50 billion years—a time interval more than 3 times the age of the universe. A true analogy, yes, but a little silly, too.

3. Randall Munroe’s What If? Blog. Munroe has become his own cottage industry of quirky physics speculation, and there’s no better example than his blog, What If?, in which he answers physics questions from readers. In one post, to explain exactly how fast the International Space Station moves as it orbits the Earth, he determines that while listening to the Proclaimers’ song I’m Gonna Be (500 Miles), you’d travel about 1000 miles. In another post, he describes the precise flight path of the Rosetta spacecraft as being “…like throwing an object from New York and having it hit a particular key on a keyboard in San Francisco.”

4. The BP Oil Spill. Of course, I’ve saved the best ones for last. If you’re familiar with The Bugle podcast, you might know what you’re in for. In episode 116, John Oliver quotes a news article as saying that the 19 million gallons of oil spilled in the first 5 weeks of the BP oil spill is enough to fill a line of 1-gallon milk jugs stretching from New York to Chicago, and back. In episode 117, an adventurous listener wrote in with another analogy—undoubtedly the silliest of the bunch. Specifically: If all the oil spilled in one day were frozen and molded into cricket bats, then laid end to end, they would stretch from London to Paris and back. Also, if you took all the oil spilled by May 1, the cricket bats would stretch from Caracas to Pyongyang and back. Silliness, we have a winner!
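For the record, the clock arithmetic in example 2 checks out. Here’s a quick back-of-the-envelope sketch in Python, where the 365.25-day year is my own rounding:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.16e7 seconds

# "Neither gain nor lose a second in 5 billion years"...
drift_ns_per_year = (1 / 5e9) * 1e9          # = 0.2 ns per year, as reported

# ...which corresponds to a fractional accuracy of roughly 6e-18.
fractional_accuracy = 1 / (5e9 * SECONDS_PER_YEAR)

# And the hypothetical 1-second-per-50-billion-years clock, against the
# ~13.8-billion-year age of the universe:
universe_ages = 50e9 / 13.8e9                # ~3.6, i.e. "more than 3 times"
```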

At some point, though, all of these examples—whatever their intention—are wrestling with the fundamental fact that daily human life occurs on a very specific scale in time and space, while the universe as a whole covers a much wider range. On some level, this means every physical analogy will seem at least somewhat absurd. I don’t see the silliness ending any time soon! In fact, if you take all the words from all physical analogies published since 1800, represent each one with a pack of gum, and line them up end to end, … okay, you get the idea.

Friday Fun: Calendar Outtakes

The best part of a comedy film is when the actors make small mistakes, and the cast and crew bust out laughing. It’s become common to run outtakes during the credits (by the way, this was first done by Peter Sellers in Being There, in 1979).

Here, I want to share some odd, silly, and independently interesting factoids about how our calendar systems have come up short—sometimes with dire consequences. These factoids all come from Nachum Dershowitz & Edward Reingold’s Calendrical Calculations, which I used last November for my dual posts on Thanksgiving and Hanukkah.

First up, a manufacturing disaster:

“…a computer software error at the Tiwai Point aluminum smelter at midnight on New Year’s Eve [in 1996] caused more than A$1 million of damage. The software error was the failure to consider 1996 a leap year; the same problem occurred 2 hours later at Comalco’s Bell Bay smelter in Tasmania.” [Reported in New Zealand Herald, 8 January 1997.]

This next one was an inconvenience for business travelers:

“…Microsoft Windows 95, 98, and NT get the start of daylight saving time wrong for years, like 2001, in which April 1 is a Sunday; in such cases, Windows has daylight saving time starting on April 8. An estimated 40 million to 50 million computers are affected, including some in hotels that are used for wake-up calls.” [Reported in New York Times, 12 January 1999.]

These two examples, while significant, had consequences that were relatively short-lived. But would you believe a calendar irregularity caused repeated political crises over the course of several centuries? It’s true, as the Ottoman Sultans would have told you. Some background information first…

The Ottoman Empire used the Islamic calendar, which is a lunar calendar with 12 months of 29 or 30 days. There are 11 leap days added every 30 years, so the average length of the year is 354 11/30 days. Obviously, this means that the Islamic calendar drifts throughout the solar year (about 11 days each year), and so the months don’t have any real connection to the changing seasons. April is always a spring month, but Ramadan can occur in any season.

Back to our story: when it came to finances, the Ottomans used the Islamic calendar for expenditures, but since many of the revenues came from seasonal activity (like farming), they used a solar calendar for tax collection. There are about 32 solar years for every 33 Islamic years, and in the 33rd year—the şiviş year—the government faced a fiscal crisis and ran the risk of failing to pay its employees (most notably the military). The financial crises easily became political crises, which have come to be known as “şiviş year crises”.
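The arithmetic behind that 32-to-33 mismatch is easy to check. Here’s a quick sketch in Python, taking 365.2425 days (the Gregorian mean) as the solar year; whichever solar calendar the Ottomans actually used, the picture is much the same:

```python
from fractions import Fraction

islamic_year = 354 + Fraction(11, 30)      # 354 11/30 days, per the 30-year leap cycle
solar_year = Fraction(3652425, 10000)      # 365.2425 days, the Gregorian mean

# The Islamic calendar gains about 11 days on the solar calendar each year...
drift = solar_year - islamic_year          # ~10.88 days per year

# ...so 33 Islamic (expenditure) years line up with only 32 solar (revenue)
# years, to within about a week:
gap = 33 * islamic_year - 32 * solar_year  # ~6.3 days
```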

Now, in the US, we had a government shutdown where some federal employees went unpaid for 2 weeks, but could you imagine an entire year? Of course not. Any farsighted government would realize that the problem was coming, and action was often taken to head off any revolt: devaluing the currency, deficit spending, or spreading out payments over a few years to cover the gap. But these measures didn’t always avert economic and political turmoil.

To give one example, the şiviş year 1677 (1088 A.H.) was weathered with significant deficit spending, but by 1687 the government was forced to postpone payments to its soldiers, whereupon the army marched to Edirne and deposed the Sultan Mehmed IV. Looking over some of the other şiviş years, it seems that one good way to avoid the crisis was to conquer a foreign country (Mehmed II greatly relieved his financial worries by taking Constantinople). It’s worth mentioning that Mehmed IV may have avoided his eventual downfall if his troops had been able to capture Vienna.

That’s all for now! You can read more about the şiviş year crises here.

On Euler’s Phi Function

In which we find that Euler’s phi function was neither phi nor a function.

First of all, a shout-out to all of my math(s) friends who are at (or traveling to) the Joint Mathematics Meetings in Baltimore! Now on to some math.

In my research for the “Evolution of…” series of posts, I came across the word totient in Steven Schwartzman’s The Words of Mathematics, which got me thinking about how Euler’s φ (phi) function—also called the “totient function”—came about. The word itself isn’t that mysterious: totient comes from the Latin word tot, meaning “so many.” In a way, it’s the answer to the question Quot? (“how many?”). Schwartzman notes that the Quo/To pairing is similar to the Wh/Th pairing in English (Where? There. What? That. When? Then.). So much for the etymology.

It seems to me, though, that the more interesting questions are: Who first defined it? How did the notation change over time? I did some digging, and here’s what I’ve discovered.

The first stop on my investigative tour was Leonard Dickson’s History of the Theory of Numbers (1952). At the beginning of Chapter V, titled “Euler’s Function, Generalizations; Farey Series”, Dickson has two things to say about Leonhard Euler:

“L. Euler… investigated the number φ(n) of positive integers which are relatively prime to n, without then using a functional notation for φ(n).”

“Euler later used πN to denote φ(N)…”

Each of these quotations contains a footnote, the first to Euler’s paper “Demonstration of a new method in the theory of arithmetic” (written in 1758) and the second to “Speculations about certain outstanding properties of numbers” (written in 1775). In the first paper, Euler is more interested in proving Fermat’s little theorem, which, true to form, he had already proven twice before. However, Euler does define the phi function (on p. 76, though as Dickson says, he doesn’t use function notation), and proves some basic facts about it, including the facts that φ(p^m) = p^(m-1)(p-1) [Theorem 3] and φ(AB) = φ(A)φ(B) when A and B are relatively prime [Theorem 5]. This paper is in Latin, and while we do see the use of the words totidem and tot, they don’t seem to hold any special mathematical significance.

In the second paper, Euler returns to the phi function, having decided by this time to use π to represent it. Hard-core nerd that he is, Euler provides us with a table of values of πD for D up to 100, and replicates many of the facts he proved in the first paper. It’s interesting to note that, while Euler wrote this second paper in 1775, it wasn’t published until 1784, a year after his death.

It wasn’t until 1801, in Disquisitiones Arithmeticae, that Carl Gauss introduced φN to indicate the value of the totient of N. So why did he pick φ rather than Euler’s π? Well, I checked the English translation by Arthur Clarke (no, not that Arthur Clarke), and I think it’s quite likely that he chose it for no discernible reason. In Clarke’s translation, Gauss introduces φ on page 20—and Gauss loved using Greek letters. In pages 5-19 (the beginning of Section II), he uses α, β, γ, κ, λ, μ, π, δ, ε, ξ, ν, ζ — and only after these does he use φ. As to the use of π, which was Euler’s notation, it’s possible that Gauss knew of Euler’s later work and chose φ because he had already used π, but there’s no way to know for sure. (Also, π was already used for 3.14159… by this point, but if that was his reasoning, it’s odd that he used the symbol π at all.) Most likely, he just picked another Greek letter off the top of his head. It is important to remember that at no point did Gauss use function notation for the totient—it always appears as φN, never φ(N). (Also: Gauss goes on to use Γ and τ before getting tired of Greek and moving on to the fraktur letters 𝔄, 𝔅, and 𝖅.)

The next significant change came nearly a century later in J. J. Sylvester’s article “On Certain Ternary Cubic-Form Equations,” published in the American Journal of Mathematics in 1879. On page 361, Sylvester examines the specific case n = p^i, and says

p^(i-1)(p-1) is what is commonly designated as the φ function of p^i, the number of numbers less than p^i and prime to it (the so-called φ function of any number I shall here and hereafter designate as its τ function and call its Totient).

While Sylvester’s usage of the word totient has become commonplace, mathematicians continue to use φ instead of τ. It just goes to show that a symbol can become entrenched in the mathematical community, even if a notational change would make more sense. Also of note is the fact that while Sylvester refers to the totient as a function, he doesn’t use the modern parenthesis notation, as in τ(n), but continues in Euler and Gauss’s footsteps by using τn.

And this is where our story ends. Sylvester’s coinage of the word totient, Gauss’s use of the letter φ, and Euler’s original definition all contributed to the modern construct that we call the phi/totient function.

However, Euler had proven many of the basic facts about it as early as 1758. So, while the original phi function was neither phi nor a function, it was undoubtedly Euler’s.