Calendars, Cycles, and Cool Coincidences (Part II)

This is my second post on the alignment of Thanksgiving and Hanukkah. Go back and read the first post, if you haven’t done so.

When compared to the Julian or Gregorian calendar, the Hebrew calendar is a different animal entirely. First of all, it is not a solar calendar, but is rather a lunisolar calendar. This means that while the years are kept in alignment with the solar year, the months are reckoned according to the motion of the moon. In ancient days, the start of the month was tied to the sighting of the new moon. Eventually, the Jewish people (and more specifically, the rabbis) realized that it would be better for the calendar to rely more on mathematical principles. Credit typically goes to Hillel II, who lived in the 300s CE. In the description that follows, I will be using Dershowitz and Reingold’s Calendrical Calculations as my primary source, with assistance from Tracy Rich’s Jew FAQ page.

The typical Jewish year contains 12 months of 29 or 30 days each, and is often 354 days long. (See how I worded that? It matters.) Clearly, this is significantly shorter than the solar year, so some adjustments are necessary. Specifically, there is a leap year for 7 of every 19 years. But instead of adding a leap day, the Hebrew calendar goes right ahead and adds an entire month (Adar II), which adds 30 days to the length of the year. Mathematically, you can figure out if year y is a leap year by calculating (7y+1) mod 19—if the answer is < 7, then y is a leap year. In the current year, 5774, the calculation is 7*5774+1 = 40419 ≡ 6 (mod 19), so it’s a leap year. With just this fact, the average length of the year appears to be 365.053 days—about 4 1/2 hours fast. At a minimum, the leap months explain how Jewish holidays move through the Gregorian calendar: since the typical year is 354 days, a holiday will move earlier and earlier each year, until a leap month occurs, at which point it will snap back to a later date. (Next year, Hanukkah will be on 17 December.)
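In code, the leap-year test is a one-liner (a minimal sketch; the function name is mine):

```python
def is_hebrew_leap_year(year):
    """A Hebrew year is a leap year exactly when (7*year + 1) mod 19 is less than 7."""
    return (7 * year + 1) % 19 < 7

print(is_hebrew_leap_year(5774))   # True, since 7*5774 + 1 = 40419 and 40419 % 19 == 6
```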

But it’s not as simple as all that. Owing to the lunar origins of the Hebrew calendar, the beginning of the new year is determined by the occurrence of the new moon (called the molad) in the month of Tishrei (the Jewish New Year, Rosh Hashanah, is on 1 Tishrei). Owing to the calendar reforms of Hillel II, this has become a purely mathematical process. Basically, you take a previously calculated molad and use the average length of the moon’s cycle to calculate the molad for any future month. Adding a wrinkle to this calculation is the fact that the ancient Jews used a timekeeping system in which the day had 24 hours and each hour was divided into 1080 “parts”. (So, one part = 3 1/3 seconds.) In this system, the average length of a lunar cycle is estimated as 29d 12h 793p. While this estimate is many centuries old, it is incredibly accurate—the average synodic period of the moon is 29d 12h 792.86688p, a difference of less than half a second.
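To get a feel for the “parts” system, here is a quick check of that last claim (a minimal sketch; the variable names are mine, and the modern synodic value is the commonly quoted figure rather than something from the original post):

```python
PARTS_PER_HOUR = 1080
PARTS_PER_DAY = 24 * PARTS_PER_HOUR               # 25,920 parts per day

# The traditional estimate of the lunar month: 29 days, 12 hours, 793 parts
molad_interval = 29 * PARTS_PER_DAY + 12 * PARTS_PER_HOUR + 793   # 765,433 parts

traditional = molad_interval / PARTS_PER_DAY      # ≈ 29.530594 days
modern = 29.530588853                             # mean synodic month, in days

print((traditional - modern) * 86400)             # ≈ 0.45 seconds per month
```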

Once the molad of Tishrei has been calculated, there are 4 postponement rules, called the dechiyot, which add another layer to the calculation:

  1. If the molad occurs late in the day (12pm or 6pm, depending on your source), Rosh Hashanah is postponed by a day.
  2. Rosh Hashanah cannot occur on a Sunday, Wednesday, or Friday. If so, it gets postponed by a day.
  3. The year is only allowed to be 353-355 days long (or 383-385 days in a leap year). The calculations for year y can have the effect of making year y+1 too long, in which case Rosh Hashanah in year y will get postponed to avoid this problem.
  4. If year y-1 is a leap year, and Rosh Hashanah for year y is on a Monday, the year y-1 may be too short. Rosh Hashanah for year y needs to get postponed a day.
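To make the first two rules concrete, here is a toy sketch (it handles only rules 1 and 2, uses the 6pm reading of rule 1 with ordinary civil hours, and numbers the days 0 = Sunday through 6 = Saturday; the real calculation interleaves all four rules, so treat this as illustration only):

```python
def postpone_rosh_hashanah(day_of_week, molad_hour):
    """Apply dechiyot 1 and 2 to a provisional Rosh Hashanah.

    day_of_week: 0 = Sunday, 1 = Monday, ..., 6 = Saturday
    molad_hour:  hour of the day (0-23) at which the molad of Tishrei falls
    """
    # Rule 1: a late molad (here, 6pm or later) pushes Rosh Hashanah to the next day.
    if molad_hour >= 18:
        day_of_week = (day_of_week + 1) % 7

    # Rule 2: Rosh Hashanah may not fall on Sunday, Wednesday, or Friday.
    if day_of_week in (0, 3, 5):
        day_of_week = (day_of_week + 1) % 7

    return day_of_week
```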

As someone who’s relatively new to the Hebrew calendar, all of this was very confusing to me. For one thing, it’s not clear that rules 3 and 4 will really keep the length of the year in the correct range. For another, it’s not clear what you’d do with the “extra” days that are inserted or removed. Here’s how I think of it: the years in the Hebrew calendar don’t live in arithmetical isolation, but are designed to be elastic. You can stretch or shrink adjacent years by a day or two so that the start of each year begins on an allowable day. When a year needs to be stretched, a leap day is included at the end of the month of Cheshvan. When a year needs to be shrunk, an “un-leap” day is removed from the end of Kislev.

Now here’s the question my mathematician’s soul wants to answer: How long is the period for the Hebrew calendar? This might seem an impossible question in light of all the postponement rules, but it turns out that each block of 19 years will have exactly the same length: 6939d 16h 595p, or 991 weeks with a remainder of 69,715 parts. As with the Julian calendar, the days of the week don’t match from block to block, so we need to use the length of a week (181,440 parts) and find the least common multiple. Using parts as the basic unit of measurement, we have:

lcm(69715, 181440) = 2,529,817,920 parts, which comes to 2,529,817,920 / 69,715 = 36,288 of the 19-year blocks, or 36,288 × 19 = 689,472 years.
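Here is the whole computation in one place (a sketch; the variable names are mine):

```python
from math import lcm

PARTS_PER_DAY = 24 * 1080                       # 25,920 parts per day
WEEK = 7 * PARTS_PER_DAY                        # 181,440 parts per week

MONTH = 29 * PARTS_PER_DAY + 12 * 1080 + 793    # average lunar month, in parts
BLOCK = 235 * MONTH                             # a 19-year cycle has 12*19 + 7 = 235 months

print(divmod(BLOCK, PARTS_PER_DAY))             # (6939, 17875): 6939 days plus 16 h 595 p
print(BLOCK % WEEK)                             # 69,715 parts beyond a whole number of weeks

blocks = lcm(BLOCK % WEEK, WEEK) // (BLOCK % WEEK)
print(blocks * 19)                              # 689,472 years
```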

Wow! We can also calculate the “combined period” of the Hebrew and Gregorian calendars, to see how frequently they will align exactly. Writing the average year lengths as fractions, the calculation is:

lcm(689472*(365+24311/98496), 400*(365+97/400)) = 5,255,890,855,047 days = 14,390,140,400 Gregorian years = 14,389,970,112 Hebrew years.
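You can check this with exact rational arithmetic (a sketch using Python’s fractions module; variable names are mine):

```python
from fractions import Fraction
from math import lcm

hebrew_year = 365 + Fraction(24311, 98496)      # average Hebrew year, in days
gregorian_year = 365 + Fraction(97, 400)        # average Gregorian year, in days

hebrew_period = 689_472 * hebrew_year           # a whole number of days: 251,827,457
gregorian_period = 400 * gregorian_year         # 146,097 days

days = lcm(int(hebrew_period), int(gregorian_period))
print(f"{days:,}")                              # 5,255,890,855,047 days
print(days // int(gregorian_period) * 400)      # 14,390,140,400 Gregorian years
print(days // int(hebrew_period) * 689_472)     # 14,389,970,112 Hebrew years
```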

For comparison, the age of the universe is about 13,730,000,000 years. So while particular dates can align more frequently (for instance, Thanksgivukkah last occurred in 1888), the calendars as a whole won’t ever realign again. However, I suppose that claim depends on your view of the expansion of the universe!

Calendars, Cycles, and Cool Coincidences (Part I)

You might have heard that Hanukkah and Thanksgiving coincide this year. More specifically, you may have heard that the first day of Hanukkah (25 Kislev in the Hebrew calendar) coincides with 28 November, which just happens to be the fourth Thursday of the month. Somewhere along the way, a few clever marketers dubbed this day “Thanksgivukkah”, and America has responded: the LA Times has a recipe for “turbrisket”, kids in South Florida have been designing “menurkeys”, and Zazzle.com has a line of Thanksgivukkah greeting cards. Christine Byrne has assembled an entire Thanksgivukkah menu. I, for one, am enjoying all of the portmanteaus. (Speaking of which, have you heard of Franksgiving?)

But in addition to being a fan of portmanteaus, I’m also a fan of calendars. Some weeks ago, I began to hear from various sources that Hanukkah won’t line up with Thanksgiving for another 70,000 years or so. This got me curious, so I started researching the question myself. It turns out that the relationship between the two holidays has been examined on at least three blogs over the past few years. The first were the Lansey brothers in 2010, followed by Stephen Morse in 2012 and Jonathan Mizrahi in January of this year. Morse’s post includes a “When Did?” page with a Javascript calendar program, and Eli Lansey kindly includes a Mathematica notebook to help the math-inclined to do the computations themselves.

Morse reports that Thanksgiving will again occur on the first day of Hanukkah in the year 79,043, while Mizrahi says it’ll be in the year 79,811. Mizrahi, by his own admission, is being cute with this number:

In all honesty, though, all of these dates are unfathomably far in the future, which was really the point [of the post].

In this post, I won’t go into exactly how the ≈79,000 number was unearthed. I will, however, sketch out some of the major features of both the Gregorian and Hebrew calendars, and how they have given rise to this strange, new holiday of Thanksgivukkah. In the end, we will find that the year 79,811 is not nearly as unfathomable as we can get.

First of all, the Western calendar as we’ve come to know it began its life as the Egyptian calendar. After the Canopus Decree in c. 238 BCE, each year in the Egyptian calendar was 365 days long, with an additional day added every 4 years. There were twelve 30-day months and five (or six) epagomenal days—days with no year or month assigned to them—to celebrate the coming of the new year. I think Pharaoh Ptolemy III said it best:

This festival is to be celebrated for 5 days: placing wreaths of flowers on their head, and placing things on the altar, and executing the sacrifices and all ceremonies ordered to be done. But that these feast days shall be celebrated in definite seasons for them to keep for ever … one day as feast of Benevolent Gods be from this day after every 4 years added to the 5 epagomenae before the new year, whereby all men shall learn, that what was a little defective in the order as regards the seasons and the year, as also the opinions which are contained in the rules of the learned on the heavenly orbits, are now corrected and improved by the Benevolent Gods.

The Egyptian model came to Rome with Julius Caesar’s calendar reforms in 46 BCE, which fixed the seriously messed up Roman calendar. It all went pretty well for the first several centuries, but there was a tiny fly in the ointment. The average length of a year in the Julian calendar is 365.25 days, while the solar year is approximately 365.24219 days long. So the Julian calendar ran slow—about 11.25 minutes per year—for 1600 years until this problem was fixed by the Gregorian calendar reforms of 1582. More specifically, Pope Gregory XIII issued a papal bull, Inter gravissimas, in which he declared that leap years would continue to occur in years divisible by 4, except that years divisible by 100 but not by 400 would no longer be leap years. So, 1900 was not a leap year, but 2000 was. This provides an average length of 365.2425 days, which is only 0.00031 days (about 27 seconds) longer than the solar year. In the 431 years that have passed since the birth of the Gregorian calendar, this error has only accumulated to 3.2 hours. While the calendar isn’t perfect, it’s really quite good, especially considering that the solution amounted merely to omitting 3 leap days every 400 years.
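For reference, the leap-year rule translates directly into code (a standard sketch, nothing specific to this post):

```python
def is_gregorian_leap_year(year):
    """Divisible by 4, except that century years must also be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_gregorian_leap_year(1900))   # False
print(is_gregorian_leap_year(2000))   # True
```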

The key concept I’m interested in here is periodicity. In mathematics, a function is said to be periodic if it exactly repeats its values in regular intervals (or, periods). The sine function is an example of this: sin(x) = sin(x+2π) = sin(x+4π) = …, for any value of x in the interval [0,2π]. It’s very important to distinguish between a periodic function and a function that just happens to repeat some of its values. For example, the function f(x) = 1 – x² takes the value zero twice, at x = -1 and x = 1, but that doesn’t mean the function repeats itself exactly on an interval. Loosely speaking, a function is periodic when the entire curve repeats itself, not just a few select points.

We can transfer this idea to a given calendar without too much trouble:

  • A calendar’s cycle is the amount of time it takes for the calendar to repeat itself exactly.
  • A calendar’s period is the amount of time it takes for the calendar to repeat itself exactly, while also taking the days of the week into account.

For consistency, it’s best to measure both the cycle and period in days, but sometimes I’ll divide by the average length of a year. For example, the Julian calendar has a cycle of 1461 days, and dividing by 365.25 gives a result of 4 years. To get the period, we need to remember that since there are 52 weeks plus 1 or 2 days in any given year, the days of the week won’t line up every 4 years. So we have to take the least common multiple to get the period: lcm(1461, 7) = 10,227 days = 28 years. For the Gregorian calendar, the cycle is 146,097 days (400 years) and the period is lcm(146097, 7) = 146097 days = 400 years—this is because 146097 happens to be a multiple of 7.
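Both periods drop out of a couple of lcm’s; here is a minimal sketch of the computation just described (variable names are mine):

```python
from math import lcm

julian_cycle = 4 * 365 + 1            # 1,461 days in a 4-year Julian cycle
print(lcm(julian_cycle, 7))           # 10,227 days = 28 Julian years

gregorian_cycle = 400 * 365 + 97      # 146,097 days (97 leap days per 400 years)
print(lcm(gregorian_cycle, 7))        # 146,097 days, since 146,097 = 7 * 20,871
```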

400 years is a long time, and this post has gotten pretty long, too. So I’ve broken it into two parts. Come back soon for Part II, where we will examine the mathematical labyrinth that is the Hebrew calendar…

The Evolution of Plane Curves

This is the third post in the “Evolution Of…” series; the first and second posts can be found here and here.

This time around, we’ll explore some of the words that have come to be used for various plane curves. First of all, a disclaimer: often, the names of the curves existed many centuries before the development of modern algebra and the Cartesian coordinate system. As a consequence, the original names for the curves are more geometric in origin (imagine one of the ancient Greeks saying “umm, well, it looks like a flower… so let’s call it the flower curve”).

While reviewing the curves listed in Schwartzman’s book, I noticed that most of them can be classified into four major groups: the conics, the chrones, the trixes, and the oids.

  1. Conics. You’ve probably heard of them—circle, ellipse, parabola, and hyperbola. The first one has its origins in the Latin word circus, which means “ring” or “hoop.” The other three are Greek, with their original meanings reflecting the Greeks’ use of conic sections. Ellipse comes from en (meaning “in”) and leipein (meaning “to leave out”). For the other two, note that –bola comes from ballein which means “to throw” or “to cast”. So hyperbola means “to cast over” and parabola means “to cast alongside”. (If you check out this image from Wikipedia, it may start to make more sense.)
  2. Chrones. The two curves I have in mind here are brachistochrone and tautochrone. In Greek, chrone means “time”. The prefixes come from brakhus and tauto-, which mean “short” and “same”, respectively. So these curves’ names are really “short time” and “same time.” Naturally enough, the brachistochrone is the curve on which a ball will take the least amount of time to roll down, while the tautochrone is the curve on which the time to roll down is the same regardless of the ball’s starting point. Finding equations for these curves occupied the time of many scientists and mathematicians in the 17th century, including a controversy between the brothers Jakob and Johann Bernoulli. You could say they had a “chronic” case of sibling rivalry. 
  3. Trixes. I am a fan of the feminine suffix –trix because it also provides us with the modern word obstetrics (literally, “the woman who gets in the way”—i.e., a midwife). We don’t use this suffix very much anymore, though aviatrix comes to mind. The algebraic curves in this category are trisectrix (“cut into three”) and tractrix (“the one that pulls”), along with the parabola-related term directrix (“the one that directs”). Interestingly, the masculine form of tractrix gives us the English word tractor.
  4. Oids. These were the most fun for me. The suffix is Greek, originating in oeides, which means “form” (though in modern English, “like” might be more appropriate). Here are some examples: astroid, cardioid, cissoid, cochleoid, cycloid, ramphoid, strophoid. Here are their original Greek/Latin meanings: “star-like”, “heart-like”, “ivy-like”, “snail-like”, “circle-like”, “like a bird’s beak”, “(having the) form of turning”. I’ve provided images of each one below—see if you can match the name to the curve!
[Images of the seven curves appeared here.]

Don’t worry, there are plenty more word origins coming later! However, I’ll need a break to recharge my etymology batteries. Expect an “intermission” post in the next few weeks.

The Evolution of Arithmetic

This post is the second in a series; if you haven’t read the first post, on the evolution of English counting words, I’d recommend reading that one first.

As promised, this post looks at the origins of the English words for arithmetic operations. Read on, friend!

  • Plus and Minus. These two are fairly straightforward—they’re the Latin words for “more” and “less”, respectively. The symbols, though, are less clear. It appears that the letters p and m were used (sometimes written with a bar over them, as p̄ and m̄) during the 1400s—Wikipedia claims that these first appeared in Luca Pacioli’s Summa de Arithmetica, though I’ve been unable to find a satisfactory example. In the 1500s, the modern + and – signs began to appear; Schwartzman attributes the + to an abbreviation of the Latin “et” (taking the t only) and the – to the bar from m̄.
  • Multiply. This word comes from the Latin multiplicare, meaning “to increase.” Breaking it down a little further, we have the prefix multi– (“many”) and the suffix -plex (“fold”) so that the compound word multiplex means “many folds.” (We still use “fold” language today—when we speak of a “threefold increase,” we mean that something had been multiplied by three.) The x symbol for multiplication is attributed to William Oughtred, while Schwartzman gives credit for the dot • to Gottfried Wilhelm Leibniz.
  • Divide. This word comes from Latin as well, with the origin being dividere, meaning “to separate.” (As a side note, the root videre means “to see” and gives us the modern word video, which means “I see”.) Putting di– and videre together, I suppose this means that division is literally “to see in two.”

Notice that all four of these words originate in a description of the operation itself. It turns out that exponents and roots are a little more metaphorical in their meaning:

  • Exponent. Once again, we have a Latin origin: the prefix ex– and the verb ponere, roughly meaning “to put out.” Unlike the four arithmetic operations, though, the original meaning is typographical—the exponent is the number that is “put out” above and to the right of the base. In part, it’s because the exponent is a relatively new development; Schwartzman attributes the notation to Descartes, specifically La Géométrie (1637).
  • Root. Finally, a non-Latin word! The word rot means “cause” or “origin”, which makes sense when you consider that since 8 = 2³, its “origin” is 2. If you trace the word further back, the Proto-Indo-European root (see what I did there?) is wrad-. Thus, the Latin-based words radical and radish come from a source similar to root.

And there you have it! In the next installment, I’ll get a little more geometric and explore some words we’ve come to use for algebraic curves.

The Evolution of Numbers

I’ve always loved word origins. Often, knowing where a word comes from can provide you with insight on what it means today. At the very least, it makes you look smart at parties (e.g., “Did you know that the word apocalypse shares the same Greek root as calypso…”).

This post is the first in a series on mathematical word origins. Since math(s) is an old subject, many mathematical terms have ancient roots (for English, this usually means Greek and Latin). For today, we’ll explore the origins of the English words for counting and arithmetic. For all posts in this series, my primary source is Steven Schwartzman’s The Words of Mathematics, with an occasional assist from the Online Etymology Dictionary.

  1. One through Ten. I’m actually going to skip these; most Indo-European languages have a base 10 system whose words are in rough correspondence with each other (for instance, the word for 6 is six, sechs, sei, seis, and seks in French, German, Italian, Spanish, and Norwegian, respectively). If you’re interested in exactly how those consonants correspond, go read up on Grimm’s law.
  2. Eleven and Twelve. These words have a particularly Germanic origin: for example, in French you’d use onze and douze while in German it’s elf and zwölf. The English word eleven comes from the Old English endleofon, which basically translates as “one left over.” If you were counting up 11 items, you’d get to ten, and then say “and there’s one left over.” Eventually, this got condensed down into our modern eleven. (It makes a certain amount of sense, no?) The same thing goes for twelve: the Old English twelf comes from the Proto-Germanic twa-lif, which means “two left.”
  3. Thousand and Million. The word thousand comes from the Germanic thus (thick) and hund (hundred), making a thousand a “thick hundred.” In the Romance languages, though, the word thousand comes from the Latin mille. The English word million originally meant “a great thousand.” Interestingly, there appears to be a connection between the words thousand and dozen (e.g., the Dutch word for a thousand is duizend), leading some scholars to speculate that some Germanic cultures had a mixed base-10 and base-12 system. This may also be seen in the fact that in the UK, a “hundredweight” is 112 lbs.
  4. Zero. This one’s a relatively new addition to English—according to Schwartzman, the first appearance in print of the word zero was in Philippi Calandri’s De Arithmetica Opusculum (1491). The word itself was borrowed from the Arabic صفر (sifr) which means “empty.” Interestingly, the word cipher has the same origin. One other point: in most European languages, zero is treated as a plural (“I have zero apples” instead of “I have zero apple”). I’d be interested to know whether this is the case in Arabic and other Semitic languages.

Up next: the operations of arithmetic…

Internet Tendencies

While I prepare for my next full post, here’s a surprising and random thing that I discovered in my research today.

You may be familiar with McSweeney’s (http://www.mcsweeneys.net/), an online compendium of hilarious short essays, lists, and other creative forms of Internet writing. The part that I usually go to is McSweeney’s Internet Tendency.

Well, by random chance, I’m reading Tokaty’s A History and Philosophy of Fluid Mechanics, and I see an image of Hero of Alexandria’s “Reactive Motor” (link is here)—it’s the same as the McSweeney’s Internet Tendency icon (go check here).

A weird, and incredibly geeky, coincidence.

Extreme Musical Venues

No, I don’t mean this: www.extrememusic.com. But now that I’ve got your attention, I want to return to a topic I wrote about last month: the Earth’s gravity and its effect on mechanical metronomes (or, more abstractly, its effect on pendulums). That post boiled down to this fact:

Pendulums near the poles will swing more quickly than pendulums of the same length near the equator.

Of course, that begs the question—how much more quickly? At the time of my previous post, I hadn’t researched the mathematics required to do this. But now I’ve crunched some numbers and have some results to share. Before that, though, let me set up some of the basic ideas behind the calculations.

1. The Earth isn’t a sphere. We know today that the Earth is an oblate spheroid, and is flatter at the poles than at the equator. The verification of this fact dates back to the 1730s, and is credited to Pierre Louis Maupertuis. It’s worth mentioning that the oblateness follows from Newton’s mechanics, of which Maupertuis was a notable advocate. [Maupertuis did many other things, and you can read about them in this book.]

2. Gravity varies according to latitude. This follows quickly from point (1) above, though the first attempt at a precise formula was made by Alexis Claude de Clairaut in his 1743 treatise, Théorie de la figure de la terre. W. W. R. Ball has a short summary of Clairaut’s work. Today, we use a more precise formula, as part of the World Geodetic System:

g(φ) = G_E · (1 + k sin²φ) / √(1 − e² sin²φ)

Here, φ is the latitude, G_E ≈ 9.78033 m/s² is gravity at the equator, e ≈ 0.0818192 is the Earth’s eccentricity (which measures the “sphereness” of the ellipsoid, ranging from 0 to 1), and k ≈ 0.00193185 is a parameter that depends on gravity at the poles. (Details can be found here on page 4-2.)

3. The pendulum formula requires calculus. I’m going to skip the derivation, and point you to Wikipedia on this one (grain of salt and all that). Here’s the formula for one “swoop” of a pendulum:

t = 2 √(l/g) · K(sin(θ₀/2))

with l being the length of the pendulum, θ₀ its initial angle, and K the complete elliptic integral of the first kind.

4. Gravity decreases with altitude. The experts use something called the free-air correction to compensate for this. For example, Reynolds’ An Introduction to Applied and Environmental Geophysics states that the proper way to adjust the formula in part (2) above is by subtracting (3.086×10⁻⁶)·h m/s², where h is your elevation above sea level in meters (I’ve converted the units from Reynolds’ version).
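Putting points 2 through 4 together, here is a sketch of the whole computation (the function names are mine; it uses the half-period “swoop” formula from point 3, with K computed via the arithmetic-geometric mean):

```python
import math

G_E = 9.78033          # gravity at the equator (m/s^2)
E2 = 0.0818192 ** 2    # eccentricity squared
K_G = 0.00193185       # the gravity-formula parameter k

def gravity(lat_deg, elevation_m):
    """Surface gravity at a given latitude, with the free-air correction applied."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    g = G_E * (1 + K_G * s2) / math.sqrt(1 - E2 * s2)
    return g - 3.086e-6 * elevation_m

def ellipk(k):
    """Complete elliptic integral of the first kind, via the arithmetic-geometric mean."""
    a, b = 1.0, math.sqrt(1 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return math.pi / (2 * a)

def beats_per_minute(lat_deg, elevation_m, length=0.229060, theta0=math.pi / 4):
    """One beat of the metronome = one 'swoop' of the pendulum."""
    g = gravity(lat_deg, elevation_m)
    swoop = 2 * math.sqrt(length / g) * ellipk(math.sin(theta0 / 2))
    return 60 / swoop

print(beats_per_minute(0, 0))            # ~120.0  (the reference point)
print(beats_per_minute(27.988, 8848))    # ~119.9  (Everest)
print(beats_per_minute(90, 2835))        # ~120.26 (Amundsen-Scott)
```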

Now we’re ready for some results! For all calculations, I took θ₀ = π/4 and l ≈ 0.229060 meters. This way, a pendulum at sea level at the equator will beat exactly 120 times per minute. Then, I picked five extreme points on the surface of the Earth: the summits of Chimborazo and Everest, and the Antarctic stations McMurdo, Russkaya, and Amundsen-Scott. And here’s what we have:

| Location        | Elevation | Latitude  | Beats per Minute |
|-----------------|-----------|-----------|------------------|
| Reference Point | 0 m.      | 0°        | 120              |
| Chimborazo      | 6268 m.   | 1.469° S  | 119.881          |
| Everest         | 8848 m.   | 27.988° N | 119.902          |
| Russkaya        | 0 m.      | 74.766° S | 120.296          |
| McMurdo         | 0 m.      | 77.850° S | 120.304          |
| Amundsen-Scott  | 2835 m.   | 90° S     | 120.264          |

I’ve ignored the fact that the mass of the terrain that you’re standing on will affect the gravity at its summit (the Bouguer anomaly)—so the values for Everest and Chimborazo are probably a bit off. But, taking the numbers at their word, our hypothetical metronome will move the most quickly at McMurdo Station—which has the double benefit of being both close to the pole (and thus closer to the center of the Earth) and having a low elevation. (In fact, McMurdo has the world’s southernmost port; you can go see what they’re up to right now.)

To conclude, it appears that a piece of music set to 120 beats per minute that takes 5 minutes to perform will finish about one second (roughly two beats) sooner at McMurdo Station than it would at the summit of Mount Everest. Not enough to mess anything up musically, but enough to notice. Extreme, right? (Not extreme enough for you? Then go read this comic.)

Convocation and Advice From Old Books

At my university, today marked the beginning of classes for the fall semester. It began with convocation (at which our University president provided a much-appreciated etymology of the word “convocation”), and classes followed quickly thereafter.

Since things are rather busy for me this week, I’m kicking the can down the road—no major posts until next week at the earliest. But for now, I’d like to advertise a blog I’ve recently discovered, Ask the Past, which describes itself as providing “advice from old books.” Here are three posts that seem appropriate for the start of the school year:

Plus ça change, plus c’est la même chose…

Bringing a Riemannian Gun to a Euclidean Knife Fight

The prime numbers have fascinated mathematicians for many centuries. As far back as 300 BCE, Euclid proved that there are infinitely many primes.

Today, part of the appeal of prime numbers is that there remain many apparently-simple questions about primes whose answers are unknown. (One famous example is Goldbach’s conjecture, posed in 1742 in a letter to Leonhard Euler, that every even integer > 2 can be written as the sum of two primes. Note, however, that Terence Tao has recently proven that every odd integer > 1 is the sum of at most 5 primes).

Over the years, the math (and its conjectures) has gotten considerably more complicated. The most famous (infamous?) example is the Riemann hypothesis; let me just drop the statement on you:

Riemann’s hypothesis: The nontrivial zeroes of the Riemann zeta function have real part = 1/2.

Don’t worry if you don’t have the mathematical background to make sense of this statement, but do note that it carries a hefty reward if it’s proven: the Clay Math Institute is offering $1 million for a solution to this problem.

In this post, I’m going to reverse the historical trend: I will begin with something complicated and deep, and use it to derive something quite simple.

First, let’s take the imaginary parts of the Riemann zeta function’s zeroes:

14.134725142, 21.022039639, 25.010857580, 30.424876126, etc.

My list goes up to the 100th one, at 236.524229665. Let Z_n be the nth entry in this list, and now define the function F(x) according to the following rule:

[The defining rule for F(x) appeared here as an image.]
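Based on the behavior described below, a natural candidate for the rule is the cosine sum F(x) = cos(Z_1·log x) + cos(Z_2·log x) + … + cos(Z_N·log x). I can’t confirm that this is exactly the formula from the talk, but here is a sketch under that assumption (mpmath’s zetazero supplies the Z_n; the rest of the names are mine):

```python
import math
from mpmath import zetazero   # zetazero(n) returns the nth nontrivial zero of zeta

N = 20
Z = [float(zetazero(n).imag) for n in range(1, N + 1)]   # 14.1347..., 21.0220..., ...

def F(x):
    """Assumed form of F(x): the sum of cos(Z_n * log x) over the first N zeros."""
    return sum(math.cos(z * math.log(x)) for z in Z)

# Large negative values (dips) should show up at x = 2, 3, 5, 7, ...
for x in range(2, 20):
    print(x, round(F(x), 2))
```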

Clearly, F depends on our choice of N, which is where things get interesting. Here’s a graph for N = 5:

[Image: graph of F(x) for N = 5.]

Neat, but not very enlightening. (By the way, note that the vertical axis is at x=5, not x=0.) But what if we take N = 20?

[Image: graph of F(x) for N = 20.]

Notice where those major dips occur: at x = 2, 3, 5, 7, etc. We’ve found some prime numbers! And it keeps getting better—here’s the graph for N = 100:

[Image: graph of F(x) for N = 100.]

Notice that while the primes cause the most pronounced dips, other numbers produce more moderate dips—look at 4, 9, 8, and 16 for the best four examples of this. And some numbers don’t correspond to anything significant in the graph—see 6, 10, 12, 14, 15, and 18.

Take a second to reflect on this: we’ve used the zeroes of the Riemann zeta function (the “gun” from the title) to find the sequence of prime numbers (the “knife fight”)! And along the way, we’ve found a correspondence between the analytic properties of F(x) and some non-prime numbers! And, as you might imagine, the graph of F(x) will produce more accurate values as N increases.

Clearly, there’s a theorem lurking in the background of this. Unfortunately, I can’t offer a reference. I remember seeing someone speak about it at a conference, and I hastily wrote down the formula, but that’s all the information I have. So if anyone recognizes this, or has any insight on the non-prime dips, let me know!

“What is Math?” — As Told By Math Jokes

If you’ve spent any time learning a specialized subject along with a group of friends & peers, you’ve probably experienced Discipline-Specific Humor—that uncomfortable but oh-so-funny form of joke telling that only makes sense to you and your fellow students. Usually, they rely on a carefully constructed pun (like this one), but occasionally they tell you something about the inner workings of the discipline.

Since I’m a mathematician by training, math jokes are the ones I appreciate the most. I’m guessing you’ve all heard a few in your school days, and maybe you’ve even heard a few outside of the world of the math class.

What I’d like to do here is lay out some of my favorite math jokes, and spend a moment examining what exactly they tell us about mathematics and mathematicians. You may learn something new about the intellectual habits of the field, and you’ll definitely gain an appreciation for the people whose quirks give this subject some of its appeal. Here we go!

An engineer, a physicist, and a mathematician were on a train ride through Scotland. They passed by a field with a black sheep in it. The engineer said, “hmm, I guess sheep in Scotland are black.” The physicist said, “well, at least one sheep in Scotland is black.” The mathematician said, “there exists at least one sheep in Scotland, at least half of which is black.”

Mathematicians are known for being annoyingly precise. One famous case appears in Imre Lakatos’ book, Proofs and Refutations. In it, a professor and a group of students try to prove Euler’s formula: “The number of vertices of a polyhedron, minus the number of edges, plus the number of faces, equals 2.” The thing is, they go through a dozen proofs (and refutations of those proofs), and just as many reformulations of the theorem. After a while, you’re not sure that any mathematical statements are true at all! (Don’t stop there, though, because Lakatos brings it home eventually.) Good times. Next:

A physicist and a mathematician live in adjacent apartments. In the middle of the night, an electrical short starts fires in both of their kitchens. The physicist awakes first, sees the fire, grabs the fire extinguisher, and puts it out. The mathematician awakes later, sees the fire, then sees the fire extinguisher, and says “Aha! A solution exists!” and returns to bed.

Okay, I love this one. In mathematics, there’s a distinction between “constructive” proof and “existential” proof. Constructive proofs rely on the fact that you can find an example of something that has a desired property: for instance, you can prove the statement “there is a positive number x for which x²–x–2 equals zero” by pointing out that the number 2 satisfies this property. However, you can’t take this approach with the statement “there is a power of 2 that begins with the digits 50283086.” (Prove me wrong, I dare you.) Nonetheless, this is a true statement. (The general theorem is “For any positive integer n, there is a power of two whose first digits make up the number n.”) Maybe I’ll give the proof in a future post. In the meantime:
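Incidentally, if you want to hunt for such a power of 2 yourself, logarithms are all you need: 2^k begins with the digits of p exactly when the fractional part of k·log₁₀2 falls in [log₁₀p, log₁₀(p+1)), shifted into the unit interval. A minimal sketch (the function name is mine):

```python
import math

def power_of_two_starting_with(prefix: str) -> int:
    """Return an exponent k such that 2**k begins with the given digits."""
    p = int(prefix)
    lo = math.log10(p) % 1
    hi = math.log10(p + 1) % 1        # assumes the interval doesn't wrap past 1
    log2 = math.log10(2)
    k = 1
    while True:
        if lo <= (k * log2) % 1 < hi:
            return k
        k += 1

k = power_of_two_starting_with("2013")    # a short prefix comes back almost instantly
print(k, str(2 ** k)[:8])
# power_of_two_starting_with("50283086")  # such a k exists too, but the search takes much longer
```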

A biologist, a physicist, and a mathematician are having lunch at a sidewalk cafe. They see two people enter an apparently-empty building across the street; a few minutes later, the same two people leave with a third person. The physicist says “Our initial observation must have been incorrect.” The biologist says “They must have multiplied.” The mathematician says, “If another person enters the building, it will be empty again.”

Math gets some ridicule for being a “head in the clouds” sort of subject, where decades-long research programs could have no immediate applications to the real world. This isn’t all that fair—there are lots of applications of mathematics! More to the point, though: one of the best things about mathematics is that it has real-world applications regardless of whether a mathematician is looking for them or not. A famous example is Riemannian geometry, which was a paragon of “head in the clouds” mathematics, until it became the framework for general relativity theory. It’s a similar story for linear algebra and quantum mechanics. And here are some other theory/application pairings you might not know about: group theory & quarks, field theory & cryptography, and even Graeco-Latin squares & experimental design (see Klyve & Stemkoski and then go find Ronald Fisher’s book, Design of Experiments).

This last one’s just for fun:

Professor Wilson and his family recently moved to a new neighborhood. Since the professor was known to be absent-minded, his wife wrote down their new address on a slip of paper and gave it to him as he left the house for work that morning. At some point during the day, he needed to write down an important equation for a student, and took out the slip of paper to write it on the back, and gave it to the student. Forgetting all about the paper, and his new house, he went back to his old house. Realizing his mistake when he arrived, he said to the young girl on the front step, “Hello, little girl, do you know where the Wilsons live?” The girl said, “It’s okay, daddy, mommy sent me here to get you.”

I can be absent-minded sometimes, but I’m very glad it’s not this bad.

Finally, if you’re yearning for some more math geek humor, go here next.