By James Aspinwall, co-written by Alfred Pennyworth (my trusted AI) — March 3, 2026, 08:00
Two articles ago we tore Chronic apart and found it couldn’t parse “in 2 minutes”. One article ago we cataloged every expression Ruby’s Chronic handles that Elixir’s doesn’t and concluded: fork it, add Tier 1, ship it. This article is the receipt.
What We Shipped
Six features, all additive. No architectural changes. The library still has two files, still has zero runtime dependencies, still uses the same tokenize-then-pattern-match pipeline.
1. Named Times (noon, midnight, now)
The simplest win. Three string replacements in a new preprocessing step:
defp normalize_named_times(time) do
time
|> String.replace(~r/\bnoon\b/i, "12:00pm")
|> String.replace(~r/\bmidday\b/i, "12:00pm")
|> String.replace(~r/\bmidnight\b/i, "12:00am")
end
“noon” becomes “12:00pm” before the tokenizer ever sees it. The existing [time: time] clause handles the rest. This means “tomorrow at noon”, “tuesday at midnight”, and every other combination works for free — no new process clauses needed for these.
“now” gets its own clause because it has no time component to substitute:
defp process([word: "now"], [currently: currently]) do
shift(currently, 0, :second)
end
Before:
Chronic.parse("noon") #=> {:error, :unknown_format}
Chronic.parse("midnight") #=> {:error, :unknown_format}
Chronic.parse("now") #=> {:error, :unknown_format}
After:
Chronic.parse("noon") #=> {:ok, ~N[2026-03-03 12:00:00.000000], 0}
Chronic.parse("tomorrow at midnight") #=> {:ok, ~N[2026-03-04 00:00:00.000000], 0}
Chronic.parse("now") #=> {:ok, ~N[2026-03-03 08:00:00.000000], 0}
2. The :unit Token Type
This is the foundational change that unlocks relative durations and ago expressions. The tokenizer gained a new token type and a lookup map:
@units %{
"second" => :second, "seconds" => :second, "sec" => :second, "secs" => :second,
"minute" => :minute, "minutes" => :minute, "min" => :minute, "mins" => :minute,
"hour" => :hour, "hours" => :hour, "hr" => :hour, "hrs" => :hour,
"day" => :day, "days" => :day,
"week" => :week, "weeks" => :week
}
One check in the cond chain, before the word fallback:
Map.has_key?(@units, token) ->
{:unit, Map.fetch!(@units, token)}
Now “minutes” tokenizes as {:unit, :minute} instead of {:word, "minutes"}. This single change means we can write one process/2 clause that handles all five unit types — no more clause-per-unit explosion.
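To make the tokenizer change concrete, here is a standalone sketch of that branch. The module name and the neighboring branches are illustrative stand-ins, not the library's exact code — only the `@units` lookup mirrors the change described above:

```elixir
# Illustrative sketch of the unit-aware tokenizer branch.
# UnitTokenSketch and the number/word branches are stand-ins.
defmodule UnitTokenSketch do
  @units %{
    "second" => :second, "seconds" => :second,
    "minute" => :minute, "minutes" => :minute,
    "hour" => :hour, "hours" => :hour,
    "day" => :day, "days" => :day,
    "week" => :week, "weeks" => :week
  }

  def tokenize_one(token) do
    cond do
      # the unit check sits before the generic word fallback
      Map.has_key?(@units, token) -> {:unit, Map.fetch!(@units, token)}
      Regex.match?(~r/\A\d+\z/, token) -> {:number, String.to_integer(token)}
      true -> {:word, token}
    end
  end
end

UnitTokenSketch.tokenize_one("minutes") #=> {:unit, :minute}
UnitTokenSketch.tokenize_one("2")       #=> {:number, 2}
UnitTokenSketch.tokenize_one("ago")     #=> {:word, "ago"}
```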
3. Relative Durations and the shift/3 Helper
Three process clauses and one helper cover “in N units”, “N units ago”, and “N units from now”:
# "in 2 minutes", "in 3 hours", "in 1 day"
defp process([word: "in", number: n, unit: unit], [currently: currently]) do
shift(currently, n, unit)
end
# "3 hours ago", "2 days ago"
defp process([number: n, unit: unit, word: "ago"], [currently: currently]) do
shift(currently, -n, unit)
end
# "5 minutes from now"
defp process([number: n, unit: unit, word: "from", word: "now"], [currently: currently]) do
shift(currently, n, unit)
end
The shift/3 helper converts units to seconds and delegates to NaiveDateTime.add/2:
defp shift(currently, n, :second) do
  with {:ok, datetime} <- NaiveDateTime.from_erl(currently, {0, 6}) do
    {:ok, NaiveDateTime.add(datetime, n)}
  end
end
defp shift(currently, n, :minute), do: shift(currently, n * 60, :second)
defp shift(currently, n, :hour), do: shift(currently, n * 3600, :second)
defp shift(currently, n, :day), do: shift(currently, n * 86_400, :second)
defp shift(currently, n, :week), do: shift(currently, n * 7 * 86_400, :second)
The ago clause is identical to the “in” clause except the sign is negative. Same helper, same units, one character difference. This is exactly the composability the gap analysis predicted — once you have shift/3, both directions are trivial.
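The conversions themselves are easy to sanity-check in isolation with NaiveDateTime.add/2, skipping the parser and its erl-tuple plumbing entirely:

```elixir
# Spot-checking the unit-to-seconds arithmetic behind shift/3.
currently = ~N[2026-03-03 08:00:00]

NaiveDateTime.add(currently, 2 * 60)      #=> ~N[2026-03-03 08:02:00]  "in 2 minutes"
NaiveDateTime.add(currently, -3 * 3600)   #=> ~N[2026-03-03 05:00:00]  "3 hours ago"
NaiveDateTime.add(currently, 7 * 86_400)  #=> ~N[2026-03-10 08:00:00]  "in 1 week"
```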
Before:
Chronic.parse("in 2 minutes") #=> {:error, :unknown_format}
Chronic.parse("3 hours ago") #=> {:error, :unknown_format}
Chronic.parse("5 minutes from now") #=> {:error, :unknown_format}
After:
Chronic.parse("in 2 minutes") #=> {:ok, ~N[2026-03-03 08:02:00.000000], 0}
Chronic.parse("3 hours ago") #=> {:ok, ~N[2026-03-03 05:00:00.000000], 0}
Chronic.parse("5 minutes from now") #=> {:ok, ~N[2026-03-03 08:05:00.000000], 0}
4. “tomorrow 9am” Without “at”
The original library required the word “at” between “tomorrow” and a time — "tomorrow at 9am" worked, "tomorrow 9am" didn’t. One clause:
defp process([word: "tomorrow", time: time], [currently: currently]) do
process_tomorrow(currently, time)
end
Placed right after the existing [word: "tomorrow", word: "at", time: time] clause. The “at” version still matches first when present. Three lines, reuses the existing process_tomorrow/2 helper. “yesterday 9am” already worked — this was an inconsistency, not a missing concept.
5. next/last + Day of Week
Six new clauses in all, plus a new find_previous_day_of_the_week/2 helper (the mirror of the existing find_next_day_of_the_week/2) to back the “last” forms. The two bare-day-name clauses:
# "next monday", "next friday"
defp process([word: "next", day_of_the_week: dow], [currently: {current_date, _}]) do
date = current_date |> find_next_day_of_the_week(dow)
combine(date ++ [hour: 12, minute: 0, second: 0, microsecond: {0, 6}])
end
# "last tuesday", "last friday"
defp process([word: "last", day_of_the_week: dow], [currently: {current_date, _}]) do
date = current_date |> find_previous_day_of_the_week(dow)
combine(date ++ [hour: 12, minute: 0, second: 0, microsecond: {0, 6}])
end
Both directions support bare day names (defaulting to noon), day + time ("next monday 9am"), and day + “at” + time ("last friday at 5pm") — those variants account for the other four clauses.
The find_previous_day_of_the_week/2 helper walks backward from the current date, one day at a time, until it hits the target day of week — the same recursive approach as its forward counterpart, just subtracting instead of adding.
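A standalone sketch of that backward walk, written against Elixir's Date module rather than the library's {year, month, day} tuples — so the types here differ from the real helper, but the recursion is the same shape:

```elixir
# Walk backward one day at a time until the day of week matches.
# Uses Date structs for brevity; the library's helper operates on
# {year, month, day} tuples instead.
defmodule PrevDowSketch do
  # target: 1 = Monday .. 7 = Sunday, as in Date.day_of_week/1
  def find_previous_day_of_the_week(date, target) do
    date = Date.add(date, -1) # always step back at least one day
    if Date.day_of_week(date) == target do
      date
    else
      find_previous_day_of_the_week(date, target)
    end
  end
end

# 2026-03-03 is a Tuesday, so "last tuesday" lands a full week earlier:
PrevDowSketch.find_previous_day_of_the_week(~D[2026-03-03], 2) #=> ~D[2026-02-24]
PrevDowSketch.find_previous_day_of_the_week(~D[2026-03-03], 5) #=> ~D[2026-02-27]
```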
Before:
Chronic.parse("next friday") #=> {:error, :unknown_format}
Chronic.parse("last tuesday") #=> {:error, :unknown_format}
Chronic.parse("next monday at 9am") #=> {:error, :unknown_format}
After:
Chronic.parse("next friday") #=> {:ok, ~N[2026-03-06 12:00:00.000000], 0}
Chronic.parse("last tuesday") #=> {:ok, ~N[2026-02-24 12:00:00.000000], 0}
Chronic.parse("next monday at 9am") #=> {:ok, ~N[2026-03-09 09:00:00.000000], 0}
6. Misspelling Tolerance and Article Conversion
Two preprocessing steps that fix input before tokenization ever runs.
Misspellings use a static map of 16 common variants:
@misspellings %{
"tommorrow" => "tomorrow", "tomorow" => "tomorrow", "tommorow" => "tomorrow",
"munday" => "monday", "tusday" => "tuesday", "teusday" => "tuesday",
"wensday" => "wednesday", "wendsday" => "wednesday", "wenesday" => "wednesday",
"wennsday" => "wednesday", "thersday" => "thursday", "thrusday" => "thursday",
"fryday" => "friday", "saterday" => "saturday", "satterday" => "saturday",
"sumday" => "sunday"
}
Each word in the input is looked up in the map before tokenization. If it matches, the corrected spelling is substituted. The tokenizer’s existing prefix matching already handles abbreviations (“tue”, “wed”, “thu”), so these misspelling entries cover the cases where the prefix itself is wrong.
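One plausible shape for that per-word lookup — the function name matches the preprocessing pipeline, but the body is a sketch and the map is abbreviated to three of the sixteen entries:

```elixir
# Sketch of normalize_misspellings/1: split on spaces, swap any word
# found in the map, rejoin. Map abbreviated; the real implementation
# may differ in detail.
defmodule MisspellingSketch do
  @misspellings %{
    "tommorrow" => "tomorrow",
    "tusday" => "tuesday",
    "fryday" => "friday"
  }

  def normalize_misspellings(time) do
    time
    |> String.split(" ")
    |> Enum.map(&Map.get(@misspellings, &1, &1))
    |> Enum.join(" ")
  end
end

MisspellingSketch.normalize_misspellings("tommorrow at 10am") #=> "tomorrow at 10am"
MisspellingSketch.normalize_misspellings("fryday at 5pm")     #=> "friday at 5pm"
```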
Article conversion uses a single regex:
defp normalize_articles(time) do
String.replace(time, ~r/\ban?\s+(?=(?:seconds?|secs?|minutes?|mins?|hours?|hrs?|days?|weeks?)\b)/i, "1 ")
end
This turns “an hour ago” into “1 hour ago” and “a day ago” into “1 day ago”. The lookahead ensures “a” is only replaced when followed by a duration unit — it won’t touch “a” in other contexts.
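The lookahead is easy to verify directly; these three cases cover a match on “an”, a match on bare “a”, and a deliberate non-match:

```elixir
# Exercising the article-conversion regex from normalize_articles/1.
regex = ~r/\ban?\s+(?=(?:seconds?|secs?|minutes?|mins?|hours?|hrs?|days?|weeks?)\b)/i

String.replace("an hour ago", regex, "1 ") #=> "1 hour ago"
String.replace("a day ago", regex, "1 ")   #=> "1 day ago"
String.replace("a tuesday", regex, "1 ")   #=> "a tuesday" (lookahead fails: not a unit)
```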
Before:
Chronic.parse("tommorrow at 10am") #=> {:error, :unknown_format}
Chronic.parse("an hour ago") #=> {:error, :unknown_format}
Chronic.parse("tusday") #=> {:error, :unknown_format}
After:
Chronic.parse("tommorrow at 10am") #=> {:ok, ~N[2026-03-04 10:00:00.000000], 0}
Chronic.parse("an hour ago") #=> {:ok, ~N[2026-03-03 07:00:00.000000], 0}
Chronic.parse("tusday") #=> {:ok, ~N[2026-03-10 12:00:00.000000], 0}
The Full Preprocessing Pipeline
The preprocessing step went from 3 operations to 6:
defp preprocess(time) do
time
|> String.replace("-", " ") # "aug-20" -> "aug 20"
|> String.replace(~r/(?<=\d)\s+(?=[ap]m\b)/i, "") # "9 am" -> "9am"
|> normalize_named_times() # "noon" -> "12:00pm"
|> normalize_articles() # "an hour" -> "1 hour"
|> normalize_misspellings() # "tommorrow" -> "tomorrow"
|> String.split(" ")
|> Tokenizer.tokenize
end
Order matters in one direction: named times must be normalized before the string is split and tokenized, since “noon” has to become “12:00pm” before the tokenizer looks for time patterns. Article conversion and misspelling correction are independent of each other — “a” and “an” never appear in the misspelling map — so their relative order is a stylistic choice.
What Changed, By the Numbers
| Metric | Before | After |
|---|---|---|
| Token types | 5 | 6 (added :unit) |
| process/2 clauses | 28 | 39 |
| Preprocessing rules | 3 | 6 |
| Lines in chronic.ex | 426 | 548 |
| Lines in tokenizer.ex | 76 | 85 |
| Total new code | — | ~131 lines |
| Tests | 64 | 103 |
| Test failures | 0 | 0 |
The library grew by ~25%. Every existing test still passes. No function signatures changed. No return types changed. The parse/2 API is identical.
What’s Still Missing
This is Tier 1 only. The gap analysis identified two more tiers:
Tier 2 (not implemented): this morning, tonight, next month, last week, next year, a context option (past/future bias), and day portions (afternoon, evening as standalone modifiers). These require a :grabber token type, and with them the library starts to diverge semantically from simple offset arithmetic.
Tier 3 (not implemented): compound relative (“24 hours and 20 minutes from now”), ordinal-within-range (“3rd Wednesday in November”), seasons, quarters, fortnights, word numbers (Numerizer), timezone abbreviations, span return mode. This is a full rewrite.
For The Orchestrator — one user scheduling Pushover notifications and task reminders — Tier 1 covers every realistic input pattern. “in 30 minutes”, “tomorrow at noon”, “next friday at 3pm”, “2 hours ago” — these are what people actually type.
The Verdict, Revisited
The previous two articles asked: is regex pattern matching enough for natural language time parsing? The answer, after implementation: yes, for the scope that matters. The architecture didn’t need to change. The tokenizer needed one new type. The parser needed eleven new clauses and two helpers. The preprocessing needed three new normalization steps.
The original library wasn’t limited by its technique. It was limited by its coverage. Adding coverage was straightforward, composable, and fast. The :unit token type and shift/3 helper demonstrate that even without Ruby Chronic’s full semantic tag system, Elixir’s pattern matching on token lists scales cleanly — at least through Tier 1.
Tier 2 is where composition starts to strain. “next month” requires knowing what “month” means as a repeatable unit with variable length (28-31 days), not just a fixed number of seconds. “this morning” requires a concept of day portions with time ranges. These are semantic concepts that don’t reduce to shift(currently, n, unit). When The Orchestrator’s input patterns demand them, the library will need to evolve. But not today.
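To make the variable-length point concrete, here is what a month-aware shift might look like. Everything below is an assumption about future work — the module, the clamping policy, and the function name are invented for illustration:

```elixir
# Hypothetical Tier 2 helper: add n calendar months to a Date, clamping
# the day-of-month when the target month is shorter. Not library code —
# a sketch of the semantic decision shift/3 never had to make.
defmodule MonthShiftSketch do
  def shift_months(%Date{year: y, month: m, day: d}, n) do
    total = y * 12 + (m - 1) + n
    year = div(total, 12)
    month = rem(total, 12) + 1
    last = Date.days_in_month(Date.new!(year, month, 1))
    Date.new!(year, month, min(d, last))
  end
end

MonthShiftSketch.shift_months(~D[2026-01-31], 1) #=> ~D[2026-02-28]  (day clamped)
MonthShiftSketch.shift_months(~D[2026-03-03], 1) #=> ~D[2026-04-03]
```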
Ship it. Move on. Revisit when the input data says to.