Tab Completion

I'm Tab Atkins Jr, and I wear many hats. I work for Google on the Chrome browser as a Web Standards Hacker. I'm also a member of the CSS Working Group, and am either a member or contributor to several other working groups in the W3C. You can contact me here.

Strings Shouldn't Be Iterable By Default

Last updated:

Most programming languages I use, particularly those that are more "dynamic", have made the same, annoying mistake, which has a pretty high chance of causing bugs for very little benefit: they all make strings iterable by default.

By that I mean that you can use strings as the sequence value in a loop, like for(let x of someString){...}. This is a Mistake, for several reasons, and I don't think there's any excuse to perpetuate it in future languages, as even in the cases where you intend to loop over a string, this behavior is incorrect.

Strings are Rarely Collections

The first problem with strings being iterable by default is that, in your program's semantics, strings are rarely actually collections. Something being a collection means that the important part of it is that it's a sequence of individual things, each of which is important to your program. An array of user data, for example, is semantically a collection of user data.

Your average string, however, is not a "collection of single characters" in your program's semantics. It's very rare for a program to actually want to interact with the individual characters of a string as significant entities; instead, it's almost always a singular item, like an integer or a normal object.

The consequence of this is that it's very easy to accidentally write buggy code that nonetheless runs, just incorrectly. For example, you might have a function that's intended to take a sequence as one of its arguments, which it'll loop over; if the user accidentally passes a single integer, the function will throw an error since integers aren't iterable, but if the user accidentally passes a single string, the function will successfully loop over the characters of the string, likely not doing what was expected.

For example, this commonly happens to me when initializing sets in Python. set() is supposed to take a sequence, which it'll consume and add the elements of to itself. If I need to initialize it with a single string, it's easy to accidentally type set("foo"), which then initializes the set to contain the strings "f" and "o", definitely not what I intended! Had I incorrectly initialized it with a number, like set(1), it immediately throws an informative error telling me that 1 isn't iterable, rather than just waiting for a later part of my program to work incorrectly because the set doesn't contain what I expect.
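The same footgun exists in JavaScript, whose Set constructor also consumes any iterable:

```javascript
// A string silently "works" as the iterable argument...
const oops = new Set("foo");
console.log([...oops]); // ["f", "o"] -- not the intended {"foo"}

// ...while a non-iterable like a number fails loudly and immediately,
// which is the far more helpful behavior.
let threw = false;
try {
  new Set(1);
} catch (e) {
  threw = e instanceof TypeError; // "number 1 is not iterable"
}
console.log(threw); // true
```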

As a result, you often have to write code that defensively tests whether an input is a string before looping over it. There's not even a useful affirmative test for loop-appropriateness; testing isinstance(arg, collections.abc.Sequence) returns True for strings! This is, in almost all cases, the only sequence type that requires this sort of special handling; every single other object that implements Sequence is almost always intended to be treated as a sequence.

There's No "Correct" Way to Iterate a String

Another big issue is that there are so many ways to divide up a string, any of which might be correct in a given situation. You might want to divide it up by codepoints (like Python), grapheme clusters (like Swift), UTF-16 code units (like JS in some circumstances), UTF-8 bytes (Python bytestrings, if encoded in UTF-8), or more. For each of these, you might want to have the string normalized into one of the Unicode Normalization Forms first, too.
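JavaScript alone already exposes several of these segmentations, and they genuinely disagree with each other:

```javascript
const s = "caf\u00E9 \u{1F469}\u{1F3FD}"; // "café 👩🏽" (woman + skin-tone modifier)

// UTF-16 code units: what .length and indexing count.
console.log(s.length); // 9 -- each emoji code point is a surrogate pair

// Code points: what for...of and the spread operator yield.
console.log([...s].length); // 7

// Grapheme clusters: what Intl.Segmenter yields; the emoji sequence is one cluster.
const seg = new Intl.Segmenter("en", { granularity: "grapheme" });
console.log([...seg.segment(s)].length); // 6

// UTF-8 bytes: what a TextEncoder produces.
console.log(new TextEncoder().encode(s).length); // 14
```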

None of these choices are broadly "correct". (Well, UTF-16 code units are almost always incorrect, but that's legacy JS for you.) Each has its benefits depending on your situation. None of them are appropriate to select as a "default" iteration method; the author of the code should really select the correct method for their particular usage. (Strings are actually super complicated! People should think about them more!)

Infinite Descent Shouldn't Be Thrown Around Casually

A further problem is that strings are the only built-in sequence type that is, by default, infinitely recursively iterable. By that I mean, strings are iterable, yielding individual characters. But these individual characters are actually still strings, just length-1 strings, which are still iterable, yielding themselves again.

This means that if you try to write code that processes a generic nested data structure by iterating over the values and recursing when it finds more iterable items (not uncommon when dealing with JSON), if you don't specially handle strings you'll infinite-loop on them (or blow your stack). Again, this isn't something you need to worry about for any other builtin sequence type, nor for virtually any custom sequence you write; strings are pretty singular in this regard.
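For instance, a naive recursive flatten in JavaScript has to special-case strings, or any string in the data recurses forever:

```javascript
// Recursively flatten any nested iterable structure (arrays, Sets, etc).
// Without the typeof-"string" guard, any string would recurse forever,
// because a length-1 string iterates to yield itself again.
function deepFlatten(value) {
  const out = [];
  for (const item of value) {
    const iterable =
      item != null && typeof item[Symbol.iterator] === "function";
    if (iterable && typeof item !== "string") {
      out.push(...deepFlatten(item));
    } else {
      out.push(item);
    }
  }
  return out;
}

console.log(deepFlatten([1, [2, ["three", new Set([4])]]]));
// [1, 2, "three", 4]
```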

(And an obvious "fix" for this is worse than the original problem: Common Lisp says that strings are composed of characters, a totally different type, which doesn't implement the same methods and has to be handled specially. It's really annoying.)

The Solution

The fix for all this is easy: just make strings non-iterable by default. Instead, give them several methods that return iterators over them, like .codepoints() or what-have-you. (Similar to .keys()/.values()/.items() on dicts in Python.)
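A minimal sketch of what that could look like, using a hypothetical wrapper class (the names Str, .codepoints(), and .codeunits() are my invention, not any real proposal):

```javascript
// Hypothetical non-iterable string: iterating it directly throws,
// so callers must pick an explicit iteration method.
class Str {
  constructor(s) { this.value = s; }
  // Deliberately NOT implementing [Symbol.iterator].
  codepoints() {
    return this.value[Symbol.iterator](); // JS strings iterate by code point
  }
  codeunits() {
    return Array.from({ length: this.value.length }, (_, i) => this.value[i]).values();
  }
}

const s = new Str("hi\u{1F600}"); // "hi😀"
console.log([...s.codepoints()]); // ["h", "i", "😀"]
console.log([...s.codeunits()].length); // 4 -- the emoji is a surrogate pair
// [...s] throws TypeError: s is not iterable, as desired
```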

This avoids whole classes of bugs, as described in the first and third sections. It also forces authors, in the rare cases they actually do want to loop over a string, to affirmatively decide on how they want to iterate it.

So, uh, if you're planning on making a new programming language, maybe consider this?

Ki-Users, or, the Warlock Multiclassing Rules That Are Almost Already Built Into the Game

Last updated:

In earlier editions of D&D, multiclassing between spellcasters was generally pretty terrible. Spell levels increased in power super-linearly, so losing access to high-level spells was much worse than gaining double the number of low-level spells.

5e made this substantially better - you add your levels together to determine the spell slots you have, so a Wizard10/Cleric10 still gets 9th-level spell slots just like a Wizard20. The drawback is that neither class gives you spells known above what a level-10 member of that class could know (5th-level spells) - but a lot of spells scale up in power when cast from higher-level slots, so that 9th-level slot is still useful for a big attack, even if it's not the equal of an actual 9th-level spell.

However, 5e also introduced a totally different spellcasting mechanic - Pact Magic - and then utterly failed to address multiclassing with it. A Warlock10/Wizard10 just... has 5th level slots. Two more than a Wiz10 would normally have, and those extra two refresh on a short rest, but still, this sucks.

Related to this, the Spellcasting multiclass rules also cover "half-casters" (like the Paladin or Ranger) and "third-casters" (like the Eldritch Knight or Arcane Trickster) - they add 1/2 or 1/3 their levels to a full-casting class's levels to figure out spell slots. But again, Pact Magic has no obvious way to do "half-casters", which severely limits how homebrew can approach Warlock-ish stuff.

But Here's The Thing

The special thing about Pact Magic is that your spell slots regen on short rest, so you don't need too many of them. But you know who else kinda has spellcasting that regens on short rest? MONKS.

When you go look at monk "spellcasting", they burn ki points to do it, which regen on short rest. They learn up to 5th level spells, spread over twenty levels. They can spend extra ki to power up the spell, at the same time as they unlock higher-level spells. They're basically just spell-point Warlocks, all in all.

(The Elemental monk charges spell level + 1 in ki points, but that's pretty widely recognized as crappy. The Shadow monk charges straight spell level. Other monk subclasses with spell-casting stuff also either charge spell level, or do spell level +1 but get extra benefits, like the Sun Soul which can Burning Hands as a bonus action.)

If we were to convert the Warlock over to Ki points, at the spell level = ki cost rate, the Warlock would even roughly keep up with the Monk's ki pool total, maxing out at 20 (four 5th-level slots). The Warlock just gets additional power above 5th-level spells in the form of their Mystic Arcanum, single-use higher-level spells that recharge on long rest. We'll handle those in a bit.

Overall, the Warlock would retain roughly the same power as they have today - slightly higher versatility, as they could cast more low-level spells in an encounter, but often slightly less overall power. (RAW Warlock gets 3 5th-level slots at level 11, equivalent to 15 ki, while this Ki-lock would only have 11, gradually rising to 15 at 15th level. Similarly, RAW-lock gets a fourth slot at 17, while Ki-lock only has 17 points, finally matching at level 20.) The big benefit is that the Warlock is no longer virtually restricted to scaling spells - instead, they can take non-scaling spells and actually get reasonable use out of them, since they'll just always be cast at their normal (low) cost, while RAW-lock has to "waste" the additional power of their higher-level slots.
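A quick sanity check of those numbers (a sketch; the RAW Pact Magic slot progression from the PHB is encoded by hand below, and "ki-equivalent" just means slots × slot level):

```javascript
// RAW Warlock Pact Magic: slot level is min(ceil(lvl/2), 5);
// slot count is 1 at 1st, 2 at 2nd, 3 at 11th, 4 at 17th.
function rawSlotKiEquivalent(level) {
  const slotLevel = Math.min(Math.ceil(level / 2), 5);
  const slots = level >= 17 ? 4 : level >= 11 ? 3 : level >= 2 ? 2 : 1;
  return slots * slotLevel;
}

const kiPool = (level) => level; // the Ki-lock's pool simply equals level

for (const lvl of [11, 15, 17, 20]) {
  console.log(`level ${lvl}: RAW ${rawSlotKiEquivalent(lvl)} vs ki ${kiPool(lvl)}`);
}
// level 11: RAW 15 vs ki 11
// level 15: RAW 15 vs ki 15
// level 17: RAW 20 vs ki 17
// level 20: RAW 20 vs ki 20
```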

So How's This Actually Work?

Here's the plain details of ki-using:

Warlocks get a ki pool equal to their level, just like Monks. It refills on short rest. They can cast a spell that they know by spending ki points equal to its level (and can spend additional points to cast it at a higher level).

At 1st level they can only spend 1 point on a given spell. This increases to 2 at 3rd, 3 at 5th, 4 at 7th, and 5 at 9th. This also determines what level of spells they're allowed to learn, in the same fashion as other full casters.

At 11th level they get their first Overcharge: usable 1/long rest, this lets them cast a spell for free, as if they had spent 6 ki points on it. At 13th level they gain an additional overcharge, worth a free 7-point cast; at 15th, another overcharge worth 8; and at 17th, a final overcharge worth 9. (So, by the end they have four Overcharges, each usable 1/long rest: a 6-point, 7-point, 8-point, and 9-point.) Alternately, instead of getting a free cast, they can spend an overcharge to refill their ki pool by 2 fewer points (the 6-point overcharge can be spent to refill 4 points of ki, 7-point overcharge can refill 5 points of ki, etc).

As a class feature, warlocks still learn one 6th-level spell at 11th level, 7th-level spell at 13th level, etc. These spells cannot be swapped out like their other spells known, which continue to be limited to a max of 5th level.

Multiclassing Ki-users

Monks are half-ki-users; they add 1/2 level to the full levels of Warlock to determine their ki limits and overcharges, but still add their full level to determine their ki pool. The full ki-user multiclass spellcaster table is:

Ki-User Level  Benefit
1              1 ki/spell
2              1 ki/spell
3              2 ki/spell
4              2 ki/spell
5              3 ki/spell
6              3 ki/spell
7              4 ki/spell
8              4 ki/spell
9              5 ki/spell
10             5 ki/spell
11             5 ki/spell, 6ki overcharge
12             5 ki/spell, 6ki overcharge
13             5 ki/spell, 6ki + 7ki overcharges
14             5 ki/spell, 6ki + 7ki overcharges
15             5 ki/spell, 6ki + 7ki + 8ki overcharges
16             5 ki/spell, 6ki + 7ki + 8ki overcharges
17             5 ki/spell, 6ki + 7ki + 8ki + 9ki overcharges
18             5 ki/spell, 6ki + 7ki + 8ki + 9ki overcharges
19             5 ki/spell, 6ki + 7ki + 8ki + 9ki overcharges
20             5 ki/spell, 6ki + 7ki + 8ki + 9ki overcharges
Ki-user level is Warlock + ½ Monk levels. Ki pool is Warlock + Monk levels.

"Casting" Monk subclasses, like Way of the Elements, can use overcharges earned from multiclassing in a full-ki-user like normal; they can cast their known spells at a higher level, or recharge their ki pool. They do not learn any higher-level spells, however. Non-casting subclasses, like Way of the Open Hand, have no scaling-ki abilities, and so can only use overcharges to recharge their ki pool.

Interactions with Normal Spellcasters

First, multiclassing a ki-user and a spellcaster partially counts for both; your ki-user levels count ⅓ for the spellcasting multiclass table (or half that for Monks and other half-ki users), and your spellcasting levels count ⅓ for the ki-user multiclass table (or half or third that for lesser casters) and for the ki pool.

Second, ki points and spell slots can be spent fairly interchangeably. If you know a spell from a spellcasting class, you can cast it by spending ki equal to the level of slot you would otherwise use (subject to your normal ki spending limits) or by spending an appropriate overcharge to cast a spell at 6th-level or higher; similarly, if you know a spell from a ki-using class, you can expend a spell slot of the appropriate level to cast it instead.

If a class ability would let you use a spell slot for any non-casting purpose (such as Paladin's Smite, or Sorcerer's metamagic pool recharging), you can spend ki equal to the desired slot's level (again, subject to your ki spending limits, or spending an appropriate overcharge for higher-level slots); similarly, if you have an ability that costs ki, you can instead expend a spell slot of a level equal to or higher than the ki cost.

Interactions That I Think Are Fine

Ki-locks mostly function like normal warlocks, but their interactions with two other spellcasting classes do change a little.

The Paladin/Warlock combo relies on quickly-recharging warlock slots to power more frequent Smites. The only change in using Ki-lock is that the Paladin can do more lower-level smites; a Pal3/War17, for example, would have 17 ki points, potentially powering 17 +2d8 smites per short rest, versus the RAW-lock which gets 4 +5d8 smites per short rest. The Ki-lock can also burn all their overcharges to recharge an extra 22 ki points per long rest, for more smites, while the RAW-lock is limited to using their Mystic Arcanum for their original spellcasting purpose.

So, theoretically this just means that a Paladin could be adding +2d8 to nearly every attack over a short rest. That's useful, sure, but it means they're not opening combat with a powerful +5d8 smite and likely taking an enemy out right away. The raw numbers look bigger, but you really have to take the action economy into account when evaluating this sort of thing. The weaker, more frequent smites probably roughly balance out with the smaller number of more powerful smites that the RAW-lock is restricted to.

(That said, the Ki-lock still can open combat with a big smite, then use small smites later in combat, which is probably a best-of-both-worlds thing. Impact unclear; it's probably still usually better from an action-economy perspective to do larger smites less frequently.)

The other interaction is with Sorcerer; the "Coffee-lock" can unweave their Warlock slots into metamagic points repeatedly over multiple short rests, and re-weave them into Sorcerer slots that last until a long rest. This interaction is mostly just degenerate rules-abuse that isn't worth explicitly disallowing in the rules, in favor of just house-banning such nonsense, but Ki-lock doesn't actually make it any more powerful. A 10/10 mix can produce 13 metamagic points out of ki every short rest, producing a 5th-level slot and a 4th-level slot; a RAW-lock can only produce 10 (for a 5th and a 2nd slot), but ➀ a RAW-lock can produce 15 points per short rest at 11th level; they're just right at the cusp of a big power-gain, and ➁ Pact Magic/Spellcasting multiclassing is absolute shit in the RAW rules; if you use the "each counts ⅓ to the other" multiclassing rules I list up above with RAW-lock, you immediately get the 15 points per short rest. (And I recommend doing so; the ⅓ rule actually works really well overall.)

So overall, the multiclass interactions seem to be well-handled and nice.

New Syntax for JS "Function Stuff"

Last updated:

For the last little while, various people in TC39 have been developing several different proposed additions to JS, all trying to make various sorts of "function manipulation" easier and more convenient to work with.

At this point it's clear that TC39 isn't interested in accepting all of the proposals, and would ideally like to find a single proposal to accept and reject the rest. This post is an attempt to holistically lay out the problem space, see what problems the various proposals address well, and find the minimal set of syntax proposals that will address all the problems (or at least, help other people decide which problems they feel are worth fixing, and determine which syntaxes cover those problems).

(Note, this post is subject to heavy addition/revision as I learn more stuff. In particular, the conclusion at the end is subject to revision as we add more problems or proposals, or decide that some of the problems aren't worth solving.)

The Problems

As far as I can tell, these are the problems that have been brought up so far:

  1. .call is annoying

    If you want to rip a method off of one object and use it on an arbitrary other object as if it were a method of the second object, right now you have to either actually assign the method to the second object and call it as a method (obj.meth = meth; obj.meth(arg1, arg2);), or use the extremely awkward .call operation (meth.call(obj, arg1, arg2)).

    This sort of thing is useful for generic protocols; for example, most Array methods will work on any object with indexed properties and a length property. We'd also like to, for example, create methods usable on arbitrary iterables, without forcing authors into a totally different calling pattern from how they'd work on arrays (map(iter, fn) vs arr.map(fn)).

    Relatedly, method-chaining is a common API shape, where you start from some object and then repeatedly call mutating methods on it (or pure methods that return new instances), like foo.bar().baz(). This API shape can't easily be done without the functions actually being properties of the object, and the syntax variants are bad/confusing to write (baz(bar(foo)), for example).

  2. .bind is annoying

    If you want to store a reference to an object's method (or just use it inline, like arr.map(obj.meth)), you can't do the obvious let foo = obj.meth;, because it loses its this reference and won't work right. You instead have to write let foo = obj.meth.bind(obj); which is super annoying (and impossible if obj is actually an expression returning an object...), or write let foo = (...args) => obj.meth(...args);, which is less annoying but more verbose than we'd prefer.
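A runnable illustration of the problem in today's JS:

```javascript
const counter = {
  count: 41,
  next() { return ++this.count; },
};

// Ripping the method off loses its `this`: depending on strict vs
// sloppy mode, calling it either throws or mutates the wrong object.
const naive = counter.next;
let naiveResult;
try { naiveResult = naive(); } catch (e) { naiveResult = "threw"; }
console.log(naiveResult); // NaN or "threw" -- never 42

// The .bind workaround:
const bound = counter.next.bind(counter);
console.log(bound()); // 42

// The more verbose arrow-function workaround:
const arrow = (...args) => counter.next(...args);
console.log(arrow()); // 43
```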

  3. Heavily-nested calls are annoying.

    Particularly when writing good functional code (but fairly present in any decently-written JS imo), a lot of variable transformations are just passing a value thru multiple functions. There are only two ways to do this, both of which kinda suck.

    The first is to nest the calls: foo(bar(baz(value))). This is bad because it hides a lot of detail in minute structural bits, particularly if some of the functions take more than one argument. You end up having to do some non-trivial parsing yourself while reading it, to match up parens appropriately, and it's not uncommon to mess this up while writing or editing the code, putting too many or too few close-parens in some spots, or putting an arg-list comma in the wrong spot. You can make this a little better with heavy line-breaking and indentation, but then there's still a frustrating rightward march in your code, it's still hard to edit, and multi-arg functions are still hard to read (and really easy to forget the arg-list commas for!), because the additional arguments might be a good bit further down the page, by which point you've already lost your train of thought following the nesting of the first argument.

    The second way to handle this is to unroll the expression into a number of variable assignments, with the innermost part coming first and gradually building up into your answer. This does make reading and writing much less error-prone, but lots of small temporary variables come with their own problems. You now have to come up with names for these silly little single-use variables, and it's not immediately clear that they're single-use and can be ignored as soon as they get used in the next line. (And unless you create a dummy block, the variable names are in scope for the rest of the block, allowing for accidental reference.) Some of the temporary variables might have a meaningful concept behind them and be worthy of a name, but many are likely just semantically a "partially-processed value" and thus not worthy of anything more meaningful than temp1/temp2/etc.

    Further, this changes the shape of the code - what was once an expression that could be dropped inline anywhere is now a series of statements, which is much more limited in placement. For example, this expression might have been in the head of an if expression, and now has to be moved out to before it; this prevents you from doing easy else if chains.
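The two existing options side by side, plus the common userland workaround of a pipe() helper (my own sketch, not a standard function):

```javascript
const trim = (s) => s.trim();
const capitalize = (s) => s[0].toUpperCase() + s.slice(1);
const exclaim = (s) => s + "!";

// Option 1: nested calls, read inside-out.
console.log(exclaim(capitalize(trim("  hello  ")))); // "Hello!"

// Option 2: temporaries, read top-down but noisy and scope-leaking.
const trimmed = trim("  hello  ");
const capped = capitalize(trimmed);
console.log(exclaim(capped)); // "Hello!"

// Userland workaround: a pipe() helper threads the value through.
const pipe = (value, ...fns) => fns.reduce((v, f) => f(v), value);
console.log(pipe("  hello  ", trim, capitalize, exclaim)); // "Hello!"
```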

  4. Partially-applying a function is annoying.

    If you want to take an existing function and fill in some of its arguments, but leave it as a function with the rest to be filled in later, right now you have to write something like let partialFoo = (arg1, arg3) => foo(arg1, value, arg3);. This is more verbose and annoying than ideal, especially since this sort of "partial application" is very common in functional programming (for example, filling in all but one of a function's arguments, then passing it to .map()).

    In particular, the problem here is that the important part of the expression is the arguments you're filling in, but the way you write it instead requires naming all the parts you're not filling in, then referencing those names a second time in the actual call, obscuring the values you're actually pre-filling. This is also especially awkward in JS if your function takes an option-bag argument and you're trying to fill in some of those arguments, but let the later caller fill in the rest; you have to do some shenanigans with Object.assign to make it work.
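What this looks like in current JS, for both the positional and the option-bag case (all names here are illustrative, not a real API):

```javascript
const greet = (greeting, name, punct) => `${greeting}, ${name}${punct}`;

// Partially applying the first and last arguments by hand: the part
// we're NOT filling in (name) still has to be named and repeated.
const casualGreet = (name) => greet("hey", name, "!");
console.log(casualGreet("Ada")); // "hey, Ada!"

// Option-bag partial application needs spread/Object.assign shenanigans
// so the later caller can still override or extend the options.
const fetchish = (url, opts) => ({ url, ...opts }); // stand-in for a real API
const withAuth = (url, opts = {}) =>
  fetchish(url, { credentials: "include", ...opts });
console.log(withAuth("/api", { method: "POST" }));
// { url: "/api", credentials: "include", method: "POST" }
```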

  5. Supporting functor & friends is annoying

    "Functor", "Applicative", "Monad", and others are ridiculous names, but represent surprisingly robust and useful abstractions that FPers have been using for years, capturing very common code patterns into reusable methods. The core operation between them is some variant of "mapping" a function over the values contained inside the object; the problem is that in JS, this is always done with an inverted fn/val relationship vs calling: rather than fn(val), you always have to write val.map(fn) or some variant thereof.

    JS does specially recognize one functor, the Promise functor, with special syntax allowing you to treat it more "normally"; you can call fn(await val) rather than having to write val.then(fn). Languages like Python also have some specialized syntax for the Array functor in the form of list comprehensions, letting you write a normal function call. But in heavily-FP languages, there's generally a generic construct for dealing with functors in this way, such as the "do-notation" of Haskell, which both makes it easier to work with such constructs, and makes it easier to recognize and reason about them, rather than having to untangle the specialized and ad-hoc interactions JS has to deal with today.

The Possible Solutions

There are a bunch! I'll list them in no particular order:

  1. "F#" pipeline operator, spelled |>. Takes a value on the LHS and a function on the RHS, calls the function on the value. So "foo" |> capitalize yields "FOO". You can chain this to continue piping the result to more functions, like val |> fn1 |> fn2.
  2. "Smart mix" pipeline operator, also spelled |>. Takes a value on the LHS, and an expression on the RHS: if the expression is of a particularly simple "bare form", like val |> foo.bar, it treats it like a function call, desugaring to foo.bar(val); otherwise the RHS is just a normal expression, but must have a # somewhere indicating where the value is to be "piped in", like val |> foo.bar(#+2), which desugars to foo.bar(val+2).

Smart-mix also has the closely-related pipeline-function prefix operator +>, where +> foo.bar(#+2) is a shorthand for x=> x |> foo.bar(#+2), with some niceties handling some common situations.

  3. Call operator, spelled ::. Takes an object on the LHS and a function-invocation on the RHS, calls the function as a method of the object. That is, given foo::bar(), this ends up calling bar.call(foo). The point of this is that it looks like just calling foo.bar(), but it doesn't require that the bar method actually live on the foo object.

    Can also be used as a prefix operator, called the "bind" operator. Takes a method-extraction on the RHS, and returns that method with its this appropriately bound. That is, given ::foo.bar, this ends up calling foo.bar.bind(foo).

  4. Partial-function syntax, spelled func(1, ?, 3). Implicitly defines a function that takes arguments equal to the number of ? glyphs, and subs them into the expression in order when called.

  5. Others?

Which Solutions Solve Which Problems?

  • The F# pipeline operator solves problem 3 partially. (You can unnest plain, unary function calls easily. Anything else requires arrow functions, or using functional tools that can manipulate functions into other functions.)

Paired with partial-functions it solves more cases easily, but not all. You can write val |> foo(?, 2) to pipe into n-ary functions, but still can't handle await, operator expressions, etc. Can technically do val |> foo.call(?, ...) as the equivalent to smart mix's val |> #.foo(...) or call operator's val::foo(...), but kinda awkward.

  • The "smart mix" pipeline operator solves problem 3 more completely. (With topic-form syntax you can trivially unnest anything. Bare-form syntax lets you do some common "tower of unary functions" stuff in a few fewer characters, same as "F#" style.)
  • The "smart mix" pipeline-function operator solves problems 2 and 4 well. (With bare-form syntax, +>foo.bar creates a function that calls foo.bar(...), solving the bind problem in two characters. With topic-form syntax, +>foo(#, 2, ##) fills in the second argument of foo() and creates a function that'll accept the rest. Option-bag merging is still difficult/annoying.)
  • The call operator solves problem 1 well. If you write the ecosystem well, it also solves problem 5 okay. (For example, write a generic map function that takes the object as this and a function as argument, and calls this[Symbol.for("fmap")](fn). Then if the functor object defines an "fmap" operation, you can write obj::map(fn1)::map(fn2), similar to Haskell's obj >>= fn1 >>= fn2 syntax.)
  • The bind operator solves problem 2 well.
  • The partial-function operator solves problem 4 okay, but with some issues. (Unclear what the scope of the function is - in let result = foo(bar(), baz(?)), is that equivalent to let result = foo(bar(), x=>baz(x));, or let result = x=>foo(bar(), baz(x));? Related to that, is foo(?, bar(?)) two nested partial functions, or a single partial function taking two arguments? Can you write a partial function that only uses some of the passed-in arguments, or uses them in a different order than they are passed in?)

So, inverting this list:

  1. The call problem is well-solved by the call operator only.
  2. The bind problem is well-solved by the bind operator, and the bare-syntax pipeline-function operator. (They differ on whether the method is extracted/bound immediately (bind operator), or at time of use (pipeline-function operator).)
  3. The nesting problem is somewhat solved by "F#" pipeline operator, and better solved by "smart mix" pipeline operator.
  4. The partial-function problem is somewhat solved by the partial-function operator, and better solved by the topic-syntax pipeline-function operator.
  5. The functor problem is somewhat solved by the call operator, but not super well.

So, if you think all the problems deserve to be solved, currently the minimal set that does everything pretty well is: call operator, "smart mix" pipeline, and pipeline function.

Hs̄lgn̈, ym tsr̄f gṅln̆k

Last updated:

Back when I was a young nerd in high school, coming home on a long bus ride from a Future Problem Solvers meetup with my best friends (again, NERRRRRRRDS), we came up with the idea of speaking English backwards, for fun.

Eventually, I evolved this into a more structured attempt at an actual conlang (constructed language), Hs̄lgn̈, which ended up being spoken by me and one of my little brothers.

Hs̄lgn̈ is a pig-latin, a language derived directly from English. Pig-latins are common as first conlangs, because they let you avoid the tiresome task of developing a vocabulary and jump straight into the more fun stuff of phoneme shifts, conjugations, writing systems, etc., while always having a "speakable" language ready. The pig-latin-ness is more obvious if I write its name in the Latin orthography: Hsilgne. ^_^

So yeah, it's still just English backwards, with some letter changes, a different orthography (it's written "natively" with an abugida, where vowels are indicated as diacritics on the consonants, rather than being letters on their own like in an alphabet), and specific pronunciation and stress rules vaguely similar, but not identical, to English.

Converting Orthography

The basic rules are simple.

  1. Take an English word (we'll use "English" for this example), and reverse it: "Hsilgne".

  2. If there are any C or Q letters, replace them with Ks. If there is a double-R, replace it with a rolled-R (currently represented with "ð" in my automatic converter, but that's not a good letter to use). (No change for "Hsilgne".)

  3. Merge vowels into their preceding letters:

    • a vowel following a consonant merges into that consonant directly, with aeiou becoming the diacritics ȯöōŏo̊
    • otherwise, the diacritic gets put on the "null consonant", "o", like in the previous bullet point
    • if a vowel is followed by an R, also merge that in, with a tail on the consonant, like ç. (My auto-converter currently uses an under-tilda rather than a tail, like "ẇ̰", because it renders more reliably, but I like the look of a tail better.)

    After this you have Hs̄lgn̈ - the "si" became "s̄", and the "ne" became "n̈".
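My converter does this mechanically; here's a simplified sketch of steps 1-3 (it ignores the R-blend rule and capitalization, and uses the combining marks U+0307/0308/0304/0306/030A for a/e/i/o/u):

```javascript
// Steps 1-3, simplified: reverse the word, replace c/q with k, then
// merge each vowel into the preceding consonant as a combining mark
// (or onto the "null consonant" o when there's nothing to merge into).
const DIACRITICS = { a: "\u0307", e: "\u0308", i: "\u0304", o: "\u0306", u: "\u030A" };

function toHsilgne(word) {
  const reversed = [...word.toLowerCase()].reverse().join("")
    .replace(/[cq]/g, "k");
  let out = "";
  let lastWasConsonant = false;
  for (const ch of reversed) {
    const mark = DIACRITICS[ch];
    if (mark === undefined) {        // consonant: emit as-is
      out += ch;
      lastWasConsonant = true;
    } else if (lastWasConsonant) {   // vowel: merge into preceding consonant
      out += mark;
      lastWasConsonant = false;
    } else {                         // no consonant available: use the null "o"
      out += "o" + mark;
    }
  }
  return out;
}

console.log(toHsilgne("English")); // "hs\u0304lgn\u0308", i.e. "hs̄lgn̈"
```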

Pronunciation

  • Vowels are pronounced like the English "long" vowels: ḃ=bay, b̈=bee, b̄=buy, b̆=bow (like "bow and arrow"), b̊=boo.
  • Vowel-R blends are pronounced like Spanish: ḃ̰=bar, b̰̈=bear, b̰̄=beer, b̰̆=bore, b̰̊=boo-er (or t̰̊=tour)
  • An unvoweled consonant following a voweled one (like the "t" in ḃt) is pronounced as the final consonant of the syllable the voweled consonant forms (so ḃt is pronounced like "bait"), unless it's a "soft" consonant (r, y, h, l, or w), or it's followed by the same consonant again. So, for example, in "ṁy" the y is not part of the ṁ syllable, it forms a separate syllable. Similarly, r̈tṫm is three syllables (r̈)(t)(ṫm) (ree-teh-tame) - the first "t" doesn't merge into the r̈ syllable because it's followed by another "t".
  • When an unvoweled consonant isn't pronounced as the final consonant of the preceding letter, it uses the "default vowel", pronounced like "eh". (This often shortens to ə, the schwa.) So ṁy is pronounced like "may-yeh".

This means that Hs̄lgn̈ has five syllables: (H)(s̄)(l)(g)(n̈). Of the two voweled consonants, one is followed by a soft consonant, and the other ends the word, so all the unvoweled consonants get the default vowel instead: heh-sigh-leh-geh-nee

Stress

The syllable to stress is always one of the last three: if either of the last two syllables are "hard" voweled consonants, choose the last such; otherwise if either of the last two syllables are "soft" voweled consonants, choose the first such; otherwise the third-from-last syllable (or as close as you can get if there's less than three syllables).

So in Hs̄lgn̈ the stress is placed on the n̈ (heh-sigh-leh-geh-NEE), since it's the last syllable and is a "hard" voweled consonant. On the other hand, in a word like mhtyhr, which consists entirely of unvoweled consonants, the stress is on the y (meh-heh-teh-YEH-heh-reh), as it's the third-from-last syllable. In a word like w̆ð̇, the stress is on the w̆, as both syllables are "soft" voweled consonants, so you choose the first one (WOH-rray).
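The stress rule can also be sketched as code. This is a toy illustration under my own representation: each syllable is pre-tagged as "hard" (non-soft voweled consonant), "soft" (soft voweled consonant), or "default" (unvoweled consonant taking the default vowel).

```python
# Sketch of the stress rule, operating on a list of syllable tags:
# "hard", "soft", or "default". Returns the index of the stressed syllable.
def stress_index(tags):
    last_two = range(max(0, len(tags) - 2), len(tags))
    hard = [i for i in last_two if tags[i] == "hard"]
    if hard:
        return hard[-1]           # last hard syllable among the final two
    soft = [i for i in last_two if tags[i] == "soft"]
    if soft:
        return soft[0]            # first soft syllable among the final two
    return max(0, len(tags) - 3)  # third-from-last, or as close as possible
```

Running the three examples above through this gives the n̈ (index 4) for Hs̄lgn̈, the y (index 3) for mhtyhr, and the w̆ (index 0) for w̆ð̇.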

Other Things

There's more to the language, like conjugations and such, but they're more complex and I'm always tweaking them anyway. The most important bit is that verbs are written in the infinitive form, and any additional information that English would communicate via tense is instead stated explicitly. So, for example, "this is good" would be written as "s̄ht öb d̆ŏg", using "be" instead of "is".

Automatic Converter

I have a pretty basic online converter available at https://www.xanthir.com/etc/hsilgne/. It doesn't do the verb conversion, and there might be some minor bugs in orthography, but it works surprisingly well. I used it to write all the words in this post, at least. ^_^

Some D&D Mistakes by the Sages (and how to fix them)

Last updated:

D&D 5e is an amazing game, and I sincerely love it and everything that its designers have done. (I've been following Mearls avidly since his days writing the Iron Heroes system, which got me into homebrew.) With anything so complex, tho, there will occasionally be mistakes in printed books. I can forgive this; I've sent plenty of emails that I proofread carefully, only to spot an error the moment it was too late to fix.

But after printing, people often ask Mearls and Crawford (the Sages) about those ambiguities and mistakes, and they rule on them. Mostly these are quite good! But every once in a while they make what I feel is a big mistake, often by sticking with a textual literalism that results in something being overcomplicated for no good reason. Here's a few of those I've found, along with my preferred solutions:

Casting Spells with both Normal and Bonus Actions

In the core rules, there's a paragraph stating:

A spell cast with a bonus action is especially swift. You must use a bonus action on your turn to cast the spell, provided that you haven’t already taken a bonus action this turn. You can’t cast another spell during the same turn, except for a cantrip with a casting time of 1 action.

That is to say, if you cast a level 1-9 spell with your normal action, you can't cast one with your bonus action, and vice versa; you can cast a cantrip with the other action (or with both).

This is reasonably straightforward, and has some good justifications which I'll get into in a bit. But it gets complicated and bad when you mix in things that give the player an additional action, like the Fighter's Action Surge. What sorts of spells, precisely, can you cast with two normal actions and one bonus action?

Per the Sage Advice compendium, the answer is... it's complicated:

If you cast a spell, such as healing word, with a bonus action, you can cast another spell with your action, but that other spell must be a cantrip. Keep in mind that this particular limit is specific to spells that use a bonus action. For instance, if you cast a second spell using Action Surge, you aren’t limited to casting a cantrip with it.

This means you can use your normal action to cast a Fireball, and then Action Surge and cast another Fireball. But if you use a metamagic point to cast a Quickened Fireball with your bonus action, you cannot cast a normal Fireball, with either your normal action or the extra one from Action Surge.

(This even applies to spells cast as reactions between your turns - you can cast Counterspell or Hellish Rebuke as a reaction if you did the "fireball, action surge, fireball" thing, but if you cast a single quickened fireball and nothing else, you can't!?!)

This is obviously nonsensical! It means that taking more time to cast a spell gives you more time to cast an additional spell; that slowing down is sometimes the better answer. That's ridiculous, and means you have to metagame your actions sometimes to get a decent result. Further, this is just confusing; it doesn't correspond with our intuition for why this rule exists, so we'll either get it wrong in play or have to pay special attention to remember to apply it.

So let's look at the justification for the original rule, which is well-intentioned. There are two basic reasons for it:

  1. This heads off any unintentional combos that would be allowed by casting two spells in one turn, particularly as expansions add more spells and more combo potential. Cantrips aren't problematic to double up on; they're really simple and most of them closely resemble each other (just simple damage).
  2. More importantly, this is an anti-nova measure - it prevents players from burning thru more than one spell slot per turn, keeping their damage output more consistent with other players and slowing down the rate that they run out of steam. This was a major problem with casters in 3e, and the 5e devs addressed the issue with many complementary approaches such as this.

Maintaining these intents while removing the ambiguity, and keeping the rule as a whole as simple as possible, is really easy:

A character can only cast one non-cantrip spell during their turn, no matter what actions they have available to them. They can combine that spell with any number of cantrips that they have the actions for. This does not restrict spells cast outside your turn, such as a Hellish Rebuke cast as a reaction.

Boom, done. This is an even simpler way to state the original rule, making it clearer that there aren't any ordering restrictions, and implying the intent behind the rule.

(Possible complication: this shuts down the possibility of a special ability that explicitly allows casting two spells in one turn. Such abilities, if they exist, can call themselves out as a special exception; or the rule can explicitly say "unless otherwise specified".)

This does shut down the "fireball, action surge, fireball" turn that was previously allowed by the rules/sage. If that is desired, then there's a somewhat more complex way of phrasing things that still allows it:

Within a single turn, a character can cast a non-cantrip spell with their normal action, or cast a non-cantrip spell with their bonus action, but not both. If they obtain additional actions by any means, such as the Fighter's Action Surge, there are no restrictions on using those actions for casting.

This allows "[some bonus action], fireball, action surge, fireball" and "quickened fireball, [some standard action], action surge, fireball", which matches players' intentions better than the current rules/sage interpretation.
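This second variant is unambiguous enough to express as a toy legality check. The representation here (action names, level numbers) is purely illustrative, not anything from the books:

```python
# Toy check for the second proposed rule: a non-cantrip spell with the
# normal action and one with the bonus action are mutually exclusive;
# extra actions (e.g. Action Surge) and reactions are unrestricted.
def legal_turn(casts):
    """casts: list of (action, spell_level) pairs, where action is one of
    "action", "bonus", "extra", or "reaction"; spell_level 0 is a cantrip."""
    normal = any(a == "action" and lvl > 0 for a, lvl in casts)
    bonus = any(a == "bonus" and lvl > 0 for a, lvl in casts)
    return not (normal and bonus)
```

Under this check, "fireball, action surge, fireball" is legal, "quickened fireball plus a normal-action fireball" is not, and "quickened fireball, cantrip, action surge, fireball" is legal.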

Spell Components and Hands and Such

Spells can require Verbal (speaking magic words), Somatic (wiggling your hands), and/or Material (using a magic wand) components. Per Sage Advice, a single free hand can handle both Somatic and Material components for a given spell (in other words, you can wiggle your wand around to satisfy both). This is fine so far.

Complications come with divine casters, which get to use a holy symbol for the Material component. This just has to be "presented" in some way - it can be on a necklace, or painted onto your shield, etc. Per explicit Sage Advice, this means that a Paladin or Cleric can hold a weapon in one hand, a shield with a holy symbol painted on it in the other, and still cast a spell with both Material and Somatic components.

The trouble is, this isn't possible for any other type of Material component! You can't combine a wand, instrument, etc with a functional shield or weapon, at least not in general. This gives divine casters an advantage over other casters, for no particular reason.

(Further, this "you can do somatic components with a shield in hand" is explicitly one of the benefits of the War Caster feat, which states "You can perform the somatic components of spells even when you have weapons or a shield in one or both hands.". So this gives divine casters a piece of a feat that other casters don't get. Confusing!)

I think the correct way to solve this is to make the actual rules-significance of components slightly more abstract, something like:

When a spell has a Material component, it means you require a magical object on your person to act as a "focus" when casting the spell. This focus must be visibly presented when casting, tho it does not have to be held in a hand, and it is obvious to anyone watching you that you are casting a spell of some kind using that particular object.

When a spell has a Somatic component, it means that casting the spell requires certain magical gestures. You can only perform these gestures when not restrained, and it is obvious to anyone watching you that you are casting a spell of some kind.

When a spell has a Verbal component, it means that casting the spell requires uttering certain magical incantations. You can only make these utterances when you are not silenced or otherwise prevented from speaking, the incantations must be spoken at a normal speaking level, and it is obvious to anyone capable of hearing you that you are casting a spell of some kind.

This normalizes the component rules across casting classes, and makes it clearer both what each component means and when a caster might be unable to provide it. For example, it's clear that you can prevent a caster from casting spells with Somatic components by tying them up.