With AIs it’s all about intelligence and smarts. Can they think better than us? Will they make better decisions? And so on. Probably the answer to all of these questions is yes, eventually if not sooner.

I’m a (poor and slow) runner, so I like to read books about running to see how the really good ones perform. I just read the memoir of the famous ultra-marathoner Scott Jurek (“Eat and Run: My Unlikely Journey to Ultramarathon Greatness”). This class of runner is amazing, often covering 100 miles in 24 hours through wild, mountainous terrain in extreme temperatures, both ultra-high and ultra-low. How do they do it?

Well, the easy part of the answer concerns their physical stamina and training. If you don’t have it, you’re not even at first base.

But the second part is mental. You have to possess almost superhuman determination. You must go on, no matter what, and that “what” is frequently impassable and impossible for the vast majority of us. That mental part is what sorts the sheep from the goats. It determines who drops out and who keeps going. Sometimes giving up is the good decision, but then you don’t win.

In other words, the other element is willpower. Jurek has it in spades and so do other ultra-marathoners. Without it you’re really nothing but a normal runner. An also-ran, so to speak.

Here’s the question Jurek poses in his book, one he wants to answer for himself: can you push beyond your physical and mental limits to the point where you are just about to break? Jurek has been in that position several times. That’s willpower.

So here’s my question: can an AI have willpower? Can it go beyond its physical and mental limits to the point of total breakdown and still “live” to tell the tale?

The conventional definition of willpower is toughness and determination. It’s all about self-control. For an AI it’s about doing things that are beyond its design limits. Willpower is doing what it wasn’t designed to do. Isn’t that the same thing for us humans?

Can an AI run an ultra if it wasn’t programmed and built to do so?

Can an AI go above and beyond its limits even if it wasn’t designed to do whatever it is striving to do?

Can an AI strive?

Striving implies capabilities that go well beyond standard Western logic. Yet building and developing AIs is currently based on precisely that rational, standard approach.

I wonder if and when AI architects will get to that issue. I don’t see any evidence of it so far. If we don’t get there, I for one am going to conclude that AI is technologically amazing but behaviorally boring and aimless.

What is the architecture of willpower and can we bestow it on an AI?

That seems to me to be the most interesting question we face in robotics and AI at this stage.