Stephen Wolfram Writings

On the Nature of Time

The Computational View of Time

Time is a central feature of human experience. But what actually is it? In traditional scientific accounts it’s often represented as some kind of coordinate much like space (though a coordinate that for some reason is always systematically increasing for us). But while this may be a useful mathematical description, it’s not telling us anything about what time in a sense “intrinsically is”.

We get closer as soon as we start thinking in computational terms. Because then it’s natural for us to think of successive states of the world as being computed one from the last by the progressive application of some computational rule. And this suggests that we can identify the progress of time with the “progressive doing of computation by the universe”.

But does this just mean that we are replacing a “time coordinate” with a “computational step count”? No. Because of the phenomenon of computational irreducibility. With the traditional mathematical idea of a time coordinate one typically imagines that this coordinate can be “set to any value”, and that then one can immediately calculate the state of the system at that time. But computational irreducibility implies that it’s not that easy. Because it says that there’s often essentially no better way to find what a system will do than by explicitly tracing through each step in its evolution. Continue reading
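As a rough illustration of the point (my sketch, in Python rather than Wolfram Language, and not code from the post): to find the state of a computationally irreducible system such as the rule 30 cellular automaton at step t, one in general has no shortcut — the rule must simply be applied t times:

```python
# Illustration of computational irreducibility: to get the state of the
# rule 30 cellular automaton at step t, we apply the rule t times in a row.

def step(cells, rule=30):
    """Apply one step of an elementary CA rule to a row of 0/1 cells."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def state_at(t, width=101, rule=30):
    """State after t steps from a single seed cell -- found only by iterating."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(t):
        cells = step(cells, rule)
    return cells

row5 = state_at(5)   # there is no known formula that jumps straight to step 5000
```

The "step count" here plays the role of time — but, unlike a coordinate, it cannot be "set to any value" without doing all the intervening computational work.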

Nestedly Recursive Functions

Yet Another Ruliological Surprise

Integers. Addition. Subtraction. Maybe multiplication. Surely that’s not enough to be able to generate any serious complexity. In the early 1980s I had made the very surprising discovery that very simple programs based on cellular automata could generate great complexity. But how widespread was this phenomenon?

At the beginning of the 1990s I had set about exploring this. Over and over I would consider some type of system and be sure it was too simple to “do anything interesting”. And over and over again I would be wrong. And so it was that on the night of August 13, 1993, I thought I should check what could happen with integer functions defined using just addition and subtraction. Continue reading
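To show the flavor of such definitions (this is my own illustration in Python — a classic "nestedly recursive" function, not the specific functions explored in the post): Hofstadter's Q sequence is built from nothing but integers, addition and subtraction, with the function's own earlier values appearing inside its arguments:

```python
# Hofstadter's Q sequence: Q(1) = Q(2) = 1,
#   Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2))
# A "nestedly recursive" function: Q appears inside its own arguments.

from functools import lru_cache

@lru_cache(maxsize=None)
def Q(n):
    if n <= 2:
        return 1
    return Q(n - Q(n - 1)) + Q(n - Q(n - 2))

print([Q(n) for n in range(1, 11)])  # → [1, 1, 2, 3, 3, 4, 5, 5, 6, 6]
```

Despite the minimal ingredients, the sequence's behavior is famously erratic — the kind of complexity from simplicity the post is about.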

Five Most Productive Years: What Happened and What’s Next

So… What Happened?

Today is my birthday—for the 65th time. Five years ago, on my 60th birthday, I did a livestream where I talked about some of my plans. So… what happened? Well, what happened was great. And in fact I’ve just had the most productive five years of my life. Nine books. 3939 pages of writings (1,283,267 words). 499 hours of podcasts and 1369 hours of livestreams. 14 software product releases (with our great team). Oh, and a bunch of big—and beautiful—ideas and results.

It’s been wonderful. And unexpected. I’ve spent my life alternating between technology and basic science, progressively building a taller and taller tower of practical capabilities and intellectual concepts (and sharing what I’ve done with the world). Five years ago everything was going well, and making steady progress. But then there were the questions I never got to. Over the years I’d come up with a certain number of big questions. And some of them, within a few years, I’d answered. But others I never managed to get around to.

And five years ago, as I explained in my birthday livestream, I began to think “it’s now or never”. I had no idea how hard the questions were. Yes, I’d spent a lifetime building up tools and knowledge. But would they be enough? Or were the questions just not for our time, but only perhaps for some future century? Continue reading

What’s Really Going On in Machine Learning? Some Minimal Models

The Mystery of Machine Learning

It’s surprising how little is known about the foundations of machine learning. Yes, from an engineering point of view, an immense amount has been figured out about how to build neural nets that do all kinds of impressive and sometimes almost magical things. But at a fundamental level we still don’t really know why neural nets “work”—and we don’t have any kind of “scientific big picture” of what’s going on inside them.

The basic structure of neural networks can be pretty simple. But by the time they’re trained up with all their weights, etc. it’s been hard to tell what’s going on—or even to get any good visualization of it. And indeed it’s far from clear even what aspects of the whole setup are actually essential, and what are just “details” that have perhaps been “grandfathered” all the way from when computational neural nets were first invented in the 1940s.

Well, what I’m going to try to do here is to get “underneath” this—and to “strip things down” as much as possible. I’m going to explore some very minimal models—that, among other things, are more directly amenable to visualization. At the outset, I wasn’t at all sure that these minimal models would be able to reproduce any of the kinds of things we see in machine learning. But, rather surprisingly, it seems they can. Continue reading

Yet More New Ideas and New Functions: Launching Version 14.1 of Wolfram Language & Mathematica

For the 36th Time… the Latest from Our R&D Pipeline

Today we celebrate the arrival of the 36th (x.x) version of the Wolfram Language and Mathematica: Version 14.1. We’ve been doing this since 1986: continually inventing new ideas and implementing them in our larger and larger tower of technology. And it’s always very satisfying to be able to deliver our latest achievements to the world.

We released Version 14.0 just half a year ago. And—following our modern version scheduling—we’re now releasing Version 14.1. For most technology companies a .1 release would contain only minor tweaks. But for us it’s a snapshot of what our whole R&D pipeline has delivered—and it’s full of significant new features and new enhancements.

If you’ve been following our livestreams, you may have already seen many of these features and enhancements being discussed as part of our open software design process. And we’re grateful as always to members of the Wolfram Language community who’ve made suggestions—and requests. And in fact Version 14.1 contains a particularly large number of long-requested features, some of which involved development that has taken many years and required many intermediate achievements. Continue reading

Ruliology of the “Forgotten” Code 10

My All-Time Favorite Science Discovery

June 1, 1984—forty years ago today—is when it would be fair to say I made my all-time favorite science discovery. Like with basically all significant science discoveries (despite the way histories often present them) it didn’t happen without several long years of buildup. But June 1, 1984, was when I finally had my “aha” moment—even though in retrospect the discovery had actually been hiding in plain sight for more than two years.

My diary from 1984 has a cryptic note that shows what happened on June 1, 1984:

There’s a part that says “BA 9 pm → LDN”, recording the fact that at 9pm that day I took a (British Airways) flight to London (from New York; I lived in Princeton at that time). “Sent vega monitor → SUN” indicates that I had sent the broken display of a computer I called “vega” to Sun Microsystems. But what’s important for our purposes here is the little “side” note:
Take C10 pict.
R30
R110

What did that mean? C10, R30 and R110 were my shorthand designations for particular, very simple programs of types I’d been studying: “code 10”, “rule 30” and “rule 110”. And my note reminded me that I wanted to take pictures of those programs with me that evening, making them on the laser printer I’d just got (laser printers were rare and expensive devices at the time). Continue reading

Why Does Biological Evolution Work? A Minimal Model for Biological Evolution and Other Adaptive Processes

The Model

Why does biological evolution work? And, for that matter, why does machine learning work? Both are examples of adaptive processes that surprise us with what they manage to achieve. So what’s the essence of what’s going on? I’m going to concentrate here on biological evolution, though much of what I’ll discuss is also relevant to machine learning—but I’ll plan to explore that in more detail elsewhere.

OK, so what is an appropriate minimal model for biology? My core idea here is to think of biological organisms as computational systems that develop by following simple underlying rules. These underlying rules in effect correspond to the genotype of the organism; the result of running them is in effect its phenotype. Cellular automata provide a convenient example of this kind of setup. Here’s an example involving cells with 3 possible colors; the rules are shown on the left, and the behavior they generate is shown on the right:

Note: Click any diagram to get Wolfram Language code to reproduce it.

We’re starting from a single seed cell, and we see that from this “seed” a structure is grown—which in this case dies out after 51 steps. And in a sense it’s already remarkable that we can generate a structure that neither goes on forever nor dies out quickly—but instead manages to live (in this case) for exactly 51 steps. Continue reading
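The general setup can be sketched as follows (my illustration in Python, not the post's Wolfram Language code; the rule "code" passed in is an arbitrary parameter, not the particular 51-step rule shown in the post): a 3-color totalistic cellular automaton grown from a single seed, with a helper that measures how long the structure lives before dying out:

```python
# A 3-color totalistic cellular automaton: each new cell's color depends only
# on the sum of its neighborhood, via the base-3 digits of a "code" number.
# The code plays the role of the genotype; the pattern grown is the phenotype.

def totalistic_step(cells, code, k=3):
    """One step of a k-color, nearest-neighbor totalistic CA."""
    digits = []
    c = code
    for _ in range(3 * (k - 1) + 1):      # possible neighborhood sums: 0 .. 3*(k-1)
        digits.append(c % k)
        c //= k
    n = len(cells)
    return [digits[cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]]
            for i in range(n)]

def lifetime(code, width=201, max_steps=500):
    """Steps until the pattern from a single seed dies out, or None if it survives."""
    cells = [0] * width
    cells[width // 2] = 1
    for t in range(1, max_steps + 1):
        cells = totalistic_step(cells, code)
        if not any(cells):
            return t
    return None
```

Searching over codes for ones with long-but-finite lifetimes is then a matter of calling `lifetime` on each candidate — which is the kind of fitness evaluation an adaptive process needs.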

When Exactly Will the Eclipse Happen? A Multimillennium Tale of Computation

See also:
“Computing the Eclipse: Astronomy in the Wolfram Language” »

Updated and expanded from a post for the eclipse of August 21, 2017.

Preparing for April 8, 2024

On April 8, 2024, there’s going to be a total eclipse of the Sun visible on a line across the US. But when exactly will the eclipse occur at a given location? Being able to predict astronomical events has historically been one of the great triumphs of exact science. But how well can it actually be done now?

The answer is well enough that even though the edge of totality moves at just over 1000 miles per hour, it’s possible to predict when it will arrive at a given location to within perhaps a second. And as a demonstration of this, for the total eclipse back in 2017 we created a website to let anyone enter their geo location (or address) and then immediately compute when the eclipse would reach them—as well as generate many pages of other information. Continue reading

Computing the Eclipse: Astronomy in the Wolfram Language

See also:
“When Exactly Will the Eclipse Happen? A Multimillennium Tale of Computation” »

Basic Eclipse Computation

It’s taken millennia to get to the point where it’s possible to accurately compute eclipses. But now—as a tiny part of making “everything in the world” computable—computation about eclipses is just a built-in feature of the Wolfram Language.

The core function is SolarEclipse. By default, SolarEclipse tells us the time of the next solar eclipse from now:

Continue reading

Can AI Solve Science?

Note: Click any diagram to get Wolfram Language code to reproduce it. Wolfram Language code for training the neural nets used here is also available (requires GPU).

Won’t AI Eventually Be Able to Do Everything?

Particularly given its recent surprise successes, there’s a somewhat widespread belief that eventually AI will be able to “do everything”, or at least everything we currently do. So what about science? Over the centuries we humans have made incremental progress, gradually building up what’s now essentially the single largest intellectual edifice of our civilization. But despite all our efforts, there are still all sorts of scientific questions that remain. So can AI now come in and just solve all of them?

To this ultimate question we’re going to see that the answer is inevitably and firmly no. But that certainly doesn’t mean AI can’t importantly help the progress of science. At a very practical level, for example, LLMs provide a new kind of linguistic interface to the computational capabilities that we’ve spent so long building in the Wolfram Language. And through their knowledge of “conventional scientific wisdom” LLMs can often provide what amounts to very high-level “autocomplete” for filling in “conventional answers” or “conventional next steps” in scientific work. Continue reading