Hobby

Sure, I'm officially the author. But in reality the computer makes all the decisions. What I actually build is the tests. The computer is excellent at running the tests on billions of possibilities and saying which ranks highest. But since I define the tests, I can say what "highest" means.

Beyond that, though, you get into which tests are fastest, because brute-force testing all the possibilities wouldn't finish in the lifetime of the universe.

Some things are provably true. I can just structure the inputs to the tests to avoid those possibilities that I can prove would never work. It's surprising how much that cuts down the search space.
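
A hypothetical example (invented for illustration, not necessarily how my searches work): suppose the candidates are 32-bit multiplicative constants. An even constant provably zeroes the low output bit for every input, so it can never pass a test that needs all output bits to vary. Generating only odd constants halves the space before a single test runs. A minimal C sketch, with passes_tests() standing in for the real battery:

    /* Hypothetical pruning example: search 32-bit multiplicative
       constants, but skip even ones, which provably zero the low
       output bit for every input. Stepping by 2 halves the space
       before any test runs. passes_tests() is a placeholder. */
    #include <stdint.h>
    #include <stdio.h>

    static int passes_tests(uint32_t k) {
        return ((k * 0x9e3779b9u) >> 28) == 7;   /* arbitrary stand-in */
    }

    int main(void) {
        unsigned long long tried = 0, kept = 0;
        for (uint32_t k = 1; k != 0xffffffffu; k += 2) {   /* odd only */
            tried++;
            if (passes_tests(k)) kept++;
        }
        printf("tried %llu odd constants, %llu passed\n", tried, kept);
        return 0;
    }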

Then there's the order of decisions. You don't have to decide everything all at once. What I do is have some bits already chosen, some currently being decided, and some not considered yet. I can approximate the ones not considered yet by repeatedly filling them in with random choices and taking the average; the tests tell me which current choices work best with random noise standing in for the later stuff.
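
That trick is easy to show in miniature. In the C sketch below, everything is a placeholder (the bit layout, score(), the trial count): each trial fills the undecided bits of a 64-bit candidate with fresh random noise and the scores get averaged, so two competing choices for the current bits can be compared.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* stand-in for the real test battery: higher is better */
    static double score(uint64_t c) {
        int bits = 0;
        while (c) { bits += (int)(c & 1); c >>= 1; }
        return (double)bits;   /* placeholder: pretend more set bits is better */
    }

    /* decided: bits chosen so far; mask: which bit positions those are.
       Each trial fills the undecided bits with fresh random noise. */
    static double estimate(uint64_t decided, uint64_t mask, int trials) {
        double total = 0.0;
        for (int t = 0; t < trials; t++) {
            uint64_t noise = ((uint64_t)rand() << 33) ^
                             ((uint64_t)rand() << 16) ^
                             (uint64_t)rand();   /* crude, fine for a sketch */
            uint64_t full = (decided & mask) | (noise & ~mask);
            total += score(full);
        }
        return total / trials;
    }

    int main(void) {
        uint64_t mask = 0xffffULL;                 /* low 16 bits decided so far */
        double a = estimate(0x1234, mask, 10000);  /* two competing choices */
        double b = estimate(0x5678, mask, 10000);
        printf("prefer 0x%x (%.2f vs %.2f)\n", a >= b ? 0x1234 : 0x5678, a, b);
        return 0;
    }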

There's also the speed of tests. Some basic stuff (is everything reachable?) can be tested in a millisecond. In parallel! Each result has many measurable properties, so I measure all of them. Turns out if n properties are each reached 1/2 of the time, about log n trials usually reach all of them: each extra trial halves the odds that any given property is still missing. I can afford not to be perfect, so if I throw away 1/10th of the possibilities by accident, no trouble. A small number of iterations (typically 15 to 100, depending on n) screens out the turkeys and leaves most of the promising candidates. Other tests take seconds, or minutes, or hours. Do the fast ones first, collect a few million candidates, then let the slow tests whittle it down. With luck a winner will make it through the gauntlet.
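
A sketch of the fast screen, under made-up numbers: 64 properties, one bit each, each reached with probability 1/2 per trial. With 15 trials, a given property is still missing with probability 2^-15, so across 64 properties only about 0.2% of good candidates get discarded by accident, well inside that 1/10th budget.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NPROPS 64

    /* stand-in for one millisecond test: returns a bitmask of the
       properties this trial reached (the candidate is ignored here) */
    static uint64_t trial(uint64_t candidate) {
        (void)candidate;
        uint64_t m = 0;
        for (int i = 0; i < NPROPS; i++)
            m |= (uint64_t)(rand() & 1) << i;
        return m;
    }

    /* keep a candidate only if every property was reached in some trial */
    static int screen(uint64_t candidate, int trials) {
        uint64_t reached = 0;
        for (int t = 0; t < trials; t++)
            reached |= trial(candidate);
        return reached == ~(uint64_t)0;   /* all NPROPS bits seen */
    }

    int main(void) {
        int kept = 0, total = 100000;
        for (int c = 0; c < total; c++)
            kept += screen((uint64_t)c, 15);   /* ~log2(64) plus margin */
        printf("%d of %d survived the fast screen\n", kept, total);
        return 0;
    }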

There's also the order of tests. Some search spaces are huge, so I just do random sampling. Huge search spaces usually have a number of independent decisions ... a good technique is to find a few thousand candidates, make a histogram of their settings for each decision, then bound each decision to the range containing 90% of the candidates. That can reduce the space to something enumerable. Once all possibilities are enumerable, test them all. In order. And rearrange your tests so the most-recently-failed one comes first. It turns out candidates near each other in the enumeration tend to fail the same tests, so if you've got 1000 tests, ordering them by most-recently-failed tosses out turkeys 1000x faster.
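
The most-recently-failed rule is just move-to-front applied to an array of test indices. A C sketch, with NTESTS, test(), and the enumeration all stand-ins:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NTESTS 1000
    static int order[NTESTS];   /* order[0] is tried first */

    /* stand-in for test number `which`: nonzero means pass */
    static int test(int which, uint64_t candidate) {
        return (int)((candidate >> (which % 60)) & 1);   /* arbitrary */
    }

    /* run tests in the current order; when one fails, move it to the
       front, since the next candidate tends to fail the same way */
    static int run_tests(uint64_t candidate) {
        for (int i = 0; i < NTESTS; i++) {
            int t = order[i];
            if (!test(t, candidate)) {
                memmove(&order[1], &order[0], i * sizeof order[0]);
                order[0] = t;
                return 0;    /* rejected */
            }
        }
        return 1;            /* passed every test */
    }

    int main(void) {
        for (int i = 0; i < NTESTS; i++) order[i] = i;
        int winners = 0;
        for (uint64_t c = 0; c < 1000000; c++)   /* enumerate in order */
            winners += run_tests(c);
        printf("%d winners\n", winners);
        return 0;
    }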

Finding a candidate that passes all the tests takes days, weeks. On that timescale, you run into power failures and OS upgrades. So the testing needs to read in a file of candidates, write out a file of passing candidates, and give you some way to restart it from where it left off if the power cuts out unexpectedly.
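
Here is a sketch of that restartable plumbing, with invented file names (candidates.txt, survivors.txt, checkpoint) and slow_test() standing in for the expensive battery. The checkpoint file records how many input lines are finished, so a rerun after a crash skips the completed prefix.

    #include <stdio.h>

    /* stand-in for the test that takes hours */
    static int slow_test(unsigned long long candidate) {
        return candidate % 7 == 3;   /* arbitrary stand-in */
    }

    int main(void) {
        long done = 0, line = 0;
        FILE *ck = fopen("checkpoint", "r");
        if (ck) { if (fscanf(ck, "%ld", &done) != 1) done = 0; fclose(ck); }

        FILE *in  = fopen("candidates.txt", "r");   /* one hex value per line */
        FILE *out = fopen("survivors.txt", "a");
        if (!in || !out) return 1;

        unsigned long long c;
        while (fscanf(in, "%llx", &c) == 1) {
            if (line++ < done) continue;        /* finished before the crash */
            if (slow_test(c)) {
                fprintf(out, "%llx\n", c);
                fflush(out);                    /* survivor reaches disk first */
            }
            ck = fopen("checkpoint", "w");      /* then record progress */
            if (ck) { fprintf(ck, "%ld\n", line); fclose(ck); }
        }
        fclose(in);
        fclose(out);
        return 0;
    }

Updating the checkpoint only after flushing the survivor means a crash can make the program repeat a candidate, but it shouldn't lose one.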

So, yeah. In the end it looks like I wrote the final product. But really, the computer did. What I did was build and tune and babysit the tests in a way that let the computer find it in a reasonable amount of time.

It's given me a particular view on predestination. I mean, a candidate will pass tests, or it won't. It's deterministic. Is a candidate good? Moral? Righteous? It is if it passes the tests. Is it wrong to label it as bad if it has no choice but to fail the tests? No, the tests are my best approximation of the challenges it will face in the real world. Judging candidates by whether they pass tests is legitimate. Even though their ability to pass those tests is predestined.

Maybe this world is testing us, and heaven will let a select few of us exercise our skills in some higher real world? I doubt it. If this world is a screening test, it seems to me an overly complex and very inefficient one.


This was first posted to reddit/r/WritingPrompts, in response to the prompt "describe performing your hobby in a way that doesn't make it clear what your hobby is". (If you don't know my hobby, do a web search for burtleburtle.)

