Given a Scoring Criteria
and a set number of iterations,
this is what The Computer came up with.
There's not going to be a lot of code in this.
I work via a Visual Prototyping Protocol.
Step Three: I forget step three...
Plan Your Work
And Work Your Plan.
Originally, I'd planned on having the players 'interfere' with each other. It doesn't matter what that means exactly. But since the pass coverage was based on closest opponent, well, when I discovered the glitch, I decided to run with it.
Or in other words, if that's not clear: this web page, the first major sprint in this project, is in some ways a proof of concept. Can I get a computer to put together a winning playbook? It was never going to be that realistic. And rather than having the players experience friction (slowing down) when getting close, simply having them peel off works as well as anything else at this level of realism.
Step by Step:
A walk-through this is not.
And The Crowd Goes Wild!!!
Yeah, well, maybe not. Whatever. In all the above, the plays are hard-coded (deterministic and set in advance). So, perhaps I should talk about that.
For plays, I used a four-tuple: ('play_name', float, float, int). All plays are of this form, and a play_set is merely a list of plays. Each player gets their own play_set of the following form:
play_set = list of six plays
play = [string, signed int, signed int, signed int]
play = ['player', +000, +000, +000]
Each offensive player has their own play_set.
Eleven players on a team.
So, a roster must be (and it is) eleven players each with their own play_set of six plays each.
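In other words, the whole data structure can be sketched in a few lines of Python (a hypothetical sketch: the placeholder play name and zeroed numbers are mine; only the shape, eleven players, six plays, four fields per play, comes from the text above):

```python
def make_play(name='player', a=0, b=0, c=0):
    # A play: a string plus three signed ints.
    return [name, a, b, c]

def make_play_set():
    # A play_set: a list of six plays.
    return [make_play() for _ in range(6)]

def make_roster():
    # A roster: eleven players, each with their own play_set.
    return [make_play_set() for _ in range(11)]

roster = make_roster()
assert len(roster) == 11                   # eleven players
assert all(len(ps) == 6 for ps in roster)  # six plays each
```

Trivial, but having the shape nailed down this early is what makes the genetic tinkering later so painless.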
Sample Rosters
These are the rosters I used in the re-factor, i.e. when I re-coded the project. The actual output images are based on variable-play-length rosters, which were a mistake, so they're not included in the re-factor; explaining them here would only be confusing (and trust me, we don't need to go out of our way to add confusion).
Test Play: Run Straight
Test Play: Run Wedge
Random Play (note: string, float, float, int)
Random Play (note: string, float, float, float)
Random Play (final form: string, followed by three signed ints)
For ease of Genetic Manipulation™, all plays have the exact same form: 11 players, six plays each, each play consisting of a text string and three signed three-digit numbers. The first two rosters are for test plays that I made: plays that would perform a certain way, so I could easily debug the interface visually, as it's a simple thing to see if players are all running straight or running in a wedge pattern. I made multiple test plays of this sort, but one need not belabour the point. Known input should lead to known output.
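A "run straight" test roster, for instance, might be built like this (a sketch; the play name and velocity numbers are made up, the point is just that every player shares one known play, so the visual check is trivial):

```python
def run_straight_roster(num_players=11, plays_per_set=6):
    # Every player gets six copies of the same 'go straight' play,
    # so on screen the whole team should march downfield in lockstep.
    play = ['straight', 5, 0, 10]   # illustrative values
    return [[list(play) for _ in range(plays_per_set)]
            for _ in range(num_players)]

roster = run_straight_roster()
assert all(p == ['straight', 5, 0, 10]
           for play_set in roster for p in play_set)
```

Known input, known output: if any player wanders, the bug is in the engine, not the play data.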
And from there, the next three sample rosters (above) show the structured progression of the rosters. I did a lot of re-factoring of the basic data structures... and continue to do so in my new projects. It might be because my grasp of data structures is so weak... or that to me, this is such an important phase of a project. Simple data yields simple code.
The scrimmages below are made from randomly generated offensive team rosters (that are in turn genetically improved). In each offensive roster, each player has their own play_set consisting of six plays. And each defensive player simply follows the closest offensive player.
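That defensive rule, step toward the nearest offensive player, is about five lines of code. A sketch with positions as (x, y) tuples (the step size is my assumption; only the follow-the-closest rule comes from the text):

```python
import math

def closest_opponent(defender, offense):
    # Return the offensive position nearest to this defender.
    return min(offense, key=lambda o: math.dist(defender, o))

def defensive_step(defender, offense, speed=1.0):
    # Move the defender `speed` units toward the closest opponent.
    tx, ty = closest_opponent(defender, offense)
    dx, dy = tx - defender[0], ty - defender[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return defender          # already on top of the target
    return (defender[0] + speed * dx / dist,
            defender[1] + speed * dy / dist)

print(defensive_step((0, 0), [(3, 4), (10, 10)]))  # → (0.6, 0.8)
```

And this, of course, is exactly why receivers "peel off": as soon as a different offensive player becomes the closest, the defender changes targets.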
Please forgive the 'Pass Complete' flash frame.
I put a lot of effort into those, because I thought they'd be cool.
Turns out,
I find them very annoying.
Oh, well.
Feel free to scroll by quickly.
Every Picture Tells a Story.
The numbers (9-22) above correspond to the following stepwise improvements (all plays are random, so some of the examples are suckier than others, that's just how it goes):
9 - Random Play Generator
10 - Simple Rating System (a few plays)
11 - A Sample (relatively) High Scoring Play
12 - Rating Revised (center of board favored)
13 - Loop for Scrimmages (best of 100)
14 - Loop Again (best of 5000), Tweaked Line Up
15 - Final Refactor Prior to Starting GA (Genetic Algorithm)
16 - Refactored Ratings
17-18 - MutaGenetics Up and Running
19 - Added Scoring History to Title Screen (play history)
20 - MutaGenetics as a Class
21 - Select Best Play From (Score, Num Mutations, Num Plays)
22 - Blue and Red Trails for Visual Play History
As time goes by, I'm getting more used to the flashing title screen. So, maybe it was just a raging headache yesterday. Whatever the case, I'm no longer planning on doing as detailed a walk-through as I once had; and so, I can't be bothered to reformat the following words as eloquently as one might like...
<...>
So, I'm sort of... the wind has gone out of my sails... or let's just say those pictures don't look how I expected. So, a learning experience. My original intent for this page was to present those images and sort of explain how they differ, but for the most part, they just give me a headache. So, right now, I'm just going to rant a little. Please forgive. Or, hey, I love reading about the philosophy of code, so who knows what nuggets may lie ahead... or what mind-numbing gibberish.
I like to code, I call myself a coder. And as a sort of reward and to document my progress, I put up a page at the end of a project (or you know, halfway through, whatever). I don't do it as I code, because that ruins the flow and half the words would be wasted (as in it's hard to say what I've learned before I've learned it). Well, this time, part of what I've learned is that I don't like those end screen shots and how they flash white on my eyes. Live and learn.
That said. This is sort of the outline of the project evolution. Take from it what you will. I was sort of intending the pictures to provide a better history of the process, but, well, enough said on that.
Five Man Team Works Out of the Box
Cool Beans!!!
I just increased the num_players variable.
2x points for 2 open receivers.
3x points for 3 open receivers.
Very pleasing to see this work.
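In scoring terms, that multiplier might look like the following (the base value of 100 is a made-up number; only the 2x/3x shape comes from the text above):

```python
def play_score(base_points, num_open):
    # 0 open receivers: no points. Otherwise the play's base score is
    # multiplied by the number of open receivers (2x for 2, 3x for 3).
    return base_points * num_open

print(play_score(100, 2))  # → 200
print(play_score(100, 3))  # → 300
```

The nice part being that nothing in the scoring (or the rosters) hard-codes the team size, hence the five-man team working out of the box.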
I am not going to explain Genetic Programming or Procedural Growth or whatever you want to call it in much detail (actually I just realized, here, in the final edit of this page, that I hadn't described it at all, so clearly it was never my intent to go into much detail, but at some point, a complete absence is a glaring omission). Suffice to say, in Genetic Programming a data structure is evaluated by a point scoring system (number of open receivers, position of receivers on field, and so on) and better-scoring data structures are carried through from round to round in a tournament of sorts, with higher-scoring data structures being used to spawn similar but different data structures for use in the next round of the competition.

Or if that's not clear: take a bunch of data structures; have them interbreed, cross-pollinate, tweak, twiddle, or whatever; evaluate all of the data structures by some point system; keep the best; get rid of the rest; rinse and repeat; and finally, keep the best after so many iterations or once a predetermined point level has been exceeded. Voila! Genetic Programming in 100 words or less (can't be bothered to count)!
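The rinse-and-repeat above, in skeletal Python (the seed, score, and mutate functions here are placeholders, not the project's actual ones; the shape of the loop is the whole point):

```python
import random

def evolve(seed_fn, score_fn, mutate_fn,
           pop_size=20, keep=5, generations=50):
    # Generic tournament: score everyone, keep the best, refill the
    # population with mutated copies of the survivors, and repeat.
    population = [seed_fn() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score_fn, reverse=True)
        survivors = population[:keep]
        population = survivors + [
            mutate_fn(random.choice(survivors))
            for _ in range(pop_size - keep)]
    return max(population, key=score_fn)

# Toy usage: evolve a list of six ints toward all tens.
best = evolve(
    seed_fn=lambda: [random.randint(0, 10) for _ in range(6)],
    score_fn=sum,
    mutate_fn=lambda xs: [min(10, x + random.randint(-1, 2)) for x in xs])
print(sum(best))  # climbs toward 60
```

Swap in a roster generator, the open-receiver scoring, and the play-modification functions, and that's the whole MutaGenetics™ engine in outline.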
From there, I did a total re-factor, which more or less consisted of:
Players Start Facing Each Other
Lines for History in Lieu of Dots
Had two nested loops in a function, both using the index i; spent two hours looking for the bug, finally changed the second i to a j, big fun...
Implemented a maximum change in velocity per turn
Implemented a maximum velocity
Added proximity check back in (two players on top of each other don't move)
Refactored Points
Added .csv output for Analysis
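The two velocity items in that list amount to two clamps applied in order; a sketch, with made-up limits:

```python
def clamp(value, limit):
    # Restrict value to the range [-limit, +limit].
    return max(-limit, min(limit, value))

def update_velocity(v, desired, max_dv=2.0, max_v=10.0):
    # First cap how much the velocity may change this turn,
    # then cap the resulting velocity itself.
    dv = clamp(desired - v, max_dv)
    return clamp(v + dv, max_v)

print(update_velocity(0.0, 100.0))  # → 2.0 (accelerates by at most 2)
print(update_velocity(9.5, 100.0))  # → 10.0 (hits the speed cap)
```

Without the first clamp, players teleport; without the second, they just keep getting faster. Both look ridiculous on screen, which is how I noticed.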
And finally, two last plays.
Pretty much exactly what I set out to do.
Keeping in mind that these are Internet Points;
and so, less than worthless.
But that double open receiver
is like spot on exactly what I wanted to see.
MutaGenetics Statistics™
I put this webpage on hold about six months ago.
So, like, that's when I lost interest in this project.
But the next bit was going to consist of some stats...
As follows is an analysis (an amazingly crappy analysis) of how the above plays were determined (i.e. the play modification function calls that were used).
Seriously, this next section is crap. So if you get confused, bored, or suddenly come to the reasonable conclusion that this section is, indeed, total crap, please scroll down until you get to the two final football gifs. They are from the re-factor, have numbers on the field, and are the final two images on the page, so they are hard to miss.
play_set analysis
Play Length (ave per player): 3.94
Play Length (std per player): 0.90
These are the types of plays a player can make.
distribution of play types in final play_set
Ten runs. Count of calls to rand_roster, a function that initializes an offensive line up. First group is during the initialization period (a roster had to score better than one to be kept and multiple rosters were required). Second group is calls to rand_roster during the Genetic Algorithm phase (the genetic refinement phase of incremental improvement... or in the case of a call to rand_roster, a wholesale replacement of old for new).
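A sketch of what that initialization loop probably looked like (the name rand_roster and the score-better-than-one rule come from the text; the stand-in generator body and everything else here are my guesses):

```python
import random

def rand_roster():
    # Stand-in generator: eleven players, six random plays each.
    return [[['player',
              random.randint(-100, 100),
              random.randint(-100, 100),
              random.randint(-100, 100)]
             for _ in range(6)]
            for _ in range(11)]

def seed_population(score_fn, needed=10, min_score=1):
    # Keep generating rosters, discarding any that don't beat
    # min_score, until we have enough to start the tournament.
    # The call count is the statistic being tabulated here.
    kept, calls = [], 0
    while len(kept) < needed:
        roster = rand_roster()
        calls += 1
        if score_fn(roster) > min_score:
            kept.append(roster)
    return kept, calls
```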
Gads! This is why a person should keep better notes. Do as I say, not as I do. I could rerun the program and try to decipher the stats (that follow)... but that's just not how I do things. So, here's the explanation, best as I can remember, like it makes a difference if I'm wrong.
Also, the sample rosters provided far above don't correspond to these stats. These stats are for variable-length rosters (as used in the original prototype), while those above are for fixed-length (six-play-long) rosters. And honestly, the more I type and the more I try to remember what these stats were supposed to indicate, the more I am inclined to simply delete the lot. So, the lesson there: I'm repeatedly confronted with my human failings. For every success, there is a corresponding failure; the bar gets raised, but failure to meet that ever-rising bar never comes close to disappearing. Maybe I should just delete this entire section. But if I do, you'll never see how pathetic I feel about all this... which sometimes feels like the major purpose of this site: see, after all this, if I can do it, anybody can, it's just a matter of slogging through that endless feeling of failure.
name: name of the function
num: times it was called in the best winning roster in a tournament (and since num for rand_roster == 100, that means the tournament size was 100; but since the other numbers don't correspond to that in any meaningful way, there's obviously some kind of statistical mismatch (total calls to rand_roster vs meaningful calls (and therefore not total calls) to the other functions, or something of that sort)). And is that enough closing brackets for you? Almost like writing in LISP.
mean: average change in point score after the change.
std, max, min: similar.
name                 num   mean    std     max    min
rand_roster          100    857    793    5869     13
change_start          13    179    388    1028   -658
change_play_add       12    373    736    2146   -475
change_play_delete     4   2985   4883   11436      0
change_abc            14   1163   2655   10350   -913
mutate                35    557   2066   12293   -895
shuffle               18    403    414    1393    -51
Note, a negative value would typically throw a roster out of the competition; but, say, 10,000 - 1,000 = 9,000 might still be a second-place score (and keep the roster until the next round), where a subsequent change might add 5,000 points, just as a for instance.
Also, the purpose of this was to tell me where the big points were coming from. In my refactor implementation, the only function I implemented was change_abc.
Output for the stats above.
Why make a table, when the program can do that?
The raw tournament data.
10 Tournaments (0-9 far left)
10 Best Rosters Each (second column)
List of Play Change History (function calls, remainder)
Oh, and to do this sort of image thing, I output text from the program and then use IrfanView to convert it to a .png file by hand.
MutaGenetic Football™ Refactor
So, this page mainly covered the proof of concept.
These last two images are from the refactor.
The Refactor!
Much Better Field!
Simpler Algorithm!
I believe that the eleven-player re-factor is just an afternoon away from being able to create the kind of pass-completion plays the two-player version was producing when I quit.
It's sort of amazing in a way that I can't be bothered to spend an afternoon confirming that theory.
I am so done with this project... for now, anyway.
In other news, I'm currently working on a Dungeon Generator, very similar in many ways; and then, completely different from this.
Hopefully, that write up will be more coherent...
What I Learned
Six months later, looking back, if I were to do this all over again?
At this point, I really am sick of this project. I just want to get it off my desktop, get it off my TODO list. I know, so much positivity. Thus, I will only put to words one final insight:
It's simply better if a program's debugging framework is the same as that program's usage framework.
And this, of course, can hardly be clear.
I put together a lot of debugging tools for this project, text file (log) outputs, images, pre-formatted plays, etc. At some point, if the debugging output and/or the controls for the debugging match up with the default usage, I can't see how that wouldn't be better. It has to be better.
Say, for instance, if I were making a game. Presumably, we all understand computer games. And at some level of interest, mastery, and obsession over the game, I as a player would want to have access to the same level of detail that the programmers would want during creation. So, got it? When the test harness and the usage harness are the same, happiness results.