Have you ever played 2048? About a year ago, last winter, I became extremely interested in this game. All sorts of posts on Hacker News featured new implementations, ranging from versions that merely swapped the numbers for images to those with their own unique mechanics. You can find a ton of them here. There is even a course on Udacity that teaches you how to build your own version.

I was able to beat it many times and got close to 4096 a few times. I became really interested in what the best strategy was. Many people created AIs to play the game near-perfectly. Someone else created a hard version that gave you the worst possible tile placement on every move. I started wondering how well randomly hitting directions would do. I could not find an implementation with an API that let me feed in random directions, so I coded my own. There may be one now, but I did this a few months ago and just wanted to write it up.

You can get my implementation by running the following in R.

`devtools::source_url('https://raw.githubusercontent.com/darrkj/darrkj.github.io/master/rcode/r2048.R')`

`## SHA-1 hash of file is bb990681619d5e7dc3928fd77345347cea459d0a`

We can play the game in the R console, as seen below. I thought about making an app to let you play it from either Shiny or R-Fiddle, but it looks pretty bad since I only made it playable to test the game logic.

```
play(gen_board())
. . . .
. . . .
. 2 . .
2 . . .
1: 'd'
. . . .
. . . .
. . . 2
2 2 . .
1: 'l'
. . . .
. . . 2
2 . . .
4 . . .
1:
```

The real way to use this is to simulate a game, or a large number of games, feeding in a strategy for how to play. It is really meant as a harness around the game. We can simulate an entire game by running the following.

`auto_play()`

```
##   board_sum max moves log_sum depth
## 1       308 128   153      47     7
```

This shows us the following information:

- board_sum - The sum of all tiles on the board. This is closely related to the score in the official version of the game, where every move that merges two cells adds the merged value to the score (2 onto 2 => 4 adds 4). The board sum is much smaller though, since it only counts each tile's final value instead of every merge along the way. Randomly inserted 4 tiles were also never merged to earn points, so it is not a perfect mapping.
- max - The highest tile attained in this round of play. No surprise that it is not 2048, or that it is not very high at all.
- moves - Also pretty self-explanatory: the number of moves it took to reach the end-game state.
- log_sum - Doing statistics on collections of these outputs is awkward because of the exponential growth. Since every tile is a power of two (2^x), we can take the log base two to get a better-behaved quantity for statistical purposes.
- depth - The log base two of the highest tile. So instead of going from 128 to 256 it goes from 7 to 8, which is much better for computing means and variances over lots of runs.
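As a quick illustration of these board-derived metrics, here is a minimal sketch computing them from a hand-built final board. The matrix representation and variable names are my own for illustration; r2048.R may store the board differently.

```r
# A 4x4 board with empty cells stored as 0 (an assumption about the
# representation; the actual code may differ).
board <- matrix(c(2, 4, 8, 0,
                  2, 2, 0, 0,
                  0, 0, 0, 0,
                  0, 0, 0, 0), nrow = 4, byrow = TRUE)

tiles <- board[board > 0]

board_sum <- sum(tiles)        # 18: sum of every tile on the board
max_tile  <- max(tiles)        # 8: highest tile reached
log_sum   <- sum(log2(tiles))  # 8: each 2^x tile contributes x
depth     <- log2(max_tile)    # 3: log base two of the highest tile
```

Since every tile is an exact power of two, the log2 values are exact integers, which is what makes log_sum and depth so much friendlier for summary statistics.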

By default this uses a uniform distribution over moves, so a swipe up is just as likely as down, left, or right. I am going to run it many times and see what the outcomes look like.

```
runs <- 10000
a <- c()
for (i in seq(runs)) {
  a <- rbind(a, auto_play())
}
table(a$depth)
```

```
##
##    4    5    6    7    8    9
##   21  578 3473 4967  959    2
```

`mean(a$log_sum)`

`## [1] 45.5996`

`sd(a$log_sum)`

`## [1] 3.729567`

After thinking quite a bit about how I play the game, it turns out my play could almost be coded as a random distribution like this. I use a very simple strategy that chooses two directions more often than the other two. It does not matter which two, as long as they are not opposites: no left and right together, but left and up or left and down both work. This seemed very effective, since it keeps the tile values monotonically increasing as you move in the direction of your likely moves.
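At its core, such a strategy is just weighted sampling of directions. A minimal sketch, assuming the four directions are labeled left, up, right, and down (the vector order that auto_play actually expects may differ):

```r
# Favor left and up, rarely go right, almost never go down.
probs <- c(l = 0.45, u = 0.45, r = 0.099, d = 0.001)

set.seed(42)  # for reproducibility of this illustration
moves <- sample(names(probs), size = 20, replace = TRUE, prob = probs)
table(moves)
```

Opposite pairs are the thing to avoid: weighting left and right together just shuffles tiles back and forth, while weighting two adjacent directions pushes the large tiles into one corner.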

Since it is easy to give this as an input, we can see how much better it performs than the uniform method. I would not expect it to reach many 2048 tiles, because when I actually play, some of the moves are deliberate: with left and down favored, I go up only at very specific points, not at all randomly.

```
b <- c()
for (i in seq(runs)) {
  b <- rbind(b, auto_play(c(.45, .001, .099, .45)))
}
table(b$depth)
```

```
##
##    4    5    6    7    8    9   10
##    2  188 1847 5006 2772  184    1
```

`mean(b$log_sum)`

`## [1] 54.0733`

`sd(b$log_sum)`

`## [1] 4.911531`

We can see from the output that this is better. We can make a more rigorous statement, though, and test the statistical significance of the non-uniform strategy over the uniform one.

`t.test(b$log_sum, a$log_sum, alternative = 'greater')`

```
##
##  Welch Two Sample t-test
##
## data:  b$log_sum and a$log_sum
## t = 137.4023, df = 18652.84, p-value < 2.2e-16
## alternative hypothesis: true difference in means is greater than 0
## 95 percent confidence interval:
##  8.372256      Inf
## sample estimates:
## mean of x mean of y
##   54.0733   45.5996
```
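As a sanity check, the reported t statistic can be reproduced from nothing but the summary statistics above: the two sample means, the two standard deviations, and n = 10000 runs per strategy.

```r
m_b <- 54.0733; s_b <- 4.911531  # non-uniform strategy
m_a <- 45.5996; s_a <- 3.729567  # uniform strategy
n <- 10000                       # runs per strategy

# Welch's t statistic: difference in means over its standard error,
# with each sample contributing its own variance term.
se <- sqrt(s_b^2 / n + s_a^2 / n)
t_stat <- (m_b - m_a) / se
round(t_stat, 4)  # 137.4023, matching the t.test output
```

With samples this large, even a tiny real difference would be significant; the interesting number is the effect size itself, a gain of about 8.5 in mean log_sum.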

I have many more interesting ideas I want to try with this setup, but I wanted to get this out first in case my interest fades and I never get back to it.