Stripe CTF 2014 walkthrough

28 Jan 2014

For more information about this contest, go to http://stripe-ctf.com

Sadly, this time I didn’t finish the contest in time – I had way too much on my plate. I got as far as level 3, and then had to get back to work :(

Level 0

As easy as it gets - instead of doing a linear search in a list, stuff the dictionary into a hashtable (a Ruby Hash/Set) and test membership against that.
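
A minimal sketch of that idea (in Java, to keep the later snippets in one language; the dictionary path is a placeholder):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

// Load the dictionary into a hash set once, then test each word with an
// O(1) lookup instead of a linear scan over the word list.
public class Level0Sketch {
    public static void main(String[] args) throws Exception {
        Set<String> dictionary = new HashSet<>(Files.readAllLines(Paths.get("/usr/share/dict/words")));
        for (String word : new String[] {"apple", "xyzzy"}) {
            System.out.println(word + ": " + (dictionary.contains(word) ? "known" : "unknown"));
        }
    }
}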

Level 1

Slightly more difficult. What you’re looking for is a commit whose SHA-1 is less than the difficulty. Given there’s no good way to produce a predictable SHA-1, you’ll need to brute-force your way through this. This involves generating commit messages, running SHA-1 on them, and committing.

Given that the output of the SHA-1 digest is uniformly distributed, you need a function that generates sequences – all we care about is that it’s fast and never produces the same element twice – thus incrementing a counter by 1 is ideal. Given the difficulty, you’ll need to blaze through roughly 16M hashes to find one that fits.
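
A minimal sketch of that loop, with a placeholder commit template and difficulty; the git object header that the real miner has to prepend is covered below:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Bump a counter, hash the candidate, and stop once the hex digest sorts
// lexicographically below the difficulty string. The template and difficulty
// are placeholders, not values from the contest.
public class MineSketch {
    static String toHex(byte[] digest) {
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String difficulty = "000001";
        String template = "tree ...\nauthor ...\nnonce: ";
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        for (long nonce = 0; ; nonce++) {
            byte[] candidate = (template + nonce).getBytes(StandardCharsets.US_ASCII);
            String hex = toHex(sha1.digest(candidate));
            if (hex.compareTo(difficulty) < 0) {
                System.out.println("found nonce " + nonce + " -> " + hex);
                break;
            }
        }
    }
}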

I started with Ruby (naively), which, without any tricks, let me do around 120 kH/s. The only optimization I applied was to run SHA-1 inside the interpreter rather than shelling out. For that to work, I had to generate a tiny commit header before the actual commit contents. Easy, but not quite fast enough – I needed something at least ten times faster.
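
Concretely, git hashes a commit object as SHA-1 over a short header – the string "commit ", the body length in bytes, and a NUL byte – followed by the commit body, so that header is what the in-process hashing has to reproduce. A sketch, with a placeholder body:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// git computes a commit id as SHA-1 of "commit " + byteLength + "\0" + body,
// so the header has to be prepended for an in-process hash to match
// `git hash-object -t commit`.
public class GitObjectSketch {
    static byte[] commitPreimage(byte[] body) {
        byte[] header = ("commit " + body.length + "\0").getBytes(StandardCharsets.US_ASCII);
        byte[] preimage = new byte[header.length + body.length];
        System.arraycopy(header, 0, preimage, 0, header.length);
        System.arraycopy(body, 0, preimage, header.length, body.length);
        return preimage;
    }

    public static void main(String[] args) throws Exception {
        byte[] body = "tree ...\nauthor ...\ncommitter ...\n\nmessage\n".getBytes(StandardCharsets.UTF_8);
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(commitPreimage(body));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        System.out.println(hex); // matches git's object id for a well-formed body
    }
}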

Next up was Java – I rewrote the miner, trying to refresh my memory as I went. Surprisingly, the result fell short of what I expected – there was no massive speedup. I had one more trick up my sleeve, which I had stumbled upon while doing last year’s contest. Given that the commit messages differ solely by the nonce (the counter value in my case), I could recycle the SHA-1 state – that is, calculate it once for the part that doesn’t change, and then clone() it every time I needed to update the counter. This simple trick bumped the hash rate to ca. 1 MH/s, which was much closer to where I wanted to be, but not yet close enough. Stripe’s mining bots spit out a commit every few seconds, which meant I either needed to get lucky and leave the miner running for a long time, or optimize further.
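
Roughly, the trick looks like this (a sketch, not the miner’s exact code; it assumes a fixed-width nonce so the length baked into the header never changes, and the prefix and difficulty are placeholders):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Feed the unchanging prefix into the digest once, then clone() that partial
// state for every candidate instead of re-hashing the whole message.
public class CloneSketch {
    public static void main(String[] args) throws Exception {
        byte[] prefix = "commit 256\0tree ...\nauthor ...\nnonce: ".getBytes(StandardCharsets.US_ASCII);
        String difficulty = "000001";

        MessageDigest base = MessageDigest.getInstance("SHA-1");
        base.update(prefix); // hash the constant part exactly once

        for (long nonce = 0; ; nonce++) {
            MessageDigest sha1 = (MessageDigest) base.clone(); // copy the partial state
            sha1.update(String.format("%012d", nonce).getBytes(StandardCharsets.US_ASCII));
            StringBuilder hex = new StringBuilder();
            for (byte b : sha1.digest()) hex.append(String.format("%02x", b));
            if (hex.toString().compareTo(difficulty) < 0) {
                System.out.println("found nonce " + nonce + " -> " + hex);
                return;
            }
        }
    }
}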

Being impatient, I downloaded a copy of JProfiler to find out what else I could shave off. Two methods stood out: String.format and String#toLowerCase. I quickly realized I could drop the latter, which I had only used for the later comparison against the string representing the difficulty. Rewriting String.format took ca. 5 minutes, as I only needed to implement a truly trivial case.
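
Roughly, the trivial replacement boils down to a lookup-table hex encoder along these lines (a sketch, not the exact code from the miner):

// A hand-rolled hex encoder: a lookup table and a char array sidestep the
// formatting machinery (and, since the output is already lowercase, the
// toLowerCase call as well).
public class HexSketch {
    private static final char[] HEX = "0123456789abcdef".toCharArray();

    static String toHex(byte[] digest) {
        char[] out = new char[digest.length * 2];
        for (int i = 0; i < digest.length; i++) {
            int v = digest[i] & 0xff;
            out[2 * i] = HEX[v >>> 4];
            out[2 * i + 1] = HEX[v & 0x0f];
        }
        return new String(out);
    }

    public static void main(String[] args) {
        System.out.println(toHex(new byte[] {0x00, 0x01, (byte) 0xfe})); // prints 0001fe
    }
}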

My first success came after ca. 10 minutes. Not too bad for a Mac Mini from 2010. Sadly, my commit was rejected – I received the following message:

Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 379 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: ===================
remote:
remote: We couldn't parse your commmit: Failed to parse signature - expected prefix doesn't match actual. (We're using Rugged for our parsing, which may behave differently on edge cases than stock Git. This message probably indicates your commit is malformed. Perhaps you're missing a committer field.)
remote:
remote: ===================
remote: error: hook declined to update refs/heads/master

This meant I had to look into how I composed my commit messages – at the very least, it confirmed I could compute hashes fast enough and get lucky. I immediately installed the Rugged gem and started inspecting my commits – turns out, I suck at spelling (commiter vs committer). I then tweaked my miner to use 8 cores, which resulted in a proportional speedup. The bonus round is similar, except that you’re competing against others at a higher difficulty.
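
Roughly, the work splits across cores by giving each thread a different starting nonce and a stride equal to the thread count, so no two threads ever test the same candidate (a sketch with placeholder prefix and difficulty, not the miner’s exact code):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.concurrent.atomic.AtomicBoolean;

// Thread t tries nonces t, t + N, t + 2N, ... and every thread stops as soon
// as one of them finds a digest below the difficulty.
public class ParallelMineSketch {
    public static void main(String[] args) {
        final int threads = Runtime.getRuntime().availableProcessors();
        final String prefix = "tree ...\nauthor ...\nnonce: ";
        final String difficulty = "000001";
        final AtomicBoolean found = new AtomicBoolean(false);

        for (int t = 0; t < threads; t++) {
            final long start = t;
            new Thread(() -> {
                try {
                    MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
                    for (long nonce = start; !found.get(); nonce += threads) {
                        byte[] digest = sha1.digest((prefix + nonce).getBytes(StandardCharsets.US_ASCII));
                        StringBuilder hex = new StringBuilder();
                        for (byte b : digest) hex.append(String.format("%02x", b));
                        if (hex.toString().compareTo(difficulty) < 0 && found.compareAndSet(false, true)) {
                            System.out.println("thread " + start + " found nonce " + nonce);
                        }
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }).start();
        }
    }
}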

Interested in some code? Here’s my Java Gitcoin Miner.

Level 2

There seem to be two problems to solve on this level: load balancing and blacklisting. Since the test harness gives me a score for each aspect, I’ve decided to approach each problem independently.

Load balancing

At first, I attempted to spread the load across N instances by keeping around a global variable holding the index of the next backend to use. Each request increments the index by 1 modulo the number of backends. That yielded 2.83 negative points (not bad considering how simple it was – a one-liner).
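
A sketch of that round-robin choice, in Java with placeholder backend addresses (the actual change was a one-liner in the level’s own code):

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// One shared counter, incremented per request and taken modulo the number of
// backends, so consecutive requests rotate through the instances.
public class RoundRobinSketch {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinSketch(List<String> backends) {
        this.backends = backends;
    }

    String pick() {
        return backends.get(Math.floorMod(next.getAndIncrement(), backends.size()));
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch(List.of("backend-0", "backend-1", "backend-2"));
        for (int request = 0; request < 6; request++) {
            System.out.println("request " + request + " -> " + lb.pick());
        }
    }
}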

Blacklisting

So there are two classes of clients – good and bad. There are many good clients, but they only issue a few requests each, while there are a few bad clients that each issue many requests. Thus, requests from bad clients occur significantly more often than those from good clients. I chose a very simple approach, where I start dropping requests from IPs that have already sent me more than 11, and it seems to have worked.
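
A sketch of that rule, keeping the threshold of 11 from above and treating everything else as a placeholder:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Count requests per source IP and start dropping once a source has already
// sent more than the threshold.
public class BlacklistSketch {
    private static final int THRESHOLD = 11;
    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    boolean allow(String ip) {
        return counts.computeIfAbsent(ip, k -> new AtomicInteger()).incrementAndGet() <= THRESHOLD;
    }

    public static void main(String[] args) {
        BlacklistSketch shield = new BlacklistSketch();
        for (int i = 1; i <= 13; i++) {
            System.out.println("10.0.0.1 request " + i + " allowed: " + shield.allow("10.0.0.1"));
        }
    }
}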

Level 3

So we’re given a full-text search server that proxies requests to one of three search backends. Each backend reads in an index at startup and is then available to answer queries. The way it is implemented, the index is tiny – all it contains are the file paths that have been “indexed”. When a query comes in, the backend reads the entire contents of each file, matches the query string against them, and returns the results. There are (again) two aspects here: search speed on a single backend, and then perhaps sharding the index.

I decided I would start by trying to speed up each worker. Judging by the test inputs, we’re given fairly long pieces of text containing words separated by spaces and periods. Each match is of the form [word: file:linenumber]. To produce such matches, we need to build an inverted index where words are keys and each key points to the locations in the text where that word appears. At first, we could try to store the full pathname of each file in the index; if that proves to be too much data to hold in memory, we can keep the file paths in an array and store only their indices. If that is still too much memory, we can look into ways of hashing the input words so they take up less space (creating a trie, perhaps?).
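
A sketch of that structure, storing each file path once in a list so the postings only carry a small integer id, and splitting on spaces and periods as described above:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Inverted index: each word maps to "fileId:lineNumber" postings; the file
// paths themselves live once in a separate list.
public class InvertedIndexSketch {
    final List<Path> files = new ArrayList<>();
    final Map<String, List<String>> postings = new HashMap<>();

    void indexFile(Path path) throws IOException {
        int fileId = files.size();
        files.add(path);
        List<String> lines = Files.readAllLines(path);
        for (int lineNo = 0; lineNo < lines.size(); lineNo++) {
            for (String word : lines.get(lineNo).split("[ .]+")) {
                if (word.isEmpty()) continue;
                postings.computeIfAbsent(word, k -> new ArrayList<>())
                        .add(fileId + ":" + (lineNo + 1));
            }
        }
    }

    List<String> query(String word) {
        return postings.getOrDefault(word, List.of());
    }

    public static void main(String[] args) throws IOException {
        InvertedIndexSketch index = new InvertedIndexSketch();
        for (String arg : args) index.indexFile(Path.of(arg));
        System.out.println(index.query("example")); // placeholder query word
    }
}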
