Manuel Kasten wrote:
> ...
> I want to mention Luke Blanchard's second solution. It is the fastest
> solution of those I tested that worked without flaws. I never ever saw
> it running longer than 0.2 seconds and I did test it a *lot* (because it
> is around factor 10-20 faster than the next correct solution, and I
> thought it must fail sometimes, but it never did). I couldn't believe
> its speed. I studied his code for ca. 30 min just to understand what
> it's doing (luckily it's got a lot of comments, I wouldn't have been
> able to understand it otherwise). Then I could tell it is indeed
> correct, but its speed is nevertheless amazing.
>

Thanks, Manuel. Since it was 4:00 AM when I wrote the email wrapping
that guy -- I finally got up to code the thing at 3 after lying awake
all night -- I think my explanation was a little terse, if not
downright misleading.

The key idea is that the easiest way to manage searching all possible
combinations of something is with loops and recursion: you just step
through all the combinations right inline. Most solvers of NP-complete
problems end up spending a lot of effort setting up data structures to
keep track of where they are in the search. My approach was to just
use local variables and the stack, combined with Ruby's magical
ability to long-jump out of a deep stack by simply returning from a
block (a trick I first used some 15 years ago in Smalltalk). No extra
bookkeeping required.
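For anyone who hasn't seen that trick, here's a minimal, made-up
illustration (nothing to do with the quiz itself) of returning from a
block to unwind a deep recursive stack in one step:

```ruby
# Made-up illustration: a block's `return` returns from the method
# that created the block, unwinding every frame in between.
def dig(n)
  return yield(n) if n.zero?   # bottom of the recursion: call the block
  dig(n - 1) { |x| yield x }   # pass the block's result up the chain
end

def search
  dig(10) { |x| return "found at depth #{x}" }  # jumps out of all 11 frames
  "unreached when the block returns"
end

puts search   # => found at depth 0
```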

The "pick" function just walks all combinations recursively, and when
it finds one it yields a list of the indexes that make up the
successful combination. Since it is recursive, it ends up yielding to
itself up the call chain. To take the 3-3s-and-9-2s example, we get an
execution like this, where further down the page corresponds to
further down the stack and further to the right corresponds to deeper
nesting:
values = [3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2]
pick values, 9, 0:
    pick values, 6, 1:
        pick values, 3, 2:
            yield [2]
        yield [2, 1]
    yield [2, 1, 0]
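In case the shape isn't obvious from the trace, here's a sketch of a
pick that behaves like it -- not my exact submission, just the idea.
Each level appends its own index to whatever the inner call found:

```ruby
# Sketch of pick, matching the trace: walk index combinations whose
# values sum to target, starting at index `start`, yielding the index
# list of each complete combination up the recursion chain.
def pick(values, target, start)
  (start...values.size).each do |i|
    if values[i] == target
      yield [i]                     # exact hit: start a combination
    elsif values[i] < target
      # take values[i], look for the remainder further right;
      # append our own index to whatever the inner call yields
      pick(values, target - values[i], i + 1) { |ixs| yield ixs << i }
    end
  end
end

values = [3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2]
found = []
pick(values, 9, 0) { |ixs| found << ixs.dup }
p found.first   # => [2, 1, 0] -- the three 3s, as in the trace
```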

The [2, 1, 0] yielded by the outermost call corresponds to the indexes
of the 3 3s. The caller of that outermost call is "find_split", and
the way it handles this list of indexes is to dup the array of values,
delete the yielded indices from the dup, and then recurse on the
shorter array. In this case, it's trying to divide 9 2s in half, and
that will fail, so it just returns. This means the original call to
pick continues, meaning all of those yields finish, and we're nested
three levels deep again. The innermost one ends up recursing again
without success, and then the second one moves on to the 2s and
recurses a couple of times to get our next solution:
values = [3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2]
pick values, 9, 0:
    pick values, 6, 1:
        pick values, 4, 4:
            pick values, 2, 5:
                yield [5]
            yield [5, 4]
        yield [5, 4, 3]
    yield [5, 4, 3, 0]

This time find_split's recursion succeeds (on 2 3s and 6 2s), and so
it returns the solution, jumping out of the above stack of picks
without ever continuing them.
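Putting it together, here's a self-contained sketch of the whole
scheme -- again not my exact submission, and I'm assuming the task is
to split the loot into equal shares (three of them in the running
example):

```ruby
# Sketch of the whole scheme -- a reconstruction, not the submitted code.
# pick walks index combinations whose values sum to target, yielding
# each complete combination's index list up the recursion chain.
def pick(values, target, start)
  (start...values.size).each do |i|
    if values[i] == target
      yield [i]
    elsif values[i] < target
      pick(values, target - values[i], i + 1) { |ixs| yield ixs << i }
    end
  end
end

# Split values into `shares` groups of equal sum; return nil if impossible.
def find_split(values, shares)
  return nil unless (values.sum % shares).zero?
  return [values] if shares == 1
  target = values.sum / shares
  pick(values, target, 0) do |ixs|
    rest = values.dup
    ixs.sort.reverse_each { |i| rest.delete_at(i) }  # drop the picked items
    sub = find_split(rest, shares - 1)
    # Returning from this block long-jumps out of the whole stack of
    # nested picks, exactly as described above.
    return [ixs.map { |i| values[i] }] + sub if sub
  end
  nil
end

values = [3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2]
p find_split(values, 3)   # three shares of 9 each
```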
Isn't that cool?

Luke Blanshard