Reviews

Now that the Systematic Sudoku blog has completed its trek through the advanced Order of Battle, Sudoku writing and hard puzzle collections can be reviewed from the Systematic Sudoku perspective: evaluating them against the human-engineered techniques explained and illustrated in this blog.

The reviews will evaluate claims and theories about human Sudoku solving from a sysudokie perspective, and will grade puzzle collections on a sysudokie scale of difficulty. Reviews will also give me an opportunity to link back into the basic and advanced technique sections of the blog, roughly the 2012 posts, for definitions and explanations related to each review.

Collection Grading

Rather than using colorful but subjective labels like “extreme” or “diabolical”, the reviews will grade collections according to the phase of the Systematic Sudoku Order of Battle (SSOB) that the puzzles tend to reach at the point of collapse. The grade levels currently are:

basic: dublex, cross hatch, locked sets, boxline, or X-wings

bv scan: Sue de Coq, APE, UR, XYZ-wing, or XY-chains

x-panels: X-chains; regular, finned, and mutant fish

coloring: Medusa coloring and bridging

AIC net: extended XY-chains, XY and AIC hinged networks

ALS net: systematically enumerated ALS and ALS-aided AIC

limited pattern overlay: a scaled-back but systematic version of POM
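For readers who keep their own tallies, the scale is easy to represent in code. Here is a minimal Python sketch; the names, the technique spellings, and the lookup function are illustrative assumptions of mine, not blog conventions:

```python
# A sketch of the sysudokie grading scale as an ordered structure.
# Technique names abbreviated; the representation is an assumption.
GRADE_LEVELS = [
    ("basic", {"dublex", "cross hatch", "locked set", "boxline", "X-wing"}),
    ("bv scan", {"Sue de Coq", "APE", "UR", "XYZ-wing", "XY-chain"}),
    ("x-panels", {"X-chain", "regular fish", "finned fish", "mutant fish"}),
    ("coloring", {"Medusa coloring", "bridge"}),
    ("AIC net", {"extended XY-chain", "XY net", "AIC hinged net"}),
    ("ALS net", {"enumerated ALS", "ALS-aided AIC"}),
    ("limited pattern overlay", {"LPO"}),
]

def grade(collapse_technique):
    """Grade a puzzle by the technique in force at its collapse."""
    for level, techniques in GRADE_LEVELS:
        if collapse_technique in techniques:
            return level
    return "ungraded"
```

A puzzle collapsing under an XY-chain, for example, grades as “bv scan”.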

The Survey Table

Puzzle collection grading and commentary will be based on a survey sample of 10 puzzles, preselected by choosing a random digit and taking every tenth puzzle, starting with that digit. Data from this survey is presented in a survey table that summarizes the solving experience in box marking, line marking, and the advanced levels.
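In code, the preselection is simple systematic sampling. A minimal sketch, assuming puzzles are numbered from 1 (the function name is hypothetical):

```python
import random

def survey_sample(step=10, count=10):
    """Preselect survey puzzles: choose a random digit, then take
    every tenth puzzle, starting with that digit."""
    start = random.randint(1, 9)                     # the arbitrary digit
    return [start + i * step for i in range(count)]  # e.g. [7, 17, ..., 97]
```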

Puzzle numbers within the collection are listed in the left column, with each surveyed puzzle having a line entry in the table.

From left to right, the table has the following column headings and contents:

The “Start” column records the number of starting clues.

The “Box” column shows the number of new clues, then the number of slinks or aligned triples added in box marking, with a slash character separating clues from slinks. An aligned triple is counted as a slink because, in further marking, it has comparable power to exclude candidates from cells. The completion of a naked pair is counted as one slink. The column also mentions any noteworthy patterns found in box marking that contribute to candidate exclusions.

The “Line Marking” column contains an abbreviated history of the line marking. At each successive free cell level encountered, basic solving events are listed, along with the numbers of clues and slinks they generate. The noted basic solving events include naked and hidden singles (ns, hs), pairs (np, hp), triples (nt, ht), quads (nq, hq), and quins (nqn, hqn), plus boxlines (bxl), hidden double line exclusions (hdx), and X-wings (x).

The line marking history shows the number of lines marked at each free cell level. A line’s free cell level is 9 minus the number of cells reserved by clues and locked sets within the line. It reflects the difficulty of marking that line, and line marking proceeds in order of increasing free cell level, easier lines first. When an event generates new markings, the same “clue/slink” results follow the event, in parentheses.
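As a rough sketch of that bookkeeping (the cell representation here is an illustrative assumption, not a blog convention):

```python
def free_cell_level(line):
    """line: a list of 9 cells, each a dict with 'clue' and 'locked'
    flags.  A cell is reserved when it holds a clue or belongs to a
    locked set; the free cell level is 9 minus the reserved count."""
    reserved = sum(1 for cell in line if cell["clue"] or cell["locked"])
    return 9 - reserved

# Line marking proceeds easiest lines first:
#   lines.sort(key=free_cell_level)
```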

To illustrate the line marking history, let’s read this one:

4:5nt(3/2)2,5:bxl(1)3,6:1np

Commas separate one free cell level from the next. Line marking started at 4 free cells (typically 5 clues). Five lines were marked routinely; then a naked triple occurred on the next line, producing 3 clues and 2 slinks (counted as described above). Two more level 4 lines were then marked. At the 5 free cell level, a boxline produced one clue; one slink would have been “(0/1)”. Three more lines followed at 5 free cells. Then line marking ended on a naked pair, on the second line with 6 free cells.
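The notation is regular enough to read mechanically. As an illustration only (the blog compiles these tables by hand), here is a small Python sketch that translates a history string back into plain English:

```python
import re

# Token grammar, per the worked example above: a digit run counts lines
# marked routinely, a letter run is an event abbreviation (nt, bxl, np,
# ...), and "(clues/slinks)" or "(clues)" is the yield of an event.
TOKEN = re.compile(r"\d+|[a-z]+|\(\d+(?:/\d+)?\)")

def read_history(history):
    """Translate a line marking history string into plain English."""
    for entry in history.split(","):
        level, tokens = entry.split(":")
        print(f"At {level} free cells:")
        for tok in TOKEN.findall(tokens):
            if tok.isdigit():
                print(f"  {tok} line(s) marked routinely")
            elif tok.startswith("("):
                clues, _, slinks = tok.strip("()").partition("/")
                print(f"    yielding {clues} clue(s), {slinks or 0} slink(s)")
            else:
                print(f"  event: {tok}")

read_history("4:5nt(3/2)2,5:bxl(1)3,6:1np")
```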

When line marking is complete and the bv scan techniques begin, the number of bv (bi-value cells) left by line marking is reported in the next column, “bv”. Aside from UR detection, the bv scan techniques depend on a sufficient number of bi-value cells.

Advanced solving events are described in the rightmost “Advanced Events” column, unless the puzzle collapses earlier. In that case, the progress through basic solving is reported there.
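Taken together, one row of the survey table can be modeled as a simple record. A sketch, with field names of my own choosing:

```python
from dataclasses import dataclass

@dataclass
class SurveyRow:
    """One surveyed puzzle's line in the table (illustrative fields)."""
    puzzle: int        # puzzle number within the collection
    start: int         # number of starting clues
    box: str           # box marking results as "clues/slinks", e.g. "4/6"
    line_marking: str  # history string, e.g. "4:5nt(3/2)2,5:bxl(1)3,6:1np"
    bv: int            # bi-value cells left by line marking
    advanced: str      # advanced events, or basic progress at collapse
```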

Snapshot Posts

Reviews of collections conclude with a “snapshot” post, giving solving highlights of the surveyed puzzles and linking back to the earlier posts explaining the highlighted techniques.


3 Responses to Reviews

  1. José Corona says:

    Dear Sir,
    Congratulations on your dedication, on your systematic approach to problem solving, and on your ability to formulate methodologies.
    I have a question for you. I am starting a new project to launch a high-level Sudoku competition, held online with a grand in-person final. However, I know that several Sudoku solving engines exist that could turn the tournament into a “cheats cage”. I have some solutions in mind, but none seems good enough in the long term (irregular Sudoku, easy Sudokus instead of difficult ones, avoiding Sudoku altogether (which I don’t like), asking the winner to demonstrate his real abilities before rewarding him, etc.). I have just one idea that could help: averaging scores over several different games, not only “straight Sudoku”, so that a cheater could not game every kind of puzzle in the tournament.
    Dear Sir: I am asking whether, with your unusual analytical capabilities, you could find another practical solution; and, more importantly, whether you foresee any other problem or difficulty in launching the tournament.
    Finally, if you are an MS-Excel fan (like me), I invite you to visit my page (quickrows.com), where I have tried some different ways of using Excel.
    Thanks in advance and best regards,

    José Corona
    Spain

    • Sudent says:

      It’s a great idea. I have a brief report on the Akron Sudoku Tournament, held for the first time in November of last year. I was a participant, but had no chance of joining the four finalists, who worked their puzzles on large white boards on stage. The puzzles were generally too hard to finish, but scoring was based on the number of correct clues: three rounds of 20 minutes, with three puzzles handed out per round, then a final puzzle for the four finalists.

      That all worked out well for a live contest, but three puzzles is too many for 20 minutes. You should not expect a large number of experts, and I think easy puzzles are best for live contests. The Akron-Summit Library got permission to use Will Shortz puzzles, without individual source attribution.

      For an online contest, there are many problems. The only way to rule out computer-based solutions is to ban arbitrary trial and error, and to require the solver to spell out the logic of each removal and promotion. Under these rules, you could even allow a contestant’s favorite solver, provided it was clearly identified so that others could acquire it. This would be a time-consuming nightmare to adjudicate, but it would promote human solving and an appreciation of Sudoku logic.

  2. José Corona says:

    You are right. It is complicated to avoid automatic solvers in an online tournament. I have an idea that could work. I will try it and let you know whether it works.
    Thanks for your kind response.
    Best regards,

    José
