Graduates of our school have competed in five major trials over the past four years—against graduates of traditional schools, against those schools' instructors, and against experts with an average of seven to ten years of experience. These competitions included Advanced Written exams, Oral Interviews given by a panel of experts, Practical exams that required tackling hard real-world problems, Building a Large Network from Scratch exams, and in several cases Joining the Workforce exams.

In every case, in every trial, in every competition—the graduates of our school dominated.

These trials were all run by an independent third party, the Institute for Defense Analyses (IDA). As an example of the challenge, IDA collected 20,000 trouble tickets from the Navy—problems that were reported onboard ships, but that the ITs (the Navy's information systems technicians) on those ships couldn't solve. These problems had then been sent to an elite team for resolution; the hard ones often took weeks to fix.

For the competition, each competitor was responsible for the computers and network you would find in a company with a thousand employees. IDA then secretly broke each competitor's system, using one of the 20,000 trouble tickets, selected at random. The challenge was to see whether the competitor could fix the problem; once they did—or gave up—IDA would break the system again with another randomly selected trouble ticket. This kept up for a week.

The competitors were scored much the way competitors are scored in ice-skating or diving. Scoring was based on the quality of the solution (did they really solve the problem, or just get the system running again?), the difficulty of the problem, whether they unknowingly broke something else while trying to solve it, and so on.

For example, in one of the competitions, our school's graduates scored 267. The experts scored 43. The graduates of a traditional classroom scored −7. The results were stunning—after just five months of school.