eLetters is an online forum for ongoing peer review. Submission of eLetters is open to all. Please read our guidelines before submitting your own eLetter.
- RE: Authors’ response to Prof. Stefan Boettcher’s comment
Prof. Boettcher points out that our best result for the 2000-node MAX-CUT problem misses about 600 cuts (about 1.8% of the maximum cut value), where the maximum cut value is estimated to be 33,600 using a method based on statistical mechanics and our best result is estimated to be 33,000 from analysis of Fig. 2B. For completeness, we provide the exact best cut value presented in Fig. 2B: 33,155, which means that it misses about 445 cuts (about 1.3% of the estimated maximum cut value). This is an important and interesting point, because this estimate indicates that the approximate solutions obtained by the SB machine are actually very close to the optimal solution. In other words, we believe that this result supports our conclusion that our SB machine can obtain fairly good approximate solutions in a short time (0.5 ms in this case) and that many applications would benefit from the substantial speedup. In our opinion, this performance is surprising and of interest as the first result obtained by the newly proposed heuristic algorithm, which is based on a completely new mechanism: classical adiabatic evolution. (To the best of our knowledge, this is the first example of “classical adiabatic optimization.” By contrast, quantum adiabatic optimization, based on the quantum adiabatic theorem, has been well known and studied for the past two decades since its proposal in Refs. 10 and 11.) In response to the question regarding the achievable solution accuracy and its scaling with...
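A quick arithmetic check of the percentages quoted above; this snippet is only an editorial sketch using the numbers stated in this response, not part of the letter:

```python
# Cut values quoted in the response above.
estimated_max = 33600        # statistical-mechanics estimate of the maximum cut
best_exact    = 33155        # exact best cut value from Fig. 2B
boettcher_est = 33000        # Prof. Boettcher's estimate of the best SB result

for best in (best_exact, boettcher_est):
    gap = estimated_max - best
    print(gap, f"{100 * gap / estimated_max:.1f}%")   # 445 -> 1.3%, 600 -> 1.8%
```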
Competing Interests: H.G. and K.T. are inventors on two U.S. patent applications related to this work filed by the Toshiba Corporation (no. 16/123146, filed 6 September 2018; no. 16/118646, filed 31 August 2018). The authors declare that they have no other competing interests.
- RE: Systematic testing of heuristics for hard combinatorial problems
Not to take away from the beauty and power of the presented algorithm and its implementation, I still feel compelled to note that this presentation misses a far more powerful method of assessing its capabilities. (Incidentally, this is true for previous studies as well, such as the “digital annealer” cited here, and many similar comparisons.) As long as test instances can be defined over an ensemble (and the all-to-all Ising instance used here is drawn from a particularly well-known ensemble!), finite-size scaling from statistical mechanics allows for quite a stringent assessment: by plotting ensemble averages of optima for instances drawn first at rather small sizes N (where the results are likely exact optima) and ranging to increasingly larger N, one quickly learns how those optima scale at large N. (In the Ising case here, the leading order is known exactly to be N^(3/2) when the J's are drawn from a symmetric distribution with sigma=1, but such knowledge is generally not essential.) Thus, defining intensive Ising energies e(N) = E/N^(3/2), these should converge smoothly to a finite limit. As heuristics for NP-hard problems are inherently limited, their limitations will inevitably manifest in quantifiable deviations from that scaling at larger N. In fact, it is often also possible to extract first-order scaling corrections from small-N (likely exact) data to find asymptotically: e(N) ~ e(infinity) + A/N^omega, with e(infinity), A, and omega numerically extracted fr...
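A minimal sketch of this procedure in Python (illustrative only; the ensemble, instance counts, and sizes here are assumptions, not from the letter): it draws all-to-all Ising instances with Gaussian couplings, brute-forces exact optima at small N, forms e(N) = E/N^(3/2), and fits the first-order correction form quoted above.

```python
# Minimal finite-size-scaling test (illustrative parameters).
# Assumed ensemble: all-to-all Ising couplings J_ij ~ N(0, 1), zero field.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def ground_state_energy(J):
    """Exact minimum of E(s) = -sum_{i<j} J_ij s_i s_j by exhaustive search."""
    n = J.shape[0]
    best = np.inf
    for k in range(1 << (n - 1)):              # fix s[0] = +1 (spin-flip symmetry)
        bits = (k >> np.arange(n - 1)) & 1
        s = np.concatenate(([1.0], 1.0 - 2.0 * bits))
        best = min(best, -0.5 * s @ J @ s)     # J symmetric, zero diagonal
    return best

def mean_intensive_optimum(n, samples=100):
    """Ensemble average of e(N) = E_min / N^(3/2)."""
    es = []
    for _ in range(samples):
        J = rng.standard_normal((n, n))
        J = np.triu(J, 1)
        J = J + J.T                            # symmetric couplings, sigma = 1
        es.append(ground_state_energy(J) / n**1.5)
    return np.mean(es)

sizes = np.array([6, 8, 10, 12, 14])           # small enough for exact optima
e_n = np.array([mean_intensive_optimum(n) for n in sizes])

# First-order correction fit: e(N) ~ e(infinity) + A / N^omega
model = lambda n, e_inf, A, omega: e_inf + A / n**omega
(e_inf, A, omega), _ = curve_fit(model, sizes, e_n, p0=(-0.76, 0.7, 0.7))
print(f"e(infinity) ~ {e_inf:.4f}, A ~ {A:.3f}, omega ~ {omega:.3f}")

# A heuristic's ensemble averages at larger N can then be compared against
# this extrapolation; systematic deviations quantify its limitations.
```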
Competing Interests: None declared.