# Numerical Simulations of the Edwards-Anderson Spin Glass with Binary Couplings

###### Abstract

We present numerical results that allow a precise determination of the transition point and of the critical exponents of the four-dimensional Edwards-Anderson spin glass with binary quenched random couplings. We show that the low-temperature phase undergoes Replica Symmetry Breaking. We obtain results on large lattices, up to volume $V = 10^4$: we use finite size scaling to show the relevance of our results in the infinite volume limit.

## 1 Introduction

The question of whether short range Edwards-Anderson (EA) spin glasses share the remarkable features of the infinite range Sherrington-Kirkpatrick (SK) model [1] is still an open one. In this work we present Monte Carlo simulations of the four-dimensional EA Ising spin glass [2, 3, 4, 5, 6, 7, 8] with a bimodal distribution of the quenched couplings, performed on large lattice volumes (thanks to the tempering and parallel tempering simulation techniques [9, 10]), with a large number of samples, and down to low values of the temperature $T$. In this way we are able to obtain detailed information about the nature of the transition, to determine the critical temperature and exponents with good precision, and to give strong evidence supporting the fact that the low-temperature phase is mean-field-like. A great deal of effort has gone into ensuring the reliability of the data on delicate issues such as thermalization checks and consistency of the data analysis.

The paper is organized as follows: first we describe the model and the parameters of our MC simulation. We then present data related to the Binder cumulant and to the determination of $T_c$ and $\nu$. By analyzing the overlap susceptibility we determine the value of $\eta$. Finally we discuss in detail the probability distribution of the overlap, $P(q)$. We present, among other things, evidence for the non-triviality of the single sample $P_J(q)$ and for a non-zero value of the position of the maximum of $P(q)$, $q_{\max}$, in the thermodynamic limit.

## 2 The Numerical Simulation

It is very difficult to run reliable numerical simulations of finite dimensional spin glasses. The main reason for such difficulties is the presence of many meta-stable states (responsible for aging effects as well as for many other peculiarities of spin glasses [1]). The Monte Carlo dynamics gets easily trapped, and the system only probes a restricted part of phase space.

Many algorithmic solutions have been proposed to improve the speed of thermalization of these systems. All these techniques are related to density scaling methods (see [10] for a review and references): we use here perhaps the simplest implementation of these ideas, parallel tempering [9], where a number of configurations of the system are allowed to exchange their temperatures (for multi-canonical methods, which are strongly related and have in principle an even wider range of applicability, see for example [11]). Thanks to parallel tempering we have been able to thermalize systems of volume up to $10^4$ down to $T = 1.2$.
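As an illustration, the temperature-swap move at the heart of parallel tempering can be sketched in a few lines (a minimal Python sketch with our own naming conventions, not the authors' multi-spin-coded production program):

```python
import numpy as np

def pt_swap_sweep(energies, betas, perm, rng):
    """One sweep of parallel-tempering swap moves.

    betas: inverse temperatures in increasing order; perm[i] is the index
    of the replica currently holding betas[i]; energies[k] is the current
    energy of replica k.  Neighbouring temperatures are swapped with the
    Metropolis probability min(1, exp(delta_beta * delta_E)).
    """
    for i in range(len(betas) - 1):
        a, b = perm[i], perm[i + 1]
        delta = (betas[i + 1] - betas[i]) * (energies[b] - energies[a])
        # always accept when the colder slot holds the higher energy
        if delta >= 0 or rng.random() < np.exp(delta):
            perm[i], perm[i + 1] = b, a
    return perm
```

A replica thus performs a random walk in temperature, which is exactly what the thermalization checks discussed below monitor.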

We study the Edwards-Anderson Ising spin glass with binary couplings, with Hamiltonian

$$\mathcal{H} \;=\; -\sum_{\langle i,j \rangle} J_{ij}\, \sigma_i\, \sigma_j \qquad (1)$$

where the sum runs over pairs of nearest neighboring sites, the $\sigma_i$ are Ising spins, and the couplings $J_{ij}$ are quenched variables drawn with equal probability among the two values $\pm 1$. The overlap among two different systems is defined as

$$q \;=\; \frac{1}{V} \sum_{i} \sigma_i\, \tau_i \qquad (2)$$

where $\sigma$ and $\tau$ denote two configurations of the system in the same realization of the quenched disorder. The overlap probability distribution for a given sample is

$$P_J(q') \;=\; \left\langle\, \delta\!\left( q' - q \right) \,\right\rangle \qquad (3)$$

where $\langle \cdots \rangle$ denotes the usual Gibbs average. Its average over samples is

$$P(q) \;=\; \overline{P_J(q)} \qquad (4)$$

and its moments are defined as

$$\overline{\langle q^k \rangle} \;=\; \int \! dq\; P(q)\, q^k \qquad (5)$$

We always denote by $\langle \cdots \rangle$ the thermal averages and by $\overline{(\cdots)}$ the disorder averages.
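In code, the overlap between two replicas is a one-liner; the following Python fragment (illustrative, with names of our own choosing) computes $q$ and thermal estimates of its moments from a series of measured overlaps:

```python
import numpy as np

def overlap(sigma, tau):
    """Overlap q = (1/V) * sum_i sigma_i * tau_i between two +-1 configurations."""
    return float(np.mean(sigma * tau))

def moments(q_series, ks=(2, 4)):
    """Thermal estimates of <q^k> from a series of measured overlaps."""
    q = np.asarray(q_series, dtype=float)
    return {k: float(np.mean(q ** k)) for k in ks}
```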

Our simulations have been performed on a set of workstations, using a multi-spin-coding program inspired by the work of [12]. We have selected the parameters of our Monte Carlo and parallel tempering runs so as to guarantee a complete thermalization of the measured observables. We will discuss this issue in some detail.

Table (1) summarizes the relevant parameters used in the simulation: we give, among others, the number of thermalization steps, the number of measurement steps, the number of different disorder realizations and the temperature ranges investigated by tempering. Temperature values have been chosen uniformly spaced in the interval between $T_{\min}$ and $T_{\max}$.

| $L$ | Thermalization | Equilibrium | Samples | $N_T$ | $\delta T$ | $T_{\min}$ | $T_{\max}$ |
|---|---|---|---|---|---|---|---|
| 3 | 100000 | 100000 | 3200 | 17 | 0.1 | 1.2 | 2.8 |
| 4 | 100000 | 100000 | 2944 | 17 | 0.1 | 1.2 | 2.8 |
| 5 | 100000 | 100000 | 1920 | 17 | 0.1 | 1.2 | 2.8 |
| 6 | 100000 | 100000 | 1120 | 33 | 0.05 | 1.2 | 2.8 |
| 8 | 100000 | 100000 | 1376 | 33 | 0.05 | 1.2 | 2.8 |
| 10 | 150000 | 150000 | 512 | 56 | 0.04 | 1.2 | 3.4 |

We have used different methods to verify that we have correctly thermalized the systems. When using parallel tempering one is actually performing a generalized Markov chain where systems at different temperatures are allowed to “move” in temperature space too. A necessary condition for the Markov chain to be effective in de-correlating different measurements is that each system spans the full allowed temperature range at least a few times during the simulation. In this respect we check a posteriori that the probability of swapping temperatures has been sizable (ensuring in this way that a single system did not get stuck at a specific value of $T$) and that the histogram counting the time each system has spent at each temperature is fairly flat. This requirement is fulfilled in all our simulations, for all $L$ and $T$ values.
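The flatness requirement on the temperature histogram can be quantified with a simple figure of merit (an illustrative sketch of ours; the text does not quote a specific criterion):

```python
import numpy as np

def temperature_flatness(visits):
    """min/mean ratio of the time a replica spent at each temperature.

    visits[i] counts how often the replica sat at temperature i during
    the run; a ratio close to 1 means the histogram is fairly flat,
    while a value near 0 flags a replica stuck in part of the T range.
    """
    v = np.asarray(visits, dtype=float)
    return float(v.min() / v.mean())
```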

Another very strong check of thermalization is the fact that the single sample $P_J(q)$ are symmetric within the limits of the statistical significance of the histogram. This is very well verified, as can be seen for example in figure 12 where we plot $P_J(q)$ for selected samples.

## 3 The Binder Parameter, $T_c$ and $\nu$

We start by discussing the overlap Binder parameter. We will use it to qualify the phase transition, and to determine the critical temperature $T_c$ and the first of the critical exponents, $\nu$. We will use and describe different methods to compute the quantities we are interested in. Our statistical sample of configurations is large, and our data are precise (even as far as the dependence on the lattice volume is concerned): we will show that different styles of analysis give compatible (and precise) results.

We define the usual overlap Binder parameter as

$$g \;=\; \frac{1}{2} \left( 3 \;-\; \frac{\overline{\langle q^4 \rangle}}{\overline{\langle q^2 \rangle}^{\,2}} \right) \qquad (6)$$

The Binder parameter is an adimensional quantity, and its value at the critical point is universal. Close to $T_c$ its leading behavior is

$$g \;\simeq\; \bar{g}\!\left( L^{1/\nu}\, (T - T_c) \right) \qquad (7)$$

In usual ferromagnetic systems the infinite volume limit of the magnetization Binder cumulant is $0$ in the warm phase (where the distribution of the order parameter is Gaussian) and $1$ in the broken phase: for a spin glass with replica symmetry breaking, and hence a non-trivial distribution of the overlap order parameter, the transition is signaled by a non-trivial value of $g$ in the broken phase (in the warm phase one expects an infinite volume limit of zero). In both cases the location of $T_c$ is signaled by the crossing of the curves of $g$ versus $T$ for different values of the lattice size $L$ (asymptotically for large $L$): large $L$ curves are lower for $T > T_c$ and higher for $T < T_c$. We show in figure 1 $g$ versus $T$ for different $L$ values. The crossing point is at the same temperature, within errors, for all pairs of lattice sizes, and it gives a first estimate of $T_c$ and of the value of the Binder cumulant at criticality.

Error analysis has also been a sensitive issue. We have always used a jackknife or a bootstrap error analysis [13] directly on the fitted parameters to determine errors. Still, one has to keep in mind that statistical errors come together with systematic errors, due to the functional form one chooses to fit (typically the asymptotic scaling form, which on finite size lattices is affected by power corrections). The two types of errors have to be kept under control separately.

In figure 1 the crossing of the different curves is very clear. It is interesting to stress the difference with the three-dimensional case [14], where the crossing at $T_c$ looks more like a merging of the different curves. $D = 3$ is (very) close to the lower critical dimension, while in $D = 4$ we are in a safe region: potentially this is important to make the physical picture easier to understand.

Let us discuss a first naive approach to the data. By looking at the crossing of the curves of $g$ versus $T$ for different $L$ values one sees that one cannot extract a systematic dependence of the crossing point (and hence of the estimate of the effective critical temperature) on $L$. Any systematic trend is smaller than the statistical error (maybe just hinting at a systematic average decrease of the estimate when going from smaller to larger $L$ values). A first naive estimate of $\nu$ can be obtained by linearizing $g$ around the estimate we have given for $T_c$, and by evaluating the logarithm of the slope ratio divided by the logarithm of the ratio of the two lattice sizes. With this method one gets a first set of effective exponents $\nu_{\mathrm{eff}}(L)$. Here too, one cannot distinguish any clear systematic dependence on the lattice size: the error one gets on $\nu$ is completely correlated with the variation of the estimate of $T_c$. For a larger $T_c$ one estimates a lower value of $\nu$, while for lower estimates of $T_c$ one gets larger estimates of $\nu$. The error is dominated by this effect.
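The naive slope-ratio estimate described above amounts to the following (a hypothetical helper of ours, assuming the slopes of $g(T)$ at the crossing have already been measured):

```python
import numpy as np

def nu_from_slopes(L1, slope1, L2, slope2):
    """Naive effective exponent: at T_c the slope of g(T) grows as
    L^(1/nu), so 1/nu = log(slope1/slope2) / log(L1/L2)."""
    return float(np.log(L1 / L2) / np.log(slope1 / slope2))
```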

To get a reliable estimate of $T_c$ and of $\nu$ we have used two methods of analysis of $g$ (see for example the discussion of the analysis in references [15, 14]). In the first approach we linearize the data close to $T_c$ (for all $L$ values) and we run a global fit to all data: we fit $T_c$ and $\nu$ for the two-variable function $g(T, L)$ (as we said, linearized close to $T_c$). We use data in a temperature window around $T_c$. We estimate the errors on the fit parameters ($T_c$ and $\nu$) by a jackknife approach [13]: we repeat the fit many times over subsamples of the data, each containing all of our statistical sample but a small fraction. The error is estimated by looking at the fluctuations of the results of the fits, and by accounting for the fact that they are correlated [13]. We also repeat the fits discarding the smaller $L$ values, to check whether we can observe any systematic drift (again, with good accuracy the average value of the result does not seem to depend systematically on the range selected). Results are very stable, and the estimate of $\nu$ comes out consistently across all these variations.
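A delete-one-block jackknife of this kind can be sketched as follows (illustrative Python; the block count is arbitrary and `estimator` stands in for the full fitting procedure):

```python
import numpy as np

def jackknife(data, estimator, n_blocks=20):
    """Delete-one-block jackknife estimate and error.

    data: 1-d array of approximately independent measurements;
    estimator: function mapping an array of measurements to a scalar.
    """
    data = np.asarray(data, dtype=float)
    blocks = np.array_split(data, n_blocks)
    partial = np.array([
        estimator(np.concatenate(blocks[:k] + blocks[k + 1:]))
        for k in range(n_blocks)
    ])
    # jackknife variance carries the (n-1)/n enhancement factor
    err = np.sqrt((n_blocks - 1) / n_blocks
                  * np.sum((partial - partial.mean()) ** 2))
    return estimator(data), float(err)
```

For the sample mean this reproduces the usual standard error; for fitted parameters it propagates correlations automatically.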

In the second approach, which comes in different flavors, one only uses data in the warm phase. This method leads to a smaller statistical error, which is balanced by a larger systematic uncertainty (since we only select data at a given distance from $T_c$, and approaching $T_c$ leads to a systematic drift of the estimate). In this case we start by selecting a threshold value $\bar{g}$ for the Binder parameter. We start with low values of $\bar{g}$, and we approach $g(T_c)$ from below: we cannot get too close to $g(T_c)$, or the merging of the curves for different sizes makes the error on the measurement too large. We use a polynomial fit to interpolate the $g$ data at the different $L$ values. We have decided to use a polynomial of degree four (we have checked that it guarantees stable fits and consistent results), and we fit a range of data in the critical region (with a fit window that depends on the lattice size). We now define $T(L, \bar{g})$ as the crossing point of the fitted polynomial with the horizontal line at $\bar{g}$, and $\nu_{\mathrm{eff}}$ through the fit

$$T(L, \bar{g}) \;=\; T_c \;+\; A(\bar{g})\; L^{-1/\nu_{\mathrm{eff}}} \qquad (8)$$

When $\bar{g} \to g(T_c)$, $\nu_{\mathrm{eff}} \to \nu$. If $\bar{g}$ is too small, violations of scaling are dominant, while if one approaches $g(T_c)$ too closely the merging of the curves makes the error on the determination of $T(L, \bar{g})$ overwhelming. The errors have been estimated by using a bootstrap approach (very similar in spirit to the jackknife technique, see [13]): one emulates fake sets of data with a Gaussian distribution around the real measurements, fits these multiple sets of fake data, and computes the errors on the fit parameters from the spread of the results.
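The "fake data" bootstrap just described is equally short (a sketch; `fit` stands in for the polynomial fitting step, and the resampling count is our own choice):

```python
import numpy as np

def gaussian_bootstrap_error(values, errors, fit, n_boot=500, seed=0):
    """Resample each measured point from a Gaussian centred on its value
    with its quoted error, refit every fake data set, and return the
    spread of the fitted parameter as its error estimate."""
    rng = np.random.default_rng(seed)
    params = [fit(rng.normal(values, errors)) for _ in range(n_boot)]
    return float(np.std(params))
```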

We note at last that we have also used a variation of this second method, described in [14], based on the direct analysis of the derivative of $g$ with respect to $T$. This method too gives results that are compatible with the other ones.

Our final estimates, averaged over the results obtained using these different approaches, are

$$T_c \;=\; \ldots \qquad (9)$$

and

$$\nu \;=\; \ldots \qquad (10)$$

In the rest of this paper we will use these two values as our best estimates of $T_c$ and $\nu$. We show in figure 2 the data for $g$ rescaled by using these two values: the scaling turns out to be very satisfactory.

## 4 The Overlap Susceptibility and $\eta$

The determination of the overlap susceptibility, $\chi_q$, provides various possible ways to determine the exponent $\eta$ (and hence the exponent $\gamma = \nu\,(2 - \eta)$). In a spin glass in the RSB phase the overlap susceptibility

$$\chi_q \;=\; V\, \overline{\langle q^2 \rangle} \qquad (11)$$

is expected to diverge for all values of $T \le T_c$. We show $\chi_q$ versus $T$ in figure 3.

The first method is based on the fact that we expect that at $T_c$

$$\chi_q(T_c, L) \;\sim\; L^{2 - \eta} \qquad (12)$$

We use a linear interpolation of the data in the region close to $T_c$. As in the case of $\nu$, the error in the estimate is mainly related to the choice of $T_c$. The fit at $T_c$ using (12) gives our first estimate of $\eta$.
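Under equation (12), the estimate of $\eta$ reduces to the slope of $\log \chi_q$ versus $\log L$ at the critical temperature (a sketch with synthetic data; the real analysis first interpolates $\chi_q$ to $T_c$):

```python
import numpy as np

def eta_from_chi(L, chi_at_tc):
    """From chi_q(T_c, L) ~ L^(2 - eta): fit
    log(chi) = (2 - eta) * log(L) + const and return eta."""
    slope, _ = np.polyfit(np.log(L), np.log(chi_at_tc), 1)
    return float(2.0 - slope)
```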

In the second method we use data at $T > T_c$. We go as close to $T_c$ as possible, under the condition that the data on our largest lattices coincide, within our statistical accuracy (i.e. that finite size effects are negligible). Here we expect that

$$\chi_q(T) \;\sim\; (T - T_c)^{-\gamma} \qquad (13)$$

We can use data down to temperatures quite close to $T_c$, where finite size effects start to be sizable even on our largest lattice. We show our best fit in figure 4. In this region we have a stable fit for $\gamma$. Even if this second measurement is not very precise (we have to stay quite far from the critical region), it is interesting that we get a coherent determination of $\eta$ by using a completely different scaling region than in the former analysis (the new analysis also depends on the value of $T_c$ we have determined by using $g$).

The last approach we use for determining $\eta$ is based on the analysis of the scaling properties of the probability distribution of the overlap order parameter in the region $T \simeq T_c$. We analyze the behavior of $P(q)$ in the next section, but we discuss now the scaling of $P(q)$ at $T_c$ in order to define our determination of $\eta$. At $T_c$ we expect

$$P(q, L) \;=\; L^{\beta/\nu}\; \hat{P}\!\left( q\, L^{\beta/\nu} \right) \qquad (14)$$

i.e. a scaling governed by the exponent $\beta/\nu = (D - 2 + \eta)/2$. We find a very good best fit (we do not include the smallest lattice data), with a value of $\eta$ compatible with the previous determinations.

By considering all the methods we have discussed in this section we give our final estimate

$$\eta \;=\; \ldots \qquad (15)$$

which we will use in the rest of our analysis. In figure 5 we plot $\chi_q$ rescaled by using our best fits. The rescaling works well.

Let us also note that we have good agreement with the results reported in [7] for the EA model with Gaussian couplings, where the authors find compatible values of the critical exponents. Universality seems to work.

## 5 $P(q)$

In the two former sections we have shown that the EA model undergoes a phase transition, and we have determined its location and the critical exponents. Now we will try to qualify it in better detail, by determining and analyzing the probability distribution of the order parameter, $P(q)$.

In figure 6 we show our average $P(q)$ (averaged over the different disorder realizations) in the cold phase. When increasing the lattice size, the peak where $P(q)$ is maximum shifts to lower $q$ values: in order to show that there is a phase transition to a phase with a non-zero expectation value of $q$ we have to show that the peak does not go to $q = 0$ when $L \to \infty$. The plateau of $P(q)$ at low $q$ values does not decrease when increasing $L$, as we will discuss in more detail in the following. We remind the reader that in the RSB Parisi Mean Field scenario $P(q)$ is (in zero magnetic field) a non-trivial function, which in the infinite volume limit is formed by $\delta$ functions at $q = \pm q_{\mathrm{EA}}$ and by a regular part that extends down to $q = 0$. On the contrary, if the broken phase has the same structure as that of an ordered ferromagnet, $P(q)$ has to become asymptotically the sum of two $\delta$ functions at $q = \pm q_{\mathrm{EA}}$.

For the sake of comparison we show in figure 7 what happens in the warm phase, where the average $P(q)$ shrinks to a Gaussian distribution around $q = 0$ when $L \to \infty$.

In figure 8 we compare $P(q)$ at different values of $T$ on the same lattice size. From the single-peaked shape at high $T$ one goes to a clear double-peaked structure, with a clear plateau at low $q$, in the low $T$ region.

As we have said, in order to establish that we are seeing a real phase transition in the infinite volume limit we have to show that the value of $q$ where $P(q)$ is maximum, $q_{\max}(L)$, does not go to zero. We start by plotting in figure 9 $q_{\max}$ versus $L^{-B}$ (the exponent $B$ comes from our best fit, see later). It is easy to see that a vanishing asymptotic value seems implausible.

Making this last statement more quantitative requires a more careful analysis. In order to do that we fit

$$q_{\max}(L) \;=\; q_{\infty} \;+\; A\, L^{-B} \qquad (16)$$

both with $q_{\infty} = 0$ and by allowing for it a non-zero value. In figure 10 we plot the values of $q_{\max}$ versus $L$ and the results of the two best fits, one with a fitted value of $q_{\infty}$ and the second with $q_{\infty} = 0$ fixed. This second fit is clearly unsuitable, and it has a very high value of $\chi^2$. From the first fit we get

$$q_{\infty} \;=\; \ldots \qquad (17)$$

which is our best estimate for the position of the $\delta$ function at $q = q_{\mathrm{EA}}$ in the infinite volume limit. We also estimate the correction exponent $B$ (in the fit with a zero asymptotic value one finds a very small value of $B$).
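The extrapolation of equation (16) is a three-parameter fit; since $B$ enters nonlinearly, a simple dependency-free way to perform it is to scan $B$ on a grid and solve the remaining linear problem for $(q_\infty, A)$ (an illustrative sketch of ours, not the authors' fitting code):

```python
import numpy as np

def fit_qmax(L, qmax, B_grid=None):
    """Least-squares fit of q_max(L) = q_inf + A * L^(-B).

    The exponent B is scanned over a grid; for each B the best (q_inf, A)
    pair solves a linear least-squares problem.  Returns the
    (q_inf, A, B) triple with the smallest residual.
    """
    L = np.asarray(L, float); qmax = np.asarray(qmax, float)
    if B_grid is None:
        B_grid = np.linspace(0.1, 3.0, 291)      # step 0.01
    best = None
    for B in B_grid:
        X = np.column_stack([np.ones_like(L), L ** (-B)])
        coef, *_ = np.linalg.lstsq(X, qmax, rcond=None)
        chi2 = float(np.sum((X @ coef - qmax) ** 2))
        if best is None or chi2 < best[0]:
            best = (chi2, float(coef[0]), float(coef[1]), float(B))
    return best[1], best[2], best[3]
```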

In figure 11 we show the value of $P(q)$ close to $q = 0$ (averaged over a small $q$ range, where $P(q)$ is remarkably constant, in order to diminish statistical fluctuations) as a function of $L$. One cannot observe any statistically significant decrease of this value with increasingly large lattice volume (there is a small decrease only for the smallest volumes). The most plausible implication of this evidence is that the system has many stable states, and that the cold phase is characterized by Replica Symmetry Breaking (even if it has to be stressed that this evidence is not as strong as the one implied by figure 10).

In figure 12 we plot $P_J(q)$ for selected samples, at low temperature on a large lattice. One can see that these are very complex distributions: such a pattern is typically related to the presence of many states (it has to be noticed, however, that the small side peaks are not always due to the presence of a real state).

To be more quantitative, we have measured the percentage of disorder realizations whose $P_J(q)$ has a given number of peaks, as a function of $L$, and we plot it in figure 13. The number of samples with a complex phase space ($P_J(q)$ with many peaks) increases strongly with $L$. We use this evidence to rule out the picture of a modified droplet model, which has been discussed, among others, in [16] and in references therein. The picture of the modified droplet models implies that for each realization of the quenched disorder there are (in the cold phase) only two ground states, but that the value of $q_{\mathrm{EA}}$ (i.e. the support of the $\delta$ functions that constitute the $P_J(q)$) depends on the sample. Here, on the contrary, the number of states for a given sample increases strongly with $L$ (and with decreasing $T$).
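Counting peaks in a sampled $P_J(q)$ histogram can be done with a crude local-maximum scan (an illustrative sketch; the authors' actual peak-counting criterion is not specified in the text, and the noise threshold is our own addition):

```python
import numpy as np

def count_peaks(hist, min_height=0.0):
    """Count strict local maxima of a histogram above min_height,
    a crude proxy for the number of states visible in P_J(q)."""
    h = np.asarray(hist, dtype=float)
    n = 0
    for i in range(1, len(h) - 1):
        if h[i] > h[i - 1] and h[i] > h[i + 1] and h[i] > min_height:
            n += 1
    return n
```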

## 6 Sum Rules

In this section we discuss another important feature of the broken phase of the EA model. The starting point for this analysis can be for example the relation:

$$\overline{\langle q^2 \rangle^{2}} \;=\; \frac{2}{3}\; \overline{\langle q^2 \rangle}^{\,2} \;+\; \frac{1}{3}\; \overline{\langle q^4 \rangle} \qquad (18)$$

This is one of a set of relations that are valid in the Mean Field theory of spin glasses [17]. The work contained in [18] has established numerically that these relations are satisfied with good accuracy also in finite dimensional spin glasses. Following these findings, rigorous theoretical analysis has improved our understanding of this set of sum rules [19, 20, 21, 22]: they are strongly related to the ultrametric properties of the phase space.

First of all we show evidence that the relation (18) has a non-trivial content in the low-temperature phase (in the high-$T$ phase it is satisfied in the trivial form $0 = 0$). Figure 14 shows that the values of the three quantities involved in (18) are significantly different from zero below $T_c$ (we have already shown in better detail that the infinite volume limit of such quantities is non-zero).

In figure 15 we show the difference between the left hand side and the right hand side of (18). The two contributions cancel out with good accuracy, and asymptotically for large lattice sizes the difference extrapolates to zero.

Another possible way to visualize the result is to plot the ratio of the left hand side and the right hand side. As figure 16 shows, for $T$ below $T_c$ we get identically one, while for $T > T_c$ we get the value expected for a Gaussian $P(q)$.
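The two sides of the sum rule are easy to evaluate from per-sample thermal averages; the toy check below (our own illustration, with a synthetic Gaussian $P_J(q)$ mimicking a self-averaging warm phase) shows the ratio settling on the Gaussian value $3/5$ implied by the identity as written, rather than on one:

```python
import numpy as np

def sumrule_ratio(q2_J, q4_J):
    """lhs/rhs of E[<q^2>^2] = (2/3) E[<q^2>]^2 + (1/3) E[<q^4>],
    given the per-sample thermal averages q2_J = <q^2>_J, q4_J = <q^4>_J."""
    q2_J = np.asarray(q2_J, float); q4_J = np.asarray(q4_J, float)
    lhs = np.mean(q2_J ** 2)
    rhs = (2.0 / 3.0) * np.mean(q2_J) ** 2 + (1.0 / 3.0) * np.mean(q4_J)
    return float(lhs / rhs)

# warm-phase toy: every sample has the same Gaussian P_J(q)
rng = np.random.default_rng(4)
q = rng.normal(0.0, 1.0, size=(200, 10000))   # samples x thermal draws
ratio = sumrule_ratio((q ** 2).mean(axis=1), (q ** 4).mean(axis=1))
```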

We have also fitted the difference plotted in figure 15, for various values of the temperature $T$: in all cases a fit to a zero asymptotic value with power corrections works very well, and the exponent of the corrections is similar for all $T$ values. As an example we plot the data together with the best fit at one fixed temperature in figure 17.

## 7 Conclusions

In this note we have been able to give strong evidence for mean field behavior of the four-dimensional Ising spin glass with binary couplings. Life in the $D = 4$ model is easier than in $D = 3$, where even after a large number of intensive numerical simulations the evidence for a phase transition is still slightly marginal (even if, at this point, convincing enough). In our case the crossing of the Binder cumulants is already the very clear signature of a typical phase transition (as opposed to the quasi-merging, quasi-Kosterlitz-Thouless behavior of the $D = 3$ case). It is clear that $D = 3$ is very close to the lower critical dimension, and that there observing the effects of the physical critical point is dramatically difficult. $D = 4$ is on the safer side, and numerical simulations show that very clearly.

We have been able to determine the critical exponents precisely, and to enter the large volume region with good accuracy. For example we have been able to show that the peak of $P(q)$ does not go to $q = 0$ for increasing lattice size, and (with a slightly worse accuracy and level of reliability) that the plateau at $q \simeq 0$ does not decrease with increasing lattice size. We also remind the reader that non-trivial sum rules are satisfied with very good accuracy. So, things look quite clear in the $D = 4$ case.

## Acknowledgments

We thank Giorgio Parisi, Federico Ricci-Tersenghi and Juan Ruiz-Lorenzo for a number of interesting discussions.

## References

- [1] M. Mezard, G. Parisi and M. A. Virasoro, Spin Glass Theory and Beyond, (World Scientific, Singapore 1987)
- [2] R. N. Bhatt and A. P. Young, Phys. Rev. B 37 (1988) 5606.
- [3] J. D. Reger, R. N. Bhatt and A. P. Young, Phys. Rev. Lett. 64 (1990) 1859.
- [4] G. Parisi and F. Ritort, J. Phys. A: Math Gen. 26 (1993) 6711.
- [5] J. C. Ciria, G. Parisi and F. Ritort, J. Phys. A: Math Gen. 26 (1993) 6731.
- [6] D. Badoni, J. C. Ciria, G. Parisi, F. Ritort, J. Pech and J. J. Ruiz-Lorenzo, Europhys. Lett. 21 (1993) 495.
- [7] G. Parisi, F. Ricci-Tersenghi and J. J. Ruiz-Lorenzo, J. Phys. A: Math Gen. 29 (1996) 7943, cond-mat/9606051.
- [8] L. W. Bernardi and I. A. Campbell, Phys. Rev. B 56 (1997) 5271, cond-mat/9704086.
- [9] E. Marinari and G. Parisi, Europhys. Lett. 19 (1992) 451, hep-lat/9205018; M. C. Tesi, E. J. Janse van Rensburg, E. Orlandini and S. G. Whittington, J. Stat. Phys. 82 (1996) 155; K. Hukushima and K. Nemoto, J. Phys. Soc. Japan 65 (1996) 1604, cond-mat/9512035.
- [10] E. Marinari, Optimized Monte Carlo Methods, in Advances in Computer Simulation , edited by J. Kertész and I. Kondor (Springer-Verlag, Berlin 1998)
- [11] B. A. Berg, Nucl. Phys. Proc. Suppl. 63 (1998) 982, and references therein.
- [12] H. Rieger, Fast Vectorized Algorithm for the Monte Carlo Simulation of the Random Field Ising Model, HLRZ 53/92, hep-lat/9208019
- [13] H. Flyvbjerg, Error Estimates on Averages of Correlated Data, in Advances in Computer Simulation , edited by J. Kertész and I. Kondor (Springer-Verlag, Berlin 1998).
- [14] E. Marinari, G. Parisi and J. J. Ruiz-Lorenzo, Phase Structure of the Three-Dimensional Edwards-Anderson Spin Glass, Phys. Rev. B 58 (1998) 14852, cond-mat/9802211.
- [15] D. Iñiguez, G. Parisi, J. J. Ruiz Lorenzo, J. Phys. A: Math. Gen. 29 (1996) 4337, cond-mat/9603083.
- [16] See for example C. M. Newman and D. L. Stein, Phys. Rev. Lett. 76, 515 (1996), and references therein.
- [17] M. Mézard, G. Parisi, N. Sourlas, G. Toulouse and M. A. Virasoro, J. Physique 45, 843 (1984).
- [18] E. Marinari, G. Parisi, F. Ritort and J. Ruiz-Lorenzo, Phys. Rev. Lett. 76 (1996) 843, cond-mat/9508036.
- [19] F. Guerra, Int. J. Mod. Phys. B 10 (1996) 1675.
- [20] M. Aizenman and P. Contucci, J. Stat. Phys. 92 (1998) 765, cond-mat/9712129.
- [21] G. Parisi, On the Probabilistic Formulation of the Replica Approach to Spin Glasses, cond-mat/9801081.
- [22] E. Marinari, G. Parisi, F. Ricci-Tersenghi, J. Ruiz-Lorenzo and F. Zuliani, Replica Symmetry Breaking in Short Range Spin Glasses: A Review of the Theoretical Foundations and of the Numerical Evidence, submitted to J. Stat. Phys., cond-mat/9906076