Monte-Carlo simulations make it possible to solve the transport equation with a minimal number of approximations. 🙂
On the other hand, they seem to take a maximum amount of computing time! 😢
Another way to look at Monte-Carlo simulations: they are the best tool we have, but they are not foolproof!
Monte-Carlo transport simulations are not concerned with solving the transport equation at all: they simply simulate neutrons one by one, from birth at the source to absorption or leakage (escape from the system).
Given a source of neutrons, all histories are simulated by sampling collision probabilities from nuclear data.
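To make this concrete, here is a minimal sketch in Python (with made-up cross-section values) of the most basic sampling step: drawing the distance to the next collision from the exponential free-flight distribution, d = -ln(ξ)/Σt, and then choosing the collision type from the macroscopic cross sections.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative macroscopic cross sections (cm^-1); real values come from the nuclear data.
sigma_s = 0.30               # scattering
sigma_a = 0.05               # absorption
sigma_t = sigma_s + sigma_a  # total

# Distance to the next collision: d = -ln(xi) / sigma_t
# (1 - xi is in (0, 1], which avoids log(0); the distribution is the same).
xi = rng.random()
distance = -np.log(1.0 - xi) / sigma_t

# Type of collision: scattering with probability sigma_s / sigma_t, absorption otherwise.
is_scatter = rng.random() < sigma_s / sigma_t
print(f"next collision in {distance:.3f} cm, "
      f"{'scattering' if is_scatter else 'absorption'}")
```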
Macroscopic quantities of interest (flux, reaction rates) are estimated from the histories via appropriate estimators.
Since all results of a simulation are random variables, the code outputs their estimated mean and the error of the mean. The error of the mean is inversely proportional to the square root of the number of histories simulated.
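A minimal sketch of what such an estimator does, using fake per-history scores in place of a real tally: the sample mean is reported together with its standard error, which indeed shrinks as 1/√N (quadrupling the number of histories roughly halves the error).

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_and_error(scores):
    """Sample mean of per-history scores and the estimated error of the mean."""
    mean = scores.mean()
    # Standard error of the mean: sample standard deviation / sqrt(N)
    error = scores.std(ddof=1) / np.sqrt(len(scores))
    return mean, error

# Fake per-history scores standing in for a tally (e.g. a flux estimator).
for n_histories in (1_000, 4_000, 16_000):
    scores = rng.exponential(scale=1.0, size=n_histories)
    m, e = mean_and_error(scores)
    print(f"N = {n_histories:6d}: mean = {m:.4f} +/- {e:.4f}")
# Quadrupling N roughly halves the reported error.
```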
When looking at real systems, the number of histories simulated is usually many orders of magnitude smaller than the number of neutrons in the real system. That means that we often run the risk of under-sampling.
In this case the source is known, and one wants to find the number of particles that arrive at some detector.
Since radioactive sources are dangerous things, they are usually shielded to protect people passing by. This means that the probability that a source particle reaches the detector of interest is usually very (or even vanishingly) small, so natural (analog) simulations are ineffective: you need to resort to some variance reduction technique.
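As an illustration, here is a minimal sketch of one of the simplest variance reduction techniques, implicit capture (survival biasing): the particle is never killed at a collision, its weight is simply multiplied by the non-absorption probability, and Russian roulette is played when the weight becomes too small. The cross-section values and weight thresholds are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

WEIGHT_CUTOFF = 0.1    # below this weight, play Russian roulette
SURVIVAL_WEIGHT = 0.5  # weight assigned to survivors of the roulette

def implicit_capture_step(weight, sigma_a, sigma_t):
    """One collision with implicit capture: the particle always survives,
    but its weight is reduced by the non-absorption probability."""
    weight *= 1.0 - sigma_a / sigma_t
    if weight < WEIGHT_CUTOFF:
        # Russian roulette keeps the expected weight unchanged (unbiased on average).
        if rng.random() < weight / SURVIVAL_WEIGHT:
            weight = SURVIVAL_WEIGHT
        else:
            weight = 0.0  # history terminated
    return weight

# Follow the weight of one particle through a few collisions.
w = 1.0
while w > 0.0:
    w = implicit_capture_step(w, sigma_a=0.2, sigma_t=0.5)
    print(f"weight after collision: {w:.3f}")
```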
In this case the (fission) source is not known, but is part of the solution.
The eigenvalue problem is usually (exclusively?) solved by the power iteration. You start from an arbitrary neutron source (this is the initial guess: please always start from a spatially homogeneous distribution over the whole system!), from which you construct the next-generation neutron source, and so on. When the source has converged, deterministic codes stop, but Monte-Carlo codes start collecting scores.
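A minimal sketch of how this power iteration is typically organized; transport_one_generation is a hypothetical placeholder that transports one batch of neutrons and returns the fission sites it produced together with a multiplication-factor estimate.

```python
def power_iteration(initial_source, n_inactive, n_active, transport_one_generation):
    """Skeleton of a Monte-Carlo eigenvalue (power iteration) calculation.

    `transport_one_generation(source)` is a placeholder: it transports one
    batch of neutrons and returns (fission_sites, k_estimate).
    """
    source = initial_source  # ideally spatially homogeneous over the whole system
    k_estimates = []

    # Inactive cycles: only converge the fission source, collect nothing.
    for _ in range(n_inactive):
        source, _ = transport_one_generation(source)

    # Active cycles: the source is (assumed) converged, start scoring.
    for _ in range(n_active):
        source, k = transport_one_generation(source)
        k_estimates.append(k)

    k_mean = sum(k_estimates) / len(k_estimates)
    return k_mean, k_estimates
```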
In this case you couple your transport MC solver with a Bateman isotopic concentration solver. That is, you couple a Monte-Carlo code with a deterministic code!
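A minimal sketch of such a coupling loop, with all solver functions as placeholders: each time step alternates a Monte-Carlo transport solve (providing reaction rates) with a deterministic Bateman solve (updating the nuclide concentrations).

```python
def deplete(concentrations, time_steps, run_transport, solve_bateman):
    """Simple explicit coupling of transport and depletion.

    `run_transport(concentrations)` -> reaction rates (the Monte-Carlo solve).
    `solve_bateman(concentrations, rates, dt)` -> concentrations at the end of the step.
    Both are placeholders for the actual solvers.
    """
    history = [concentrations]
    for dt in time_steps:
        # Monte-Carlo transport solve with the current material compositions.
        rates = run_transport(concentrations)
        # Deterministic Bateman solve for the evolution of the nuclide densities.
        concentrations = solve_bateman(concentrations, rates, dt)
        history.append(concentrations)
    return history
```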
Monte-Carlo transient calculations are a novel application (since 2013). The issue here is the difference between the time scale of prompt neutrons (the ones we follow) and the time scale of neutron precursor decay, which are several (4-5) orders of magnitude apart. Again, some variance reduction technique is necessary to achieve reasonable (but still long!) computation times.
All the data contained in the ENDF files is used with no approximations. In particular, the energy dependence of collisions is treated rigorously (as opposed to all deterministic codes, where the multi-group approximation must be used).
Complex geometries can be simulated, even if there is usually a restriction to quadric (second-order) surfaces, plus tori.
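For illustration, a minimal sketch of the kind of surface such geometries are built from: a general quadric, whose sign at a point gives the "surface sense" used to decide on which side of the surface the point lies. The helper name is mine, not any particular code's API.

```python
def quadric(point, coeffs):
    """Evaluate a general quadric
        A x^2 + B y^2 + C z^2 + D xy + E yz + F zx + G x + H y + I z + J
    at a point. The sign of the result tells on which side of the surface
    the point lies.
    """
    x, y, z = point
    a, b, c, d, e, f, g, h, i, j = coeffs
    return (a * x * x + b * y * y + c * z * z
            + d * x * y + e * y * z + f * z * x
            + g * x + h * y + i * z + j)

# Example: a sphere of radius 10 cm centred at the origin (x^2 + y^2 + z^2 - 100 = 0).
sphere = (1, 1, 1, 0, 0, 0, 0, 0, 0, -100.0)
print(quadric((0.0, 0.0, 0.0), sphere) < 0)   # True: inside the sphere
print(quadric((20.0, 0.0, 0.0), sphere) < 0)  # False: outside
```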
In neutron transport, all the code does is sample the distributions prescribed by the nuclear data. The corollary is that all MC codes are supposed to give the same answer when simulating the same system with the same nuclear data. This last statement is not true for gamma transport, and even less so for charged-particle transport, where models (of varying accuracy) are implemented inside the code.
The code gives you an answer with a statistical error. Just don't mistake it for the distance from the truth!
You are supposed to discretize your system into volumes where the isotopic concentrations (and temperatures) are constant. This is easy enough for fresh, zero-power systems, harder for power reactors.
The statistical error of any Monte-Carlo result decreases as the inverse square root of the number of histories simulated. This means that if you want to reduce your error by a factor of two, you must increase your number of histories (and consequently your computing time) by a factor of four!
Your results will depend on the data library you are using. The upside of this fact is that if you solve your problem with different libraries, you will get a feeling for how sensitive your problem is to data uncertainties (imperfect knowledge).
Variance reduction techniques are supposed to be unbiased. This means that all particles that arrive at the detector have the right weight. On the other hand, each technique neglects some possible histories, and we don't know for sure that the importance of these neglected histories is negligible.
A corollary of the previous point is that if you use several variance reduction techniques on the same problem (all unbiased), the one that gives the highest estimate is the closest to (or the least far from) the truth.
You are not allowed to start collecting statistics before source convergence is achieved (those are the inactive cycles). Some methods allow you to monitor this convergence, such as the Shannon entropy of the fission source. Underestimating the number of necessary inactive cycles will lead to biases.
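A minimal sketch of the entropy diagnostic, assuming the fission sites of each cycle are binned on a coarse spatial mesh: the Shannon entropy H = -Σ p_i log2(p_i) is tracked cycle by cycle, and a plateau suggests (but does not guarantee) source convergence.

```python
import numpy as np

def shannon_entropy(fission_sites, edges):
    """Shannon entropy of the fission source binned on a regular 3D mesh.

    `fission_sites`: (N, 3) array of site coordinates for one cycle.
    `edges`: (x_edges, y_edges, z_edges) defining the coarse mesh.
    """
    counts, _ = np.histogramdd(fission_sites, bins=edges)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

# Track the entropy over cycles; a plateau suggests source convergence:
# entropy_per_cycle = [shannon_entropy(sites, mesh_edges) for sites in cycles]
```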
The fissions of the current cycle are used as the next cycle's source. This means that successive cycles are correlated. This is not a problem when computing the mean (of keff, the flux, etc.), but it is when computing the error of the mean. Thus the error estimate in the code output (computed assuming the values are all i.i.d.) is systematically underestimated. This underestimation is negligible for small systems, but can reach a factor of 10 for full-core calculations of large thermal power reactors. The amount of underestimation also depends on the size of the detector (for instance, it is not the same for an assembly-integrated flux and a pin-integrated flux). The sure way to get unbiased error estimates is to resort to independent simulations (you replicate the simulation with different random sequences).
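A minimal sketch of the independent-simulations approach; run_simulation is a placeholder for one complete Monte-Carlo calculation run with its own random sequence. The spread of the replica means gives an error estimate free of the inter-cycle correlation.

```python
import numpy as np

def replicated_error(run_simulation, seeds):
    """Unbiased error estimate from independent replicas.

    `run_simulation(seed)` is a placeholder: one full Monte-Carlo calculation
    run with its own random sequence, returning the quantity of interest
    (e.g. a pin-integrated flux).
    """
    values = np.array([run_simulation(seed) for seed in seeds])
    mean = values.mean()
    # Error of the mean over M truly independent replicas.
    error = values.std(ddof=1) / np.sqrt(len(values))
    return mean, error

# Usage (hypothetical): mean, err = replicated_error(run_simulation, seeds=range(20))
```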
A convergence study of the spatial discretization is necessary here, because a spatial discretization adapted to the initial time is not necessarily adapted to later times, when the isotopic concentrations have evolved. Think of the rim effect in LWR pellets (coming from spatial self-shielding effects). You are supposed to refine until the result no longer changes.
A convergence study of the temporal discretization is also necessary, the converged discretization depending on the order of the time-splitting algorithm. You are supposed to refine until the result no longer changes.
The only way to attach a statistical error to the computed values is to resort to independent simulations.
Since the coupling between Boltzmann and Bateman is non-linear, the statistical error of the Monte-Carlo solution will transform into a bias. A convergence study in population may be in order!
The transport equation is derived as a continuous limit, that is, under the assumption that there is an asymptotically large number of neutrons at every point of the system. In a power reactor the number of neutrons involved is also very large. In our Monte-Carlo simulations, on the other hand, the number of simulated histories is (very) finite. The effects of this finite number of simulated histories may show up as discrepancies between the behavior of real systems and the behavior of the simulation.
Clustering is an effect of the small size of the simulated population coupled with the branching nature of fission. An insufficient number of simulated histories (under-sampling) will cause biases (in keff, the flux, etc.). A convergence study in population is necessary for large systems (increase the number of histories until the results no longer change).
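A minimal sketch of such a convergence study in population; run_calculation is a placeholder returning the observable of interest and its reported statistical error for a given number of histories per cycle.

```python
def converge_in_population(run_calculation, n_start, tolerance, max_doublings=6):
    """Double the number of histories per cycle until the result stabilizes.

    `run_calculation(n_histories)` is a placeholder returning (value, error)
    for the observable of interest (e.g. keff or a local flux).
    """
    n = n_start
    value, error = run_calculation(n)
    for _ in range(max_doublings):
        n *= 2
        new_value, new_error = run_calculation(n)
        # Stop when the change is small compared to the tolerance and the errors.
        if abs(new_value - value) < max(tolerance, 2.0 * max(error, new_error)):
            return new_value, new_error, n
        value, error = new_value, new_error
    return value, error, n
```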
Renormalization of the fission source at the end of each cycle also plays a part in producing biased results. How this couples with clustering, or whether it is independent of it, is not totally clear to me 🤔.
A number of international benchmarks have demonstrated what a colleague of mine used to call the "user effect".
You specify a system as well as you can, and the participants then solve the problem with a code of their choice. The results are usually somewhat different, with a few suspicious outliers. While this is probably expected for deterministic codes, it is surprising when it happens with Monte-Carlo codes, even when the participants (who are experienced researchers) use the same code with the same data libraries. A few possible explanations:
People can't read/understand specifications
People make mistakes
Codes (even Monte-Carlo codes) have some options that can be used appropriately or inappropriately
Corollary: if the results of the simulation are really important to you, you had better ask different people to solve the problem independently, and then try to find a consensus 🙂!
Just one word about multi-group Monte-Carlo: don't! You get the worst of both worlds: the inaccuracy of the deterministic methods with the computing time of the Monte-Carlo. 😂