Multiple Interactions

The starting point for the multiple interactions physics scenario in PYTHIA is provided by [Sjo87]. Recent developments have included a more careful study of flavour and colour correlations, junction topologies and the relationship to beam remnants [Sjo04], and interleaving with initial-state radiation [Sjo05], making use of transverse-momentum-ordered initial- and final-state showers. Currently a framework to handle rescattering is under development.

A big unsolved issue is how the colours of all these subsystems are correlated. Certainly there is a correlation coming from the colour-singlet nature of the incoming beams, but in addition final-state colour rearrangements may change the picture. Indeed such extra effects appear necessary to describe data, e.g. on <pT>(n_ch). A simple implementation of colour rearrangement is found as part of the beam-remnants description.

Main variables

Matching to hard process

The maximum pT to be allowed for multiple interactions is related to the nature of the hard process itself. It involves a delicate balance between not doublecounting and not leaving any gaps in the coverage. The best procedure may depend on information only the user has: how the events were generated and mixed (e.g. with Les Houches Accord external input), and how they are intended to be used. Therefore a few options are available, with a sensible default behaviour.

mode  MultipleInteractions:pTmaxMatch   (default = 0; minimum = 0; maximum = 2)
Way in which the maximum scale for multiple interactions is set to match the scale of the hard process itself.
option 0 : (i) if the final state of the hard process (not counting subsequent resonance decays) contains only quarks (u, d, s, c, b), gluons and photons then pT_max is chosen to be the factorization scale for internal processes and the scale value for Les Houches input; (ii) if not, interactions are allowed to go all the way up to the kinematical limit. The reasoning is that the former kind of processes are generated by the multiple-interactions machinery and so would doublecount hard processes if allowed to overlap the same pT range, while no such danger exists in the latter case.
option 1 : always use the factorization scale for an internal process and the scale value for Les Houches input, i.e. the lower value. This should avoid doublecounting, but may leave out some interactions that ought to have been simulated.
option 2 : always allow multiple interactions up to the kinematical limit. This will simulate all possible event topologies, but may lead to doublecounting.

Cross-section parameters

The rate of interactions is determined by

parm  MultipleInteractions:alphaSvalue   (default = 0.127; minimum = 0.06; maximum = 0.25)
The value of alpha_strong at m_Z. Default value is picked equal to the one used in CTEQ 5L.

The actual value is then regulated by running to the scale pT^2, at which alpha_strong is evaluated

mode  MultipleInteractions:alphaSorder   (default = 1; minimum = 0; maximum = 2)
The order at which alpha_strong runs at scales away from m_Z.
option 0 : zeroth order, i.e. alpha_strong is kept fixed.
option 1 : first order, which is the normal value.
option 2 : second order. Since other parts of the code do not go to second order there is no strong reason to use this option, but there is also nothing wrong with it.

QED interactions are regulated by the alpha_electromagnetic value at the pT^2 scale of an interaction.

mode  MultipleInteractions:alphaEMorder   (default = 1; minimum = -1; maximum = 1)
The running of alpha_em used in hard processes.
option 1 : first-order running, constrained to agree with StandardModel:alphaEMmZ at the Z^0 mass.
option 0 : zeroth order, i.e. alpha_em is kept fixed at its value at vanishing momentum transfer.
option -1 : zeroth order, i.e. alpha_em is kept fixed, but at StandardModel:alphaEMmZ, i.e. its value at the Z^0 mass.

Note that the choices of alpha_strong and alpha_em made here override the ones implemented in the normal process machinery, but only for the interactions generated by the MultipleInteractions class.

In addition there is the possibility of a global rescaling of cross sections (which could not easily be accommodated by a changed alpha_strong, since alpha_strong runs)

parm  MultipleInteractions:Kfactor   (default = 1.0; minimum = 0.5; maximum = 4.0)
Multiply all cross sections by this fixed factor.

The processes used to generate multiple interactions form a subset of the standard library of hard processes. The input is slightly different from the standard hard-process machinery, however, since incoming flavours, the alpha_strong value and most of the kinematics are already fixed when the process is called. It is possible to regulate the set of processes that are included in the multiple-interactions framework.

mode  MultipleInteractions:processLevel   (default = 3; minimum = 0; maximum = 3)
Set of processes included in the machinery.
option 0 : only the simplest 2 -> 2 QCD processes between quarks and gluons, giving no new flavours, i.e. dominated by t-channel gluon exchange.
option 1 : also 2 -> 2 QCD processes giving new flavours (including charm and bottom), i.e. proceeding through s-channel gluon exchange.
option 2 : also 2 -> 2 processes involving one or two photons in the final state, s-channel gamma boson exchange and t-channel gamma/Z^0/W^+- boson exchange.
option 3 : also charmonium and bottomonium production, via colour singlet and colour octet channels.

Cross-section regularization

There are two complementary ways of regularizing the small-pT divergence, a sharp cutoff and a smooth dampening. These can be combined as desired, but it makes sense to coordinate with how the same issue is handled in spacelike showers. Actually, by default, the parameters defined here are used also for the spacelike showers, but this can be overridden.

Regularization of the divergence of the QCD cross section for pT -> 0 is obtained by a factor pT^4 / (pT0^2 + pT^2)^2, and by using an alpha_s(pT0^2 + pT^2). An energy dependence of the pT0 choice is introduced by two further parameters, so that pT0Ref is the pT0 value for the reference CM energy, pT0Ref = pT0(ecmRef).
Warning: if a large pT0 is picked for multiple interactions, such that the integrated interaction cross section is below the nondiffractive inelastic one, this pT0 will automatically be scaled down to cope.

The actual pT0 parameter used at a given CM energy scale, ecmNow, is obtained as
pT0 = pT0(ecmNow) = pT0Ref * (ecmNow / ecmRef)^ecmPow
where pT0Ref, ecmRef and ecmPow are the three parameters below.
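
As an illustration, the energy rescaling above can be sketched in a few lines of Python (not part of PYTHIA; the default values below are those of the three parameters that follow):

```python
def pT0(ecmNow, pT0Ref=2.25, ecmRef=1800.0, ecmPow=0.24):
    """pT0(ecmNow) = pT0Ref * (ecmNow / ecmRef)^ecmPow, in GeV."""
    return pT0Ref * (ecmNow / ecmRef) ** ecmPow

# At the reference energy the scale is pT0Ref itself;
# it then grows slowly with the collision energy.
print(pT0(1800.0))    # 2.25 exactly, by construction
print(pT0(14000.0))   # about 3.7 at LHC design energy
```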

parm  MultipleInteractions:pT0Ref   (default = 2.25; minimum = 0.5; maximum = 10.0)
The pT0Ref scale in the above formula.
Note: pT0Ref is one of the key parameters in a complete PYTHIA tune. Its value is intimately tied to a number of other choices, such as that of colour flow description, so unfortunately it is difficult to give an independent meaning to pT0Ref.

parm  MultipleInteractions:ecmRef   (default = 1800.0; minimum = 1.)
The ecmRef reference energy scale introduced above.

parm  MultipleInteractions:ecmPow   (default = 0.24; minimum = 0.0; maximum = 0.5)
The ecmPow energy rescaling pace introduced above.

Alternatively, or in combination, a sharp cut can be used.

parm  MultipleInteractions:pTmin   (default = 0.2; minimum = 0.1; maximum = 10.0)
Lower cutoff in pT, below which no further interactions are allowed. Normally pT0 above would be used to provide the main regularization of the cross section for pT -> 0, in which case pTmin is used mainly for technical reasons. It is possible, however, to set pT0Ref = 0 and use pTmin to provide a step-function regularization, or to combine them in intermediate approaches. Currently pTmin is taken to be energy-independent.

Gösta Gustafson has proposed (private communication, unpublished) that the amount of screening, as encapsulated in the pT0 parameter, fluctuates from one event to the next. Specifically, high-activity events are more likely to lead to interactions at large pT scales, but the high activity simultaneously leads to a larger screening of interactions at smaller pT. Such a scenario can approximately be simulated by scaling up pT0 by a factor sqrt(n), where n is the number of interactions considered so far, including the current one. That is, for the first interaction the dampening factor is pT^4 / (pT0^2 + pT^2)^2, for the second pT^4 / (2 pT0^2 + pT^2)^2, for the third pT^4 / (3 pT0^2 + pT^2)^2, and so on. Optionally the scheme may also be applied to ISR emissions. For simplicity the same alpha_s(pT0^2 + pT^2) is used throughout. Note that, in this scenario, the pT0 scale must start out lower than in the normal case, since it is later increased back up. Also note that this scenario is intended as an alternative to colour reconnection for understanding the rise of <pT>(n_ch), so the amount of colour reconnection should be reduced accordingly.
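
A minimal sketch (Python, purely illustrative, not PYTHIA code) of the dampening factor in this scenario, with pT2 = pT^2 and pT02 = pT0^2:

```python
def dampening(pT2, pT02, n):
    """Regularization factor pT^4 / (n * pT0^2 + pT^2)^2 for the
    n'th interaction (n = 1, 2, ...): the effective pT0^2 grows
    linearly with the number of interactions considered so far."""
    return pT2 ** 2 / (n * pT02 + pT2) ** 2

# At pT = pT0 the first interaction is dampened by a factor 1/4,
# the second by 1/9, the third by 1/16, and so on.
```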

mode  MultipleInteractions:enhanceScreening   (default = 0; minimum = 0; maximum = 2)
Choice to activate the above screening scenario, i.e. an increasing effective pT0 for consecutive interactions.
option 0 : No activity-dependent screening, i.e. pT0 is fixed.
option 1 : The pT0 scale is increased as a function of the number of MI's, as explained above. ISR is not affected, but note that, if SpaceShower:samePTasMI is on, then MultipleInteractions:pT0Ref is used also for ISR, which may or may not be desirable.
option 2 : Both MI and ISR influence and are influenced by the screening. That is, the effective pT0 is increased based on the total number of MI and ISR steps considered so far, including the current one. This enhanced screening is applied both to MI and to ISR emissions, for the latter provided that SpaceShower:samePTasMI is on (default).

Impact-parameter dependence

The choice of impact-parameter dependence is regulated by several parameters.

mode  MultipleInteractions:bProfile   (default = 1; minimum = 0; maximum = 3)
Choice of impact parameter profile for the incoming hadron beams.
option 0 : no impact parameter dependence at all.
option 1 : a simple Gaussian matter distribution; no free parameters.
option 2 : a double Gaussian matter distribution, with the two free parameters coreRadius and coreFraction.
option 3 : an overlap function, i.e. the convolution of the matter distributions of the two incoming hadrons, of the form exp(- b^expPow), where expPow is a free parameter.

parm  MultipleInteractions:coreRadius   (default = 0.4; minimum = 0.1; maximum = 1.)
When assuming a double Gaussian matter profile, bProfile = 2, the inner core is assumed to have a radius that is a factor coreRadius smaller than the rest.

parm  MultipleInteractions:coreFraction   (default = 0.5; minimum = 0.; maximum = 1.)
When assuming a double Gaussian matter profile, bProfile = 2, the inner core is assumed to have a fraction coreFraction of the matter content of the hadron.

parm  MultipleInteractions:expPow   (default = 1.; minimum = 0.4; maximum = 10.)
When bProfile = 3 it gives the power of the assumed overlap shape exp(- b^expPow). Default corresponds to a simple exponential drop, which is not too dissimilar from the overlap obtained with the standard double Gaussian parameters. For expPow = 2 we reduce to the simple Gaussian, bProfile = 1, and for expPow -> infinity to no impact parameter dependence at all, bProfile = 0. For small expPow the program becomes slow and unstable, so the min limit must be respected.
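
The limiting behaviour of the bProfile = 3 overlap shape can be seen from a small sketch (Python, illustrative only; normalization constants are omitted):

```python
import math

def overlap(b, expPow):
    """Unnormalized matter overlap exp(-b^expPow) for bProfile = 3."""
    return math.exp(-b ** expPow)

# expPow = 1 gives the simple exponential drop (the default),
# expPow = 2 reduces to the simple Gaussian of bProfile = 1,
# and large expPow approaches a sharp edge at b = 1, i.e. no
# impact-parameter dependence inside the allowed disk.
```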

Rescattering

It is possible that a parton may rescatter, i.e. undergo a further interaction subsequent to the first one. The machinery to model this kind of physics has only recently become fully operational [Cor09], and is therefore not yet so well explored.

The rescattering framework has ties with other parts of the program, notably with the beam remnants.

flag  MultipleInteractions:allowRescatter   (default = off)
Switch to allow rescattering of partons; on/off = true/false.
Warning: use with caution since machinery is still not so well tested.

flag  MultipleInteractions:allowDoubleRescatter   (default = off)
Switch to allow rescattering of partons, where both incoming partons have already rescattered; on/off = true/false. Is only used if MultipleInteractions:allowRescatter is switched on.
Warning: currently there is no complete implementation that combines it with shower evolution, so you must use PartonLevel:ISR = off and PartonLevel:FSR = off. If not, a warning will be issued and double rescattering will not be simulated. The rate also comes out to be much lower than for single rescattering, so to first approximation it can be neglected.

mode  MultipleInteractions:rescatterMode   (default = 0; minimum = 0; maximum = 4)
Selection of which partons rescatter against unscattered partons from the incoming beams A and B, based on their rapidity value y in the collision rest frame. Here ySep is shorthand for MultipleInteractions:ySepRescatter and deltaY for MultipleInteractions:deltaYRescatter, defined below. The description is symmetric between the two beams, so only one case is described below.
option 0 : only scattered partons with y > 0 can collide with unscattered partons from beam B.
option 1 : only scattered partons with y > ySep can collide with unscattered partons from beam B.
option 2 : the probability for a scattered parton to be considered as a potential rescatterer against unscattered partons in beam B increases linearly from zero at y = ySep - deltaY to unity at y = ySep + deltaY.
option 3 : the probability for a scattered parton to be considered as a potential rescatterer against unscattered partons in beam B increases with y according to (1/2) * (1 + tanh( (y - ySep) / deltaY)).
option 4 : all partons are potential rescatterers against both beams.
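
The five options can be summarized by the following sketch (Python, illustrative only, for the beam-B side; the beam-A side follows by symmetry under y -> -y):

```python
import math

def rescatter_prob(y, mode, ySep=0.0, deltaY=1.0):
    """Probability that a scattered parton at rapidity y is considered
    as a potential rescatterer against unscattered partons in beam B,
    for the rescatterMode options above."""
    if mode == 0:                      # step at y = 0
        return 1.0 if y > 0.0 else 0.0
    if mode == 1:                      # step at y = ySep
        return 1.0 if y > ySep else 0.0
    if mode == 2:                      # linear ramp over ySep +- deltaY
        return min(1.0, max(0.0, (y - ySep + deltaY) / (2.0 * deltaY)))
    if mode == 3:                      # smooth tanh transition
        return 0.5 * (1.0 + math.tanh((y - ySep) / deltaY))
    return 1.0                         # mode 4: all partons rescatter
```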

parm  MultipleInteractions:ySepRescatter   (default = 0.)
Used for some of the MultipleInteractions:rescatterMode options above, as the rapidity for which a scattered parton has a 50% probability to be considered as a potential rescatterer. A ySep > 0 generally implies that some central partons cannot rescatter at all, while a ySep < 0 instead allows central partons to scatter against either beam.

parm  MultipleInteractions:deltaYRescatter   (default = 1.; minimum = 0.1)
Used for some of the MultipleInteractions:rescatterMode options above, as the width of the rapidity transition region, where the probability rises from zero to unity that a scattered parton is considered as a potential rescatterer.

Further variables

These should normally not be touched. Their only function is for cross-checks.

mode  MultipleInteractions:nQuarkIn   (default = 5; minimum = 0; maximum = 5)
Number of allowed incoming quark flavours in the beams; a change to 4 would thus exclude b and bbar as incoming partons, etc.

mode  MultipleInteractions:nSample   (default = 1000; minimum = 100)
The allowed pT range is split (unevenly) into 100 bins, and in each of these the interaction cross section is evaluated in nSample random phase space points. The full integral is used at initialization, and the differential one during the run as a "Sudakov form factor" for the choice of the hardest interaction. A larger number implies increased accuracy of the calculations.

Technical notes

Relative to the articles mentioned above, not much has happened. The main news is a technical one, that the phase space of the 2 -> 2 (massless) QCD processes is now sampled in dy_3 dy_4 dpT^2, where y_3 and y_4 are the rapidities of the two produced partons. One can show that
(dx_1 / x_1) * (dx_2 / x_2) * d(tHat) = dy_3 * dy_4 * dpT^2
Furthermore, since cross sections are dominated by the "Rutherford" one of t-channel gluon exchange, which is enhanced by a factor of 9/4 for each incoming gluon, effective structure functions are defined as
F(x, pT2) = (9/4) * xg(x, pT2) + sum_i xq_i(x, pT2)
With this technical shift of factors 9/4 from cross sections to parton densities, a common upper estimate of
d(sigmaHat)/d(pT2) < pi * alpha_strong^2 / pT^4
is obtained.

In fact this estimate can be reduced by a factor of 1/2 for the following reason: for any configuration (y_3, y_4, pT2) also one with (y_4, y_3, pT2) lies in the phase space. Not both of those can enjoy being enhanced by the tHat -> 0 singularity of
d(sigmaHat) propto 1/tHat^2.
Or if they are, which is possible with identical partons like q q -> q q and g g -> g g, each singularity comes with half the strength. So, when integrating/averaging over the two configurations, the estimated d(sigmaHat)/d(pT2) drops. Actually, it drops even further, since the naive estimate above is based on
(4/9) * (1 + (uHat/sHat)^2) < 8/9 < 1
The 8/9 value would be approached for tHat -> 0, which implies sHat >> pT2 and thus a heavy parton-distribution penalty, while parton distributions are largest for tHat = uHat = -sHat/2, where the above expression evaluates to 5/9. A fudge factor is therefore introduced to go the final step, so it can easily be modified when further non-Rutherford processes are added, or should parton distributions change significantly.
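
The two limiting values quoted above are easy to check numerically (Python, illustrative):

```python
def angular_factor(u_over_s):
    """The (4/9) * (1 + (uHat/sHat)^2) combination discussed above."""
    return (4.0 / 9.0) * (1.0 + u_over_s ** 2)

print(angular_factor(-1.0))   # 8/9, the tHat -> 0 (uHat -> -sHat) limit
print(angular_factor(-0.5))   # 5/9, at tHat = uHat = -sHat/2
```
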

At initialization, it is assumed that
d(sigma)/d(pT2) < d(sigmaHat)/d(pT2) * F(x_T, pT2) * F(x_T, pT2) * (2 y_max(pT))^2
where the first factor is the upper estimate as above, the next two are the parton-density sums evaluated at y_3 = y_4 = 0, so that x_1 = x_2 = x_T = 2 pT / E_cm, where the product is expected to be maximal, and the final factor is the phase space for -y_max < y_{3,4} < y_max. The right-hand-side expression is scanned logarithmically in pT, and an N is determined such that it always is below N/pT^4.

To describe the dampening of the cross section at pT -> 0 by colour screening, the actual cross section is multiplied by a regularization factor (pT^2 / (pT^2 + pT0^2))^2, and the alpha_s is evaluated at a scale pT^2 + pT0^2, where pT0 is a free parameter of the order of 2 - 4 GeV. Since pT0 can be energy-dependent, an ansatz
pT0(ecm) = pT0Ref * (ecm/ecmRef)^ecmPow
is used, where ecm is the current CM frame energy, ecmRef is an arbitrary reference energy where pT0Ref is defined, and ecmPow gives the energy rescaling pace. For technical reasons, also an absolute lower pT scale pTmin, by default 0.2 GeV, is introduced. In principle, it is possible to recover older scenarios with a sharp pT cutoff by setting pT0 = 0 and letting pTmin be a larger number.

The above scanning strategy is then slightly modified: instead of an upper estimate N/pT^4 one of the form N/(pT^2 + r * pT0^2)^2 is used. At first glance, r = 1 would seem to be fixed by the form of the regularization procedure, but this does not take into account the nontrivial dependence on alpha_s, parton distributions and phase space. A better Monte Carlo efficiency is obtained for r somewhat below unity, and currently r = 0.25 is hardcoded. In the generation a trial pT2 is then selected according to
d(Prob)/d(pT2) = (1/sigma_ND) * N/(pT^2 + r * pT0^2)^2 * ("Sudakov")
For the trial pT2, a y_3 and a y_4 are then selected, and incoming flavours according to the respective F(x_i, pT2), and then the cross section is evaluated for this flavour combination. The ratio of trial/upper estimate gives the probability of survival.
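
The trial-pT2 selection can be sketched as a direct inversion of the integrated overestimate (Python, illustrative only; here c stands for N/sigma_ND, and the subsequent survival veto against the full cross section is omitted):

```python
import math, random

def next_trial_pT2(pT2now, c, pT02, r=0.25, rng=random):
    """Evolve downwards from pT2now with the overestimate
    f(pT2) = c / (pT2 + r * pT0^2)^2 and its own "Sudakov" factor.
    Solving exp(-Integral_{pT2}^{pT2now} f dq2) = R for a uniform
    random R gives the next trial pT2; 0.0 means evolution stopped."""
    R = rng.random()
    inv = 1.0 / (pT2now + r * pT02) - math.log(R) / c
    return max(1.0 / inv - r * pT02, 0.0)
```

Each accepted trial would then be dressed with y_3, y_4 and flavour choices and accepted or rejected against the full cross section, as described in the text.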

Actually, to profit from the factor 1/2 mentioned above, the cross section for the combination with y_3 and y_4 interchanged is also tried, which corresponds to exchanging tHat and uHat, and the average formed, while the final kinematics is given by the relative importance of the two.

Furthermore, since large y values are disfavoured by dropping PDF's, a factor
WT_y = (1 - (y_3/y_max)^2) * (1 - (y_4/y_max)^2)
is evaluated, and used as a survival probability before the more time-consuming PDF+ME evaluation, with surviving events given a compensating weight 1/WT_y.
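
As a sketch (Python, illustrative), the weight and its compensation look like:

```python
def WT_y(y3, y4, ymax):
    """Pre-survival probability in the rapidities; events that survive
    this cheap test get a compensating weight 1/WT_y, so the final
    result is unbiased while few PDF+ME evaluations are wasted."""
    return (1.0 - (y3 / ymax) ** 2) * (1.0 - (y4 / ymax) ** 2)

# Central rapidities always survive; |y| -> ymax is always rejected.
```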

An impact-parameter dependence is also allowed. Based on the hard pT scale of the first interaction, an enhancement/depletion factor is picked, which multiplies the rate of subsequent interactions.

Parton densities are rescaled and modified to take into account the energy-momentum and flavours kicked out by already-considered interactions.