Quasi-liquid crystals of electrons and positrons


This work concerns the discovery of a time-domain "anomaly" in the infrared synchrotron radiation spectra emitted by electrons and positrons at both the DAΦNE Φ-Factory (Frascati National Laboratories, Italy) and the Hefei Light Source (HLS) at the NSRL (National Synchrotron Radiation Laboratory, People's Republic of China). The study was conducted with the SHT unconventional statistical category calculus system, developed and patented by the present author for the analysis of complex systems. The anomaly found in the IR synchrotron radiation emission profile of each single bunch of electrons and positrons has been resolved by SHT analysis into two distinct waveform components, one of which is "delayed" by a few hundred ps with respect to the other. A detailed and in-depth analysis excludes that the anomaly is the result of systematic errors. The measured time differences between the two signals lead to an apparent discrepancy in the value of the speed of light in a vacuum. A deep time series analysis of the anomaly, based on considerations on the coherent emission of synchrotron radiation (CSR), demonstrates the existence of a distribution of structures and degrees of freedom inside a bunch of particles. This evidence is in contrast with the "rigid bunch" model (J. Schwinger 1945). We therefore propose a model called "CFNM" (Coherent Fractal Nematic Mesophase), which describes the transition from a phase of maximum symmetry to a condensed phase, homologous to the nematic mesophase of liquid crystals. This model could have significant consequences in the study, modeling and measurement of the operating parameters of future accelerator machines and collectors, in particular with regard to emission and brightness.

This work discusses the data analysis of an experiment of the Italian National Institute of Nuclear Physics (INFN), called "3+L". This experiment was aimed at carrying out real-time beam diagnostics for the Φ-Factory DAΦNE, but had been abandoned by the Accelerator Division at the Frascati Laboratories because conventional data analysis had led to nothing. It was therefore entrusted to me because I had developed an unconventional statistical calculus system based on categories, a sort of "extension" of Set Theory. The work has continued to this day, revealing unexpected and interesting aspects.

Introduction
Particle accelerators play a fundamental role in many technological and scientific fields, both for fundamental research (high-energy and particle physics) and for interdisciplinary applications. One of the crucial fields concerning the activity, performance and safety of accelerator machines is the diagnostics of particle beams, both for large circular colliders designed for high energy physics such as the LHC, and for accelerators dedicated to the production of synchrotron radiation. Beam diagnostics is a complex of methods and technologies essential for the operation of these important and expensive machines, since it allows us to monitor the properties of the particle beams in order to improve their performance. Among the various processes concerning accelerator physics and, in particular, those dedicated to the monitoring and study of particle beams, one of the most important topics is the analysis of the emission of the radiation field (the "synchrotron radiation") generated by accelerated charged particles.
Synchrotron radiation is typically extracted from a bending magnet through a special optical window and then transmitted, along a dedicated beamline, to a complex of detectors that measure specific and observable characteristics of the source itself. The advantages of a direct use of synchrotron radiation (hereinafter abbreviated as "SR") in diagnostics lie in the following specific features: 1) SR is a very reproducible source that can be characterized with high precision; 2) SR is a non-invasive and non-destructive probe of the particle beam, and is therefore ideal for diagnostics; 3) SR is a source distributed over a wide spectrum of radiation (from the far IR to X-rays). Consequently, the photon energy can be chosen based on the type of diagnostics and the characteristics of the detectors; 4) SR is a highly collimated source with a well-defined angular divergence; 5) SR is a very bright source; 6) SR is a source with a well-defined temporal structure, which depends directly on the longitudinal dimensions of the beam.
[ Figure 1 about here.] One of the most important characteristics of SR lies in the peculiarities of the emission spectrum. In the case of particles accelerated by a bending magnet with radius ρ, SR is characterized by a critical wavelength λ_c which divides the integral of the radiated power into two equal parts. For wavelengths λ ≪ λ_c the power spectrum of the radiation tends rapidly to zero, while for λ ≫ λ_c the power spectrum is almost constant and, therefore, approximately independent of the energy of the particles.
Most accelerators have a critical wavelength in the X-ray region: for example, in the case of DAΦNE, λ_c falls in the soft X-ray range, with a wavelength of about 6 nm. This feature is important for diagnostic applications, which generally employ frequencies in the visible region, where it is easier to design focusing optics.
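As a quick consistency check of the quoted value, the standard bending-magnet relation λ_c = 4πρ/(3γ³) can be evaluated numerically. The sketch below is illustrative only: the beam energy and bending radius are assumed DAΦNE-like values, not parameters quoted in this work.

```python
import math

# Hedged sketch: critical wavelength of bending-magnet SR,
# lambda_c = 4*pi*rho / (3*gamma^3).
# The DAFNE-like parameters below are assumptions for illustration,
# not values taken from this work.
E_GeV = 0.510            # beam energy (assumed)
rho_m = 1.4              # bending radius (assumed)
m_e_GeV = 0.000511       # electron rest energy

gamma = E_GeV / m_e_GeV
lambda_c = 4.0 * math.pi * rho_m / (3.0 * gamma ** 3)
print(f"gamma ~ {gamma:.0f}, lambda_c ~ {lambda_c * 1e9:.1f} nm")
# With these assumed numbers the result is of the order of a few nm,
# consistent with the soft X-ray value quoted in the text.
```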
SR is used both to carry out longitudinal diagnostic measurements of the beam and to obtain information on its transverse size. In a circular accelerator the particles are distributed in "packets" ("bunches") with a well-defined time structure. The longitudinal distribution of the particles can be detected through the temporal structure of the radiation emitted by single bunches. In this case, important information on the longitudinal dynamics of the beam can be obtained, as well as on the profile and length of the single bunches. It is also possible to measure important parameters (e.g. the impedance of the machine) and study the longitudinal instabilities of the beam. However, this type of diagnostics requires the use of fast detectors. For example, to resolve an electron bunch with a size of the order of 1 mm, the detectors must exhibit temporal responses of the order of a ps.
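The picosecond requirement follows directly from dividing the bunch length by the speed of light; a one-line check, with the 1 mm figure taken from the text:

```python
# Minimal check: time scale associated with a ~1 mm long bunch.
c = 299_792_458.0        # speed of light, m/s
bunch_length_m = 1e-3    # ~1 mm, as in the text
print(f"{bunch_length_m / c * 1e12:.1f} ps")   # ~3.3 ps -> ps-scale detectors needed
```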
The spectral distribution of SR does not depend on the energy of the machine, but only on the radius of curvature of the bending magnet and on the current injected into the accelerator. The power extracted by a bending magnet at certain wavelengths is therefore comparable in almost all accelerators, allowing, in principle, the use of a certain diagnostic in all machines.
One of the limitations of current transverse diagnostics with SR sources lies in the time resolution for the acquisition of images, limited to the order of 30 ms for standard video cameras. This limit prevents turn-by-turn diagnostics, since the revolution time of almost all colliders is of the order of µs (for example, the revolution time in DAΦNE is about 0.32 µs). Furthermore, to monitor the transverse size bunch-by-bunch, the time response of a detector should be less than a ns (for DAΦNE the distance between two consecutive bunches is about 2.7 ns). Some bi-dimensional sensors have characteristic acquisition rates of 10^4–10^5 frames/s. A turn-by-turn diagnostics is thus able to use only a limited number of pixels of the matrix, for example 64 x 24 pixels at 10^5 frames/s for a typical LHC beam monitor device at CERN [1]. However, these (VIS) sensors do not allow resolving bunch-by-bunch emission because they are limited both in exposure times and in sensitivity.
Bunch-by-bunch and turn-by-turn beam diagnostics are fundamental for studying beam instability phenomena. In recent years, the diagnostic systems installed at SLAC (Stanford) [2] or at the Japanese KEK [3] used devices assembled from an MCP ("Microchannel Plate") detector, a fluorescent screen and a CCD ("Charge-Coupled Device"). These systems helped to study and identify beam instabilities due to "electron clouds", thus characterizing the transverse dimension of the beam bunch-by-bunch. However, they did not make it possible to obtain a synchronous image of all the bunches in the beam because they were limited by the frame rate of the CCDs used.

Materials and Methods
For reasons of space, I have concentrated all the documentation relating to materials and methods in the SI Appendix. I consider particularly significant the question of the method, which here relies on an unconventional statistical analysis based on the SHT category calculus system, developed and patented by the present author for the study of complex systems [4].
In our experiment, ultra-fast uncooled photodetectors based on HgCdTe (MCT) heterostructures, developed by VIGO System S.A. [5,6], were used.
[ Figure 2 about here.] MCT detectors represent valid and competitive elements, capable of replacing, or in any case supporting, the "old" streak cameras, which are certainly more expensive, complex and delicate.
MCT detectors were used to monitor the DAΦNE electron/positron beams in order to obtain "real-time" diagnostics, because they can provide a turn-by-turn and bunch-by-bunch analysis (see figure 3 below).
[ Figure 3 about here.] MCT detectors can follow the longitudinal dynamics, and they can identify, monitor and characterize the instabilities of the beam and of the individual bunches, improving the accelerator performance (i.e. maximum current and brightness). We will return later with a paragraph dedicated specifically to these detectors.
The tests were conducted both on DAΦNE and on the Hefei Light Source (HLS) of the National Synchrotron Radiation Laboratory (NSRL) of the People's Republic of China. Finally, the response of the MCT photodetectors used was analyzed with the unconventional statistical category calculus system. The result of the analysis is a complex SR emission profile of each bunch of particles, made by the convolution of a "main" component with a "delayed" component (see figure 4 below), which represents a sort of "dichotomy" in the SR emission of DAΦNE electrons and positrons.
[ Figure 4 about here.] From the analysis carried out, it was clear that neglecting the "delayed" term introduces inevitable systematic errors. Obviously, the first thing I asked myself was whether the dichotomy was caused by systematic effects introduced by the machine and/or by the experimental arrangement. But the results of the analysis of the time series (see SI Appendix) exclude systematic contributions. As an example, in the following figure 5, I show the discrepancies obtained from the interpolations of the DAΦNE positron SR data recorded at Frascati with a streak camera. Figure 5 compares the single-profile regression (black curve, courtesy of Mikhail Zobov, Frascati National Laboratories) with the multiple regression (red curve) produced by SHT Level 9 without neglecting the "delayed" component.

A "trick of the tail" of light
The generally accepted theory of synchrotron radiation emission (see A. Hofmann [7]) is based on the hypothesis of the rigidity of the bunch of relativistic particles. The accelerated motion of the bunch in the field of a bending magnet is thus represented as the motion of a massive "super-particle", which emits radiation propagating in the vacuum (along a line of light) starting from a well-defined opening K (see figure 6 below).
[ Figure 6 about here.] We know the length of the optical path in the cases of Frascati and Hefei, as reported in table 3 below. Now, as a possible explanation of this anomaly, let us focus on the structure of a bunch of particles. We start from the distribution of N electrons in a circular machine [8].

In terms of the angular positions φ_k of the electrons, the n-th harmonic of the beam current can be written as

\[ I_n \;=\; \frac{e c}{2\pi R} \sum_{k=1}^{N} e^{\,i n \varphi_k} \]

where R is the radius of the machine. I am interested in the case where the electrons are not evenly distributed in the machine. In this case, I will have a coherent radiation term (CSR) added to an incoherent radiation term (ISR), due to the individual contribution of each electron. To calculate the average power emitted by N electrons, I have to average over all the angular positions of each particle contained in the interval (-α/2, α/2), as follows:

\[ \langle P \rangle \;=\; p \,\bigl[\, N + N(N-1)\, f(\alpha) \,\bigr], \qquad f(\alpha) = \bigl|\langle e^{\,i n \varphi}\rangle\bigr|^2 , \]

where I can distinguish the contributions of ISR ∼ N and CSR ∼ N². Therefore, I calculate the total power of the coherent radiation emitted by N electrons as

\[ P_{\mathrm{coh}}(N) \;\simeq\; N^2\, p\, f(\alpha) . \]

Now, since the number N of particles in a storage ring is very large (for DAΦNE we have typical values of the order of 10^10–10^11 particles/bunch), the intensity of the CSR could be non-negligible.
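A minimal numerical sketch of this N-scaling (incoherent term ∝ N, coherent term ∝ N²); the single-particle power and the form factor value are illustrative assumptions, not DAΦNE measurements:

```python
# Hedged sketch of the scaling of incoherent (ISR ~ N) versus coherent
# (CSR ~ N^2) emission: P(N) = p * (N + N*(N-1)*f), with p the single-
# particle power and f a dimensionless form factor in [0, 1].
# The numbers below are illustrative assumptions, not measured values.
def total_power(N, p=1.0, f=1e-9):
    incoherent = N * p
    coherent = N * (N - 1) * f * p
    return incoherent, coherent

for N in (1e8, 1e10, 1e11):
    isr, csr = total_power(N)
    print(f"N={N:.0e}: ISR~{isr:.2e}, CSR~{csr:.2e}, CSR/ISR~{csr/isr:.2e}")
```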
I therefore consider the spectrum of the radiated power, as follows [9–14]:

\[ \frac{dP}{d\lambda} \;=\; \frac{dp}{d\lambda}\,\bigl[\, N + N(N-1)\, g(\lambda) \,\bigr] \]

where λ is the wavelength of the radiation, p the power emitted by a single particle, N the number of particles per bunch and g(λ) the so-called "CSR form factor", given by the following equation:

\[ g(\lambda) \;=\; \Bigl|\int S(z)\, e^{\,i\, 2\pi z \cos\theta / \lambda}\, dz \Bigr|^{2} \]

where S(z) is the normalized bunch distribution and θ is the angle between the longitudinal direction z and the observation point. For θ = 0 the form factor g(λ) is precisely the square of the Fourier transform of the bunch distribution. In this case, to define dp/dλ I took into consideration the screening effect of the vacuum chamber, hence the cut-off wavelength is

\[ \lambda_0 \;\simeq\; 2h\,\sqrt{h/\rho} \]

where h is the total height of the vacuum chamber and ρ the radius of curvature of the trajectory of the particle. Ultimately, to have a significant CSR contribution, we must have

\[ \sigma_z \;\lesssim\; \lambda \;<\; \lambda_0 . \]

As an example, in our case, we evaluate the form factor for Gaussian bunches. For θ = 0 I have

\[ g(\lambda) \;=\; \exp\!\bigl[-(2\pi\sigma_z/\lambda)^{2}\bigr] \]

where σ_z is the bunch length. From here, I see well that to have CSR emission it is necessary to have "short" bunches with large cut-off wavelengths. In the case of "real" machines, the CSR contribution can be observed in the typical frequency range of THz.
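A hedged numerical sketch of the two ingredients above, the Gaussian form factor g(λ) = exp[−(2πσ_z/λ)²] and the shielding cut-off λ_0 ≈ 2h√(h/ρ); the bunch length, chamber height and bending radius used are placeholder values, not the actual DAΦNE parameters:

```python
import numpy as np

# Hedged sketch: Gaussian-bunch CSR form factor and vacuum-chamber
# shielding cut-off.  sigma_z, h and rho are placeholder values,
# not the actual DAFNE parameters.
def form_factor_gaussian(wavelength, sigma_z):
    return np.exp(-(2.0 * np.pi * sigma_z / wavelength) ** 2)

def cutoff_wavelength(h, rho):
    return 2.0 * h * np.sqrt(h / rho)

sigma_z = 0.02              # bunch length ~2 cm (illustrative)
h, rho = 0.03, 1.4          # chamber height and bending radius (assumed)
lam0 = cutoff_wavelength(h, rho)
print(f"shielding cut-off lambda_0 ~ {lam0 * 1e3:.1f} mm")
for lam in (1e-3, 5e-3, 2e-2, 5e-2):
    g = form_factor_gaussian(lam, sigma_z)
    print(f"lambda = {lam * 1e3:5.1f} mm   g = {g:.3e}   below cut-off: {lam < lam0}")
```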
Coherent Fractal Nematic Mesophase (CFNM) and quasi-liquid crystals of particles

Recall the general density operator

\[ \rho(\mathbf{x}) \;=\; \sum_{k=1}^{N} \delta(\mathbf{x} - \mathbf{x}_k) \]

and the two-point density correlation function

\[ G(\mathbf{r}) \;=\; \bigl\langle \rho(\mathbf{x})\,\rho(\mathbf{x} + \mathbf{r}) \bigr\rangle . \]

At this point, I assume that the observed anomaly is the result of a spontaneous breaking of symmetry from a state of maximum symmetry (a homogeneous and isotropic fluid) to a condensed phase where the rotational symmetry is broken, but the translational symmetry is preserved. The reassembly of the particles in this condensed state generates a macroscopic coherence effect.
In this condensed state, consider thus a classical distribution of clusters of particles having the topological qualities of "micro-bunches" of length ℓ_z, with a density n(ℓ_z) to be specified below. The range of variation of the scale ℓ_z is given by the following interval:

\[ \Lambda \;<\; \ell_z \;<\; R_s \]

where Λ and R_s represent, respectively, the lower and upper cut-off scales. Now, I recall that the lower and upper cut-off scales, Λ and R_s, will both depend on the cut-off wavelength λ_0 of the Schwinger model [8],

\[ \lambda_0 \;\simeq\; 2h\,\sqrt{h/\rho} \]

where h is the total height of the vacuum chamber and ρ the radius of curvature of the "cluster" trajectory.
That is, R_s is the maximum size of the clusters and can be identified with the length of the cluster intended as a "super-particle", or "super-bunch", in the sense of Schwinger's so-called "rigid bunch" [8]. This "super-bunch" will emit synchrotron radiation along the direction of motion.
For scales such that Λ < ℓ_z < R_s I will have a distribution of clusters, which we have called "micro-bunches", all contained in the "super-bunch", each of which will emit synchrotron radiation along the direction of motion.
These "micro-bunches" will have a structure distributed along a particular direction specified by a unitary vector n µ called "the director", aligned with the direction of motion. The positions of the centers of mass of each bunch will be distributed randomly, as if they belonged to an isotropic fluid of particles. We will therefore have a condensed phase of particles characterized by a break in rotational symmetry, but not in translational invariance.
The physical model that best describes this phenomenon is that of the nematic mesophase of a liquid crystal (see P.G. de Gennes, F.C. Frank and S. Chandrasekhar [26–29]).
Coherence can thus be explained by the variation of the average density as a function of the micro-bunch dimension ℓ_z in the scale range given by the above interval (Λ, R_s), as follows:

\[ \langle n(\ell_z) \rangle \;\propto\; \ell_z^{\,D_H - D_T} \]

where D_H is the Hausdorff dimension and D_T the topological dimension. Definitely, we have a nematic fractal mesophase of micro-bunches emitting IR-SR along the direction of motion, according to the snapshot of a simulation in the following figure 8.
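A minimal numerical sketch of this scaling, assuming the power-law form ⟨n(ℓ_z)⟩ ∝ ℓ_z^(D_H − D_T) reconstructed above; the dimensions and cut-off scales used are illustrative assumptions, not fitted values:

```python
import numpy as np

# Minimal sketch of fractal scaling of the mean micro-bunch density,
# assuming <n(l)> ~ l**(D_H - D_T) in the range Lambda < l < R_s.
# All numbers are illustrative assumptions.
D_H, D_T = 2.6, 3.0              # Hausdorff and topological dimensions (assumed)
Lambda, R_s = 1e-4, 2e-2         # lower / upper cut-off scales in metres (assumed)

l = np.logspace(np.log10(Lambda), np.log10(R_s), 5)
n_rel = (l / R_s) ** (D_H - D_T)   # density relative to its value at the scale R_s
for li, ni in zip(l, n_rel):
    print(f"l = {li:.2e} m   <n>/<n(R_s)> = {ni:.2f}")
# With D_H < D_T the density grows towards small scales, i.e. the
# micro-bunch distribution is "clumpy" rather than space-filling.
```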
We can call these structures "quasi-liquid crystals of electrons and positrons". [ Figure 8 about here.] As can be seen from figure 8, a contribution of coherent radiation emission (CSR) can occur as soon as the micro-bunches reach a scale comparable with the cut-off wavelength. Since N, the total number of particles, is very large, the coherent emission, which scales as N² through the form factor g(λ), can give an appreciable contribution. The observed anomaly in the IR-SR waveforms can thus be explained by the micro-bunch emission when ℓ_z ≲ λ_0. Going further, we generated a simulation of the bending of a quasi-liquid crystal bunch of particles in a DAΦNE bending magnet, considering that each "super-bunch" is separated by the average time interval of 2.7 ns (see figure 9 below).
[ Figure 9 about here.] In the most general case, the stiffness for the nematic phase is a rank-four tensor K_ijkl, such that we have the Frank free energy

\[ F_{el} \;\sim\; \tfrac{1}{2} \int d\Omega \; K_{ijkl}\, \nabla_i n_j\, \nabla_k n_l . \]

The free energy must be invariant under uniform rotations of the whole bunch. Here we can clearly see that we are passing from an isotropic state of distribution of the particles to a "condensed" state of lower symmetry, which is precisely the nematic phase [29]. This will help us in the future when it comes to building models for beam diagnostics.
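To make the elastic-energy idea concrete, here is a minimal numerical sketch in the one-constant approximation, F_el ≈ (K/2)∫|∇n|² (a common simplification of the full K_ijkl tensor); the elastic constant and the director tilt profile are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: Frank elastic energy of a director field in the
# one-constant approximation, F ~ (K/2) * integral |grad n|^2 dz,
# evaluated on a 1-D grid along the bunch axis.  K and the tilt
# profile are illustrative assumptions, not measured quantities.
K = 1.0e-11                        # single elastic constant (assumed)
z = np.linspace(0.0, 1e-3, 200)    # position along the bunch axis (m)
dz = z[1] - z[0]
theta = 0.1 * np.sin(2 * np.pi * z / z[-1])   # small tilt of the director

# director n = (sin(theta), 0, cos(theta)) => |dn/dz|^2 = (dtheta/dz)^2
dtheta_dz = np.gradient(theta, dz)
F_el = 0.5 * K * np.sum(dtheta_dz ** 2) * dz
print(f"elastic free energy (per unit cross-section) ~ {F_el:.3e}")
```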

Conclusions
Diagnostics represents a complex of activities and techniques, vital for the operation of an accelerator machine, which offers indispensable support both for high energy physics experiments and for the use of the emission of synchrotron radiation in a parasitic or dedicated way.
This work presents and discusses the results of an unconventional analysis of the data recovered from an experiment of the Italian National Institute of Nuclear Physics, called "3+L". The experiment consisted in the study of the bunch-by-bunch and turn-by-turn IR synchrotron radiation emission by electrons and positrons of the DAΦNE Φ-Factory, collected by uncooled ultra-fast photodetectors based on HgCdTe (MCT) heterostructures. The analysis also considered data collected at the Hefei Light Source (HLS) of the National Synchrotron Radiation Laboratory (NSRL) of the People's Republic of China.
MCT systems represent valid and competitive elements, possibly capable of replacing, or in any case supporting, the "streak cameras".
A useful application of this analysis consists in the generation of space-time maps according to the topology of the detector matrix. This result can lead to the development of future real-time imaging.
The analysis clearly showed the unexpected presence of an "anomaly" in the waveform of the IR synchrotron radiation emission in the time domain of each single bunch of electrons and positrons. This "anomaly" is resolved, in the time domain, into two distinct components, one of which is "delayed" by a few hundred ps with respect to the other. Of course, I immediately thought that this anomaly was a systematic error of the measuring apparatus. Checks on the data and on the experimental arrangements and facilities excluded this possibility. The measurement was in fact repeated both with changes of the conditions and hardware of the local apparatus (including the detectors) and with a change of facility (Hefei). In all these cases, the measurement has always shown the same "anomaly".
First, a positive result of this work was the verification (thanks to the SHT analysis) of the MCT detectors' ability to follow and monitor the bunch-by-bunch and turn-by-turn beam dynamics. Secondly, another positive result of this work consists in the ability of the MCT detectors to identify (again thanks to the SHT analysis) a beam instability, which we have called the "anomaly", consisting in the presence of a "delayed" component in the SR emission profiles of each bunch.
A deep time series analysis of the anomaly, based on considerations on the coherent emission of synchrotron radiation (CSR), demonstrates the existence of a distribution of structures and degrees of freedom inside a bunch of particles. This evidence is in contrast with the "rigid bunch" model (J. Schwinger 1945). We therefore propose a model called "CFNM" (Coherent Fractal Nematic Mesophase), which describes the transition from a phase of maximum symmetry to a condensed phase, homologous to the nematic mesophase of liquid crystals. This model could have significant consequences in the study, modeling and measurement of the operating parameters of future accelerator machines and collectors, in particular with regard to emission and brightness.

Acknowledgments
Frascati

To avoid collisions of the beams with the residual gases, a particularly high vacuum is maintained in the rings (less than a thousandth of a billionth of an atmosphere). In the DAΦNE rings, about 100 m long, more than 100 bunches circulate, consisting of more than 100 billion particles, which perform more than 3 million revolutions per second and whose collisions produce about 2000 particles per second. The dimensions of each bunch at the interaction point are 1 mm x 10 µm x 2 cm.
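As a quick consistency check of the "3 million revolutions per second" figure, the revolution frequency follows from the ring length quoted above; a minimal sketch:

```python
# Minimal check: revolution frequency for a ~100 m ring (length from the text).
c = 299_792_458.0        # speed of light, m/s
ring_length_m = 100.0    # ~100 m, as stated above
f_rev = c / ring_length_m
print(f"revolution frequency ~ {f_rev:.2e} Hz")    # ~3e6 turns per second
print(f"revolution period   ~ {1e9 / f_rev:.0f} ns")
```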
The main DAΦNE beam parameters are:

N_e+ ≈ 2·10^10 positrons/bunch
N_e- ≈ 2·10^10 electrons/bunch
σ_x ≈ 1 mm   average quadratic horizontal dimension at the IP (Interaction Point)
σ_y ≈ 10 µm  average quadratic vertical dimension at the IP
f = 3·10^8 s^-1   collision frequency

The experiment "3+L" at Frascati. The experiment called "3+L" (Time Resolved Positron Light Emission) had unique characteristics among the DAΦNE beam diagnostics techniques, because it aimed to carry out, for the first time, real-time beam diagnostics capable of characterizing each bunch of particles ("bunch-by-bunch") with a compact tool, contained both in size and in production costs. The experiment used ultra-fast MCT detectors at room temperature.
Figures 4-5 show the arrangement of the experiment "3+L" [1]. The MCT detectors used by the experiment had been tested on DAΦNE's dedicated SINBAD (Synchrotron Infrared Beamline at DAΦNE) infrared beamline [2], and on the dedicated HLS (Hefei Light Source) IR beamline at the NSRL (National Synchrotron Radiation Laboratory) facility (Hefei, People's Republic of China).
The ultra-fast photodetectors used both at Frascati and at Hefei are made from HgCdTe (MCT) heterostructures by the Polish company VIGO System S.A. [3,4]. The MCT detectors operated at room temperature with a rapid response, on a time scale of the order of hundreds of ps. Mercury cadmium telluride compounds (HgCdTe, also abbreviated MCT or CMT) are alloys of CdTe and HgTe and represent the third most technologically important semiconductor after silicon and gallium arsenide. The quantity of cadmium (Cd) in the alloy can be chosen in order to optimize the optical absorption of the material at infrared (IR) wavelengths. CdTe is a semiconductor with a band gap of about 1.5 eV at room temperature, while HgTe is a semi-metal with zero band gap energy. Mixing these two compounds allows, in principle, obtaining a compound with a gap tunable between 0 and 1.5 eV. MCT compounds are among the few materials capable of detecting infrared radiation in both accessible atmospheric windows, i.e. between 3-5 µm (MWIR) and 10-12 µm (LWIR). Detection in the MWIR and LWIR windows is generally obtained by using the compounds (Hg0.7Cd0.3)Te and (Hg0.8Cd0.2)Te respectively. An MCT detector is also capable of detecting radiation through the atmospheric windows of 2.2-2.4 µm and 1.5-1.8 µm (SWIR). Due to their extraordinary operational peculiarities, MCT detectors have been largely used in military applications for night vision, aeronautical use, satellite observation and missile guidance, in particular for the so-called "smart bombs". Large varieties of heat-seeking missiles are still equipped with MCT detectors. Today these detectors are widely used in almost all fields of research. Many detectors even take their name from astronomical observatories (e.g., Hawaii) or from the instruments for which they were originally developed.
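The composition dependence of the gap is often described by the empirical Hansen relation (Hansen, Schmit and Casselman, 1982); the hedged sketch below converts an assumed Cd fraction x into an approximate IR cut-off wavelength. The compositions shown are illustrative, not the actual (unspecified) VIGO detector compositions:

```python
# Hedged sketch: band gap of Hg(1-x)Cd(x)Te from the empirical Hansen
# relation, and the corresponding IR cut-off wavelength.  Compositions
# and temperature are illustrative assumptions only.
def hansen_gap_eV(x, T=300.0):
    # E_g in eV; x = Cd mole fraction, T in kelvin
    return (-0.302 + 1.93 * x - 0.810 * x**2 + 0.832 * x**3
            + 5.35e-4 * (1.0 - 2.0 * x) * T)

def cutoff_um(E_g_eV):
    return 1.24 / E_g_eV          # lambda[um] ~ 1.24 / E_g[eV]

for x in (0.2, 0.3):
    Eg = hansen_gap_eV(x)
    print(f"x = {x:.1f}: E_g ~ {Eg:.2f} eV, cut-off ~ {cutoff_um(Eg):.1f} um")
# x ~ 0.2-0.3 gives cut-offs in the LWIR/MWIR range, consistent with the
# detection windows mentioned in the text.
```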
In this case, the MCT detectors made by VIGO System S.A. [4] represent the current state of the art of MCT technology by virtue of their room-temperature operation, rapid response and contained costs. The MCT detectors of the "3+L" experiment were used both as single elements and in a compact two-dimensional matrix consisting of two arrays of 32 elements with a response of the order of ns (see the next figures S5-S6).
The detector heterostructures are grown in the crystallographic orientations (211) and (111). The detectors are optimized to work in the MIR (mid-IR, ~10.6 µm). Their typical response time did not exceed 100 ps during tests carried out with cooling at 205 K by a three-stage Peltier cell. The detectors were reverse biased and each coupled to a broadband preamplifier in order to optimize its performance and improve its S/N ratio. To this end, an amplifier characterized by a gain of 46 dB and a bandwidth of 0.1-2500 MHz was used. Given the high sensitivity of the devices, to shield against the RF signal of the DAΦNE klystron, both the photo-detector arrays (see fig. 7) and the amplifier were isolated inside a metal box. These peculiarities make the MCT detectors ideal for analyzing the synchrotron radiation of high-current storage rings (about 2 A), typical of the DAΦNE operating regime, and therefore suitable for performing effective beam diagnostics. In the experimental arrangement at room temperature, the rapid response time made it possible to obtain an excellent temporal resolution of the synchrotron emission signal of each bunch of electrons and positrons for each complete injection cycle. The first tests of the photodetectors were carried out on the IR beamline SINBAD (Synchrotron Infrared Beamline at DAΦNE, see [2]) by initially studying the response of an individual detector, then of single elements ("single channel") of an array and, subsequently, of a set of four elements activated on the 32x2 matrix of MCT photodetectors, indicated in red in figure S5.
In what follows, the four photo-detector elements activated on the 32x2 matrix will be indicated as "pixels", hence "pixel1", "pixel2", "pixel3" and "pixel4" (see figure S5). Initially, my goal was to "categorize" the data analysis processes as an algorithm and thus achieve complete automation. In the following years, the progress of age (and experience) convinced me that total automation is dangerous because it can cause and introduce serious systematic errors, especially in the study of systems that are complex or far from thermodynamic equilibrium. So I resumed the habit of manually rechecking the results of the various levels of analysis, even in the case of powerful computers and large data sets (such as at Frascati or CERN) and for processes based on artificial intelligence or machine learning algorithms. In all these cases, paradoxically, I noticed that human intervention is fundamental because errors are always there and tend to spread diabolically. Hence I am convinced of the danger of letting a machine, however sophisticated, have full control of a physical or mechanical process. This applies not only to research, but also to civil automation (airplanes, cars, surgical robots, etc.).

DATA ANALYSIS
For this reason, I gave control to a human operator during the phases and levels of the analysis, while trying to prevent the human operator from violating the Method's conditions of reproducibility and inter-subjectivity. For example, in the present case, there are no parametric models. The operator can follow the development of the various levels and verify that the results adhere point-by-point to reality. If the result of the process differs significantly from the experimental data, the system will stop and catalog it as a "scenario", assigning it a probability. In this case, the system will search for the most likely scenario. To be honest, in SHT analysis there is a very strong integration between man and machine, a sort of true symbiosis, in the sense that the operator's human brain is an "integral part" of the machine's mathematical algorithms. It is a symbiotic process, because the mathematical algorithms exploit the plasticity of the human brain, while the human brain assumes the reproducibility and inter-subjectivity rules of the Scientific Method and therefore acquires a sort of rigidity, as if it were a machine! From the practical point of view, the SHT analysis did not take place either with changes to the sample, or with reductions or subtractions of any kind: the SHT algorithms analyzed the system sic rebus stantibus, also considering the "junk". I think it was almost certainly for this reason that I was entrusted with this job, because the "3+L" data had previously been identified as junk and archived, if not to say trashed! SHT looks for a partition of each data ensemble, considered as a sort of "dynamic system". In the affirmative case, the dynamic evolution of the data around the attractors is studied. In this way, it is also easier to identify any systematic errors.
If SHT manages to identify an attractor, it will become a "category" of the experiment in question. But only of that particular experiment. Once the possible categories have been identified, SHT looks for, if it exists, a subset of "morphisms" that possess the qualities of probability functions. In this case, SHT defines these morphisms as "maximum congruence profiles".
From an implementation point of view, the SHT analysis was performed through algorithms designed and adapted ad hoc to the machine codes of the mainframe data center (in this case, the main routines were installed as open source in the Frascati data center) and, from time to time, translated for programming and compilation through the human interfaces of commercial software such as Matlab, Origin, Mathematica, Kaleidagraph, etc.
The analysis is conducted on nine levels. Levels 1, 5 and 9 are the most important. Level 1 restores, rebuilds and catalogs the data, trying to bring order to the initial chaos. Levels 5-6 sort the data into time series, and level 9 processes the maximum-congruence regressions. The intermediate levels are dedicated to the declaration of variables, labeling ("tagging"), the reconstruction of time intervals and delays ("lagging"), calibration in time ("bunch number arrays") and so on.
In the following sections I will give a practical example, trying to summarize the main levels and their results with diagrams.
First level SHT analysis: matching.
The optimization procedures of the first level of analysis made it possible to collect and generate groups of experimental data by selecting the relative configurations of the databases found and restored. This made it possible to cope with the total lack of references and information. The declarations and labels of the variables (tagging) were in fact devoid of references to measuring devices, zeros and gauges. The analysis was therefore performed by arranging the data in a sort of "rugged funnel" in the sense of Hans Frauenfelder [5-8], which I used to call a "Data-Funnel" (it is a map that represents the configuration entropies as a function of the signal/noise ratio; see the following figures). This procedure thus avoided the subjective introduction of selection criteria. The ultimate goal of the SHT first level processes was to create a data "cladistics". The following figures 9-12 show some pairs of selection examples with data-funnels for electron emissions (9-10) and positron emissions (11-12).
To perform the calibration in time (ns) of the response of each photo-detector, I started from the constructive and operational parameters of DAΦNE. The first thing that catches the eye in the above diagrams is the anomalous (see also A. Hofmann [9]) complex profile of the synchrotron radiation signal of each bunch, both for the emission of electrons and for that of positrons, which seems to be the result of the convolution of a fast component, which we will call "main", with a delayed component. Of course, I immediately thought that this "anomaly" was a systematic error of the measuring apparatus. Repeated and thorough checks on the data and on the measuring device, compared with the results obtained at the Chinese synchrotron in Hefei, have excluded this possibility. The measurement was in fact repeated with various modifications of the apparatus and conditions, also changing the MCT detectors. Furthermore, it was also performed on the Chinese Hefei synchrotron. In all cases, the anomaly remained, changing only in intensity, width and center. We will come back to this later.
Let's go to the end, moving immediately to the regression of the two components, then we'll go back to the analysis of the time series.
The intensity measurements of each emission peak were carried out by SHT level 9 regressions on the IR-SR (infrared synchrotron radiation) emission profile of each bunch, distinguishing, from time to time, the contributions of the two components, labeled as "main" and "delayed" respectively. In this way, I avoided a serious systematic error of most commercial and advanced spectral analysis tools (i.e. "peak-finder", "peaks" and so on), which are blind to the (hidden) components in the convolution. Similarly, the separation of the analysis into two components, which I call "dichotomous analysis", allows me to avoid that the study of the "main" profile is influenced by the "delayed" component. It is an important step to underline and remember in case the delayed component were caused by a systematic error of the apparatus (which it is not).
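The SHT Level 9 regression itself is not reproduced here; as a purely conventional illustration of the "dichotomous" idea, the sketch below fits a synthetic bunch profile with a main plus a delayed Gaussian component. All waveform parameters are invented for the example and are not measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Conventional least-squares sketch of a "dichotomous" fit: a main plus
# a delayed Gaussian component in a single bunch profile.  This is NOT
# the SHT Level 9 algorithm; it only illustrates why a single-component
# fit misses the hidden, delayed part of the convolution.
def two_component(t, A1, t1, s1, A2, dt, s2):
    main = A1 * np.exp(-0.5 * ((t - t1) / s1) ** 2)
    delayed = A2 * np.exp(-0.5 * ((t - t1 - dt) / s2) ** 2)
    return main + delayed

# Synthetic profile (times in ps); parameters are assumptions.
t = np.linspace(0.0, 3000.0, 600)
truth = two_component(t, 1.0, 1000.0, 150.0, 0.35, 400.0, 200.0)
rng = np.random.default_rng(0)
y = truth + rng.normal(0.0, 0.02, t.size)

p0 = (1.0, 900.0, 100.0, 0.2, 300.0, 150.0)     # initial guess
popt, _ = curve_fit(two_component, t, y, p0=p0)
print(f"fitted delay of the second component: {popt[4]:.0f} ps")
```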
[ Figure 11 about here.] The measurement of the time intervals (and the relative delays) must also be correct. We will need it in the time series (See the next section).
[ Figure 12 about here.] The following figure S13 shows prototypical level 9 double-Gaussian regressions for the complex convolution profile of the "main" and "delayed" components in the case of SR emission from DAΦNE electrons.
[ Figure 13 about here.] The same procedure was followed for the DAΦNE positrons and for the measurements obtained at the Chinese synchrotron in Hefei (see the following tables).
Level 5-6 SHT analysis: time series. My decision to use time series analysis in accelerator and particle physics was suggested by the peculiarity of the time response of the MCT detectors, capable of discriminating the IR synchrotron radiation emission of each bunch of particles (also thanks to the SHT statistical analysis used). The estimation of the different components of a time series is often difficult and risky, especially because it concerns "latent" variables that do not have a precise statistical definition, with the risk of errors of a systematic nature deriving from the introduction of arbitrary and subjective assessments. In the particular case of the evaluation of the trend, stochastic or deterministic models [10] are generally based on polynomial or transcendental regression functions, which are generally defined over the entire range of variation of the series. Regression models have been the subject of long-term studies, starting from Hannan (1960), through the "spline" functions of Duvall (1966) and Stephenson and Farr (1972), up to the local deterministic models of Cleveland (1990): an exhaustive review can be found in Dagum [10]. From this, it follows that the decomposition of a time series is a very delicate and difficult operation. Ideally it is not a process suitable for full automation (see e.g. X11ARIMA/88). In the present work, the non-conventional statistical models were based on polynomial regressions. They are not fully automated. The level 3-5 analysis results must be matched against the results of levels 1 and 2 and, above all, against the taggings of the variables defined by the "dichotomous" structure. The difference with respect to the methods of Dagum lies mainly in the algebra, which does not take into account the hypothesis of Wold [10–14]. I will return elsewhere to this important topic.
The objective of our analysis concerns the time evolution of electron and positron bunches through the study of the synchrotron radiation emitted. This is our phenomenon Z(t). I thus have an ordered collection of a sequence, not necessarily regular, of observations

\[ Z_t = \{\, x_t \,;\; t = 1, \dots, N \,\} . \]

Therefore, I am interested in defining a time series as the linear composition of a non-stationary deterministic process Y_t, possibly attributable to a trend component, with a purely random erratic process ("white noise") a_t, such that

\[ Z_t = Y_t + a_t . \]

Then, we define an integer index variable t ∈ N, called the "bunch-number". The correlogram of the Z_t series, generated by the correlations between the series and the same series delayed by k ∈ N periods, represents the variation of the auto-correlation ρ(k), taken from the following relationship:

\[ \rho(k) = \frac{\operatorname{cov}(Z_t, Z_{t-k})}{\sigma_t\, \sigma_{t-k}} . \]

We follow the same procedure for the "delayed" component. The following table S3 summarizes the residual statistics for four models ("delayed" component, DAΦNE electrons). As an example, in the following figure 26, I represent only the diagram for the linear detrend.
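A minimal sketch of the two steps just described (a linear detrend followed by the correlogram ρ(k)); the series Z below is synthetic, standing in for the per-bunch IR-SR intensity indexed by the "bunch-number" variable:

```python
import numpy as np

# Minimal sketch: linear detrend of a bunch-number series Z_t and its
# correlogram rho(k) = cov(Z_t, Z_{t-k}) / (sigma_t * sigma_{t-k}).
# The series below is synthetic; in the actual analysis Z_t is the
# per-bunch IR-SR intensity indexed by the "bunch-number" t.
def detrend_linear(z):
    t = np.arange(z.size)
    slope, intercept = np.polyfit(t, z, 1)
    return z - (slope * t + intercept)

def correlogram(z, max_lag):
    z = z - z.mean()
    denom = np.dot(z, z)
    return np.array([np.dot(z[k:], z[:z.size - k]) / denom
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(1)
t = np.arange(500)
Z = 0.002 * t + np.sin(2 * np.pi * t / 50) + rng.normal(0.0, 0.3, t.size)
rho = correlogram(detrend_linear(Z), max_lag=10)
print(np.round(rho, 2))
```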