\section{Introduction}\label{section1}
The majority of stars are born in large clusters, so these sites are key to understanding the stellar contents of galaxies. Until recently, the Milky Way was believed to be devoid of large young clusters. The realisation that our Galaxy harbours many young clusters with masses M\,$>$\,10$^3$\,M$_{\odot}$, like Westerlund~1 (Wd~1), Arches, Quintuplet, RSG~1, RSG~3, Stephenson~2, Mercer~81, NGC~3603, h+$\chi$ Persei, Trumpler~14, Cygnus~OB2, [DBS2003] and VdBH~222 \citep[see summary by][]{Negueruela2014}, reveals a different scenario from the one previously thought.
Although the Milky Way, unlike galaxies beyond the Local Group, offers the opportunity to resolve cluster members, accurate determinations of the fundamental properties of these recently discovered clusters -- distance, mass, age, initial mass function (IMF), and binary fraction -- are still lacking in many cases. The two fundamental parameters upon which all others depend are the interstellar extinction and the distance. In the present work we use accurate techniques to derive the interstellar extinction towards Wd~1 and its surroundings, which was not done with comparable accuracy in previous works.
The extinction is related to the observed magnitudes by the fundamental relation (distance modulus):
\begin{equation}
m_{\lambda} = M_{\lambda} + 5\log_{10} \left(\frac{d}{10}\right) + A_{\lambda}.
\label{eq1}
\end{equation}
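As a quick numerical illustration of the distance-modulus relation above (a minimal sketch; the star's parameters below are hypothetical, not values from this work):

```python
import math

def apparent_magnitude(M, d_pc, A):
    """Eq. (1): m = M + 5*log10(d/10) + A, with d in parsecs."""
    return M + 5.0 * math.log10(d_pc / 10.0) + A

# Hypothetical Red Clump star: M_I = -0.12, d = 4 kpc, A_I = 2.0 mag
m_I = apparent_magnitude(-0.12, 4000.0, 2.0)  # ≈ 14.89
```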
A set of different filters can be combined to define colour excess indices: $\displaystyle E_{\lambda1}-E_{\lambda2}=(m_{\lambda1}-m_{\lambda2})_{\rm{obs}}-(m_{\lambda1}-m_{\lambda2})_0$, where the zero subscript indicates the intrinsic colour of the star. Some authors -- like the classical work of \citet{Indebetouw+2005} -- attempted to derive $\displaystyle A_\lambda$ directly from the above relations, applying a minimization procedure to a large number of observations from the 2MASS survey \citep{Stru06}. However, the number of variables is greater than the degrees of freedom of the system of equations: all possible colour excess relations are linear combinations of the same parameters. In the specific case of the NIR, after dividing by the $K_s$--band extinction, there is an infinite number of pairs $\displaystyle A_J/A_{Ks}$ and $\displaystyle A_H/A_{Ks}$ satisfying the relations. Many minimization codes (like the downhill simplex technique used in the {\it amoeba} routine) simply settle on a local minimum close to the first guess, hiding the existence of other solutions. Derivation of the extinction can only be accomplished on the basis of a specific extinction law as a function of wavelength, which ultimately reflects the expected properties of dust grains.
Interstellar reddening is caused by scattering and absorption of light by dust grains, and the amount of light removed from the incoming beam (extinction) is governed by the ratio between its wavelength ($\lambda$) and the size of the grains ($d$). For $\lambda \ll d$ all photons are scattered/absorbed (grey extinction). For $\lambda > d$ the fraction of photons that escape being scattered/absorbed increases. The interstellar dust is a mixture of grains of different sizes and refractive indices, leading to a picture somewhat more complicated than described above. This was first modelled by \citet{Hulst1946} for different dust grain mixtures. All subsequent observational works resulted in optical and NIR extinction laws similar to those of the van de Hulst models (in particular his models \#15 and \#16). A remarkable feature is that they are well represented by a power law ($A_\lambda$\,$\propto$\,$\lambda^{-\alpha}$) in the range 0.8\,$<$\,$\lambda$\,$<$\,2.4\,$\mu$m \citep[see e.g.][]{F99}.
The $\alpha$ exponent of the extinction power law: $\displaystyle A_{\lambda}/A_{Ks} = (\lambda_{Ks}/\lambda)^\alpha$ is related to the observed colour excess through:
\begin{equation}
\frac{A_{\lambda_1}-A_{\lambda_2}}{A_{\lambda_2}-A_{\lambda_{Ks}}} = \frac{\left(\frac{\lambda_2}{\lambda_1}\right)^{\alpha}-1}{1-\left(\frac{\lambda_2}{\lambda_{Ks}}\right)^{\alpha}}.
\label{eq2}
\end{equation}
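For concreteness, the colour excess ratio implied by a power-law extinction $A_\lambda \propto \lambda^{-\alpha}$ can be evaluated numerically; the sketch below assumes the 2MASS effective wavelengths adopted later in this work:

```python
def colour_excess_ratio(alpha, lam1, lam2, lam_ref):
    """(A_lam1 - A_lam2) / (A_lam2 - A_ref) for A_lam ∝ lam**(-alpha)."""
    a1 = (lam_ref / lam1) ** alpha   # A_lam1 / A_ref
    a2 = (lam_ref / lam2) ** alpha   # A_lam2 / A_ref
    return (a1 - a2) / (a2 - 1.0)

# 2MASS effective wavelengths in μm (K2III template, Stead & Hoare 2009)
LJ, LH, LKS = 1.244, 1.651, 2.159
r = colour_excess_ratio(2.126, LJ, LH, LKS)   # E(J-H)/E(H-Ks) ≈ 1.90
```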
The value of the $\alpha$ exponent is driven by: a) the specific wavelength range covered by the data; b) the effective wavelengths of the filter set, which may differ from one photometric system to another, especially in the $R$ and $I$ bands; and c) the fact that the effective wavelength depends on the spectral energy distribution (SED) of the star, the transmission curve of the filter and the amount of reddening of the interstellar medium (ISM). Power-law exponents in the range 1.65\,$<$\,$\alpha$\,$<$\,2.52 have been reported \citep[see e.g.,][]{CCM89, Berdnikov+1996, Indebetouw+2005, Stead+2009, RL85, FM09, Nishiyama+2006, GF14}, but it is not clear how much of the spread in the value of the exponent is due to real physical differences in the dust along different lines-of-sight and how much comes from the method used in the determination of the exponent. As shown by \citet[their Fig. 5]{GF14} using the 2MASS survey, the colour excess ratio $E_{H-K}/E_{J-K}$ grows continuously from 0.615 to 0.66 as the distance modulus grows from 6 to 12 towards the inner Galactic Plane (GP). This corresponds to a change in $\alpha$ from 1.6 to 2.2, which translates into $A_J/A_{Ks}$ from 2.4 to 3.4. \citet{Zas09} also used 2MASS data to show that colour excess ratios vary as a function of Galactic longitude, indicating increasing proportions of smaller dust grains towards the inner GP. Reddening laws steeper than the ``canonical'' ones have been suggested for a long time, but their reality is now clearly evident from deep imaging surveys.
The substantial progress of recent years has revealed that there is no ``universal reddening law'', as was believed in the past. Moreover, the extinction coefficients are quite variable from one line--of--sight to another, even on scales as small as arcminutes. At optical bands ($UBV$) it was well established long ago that the extinction law for particular lines-of-sight can depart strongly from the average \citep{FM09,He+1995,Popowski2000,Sumi2004,Racca+2002}. In recent years this has been shown to occur also in the NIR wavelength range \citep{Larson+2005, Nishiyama+2006, Froebrich+2007, Gosling+2009}. The patchy tapestry of extinction indices is particularly impressive in the large--area NIR$/$optical works by \citet{Nataf16,Nataf13} for the Galactic Bulge and by \citet{Scha16} for the GP. Although we cannot directly use the extinction coefficients from these works for our particular target (line--of--sight), their derived reddening relations help in checking the consistency of our results. Targets to derive the extinction must be selected among stars with well known intrinsic colour indices, in order to measure accurate colour excesses. Wd~1 cluster members with known spectral types are ideal for this, especially because the majority of them are hot stars, for which the intrinsic colours are close to zero. There are $\approx$\,92 stars in this group; the statistics can be improved by using stars in the field around the cluster.
\citet{Wozniak+1996} proposed using Red Clump (RC) stars to derive the ratio of total to selective extinction. These Horizontal Branch stars are the most abundant type of luminous stars in the Galaxy and have a relatively narrow range in absolute colours and magnitudes. RCs form a compact group in the colour-magnitude diagram (CMD), as shown by \citet[and references therein]{Stanek+2000}. This is due to the low dispersion -- a few tenths of a magnitude -- in the intrinsic colours and luminosities of RC stars \citep{Stanek+1997,Paczynski+1998}. This technique, initially designed for the $V$ and $I$ filters in the OGLE survey for microlensing events, and for the $V$ and $R$ filters in MACHO, was adapted for the $JHKs$ bands \citep{Flaherty+2007,Indebetouw+2005,Nishiyama+2006,Nishiyama+2009}.
As shown by \citet{Indebetouw+2005} and by \citet{Nishiyama+2006,Nishiyama+2009}, RC stars in the CMD (e.g. $J-Ks$ {\it versus} $Ks$) may appear as an overdensity ``strip''. That strip in the CMD contains interlopers which mimic RC star colours but have different luminosities (such as nearby red dwarfs and distant supergiants). This prevents the application of relation~\eqref{eq1} to derive the absolute extinction of each particular star in the strip, but the colour excess ratios in relation~\eqref{eq2} still work. From the measured colour excess ratio (e.g. $E_{J-H}/E_{J-Ks}$) the value of the exponent $\alpha$ can be calculated and, therefore, the ratios $A_J/A_{Ks}$ and $A_H/A_{Ks}$.
\citet{Nishiyama+2006} reported $\alpha$\,=\,1.99, with $A_J/A_{Ks}$\,$\approx$\,3.02 and $A_H/A_{Ks}$\,$\approx$\,1.73, in a study of the Galactic Bulge, much higher than all previous results. \citet{Fritz+2011} also derived a large value, $\alpha$\,=\,2.11, for the Galactic Centre (GC), using a completely different technique. \citet{Stead+2009} reported $\alpha$\,=\,2.14\,$\pm$\,0.05 from UKIDSS data and similarly high exponents from 2MASS data. They did not derive $A_\lambda/A_{Ks}$, since in their approach those quantities vary because of shifts in the effective wavelengths as the extinction increases (see Section~\ref{section2.2}). However, to a first approximation, using the isophotal wavelengths, we can calculate from their extinction law: $A_J/A_{Ks}$\,$\approx$\,3.25 and $A_H/A_{Ks}$\,$\approx$\,1.78.
The dream of using interstellar DIBs to evaluate the extinction has been hampered by saturation effects in the strength of the features and by the behaviour of the carriers, which differs between the hot/diffuse ISM and cold/dense clouds -- however, see \citet{Maiz+2015}. The 8620\,\AA\ DIB correlates linearly with the dust extinction \citep{Munari+2008}, at least for low reddening, and is relatively insensitive to the ISM characteristics. Since this spectral region will be observed by GAIA for a large number of stars up to relatively large extinctions, we used our data to extend the \citet{Munari+2008} relation, which was derived for low reddening.
This work is organised as follows. In Section~\ref{section2} we describe the photometric and spectroscopic observations and the data reduction. In Section~\ref{section3} we describe the colour excess ratio relations, the ratios between the absolute extinctions and a suggested extinction law for the inner GP. In Section~\ref{section4} we compare our results with others reported in the literature. In Section~\ref{section5} we present $J-{Ks}$ extinction maps of the Wd~1 field for a series of colour slices and evaluate the 3D position of the obscuring clouds. In Section~\ref{section6} we analyse the relation between the interstellar extinction and the equivalent width (EW) of the 8620~\AA\ DIB. In Section~\ref{section7} we present our conclusions.
\section{Observations and data reduction}\label{section2}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=20.0cm,angle=0, trim= 0cm 0.7cm 0cm 2cm, clip]{./RCstrip_dens.png}}
\caption{The $J-Ks$ vs. $Ks$ Hess diagram for the Wd~1 field showing the RC strip. The two lower continuous lines correspond to the extinction laws of \citet{Indebetouw+2005} (blue), with extinction modulus $\mu_{Ks}$\,=\,0.15\,mag\,kpc$^{-1}$, and of \citet{Nishiyama+2006} (red), with $\mu_{Ks}$\,=\,0.1\,mag\,kpc$^{-1}$. The two dashed red lines are empirical limits containing the RC strip, using Nishiyama's law with lower limit $\mu_{Ks}$\,=\,0.11 and upper limit 0.23\,mag\,kpc$^{-1}$.}
\label{fig:RCstrip}
\end{figure}
\subsection{Photometry}\label{section2.1}
The main set of images in the $JHKs$ filters was taken on 2006 June 06 with the ISPI camera at the 4-m Blanco Telescope (CTIO), with a 10\arcmin\,$\times$\,10\arcmin\ FOV \citep{Bliek+2004}. The image quality, after combination of the dithered sub-images, was 1.0\arcsec\ in the $Ks$ filter and 1.8\arcsec\ in $J$ and $H$. The number of stars with measurements in all three filters and errors smaller than 0.5\,mag was 23834. The limiting magnitudes were: $J$\,$<$\,21.4, $H$\,$<$\,19.2 and $Ks$\,$<$\,18.2. The basic calibration (flatfield, linearity correction) was done with {\tt IRAF}\footnote{http://www.iraf.noao.edu} and the photometry extraction was performed with \texttt{STARFINDER}\footnote{http://www.bo.astro.it/StarFinder/paper6.htm}.
In order to cover the brighter stars, we took $JHKs$ images at the 1.6-m and 0.6-m OPD/LNA Brazilian telescopes with the CAMIV camera. Many images were taken over several subsequent years, covering a FOV larger than that of ISPI. For each star, we always used the three $JHKs$ magnitudes taken in a single night, to preserve the stellar colours against variability. In addition to traditional imaging, we used different strategies to record the very bright stars, for example, deploying a 5\,mag neutral density filter or masks with four 5\,cm diameter holes in front of the telescope. We compared the results obtained with the different setups and found good agreement. Instead of the $Ks$ filter, we used a narrow filter with central wavelength 2.14\,$\mu$m (FWHM\,=\,0.02\,$\mu$m) and cross-calibrated the photometry with $Ks$ magnitudes from the ISPI camera. The uncertainty in our $JHKs$ magnitudes, when comparing different epochs, reached at most 0.1\,mag for a few bright stars like W26, which could be due to intrinsic variability.
Data processing was performed in the same way as for the ISPI images. Our catalogue in $JHKs$ from these two cameras is $\approx$\,100\% complete for the blue and red giants and the blue supergiants/hypergiants down to the bottom of the Main Sequence (MS, $Ks$\,$\sim$\,15). The peak of the luminosity function in the $Ks$ filter (completeness $>$\,85\%) is at $Ks$\,=\,16.0. We call this set of data the ISPI+CAMIV photometric sample.
The calibration of the ISPI photometry was performed against {\tt 2MASS}\footnote{http://irsa.ipac.caltech.edu/cgi-bin/Gator/}. Then, we used the CTIO/ISPI catalogue and the stars in common with the OPD/CAMIV photometry performed at the smaller telescopes to extend the 2MASS calibration up to $Ks$\,$\approx$\,0. The central 2\arcmin\,$\times$\,2\arcmin\ field of Wd~1 is so crowded that some stars were resolved only in nights with excellent seeing. The Spartan camera at the SOAR telescope was used for this purpose on 2013 September 03, when the seeing was approximately 0.5\arcsec.
We used the VVV survey for $ZY$ magnitudes of the RC candidates we detected with the ISPI and CAMIV cameras. The magnitudes were extracted using aperture photometry from the database of the survey\footnote{http://horus.roe.ac.uk/vsa/}. Those images were taken on 2010 July 19. For those specific stars, we used our ISPI $JHKs$ magnitudes (calibrated in the 2MASS system) to transform from VISTA magnitudes to 2MASS. Although the $JHKs$ magnitudes obtained from the VVV survey were in excellent agreement with our ISPI+CAMIV+Spartan catalogue, we used them only for two specific aims: a) to produce the colour index density maps and b) to check the correct identification of the ISPI/CAMIV sources. We used magnitudes in the W1 and W2 MIR bands of the WISE survey \citep{wright10}. We also used images from the Herschel Hi--GAL survey to look for cold and warm dust \citep{molinari10}.
For the Wd~1 cluster members, we used $BVI$ photometry from \citet{Lim+2013}. A few stars without $B$ or $V$ magnitudes from those authors were taken from \citet[]{Clark+2005}, who kindly made a machine readable copy of their photometry available to us. We recalibrated those magnitudes to the \citet{Lim+2013} system. Additional magnitudes were taken from \citet{Bonanos2007} and also transformed to the \citet{Lim+2013} system. For the $R$ filter, we used the data from \citet{Clark+2005} and \citet{Bonanos2007} without any transformation. We used 104 stars with spectral classifications by \citet{Clark+2005}, \citet{Negueruela+2010} and \citet{Ritchie+2009}, which have accurate intrinsic colour indices. For the reddening studies, we also included the Main Sequence OB eclipsing binary Wddeb \citep{Bonanos2007}. The majority of the bright cluster members are OB supergiants, and we also included cooler giants and hypergiants. We excluded W8a, W9, W12a, W20, W26, W27, W42a, W75, W265 and the dusty WC9 Wolf--Rayet stars, which have circumstellar hot dust emission (in the $Ks$ filter) or absorption by dust (in the $B$ filter).
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=20.0cm,angle=0, trim= 0cm 0.65cm 0cm 2cm, clip]{./EJHXEJK.pdf}}
\caption{The colour excess ratio $E_{J-H}/E_{J-Ks} = 0.655 \pm 0.001$. RCs are shown as red points (averaged in groups of 10) and Wd~1 cluster members as blue crosses; both were used together in the linear fit. The plot shows that there is differential extinction among the Wd~1 cluster members and that their average value is intermediate within the range covered by the RC sample. See other plots in Appendix B.}
\label{fig:EJHXEJK}
\end{figure}
There have been discussions about the accuracy of the optical data from the cited authors, since the colour indices of the programme stars were larger than those of the photometric standards. Errors could amount to a few tenths of a magnitude in the $B$ filter and a little less at longer wavelengths. This has an impact when comparing magnitudes from different authors, but a negligible effect for other purposes, such as the study of variability in time series taken with the same instrument.
The impact on the study of reddening is minor, since the extinction is orders of magnitude larger than the photometric errors. Similar systematic errors due to the lack of very red photometric standards are commonplace in large deep surveys and, although present in all wavelength windows, they have rarely been discussed in connection with deep surveys.
We used extensively the R\footnote{https://www.R-project.org/} programming packages to perform statistics, fitting and plots.
\subsection{Effective filter wavelengths}\label{section2.2}
Since the $JHKs$ filters are close in wavelength, special care must be taken to use accurate effective wavelengths for this particular range. In fact, every single star is measured at a different effective wavelength, defined by the intrinsic SED of the star, modified by its particular amount of reddening and convolved with the filter passband. This explains why the colour--colour diagram (CCD) produces curved bands. The effective wavelength of the filters shifts continuously to longer wavelengths as the reddening increases, such that the extinction ratios vary continuously. The shifts in the effective wavelengths are approximately straight lines (see Fig.~2a in \citet{Stead+2009}), so that the ratios between effective wavelengths are reasonably constant for large extinctions. Linear approximations for the effective wavelengths of the 2MASS filters taken from that plot are presented in Appendix\,\ref{appendixA}.
Although a colour plot $J-H$\,{\it vs.}\,$H-Ks$ does not produce a straight reddening line, the curvature of the line is small for large reddening and the ratio converges to a constant value. In this way, we could use a template star SED to evaluate the effective filter wavelengths. An approach simpler than the one discussed in the last paragraph was also used by \citet{Stead+2009}, who convolved the 2MASS filter passbands with a \citet{Castelli+2004} K2III giant star spectrum, obtaining $\lambda_J$\,=\,1.244, $\lambda_H$\,=\,1.651 and $\lambda_{Ks}$\,=\,2.159\,$\mu$m. Since our data are from deep imaging towards the inner Galaxy and are calibrated in the 2MASS system, we use these effective wavelengths for the $JHKs$ filters (Table\,\ref{table1}, column~2).
Other filter wavelengths were taken as reported in the parent papers and not shifted, as we do not use them for critical calculations. Care must be taken because the same filter names correspond to different wavelengths in different papers, especially for the $R$, $I$ and $Z$ filters. For example, the $R$ filter can have $\lambda_{\rm eff} = 0.664$ or 0.70\,$\micron$. For the $I$ filter, the value most used in recent works is $\lambda_{\rm eff}$\,=\,0.805\,$\mu$m, but \citet{CCM89} uses 0.90\,$\mu$m, which in reality matches the $Z$ band better. To avoid confusion, we state in Table\,\ref{table1} the effective wavelength of each filter we use.
\subsection{Spectroscopy}\label{section2.3}
For the 8620\,\AA\ DIB, we observed 11 bright members of the Wd~1 cluster and 12 stars along other Galactic directions at the Coud\'e focus of the 1.6-m telescope. The OPD/LNA spectra have $R$\,=\,15000. In addition, we downloaded 31 spectra from the ESO database (see Table\,\ref{table2}), also with $R$\,$\sim$\,15000. OPD data were reduced with IRAF; the ESO prog. ID 073D-0327 data with ESO Reflex and the ESO 081D-0324 data with Gasgano. The spectral resolution was measured on telluric absorption lines. When stars were present in both samples, we used the OPD/LNA spectra because their Deep Depletion CCD presented no fringes, which are prominent in some ESO spectra, especially to the red of the 8620\,\AA\ DIB. EWs were measured by Gaussian fitting after the spectra were normalised to the stellar continuum. The typical uncertainty is $\sim$\,10\% for the bright blue stars, but errors are difficult to assess for fainter stars. EWs were also difficult to measure in spectral types later than F5, because of blends with stellar lines. Our results are presented in Table\,\ref{table2}, column~2. One specific star, the eclipsing binary Wddeb, was measured with Gemini South/GMOS at a spectral resolution $R$\,$\sim$\,8000.
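The EW measurement can be illustrated with a synthetic Gaussian DIB profile (a schematic sketch with made-up line parameters; the actual measurements used Gaussian fits to the observed, continuum-normalised spectra):

```python
import math

# Synthetic normalised spectrum: Gaussian DIB at 8620 Å,
# central depth 0.15 and sigma 1.2 Å (hypothetical values)
depth, sigma, centre = 0.15, 1.2, 8620.0
wl = [centre - 10.0 + 0.02 * i for i in range(1001)]          # 8610-8630 Å grid
flux = [1.0 - depth * math.exp(-0.5 * ((w - centre) / sigma) ** 2) for w in wl]

# Equivalent width: EW = ∫ (1 - F/F_continuum) dλ, trapezoidal rule
ew = sum(0.5 * ((1.0 - flux[i]) + (1.0 - flux[i + 1])) * (wl[i + 1] - wl[i])
         for i in range(len(wl) - 1))
# For a Gaussian line, EW = depth * sigma * sqrt(2π)
```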
\section{The reddening law using colour excesses }\label{section3}
When the intrinsic colours of a star are known, its colour excesses can be derived from the observed colours, and the reddening law is calculated from the ratios between colour excesses. This is the case for the Wd~1 cluster members with spectral classifications reported by \citet{Clark+2005}, \citet{Negueruela+2010} and \citet{Ritchie+2009}. The intrinsic colours were derived using the calibrations of \citet{Wegner1994}. For the Wolf-Rayet stars, we used \citet{Crowther+2006}. Since the number of Wd~1 cluster members is not large, additional stars were taken from the surrounding field. They do not have spectral classifications, but RC stars have reliable intrinsic colour indices (see below).
\subsection{Selection of Red Clump stars}\label{section3.1}
A procedure similar to that of \citet{Wozniak+1996} was adopted for selecting RC candidates, as they can easily be identified as an overdensity in the CMD (see Fig.\,\ref{fig:RCstrip}). We represented the photometry by a Hess diagram in order to see the overdensity structures more clearly. The lower left overdensity corresponds to foreground (low luminosity) red stars. The one that goes up vertically, around $J-Ks \approx 1.5$, is the Main Sequence (MS) of the Wd~1 cluster, and the large region which curves to the red below $Ks\approx 14$ is the cluster Pre-Main Sequence (PMS). The overdensity region running from brighter/bluer to fainter/redder in the middle of the CMD is the expected locus containing RC stars observed at different distances and amounts of extinction. For a preliminary study, we plotted in our observed $Ks$\,$\times$\,$J-Ks$ CMD the position of a typical RC star, varying its distance and using different extinction laws, like that of \citet{Indebetouw+2005}, who reported an ``extinction modulus'' $\mu_{Ks}$\,=\,$A_{Ks}/d$\,=\,0.15\,mag\,kpc$^{-1}$ (blue line in Fig.\,\ref{fig:RCstrip}).
We followed the same procedure using the reddening law of \citet{Nishiyama+2006}, who do not report the ``extinction modulus'', but it can be derived from the RC peak in their Fig.\,2, corresponding to $\mu_{Ks}$\,=\,0.1\,mag\,kpc$^{-1}$. Although we do not use the extinction modulus for any particular measurement (since it varies from star to star), Fig.\,\ref{fig:RCstrip} indicates that the typical extinction in the direction of the Wd~1 cluster is much larger than for the directions explored by \citet{Indebetouw+2005} (large Galactic longitudes) or by \citet{Nishiyama+2006} (the Bulge).
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=20.0cm,angle=0, trim= 0cm 0.5cm 0.5cm 1.5cm, clip]{./allaws_dif.pdf}}
\caption{Differences between $A_\lambda/A_{Ks}$ for a set of reddening laws and that of \citet{Hulst1946}, taken as a zero point. The vertical magenta line is the range reported by \citet{Nataf16}. Data labelled Schlafly16 \citep{Scha16} were in reality derived by us, using their colour excess ratios. The letters at the top of the plot indicate the approximate wavelengths of the filters -- see Table\,\ref{table1}.}
\label{fig:allaws_dif}
\end{figure}
The empirical limits of the RC strip in the CMD could be defined by eye, as done in many works, but we have a more objective way to do it. We used Nishiyama's law with $\mu_{Ks}$\,=\,0.11 and 0.23\,mag\,kpc$^{-1}$ to encompass the RC overdensity (dashed lines in Fig.\,\ref{fig:RCstrip}). The fact that the RC strip crosses the Wd~1 MS indicates that there are interlopers in the overdensity strip. Many of those interlopers can be excluded on the basis of their colours in the CCD. We used Nishiyama's law to evaluate the colour excesses and obtained an average $E_{J-Ks}/E_{H-Ks}$\,=\,2.9\,$\pm$\,0.07 for the RC candidates inside the strip. We then excluded objects departing from this average by more than 3$\sigma$, keeping the ones in the range 2.7\,$<$\,$E_{J-Ks}/E_{H-Ks}$\,$<$\,3.1, which resulted in 8463 stars with $JHKs$ magnitudes and colours compatible with RCs. Of course, stars with similar colours but different absolute magnitudes (red dwarfs, K giants) remain in the RC sample. However, they work in exactly the same way as RC stars regarding the measurement of the colour excesses and the reddening law.
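The clipping step can be sketched schematically as follows (synthetic ratios drawn around the measured mean of 2.9 with $\sigma$\,=\,0.07, plus a few artificial interlopers; this is an illustration, not our actual catalogue):

```python
import random
import statistics

random.seed(1)
# Synthetic E(J-Ks)/E(H-Ks) values: RC-like ratios plus a few interlopers
ratios = [random.gauss(2.9, 0.07) for _ in range(1000)] + [2.0, 3.8, 4.5]

mean = statistics.fmean(ratios)
sigma = statistics.stdev(ratios)
kept = [r for r in ratios if abs(r - mean) <= 3.0 * sigma]   # 3-sigma clip
```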
For the RC candidates, we obtained $BVIZYW1W2$ magnitudes from the published catalogues described in Section\,\ref{section2.1}. The final number of RCs is: 26 in $B$, 64 in $V$, 205 in $I$, 1439 in $Z$, 1395 in $Y$, 8463 in $JHKs$ and 256 in $W1,W2$. We combine this RC sample with the 105 Wd~1 cluster members which have published spectral types and $BVRIJHKs$ magnitudes. Intrinsic colours for RC stars were taken from Table 1 in \citet{Nataf16} for the optical and NIR and from \citet{GF14} for WISE - their Table\,1 - interpolating the W1 and W2 effective wavelengths reported by \citet{Scha16} - their Table\,2. Absolute magnitudes were calculated by adopting $M_{I,RC}$\,=\,$-0.12$ from \citet{Nataf13}.
\subsection{Colour excess ratios combining Red Clump stars and Wd~1 cluster members}\label{section3.2}
Fig.\,\ref{fig:EJHXEJK} displays the colour excess ratio $E_{J-H}\,/\,E_{J-Ks}$ for RCs (red points) and Wd~1 cluster members (blue crosses), together with the linear fit relating these quantities. Similar fits for the other colour excess ratios are presented in Appendix\,\ref{AppendixC}. We present below the set of colour excess ratios, combining data from the RCs and the Wd~1 cluster members.
\begin{eqnarray}
E_{B-J} & = & (8.167\pm0.263) ~E_{J-Ks}, \label{eq3} \\
E_{V-J} & = & (5.257\pm0.181) ~E_{J-Ks}, \label{eq4} \\
E_{R-J} & = & (3.597\pm0.185) ~E_{J-Ks}, \label{eq5} \\
E_{I-J} & = & (2.465\pm0.032) ~E_{J-Ks}, \label{eq6} \\
E_{Z-J} & = & (1.797\pm0.012) ~E_{J-Ks}, \label{eq7} \\
E_{Y-J} & = & (0.841\pm0.004) ~E_{J-Ks}, \label{eq8} \\
E_{J-H} & = & (0.655\pm0.001) ~E_{J-Ks}, \label{eq9} \\
E_{H-Ks} & = & (0.345\pm 0.001) ~E_{J-Ks}, \label{eq10} \\
E_{J-H} & = & (1.891\pm0.001) ~E_{H-Ks}, \label{eq11} \\
E_{W1-J} & = & (-1.274\pm 0.030) ~E_{J-Ks}, \label{eq12} \\
E_{W2-J} & = & (-1.194\pm 0.055) ~E_{J-Ks}. \label{eq13}
\end{eqnarray}
The set of colour excess ratios shows that our results are very accurate for the $JHKs$ bands, since we have much more data at these wavelengths than at shorter ones, and the relations for this set of three indices are completely dominated by the RCs. The relation $E_{J-H} \times E_{J-Ks}$ is the tightest one. The plot even reveals the spread of extinctions among the Wd~1 cluster members, and because of this we anchor our results on this particular colour ratio. The ratio $E_{J-H}/E_{J-Ks}$\,=\,0.655\,$\pm$\,0.001 corresponds to the more usual form $E_{J-H}/E_{H-Ks}$\,=\,1.891\,$\pm$\,0.001.
In the case of the $Z$ and $Y$ filters, we have data only for the RCs (taken from the VISTA archive), not for the cluster members. For the $R$ filter, on the other hand, we have data only for Wd~1. The corresponding relation $E_{R-J}$\,$\times$\,$E_{J-Ks}$ does not enable an accurate linear fit, and we derived the angular coefficient from the average ratio between these colour excesses. The number of RC measurements in the $B$ and $V$ filters is much smaller than at longer wavelengths -- because RCs are faint in that spectral range -- and only the closer RCs have reliable photometry in those filters. As seen in the corresponding plots, the deep imaging of Wd~1 (reaching stars under large extinction) was crucial to guarantee a good fit in the colour relations involving filters in the optical window.
\subsection{Absolute extinctions derived from colour excess ratios}\label{section3.3}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=20.0cm,angle=0, trim= 0cm 0.7cm 0cm 2cm, clip]{./planelaw.pdf}}
\caption{Extinction law for the Galactic Plane (GP), combining our results (filters with capital letters at the top and empty squares) with colour excess ratios from \citet{Scha16} (filled triangles, $g,r,i,z$ filters), for which we derived the extinctions. The dotted red line is the fit with Eq.\,\ref{eq19} and the black asterisks are the residuals (fit minus observations). The black {\it dashed} line is the power law with exponent $\alpha$\,=\,2.17, which is a very good representation of the extinction law for the inner GP in the range 0.8--4.0\,$\mu$m.}
\label{fig:planelaw}
\end{figure}
We selected the colour excess ratio $E_{J-H}/E_{J-Ks}$ to derive the $\alpha$ exponent of the reddening law from Eq.\,\eqref{eq2}. This choice is based on the high accuracy of that index and on the fact that this is the best wavelength range to fit the reddening curve by a power law.
We performed an initial study to derive $\alpha$ based on dynamical effective wavelengths, which evolve as the colour indices increase, following the procedure designed by \citet{Stead+2009}. We approximated their relations from their Fig.\,2a by the linear relations in Appendix~\ref{appendixA} to calculate the effective wavelengths of the $JHKs$ filters as a function of $H-Ks$. We then extracted the photometry from the 2MASS catalogue inside a circle of 1$^\circ$ diameter centred on the Wd~1 cluster, from which we selected RCs as we did for our own photometry. We evolved the effective wavelengths using $H-Ks$ for each star, derived their $\alpha$ and took the median value: $\alpha$\,=\,2.25\,$\pm$\,0.15.
We used an alternative and simpler procedure, as described in Section\,\ref{section2}, taking the wavelengths of each filter fixed, as derived by \citet{Stead+2009} using a K2\,III star. We found $\alpha$\,=\,2.14\,$\pm$\,0.10, in reasonable agreement with the more complex method and in excellent agreement with \citet{Stead+2009} who derived $\alpha$\,=\,2.03\,$\pm$\,0.18 from 2MASS data and $\alpha$\,=\,2.17\,$\pm$\,0.07 from UKIDSS data. Since our data are calibrated in the 2MASS system, we adopted the fixed effective wavelengths, as just mentioned, for our photometric data, which have an observational setup very close to that of 2MASS filters. Now, using Eq.\,\eqref{eq2} and the measured ratio $E_{J-H}/E_{J-Ks}$\,=\,0.655\,$\pm$\,0.001 we derived $\alpha$\,=\,2.126\,$\pm$\,0.080 which translates into $A_J/A_{Ks}$\,=\,3.229 and $A_H/A_{Ks}$\,=\,1.769.
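Solving for $\alpha$ from a measured colour excess ratio has no closed form; a minimal bisection sketch (stdlib only, assuming the fixed effective wavelengths adopted above) reproduces these numbers:

```python
LJ, LH, LKS = 1.244, 1.651, 2.159     # 2MASS effective wavelengths (μm)

def ejh_over_ejks(alpha):
    """E(J-H)/E(J-Ks) for a power law A_lam ∝ lam**(-alpha)."""
    aj, ah = (LKS / LJ) ** alpha, (LKS / LH) ** alpha   # A_J/A_Ks, A_H/A_Ks
    return (aj - ah) / (aj - 1.0)

def solve_alpha(ratio, lo=1.0, hi=3.0, tol=1e-8):
    """Bisection; ejh_over_ejks increases monotonically with alpha here."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ejh_over_ejks(mid) < ratio else (lo, mid)
    return 0.5 * (lo + hi)

alpha = solve_alpha(0.655)        # ≈ 2.126
aj_ks = (LKS / LJ) ** alpha       # A_J/A_Ks ≈ 3.23
ah_ks = (LKS / LH) ** alpha       # A_H/A_Ks ≈ 1.77
```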
Using this value of $A_J/A_{Ks}$ and the colour excess ratios in the previous sub--section (Equations\,\ref{eq3} to \ref{eq13}), we derive the remaining $A_\lambda/A_{Ks}$ listed below:
\begin{eqnarray}
A_B = 21.43 : A_V = 14.95 : A_R = 11.25 : A_I = 8.72 : \nonumber \\
A_Z = 7.23 : A_Y = 5.10 : A_J = 3.23 : A_H = 1.77 : \nonumber \\
A_{Ks} = 1 : A_{W1} = 0.39 : A_{W2} = 0.26.
\label{eq14}
\end{eqnarray}
The above set of $A_\lambda/A_{Ks}$ defines an extinction law much steeper than the typical ones published up to $\sim$10 years ago, which are nevertheless still in use \citep{CCM89,Indebetouw+2005,RL85}. From these relations, we can obtain the absolute extinction in all filters for a star, provided its $A_{Ks}$ is known. $A_{Ks}$ is easily obtained from the colour excesses, for example:
\begin{eqnarray}
A_{Ks} & = & 0.449 ~E_{J-Ks}, \label{eq15} \\
A_{Ks} & = & 0.685 ~E_{J-H}, \label{eq16} \\
A_{Ks} & = & 1.300 ~E_{H-Ks}. \label{eq17}
\end{eqnarray}
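The averaging of these three estimators can be sketched as follows; this is an illustrative example (not the pipeline used in this work), and the input colour excesses below are hypothetical values chosen only to match the cluster mean:

```python
# Illustrative average of the three redundant A_Ks estimators of
# Eqs. (15)-(17); the coefficients follow from A_J/A_Ks = 3.229 and
# A_H/A_Ks = 1.769, e.g. A_Ks/E(J-Ks) = 1/(A_J/A_Ks - 1) = 0.449.
COEFF = {"J-Ks": 0.449, "J-H": 0.685, "H-Ks": 1.300}

def a_ks_mean(e_jks, e_jh, e_hks):
    """Mean of the three A_Ks estimates from NIR colour excesses."""
    estimates = (COEFF["J-Ks"] * e_jks,
                 COEFF["J-H"] * e_jh,
                 COEFF["H-Ks"] * e_hks)
    return sum(estimates) / len(estimates)

# Hypothetical colour excesses, chosen to be consistent with the
# cluster mean <A_Ks> ~ 0.736 (note E(J-Ks) = E(J-H) + E(H-Ks)).
print(round(a_ks_mean(1.639, 1.074, 0.566), 3))   # 0.736
```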
In order to minimise the impact of photometric errors, we use all three relations and average the results to obtain a more robust value of $A_{Ks}$. We did this to obtain the extinction towards the Wd~1 cluster members, presented in Table\,\ref{table2} (column\,3) for all filters. The average value for this cluster, based on 92 stars after excluding those with obvious circumstellar contamination, is:
\begin{equation}
<A_{Ks}~Wd1> = 0.736 \pm 0.079.
\label{eq18}
\end{equation}
The 18 WR stars not affected by dust emission yield $<A_{Ks} ~Wd1-WRs>$\,=\,0.736\,$\pm$\,0.082, indistinguishable from the less evolved Wd~1 cluster members. This value is substantially lower than those derived by previous authors for this cluster, which has an impact on the derived distance.
\subsection{A reddening law for the Galactic Plane in the range 0.4--4.8 $\mu$m}\label{section3.4}
Our target is located in the GP, and it is interesting to compare our results with others in the same region, since most of the recent studies based on large area surveys are focused on the Galactic Bulge (GB). A particularly important work in the GP is that reported by \citet{Scha16} for many thousands of stars, using the APOGEE spectroscopic survey and photometry from the ten--band Pan-STARRS1 survey. The majority of the targets are at $-5^\circ<$\,b\,$<+5^\circ$ and 0$^\circ<$\,l\,$<$250$^\circ$. \citet{Scha16} reported colour excess ratios in the optical, NIR and MIR windows, and their results are in excellent agreement with ours. They did not derive the extinction relations ($A_\lambda/A_{Ks}$), but these can be obtained in the same way as described in the previous sub--section, especially because their ratios are anchored on 2MASS colour excesses. From their $E_{J-H}/E_{H-Ks} = 1.943$ we obtain $\alpha$\,=\,2.209, which translates into $A_J/A_{Ks}$\,=\,3.380 and $A_H/A_{Ks}$\,=\,1.809. The set of $A_\lambda/A_{Ks}$ corresponding to their colour excess ratios is: $\displaystyle A_g\,=\,16.61\,:\,A_r\,=\,12.24\,:\,A_i\,=\,9.10\,:\,A_z\,=\,7.05\,:\,A_J\,=\,3.38\,:\,A_H\,=\,1.81\,:\,A_{Ks}\,\,=\,\,1\,:\,A_{W1}\,=\,0.43\,:\,A_{W2}\,\,=\,\,0.21$.
Given the excellent agreement between their results and ours, one can infer an average extinction law for the inner GP by performing a polynomial fit to both data sets in the range 0.4--4.8\,$\mu$m (see Eq.\,\eqref{eq19}, where $x = \log_{10}(2.159/\lambda)$ with $\lambda$ in $\mu$m; the corresponding plot is in Fig.\,\ref{fig:planelaw}).
\begin{equation}
\log\frac{A_\lambda}{A_{Ks}} = -0.015 + 2.330 x + 0.522 x^2 - 3.001 x^3 + 2.034 x^4.
\label{eq19}
\end{equation}
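The polynomial is straightforward to evaluate; the following sketch (an illustration, not the fitting code) samples it at a few of the filters discussed in this paper:

```python
import math

# Evaluate the polynomial Galactic-plane law of Eq. (19):
# log10(A_lambda/A_Ks) as a function of x = log10(2.159/lambda),
# with lambda in micron (nominal validity 0.4-4.8 micron).
COEFFS = (-0.015, 2.330, 0.522, -3.001, 2.034)   # c0..c4

def a_ratio(lam_um):
    """A_lambda/A_Ks from the polynomial fit."""
    x = math.log10(2.159 / lam_um)
    return 10.0 ** sum(c * x**i for i, c in enumerate(COEFFS))

# Sample at V, J, Ks and W1; at lambda = 2.159 micron the law gives
# 10^(-0.015) ~ 0.97, i.e. close to A_Ks/A_Ks = 1 by construction.
for lam in (0.537, 1.244, 2.159, 3.295):
    print(lam, round(a_ratio(lam), 2))
```

Because the fit averages the Wd~1 and \citet{Scha16} data sets, individual filters deviate slightly from either set of ratios.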
The standard deviation of the observed minus calculated fit (O--C) is very small and is driven by the residuals at wavelengths shorter than 1\,$\mu$m, especially in the $B$ filter. It is remarkable that a power law with $\alpha$\,=\,2.17 is an almost perfect representation of the data in the range 0.8--4\,$\mu$m, with r.m.s.\,=\,0.09\,magnitudes.
\section{Comparison between different extinction laws}\label{section4}
\subsection{The $\alpha$ exponent and JHKs colour ratios}\label{section4.1}
Comparing reddening laws from different authors should be straightforward in the NIR, since modern $JHKs$ filters are very similar. Most authors report a narrow range of $E_{J-H}/E_{H-Ks}$ values (1.8--2.0). Surprisingly, there is a large spread in the published reddening laws, even when derived in the narrow NIR window and from the same database (2MASS): $\alpha$\,$\approx$\,1.65--2.64, which translates into $A_J/A_{Ks}$\,$\approx$\,2.5--4.2. The different procedures used to derive the extinction law from the colour excess ratios have different biases. In most cases, this is due to the difficulty in transforming broad-band measurements into monochromatic wavelengths \citep{Sale15}, but there are additional issues, some of which we describe here.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=20.0cm,angle=0, trim= 0cm 0.7cm 0cm 2cm, clip]{./RJKIJxRAIVI.pdf}}
\caption{Comparison of our results with those of other authors, showing a lack of correlation between $E_{J-Ks}/E_{I-J}$ and $A_I/E_{V-I}$. Our result for RCs (red crosses) compares well with that of \citet{Nataf16} -- dashed rectangle -- for the Galactic Bulge and samples the low density, higher ionization ISM. Wd~1 cluster members (blue crosses) have a contribution from a slightly different grain distribution in a denser medium. Older results by \citet{FM07} and \citet{CCM89} are represented by pink and green circles, respectively.}
\label{fig:RJKIJxRAIVI}
\end{figure}
\begin{table*}
\scriptsize
\label{table1}
\begin{minipage}{170mm}
\caption{Comparison between some reddening laws. Authors - 1st row: vdH\#16 = \citet{Hulst1946}, Nat16 = \citet{Nataf16}, Sch16 = \citet{Scha16}, SH09 = \citet{Stead+2009}, Ind05 = \citet{Indebetouw+2005}, CCM89 = \citet{CCM89}, FM09 = \citet{FM09}, Nishi06 = \citet{Nishiyama+2006}, Nishi08 = \citet{Nishiyama+2008}.
Values marked with * were derived in this work from reddening and effective wavelengths reported by the corresponding authors. The second row indicates the photometry source. For FM09, we used only the star BD+45 973.}
\begin{tabular}{lcccccccccccc}
\hline
\hline
Filter &$\lambda_{\rm eff}$& Wd~1 & Wd1 + RCs & vdH\#16&Nat16&Sch16& SH09& Ind05& CCM89& FM09& Nishi06,08 \\
\hline
& $\mu$m &$A_{\lambda}$&$A_{\lambda}/A_{Ks}$&&VVV & Pan-STARRS1&UKIDSS &2MASS &&HST/SNAP & SIRIUS \\
\hline
$B$ & 0.442 &15.49$\pm$0.10& 21.43& 14.36 & - & - &- & - & 11.49& 17.52& -\\
$V$ & 0.537 &11.26$\pm$0.07& 14.95& 11.69& 13-15& 16.61g*&- & - & 8.77&12.76 &16.13\\
$R$ & 0.664 &8.44$\pm$0.10 & 11.25& 8.63& - & 12.24r*&- & - & 6.59& 9.48& -\\
$I$ & 0.805 &5.71$\pm$0.04 & 8.72& 6.32& 7.26 & 9.10i* & & - & - & 6.52& -\\
$Z$ & 0.878 & - & 7.23& 5.30& - & 7.05z* & - & - & 4.20& 5.81& -\\
$Y$ & 1.021 & - & 5.10& 3.97& - & - & - &- & - & - & - \\
$J$ & 1.244 &2.34$\pm$0.03 & 3.23& 2.70& 2.85 & 3.56* &3.25*& 2.5& 2.47& 2.90& 3.02 \\
$H$ & 1.651 &1.29$\pm$0.02 & 1.77& - & - & 1.84*&1.78*&1.61 & 1.54 & 1.67&1.73 \\
$Ks$ & 2.159 &0.74$\pm$0.01 & 1 & 1 & 1 & 1 & 1 & 1& 1 & 1 & 1 \\
$W1$ & 3.295 &0.29$\pm$0.03 & 0.39& - & - &0.43* & - & - & - & - &0.40 \\
$W2$ & 4.4809 & 0.19$\pm$0.05& 0.26& - & - & 0.21* & - & - & - & - &0.20 \\
\hline
{\bf$\alpha$} & - & - & 2.13& - &1.88* & 2.21*& 2.14 &1.66*&1.68*& 2.00& 1.99 \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
An example of differences related to methodology comes from comparing three extreme results based on 2MASS data. We adopted the effective wavelengths based on \citet{Stead+2009}, reported in Section\,\ref{section2}, and the observed colour excess ratio $E_{H-Ks}/E_{J-Ks}$\,=\,0.345 to derive $\alpha$\,=\,2.13 and $A_J/A_{Ks}$\,=\,3.23. \citet{Indebetouw+2005} measured $E_{H-Ks}/E_{J-Ks}$\,=\,0.36 and derived $A_J/A_{Ks}$\,=\,2.5 directly by minimization of the colour excesses. It is surprising that \citet{GF14}, also using 2MASS data, obtained a very high $\alpha$\,=\,2.64 ($A_J/A_{Ks}$\,=\,4.2), although their colour excess ratio $E_{J-H}/E_{H-Ks}$\,=\,1.934 is very similar to the others. Their method is too complex to identify what exactly drove that power-law exponent, but it corresponds to the extreme values reported by \citet{FM09}, from which they adopted the reddening law. In some way their procedure converged to the highest, and not to the average, exponent of \citet{FM09}. Using our procedure (anchoring the reddening law on the $\alpha$ exponent) with the colour excess ratios reported by those authors, we derive $\alpha$\,=\,1.89 with $A_J/A_{Ks}$\,=\,2.84 and $\alpha$\,=\,2.15 with $A_J/A_{Ks}$\,=\,3.27, for \citet{Indebetouw+2005} and \citet{GF14} respectively. This agreement indicates that small differences in the effective wavelengths are not the main cause of the discrepancies found in the reddening laws.
Beyond discrepant methodologies, genuine differences due to dust properties do exist and are clearly shown by \citet{FM09}, based on spectrophotometric studies and thus not affected by problems in defining effective wavelengths. Large area surveys based on broad--band filters, such as those reported by \citet{Nataf16} for the Galactic Bulge and by \citet{Scha16} for the GP, also show real variations and, surprisingly, they exist on very small scales (arcminutes), on top of a general pattern related to the angular distance from the Galactic Centre.
\citet{Nishiyama+2006} used their SIRIUS survey of the Bulge to measure $E_{H-Ks}/E_{J-Ks}$\,=\,0.343 and derived $\alpha=1.99$ with $A_J/A_{Ks}$\,=\,3.02, in reasonable agreement with our results for the Wd~1 direction. Our extinction law is in excellent agreement with \citet{Stead+2009}, who obtained $\alpha$\,=\,2.14.
By translating their power law into extinction ratios (not presented in their original work), we computed $A_J/A_{Ks}$\,=\,3.25.
\citet{Nataf16} reported an $<A_J/A_{Ks}>$\,=\,2.85 from the large and deep VVV survey towards the Bulge, as in \citet{Nishiyama+2006}, but covering more diverse environments close to the GP. Using the effective wavelengths from VVV, this corresponds to $\alpha$\,=\,1.88.
\citet{Scha16} measured $E_{J-H}/E_{H-Ks}$\,=\,1.943 for a 2MASS sample in the inner GP, which corresponds to $E_{H-Ks}/E_{J-Ks}$\,=\,0.340. Using their adopted 2MASS effective wavelengths ($\lambda_J$\,=\,1.2377, $\lambda_H$\,=\,1.6382 and $\lambda_{Ks}$\,=\,2.151\,$\mu$m), this implies $\alpha$\,=\,2.3 with $A_J/A_{Ks}$\,=\,3.56 and $A_H/A_{Ks}$\,=\,1.84. This is a little higher than our value; using our adopted effective wavelengths it is reduced to $\alpha$\,=\,2.21 with $A_J/A_{Ks}$\,=\,3.38 and $A_H/A_{Ks}$\,=\,1.81. We use these last values in Fig.\,\ref{fig:allaws_dif} for consistency with our procedure, although the differences are very small.
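The sensitivity of $\alpha$ to the adopted effective wavelengths can be checked numerically. The following illustrative sketch (not the code used in this work) solves the power-law relation for $\alpha$ by bisection, for the two wavelength sets quoted above and the same measured ratio:

```python
# Sensitivity of the derived alpha to the adopted effective wavelengths:
# solve E(H-Ks)/E(J-Ks) = (lh^-a - lk^-a)/(lj^-a - lk^-a) for alpha.
def solve_alpha(ratio, lj, lh, lk, lo=1.0, hi=4.0, tol=1e-6):
    def f(a):
        aj, ah, ak = lj**-a, lh**-a, lk**-a
        return (ah - ak) / (aj - ak)
    # f decreases as alpha grows, so bracket accordingly.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Same measured ratio, two wavelength sets (micron):
print(solve_alpha(0.340, 1.2377, 1.6382, 2.151))   # Schlafly et al.: ~2.3
print(solve_alpha(0.340, 1.244, 1.651, 2.159))     # this work: ~2.21
```

A shift of a few nm in the adopted wavelengths thus moves $\alpha$ by $\sim$0.1, while leaving $A_J/A_{Ks}$ nearly unchanged.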
In summary, extinction laws in the $JHKs$ bands derived after 2005 are generally steeper than earlier ones, a change driven by the deep surveys of the inner Galaxy.
The foreground extinction we obtained in this work for the Wd~1 cluster, $A_{Ks}$\,=\,0.736, is smaller than the $\approx$\,0.9--1.0 claimed in previous papers \citep{Andersen16, Gennaro+2011, Crowther+2006, Brandner+2008}. We derived a separate $A_{Ks}$ extinction for 18 Wolf--Rayet stars (excluding the dusty WC9) and obtained $A_{Ks}$\,=\,0.743\,$\pm$\,0.08, in excellent agreement with that based on non--WR cluster members. For comparison, using the same set of WRs, \citet{Crowther+2006} reported $A_{Ks}$\,=\,0.96\,$\pm$\,0.14. Later, those authors revised their value to $A_{Ks}$\,=\,1.01\,$\pm$\,0.14 \citep{Crowther+2008}. A recent study of the Wd~1 low mass contents by \citet{Andersen16} reports $A_{Ks}$\,=\,0.87\,$\pm$\,0.01, in better agreement with ours than previous works, but still not compatible. This is because they used the reddening line from \citet{Nishiyama+2006}, which is appropriate for the Bulge, while our target is in the GP. \citet{Andersen16} found some evidence for an extinction gradient increasing towards N--NE, based on a large population of PMS stars, in agreement with our result based on fewer evolved stars. Those authors derived the extinction for every single PMS star, instead of using an average extinction.
Our absolute extinction for Wd~1 is not compatible with previous ones and has an impact on the cluster distance, age and the absolute magnitudes of WRs. After measuring the cluster distance based on an eclipsing binary, we will tackle this question in a future paper.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=20.0cm,angle=0]{./VVV_map.pdf}}
\caption{Colour coded map from the VVV survey with filters $Z$ (blue), $J$ (green) and $Ks$ (red). The labels indicate the directions in Fig.\,\ref{fig:J-Kdensity} for which we measured the density of stars as a function of $J-Ks$ colour. The circles have $r$\,=\,2\arcmin. The {\it Centre} contains the Wd~1 cluster and the {\it ring} has width\,=\,1\arcmin. The yellow rectangle indicates the ISPI FOV.}
\label{fig:VVV_map}
\end{figure}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=20.0cm,angle=0, trim= 2.2cm 0cm 3.6cm 0cm, clip]{./dustmap.pdf}}
\caption{Extinction map. a): $A_{Ks}$ for the cluster members: {\it white crosses} are for values departing less than 0.5$\sigma$ from the average ($0.70<A_{Ks}<0.77$); {\it orange triangles} for slightly higher reddening ($0.78<A_{Ks}<0.79$); {\it red triangles} for the highest values ($0.81<A_{Ks}<1.17$); {\it green squares} for $0.66<A_{Ks}<0.70$; and {\it blue squares} for $0.60<A_{Ks}<0.65$. b): {\it red clouds} for cold dust (160~$\mu$m); {\it blue clouds} for warm dust (70~$\mu$m) from the Herschel survey; and {\it contour lines} for hot dust (24~$\mu$m) from the WISE survey. Black regions in the Figure indicate the absence of dust emission. For a complete identification of the features in this Figure, we refer to \citet{Negueruela+2010}, \citet{Ritchie+2009} and \citet{dougherty10}.
}
\label{fig:dustmap}
\end{figure}
\subsection{Relation between the extinction in the NIR, optical and MIR windows}\label{section4.2}
As seen in Fig.\,\ref{fig:allaws_dif}, all recent laws are much steeper than that of \citet{Hulst1946}, taken as a zero point, in contrast with \citet{CCM89}, which is much shallower. The frequently used rule of thumb $A_V/A_{Ks}$\,$\approx$\,10, based on that law and similar ones, is no longer acceptable as representative of the Galaxy. The ratio derived in this work is $A_V/A_{Ks}$\,=\,14.95, and that of \citet{Scha16}, at a slightly shorter wavelength, is $A_g/A_{Ks}$\,=\,16.61, both along the GP. It is remarkable that these values are not far from the $A_V/A_{Ks}$\,$\approx$\,13--16 we derived from the colour excess ratios reported by \citet{Nataf16} and the $A_V/A_{Ks}$\,=\,16.13 reported by \citet{Nishiyama+2008}, both for the Bulge.
The absolute extinction towards Wd~1 obtained in the present work $A_V$\,=\,11.26\,$\pm$\,0.07 is in excellent agreement with $A_V$\,=\,11.6 derived by \citet{Clark+2005} for Yellow Hypergiants in Wd~1.
After the works of \citet{FM09} and \citet{Nataf16}, it became clear that the family of extinction laws cannot be represented by a single parameter such as $R_V$. \citet{Nataf16} showed that the ratios $R_{JKIJ}$\,=\,$E_{J-Ks}/E_{I-J}$ and $R_I$\,=\,$A_I/E_{V-I}$ are not correlated. This is not unexpected, since the grain properties may differ between the general field (RCs) and the denser environment of the cluster.
In Fig.\,\ref{fig:RJKIJxRAIVI} we compare our results with those from previous works.
Wd~1 cluster members are seen under a range of extinction $A_V$\,$\approx$\,9--15 magnitudes, indicating the presence of gas/dust condensations in the intra-cluster medium.
Results based on alternative ratios in the optical (e.g. $A_V/E_{B-V}$ and $A_V/E_{V-I}$ against $R_{JKIJ}$) point to the same scenario of a lower $A_V/E_{B-V}$ in the inner Galaxy, as shown by \citet{Nataf13}. The $R_V$\,=\,$A_V/E_{B-V}$ ratio has been reported to disagree with the ``universal'' $R_V$\,=\,3.1 for the Wd~1 cluster \citep{Clark+2005,Negueruela+2010,Lim+2013}. Using 38 members of the Wd~1 cluster, we obtained $\displaystyle R_V = {A_V} / {E_{B-V}} = 2.50\pm0.04$.
For the MIR, using data from WISE for our measured colour excess ratios, we obtained $A_{W1}$\,=\,0.39\,$\pm$\,0.03 and $A_{W2}$\,=\,0.26\,$\pm$\,0.02.
Such values are in very good agreement with those derived from \citet{Scha16} colour excess ratios.
\section{Extinction map}\label{section5}
The extinction map towards the Wd~1 cluster can be tackled in several different ways: a) from the distribution of the cluster members' extinction; b) from maps of the colours of stars; and c) from the correlation between these two maps and the distribution of dust clouds in the FOV.
\subsection{Extinction map of the cluster members}\label{section5.1}
In Fig.\,\ref{fig:dustmap} we present a map of extinction towards Wd~1. Points represent our measurements of $A_{Ks}$ for cluster members. White crosses represent the average extinction ($0.66<A_{Ks}<0.82$ or $10.1<A_V<12.6$), orange and red triangles values higher than 1$\sigma$, and green and blue points values lower than 1$\sigma$ from the average. Lower extinction dominates the W/SW regions and higher extinction the N/NE of the cluster centre, while the central region displays the whole range of values. The mixture of values in the central region indicates the existence of intra--cluster dust, with some members displaced to our side and others to the back. It seems that at W/SW the dust lies behind the cluster, crossing to the front at N/NE. The reality of this gradient is checked in the next sub--section, where we discuss the far--infrared (FIR) images.
Typical cluster members are seen through extinction in the range $0.66<A_{Ks}<0.82$ or $10.1<A_V<12.6$, represented by the 1$\sigma$ spread around the median value (78 stars). They are affected by interstellar plus local extinction. The $A_V = 2.5$ magnitude range is due to local dust in front of or internal to the cluster, and is surprisingly large for a $\sim 3.5 \arcmin$ FOV. The interstellar component can be measured from the (five) stars with the lowest extinction: $A_{Ks}=0.63\pm 0.02$ or $A_V=9.7\pm 0.30$. The nature of the reddening for the highest (seven) values, $0.83<A_{Ks}<1.17$ or $12.7<A_V<17.9$, is difficult to assess: some stars may have genuinely high extinction, but others might be affected by hot circumstellar dust emission, which mimics large reddening.
\subsection{Cold and warm dust probed by FIR imaging }\label{section5.2}
The red clouds in Fig.\,\ref{fig:dustmap} represent cold dust as measured by the Herschel survey, using the PACS instrument \citep{Poglitsch2010}.
It is notable that there are areas clear of dust (black regions) at E and W, probably carved by winds from massive stars and/or supernova explosions. Just three ``fingers'' of cold dust reach the central regions of the cluster at NW, S and N, and their tips are crowned by bright blue spots, tracing warm dust. The ``elephant trunk'' at NW was detected in radio emission by \citet{dougherty10}. Warm dust impacts the cluster centre, but dominates the East side, in rough agreement with the higher extinction measured for stars. Four noticeable spots of warm dust coincide with the Red Supergiants (RSGs) W237, W20, W20 and the W9 B[e] star. However, the reddening of the RSGs is not larger than the average, indicating that the heated dust lies behind the star cluster.
\subsection{Extinction and colours in the field around Wd~1}\label{section5.3}
We can use the spatial density of the colour indices $(J-Ks)$ in our whole catalogue of sources to study the 3D distribution of dust.
To study the surface density of stars projected inside a particular FOV, we need to keep in mind the many effects involved in the observed star counts. The most important is the increase of the projected density with distance, which grows as the volume of the spherical sector in the FOV (for uniform density). At a given magnitude limit, this is counterbalanced by the inverse square law for the brightness and by the extinction. Finally, in some regions and at a given image quality, stellar crowding depresses the counts. These considerations provide guidelines to interpret the stellar counts as a function of magnitude in a particular image, but most of the effects cannot be corrected.
We binned the $J-Ks$ colour indices for the ISPI+CAMIV photometry and normalised each slice of colours by the average colour index of the entire slice. After some initial trials, we selected a few slices representative of the features we found in the image, as shown in Fig.\,\ref{sb1:colour_density}. The upper left panel of Fig.\,\ref{sb1:colour_density} displays a remarkable concentration of blue stars at W--SW of Wd~1, which is probably a real cluster and not the result of a window of low extinction in the image.
The upper right panel (1.251\,$<$\,$(J-Ks)$\,$<$\,1.788) shows the Wd~1 cluster at the centre of the image. The cluster size is larger than 6\arcmin and it is elongated in the way described by \citet{Gennaro+2011} for the inner regions. The lower left panel shows a slice without strong concentrations in the stellar density. The regions of enhanced density at E and W of the cluster centre seem to be a halo of red pre-main sequence (PMS) stars belonging to the cluster. The lower density at the S is probably due to a high extinction patch which shifts stars from that particular slice of colours to higher values, since on average that slice represents the stellar field beyond the Wd~1 cluster ($D$\,$>$\,4.5\,kpc). The depression at the centre is due to photometric incompleteness caused by the crowding of cluster members.
The lower right panel (3.757\,$<$\,$(J-Ks)$\,$<$\,4.365) represents a slice of very red colour indices. In addition to distant stars (reddened by distance), it also represents stars shifted to redder colours by clouds both closer and farther than Wd~1. This is the case for the enhanced density at S of Wd~1. We suggest that the enhanced density zone ranging from SE to NE is also due to an obscuring cloud more distant than the central cluster.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=20.0cm,angle=0, trim= 2.6cm 0.3cm 2.6cm 0.5cm, clip]{./colour_density.pdf}}
\caption{Stellar density maps: redder zones represent higher stellar densities. {\it Top left}: 0\,$<$\,$J-Ks$\,$<$\,0.350, showing a foreground concentration of stars with low extinction at SW of the Wd~1 cluster. {\it Top right}: 1.251\,$<$\,$J-Ks$\,$<$\,1.788, showing the Wd~1 cluster with an extended halo. {\it Lower left}: 2.145\,$<$\,$J-Ks$\,$<$\,3.041, located farther than Wd~1. The low density region coincident with the cluster position is due to photometric incompleteness caused by crowding. The similar one to the S of the cluster is due to a cloud of higher extinction. {\it Lower right}: 3.757\,$<$\,$J-Ks$\,$<$\,4.365: the higher densities at the E and SE of the cluster are due to a zone of enhanced extinction farther than Wd~1.}
\label{sb1:colour_density}
\end{figure}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=20.0cm,angle=0, trim= 0cm 0.5cm 0cm 2cm, clip]{./J-Kdensity.pdf}}
\caption{Density of stars for a range of bins in $J-Ks$ colour in some key directions defined in Fig.\,\ref{fig:VVV_map}. The first maximum in the {\it blue} line indicates spatial overdensity of stars at the SW foreground. The peak of the black {\it solid} and {\it dotted} lines ($J-Ks$\,$\approx$\,1.6) indicates the cluster centre. The green line (S) shows an increasing density towards redder colours up to the cluster colour/distance, then a decline followed by a recovery for large colour indices. For the NE and SE directions (red and brown lines) there is a similar behaviour, but the enhanced extinction occurs at the cluster distance/colour.}
\label{fig:J-Kdensity}
\end{figure}
To explore these results in more detail, we made a plot with VVV images of the region, shown in Fig.\,\ref{fig:VVV_map}. There is indeed a zone of blue stars (foreground young stars) at W--SW and a dark patch at the S and E of the cluster, indicating zones of higher extinction. The other structures revealed in Fig.\,\ref{fig:J-Kdensity} are not seen in the colour image of Fig.\,\ref{fig:VVV_map}. We then selected a few spatial directions, shown in the VVV map (Fig.\,\ref{fig:VVV_map}), to measure the stellar density as a function of the colour index $J-Ks$ along cylinders with 2\arcmin circular sections. In Fig.\,\ref{fig:J-Kdensity} we plot the density profiles with the same colours as the circles in Fig.\,\ref{fig:VVV_map}.
The solid black line in Fig.\,\ref{fig:J-Kdensity} shows the density profile in which the most remarkable feature is the cluster, in the colour range 1\,$<$\,$J-Ks$\,$<$\,2. The red extension of the peak is due to PMS stars, which are intrinsically redder than the MS members in the peak. PMS stars also show up in the ring around the cluster area ({\it black dotted line}), with colours up to $J-Ks$\,$\approx$\,2.5. The {\it blue line} in both Figs.\,\ref{fig:VVV_map} and \ref{fig:J-Kdensity} shows the concentration of stars with $J-Ks$\,$<$\,1 -- bluer than the rest of the sources -- indicating that they are foreground stars, as discussed above. The {\it green line} explores the density profile to the S of the cluster. The density profile in this direction is normal for colours bluer than those of the Wd~1 cluster, but then it decreases faster than normal up to $J-Ks$\,$\approx$\,3, when the stellar density grows again up to $J-Ks$\,$\approx$\,4. The minimum around $J-Ks$\,$\approx$\,3 is caused by enhanced extinction, which shifts the counts to larger reddening. The {\it red line} is for the circle at NE of the cluster. Its behaviour is normal for $J-Ks<1$, after which there is a local minimum coincident with the cluster colour, followed by a constant increase up to $J-Ks$\,$\approx$\,3.8. Our interpretation is that there is enhanced extinction at the distance of the cluster in that direction. The {\it brown line} corresponds to the circle at the SE and behaves similarly to that at NE, except that it starts with lower counts and presents two minima, around $J-Ks$\,$\approx$\,1.5 and 3.5. Our interpretation is that there are two clouds at different distances in this direction. The extinction towards the SE appears to be higher than in any other direction.
Just for the sake of spatial scale, the colour excess grows at a rate $E_{J-Ks}$\,$\approx$\,0.37\,mag\,kpc$^{-1}$, so that the deepest minimum, at $J-Ks$\,$\approx$\,3.7, corresponds to a distance $D$\,$\approx$\,10\,kpc.
In summary, the most obscuring clouds are located at the S--SE border of our image at $D$\,$\approx$\,10\,kpc, and there is a lighter obscuring cloud at the SE--NE border, at a distance comparable to that of Wd~1, which does not reach the S--SE borders of the ISPI image. The question is whether this particular cloud has any impact on the properties of the stellar cluster.
\section{The 8620\,\AA\,DIB tracing the extinction beyond $E_{B-V} = 1$}\label{section6}
\citet{Munari+2008} and \citet{Wallerstein+2007} showed that the 8620\,\AA\,DIB has a tight correlation with $E_{B-V}$. \citet{Maiz+2015} showed that the linear correlation of the equivalent width of this DIB (EW8620\,\AA) with the extinction holds up to $A_V$\,$\approx$\,6. These authors also showed that the diffuse low density interstellar medium exposed to UV radiation follows a different relation with extinction as compared to the dense cold ISM. \citet{Munari+2008} presented 68 measurements of EW8620\,\AA~in spectra from the RAVE survey ($R \sim 7500$), obtaining the relation $\displaystyle E_{B-V} = (2.72 \pm 0.03) \times EW8620$\,\AA~for $E_{B-V} < 1.2$. In this work we extend that relation to higher extinction values, which is relevant for the innermost Galactic region covered by the GAIA survey.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=20.0cm,angle=0, trim= 0cm 0.5cm 0.5cm 2cm, clip]{./AKEW8620.pdf}}
\caption{$A_{Ks}$ extinction $versus$ EW8620\,\AA. Red symbols: this work -- triangles for Wd~1 members, diamonds for bright field stars. Black crosses: \citet{Munari+2008}. Blue squares: \citet{Wallerstein+2007}. For EW8620\,$<$\,0.6\,\AA, which translates to $E_{B-V}$\,$<$\,1 or $A_V$\,$<$\,7, the relation is linear and coincides with that of \citet{Munari+2008}. The size of the black cross indicates the 1-$\sigma$ standard deviation for the Wd~1 cluster members.}
\label{fig:AKEW8620}
\end{figure}
Spectra were measured as described in Sect.\,\ref{section2.3}. For Wd~1 cluster members, $A_{Ks}$ was measured in the present work. For the Munari and Wallerstein stars, we used 2MASS photometry and spectral types taken from {\tt SIMBAD}\footnote{http://simbad.u-strasbg.fr/} to derive the extinction. Table\,\ref{table2} displays our results for EW8620\,\textup{\AA} (column 2), $A_{Ks}$ (column 3) and the source of the 8620\,\AA\ data (column 4). We adopt $A_{Ks}$ instead of $A_V$ or $E_{B-V}$ because it is much less sensitive to the dust grain size than optical wavelengths, and because our photometry was done in the $JHKs$ bands. We use the relation obtained for the Wd~1 cluster members, $A_{Ks}$\,=\,0.29\,$E_{B-V}$, to transform the extinction reported by \citet{Munari+2008} and \citet{Wallerstein+2007}, and for the nine field OB stars without $A_{Ks}$ measurements. The linear relation obtained by \citet{Munari+2008} is thus transformed to $A_{Ks}$\,=\,0.691\,$\times$\,$EW8620$\,\AA. Combining these values with those derived here, we fit the set with a polynomial function. The relation we derive from the combined dataset is:
\begin{table}
\label{table2}
\tiny
\centering
\caption{EW8620\,\AA\,, $A_{Ks}$ and $A_V$ extinction. Sources identified as Schulte\# are reported in \citet{Wallerstein+2007}. First 10 rows of the table are presented. A full version containing 103 entries is available online at the CDS.}
\begin{tabular}{lllll}
\hline
Identification &EW8620 (\AA)& $A_{Ks}$& $A_{V} $ & Spec. source \\
\hline
\hline
W2a & 0.885 & 0.686 &10.068& OPD\\
W6a & 1.067 & 0.649 &10.985& OPD \\
W6b & 0.817 & 0.719 &10.970& 081.D-0324 \\
W7 & 0.844 & 0.772 &11.773& OPD \\
W8a & - & 0.641 &11.033& OPD \\
W8b & 0.830 & 0.760 &10.411& 081.D-0324 \\
W11 & - & 0.734 &10.765& \\
W12a & 0.870 & 0.800 &12.543& \\
W13 & 1.05 & 0.784 &10.818& 081.D-0324 \\
W15 & 0.948 & 0.733 &12.258& 081.D-0324 \\
\hline
\end{tabular}
\end{table}
\begin{equation}
A_{Ks} = (0.612 \pm 0.013)~EW + (0.191 \pm 0.025)~EW^2.
\label{eq25}
\end{equation}
This equation is in excellent agreement with that of \citet{Munari+2008}, which is linear for $EW$\,$<$\,0.6\,\AA\ (equivalent to $A_V$\,$<$\,9 for the inner GP extinction law of this paper).
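The comparison between the quadratic fit and the rescaled linear relation can be sketched as follows (an illustration, not the fitting code):

```python
# A_Ks from the 8620 A DIB equivalent width (EW in Angstrom):
# the quadratic fit of Eq. (25) vs. the linear Munari et al. relation
# rescaled with A_Ks = 0.29 E(B-V).
def a_ks_quadratic(ew):
    return 0.612 * ew + 0.191 * ew**2

def a_ks_linear(ew):
    return 0.691 * ew

# The two relations agree closely at small EW and diverge as the
# quadratic term becomes important towards larger equivalent widths.
for ew in (0.2, 0.6, 1.0):
    print(ew, round(a_ks_quadratic(ew), 3), round(a_ks_linear(ew), 3))
```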
\section{Discussion and Conclusions}\label{section7}
We present a study of the interstellar extinction in a 10$'$ $\times$ 10$'$ FOV towards the young cluster Westerlund~1 in $JHKs$, with photometric completeness $>$\,90\% at $Ks$\,=\,15. Using publicly available data, we extended the wavelength coverage from the optical to the MIR (although with less complete photometry). Colour excess ratios were derived by combining 92 Wd~1 cluster members with published spectral classification with 8463 RC stars inside the FOV.
Our result for the NIR, $E_{J-H}/E_{H-Ks} = 1.891\pm0.001$, is typical of recent deep imaging towards the inner Galaxy. Using the procedure designed by \citet{Stead+2009} to obtain the effective wavelengths of the 2MASS survey and Eq.\,\eqref{eq2}, we derived a power-law exponent $\alpha = 2.126 \pm 0.080$, which implies $A_J/A_{Ks} = 3.23$. This extinction law is steeper than older ones \citep{Indebetouw+2005,CCM89,RL85} based on shallower imaging and is in line with recent results from deep imaging surveys \citep{Stead+2009, Nishiyama+2006, Nataf16}. In the NIR, this implies smaller $A_{Ks}$ and larger distances than laws based on shallower photometry, which has a large impact on inner Galaxy studies.
Using our measured $A_{Ks}/E_{J-Ks} = 0.449$ (plus combinations between other filters), we obtained the extinction to Wd~1, $<A_{Ks}> = 0.736 \pm 0.056$. This is $0.2-0.3$ magnitudes smaller than previous work based on older (shallower) extinction laws \citep{Gennaro+2011, Negueruela+2010, Lim+2013, Piatti+1998}. On the other hand, our $A_V = 11.26 \pm 0.07$ is in excellent agreement with the $A_V = 11.6$ derived by \citet{Clark+2005} from a completely different method: the OI~7774\,\AA\ EW $\times$ $M_V$ relation for six Yellow Hypergiants.
The cluster extinction encompasses the range $A_{Ks} = 0.55-1.17$ (which translates into $A_{V} \approx 8.5-17$). Cluster members have a typical extinction $A_{Ks}=0.74\pm 0.08$, which translates into $A_V=11.4\pm 1.2$. The foreground interstellar component is $A_{Ks} = 0.63\pm 0.02$, or $A_V = 9.66\pm 0.30$.
The extinction spread of $A_V \sim 2.5$ magnitudes inside a FOV of 3.5$\arcmin$ indicates that it is produced by dust connected to the cluster region. In fact, Fig.\,\ref{fig:dustmap} shows a patchy distribution of warm dust. There are indications of a gradient in $A_{Ks}$ increasing from SW to NE, which is in line with the map of warm dust and with the colour density maps in the surrounding field. However, the effect is not very clear, suggesting a patchy intra-cluster extinction. The $J-Ks$ colour density maps unveiled the existence of a group of blue foreground stars, which may or may not be a real cluster. Since those stars partially overlap the Wd~1 cluster, they must be taken into account when subtracting the field population in the usual procedures to isolate Wd~1 cluster members.
We measured the EW of the 8620\,\AA\ DIB for 43 Wd~1 cluster members and combined them with additional field stars and results collected from the literature, showing a good correlation with $A_{Ks}$. Although the linear relation reported by \citet{Munari+2008} was recovered for $E_{B-V}$\,$\approx 1$, it deviates for larger values, and we present a polynomial fit extending the relation. The moderately large scatter in the Wd~1 measurements seems to reflect the uncertainties in our procedures to measure the extinction and the EW. Unfortunately, our sample does not probe the range 0.4\,$<$\,EW8620\,$<$\,0.8 (see Fig.\,\ref{fig:AKEW8620}). To improve the situation, aiming at GAIA, the above relation should be redone incorporating measurements from stars with 1$\,<$\,$E_{B-V}$\,$<$\,2.5 (equivalent to 0.3\,$<$\,$A_{Ks}$\,$<$\,0.7). As a matter of fact, such a relation is expected to differ between the general ISM and the denser environments prevalent inside recent star-forming regions.
We examined our result $R_V\,=\,A_V/E_{B-V}\,=\,2.50 \pm 0.04$ with care, since previous works found it suspicious that this ratio for Wd~1 was much smaller than the usual $R_V$\,=\,3.1 \citep{Clark+2005,Lim+2013, Negueruela+2010}, although similar values have been reported by \citet{FM09} for a few stars, such as BD+45~973. Moreover, this value is in excellent agreement with \citet{Nataf13}, as deduced from $R_I/E_{V-I}$ based on OGLE~III fields in the Galactic Bulge, close to the position of Wd~1. However, even if there were minor systematic errors in the photometric calibration of the B--band, they would not be larger than a few tenths of a magnitude, which is dwarfed by the large extinctions: $A_B \approx 15$.
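The quoted $A_B \approx 15$ follows from the definitions alone, $R_V = A_V/E_{B-V}$ and $A_B = A_V + E_{B-V}$; a two-line consistency check using the paper's values:

```python
# Consistency check of A_B ~ 15 from the paper's A_V and R_V,
# using only the definitions R_V = A_V/E(B-V) and A_B = A_V + E(B-V).
A_V = 11.26
R_V = 2.50

E_BV = A_V / R_V   # colour excess E(B-V), ~4.5
A_B = A_V + E_BV   # ~15.8, i.e. A_B ~ 15 as stated in the text

if __name__ == "__main__":
    print(f"E(B-V) = {E_BV:.2f}, A_B = {A_B:.2f}")
```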
An interesting result is the lack of correlation between the reddening law in the optical and that in the NIR (see Fig.\,\ref{fig:RJKIJxRAIVI}). This is in agreement with the result found by \citet{Nataf16} for a much larger field in the inner Galaxy. Looking at the position of \citet{CCM89} in that figure, we confirm other diagnostics showing that dust grain properties in the inner Galaxy are different from those sampled by shallower imaging. Even in our small field (12\arcmin$\times$12\arcmin), the colour ratio diagram of Fig.\,\ref{fig:RJKIJxRAIVI} shows that the Wd~1 cluster members spread into a different zone of the diagram, as compared to RCs. This suggests that intra--cluster dust grains have properties different from those in the lower density ISM where the RCs are located. The large spread of indices among Wd~1 members indicates the existence of clumps of dust grains with a variety of sizes.
We derived the extinction law for the range 0.4-4.8\,$\mu$m, which is in very good agreement with the colour excess ratios obtained by \citet{Scha16} from large photometric and spectroscopic surveys in the inner GP. We propose the law presented in Eq.\,\eqref{eq19} to be representative of the average inner GP. A striking feature of this law is its close coincidence with a power law of exponent $\alpha = 2.17$ over the entire range 0.8-4\,$\mu$m. We call the reader's attention to the fact that this is an average law, useful for general purposes, since the reddening law varies from place to place inside the narrow zone of the GP.
\section*{Acknowledgements}
We thank an anonymous referee for the very productive questions/remarks, which have improved our manuscript. We also thank M. Gennaro and A. Bonanos for critical comments on earlier versions of this paper. AD thanks Funda\c{c}\~{a}o de Amparo {\`a} Pesquisa do Estado de S\~{a}o Paulo - FAPESP for support through proc. 2011/51680-6. LAA acknowledges support from FAPESP (2013/18245-0 and 2012/09716-6). FN acknowledges support from FAPESP (2013/11680-2). This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France.
\section{Introduction}
In this paper, we study the fully nonlinear, possibly
degenerate, elliptic partial differential equation (PDE) in a bounded domain
\begin{equation}\label{DP}\tag{DP$_\lambda$}
\begin{cases}
\gl u(x)+F(x,Du(x),D^2u(x))=0 \ \ \text{ in }\ \gO,&\\[3pt]
\BC \ \ \text{ on }\bry,
\end{cases}
\end{equation}
where $\gl$ is a given positive constant which is often called a discount factor,
and $F\mid \ol\gO\tim\R^n\tim\bS^n\to\R$
is a given continuous function.
Here $\gO$ is a bounded domain (that is, an open and connected set) in $\R^n$,
$\bS^n$
denotes the space of $n\tim n$ real symmetric matrices,
and $\BC$ represents a boundary condition (state constraint, Dirichlet, or Neumann boundary condition).
The unknown here is
a real-valued function $u$ on $\ol \gO$,
and $Du$ and $D^2u$ denote the gradient and Hessian of $u$, respectively.
We are always concerned with viscosity solutions of fully nonlinear, possibly degenerate,
elliptic PDE, and the adjective ``viscosity'' is omitted henceforth.
Associated with the vanishing discount problem for (DP$_\lambda$) is the following ergodic problem
\begin{equation}\label{E}\tag{E}
\begin{cases}
F(x,Du(x),D^2u(x))=c \ \ \text{ in }\ \gO,&\\[3pt]
\BC \ \ \text{ on }\bry.
\end{cases}
\end{equation}
We refer to the boundary-value problem (E), with a given constant $c$,
as (E$_c$), while the unknown for the ergodic problem (E)
is a pair of a function $u \in C(\ol \gO)$ and a constant $c \in \R$
such that $u$ is a solution of (E$_c$).
When $(u,c) \in C(\ol \gO) \times \R$
is
a solution of (E), we call $c$ a critical value (or an additive eigenvalue).
Our main goal is to study the vanishing discount problem for (DP$_\lambda$),
that is, for solutions $v^\lambda$ of (DP$_\lambda$) with $\lambda>0$,
we investigate the asymptotic behavior of $\{v^\lambda\}$ as $\lambda \to 0$.
The particular question we want to address here is whether the {\it whole family} $\{v^\gl\}_{\gl>0}$ (after normalization)
converges or not to a function in $C(\tt)$ as $\gl \to 0$.
As the limiting equation (\ref{E}$_c$) is not strictly monotone in $u$ and has many solutions in general,
proving or disproving such a convergence result is challenging.
The convergence results of the whole family $\{v^\gl\}_{\gl>0}$ were established for
convex Hamilton-Jacobi equations in \cite{DFIZ} (first-order case in a periodic setting),
\cite{AlAlIsYo} (first-order case with Neumann-type boundary conditions),
and \cite{MiTr} (degenerate viscous case in a periodic setting).
Recently, the authors \cite{IsMtTr1}
have developed
a new variational approach
for this
vanishing discount problem for fully nonlinear, degenerate elliptic PDEs,
and
proved the convergence of the whole family $\{v^\gl\}_{\gl>0}$ in the periodic setting.
We develop the variational method introduced in \cite{IsMtTr1} further here to handle boundary value problems.
Our goal is twofold.
Firstly, we establish new representation formulas for $v^\gl$ as well as the critical value $c$
in the settings of the state constraint, Dirichlet, and Neumann boundary conditions.
Secondly, we apply these representation formulas to show that $\{v^\gl\}_{\gl>0}$ (after normalization)
converges as $\gl \to 0$.
Let us make it clear that all of the results in this paper and the aforementioned ones
require
the
convexity of $F$ in the gradient and Hessian variables.
We refer to a forthcoming paper \cite{GMT} for convergence results of vanishing discount problems for
some nonconvex first-order Hamilton-Jacobi equations.
The main results, which, as mentioned above, consist of representation formulas and the convergence of $\{v^\gl\}_{\gl>0}$,
are stated in Sections \ref{sec-s},
\ref{sec-d}, and \ref{sec-n}
for the state constraint, Dirichlet, and Neumann
problems, respectively.
\subsection{Setting and Assumptions}
We describe the setting and state the main assumptions here.
Let $\cA$ be a non-empty, $\gs$-compact and locally compact metric space and
$F\mid \ol\gO\tim\R^n\tim\bS^n\to\R$ be given by
\begin{equation}\tag{F1}\label{F1}
F(x,p,X)=\sup_{\ga\in\cA}(-\tr a(x,\ga)X-b(x,\ga)\cdot p-L(x,\ga)),
\end{equation}
where
$a\mid \ol\gO\tim\cA \to\bS_+^n$, $b\mid \ol\gO\tim\cA\to\R^n$ and $L\mid \ol\gO\tim\cA\to\R$ are continuous.
Here, $\bS_+^n$ denotes the set of all non-negative definite matrices
$A\in\bS^n$, while $\tr A$ and $p\cdot q$ designate the trace of the $n\tim n$ matrix $A$ and
the Euclidean inner product of $p,q\in\R^n$, respectively.
Assume further that
\begin{equation}\tag{F2}\label{F2}
F\in C(\ol\gO\tim\R^n\tim\bS^n).
\end{equation}
It is clear under \eqref{F1} and \eqref{F2}
that $F$ is degenerate elliptic in the sense
that for all $(x,p,X)\in \ \ol \gO \tim\R^n\tim\bS^n$, if $Y\in\bS_+^n$,
then $F(x,p,X+Y)\leq F(x,p,X)$ and that, for each $x\in\ol\gO$,
the function $(p,X)\mapsto F(x,p,X)$ is convex on $\R^n\tim\bS^n$.
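For the record, the verification behind this claim takes two lines: $\tr AB\ge 0$ whenever $A,B\in\bS_+^n$, and a pointwise supremum of affine functions is convex (a standard argument, spelled out here for completeness):

```latex
\[
F(x,p,X+Y)=\sup_{\ga\in\cA}\bigl(-\tr a(x,\ga)X-\tr a(x,\ga)Y
-b(x,\ga)\cdot p-L(x,\ga)\bigr)\leq F(x,p,X),
\]
% since \tr a(x,\ga)Y \ge 0 for a(x,\ga), Y \in \bS_+^n.  Convexity of
% (p,X) \mapsto F(x,p,X) follows because each map
% (p,X) \mapsto -\tr a(x,\ga)X - b(x,\ga)\cdot p - L(x,\ga) is affine,
% and a pointwise supremum of affine functions is convex.
```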
The equations \eqref{DP}, (\ref{E}$_c$), with $F$ of the form \eqref{F1},
are called Bellman equations, or Hamilton-Jacobi-Bellman equations
in connection with the theory of stochastic optimal control.
In this viewpoint, the set $\cA$ is often called a control set or region.
With the function $L$ in the definition of $F$, we define
\[
\Phi^+:=\left\{\phi\in C(\tt\tim\cA)\mid
\phi(x,\ga)=tL(x,\ga)+\chi(x) \ \text{for some} \ t>0, \, \chi\in C(\tt)\right\}.
\]
It is clear that $\Phi^+$ is a convex cone in $C(\tt \tim \cA)$.
For $\phi= tL+\chi \in \Phi^+$, we define
\[
F_\phi(x,p,X) = \sup_{\ga \in \cA} \left(- \tr a(x,\ga)X - b(x,\ga)\cdot p - \phi(x,\ga) \right).
\]
The form of $\phi$ allows us to compute that
\begin{align*}
F_\phi(x,p,X) &= \sup_{\ga \in \cA} \left(- \tr a(x,\ga)X - b(x,\ga)\cdot p - \phi(x,\ga) \right)\\
&=t \sup_{\ga \in \cA} \left(- \tr a(x,\ga) t^{-1}X - b(x,\ga)\cdot t^{-1}p - L(x,\ga) \right) - \chi(x)\\
&=tF(x,t^{-1}p,t^{-1}X) - \chi(x),
\end{align*}
which yields that $F_\phi \in C(\tt \times \R^n \times \bS^n)$ if we assume (F2).
We note here that,
except when $L(x,\ga)$ is independent of $\ga$,
$\phi\in\Phi^+$ is represented
uniquely as $\phi=tL+\chi$ for some $t>0$ and $\chi\in C(\tt)$.
We often write $F[u]$ and $F_\phi[u]$ to denote the functions
$x\mapsto F(x,Du(x),D^2u(x))$ and
$x\mapsto F_\phi(x,Du(x),D^2u(x))$, respectively.
The following are some further assumptions that we need in the paper, labelled for later convenience.
\begin{equation}\tag{CP$_{\rm loc}$}\label{CP}
\left\{\text{
\begin{minipage}{0.83\textwidth}
For any $\gl>0$, $\phi \in \Phi^+$, and open subset $U$ of $\gO$,
if $v,\,w\in C(U)$ are a subsolution and a supersolution of
$\gl u+F_\phi[u]=0$ in $U$, respectively, and $v \leq w$ on $\partial U$,
then $v\leq w$ in $U$.
\end{minipage}
}\right.\end{equation}
\begin{equation} \tag{EC}\label{EC}
\left\{\text{
\begin{minipage}{0.85\textwidth}
For $\gl>0$, let $v^\gl \in C(\tt)$ be a solution of \eqref{DP}.
The family $\{v^\gl\}_{\gl>0}$ is equi-continuous on $\tt$.
\end{minipage}
}\right.
\end{equation}
\begin{equation}\tag{L}\label{L}\left\{
\text{
\begin{minipage}{0.85\textwidth}
$L=+\infty\,$ at infinity,\
that is, for any $M\in\R$, there exists a compact subset $K$ of $\cA$
such that $L\geq M$ in $\tt\tim(\cA\setminus K)$.
\end{minipage}}
\right.
\end{equation}
We say that $L$ is coercive if condition \eqref{L} is satisfied,
and, when $\cA$ is compact, condition \eqref{L} always holds.
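As a concrete illustration of \eqref{L} with a non-compact control set (our own example, not taken from the text), take a quadratically growing running cost:

```latex
% A coercive running cost on a non-compact control set (illustrative only):
\[
\cA=\R^m,\qquad L(x,\ga)=|\ga|^2+\ell(x),\qquad \ell\in C(\tt).
\]
% Given M \in \R, the compact set
% K := \{\ga \in \R^m \mid |\ga|^2 \le M - \min_{\tt}\ell\}
% satisfies L \ge M on \tt \tim (\cA \setminus K), so \eqref{L} holds.
```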
Some more assumptions are needed
and stated in the upcoming sections according to the type of boundary conditions.
\subsection*{Outline of the paper}
In Section \ref{sec-pre}, we introduce some notation and preliminaries.
We first consider the state constraint problem in Section \ref{sec-s} as it is the simplest one.
The Dirichlet problem and the Neumann problem are studied in Section \ref{sec-d} and Section \ref{sec-n}, respectively.
Finally, some examples are given in Section \ref{sec-ex}.
\section{Notation and preliminaries} \label{sec-pre}
First we present our notation.
Given a metric space $E$, $\Lip(E)$ denotes the space of Lipschitz continuous functions on $E$. Also,
let $C_{\rm c}(E)$ denote the space of continuous
functions on $E$ with compact support.
Let $\cR_E$, $\cR_E^+$ and $\cP_E$ denote the spaces
of all Radon measures, all nonnegative Radon measures and Radon
probability measures on $E$, respectively.
For any function $\phi$ on $E$ integrable with respect to
$\mu\in\cR_E$, we write
\[
\lan\mu,\phi\ran=\int_E \phi(x)\mu(d x).
\]
With the function $L$ from the definition of $F$, we set
\[\begin{aligned}
\Phi^+&\,:=\left\{tL+\chi\mid t>0, \, \chi\in C(\tt)\right\},\qquad
\Psi^+:=\Phi^+\tim C(\bry),\\
\Psi^+(M)&\,:= \{(tL+\chi,\psi)\in\Psi^+\mid
\|\chi\|_{C(\tt)}< tM, \|\psi\|_{C(\bry)}< tM\} \ \ \text{ for }M>0,
\end{aligned}\]
where $(t,\chi)$ in the braces above
ranges over $(0,\,\infty)\tim C(\tt)$,
and
\[
\cR_L:=\{\mu\in\cR_{\bb}\mid L \text{ is integrable with respect to }\mu\}.
\]
Next, we give three basic lemmas
related to the weak convergence of measures.
Let $\cX$ be a $\gs$-compact, locally compact metric space
and $f\mid \cX\to\R$ be a continuous function.
Henceforth, when $f$ is bounded from below on $\cX$
and $\mu$ is a nonnegative Radon measure on $\cX$,
we write
\[
\int_\cX f(x)\mu(dx)=+\infty,
\]
to indicate that $f$ is not integrable with respect to $\mu$.
We denote by $\lan\mu,f\ran$, for simplicity, the integral
\[
\int_\cX f(x)\mu(dx).
\]
Assume that
$f$ is coercive in the sense that $f=+\infty$ at infinity.
In particular, $f$ is bounded below on $\cX$.
\begin{lem}\label{basic-lsc} The functional
$\mu\mapsto \lan \mu, f\ran$
is lower semicontinuous on $\cR_\cX^+$
in the topology of weak convergence of measures.
\end{lem}
\begin{proof} Let $\{\mu_j\}_{j\in\N}\subset\cR_{\cX}^+$ be a sequence
converging to $\mu\in\cR^+_{\cX}$ weakly in the sense of measures.
Select a compact set $K\subset\cX$ so that $f(x)>0$
for all $x\in\cX\setminus K$, and then a sequence
$\{\chi_k\}_{k\in\N}\subset C_{\rm c}(\cX)$ so that
$0\leq \chi_k\leq \chi_{k+1}\leq 1$ on $\cX$,
$\chi_k=1$ on $K$ for all
$k\in\N$,
and $\chi_k \to \mathbf{1}_{\cX}$ pointwise as $k \to \infty$.
Since $\chi_k f\in C_{\rm c}(\cX)$ and $\chi_k f\leq f$ on $\cX$, we have
\[
\lan\mu,\chi_k f\ran
=\lim_{j\to\infty}\lan\mu_j,\chi_kf\ran
\leq\liminf_{j\to\infty}\lan\mu_j,f\ran \ \ \ \text{ for all }\ k\in\N.
\]
Hence, if
\[
\liminf_{j\to\infty}\lan\mu_j,f\ran<+\infty,
\]
the monotone convergence theorem allows us to conclude that
\[
\lan\mu,f\ran
\leq\liminf_{j\to\infty}\lan\mu_j,f\ran. \qedhere
\]
\end{proof}
\begin{lem} \label{basic-cpt}
Let $r\in\R$ and $s>0$, and define
\[
\cR^+_{f,r,s}=\{\mu\in\cR^+_\cX\mid \lan \mu, f\ran\leq r,\
\mu(\cX)\leq s\}.
\]
Then $\cR^+_{f,r,s}$ is compact with the topology of weak convergence of measures.
\end{lem}
\begin{proof} Let $\{\mu_j\}_{j\in\N}$ be a sequence of
measures in $\cR^+_{f,r,s}$.
We need to show that there is a subsequence $\{\mu_{j_k}\}_{k\in\N}$ of $\{\mu_j\}$ that converges to some $\mu\in\cR^+_{f,r,s}$
weakly in the sense of measures.
Since $f$ is bounded from below, the sequence
$\{\lan\mu_j,f\ran\}_{j\in\N}$ is bounded from below,
and, hence, it is bounded.
We may thus assume, by passing to a subsequence if needed, that
the sequence $\{\lan\mu_j,f\ran\}_{j\in\N}$ is convergent.
Since $f=+\infty$ at infinity, the boundedness of the sequence
$\{\lan\mu_j,f\ran\}_{j\in\N}$ and Chebyshev's inequality
imply that the family
$\{\mu_j\}_{j\in\N}$ is tight. Prokhorov's theorem
guarantees that there is a subsequence $\{\mu_{j_k}\}_{k\in\N}$
of $\{\mu_j\}$ that converges to some $\mu\in\cR^+_\cX$
weakly in the sense of measures.
The weak convergence of $\{\mu_{j_k}\}_{k\in\N}$
readily yields $\,\mu(\cX)\leq s$.
Lemma \ref{basic-lsc}
ensures that
\[\lan\mu,f\ran\leq \lim_{k\to\infty}\lan\mu_{j_k},f\ran\leq r,
\]
which then shows that $\mu\in\cR^+_{f,r,s}$. The proof is complete.
\end{proof}
In the next lemma, we consider the case where $\cX=\bb$.
\begin{lem}\label{mod} Let $\mu\in\cR^+_{\bb}$, $m\in\R$
and $L\in C(\bb)$. Assume that $\cA$ is not compact, $\mu(\bb)>0$, $m>\lan \mu,L\ran$, and
that $L$ is coercive.
Then, there exists $\tilde\mu\in\cR^+_{\bb}$ such that
\begin{align}
&\tilde\mu(\bb)=\mu(\bb),\tag{i}\\
&\lan\tilde\mu,L\ran=m,\tag{ii}
\\
&\lan\tilde\mu,\psi\ran=\lan\mu,\psi\ran \ \ \ \text{ for all }\
\psi\in C(\tt).\tag{iii}
\end{align}
\end{lem}
\begin{proof}
Put
$\,
m_0:=\lan\mu,L\ran$,
and pick $\ga_1\in\cA$ so that
\[
\min_{x\in\tt}L(x,\ga_1)\tim \mu(\bb)>m.
\]
Define the Radon measure $\nu$ on $\bb$, through the Riesz representation theorem, by requiring
\[
\lan\nu,\phi\ran=\int_{\bb}\phi(x,\ga_1)\mu(dxd\ga)
\ \ \ \text{ for }\ \phi\in C_{\rm c}(\bb),
\]
and note that
\[
\nu(\bb)=\int_{\bb}\mu(dx d\ga)=\mu(\bb).
\]
We set
\[
m_1:=\lan\nu,L\ran,
\]
observe that
\[
m_1\geq \int_{\bb}\min_{\tt}L(\cdot,\ga_1)\mu(dxd\ga)
=\min_{\tt}L(\cdot,\ga_1)\mu(\bb)>m>m_0,
\]
and put
\[
t=\fr{m-m_0}{m_1-m_0}.
\]
It is clear that $t\in (0,\,1)$ and, if we set
\[
\tilde\mu=(1-t)\mu+t\nu,
\]
then $\tilde\mu$ is a nonnegative Radon measure on $\bb$.
We observe
\[
\tilde\mu(\bb)=(1-t)\mu(\bb)+t\nu(\bb)=\mu(\bb),
\]
that
\[
\lan\tilde\mu,L\ran
=(1-t)\lan\mu,L\ran+t\lan\nu,L\ran
=(1-t)m_0+tm_1=m,
\]
and that for any $\psi\in C(\tt)$,
\[\begin{aligned}
\lan\tilde\mu,\psi\ran
&\,=(1-t)\lan\mu,\psi\ran
+t\int_{\bb}\psi(x)\nu(dxd\ga)
\\&\,=(1-t)\lan\mu,\psi\ran
+t\int_{\bb}\psi(x)\mu(dxd\ga)
=\lan\mu,\psi\ran.\end{aligned}
\]
The proof is complete.
\end{proof}
\section{State constraint problem} \label{sec-s}
We consider the state constraint boundary problem.
To avoid confusion, in this section we rename the discounted problem (DP$_\lambda$) as (S$_\gl$)
and the ergodic problem (E) as (ES).
Here the letter S
represents ``state constraint".
The two problems of interest are
\[\tag{S$_\gl$}\label{S}
\begin{cases}
\gl u+F[u]\leq 0 \ \ \text{ in }\ \gO, &\\[3pt]
\gl u+F[u]\geq 0 \ \ \text{ on }\ \lbar\gO,
\end{cases}
\]
for $\gl>0$, and
\[\tag{ES}\label{ES}
\begin{cases}
F[u]\leq c \ \ \text{ in }\ \gO, &\\[3pt]
F[u]\geq c \ \ \text{ on }\ \lbar\gO.
\end{cases}
\]
Given a constant $c$, we refer to the state constraint problem (ES) as (ES$_c$).
We additionally make the following assumptions, which concern
the comparison principle and solvability for \eqref{S}.
\[\tag{CPS}\label{CPS}
\left\{\text{
\begin{minipage}{0.85\textwidth}
The comparison principle holds for \eqref{S} for every $\gl>0$.
More precisely, if $v\in C(\ol \gO)$ is a subsolution of $\gl u + F[u] = 0$ in $\gO$,
and $w\in C(\ol \gO)$ is a supersolution of $\gl u+F[u] = 0$ on $\tt$,
then $v\leq w$ on $\tt$.
\end{minipage}
}\right.
\]
\[\tag{SLS}\label{SLS}
\text{
\begin{minipage}{0.85\textwidth}
For any $\gl>0$, \eqref{S} admits a solution $v^\gl \in C(\tt)$.
\end{minipage}
}
\]
\begin{prop}
Assume \eqref{F1}, \eqref{F2}, \eqref{CPS}, \eqref{SLS} and \eqref{EC}.
Then there exists a solution $(u,c)\in C(\tt)\times\R$ of \eqref{ES}.
Moreover, the constant $c$ is uniquely determined by
\begin{equation}\label{critical-value-s}
c=\inf\left\{d\in\R\mid \text{there exists} \ v\in C(\tt) \ \text{such that} \
F[v]\le d \ \text{in} \ \gO\right\}.
\end{equation}
\end{prop}
We denote the constant given by \eqref{critical-value-s} by $c_{\rS}$
and call it the critical value of \eqref{ES}.
The proof of the proposition above is rather standard,
and we give only its outline.
\begin{proof} For each $\gl>0$ let $v^\gl\in C(\tt)$ be a (unique) solution of \eqref{S} and set $u^\gl:=v^\gl-m_\gl$,
where $m_\gl:=\min_{\tt}v^\gl$. By \eqref{EC}, the family
$\{u^\gl\}_{\gl>0}$ is relatively compact in $C(\tt)$.
Set $M=\|v^1\|_{C(\tt)}$,
and observe that for any $\gl>0$,
the functions $v^1+(1+\gl^{-1})M$ and $v^{1}-(1+\gl^{-1})M$ are
a supersolution and a subsolution of \eqref{S}, respectively. By the comparison principle \eqref{CPS}, we find that for any $\gl>0$,
\[
v^{1}-(1+\gl^{-1})M\leq v^\gl\leq v^1+(1+\gl^{-1})M \ \ \text{ on }\tt,
\]
which implies that the collection
$\{\gl m_\gl\}_{\gl>0}\subset\R$
is bounded.
We may now choose a sequence $\{\gl_j\}_{j\in\N}$ converging to
zero such that $\{u^{\gl_j}\}_{j\in\N}$ converges to
a function $u\in C(\tt)$ and $\{\gl_j m_{\gl_j}\}_{j\in\N}$
converges to a number $-c^*$. Since $u^{\gl_j}$ is a solution
of $\gl_j(u^{\gl_j}+m_{\gl_j})+F[u^{\gl_j}]\geq 0$ on $\tt$
and $\gl_j(u^{\gl_j}+m_{\gl_j})+F[u^{\gl_j}]\leq 0$ in $\gO$,
we conclude in the limit $j\to\infty$
that $(u,c^*)$ is a solution of \eqref{ES}.
Next we prove formula \eqref{critical-value-s}. We write $d^*$
for the right-hand
side of \eqref{critical-value-s}. Since $u$ is a solution of
(\ref{ES}$_{c^*}$), we get $\,d^*\leq c^*$.
To show that $d^*\geq c^*$, we suppose for contradiction that $d^*<c^*$, and select
$(v,d)\in C(\tt)\tim\R$ with $d<c^*$ so that $v$ is a subsolution of
$F[v]=d$ in $\gO$. By adding a constant to $v$ if necessary, we may assume that $v>u$ on $\tt$. We observe that if $\ep>0$ is sufficiently small, then $u$ and $v$ are a supersolution
and a subsolution of \eqref{S}, with $\gl$ replaced by $\ep$
and the $0$ on its right-hand side replaced by $(c^*+d)/2$, which means
that
the functions $u-(c^*+d)/(2\ep)$ and $v-(c^*+d)/(2\ep)$ are
a supersolution
and a subsolution of \eqref{S}, with $\gl$ replaced by $\ep$.
By the comparison principle, we get
\[
v-\fr{c^*+d}{2\ep}\leq u-\fr{c^*+d}{2\ep} \ \ \ \text{ on }\ \tt,
\]
but this is a contradiction, which shows that $d^*\geq c^*$.
\end{proof}
By normalization (replacing $F$ and $L$ by $F-c_{\rS}$
and $L+c_{\rS}$, respectively), we often assume
\begin{equation}\tag{Z}\label{Z}
\text{
\begin{minipage}{0.85\textwidth}
the
critical value
$c_{\rS}$
of \eqref{ES}
is zero.
\end{minipage}
}\end{equation}
\subsection{Representation formulas}
We assume \eqref{Z} in this subsection.
For $z\in \ol \gO$ and $\gl \geq 0$,
we define the sets
$\cF^{\rS}(\gl)\subset C(\tt\tim\cA)\times C(\tt)$
and
$\cG^{\rS}(z,\gl)\subset C(\tt)$, respectively, as
\begin{align*}
&\cF^{\rS}(\gl):=\left\{(\phi,u)\in \Phi^+\tim C(\tt)\mid u \ \text{ is a subsolution of } \ \gl u + F_\phi[u] =0 \ \text{ in } \ \gO \right\}, \\
&\cG^{\rS}(z,\gl):=\left\{\phi-\gl u(z)\mid (\phi,u)\in
\cF^{\rS}(\gl) \right\}.
\end{align*}
As $\cG^{\rS}(z,0)$ is independent of $z$, we also write
$\cG^{\rS}(0)$ for $\cG^{\rS}(z,0)$ for simplicity.
\begin{lem}\label{thm3-sec3}Assume \eqref{F1}, \eqref{F2}
and \eqref{CP}.
For any $(z,\gl)\in\tt\tim[0,\infty)$, the set $\cG^{\rS}(z,\gl)$ is a convex cone with vertex at the origin.
\end{lem}
The proof of this lemma parallels
that of \cite[Lemma 2.8]{IsMtTr1} and hence is skipped.
For $(z,\gl)\in \tt \tim[0,\,\infty)$, we define the set
$\cG^{\rS}(z,\gl)'\subset\cR_{L}$ (denoted also by $\cG^{\rS}(0)'$ if $\gl=0$) by
\[
\cG^{\rS}(z,\gl)':=\{\mu\in \cR_{L}\mid \lan\mu,f\ran\geq 0
\ \ \text{ for all }f\in\cG^{\rS}(z,\gl)\}.
\]
The set $\cG^{\rS}(z,\gl)'$ is indeed the dual cone of
$\cG^{\rS}(z,\gl)$ in $\cR_L$.
Set $\cP^\rS:=\cP_{\bb}$,
and, for any compact subset $K$ of $\cA$, let $\cP_K^\rS$ denote
the subset of all
probability measures $\mu\in\cP^\rS$ that have support
in $\tt\tim K$, that is,
\[
\cP^\rS_K:=\{\mu\in\cP^\rS\mid \mu(\tt\tim K)=1\}.
\]
\begin{thm} \label{thm1-sc} Assume \eqref{F1}, \eqref{F2},
\eqref{CP}, \eqref{L}, \eqref{CPS}, \eqref{SLS}, \eqref{EC},
and, if $\gl = 0$, \eqref{Z} as well.
Let $(z,\gl)\in\tt\tim[0,\,\infty)$ and let $v^\gl\in C(\tt)$ be a
solution of \eqref{S}. Then
\begin{equation} \label{sc-min}
\gl v^\gl(z)=\inf_{\mu\in \cP^\rS\cap \cG^{\rS}(z,\gl)'}\lan \mu,L\ran.
\end{equation}
\end{thm}
The proof of this theorem is similar to that of \cite[Theorem 3.3]{IsMtTr1}. However, we present it here,
because it is a key component of the main result.
\begin{proof} We note that, thanks to \eqref{L}, the term
\[
\max_{x\in\tt}\min_{\ga\in\cA} L(x,\ga)
\]
defines a real number. We choose first a constant
$L_\gl\in\R$ so that
\begin{equation}\label{thm1-sc-m2}
L_\gl\geq \max_{x\in\tt}\{\gl v^\gl(x),\min_{\ga\in\cA}L(x,\ga)\},
\end{equation}
and then, in view of \eqref{L}, a compact subset $K_\gl$ of $\cA$ so that
\begin{equation}\label{thm1-sc-m1}
L(x,\ga)\geq L_\gl \ \ \ \text{ for all }\ (x,\ga)\in\tt
\tim(\cA\setminus K_\gl).
\end{equation}
We fix a compact set $K\subset\cA$
so that $K=\cA$ if $\cA$ is compact,
or, otherwise, $K\supsetneq K_\gl$.
According to the definition of $\cG^{\rS}(z,\gl)'$, since $(L,v^\gl)\in\cF^{\rS}(\gl)$, we have
\[
0\leq \lan \mu,L-\gl v^\gl(z)\ran=
\lan \mu,L\ran-\gl v^\gl(z)\ \ \text{ for all }\mu\in\cP^\rS\cap \cG^\rS(z,\gl)',
\]
and, therefore,
\begin{equation}\label{thm1-sc-1}
\gl v^\gl(z)\leq \inf_{\mu\in\cP^\rS\cap \cG^\rS(z,\gl)'}\lan\mu,\,L\ran.
\end{equation}
Next, we show
\begin{equation}\label{thm1-sc-1+}
\gl v^\gl(z)\geq \inf_{\mu\in\cP_K\cap \cG^{\rS}(z,\gl)'}\lan\mu,\,L\ran,
\end{equation}
which is enough to prove \eqref{sc-min}.
We put $K_1:=\cA$ if $K_\gl=\cA$,
or else, pick $\ga_1 \in K \setminus K_\gl$ and put $K_1 := K_\gl \cup \{\ga_1\}$.
Clearly, $K_1$ is compact and $K_1 \subset K$.
To prove \eqref{thm1-sc-1+}, we need only to show
\begin{equation}\label{thm1-sc-1++}
\gl v^\gl(z)\geq \inf_{\mu\in\cP_{K_1}\cap \cG^{\rS}(z,\gl)'}\lan\mu,\,L\ran.
\end{equation}
Suppose by contradiction that \eqref{thm1-sc-1++}
is false, which means that
\[
\gl v^\gl(z)<\inf_{\mu\in\cP_{K_1}\cap \cG^{\rS}(z,\gl)'}\lan \mu, L\ran.
\]
Pick $\ep>0$ sufficiently small such that
\begin{equation} \label{thm1-sc-2}
\gl v^\gl(z) + \ep<\inf_{\mu\in\cP_{K_1}\cap
\cG^{\rS}(z,\gl)'}\lan \mu, L\ran.
\end{equation}
Since $\cG^{\rS}(z,\gl)$ is a convex cone with vertex at the origin,
we infer that
\[
\inf_{f\in\cG^{\rS}(z,\gl)}
\lan \mu,f\ran=
\begin{cases} 0 \ \ &\text{ if }\ \mu\in\cP_{K_1} \cap\cG^{\rS}(z,\gl)', \\
-\infty &\text{ if }\ \mu\in \cP_{K_1} \setminus
\cG^{\rS}(z,\gl)'
\end{cases}
\]
and, hence, the right-hand side of \eqref{thm1-sc-2} can be rewritten as
\begin{equation}\begin{aligned}
\label{thm1-sc-2+}
\inf_{\mu\in\cP_{K_1} \cap \cG^{\rS}(z,\gl)'}\lan \mu, L\ran
&\,=
\inf_{\mu\in\cP_{K_1}}\
\Big(\lan \mu, L\ran
- \inf_{f\in\cG^{\rS}(z,\gl)}\lan\mu,f\ran
\Big)
=\inf_{\mu\in\cP_{K_1}}\
\sup_{f\in\cG^{\rS}(z,\gl)}\,\lan \mu, L-f\ran.
\end{aligned}
\end{equation}
Since $\cP_{K_1}$ is a convex and compact space in the topology of weak convergence of
measures,
we apply Sion's minimax theorem to
get
\[
\min_{\mu\in\cP_{K_1}}
\ \sup_{f\in\cG^{\rS}(z,\gl)}\,\lan \mu, L-f\ran
= \sup_{f\in\cG^{\rS}(z,\gl)}\ \min_{\mu\in\cP_{K_1}}
\,\lan \mu, L-f\ran.
\]
In view of this and \eqref{thm1-sc-2}, we can pick $(\phi,u) \in \cF^{\rS}(\gl)$ such that
\begin{equation}\label{thm1-sc-3}
\gl v^\gl(z)+\ep < \lan \mu, L-\phi + \gl u(z) \ran \ \ \text{ for all } \mu \in \cP_{K_1},
\end{equation}
and $\phi=tL+\chi$ for some $t>0$ and $\chi \in C(\ol \gO)$.
We now prove that there exists $\gth>0$ such that $w:=\gth u$ is a
subsolution of
\begin{equation}\label{thm1-sc-4}
\gl w+F[w]=-\gl (v^\gl-w)(z)-\gth\ep\ \ \text{ in }\gO.
\end{equation}
Once this is done,
we immediately arrive at a contradiction.
Indeed, if $\gl>0$, then
$\gz:=w+{(v^\gl-w)(z)}+\gl^{-1}\gth\ep$ is a subsolution of $\gl \gz+F[\gz]=0$ in $\gO$,
and comparison principle \eqref{CPS}
yields $\gz\leq v^\gl$, which, after evaluation at $z$, gives
$\,\gl^{-1}\gth\ep\leq 0$, a contradiction. On the other hand, if $\gl=0$, then we
set $\gz:=w+C$, with a constant $C>\min_{\tt}(v^\gl-w)$,
choose a constant $\gd>0$
sufficiently small so that
\[
\gd\min_{\tt}v^\gl\geq -\gth\ep/2 \ \ \text{ and } \ \ \gd\max_{\tt} \gz\leq \gth\ep/2,
\]
observe that the functions
$\gz$
and $v^\gl$ are
a subsolution and a supersolution of $\gd u+F[u]=-\gth\ep/2$, respectively, in $\gO$
and on $\tt$, and, by \eqref{CPS}, get $\gz\le v^\gl$ on $\tt$,
which is a contradiction.
To show \eqref{thm1-sc-4},
we consider the two cases separately.
The first case is when $K_1=\cA$, and then,
the Dirac measure $\gd_{(x,\ga)}$ belongs to $\cP_{K_1}$
for any $(x,\ga) \in \ol \gO \times \cA$,
which together with \eqref{thm1-sc-3}
yields
\[
\gl v^\gl(z) + \ep < L - \phi + \gl u(z) \quad \text{on } \ol \gO \times \cA.
\]
Now, we have
\[
F(x,p,X) \leq F_\phi(x,p,X) + \gl(u-v^\gl)(z) - \ep \quad \text{for} \ (x,p,X) \in \tt \times \R^n \times \bS^n.
\]
We choose $\gth=1$ and observe that $w=u$ is a
subsolution of \eqref{thm1-sc-4}.
The other case is when $K_1=K_\gl \cup\{\ga_1\}$, with
$\ga_1 \in K \setminus K_\gl$.
As $\gd_{(x,\ga)}\in\cP_{K_1}$ for all $(x,\ga)\in\tt\tim K_1$, we observe, in light of \eqref{thm1-sc-3}, that
\begin{equation}\label{thm1-sc-6}
\gl v^\gl(z)+\ep<(1-t)L(x,\ga)-\chi(x)+\gl u(z) \ \ \text{ for all }(x,\ga)\in\tt\tim K_1.
\end{equation}
We subdivide the argument into two cases.
Consider first the case when $t \leq 1$.
Minimize both sides of \eqref{thm1-sc-6} in $\ga \in K_\gl$
and note by \eqref{thm1-sc-m2} and \eqref{thm1-sc-m1} that
$\min_{\ga\in K_\gl}L(x,\ga)\leq L_\gl$, to get
\[\begin{aligned}
\gl v^\gl(z)+\ep&\,<(1-t)\min_{\ga \in K_\gl} L(x,\ga)-\chi(x)+\gl
u(z)
\\&\,\leq (1-t)L_\gl-\chi(x)+\gl
u(z) \ \ \text{ for all } x \in\tt.
\end{aligned}\]
We use this, \eqref{thm1-sc-m1}, and \eqref{thm1-sc-6}, to deduce that
\[
\gl v^\gl(z)+\ep<(1-t)L(x,\ga)-\chi(x)+\gl
u(z) \ \ \text{ for all }(x,\ga)\in\tt\tim \cA.
\]
From this, we observe
\[
\phi =L+(t-1)L+\chi<L-\gl (v^\gl-u)(z)-\ep \ \text{ on }\bb,
\]
and, hence, that $u$ is a subsolution of \eqref{thm1-sc-4}, with $\gth=1$.
Next, we consider the case when $t \geq 1$.
By \eqref{thm1-sc-m1} and \eqref{thm1-sc-6}, we get
\[L(x,\ga_1) \geq L_\gl \ \ \text{ and } \ \
(t-1)L(x,\ga_1)+\chi(x)<-\gl (v^\gl-u)(z)-\ep \ \ \text{ for all }x\in\tt,
\]
which together yield
\[
(t-1) L_\gl + \chi(x) \leq -\gl(v^\gl-u)(z) - \ep \ \ \text{ for all } x \in \tt.
\]
We take advantage of \eqref{thm1-sc-m2} to get further that
\[
(t-1) \gl v^\gl(z) + \chi(x) \leq -\gl(v^\gl-u)(z) - \ep \ \ \text{ for all } x \in\tt.
\]
Therefore,
\[
\chi < - t \gl v^\gl(z) + \gl u(z) - \ep,
\]
and
\[
\phi=tL+\chi < tL + t\gl (u/t-v^\gl)(z) - \ep \ \ \text{ in } \tt \times \cA.
\]
From this we deduce that $w:=u/t$ is a subsolution of
\[
\gl w +F[w]=-\gl(v^\gl-w)(z)-\ep/t \ \ \ \text{ in }\gO.
\]
This completes the proof.
\end{proof}
We remark that, in the proof above, we have in fact proved the identity
\[
\gl v^\gl(z)=\inf_{\mu\in\cP_K\cap\cG^\rS(z,\gl)'}\lan\mu,L\ran
\]
for a compact set $K\subset\cA$,
which is a stronger claim than \eqref{sc-min}.
The minimization problem \eqref{sc-min} in Theorem \ref{thm1-sc}
admits a minimizer, as stated in Corollary \ref{cor1-sc} below.
\begin{lem} \label{sc-cpt}
Assume \eqref{F1}, \eqref{F2} and \eqref{L}.
Fix a point $z\in\tt$ and a sequence $\{\gl_j\}_{j\in\N}\subset[0,\,\infty)$. Let
$\{\mu_j\}_{j\in\N}$
be a sequence of measures such that $\mu_j\in\cP^\rS\cap
\cG^\rS(z,\gl_j)'$ for all $j\in\N$. Assume that for some $\rho\in\R$ and
$\gl\in[0,\,\infty)$,
\[
\lim_{j\to\infty}\lan\mu_j,L\ran=\rho \ \ \text{ and } \ \
\lim_{j\to\infty}\gl_j=\gl.
\]
Then there exist a measure $\nu\in\cP^\rS\cap
\cG^\rS(z,\gl)'$
and a subsequence $\{\mu_{j_k}\}_{k\in\N}$ of
$\{\mu_j\}$ such that $\,\lan\nu,L\ran=\rho\,$ and
\[
\lim_{k\to\infty}\lan\mu_{j_k},\psi\ran=\lan\nu,\psi\ran \ \ \text{ for all }\ \psi\in C(\tt).
\]
\end{lem}
\begin{proof} Lemma \ref{basic-cpt} implies that
$\{\mu_j\}_{j\in\N}$ has a
subsequence $\{\mu_{j_k}\}_{k\in\N}$ convergent in the topology
of weak convergence of measures. Let $\mu_0\in\cR^+_{\bb}$ denote
the limit of $\{\mu_{j_k}\}_{k\in\N}$. It follows
from the weak convergence of $\{\mu_{j_k}\}$ that $\mu_0$
is a probability measure on $\bb$. Hence, $\mu_0\in\cP^\rS$.
By the lower semicontinuity of
the functional $\mu\mapsto\lan\mu,L\ran$ on $\cR^+_{\bb}$,
as claimed by Lemma \ref{basic-lsc}, we find that $\, \rho\geq
\lan\mu_0,L\ran$.
If $\rho>\lan\mu_0,L\ran$, then $\cA$ is not compact and
Lemma \ref{mod} ensures that there is $\tilde\mu_0\in\cP^\rS$
such that $\lan\tilde\mu_0,L\ran=\rho$ and
$\lan\tilde\mu_0,\psi\ran=\lan\mu_0,\psi\ran$ for all
$\psi\in C(\tt)$. We define $\nu\in\cP^\rS$ by setting
$\nu=\tilde\mu_0$ if $\rho>\lan\mu_0,L\ran$ and
$\nu=\mu_0$ otherwise, that is, if $\rho=\lan\mu_0,L\ran$.
The measure $\nu\in\cP^\rS$ verifies that
$\lan\nu,L\ran=\rho$ and
$\lan\nu,\psi\ran=\lan\mu_0,\psi\ran$ for all
$\psi\in C(\tt)$. It follows from the last identity
that
\[
\lim_{k\to\infty}\lan\mu_{j_k},\psi\ran=\lan\nu,\psi\ran
\ \ \ \text{ for all }\ \psi\in C(\tt).
\]
It remains to check that $\nu\in\cG^\rS(z,\gl)'$.
Let $f\in\cG^\rS(z,\gl)$, and select
$(\phi,u)\in\cF^\rS(\gl)$, $t>0$ and $\chi\in C(\tt)$
so that $f=\phi-\gl u(z)$ and $\phi=tL+\chi$.
Fix $k\in\N$, note that $(\phi+(\gl_{j_k}-\gl)u,u)
\in\cF^\rS(\gl_{j_k})$ and get
\[
0\leq \lan\mu_{j_k},\phi+(\gl_{j_k}-\gl)u-\gl_{j_k} u(z)\ran
=t\lan\mu_{j_k},L\ran+
\lan\mu_{j_k},\chi+(\gl_{j_k}-\gl)u-\gl_{j_k} u(z)\ran,
\]
because $\mu_{j_k}\in\cG^\rS(z,\gl_{j_k})'$. Sending $k\to\infty$ yields
\[
0\leq t\rho
+\lan\mu_0,\chi-\gl u(z)\ran.
\]
Since $\rho=\lan\nu,L\ran$ and $\lan\mu_0,\chi\ran
=\lan\nu,\chi\ran$, we find that
\[
0\leq t\lan\nu,L\ran+\lan\nu,\chi-\gl u(z)\ran
=\lan\nu,\phi-\gl u(z)\ran,
\]
which shows that $\nu\in\cG^\rS(z,\gl)'$, completing the proof.
\end{proof}
\begin{cor}\label{cor1-sc} Under the hypotheses of Theorem
\ref{thm1-sc}, we have
\begin{equation}\label{cor1-sc-1}
\gl v^\gl(z)=\min_{\mu\in\cP^\rS\cap\cG^\rS(z,\gl)'}
\lan\mu,L\ran.
\end{equation}
\end{cor}
\begin{proof} In view of Lemma \ref{sc-cpt}, we need only
show that there exists a sequence
$\{\mu_j\}_{j\in\N}\subset \cP^\rS\cap\cG^\rS(z,\gl)'$
such that
\[
\lim_{j\to\infty}
\lan\mu_j,L\ran
=\inf_{\mu\in\cP^\rS\cap\cG^\rS(z,\gl)'}\lan\mu,L\ran,
\]
which is immediate from the definition of the infimum.
\end{proof}
\begin{remark} \label{rem1-s}
If \eqref{Z} is not assumed in Corollary \ref{cor1-sc}, we still have
\[
-c_{\rS}=\inf_{\mu\in \cP^\rS\cap \cG^{\rS}(0)'}\lan \mu,L\ran.
\]
\end{remark}
\begin{definition}
We denote the set of minimizers of \eqref{cor1-sc-1}
by $\cM^{\rS}(z,\gl)$ for $\gl>0$,
and by $\cM^{\rS}(0)$ for $\gl=0$.
We call any $\mu \in \cM^{\rS}(0)$ a viscosity Mather measure,
and, if $\gl>0$,
any
$\gl^{-1}\mu$, with $\mu \in \cM^{\rS}(z,\gl)$ and $\gl>0$, a viscosity Green measure.
\end{definition}
\subsection{Convergence with vanishing discount}
The following theorem is our main result on the vanishing
discount problem for the state constraint problem.
\begin{thm}\label{thm2-sc}
Assume \eqref{F1}, \eqref{F2}, \eqref{CP}, \eqref{L},
\eqref{CPS}, \eqref{SLS}, and \eqref{EC}.
For each $\gl>0$, let $v^\gl\in C(\tt)$ be
the unique solution of \eqref{S}.
Then, the family $\{v^\gl+\gl^{-1}c_{\rS}\}_{\gl>0}$ converges to a function $u$ in $ C(\tt)$ as $\gl\to 0$.
Furthermore, $(u,c_{\rS})$ is a solution of \eqref{ES}.
\end{thm}
In order to prove this theorem, we need
the next
lemma.
\begin{lem}\label{lem2-sc}
Assume \eqref{F1}, \eqref{F2}, \eqref{CPS}, \eqref{SLS}, \eqref{EC} and \eqref{Z}.
For each $\gl>0$, let $v^\gl \in C(\tt)$ be the unique solution of \eqref{S}.
Then $\{v^\gl\}_{\gl>0}$ is uniformly bounded on $\tt$.
\end{lem}
\begin{proof}
Let $u \in C(\tt)$ be a solution of
(\ref{ES}$_0$).
It is clear that $u+\|u\|_{C(\tt)}$ and $u-\|u\|_{C(\tt)}$ are a supersolution and a subsolution of \eqref{S} respectively for any $\gl>0$.
By the comparison principle, we get
\[
u-\|u\|_{C(\tt)} \leq v^\gl \leq u+\|u\|_{C(\tt)} \ \ \text{on } \tt,
\]
which yields $\|v^\gl\|_{C(\tt)} \leq 2 \|u\|_{C(\tt)}$.
\end{proof}
We are now ready to prove the main convergence result in this section.
\begin{proof}[Proof of Theorem \ref{thm2-sc}]
As always,
we assume $c_{\rS}=0$.
Let $\cU$ be the set of accumulation points in $C(\tt)$ of $\{v^\gl\}_{\gl>0}$ as $\gl \to 0$.
By Lemma \ref{lem2-sc} and \eqref{EC}, $\{v^\gl\}_{\gl>0}$ is relatively compact in $C(\tt)$.
Clearly, $\cU \neq \emptyset$ and any $u\in\cU$ is a solution of
(\ref{ES}$_0$). It thus suffices to show that $\cU$ has a unique element
or, equivalently, that for any $v, w \in \cU$,
\begin{equation} \label{thm2-sc-1}
v \geq w \ \ \text{ on } \tt.
\end{equation}
Fix any $v,w \in \cU$.
There exist two sequences $\{\gl_j\}$ and $\{\gd_j\}$
of positive numbers converging to $0$
such that $v^{\gl_j} \to v$, $v^{\gd_j} \to w$ in $C(\tt)$ as $j \to \infty$.
Fix $z \in \tt$.
By Corollary \ref{cor1-sc}, there exists a
sequence $\{\mu_j\}_{j\in\N}$ of
measures
such that $\mu_j \in \cM^{\rS}(z,\gl_j)$ for every
$j \in \N$.
Since
\[
\lan\mu_j,L\ran=\gl_j v^{\gl_j}(z) \to 0=c_\rS\ \ \text{ as }\
j\to\infty,
\]
Lemma \ref{sc-cpt} guarantees that
there is $\mu\in\cP^\rS\cap\cG^\rS(0)'$ such that
\[
\lan\mu,L\ran=0 \ \ \ \text{ and } \ \ \
\lim_{j\to\infty}\lan\mu_j,\psi\ran=\lan\mu,\psi\ran
\ \ \ \text{ for all }\ \psi\in C(\tt).
\]
We note that $(L-\gd_j v^{\gd_j}, v^{\gd_j}) \in \cF^{\rS}(0)$ and $(L+\gl_j w, w) \in \cF^{\rS}(\gl_j)$, which implies that
\[
0 \leq \lan \mu, L-\gd_j v^{\gd_j} \ran
= -\gd_j \lan \mu, v^{\gd_j} \ran,
\]
and
\[
0 \leq \lan \mu_j, L + \gl_j w - \gl_j w(z) \ran = \gl_j (v^{\gl_j} - w)(z) + \gl_j \lan \mu_j, w \ran.
\]
Dividing the above inequalities by $\gd_j$ and $\gl_j$, respectively, and letting $j \to \infty$ yield
\[
\lan \mu, w \ran \leq 0 \quad \text{and} \quad 0 \leq (v-w)(z) + \lan \mu, w \ran,
\]
and thus, $v(z) \geq w(z)$. This completes the proof.
\end{proof}
\section{Dirichlet problem} \label{sec-d}
We consider the Dirichlet problem in this section.
We rename \eqref{DP} and \eqref{E} as \eqref{D} and \eqref{ED}, respectively, in
which the letter D
refers to ``Dirichlet".
For a given $g\in C(\bry)$, the two problems
of interest are
\begin{equation}\tag{D$_\gl$}\label{D}
\begin{cases}
\gl u+F[u]= 0 \ \ \text{ in }\ \gO, &\\[3pt]
u=g \ \ \text{ on }\ \bry,
\end{cases}
\end{equation}
for $\gl>0$,
and
\[\tag{ED}\label{ED}
\begin{cases}
F[u]= c \ \ \text{ in }\ \gO, &\\[3pt]
u=g \ \ \text{ on }\ \pl\gO.
\end{cases}
\]
As usual, (ED$_c$) refers to the Dirichlet problem (ED), with a given
constant $c$.
The function $g\in C(\bry)$ is fixed in the following argument,
while we also consider the Dirichlet problem
\[\tag{D$_{\gl,\phi,\psi}$}\label{D'}
\begin{cases}
\gl u+F_\phi[u]=0 \ \ \text{ in }\ \gO,\\[3pt]
u=\psi \ \ \ \text{ on }\ \bry,
\end{cases}
\]
where $\gl\geq 0$, $\phi\in C(\tt\tim\cA)$ and $\psi\in C(\tt)$ are all given.
Throughout this paper, we understand that $u\in C(\tt)$ is a subsolution of
\eqref{D'}, with $\gl\geq 0$, if
it is a subsolution of $\gl u+F_\phi[u]=0$ in $\gO$
in the viscosity sense and satisfies $u\leq \psi$ pointwise on $\pl\gO$,
and that $u$ is a supersolution of \eqref{D'} if it is a supersolution
of \eqref{D'} in the viscosity sense.
As always, we call $u\in C(\tt)$ a solution of
\eqref{D'} if it is a subsolution and supersolution of \eqref{D'}.
We assume in addition the following conditions.
\[\tag{CPD}\label{CPD}
\left\{\text{
\begin{minipage}{0.85\textwidth}
The comparison principle holds for \eqref{D'}, with
$\phi=L+\chi$, for any $\gl>0$, $\chi\in C(\tt)$ and $\psi\in C(\bry)$.
That is,
for any subsolution $u\in C(\tt)$ and supersolution $v\in C(\tt)$
of \eqref{D'}, with $\phi=L+\chi$,
the inequality $\,u\leq v\,$ holds on $\tt$.
\end{minipage}
}\right.
\]
We remark here that, if $\phi=L+\chi$, then the equation
$\,\gl u+F_\phi[u]=0\,$ can be written as $\gl u+F[u]=\chi$.
\[\tag{SLD}\label{SLD}
\left.
\text{
\begin{minipage}{0.85\textwidth}
For every $\gl>0$, \eqref{D}
admits a solution $v^\gl \in C(\tt)$.
\end{minipage}
}\right.
\]
In the vanishing discount problem for the Dirichlet
problem, the state constraint problem comes into play as the following
results indicate.
\begin{prop}\label{prop1-d0}
Assume \eqref{F1}, \eqref{F2}, \eqref{CPD} and \eqref{SLD}.
For $\gl>0$, let $v^\gl\in C(\tt)$ be the solution of \eqref{D}.
Then, \emph{(i)}\
if \emph{(ED$_0$)} has a solution in $C(\tt)$, then
\[
\lim_{\gl\to 0+ }\gl v^\gl(x)=0 \ \ \ \text{ uniformly on }\ \tt,
\]
and \emph{(ii)}\ if
\emph{(ES$_c$)}, with $c>0$, has a solution in $C(\tt)$, then
\[
\lim_{\gl\to 0+}\gl v^\gl(x)=-c \ \ \ \text{ uniformly on }\ \tt.
\]
\end{prop}
Notice that, in the proposition above, the assumptions of claims (i)
and (ii) are mutually exclusive, since their conclusions exclude one another.
\begin{proof}
Assume first that (ED$_0$) has a solution in $C(\tt)$, which
we denote by $u$. We select $M>0$ so that
$\|u\|_{C(\tt)}\leq M$ and note that the function
$u+M$ is nonnegative on $\tt$ and hence a supersolution of \eqref{D} for any $\gl>0$
and, similarly, that $u-M$ is a subsolution of \eqref{D} for any $\gl>0$.
Thus, by \eqref{CPD}, we have $u-M\leq v^\gl\leq u+M$ on $\tt$, which readily yields
\[
\lim_{\gl\to 0}\gl v^\gl(x)=0 \ \ \ \text{ uniformly on }\ \tt.
\]
Next, assume that (ES$_c$), with $c>0$, has a solution in $C(\tt)$, and
let $u\in C(\tt)$ be such a solution.
Choose $M>0$ large enough so that $\|u\|_{C(\tt)}+\|g\|_{C(\bry)}\leq M$.
Observe that, for any $\gl>0$, $u+M-\gl^{-1}c$ is a supersolution of
$\gl w+F[w]=0$ on $\tt$ and, therefore, is a supersolution of \eqref{D}
and that $u-M-\gl^{-1}c$ is a subsolution of \eqref{D} for any $\gl>0$.
Hence, we get $u-M-\gl^{-1}c\leq v^\gl\leq u+M-\gl^{-1}c$ on $\tt$ for all $\gl>0$.
This shows that
\[
\lim_{\gl\to 0}\gl v^\gl(x)=-c.
\]
The proof is now complete.
\end{proof}
\begin{prop}\label{prop1-d}
Assume \eqref{F1}, \eqref{F2}, \eqref{CPD}, \eqref{SLD} and \eqref{EC}.
Then, there is a dichotomy: either problem \emph{(ED$_0$)}, or
\emph{(ES$_c$)}, with some $c>0$, has a solution in $C(\tt)$.
\end{prop}
Here is an illustrative, simple example regarding the solvability of \eqref{ED} or \eqref{ES}. Let $n=1$ and $m\in\R$,
and consider the case
$\gO=(-1,\,1)$, $F(x,p)=|p|+m$, and $g=0$.
It is easily checked that conditions \eqref{F1}, \eqref{F2},
\eqref{CPD}, \eqref{SLD}, and \eqref{EC} are satisfied.
The Dirichlet problem
\begin{equation}\label{ex-d-1}
|u'|+m=c \ \ \text{ in }(-1,\,1)\quad \text{ and }\quad
u(-1)=u(1)=0,
\end{equation}
with $c\in\R$, has a solution $u(x):=(c-m)(1-|x|)$
if and only if $c\geq m$. Moreover, if $c=m$,
then any function $u(x):=C$, with $C\leq 0$,
is a solution of \eqref{ex-d-1} and any constant function $u$
is a solution of the state constraint problem
\begin{equation}\label{ex-d-2}
|u'|+m\geq c \ \ \text{ in }[-1,\,1]
\quad\text{ and }\quad
|u'|+m\leq c \ \ \text{ in }(-1,\,1).
\end{equation}
Thus, if $m\leq 0$, then problem \eqref{ex-d-1}, with $c=0$, has a
solution in $C([-1,\,1])$, and if $m > 0$, then
problem \eqref{ex-d-2}, with $c=m$,
has a solution in $C([-1,\,1])$.
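In this example, the discounted problems can also be solved in closed form, which
illustrates Proposition \ref{prop1-d0} concretely; this is a routine
verification, included only as an illustration. For $\gl>0$, one checks
that the solution of \eqref{D} is given by
\[
v^\gl(x)=
\begin{cases}
\displaystyle -\fr{m}{\gl}\left(1-e^{-\gl(1-|x|)}\right) \ \ &\text{ if }\ m\leq 0,\\[6pt]
\displaystyle -\fr{m}{\gl} &\text{ if }\ m>0.
\end{cases}
\]
Hence, as $\gl\to 0+$, we have $\gl v^\gl\to 0$ uniformly on $[-1,\,1]$ when
$m\leq 0$, while $\gl v^\gl\equiv -m$ when $m>0$. Moreover, if $m\leq 0$,
then $v^\gl(x)\to -m(1-|x|)$, the solution of \eqref{ex-d-1} with $c=0$,
and, if $m>0$, then $v^\gl+\gl^{-1}m\equiv 0$, a constant solution of
\eqref{ex-d-2} with $c=m$.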
\begin{proof}[Proof of Proposition \ref{prop1-d}] For $\gl>0$, let $v^\gl\in C(\tt)$ be the solution of \eqref{D}.
Choose a constant $M>0$ so that
$\|g\|_{C(\bry)}+\|F(\cdot,0,0)\|_{C(\tt)}\leq M$
and observe that, if $\gl>0$, then
the constant functions $\gl^{-1}M$ and
$-\gl^{-1}M$ are, respectively, a supersolution and a subsolution of \eqref{D}.
By \eqref{CPD}, we have $-\gl^{-1}M\leq v^\gl\leq \gl^{-1}M$ on $\tt$, and
consequently, the family $\{\gl v^\gl\}_{\gl>0}$ is uniformly
bounded on $\tt$.
By \eqref{EC}, the family $\{\gl v^\gl\}_{\gl>0}$
is equi-continuous on $\tt$. Setting
\[
w^\gl:=v^\gl-\min_{\tt} v^\gl \ \ \ \text{ on }\ \tt \ \text{ for }\ \gl>0,
\]
we observe that $\{w^\gl\}_{\gl>0}$ is relatively compact in $C(\tt)$,
and choose a sequence $\{\gl_j\}_{j\in\N}\subset (0,\,\infty)$, converging
to zero, so that $\{w^{\gl_j}\}_{j\in\N}$ converges in $C(\tt)$ to some
function $w\in C(\tt)$ as $j\to\infty$. The uniform
boundedness of $\{\gl v^\gl\}_{\gl>0}$ allows us to assume,
after taking a subsequence if necessary, that the limit
\[
d:=\lim_{j\to\infty}\gl_j\min_{\tt}v^{\gl_j}\, \in\,\R
\]
exists. Then the equi-continuity of $\{v^\gl\}_{\gl>0}$ ensures
that as $j\to\infty$,
\[
\gl_j v^{\gl_j}(x) \to d \ \ \ \text{ uniformly on }\tt.
\]
Now, according to the boundary condition $v^\gl\leq g$, pointwise on $\bry$,
for any $\gl>0$,
we have
\begin{equation}\label{prop-d1-1}
\min_{\tt}v^\gl\leq \min_{\bry}g \ \ \ \text{ for all }\ \gl>0,
\end{equation}
and hence, we find that $d\leq 0$.
Set
\[
m_j:=\min_{\tt}v^{\gl_j} \ \ \ \text{ for }j\in\N,
\]
and consider the sequence $\{m_j\}_{j\in\N}$, which is bounded from above
due to \eqref{prop-d1-1}. By passing
to a subsequence if needed, we may assume
that
\[
m:=\lim_{j\to\infty}m_j\in[-\infty,\,\min_{\bry}g].
\]
Observe that, for any $j\in\N$, $w^{\gl_j}$ is a solution of
\[
\begin{cases}
\gl_j(w^{\gl_j}+m_j)+F[w^{\gl_j}]=0 \ \ \text{ in }\gO,\\[3pt]
w^{\gl_j}+m_j=g \ \ \ \text{ on }\ \pl\gO.
\end{cases}
\]
Thus, in the limit $j\to\infty$,
we find that if $m>-\infty$,
then $w$ is a solution of (\ref{ED}$_0$), with $g$ replaced by $g-m$,
which says that the function $u:=w+m$ is a solution of
(ED$_0$). Notice that if $m>-\infty$, then $d=0$.
On the other hand, if $m=-\infty$, then, for $j$ sufficiently large,
we have $v^{\gl_j}<g$ on $\bry$, which implies that $v^{\gl_j}$ is a
supersolution of $\gl_j v^{\gl_j}+F[v^{\gl_j}]=0$ on $\tt$.
Equivalently, $w^{\gl_j}$ is a supersolution of
$\gl_j(w^{\gl_j}+m_j)+F[w^{\gl_j}]=0$ on $\tt$. Sending $j\to\infty$, we
deduce that $w$ is a solution of (ES$_{-d}$).
Note that if $w\in C(\tt)$ is a solution of (ES$_0$)
and if $C\in \R$ is large enough so that $w-C\leq g$ on $\bry$, then
$u:=w-C$ is a solution of (ED$_0$).
Thus, we conclude that either problem
(ED$_0$) has a solution in $C(\tt)$, or else
problem (ES$_c$), with some $c>0$, has a solution in $C(\tt)$.
\end{proof}
\subsection{Representation formulas in the case $\gl>0$}
We first need to do some setup
to take the Dirichlet boundary condition into account.
For $M>0$ and $\gl\ge 0$, we
define $\cF^{\rD}(\gl)$ (resp., $\cF^{\rD}(M,\gl)$) as the set of
all $(\phi,\psi,u)\in \Psi^+\tim C(\tt)$
(resp., $(\phi,\psi,u)\in \Psi^+(M)\tim C(\tt)$)
such that
$u$ is a subsolution of \eqref{D'}.
Fix $z \in \tt$ and define the sets
$\cG^{\rD}(z,\gl),\,\cG^{\rD}(M,z,\gl)\subset C(\bb)\tim C(\tt)$ by
\[
\begin{aligned}
\cG^{\rD}(z,\gl):&\,=\left\{\big(\phi-\gl u(z),
\gl(\psi-u(z))\big)\mid (\phi,\psi,u)\in\cF^{\rD}(\gl)\right\},\\
\cG^{\rD}(M,z,\gl):&\,=\left\{\big(\phi-\gl u(z),
\gl(\psi-u(z))\big)\mid (\phi,\psi,u)\in\cF^{\rD}(M,\gl)\right\}.
\end{aligned}\]
We also write $\cG^\rD(0)$ and $\cG^{\rD}(M,0)$, respectively, for $\cG^{\rD}(z,0)$ and $\cG^{\rD}(M,z,0)$, which
are independent of $z$.
Notice that
\[\cG^{\rD}(0)=\left\{(\phi, 0)\mid
(\phi,\psi,u)\in\cF^{\rD}(0)\right\}\
\text{ and }\
\cG^{\rD}(M,0)=\left\{(\phi, 0)\mid
(\phi,\psi,u)\in\cF^{\rD}(M,0)\right\},
\]
and that
\[
\cF^\rD(\gl)=\bigcup_{M>0} \cF^{\rD}(M,\gl)
\ \ \ \text{ and } \ \ \
\cG^{\rD}(z,\gl)= \bigcup_{M>0} \cG^{\rD}(M,z,\gl).
\]
\begin{lem}\label{lem:dirichlet-convex}
Assume \eqref{F1}, \eqref{F2}, and \eqref{CP}.
For any $(z,\gl)\in\tt\tim[0,\infty)$ and $M>0$,
the sets $\cF^\rD(\gl)$, $\cF^\rD(M,\gl)$, $\cG^\rD(z,\gl)$ and
$\cG^{\rD}(M, z,\gl)$ are
convex cones with vertex at the origin.
\end{lem}
\begin{proof} It is easily seen that $\Psi^+$ and $\Psi^+(M)$ are convex cones with vertex at the origin.
For $i=1,2$, let $(\phi_i,\psi_i,u_i)\in\cF^{\rD}(\gl)$,
fix $t,s\in(0,\,\infty)$ and set
\[
u=tu_1+su_2, \quad \phi=t\phi_1+s\phi_2 \ \
\text{ and } \ \ \psi=t\psi_1+ s\psi_2.
\]
As in the proof of \cite[Lemma 2.8]{IsMtTr1}, we find that $u$
is a subsolution of $\gl u+F_\phi[u]=0$ in $\gO$. Since
$u_i\leq \psi_i$ pointwise on $\bry$ for $i=1,2$, we immediately get
$u\leq \psi$ pointwise on $\bry$. Hence, we see that
$(\phi,\psi,u)\in\cF^\rD(\gl)$.
Assume, in addition, that
$(\phi_i,\psi_i,u_i)\in\cF^\rD(M,\gl)$
for $i=1,2$. Then $(\phi_i,\psi_i)\in\Psi^+(M)$ for $i=1,2$,
and, hence, the cone property of $\Psi^+(M)$ implies that
$(\phi,\psi)\in\Psi^+(M)$, which proves, together with the property
of $(\phi,\psi,u)$ being in $\cF^\rD(\gl)$,
that $(\phi,\psi,u)\in\cF^\rD(M,\gl)$.
Thus, we see that
$\cF^{\rD}(\gl)$ and $\cF^{\rD}(M,\gl)$ (and also
$\cG^{\rD}(z,\gl)$ and $\cG^{\rD}(M,z,\gl)$) are
convex cones with vertex at the origin.
\end{proof}
Henceforth, we write
\[
\cR_{1}:=\cR_{\bb},\quad
\cR_2:=\cR_{\bry},\quad \cR_L^+:=\cR_{\bb}^+\cap\cR_L
\ \ \text{ and } \ \
\cR_2^+:=\cR_{\bry}^+.
\]
Define
\[
\cP^{\rD}:=\left\{(\mu_1,\mu_2)\in\cR_{L}^+\times\cR_{2}^{+}\mid
\mu_1(\tt\times \cA)+\mu_2(\bry)=1\right\},
\]
and, for any compact subset $K$ of $\cA$,
\[
\cP^{\rD}_K:=\left\{(\mu_1,\mu_2)\in\cP^{\rD}\mid
\mu_1(\tt\times K)+\mu_2(\bry)=1\right\}.
\]
We define the dual cones
$\cG^{\rD}(z,\gl)',\,\cG^\rD(M,z,\gl)'$
of $\cG^{\rD}(z,\gl),\,\cG^{\rD}(M,z,\gl)$ in
$\cR_{L}\times\cR_{2}$, respectively, by
\[\begin{aligned}
\cG^\rD(z,\gl)':&\,=\left\{(\mu_1,\mu_2)\in \cR_{L}\times\cR_{2}
\mid \lan\mu_1,f_1\ran+\lan\mu_2,f_2\ran\ge0
\ \ \text{ for all } (f_1,f_2)\in\cG^{\rD}(z,\gl)\right\},
\\
\cG^\rD(M,z,\gl)':&\,=\left\{(\mu_1,\mu_2)\in
\cR_{L}\times\cR_{2}
\mid \lan\mu_1,f_1\ran+\lan\mu_2,f_2\ran\ge0
\ \ \text{ for all } (f_1,f_2)\in\cG^{\rD}(M,z,\gl)\right\}.
\end{aligned}
\]
It is obvious that
\[
\cG^\rD(z,\gl)'=\bigcap_{M>0}\cG^\rD(M,z,\gl)'.
\]
As usual, we write $\cG^\rD(0)'$ and $\cG^\rD(M,0)'$
for $\cG^\rD(z,0)'$ and $\cG^\rD(M,z,0)'$, respectively.
The following theorem is a key step toward
Theorems \ref{thm1-d0} and \ref{thm3-d}, two of the main results in this section.
\begin{thm} \label{thm1-d}
Assume \eqref{F1}, \eqref{F2}, \eqref{CP},
\eqref{L} and \eqref{CPD}.
Let $(z,\gl)\in\tt\tim (0,\,\infty)$
and $M> \|g\|_{C(\bry)}$.
If $v^\gl\in C(\tt)$ is a
solution of \eqref{D}, then
\begin{equation}\label{thm1-d-0}
\gl v^\gl(z)=\inf_{(\mu_1,\mu_2)\in \cP^{\rD} \cap
\cG^\rD(M,z,\gl)'}\,\left(
\lan \mu_1,\,L\ran+\gl\lan\mu_2,\,g\ran\right).
\end{equation}
\end{thm}
We need a lemma for the proof of the theorem above.
\begin{lem}\label{lem1-d}Assume \eqref{F1}, \eqref{F2},
and \eqref{CPD}.
Let $\gl>0$.
If $(\phi,\psi,u)\in\cF^{\rD}(\gl)$ and $\phi=t(L+\chi)$, with $t>0$ and $\chi\in C(\tt)$, then
\[
\gl u\leq \max\left\{t\max_{\tt}(\chi-F(\cdot,0,0)),\,\gl\max_{\bry}\psi \right\}
\ \ \text{ on }\tt.
\]
\end{lem}
\begin{proof} Let $(\phi,\psi,u)\in\cF^{\rD}(\gl)$, and assume that
$\phi=t(L+\chi)$ for some $t>0$ and $\chi\in C(\tt)$.
Recall that
\[\begin{aligned}
F_\phi(x,p,X)&\,=\sup_{\ga\in\cA}\left(-\tr a X-b\cdot p -t(L+\chi)\right)
=t\sup_{\ga\in\cA}\left(-\tr a t^{-1}X-b\cdot t^{-1}p -L\right)-t\chi
\\&\,=tF(x,t^{-1}p,t^{-1}X)-t\chi(x),
\end{aligned}
\]
to find that $u$ is a subsolution of
\[
\gl u+tF(x,t^{-1}Du,t^{-1}D^2u)=t\chi \ \ \text{ in }\gO.
\]
Hence, the function $v:=t^{-1}u$ is a subsolution of
\[
\gl v+F[v]=\chi \ \ \text{ in }\gO
\]
and satisfies $v\leq t^{-1}\psi$ pointwise on $\bry$.
We set
\[
A=\max\left\{\gl^{-1}\max_{\tt}(\chi-F(\cdot,0,0)),\
t^{-1}\max_{\bry}\psi\right\},
\]
and note that
the constant function $w:=A$
is a supersolution of
\[
\gl w+F[w]=\chi \ \ \text{ in }\gO \ \ \text{ and } \ \
w=t^{-1}\psi \ \ \text{ on }\bry.
\]
Hence, comparison principle \eqref{CPD} guarantees that
$v\leq A$ on $\tt$, which yields
\[
\gl u(x)\leq \gl t A = \max\left\{ t\max_{\tt}(\chi-F(\cdot,0,0)),\,
\gl\max_{\bry}\psi\right\}\ \ \ \text{ for all }\ x\in\tt. \qedhere
\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm1-d}]
Let $v^\gl\in C(\tt)$ be a solution of \eqref{D}.
Since $\|g\|_{C(\bry)}<M$, we have
$(L,g,v^\gl)\in\cF^{\rD}(M,\gl)$. Hence,
owing to the definition of $\cG^{\rD}(M,z,\gl)'$,
we get
\[\begin{aligned}
0&\,\leq \lan \mu_1,\,L-\gl v^\gl(z)\ran+\gl \lan\mu_2,\,g-v^\gl(z)\ran
\\&\,=
\lan \mu_1,L\ran+\gl\lan \mu_2,g\ran-\gl v^\gl(z) \ \ \text{ for all }
(\mu_1,\mu_2)\in\cP^{\rD} \cap \cG^{\rD}(M,z,\gl)',
\end{aligned}\]
and, therefore,
\begin{equation}\label{thm1-d-1}
\gl v^\gl(z)\leq \inf_{(\mu_1,\mu_2)\in
\cP^{\rD} \cap \cG^{\rD}(M,z,\gl)'}
\left(\lan\mu_1,\,L\ran+\gl\lan\mu_2,\,g\ran\right).
\end{equation}
Next, we show the reverse inequality:
\begin{equation}\label{thm1-d-1+}
\gl v^\gl(z)\geq \inf_{(\mu_1,\mu_2)
\in\cP^{\rD} \cap \cG^{\rD}(M,z,\gl)'}
\left(\lan\mu_1,\,L\ran+\gl\lan\mu_2,\,g\ran\right).
\end{equation}
For this, we suppose toward a contradiction that
\begin{equation}\label{thm1-d-2}
\gl v^\gl(z)+\ep<\inf_{(\mu_1,\mu_2)\in\cP^{\rD}\cap
\cG^{\rD}(M,z,\gl)'}
\left(\lan \mu_1, L\ran+\gl\lan\mu_2,\,g\ran\right),
\end{equation}
for a small $\ep>0$.
Now, we fix a compact subset $K$ of $\cA$ as follows.
If $\cA$ is compact, then we set $K:=\cA$. Otherwise, pick
an $\ga_0\in\cA$ and choose a constant $L_0>0$ so that
\begin{equation}\label{ga_0L_0}
\max_{x\in\tt}L(x,\ga_0)\leq L_0 \ \ \text{ and } \ \
(\gl+1) |v^\gl(z)|\leq L_0.
\end{equation}
Then set
\begin{equation} \label{C_1C_2}
C_1:=M+\|F(\cdot,0,0)\|_{C(\tt)}, \ \ \ C_2:=C_1(1+\gl),
\end{equation}
\begin{equation}\label{gdL_1}
\gd:=\min\Big\{\fr 12,\,\fr\ep{4\gl(M+L_0)}\Big\} \ \ \text{ and } \ \
L_1:=\fr{M+L_0+C_2}{\gd}.
\end{equation}
Owing to \eqref{L}, we may select a compact set $K_0\subset\cA$
so that
\begin{equation}\label{defK_0}
L(x,\ga)\geq \max\{L_0,\,L_1\}(\,=L_1\,) \ \ \ \text{ for all }(x,\ga)\in\tt\tim(\cA\setminus K_0).
\end{equation}
Finally, pick an $\ga_1\in\cA\setminus K_0$ and define
the compact set $K\subset\cA$ by
\begin{equation}\label{defK}
K:=K_0\cup\{\ga_0,\ga_1\}(\,=K_0\cup\{\ga_1\}\,).
\end{equation}
It follows from
\eqref{thm1-d-2} that
\begin{equation}\label{thm1-d-2+}
\gl v^\gl(z)+\ep<\inf_{(\mu_1,\mu_2)\in\cP^{\rD}_{K} \cap
\cG^{\rD}(M,z,\gl)'}
\left(\lan \mu_1, L\ran+\gl\lan\mu_2,\,g\ran\right).
\end{equation}
By Lemma \ref{lem:dirichlet-convex}, $\cG^{\rD}(M,z,\gl)$ is a
convex cone with vertex at the origin, and hence, we get
\[
\inf_{(f_1,f_2)\in\cG^{\rD}(M,z,\gl)}
\left(\lan \mu_1,\,f_1\ran+\gl\lan\mu_2,f_2\ran\right)=
\begin{cases} 0 \ \ &\text{ if }\ (\mu_1,\mu_2)\in\cP^{\rD}_{K}\cap\cG^{\rD}(M,z,\gl)', \\
-\infty &\text{ if }\ (\mu_1,\mu_2)\in \cP^{\rD}_{K}\setminus
\cG^{\rD}(M,z,\gl)',
\end{cases}
\]
and moreover,
\begin{equation}\begin{aligned}
\label{thm1-d-2++}
\inf_{(\mu_1,\mu_2)\in\cP^{\rD}_{K}\cap
\cG^{\rD}(M,z,\gl)'}
&\left(\lan \mu_1, L\ran+\gl\lan \mu_2,g\ran\right)
\\&\,\kern-30pt=
\inf_{(\mu_1,\mu_2)\in\cP^{\rD}_{K}}\
\sup_{(f_1,f_2)\in\cG^{\rD}(M,z,\gl)}\left(\lan \mu_1, L\ran+\gl\lan\mu_2,g\ran
-\lan\mu_1,f_1\ran-\gl\lan\mu_2,f_2\ran
\right)
\\&\,\kern-30pt=\inf_{(\mu_1,\mu_2)\in\cP^{\rD}_{K}}\
\sup_{(f_1,f_2)\in\cG^{\rD}(M,z,\gl)}\,\left(\lan \mu_1, L-f_1\ran+\gl\lan\mu_2,g-f_2\ran\right).
\end{aligned}
\end{equation}
Since $K$ is compact, $\cP^{\rD}_{K}$ is a compact convex subset of
$\cR_{L}\tim\cR_2$, with the topology of weak convergence of measures.
Also, for any $(f_1,f_2)\in\cG^\rD(M,z,\gl)$, the functional
\[
\cP_K^\rD\ni
(\mu_1,\mu_2)\mapsto \lan \mu_1, L-f_1\ran+\gl\lan\mu_2,g-f_2\ran\in\R
\]
is continuous. Thus, Sion's minimax theorem implies
\[\begin{aligned}
\min_{(\mu_1,\mu_2)\in\cP^{\rD}_{K}}
&\ \sup_{(f_1,f_2)\in\cG^{\rD}(M,z,\gl)}\,\left(\lan \mu_1, L-f_1\ran+\gl\lan\mu_2,g-f_2\ran\right)
\\&= \sup_{(f_1,f_2)\in\cG^{\rD}(M,z,\gl)}\ \min_{(\mu_1,\mu_2)\in\cP^{\rD}_{K}}
\,\left(\lan \mu_1, L-f_1\ran+\gl\lan\mu_2,g-f_2\ran\right),
\end{aligned}\]
which, together with \eqref{thm1-d-2+} and \eqref{thm1-d-2++}, yields
\[
\gl v^\gl(z)+\ep <
\ \sup_{(f_1,f_2)\in\cG^{\rD}(M,z,\gl)}\ \min_{(\mu_1,\mu_2)
\in\cP^{\rD}_{K}}
\,\left(\lan \mu_1, L-f_1\ran+\gl\lan\mu_2,g-f_2\ran\right).
\]
Thus,
we may choose $(\phi,\psi,u)\in\cF^{\rD}(M,\gl)$ and $(t,\chi)\in (0,\,\infty)\tim C(\tt)$
so that $\phi=t(L+\chi)$, $\|\chi\|_{C(\tt)}< M$, and
\begin{equation}\label{thm1-d-4}
\gl v^\gl(z)+\ep <
\min_{(\mu_1,\mu_2)\in\cP^{\rD}_{K}}
\,\left(\lan \mu_1, L-\phi+\gl u(z)\ran+\gl\lan\mu_2,g-\psi+u(z)\ran\right).
\end{equation}
Note that, by the definition of $\cF^{\rD}(M,\gl)$,
the inequality $\|\psi\|_{C(\bry)}< tM$ holds.
Since $(0,\gd_x)\in \cP^{\rD}_{K}$ for all $x\in\bry$, we get
from \eqref{thm1-d-4}
\[
\gl v^\gl(z)+\ep<\gl(g-\psi+u(z))\quad\text{on} \ \bry,
\]
which reads
\begin{equation}\label{thm1-d-4+}
\psi< g+(u-v^\gl)(z)-\gl^{-1}\ep \ \ \text{ on }\bry.
\end{equation}
Also, since $(\gd_{(x,\ga)},0)\in\cP^\rD_K$ for all $(x,\ga)\in
\tt\tim K$, we get from \eqref{thm1-d-4}
\begin{equation}\label{thm1-d-4++}
(t-1)L(x,\ga)+t\chi(x)<\gl (u-v^\gl)(z)-\ep
\ \ \ \text{ for all }\ (x,\ga)\in\tt \tim K.
\end{equation}
We show that there exist a constant $\gth>0$
and a function $w:=\gth u\in C(\tt)$ that is a subsolution of
\begin{equation}\label{thm1-d-6}
\gl w+F[w]=-\gl (v^\gl-w)(z)-2^{-1}\gth\ep\ \ \text{ in }\gO,
\end{equation}
and
\begin{equation}\label{thm1-d-6+}
w\leq g -(v^\gl-w)(z)-(2\gl)^{-1}\gth\ep\ \ \ \text{ on }\bry.
\end{equation}
Once this is done, we reach a contradiction at once.
Indeed, the function
$\gz:=w+(v^\gl-w)(z)+(2\gl)^{-1}\gth\ep$ is a subsolution
of \eqref{D},
and the comparison principle \eqref{CPD}
yields $\gz\leq v^\gl$, which, after evaluation at $z$, gives
$\,(2\gl)^{-1}\gth\ep\leq 0$.
This is a contradiction.
Assume that $\cA$ is compact. Then we have $K=\cA$, and therefore,
we get from \eqref{thm1-d-4++}
\[
\phi=L+(t-1)L+t\chi<L-\gl(v^\gl-u)(z)-\ep \ \ \ \text{ on }\ \bb,
\]
which ensures that the pair of $w:=u$ and $\gth:=1$ satisfies \eqref{thm1-d-6},
while \eqref{thm1-d-6+} for this pair
is an immediate consequence of \eqref{thm1-d-4+}.
We assume henceforth that $\cA$ is not compact.
We split our further argument into two cases.
Consider first the case when $t\leq 1$.
Recall that $K=K_0\cup\{\ga_0,\ga_1\}$. By \eqref{thm1-d-4++}, we have
\begin{equation}\label{temp1}
(t-1)L(x,\ga_0)+t\chi(x)<\gl(u-v^\gl)(z)-\ep \ \ \ \text{ for all }\ x\in\tt.
\end{equation}
By the choice of $\ga_0$ and $L_0$, we have
\[
L(x,\ga)\geq L_0\geq L(x,\ga_0) \ \ \ \text{ for all }\ (x,\ga)\in\tt\tim(\cA\setminus K).
\]
Then we combine this with \eqref{temp1}, to get
\[
(t-1)L(x,\ga)+t\chi(x)<\gl(u-v^\gl)(z)-\ep \ \ \ \text{ for all }\
(x,\ga)\in\tt\tim(\cA\setminus K),
\]
which, furthermore, yields together with \eqref{thm1-d-4++}
\[
(t-1)L(x,\ga)+t\chi(x)<\gl(u-v^\gl)(z)-\ep \ \ \ \text{ for all }\
(x,\ga)\in\tt\tim\cA.
\]
From this, we observe
\[
\phi=L+(t-1)L+t\chi<L-\gl (v^\gl-u)(z)-\ep \ \text{ on }\bb,
\]
which shows together with \eqref{thm1-d-4+} the validity of
\eqref{thm1-d-6} and \eqref{thm1-d-6+}, with $w:=u$ and $\gth:=1$.
Secondly, we consider the case when $t> 1$. Recall the choice of $L_0$ and $K_0$, to see
\[
|\gl v^\gl(z)|\leq L_0\leq L(x,\ga) \ \ \ \text{ for all }\ (x,\ga)
\in \tt\tim (\cA\setminus K_0).
\]
Since $\ga_1\in K\setminus K_0$,
this and \eqref{thm1-d-4++} yield
\begin{equation}\label{temp2}
(t-1)\gl v^\gl(z)+t\chi(x)\leq (t-1)L(x,\ga_1)
+t\chi(x)<\gl (u-v^\gl)(z)-\ep,
\end{equation}
which verifies $t\chi < -t\gl v^\gl(z) + \gl u(z) -\ep$ on $\tt$ and, furthermore,
\[
\phi=t(L+\chi) < tL - t \gl v^\gl(z) + \gl u(z) - \ep \quad \text{on} \ \tt \times \cA.
\]
This shows formally that
\[
\gl u+\cL_\ga u\leq \phi<t L-t\gl (v^\gl-t^{-1}u)(z) -\ep \ \
\text{ in }\tt\tim\cA,\]
from which we deduce that if we set $w:=t^{-1}u$
and $\gth=t^{-1}$, then
\eqref{thm1-d-6} holds.
We continue with the case when $t>1$.
By Lemma \ref{lem1-d}, in view of \eqref{C_1C_2},
we get
\[
\gl u\leq \max\{t(M+\|F(\cdot,0,0)\|_{C(\tt)}),\gl tM\}
\leq C_1(1+\gl)t= C_2t,
\]
and, accordingly,
\begin{equation}\label{temp4}
\gl (u-v^\gl)(z)\leq C_2t+L_0\leq (C_2+L_0)t.
\end{equation}
Combining this with the second inequality of \eqref{temp2}
and \eqref{defK_0}, we get
\[
(t-1) L_1 < Mt+(L_0+C_2)t=(M+L_0+C_2)t.
\]
Using this and \eqref{gdL_1}, we compute
\[
t-1<\fr{(M+L_0+C_2)t}{L_1}=\gd t\leq \fr t2,
\]
which shows that $t<2$, and we get from the above
\begin{equation}\label{temp3}
t-1<\gd t\leq 2\gd\leq \fr{\ep}{2\gl(M+L_0)}.
\end{equation}
Hence, we obtain
\[
(t-1)(g-v^\gl(z))\leq (t-1)(M+L_0)<\fr{\ep}{2\gl},
\]
and moreover, by \eqref{thm1-d-4+},
\[\begin{aligned}
t^{-1}\psi&\,<g+(t^{-1}u-v^\gl)(z)-(t\gl)^{-1}\ep
+(t^{-1}-1)(g-v^\gl(z))
\\&\,<g+(t^{-1}u-v^\gl)(z)-(2t\gl)^{-1}\ep \ \ \ \text{ on }\pl\gO,
\end{aligned}
\]
which shows that \eqref{thm1-d-6+} holds with $w:=t^{-1}u$ and $\gth:=t^{-1}$.
Therefore, both \eqref{thm1-d-6} and \eqref{thm1-d-6+} hold with this choice, and the proof is complete.
\end{proof}
\begin{thm} \label{thm1-d0}
Assume \eqref{F1}, \eqref{F2}, \eqref{CP},
\eqref{L} and
\eqref{CPD}.
Let $(z,\gl)\in\tt\tim (0,\,\infty)$.
If $v^\gl\in C(\tt)$ is a
solution of \eqref{D}, then
\begin{equation}\label{thm1-d0-0}
\gl v^\gl(z)=\min_{(\mu_1,\mu_2)\in \cP^\rD\cap
\cG^\rD(z,\gl)'}\,\left(
\lan \mu_1,\,L\ran+\gl\lan\mu_2,\,g\ran\right).
\end{equation}
\end{thm}
We introduce the set $\cG^\rD(z,\gl)^\dag$ for
$(z,\gl)\in\tt\tim[0,\,\infty)$ (resp., $\cG^\rD(M,z,\gl)^\dag$ for
$(M,z,\gl)\in(0,\,\infty)\tim\tt\tim[0,\,\infty)$)
as the set
of triples $(\rho,\mu_1,\mu_2)\in
[0,\,\infty)\tim\cP^\rD$
satisfying
\[
0\leq t\rho+\lan\mu_1,tL+\chi-\gl u(z)\ran+\gl\lan\mu_2,\psi-u(z)\ran
\]
for all $(tL+\chi,\psi,u)\in\cF^\rD(\gl)$ (resp., $(tL+\chi,\psi,u)\in\cF^\rD(M,\gl)$), with $t>0$ and $\chi\in C(\tt)$.
Notice that it is required here for $(\rho,\mu_1,\mu_2)\in
\cG^\rD(z,\gl)^\dag$ (resp., $(\rho,\mu_1,\mu_2)\in
\cG^\rD(M,z,\gl)^\dag$) to fulfill the condition
$(\mu_1,\mu_2)\in\cP^\rD$. We remark that
the inclusion $(\mu_1,\mu_2)\in
\cP^\rD\cap \cG^\rD(z,\gl)'$ (resp., $(\mu_1,\mu_2)\in
\cP^\rD\cap \cG^\rD(M,z,\gl)'$) holds
if and only if
$(0,\mu_1,\mu_2)\in\cG^\rD(z,\gl)^\dag$
(resp., $(0,\mu_1,\mu_2)\in\cG^\rD(M,z,\gl)^\dag$), that
$\bigcap_{M>0}\cG^\rD(M,z,\gl)^\dag=\cG^\rD(z,\gl)^\dag$,
and that if $N>M$, then $\cG^\rD(N,z,\gl)^\dag\subset\cG^\rD(M,z,\gl)^\dag$.
The following lemmas are useful for the proof of Theorem
\ref{thm1-d0}.
\begin{lem}\label{cpt-d-1}
Assume \eqref{F1} and \eqref{L}. Fix $R\in\R$.
Then, \emph{(i)}\ the functional
\[
(\mu_1,\mu_2)\mapsto \lan\mu_1,L\ran
\]
is lower semicontinuous on $\cP^\rD$, with the topology
of weak convergence of measures, and \emph{(ii)}\
the set
\[
\cP^\rD_R:=\{(\mu_1,\mu_2)\in\cP^\rD
\mid \lan \mu_1,L\ran \leq R\}
\]
is compact in the topology of weak convergence of measures.
\end{lem}
It should be remarked that, by definition,
a sequence $\{(\mu_1^k,\mu_2^k)\}_{k\in\N}\subset
\cR_1\tim\cR_2$ converges to a pair $(\mu_1,\mu_2)\in\cR_1\tim\cR_2$
weakly in the sense of measures if and only if
\[
\lim_{k\to\infty}\left(\lan\mu_1^k,\phi\ran+
\lan\mu_2^k,\psi\ran\right)
=\lan\mu_1,\phi\ran+
\lan\mu_2,\psi\ran \ \ \ \text{ for all }\ (\phi,\psi)
\in C_c(\bb)\tim C(\bry).
\]
\begin{proof} Let $\cX$ denote the disjoint union of
$\bb$ and $\bry$, and note that $\cX$ has a natural metric,
for instance, the metric $d$ on $\cX$ given by
the formula
\[
d(\xi,\eta)=
\begin{cases}
\min\{d_1(\xi,\eta),\,1\} \ \ &\text{ if }\ \xi,\eta\in \bb,\\
\min\{d_2(\xi,\eta),\,1\} &\text{ if }\ \xi,\eta\in\bry,\\
1 &\text{ otherwise},
\end{cases}
\]
where $d_1$ is the given metric on $\bb$ and
$d_2$ is the Euclidean metric on $\R^n$; the truncation by $1$
ensures the triangle inequality across the two components
without changing the topology.
With this metric structure, $\cX$ is $\gs$-compact and locally compact. Note that $B\subset\cX$ is a Borel set if and only if
$B\cap \bb$ and $B\cap\bry$ are Borel sets in $\bb$ and $\bry$, respectively.
We define $f\colon \cX\to \R$ by
\[
f(\xi)=
\begin{cases}
L(\xi) \ &\text{ if }\ \xi\in\bb,\\
0 &\text{ if }\ \xi\in\bry,
\end{cases}\]
and set
\[
\cP_{\cX,R}:=\{\mu\in\cP_{\cX}\mid \lan\mu, f\ran\leq R\},
\]
where $\cP_\cX$ denotes the set of all Radon probability
measures on $\cX$.
Note that $f(\xi)\to+\infty$ as $\xi\to\infty$ in $\cX$,
and observe by Lemmas \ref{basic-lsc} and \ref{basic-cpt} that, in the topology of
weak convergence of measures,
the functional $\mu\mapsto \lan \mu,f\ran$ is lower semicontinuous on $\cP_{\cX}$, and
$\cP_{\cX,R}$ is compact.
For $(\mu_1,\mu_2)\in\cP^{\rD}$, if we put
\[
\tilde\mu(B):=\mu_1(B\cap\bb)+\mu_2(B\cap\bry)
\ \ \text{ for any Borel set }B\subset\cX,
\]
then $\tilde \mu$ defines a (unique) Radon probability
measure on $\cX$.
With this notation, it is easy to see that
\[
\lan\tilde \mu, f\ran=\lan\mu_1,L\ran
\ \ \ \text{ and } \ \ \
\cP_{\cX,R}=\{\tilde\mu\mid (\mu_1,\mu_2)\in\cP^\rD_R\}.
\]
Hence, we conclude that, in the topology of weak convergence
of measures, the functional $(\mu_1,\mu_2)\mapsto
\lan\mu_1,L\ran$
is lower semicontinuous on $\cP^\rD$ and the set
$\cP_R^{\rD}$ is compact.
\end{proof}
\begin{lem} \label{d-cpt}
Assume \eqref{F1}, \eqref{F2} and \eqref{L}.
Let $(M,z,\gl)\in(0,\,\infty)\tim\tt\tim[0,\,\infty)$
and let $\{\gl_j\}_{j\in\N}\subset[0,\,\infty)$ be a sequence converging to $\gl$. Let
$\{(\mu_1^j,\mu_2^j)\}_{j\in\N}$
be a sequence of measures such that $(\mu_1^j,\mu_2^j)\in\cP^\rD\cap
\cG^\rD(M,z,\gl_j)'$ for all $j\in\N$. Assume
that the sequence $\{\du{\mu_1^j, L}\}_{j\in\N}$ is convergent
and that $\{(\mu_1^j,\mu_2^j)\}_{j\in\N}$ converges to some
$(\mu_1^0,\mu_2^0)\in\cP^\rD$ weakly in the sense of measures.
\emph{(i)}\ If we set
\[\rho=\lim_{j\to\infty}\du{\mu_1^j,L}-\du{\mu_1^0,L},\]
then $(\rho,\mu_1^0,\mu_2^0)\in\cG^\rD(M,z,\gl)^\dag$.
\emph{(ii)}\ Assume in addition
that $(\rho,\mu_1^0,\mu_2^0)\in\cG^\rD(z,\gl)^\dag$.
If either $\cA$ is compact or $\mu_1^0\not=0$, then
there exists a pair $(\nu_1,\nu_2)\in\cP^\rD\cap
\cG^\rD(z,\gl)'$ of measures
such that $\,\lan\nu_1,L\ran=\rho+\du{\mu_1^0,L}\,$ and, for all $\, (\psi,\eta)\in C(\tt)\tim C(\bry)$,
\[
\lan\mu_1^{0},\psi\ran+\du{\mu_2^{0},\eta}
=\lan\nu_1,\psi\ran+\du{\nu_2,\eta}.
\]
\end{lem}
The proof of the lemma above is similar to that of Lemma \ref{sc-cpt}; we give it here
for completeness.
\begin{proof} Note first that the lower semicontinuity of
the functional $(\mu_1,\mu_2)\mapsto\lan\mu_1,L\ran$ on $\cP^\rD$,
as claimed by Lemma \ref{cpt-d-1}, implies that $\rho\geq 0$.
To check the property that
$(\rho,\mu_1^0,\mu_2^0)\in\cG^\rD(M,z,\gl)^\dag$,
let $f=(f_1,f_2)\in\cG^\rD(M,z,\gl)$, and select
$(\phi,\psi,u)\in\cF^\rD(\gl)$, $t>0$ and $\chi\in C(\tt)$
so that $f=(\phi-\gl u(z),\gl(\psi-u(z)))$ and $\phi=tL+\chi$.
Let $j\in\N$ and note that
$(\phi+(\gl_j-\gl)u,\psi,u)\in \cF^\rD(\gl_j)$.
Hence, if $j$ is large enough,
then $(\phi+(\gl_j-\gl)u,\psi,u)\in \cF^\rD(M, \gl_j)$ and we get
\[\begin{aligned}
0&\,\leq \lan\mu_1^j,\phi+(\gl_j-\gl)u-\gl_j u(z)\ran+
\du{\mu_2^j,\gl_j(\psi-u(z))}
\\&\,=t\lan\mu_1^j,L\ran+
\lan\mu_1^j,\chi+(\gl_j-\gl)u-\gl_j u(z)\ran
+\gl_j\du{\mu_2^j,\psi-u(z)}.
\end{aligned}\]
Sending $j\to\infty$ yields
\[
0\leq t(\rho+\du{\mu_1^0,L})
+\lan\mu_1^0,\chi-\gl u(z)\ran+\du{\mu_2^0,\gl(\psi-u(z))},
\]
which ensures that $(\rho,\mu_1^0,\mu_2^0)\in\cG^\rD(M,z,\gl)^\dag$,
proving assertion (i).
Next, we assume that $(\rho,\mu_1^0,\mu_2^0)\in\cG^\rD(z,\gl)^\dag$ and show the existence of $(\nu_1,\nu_2)$ having the properties
described in assertion (ii).
Since $(\rho,\mu_1^0,\mu_2^0)\in\cG^\rD(z,\gl)^\dag$, we have
\begin{equation}\label{d-cpt-1}
0\leq t\rho
+\lan\mu_1^0,tL+\chi-\gl u(z)\ran+\du{\mu_2^0,\gl(\psi-u(z))}
\end{equation}
for all $(\phi,\psi,u)\in\cF^\rD(\gl)$, where $\phi=tL+\chi$, $t>0$ and $\chi\in C(\tt)$.
If $\cA$ is compact, the weak convergence
of $\{(\mu_1^j,\mu_2^j)\}$ implies that $\rho=0$. Thus, in this case,
the pair $(\nu_1,\nu_2):=(\mu_1^0,\mu_2^0)$ has all the required properties.
Now, assume that $\cA$ is not compact and $\mu_1^0\not=0$.
Lemma \ref{mod} ensures that there is $\tilde\mu_1^0\in\cR_L^+$
such that $\tilde\mu_1^0(\bb)=\mu_1^0(\bb)$,
$\lan\tilde\mu_1^0,L\ran=\rho+\lan\mu_1^0,L\ran$ and
$\lan\tilde\mu_1^0,\psi\ran=\lan\mu_1^0,\psi\ran$ for all
$\psi\in C(\tt)$.
We define $(\nu_1,\nu_2)\in\cR_L^+\tim\cR_2^+$
by $(\nu_1,\nu_2)=(\tilde\mu_1^0,\mu_2^0)$.
It is obvious that $(\nu_1,\nu_2)\in \cP^\rD$,
$\lan\nu_1,L\ran=\rho+\du{\mu_1^0,L}$ and
$\lan\nu_1,\psi\ran+\du{\nu_2,\eta}=\lan\mu_1^0,\psi\ran+\du{\mu_2^0,\eta}$ for all
$(\psi,\eta)\in C(\tt)\tim C(\bry)$. These properties and inequality \eqref{d-cpt-1} imply that $(\nu_1,\nu_2)\in \cG^\rD(z,\gl)'$,
which completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm1-d0}]
Since $\cG^\rD(z,\gl)'\subset \cG^\rD(M,z,\gl)'$
for any $M>0$, Theorem \ref{thm1-d} yields
\begin{equation} \label{thm1-dp-1}
\gl v^\gl(z)\leq \inf_{(\mu_1,\mu_2)\in\cP^\rD\cap\cG^\rD(z,\gl)'}
\left(\du{\mu_1,L}+\gl\du{\mu_2,g}\right).
\end{equation}
To prove the reverse inequality of \eqref{thm1-dp-1},
in view of Theorem \ref{thm1-d}, we may select a sequence
$\{(\mu_1^k, \mu_2^k)\}_{k\in\N}$ of pairs of measures on $\bb$ and on $\bry$ so that for all $k\in\N$,
\begin{equation}\label{thm1-d-00}
\gl v^\gl(z)+\fr 1k>\lan\mu_1^k,L\ran+\gl\lan\mu_2^k,g\ran
\ \ \ \text{ and } \ \ \ (\mu_1^k,\mu_2^k)\in\cP^\rD\cap\cG^\rD(k,z,\gl)'.
\end{equation}
Thanks to Lemma \ref{cpt-d-1}, there exists a subsequence
$\{(\mu_1^{k_j},\mu_2^{k_j})\}_{j\in\N}$
of $\{(\mu_1^k,\mu_2^k)\}$ such that
$\{(\mu_1^{k_j},\mu_2^{k_j})\}_{j\in\N}$ converges to some
$(\mu_1^0,\mu_2^0)\in \cP^\rD$ weakly in the sense of
measures. Since $\{\du{\mu_1^k,L}\}_{k\in\N}$ is bounded, we may
assume that $\{\du{\mu_1^{k_j},L}\}_{j\in\N}$ is convergent.
We set $\rho=\lim_{j\to\infty}\du{\mu_1^{k_j},L}-\du{\mu_1^0,L}$, and
note that
\begin{equation}\label{thm1-dp-2}
\gl v^\gl(z)\geq \lim_{j\to\infty}\du{\mu_1^{k_j},L}+\gl\du{\mu_2^0,g}
=\rho+\du{\mu_1^0,L}+\gl\du{\mu_2^0,g},
\end{equation}
and, by Lemma \ref{d-cpt}, that $(\rho,\mu_1^0,\mu_2^0)\in\cG^\rD(m,z,\gl)^\dag$ for all $m\in\N$, which implies that $(\rho,\mu_1^0,\mu_2^0)\in\cG^\rD(z,\gl)^\dag$.
If either $\cA$ is compact or $\mu_1^0\not=0$,
then Lemma \ref{d-cpt} guarantees that there exists
$(\nu_1,\nu_2)\in\cP^\rD\cap\cG^\rD(z,\gl)'$ such that $\du{\nu_1,L}=\rho+\du{\mu_1^0,L}$ and $\du{\nu_1,\psi}+\du{\nu_2,\eta}=\du{\mu_1^0,\psi}+\du{\mu_2^0,\eta}$ for all $(\psi,\eta)\in C(\tt)\tim C(\bry)$.
These identities combined with \eqref{thm1-dp-2} yield
\[
\gl v^\gl(z)\geq \du{\nu_1,L}+\gl\du{\nu_2,g},
\]
which shows that \eqref{thm1-d0-0} holds, with $(\nu_1,\nu_2)$
being a minimizer of the right hand side of \eqref{thm1-d0-0}.
It remains to treat the case when $\cA$ is not compact and $\mu_1^0=0$.
To treat this case, we observe first that $z\in\bry$ and
$\mu_2^0=\gd_z$. Indeed, otherwise, there exists $\gz\in C^2(\tt)$ such that $\du{\mu_2^0,\gz-\gz(z)}\not=0$. By replacing
$\gz$ by a constant multiple of $\gz$,
we may assume that $\rho+\gl\du{\mu_2^0,\gz-\gz(z)}<0$. Noting that $(\gl\gz+L+F[\gz],\gz,\gz)\in\cF^\rD(\gl)$, we get
\[
0\leq \rho+\du{\mu_1^0,\gl\gz+L+F[\gz]-\gl\gz(z)}+\gl\du{\mu_2^0,\gz-\gz(z)}=\rho+\gl\du{\mu_2^0,\gz-\gz(z)},
\]
which contradicts the choice of $\gz$.
Thus, we have $z\in\bry$ and $\mu_2^0=\gd_z$.
Since
\[
\du{\mu_1^0,\phi-\gl u(z)}+\gl\du{\mu_2^0,\psi-u(z)}
=\gl(\psi-u)(z)\geq 0
\]
for any $(\phi,\psi,u)\in \cF^\rD(\gl)$, we have
$(0,\mu_1^0,\mu_2^0)\in\cG^\rD(z,\gl)^\dag$, which implies that
$(\mu_1^0,\mu_2^0)\in\cP^\rD\cap\cG^\rD(z,\gl)'$. Thus,
we find from \eqref{thm1-dp-1} and \eqref{thm1-dp-2} that
\eqref{thm1-d0-0} holds with $(\mu_1^0,\mu_2^0)$ as a minimizer of the right hand side of \eqref{thm1-d0-0}. The proof is complete.
\end{proof}
\subsection{Formula for the critical value}
According to Propositions \ref{prop1-d0} and \ref{prop1-d},
under the hypotheses \eqref{F1}, \eqref{F2}, \eqref{CPD},
\eqref{SLD} and \eqref{EC}, there exists a unique number
$c^*\in[0,\,\infty)$ such that, if $c^*=0$, then
(ED$_0$) has a solution in $C(\tt)$
and, if $c^*>0$, then (ES$_{c^*}$)
has a solution in $C(\tt)$.
We call this number $c^*$ the critical value for the
vanishing discount problem for \eqref{D} and denote it by $c_\rD$.
Then, owing to Proposition \ref{prop1-d0}, we have
\begin{equation}\label{lim-cD}
\lim_{\gl\to 0+}\gl v^\gl(x)=-c_\rD \ \ \ \text{ uniformly on }\ \tt,
\end{equation}
where $v^\gl$ is the solution of \eqref{D}.
The following proposition gives a formula for the critical value $c_\rD$ similar to \eqref{critical-value-s}.
\begin{prop}\label{prop-d-cv} Assume \eqref{F1},
\eqref{F2}, \eqref{L}, \eqref{CPD},
\eqref{SLD}, and \eqref{EC}. Then,
\begin{equation}\label{critical-value-d-1}
c_\rD=\min\{c\geq 0\mid \emph{(ED$_c$)}
\text{ has a solution in }C(\tt)\}.
\end{equation}
\end{prop}
\begin{proof} We set
\[
d:=\inf\{c\geq 0\mid \text{(ED$_c$)}
\text{ has a solution in }C(\tt)\}.
\]
By the definition of $c_\rD$ above, which is based on Propositions \ref{prop1-d} and \ref{prop1-d0}, we have
\begin{equation}\label{cD=lim}
\lim_{\gl\to 0}(-\gl v^\gl(x))=c_\rD \ \ \ \text{ uniformly on }\
\tt.
\end{equation}
Moreover, Proposition \ref{prop1-d}
ensures that, if $c_\rD=0$, then (ED$_{c_{\rD}}$) has a solution in $C(\tt)$ and, if $c_\rD>0$, then
(ES$_{c_\rD}$) has
a solution in $C(\tt)$.
Let $u\in C(\tt)$ be a solution of (ED$_0$) if $c_\rD=0$,
and of (ES$_{c_\rD}$) if
$c_\rD>0$. Note that, if $c_\rD>0$, then the function
$u-C$, with $C>0$ chosen so large that $u-C\leq g$ on $\bry$,
is a solution of (ED$_{c_\rD}$).
It is now clear that $d\leq c_\rD$.
It is enough to show that $d\geq c_\rD$.
Suppose to the contrary that $d<c_\rD$, and get a contradiction.
We choose $(v,c)\in C(\tt)\tim[d,c_\rD)$ so that
$v$ is a subsolution of (ED$_c$).
Fix any $C>0$ so that
$u-C\leq g$ on $\bry$. Since $c_\rD>0$, $u$ is a solution of (ES$_{c_\rD}$), and the function $w:=u-C$ is a supersolution of
(ED$_{c_\rD}$).
Fix $c_0\in(c,\,c_\rD)$ and select
$\gl>0$ sufficiently small so that $\gl v\leq c_0-c$
and $-\gl w\leq c_\rD-c_0$ on $\tt$, which means that
$v$ and $w$ are a subsolution and a supersolution, respectively, of
\eqref{D}, with $L$ replaced by $L+c_0$.
By \eqref{CPD}, we get $v\leq w=u-C$ on $\tt$, but this gives a
contradiction when $C$ is sufficiently large.
\end{proof}
Another formula for the critical value $c_\rD$
is stated in the next theorem.
Henceforth, we write
\[
\cP^\rD_1=\{\mu_1\mid (\mu_1,\mu_2)\in\cP^\rD
\cap\cG^\rD(0)'\} \ \ \text{ and }
\ \ \cP^\rD_{1,0}=\{\mu\in\cP^\rD_1\mid \mu(\tt\tim\cA)=1\}.
\]
A crucial remark here is that
\begin{equation}\label{equiDS}
\cP^\rD_{1,0}=\cP^\rS\cap\cG^\rS(0)'.
\end{equation}
Indeed, the argument below guarantees the validity
of the identity above. It is obvious that, for $\phi\in C(\bb)$,
the inclusion $(\phi,\psi,u)\in \cF^\rD(0)$ holds for some
$(\psi,u)\in C(\bry)\tim C(\tt)$ if and only if
$(\phi,u,u)\in\cF^\rD(0)$. Hence, for $\phi\in C(\bb)$,
we have $(\phi,0)\in\cG^\rD(0)$ if and only if
$(\phi,u,u)\in\cF^\rD(0)$ for some $u\in C(\tt)$,
which holds if and only if $(\phi,u)\in\cF^\rS(0)$
for some $u\in C(\tt)$, and, moreover, this holds if and only if
$\phi\in\cG^\rS(0)$. Using these observations,
it is easy to see that, for any $\mu\in \cP_L$,
$\mu\in\cP^\rS\cap\cG^\rS(0)'$ if and only if
$\mu\in\cG^\rS(0)'$, which holds if and only if
$\lan\mu,\phi\ran\geq 0$ \ for all $\, (\phi,0)\in\cG^\rD(0)$.
This is equivalent to the condition that
$(\mu,0)\in\cP^\rD\cap\cG^\rD(0)'$, which verifies
that \eqref{equiDS} is valid.
\begin{thm} \label{thm3-d}
Assume \eqref{F1}, \eqref{F2}, \eqref{CP},
\eqref{L}, \eqref{CPD}, \eqref{SLD}, and \eqref{EC}.
Then,
\begin{equation}\label{thm3-d-main}
-c_\rD
=\min_{\mu\in\cP^\rD_1}\lan\mu,L\ran.
\end{equation}
Furthermore, if $c_\rD>0$, then
\begin{equation}\label{thm3-d-main'}
-c_\rD=\min_{\mu\in\cP^\rS\cap\cG^\rS(0)'}
\lan\mu,L\ran.
\end{equation}
\end{thm}
\begin{proof}
We show first that \eqref{thm3-d-main} is valid.
Let $(\mu_1,\mu_2)\in\cP^\rD\cap\cG^\rD(0)'$ and $v\in C(\tt)$
be a solution of (ED$_{c_\rD}$).
The existence of
such a $v$ is guaranteed by Proposition \ref{prop-d-cv}.
Noting that $(L+c_\rD,v,v)\in \cF^\rD(0)$, we get
\[
0\leq \lan\mu_1,L+c_\rD\ran\leq \lan\mu_1,L\ran+c_\rD,
\]
which yields
\begin{equation}\label{thm3-d-mm}
-c_\rD\leq \inf_{\mu\in\cP^\rD_1}\lan\mu,L\ran.
\end{equation}
Pick a point $z\in\tt$ and a sequence $\{\gl_j\}_{j\in\N}
\subset(0,\,\infty)$ converging to zero.
For $j\in\N$, owing to Theorem \ref{thm1-d0},
we select $(\mu_1^j,\mu_2^j)\in \cP^{\rD}\cap
\cG^\rD(z,\gl_j)'$ so that
\begin{equation}\label{thm3-d-0}
\gl_j v^{\gl_j}(z)=\lan\mu_1^j,L\ran
+\gl_j\lan\mu_2^j,g\ran.
\end{equation}
Note that the sequence $\{\lan\mu_1^j,L\ran\}_{j\in\N}$ is bounded.
We apply Lemma \ref{cpt-d-1}, with $\gl=0$, to find that
there is a subsequence of $\{(\gl_{j},\mu_1^j,\mu_2^j)\}$,
which we denote again by the same symbol, such that the sequence $\{(\mu_1^j, \mu_2^j)\}_{j\in\N}$ converges weakly in the sense of measures
to some $(\mu_1^0,\mu_2^0)\in\cP^\rD$,
the sequence $\{\lan\mu_1^j,L\ran\}_{j\in\N}$ is convergent,
and
\[
\lim_{j\to\infty}\lan\mu_1^j,L\ran\geq \lan\mu_1^0,L\ran.
\]
Set
\[
\rho^0=\lim_{j\to\infty}
\du{\mu_1^j,L}-\lan\mu_1^0,L\ran,
\]
and note by Lemma \ref{d-cpt} that
$(\rho^0,\mu_1^0,\mu_2^0)\in \cG^\rD(0)^\dag$. Note also that
if $\cA$ is compact, then $\rho^0=0$.
From \eqref{thm3-d-0}
and \eqref{cD=lim}, we get
\begin{equation}\label{thm3-d-00}
-c_\rD=\rho^0+\lan\mu_1^0,L\ran.
\end{equation}
If $\rho^0=0$, then we have
\begin{equation}\label{thm3-d-m4}
-c_\rD=\lan\mu_1^0,L\ran,
\end{equation}
and $(\mu_1^0,\mu_2^0)\in\cP^\rD\cap\cG^\rD(0)'$.
These observations combined with
\eqref{thm3-d-mm} show that, if $\rho^0=0$, then \eqref{thm3-d-main} holds.
Assume instead that $\rho^0\not=0$. Then, $\rho^0>0$ and $\cA$ is not compact. Since $c_\rD\geq 0$ by Proposition \ref{prop1-d0},
we see from \eqref{thm3-d-00} that $\mu_1^0\not=0$. By Lemma \ref{d-cpt},
there exists $(\nu_1,\nu_2)\in\cP^\rD\cap\cG^\rD(0)'$ such that
$\rho^0+\du{\mu_1^0,L}=\du{\nu_1,L}$. This together with \eqref{thm3-d-00}
and \eqref{thm3-d-mm}
proves that, if $\rho^0\not=0$, then \eqref{thm3-d-main} holds.
Thus, we conclude that \eqref{thm3-d-main} is always valid.
Finally, we consider the case $c_\rD>0$ and prove
\eqref{thm3-d-main'}. We fix any minimizer
$\mu\in\cP^\rD_1$ of the minimization problem \eqref{thm3-d-main}, and show that
$\mu(\bb)=1$, which means $\mu\in\cP^\rD_{1,0}$ and completes
the proof of \eqref{thm3-d-main'}.
The condition for $\nu\in\cR_L^+$ to be in $\cP^\rD_1$ is described
by the inequalities $\nu(\bb)\leq 1$ and
\begin{equation}\label{thm3-d-m5}
0\leq t\lan\nu,L\ran+\lan\nu,\chi\ran
\end{equation}
for all $(tL+\chi,\psi,u)\in\cF^\rD(0)$, where $t>0$ and $\chi\in C(\tt)$.
Recall that $\lan\mu,L\ran=-c_\rD<0$. Suppose for the moment
that $\mu(\bb)<1$. By choosing $\gth>1$ so that
$\gth\mu(\bb)\leq 1$, we get a new measure
$\nu:=\gth\mu\in\cR_L^+$ that satisfies \eqref{thm3-d-m5},
which shows that $\nu\in\cP^\rD_1$,
and $\lan\nu,L\ran=-\gth c_\rD<-c_\rD$.
This is a contradiction,
which proves that $\mu(\bb)=1$.
\end{proof}
\begin{definition}We call $\hat\mu\in\cP_1^\rD$ a \emph{viscosity Mather measure} if it satisfies $\,\du{\hat\mu,L}=\inf_{\mu\in\cP_1^\rD}\du{\mu,L}$, and denote by $\cM^\rD$ the
set of all viscosity Mather measures $\mu \in \cP_1^\rD$.
For $(z,\gl)\in\tt\tim(0,\,\infty)$, we denote by $\cM^{\rD}(z,\gl)$ the set of all
measures $(\hat\mu_1,\hat\mu_2)\in\cP^\rD\cap\cG^{\rD}(z,\gl)'$
that satisfies
\[
\lan \hat\mu_1,\,L\ran+\gl\lan\hat\mu_2,\,g\ran
=\inf_{(\mu_1,\mu_2)\in\cP^\rD\cap\cG^{\rD}(z,\gl)'}
\,\left(\lan \mu_1,\,L\ran+\gl\lan\mu_2,\,g\ran\right),
\]
and call $\gl^{-1}(\mu_1,\mu_2)$, with $(\mu_1,\mu_2)\in\cM^{\rD}(z,\gl)$,
a \emph{viscosity Green measure} associated with \eqref{D}.
\end{definition}
\subsection{Convergence with vanishing discount}
\begin{thm}\label{thm2-d}
Assume \eqref{F1}, \eqref{F2}, \eqref{L}, \eqref{CP}, \eqref{CPD},
\eqref{SLD} and \eqref{EC}.
For each $\gl>0$, let $v^\gl\in C(\tt)$ be
the unique solution of \eqref{D}. Then,
the family $\{v^\gl+\gl^{-1}c_{\rD}\}_{\gl>0}$ converges,
as $\gl\to 0$, to a function $u$ in $C(\tt)$. Furthermore,
the function $u$ is a solution of \emph{(ED$_{0}$)}
or \emph{(ES$_{c_\rD}$)},
if $c_\rD=0$ or $c_\rD>0$, respectively.
\end{thm}
\begin{proof} By Propositions \ref{prop1-d0} and \ref{prop1-d}, we find that there is a solution $v\in C(\tt)$ of (ED$_0$)
or (ES$_{c_\rD}$), respectively, if
$c_\rD=0$ or $c_\rD>0$. Fix such a function $v$, and argue as in the proof of Proposition \ref{prop1-d}, to show that
$\{v^\gl+\gl^{-1}c_\rD\}_{\gl>0}$ is uniformly bounded on $\tt$. Indeed, if $c_\rD=0$, then the functions
$v+\|v\|_{C(\tt)}$ and $v-\|v\|_{C(\tt)}$ are, respectively, a supersolution and a subsolution of
\eqref{D} for any $\gl>0$, and \eqref{CPD} implies that
\[
v-\|v\|_{C(\tt)}\leq v^\gl\leq v+\|v\|_{C(\tt)} \ \ \ \text{ on }\ \tt\ \text{ for any }\ \gl>0,
\]
and, hence, the uniform boundedness of $\{v^\gl\}_{\gl>0}$
on $\tt$
in the case $c_\rD=0$. If $c_\rD>0$ and if $\gl>0$
is sufficiently small, then we observe that the functions
$v+\|v\|_{C(\tt)}-\gl^{-1}c_\rD$ and $v-\|v\|_{C(\tt)}-\gl^{-1}c_\rD$ are a supersolution and a subsolution of \eqref{D} and, therefore, that
\[
v-\|v\|_{C(\tt)}-\gl^{-1}c_\rD
\leq v^\gl\leq v+\|v\|_{C(\tt)}-\gl^{-1}c_\rD,
\]
which shows the uniform boundedness of $\{v^\gl+\gl^{-1}c_\rD\}_{\gl>0}$ on $\tt$.
Thus, in view of \eqref{EC},
the family $\{v^\gl+\gl^{-1}c_\rD\}_{\gl>0}$ is relatively compact in $C(\tt)$.
We denote by $\cU$ the set of accumulation points in $C(\tt)$
of $\{v^\gl+\gl^{-1}c_\rD\}$ as $\gl\to 0+$.
The relative compactness in $C(\tt)$ of the family $\{v^\gl+\gl^{-1}c_\rD\}_{\gl>0}$ ensures that $\cU\not=\emptyset$.
In order to prove the convergence, as $\gl\to 0$, of the whole family
$\{v^\gl+\gl^{-1}c_\rD\}_{\gl>0}$,
it is enough to show that $\cU$ has a unique element.
Let $v,w\in\cU$, and we demonstrate that $v=w$.
For this, we select sequences $\{\gl_j\}_{j\in\N}$
and $\{\gd_j\}_{j\in\N}$ of positive numbers converging
to zero such that
\[
\lim_{j\to\infty}v^{\gl_j}+\gl_j^{-1}c_\rD=v\ \ \text{ and } \ \
\lim_{j\to\infty}v^{\gd_j}+\gd_j^{-1}c_\rD=w \ \ \ \text{ in }\ C(\tt).
\]
As in the last part of the proof of Proposition \ref{prop1-d},
we deduce that $v$ and $w$ are solutions of
(ED$_0$) or
(ES$_{c_\rD}$) if $c_\rD=0$ or $c_\rD>0$, respectively.
Next, fix a point $z\in\tt$ and, owing to Theorem \ref{thm1-d0}, select $(\mu_1^j,\mu_2^j)\in\cM^\rD(z,\gl_j)$ for every $j\in\N$
so that
\begin{equation}\label{thm3-d-1}
\gl_j v^{\gl_j}(z)=\lan\mu_1^j,L\ran+\gl_j\lan\mu_2^j,g\ran \ \ \ \text{ for all }\ j\in\N.
\end{equation}
Arguing as
in the proof of Theorem \ref{thm3-d}, we may assume,
after passing to a subsequence if necessary, that there is
$(\mu_1,\mu_2)\in\cP^\rD\cap \cG^\rD(0)'$ such that
\begin{align}\label{thm3-d-2}
&\lim_{j\to\infty}\lan\mu_1^j,L\ran=
\lan\mu_1,L\ran=-c_\rD,
\\&\lim_{j\to\infty}\lan\mu_1^j,\psi\ran=\lan\mu_1,\psi\ran
\ \ \ \text{ for all }\ \psi\in C(\tt),\label{thm3-d-3}
\\&\lim_{j\to\infty}\lan\mu_2^j,h\ran=\lan\mu_2,h\ran
\ \ \ \text{ for all }\ h\in C(\bry).\label{thm3-d-4}
\end{align}
Now, $w$ is a solution of either (ED$_{c_\rD}$)
or (ES$_{c_\rD}$),
and thus, we have
$(L+c_\rD+\gl_jw,w,w)\in \cF^\rD(\gl_j)$ for all $j\in\N$. Also,
we have
$(L-\gd_jv^{\gd_j},v^{\gd_j},v^{\gd_j})\in\cF^\rD(0)$
for all $j\in\N$. Hence, we get
\begin{align}
&0\leq \lan\mu_1^j,L+c_\rD+\gl_j w-\gl_jw(z)\ran
+\gl_j\lan \mu_2^j,w-w(z)\ran,\label{thm3-d-5}
\\&0\leq \lan\mu_1,L-\gd_j v^{\gd_j}\ran.
\label{thm3-d-6}
\end{align}
Sending $j\to\infty$ in \eqref{thm3-d-5} and using \eqref{thm3-d-2}, we obtain
\[
0\leq \lan\mu_1,L\ran+c_\rD\mu_1(\bb)
=c_\rD(-1+\mu_1(\bb))\leq 0,
\]
which shows that
\begin{equation}\label{thm3-d-7}
c_\rD(-1+\mu_1(\bb))=0.
\end{equation}
Combining this, \eqref{thm3-d-2}
and \eqref{thm3-d-6}, we compute that for any $j\in\N$,
\[
0\leq -c_\rD+\lan \mu_1,-\gd_jv^{\gd_j}\ran
=-\lan\mu_1,c_\rD+\gd_j v^{\gd_j}\ran,
\]
which reads
\[
\lan\mu_1,v^{\gd_j}+\gd_j^{-1}c_\rD\ran\leq 0.
\]
Hence, in the limit $j\to\infty$,
\begin{equation}\label{thm3-d-8}
\lan\mu_1,w\ran\leq 0.
\end{equation}
Moreover, combining \eqref{thm3-d-1},
\eqref{thm3-d-5} and \eqref{thm3-d-7} gives
\[
0\leq \gl_j\left(v^{\gl_j}(z)+
\gl_j^{-1}c_\rD
+\lan\mu_1^j,w\ran
-w(z)+\lan\mu_2^j,w-g\ran\right),
\]
from which, after dividing by $\gl_j$ and taking the limit
$j\to\infty$, we get
\[
0\leq v(z)+\lan\mu_1,w\ran-w(z).
\]
This and \eqref{thm3-d-8} show that $w(z)\leq v(z)$.
Because $v,w\in\cU$ and $z\in\tt$ are arbitrary, we find that
$\cU$ is a singleton.
\end{proof}
\section{Neumann problem} \label{sec-n}
We consider the Neumann boundary problem in this section.
As above, we relabel \eqref{DP} as \eqref{N}, and \eqref{E} as \eqref{EN}.
The letter N in \eqref{N} and \eqref{EN} indicates ``Neumann''.
Let $\gamma$ be a continuous
vector field on $\pl\gO$ pointing outward from $\gO$, and $g\in C(\pl\gO)$ be a given
function. The Neumann problems of interest are
\[\tag{N$_{\gl}$}\label{N}
\begin{cases}
\gl u+F[u]= 0 \ \ &\text{ in }\ \gO, \\[3pt]
\gamma\cdot Du=g \ \ &\text{ on }\ \bry,
\end{cases}
\]
for $\gl>0$, and
\begin{equation} \tag{EN}\label{EN}\begin{cases}
F[u]=c \ \ &\text{ in } \gO,\\
\gamma\cdot Du=g \ \ &\text{ on }\pl\gO.
\end{cases}\end{equation}
As before, we refer to this problem, with a given constant $c$, as (EN$_c$).
The function $g\in C(\bry)$ is fixed
throughout this section. We need to consider the Neumann
boundary problem with general datum $(\gl,\phi,\psi)
\in[0,\,\infty)\tim \Phi^+\tim C(\bry)$:
\begin{equation}\tag{N$_{\gl,\phi,\psi}$}\label{N'}
\begin{cases}
\gl u+F_\phi[u]=0 \ \ \text{ in }\ \gO,&\\[3pt]
\gamma\cdot Du=\psi \ \ \ \text{ on }\ \bry, &
\end{cases}
\end{equation}
In addition to \eqref{F1}, \eqref{F2}, \eqref{L} and \eqref{EC},
we introduce a few
assumptions proper to the Neumann boundary condition.
\[\tag{OG}\label{OG}
\left\{\text{
\begin{minipage}{0.85\textwidth}
$\gO$ has $C^1$-boundary and $\gamma$ is oblique to
$\pl\gO$ in the sense that $\gamma\cdot
\mathbf{n}>0$ on $\pl\gO$,
where $\mathbf{n}$ denotes the outward unit normal to $\gO$.
\end{minipage}
}\right.
\]
\[\tag{CPN}\label{CPN}
\left\{\text{
\begin{minipage}{0.85\textwidth}
For any $\gl>0$ and $(\phi,\psi)\in\Psi^+$, the comparison principle holds for \eqref{N'},
that is, if $v\in C(\tt)$ and $w\in C(\tt)$ are a subsolution and a supersolution of \eqref{N'}, respectively,
then the inequality $\,v\leq w\,$ holds on $\tt$.
\end{minipage}
}\right.
\]
The following is a local version of the hypothesis above.
\[\tag{CPN$_{\rm loc}$}\label{CPN'}
\left\{\text{
\begin{minipage}{0.82\textwidth}
For any $\gl>0$ and $(\phi,\psi)\in\Psi^+$,
the localized comparison principle holds for \eqref{N'},
that is, for any relative open subset $V$ of $\tt$, if
$v\in C(\lbar V)$ and $w\in C(\lbar V)$ are
a subsolution and a supersolution, respectively,
of
\[
\gl u+F_\phi[u]=0 \ \ \text{ in }\ V\cap \gO
\quad\text{ and }\quad
\gamma\cdot Du=\psi \ \ \ \text{ on }\ V\cap\pl\gO,
\]
and if $\,v\leq w\,$ on $\gO\cap\pl V$, then the inequality $\,v\leq w\,$ holds on $\lbar V$.
\end{minipage}
}\right.
\]
\[\tag{SLN}\label{SLN}
\text{
\begin{minipage}{0.85\textwidth}
For any $\gl > 0$, \eqref{N} admits a solution $v^\gl \in C(\tt)$.
\end{minipage}
}.
\]
Condition \eqref{OG} guarantees that
there exists a $C^\infty$-function $\gz$ on $\lbar\gO$
such that
\begin{equation}\label{gz}
\gamma\cdot D\gz\geq 1 \ \ \ \text{ on }\ \pl\gO,
\end{equation}
and that any classical subsolution (resp., supersolution) of
\begin{equation}\label{eq-loc}
\gl u+F_\phi[u]=0 \ \ \text{ in }\ V\cap \gO
\quad\text{ and }\quad
\gamma\cdot Du=\psi \ \ \ \text{ on }\ V\cap\pl\gO,
\end{equation}
where $V$ is an open subset of $\R^n$,
is also a viscosity subsolution (resp., supersolution) of
\eqref{eq-loc}.
Note that \eqref{CPN'} implies \eqref{CPN}. To see this,
one may select $V$ to be $\R^n$ in \eqref{CPN'}.
As in the cases of the boundary conditions treated above, by a rescaling
argument, one sees that the condition
\eqref{CPN} (resp., \eqref{CPN'}), only
with those $\phi=L+\chi$, where
$\chi\in C(\tt)$ is arbitrary, implies the full condition
\eqref{CPN} (resp., \eqref{CPN'}).
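To indicate the rescaling, which we sketch under the reading of $F_\phi$ used in \eqref{lem-n-3-1} below: if $u$ is a subsolution (resp., supersolution) of \eqref{N'}, with $\phi=tL+\chi$ for some $t>0$ and $\chi\in C(\tt)$, then $v:=t^{-1}u$ is a subsolution (resp., supersolution) of
\[
\gl v+F[v]=t^{-1}\chi \ \ \text{ in }\ \gO \ \ \ \text{ and } \ \ \
\gamma\cdot Dv=t^{-1}\psi \ \ \text{ on }\ \bry,
\]
that is, of \eqref{N'} with $(\phi,\psi)$ replaced by $(L+t^{-1}\chi,\,t^{-1}\psi)$, and the restricted comparison principle, applied to the latter problem, yields the comparison for the original one.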
\begin{prop} \label{prop1-n}
Assume \eqref{F1}, \eqref{F2}, \eqref{OG}, \eqref{CPN}, \eqref{SLN} and \eqref{EC}.
Then there exists a solution $(u,c)\in C(\tt)\times\R$ of \eqref{EN},
and such a constant $c\in \R$ is unique.
Moreover, if $v^\gl\in C(\tt)$ is a solution of \eqref{N}, then
\begin{equation}\label{c-n}
c=-\lim_{\gl \to 0} \gl v^\gl(x) \quad \text{ uniformly on }\ \tt.
\end{equation}
\end{prop}
We denote by $c_{\rN}$ the constant given by the proposition above and call it the critical value of \eqref{EN}.
\begin{proof}[Outline of proof] As noted above, there exists
a function $\gz\in C^2(\tt)$ satisfying \eqref{gz}. We may as well
assume that $\gz\geq 0$ on $\tt$.
Choose two positive constants $M_1$ and $M_2$
so that $|g|\leq M_1$ on $\bry$ and $|F[\pm M_1\gz]|\leq M_2$
on $\tt$, and observe that, for any $\gl>0$,
the functions $M_1\gz+\gl^{-1}M_2$ and $-M_1\gz-\gl^{-1}M_2$
are a supersolution and a subsolution of \eqref{N}. By
\eqref{CPN}, we get $\,|v^\gl|\leq M_1\gz+\gl^{-1}M_2$ on $\tt$.
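(Indeed, assuming, as the choice of $M_2$ suggests, that $F[u]$ is unchanged when a constant is added to $u$, the supersolution property follows from the computation
\[
\gl\left(M_1\gz+\gl^{-1}M_2\right)+F\left[M_1\gz+\gl^{-1}M_2\right]
=\gl M_1\gz+M_2+F[M_1\gz]\geq \gl M_1\gz\geq 0 \ \ \text{ in }\ \gO,
\]
together with $\gamma\cdot D(M_1\gz+\gl^{-1}M_2)=M_1\gamma\cdot D\gz\geq M_1\geq g$ on $\bry$; the subsolution property of $-M_1\gz-\gl^{-1}M_2$ is checked similarly.)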
This shows that $\{\gl v^\gl\}_{\gl>0}$ is uniformly bounded on $\tt$. Using \eqref{EC}, we find that the family
$\{v^\gl-m^\gl\}_{\gl>0}$, where $m^\gl:=\min_{\tt}v^\gl$,
is relatively compact in $C(\tt)$, while, for each $\gl>0$, the function
$u:=v^\gl-m^\gl$ is a solution of $\gl u+\gl m^\gl
+F[u]=0$ in $\gO$ and $\gamma\cdot Du=g$ on $\bry$.
Sending $\gl\to 0$, along an appropriate sequence, yields a solution $(v,c)\in C(\tt)\tim \R$ of \eqref{EN}. The uniqueness
of the constant $c$ is a consequence of \eqref{CPN}.
\end{proof}
\subsection{Representation formulas}
Let $(z,\gl)\in\tt\tim[0,\,\infty)$.
We define the sets $\cF^{\rN}(\gl)\subset \Psi^+\tim C(\tt)$ and $\cG^{\rN}(z,\gl)\subset C(\bb)\tim C(\bry)$ by
\begin{align*}
&\cF^{\rN}(\gl):=\left\{(\phi,\psi,u)\in \Psi^+\tim C(\tt)
\mid u \ \text{is a subsolution of} \ \eqref{N'}\right\}, \\
&\cG^{\rN}(z,\gl):=\left\{(\phi-\gl u(z), \psi)\mid (\phi,\psi,u)\in\cF^{\rN}(\gl)\right\}.
\end{align*}
\begin{lem} \label{thm2-sec3+0}
Assume \eqref{F1}, \eqref{F2}, \eqref{OG} and \eqref{CPN'}.
Let $(z,\gl)\in\tt\tim[0,\,\infty)$.
The set $\cG^{\rN}(z,\gl)$ is a convex cone in $C(\bb)\tim C(\bry)$ with vertex at the origin.
\end{lem}
The proof is along the same lines as that of \cite[Lemma 2.8]{IsMtTr1}, with the help of the following lemma, and we omit it here.
\begin{lem}\label{convexN} Assume \eqref{F1}, \eqref{F2},
\eqref{OG} and
\eqref{CPN'}. Let $\gl\in[0,\,\infty)$ and let
$(\phi_i,\psi_i, u_i)\in\cF^{\rN}(\gl)$, with $i=1,2$.
For $t \in (0,1)$, set
$\phi^t=t\phi_1+(1-t)\phi_2$ on $\bb$,
$\psi^t=t\psi_1+(1-t)\psi_2$ on
$\bry$ and $u^t=t u_1+(1-t)u_2$ on $\tt$.
Then, $(\phi^t,\psi^t,u^t)\in\cF^\rN(\gl)$.
\end{lem}
\begin{proof} Note first that,
given $(\phi,\psi,u)\in\Psi^+\tim C(\tt)$,
$(\phi,\psi,u)\in\cF^\rN(\gl)$ if and
only if $(\phi-\gl u,\psi,u)\in\cF^\rN(0)$.
It is then easily seen that our claim follows from the special case $\gl=0$. Thus we may assume henceforth that $\gl=0$.
Let $\eta\in C^2(\tt)$ and $z\in\tt$ be such that $u^t-\eta$ takes a strict maximum at
$z$. If $z\in\gO$, then the proof of \cite[Lemma 2.8]{IsMtTr1} ensures that
$F_{\phi^t}[\eta](z)\leq 0$.
Thus, we only need to show that, if $z\in\pl\gO$, then we have either
$F_{\phi^t}[\eta](z)\leq 0\,$ or $\,\gamma(z)\cdot D\eta(z)\leq\psi^t(z)$.
To do this, we assume that $z\in\pl\gO$,
suppose to the contrary that $F_{\phi^t}[\eta](z)>0$ and
$\,\gamma(z)\cdot D\eta(z)>\psi^t(z)$, and obtain a contradiction.
We choose $\ep>0$ and an open neighborhood $V$, in $\R^n$,
of $z$ so that
\begin{equation}\label{super-eta}
F_{\phi^t}[\eta](x)>\ep \ \ \text{ in } V\cap\lbar\gO \ \
\text{ and } \ \
\gamma(x)\cdot D\eta(x)\geq \psi^t(x) \ \
\text{ for all }x\in V\cap\pl\gO.
\end{equation}
We set
\[
w(x)=-t^{-1}(1-t)u_2(x)+t^{-1}\eta(x) \ \ \text{ for }x\in\tt,
\]
and claim that $w$ is a supersolution of
\begin{equation}\label{superVN}
\begin{cases}
F_{\phi_1}[w]=\ep \ \ \ \ \text{ in }V\cap \gO, &\\[3pt]
\gamma\cdot Dw=\psi_1 \ \ \text{ on }V\cap\pl\gO. &
\end{cases}\end{equation}
Once this is done, we apply the comparison principle
\eqref{CPN'} to $u_1$ and $w$,
to obtain
\[
\sup_{V\cap\lbar\gO}(u_1-w)\leq \sup_{\gO\cap\pl V}(u_1-w).
\]
This gives a contradiction since $t(u_1-w)=u^t-\eta$ attains a strict maximum at $z\in V$ on $V\cap\lbar\gO$.
To prove the viscosity property \eqref{superVN} of $w$,
we fix $\xi\in C^2(V\cap\lbar\gO)$
and $y\in V\cap\lbar\gO$, and assume that $w-\xi$
takes a minimum at $y$.
An immediate consequence of this is that
the function $t(1-t)^{-1}(w-\xi)$ has a minimum at $y$ and, thus, the function
\[
u_2-(1-t)^{-1}\eta+t(1-t)^{-1}\xi
\]
has a maximum at $y$. By the viscosity property of $u_2$, we get either
\begin{equation}\label{eta-xi1}
F_{\phi_2}[(1-t)^{-1}\eta-t(1-t)^{-1}\xi](y)\leq 0,
\end{equation}
or
\begin{equation}\label{eta-xi2}
y\in\pl\gO \ \ \text{ and }\ \ \gamma(y)\cdot
D((1-t)^{-1}\eta-t(1-t)^{-1}\xi)(y)\leq \psi_2(y).
\end{equation}
If \eqref{eta-xi1} holds, then, using \eqref{super-eta},
we get
\[
\ep\leq F_{\phi^t}[\eta](y)
\leq tF_{\phi_1}[\xi](y)+(1-t)F_{\phi_2}[(1-t)^{-1}\eta-t(1-t)^{-1}\xi](y)
\leq tF_{\phi_1}[\xi](y).
\]
On the other hand, if \eqref{eta-xi2} holds, then, using \eqref{super-eta}, we get
\[\begin{aligned}
\gamma(y)\cdot D\xi(y)
&\geq t^{-1}\gamma(y)\cdot D\eta(y)-t^{-1}(1-t)\psi_2(y)
\\&\geq t^{-1}\psi^t(y)-t^{-1}(1-t)\psi_2(y)=\psi_1(y).
\end{aligned}\]
These show, with the help of \eqref{OG},
that $w$ is a (viscosity) supersolution of \eqref{superVN}, which completes the proof.
\end{proof}
\begin{lem}\label{lem-n-3} Assume \eqref{F1}, \eqref{F2}, \eqref{OG} and
\eqref{CPN}. Let $(\phi,\psi,u)\in\cF^\rN(\gl)$, with
$\phi=tL+\chi$ for some $t>0$ and $\chi\in C(\tt)$.
Then, there exists a constant $C>0$, depending only on $\gO$ and $F$, such that
\[
\gl u\leq \|\chi\|_{C(\tt)}+(1+\gl)C\|\psi\|_{C(\bry)} \ \ \ \text{ on }\ \tt.
\]
\end{lem}
\begin{proof} Let $\gz\in C^2(\tt)$ be a function that satisfies
\[
\gamma\cdot D\gz\geq 1 \ \ \text{ on }\ \bry \ \ \ \text{ and }
\ \ \gz\geq 0 \ \ \text{ on } \tt.
\]
As before, we observe that $v:=t^{-1}u$ is a subsolution of
\begin{equation}\label{lem-n-3-1}
\gl v+F[v]=t^{-1}\chi \ \ \text{ in }\gO \ \ \ \text{ and } \ \ \
\gamma\cdot Dv=t^{-1}\psi \ \ \text{ on }\ \bry.
\end{equation}
We set
\[
C_1:=\max_{\tt}\gz \ \ \ \text{ and } \ \ \
C_2:=\max_{(x,t)\in\tt\tim[0,\,1]}|F[t\gz](x)|.
\]
We set $w:=A\gz+B$ for constants $A\geq 0$ and $B\geq 0$,
to be fixed in a moment,
and note that
\[
\gl w+F[w]\geq \gl B+F[A\gz] \ \ \text{ in }\ \gO,
\]
and
\[
\gamma\cdot Dw=A\gamma\cdot D\gz\geq A \ \ \text{ on }\ \bry.
\]
Now, observe that if $A>1$, then the convexity of $F$
yields
\[
-C_2\leq F[\gz]\leq A^{-1}F[A\gz]+(1-A^{-1})F[0]
\leq A^{-1}F[A\gz]+C_2,
\]
and, hence,
\[
F[A\gz]\geq -2AC_2 \ \ \text{ on }\ \tt,
\]
which is obviously true also in the case when $0<A\leq 1$.
Thus, putting
\[
A:=t^{-1}\|\psi\|_{C(\bry)}\quad \text{ and } \quad
B:=\gl^{-1}(t^{-1}\|\chi\|_{C(\tt)}+2AC_2),
\]
we see that $w$ is a supersolution
of \eqref{lem-n-3-1}. Then, \eqref{CPN} implies that
$v\leq w$ on $\tt$, which reads
\[
\gl u\leq \gl t w\leq
\gl t(AC_1+B)=\|\chi\|_{C(\tt)}
+(\gl C_1+2 C_2)\|\psi\|_{C(\bry)},
\]
which completes the proof.
\end{proof}
We set
\[
\cP_1^\rN:=\cP_{\bb} \ \ \ \text{ and } \ \ \
\cP^\rN:=\cP_1^\rN\tim\cR_{\bry}^+
\]
and, for a compact subset $K$ of $\cA$,
\[
\cP_{1,K}^\rN:=\left\{\mu_1\in\cP_{\bb}
\mid\mu_1(\tt\times K)=1\right\}.
\]
Let $\cG^\rN(z,\gl)'$ denote the dual cone of
$\cG^{\rN}(z,\gl)$ in $\cR_L\tim\cR_{\bry}$, that is,
\[
\cG^\rN(z,\gl)':=\left\{(\mu_1,\mu_2)\in
\cR_{L}\tim\cR_{\bry}\mid \lan\mu_1,\phi\ran
+\lan\mu_2,\psi\ran\ge0, \
\ \text{for all } (\phi,\psi)\in\cG^{\rN}(z,\gl)\right\}.
\]
We use as well the notation: $\cF^\rN(0)=\cF^\rN(z,0)$,
$\cG^\rN(0)=\cG^\rN(z,0)$, and
$\cG^\rN(0)'=\cG^\rN(z,0)'$.
\begin{thm} \label{thm1-n}
Assume \eqref{F1}, \eqref{F2}, \eqref{L}, \eqref{OG} and \eqref{CPN'}.
\ \emph{(i)}\ Let $(z,\gl)\in\tt\tim(0,\,\infty)$.
If $v^\gl\in C(\tt)$ is a
solution of \eqref{N}, then
\begin{equation} \label{n-min}
\gl v^\gl(z)=\min_{(\mu_1,\mu_2)\in \cP^\rN\cap
\cG^{\rN}(z,\gl)'}\,\left(
\lan \mu_1,\,L\ran+\lan\mu_2,\,g\ran\right).
\end{equation}
\emph{(ii)} Assume, in addition,
\eqref{SLN} and \eqref{EC}. Then
\begin{equation}\label{n-min0}
-c_\rN=\min_{(\mu_1,\mu_2)\in \cP^\rN\cap \cG^{\rN}(0)'}\,\left(
\lan \mu_1,\,L\ran+\lan\mu_2,\,g\ran\right).
\end{equation}
\end{thm}
\begin{definition}
We denote the set of minimizers of \eqref{n-min}
and that of \eqref{n-min0}
by $\cM^{\rN}(z,\gl)$
and $\cM^{\rN}(0)$, respectively.
We call any $(\mu_1,\mu_2) \in \cM^{\rN}(0)$ a viscosity Mather measure associated with \eqref{EN},
and any
$\gl^{-1}(\mu_1,\mu_2)$, with $(\mu_1,\mu_2) \in \cM^{\rN}(z,\gl)$ and $\gl>0$, a viscosity Green measure
associated with \eqref{N}.
\end{definition}
For $M>0$, $(z,\gl)\in\tt\tim (0,\infty)$,
we define $\cF^\rN(M,\gl)$, $\cG^\rN(M,z,\gl)$ and
$\cG^\rN(M,z,\gl)'$ by
\[
\begin{aligned}
\cF^\rN(M,\gl):=&\,\cF^\rN(\gl)\cap(\Psi^+(M)\tim C(\tt)),&
\\\cG^\rN(M,z,\gl):=&\,\{(\phi-\gl u(z),\psi)\mid (\phi,\psi,u)\in\cF^\rN(M,\gl)\},&
\\\cG^\rN(M,z,\gl)':=&\,\{(\mu_1,\mu_2)\in\cR_L\tim\cR_{\bry}
\mid
\lan\mu_1,f\ran+\lan\mu_2,\psi\ran\geq 0&
\\&& \kern-100pt \text{ for all }\,(f,\psi)\in\cG^\rN(M,z,\gl)\}.
\end{aligned}
\]
It is easily seen by Lemma \ref{thm2-sec3+0} that
$\cG^\rN(M,z,\gl)$ is a convex cone in $C(\bb)\tim C(\bry)$
with vertex at the origin.
\begin{thm} \label{thm1-n1}
Assume \eqref{F1}, \eqref{F2}, \eqref{OG}, \eqref{L}, and \eqref{CPN'}.
Let $z\in\tt$ and $\gl,\, M\in(0,\,\infty)$.
If $v^\gl\in C(\tt)$ is a
solution of \eqref{N}, then
\begin{equation} \label{n-inf}
\gl v^\gl(z)\geq
\inf_{(\mu_1,\mu_2)\in \cP^\rN\cap\cG^{\rN}(M,z,\gl)'}
\,\left(\lan \mu_1,\,L\ran+\lan\mu_2,\,g\ran\right).
\end{equation}
\end{thm}
\begin{proof} Let $v^\gl\in C(\tt)$ be a solution of \eqref{N}.
We fix any $\ep\in(0,\,1)$, and show that there exist
$R>0$ and a compact subset $K$ of $\cA$ such that
\begin{equation} \label{n-inf-1}
\gl v^\gl(z)+\ep\geq \inf_{(\mu_1,\mu_2)\in \cP^\rN_{K,R}\cap
\cG^\rN(M,z,\gl)'}(\lan\mu_1,L\ran+\lan\mu_2,g\ran),
\end{equation}
where
\[
\cP^\rN_{K,R}
:=\left\{(\mu_1,\mu_2)\in\cP^\rN \mid \mu_1(\tt\tim K)=1,\, \mu_2(\bry)\leq R\right\}.
\]
Since $\cP^\rN_{K,R}\subset \cP^\rN$, it follows from \eqref{n-inf-1} that
\[
\gl v^\gl(z)+
\ep\geq \inf_{(\mu_1,\mu_2) \in \cP^\rN\cap\cG^\rN(M,z,\gl)'}(\lan\mu_1,L\ran+\lan\mu_2,g\ran),
\]
which implies that \eqref{n-inf} is valid.
We choose a constant $A>0$ so that $g+A\geq 1$ on $\bry$ and,
thanks to \eqref{OG} (see also \eqref{gz}),
a function $\eta\in C^2(\tt)$ so that
\begin{equation}\label{n-inf-2}
\gamma\cdot D\eta\leq -A \ \ \text{ on }\ \bry
\quad\text{ and } \quad\eta(z)=0.
\end{equation}
Then, we choose constants $B>0$ and $R>0$ so that
\begin{equation}\label{n-inf-3}
\gl\eta+F[\eta]\leq B \ \ \ \text{ on }\tt
\quad\text{ and }\quad
R\geq 1+B+\gl v^\gl(z).
\end{equation}
Let $K$ be a compact subset of $\cA$ to be specified later, and set
\[
I_{K,R}:=\inf_{(\mu_1,\mu_2)\in\cP^\rN_{K,R}\cap\cG^\rN(M,z,\gl)'}(\lan\mu_1,L\ran+\lan\mu_2,g\ran).
\]
Since $\cG^{\rN}(M,z,\gl)$ is a convex cone with vertex at the origin,
we deduce that
\[
\inf_{(f,\psi)\in\cG^{\rN}(M,z,\gl)}
\left(\lan \mu_1,\,f\ran+\lan\mu_2,\psi\ran\right)=
\begin{cases} 0 \ \ &\text{ if }\ (\mu_1,\mu_2)\in\cP^\rN_{K,R}
\cap\cG^{\rN}(M,z,\gl)', \\
-\infty &\text{ if }\ (\mu_1,\mu_2)\in \cP^\rN_{K,R}\setminus
\cG^{\rN}(M,z,\gl)'.
\end{cases}
\]
and, furthermore,
\begin{equation}\begin{aligned}
\label{thm1-n-2+}
I_{K,R}
=\inf_{(\mu_1,\mu_2)\in\cP^{\rN}_{K,R}}\
\sup_{(f,\psi)\in\cG^{\rN}(M,z,\gl)}\,\left(\lan \mu_1, L-f\ran+\lan\mu_2,g-\psi\ran\right).
\end{aligned}
\end{equation}
Note that $\cP^\rN_{K,R}$ is a compact, convex subset
of $\cR_{L}\tim\cR_{\bry}^{+}$.
Sion's minimax theorem implies that
\begin{equation}\label{n-inf-4}\begin{aligned}
I_{K,R}&\,=\min_{(\mu_1,\mu_2)\in\cP^\rN_{K,R}}
\ \sup_{(f,\psi)\in\cG^{\rN}(M,z,\gl)}\,\left(\lan \mu_1, L-f\ran+\lan\mu_2,g-\psi\ran\right)
\\&= \sup_{(f,\psi)\in\cG^{\rN}(M,z,\gl)}\ \min_{(\mu_1,\mu_2)\in\cP^\rN_{K,R}}
\,\left(\lan \mu_1, L-f\ran+\lan\mu_2,g-\psi\ran\right).
\end{aligned}
\end{equation}
In order to prove \eqref{n-inf-1} for the fixed $R>0$ and
a suitably chosen compact set $K\subset\cA$, we argue by contradiction.
We suppose that
\[
\gl v^\gl(z)+\ep< I_{K,R},
\]
which, together with \eqref{n-inf-4}, yields
\begin{equation}\label{thm1-n-4}
\gl v^\gl(z)+\ep <
\ \sup_{(f,\psi)\in\cG^{\rN}(M,z,\gl)}\
\min_{(\mu_1,\mu_2)\in\cP^\rN_{K,R}}
\,\left(\lan \mu_1, L-f\ran+\lan\mu_2,g-\psi\ran\right).
\end{equation}
Hence, we may
choose $(\phi,\psi,u)\in\cF^{\rN}(M,\gl)$ and
$(t,\chi)\in (0,\,\infty)\tim C(\tt)$
so that $\phi=tL+\chi$, $\|\chi\|_{C(\tt)}< tM$,
$\|\psi\|_{C(\bry)}< tM$, and
\[
\gl v^\gl(z)+\ep <
\inf_{(\mu_1,\mu_2)\in\cP^\rN_{K,R}}
\,\left(\lan \mu_1, L-\phi+\gl u(z)\ran+\lan\mu_2,g-\psi\ran\right).
\]
We get from the above
\begin{equation}\label{thm1-n-4++}
\gl v^\gl(z)+\ep <
\inf_{\mu\in\cP^\rN_{1,K}}
\,\lan \mu, L-\phi+\gl u(z)\ran,
\end{equation}
and, since $(\mu_1,R\gd_{x})\in \cP^{\rN}_{K,R}$ for any $\mu_1\in\cP^\rN_{1,K}$ and $x\in\bry$,
\[
\gl v^\gl(z)+\ep <
\inf_{\mu\in\cP^\rN_{1,K}}
\,\lan \mu, L-\phi+\gl u(z)\ran+R\min_{\bry}(g-\psi).
\]
Thus, setting
\[
p:=\inf_{\mu\in\cP^\rN_{1,K}}\lan\mu, L-\phi+\gl (u-v^\gl)(z)\ran
\ \ \text{ and } \ \ q:=\min_{\bry}(g-\psi),
\]
we have
\begin{equation} \label{thm1-n-4+}
\ep<p \ \ \ \text{ and } \ \ \ \ep<p+Rq.
\end{equation}
Our choice of $A$, $B$ and $\eta$ ensures that
$(L+B,-A,\eta)\in\cF^\rN(\gl)$. Note as well that
$(L,g,v^\gl),\ (\phi,\psi,u)\in\cF^{\rN}(\gl)$.
Set
\[
\nu:=\fr{R\ep}{R\ep+p}\in (0,\,1),
\]
and observe by the convexity of $\cF^{\rN}(\gl)$ that
\[
(L+B\ep,(1-\ep)g-A\ep,(1-\ep)v^\gl+\ep\eta)\in\cF^{\rN}(\gl),
\]
and
\[
(1-\nu)(L+ B\ep,(1-\ep)g-A\ep,(1-\ep)v^\gl+\ep\eta)+\nu(\phi,\psi,u)\in\cF^{\rN}(\gl).
\]
We set
\[\begin{gathered}
(\hat\phi,\hat \psi,\hat u):=
(1-\nu)(L+ B\ep,(1-\ep)g-A\ep,(1-\ep)v^\gl+\ep\eta)
+\nu(\phi,\psi,u),
\\
\hat t:=1+\nu(t-1) \ \ \ \text{ and } \ \ \
\hat \chi:=(1-\nu) B\ep+\nu\chi,
\end{gathered}
\]
and note that
\[\begin{aligned}
\hat\phi&\,=L+\nu(\phi-L)+(1-\nu)B\ep=\hat t L+\hat\chi,
\\\hat\psi&\,=(1-\nu)(g-\ep(g+A))+\nu\psi,
\\\hat u-v^\gl&\,=\nu(u-v^\gl)+(1-\nu)\ep(\eta-v^\gl),
\\\hat t-1&\,=\nu(t-1).
\end{aligned}
\]
Using the facts that
$g+A\geq 1$ on $\bry$ and that, by the definition of $q$,
$\psi\leq g-q$ on $\bry$, and the second inequality of
\eqref{thm1-n-4+}, we compute
\[\begin{aligned}
\hat\psi&\,\leq (1-\nu)(g-\ep)+\nu(g-q)
\\&\,\leq (1-\nu)(g-\ep)+\nu\left(g+\fr{p-\ep}{R}\right)
=g-\fr{\nu\ep}{R} \ \ \text{ on }\bry.
\end{aligned}
\]
Also, we compute
\[\begin{aligned}
L-\hat\phi+&\gl(\hat u-v^\gl)(z)
\\ \,=\, &\nu(L-\phi+\gl (u-v^\gl)(z))
-(1-\nu)\ep(B+\gl v^\gl(z)) \ \ \text{ in }\bb,
\end{aligned}
\]
and, by using \eqref{thm1-n-4+} and the second
inequality of \eqref{n-inf-3},
that for any $\mu\in\cP^\rN_{1,K}$,
\[\begin{aligned}
\lan\mu&,L-\hat\phi+\gl(\hat u-v^\gl)(z)\ran
\\&\,\geq \nu p-(1-\nu)\ep(B+\gl v^\gl(z))
\\&\,=
(1-\nu)R\ep-(1-\nu)\ep(B+\gl v^\gl(z))
\geq (1-\nu)\ep.
\end{aligned}
\]
Thus, setting $\hat\ep:=\ep\min\{R^{-1}\nu,\, 1-\nu\}$,
we find that
\begin{equation}\label{thm1-n-5}
\hat \psi\leq g-\hat\ep \ \ \text{ on }\ \bry,
\end{equation}
and
\begin{equation}\label{thm1-n-5+}
\inf_{\mu\in\cP^\rN_{1,K}}\lan\mu, L-\hat\phi
+\gl(\hat u-v^\gl)(z)\ran>\hat\ep.
\end{equation}
We now use these estimates to show that there is a positive constant $\gth$ such that $w:=\gth \hat u$ is a
subsolution of
\begin{equation}\label{thm1-n-6}
\begin{cases}
\gl w+F[w]=-\gl (v^\gl-w)(z)-\gth\hat\ep\ \ \text{ in }\gO&\\[3pt]
\gamma\cdot Dw\leq g \ \ \ \text{ on }\bry.
\end{cases}
\end{equation}
After this is done, we easily get a contradiction as follows:
observe that the function
$\xi:=w+ (v^\gl-w)(z)+\gl^{-1}\gth\hat\ep$ is a subsolution
of $\gl \xi+F[\xi]=0$ in $\gO$ and $\gamma \cdot D\xi=g$ on $\bry$,
and, by the comparison principle \eqref{CPN}, that
$\xi\leq v^\gl$ on $\tt$, which, evaluated at $z$, gives
$\,\gl^{-1}\gth \hat \ep\leq 0$. We thus get a contradiction.
To show \eqref{thm1-n-6}, we treat first the case
when $\cA$ is compact. Select $K=\cA$
and note that
$\gd_{(x,\ga)}\in\cP^\rN_{1,K}$ for all $(x,\ga)\in\bb$.
Thus, from \eqref{thm1-n-5+}, we get
\[
\hat\phi<L-\gl (v^\gl-\hat u)(z)-\hat\ep \ \ \text{ on }\bb,
\]
and we see that $\hat u$ is a subsolution of \eqref{thm1-n-6}, with $\gth=1$.
Consider next the case where $\cA$ is not compact.
Choose a point $\ga_0\in\cA$ and a constant $L_0>0$ so that
\begin{equation}\label{thm1-n-7}
\max_{x\in\tt}L(x,\ga_0)\leq L_0
\ \ \
\text {and } \ \ \ \gl|v^\gl(z)|\leq L_0.
\end{equation}
Let $L_1>0$ be a constant to be fixed later, and, in view of
\eqref{L}, we select a compact set $K_0\subset\cA$ so that
\begin{equation}\label{thm1-n-8}
L(x,\ga)\geq \max\{L_0,L_1\} \ \ \ \text{ for all }\
(x,\ga)\in\tt\tim(\cA\setminus K_0).
\end{equation}
Pick a point $\ga_1\in\cA\setminus K_0$ and set $K=K_0\cup\{\ga_1\}$.
Since $\gd_{(x,\ga)}\in\cP^\rN_{1,K}$ for all $(x,\ga)\in\tt\tim K$,
by \eqref{thm1-n-5+}, we get
\begin{equation}\label{thm1-n-9}
\gl (v^\gl-\hat u)(z)+\hat\ep<(1-\hat t)L(x,\ga)-\hat\chi(x) \ \ \text{ for all }\ (x,\ga)\in\tt\tim K.
\end{equation}
We divide the argument into two cases.
Consider first the case when $\hat t\leq 1$.
We repeat the same lines as in the proof of Theorem \ref{thm1-sc}, to deduce that
\[
(\hat t-1)L+\hat \chi<
-\gl (v^\gl-\hat u)(z)-\hat \ep \ \text{ on }\bb,
\]
then that
\[
\hat\phi=L+(\hat t-1)L+\hat \chi<
L-\gl (v^\gl-\hat u)(z)-\hat \ep \ \text{ on }\bb,
\]
and, hence, that $\hat u$ is a subsolution of \eqref{thm1-n-6}, with $\gth=1$.
Secondly, we consider the case when $\hat t\geq 1$.
Again, repeating the same lines as in the proof of Theorem \ref{thm1-sc}, we obtain
\[
\hat\phi<\hat tL-\hat t\gl v^\gl(z)+\gl \hat u(z)-\hat\ep
\ \ \text{ in }\bb,
\]
which ensures that $w:={\hat t}^{-1}\hat u$ is a subsolution
of
\begin{equation}\label{thm1-n-10}\left\{\begin{aligned}
&\gl w+F[w]=\gl(w-v^\gl)(z)-{\hat t}^{-1}\hat\ep\ \ \text{ in }\ \gO,
\\&\gamma\cdot Dw={\hat t}^{-1}(g-\hat \ep) \ \ \text{ on }\ \bry.
\end{aligned}\right.\end{equation}
Our next step is to show that there exists $L_1$ for which
$\hat t^{-1}(g-\hat \ep)\leq g$ on $\bry$.
For this, we give an upper bound of $p$, independent of the choice
of $L_1$ (and, hence, $K$). Because of \eqref{L}, we may take a positive constant $C_0$ so that $L\geq -C_0$ on $\bb$.
Recall that $\nu(t-1)=\hat t-1>0$, and observe that
\[
L-\phi =(1-t)L-\chi
\leq (t-1)C_0+tM \ \ \text{ on }\ \bb,
\]
and, moreover,
\[
\lan\mu, L-\phi\ran\leq (t-1)C_0+tM \ \ \text{ for all }\ \mu\in\cP^\rN_{1,K}.
\]
According to Lemma \ref{lem-n-3}, there exists
a constant $C_1>0$, depending only on $\gO$, $F$ and $\gl$,
such that
\[
\gl u\leq C_1Mt \ \ \ \text{on }\ \tt.
\]
Thus, by the definition of $p$, we get
\begin{equation}\label{est-p}
p\leq (t-1)C_0+(C_1+1)Mt+\gl |v^\gl(z)|.
\end{equation}
Next, by \eqref{thm1-n-4++}, we get
\[
(\phi-L)(x,\ga_1)\leq\gl(u-v^\gl)(z) \ \ \ \text{ for all }\ x\in\tt,
\]
and moreover,
\[
(t-1)L_1\leq -\chi(x)+\gl(u-v^\gl)(z)
\leq (C_1+1)Mt+|\gl v^\gl(z)| \ \ \ \text{ for all }\ x\in\tt.
\]
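In detail, the last inequality rearranges as
\[
t\left(L_1-(C_1+1)M\right)\leq L_1+\gl|v^\gl(z)|.
\]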
Hence, if $L_1>(C_1+1)M$, then we get
\begin{equation}\label{est-t}
t\leq \fr{L_1+\gl|v^\gl(z)|}{L_1-(C_1+1)M}.
\end{equation}
Now, set
\[
p_0:=C_0+2(C_1+1)M+\gl |v^\gl(z)|,\quad\ep_0:=\min\Big\{\fr{\ep^2}{R\ep+p_0},\,\fr{\ep}{R+1}\Big\},
\]
and
\[
\tau:=1+\fr{\ep_0}{\|g\|_{C(\bry)}+1},
\]
and note that $p_0>0$, $0<\ep_0<1$ and $1<\tau<2$ and that these constants
$p_0,\,\ep_0$ and $\tau$ are independent of the choice
of $L_1$ and $K$.
Then fix $L_1>(C_1+1)M$ large enough so that
\[
\fr{L_1+\gl|v^\gl(z)|}{L_1-(C_1+1)M}\leq \tau.
\]
Observe by \eqref{est-t} that $t\leq \tau$, which implies
together with \eqref{est-p} and \eqref{thm1-n-4+} that
$\ep<p\leq p_0$. This implies moreover that
\[
\fr{R\ep}{R\ep+p_0}\leq \nu<\fr{R}{R+1},
\]
and thus,
\[
\hat\ep\geq \ep\min\Big\{\fr{\ep}{R\ep+p_0},\fr{1}{R+1}\Big\}=\ep_0.
\]
Hence,
\[\begin{aligned}
\hat\ep&\,\geq \ep_0=(\tau-1)\left(\|g\|_{C(\bry)}+1\right)
> (t-1)\|g\|_{C(\bry)}
\\&\,\geq (\hat t-1)\|g\|_{C(\bry)}
\geq -(\hat t-1)g \ \ \text{ on }\ \bry,
\end{aligned}\]
and therefore,
\[
{\hat t}^{-1}(g-\hat\ep)<g \ \ \ \text{ on }\ \bry.
\]
We now conclude from this and \eqref{thm1-n-10}
that $w:=\hat{t}^{-1}\hat u$ is a subsolution
of \eqref{thm1-n-6}, with
$\gth=\hat{t}^{-1}$, which completes the proof.
\end{proof}
\begin{lem} \label{est-mu}
Assume \eqref{F1}, \eqref{F2}, \eqref{L} and \eqref{OG}. Let $(z,\gl,M)\in\tt\tim(0,\,\infty)\tim(0,\,\infty)$
and $(\mu_1,\mu_2)\in\cP^\rN\cap\cG^\rN(M,z,\gl)'$.
For each $\ep>0$ there exists a constant
$C>0$, depending only on $\ep$, $\gO$ and $F$,
such that if $M>C$, then
\[
\mu_2(\bry)\leq \ep \lan\mu_1,L\ran+C(1+\gl).
\]
\end{lem}
\begin{proof} According to \eqref{OG} or \eqref{gz}, there exists a function $\eta\in C^2(\tt)$ such that
\[
\gamma\cdot D\eta\leq -1 \ \ \text{ on }\ \bry
\quad\text{ and }\quad \eta\leq 0\ \ \text{ on }\ \tt.
\]
Fix any $\ep>0$, set
\[
C_1:=\|F[\ep^{-1}\eta]\|_{C(\tt)}+\ep^{-1},
\]
and observe that if $M>C_1$, then
\[
(L+C_1,-\ep^{-1},\ep^{-1}\eta)\in\cF^\rN(M,z,\gl).
\]
Assume that $M>C_1$.
Since $(\mu_1,\mu_2)\in\cP^\rN\cap\cG^\rN(M,z,\gl)'$, we get
\[
0\leq \lan\mu_1,L+C_1-\gl \ep^{-1}\eta(z)\ran
+\lan\mu_2,-\ep^{-1}\ran,
\]
which yields
\[
\mu_2(\bry)\leq \ep \lan\mu_1,L\ran
+\ep C_1+\gl\|\eta\|_{C(\tt)}.
\]
If we set $C=(1+\ep)C_1+\|\eta\|_{C(\tt)}$
and if $M>C$, then
we have
\[
\mu_2(\bry)\leq \ep \lan\mu_1,L\ran
+C(1+\gl).
\]
The proof is now complete.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm1-n}] We here prove only assertion
(ii), since the proof of (i) is similar and slightly simpler.
Fix a point $z\in\tt$.
For each $\gl>0$, let $v^\gl\in C(\tt)$ be a solution of \eqref{N}.
Recall that
\begin{equation}\label{thm1-np-1}
-c_\rN=\lim_{\gl\to 0+}\gl v^\gl(z).
\end{equation}
By Lemma \ref{est-mu}, there exists a constant $C_1>0$,
depending only on $\|g\|_{C(\bry)}$, $\gO$ and $F$,
such that for any $(M,\gl)\in(0,\,\infty)^2$ and
$(\mu_1,\mu_2)\in\cP^\rN\cap\cG^\rN(M,z,\gl)'$, if
$M\geq C_1$, then
\begin{equation}\label{thm1-np-2}
\mu_2(\bry)\leq \fr{1}{2\|g\|_{C(\bry)}+1}\lan\mu_1,L\ran+C_1(1+\gl).
\end{equation}
Fix a sequence $\{\gl_j\}_{j\in\N}\subset(0,\,1)$
converging to zero.
Thanks to Theorem \ref{thm1-n1},
for each $j\in\N$, there exists
$(\mu_1^j,\mu_2^j)\in\cP^\rN\cap \cG^\rN(j,z,\gl_j)'$
such that
\begin{equation}\label{thm1-np-3}
\gl_j v^{\gl_j}(z)+\fr{1}{j}\geq \lan\mu_1^j,L\ran+\lan\mu_2^j,g\ran.
\end{equation}
From this, we get
\[
\lan\mu_1^j, L\ran
\leq \gl_j v^{\gl_j}(z)+1+\|g\|_{C(\bry)}\mu_2^j(\bry).
\]
Combine this with \eqref{thm1-np-2}, to obtain
\[
\lan\mu_1^j, L\ran\leq 2\left(\gl_j v^{\gl_j}(z)+1+
\|g\|_{C(\bry)}C_1(1+\gl_j)\right)\ \ \ \text{ if }\ j>C_1,
\]
and to find that the sequences
\[
\{\lan\mu_1^j,L\ran\}_{j\in\N}
\quad
\text{ and }\quad\{\mu_2^j(\bry)\}_{j\in\N}
\]
are bounded from above and, hence, bounded.
The boundedness of the two sequences above implies, with help of Lemma \ref{basic-cpt}, that
the sequence $\{(\mu_1^j,\mu_2^j)\}_{j\in\N}$ of measures
has a subsequence, convergent in the topology of
weak convergence, which we denote by the same symbol. Let $(\mu_1,\mu_2)\in\cP^\rN$ be the limit of
the sequence $\{(\mu_1^j,\mu_2^j)\}_{j\in\N}$.
By Lemma \ref{basic-lsc}, we have
\[
\lan\mu_1, L\ran\leq \liminf_{j\to\infty}\lan\mu_1^j,L\ran.
\]
By passing again to a subsequence if necessary,
we may assume that the sequence $\{\lan\mu_1^j,L\ran\}_{j\in\N}$
is convergent.
We set
\[
\rho:=\lim_{j\to\infty}\du{\mu_1^j,L}-\du{\mu_1,L}\,(\,\geq 0\,).
\]
Sending $j\to\infty$ in \eqref{thm1-np-3} yields
together with \eqref{thm1-np-1}
\begin{equation}\label{thm1-np-4}
-c_\rN\geq \rho+\du{\mu_1,L}+\du{\mu_2,g}.
\end{equation}
Note that if $\cA$ is compact, then $\rho=0$.
Next, we show that $(\mu_1,\mu_2)\in\cG^\rN(0)'$.
Pick $(tL+\chi,\psi,u)\in\cF^\rN(0)$,
where $t>0$ and $\chi\in C(\tt)$,
and note that
$(tL+\chi+\gl_j u,\psi,u)\in\cF^\rN(M,\gl_j)$
for all $j\in\N$ and some constant $M>0$. The dual
property
of $\cG^\rN(M,z,\gl_j)'$ yields
\[
0\leq\du{\mu_1^j,tL+\chi+\gl_ju-\gl_ju(z)}+\du{\mu_2^j,\psi}
\ \ \ \text{ if }\ j>M.
\]
Sending $j\to\infty$ gives
\begin{equation}\label{thm1-np-5}
0\leq t(\rho+\du{\mu_1,L})+\du{\mu_1,\chi}+\du{\mu_2,\psi}.
\end{equation}
Owing to Lemma \ref{mod}, we may choose $\tilde \mu_1
\in\cP_1^\rN$ such that
\[
\du{\tilde\mu_1,L}=\rho+\du{\mu_1,L}
\quad\text{ and }\quad
\du{\tilde\mu_1,\gz}=\du{\mu_1,\gz} \ \ \text{ for all }\
\gz\in C(\tt).
\]
(If $\cA$ is compact, then we choose $\tilde\mu_1=\mu_1$.)
This and \eqref{thm1-np-5} give
\[
0\leq \du{\tilde\mu_1,tL+\chi}+\du{\mu_2,\psi}
\ \ \ \text{ for any }\ (tL+\chi,\psi,u)\in\cF^\rN(0),
\]
which ensures that $(\tilde\mu_1,\mu_2)\in
\cP^\rN\cap\cG^\rN(0)'$. Also, applying the above to
a solution $(u,c_\rN)$ of \eqref{EN}, which implies
$(L+c_\rN,g,u)\in\cF^\rN(0)$, yields
\[
-c_{\rN}\leq\du{\tilde\mu_1,L}+\du{\mu_2,g}.
\]
Inequality \eqref{thm1-np-4} now
reads
\[
-c_\rN\geq \du{\tilde \mu_1,L}+\du{\mu_2,g}.
\]
Thus, we have
\[
-c_\rN=\du{\tilde \mu_1,L}+\du{\mu_2,g}
=\inf_{(\nu_1,\nu_2)\in\cP^\rN\cap\cG^\rN(0)'}
(\du{\nu_1,L}+\du{\nu_2,g}),
\]
which finishes the proof.
\end{proof}
\subsection{Convergence with vanishing discount}
As an application of Theorem \ref{thm1-n}, we prove
here one of the main results in this section.
\begin{thm}\label{conv-n} Assume \eqref{F1},
\eqref{F2}, \eqref{OG}, \eqref{CPN'}, \eqref{SLN} and \eqref{EC}.
For $\gl>0$, let $v^\gl$ be a solution of \eqref{N}.
Then
\[
\lim_{\gl\to 0+}
(v^{\gl}+\gl^{-1}c_\rN)=u \ \ \ \text{ in }\ C(\tt)
\]
for some $u\in C(\tt)$, and $u$ is a solution of \emph{(EN$_{c_\rN}$)}.
\end{thm}
\begin{proof} Let $u_0\in C(\tt)$ be a solution of (EN$_{c_\rN}$). Observe that for each $\gl>0$, the functions
$u_0+\|u_0\|_{C(\tt)}-\gl^{-1}c_\rN$ and $u_0-\|u_0\|_{C(\tt)}-\gl^{-1}c_\rN$ are a supersolution and a subsolution of
\eqref{N}, respectively, and apply \eqref{CPN}, to get
$\|v^\gl+\gl^{-1}c_\rN\|_{C(\tt)}\leq 2\|u_0\|_{C(\tt)}$
for all $\gl>0$, which, combined with \eqref{EC}, shows
that the family
$\{v^\gl+\gl^{-1}c_\rN\}_{\gl>0}$ is relatively compact in
$C(\tt)$.
Let $\cU$ denote the set of accumulation points of
$\{v^\gl+\gl^{-1}c_\rN\}_{\gl>0}$ in $C(\tt)$ as $\gl\to 0$.
The relative compactness of the family implies that $\cU\not=\emptyset$. Also, it is a standard observation that
any $v\in\cU$ is a solution of
(EN$_{c_\rN}$).
Now, we prove that $\cU$ has a single element, which
ensures that the convergence asserted in the theorem holds.
Let $v,\, w\in\cU$, and select sequences $\{\gl_j\}_{j\in\N}$
and $\{\gd_j\}_{j\in\N}$ of positive numbers so that
\[
\begin{cases}\disp
\lim_{j\to\infty}\gl_j=\lim_{j\to\infty}\gd_j=0,&\\
\disp
\lim_{j\to\infty}\left(v^{\gl_j}+\gl_j^{-1}c_\rN\right)=v \ \ \text{ in }\ C(\tt),&\\
\disp
\lim_{j\to\infty}\left(v^{\gd_j}+\gd_j^{-1}c_\rN\right)=w \ \ \text{ in }\ C(\tt).
\end{cases}
\]
Fix any $z\in\tt$, and, owing to Theorem \ref{thm1-n}, choose
a sequence $\{(\mu_1^j,\mu_2^j)\}_{j\in\N}$ of measures
so that, for every $j\in\N$,
$(\mu_1^j,\mu_2^j)\in\cP^\rN\cap \cG^\rN(z,\gl_j)'$
and
\begin{equation}\label{conv-np-1}
\gl_j v^{\gl_j}(z)=\du{\mu_1^j,L}+\du{\mu_2^j,g}.
\end{equation}
As in the proof of Theorem \ref{thm1-n}, after passing to a subsequence of $\{\gl_j,\mu_1^j,\mu_2^j\}_{j\in\N}$ if necessary,
we may assume that $\{(\mu_1^j,\mu_2^j)\}_{j\in\N}$
converges to some $(\mu_1,\mu_2)\in\cP^\rN$ and that
for some $\tilde\mu_1\in\cP_1^\rN$,
\begin{align}
&\lim_{j\to\infty}\du{\mu_1^j,L}=\du{\tilde\mu_1,L},\notag
\\&(\tilde\mu_1,\mu_2)\in\cG^\rN(0)',\notag
\\&\du{\tilde \mu_1,\psi}=\du{\mu_1,\psi} \ \ \ \text{ for all }\ \psi\in C(\tt),\notag
\\&-c_\rN=\du{\tilde\mu_1,L}+\du{\mu_2,g}.\label{conv-np-3}
\end{align}
Since $w$ is a solution of
(EN$_{c_\rN}$),
we have
$(L+\gl_j w,g,w-\gl_j^{-1}c_\rN)\in\cF^\rN(\gl_j)$ for all
$j\in\N$. Hence, by the dual cone property of
$\cG^\rN(z,\gl_j)'$, we get
\[
0\leq \du{\mu_1^j,L+\gl_j w-\gl_j(w(z)-\gl_j^{-1}c_\rN)}
+\du{\mu_2^j,g}.
\]
Combining this with \eqref{conv-np-1} yields
\[
0\leq \gl_j \left(v^{\gl_j}(z)+\gl_j^{-1}c_\rN
-w(z)+\du{\mu_1^j,w}\right) \ \ \ \text{ for all }\ j\in\N,
\]
which implies in the limit $j\to\infty$ that
\begin{equation}\label{conv-np-2}
w(z)\leq v(z)+\du{\mu_1,w}=v(z)+\du{\tilde \mu_1,w}.
\end{equation}
Next, we note that $(L-\gd_j v^{\gd_j},g,v^{\gd_j})
\in\cF^\rN(0)$ and use the
fact that $(\tilde\mu_1,\mu_2)\in\cG^\rN(0)'$
and then \eqref{conv-np-3},
to get
\[
0\leq \du{\tilde\mu_1,L-\gd_j v^{\gd_j}}
+\du{\mu_2,g}=-\gd_j\du{\tilde\mu_1,v^{\gd_j}+\gd_j^{-1}c_\rN},
\]
which yields, after division by $\gd_j$ and taking the limit
$j\to\infty$,
\[
\du{\tilde\mu_1,w}\leq 0.
\]
From this and \eqref{conv-np-2}, we get $\ w(z)\leq v(z)$,
and, since $z\in\tt$ and $v,w\in\cU$ are arbitrary,
we conclude that $\cU$ has only one element.
\end{proof}
\section{Examples} \label{sec-ex}
In this section, we give some examples to which the main theorems in this paper,
Theorems \ref{thm2-sc}, \ref{thm2-d}, \ref{conv-n}, can be applied.
For the case of Theorem \ref{conv-n}, we always assume that \eqref{OG} holds.
\subsection{First-order Hamilton-Jacobi equations}
We consider the first-order Hamilton-Jacobi equations \eqref{DP}
and \eqref{E}, where the function $F$ is replaced by
$H=H(x,p)$ on $\tt\tim\R^n$. The Hamiltonian $H$ is assumed to
be continuous and, moreover, satisfies
two conditions: $H$ is coercive and convex, that is,
\[
\lim_{|p|\to\infty}H(x,p)=+\infty \ \ \ \text{ uniformly for }\ x\in \tt,
\]
and
\[
\text{ for every $x\in\tt$, the function $p\mapsto H(x,p)$ is convex
in $\R^n$.}
\]
We assume as well that $\gO$ has a $C^1$-boundary.
Thanks to \cite{Ishi, BaMi}, for every $\gl>0$,
the solution $v^\gl$ of each of \eqref{S}, \eqref{D} and \eqref{N}
exists and the
family $\{v^\gl\mid \gl>0\}$ is equi-Lipschitz on $\tt$, which, in particular, implies that \eqref{EC} holds.
Also, the comparison principles \eqref{CP}, \eqref{CPS},
\eqref{CPD} and \eqref{CPN'} hold.
We take advantage of the equi-Lipschitz property of $\{v^\gl\mid\gl>0\}$ and replace $H$ by another convex and coercive Hamiltonian $\widetilde H\in C(\tt\tim\R^n)$ that satisfies:
(i) $\widetilde H(x,p)=H(x,p)$ for all $(x,p)\in \tt\tim B_{M_0}$,
where $M_0>0$ is a Lipschitz bound for $\{v^\gl\mid\gl>0\}$ and
$B_{M_0}$ denotes the ball of radius $M_0$ with center at the origin,
and (ii) $\lim_{|p|\to\infty}\widetilde H(x,p)/|p|=+\infty$
uniformly on $\tt$. Let $L$ be the Lagrangian,
corresponding to $\widetilde H$, given by
\[
L(x,\ga)=\max_{p\in\R^n}(\ga\cdot p-\widetilde H(x,p)),
\]
which is a continuous function on $\tt\tim\R^n$. Moreover,
we have
\[
\widetilde H(x,p)=\max_{\ga\in\R^n}(p\cdot \ga-L(x,\ga)),
\]
and
\[
\lim_{|\ga|\to\infty}L(x,\ga)=+\infty \ \ \ \text{ uniformly for }x\in\tt.
\]
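As a quick numerical sanity check of this Legendre duality (a hypothetical one-dimensional illustration, not part of the argument: for $\widetilde H(p)=|p|^2$ the Lagrangian is $L(\ga)=|\ga|^2/4$), the two transforms invert each other on a grid:

```python
import numpy as np

# Hypothetical 1-D example: H(p) = p^2 has Lagrangian L(a) = a^2/4.
p = np.linspace(-8.0, 8.0, 2001)   # grid for the p-variable
a = np.linspace(-4.0, 4.0, 2001)   # grid for the velocity variable

H = p ** 2
# L(a) = max_p (a p - H(p)); the maximizer p = a/2 lies inside the p-grid
L = np.max(a[:, None] * p[None, :] - H[None, :], axis=1)
assert np.allclose(L, a ** 2 / 4, atol=1e-3)

# H(p) = max_a (p a - L(a)); restrict to |p| <= 1.5 so that the maximizer
# a = 2p stays inside the a-grid
small = np.abs(p) <= 1.5
H_back = np.max(p[small, None] * a[None, :] - L[None, :], axis=1)
assert np.allclose(H_back, p[small] ** 2, atol=1e-3)
```

The superlinear growth of $L$ required above is also visible in this example, since $L(\ga)=|\ga|^2/4\to+\infty$ as $|\ga|\to\infty$.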
Thus, in the present case, our choices of
$\cA$, $a$ and $b$ are $\R^n$, $a(x,\ga)=0$ and
$b(x,\ga)=-\ga$, respectively, and $L$ satisfies condition \eqref{L}.
Furthermore, the solution $v^\gl$ of any of \eqref{S}, \eqref{D} or \eqref{N}, with
$F=H$, is again a solution of the respective problem \eqref{S}, \eqref{D} or \eqref{N}, with $F$ replaced by $\widetilde H$.
All the conditions and, hence, the convergence results
of Theorems \ref{thm2-sc}, \ref{thm2-d}, and \ref{conv-n}, with $F=\widetilde H$, hold. This implies that the convergence assertions
of Theorems \ref{thm2-sc}, \ref{thm2-d}, and \ref{conv-n}, with $F=H$, hold.
\subsection{Fully nonlinear, possibly degenerate elliptic equations with superquadratic growth in the gradient variable} \label{ss-superquad}
Firstly, set
\[
F_0(x,p,X):=\max_{\ga \in \cA_0} \left( - \tr\gs^t(x,\ga)\gs(x,\ga) X - b(x,\ga)\cdot p - L_0(x,\ga) \right),
\]
where $\cA_0$ is a compact subset of $\R^l$ for some $l \in \N$,
$\sigma \in \Lip(\tt \times \cA_0, \R^{k \times n})$ for some $k \in \N$,
$b \in \Lip(\tt\times \cA_0,\R^n)$, and $L_0\in C(\tt \times \cA_0,\R)$.
Consider the operator $F$ of the form
\begin{align*}
&F(x,p,X):=
\frac{c(x)}{m}|p|^m + F_0(x,p,X)\\
=&\, \sup_{(q,\ga)\in\R^n\times \cA_0}\left(- \tr\gs^t(x,\ga)\gs(x,\ga) X+q\cdot p - b(x,\ga)\cdot p -\frac{c(x)^{\frac{-1}{m-1}}}{m'}|q|^{m'}- L_0(x,\ga)\right),
\end{align*}
where $m>2$ and $c \in \Lip(\tt,\R)$ with $c>0$.
We set here $m':=m/(m-1)$. Assume further that $\partial \gO \in C^3$.
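The sup-representation of the gradient term above can be checked numerically; the following is a small hypothetical check with sample values $m=3$ and $c\equiv 2$ (the grid and tolerance are illustrative):

```python
import numpy as np

m, c = 3.0, 2.0                        # sample values: m > 2 and c > 0
mp = m / (m - 1.0)                     # conjugate exponent m' = m/(m-1)
q = np.linspace(-60.0, 60.0, 400001)   # grid for the supremum over q

for p_val in (-1.5, -0.3, 0.0, 0.7, 2.0):
    # sup_q ( q p - c^{-1/(m-1)} |q|^{m'} / m' ), attained at q = c |p|^{m-1} sgn(p)
    sup_q = np.max(q * p_val - c ** (-1.0 / (m - 1.0)) * np.abs(q) ** mp / mp)
    # the supremum equals (c/m) |p|^m, i.e. the gradient term of F
    assert abs(sup_q - c / m * abs(p_val) ** m) < 1e-3
```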
It is worth pointing out that in this superquadratic case ($m>2$), the scale of the gradient term dominates the scale of the diffusion term.
Because of this feature, for any subsolution $u$ of $\gl u +F[u] =0$ in $\gO$ with $\gl \geq 0$ such that $u$ is bounded in $\tt$,
we have $u \in C^{0,\frac{m-2}{m-1}}(\tt)$
and the H\"older constant is dependent on
$m$, $\|\sigma\|_{C(\tt\times \cA_0)}$, $\|b\|_{C(\tt\times \cA_0)}$, $\|L_0\|_{C(\tt\times \cA_0)}$,
$\min_{\tt} c$, $\gl\|u^-\|_{L^\infty(\tt)}$ and $\partial \gO$
(see \cite{CaLePo}, \cite{Barl}, \cite{ArTr}).
This important point helps us verify the equi-continuity assumption \eqref{EC}.
It is clear that \eqref{F1}, \eqref{F2} and \eqref{L} hold.
Thanks to Theorems 3.1, 3.2 and 4.1 in \cite{Barl}, \eqref{CP}, \eqref{CPS}, \eqref{CPD}, \eqref{SLS}, \eqref{SLD} and \eqref{EC} are valid.
Besides, if $u$ is a subsolution of \eqref{D'} with $\gl \geq 0$,
then $u \leq \psi$ pointwise on $\partial \gO$ (see \cite[Proposition 3.1]{DaL} or \cite[Proposition 3.1]{Barl} for the proof).
For the Neumann problem, we assume further that $\gamma \in C^2(\partial \gO)$.
By using the ideas in \cite{Barl} with careful modifications, we verify that
\eqref{CPN'} and \eqref{SLN} are valid.
Therefore, Theorems \ref{thm2-sc}, \ref{thm2-d}, \ref{conv-n} hold true.
\subsection{Fully nonlinear, possibly degenerate elliptic equations with superlinear growth in the gradient variable}
This example is only applicable to the Neumann problem.
Let $F_0$ be as in Subsection \ref{ss-superquad}. Suppose that
\begin{align*}
&F(x,p,X):=
\frac{|p|^m }{m}+ F_0(x,p,X)\\
=&\, \sup_{(q,\ga)\in\R^n\times \cA_0}\left(- \tr\gs^t(x,\ga)\gs(x,\ga) X+q\cdot p - b(x,\ga)\cdot p -\frac{|q|^{m'}}{m'}- L_0(x,\ga)\right),
\end{align*}
where $m>1$ and $m':=m/(m-1)$. Assume further that $\partial \gO \in C^{1,1}$ and $\gamma \in C^{1,1}(\partial \gO)$.
This will be studied in detail in the forthcoming paper \cite{IsMtTr3}.
It is verified there that all conditions in Theorem \ref{conv-n} are satisfied and thus Theorem \ref{conv-n} holds true.
As far as the authors know, this example has not been studied in the literature.
\subsection{Fully nonlinear, uniformly elliptic equations with linear growth in the gradient variable} \label{ss-fully}
We note that this example is only applicable to the Dirichlet and Neumann problems.
We consider the operator
\[
F(x,p,X):= \max_{\ga \in \cA} \left(- \tr a(x,\ga) X -b(x,\ga)\cdot p-L(x,\ga)\right),
\]
where $\cA$ is a compact metric space, $a\in \Lip(\tt \times \cA, \bS^n)$, $b\in \Lip(\tt \times \cA,\R^n)$, $L\in \Lip(\tt \times \cA,\R)$.
Assume that $a$ is uniformly elliptic, that is,
there exists $\theta>0$ such that
\[
\frac{1}{\theta} I \le a(x,\ga)\le \theta I \quad\text{for all} \ (x,\ga) \in\tt \times \cA,
\]
where $I$ denotes the identity matrix of size $n$.
It is easy to see that \eqref{F1} and \eqref{F2} hold.
For the Dirichlet problem,
we assume that $\partial \gO \in C^{1,\beta}$ and $g \in C^{1,\beta}(\tt)$ for some $\beta \in (0,1)$.
Comparison principles \eqref{CP}, \eqref{CPD} are
consequences of \cite[Theorem III.1 (1)]{IsLi}, \cite[Theorem 7.9]{CrIsLi}, respectively, under the Lipschitz
continuity assumption on $a$, $b$ and $L$.
By \cite[Theorem 3.1]{Tr}, \eqref{SLD} holds true.
In view of the Krylov-Safonov estimate (see \cite{KrSa} and also \cite{Ca,Tr}), we have that
$\{v^\lambda\}_{\lambda>0}$ is equi-H\"older continuous on $\tt$, which implies \eqref{EC}.
Therefore, Theorem \ref{thm2-d} is valid.
For the Neumann problem, we assume that $\partial \gO \in C^2$, $\gamma \in C^{0,1}(\partial \gO)$, and $g\in C^{0,\beta}(\partial \gO)$ for some $\beta \in (0,1)$.
Thanks to \cite[Theorem 7.12]{CrIsLi}, \eqref{CPN'} and \eqref{SLN} are valid.
Furthermore, in view of \cite[Theorem A.1]{BaDa},
$\{v^\lambda\}_{\lambda>0}$ is equi-H\"older continuous on $\tt$, which implies \eqref{EC}.
These yield the validity of Theorem \ref{conv-n}.
\begin{bibdiv}
\begin{biblist}
\bib{AlAlIsYo}{article}{
author={Al-Aidarous, E. S.},
author={Alzahrani, E. O.},
author={Ishii, H.},
author={Younas, A. M. M.},
title={A convergence result for the ergodic problem for Hamilton--Jacobi
equations with Neumann-type boundary conditions},
journal={Proc. Roy. Soc. Edinburgh Sect. A},
volume={146},
date={2016},
number={2},
pages={225--242},
}
\bib{ArTr}{article}{
author={Armstrong, S. N.},
author={Tran, H. V.},
title={Viscosity solutions of general viscous Hamilton-Jacobi equations},
journal={Math. Ann.},
volume={361},
date={2015},
number={3-4},
pages={647--687},
issn={0025-5831},
}
\bib{Barl}{article}{
author={Barles, G.},
title={A short proof of the $C^{0,\alpha}$-regularity of viscosity
subsolutions for superquadratic viscous Hamilton-Jacobi equations and
applications},
journal={Nonlinear Anal.},
volume={73},
date={2010},
number={1},
pages={31--47},
issn={0362-546X},
}
\bib{BaDa}{article}{
author={Barles, G.},
author={Da Lio, F.},
title={On the boundary ergodic problem for fully nonlinear equations
in bounded domains with general nonlinear Neumann boundary
conditions},
journal={Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire},
volume={22},
date={2005},
number={5},
pages={521--541},
issn={0294-1449},
}
\bib{BaMi}{article}{
author={Barles, G.},
author={Mitake, H.},
title={A PDE approach to large-time asymptotics for boundary-value
problems for nonconvex Hamilton-Jacobi equations},
journal={Comm. Partial Differential Equations},
volume={37},
date={2012},
number={1},
pages={136--168},
}
\bib{Ca}{article}{
author={Caffarelli, L. A.},
title={Interior a priori estimates for solutions of fully nonlinear
equations},
journal={Ann. of Math. (2)},
volume={130},
date={1989},
number={1},
pages={189--213},
issn={0003-486X},
}
\bib{CaLePo}{article}{
author={Capuzzo-Dolcetta, I.},
author={Leoni, F.},
author={Porretta, A.},
title={H\"older estimates for degenerate elliptic equations with coercive
Hamiltonians},
journal={Trans. Amer. Math. Soc.},
volume={362},
date={2010},
number={9},
pages={4511--4536},
}
\bib{CrIsLi}{article}{
author={Crandall, M. G.},
author={Ishii, H.},
author={Lions, P.-L.},
title={User's guide to viscosity solutions of second order partial
differential equations},
journal={Bull. Amer. Math. Soc. (N.S.)},
volume={27},
date={1992},
number={1},
pages={1--67},
}
\bib{DaL}{article}{
author={Da Lio, F.},
title={Comparison results for quasilinear equations in annular
domains and applications},
journal={Comm. Partial Differential Equations},
volume={27},
date={2002},
number={1-2},
pages={283--323},
issn={0360-5302},
}
\bib{DFIZ}{article}{
author={Davini, A.},
author={Fathi, A.},
author={Iturriaga, R.},
author={Zavidovique, M.},
title={Convergence of the solutions of the discounted equation},
journal={Invent. Math. First online January, 2016 (Preprint is also available in arXiv:1408.6712)},
}
\bib{Ev}{article}{
author={Evans, L. C.},
title={Classical solutions of fully nonlinear, convex, second-order
elliptic equations},
journal={Comm. Pure Appl. Math.},
volume={35},
date={1982},
number={3},
pages={333--363},
issn={0010-3640},
}
\bib{Go}{article}{
author={Gomes, D. A.},
title={Duality principles for fully nonlinear elliptic equations},
conference={
title={Trends in partial differential equations of mathematical
physics},
},
book={
series={Progr. Nonlinear Differential Equations Appl.},
volume={61},
publisher={Birkh\"auser, Basel},
},
date={2005},
pages={125--136},
}
\bib{GMT}{article}{
author={Gomes, D. A.},
author={Mitake, H.},
author={Tran, H. V.},
title={The selection problem for discounted Hamilton-Jacobi equations: some non-convex cases},
journal={preprint},
}
\bib{Ishi}{article}{
author={Ishii, H.},
title={Weak {KAM} aspects of convex Hamilton-Jacobi equations with Neumann type boundary conditions},
journal={J. Math. Pures Appl. (9)},
volume={95},
date={2011},
number={1},
pages={99--135},
}
\bib{IsLi}{article}{
author={Ishii, H.},
author={Lions, P.-L.},
title={Viscosity solutions of fully nonlinear second-order elliptic
partial differential equations},
journal={J. Differential Equations},
volume={83},
date={1990},
number={1},
pages={26--78},
}
\bib{IsMtTr1}{article}{
author={Ishii, H.},
author={Mitake, H.},
author={Tran, H. V.},
title={The vanishing discount problem and viscosity Mather measures.
Part 1: the problem on a torus},
journal={submitted, (Preprint is available in arXiv:1603.01051)},
}
\bib{IsMtTr3}{article}{
author={Ishii, H.},
author={Mitake, H.},
author={Tran, H. V.},
title={work under preparation},
}
\bib{Kr}{article}{
author={Krylov, N. V.},
title={Boundedly inhomogeneous elliptic and parabolic equations},
language={Russian},
journal={Izv. Akad. Nauk SSSR Ser. Mat.},
volume={46},
date={1982},
number={3},
pages={487--523, 670},
}
\bib{KrSa}{article}{
author={Krylov, N. V.},
author={Safonov, M. V.},
title={An estimate for the probability of a diffusion process hitting a
set of positive measure},
language={Russian},
journal={Dokl. Akad. Nauk SSSR},
volume={245},
date={1979},
number={1},
pages={18--20},
}
\bib{LPV}{article}{
author={P.-L. Lions},
author={G. Papanicolaou},
author={S. R. S. Varadhan},
title={Homogenization of Hamilton--Jacobi equations},
journal={unpublished work (1987)},
}
\bib{Man}{article}{
author={R. Ma\~n\'e},
title={Generic properties and problems of minimizing measures of Lagrangian systems},
journal={Nonlinearity},
volume={9},
date={1996},
NUMBER = {2},
pages={273--310},
}
\bib{Mat}{article}{
author={J. N. Mather},
title={Action minimizing invariant measures for positive definite Lagrangian systems},
journal={Math. Z.},
volume={207},
date={1991},
NUMBER = {2},
pages={169--207},
}
\bib{MiTr}{article}{
author={Mitake, H.},
author={Tran, H. V.},
title={Selection problems for a discounted degenerate viscous Hamilton--Jacobi equation},
journal={submitted, (Preprint is available in arXiv:1408.2909)},
}
\bib{Si}{article}{
author={Sion, M.},
title={On general minimax theorems},
journal={Pacific J. Math.},
volume={8},
date={1958},
pages={171--176},
}
\bib{Te}{article}{
author={Terkelsen, F.},
title={Some minimax theorems},
journal={Math. Scand.},
volume={31},
date={1972},
pages={405--413 (1973)},
}
\bib{Tr}{article}{
author={Trudinger, N. S.},
title={On regularity and existence of viscosity solutions of nonlinear
second order, elliptic equations},
conference={
title={Partial differential equations and the calculus of variations,
Vol.\ II},
},
book={
series={Progr. Nonlinear Differential Equations Appl.},
volume={2},
publisher={Birkh\"auser Boston, Boston, MA},
},
date={1989},
pages={939--957},
}
\end{biblist}
\end{bibdiv}
\bye
|
{'timestamp': '2016-07-19T02:02:57', 'yymm': '1607', 'arxiv_id': '1607.04709', 'language': 'en', 'url': 'https://arxiv.org/abs/1607.04709'}
|
arxiv
|
\section{Introduction}
Exciton mediated many-body interactions give rise to a host of physical effects \cite{qiu2016screening,sahin2016computing,cudazzo2016exciton}
that determine the opto-electronic properties
of low dimensional transition metal dichalcogenides, MX$_2$
(M = Mo, W, Nb and X = S, Se) \cite{scharf2016probing,rama,ugeda2014giant,wang2012electronics,makatom,qiu2013,
splen,komsa2012effects,kormanyos2013monolayer},
with important consequences for fundamental and applied research.
The confinement of correlated charge carriers or excitons to a narrow region of space in low dimensional transition metal dichalcogenides (TMDCs) leads to
unique photoluminescence properties \cite{splendiani2010emerging,plechinger2012low,gao2016localized,ji2013epitaxial,zhu2016strongly,eda2013two,ghatak2011nature,bergh,molinaspin,mai2013many} that are otherwise absent in the bulk configurations \cite{saigal2016exciton}.
The availability of state-of-the-art exfoliation
techniques \cite{novoselov2005two,varrla2015large,chen2015nanoimprint,fan2015fast} enables the
fabrication of low dimensional transition metal dichalcogenides that are useful for applications
\cite{ou2014ion,li2016charge,perebeinos2015metal,beck2000,tsai2013few,bernardi2013extraordinary, he2012fabrication,wi2014enhancement,bertolazzi2013nonvolatile,ji2013epitaxial,yu2016evaluation,
park2016mos2,eda2013two,radisavljevic2011integrated,lembke2015single,pospischil2014solar,
zhang2014m,yoon2011good}. The excitonic processes that determine the
performance of TMDC-based electronic devices include
defect assisted scattering and trapping by surface states \cite{shi2013exciton}, decay via exciton-exciton
annihilation \cite{shin2014photoluminescence,ye2014exciton,konabe2014effect}, phonon assisted relaxation \cite{thilrelaxjap}, and capture by mid-gap defects through
Auger processes \cite{wang2015fast}. Excitonic processes that result in the formation of complex trions \cite{mak,berkelbach2013theory,
thiltrion} and electron-hole recombination with generation of hot carriers \cite{kozawa2014photocarrier}
are also of importance in device performances.
Dynamical processes incorporating exciton-phonon interactions underlie
the opto-electronic properties
of monolayer transition metal dichalcogenides \cite{jones2013optical}.
The strength of the interactions between coupled charge carriers and phonons
is deduced from experimental measurements of the dephasing times \cite{nie2014ultrafast},
exciton linewidths \cite{selig2016excitonic},
photoluminescence, \cite{mouri2013tunable} and other parameters such
as the exciton mobility and luminescence rise times.
The exciton formation time is determined by a complicated interplay of various
dynamical processes in the picosecond time scale \cite{siantidis2001dynamics}
and is linked to the efficient operation of optoelectronic devices.
Despite these efforts, a comprehensive understanding of how newly
generated electron-hole pairs relax energetically to
form excitons is still lacking.
Recently, decay times of $\approx$ 0.3 ps
of the transient absorption signal subsequent to
the interband excitation of monolayer WSe$_2$, MoS$_2$, and MoSe$_2$
were recorded in time-resolved measurements \cite{ceballos2016exciton}.
The ultrafast decay times were
deduced as the exciton formation times from electron-hole pairs in monolayer systems.
Motivated by these considerations, we examine a mechanism by which excitons are formed from
an initial state of unbound electron-hole pairs to account
for the observed short exciton formation time \cite{ceballos2016exciton} in
common TMDCs (MoS$_2$, MoSe$_2$, WS$_2$, and WSe$_2$).
While the focus of this paper is on the theoretical aspects of excitonic
interactions, the overall aim is to seek an understanding of the critical
factors that limit the exciton formation time which is of relevance
to experimental investigations involving device applications.
The unequal charges of the basis atoms in polar crystals allow
a moving electron to polarize its surrounding medium.
The polarization displaces the ions, giving rise to lattice vibrations at
an optical phonon frequency in resonance with the polarization field, and enables
direct Fr\"ohlich coupling between phonons and charge carriers.
In this work we consider that the excitons are created via the two-dimensional
Fr\"ohlich interaction which provides a critical pathway by which
charge carriers undergo energy loss to optical phonons at elevated temperatures
in the monolayers MoS$_2$ and other transition-metal dichalcogenides \cite{kaasbjerg2014hot}.
The exciton is a neutral quasiparticle, so polarization effects due to
longitudinal optical waves may appear to have less influence
than those associated with the individual electron or hole.
In reality, the internal state of the exciton undergoes dipole-type transitions, and measurable effects due to Fr\"ohlich interactions occur in constrained systems.
The focus on LO phonons in the exciton formation process in this study is justified by the large strength of excitonic interactions with high frequency phonons that arise due to the strong confinement of the exciton wave-functions in the real space of monolayer systems. Moreover the exciton-phonon
interaction is long ranged due to the existence of polarization effects despite
large separations between charge carriers and the ions in the material
system. The phonon-limited mobility is largely dominated by polar optical scattering
via the Fr\"ohlich interaction at room temperatures \cite{kaasbjerg12}.
Exciton formation may take place via deformation potential coupling
to acoustic phonons \cite{oh2000excitonic,thilagam1993generation}, but is likely to occur
with less efficiency due to the high exciton binding energies
\cite{makatom,chei12,ugeda2014giant,hill2015observation,chei12,komsa2012effects,thiljap}
in monolayer dichalcogenides.
In conventional semiconductors such as the two band GaAs material system, excitons are formed
via the Fr\"ohlich interaction in the picosecond
time range \cite{siantidis2001dynamics,oh2000exciton}. While excitons in
GaAs are formed predominantly at the center of
the Brillouin zone, the formation process occurs at non-central points in the momentum space
of monolayer TMDCs \cite{jones2013optical}. This gives rise to
quantitative differences in the exciton
creation times between GaAs and TMDCs. For excitation energies higher than the band-gap of monolayer systems,
the electron-hole pair creates an exciton with
a non-zero wavevector associated with its center-of-mass
motion \cite{siantidis2001dynamics,oh2000exciton}. The exciton subsequently relaxes to the
zero wavevector state with emission of acoustic or LO phonons
before undergoing radiative recombination by emitting
a photon. To this end, the formation time of an exciton as a function of exciton wave
vector is useful in analyzing the luminescence rise times that can be
measured experimentally.
In this study we employ the exciton-LO phonon
interaction operator to estimate the exciton formation times in monolayer transition metal dichalcogenides.
The formation time of excitons is determined using
the interaction Hamiltonian which describes the
conversion of the photoexcited free electron-hole pair
to a final exciton state initiated by exciton-phonon Fr\"ohlich interactions,
and accompanied by absorption or emission
of phonons. The dependence of the
exciton formation time on several parameters
such as the temperatures of the crystal lattice, charge carriers and
excitons as well as the densities of charge carriers and excitons will be closely
examined in this study.
\section{Formation of Excitons in monolayer Molybdenum Disulfide \label{basic}}
\subsection{Exciton-LO phonon Hamiltonian}
We project the single monolayer of a hexagonally ordered plane of metal atoms sandwiched between two other hexagon planes of chalcogens onto a quasi two-dimensional space \cite{cho2008,mouri2013}. The motion of the exciton is generally confined to the parallel two-dimensional $XY$ layers of the atomic planes with restricted electron and hole motion in the $z$ direction perpendicular to the monolayer plane.
The monolayer MoS$_2$ has nine
phonon branches consisting of three acoustic and six optical branches.
The two lowest optical branches are weakly coupled to
the charge carriers and are therefore not expected to play a significant role in the creation
of excitons. The next two phonon branches at the $\Sigma$ point, positioned at energies of 48 meV \cite{kaasbjerg12},
are linked to polar optical modes which play a
critical role in the formation of excitons after photoexcitation of the material system.
The roles of the homopolar dispersionless mode at 50 meV which typically occurs in layered
structures as well as the sixth phonon mode with the highest energy will not be
considered here. Due to the large difference in momentum between valleys in
TMDCs, we assume that the exciton
formation occurs via an LO phonon-assisted intravalley process which preserves the valley
polarization in the monolayer system.
The Hamiltonian term associated with the interaction between excitons and
LO phonons is obtained by summing the electron-LO phonon and hole-LO phonon interaction
Hamiltonians as follows
\bea
H^{op}({\bf r_e,r_h}) &=&
{\sum_{\bf q}} {\mathcal C}
\left [ \left ( \exp (i {\bf q \cdot r_e}) - \exp (i {\bf q \cdot r_h}) \right ) b_{\vec{q},q_z} + {\rm h.c.} \right ],\;
\label{hop} \\
{\mathcal C} &=& \frac{i e}{|\vec{q}|} \sqrt{\frac{\hbar \omega_{LO}}{2 \epsilon_o V} \left (\frac{1}{\kappa_{\infty}}-
\frac{1}{\kappa_{0}} \right )}\; {\rm erfc}[ |\vec{q}| \; \sigma/2]
\label{coup}
\eea
where ${\bf{r_e}}=\left(x_e,y_e,z_e\right) = \left(\vec{r_e},z_e\right)$
and ${\bf{r_h}} = \left( x_h,y_h,z_h \right) = \left( \vec{r_h},z_h \right)$
denote the respective space coordinates of the electron and hole, and
$\vec{r_e}$ (or $\vec{r_h}$) marked with an arrow
denotes the monolayer in-plane coordinates of the electron (or hole).
The phonon creation and annihilation operators are denoted by
$b^{\dagger}_{\vec{q},q_z}$ and $b_{\vec{q},q_z}$, respectively, where
${\bf{q}}=\left(\vec{q}, q_z\right)$ is composed
of the in-plane $\vec{q}$ and perpendicular $q_z$ components of the phonon wavevector.
The term $\omega_{LO}$ denotes the frequency of the LO phonon, $\epsilon_o$ is the permittivity of free space, and $V$ is the volume of the crystal. The static and high-frequency relative dielectric constants are given by
$\kappa_{0}$ and $\kappa_{\infty}$, respectively. The inclusion of the
complementary error function ${\rm erfc}[|\vec{q}| \, \sigma/2]$, where $\sigma$ is the effective width of the electronic
Bloch states, is based on the constrained interaction
of LO phonons with charge carriers in two-dimensional materials \cite{kaasbjerg12}.
For the monolayer MoS$_2$, the Fr\"ohlich coupling constant of 98 meV and an effective
width $\sigma$ = 4.41 \AA \; provide good fit to the interaction energies
evaluated from first principles in the long-wavelength limit \cite{kaasbjerg12}.
Due to dielectric screening,
the Fr\"ohlich interaction decreases with increasing phonon momentum,
and larger coupling values ($\ge$ 330 meV) were obtained in the small-momentum
limit in another study \cite{sohier2016two}. The Fr\"ohlich coupling constants obtained in
earlier works \cite{kaasbjerg12,sohier2016two} will be used in this study to compute the
formation times of excitons.
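The momentum dependence introduced by the erfc factor can be illustrated with a few lines of Python; this is a minimal sketch, where $g_0$ = 98 meV and $\sigma$ = 4.41 \AA \; are the long-wavelength fit values quoted above and the function name is an assumption of ours, not notation from the cited works.

```python
from math import erfc

def frohlich_coupling(q, g0=98.0, sigma=4.41e-10):
    """Momentum-dependent 2D Froehlich coupling g(q) = g0 * erfc(q*sigma/2),
    with g0 in meV and the in-plane phonon momentum q in 1/m."""
    return g0 * erfc(q * sigma / 2.0)

# The coupling is strongest in the long-wavelength limit and decays with q:
vals = [frohlich_coupling(q) for q in [0.0, 1e9, 5e9, 2e10]]
print(vals)  # monotonically decreasing from 98 meV
```

The rapid fall-off for $|\vec{q}| \gtrsim 2/\sigma$ is what restricts the effective interaction to small phonon momenta in the formation-rate sums below.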
The field operator $\hat\Psi^\dagger_{_{e-h}}$
of a pair of electron and hole with a centre of mass that moves freely
is composed of electron and hole operators as follows
\be
\label{field}
\hat\Psi^\dagger_{_{e-h}}({\vec{R}},{\vec{r}},z_e,z_h)
= \frac{1}{A} \sum_{{\vec{K}, \vec{k}}}
e^{- i\vec{K} \cdot \vec{R}} e^{- i\vec{k} \cdot \vec{r}} \psi_e(z_e) \; \psi_h(z_h) \;
a_{v,{\alpha_h \vec{K} - \vec{k}}}^{\dagger}\; a_{c,{\alpha_e \vec{K} + \vec{k}}}^{\dagger}\;,
\ee
where $A$ is the two-dimensional quantization area in the monolayer plane,
and $a_{v,\vec{K}}^{\dagger}$ ($a_{c,\vec{K}}^{\dagger}$) are the respective hole
and electron creation operators with in-plane wavevector $\vec{K}$.
The center-of-mass wavevector $\vec{K} = \vec{k_e}+\vec{k_h}$ and the wavevector of the
relative motion $\vec{k} = \alpha_h \vec{k_e}- \alpha_e \vec{k_h}$ where
$\vec{k_e}$ ($\vec{k_h}$) is the electron (hole) in-plane wavevector,
with $\alpha_e ={m_e}/(m_e+m_h)$, $\alpha_h = {m_h}/(m_e+m_h)$
where $m_e$ ($m_h$ ) is the effective mass of the electron (hole).
In Eq.\ref{field}, the excitonic center of mass coordinate $\vec{R}$ and relative
coordinate $\vec{r}$ parallel to the monolayer plane are given by
\bea
\label{coord}
\vec{R} &=& \alpha_e \vec{r_e} + \alpha_h \vec{r_h}\;, \\
\nonumber
\vec{r} &=& \vec{r_e} -\vec{r_h}.
\eea
The electron and hole wave functions ($\psi_e(z_e)$, $\psi_h(z_h)$)
in the lowest-energy states are given by $\mathcal{N} \; \cos[\frac{\pi z_j}{L_w}]$ (j = e,h)
for $|z_j| \le \frac{L_w}{2}$, and 0 for $|z_j| > \frac{L_w}{2}$. The term $\mathcal{N}$ denotes the normalization constant and
$L_w$ is the average displacement of electrons and holes in the $z$ direction
perpendicular to the monolayer surface \cite{thilrelaxjap}.
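The normalization constant $\mathcal{N}$ can be checked numerically; the sketch below assumes the closed form $\mathcal{N} = \sqrt{2/L_w}$ (not stated explicitly in the text) and verifies that the resulting density integrates to unity across the well.

```python
import math

def psi_sq(z, L_w):
    """|psi(z)|^2 for the lowest-energy state psi = sqrt(2/L_w) * cos(pi z / L_w),
    nonzero only for |z| <= L_w/2 (N = sqrt(2/L_w) is an assumed closed form)."""
    if abs(z) > L_w / 2.0:
        return 0.0
    return (2.0 / L_w) * math.cos(math.pi * z / L_w) ** 2

def norm_z(L_w=6e-10, n=100000):
    """Midpoint-rule integral of |psi|^2 across the well of width L_w (here ~6 Angstrom)."""
    dz = L_w / n
    return sum(psi_sq(-L_w / 2.0 + (i + 0.5) * dz, L_w) * dz for i in range(n))

print(norm_z())  # ~ 1.0, so N = sqrt(2/L_w) normalizes the cosine ground state
```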
\subsection{Exciton creation Hamiltonian}
The field operator $\hat\Psi^\dagger_{ex}$
of an exciton located at $\left(\vec{R},\vec{r},z_e,z_h \right)$
differs from the field operator $\hat\Psi^\dagger_{_{e-h}}$
of a free moving pair of electron and hole (see Eq.\ref{field}), and
is given by \cite{taka2,oh2000exciton,thilagam2006spin}
\be
\label{fieldex}
\hat\Psi^\dagger_{ex}({\vec{R}},{\vec{r}},z_e,z_h)
= \frac{1}{A} \sum_{\vec{K}}
e^{- i\vec{K} \cdot \vec{R}} \; \rho_{ex}^\star(\vec{r}) \; \psi_e(z_e) \; \psi_h(z_h) \;
B_{\vec{K}}^{\dagger},
\ee
where $B_{\vec{K}}^{\dagger}$ is the exciton creation operator with
center-of-mass wavevector $\vec{K}$ parallel to the monolayer plane.
The 1s two-dimensional exciton wavefunction
$\rho_{ex}(\vec{r})$ = $\sqrt{\frac{2 \beta^2}{\pi}} \exp(- \beta |\vec{r}|)$
where $\beta$ is a variational parameter.
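As a quick consistency check, the 1s wavefunction above should be unit-normalized over the monolayer plane; the following sketch verifies $\int |\rho_{ex}(\vec{r})|^2 \, d\vec{r} = 1$ numerically for an illustrative (assumed) value of $\beta$.

```python
import math

def rho2(r, beta):
    """Probability density |rho_ex(r)|^2 of the 2D 1s exciton,
    rho_ex = sqrt(2*beta^2/pi) * exp(-beta*r)."""
    return (2.0 * beta**2 / math.pi) * math.exp(-2.0 * beta * r)

def norm_2d(beta, rmax_factor=20.0, n=200000):
    """Midpoint-rule evaluation of int_0^inf |rho_ex|^2 * 2*pi*r dr."""
    rmax = rmax_factor / beta
    dr = rmax / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += rho2(r, beta) * 2.0 * math.pi * r * dr
    return total

beta = 1e9  # illustrative inverse length of 1/nm (assumed, not a fitted value)
print(norm_2d(beta))  # ~ 1.0, confirming unit normalization in the plane
```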
Using Eqs. \ref{hop}, \ref{field} and \ref{fieldex}, the Hamiltonian associated with the
formation of an exciton from an initial state of free electron-hole
pair with absorption/emission of an LO phonon appear as
\bea
\label{int}
H^{F}_I &=& \frac{1}{\sqrt{A}}
\sum_{\vec{K},\vec{k},\vec{q},q_z} \; \lambda_o \; {\mathcal F_-}(\vec{k},\vec{q},q_z)\;
{\rm erfc[} \frac{|\vec{q}| \; \sigma}{2}]
B_{\vec{K}}^{\dagger}\; b_{\vec{q}, q_z} \;
a_{v,{\alpha_h (\vec{K}-\vec{q}) - \vec{k}}}\; a_{c,{\alpha_e (\vec{K}-\vec{q}) + \vec{k}}}\; \\
\nonumber
&+& \lambda_o \; {\mathcal F_+}(\vec{k},\vec{q},q_z)\; {\rm erfc[} \frac{|\vec{q}| \; \sigma}{2}] \;
B_{\vec{K}}^{\dagger}\; b_{\vec{q}, q_z}^{\dagger}
a_{v,{\alpha_h (\vec{K}+\vec{q}) - \vec{k}}}\; a_{c,{\alpha_e (\vec{K}+\vec{q}) + \vec{k}}}\;,
\\ \nonumber \\ \label{ffact}
{\mathcal F_{\mp}}(\vec{k},\vec{q},q_z) &=& {\mathcal F_e}(\pm q_z)\; {\mathcal G}(\vec{k} \pm \alpha_h \vec{q})
- {\mathcal F_h}(\pm q_z) \; {\mathcal G} (\vec{k} \mp \alpha_e \vec{q}),
\\ \label{ffact2}
{\mathcal G}(\vec{k} \pm \alpha_i \vec{q}) &=& \int \; d\vec{r}\, \rho_{ex}^\star(\vec{r})\;
e^{i(\vec{k} \pm \alpha_i \vec{q}) \cdot \vec{r}},
\\ \label{ffact3}
{\mathcal F_i}(q_z) &=& \int \; dz_i\, |\psi_i(z_i)|^2\; e^{i q_z z_i}, \; \; {\rm i = e,h}
\eea
where the coupling constant $\lambda_o = \sqrt{\frac{e^2 L_m \hbar \omega_{LO}}{2 \epsilon_o A} (\frac{1}{\kappa_{\infty}}- \frac{1}{\kappa_{0}})}$ and $L_m$ is the monolayer thickness. The form factor ${\mathcal G}$ is evaluated using the
explicit form of the two-dimensional exciton wavefunction $\rho_{ex}(\vec{r})$.
Likewise the second form factor ${\mathcal F}$ is computed using the
electron wavefunction $\psi_e(z_e)$ and hole wavefunction
$\psi_h(z_h)$.
\subsection{Exciton formation rate}
For transitions involving a single phonon with wavevector $\vec{q}$,
the formation rate of the exciton with wavevector $\vec{K}$
is computed by employing the Fermi golden rule
and the interaction operator in Eq.\ref{int}
as follows
\bea
\label{rate}
W(\vec{K} \pm \vec{q},q_z) &=& \frac{1}{A} \frac{2 \pi}{\hbar} \;
|\lambda_o|^2 \; |{\mathcal F_{\pm}}(\vec{k},\vec{q},q_z)|^2 \; {\rm erfc[} \frac{|\vec{q}| \; \sigma}{2}]^2 \;
f_h(\alpha_h (\vec{K} \pm \vec{q}) - \vec{k}) \; f_e(\alpha_e (\vec{K} \pm \vec{q}) + \vec{k})
\\
\nonumber &\times& (f_{ex}(\vec{K})+1) \;
(\overline n_{\bf q} + \frac{1}{2} \pm \frac{1}{2}) \; \delta(E_{ex} - E_{eh}^\pm \pm \hbar \omega_{LO}),
\eea
where the emission (absorption) of phonon is denoted by $+$ ($-$), and the exciton energy
$ E_{ex} = E_g + E_{_b}+ \frac{\hbar^2 |\vec{K}|^2}{2 (m_e+m_h)}$ where $E_{_b}$ is the
exciton binding energy. The energy of the unbound electron-hole pair is $E_{eh}^\pm = E_g +
\frac{\hbar^2 |\vec{K} \pm \vec{q}|^2}{2 (m_e + m_h)} + \frac{\hbar^2 |\vec{k}|^2}{2 \mu}$,
where the reduced mass $\mu$ is obtained from $\frac{1}{\mu} = \frac{1}{m_e} + \frac{1}{m_h}$.
At low temperatures of the charge carriers, the phonon bath can be considered
to be in thermal equilibrium, with negligible phonon-phonon scattering and phonon decay
processes. The thermalized average occupation of phonons is then
given by
\be
\label{phon}
{\overline n_{q}} = [\exp \left(\frac{\hbar \omega_{LO}}{k_B T_l}\right)-1]^{-1},
\ee
where $T_l$ is the lattice
temperature and $\hbar \omega_{LO}$ is the energy of the LO phonon that is emitted
during the exciton generation process.
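Eq.\ref{phon} can be evaluated directly; a short sketch with $\hbar \omega_{LO}$ = 48 meV shows that the occupation is negligible at the effective lattice temperature of 15 K used later, so exciton formation proceeds essentially by phonon emission. The helper name and the meV value of $k_B$ are our own choices.

```python
import math

KB_MEV = 0.08617  # Boltzmann constant in meV/K

def n_phonon(T_l, E_lo=48.0):
    """Bose-Einstein occupation of the LO phonon (E_lo in meV, lattice T_l in K)."""
    return 1.0 / (math.exp(E_lo / (KB_MEV * T_l)) - 1.0)

# At 15 K the 48 meV LO mode is essentially frozen out; at room temperature
# a non-negligible thermal population also allows phonon absorption:
print(n_phonon(15.0))
print(n_phonon(300.0))
```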
The relaxation of electrons and holes at sufficiently high temperatures
($\approx$ 200 K) generally drives the phonons out of equilibrium, as
phonon-phonon related processes become dominant.
The phonon Boltzmann equation \cite{kaasbjerg2014hot}
takes into account a common temperature that is reached through
equilibration between electrons and phonons.
Hot-phonon effects are incorporated by replacing
the temperature $T_l$ in Eq.\ref{phon} by an effective lattice temperature
$T_{ph}$ \cite{kaasbjerg2014hot}.
The charge carriers are assumed to be in quasi-thermal equilibrium during the exciton formation
process. Consequently the occupation numbers ($f_h(K)$, $f_e(K)$) of hole and electron states in Eq.\ref{rate}
can be modeled using the Fermi-Dirac distribution
\bea
\label{fd}
f^i(K_i) &=&\left[ \exp \left( \frac{E(K_i) - \mu_i}{K_B T_i}\right ) + 1 \right]^{-1}, \quad \; { i = e, h}\\
\label{fd2}
\mu_{i} &=& K_B T_i \ln \left[\exp \left(\frac{\pi \hbar^2 n_i}{m_i K_B T_i} \right) - 1 \right],
\eea
where the chemical potential $\mu_i$ is dependent on
the temperature $T_{i}$ and the two-dimensional density $n_i$ of the charge carriers.
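The 2D chemical potential $\mu_i = k_B T_i \ln[\exp(\pi \hbar^2 n_i / m_i k_B T_i) - 1]$ can be evaluated numerically; a minimal sketch for electrons in MoS$_2$ ($m_e$ = 0.51 $m_o$, as used later in the text) follows, where the function name and unit conversions are assumptions of ours.

```python
import math

HBAR = 1.054571817e-34   # J s
KB   = 1.380649e-23      # J/K
ME   = 9.1093837015e-31  # kg (free electron mass)
MEV  = 1.602176634e-22   # J per meV

def mu_2d(n, m_eff, T):
    """Chemical potential (meV) of a 2D carrier gas with density n (m^-2),
    effective mass m_eff (units of m_o), temperature T (K):
    mu = kB*T * ln[exp(pi*hbar^2*n / (m*kB*T)) - 1]."""
    x = math.pi * HBAR**2 * n / (m_eff * ME * KB * T)
    return KB * T * math.log(math.expm1(x)) / MEV

# n = 1e11 cm^-2 = 1e15 m^-2: the electron gas is non-degenerate, so mu < 0
print(mu_2d(1e15, 0.51, 300.0))
```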
When the mean inter-excitonic distance is larger than
the exciton Bohr radius, as is considered to be the case in this study,
the exciton can be assumed to be an ideal boson \cite{thilpauli}
with a Bose-Einstein distribution \cite{ivanov1999bose}
\bea
\label{bes}
f^{ex}(K) &=&\left[ \exp \left( \frac{E(K) - \mu_{ex}}{K_B T_{ex}}\right ) - 1 \right]^{-1},\\
\label{bes2}
\mu_{ex} &=& K_B T_{ex} \ln \left[1-\exp \left(- \frac{2 \pi \hbar^2 n_{ex}}{g (m_e+m_h) K_B T_{ex}} \right) \right],
\eea
where $\mu_{ex}$ is the exciton chemical potential, $T_{ex}$ is the exciton temperature
and $n_{ex}$ is the exciton density. The degeneracy factor $g$ is obtained
as the product of the spin and valley degeneracy factors \cite{kaasbjerg12}.
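A companion sketch for Eq.\ref{bes2}: at the dilute exciton densities considered here, the chemical potential stays negative, as required for a well-defined Bose-Einstein distribution. The total mass $M = m_e + m_h = 1.09\,m_o$ and $g$ = 4 (spin $\times$ valley degeneracy of $2 \times 2$) are assumed illustrative values for MoS$_2$.

```python
import math

HBAR = 1.054571817e-34   # J s
KB   = 1.380649e-23      # J/K
ME   = 9.1093837015e-31  # kg
MEV  = 1.602176634e-22   # J per meV

def mu_exciton(n_ex, T_ex, g=4, m_exc=1.09):
    """Exciton chemical potential (meV) of a 2D ideal Bose gas:
    mu_ex = kB*T * ln[1 - exp(-2*pi*hbar^2*n_ex / (g*M*kB*T))],
    with n_ex in m^-2, T_ex in K, and M = m_exc * m_o."""
    x = 2.0 * math.pi * HBAR**2 * n_ex / (g * m_exc * ME * KB * T_ex)
    return KB * T_ex * math.log(1.0 - math.exp(-x)) / MEV

# n_ex = 1e11 cm^-2 = 1e15 m^-2 at T_ex = 50 K: mu_ex is well below zero
print(mu_exciton(1e15, 50.0))
```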
\subsection{Numerical results of the Exciton formation Time}
The formation time $T(\vec{K})$ of an exciton with
wavevector $\vec{K}$ is obtained by summing the rate in Eq.\ref{rate}
over the wavevectors $(\vec{k},\vec{q},q_z)$:
\be
\label{formt}
\frac{1}{T(\vec{K})} = \sum_{\vec{k},\vec{q},q_z} \; W(\vec{K} \pm \vec{q},q_z)
\ee
To obtain quantitative estimates of the exciton formation time using Eq.\ref{formt}, we use the monolayer MoS$_2$ material parameters as $m_e$ = 0.51 $m_o$, $m_h$ = 0.58 $m_o$ \cite{jin2014intrinsic}
where $m_o$ is the free electron mass, and the coupling constant $\alpha_o$= 330 meV \cite{sohier2016two}.
We set the phonon energy $\hbar \omega_{LO}$ = 48 meV \cite{kaasbjerg12}, and the layer thickness $h$ = 3.13 \AA \; \cite{ding2011first} is used to determine
the upper limit of $\approx$ 6 \AA \; for $L_w$,
the average displacement of electrons and holes in the direction perpendicular to the plane of the
monolayer. We fix the effective lattice temperature
$T_{ph}$ = 15 K, but vary the electron and hole temperatures,
$T_e$ and $T_h$.
Fig. \ref{formK}a,b show the calculated values (using Eqs. \ref{ffact}- \ref{formt}) of the exciton formation
times as a function of exciton wavevector $\vec{K}$ with emission of an LO phonon
at different electron, hole and exciton temperatures
and densities, $n_e$ = $n_h$ = $n_{ex}$ = 1 $\times$ 10$^{11}$ cm$^{-2}$ and 5 $\times$ 10$^{11}$ cm$^{-2}$.
To obtain the results, we assume the temperatures to be the same for excitons and
unbound electron-hole pairs.
The results indicate that very fast
exciton formation times of less than one picosecond occur at
charge densities of 5 $\times$ 10$^{11}$ cm$^{-2}$ and carrier temperatures below 300 K.
These ultrafast sub-picosecond exciton formation times are in agreement with recent experimental findings \cite{ceballos2016exciton} recorded at room temperatures in the monolayer MoS$_2$.
The exciton formation times are increased at the lower carrier densities of 1 $\times$ 10$^{11}$ cm$^{-2}$.
The wavevector of exciton states formed due to optical excitation
of the ground state of the crystal lies close
to zero due to selection rules. The results in Fig. \ref{formK}a,b
show that while excitons are dominantly created at $|\vec{K}|$ = 0 at low charge carrier temperatures ($\approx$ 50 K), exciton formation occurs most rapidly at non-zero exciton center-of-mass wavevectors
($|\vec{K}|_f \neq 0$) at higher temperatures ($T_e$ = $T_h$ $\ge $ 140 K) of the charge carriers.
At $T_e$ = $T_h$ $\approx $ 300 K, the shortest exciton formation time occurs at $|\vec{K}|_f$ =
0.04 $\times$ 10$^{10}$ m$^{-1}$ (about 5.6 meV).
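The conversion between the quoted wavevectors and center-of-mass kinetic energies can be checked with a short script; it assumes only $E = \hbar^2 |\vec{K}|^2 / 2(m_e + m_h)$ with the MoS$_2$ masses given above, and reproduces the $\approx$ 5.6 meV figure.

```python
HBAR = 1.054571817e-34   # J s
ME   = 9.1093837015e-31  # kg (free electron mass)
MEV  = 1.602176634e-22   # J per meV

def com_kinetic_energy(K, m_exc=0.51 + 0.58):
    """Center-of-mass kinetic energy hbar^2 K^2 / (2 M) in meV, for exciton
    wavevector K (m^-1) and total mass M = m_e + m_h in units of m_o."""
    return (HBAR * K) ** 2 / (2.0 * m_exc * ME) / MEV

# The wavevector quoted in the text corresponds to roughly 5.6 meV:
print(com_kinetic_energy(0.04e10))
```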
The results in Fig. \ref{formK}a,b indicate that at exciton wavevectors greater than
0.06 $\times$ 10$^{10}$ m$^{-1}$, there is a notable increase in the exciton formation times linked to low electron-hole
plasma temperatures $T_e = T_h \le$ 80 K. At high carrier temperatures there is likely
conversion of newly formed composite bosons such as excitons into fermionic fragment species \cite{thilagam2013crossover}. Including
the quantum mechanical crossover of excitons into charge carriers at higher plasma temperatures
would improve the accuracy of the computed exciton formation times.
This currently lies beyond the scope of this study and will be considered in future
investigations where the role of the composite structure of excitons
in their formation rate will be examined.
\begin{figure}[htp]
\begin{center}
\subfigure{\label{Ka}\includegraphics[width=8.5cm]{WaveV}}\vspace{-1.1mm} \hspace{4.7mm}
\subfigure{\label{Kb}\includegraphics[width=8.5cm]{WaveV8}}\vspace{-1.1mm} \hspace{1.1mm}
\end{center}
\caption{(a)
The exciton formation time as a function of the exciton center-of-mass wavevector $|\vec{K}|$
in the monolayer MoS$_2$ at different temperatures, $T_e$ = $T_h$ = $T_{ex}$ (50 K, 80 K, 140 K, 200 K, 300 K).
We use $m_e$ = 0.51 $m_o$, $m_h$ = 0.58 $m_o$ \cite{jin2014intrinsic}
where $m_o$ is the free electron mass, the coupling constant $\alpha_o$ = 330 meV \cite{sohier2016two}
and $\hbar \omega_{LO}$ = 48 meV \cite{kaasbjerg12}.
The effective lattice temperature
$T_{ph}$ = 15 K, $L_w$ $\approx$ 6 \AA \;, the exciton binding energy,
$E_b$ = 300 meV \cite{thiljap} and densities, $n_e$ = $n_h$ = $n_{ex}$ = 1 $\times$ 10$^{11}$ cm$^{-2}$. \\
(b) The exciton formation time as a function of the center-of-mass wavevector $|\vec{K}|$
in the monolayer MoS$_2$ at different temperatures, $T_e$ = $T_h$ = $T_{ex}$ (50 K, 80 K, 140 K, 200 K, 300 K).
All other parameters used are the same as specified in (a) with the exception of
densities, $n_e$ = $n_h$ = $n_{ex}$ = 5 $\times$ 10$^{11}$ cm$^{-2}$.}
\label{formK}
\end{figure}
The effect of variations within the electron-hole plasma temperatures, i.e., differences
between $T_e$ and $T_h$, on the exciton formation time is illustrated in Fig. \ref{diff}.
The formation times are computed at different exciton center-of-mass
wavevectors with the electron temperature fixed at $T_e$ = 250 K, and
exciton temperature $T_{ex}$ = 50 K. The charge densities $n_e$ = $n_h$ = $n_{ex}$ = 5 $\times$ 10$^{11}$ cm$^{-2}$
and all other parameters used are the same as specified in the caption for Fig. \ref{formK}.
At the larger wavevector $|\vec{K}|$ = 0.07 $\times$ 10$^{10}$ m$^{-1}$ ($\approx$ 17.1 meV)
the formation time is shortest when the hole temperature $T_h \approx$ 120 K.
With decrease in the center-of-mass wavevector $|\vec{K}|$, there is a corresponding
decrease in the formation time as the hole temperature is lowered further below
the electron temperature. At the low exciton
wavevector $|\vec{K}|$ = 0.005 $\times$ 10$^{10}$ m$^{-1}$,
the shortest formation time occurs when the difference between $T_e$ and
$T_h$ reaches the maximum possible value. These results demonstrate the interplay of competing hole-phonon and electron-phonon dynamics on a picosecond time scale, which
results in a non-monotonic dependence of the exciton formation time
on the temperature difference $|T_h - T_e|$.
\begin{figure}[htp]
\begin{center}
\subfigure{\label{figa}\includegraphics[width=9.9 cm]{holetemp}}\vspace{-1.1mm} \hspace{1.1mm}
\end{center}
\caption{The exciton formation time as a function of hole temperature
at different exciton center-of-mass wavevector $|\vec{K}|$ = 0.07 $\times$ 10$^{10}$ m$^{-1}$ (17.1 meV, red),
0.05 $\times$ 10$^{10}$ m$^{-1}$ (8.7 meV, blue), 0.02 $\times$ 10$^{10}$ m$^{-1}$ (1.4 meV, magenta),
0.005 $\times$ 10$^{10}$ m$^{-1}$ (0.08 meV, green). The electron temperature is fixed at $T_e$ = 250 K,
exciton temperature $T_{ex}$ = 50 K and the carrier densities $n_e$ = $n_h$ = $n_{ex}$ = 5 $\times$ 10$^{11}$ cm$^{-2}$. All other parameters used are the same as specified in the caption for Fig. \ref{formK}.}
\label{diff}
\end{figure}
In Fig. \ref{concf}, the exciton formation time is plotted
as a function of the carrier density $n_e$ = $n_h$ at
different temperatures, $T_e$ = $T_h$ = $T_{ex}$ (300 K, 200 K, 100 K).
The exciton center-of-mass wavevector $|\vec{K}|$ = 0.03 $\times$ 10$^{10}$ m$^{-1}$.
All other parameters used are the same as specified in the caption for Fig. \ref{formK}.
Using the numerical values of the formation times,
we performed numerical fits using the following relation which involve the carrier concentrations \cite{oh2000exciton}
\be
\label{fitT}
T(n_i) = \frac{B}{n_i^p} \quad \; { i = e, h}
\ee
where $B$ and $p$ are fitting parameters. From the results used
to obtain Fig. \ref{concf}, we get $B$ = 20.64 at $T_e$ = $T_h$ = 300 K,
$B$ = 10.35 at $T_e$ = $T_h$ = 200 K, $B$ = 3.56 at $T_e$ = $T_h$ = 100 K
and $B$ = 1.54 at $T_e$ = $T_h$ = 50 K. The constant $p \approx$ 2 irrespective
of the electron and hole temperatures. This implies an inverse
square-law dependence of the exciton formation time
on the electron/hole concentration. Consequently a square-law
dependence of the photoluminescence on excitation density is expected
to arise in the monolayer MoS$_2$ as well as other monolayer transition metal
dichalcogenides.
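The power-law fit of Eq.\ref{fitT} reduces to linear least squares in log-log space; a minimal sketch follows. The synthetic data below merely illustrate that an exact inverse-square law (using the $B$ = 20.64 value quoted for 300 K, with densities in units of 10$^{11}$ cm$^{-2}$) is recovered; they are not the computed formation-time data.

```python
import math

def fit_power_law(ns, times):
    """Least-squares fit of T(n) = B / n^p in log-log space; returns (B, p)."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(t) for t in times]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope

# Synthetic data obeying an exact inverse-square law, T = 20.64 / n^2:
ns = [0.5, 1.0, 2.0, 5.0, 10.0]
times = [20.64 / n ** 2 for n in ns]
B, p = fit_power_law(ns, times)
print(B, p)  # recovers B ~ 20.64 and p ~ 2
```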
\begin{figure}[htp]
\begin{center}
\subfigure{\label{figa}\includegraphics[width=9.0cm]{conc}}\vspace{-1.1mm} \hspace{1.1mm}
\end{center}
\caption{The exciton formation time as a function of the carrier density $n_e$ = $n_h$ at
different temperatures, $T_e$ = $T_h$ = $T_{ex}$ (300 K, 200 K, 100 K).
The exciton center-of-mass wavevector $|\vec{K}|$ = 0.03 $\times$ 10$^{10}$ m$^{-1}$.
All other parameters used are the same as specified in the caption for Fig. \ref{formK}.}
\label{concf}
\end{figure}
\section{Exciton formation times for other exemplary monolayer transition metal
dichalcogenides}
The theoretical results obtained in this study for MoS$_2$ are expected to be applicable to
other low dimensional transition metal dichalcogenides. However, subtle variations
in the exciton formation times are expected due to differences in the exciton-LO coupling
strengths and energies of the LO phonon in the monolayer materials.
The bare Fr\"ohlich interaction strengths obtained via ab initio techniques
give 334 meV (MoS$_2$), 500 meV (MoSe$_2$), 140 meV (WS$_2$) and 276 meV (WSe$_2$) \cite{sohier2016two},
hence the Molybdenum-based TMDCs possess higher exciton-phonon coupling
strengths than the Tungsten-based TMDCs.
A precise estimate of the exciton binding energy in the
monolayer TMDCs is not available; however, binding energies ranging from 100 to 800 meV have been
reported for the monolayer systems \cite{hanbicki2015measurement,olsen2016simple,makatom,chei12,ugeda2014giant,
hill2015observation,choi2015linear,komsa2012effects,thiljap}. In order to compare
the exciton formation rates between Molybdenum-based TMDCs and
Tungsten-based TMDCs, we make use of the effective masses of electron and holes
at the $K$ energy valleys/peak position given in Ref. \cite{jin2014intrinsic}
and the Fr\"ohlich interaction strengths and
LO phonon energies given in Ref.\cite{sohier2016two}. To simplify the numerical
analysis, we fix the exciton binding
energies at $\approx$ 330 meV for all the TMDCs under investigation. This assumption is not expected
to affect the order of magnitude of the exciton formation times, nor to
detract from the analysis of the effects of Fr\"ohlich interaction strengths
on the formation times.
The results in Fig. \ref{comp}a,b show that the exciton formation times
of the selenide-based dichalcogenides
are smaller than those of the sulphide-based dichalcogenides at $T_e$ = $T_h$ = $T_{ex}$ = 100 K
and 300 K (with $|\vec{K}| \le$ 0.05 $\times$ 10$^{10}$ m$^{-1}$).
This is due to the comparatively higher Fr\"ohlich interaction strengths and lower LO phonon energies
of monolayer MoSe$_2$ and WSe$_2$. The results in Fig. \ref{comp}a,b also indicate that
excitons in the monolayer WS$_2$ are
dominantly created at non-zero center-of-mass wavevectors compared to the other three monolayer
dichalcogenide systems. This may be attributed to the comparatively lower effective masses
of hole and electron in the monolayer WS$_2$.
It is instructive to compare
the exciton formation times in Fig. \ref{comp}a,b with the
radiative lifetimes of zero in-plane momentum
excitons in a suspended MoS$_2$
monolayer, $\approx$ 0.18 - 0.30 ps at 5 K
\cite{wang2016radiative}. The effective exciton lifetime depends linearly on the
exciton temperature, increasing from the picosecond range at low temperatures
to more than 1 ns at room temperature. This indicates that the exciton formation processes
are likely to dominate in the initial period when the TMDCs are optically excited
at high exciton temperatures. In the low temperature range (5 K - 20 K),
an interplay of the competing effects of exciton generation and radiative decay
is expected to occur on the sub-picosecond time scale.
Environmental parameters such as impurity concentration, exciton density and density of
excess charge carriers that affect the stability of low dimensional trions will need
to be taken into account in order to accurately model the exciton generation process at the low temperature
regime.
The exciton formation scheme adopted in this study has been parameterized by physical quantities such
as the exciton density and charge carrier densities. It is not immediately clear whether
these parameters can be extracted directly using ab-initio quantum mechanical and
time-dependent density functional theory approaches.
Computations based on ab-initio techniques are generally numerically intensive and
time consuming, which remains the main challenge in modeling low dimensional material systems. It is expected
that improvements in first principles modeling of anisotropic systems may result
in more efficient and rewarding approaches to determining the density functions of excitons
and charge carriers in future investigations. The Auger process provides a non-radiative decay
channel for electron-hole pair recombination, hence this mechanism must be taken
into account for accurate predictions of exciton formation times in future studies.
\begin{figure}[htp]
\begin{center}
\subfigure{\label{Ka}\includegraphics[width=8.5cm]{comp-100}}\vspace{-1.1mm} \hspace{4.7mm}
\subfigure{\label{Kb}\includegraphics[width=8.5cm]{comp-300}}\vspace{-1.1mm} \hspace{1.1mm}
\end{center}
\caption{(a)
The exciton formation time as a function of the exciton center-of-mass wavevector $|\vec{K}|$
in common monolayer systems (MoS$_2$, MoSe$_2$, WS$_2$, WSe$_2$)
at temperatures, $T_e$ = $T_h$ = $T_{ex}$ = 100 K. The effective masses of electron and holes
at the $K$ energy valleys/peak are taken from Ref. \cite{jin2014intrinsic}
and the Fr\"ohlich interaction strengths and LO phonon energies are obtained from Ref.\cite{sohier2016two}.
The effective lattice temperature
$T_{ph}$ = 15 K, $L_w$ $\approx$ 6 \AA \;, and
densities, $n_e$ = $n_h$ = $n_{ex}$ = 5 $\times$ 10$^{11}$ cm$^{-2}$.\\
(b) The exciton formation time as a function of the exciton center-of-mass wavevector $|\vec{K}|$
in common monolayer systems at temperatures, $T_e$ = $T_h$ = $T_{ex}$ = 300 K.
All other parameters used are the same as specified in (a) above.
}
\label{comp}
\end{figure}
\section{Conclusion \label{conc}}
Transition metal dichalcogenides have emerged as promising materials in which
excitons exist as stable quasi-particles with high binding energies and thus play
important roles in the optical processes of monolayer TMDCs.
The dynamics of excitons in monolayer transition metal
dichalcogenides has been extensively studied over the last
five years in terms of both theory and applications. However
the formation of excitons from free carriers has only recently been measured, and
in this work we develop a model within the framework of Fermi's Golden rule to calculate the
formation dynamics of excitons from free carriers.
This theoretical study is aimed at providing
a fundamental understanding of the exciton generation process
in optically excited monolayer transition metal dichalcogenides.
We focus on a mechanism by which excitons are generated via
the LO (longitudinal optical) phonon-assisted scattering process from free electron-hole pairs
in layered structures. The exciton formation time
is computed as a function of the exciton center-of-mass wavevector,
electron and hole temperatures and densities for known values of
the Fr\"ohlich coupling constant, LO phonon energy, lattice temperature and the exciton binding energy.
Our results show that excitons are generated at
non-zero wavevectors at higher charge-carrier temperatures ($\ge$ 120 K),
with an additional dependence on the density of the electrons and holes.
The inverse square-law dependence of the exciton formation time
on the density of charge carriers is also demonstrated by the
results of this study.
For monolayer MoS$_2$, we obtain exciton formation times on the picosecond time scale at
charge densities of 1 $\times$ 10$^{11}$ cm$^{-2}$ and carrier temperatures less than 100 K.
The exciton formation times decrease to the sub-picosecond time range at higher densities
(5 $\times$ 10$^{11}$ cm$^{-2}$) and electron-hole plasma temperatures ($\le$ 300 K).
These ultrafast formation times are in agreement with recent experimental results
($\approx$ 0.3 ps) for WSe$_2$, MoS$_2$, and MoSe$_2$ \cite{ceballos2016exciton}.
Due to the comparatively higher Fr\"ohlich interaction strengths and lower LO phonon energies
of monolayer MoSe$_2$ and WSe$_2$, the exciton formation times
of the selenide-based dichalcogenides
are smaller than those of the sulphide-based dichalcogenides at $T_e$ = $T_h$ = $T_{ex}$ = 100 K
and 300 K (with $|\vec{K}| \le$ 0.05 $\times$ 10$^{10}$ m$^{-1}$).
The results of this study are expected to be useful in understanding
the role of the exciton formation process in electroluminescence
studies \cite{sundaram2013electroluminescence,ye2014exciton} and exciton-mediated processes
in photovoltaic devices \cite{bernardi2013extraordinary,wi2014enhancement,tsuboi2015enhanced}.
\section{Introduction}
Energy calibration is the first and most essential step at a newly installed accelerator facility. Several reactions, such as $^{7}Li(p,n)$, $^{13}C(p,n)$, $^{27}Al(p,n)$, $^{27}Al(p,\gamma)$ and $^{19}F(p,n)$, are used for energy calibration at accelerator facilities across the globe. Among these, neutron-emitting reactions are popular because neutrons are relatively easy to detect. Every neutron-producing reaction has a particular energy, called the neutron threshold energy, which is the minimum incident-beam energy required to produce neutrons. The present study has been carried out keeping in mind the Facility for Research in Experimental Nuclear Astrophysics (FRENA)\cite{Ref1} located at Saha Institute of Nuclear Physics, INDIA. FRENA is a 3 MV Tandetron low-energy, high-current accelerator dedicated to low-energy nuclear astrophysics research. This is a new machine, and energy calibration is needed before performing any experiment. A few neutron-producing reactions are generally chosen for calibration purposes. Neutrons generated in such experiments can interact with any element present in the accelerator hall and produce different isotopes. If the produced isotopes are radioactive and have long half-lives (a few months to several years), they will contribute background gamma peaks to subsequent experiments. These background peaks will overlap with the gammas from the actual experiments if their energies are very close to those expected from the astrophysical measurements. A systematic study has been carried out of all the possible radioactive isotopes that may be produced during the experiments planned at FRENA for energy calibration of the machine.
\section{Energy calibration for FRENA accelerator}
The terminal voltage of FRENA can be varied from 200 kV to 3 MV. The charged particles produced in the ion source are accelerated through this terminal voltage.
The energy of any charged particle in a potential difference is given by the relation:
\begin{equation}
E_{particle} = (q+1) \times V.
\end{equation}
where $E_{particle}$ is the charged-particle energy, $q$ is the charge state of the projectile and $V$ is the terminal voltage.
In practice, the accelerated particle passes through several magnetic fields and finally bombards a target material. Since every machine has some intrinsic deviation from the ideal value, the energy needs to be calibrated so that a more accurate relation between terminal voltage and beam energy can be established for this particular machine.
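The relation above can be expressed as a one-line helper (the function name and the proton example are ours); for protons ($q$ = 1) it reproduces the 400 keV to 6 MeV range available at FRENA:

```python
def beam_energy_mev(terminal_voltage_mv, charge_state):
    """Ideal tandem beam energy, E = (q + 1) * V: the singly charged negative
    ion gains V up to the terminal, then q * V after stripping to charge q."""
    return (charge_state + 1) * terminal_voltage_mv

# Protons (q = 1) at the FRENA terminal-voltage limits of 0.2 MV and 3 MV:
print(beam_energy_mev(0.2, 1))   # 0.4 (MeV)
print(beam_energy_mev(3.0, 1))   # 6.0 (MeV)
```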
\\
\section{Results for different calibration reactions}
In FRENA, the available proton energy varies from 400 keV to 6 MeV. For energy calibration purposes, those (p,n) reactions (Table. 1) are chosen whose neutron threshold energies lie within the energy range available at FRENA.
\begin{table}
\begin{center}
\begin{tabular}{|p{4cm}|p{2cm}|p{2cm}|p{2cm}|p{2cm}|}
\hline
& $^{7}Li(p,n)$ &$^{13}C(p,n)$ &$^{19}F(p,n)$& $^{27}Al(p,n)$ \\ \hline
Neutron threshold energy (MeV) & 1.88036 & 3.2355 & 4.23513 & 5.80362 \\ \hline
\end{tabular}
\caption{Neutron threshold energy for different (p,n) reactions.}
\end{center}
\end{table}
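The thresholds in Table. 1 can be cross-checked against the non-relativistic threshold formula $E_{th} \approx -Q\,(m_p + m_T)/m_T$, with the mass ratio approximated by mass numbers. The Q-values below are approximate values assumed for this sketch and should be verified against a mass evaluation:

```python
def pn_threshold_mev(q_value_mev, target_mass_number):
    """Non-relativistic (p,n) threshold energy:
    E_th ~ -Q * (m_p + m_T) / m_T, with masses approximated by mass numbers."""
    a = target_mass_number
    return -q_value_mev * (a + 1) / a

# Approximate Q-values in MeV (assumed here; check against a mass table).
reactions = {"7Li(p,n)7Be":   (-1.6442, 7),
             "13C(p,n)13N":   (-3.0029, 13),
             "19F(p,n)19Ne":  (-4.0206, 19),
             "27Al(p,n)27Si": (-5.5941, 27)}
for name, (q, a) in reactions.items():
    print(name, round(pn_threshold_mev(q, a), 3), "MeV")
# reproduces the Table 1 thresholds to within a few keV
```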
Most of the components of the accelerator are made of 304 and 316 stainless steel alloys along with copper (Cu) and Tantalum (Ta). Apart from these, Zirconium (Zr), Aluminium (Al), Calcium (Ca), Silicon (Si) and Oxygen (O) are also present in the vicinity. The chemical compositions of the 304 and 316 stainless steel alloys, with the maximum \% of each element, are listed in Table. 2 and Table. 3.
\begin{table}
\begin{center}
\begin{tabular}{|p{1cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{1.2cm}|p{0.8cm}|}
\hline
& C & Mn & Si & P & S & Cr & Ni & Fe & N \\ \hline
Max\% & 0.07 & 2.00 & 1.00 & 0.05 & 0.03 & 19.50 & 10.50 & Balance & 0.11\\ \hline
\end{tabular}
\caption{Maximum \% of chemical compositions present in 304 Stainless steel alloy \cite{Ref2}. }
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|p{1cm}|p{0.5cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{1.2cm}|p{0.8cm}|}
\hline
& C & Mn & Si & P & S & Cr & Mo &Ni & Fe & N \\ \hline
Max\% & 0.08 & 2.00 & 0.75 & 0.045 & 0.03 & 18.00 &3.00& 14.00& Balance & 0.10\\ \hline
\end{tabular}
\caption{Maximum \% of chemical compositions present in 316 Stainless steel alloy \cite{Ref3}. }
\end{center}
\end{table}
Reaction cross-sections of the isotopes present in the vicinity with the neutrons produced in the calibration experiments are calculated using the Hauser-Feshbach statistical model code TALYS 1.95 \cite{Ref4}. In this study, channels having cross-sections less than $10^{-7}$ mb are not considered. Gamma decay schemes of every produced isotope are taken from the National Nuclear Data Center (NNDC)\cite{Ref5}. Radioactive isotopes produced by neutrons and having a half-life of around 3 months or more are considered. Gammas from those radioactive isotopes having a relative intensity of more than 1\% are listed in this study.
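The selection criteria just described (half-life of roughly three months or more, gamma intensity of at least 1\%) can be expressed as a simple filter. The entries below are a small illustrative subset of Table. 4, not the full activation inventory:

```python
# Selection criteria from the text: half-life of roughly three months or more,
# and gamma lines with relative intensity of at least 1%.
MIN_HALF_LIFE_D = 85.0     # ~3 months, loose enough to keep 35S (87.37 d)
MIN_INTENSITY_PCT = 1.0

# A small illustrative subset of Table 4 (half-lives in days, intensities in %).
products = [
    {"isotope": "182Ta", "half_life_d": 114.74,
     "gammas": [(67.749, 42.9), (1121.29, 35.24)]},
    {"isotope": "35S",  "half_life_d": 87.37,  "gammas": []},           # pure beta decay
    {"isotope": "45Ca", "half_life_d": 162.61, "gammas": [(12.47, 3.0e-6)]},
]

def significant_gammas(product):
    """Gamma lines that survive both the half-life and intensity cuts."""
    if product["half_life_d"] < MIN_HALF_LIFE_D:
        return []
    return [g for g in product["gammas"] if g[1] >= MIN_INTENSITY_PCT]

for p in products:
    print(p["isotope"], significant_gammas(p))
# only 182Ta contributes long-lived background gammas above the 1% cut
```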
\subsection{$^{7}Li(p,n)$ reaction:}
$^{7}Li(p,n)$ is one of the most common reactions used for energy calibration at accelerator facilities. A proton beam of energy 1.88036 MeV was considered for this study (Table. 1). From kinematics, the neutron energy for that proton beam energy was calculated and found to lie between 20 and 42 keV. In the calculations, only the maximum and minimum values of the neutron energy are considered. For the interaction of the produced neutrons, all the stable isotopes of the elements present in the vicinity are considered. \\
Table. 4 lists the natural isotopic abundances, the reaction products and their formation cross-sections, along with the corresponding gamma energies and relative intensities.
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{fig1.pdf}
\caption{Distribution of neutron energy at threshold energy, $E_p$=1.88036 MeV}
\end{figure}
\subsection{$^{13}C(p,n)$ reaction:}
A proton beam of energy 3.2355 MeV was considered for the study of this reaction (Table. 1). From kinematics, the neutron energy for that proton beam energy was calculated and found to lie between 11 and 24 keV.
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{fig2.pdf}
\caption{Distribution of neutron energy at threshold energy, $E_p$=3.2355 MeV}
\end{figure}
\subsection{$^{19}F(p,n)$ reaction:}
The neutron threshold energy for this reaction is 4.23513 MeV. Protons of that energy are considered for this calculation. Neutrons emitted from this reaction range between 4 and 20 keV.
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{fig3.pdf}
\caption{Distribution of neutron energy at threshold energy, $E_p$=4.23513 MeV}
\end{figure}
\subsection{$^{27}Al(p,n)$ reaction:}
A proton of energy 5.80362 MeV was considered for this study (Table. 1). From kinematics, the available neutron energy for that proton beam energy lies between 0 and 30 keV.
\begin{figure}[h]
\centering
\includegraphics[width=.8\linewidth]{fig4.pdf}
\caption{Distribution of neutron energy at threshold energy, $E_p$=5.80362 MeV}
\end{figure}
Kinematics calculations show that the neutron energies for all the reactions mentioned above lie in the range of a few keV to about 40 keV. Cross-sections of the different stable isotopes present in the vicinity of the accelerator hall with the neutrons produced in the calibration reactions are listed in Table. 4. Half-lives of the reaction products and the daughter nuclei, with the corresponding gamma energies having relative intensities greater than or equal to 1\%, are also listed in the table.
\begin{table}
\tiny
\begin{center}
\begin{tabular}{|p{.5cm}|p{.5cm}|p{.8cm}|p{.8cm}|p{1cm}|p{1cm}|p{.8cm}|p{1cm}|p{1.2cm}|p{1.5cm}|}
\hline
Iso-topes & Abun-dance (\%) & Neutron energy (keV)& Reaction product & Cross section (mb) & half life & Daughter nucleus & Gamma energy (keV) & Gamma intensity (\%) & Remarks\\ \hline \hline
\multirow{2}{*}{$^{13}C$} & \multirow{2}{*}{1.07} & 10 & \multirow{2}{*}{$^{14}C$} &\multirow{1}{*}{1.59$\times 10^{-1}$} & \multirow{2}{*}{5700 y} & \multirow{2}{*}{$^{14}N$} & & & \multirow{2}{*}{100\% $\beta$ -decay} \\ \cline{3-3} \cline{5-5}
&&20 && 8.61$\times 10^{-2}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 5.44$\times 10^{-2}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 4.38$\times 10^{-2}$ &&&&& \\ \cline{3-3} \cline{5-5}
\hline
\hline
\multirow{2}{*}{$^{14}N$} & \multirow{2}{*}{99.6} & 10 & \multirow{2}{*}{$^{14}C$} &\multirow{1}{*}{1.28$\times 10^{3}$} & \multirow{2}{*}{5700 y} & \multirow{2}{*}{$^{14}N$} & & & \multirow{2}{*}{100\% $\beta$ -decay} \\ \cline{3-3} \cline{5-5}
&&20 && 8.68$\times 10^{2}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 5.83$\times 10^{2}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 5.34$\times 10^{2}$ &&&&& \\ \cline{3-3} \cline{5-5}
\hline
\hline
\multirow{2}{*}{$^{17}O$} & \multirow{2}{*}{0.04} & 10 & \multirow{2}{*}{$^{14}C$} &\multirow{1}{*}{2.80$\times 10^{2}$} & \multirow{2}{*}{5700 y} & \multirow{2}{*}{$^{14}N$} & & &\multirow{2}{*}{100\% $\beta$ -decay} \\ \cline{3-3} \cline{5-5}
&&20 && 2.97$\times 10^{2}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 1.30$\times 10^{2}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 1.89$\times 10^{2}$ &&&&& \\ \cline{3-3} \cline{5-5}
\hline
\hline
\multirow{2}{*}{$^{34}S$} & \multirow{2}{*}{4.25} & 10 & \multirow{2}{*}{$^{35}S$} &\multirow{1}{*}{2.00} & \multirow{2}{*}{87.37 d} & \multirow{2}{*}{$^{35}Cl$} & & & \multirow{2}{*}{100\% $\beta$ -decay} \\ \cline{3-3} \cline{5-5}
&&20 && 1.216 &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 9.41$\times 10^{-1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 7.82$\times 10^{-1}$ &&&&& \\ \cline{3-3} \cline{5-5}
\hline
\hline
\multirow{2}{*}{$^{40}Ca$} & \multirow{2}{*}{96.941} & 10 & \multirow{2}{*}{$^{41}Ca$} &\multirow{1}{*}{5.25$\times 10^{1}$} & \multirow{2}{*}{99400 y} & \multirow{2}{*}{$^{41}K$} & & & \multirow{2}{*}{100\% $\beta$ -decay} \\ \cline{3-3} \cline{5-5}
&&20 && 2.46$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 2.69$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 1.68$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
\hline
\hline
\multirow{2}{*}{$^{44}Ca$} & \multirow{2}{*}{2.086} & 10 & \multirow{2}{*}{$^{45}Ca$} &\multirow{1}{*}{6.50$\times 10^{2}$} & \multirow{2}{*}{162.61 d} & \multirow{2}{*}{$^{45}Sc$} & \multirow{2}{*}{12.47}& \multirow{2}{*}{3.0$\times 10^{-6}$}& \\ \cline{3-3} \cline{5-5}
&&20 && 4.15$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 3.25$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 2.74$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
\hline
\hline
\multirow{2}{*}{$^{54}Fe$} & \multirow{2}{*}{5.85} & 10 & \multirow{2}{*}{$^{55}Fe$} &\multirow{1}{*}{9.46$\times 10^{1}$} & \multirow{2}{*}{2.744 y} & \multirow{2}{*}{$^{55}Mn$} & \multirow{2}{*}{126}&\multirow{2}{*}{1.28 $\times 10^{-7}$} & \\ \cline{3-3} \cline{5-5}
&&20 && 6.35$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 5.17$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 4.56$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
\hline
\hline
\multirow{2}{*}{$^{58}Ni$} & \multirow{2}{*}{68.07} & 10 & \multirow{2}{*}{$^{59}Ni$} &\multirow{1}{*}{2.16$\times 10^{2}$} & \multirow{2}{*}{7.6$\times 10^{4}$ y} & \multirow{2}{*}{$^{59}Co$} & \multirow{2}{*}{511}&\multirow{2}{*}{7.4 $\times 10^{-5}$} & Annihilation gammas \\ \cline{3-3} \cline{5-5}
&&20 && 1.47$\times 10^{2}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 1.20$\times 10^{2}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 1.06$\times 10^{2}$ &&&&& \\ \cline{3-3} \cline{5-5}
\hline
\hline
\multirow{2}{*}{$^{62}Ni$} & \multirow{2}{*}{3.635} & 10 & \multirow{2}{*}{$^{63}Ni$} &\multirow{1}{*}{5.45$\times 10^{1}$} & \multirow{2}{*}{101.2 y} & \multirow{2}{*}{$^{63}Cu$} & & & \multirow{2}{*}{100\% $\beta$ -decay} \\ \cline{3-3} \cline{5-5}
&&20 && 3.78$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 3.18$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 2.89$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
\hline
\hline
\multirow{2}{*}{$^{92}Mo$} & \multirow{2}{*}{14.65} & 10 & \multirow{2}{*}{$^{93}Mo$} &\multirow{1}{*}{1.17$\times 10^{2}$} & \multirow{2}{*}{4.0$\times 10^{3}$y} & \multirow{2}{*}{$^{93}Nb$} & \multirow{2}{*}{30.77}&\multirow{2}{*}{5.20 $\times 10^{-4}$} & \\ \cline{3-3} \cline{5-5}
&&20 && 6.80$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 5.50$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 4.05$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
\hline
\hline
\multirow{2}{*}{$^{92}Zr$} & \multirow{2}{*}{17.15} & 10 & \multirow{2}{*}{$^{93}Zr$} &\multirow{1}{*}{8.36$\times 10^{1}$} & \multirow{2}{*}{1.61$\times 10^{6}$y} & \multirow{2}{*}{$^{93}Nb$} & \multirow{2}{*}{30.77}&\multirow{2}{*}{4.30 $\times 10^{-4}$} & \\ \cline{3-3} \cline{5-5}
&&20 && 4.77$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 3.49$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 2.83$\times 10^{1}$ &&&&& \\ \cline{3-3} \cline{5-5}
\hline
\hline
\multirow{20}{*}{$^{181}Ta$} & \multirow{20}{*}{99.988} & 10 & \multirow{20}{*}{$^{182}Ta$} &\multirow{1}{*}{2.09$\times 10^{3}$} & \multirow{20}{*}{114.74 d} & \multirow{20}{*}{$^{182}W$} & \multirow{1}{*}{65.722}&\multirow{1}{*}{3.01} & \\ \cline{3-3} \cline{5-5} \cline{8-9}
&&20 && 1.34$\times 10^{3}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&30 && 1.05$\times 10^{3}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&40 && 8.69$\times 10^{2}$ &&&&& \\ \cline{3-3} \cline{5-5}
&&&&&&& 67.749 & 42.9& \\ \cline{8-9}
&&&&&&& 84.680 & 2.654& \\ \cline{8-9}
&&&&&&& 100.106 & 14.2& \\ \cline{8-9}
&&&&&&& 113.672 & 1.871& \\ \cline{8-9}
&&&&&&& 152.429 & 7.02& \\ \cline{8-9}
&&&&&&& 156.386 & 2.671& \\ \cline{8-9}
&&&&&&& 179.393 & 3.119& \\ \cline{8-9}
&&&&&&& 198.352 & 1.465& \\ \cline{8-9}
&&&&&&& 222.109 & 7.57& \\ \cline{8-9}
&&&&&&& 229.321 & 3.644& \\ \cline{8-9}
&&&&&&& 264.074 & 3.612& \\ \cline{8-9}
&&&&&&& 1001.7 & 2.086& \\ \cline{8-9}
&&&&&&& 1121.29 & 35.24& \\ \cline{8-9}
&&&&&&& 1189.04 & 16.49& \\ \cline{8-9}
&&&&&&& 1221.395 & 27.23& \\ \cline{8-9}
&&&&&&& 1231.004 & 11.62& \\ \cline{8-9}
&&&&&&& 1257.407 & 1.51& \\ \cline{8-9}
&&&&&&& 1289.145 & 1.37& \\ \cline{8-9}
\hline
\hline
\end{tabular}
\caption{Cross-sections of different isotopes present in the vicinity of the accelerator with neutrons of different energies, half-lives of the produced isotopes, and gamma energies with relative intensities.}
\end{center}
\end{table}
\clearpage
\section{Discussion and conclusion}
In this work, a systematic and detailed study has been carried out to identify any possible long-lived gamma-emitting isotopes formed during the calibration study. Apart from the isotopes listed above, two more isotopes, $^{100}Mo$ and $^{180m}Ta$ (0.077 MeV, 9-), are produced in the vicinity during the energy calibration. These two isotopes have no decay record in NNDC. Decay of $^{180m}Ta$, with a half-life $>4.5 \times 10^{16}$ y, has never been observed to date \cite{Ref6,Ref7}. $^{100}Mo$, with a half-life of $6.8 \times 10^{18}$ y, undergoes two-neutrino double $\beta$-decay \citep{Ref8}. Reaction products such as $^{55}Fe$, $^{59}Ni$, $^{45}Ca$, $^{93}Mo$ and $^{93}Zr$ have half-lives much greater than 3 months, but the gammas from those nuclei have very low intensities. Isotopes like $^{63}Ni$, $^{14}C$, $^{35}S$ and $^{41}Ca$ have significantly long half-lives; these isotopes undergo
$\beta^-$ decay, with no record of any gamma decay channel in NNDC. This study shows that, of the experiments mentioned above for energy calibration purposes, only $^{182}Ta$ is produced with significant long-lived activity (half-life 114.74 days) and decay gammas over a wide range of energies (60 - 1300 keV) with intensities of more than 1\%. Gammas of energies 1121.29, 1189.04, 1221.395 and 1231.004 keV, having intensities 35.24\%, 16.49\%, 27.23\% and 11.62\% respectively, can have a significant effect on experimental data if an experiment is performed just after the calibration. Nuclear astrophysics experiments detecting gammas after the calibration study need to be spaced sufficiently in time to minimize the effect of such background gammas from long-lived isotopes, especially if the expected gamma energies of the performed experiments overlap with the background energies.
\\
\section*{Acknowledgement}
\section{Introduction}
Recent research efforts on conditional generative modeling, such as Imagen~\citep{saharia2022photorealistic}, DALL$\cdot$E~2~\citep{ramesh2022hierarchical}, and Parti~\citep{yu2022scaling}, have advanced text-to-image generation to an unprecedented level, producing accurate, diverse, and even creative images from text prompts. These models leverage paired image-text data at Web scale (with hundreds of millions of training examples) and powerful backbone generative models, \textit{e.g.}, autoregressive models~\citep{van2017neural,ramesh2021zero,yu2022scaling} and diffusion models~\citep{ho2020denoising,dhariwal2021diffusion}, to generate highly realistic images. Studying these models' generation results, we discovered that their outputs are surprisingly sensitive to the frequency of the entities (or objects) in the text prompts. In particular, when generating from text prompts about frequent entities (see Appendix~\ref{frequent}), these models often generate realistic images, with faithful grounding to the entities' visual appearance. However, when generating from text prompts with less frequent entities, these models either hallucinate non-existent entities or output related frequent entities (see~\autoref{fig:comparison}), failing to establish a connection between the generated image and the visual appearance of the mentioned entity. This key limitation can greatly harm the trustworthiness of text-to-image models in real-world applications and even raise ethical concerns.
\begin{figure}[!thb]
\centering
\includegraphics[width=1.0\linewidth]{comparison.001.jpeg}
\caption{{Comparison of images generated by Imagen and {Re-Imagen}\xspace on less frequent entities}. We observe that Imagen hallucinates the entities while {Re-Imagen}\xspace maintains better faithfulness.}
\label{fig:comparison}
\vspace{-2ex}
\end{figure}
In this paper, we propose a \textbf{Re}trieval-augmented Text-to-\textbf{Ima}ge \textbf{Gen}erator ({Re-Imagen}\xspace), which alleviates such limitations by searching for entity information in a multi-modal knowledge base, rather than attempting to memorize the appearance of rare entities. Specifically, our multi-modal knowledge base encodes the visual appearances and descriptions of entities as a collection of reference \texttt{<}image, text\texttt{>} pairs. To use this resource, {Re-Imagen}\xspace first uses the input text prompt to retrieve the most relevant \texttt{<}image, text\texttt{>} pairs from the external multi-modal knowledge base, then uses the retrieved knowledge as additional model inputs to synthesize the target images. Consequently, the retrieved references provide knowledge about the semantic attributes and the concrete visual appearance of the mentioned entities, guiding {Re-Imagen}\xspace to paint those entities in the target images.
The backbone of {Re-Imagen}\xspace is a cascaded diffusion model \citep{ho2022cascaded}, which contains three independent generation stages (implemented as U-Nets \citep{ronneberger2015u}) to gradually produce high-resolution (\textit{i.e.}, 1024{$\times$}1024) images.
In particular, we train {Re-Imagen}\xspace on a dataset constructed from the image-text dataset used by Imagen~\citep{saharia2022photorealistic}, where each data instance is associated with its top-k nearest neighbors within the dataset, based on a text-only BM25 score. The retrieved top-k \texttt{<}image, text\texttt{>} pairs are used as references for the model to attend to. During inference, we design an interleaved guidance schedule that switches between text guidance and retrieval guidance, which ensures both text alignment and entity alignment. We show some examples generated by {Re-Imagen}\xspace and compare them against Imagen in~\autoref{fig:comparison}. We can qualitatively observe that our images are more faithful to the appearance of the reference entity.
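The text-only BM25 retrieval used to pair each training instance with its neighbors can be sketched with a minimal Okapi BM25 scorer. The corpus and captions below are toy stand-ins for the text side of the \texttt{<}image, text\texttt{>} knowledge base, not the actual training data:

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Minimal Okapi BM25 over whitespace-tokenized captions."""
    docs = [doc.lower().split() for doc in corpus]
    avgdl = sum(len(d) for d in docs) / len(docs)
    n = len(docs)
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for term in query.lower().split():
            df = sum(1 for doc in docs if term in doc)   # document frequency
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1.0)
            f = tf[term]
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Toy <image, text> knowledge base: captions stand in for the text side.
captions = ["a chortai dog running on grass",
            "the oriental pearl tower at night",
            "a bowl of braised abalone"]
scores = bm25_scores("chortai dog in the park", captions)
top1 = max(range(len(captions)), key=scores.__getitem__)
print(captions[top1])   # the dog caption is retrieved
```

In the actual model the top-k scored pairs, not just the best one, are passed to the generator as references.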
To further quantitatively evaluate {Re-Imagen}\xspace, we present \textit{zero-shot text-to-image generation} results on two challenging datasets: COCO~\citep{lin2014microsoft} and WikiImages~\citep{chang2022webqa}\footnote{The original WikiImages database contains (entity image, entity description) pairs. It was crawled from Wikimedia Commons for visual question answering, and we repurpose it here for text-to-image generation.}. {Re-Imagen}\xspace uses an external non-overlapping image-text database as the knowledge base for retrieval and then grounds the generation on the retrievals to synthesize the target image. We show that {Re-Imagen}\xspace achieves state-of-the-art performance for text-to-image generation on COCO and WikiImages, measured by FID score~\citep{heusel2017gans}, among non-fine-tuned models. For the non-entity-centric dataset COCO, the performance gain comes from biasing the model to generate images with styles similar to the retrieved in-domain images. For the entity-centric dataset WikiImages, the performance gain comes from grounding the generation on retrieved images containing similar entities. We further evaluate {Re-Imagen}\xspace on a more challenging benchmark --- EntityDrawBench --- to test the model's ability to generate a variety of infrequent entities (dogs, landmarks, foods) in different scenes. We compare {Re-Imagen}\xspace with Imagen~\citep{saharia2022photorealistic}, DALL-E 2~\citep{ramesh2022hierarchical} and StableDiffusion~\citep{rombach2022high} in terms of faithfulness and photorealism with human raters. We demonstrate that {Re-Imagen}\xspace reaches around 71\% on the faithfulness score, greatly improving on Imagen's score of 27\% and DALL-E 2's score of 48\%. Analysis shows that the improvement mostly comes from low-frequency entities.
To summarize, our key contributions are: {(1)} a novel retrieval-augmented text-to-image model, {Re-Imagen}\xspace, which achieves SoTA FID scores on two datasets; {(2)} interleaved classifier-free guidance during sampling to ensure both text alignment and entity fidelity; and {(3)} EntityDrawBench, a new benchmark on which we show that {Re-Imagen}\xspace significantly improves faithfulness on less-frequent entities.
\section{Related Work}
\noindent \textbf{Text-to-Image Diffusion Models} There has been a wide-spread success~\citep{ashual2022knn,ramesh2022hierarchical,saharia2022photorealistic,nichol2021glide} in modeling text-to-image generation with diffusion models, which has outperformed GANs~\citep{goodfellow2014generative} and auto-regressive Transformers~\citep{ramesh2021zero} in photorealism and diversity (under similar model size), without training instability and mode collapsing issues. Among them, some recent large text-to-image models such as Imagen~\citep{saharia2022photorealistic}, GLIDE~\citep{nichol2021glide}, and DALL-E2~\citep{ramesh2022hierarchical} have demonstrated excellent generation from complex prompt inputs. These models achieve highly fine-grained control over the generated images with text inputs. However, they do not perform explicit grounding over external visual knowledge and are restricted to memorizing the visual appearance of every possible visual entity in their parameters. This makes it difficult for them to generalize to rare or even unseen entities. In contrast, {Re-Imagen}\xspace is designed to free the diffusion model from memorizing, as models are encouraged to retrieve semantic neighbors from the knowledge base and use retrievals as context to paint the image. {Re-Imagen}\xspace improves the grounding of the diffusion models to real-world knowledge and is therefore capable of faithful image synthesis. \vspace{1ex} \\
\noindent \textbf{Concurrent Work} There are several concurrent works~\citep{li2022memory,blattmann2022retrieval,ashual2022knn} that also leverage retrieval to improve diffusion models. RDM~\citep{blattmann2022retrieval} is trained similarly to {Re-Imagen}\xspace, using examples and near neighbors, but the neighbors in RDM are selected using image features, and at inference time retrievals are replaced with user-chosen exemplars. RDM was shown to effectively transfer artistic style from exemplars to generated images. In contrast, our proposed {Re-Imagen}\xspace conditions on both text and multi-modal neighbors to generate the image, includes retrieval at inference time, and is demonstrated to improve performance on rare images (as well as more generally). KNN-Diffusion~\citep{ashual2022knn} is the work most closely related to ours, as it also uses retrieval to improve the quality of generated images. However, KNN-Diffusion uses discrete image representations, while {Re-Imagen}\xspace uses the raw pixels, and {Re-Imagen}\xspace's retrieved neighbors can be \texttt{<}image, text\texttt{>} pairs, while KNN-Diffusion's are only images. Quantitatively, {Re-Imagen}\xspace{} significantly outperforms KNN-Diffusion on the COCO dataset. \vspace{1ex}\\
\noindent \textbf{Others} Due to the space limit, we provide an additional literature review in the Appendix~\ref{extended_review}.
\section{Model}
In this section, we discuss our proposed {Re-Imagen}\xspace in detail. We start with background knowledge, in the form of a brief overview of the cascaded diffusion models used by Imagen~\citep{saharia2022photorealistic}. Next, we describe the concrete technical details of how we incorporate multi-modal retrieval into {Re-Imagen}\xspace. Finally, we discuss training and interleaved guidance sampling for {Re-Imagen}\xspace.
\subsection{Preliminaries}
\noindent \textbf{Diffusion Models}
Diffusion models~\citep{sohl2015deep} are latent variable models, parameterized by $\theta$, in the form of $p_{\theta}(\bm{x}_0) := \int p_{\theta}(\bm{x}_{0:T})d\bm{x}_{1:T}$, where $\bm{x}_1, \cdots, \bm{x}_T$ are ``noised'' latent versions of the input image $\bm{x}_0 \sim q(\bm{x}_0)$. Note that the dimensionality of both latents and the image are the same throughout the entire process, with $\bm{x}_{0:T} \in \mathbb{R}^d$ and $d$ equals the product of \texttt{<}height, width, \# of channels\texttt{>}. The process that computes the posterior distribution $q(\bm{x}_{1:T}|\bm{x}_0)$ is also called the forward (or diffusion) process, and is implemented as a predefined Markov chain that gradually adds Gaussian noise to the data according to a schedule $\beta_t$:
\begin{equation}
q(\bm{x}_{1:T}|\bm{x}_0)=\prod_{t=1}^{T}q(\bm{x}_t | \bm{x}_{t-1}) \quad \quad q(\bm{x}_t | \bm{x}_{t-1}) := \mathcal{N}(\bm{x}_t; \sqrt{1 - \beta_t}\bm{x}_{t-1}, \beta_t \bm{I})
\end{equation}
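To make the forward process concrete, the following toy NumPy sketch (an illustration, not the implementation used in the paper) simulates the noising chain; with the noise term switched off, $t$ steps of the chain scale $\bm{x}_0$ by exactly $\sqrt{\bar{\alpha}_t}$, matching the closed form introduced below:

```python
import numpy as np

def forward_chain(x0, betas, rng):
    """Apply q(x_t | x_{t-1}) sequentially for every t, returning the trajectory."""
    x = x0
    traj = [x0]
    for beta in betas:
        noise = rng.standard_normal(x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
        traj.append(x)
    return traj

# Deterministic check: with the noise dropped, x_t = sqrt(alpha_bar_t) * x0.
betas = np.linspace(1e-4, 0.02, 100)
alpha_bar = np.cumprod(1.0 - betas)

x0 = np.ones(4)
x = x0
for beta in betas:
    x = np.sqrt(1.0 - beta) * x          # noise term omitted on purpose
assert np.allclose(x, np.sqrt(alpha_bar[-1]) * x0)
```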
Diffusion models are trained to learn the image distribution by reversing the diffusion Markov chain. Theoretically, this reduces to learning to denoise $\bm{x}_t \sim q(\bm{x}_t|\bm{x}_0)$ into $\bm{x}_0$, with a time re-weighted square error loss---see~\cite{ho2020denoising} for the complete proof:
\begin{equation}
\label{eq:loss}
\mathbb{E}_{\bm{x}_0, \bm{\epsilon}, t} [w_t \cdot ||\hat{\bm{x}}_{\theta}(\bm{x}_t, \bm{c}) - \bm{x}_0||_2^2]
\end{equation}
Here, the noised image is denoted as $\bm{x}_t := \sqrt{\bar{\alpha}_t} \bm{x}_0 + \sqrt{1-\bar{\alpha}_t} \bm{\epsilon}$, $\bm{x}_0$ is the ground-truth image, $\bm{c}$ is the condition, $\bm{\epsilon} \sim \mathcal{N}(\bf{0}, I)$ is the noise term, $\alpha_t := 1 - \beta_t$ and $\bar{\alpha}_t := \prod_{s=1}^t \alpha_s$. To simplify notation, we allow the condition $\bm{c}$ to include multiple conditioning signals, such as text prompts $\bm{c}_p$, a low-resolution image input $\bm{c}_x$ (used in super-resolution), or retrieved neighboring images $\bm{c}_n$ (used in {Re-Imagen}\xspace). Imagen~\citep{saharia2022photorealistic} uses a U-Net~\citep{ronneberger2015u} to implement $\bm{\epsilon}_{\theta}(\bm{x}_{t}, \bm{c}, t)$. The U-Net represents the reversed noise generator as follows:
\begin{equation}
\label{eq:recover_original}
\hat{\bm{x}}_{\theta}(\bm{x}_t, \bm{c}) := (\bm{x}_t - \sqrt{1 - \bar{\alpha}_t} \bm{\epsilon}_{\theta}(\bm{x}_{t}, \bm{c}, t)) / \sqrt{\bar{\alpha}_t}
\end{equation}
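As a sanity check on this inversion (a toy NumPy sketch with assumed shapes, not the paper's U-Net), if the predicted noise equals the injected noise exactly, the expression above recovers $\bm{x}_0$:

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)
alpha_bar = np.cumprod(1.0 - betas)

x0 = rng.standard_normal(8)          # stand-in for the ground-truth image
eps = rng.standard_normal(8)         # noise actually injected
t = 500
xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# If eps_theta were a perfect predictor (eps_theta == eps), the inversion is exact:
x0_hat = (xt - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
assert np.allclose(x0_hat, x0)
```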
During training, we randomly sample $t \sim \mathcal{U}([0, 1])$ and an image $\bm{x}_0$ from the dataset $\mathcal{D}$, and minimize the difference between $\hat{\bm{x}}_{\theta}(\bm{x}_t, \bm{c})$ and $\bm{x}_0$ according to~\autoref{eq:loss}. At inference time, the diffusion model uses DDPM~\citep{ho2020denoising} to sample recursively as follows:
\begin{equation}
\bm{x}_{t-1} = \frac{\sqrt{\bar{\alpha}_{t-1}} \beta_t}{1 - \bar{\alpha}_t} \hat{\bm{x}}_{\theta}(\bm{x}_t, \bm{c}) + \frac{\sqrt{\alpha_t}(1- \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t}\bm{x}_t + \sqrt{\frac{(1 - \bar{\alpha}_{t-1})\beta_t}{1 - \bar{\alpha}_t}}\bm{\epsilon}
\end{equation}
The model initializes $\bm{x}_T$ as Gaussian noise, with $T$ denoting the total number of diffusion steps, and then samples in reverse, i.e. $\bm{x}_{T} \rightarrow \bm{x}_{T-1} \rightarrow \cdots$, until step $t=0$ to reach the final image $\hat{\bm{x}}_0$.
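The recursion above can be sketched as a simple loop (a toy NumPy sketch with a placeholder noise predictor; the real model is a conditional U-Net):

```python
import numpy as np

def ddpm_sample(eps_model, shape, betas, rng):
    """Ancestral DDPM sampling: start from Gaussian noise and denoise step by step."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    x = rng.standard_normal(shape)                      # x_T ~ N(0, I)
    for t in range(len(betas) - 1, -1, -1):
        eps_hat = eps_model(x, t)
        ab = alpha_bar[t]
        ab_prev = alpha_bar[t - 1] if t > 0 else 1.0
        # Recover x0_hat, then form the posterior mean of x_{t-1}.
        x0_hat = (x - np.sqrt(1.0 - ab) * eps_hat) / np.sqrt(ab)
        mean = (np.sqrt(ab_prev) * betas[t] / (1.0 - ab)) * x0_hat \
             + (np.sqrt(alphas[t]) * (1.0 - ab_prev) / (1.0 - ab)) * x
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt((1.0 - ab_prev) * betas[t] / (1.0 - ab)) * noise
    return x

# A placeholder "model" that predicts zero noise, just to exercise the loop.
sample = ddpm_sample(lambda x, t: np.zeros_like(x), (4, 4),
                     np.linspace(1e-4, 0.02, 50), np.random.default_rng(0))
assert sample.shape == (4, 4)
```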
For better generation efficiency, cascaded diffusion models~\citep{ho2022cascaded,ramesh2022hierarchical,saharia2022photorealistic} use three separate diffusion models to generate high-resolution images gradually, going from low to high resolution: a 64$\times$ base model, a 256$\times$ super-resolution model, and a 1024$\times$ super-resolution model progressively increase the resolution to $1024\times1024$.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\linewidth]{model.001.jpeg}
\caption{
An illustration of the text-to-image generation pipeline in the 64$\times$ diffusion model. Specifically, {Re-Imagen}\xspace learns a UNet to iteratively predict $\bm{\epsilon}(\bm{x_t}, \bm{c}_n, \bm{c}_p, t)$ that denoises the image. ($\bm{c}_n$: a set of retrieved image-text pairs from the database; $\bm{c}_p$: input text prompt; $t$: current time-step)
}
\label{fig:model}
\end{figure}
\noindent \textbf{Classifier-free Guidance}
\label{sec:cfg}
\cite{ho2021classifier} first proposed classifier-free guidance to trade off diversity and sample quality. This sampling strategy has been widely used due to its simplicity. In particular, Imagen~\citep{saharia2022photorealistic} adopts an adjusted $\epsilon$-prediction as follows:
\begin{equation}
\hat{\bm{\epsilon}} = w \cdot \bm{\epsilon}_{\theta}(\bm{x}_t, \bm{c}, t) - (w - 1) \cdot \bm{\epsilon}_{\theta}(\bm{x}_t, t)
\end{equation}
where $w$ is the guidance weight. The unconditional $\epsilon$-prediction $\bm{\epsilon}_{\theta}(\bm{x}_t, t)$ is calculated by dropping the condition, i.e. the text prompt.
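The adjusted prediction is a one-line combination of the two $\epsilon$-predictions; the sketch below (illustrative toy code, not the production implementation) also checks that $w=1$ recovers the purely conditional prediction:

```python
import numpy as np

def cfg_eps(eps_cond, eps_uncond, w):
    """Classifier-free guidance: extrapolate away from the unconditional prediction."""
    return w * eps_cond - (w - 1.0) * eps_uncond

eps_c = np.array([1.0, 2.0])   # conditional prediction (toy values)
eps_u = np.array([0.5, 0.5])   # unconditional prediction (toy values)
assert np.allclose(cfg_eps(eps_c, eps_u, 1.0), eps_c)     # w = 1: no guidance
assert np.allclose(cfg_eps(eps_c, eps_u, 2.0), [1.5, 3.5])
```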
\subsection{Generating Image with Multi-Modal Knowledge}
Similar to Imagen~\citep{saharia2022photorealistic}, {Re-Imagen}\xspace is a cascaded diffusion model, consisting of 64$\times$, 256$\times$, and 1024$\times$ diffusion models. However, {Re-Imagen}\xspace augments the diffusion model with the new capability of leveraging multimodal `knowledge' from the external database, thus freeing the model from memorizing the appearance of rare entities. For brevity (and concreteness) we present below a high-level overview of the 64$\times$ model: the others are similar.
\noindent \textbf{Main Idea} As shown in~\autoref{fig:model}, during the denoising process, {Re-Imagen}\xspace conditions its generation result not only on the text prompt $\bm{c}_p$ (and also on $\bm{c}_x$ for super-resolution), but also on the neighbors $\bm{c}_n$ that were retrieved from the external knowledge base. Here, the text prompt $\bm{c}_p \in \mathbb{R}^{n \times d}$ is represented using a T5 embedding~\citep{raffel2020exploring}, with $n$ being the text length and $d$ being the embedding dimension. Meanwhile, the top-k neighbors $\bm{c}_n:=[\texttt{<}\text{image, text}\texttt{>}_1, \cdots, \texttt{<}\text{image, text}\texttt{>}_k]$ are retrieved from the external knowledge base $\mathcal{B}$, using the input prompt $p$ as the query and a retrieval similarity function $\gamma(p, \mathcal{B})$. We experimented with two different choices for the similarity function: maximum inner product scores for BM25~\citep{robertson2009probabilistic} and CLIP~\citep{radford2021learning}. \vspace{1ex} \\
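As a minimal illustration of the retrieval step (a toy inner-product retriever standing in for the BM25/CLIP similarity $\gamma(p, \mathcal{B})$; the vectors and database here are hypothetical):

```python
import numpy as np

def retrieve_top_k(query_vec, db_vecs, k=2):
    """Return indices of the k database entries with the highest inner-product
    score, a stand-in for the paper's similarity function gamma(p, B)."""
    scores = db_vecs @ query_vec
    return np.argsort(-scores)[:k]

# Toy database of three 2-d "embeddings" and a query aligned with entry 0.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
idx = retrieve_top_k(np.array([1.0, 0.0]), db, k=2)
assert list(idx) == [0, 2]
```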
\noindent \textbf{Model Architecture} We show the architecture of our model in~\autoref{fig:detail}, where we decompose the UNet into the downsampling encoder (DStack) and the upsampling decoder (UStack). Specifically, the DStack takes an image, a text, and a time step as the input, and generates a feature map, which is denoted as $f_{\theta}(\bm{x}_t, \bm{c}_p, t) \in \mathbb{R}^{F \times F \times d}$, with $F$ denoting the feature map width and $d$ denoting the hidden dimension. We share the same DStack encoder when we encode the retrieved \texttt{<}image, text\texttt{>} pairs (with $t$ set to zero) which produces a set of feature maps $f_{\theta}(\bm{c}_n, 0) \in \mathbb{R}^{K \times F \times F \times d}$. We then use a multi-head attention module~\citep{vaswani2017attention} to extract the most relevant information to produce a new feature map $f'_{\theta}(\bm{x}_t, \bm{c}_p, \bm{c}_n, t) = Attn(f_{\theta}(\bm{x}_t, \bm{c}_p, t), f_{\theta}(\bm{c}_n, 0))$.
The upsampling stack decoder then predicts the noise term $\bm{\epsilon}_{\theta}(\bm{x}_{t}, \bm{c}_p, \bm{c}_n, t)$ and uses it to compute $\hat{\bm{x}}_{\theta}$ with~\autoref{eq:recover_original}, which is either used for regression during training or DDPM sampling. \vspace{1ex} \\
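The attention fusion between the denoising feature map and the retrieved-neighbor feature maps can be sketched as follows (a single-head NumPy simplification of the multi-head module, with assumed feature-map shapes):

```python
import numpy as np

def attend(query_map, neighbor_maps):
    """Single-head attention: each position of the query feature map attends
    over all positions of the K retrieved-neighbor feature maps."""
    d = query_map.shape[-1]
    q = query_map.reshape(-1, d)                         # (F*F, d)
    kv = neighbor_maps.reshape(-1, d)                    # (K*F*F, d)
    logits = q @ kv.T / np.sqrt(d)
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over neighbors
    out = weights @ kv
    return out.reshape(query_map.shape)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 8, 16))                   # f_theta(x_t, c_p, t)
neighbors = rng.standard_normal((2, 8, 8, 16))           # f_theta(c_n, 0), K = 2
fused = attend(fmap, neighbors)
assert fused.shape == fmap.shape
```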
\noindent \textbf{Model Training} To train {Re-Imagen}\xspace, we construct a new dataset, KNN-ImageText, based on the 50M ImageText dataset used in Imagen. There are two motivations for selecting this dataset: (1) it contains many similar photos of specific entities, which is extremely helpful for obtaining similar neighbors, and (2) it is highly sanitized, with fewer unethical or harmful images. For each instance in the 50M ImageText dataset, we search over the same dataset with text-to-text BM25 similarity to find the top-2 neighbors as $\bm{c}_n$ (excluding the query instance). We experimented with both CLIP and BM25 similarity scores, and retrieval was implemented with ScaNN~\citep{guo2020accelerating}. We train {Re-Imagen}\xspace on KNN-ImageText by minimizing the loss function of~\autoref{eq:loss}. During training, we also randomly drop the text and neighbor conditions independently with 10\% probability. Such random dropping helps the model learn the marginalized noise terms $\bm{\epsilon}_{\theta}(\bm{x}_{t}, \bm{c}_p, t)$ and $\bm{\epsilon}_{\theta}(\bm{x}_{t}, \bm{c}_n, t)$, which are used for classifier-free guidance. \vspace{1ex} \\
\noindent \textbf{Interleaved Classifier-free Guidance}
\label{sec:interleaved}
Different from existing diffusion models, our model needs to deal with more than one condition, \textit{i.e.}, the text prompt $\bm{c}_p$ and the retrieved neighbors $\bm{c}_n$, which allows new options for incorporating guidance. In particular, {Re-Imagen}\xspace could use classifier-free guidance by subtracting the unconditioned $\epsilon$-prediction, or either of the two partially conditioned $\epsilon$-predictions. Empirically, we observed that subtracting the unconditioned $\epsilon$-prediction (the standard classifier-free guidance of~\autoref{sec:cfg}) often leads to an undesired imbalance, where the outputs are dominated by either the text condition or the neighbor condition. Hence, we designed an interleaved guidance schedule that balances the two conditions. Formally, we define the two adjusted $\epsilon$-predictions as:
\begin{align}
\label{eq:sample}
\begin{split}
\hat{\bm{\epsilon}}_{p} &= w_p \cdot \bm{\epsilon}_{\theta}(\bm{x}_t, \bm{c}_p, \bm{c}_n, t) - (w_p - 1) \cdot \bm{\epsilon}_{\theta}(\bm{x}_t, \bm{c}_n, t) \\
\hat{\bm{\epsilon}}_{n} &= w_n \cdot \bm{\epsilon}_{\theta}(\bm{x}_t, \bm{c}_p, \bm{c}_n, t) - (w_n - 1) \cdot \bm{\epsilon}_{\theta}(\bm{x}_t, \bm{c}_p, t)
\end{split}
\end{align}
where $\hat{\bm{\epsilon}}_{p}$ and $\hat{\bm{\epsilon}}_{n}$ are the text-enhanced and neighbor-enhanced $\epsilon$-predictions, respectively. Here, $w_p$ is the text guidance weight and $w_n$ is the neighbor guidance weight. We then interleave the two guidance predictions by a predefined ratio $\eta$. Specifically, at each guidance step, we sample a [0, 1]-uniform random number $R$; if $R < \eta$, we use $\hat{\bm{\epsilon}}_{p}$, and otherwise $\hat{\bm{\epsilon}}_{n}$. We can adjust $\eta$ to balance faithfulness w.r.t. the text description or the retrieved image-text pairs. In the EntityDrawBench experiments, we found that $\eta=0.55$ leads to better quality.
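A minimal sketch of one interleaved-guidance decision (toy NumPy code; the guidance weights and $\eta$ values here are purely illustrative):

```python
import numpy as np

def interleaved_eps(eps_full, eps_no_text, eps_no_nbr, w_p, w_n, eta, rng):
    """One interleaved-guidance step: with probability eta use the text-enhanced
    prediction eps_p, otherwise the neighbor-enhanced prediction eps_n."""
    eps_p = w_p * eps_full - (w_p - 1.0) * eps_no_text   # drops the text condition
    eps_n = w_n * eps_full - (w_n - 1.0) * eps_no_nbr    # drops the neighbor condition
    return eps_p if rng.uniform() < eta else eps_n

rng = np.random.default_rng(0)
full, no_text, no_nbr = np.array([1.0]), np.array([0.2]), np.array([0.4])
picks = [interleaved_eps(full, no_text, no_nbr, 2.0, 2.0, 0.55, rng)[0]
         for _ in range(10000)]
# eps_p = 2*1.0 - 0.2 = 1.8; it should be chosen a fraction eta of the time.
frac_text = np.mean(np.isclose(picks, 1.8))
assert abs(frac_text - 0.55) < 0.03
```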
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\linewidth]{kimgen.001.jpeg}
\caption{The detailed architecture of our model. The retrieved neighbors are first encoded using the DStack encoder and then used to augment the intermediate representation of the denoising image (via cross-attention). The augmented representation is fed to the UStack to predict the noise.}
\label{fig:detail}
\vspace{-2ex}
\end{figure}
\section{Experiments}
{Re-Imagen}\xspace consists of three submodels: a 2.5B 64$\times$64 text-to-image model, a 750M 256$\times$256 super-resolution model and a 400M 1024$\times$1024 super-resolution model. We fine-tune these models on the constructed KNN-ImageText dataset. We evaluate the model under two settings: (1) automatic evaluation on the COCO and WikiImages datasets, to measure the model's general ability to generate photorealistic images, and (2) human evaluation on the newly introduced EntityDrawBench, to measure the model's capability to generate long-tail entities.
\noindent \textbf{Training and Evaluation details}
The fine-tuning was run for 200K steps on 64 TPU-v4 chips and completed within two days. We use Adafactor for the 64$\times$ model and Adam for the 256$\times$ super-resolution model with a learning rate of 1e-4. We set the number of neighbors $k$=2 and set $\gamma$=BM25 during training. For the image-text database $\mathcal{B}$, we consider three different variants: {(1)} the COCO/WikiImages training set, which contains non-overlapping small-scale in-domain image-text pairs, {(2)} the ImageText dataset containing 50M \texttt{<}image, caption\texttt{>} pairs, and {(3)} the LAION dataset~\citep{schuhmann2021laion} containing 400M \texttt{<}image, text\texttt{>} crawled pairs. Since indexing ImageText and LAION with CLIP encodings is expensive, we only considered the BM25 retriever for these databases. For the COCO/WikiImages training set, we used both BM25 and CLIP.
\subsection{Evaluation on COCO and WikiImages}
In these two experiments, we used the standard non-interleaved classifier-free guidance (\autoref{sec:cfg}) with $T$=1000 steps for both the 64$\times$ diffusion model and 256$\times$ super-resolution model. The guidance weight $w$ for the 64$\times$ model is swept over [1.0, 1.25, 1.5, 1.75, 2.0], while the 256$\times$256 super-resolution models' guidance weight $w$ is swept over [1.0, 5.0, 8.0, 10.0]. We select the guidance $w$ with the best FID score, which is reported in~\autoref{tab:coco}. We also demonstrate examples in~\autoref{fig:dataset}. \vspace{1ex} \\
\noindent \textbf{COCO Results}
COCO is the most widely-used benchmark for text-to-image generation models. Although COCO does not contain many rare entities, it does contain unusual combinations of common entities, so it is plausible that retrieval augmentation could also help for some challenging text prompts. We adopt FID~\citep{heusel2017gans} score to measure image quality. Following the previous literature, we randomly sample 30K prompts from the validation set as input to the model. The generated images are compared with the reference images from the full validation set (42K). We list the results in two columns: FID-30K denotes the model with access to the COCO train set (either to fine-tune or retrieve from), while Zero-shot FID-30K does not have access to any COCO data.
\begin{table}[!t]
\small
\centering
\begin{tabular}{lccc}
\toprule
Model & \# of Params & FID-30K & Zero-shot FID-30K \\
\midrule
GLIDE~\citep{nichol2021glide} & \hphantom{0}\pd5B & - & 12.24 \\
DALL-E 2~\citep{ramesh2022hierarchical} & $\sim$5B & - & 10.39 \\
VQ-Diffusion~\citep{gu2022vector} & 0.4B & - & 19.75 \\
KNN-Diffusion~\citep{ashual2022knn} & 0.8B & - & 16.66 \\
Stable-Diffusion~\citep{rombach2022high} & \hphantom{.}\pz1B & - & 12.63 \\
Imagen~\citep{saharia2022photorealistic} & \hphantom{.}\pz3B & - & \pz7.27 \\
Make-A-Scene~\citep{gafni2022make} & \hphantom{.}\pz4B & 7.55 & 11.84 \\
Parti~\citep{yu2022scaling} & \pd20B & \textbf{3.22} & \pz7.23 \\
\midrule
{Re-Imagen}\xspace ($\gamma$=BM25; $\mathcal{B}$=COCO; $k$=2) & 3.6B & \textbf{5.25}$^\dagger$ & - \\
{Re-Imagen}\xspace ($\gamma$=CLIP; $\mathcal{B}$=COCO; $k$=2) & 3.6B & 5.29$^\dagger$ & - \\
{Re-Imagen}\xspace ($\gamma$=BM25; $\mathcal{B}$=ImageText; $k$=2) & 3.6B & - & \pz7.02 \\
{Re-Imagen}\xspace ($\gamma$=BM25; $\mathcal{B}$=LAION; $k$=2) & 3.6B & - & \hphantom{0} 6.88 \\
\bottomrule
\end{tabular}
\caption{MS-COCO results for zero-shot text-to-image generation. We use a guidance weight of 1.25 for the 64$\times$ diffusion model and 5 for our 256$\times$ super-resolution model. ($\dagger$: {Re-Imagen}\xspace \textit{is not fine-tuned} on the COCO data---it only uses it as the knowledge base for retrieval.) }
\label{tab:coco}
\vspace{-2ex}
\end{table}
{Re-Imagen}\xspace (with the COCO database) achieves a significant gain on FID-30K without fine-tuning: roughly a 2.0 absolute FID improvement over Imagen. The performance is even better than the fine-tuned Make-A-Scene~\citep{gafni2022make}, but slightly worse than the fine-tuned 20B Parti. In contrast, {Re-Imagen}\xspace retrieving from an out-of-domain database (LAION) achieves a smaller gain, but still obtains a 0.4 FID improvement over Imagen. {Re-Imagen}\xspace outperforms KNN-Diffusion, another retrieval-augmented diffusion model, by a large margin.
Since COCO does not contain many infrequent entities, `entity knowledge' is not important; instead, retrieving from the training set provides useful `style knowledge' for the model to ground on. Because {Re-Imagen}\xspace adapts the generated images to the style of the COCO distribution, it achieves a much better FID score. As can be seen in the upper part of~\autoref{fig:dataset}, {Re-Imagen}\xspace with retrieval generates images in the same style as COCO, while without retrieval, the output is still high quality, but the style is less similar to COCO.
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\linewidth]{dataset.001.jpeg}
\caption{{The retrieved top-2 neighbors of COCO and WikiImages and model generation.}}
\label{fig:dataset}
\vspace{-2ex}
\end{figure}
\noindent\textbf{WikiImages Results}
WikiImages is constructed from the multimodal corpus provided in Web\-QA~\citep{chang2022webqa}, which consists of \texttt{<}image, text\texttt{>} pairs crawled from Wikimedia Commons\footnote{\url{https://commons.wikimedia.org/wiki/Main_Page}}. We filtered the original corpus to remove noisy data (see Appendix~\ref{wikiimages}), which leads to a total of 320K examples. We randomly sample 22K examples as our validation set to perform zero-shot evaluation, and further sample 20K prompts from the dataset as input. As in the previous experiment, we adopt the same guidance weight schedule and evaluate 256$\times$256 images. We report our experimental results in~\autoref{tab:wiki} and mainly compare with Imagen and Stable-Diffusion.
\begin{table}[!t]
\small
\centering
\begin{tabular}{lccc}
\toprule
Model & \# of Params & FID-30K & Zero-shot FID-20K \\
\midrule
Stable-Diffusion~\citep{rombach2022high} & \hphantom{.}\pz1B & - & 7.50 \\
Imagen~\citep{saharia2022photorealistic} & \hphantom{.}\pz3B & - & 6.44 \\
\midrule
{Re-Imagen}\xspace ($\gamma$=BM25; $\mathcal{B}$=WikiImages; $k$=2) & 3.6B & 5.88 & - \\
{Re-Imagen}\xspace ($\gamma$=CLIP; $\mathcal{B}$=WikiImages; $k$=2) & 3.6B & 5.85 & - \\
{Re-Imagen}\xspace ($\gamma$=BM25; $\mathcal{B}$=ImageText; $k$=2) & 3.6B & - & 6.04 \\
{Re-Imagen}\xspace ($\gamma$=BM25; $\mathcal{B}$=LAION; $k$=1) & 3.6B & - & 5.94 \\
{Re-Imagen}\xspace ($\gamma$=BM25; $\mathcal{B}$=LAION; $k$=2) & 3.6B & - & 5.82 \\
{Re-Imagen}\xspace ($\gamma$=BM25; $\mathcal{B}$=LAION; $k$=3) & 3.6B & - & \textbf{5.80} \\
\bottomrule
\end{tabular}
\caption{WikiImages results for zero-shot text-to-image generation. We use a guidance weight of 1.5 for the 64$\times$ diffusion model and 5 for our 256$\times$ super-resolution model.}
\label{tab:wiki}
\vspace{-2ex}
\end{table}
From~\autoref{tab:wiki}, we find that using the out-of-domain LAION-400M as the database actually achieves better performance than using the in-domain WikiImages database. Unlike COCO, Wiki\-Images contains mostly entity-focused images, so finding relevant entities in the database matters more than distilling styles from the training set. Since LAION-400M is roughly 100$\times$ larger than WikiImages-300K, the chance of retrieving related entities is much higher, which leads to better performance. One example is depicted in the lower part of~\autoref{fig:dataset}, where the LAION retrieval finds the `Island of San Giorgio Maggiore', which helps the model generate the classical Renaissance-style church. When generating without retrieval, the model is not able to generate this specific church. This indicates the importance of having relevant entities in the retrievals for the WikiImages dataset and also explains why the LAION database achieves the best results. We present more examples from WikiImages in Appendix~\ref{wikiimages}.
\subsection{Entity Focused Evaluation on EntityDrawBench}
\noindent \textbf{Dataset Construction}
We introduce EntityDrawBench to evaluate the model's capability to generate diverse sets of entities in different visual scenes. Specifically, we pick three types of visual entities (dog breeds, landmarks, and foods) from Wikipedia Commons and Google Landmarks to construct our prompts. In total, we collect 150 entity-centric prompts for evaluation. These prompts are mostly unique and we cannot find corresponding images with Google Image Search. More construction details are in Appendix~\ref{entitydrawbench}.
We use the prompt as the input and its corresponding image-text pairs as the `retrieval' for {Re-Imagen}\xspace to generate four 1024$\times$1024 images. For the other models, we feed the prompts directly, also generating four images. We pick the best image of the four samples and rate its photorealism and faithfulness. For photorealism, we assign 1 if the image is moderately realistic without noticeable artifacts; otherwise, we assign 0. For faithfulness, we assign 1 if the image is faithful to both the entity source and the text description; otherwise, we assign 0.
\noindent \textbf{Experimental Results}
We use the proposed interleaved classifier-free guidance (\autoref{sec:interleaved}) for the 64$\times$ diffusion model, which runs for 256 diffusion steps under a strong guidance weight of $w$=30 for both text and neighbor conditions. For the 256$\times$ and 1024$\times$ resolution models, we use a constant guidance weight of 5.0 and 3.0, respectively, with 128 and 32 diffusion steps. The inference speed is 30-40 secs for 4 images on 4 TPU-v4 chips. We demonstrate our human evaluation results for faithfulness and photorealism in~\autoref{tab:faithfulness}.
\begin{table}[!t]
\small
\centering
\begin{tabular}{l|cccc|c}
\toprule
\multirow{2}{*}{Model} & \multicolumn{4}{c|}{\textbf{Faithfulness}} & \textbf{Photorealism} \\
& Dogs & Foods & Landmarks & All & All \\
\midrule \addlinespace
Imagen & 0.28 $\pm$ 0.02 & 0.26 $\pm$ 0.02 & 0.27 $\pm$ 0.02 & 0.27 & \textbf{0.98} \\
DALL-E 2 & 0.60 $\pm$ 0.02 & 0.47 $\pm$ 0.02 & 0.36 $\pm$ 0.04 & 0.48 & \textbf{0.98} \\
Stable-Diffusion & 0.16 $\pm$ 0.02 & 0.24 $\pm$ 0.04 & 0.12 $\pm$ 0.06 & 0.17 & 0.92 \\
\midrule \addlinespace
{Re-Imagen}\xspace & \textbf{0.68} $\pm$ 0.04 & \textbf{0.70} $\pm$ 0.02 & \textbf{0.74} $\pm$ 0.04 & \textbf{0.71} & 0.97 \\
\bottomrule
\end{tabular}
\caption{Human evaluation results for different models on different types of entities. }
\label{tab:faithfulness}
\vspace{-2ex}
\end{table}
We can observe that {Re-Imagen}\xspace can in general achieve much higher faithfulness than the existing models while maintaining similar photorealism scores. When comparing with our backbone Imagen, we see the faithfulness score jumps from 27\% to 71\%, which indicates that our model is paying attention to the retrieved knowledge and assimilating it into the generation process.
\begin{figure}[!h]
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[height=3cm]{figures/frequent_entity_v3} & \includegraphics[height=3cm]{figures/infrequent_entity_v3.png} \\
\end{tabular}
\vspace{-2ex}
\caption{The human evaluation scores for both frequent and infrequent entities. }
\label{fig:human_evaluation_frequency}
\end{figure}
We further partition the entities into `frequent' and `infrequent' categories based on their frequency in Imagen's training corpus (top 50\% as `frequent'). We plot the faithfulness scores for `frequent' and `infrequent' entities separately in~\autoref{fig:human_evaluation_frequency}. We can see that our model is less sensitive to the frequency of the input entities than the other models, with only a 10-20\% drop on infrequent entities. In contrast, both Imagen and DALL-E 2 drop by 40\%-50\% on infrequent entities. This study reflects the effectiveness of retrieval augmentation for text-to-image generation on long-tail entities.
\noindent \textbf{Comparison to Other Models}
We show examples from different models in~\autoref{fig:example}. As can be seen, the images generated by {Re-Imagen}\xspace strike a good balance between text alignment and entity fidelity. Unlike image editing, which performs in-place modification, {Re-Imagen}\xspace can transform the neighbor entities both geometrically and semantically according to the text guidance. As a concrete example, {Re-Imagen}\xspace generates the \textit{Braque Saint-Germain} (2nd row in \autoref{fig:example}) on the grass, in a different viewpoint from the reference image.
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\linewidth]{example.001.jpeg}
\caption{Non-cherry-picked examples from EntityDrawBench for different models. }
\vspace{-1ex}
\label{fig:example}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\linewidth]{more_examples_for_arxiv.001.jpeg}
\includegraphics[width=0.95\linewidth]{more_examples_for_arxiv.002.jpeg}
\includegraphics[width=0.95\linewidth]{more_examples_for_arxiv.003.jpeg}
\caption{Additional non-cherry-picked examples from EntityDrawBench for different models. }
\vspace{-1ex}
\label{fig:more_example_for_arxiv}
\end{figure}
\noindent \textbf{Text and Entity Faithfulness Trade-offs}
In our experiments, we found a trade-off between faithfulness to the text prompt and faithfulness to the retrieved entity images. Based on~\autoref{eq:sample}, by adjusting $\eta$, \textit{i.e.} the proportion of $\hat{\epsilon}_p$ and $\hat{\epsilon}_n$ in the sampling schedule, we can control {Re-Imagen}\xspace to generate images that explore this trade-off: decreasing $\eta$ increases the entity-image faithfulness but decreases the text alignment, while increasing $\eta$ increases the text alignment and decreases the similarity to the retrieved image. We demonstrate this in~\autoref{fig:transition}. With small $\eta$, the model ignores the text description and simply copies the retrieved image, while with large $\eta$, the model reverts to standard Imagen, without using the input image much. We found that $\eta$ around 0.5 is usually a `sweet spot' that balances both conditions.
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\linewidth]{transition.001.jpeg}
\caption{Ablation study of interleaved guidance ratio $\eta$ to show the trade-off. }
\vspace{-2ex}
\label{fig:transition}
\end{figure}
\section{Conclusions and Limitations}
We present {Re-Imagen}\xspace, a retrieval-augmented diffusion model, and demonstrate its effectiveness in generating realistic and faithful images. We exhibit such advantages not only through automatic FID measures on standard benchmarks (\textit{i.e.}, COCO and WikiImage) but also through human evaluation on the newly introduced EntityDrawBench. We further demonstrate that our model is particularly effective in generating an image from text that mentions rare entities.
{Re-Imagen}\xspace still suffers from well-known issues in text-to-image generation, which we review in \autoref{broader_impact}. In addition, {Re-Imagen}\xspace has some unique limitations due to the retrieval-augmented modeling. First, because {Re-Imagen}\xspace is sensitive to the retrieved image-text pairs it is conditioned on, a low-quality retrieved image negatively influences the generated image. Second, {Re-Imagen}\xspace sometimes still fails to ground on the retrieved entities when the entity's visual appearance is outside the generation space. Third, we noticed that the super-resolution model is less effective, frequently missing low-level texture details of the visual entities. In future work, we plan to further investigate and address these limitations.
\section*{Ethics Statement}
\label{broader_impact}
Strong text-to-image generation models, \textit{i.e.}, Imagen~\citep{saharia2022photorealistic} and Parti~\citep{yu2022scaling}, raise ethical challenges along dimensions such as the \textit{social bias}. {Re-Imagen}\xspace is exposed to the same challenges, as we employed Web-scale datasets that are similar to the prior models.
The retrieval-augmented modeling techniques of {Re-Imagen}\xspace substantially improve the controllability and attribution of the generated image. Like many basic research topics, this additional control could be used for beneficial or harmful purposes. One obvious danger is that {Re-Imagen}\xspace (or similar models) could be used for malicious purposes such as spreading misinformation, \textit{e.g.,} by producing realistic images of specific people in misleading visual contexts. On the other hand, the additional control has many potential benefits. One general benefit is that {Re-Imagen}\xspace can reduce hallucination and increase the faithfulness of the generated image to the user's intent. Another benefit is that the ability to handle tail entities makes the model more useful for minorities and other users in smaller communities: for example, {Re-Imagen}\xspace is more effective at generating images of landmarks famous in smaller communities or cultures, and at generating images of indigenous foods and cultural artifacts. We argue that this model can help decrease the frequency-induced bias in current neural-network-based AI systems.
Considering such potential threats to the public, we have currently decided not to release the code or a public demo. In future work, we will explore a framework for responsible use that balances the value of external auditing of research against the risks of unrestricted open access, allowing this work to be used in a safe and beneficial way.
\section*{Acknowledgements}
We thank Jason Baldridge, Boqing Gong, Keran Rong and Slav Petrov for their valuable comments on an early version of the manuscript, which has helped improve this work. We also thank William Chan and Mohammad Norouzi for providing us with their support and the pre-trained models of Imagen, and Michiel de Jong for suggesting the model name.
\section{\label{sec:level1}Introduction}
\begin{figure}
\vspace{-5mm}
\centering
\includegraphics[width=3in]{HOM_cancelling_4.pdf}
\caption{Hong-Ou-Mandel effect. Two identical photons are sent into different beam splitter input ports. There are four possible outcomes, two photons leaving one port, two photons leaving the other port, each photon reflecting to give single photons at each exit, and both transmitting to give single photons at each port. The coincidence terms cancel out since they are identical but enter with opposite sign. The final state is a superposition of two outcomes, each with both photons clustered together at the same exit port.}
\label{fig:HOM_effect}
\end{figure}
\vspace{-10px}
The Hong-Ou-Mandel (HOM) effect is one of the most recognized quantum two-photon interference effects \cite{hong1987measurement}.
When two indistinguishable photons arrive simultaneously at different inputs of a 50:50 beam splitter (BS), single-photon amplitudes at each output cancel, resulting in quantum superposition of two-photon states appearing at each output port, as in Fig. \ref{fig:HOM_effect}.
This traditional HOM method, observed on a BS having two input and two output ports, always has the two-photon state simultaneously occupying both output spatial modes, leaving no room to engineer control of propagation direction.
Various types of studies on quantum state transformations in multiport devices have been performed such as two photon propagation in a multimode system \cite{weihs1996two,zukowski1997realizable},
quantum interference effects using a few photons \cite{meany2012non,de2014coincidence,tichy2011four,campos2000three}, and propagation of multi-photons \cite{lim2005generalized,tillmann2015generalized,menssen2017distinguishability}. Internal degrees of freedom are also incorporated to enhance communication capacity \cite{walborn2003multimode,poem2012two,zhang2016engineering}.
Systems and procedures using multi-photon states, such as boson sampling, have been analyzed using multiport beam splitters both theoretically and experimentally \cite{aaronson2011computational,tillmann2013experimental,spring2013boson,bentivegna2015experimental,he2017time,wang2019boson}.
The HOM effect plays an important role in the field of quantum metrology when two-photon $|2002\rangle$-type states are extended to $N$-photon $N00N$ state \cite{dowling2008quantum,motes2015linear}.
Additionally, coherent transport of quantum states has been attracting attention, where single- and two-photon discrete-time quantum walk schemes are employed to transfer and process quantum states \cite{bose2003quantum,perez2013coherent,lovett2010universal,chapman2016experimental,nitsche2016quantum}.
A quantum routing approach has been proposed to transfer unknown states in 1D and 2D structures to assist quantum communication protocols \cite{zhan2014perfect,vstefavnak2016perfect,bartkiewicz2018implementation}.
Photon propagation control is especially crucial in a large optical network to distribute quantum states between two parties.
The network can be formed by combining multiple copies of four-port devices.
The state manipulation schemes we present can be integrated in quantum communication protocols since state retrieval timing can be chosen at will.
In this manuscript, we propose two-photon quantum state engineering and transportation methods with a linear-optical system which allows manipulation of photon amplitudes by using linear-optical devices such as optical multiports, beam splitters, and phase shifters.
Previously, such multiports have been introduced to demonstrate a two-photon clustering effect in quantum walks when multiple multiport devices are connected to form a chain \cite{simon2020quantum}. Clustering of two photons means that after encountering a multiport, the input two-photon amplitude separates into a superposition of a right-moving and a left-moving two-photon amplitude, with no amplitude for the photons to move in opposite directions.
By utilizing this separation, a higher-dimensional unitary transformation enables flexible quantum engineering designs of possible travel path combinations by switching relative phases within right moving and left moving amplitudes independently.
When two or more multiports are combined, this control of quantum amplitudes in a two-photon state allows demonstration of a ``delayed'' HOM effect engaging time-bin modes in addition to spatial modes.
To perform this delayed effect, two or more multiports are required, and relative phase shifts between two rails can reflect the incoming amplitudes.
This controllable reflection without mirrors can also be seen as an additional state manipulation feature.
We introduce two distinct systems. The first system utilizes direct transformation of two photons by the four-port device using circulators.
The second case does not have circulators in the system.
The photons are sent from the left side of the beam splitters, then the amplitudes encounter the multiport device.
This second system has not been analyzed in the past. Specific input states for both distinguishable and indistinguishable photons are redistributed between two parties coherently. Therefore, this system is particularly useful in quantum routing type applications.
This article is organized as follows. In Sec. II, we introduce the main optical components used in this manuscript to perform quantum state transformations. These basic linear-optical devices are used to demonstrate the HOM effect in spatial and time-bin modes, which is addressed in Sec. III. In Sec. IV, we show redistribution of two-photon states using the devices introduced in Sec. II. A summary of the results is given in Sec. V.
\begin{figure}[htp]
\subfloat[]{%
\includegraphics[clip,width=0.8\columnwidth]{multiport_systematic.pdf}%
}
\vspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=0.8\columnwidth]{multiport_systematic_photons.pdf}%
}
\vspace{-10px}
\caption{(a) A possible experimental realization of a directionally-unbiased linear-optical four-port consists of four beam splitters, four mirrors, and four phase shifters. A photon can enter any of the four ports, and exit at any of the four ports (labeled as $a,b,c$, and $d$). With a specific choice of phase settings, a Grover matrix can be realized by coherently summing all possible paths to each output \cite{osawa2019directionally}. A schematic symbol for this device is shown on the right. (b) Single multiport transformation of a two-photon input state. The input state of two correlated photons entering from the left is depicted as $ab$ $(\ket{1,1})$. After scattering by a Grover multiport, the state transforms into $-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2$, which has clear separation of right- and left-moving two-photon amplitudes. No cross-terms with photons moving in opposite directions occur.}
\label{fig:combined_four_port}
\vspace{-10pt}
\end{figure}
\section{Photonic state transformations via linear optical devices}
\vspace{-10px}
In this section, we consider photonic state transformations in higher-dimensional spatial modes using a unitary four-dimensional Grover matrix \cite{grover1996fast} in place of the beam splitter.
We first introduce the main systems used for linear state transformations, followed by the basic photonic devices that implement them.
Beam splitters and the four-dimensional Grover matrix are the central system components.
We mainly use the photon number representation to describe states throughout the manuscript.
The general beam splitter transformation matrix is
\begin{eqnarray}
\label{eqn:BS}
&\begin{pmatrix}
\hat{c}
\\
\hat{d}
\end{pmatrix}=
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1\\
-1 & 1
\end{pmatrix}
\begin{pmatrix}
\hat{a}\\
\hat{b}
\end{pmatrix}
\end{eqnarray}
where $\hat{a}$, $\hat{b}$, $\hat{c}$, and $\hat{d}$ describe the input photon state transformation.
The labels here are generic; the specific location-dependent beam splitter transformations are redefined in later sections.
We use photon number states to describe the system unless otherwise specified. The input state is denoted as $\hat{a}\hat{b}$, where $\hat{a}$ and $\hat{b}$ are creation operators for the spatial modes $a$ and $b$, respectively.
The hat notation is dropped henceforth.
A horizontally polarized photon in spatial mode $a$ is denoted $a_{H}$, and a horizontally polarized photon in mode $b$ is denoted $b_{H}$.
We omit the polarization degree of freedom when identical photons are used throughout the system.
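As a numerical sanity check (our own sketch, not part of the proposal), the beam splitter matrix of Eq.~\eqref{eqn:BS} can be verified to be unitary and to produce HOM bunching: treating the creation operators as commuting formal variables, the two-photon input $ab$ maps onto a superposition of $c^2$ and $d^2$ with no coincidence term $cd$.

```python
# Sketch: verify unitarity of the 50:50 BS matrix of Eq. (1) and the HOM
# bunching identity a*b -> (1/2)(c^2 - d^2), checked at random points.
import numpy as np

U = (1 / np.sqrt(2)) * np.array([[1.0, 1.0],
                                 [-1.0, 1.0]])

# Unitarity: U^dagger U = I.
assert np.allclose(U.conj().T @ U, np.eye(2))

rng = np.random.default_rng(0)
for _ in range(100):
    c, d = rng.normal(size=2)
    # Invert (c, d) = U (a, b) to express the input modes via the outputs.
    a, b = np.linalg.solve(U, np.array([c, d]))
    # Polynomial identity: no c*d cross term survives.
    assert np.isclose(a * b, 0.5 * (c**2 - d**2))
print("50:50 BS: a*b -> (c^2 - d^2)/2, no coincidence term")
```

The random-point evaluation is a lightweight way to confirm a polynomial identity in commuting variables without a symbolic algebra package.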
Photonic implementations of the Grover matrix can be readily realized \cite{carolan2015universal,crespi2013anderson,spagnolo2013three,fan1998channel,nikolopoulos2008directional}.
To be concrete, we use directionally-unbiased linear-optical four-ports (Fig. \ref{fig:combined_four_port} (a)) as an example.
Consider sending two indistinguishable photons into a four-dimensional multiport device realization of a Grover matrix.
This Grover operator, implemented by the multiport and described by the unitary matrix
\begin{equation}
\label{eqn:Grover}
Grover =
\frac{1}{2}
\begin{pmatrix}
-1&1&1&1\\
1&-1&1&1\\
1&1&-1&1\\
1&1&1&-1
\end{pmatrix},
\end{equation}
has equal splitting ratios between all input-output combinations and generalizes the BS transformation matrix given above in Eq.\eqref{eqn:BS}.
In general, photons in modes $a$ and $b$ are transformed in the following manner,
\begin{eqnarray}
&a \xrightarrow{M} \frac{1}{2}(-a+b+c+d)\; \mbox{ and }\;\\ \nonumber
&b \xrightarrow{M} \frac{1}{2}(a-b+c+d).
\end{eqnarray}
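As a sketch (assuming only the matrix of Eq.~\eqref{eqn:Grover}), one can check numerically that the Grover matrix is unitary and involutory (applying it twice returns the identity), consistent with a reversible, directionally-unbiased device, and that it acts on modes $a$ and $b$ exactly as above:

```python
# Sketch: basic properties of the 4x4 Grover matrix of Eq. (2).
import numpy as np

G = 0.5 * np.array([[-1, 1, 1, 1],
                    [1, -1, 1, 1],
                    [1, 1, -1, 1],
                    [1, 1, 1, -1]], dtype=float)

assert np.allclose(G @ G.T, np.eye(4))  # unitary (real orthogonal)
assert np.allclose(G @ G, np.eye(4))    # involutory: G applied twice = identity

# Action on the basis modes (a, b, c, d), matching Eq. (3):
a_out = G @ np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(a_out, [-0.5, 0.5, 0.5, 0.5])  # a -> (-a + b + c + d)/2
b_out = G @ np.array([0.0, 1.0, 0.0, 0.0])
assert np.allclose(b_out, [0.5, -0.5, 0.5, 0.5])  # b -> (a - b + c + d)/2
```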
The reversible Grover matrix has been analyzed theoretically in terms of linear-optical directionally-unbiased four-ports \cite{simon2016group,simon2018joint,osawa2019directionally}, which consist of four beam splitters, four phase shifters, and four mirrors, as indicated in Fig. \ref{fig:combined_four_port}(a).
They are represented schematically by the symbol in Fig. \ref{fig:combined_four_port}(b).
The three-port version of this device has been experimentally demonstrated using bulk optical devices \cite{osawa2018experimental}.
For better and more precise control of phases, miniaturization of the device is highly preferred when realizing the four-port, especially when several multiport devices are required to carry out an experiment.
In general, directional unitary devices such as those of the Reck and some other unitary matrix decomposition models \cite{reck1994experimental,su2019hybrid,clements2016optimal,de2018simple,motes2014scalable} can also realize a Grover matrix.
However, directionally-unbiased devices are advantageous when designing the delayed HOM effect, as well as requiring fewer optical resources.
Identical photons are sent into two of the four input-output ports from the left side (indicated in Fig. \ref{fig:combined_four_port}(b)).
We use multiport devices and beam splitters to form two systems for state propagation.
The photons are sent in from the left side of the system throughout the manuscript.
The first BS-multiport composite system is denoted by subscript 0 and the other half by subscript 1.
The result differs depending on the input location of the photons.
Consider a system consisting of two multiports and two beam splitters.
There are several ways to insert photons into the system; we choose two specific ones in this manuscript.
To send a photon into the middle of the system, the setup must be supplied with circulators, as shown in Fig. \ref{fig:multiport_circ}.
The other setup requires no circulators to propagate input photons; instead, the photons experience an extra beam splitter transformation upon entrance.
This system is shown in Fig. \ref{fig:multiport_no_circ}.
It should be noted that the number of multiports in the system does not change the final outcome.
We use two multiports as an example; however, the result is the same for a single multiport or for more than two multiports, as long as the devices are lossless during propagation.
Brief comments on the mathematical structure of the transformations carried out by the configurations given in Figs. \ref{fig:multiport_circ} and \ref{fig:multiport_no_circ} are given in the Appendix.
\vspace{-15px}
\subsection{Photon propagation using circulators}
\vspace{-10px}
\begin{figure}
\vspace{-5mm}
\centering
\includegraphics[width=3.5in]{multiport_two_circulator.pdf}
\caption{A system setup with input photons supplied by circulators. The system consists of two beam splitters, two multiport devices, and two circulators. These circulators allow us to send photons from the left side of the multiport device without experiencing a beam splitter transformation before entering the multiport device. The input state split into right moving and left moving amplitudes (shown as dotted arrows) upon multiport transformation.}
\label{fig:multiport_circ}
\end{figure}
This method is used to distribute a HOM pair between the right and left sides of the system.
The original input state $a_0b_0$ transforms to:
\begin{eqnarray} \label{eq:trans}
a_0b_0 &\xrightarrow{M} \frac{1}{2}(-a_0+b_0+c_0+d_0)\frac{1}{2}(a_0-b_0+c_0+d_0) \nonumber \\
&=-\frac{1}{4}(a_0^2+b_0^2)+\frac{1}{2}a_0b_0+\frac{1}{4}(c_0^2+d_0^2)+\frac{1}{2}c_0d_0\nonumber \\
&= -\frac{1}{4}(a_0-b_0)^2+\frac{1}{4}(c_0+d_0)^2,
\end{eqnarray}
where we have used the commutation relation $ab = ba$ since the photons are identical and in different spatial locations.
Eq.\eqref{eq:trans} shows that correlated photons are split into right moving $\frac{1}{4}(c_0+d_0)^2$ and left moving $-\frac{1}{4}(a_0-b_0)^2$ amplitudes, with no cross terms.
This absolute separation of propagation direction without mixing of right moving and left moving amplitudes is important because the photon pairs remain distinctly localized and clustered at each step \cite{simon2020quantum}.
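Since creation operators for distinct modes commute, Eq.~\eqref{eq:trans} can be checked as an ordinary polynomial identity. A minimal numerical sketch (our own, treating the modes as commuting variables evaluated at random points):

```python
# Sketch: verify the clustering identity of Eq. (4). Under the Grover
# transformation, a0*b0 -> -(1/4)(a0 - b0)^2 + (1/4)(c0 + d0)^2, so no
# term mixes a left-moving mode (a, b) with a right-moving mode (c, d).
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    a, b, c, d = rng.normal(size=4)
    # Image of a0*b0 under the mode maps of Eq. (3):
    lhs = 0.5 * (-a + b + c + d) * 0.5 * (a - b + c + d)
    rhs = -0.25 * (a - b) ** 2 + 0.25 * (c + d) ** 2
    assert np.isclose(lhs, rhs)
print("a0*b0 separates into left-moving and right-moving amplitudes only")
```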
The right-moving amplitude is translated to $\frac{1}{4}(a_1+b_1)^2$ and propagates without changing its form:
$\frac{1}{4}(a_1+b_1)^2 \xrightarrow{M} \frac{1}{4}(c_1+d_1)^2$.
The left-moving amplitude $-\frac{1}{4}(a_0-b_0)^2$ stays the same until the BS transformation.
The controlled HOM effect can be observed in higher-dimensional multiports assisted by extra beam splitters.
Imagine beam splitters inserted in the system as in Fig. \ref{fig:multiport_circ}.
Input state $ab$ is now transformed into $-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2$ as indicated above, then further transformed by beam splitters to obtain HOM pairs between the right side and left side of the system.
The right and left sides of the system each have two output ports, and the exit port of the photon pair can be controlled by varying phase shift settings before the beam splitters.
A phase shift on the left side of the system does not affect the result of the right side amplitude, and vice versa.
This system, having circulators at the beginning of the system, is denoted as transformation pattern I, and the detailed discussions of its transformation are in Sec. \ref{sec:pattern_I}.
\vspace{-15px}
\subsection{Photon propagation without circulators}
\vspace{-10px}
\begin{figure}
\vspace{-5mm}
\centering
\includegraphics[width=3.5in]{multiport_two.pdf}
\caption{System setup without circulators. The input photons are subjected to a beam splitter before they enter the multiport. The input state is transformed and propagated in one direction (shown as dotted arrows). The BS transformed input state is transformed again by the first multiport devices.}
\label{fig:multiport_no_circ}
\end{figure}
This method redistributes input states between the right and left sides of the system without changing amplitudes.
Consider sending two photons from the left side of the beam splitter, as indicated in Fig. \ref{fig:multiport_no_circ}, and then transforming the output state by the multiport device.
We only consider the first multiport transformation here;
the rest of the transformation is given in Sec. \ref{sec:pattern_II}.
\begin{equation}
e_0f_0\xrightarrow{BS}-\frac{1}{2}(a_0^2-b_0^2)
\xrightarrow{M}-\frac{1}{2}(a_0-b_0)(c_0+d_0)
\end{equation}
The final state has cross terms, and it differs from the case with circulators in the sense that the output state is \textit{coupled}:
there is no clear separation between right-moving and left-moving amplitudes.
Nevertheless, we still refer to the amplitudes as right- and left-moving unless special attention is required.
This system without circulators is denoted as transformation pattern II, and the detailed discussion of its transformation is in Sec. \ref{sec:pattern_II}.
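The coupled structure can be confirmed with the same random-point technique as before. Note that the overall sign in this sketch follows our own BS convention and may differ from the equation above by a global phase; the factorized, cross-term structure is the point being checked.

```python
# Sketch: the BS output -(1/2)(a^2 - b^2), pushed through the Grover
# multiport of Eq. (3), factorizes into a product of a left-moving and a
# right-moving superposition, i.e. a *coupled* state with cross terms
# such as a*c (contrast with the fully separated form of Eq. (4)).
import numpy as np

rng = np.random.default_rng(2)
for _ in range(100):
    a, b, c, d = rng.normal(size=4)
    Ma = 0.5 * (-a + b + c + d)   # image of a under the Grover matrix
    Mb = 0.5 * (a - b + c + d)    # image of b under the Grover matrix
    lhs = -0.5 * (Ma**2 - Mb**2)  # image of -(1/2)(a^2 - b^2)
    rhs = 0.5 * (a - b) * (c + d) # coupled cross-term form (up to sign)
    assert np.isclose(lhs, rhs)
```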
\vspace{-10px}
\section{Transformation pattern I: directionally-controllable HOM effect in higher-dimensional spatial and temporal modes} \label{sec:pattern_I}
\vspace{-10px}
In this section we discuss transformation pattern I.
The higher-dimensional HOM effect is generated by the multiport-based linear-optical system with circulators at the inputs.
Propagation-direction control and delays between amplitudes are discussed in the following subsections.
We use a single multiport device to show the control effect, and we introduce two multiport devices into the system for the delayed effect.
\begin{figure}[htp]
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator1.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator2.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator3.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator4.pdf}%
}
\caption{Higher dimensional HOM effect with directional control.
Correlated photons, $a_0b_0$, are sent in from the circulators into the first multiport.
After the first multiport interaction, the incoming photon pair splits into right-moving and left-moving two-photon amplitudes.
The separately-moving amplitudes are bunched at the beam splitters on right and left sides.
We can controllably switch between four different output sites, and where the clustered output photons appear depends on the location of the phase shifter $P$.
In (a), no phase plates are introduced, and the output biphoton amplitudes leave $f_0$ and $e_1$. The final state is $\frac{1}{\sqrt{2}}(\ket{0,2}_0+\ket{2,0}_1)$, meaning superposition of two photons in mode $f_0$ and two in mode $e_1$. In case (b), the phase shifter $P=\pi$ is to the left, changing the relative phase between upper and lower arms.
Similarly in (c) and (d), other locations for the phase shifters cause biphotons to leave in other spatial modes.} \label{fig:HOM_control_fig}
\end{figure}
\vspace{-15px}
\subsection{Control of propagation direction}
\vspace{-10px}
Given that one two-photon amplitude must exit left and one right, there are four possible combinations of outgoing HOM pairs as indicated in Fig. \ref{fig:HOM_control_fig}.
The combinations are, (a): $(f_0^2,e_1^2)$, (b): $(e_0^2,e_1^2)$, (c): $(e_0^2,f_1^2)$, and (d): $(f_0^2,f_1^2)$.
This means, in the case of (a) for example, the left-moving two-photon amplitude leaves in mode $f_0$, and the right-moving amplitude leaves in mode $e_1$.
Directional control of the four cases is readily demonstrated, as follows. In case (a) there is only a beam splitter transformation after the multiport, giving
\begin{eqnarray}
&-\frac{1}{4}(a_0-b_0)^2\xrightarrow{BS} -\frac{1}{4\sqrt{2}}(e_0-f_0-e_0-f_0)^2 = \frac{1}{2}f_0^2, \nonumber \\
&\frac{1}{4}(c_1+d_1)^2 \xrightarrow{BS} -\frac{1}{4\sqrt{2}}(e_1-f_1+e_1+f_1)^2 = \frac{1}{2}e_1^2.
\end{eqnarray}
The final output state is,
\begin{eqnarray}
\frac{1}{2}(f_0^2+e_1^2)=\frac{1}{\sqrt{2}}(\ket{0,2}_0+\ket{2,0}_1).
\end{eqnarray}
In case (b), a phase plate is inserted in the lower arm of the left side to switch the exit port from $d$ to $c$. All the phase shifters P are set to $\pi$, therefore transforming $b \rightarrow -b$.
\begin{eqnarray}
&-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2 \xrightarrow{P} -\frac{1}{4}(a+b)^2+\frac{1}{4}(c+d)^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(-e_0^2+e_1^2) = \frac{1}{\sqrt{2}}(-\ket{2,0}_0+\ket{2,0}_1).
\end{eqnarray}
Compared to case (a), the exit port is switched from $f$ to $e$. In (c), phase plates are inserted in the lower arms of both the right and left sides.
Photons in modes $b$ and $d$ are transformed to $-b$ and $-d$, respectively.
\begin{eqnarray}
&-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2 \xrightarrow{P} -\frac{1}{4}(a+b)^2+\frac{1}{4}(c-d)^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(-e_0^2-f_1^2) = -\frac{1}{\sqrt{2}}(\ket{2,0}_0+\ket{0,2}_1).
\end{eqnarray}
In (d), a phase plate is inserted in the lower arm of the right side. A photon in mode $d$ is transformed to $-d$.
\begin{eqnarray}
&-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2 \xrightarrow{P} -\frac{1}{4}(a-b)^2+\frac{1}{4}(c-d)^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(f_0^2-f_1^2) = \frac{1}{\sqrt{2}}(\ket{0,2}_0-\ket{0,2}_1).
\end{eqnarray}
This demonstrates complete directional control of biphoton propagation direction using only linear optical devices.
Directional control does not require changing splitting ratios at each linear optical device (BS and multiport), and occurs in a lossless manner since no post-selection is required.
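The port-switching mechanism can be seen algebraically. Under one common BS convention, $a = (e-f)/\sqrt{2}$ and $b = (e+f)/\sqrt{2}$ (an assumption; the paper's conventions may differ by overall phases), the biphoton term $(a-b)^2$ exits entirely in mode $f$, while the phase-shifted term $(a+b)^2$ exits entirely in mode $e$, so the $\pi$ phase shifter $b \rightarrow -b$ toggles the exit port:

```python
# Sketch: directional control at the exit beam splitter. With the assumed
# convention a = (e - f)/sqrt(2), b = (e + f)/sqrt(2), the term (a - b)^2
# is pure f^2 and (a + b)^2 is pure e^2 (up to overall phase), so a pi
# shift on mode b flips which port the photon pair leaves from.
import numpy as np

rng = np.random.default_rng(3)
for _ in range(100):
    e, f = rng.normal(size=2)
    a = (e - f) / np.sqrt(2)
    b = (e + f) / np.sqrt(2)
    assert np.isclose((a - b) ** 2, 2 * f ** 2)  # no phase plate: port f
    assert np.isclose((a + b) ** 2, 2 * e ** 2)  # after b -> -b: port e
```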
\begin{figure*}[htp]
\centering
\subfloat[][Delayed HOM effect without reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_circulator_arrays.pdf}\label{}}
\subfloat[][Delayed HOM effect with reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_circulator_arrays_phase_plate.pdf}\label{}}
\caption{Delayed HOM effect. The two-photon amplitude transformation progresses in time from top to bottom. The distance traveled in a single time step is indicated by vertical dashed lines. The original photons as well as photons in the target state are indicated using red circles. The green striped circles indicate intermediate transformed states. The total number of photons is always two throughout the transformations. (a) Two multiports and beam splitters {\it without} phase shifters between the multiports. At the first step, the behavior is the same as for a single multiport with beam splitters. The right-moving amplitude propagates through the second multiport, and the left-moving amplitude propagates through the beam splitter. The right-moving amplitude is delayed by one additional multiport transformation before a two-photon observation probability becomes available in spatial modes on the right. (b) Two multiports and beam splitters {\it with} a phase shifter P set at $\pi$ between the multiports. When P is present, the right-moving amplitude gains a relative phase between modes $a_1$ and $b_1$. Reflection occurs at the multiport when the relative phase between the two is $\pi$. Therefore, the transformed amplitude reflects upon a second multiport encounter, going back to the original state with opposite propagation direction. Reflection does not occur for the transformed left-moving amplitude, so it continues to propagate leftward. The original left-moving amplitude becomes available for detection earlier than the transformed left-moving amplitude.}
\label{fig:delayed_HOM}
\end{figure*}
\subsection{Delayed HOM effect}
\vspace{-10px}
\subsubsection{Delayed HOM effect without reflection}
\vspace{-10px}
We introduce a phase shifter between two multiports as in Fig. \ref{fig:delayed_HOM} (b). Without the phase plate between two multiport devices, the photons behave exactly the same as in the previous subsection. However, the phase shifter can change propagation direction of right moving amplitude to the left. This reflection results in detecting HOM pairs only on the left side, but with some delay between the two exiting amplitudes. We start with the case without the phase shifter. The photon insertion is the same as the previous case, coming from the left side of the first multiport.
\vspace{-10px}
\begin{eqnarray}
&{a_{0}b_{0}}_R\xrightarrow{M}-\frac{1}{4}(a_{0}-b_{0})_L^2+ \frac{1}{4}(c_{0}+d_{0})_R^2 \nonumber \\
&\xrightarrow{T+BS,\mbox{ }T} -\frac{1}{2}f_{0L}^2 + \frac{1}{4}(a_{1}+b_{1})_R^2 \nonumber \\
&\xrightarrow{M} -\frac{1}{2}f_{0L}^2 + \frac{1}{4}(c_{1}+d_{1})_R^2 \xrightarrow{BS} -\frac{1}{2}f_{0L}^2 + \frac{1}{2}e_{1R}^2,
\end{eqnarray}
where M, T, and BS represent the multiport, translation, and beam splitter transformations, respectively. We use subscripts $R$ and $L$ to indicate amplitudes propagating to the right or left. $T$ translates a photon amplitude by a single time step (for example, $\frac{1}{4}(c_{0}+d_{0})^2 \rightarrow \frac{1}{4}(a_{1}+b_{1})^2$). The second transformation $T+BS$, $T$ is read as applying $T+BS$ to the first term and $T$ to the second term.
The final state is,
\begin{eqnarray}
-\frac{1}{2}f_{0L}^2 + \frac{1}{2}e_{1R}^2=-\frac{1}{\sqrt{2}}(\ket{0,2}_{0{T_0}L}-\ket{2,0}_{1{T_1}R}),
\end{eqnarray}
where $T_0$ is the time when the first biphoton amplitude leaves the system and $T_1$ is the exit time of the second.
The right moving amplitude stays in the system longer than the left moving amplitude because of the extra multiport device in the system, leading to time delay $\Delta T = T_1-T_0$.
\vspace{-10px}
\subsubsection{Delayed HOM effect with reflection}
\vspace{-10px}
When a $\pi$-phase shifter is inserted on one path between the multiports, the right-moving amplitude gets reflected upon the second multiport encounter.
Instead of having two-photon amplitudes on the right and left sides of the system, both photon amplitudes end up leaving from the left.
The HOM effect still occurs but now with some delay between the two amplitudes at the end of the BS.
This is indicated in Fig. \ref{fig:delayed_HOM} (b).
\vspace{-10px}
\begin{eqnarray}
&{a_{0}b_{0}}_R\xrightarrow{M}-\frac{1}{4}(a_{0}-b_{0})_L^2+ \frac{1}{4}(c_{0}+d_{0})_R^2 \nonumber \\
&\xrightarrow{T+BS,\mbox{ }T+P} -\frac{1}{2}f_{0L}^2 + \frac{1}{4}(a_{1}-b_{1})_R^2.
\end{eqnarray}
The second transformation $T+BS$, $T+P$ is read as applying $T+BS$ to the first term and $T+P$ to the second term.
Left-moving photons leave before right-moving photons.
\begin{eqnarray}
\xrightarrow{M}&\frac{1}{4}(a_{1}-b_{1})_L^2 \xrightarrow{P+T} \frac{1}{4}(c_{0}+d_{0})_L^2 \nonumber \\
&\xrightarrow{M} \frac{1}{4}(a_{0}+b_{0})_L^2 \xrightarrow{BS} \frac{1}{2}e_{0L}^2.
\end{eqnarray}
The final state,
\begin{eqnarray}
-\frac{1}{2}f_{0L}^2+\frac{1}{2}e_{0L}^2=-\frac{1}{\sqrt{2}}(\ket{0,2}_{0T_0L}-\ket{2,0}_{0T_2L}),
\end{eqnarray}
is now two HOM pair amplitudes, both on the left side of the system, at output ports $e_0$ and $f_0$, with time delay $\Delta T = T_2 - T_0$ between them. The first amplitude leaves port $f_0$ at $T_0$; the second then leaves $e_0$ at the time labeled $T_2$.
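The mirrorless reflection can also be seen at the level of single-mode superpositions (our own sketch): the symmetric combination $a+b$ is transmitted by the Grover multiport into $c+d$, while the antisymmetric combination $a-b$ is an eigenvector with eigenvalue $-1$, so a relative $\pi$ phase between the two rails sends the amplitude straight back the way it came.

```python
# Sketch: reflection/transmission eigenstructure of the Grover multiport.
# (a + b) is routed forward to (c + d); (a - b) is reflected with a sign
# flip, which is exactly the condition exploited by the pi phase shifter.
import numpy as np

G = 0.5 * np.array([[-1, 1, 1, 1],
                    [1, -1, 1, 1],
                    [1, 1, -1, 1],
                    [1, 1, 1, -1]], dtype=float)

sym = np.array([1.0, 1.0, 0.0, 0.0])    # a + b
anti = np.array([1.0, -1.0, 0.0, 0.0])  # a - b

assert np.allclose(G @ sym, [0, 0, 1, 1])  # transmitted: c + d
assert np.allclose(G @ anti, -anti)        # reflected with eigenvalue -1
```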
\vspace{-10px}
\section{Transformation pattern II: state redistribution in higher-dimensional spatial and temporal modes} \label{sec:pattern_II}
\vspace{-10px}
\subsection{State transformation and propagation}
\vspace{-10px}
In the previous section, we considered the case where the input photon state is transformed by the multiport device right after photon insertion.
Instead of using circulators, we can transform the input state by the BS first and then transform it by the multiport device.
Even though the Grover matrix spreads the input state equally in four directions, the end result preserves the original form of the input state.
We demonstrate a state redistribution property using distinguishable and indistinguishable photons, meaning the input state is redistributed between the right and left sides without changing amplitudes.
The propagation result is different from the previous case.
Consider sending two indistinguishable photons into the system.
The two input photons are given the same polarization to make them indistinguishable.
The input photons are inserted from the left side of the beam splitter.
The beam splitter transforms the input state, which propagates from the left side to the right side of the device without any reflections.
The amplitudes are then transformed by the multiport device.
This transformation splits the input photons into coupled right-moving and left-moving amplitudes.
The coupled left-moving amplitude reflected from the first multiport counter-propagates and is transformed by the first beam splitter from right to left.
The right-moving amplitude is transmitted without changes in amplitude and is finally transmitted by the right-side beam splitter.
\vspace{-15px}
\subsubsection{Indistinguishable photons}
\vspace{-10px}
We first examine the mathematical details for indistinguishable photons in the system without circulators.
We consider three cases, sending photons into spatial modes $e$ and $f$.
First, we consider a pair of indistinguishable single photons in spatial modes $e$ and $f$.
\vspace{-5px}
\begin{eqnarray}
&e_{H0}f_{H0}\xrightarrow{BS}-\frac{1}{2}(a_{H0}^2-b_{H0}^2)\nonumber \\
&\xrightarrow{M} -\frac{1}{2}(a_{H0}-b_{H0})(c_{H0}+d_{H0}) \nonumber \\
&\xrightarrow{BS} -e_{H0}e_{H1}.
\end{eqnarray}
Next, we consider the HOM state with relative phase $+1$ between the two amplitudes.
\begin{eqnarray}
&\frac{1}{2}(e_{H0}^2+f_{H0}^2)\xrightarrow{BS}\frac{1}{2}(a_{H0}^2+b_{H0}^2) \nonumber \\
&\xrightarrow{M} \frac{1}{4}(a_{H0}-b_{H0})^2+\frac{1}{4}(c_{H0}+d_{H0})^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(e_{H0}^2+f_{H1}^2).
\end{eqnarray}
The input state is redistributed in a sense that one amplitude is on the right side of the system and the other amplitude is on the left side while maintaining the original structure of the state.
We now consider the HOM state with relative phase $-1$ between the two amplitudes.
\vspace{-5px}
\begin{eqnarray}
&\frac{1}{2}(e_{H0}^2-f_{H0}^2) \xrightarrow{BS} -a_{H0}b_{H0} \nonumber \\
&\xrightarrow{M} \frac{1}{4}(a_{H0}-b_{H0})^2-\frac{1}{4}(c_{H0}+d_{H0})^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(e_{H0}^2-e_{H1}^2).
\end{eqnarray}
In both cases, the output state is identical to the input state except for the spatial modes.
\vspace{-10px}
\subsubsection{Distinguishable photons}
\vspace{-10px}
Now we examine the case of distinguishable two-photon input.
The procedure is identical to the previous case.
We begin with two distinguishable photons, one in each mode, without superposition.
\begin{eqnarray}
&e_{H0}f_{V0} \xrightarrow{BS} \frac{1}{2}(a_{H0}-b_{H0})(a_{V0}+b_{V0})\nonumber \\
&\xrightarrow{M} -\frac{1}{2}(a_{H0}-b_{H0})(c_{V0}+d_{V0})\nonumber\\
&\xrightarrow{BS} - e_{H0} e_{V1}.
\end{eqnarray}
We examine the case of HOM states.
\begin{eqnarray}
&\frac{1}{2}(e_{H0}^2 \pm f_{V0}^2) \xrightarrow{BS}\frac{1}{4}(a_{H0}-b_{H0})^2\pm\frac{1}{4}(a_{V0}+b_{V0})^2\nonumber\\
&\xrightarrow{M}\frac{1}{4}(a_{H0}-b_{H0})^2\pm\frac{1}{4}(c_{V0}+d_{V0})^2\nonumber\\
&\xrightarrow{BS} \frac{1}{2}(e_{H0}^2 \pm e_{V1}^2).
\end{eqnarray}
The exit location can be controlled in this scheme as well by introducing phase shifters into the system, as indicated in Fig. \ref{fig:control_without_circ}.
This procedure does not destroy the redistribution property.
There are four potential spatial modes, and switching the phase shift before the beam splitters switches the direction of propagation.
The combinations are, (a): $(e_0,f_0)\rightarrow(e_0,e_1)$, (b): $(e_0,f_0)\rightarrow(e_0,f_1)$, (c): $(e_0,f_0)\rightarrow(f_0,e_1)$, and (d): $(e_0,f_0)\rightarrow(f_0,f_1)$.
The results for the system with circulators are summarized in Table \ref{tab:table_1}, and those for the system without them in Table \ref{tab:table_2}.
In the case of indistinguishable photons, the results are cyclic in the sense that all three states can be produced by using the other system.
However, there is a significant difference when distinguishable photons are considered.
\begin{figure}[htp]
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator2.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator3.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator4.pdf}%
}
\caption{Quantum state redistribution with control of propagation direction. We performed the same analysis as the higher dimensional HOM effect with direction control. By introducing phase shifters in the system before beam splitters, we can change the exit direction of the amplitudes. The starting state is $e_0f_0$. The first beam splitter transforms the input state, then they enter the multiport device. The multiport transformed state goes through beam splitters on the right and left side. The final outcome has the same form as the input state.}
\label{fig:control_without_circ}
\end{figure}
\begin{table}
\begin{tabular}{P{0.4\linewidth}P{0.6\linewidth}}
\hline \hline
\noalign{\vskip 1ex}
\multicolumn{2}{c}{State transformation with circulators}\\
\noalign{\vskip 1ex}
\hline \hline
\noalign{\vskip 1ex}
Indistinguishable photons & $a_{H0}b_{H0} \rightarrow -\frac{1}{2}(e_{H0}^2 - e_{H1}^2)$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with +1 relative phase & $\frac{1}{2}(a_{H0}^2 + b_{H0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 + e_{H1}^2)$\\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with $-1$ relative phase & $\frac{1}{2}(a_{H0}^2 - b_{H0}^2) \rightarrow -e_{H0}e_{H1}$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable photons & $a_{H0}b_{V0} \rightarrow -\frac{1}{2}(e_{H0}-e_{H1})(e_{V0}+e_{V1})$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable HOM pair & $\frac{1}{2}(a_{H0}^2 \pm b_{V0}^2) \rightarrow \frac{1}{4}\{(e_{H0}-e_{H1})^2\pm(e_{V0}-e_{V1})^2\}$ \\
\noalign{\vskip 1ex}
\hline \hline
\end{tabular}
\caption{State transformations in a system with circulators.
The first three rows deal with indistinguishable photons by giving them the same polarization.
A state consisting of two single photons becomes an HOM state.
When HOM states are used as the initial state, they become either an HOM state or a two-single-photon state.
Distinguishable photons are also analyzed by introducing orthogonal polarizations.
Their output states become coupled states, meaning the original states are not preserved.
} \label{tab:table_1}
\end{table}
\begin{table}
\begin{tabular}{P{0.4\linewidth}P{0.6\linewidth}}
\hline \hline
\noalign{\vskip 1ex}
\multicolumn{2}{c}{State transformation without circulators} \\
\noalign{\vskip 1ex}
\hline \hline
\noalign{\vskip 1ex}
Indistinguishable photons & $e_{H0}f_{H0} \rightarrow -e_{H0}e_{H1}$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with +1 relative phase & $\frac{1}{2}(e_{H0}^2 + f_{H0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 + e_{H1}^2)$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with $-1$ relative phase & $\frac{1}{2}(e_{H0}^2 - f_{H0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 - f_{H1}^2)$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable photons & $e_{H0}f_{V0} \rightarrow -e_{H0}e_{V1}$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable HOM pair & $\frac{1}{2}(e_{H0}^2 \pm f_{V0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 \pm f_{V1}^2)$ \\
\noalign{\vskip 1ex}
\hline \hline
\end{tabular}
\caption{State transformations in a system without circulators.
The structure of the table is the same as that of Table \ref{tab:table_1}.
The first three states deal with indistinguishable photons by giving them the same polarization.
The last two states handle distinguishable photons.
The output states preserve the same form as the input state.
We start the transformation from location 0 of the system; the transformed states are then redistributed between location 0 and location 1.
The result shows coherent transportation of the input states.} \label{tab:table_2}
\end{table}
\vspace{-10px}
\subsection{Delayed state redistribution}
\vspace{-10px}
\begin{figure*}[htp]
\centering
\subfloat[][State redistribution without reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_array.pdf}\label{}}
\subfloat[][State redistribution with reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_array_phase_plate.pdf}\label{}}
\caption{Delayed state redistribution.
The two-photon amplitude transformation progresses in time from top to bottom.
The distance traveled in a single time step is indicated by vertical dashed lines. The total photon number in the system remains two throughout the propagation. At the first step for both cases, the input two-photon state is transformed by the BS.
The transformed state becomes the HOM state, and it is indicated as red transparent overlapped circles occupying both modes.
The initial and the final transformed state are indicated using solid red circles, and intermediate states are indicated in striped yellow circles.
(a) Two multiports and beam splitters {\it without} phase shifters between the multiports.
The HOM state enters the multiport and is transformed, taking the form $-\frac{1}{2}(a_{0}-b_{0})(c_{0}+d_{0})$.
The amplitudes are coupled; however, they propagate without changing their form. After several steps, the amplitudes occupying the two rails converge to a single-mode state after transformation by the beam splitters.
The final state has the same form as the input state.
(b) Two multiports and beam splitters {\it with} a phase shifter P set at $\pi$ between multiports.
When P is present, the right-moving coupled amplitude gains a relative phase between modes $a_1$ and $b_1$.
Reflection occurs at the multiport when the relative phase between the two is $\pi$.
Therefore, the transformed amplitude reflects upon a second multiport encounter, going back to the original state with opposite propagation direction.
Reflection does not occur on this transformed coupled left-moving amplitude, therefore it continues to propagate leftward.
The original left-moving amplitude becomes available for detection earlier than the transformed left-moving amplitude.}
\label{fig:delayed_state_dist}
\end{figure*}
As in the higher-dimensional HOM case, we introduce a temporal delay effect by placing a phase shifter between the two multiports.
\vspace{-10px}
\subsubsection{Without reflection}
\vspace{-10px}
When there is no phase shifter between the two multiports, the result is identical to the system with a single multiport from the previous section.
The state transformation and propagation is provided schematically in Fig. \ref{fig:delayed_state_dist} (a).
The photons are initially sent from the left side of the BS.
The correlated photons are transformed into an HOM state by the BS.
\vspace{-10px}
\begin{eqnarray}
&{e_{0}f_{0}}_R \xrightarrow{BS} \frac{1}{2}(a_{0}^2-b_{0}^2)_R \nonumber\\
&\xrightarrow{M}-\frac{1}{2}(a_{0}-b_{0})_L(c_{0}+d_{0})_R.
\end{eqnarray}
The HOM state is transformed by the multiport device.
The result is a coupled state because the right-moving and left-moving amplitudes are not separated.
We propagate this state through the BS on the left and translate the amplitudes moving to the right.
\begin{eqnarray}
&\xrightarrow{BS,T}-\frac{1}{\sqrt{2}}f_{0L}(a_{1}+b_{1})_R \xrightarrow{M} -\frac{1}{\sqrt{2}}f_{0L}(c_{1}+d_{1})_R\nonumber\\
&\xrightarrow{BS} -f_{0T_0L}e_{1T_1R}.
\end{eqnarray}
The left-moving amplitude is transformed by the left BS while the right-moving amplitude propagates to the second multiport device.
This introduces a temporal difference between the right-moving and the left-moving photons.
\vspace{-15px}
\subsubsection{With reflection}
\vspace{-10px}
Reflection of amplitudes is introduced when there is a phase shifter between the two multiport devices, as indicated in Fig. \ref{fig:delayed_state_dist} (b).
\begin{eqnarray}
&{e_{0}f_{0}}_R \xrightarrow{BS} \frac{1}{2}(a_{0}^2-b_{0}^2)_R
\xrightarrow{M} -\frac{1}{2}(a_{0}-b_{0})_L(c_{0}+d_{0})_R \nonumber \\
&\xrightarrow{BS,T+P}-\frac{1}{\sqrt{2}}f_{0L}(a_{1}-b_{1})_R
\end{eqnarray}
The right-moving amplitude gains a relative phase between the upper and lower rails, and this relative phase causes the amplitude to be reflected upon the next multiport encounter.
\begin{eqnarray}
&\xrightarrow{M} -\frac{1}{\sqrt{2}}f_{0L}(a_{1}-b_{1})_L \xrightarrow{T+P} -\frac{1}{\sqrt{2}}f_{0L}(c_{0}+d_{0})_L\nonumber\\
&\xrightarrow{M} -\frac{1}{\sqrt{2}}f_{0L}(a_{0}+b_{0})_L \xrightarrow{BS} -f_{0T_{0}L} e_{0T_{2}L}
\end{eqnarray}
The input photons have no delay between them at the beginning. The delay $\Delta T = T_2 - T_0$ is introduced by the reflection in the system.
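The reflection condition used here is simply the statement that $(a-b)$ is an eigenvector of the Grover matrix with eigenvalue $-1$, while $(a+b)$ is transmitted to $(c+d)$. A minimal numerical check (NumPy; the basis order $a,b,c,d$ is our bookkeeping choice):

```python
import numpy as np

# Grover multiport, basis order a, b, c, d.
G = 0.5 * (np.ones((4, 4)) - 2.0 * np.eye(4))

# A pi relative phase between the two rails turns (a + b) into (a - b),
# which the multiport reflects rather than transmits:
reflected = G @ np.array([1.0, -1.0, 0.0, 0.0])    # input (a - b)
transmitted = G @ np.array([1.0, 1.0, 0.0, 0.0])   # input (a + b)

assert np.allclose(reflected, [-1.0, 1.0, 0.0, 0.0])   # -(a - b): reflection
assert np.allclose(transmitted, [0.0, 0.0, 1.0, 1.0])  # (c + d): transmission
```

This is why the phase shifter P set at $\pi$ between the multiports sends the right-moving amplitude back, producing the delay $\Delta T$.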
\vspace{-10pt}
\section{Conclusion}
\vspace{-10px}
We demonstrated higher-dimensional quantum state manipulation, such as the HOM effect and state redistribution, by applying linear-optical four-ports realizing a four-dimensional Grover matrix, accompanied by beam splitters and phase shifters.
Identical photons sent into two of the four input-output ports split into right-moving and left-moving amplitudes with no cross terms, allowing observation of the HOM effect.
This absolute separation of propagation direction without mixing of right-moving and left-moving amplitudes ensures the photons remain clustered as they propagate through the system.
Variable phase shifts in the system allow the HOM photon pairs to switch between four spatial output destinations, which can increase information capacity.
Time delays between emerging parts of the clustered two-photon state, illustrating a “delayed” HOM effect, can be engineered using two multiports.
In addition, depending on the phase shifter position, the propagation direction can be reversed so that the right moving amplitude can get reflected at the second multiport, resulting in HOM pairs always leaving only from the left side of the system and with a particular time-bin delay.
The same situations have been investigated in a system without circulators. This system allows the input state to be redistributed between the right and the left side of the system without changing the amplitudes.
The HOM effect and clustered photon pairs are widely used in quantum information science. The approach introduced here adds extra degrees of freedom, and paves the way for new applications that require control over the spatial and temporal modes of the HOM amplitudes as they move through one- and two-dimensional networks.
We have demonstrated two photon amplitude control in both spatial and temporal modes. This two photon system can be extended to multiphoton input states, and manipulation of more complex entangled states would be the next milestones to be achieved.
\vspace{-15pt}
\section*{Appendix}
\section{\label{sec:level1}Introduction}
\begin{figure}
\vspace{-5mm}
\centering
\includegraphics[width=3in]{HOM_cancelling_4.pdf}
\caption{Hong-Ou-Mandel effect. Two identical photons are sent into different beam splitter input ports. There are four possible outcomes, two photons leaving one port, two photons leaving the other port, each photon reflecting to give single photons at each exit, and both transmitting to give single photons at each port. The coincidence terms cancel out since they are identical but enter with opposite sign. The final state is a superposition of two outcomes, each with both photons clustered together at the same exit port.}
\label{fig:HOM_effect}
\end{figure}
\vspace{-10px}
The Hong-Ou-Mandel (HOM) effect is one of the most recognized quantum two-photon interference effects \cite{hong1987measurement}.
When two indistinguishable photons arrive simultaneously at different inputs of a 50:50 beam splitter (BS), single-photon amplitudes at each output cancel, resulting in quantum superposition of two-photon states appearing at each output port, as in Fig. \ref{fig:HOM_effect}.
This traditional HOM method, observed on a BS having two input and two output ports, always has the two-photon state simultaneously occupying both output spatial modes, leaving no room to engineer control of propagation direction.
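As a concrete check of this cancellation, the BS transformation can be applied to the monomial $ab$ symbolically. The following sketch (Python; `apply` is our own helper, and the BS sign convention is one common choice) expands the two-photon polynomial and confirms that the coincidence term vanishes:

```python
import itertools

s = 2 ** -0.5
# 50:50 BS substitution rules:  a -> (c + d)/sqrt(2),  b -> (-c + d)/sqrt(2)
BS = {"a": [("c", s), ("d", s)], "b": [("c", -s), ("d", s)]}

def apply(rules, poly):
    """Expand a two-photon polynomial (dict: sorted mode tuple -> amplitude)
    under linear substitution of each creation operator."""
    out = {}
    for modes, amp in poly.items():
        for combo in itertools.product(*(rules[m] for m in modes)):
            key = tuple(sorted(m for m, _ in combo))
            coeff = amp
            for _, c in combo:
                coeff *= c
            out[key] = out.get(key, 0.0) + coeff
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

out = apply(BS, {("a", "b"): 1.0})   # one photon in each input port

assert ("c", "d") not in out                    # coincidence amplitude cancels
assert abs(abs(out[("c", "c")]) - 0.5) < 1e-12  # bunched c^2 term survives
assert abs(abs(out[("d", "d")]) - 0.5) < 1e-12  # bunched d^2 term survives
```

The surviving $c^2$ and $d^2$ monomials each carry amplitude of magnitude $1/2$, corresponding to probability $1/2$ per bunched outcome once the $\sqrt{2}$ normalization of the two-photon number state is included.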
Various types of studies on quantum state transformations in multiport devices have been performed such as two photon propagation in a multimode system \cite{weihs1996two,zukowski1997realizable},
quantum interference effects using a few photons \cite{meany2012non,de2014coincidence,tichy2011four,campos2000three}, and propagation of multi-photons \cite{lim2005generalized,tillmann2015generalized,menssen2017distinguishability}. Internal degrees of freedom are also incorporated to enhance communication capacity \cite{walborn2003multimode,poem2012two,zhang2016engineering}.
Systems and procedures using multi-photon states, such as boson sampling, have been analyzed using multiport beam splitters both theoretically and experimentally \cite{aaronson2011computational,tillmann2013experimental,spring2013boson,bentivegna2015experimental,he2017time,wang2019boson}.
The HOM effect plays an important role in the field of quantum metrology when two-photon $|2002\rangle$-type states are extended to $N$-photon $N00N$ state \cite{dowling2008quantum,motes2015linear}.
Additionally, coherent transport of quantum states has been attracting attention, where single- and two-photon discrete-time quantum walk schemes are employed to transfer and process quantum states \cite{bose2003quantum,perez2013coherent,lovett2010universal,chapman2016experimental,nitsche2016quantum}.
A quantum routing approach has been proposed to transfer unknown states in 1D and 2D structures to assist quantum communication protocols \cite{zhan2014perfect,vstefavnak2016perfect,bartkiewicz2018implementation}.
Photon propagation control is especially crucial in a large optical network to distribute quantum states between two parties.
The network can be formed by combining multiple copies of four-port devices.
The state manipulation schemes we present can be integrated in quantum communication protocols since state retrieval timing can be chosen at will.
In this manuscript, we propose two-photon quantum state engineering and transportation methods with a linear-optical system which allows manipulation of photon amplitudes by using linear-optical devices such as optical multiports, beam splitters, and phase shifters.
Previously, such multiports have been introduced to demonstrate a two-photon clustering effect in quantum walks when multiple multiport devices are connected to form a chain \cite{simon2020quantum}. Clustering of two photons means that after encountering a multiport, the input two-photon amplitude separates into a superposition of a right-moving and a left-moving two-photon amplitude, with no amplitude for the photons to move in opposite directions.
By utilizing this separation, a higher-dimensional unitary transformation enables flexible quantum engineering designs of possible travel path combinations by switching relative phases within right moving and left moving amplitudes independently.
When two or more multiports are combined, this control of quantum amplitudes in a two-photon state allows demonstration of a “delayed" HOM effect engaging also time-bin modes in addition to spatial modes.
To perform this delayed effect, two or more multiports are required, and relative phase shifts between two rails can reflect the incoming amplitudes.
This controllable reflection without mirrors can also be seen as an additional state manipulation feature.
We introduce two distinct systems. The first system utilizes direct transformation of the two photons by the four-port device, using circulators.
The second case does not have circulators in the system.
The photons are sent from the left side of the beam splitters, then the amplitudes encounter the multiport device.
This second system has not been analyzed in the past. Specific input states for both distinguishable and indistinguishable photons are redistributed between two parties coherently. Therefore, this system is particularly useful in quantum routing type applications.
This article is organized as follows. In Sec. II, we introduce the main optical components used in this manuscript to perform quantum state transformations. These basic linear-optical devices are used to demonstrate the HOM effect engaging spatial and time-bin modes, which is addressed in Sec. III. In Sec. IV, we show redistribution of two-photon states using the devices presented in Sec. II. A summary of the results is given in Sec. V.
\begin{figure}[htp]
\subfloat[]{%
\includegraphics[clip,width=0.8\columnwidth]{multiport_systematic.pdf}%
}
\vspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=0.8\columnwidth]{multiport_systematic_photons.pdf}%
}
\vspace{-10px}
\caption{(a) A possible experimental realization of a directionally-unbiased linear-optical four-port consists of four beam splitters, four mirrors, and four phase shifters. A photon can enter any of the four ports, and exit at any of the four ports (labeled as $a,b,c$, and $d$). With a specific choice of phase settings, a Grover matrix can be realized by coherently summing all possible paths to each output \cite{osawa2019directionally}. A schematic symbol for this device is shown on the right. (b) Single multiport transformation of a two-photon input state. The input state of two correlated photons entering from the left is depicted as $ab$ $(\ket{1,1})$. After scattering by a Grover multiport, the state transforms into $-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2$, which has clear separation of right- and left-moving two-photon amplitudes. No cross-terms with photons moving in opposite directions occur.}
\label{fig:combined_four_port}
\vspace{-10pt}
\end{figure}
\section{Photonic state transformations via linear optical devices}
\vspace{-10px}
In this section, we consider photonic state transformations in higher-dimensional spatial modes using a unitary four-dimensional Grover matrix \cite{grover1996fast} in place of the beam splitter.
We first introduce the main systems that will be used for linear state transformations, followed by the basic photonic devices that implement them.
Beam splitters and the four-dimensional Grover matrix are the central system components.
We mainly use the photon-number representation to describe states throughout the manuscript.
The general beam splitter transformation matrix is
\begin{eqnarray}
\label{eqn:BS}
&\begin{pmatrix}
\hat{c}
\\
\hat{d}
\end{pmatrix}=
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1\\
-1 & 1
\end{pmatrix}
\begin{pmatrix}
\hat{a}\\
\hat{b}
\end{pmatrix}
\end{eqnarray}
where $\hat{a}$,$\hat{b}$,$\hat{c}$, and $\hat{d}$ are used to describe the input photon state transformation.
The labels are generalized here; the specific location-dependent beam splitter transformations are redefined in later sections.
We use photon number states to describe the system unless otherwise specified. The input state is denoted as $\hat{a}\hat{b}$ where $\hat{a}$ and $\hat{b}$ are respectively creation operators for the spatial modes $a$ and $b$.
The hat notation is dropped henceforth.
A horizontally polarized photon in spatial mode $a$ is denoted as $a_{H}$, and a horizontally polarized photon in mode $b$ is denoted as $b_{H}$.
We omit the polarization degree of freedom when identical photons are used throughout the system.
Photonic implementations of the Grover matrix can be readily realized \cite{carolan2015universal,crespi2013anderson,spagnolo2013three,fan1998channel,nikolopoulos2008directional}.
To be concrete, we use directionally-unbiased linear-optical four-ports (Fig. \ref{fig:combined_four_port} (a)) as an example.
Consider sending two indistinguishable photons into a four-dimensional multiport device realization of a Grover matrix.
This Grover operator, the multiport, is described by the unitary matrix
\begin{equation}
\label{eqn:Grover}
\mathrm{Grover} =
\frac{1}{2}
\begin{pmatrix}
-1&1&1&1\\
1&-1&1&1\\
1&1&-1&1\\
1&1&1&-1
\end{pmatrix},
\end{equation}
has equal splitting ratios between all input-output combinations and generalizes the BS transformation matrix given above in Eq.\eqref{eqn:BS}.
In general, photons in modes $a$ and $b$ are transformed in the following manner,
\begin{eqnarray}
&a \xrightarrow{M} \frac{1}{2}(-a+b+c+d)\; \mbox{ and }\;\\ \nonumber
&b \xrightarrow{M} \frac{1}{2}(a-b+c+d).
\end{eqnarray}
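The properties quoted above can be verified numerically. The sketch below (NumPy; the basis ordering $a,b,c,d$ is our own bookkeeping choice) confirms that the Grover matrix of Eq. \eqref{eqn:Grover} is unitary, is its own inverse, and reproduces the single-photon transformations of modes $a$ and $b$:

```python
import numpy as np

# 4x4 Grover matrix: off-diagonal entries +1/2, diagonal entries -1/2.
G = 0.5 * (np.ones((4, 4)) - 2.0 * np.eye(4))

# Real, symmetric, and its own inverse, hence unitary.
assert np.allclose(G @ G, np.eye(4))

# Equal splitting: every input-output amplitude has magnitude 1/2,
# i.e. probability 1/4 for each input-output pair.
assert np.allclose(np.abs(G), 0.5)

# Single-photon transformations (basis order a, b, c, d):
a_out = G @ np.array([1.0, 0.0, 0.0, 0.0])   # a -> (-a + b + c + d)/2
b_out = G @ np.array([0.0, 1.0, 0.0, 0.0])   # b -> (a - b + c + d)/2
assert np.allclose(a_out, [-0.5, 0.5, 0.5, 0.5])
assert np.allclose(b_out, [0.5, -0.5, 0.5, 0.5])
```

The involutory property $G^2 = I$ reflects the directionally-unbiased (reversible) character of the device.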
Theoretical analysis of the reversible Grover matrix has been performed by linear-optical directionally-unbiased four-ports \cite{simon2016group,simon2018joint,osawa2019directionally}, which consist of four beam splitters, four phase shifters, and four mirrors as indicated in Fig. \ref{fig:combined_four_port}(a).
They are represented schematically by the symbol in Fig. \ref{fig:combined_four_port}(b).
The three-port version of this device has been experimentally demonstrated using bulk optical devices \cite{osawa2018experimental}.
For better and more precise control of the phases, miniaturization of the device is highly preferred when realizing the four-port, especially when several multiport devices are required to carry out an experiment.
In general, directional unitary devices such as those of the Reck and some other unitary matrix decomposition models \cite{reck1994experimental,su2019hybrid,clements2016optimal,de2018simple,motes2014scalable} can also realize a Grover matrix.
However, directionally-unbiased devices are advantageous when designing the delayed HOM effect, as well as requiring fewer optical resources.
Identical photons are sent into two of the four input-output ports from the left side (indicated in Fig. \ref{fig:combined_four_port}(b)).
We use multiport devices and beam splitters to form two systems for state propagation.
The photons are sent from the left side of the system throughout the manuscript.
The first BS-multiport composite system is denoted by subscript 0 and the other half by subscript 1.
The result differs depending on the input location of the photons.
Consider a system consisting of two multiports and two beam splitters.
There are several ways to insert photons in the system, however we choose two specific ones in this manuscript.
To be able to send a photon into the middle of the system, the setup needs to be supplied with circulators, as shown in Fig. \ref{fig:multiport_circ}.
The other setup requires no circulators to propagate the input photons.
In that case, the photons experience an extra transformation by a beam splitter upon entrance.
This system is shown in Fig. \ref{fig:multiport_no_circ}.
It needs to be noted that the number of multiports in the system does not change the final outcome.
We use two multiports as an example; however, the result is the same with a single multiport or with more than two multiports, as long as the devices are lossless during the propagation.
Brief comments on the mathematical structure of the transformations carried out by the configurations given in Figs. \ref{fig:multiport_circ} and \ref{fig:multiport_no_circ} are given in the Appendix.
\vspace{-15px}
\subsection{Photon propagation using circulators}
\vspace{-10px}
\begin{figure}
\vspace{-5mm}
\centering
\includegraphics[width=3.5in]{multiport_two_circulator.pdf}
\caption{A system setup with input photons supplied by circulators. The system consists of two beam splitters, two multiport devices, and two circulators. These circulators allow us to send photons from the left side of the multiport device without experiencing a beam splitter transformation before entering the multiport device. The input state splits into right-moving and left-moving amplitudes (shown as dotted arrows) upon multiport transformation.}
\label{fig:multiport_circ}
\end{figure}
This method is used to distribute an HOM pair between the right and the left side of the system.
The original input state $a_0b_0$ transforms to:
\begin{eqnarray} \label{eq:trans}
a_0b_0 &\xrightarrow{M} \frac{1}{2}(-a_0+b_0+c_0+d_0)\frac{1}{2}(a_0-b_0+c_0+d_0) \nonumber \\
&=-\frac{1}{4}(a_0^2+b_0^2)+\frac{1}{2}a_0b_0+\frac{1}{4}(c_0^2+d_0^2)+\frac{1}{2}c_0d_0\nonumber \\
&= -\frac{1}{4}(a_0-b_0)^2+\frac{1}{4}(c_0+d_0)^2,
\end{eqnarray}
where we have used the commutation relation $ab = ba$ since the photons are identical and in different spatial locations.
Eq.\eqref{eq:trans} shows that correlated photons are split into right moving $\frac{1}{4}(c_0+d_0)^2$ and left moving $-\frac{1}{4}(a_0-b_0)^2$ amplitudes, with no cross terms.
This absolute separation of propagation direction without mixing of right moving and left moving amplitudes is important because the photon pairs remain distinctly localized and clustered at each step \cite{simon2020quantum}.
The right moving amplitude is translated to $\frac{1}{4}(a_1+b_1)^2$ and propagates without changing its form.
$\frac{1}{4}(a_1+b_1)^2 \xrightarrow{M} \frac{1}{4}(c_1+d_1)^2$.
The left moving amplitude $-\frac{1}{4}(a_0-b_0)^2$ stays the same until BS transformation.
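The absence of cross terms in Eq.\eqref{eq:trans} can be checked by brute-force expansion. The following sketch (Python; `apply` is our own helper that expands a two-photon polynomial under linear mode substitutions) confirms that every surviving monomial lives entirely on the left modes $\{a,b\}$ or entirely on the right modes $\{c,d\}$:

```python
import itertools

# Single-photon substitution rules for the Grover multiport
# (basis order a, b, c, d is our bookkeeping choice):
M = {"a": [("a", -0.5), ("b", 0.5), ("c", 0.5), ("d", 0.5)],
     "b": [("a", 0.5), ("b", -0.5), ("c", 0.5), ("d", 0.5)]}

def apply(rules, poly):
    """Expand a two-photon polynomial (dict: sorted mode tuple -> amplitude)
    under linear substitution of each creation operator."""
    out = {}
    for modes, amp in poly.items():
        for combo in itertools.product(*(rules.get(m, [(m, 1.0)]) for m in modes)):
            key = tuple(sorted(m for m, _ in combo))
            coeff = amp
            for _, c in combo:
                coeff *= c
            out[key] = out.get(key, 0.0) + coeff
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

state = apply(M, {("a", "b"): 1.0})   # input a_0 b_0

# Expected: -1/4 (a-b)^2 + 1/4 (c+d)^2, i.e. no left/right cross terms.
left, right = {"a", "b"}, {"c", "d"}
assert all(set(k) <= left or set(k) <= right for k in state)
assert abs(state[("a", "a")] + 0.25) < 1e-12   # -1/4 a^2
assert abs(state[("a", "b")] - 0.5) < 1e-12    # +1/2 ab
assert abs(state[("c", "c")] - 0.25) < 1e-12   # +1/4 c^2
assert abs(state[("c", "d")] - 0.5) < 1e-12    # +1/2 cd
```

Each coefficient matches the middle line of Eq.\eqref{eq:trans} term by term, confirming the clustering of the pair into purely right-moving and purely left-moving amplitudes.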
The controlled HOM effect can be observed in higher-dimensional multiports assisted by extra beam splitters.
Imagine beam splitters inserted in the system as in Fig. \ref{fig:multiport_circ}.
Input state $ab$ is now transformed into $-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2$ as indicated above, then further transformed by beam splitters to obtain HOM pairs between the right side and left side of the system.
The right and left sides of the system each have two output ports, and the exit port of the photon pair can be controlled by varying phase shift settings before the beam splitters.
A phase shift on the left side of the system does not affect the result of the right side amplitude, and vice versa.
This system, having circulators at the beginning, is denoted as transformation pattern I, and the detailed discussion of its transformation is in Sec. \ref{sec:pattern_I}.
\vspace{-15px}
\subsection{Photon propagation without circulators}
\vspace{-10px}
\begin{figure}
\vspace{-5mm}
\centering
\includegraphics[width=3.5in]{multiport_two.pdf}
\caption{System setup without circulators. The input photons are subjected to a beam splitter before they enter the multiport. The input state is transformed and propagated in one direction (shown as dotted arrows). The BS-transformed input state is transformed again by the first multiport device.}
\label{fig:multiport_no_circ}
\end{figure}
This method allows input states to be redistributed between the right and left sides of the system without changing the amplitudes.
Consider sending two photons from the left side of the beam splitter, as indicated in Fig. \ref{fig:multiport_no_circ}, and then transforming the output state by the multiport device.
We only consider the first multiport transformation here.
The rest of the transformation is given in Sec. \ref{sec:pattern_II}.
\begin{equation}
e_0f_0\xrightarrow{BS}-\frac{1}{2}(a_0^2-b_0^2)
\xrightarrow{M}-\frac{1}{2}(a_0-b_0)(c_0+d_0)
\end{equation}
The final state has cross terms, and it differs from the case with circulators in the sense that the output state is \textit{coupled}: the state does not provide a clear separation between right-moving and left-moving amplitudes.
Although the distinction is not sharp, we still refer to these as right-moving and left-moving amplitudes unless special attention is required.
This system, having no circulators, is denoted as transformation pattern II, and the detailed discussion of its transformation is in Sec. \ref{sec:pattern_II}.
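The coupled structure of this state is easy to verify by expanding the two steps symbolically. The sketch below (Python; `apply` is our own polynomial-substitution helper, and the overall sign depends on the beam-splitter convention, so we check magnitudes) confirms that exactly the four cross monomials $ac$, $ad$, $bc$, and $bd$ survive, each with amplitude of magnitude $1/2$:

```python
import itertools

s = 2 ** -0.5

def apply(rules, poly):
    """Expand a two-photon polynomial (dict: sorted mode tuple -> amplitude)
    under linear substitution of each creation operator."""
    out = {}
    for modes, amp in poly.items():
        for combo in itertools.product(*(rules.get(m, [(m, 1.0)]) for m in modes)):
            key = tuple(sorted(m for m, _ in combo))
            coeff = amp
            for _, c in combo:
                coeff *= c
            out[key] = out.get(key, 0.0) + coeff
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

# Input BS: e -> (a - b)/sqrt(2), f -> (a + b)/sqrt(2), then the multiport.
BS = {"e": [("a", s), ("b", -s)], "f": [("a", s), ("b", s)]}
M = {"a": [("a", -0.5), ("b", 0.5), ("c", 0.5), ("d", 0.5)],
     "b": [("a", 0.5), ("b", -0.5), ("c", 0.5), ("d", 0.5)]}

state = apply(M, apply(BS, {("e", "f"): 1.0}))   # e_0 f_0 input

# Every surviving monomial pairs one left mode {a, b} with one right mode {c, d}:
assert set(state) == {("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")}
assert all(abs(abs(v) - 0.5) < 1e-12 for v in state.values())
```

This is the product form $\pm\frac{1}{2}(a_0-b_0)(c_0+d_0)$: unlike the circulator case, no purely left-moving or purely right-moving monomials remain.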
\vspace{-10px}
\section{Transformation pattern I: directionally-controllable HOM effect in higher-dimensional spatial and temporal modes} \label{sec:pattern_I}
\vspace{-10px}
In this section, we discuss transformation pattern I.
The higher-dimensional HOM effect is generated by the multiport-based linear-optical system with circulators at the inputs.
Propagation-direction control and delays between amplitudes are discussed in the subsections below.
We use a single multiport device to show the control effect, and we introduce two multiport devices into the system for the delayed effect.
\begin{figure}[htp!]
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator1.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator2.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator3.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator4.pdf}%
}
\caption{Higher dimensional HOM effect with directional control.
Correlated photons, $a_0b_0$, are sent in from the circulators into the first multiport.
After the first multiport interaction, the incoming photon pair splits into right-moving and left-moving two-photon amplitudes.
The separately-moving amplitudes are bunched at the beam splitters on right and left sides.
We can controllably switch between four different output sites, and where the clustered output photons appear depends on the location of the phase shifter $P$.
In (a), no phase plates are introduced, and the output biphoton amplitudes leave $f_0$ and $e_1$. The final state is $\frac{1}{\sqrt{2}}(\ket{0,2}_0+\ket{2,0}_1)$, meaning superposition of two photons in mode $f_0$ and two in mode $e_1$. In case (b), the phase shifter $P=\pi$ is to the left, changing the relative phase between upper and lower arms.
Similarly in (c) and (d), other locations for the phase shifters cause biphotons to leave in other spatial modes.} \label{fig:HOM_control_fig}
\end{figure}
\vspace{-15px}
\subsection{Control of propagation direction}
\vspace{-10px}
Given that one two-photon amplitude must exit left and one right, there are four possible combinations of outgoing HOM pairs as indicated in Fig. \ref{fig:HOM_control_fig}.
The combinations are, (a): $(f_0^2,e_1^2)$, (b): $(e_0^2,e_1^2)$, (c): $(e_0^2,f_1^2)$, and (d): $(f_0^2,f_1^2)$.
This means, in the case of (a) for example, the left-moving two-photon amplitude leaves in mode $f$, and the right-moving amplitude leaves in mode $e$.
Directional control of the four cases is readily demonstrated, as follows. In case (a) there is only a beam splitter transformation after the multiport, giving
\begin{eqnarray}
&-\frac{1}{4}(a_0-b_0)^2\xrightarrow{BS} -\frac{1}{4\sqrt{2}}(e_0-f_0-e_0-f_0)^2 = \frac{1}{2}f_0^2, \nonumber \\
&\frac{1}{4}(c_1+d_1)^2 \xrightarrow{BS} -\frac{1}{4\sqrt{2}}(e_1-f_1+e_1+f_1)^2 = \frac{1}{2}e_1^2.
\end{eqnarray}
The final output state is,
\begin{eqnarray}
\frac{1}{2}(f_0^2+e_1^2)=\frac{1}{\sqrt{2}}(\ket{0,2}_0+\ket{2,0}_1).
\end{eqnarray}
In case (b), a phase plate is inserted in the lower arm of the left side to switch the left exit port from $f_0$ to $e_0$. All the phase shifters P are set to $\pi$, therefore transforming $b \rightarrow -b$.
\begin{eqnarray}
&-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2 \xrightarrow{P} -\frac{1}{4}(a+b)^2+\frac{1}{4}(c+d)^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(-e_0^2+e_1^2) = \frac{1}{\sqrt{2}}(-\ket{2,0}_0+\ket{2,0}_1).
\end{eqnarray}
Compared to case (a), the exit port is switched from $f$ to $e$. In (c), phase plates are inserted in the lower arms of both the right and left sides.
Photons in modes $b$ and $d$ are transformed to $-b$ and $-d$, respectively.
\begin{eqnarray}
&-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2 \xrightarrow{P} -\frac{1}{4}(a+b)^2+\frac{1}{4}(c-d)^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(-e_0^2-f_1^2) = -\frac{1}{\sqrt{2}}(\ket{2,0}_0+\ket{0,2}_1).
\end{eqnarray}
In (d), a phase plate is inserted in the lower arm of the right side. A photon in mode $d$ is transformed to $-d$.
\begin{eqnarray}
&-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2 \xrightarrow{P} -\frac{1}{4}(a-b)^2+\frac{1}{4}(c-d)^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(f_0^2-f_1^2) = \frac{1}{\sqrt{2}}(\ket{0,2}_0-\ket{0,2}_1).
\end{eqnarray}
This demonstrates complete directional control of biphoton propagation direction using only linear optical devices.
Directional control does not require changing splitting ratios at each linear optical device (BS and multiport), and occurs in a lossless manner since no post-selection is required.
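The four routing cases can be verified by expanding the post-multiport state through the exit beam splitters for each phase setting. In the following sketch (Python; `apply` and `route` are our own helpers, and the beam-splitter sign conventions are our own choice, so only the occupied exit ports, not overall signs, are meaningful), `route` returns the set of occupied output monomials:

```python
import itertools

s = 2 ** -0.5

def apply(rules, poly):
    """Expand a two-photon polynomial (dict: sorted mode tuple -> amplitude)
    under linear substitution of each creation operator."""
    out = {}
    for modes, amp in poly.items():
        for combo in itertools.product(*(rules.get(m, [(m, 1.0)]) for m in modes)):
            key = tuple(sorted(m for m, _ in combo))
            coeff = amp
            for _, c in combo:
                coeff *= c
            out[key] = out.get(key, 0.0) + coeff
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

# Post-multiport state -1/4 (a-b)^2 + 1/4 (c+d)^2 written out as monomials:
state0 = {("a", "a"): -0.25, ("b", "b"): -0.25, ("a", "b"): 0.5,
          ("c", "c"): 0.25, ("d", "d"): 0.25, ("c", "d"): 0.5}

def route(p_left, p_right):
    """Occupied exit monomials; a pi phase (b -> -b or d -> -d) flips a sign."""
    sl = -1.0 if p_left else 1.0
    sr = -1.0 if p_right else 1.0
    rules = {"a": [("e0", s), ("f0", -s)],
             "b": [("e0", sl * s), ("f0", sl * s)],
             "c": [("e1", s), ("f1", -s)],
             "d": [("e1", sr * s), ("f1", sr * s)]}
    return set(apply(rules, state0))

assert route(False, False) == {("f0", "f0"), ("e1", "e1")}   # case (a)
assert route(True,  False) == {("e0", "e0"), ("e1", "e1")}   # case (b)
assert route(True,  True)  == {("e0", "e0"), ("f1", "f1")}   # case (c)
assert route(False, True)  == {("f0", "f0"), ("f1", "f1")}   # case (d)
```

Each $\pi$ phase shifter toggles one side of the system between its $e$ and $f$ exit port while leaving the other side untouched, reproducing the four combinations (a)-(d) above.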
\begin{figure*}[htp!]
\centering
\subfloat[][Delayed HOM effect without reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_circulator_arrays.pdf}\label{}}
\subfloat[][Delayed HOM effect with reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_circulator_arrays_phase_plate.pdf}\label{}}
\caption{Delayed HOM effect. The two-photon amplitude transformation progresses in time from top to bottom. The distance traveled in a single time step is indicated by vertical dashed lines. The original photons as well as photons in the target state are indicated using red circles. The green striped circles indicate intermediate transformed states. The total number of photons is always two throughout the transformations. (a) Two multiports and beam splitters {\it without} phase shifters between the multiports. At the first step, the behavior is the same as for a single multiport with beam splitters. The right-moving amplitude propagates through the second multiport, and the left-moving amplitude propagates through the beam splitter. The right-moving amplitude is delayed by one additional multiport transformation before a two-photon detection probability becomes available in spatial modes on the right. (b) Two multiports and beam splitters {\it with} a phase shifter P set at $\pi$ between the multiports. When P is present, the right-moving amplitude gains a relative phase between modes $a_1$ and $b_1$. Reflection occurs at the multiport when the relative phase between the two is $\pi$. Therefore, the transformed amplitude reflects upon the second multiport encounter, returning to the original state with opposite propagation direction. Reflection does not occur for the left-moving amplitude, so it continues to propagate leftward. The original left-moving amplitude becomes available for detection earlier than the transformed left-moving amplitude.}
\label{fig:delayed_HOM}
\end{figure*}
\subsection{Delayed HOM effect}
\vspace{-10px}
\subsubsection{Delayed HOM effect without reflection}
\vspace{-10px}
We now introduce a phase shifter between the two multiports, as in Fig. \ref{fig:delayed_HOM} (b). Without the phase plate between the two multiport devices, the photons behave exactly as in the previous subsection. With it, the phase shifter reverses the propagation direction of the right-moving amplitude; this reflection results in HOM pairs being detected only on the left side, with some delay between the two exiting amplitudes. We first treat the case without the phase shifter. Photon insertion is the same as in the previous case: the photons enter from the left side of the first multiport.
\vspace{-10px}
\begin{eqnarray}
&{a_{0}b_{0}}_R\xrightarrow{M}-\frac{1}{4}(a_{0}-b_{0})_L^2+ \frac{1}{4}(c_{0}+d_{0})_R^2 \nonumber \\
&\xrightarrow{T+BS,\mbox{ }T} -\frac{1}{2}f_{0L}^2 + \frac{1}{4}(a_{1}+b_{1})_R^2 \nonumber \\
&\xrightarrow{M} -\frac{1}{2}f_{0L}^2 + \frac{1}{4}(c_{1}+d_{1})_R^2 \xrightarrow{BS} -\frac{1}{2}f_{0L}^2 + \frac{1}{2}e_{1R}^2,
\end{eqnarray}
where M, T, and BS represent the multiport, translation, and beam splitter transformations, respectively. We use subscripts $R$ and $L$ to indicate amplitudes propagating to the right or left. $T$ translates a photon amplitude by a single time step (for example, $\frac{1}{4}(c_{0}+d_{0})^2 \rightarrow \frac{1}{4}(a_{1}+b_{1})^2$). The compound label $T+BS$, $T$ means that $T+BS$ is applied to the first term and $T$ to the second term.
The final state is,
\begin{eqnarray}
-\frac{1}{2}f_{0L}^2 + \frac{1}{2}e_{1R}^2=-\frac{1}{\sqrt{2}}(\ket{0,2}_{0{T_0}L}-\ket{2,0}_{1{T_1}R}),
\end{eqnarray}
where $T_0$ is the time when the first biphoton amplitude leaves the system and $T_1$ is the exit time of the second.
The right-moving amplitude stays in the system longer than the left-moving one because of the extra multiport device, leading to a time delay $\Delta T = T_1-T_0$.
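The M, T, BS chain above can be replayed symbolically. The beam-splitter sign convention (upper mode $\to (e-f)/\sqrt{2}$, lower $\to (e+f)/\sqrt{2}$) is an assumption chosen to match the sign pattern of the equations in this subsection; the final state reproduces $-\frac{1}{2}f_{0L}^2 + \frac{1}{2}e_{1R}^2$.

```python
import sympy as sp

a0, b0, c0, d0, a1, b1, c1, d1 = sp.symbols('a0 b0 c0 d0 a1 b1 c1 d1')
e0, f0, e1, f1 = sp.symbols('e0 f0 e1 f1')
s2 = sp.sqrt(2)

state = a0*b0  # two photons inserted at the first multiport
# M: Grover multiport acting on the left-side modes of unit 0
state = sp.expand(state.subs({a0: (-a0 + b0 + c0 + d0)/2,
                              b0: (a0 - b0 + c0 + d0)/2}, simultaneous=True))
# T+BS on the left-moving term, T on the right-moving term:
# the left BS maps (a0, b0) to (e0, f0); translation relabels (c0, d0) -> (a1, b1)
state = sp.expand(state.subs({a0: (e0 - f0)/s2, b0: (e0 + f0)/s2,
                              c0: a1, d0: b1}, simultaneous=True))
# M at unit 1, then the right-side BS
state = sp.expand(state.subs({a1: (-a1 + b1 + c1 + d1)/2,
                              b1: (a1 - b1 + c1 + d1)/2}, simultaneous=True))
state = sp.expand(state.subs({c1: (e1 - f1)/s2,
                              d1: (e1 + f1)/s2}, simultaneous=True))
```

The result is $-\frac{1}{2}f_0^2 + \frac{1}{2}e_1^2$: one biphoton amplitude exits on the left at the earlier time, the other on the right after the extra multiport step.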
\vspace{-10px}
\subsubsection{Delayed HOM effect with reflection}
\vspace{-10px}
When a $\pi$-phase shifter is inserted on one path between the multiports, the right-moving amplitude gets reflected upon the second multiport encounter.
Instead of having two-photon amplitudes on the right and left sides of the system, both photon amplitudes end up leaving from the left.
The HOM effect still occurs but now with some delay between the two amplitudes at the end of the BS.
This is indicated in Fig. \ref{fig:delayed_HOM} (b).
\vspace{-10px}
\begin{eqnarray}
&{a_{0}b_{0}}_R\xrightarrow{M}-\frac{1}{4}(a_{0}-b_{0})_L^2+ \frac{1}{4}(c_{0}+d_{0})_R^2 \nonumber \\
&\xrightarrow{T+BS,\mbox{ }T+P} -\frac{1}{2}f_{0L}^2 + \frac{1}{4}(a_{1}-b_{1})_R^2.
\end{eqnarray}
The second transformation $T+BS$, $T+P$ is read as applying $T+BS$ on the first term and $T+P$ on the second term.
Left-moving photons leave before right-moving photons.
\begin{eqnarray}
\xrightarrow{M}&\frac{1}{4}(a_{1}-b_{1})_L^2 \xrightarrow{P+T} \frac{1}{4}(c_{0}+d_{0})_L^2 \nonumber \\
&\xrightarrow{M} \frac{1}{4}(a_{0}+b_{0})_L^2 \xrightarrow{BS} \frac{1}{2}e_{0L}^2.
\end{eqnarray}
The final state,
\begin{eqnarray}
-\frac{1}{2}f_{0L}^2+\frac{1}{2}e_{0L}^2=-\frac{1}{\sqrt{2}}(\ket{0,2}_{0T_0L}-\ket{2,0}_{0T_2L}),
\end{eqnarray}
is now two HOM-pair amplitudes, both on the left side of the system, at output ports $e_0$ and $f_0$, with time delay $\Delta T = T_2 - T_0$ between them. The first amplitude leaves port $f_0$ at $T_0$; the second leaves $e_0$ at the time labeled $T_2$.
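The reflection mechanism can be isolated in one line of algebra: under the Grover multiport, the symmetric combination $a+b$ is transmitted to $c+d$, while the antisymmetric combination $a-b$ produced by the $\pi$ phase shifter maps back to $-(a-b)$, i.e. it returns to the input-side modes with a sign flip. A short symbolic check:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
# Grover multiport acting on all four ports: a, b on the left, c, d on the right
grover = {a: (-a + b + c + d)/2, b: (a - b + c + d)/2,
          c: (a + b - c + d)/2, d: (a + b + c - d)/2}

transmitted = sp.expand((a + b).subs(grover, simultaneous=True))  # zero relative phase
reflected   = sp.expand((a - b).subs(grover, simultaneous=True))  # pi relative phase
```

`transmitted` equals $c+d$ while `reflected` equals $-(a-b)$: an amplitude carrying a $\pi$ relative phase between the two rails never leaves through the right-side ports, which is exactly the mirrorless reflection exploited above.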
\vspace{-10px}
\section{Transformation pattern II: state redistribution in higher-dimensional spatial and temporal modes} \label{sec:pattern_II}
\vspace{-10px}
\subsection{State transformation and propagation}
\vspace{-10px}
In the previous section, we considered the case where the input photon state is transformed by the multiport device immediately after photon insertion.
Instead of using circulators, we can first transform the input state with a BS and then with the multiport device.
Even though the Grover matrix spreads the input state equally in four directions, the end result preserves the original form of the input state.
We demonstrate this state redistribution property using distinguishable and indistinguishable photons: the input state gets redistributed between the right and left sides without changing amplitudes.
The propagation result differs from the previous case.
Consider sending two indistinguishable photons into the system; the two input photons are given the same polarization to make them indistinguishable.
The input photons are inserted from the left side of the beam splitter.
The beam splitter transforms the input state, which propagates from the left side to the right side of the device without any reflections.
The amplitudes are then transformed by the multiport device.
This transformation splits the input photons into coupled right-moving and left-moving amplitudes.
The coupled left-moving amplitude reflected from the first multiport counter-propagates and is transformed by the first beam splitter from right to left.
The right-moving amplitude is transmitted without change in amplitude and finally exits through the beam splitter on the right side.
\vspace{-15px}
\subsubsection{Indistinguishable photons}
\vspace{-10px}
We first examine the mathematical details for indistinguishable photons in the system without circulators.
We consider three cases of photons sent into spatial modes $e$ and $f$.
First, consider a pair of indistinguishable single photons entering from spatial modes $e$ and $f$.
\vspace{-5px}
\begin{eqnarray}
&e_{H0}f_{H0}\xrightarrow{BS}-\frac{1}{2}(a_{H0}^2-b_{H0}^2)\nonumber \\
&\xrightarrow{M} -\frac{1}{2}(a_{H0}-b_{H0})(c_{H0}+d_{H0}) \nonumber \\
&\xrightarrow{BS} -e_{H0}e_{H1}.
\end{eqnarray}
Next, consider an HOM state with relative phase $+1$ between the two amplitudes.
\begin{eqnarray}
&\frac{1}{2}(e_{H0}^2+f_{H0}^2)\xrightarrow{BS}\frac{1}{2}(a_{H0}^2+b_{H0}^2) \nonumber \\
&\xrightarrow{M} \frac{1}{4}(a_{H0}-b_{H0})^2+\frac{1}{4}(c_{H0}+d_{H0})^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(e_{H0}^2+e_{H1}^2).
\end{eqnarray}
The input state is redistributed in the sense that one amplitude ends up on the right side of the system and the other on the left, while maintaining the original structure of the state.
Finally, consider an HOM state with relative phase $-1$ between the two amplitudes.
\vspace{-5px}
\begin{eqnarray}
&\frac{1}{2}(e_{H0}^2-f_{H0}^2) \xrightarrow{BS} -a_{H0}b_{H0} \nonumber \\
&\xrightarrow{M} \frac{1}{4}(a_{H0}-b_{H0})^2-\frac{1}{4}(c_{H0}+d_{H0})^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(e_{H0}^2-e_{H1}^2).
\end{eqnarray}
In both cases, the output state is identical to the input state except for the spatial modes.
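The multiport step used in the derivations above depends only on the Grover transformation of modes $a$ and $b$, so it can be verified independently of any beam-splitter convention (the overall sign of the first case is convention-dependent):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
M = {a: (-a + b + c + d)/2, b: (a - b + c + d)/2}

def apply_M(state):
    # Grover multiport acting on the left-side modes a, b
    return sp.expand(state.subs(M, simultaneous=True))

q = sp.Rational(1, 4)
pair      = apply_M((a**2 - b**2)/2)   # two single photons after the BS (sign convention-dependent)
hom_plus  = apply_M((a**2 + b**2)/2)   # HOM state, relative phase +1, after the BS
hom_minus = apply_M(-a*b)              # HOM state, relative phase -1, after the BS
```

`pair` stays a coupled product $-\frac{1}{2}(a-b)(c+d)$, while the two HOM cases split into the quarter-sum and quarter-difference of $(a-b)^2$ and $(c+d)^2$, matching the middle lines of the three equations above.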
\vspace{-10px}
\subsubsection{Distinguishable photons}
\vspace{-10px}
Now we examine the case of a distinguishable two-photon input.
The procedure is identical to that of the previous case.
We begin with two distinguishable photons, one in each mode, without superposition.
\begin{eqnarray}
&e_{H0}f_{V0} \xrightarrow{BS} \frac{1}{2}(a_{H0}-b_{H0})(a_{V0}+b_{V0})\nonumber \\
&\xrightarrow{M} -\frac{1}{2}(a_{H0}-b_{H0})(c_{V0}+d_{V0})\nonumber\\
&\xrightarrow{BS} - e_{H0} e_{V1}.
\end{eqnarray}
We next examine the case of distinguishable HOM states.
\begin{eqnarray}
&\frac{1}{2}(e_{H0}^2 \pm f_{V0}^2) \xrightarrow{BS}\frac{1}{4}(a_{H0}-b_{H0})^2\pm\frac{1}{4}(a_{V0}+b_{V0})^2\nonumber\\
&\xrightarrow{M}\frac{1}{4}(a_{H0}-b_{H0})^2\pm\frac{1}{4}(c_{V0}+d_{V0})^2\nonumber\\
&\xrightarrow{BS} \frac{1}{2}(e_{H0}^2 \pm e_{V1}^2).
\end{eqnarray}
The exit location can be controlled in this scheme as well by introducing phase shifters into the system, as indicated in Fig. \ref{fig:control_without_circ}.
This procedure does not destroy the redistribution property.
There are four potential spatial-mode combinations; switching the phase shifts before the beam splitters switches the propagation direction.
The combinations are (a): $(e_0,f_0)\rightarrow(e_0,e_1)$, (b): $(e_0,f_0)\rightarrow(e_0,f_1)$, (c): $(e_0,f_0)\rightarrow(f_0,e_1)$, and (d): $(e_0,f_0)\rightarrow(f_0,f_1)$.
The results for the system with circulators are summarized in Table \ref{tab:table_1}, and those for the system without them in Table \ref{tab:table_2}.
In the case of indistinguishable photons, the results are cyclic in the sense that all three states can be produced by using the other system.
However, there is a significant difference when distinguishable photons are considered.
\begin{figure}[htp!]
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator2.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator3.pdf}%
}
\hspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator4.pdf}%
}
\caption{Quantum state redistribution with control of propagation direction. We perform the same analysis as for the higher-dimensional HOM effect with direction control. By introducing phase shifters before the beam splitters, we can change the exit direction of the amplitudes. The starting state is $e_0f_0$. The first beam splitter transforms the input state, which then enters the multiport device. The multiport-transformed state goes through the beam splitters on the right and left sides. The final outcome has the same form as the input state.}
\label{fig:control_without_circ}
\end{figure}
\begin{table}
\begin{tabular}{P{0.4\linewidth}P{0.6\linewidth}}
\hline \hline
\noalign{\vskip 1ex}
\multicolumn{2}{c}{State transformation with circulators}\\
\noalign{\vskip 1ex}
\hline \hline
\noalign{\vskip 1ex}
Indistinguishable photons & $a_{H0}b_{H0} \rightarrow -\frac{1}{2}(e_{H0}^2 - e_{H1}^2)$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with +1 relative phase & $\frac{1}{2}(a_{H0}^2 + b_{H0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 + e_{H1}^2)$\\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with $-1$ relative phase & $\frac{1}{2}(a_{H0}^2 - b_{H0}^2) \rightarrow -e_{H0}e_{H1}$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable photons & $a_{H0}b_{V0} \rightarrow -\frac{1}{2}(e_{H0}-e_{H1})(e_{V0}+e_{V1})$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable HOM pair & $\frac{1}{2}(a_{H0}^2 \pm b_{V0}^2) \rightarrow \frac{1}{4}\{(e_{H0}-e_{H1})^2\pm(e_{V0}-e_{V1})^2\}$ \\
\noalign{\vskip 1ex}
\hline \hline
\end{tabular}
\caption{State transformations in a system with circulators.
The first three rows deal with indistinguishable photons, which are given the same polarization.
A state consisting of two single photons becomes an HOM state.
We also analyzed HOM states as initial states; they become either an HOM state or a two-single-photon state.
Distinguishable photons are analyzed by introducing orthogonal polarizations.
The output states become coupled states, meaning the original states are not preserved.
} \label{tab:table_1}
\end{table}
\begin{table}
\begin{tabular}{P{0.4\linewidth}P{0.6\linewidth}}
\hline \hline
\noalign{\vskip 1ex}
\multicolumn{2}{c}{State transformation without circulators} \\
\noalign{\vskip 1ex}
\hline \hline
\noalign{\vskip 1ex}
Indistinguishable photons & $e_{H0}f_{H0} \rightarrow -e_{H0}e_{H1}$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with +1 relative phase & $\frac{1}{2}(e_{H0}^2 + f_{H0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 + e_{H1}^2)$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with $-1$ relative phase & $\frac{1}{2}(e_{H0}^2 - f_{H0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 - e_{H1}^2)$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable photons & $e_{H0}f_{V0} \rightarrow -e_{H0}e_{V1}$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable HOM pair & $\frac{1}{2}(e_{H0}^2 \pm f_{V0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 \pm e_{V1}^2)$ \\
\noalign{\vskip 1ex}
\hline \hline
\end{tabular}
\caption{State transformations in a system without circulators.
The structure of the table is the same as that of Table \ref{tab:table_1}.
The first three rows deal with indistinguishable photons, which are given the same polarization.
The last two rows handle distinguishable photons.
The output states preserve the same form as the input state.
We start the transformation from system location 0; the transformed states are then redistributed between locations 0 and 1.
The result shows coherent transport of input states.} \label{tab:table_2}
\end{table}
\vspace{-10px}
\subsection{Delayed state redistribution}
\vspace{-10px}
\begin{figure*}[htp!]
\centering
\subfloat[][State redistribution without reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_array.pdf}\label{}}
\subfloat[][State redistribution with reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_array_phase_plate.pdf}\label{}}
\caption{Delayed state redistribution.
The two-photon amplitude transformation progresses in time from top to bottom.
The distance traveled in a single time step is indicated by vertical dashed lines. The total photon number is two throughout the propagation. At the first step in both cases, the input two-photon state is transformed by the BS.
The transformed state becomes the HOM state, and it is indicated as red transparent overlapped circles occupying both modes.
The initial and the final transformed state are indicated using solid red circles, and intermediate states are indicated in striped yellow circles.
(a) Two multiports and beam splitters {\it without} phase shifters between the multiports.
The HOM state enters the multiport and is transformed into the form $-\frac{1}{2}(a_{0}-b_{0})(c_{0}+d_{0})$.
The amplitudes are coupled; however, they propagate without change in amplitude. After several steps, the amplitudes occupying the two rails converge to single-mode states after transformation by the beam splitters.
The final state has the same form as the input state.
(b) Two multiports and beam splitters {\it with} a phase shifter P set at $\pi$ between multiports.
When P is present, the right-moving coupled amplitude gains a relative phase between modes $a_1$ and $b_1$.
Reflection occurs at the multiport when the relative phase between the two is $\pi$.
Therefore, the transformed amplitude reflects upon the second multiport encounter, returning to the original state with opposite propagation direction.
Reflection does not occur for the transformed coupled left-moving amplitude, so it continues to propagate leftward.
The original left-moving amplitude becomes available for detection earlier than the transformed left-moving amplitude.}
\label{fig:delayed_state_dist}
\end{figure*}
As in the higher-dimensional HOM case, we introduce a temporal delay effect by placing a phase shifter between the two multiports.
\vspace{-10px}
\subsubsection{Without reflection}
\vspace{-10px}
When there is no phase shifter between the two multiports, the result is identical to that of the single-multiport system of the previous section.
The state transformation and propagation are shown schematically in Fig. \ref{fig:delayed_state_dist} (a).
The photons are initially sent in from the left side of the BS.
The correlated photons are transformed into an HOM state by the BS.
\vspace{-10px}
\begin{eqnarray}
&{e_{0}f_{0}}_R \xrightarrow{BS} \frac{1}{2}(a_{0}^2-b_{0}^2)_R \nonumber\\
&\xrightarrow{M}-\frac{1}{2}(a_{0}-b_{0})_L(c_{0}+d_{0})_R.
\end{eqnarray}
The HOM state is transformed by the multiport device.
The result is a coupled state because the right-moving and left-moving amplitudes are not separated.
We propagate this state through the BS on the left and translate the amplitudes moving to the right.
\begin{eqnarray}
&\xrightarrow{BS,T}-\frac{1}{\sqrt{2}}f_{0L}(a_{1}+b_{1})_R \xrightarrow{M} -\frac{1}{\sqrt{2}}f_{0L}(c_{1}+d_{1})_R\nonumber\\
&\xrightarrow{BS} -f_{0T_0L}e_{1T_1R}.
\end{eqnarray}
The left-moving amplitude is transformed by the left BS while the right-moving amplitude propagates to the second multiport device.
This introduces a temporal difference between the right-moving and the left-moving photons.
\vspace{-15px}
\subsubsection{With reflection}
\vspace{-10px}
Reflection of amplitudes is introduced when there is a phase shifter between the two multiport devices, as indicated in Fig. \ref{fig:delayed_state_dist} (b).
\begin{eqnarray}
&{e_{0}f_{0}}_R \xrightarrow{BS} \frac{1}{2}(a_{0}^2-b_{0}^2)_R
\xrightarrow{M} -\frac{1}{2}(a_{0}-b_{0})_L(c_{0}+d_{0})_R \nonumber \\
&\xrightarrow{BS,T+P}-\frac{1}{\sqrt{2}}f_{0L}(a_{1}-b_{1})_R
\end{eqnarray}
The right-moving amplitude gains a relative phase between the upper and lower rails, and this relative phase causes the amplitude to be reflected at the next multiport encounter.
\begin{eqnarray}
&\xrightarrow{M} -\frac{1}{\sqrt{2}}f_{0L}(a_{1}-b_{1})_L \xrightarrow{T+P} -\frac{1}{\sqrt{2}}f_{0L}(c_{0}+d_{0})_L\nonumber\\
&\xrightarrow{M} -\frac{1}{\sqrt{2}}f_{0L}(a_{0}+b_{0})_L \xrightarrow{BS} -f_{0T_{0}L} e_{0T_{2}L}
\end{eqnarray}
The input photons have no delay between them at the beginning. The delay $\Delta T = T_2 - T_0$ is introduced by the reflection in the system.
\vspace{-10pt}
\section{Conclusion}
\vspace{-10px}
We have demonstrated higher-dimensional quantum state manipulation, such as the HOM effect and state redistribution, by applying linear-optical four-ports realizing a four-dimensional Grover matrix, accompanied by beam splitters and phase shifters.
Identical photons are sent into two of the four input-output ports and split into right-moving and left-moving amplitudes, with no cross terms, to observe the HOM effect.
This absolute separation of propagation directions, without mixing of right-moving and left-moving amplitudes, ensures that the photons remain clustered as they propagate through the system.
Variable phase shifts in the system allow the HOM photon pairs to switch between four spatial output destinations, which can increase information capacity.
Time delays between emerging parts of the clustered two-photon state, illustrating the ``delayed'' HOM effect, can be engineered using two multiports.
In addition, depending on the phase shifter position, the propagation direction can be reversed: the right-moving amplitude is reflected at the second multiport, so that HOM pairs always leave from the left side of the system, with a particular time-bin delay.
The same situations have been investigated in a system without circulators. This system allows the input state to be redistributed between the right and left sides without changing amplitudes.
The HOM effect and clustered photon pairs are widely used in quantum information science. The approach introduced here adds extra degrees of freedom, and paves the way for new applications that require control over the spatial and temporal modes of the HOM amplitudes as they move through one- and two-dimensional networks.
We have demonstrated two-photon amplitude control in both spatial and temporal modes. This two-photon system can be extended to multiphoton input states, and manipulation of more complex entangled states would be the next milestone.
\vspace{-15pt}
\section*{Appendix}
\section{\label{sec:level1}Introduction}
\begin{figure}
\vspace{-5mm}
\centering
\includegraphics[width=3in]{HOM_cancelling_4.pdf}
\caption{Hong-Ou-Mandel effect. Two identical photons are sent into different beam splitter input ports. There are four possible outcomes, two photons leaving one port, two photons leaving the other port, each photon reflecting to give single photons at each exit, and both transmitting to give single photons at each port. The coincidence terms cancel out since they are identical but enter with opposite sign. The final state is a superposition of two outcomes, each with both photons clustered together at the same exit port.}
\label{fig:HOM_effect}
\end{figure}
\vspace{-10px}
The Hong-Ou-Mandel (HOM) effect is one of the most recognized quantum two-photon interference effects \cite{hong1987measurement}.
When two indistinguishable photons arrive simultaneously at different inputs of a 50:50 beam splitter (BS), single-photon amplitudes at each output cancel, resulting in quantum superposition of two-photon states appearing at each output port, as in Fig. \ref{fig:HOM_effect}.
This traditional HOM method, observed on a BS having two input and two output ports, always has the two-photon state simultaneously occupying both output spatial modes, leaving no room to engineer control of propagation direction.
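The cancellation can be illustrated with a short symbolic computation. The 50:50 convention below (input operators mapping as $a \to (c-d)/\sqrt{2}$, $b \to (c+d)/\sqrt{2}$) is one standard choice; the vanishing of the coincidence term does not depend on it.

```python
import sympy as sp

# Two-photon HOM interference at a 50:50 beam splitter, in terms of
# creation operators for input modes a, b and output modes c, d.
a, b, c, d = sp.symbols('a b c d')
s2 = sp.sqrt(2)
out = sp.expand((a*b).subs({a: (c - d)/s2, b: (c + d)/s2}, simultaneous=True))
# out = c**2/2 - d**2/2: no c*d cross term survives, so coincidences cancel
# and both photons bunch at the same output port.
```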
Various types of studies on quantum state transformations in multiport devices have been performed such as two photon propagation in a multimode system \cite{weihs1996two,zukowski1997realizable},
quantum interference effects using a few photons \cite{meany2012non,de2014coincidence,tichy2011four,campos2000three}, and propagation of multi-photons \cite{lim2005generalized,tillmann2015generalized,menssen2017distinguishability}. Internal degrees of freedom are also incorporated to enhance communication capacity \cite{walborn2003multimode,poem2012two,zhang2016engineering}.
Systems and procedures using multi-photon states, such as boson sampling, have been analyzed using multiport beam splitters both theoretically and experimentally \cite{aaronson2011computational,tillmann2013experimental,spring2013boson,bentivegna2015experimental,he2017time,wang2019boson}.
The HOM effect plays an important role in the field of quantum metrology when two-photon $|2002\rangle$-type states are extended to $N$-photon $N00N$ state \cite{dowling2008quantum,motes2015linear}.
Additionally, coherent transport of quantum states has been attracting attention, where single- and two-photon discrete-time quantum walk schemes are employed to transfer and process quantum states \cite{bose2003quantum,perez2013coherent,lovett2010universal,chapman2016experimental,nitsche2016quantum}.
A quantum routing approach has been proposed to transfer unknown states in 1D and 2D structures to assist quantum communication protocols \cite{zhan2014perfect,vstefavnak2016perfect,bartkiewicz2018implementation}.
Photon propagation control is especially crucial in a large optical network to distribute quantum states between two parties.
The network can be formed by combining multiple copies of four-port devices.
The state manipulation schemes we present can be integrated in quantum communication protocols since state retrieval timing can be chosen at will.
In this manuscript, we propose two-photon quantum state engineering and transportation methods with a linear-optical system which allows manipulation of photon amplitudes by using linear-optical devices such as optical multiports, beam splitters, and phase shifters.
Previously, such multiports have been introduced to demonstrate a two-photon clustering effect in quantum walks when multiple multiport devices are connected to form a chain \cite{simon2020quantum}. Clustering of two photons means that after encountering a multiport, the input two-photon amplitude separates into a superposition of a right-moving and a left-moving two-photon amplitude, with no amplitude for the photons to move in opposite directions.
By utilizing this separation, a higher-dimensional unitary transformation enables flexible quantum engineering designs of possible travel path combinations by switching relative phases within right moving and left moving amplitudes independently.
When two or more multiports are combined, this control of quantum amplitudes in a two-photon state allows demonstration of a “delayed" HOM effect engaging also time-bin modes in addition to spatial modes.
To perform this delayed effect, two or more multiports are required, and relative phase shifts between two rails can reflect the incoming amplitudes.
This controllable reflection without mirrors can also be seen as an additional state manipulation feature.
We introduce two distinct systems. The first system utilizes direct transformation of two-photons by the four-port device using circulators.
The second case does not have circulators in the system.
The photons are sent from the left side of the beam splitters, then the amplitudes encounter the multiport device.
This second system has not been analyzed in the past. Specific input states for both distinguishable and indistinguishable photons are redistributed between two parties coherently. Therefore, this system is particularly useful in quantum routing type applications.
This article is organized as follows. In Sec. II, we introduce the main optical components used in this manuscript to perform quantum state transformations. These basic linear-optical devices are used to demonstrate the HOM effect in spatial and time-bin modes, which is addressed in Sec. III. In Sec. IV, we show redistribution of two-photon states using the devices presented in Sec. II. A summary of the results is given in Sec. V.
\begin{figure}[htp]
\subfloat[]{%
\includegraphics[clip,width=0.8\columnwidth]{multiport_systematic.pdf}%
}
\vspace{-10px}
\subfloat[]{%
\includegraphics[clip,width=0.8\columnwidth]{multiport_systematic_photons.pdf}%
}
\vspace{-10px}
\caption{(a) A possible experimental realization of a directionally-unbiased linear-optical four-port consists of four beam splitters, four mirrors, and four phase shifters. A photon can enter any of the four ports, and exit at any of the four ports (labeled as $a,b,c$, and $d$). With a specific choice of phase settings, a Grover matrix can be realized by coherently summing all possible paths to each output \cite{osawa2019directionally}. A schematic symbol for this device is shown on the right. (b) Single multiport transformation of a two-photon input state. The input state of two correlated photons entering from the left is depicted as $ab$ $(\ket{1,1})$. After scattering by a Grover multiport, the state transforms into $-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2$, which has clear separation of right- and left-moving two-photon amplitudes. No cross-terms with photons moving in opposite directions occur.}
\label{fig:combined_four_port}
\vspace{-10pt}
\end{figure}
\section{Photonic state transformations via linear optical devices}
\vspace{-10px}
In this section, we consider photonic state transformations in higher-dimensional spatial modes using a unitary four-dimensional Grover matrix \cite{grover1996fast} in place of the beam splitter.
We introduce the main systems used for linear state transformations, followed by the basic photonic devices that implement them.
Beam splitters and the four-dimensional Grover matrix are the central system components.
We mainly use the photon number representation to describe states throughout the manuscript.
The general beam splitter transformation matrix is
\begin{eqnarray}
\label{eqn:BS}
&\begin{pmatrix}
\hat{c}
\\
\hat{d}
\end{pmatrix}=
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1\\
-1 & 1
\end{pmatrix}
\begin{pmatrix}
\hat{a}\\
\hat{b}
\end{pmatrix}
\end{eqnarray}
where $\hat{a}$, $\hat{b}$, $\hat{c}$, and $\hat{d}$ describe the input photon state transformation.
The labels are generic here; the specific location-dependent beam splitter transformations are defined in later sections.
We use photon number states to describe the system unless otherwise specified. The input state is denoted as $\hat{a}\hat{b}$, where $\hat{a}$ and $\hat{b}$ are creation operators for the spatial modes $a$ and $b$, respectively.
The hat notation is dropped henceforth.
A horizontally polarized photon in spatial mode $a$ is denoted $a_{H}$, and a horizontally polarized photon in mode $b$ is denoted $b_{H}$.
We omit the polarization degree of freedom when identical photons are used throughout the system.
Photonic implementations of the Grover matrix can be readily realized \cite{carolan2015universal,crespi2013anderson,spagnolo2013three,fan1998channel,nikolopoulos2008directional}.
To be concrete, we use directionally-unbiased linear-optical four-ports (Fig. \ref{fig:combined_four_port} (a)) as an example.
Consider sending two indistinguishable photons into a four-dimensional multiport device realization of a Grover matrix.
This Grover operator (the multiport), described by the unitary matrix
\begin{equation}
\label{eqn:Grover}
Grover =
\frac{1}{2}
\begin{pmatrix}
-1&1&1&1\\
1&-1&1&1\\
1&1&-1&1\\
1&1&1&-1
\end{pmatrix},
\end{equation}
has equal splitting ratios between all input-output combinations and generalizes the BS transformation matrix given above in Eq.\eqref{eqn:BS}.
In general, photons in modes $a$ and $b$ are transformed in the following manner,
\begin{eqnarray}
&a \xrightarrow{M} \frac{1}{2}(-a+b+c+d)\; \mbox{ and }\;\\ \nonumber
&b \xrightarrow{M} \frac{1}{2}(a-b+c+d).
\end{eqnarray}
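As a quick symbolic check (not part of the optical scheme itself), the Grover matrix is real orthogonal and self-inverse, and acting on a two-photon input $ab$ it produces the clustered form $-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2$ quoted in the caption of Fig. \ref{fig:combined_four_port}:

```python
import sympy as sp

# The 4x4 Grover matrix defined in the text.
G = sp.Rational(1, 2) * sp.Matrix([[-1, 1, 1, 1],
                                   [1, -1, 1, 1],
                                   [1, 1, -1, 1],
                                   [1, 1, 1, -1]])

orthogonal = (G * G.T == sp.eye(4))   # real orthogonal, i.e. unitary
self_inverse = (G * G == sp.eye(4))   # a second pass undoes the first

# Two-photon clustering: ab -> -(1/4)(a-b)^2 + (1/4)(c+d)^2, a superposition
# of a purely left-moving and a purely right-moving photon pair.
a, b, c, d = sp.symbols('a b c d')
clustered = sp.expand((a*b).subs({a: (-a + b + c + d)/2,
                                  b: (a - b + c + d)/2}, simultaneous=True))
target = sp.expand(-sp.Rational(1, 4)*(a - b)**2 + sp.Rational(1, 4)*(c + d)**2)
```

The absence of any cross term between $\{a,b\}$ and $\{c,d\}$ in `clustered` is the no-opposite-directions property used throughout this section.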
Theoretical analysis of the reversible Grover matrix has been carried out for linear-optical directionally-unbiased four-ports \cite{simon2016group,simon2018joint,osawa2019directionally}, which consist of four beam splitters, four phase shifters, and four mirrors, as indicated in Fig. \ref{fig:combined_four_port}(a).
The device is represented schematically by the symbol in Fig. \ref{fig:combined_four_port}(a); its action on a two-photon input is shown in Fig. \ref{fig:combined_four_port}(b).
The three-port version of this device has been experimentally demonstrated using bulk optical devices \cite{osawa2018experimental}.
For better and more precise control of phases, a miniaturized realization of the four-port device is highly preferred, especially when several multiport devices are required to carry out an experiment.
In general, directional unitary devices such as those of the Reck and some other unitary matrix decomposition models \cite{reck1994experimental,su2019hybrid,clements2016optimal,de2018simple,motes2014scalable} can also realize a Grover matrix.
However, directionally-unbiased devices are advantageous when designing the delayed HOM effect, as well as requiring fewer optical resources.
Identical photons are sent into two of the four input-output ports from the left side (indicated in Fig. \ref{fig:combined_four_port}(b)).
We use multiport devices and beam splitters to form two systems for state propagation.
Throughout the manuscript, the photons are sent in from the left side of the system.
The first BS-multiport composite system is labeled with subscript 0 and the other half with subscript 1.
The result differs depending on the input location of photons.
Consider a system consisting of two multiports and two beam splitters.
There are several ways to insert photons in the system, however we choose two specific ones in this manuscript.
To send a photon into the middle of the system, the setup must be supplied with circulators, as shown in Fig. \ref{fig:multiport_circ}.
Another setup requires no circulators to propagate input photons: the photons experience an extra beam splitter transformation upon entering the system.
This system is shown in Fig. \ref{fig:multiport_no_circ}.
It needs to be noted that the number of multiports in the system does not change the final outcome.
We are using two multiports as an example, however, the result is the same when the system has a single multiport or more than two multiports as long as the devices are assumed to be lossless during the propagation.
Brief comments on the mathematical structure of the transformations carried out by the configurations given in Figs. \ref{fig:multiport_circ} and \ref{fig:multiport_no_circ} are given in the Appendix.
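The independence of the final outcome from the number of multiports can be sketched at the single-photon level: each Grover multiport maps the in-phase combination $a+b$ of its left modes onto $c+d$ on its right, which re-enters the next multiport in the same form. A minimal check (our own, with modes $a,b,c,d$ indexed 0--3):

```python
from fractions import Fraction

# Grover multiport on modes (a, b, c, d) = (0, 1, 2, 3)
G = [[Fraction(1, 2) - (i == j) for j in range(4)] for i in range(4)]

def multiport_then_translate(v):
    """One Grover multiport followed by relabeling its right-hand
    output modes (c, d) as the next multiport's inputs (a, b)."""
    w = [sum(G[i][m] * v[m] for m in range(4)) for i in range(4)]
    return [w[2], w[3], 0, 0]

v = [1, 1, 0, 0]               # in-phase amplitude a + b
for _ in range(5):             # any number of lossless multiports
    v = multiport_then_translate(v)
    assert v == [1, 1, 0, 0]   # the form is preserved at every step
```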
\vspace{-15px}
\subsection{Photon propagation using circulators}
\vspace{-10px}
\begin{figure}
\vspace{-5mm}
\centering
\includegraphics[width=3.5in]{multiport_two_circulator.pdf}
\caption{A system setup with input photons supplied by circulators. The system consists of two beam splitters, two multiport devices, and two circulators. The circulators allow us to send photons in from the left side of the multiport device without experiencing a beam splitter transformation before entering it. The input state splits into right-moving and left-moving amplitudes (shown as dotted arrows) upon the multiport transformation.}
\label{fig:multiport_circ}
\end{figure}
This method is used to distribute the HOM pair between the right and left sides of the system.
The original input state $a_0b_0$ transforms to:
\begin{eqnarray} \label{eq:trans}
a_0b_0 &\xrightarrow{M} \frac{1}{2}(-a_0+b_0+c_0+d_0)\frac{1}{2}(a_0-b_0+c_0+d_0) \nonumber \\
&=-\frac{1}{4}(a_0^2+b_0^2)+\frac{1}{2}a_0b_0+\frac{1}{4}(c_0^2+d_0^2)+\frac{1}{2}c_0d_0\nonumber \\
&= -\frac{1}{4}(a_0-b_0)^2+\frac{1}{4}(c_0+d_0)^2,
\end{eqnarray}
where we have used the commutation relation $ab = ba$ since the photons are identical and in different spatial locations.
Eq.\eqref{eq:trans} shows that correlated photons are split into right moving $\frac{1}{4}(c_0+d_0)^2$ and left moving $-\frac{1}{4}(a_0-b_0)^2$ amplitudes, with no cross terms.
This absolute separation of propagation direction without mixing of right moving and left moving amplitudes is important because the photon pairs remain distinctly localized and clustered at each step \cite{simon2020quantum}.
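This clustering can be verified by brute-force expansion of the product of transformed mode operators. The sketch below (our own check; modes $a,b,c,d$ are indexed 0--3) reproduces the coefficients of Eq.\eqref{eq:trans} and confirms that every cross term between the left pair $\{a,b\}$ and the right pair $\{c,d\}$ cancels:

```python
from fractions import Fraction
from itertools import product

# Grover matrix entries: -1/2 on the diagonal, +1/2 elsewhere (modes a,b,c,d -> 0..3)
G = [[Fraction(1, 2) - (i == j) for j in range(4)] for i in range(4)]

def grover_pair(m, n):
    """Two-photon amplitude in modes (m, n) after the multiport,
    as {(i, j): coefficient} with i <= j (symmetrized monomials)."""
    out = {}
    for i, j in product(range(4), repeat=2):
        key = (min(i, j), max(i, j))
        out[key] = out.get(key, Fraction(0)) + G[i][m] * G[j][n]
    return out

amp = grover_pair(0, 1)  # input pair a0*b0

# -1/4 (a-b)^2 + 1/4 (c+d)^2:
assert amp[(0, 0)] == amp[(1, 1)] == Fraction(-1, 4)   # -a^2/4, -b^2/4
assert amp[(0, 1)] == amp[(2, 3)] == Fraction(1, 2)    # +ab/2, +cd/2
assert amp[(2, 2)] == amp[(3, 3)] == Fraction(1, 4)    # +c^2/4, +d^2/4
# no cross terms between left-moving {a,b} and right-moving {c,d}:
assert all(amp[(i, j)] == 0 for i in (0, 1) for j in (2, 3))
```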
The right-moving amplitude is translated to $\frac{1}{4}(a_1+b_1)^2$ and propagates without changing its form: $\frac{1}{4}(a_1+b_1)^2 \xrightarrow{M} \frac{1}{4}(c_1+d_1)^2$.
The left moving amplitude $-\frac{1}{4}(a_0-b_0)^2$ stays the same until BS transformation.
The controlled HOM effect can be observed in higher-dimensional multiports assisted by extra beam splitters.
Imagine beam splitters inserted in the system as in Fig. \ref{fig:multiport_circ}.
Input state $ab$ is now transformed into $-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2$ as indicated above, then further transformed by beam splitters to obtain HOM pairs between the right side and left side of the system.
The right and left sides of the system each have two output ports, and the exit port of the photon pair can be controlled by varying phase shift settings before the beam splitters.
A phase shift on the left side of the system does not affect the result of the right side amplitude, and vice versa.
This system, having circulators at the beginning of the system, is denoted as transformation pattern I, and the detailed discussions of its transformation are in Sec. \ref{sec:pattern_I}.
\vspace{-15px}
\subsection{Photon propagation without circulators}
\vspace{-10px}
\begin{figure}
\vspace{-5mm}
\centering
\includegraphics[width=3.5in]{multiport_two.pdf}
\caption{System setup without circulators. The input photons are subjected to a beam splitter before they enter the multiport. The input state is transformed and propagates in one direction (shown as dotted arrows). The BS-transformed input state is transformed again by the first multiport device.}
\label{fig:multiport_no_circ}
\end{figure}
This method redistributes the input state between the right and left sides of the system without changing amplitudes.
Consider sending two photons into the left side of the beam splitter as indicated in Fig. \ref{fig:multiport_no_circ}; the output state is then transformed by the multiport device.
We only consider the first multiport transformation here.
The rest of the transformation is given in Sec. \ref{sec:pattern_II}.
\begin{equation}
e_0f_0\xrightarrow{BS}-\frac{1}{2}(a_0^2-b_0^2)
\xrightarrow{M}-\frac{1}{2}(a_0-b_0)(c_0+d_0)
\end{equation}
The final state has cross terms, and it differs from the case with circulators in the sense that the output state is \textit{coupled}: it does not separate cleanly into right-moving and left-moving amplitudes.
Even so, we still refer to the amplitudes as right-moving and left-moving unless special care is required.
This system without circulators is denoted transformation pattern II, and detailed discussions of its transformation are in Sec. \ref{sec:pattern_II}.
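The coupled structure can be checked the same way as before. Assuming a Hadamard-type beam splitter convention, $e\rightarrow(a+b)/\sqrt{2}$ and $f\rightarrow(a-b)/\sqrt{2}$ (the authors' sign convention may differ by a relabeling), every surviving monomial after the multiport pairs one left mode with one right mode:

```python
from fractions import Fraction
from itertools import product

# Grover multiport on modes (a, b, c, d) = (0, 1, 2, 3)
G = [[Fraction(1, 2) - (i == j) for j in range(4)] for i in range(4)]

def grover_state(state):
    """Apply the multiport to a two-photon state {(m, n): coeff}, m <= n."""
    out = {}
    for (m, n), c in state.items():
        for i, j in product(range(4), repeat=2):
            key = (min(i, j), max(i, j))
            out[key] = out.get(key, Fraction(0)) + c * G[i][m] * G[j][n]
    return out

# e0*f0 after a Hadamard-type BS: (a+b)(a-b)/2 = (a^2 - b^2)/2
half = Fraction(1, 2)
after_bs = {(0, 0): half, (1, 1): -half}

after_mp = grover_state(after_bs)
# every surviving monomial pairs a left mode {a,b} with a right mode {c,d}:
coupled = {k: v for k, v in after_mp.items() if v != 0}
assert set(coupled) == {(0, 2), (0, 3), (1, 2), (1, 3)}
```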
\vspace{-10px}
\section{Transformation pattern I: directionally-controllable HOM effect in higher-dimensional spatial and temporal modes} \label{sec:pattern_I}
\vspace{-10px}
In this section we discuss the transformation pattern I.
The higher dimensional HOM effect is generated by the multiport-based linear optics system with circulators at the inputs.
Control of the propagation direction and delays between amplitudes are discussed in the subsections below.
We use a single multiport device to show the control effect, and we introduce two multiport devices into the system for the delayed effect.
\begin{figure}[htp!]
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator1.pdf}%
}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator2.pdf}%
}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator3.pdf}%
}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_with_circulator4.pdf}%
}
\caption{Higher dimensional HOM effect with directional control.
Correlated photons, $a_0b_0$, are sent in from the circulators into the first multiport.
After the first multiport interaction, the incoming photon pair splits into right-moving and left-moving two-photon amplitudes.
The separately-moving amplitudes are bunched at the beam splitters on right and left sides.
We can controllably switch between four different output sites, and where the clustered output photons appear depends on the location of the phase shifter $P$.
In (a), no phase plates are introduced, and the output biphoton amplitudes leave $f_0$ and $e_1$. The final state is $\frac{1}{\sqrt{2}}(-\ket{0,2}_0+\ket{2,0}_1)$, meaning a superposition of two photons in mode $f_0$ and two in mode $e_1$. In case (b), the phase shifter $P=\pi$ is on the left, changing the relative phase between upper and lower arms.
Similarly in (c) and (d), other locations for the phase shifters cause biphotons to leave in other spatial modes.} \label{fig:HOM_control_fig}
\end{figure}
\vspace{-15px}
\subsection{Control of propagation direction}
\vspace{-10px}
Given that one two-photon amplitude must exit left and one right, there are four possible combinations of outgoing HOM pairs as indicated in Fig. \ref{fig:HOM_control_fig}.
The combinations are, (a): $(f_0^2,e_1^2)$, (b): $(e_0^2,e_1^2)$, (c): $(e_0^2,f_1^2)$, and (d): $(f_0^2,f_1^2)$.
This means, in the case of (a) for example, the left-moving two-photon amplitude leaves in mode f, and the right-moving amplitude leaves in mode e.
Directional control of the four cases is readily demonstrated, as follows. In case (a) there is only a beam splitter transformation after the multiport, giving
\begin{eqnarray}
&-\frac{1}{4}(a_0-b_0)^2\xrightarrow{BS} -\frac{1}{8}(e_0-f_0-e_0-f_0)^2 = -\frac{1}{2}f_0^2, \nonumber \\
&\frac{1}{4}(c_1+d_1)^2 \xrightarrow{BS} \frac{1}{8}(e_1-f_1+e_1+f_1)^2 = \frac{1}{2}e_1^2.
\end{eqnarray}
The final output state is,
\begin{eqnarray}
-\frac{1}{2}f_0^2+\frac{1}{2}e_1^2=\frac{1}{\sqrt{2}}(-\ket{0,2}_0+\ket{2,0}_1).
\end{eqnarray}
In case (b), a phase plate is inserted in the lower arm of the left side to switch the exit port from $d$ to $c$. All the phase shifters P are set to $\pi$, therefore transforming $b \rightarrow -b$.
\begin{eqnarray}
&-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2 \xrightarrow{P} -\frac{1}{4}(a+b)^2+\frac{1}{4}(c+d)^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(-e_0^2+e_1^2) = \frac{1}{\sqrt{2}}(-\ket{2,0}_0+\ket{2,0}_1).
\end{eqnarray}
Compared to case (a), the exit port is switched from f to e. In (c), phase plates are inserted in the lower arms of both right and left sides.
Photons in modes $b$ and $d$ are transformed to $-b$ and $-d$, respectively.
\begin{eqnarray}
&-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2 \xrightarrow{P} -\frac{1}{4}(a+b)^2+\frac{1}{4}(c-d)^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(-e_0^2+f_1^2) = \frac{1}{\sqrt{2}}(-\ket{2,0}_0+\ket{0,2}_1).
\end{eqnarray}
In (d), a phase plate is inserted in the lower arm of the right side. A photon in mode $d$ is transformed to $-d$.
\begin{eqnarray}
&-\frac{1}{4}(a-b)^2+\frac{1}{4}(c+d)^2 \xrightarrow{P} -\frac{1}{4}(a-b)^2+\frac{1}{4}(c-d)^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(-f_0^2+f_1^2) = \frac{1}{\sqrt{2}}(-\ket{0,2}_0+\ket{0,2}_1).
\end{eqnarray}
This demonstrates complete control of the biphoton propagation direction using only linear optical devices.
Directional control does not require changing splitting ratios at each linear optical device (BS and multiport), and occurs in a lossless manner since no post-selection is required.
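The four switching cases can be simulated end-to-end. The sketch below is our own numerical check: it assumes Hadamard-type beam splitters and models $P=\pi$ as a sign flip on the lower arm, so the overall signs of the amplitudes depend on this convention, but the bunched output-mode combinations do not:

```python
import math
from itertools import product

# Grover multiport on modes (a, b, c, d) = (0, 1, 2, 3)
G = [[0.5 - (i == j) for j in range(4)] for i in range(4)]

def apply(U, state):
    """Push a two-photon amplitude {(m, n): coeff}, m <= n, through a one-photon map U."""
    out = {}
    for (m, n), c in state.items():
        for i, j in product(range(4), repeat=2):
            key = (min(i, j), max(i, j))
            out[key] = out.get(key, 0.0) + c * U[i][m] * U[j][n]
    return out

s = 1 / math.sqrt(2)
# Hadamard-type beam splitters (an assumed sign convention):
# left BS maps (a, b) -> (e0, f0); right BS maps the translated (c, d) -> (e1, f1)
B = [[s, s, 0, 0], [s, -s, 0, 0], [0, 0, s, s], [0, 0, s, -s]]

def exit_modes(p_left, p_right):
    """Bunched output modes for pi phase shifts on the lower arms
    (mode labels: 0 = e0, 1 = f0, 2 = e1, 3 = f1)."""
    P = [[0.0] * 4 for _ in range(4)]
    for i in range(4):
        P[i][i] = -1.0 if (i == 1 and p_left) or (i == 3 and p_right) else 1.0
    state = apply(B, apply(P, apply(G, {(0, 1): 1.0})))  # photons enter modes a and b
    return sorted(k for k, v in state.items() if abs(v) > 1e-9)

# four phase settings -> four combinations of doubled (bunched) output modes
assert exit_modes(False, False) == [(1, 1), (2, 2)]  # f0^2 and e1^2, case (a)
assert exit_modes(True,  False) == [(0, 0), (2, 2)]  # e0^2 and e1^2, case (b)
assert exit_modes(True,  True)  == [(0, 0), (3, 3)]  # e0^2 and f1^2, case (c)
assert exit_modes(False, True)  == [(1, 1), (3, 3)]  # f0^2 and f1^2, case (d)
```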
\begin{figure*}[htp!]
\centering
\subfloat[][Delayed HOM effect without reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_circulator_arrays.pdf}\label{}}
\subfloat[][Delayed HOM effect with reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_circulator_arrays_phase_plate.pdf}\label{}}
\caption{Delayed HOM effect. The two-photon amplitude transformation progresses in time from top to bottom. The distance traveled in a single time step is indicated by vertical dashed lines. The original photons as well as photons in the target state are indicated using red circles. The green striped circles indicate the intermediate transformed state. The total photon number is always two throughout the transformations. (a) Two multiports and beam splitters {\it without} phase shifters between the multiports. At the first step, the behavior is the same as for a single multiport with beam splitters. The right-moving amplitude propagates through the second multiport, and the left-moving amplitude propagates through the beam splitter. The right-moving amplitude is delayed by one additional multiport transformation before a two-photon observation probability becomes available in spatial modes on the right. (b) Two multiports and beam splitters {\it with} a phase shifter P set at $\pi$ between the multiports. When P is present, the right-moving amplitude gains a relative phase between modes $a_1$ and $b_1$. Reflection occurs at the multiport when the relative phase between the two is $\pi$. Therefore, the transformed amplitude reflects upon a second multiport encounter, going back to the original state with opposite propagation direction. Reflection does not occur for the transformed left-moving amplitude, so it continues to propagate leftward. The original left-moving amplitude becomes available for detection earlier than the transformed left-moving amplitude.}
\label{fig:delayed_HOM}
\end{figure*}
\subsection{Delayed HOM effect}
\vspace{-10px}
\subsubsection{Delayed HOM effect without reflection}
\vspace{-10px}
We introduce a phase shifter between the two multiports as in Fig. \ref{fig:delayed_HOM}(b). Without the phase plate between the two multiport devices, the photons behave exactly as in the previous subsection. With it, however, the phase shifter can reverse the propagation direction of the right-moving amplitude to the left. This reflection results in detecting HOM pairs only on the left side of the system, but with some delay between the two exiting amplitudes. We start with the case without the phase shifter. The photon insertion is the same as in the previous case, from the left side of the first multiport.
\vspace{-10px}
\begin{eqnarray}
&{a_{0}b_{0}}_R\xrightarrow{M}-\frac{1}{4}(a_{0}-b_{0})_L^2+ \frac{1}{4}(c_{0}+d_{0})_R^2 \nonumber \\
&\xrightarrow{T+BS,\mbox{ }T} -\frac{1}{2}f_{0L}^2 + \frac{1}{4}(a_{1}+b_{1})_R^2 \nonumber \\
&\xrightarrow{M} -\frac{1}{2}f_{0L}^2 + \frac{1}{4}(c_{1}+d_{1})_R^2 \xrightarrow{BS} -\frac{1}{2}f_{0L}^2 + \frac{1}{2}e_{1R}^2,
\end{eqnarray}
where M, T, and BS represent the multiport, translation, and beam splitter transformations, respectively. We use subscripts $R$ and $L$ to indicate amplitudes propagating to the right or to the left. $T$ translates a photon amplitude by a single time step (for example, $\frac{1}{4}(c_{0}+d_{0})^2 \rightarrow \frac{1}{4}(a_{1}+b_{1})^2$). The notation $T+BS$, $T$ means applying $T+BS$ to the first term and $T$ to the second term.
The final state is,
\begin{eqnarray}
-\frac{1}{2}f_{0L}^2 + \frac{1}{2}e_{1R}^2=-\frac{1}{\sqrt{2}}(\ket{0,2}_{0{T_0}L}-\ket{2,0}_{1{T_1}R}),
\end{eqnarray}
where $T_0$ is the time when the first biphoton amplitude leaves the system and $T_1$ is the exit time of the second.
The right moving amplitude stays in the system longer than the left moving amplitude because of the extra multiport device in the system, leading to time delay $\Delta T = T_1-T_0$.
\vspace{-10px}
\subsubsection{Delayed HOM effect with reflection}
\vspace{-10px}
When a $\pi$-phase shifter is inserted on one path between the multiports, the right-moving amplitude gets reflected upon the second multiport encounter.
Instead of having two-photon amplitudes on the right and left sides of the system, both photon amplitudes end up leaving from the left.
The HOM effect still occurs but now with some delay between the two amplitudes at the end of the BS.
This is indicated in Fig. \ref{fig:delayed_HOM} (b).
\vspace{-10px}
\begin{eqnarray}
&{a_{0}b_{0}}_R\xrightarrow{M}-\frac{1}{4}(a_{0}-b_{0})_L^2+ \frac{1}{4}(c_{0}+d_{0})_R^2 \nonumber \\
&\xrightarrow{T+BS,\mbox{ }T+P} -\frac{1}{2}f_{0L}^2 + \frac{1}{4}(a_{1}-b_{1})_R^2.
\end{eqnarray}
The second transformation $T+BS$, $T+P$ is read as applying $T+BS$ on the first term and $T+P$ on the second term.
Left-moving photons leave before right-moving photons.
\begin{eqnarray}
\xrightarrow{M}&\frac{1}{4}(a_{1}-b_{1})_L^2 \xrightarrow{P+T} \frac{1}{4}(c_{0}+d_{0})_L^2 \nonumber \\
&\xrightarrow{M} \frac{1}{4}(a_{0}+b_{0})_L^2 \xrightarrow{BS} \frac{1}{2}e_{0L}^2.
\end{eqnarray}
The final state,
\begin{eqnarray}
-\frac{1}{2}f_{0L}^2+\frac{1}{2}e_{0L}^2=-\frac{1}{\sqrt{2}}(\ket{0,2}_{0T_0L}-\ket{2,0}_{0T_2L}),
\end{eqnarray}
is now two HOM pair amplitudes, both on the left side of the system, at output ports $e_0$ and $f_0$, with a time delay $\Delta T = T_2 - T_0$ between them. The first amplitude leaves port $f_0$ at time $T_0$; the second leaves port $e_0$ at the time labeled $T_2$.
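The reflection mechanism used here is a single-photon property of the Grover matrix: a superposition of the two left input modes with relative phase $\pi$ is mapped back onto the same pair of modes, while the in-phase superposition is fully transmitted. A minimal check (our own):

```python
from fractions import Fraction

# Grover multiport on modes (a, b, c, d) = (0, 1, 2, 3)
G = [[Fraction(1, 2) - (i == j) for j in range(4)] for i in range(4)]

def apply(v):
    return [sum(G[i][m] * v[m] for m in range(4)) for i in range(4)]

# in-phase superposition a+b is fully transmitted to the right ports: c+d
assert apply([1, 1, 0, 0]) == [0, 0, 1, 1]
# out-of-phase superposition a-b is retro-reflected: -(a-b)
assert apply([1, -1, 0, 0]) == [-1, 1, 0, 0]
```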
\vspace{-10px}
\section{Transformation pattern II: state redistribution in higher-dimensional spatial and temporal modes} \label{sec:pattern_II}
\vspace{-10px}
\subsection{State transformation and propagation}
\vspace{-10px}
In the previous section we considered the case where the input photon state is transformed by the multiport device immediately after photon insertion.
Instead of using circulators, we can first transform the input state by the BS and then transform it by the multiport device.
Even though the Grover matrix spreads the input state equally in four directions, the end result preserves the original form of the input state.
We demonstrate a state redistribution property using distinguishable and indistinguishable photons, meaning the input state gets redistributed between right and left side without changing amplitudes.
The propagation result is different from the previous case.
Consider sending two indistinguishable photons in the system.
The input two photons have the same polarization to make them indistinguishable.
The input photons are inserted from the left side of the beam splitter.
The beam splitter transforms the input state, and the transformed state propagates from the left side to the right side of the device without any reflections.
After the beam splitter transformation, the amplitudes are transformed by the multiport device.
This transformation splits the input photons into coupled right-moving and left-moving amplitudes.
The coupled left-moving amplitude reflected from the first multiport counter-propagates and is transformed by the first beam splitter from right to left.
The right-moving amplitude is transmitted without change in amplitude and finally exits through the beam splitter on the right side.
\vspace{-15px}
\subsubsection{Indistinguishable photons}
\vspace{-10px}
We first examine the mathematical details for indistinguishable photons in the system without circulators.
We consider three cases of photons sent in through spatial modes $e$ and $f$.
First, we consider a pair of indistinguishable single photons in spatial modes $e$ and $f$.
\vspace{-5px}
\begin{eqnarray}
&e_{H0}f_{H0}\xrightarrow{BS}\frac{1}{2}(a_{H0}^2-b_{H0}^2)\nonumber \\
&\xrightarrow{M} -\frac{1}{2}(a_{H0}-b_{H0})(c_{H0}+d_{H0}) \nonumber \\
&\xrightarrow{BS} -e_{H0}e_{H1}.
\end{eqnarray}
Next, an HOM state with relative phase $+1$ between the two amplitudes is considered.
\begin{eqnarray}
&\frac{1}{2}(e_{H0}^2+f_{H0}^2)\xrightarrow{BS}\frac{1}{2}(a_{H0}^2+b_{H0}^2) \nonumber \\
&\xrightarrow{M} \frac{1}{4}(a_{H0}-b_{H0})^2+\frac{1}{4}(c_{H0}+d_{H0})^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(e_{H0}^2+e_{H1}^2).
\end{eqnarray}
The input state is redistributed in a sense that one amplitude is on the right side of the system and the other amplitude is on the left side while maintaining the original structure of the state.
An HOM state with relative phase $-1$ between the two amplitudes is considered here.
\vspace{-5px}
\begin{eqnarray}
&\frac{1}{2}(e_{H0}^2-f_{H0}^2) \xrightarrow{BS} -a_{H0}b_{H0} \nonumber \\
&\xrightarrow{M} \frac{1}{4}(a_{H0}-b_{H0})^2-\frac{1}{4}(c_{H0}+d_{H0})^2 \nonumber \\
&\xrightarrow{BS} \frac{1}{2}(e_{H0}^2-e_{H1}^2).
\end{eqnarray}
In both cases, the output state is identical to the input state except for the spatial modes.
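The multiport steps in the two HOM-state chains above are independent of the beam splitter convention and can be verified by direct expansion (our own sketch; modes $a,b,c,d$ indexed 0--3):

```python
from fractions import Fraction
from itertools import product

# Grover multiport on modes (a, b, c, d) = (0, 1, 2, 3)
G = [[Fraction(1, 2) - (i == j) for j in range(4)] for i in range(4)]

def grover_state(state):
    """Apply the multiport to a two-photon state {(m, n): coeff}, m <= n."""
    out = {}
    for (m, n), c in state.items():
        for i, j in product(range(4), repeat=2):
            key = (min(i, j), max(i, j))
            out[key] = out.get(key, Fraction(0)) + c * G[i][m] * G[j][n]
    return {k: v for k, v in out.items() if v != 0}

q = Fraction(1, 4)
# (a^2 + b^2)/2  ->  (a-b)^2/4 + (c+d)^2/4
assert grover_state({(0, 0): 2 * q, (1, 1): 2 * q}) == {
    (0, 0): q, (1, 1): q, (0, 1): -2 * q,      # (a-b)^2/4
    (2, 2): q, (3, 3): q, (2, 3): 2 * q}       # (c+d)^2/4
# -ab  ->  (a-b)^2/4 - (c+d)^2/4
assert grover_state({(0, 1): -1}) == {
    (0, 0): q, (1, 1): q, (0, 1): -2 * q,
    (2, 2): -q, (3, 3): -q, (2, 3): -2 * q}
```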
\vspace{-10px}
\subsubsection{Distinguishable photons}
\vspace{-10px}
Now we examine the case of a distinguishable two-photon input.
The procedure is identical to that of the previous case.
We begin with two distinguishable photons, one in each mode, without superposition.
\begin{eqnarray}
&e_{H0}f_{V0} \xrightarrow{BS} \frac{1}{2}(a_{H0}-b_{H0})(a_{V0}+b_{V0})\nonumber \\
&\xrightarrow{M} -\frac{1}{2}(a_{H0}-b_{H0})(c_{V0}+d_{V0})\nonumber\\
&\xrightarrow{BS} - e_{H0} e_{V1}.
\end{eqnarray}
We examine the case of HOM states.
\begin{eqnarray}
&\frac{1}{2}(e_{H0}^2 \pm f_{V0}^2) \xrightarrow{BS}\frac{1}{4}(a_{H0}-b_{H0})^2\pm\frac{1}{4}(a_{V0}+b_{V0})^2\nonumber\\
&\xrightarrow{M}\frac{1}{4}(a_{H0}-b_{H0})^2\pm\frac{1}{4}(c_{V0}+d_{V0})^2\nonumber\\
&\xrightarrow{BS} \frac{1}{2}(e_{H0}^2 \pm e_{V1}^2).
\end{eqnarray}
The control of exit location can be performed as well in this scheme by introducing phase shifters in the system as indicated in Fig. \ref{fig:control_without_circ}.
This procedure does not destroy the redistribution property.
There are four potential output spatial modes, and switching the phase shifts before the beam splitters switches the propagation direction.
The combinations are, (a): $(e_0,f_0)\rightarrow(e_0,e_1)$, (b): $(e_0,f_0)\rightarrow(e_0,f_1)$, (c): $(e_0,f_0)\rightarrow(f_0,e_1)$, and (d): $(e_0,f_0)\rightarrow(f_0,f_1)$.
The results for the system with circulators are summarized in Table \ref{tab:table_1}, and those for the system without them in Table \ref{tab:table_2}.
In the case of indistinguishable photons, the results are cyclic in the sense that all three states can be produced by using the other system.
However, there is a significant difference when distinguishable photons are considered.
\begin{figure}[htp!]
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator.pdf}%
}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator2.pdf}%
}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator3.pdf}%
}
\subfloat[]{%
\includegraphics[clip,width=\columnwidth]{output_without_circulator4.pdf}%
}
\caption{Quantum state redistribution with control of propagation direction. We performed the same analysis as the higher dimensional HOM effect with direction control. By introducing phase shifters in the system before beam splitters, we can change the exit direction of the amplitudes. The starting state is $e_0f_0$. The first beam splitter transforms the input state, then they enter the multiport device. The multiport transformed state goes through beam splitters on the right and left side. The final outcome has the same form as the input state.}
\label{fig:control_without_circ}
\end{figure}
\begin{table}
\begin{tabular}{P{0.4\linewidth}P{0.6\linewidth}}
\hline \hline
\noalign{\vskip 1ex}
\multicolumn{2}{c}{State transformation with circulators}\\
\noalign{\vskip 1ex}
\hline \hline
\noalign{\vskip 1ex}
Indistinguishable photons & $a_{H0}b_{H0} \rightarrow -\frac{1}{2}(e_{H0}^2 - e_{H1}^2)$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with +1 relative phase & $\frac{1}{2}(a_{H0}^2 + b_{H0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 + e_{H1}^2)$\\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with $-1$ relative phase & $\frac{1}{2}(a_{H0}^2 - b_{H0}^2) \rightarrow -e_{H0}e_{H1}$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable photons & $a_{H0}b_{V0} \rightarrow -\frac{1}{2}(e_{H0}-e_{H1})(e_{V0}+e_{V1})$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable HOM pair & $\frac{1}{2}(a_{H0}^2 \pm b_{V0}^2) \rightarrow \frac{1}{4}\{(e_{H0}-e_{H1})^2\pm(e_{V0}-e_{V1})^2\}$ \\
\noalign{\vskip 1ex}
\hline \hline
\end{tabular}
\caption{State transformations in a system with circulators.
The first three rows deal with indistinguishable photons, made identical by giving them the same polarization.
A state consisting of two single photons becomes an HOM state.
HOM states taken as initial states become either an HOM state or a two-single-photon state.
Distinguishable photons are analyzed by introducing orthogonal polarizations.
Their output states are coupled, meaning the original states are not preserved.
} \label{tab:table_1}
\end{table}
\begin{table}
\begin{tabular}{P{0.4\linewidth}P{0.6\linewidth}}
\hline \hline
\noalign{\vskip 1ex}
\multicolumn{2}{c}{State transformation without circulators} \\
\noalign{\vskip 1ex}
\hline \hline
\noalign{\vskip 1ex}
Indistinguishable photons & $e_{H0}f_{H0} \rightarrow -e_{H0}e_{H1}$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with +1 relative phase & $\frac{1}{2}(e_{H0}^2 + f_{H0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 + e_{H1}^2)$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
HOM pair with $-1$ relative phase & $\frac{1}{2}(e_{H0}^2 - f_{H0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 - e_{H1}^2)$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable photons & $e_{H0}f_{V0} \rightarrow -e_{H0}e_{V1}$ \\
\noalign{\vskip 1ex}
\hline
\noalign{\vskip 1ex}
Distinguishable HOM pair & $\frac{1}{2}(e_{H0}^2 \pm f_{V0}^2) \rightarrow \frac{1}{2}(e_{H0}^2 \pm e_{V1}^2)$ \\
\noalign{\vskip 1ex}
\hline \hline
\end{tabular}
\caption{State transformations in a system without circulators.
The structure of the table is the same as that of Table \ref{tab:table_1}.
The first three rows deal with indistinguishable photons, made identical by giving them the same polarization.
The last two rows handle distinguishable photons.
The output states preserve the form of the input states.
We start the transformation at system location 0; the transformed states are then redistributed between locations 0 and 1.
The result shows coherent transport of the input states.} \label{tab:table_2}
\end{table}
\vspace{-10px}
\subsection{Delayed state redistribution}
\vspace{-10px}
\begin{figure*}[htp!]
\centering
\subfloat[][State redistribution without reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_array.pdf}\label{}}
\subfloat[][State redistribution with reflection]{\includegraphics[clip,width=\columnwidth]{multiport_two_array_phase_plate.pdf}\label{}}
\caption{Delayed state redistribution.
The two-photon amplitude transformation progresses in time from top to bottom.
The distance traveled in a single time step is indicated by vertical dashed lines. The total photon number in the system is two throughout the propagation. At the first step for both cases, the input two-photon state is transformed by the BS.
The transformed state becomes the HOM state, and it is indicated as red transparent overlapped circles occupying both modes.
The initial and the final transformed state are indicated using solid red circles, and intermediate states are indicated in striped yellow circles.
(a) Two multiports and beam splitters {\it without} phase shifters between the multiports.
The HOM state enters the multiport and is transformed into the form $-\frac{1}{2}(a_{0}-b_{0})(c_{0}+d_{0})$.
The amplitudes are coupled; however, they propagate without changing their amplitudes. After several steps, the amplitudes occupying the two rails converge to a single-mode state after transformation by the beam splitters.
The final state has the same form as the input state.
(b) Two multiports and beam splitters {\it with} a phase shifter P set at $\pi$ between multiports.
When the P is present, the right-moving coupled amplitude gains a relative phase between modes $a_1$ and $b_1$.
Reflection occurs at the multiport when the relative phase between the two is $\pi$.
Therefore, the transformed amplitude reflects upon a second multiport encounter, going back to the original state with opposite propagation direction.
Reflection does not occur on this transformed coupled left-moving amplitude, therefore it continues to propagate leftward.
The original left-moving amplitude becomes available for detection earlier than the transformed left-moving amplitude.}
\label{fig:delayed_state_dist}
\end{figure*}
As in the higher-dimensional HOM case, we introduce a temporal delay effect by placing a phase shifter between the two multiports.
\vspace{-10px}
\subsubsection{Without reflection}
\vspace{-10px}
When there is no phase shifter between the two multiports, the result is identical to the system with a single multiport from the previous section.
The state transformation and propagation is provided schematically in Fig. \ref{fig:delayed_state_dist} (a).
The photons are initially sent from the left side of the BS.
The correlated photons are transformed into an HOM state by the BS.
\vspace{-10px}
\begin{eqnarray}
&{e_{0}f_{0}}_R \xrightarrow{BS} \frac{1}{2}(a_{0}^2-b_{0}^2)_R \nonumber\\
&\xrightarrow{M}-\frac{1}{2}(a_{0}-b_{0})_L(c_{0}+d_{0})_R.
\end{eqnarray}
The HOM state is transformed by the multiport device.
This is a coupled state because the right-moving and left-moving amplitudes are not separated.
We propagate this state through the BS on the left and translate the amplitudes moving to the right.
\begin{eqnarray}
&\xrightarrow{BS,T}-\frac{1}{\sqrt{2}}f_{0L}(a_{1}+b_{1})_R \xrightarrow{M} -\frac{1}{\sqrt{2}}f_{0L}(c_{1}+d_{1})_R\nonumber\\
&\xrightarrow{BS} -f_{0T_0L}e_{1T_1R}.
\end{eqnarray}
The left-moving amplitude is transformed by the left BS while the right-moving amplitude propagates to the second multiport device.
This introduces a temporal difference between the right-moving and left-moving photons.
\vspace{-15px}
\subsubsection{With reflection}
\vspace{-10px}
Reflection of the amplitude is introduced when a phase shifter is placed between the two multiport devices, as indicated in Fig. \ref{fig:delayed_state_dist}(b).
\begin{eqnarray}
&{e_{0}f_{0}}_R \xrightarrow{BS} \frac{1}{2}(a_{0}^2-b_{0}^2)_R
\xrightarrow{M} -\frac{1}{2}(a_{0}-b_{0})_L(c_{0}+d_{0})_R \nonumber \\
&\xrightarrow{BS,T+P}-\frac{1}{\sqrt{2}}f_{0L}(a_{1}-b_{1})_R
\end{eqnarray}
The right-moving amplitude gains a relative phase between the upper and lower rails, and this relative phase causes the amplitude to be reflected upon the next multiport encounter.
\begin{eqnarray}
&\xrightarrow{M} -\frac{1}{\sqrt{2}}f_{0L}(a_{1}-b_{1})_L \xrightarrow{T+P} -\frac{1}{\sqrt{2}}f_{0L}(c_{0}+d_{0})_L\nonumber\\
&\xrightarrow{M} -\frac{1}{\sqrt{2}}f_{0L}(a_{0}+b_{0})_L \xrightarrow{BS} -f_{0T_{0}L} e_{0T_{2}L}
\end{eqnarray}
The input photons do not have any delays between the two at the beginning. The delay $\Delta T = T_2 - T_0$ is introduced from the reflection in the system.
\vspace{-10pt}
\section{Conclusion}
\vspace{-10px}
We demonstrated higher-dimensional quantum state manipulation, such as the HOM effect and state redistribution, by applying linear-optical four-ports realizing a four-dimensional Grover matrix, accompanied by beam splitters and phase shifters.
To observe the HOM effect, identical photons are sent into two of the four input-output ports and split into right-moving and left-moving amplitudes with no cross terms.
This absolute separation of propagation direction without mixing of right-moving and left-moving amplitudes ensures the photons remain clustered as they propagate through the system.
Variable phase shifts in the system allow the HOM photon pairs to switch between four spatial output destinations, which can increase information capacity.
Time delays between emerging parts of the clustered two-photon state, illustrating a ``delayed'' HOM effect, can be engineered using two multiports.
In addition, depending on the phase shifter position, the propagation direction can be reversed so that the right-moving amplitude is reflected at the second multiport, resulting in HOM pairs always leaving from the left side of the system with a particular time-bin delay.
The same situations have been investigated in a system without circulators, which redistributes the input state between the right and left sides of the system without changing amplitudes.
The HOM effect and clustered photon pairs are widely used in quantum information science. The approach introduced here adds extra degrees of freedom, and paves the way for new applications that require control over the spatial and temporal modes of the HOM amplitudes as they move through one- and two-dimensional networks.
We have demonstrated two photon amplitude control in both spatial and temporal modes. This two photon system can be extended to multiphoton input states, and manipulation of more complex entangled states would be the next milestones to be achieved.
\vspace{-15pt}
\section*{Appendix}
\section{\label{sec:intro}Introduction}
The flow periodicity due to vortex shedding from a bluff body exposed to a fluid stream can cause structural vibration if the body is flexible or elastically mounted, a phenomenon referred to as `vortex-induced vibration'. In practical applications, compliant structures usually have degrees of freedom to move both along and across the incident flow. Much of the fundamental research on the problem has dealt with rigid circular cylinders as bluff bodies, constrained elastically so as to have a single degree of freedom to oscillate either in-line with a free stream (the streamwise direction) or transversely (the cross-stream direction). The circular cylinder has the distinctive property of not being prone to galloping, so that vortex-induced vibration occurs in its purest form. In an early review on vortex shedding and its applications, \cite{King1977} noted that maximum in-line amplitudes are approximately 0.2 diameters, peak-to-peak, or about one-tenth of the corresponding maximum cross-stream amplitudes. As a consequence, subsequent research has mostly concentrated on purely transverse vortex-induced vibration, as attested in later reviews on the topic \citep[see, e.g.,][]{Bearman1984,Sarpkaya2004,Williamson2004,Gabbai2005,Bearman2011,Paidoussis}. Insight into the fundamentals of vortex-induced vibration can be gained by the complementary study of either in-line or transverse vortex-induced vibrations since both share the same excitation mechanism. The present study deals with the in-line case.
\subsection{Characteristics of in-line free response }
Figure \ref{fig:sketch} shows the flow-structure configuration considered in the present study.
The elastically-mounted cylinder is modelled using the conventional mass-spring-damper system. The cylinder is constrained so that it can oscillate only in-line with a uniform free stream. The motion of the cylinder is governed by Newton's second law, which can be expressed per unit span as
\begin{equation}\label{eq:motion1}
m\, \ddot{x}_c + c\, \dot{x}_c + k\, x_c = F_x(t),
\end{equation}
where $x_c$, $\dot{x}_c$, and $\ddot{x}_c$ respectively are the displacement, velocity and acceleration of the cylinder, $m$ is the mass of the cylinder, $c$ is the structural damping, $k$ is the spring stiffness, and $F_x(t)$ is the time-dependent sectional fluid force acting on the cylinder.
\begin{figure}\vspace{5mm}
\centerline{\includegraphics[width=0.63\textwidth]{Fig1.eps}}
\caption{\label{fig:sketch} Schematic of the flow-structure configuration. }
\end{figure}
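As a minimal illustration of the structural model in equation (\ref{eq:motion1}), the sketch below integrates the mass-spring-damper system under a prescribed force with a classical fourth-order Runge--Kutta scheme; all parameter values in the usage line are hypothetical, chosen only to exercise the integrator, and are not taken from the present simulations.

```python
import math

def integrate_cylinder_motion(m, c, k, force, dt=1e-3, n_steps=20000):
    """Integrate m*x'' + c*x' + k*x = F(t) with a classical fourth-order
    Runge-Kutta scheme, starting from rest."""
    x, v, t = 0.0, 0.0, 0.0
    def deriv(t, x, v):
        # returns (dx/dt, dv/dt)
        return v, (force(t) - c * v - k * x) / m
    for _ in range(n_steps):
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + dt/2, x + dt/2 * k1x, v + dt/2 * k1v)
        k3x, k3v = deriv(t + dt/2, x + dt/2 * k2x, v + dt/2 * k2v)
        k4x, k4v = deriv(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
    return x, v

# Hypothetical check: a constant force should relax to the static deflection F/k.
x_final, v_final = integrate_cylinder_motion(m=1.0, c=2.0, k=100.0,
                                             force=lambda t: 50.0)
```

In a fluid-structure simulation, the prescribed `force` callable would be replaced by the sectional fluid force obtained from the flow solver at each time step.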
The condition for the onset of vortex-induced in-line vibration can be broadly expressed as $U^*_a\approx 1/(2S)$, whereas the corresponding value for cross-stream vibration is $U^*_a\approx 1/S$, where $U_a^*= U_\infty/f_{n,a}D$ is the reduced velocity and $S=f_{v0}D/U_\infty$ is the Strouhal number; here $U_\infty$ is the velocity of the free stream, $D$ is the diameter of the cylinder, $f_{v0}$ is the frequency of vortex shedding for a stationary cylinder, and $f_{n,a}$ is the natural frequency of the structure in still fluid, i.e.\ including the `added mass'. Traditionally, response data from experimental studies obtained as the flow velocity is varied over the attainable range of experimental facilities are presented as a function of the reduced velocity based on the structural frequency in still fluid. Other non-dimensional parameters governing the structural response are: the ratio of the cylinder mass to the fluid mass displaced by the cylinder, denoted as the mass ratio $m^*$, the ratio of the structural damping to the critical damping at which the mechanical system can exhibit oscillatory response to external forcing, denoted as the damping ratio $\zeta$, as well as the Reynolds number, $Re$, which determines the flow regime. The definitions of the mass ratio and the damping ratio are also not uniform in the literature but depend on whether the added fluid mass is taken into account. In this work, we have selected a set of non-dimensional parameters listed in table \ref{tab:parameters}. It should be noted that we define the reduced velocity using the natural frequency of the system in vacuum, $f_n=(1/2\upi)\sqrt{k/m}$, in common with most previous numerical studies of vortex-induced vibration.
\begin{table}
\begin{center}
\begin{tabular}{cccccc}
\thead{Normalized\\amplitude} & \thead{Normalized\\frequency} &
\thead{Reduced\\velocity} & \thead{Mass ratio} &
\thead{Damping ratio} & \thead{Reynolds number} \\
$\displaystyle A^* = \frac{A}{D}$ & $\displaystyle f^* = \frac{fD}{U_\infty}$ &
$\displaystyle U^* = \frac{U_\infty}{f_nD}$ &
$\displaystyle m^* = \frac{m}{\frac{1}{4}\upi\rho D^2}$ &
$\displaystyle \zeta = \frac{c}{2\sqrt{km}}$ &
$\displaystyle Re = \frac{\rho U_\infty D}{\mu}$ \vspace{3pt} \\
\end{tabular}
\caption{\label{tab:parameters} Definitions of non-dimensional parameters employed in the present study.}
\end{center}
\end{table}
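For concreteness, the definitions in table \ref{tab:parameters} can be evaluated from dimensional inputs as in the sketch below; the variable names and the sample values used in the check are illustrative assumptions, not data from the present study.

```python
import math

def nondimensional_parameters(A, f, U_inf, D, m, c, k, rho, mu):
    """Evaluate the non-dimensional groups of table 1 from dimensional inputs:
    oscillation amplitude A and frequency f, free-stream velocity U_inf,
    cylinder diameter D, mass m, structural damping c, spring stiffness k,
    fluid density rho and dynamic viscosity mu (per unit span)."""
    f_n = math.sqrt(k / m) / (2 * math.pi)  # natural frequency in vacuum
    return {
        "A*":   A / D,
        "f*":   f * D / U_inf,
        "U*":   U_inf / (f_n * D),
        "m*":   m / (0.25 * math.pi * rho * D**2),
        "zeta": c / (2 * math.sqrt(k * m)),
        "Re":   rho * U_inf * D / mu,
    }
```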
Typically, the response amplitude of cylinder vibration is magnified in distinct ranges of the reduced velocity, which mimic the classical resonance of a single degree-of-freedom oscillator to external harmonic forcing. These distinct regions of high-amplitude response have been given various names such as `instability regions' \citep{King1977}, `excitation regions' \citep{Naudascher1987}, or `response branches' \citep{Williamson2004}.
It has been established as early as the 1970's that there exist two distinct excitation regions of free in-line vibration: the first one appears at $U_a^*\lesssim 2.5$ and has been associated with symmetrical shedding of vortices simultaneously from both sides of the cylinder, whereas the second one appears at $U_a^*\gtrsim2.5$ and has been associated with alternating shedding of vortices from each side of the cylinder \citep{Wootton1972,King1977,Aguirre1977}. The value of $U_a^*\approx2.5$ corresponds to $1/(2S)$ assuming a Strouhal number of 0.20, at which reduced velocity the frequency of vortex shedding from a stationary cylinder becomes equal to half the natural frequency of the structure in still fluid, i.e.\ $f_{v0}\approx \frac{1}{2}f_{n,a}$. The factor of 2 in the denominator arises because each of the two vortices shed from alternate sides of the cylinder during a shedding cycle induces a periodic oscillation of the fluid force in the streamwise direction, so that the in-line force fluctuates at twice the shedding frequency. It should be remembered that the Strouhal number is a function of the Reynolds number, defined as $Re=\rho U_\infty D/\mu$, where $\rho$ is the density and $\mu$ is the dynamic viscosity of the fluid. Therefore, the above value of $U_a^*\approx2.5$ should generally be replaced by $U_a^*\approx 1/(2S)$ in dealing with response data at different Reynolds numbers.
In experimental studies with elastically mounted rigid cylinders, the structural frequency depends on the oscillating mass and the stiffness of the supporting springs, which provide elastic restoring forces.
In an early study, \cite{Aguirre1977} conducted more than a hundred tests in a water channel to investigate vortex-induced in-line vibration. He concluded that the structural mass and the stiffness affected the cylinder response independently and in different ways: for a given value of $f/f_{v0}$, the density ratio (i.e.\ the mass ratio here) did not affect the response amplitude when normalized with the cylinder diameter, nor did the stiffness affect the normalized frequency of vibration $f/f_{n,a}$, where $f$ is the actual vibration frequency. Beyond that early study, the effects of the structural mass and stiffness, which are embodied in the mass ratio and the reduced velocity values, have not been systematically addressed in more recent studies. Yet, \cite{Okajima2004} found that the response amplitude of in-line oscillation decreases in both excitation regions with increasing reduced mass-damping, or Scruton number, a non-dimensional parameter that is proportional to the product $m^*\zeta$ and is often employed to compile peak amplitude data as a function of a single parameter. The latter findings possibly illustrate the influence of structural damping alone since the mass ratio was constant in those tests.
The existence of two distinct response branches of free in-line vibration and corresponding modes of vortex shedding were confirmed in more recent experimental studies at Reynolds numbers in the range approximately from $10^3$ to $3.5\times10^4$ \citep{Okajima2004,Cagney2013a}. The drop in response amplitude in-between the two branches, i.e.\ at $U_a^*\approx2.5$, has been attributed to the phasing of alternating vortex shedding, which provides a positive-damping, or negative-excitation force with respect to the oscillation of the cylinder \citep{Konstantinidis2005,Konstantinidis2014}. More recently, a mixed mode of combined symmetric and alternating vortex shedding was also reported to exist in-between the two branches \citep{Gurian2019}.
\subsection{Energy transfer and harmonic approximation}
For self-excited vibrations to be possible, energy must be transferred from the fluid to the structural motion in an average cycle so as to sustain the oscillations, i.e.\ $E\geqslant0$ where $ E=\oint{F_x\mathrm{d}x_c}$; ${F_x}$ is the instantaneous fluid force driving the body motion and $x_c$ is the displacement of the body. A rationale is to determine the energy transfer from forced vibrations of the body in order to predict whether free vibrations can occur for the corresponding case where the cylinder is elastically constrained. This is typically based on the approximation that both the motion of the cylinder $x_c(t)$ and the driving fluid force per unit length $F_x(t)$ can be expressed as single-harmonic functions of time $t$, e.g.\
\begin{eqnarray}
x_c(t) & = & X_0 + A\cos{(2\upi f t)},\label{eq:Xharmonic}\\
F_x(t) & = & F_{x0} + F_{x1}\cos{(2\upi ft+\phi_x)},\label{eq:Fharmonic}
\end{eqnarray}
where $X_0$ is the mean streamwise displacement of the cylinder, $A$ is the amplitude and $f$ is the frequency of body oscillation, $F_{x0}$ and $F_{x1}$ respectively are the magnitudes of the mean and unsteady in-line fluid forces, and $\phi_x$ is the phase lag between the displacement and the driving force at the oscillation frequency.
Using the above harmonic approximations, it can be readily shown that
\begin{equation}
E = \upi A F_{x1}\sin\phi_x.
\end{equation}
Hence, free vibration is possible only if $\sin\phi_x\geqslant0$, or equivalently only if the phase lag lies within the range $0^\circ\leqslant\phi_x\leqslant180^\circ$.
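The closed-form result $E=\upi A F_{x1}\sin\phi_x$ can be checked by direct numerical quadrature of $\oint F_x\,\mathrm{d}x_c$ over one cycle; the sketch below (with arbitrary illustrative values) does exactly that.

```python
import math

def cycle_energy(A, F1, phi, f=1.0, n=100000):
    """Numerically evaluate E = closed-path integral of F_x dx_c over one
    oscillation cycle for x_c(t) = A cos(2*pi*f*t) and
    F_x(t) = F1 cos(2*pi*f*t + phi). The mean terms X0 and Fx0 drop out
    of the closed integral, so they are omitted here."""
    T = 1.0 / f
    dt = T / n
    E = 0.0
    for i in range(n):
        t = i * dt
        Fx = F1 * math.cos(2 * math.pi * f * t + phi)
        # dx_c = (dx_c/dt) dt with x_c = A cos(2*pi*f*t)
        dx = -A * 2 * math.pi * f * math.sin(2 * math.pi * f * t) * dt
        E += Fx * dx
    return E
```

For $\phi_x=90^\circ$ the quadrature approaches the analytical value $\upi A F_{x1}$, while for $\phi_x=0^\circ$ the energy transfer vanishes, consistent with the criterion above.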
Two independent studies employing forced harmonic in-line vibrations at fixed amplitudes of oscillation in the range from 0.1 to 0.28 diameters peak-to-peak, have shown that energy is transferred from the fluid to the cylinder motion in two excitation regions separated by approximately $U^*_{r} \approx2.5$ for $Re$ values higher than $10^3$ \citep{Tanida1973,Nishihara2005}. Here, $U^*_{r} =U_\infty/fD$ is the reduced velocity based on the actual frequency of forced oscillation. The use of $U^*_{r}$ is also critical in correlating forced-vibration studies with the response from free-vibration studies, in which case the vibration frequency is not necessarily equal to the structural frequency \citep{Williamson2004,Konstantinidis2014}. Overall, predictions using forced harmonic vibration agree well with the excitation regions found in free vibration at relatively high Reynolds numbers, including the wake modes responsible for free vibration \citep{Tanida1973,Nishihara2005}. On the contrary, \citeauthor{Tanida1973} found that energy transfer was always negative for all reduced velocities at $Re=80$.
They stated that the results obtained for $Re=80$ were representative of the range $40\leqslant Re \leqslant150$, which indicated that free vibration may not be possible in that range of Reynolds numbers.
More recently, detailed results from two-dimensional numerical simulations for a cylinder placed in an oscillating free stream and the equivalent case of a cylinder oscillating in-line with a steady free stream both showed that $E<0$ for all reduced velocities at fixed Reynolds numbers of $Re=150$ \citep{Konstantinidis2017AOR} and $Re=100$ \citep{Kim2019}. The lowest amplitudes of forced oscillation in those studies were 0.1 and 0.05 diameters, respectively. The findings from both studies have also indicated that free in-line vibration may not be feasible for Reynolds numbers in the laminar regime, which is consistent with the earlier experimental study of \cite{Tanida1973}. Nevertheless, it is plausible that energy transfer may become positive at lower amplitudes of oscillation than those employed in previous studies; this possibility has not received attention to date since free in-line vibration has scarcely been studied at low Reynolds numbers. The authors are aware of only a few numerical studies in which the in-line vibration of a circular cylinder rotating at prescribed rates was investigated at $Re=100$ \citep{Bourguet2015,LoJacono2018}. These studies showed that an elastically-mounted rotating cylinder can be excited into large-amplitude galloping-type vibrations as the reduced velocity increases. However, the response amplitudes were negligible in the case of a non-rotating cylinder compared to the rotating cases and the former results were not discussed.
Apart from addressing the question whether in-line vortex-induced vibration is possible at low Reynolds numbers, a more fundamental issue is to clarify what are the flow physics causing variations in the amplitude and frequency of response when self-excited vibration does occur, not only at low Reynolds numbers. This issue impacts our understanding of vortex-induced vibration as well as its modelling and prediction using semi-empirical codes in industrial applications. To address that issue, it is essential to formulate a theoretical framework with the aid of which results can be interpreted. In this study, we maintain that the in-line free vibration offers a convenient test case because it allows different dynamical effects, which are associated with fluid inertia, fluid damping, and fluid excitation from the unsteady wake, to be segregated.
\subsection{Previous theoretical-empirical approaches}
A long-standing approach is to represent the in-line force per unit length $F_x(t)$ based on the equation proposed by \cite{Morison1950}. For a cylinder of circular cross section oscillating in-line with a steady free stream, the equation can be written as
\begin{equation}\label{eq:Morison1}
F_x(t) = \frac{1}{2}\rho D C_{dh}\left|U_\infty - \dot{x}_c\right|\left(U_\infty - \dot{x}_c\right) - \frac{1}{4}\upi\rho D^2C_{mh} \ddot{x}_c,
\end{equation}
where $\rho$ is the density of the fluid. The coefficients $C_{dh}$ and $C_{mh}$ are often referred to as drag and added mass (or inertia) coefficients, respectively, and their values are empirically determined from measurements or simulations. Inherent to this approach is the harmonic approximation since the coefficients $C_{dh}$ and $C_{mh}$ are often determined from tests where the cylinder is forced to vibrate harmonically. Even when the cylinder motion is self-excited, it is still necessary to characterize the vibration in terms of the least number of appropriate non-dimensional parameters for compiling fluid forcing data; this usually boils down to the use of two parameters, i.e.\ the normalized amplitude and normalized frequency of oscillation, which can fully characterize only single-harmonic oscillations.
Another similar approach is to decompose the fluctuating part of the in-line force into harmonic components in-phase with the displacement (or alternately acceleration) and in-phase with the velocity of the oscillating cylinder, in addition to a steady term for the mean drag.
By using harmonic approximations, the steady-state response can be predicted as we present in appendix \ref{app:harmonic}. This approach is analogous to using Morison \etal's equation and linearising the drag term as $\left|U_\infty - \dot{x}_c\right|\left(U_\infty - \dot{x}_c\right) \approx U_\infty^2 - 2U_\infty\dot{x}_c$. However, Morison \etal's equation comprises two force coefficients whereas the harmonic approximation comprises three force coefficients. The lack of an independent term for the mean drag may explain, at least partially, why Morison \etal's equation reconstructs the in-line force unsatisfactorily in the case of vibrations in-line with a steady free stream. However, it should be noted that the addition of a third term for steady drag in Morison's equation did not considerably improve the empirical-fit results \citep[see][]{Konstantinidis2017AOR}.
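The linearisation invoked above is valid for $|\dot{x}_c|\ll U_\infty$, with an error of exactly $\dot{x}_c^2$ whenever $0\leqslant\dot{x}_c<U_\infty$, i.e.\ second order in the cylinder velocity; the short sketch below (with hypothetical values) makes this explicit.

```python
def quadratic_drag_kernel(U, v):
    """Exact relative-velocity drag kernel |U - v|(U - v)."""
    rel = U - v
    return abs(rel) * rel

def linearised_drag_kernel(U, v):
    """Linearisation about small cylinder velocity v << U: U^2 - 2*U*v."""
    return U**2 - 2.0 * U * v

# For 0 <= v < U the difference (U - v)^2 - (U^2 - 2*U*v) equals v**2 exactly.
```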
\citet{Sarpkaya2001} discussed some limitations of Morison \etal's equation to represent the in-line force on a cylinder placed perpendicular to zero-mean oscillatory flow. Recently, \cite{Konstantinidis2017AOR} demonstrated the inability of the thus reconstructed force acting on a cylinder in non-zero-mean oscillatory flows to capture fluctuations due to vortex shedding in the drag-dominated regime, where the vortex shedding and the cylinder motion (the wave motion in that study) are not synchronized. When the primary mode of sub-harmonic synchronization occurs, Morison's equation provides a fairly accurate fit to the in-line force but subtle differences still exist, which may have detrimental effects when the model equation is used for predicting the free response. A disadvantage of previous approaches based on Morison \etal's equation as well as on the harmonic representation, is that the values of the force coefficients show some dependency on the best-fitting method, e.g.\ Fourier averaging \textit{vs.} least-squares method \citep[see][]{Konstantinidis2017AOR}. Another disadvantage, more important, is that force coefficients are empirically determined and as a consequence it is difficult to decipher the flow physics from the variation of the force coefficients, which is our primary goal in this work.
\subsection{Force decomposition and added mass}
Despite the empirical use of Morison \etal's equation, its inventors stated that it originates from the summation of a quasi-steady drag force and the added-mass force resulting from `wave theory' \citep{Morison1950}. A comprehensive discussion of the theory can be found in \citet{Lighthill1986}. For a body accelerating rectilinearly within a fluid medium, there is an ideal `potential' force acting on the body, which can be expressed as $F_{x,\mathrm{potential}}=-C_am_d\ddot{x}_c$, where $C_a$ is the added mass coefficient, $m_d$ is the mass of fluid displaced by the body, and $\ddot{x}_c$ is the acceleration of the body. Thus, the body behaves as if it has a total mass of $m+m_a$, where $m_a=C_am_d$ is the added mass of fluid. According to the theory of inviscid flow, in which case the velocity field can be defined by the flow potential, the added mass coefficient $C_a$ of any body is assumed to depend exclusively on the shape of the body. For a circular cylinder, $C_a=1$. Free-decay oscillation tests in quiescent fluid have shown that $C_a$ is quite close to the ideal value of unity. However, the applicability of the ideal $C_a$ value in general flows, including cylinders oscillating normal to a free stream, has been criticized \citep{Sarpkaya1979,Sarpkaya2001,Sarpkaya2004}. On the other hand, \citet{Khalak1996} argued that removing the ideal added-mass force from the total force will leave a viscous force that may still comprise a component in-phase with acceleration, i.e.\ the decomposition does not have to separate all of the acceleration-dependent forces as done in empirical approaches. Ever since, the separation of `potential' (inviscid) and `vortex' (viscous) components has been widely employed to shed light on the vortex dynamics around bodies oscillating transversely to a free stream \citep[see, e.g.,][]{Govardhan2000,Carberry2005,Morse2009b,Zhao2014,Zhao2018,Soti2018}.
For body oscillations in-line with a free stream, one may also split the streamwise force as
\begin{equation}
F_x(t) = F_{x,\mathrm{potential}}(t) + F_{x,\mathrm{vortex}}(t),
\end{equation}
in order to explore the link between fluid forcing and vortex dynamics.
This was previously done for the case of a fixed cylinder placed normal to a free stream with small-amplitude sinusoidal oscillations superimposed on a mean velocity \citep{Konstantinidis2011}. This case is kinematically equivalent to the forced vibration of the cylinder in-line with a steady free stream. In that study, large-eddy simulations corresponding to $Re=2150$ showed that alternating vortex shedding provides positive energy transfer for $U^*_r>2.5$, in very good agreement with previous experimental studies discussed earlier. It was also observed that the streamwise vortex force diminished in magnitude while the instantaneous phase of the vortex force with respect to the imposed oscillation drifted continuously near the middle of the wake resonance (synchronization) region. This was considered to be inconsistent with the flow physics in the following sense: within the synchronization region the vortex shedding and the oscillation are strongly phase-locked and the corresponding wake fluctuations are resonantly intensified. Therefore, the magnitude of the vortex force would have been expected to increase and its instantaneous phase to remain fairly constant in this region. The irregular phase dynamics observed in that study indicates that the vortex force remaining from subtracting the ideal inertial force from the total force may not fully represent the effect of the unsteady vortex motions on the fluid forcing.
\subsection{New theory and outline of the present approach}
In this paper, we develop a new theoretical model for representing the streamwise force on a cylinder oscillating in-line with a free stream. The model stems from some recent observations. In particular, it was recently shown that Morison \etal's equation based on the sum of a quasi-steady viscous drag force and an inviscid inertial force represents the in-line force with comparable accuracy as does the equation with best-fitted coefficients over a wide range of parameters from the inertia to drag-dominated regimes \citep{Konstantinidis2017AOR}. However, neither method could capture fluctuations at the vortex shedding frequency in the drag-dominated regime, as noted earlier. Thus, the idea here is to introduce an independent force term $F_{dw}$, i.e.\ to express the total force as
\begin{equation}\label{eq:Morison2}
F_x(t) = \frac{1}{2}\rho D C_{d}\left|U_\infty - \dot{x}_c\right|\left(U_\infty - \dot{x}_c\right) - \frac{1}{4}\upi\rho D^2C_a \ddot{x}_c + F_{dw}(t),
\end{equation}
where the first term represents the quasi-steady drag, the second term represents the inviscid added-mass force, and the third term represents the unsteady force due to periodic vortex formation in the wake. In this new approach, there are two viscous contributions: the quasi-steady drag, which is an `instantaneous' reaction force, and the wake drag, which represents the `memory' effect in a time-dependent flow. These contributions may be thought of as originating from the vorticity in the thin boundary and free shear layers, and from the vorticity in the near-wake region, respectively. Both are affected by the rate of diffusion of the vorticity, which is finite. However, at sufficiently high Reynolds numbers for which separation occurs, the diffusion within the thin vortex layers occurs fast enough, almost `instantaneously'. Then, the force required to supply the rate of increase of the kinetic energy of the rotational motion in these regions may be taken to be proportional to the square of the relative velocity $U_\infty-\dot{x}_c$, which gives rise to a quasi-steady drag \citep{Lighthill1986}. On the other hand, the diffusion of vorticity at the back of the cylinder is a very complex process involving its cross-annihilation as oppositely-signed vortices roll-up close together in the formation region \citep[see, e.g.,][]{Konstantinidis2016}; as a consequence the resulting fluid force acting on the body depends on the history of the vortex motions in the near wake.
The splitting of the viscous drag into quasi-steady and wake components is consistent with the contribution of vorticity in distinguishable flow regions around a cylinder to the fluid forces, as shown in the work of \citet{Fiabane2011}. They revealed these separable contributions by displaying force-density distributions based on the volume-integral expression proposed by \citet{Wu2007}. \citeauthor{Fiabane2011} were able to separate an `external-flow' region containing the thin vortex structures in the attached and free shear layers, which contributed 90\% of the mean drag, and a `back-flow' region between these two vortex layers behind the cylinder, which contributed almost all the drag fluctuations. Moreover, they found that the intensification of the vortex roll-up closer to the cylinder with increasing Reynolds number in the range $Re=50-400$ resulted in an increase of the drag fluctuations imposed by the back-flow region. Their findings suggest that -- when the cylinder is oscillating -- the `external flow' is at the origin of the quasi-steady drag whereas the contribution from the `back flow' is captured by the wake drag. Interestingly, \citet{Wu2007} also considered the flow around a circular cylinder in a steady free stream as a test case in their study; they remarked that while a concentrated vortex makes little direct contribution to the fluid force after its feeding sheet is cut off, it plays an indirect but major role through its induced effect on the unsteadiness of the boundary-layer separation and the motion of the separated shear layers, which implies that wake vortices influence the phasing of the fluid forces.
Assuming that a single periodic mode of vortex shedding occurs, the wake-induced force can be further modelled as a single-harmonic function of time, i.e.
\begin{equation}\label{eq:Fvharmonic}
F_{dw}(t) = \frac{1}{2}\rho U_\infty^2DC_{dw}\cos{(2\upi f_{dw}t+\phi_{dw})},
\end{equation}
where the coefficient $C_{dw}$ represents the magnitude of the unsteady wake drag, and $\phi_{dw}$ the phase between the wake drag and the displacement of the cylinder. The frequency $f_{dw}$ excited by the unsteady wake depends on the vortex-shedding mode, e.g.\ $f_{dw}=2f_{vs}$ for the alternating mode, whereas $f_{dw}=f_{vs}$ for the symmetrical mode.
Equation~(\ref{eq:Fvharmonic}) serves as a reduced-order model with the aid of which the fluid dynamics of vortex-induced vibration can be analysed more thoroughly than possible heretofore by employing previous semi-empirical approaches from the literature.
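A direct transcription of the model, equations (\ref{eq:Morison2}) and (\ref{eq:Fvharmonic}), is sketched below; this is illustrative only, since the coefficient values $C_d$, $C_a$, $C_{dw}$ and the wake frequency and phase must be supplied from simulations or experiments.

```python
import math

def model_force(t, U_inf, D, rho, xdot, xddot, Cd, Ca, Cdw, f_dw, phi_dw):
    """Three-term streamwise force model: quasi-steady drag + inviscid
    added-mass force + single-harmonic wake drag."""
    rel = U_inf - xdot
    quasi_steady = 0.5 * rho * D * Cd * abs(rel) * rel
    added_mass = -0.25 * math.pi * rho * D**2 * Ca * xddot
    wake_drag = 0.5 * rho * U_inf**2 * D * Cdw * math.cos(
        2 * math.pi * f_dw * t + phi_dw)
    return quasi_steady + added_mass + wake_drag
```

For a stationary cylinder ($\dot{x}_c=\ddot{x}_c=0$) with $C_{dw}=0$ the model reduces to the steady drag $\tfrac{1}{2}\rho U_\infty^2 D C_d$, as expected.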
In this study, we conducted numerical simulations of the flow-structure interaction of a circular cylinder elastically constrained so as to oscillate only in-line with a free stream. The main objectives are: (\textit{a}) use the numerical data from simulations to obtain the variations of the model parameters $C_{dw}$ and $\phi_{dw}$ as functions of the problem parameters, and (\textit{b}) use the expressions derived to calculate the model parameters in conjunction with the equation of cylinder motion to develop a theoretical framework for interpreting the phenomenology of vortex-induced in-line vibration. Simulations were restricted to the two-dimensional laminar regime at low Reynolds numbers to keep computer time within reason so as to examine the influence of the reduced velocity and the mass ratio over wide ranges and with a good resolution. The numerically produced sets of data for flow fields and induced fluid forces and their interaction with the resulting free motion of the cylinder allowed us to address the issues raised in the foregoing paragraphs and hopefully make a contribution to the understanding of the complex flow physics.
\section{Methodology}
\subsection{Governing equations\label{sec:equations} }
The flow is assumed incompressible and two-dimensional while physical properties of the fluid are constant. The fluid motion is governed by the momentum (Navier--Stokes) and continuity equations, which can be written in non-dimensional form using the pressure-velocity formulation as
\begin{eqnarray}
\frac{\partial u_x}{\partial t} + u_x\frac{\partial u_x}{\partial x} + u_y\frac{\partial u_x}{\partial y} & = & - \frac{\partial p}{\partial x} + \frac{1}{Re} \left(\frac{\partial^2 u_x}{\partial x^2} + \frac{\partial^2u_x}{\partial y^2} \right) - \ddot{x}^*_{c}, \label{eq:Xmomentum} \\
\frac{\partial u_y}{\partial t} + u_x\frac{\partial u_y}{\partial x} + u_y\frac{\partial u_y}{\partial y} &=& - \frac{\partial p}{\partial y} +
\frac{1}{Re} \left(\frac{\partial^2 u_y}{\partial x^{2}} + \frac{\partial^2u_y}{\partial y^{2}} \right),\\
\frac{\partial u_x}{\partial x} + \frac{\partial u_y}{\partial y}&=& 0.\label{eq:continuity}
\end{eqnarray}
Coordinates are normalized with $D$, fluid velocities with $U_\infty$, time with $D/U_\infty$, and pressure with $\rho U_\infty^2$. The acceleration of the cylinder $\ddot{x}^*_c$ appears on the right-hand side of equation (\ref{eq:Xmomentum}) because the Navier--Stokes equations are applied in a non-inertial frame of reference that moves with the vibrating cylinder. Instead of explicitly enforcing the continuity equation, the pressure field was computed by solving the following Poisson equation at each time step,
\begin{equation}\label{eq:poisson}
\nabla^2p = 2\left( \frac{\partial u_x}{\partial x}\frac{\partial u_y}{\partial y} - \frac{\partial u_x}{\partial y}\frac{\partial u_y}{\partial x} \right)
-\frac{\partial \mathcal{D}}{\partial t},
\end{equation}
where $\mathcal{D}=\partial u_x/\partial x + \partial u_y/\partial y$ is the dilation. Although the dilation is identically zero in incompressible flow (Eq.~\ref{eq:continuity}), the term $\partial\mathcal{D}/\partial t$ is retained in Eq.~(\ref{eq:poisson}) to prevent the accumulation of numerical inaccuracies \citep{Harlow1965}.
On the cylinder surface, the no-slip boundary condition gives
\refstepcounter{equation}
$$
u_x = 0, \qquad u_y = 0, \eqno{(\theequation{\mathit{a},\mathit{b}})}\label{eq35}
$$
and the following condition for the normal pressure gradient at the wall
\begin{equation}
\frac{\partial p}{\partial n} = \frac{1}{Re}\nabla^2 u_n - \ddot{x}^*_{c,n},
\end{equation}
where $n$ refers to the component normal to the cylinder surface pointing to the fluid side.
At the far field, a potential flow field is assumed so that
\refstepcounter{equation}
$$
u_x = u_{x,pot} - \dot{x}_c^*, \qquad u_y = u_{y,pot},
\eqno{(\theequation{\mathit{a},\mathit{b}})} \label{eq:BCfar}
$$
where $u_{x,pot}$ and $u_{y,pot}$ are the velocity components of the known irrotational (potential) flow. The corresponding condition for the far-field pressure is
\begin{equation}
\frac{\partial p}{\partial n} = 0. \label{eq:BCfar2}
\end{equation}
The initial field corresponds to the potential flow around a circular cylinder. The dimensionless form of the equations illustrates that the fluid motion depends solely on the Reynolds number given the boundary and initial conditions.
The displacement of a cylinder elastically constrained so that it can oscillate only in-line with a uniform free stream is governed by Newton's law of motion, which can be written in non-dimensional form as
\begin{equation}\label{eq:motion}
\ddot{x}_c^* + \frac{4\upi\zeta}{U^*}\dot{x}_c^* + \left(\frac{2\upi}{U^*}\right)^2 x_c^* = \frac{2C_x(t)}{\upi m^*},
\end{equation}
where $x_c^*$, $\dot{x}_c^*$, and $\ddot{x}_c^*$ respectively are the non-dimensional displacement, velocity, and acceleration of the cylinder, normalized using $D$ and $U_\infty$ as length and velocity scales; $C_x(t)$ is the sectional fluid force on the cylinder normalized by $0.5\rho U^2_\infty D$. Here, the fluid forcing is provided by the ambient flow through balancing the normal and shear stresses on the cylinder surface. The cylinder is initially at rest. The dimensionless form of equation (\ref{eq:motion}) illustrates that the cylinder motion depends on the reduced velocity $U^*$, the mass ratio $m^*$, and the damping ratio $\zeta$. The full set of independent dimensionless parameters of the problem comprises $Re,\,U^*,\,m^*$, and $\zeta$.
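As a consistency check on equation (\ref{eq:motion}), the non-dimensional damping and stiffness coefficients $4\upi\zeta/U^*$ and $(2\upi/U^*)^2$ must reduce to $(c/m)(D/U_\infty)$ and $(k/m)(D/U_\infty)^2$, respectively, when the definitions of table \ref{tab:parameters} are substituted. The sketch below verifies this numerically with arbitrary illustrative values.

```python
import math

def nondim_coefficients(m, c, k, D, U_inf):
    """Build the coefficients of the non-dimensional equation of motion,
    x'' + (4*pi*zeta/U*) x' + (2*pi/U*)**2 x = 2*Cx/(pi*m*),
    from the dimensional structural parameters."""
    f_n = math.sqrt(k / m) / (2 * math.pi)     # natural frequency in vacuum
    U_star = U_inf / (f_n * D)
    zeta = c / (2 * math.sqrt(k * m))
    damping = 4 * math.pi * zeta / U_star      # should equal (c/m)*(D/U_inf)
    stiffness = (2 * math.pi / U_star) ** 2    # should equal (k/m)*(D/U_inf)**2
    return damping, stiffness
```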
Because of the two-dimensional approach, simulations were limited to $Re = 250$.
Our main simulations are at $Re = 180$, a value which is slightly lower than the threshold for which mode-A spanwise instability occurs in the wake of a stationary cylinder, i.e.\ $Re_c \approx190$ \citep[see][]{Williamson1996,Barkley1996}, and may provide some indication of the corresponding threshold of three-dimensional transition for oscillating cylinders. Yet, the in-line oscillation of the cylinder causes wake synchronization, which has been shown to suppress the mode-A spanwise instability into two-dimensional laminar flow with strong K\'arm\'an vortices at $Re=220$ \citep{Kim2009}. Therefore, the flow may well be expected to remain strictly two dimensional for our main simulations at $Re=180$.
For simulations at the highest value of $Re=250$, some three-dimensional instability might plausibly exist. Three-dimensionality usually appears first in the form of weak coherent structures of streamwise vorticity with a specific wavelength riding on the primary spanwise vorticity, which remains in phase along the length of freely vibrating cylinders \citep[see][]{LoJacono2018,Bourguet2020}. Under such flow conditions, the spanwise vorticity component exceeds the other two components by an order of magnitude, and the direct effect of the three-dimensionality of the vortical structures on the force is weak, as noted by \citet{Wu2007}. Thus, any three-dimensional vortex structures in the cylinder wake may reasonably be expected not to directly influence the magnitude and phase of the streamwise and transverse fluid forces, which are primarily determined by the two-dimensional wake instability, nor the cylinder response at $Re=250$, the highest Reynolds number at which we conducted simulations to check the trends of the results in the laminar regime.
\subsection{Numerical code}
An in-house code based on the finite difference method was used to solve the equations of fluid motion \citep{Baranyi2008}. The flow domain is enclosed between two concentric circles: the inner circle is the boundary fitted to the cylinder surface while the outer circle represents the far field boundary. The polar physical domain is mapped into a rectangular computational domain using linear mapping functions. The computational mesh of the `physical domain' is fine in the vicinity of the cylinder and coarse in the far field while the corresponding mesh of the transformed domain is equidistant. Space derivatives are approximated using a fourth order finite-difference scheme except for the convective terms for which a third-order modified upwind difference scheme is employed. The pressure Poisson equation is solved using the successive over-relaxation (SOR) method and the continuity equation is implicitly satisfied at each time step. The Navier-Stokes equations are integrated explicitly using the first-order Euler method and the fourth-order Runge--Kutta scheme is employed to integrate the equation of cylinder motion in time. At each time step the fluid forces acting on the cylinder are calculated by integrating the pressure and shear stresses around the cylinder surface, which are obtained from the flow solver. The streamwise force is supplied to the right-hand side of Eq.~(\ref{eq:motion}) which is integrated to advance the cylinder motion. At the next time step, the cylinder acceleration is updated and the equations of fluid motion are integrated to complete the fluid-solid coupling. For all simulations reported here, the cylinder is initially at rest and the initial field around the cylinder satisfies the potential flow.
\subsection{Domain size, grid resolution and time-step dependence studies}
In this section, we present results from preliminary simulations to check the dependence of the main output parameters on (\emph{a}) the size of the computational domain in terms of the radius ratio $R_2/R_1$, (\emph{b}) the grid resolution $\xi_{max}\times\eta_{max}$, and (\emph{c}) the dimensionless time step $\Delta t$. Here $\xi_{max}$ and $\eta_{max}$ are the number of grid points in the peripheral and radial directions, respectively. During these computations, the Reynolds number, the reduced velocity, the mass ratio, and the structural damping ratio were fixed at $Re=180$, $U^*=2.55$, $m^*=10$, and $\zeta=0$, respectively. The main output parameters of interest are the amplitude $A^*$ and frequency $f^*$ of cylinder response (normalized with $D$ and $U_\infty$) as well as the standard deviations of the in-line and transverse fluid forces (normalized with $0.5\rho U_\infty^2D$), which are respectively denoted $C'_x$ and $C'_y$.
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\begin{tabular}{lccllll}
domain & $R_2/R_1$ & $\xi_{max}\times\eta_{max}$ & \mbox{$A^*$} & \mbox{$f^*$} & \mbox{$C'_x$} & \mbox{$C'_y$} \\
small &120 & $360\times274$ & 0.01082 & 0.3730 & 0.06974 & 0.4226 \\
medium &160 & $360\times291$ & 0.01079 & 0.3726 & 0.07051 & 0.4231 \\
large & 200 & $360\times304$ & 0.01077 & 0.3725 & 0.07094 & 0.4235 \\
\end{tabular}
\caption{\label{tab:Indep_Area} Results of the domain dependence studies at $(U^*, m^*, Re)=(2.55, 10, 180)$}
\end{center}
\end{table}
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\begin{tabular}{lcllll}
grid & $\xi_{max}\times\eta_{max}$ & \mbox{$A^*$} & \mbox{$f^*$} & \mbox{$C'_x$} & \mbox{$C'_y$} \\
coarse & 300$\times$242 & 0.01074 & 0.3725 & 0.07087 & 0.4235 \\
medium & 360$\times$291 & 0.01079 & 0.3726 & 0.07051 & 0.4231 \\
fine & 420$\times$339 & 0.01082 & 0.3727 & 0.07033 & 0.4229 \\
\end{tabular}
\caption{\label{tab:Indep_Grid} Results of the grid dependence studies at $(U^*, m^*, Re)=(2.55, 10, 180)$}
\end{center}
\end{table}
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\begin{tabular}{cllll}
$\Delta t$ & \mbox{$A^*$} & \mbox{$f^*$} & \mbox{$C'_x$} & \mbox{$C'_y$} \\
0.0004 & 0.01082 & 0.3727 & 0.07060 & 0.4233 \\
0.0002 & 0.01079 & 0.3726 & 0.07051 & 0.4231 \\
0.0001 & 0.01078 & 0.3726 & 0.07049 & 0.4231 \\
\end{tabular}
\caption{\label{tab:Indep_Time} Results of dimensionless time step dependence studies at $(U^*, m^*, Re)=(2.55, 10, 180)$}
\end{center}
\end{table}
First, three different values of the radius ratio $R_2/R_1$ of the inner and outer circles defining the computational domain were tested and the corresponding results are shown in table~\ref{tab:Indep_Area}. The number of radial nodes $\eta_{max}$ of the physical domain was adjusted in each case in order to keep the grid equidistant in the transformed domain. For these computations, the dimensionless time step was fixed at $\Delta t=10^{-4}U^*\cong0.0002$. Table~\ref{tab:Indep_Area} shows that the quantity most sensitive to the domain size is $C'_x$, displaying a relative difference of 1.7\% between the small and large domains, whereas the corresponding differences for $A^*$, $f^*$ and $C'_y$ are below 0.7\%. The relative differences for all quantities of interest become less than 0.6\% between the medium and large domains. Thus, the medium-sized domain with a radius ratio of $R_2/R_1=160$ was chosen for the rest of the computations.
Next, we tested three grids with different resolutions $\xi_{max}\times\eta_{max}$ where the number of peripheral and radial grid points was increased so that the grid remains equidistant in the transformed plane. For these computations, the radius ratio and dimensionless time step values were fixed at $R_2/R_1=160$ and $\Delta t=10^{-4}U^*\cong0.0002$, respectively. The results of these tests are shown in table~\ref{tab:Indep_Grid}. Again, $C'_x$ is the most sensitive quantity displaying a relative difference of 0.7\% between coarse and fine grids. All quantities of interest display relative differences less than 0.3\% between medium and fine grids. Thus, the medium-resolution grid was chosen for further computations.
Finally, we tested the dependence on the dimensionless time step, $\Delta t$, for three different values corresponding to $\Delta t\cong2\times10^{-4}U^*, 10^{-4}U^*$ and $5\times10^{-5}U^*$. For these computations, the radius ratio and grid resolution were fixed at $R_2/R_1=160$ and $360\times291$, respectively. The results are shown in table~\ref{tab:Indep_Time}. In these tests, the quantity most sensitive to the time step is $A^*$, which shows a relative difference of 0.4\% between the largest and smallest time steps, whereas the corresponding relative differences for $f^*$, $C'_x$ and $C'_y$ are all below 0.15\%. The relative differences of all quantities of interest are below 0.1\% between the intermediate and smallest time steps. Thus, a dimensionless time step of $\Delta t=10^{-4}U^*$ was chosen for the main computations.
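The relative differences quoted in these dependence studies follow directly from the tabulated values; a minimal check, reproducing the 1.7\% figure for $C'_x$ between the small and large domains of table~\ref{tab:Indep_Area}:

```python
def rel_diff(a, b):
    """Percent relative difference of b with respect to a."""
    return abs(b - a) / abs(a) * 100.0

# C'_x for the small and large domains (table 1)
cx_small, cx_large = 0.06974, 0.07094
print(f"{rel_diff(cx_small, cx_large):.1f}%")  # 1.7%
```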
\subsection{Code validation}
\begin{figure}
\centerline{\includegraphics[width=0.95\textwidth]{Fig2.eps}}
\caption{Comparison of results obtained in the present study (open circles) against the study of \cite{Bourguet2015} (filled squares) pertinent to purely in-line free vibration in terms of the variation of normalized amplitude $A^*$ and normalized frequency $f^*$ of cylinder response and the standard deviations of the normalized forces in-line and transverse to the free stream, $C'_x$ and $C'_y$ respectively, as functions of the reduced velocity $U^*$ for $(Re,\,m^*,\,\zeta)=(100,\,4/\upi,\,0)$.} \label{fig:comparison}
\end{figure}
The computational code used in the present study was previously employed in several studies of flows about stationary and oscillating cylinders and results have been extensively compared against data from the literature \citep[see][]{Baranyi2008,Dorogi2018,Dorogi2019}.
For instance, \cite{Baranyi2008} found good agreement with the study of \cite{Al-Mdallal2007} in terms of the time history of the lift coefficient and Lissajous patterns of lift \textit{vs.} cylinder displacement under comparable conditions for the case of a cylinder forced to oscillate in the streamwise direction. In addition, the extended code handling flow-structure interaction has been validated for the case of a cylinder undergoing vortex-induced vibration with two degrees of freedom with equal natural frequencies in the streamwise and transverse directions against results from \cite{Prasanth2008} for $m^*=10$, $\zeta=0$, and Reynolds numbers in the range from 60 to 240 \citep[for details see][]{Dorogi2018}. Furthermore, \cite{Dorogi2019} showed that results obtained with the present code compare well with published results in \cite{Navrose2017} for purely transverse free vibration, as well as in \cite{Prasanth2011} and \cite{Bao2012} for free vibration with two degrees of freedom with equal or unequal, respectively, natural frequencies in the streamwise and transverse directions, for similar conditions in each case.
In addition to the validation tests presented in previous studies, we compare in figure \ref{fig:comparison} results obtained with the present code against the study of \cite{Bourguet2015} for purely in-line free vibration. Here, we employed a finer step in the reduced velocity to resolve the maximum in $A^*$ as well as the minimum in $C'_x$.
There is excellent agreement of the results for $A^*$ and $C'_x$, but there are minor deviations for $f^*$ and $C'_y$ of 0.2\% and 1.2\%, respectively, which might be attributable to the different numerical methods employed in the two studies (finite difference \textit{vs.} spectral element). Overall, previous and present validation tests show that the numerical code employed in the present study provides accurate solutions.
\section{Results and discussion}
\subsection{Effect of Reynolds number on in-line response at a fixed mass ratio}
\begin{figure}
\centerline{\includegraphics[width=0.68\textwidth]{Fig3.eps}}
\caption{The in-line response amplitude $A^*$ and frequency $f^*$ with reduced velocity $U^*$ at different Reynolds numbers (see legend); $(m^*,\,\zeta)=(5,\,0)$. The $f^*$ values are divided by the corresponding $S$ values to take into account the effect of Reynolds number on the Strouhal number.} \label{fig:Re_response}
\end{figure}
To start with, we consider the effect of the Reynolds number on the in-line response of a cylinder with a mass ratio of $m^*=5$. The structural damping was set to zero so as to allow the highest possible amplitude response to take place.
Figure~\ref{fig:Re_response} (top plot) shows that there exists a single excitation region in which the response amplitude $A^*$ displays a marked peak for all Reynolds numbers considered. The $U^*$ value at which peak amplitudes occur decreases with $Re$, which can be attributed to the corresponding increase of the Strouhal number, since peak amplitudes occur at approximately $U^*_a\approx 1/(2S)$. The peak amplitude over the entire $U^*$ range, denoted as $A^*_\mathrm{max}$, increases from 0.002 at $Re=100$ to 0.024 at $Re=250$, i.e.\ an increase in $Re$ by a factor of 2.5 results in an increase in $A^*_\mathrm{max}$ by a factor of 12. This is a remarkable order-of-magnitude increase, which contrasts with the near-constancy of the peak amplitudes of purely transverse free vibration in the corresponding range of Reynolds numbers \citep[a compilation of $A^*_\mathrm{max}$ data as a function of $Re$ from several studies can be found in][]{Govardhan2006}.
For all simulations conducted in this study, the vortex shedding remained synchronized at half the frequency of the cylinder oscillation. As shown in the bottom plot in figure~\ref{fig:Re_response}, the normalized response frequency $f^*$ is approximately twice the corresponding Strouhal number at each Reynolds number. However, $f^*$ displays a trough within the excitation region that becomes more pronounced as the Reynolds number increases; the trough can hardly be discerned for $Re=100$. The variations of $A^*$ and $f^*$ appear to be strongly correlated. The characteristics of free response remain similar for all Reynolds numbers considered here. However, a sudden drop in $A^*$ appears just after the peak amplitude for $Re=250$, a feature which is not present at lower Reynolds numbers. This might indicate the onset of branching behaviour similar to that observed in vortex-induced vibration purely transverse to the free stream \citep[see][]{Leontini2006}.
\begin{figure}
\centerline{\includegraphics[width=0.96\textwidth]{Fig4.eps}}
\caption{\label{fig:Re_wake} Snapshots of the distribution of vorticity around a cylinder undergoing free in-line vibration for $(m^*,\,\zeta)=(5,0)$; (\emph{a}) $(Re,\,U^*)=(100,\,2.75)$, (\emph{b}) $(Re,\,U^*)=(180,\,2.44)$, (\emph{c}) $(Re,\,U^*)=(250,\,2.40)$. $U^*$ values correspond to peak response amplitudes in figure \ref{fig:Re_response}. Each snapshot corresponds to an arbitrary phase of the cylinder oscillation. Contour levels of normalized vorticity at $\pm0.1,\pm0.5,\pm0.9,\ldots$.}
\end{figure}
In comparison to previous experimental studies, which typically correspond to Reynolds numbers above $10^3$, we did not observe another excitation region associated with symmetrical vortex shedding. In contrast, we observed only the alternating mode of vortex shedding in the present simulations corresponding to the laminar wake regime. Figure~\ref{fig:Re_wake} shows vorticity distributions in the wake at $U^*$ values corresponding to peak amplitudes for different Reynolds numbers. In all cases, the vorticity distributions display the familiar von K\'arm\'an vortex street similar to the wake of a stationary cylinder.
As the Reynolds number is increased, the contours of individual vortices become more concentrated and peak vorticity values within them increase. This might be partly attributable to the increase in the amplitude of cylinder oscillation with Reynolds number, which enhances the generation of vorticity on the cylinder surface \citep{Konstantinidis2016}. In addition, the streamwise spacing between the centres of subsequent vortices decreases due to the increase of the normalized frequency of cylinder oscillation $f^*$ with Reynolds number. The absence of the other excitation region associated with symmetrical vortex shedding may be attributable to the fact that, as has been shown in several previous studies where the cylinder is forced to oscillate in the streamwise direction at correspondingly low Reynolds numbers, the onset of this mode occurs at relatively high amplitudes above 0.1 diameters \citep{Al-Mdallal2007,Marzouk2009,Kim2019}. Since the streamwise amplitudes of free vibration are much lower than that threshold, it is not surprising that the mode of symmetrical shedding and the corresponding excitation region were not observed in the present study.
\subsection{Effect of mass ratio on in-line response at a fixed Reynolds number}
Next, we concentrate on the effect of mass ratio on the in-line response at a fixed Reynolds number of $Re=180$, at which the flow is expected to remain laminar and strictly two dimensional. The structural damping was set to zero $(\zeta=0)$ to allow the highest possible amplitudes.
\subsubsection{Cylinder response }
Figure~\ref{fig:response} shows the variations of $A^*$ and $f^*$ with $U^*$ for four $m^*$ values. It can be seen that $A^*$ displays a single excitation region with peak amplitudes of approximately 1\% of the cylinder diameter, irrespective of $m^*$. At high reduced velocities, i.e.\ $U^*>4$, the response amplitude gradually drops off down to a level that depends on the mass ratio, with $A^*$ becoming lower as $m^*$ increases.
The response frequency $f^*$ initially decreases within the excitation region reaching a minimum value of approximately 0.372 for all mass ratios. As $U^*$ is increased beyond the point of minimum, $f^*$ increases asymptotically towards the value corresponding to twice the Strouhal number for a stationary cylinder, i.e.\ $f^*\approx2S=0.384$. It is interesting to note that $f^*<2S$ over the entire $U^*$ range for all $m^*$, i.e.\ the cylinder always oscillates at a frequency slightly lower than twice the frequency of vortex shedding from a stationary cylinder. This is consistent with forced harmonic vibration studies, which show that vortex-induced vibration due to alternating vortex shedding occurs for $f<2f_{v0}$ \citep{Nishihara2005,Konstantinidis2011}.
\begin{figure}
\centerline{\includegraphics[width=0.72\textwidth]{Fig5.eps}}
\caption{\label{fig:response} The variation of normalized amplitude $A^*$ and normalized frequency $f^*$ of cylinder response as functions of the reduced velocity $U^*$ for $Re=180$ and different mass ratios $m^*$ (see the symbol legend for $m^*$ values). }
\end{figure}
The variations of $A^*$ and $f^*$ shown in figure~\ref{fig:response} appear to be strongly correlated, i.e.\ $A^*$ increases as $f^*$ decreases and \textit{vice versa}. However, it should be noted that peak amplitudes occur at marginally lower $U^*$ values than those that correspond to the minimum in $f^*$. The $U^*$ values at which peak amplitudes occur depend on $m^*$ quite substantially. In fact, peak amplitudes occur approximately at the point where the frequency of cylinder oscillation approaches the natural frequency of the system in still fluid, i.e.\ $f\approx f_{n,a}$. This condition can also be expressed in dimensionless form as:
\begin{equation}\label{eq:Upeak}
f^*\approx\frac{1}{U^*}\sqrt{\frac{m^*}{m^*+C_a}}.
\end{equation}
The above condition can be verified in table~\ref{tab:Apeak}, which summarizes the response characteristics at peak amplitudes for different mass ratios. Effectively, we see that the reduced velocity at which the peak amplitude occurs primarily depends on the mass ratio but the peak response amplitude does not depend on the mass ratio.
\begin{table}
\begin{center}
\small\addtolength{\tabcolsep}{10pt}
\begin{tabular}{cllll}
$m^*$ & \mbox{$U^*$} & \mbox{$A_\mathrm{max}^*$} & \mbox{$f^*$} & $\frac{1}{U^*}\sqrt{\frac{m^*}{m^*+1}}$ \\
2 & 2.17 & 0.01081 & 0.3725 & 0.3763\\
5 & 2.44 & 0.01082 & 0.3724 & 0.3741\\
10&2.55 & 0.01078 & 0.3726 & 0.3739\\
20& 2.614 & 0.01081& 0.3727 & 0.3733\\
\end{tabular}
\caption{\label{tab:Apeak} Response characteristics at peak amplitude for different mass ratios ($Re=180)$. }
\end{center}
\end{table}
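The peak-amplitude condition can be evaluated numerically against the tabulated data, taking the ideal added mass coefficient $C_a=1$ for a circular cylinder; a minimal check:

```python
import math

C_a = 1.0  # ideal added mass coefficient for a circular cylinder

# m*: (U* at peak amplitude, measured f*), from table 4
peaks = {2: (2.17, 0.3725), 5: (2.44, 0.3724), 10: (2.55, 0.3726), 20: (2.614, 0.3727)}

for m_star, (U_star, f_star) in peaks.items():
    # predicted f* at peak: (1/U*) * sqrt(m*/(m* + C_a))
    predicted = math.sqrt(m_star / (m_star + C_a)) / U_star
    print(f"m*={m_star:2d}: measured f*={f_star:.4f}, predicted {predicted:.4f}")
```

For every mass ratio the predicted value agrees with the measured $f^*$ to within about 1\%, confirming that the peak occurs where the oscillation frequency approaches the still-fluid natural frequency.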
The drop in normalized frequency $f^*$ within the excitation region in fact illustrates the tendency of the oscillation to `lock-in' at the natural frequency of the structure in still fluid, i.e.\ $f\approx f_{n,a}$, over a range of reduced velocities. It is important to distinguish this `lock-in' tendency, which only occurs in free vibration, from `vortex lock-in' (a.k.a.\ `vortex lock-on'), which regards the synchronization of the vortex shedding and the cylinder oscillation and may occur in both forced and free vibration \citep{Konstantinidis2014}. In forced vibration, there is no natural frequency of the structure so the lock-in relationship $f=f_{n,a}$ is meaningless. It should also be noted that for all simulations reported in figure~\ref{fig:response} (i.e.\ for all $U^*$ and $m^*$ values), the vortex shedding was synchronized with the cylinder oscillation so that $f=2f_{vs}$, where $f_{vs}$ is the frequency of vortex shedding from the freely vibrating cylinder. This is tantamount to the sub-harmonic vortex lock-on in the context of forced oscillations where alternating vortex shedding synchronizes at half the frequency of cylinder oscillation, i.e.\ $f_{vs}=\frac{1}{2}f$ \cite[see, e.g.,][]{Kim2019}. However, the conventional lock-in $f\approx f_{n,a}$ occurs over a narrow range of reduced velocities in the region of peak-amplitude response.
\subsubsection{Magnitude and phase of fluid forces}
\begin{figure}
\centerline{\includegraphics[width=0.74\textwidth]{Fig6.eps}}
\caption{The variations of the mean drag coefficient $\overline{C}_x$ and the standard deviations of the normalized fluid forces in-line with and transversely to the free stream, respectively $C'_x$ and $C'_y$, as functions of the reduced velocity $U^*$ for different mass ratios, $m^*$ at $Re=180$ (see the legend for $m^*$ values). Dashed lines indicate constant values that correspond to the stationary cylinder.}\label{fig:forces}
\end{figure}
In figure \ref{fig:forces} we present the variations of the mean drag coefficient $\overline{C}_x$ and the standard deviations of the unsteady forces in-line with and transversely to the free stream, $C'_x$ and $C'_y$ respectively, as functions of $U^*$. The dashed lines indicate constant values corresponding to a stationary cylinder at $Re=180$ \cite[taken from][]{Qu2013}.
All three quantities exhibit similar variations with $U^*$ for all mass ratios, i.e.\ initially they increase above the corresponding fixed-cylinder values, subsequently they decrease steeply, and finally they gradually increase thereafter as $U^*$ is increased.
The maximum $\overline{C}_x$ is 0.5\% greater and the minimum $\overline{C}_x$ is 1.6\% smaller than the mean drag coefficient for the stationary cylinder, independently of the $m^*$ value. However, the maxima and minima in $\overline{C}_x$ occur at different $U^*$ values depending on the $m^*$ value.
The middle plot in figure~\ref{fig:forces} shows that $C'_x$ reaches a peak value at approximately the point of peak amplitude response, which is nearly three times the value corresponding to a stationary cylinder; a substantial increase despite the low amplitude of peak oscillation of only 1\% of the cylinder diameter.
Following the peak there is a steep decrease within a narrow $U^*$ range, at the end of which $C'_x$ tends to zero. Interestingly, for all mass ratios this occurs at $U^*=2.625$, a point at which the response frequency coincides with the natural frequency of the structure in vacuum, i.e.\ $f=f_n$ (or, in non-dimensional parameters, $f^*=1/{U^*}$). Hereafter, this special operating point will be referred to as the `coincidence point' and its ramifications will be discussed in more detail in Sect.~\ref{sec:superharmonic}. Beyond the coincidence point, $C'_x$ gradually reaches a plateau at a value that is proportional to $m^*$. On the other hand, the peak and trough $C'_y$ values are merely 6\% higher and 5\% lower, respectively, than the value corresponding to a stationary cylinder (see the bottom plot in figure~\ref{fig:forces}). For all $m^*$, a maximum value of exactly $C'_y=0.45$ is attained at the point of peak response amplitude. In addition, at the coincidence point, i.e.\ $U^*=2.625$, $C'_y$ attains exactly the same value of 0.418 for all $m^*$, which is just slightly below the value corresponding to a stationary cylinder. Since there is no body acceleration in the transverse direction, changes of $C'_y$ can be related to changes in the vortex dynamics around the oscillating cylinder \citep{Leontini2013}. The small variation in the $C'_y$ magnitude then illustrates that the process of vortex formation and shedding does not change substantially over the entire range of reduced velocities, even though the cylinder oscillates with different, but generally small, amplitudes.
The phase angle $\phi_x$ between the driving force $F_x(t)$ and cylinder displacement $x_c(t)$, as defined by harmonic approximations in equations (\ref{eq:Xharmonic}) and (\ref{eq:Fharmonic}), is often considered to be a useful parameter in studies of vortex-induced vibration. In addition to $\phi_x$, we also compute here the phase angle $\phi_y$ between the unsteady force acting in the transverse direction $F_y(t)$ and the displacement, assuming that $F_y(t)$ can also be approximated as a single-harmonic function of time. The calculation method of the phase angle is described in appendix \ref{app:phases}. It should be noted that the harmonic approximations work very well at low Reynolds numbers considered here except for the time history of $F_x(t)$ near the coincidence point as will be discussed in detail further below.
Figure \ref{fig:phases} shows the variations of the phase angles $\phi_x$ and $\phi_y$ as functions of $U^*$.
\begin{figure}
\centerline{\includegraphics[width=0.74\textwidth]{Fig7.eps}}
\caption{The variation of the phase angles $\phi_x$ and $\phi_y$ between in-line and transverse forces, respectively, and the cylinder displacement as functions of the reduced velocity $U^*$ at $Re=180$ and different mass ratios $m^*$ (see the symbol legend for $m^*$ values).}\label{fig:phases}
\end{figure}
In figure \ref{fig:phases}, it can be seen that $\phi_x$ jumps suddenly from $0^\circ$ to $180^\circ$ across the coincidence point at $U^*=2.625$ for all $m^*$. This behaviour can be predicted from the steady-state harmonic solution, which is given in appendix \ref{app:harmonic}, as follows. Equation (\ref{eq:Hsine}) requires that the condition $\sin\phi_x=0$ be satisfied when the structural damping is null $(\zeta=0)$, which constrains $\phi_x$ to be either $0^\circ$ or $180^\circ$. Taken in tandem with the requirement, per equation (\ref{eq:Hcosine}), that $\cos\phi_x$ must change from positive to negative as $U^*$ increases through $f^*U^*=1$, this translates to the jump in $\phi_x$ from $0^\circ$ to $180^\circ$ at the coincidence point. We stress that the equation of cylinder motion constrains $\phi_x$, which makes it impossible to infer any changes in the flow from the variation of $\phi_x$ as the reduced velocity is varied.
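This constraint fits in a few lines: with $\zeta=0$, the undamped harmonic balance admits only $\phi_x=0^\circ$ or $180^\circ$, selected by the sign of $1-(f^*U^*)^2$. A minimal sketch, using $f^*$ values representative of those in table~\ref{tab:phase}:

```python
def phi_x_undamped(f_star, U_star):
    """Phase angle (degrees) between in-line force and displacement for zeta = 0.

    sin(phi_x) = 0, so phi_x is 0 or 180 degrees; the branch is set by the
    sign of 1 - (f* U*)^2 in the in-phase force balance.
    """
    return 0.0 if f_star * U_star < 1.0 else 180.0

print(phi_x_undamped(0.380, 2.60))  # just before the coincidence point: 0.0
print(phi_x_undamped(0.381, 2.70))  # just after the coincidence point: 180.0
```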
In contrast to $\phi_x$, there is no analogous constraint on $\phi_y$, which instead of a jump displays a smooth variation with $U^*$, as can be seen in figure \ref{fig:phases}. For all mass ratios, $\phi_y$ increases from an initial value of approximately $20^\circ$ at low reduced velocities to a terminal value of approximately $105^\circ$ at high reduced velocities, with the precise values depending slightly on $m^*$. The $U^*$ range over which $\phi_y$ changes rapidly is consistent with the range of relatively high-amplitude response, which broadens as $m^*$ decreases (see figure \ref{fig:response}). The variation of $\phi_y$ clearly suggests a gradual change in the vortex dynamics as $U^*$ is varied over the prescribed range. Based on evidence from previous studies \citep[see][]{Konstantinidis2005,Konstantinidis2011}, our main hypothesis is that the variation of the phase angle $\phi_y$ can be linked to a gradual shift in the timing of vortex shedding with respect to the cylinder oscillation as $U^*$ is increased.
This hypothesis will be verified in the following subsection by inspection of vorticity distributions at different phases of the cylinder oscillation.
\subsubsection{Vorticity distributions}
We have selected three $U^*$ values, listed in table~\ref{tab:phase}, for presenting vorticity distributions in the wake. By design, the normalized frequency of cylinder oscillation $f^*$ is nearly the same in all three cases so that the vorticity patterns are easier to describe, since the streamwise spacing of the shed vortices scales with $f^*$ \citep{Griffin1978}. However, the frequency ratio $f/f_n=U^*f^*$ and the phase angle $\phi_y$ both increase with $U^*$, at different rates, as also shown in table~\ref{tab:phase}. The two higher $U^*$ values are just before and after the `coincidence point', $U^*f^*=1$.
For each $U^*$ value, figure~\ref{fig:vortex_phase} shows instantaneous vorticity distributions at two phases corresponding to the maximum and the subsequent zero displacement of the cylinder as indicated in the time traces (see bottom plots in figure~\ref{fig:vortex_phase}). The significant change of the phase angle $\phi_y$ as a function of $U^*$ can be readily inferred from the time traces of the displacement and the transverse force, which have been appropriately normalized with their maximum amplitudes for better visualisation of their waveforms.
As $U^*$ is increased, the position of individual vortices at corresponding instants appears to shift slightly downstream. The shift in the streamwise position of individual vortices fits very well with the change in $\phi_y$ as $U^*$ is varied: when $U^*$ changes from 2.35 to 2.6, $\phi_y$ more than doubles and a large shift is observed; when $U^*$ changes from 2.6 to 2.7, $\phi_y$ changes by a few degrees and the shift is hardly perceptible.
We made a quantitative estimation of the relative phase shift for pairs of $U^*$ values from the corresponding spatial shift in the position of corresponding vortex centres at the instant of maximum displacement, with vortex centres extracted by locating the points of peak vorticity in each individual vortex.
When $U^*$ changes from 2.35 to 2.6 the above method yields a relative phase shift of $54^\circ$, which is very close to $\Delta\phi_y=55.4^\circ$ computed directly from the phase shift of the transverse force (see table \ref{tab:phase}). Similar inferences can be made by looking into the stage of formation of vortices just behind the cylinder. For instance, a stripe of negative vorticity (in blue colour) connecting the second vortex to the third vortex taken from right to left, i.e.\ as time progresses, can be observed at maximum displacement for $U^*=2.35$. However, the corresponding vortices are no longer connected by a stripe of vorticity for $U^*=2.6$ and 2.7, which illustrates that vortex shedding is at a progressed stage in these cases.
It should be noted that although this visual feature, which we could make clear by selecting a minimum normalized-vorticity level of 0.1, depends on that minimum level, it still provides a quantitative comparison of the process of vortex formation at different reduced velocities, since the same contour levels were employed for all vorticity distributions. In a similar manner, the space occupied by individual vortices provides a measure of their strength, which allows quantitative comparisons when the same contour level is employed. It is also interesting to observe that nothing special changes in the vortex patterns between $U^*=$ 2.6 and 2.7 apart from a marginal phase shift, although $\phi_x$ jumps by $180^\circ$ on crossing the coincidence point.
The above observations firmly support the conclusion that the variation of $\phi_y$ as a function of $U^*$, in contrast to the variation of $\phi_x$, is directly linked to changes in the timing of vortex shedding from the cylinder.
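The conversion underlying the $54^\circ$ estimate can be sketched as follows; this is our reading of the procedure, and it assumes that one shedding cycle maps onto one streamwise vortex-pair spacing (the helper name is ours):

```python
def phase_shift_from_vortex_shift(delta_x, wavelength):
    """Convert a streamwise shift of corresponding vortex centres into a
    relative phase shift in degrees, assuming one shedding cycle spans one
    streamwise vortex-pair spacing (wake wavelength)."""
    return 360.0 * delta_x / wavelength
```

For illustration, a shift of 0.15 wake wavelengths corresponds to $54^\circ$.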
\begin{table}
\begin{center}
\small\addtolength{\tabcolsep}{22pt}
\begin{tabular}{rlll}
\mbox{$U^*$} & 2.35 & 2.60 & 2.70 \\
\mbox{$f^*$} & 0.378 & 0.380 & 0.381\\
\mbox{$U^*f^*$} & 0.888 & 0.989 & 1.030\\
\mbox{$\phi_y$} (degrees) & 40.6 & 96.0 & 99.2\\
\end{tabular}
\caption{\label{tab:phase} Important response characteristics at three reduced velocities for $(Re,\,m^*)=(180,\,5)$. }
\end{center}
\end{table}
\begin{figure}
\centerline{\includegraphics[width=0.99\textwidth]{Fig8.eps}}
\caption{The top two rows show instantaneous distributions of the vorticity at maximum and subsequent zero displacement during a cycle of cylinder oscillation for three reduced velocities in each column; $(Re,\,m^*)=(180,\,5)$.
The bottom row shows time traces of the normalized displacement $\hat{x}_c(t)$ (dashed lines) and the normalized transverse force $\hat{C}_y(t)$ (solid lines) where circle and square symbols, respectively, mark the instants for which vorticity distributions are shown above. Contour levels of normalized vorticity: $\pm0.1,\,\pm0.5,\,\pm0.9, \ldots$}
\label{fig:vortex_phase}
\end{figure}
\subsubsection{Variable added mass}
Changes in the frequency of cylinder response are often correlated with variations in the inertial force due to the added mass. This follows from the relationship
\begin{equation}\label{eq:CGW}
\frac{f}{f_{n,a}}=\sqrt{\frac{m^*+C_a}{m^*+C_{EA}}}.
\end{equation}
Here, $C_a$ is the ideal added mass coefficient from potential flow theory and $C_{EA}$ is a variable added mass coefficient \citep{Aguirre1977}. The latter coefficient has become known as the `effective added mass' in the context of transverse free vibration \citep{Khalak1996,Williamson2004}. Equation (\ref{eq:CGW}) is equivalent to (\ref{eq:Hcosine}) of the harmonic approximation solution where the component of the force in-phase with displacement has been substituted by
\begin{equation}
C_{x1}\cos\phi_x = 2\upi^3f^{*2}A^*C_{EA},
\end{equation}
and the ratio of the natural structural frequency in vacuum to that in still fluid is given by
\begin{equation}\label{eq:fn_ratio}
\frac{f_n}{f_{n,a}}=\sqrt{\frac{m^*+C_a}{m^*}}.
\end{equation}
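For reference, the effective added mass can be recovered from the response data by inverting equation (\ref{eq:CGW}); the sketch below (our helper, assuming $C_a=1$ and the normalization $f^*=fD/U_\infty$ so that $f/f_{n,a}=f^*U^*(f_n/f_{n,a})$) shows the computation:

```python
import math

def effective_added_mass(f_star, U_star, m_star, C_a=1.0):
    """Invert f/f_{n,a} = sqrt((m* + C_a)/(m* + C_EA)) for C_EA, with
    f/f_{n,a} = f* U* (f_n/f_{n,a}) and f_n/f_{n,a} = sqrt((m* + C_a)/m*)."""
    freq_ratio = f_star * U_star * math.sqrt((m_star + C_a) / m_star)
    return (m_star + C_a) / freq_ratio**2 - m_star
```

At the coincidence point ($f^*U^*=1$) this returns $C_{EA}=0$, and when $f=f_{n,a}$ it returns $C_{EA}=C_a$.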
\begin{figure}
\centerline{\includegraphics[width=0.75\textwidth]{Fig9.eps}}
\caption{The variation of the effective added mass $C_{EA}$ with the reduced velocity $U^*$ for different mass ratios $m^*$ (symbol legend as in figure \ref{fig:forces}).} \label{fig:CEA}
\end{figure}
In figure \ref{fig:CEA}, we present the variation of $C_{EA}$ as a function of $U^*$ for each $m^*$ investigated. $C_{EA}$ values were computed \textit{via} equation (\ref{eq:CGW}) and the known frequency ratio; note that $f/f_{n,a}=f^*U_a^*=f^*U^*(f_n/f_{n,a})$.
It can be seen that $C_{EA}$ decreases continuously from positive to negative values with $U^*$; $C_{EA}$ decreases from as high as 40.8 to as low as $-13.3$ for $m^*=20$. At some point, $C_{EA}$ takes the theoretical $C_a$ value of unity, which is indicated by the dashed line in the inset of figure \ref{fig:CEA}. Interestingly, this occurs at approximately the reduced velocity of peak amplitude response, which is given by equation~(\ref{eq:Upeak}).
Furthermore, for $m^*=20$ a gap separating operating points with $C_{EA}$ values above and below unity appears at $U^*=2.614$, which is not present at lower $m^*$ values. The discontinuous variation remained in place even though an extremely fine step of $\Delta U^*=0.002$ was employed around this point.
When the frequency of cylinder oscillation approaches the natural frequency of the structure in vacuum, equation~(\ref{eq:CGW}) yields $C_{EA}=0$, which occurs at the coincidence point $U^*=2.625$ for all mass ratios.
The very wide variation of $C_{EA}$ as a function of $U^*$ is unlikely to represent a genuine physical change, e.g.\ due to inertial effects, as we have already seen that the flow physics remain fairly robust over the entire range of reduced velocities. We point this out simply because we would like to decipher the true effect of the added mass, which seems impossible to do through the effective-added-mass concept. On the other hand, it is shown further below with the aid of the new theory that a constant value of the added mass coefficient aligns very well with the observations from the simulations.
Nonetheless, the empirical values of $C_{EA}$ as defined above are valid.
\subsubsection{Super-harmonic fluid forcing at the coincidence point}\label{sec:superharmonic}
As already pointed out, the standard deviation of the unsteady in-line force tends to zero as the frequency of cylinder vibration approaches the natural frequency of the structure in vacuum, which occurs at $U^*\approx2.625$ for all mass ratios.
This can be predicted from the harmonic solution as follows. Equations (\ref{eq:Hsine}) and (\ref{eq:Hcosine}) given in appendix \ref{app:harmonic} can be combined so as to eliminate the phase angle $\phi_x$. In the case of zero structural damping $(\zeta=0)$, the following relationship is obtained
\begin{equation}\label{eq:Ctotal}
C_{x1} = 2\upi^3\frac{m^*A^*}{{U^*}^2}\left| 1 - \left(f^*U^*\right)^2 \right|.
\end{equation}
The above relationship shows that if the vibration frequency approaches the natural frequency of the structure in vacuum, which can be expressed in non-dimensional form as $f^*U^*\rightarrow1$, then the only feasible solution is that $C_{x1}\rightarrow0$. In the present simulations we have found that for all mass ratios $f$ approaches $f_n$ within less than 0.1\%, which was made possible by using very fine steps of $\Delta U^*=0.01$ or even less.
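The behaviour of equation (\ref{eq:Ctotal}) near the coincidence point is easy to verify numerically; the helper below (ours) evaluates it for zero structural damping:

```python
import math

def C_x1_magnitude(m_star, A_star, U_star, f_star):
    """Equation (Ctotal): C_x1 = 2 pi^3 (m* A* / U*^2) |1 - (f* U*)^2|,
    valid for zero structural damping (zeta = 0)."""
    return 2 * math.pi**3 * m_star * A_star / U_star**2 * abs(1 - (f_star * U_star)**2)
```

As $f^*U^*\rightarrow1$ the magnitude vanishes regardless of $m^*$ and $A^*$.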
\begin{figure}
\centerline{\includegraphics[width=0.99\textwidth]{Fig10.eps}}
\caption{The top-row plots show time series of the cylinder displacement (dashed lines) and the unsteady streamwise force (solid lines) for different reduced velocities around the coincidence point; $(Re,\,m^*)=(180,2)$. Here, the cylinder displacement $\hat{x}_c(t)$ and the in-line force $\hat{C}_x(t)$ have been normalized with their corresponding maximum amplitudes in order to reveal their relative waveforms.
The bottom-row plots show the spectra of the streamwise force for the corresponding cases shown in the top row. } \label{fig:m2traces}
\end{figure}
Figure \ref{fig:m2traces} shows time series of the cylinder displacement and the fluctuating part of the in-line force at different reduced velocities. The displacement and the force were normalized with their corresponding maxima in order to reveal their relative waveforms because absolute values of the force are extremely small. Although the displacement remains remarkably harmonic, the force starts to deviate from pure harmonic form as $U^*$ is varied with a fine step around the coincidence point. Spectra of the force show that a secondary peak appears at the first super-harmonic of the vibration frequency. At $U^*=2.63$, the main spectral peak appears at the first super-harmonic of the vibration frequency, whereas a very small peak remains at the main frequency of vibration; the latter corresponds to an extremely small magnitude of $C_{x1}$, which is nevertheless sufficient to sustain the vibration at its main harmonic.
The emergence of a dominant super-harmonic accompanies the $180^\circ$ phase jump of the main harmonic occurring as the coincidence point is crossed, which can also be seen from the time traces in figure \ref{fig:m2traces}. The predominance of the first super-harmonic was also observed in flow-induced transverse vibration of a rotating cylinder at corresponding coincidence points \citep{Bourguet2014}. It seems that the phase-jump mechanism through a super-harmonic could be a generic feature.
\section{Development of new linear theory}\label{sec:theory}
In this section, we develop a theoretical framework based on the triple decomposition of the total in-line force $F_x(t)$ in equation~(\ref{eq:Morison2}) and the reduced-order model of the wake force $F_{dw}(t)$ in equation~(\ref{eq:Fvharmonic}). Initially, we derive analytical expressions from which the model parameters $C_d$, $C_{dw}$ and $\phi_{dw}$ can be calculated using numerical data from the simulations. Then, we present the variations of the model parameters with reduced velocity and mass ratio to elucidate the fluid dynamics of vortex-induced in-line vibration. In addition, we combine the analytical expressions for drag components with the equation of cylinder motion in order to derive further expressions that allow us to interpret the flow physics at play.
\subsection{Linearised force}
We begin with linearising the quasi-steady drag term in equation~(\ref{eq:Morison2}) and substituting the harmonic approximations in equations (\ref{eq:Xharmonic}) and (\ref{eq:Fvharmonic}) to obtain the linearised force, which we can express explicitly as a function of time $t$ as
\begin{eqnarray}
F_x(t) = \frac{1}{2}\rho U_\infty^2D \left[ C_d +2 \left(\frac{\omega A}{U_\infty}\right) C_d \sin{(\omega t)} + C_{dw} \cos{(\omega t+\phi_{dw})} \right] \nonumber\\ + \frac{1}{4}\upi\rho D^2C_a \omega^2A\cos{(\omega t)}, \label{eq:Morison3}
\end{eqnarray}
where the angular frequency $\omega=2\upi f$ has been introduced for brevity and $f_{dw}$ has been replaced by $f$ due to the synchronization of the wake and the cylinder vibration.
The first term on the right-hand side of equation (\ref{eq:Morison3}) represents the steady part of the in-line force whereas the remaining terms represent the unsteady part, which comprises the combination of cosine and sine functions at the frequency of cylinder oscillation. Equating the steady parts in equations (\ref{eq:Fharmonic}) and (\ref{eq:Morison3}) yields
\begin{equation}\label{eq:meanCd}
C_d = \overline{C}_x,
\end{equation}
i.e.\ $C_d$ is equal to the mean drag coefficient. Equating the cosine and sine terms in equations (\ref{eq:Fharmonic}) and (\ref{eq:Morison3}) yields the following relationships:
\begin{eqnarray}
C_{dw}\sin{\phi_{dw}} & = & C_{x1}\sin\phi_x + 4\upi f^*A^* C_d, \label{eq:Cdvsin} \\
C_{dw}\cos{\phi_{dw}} & = & C_{x1}\cos\phi_x - 2\upi^3f^{*2}A^*C_a. \label{eq:Cdvcos}
\end{eqnarray}
The set of equations (\ref{eq:meanCd}-\ref{eq:Cdvcos}) establishes relationships between fluid forcing components as functions of $A^*$ and $f^*$ alone, i.e.\ the structural parameters do not appear. Therefore, the same values of the fluid forcing components also hold when the cylinder is forced to oscillate at the same operating points in the $A^*:f^*$ parameter space as the free vibration. We use the above set of relationships in order to determine $C_{dw}$ and $\phi_{dw}$, which appear in the new theoretical model, from the cylinder response $(A^*,\,f^*)$ and the fluid forcing $(C_{x1},\,\phi_{x})$ data obtained from the simulations. It should be remembered that $C_a=1$ is the ideal added-mass coefficient of a circular cylinder.
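In practice, equations (\ref{eq:Cdvsin}) and (\ref{eq:Cdvcos}) yield $C_{dw}$ and $\phi_{dw}$ by quadrature; a minimal sketch (the function name is ours) of this post-processing step:

```python
import math

def wake_drag(C_x1, phi_x_deg, A_star, f_star, C_d, C_a=1.0):
    """Recover wake-drag magnitude and phase from equations (Cdvsin)-(Cdvcos):
    assemble the sine and cosine components, then convert to polar form."""
    phi_x = math.radians(phi_x_deg)
    s = C_x1 * math.sin(phi_x) + 4 * math.pi * f_star * A_star * C_d
    c = C_x1 * math.cos(phi_x) - 2 * math.pi**3 * f_star**2 * A_star * C_a
    return math.hypot(s, c), math.degrees(math.atan2(s, c))
```

For example, when $\phi_x=0$ and $C_{x1}=2\upi^3f^{*2}A^*C_a$, the cosine component cancels and the function returns $\phi_{dw}=90^\circ$ with $C_{dw}=4\upi f^*A^*C_d$.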
\subsection{Magnitude and phase of wake drag}
Figure \ref{fig:vortex_drag} shows the variation of $C_{dw}$ and $\phi_{dw}$ as functions of the parameter $U^*_af^*$. We have employed this parameter because peak amplitudes occur at $U^*_af^*\approx1$. As can be seen in the top plot, $C_{dw}$ also displays a peak at the same point, with a maximum value of 0.067 for all $m^*$. The mutual amplification of the response amplitude and the wake drag is representative of resonance. Hence, we refer to the condition $U^*_af^*=1$ as the `resonance point'. Away from resonance, $C_{dw}$ tends asymptotically to 0.036, which corresponds to the amplitude of the fluctuating drag coefficient for a non-vibrating cylinder at $Re=180$.
\begin{figure}
\centerline{\includegraphics[width=0.7\textwidth]{Fig11.eps}}
\caption{The variation of the wake drag magnitude $C_{dw}$ and phase $\phi_{dw}$ as functions of $U^*_af^*$ for different mass ratios $m^*$ at $Re=180$ (see the symbol legend for $m^*$ values).} \label{fig:vortex_drag}
\end{figure}
In the bottom plot of figure~\ref{fig:vortex_drag}, it can be seen that $\phi_{dw}$ follows an S-shaped increase as a function of $U^*_af^*$ from approximately $0^\circ$ to $180^\circ$. At $U^*_af^*=1$, $\phi_{dw}$ passes through $90^\circ$ for all $m^*$.
Overall, the variation of $\phi_{dw}$ is very similar to that of $\phi_y$ shown earlier; in fact there is a direct relationship between them, which is illustrated in figure \ref{fig:phaselink}. Since the variation of $\phi_y$ is invariably linked to the vortex dynamics, in particular to the timing of vortex shedding as discussed earlier, it follows that $\phi_{dw}$ appropriately captures related changes as the reduced velocity is varied.
\begin{figure}
\centerline{\includegraphics[width=0.74\textwidth]{Fig12.eps}}
\caption{Relationship between phase angles $\phi_{dw}$ and $\phi_{y}$ for different mass ratios $m^*$ (see the symbol legend for $m^*$ values).} \label{fig:phaselink}
\end{figure}
\subsection{Peak response at resonance point}
The new hydrodynamic model can be combined with the equation of cylinder motion in order to illustrate the interaction between the fluid and the structural dynamics.
The steady-state solution can be obtained following a similar procedure as described in appendix \ref{app:harmonic}, which results in the following two relationships:
\begin{equation}
C_{dw}\sin{\phi_{dw}} = 4\upi f^*A^* \left(C_d + \frac{\upi^2m^*\zeta}{U^*}\right), \label{eq:Cdvsin2}
\end{equation}
\begin{equation}
C_{dw}\cos{\phi_{dw}} = 2\upi^3\frac{m^*A^*}{U^{*2}}\left[1 - \left(\frac{f}{f_{n,a}}\right)^2 \right]. \label{eq:Cdvcos2}
\end{equation}
When the structural damping is zero $(\zeta=0)$, the solution of equation (\ref{eq:Cdvsin2}) apparently does not depend on $m^*$. It should be noted however that the feasible operating points in the $A^*:f^*$ space are limited to those that satisfy equation~(\ref{eq:Cdvsin2}), which correspond to the contour of zero energy transfer $(\sin\phi_x=0)$.
In addition, equation (\ref{eq:Cdvcos2}) becomes indeterminate at the resonance point, where $f=f_{n,a}$ and $\phi_{dw}=90^\circ$ simultaneously. It should be noted that there is no \textit{a priori} guarantee that the `resonance point' is attainable since the solution pursued here is in open form. Nevertheless, as we can see from the simulations, the resonance point is attained and corresponds to the maximum amplitude, in which case we can write equation (\ref{eq:Cdvsin2}) as
\begin{equation}\label{eq:peak}
A^*_\mathrm{max} = \frac{C_{dw}}{4\upi f^*C_d}.
\end{equation}
This result shows that the steady-state solution at the resonance point does not depend on $m^*$; rather the operating point is solely determined by the fluid dynamics.
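As a rough order-of-magnitude check (ours), inserting the resonance value $C_{dw}\approx0.067$ together with $(f^*/2)C_d\approx0.244$ at $Re=180$ (table \ref{tab:SCd}), i.e.\ $f^*C_d\approx0.488$, into equation (\ref{eq:peak}) gives:

```python
import math

# Values quoted in the text: C_dw ~ 0.067 at resonance, and
# (f*/2) C_d ~ 0.244 at Re = 180 from the vibrating-cylinder data.
C_dw = 0.067
f_star_C_d = 2 * 0.244                       # f* C_d
A_max = C_dw / (4 * math.pi * f_star_C_d)    # equation (peak)
print(f"A*_max ~ {A_max:.4f}")               # roughly 0.011
```

i.e.\ $A^*_\mathrm{max}\approx0.011$, of the order of the small in-line amplitudes typical of this configuration.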
When the structural damping is finite positive $(\zeta>0)$, equation~(\ref{eq:peak}) may still be used to estimate approximately the peak amplitude for very low values of the mass and the damping such that $\upi^2m^*\zeta/U^* \ll C_d$, on the condition that the resonance point is attained. The early experiments of \citet{Aguirre1977} support the above inference; he observed little variation of the peak amplitude in the second excitation region, i.e.\ the one associated with alternating vortex shedding, which occurred at $U_a^*\approx 1.2/(2S)$ for $m^*=1.2$ and 4.3 (see figure 43 of his thesis). The mass--damping was reported in terms of a stability parameter $k'_{s0}$ to be approximately 0.5, which was estimated from free-decay tests in still water, i.e.\ it includes contributions from hydrodynamic damping from the cylinder as well as its mountings. From his data, we estimate here that the corresponding $k_s$ value in air was significantly lower than the one measured in water so that we can assume $\upi^2m^*\zeta/U^*<0.05$; this is small compared to the mean drag coefficient $C_d$. Thus, we argue that the present analysis is consistent with the experimental facts at Reynolds numbers higher than considered in this study.
At the resonance point ($\phi_{dw}=90^\circ$), equations (\ref{eq:Cdvsin}) and (\ref{eq:Cdvcos}) reduce to
\begin{eqnarray}
C_{dw} & = & 4\upi f^*A^* C_d, \label{eq:Cdvsin3}\\
C_{x1} & = & 2\upi^3f^{*2}A^*C_a. \label{eq:Cdvcos3}
\end{eqnarray}
The above set of equations establishes relationships between the wake, mean, and total drag coefficients at the resonance point, which involve variations in $A^*$ and $f^*$.
Considering that all three force coefficients, $C_{dw}$, $C_{d}$, and $C_{x1}$, may be `uniquely' specified in the parameter space $A^*:f^*$, this could suggest a single operating point inside this space at which a steady-state harmonic solution is feasible, which implies that the cylinder peak response is drawn towards this operating point irrespective of the mass ratio. Nonetheless, it should be cautioned that bimodal dynamics can also appear in particular regions of the parameter space $A^*:f^*$ \citep{Cagney2013pof,Cagney2013a,Gurian2019}. Assuming that a unique solution exists at the resonance point, equation~(\ref{eq:Cdvsin3}) illustrates that the component of the wake drag in-phase with the cylinder velocity counterbalances the quasi-steady drag, whereas equation~(\ref{eq:Cdvcos3}) illustrates that the only contribution of the fluid force in-phase with the cylinder acceleration is the inviscid added-mass force. This is commensurate with the situation where an elastically mounted cylinder oscillates freely within a fluid medium with no net viscous forces; in such a situation, it would be anticipated that the frequency of cylinder oscillation is equal to the natural frequency of the structure in an inviscid fluid, i.e.\ $f=f_{n,a}$. It is as if the fluid interacted with the cylinder motion only through its ideal inertia. It should be emphasized that in the case of a viscous fluid, the zero net viscous force results from the cancelling out of the wake drag and the quasi-steady drag, which is brought about by a gradual change in the phasing of the vortex shedding, and thereby in $\phi_{dw}$, as the reduced velocity is varied.
The above changes can be viewed from another, more insightful perspective. As $U^*$ increases, the phasing of vortex shedding gradually shifts to follow the changes in the frequency of oscillation with the reduced velocity. Since these variations occur in a continuous manner for the low-$Re$ cases considered here, a point is reached where the timing of vortex shedding induces a wake drag exactly in-phase with the cylinder velocity, $\phi_{dw}=90^\circ$. At this point, the net viscous force must become null (Eq.~\ref{eq:Cdvsin3}), whereas only the added mass contributes to the force in-phase with the cylinder acceleration (Eq.~\ref{eq:Cdvcos3}). It is important to note again that these changes are brought about entirely by the fluid dynamics.
\subsection{Infinitesimal net force at coincidence point}
At first glance, it seems counter-intuitive that the cylinder experiences almost no net force at the coincidence point. However, this can be explained by the cancelling out of the fluid-force components. It should be remembered that when the structural damping is zero $(\zeta=0)$ the linearised solution of the equation of cylinder motion (Eq.~\ref{eq:Ctotal}) shows that the component of the streamwise force at the main harmonic is null $(C_{x1}=0)$ at the coincidence point $(f^*U^*=1)$. In this case, equations (\ref{eq:Cdvsin}) and (\ref{eq:Cdvcos}) reduce to
\begin{eqnarray}
C_{dw}\sin{\phi_{dw}} & = & 4\upi f^*A^* C_d, \label{eq:Cdvsin4} \\
-C_{dw}\cos{\phi_{dw}} & = & 2\upi^3f^{*2}A^*C_a. \label{eq:Cdvcos4}
\end{eqnarray}
From these equations, we can see that the wake drag in-phase with the velocity of the cylinder balances the quasi-steady drag while the wake drag in-phase with acceleration balances the inviscid inertia. This is physically possible as these force contributions originate from different flow structures. Furthermore, the cylinder must be oscillating for this case to be realized; the terms on the right-hand side of equations (\ref{eq:Cdvsin4}) and (\ref{eq:Cdvcos4}) are null for a non-oscillating cylinder. Although the above scenario is idealised as some damping will be present in a real situation, extending the analysis for a system with very low structural damping shows that the cylinder will experience a very low net in-line force at the coincidence point, if this point can still be attained. It should be noted that we approached very close to the coincidence point by employing a very fine step around this region, but exact coincidence was unattainable. A similar observation was made by \citet{Shiels2001} who showed that the net transverse force $C_y(t)$ exerted on a cylinder undergoing free vibration in the transverse direction becomes null at limiting values of the structural parameters, $m=c=k=0$. Here, we show that a null net streamwise force $C_x(t)$ on a cylinder undergoing free in-line vibration is physically possible at the coincidence point for finite $m$ and $k$ values.
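The two balances (\ref{eq:Cdvsin4}) and (\ref{eq:Cdvcos4}) jointly fix the wake drag at the coincidence point; the sketch below (our helper) assembles them and shows that $\phi_{dw}$ must then lie between $90^\circ$ and $180^\circ$, since the sine component is positive and the cosine component negative for an oscillating cylinder:

```python
import math

def coincidence_wake_drag(A_star, f_star, C_d, C_a=1.0):
    """Wake-drag magnitude and phase implied by the coincidence-point
    balances (Cdvsin4)-(Cdvcos4), where C_x1 = 0."""
    s = 4 * math.pi * f_star * A_star * C_d            # C_dw sin(phi_dw)
    c = -2 * math.pi**3 * f_star**2 * A_star * C_a     # C_dw cos(phi_dw)
    return math.hypot(s, c), math.degrees(math.atan2(s, c))
```

Both components vanish for a non-oscillating cylinder ($A^*=0$), consistent with the argument above.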
\subsection{$A^*_\mathrm{max}$ as a function of $Re$ in the laminar regime}
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\small\addtolength{\tabcolsep}{16pt}
\begin{tabular}{lcc}
$Re$ & $SC_{d}$ & $(f^*/2)C_d$\\
100 & 0.217 & 0.216 \\
180 & 0.252 & 0.244 \\
\end{tabular}
\caption{\label{tab:SCd} Compilation of data for stationary and vibrating circular cylinders in the laminar regime. Data for the stationary cylinder are from the study of \citet{Qu2013} and for the freely vibrating cylinder are from the present simulations at peak amplitude.}
\end{center}
\end{table}
The variation of the peak amplitude as a function of the Reynolds number can be estimated using the following assumptions. The mean drag coefficient $C_d$ and the inverse of the Strouhal number $1/S$ for stationary bluff bodies display similar variations as functions of $Re$ so that the product $SC_d$ remains almost constant \citep{Alam2008}. For vibrating cylinders, there is also evidence that the corresponding product $(f^*/2)C_d$ remains constant \citep{Griffin1978}. \citet{Alam2008} showed that $SC_d\approx0.25$ for stationary bluff bodies of various cross-sectional shapes throughout the sub-critical regime. However, $SC_d$ appears to increase slightly with $Re$ in the laminar regime as seen in their data compilation (see their figure 5). We have confirmed this dependency on $Re$ in the laminar regime for the stationary as well as the freely-vibrating circular cylinder as shown in table~\ref{tab:SCd}. Nevertheless, the variation of $SC_d$ and $(f^*/2)C_d$ with $Re$ is small, and in order to keep some generality in the result, we assume that $f^*C_d$ remains approximately constant. In this case, equation (\ref{eq:peak}) shows that $A^*_\mathrm{max}\propto C_{dw}$. In addition, we assume $C_{dw}$ is magnified by some constant factor at resonance so that $A^*_\mathrm{max}\propto C_{dw0}$, where $C_{dw0}$ is the fluctuating drag coefficient for a stationary cylinder. Then, peak amplitudes at different $Re$ can be estimated from data at one particular Reynolds number $Re_0$, i.e.
\begin{equation}\label{eq:Apredict}
A^*_\mathrm{max}(Re) = \left( \frac{A^*_\mathrm{max}}{C_{dw0}} \right)_{Re_0} C_{dw0}(Re).
\end{equation}
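Equation (\ref{eq:Apredict}) amounts to a one-line rescaling; the sketch below (our function, with placeholder numbers, since the $C_{dw0}(Re)$ data of \citet{Qu2013} are not reproduced here) illustrates its use:

```python
def predict_peak_amplitude(C_dw0_at_Re, A_max_ref, C_dw0_ref):
    """Equation (Apredict): scale the known peak amplitude at a reference
    Reynolds number Re0 by the ratio of stationary-cylinder fluctuating
    drag coefficients, A*_max(Re) = (A*_max / C_dw0)|_Re0 * C_dw0(Re)."""
    return (A_max_ref / C_dw0_ref) * C_dw0_at_Re
```

At the reference Reynolds number itself the prediction reproduces the known amplitude, as it should.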
Figure \ref{fig:Amax} shows predictions of $A^*_\mathrm{max}$ using $C_{dw0}$ data at different $Re$ \citep[taken from][]{Qu2013} and the known response at $Re=180$. It can be seen that $A^*_\mathrm{max}$ predictions fit well with the amplitude from the simulations at $Re=100$ while they also extrapolate to the result at $Re=250$. The enhancement of $A^*_\mathrm{max}$ with $Re$ is close to quadratic, which is quite marked compared to the nearly constant amplitude of purely transverse vortex-induced vibration in the laminar regime \citep[see][]{Govardhan2006}.
\begin{figure}
\centerline{\includegraphics[width=0.82\textwidth]{Fig13.eps}}
\caption{Comparison of $A^*_\mathrm{max}$ as a function of $Re$ obtained from the simulations (open symbols) and predicted from equation (\ref{eq:Apredict}) (full symbols).} \label{fig:Amax}
\end{figure}
\section{Conclusions}
In this study we have developed a theoretical model for the fluid force acting on a circular cylinder vibrating in-line with a free stream. The streamwise fluid force comprises the sum of three components: an inviscid added-mass force opposing the inertia of the cylinder, a quasi-steady drag force opposing the velocity of the cylinder, and a wake drag force.
The key element here is the splitting of the viscous force into quasi-steady and wake components, which stem from the contribution of vorticity in fairly separable flow regions, i.e.\ in the boundary/free shear layers and in the near wake, respectively.
The theory enabled us to decipher three important fluid-dynamical aspects. First, the phase angle of the vortex force with respect to the cylinder displacement $\phi_{dw}$ increases smoothly with $U^*$ due to a gradual shift in the timing of vortex shedding. This contrasts with the variation of the phase angle between the total force and the cylinder displacement, which must remain fixed at $\phi_x=0^\circ$ for $U^*f^*<1$ and at $\phi_x=180^\circ$ for $U^*f^*>1$, with a sudden jump at the coincidence point $(U^*f^*=1)$; this constraint is imposed by the equation of cylinder motion when the structural damping is null and has no bearing on the fluid dynamics. Second, the added mass coefficient for inviscid flow is also applicable for viscous flow about a cylinder oscillating in-line with a free stream. Third, the wake drag is amplified in the excitation region of relatively high-amplitude response, which supports the classification of vortex-induced in-line vibration as a resonance phenomenon.
The above findings could be confirmed because, unlike previous works dealing with vortex-induced vibration that primarily consider the fluid force in the direction of cylinder motion, here we calculated the phase angle between the force transverse to the direction of motion and the cylinder displacement, $\phi_y$. This allowed us to illustrate that the smooth variation of $\phi_y$ as a function of $U^*$, in contrast to $\phi_x$ that displays a sudden jump by $180^\circ$, is directly linked to a gradual shift in the timing of vortex shedding.
The theory developed in this investigation is linear, which means that all force components can be well approximated by single-harmonic functions of time, although the fluid dynamics is inherently non-linear. Departures from linearity can arise at much higher amplitudes and/or frequencies of cylinder vibration because of the quadratic relative-velocity term. In addition, other non-linearities may arise due to competing requirements posed by the structural dynamics and the fluid dynamics. In fact, we have pinpointed in the present study that such non-linear effects become apparent at the `coincidence point' where the vibration frequency becomes equal to the natural frequency of the structure in vacuum. At this point, the equation of cylinder motion requires that the component of the in-line force at the main harmonic of the vibration becomes almost null, which results from balances between the quasi-steady drag and the wake drag in-phase with the velocity, and between the inviscid inertia and the wake drag in-phase with acceleration. In this case, the component at the first super-harmonic of the vibration frequency dominates the driving in-line force.
An important result predicted from the theory is that the response does not depend on $m^*$ at the point where $\phi_{dw}=90^\circ$. The simulations show that this occurs at the `resonance point' where the same peak amplitude is attained for all $m^*$ values, in accord with the theory. It should be noted however that the theory cannot predict that the maximum amplitude occurs when $\phi_{dw}=90^\circ$, or that this operating point does materialize, since the solution is given in open form.
On the other hand, the flow physics suggest that $\phi_{dw}=90^\circ$ is reached at some point owing to the gradual shift in the timing of vortex shedding, which accompanies the continuous variation of the vibration frequency as the reduced velocity is varied. Nevertheless, we have also observed that discontinuities may arise when changing some parameter, e.g.\ $Re$ or $m^*$. Further study over wider ranges of these parameters than considered here may be worth undertaking in the future.
The simulations show that the Reynolds number has a remarkable effect on the maximum amplitudes $A^*_\mathrm{max}$ attained over the entire reduced velocity range; increasing $Re$ from 100 to 250 results in a 12-fold increase of $A^*_\mathrm{max}$. This stands in sharp contrast to free vibration transverse to a free stream, in which case peak amplitudes remain fairly constant in the laminar regime. This may be attributable to variations of the added mass coefficient in the latter configuration \citep{Konstantinidis2013prsa}. In view of this complexity, we consider that free in-line vibration offers a more convenient test case for uncovering the fluid dynamics.
The fluid dynamics could be elucidated because in-line response amplitudes remain very small for the low Reynolds numbers investigated in the present study. As a consequence, the fluid excitation comes solely from the primary wake instability associated with alternating vortex shedding, which remains robust and similar to that in the wake of a non-vibrating cylinder. It has been well established in the published literature that other instabilities can be excited, even at small amplitudes, as the Reynolds number increases, such as the symmetrical mode of vortex shedding. There may also exist competition between different modes. Further instabilities due to gradual transition to turbulence will inevitably complicate the phenomenology of vortex-induced vibration. However, we maintain that the theory developed here remains valid and can be used to analyse the more complex phenomena at higher Reynolds numbers, possibly with some adjustments for different fluid excitation mechanisms.
\vspace{3pt}
{\bfseries Acknowledgements.}
This research was supported by the European Union and the Hungarian State, co-financed by the European Regional Development Fund in the framework of the GINOP-2.3.4-15-2016-00004 project, which aims to promote cooperation between higher education and industry.
\vspace{3pt}
{\bfseries Declaration of Interests.} The authors report no conflict of interest.
\section{Introduction}
\label{sec:introduction}
\input{introduction}
\section{Notation}
\input{notation}
\section{DCTN's description}
\subsection{Input preprocessing}
\label{sec:input_preprocessing}
\input{input_preprocessing}
\subsection{Entangled plaquette states}
\label{sec:entangled_plaquette_states}
\input{entangled_plaquette_states}
\subsection{Description of the whole model}
\label{sec:description_of_the_whole_model}
\input{description_of_the_whole_model}
\subsection{Optimization}
\input{optimization}
\section{Experiments}
\label{sec:experiments}
\input{experiments}
\section{Conclusion}
\input{conclusion}
\section*{Acknowledgements}
The work of Anh-Huy Phan was supported by the Ministry of Education
and Science of the Russian Federation under Grant 14.756.31.0001.
\bibliographystyle{plainnat}
\subsection{MNIST}
We tested DCTN with one EPS, \(\nu=0.5\) in \cref{eq:phi_definition}, \(K_1=4, Q_1=4\),
\(\text{lr} = 3 \cdot 10^{-3}\), \(\lambda = 0\) in \cref{eq:opt_problem_whole_tn_l2_reg},
batch size 128 on the MNIST dataset with a 50000/10000/10000 training/validation/test split. We
obtained 98.75\% test accuracy. MNIST is considered relatively easy and does not represent modern
computer vision tasks \citep{fashionmnist_readme}.
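For concreteness, a 50000/10000/10000 split of the kind used above can be produced, for instance, as follows (a sketch; the seed and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)        # arbitrary seed (our choice)
indices = rng.permutation(70_000)     # 70000 images in total
train_idx = indices[:50_000]
val_idx = indices[50_000:60_000]
test_idx = indices[60_000:]
```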
\subsection{FashionMNIST}
FashionMNIST \citep{xiao2017fashion} is a dataset fully compatible with MNIST: it contains
70000 grayscale \(28 \times 28\) images. Each image belongs to one of 10 classes of
clothes. We split the 70000 images into a 50000/10000/10000 training/validation/test split and
experimented with models with one, two, and three EPSes. The more EPSes we used, the more DCTN
overfit and the worse the validation accuracy became, so we didn't experiment with more than
three EPSes.
For one, two, and three EPSes, we chose hyperparameters by a combination of gridsearch and
manual choosing and presented the best result (chosen by validation accuracy before being
evaluated on the test dataset) in \Cref{table:fashionmnist_experiments}.
In \Cref{sec:how_hyperparameters_affect_optimization_and_generalization}, we describe
more experiments and discuss how various hyperparameters affect optimization and generalization of DCTN.
\begin{table}[h]
\caption{Comparison of our best models (top 3 rows) with 1, 2, and 3 EPSes, respectively,
with the best (by a combination of accuracy and parameter count) existing models on
the FashionMNIST dataset. DCTN with one EPS wins against existing models with similar parameter
count. Adding more EPSes makes test accuracy worse due to overfitting. All 3 of our models
eventually reach nearly 100\% training accuracy if not stopped early. We trained all DCTNs
with batch size 128.}
\label{table:fashionmnist_experiments}
\resizebox{\textwidth}{!}{%
\begin{tabular}{p{13cm}lr}
Model & Accuracy & Parameter count \\
\hline One EPS \(K_1{=}4\), \(Q_1{=}4\), \(\nu{=}0.5\),
\(E_1 {\sim} \mathcal{N}(\mu{=}0, \sigma{=}0.25)\),
\(A,b {\sim} U[-(H_1 W_1 Q_1)^{-0.5}, (H_1 W_1 Q_1)^{-0.5}]\),
\(\text{lr}{=}3\cdot10^{-3}\), \(\lambda{=}0\), \cref{eq:opt_problem_whole_tn_l2_reg}
& 89.38\% & \(2.9 \cdot 10^5\) \\
\hline Two EPSes, \(K_1{=}4, Q_1{=}4, K_2{=}3, Q_2{=}6\), \(\nu {\approx} 1.46\),
EPSes initialized from \(\mathcal{N}(\mu{=}0, \sigma{=}Q_\text{in}^{-0.5 K^2})\),
\(A {\sim} \mathcal{N}(\mu{=}0, \sigma{=}0.25(H_2 W_2 Q_2)^{-0.5})\),
\(b {\sim} U[-(H_2 W_2 Q_2)^{-0.5}, (H_2 W_2 Q_2)^{-0.5}]\),
\(\text{lr}{=}1.11\cdot10^{-4}\),
\(\lambda{=}10^{-2}\), \cref{eq:opt_problem_epswise_l2_reg} & 87.65\% & \(1.8 \cdot 10^6\) \\
\hline Three EPSes, \(K_1{=}4, Q_1{=}4, K_2{=}3, Q_2{=}12, K_3{=}2, Q_3{=}24\),
\(\nu {\approx} 1.46\), EUSIR initialization of EPSes (see \Cref{sec:initialization_and_scaling_of_input}),
\(A {\sim} \mathcal{N}(\mu{=}0, \sigma{=}0.25 (H_3 W_3 Q_3)^{-0.5})\),
\(b {\sim} U[-(H_3 W_3 Q_3)^{-0.5}, (H_3 W_3 Q_3)^{-0.5}]\),
\(\text{lr}{=}10^{-7}\), \(\lambda {=} 10^{-1}\), \cref{eq:opt_problem_whole_tn_l2_reg}
& 75.94\% & \(4\cdot10^6\) \\
\hline GoogleNet + Linear SVC &
93.7\% & \(6.8\cdot 10^6\) \\
\hline VGG16 &
93.5\%
& \(2.6 \cdot 10^7\) \\
\hline CNN: 5x5 conv -\textgreater 5x5 conv -\textgreater linear -\textgreater linear &
91.6\%
& \(3.3 \cdot 10^6\) \\
\hline AlexNet + Linear SVC &
89.9\%
& \(6.2 \cdot 10^7\) \\
\hline Matrix tensor train in snake pattern (Glasser 2019) & 89.2\% & ? \\
\hline Multilayer perceptron &
88.33\%
& \(2.3 \cdot 10^5\)
\end{tabular}}
\end{table}
\subsection{CIFAR10}
CIFAR10~\citep{cifar10} is a dataset of color \(32 \times 32\) images in 10 classes. We used a 45000/5000/10000 train/validation/test split. We evaluated DCTN on the color version using the YCbCr color scheme and on a grayscale version which mimics MNIST and FashionMNIST. The results are in \Cref{table:cifar10_experiments}.
DCTN overfits and performs poorly -- barely better than a linear classifier. Our hypotheses for why DCTN performs poorly on CIFAR10 in contrast to MNIST and FashionMNIST are:
(a)~CIFAR10 images have far fewer zero values; (b)~classifying CIFAR10 is a much more difficult problem; (c)~making CIFAR10 grayscale loses too much useful information, while the non-grayscale version has too many features, which leads to overfitting. In future work, we are going to check these hypotheses with intensive numerical experiments.
\begin{table}[h]
\caption{DCTN results on CIFAR10. For each number of color channels, for each number of EPSes, we chose the
kernel sizes \(K_n\), the quantum dimension sizes \(Q_n\), and the learning rate using grid
search (excluding models whose training didn't fit in 8 GB of GPU memory) and
showed the best model in the table. All of
these models can reach almost 100\% training accuracy if not stopped early. The two bottom rows
show the accuracy of a linear classifier and of one of the state-of-the-art CNNs for comparison.}
\label{table:cifar10_experiments}
\centering
\begin{tabular}{ccc}
Channels & Model & Accuracy\\
\hline Grayscale & One EPS, \(K_1{=}4, Q_1{=}4\) & 49.5\%\\
\hline Grayscale & Two EPSes, \(K_1{=}4, Q_1{=}4, K_2{=}3, Q_2{=}6\) & 54.8\%\\
\hline YCbCr & One EPS, \(K_1{=}2, Q_1{=}24\) & 51\%\\
\hline YCbCr & Two EPSes, \(K_1{=}2, Q_1{=}23, K_2{=}2, Q_2{=}24\) & 38.6\%\\
\hline RGB & Linear classifier & 41.73\%\\
\hline RGB & EfficientNet-B7~\citep{efficientnet} & 98.9\%\\
\end{tabular}
\end{table}
\subsection{Properties of successful neural networks}
\label{sec:properties}
Nowadays, neural networks (\emph{NNs}) achieve outstanding results in many machine learning tasks~\citep{paperswithcode_sota}, including computer vision, language modeling, game playing (e.g., checkers, Go), and automated theorem proving~\citep{gpt_f}. There are three properties many (but not all) NNs enjoy, which are thought to be responsible for their success. For example, \citet{cohen2016expressive} discuss the importance of these properties for deep CNNs.
\begin{itemize}
\item \emph{Parameter sharing}, i.e., applying the same transformation multiple times in parallel or sequentially. A layer of a convolutional neural network (\emph{CNN}) applies the same function, defined by a convolution kernel, to all sliding windows of an input. A recurrent neural network (RNN) applies the same function to the input token and the hidden state at each time step. A self-attention layer in a transformer applies the same query-producing, key-producing, and value-producing functions to each token~\citep{illustrated_transformer}.
\item \emph{Locality}. Interactions between nearby parts of an input are modeled more accurately, while interactions between far away parts are modeled less accurately or not modeled at all. This property makes sense only for some types of input. For images, this is similar to receptive fields in a human's visual cortex. For natural language, nearby tokens are usually more related than tokens far away from each other. CNNs and RNNs enjoy this property.
\item \emph{Depth}. Most successful NNs, including CNNs and transformers, are
deep, which allows them to learn complicated transformations.
\end{itemize}
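The first two properties can be made concrete with a minimal sketch (illustrative only, and not part of DCTN itself): a one-dimensional convolution applies a single shared kernel to every sliding window, so its parameters are shared across positions and each output depends only on a local neighbourhood of the input.

```python
import numpy as np

def conv1d_valid(x, w):
    """Apply the same kernel w to every sliding window of x.
    Parameter sharing: one kernel for all positions.
    Locality: each output depends only on len(w) adjacent inputs."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([1.0, 0.0, -1.0])  # the single shared parameter vector
y = conv1d_valid(x, w)          # -> array([-2., -2., -2.])
```

Stacking such layers adds the third property, depth, which is the route DCTN takes with EPSes in place of linear convolutions.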
\subsection{The same properties in tensor networks}
Tensor networks (\emph{TNs}) are linear algebraic representations of quantum many-body states based on
their entanglement structure. They have also found applications in signal processing. Their
applications to machine learning are being explored as well, e.g. tensor regression -- a class of
machine learning models based on contracting (connecting the edges of) an input tensor with a
parametrized TN. Since NNs with the three properties mentioned in \Cref{sec:properties} are so
successful, it makes sense to try to devise a tensor regression model with the same properties.
That is what we do in this paper. As far as we know, some existing tensor networks have one or two
of the three properties, but none have all three.
\begin{itemize}
\item MERA (see Ch.~7 of \citet{bridgeman2017handwaving_and_interpretive_dance}) is a
tree-like tensor network used in quantum many-body physics. It's deep and has locality.
\item A deep Boltzmann machine (DBM) can be viewed as a tensor network (see Sec.~4.2 of
\citet{cichocki_part_2} or \citet{glasser2018probabilistic} for a discussion of how a
restricted Boltzmann machine is actually a tensor network; it's not difficult to see that a DBM
is a tensor network as well). For supervised learning, it can be viewed as tensor
regression with depth, but without locality or weight sharing.
\item \citet{glasser2018probabilistic} introduced Entangled plaquette states (\emph{EPS}) with
weight sharing for tensor regression. They combined one EPS with a linear classifier or a
matrix tensor train. Such a model has locality and parameter sharing but isn't deep.
\item \citet{cohen2016expressive} introduced a tensor regression model called Deep
convolutional arithmetic circuit. However, they used it only theoretically to analyze the
expressivity of deep CNNs and compare it with the expressivity of tensor regression with a
tensor in CP format (canonical polyadic / CANDECOMP-PARAFAC). Their main result is a theorem
about the typical canonical rank of the tensor network used in Deep convolutional arithmetic
circuit. That tensor network is very similar to the model we propose, up to a few small
modifications. We conjecture that the proof of their result about the typical canonical rank
being exponentially large can be modified to apply to our tensor network as well.
\item \citet{miller_2020_umps_psm} performed language modeling by contracting an input sequence
with a matrix tensor train with all cores equal to each other. Their model has locality and
parameter sharing.
\item \citet{liu2019ml_by_unitary_tn_of_hierarchical_tree_structure} used a tree-like
tensor regression model with all cores being unitary. Their model has locality and depth,
but no weight sharing.
\item \citet{stoudenmire1605supervised} and \citet{novikov2016exponential} performed
tensor regression on MNIST images and tabular datasets, respectively. They encoded input data
as rank-one tensors, as we do in \Cref{sec:input_preprocessing}, and contracted them with a
matrix tensor train to get predictions. Such a model has locality if the matrix tensor
train cores are ordered in the right way.
\end{itemize}
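As a sketch of the rank-one encoding used by these matrix-tensor-train models: each pixel is mapped to a small vector, and the image is represented, implicitly, as the outer product of these vectors. The sin/cos map below follows \citet{stoudenmire1605supervised}; the scaling factor \(\nu\) stands in for this paper's own definition of \(\phi\) in \Cref{sec:input_preprocessing}, which may differ in detail.

```python
import numpy as np

def phi(pixel, nu=1.0):
    """Map a pixel value in [0, 1] to a 2-dimensional local vector.
    Hypothetical stand-in for the paper's feature map; this is the
    common sin/cos choice of Stoudenmire & Schwab."""
    return np.array([np.cos(nu * np.pi * pixel / 2),
                     np.sin(nu * np.pi * pixel / 2)])

def encode(image, nu=1.0):
    """Encode an H x W image as an H x W x 2 array of local vectors.
    Their outer product over all pixels is a rank-one tensor with
    2**(H*W) entries -- contracted with the TN, never formed explicitly."""
    return np.array([[phi(p, nu) for p in row] for row in image])

img = np.array([[0.0, 1.0], [0.5, 0.25]])
enc = encode(img)  # shape (2, 2, 2); e.g. enc[0, 0] is approximately [1, 0]
```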
\subsection{Contributions}
The main contributions of our article are:
\begin{itemize}
\item We devise a novel tensor regression model called Deep convolutional tensor network
(\emph{DCTN}). It has all three properties listed in \Cref{sec:properties}. It is based on
the (functional) composition of TNs called Entangled plaquette state (\emph{EPS}).
DCTN is similar to a deep CNN. We apply it to image classification, because that's the
most straightforward application of deep CNNs. (\Cref{sec:description_of_the_whole_model})
\item We show how EPS can be implemented as a
backpropagatable function/layer which can be used in neural networks or other
backpropagation based models (\Cref{sec:entangled_plaquette_states}).
\item Using common techniques for training deep neural networks, we train and evaluate DCTN on
MNIST, FashionMNIST, and CIFAR10 datasets. A shallow model based on one EPS works well on
MNIST and FashionMNIST and has a small parameter count. Unfortunately, increasing the depth of
DCTN by adding more EPSes hurts its accuracy by increasing overfitting. Also, our model performs
very poorly on CIFAR10 regardless of depth. We discuss hypotheses for why this is the
case (\Cref{sec:experiments}).
\item We show how various hyperparameters affect the model's optimization and generalization
(\Cref{sec:how_hyperparameters_affect_optimization_and_generalization}).
\end{itemize}
\section{Introduction: Hospital Readmissions and the Pitfalls of Risk-Based Targeting}
Unplanned hospital readmissions represent an undesirable outcome following a hospitalization, but are common, costly, and associated with substantial morbidity and mortality, occurring within 30 days following nearly 20\% of hospitalizations by Medicare beneficiaries \cite{Jencks2009RehospitalizationsProgram}. In 2011, 3.3 million patients in the United States were readmitted to the hospital within 30 days, incurring costs of \$41 billion \cite{Hines2011Conditions2011}. In 2012, responding to the growing awareness of the toll of readmissions, the Centers for Medicare and Medicaid Services introduced the Hospital Readmissions Reduction Program (HRRP), which penalizes hospitals with risk-adjusted 30-day readmission rates higher than the average. As a consequence of the HRRP and other value-based care initiatives, many hospitals and health care systems in the United States have since implemented quality improvement (QI) initiatives and population health management programs that rely on risk assessment tools to identify hospitalized patients at high risk of readmission. Tailored interventions can then be targeted to these patients immediately following discharge, with the goal of preventing their readmission. The effectiveness of these interventions in preventing readmissions has been mixed, and the precise mechanisms through which they do so remain unclear. \cite{Leppin2014PreventingTrials,Hansen2011InterventionsReview,Wadhera2018AssociationPneumonia,Kansagara2016SoLiterature,Finkelstein2020HealthTrial,Bates2014BigPatients,Berkowitz2018AssociationJ-CHiP}
Many risk assessment tools used in these efforts apply statistical modeling or supervised machine learning to estimate readmission risk among hospitalized patients based on data prior to discharge. \cite{Kansagara2011RiskReview.,Bayati2014Data-drivenStudy.,Escobar2015NonelectiveMortality,Billings2006CasePatients,Berkowitz2018AssociationJ-CHiP,Bates2014BigPatients} Stakeholders select a risk threshold with respect to resource constraints, which underpins objective criteria (often referred to in the machine learning literature as a \textit{treatment policy}) that determine which patients are selected to receive the intervention. Common threshold-based criteria specify that an intervention is to be delivered to all patients above a prespecified risk threshold, while those below it receive usual care. Underlying many population health management and QI efforts aimed at reducing readmissions is the implicit assumption that the patients most at risk are also those most likely to benefit from the intervention. \cite{Fihn2014InsightsAdministration,Bates2014BigPatients,HealthITAnalytics2016UsingHealthitanalytics.com/features/using-risk-scores-stratification-for-population-health-management,HealthITAnalytics2018TopHttps://healthitanalytics.com/news/top-4-big-data-analytics-strategies-to-reduce-hospital-readmissions} Ostensibly, this assumption has intuitive appeal, given that higher-risk patients appear to have "more room to move the needle", but it is not guaranteed to hold in practice \cite{Ascarza2018RetentionIneffective, Athey2017BeyondProblems}, especially in the context of readmissions \cite{Finkelstein2020HealthTrial,Lindquist2011UnderstandingFactors} and other settings where treatment effect heterogeneity may be present \cite{Athey2017BeyondProblems}.
The need for analytical approaches that estimate patient-level benefit---referred to in some contexts as \textit{impactibility} \cite{Lewis2010ImpactibilityPrograms,Freund2011IdentificationPrograms,Steventon2017PreventingRisk,Flaks-Manov2020PreventingPrediction}---is beginning to be recognized, particularly for readmission reduction programs \cite{Steventon2017PreventingRisk}. However, the distinction between benefit and risk does not yet appear to be widely appreciated by both those developing and applying risk assessment tools. Individual benefit is often expressed in terms of treatment effects, which cannot be estimated by modelling outcome risk. Predicting, for example, a readmission risk of 60\% for a patient provides no information on their counterfactual risk if they were to receive a readmissions reduction intervention. The actual counterfactual risk for this hypothetical patient could be unchanged, on average, corresponding to no effect for the intervention. On the other hand, the effect of this intervention may be heterogeneous across levels of predicted risk, so that, for example, this patient experiences an absolute risk reduction (ARR) of 10\% as a result of the intervention, while another patient at a predicted risk of 30\% experiences an ARR of 20\%. Given limited resources, a decision-maker may wish to give the intervention to the latter patient. Indeed, when it comes to preventing readmissions, there is growing evidence that higher-risk patients---referred to in some contexts as "super-utilizers" \cite{Finkelstein2020HealthTrial}---may be less sensitive to a class of care coordination interventions relative to those at lower risk \cite{Steventon2017PreventingRisk,Lindquist2011UnderstandingFactors,Rich1993PreventionStudy.}.
Moreover, efforts targeting preventative interventions based on predicted risk also fail to take into account that low-risk patients comprise the majority of readmissions \cite{Roland2012ReducingTrack}. That the majority of poor outcomes are experienced by patients at low risk, but who would not have been selected to receive an intervention, is an observation which has also surfaced in a range of predictive modeling problems in population health management \cite{Bates2014BigPatients}. Thus, targeting preventative interventions so as to include lower-risk patients among whom they may be effective, rather than targeting them only to high-risk patients, may potentially prevent more readmissions than the latter strategy \cite{Rose1985SickPopulations,Chiolero2015TheStrategy, McWilliams2017FocusingCosts}. However, even given a hypothetical ideal intervention---one guaranteed to be effective for \textit{all} patients in a population---staffing and other resource constraints may preclude scaling up such an intervention to an entire population. Hence, in order to maximize the net benefit of an intervention, "decoupling" the prediction problem into causal and predictive components---modeling not just heterogeneity in treatment effects, but also the (possibly heterogeneous) "payoffs" associated with successful prevention---may be necessary \cite{Kleinberg2015PredictionProblems}.
Few, if any, analytical approaches to identify "care-sensitive" patients, or those whose outcomes may be most "impactible", currently exist, despite the clear need for such approaches. \cite{Lewis2010ImpactibilityPrograms,Flaks-Manov2020PreventingPrediction} Existing approaches based on off-the-shelf supervised machine learning methods, despite their flexibility and potential predictive power, cannot meet this need. \cite{Athey2017BeyondProblems} In this paper, we propose and demonstrate the feasibility of a causal machine learning framework to identify preventable hospital readmissions with respect to a readmission prevention intervention. In our context, the "preventability" of a readmission is not based on predefined, qualitative criteria, as in prior work (e.g., \cite{Goldfield2008IdentifyingReadmissions,Auerbach2016PreventabilityPatients}). Rather, it is expressed in quantitative terms: the greater the treatment effect on readmission estimated for a patient, the more preventable their potential readmission may be.
To do so, we leverage a rich set of data drawn from before and after the roll-out of a comprehensive readmissions prevention intervention in an integrated health system, seeking to: (1) estimate the heterogeneity in the treatment effect of this intervention; (2) characterize the potential extent of mismatch between treatment effects and predicted risk; and (3) quantify the potential gains afforded by targeting based on treatment effects or benefit instead of risk. Finally, based on our findings, we also outline some possible directions for how population health management programs could be redesigned so as to maximize the aggregate benefit, or overall impact, of these preventative interventions.
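The potential mismatch between the two targeting strategies can be illustrated in purely synthetic terms (all numbers below are invented for illustration and are not drawn from the KPNC data): when the treatment effect is not monotone in risk, targeting a fixed intervention budget at the highest-risk patients prevents fewer readmissions than targeting it at the highest-benefit patients.

```python
import numpy as np

rng = np.random.default_rng(0)
n, budget = 10_000, 1_000          # patients; available intervention slots

# Hypothetical population: baseline 30-day readmission risk, and an
# absolute risk reduction (ARR) that *decreases* with risk, as some of
# the evidence cited above suggests for care coordination interventions.
risk = rng.uniform(0.05, 0.60, n)
arr = 0.25 * (1.0 - risk)          # illustrative effect model only

def expected_prevented(selected):
    """Expected number of readmissions prevented by treating `selected`."""
    return arr[selected].sum()

by_risk = np.argsort(-risk)[:budget]     # policy 1: treat highest risk
by_benefit = np.argsort(-arr)[:budget]   # policy 2: treat highest benefit

gain = expected_prevented(by_benefit) - expected_prevented(by_risk)
```

The size of the gain depends entirely on the assumed effect model; the point is only that risk-based and benefit-based rankings can diverge, which is what the analyses below quantify with real data.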
\section{Methods}
\subsection{Data and Context}
The data consist of 1,584,902 hospitalizations taking place at the 21 hospitals in Kaiser Permanente's Northern California region (hereafter KPNC) between June 2010 and December 2018. They include patient demographics, diagnosis codes, laboratory-based severity of illness scores at admission and at discharge, and a comorbidity burden score that is updated monthly. The data also record whether a patient experienced a non-elective readmission and/or death within 30 days. These data are described in greater detail in \cite{Escobar2019MultiyearSystem}.
These data encompass a period where a comprehensive readmissions prevention intervention, known as the \textit{Transitions Program}, began and completed implementation at all 21 KPNC hospitals from January 2016 to May 2017. The Transitions Program had two goals: (1) to standardize post-discharge care by consolidating a range of preexisting care coordination programs for patients with complex care needs; and (2) to improve the efficiency of this standardized intervention by targeting it to the patients at highest risk of the composite outcome of post-discharge re-admission and/or death.
As currently implemented, the Transitions Program relies on a validated predictive model for the risk of this composite outcome \cite{Escobar2015NonelectiveMortality}, which was developed using historical data from between June 2010 and December 2013, before the implementation of the Transitions Program. Following development and validation of this model by teams at KPNC's Division of Research, it was subsequently integrated into KP HealthConnect, KPNC's electronic health record (EHR) system, to produce continuous risk scores, ranging from 0 to 100\%, at 6:00 AM on the planned discharge day. These risk scores are used to automatically assign inpatients awaiting discharge to be followed by the Transitions Program over the 30-day period post-discharge. Inpatients with a predicted risk of $\geq\!\!25$\% are assigned to be followed by the Transitions Program, and are considered to have received the Transitions Program intervention. On the other hand, inpatients with a predicted risk below 25\% receive usual post-discharge care at the discretion of the discharging physician.
The Transitions Program intervention consists of a bundle of discrete interventions aimed at improving the transition from inpatient care to home or to a skilled nursing facility. Together, they comprise an interlocking care pathway over the 30 days following discharge, beginning on the morning of the planned discharge day, which we summarize step-by-step here and in Table \ref{tab:transitions}.
\begin{table}[]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}cccccc@{}}
\toprule
Risk Level & Initial Assessment & Week 1 & Week 2 & Week 3 & Week 4 \\ \midrule
\begin{tabular}[c]{@{}c@{}}High\\ ($\geq 45$\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Phone follow-up \\ within 24 to 48 hours \\ \\ \textit{and}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Phone follow-up\\ every other day\end{tabular} & \begin{tabular}[c]{@{}c@{}}2 phone follow-ups; \\ more as needed\end{tabular} & \begin{tabular}[c]{@{}c@{}}Phone follow-up \\ once weekly; \\ more as needed\end{tabular} & \begin{tabular}[c]{@{}c@{}}Phone follow-up \\ once weekly; \\ more as needed\end{tabular} \\ \cmidrule(r){1-1} \cmidrule(l){3-6}
\begin{tabular}[c]{@{}c@{}}Medium\\ ($25-45$\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Primary care physician follow-up visit\\ within 2 to 5 days\end{tabular} & \multicolumn{4}{c}{\begin{tabular}[c]{@{}c@{}}Once weekly phone follow-up \\ (with more as needed)\end{tabular}} \\ \midrule
\begin{tabular}[c]{@{}c@{}}Low \\ ($\leq 25$\%)\end{tabular} & \multicolumn{5}{c}{Usual care at discretion of discharging physician} \\ \bottomrule
\end{tabular}%
}
\caption{\footnotesize The Transitions Program intervention pathway. The initial assessment applies to both the medium and high risk groups. Following it, the pathway diverges in terms of the frequency of phone contact.}
\label{tab:transitions}
\end{table}
For inpatients awaiting discharge and who have been assigned to be followed by the Transitions Program, on their planned discharge day, a case manager meets with them at the bedside to provide information on the Transitions Program. Next, once the patient has arrived home, a Transitions case manager calls them within 24 to 48 hours to walk them through their discharge instructions, and to identify any gaps in their understanding of them. If necessary, the case manager can also refer the patient to a pharmacist or social worker for focused follow-up. At the same time, the nurse also works to make an appointment with the patient's primary care physician to take place within 3 to 5 days post-discharge.
Following this initial outreach, the Transitions case manager continues to contact the patient weekly by phone, and remains available throughout if the patient requires further assistance. At 30 days post-discharge, the patient is considered to have "graduated" and is no longer followed by the Transitions Program. All steps of this process are initiated through and documented in the EHR, enabling consistent followup for the patients enrolled. A special category of patients are considered at very high risk if their predicted risk is $\geq \!\! 45$\% or if they lack social support, and receive a more intensified version of the intervention. This version entails follow-up every other day via telephone for the first week post-discharge, followed by $\geq \!\! 2$ times a week the second week, and once a week afterward until "graduation" at 30 days.
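The risk-tiered assignment described above can be summarized as a simple threshold policy (a sketch only; boundary handling follows the $\geq\!\!25$\% and $\geq\!\!45$\% rules in the text, and the exception for patients lacking social support is omitted):

```python
def transitions_tier(risk):
    """Map a predicted 30-day readmission/death risk (0-1 scale) to the
    Transitions Program tier. Thresholds follow the text; the
    very-high-risk exception for lacking social support is omitted."""
    if risk >= 0.45:
        return "high"    # intensified follow-up pathway
    elif risk >= 0.25:
        return "medium"  # standard Transitions pathway
    else:
        return "low"     # usual care at discharging physician's discretion
```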
Initial analyses of the impacts of the Transitions Program \cite{Marafino2020ASystem}, using a hybrid difference-in-differences analysis/regression discontinuity approach \cite{Walkey2020NovelInitiative}, indicated that it was effective, being associated with approximately 1,200 and 300 fewer annual readmissions and deaths, respectively, within 30 days following their index discharge. Notably, these analyses also suggested some extent of risk-based treatment effect heterogeneity, in that the intervention appeared to be somewhat less effective for patients at higher risk compared to those at relatively lower risk.
As a consequence of the conclusions of these analyses, two questions presented themselves. The first was whether the Transitions Program intervention could be re-targeted more efficiently, leading to greater aggregate benefit, possibly by including patients at lower risk who would not otherwise have received the intervention. The second arose from the risk-based treatment effect heterogeneity suggested by these analyses: might the intervention in fact be less effective for patients at higher risk? If so, what is the extent of the mismatch between risk and treatment effects? Answering these questions requires modeling individual heterogeneous treatment effects, and not risk, as is commonly done.
In this paper, we use the same data on 1,584,902 patients as that used for the analysis of the effectiveness of the Transitions Program. As in that analysis, we use a subset of 1,539,285 "index stays" which meet a set of eligibility criteria. These criteria include: the patient was discharged alive from the hospital; age $\geq \!\! 18$ years at admission; and their admission was not for childbirth (although post-delivery complications were included) nor for same-day surgery. Moreover, we consider a readmission non-elective if it began in the emergency department; if the principal diagnosis was an ambulatory care-sensitive condition \cite{AgencyforHealthcareResearchandQuality2001AHRQConditions.}; or if the episode of care began in an outpatient clinic, and the patient had elevated severity of illness, based on a mortality risk of $\geq\!\!7.2$\% as predicted by their laboratory-based acuity score (\texttt{LAPS2}) alone. The covariates used in this study (as input to the causal forest below) are summarized in Table \ref{tab:table1}.
This project was approved by the KPNC Institutional Review Board for the Protection of Human Subjects, which has jurisdiction over all the study hospitals and waived the requirement for individual informed consent.
\begin{table}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{ll}
\toprule
\texttt{AGE} & Patient age in years, recorded at admission \\
\texttt{MALE} & Male gender indicator \\
\texttt{DCO\_4} & Code status at discharge (4 categories) \\
\texttt{HOSP\_PRIOR7\_CT} & Count of hospitalizations in the last 7 days prior to the current admission \\
\texttt{HOSP\_PRIOR8\_30\_CT} & Count of hospitalizations in the last 8 to 30 days prior to the current admission \\
\texttt{LOS\_30} & Length of stay, in days (with stays above 30 days truncated at 30 days) \\
\texttt{MEDICARE} & Indicator for Medicare Advantage status \\
\texttt{DISCHDISP} & Discharge disposition (home, skilled nursing, home health; one-hot encoded) \\
\texttt{LAPS2} & Laboratory-based acuity of illness score, recorded at admission \\
\texttt{LAPS2DC} & Laboratory-based acuity of illness score, recorded at discharge\\
\texttt{COPS2} & Comorbidity and chronic condition score, updated monthly\\
\texttt{HCUPSGDC} & Diagnosis super-group classification (30 groups; one-hot encoded) \\
\midrule
\texttt{W} (or $W_i$) & Treatment: Transitions Program intervention \\
\texttt{Y} (or $Y_i$) & Outcome: Non-elective readmission within 30 days post-discharge \\
\bottomrule
\end{tabular}}
\vspace{0.2em}
\caption{\footnotesize A list of the covariates used in this study.}
\label{tab:table1}
\end{table}
\subsection{From (observational) data to predicted treatment effects: Causal forests}
To identify potentially preventable readmissions, we undertake a causal machine learning approach using data taken from before and after the implementation of the Transitions Program at KPNC. Our causal machine learning approach is distinct from supervised machine learning as it is commonly applied in that it seeks to estimate individual \textit{treatment effects}, not outcome risk. This objective cannot be readily accomplished with off-the-shelf supervised machine learning methods. \cite{Athey2017BeyondProblems} Compared to other methods for studying treatment effect heterogeneity (e.g., subgroup analyses), causal machine learning methods afford two advantages: first, they avoid strong modeling assumptions, allowing a data-driven approach; and second, they guard against overfitting by applying a form of regularization.
We express these individual treatment effects of the Transitions Program intervention in terms of the estimated conditional average treatment effects (CATE), $\hat{\tau}_i$, which can be interpreted as absolute risk reductions (ARRs). It is through the sign and magnitude of these estimated CATEs that we consider a readmission potentially preventable: a $\hat{\tau}_i < 0$ denotes that the intervention would be expected to lower 30-day readmission risk for that patient, while a $\hat{\tau}_i > 0$ suggests that the intervention would be more likely to result in readmission within 30 days. A larger (more negative) CATE suggests a greater extent of preventability: i.e., $\hat{\tau}_j < \hat{\tau}_i < 0$ implies that patient $j$'s readmission is more "preventable"---their risk is more modifiable by the intervention---compared to patient $i$'s.
To estimate these CATEs, we apply causal forests to the KPNC data described in the previous subsection. Causal forests \cite{AtheyGRF, Wager2018EstimationForests} represent a special case of generalized random forests \cite{Athey2019GeneralizedForests}. They have been used in a variety of applications, including to estimate conditional average partial effects; our overall approach resembles that undertaken in \cite{Athey}, which used them to study treatment effect heterogeneity in an observational setting. Causal forests can be viewed as a form of adaptive, data-driven subgroup analysis, and can be applied in either observational or randomized settings, relying on much of the machinery of random forests \cite{Breiman2001RandomForests}.
The machinery retained from random forests includes recursive partitioning, subsampling, and random selection of splits. However, causal forests differ from random forests in several significant ways. For one, the splitting criterion seeks to place splits so as to maximize treatment effect heterogeneity, instead of minimizing prediction error as with random forests. Moreover, when placing splits, the causal forest algorithm (depending on implementation details) has access to the data $X_i$ and treatment assignments $W_i$, but not to the outcomes $Y_i$, and a separate sample is used to estimate effects once these splits have been placed---a property often referred to as 'honesty' \cite{Wager2018EstimationForests}.
We now describe the assumptions required to identify the CATEs, given that the data are observational, not randomized. The causal forest already fits a treatment assignment model, which deconfounds the subgroups in our study (described below) to some extent, but we also make further assumptions regarding the data, described below, to facilitate deconfounding. We begin with some notation: for each unit $i = 1, \ldots, n$, we observe the triple $(X_i, Y_i, W_i)$, where $X_i \in \mathbb{R}^p$ is a covariate vector, $Y_i \in \{0,1\}$ denotes the observed outcome, and $W_i \in \{0,1\}$ denotes treatment assignment. Following Rubin's potential outcomes framework \cite{Rubin1974EstimatingStudies}, we assume the existence of potential outcomes, $Y_i(1)$ and $Y_i(0)$, for each unit, and define the conditional average treatment effect (CATE) for a unit $i$ with covariate vector $X_i = x$ as
\begin{equation}
\tau_i(x) = \mathbb{E}[Y_i(1) - Y_i(0) \mid X_i = x].
\end{equation}
Within a leaf $L$, a causal forest estimates this quantity as
\begin{equation}
\hat{\tau}(x) = \frac{1}{\mid\!\! \{j : W_j = 1 \wedge X_j \in L \}\!\! \mid} \sum_{\{j : W_j = 1 \wedge X_j \in L\}} Y_j \quad - \quad \frac{1}{\mid\!\! \{ j : W_j = 0 \wedge X_j \in L \}\!\! \mid} \sum_{\{j : W_j = 0 \wedge X_j \in L\}} Y_j
\end{equation}
for an $x \in L$. Heuristically, the process of fitting a causal forest aims to make these leaves $L$ as small as possible so that the data in each resemble a randomized experiment, while simultaneously maximizing effect heterogeneity. \cite{Wager2018EstimationForests}
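As a minimal sketch of this within-leaf estimator (toy data; the full causal forest additionally applies honesty and aggregates over many subsampled trees), the leaf estimate is simply a difference in mean outcomes between treated and control units falling in the leaf:

```python
import numpy as np

def leaf_cate_estimate(Y, W, in_leaf):
    """Difference-in-means CATE estimate within a single leaf L.

    Y: observed binary outcomes; W: treatment indicators;
    in_leaf: boolean mask for {j : X_j in L}.
    """
    treated = in_leaf & (W == 1)
    control = in_leaf & (W == 0)
    return Y[treated].mean() - Y[control].mean()

# Toy leaf: treated units readmitted at a 20% rate, controls at 60%
Y = np.array([0, 1, 0, 0, 0, 1, 0, 1, 1, 0])
W = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
in_leaf = np.ones(10, dtype=bool)
print(round(leaf_cate_estimate(Y, W, in_leaf), 2))  # -> -0.4
```

A negative estimate, as here, corresponds to an absolute risk reduction for units falling in that leaf.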
However, as we observe only one of the two potential outcomes for each unit, $Y_i = Y_i(W_i)$, we cannot estimate $Y_i(1) - Y_i(0)$ directly from these data. Under some assumptions, however, we can repurpose the units in the data that did experience the counterfactual outcome to estimate $\tau_i$, by having those units serve as 'virtual twins' for a unit $i$. These assumptions entail (1) that such twins exist; and (2) that, in some sense, these twins look similar in terms of their covariates. These are the \textit{overlap} and \textit{unconfoundedness} assumptions, respectively. Heuristically, the overlap assumption presumes that these twins could exist, and unconfoundedness posits that these twins are in fact similar in terms of their observed covariates. Together, these assumptions allow us to have some confidence that the $\tau_i(x)$ in fact do identify causal effects.
We address identification with respect to the predicted risk threshold of 25\%, as well as the time period. By time period, recall that the Transitions Program intervention was rolled out to each of the 21 KPNC hospitals in a way that formed two completely disjoint subsets of patients, corresponding to the pre- and post-implementation periods. Also recall that patients were assigned to the intervention if they were discharged during the post-implementation period and their risk score was $\geq\!\!25$\%, while those below that value received usual care. A useful feature of these data is that all patients in the pre-implementation period were assigned 'shadow' risk scores using the same instantiation of the predictive model, even though none of these patients received treatment. Hence, the data are split into four disjoint subgroups, indexed by risk category and time period:
\[
(\text{pre}, \geq\!\!25), \quad (\text{post}, \geq\!\!25), \quad (\text{pre}, <\!\!25), \quad (\text{post}, <\!\!25),
\]
e.g., the tuple $(\text{pre}, \geq\!\!25)$ denotes the subgroup consisting of patients discharged during the pre-implementation period with a risk score of $\geq\!\!25$\%, and $(.\,, \geq\!\!25)$ denotes all patients with risk $\geq\!\!25$\% in the data. Each hospital discharge belongs to one and only one of these subgroups.
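The assignment of a discharge to one of these four disjoint subgroups can be sketched as follows (hypothetical field names; the 25\% threshold and the pre/post periods are as described above):

```python
def subgroup(period, risk_score):
    """Assign a discharge to one of the four disjoint subgroups.

    period: 'pre' or 'post'; risk_score: predicted risk in [0, 1].
    Patients are treated only if discharged post-implementation
    with a risk score >= 25%.
    """
    risk_cat = ">=25" if risk_score >= 0.25 else "<25"
    treated = (period == "post") and (risk_score >= 0.25)
    return (period, risk_cat), treated

print(subgroup("pre", 0.30))   # -> (('pre', '>=25'), False)
print(subgroup("post", 0.30))  # -> (('post', '>=25'), True)
print(subgroup("post", 0.10))  # -> (('post', '<25'), False)
```

Note that the 'shadow' risk scores make the pre-implementation discharges classifiable in exactly the same way as post-implementation ones, even though none of them could have been treated.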
Heuristically, these 'shadow' risk scores allow us to mix data from across periods so that the $(\text{pre}, \geq\!\!25)$ subgroup can be used as a source of counterfactuals for patients in the $(\text{post}, \geq\!\!25)$ subgroup, assuming no unmeasured confounding as a consequence of the shift from "pre" to "post". Stratifying on the risk score allows us to deconfound the potential outcomes of the patients in these two subgroups. Moreover, these two subgroups together can be used to provide plausible counterfactuals for the patients in the $(.\,, <\!\!25)$ risk subgroup, despite none of those patients having been assigned to the intervention. We describe the identification strategy---which relies on standard ignorability assumptions---in more detail below, beginning with the $(.\,, \geq\!\!25)$ subgroup.
First, for each of the four subgroups, we assume overlap: given some $\epsilon > 0$ and all possible $x \in \mathbb{R}^p$,
\begin{equation}
\epsilon < P(W_i = 1 \mid X_i = x) < 1 - \epsilon.
\end{equation}
This assumption means that no patient is guaranteed to receive the intervention, nor are they guaranteed to receive the control, based on their covariates $x$.
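A simple empirical check in this spirit (a sketch only; in practice one would examine the assignment probabilities $\hat e^{-i}(X_i)$ estimated by the causal forest) flags units whose estimated propensity is pinned near 0 or 1:

```python
import numpy as np

def overlap_violations(e_hat, eps=0.01):
    """Fraction of units whose estimated propensity P(W=1 | X=x)
    falls outside the interval (eps, 1 - eps)."""
    e_hat = np.asarray(e_hat)
    return float(((e_hat <= eps) | (e_hat >= 1 - eps)).mean())

# Toy propensity estimates: one of five units is pinned near 1
e_hat = np.array([0.12, 0.45, 0.73, 0.995, 0.30])
print(overlap_violations(e_hat))  # -> 0.2
```

A nonzero fraction indicates regions of covariate space where treated or control 'twins' are effectively unavailable.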
For the patients in the $(.\,, \geq\!\!25)$ group, our identification strategy makes use of the balancing properties of risk scores (or prognostic scores) \cite{Hansen2008TheScore} to establish unconfoundedness. Assuming no hidden bias, conditioning on a prognostic score $\Psi(x) = P(Y \mid W = 0, x)$ is sufficient to deconfound the potential outcomes. The risk score used to assign the Transitions intervention is a prognostic score; hence, it is sufficient to deconfound these potential outcomes.
For patients in the $(.\,, <\!\!25)$ subgroup, the picture is slightly more complicated. Among these patients, we cannot assume exchangeability conditional on the risk score $\hat{Y}_i$,
\begin{equation}
\{ Y_i(1), Y_i(0) \} \mathrel{\text{\scalebox{1.07}{$\perp\mkern-10mu\perp$}}} W_i \mid \hat{Y}_i,
\end{equation}
because, again, treatment assignment is contingent on a predicted risk $\hat{Y}_i \geq 0.25$, i.e., $W_i = \mathbf{1}\{\hat{Y}_i \geq 0.25\}$. However, recall that the causal forests perform estimation in $X_i$-space, and not in $\hat{Y}_i$-space, and note that we can instead impose the slightly weaker assumption of ignorability conditional on some subset $X'_i \subseteq X_i$ of the inputs to the score $\hat{Y}_i$; namely, that
\begin{equation}
\{ Y_i(1), Y_i(0) \} \mathrel{\text{\scalebox{1.07}{$\perp\mkern-10mu\perp$}}} W_i \mid X'_i,
\end{equation}
which we can justify \textit{a priori} with the knowledge that no one component predictor predominates in the risk model (see the appendix to \cite{Escobar2015NonelectiveMortality}); that is, no one covariate strongly determines treatment assignment. We provide empirical evidence to establish the plausibility of this assumption, at least in low dimensions, in Figure \ref{fig:overlap}. Together with assuming the existence of potential outcomes, this \textit{unconfoundedness} assumption is sufficient to obtain consistent estimates of $\tau(x)$ \cite{Wager2018EstimationForests}. Moreover, since causal forests perform a form of local estimation, our assumptions are independent for the $(.\,, \geq\!\!25)$ and $(.\,, <\!\!25)$ subgroups in the sense that if the unconfoundedness assumption fails for either subgroup, but not the other, the estimates for the subgroup in which it does hold should not be affected.
Finally, as a formal assessment of treatment effect heterogeneity, we also perform the omnibus test for heterogeneity \cite{Chernozhukov2017}, which seeks to estimate the best linear predictor of the CATE by using the "out-of-bag" predictions from the causal forest, $\hat{\tau}^{-i}$, to fit the following linear model:
\begin{equation}
Y_i - \hat{m}^{-i}(X_i) = \alpha\bar{\tau}(W_i - \hat{e}^{-i}(X_i)) + \beta(\hat{\tau}^{-i}(X_i) - \bar{\tau})(W_i - \hat{e}^{-i}(X_i)) + \epsilon,
\end{equation}
where
\begin{equation}
\bar{\tau} = \frac{1}{n} \sum^n_{i=1} \hat{\tau}^{-i}(X_i),
\end{equation}
and $\hat m(.)$ and $\hat e(.)$ denote the marginal outcome and assignment models estimated by the causal forest, respectively. (The superscript $-i$ denotes that the quantity was computed "out-of-bag", i.e., that the forest was not trained on example $i$). Fitting this linear model yields two coefficient estimates, $\alpha$ and $\beta$; an interpretation of these coefficients is that $\alpha$ captures the average treatment effect, and if $\alpha \approx 1$, then the predictions the forest makes are correct, on average. Likewise, $\beta$ measures how the estimated CATEs covary with the true CATEs; if $\beta \approx 1$, then these CATE estimates are well-calibrated. Moreover, we can use the $p$-value for $\beta$ as an omnibus test for heterogeneity; if the coefficient is statistically significantly greater than zero, then we can reject the null hypothesis of no treatment effect heterogeneity. \cite{AtheyEstimatingApplication}
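The calibration regression above can be sketched with ordinary least squares on synthetic data standing in for the out-of-bag quantities $\hat m^{-i}$, $\hat e^{-i}$, and $\hat\tau^{-i}$ (in our analysis this fit is performed by \texttt{grf}'s \texttt{test\_calibration}; the data-generating process below is purely illustrative):

```python
import numpy as np

# Synthetic stand-ins for the out-of-bag forest outputs
rng = np.random.default_rng(0)
n = 5000
X1 = rng.normal(size=n)
e_hat = np.full(n, 0.5)                       # assignment model e(x) = P(W=1 | X=x)
W = rng.binomial(1, e_hat)
tau_true = -0.05 + 0.03 * X1                  # heterogeneous true CATE
tau_hat = tau_true + rng.normal(0, 0.003, n)  # noisy out-of-bag CATE estimates
m_hat = 0.2 + e_hat * tau_true                # marginal outcome model m(x) = E[Y | X=x]
Y = 0.2 + W * tau_true + rng.normal(0, 0.02, n)  # continuous outcome, for simplicity

# Best linear predictor: regress Y - m_hat on the two constructed regressors
tau_bar = tau_hat.mean()
A = np.column_stack([
    tau_bar * (W - e_hat),                    # coefficient: alpha
    (tau_hat - tau_bar) * (W - e_hat),        # coefficient: beta
])
alpha, beta = np.linalg.lstsq(A, Y - m_hat, rcond=None)[0]
print(alpha, beta)  # both should be close to 1 for well-calibrated estimates
```

Here, by construction, the CATE estimates are nearly unbiased and capture the true heterogeneity, so both fitted coefficients land near 1.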
All analyses were performed in R (version 3.6.2); causal forests and the omnibus test for heterogeneity were implemented using the \texttt{grf} package (version 0.10.4). Causal forests were fit using default settings with 8,000 trees and per-hospital clusters (for a total of 21 clusters).
\subsection{Translating predictions into treatment policies: Decoupling effects and payoffs}
A relevant question, once predictions have been made---be they of risk or of treatment effects---is how to translate them into treatment decisions. With predicted risk, these decisions are made with respect to some risk threshold, or to a decision-theoretic threshold that takes utilities into account (e.g. as in the approach in \cite{Bayati2014Data-drivenStudy.}). However, both approaches are potentially suboptimal in the presence of treatment effect heterogeneity, requiring strong assumptions to be made regarding the nature of the treatment effect. (Indeed, \cite{Bayati2014Data-drivenStudy.} assumes a constant treatment effect for all patients.)
Here, however, we deal with treatment effects instead of risk; an obvious approach starts by treating all patients $i$ with $\hat{\tau}(X_i) < 0$---that is, by treating all patients who are expected to benefit. However, resources may be constrained so that it is infeasible to treat all of these patients, making it necessary to prioritize among those with $\hat{\tau}_i < 0$. One way to do so is to incorporate the costs associated with the potential outcome of a readmission, or the "payoffs" $\pi$ associated with successfully preventing a readmission \cite{Kleinberg2015PredictionProblems}, which we denote by $\pi_i = \pi(X_i)$.
There are several ways to characterize these payoffs, ideally in terms of the direct costs required to care for a readmitted patient, as well as the financial penalties associated with high readmission rates. However, these data are not available to us, so we instead use the length of stay (LOS) of the readmission as a proxy for cost, assuming that a longer LOS is associated with higher resource utilization and thus higher costs. A range of payoffs could be specified, such as the risk of in-hospital mortality during the readmission, or an acuity-scaled LOS measure, as well as weighted combinations of these quantities. It is important to emphasize that these payoffs are associated with the characteristics of the readmission following the index stay, if one does occur---not those of the index stay itself.
One approach to estimating these payoffs is to predict them using historical data, i.e., $\hat{\pi}(x) = \mathbb{E}[\pi_i \mid X_i = x]$, in a manner similar to that used to derive the risk scores. However, this is beyond the scope of this paper, and so we make some simplifying assumptions regarding the payoffs. Namely, we assume that (1) the individual payoffs $\pi_{i}$ can be approximated by the mean payoff across all patients, $\pi_{i} \approx \mathbb{E}[\pi_{i}]$, and (2) that the payoffs are mean independent of the predicted treatment effects, $\mathbb{E}[\pi_i \mid \tau_i] = \mathbb{E}[\pi_i]$. These two assumptions make the $\hat{\tau}_i$ the sole decision criterion for the treatment policies we evaluate in this paper; nevertheless, we briefly outline how to incorporate these payoffs into decision-making.
Given both the predicted treatment effects, $\hat{\tau}_i$, and payoffs, $\hat{\pi}_i$, we can compute the individual expected utilities, $\mathbb{E}[u_i] = \hat{\tau}_i \hat{\pi}_i$ for each patient. We assume that decision-makers are risk-neutral and that the cost to intervene is fixed. Then, given two patients, $i$ and $j$, and their respective expected utilities, we would prefer to treat $i$ over $j$ if $\mathbb{E}[u_i] > \mathbb{E}[u_j]$. Another interpretation (in a population sense) is that ordering the discharges in terms of their $\hat{\tau}_i$ induces one rank ordering, while ordering them in terms of their $\mathbb{E}[u_i]$ induces another. We can treat the top $k$\% of either ordering, subject to resource constraints, but doing so with the latter will result in greater net benefit and thus would be preferred. Under the assumptions we make above, $\mathbb{E}[u_i] \propto \hat{\tau}_i$ for each patient $i$. This particular decision-theoretic approach requires absolute, and not relative outcome measures, such as the relative risk reduction. \cite{Sprenger2017ThreeMeasures}
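The resulting decision rule can be sketched as follows (hypothetical values, with LOS-based payoffs in days): ranking by $\hat{\tau}_i$ alone and ranking by expected utility $\hat{\tau}_i \hat{\pi}_i$ can prefer different patients:

```python
# Each tuple: (patient id, predicted CATE tau_hat, predicted payoff pi_hat in LOS-days)
patients = [("A", -0.04, 5.0), ("B", -0.10, 1.0), ("C", -0.06, 4.0)]

# Ordering by CATE alone (most negative, i.e. largest expected benefit, first)
by_cate = sorted(patients, key=lambda p: p[1])

# Ordering by expected utility E[u] = tau_hat * pi_hat (most negative first)
by_utility = sorted(patients, key=lambda p: p[1] * p[2])

print([p[0] for p in by_cate])     # -> ['B', 'C', 'A']
print([p[0] for p in by_utility])  # -> ['C', 'A', 'B']
```

Patient B has the most modifiable risk, but preventing C's readmission averts a longer expected stay, so the utility-weighted ordering promotes C; under the mean-payoff assumptions above, the two orderings coincide.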
\subsection{Measuring the impacts of different targeting strategies}
To estimate the impact (in terms of the number of readmissions prevented) of several notional targeting strategies that focus on treating those with the largest expected benefit, and not those at highest predicted risk, we undertake the following approach. We stratify the patients in the dataset into ventiles $V_1, \ldots, V_{20}$ of predicted risk, where $V_1$ denotes the lowest (0 to 5\%) risk ventile, and $V_{20}$ the highest (95 to 100\%). Then, the causal forest is trained on data through the end of 2017, and used to predict CATEs for all patients discharged in 2018.
First, for all patients above a predicted risk of 25\%, we compute the impact of the current risk-based targeting strategy based on the predicted CATEs from 2018, and compare it to the estimate from prior work, which was based on the average treatment effect of this intervention. \cite{Marafino2020ASystem} This comparison serves as one check of the calibration of the predicted CATEs; the number of readmissions prevented should substantially agree in both cases.
Second, we then use these same predicted CATEs for 2018 to assess the impact of three CATE-based targeting strategies, which treat the top 10\%, 20\%, and 50\% of patients in each risk ventile based on their predicted CATE. These strategies spread the intervention across ventiles as a form of a hedge ("not putting all one's eggs in one basket"), rather than treating the top $k$\% patients in the dataset.
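A sketch of this per-ventile selection rule (synthetic risks and CATEs; not the production implementation):

```python
import numpy as np

def select_top_k_per_ventile(risk, tau_hat, k=0.10):
    """Return indices of patients selected for treatment: within each
    ventile of predicted risk, the fraction k with the most negative CATE."""
    risk = np.asarray(risk)
    tau_hat = np.asarray(tau_hat)
    ventile = np.minimum((risk * 20).astype(int), 19)  # ventile index 0..19
    selected = []
    for v in range(20):
        idx = np.where(ventile == v)[0]
        if idx.size == 0:
            continue
        n_treat = max(1, int(round(k * idx.size)))
        # sort ascending so the most negative CATEs come first
        selected.extend(idx[np.argsort(tau_hat[idx])][:n_treat])
    return sorted(int(i) for i in selected)

rng = np.random.default_rng(1)
risk = rng.uniform(0, 1, 200)
tau_hat = rng.normal(-0.02, 0.03, 200)
chosen = select_top_k_per_ventile(risk, tau_hat, k=0.10)
print(len(chosen))  # roughly 10% of the 200 patients
```

Spreading the selection across all twenty ventiles, rather than taking the global top $k$\%, is exactly the hedge described above.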
The impacts of all targeting strategies are characterized both in terms of the annual number of readmissions prevented as well as the number needed to treat (NNT) to prevent one readmission. We estimate the annual number of readmissions prevented by training the causal forest on data through the end of 2017, and then summing the predicted CATEs for discharges taking place in 2018. The annual number of readmissions prevented can be expressed as
\begin{equation}
\sum^{20}_{j = 1} \sum_{i \in V^T_j} \hat{\tau}_i,
\end{equation}
where $V^T_j$ denotes the set of patients notionally selected for treatment in ventile $j$, and $\hat{\tau}_i$ the predicted CATE (treatment effect) for the patient $i$.
Finally, the NNT in this setting has a slightly different interpretation than the one commonly encountered in many studies. Often, the NNT is cited as a summary measure based on the overall results of a trial, e.g., the average treatment effect (ATE). In this case, an ATE estimate $\hat{\tau}$, expressed in terms of the ARR, corresponds to an NNT of $1/\overline{\hat{\tau}_i} = 1/\hat{\tau}$. However, here, we instead estimate CATEs at the individual patient level and are interested in the NNTs specific to the subgroups that receive the intervention. Hence, the NNT is a property of these subgroups and can vary across subgroups as the average predicted CATE $\overline{\hat{\tau}_i}$ varies.
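Both summary quantities follow directly from the predicted CATEs of the treated subgroup; a minimal sketch with hypothetical values:

```python
def impact_and_nnt(tau_hat_treated):
    """Annual readmissions prevented (negated sum of predicted CATEs,
    since ARRs are negative) and the subgroup NNT, 1 / |mean CATE|."""
    n = len(tau_hat_treated)
    mean_tau = sum(tau_hat_treated) / n
    prevented = -sum(tau_hat_treated)
    nnt = 1.0 / abs(mean_tau)
    return prevented, nnt

# Hypothetical treated subgroup with a mean CATE of -0.04 (4% ARR)
tau_hat = [-0.05, -0.03, -0.06, -0.02]
prevented, nnt = impact_and_nnt(tau_hat)
print(round(prevented, 2), round(nnt, 1))  # -> 0.16 25.0
```

A subgroup selected for more negative CATEs will have a larger $|\overline{\hat{\tau}_i}|$ and hence a smaller NNT, which is why the CATE-based strategies below can improve on the risk-based NNT.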
\section{Results}
\subsection{Overall characteristics of the cohort}
From June 2010 to December 2018, 1,584,902 hospitalizations took place at the 21 KPNC hospitals represented in this sample. These included both inpatient hospitalizations as well as stays for observation. Further details regarding the overall cohort are presented in Table \ref{tab:a1}. Of these hospitalizations, 1,539,285 met the inclusion criteria, of which 1,127,778 (73.3\%) occurred during the pre-implementation period for the Transitions Program, and 411,507 (26.7\%) during the post-implementation period. Among these 411,507 hospitalizations taking place post-implementation, 80,424 (19.5\%) were predicted to be at high risk of 30-day post-discharge mortality or readmission; these patients were considered to have received the Transitions Program intervention following hospital discharge.
Of the patients whose index stays were included, their mean age was 65.0 years, and 52.5\% were women. The overall 30-day non-elective rehospitalization rate among these index stays was 12.4\%, and 30-day post-discharge mortality was 4.0\%. Other patient-level characteristics are presented in Table \ref{tab:a1} in the Appendix. Notably, based on the distributions of \texttt{COPS2} and \texttt{LAPS2}, a key modeling assumption---that of overlap---appears to have been satisfied (Figure \ref{fig:overlap}).
Patients at low risk (risk score $<\!\!25$\%) represented 63.3\% of all readmissions throughout the study period, while making up 82.9\% of index stays, compared to 36.7\% of all readmissions among those at high risk ($\geq \!\! 25$\%), which represented 17.1\% of index stays. Moreover, the mean length of stay of the readmission following an index stay was approximately constant across ventiles of predicted risk, satisfying another assumption; patients with predicted risk of 5 to 50\% at their index discharge had a mean length of stay during their readmission that ranged from 4.6 to 5.7 days, and these patients represented 90.5\% of all readmissions. (Figure \ref{fig:los-v-risk})
\begin{figure}
\centering
\includegraphics[scale=0.67]{img/overlap.pdf}
\caption{\footnotesize Assessing the "unconfoundedness" assumption: each point denotes an admission, colored according to whether the patient received the Transitions Program intervention post-discharge ($W_i$). The $x$-axis records the value of a laboratory-based acuity score (\texttt{LAPS2DC}) and the $y$-axis the value of a chronic condition score (\texttt{COPS2}), both at discharge. The extent of overlap displayed here is relatively good, and implies that overlap may be implausible only among patients at very high or very low risk. This plot is based on a random sample of $n = 20,000$ index admissions taken from the post-implementation period.}
\label{fig:overlap}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.60]{img/los-v-risk.pdf}
\caption{\footnotesize Average length of stay (LOS) by risk score ventile. The values in parentheses below the name of each ventile denote the proportion of all 30-day readmissions incurred by patients in that ventile; patients with a predicted risk below 25\% based on their index stay accounted for 63\% of all readmissions. Notably, the average LOS is roughly similar (at 5 days) for patients with predicted risk of 5\% to 80\%. The vertical dotted line represents the 25\% risk threshold used to assign the Transitions Program intervention.}
\label{fig:los-v-risk}
\end{figure}
\subsection{Characterizing the treatment effect heterogeneity of the Transitions Program intervention}
The estimated out-of-bag conditional average treatment effects (CATEs) yielded by the causal forest are presented in Figure \ref{fig:cate-overall}. The distributions shown are those of the discharges in their respective risk ventiles; they are drawn on a common scale and hence do not reflect the variation in sample size across ventiles. Qualitatively, these distributions exhibit wide spread, and suggest some extent of heterogeneity in the treatment effect of the Transitions Program intervention. In particular, treatment effects appear to be largest for patients discharged with a predicted risk of around 15 to 35\%. These effects also appeared to be somewhat attenuated as risk increased, with the center of mass tending towards zero for patients at higher risk. Notably, particularly among patients at higher risk, some estimated effects were greater than zero, indicating that the intervention was more likely to lead to readmission within 30 days. Finally, we also note that the CATE estimates themselves were well-calibrated in the sense that we identified no cases where the magnitude of an individual's CATE estimate was greater than their predicted risk.
Figure \ref{fig:cate-hcupsg} is similar to the previous figure, but stratifies the display by Clinical Classification Software (CCS) supergroups. Definitions of these supergroups can be found in Table \ref{tab:b1} in the Appendix. The overall pattern is similar to that in the unstratified plot, in that treatment effects appear to be greatest for patients at low to moderate risk, but the shapes of these distributions vary from supergroup to supergroup. All supergroups also appear to exhibit heterogeneity in treatment effect within ventiles, which is more pronounced for some conditions, including hip fracture, trauma, and highly malignant cancers. Qualitatively, some supergroups exhibit bimodal or even trimodal distributions in the treatment effect of the Transitions Program intervention, suggesting identification of distinct subgroups based on these effects. Some ventiles are blank for some supergroups because there were no patients belonging to those supergroups with predicted risks falling within those ranges.
Quantitatively, fitting the best linear predictor yields estimates of $\hat \alpha = 1.16$ and $\hat \beta = 1.06$, with $p = 5.3 \times 10^{-8}$ and $2.23 \times 10^{-7}$, respectively. Interpreting the estimate of $\beta$ as an omnibus test for the presence of heterogeneity, we can reject the null hypothesis of no treatment effect heterogeneity.
These effects can also be evaluated on a grid of two covariates to assess how the estimated CATE function varies with the interaction of these covariates. This yields insight into the qualitative aspects of the surface of the CATE function and may identify subgroups among which the Transitions Program intervention may have been more or less effective. Here, we choose the Comorbidity Point Score (\texttt{COPS2}) and the Laboratory-based Acuity Score at discharge (\texttt{LAPS2DC}), while holding all other continuous covariates at their median values, except for age, which we set to 50 and 80. Categorical covariates were held at their mode, except for the supergroup, which we set to chronic heart failure (CHF). We plot the CATE function from the 10th to 90th percentiles of \texttt{LAPS2DC} and from the 0th to 95th percentiles of \texttt{COPS2}. This is akin to evaluating the CATE function for a set of pseudo-patients with CHF having these values of \texttt{COPS2} and \texttt{LAPS2DC}.
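This grid evaluation is a generic computation; a sketch with a hypothetical stand-in predictor (\texttt{toy\_predict} is illustrative only, not the fitted forest):

```python
import numpy as np

def cate_surface(predict_cate, base_row, laps2_grid, cops2_grid):
    """Evaluate a fitted CATE predictor on a LAPS2DC x COPS2 grid,
    holding all other covariates at the values in base_row
    (e.g. medians for continuous variables, modes for categorical ones)."""
    surface = np.empty((len(laps2_grid), len(cops2_grid)))
    for i, laps2 in enumerate(laps2_grid):
        for j, cops2 in enumerate(cops2_grid):
            row = dict(base_row, LAPS2DC=laps2, COPS2=cops2)
            surface[i, j] = predict_cate(row)
    return surface

def toy_predict(row):
    # Hypothetical stand-in for the causal forest's prediction function
    return -0.05 + 0.0005 * row["LAPS2DC"] - 0.0001 * row["COPS2"]

grid = cate_surface(toy_predict, {"age": 80},
                    np.linspace(0, 100, 5), np.linspace(0, 200, 5))
print(grid.shape)  # -> (5, 5)
```

Repeating the evaluation for each fixed age (50 and 80) yields one such surface per pseudo-patient profile.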
Figure \ref{fig:cate-chf} shows the resulting CATE functions for two choices of patient age: 50 and 80 years. In this region, the estimated CATE ranged from -0.060 to 0.025 (-6.0 to 2.5\%), meaning that the estimated absolute risk reduction of the Transitions Program intervention was as large as 6\% for some patients, while for others, their readmission risk was increased by as much as 2.5\%. The estimated CATE was generally larger in magnitude at age 80 than at age 50, suggesting that the Transitions Program intervention became more effective with increasing age. Moreover, and notably, the estimated CATE tended to increase with increasing \texttt{LAPS2DC}, which measures how acutely ill a patient was upon discharge based on their laboratory test data. This finding suggests that, for patients who were more ill at discharge (indeed, the average \texttt{LAPS2DC} in 2018 was 45.5), enrolling them in the Transitions Program may actually have encouraged them to return to the hospital. While this finding may appear surprising, it is unclear whether it actually represents "harm" in the sense that term is usually interpreted; we discuss this finding in more depth in the Discussion section.
\begin{figure}
\centering
\includegraphics[scale=0.67]{img/overall-cate-estimate.pdf}
\caption{\footnotesize Treatment effect heterogeneity across risk score ventiles. The densities represent the distribution of estimated conditional average treatment effects within each ventile. They are drawn on a common scale, and hence do not reflect the variation in sample size across ventiles.}
\label{fig:cate-overall}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.67]{img/cate-by-hcupsg.pdf}
\caption{\footnotesize Treatment effect heterogeneity across risk score ventiles, stratified by Clinical Classification Software (CCS) supergroups based on the principal diagnosis code at discharge. A full listing of the definitions of these supergroups is given in Table \ref{tab:b1} in the Appendix. Abbreviations: CVD, cerebrovascular disease; AMI, acute myocardial infarction; CAP, community-acquired pneumonia; CHF, congestive heart failure; GI, gastrointestinal; UTI, urinary tract infection.}
\label{fig:cate-hcupsg}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.67]{img/5080_chf.pdf}
\caption{\footnotesize The estimated CATE function as it varies in the dimensions of \texttt{LAPS2DC} and \texttt{COPS2}, for a patient with chronic heart failure at ages 50 and 80. All other continuous variables were fixed at their median, and other categorical variables were fixed at their mode. (The transitions from one cell to another in this figure should not appear smooth; if they do, try a different PDF viewer.)}
\label{fig:cate-chf}
\end{figure}
\subsection{Notional estimates of overall impact under different targeting strategies}
Based on these individual CATE estimates, we compute the potential impacts of several notional targeting strategies using these estimated effects, and not predicted risk, to target the Transitions Program intervention. These impacts are expressed in terms of the number of annual readmissions prevented, and are presented in Table \ref{tab:policies}. These quantities are computed by training a model on all data through December 2017, which is then used to predict effects for patients discharged in 2018. These predicted effects are used to compute the numbers of readmissions prevented and number needed to treat (NNT). We also present the estimated number of interventions required under each strategy.
We first confirm the calibration of the individual effect estimates by taking the same group of patients who were intervened upon under the current risk-based strategy, and using this group to estimate the number of readmissions prevented, with the aim of comparing this number to a previous estimate of the impact of this policy based on the average treatment effect \cite{Marafino2020ASystem}. This results in an estimate of 1,246 (95\% confidence interval [CI] 1,110-1,381) readmissions prevented annually, which compares favorably to the previous estimate of 1,210 (95\% CI 990-1,430), representing further evidence that these estimates are well-calibrated. The NNT under both of these strategies is 33, and the number of individual interventions needed is 39,985. (Table \ref{tab:policies})
Next, computing the impacts of the CATE-based strategies, which target the Transitions Program intervention to the top 10\%, 20\%, and 50\% of each risk ventile, we find that all of these policies are estimated to result in greater potential reductions in the absolute number of readmissions prevented (Table \ref{tab:policies}). The top-10\% strategy may prevent 1,461 (95\% CI 1,294-1,628) readmissions annually, and does so more efficiently, as implied by the NNT of 13. Moreover, the top-20\% strategy requires the same total number of interventions as the existing risk-based strategy (39,648 vs. 39,985), yet is estimated to lead to double the number of annual readmissions prevented, at 2,478 (95\% CI 2,262-2,694). This strategy appears to be much more efficient, as evidenced by its estimated NNT of 16.
Even the most expansive strategy, which treats the top 50\% of each risk ventile and requires 250\% of the total interventions of the current risk-based strategy, represents an improvement in the NNT (23 vs. 33). This strategy is estimated to lead to 4,458 (95\% CI 3,925-4,990) readmissions prevented annually, or nearly four times as many as the existing strategy. Finally, we also note that while there appears to be a tradeoff between absolute impact and efficiency, all CATE-based strategies substantially improved upon the risk-based targeting strategy in terms of the NNT.
\begin{table}[]
\centering
\begin{tabular}{@{}lccc@{}}
\toprule
Treatment strategy & \begin{tabular}[c]{@{}c@{}}Annual readmissions \\ prevented, $n$\end{tabular} & \begin{tabular}[c]{@{}c@{}}Total interventions,\\ $n$\end{tabular} & NNT \\ \midrule
\textbf{Risk-based targeting} & & & \\
\quad Target to $ \geq\!\! 25$\% (DiD estimate) & 1,210 (990-1,430) & 39,985 & 33 \\
\quad Target to $ \geq\!\! 25$\% (CF estimate) & 1,246 (1,110-1,381) & 39,985 & 33 \\ \midrule
\textbf{CATE-based targeting} & & & \\
\quad Targeting top 10\% & 1,461 (1,294-1,628) & 18,993 & 13 \\
\quad Targeting top 20\% & 2,478 (2,262-2,694) & 39,648 & 16 \\
\quad Targeting top 50\% & 4,458 (3,925-4,990) & 102,534 & 23 \\ \bottomrule
\end{tabular}
\vspace{0.2em}
\caption{\footnotesize Estimates of overall impacts of risk-based and notional CATE-based targeting strategies in terms of the annual numbers of readmissions prevented as well as the numbers needed to treat (NNTs) under each targeting strategy, based on the estimates for index admissions in 2018. The first quantity---the difference-in-differences (DiD) estimate---is based on the results of \cite{Marafino2020ASystem}. All quantities are rounded to the nearest integer. Parentheses represent 95\% confidence intervals. Abbreviations: CATE, conditional average treatment effect; DiD, difference-in-differences; CF, causal forest.}
\label{tab:policies}
\end{table}
\section{Discussion}
In this paper, we have shown the feasibility of estimating individual treatment effects for a comprehensive readmissions prevention intervention using data on over 1.5 million hospitalizations, representing an example of an "impactibility" model \cite{Lewis2010ImpactibilityPrograms}. Even though our analysis used observational data, we found that these individual estimates were well-calibrated, in that none of the individual estimates was greater than the predicted risk. Moreover, in aggregate, when used to compute the impact of the risk-based targeting policy, these estimates substantially agreed with a separate estimate computed via a difference-in-differences analysis. Notably, our results suggest that strategies targeting similar population health management and quality improvement (QI) interventions based on these individual effects may lead to far greater aggregate benefit compared to targeting based on risk. In our setting, the difference translated to nearly four times the number of readmissions prevented annually over the current risk-based approach.
Our analysis also found both qualitative and quantitative evidence for treatment effect heterogeneity, particularly across levels of predicted risk: the Transitions Program intervention seemed less effective as predicted risk increased. The extent of this mismatch between treatment effect and predicted risk appeared substantial, and may have implications for the design of readmission reduction programs and related population health management programs. Furthermore, our analysis, when stratified by diagnostic supergroup, also appeared to identify distinct subgroups consisting of patients with larger, more negative treatment effects, indicating that the intervention may have been more effective in those subgroups. Patients expected to benefit could be prioritized to receive the intervention, while patients unlikely to benefit could instead receive more targeted care that better meets their needs, including specific subspecialty care, and in some cases, palliative care. More work remains to be done to investigate if these findings hold for other types of preventative interventions, to characterize these subgroups, and to evaluate how to best translate our preliminary findings into practice. Notably, our finding of a risk-treatment effect mismatch is in line with suggestions in the readmissions prevention literature \cite{Finkelstein2020HealthTrial,Steventon2017PreventingRisk,Lindquist2011UnderstandingFactors}.
To the best of our knowledge, this work is the first to apply causal machine learning together with decision analysis to estimate the treatment effect heterogeneity of a population health management intervention, and as such, represents the first example of an end-to-end "impactibility" model \cite{Lewis2010ImpactibilityPrograms}. Our approach is also notable in that we show how to decouple the causal and predictive aspects of this prediction problem \cite{Kleinberg2015PredictionProblems, Ascarza2018RetentionIneffective}. In particular, our approach was principally inspired by a study of the effectiveness of targeting marketing interventions based on risk by Ascarza \cite{Ascarza2018RetentionIneffective}, as well as previous work on the causal aspects of prediction problems by Kleinberg and co-authors \cite{Kleinberg2015PredictionProblems, Kleinberg2018HumanPredictions}, and others taking causal approaches to prediction problems \cite{Rubin2006EstimatingMethodology, Blake2015ConsumerExperiment}. From a modeling perspective, treating such a problem as purely predictive, as is commonly done in studies developing readmission risk tools \cite{Kansagara2011RiskReview.}, relies on an assumption that may be implausible---specifically, that treatment effects correlate with risk. On the other hand, a purely causal approach---one that focuses on modeling treatment effects---fails when confronted with prediction problems that involve resource allocation. Even those patients whose readmissions may be the most "impactible" may not also have the most resource-intensive readmissions. Indeed, this potential oversight exemplifies a form of bias referred to as "omitted payoff bias" \cite{Kleinberg2018HumanPredictions}.
Previous studies have investigated the extent to which preexisting risk assessment tools may already capture a patient's degree of impactibility, by integrating qualitative assessments from nurses and physicians into these models \cite{Flaks-Manov2020PreventingPrediction}. While there seems to be some overlap between predicted risk and impactibility as expressed by these providers, it appears incomplete, and it is not clear how clinician assessments could be integrated into existing models \cite{Flaks-Manov2020PreventingPrediction}. Further afield, performance metrics for predictive models, such as the recently proposed "C statistic for benefit" \cite{vanKlaveren2018TheEffects}, can assess the ability of such a model to discriminate between patients expected to benefit from a treatment and those who will not. While useful, these metrics do not facilitate assessment of treatment effect heterogeneity. Treatment effect heterogeneity among patients undergoing antihypertensive therapy has also been modeled using a similar causal machine learning approach (the X-learner \cite{Kunzel2019MetalearnersLearning}); that study likewise found some degree of mismatch between treatment effects and risk \cite{Duan2019ClinicalTherapy}.
\subsection{The necessity of estimating heterogeneous treatment effects, and not just outcome risk}
Our results highlight the necessity of estimating the treatment effect heterogeneity associated with preventative interventions, particularly at the population scale. A full appraisal of such heterogeneity can aid in targeting these interventions to patients among whom they might be most effective, while targeting intensified versions, or different versions, to patients who may not be expected to benefit from the intervention as originally implemented. As a result, our analyses suggest that up to nearly four times as many readmissions could be prevented compared to the current state of affairs, while incurring nominal marginal costs in terms of the total number of interventions delivered. Thus, the current risk-centric modeling methodology employed by many hospitals and health systems, as well as by payers, may limit the full potential of these interventions.
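The aggregate-benefit gap between risk-based and effect-based targeting can be illustrated with a small simulation. Everything below is a synthetic assumption, not an estimate from our data: the population size, the risk distribution, and in particular the stylized inverse relationship between predicted risk and treatment effect, chosen to mimic the mismatch we observed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, capacity = 100_000, 10_000  # synthetic population; intervention slots

# Hypothetical predicted 30-day readmission risk for each patient.
risk = rng.beta(2, 8, size=n)
# Stylized treatment effect (absolute risk reduction; negative = fewer
# readmissions) that shrinks toward zero as risk grows.
effect = -0.06 * (1.0 - risk) + rng.normal(0.0, 0.005, size=n)

# Current practice: treat the highest-risk patients.
by_risk = np.argsort(-risk)[:capacity]
# Alternative: treat the patients with the largest expected benefit.
by_effect = np.argsort(effect)[:capacity]

prevented_risk = -effect[by_risk].sum()      # expected readmissions prevented
prevented_effect = -effect[by_effect].sum()
print(f"risk-based: {prevented_risk:.0f}, effect-based: {prevented_effect:.0f}")
```

Under these assumptions, effect-based targeting prevents several times as many readmissions for the same number of intervention slots; the ratio depends entirely on the assumed risk-effect relationship.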
We focus on what could be considered a special case of treatment effect heterogeneity---that on the absolute risk scale \cite{Kent2016RiskTrials., Kent2018PersonalizedEffects}. Unlike other studies performing similar analyses (e.g. \cite{Kent2008AInfarction} and others cited in \cite{Kent2016RiskTrials.}), which found that a small group of high-risk patients accounted for most of the aggregate benefit, we instead found that the effects of the Transitions Program intervention were largest in the most numerous subgroup of patients, namely those at relatively low to moderate risk. For high-risk patients, the intervention appeared to be less effective, which is in line with the growing body of evidence showing similar findings \cite{Finkelstein2020HealthTrial,Steventon2017PreventingRisk,Lindquist2011UnderstandingFactors,Rich1993PreventionStudy.}. An open question is the extent to which these high-risk patients, among whom the intervention appears ineffective, are qualitatively distinct, to the point that they should be considered "futile", as some have proposed \cite{Lewis2010ImpactibilityPrograms}, or whether they may be amenable to a more intensified version of the intervention or an entirely different approach.
A notable finding is that some patients had a predicted CATE greater than zero, indicating that the Transitions Program intervention may have encouraged them to return to the hospital. In other settings, a positive treatment effect would often be interpreted as harm, and suggest that the treatment be withheld from these patients. However, we argue that our finding does not readily admit such an interpretation. To see why, we note that this subgroup of patients with a positive CATE appeared to be those who were more acutely ill at discharge, as evidenced by their higher \texttt{LAPS2DC} scores (Figure \ref{fig:cate-chf}). In light of this finding, an alternative interpretation of these weakly positive effects is that they represent readmissions which may have been necessary, and which perhaps may have been facilitated by aspects of the Transitions Program intervention, including instructions to patients outlining the circumstances (e.g. new or worsening symptoms) under which they should return to the hospital. This finding holds particular relevance given increasing concern that readmission prevention programs, in responding to the incentives of the HRRP, may be reducing 30-day hospitalization rates at the expense of increased short- and long-run mortality \cite{Wadhera2018AssociationPneumonia, Fonarow2017TheReconsider, Gupta2018ThePolicy}.
Moreover, this finding also suggests that the CATE estimates may be insufficient to capture the full impact of the Transitions Program intervention on patient outcomes, meaning that the estimated effect of the intervention on readmission alone may not represent a sufficient basis for future targeting strategies. It is plausible that intervening in a patient with a positive estimated effect may be warranted if the readmission would have a positive effect on other outcomes, despite the current emphasis of value-based purchasing programs on penalizing excess 30-day readmissions. For example, in fiscal year 2016, the maximum penalty for excess 30-day mortality was 0.2\% of a hospital's diagnosis-related group (DRG) payments under the Hospital Value-Based Purchasing program, while the maximum penalty for excess 30-day readmission was 3.0\% of DRG payments under the HRRP \cite{Abdul-Aziz2017AssociationMortality}. A more holistic targeting strategy would incorporate estimates of the intervention's effect on short- and long-run mortality and other outcomes, and explicitly include these quantities when computing expected utility. Selecting patients for treatment can then be formulated as an optimization problem that attempts to balance regulatory incentives, organizational priorities, patient welfare, and resource constraints.
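Such a multi-outcome targeting strategy can be sketched as a simple expected-utility ranking under a capacity constraint. The effect distributions and outcome weights below are hypothetical placeholders; in practice the effects would come from fitted models and the weights from organizational and regulatory priorities.

```python
import numpy as np

rng = np.random.default_rng(1)
n, capacity = 50_000, 5_000  # synthetic population; intervention slots

# Hypothetical per-patient effect estimates (negative = outcome prevented).
cate_readmit = rng.normal(-0.02, 0.02, size=n)  # effect on 30-day readmission
cate_mort = rng.normal(0.0, 0.01, size=n)       # effect on 30-day mortality

# Illustrative payoff weights (assumptions, not quantities from the paper);
# mortality is weighted more heavily than readmission.
w_readmit, w_mort = 1.0, 15.0

# Expected utility of treating each patient (larger = more benefit), used to
# rank patients and fill the available slots.
utility = -(w_readmit * cate_readmit + w_mort * cate_mort)
treated = np.argsort(-utility)[:capacity]
print(f"mean utility among treated: {utility[treated].mean():.4f}")
```

The ranking step generalizes directly: adding constraints (e.g. per-facility capacity) turns it into a small assignment problem rather than a sort.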
\subsection{Causal aspects of prediction problems}
Our approach, in contrast to many studies which have developed readmission risk prediction models \cite{Kansagara2011RiskReview.,Bayati2014Data-drivenStudy.,Bates2014BigPatients}, instead focuses on the causal aspects of the readmissions prediction problem. Our analyses emphasize modeling the treatment effect heterogeneity of the Transitions Program intervention and closing the loop from prediction to decision. However, more work remains to formalize criteria that more clearly delineate the roles of causal inference and prediction in these prediction problems, and the extent to which either is necessary or sufficient for a given problem.
As explored by \cite{Bayati2014Data-drivenStudy.} and \cite{Kleinberg2018HumanPredictions}, fully operationalizing a prediction model entails the following steps:
\begin{equation*}
\text{data} \Rightarrow \text{prediction} \Rightarrow \text{decision}.
\end{equation*}
Historically, many machine learning studies, including those in medicine, have emphasized the $\textit{data} \Rightarrow \textit{prediction}$ link, while neglecting the $\textit{prediction} \Rightarrow \textit{decision}$ link. These studies often evaluate model quality on the basis of performance metrics, including the area under the receiver operating characteristic curve (AUROC or AUC) or C-statistic, the area under the precision-recall curve (AUPRC), and other measures of accuracy. A final model is chosen from among candidate models because it maximizes the value of the metric of interest, with the implicit hope that it also maximizes its utility in a real-world setting were it to be so applied. However, this approach conflates prediction with classification---the risks of which are clear, though perhaps underappreciated \cite{Harrell2019ClassificationThinking}---and hence also conflates prediction quality with decision quality.
Indeed, it is conceivable (though not likely) that even an intervention based on a perfectly accurate prediction model---for example, one that correctly identified all readmissions---could result in no net benefit, if the intervention proved ineffective for all the "high-risk" patients whose readmissions were identified. This contrasts with the setting of many classification problems (which we emphasize again are distinct from prediction problems), including, for example, image classification. In these cases, the utility function of classification is expressible in terms of the performance metric of interest alone, such as the accuracy or $F_1$ score. For a given choice of metric, the utility of an image classifier is strictly increasing in that metric, whereas in our hypothetical readmission prediction model, even perfect "classification" is not sufficient. This mismatch between predictive performance and utility is less pronounced when treatment effect heterogeneity is less severe, but our hypothetical example highlights the importance of making the distinction.
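A minimal simulation makes the hypothetical concrete: a model that flags every readmission perfectly still prevents none of them if the intervention is ineffective among the patients it flags. All quantities below are stylized assumptions constructed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000  # synthetic cohort

# Stylized setup: readmission is a deterministic function of risk, so the
# "model" below identifies every readmission perfectly.
risk = rng.uniform(0.0, 1.0, size=n)
will_readmit = risk > 0.75
# But, by assumption, the intervention only works for lower-risk patients.
effect = np.where(risk <= 0.75, -0.05, 0.0)  # absolute risk reduction

# Target exactly the patients the perfect model flags.
prevented = abs(effect[will_readmit].sum())
# Compare with treating an equal number of the lowest-risk patients.
n_flagged = int(will_readmit.sum())
prevented_alt = -effect[np.argsort(risk)[:n_flagged]].sum()
print(f"perfect-model targeting: {prevented:.1f}; "
      f"low-risk targeting: {prevented_alt:.1f}")
```

Perfect "classification" yields zero prevented readmissions here, while the same budget spent on low-risk patients yields a strictly positive count; predictive performance and decision quality come apart completely.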
To be sure, not all prediction problems decompose neatly into causal and predictive components. Some such problems are purely causal, while others can be solved with prediction alone. Models developed for risk adjustment, for example, represent an example of the latter type of problem, as in those contexts there is no link between an individual prediction and a decision. Rather, the predictions of risk adjustment are used in aggregate; for example, to obtain observed-to-expected ratios of outcomes for quality measures, or to adjust capitation payments according to case mix \cite{MedPAC2016MedicareSystem}. Another class of prediction models that can be viewed as solving purely predictive problems comprises those used for clinical decision support \cite{Chen2017MachineExpectations}, including the APACHE ICU mortality model \cite{Knaus1985APACHESystem}.
These decision support systems can be used in the context of an individual patient at the point of care to inform decision-making. For example, the APACHE model can be used to track the response of a patient in the intensive care unit (ICU) to treatment, or as an objective measure to help decide when palliative care may be necessary \cite{Knaus2002APACHEReflections}. But it is not used in isolation to decide when to start and stop an intervention, unlike the prediction model used to enroll patients in the Transitions Program which we studied in this paper. Instead, the physician is meant to integrate APACHE model output with other information to arrive at a decision, which may incorporate other clinical data not captured in the model, resource constraints, or even family preferences. This is a significant distinction: the view of the APACHE model as solving a purely predictive problem implicitly off-loads the burden of decision-making, and thus of conceptualizing a utility function, to the physician. Without a human (or physician) in the loop, the utility function must be made more explicit in order to maximize aggregate utility, as in the framework we describe in this paper.
On the other hand, a setting where the link between prediction and decision is more explicit is that of resource allocation problems. A well-known example involves the MELD score \cite{Kamath2001ADisease}, which predicts 3-month mortality in liver failure and is used to prioritize patients for liver transplant. The prediction problem can again be classed as purely predictive, because patients with higher MELD scores (and hence higher 3-month mortality) benefit most: allocating donor livers to the patients with the highest MELD scores yields more years of life saved, in aggregate. A similar insight was employed by Kleinberg and co-authors \cite{Kleinberg2015PredictionProblems}, who investigated a different resource allocation problem---that of allocating joint replacement surgeries. A patient who receives a joint replacement will not see the full benefit until at least a year post-surgery, due to the time required for recovery and physical therapy in the interim.
Hence, patients who receive a joint replacement, but die within a year post-surgery, do not benefit from surgery. With this in mind, the problem of allocating joint replacements can be cast as a purely predictive one by developing a prediction model for mortality at 1 year post-surgery. Surgeries can then be allocated to patients in need below a certain risk threshold. Unlike with the MELD score, the authors found that risk did not appear to correlate with benefit as measured by claims for physical therapy, joint injections, and physician visits pre-surgery. If the riskiest patients would otherwise have derived the most benefit from surgery, this would constitute an example of "omitted payoff bias", requiring a different utility function that took the payoff of decreased medical need (presumably corresponding to less severe symptoms) into account in order to assign patients to surgery \cite{Kleinberg2015PredictionProblems}.
We close this subsection with an example of a prediction problem that appears purely predictive, but is in fact largely, and perhaps purely, causal. Consider the problem of identifying hospitalized patients who would benefit from a palliative care consultation, which has attracted considerable attention over the last two decades as the need for palliative care has grown \cite{Weissman2011IdentifyingCare.}. Holistic criteria have been proposed to identify these patients, spanning domains including mortality risk in the next 12 months, the presence of refractory symptoms, the patient's level of social support, and inability to carry out activities of daily living---which constitute a selection of the 13 indicators outlined in \cite{Weissman2011IdentifyingCare.}.
Recently, attempts have also been made to reduce the problem of identifying patients who might benefit from palliative care to one of modeling 12-month mortality from hospital admission, e.g., \cite{Avati2018ImprovingLearning}. Notwithstanding the causal issues involved in defining this outcome of interest retrospectively and applying it prospectively (e.g., see \cite{Einav2018PredictiveLife} for a related discussion), this approach is also problematic in that it is unlikely to reliably identify patients who will actually benefit from palliative care, because it omits all other indicators of palliative care need beyond 12-month mortality. A palliative care consultation is a complex intervention which will likely have highly heterogeneous treatment effects on quality of life---which is the real outcome of interest---and it is unclear if these effects do in fact correlate with 12-month mortality risk.
In fact, solving this proxy problem will likely identify many patients who will die in the short run, but who may not be amenable to palliative care. This includes patients in the ICU, for whom providing the full spectrum of palliative care can be challenging, due to sedation, the use of mechanical ventilation and other invasive interventions, environmental light and noise, and limits on family visitation \cite{Cook2014DyingUnit}. Conversely, this approach would also miss many patients who are at low mortality risk, but who may otherwise benefit from palliative care early in their disease course. Indeed, guidelines recommend initiating palliative care early in patients with cancer, for example---with some trials finding that initiation at as early as the time of diagnosis may be most effective \cite{Howie2013EarlyImplications}.
\subsection{New processes for predictive algorithm-driven intervention deployments within learning health systems}
Our findings could be used to retarget the Transitions Program intervention prospectively to patients who are most expected to benefit, rather than those at highest risk, resulting in larger aggregate benefit in terms of the number of readmissions prevented. Similarly, our approach could be used to retarget other population health management and QI interventions which are often deployed wholesale or using risk assessment tools. However, as we mention, these individual estimates were derived from observational data, and not from data generated via a randomized experiment---the latter being the ideal substrate for estimating treatment effects, insofar as randomization mitigates the effects of confounding \cite{Kent2018PersonalizedEffects}. Furthermore, our approach requires interventional data, unlike those used to develop more traditional risk assessment tools, which are often based on retrospective data. Hence, implementing our approach as an alternative to risk tool-driven approaches may require rethinking how these predictive algorithm-driven interventions (or "prediction-action dyads", cf. \cite{Liu2019TheHealthcare}) are deployed within health systems, particularly in relation to existing digital infrastructure and institutional oversight processes. We outline several starting points for doing so below.
One option is to first deploy a new predictive algorithm-driven intervention as part of a simple two-arm randomized trial which compares that intervention to usual care. This represents a pilot phase, generating data that are used to derive an impactibility model, along with a payoff model if necessary for the prediction problem. Following this pilot phase, two paths are possible: 1) based on this impactibility model, the intervention could be re-targeted to the patients most expected to benefit; or 2) another two-arm randomized trial could be carried out comparing risk-based to impactibility-based targeting. In the latter case, patients would be randomized to either the risk or impactibility arm, and based on their covariates would either receive or not receive the intervention according to their risk or benefit estimate. Based on the results of this second trial, whichever targeting approach proved more effective could then be put into use.
Another option, if a predictive algorithm-driven intervention based on risk scores is already in use, does away with the pilot randomized trial. Instead, the first step is to derive an impactibility model from observational data, as we did in this study. Then, if the goal is to retarget this intervention to include more patients at lower risk---for example, patients between 10 and 25\% predicted risk---risk-based and impactibility-based targeting could be compared to each other in a three-arm randomized trial. The first arm consists of risk-based targeting using the current risk threshold; the second arm, impactibility-based targeting; and the third arm, risk-based targeting to patients in the 10-25\% risk subgroup. Finally, cluster-randomized trial designs based on the regression discontinuity design would allow treatment effect heterogeneity across levels of risk to be characterized, enabling the intervention to be retargeted to the groups among which it is most effective; we hope to investigate these designs in future work.
These proposed options represent major shifts from how deployments of predictive algorithm-driven interventions are usually carried out in health systems. New institutional oversight processes \cite{Faden2013AnEthics}, digital infrastructure, and statistical methodology would be required in order to realize the full potential of these approaches. Many such interventions, if deemed to create only minimal risk, fall under the umbrella of routine quality improvement (QI) studies, which exempts them from ongoing independent oversight \cite{Finkelstein2015OversightResearch}. However, incorporating randomization, as we do in the options we describe above, shifts these interventions from the category of routine QI to non-routine QI or research. These two categories of studies often require independent oversight by, for example, an institutional review board (IRB), and may need to incorporate additional ethical considerations, e.g., requiring informed consent. It is possible that categorizing a deployment process like the ones we describe above as non-routine QI or research could impede its being carried out, as has occurred in previous QI work \cite{Baily2006SpecialSafety}. However, similar attempts at deploying QI interventions as a part of rapid-cycle randomized experiments have escaped such categorization \cite{Horwitz2019CreatingTesting}, although standards will likely vary from institution to institution.
Moreover, the new deployment processes we envision will also require new digital infrastructure to implement these models and provision their linked interventions as a part of randomized trials, and to monitor their impacts in real-time (while accounting for early stopping and other biases), akin to how A/B testing is usually carried out by many Web-facing companies today \cite{Kohavi2013OnlineScale}. Another sticking point is the lack of established analogues of power analyses for causal forests and other methods. Without these analogues, it is not clear how to set the necessary sample size for a pilot randomized trial like the one we describe above, although it could possibly be determined via simulation. The investments required to establish these platforms may be considerable, and it remains unclear whether the rewards in fact outweigh the costs, particularly when one also accounts for the opportunity costs associated with experimentation. However, the potential impact, both on patient outcomes and in terms of the knowledge generated, could be substantial, and merits further investigation.
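As a minimal illustration of the simulation route, the sketch below estimates power for the simpler task of detecting an average effect in a two-arm pilot trial with a two-proportion z-test; extending this to detecting heterogeneity with a causal forest would require simulating the full estimation procedure. All rates and sample sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulated_power(n_per_arm, p_control, ate, n_sims=2000):
    """Monte Carlo power of a two-sided two-proportion z-test at alpha = 0.05."""
    z_crit = 1.96
    rejections = 0
    for _ in range(n_sims):
        ctrl = rng.binomial(n_per_arm, p_control)
        trt = rng.binomial(n_per_arm, p_control + ate)
        p1, p2 = ctrl / n_per_arm, trt / n_per_arm
        p_pool = (ctrl + trt) / (2 * n_per_arm)
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
        if se > 0 and abs(p1 - p2) / se > z_crit:
            rejections += 1
    return rejections / n_sims

# Hypothetical pilot: 18% readmission rate in control, 2-point absolute reduction.
power = simulated_power(n_per_arm=4000, p_control=0.18, ate=-0.02)
print(f"estimated power: {power:.2f}")
```

The same loop structure accommodates more realistic designs (cluster randomization, interim looks) by swapping in the corresponding test statistic.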
\subsection{Limitations}
This study has several limitations. First, as this study is observational in nature, our analysis necessarily relies on certain assumptions, which, while plausible, are unverifiable. The unconfoundedness assumption that we make presumes no unmeasured confounding, and cannot be verified through inspection of the data nor via statistical tests. It is possible that our results, particularly the notional estimates of overall impact in terms of the numbers of readmissions prevented annually under the CATE-based targeting strategies, are biased. However, it appears that the magnitude of unobserved bias would have to be large to negate our results, particularly our estimates of the potential impacts of the CATE-based strategies. Furthermore, because the causal forest performs local linear estimation, the assumptions that we make with respect to the risk threshold can be considered modular in the sense that if they fail for the subgroup of patients with a predicted risk of below 25\% across both the pre- and post-implementation periods (which we anticipate would be most likely), they would still hold for the $\geq \!\!25$\% risk subgroup.
Moreover, we again note that the CATEs appeared well-calibrated in at least two aspects: the number of readmissions prevented under the risk-based targeting strategy, as estimated using the predicted CATEs, agreed with that estimated via a separate analysis; and the predicted CATEs were never larger than the predicted risk for any individual. Furthermore, this limitation becomes moot if the data are generated by a randomized experiment. We outline several approaches for how existing processes of deploying predictive algorithm-driven interventions could be redesigned to incorporate randomization in order to iteratively refine these interventions through improved targeting.
Second, these estimates of benefit must be computed with respect to a specific intervention, and so cannot be derived from historical data alone, as is often done when developing risk assessment tools. As such, the necessary data must be generated, whether through wholesale deployments of an intervention (as in many QI studies) or via randomized experiments. The latter provide a better basis for these analyses, but are costly, require additional infrastructure, and may be subject to more institutional oversight. Third, our simplifying assumption with respect to the nature of the payoffs may not have resulted in a targeting strategy that was in fact optimal; we hope to explore in future work the feasibility of predicting payoffs using supervised machine learning in parallel with treatment effects within this framework, and observing how the resulting targeting strategies change.
Fourth, although we incorporated it into our analysis, the \texttt{HCUPSGDC} variable is not always available at discharge. From the point of view of a retrospective analysis that principally seeks to characterize treatment effect heterogeneity, this does not constitute a limitation. However, this consideration may preclude applying this impactibility model prospectively in its current form, though it is possible that the patient's problem list at discharge could be used to infer the value of this variable, as there are only 25 categories used in our analysis. Finally, our results may not be applicable to all settings, particularly hospitals and health systems without a high degree of integration. However, the framework we outline here is agnostic as to application as well as the type of causal machine learning model used, and could be applied to resource allocation and other prediction problems more generally.
\section{Conclusion}
Causal machine learning can be used to identify preventable hospital readmissions, if the requisite interventional data are available. Moreover, our results point to a mismatch between readmission risk and treatment effect, which is consistent with suggestions in prior work. In our setting, the extent of this mismatch was considerable, suggesting that many preventable readmissions may be "left on the table" under current risk modeling methodology. Our proposed framework is also generalizable to the study of a variety of population health management and quality improvement interventions driven by predictive models, as well as of algorithm-driven interventions in a range of settings outside of healthcare.
\newpage
\section*{Acknowledgements}
The authors are immensely grateful to Colleen Plimier of the Division of Research, Kaiser Permanente Northern California, for assistance with data preparation, as well as to Dr. Tracy Lieu, also of the Division of Research, for reviewing the manuscript. In addition, the authors wish to thank Minh Nguyen, Stephen Pfohl, Scotty Fleming, and Rachael Aikens for helpful feedback on earlier versions of this work.
Mr. Marafino was supported by a predoctoral fellowship from the National Library of Medicine of the National Institutes of Health under Award Number T15LM007033, as well as by funding from a Stanford School of Medicine Dean's Fellowship. Dr. Baiocchi was also supported by grant KHS022192A from the Agency for Healthcare Research and Quality. Dr. Vincent Liu was also supported by NIH grant R35GM128672 from the National Institute of General Medical Sciences. Portions of this work were also funded by The Permanente Medical Group, Inc., and Kaiser Foundation Hospitals, Inc. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors have no conflicts of interest to disclose. The funders played no role in the study design, data collection, analysis, reporting of the data, writing of the report, nor the decision to submit the article for publication.
\section*{Appendix A}
\setcounter{table}{0}
\renewcommand{\thetable}{A\arabic{table}}
\begin{table}[H]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lccccc@{}}
\toprule
& Total & Pre-implementation & Post-implementation & \multicolumn{1}{l}{$p$-value} & \multicolumn{1}{l}{SMD} \\ \midrule
Hospitalizations, $n$ & 1,584,902 & 1,161,452 & 423,450 & --- & --- \\
Patients, $n$ & 753,587 & 594,053 & 266,478 & --- & --- \\
Inpatient (\%) & 82.8 (69.7-90.6) & 84.4 (73.1-90.7) & 78.5 (57.7-90.3) & $<0.0001$ & -0.151 \\
Observation (\%) & 17.2 (9.4-30.3) & 15.6 (9.3-26.9) & 21.5 (9.7-42.3) & $<0.0001$ & 0.151 \\
Inpatient stay $<24$ hours & 5.2 (3.3-6.7) & 5.1 (3.8-6.5) & 5.6 (1.8-9.1) & $<0.0001$ & 0.041 \\
Transport-in & 4.5 (1.4-8.7) & 4.5 (1.7-8.8) & 4.5 (0.4-8.4) & 0.56 & -0.001 \\
Age, mean (years) & 65.3 (62.2-69.8) & 65.1 (61.9-69.6) & 65.8 (62.8-70.4) & $<0.0001$ & 0.038 \\
Male gender (\%) & 47.5 (43.4-53.8) & 47.0 (42.4-53.5) & 48.9 (45.3-54.9) & $<0.0001$ & 0.037 \\
KFHP membership (\%) & 93.5 (75.3-97.9) & 93.9 (80.0-98.0) & 92.5 (61.7-97.6) & $<0.0001$ & -0.052 \\
Met strict membership definition (\%) & 80.0 (63.4-84.9) & 80.6 (67.6-85.5) & 78.5 (51.3-83.8) & $<0.0001$ & -0.053 \\
Met regulatory definition (\%) & 61.9 (47.2-69.7) & 63.9 (50.2-72.2) & 56.5 (38.7-66.6) & $<0.0001$ & -0.152 \\
Admission via ED (\%) & 70.4 (56.7-82.0) & 68.9 (56.0-80.3) & 74.4 (58.4-86.6) & $<0.0001$ & 0.121 \\
Charlson score, median (points) & 2.0 (2.0-3.0) & 2.0 (2.0-3.0) & 2.0 (2.0-3.0) & $<0.0001$ & 0.208 \\
Charlson score $\geq 4$ (\%) & 35.2 (29.2-40.7) & 33.2 (28.2-39.8) & 40.9 (33.0-46.2) & $<0.0001$ & 0.161 \\
COPS2, mean (points) & 45.6 (39.1-52.4) & 43.5 (38.4-51.5) & 51.2 (39.7-55.8) & $<0.0001$ & 0.159 \\
COPS2 $\geq 65$ (\%) & 26.9 (21.5-32.0) & 25.3 (21.0-31.6) & 31.1 (22.5-35.4) & $<0.0001$ & 0.129 \\
Admission LAPS2, mean (points) & 58.6 (48.0-67.6) & 57.6 (47.4-65.8) & 61.3 (50.2-72.8) & $<0.0001$ & 0.092 \\
Discharge LAPS2, mean (points) & 46.7 (42.5-50.8) & 46.3 (42.5-50.8) & 47.6 (42.3-52.9) & $<0.0001$ & 0.039 \\
LAPS2 $\geq 110$ (\%) & 12.0 (7.8-16.0) & 11.6 (7.5-15.2) & 12.9 (8.3-18.4) & $<0.0001$ & 0.039 \\
Full code at discharge (\%) & 84.4 (77.3-90.5) & 84.5 (77.7-90.5) & 83.9 (75.9-90.5) & $<0.0001$ & -0.016 \\
Length of stay, days (mean) & 4.8 (3.9-5.4) & 4.9 (3.9-5.4) & 4.7 (3.9-5.6) & $<0.0001$ & -0.034 \\
Discharge disposition (\%) & & & & & 0.082 \\
\quad To home & 72.7 (61.0-86.2) & 73.3 (63.9-85.9) & 71.0 (52.1-86.9) & $<0.0001$ & \\
\quad Home Health & 16.1 (6.9-23.3) & 15.2 (6.9-22.6) & 18.5 (7.0-34.5) & $<0.0001$ & \\
\quad Regular SNF & 9.9 (5.9-14.3) & 10.0 (6.0-15.2) & 9.5 (5.6-12.4) & $<0.0001$ & \\
\quad Custodial SNF & 1.3 (0.7-2.5) & 1.5 (0.8-2.7) & 0.9 (0.4-1.8) & $<0.0001$ & \\
Hospice referral (\%) & 2.6 (1.7-4.4) & 2.6 (1.7-4.6) & 2.7 (1.5-4.0) & $<0.0001$ & 0.007 \\ \midrule
\textbf{Outcomes}& & & & & \\
\quad Inpatient mortality (\%) & 2.8 (2.1-3.3) & 2.8 (2.1-3.3) & 2.8 (1.8-3.3) & 0.17 & -0.003 \\
\quad 30-day mortality (\%) & 6.0 (4.0-7.3) & 6.1 (4.1-7.6) & 5.9 (3.9-6.8) & $<0.0001$ & -0.006 \\
\quad Any readmission (\%) & 14.5 (12.7-17.2) & 14.3 (12.3-17.3) & 15.1 (13.3-17.0) & $<0.0001$ & 0.021 \\
\quad Any non-elective readmission (\%) & 12.4 (10.4-15.4) & 12.2 (10.2-15.5) & 13.1 (10.8-15.4) & $<0.0001$ & 0.029 \\
\quad Non-elective inpatient readmission (\%) & 10.5 (8.2-12.6) & 10.4 (8.1-12.8) & 10.8 (8.6-12.9) & $<0.0001$ & 0.012 \\
\quad Non-elective observation readmission (\%) & 2.4 (1.4-3.7) & 2.2 (1.2-3.4) & 3.0 (1.9-5.6) & $<0.0001$ & 0.049 \\
\quad 30-day post-discharge mortality (\%) & 4.0 (2.6-5.2) & 4.1 (2.7-5.4) & 3.9 (2.3-4.9) & $<0.0001$ & -0.007 \\
\quad Composite outcome (\%) & 15.2 (12.9-18.8) & 15.0 (12.9-19.1) & 15.8 (13.3-18.0) & $<0.0001$ & 0.023 \\ \bottomrule
\end{tabular}}
\caption{Characteristics of the cohort, including both index and non-index stays. Notably, comparing pre- to post-implementation, hospitalized patients were older, and tended to have higher comorbidity burden (higher COPS2) as well as a higher acuity of illness at admission (higher LAPS2). The use of observation stays also increased. These differences reflect a broader trend towards the pool of potential inpatient admissions becoming progressively sicker over the decade from 2010, in large part due to the effectiveness of outpatient preventative care processes at KPNC, as well as of programs providing care outside of the hospital setting as an alternative to admission. Otherwise, care patterns did not substantially change, as evidenced by, e.g., transports-in, Kaiser Foundation Health Plan (KFHP) membership status, and discharge disposition mix, all of which had standardized mean differences (SMDs) $<0.1$. In cohorts as large as this one, SMDs can be a better guide than $p$-values for detecting covariate imbalance between groups, since at these sample sizes even trivial differences attain statistical significance. Finally, as a consequence of increased comorbidity burden and admission acuity, and despite the implementation of the Transitions Program, rates of readmission and of the composite outcome increased from pre- to post-implementation. Abbreviations: SMD, standardized mean difference; KFHP, Kaiser Foundation Health Plan; LAPS2, Laboratory-based Acute Physiology Score, version 2; COPS2, COmorbidity Point Score, version 2; SNF, skilled nursing facility.}
\label{tab:a1}
\end{table}
\section*{Appendix B}
\setcounter{table}{0}
\renewcommand{\thetable}{B\arabic{table}}
\begin{table}[H]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lc@{}}
\toprule
Supergroup name (\texttt{HCUPSGDC}) & \begin{tabular}[c]{@{}c@{}}Clinical Classification Software (CCS) \\ category code(s)\end{tabular} \\ \midrule
Acute CVD & 109 \\
AMI & 100 \\
CAP & 122 \\
Cardiac arrest & 107 \\
CHF & 108 \\
Coma; stupor; and brain damage & 85 \\
Endocrine \& related conditions & 48-51, 53, 54, 56, 58, 200, 202, 210, 211 \\
Fluid and electrolyte disorders & 55 \\
GI bleed & 153 \\
Hematologic conditions & 59-64 \\
Highly malignant cancer & 17, 19, 27, 33, 35, 38-43 \\
Hip fracture & 226 \\
Ill-defined signs and symptoms & 250-253 \\
Less severe cancer & 11-16, 18, 20-26, 28-32, 34, 36, 37, 44-47, 207 \\
Liver and pancreatic disorders & 151, 152 \\
Miscellaneous GI conditions & 137-140, 155, 214 \\
Miscellaneous neurological conditions & 79-84, 93-95, 110-113, 216, 245, 653 \\
Miscellaneous surgical conditions & 86-89, 91, 118-121, 136, 142, 143, 167, 203, 204, 206, 208, 209, 212, 237, 238, 254, 257 \\
Other cardiac conditions & 96-99, 103-105, 114, 116, 117, 213, 217 \\
Other infectious conditions & 1, 3-9, 76-78, 90, 92, 123-126, 134, 135, 148, 197-199, 201, 246-248 \\
Renal failure (all) & 156, 157, 158 \\
Residual codes & 259 \\
Sepsis & 2 \\
Trauma & 205, 225, 227-236, 239, 240, 244 \\
UTI & 159 \\ \bottomrule
\end{tabular}}
\caption{List of Clinical Classification Software (CCS)-defined supergroups and their CCS codes used in this study. These supergroups represent levels of the covariate \texttt{HCUPSGDC}. More details on the CCS codes themselves, as well as mappings to their component ICD codes, can be found at \url{www.ahrq.gov/data/hcup}.}
\label{tab:b1}
\end{table}
\newpage
\printbibliography
\end{spacing}
\end{document}
\section{Introduction}
Strategy-proofness -- the idea that it is in an agent's interest to reveal their true preferences -- is a fundamental desideratum in mechanism design. All the other properties a mechanism may have become suspect if we cannot assume that an agent will play according to the rules. Unfortunately, in the field of voting, this is a property we have to live without -- per the Gibbard-Satterthwaite theorem (\cite{Gibbard1973}; \cite{Satterthwaite1975}), the only strategy-proof rule with a range consisting of more than two candidates is dictatorship. This negative result, however, did not mean that scholars were willing to give up on either voting or resistance to strategy. Instead, the search was on for a workaround.
It was already known that the impossibility does not hold on restricted domains \cite{Dumett1961}, and if the preferences of the voters are separable \cite{Barbera1991} or single-peaked \cite{Moulin1980}, then natural families of strategy-proof voting rules exist. For those committed to the universal domain, there was the statistical approach -- all rules may be manipulable, but it could be the case that some are more manipulable than others. This led to a voluminous literature on manipulation indices, which sought to quantify how likely a voting rule is to be manipulable (\cite{Nitzan1985}; \cite{Kelly1993}; \cite{Aleskerov1999}). With the incursion of computer science into social choice, an approach based on computational complexity came into prominence -- the idea being, if a strategic vote is computationally infeasible to find, that is almost as good as there being no strategic vote in the first place (\cite{Bartholdi1989}; \cite{Conitzer2007}; \cite{Walsh2011}).
None of these approaches were entirely convincing. Domain restrictions are by nature arbitrary, and there is little point in arguing how natural single-peaked preferences may be if no real-world preference profile actually is single-peaked \cite{Elkind2017}. Manipulation indices are sensitive to the choice of the statistical culture, and are usually obtained by means of computer simulations for particular choices of the number of voters and candidates; so while an index could tell us which voting rule is more manipulable under, say, impartial culture with four voters and five candidates, it would be a stretch to extrapolate from this to statements about the manipulability of a voting rule in general. Computational complexity focuses on the worst case of finding a strategic vote, and a high worst-case complexity does not preclude the problem being easy in practice \cite{Faliszewski2011}.
In the closely related field of matching, \cite{Pathak2013} proposed a method to compare the manipulability of mechanisms that seemed to sidestep all these issues -- mechanism $f$ is said to be more manipulable than $g$ if the set of profiles on which $g$ is manipulable is included in the set of profiles on which $f$ is manipulable. No restrictions on domain, statistical culture, or computational ability are required. In the appendix to their paper, Pathak and S\"onmez theorised how the approach could be extended to other areas of mechanism design. This was taken up by the matching community (\cite{Decerf2021}; \cite{Bonkoungou2021}), but to our knowledge the only authors to apply this approach to voting were Arribillaga and Mass\'o (\cite{Arribillaga2016}; \cite{Arribillaga2017}). However, the notion used by Arribillaga and Mass\'o differed from that of Pathak and S\"onmez. \`A la Pathak and S\"onmez, we would say that a voting rule $f$ is more manipulable than $g$ just if:
$$\forall P: \text{$g$ is manipulable at $P$}\Rightarrow\text{$f$ is manipulable at $P$}.$$
In other words, if a voter can manipulate $g$ in profile $P$, then a voter can also manipulate $f$ in the same profile. Arribillaga and Mass\'o's notion is a bit harder to parse:
$$\forall P_i: (\exists P_{-i}: \text{$g$ is manipulable at $(P_i,P_{-i})$})\Rightarrow (\exists P_{-i}': \text{$f$ is manipulable at $(P_i,P_{-i}')$}).$$
That is, if there exists a possible \emph{preference order} of voter $i$, $P_i$, such that there exists \emph{some} profile extending $P_i$, in which the voter can manipulate $g$, then there also exists a \emph{possibly different} profile extending $P_i$, in which the voter can manipulate $f$. To see why this could be an issue, recall that a voting rule is neutral if for any permutation of candidates $\pi$, $f(\pi P)=\pi f(P)$. A neutral voting rule is \emph{always} manipulable under the definition above -- by the Gibbard-Satterthwaite theorem, there exists some profile $P$ where a voter can manipulate. If we pick an appropriate permutation of candidate names, we will obtain a manipulable profile where voter $i$'s preference order is any order we want. Indeed, the papers of Arribillaga and Mass\'o deal with the manipulability of median voter schemes \cite{Moulin1980} and voting by committees \cite{Barbera1991}, both of which are fundamentally non-neutral procedures.
Most voting rules, however, are intended to capture democratic values, and so are neutral at their core; neutrality is typically relaxed only for the purposes of tie-breaking. The notion of Arribillaga and Mass\'o is inappropriate in this setting. The purpose of this paper is to see whether the original notion of Pathak and S\"onmez is any better.
\subsection{Our contribution}
We apply the manipulability notion of Pathak and S\"onmez to the families of $k$-approval and truncated Borda voting rules. We find that all members of the $k$-approval family are incomparable with respect to this notion, while in the truncated Borda family, in the special case of two voters, $(k+1)$-Borda is more manipulable than $k$-Borda; all other members are incomparable.
\section{Preliminaries}
Let $\mathcal{V}$, $|\mathcal{V}|=n$, be a set of voters, $\mathcal{C}$, $|\mathcal{C}|=m$, a set of candidates, and $\mathcal{L}(\mathcal{C})$ the set of linear orders over $\mathcal{C}$. Every voter is associated with some $\succeq_i\in\mathcal{L}(\mathcal{C})$, which denotes the voter's preferences. A profile $P\in\mathcal{L}(\mathcal{C})^n$ is an $n$-tuple of preferences, $P_i$ is the $i$th component of $P$ (i.e.\ the preferences of voter $i$), and $P_{-i}$ the preferences of all the other voters.
A voting rule is a mapping:
$$f:\mathcal{L}(\mathcal{C})^n\rightarrow \mathcal{C}$$
Note two consequences of the definition above. First, the number of voters and candidates is integral to the definition of a voting rule. I.e., for the purposes of this paper the Borda rule with $n=3, m=4$ is considered to be a different voting rule from the Borda rule with $n=4, m=3$. This is why our results meticulously consider every combination of $n$ and $m$ in detail.
Second, since we are requiring the voting rule to output a single candidate, we are assuming an inbuilt tie-breaking mechanism. For the purposes of this paper, all ties will be broken lexicographically. Capital letters will be used to denote candidates with respect to this tie-breaking order. That is, in a tie between $A$ and $B$ the tie is broken in favour of $A$. In the case of subscripts, ties are broken first by alphabetical priority then by subscript. That is, in the tie $\set{A_3,A_5,B_1}$, the winner is $A_3$ since $A$ has priority over $B$ and $3$ is smaller than 5.
In cases where we do not know a candidate's position in the tie-breaking order, we denote the candidate with lower case letters. Thus, if the tie is $\set{a,b,c}$, we cannot say who the winner is, and must proceed by cases.
We study two families of voting rules:
\begin{definition}
$k$-approval, denoted $\alpha_k$, is the voting rule that awards one point to a candidate each time that candidate is ranked in the top $k$ positions of a voter. The highest scoring candidates are the tied winners, ties are broken lexicographically.
$k$-Borda, denoted $\beta_k$, is the voting rule that awards $k-i+1$ points to a candidate each time that candidate is ranked $i$th, $i\leq k$. The highest scoring candidates are the tied winners, ties are broken lexicographically.
The corner case of $\alpha_1=\beta_1$ is also known as the plurality rule, while $\beta_{m-1}$ is known as the Borda rule.
Both families are special cases of \emph{scoring rules}, under which a candidate gains $s_i$ points each time it is ranked $i$th. Under $k$-approval, $s_1=\dots=s_k=1$, while under $k$-Borda $s_i=\max(0,k-i+1)$.
\end{definition}
We will be comparing the $k$-approval and $k$-Borda families of voting rules via the notion of manipulability pioneered by \cite{Pathak2013}.
\begin{definition}[Pathak and S\"onmez, 2013]
Let $f,g$ be two voting rules. We say that $f$ is manipulable at profile $P$ just if there exists a voter $i$ and a preference order $P_i'$ such that:
$$f(P_i',P_{-i})\succ_i f(P_i,P_{-i}).$$
We say that $f$ is more manipulable than $g$, denoted $f\geq_{\mathcal{PS}} g$, just if, for every profile $P$, if $g$ is manipulable at $P$ then so is $f$.
$f>_{\mathcal{PS}} g$ is shorthand for $f\geq_{\mathcal{PS}} g$ and $g\ngeq_{\mathcal{PS}} f$, and $f\times_{\mathcal{PS}} g$ is shorthand for $f\ngeq_{\mathcal{PS}} g$ and $g\ngeq_{\mathcal{PS}} f$.
\end{definition}
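For small $n$ and $m$, these definitions can be checked exhaustively by machine. The following sketch (ours, purely illustrative and not part of the formal development) encodes candidates as integers $0,\dots,m-1$ in tie-breaking order, represents a scoring rule by its score vector, and tests manipulability at a profile by brute force over all misreports.

```python
from itertools import permutations

def winner(profile, s):
    """Scoring-rule winner with lexicographic tie-breaking.

    profile: list of rankings (sequences of candidate indices, best first);
    s: score vector, s[p] points for being ranked in position p."""
    m = len(s)
    score = [0] * m
    for ranking in profile:
        for pos, c in enumerate(ranking):
            score[c] += s[pos]
    top = max(score)
    # Ties are broken towards the lowest candidate index.
    return min(c for c in range(m) if score[c] == top)

def manipulable(profile, s):
    """True iff some voter has a misreport yielding a strictly better winner."""
    w = winner(profile, s)
    for i, truth in enumerate(profile):
        for lie in permutations(truth):
            w2 = winner(profile[:i] + [lie] + profile[i + 1:], s)
            if truth.index(w2) < truth.index(w):
                return True
    return False

def k_approval(k, m):  # alpha_k: one point for each of the top k positions
    return [1] * k + [0] * (m - k)

def k_borda(k, m):     # beta_k: position p (0-based) is worth max(0, k - p)
    return [max(0, k - p) for p in range(m)]
```

For example, with the Borda vector $[2,1,0]$ and the two-voter profile $(A\succ B\succ C,\; B\succ C\succ A)$, the sincere winner is $B$, but the first voter can report $A\succ C\succ B$ to force a three-way tie that $A$ wins, so the profile is manipulable.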
\section{$k$-Approval family}
In this section we fix $i<j$. Our end result (\autoref{thm:approval}) is that for any $n,m$, $\alpha_i\times_{\mathcal{PS}}\alpha_j$.
\begin{proposition}\label{lemLaij}
For all $n,m$: $\alpha_{i} \ngeq_{\mathcal{PS}} \alpha_{j}$.
\end{proposition}
\begin{proof}
Consider a profile with $i$ $B$ candidates, $j-i$ $A$ candidates, and $m-j$ $C$ candidates.
\[
\begin{array}{|c|c|c|c|}
\hline
n-1\text{ voters of type 1} & B_{1} \ldots B_{i} & A_{1} \ldots A_{j-i}& C_{1} \ldots C_{m-j} \\
\hline
1\text{ voter of type 2} & B_{1} \ldots B_{i} & A_{j-i} \ldots A_{1}& C_{1} \ldots C_{m-j} \\
\hline
\end{array}
\]
Under $\alpha_{j}$, all $A$ and $B$ candidates are tied by score, and $A_{1}$ wins the tie. The voter of type 2 can swap $A_{1}$ for $C_{1}$. This will lower the score of $A_1$ from $n$ to $n-1$, while the other $A$ and $B$ candidates still have $n$. If $j-i>1$, the winner will be $A_2$. If $j-i=1$, the winner will be $B_1$. In either case, the outcome will be better for the manipulator.
Under $\alpha_{i}$, $B_1$ wins, so every voter gets his best outcome. No one has an incentive to manipulate.
\end{proof}
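The construction can be spot-checked mechanically for the smallest instance $i=1$, $j=2$, $n=2$, $m=3$, encoding the candidates in tie-breaking order as $A_1=0$, $B_1=1$, $C_1=2$. The helper functions below are an illustrative sketch of ours, not part of the proof.

```python
from itertools import permutations

def winner(profile, s):
    # Scoring-rule winner; ties broken towards the lowest candidate index.
    m = len(s)
    score = [0] * m
    for ranking in profile:
        for pos, c in enumerate(ranking):
            score[c] += s[pos]
    top = max(score)
    return min(c for c in range(m) if score[c] == top)

def manipulable(profile, s):
    # Brute force: can any voter misreport and get a strictly better winner?
    w = winner(profile, s)
    for i, truth in enumerate(profile):
        for lie in permutations(truth):
            w2 = winner(profile[:i] + [lie] + profile[i + 1:], s)
            if truth.index(w2) < truth.index(w):
                return True
    return False

# Both voters sincerely rank B1 > A1 > C1 (j - i = 1, so the two types coincide).
profile = [(1, 0, 2), (1, 0, 2)]
assert manipulable(profile, [1, 1, 0]) is True    # alpha_2: A1 wins, swap helps
assert manipulable(profile, [1, 0, 0]) is False   # alpha_1: B1 wins outright
```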
\begin{lemma}\label{lem:alphabigm}
For $n=2q, m\geq 2j-1$: $\alpha_{j} \ngeq_{\mathcal{PS}} \alpha_{i}$.
\end{lemma}
\begin{proof}
The profile consists of $j-1$ $B$ candidates, $j-1$ $C$ candidates, one $A$ candidate, and $m-2j+1$ $D$ candidates. The condition that $m\geq 2j-1$ guarantees that the top $j$ candidates of voters of types 1 and 2 intersect only at $A$:
\[
\begin{array}{|c|c|c|c|}
\hline
q\text{ voters of type 1} & B_{1} \ldots B_{i} & B_{i+1} \ldots B_{j-1}A & C_1\dots C_{j-1}D_1\dots D_{m-2j+1}\\
\hline
q-1\text{ voters of type 2} & AC_1\dots C_{i-1} & C_{i} \ldots C_{j-1} & B_1\dots B_{j-1}D_1\dots D_{m-2j+1}\\
\hline
1\text{ voter of type 3} & C_{1} \ldots C_{i} & C_{i+1} \ldots C_{j-1}A & B_1\dots B_{j-1}D_1\dots D_{m-2j+1}\\
\hline
\end{array}
\]
Under $\alpha_i$, $A$ has $q-1$ points, $B_1$ through $B_{i}$ and $C_1$ through $C_{i-1}$ have $q$, $C_i$ has 1. The winner is $B_1$. The voter of type 3 can manipulate by swapping $A$ for $C_1$. $A$ will have $q$ points and will beat $B_1$ in the tie.
Under $\alpha_j$, $A$ has $n$ points and is the winner. Voters of type 2 get their best choice elected. A voter of type 1 would rather have a $B$ candidate win, but $A$ has a lead of at least one point on these. Moving $A$ below the $j$th position will only drop the score by one, and $A$ will win the tie. The voter of type 3 would rather see a $C$ candidate win, but $A$ has a lead of at least one. Dropping $A$'s score will at most force a tie, which $A$ will win.
\end{proof}
\begin{lemma}\label{aoddsmall}
For $n=2q+1, m\geq 2j-1$:
if $i\geq 2$, then $\alpha_{j} \ngeq_{\mathcal{PS}} \alpha_{i}$.
\end{lemma}
\begin{proof}
The profile consists of $j$ $B$ candidates, $j-1$ $A$ candidates, and $m-2j+1$ $C$ candidates. The condition that $m\geq 2j-1$ guarantees that the top $j$ candidates of voters of types 1 and 2 intersect only at $B_1$:
\[
\begin{array}{|c|c|c|c|}
\hline
q\text{ voters of type 1} & \overbrace{A_{1} \ldots A_{i}}^{\geq 2} & B_{1}A_{i+1}\dots A_{j-1}& B_2\dots B_jC_1\dots C_{m-2j+1} \\
\hline
q\text{ voters of type 2} & B_{i} \ldots B_{1} &B_{i+1}\ldots B_{j} & A_1\dots A_{j-1}C_1\dots C_{m-2j+1} \\
\hline
1\text{ voter of type 3} & B_{1} \ldots B_{i} & B_{i+1}\ldots B_{j} &A_1\dots A_{j-1} C_1\dots C_{m-2j+1} \\
\hline
\end{array}
\]
Under $\alpha_i$, the winner is $B_1$ with $q+1$ points. A voter of type 2 can swap $B_1$ with $B_j$, and $B_2$ will win (since $i\geq 2$, $B_1$ and $B_2$ are necessarily distinct).
Under $\alpha_j$, $B_1$ wins with $n$ points. The other $B$ candidates have at most $n-1$ points, and the $A$ candidates have at most $n-2$. A voter of type 2 would rather see another $B$ candidate win, but such a voter can only lower the score of $B_1$ by 1, and $B_1$ will win the tie against any $B$ candidate. A voter of type 1 would rather see an $A$ candidate win, but $B_1$ has a lead of at least two points, so will beat the $A$ candidates by points even if ranked last.
\end{proof}
\begin{lemma}\label{lem:aione}
For all $n,m$: if $i=1$, then $\alpha_{j} \ngeq_{\mathcal{PS}} \alpha_{i}$.
\end{lemma}
\begin{proof}
The profiles consist of candidates $A,B,C$, and $D_1$ through $D_{m-3}$.
Case one: $n$ even, $n=2q$.
\[
\begin{array}{|c|c|c|c|}
\hline
q\text{ voters of type 1} & B&AD_1\dots D_{j-2}&C D_{j-1}\dots D_{m-3} \\
\hline
q-1\text{ voters of type 2} & A&BD_1\dots D_{j-2} &CD_{j-1}\dots D_{m-3} \\
\hline
1\text{ voter of type 3} & C &AD_1\dots D_{j-2}&BD_{j-1}\dots D_{m-3} \\
\hline
\end{array}
\]
Under $\alpha_i$, $B$ has $q$ points, $A$ has $q-1$, and $C$ has 1. If $q\geq 2$, $B$ wins by score, and if $q=1$, $B$ wins the tie. The voter of type 3 can swap $C$ with $A$ to force a tie, which $A$ will win.
Under $\alpha_j$, $A$ is the winner with $n$ points. Voters of type 2 have no incentive to manipulate. A voter of type 1 would rather see $B$ win, but $B$ has at most $n-1$ points, so a voter of this type can at most force a tie, which $A$ will win. Likewise, the voter of type 3 would rather see $C$, who has 1 point, win. Decreasing $A$'s score will at most force a tie, which $A$ will win.
Case two: $n$ odd, $n=2q+1$.
\[
\begin{array}{|c|c|c|c|}
\hline
q\text{ voters of type 1} & B&CD_1\dots D_{j-2}& A D_{j-1}\dots D_{m-3} \\
\hline
q\text{ voters of type 2} & A&BD_1\dots D_{j-2} &CD_{j-1}\dots D_{m-3} \\
\hline
1\text{ voter of type 3} & C &BD_1\dots D_{j-2} &AD_{j-1}\dots D_{m-3} \\
\hline
\end{array}
\]
Under $\alpha_i$, $A$ wins by tie-breaking. The voter of type 3 can swap $C$ with $B$ to give $B$ a points victory.
Under $\alpha_j$, $B$ wins with $n$ points. A voter of type 2 would rather see $A$ win, but $B$ has at least a two point lead, so he cannot force a tie. The voter of type 3 would rather $C$ win, but $C$ is behind by at least one point, so he can at best force a tie, which $B$ will win.
\end{proof}
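The even case can likewise be spot-checked by brute force for the smallest instance $q=1$ ($n=2$, so there are no voters of type 2), $j=2$, $m=4$, encoding the candidates in tie-breaking order as $A=0$, $B=1$, $C=2$, $D_1=3$. The helper functions are an illustrative sketch of ours, not part of the proof.

```python
from itertools import permutations

def winner(profile, s):
    # Scoring-rule winner; ties broken towards the lowest candidate index.
    m = len(s)
    score = [0] * m
    for ranking in profile:
        for pos, c in enumerate(ranking):
            score[c] += s[pos]
    top = max(score)
    return min(c for c in range(m) if score[c] == top)

def manipulable(profile, s):
    # Brute force: can any voter misreport and get a strictly better winner?
    w = winner(profile, s)
    for i, truth in enumerate(profile):
        for lie in permutations(truth):
            w2 = winner(profile[:i] + [lie] + profile[i + 1:], s)
            if truth.index(w2) < truth.index(w):
                return True
    return False

# Type 1: B > A > C > D1; type 3: C > A > B > D1.
profile = [(1, 0, 2, 3), (2, 0, 1, 3)]
assert manipulable(profile, [1, 0, 0, 0]) is True    # alpha_1: type 3 swaps C and A
assert manipulable(profile, [1, 1, 0, 0]) is False   # alpha_2: A wins with n points
```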
\begin{corollary}\label{lem:ambig}
For all $n,m \geq 2j$: $\alpha_{j} \ngeq_{\mathcal{PS}} \alpha_{i}$.
\end{corollary}
\begin{proof} \autoref{lem:alphabigm}, \autoref{aoddsmall}, and \autoref{lem:aione}.
\end{proof}
\begin{lemma}\label{lem:ammiddle}
For all $n, m<2j$: if $i\geq 2$, then $\alpha_{j} \ngeq_{\mathcal{PS}} \alpha_{i}$.
\end{lemma}
\begin{proof}
Case one: $m\geq i+j$.
Since $m\geq i+j = 2i + (j-i)$, we can guarantee the existence of $2i$ $A$ candidates and $(j-i)$ $B$ candidates. The remaining $m-(i+j)$ candidates are the $C$ candidates. Observe that since $m < 2j$, the number of the $C$ candidates is smaller than the number of the $B$ candidates $(m-j-i < j-i)$.
\[
\begin{array}{|c|c|c|c|}
\hline
\floor{n/2}\text{ voters of type 1} & \overbrace{A_{1} \ldots A_{i}}^{\geq2}&B_1\dots B_{j-i}&A_{2i}\dots A_{i+1}\overbrace{C_1\dots C_{m-(i+j)}}^{<j-i} \\
\hline
\ceil{n/2}\text{ voters of type 2} & A_{i+1} \dots A_{2i}& B_{1} \ldots B_{j-i} & A_{i}\dots A_1 C_1\dots C_{m-(i+j)}\\
\hline
\end{array}
\]
Under $\alpha_i$, if $n$ is even the winner is $A_1$. A voter of type 2 can manipulate by swapping $A_{2i}$ with $A_{i}$, giving $A_i$ $n/2+1$ points. Since $i\geq 2$, $A_i\neq A_1$. If $n$ is odd, the winner is $A_{i+1}$ with $\ceil{n/2}$ points. A voter of type 1 can swap $A_1$ with $A_{2i}$ to give $A_{2i}$ $\ceil{n/2}+1$ points. Since $i\geq 2$, $A_{2i}\neq A_{i+1}$.
Under $\alpha_j$, all the $B$ candidates have $n$ points, and the winner is $B_1$. A voter of type 1 would rather see one of $A_1,\dots, A_i$ win. Since he cannot raise the score of these, he will have to lower the score of the $B$ candidates. However, there are more $B$ candidates than $C$ candidates, so if the voter were to rank all the $B$ candidates below the $j$th position, he would necessarily raise the score of one of $A_{i+1},\dots,A_{2i}$ to $\ceil{n/2}+1$. That candidate would then win by score, an outcome even worse for the manipulator than $B_1$.
Likewise, a voter of type 2 would rather see one of $A_{i+1},\dots, A_{2i}$ win. He can attempt to rank all the $B$ candidates below $j$, but then one of $A_1,\dots,A_i$ will get $\floor{n/2}+1\geq\ceil{n/2}$ points and win the election (possibly by tie-breaking).
Case two: $m< i+j$.
In the profile below we have $i$ $C$ candidates, $j-i$ $B$ candidates, and $m-j$ $A$ candidates. Since $m<i+j$, $m-j<i$, so the voters of type 1 can rank all the $A$ candidates in the top $i$ positions, as well as at least one $B$ candidate.
\[
\begin{array}{|c|c|c|c|}
\hline
n-1\text{ voters of type 1} & \overbrace{B_1\dots B_{i-(m-j)}}^{\geq1}\overbrace{A_{1} \ldots A_{m-j}}^{\geq1}&\text{any order} &\text{any order} \\
\hline
1\text{ voter of type 2} & C_{1} \dots C_{i}& B_{1} \ldots B_{j-i} & A_{1}\dots A_{m-j}\\
\hline
\end{array}
\]
Under $\alpha_i$, the winner is $A_1$. The voter of type 2 can manipulate by ranking $B_1$ first.
Under $\alpha_j$, the winner is $B_1$ (either by score, or winning a tie against a $C$ candidate). The voters of type 1 get their best choice elected. The voter of type 2 would rather see a $C$ candidate win, but to do so he would have to lower the score of the $B$ candidates. If he ranks any $B$ candidate below the $j$th position, he would have to rank one of the $A$ candidates above it -- that candidate would then win the election with $n$ points, and the outcome would be worse than $B_1$.
\end{proof}
\begin{corollary}\label{lem:amsmall}
For all $n, m < 2j$: $\alpha_{j} \ngeq_{\mathcal{PS}} \alpha_{i}$.
\end{corollary}
\begin{proof}
\autoref{lem:aione} and \autoref{lem:ammiddle}.
\end{proof}
\begin{theorem}\label{thm:approval}
For all $n, m$: $\alpha_{i} \times_{\mathcal{PS}} \alpha_{j}$.
\end{theorem}
\begin{proof}
By \autoref{lemLaij}, $\alpha_i\ngeq_{\mathcal{PS}}\alpha_j$. By \autoref{lem:ambig} and \autoref{lem:amsmall}, $\alpha_j\ngeq_{\mathcal{PS}}\alpha_i$.
\end{proof}
\section{The $k$-Borda Family}
As before, we fix $i<j$. In this section we will show that for $n=2,j\neq m-1$, $\beta_j>_{\mathcal{PS}}\beta_i$ (\autoref{cor:main}), but in all other cases the rules are incomparable (\autoref{thm:bordainc}, \autoref{cor:betaibetaj}, \autoref{prop:betakborda}).
We make use of a standard result about the manipulability of scoring rules:
\begin{lemma}\label{lem:normalform}
Consider a profile $P$, and scoring rule $f$. Let $w$ be the winner under sincere voting, $f(P)=w$. Call all the candidates voter $i$ perceives to be at least as bad as $w$ (including $w$) the bad candidates. The others, the good candidates. Order the good candidates $g_1,\dots, g_q$ and the bad candidates $b_1,\dots,b_r$ from the highest to the lowest scoring in $P_{-i}$. In case of equal scores, order candidates by their order in the tie-breaking. We claim that if $i$ can manipulate $f$ at $P$, he can manipulate with the following vote:
$$P_i^*=g_1\succ\dots\succ g_q\succ b_r\succ\dots\succ b_1.$$
\end{lemma}
\begin{proof}
Let $\text{score}(c,P)$ be the score of candidate $c$ at profile $P$. Suppose voter $i$ can manipulate at $P$. That is, there is a $P_i'$ such that $f(P_i',P_{-i})=g_j$. In order to be the winner, $g_j$ must have the highest score.
\begin{equation}\label{eqngiwins}
\text{score} (g_j, P_{i}', P_{-i}) \geq \max_{c\neq g_j}(\text{score} ( c, P_{i}', P_{-i})).
\end{equation}
Observe that $\text{score} (g_j, P_{i}', P_{-i})=\text{score} (g_j, P_{-i})+s_k$, where $k$ is the position in which $g_j$ is ranked in $P_i'$. By ranking $g_1$ first in $P_i^*$ it follows that $\text{score} (g_1, P_{i}^{*}, P_{-i})=\text{score} (g_1, P_{-i})+s_1$, and observe that $\text{score} (g_1, P_{-i})\geq \text{score} (g_j, P_{-i})$, and $s_1\geq s_k$. Thus:
\begin{equation}\label{eqng1hasmore}
\text{score} ( g_1, P_{i}^{*}, P_{-i})\geq \text{score} (g_j, P_{i}', P_{-i}).
\end{equation}
We now claim that the score of the highest scoring bad candidate in $(P_i',P_{-i})$ is no higher than in $(P_i^*,P_{-i})$. For contradiction, suppose that $b_p$ is the highest scoring bad candidate in $(P_i^*,P_{-i})$, and his score is higher than any bad candidate in $(P_i',P_{-i})$. Observe that $\text{score} ( b_p, P_{i}^{*}, P_{-i}) = \text{score} ( b_p, P_{-i}) + s_{m-p+1}$. Since $\text{score} ( b_1, P_{-i}),\dots, \text{score} ( b_p, P_{-i})\geq \text{score} ( b_p, P_{-i})$, this means that bad candidates $b_1,\dots,b_p$ must all get strictly less than $s_{m-p+1}$ points in $P_i'$. However there are $p$ such candidates, and only $p-1$ positions below $m-p+1$.
Since the highest scoring candidate in $(P_i',P_{-i})$ has at least as many points as the highest scoring bad candidate, it follows that:
\begin{equation}\label{eqnbjhasless}
\max_{c\neq g_j}(\text{score} ( c, P_{i}', P_{-i}))\geq \max_{b\in\set{b_1,\dots,b_r}}(\text{score} ( b, P_{i}^{*}, P_{-i})).
\end{equation}
Combining \ref{eqngiwins}, \ref{eqng1hasmore}, and \ref{eqnbjhasless}, we conclude that $g_1$ is among the highest scoring candidates in $(P_i^*,P_{-i})$. If at least one of the inequalities is strict, $g_1$ has more points than any bad candidate and we are done.
Suppose then that all the inequalities are equal. Observe that this implies that if $g_1\neq g_j$, then $g_1$ must come before $g_j$ in the tie-breaking order. To see this, observe that if we assume $\text{score} ( g_1, P_{i}^{*}, P_{-i})= \text{score} (g_j, P_{i}', P_{-i})$, then it follows that $\text{score} ( g_1, P_{-i}) + s_1= \text{score} (g_j, P_{-i}) + s_k$, where $k$ is the position in which $g_j$ is ranked in $P_i'$. Since $\text{score} (g_1, P_{-i})\geq \text{score} (g_j, P_{-i})$, and $s_1\geq s_k$, the only way this is possible is if $\text{score} (g_1, P_{-i})= \text{score} (g_j, P_{-i})$. By definition, in the case of equal scores in $P_{-i}$, the candidate that is labelled $g_1$ must have priority in the tie-breaking.
If $g_1$ also wins the tie against any bad candidate, we are done. For contradiction, suppose a bad candidate $b_p$ wins the tie given $P_i^*$. This means that $b_p$ beats $g_1$ and $g_j$ in the tie-breaking. Observe that $b_p$ is ranked in position ${m-p+1}$ in $P_i^*$. Since $b_p$ loses in $(P_i',P_{-i})$, $b_p$ must have been ranked lower than ${m-p+1}$ in $P_i'$. This means $b_p$ was ranked lower than at least $m-p+1$ candidates. Since $m=q+r$, and there are only $q$ good candidates, this means $b_p$ was ranked lower than at least $r-p+1$ bad candidates. In $P_i^*$, $b_p$ is ranked below exactly $r-p$ bad candidates, so there must exist a bad candidate that was ranked above $b_p$ in $P_i'$, but is ranked below $b_p$ in $P_i^*$. Call this candidate $b_t$. By definition of $P_i^*$, it must be the case that $\text{score}(b_t,P_{-i}) >\text{score}(b_p,P_{-i})$, or $\text{score}(b_t,P_{-i}) =\text{score}(b_p,P_{-i})$ and $b_t$ wins the tie. But that is impossible, because then $b_t$ would have gained at least as many points in $(P_i',P_{-i})$ as $b_p$ did in $(P_i^*,P_{-i})$, and since the score of $b_p$ in $(P_i^*,P_{-i})$ is equal to $g_1$'s, it means $b_t$ has at least as many points in $(P_i',P_{-i})$ as $g_j$, so wins either by points or by tie-breaking.
\end{proof}
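Since the lemma quantifies over all profiles and misreports, it lends itself to an exhaustive machine check on tiny instances. The sketch below (ours, purely illustrative) verifies the claim for $n=2$, $m=3$ under the Borda rule: a voter can manipulate at a profile if and only if the single vote $P_i^*$ succeeds.

```python
from itertools import permutations, product

def winner(profile, s):
    # Scoring-rule winner; ties broken towards the lowest candidate index.
    m = len(s)
    score = [0] * m
    for ranking in profile:
        for pos, c in enumerate(ranking):
            score[c] += s[pos]
    top = max(score)
    return min(c for c in range(m) if score[c] == top)

def normal_form_vote(truth, others, s, w):
    # P_i^*: good candidates (strictly preferred to w) from highest to lowest
    # scoring in P_{-i}, then bad candidates from lowest to highest scoring;
    # equal scores are ordered by tie-breaking priority (lower index first).
    m = len(s)
    sc = [0] * m
    for r in others:
        for pos, c in enumerate(r):
            sc[c] += s[pos]
    good = [c for c in range(m) if truth.index(c) < truth.index(w)]
    bad = [c for c in range(m) if truth.index(c) >= truth.index(w)]
    key = lambda c: (-sc[c], c)
    return sorted(good, key=key) + sorted(bad, key=key)[::-1]

s = [2, 1, 0]                         # Borda rule for m = 3
orders = list(permutations(range(3)))
checked = 0
for profile in map(list, product(orders, repeat=2)):
    w = winner(profile, s)
    for i, truth in enumerate(profile):
        # Can voter i manipulate by any misreport at all?
        can = any(truth.index(winner(profile[:i] + [list(l)] + profile[i+1:], s))
                  < truth.index(w) for l in orders)
        # Does the single normal-form vote P_i^* succeed?
        star = normal_form_vote(truth, profile[:i] + profile[i+1:], s, w)
        ok = (truth.index(winner(profile[:i] + [star] + profile[i+1:], s))
              < truth.index(w))
        assert can == ok              # manipulable iff P_i^* itself succeeds
        checked += 1
```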
\begin{lemma}\label{prop:2qij}
For $n=2q$, all $m$: $\beta_i\ngeq_{\mathcal{PS}}\beta_{j}$.
\end{lemma}
\begin{proof}
Consider a profile with one $A$ candidate, one $B$ candidate, and $m-2$ $C$ candidates:
\[
\begin{array}{|c|c|c|c|}
\hline
1\text{ voter of type 1} & A C_{1} \ldots C_{i-1}& C_{i} \dots C_{j-2} B& C_{j-1}\dots C_{m-2}\\
\hline
1\text{ voter of type 2} & B A C_{m-2} \ldots C_{m-i+1}& C_{m-i}\dots C_{m-j+1} & C_{m-j}\dots C_1\\
\hline
q-1\text{ voters of type 3} & A C_{1} \ldots C_{i-1} & C_i\dots C_{j-1} & C_{j}\dots C_{m-2}B\\
\hline
q-1\text{ voters of type 4}& B C_{m-2} \ldots C_{m-i}& C_{m-i-1}\dots C_{m-j} & C_{m-j-1}\dots C_1A\\
\hline
\end{array}
\]
Under $\beta_{j}$, $A$ has $qj+(j-1)$ points. $B$ has $qj+1$. $C_1$ is the highest scoring $C$ candidate with $q(j-1)$. The winner is $A$. However, the voter of type 2 can rank $A$ last and shift the $C$ candidates up one. This gives $A$ a score of $qj$, $B$'s score is still $qj+1$, and a $C$ candidate's is at most $q(j-1)+1$. $B$ wins a points victory.
Under $\beta_i$, $A$ has $qi+(i-1)=qi+i-1$ points. $B$ has $qi$. $C_1$ has $q(i-1)$, the other $C$ candidates no more. Voters of types 1 and 3 have no incentive to manipulate. The voter of type 2 would rather see $B$ win, but by \autoref{lem:normalform} this would mean ranking $A$ last, and $A$ would still have $qi$ points and win the tie. A voter of type 4 would rather see any other candidate win, and by \autoref{lem:normalform} this involves putting either $B$ or $C_1$ first. $B$ is already ranked first and does not win, and putting $C_1$ first would give $C_1$ $q(i-1)+i=qi-q+i$ points, which is less than $A$'s score.
\end{proof}
\begin{lemma}\label{prop:oddij}
For $n=2q+1$, all $m$: $\beta_i\ngeq_{\mathcal{PS}}\beta_{j}$.
\end{lemma}
\begin{proof}
Consider a profile with one $A$ candidate, one $B$ candidate, and $m-2$ $C$ candidates:
\[
\begin{array}{|c|c|c|c|}
\hline
1\text{ voter of type 1} & BC_1\dots C_{i-1}&C_{i}\dots C_{j-2}A&C_{j-1}\dots C_{m-2}\\
\hline
q\text{ voters of type 2} &BAC_1\dots C_{i-2}&C_{i-1}\dots C_{j-2}&C_{j-1}\dots C_{m-2}\\
\hline
q\text{ voters of type 3} &AB C_{m-2}\dots C_{m-i+1}& C_{m-i}\dots C_{m-j+1}&C_{m-j}\dots C_1\\
\hline
\end{array}
\]
Under $\beta_i$, $A$ has $qi+q(i-1)=2qi-q$ points. $B$ has $(q+1)i+q(i-1)=2qi-q+i$. All the $C$ candidates are Pareto dominated by $B$, so the winner is $B$. A voter of type 3 would like to see $A$ win, but if he ranks $B$ last, $A$ will have $2qi-q$ points to $B$'s $2qi-q+i-(i-1)=2qi-q+1$, so $B$ would still win.
Under $\beta_j$, $A$ has $qj+q(j-1)+1=2qj-q+1$, $B$ has $(q+1)j+q(j-1)=2qj-q+j$. If a voter of type three ranks $B$ last and shifts the $C$ candidates up one, $B$ will have $2qj-q+j-(j-1)=2qj-q+1$, tying with $A$, and $A$ wins the tie. It remains to check that $A$ will have more points than the highest scoring $C$ candidate, which is clearly $C_1$. After the manipulation, $C_1$'s score will increase by at most one. $C_1$'s score before manipulation is $j-1+q(j-2)$, so $A$ will beat $C_1$ if:
\begin{align*}
2qj-q+1&\geq j + qj-2q,\\
qj-q&\geq j -1 -2q,\\
qj-j&\geq -1 -q,\\
(q-1)j&\geq -1 -q.
\end{align*}
Which is always satisfied.
\end{proof}
\begin{corollary}\label{cor:betaibetaj}
For all $n, m$: $\beta_i\ngeq_{\mathcal{PS}}\beta_{j}$.
\end{corollary}
\begin{proof}
\autoref{prop:2qij} and \autoref{prop:oddij}.
\end{proof}
\begin{lemma}\label{prop:betajibordaodd}
For $n=2q+1$, all $m$: $\beta_{j}\ngeq_{\mathcal{PS}} \beta_i$.
\end{lemma}
\begin{proof}
Consider the following profile:
\[
\begin{array}{|c|c|c|c|c|c|}
\hline
q\text{ voters of type 1} & ABC_{m-2}\dots C_{m-i+1}&C_{m-i}\dots C_{m-j+1}& C_{m-j}\dots C_1 \\
\hline
q-1\text{ voters of type 2} & BAC_1\dots C_{i-2}& C_{i-1}\dots C_{j-2}&C_{j-1}\dots C_{m-2} \\
\hline
1\text{ voter of type 3} & BC_1\dots C_{i-1}& C_i\dots C_{j-1}&C_j\dots C_{m-2} A\\
\hline
1\text{ voter of type 4} & C_1\dots C_i& B C_{i+1} \dots C_{j-1}&C_{j}\dots C_{m-2} A \\
\hline
\end{array}
\]
Under $\beta_i$, $A$ has $qi+(q-1)(i-1)=2qi-q-i+1$ points. $B$ has $qi+q(i-1)=2qi-q$. $C_1$ is clearly the highest scoring $C$ candidate, and has $i+(i-1)+(q-1)(i-2)=qi+i-2q+1$ points. If $i>1$, $B$ wins a points victory, but a voter of type 1 can rank $B$ last to force a tie, which $A$ will win. If $i=1$, then $A$ wins by tie-breaking, but the voter of type 4 can rank $B$ first to make $B$ the winner.
Under $\beta_j$, $A$ has $qj+(q-1)(j-1)$. $B$ has $qj+q(j-1)+(j-i)$. A voter of type 1 would like to manipulate in favour of $A$, but if he ranks $B$ last, $B$'s score will only drop by $j-1$, and $B$ will still win.
The voter of type 4 would rather see one of $C_1$ through $C_i$ win. Observe that an upper bound on the score a $C$ candidate can get from the voters of types 1 through 3 is $q(j-1)$: pair $q-1$ voters of type 1 with the $q-1$ voters of type 2; each time the first gives the candidate $j-2-k$ points, the other gives at most $k$ points, for an upper bound of $(q-1)(j-2)$. The voter of type 3 ranks all $C$ candidates one position higher, so combined with the remaining voter of type 1 the contribution to the candidate's score is at most $j-1$, which gives a total of $(q-1)(j-2)+(j-1) <q(j-1)$. If the voter ranks $B$ last and the $C$ candidate first, $B$ will still have $qj+q(j-1)$ points to the $C$ candidate's $j+q(j-1)$, so $B$ will still win.
\end{proof}
\begin{lemma}\label{prop:betajiqbigbordaeven}
For $n=2q$, all $m$: if $q>2$, $\beta_{j}\ngeq_{\mathcal{PS}} \beta_i$.
\end{lemma}
\begin{proof}
Consider the following profile:
\[
\begin{array}{|c|c|c|c|c|c|}
\hline
q-1\text{ voters of type 1} & ABC_{m-2}\dots C_{m-i+1}& C_{m-i} \dots C_{m-j+1}& C_{m-j}\dots C_1 \\
\hline
q-2\text{ voters of type 2} & BA C_{1}\dots C_{i-2}& C_{i-1} \dots C_{j-2}& C_{j-1}\dots C_{m-2} \\
\hline
1\text{ voter of type 3} & BC_{1}\dots C_{i-1}& C_{i} \dots C_{j-1}& C_{j}\dots C_{m-2} A \\
\hline
1\text{ voter of type 4} & C_{1}\dots C_{i}&B C_{i+1} \dots C_{j-1}& C_{j}\dots C_{m-2} A \\
\hline
1\text{ voter of type 5} & C_{m-2}\dots C_{m-i-1}&B C_{m-i-2} \dots C_{m-j}& C_{m-j-1}\dots C_{1} A \\
\hline
\end{array}
\]
Case one: $m>3$, and hence $C_1\neq C_{m-2}$.
Under $\beta_i$, $A$ has $(q-1)i+(q-2)(i-1)$ points and $B$ has $(q-1)i+(q-1)(i-1)$. A $C$ candidate has at most $(q-2)(i-2)+(i-1)+(i+1)$ -- observe that if the candidate gets $i-2-k$ points from a voter of type 1, he gets at most $k$ from a voter of type 2, which gives us at most $(q-2)(i-2)$ from $q-2$ of each type of voter; the voter of type 3 gives one more point to the candidate, so paired with the remaining voter of type 1, the contribution is $i-1$; as for the voters of types 4 and 5, if one gives the candidate $i-k$ points, the other gives at most $k+1$, for the remaining $(i+1)$.
Since $q>2$, $A$ and $B$ have more points than the $C$ candidates. If $i>1$, $B$ also beats $A$ by points, but a voter of type 1 can rank $B$ last and shift the $C$ candidates up one to force a tie between $A$ and $B$. This operation will raise the score of a $C$ candidate by at most 1, so such a candidate will at worst enter the tie, which $A$ wins. If $i=1$, $A$ wins by tie-breaking, but a voter of type 4 can rank $B$ first to give him one more point.
Under $\beta_j$, $A$ has $(q-1)j+(q-2)(j-1)$ points, $B$ has $(q-1)j+(q-1)(j-1)+2(j-i)$. $B$ has more points than $A$, and a voter of type 1 can no longer change this by ranking $B$ last.
A voter of type 4 would like to manipulate in favour of one of $C_1$ through $C_i$. As we have argued above, such a candidate would get no more than $(q-2)(j-2)+(j-1)$ from the voters of types 1 through 3, which we round up to $(q-1)(j-1)$. If the manipulator ranks this candidate first, the candidate will get at most $2j$ from the voters of types 4 and 5, for an upper bound of $2j+(q-1)(j-1)$. In comparison, $B$ gets $(q-1)j+(q-1)(j-1)$ from the voters of types 1 through 3. Since $q>2$, $B$ would still win. The argument for the voter of type 5 is analogous.
Case two: $m=3$. The same profile we had above collapses to the following:
\[
\begin{array}{|c|c|c|c|c|c|}
\hline
q-1\text{ voters of type 1} & A&B&C \\
\hline
q-2\text{ voters of type 2} & B&A& C\\
\hline
1\text{ voter of type 3} & B&C& A \\
\hline
1\text{ voter of type 4} & C&B& A \\
\hline
1\text{ voter of type 5} & C&B&A \\
\hline
\end{array}
\]
The argument with respect to $A$ and $B$ is unchanged. We need only verify that $C$ cannot win under $\beta_i$ or $\beta_j$.
Under $\beta_i$, $C$ has exactly 2 points. $A$ and $B$ are tied with $q-1$, and the voter of type 4 can only manipulate in favour of $B$ by ranking $B$ first.
Under $\beta_j$, $C$ has exactly 5 points. $B$ has at least 8, and a voter of type 4 or 5 can only lower $B$'s score by one point.
\end{proof}
\begin{lemma}\label{prop:betajiqbigbordafour}
For $n=4$, all $m$: $\beta_{j}\ngeq_{\mathcal{PS}} \beta_i$.
\end{lemma}
\begin{proof}
Case one: $i>1$.
\[
\begin{array}{|c|c|c|c|c|c|}
\hline
2\text{ voters of type 1} & BC_{m-2}\dots C_{m-i}& C_{m-i-1} \dots C_{m-j}& C_{m-j-1}\dots C_1A \\
\hline
1\text{ voter of type 2} & ABC_{1}\dots C_{i-2}& C_{i-1} \dots C_{j-2}& C_{j-1}\dots C_{m-2} \\
\hline
1\text{ voter of type 3} & AC_{1}\dots C_{i-1}& C_{i} \dots C_{j-2}B& C_{j-1}\dots C_{m-2} \\
\hline
\end{array}
\]
Under $\beta_i$, $B$ wins with $2i+(i-1)=3i-1$ points. The voter of type 2 can manipulate by ranking $B$ last.
Under $\beta_j$, $B$ wins with $3j$ points. If the voter of type 2 ranks $B$ last, $B$ will still have $2j+1$, beating $A$. The voter of type 3, likewise, cannot manipulate in favour of $A$, but could try to manipulate in favour of a $C$ candidate. If he ranks $B$ last and $C_{i-1}$ first, then $B$ will have a score of $3j-1$. We can bound $C_{i-1}$'s score by $j$ (from one voter of type 1 and type 2) $+j$ (the manipulator ranks $C_{i-1}$ first) $+x$ (the points from the remaining voter of type 1). In order for $C_{i-1}$ to win, we must have $2j+x > 3j-1$, which is clearly impossible.
Case two: $i=1$.
\[
\begin{array}{|c|c|c|c|c|c|}
\hline
2\text{ voters of type 1} & B&AC_{m-2} \dots C_{m-j+1}& C_{m-j}\dots C_1 \\
\hline
1\text{ voter of type 2} & A&C_{1}\dots C_{j-1}& C_{j}\dots C_{m-2}B \\
\hline
1\text{ voter of type 3} & C_{1}&AC_2\dots C_{j-1}& C_{j}\dots C_{m-2}B \\
\hline
\end{array}
\]
Under $\beta_1$, $B$ is the winner, but the voter of type 3 can manipulate in favour of $A$.
Under $\beta_j$, $A$ has $4j-3$ points to $B$'s $2j$. Since $j\geq 2$, $A$ is the winner. A voter of type 1 would rather see $B$ win, but even if he ranks $A$ last, $A$ will still have $3j-2\geq 2j$ points. A voter of type 3 would rather see $C_1$ win, but $C_1$ has $2j-1$ points, so would lose to $B$ no matter what the voter does.
\end{proof}
\begin{corollary}\label{corbetaji}
For $n>2$, all $m$: $\beta_{j}\ngeq_{\mathcal{PS}} \beta_i$.
\end{corollary}
\begin{proof}
\autoref{prop:betajibordaodd}, \autoref{prop:betajiqbigbordaeven}, and \autoref{prop:betajiqbigbordafour}.
\end{proof}
\begin{theorem}\label{thm:bordainc}
For $n>2$, all $m$: $\beta_i\times_{\mathcal{PS}}\beta_j$.
\end{theorem}
\begin{proof}
\autoref{cor:betaibetaj} and \autoref{corbetaji}.
\end{proof}
Thus far the story resembles that of $k$-approval. However, in the case of $n=2$, a hierarchy of manipulability is observed:
\begin{theorem}\label{thm:bordan2}
For $n=2$, $m>k+2$: $\beta_{k+1} \geq_{\mathcal{PS}} \beta_{k}$.
\end{theorem}
\begin{proof}
Let voter 1's preferences be $c_1\succ_1\dots\succ_1c_m$ and voter 2's $b_1\succ_2\dots\succ_2b_m$. Note that $c_1\neq b_1$, else manipulation would not be possible.
Let $\beta_k(P_1,P_2)=d$ and $\beta_{k+1}(P_1,P_2)=e$. We consider whether or not $d=e$ by cases.
Case one: $d=e$.
Assume voter 1 can manipulate $\beta_k$ in favour of $c_q\succ_1 d$. By \autoref{lem:normalform}, this means $c_q$ is the winner in the following profile:
\[
\begin{array}{|c|c|c|c|}
\hline
P_1^* & c_q b_{m}\dots b_{m-k+1}&b_{m-k} &b_{m-k-1}\dots b_1 \\
\hline
P_2 & b_1 \ldots b_k& b_{k+1} & b_{k+2}\dots b_m \\
\hline
\end{array}
\]
Let us consider who the winner must be under $\beta_{k+1}(P_1^*,P_2)$. Observe that the score of a candidate under $\beta_{k+1}$ is at most two points higher than under $\beta_k$ -- it will increase by one point for each voter who ranks the candidate in the top $k+1$ positions.
If $c_q$'s score increases by 2, then we are done: whenever $c_q$ has more points than another candidate $f$ under $\beta_k$, $c_q$ still has more points under $\beta_{k+1}$; and if $c_q$ is tied with $f$ under $\beta_k$, then $c_q$ must beat $f$ in the tie, so under $\beta_{k+1}$ $c_q$ will either tie with $f$ and win the tie, or have more points outright. Thus voter 1 can manipulate in favour of $c_q\succ_1 d$.
If $c_q$'s points increase by 1, then voter 2 does not rank $c_q$ in the top $k+1$ positions, and a fortiori not in the top $k$ positions. Thus under $\beta_k(P_1^*,P_2)$ $c_q$ has $k$ points, $b_1$ has $k$ points, and the other candidates strictly less. Under $\beta_{k+1}$, $c_q$ and $b_1$ will still tie at $k+1$, and, since $m>k+2$, the other candidates will still have strictly less.
Case two: $d\neq e$.
As we have argued before, the score of a candidate can increase by at most two points when going from $\beta_k$ to $\beta_{k+1}$. Since $d$ wins under $\beta_k$ but $e$ wins under $\beta_{k+1}$, this means that $e$'s score must increase by 2 and $d$'s by 1. This means that one voter does not rank $d$ in the top $k+1$ positions, and, since $d$ must still win under $\beta_k$, this means the other voter must rank $d$ first (at least one candidate will have a score of $k$, so the winner's score must be at least $k$). Since voter 1 is the one with an incentive to manipulate, this means the sincere profile must be the following:
\[
\begin{array}{|c|c|c|c|}
\hline
\text{Voter }1 & c_1 \ldots c_{i-1} e c_{i+1} \dots c_m \\
\hline
\text{Voter }2 & d b_2 \ldots b_{j-1} eb_{j+1} \dots b_m \\
\hline
\end{array}
\]
Under $\beta_k$ $d$ either has one point more than $e$, or they are tied and $d$ wins the tie. Under $\beta_{k+1}$ $e$ wins, which means $d$'s score increases by 1 and $e$'s by two -- thus $d$ cannot be in the top $k+1$ positions of voter 1. But this means in the sincere profile both $d$ and $c_1$ have $k$ points under $\beta_k$, and $d$ wins the tie. Voter 2 can thus manipulate $\beta_{k+1}$ as follows:
\[
\begin{array}{|c|c|c|c|}
\hline
\text{Voter }1 & c_1 \ldots c_{i-1} e c_{i+1} \dots c_m \\
\hline
\text{Voter }2 & d c_m \dots c_1 \\
\hline
\end{array}
\]
Both $d$ and $c_1$ have $k+1$ points, since $m>k+2$ the other candidates have strictly less, and $d$ wins the tie.
\end{proof}
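The theorem can also be spot-checked by brute force for small instances. A sketch under our assumptions (helper names are ours; lexicographic tie-breaking with $a<b<c<d$), enumerating all two-voter profiles for $m=4$ and verifying that every profile manipulable under $\beta_1$ is also manipulable under $\beta_2$:

```python
from itertools import permutations

def winner(profile, k, cands):
    # beta_k scores with lexicographic tie-breaking over `cands`.
    s = {c: 0 for c in cands}
    for ballot in profile:
        for p, c in enumerate(ballot, start=1):
            s[c] += max(k - p + 1, 0)
    best = max(s.values())
    return min(c for c in cands if s[c] == best)

def manipulable(profile, k, cands):
    # Some voter can obtain an outcome he strictly prefers to the sincere one.
    w = winner(profile, k, cands)
    for v, truth in enumerate(profile):
        for alt in permutations(cands):
            deviant = list(profile)
            deviant[v] = alt
            if truth.index(winner(deviant, k, cands)) < truth.index(w):
                return True
    return False

cands = "abcd"
profiles = [[p1, p2] for p1 in permutations(cands) for p2 in permutations(cands)]
# m = 4 > k + 2 = 3: every profile manipulable under beta_1 must also be
# manipulable under beta_2, as the theorem asserts.
assert all(manipulable(pr, 2, cands) for pr in profiles if manipulable(pr, 1, cands))
```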
\begin{corollary}\label{cor:main}
For $n=2$, $i<j\leq m-2$: $\beta_{j}>_{\mathcal{PS}}\beta_i$.
\end{corollary}
\begin{proof}
Transitivity of $\geq_{\mathcal{PS}}$, \autoref{thm:bordan2}, and \autoref{cor:betaibetaj}.
\end{proof}
To finish, we observe that the proviso that $m>k+2$ really is necessary -- the Borda rule proper ($\beta_{m-1}$) is incomparable with $\beta_k$.
\begin{proposition}\label{prop:betakborda}
For $n=2$, all $m$: $\beta_{m-1}\ngeq_{\mathcal{PS}} \beta_k$.
\end{proposition}
\begin{proof}
Consider the following profile, with a $B$ candidate, a $C$ candidate, and $m-2$ $A$ candidates:
\[
\begin{array}{|c|c|c|c|}
\hline
\text{Voter }1 & B C A_{1}\dots A_{k-2}&A_{k-1} \dots A_{m-2}\\
\hline
\text{Voter }2 & C A_1 \dots A_{k-1}&A_{k}\dots A_{m-2} B\\
\hline
\end{array}
\]
Under $\beta_k$, if $k>1$ then $C$ is the winner with $2k-1$ points. Voter 1 can manipulate by voting $B\succ A_{m-2}\succ\dots\succ A_1\succ C$. This way $B$ and $C$ are tied with $k$ points each, the other candidates have strictly less, and $B$ wins the tie. If $k=1$ then $B$ is the winner by tie-breaking, and voter 2 can manipulate by voting for $A_1$.
Under $\beta_{m-1}$, $C$ is the winner. Voter 2 has no incentive to manipulate; voter 1 would rather see $B$ win. By \autoref{lem:normalform}, if this is possible then it is possible in the following profile:
\[
\begin{array}{|c|c|c|c|}
\hline
\text{Voter }1 & B A_{m-2} \dots A_1 C \\
\hline
\text{Voter }2 & C A_1 \dots A_{m-2} B \\
\hline
\end{array}
\]
However, in this profile all candidates are tied with $m-1$ points, and $A_1$ wins the tie, which is worse than $C$ for the manipulator.
\end{proof}
\section{Conclusion}
In this paper we have shown:
\begin{enumerate}
\item For any choice of $n,m$: $\alpha_i\ngeq_{\mathcal{PS}}\alpha_j$;
\item For $n=2$, $i<j\leq m-2$: $\beta_j>_{\mathcal{PS}}\beta_i$;
\item In every other instance, $\beta_i\ngeq_{\mathcal{PS}}\beta_j$.
\end{enumerate}
These results are negative in nature. Even in the case of two natural, hierarchical families of scoring rules, the notion of Pathak and S\"onmez fails to make a meaningful distinction between their manipulability. The quest for a useful framework for comparing the manipulability of voting rules continues.
\section{Introduction}
Most real-world networks have power-law degrees, so that the proportion of nodes having $k$ neighbors scales as $k^{-\tau}$ with exponent $\tau$ between 2 and 3 \cite{albert1999,faloutsos1999,jeong2000,vazquez2002}. Power-law degrees imply various intriguing scale-free network properties, such as ultra-small distances~\cite{hofstad2007, newman2001} and the absence of percolation thresholds when $\tau<3$~\cite{janson2009b, pastor2001}. Empirical evidence has been matched by random graph null models that are able to explain mathematically why and how these properties arise. This paper deals with another fundamental property observed in many scale-free networks related to three-point correlations that suppress the creation of triangles and signal the presence of hierarchy. We quantify this property in terms of the {\it clustering spectrum}, the function $k\mapsto \bar c(k)$ with $\bar c(k)$ the probability that two neighbors of a degree-$k$ node are neighbors themselves.
In {\it uncorrelated} networks the clustering spectrum $\bar c (k)$ remains constant and independent of $k$. However, the majority of real-world networks have spectra that decay in $k$, as first observed in technological networks including the Internet~\cite{pastor2001b,ravasz2003}. Figure~\ref{fig:chas} shows the same phenomenon for a social network: YouTube users as vertices, and edges indicating friendships between them~\cite{snap}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{youtubech.pdf}
\caption{$\bar c(k)$ for the YouTube social network}
\label{fig:chas}
\end{figure}
Close inspection suggests the following properties, not only in Fig.~\ref{fig:chas}, but also in the nine further networks in Fig.~\ref{fig:ch}. (i) The right end of the spectrum appears to be of the power-law form $k^{-\alpha}$; approximate values of $\alpha$ give rise to the dashed lines. (ii) The power law is only approximate and kicks in only for rather large values of $k$; in fact, the slope of $\bar c (k)$ decreases with $k$. (iii) There exists a transition point: the minimal degree at which the slope starts to decline faster and settles on its limiting (large-$k$) value.
\begin{figure*}
\subfloat[]{%
\centering
\includegraphics[width=0.33\linewidth]{hudongch.pdf}
\label{fig:chhudong}
}%
\subfloat[]{%
\centering
\includegraphics[width=0.33\linewidth]{baiduch.pdf}
\label{fig:chbaidu}
}
\subfloat[]{
\centering
\includegraphics[width=0.33\linewidth]{wordnetch.pdf}
\label{fig:chwordnet}
}
\subfloat[]{
\centering
\includegraphics[width=0.33\linewidth]{trecch.pdf}
\label{fig:chtrec}
}
\subfloat[]{
\centering
\includegraphics[width=0.33\linewidth]{webgooglech.pdf}
\label{fig:chgoogle}
}
\subfloat[]{
\centering
\includegraphics[width=0.33\linewidth]{asskitch.pdf}
\label{fig:chwiki}
}
\subfloat[]{%
\centering
\includegraphics[width=0.33\linewidth]{catsterch.pdf}
\label{fig:chcatster}
}%
\subfloat[]{
\centering
\includegraphics[width=0.33\linewidth]{gowallach.pdf}
\label{fig:chgowalla}
}
\subfloat[]{
\centering
\includegraphics[width=0.33\linewidth]{wikitalkch.pdf}
\label{fig:chyou}
}
\caption{$\bar c(k)$ for several information (red), technological (green) and social (blue) real-world networks. (a) Hudong encyclopedia~\cite{niu2011}, (b) Baidu encyclopedia~\cite{niu2011}, (c) WordNet~\cite{miller1998}, (d) TREC-WT10g web graph~\cite{bailey2003}, (e) Google web graph~\cite{snap}, (f) Internet on the Autonomous Systems level~\cite{snap}, (g) Catster/Dogster social networks~\cite{konect}, (h) Gowalla social network~\cite{snap}, (i) Wikipedia communication network~\cite{snap}. The different shadings indicate the theoretical boundaries of the regimes as in Fig.~\ref{fig:curve}, with $N$ and $\tau$ as in Table~\ref{tab:data}.
}
\label{fig:ch}
\end{figure*}
For scale-free networks a decaying $\bar c(k)$ is taken as an indicator for the presence of modularity and hierarchy~\cite{ravasz2003}, architectures that can be viewed as collections of subgraphs with dense connections within themselves and sparser ones between them. The existence of clusters of dense interaction signals hierarchical or nearly decomposable structures. When the function $\bar c(k)$ falls off with $k$, low-degree vertices have relatively high clustering coefficients, hence creating small modules that are connected through triangles. In contrast, high-degree vertices have very low clustering coefficients, and therefore act as bridges between the different local modules. This also explains why $\bar c(k)$ is not just a local property, and when viewed as a function of $k$, measures crucial mesoscopic network properties such as modularity, clusters and communities. The behavior of $\bar c(k)$ also turns out to be a good predictor for the macroscopic behavior of the network. Randomizing real-world networks while preserving the shape of the $\bar c(k)$ curve produces networks with very similar component sizes as well as similar hierarchical structures as the original network~\cite{colomer2013}. Furthermore, the shape of $\bar c(k)$ strongly influences the behavior of networks under percolation~\cite{serrano2006}. This places the $\bar c(k)$-curve among the most relevant indicators for structural correlations in network infrastructures.
In this paper, we obtain a precise characterization of clustering in the hidden variable model, a tractable random graph null model. We start from an explicit form of the $\bar c(k)$ curve for the hidden variable model~\cite{boguna2003,serrano2007,dorogovtsev2004}.
We obtain a detailed description of the $\bar c(k)$-curve in the large-network limit that provides rigorous underpinning for the empirical observations (i)-(iii). We find that the decay rate in the hidden variable model differs significantly from the $\bar c(k)\sim k^{-1}$ decay that has been found in a hierarchical graph model~\cite{ravasz2003}, in the preferential attachment model~\cite{krot2015}, and in a preferential attachment model with enhanced clustering~\cite{szabo2003}. Furthermore, we show that before the power-law decay of $\bar c(k)$ kicks in, $\bar c(k)$ first has a constant regime for small $k$ and a logarithmic decay phase. This characterizes the entire clustering spectrum of the hidden variable model.
This paper is structured as follows. Section \ref{sec:hidden} introduces the random graph model and its local clustering coefficient. Section \ref{sec:three} presents the main results for the clustering spectrum. Section \ref{sec:deep} explains the shape of the clustering spectrum in terms of an energy minimization argument, and Section \ref{sec:deep2} quantifies how fast the limiting clustering spectrum arises as function of the network size. We conclude with a discussion in Section \ref{sec:disc} and present all mathematical derivations of the main results in the appendix.
\section {Hidden variables}\label{sec:hidden}
As null model we employ the hidden variable model~\cite{boguna2003,park2004,bollobas2007,britton2006,norros2006}.
Given $N$ nodes, hidden variable models are defined as follows. Associate to each node a hidden variable $h$ drawn from a given probability distribution function
\begin{equation}\label{eq:rhoh}
\rho(h)=Ch^{-\tau}
\end{equation}
for some constant $C$. Next join each pair of vertices independently according to a given probability $p(h,h')$ with $h$ and $h'$ the hidden variables associated to the two nodes.
Many networks can be embedded in this hidden-variable framework, but particular attention goes to the case in which the hidden variables have themselves the structure of the degrees of a real-world network. In that case the hidden-variable model puts soft constraints on the degrees, which is typically easier to analyze than hard constraints as in the configuration model~\cite{clauset2009,newman2003book,vazquez2002,dhara2016}.
Chung and Lu \cite{chung2002} introduced the hidden variable model in the form
\begin{equation}\label{c1}
p(h,h')\sim \frac{h h'}{N \mean{h}},
\end{equation}
so that the expected degree of a node equals its hidden variable.
We now discuss the structural and natural cutoff, because both will play a crucial role in the description of the clustering spectrum. The structural cutoff is defined as the largest possible upper bound on the degrees required to guarantee single edges, while the natural cutoff characterizes the maximal degree in a sample of $N$ vertices. For scale-free networks with exponent $\tau\in(2,3]$ the structural cutoff scales as $\sqrt{N}$ while the natural cutoff scales as $N^{1/(\tau-1)}$, which gives rise to structural negative correlations and possibly other finite-size effects. If one wants to avoid such effects, then the maximal value of the product $h h'$ should never exceed $N \mean{h}$, which can be guaranteed by the assumption that the hidden degree $h$ is smaller than the structural cutoff $h_s=\sqrt{N\mean{h}}$.
While this restricts $p(h,h')$ in \eqref{c1} within the interval $[0,1]$, banning degrees larger than the structural cutoff strongly violates the reality of scale-free networks, where degrees all the way up to the
natural cutoff $(N\mean{h})^{1/(\tau-1)}$ need to be considered. We therefore work with (although many asymptotically equivalent choices are possible; see \cite{hofstad2017b} and Appendix \ref{sec:chcomp})
\begin{equation}\label{c11}
p(h,h')= \min\Big(1,\frac{h h'}{N \mean{h}}\Big),
\end{equation}
putting no further restrictions on the range of the hidden variables (and hence degrees).
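The model just described is straightforward to simulate. A minimal sketch (function names are ours; hidden variables drawn by inverse-transform sampling from \eqref{eq:rhoh} with $h\geq 1$, and the empirical mean of $h$ standing in for $\mean{h}$):

```python
import random

def hidden_variable_graph(N, tau, h_min=1.0, seed=1):
    """Sample N hidden variables from rho(h) ~ h^(-tau), h >= h_min, and
    connect each pair independently with probability min(1, h h' / (N <h>))."""
    rng = random.Random(seed)
    # Inverse-transform sampling of the Pareto density with exponent tau.
    h = [h_min * rng.random() ** (-1.0 / (tau - 1.0)) for _ in range(N)]
    mean_h = sum(h) / N  # empirical mean standing in for <h>
    edges = set()
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < min(1.0, h[i] * h[j] / (N * mean_h)):
                edges.add((i, j))
    return h, edges
```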
In this paper, we shall work with $c(h)$, the local clustering coefficient of a randomly chosen vertex with hidden variable $h$. However, when studying local clustering in real-world data sets, we can only observe $\bar c(k)$, the local clustering coefficient of a vertex of degree $k$. In Appendix \ref{sec:hk} we show that the approximation $\bar{c}(h)\approx c(h)$ is highly accurate.
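Conversely, $\bar c(k)$ can be measured directly on any simulated or empirical graph by averaging local clustering coefficients per degree class; a short sketch (function names are ours):

```python
from collections import defaultdict

def clustering_spectrum(adj):
    """Map degree k to the average local clustering coefficient over all
    degree-k vertices; `adj` maps each vertex to its set of neighbours."""
    acc = defaultdict(list)
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # local clustering is undefined for degree < 2
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        acc[k].append(2 * links / (k * (k - 1)))
    return {k: sum(cs) / len(cs) for k, cs in acc.items()}

# A triangle: every vertex has degree 2 and local clustering 1.
print(clustering_spectrum({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}))  # → {2: 1.0}
```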
We start from the explicit expression for $c(h)$ \cite{boguna2003}, which measures the probability that two randomly chosen edges from $h$ are neighbors, i.e.,
\begin{equation}\label{int1}
c(h)=\int_{h'}\int_{h''} p(h'|h)p(h',h'')p(h''|h){\rm d} h''{\rm d} h',
\end{equation}
with $p(h'|h)$ the conditional probability that a randomly chosen edge from an $h$-vertex is connected to an $h'$-vertex and $p(h,h')$ as in \eqref{c11}. The goal is now to characterize the $c(h)$-curve (and hence the $\bar c(k)$-curve).
\section{Universal clustering spectrum}\label{sec:three}
The asymptotic evaluation of the double integral \eqref{int1} in the large-$N$ regime reveals three different ranges, defined in terms of the scaling relation between the hidden variable $h$ and the network size $N$. The three ranges together span the entire clustering spectrum as shown in Fig.~\ref{fig:curve}. The detailed calculations are deferred to Appendix \ref{sec:chcomp}.
\begin{figure}[tb]
\centering\includegraphics[width=0.4\textwidth]{curve.pdf}
\caption{Clustering spectrum $h\mapsto c(h)$ with three different ranges for $h$: the flat range, logarithmic decay, and the power-law decay.}
\label{fig:curve}
\end{figure}
The first range pertains to the smallest-degree nodes, i.e., vertices with a hidden variable that does not exceed $N^{\beta(\tau)}$ with $\beta(\tau)= \frac{\tau-2}{\tau-1}$. In this case we show that
\begin{equation}\label{eq:r1}
c(h)\propto N^{2-\tau}\ln N, \quad h\leq N^{\beta(\tau)}.
\end{equation}
In particular, here the local clustering does not depend on the degree and in fact corresponds with the large-$N$ behavior of the global clustering coefficient \cite{hofstad2017b,colomer2012}. Note that the interval $[0,\beta(\tau)]$ diminishes when $\tau$ is close to 2, a possible explanation for why the flat range associated with Range I is hard to recognize in some of the real-world data sets.
Range II considers nodes with hidden variables (degrees) above the threshold $N^{\beta(\tau)}$, but below the structural cutoff $\sqrt{N}$. These nodes start experiencing structural correlations, and close inspection of the integral \eqref{int1} yields
\begin{equation}\label{eq:r2}
c(h)\propto N^{2-\tau}\Big(1+\ln \Big(\frac{\sqrt{N}}{h}\Big)\Big), \quad N^{\beta(\tau)}\leq h \leq \sqrt{N}.
\end{equation}
This range shows relatively slow, logarithmic decay in the clustering spectrum, and is clearly visible in the ten data sets.
Range III considers hidden variables above the structural cutoff, when the restrictive effect of degree-degree correlations becomes more evident. In this range we find that
\begin{equation}\label{eq:r3}
c(h)\propto \frac{1}{N}\Big(\frac{h}{N}\Big)^{-2(3-\tau)}, \quad h\geq \sqrt{N}
\end{equation}
hence power-law decay with a power-law exponent $\alpha=2(3-\tau)$.
Such power-law decay has been observed in many real-world networks
\cite{vazquez2002,ravasz2003,serrano2006b,catanzaro2004, leskovec2008,krioukov2012}, where most networks were found to have the power-law exponent close to one. The asymptotic relation \eqref{eq:r3} shows that the exponent $\alpha$ decreases with $\tau$ and takes values in the entire range $(0,2)$. Table~\ref{tab:data} contains estimated values of $\alpha$ for the ten data sets.
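Given an estimate of $\tau$, relation \eqref{eq:r3} immediately predicts the decay exponent of the spectrum; for instance:

```python
def alpha(tau):
    # Predicted Range-III decay exponent of the clustering spectrum, eq. (r3).
    return 2 * (3 - tau)

print(alpha(2.25), alpha(2.5))  # → 1.5 1.0
```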
\section{Energy minimization}\label{sec:deep}
\begin{figure}[tb]
\centering
\subfloat[]{
\centering
\includegraphics[width=0.2\textwidth]{triangsmall.pdf}
\label{fig:contsmall}
}
\hspace{0.2cm}
\subfloat[]{
\centering
\includegraphics[width=0.2\textwidth]{trianglarge.pdf}
\label{fig:contlarge}
}
\caption{Orders of magnitude of the major contributions in the different $h$-ranges. The highlighted edges are present with asymptotically positive probability. (a) $h<\sqrt{N}$ (b) $h>\sqrt{N}$.}
\label{fig:major}
\end{figure}
We now explain why the clustering spectrum splits into three ranges, using an argument that minimizes the energy needed to create triangles among nodes with specific hidden variables.
In all three ranges for $h$, there is one type of `most likely' triangle, as shown in Fig.~\ref{fig:major}. This means that most triangles containing a vertex $v$ with hidden variable $h$ are triangles with two other vertices $v'$ and $v''$ with hidden variables $h'$ and $h''$ of specific sizes, depending on $h$. The probability that a triangle is present between $v$, $v'$ and $v''$ can be written as
\begin{equation}\label{eq:probtr}
\min\left(1,\frac{hh'}{N\mean{h}}\right)\min\left(1,\frac{hh''}{N \mean{h} }\right)\min\left(1,\frac{h'h''}{N\mean{h} }\right).
\end{equation}
While the probability that such a triangle exists among the three nodes thus increases with $h'$ and $h''$, the number of such nodes decreases with
$h'$ and $h''$ because vertices with higher $h$-values are rarer. Therefore, the maximum contribution to $c(h)$ results from a trade-off between large enough $h',h''$ for likeliness of occurrence of the triangle, and $h',h''$ small enough to have enough copies. Thus, having $h'> N\mean{h}/h$ is not optimal, since then the probability that an edge exists between $v$ and $v'$ no longer increases with $h'$. This results in the bound
\begin{equation}\label{eq:b1}
h',h''\leq \frac{N\mean{h} }{h}.
\end{equation}
Similarly, $h'h''> N\mean{h}$ is also suboptimal, since then further increasing $h'$ and $h''$ does not increase the probability of an edge between $v'$ and $v''$. This gives as a second bound
\begin{equation}\label{eq:b2}
h'h''\leq N\mean{h}.
\end{equation}
In Ranges I and II, $h<\sqrt{N\mean{h}}$, so that $N\mean{h}/h>\sqrt{N\mean{h}}$. In this situation we reach bound~\eqref{eq:b2} before we reach bound~\eqref{eq:b1}. Therefore, the maximum contribution to $c(h)$ comes from $h'h''\approx N$, where also $h',h''<N\mean{h}/h$ because of the bound~\eqref{eq:b1}. Here the probability that the edge between $v'$ and $v''$ exists is large, while the other two edges have a small probability to be present, as shown in Fig.~\ref{fig:contsmall}. Note that for $h$ in Range I, the bound \eqref{eq:b1} is superfluous, since in this regime $N\mean{h}/h$ exceeds the natural cutoff $h_c=(N\mean{h})^{1/(\tau-1)}$, while the network does not contain vertices with hidden variables larger than $h_c$. This bound indicates the minimal value of $h'$ such that an $h$-vertex is guaranteed to be connected to an $h'$-vertex. Thus, vertices in Range I are not even guaranteed to have connections to the highest degree vertices, hence they are not affected by the single-edge constraints. Therefore the value of $c(h)$ in Range I is independent of $h$.
In Range III, $h>\sqrt{N\mean{h}}$, so that $N\mean{h}/h<\sqrt{N\mean{h}}$. Therefore, we reach bound~\eqref{eq:b1} before we reach bound~\eqref{eq:b2}. Thus, we maximize the contribution to the number of triangles by choosing $h',h''\approx N\mean{h}/h$. Then the probability that the edge from $v$ to $v'$ and from $v$ to $v''$ is present is large, while the probability that the edge between $v'$ and $v''$ exists is small, as illustrated in Fig.~\ref{fig:contlarge}.
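This trade-off can be illustrated numerically by maximizing the per-log-bin contribution $\rho(h')\rho(h'')\,h'h''$ times the triangle probability \eqref{eq:probtr} over a grid. The sketch below (illustrative values $N=10^6$, $\mean{h}=1$, $\tau=2.5$, so $\sqrt{N\mean{h}}=10^3$) recovers the ridge $h'h''\approx N\mean{h}$ for $h<\sqrt{N\mean{h}}$ and the corner $h'\approx h''\approx N\mean{h}/h$ for $h>\sqrt{N\mean{h}}$:

```python
import math

N, tau = 1e6, 2.5  # illustrative values, with <h> = 1

def p(a, b):
    # connection probability, eq. (c11)
    return min(1.0, a * b / N)

def best_pair(h, grid):
    # per-log-bin contribution of neighbours (u, v) to c(h):
    # rho(u) rho(v) u v * triangle prob ~ (u v)^(1-tau) p(h,u) p(h,v) p(u,v)
    contrib = lambda u, v: (u * v) ** (1 - tau) * p(h, u) * p(h, v) * p(u, v)
    return max(((u, v) for u in grid for v in grid), key=lambda uv: contrib(*uv))

grid = [10 ** (4 * t / 60) for t in range(61)]  # 1 .. 1e4, log-spaced
u, v = best_pair(100.0, grid)                   # Ranges I-II: h < sqrt(N)
print(round(math.log10(u * v), 1))              # → 6.0  (ridge at u v ≈ N)
u, v = best_pair(1e4, grid)                     # Range III: h > sqrt(N)
print(u, v)                                     # u ≈ v ≈ N/h = 100
```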
\section{Convergence rate}\label{sec:deep2}
We next ask how large networks should be, or become, before they reveal the features of the universal clustering spectrum.
In other words, while the results in this paper are shown for the large-$N$ limit, for what finite $N$-values can we expect to see the different ranges and clustering decay?
To bring networks of different sizes $N$ on a comparable footing, we consider
\begin{equation}\label{eq:sigmam}
\sigma_N(t)=\frac{\ln\left(c(h)/c(h_c)\right)}{\ln(N\mean{h})}, \quad h=(N\mean{h})^t,
\end{equation}
for $0\leq t \leq \tfrac{1}{\tau-1}$. The slope of $\sigma_N(t)$ can be interpreted as a measure of the decay of $c(h)$ at $h=(N\mean{h})^t$, and all curves share the same right end of the spectrum; see Appendix~\ref{sec:hcder} for more details.
Figure~\ref{fig:chfinite} shows this rescaled clustering spectrum for synthetic networks generated with the hidden variable model with $\tau=2.25$. Already $10^4$ vertices reveal the essential features of the spectrum: the decay and the three ranges. Increasing the network size further to $10^6$ and $10^8$ nodes shows that the spectrum settles on the limiting curve. Here we note that the real-world networks reported in Figs.~\ref{fig:chas} and~\ref{fig:ch} are also of order $10^5$-$10^6$ nodes, see Table~\ref{tab:data}.
\begin{figure}[ht]
\centering\includegraphics[width=0.4\textwidth]{clustN.pdf}
\caption{$\sigma_N(t)$ for $N=10^4,10^6$ and $10^8$ together with the limiting function, using $\tau=2.25$, for which $\tfrac{1}{\tau-1}=0.8$. }
\label{fig:chfinite}
\end{figure}
\begin{table}[htbp]
\centering
\begin{ruledtabular}
\begin{tabular}{lrrrr}
& $N$ & $\tau$ & g.o.f. &$\alpha$ \\
\textbf{Hudong} & 1,984,484 & 2.30 & 0.00 & 0.85 \\
\textbf{Baidu} & 2,141,300 & 2.29 & 0.00 & 0.80 \\
\textbf{Wordnet} & 146,005 & 2.47 & 0.00 & 1.01 \\
\textbf{Google web} & 875,713 & 2.73 & 0.00 & 1.03 \\
\textbf{AS-Skitter} & 1,696,415 & 2.35 & 0.06 & 1.12 \\
\textbf{TREC-WT10g} & 1,601,787 & 2.23 & 0.00 & 0.99 \\
\textbf{Wiki-talk} & 2,394,385 & 2.46 & 0.00 & 1.54 \\
\textbf{Catster/Dogster} & 623,766 & 2.13 & 0.00 & 1.20 \\
\textbf{Gowalla} & 196,591 & 2.65 & 0.80 & 1.24 \\
\textbf{Youtube} & 1,134,890 & 2.22 & 0.00 & 1.05
\end{tabular}
\end{ruledtabular}%
\caption{Data sets. $N$ denotes the number of vertices, $\tau$ the exponent of the tail of the degree distribution estimated by the method proposed in~\cite{clauset2009} together with the goodness of fit criterion proposed in~\cite{clauset2009} (when the goodness of fit is at least 0.10, a power-law tail cannot be rejected), and $\alpha$ denotes the exponent of $c(k)$.}
\label{tab:data}%
\end{table}%
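The $\tau$ column was obtained with the method of~\cite{clauset2009}. Its core ingredient, for a given cutoff $k_{\min}$, is the maximum-likelihood estimator of the tail exponent; the Python sketch below (our simplification) implements only this estimator, not the $k_{\min}$ selection or the goodness-of-fit test of the full method.

```python
import math, random

def tau_mle(degrees, k_min):
    """Continuous MLE for the tail exponent tau on the sample k >= k_min
    (the estimator at the heart of the Clauset et al. method; the full
    procedure also selects k_min and computes a KS-based goodness of fit)."""
    tail = [k for k in degrees if k >= k_min]
    return 1.0 + len(tail) / sum(math.log(k / k_min) for k in tail)

# sanity check on a synthetic Pareto sample with true exponent tau = 2.5,
# drawn by inverse-transform sampling
random.seed(0)
tau = 2.5
sample = [(1.0 - random.random()) ** (-1.0 / (tau - 1.0)) for _ in range(200000)]
print(round(tau_mle(sample, k_min=1.0), 2))  # close to 2.5
```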
Figure~\ref{fig:chfinite} also reveals a potential pitfall when the goal is to obtain statistically accurate estimates for the slope of $c(h)$: observe the extremely slow convergence to the limiting curve for $N=\infty$, a well-documented property of certain clustering measures~\cite{boguna2009,colomer2012,janssen2015,hofstad2017b}. In Appendix~\ref{sec:hcder}
we again use the integral expression \eqref{int1} to characterize the limiting curve for $N=\infty$ and
the rate of convergence as a function of $N$; indeed, extreme $N$-values are required for statistically reliable slope estimates at, e.g., $t$-values of $\tfrac12$ and $\tfrac{1}{\tau-1}$, as is also apparent from visual inspection of Fig.~\ref{fig:chfinite}.
Therefore, the estimates in Table~\ref{tab:data} only serve as indicative values of $\alpha$. Finally, observe that Range II disappears in the limiting curve, due to the rescaling in \eqref{eq:sigmam}, but again only for extreme $N$-values. Because this paper is about structure rather than statistical estimation, the slow convergence in fact provides additional support for the persistence of Range II in Figs.~\ref{fig:chas} and~\ref{fig:ch}.
Table~\ref{tab:data} also shows that the relation $\alpha=-2(3-\tau)$ is inaccurate for the real-world data sets, in turn affecting the theoretical boundaries of the three regimes indicated in Fig.~\ref{fig:ch}. One explanation for this inaccuracy is that the real-world networks might not follow pure power-law distributions, as measured by the goodness of fit criterion in Table~\ref{tab:data}, and visualized in Appendix~\ref{sec:degree}. Furthermore, real-world networks are usually highly clustered and contain community structures, whereas the hidden variable model is locally tree-like. These modular structures may explain, for example, why the power-law decay of the hidden variable model is less pronounced in the three social networks of Fig.~\ref{fig:ch}. It is remarkable that despite these differences between hidden variable models and real-world networks, the global shape of the $c(k)$ curve of the hidden variable model is still visible in these heavy-tailed real-world networks.
\section{Discussion}\label{sec:disc}
The hidden variable model gives rise to {\it single-edge} networks in which pairs of vertices can only be connected once. Hierarchical modularity and the decaying clustering spectrum have been attributed to this restriction that no two vertices have more than one edge connecting them~\cite{pastor2001b, maslov2004,park2003,newman2002assortative,newman2003}. The physical intuition is that the single-edge constraint leads to far fewer connections between high-degree vertices than anticipated based on randomly assigned edges.
We have indeed confirmed this intuition, not only through analytically revealing the universal clustering curve, but also by providing an alternative derivation of the three ranges based on energy minimization and structural correlations.
We now show that the clustering spectrum revealed using the hidden variable model also appears for a second widely studied null model. This second model cannot be the Configuration Model (CM), which preserves the degree distribution by making connections between vertices in the most random way possible~\cite{bollobas1980, newman2001}. Indeed, because of the random edge assignment, the CM has no degree correlations, leading in the case of scale-free networks with diverging second moment to uncorrelated networks with non-negligible fractions of self-loops (a vertex joined to itself) and multiple connections (two vertices connected by more than one edge). This picture changes dramatically when self-loops and multiple edges are avoided,
a restriction felt mostly by the high-degree vertices, which can no longer establish multiple edges among each other.
We therefore consider the Erased Configuration Model (ECM) that takes a sample from the CM and then erases all the self-loops and multiple edges. While this removes some of the edges in the graph, thus violating the hard constraint, only a small proportion of the edges is removed, so that the degree of vertex $j$ in ECM is still close to $D_j$~\cite[Chapter 7]{hofstad2009}. In the ECM, the probability that a vertex with degree $D_i$ is connected to a vertex with degree $D_j$ can be approximated by $1-{\rm e}^{-D_iD_j/\mean{D}N}$~\cite[Eq.(4.9)]{hofstad2005}. Therefore, we expect the ECM and the hidden variable model to have similar properties (see e.g.~\cite{hofstad2017b}) when we choose
\begin{equation}\label{eq:conecm}
p(h,h')= 1-{\rm e}^{-h h'/N\mean{h}}\approx\frac{h h'}{N\mean{h}}.
\end{equation}
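A quick numerical check (a sketch with parameters chosen by us) of why \eqref{eq:conecm} makes the two models comparable: for typical pairs the ECM-style probability coincides with the hidden variable form $hh'/N\mean{h}$, while for pairs of hubs it saturates below $1$, mimicking the single-edge constraint.

```python
import math

def p_ecm(h1, h2, N, mean_h):
    """Connection probability of Eq. (conecm): 1 - exp(-h h'/(N <h>))."""
    return 1.0 - math.exp(-h1 * h2 / (N * mean_h))

N, mean_h = 10**5, 3.0
# typical pair: indistinguishable from the linear form h h'/(N <h>)
print(p_ecm(5, 8, N, mean_h), 5 * 8 / (N * mean_h))
# pair of hubs: the probability saturates near 1 instead of exceeding it
print(p_ecm(5000, 8000, N, mean_h))
```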
Figure~\ref{fig:ECMhidden} illustrates how both null models generate highly similar spectra, which provides additional support for the claim that the clustering spectrum is a universal property of simple scale-free networks. The ECM is more difficult to deal with than the hidden variable model, since edges in the ECM are not independent. We expect, however, that these dependencies vanish in the $k\mapsto\bar{c}(k)$ curve. Establishing the universality of the $k\mapsto \bar c(k)$ curve for other random graph null models such as the ECM, networks with an underlying geometric space~\cite{serrano2008} or hierarchical configuration models~\cite{stegehuis2015} is a major research direction.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{ECMhidden.pdf}
\caption{$\bar c(k)$ for a hidden variable model with connection probabilities~\eqref{eq:conecm} (solid line) and an erased configuration model (dashed line). The presented values of $\bar c(k)$ are averages over $10^4$ realizations of networks of size $N=10^5$. }
\label{fig:ECMhidden}
\end{figure}
The ECM and the hidden variable model are both null models with soft constraints on the degrees. Putting hard constraints on the degrees, as in the CM, has the appealing property that simple graphs generated by this null model are uniform samples of all simple graphs with the same degree sequence. Dealing with such uniform samples is notoriously hard when the second moment of the degrees diverges, for example because the CM will yield many edges between high-degree vertices. This makes sampling uniform graphs difficult~\cite{milo2003, viger2005,delgenio2010}. Thus, the joint requirement of hard degree and single-edge constraints, as in the CM, presents formidable technical challenges. Whether our results for the $k\mapsto\bar{c}(k)$ curve for soft-constraint models also carry over to these uniform simple graphs is a challenging open problem.
In this paper we have investigated the presence of triangles in the hidden variable model.
We have shown that by first conditioning on the node degree, there arises a unique
`most likely' triangle with two other vertices of specific degrees. We have not only explained this insight heuristically, but it is also reflected in the elaborate analysis of the double integral for $c(h)$ in Appendix \ref{sec:chcomp}. As such, we have introduced an intuitive and tractable mathematical method for asymptotic triangle counting. It is likely that the method carries over to counting other motifs, such as squares, or complete graphs of larger sizes. For any given motif, after first conditioning on the node degree, we again expect to find specific configurations that are most likely. Further mathematical challenges need to be overcome, though, because we expect that the `most likely' configurations critically depend on the precise motif topologies and the associated energy minimization problems.
\acknowledgements
This work is supported by NWO TOP grant 613.001.451 and by the NWO Gravitation Networks grant 024.002.003.
The work of RvdH is further supported by the NWO VICI grant 639.033.806. The work of JvL is further supported by an NWO TOP-GO grant and by an ERC Starting Grant.
\section{One-way definability in the sweeping case}%
\label{sec:characterization-sweeping}
In the previous section we have shown the implication \PR1 $\Rightarrow$ \PR2 for a functional sweeping
transducer $\cT$. Here we close the cycle by proving the implications \PR2 $\Rightarrow$ \PR3 and \PR3 $\Rightarrow$ \PR1.
In particular, we show how to derive the existence of successful runs admitting a $\boldsymbol{B}$-decomposition
and construct a one-way transducer $\cT'$ that simulates $\cT$ on those runs.
This essentially proves Theorem~\ref{thm:main2} in the sweeping case.
\medskip
\subsection*{Run decomposition.}
We begin by giving the definition of $\boldsymbol{B}$-decomposition for a run $\rho$ of $\cT$.
Intuitively, a $\boldsymbol{B}$-decomposition of $\rho$ identifies factors of $\rho$ that can be easily simulated
in a one-way manner. The definition below describes precisely the shape of these factors.
First we need to recall the notion of almost periodicity:
a word $w$ is \emph{almost periodic with bound $p$} if $w=w_0~w_1~w_2$
for some words $w_0,w_2$ of length at most $p$ and some word $w_1$
of period at most $p$.
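For intuition, almost periodicity with bound $p$ can be checked naively by mirroring the definition literally; the following Python sketch (ours, for illustration only, and far slower than necessary) does exactly that.

```python
def min_period(w):
    """Smallest q such that w[i] == w[i + q] for all valid i (0 if w is empty)."""
    if not w:
        return 0
    return next(q for q in range(1, len(w) + 1)
                if all(w[i] == w[i + q] for i in range(len(w) - q)))

def almost_periodic(w, p):
    """Is w = w0 w1 w2 with |w0|, |w2| of length at most p and w1 of period
    at most p?  Tries every admissible prefix/suffix length."""
    n = len(w)
    return any(min_period(w[i:n - j]) <= p
               for i in range(min(p, n) + 1)
               for j in range(min(p, n) + 1) if i + j <= n)

assert min_period("abcabcab") == 3
assert almost_periodic("xabababy", 2)       # strip 'x' and 'y', period 2
assert not almost_periodic("xabababy", 1)
```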
We need to work with subsequences of the run $\rho$ that are induced by
particular sets of locations, not necessarily consecutive.
Recall that $\rho[\ell,\ell']$ denotes the factor of $\rho$
delimited by two locations $\ell\mathrel{\unlhd}\ell'$. Similarly, given any
set $Z$ of locations, we denote by $\rho|Z$ the subsequence of $\rho$
induced by $Z$. Note that, even though $\rho|Z$ might not be a valid
run
\label{rhoZ}
of the transducer, we can still refer to the number of transitions in it
and to the size of the produced output $\out{\rho|Z}$.
Formally, a transition in $\rho|Z$ is a transition from some $\ell$ to $\ell'$,
where both $\ell,\ell'$ belong to $Z$. The output $\out{\rho|Z}$ is the
concatenation of the outputs of the transitions of $\rho| Z$
(according to the order given by $\rho$).
\begin{defi}\label{def:factors-sweeping}
Consider a factor $\rho[\ell,\ell ']$ of a run $\rho$ of $\cT$,
where $\ell=(x,y)$, $\ell'=(x',y')$ are two locations with $x \le x'$.
We call $\rho[\ell,\ell']$
\begin{itemize}
\medskip
\item \parbox[t]{\dimexpr\textwidth-\leftmargin}
\vspace{-2.75mm}
\begin{wrapfigure}{r}{7.5cm}
\end{wrapfigure}
%
a \emph{floor} if $y=y'$ is even, namely, if $\rho[\ell,\ell']$
lies entirely on the same level and is rightward oriented;
}\noparbreak
\bigskip
\bigskip
\item \parbox[t]{\dimexpr\textwidth-\leftmargin}
\vspace{-2.75mm}
\begin{wrapfigure}{r}{7.5cm}
\vspace{-22mm}
\input{diagonal-sweeping}
\vspace{-4mm}
\end{wrapfigure}
%
a \emph{$\boldsymbol{B}$-diagonal} if
there is a sequence
$\ell = \ell_0 \mathrel{\unlhd} \ell_1 \mathrel{\unlhd} \dots \mathrel{\unlhd} \ell_{2n+1} = \ell'$,
where each $\rho[\ell_{2i+1},\ell_{2i+2}]$ is a floor,
each $\rho[\ell_{2i}, \ell_{2i+1}]$ produces an output
of length at most $2\boldsymbol{H}\boldsymbol{B}$,
and the position of each $\ell_{2i}$
is to the left of the position of $\ell_{2i+1}$;
}
\bigskip
\smallskip
\item \parbox[t]{\dimexpr\textwidth-\leftmargin}
\vspace{-2.75mm}
\begin{wrapfigure}{r}{7.5cm}
\vspace{-10mm}
\input{block-sweeping}
\vspace{-5mm}
\end{wrapfigure}
%
a \emph{$\boldsymbol{B}$-block} if
the output produced by $\rho[\ell,\ell']$ is almost periodic with bound $2\boldsymbol{B}$,
and the output produced by the subsequence $\rho|Z$,
where $Z=[\ell,\ell'] ~\setminus~ \big([x,x']\times[y,y']\big)$,
has length at most $2\boldsymbol{H}\boldsymbol{B}$.
}
\vspace{8mm}
\end{itemize}
\end{defi}
Before continuing we give some intuition on the above definitions.
The simplest concept is that of floor, which is a rightward oriented factor of a run.
Diagonals are sequences of consecutive floors interleaved by factors that
produce small (bounded) outputs. We see an example of a diagonal
in Figure \ref{fig:diagonal-sweeping}, where we marked the important
locations and highlighted with thick arrows the two floors that form
the diagonal. The factors of the diagonal that are not floors are
represented instead by dotted arrows.
The third notion is that of a block.
An important constraint in the definition of a block
is that the produced output must be almost periodic, with a small enough bound.
In Figure \ref{fig:block-sweeping}, the periodic output is represented by the
thick arrows, either solid or dotted, that go from location $\ell$ to
location $\ell'$.
In addition, the block must satisfy a constraint on the length of
the output produced by the subsequence $\rho|Z$, where
$Z=[\ell,\ell'] ~\setminus~ \big([x,x']\times[y,y']\big)$.
The latter set $Z$ consists of locations that are either to the left
of the position of $\ell$ or to the right of the position of $\ell'$.
For example, in Figure \ref{fig:block-sweeping} the set $Z$ coincides
with the area outside the hatched rectangle. Accordingly, the portion
of the subsequence $\rho|Z$ is represented by the dotted bold arrows.
Diagonals and blocks are used as key objects to derive a notion of
decomposition for a run of a sweeping transducer. We formalize this
notion below.
\begin{defi}\label{def:decomposition-sweeping}
A \emph{$\boldsymbol{B}$-decomposition} of a run $\rho$ of $\cT$ is a factorization
$\prod_i\,\rho[\ell_i,\ell_{i+1}]$ of $\rho$ into $\boldsymbol{B}$-diagonals and $\boldsymbol{B}$-blocks.
\end{defi}
Figure~\ref{fig:decomposition-sweeping} gives an example of such a
decomposition.
Each factor is either a diagonal $D_i$ or a block $B_i$.
Inside each factor we highlight by thick arrows the portions
of the run that can be simulated by a one-way transducer,
either because they are produced from left to right (thus forming diagonals)
or because they are periodic (thus forming blocks).
We also recall from Definition \ref{def:factors-sweeping}
that most of the output is produced inside the
hatched rectangles, since the output produced by a diagonal
or a block outside the corresponding blue or red hatched rectangle
has length at most $2\boldsymbol{H}^2\boldsymbol{B}$.
Finally, we observe that the locations delimiting the factors of the
decomposition are arranged following the natural order of positions and levels.
All together, this means that the output produced by a run that enjoys
a decomposition can be simulated in a one-way manner.
\input{decomposition-sweeping}
\medskip
\subsection*{From periodicity of inversions to existence of decompositions.}
Now that we have set up the definition of $\boldsymbol{B}$-decomposition, we turn
towards proving the implication \PR2 $\Rightarrow$ \PR3 of Theorem \ref{thm:main2}.
In fact, we will prove a slightly stronger result than \PR2 $\Rightarrow$ \PR3,
which is stated further below.
Formally, when we say that a run \emph{$\rho$ satisfies \PR2} we mean
that for every inversion $(L_1,\ell_1,L_2,\ell_2)$ of $\rho$, the word
$\out{\tr{\ell_1}} ~ \out{\rho[\ell_1,\ell_2]} ~ \out{\tr{\ell_2}}$
has period $\gcd(|\out{\tr{\ell_1}}|, |\out{\tr{\ell_2}}|) \le \boldsymbol{B}$.
We aim at proving that every run that satisfies \PR2 enjoys a decomposition,
independently of whether other runs do or do not satisfy \PR2:
\begin{prop}\label{prop:decomposition-sweeping}
If $\rho$ is a run of $\cT$ that satisfies \PR2, then $\rho$ admits a $\boldsymbol{B}$-decomposition.
\end{prop}
Let us fix a run $\rho$ of $\cT$ and assume that it satisfies \PR2.
To show that $\rho$ admits a $\boldsymbol{B}$-decomposition, we will identify
the blocks of the decomposition as equivalence classes of a suitable
relation based on inversions (cf.~Definition \ref{def:crossrel}).
Then, we will use combinatorial arguments (notably,
Lemmas \ref{lem:output-minimal-sweeping} and \ref{lem:overlapping})
to prove that the constructed blocks satisfy the desired properties.
Finally, we will show how the resulting equivalence classes form all
the necessary blocks of the decomposition, in the sense that the factors
in between those classes are diagonals.
We begin by introducing the equivalence relation by means of which we can
then identify the blocks of a decomposition of $\rho$.
\begin{defi}\label{def:crossrel}
Let $\ensuremath{\mathrel{\text{\sf S}}}$ be the relation that pairs every two locations $\ell,\ell'$ of $\rho$
whenever there is an inversion $(L_1,\ell_1,L_2,\ell_2)$ of $\rho$ such that
$\ell_1 \mathrel{\unlhd} \ell,\ell' \mathrel{\unlhd} \ell_2$, namely, whenever $\ell$ and $\ell'$
occur within the same inversion.
Let $\simeq$ be the reflexive and transitive closure of $\ensuremath{\mathrel{\text{\sf S}}}$.
\end{defi}
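Since each inversion covers the interval of locations between its anchor points, and since $\simeq$ is the transitive closure of $\ensuremath{\mathrel{\text{\sf S}}}$, the non-singleton $\simeq$-classes can be computed by standard interval merging. A Python sketch (ours; inversions are abstracted as intervals $(\ell_1,\ell_2)$ in run order, represented by integers):

```python
def simeq_classes(inversions):
    """Non-singleton classes of the transitive closure of S, with each
    inversion abstracted as the interval (l1, l2) of locations it covers:
    two inversions end up in the same class iff their intervals overlap,
    possibly through a chain of intermediate inversions."""
    classes = []
    for lo, hi in sorted(inversions):
        if classes and lo <= classes[-1][1]:          # overlaps previous class
            classes[-1][1] = max(classes[-1][1], hi)  # extend that class
        else:
            classes.append([lo, hi])                  # start a new class
    return [tuple(c) for c in classes]

# three inversions, the first two overlapping, the third separate
print(simeq_classes([(0, 5), (3, 9), (12, 15)]))  # [(0, 9), (12, 15)]
```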
It is easy to see that every equivalence class of $\simeq$ is a convex
subset with respect to the run order on locations of $\rho$.
Moreover, every \emph{non-singleton} equivalence class of $\simeq$ is a
union of a series of inversions that are two-by-two overlapping.
One can refer to Figure~\ref{fig:overlapping}
for an intuitive account of what we mean by two-by-two overlapping: the thick
arrows represent factors of the run that lie entirely inside an $\simeq$-equivalence class,
each inversion is identified by a pair of consecutive anchor points with the same
color. According to the run order, between every pair of anchor points with the
same color, there is at least one anchor point of different color: this means that
the inversions corresponding to the two colors are overlapping.
Formally, we say that an inversion $(L_1,\ell_1,L_2,\ell_2)$ \emph{covers}
a location $\ell$ when $\ell_1 \mathrel{\unlhd} \ell \mathrel{\unlhd} \ell_2$. We say that
two inversions $(L_1,\ell_1,L_2,\ell_2)$ and $(L_3,\ell_3,L_4,\ell_4)$
are \emph{overlapping} if $(L_1,\ell_1,L_2,\ell_2)$ covers $\ell_3$
and $(L_3,\ell_3,L_4,\ell_4)$ covers $\ell_2$ (or the other way around).
\input{overlapping}
The next lemma uses the fact that $\rho$ satisfies \PR2 to deduce that
the output produced inside every $\simeq$-equivalence class has
period at most $\boldsymbol{B}$. Note that the proof below does not exploit
the fact that the transducer is sweeping.
\begin{lem}\label{lem:overlapping}
If $\rho$ satisfies \PR2 and $\ell\mathrel{\unlhd}\ell'$ are two locations of $\rho$
such that $\ell \simeq \ell'$, then the output $\out{\rho[\ell,\ell']}$
produced between these locations has period at most $\boldsymbol{B}$.
\end{lem}
\begin{proof}
The claim for $\ell=\ell'$ holds trivially, so we assume that $\ell\mathrel{\lhd}\ell'$.
Since the $\simeq$-equivalence class that contains $\ell,\ell'$ is non-singleton,
we know that there is a series of inversions
\[
(L_0,\ell_0,L_1,\ell_1) \quad
(L_2,\ell_2,L_3,\ell_3) \quad
\dots\dots \quad
(L_{2k},\ell_{2k},L_{2k+1},\ell_{2k+1})
\]
that are two-by-two overlapping and such that
$\ell_0\mathrel{\unlhd}\ell\mathrel{\lhd}\ell'\mathrel{\unlhd}\ell_{2k+1}$.
Without loss of generality, we can assume that every
inversion $(L_{2i},\ell_{2i},L_{2i+1},\ell_{2i+1})$ is \emph{maximal}
in the following sense: there is no other inversion
$(\tilde L,\tilde\ell,\tilde L',\tilde\ell') \neq (L_{2i},\ell_{2i},L_{2i+1},\ell_{2i+1})$
such that $\tilde\ell \mathrel{\unlhd} \ell_{2i} \mathrel{\unlhd} \ell_{2i+1} \mathrel{\unlhd} \tilde\ell'$.
For the sake of brevity, let $v_i = \out{\tr{\ell_i}}$ and $p_i = | v_i |$.
Since $\rho$ satisfies \PR2 (recall Proposition~\ref{prop:periodicity-sweeping}), we know that, for all $i=0,\dots,k$, the word
\[
v_{2i} ~ \out{\rho[\ell_{2i},\ell_{2i+1}]} ~ v_{2i+1}
\]
has period that divides both $p_{2i}$ and $p_{2i+1}$ and is at most $\boldsymbol{B}$.
In order to show that the period of $\out{\rho[\ell,\ell']}$ is also bounded
by $\boldsymbol{B}$, it suffices to prove the following claim by induction on $i$:
\begin{clm}
For all $i=0,\dots,k$, the word
$\outb{\rho[\ell_0,\ell_{2i+1}]} \: v_{2i+1}$
has period at most $\boldsymbol{B}$ that divides $p_{2i+1}$.
\end{clm}
The base case $i=0$ follows immediately from our hypothesis,
since $(L_0,\ell_0,L_1,\ell_1)$ is an inversion.
For the inductive step, we assume that the claim holds for $i<k$,
and we prove it for $i+1$. First of all, we factorize our word as follows:
\[
\outb{\rho[\ell_0,\ell_{2i+3}]} ~ v_{2i+3}
~=~
\rightward{ \underbracket[0.5pt]{ \phantom{
\outb{\rho[\ell_0,\ell_{2i+2}]} ~
\outb{\rho[\ell_{2i+2},\ell_{2i+1}]} ~ } }%
_{\text{period } p_{2i+1}} }
\outb{\rho[\ell_0,\ell_{2i+2}]}
\overbracket[0.5pt]{ ~ \outb{\rho[\ell_{2i+2},\ell_{2i+1}]} ~
\outb{\rho[\ell_{2i+1},\ell_{2i+3}]} ~
v_{2i+3} }%
^{\text{periods } p_{2i+2} \text{ and } p_{2i+3}}
.
\]
By the inductive hypothesis, the output produced between $\ell_0$
and $\ell_{2i+1}$, extended to the right with $v_{2i+1}$,
has period that divides $p_{2i+1}$.
Moreover, because $\rho$ satisfies \PR2 and
$(L_{2i+2},\ell_{2i+2},L_{2i+3},\ell_{2i+3})$ is an inversion,
the output produced between the locations
$\ell_{2i+2}$ and $\ell_{2i+3}$,
extended to the left with $v_{2i+2}$ and to the
right with $v_{2i+3}$, has period that divides both $p_{2i+2}$ and $p_{2i+3}$.
Note that this is not yet sufficient for applying Fine-Wilf's theorem,
since the common factor $\outb{\rho[\ell_{2i+2},\ell_{2i+1}]}$
might be too short (possibly just equal to $v_{2i+2}$).
The key argument here is to prove that the interval $[\ell_{2i+2},\ell_{2i+1}]$
is covered by an inversion
which is different from those that we considered above, namely
$(L_{2i+2},\ell_{2i+2},L_{2i+1},\ell_{2i+1})$. For example,
$[\ell_2,\ell_1]$ in
Figure~\ref{fig:overlapping} is covered by the inversion
$(L_2,\ell_2,L_1,\ell_1)$.
For this, we have to prove that the anchor points $\ell_{2i+2}$ and $\ell_{2i+1}$
are correctly ordered w.r.t.~$\mathrel{\unlhd}$ and the ordering of positions
(recall Definition~\ref{def:inversion-sweeping}).
First, we observe that $\ell_{2i+2} \mathrel{\unlhd} \ell_{2i+1}$, since
$(L_{2i},\ell_{2i},L_{2i+1},\ell_{2i+1})$ and $(L_{2i+2},\ell_{2i+2},L_{2i+3},\ell_{2i+3})$
are overlapping inversions.
Next, we prove that the position of $\ell_{2i+1}$ is strictly to the left of
the position of $\ell_{2i+2}$. By way of contradiction, suppose that this is
not the case, namely, $\ell_{2i+1}=(x_{2i+1},y_{2i+1})$, $\ell_{2i+2}=(x_{2i+2},y_{2i+2})$,
and $x_{2i+1} \ge x_{2i+2}$. Because $(L_{2i},\ell_{2i},L_{2i+1},\ell_{2i+1})$ and
$(L_{2i+2},\ell_{2i+2},L_{2i+3},\ell_{2i+3})$ are inversions,
we know that $\ell_{2i+3}$ is strictly to the left of $\ell_{2i+2}$
and $\ell_{2i+1}$ is strictly to the left of $\ell_{2i}$.
This implies that $\ell_{2i+3}$ is strictly to the left of $\ell_{2i}$,
and hence $(L_{2i},\ell_{2i},L_{2i+3},\ell_{2i+3})$ is also an inversion.
Moreover, recall that $\ell_{2i} \mathrel{\unlhd} \ell_{2i+2} \mathrel{\unlhd} \ell_{2i+1} \mathrel{\unlhd} \ell_{2i+3}$.
This contradicts the maximality of $(L_{2i},\ell_{2i},L_{2i+1},\ell_{2i+1})$,
which we assumed at the beginning of the proof.
Therefore, we must conclude that $\ell_{2i+1}$ is strictly to the left of $\ell_{2i+2}$.
Now that we know that $\ell_{2i+2} \mathrel{\unlhd} \ell_{2i+1}$ and that $\ell_{2i+1}$ is to the left of $\ell_{2i+2}$,
we derive the existence of the inversion $(L_{2i+2},\ell_{2i+2},L_{2i+1},\ell_{2i+1})$.
Again, because $\rho$ satisfies \PR2, we know that the word
$v_{2i+2} ~ \out{\rho[\ell_{2i+2},\ell_{2i+1}]} ~ v_{2i+1}$ has period at most $\boldsymbol{B}$
that divides $p_{2i+2}$ and $p_{2i+1}$.
Summing up, we have:
\begin{enumerate}
\item $w_1 ~=~ \outb{\rho[\ell_0,\ell_{2i+1}]} ~ v_{2i+1}$ has period $p_{2i+1}$,
\item $w_2 ~=~ v_{2i+2} ~ \outb{\rho[\ell_{2i+2},\ell_{2i+1}]} ~ v_{2i+1}$
has period $p = \gcd(p_{2i+2},p_{2i+1})$,
\item $w_3 ~=~ v_{2i+2} ~ \outb{\rho[\ell_{2i+2},\ell_{2i+3}]} ~ v_{2i+3}$
has period $p' = \gcd(p_{2i+2},p_{2i+3})$.
\end{enumerate}
We are now ready to exploit our stronger variant of Fine-Wilf's theorem,
that is, Theorem~\ref{thm:fine-wilf}.
We begin with (1) and (2) above.
Let $w = \outb{\rho[\ell_{2i+2},\ell_{2i+1}]}~ v_{2i+1}$ be the common suffix of
$w_1$ and $w_2$. First note that since $p$ divides $p_{2i+2}$, the
word $w$ is also a prefix of $w_2$, thus we can write
$w_2=w\,w'_2$. Second, note that the length of $w$ is at least
$|v_{2i+1}| = p_{2i+1} = p_{2i+1} + p - \gcd(p_{2i+1},p)$. We can
apply now Theorem~\ref{thm:fine-wilf} to $w_1=w'_1\,w$ and
$w_2=w\,w'_2$ and obtain:
\begin{enumerate}
\setcounter{enumi}{3}
\item $w_4 ~=~ w'_1 \: w \: w'_2
~=~ \outb{\rho[\ell_0,\ell_{2i+2}]} ~ v_{2i+2} ~ \outb{\rho[\ell_{2i+2},\ell_{2i+1}]} ~ v_{2i+1}$
has period $p$.
\end{enumerate}
We apply next Theorem~\ref{thm:fine-wilf} to (2) and (3), namely, to
the words $w_2$ and $w_3$ with $v_{2i+2}$ as common factor. It is not
difficult to check that $|v_{2i+2}|=p_{2i+2} \ge p+p'-p''$ with
$p''=\gcd(p,p')$, using the definitions of $p$ and $p'$: we can
write $p_{2i+2}=p''rq=p''r'q'$ with $p=p''r$ and $p'=p''r'$. It suffices to check
that $p''rq+p''r'q' \ge 2(p''r + p''r'-p'')$, hence that $rq+r'q' \ge 2r+2r'-2$. This
is clear if $\min(q,q')>1$. Otherwise the inequality $p_{2i+2} \ge
p+p'-p''$ follows easily because $p=p''$ or $p'=p''$ holds. Hence we obtain
that $w_2$ and $w_3$ have both period $p''$.
Applying once more Theorem~\ref{thm:fine-wilf} to $w_3$ and $w_4$ with
$v_{2i+2}$ as common factor, yields period $p''$ for the word
\[
w_5 ~=~ \outb{\rho[\ell_0,\ell_{2i+2}]} ~ v_{2i+2} ~
\outb{\rho[\ell_{2i+2},\ell_{2i+3}]} ~ v_{2i+3}
\]
Finally, the periodicity is not affected when we remove
factors whose length is a multiple of the period. In particular,
by removing the factor $v_{2i+2}$ from $w_5$,
we obtain the desired word
$\outb{\rho[\ell_0,\ell_{2i+3}]} ~ v_{2i+3}$, whose period
$p''$ divides $p_{2i+3}$. This proves the claim for the inductive step,
and completes the proof of the lemma.
\end{proof}
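The proof above leans repeatedly on (a strengthened variant of) Fine and Wilf's theorem. The classical statement, that a word of length at least $p+q-\gcd(p,q)$ with periods $p$ and $q$ also has period $\gcd(p,q)$, can be checked exhaustively on small alphabets; a small Python verification (ours, for illustration):

```python
from itertools import product
from math import gcd

def has_period(w, q):
    """w has period q iff w[i] == w[i + q] for every valid index i."""
    return all(w[i] == w[i + q] for i in range(len(w) - q))

# exhaustive check of the classical Fine-Wilf theorem on binary words:
# length >= p + q - gcd(p, q) together with periods p and q forces
# period gcd(p, q)
for p, q in [(4, 6), (3, 5)]:
    n = p + q - gcd(p, q)
    for w in product("ab", repeat=n):
        if has_period(w, p) and has_period(w, q):
            assert has_period(w, gcd(p, q))
print("Fine-Wilf verified for (p,q) = (4,6) and (3,5)")
```

The exhaustive loop covers all $2^n$ binary words of the critical length; the theorem's bound is tight, so shortening $n$ by one would produce counterexamples.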
\medskip
The $\simeq$-classes considered so far cannot be directly used to
define the blocks in the desired decomposition of $\rho$, since the $x$-coordinates of their
endpoints might not be in the appropriate order. The next definition
takes care of this, by enlarging the $\simeq$-classes according to
$x$-coordinates of the anchor points in the equivalence class.
\bigskip\noindent
\begin{minipage}[l]{\textwidth-5.8cm}
\begin{defi}\label{def:bounding-box-sweeping}
Consider a non-singleton $\simeq$-equivalence class $K=[\ell,\ell']$.
Let $\an{K}$
be the restriction of $K$ to the anchor points occurring in some inversion,
and $X_{\an{K}} = \{x \::\: \exists y\: (x,y)\in \an{K}\}$
be the projection of $\an{K}$ on positions.
We define $\block{K}=[\tilde\ell,\tilde\ell']$, where
\begin{itemize}
\item $\tilde\ell$ is the latest location $(\tilde x,\tilde y) \mathrel{\unlhd} \ell$
such that $\tilde x = \min(X_{\an{K}})$,
\item $\tilde\ell'$ is the earliest location $(\tilde x,\tilde y) \mathrel{\unrhd} \ell'$
such that $\tilde x = \max(X_{\an{K}})$
\end{itemize}
(note that the locations $\tilde\ell,\tilde\ell'$ exist since $\ell,\ell'$
are anchor points in some inversion).
\end{defi}
\end{minipage}
\begin{minipage}[r]{5.7cm}
\input{block-construction-sweeping}
\end{minipage}
\smallskip
\begin{lem}\label{lem:bounding-box-sweeping}
If $K$ is a non-singleton $\simeq$-equivalence class,
then $\rho|\block{K}$ is a $\boldsymbol{B}$-block.
\end{lem}
\begin{proof}
Consider a non-singleton $\simeq$-class $K=[\ell,\ell']$ and let $\an{K}$, $X_{\an{K}}$, and
$\block{K}=[\tilde\ell,\tilde\ell']$
be as in Definition \ref{def:bounding-box-sweeping}.
The reader can refer to Figure \ref{fig:block-construction-sweeping}
to quickly recall the notation.
We need to verify that $\rho[\tilde\ell,\tilde\ell']$ is a $\boldsymbol{B}$-block
(cf.~Definition \ref{def:factors-sweeping}), namely, that:
\begin{itemize}
\item $\tilde\ell=(\tilde x,\tilde y)$, $\tilde\ell'=(\tilde x',\tilde y')$,
with $\tilde x \le \tilde x'$,
\item the output produced by $\rho[\tilde\ell,\tilde\ell']$
is almost periodic with bound $2\boldsymbol{B}$,
\item the output produced by the subsequence $\rho|Z$,
where $Z=[\tilde\ell,\tilde\ell'] ~\setminus~
\big([\tilde x,\tilde x']\times[\tilde y,\tilde y']\big)$,
has length at most $2{\boldsymbol{H}}\boldsymbol{B}$.
\end{itemize}
The first condition $\tilde x \le \tilde x'$ follows immediately from
the definition of $\tilde x$ and $\tilde x'$ as $\min(X_{\an{K}})$
and $\max(X_{\an{K}})$, respectively.
Next, we prove that the output produced by the factor
$\rho[\tilde\ell,\tilde\ell']$ is almost periodic with bound $2\boldsymbol{B}$.
By Definition \ref{def:bounding-box-sweeping}, we have
$\tilde\ell \mathrel{\unlhd} \ell \mathrel{\lhd} \ell' \mathrel{\unlhd} \tilde\ell'$,
and by Lemma \ref{lem:overlapping} we know that $\out{\rho[\ell,\ell']}$
is periodic with period at most $\boldsymbol{B}$ ($\le 2\boldsymbol{B}$). So it suffices to show that
the lengths of the words $\out{\rho[\tilde\ell,\ell]}$ and
$\out{\rho[\ell',\tilde\ell']}$ are at most $2\boldsymbol{B}$.
We shall focus on the former word, as the arguments for
the latter are similar.
First, we note that the factor $\rho[\tilde\ell,\ell]$
lies entirely to the right of position $\tilde x$, and
in particular, it starts at an even level $\tilde y$. This follows
from the definition of $\tilde\ell$, regardless of whether $\ell$ itself is at
an odd or even level. In particular, the location $\ell$ is either at the same level as $\tilde\ell$,
or just one level above.
Now, suppose, by way of contradiction, that $|\out{\rho[\tilde\ell,\ell]}| > 2\boldsymbol{B}$.
We head towards a contradiction by finding a location $\ell'' \mathrel{\lhd} \ell$
that is $\simeq$-equivalent to the first location $\ell$ of the $\simeq$-equivalence
class $K$.
Since the location $\ell$ is either at the same level as $\tilde\ell$,
or just above it,
the factor $\rho[\tilde\ell,\ell]$ is of the form $\alpha\,\beta$,
where $\alpha$ is a rightward factor lying on the same level as $\tilde\ell$
and $\beta$ is either empty or a leftward factor on the next level.
Moreover, since $|\out{\rho[\tilde\ell,\ell]}| > 2\boldsymbol{B}$, we know that
either $|\out{\alpha}| > \boldsymbol{B}$ or $|\out{\beta}| > \boldsymbol{B}$.
Thus, Lemma \ref{lem:output-minimal-sweeping}
says that one of the two factors $\alpha,\beta$ is not output-minimal.
In particular, there is a loop $L_1$, strictly to the right of $\tilde x$,
that intercepts a subfactor $\gamma$ of $\rho[\tilde\ell,\ell]$,
with $\out{\gamma}$ non-empty and output-minimal.
Let $\ell''$ be the first location of the factor $\gamma$.
Clearly, $\ell''$ is an anchor point of $L_1$ and $\out{\tr{\ell''}}\neq\emptystr$.
Further recall that $\tilde x=\min(X_{\an{K}})$ is the leftmost position of
locations in the class $K=[\ell,\ell']$ that are also anchor points of inversions.
In particular, there is a loop $L_2$ with some anchor point
$\ell''_2=(\tilde x,y''_2)\in \an{K}$, and such that $\out{\tr{\ell''_2}}$ is
non-empty and $\tr{\ell''_2}$ is output-minimal.
Since $\ell'' \mathrel{\lhd} \ell \mathrel{\unlhd} \ell''_2$
and the position of $\ell''$ is to the right of the position of $\ell''_2$,
we know that $(L_1,\ell'',L_2,\ell''_2)$ is also an inversion,
and hence $\ell'' \simeq \ell''_2 \simeq \ell$.
But since $\ell'' \mathrel{\lhd} \ell$, we get a contradiction with the
assumption that $\ell$ is the first location of a $\simeq$-class.
In this way we have shown that $|\out{\rho[\tilde\ell,\ell]}| \le 2\boldsymbol{B}$.
It remains to show that the output produced
by the subsequence $\rho|Z$, where
$Z=[\tilde\ell,\tilde\ell'] ~\setminus~ \big([\tilde x,\tilde x']\times[\tilde y,\tilde y']\big)$,
has length at most $2{\boldsymbol{H}}\boldsymbol{B}$.
For this it suffices to prove that
$|\out{\alpha}| \le \boldsymbol{B}$ for every factor $\alpha$ of
$\rho[\tilde\ell, \tilde\ell']$ that lies
at a single level and either to the left of $\tilde x$ or to the right of $\tilde x'$.
By symmetry, we consider only one of the two types of factors.
Suppose, by way of contradiction, that there is a factor $\alpha$
at level $y''$, to the left of $\tilde x$,
and such that $|\out{\alpha}| > \boldsymbol{B}$.
By Lemma \ref{lem:output-minimal-sweeping} we know that
$\alpha$ is not output-minimal, so there is some loop
$L_2$ strictly to the left of $\tilde x$ that intercepts an
output-minimal subfactor
$\beta$ of $\alpha$ with non-empty output.
Let $\ell''$ be the first location of $\beta$. We know that
$\tilde\ell \mathrel{\lhd} \ell'' \mathrel{\unlhd} \tilde\ell'$. Since the level
$\tilde y$ is even, this means that the level of $\ell''$ is strictly
greater than $\tilde y$. Since we also know that $\ell$
is an anchor point of some inversion, we can take a suitable loop $L_1$
with anchor point $\ell$ and obtain that $(L_1,\ell,L_2,\ell'')$ is an
inversion, so $\ell'' \simeq \ell$. But
this contradicts the fact that $\tilde x$ is the leftmost position of
$\an{K}$.
We thus conclude that $|\out{\alpha}| \le \boldsymbol{B}$, and this
completes the proof that $\rho|\block{K}$ is a $\boldsymbol{B}$-block.
\end{proof}
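The periodicity notions manipulated in this proof can be made concrete operationally. The following Python sketch is purely illustrative (the function names and the encoding of outputs as strings are ours, not part of the formal development): it checks whether a word has a given period, and whether it is almost periodic with bound $B$, i.e.\ factors as $w_1 w_2 w_3$ with $|w_1|,|w_3| \le B$ and $w_2$ periodic with period at most $B$.

```python
def has_period(w, p):
    """True if w has period p, i.e. w[i] == w[i+p] for all valid i."""
    return all(w[i] == w[i + p] for i in range(len(w) - p))

def is_periodic_with_bound(w, b):
    """True if w has some period at most b (short words trivially qualify)."""
    return len(w) == 0 or any(has_period(w, p) for p in range(1, b + 1))

def is_almost_periodic(w, b):
    """True if w = w1 w2 w3 with |w1|, |w3| <= b and
    w2 periodic with period at most b (brute-force search)."""
    n = len(w)
    for i in range(min(b, n) + 1):          # candidate length of w1
        for j in range(min(b, n - i) + 1):  # candidate length of w3
            if is_periodic_with_bound(w[i:n - j], b):
                return True
    return False
```

For instance, `"xyabababz"` is almost periodic with bound $2$: strip the prefix `"xy"` and suffix `"z"`, and the middle `"ababab"` has period $2$.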
The next lemma shows that blocks do not overlap along the input axis:
\begin{lem}\label{lem:consecutive-blocks-sweeping}
Suppose that $K_1$ and $K_2$ are two different non-singleton $\simeq$-classes
such that $\ell \mathrel{\lhd} \ell'$ for all $\ell \in K_1$ and $\ell' \in K_2$.
Let $\block{K_1}=[\ell_1,\ell_2]$ and $\block{K_2}=[\ell_3,\ell_4]$,
with $\ell_2=(x_2,y_2)$ and $\ell_3=(x_3,y_3)$.
Then $x_2 < x_3$.
\end{lem}
\begin{proof}
Suppose by contradiction that $K_1$ and $K_2$ are as in the
statement, but $x_2 \ge x_3$.
By Definition \ref{def:bounding-box-sweeping},
$x_2=\max(X_{\an{K_1}})$ and $x_3=\min(X_{\an{K_2}})$.
This implies the existence of some inversions
$(L,\ell,L',\ell')$ and $(L'',\ell'',L''',\ell''')$
such that $\ell=(x_2,y)$ and $\ell'''=(x_3,y''')$.
Moreover, since $\ell \mathrel{\unlhd} \ell'''$ and $x_2 \ge x_3$,
we know that $(L,\ell,L''',\ell''')$ is also an inversion,
thus implying that $K_1=K_2$, which contradicts the assumption that the two classes are different.
\end{proof}
For the sake of brevity, we call \emph{$\simeq$-block} any
factor of the form $\rho|\block{K}$ that is obtained by applying
Definition~\ref{def:bounding-box-sweeping} to a non-singleton $\simeq$-class $K$.
The results obtained so far imply that every location covered by an
inversion is also covered by an $\simeq$-block (Lemma \ref{lem:bounding-box-sweeping}),
and that the order of occurrence of $\simeq$-blocks is the same as the order of positions
(Lemma \ref{lem:consecutive-blocks-sweeping}).
So the $\simeq$-blocks can be used as factors for the $\boldsymbol{B}$-decomposition
of $\rho$ we are looking for. Below, we show that the remaining
factors of $\rho$, which do not overlap the $\simeq$-blocks, are $\boldsymbol{B}$-diagonals.
This will complete the construction of a $\boldsymbol{B}$-decomposition of $\rho$.
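Lemmas \ref{lem:bounding-box-sweeping} and \ref{lem:consecutive-blocks-sweeping} together justify the following assembly step. As a purely illustrative sketch (in Python, with locations flattened to indices along the run and all names ours), interleaving the ordered, disjoint blocks with the gaps between them yields the candidate factorization:

```python
def decompose(run_len, blocks):
    """blocks: disjoint intervals [(l, r), ...] over 0..run_len, listed
    in increasing order (the ~-blocks, ordered as per the lemma above).
    Returns a factorization alternating diagonals (the gaps) and blocks."""
    factors, pos = [], 0
    for l, r in blocks:
        if pos < l:                            # gap before the next block
            factors.append(("diagonal", pos, l))
        factors.append(("block", l, r))
        pos = r
    if pos < run_len:                          # trailing gap, if any
        factors.append(("diagonal", pos, run_len))
    return factors
```

The point of the two lemmas is precisely that the intervals fed to such a procedure are well-ordered and disjoint, so the factorization is well defined.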
Formally, we say that a factor $\rho[\ell,\ell']$
\emph{overlaps} another factor $\rho[\ell'',\ell''']$ if
$[\ell,\ell'] \:\cap\: [\ell'',\ell'''] \neq \emptyset$,
$\ell' \neq \ell''$, and $\ell \neq \ell'''$.
\begin{lem}\label{lem:diagonal-sweeping}
Let $\rho[\ell,\ell']$ be a
factor of $\rho$,
with $\ell=(x,y)$, $\ell'=(x',y')$, and $x\le x'$,
that does not overlap any $\simeq$-block.
Then $\rho[\ell,\ell']$ is a $\boldsymbol{B}$-diagonal.
\end{lem}
\begin{proof}
Consider a factor $\rho[\ell,\ell']$, with $\ell=(x,y)$, $\ell'=(x',y')$,
and $x\le x'$, that does not overlap any $\simeq$-block.
We will focus on locations $\ell''$ with $\ell \mathrel{\unlhd} \ell''
\mathrel{\unlhd} \ell'$
that are anchor points of some loop with $\out{\tr{\ell''}}\neq\emptystr$.
We denote by $A$ the set of all such locations.
First, we show that the locations in $A$ are monotonic
w.r.t.~the position order. Formally,
we prove that for all $\ell_1,\ell_2\in A$, if $\ell_1=(x_1,y_1) \mathrel{\unlhd} \ell_2=(x_2,y_2)$, then
$x_1 \le x_2$. Suppose that this were not the case, namely, that $A$ contained two anchor points
$\ell_1=(x_1,y_1)$ and $\ell_2=(x_2,y_2)$ with $\ell_1 \mathrel{\lhd} \ell_2$ and $x_1 > x_2$.
Let $L_1,L_2$ be the loops of $\ell_1,\ell_2$, respectively, and recall that
$\out{\tr{\ell_1}},\out{\tr{\ell_2}}\neq\emptystr$. This means that $(L_1,\ell_1,L_2,\ell_2)$
is an inversion, and hence $\ell_1 \simeq \ell_2$. But this contradicts the hypothesis that
$\rho[\ell,\ell']$ does not overlap any $\simeq$-block.
Next, we identify the floors of our diagonal. Let $y_0,y_1,\dots,y_{n-1}$ be all the {\sl even}
levels that have locations in $A$. For each $i=0,\dots,n-1$, let $\ell_{2i+1}$ (resp.~$\ell_{2i+2}$)
be the first (resp.~last) anchor point of $A$ at level $y_i$. Further let $\ell_0=\ell$ and $\ell_{2n+1}=\ell'$.
Clearly, each factor $\rho[\ell_{2i+1},\ell_{2i+2}]$ is a floor. Moreover, thanks to the previous
arguments, each location $\ell_{2i}$ is to the left of the location $\ell_{2i+1}$.
It remains to prove that each factor $\rho[\ell_{2i},\ell_{2i+1}]$ produces an
output of length at most $2{\boldsymbol{H}}\boldsymbol{B}$. By construction, $A$ contains no anchor
point at an even level and strictly between $\ell_{2i}$ and $\ell_{2i+1}$.
By Lemma \ref{lem:output-minimal-sweeping} this means that the outputs produced
by subfactors of $\rho[\ell_{2i},\ell_{2i+1}]$ that lie entirely at an {\sl even}
level have length at most $\boldsymbol{B}$.
Let us now consider the subfactors $\alpha$ of $\rho[\ell_{2i},\ell_{2i+1}]$
that lie entirely at an {\sl odd} level, and let us prove that they produce outputs
of length at most $2\boldsymbol{B}$. Suppose that this is not the case, namely, that
$|\out{\alpha}| > 2\boldsymbol{B}$. In this case we show that an inversion
would exist at this level. Formally, we can find two locations $\ell'' \mathrel{\lhd} \ell'''$
in $\alpha$ such that the prefix of $\alpha$ that ends at location $\ell''$ and the suffix
of $\alpha$ that starts at location $\ell'''$ produce outputs of
length greater than $\boldsymbol{B}$.
By Lemma \ref{lem:output-minimal-sweeping},
those two factors would not be output-minimal,
and hence $\alpha$ would contain disjoint loops $L_1,L_2$ with anchor points $\ell''_1,\ell''_2$
forming an inversion $(L_1,\ell''_1,L_2,\ell''_2)$. But this would imply that $\ell''_1,\ell''_2$
belong to the same non-singleton $\simeq$-equivalence class, which contradicts the hypothesis
that $\rho[\ell,\ell']$ does not overlap any $\simeq$-block.
We must conclude that the single-level subfactors of $\rho[\ell_{2i},\ell_{2i+1}]$
produce outputs of length at most $2\boldsymbol{B}$.
Overall, this shows that the output produced by each factor $\rho[\ell_{2i},\ell_{2i+1}]$
has length at most $2{\boldsymbol{H}}\boldsymbol{B}$.
\end{proof}
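The monotonicity check at the heart of this proof can be phrased mechanically. In the Python sketch below (our own illustration, where each anchor point with non-empty output is encoded as a pair $(x,y)$ and the list follows the run order), a violation of monotonicity is exactly a witness pair for an inversion:

```python
def find_inversion(anchors):
    """anchors: list of (x, y) locations in run order, each assumed to be
    an anchor point of a loop producing non-empty output.
    Returns indices (i, j) with i < j such that the j-th anchor lies
    strictly to the left of the i-th one (an inversion witness),
    or None if positions are monotonically non-decreasing."""
    for i in range(len(anchors)):
        for j in range(i + 1, len(anchors)):
            if anchors[i][0] > anchors[j][0]:
                return (i, j)
    return None
```

On a factor that does not overlap any $\simeq$-block, the lemma guarantees that such a search comes back empty.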
\smallskip
We have just shown how to construct a $\boldsymbol{B}$-decomposition of the run $\rho$
that satisfies \PR2. This proves Proposition \ref{prop:decomposition-sweeping},
as well as the implication \PR2 $\Rightarrow $ \PR3 of Theorem \ref{thm:main2}.
\medskip
\subsection*{From existence of decompositions to an equivalent one-way transducer.}
We now focus on the last implication \PR3 $\Rightarrow $ \PR1 of Theorem \ref{thm:main2}.
More precisely, we show how to construct a one-way transducer $\cT'$ that simulates
the outputs produced by the successful runs of $\cT$ that admit $\boldsymbol{B}$-decompositions.
In particular, $\cT'$ turns out to be equivalent to $\cT$ when $\cT$ is one-way definable.
Here we only give a proof sketch of this construction (as there is no real difference between the sweeping and two-way cases), assuming that $\cT$ is a sweeping
transducer; a fully detailed construction of $\cT'$ from an arbitrary two-way transducer $\cT$
will be given in Section~\ref{sec:characterization-twoway} (Proposition \ref{prop:construction-twoway}),
together with a procedure for deciding one-way definability of $\cT$ (Proposition \ref{prop:complexity}).
\begin{prop}\label{prop:construction-sweeping}
Given a functional sweeping transducer $\cT$,
a one-way transducer $\cT'$ can be constructed in
$2\exptime$ such that the following hold:
\begin{enumerate}
\item $\cT' \subseteq \cT$,
\item $\dom(\cT')$ contains all words that induce successful runs of $\cT$
admitting $\boldsymbol{B}$-de\-com\-po\-si\-tions.
\end{enumerate}
In particular, $\cT'$ is equivalent to $\cT$ iff $\cT$ is one-way definable.
\end{prop}
\begin{proof}[Proof sketch.]
Given an input word $u$, the one-way transducer $\cT'$ needs to guess a successful run $\rho$ of $\cT$ on $u$
that admits a $\boldsymbol{B}$-decomposition. This can be done by guessing the crossing sequences of $\rho$ at each
position,
together with a
sequence of locations $\ell_i$ that identify the factors of a $\boldsymbol{B}$-decomposition of $\rho$.
To check the correctness of the decomposition, $\cT'$ also needs to guess a bounded amount of information
(words of bounded length) to reconstruct the outputs produced by the $\boldsymbol{B}$-diagonals
and the $\boldsymbol{B}$-blocks. For example, while scanning a factor of the input underlying a diagonal, $\cT'$
can easily reproduce the outputs of the floors and the guessed outputs of factors between them.
In a similar way, while scanning a factor of the input underlying a block, $\cT'$ can simulate
the almost periodic output by guessing its repeating pattern and the bounded prefix and suffix
of it, and by emitting the correct number of letters, as is done in Example \ref{ex:running}.
In particular, one can verify that the capacity of $\cT'$ is linear in ${\boldsymbol{H}}\boldsymbol{B}$.
Moreover, because the guessed objects are of size linear in ${\boldsymbol{H}}\boldsymbol{B}$ and ${\boldsymbol{H}}\boldsymbol{B}$ is a
simple exponential in the size of $\cT$, the one-way transducer $\cT'$ has
doubly exponential size in that of $\cT$.
\end{proof}
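To illustrate the counting step in the sketch above, here is a hedged Python fragment (the interface and names are ours; it abstracts away all transducer machinery) showing how a one-way device can reproduce an almost periodic output of known total length from a guessed prefix, repeating pattern, and suffix, emitting one letter at a time:

```python
def emit_almost_periodic(prefix, pattern, suffix, total_len):
    """Reproduce a word of length total_len of the form
    prefix + (repetitions of pattern, possibly truncated) + suffix,
    emitting letters one at a time as a one-way simulation would.
    Returns the emitted word, or None if the guessed pieces cannot
    account for total_len."""
    middle_len = total_len - len(prefix) - len(suffix)
    if middle_len < 0 or (middle_len > 0 and not pattern):
        return None
    out = list(prefix)
    for i in range(middle_len):                 # emit letter by letter,
        out.append(pattern[i % len(pattern)])   # cycling through the pattern
    out.extend(suffix)
    return "".join(out)
```

Whether the middle part may stop in the middle of a pattern occurrence depends on the precise definition of almost periodicity in use; the sketch allows it, and the bounded guesses (prefix, pattern, suffix) mirror the bounded information stored by $\cT'$.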
\section{The characterization in the two-way case}\label{sec:characterization-twoway}
In this section we generalize the characterization of one-way
definability of sweeping transducers to the general two-way case. As
usual, we fix throughout the rest of the section a successful run $\rho$
of $\cT$ on some input word $u$.
\medskip
\subsection*{From periodicity of inversions to existence of decompositions.}
We continue by proving the second implication \PR2 $\Rightarrow$ \PR3 of
Theorem \ref{thm:main2} in the two-way case. This requires showing the
existence of a suitable decomposition of a run $\rho$ that
\emph{satisfies} property \PR2. Recall that \PR2 says that for every inversion
$(L_1,\ell_1,L_2,\ell_2)$, the period of the word
$\out{\tr{\ell_1}} ~ \out{\rho[\ell_1,\ell_2]} ~ \out{\tr{\ell_2}}$
divides $\gcd(|\out{\tr{\ell_1}}|, |\out{\tr{\ell_2}}|) \le \boldsymbol{B}$.
The definitions underlying the decomposition of $\rho$ are similar
to those given in the sweeping case:
\begin{defi}\label{def:factors-twoway}
Let $\rho[\ell,\ell']$ be a factor of a run $\rho$ of $\cT$,
where $\ell=(x,y)$, $\ell'=(x',y')$, and $x\le x'$.
We call $\rho[\ell,\ell']$
\begin{itemize}
\medskip
\item \parbox[t]{\dimexpr\textwidth-\leftmargin}{
\vspace{-2.75mm}
\begin{wrapfigure}{r}{8cm}
\vspace{-6mm}
\input{diagonal-twoway}
\vspace{-5mm}
\end{wrapfigure}
%
a \emph{$\boldsymbol{B}$-diagonal}
if for all $z\in[x,x']$, there is a location $\ell_z$ at position $z$
such that $\ell \mathrel{\unlhd} \ell_z \mathrel{\unlhd} \ell'$ and the words
$\out{\rho|Z_{\ell_z}^{\mspace{-2mu}\text{\rotatebox[origin=c]{\numexpr135}{\fixed@sra}}\mspace{-2mu}}}$ and
$\out{\rho|Z_{\ell_z}^{\mspace{-2mu}\text{\rotatebox[origin=c]{\numexpr315}{\fixed@sra}}\mspace{-2mu}}}$
have length at most $\boldsymbol{B}} %{\cmax \cdot \hmax \cdot 2^{3\emax}$,
where $Z_{\ell_z}^{\mspace{-2mu}\text{\rotatebox[origin=c]{\numexpr135}{\fixed@sra}}\mspace{-2mu}} = [\ell_z,\ell'] \:\cap\: \big([0,z]\times\bbN\big)$
and $Z_{\ell_z}^{\mspace{-2mu}\text{\rotatebox[origin=c]{\numexpr315}{\fixed@sra}}\mspace{-2mu}} = [\ell,\ell_z] \:\cap\: \big([z,\omega]\times\bbN\big)$;
}
\bigskip
\medskip
\item \parbox[t]{\dimexpr\textwidth-\leftmargin}{%
\vspace{-2.75mm}
\begin{wrapfigure}{r}{8cm}
\vspace{-6mm}
\input{block-twoway}
\vspace{-6mm}
\end{wrapfigure}
%
a \emph{$\boldsymbol{B}$-block} if the word
$\out{\rho[\ell,\ell']}$ is almost periodic with bound $\boldsymbol{B}$,
and
$\out{\rho|Z^{\shortleftarrow}}$ and
$\out{\rho|Z^{\shortrightarrow}}$ have length at most $\boldsymbol{B}$,
where $Z^{\shortleftarrow} = [\ell,\ell'] \:\cap\: \big([0,x]\times \bbN\big)$
and $Z^{\shortrightarrow} = [\ell,\ell'] \:\cap\: \big([x',\omega]\times \bbN\big)$.
}
\vspace{8mm}
\end{itemize}
\end{defi}
The definition of $\boldsymbol{B}$-decomposition is copied verbatim from the sweeping case,
but uses the new notions of $\boldsymbol{B}$-diagonal and $\boldsymbol{B}$-block:
\begin{defi}\label{def:decomposition-twoway}
A \emph{$\boldsymbol{B}$-decomposition} of a run $\rho$ of $\cT$ is a factorization
$\prod_i\,\rho[\ell_i,\ell_{i+1}]$ of $\rho$ into $\boldsymbol{B}$-diagonals and $\boldsymbol{B}$-blocks.
\end{defi}
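Since almost periodicity with bound $\boldsymbol{B}$ is central to the definitions above, here is a small brute-force Python sketch (ours, for intuition only): a word qualifies if stripping at most $B$ letters on each side leaves a word of period at most $B$.

```python
def has_period_at_most(w: str, p_max: int) -> bool:
    """True iff some p in [1, p_max] is a period of w (or w is empty)."""
    n = len(w)
    return n == 0 or any(
        all(w[i] == w[i + p] for i in range(n - p))
        for p in range(1, p_max + 1)
    )

def almost_periodic(w: str, B: int) -> bool:
    """w = w1 w2 w3 with |w1|, |w3| <= B and w2 of period at most B."""
    n = len(w)
    return any(
        has_period_at_most(w[i:n - j], B)
        for i in range(min(B, n) + 1)
        for j in range(min(B, n) + 1)
        if i + j <= n
    )

assert almost_periodic("xxabababyy", 2)      # "ababab" has period 2
assert not almost_periodic("abcdefgh", 1)    # no unary middle part
```

The exponential blow-up in the actual construction comes precisely from storing such bounded prefixes, suffixes, and repeating patterns in the control state.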
\noindent
To provide further intuition,
we consider the transduction of Example~\ref{ex:running}
and the two-way transducer $\cT$ that implements it in the most natural way.
Figure~\ref{fig:decomposition-twoway} shows an example of a run of $\cT$ on
an input of the form $u_1 \:\#\: u_2 \:\#\: u_3 \:\#\: u_4$, where
$u_2,\, u_4 \in (abc)^*$, $u_1,\,u_3\nin (abc)^*$, and $u_3$ has even length.
The factors of the run that produce long outputs are highlighted
by the bold arrows. The first and third factors of the decomposition,
i.e.~$\rho[\ell_1,\ell_2]$ and $\rho[\ell_3,\ell_4]$, are diagonals
(represented by the blue hatched areas); the second and fourth factors
$\rho[\ell_2,\ell_3]$ and $\rho[\ell_4,\ell_5]$ are blocks
(represented by the red hatched areas).
To identify the blocks of a possible decomposition of $\rho$,
we reuse the equivalence relation $\simeq$ introduced in Definition \ref{def:crossrel}.
Recall that this is the reflexive and transitive closure of the relation $\ensuremath{\mathrel{\text{\sf S}}}$
that groups any two locations $\ell,\ell'$ that occur between $\ell_1,\ell_2$, for some
inversion $(L_1,\ell_1,L_2,\ell_2)$.
The proof that the output produced inside each $\simeq$-equivalence class is periodic,
with period at most $\boldsymbol{B}$ (Lemma \ref{lem:overlapping}), carries over to the
two-way case without modifications.
Similarly, every $\simeq$-equivalence class can be extended to the left and to the
right by using Definition \ref{def:bounding-box-sweeping}, which
we report here verbatim for the sake of readability, together with an exemplifying figure.
\input{decomposition-twoway}
\bigskip\noindent
\begin{minipage}[l]{\textwidth-5.8cm}
\begin{defi}\label{def:bounding-box-twoway}
Consider a non-singleton $\simeq$-equivalence class $K=[\ell,\ell']$.
Let $\an{K}$
be the restriction of $K$ to the anchor points occurring in some inversion,
and $X_{\an{K}} = \{x \::\: \exists y\: (x,y)\in \an{K}\}$
be the projection of $\an{K}$ on positions.
We define $\block{K}=[\tilde\ell,\tilde\ell']$, where
\begin{itemize}
\item $\tilde\ell$ is the latest location $(\tilde x,\tilde y) \mathrel{\unlhd} \ell$
such that $\tilde x = \min(X_{\an{K}})$,
\item $\tilde\ell'$ is the earliest location $(\tilde x,\tilde y) \mathrel{\unrhd} \ell'$
such that $\tilde x = \max(X_{\an{K}})$
\end{itemize}
(note that the locations $\tilde\ell,\tilde\ell'$ exist since $\ell,\ell'$
are anchor points in some inversion).
\end{defi}
\end{minipage}
\begin{minipage}[r]{5.7cm}
\vspace{-2mm}
\input{block-construction-twoway}
\end{minipage}
\medskip
As usual, we call \emph{$\simeq$-block} any factor of $\rho$ of the form $\rho|\block{K}$
that is obtained by applying the above definition to a non-singleton $\simeq$-class $K$.
Lemma \ref{lem:bounding-box-sweeping}, which shows that $\simeq$-blocks can indeed
be used as $\boldsymbol{B}$-blocks in a decomposition of $\rho$, generalizes easily to the
two-way case:
\begin{lem}\label{lem:bounding-box-twoway}
If $K$ is a non-singleton $\simeq$-equivalence class,
then $\rho|\block{K}$ is a $\boldsymbol{B}$-block.
\end{lem}
\begin{proof}
The proof is similar to that of Lemma \ref{lem:bounding-box-sweeping}. The main
difference is that here we will bound the lengths of some outputs using
a Ramsey-type argument (Theorem \ref{thm:simon2}), instead of output-minimality
of factors (Lemma \ref{lem:output-minimal-sweeping}). To follow the various constructions
and arguments the reader can refer to Figure \ref{fig:block-construction-twoway}.
Let $K=[\ell,\ell']$, $\an{K}$, $X_{\an{K}}$, and $\block{K}=[\tilde\ell,\tilde\ell']$
be as in Definition \ref{def:bounding-box-twoway}, where
$\tilde\ell=(\tilde x,\tilde y)$, $\tilde\ell'=(\tilde x',\tilde y')$,
$\tilde x=\min(X_{\an{K}})$, and $\tilde x'=\max(X_{\an{K}})$.
We need to verify that $\rho|\block{K}$ is a $\boldsymbol{B}$-block, namely, that:
\begin{itemize}
\item $\tilde x \le \tilde x'$,
\item $\out{\rho[\tilde \ell,\tilde \ell']}$ is almost periodic with bound $\boldsymbol{B}$,
\item $\out{\rho|Z^{\shortleftarrow}}$ and $\out{\rho|Z^{\shortrightarrow}}$ have length at most $\boldsymbol{B}$,
where $Z^{\shortleftarrow} = [\tilde \ell,\tilde \ell'] \:\cap\: \big([0,\tilde x]\times \bbN\big)$
and $Z^{\shortrightarrow} = [\tilde \ell,\tilde \ell'] \:\cap\: \big([\tilde x',\omega]\times \bbN\big)$.
\end{itemize}
The first condition $\tilde x \le \tilde x'$ follows immediately from
$\tilde x=\min(X_{\an{K}})$ and $\tilde x'=\max(X_{\an{K}})$.
Next, we prove that the output produced by the factor
$\rho[\tilde\ell,\tilde\ell']$ is almost periodic with bound $\boldsymbol{B}$.
By Definition \ref{def:bounding-box-twoway}, we have
$\tilde\ell \mathrel{\unlhd} \ell \mathrel{\lhd} \ell' \mathrel{\unlhd} \tilde\ell'$,
and by Lemma \ref{lem:overlapping}
we know that $\out{\rho[\ell,\ell']}$ is periodic with
period at most $\boldsymbol{B}$. So it suffices to bound the length
of the words $\out{\rho[\tilde\ell,\ell]}$ and $\out{\rho[\ell',\tilde\ell']}$.
We shall focus on the former word, as the arguments for the latter
are similar.
First, we show that the factor $\rho[\tilde\ell,\ell]$
lies entirely to the right of position $\tilde x$
(in particular, it starts at an even level $\tilde y$).
Indeed, if this were not the case, there would exist another location
$\ell''=(\tilde x,\tilde y + 1)$, on the same position as $\tilde\ell$,
but at a higher level, such that $\tilde\ell \mathrel{\lhd} \ell'' \mathrel{\unlhd} \ell$.
But this would contradict Definition \ref{def:bounding-box-twoway}
($\tilde\ell$ is the \emph{latest} location $(x,y) \mathrel{\unlhd} \ell$
such that $x = \min(X_{\an{K}})$).
Suppose now that $|\out{\rho[\tilde\ell,\ell]}| >
\boldsymbol{B}$. We head towards a contradiction by finding a location
$\ell'' \mathrel{\lhd} \ell$ that is $\simeq$-equivalent to the first
location $\ell$ of the $\simeq$-equivalence class $K$. Since the
factor $\rho[\tilde\ell,\ell]$ lies entirely to the right of position
$\tilde x$, it is intercepted by the interval $I=[\tilde x,\omega]$.
So $|\out{\rho[\tilde\ell,\ell]}| > \boldsymbol{B}$ is equivalent to saying
$|\out{\rho|Z}| > \boldsymbol{B}$, where $Z = [\tilde\ell,\ell] \:\cap\:
\big([\tilde x,\omega]\times\bbN\big)$. Then, Theorem
\ref{thm:simon2} implies the existence of an idempotent loop $L$ and
an anchor point $\ell''$ of $L$ such that
\begin{itemize}
\item $\min(L) > \tilde x$,
\item $\tilde\ell \mathrel{\lhd} \ell'' \mathrel{\lhd} \ell$,
\item $\out{\tr{\ell''}}\neq\emptystr$.
\end{itemize}
Further recall that $\tilde x=\min(X_{\an{K}})$ is the leftmost position of
locations in the class $K=[\ell,\ell']$ that are also anchor points of inversions.
In particular, there is an inversion $(L_1,\ell''_1,L_2,\ell''_2)$, with
$\ell''_2=(\tilde x,y''_2) \in K$.
Since $\ell'' \mathrel{\lhd} \ell \mathrel{\unlhd} \ell''_2$
and the position of $\ell''$ is to the right of the position of $\ell''_2$,
we know that $(L,\ell'',L_2,\ell''_2)$ is also an inversion,
and hence $\ell'' \simeq \ell''_2 \simeq \ell$.
But since $\ell'' \neq \ell$, we get a contradiction with the
assumption that $\ell$ is the first location of the $\simeq$-class $K$.
In this way we have shown that $|\out{\rho[\tilde\ell,\ell]}| \le \boldsymbol{B}$.
It remains to bound the lengths of the outputs produced
by the subsequences $\rho|Z^{\shortleftarrow}$ and $\rho|Z^{\shortrightarrow}$,
where $Z^{\shortleftarrow}=[\tilde\ell,\tilde\ell'] \:\cap\: \big([0,\tilde x]\times\bbN\big)$
and $Z^{\shortrightarrow}=[\tilde\ell,\tilde\ell'] \:\cap\: \big([\tilde x',\omega]\times\bbN\big)$.
As usual, we consider only one of the two symmetric cases.
Suppose, by way of contradiction, that $|\out{\rho|Z^{\shortleftarrow}}| > \boldsymbol{B}$.
By Theorem \ref{thm:simon2}, there exist an idempotent loop $L$
and an anchor point $\ell''$ of $L$ such that
\begin{itemize}
\item $\max(L) < \tilde x$,
\item $\tilde\ell \mathrel{\lhd} \ell'' \mathrel{\lhd} \tilde\ell'$,
\item $\out{\tr{\ell''}}\neq\emptystr$.
\end{itemize}
By following the same line of reasoning as before, we recall that
$\ell$ is the first location of the non-singleton class $K$.
From this we derive the existence of an inversion $(L_1,\ell''_1,L_2,\ell''_2)$
where $\ell''_1 = \ell$.
We claim that $\ell \mathrel{\unlhd} \ell''$.
Indeed, if this were not the case, then, because $\ell''$ is strictly to the
left of $\tilde x$ and $\ell$ is to the right of $\tilde x$, there would exist
a location $\ell'''$ between $\ell''$ and $\ell$ that lies at position $\tilde x$.
But $\tilde\ell \mathrel{\lhd} \ell'' \mathrel{\unlhd} \ell''' \mathrel{\unlhd} \ell$ would
contradict the fact that $\tilde\ell$ is the {\sl latest} location before
$\ell$ that lies at the position $\tilde x$.
Now that we know that $\ell \mathrel{\unlhd} \ell''$ and that $\ell''$ is to the left of $\tilde x$,
we observe that $(L_1,\ell''_1,L,\ell'')$ is also an inversion, and hence $\ell''\in \an{K}$.
Since $\ell''$ is strictly to the left of $\tilde x$,
we get a contradiction with the definition of $\tilde x$ as leftmost
position of the locations in $\an{K}$.
So we conclude that $|\out{\rho|Z^{\shortleftarrow}}| \le \boldsymbol{B}$.
\end{proof}
The proof of Lemma \ref{lem:consecutive-blocks-sweeping}, which
shows that $\simeq$-blocks do not overlap along the input axis,
carries over to the two-way case, again without modifications.
Finally, we generalize Lemma \ref{lem:diagonal-sweeping} to the new
definition of diagonal, which completes the construction of a
$\boldsymbol{B}$-decomposition for the run $\rho$:
\begin{lem}\label{lem:diagonal-twoway}
Let $\rho[\ell,\ell']$ be a
factor of $\rho$,
with $\ell=(x,y)$, $\ell'=(x',y')$, and $x\le x'$,
that does not overlap any $\simeq$-block.
Then $\rho[\ell,\ell']$ is a $\boldsymbol{B}$-diagonal.
\end{lem}
\begin{proof}
Suppose by way of contradiction that there is some $z \in [x,x']$
such that, for all locations $\ell''$ at position $z$ and between $\ell$ and $\ell'$,
one of the two conditions holds:
\begin{enumerate}
\item $|\out{\rho|Z_{\ell''}^{\nwarrow}}| > \boldsymbol{B}$,
where $Z_{\ell''}^{\nwarrow} = [\ell'',\ell'] \:\cap\: \big([0,z]\times\bbN\big)$,
\item $|\out{\rho|Z_{\ell''}^{\searrow}}| > \boldsymbol{B}$,
where $Z_{\ell''}^{\searrow} = [\ell,\ell''] \:\cap\: \big([z,\omega]\times\bbN\big)$.
\end{enumerate}
First, we claim that \emph{each} of the two conditions above is satisfied at
some location $\ell''\in [\ell,\ell']$ at position $z$.
Consider the highest even level $y''$
such that $\ell''=(z,y'') \in[\ell,\ell']$
(use Figure \ref{fig:diagonal-twoway} as a reference).
Since $z\le x'$, the outgoing transition at $\ell''$ is rightward oriented,
and the set $Z_{\ell''}^{\nwarrow}$ is empty. This means that
condition (1) is trivially violated at $\ell''$,
and hence condition (2) holds at $\ell''$ by the initial assumption.
Symmetrically, condition (1) holds at the location $\ell''=(z,y'')$,
where $y''$ is the lowest even level with $\ell'' \in[\ell,\ell']$.
Let us now compare the levels where the above conditions hold.
Clearly, the lower the level of location $\ell''$,
the easier it is to satisfy condition (1), and symmetrically for condition (2).
So, let $\ell^+=(z,y^+)$ (resp.~$\ell^-=(z,y^-)$) be the highest (resp.~lowest)
location in $[\ell,\ell']$ at position $z$ that satisfies
condition (1) (resp.~condition (2)).
We claim that $y^+ \ge y^-$.
For this, we first observe that $y^+ \ge y^- - 1$, since otherwise there
would exist a location $\ell''=(z,y'')$, with $y^+ < y'' < y^-$, that
violates both conditions (1) and (2).
Moreover, $y^+$ must be odd, otherwise the transition departing from
$\ell^+ = (z,y^+)$ would be rightward oriented and the location $\ell'' = (z,y^+ + 1)$
would still satisfy condition (1), contradicting the definition of highest location $\ell^+$.
For similar reasons, $y^-$ must also be odd, otherwise there would be a location
$\ell'' = (z,y^- - 1)$ below $\ell^-$ that satisfies condition (2).
But since $y^+ \ge y^- - 1$ and both $y^+$ and $y^-$ are odd,
we need to have $y^+ \ge y^-$.
In fact, from the previous arguments we know that the location $\ell''=(z,y^+)$
(or equally the location $(z,y^-)$)
satisfies {\sl both} conditions (1) and (2). We can thus apply Theorem \ref{thm:simon2} to the
sets $Z_{\ell''}^{\searrow}$ and $Z_{\ell''}^{\nwarrow}$, deriving the existence of
two idempotent loops $L_1,L_2$ and two anchor points $\ell_1,\ell_2$ of $L_1,L_2$,
respectively, such that
\begin{itemize}
\item $\max(L_2) < z < \min(L_1)$,
\item $\ell \mathrel{\lhd} \ell_1 \mathrel{\lhd} \ell'' \mathrel{\lhd} \ell_2 \mathrel{\lhd} \ell'$,
\item $\out{\tr{\ell_1}},\out{\tr{\ell_2}}\neq\emptystr$.
\end{itemize}
In particular, since $\ell_1$ is to the right of $\ell_2$ w.r.t.~the order
of positions, we know that $(L_1,\ell_1,L_2,\ell_2)$ is an inversion, and
hence $\ell_1 \simeq \ell_2$. But this contradicts the assumption that
$\rho[\ell,\ell']$ does not overlap with any $\simeq$-block.
\end{proof}
\medskip
\subsection*{From existence of decompositions to an equivalent one-way transducer.}
It remains to prove the last implication \PR3 $\Rightarrow$ \PR1 of Theorem~\ref{thm:main2},
which amounts to constructing a one-way transducer $\cT'$ equivalent to $\cT$.
Hereafter, we denote by $D$ the language of words $u\in\dom(\cT)$ such that
{\sl all} successful runs of $\cT$ on $u$ admit a $\boldsymbol{B}$-decomposition.
So far, we know that if $\cT$ is one-way definable (\PR1),
then $D=\dom(\cT)$ (\PR3).
As a matter of fact, this reduces the one-way definability problem
for $\cT$ to the containment problem $\dom(\cT) \subseteq D$.
\label{testing-containment}
We will see later (in Section~\ref{sec:complexity})
how the latter problem can be decided
in double exponential space
by further reducing it to checking the emptiness of the
intersection of the languages $\dom(\cT)$ and $D^\complement$,
where $D^\complement$ is the complement of $D$.
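The shape of that emptiness test can be illustrated, in miniature, on plain DFAs (a generic sketch; the encoding of automata as triples is our own, not the actual doubly exponential construction of Section~\ref{sec:complexity}). One explores the product of the two automata and searches for a reachable pair of accepting states:

```python
from collections import deque

def intersection_nonempty(a1, a2):
    """a1, a2 are DFAs given as (start, accepting_set, delta), with
    delta a dict mapping (state, letter) -> state.  Returns True iff
    some word is accepted by both (product construction + BFS)."""
    (s1, f1, t1), (s2, f2, t2) = a1, a2
    alphabet = {letter for (_, letter) in t1}
    seen = {(s1, s2)}
    queue = deque(seen)
    while queue:
        p, q = queue.popleft()
        if p in f1 and q in f2:
            return True                     # witness word exists
        for letter in alphabet:
            step = (t1.get((p, letter)), t2.get((q, letter)))
            if None not in step and step not in seen:
                seen.add(step)
                queue.append(step)
    return False

# Even vs. odd number of a's: the intersection is empty.
even = (0, {0}, {(0, 'a'): 1, (1, 'a'): 0})
odd  = (0, {1}, {(0, 'a'): 1, (1, 'a'): 0})
assert not intersection_nonempty(even, odd)
```

In our setting the containment $\dom(\cT) \subseteq D$ holds iff this test fails on automata for $\dom(\cT)$ and $D^\complement$; the cost comes from the size of the automaton for $D^\complement$, not from the test itself.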
Below, we show how to construct a one-way transducer
$\cT'$ of triple exponential size such that
$\cT' \subseteq \cT$ and
$\dom(\cT')$ is the set of all input words that have
{\sl some} successful run admitting a $\boldsymbol{B}$-decomposition
(hence $\dom(\cT')\supseteq D$).
In particular, we will have that
\[
\cT|_D \:\subseteq\: \cT' \:\subseteq\: \cT.
\]
Note that this will prove the implication \PR3 $\Rightarrow$ \PR1, as well as the second
item of Theorem~\ref{thm:main}, since $D=\dom(\cT)$
if and only if $\cT$ is one-way definable.
A sketch of the proof of this construction when
$\cT$ is a sweeping transducer was given at the
end of Section \ref{sec:characterization-sweeping}.
\begin{prop}\label{prop:construction-twoway}
Given a functional two-way transducer $\cT$,
a one-way transducer $\cT'$
satisfying
\[\cT' \subseteq \cT \quad \text{ and } \quad \dom(\cT') \supseteq D\] can be constructed in $3\exptime$.
Moreover, if $\cT$ is sweeping, then $\cT'$
can be constructed in $2\exptime$.
\end{prop}
\begin{proof}
Given an input word $u$, the transducer $\cT'$ will guess (and check)
a successful run $\rho$ of $\cT$ on $u$, together with a $\boldsymbol{B}$-decomposition
$\prod_i \rho[\ell_i,\ell_{i+1}]$.
The latter decomposition will be used by $\cT'$ to simulate the output of
$\rho$ in a left-to-right manner, thus proving that $\cT' \subseteq \cT$.
Moreover, $u\in D$ implies the existence of a successful run that can be
decomposed, thus proving that $\dom(\cT') \supseteq D$.
We now provide the details of the construction of $\cT'$.
Guessing the run $\rho$ is standard (see, for instance, \cite{she59,HU79}):
it amounts to guessing the crossing sequences $\rho|x$ for
each position $x$ of the input. Recall that this is a bounded
amount of information for each position $x$, since the run is normalized.
As concerns the decomposition of $\rho$, it can be encoded
by the endpoints $\ell_i$ of its factors, that is, by annotating
the position of each $\ell_i$ with the level of $\ell_i$.
In a similar way $\cT'$ guesses the information of whether
each factor $\rho[\ell_i,\ell_{i+1}]$ is a $\boldsymbol{B}$-diagonal or a $\boldsymbol{B}$-block.
Thanks to the definition of decomposition
(see Definition~\ref{def:decomposition-twoway} and Figure \ref{fig:decomposition-twoway}),
every two distinct factors span across non-overlapping intervals of positions.
This means that each position $x$ is covered by exactly one factor of
the decomposition. We call this factor the \emph{active factor at position $x$}.
The mode of computation of the transducer will depend on
the type of active factor: if the active factor is a diagonal
(resp.~a block), then we say that $\cT'$ is in \emph{diagonal mode}
(resp.~\emph{block mode}).
Below we describe the behaviour for these two modes of computation.
\smallskip
\par\noindent\emph{Diagonal mode.}~
We recall the key condition satisfied by the diagonal
$\rho[\ell,\ell']$ that is active at position $x$
(cf.~Definition~\ref{def:factors-twoway} and Figure~\ref{fig:diagonal-twoway}):
there is a location $\ell_x=(x,y_x)$ between $\ell$ and $\ell'$ such that the words
$\out{\rho|Z_{\ell_x}^{\nwarrow}}$ and $\out{\rho|Z_{\ell_x}^{\searrow}}$
have length at most $\boldsymbol{B}$, where
$Z_{\ell_x}^{\nwarrow} = [\ell_x,\ell'] \:\cap\: \big([0,x]\times\bbN\big)$
and $Z_{\ell_x}^{\searrow} = [\ell,\ell_x] \:\cap\: \big([x,\omega]\times\bbN\big)$.
Besides the run $\rho$ and the decomposition, the transducer $\cT'$ will
also guess the locations $\ell_x=(x,y_x)$, that is, will annotate each $x$
with the corresponding $y_x$.
Without loss of generality, we can assume that the function that
associates each position $x$ with the guessed location $\ell_x=(x,y_x)$
is monotone, namely, $x\le x'$ implies $\ell_x\mathrel{\unlhd}\ell_{x'}$.
While the transducer $\cT'$ is in diagonal mode, the goal is to preserve
the following invariant:
\begin{quote}
\em
After reaching a position $x$ covered by the active diagonal,
$\cT'$ must have produced the output of $\rho$ up to location $\ell_x$.
\end{quote}
\noindent
To preserve the above invariant when moving from $x$ to the next
position $x+1$, the transducer should output the word
$\out{\rho[\ell_x,\ell_{x+1}]}$. This word consists of
the following parts:
\begin{enumerate}
\item The words produced by the single transitions of $\rho[\ell_x,\ell_{x+1}]$
with endpoints in $\{x,x+1\}\times\bbN$.
Note that there are at most ${\boldsymbol{H}}$ such words,
each of them has length at most ${\boldsymbol{C}}$, and they can all be determined
using the crossing sequences at $x$ and $x+1$ and the information
about the levels of $\ell_x$ and $\ell_{x+1}$.
We can thus assume that this information is readily available
to the transducer.
\item The words produced by the factors of $\rho[\ell_x,\ell_{x+1}]$
that are intercepted by the interval $[0,x]$.
Thanks to the definition of diagonal, we know that
the total length of these words is at most $\boldsymbol{B}$.
These words cannot be determined from the information
on $\rho|x$, $\rho|x+1$, $\ell_x$, and $\ell_{x+1}$
alone, so they need to be constructed while scanning the input.
For this, some additional information needs to be stored.
More precisely, at each position $x$ of the input,
the transducer stores all the outputs produced by the factors of
$\rho$ that are intercepted by $[0,x]$ and that occur {\sl after}
a location of the form $\ell_{x'}$, for any $x'\ge x$ that is
covered by a diagonal.
This clearly includes the previous words when $x'=x$, but also
other words that might be used later for processing other diagonals.
Moreover, by exploiting the properties of diagonals,
one can prove that those words have length at most $\boldsymbol{B}$,
so they can be stored with triply exponentially many states.
Using classical techniques, the stored information
can be maintained while scanning the input $u$ using the
guessed crossing sequences of $\rho$.
\item The words produced by the factors of $\rho[\ell_x,\ell_{x+1}]$
that are intercepted by the interval $[x+1,\omega]$.
These words must be guessed, since they depend on a portion
of the input that has not been processed yet.
Accordingly, the guesses need to be stored into memory,
in such a way that they can be checked later. For this, the transducer
stores, for each position $x$, the guessed words that correspond
to the outputs produced by the factors of $\rho$ intercepted by
$[x,\omega]$ and occurring {\sl before} a location of the form
$\ell_{x'}$, for any $x'\le x$ that is covered by a diagonal.
\end{enumerate}
\smallskip
\par\noindent\emph{Block mode.}~
Suppose that the active factor $\rho[\ell,\ell']$ is a $\boldsymbol{B}$-block.
Let $I=[x,x']$ be the set of positions covered by this factor.
Moreover, for each position $z\in I$, let
$Z^{\shortleftarrow}_z = [\ell,\ell'] \:\cap\: \big([0,z]\times \bbN\big)$
and $Z^{\shortrightarrow}_z = [\ell,\ell'] \:\cap\: \big([z,\omega]\times \bbN\big)$.
We recall the key property of a block
(cf.~Definition~\ref{def:factors-twoway} and Figure~\ref{fig:block-twoway}):
the word $\out{\rho[\ell,\ell']}$ is almost periodic with bound $\boldsymbol{B}$,
and the words $\out{\rho|Z^{\shortleftarrow}_x}$ and $\out{\rho|Z^{\shortrightarrow}_{x'}}$
have length at most $\boldsymbol{B}$.
For the sake of brevity, suppose that $\out{\rho[\ell,\ell']} = w_1\,w_2\,w_3$,
where $w_2$ is periodic with period $\boldsymbol{B}$ and $w_1,w_3$
have length at most $\boldsymbol{B}$.
Similarly, let $w_0 = \out{\rho|Z^{\shortleftarrow}_x}$ and $w_4 = \out{\rho|Z^{\shortrightarrow}_{x'}}$.
The invariant preserved by $\cT'$ in block mode is the following:
\begin{quote}
\em
After reaching a position $z$ covered by the active block $\rho[\ell,\ell']$,
$\cT'$ must have produced the output of the prefix of $\rho$
up to location $\ell$, followed by a prefix of $\out{\rho[\ell,\ell']} = w_1\,w_2\,w_3$
of the same length as $\out{\rho|Z^{\shortleftarrow}_z}$.
\end{quote}
\noindent
The initialization of the invariant is done when reaching the left
endpoint $x$. At this moment, it suffices that $\cT'$ outputs
a prefix of $w_1\,w_2\,w_3$ of the same length as
$w_0 = \out{\rho|Z^{\shortleftarrow}_x}$, thus bounded by $\boldsymbol{B}$.
Symmetrically, when reaching the right endpoint $x'$,
$\cT'$ will have produced almost the entire word
$\out{\rho[\ell,\ell']} = w_1 \, w_2 \, w_3$,
but without the suffix $w_4 = \out{\rho|Z^{\shortrightarrow}_{x'}}$
of length at most $\boldsymbol{B}$.
Thus, before moving to the next factor of the decomposition, the transducer will
produce the remaining suffix, so as to complete the output
of $\rho$ up to location $\ell'$.
It remains to describe how the above invariant can be maintained
when moving from a position $z$ to the next position $z+1$ inside $I=[x,x']$.
For this, it is convenient to succinctly represent the word $w_2$
by its repeating pattern, say $v$, of length at most $\boldsymbol{B}$.
To determine the symbols that have to be output at each step,
the transducer will maintain a pointer on either $w_1\,v$ or $w_3$.
The pointer is increased in a deterministic way, and precisely
by the amount $|\out{\rho|Z^{\shortleftarrow}_{z+1}}| - |\out{\rho|Z^{\shortleftarrow}_z}|$.
The only exception is when the pointer lies in $w_1\,v$, but its
increase would go over $w_1\,v$: in this case the transducer has
the choice to either bring the pointer back to the beginning of $v$
(representing a periodic output inside $w_2$), or move it to $w_3$.
Of course, this is a non-deterministic choice, but it can be
validated when reaching the right endpoint of $I$.
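The pointer discipline just described can be modelled by the following Python sketch (a simplified stand-alone rendering; the names, the flat list of per-step demands, and the look-ahead that resolves the nondeterministic switch are ours). It emits a prefix of $w_1\,v^k\,w_3$ of prescribed total length, wrapping on the periodic pattern $v$ until the remaining demand fits into $w_3$:

```python
def block_mode_emit(w1: str, v: str, w3: str, steps):
    """Emit sum(steps) letters of w1 v^k w3, advancing a pointer over
    w1+v (wrapping on v) until it switches to w3.  The switch, which
    the transducer guesses nondeterministically, is resolved here by
    looking at the total remaining demand."""
    head, tail = w1 + v, w3
    out, pos, in_head, emitted = [], 0, True, 0
    total = sum(steps)
    for demand in steps:  # demand = letters to emit at this position
        for _ in range(demand):
            if in_head and pos == len(head):
                if total - emitted > len(tail):
                    pos = len(w1)            # wrap: repeat the pattern v
                else:
                    in_head, pos = False, 0  # commit to the suffix w3
            out.append(head[pos] if in_head else tail[pos])
            pos += 1
            emitted += 1
    return "".join(out)

# |w1| + 2|v| + |w3| = 6 letters requested in two steps of 3 each:
assert block_mode_emit("x", "ab", "z", [3, 3]) == "xababz"
```

In the actual transducer the per-step demand is computed from the crossing sequences and the levels $y^-_z, y^+_z$, and the switch to $w_3$ is validated only at the right endpoint of $I$.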
Concerning the number of symbols that need to be emitted at each
step, this can be determined from the crossing sequences at
$z$ and $z+1$, and from the knowledge of the lowest and highest
levels of locations that are at position $z$ and between
$\ell$ and $\ell'$. We denote the latter levels by
$y^-_z$ and $y^+_z$, respectively.
Overall, this shows how to maintain the invariant of the block mode,
assuming that the levels $y^-_z,y^+_z$ are known, as well as
the words $w_0,w_1,v,w_3,w_4$ of bounded length.
Like the mapping $z \mapsto \ell_z=(z,y_z)$ used in diagonal mode,
the mapping $z \mapsto (y^-_z,y^+_z)$ can be guessed and checked
using the crossing sequences.
Similarly, the words $w_1,v,w_3$ can be guessed just before
entering the active block, and can be checked along the process.
As concerns the words $w_0,w_4$, these can be guessed and checked
in a way similar to the words that we used in diagonal mode.
More precisely, for each position $z$ of the input, the
transducer stores the following additional information:
\begin{enumerate}
\item the outputs produced by the factors of $\rho$ that are
intercepted by $[0,z]$ and that occur after the beginning
$\ell''$ of some block, with $\ell''=(x'',y'')$ and $x''\ge z$;
\item the outputs produced by the factors of $\rho$ that are
intercepted by $[z,\omega]$ and that occur before the ending
$\ell'''$ of a block, where $\ell'''=(x''',y''')$
and $x'''\le z$.
\end{enumerate}
By the definition of blocks, the above words have length
at most $\boldsymbol{B}$ and can be maintained while processing the input
and the crossing sequences.
Finally, we observe that the words, together with the information
given by the lowest and highest levels $y^-_z,y^+_z$, for both $z=x$ and
$z=x'$, are sufficient for determining the content of $w_0$ and $w_4$.
\smallskip
We have just shown how to construct a one-way transducer $\cT' \subseteq \cT$
such that $\dom(\cT') \supseteq D$.
From the above construction it is easy to see that the number of states
and transitions of $\cT'$, as well as the number of letters emitted by
each transition, are at most exponential in $\boldsymbol{B}$. Since $\boldsymbol{B}$ is
doubly exponential in the size of $\cT$, this shows that $\cT'$ can
be constructed from $\cT$ in $3\exptime$.
Note that the triple exponential
complexity comes from the lengths of the words that need to be guessed
and stored in the control states, and these lengths are bounded by $\boldsymbol{B}$.
However, if $\cT$ is a sweeping transducer, then, according to the results
proved in Section \ref{sec:characterization-sweeping}, the bound $\boldsymbol{B}$
is simply exponential. In particular, in the sweeping case
we can construct the one-way transducer $\cT'$ in $2\exptime$.
\end{proof}
\medskip
\subsection*{Generality of the construction.}
We conclude the section with a discussion on the properties of the one-way
transducer $\cT'$ constructed from $\cT$. Roughly speaking, we would like
to show that, even when $\cT$ is not one-way definable, $\cT'$ is somehow
the {\sl best one-way under-approximation of $\cT$}.
However, strictly speaking, the latter terminology is meaningless:
if $\cT'$ is a one-way transducer strictly contained in $\cT$, then
one can always find a better one-way transducer $\cT''$ that satisfies
$\cT' \subsetneq \cT'' \subsetneq \cT$, for instance by extending $\cT'$
with a single input-output pair. Below, we formalize in an appropriate
way the notion of ``best one-way under-approximation''.
We are interested in comparing the domains of transducers, but only up to
a certain amount. In particular, we are interested in languages that are
preserved under pumping loops of runs of $\cT$. Formally, given a language
$L$, we say that $L$ is \emph{$\cT$-pumpable} if $L \subseteq \dom(\cT)$ and
for all words $u\in L$, all successful runs $\rho$ of $\cT$ on $u$, all
loops $K$ of $\rho$, and all positive numbers $n$, the word $\ensuremath{\mathsf{pump}}_K^n(u)$
also belongs to $L$.
Clearly, the domain $\dom(\cT)$ of a transducer $\cT$ is a regular $\cT$-pumpable language.
Another noticeable example of $\cT$-pumpable regular language is the domain
of the one-way transducer $\cT'$, as defined in Proposition \ref{prop:construction-twoway}.
Indeed, $\dom(\cT')$ consists of words $u\in\dom(\cT)$ that induce
successful runs with $\boldsymbol{B}$-decompositions, and the property of
having a $\boldsymbol{B}$-decomposition is preserved under pumping.
The following result shows that $\cT'$ is the best under-approximation
of $\cT$ within the class of one-way transducers with $\cT$-pumpable domains:
\begin{cor}\label{cor:best-underapproximation}
Given a functional two-way transducer $\cT$, one can construct a one-way transducer $\cT'$ such that
\begin{itemize}
\item $\cT' \subseteq \cT$ and $\dom(\cT')$ is $\cT$-pumpable,
\item for all one-way transducers $\cT''$, if $\cT'' \subseteq \cT$ and $\dom(\cT'')$ is $\cT$-pumpable,
then $\cT'' \subseteq \cT'$.
\end{itemize}
\end{cor}
\begin{proof}
The transducer $\cT'$ is precisely the one defined in Proposition \ref{prop:construction-twoway}.
As already explained, its domain $\dom(\cT')$ is a $\cT$-pumpable language. In particular, $\cT'$
satisfies the conditions in the first item.
For the conditions in the second item, consider a one-way transducer $\cT'' \subseteq \cT$
with a $\cT$-pumpable domain $L=\dom(\cT'')$. Let $\tilde\cT$ be the transducer obtained from
$\cT$ by restricting its domain to $L$. Clearly, $\tilde\cT$ is one-way definable, and one
could apply Proposition \ref{prop:periodicity-twoway} to $\tilde\cT$, using $\cT''$ as a
witness of one-way definability. In particular, when it comes to comparing the outputs of the
pumped runs of $\tilde\cT$ and $\cT''$, one could exploit the fact that the domain $L$ of $\cT''$,
and hence the domain of $\tilde\cT$ as well, is $\cT$-pumpable. This allows us to derive
periodicities of inversions with the same bound $\boldsymbol{B}$ as before, but only restricted
to the successful runs of $\cT$ on the input words that belong to $L$.
As a consequence, one can define $\boldsymbol{B}$-decompositions of successful runs of $\cT$
on words in $L$, thus showing that $L \subseteq \dom(\cT')$. This proves that $\cT'' \subseteq \cT'$.
\end{proof}
\section{Basic combinatorics for sweeping transducers}\label{sec:combinatorics-sweeping}
We fix for the rest of the section a functional \emph{sweeping transducer} $\cT$,
an input word $u$, and a (normalized) successful run $\rho$ of $\cT$ on $u$.
\medskip
\subsection*{Pumping loops.}
Loops turn out to be a basic concept for characterizing one-way definability.
Formally, a \emph{loop} of $\rho$ is an interval $L=[x_1,x_2]$ such that $\rho|x_1=\rho|x_2$,
namely, with the same crossing sequences at the extremities.
The run $\rho$ can be pumped at any loop $L=[x_1,x_2]$, and this gives rise
to new runs with iterated factors. Below we study precisely the shape of
these pumped runs.
\begin{defi}[anchor point, trace]
Given a loop $L$ and a location $\ell$ of $\rho$, we say that $\ell$ is an
\emph{anchor point in $L$} if $\ell$ is the first location of some factor
of $\rho$ that is intercepted by $L$;
this factor is then denoted%
\footnote{This is a slight abuse of notation, since the factor $\tr{\ell}$
is not determined by $\ell$ alone, but requires also the knowledge of the loop $L$,
which is usually clear from the context.}
as $\tr{\ell}$ and called the \emph{trace of $\ell$}.
\end{defi}
Observe that a loop can have at most ${\boldsymbol{H}} = 2|Q|-1$ anchor points, since
we consider only normalized runs.
Given a loop $L$ of $\rho$ and a number $n\in\bbN$, we can replicate $n$ times
the factor $u[x_1,x_2]$ of the input, obtaining a new input of the form
\begin{equation}\label{eq:pumped-word}
\ensuremath{\mathsf{pump}}_L^{n+1}(u) ~=~ u[1,x_1]\cdot \big(u[x_1+1,x_2]\big)^{n+1} \cdot u[x_2+1,|u|].
\end{equation}
Similarly, we can replicate $n$ times the intercepted factors $\tr{\ell}$ of $\rho$,
for all anchor points $\ell$ of $L$. In this way we obtain a successful run on $\ensuremath{\mathsf{pump}}_L^{n+1}(u)$
that is of the form
\begin{equation}\label{eq:pumped-run}
\ensuremath{\mathsf{pump}}_L^{n+1}(\rho) ~=~ \rho_0 ~ \tr{\ell_1}^n ~ \rho_1 ~ \dots ~ \rho_{k-1} ~ \tr{\ell_k}^n ~ \rho_k
\end{equation}
where $\ell_1\mathrel{\unlhd}\dots\mathrel{\unlhd}\ell_k$ are all the anchor points in $L$
(listed according to the run order $\mathrel{\unlhd}$), $\rho_0$ is the prefix of $\rho$
ending at $\ell_1$,
$\rho_k$ is the suffix of $\rho$
starting at $\ell_k$, and for all $i=1,\dots,k-1$,
$\rho_i$ is the factor of $\rho$ between
$\ell_i$ and $\ell_{i+1}$.
Note that $\ensuremath{\mathsf{pump}}_L^1(\rho)$ coincides with the original run $\rho$. As a matter of fact,
one could define in a similar way the run $\ensuremath{\mathsf{pump}}_L^0(\rho)$ obtained from removing the loop
$L$ from $\rho$. However, we do not need this, and we will always parametrize the operation
$\ensuremath{\mathsf{pump}}_L$ by a positive number $n+1$.
An example of a pumped run $\ensuremath{\mathsf{pump}}_{L_1}^3(\rho)$ is given in Figure \ref{fig:pumping-sweeping},
together with the indication of the anchor points $\ell_i$ and the intercepted factors $\tr{\ell_i}$.
\input{pumping-sweeping}
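The effect of $\ensuremath{\mathsf{pump}}_L^{n+1}$ on the input word alone is easy to mirror in code. The sketch below is ours, not part of the formal development: the helper name \texttt{pump} and its 1-based position convention are assumptions, chosen to match Equation (\ref{eq:pumped-word}).

```python
def pump(u: str, x1: int, x2: int, n_plus_1: int) -> str:
    """Replicate the factor u[x1+1 .. x2] (1-based positions) n+1 times,
    mirroring pump_L^{n+1}(u) for the loop L = [x1, x2]."""
    return u[:x1] + u[x1:x2] * n_plus_1 + u[x2:]

u = "abcde"
assert pump(u, 1, 3, 1) == u              # pump_L^1(u) coincides with u
assert pump(u, 1, 3, 3) == "abcbcbcde"    # the factor "bc" is tripled
```

Pumping the run itself additionally replicates the traces $\tr{\ell}$ of the anchor points, as in Equation (\ref{eq:pumped-run}); the input-side picture above is the part that is purely string manipulation.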
\medskip
\subsection*{Output minimality.}
We are interested in factors of the run $\rho$ that lie on a single level
and contribute to the final output in a minimal way, in the sense
formalized by the following definition:
\begin{defi}\label{def:output-minimal-sweeping}
Consider a factor $\a=\rho[\ell,\ell']$ of $\rho$.
We say that $\a$ is \emph{output-minimal} if
$\ell=(x,y)$ and $\ell'=(x',y)$, and all loops $L \subsetneq
[x,x']$
produce empty output at level $y$.
\end{defi}
From now on, we set the constant $\boldsymbol{B} = {\boldsymbol{C}}\,|Q|^{{\boldsymbol{H}}} +1$,
where ${\boldsymbol{C}}$ is the capacity of the transducer, that is, the
maximal length of an output produced on a single transition (recall
that $|Q|^{{\boldsymbol{H}}}$ is the maximal number of crossing sequences).
As shown below, $\boldsymbol{B}$ bounds the length of
the output produced by an output-minimal factor:
\begin{lem}\label{lem:output-minimal-sweeping}
For all output-minimal factors $\alpha$,
$|\out{\alpha}| \le \boldsymbol{B}$.
\end{lem}
\begin{proof}
Suppose by contradiction that $|\out{\alpha}| > \boldsymbol{B}$, with
$\a=\rho[\ell,\ell']$, $\ell=(x,y)$ and $\ell'=(x',y)$.
Let $X$ be the set of all positions $x''$, with $\min(x,x') < x'' < \max(x,x')$,
that are sources of transitions of $\alpha$ that produce non-empty output.
Clearly, the total number of letters produced by the transitions that depart
from locations in $X\times\{y\}$ is strictly larger than $\boldsymbol{B}-1$.
Moreover, since each transition emits at most ${\boldsymbol{C}}$ symbols, we have $|X| > \frac{\boldsymbol{B}-1}{{\boldsymbol{C}}} = |Q|^{{\boldsymbol{H}}}$.
Now, recall that crossing sequences are sequences of states of length at most ${\boldsymbol{H}}$.
Since $|X|$ is larger than the number of crossing sequences, $X$ contains two positions
$x_1<x_2$ such that $\rho|x_1=\rho|x_2$. In particular, $L=[x_1,x_2]$
is a loop strictly between $x,x'$
with non-empty output on level $y$.
This shows that $\rho[\ell,\ell']$ is not output-minimal.
\end{proof}
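The counting step at the heart of this proof is a plain pigeonhole argument: more output-producing positions than distinct crossing sequences forces a repeat, which yields a loop. The sketch below illustrates this on toy data; the helper \texttt{find\_loop} and the example crossing sequences are ours, not part of the formal development.

```python
def find_loop(crossing_seq_at, positions):
    """Return a pair of positions carrying the same crossing sequence;
    such a pair must exist whenever there are more positions than
    distinct crossing sequences (pigeonhole), and it delimits a loop."""
    seen = {}
    for x in sorted(positions):
        c = crossing_seq_at(x)
        if c in seen:
            return seen[c], x
        seen[c] = x
    return None

# Toy instance: 5 positions but only 3 distinct crossing sequences.
cs = {1: "q0", 2: "q1", 3: "q2", 4: "q1", 5: "q0"}
assert find_loop(cs.get, cs) == (2, 4)
```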
\medskip
\subsection*{Inversions and periodicity.}
Next, we define the crucial notion of inversion. Intuitively, an inversion
in a run identifies a part of the run that is potentially difficult to
simulate in a one-way manner because the order of generating the
output is reversed w.r.t.~the input. Inversions arise naturally in transducers
that reverse arbitrarily long portions of the input, as well as in transducers
that produce copies of arbitrarily long portions of the input.
\label{page-def-inversion}
\begin{defi}\label{def:inversion-sweeping}
An \emph{inversion} of the run $\rho$ is a tuple $(L_1,\ell_1,L_2,\ell_2)$ such that
\begin{enumerate}
\item $L_1,L_2$ are loops of $\r$,
\item $\ell_1=(x_1,y_1)$ and $\ell_2=(x_2,y_2)$
are anchor points of $L_1$ and $L_2$, respectively,
\item $\ell_1 \mathrel{\lhd} \ell_2$ and $x_1 > x_2$
\par\noindent
(namely, $\ell_2$ follows $\ell_1$ in the run,
but the position of $\ell_2$ precedes the position of $\ell_1$),
\item for both $i=1$ and $i=2$, $\out{\tr{\ell_i}}\neq\emptystr$ and $\tr{\ell_i}$ is output-minimal.
\end{enumerate}
\end{defi}
\noindent
The left hand-side of Figure~\ref{fig:inversion-sweeping} gives an example of an inversion,
assuming that the outputs $v_1=\tr{\ell_1}$ and $v_2=\tr{\ell_2}$ are non-empty
and the intercepted factors are output-minimal.
\input{inversion-sweeping}
The rest of the section is devoted to proving the implication \PR1 $\Rightarrow$ \PR2
of Theorem \ref{thm:main2}.
We recall that a word $w=a_1 \cdots a_n$ has \emph{period} $p$ if for every $1\le i\le |w|-p$,
we have $a_i = a_{i+p}$. For example, the word $abc \, abc \, ab$ has period $3$.
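The period condition can be tested directly from this definition; the helper below (the name \texttt{has\_period} is our choice) checks the example word from the text.

```python
def has_period(w: str, p: int) -> bool:
    """w = a_1 ... a_n has period p iff a_i = a_{i+p} for all 1 <= i <= |w|-p."""
    return all(w[i] == w[i + p] for i in range(len(w) - p))

assert has_period("abcabcab", 3)        # the example from the text
assert not has_period("abcabcab", 2)
assert has_period("abcabcab", 6)        # a word may have several periods
```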
We remark that, thanks to Lemma \ref{lem:output-minimal-sweeping},
for every inversion $(L_1,\ell_1,L_2,\ell_2)$, the outputs
$\out{\tr{\ell_1}}$ and $\out{\tr{\ell_2}}$ have length at most $\boldsymbol{B}$.
By pairing this with the assumption that the transducer $\cT$ is one-way definable,
and by using some classical word combinatorics, we show that the output
produced between the anchor points of every inversion has period that divides
the lengths of $\out{\tr{\ell_1}}$ and $\out{\tr{\ell_2}}$. In particular, this
period is at most $\boldsymbol{B}$.
The proposition below shows a slightly stronger periodicity property,
which refers to the output produced between the anchor points $\ell_1,\ell_2$
of an inversion, but extended on both sides with the words $\out{\tr{\ell_1}}$ and $\out{\tr{\ell_2}}$.
We will exploit this stronger periodicity property later, when dealing with
overlapping portions of the run delimited by different inversions
(cf.~Lemma \ref{lem:overlapping}).
\begin{prop}\label{prop:periodicity-sweeping}
If $\cT$ is one-way definable, then the following property \PR2 holds:
\begin{quote}
For all inversions
$(L_1,\ell_1,L_2,\ell_2)$ of $\rho$, the period $p$ of the word
\[
\out{\tr{\ell_1}} ~ \out{\rho[\ell_1,\ell_2]} ~ \out{\tr{\ell_2}}
\]
divides both $|\out{\tr{\ell_1}}|$ and
$|\out{\tr{\ell_2}}|$. Moreover, $p \le \boldsymbol{B}$.
\end{quote}
\end{prop}
The above proposition
thus formalizes the implication \PR1 $\Rightarrow$ \PR2 of
Theorem \ref{thm:main2}.
Its proof relies on a few combinatorial results.
The first one is Fine and Wilf's theorem~\cite{Lothaire97}.
In short, this theorem says that, whenever two periodic
words $w_1,w_2$ share a sufficiently long factor, then they
have the same period.
Here we use a slightly stronger variant of Fine and Wilf's theorem,
which additionally shows how to align a common factor of the two words $w_1,w_2$
so as to form a third word containing a prefix of $w_1$ and a suffix of $w_2$.
This variant of Fine-Wilf's theorem will be particularly useful in the proof of Lemma~\ref{lem:overlapping}, while for all other applications the classical
statement suffices.
\begin{thm}[Fine-Wilf's theorem]\label{thm:fine-wilf}
If $w_1 = w'_1\,w\,w''_1$ has period $p_1$,
$w_2 = w'_2\,w\,w''_2$ has period $p_2$, and
the common factor $w$ has length at least $p_1+p_2-\gcd(p_1,p_2)$,
then $w_1$, $w_2$, and $w_3 = w'_1\,w\,w''_2$ have period $\gcd(p_1,p_2)$.
\end{thm}
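The statement can be sanity-checked mechanically on small instances. In the sketch below (the helper and the instance are ours, not from the text) we take $p_1=2$, $p_2=4$ and a common factor of the minimal admissible length $p_1+p_2-\gcd(p_1,p_2)=4$.

```python
from math import gcd

def has_period(w: str, p: int) -> bool:
    return all(w[i] == w[i + p] for i in range(len(w) - p))

p1, p2 = 2, 4
w = "abab"                    # common factor of length p1 + p2 - gcd(p1, p2)
w1 = "ab" + w + "ab"          # w1' w w1'' with period p1
w2 = w + "abab"               # w2' w w2'' (empty w2') with period p2
assert has_period(w1, p1) and has_period(w2, p2)
w3 = "ab" + w + "abab"        # the combined word w1' w w2''
for word in (w1, w2, w3):
    assert has_period(word, gcd(p1, p2))   # all have period gcd(p1, p2) = 2
```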
The second combinatorial result required in our proof concerns periods of words
with iterated factors, like those that arise from considering outputs of pumped runs,
and it is formalized precisely by the lemma below.
To improve readability, we often highlight the
important iterations of factors inside a word.
\begin{lem}\label{lem:periods}
Assume that $v_0 \: \pmb{v_1^n} \: v_2 \: \cdots \: v_{k-1} \: \pmb{v_k^n} \: v_{k+1}$
has period $p$ for some $n >p$.
Then $v_0 \: \pmb{{v_1}^{n_1}} \: v_2 \: \cdots \: v_{k-1} \: \pmb{{v_k}^{n_k}} \: v_{k+1}$
has period $p$ for all $n_1,\ldots,n_k \in \Nat$.
\end{lem}
\begin{proof}
Assume that
$w=v_0 \: \pmb{v_1^n} \: v_2 \: \cdots \: v_{k-1} \: \pmb{v_k^n} \: v_{k+1}$
has period $p$, and that $n >p$.
Consider an arbitrary factor $v_i^p$ of $w$. Since $v_i^p$ has periods $p$ and $|v_i|$,
it has also period $r=\gcd(p,|v_i|)$.
By Fine-Wilf (Theorem \ref{thm:fine-wilf}),
we know that $w$ has period $r$ as well.
Moreover, since the length of $v_i$ is a multiple of $r$,
changing the number of repetitions of $v_i$ inside $w$ does
not affect the period $r$ of $w$.
Since $v_i$ was chosen arbitrarily, this means that, for all $n_1,\dots,n_k\in\bbN$,
$v_0 \: \pmb{{v_1}^{n_1}} \: v_2 \: \cdots \: v_{k-1} \: \pmb{{v_k}^{n_k}} \: v_{k+1}$
has period $r$, and hence period $p$ as well.
\end{proof}
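The invariance under changing the number of repetitions can again be checked concretely (the helper and instance below are ours): the word $v_0\,v_1^n\,v_2$ has period $p=2$ for $n=3>p$, and the lemma predicts the same period for every repetition count.

```python
def has_period(w: str, p: int) -> bool:
    return all(w[i] == w[i + p] for i in range(len(w) - p))

v0, v1, v2 = "a", "ba", "b"
p, n = 2, 3                          # n > p, and v0 v1^n v2 has period p
assert has_period(v0 + v1 * n + v2, p)
# By the lemma, the period survives any number n1 of repetitions of v1:
for n1 in range(6):
    assert has_period(v0 + v1 * n1 + v2, p)
```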
Recall that our goal is to show that the output produced within every
inversion has period bounded by $\boldsymbol{B}$.
The general idea is to pump the loops of the inversion and compare the outputs
of the two-way transducer $\cT$ with those of an equivalent one-way
transducer $\cT'$.
The comparison leads to an equation between words with
iterated factors, where the iterations are parametrized by two unknowns
$n_1,n_2$ that occur in opposite order in the left, respectively right
hand-side of the equation.
Our third and last combinatorial result considers a word equation of
this precise form, and derives from it a periodicity property.
For the sake of brevity, we use the notation $v^{(n_1,n_2)}$
to represent words with factors iterated $n_1$ or $n_2$ times,
namely, words of the form
$v_0 \: v_1^{n_{i_1}} \: v_2 \: \cdots \: v_{k-1} \: v_k^{n_{i_k}} \: v_{k+1}$,
where the $v_0,v_1,v_2,\dots,v_{k-1},v_k,v_{k+1}$ are fixed words (possibly empty)
and each index among $i_1,\dots,i_k$ is either $1$ or $2$.
\begin{lem}\label{lem:oneway-vs-twoway}
Consider a word equation of the form
\[
v_0^{(n_1,n_2)} \: \pmb{v_1^{n_1}} \: v_2^{(n_1,n_2)} \: \pmb{v_3^{n_2}} \: v_4^{(n_1,n_2)}
~=~
w_0 \: \pmb{w_1^{n_2}} \: w_2 \: \pmb{w_3^{n_1}} \: w_4
\]
where $n_1,n_2$ are the unknowns and $v_1,v_3$ are non-empty words.
If the above equation holds for all $n_1,n_2\in\bbN$,
then
\[
\pmb{v_1} ~ \pmb{v_1^{n_1}} ~ v_2^{(n_1,n_2)} ~ \pmb{v_3^{n_2}} ~ \pmb{v_3}
\]
has period $\gcd(|v_1|,|v_3|)$ for all $n_1,n_2\in\bbN$.
\end{lem}
\begin{proof}
The idea of the proof is to let the parameters $n_1,n_2$ of the equation
grow independently, and apply Fine and Wilf's theorem (Theorem \ref{thm:fine-wilf})
a certain number of times to establish periodicities in overlapping factors of the
considered words.
We begin by fixing $n_1$ large enough so that the factor
$\pmb{v_1^{n_1}}$ of the left hand-side of the equation becomes
longer than $|w_0|+|w_1|$ (this is possible because $v_1$ is non-empty).
Now, if we let $n_2$ grow arbitrarily large, we see that the length of
the periodic word $\pmb{w_1^{n_2}}$ is almost equal to the length of
the left hand-side term
$v_0^{(n_1,n_2)} ~ \pmb{v_1^{n_1}} ~ v_2^{(n_1,n_2)} ~ \pmb{v_3^{n_2}} ~ v_4^{(n_1,n_2)}$:
indeed, the difference in length is given by
$|w_0| + |w_2| + n_1\cdot |w_3| + |w_4|$, which is constant once $n_1$ is fixed.
In particular, this implies that $\pmb{w_1^{n_2}}$
covers arbitrarily long prefixes of
$\pmb{v_1} ~ v_2^{(n_1,n_2)} ~ \pmb{v_3^{n_2+1}}$,
which in turn contains long repetitions of the word $v_3$.
Hence, by Theorem \ref{thm:fine-wilf},
the word
$\pmb{v_1} ~ v_2^{(n_1,n_2)} ~ \pmb{v_3^{n_2+1}}$
has period $|v_3|$.
We remark that the periodicity shown so far holds for a large enough
$n_1$ and for all but finitely many $n_2$, where the threshold for
$n_2$ depends on $n_1$: once $n_1$ is fixed, $n_2$ needs to be larger
than $f(n_1)$, for a suitable function $f$.
In fact, by using
Lemma \ref{lem:periods},
with $n_1$ fixed and $n=n_2$ large enough,
we deduce that the periodicity holds for large enough $n_1\in\bbN$ and
for all $n_2\in\bbN$.
We could also apply a symmetric reasoning: we choose
$n_2$ large enough and let $n_1$ grow arbitrarily large. Doing so, we
prove that for a large enough $n_2$ and for all but finitely many $n_1$,
the word $\pmb{v_1^{n_1+1}} ~ v_2^{(n_1,n_2)} ~ \pmb{v_3}$ is periodic
with period $|v_1|$. As before, with the help of
Lemma \ref{lem:periods},
this can be strengthened to hold for large enough $n_2\in\bbN$ and for all $n_1\in\bbN$.
Putting together the results proven so far, we get that for all but finitely many $n_1,n_2$,
\[
\rightward{ \underbracket[0.5pt]{ \phantom{ \pmb{v_1^{n_1}} \cdot \pmb{v_1} ~\cdot~
v_2^{(n_1,n_2)} ~\cdot~ \pmb{v_3} } }%
_{\text{period } |v_1|} }
\pmb{v_1^{n_1}} \cdot
\overbracket[0.5pt]{ \pmb{v_1} ~\cdot~ v_2^{(n_1,n_2)} ~\cdot~
\pmb{v_3} \cdot \pmb{v_3^{n_2}} }%
^{\text{period } |v_3|}
.
\]
Finally, we observe that the prefix
$\pmb{v_1^{n_1+1}}\cdot v_2^{(n_1,n_2)}\cdot \pmb{v_3}$
and the suffix
$\pmb{v_1}\cdot v_2^{(n_1,n_2)}\cdot\pmb{v_3^{n_2+1}}$
share a common factor of length at least $|v_1|+|v_3|$.
By Theorem~\ref{thm:fine-wilf},
we derive that $\pmb{v_1^{n_1+1}}\cdot v_2^{(n_1,n_2)}\cdot
\pmb{v_3^{n_2+1}}$ has period $\gcd(|v_1|,|v_3|)$ for all but finitely
many $n_1,n_2$.
Finally, by using again
Lemma \ref{lem:periods},
we conclude that the periodicity holds for all $n_1,n_2\in\bbN$.
\end{proof}
\medskip
We are now ready to prove the implication \PR1 $\Rightarrow$ \PR2:
\begin{proof}[Proof of Proposition~\ref{prop:periodicity-sweeping}]
Let $\cT'$ be a one-way transducer equivalent to $\cT$, and consider
an inversion $(L_1,\ell_1,L_2,\ell_2)$ of the successful run $\rho$ of $\cT$
on input $u$.
The reader may refer to Figure \ref{fig:inversion-sweeping}
to get basic intuition about the proof technique.
For simplicity, we assume that the loops $L_1$ and $L_2$ are disjoint,
as shown in the figure. If this were not the case, we would have
at least $\max(L_1) > \min(L_2)$, since the anchor point $\ell_1$
is strictly to the right of the anchor point $\ell_2$.
We could then consider the pumped run $\ensuremath{\mathsf{pump}}_{L_1}^k(\rho)$ for a
large enough $k>1$ in such a way that the rightmost copy of $L_1$
turns out to be disjoint from and strictly to the right of $L_2$.
We could thus reason as we do below, by replacing everywhere
(except in the final part of the proof, cf.~{\em Transferring periodicity to the original run})
the run $\rho$ with the pumped run $\ensuremath{\mathsf{pump}}_{L_1}^k(\rho)$, and the formal
parameter $m_1$ with $m_1+k$.
\smallskip\noindent
{\em Inducing loops in $T'$.}~
We begin by pumping the run $\rho$ and the underlying input $u$,
on the loops $L_1$ and $L_2$, in order to induce new loops $L'_1$
and $L'_2$ that are also loops in a successful run of $\cT'$.
Assuming that $L_1$ is strictly to the right of $L_2$, we define for all numbers $m_1,m_2\in\bbN$:
\[
\begin{array}{rcl}
u^{(m_1,m_2)} &=& \ensuremath{\mathsf{pump}}_{L_1}^{m_1+1}(\ensuremath{\mathsf{pump}}_{L_2}^{m_2+1}(u)) \\[1ex]
\rho^{(m_1,m_2)} &=& \ensuremath{\mathsf{pump}}_{L_1}^{m_1+1}(\ensuremath{\mathsf{pump}}_{L_2}^{m_2+1}(\rho)).
\end{array}
\]
In the pumped run $\rho^{(m_1,m_2)}$, we identify the positions that mark the
endpoints of the occurrences of $L_1,L_2$. More precisely, if $L_1=[x_1,x_2]$
and $L_2=[x_3,x_4]$, with $x_1>x_4$, then the sets of these positions are
\[
\begin{array}{rcl}
X_2^{(m_1,m_2)} &=& \big\{ x_3 + i(x_4-x_3) ~:~ 0\le i\le m_2+1 \big\} \\[1ex]
X_1^{(m_1,m_2)} &=& \big\{ x_1 + j(x_2-x_1) + m_2(x_4-x_3) ~:~ 0\le j\le m_1+1 \big\}.
\end{array}
\]
\smallskip\noindent
{\em Periodicity of outputs of pumped runs.}~
We use now the fact that $\cT'$ is a one-way transducer equivalent to $\cT$.
We first recall (see, for instance, \cite{eilenberg1974automata,berstel2013transductions})
that every functional one-way transducer can be made unambiguous, namely,
can be transformed into an equivalent one-way transducer that admits at
most one successful run on each input.
This means that, without loss of generality, we can assume that $\cT'$ too
is unambiguous, and hence it admits exactly one successful run, say
$\lambda^{(m_1,m_2)}$, on each input $u^{(m_1,m_2)}$.
Since $\cT'$ has finitely many states, we can find, for a large enough number $k_0$
two positions $x'_1<x'_2$, both in $X_1^{(k_0,k_0)}$, such that $L'_1=[x'_1,x'_2]$
is a loop of $\lambda^{(k_0,k_0)}$. Similarly, we can find two positions
$x'_3<x'_4$, both in $X_2^{(k_0,k_0)}$, such that $L'_2=[x'_3,x'_4]$ is a
loop of $\lambda^{(k_0,k_0)}$.
By construction $L'_1$ (resp.~$L'_2$) consists of $k_1\leq k_0$
(resp.~$k_2\leq k_0$) copies of $L_1$ (resp.~$L_2$), and
hence $L'_1,L'_2$ are also loops of $\rho^{(k_0,k_0)}$.
In particular, this implies that for all $n_1,n_2\in\bbN$:
\[
\begin{array}{rcl}
\ensuremath{\mathsf{pump}}_{L'_1}^{n_1+1}(\ensuremath{\mathsf{pump}}_{L'_2}^{n_2+1}(u^{(k_0,k_0)}))
&=& u^{(f(n_1),g(n_2))} \\[1ex]
\ensuremath{\mathsf{pump}}_{L'_1}^{n_1+1}(\ensuremath{\mathsf{pump}}_{L'_2}^{n_2+1}(\rho^{(k_0,k_0)}))
&=& \rho^{(f(n_1),g(n_2))} \\[1ex]
\ensuremath{\mathsf{pump}}_{L'_1}^{n_1+1}(\ensuremath{\mathsf{pump}}_{L'_2}^{n_2+1}(\lambda^{(k_0,k_0)}))
&=& \lambda^{(f(n_1),g(n_2))}.
\end{array}
\]
where $f(n_1)=k_1 n_1+k_0$ and $g(n_2)=k_2 n_2+k_0$.
Now recall that $\rho^{(f(n_1),g(n_2))}$ and $\lambda^{(f(n_1),g(n_2))}$ are
runs of $\cT$ and $\cT'$ on the same word $u^{(f(n_1),g(n_2))}$, and they
produce the same output. Let us denote this output by $w^{(f(n_1),g(n_2))}$.
Below, we show two possible factorizations of $w^{(f(n_1),g(n_2))}$
based on the shapes of the pumped runs $\lambda^{(f(n_1),g(n_2))}$
and $\rho^{(f(n_1),g(n_2))}$.
For the first factorization, we recall that $L'_2$ precedes $L'_1$,
according to the ordering of positions, and that the run
$\lambda^{(f(n_1),g(n_2))}$ is one-way (in particular loops have only one anchor point and one trace). We thus obtain:
\begin{equation}\label{eq:one-way}
w^{(f(n_1),g(n_2))} ~=~ w_0 ~ \pmb{w_1^{n_2}} ~ w_2 ~ \pmb{w_3^{n_1}} ~ w_4
\end{equation}
where
\begin{itemize}
\item $w_0$ is the output produced by the prefix of $\lambda^{(k_0,k_0)}$
ending at the only anchor point of $L'_2$,
\item $\pmb{w_1}$ is the trace of $L'_2$,
\item $w_2$ is the output produced by the factor of $\lambda^{(k_0,k_0)}$
between the anchor points of $L'_2$ and $L'_1$,
\item $\pmb{w_3}$ is the trace of $L'_1$,
\item $w_4$ is the output produced by the suffix of
$\lambda^{(k_0,k_0)}$ starting at the anchor point of $L'_1$. Hence, $w_0 ~ w_2 ~ w_4$ is the output of $\lambda^{(k_0,k_0)}$.
\end{itemize}
For the second factorization, we consider $L'_1$ and $L'_2$ as loops of $\rho^{(k_0,k_0)}$.
We recall that $\ell_1,\ell_2$ are anchor points of the loops $L_1,L_2$ of $\rho$, and that
there are corresponding copies of these anchor points in the pumped
run $\rho^{(f(n_1),g(n_2))}$.
We define $\ell'_1$ (resp.~$\ell'_2$) to be the first (resp.~last)
location in $\rho^{(f(n_1),g(n_2))}$
that corresponds to $\ell_1$ (resp.~$\ell_2$) and that is an anchor point of a copy of $L'_1$ (resp.~$L'_2$).
For example, if $\ell_1=(x_1,y_1)$, with $y_1$ even, then $\ell'_1=\big(x_1+g(n_2)(x_4-x_3),y_1\big)$.
Thanks to Equation~(\ref{eq:pumped-run}) we know that the output produced by
$\rho^{(f(n_1),g(n_2))}$ is of the form
\begin{equation}\label{eq:two-way}
w^{(f(n_1),g(n_2))} ~=~
v_0^{(n_1,n_2)} ~ \pmb{v_1^{n_1}} ~ v_2^{(n_1,n_2)} ~ \pmb{v_3^{n_2}} ~ v_4^{(n_1,n_2)}
\end{equation}
where
\begin{itemize}
\item $\pmb{v_1}=\out{\tr{\ell'_1}}$, where $\ell'_1$ is seen as an anchor point in a copy of $L'_1$,
\item $\pmb{v_3}=\out{\tr{\ell'_2}}$, where $\ell'_2$ is seen as an anchor point in a copy of $L'_2$
\par\noindent
(note that the words $v_1,v_3$ depend on $k_0$, but not on $n_1,n_2$),
\item $v_0^{(n_1,n_2)}$ is the output produced by the prefix of $\rho^{(f(n_1),g(n_2))}$
that ends at $\ell_1'$
(this word may depend on the parameters $n_1,n_2$ since the loops
$L'_1,L'_2$ may be traversed several times before reaching the first occurrence of $\tr{\ell'_1}$),
\item $v_2^{(n_1,n_2)}$ is the output produced by the factor of $\rho^{(f(n_1),g(n_2))}$
that starts at $\ell'_1$
and ends at $\ell'_2$,
\item $v_4^{(n_1,n_2)}$ is the output produced by the suffix of $\rho^{(f(n_1),g(n_2))}$
that starts at $\ell'_2$.
\end{itemize}
Putting together Equations~(\ref{eq:one-way}) and (\ref{eq:two-way}), we get
\begin{equation}\label{eq:one-way-vs-two-way}
v_0^{(n_1,n_2)} ~ \pmb{v_1^{n_1}} ~ v_2^{(n_1,n_2)} ~ \pmb{v_3^{n_2}} ~ v_4^{(n_1,n_2)}
~~=~~
w_0 ~ \pmb{w_1^{n_2}} ~ w_2 ~ \pmb{w_3^{n_1}} ~ w_4 .
\end{equation}
Recall that the definition of inversions (Definition
\ref{def:inversion-sweeping}) states
that the words $v_1,v_3$ are non-empty.
This allows us to apply Lemma \ref{lem:oneway-vs-twoway}, which shows that the word
$\pmb{v_1} ~ \pmb{v_1^{n_1}} ~ v_2^{(n_1,n_2)} ~ \pmb{v_3^{n_2}} ~ \pmb{v_3}$
has period $p=\gcd(|v_1|,|v_3|)$, for all $n_1,n_2\in\bbN$.
Note that the latter period $p$ still depends on $\cT'$, since
the words $v_1,v_3$ were obtained from loops
$L'_1,L'_2$ on the run $\lambda^{(k_0,k_0)}$ of $\cT'$.
However, because each loop $L'_i$ consists of $k_i$ copies of the original loop $L_i$,
we also know that $v_1=(\out{\tr{\ell_1}})^{k_1}$ and $v_3=(\out{\tr{\ell_2}})^{k_2}$.
By Theorem \ref{thm:fine-wilf}, this implies that for all $n_1,n_2\in\bbN$, the word
\[
\big(\out{\tr{\ell_1}}\big) ~
\big(\out{\tr{\ell_1}}\big)^{k_1 n_1} ~
v_2^{(n_1,n_2)} ~
\big(\out{\tr{\ell_2}}\big)^{k_2 n_2} ~
\big(\out{\tr{\ell_2}}\big)
\]
has a period that divides $|\out{\tr{\ell_1}}|$ and $|\out{\tr{\ell_2}}|$.
\smallskip\noindent
{\em Transferring periodicity to the original run.}~
The last part of the proof amounts to showing a similar periodicity property
for the output produced by the original run $\rho$.
By construction, the iterated factors inside $v_2^{(n_1,n_2)}$ in the previous
word are all of the form $v^{k_1 n_1 + k_0}$ or $v^{k_2 n_2 + k_0}$,
for some words $v$. By taking out the constant factors $v^{k_0}$ from the
latter repetitions, we can write $v_2^{(n_1,n_2)}$ as a word with iterated
factors of the form $v^{k_1 n_1}$ or $v^{k_2 n_2}$, namely, as ${v'_2}^{(k_1 n_1, k_2 n_2)}$.
So the word
\[
\big(\out{\tr{\ell_1}}\big) ~
\big(\out{\tr{\ell_1}}\big)^{k'_1} ~
{v'_2}^{(k'_1, k'_2)} ~
\big(\out{\tr{\ell_2}}\big)^{k'_2} ~
\big(\out{\tr{\ell_2}}\big)
\]
is periodic, with period that divides $|\out{\tr{\ell_1}}|$ and $|\out{\tr{\ell_2}}|$,
for all $k'_1 \in \{k_1 n \::\: n\in\bbN\}$ and all $k'_2 \in \{k_2 n \::\: n\in\bbN\}$.
We now apply
Lemma \ref{lem:periods},
once with $n=k'_1$ and once with $n=k'_2$,
to conclude that the latter periodicity property holds also for $k'_1=1$ and $k'_2=1$.
This shows that the word
\[
\out{\tr{\ell_1}} ~ \out{\rho[\ell_1,\ell_2]} ~ \out{\tr{\ell_2}}
\]
is periodic, with period that divides $|\out{\tr{\ell_1}}|$ and $|\out{\tr{\ell_2}}|$.
\end{proof}
\section{Combinatorics in the two-way case}\label{sec:combinatorics-twoway}
In this section we develop the main combinatorial techniques required
in the general case.
In particular, we will show how to derive the existence of idempotent
loops with bounded outputs using Ramsey-based arguments, and we will
use this to derive periodicity properties for the outputs produced
between inversions.
As usual, $\rho$ is a fixed successful run of $\cT$ on some input word $u$.
\medskip
\subsection*{Ramsey-type arguments.}
We start with a technique used for bounding the lengths of the outputs
of certain factors, or subsequences of a two-way run. This technique
is a Ramsey-type argument, more precisely it relies on
Simon's ``factorization forest'' theorem~\cite{factorization_forests,factorization_forests_for_words_paper},
which is recalled below. The classical version of Ramsey theorem would
yield a similar result, but without the tight bounds that we get here.
Let $X$ be a set of positions of $\rho$.
A \emph{factorization forest} for $X$ is an unranked tree, where the nodes are
intervals $I$ with endpoints in $X$ and labeled with the corresponding effect $E_I$,
the ancestor relation is given by the containment order on intervals, the leaves are the
minimal intervals $[x_1,x_2]$, with $x_2$ successor of $x_1$ in $X$, and for every
internal node $I$ with children $J_1,\dots,J_k$, we have:
\begin{itemize}
\item $I=J_1\cup\dots\cup J_k$,
\item $E_I = E_{J_1}\odot\dots\odot E_{J_k}$,
\item if $k>2$, then $E_I = E_{J_1} = \dots = E_{J_k}$
is an idempotent of the semigroup $(\cE,\odot)$.
\end{itemize}
Recall that in a normalized run there are at most $|Q|^{\boldsymbol{H}}$ distinct
crossing sequences. Moreover, a flow contains at most $\boldsymbol{H}$ edges,
and each edge has one of the 4 possible types ${\ensuremath{\mathsf{LL}}},{\ensuremath{\mathsf{LR}}},{\ensuremath{\mathsf{RL}}},{\ensuremath{\mathsf{RR}}}$, so there are at most $4^{\boldsymbol{H}}$ different flows.
Hence, the effect semigroup $(\cE,\odot)$ has size at most
$\boldsymbol{E} = 4^{\boldsymbol{H}}\cdot|Q|^{\boldsymbol{H}}\cdot|Q|^{\boldsymbol{H}} = (2|Q|)^{2\boldsymbol{H}}$.
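In more detail, the last equality follows by grouping the two crossing-sequence factors and using $4^{\boldsymbol{H}}=(2^2)^{\boldsymbol{H}}=2^{2\boldsymbol{H}}$:
\[
4^{\boldsymbol{H}}\cdot|Q|^{2\boldsymbol{H}} \;=\; 2^{2\boldsymbol{H}}\cdot|Q|^{2\boldsymbol{H}} \;=\; (2|Q|)^{2\boldsymbol{H}}.
\]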
Further recall that $\boldsymbol{C}$ is the maximum number of letters output by a
single transition of $\cT$.
Like we did in the sweeping case, we define the constant
$\boldsymbol{B} = \boldsymbol{C} \cdot \boldsymbol{H} \cdot (2^{3\boldsymbol{E}}+4) + 4\boldsymbol{C}$
that will be used to bound the lengths of some outputs of $\cT$.
Note that now $\boldsymbol{B}$ is doubly exponential with respect
to the size of $\cT$, due to the size of the effect semigroup.
\begin{thm}[Factorization forest theorem \cite{factorization_forests_for_words_paper,factorization_forests}]%
\label{th:simon}
For every set $X$ of positions of $\rho$, there is a factorization forest for $X$
of height at most $3\boldsymbol{E}$.
\end{thm}
\begin{wrapfigure}{r}{5cm}
\vspace{-4mm}
\input{ramsey}
\end{wrapfigure}
The above theorem can be used to show that if $\rho$
produces an output longer than $\boldsymbol{B}$, then it contains an idempotent
loop and a trace with non-empty output. Below, we present a result
in the same spirit, but refined in a way that it can be used to
find anchor points inside specific intervals.
To formally state the result, we consider subsequences
of $\rho$ induced by sets of locations that are not necessarily
contiguous.
Recall the notation $\rho|Z$ introduced on page~\pageref{rhoZ}:
$\rho|Z$ is the subsequence of $\rho$ induced by the location set $Z$.
For example, Figure \ref{fig:ramsey} depicts a set $Z=[\ell_1,\ell_2]\cap (I\times\bbN)$
by a hatched area, together with the induced subrun $\rho|Z$, represented by
thick arrows.
\smallskip
\begin{thm}\label{thm:simon2}
Let $I=[x_1,x_2]$ be an interval of positions, $K=[\ell_1,\ell_2]$
an interval of locations, and $Z = K \:\cap\: (I\times\bbN)$.
If $|\out{\rho|Z}| > \boldsymbol{B}$,
then there exists an idempotent loop $L$ and an anchor point $\ell$ of $L$ such that
\begin{enumerate}
\item $x_1 < \min(L) < \max(L) < x_2$ (in particular, $L\subsetneq I$),
\item $\ell_1 \mathrel{\lhd} \ell \mathrel{\lhd} \ell_2$ (in particular, $\ell \in K$),
\item $\out{\tr{\ell}} \neq \emptystr$.
\end{enumerate}
\end{thm}
\begin{proof}
Let $I$, $K$, $Z$ be as in the statement,
and suppose that $\big|\out{\rho| Z}\big| > \boldsymbol{B}$.
We define
$Z' = Z ~\setminus~ (\{\ell_1,\ell_2\} \cup \{x_1,x_2\}\times\bbN)$
and we observe that there are at most $2\boldsymbol{H}+2$ locations
that are missing from $Z'$. This means that $\rho| Z'$ contains
all but at most $4\boldsymbol{H}+4$
transitions of $\rho|Z$,
and because each transition outputs at most $\boldsymbol{C}$ letters, we have
$|\out{\rho| Z'}| > \boldsymbol{B} - 4\boldsymbol{C}\cdot\boldsymbol{H} - 4\boldsymbol{C} = \boldsymbol{C}\cdot\boldsymbol{H}\cdot 2^{3\boldsymbol{E}}$.
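In more detail, the last equality is obtained by expanding the definition of $\boldsymbol{B}$:
\begin{align*}
\boldsymbol{B} - 4\boldsymbol{C}\cdot\boldsymbol{H} - 4\boldsymbol{C}
&= \boldsymbol{C}\cdot\boldsymbol{H}\cdot\bigl(2^{3\boldsymbol{E}}+4\bigr) + 4\boldsymbol{C} - 4\boldsymbol{C}\cdot\boldsymbol{H} - 4\boldsymbol{C} \\
&= \boldsymbol{C}\cdot\boldsymbol{H}\cdot 2^{3\boldsymbol{E}}.
\end{align*}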
\input{ramsey-proof}
For every level $y$, let $X_y$ be the set of positions $x$ such that
$(x,y)$ is the source location of some transition of $\rho|Z'$
that produces non-empty output.
For example, if we refer to Figure~\ref{fig:ramsey-proof},
the vertical dashed lines represent the positions
of $X_y$ for a particular level $y$; accordingly, the circles
in the figure represent the locations of the form $(x,y)$, for
all $x\in X_y$.
Since each transition outputs at most $\boldsymbol{C}$ letters,
we have $\sum_y |X_y| > \boldsymbol{H}\cdot 2^{3\boldsymbol{E}}$.
Moreover, since there are at most $\boldsymbol{H}$ levels,
there is a level $y$ (which we fix hereafter) such that $|X_y| > 2^{3\boldsymbol{E}}$.
We now prove the following:
\begin{clm}
There are two consecutive loops $L_1=[x,x']$ and $L_2=[x',x'']$
with endpoints $x,x',x''\in X_y$ and such that $E_{L_1}=E_{L_2}=E_{L_1\cup L_2}$.
\end{clm}
\begin{proof}
By Theorem \ref{th:simon},
there is a factorization forest for $X_y$ of height at most $3\boldsymbol{E}$.
Since $\rho$ is a valid run, the dummy element $\bot$ of the effect
semigroup does not appear in this factorization forest.
Moreover, since $|X_y| > 2^{3\boldsymbol{E}}$, we know that the factorization
forest contains an internal node $L'=[x'_1,x'_{k+1}]$ with $k > 2$ children,
say $L_1=[x'_1,x'_2], \dots, L_k=[x'_k,x'_{k+1}]$.
By definition of factorization forest, the effects
$E_{L'}$, $E_{L_1}$, \dots, $E_{L_k}$ are all equal and idempotent.
In particular, the effect $E_{L'}=E_{L_1}=\dots=E_{L_k}$
is a triple of the form $(F_{L'},c_1,c_2)$,
where $c_1=\rho|x'_1$ and $c_2=\rho|x'_{k+1}$ are the crossing sequences at the endpoints of $L'$.
Finally, since $E_{L'}$ is idempotent, we have that $c_1 = c_2$ and
this is equal to the crossing sequences of $\rho$ at the positions
$x'_1,\dots,x'_{k+1}$. This shows that $L_1,L_2$ are idempotent loops.
\end{proof}
Turning back to the proof of the theorem, we know from the above claim that
there are two consecutive idempotent loops $L_1=[x,x']$ and $L_2=[x',x'']$ with the
same effect and with endpoints $x,x',x''\in X_y \subseteq I \:\setminus\: \{x_1,x_2\}$
(see again Figure~\ref{fig:ramsey-proof}).
Let $\tilde\ell_1=(x,y)$ and $\tilde\ell_2=(x'',y)$, and observe that
$\tilde\ell_1,\tilde\ell_2\in Z'$. In particular, $\tilde\ell_1$ and $\tilde\ell_2$
are strictly between $\ell_1$ and $\ell_2$.
Suppose by symmetry that $\tilde\ell_1 \mathrel{\unlhd} \tilde\ell_2$.
Further let $C$ be the component of $L_1\cup L_2$ (or, equally, of $L_1$ or $L_2$)
that contains the node $y$.
Below, we focus on the factors of $\rho[\tilde\ell_1,\tilde\ell_2]$
that are intercepted by $L_1\cup L_2$: these are represented in
Figure~\ref{fig:ramsey-proof} by the thick arrows.
By Lemma~\ref{lem:component2} all these factors correspond to edges
of the same component $C$, namely, they are $(L_1 \cup L_2,C)$-factors.
Let us fix an arbitrary factor $\alpha$ of $\rho[\tilde\ell_1,\tilde\ell_2]$
that is intercepted by $L_1\cup L_2$, and assume that $\alpha=\beta_1 \cdots \beta_k$,
where $\beta_1,\dots,\beta_k$ are the factors intercepted by either $L_1$ or $L_2$.
\begin{clm}
If $\beta,\beta'$ are two factors intercepted by $L_1=[x,x']$ and $L_2=[x',x'']$,
with $E_{L_1}=E_{L_2}=E_{L_1\cup L_2}$, and $\beta,\beta'$ are adjacent in the run
$\rho$ (namely, they share an endpoint at position $x'$), then $\beta,\beta'$
correspond to edges in the same component of $L_1$ (or, equally, $L_2$).
\end{clm}
\begin{proof}
Let $C$ be the component of $L_1$ and $y_1 \mapsto y_2$ the edge of $C$
that corresponds to the factor $\beta$ intercepted by $L_1$.
Similarly, let $C'$ be the component of $L_2$ and $y_3 \mapsto y_4$ the edge of $C'$
that corresponds to the factor $\beta'$ intercepted by $L_2$.
Since $\beta$ and $\beta'$ share an endpoint at position $x'$,
we know that $y_2=y_3$. This shows that $C \cap C' \neq \emptyset$,
and hence $C=C'$.
\end{proof}
The above claim shows that any two adjacent factors $\beta_i,\beta_{i+1}$
correspond to edges in the same component of $L_1$ and $L_2$, respectively.
Thus, by transitivity, all factors $\beta_1,\dots,\beta_k$ correspond to
edges in the same component, say $C'$.
We claim that $C'=C$. Indeed, if $\beta_1$ is intercepted by $L_1$,
then $C'=C$ because $\alpha$ and $\beta_1$ start from the same location
and hence they correspond to edges of the flow that depart from the
same node. The other case is where $\beta_1$ is intercepted by $L_2$,
for which a symmetric argument can be applied.
So far we have shown that every factor of $\rho[\tilde\ell_1,\tilde\ell_2]$ intercepted
by $L_1\cup L_2$ can be factorized into some $(L_1,C)$-factors and some $(L_2,C)$-factors.
We conclude the proof with the following observations:
\begin{itemize}
\item By construction, both loops $L_1,L_2$ are contained in the interval of positions $I=[x_1,x_2]$,
and have endpoints different from $x_1,x_2$.
\item Both anchor points of $C$ inside $L_1,L_2$ belong to the interval of locations
$K\:\setminus\:\{\ell_1,\ell_2\}$.
This holds because
$\rho[\tilde\ell_1,\tilde\ell_2]$ contains a factor $\alpha$ that is intercepted
by $L_1\cup L_2$ and spans across all the positions from $x$ to $x''$, namely,
an ${\ensuremath{\mathsf{LR}}}$-factor.
This factor starts at the anchor point of $C$ inside $L_1$
and visits the anchor point of $C$ inside $L_2$.
Moreover, by construction, $\alpha$ is also a factor of the subsequence $\rho|Z'$.
This shows that the anchor points of $C$ inside $L_1$ and $L_2$ belong to $Z'$,
and in particular to $K\:\setminus\:\{\ell_1,\ell_2\}$.
\item The first factor of $\rho[\tilde\ell_1,\tilde\ell_2]$ that is intercepted by
$L_1\cup L_2$ starts at $\tilde\ell_1=(x,y)$, which by construction is the
source location of some transition producing non-empty output.
By the previous arguments, this factor is a concatenation of $(L_1,C)$-factors
and $(L_2,C)$-factors. This implies that the trace of the anchor point
of $C$ inside $L_1$, or the trace of the anchor point of $C$ inside $L_2$, produces non-empty output.
\qedhere
\end{itemize}
\end{proof}
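The combinatorial core of the first claim in the proof above --- finding consecutive blocks whose products are equal and idempotent --- can be mimicked by a brute-force search over an abstract finite semigroup. The sketch below is purely illustrative (the function name and the toy semigroup are ours, not part of the construction):

```python
from itertools import combinations

def consecutive_idempotent_blocks(seq, mult):
    """Search for positions x < x' < x'' such that the products of seq
    over [x, x'), [x', x'') and [x, x'') are all equal and idempotent,
    as in the claim about the loops L1 and L2."""
    def prod(i, j):
        # product of seq[i] ... seq[j-1] under the semigroup operation
        acc = seq[i]
        for k in range(i + 1, j):
            acc = mult(acc, seq[k])
        return acc

    for x, xp, xpp in combinations(range(len(seq) + 1), 3):
        e1, e2, e = prod(x, xp), prod(xp, xpp), prod(x, xpp)
        if e1 == e2 == e and mult(e, e) == e:
            return (x, xp, xpp, e)
    return None
```

For instance, over the two-element multiplicative semigroup $(\{0,1\},\cdot)$ and the sequence $[1,0,1,0]$, the search finds the consecutive blocks $[0,2)$ and $[2,4)$, whose products are both the idempotent $0$. Of course, Simon's theorem guarantees such blocks for much longer sequences without exhaustive search.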
\medskip
\subsection*{Inversions and periodicity.}
The first important notion that is used to characterize one-way definability
is that of inversion. It turns out that
the
definition of inversion in the sweeping case (see
page~\pageref{page-def-inversion}) can be reused almost verbatim
in the two-way setting. The only difference is that here we require the loops
to be idempotent and we do not enforce output-minimality (we will discuss this
latter choice further below, with a formal definition of output-minimality at hand).
\begin{defi}\label{def:inversion-twoway}
An \emph{inversion} of the run $\rho$ is a tuple $(L_1,\ell_1,L_2,\ell_2)$ such that
\begin{enumerate}
\item $L_1,L_2$ are idempotent loops,
\item $\ell_1=(x_1,y_1)$ and $\ell_2=(x_2,y_2)$
are anchor points inside $L_1$ and $L_2$, respectively,
\item $\ell_1 \mathrel{\lhd} \ell_2$ and $x_1 > x_2$,
\item for both $i=1$ and $i=2$, $\out{\tr{\ell_i}}\neq\emptystr$.
\end{enumerate}
\end{defi}
\input{inversion-twoway}
\noindent
Figure \ref{fig:inversion-twoway} gives an example of an inversion involving
the idempotent loop $L_1$ with anchor point $\ell_1$, and the idempotent
loop $L_2$ with anchor point $\ell_2$. The intercepted factors that form
the corresponding traces are represented by thick arrows; the ones highlighted in red
are those that produce non-empty output.
The implication \PR1 $\Rightarrow$ \PR2 of Theorem \ref{thm:main2} in the two-way
case is formalized below exactly as in Proposition~\ref{prop:periodicity-sweeping},
and the proof is very similar to the sweeping case.
More precisely, it can be checked that the proof of the first claim in
Proposition~\ref{prop:periodicity-sweeping} was shown independently of the sweeping assumption
--- one just needs to replace the use of Equation \ref{eq:pumped-run}
with Proposition \ref{prop:pumping-twoway}.
The sweeping assumption was used only for deriving the notion of \emph{output-minimal}
factor, which was crucial to conclude that the period $p$ is bounded by the specific
constant $\boldsymbol{B}$.
In this respect, the proof of Proposition~\ref{prop:periodicity-twoway}
requires a different argument for showing that $p \le \boldsymbol{B}$:
\begin{prop}\label{prop:periodicity-twoway}
If $\cT$ is one-way definable, then the following property \PR2 holds:
\begin{quote}
For all inversions $(L_1,\ell_1,L_2,\ell_2)$ of $\rho$,
the period $p$ of the word
\[
\out{\tr{\ell_1}} ~ \out{\rho[\ell_1,\ell_2]} ~ \out{\tr{\ell_2}}
\]
divides both $|\out{\tr{\ell_1}}|$ and $|\out{\tr{\ell_2}}|$.
Moreover, $p \le \boldsymbol{B}$.
\end{quote}
\end{prop}
We only need to show here that $p \le \boldsymbol{B}$. Recall that in the
sweeping case we relied on the assumption that the factors
$\tr{\ell_1}$ and $\tr{\ell_2}$ of an inversion are output-minimal,
and on Lemma~\ref{lem:output-minimal-sweeping}. In the general case
we need to replace output-minimality by the following notion:
\begin{defi}\label{def:output-minimal-twoway}
Consider pairs $(L,C)$ consisting of an idempotent loop $L$
and a component $C$ of $L$.
\begin{enumerate}
\item On such pairs, define the relation
$\sqsubset$ by $(L',C') \sqsubset (L,C)$ if
$L'\subsetneq L$ and at least one $(L',C')$-factor
is contained in some $(L,C)$-factor.
\item A pair $(L,C)$ is \emph{output-minimal} if
$(L',C') \sqsubset (L,C)$ implies $\out{\tr{\an{C'}}}=\emptystr$.
\end{enumerate}
\end{defi}
\noindent
Note that the relation $\sqsubset$ is not a partial order in general
(it is however antisymmetric).
Moreover, it is easy to see that the notion of output-minimal
pair $(L,C)$ generalizes that of output-minimal factor introduced
in the sweeping case: indeed, if $\ell$ is the anchor point of a
loop $L$ of a sweeping transducer and $\tr{\ell}$ satisfies
Definition \ref{def:output-minimal-sweeping}, then the pair
$(L,C)$ is output-minimal, where $C$ is the unique component
whose edge corresponds to $\tr{\ell}$.
The following lemma bounds the length of the output trace
$\out{\tr{\an{C}}}$ for an output-minimal pair $(L,C)$:
\begin{lem}\label{lem:output-minimal-twoway}
For every output-minimal pair $(L,C)$, $|\out{\tr{\an{C}}}| \le \boldsymbol{B}$.
\end{lem}
\begin{proof}
Consider a pair $(L,C)$ consisting of an idempotent loop $L=[x_1,x_2]$ and a component $C$ of $L$.
Suppose by contradiction that $|\out{\tr{\an{C}}}|>\boldsymbol{B}$.
We will show that $(L,C)$ is not output-minimal.
Recall that $\tr{\an{C}}$ is a concatenation of $(L,C)$-factors, say,
$\tr{\an{C}}=\beta_1\cdots\beta_k$. Let $\ell_1$ (resp.~$\ell_2$) be the
first (resp.~last) location that is visited by these factors. Further let
$K = [\ell_1,\ell_2]$ and $Z = K \:\cap\: (L\times\bbN)$.
By construction, the subsequence $\rho|Z$ can be seen as a concatenation
of the factors $\beta_1,\dots,\beta_k$, possibly in a different order than
that of $\tr{\an{C}}$. This implies that $|\out{\rho|Z}| > \boldsymbol{B}$.
By Theorem \ref{thm:simon2}, we know that there exist an idempotent
loop $L'\subsetneq L$ and a component $C'$ of $L'$ such that
$\an{C'} \in K$ and $\out{\tr{\an{C'}}}\neq\emptystr$.
Note that the $(L',C')$-factor that starts at the anchor point
$\an{C'}$ (an ${\ensuremath{\mathsf{LR}}}$- or ${\ensuremath{\mathsf{RL}}}$-factor) is entirely contained
in some $(L,C)$-factor.
This implies that $(L',C') \sqsubset (L,C)$, and thus $(L,C)$ is not output-minimal.
\end{proof}
We remark that the above lemma cannot be used directly to bound the period
of the output produced amid an inversion. The reason is that we cannot
restrict ourselves to inversions $(L_1,\ell_1,L_2,\ell_2)$ that induce
output-minimal pairs $(L_i,C_i)$ for $i=1,2$, where $C_i$ is the unique
component of the anchor point $\ell_i$.
An example is given in Figure~\ref{fig:inversion-twoway},
assuming that the factors depicted in red are the only ones that
produce non-empty output, and the lengths of these outputs exceed $\boldsymbol{B}$.
On the one hand $(L_1,\ell_1,L_2,\ell_2)$ is an inversion, but
$(L_1,C_1)$ is not output-minimal.
On the other hand, it is possible that $\rho$ contains no other
inversion than $(L_1,\ell_1,L_2,\ell_2)$: any loop strictly contained
in the red factor in $L_1$ will have the anchor point \emph{after}
$\ell_2$.
We are now ready to show the second claim of
Proposition~\ref{prop:periodicity-twoway}.
\input{non-output-minimal-inversion-full}
\begin{proof}[Proof of Proposition~\ref{prop:periodicity-twoway}]
The proof of the second claim requires a refinement of the arguments that involve
pumping the run $\rho$ simultaneously on three different loops. As usual, we
assume that the loops $L_1,L_2$ of the inversion are disjoint (otherwise,
we preliminarily pump one of the two loops a few times).
Recall that the word
\[
\out{\tr{\ell_1}} ~ \out{\rho[\ell_1,\ell_2]} ~ \out{\tr{\ell_2}}
\]
has period $p = \gcd\big( |\out{\tr{\ell_1}}|, |\out{\tr{\ell_2}}|
\big)$, but that we cannot bound $p$ by assuming that $(L_1,\ell_1,L_2,\ell_2)$
is output-minimal.
However, in the pumped run $\rho^{(2,1)}$ we do find inversions with
output-minimal pairs.
For example, as depicted in the right part of Figure~\ref{fig:non-output-minimal-inversion-full},
we can consider the left and right copy of $L_1$ in $\rho^{(2,1)}$,
denoted by $\overleftarrow{L}_1$ and $\overrightarrow{L}_1$, respectively.
Accordingly, we denote by $\overleftarrow{\ell}_1$ and $\overrightarrow{\ell}_1$ the left and
right copy of $\ell_1$ in $\rho^{(2,1)}$.
Now, let $(L_0,C_0)$ be any {\sl output-minimal} pair such that $L_0$
is an idempotent loop, $\out{\tr{\an{C_0}}}\neq\emptystr$, and either
$(L_0,C_0)=(\overleftarrow{L}_1,C_1)$ or $(L_0,C_0) \sqsubset (\overleftarrow{L}_1,C_1)$.
Such a loop $L_0$ is represented in
Figure~\ref{fig:non-output-minimal-inversion-full} by
the red vertical stripe. Further let $\ell_0=\an{C_0}$.
We claim that either $(L_0,\ell_0,L_2,\ell_2)$ or $(\overrightarrow{L}_1,\overrightarrow{\ell}_1,L_0,\ell_0)$
is an inversion of the run $\rho^{(2,1)}$, depending on whether $\ell_0$ occurs
before or after $\ell_2$.
First, note that all the loops $L_0$, $L_2$, $\overrightarrow{L}_1$ are idempotent
and non-overlapping;
more precisely, we have
$\max(L_2) \le \min(L_0)$ and $\max(L_0) \le \min(\overrightarrow{L}_1)$.
Moreover, the outputs of the traces $\tr{\ell_0}$, $\tr{\overrightarrow{\ell}_1}$,
and $\tr{\ell_2}$ are all non-empty.
So it remains to distinguish two cases based on the ordering of the anchor
points $\ell_0$, $\overrightarrow{\ell}_1$, $\ell_2$.
If $\ell_0 \mathrel{\lhd} \ell_2$, then $(L_0,\ell_0,L_2,\ell_2)$ is an inversion.
Otherwise, because $(\overrightarrow{L}_1,\overrightarrow{\ell}_1,L_2,\ell_2)$ is an inversion, we know that
$\overrightarrow{\ell}_1 \mathrel{\lhd} \ell_2 \mathrel{\unlhd} \ell_0$, and hence $(\overrightarrow{L}_1,\overrightarrow{\ell}_1,L_0,\ell_0)$
is an inversion.
Now, we know that
$\rho^{(2,1)}$ contains the inversion $(\overrightarrow{L}_1,\overrightarrow{\ell}_1,L_2,\ell_2)$, but also
an inversion involving the output-minimal pair $(L_0,C_0)$, with $L_0$ strictly
between $\overrightarrow{L}_1$ and $L_2$.
For all $m_0,m_1,m_2$, we define $\rho^{(m_0,m_1,m_2)}$ as the run obtained from
$\rho^{(2,1)}$ by pumping $m_0,m_1,m_2$ times the loops $L_0,\overrightarrow{L}_1,L_2$, respectively.
By reasoning as we did in the proof of Proposition~\ref{prop:periodicity-sweeping}
(cf.~{\em Periodicity of outputs of pumped runs}), one can show that there are
arbitrarily large output factors of $\rho^{(m_0,m_1,m_2)}$ that are produced
within the inversion on $\ell_0$ (i.e.~either $(L_0,\ell_0,L_2,\ell_2)$ or $(\overrightarrow{L}_1,\overrightarrow{\ell}_1,L_0,\ell_0)$)
and that are periodic with period $p'$ that divides $|\out{\tr{\ell_0}}|$.
In particular, by Lemma~\ref{lem:output-minimal-twoway}, we know that
$p' \le \boldsymbol{B}$.
Moreover, large portions of these factors are also produced within the
inversion $(\overrightarrow{L}_1,\overrightarrow{\ell}_1,L_2,\ell_2)$, and hence
by Theorem~\ref{thm:fine-wilf} they have period $\gcd(p,p')$.
To conclude the proof we need to transfer the periodicity property
from the pumped runs $\rho^{(m_0,m_1,m_2)}$ to the original run $\rho$.
This is done exactly like in Proposition~\ref{prop:periodicity-sweeping}
by relying on
Lemma \ref{lem:periods}:
we observe that the periodicity
property holds for large enough parameters $m_0,m_1,m_2$, hence for
all values of the parameters, and in particular
for $m_0 = m_1 = m_2 = 1$. This shows that the word
\[
\out{\tr{\ell_1}} ~ \out{\rho[\ell_1,\ell_2]} ~ \out{\tr{\ell_2}}
\]
has period $\gcd(p,p') \le \boldsymbol{B}$.
\end{proof}
So far we have shown that the output produced amid every inversion of a
run of a one-way definable two-way transducer is periodic, with period
bounded by $\boldsymbol{B}$ and dividing the lengths of the trace outputs of
the inversion. This basically proves the implication \PR1 $\Rightarrow$ \PR2
of Theorem \ref{thm:main2}.
In the next section we will follow a line of arguments similar to that of
Section \ref{sec:characterization-sweeping} to prove the remaining
implications \PR2 $\Rightarrow$ \PR3 $\Rightarrow$ \PR1.
\section{Complexity of the one-way definability problem}\label{sec:complexity}
In this section we analyze the complexity of the problem of deciding whether
a transducer $\cT$ is one-way definable. We begin with the case of a functional
two-way transducer. In this case, thanks to the results presented in
Section~\ref{sec:characterization-twoway} page \pageref{testing-containment},
we know that $\cT$ is one-way
definable if and only if $\dom(\cT) \subseteq D$, where $D$ is the language of words
$u\in\dom(\cT)$ such that all successful runs of $\cT$ on $u$ admit a $\boldsymbol{B}$-decomposition.
In particular, the one-way definability problem reduces to an emptiness problem
for the intersection of two languages:
\[
\cT \text{ one-way definable}
\qquad\text{if and only if}\qquad
\dom(\cT) \cap D^\complement = \emptyset.
\]
The following lemma exploits the characterization of Theorem~\ref{thm:main2}
to show that the language $D^\complement$ can be recognized by a non-deterministic
finite automaton $\cA$ of triply exponential size w.r.t.~$\cT$. In fact,
this lemma shows that the automaton recognizing $D^\complement$ can be constructed
using doubly exponential {\sl workspace}. As before, we gain an exponent when
restricting to sweeping transducers.
\begin{lem}\label{lem:D-complement}
Given a functional two-way transducer $\cT$, an NFA $\cA$ recognizing
$D^\complement$ can be constructed in $2\expspace$.
Moreover, when $\cT$ is sweeping, the
NFA $\cA$ can be constructed in $\expspace$.
\end{lem}
\begin{proof}
Consider an input word $u$. By Theorem~\ref{thm:main2} we know that
$u\in D^\complement$ iff there exist a successful run $\rho$ of $\cT$
on $u$ and an inversion $\cI=(L_1,\ell_1,L_2,\ell_2)$ of $\rho$ such that
no positive number $p \le \boldsymbol{B}$ is a period of the word
\[
w_{\rho,\cI} ~=~
\outb{\tr{\ell_1}} ~ \outb{\rho[\ell_1,\ell_2]} ~ \outb{\tr{\ell_2}}.
\]
The latter condition on $w_{\rho,\cI}$ can be rephrased as follows:
there is a function $f:\{1,\dots,\boldsymbol{B}\} \rightarrow \{1,\dots,|w_{\rho,\cI}|\}$
such that $w_{\rho,\cI}\big(f(p)\big) \neq w_{\rho,\cI}\big(f(p)+p\big)$
for all positive numbers $p\le\boldsymbol{B}$.
In particular, each of the images of the latter function $f$, that is,
$f(1),\dots,f(\boldsymbol{B})$, can be encoded by a suitable marking of the
crossing sequences of $\rho$. This shows that the run $\rho$, the
inversion $\cI$, and the function $f$ described above can all be
guessed within space $\cO(\boldsymbol{B})$: $\rho$ is guessed on-the-fly, the
inversion is guessed by marking the anchor points, and for $f$ we only
store two symbols and a counter $\le\boldsymbol{B}$, for each $1 \le i \le \boldsymbol{B}$.
That is, any state of $\cA$ requires doubly
exponential space, resp.~simply exponential space, depending on whether $\cT$ is arbitrary
two-way or sweeping.
\end{proof}
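The aperiodicity witness $f$ used in the proof above is easy to illustrate concretely. The following sketch (with hypothetical helper names of our choosing, using 0-based positions) checks periods and extracts, for each $p$ up to a bound $B$, a mismatch position $f(p)$ with $w(f(p)) \neq w(f(p)+p)$:

```python
def is_period(w: str, p: int) -> bool:
    # p is a period of w iff w[i] == w[i + p] for every valid index i
    return all(w[i] == w[i + p] for i in range(len(w) - p))

def mismatch_witness(w: str, B: int):
    """Return a map p -> f(p) with w[f(p)] != w[f(p) + p] for each
    p in 1..B, or None if some p <= B is a period of w."""
    f = {}
    for p in range(1, B + 1):
        positions = [i for i in range(len(w) - p) if w[i] != w[i + p]]
        if not positions:
            return None  # p is a period: no witness exists
        f[p] = positions[0]
    return f
```

For example, `mismatch_witness("abcabc", 2)` returns a witness for both $p=1$ and $p=2$, while `mismatch_witness("abcabc", 3)` returns `None` because $3$ is a period of the word.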
As a consequence of the previous lemma, the emptiness problem for the language
$\dom(\cT) \cap D^\complement$, and thus the one-way definability problem for $\cT$,
can be decided in $2\expspace$ or $\expspace$, depending on whether $\cT$ is
two-way or sweeping:
\begin{prop}\label{prop:complexity}
The problem of deciding whether a functional two-way transducer
$\cT$ is one-way definable is in $2\expspace$. When $\cT$ is
sweeping, the problem is in $\expspace$.
\end{prop}
\medskip
The last result of the section shows that functional two-way transducers
are close to being the largest class for which a characterization
of one-way definability is feasible: as soon as we consider
arbitrary transducers (including non-functional ones),
the problem becomes undecidable.
\begin{prop}\label{prop:undecidability}
The one-way definability problem for \emph{non-functional}
sweeping transducers is undecidable.
\end{prop}
\begin{proof}
The proof uses some ideas and variants of constructions provided in \cite{Ibarra78},
concerning the proof of undecidability of the equivalence problem for one-way
non-functional transducers.
We show a reduction from the Post Correspondence Problem (PCP).
A \emph{PCP instance} is described by two finite alphabets $\Sigma$ and $\Delta$
and two morphisms $f,g:\Sigma^*\then\Delta^*$. A \emph{solution} of such an instance
is any non-empty word $w\in\Sigma^+$ such that $f(w)=g(w)$. We recall that the problem
of testing whether a PCP instance has a solution is undecidable.
Below, we fix a tuple $\tau=(\Sigma,\Delta,f,g)$ describing a PCP instance and we
show how to reduce the problem of testing the {\sl non-existence of solutions} of
$\tau$ to the problem of deciding {\sl one-way definability} of a relation computed
by a sweeping transducer.
Roughly, the idea is to construct a relation $B_\tau$ between words over a suitable
alphabet $\Gamma$ that encodes all the {\sl non-solutions} to the PCP instance
$\tau$ (this is simpler than encoding solutions because the presence of errors
can be easily checked). The goal is to have a relation $B_\tau$ that
(i) can be computed by a sweeping transducer and (ii) coincides with
a trivial one-way definable relation when $\tau$ has no solution.
We begin by describing the encodings for the solutions of the PCP instance.
We assume that the two alphabets of the PCP instance, $\Sigma$ and $\Delta$,
are disjoint and we use a fresh symbol $\#\nin \Sigma\cup\Delta$.
We define the new alphabet $\Gamma = \Sigma\cup\Delta\cup\{\#\}$ that will
serve both as input alphabet and as output alphabet for the transduction.
We call \emph{encoding} any pair of words over $\Gamma$ of the form
$(w\cdot u,w\cdot v)$, where $w\in\Sigma^+$, $u\in\Delta^*$, and $v\in\{\#\}^*$.
We will write the encodings as vectors to improve readability, e.g., as
\[
\lbinom{w\cdot u}{w\cdot v} \ .
\]
We denote by $E_\tau$ the set of all encodings and we observe that $E_\tau$
is computable by a one-way transducer (note that this transducer needs
$\varepsilon$-transitions).
We then restrict our attention to the pairs in $E_\tau$ that are encodings
of valid solutions of the PCP instance.
Formally, we call \emph{good encodings} the pairs in $E_\tau$ of the form
\[
\lbinom{w\cdot u}{w\cdot\#^{|u|}}
\qquad\qquad\text{where } u = f(w) = g(w) \ .
\]
All the other pairs in $E_\tau$ are called \emph{bad encodings}.
Of course, the relation that contains the good encodings is not computable
by a transducer. On the other hand, we can show that the complement
of this relation w.r.t.~$E_\tau$ is computable by a sweeping transducer.
Let $B_\tau$ be the set of all bad encodings.
Consider $(w\cdot u,w\cdot \#^m)\in E_\tau$, with $w\in\Sigma^+$,
$u\in\Delta^*$, and $m\in\bbN$. We observe that this pair belongs to
$B_\tau$ if and only if one of the following conditions is satisfied:
\begin{enumerate}
\item $m<|u|$,
\label{enc1}
\item $m>|u|$,
\label{enc2}
\item $u\neq f(w)$,
\label{enc3}
\item $u\neq g(w)$.
\label{enc4}
\end{enumerate}
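The four conditions are easy to check mechanically. The sketch below only illustrates the case analysis (the representation of morphisms as dictionaries and all names are ours, not part of the reduction):

```python
def apply_morphism(h, w):
    # h maps each letter of Sigma to a word over Delta
    return "".join(h[a] for a in w)

def is_bad_encoding(w, u, m, f, g):
    """Decide whether the pair (w.u, w.#^m) is a bad encoding,
    i.e. whether one of conditions 1-4 holds."""
    return (m < len(u)                      # condition 1
            or m > len(u)                   # condition 2
            or u != apply_morphism(f, w)    # condition 3
            or u != apply_morphism(g, w))   # condition 4
```

On the classic solvable instance $f(1)=1$, $f(2)=10111$, $f(3)=10$ and $g(1)=111$, $g(2)=10$, $g(3)=0$, the word $w=2113$ is a solution with $f(w)=g(w)=101111110$, so the pair with $m=|u|$ is a good encoding, while any other value of $m$ yields a bad one.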
We explain how to construct a sweeping transducer $\cS_\tau$ that computes $B_\tau$.
Essentially, $\cS_\tau$ guesses which of the above conditions holds and processes
the input accordingly. More precisely, if $\cS_\tau$ guesses that the first condition
holds, then it performs a single left-to-right pass, first copying the prefix $w$
to the output and then producing a block of occurrences of the symbol $\#$ that is
shorter than the suffix $u$. This task can be easily performed while reading
$u$: it suffices to emit at most one occurrence of $\#$ for each position in $u$,
and at the same time guarantee that, for at least one such position, no occurrence
of $\#$ is emitted. The second condition can be dealt with by a similar strategy:
first copy the prefix $w$, then output a block of $\#$ that is longer than
the suffix $u$. To deal with the third condition, the transducer $\cS_\tau$
has to perform two left-to-right passes, interleaved by a backward pass that
brings the head back to the initial position.
During the first left-to-right pass, $\cS_\tau$ copies the prefix $w$ to the output.
During the second left-to-right pass, it reads again the prefix $w$, but this time
it guesses a factorization of it of the form $w_1\:a\:w_2$.
On reading $w_1$, $\cS_\tau$ will output $\#^{|f(w_1)|}$.
After reading $w_1$, $\cS_\tau$ will store the symbol $a$ and move to the position
where the suffix $u$ begins. From there, it will guess a factorization of $u$
of the form $u_1\:u_2$, check that $u_2$ does not begin with $f(a)$, and
emit one occurrence of $\#$ for each position in $u_2$.
The number of occurrences of $\#$ produced in the output is thus
$m=|f(w_1)| + |u_2|$, and the fact that $u_2$ does not begin with $f(a)$
ensures that the factorizations of $w$ and $u$ do not match, i.e.
\[ m\neq|f(w)| \]
Note that the described behaviour does not immediately guarantee that $u\neq f(w)$.
Indeed, it may still happen that $u=f(w)$, but as a consequence $m\neq |u|$.
This case is already covered by the first and second condition,
so the computation is still correct in the sense that it produces only
bad encodings.
On the other hand, if $m$ happens to be the same as $|u|$,
then $|u| = m \neq |f(w)|$ and thus $u\neq f(w)$.
A similar behaviour can be used to deal with the fourth condition.
\smallskip
We have just shown that there is a sweeping non-functional transducer $\cS_\tau$
that computes the relation $B_\tau$ containing all the bad encodings.
Note that, if the PCP instance $\tau$ admits no solution, then all encodings
are bad, i.e., $B_\tau=E_\tau$, and hence $B_\tau$ is one-way definable.
It remains to show that when $\tau$ has a solution, $B_\tau$ is not one-way
definable. Suppose that $\tau$ has a solution $w\in\Sigma^+$ and let
$\big(w\cdot u,\:w\cdot \#^{|u|}\big)$ be the corresponding good encoding,
where $u=f(w)=g(w)$.
Note that every exact repetition of $w$ is also a solution, and hence
the pairs $\big(w^n\cdot u^n,\:w^n\cdot \#^{n\cdot|u|}\big)$ are also
good encodings, for all $n\ge 1$.
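To make this family of good encodings concrete, here is a small Python sketch (a hypothetical helper, with $f,g$ given letter-wise as dictionaries) enumerating the pairs $(w^n\cdot u^n,\: w^n\cdot \#^{n\cdot|u|})$ induced by a solution $w$:

```python
def good_encodings(w, f, g, n_max):
    """Enumerate the good encodings (w^n · u^n, w^n · #^{n·|u|}) for
    1 <= n <= n_max, where u = f(w) = g(w); the word w is assumed to
    be a solution of the PCP instance (f, g)."""
    u = "".join(f[a] for a in w)
    assert u == "".join(g[a] for a in w), "w is not a PCP solution"
    return [(w * n + u * n, w * n + "#" * (n * len(u)))
            for n in range(1, n_max + 1)]
```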
Suppose, by way of contradiction, that there is a one-way transducer $\cT$
that computes the relation $B_\tau$.
For every $n,m\in\bbN$, we define the encoding
\[
\alpha_{n,m} ~=~
\lbinom{w^n\cdot u^m}{w^n\cdot \#^{m\cdot|u|}}
\]
and we observe that $\alpha_{n,m} \in B_\tau$ if and only if $n\neq m$
(recall that $w\neq\emptystr$ is the solution of the PCP instance $\tau$ and $u=f(w)=g(w)$).
Below, we consider bad encodings like the above ones,
where the parameter $n$ is supposed to be large enough.
Formally, we define the set $I$ of all pairs of indices $(n,m)\in\bbN^2$
such that (i) $n\neq m$ (this guarantees that $\alpha_{n,m}\in B_\tau$)
and (ii) $n$ is larger than the number $|Q|$ of states of $\cT$.
We consider some pair $(n,m)\in I$ and we choose a successful
run $\rho_{n,m}$ of $\cT$ that witnesses the membership of
$\alpha_{n,m}$ in $B_\tau$, namely, that reads the input
$w^n\cdot u^m$ and produces the output $w^n\cdot \#^{m\cdot|u|}$.
We can split the run $\rho_{n,m}$ into a prefix $\overleftarrow\rho_{n,m}$
and a suffix $\overrightarrow\rho_{n,m}$ in such a way that $\overleftarrow\rho_{n,m}$
consumes the prefix $w^n$ and $\overrightarrow\rho_{n,m}$ consumes the remaining
suffix $u^m$.
Since $n$ is larger than the number of states of $\cT$, we can find a
factor $\hat\rho_{n,m}$ of $\overleftarrow\rho_{n,m}$ that starts and ends with
the same state and consumes a non-empty exact repetition of $w$,
say $w^{n_1}$, for some $1\le n_1\le |Q|$.
We claim that the output produced by the factor $\hat\rho_{n,m}$
must coincide with the consumed part $w^{n_1}$ of the input.
Indeed, if this were not the case, then deleting the factor $\hat\rho_{n,m}$
from $\rho_{n,m}$ would result in a new successful run that reads
$w^{n-n_1}\cdot u^m$ and produces $w^{n-n_2}\cdot \#^{m\cdot|u|}$
as output, for some $n_2\neq n_1$. This however would contradict
the fact that, by definition of encoding, the possible outputs produced
by $\cT$ on input $w^{n-n_1}\cdot u^m$ must agree on the prefix $w^{n-n_1}$.
We also remark that, even if we do not annotate this explicitly, the number
$n_1$ depends on the choice of the pair $(n,m)\in I$. This number, however,
ranges over the fixed finite set $J = \big[1,|Q|\big]$.
We can now pump the factor $\hat\rho_{n,m}$ of the run $\rho_{n,m}$ any
arbitrary number of times. In this way, we obtain new successful runs of
$\cT$ that consume inputs of the form
$w^{n+k\cdot n_1}\cdot u^m$ and produce outputs of the form
$w^{n+k\cdot n_1}\cdot \#^{m\cdot|u|}$, for all $k\in\bbN$.
In particular, we know that $B_\tau$ contains all
pairs of the form $\alpha_{n+k\cdot n_1,m}$.
Summing up, we can claim the following:
\begin{clm}
There is a function $h:I\then J$ such that, for all pairs $(n,m)\in I$,
\[
\big\{ (n+k\cdot h(n,m),m) ~\big|~ k\in\bbN \big\} \:\subseteq\: I \ .
\]
\end{clm}
\noindent
We can now head towards a contradiction. Let $\tilde n$
be the least common multiple of the numbers $h(n,m)$,
for all $(n,m)\in I$ (this is well defined because $h$ takes
values in the finite set $J$). Fix any $n>|Q|$ and let $m=n+\tilde n$;
then $n\neq m$, whence $(n,m)\in I$. Since $\tilde n$ is a multiple
of $h(n,m)$, we derive from the above claim that the pair
$(n+\tilde n,m) = (m,m)$ also belongs to $I$.
However, this contradicts the definition of $I$, since we
observed earlier that $\alpha_{n,m}$ is a bad encoding if
and only if $n\neq m$.
We conclude that $B_\tau$ is not one-way definable
when $\tau$ has a solution.
\end{proof}
\section{Conclusions}\label{sec:conclusions}
It was shown in \cite{fgrs13} that it is decidable whether
a given two-way transducer can be implemented by some one-way
transducer. However, the provided algorithm has non-elementary complexity.
The main contribution of our paper is a new algorithm that solves
the above question with elementary complexity, precisely in $2\expspace$.
The algorithm is based on a characterization of those transductions,
given as two-way transducers, that can be realized by one-way
transducers. The flavor of our characterization is
different from that of \cite{fgrs13}. The approach of
\cite{fgrs13} is based on a variant of Rabin and Scott's construction
\cite{RS59} of one-way automata, and on local modifications of
the two-way run. Our approach instead relies on the global notion
of \emph{inversion} and more involved combinatorial arguments.
The size of the one-way transducer that we obtain is triply exponential
in the general case and doubly exponential in the sweeping case;
the latter bound is optimal.
The approach described here was adapted to characterize functional two-way transducers
that are equivalent to some sweeping transducer with either known or
unknown number of passes (see~\cite{bgmp16}, \cite{Bas17} for details).
Our procedure considers non-deterministic transducers,
both for the initial two-way transducer, and
for the equivalent one-way transducer, if it exists.
Deterministic two-way transducers are as expressive as
non-deterministic functional ones. This means that
starting from deterministic two-way transducers would address
the same problem in terms of transduction classes, but could
in principle yield algorithms with better complexity.
We also recall that Proposition \ref{prop:lower-bound}
gives a tight lower bound for the size of a one-way transducer
equivalent to a given deterministic sweeping transducer. This
latter result basically shows that there is no advantage in
considering one-way definability for deterministic variants
of sweeping transducers, at least with respect to the size
of the possible equivalent one-way transducers.
A variant of the one-way definability problem asks whether a given
two-way transducer is equivalent to some \emph{deterministic} one-way transducer.
A decision procedure for this latter problem is obtained by combining
our characterization with the classical algorithm that determines whether a one-way
transducer can be determinized \cite{cho77,bealcarton02,weberklemm95}.
In terms of complexity, it is conceivable that a better algorithm
may exist for deciding definability by deterministic one-way transducers,
since in this case one can rely on structural properties
that characterize
deterministic transducers.
\section{Introduction}\label{sec:introduction}
Since the early days of computer science, transducers have been
identified as a fundamental computational model. Numerous
fields of computer science are ultimately concerned with
transformations, ranging from databases to image
processing, and an important challenge is to perform transformations with
low costs, whenever possible.
The most basic form of transformations is obtained using devices that
process an input with finite memory
and produce outputs during the processing. Such
devices are called finite-state transducers. Word-to-word finite-state
transducers were considered in very early work in formal language
theory~\cite{sch61,ahu69,eilenberg1974automata}, and it was soon clear that they are much more
challenging than finite-state word acceptors (classical finite-state
automata). One essential difference between transducers and automata
over words is that the capability to process the
input in both directions strictly increases the expressive power in the case
of transducers, whereas this is not the case for
automata~\cite{RS59,she59}. In other words, two-way word transducers
are strictly more expressive than one-way word transducers.
In this paper we consider functional transducers, i.e., transducers that compute
functions from words to words. Two-way word transducers capture very
nicely the notion of regularity in this setting. Regular word
functions, that is, functions computed by functional two-way transducers%
\footnote{We know from \cite{EH98,de2013uniformisation} that deterministic
and non-deterministic functional two-way transducers have the
same expressive power, though the non-deterministic variants are
usually more succinct.},
inherit many of the characterizations and algorithmic properties of the robust
class of regular languages. Engelfriet and Hoogeboom~\cite{EH98}
showed that monadic second-order definable graph transductions,
restricted to words, are equivalent to two-way transducers --- this
justifies the
notation ``regular'' word functions, in the spirit of classical
results in automata theory and logic by B\"uchi, Elgot, Rabin and
others. Recently, Alur and Cern\'{y}~\cite{AlurC10} proposed an enhanced
version of one-way transducers called streaming transducers, and
showed that they are equivalent to the two previous models. A streaming
transducer processes the input word from left to right, and stores
(partial) output words in some given write-only registers, that are
updated through concatenation and constant updates.
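As an illustration of the register mechanism, consider the classical function $w \mapsto w\cdot \mathit{reverse}(w)$, which a two-way transducer computes with two passes but which is not one-way definable; a streaming transducer computes it in a single pass with two registers. A minimal Python simulation (hypothetical, for illustration only):

```python
def sst_mirror(w):
    """Simulate a streaming transducer computing w -> w·reverse(w)
    with two write-only registers X, Y updated by concatenation."""
    X, Y = "", ""
    for a in w:       # a single left-to-right pass over the input
        X = X + a     # X holds the prefix read so far
        Y = a + Y     # Y holds the reverse of that prefix
    return X + Y      # the final output function combines the registers
```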
Two-way transducers raise challenging questions about resource
requirements. One crucial resource is the number of times the
transducer needs to re-process the input word. In particular, the case
where the input can be processed in a single pass, from left to right,
is very attractive as it corresponds to the setting of \emph{streaming},
where the (possibly very large) inputs do not need to be stored in order
to be processed. The~\emph{one-way definability} of a functional
two-way transducer, that is, the question
whether the transducer is equivalent to some one-way transducer, was
considered quite recently: \cite{fgrs13} shows that one-way
definability of string transducers is a decidable property. However, the decision procedure
of~\cite{fgrs13} has non-elementary complexity, which raises the
question about the intrinsic complexity of this problem.
In this paper we provide an algorithm of elementary complexity solving
the one-way definability problem.
Our decision algorithm has singly or doubly exponential space complexity,
depending on whether the input transducer is sweeping (namely, it performs reversals
only at the extremities of the input word) or genuinely two-way.
We also describe an algorithm that constructs an equivalent one-way
transducer, whenever it exists, in doubly or triply exponential time,
again depending on whether the input transducer is sweeping or
two-way. For the
construction of an equivalent one-way transducer we obtain a doubly
exponential lower bound, which is tight for sweeping transducers. Note that for the decision problem,
the best lower bound known is only polynomial space~\cite{fgrs13}.
Our initial interest in sweeping transducers was the fact that they
provide a simpler setting for characterizing one-way
definability. Later it turned out that sweeping transducers enjoy interesting and
useful connections with streaming transducers: they have the same
expressiveness as streaming transducers where concatenation of
registers is disallowed. The connection goes even further, since the number of
sweeps corresponds exactly to the number of
registers~\cite{bgmp16}. The results of this paper were refined
in~\cite{bgmp16}, and used to determine the minimal number of
registers required by functional streaming transducers without
register concatenation.
\subsection*{Related work.}
Besides the papers mentioned above, there are
several recent results around the expressivity and the resources of
two-way transducers, or
equivalently, streaming transducers.
First-order definable transductions were shown to be equivalent to
transductions defined by aperiodic
streaming transducers~\cite{FiliotKT14} and to aperiodic two-way
transducers~\cite{CartonDartois15}. An effective characterization of
aperiodicity for one-way transducers was obtained in~\cite{FGL16}.
Register minimization for right-appending deterministic streaming
transducers
was shown to be
decidable in~\cite{DRT16}. An algebraic characterization of (not necessarily
functional) two-way transducers over unary alphabets was
provided in~\cite{CG14mfcs}. It was shown that in this case
sweeping transducers have the same expressivity as two-way ones.
The expressivity of non-deterministic input-unary
or output-unary two-way
transducers was investigated in~\cite{Gui15}.
In \cite{Smith14} a pumping lemma for two-way transducers is proposed,
and used to investigate properties of the output languages of two-way
transducers. In this paper we also rely on pumping arguments over runs
of two-way transducers, but we require loops of a particular form
that allows us to identify periodicities in the output.
The present paper unifies results on one-way definability obtained
in~\cite{bgmp15} and~\cite{bgmp17}.
Compared to the conference versions, some combinatorial proofs have
been simplified,
and the complexity of the procedure presented in~\cite{bgmp15} has been
improved by one exponential.
\subsection*{Overview.}
Section~\ref{sec:preliminaries} introduces basic
notations for two-way and sweeping transducers, and Section~\ref{sec:overview}
states the main result and provides a roadmap of the proofs. For
better readability our paper
is divided into two parts: in the first part we consider the easier case of
sweeping transducers, and in the second part the general case. The two
proofs are similar, but the general case is more involved since we
need to deal with special loops (see
Section~\ref{sec:loops-twoway}). Since the high-level ideas of the
proofs are the same and the sweeping case illustrates them in a
simpler way, the proof in that setting is a preparation for the
general case. Both proofs have
the same structure: first we introduce some combinatorial arguments
(see Section~\ref{sec:combinatorics-sweeping} for the sweeping case
and Section~\ref{sec:combinatorics-twoway} for the general case), then we provide the
characterization of one-way definability (see
Section~\ref{sec:characterization-sweeping} for the sweeping case and
Section~\ref{sec:characterization-twoway} for the general
case). Finally, Section~\ref{sec:complexity} establishes the complexity
of our algorithms.
\section{The structure of two-way loops}\label{sec:loops-twoway}
While loop pumping in a sweeping transducer is rather simple,
we need a much better understanding when it comes to pumping loops of
unrestricted two-way transducers.
This section is precisely devoted to untangling the structure of two-way loops.
We will focus on specific loops, called idempotent,
that generate repetitions with a ``nice shape'', very similar
to loops of sweeping transducers.
We fix throughout this section a functional two-way transducer $\cT$,
an input word $u$, and a (normalized) successful run $\rho$ of $\cT$ on $u$.
As usual, $\boldsymbol{H}=2|Q|-1$ is the maximal length of a crossing sequence
of $\rho$, and $\boldsymbol{C}$ is the maximal number of letters output by a single
transition.
\medskip
\subsection*{Flows and effects.}
We start by analyzing the shape of factors of $\rho$ intercepted
by an interval $I=[x_1,x_2]$.
We identify four types of factors $\alpha$ intercepted by $I$
depending on the first location $(x,y)$ and the last location
$(x',y')$:
\begin{itemize}
\item $\alpha$ is an ${\ensuremath{\mathsf{LL}}}$-factor if $x=x'=x_1$,
\item $\alpha$ is an $\RR$-factor if $x=x'=x_2$,
\item $\alpha$ is an ${\ensuremath{\mathsf{LR}}}$-factor if $x=x_1$ and $x'=x_2$,
\item $\alpha$ is an ${\ensuremath{\mathsf{RL}}}$-factor if $x=x_2 $ and $x'=x_1$.
\end{itemize}
In Figure \ref{fig:intercepted-factors} we see that $\a$ is an
${\ensuremath{\mathsf{LL}}}$-factor, $\b,\delta$ are ${\ensuremath{\mathsf{LR}}}$-factors, $\z$ is an $\RR$-factor,
and $\g$ is an ${\ensuremath{\mathsf{RL}}}$-factor.
\begin{defi}\label{def:flow}
Let $I = [x_1,x_2]$ be an interval of $\rho$ and $h_i$ the length of
the crossing sequence $\rho|x_i$, for both $i=1$ and $i=2$.
The \emph{flow} $F_I$ of $I$ is the directed graph with set of nodes
$\set{0,\dots,\max(h_1,h_2)-1}$ and set of edges consisting of all
$(y,y')$ such that there is a factor of $\rho$ intercepted by $I$
that starts at location $(x_i,y)$ and ends at location $(x_j,y')$,
for $i,j\in\{1,2\}$.
The \emph{effect} $E_I$ of $I$ is the triple $(F_I,c_1,c_2)$,
where $c_i=\r|x_i$ is the crossing sequence at $x_i$.
\end{defi}
\noindent
For example, the interval $I$ of Figure \ref{fig:intercepted-factors}
has the flow graph $0\mapsto 1\mapsto 3\mapsto 4\mapsto 2\mapsto 0$.
It is easy to see that every node of a flow $F_I$ has at most one
incoming and at most one outgoing edge. More precisely, if $y<h_1$ is
even, then it has one outgoing edge (corresponding to an ${\ensuremath{\mathsf{LR}}}$- or ${\ensuremath{\mathsf{LL}}}$-factor
intercepted by $I$), and if it is odd it has one incoming edge (corresponding
to an ${\ensuremath{\mathsf{RL}}}$- or ${\ensuremath{\mathsf{LL}}}$-factor intercepted by $I$). Similarly, if $y<h_2$ is
even, then it has one incoming edge (corresponding to an ${\ensuremath{\mathsf{LR}}}$- or $\RR$-factor),
and if it is odd it has one outgoing edge (corresponding to an ${\ensuremath{\mathsf{RL}}}$- or
$\RR$-factor).
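These degree constraints can be phrased as a simple validity check. The sketch below is a hypothetical helper that tests whether an edge set satisfies, for the given crossing-sequence lengths, exactly the in/out-degrees stated above:

```python
def is_valid_flow(edges, h1, h2):
    """Check the in/out-degree constraints of a flow with node set
    {0, ..., max(h1, h2) - 1}, where edges is a set of pairs (y, y')."""
    n = max(h1, h2)
    outd = [0] * n
    ind = [0] * n
    for y, z in edges:
        outd[y] += 1
        ind[z] += 1
    for y in range(n):
        # even nodes below h1 start a factor on the left border,
        # odd nodes below h2 start a factor on the right border, etc.
        want_out = (y < h1 and y % 2 == 0) or (y < h2 and y % 2 == 1)
        want_in = (y < h1 and y % 2 == 1) or (y < h2 and y % 2 == 0)
        if outd[y] != int(want_out) or ind[y] != int(want_in):
            return False
    return True
```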
In the following we consider effects that are not necessarily
associated with intervals of specific runs. The definition of such effects
should be clear: they are triples consisting of a graph (called flow)
and two crossing sequences of lengths $h_1,h_2 \le \boldsymbol{H}$, with sets of
nodes of the form $\{0,\ldots,\max(h_1,h_2)-1\}$,
that satisfy the in/out-degree properties stated above.
It is convenient to distinguish the edges in a flow based on the
parity of the source and target nodes. Formally, we partition any
flow $F$ into the following subgraphs:
\begin{itemize}
\item $F_{\ensuremath{\mathsf{LR}}}$ consists of all edges of $F$ between pairs of even nodes,
\item $F_{\ensuremath{\mathsf{RL}}}$ consists of all edges of $F$ between pairs of odd nodes,
\item $F_{\ensuremath{\mathsf{LL}}}$ consists of all edges of $F$ from an even node to an odd node,
\item $F_\RR$ consists of all edges of $F$ from an odd node to an even node.
\end{itemize}
We denote by $\cF$ (resp.~$\cE$) the set of all flows (resp.~effects)
augmented with a dummy element $\bot$. We equip both sets $\cF$ and $\cE$ with
a semigroup structure, where the corresponding products $\circ$ and $\odot$ are
defined below (similar definitions appear in \cite{Birget1990}).
Later we will use the semigroup structure to identify the \emph{idempotent loops},
that play a crucial role in our characterization of one-way definability.
\begin{defi}\label{def:product}
For two graphs $G,G'$, we denote by $G\cdot G'$ the graph with edges of
the form $(y,y'')$ such that $(y,y')$ is an edge of $G$ and $(y',y'')$ is an
edge of $G'$, for some node $y'$ that belongs
to both $G$ and $G'$.
Similarly, we denote by $G^*$ the graph with edges $(y,y')$
such that there exists a (possibly empty) path in $G$ from $y$ to $y'$.
The product of two flows $F,F'$ is the unique flow $F\circ F'$ (if it exists) such that:
\begin{itemize}
\item $(F\circ F')_{\ensuremath{\mathsf{LR}}} = F_{\ensuremath{\mathsf{LR}}} \cdot (F'_{\ensuremath{\mathsf{LL}}} \cdot F_\RR)^* \cdot F'_{\ensuremath{\mathsf{LR}}}$,
\item $(F\circ F')_{\ensuremath{\mathsf{RL}}} = F'_{\ensuremath{\mathsf{RL}}} \cdot (F_\RR \cdot F'_{\ensuremath{\mathsf{LL}}})^* \cdot F_{\ensuremath{\mathsf{RL}}}$,
\item $(F\circ F')_{\ensuremath{\mathsf{LL}}} = F_{\ensuremath{\mathsf{LL}}} ~\cup~ F_{\ensuremath{\mathsf{LR}}} \cdot (F'_{\ensuremath{\mathsf{LL}}} \cdot F_\RR)^* \cdot F'_{\ensuremath{\mathsf{LL}}} \cdot F_{\ensuremath{\mathsf{RL}}}$,
\item $(F\circ F')_\RR = F'_\RR ~\cup~ F'_{\ensuremath{\mathsf{RL}}} \cdot (F_\RR \cdot F'_{\ensuremath{\mathsf{LL}}})^* \cdot F_\RR \cdot F'_{\ensuremath{\mathsf{LR}}}$.
\end{itemize}
If no flow $F\circ F'$ exists with the above properties,
then we let $F\circ F'=\bot$.
The product of two effects $E=(F,c_1,c_2)$ and $E'=(F',c'_1,c'_2)$ is either the effect
$E\odot E' = (F\circ F',c_1,c'_2)$ or the dummy element $\bot$, depending on whether
$F\circ F'\neq \bot$ and $c_2=c'_1$.
\end{defi}
\noindent
For example, let $F$ be the flow of interval $I$
in Figure \ref{fig:intercepted-factors}. Then
$(F \circ F)_{\ensuremath{\mathsf{LL}}}=\set{0\mapsto 1, 2\mapsto 3}$,
$(F \circ F)_\RR=\set{1\mapsto 2, 3\mapsto 4}$, and
$(F \circ F)_{\ensuremath{\mathsf{LR}}}=\set{4\mapsto 0}$
--- one can quickly verify this with the
help of Figure \ref{fig:pumping-twoway}.
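This computation can be replayed mechanically. The following Python sketch (hypothetical names) implements the graph operations $G\cdot G'$ and $G^*$ and the flow product $\circ$, representing a flow as a set of edges and recovering the four subgraphs by parity of the endpoints; for simplicity it assumes a common node set for both flows:

```python
def compose(G, H):
    """Relational product G·H: edges (y, y'') with (y, y') in G
    and (y', y'') in H."""
    return {(y, z) for (y, yp) in G for (ypp, z) in H if yp == ypp}

def star(G, nodes):
    """G*: pairs (y, y') connected by a (possibly empty) path in G."""
    R = {(y, y) for y in nodes}
    while True:
        R2 = R | compose(R, G)
        if R2 == R:
            return R
        R = R2

def parts(F):
    """Split a flow into its LR, RL, LL, RR subgraphs by endpoint parity."""
    return {"LR": {e for e in F if e[0] % 2 == 0 and e[1] % 2 == 0},
            "RL": {e for e in F if e[0] % 2 == 1 and e[1] % 2 == 1},
            "LL": {e for e in F if e[0] % 2 == 0 and e[1] % 2 == 1},
            "RR": {e for e in F if e[0] % 2 == 1 and e[1] % 2 == 0}}

def flow_product(F, Fp, nodes):
    """The product F∘Fp of two flows over a common node set."""
    A, B = parts(F), parts(Fp)
    loop1 = star(compose(B["LL"], A["RR"]), nodes)   # (F'_LL · F_RR)*
    loop2 = star(compose(A["RR"], B["LL"]), nodes)   # (F_RR · F'_LL)*
    lr = compose(compose(A["LR"], loop1), B["LR"])
    rl = compose(compose(B["RL"], loop2), A["RL"])
    ll = A["LL"] | compose(compose(compose(A["LR"], loop1), B["LL"]), A["RL"])
    rr = B["RR"] | compose(compose(compose(B["RL"], loop2), A["RR"]), B["LR"])
    return lr | rl | ll | rr
```

On the flow $F = \{0\mapsto 1, 1\mapsto 3, 3\mapsto 4, 4\mapsto 2, 2\mapsto 0\}$ of the interval $I$, this reproduces the subgraphs of $F\circ F$ listed above; it also confirms that $F$ is not idempotent while $F\circ F$ is, in line with the discussion of idempotent loops below.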
It is also easy to see that $(\cF,\circ)$ and $(\cE,\odot)$ are finite semigroups, and that
for every run $\r$ and every pair of consecutive intervals $I=[x_1,x_2]$ and $J=[x_2,x_3]$ of $\r$,
$F_{I\cup J} = F_I \circ F_J$ and $E_{I\cup J} = E_I \odot E_J$.
In particular, the function $E$ that associates each interval $I$ of $\rho$ with the
corresponding effect $E_I$ can be seen as a semigroup homomorphism.
\medskip
\subsection*{Loops and components.}
Recall that a loop is an interval $L=[x_1,x_2]$
with the same crossing sequences at $x_1$ and $x_2$.
We will follow techniques similar to those presented in Section \ref{sec:combinatorics-sweeping}
to show that the outputs generated in non left-to-right manner are essentially periodic.
However, differently from the sweeping case, we will consider only special types
of loops:
\begin{defi}\label{def:idempotent}
A loop $L$ is \emph{idempotent} if $E_L = E_L \odot E_L$ and $E_L\neq\bot$.
\end{defi}
\noindent
For example, the interval $I$ of Figure \ref{fig:intercepted-factors} is a loop,
if one assumes that the crossing sequences at the borders of $I$ are the same.
By comparing with Figure \ref{fig:pumping-twoway}, it is easy to see that $I$
is not idempotent. On the other hand, the loop consisting of 2 copies of $I$
is idempotent.
\input{pumping-twoway}
As usual, given a loop $L=[x_1,x_2]$ and a number $n\in\bbN$, we can
introduce $n$ new copies of $L$ and connect the intercepted factors
in the obvious way.
This results in a new run $\ensuremath{\mathsf{pump}}_L^{n+1}(\rho)$ on the word $\ensuremath{\mathsf{pump}}_L^{n+1}(u)$.
Figure \ref{fig:pumping-twoway} shows how to do this for $n=1$ and $n=2$.
Below, we analyze in detail the shape of the pumped run $\ensuremath{\mathsf{pump}}_L^{n+1}(\rho)$
(and the produced output as well) when $L$ is an {\sl idempotent} loop.
We will focus on idempotent loops because pumping non-idempotent loops
may induce permutations of factors that are difficult to handle.
For example, if we consider again the non-idempotent loop $I$ to the
left of Figure \ref{fig:pumping-twoway}, the factor of the run between
$\beta$ and $\gamma$ (to the right of $I$, highlighted in red) precedes
the factor between $\gamma$ and $\delta$ (to the left of $I$, again in red),
but this ordering is reversed when a new copy of $I$ is added.
When pumping a loop $L$, subsets of factors intercepted by $L$ are glued
together to form factors intercepted by the replication of $L$.
The notion of component introduced below identifies
groups of factors that are glued together.
\begin{defi}\label{def:component}
A \emph{component} of a loop $L$ is any strongly
connected component of its flow $F_L$
(note that this is also a cycle, since
every node in it has in/out-degree $1$).
Given a component $C$, we denote by
$\min(C)$ (resp.~$\max(C)$) the minimum (resp.~maximum)
node in $C$.
We say that $C$ is \emph{left-to-right} (resp.~\emph{right-to-left})
if $\min(C)$ is even (resp., odd).
An \emph{$(L,C)$-factor} is a factor of the run that is
intercepted by $L$ and that corresponds to an edge of $C$.
\end{defi}
\noindent
We will usually list the $(L,C)$-factors based on their order of occurrence in the run.
For example, the loop $I$ of Figure \ref{fig:pumping-twoway} contains
a single component $C=0\mapsto 1\mapsto 3\mapsto 4\mapsto 2\mapsto 0$
which is left-to-right.
Another example is given in Figure \ref{fig:many-components}, where the
loop $L$ has three components $C_1,C_2,C_3$ (colored in blue, red,
and green, respectively):
$\a_1,\a_2,\a_3$ are the $(L,C_1)$-factors, $\b_1,\b_2,\b_3$ are
the $(L,C_2)$-factors, and $\g_1$ is the unique $(L,C_3)$-factor.
\input{many-components}
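Since every node of a loop's flow has in/out-degree exactly one, the flow is a permutation of its nodes and the components can be extracted by simply following successors. A Python sketch (hypothetical helper), exercised on the single-component example of Figure \ref{fig:pumping-twoway}:

```python
def components(F):
    """Decompose the flow of a loop, given as the edge set of a
    permutation of its nodes, into its cycles (the components)."""
    succ = dict(F)
    seen, comps = set(), []
    for y in sorted(succ):
        if y in seen:
            continue
        cyc, z = [], y
        while z not in seen:   # follow successors until the cycle closes
            seen.add(z)
            cyc.append(z)
            z = succ[z]
        comps.append(cyc)
    return comps

def is_left_to_right(component):
    """A component is left-to-right iff its minimum node is even."""
    return min(component) % 2 == 0
```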
\medskip
Below, we show that the levels of each component of a loop (not necessarily idempotent)
form an interval.
\begin{lem}\label{lem:component}
Let $C$ be a component of a loop $L=[x_1,x_2]$.
The nodes of $C$ are precisely the levels in the interval $[\min(C),\max(C)]$.
Moreover, if $C$ is left-to-right (resp.~right-to-left), then $\max(C)$
is the smallest level $\ge \min(C)$
such that between $(x_1,\min(C))$ and $(x_2,\max(C))$ (resp.~$(x_2,\min(C))$ and $(x_1,\max(C))$)
there are equally many ${\ensuremath{\mathsf{LL}}}$-factors and $\RR$-factors intercepted by $L$.
\end{lem}
\begin{proof}[Proof idea]
The proof of this lemma is rather technical and deferred to
Appendix~\ref{app:proof-component}, since the lemma is not at the core of the proof of the main
result. Let us first note that with the definition of $\max(C)$ stated
in the lemma it is rather easy to see that the interval
$[\min(C),\max(C)]$ is a union of cycles (i.e., components). This
can be shown by arguing that every node in $[\min(C),\max(C)]$ has
in-degree and out-degree one. What is much less obvious is that
$[\min(C),\max(C)]$ is connected, thus consists of a single cycle.
\input{edges}
The crux is thus to show that the nodes visited by every cycle of the flow
(or, equally,
every component) form an interval. For this, we use an induction based
on portions of the cycle, namely, on {\sl paths} of the flow.
The difficulty underlying the formalization of the inductive invariant comes from the
fact that, differently from cycles, paths of a flow may visit sets of levels that do
not form intervals.
An example is given in Figure \ref{fig:edges}, which represents some edges of a flow
forming a path from $\overrightarrow y_i$ to $\overrightarrow y_i +1$ and covering a non-convex set of nodes:
note that there could be a large gap between the nodes $\overrightarrow y_i$ and $\overleftarrow y_i$ due
to the unbalanced numbers of ${\ensuremath{\mathsf{LL}}}$-factors and $\RR$-factors below $\overleftarrow y_i$.
Essentially, the first part of the proof of the lemma amounts to identifying the sources
$\overrightarrow y_i$ (resp.~$\overleftarrow y_i$) of the ${\ensuremath{\mathsf{LR}}}$-factors (resp.~${\ensuremath{\mathsf{RL}}}$-factors), and to showing
that the latter factors are precisely of the form $\overrightarrow y_i \rightarrow \overleftarrow y_{i-1}+1$
(resp.~$\overleftarrow y_i \rightarrow \overrightarrow y_i +1$).
Once these nodes are identified, we show by induction on $i$ that every two
consecutive nodes $\overrightarrow y_i$ and $\overrightarrow y_i +1$ must be connected by a path whose
intermediate nodes form the interval $[\overleftarrow y_{i-1}+1,\overleftarrow y_i]$.
Finally, we argue that every cycle (or component) $C$ visits all and only
the nodes in the interval $[\min(C),\max(C)]$.
\end{proof}
The next lemma describes the precise shape and order of the intercepted factors
when the loop $L$ is idempotent.
\begin{lem}\label{lem:component2}
If $C$ is a left-to-right (resp.~right-to-left) component
of an {\sl idempotent} loop $L$, then the $(L,C)$-factors are in the following order:
$k$ ${\ensuremath{\mathsf{LL}}}$-factors (resp.~$\RR$-factors), followed by one ${\ensuremath{\mathsf{LR}}}$-factor (resp.~${\ensuremath{\mathsf{RL}}}$-factor),
followed by $k$ $\RR$-factors (resp.~${\ensuremath{\mathsf{LL}}}$-factors), for some $k \ge 0$.
\end{lem}
\begin{proof}
Suppose that $C$ is a left-to-right component of $L$.
We show by way of contradiction that $C$ has only one ${\ensuremath{\mathsf{LR}}}$-factor
and no ${\ensuremath{\mathsf{RL}}}$-factor. By Lemma~\ref{lem:component} this will yield
the claimed shape. Figure~\ref{fig:notidem} can be used as a reference
example for the arguments that follow.
We begin by listing the $(L,C)$-factors.
As usual, we order them based on their occurrences in the run $\rho$.
Let $\gamma$ be the first $(L,C)$-factor that is not an ${\ensuremath{\mathsf{LL}}}$-factor,
and let $\beta_1,\dots,\beta_k$ be the $(L,C)$-factors that precede $\gamma$
(these are all ${\ensuremath{\mathsf{LL}}}$-factors).
Because $\gamma$ starts at an even level, it must be an ${\ensuremath{\mathsf{LR}}}$-factor.
Suppose that there is another $(L,C)$-factor, say $\zeta$, that comes
after $\gamma$ and is neither an $\RR$-factor nor an ${\ensuremath{\mathsf{LL}}}$-factor.
Because $\zeta$ starts at an odd level, it must be an ${\ensuremath{\mathsf{RL}}}$-factor.
Further let $\delta_1,\dots,\delta_{k'}$ be the intercepted $\RR$-factors
that occur between $\gamma$ and $\zeta$.
We claim that $k'<k$, namely, that the number of $\RR$-factors between
$\gamma$ and $\zeta$ is strictly less than the number of ${\ensuremath{\mathsf{LL}}}$-factors
before $\gamma$. Indeed, if this were not the case, then, by Lemma
\ref{lem:component}, the level where $\zeta$ starts would not belong to
the component $C$.
Now, consider the pumped run $\rho'=\ensuremath{\mathsf{pump}}_L^2(\rho)$, obtained by adding a new
copy of $L$. Let $L'$ be the loop of $\rho'$ obtained from the union
of $L$ and its copy. Since $L$ is idempotent, the components of $L$ are isomorphic
to the components of $L'$. In particular, we can denote by $C'$ the component
of $L'$ that is isomorphic to $C$.
Let us consider the $(L',C')$-factors of $\rho'$. The first $k$ factors
are isomorphic to the $k$ ${\ensuremath{\mathsf{LL}}}$-factors $\beta_1,\ldots,\beta_k$ from $\rho$.
However, the $(k+1)$-th element has a different shape: it is isomorphic to
$\gamma~\beta_1~\delta_1~\beta_2~\cdots~\delta_{k'}~\beta_{k'+1}~\zeta$,
and in particular it is an ${\ensuremath{\mathsf{LL}}}$-factor.
This implies that the $(k+1)$-th edge of $C'$ is of the form $(y,y+1)$,
while the $(k+1)$-th edge of $C$ is of the form $(y,y-2k)$.
This contradiction comes from having assumed the existence of
the ${\ensuremath{\mathsf{RL}}}$-factor $\zeta$, and is illustrated in Figure~\ref{fig:notidem}.
\end{proof}
\input{notidem}
\begin{rem}
Note that every loop in the sweeping case is
idempotent. Moreover, the $(L,C)$-factors are precisely the factors
intercepted by the loop $L$.
\end{rem}
\medskip
\subsection*{Pumping idempotent loops.}
To describe in a formal way the run obtained by pumping an idempotent loop,
we need to generalize the notion of anchor point in the two-way case
(the reader may compare this with the analogous definitions in
Section~\ref{sec:combinatorics-sweeping} for the sweeping case).
Intuitively, the anchor point of a component $C$ of an idempotent loop $L$
is the source location of the unique ${\ensuremath{\mathsf{LR}}}$- or ${\ensuremath{\mathsf{RL}}}$-factor intercepted by
$L$ that corresponds to an edge of $C$ (recall Lemma~\ref{lem:component2}):
\begin{defi}\label{def:anchor}
Let $C$ be a component of an idempotent loop $L = [x_1,x_2]$.
The \emph{anchor point} of $C$ inside $L$, denoted%
\footnote{In denoting the anchor point --- and similarly the trace --- of a component $C$
inside a loop $L$, we omit the annotation specifying $L$, since this is
often understood from the context.}
$\an{C}$, is either the location $(x_1,\max(C))$ or the location
$(x_2,\max(C))$, depending on whether $C$ is left-to-right or right-to-left.
\end{defi}
\noindent
We will usually depict anchor points by black circles (like, for instance, in Figure \ref{fig:many-components}).
It is also convenient to redefine the notation $\tr{\ell}$ for representing
an appropriate sequence of transitions associated with each anchor point
$\ell$ of an idempotent loop:
\begin{defi}\label{def:trace}
Let $C$ be a component of some idempotent loop $L$, let $\ell=\an{C}$
be the anchor point of $C$ inside $L$, and let
$i_0 \mapsto i_1 \mapsto i_2 \mapsto \dots \mapsto i_k \mapsto i_{k+1}$
be a cycle of $C$, where $i_0=i_{k+1}=\max(C)$.
For every $j=0,\dots,k$, further let $\beta_j$ be the factor intercepted
by $L$ that corresponds to the edge $i_j \mapsto i_{j+1}$ of $C$.
The \emph{trace} of $\ell$ inside $L$ is the run $\tr{\ell} = \beta_0 ~ \beta_1 ~ \cdots ~ \beta_k$.
\end{defi}
\noindent
Note that $\tr{\ell}$ is not necessarily a factor of the original run
$\rho$. However, $\tr{\ell}$ is indeed a run, since $L$ is a loop and
the factors $\beta_j$ are concatenated according to the flow. As we will see below,
$\tr{\ell}$ will appear as an (iterated) factor of the pumped version of $\rho$,
where the loop $L$ is iterated.
As an example, by referring again to the components $C_1,C_2,C_3$ of
Figure~\ref{fig:many-components}, we have the following traces:
$\tr{\an{C_1}}=\alpha_2\:\alpha_1\:\alpha_3$,
$\tr{\an{C_2}}=\beta_2\:\beta_1\:\beta_3$, and
$\tr{\an{C_3}}=\gamma_1$.
The next proposition shows the effect of pumping idempotent loops. The
reader can note the similarity with the sweeping case.
\begin{prop}\label{prop:pumping-twoway}
Let $L$ be an idempotent loop of $\rho$ with components $C_1,\dots,C_k$,
listed according to the order of their anchor points:
$\ell_1=\an{C_1} \mathrel{\lhd} \cdots \mathrel{\lhd} \ell_k=\an{C_k}$.
For all $n\in\bbN$, we have
\[
\ensuremath{\mathsf{pump}}_L^{n+1}(\rho) ~=~
\rho_0 ~ \tr{\ell_1}^n ~ \rho_1 ~ \cdots ~ \rho_{k-1} ~ \tr{\ell_k}^n ~ \rho_k
\]
where
\begin{itemize}
\item $\rho_0$ is the prefix of $\rho$ that ends at the first anchor point $\ell_1$,
\item $\rho_k$ is the suffix of $\rho$ that starts at the last anchor point $\ell_k$,
\item $\rho_i$ is the factor $\rho[\ell_i,\ell_{i+1}]$, for all $1\le i<k$.
\end{itemize}
\end{prop}
\begin{proof}
Along the proof we sometimes refer to Figure \ref{fig:many-components} to
ease the intuition of some definitions and arguments.
For example, in the left hand-side of Figure \ref{fig:many-components},
the run $\rho_0$ goes until the first location marked by a black circle;
the runs $\rho_1$ and $\rho_2$, resp., are between the first and the
second black dot, and the second and third black dot; finally, $\rho_3$
is the suffix starting at the last black dot. The pumped run
$\ensuremath{\mathsf{pump}}_L^{n+1}(\rho)$ for $n=2$ is depicted to the right of the figure.
Let $L=[x_1,x_2]$ be an idempotent loop and, for all $i=0,\dots,n$, let
$L'_i=[x'_i,x'_{i+1}]$ be the $i$-th copy of the loop $L$ in the pumped run
$\rho'=\ensuremath{\mathsf{pump}}_L^{n+1}(\rho)$, where $x'_i = x_1 + i\cdot (x_2-x_1)$
(the ``$0$-th copy of $L$'' is the loop $L$ itself).
Further let $L'=L'_0\cup\dots\cup L'_n = [x'_0,x'_{n+1}]$, that is,
$L'$ is the loop of $\rho'$ that spans across the $n+1$ occurrences of $L$.
As $L$ is idempotent, the loops $L'_0,\dots,L'_n$ and $L'$ have all the
same effect as $L$.
In particular, the components of $L'_0,\dots,L'_n$, and $L'$ are isomorphic
to, and occur in the same order as, those of $L$.
We denote these components by $C_1,\dots,C_k$.
We let $\ell_j=\an{C_j}$ be the anchor point of each component $C_j$ inside
the loop $L$ of $\rho$
(these locations are marked by black dots in the left hand-side
of Figure \ref{fig:many-components}).
Similarly, we let $\ell'_{i,j}$ (resp.~$\ell'_j$)
be the anchor point of $C_j$ inside the loop $L'_i$ (resp.~$L'$).
From Definition \ref{def:anchor}, we have that either $\ell'_j=\ell'_{0,j}$ or $\ell'_j=\ell'_{n,j}$,
depending on whether $C_j$ is left-to-right or right-to-left (or, equally, on whether $j$ is odd or even).
Now, let us consider the factorization of the pumped run $\rho'$
induced by the locations $\ell'_{i,j}$, for all $i=0,\dots,n$ and for $j=1,\dots,k$
(these locations are marked by black dots in the right hand-side of the figure).
By construction, the prefix of $\rho'$ that ends at location $\ell'_{0,1}$
coincides with the prefix of $\rho$ that ends at $\ell_1$,
i.e.~$\rho_0$ in the statement of the proposition.
Similarly, the suffix of $\rho'$ that starts at location $\ell'_{n,k}$ is isomorphic
to the suffix of $\rho$ that starts at $\ell_k$, i.e. $\rho_k$ in the statement.
Moreover, for all odd (resp.~even) indices $j$, the factor
$\rho'[\ell'_{n,j},\ell'_{n,j+1}]$ (resp.~$\rho'[\ell'_{0,j},\ell'_{0,j+1}]$) is isomorphic
to $\rho[\ell_j,\ell_{j+1}]$, i.e.~the $\rho_j$ of the statement.
The remaining factors of $\rho'$ are those delimited by the pairs of locations
$\ell'_{i,j}$ and $\ell'_{i+1,j}$, for all $i=0,\dots,n-1$ and all $j=1,\dots,k$.
Consider one such factor $\rho'[\ell'_{i,j},\ell'_{i+1,j}]$,
and assume that the index $j$ is odd (the case of an even $j$ is similar).
This factor can be seen as a concatenation of factors intercepted by $L$
that correspond to edges of $C_j$ inside $L'_i$.
More precisely, $\rho'[\ell'_{i,j},\ell'_{i+1,j}]$ is obtained by concatenating
the unique ${\ensuremath{\mathsf{LR}}}$-factor of $C_j$ --- recall that by Lemma \ref{lem:component2}
there is exactly one such factor --- with an interleaving of the ${\ensuremath{\mathsf{LL}}}$-factors
and the $\RR$-factors of $C_j$.
As the components are the same for all $L'_i$'s and for $L$, this corresponds
precisely to the trace $\tr{\ell_j}$ (cf.~Definition \ref{def:trace}).
Now that we know that $\rho'[\ell'_{i,j},\ell'_{i+1,j}]$ is isomorphic to $\tr{\ell_j}$,
we can conclude that
$\rho'[\ell'_{0,j},\ell'_{n,j}] \:=\:
\rho'[\ell'_{0,j},\ell'_{1,j}] ~ \dots ~ \rho'[\ell'_{n-1,j},\ell'_{n,j}]$
is isomorphic to $\tr{\ell_j}^n$.
\end{proof}
\section*{Acknowledgment}
\noindent The authors wish to acknowledge fruitful discussions with
Emmanuel Filiot, Isma\"el Jecker and Sylvain Salvati. We also thank
the referees for their very careful reading and the suggestions for improvement.
\bibliographystyle{abbrv}
\section{One-way definability: overview}\label{sec:overview}
In this section we state our main result, which is the existence of an
elementary algorithm for checking whether a two-way transducer is
equivalent to some one-way transducer. We call such transducers
\emph{one-way definable}. Before stating our result, we
start with a few examples illustrating the reasons that may prevent a
transducer from being one-way definable.
\begin{exa}\label{ex:one-way-definability}
We consider two-way transducers that accept any input $u$
from a given regular language $R$ and produce as output the word $u\,u$.
We will argue how, depending on $R$, these transducers may or may not be one-way definable.
\begin{enumerate}
\item If $R=(a+b)^*$, then there is no equivalent one-way transducer,
as the output language is not regular.
If $R$ is finite, however, then the transduction mapping $u\in R$ to $u\,u$
can be implemented by a one-way transducer that stores the input $u$
(this requires at least as many states as the cardinality of $R$),
and outputs $u\,u$ at the end of the computation.
\item A special case of transduction with finite domain is obtained from the language
$R_n = \{ a_0 \, w_0 \, \cdots a_{2^n-1} \, w_{2^n-1} \::\: a_i\in\{a,b\} \}$,
where $n$ is a fixed natural number, the input alphabet is $\{a,b,0,1\}$,
and each $w_i$ is the binary encoding of the index $i=0,\dots,2^n-1$
(hence $w_i\in\{0,1\}^n$).
According to Proposition~\ref{prop:lower-bound} below, the transduction
mapping $u\in R_n$ to $u\,u$ can be implemented by a two-way transducer
of size $\cO(n^2)$, but every equivalent one-way transducer
has size (at least) doubly exponential in $n$.
\item Consider now the periodic language $R=(abc)^*$.
The function that maps $u\in R$ to $u\,u$ can be easily implemented by a
one-way transducer: it suffices to output alternatively $ab$, $ca$, $bc$
for each input letter, while checking that the input is in $R$.
\end{enumerate}
\end{exa}
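The one-way strategy of item (3) can be made concrete with a small sketch (the function name and error handling are our own choices, not part of the formal development): for each input letter the transducer emits one of the fixed words $ab$, $ca$, $bc$, depending only on the position modulo $3$, while checking membership in $(abc)^*$:

```python
def duplicate_abc(u: str) -> str:
    """One-way simulation of u -> uu for u in (abc)*.

    For each input letter we emit a fixed two-letter word
    (ab, ca, bc, repeating), so the output is produced left to
    right without ever storing the input u.
    """
    expected = "abc"              # the period of R
    emitted = ["ab", "ca", "bc"]  # outputs for positions 0, 1, 2 mod 3
    out = []
    for i, letter in enumerate(u):
        if letter != expected[i % 3]:
            raise ValueError("input not in (abc)*")
        out.append(emitted[i % 3])
    if len(u) % 3 != 0:
        raise ValueError("input not in (abc)*")
    return "".join(out)
```

Since the output at each position depends only on the position modulo $3$, no part of the input needs to be stored, in contrast with the finite-domain construction of item (1).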
\begin{exa}\label{ex:running}
We consider now a slightly more complicated transduction
that is defined on input words of the form $u_1 \:\#\: \cdots \:\#\: u_n$,
where each factor $u_i$ is over the alphabet $\Sigma=\{a,b,c\}$.
The associated output has the form
$w_1 \:\#\: \cdots \:\#\: w_n$, where each $w_i$
is either $u_i \: u_i$ or just $u_i$, depending on whether or not
$u_i\in (abc)^*$ and $u_{i+1}$ has even length, with $u_{n+1}=\emptystr$
by convention.
\noindent
The natural way to implement this transduction is by means
of a two-way transducer that performs multiple passes
on the factors of the input:
a first left-to-right pass is performed on
$u_i \,\#\, u_{i+1}$ to produce the first copy of $u_i$
and to check whether $u_i\in (abc)^*$ and $|u_{i+1}|$ is even; if so,
a second pass on $u_i$ is performed to produce
another copy of $u_i$.
\noindent
Observe however that the above transduction can also be implemented by a one-way
transducer, using non-determinism: when entering a factor $u_i$, the transducer
guesses whether or not $u_i\in (abc)^*$ and $|u_{i+1}|$ is even;
depending on this it outputs either $(abc\,abc)^{\frac{|u_i|}{3}}$
or $u_i$, and checks that the guess is correct while proceeding to
read the input.
\end{exa}
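For reference, the transduction itself (independently of either transducer implementing it) can be sketched as a plain function; the helper name `in_abc_star` and the encoding of inputs as `#`-separated strings are our own choices:

```python
def running_example(w: str) -> str:
    """Reference implementation of the transduction of the example:
    factor u_i is doubled iff u_i is in (abc)* and |u_{i+1}| is even,
    with u_{n+1} taken to be the empty word."""
    def in_abc_star(u: str) -> bool:
        return len(u) % 3 == 0 and all(
            c == "abc"[i % 3] for i, c in enumerate(u))

    factors = w.split("#")
    out = []
    for i, u in enumerate(factors):
        nxt = factors[i + 1] if i + 1 < len(factors) else ""
        out.append(u + u if in_abc_star(u) and len(nxt) % 2 == 0 else u)
    return "#".join(out)
```

For instance, on input `abc#aa#abcabc` the first and third factors are doubled (the factor following each of them has even length, counting the empty word for the last), while `aa` is left unchanged.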
\noindent
\medskip
The main result of our paper is an elementary algorithm that decides
whether a functional transducer is one-way definable:
\begin{thm}\label{thm:main}
There is an algorithm that takes as input a functional two-way
transducer $\cT$ and outputs in $3\exptime$ a \emph{one-way} transducer
$\cT'$ satisfying the following properties:
\begin{enumerate}
\item $\cT'\subseteq\cT$,
\item $\dom(\cT')=\dom(\cT)$ if and only if $\cT$ is one-way definable,
\item $\dom(\cT')=\dom(\cT)$ can be checked in $2\expspace$.
\end{enumerate}
Moreover, if $\cT$ is a sweeping transducer, then $\cT'$ can be
constructed in $2\exptime$
and $\dom(\cT')=\dom(\cT)$ is decidable in $\expspace$.
\end{thm}
\begin{rem}
The transducer $\cT'$ constructed in the above theorem is
in a certain sense maximal: for every
$v \in \dom(\cT) ~\setminus~ \dom(\cT')$
and every one-way transducer $\cT''$ with
$\dom(\cT') \subseteq \dom(\cT'') \subseteq \dom(\cT)$ there exists
some witness input $v'$ obtained from $v$ such that
$v' \in \dom(\cT) ~\setminus~ \dom(\cT'')$. We will make this more
precise at the end of Section~\ref{sec:characterization-twoway}.
\end{rem}
We also provide a two-exponential lower bound for the size of the equivalent transducer.
As the lower bound is achieved by a sweeping transduction (even a deterministic one),
this gives a tight lower bound on the size of any one-way transducer equivalent to
some sweeping transducer.
\begin{prop}\label{prop:lower-bound}
There is a family $(f_n)_{n\in\bbN}$ of transductions such that
\begin{enumerate}
\item $f_n$ can be implemented by a deterministic sweeping transducer of size $\cO(n^2)$,
\item $f_n$ can be implemented by a one-way transducer,
\item every one-way transducer that implements $f_n$
has size $\Omega(2^{2^n})$.
\end{enumerate}
\end{prop}
\begin{proof}
The family of transformations is precisely the one described in
Example~\ref{ex:one-way-definability}~(2), where $f_n$ maps inputs of the form
$u = a_0 \, w_0 \, \cdots \, a_{2^n-1} ~ w_{2^n-1}$ to outputs of the form $u\,u$,
where $a_i\in\{a,b\}$ and $w_i\in\{0,1\}^n$ is the binary encoding of $i$.
A deterministic sweeping transducer implementing $f_n$ first checks that the
binary encodings $w_i$, for $i=0,\dots,2^n-1$, are correct.
This can be done with $n$ passes:
the $j$-th pass uses $\cO(n)$ states to check the correctness of
the $j$-th bits of the binary encodings.
Then, the sweeping transducer performs two additional passes to
copy the input twice. Overall, the sweeping transducer has size $\cO(n^2)$.
As already mentioned, every one-way transducer that implements $f_n$ needs
to remember input words $u$ of exponential length in order to output $u\,u$, which
roughly requires doubly exponentially many states.
A more formal argument providing a lower bound for the size of a one-way
transducer implementing $f_n$ goes as follows.
First of all, one observes that given a one-way transducer $\cT$,
the language of its outputs,
i.e., $L^{\text{out}}_\cT = \{ w \::\: (u,w)\in\sL(\cT) \text{ for
some } u\}$
is regular. More precisely, if $\cT$ has size $N$, then the language
$L^{\text{out}}_\cT$ is recognized by an automaton of size linear in $N$.
Indeed, while parsing $w$, the automaton can guess an input word $u$
and a run on $u$,
together with a factorization of $w$ in which the $i$-th
factor corresponds to the output of the transition
on the $i$-th letter of $u$. Basically, this requires
storing as control states the transition rules of $\cT$ and the
suffixes of outputs.
Now, suppose that the function $f_n$ is implemented by a one-way
transducer $\cT$ of size $N$. The language
$L^{\text{out}}_\cT = \{ u\,u \::\: u\in\dom(f_n) \}$ is then
recognized by an automaton of size $\cO(N)$.
Finally, we recall a result from \cite{GlaisterShallit96}, which shows that,
given a sequence of pairs of words $(u_i,v_i)$, for $i=1,\dots,M$,
every non-deterministic automaton that separates the language
$\{u_i\,v_i \::\: 1\le i\le M\}$ from the language
$\{u_i\,v_j \::\: 1\le i\neq j\le M\}$ must have at least $M$ states.
By applying this result to our language $L^{\text{out}}_\cT$, where
$u_i=v_i$ for all $i=1,\dots,M=2^{2^n}$, we get that $N$ must be at
least linear in $M$, and hence $N \in \Omega(2^{2^n})$.
\end{proof}
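The counting at the heart of the lower bound can be made concrete with a short sketch (illustrative only; the function name is ours). It enumerates $\dom(f_n)$: there is exactly one input per choice of the letters $a_0,\dots,a_{2^n-1}$, hence $2^{2^n}$ inputs in total, one pair $(u_i,v_i)$ with $u_i=v_i$ for each of them:

```python
from itertools import product

def domain_fn(n: int):
    """Enumerate dom(f_n): words a_0 w_0 ... a_{2^n-1} w_{2^n-1},
    where each a_i is in {a, b} and w_i is the n-bit binary
    encoding of the index i."""
    encodings = [format(i, f"0{n}b") for i in range(2 ** n)]
    for letters in product("ab", repeat=2 ** n):
        yield "".join(a + w for a, w in zip(letters, encodings))
```

Each enumerated word has length $2^n(n+1)$, i.e.~exponential in $n$, while the number of words is $2^{2^n}$, matching the state lower bound obtained above.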
\bigskip
The proof of Theorem~\ref{thm:main} will be developed in the next sections.
The main idea is to decompose a run of the two-way transducer $\cT$
into factors that can be easily simulated in a one-way manner. We defer the
formal definition of such a decomposition to Section \ref{sec:characterization-sweeping},
while here we refer to it simply as a ``$\boldsymbol{B}$-decomposition'', where
$\boldsymbol{B}$ is a suitable number computed from $\cT$.
The reader can refer to Figure~\ref{fig:decomposition-sweeping} on page
\pageref{fig:decomposition-sweeping}, which provides some intuitive
account of a $\boldsymbol{B}$-decomposition for a sweeping run.
Roughly speaking, each factor of a $\boldsymbol{B}$-decomposition either
already looks like a run of a one-way transducer
(e.g.~the factors $D_1$ and $D_2$ of Figure~\ref{fig:decomposition-sweeping}),
or it produces a periodic output, where the period is bounded by $\boldsymbol{B}$
(e.g.~the factor between $\ell_1$ and $\ell_2$).
Identifying factors that look like runs of one-way transducers is rather easy.
On the other hand, to identify factors with periodic outputs we rely on
a notion of ``inversion'' of a run. Again, we defer the formal definition
and the important combinatorial properties of inversions
to Section \ref{sec:combinatorics-sweeping}.
The reader can refer to Figure \ref{fig:inversion-sweeping}
on page \pageref{fig:inversion-sweeping}
for an example of an inversion of a run of a sweeping transducer.
Intuitively, this is a portion of a run that is potentially difficult to simulate
in a one-way manner, due to the existence of long factors of the output that are
generated following the opposite order of the input.
Finally, the complexity of the decision procedure in Theorem~\ref{thm:main}
is analyzed in Section~\ref{sec:complexity}.
\subsection*{Roadmap.}
In order to provide a roadmap of our proofs, we state below the equivalence
between the key properties related to one-way definability, inversions of runs,
and existence of decompositions:
\begin{thm}\label{thm:main2}
Given a functional two-way transducer $\cT$,
an integer $\boldsymbol{B}$ can be computed such that the following are equivalent:
\begin{itemize}
\item[\PR1)] $\cT$ is one-way definable,
\item[\PR2)] for every successful run of $\cT$ and every inversion in it,
the output produced within the inversion has period at most $\boldsymbol{B}$,
\item[\PR3)] every input has a successful run of $\cT$ that admits a $\boldsymbol{B}$-decomposition.
\end{itemize}
\end{thm}
As the notions of inversion and $\boldsymbol{B}$-decomposition are simpler to formalize
for sweeping transducers, we will first prove the theorem assuming that $\cT$ is
a sweeping transducer; we will focus later on unrestricted two-way transducers.
Specifically, in Section~\ref{sec:combinatorics-sweeping} we introduce the basic combinatorics
on words and the key notion of inversion for a run of a sweeping transducer, and we prove the
implication \PR1 $\Rightarrow$ \PR2.
In Section~\ref{sec:characterization-sweeping} we define $\boldsymbol{B}$-decompositions of runs of
sweeping transducers, prove the implication \PR2 $\Rightarrow$ \PR3, and sketch a proof of
\PR3 $\Rightarrow$ \PR1 (as a matter of fact, this latter implication can be proved in a way
that is independent of whether $\cT$ is sweeping or not, which explains why we
only sketch the proof in the sweeping case).
Section~\ref{sec:loops-twoway} lays down the appropriate definitions
concerning loops of two-way transducers, and analyzes in detail the effect of
pumping special idempotent loops.
In Section~\ref{sec:combinatorics-twoway} we further develop the combinatorial arguments
that are used to prove the implication \PR1 $\Rightarrow$ \PR2 in the general case.
Finally, in Section~\ref{sec:characterization-twoway} we prove the implications
\PR2 $\Rightarrow$ \PR3 $\Rightarrow$ \PR1 in the general setting,
and show how to decide the condition $\dom(\cT')=\dom(\cT)$ of Theorem \ref{thm:main}.
\section{Preliminaries}\label{sec:preliminaries}
We start with some basic notations and definitions for two-way
automata and transducers. We assume that every input word $w=a_1\cdots a_n$
has two special delimiting symbols $a_1 = \vdash$
and $a_n = \dashv$ that do not occur elsewhere: $a_i \notin \{\vdash,\dashv\}$
for all $i=2,\dots,n-1$.
A \emph{two-way automaton} is a tuple
$\cA=(Q,\Sigma,\vdash,\dashv,\Delta,I,F)$, where
\begin{itemize}
\item $Q$ is a finite set of states,
\item $\Sigma$ is a finite alphabet (including $\vdash, \dashv$),
\item $\Delta \subseteq Q \times \Sigma \times Q \times \set{\ensuremath{\mathsf{left}},\ensuremath{\mathsf{right}}}$
is a transition relation,
\item $I,F\subseteq Q$ are sets of initial and final states, respectively.
\end{itemize}
By convention, left transitions on $\vdash$ are not
allowed; on the other hand, right transitions on $\dashv$ are allowed, but, as we will see,
they will necessarily appear as last transitions of successful runs.
A \emph{configuration} of $\cA$ has the form $u\,q\,v$,
with $uv \in \vdash\, \S^* \, \dashv$
and $q \in Q$. A configuration $u\,q\,v$ represents the
situation where the current state of $\cA$ is $q$ and its head reads the first
symbol of $v$ (on input $uv$). If $(q,a,q',\ensuremath{\mathsf{right}}) \in \Delta$,
then there is a transition from any configuration of the form
$u\,q\,av$ to the configuration $ua\,q'\,v$; we denote such a transition
by $u\,q\,av \trans{a,\ensuremath{\mathsf{right}}} ua\,q'\,v$.
Similarly, if $(q,a,q',\ensuremath{\mathsf{left}}) \in \Delta$,
then there is a transition from any configuration of the form
$ub\,q\,av$ to the configuration $u\,q'\,bav$,
denoted as $ub\,q\,av \trans{a,\ensuremath{\mathsf{left}}} u\,q'\,bav$.
A \emph{run} on $w$
is a sequence of transitions.
It is \emph{successful} if it starts in an initial configuration
$q\, w$, with $q\in I$, and ends in a final configuration $w\,q'$,
with $q' \in F$ --- note that this latter configuration does not allow
additional transitions. The \emph{language} of $\cA$ is the set of words
that admit a successful run of $\cA$.
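As an illustration of the semantics just defined, acceptance by a two-way automaton can be checked by exploring its configuration graph. The sketch below is ours (not part of the formal development): it encodes a configuration $u\,q\,v$ as the pair $(q,|u|)$ and uses `<` and `>` as stand-ins for the delimiters $\vdash$ and $\dashv$:

```python
def accepts(word, delta, initial, final):
    """Explore all reachable configurations of a two-way automaton.

    word    : input string, delimited as '<' + w + '>'
    delta   : set of tuples (q, a, q2, d), with d in {'left', 'right'}
    initial : set of initial states
    final   : set of final states

    A configuration (q, i) means: state q, head on word[i]; position
    len(word) is reached by a rightward move on '>' and is accepting
    whenever the state is final.
    """
    seen = set()
    frontier = [(q, 0) for q in initial]
    while frontier:
        state, head = frontier.pop()
        if (state, head) in seen:
            continue
        seen.add((state, head))
        if head == len(word):
            if state in final:
                return True
            continue  # no transitions past the right delimiter
        for (q, a, q2, d) in delta:
            if q == state and a == word[head]:
                if d == 'right':
                    frontier.append((q2, head + 1))
                elif head > 0:  # leftward moves on '<' are disallowed
                    frontier.append((q2, head - 1))
    return False
```

Acceptance is a reachability question on configurations, which is why the search may stop at repeated configurations even though individual runs can be arbitrarily long.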
The definition of \emph{two-way transducers} is similar to that of two-way automata,
with the only difference that now there is an additional output alphabet $\Gamma$ and the transition
relation is a finite subset of $Q \times \Sigma \times \Gamma^* \times Q \times \{\ensuremath{\mathsf{left}},\ensuremath{\mathsf{right}}\}$,
which associates an output over $\Gamma$ with each transition of the underlying two-way automaton.
For a two-way transducer $\cT=(Q,\Sigma,\vdash,\dashv,\Gamma,\Delta,I,F)$,
we have a transition of the form $ub\,q\,av \trans{a,d \, \mid w} u'\,q'\,v'$, outputting $w$,
whenever $(q,a,w,q',d)\in\Delta$ and either $u'=uba ~\wedge~ v'=v$ or
$u'=u ~\wedge~ v'=bav$, depending on whether $d=\ensuremath{\mathsf{right}}$ or $d=\ensuremath{\mathsf{left}}$.
The \emph{output} associated with a run
$\rho = u_1\,q_1\,v_1 \trans{a_1,d_1 \mid w_1}<\qquad> \dots
\trans{a_n,d_n \mid w_n}<\qquad> u_{n+1}\,q_{n+1}\,v_{n+1}$
of $\cT$ is the word $\out{\rho} = w_1\cdots w_n$. A transducer $\cT$
defines a relation $\sL(\cT)$
consisting of all pairs $(u,w)$ such that
$w=\out{\rho}$, for some successful run $\rho$ on $u$.
The \emph{domain} of $\cT$, denoted $\dom(\cT)$,
is the set of input words that have a successful run.
For transducers $\cT,\cT'$, we write $\cT' \subseteq \cT$
to mean that $\dom(\cT') \subseteq \dom(\cT)$ and the transductions
computed by $\cT,\cT'$ coincide on $\dom(\cT)$.
A transducer is called \emph{one-way} if it does not contain transition
rules of the form $(q,a,w,q',\ensuremath{\mathsf{left}})$. It is called \emph{sweeping} if
it can perform reversals only at the borders of the input word.
A transducer that is equivalent to some one-way
(resp.~sweeping) transducer is called \emph{one-way definable}
(resp.~\emph{sweeping definable}).
The \emph{size} of a transducer takes into
account both the state space and the transition relation, and thus
includes the length of the output of each transition.
\medskip
\subsection*{Crossing sequences.}
The first notion that we use throughout the paper is that of crossing sequence.
We follow the convenient presentation from \cite{HU79}, which appeals to a
graphical representation of runs of a two-way transducer, where each configuration
is seen as a point (location) in a two-dimensional space.
Let $u=a_1\cdots a_n$ be an input word (recall that $a_1=\vdash$ and $a_n=\dashv$)
and let $\rho$ be a run of a two-way automaton (or transducer) on $u$.
The \emph{positions} of $\rho$ are the numbers from $0$ to $n$, corresponding
to ``cuts'' between two consecutive letters of the input. For example,
position $0$ is just before the first letter $a_1$,
position $n$ is just after the last letter $a_n$,
and any other position $x$, with $1\le x<n$, is between
the letters $a_x$ and $a_{x+1}$.
We will denote by $u[x_1,x_2]$ the factor of $u$ between
the positions $x_1$ and $x_2$ (both included).
Each configuration $u\,q\,v$ of a two-way run $\rho$
has a specific position associated with it. For technical reasons
we need to distinguish leftward and rightward transitions. If the configuration $u\,q\,v$
is the target of a rightward transition
then the position associated with $u\,q\,v$ is $x=|u|$.
The same definition also applies when $u\,q\,v$ is the initial configuration,
for which we have $u=\emptystr$ and $x=|u|=0$.
Otherwise, if $u\,q\,v$ is the target of a leftward transition
then the position associated with $u\,q\,v$ is $x=|u|+1$. Note that in
both cases, the letter read by the transition leading to $u\,q\,v$ is
$a_x$.
A \emph{location} of $\rho$ is any pair $(x,y)$,
where $x$ is the position of some configuration of $\rho$ and
$y$ is any non-negative integer for which there are at least
$y+1$ configurations in $\rho$ with the same position $x$.
The second component $y$ of a location is called \emph{level}.
For example, in Figure \ref{fig:run} we represent a possible
run of a two-way automaton together with its locations $(0,0)$,
$(1,0)$, $(2,0)$, $(2,1)$, etc.
Each location is naturally associated with a configuration, and thus a state.
Formally, we say that $q$ is the \emph{state at location $\ell=(x,y)$} in $\rho$,
and we denote this by writing $\rho(\ell)=q$,
if the $(y+1)$-th configuration of $\rho$ with position $x$ has state $q$.
Finally, we define the \emph{crossing sequence} of $\rho$ at position $x$
as the tuple $\rho|x=(q_0,\dots,q_h)$, where the $q_y$'s are all the states
at locations of the form $(x,y)$, for $y=0,\dots,h$.
\input{run}
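Crossing sequences can be computed directly from the definition; in the sketch below (the encoding of a run as a temporally ordered list of state/position pairs is our own), the level of a configuration is simply the number of earlier configurations sharing its position:

```python
from collections import defaultdict

def crossing_sequences(run):
    """Compute the crossing sequence rho|x at every position x.

    `run` lists the configurations of a run in temporal order, each as
    a pair (state, position), with positions computed according to the
    convention above.  Grouping states by position while preserving
    the temporal order yields exactly the tuples (q_0, ..., q_h).
    """
    seqs = defaultdict(list)
    for state, pos in run:
        seqs[pos].append(state)  # level = current length of seqs[pos]
    return dict(seqs)
```

For instance, a run that zigzags once across the cut at position $2$ (moving right, then left, then right again) contributes three consecutive configurations at that position, giving an odd-length crossing sequence there, in line with the observation made below for successful runs.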
As shown in Figure~\ref{fig:run}, a two-way run can be represented
as a path between locations annotated with the associated states.
We observe in particular that if a location $(x,y)$ is the target
of a rightward transition, then this transition has read the symbol $a_x$;
similarly, if $(x,y)$ is the target of a leftward transition, then
the transition has read the symbol $a_{x+1}$.
We also observe that, in any successful run $\rho$, every crossing
sequence has odd length and every rightward (resp.~leftward) transition
reaches a location with even (resp.~odd) level.
In particular, we can identify four types of transitions between locations,
depending on the parities of the levels (the reader may refer again to
Figure~\ref{fig:run}):
\begin{center}
\begin{tikzpicture}[baseline=0,scale=0.9]
\draw (-1,1.75) node (node1) {$\phantom{\!+\!1}~(x,2y)$};
\draw (3,1.75) node (node2) {$(x\!+\!1,2y')~\phantom{\!+\!1}$};
\draw (-1,0) node (node3) {$(x,2y\!+\!1)$};
\draw (3,0) node (node4) {$(x\!+\!1,2y'\!+\!1)$};
\draw (9,1.75) node (node5) {$\phantom{\!+\!1}~(x,2y)$};
\draw (9,2.5) node (node6) {$(x,2y\!+\!1)$};
\draw (9,0) node (node7) {$(x,2y\!+\!1)$};
\draw (9,0.75) node (node8) {$(x,2y\!+\!2)$};
\draw (node1) edge [->] node [above=-0.05, scale=0.9] {\small $a_{x+1},\ensuremath{\mathsf{right}}$} (node2);
\draw (node4) edge [->] node [above=-0.05, scale=0.9] {\small $a_{x+1},\ensuremath{\mathsf{left}}$} (node3);
\draw (node5.east) edge [->, out=0, in=0, looseness=2]
node [right=0, scale=0.9] {\small $a_{x+1},\ensuremath{\mathsf{left}}$} (node6.east);
\draw (node7.west) edge [->, out=180, in=180, looseness=2]
node [left=0, scale=0.9] {\small $a_x,\ensuremath{\mathsf{right}}$} (node8.west);
\end{tikzpicture}
\vspace{1mm}
\end{center}
Hereafter, we will identify runs with the corresponding annotated paths between locations.
It is also convenient to define a total order $\mathrel{\unlhd}$ on the locations of a run $\rho$
by letting $\ell_1 \mathrel{\unlhd} \ell_2$ if $\ell_2$ is reachable from $\ell_1$ by following the
path described by $\rho$ --- the order $\mathrel{\unlhd}$ on locations is called
\emph{run order}.
Given two locations $\ell_1 \mathrel{\unlhd} \ell_2$ of a run $\rho$, we write $\rho[\ell_1,\ell_2]$
for the factor of the run that starts in $\ell_1$ and ends in $\ell_2$. Note that the latter
is also a run and hence the notation $\outb{\rho[\ell_1,\ell_2]}$ is permitted.
We will often reason with factors of runs up to isomorphism, that is, modulo
shifting the coordinates of their locations while preserving the parity of the levels.
Of course, when the last location of (a factor of) a run $\rho_1$
coincides with the first location of (another factor of) a run $\rho_2$,
then $\rho_1$ and $\rho_2$ can be concatenated to form a longer run,
denoted by $\rho_1 \rho_2$.
This operation can be performed even if the two locations,
say $(x_1,y_1)$ and $(x_2,y_2)$, are different, provided that
$y_1 \equiv y_2 \pmod{2}$: in this case it suffices to shift the positions
(resp.~levels) of the locations of the first run $\rho_1$ by $x_2$ (resp.~$y_2$)
and, similarly, the positions (resp.~levels) of the locations of the second
run $\rho_2$ by $x_1$ (resp.~$y_1$).
\begin{wrapfigure}{r}{5.1cm}
\input{intercepted-factors}
\end{wrapfigure}
\medskip
\subsection*{Intercepted factors.}
For simplicity, we will denote by $\omega$
the maximal position of the input word.
We will consider \emph{intervals of positions} of
the form $I=[x_1,x_2]$, with $0 \le x_1<x_2 \le \omega$.
The \emph{containment} relation $\subseteq$ on intervals is
defined as expected: $[x_3,x_4] \subseteq [x_1,x_2]$ if $x_1\le x_3 < x_4\le x_2$.
A \emph{factor} of a run $\rho$ is a contiguous subsequence of $\rho$.
A factor of $\rho$ \emph{intercepted} by an interval
$I=[x_1,x_2]$ is a maximal factor that visits only
positions $x\in I$, and never uses a left transition from
position $x_1$ or a right transition from position $x_2$.
Figure~\ref{fig:intercepted-factors} on the right shows
the factors $\alpha,\beta,\gamma,\delta,\zeta$ intercepted
by an interval $I$.
The numbers that annotate the endpoints of the factors
represent their levels.
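A hedged sketch of this definition (our simplification, ignoring states and outputs): representing a run by the sequence of positions it visits, the factors intercepted by $[x_1,x_2]$ are the maximal blocks of consecutive positions inside the interval, since a transition leaving the interval ends the current factor:

```python
# Hypothetical sketch: factors of a run intercepted by an interval [x1, x2].
# The run is given as the list of positions it visits; maximality means a
# factor ends exactly when the run steps outside the interval.
def intercepted_factors(positions, x1, x2):
    factors, current = [], []
    for x in positions:
        if x1 <= x <= x2:
            current.append(x)
        elif current:
            factors.append(current)
            current = []
    if current:
        factors.append(current)
    return factors
```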
\medskip
\subsection*{Functionality.}
We say that a transducer is \emph{functional} (equivalently, one-valued, or single-valued)
if for each input $u$, at most one output $w$ can be
produced by any possible successful run on $u$.
Of course, every deterministic transducer is functional, while the opposite
implication fails in general.
To the best of our knowledge, determining the precise complexity of the determinization
of a two-way transducer (whenever an equivalent deterministic one-way transducer exists)
is still open. From classical bounds on determinization of finite automata, we only know
that the size of a determinized transducer may be exponential in the worst-case.
One solution to this question, which is probably
not the most efficient one, is to
check one-way definability: if an equivalent one-way transducer is
constructed, one can check in $\ptime$ if it can be
determinized~\cite{cho77,weberklemm95,bealcarton02}.
The following result, proven in Section \ref{sec:complexity},
is the reason to consider only functional transducers:
{
\renewcommand{\thethm}{\ref{prop:undecidability}}
\begin{prop}
The one-way definability problem for \emph{non-functional}
sweeping transducers is undecidable.
\end{prop}
}
Unless otherwise stated, hereafter we tacitly assume that all transducers
are functional. Note that functionality is a decidable property, as shown below.
The proof of this result is similar to the decidability proof for the equivalence problem
of deterministic two-way transducers~\cite{Gurari80}, as it reduces the functionality
problem to the reachability problem of a $1$-counter automaton of exponential size.
A matching $\pspace$ lower bound follows by a reduction of the emptiness problem
for the intersection of finite-state automata \cite{Kozen77}.
\begin{prop}\label{prop:functionality-pspace}
Functionality of two-way transducers can be decided in polynomial
space. This problem is $\pspace$-hard already for sweeping transducers.
\end{prop}
A (successful) run of a two-way transducer is called \emph{normalized}
if it never visits two locations with the same position, the same
state, and levels of the same parity. It is easy to see that
if a successful run $\rho$ of a \emph{functional} transducer visits two locations
$\ell_1=(x,y)$ and $\ell_2=(x,y')$ with the same state
$\rho(\ell_1)=\rho(\ell_2)$ and with $y \equiv y' \pmod{2}$, then the output
produced by $\rho$ between $\ell_1$ and $\ell_2$ is empty:
otherwise, by repeating the non-empty factor $\rho[\ell_1,\ell_2]$, we would
contradict functionality. So, by deleting the factor
$\rho[\ell_1,\ell_2]$ we obtain a successful run that produces the
same output. Iterating this operation leads to an equivalent,
normalized run.
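The deletion argument above can be sketched as follows (an illustrative Python fragment of ours, operating on (position, state) configurations and recomputing levels after each deletion):

```python
# Illustrative sketch (ours): normalize a run, given as a list of
# (position, state) configurations, by repeatedly deleting the factor
# between two locations sharing position, state and level parity (for a
# functional transducer such a factor produces empty output).
def normalize(run):
    while True:
        visits, seen, cut = {}, {}, None
        for i, (x, q) in enumerate(run):
            y = visits.get(x, 0)        # level of this visit to position x
            visits[x] = y + 1
            key = (x, q, y % 2)         # position, state, level parity
            if key in seen:
                cut = (seen[key], i)
                break
            seen[key] = i
        if cut is None:
            return run                  # no repetition left: normalized
        run = run[:cut[0]] + run[cut[1]:]   # delete the repeated factor
```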
Normalized runs are interesting because their crossing sequences have
bounded length (at most $\boldsymbol{H} = 2|Q|-1$). Throughout the paper we
will implicitly assume that successful runs are normalized. The
latter property can be easily checked on crossing sequences.
\section{Introduction}
\label{seq1}
Nowadays, Cloud computing allows data owners to use massive data storage and large computation capabilities at very low cost. Despite these benefits, such data outsourcing induces important security challenges. Indeed, data owners lose control over the pieces of information they outsource. To protect data in terms of confidentiality and privacy from unauthorized users as well as from the cloud, one common solution consists in encrypting data. However, while encryption achieves data confidentiality, it may limit the possible reuse or processing of outsourced data as well as data sharing. In this work, we are interested in the sharing of data between different users who have outsourced their data encrypted with their own public keys, i.e. using some asymmetric cryptosystem. This kind of problem is referred to as proxy re-encryption (PRE) \cite{blaze1998divertible}, where Alice (the delegator or emitter) wants to share with Bob (the delegate or recipient) some data she previously outsourced encrypted into the cloud (the proxy). When working with asymmetric encryption, the objective of PRE is to securely enable the proxy to re-encrypt Alice's cipher-text, encrypted with her public key, into a cipher-text that can be decrypted with Bob's private key. To do so, one simple PRE solution consists in asking Alice to provide her private key to the proxy. However, this strategy requires the proxy to be completely trusted and does not work in the case the cloud is considered as semi-honest (i.e., it will not disclose the data but will be curious). Blaze \textit{et al}. \cite{blaze1998divertible} proposed the first PRE scheme in such a semi-honest framework. Their scheme is based on the ElGamal cryptosystem and on a set of secret pieces of information, referred to as the re-encryption key, that Alice has to send to the proxy so as to make the change of public key possible (i.e., re-encrypting data under Bob's public key).
One main issue of this proposal, noted by Ateniese \textit{et al}. \cite{ateniese2006improved}, is that Blaze \textit{et al}.'s scheme is inherently bidirectional: the re-encryption key that allows transferring cipher-texts from Alice to Bob also enables the proxy to convert all of Bob's cipher-texts under Alice's public key. This is not acceptable for Bob. The main reason is that the re-encryption key depends on the delegate's (Bob's) private key. In order to solve this problem and achieve a unidirectional PRE, different approaches have been proposed. The first class of methods relies on classical asymmetric encryption cryptosystems. For instance, \cite{jakobsson1999quorum} takes advantage of a quorum-based protocol which stands on distributed proxies, each of which possesses a part of Alice's data but receives a different re-encryption key independent of Bob's private key. However, with this approach, Alice's private key is safe only as long as some proxies are honest. An alternative, proposed in \cite{dodis2003proxy}, works with only one proxy, where the re-encryption key provided by Alice is split into two parts, one for the proxy and the other for Bob. Unfortunately, with \cite{dodis2003proxy}, the data Alice encrypted with her public key are turned into symmetrically encrypted data, not data asymmetrically encrypted with Bob's public key. The second class regroups methods referred to as identity-based proxy re-encryption (IB-PRE), introduced by Green and Ateniese \cite{green2007identity}. Such a method mixes PRE with identity-based cryptography (IBC). In IBC, the public encryption key of a user is derived from his identity (e.g., his email address); by combining it with PRE, the emitter and the proxy just need to know the delegates' identities instead of verifying their certificates. Basically, the unidirectional property is achieved thanks to the fact that the re-encryption key depends on the identity of the delegate.
However, it must be known that IB-PRE suffers from the key-escrow issue (see \cite{dodis2003proxy} for more details). Most of these schemes also rely on cryptosystems based on bilinear pairing \cite{han2013identity, chu2007identity, matsuo2007proxy, liang2009attribute, xu2016conditional}, an operation considered very expensive in terms of computation complexity compared to modular multiplication or exponentiation \cite{baek2005certificateless}. To overcome this issue, Deng \textit{et al}. \cite{deng2008chosen} proposed an asymmetric cross-cryptosystem re-encryption scheme that does not rely on pairing.
Moreover, while the above approaches allow one user to share data with another, they do not make possible the processing of encrypted data by the cloud or proxy. This capacity is usually achieved with the help of homomorphic cryptosystems, with which one can perform operations onto encrypted data with the guarantee that the decrypted result equals the one carried out onto un-encrypted data \cite{rivest1978data}. The first homomorphic-based PRE attempt was proposed by Bresson \textit{et al}. in \cite{bresson2003simple}, using the Paillier cryptosystem \cite{paillier1999public}. However, even though their solution makes data sharing possible, it cannot be seen as a pure proxy re-encryption scheme. Indeed, data are not re-encrypted with the public key of the delegate. If the latter wants to ask the cloud to process the data he receives from Alice, he has: i) first to download Alice's data; ii) decrypt them based on some secret pieces of information provided by Alice; iii) re-encrypt them with his public key and send them back to the cloud. There is thus still a need for a homomorphic-based PRE.
In this work, we propose the first homomorphic proxy re-encryption scheme which does not require the delegate to re-upload the data another user has shared with him. It is based on the Paillier cryptosystem and can be roughly summarized as follows. Bob and Alice agree on a secret key, which Alice sends Paillier-encrypted to the cloud. The cloud uses this key so as to generate a Paillier-encrypted random sequence with the help of a secure linear congruential generator (SLCG) we propose, which works in the Paillier encrypted domain. All computations are conducted by the cloud server. This SLCG provides a sequence of Paillier-encrypted random numbers. Based on a fast and new solution we propose so as to compute the difference between Paillier-encrypted data, the cloud: i) computes in clear the difference between this encrypted random sequence and the encrypted data of Alice and, ii) encrypts this sequence of differences with the public key of Bob. Then, Bob just has to ask the cloud to remove the noise from the encrypted data in order to get access to the data Alice wants to share with him, and to process them in an outsourced manner if he wants.
The rest of the paper is organized as follows. In Section \ref{seq2}, we recall the definition of the Paillier cryptosystem and show how to use it in order to: i) quickly compute the difference between Paillier encrypted data; and ii) implement a secure linear congruential generator so as to generate an encrypted random sequence. Section \ref{seq3} describes the overall architecture of our homomorphic PRE solution (HPRE) in the case of the sharing of images. Performance of the proposed solution is given in Section \ref{seq4}. Conclusions are given in Section \ref{seq6}.
\section{Processing Paillier Encrypted Data}
\label{seq2}
In this section, we first introduce the Paillier cryptosystem as well as a new way to compute the difference between Paillier encrypted data before presenting a secure linear congruential generator (LCG) implemented in the Paillier encrypted domain so as to generate an encrypted pseudo random sequence of integers.
\subsection{Paillier cryptosystem}
\label{sseq2:1}
We opted for the asymmetric Paillier cryptosystem because of its additive homomorphic property \cite{paillier1999public}. In this work, we use a fast version of it defined as follows. Let $((g,K_p), K_s)$ be the public/private key pair, such that:
\begin{equation}
K_p = pq \quad \text{and} \quad K_s= (p-1)(q-1)
\label{eq1}
\end{equation}
where $p$ and $q$ are two large prime integers, $\mathbb{Z}_{K_p}= \{0, 1,..., K_p-1\}$, and $\mathbb{Z}_{K_p}^*$ denotes the set of integers that have multiplicative inverses modulo $K_p$. We select $g\in\mathbb{Z}_{K_p^2}^*$ such that:
\begin{equation}
\frac{(g^{K_s}-1) \bmod K_p^2}{K_p} \in \mathbb{Z}_{K_p}^*
\label{eq2}
\end{equation}
The Paillier encryption of a plain-text $m\in\mathbb{Z}_{K_p}$ into the cipher-text $c\in\mathbb{Z}_{K_p^2}^*$ using the public key $K_p$ is given by
\begin{equation}
c= E[m,r]= g^m r^{K_p} \mod K_p^2
\label{eq3}
\end{equation}
where $r \in \mathbb{Z}_{K_p}^*$ is a random integer associated to $m$ making the Paillier cryptosystem probabilistic, or semantically secure. More clearly, depending on the value of $r$, the encryption of the same plain-text will yield different cipher-texts even though the public encryption key is the same. Notice that it is possible to get a fast version of~\eqref{eq3} by fixing $g=1+K_p$ without reducing the security of the algorithm. By doing so, the encryption of $m$ into $c$ requires only one modular exponentiation and two modular multiplications
\begin{equation}
c=E[m,r]=(1+mK_p)r^{K_p} \mod K_p^2
\label{eq4}
\end{equation}
As we will see in Section \ref{sseq2:2}, this property is of importance for the computation of the difference between Paillier encrypted data.\\
Under the assumption $g=1+K_p$, the decryption of $c$ using the private key $K_s$ is given by
\begin{equation}
m=\frac{(c^{K_s}-1)K_s^{-1} \mod K_p^2}{K_p} \mod K_p
\label{eq5}
\end{equation}
If we consider two plain-texts $m_1$ and $m_2$, the additive homomorphic property of the Paillier cryptosystem allows linear operations on encrypted data, such as the addition of two plain-texts and the multiplication of a plain-text by a constant, ensuring that
\begin{equation}
E[m_1,r_1]E[m_2,r_2] = E[m_1+m_2, r_1r_2]
\label{eq6}
\end{equation}
\begin{equation}
E[m_1,r_1]^{m_2} = E[m_1 m_2, r_1^{m_2}]
\label{eq7}
\end{equation}
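As an illustration of \eqref{eq4}--\eqref{eq7}, the following Python sketch (with toy primes of our choosing; real keys have $1024$ bits or more) implements the fast variant and its decryption:

```python
# Toy sketch of the fast Paillier variant with g = 1 + K_p (tiny primes
# of our choosing; real deployments use >= 1024-bit moduli).
p, q = 11, 13
Kp, Ks = p * q, (p - 1) * (q - 1)   # public / private keys, eq. (1)
n2 = Kp * Kp

def encrypt(m, r):
    # eq. (4): one modular exponentiation and two modular multiplications
    return (1 + m * Kp) * pow(r, Kp, n2) % n2

def decrypt(c):
    # eq. (5), taking the inverse of Ks modulo Kp^2
    return (pow(c, Ks, n2) - 1) * pow(Ks, -1, n2) % n2 // Kp % Kp
```

Decrypting `encrypt(3, 5) * encrypt(4, 7) % n2` returns $3+4=7$, matching \eqref{eq6}, and decrypting `pow(encrypt(3, 5), 4, n2)` returns $3\cdot 4=12$, matching \eqref{eq7}.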
\subsection{Computing the difference between encrypted data}
\label{sseq2:2}
In this work, we propose a solution that allows the calculation by one server of the difference between two Paillier encrypted data. More clearly, if $a$ and $b$ are two integers, we want to compute their difference $a-b$ from their encrypted versions.
Let us consider a user-server relationship where the server has two cipher-texts $E_{K_p}[a,r]$ and $E_{K_p}[b,r]$ encrypted by the user. It is important to notice that, to make such a computation possible, the two cipher-texts have to be encrypted with the same random value $r$. Under this constraint, one can directly derive the difference $d$ between $a$ and $b$ from $E_{K_p}[a,r]$ and $E_{K_p}[b,r]$ by taking advantage of the fast Paillier cryptosystem assumption, i.e. $g=1+K_p$, as follows
\begin{equation}
\begin{array}{rcl}
d & = & D(a,b) =D^e(E_{K_p}[a, r], E_{K_p}[b, r]) \\[.3cm]
& = & \displaystyle\frac{E_{K_p}[a,r]\,E_{K_p}[b,r]^{-1}-1 \bmod K_p^2}{K_p} \bmod K_p \\[.3cm]
& = & \displaystyle\frac{g^{a} r^{K_p} g^{-b} r^{-K_p}-1 \bmod K_p^2}{K_p} \bmod K_p \\[.3cm]
& = & \displaystyle\frac{g^{a-b} -1 \bmod K_p^2}{K_p} \bmod K_p \\[.3cm]
& = & a- b \bmod K_p
\end{array}
\label{eq8}
\end{equation}
where $D$ and $D^e$ denote the two functions that allow computing the difference $d$ in the clear and in the Paillier encrypted domain, respectively. Notice that knowing the difference $d$ between $a$ and $b$ gives no clue about the values of $a$ and $b$ themselves.
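The derivation \eqref{eq8} can be checked with a toy Python sketch (our illustrative parameters; both cipher-texts must share the same random $r$):

```python
# Toy check of eq. (8): difference of two Paillier cipher-texts encrypted
# with the same random r (tiny primes, for illustration only).
p, q = 11, 13
Kp = p * q
n2 = Kp * Kp

def encrypt(m, r):
    return (1 + m * Kp) * pow(r, Kp, n2) % n2   # fast variant, g = 1 + Kp

def diff(ca, cb):
    # d = ((ca * cb^{-1} - 1) mod Kp^2) / Kp mod Kp = a - b mod Kp
    return (ca * pow(cb, -1, n2) - 1) % n2 // Kp % Kp
```

For instance `diff(encrypt(7, 5), encrypt(3, 5))` returns $4$, while the server learns nothing about $a$ and $b$ themselves.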
\subsection{Secure Linear Congruential Generator}
\label{sseq2:3}
As stated in the introduction, our HPRE scheme will require the cloud to securely generate a pseudo-random sequence, that is to say a Paillier encrypted random sequence of integers.
The generator we propose to secure is the LCG (Linear Congruential Generator) \cite{l1999tables}. It is based on congruences and linear functions, which can be easily implemented in the Paillier encrypted domain.
In the clear domain, LCG works as follows
\begin{equation}
X_{n+1} = a X_n + c \mod m
\label{eq9}
\end{equation}
where: $X_n$ is the $n^{th}$ random integer value of the LCG sequence; $a$ is a multiplier; $c$ is an increment; $m$ is the modulo; and, $X_0$ the initial term, also called the seed or the secret LCG key, one needs to know so as to re-generate a random sequence. The security of the LCG is based on the seed $X_0$. The knowledge of the parameters $a$, $c$ and $m$ does not endanger its security \cite{l1999tables}.
This random generator can be implemented in the Paillier encrypted domain, i.e. turned into a Secure LCG (SLCG), so as to generate an encrypted random sequence of integers (i.e. $\{E[X_n,r_n ]\}_{n=0...N-1}$) in the following way:
\begin{equation}
E[X_{n+1},r_{n+1}]=E[X_n,r_n ]^a E[c,r_c] =E[a X_n+c,r_n^ar_c]
\label{eq10}
\end{equation}
under the constraint, however, that $m$ equals the user's Paillier public key $K_p$ (i.e., $m = K_p$, see \eqref{eq1}).
While the increment and all the terms of the sequence (including the LCG seed) are encrypted, the multiplier $a$ is not. However, this does not reduce the security of our system, as the parameter $a$ is not supposed to be secret \cite{l1999tables}.
It is important to notice that, in our SLCG, a recursive relation exists between the random integers $r_n$, which ensures the semantic security of the Paillier cryptosystem. Derived from \eqref{eq10}, it is given by:
\begin{equation}
r_{n+1}= r_n^a r_c
\label{eq11}
\end{equation}
where $r_c$ is the random variable used to encrypt the increment. $r_0$ is the random value associated to the seed $X_0$. This recursive relationship will be considered in Section \ref{sseq3:2} so as to allow data exchange between two different users.
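A toy sketch of the SLCG iteration \eqref{eq10} (our illustrative parameters): the cloud iterates directly on cipher-texts, and decrypting the resulting sequence must match the plain LCG \eqref{eq9} with $m=K_p$:

```python
# Toy sketch of the SLCG (eq. (10)): iterating the LCG directly on
# Paillier cipher-texts; tiny primes of our choosing, with m = Kp.
p, q = 11, 13
Kp, Ks = p * q, (p - 1) * (q - 1)
n2 = Kp * Kp

def encrypt(m, r):
    return (1 + m * Kp) * pow(r, Kp, n2) % n2

def decrypt(c):
    return (pow(c, Ks, n2) - 1) * pow(Ks, -1, n2) % n2 // Kp % Kp

a, c_inc = 5, 3          # public multiplier and (encrypted) increment
X0, r0, rc = 9, 5, 7     # secret seed and random values
Xe = encrypt(X0, r0)
Ec = encrypt(c_inc, rc)

seq = []
for _ in range(3):
    Xe = pow(Xe, a, n2) * Ec % n2   # E[X_{n+1}] = E[X_n]^a * E[c]
    seq.append(decrypt(Xe))
# seq equals the plain LCG sequence X -> 5*X + 3 mod 143: [48, 100, 74]
```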
\begin{figure}[!t]
\centering
\includegraphics[scale=.5]{image/sharing.png}
\caption{General framework for data sharing through public-cloud}
\label{fig1}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{image/diagram3.png}
\end{center}
\caption{Main steps of our HPRE for an image sharing}
\label{fig2}
\end{figure*}
\section{Sharing outsourced encrypted data}
\label{seq3}
In this section, we first refine the data exchange framework we consider and its basic security assumptions. We then present our homomorphic-based proxy re-encryption scheme (HPRE).
\subsection{Data exchange scenario in outsourced environment}
\label{sseq3:1}
Fig. \ref{fig1} illustrates the general data exchange framework we consider, where a data owner (the emitter or delegator) has \textit{a priori} stored his data into a public cloud in an asymmetrically encrypted form; data he wants to share with another user (the recipient or delegate). We further assume a semi-honest cloud server: it honestly stores the encrypted data uploaded by the users and responds to their requests. While the server does not disclose data to any party who fails to prove ownership or access rights, it is curious and may try to infer information about the content of users' data or about their private keys. One last assumption is that all communications between the server and the users are protected with the help of the Paillier cryptosystem, so eavesdroppers cannot infer the messages being transmitted.
As stated previously, our objective is to allow users to share some data under the constraint that the delegator does not have to download his data, re-encrypt them with the public key of the delegate and upload them into the cloud. We also want this process to be conducted by the cloud (proxy) without disclosing the delegator's private key, and with very few communications between the delegator, the proxy and the delegate. In our scheme, if one user wants to share data with several users at once, all of them will have to agree on a single secret with the delegator.
\subsection{Secure data exchange between users}
\label{sseq3:2}
Let us thus consider that Alice (the delegator) wants to share with Bob (the delegate) a set of data she owns. These data could be a set of integer values, for instance a gray-scale image $I$, the $N$ pixels of which $I=\{I_i\}_{i=0..N-1}$ are encoded on $b$ bits.
As stated previously, it is assumed that Alice has already outsourced an image into the cloud by Paillier encrypting its pixels independently with her public key $K_{p1}$, as follows (see Fig. \ref{fig2} -- Data outsourcing step)
\begin{equation}
I_i^e=E_{K_{p1}}[I_i,r_i]
\label{eq12}
\end{equation}
where $r_i$ is the random value associated to the $i^{th}$ pixel $I_i$ of $I$ and $I_i^e$ is the encrypted version of $I_i$. As we will see in the sequel, our HPRE procedure imposes a constraint on the way Alice generates the random values $\{r_i\}_{i=0..N-1}$: they should satisfy \eqref{eq11}. That is to say, for each file Alice stores into the cloud, she has to memorize the random values $r_c$ and $r_0$ she used when she encrypted the first pixel of her image (or the first element of any file she stores), $I_0^e=E_{K_{p1}}[I_0,r_0]$.
In order to share this encrypted image with Bob, the public Paillier encryption key of whom is $K_{p2}$, we propose the following HPRE procedure also depicted in Fig. \ref{fig2}
\begin{enumerate}
\item \textbf{User agreement for data exchange} - In this first step, Bob and Alice have to agree on the exchange by defining the LCG parameters, in other words: the secret key $X_0$, the multiplier $a$ and the increment $c$. Let us recall that knowing $c$ and $a$ is not critical from a security point of view (see Section \ref{sseq2:3}).
\item \textbf{Secret random sequence generation} - Alice encrypts $X_0$ and $c$ under her public key $K_{p1}$, giving $E_{K_{p1}}[X_0,r_0]$ and $E_{K_{p1}}[c,r_c]$, and sends them to the cloud. Notice that $X_0$ is encrypted with the same random integer $r_0$ Alice used to encrypt the first pixel of her image (see above). She also sends the multiplier $a$. Based on these pieces of information, the cloud generates the secret random sequence $X^e=\{X_i^e=E_{K_{p1}}[X_i,r_i]\}_{i=0..N-1}$ using \eqref{eq10}.
\item \textbf{Data encryption for the delegate} - This procedure relies on two stages: i) the computation of the differences between the encrypted data of Alice $(I^e)$ and the secret random sequence $(X^e)$; ii) the encryption of these differences with the public key of Bob, $K_{p2}$.
\begin{enumerate}
\item \textit{Difference computation} - Since $X_i^e$ and $I_i^e$ have been encrypted with the same public key $K_{p1}$ and the same random values $r_i$ (see above), the cloud computes their difference $D_i$ as described in Section \ref{sseq2:2}, that is to say
\begin{equation}
\begin{small}
\begin{array}{rcl}
D_i & = & D(X_i,I_i )=D^e(E_{K_{p1}}[X_i,r_i],E_{K_{p1}}[I_i,r_i ]) \\
& = & X_i-I_i \bmod K_{p1}
\end{array}
\end{small}
\label{eq13}
\end{equation}
Even though the cloud knows $D=\{D_i\}_{i=0...N-1}$, it cannot deduce the values of $I_i$ and $X_i$.
\item \textit{Data encryption for the delegate} - From this standpoint, one may think the cloud just has to encrypt $D$ with the public key of Bob, $K_{p2}$, and then remove the noise so as to give him access to the data. This is possible under the constraint $D_i \bmod K_{p1} = D_i \bmod K_{p2}$, which is achieved when $0<D_i<\min(K_{p1}, K_{p2})$. Unfortunately, this constraint is hard to satisfy because the output amplitude of the SLCG cannot be controlled simply. To overcome this issue, our HPRE includes a ``noise refreshment procedure'' (see Fig. \ref{fig2}) before encrypting the data with the public key of Bob.
\begin{itemize}
\item Noise refreshment
\end{itemize}
To refresh the noise, Bob first generates on his side the sequence $\{X_i\}_{i=0..N-1}$, using an LCG parameterized like the SLCG of the cloud.
He also produces a second noise $\{\beta_i\}_{i=0..N-1}$ such as:
\begin{equation}
2^b-1<\beta_i<\min(K_{p1},K_{p2})
\label{eq14}
\end{equation}
where $b$ is the number of bits on which the pixel values of Alice's image are encoded. Under such a constraint, we ensure $\beta_i \bmod K_{p1}= \beta_i \bmod K_{p2}$ and $\beta_i-I_i \bmod K_{p1}= \beta_i-I_i \bmod K_{p2}$.
Then Bob sends to the cloud $\{E_{K_{p2}}[\beta_i,r_i']\}_{i=0...N-1}$ and $\{\alpha_i=\beta_i-X_i \bmod K_{p1}\}_{i=0..N-1}$, where $r_i'$ is a random value defined by Bob.
On its side, in order to remove the noise $\{X_i\}_{i=0...N-1 }$, the cloud computes
\begin{equation}
G_i=\alpha_i+D_i \mod K_{p1}=\beta_i-I_i \mod K_{p1}
\label{eq15}
\end{equation}
Then it encrypts $\{G_i\}_{i=0..N-1}$ with the public key of Bob
\begin{equation}
\{E_{K_{p2}}[G_i,r_i'']=E_{K_{p2}}[\beta_i-I_i,r_i'']\}_{i=0..N-1}
\label{eq16}
\end{equation}
Finally, in order to remove the noise $\beta_i$ from the data of Bob, the server computes
\begin{equation}
E_{K_{p2}}[I_i,r_i' r_{i}^{''-1}]=E_{K_{p2}}[\beta_i,r_i']E_{K_{p2}}[\beta_i-I_i,r_i'']^{-1}
\label{eq17}
\end{equation}
\end{enumerate}
At the end of this procedure, Bob has on the cloud the image of Alice encrypted with his own public key.
\end{enumerate}
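The whole exchange can be sketched end-to-end for a single pixel (an illustrative Python fragment of ours with toy keys; all parameter values are our own, not the paper's):

```python
# Illustrative end-to-end HPRE for one pixel; toy primes of our choosing
# (real keys use >= 1024-bit moduli).
def make_paillier(p, q):
    Kp, Ks = p * q, (p - 1) * (q - 1)
    nsq = Kp * Kp
    def enc(m, r):
        return (1 + m * Kp) * pow(r, Kp, nsq) % nsq
    def dec(c):
        return (pow(c, Ks, nsq) - 1) * pow(Ks, -1, nsq) % nsq // Kp % Kp
    return Kp, nsq, enc, dec

Kp1, n1, enc1, dec1 = make_paillier(11, 13)   # Alice's key pair
Kp2, n2, enc2, dec2 = make_paillier(17, 19)   # Bob's key pair

I0, r0 = 9, 5        # a pixel on b = 4 bits and Alice's random value
X0 = 30              # first term of the secret SLCG sequence
cI = enc1(I0, r0)    # outsourced pixel
cX = enc1(X0, r0)    # encrypted SLCG term (same r0, as required)

# Cloud: difference in clear, eq. (8): D = X0 - I0 mod Kp1
D = (cX * pow(cI, -1, n1) - 1) % n1 // Kp1 % Kp1

# Bob: regenerates X0, picks beta with 2^b - 1 < beta < min(Kp1, Kp2)
beta = 100
alpha = (beta - X0) % Kp1
c_beta = enc2(beta, 4)

# Cloud: noise refreshment, re-encryption under Kp2, removal of beta
G = (alpha + D) % Kp1                    # = beta - I0, eq. (15)
cG = enc2(G, 3)                          # eq. (16)
cI_bob = c_beta * pow(cG, -1, n2) % n2   # eq. (17): E_{Kp2}[I0]

recovered = dec2(cI_bob)                 # Bob's view of the shared pixel
```

Throughout, the cloud only manipulates cipher-texts and the blinded values $D$, $\alpha$ and $G$, never the pixel itself.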
As depicted, this system allows the data exchange between Alice and Bob without extra communication between the cloud and Alice, and without Bob having to download the data. It is also possible to notice that the access to the shared data is based on the knowledge of the secret SLCG key $X_0$ generated by Alice in agreement with Bob. Because our scheme is based on homomorphic encryption, the data can next be processed by the cloud without endangering data confidentiality.
\begin{table*}[t]
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Entities & Delegator (Alice) & Proxy (Cloud) & Delegate (Bob) \\
\hline
Time computation (sec) & 0.002 & 90 & 30\\
\hline
Encrypted data volume (bits) & 0 & 22986753 & 2048 \\
\hline
\end{tabular}
\end{center}
\caption{Amount of information stored (in bits) as well as the corresponding computation time that each entity needs (Alice, Bob and the cloud) for sharing an image of $92\times 122$ pixels}
\label{tab1}
\end{table*}
\begin{figure}[!t]
\centering
\includegraphics[scale=.5]{image/face.PNG}
\caption{Samples of face database}
\label{fig3}
\end{figure}
\section{Experimental results}
\label{seq4}
The previous solution was experimented in the case of the sharing of uncompressed images between two users. These images come from the Olivetti Research Laboratory of Cambridge, UK; the database contains $400$ images of $92\times 122$ pixels encoded on $8$ bits. Some samples of our image test set are given in Fig. \ref{fig3}. These images were encrypted with Paillier public keys of more than $1024$ bits in order to provide a high level of security.
Performance of our scheme is evaluated in terms of storage and computation complexity. Our HPRE was implemented in C/C++ with the GMP library, and all experiments were conducted using a machine equipped with $23$\,GB of RAM running Ubuntu $14.04$ LTS.
\begin{itemize}
\item Storage complexity:
\end{itemize}
Assuming that images are Paillier encrypted with a key of $1024$ bits, one encrypted image requires about $2.7$\,MB of storage in the cloud.
For each image the delegator (Alice) outsources, she only has to store on her side the random values $r_0$ and $r_c$. Even this is not an obligation: since she knows both her public and private keys, she can simply download the encrypted seed $E_{K_{p1}}[X_0,r_0]$ and the encrypted increment $E_{K_{p1}}[c,r_c]$ to recover these random values (i.e. $r_0$ and $r_c$).
During an image exchange, the delegator sends the encrypted seed $E_{K_{p1}} [X_0,r_0]$, the encrypted increment $E_{K_{p1}}[c,r_c]$ and the multiplier $a$. This amount of data is bounded by $O(\log_2(K_{p1}^2))$. For a key of $1024$ bits, it is close to $2048$ bits. On his side, the delegate (Bob) has to store $X_0$, the secret key of the LCG, but only for one session of data exchange.
\begin{itemize}
\item Computation complexity:
\end{itemize}
On the delegator side, the computation complexity is limited to the encryption of the SLCG parameters (i.e. $X_0$, $c$). Such a complexity is independent of the image's size.
Regarding the cloud, it has to: compute the secret random sequence; compute the difference between the encrypted data of Alice and this random sequence; refresh the noise based on the inputs of Bob; encrypt the result with the public key of Bob; and finally remove the noise. For an image of $N$ pixels, generating the secret random sequence is equivalent to $N$ encryptions, and the same holds for the computation of the differences $\{D_i\}_{i=0...N-1}$. As described above, the noise refreshment procedure consists of modular additions; we consider its complexity negligible compared to encryption operations. The last step, the encryption of the differences $\{G_i\}_{i=0...N-1}$, requires $N$ encryptions. As a consequence, the computation complexity for the cloud is bounded by $O(3\times N)$ encryptions.
The delegate's computation complexity comes from the noise refreshment procedure. He has to generate an LCG noise sequence (i.e. $\{X_i\}_{i=0...N-1}$), a task whose complexity is negligible compared to the $N$ encryptions of the second noise (i.e. $\{\beta_i\}_{i=0...N-1}$) that he also produces and then sends to the cloud. The computation complexity of the delegate is thus $N$ encryptions.
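To give an idea of why the delegate's clear-domain noise generation is negligible, here is a minimal LCG sketch (the parameters below are illustrative, not those of our SLCG): each output value costs one multiplication and one addition, versus one Paillier exponentiation per encrypted value.

```python
# Toy linear congruential generator X_{i+1} = (a*X_i + c) mod m.
# Parameters are illustrative only, not those of the paper's SLCG.
def lcg(x0, a, c, m, count):
    x, seq = x0, []
    for _ in range(count):
        seq.append(x)
        x = (a * x + c) % m
    return seq

# Generating N values is O(N) cheap modular arithmetic,
# negligible next to the N Paillier encryptions of the second noise.
noise = lcg(7, 1103515245, 12345, 2**31, 10)
```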
We provide in Table \ref{tab1} the amount of data that each entity has to store, as well as the computation time required for sharing images of our data set. Our HPRE scheme takes about one and a half minutes to share an image on a standard computer.
\section{Conclusion}
\label{seq6}
In this paper, we proposed the first homomorphic proxy re-encryption scheme. Its originality lies in a solution for computing the difference of data encrypted with the fast version of the Paillier cryptosystem. It also takes advantage of a secure linear congruential generator that we implemented in the Paillier encrypted domain, which drastically reduces the computation complexity for the cloud and the delegator. Furthermore, this solution does not need extra communication between the cloud and the delegator, i.e. the data owner. Moreover, since the data are homomorphically encrypted, it is possible to process outsourced data while ensuring their confidentiality. Our HPRE was implemented for the sharing of uncompressed images stored in the cloud, showing good computation time performance. Our scheme is not limited to images and can be used with any kind of data.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{Introduction}
If one fixes the length of each edge of a triangle, or more generally of an $n$-dimensional simplex, then the geometry of the triangle (simplex) is completely determined: no vertex can be moved without changing the length of some adjacent edge.
Contrast this to a square, for example, where two adjacent vertices can be shifted in the same direction while not changing the lengths of any edges.
Thus, if $\tau$ is an $n$-simplex and $g$ is a metric on $\tau$ with constant curvature, then the geometry of $(\tau,g)$ is completely determined by just the edge lengths that $g$ assigns to $\tau$.
What we mean in the title by ``synthetic geometry" is formulas/procedures to compute geometric quantities of $(\tau,g)$ using only the edge lengths of $\tau$, and without needing to isometrically embed $(\tau,g)$ into the appropriate model space and use explicit coordinates.
The study of {\it Euclidean simplices}, or simplices $(\tau,g)$ where $g$ has constant curvature zero, has a long history.
Heron's formula, which computes the area of a triangle using its edge lengths, is an example of synthetic geometry in a Euclidean 2-simplex which dates back thousands of years.
The study of higher dimensional simplices goes back (at least) to Menger in \cite{Menger} and Cayley in \cite{Cayley}.
More recently, the second author in \cite{Minemyer3} developed a more efficient technique to encode the geometry of a Euclidean $n$-simplex using the edge lengths of the simplex (this formula is also developed in \cite{Rivin}).
The techniques and results in \cite{Minemyer3} will be summarized in Section \ref{section:Euclidean}, as they will play a vital role in the work done in this paper.
Hyperbolic simplices, or metric simplices $(\tau,g)$ where $g$ has constant curvature -1, play an important role in the current mathematical zeitgeist.
For example, distances in hyperbolic comparison triangles, computed using only the edge lengths of the triangle, are used to determine whether a geodesic metric space is CAT($-1$) (or an Alexandrov space with curvature bounded below by $-1$).
Hyperbolic structures are used by Charney and Davis in \cite{CD} for their strict hyperbolization, which are then further used by Ontaneda in \cite{Ontaneda} in his smooth Riemannian hyperbolization.
The goal of this research is to develop synthetic geometric formulas for simplices with constant curvature. This, together with \cite{Minemyer3}, establishes a foundation on which further research may be more easily performed due to the formulas' relative simplicity in contrast to previous work in the literature.
We will focus primarily on {\it hyperbolic simplices} due to their mathematical importance, but the last Section will discuss analogous results for {\it spherical simplices} (constant curvature 1), and more generally for simplices of constant curvature $\kappa$.
The main results of this paper are as follows:
\begin{enumerate}
\item We develop a simple criterion which determines whether or not a set of positive edge lengths for an $n$-simplex determine a legitimate hyperbolic simplex. (Section \ref{section:realizability}, Theorem \ref{theorem : hyperbolic-realizability}).
\item Given a hyperbolic simplex $(\tau,g_{\mathbb{H}})$ and two points $x, y \in \tau$, we determine an easy procedure to find $d_{\mathbb{H}}(x,y)$ using only the edge lengths of $\tau$ and the barycentric coordinates of $x$ and $y$. (Section \ref{section:distances}, Theorem \ref{thm:hyp dist}).
\item Given a Euclidean $n$-simplex $(\tau,g_{\mathbb{E}})$ where $\tau = \langle v_1, v_2, \hdots, v_n, v_{n+1} \rangle$, we develop a formula for $\text{proj}_{\tau_{n+1}}(v_{n+1})$, the orthogonal projection of $v_{n+1}$ onto the $(n-1)$-face opposite of it (denoted by $\tau_{n+1}$). (Section \ref{section:Euclidean proj}, Theorem \ref{thm:Euclid Projection}).
\item Given a hyperbolic $n$-simplex $(\tau,g_{\mathbb{H}})$ where $\tau = \langle v_1, v_2, \hdots, v_n, v_{n+1} \rangle$, we develop a formula for $\text{proj}_{\tau_{1}}^{\mathbb{H}}(v_{1})$. (Section \ref{section:hyperbolic proj}, Theorem \ref{theorem:hyperbolic-projection}).
\end{enumerate}
The goal of all of this work is to develop formulas which are simple to use and reasonably intuitive.
Toward this, in Section \ref{section:example} we work out an example with a 3-simplex in which we use all of the formulas mentioned above (and some of the formulas in \cite{Minemyer3}).
The hope is that this example will be useful for any researchers who wish to use these formulas in the future.
Lastly, there are more difficult formulas in the literature for some of the quantities listed above.
In \cite{KSY}, Karli\u{g}a, Savas, and Yakut give a formula for orthogonal projection in hyperbolic space (Theorem 3 in \cite{KSY}).
As one can see, our formula is considerably simpler, and in any case their formula uses outward normal vectors and is therefore not a truly ``synthetic" formula.
Also, in \cite{Karliga}, Karli\u{g}a provides necessary and sufficient conditions for when a collection of edge lengths yields a legitimate hyperbolic simplex.
But again, one can see that our necessary and sufficient condition listed in Section \ref{section:realizability} is more natural, and simpler to use.
Also, just before submitting this paper to the arXiv, Abrosimov and Vuong posted the article \cite{AV} which gives a geometric version for our Theorem \ref{theorem : hyperbolic-realizability} when $n=3$.
\section{Notation and Formulas in Euclidean Simplices}\label{section:Euclidean}
Let us first establish some notation for the remainder of the paper.
Let $\tau = \langle v_1, v_2, \hdots, v_n, v_{n+1} \rangle$ be an $n$-dimensional simplex, and let $g$ be a Riemannian metric on $\tau$ with constant curvature.
The notation $g_\mathbb{E}$ means that $g$ has constant curvature $0$, or is {\it Euclidean}; the notation $g_\mathbb{H}$ implies that $g$ has curvature $-1$, or is {\it hyperbolic}; and the notation $g_\mathbb{S}$ means that $g$ has curvature $1$, or is \textit{spherical}.
Let $e_{ij}$ denote the edge of $\tau$ adjacent to the vertices $v_i$ and $v_j$, and let $\gamma_{ij}$ denote the length of $e_{ij}$ under $g$, also denoted by $g(e_{ij})$.
We make the convention that $\gamma_{ii} = 0$ for all $i$.
Denote the determinant of a matrix $M$ by $|M|$, and let $M_{ij}$ denote the $ij$-th minor of $M$ (that is, $M_{ij}$ equals the determinant of the matrix obtained by removing the $i^{th}$ row and $j^{th}$ column of $M$).
The purpose of this Section is to quickly summarize and explain the main results from \cite{Minemyer3} to be used in this paper.
Suppose a Euclidean $n$-simplex $(\tau,g_{\mathbb{E}})$ with $\tau = \langle v_1, \hdots, v_{n+1} \rangle$ is linearly isometrically embedded into $\mathbb{R}^m$ ($m \geq n$) endowed with some symmetric bilinear form $\langle , \rangle$.
Translate the image of $v_{n+1}$ so that it is mapped to the origin, and by abuse of notation we associate each vertex $v_i$ with its image in $\mathbb{R}^m$.
Define the vectors $w_i = v_i - v_{n+1}$ for $1 \leq i \leq n$.
The collection $( w_1, \hdots, w_n )$ forms a basis for the smallest subspace of $\mathbb{R}^m$ containing $\tau$, and the form $\langle, \rangle$ is completely determined on this subspace by the $n \times n$ matrix $Q$ whose $ij^{th}$ entry is defined by
$$ q_{ij} = \langle w_i, w_j \rangle. $$
Now, notice that
\begin{equation*}
\gamma_{ij}^2 = \langle w_i-w_j, w_i-w_j \rangle = \gamma_{i,n+1}^2 + \gamma_{j,n+1}^2 - 2 \langle w_i, w_j \rangle
\end{equation*}
and so
\begin{equation}\label{eqn:Q}
q_{ij} = \langle w_i, w_j \rangle = \frac{1}{2} \left( \gamma_{i,n+1}^2 + \gamma_{j,n+1}^2 - \gamma_{ij}^2 \right).
\end{equation}
This allows one to construct the matrix $Q$ using only the edge lengths of $(\tau,g_\mathbb{E})$.
Then, theoretically, one should be able to calculate any geometric quantity of $\tau$ using $Q$ since it completely determines the geometry of $(\tau,g_\mathbb{E})$.
The main results from \cite{Minemyer3} which follow from the definition of $Q$ above are:
\begin{enumerate}
\item {\bf Realizability of $(\tau,g_{\mathbb{E}})$:} $n(n+1)/2$ positive real numbers $\{ \gamma_{ij} \}$ are the edge lengths of some Euclidean simplex $(\tau, g_{\mathbb{E}})$ if and only if the matrix $Q$ is positive definite.
\item {\bf Distances in $(\tau,g_{\mathbb{E}})$:} Let $x, y \in \tau$ be such that $x$ has barycentric coordinates $(\alpha_i)_{i=1}^{n+1}$ and $y$ has barycentric coordinates $(\beta_i)_{i=1}^{n+1}$, where we have $\sum_{i=1}^{n+1} \alpha_i = 1 = \sum_{i=1}^{n+1} \beta_i$.
Then the squared Euclidean distance between $x$ and $y$ can be calculated by the formula
\begin{equation}\label{euclid-dist}
d_\mathbb{E}^2(x,y) = [x-y]^T Q [x-y]
\end{equation}
where $[x-y]$ is the vector in $\mathbb{R}^n$ whose $i^{th}$ coordinate is $(\alpha_i - \beta_i)$, for $1 \leq i \leq n$.
\item {\bf Volume of $(\tau, g_{\mathbb{E}}):$} The $n$-dimensional volume of $(\tau, g_{\mathbb{E}})$ is given by
\begin{equation*}
\text{Vol}(\tau) = \frac{\sqrt{\text{det}(Q)}}{n!}.
\end{equation*}
\end{enumerate}
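These three formulas are easy to sanity-check numerically. The sketch below (NumPy; a 3-4-5 right triangle chosen as an example input) builds $Q$ from equation \eqref{eqn:Q}, verifies positive definiteness, and recovers the expected distance $5$ and area $6$.

```python
# Sanity check of the three Euclidean formulas on a 3-4-5 right triangle
# (a 2-simplex): build Q from the edge lengths, then test realizability,
# the distance between two vertices, and the area.
import numpy as np
from math import factorial

n = 2
# gamma[i, j] = edge length between v_{i+1} and v_{j+1}; v_3 is the base vertex
gamma = np.array([[0.0, 5.0, 3.0],
                  [5.0, 0.0, 4.0],
                  [3.0, 4.0, 0.0]])
# q_ij = (gamma_{i,n+1}^2 + gamma_{j,n+1}^2 - gamma_{ij}^2) / 2
Q = np.array([[0.5 * (gamma[i, n]**2 + gamma[j, n]**2 - gamma[i, j]**2)
               for j in range(n)] for i in range(n)])

realizable = bool(np.all(np.linalg.eigvalsh(Q) > 0))  # positive definite?
x = np.array([1.0, 0.0])   # first n barycentric coordinates of v_1
y = np.array([0.0, 1.0])   # first n barycentric coordinates of v_2
dist = float(np.sqrt((x - y) @ Q @ (x - y)))            # should be gamma_12 = 5
area = float(np.sqrt(np.linalg.det(Q)) / factorial(n))  # should be 6, as Heron gives
print(realizable, dist, area)   # realizable, distance 5, area 6 (up to rounding)
```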
\section{Determining the realizability of hyperbolic simplices}\label{section:realizability}
\subsection*{The hyperboloid model for hyperbolic space} The goal of this research is to develop geometric formulas independent of the embedding of our hyperbolic simplex $(\tau, g_\mathbb{H})$ into hyperbolic space.
But we will need to isometrically embed our hyperbolic simplex into some model space for hyperbolic space in order to prove that our formulas are correct.
We will always use the hyperboloid model for hyperbolic space, and so we establish our notation for this now.
We will use the notation $\mathbb{R}^{n,1}$ to denote the standard Minkowski space with signature $(n,1)$.
That is, as a vector space, $\mathbb{R}^{n,1}$ is just $\mathbb{R}^{n+1}$ endowed with the symmetric bilinear form
\begin{equation*}
\langle x,y \rangle = x_1y_1 + x_2y_2 + \hdots + x_ny_n - x_{n+1}y_{n+1},
\end{equation*}
where $x = (x_1, \hdots, x_{n+1})$ and $y = (y_1, \hdots, y_{n+1})$.
The solution set to the equation $\langle x, x \rangle = -1$ forms a two-sheeted hyperboloid, and the ``upper" sheet (the sheet with $x_{n+1} > 0$) is our model for $n$-dimensional hyperbolic space $\mathbb{H}^n$.
Given two points $x, y \in \mathbb{H}^n$, the hyperbolic distance $d_\mathbb{H}(x,y)$ is given by
\begin{equation}\label{eqn:hyperbolic distance}
d_\mathbb{H}(x,y) = \text{arccosh}(-\langle x,y \rangle)
\end{equation}
and is equivalent to the induced path-metric on $\mathbb{H}^n$.
\subsection*{Induced flat simplices}
Let $(\tau, g_\mathbb{H})$ be an $n$-dimensional hyperbolic simplex, and assume that it is isometrically embedded in $\mathbb{H}^n$.
Let $\tau = \langle v_1, \hdots, v_{n+1} \rangle$, and by abuse of notation associate $v_i$ with its image in $\mathbb{H}^n$.
Identifying $\mathbb{H}^n$ with the hyperboloid model described above, we can consider the convex hull of the vertices $v_1, \hdots, v_{n+1}$ in $\mathbb{R}^{n,1}$.
This yields an $n$-dimensional simplex which we will call $\sigma$.
Note that $\sigma$, endowed with the quadratic form inherited from $\mathbb{R}^{n,1}$, has curvature $0$.
\begin{remark}\label{rmk:non positive definite}
Note that this form may or may not be positive-definite.
For an easy example of where $(\tau,g_\mathbb{H})$ is a legitimate hyperbolic simplex but the quadratic form on $\sigma$ is not positive definite, consider the three points
\begin{equation*}
v_1 = (0,0,1) \qquad v_2 = (0,1,\sqrt{2}) \qquad v_3 = (0,2,\sqrt{5}).
\end{equation*}
These three points lie on a line in $\mathbb{H}^2$, but their convex hull $\sigma$ is a triangle in $\mathbb{R}^{2,1}$.
The plane containing $\sigma$ (the $yz$-plane) clearly has signature $(1,1)$.
Now, perturb the point $v_3$ on the hyperboloid so that the three points are no longer colinear in $\mathbb{H}^2$.
For a sufficiently small perturbation, this will provide a legitimate hyperbolic triangle where the quadratic form associated to the convex hull is not positive-definite.
\end{remark}
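This Remark is easy to verify numerically; the sketch below uses the perturbation $\varepsilon = 0.1$ (an arbitrary choice) and checks that all three points remain on the hyperboloid while the induced form on $\sigma$ has signature $(1,1)$.

```python
# Numeric check of the Remark: three points on the hyperboloid in R^{2,1},
# with v_3 perturbed slightly off the common line; the form induced on
# their convex hull sigma is not positive definite.
import numpy as np

def mink(u, v):  # Minkowski form of signature (2,1)
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

eps = 0.1  # arbitrary small perturbation of v_3
v1 = np.array([0.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, np.sqrt(2.0)])
v3 = np.array([eps, 2.0, np.sqrt(5.0 + eps**2)])
for v in (v1, v2, v3):
    assert abs(mink(v, v) + 1.0) < 1e-12   # all three lie on the hyperboloid

w1, w2 = v2 - v1, v3 - v1                  # edge vectors spanning sigma
G = np.array([[mink(w1, w1), mink(w1, w2)],
              [mink(w2, w1), mink(w2, w2)]])
eig = np.linalg.eigvalsh(G)
print(eig[0] < 0 < eig[1])   # True: signature (1,1), not positive definite
```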
In many of the formulas and arguments later in this paper, we will care about the $(n+1)$-dimensional simplex $\Sigma$ defined as follows.
Assume we have a hyperbolic simplex $(\tau, g_\mathbb{H})$ isometrically embedded in $\mathbb{H}^n$, and define $\sigma$ as above.
Then $\Sigma := \{ \vec{0} \} \vee \sigma$.
That is, $\Sigma = (v_0, v_1, \hdots, v_{n+1})$ where $v_0 = \vec{0}$ and $v_1, \hdots, v_{n+1}$ are the vertices of $\sigma$.
So $\Sigma$ is just the $(n+1)$-dimensional simplex in $\mathbb{R}^{n,1}$ obtained by combining the origin with the vertices of $\sigma$.
Using similar notation to Section \ref{section:Euclidean}, we can compute a simple formula for the matrix $Q_\Sigma$ associated to $\Sigma$.
Let $w_i = v_i - v_0 = v_i$.
The collection $( w_i )$ forms a basis for $\mathbb{R}^{n,1}$.
With respect to this basis, the $ij^{th}$ entry of $Q_\Sigma$ is given by
\begin{equation}\label{eqn:Qij}
q_{ij} = \langle w_i, w_j \rangle = \langle v_i, v_j \rangle = -\cosh(\gamma_{ij})
\end{equation}
where $\gamma_{ij} = d_\mathbb{H}(v_i, v_j)$ is the edge length of $\tau$ determined by $g_\mathbb{H}$.
Note that the last equality in equation \eqref{eqn:Qij} is obtained directly from equation \eqref{eqn:hyperbolic distance}, and that the diagonal entries of $Q_\Sigma$ are all $-1$.
Lastly, observe that the matrix $Q_\Sigma$ can be constructed from $(\tau,g_\mathbb{H})$ without ever needing to isometrically embed $\tau$ in $\mathbb{H}^n$.
\subsection*{Determining the realizability of hyperbolic simplices}
Let $\{ \gamma_{ij} \}_{i,j=1}^{n+1}$, with $\gamma_{ij} = \gamma_{ji}$, be a set of positive real numbers. The purpose of this Subsection is to establish when this set, interpreted as the edge lengths of $(\tau,g_\mathbb{H})$, determines a legitimate simplex in hyperbolic space.
\begin{theorem}\label{theorem : hyperbolic-realizability}
A set of $n(n+1)/2$ positive real numbers $\{\gamma_{ij}\}_{i,j=1, i < j}^{n+1}$ are the edge lengths of a non-degenerate hyperbolic simplex $(\tau,g_\mathbb{H})$ if and only if the matrix $Q_\Sigma$ defined by equation \eqref{eqn:Qij} has signature $(n,1)$.
\end{theorem}
\begin{proof}
If $(\tau, g_\mathbb{H})$ is a hyperbolic simplex, then from the discussion above it is clear that $Q_\Sigma$ will have signature $(n,1)$.
Conversely, assume that $Q_\Sigma$ has signature $(n,1)$. Let $( \alpha_i )_{i=1}^{n+1}$ be a basis for $\mathbb{R}^{n+1}$, and define a symmetric bilinear form $\langle, \rangle$ on $\mathbb{R}^{n+1}$ by
\begin{equation*}
\langle \alpha_i, \alpha_j \rangle = q_{ij}
\end{equation*}
for all $i, j$, and where $q_{ij}$ denotes the $ij^{th}$ entry of $Q_{\Sigma}$.
The matrix $Q_\Sigma$ is then the Gram matrix for $(\mathbb{R}^{n+1}, \langle, \rangle )$, and so the form $\langle , \rangle$ has signature $(n,1)$.
Therefore, $\mathbb{R}^{n+1}$ equipped with the form $\langle, \rangle$ is a model for Minkowski space $\mathbb{R}^{n,1}$.
The isometric embedding of $(\tau, g_\mathbb{H})$ into $(\mathbb{R}^{n+1}, \langle, \rangle)$ is just the map that sends $v_i$ to the terminal point of $\alpha_i$ for each $i$.
This map is a linear isometry by construction, and every vertex lies on the two-sheeted hyperboloid defined by the equation $\langle x, x \rangle = -1$.
The only remaining thing that needs to be checked is that each vertex is mapped to the same sheet of this hyperboloid.
But suppose the vertices $v_i$ and $v_j$ were mapped to opposite sheets of the hyperboloid.
Then the vector $v_i - v_j$ would be timelike (inside the light cone), and therefore we would have that
\begin{equation*}
\langle v_i -v_j, v_i - v_j \rangle < 0.
\end{equation*}
Expanding the left-hand side of this inequality gives
\begin{equation*}
\langle v_i -v_j, v_i - v_j \rangle = \langle v_i, v_i \rangle + \langle v_j, v_j \rangle - 2 \langle v_i, v_j \rangle = -1 -1 - 2q_{ij} = 2\cosh(\gamma_{ij}) - 2
\end{equation*}
which is strictly positive since $\gamma_{ij} > 0$, a contradiction. Hence all vertices are mapped to the same sheet.
\end{proof}
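As an illustration of Theorem \ref{theorem : hyperbolic-realizability}, the sketch below checks the signature of $Q_\Sigma$ for an equilateral triangle with all edge lengths equal to $1$ (an arbitrary example).

```python
# Realizability check for an equilateral hyperbolic triangle (all edge
# lengths 1): Q_Sigma with q_ij = -cosh(gamma_ij) and diagonal -1
# should have signature (2, 1).
import numpy as np

ell = 1.0
c = np.cosh(ell)
Q_sigma = -np.array([[1.0, c, c],
                     [c, 1.0, c],
                     [c, c, 1.0]])
eig = np.linalg.eigvalsh(Q_sigma)
signature = (int(np.sum(eig > 0)), int(np.sum(eig < 0)))
print(signature)   # (2, 1): the edge lengths are realizable
```

(The eigenvalues here are $-1-2\cosh(1)$ once and $\cosh(1)-1 > 0$ twice, so the signature is $(2,1)$ for any common edge length $\ell > 0$.)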
\section{Barycentric Coordinates in Simplices of Constant Curvature}\label{section:coordinates}
For this Section let $(\tau,g)$ be a non-degenerate simplex with some constant curvature $\kappa$ possibly different from 0 or -1.
Linearly isometrically embed $\tau$ into the appropriate model space ($\mathbb{R}^{n,1}$ when $\kappa < 0$, and $\mathbb{R}^{n+1}$ with the standard inner product when $\kappa > 0$), and by abuse of notation let $v_i$ denote the image of $v_i$ under this realization.
As before, let $\sigma$ be the $n$-dimensional Euclidean simplex determined by the convex hull of $(v_1, \cdots, v_{n+1})$ in $\mathbb{R}^{n,1}$.
Let $p \in \sigma$ be given by the barycentric coordinates $(\alpha_1, \cdots, \alpha_{n+1})$, where $\sum_{i=1}^{n+1} \alpha_i = 1$.
The purpose of this Section is to define the corresponding point $\tilde{p}$ in $\tau$.
The immediate idea is to project $p$ onto $\tau$ from the origin.
A priori, this point $\tilde{p}$ depends on how $\tau$ was embedded in the model space.
In what follows we will give a formula for how to compute $\tilde{p}$ using only the barycentric coordinates $(\alpha_i)_{i=1}^{n+1}$, the edge lengths $(\gamma_{ij})_{i,j=1}^{n+1}$, and the vectors $(v_i)_{i=1}^{n+1}$, therefore proving that $\tilde{p}$ is well-defined with respect to the location of the vertices $(v_i)_{i=1}^{n+1}$ in the model space.
Define $\tilde{p}$ by
\begin{equation}\label{eqn:curved-barycentric}
\tilde{p} = \begin{cases}
\frac{p}{\sqrt{\kappa} \cdot \sqrt{\langle p, p \rangle }} & \qquad \text{if } \kappa > 0 \\
p & \qquad \text{if } \kappa = 0 \\
\frac{p}{\sqrt{-\kappa} \cdot \sqrt{-\langle p, p \rangle }} & \qquad \text{if } \kappa < 0
\end{cases}
\end{equation}
Note that $\tilde{p}$ is in fact the projection (from the origin) of $p$ onto the model space with constant curvature $\kappa$.
We see from the above definition of $\tilde{p}$ that it depends on $p$ and $\langle p, p \rangle$.
Of course, $p$ is completely determined by its barycentric coordinates and the location of the vertices of $\tau$.
To see that the same is true of $\langle p, p \rangle$, we compute
\begin{equation}\label{eqn:p-dot-p}
\langle p, p \rangle = \left\langle \sum_{i=1}^{n+1} \alpha_i v_i, \sum_{j=1}^{n+1} \alpha_j v_j \right\rangle = \sum_{i, j=1}^{n+1} \alpha_i \alpha_j \langle v_i, v_j \rangle
\end{equation}
where
\[ \langle v_i, v_j \rangle = \begin{cases}
\frac{1}{\kappa} \cos( \sqrt{\kappa}\, \gamma_{ij} ) & \qquad \text{if } \kappa > 0 \\
\frac{1}{2} \left( \gamma_{i,n+1}^2 + \gamma_{j,n+1}^2 - \gamma_{ij}^2 \right) & \qquad \text{if } \kappa = 0 \\
\frac{1}{\kappa} \cosh( \sqrt{-\kappa}\, \gamma_{ij} ) & \qquad \text{if } \kappa < 0
\end{cases}
\]
Thus, the value of $\langle p, p \rangle$ depends only on the barycentric coordinates of $p$ and the edge lengths of $\tau$ as defined by $g$.
Then since $p = \sum_{i=1}^{n+1} \alpha_i v_i$ depends only on its barycentric coordinates and the location of the vertices $(v_i)_{i=1}^{n+1}$, we see that the definition of $\tilde{p} \in \tau$ above is well-defined with respect to the location of the vertices of $\tau$ and the metric $g$. Note that when referring to a specific point in $\tau$, it is generally simpler to instead refer to the point's corresponding point in $\sigma$, so that will be the convention used in the coming sections, including the example.
\section{Distances in hyperbolic simplices}\label{section:distances}
Let $(\tau, g_\mathbb{H})$ be a hyperbolic simplex, and let $x, y \in \tau$ with barycentric coordinates $x = (x_1, \hdots, x_{n+1})$ and $y = (y_1, \hdots, y_{n+1})$.
The purpose of this Section is to give a simple algorithm to compute $d_\mathbb{H}(x,y)$, the hyperbolic distance between the points $x$ and $y$, using only the edge lengths $(\gamma_{ij})$ associated to $g$ and the barycentric coordinates of $x$ and $y$.
First, define
\begin{equation*}
\tilde{x} = \frac{x}{\sqrt{-\langle x, x \rangle}} \qquad \tilde{y} = \frac{y}{\sqrt{-\langle y, y \rangle}}
\end{equation*}
where $\langle x, x \rangle$ and $\langle y, y \rangle$ can be easily calculated using the matrix $Q_{\Sigma}$ as described in equation \eqref{eqn:p-dot-p}.
Note that, if $\tau$ were linearly isometrically embedded in the hyperboloid model, then $x$ and $y$ would lie on the convex hull $\sigma$ while $\tilde{x}$ and $\tilde{y}$ would denote the corresponding projections of $x$ and $y$ onto the hyperboloid (from the origin).
From equation \eqref{eqn:hyperbolic distance} we know that the hyperbolic distance from $\tilde{x}$ to $\tilde{y}$ is $\text{arccosh}(-\langle \tilde{x}, \tilde{y} \rangle )$.
This quantity is precisely the hyperbolic distance $d_{\mathbb{H}}(x,y)$ between the corresponding points of $\tau$.
We formally state this in the following Theorem.
\begin{theorem}\label{thm:hyp dist}
Let $(\tau, g_\mathbb{H})$ be a hyperbolic simplex, and let $x, y \in \tau$ with barycentric coordinates $x = (x_1, \hdots, x_{n+1})$ and $y = (y_1, \hdots, y_{n+1})$.
Then the hyperbolic distance between $x$ and $y$ is given by
\begin{equation}\label{eqn: hyp dist formula}
d_\mathbb{H} (x,y) = \text{arccosh} (-\langle \tilde{x}, \tilde{y} \rangle) = \text{arccosh} \left( \frac{ -\langle x,y \rangle }{\sqrt{\langle x,x \rangle \cdot \langle y,y \rangle }} \right).
\end{equation}
\end{theorem}
Finally, note that the inner products in equation \eqref{eqn: hyp dist formula} are very easy to calculate using the matrix $Q_\Sigma$.
If one defines
\begin{equation*}
\vec{x} = \begin{pmatrix}
x_1 \\ x_2 \\ \vdots \\ x_{n+1}
\end{pmatrix}
\qquad
\text{and}
\qquad
\vec{y} = \begin{pmatrix}
y_1 \\ y_2 \\ \vdots \\ y_{n+1}
\end{pmatrix}
\end{equation*}
then
\begin{equation*}
\langle x, y \rangle = \vec{x}^T Q_{\Sigma} \vec{y} \qquad \langle x,x \rangle = \vec{x}^T Q_\Sigma \vec{x} \qquad \langle y, y \rangle = \vec{y}^T Q_{\Sigma} \vec{y}.
\end{equation*}
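As a consistency check of Theorem \ref{thm:hyp dist}, the sketch below reads edge lengths off an explicit hyperboloid embedding (the three vertices are an arbitrary choice), rebuilds $Q_\Sigma$, and confirms that the synthetic distance between two interior points agrees with the distance computed directly in the model.

```python
# Cross-check of the distance formula: build Q_Sigma from edge lengths read
# off an explicit hyperboloid embedding in R^{2,1}, then compare the
# synthetic distance with the distance computed directly in the model.
import numpy as np

def mink(u, v):  # Minkowski form on R^{2,1}
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

V = [np.array([0.0, 0.0, 1.0]),
     np.array([np.sinh(1.0), 0.0, np.cosh(1.0)]),
     np.array([0.0, np.sinh(1.0), np.cosh(1.0)])]  # an arbitrary triangle

gamma = [[0.0 if i == j else np.arccosh(-mink(V[i], V[j]))
          for j in range(3)] for i in range(3)]
Q = np.array([[-np.cosh(gamma[i][j]) for j in range(3)] for i in range(3)])

x = np.array([1/3, 1/3, 1/3])       # barycentric coordinates of two points
y = np.array([0.5, 0.25, 0.25])
d_synth = np.arccosh(-(x @ Q @ y) / np.sqrt((x @ Q @ x) * (y @ Q @ y)))

p = sum(a * v for a, v in zip(x, V))   # the same points in the model:
q = sum(b * v for b, v in zip(y, V))   # project hull points to the hyperboloid
p = p / np.sqrt(-mink(p, p))
q = q / np.sqrt(-mink(q, q))
d_direct = np.arccosh(-mink(p, q))
print(abs(d_synth - d_direct) < 1e-9)   # True
```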
\begin{comment}
Let $x,y$ be two points on the convex hull $\sigma_\mathbb{E}$ equipped with the standard barycentric coordinates, and let $\tilde{x}, \tilde{y}$ be $x$'s and $y$'s respective points in $\tau$ defined as in Section \ref{section:coordinates}. As before let $\Sigma$ denote the Euclidean $n+1$-simplex including the origin as $v_0$. Note that as above, we have
\[
Q_\Sigma =
\begin{pmatrix}
-1 & \langle v_1, v_2 \rangle & \cdots & \langle v_1, v_n \rangle \\
\\
\langle v_1, v_2 \rangle & -1 & \cdots & \langle v_2, v_n \rangle \\
\vdots &\vdots & \ddots & \vdots\\
\langle v_1, v_n \rangle & \langle v_2, v_n \rangle & \cdots & -1
\end{pmatrix} = \begin{pmatrix}
-1 & -\cosh(\gamma_{12}) & \cdots & -\cosh(\gamma_{1n}) \\
\\
-\cosh(\gamma_{12}) & -1 & \cdots & -\cosh(\gamma_{2n}) \\
\vdots &\vdots & \ddots & \vdots\\
-\cosh(\gamma_{1n}) & -\cosh(\gamma_{2n}) & \cdots & -1
\end{pmatrix},
\]
which is equivalent to the edge matrix $M$ discussed in \cite{Karliga}. Consider the squared Euclidean distance between $\tilde{x}$ and $\tilde{y}$:
\begin{equation}\label{diff prod}
d_\mathbb{E}^2(\tilde{x}, \tilde{y}) = \langle \tilde{x}-\tilde{y}, \tilde{x}-\tilde{y} \rangle = \langle \tilde{x}, \tilde{x} \rangle + \langle \tilde{y}, \tilde{y} \rangle -2 \langle \tilde{x}, \tilde{y} \rangle
\end{equation}
\noindent For the hyperbolic case, equation \ref{diff prod} yields: $d_\mathbb{E}^2(\tilde{x}, \tilde{y})=2\cosh(d_H(\tilde{x}, \tilde{y})) - 2$, so we get:
\begin{equation}\label{eqn:hyperbolic-dist}
d_\mathbb{H}(\tilde{x},\tilde{y}) = \text{arccosh}\bigg( \frac{2 + d_\mathbb{E}^2(\tilde{x},\tilde{y})}{2} \bigg) = \text{arccosh}\bigg( \frac{2+[\tilde{x} - \tilde{y}]^T Q_\Sigma [\tilde{x} - \tilde{y}]}{2} \bigg)
\end{equation}
The equivalent calculations for an arbitrary $\kappa$ is discussed in Section \ref{section:spherical}.
\end{comment}
\section{Orthogonal projection in Euclidean Simplices}\label{section:Euclidean proj}
For the remainder of this paper, we denote the sub-simplex generated by removing $v_i$ from $(\tau,g)$ by $(\tau_i,g)$. Likewise, the sub-simplex generated by removing $v_i$ and $v_j$ is denoted by $\tau_{ij}$, etc. The natural question arises as to the barycentric coordinates of the orthogonal projections of some point $p \in \tau$ onto one of these sub-simplices in $(\tau,g_\mathbb{E})$ and $(\tau,g_\mathbb{H})$. Define the matrix $Q_{\tau_{n+1}}$ as the form from \cite{Minemyer3} for $(\tau_{n+1},g_\mathbb{E})$ (defined by equation \eqref{eqn:Q}). This Section will be focusing on $(\tau,g_\mathbb{E})$, and the following Section will discuss $(\tau,g_\mathbb{H})$.
We first note that in both cases we need only consider the projection of a vertex onto a sub-simplex, as for any other point $x \in \tau$, we may subdivide $\tau$ in a manner which makes $x$ into a vertex opposite the face we are projecting onto. Similarly, for projection onto a sub-simplex with $n-2$ or fewer vertices, we need only know how to project onto an $(n-1)$-face, as we may define a new simplex by removing the vertices that are not being projected and are not in the sub-simplex. The question, then, becomes one of optimization: we must minimize the distance from our projective vertex to the $(n-1)$-face. For notational purposes, we relabel our vertices so that we are always projecting $v_{n+1}$ onto the face $\tau_{n+1}$.
Let $p = \text{proj}_{\tau_{n+1}}(v_{n+1})$.
To prove our formula in Theorem \ref{thm:Euclid Projection} for the barycentric coordinates of $p$, we first need the following Lemma.
Recall our notation that for a square matrix $Q$, we denote its determinant by $|Q|$.
\begin{lemma}\label{determinantEquality}
$|Q_\tau|=d^2_\mathbb{E}(v_{n+1},p)|Q_{\tau_{n+1}}|$.
\end{lemma}
\begin{proof}
A well-known result states that $\text{Vol}(\tau)=\frac{\text{Vol}(\tau_{n+1})\,d_\mathbb{E}(v_{n+1},p)}{n}$, while by Theorem 4 of \cite{Minemyer3}, \\$\text{Vol}(\tau) = \frac{1}{n!}\sqrt{|Q_\tau|}$ and $\text{Vol}(\tau_{n+1})=\frac{1}{(n-1)!}\sqrt{|Q_{\tau_{n+1}}|}$. So \[\text{Vol}(\tau) = \frac{\text{Vol}(\tau_{n+1})d_\mathbb{E}(v_{n+1},p)}{n} \iff \frac{1}{n!}\sqrt{|Q_\tau|}=\frac{\sqrt{|Q_{\tau_{n+1}}|}\,d_\mathbb{E}(v_{n+1},p)}{n!}\]
Solving for $|Q_\tau|$ then, we obtain
\[
|Q_\tau|=|Q_{\tau_{n+1}}|d^2_\mathbb{E}(v_{n+1},p),
\]thus completing the proof.
\end{proof}
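As a quick numeric check of the Lemma, consider a 3-4-5 right triangle (an example input): $Q_\tau$ has determinant $144$, the face $\langle v_1, v_2 \rangle$ has $|Q_{\tau_3}| = 25$, and the recovered height $12/5$ is the classical altitude onto the hypotenuse.

```python
# Numeric check of the Lemma on the 3-4-5 right triangle: |Q_tau| should
# equal d^2(v_3, p) * |Q_{tau_3}|, where p is the foot of the altitude
# from v_3 onto the hypotenuse <v_1, v_2>.
import numpy as np

# Q for edges gamma_13 = 3, gamma_23 = 4, gamma_12 = 5 (base vertex v_3)
Q_tau = np.array([[9.0, 0.0],
                  [0.0, 16.0]])
Q_face = np.array([[25.0]])   # 1x1 form of the face <v_1, v_2>
height = np.sqrt(np.linalg.det(Q_tau) / np.linalg.det(Q_face))
print(height)   # approximately 2.4 = 12/5, the altitude onto the hypotenuse
```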
\begin{theorem}\label{thm:Euclid Projection}
Let $(\tau,g_\mathbb{E})$ be a Euclidean $n$-simplex with $\tau = ( v_1,v_2,\dots,v_{n+1} )$. Let $\tau_{n+1}$ be the $(n-1)$-face of $\tau$ with vertices $( v_1,\dots,v_n )$. Then the barycentric coordinates of the orthogonal projection $p$ of $v_{n+1}$ onto $\tau_{n+1}$ are given by:
\[
\alpha_i = \frac{\sum_{j=1}^n (-1)^{i+j}Q_{ij}} {|Q_{\tau_{n+1}}|};\; 1\leq i \leq n.
\]
where $p = (\alpha_1, \dots, \alpha_{n},0)$, $Q = Q_\tau$, and $Q_{ij}$ denotes the $ij^{th}$ minor of $Q$.
\end{theorem}
\begin{proof}
Linearly isometrically embed the Euclidean simplex $(\tau, g_\mathbb{E})$ into $\mathbb{R}^n$ in some way, and by abuse of notation identify each vertex $v_i$ with its image under this isometry.
Let $w_i = v_i - v_{n+1}$.
Then the collection $(w_1, \dots, w_n)$ forms a basis for $\mathbb{R}^n$.
We proceed via the method of Lagrange multipliers, and we seek to minimize $d_\mathbb{E}(v_{n+1},p)$ subject to the constraint $\alpha_1 + \alpha_2 + \alpha_3 + \cdots + \alpha_n=1$. Let $\Vec{\alpha} = \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_n \end{bmatrix}^T$. Then $d_\mathbb{E}^2(v_{n+1},p) = \Vec{\alpha}^TQ\Vec{\alpha}$ by Equation \eqref{euclid-dist}. Calculating the distance function, then, we obtain that \[d_\mathbb{E}^2(v_{n+1},p)=\sum\limits_{i,j=1}^n q_{ij}\alpha_i\alpha_j,\] (where $q_{ij}$ denotes the $ij^{th}$ entry of $Q$) noting here that due to symmetry there are exactly two copies of each term where $i\neq j$. We now define our Lagrangian function: \[\mathscr{L}(\alpha_1,\alpha_2,\cdots,\alpha_n,\lambda) = \sum\limits_{i,j=1}^n q_{ij}\alpha_i\alpha_j - \lambda\left(\left(\sum\limits_{i=1}^n\alpha_i\right) -1\right)\]
We now seek to optimize $\mathscr{L}$. Consider just one $\frac{\partial}{\partial\alpha_i}\mathscr{L}$. The nonconstant components of $\mathscr{L}$ with respect to $\alpha_i$ are \[2q_{i1}\alpha_1\alpha_i + 2q_{i2}\alpha_2\alpha_i + \cdots + q_{ii}\alpha_i^2 + \cdots + 2q_{in}\alpha_n\alpha_i - \lambda\alpha_i,\] so
\[\frac{\partial}{\partial\alpha_i}\mathscr{L} = \left(\sum\limits_{j=1}^n 2q_{ij}\alpha_j\right) - \lambda \text{, and thus, the gradient of $\mathscr{L}$ is }
\nabla\mathscr{L}=\begin{pmatrix}
\left(\sum\limits_{j=1}^n 2q_{1j}\alpha_j\right) - \lambda\\
\left(\sum\limits_{j=1}^n 2q_{2j}\alpha_j\right) - \lambda\\
\vdots\\
\left(\sum\limits_{j=1}^n 2q_{nj}\alpha_j\right) - \lambda
\end{pmatrix}:=0.
\]Adding the $n\times 1$ column vector with every entry equal to $\lambda$ to both sides, we obtain
\begin{equation}\label{lambda_equation}
\begin{pmatrix}
\sum\limits_{j=1}^n 2q_{1j}\alpha_j\\
\sum\limits_{j=1}^n 2q_{2j}\alpha_j\\
\vdots\\
\sum\limits_{j=1}^n 2q_{nj}\alpha_j
\end{pmatrix}=\begin{pmatrix}\lambda \\ \lambda \\ \vdots \\ \lambda\end{pmatrix}=\vec{\lambda}.\end{equation}
Factoring out $\Vec{\alpha}$, we obtain
\[\begin{pmatrix}
2q_{11} & 2q_{12} & \cdots & 2q_{1n} \\
2q_{12} & 2q_{22} & \cdots & 2q_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
2q_{1n} & \cdots & \cdots & 2q_{nn} \\
\end{pmatrix}\vec{\alpha}=\begin{pmatrix}\lambda \\ \lambda \\ \vdots \\ \lambda\end{pmatrix}.\] But clearly, this $n\times n$ matrix is $2Q$. Since $\tau$ is a nondegenerate simplex, $Q$ is positive definite, and is thus invertible. Left multiplying by $Q^{-1}$, we obtain: $2\vec{\alpha}=Q^{-1}\vec{\lambda}.$ But $Q^{-1}=\frac{1}{|Q|}C$, where $C$ is the cofactor matrix of $Q$. Thus,
\[
\vec{\alpha}=\frac{1}{2|Q|}C\vec{\lambda} = \frac{\lambda}{2|Q|}\begin{pmatrix}
\sum_{i=1}^n (-1)^{i+1}Q_{1i}\\
\sum_{i=1}^n (-1)^{i+2}Q_{2i}\\
\vdots\\
\sum_{i=1}^n (-1)^{i+n}Q_{ni}
\end{pmatrix}
\]Then by Lemma \ref{determinantEquality}, we obtain
\[\vec{\alpha}=\frac{\lambda}{2|Q_{\tau_{n+1}}|d^{2}_\mathbb{E}(v_{n+1},p)}\begin{pmatrix}
\sum_{i=1}^n (-1)^{i+1}Q_{1i}\\
\sum_{i=1}^n (-1)^{i+2}Q_{2i}\\
\vdots\\
\sum_{i=1}^n (-1)^{i+n}Q_{ni}
\end{pmatrix}.\]
Now let us inspect Equation \eqref{lambda_equation}.
Left multiplying both sides by $\vec{\alpha}^T$, we obtain $2d^{2}_\mathbb{E}(v_{n+1},p) = \lambda\left(\sum_{i=1}^n \alpha_i\right)$.
But $\sum_{i=1}^n\alpha_i = 1$ by the constraint, so $\lambda = 2d_\mathbb{E}^2(v_{n+1},p)$. Thus, we finally have
\[
\vec{\alpha}=\begin{pmatrix}
\frac{\sum_{i=1}^n (-1)^{i+1}Q_{1i}}{|Q_{\tau_{n+1}}|}\\
\frac{\sum_{i=1}^n (-1)^{i+2}Q_{2i}}{|Q_{\tau_{n+1}}|}\\
\vdots\\
\frac{\sum_{i=1}^n (-1)^{i+n}Q_{ni}}{|Q_{\tau_{n+1}}|}
\end{pmatrix},\]as desired.
\end{proof}
\begin{cor}\label{sum-determinant equality}
$\sum\limits_{i,j=1}^n (-1)^{i+j}Q_{ij} = |Q_{\tau_{n+1}}|$.
\end{cor}
\begin{proof}
By Theorem \ref{thm:Euclid Projection} and the definition of barycentric coordinates in Euclidean simplices, we have:
\[ 1 = \sum\limits_{i=1}^n \alpha_i =\sum\limits_{i=1}^n \frac{\sum_{j=1}^n (-1)^{i+j}Q_{ij}} {\text{det}(Q_{\tau_{n+1}})}= \frac{1}{\text{det}(Q_{\tau_{n+1}})}\sum\limits_{i,j=1}^n (-1)^{i+j}Q_{ij}.\] Thus, we have \begin{equation}\label{minor-det eqn}
\sum\limits_{i,j=1}^n (-1)^{i+j}Q_{ij} = \text{det}(Q_{\tau_{n+1}}).
\end{equation}
\end{proof}
\begin{cor}
The barycentric coordinates of the orthogonal projection $p=(\alpha_1,\cdots,\alpha_n,0)$ of $v_{n+1}$ onto $\tau_{n+1}$ are given by\[\alpha_i = \frac{\sum_{j=1}^{n}(-1)^{i+j}Q_{ij}}{\sum_{j,k=1}^{n}(-1)^{j+k}Q_{jk}}.\]
\end{cor}
\begin{proof}
Immediate from Theorem \ref{thm:Euclid Projection} and Corollary \ref{sum-determinant equality}.
\end{proof}
\begin{cor}
$\text{Vol}(\tau_{n+1}) = \frac{1}{(n-1)!}\sqrt{\sum\limits_{i,j=1}^n (-1)^{i+j}Q_{ij}}$.
\end{cor}
\begin{proof}
Theorem 4 of \cite{Minemyer3} shows that $\text{Vol}(\tau_{n+1})=\frac{1}{(n-1)!}\sqrt{|Q_{\tau_{n+1}}|}$. Substituting Equation \eqref{minor-det eqn} for the radical, we obtain
\[ \text{Vol}(\tau_{n+1}) = \frac{1}{(n-1)!}\sqrt{\sum\limits_{i,j=1}^n (-1)^{i+j}Q_{ij}}, \] thus completing the proof.
\end{proof}
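The minor formula of Theorem \ref{thm:Euclid Projection}, in the denominator-free form of the corollary above, is easy to implement numerically. The following is a minimal sketch in plain Python (the helper names det and project_vertex are ours, not from the paper): it computes the barycentric coordinates of the projection of a chosen vertex directly from the matrix of edge lengths $\gamma_{ij}$, using only minors of the Gram matrix $Q$.

```python
import math

def det(M):
    # Laplace expansion along the first row; adequate for small matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def project_vertex(gamma, apex):
    # Barycentric coordinates (on the opposite face) of the orthogonal
    # projection of vertex `apex`, computed from edge lengths alone.
    idx = [k for k in range(len(gamma)) if k != apex]
    # Gram matrix of w_i = v_i - v_apex:
    # q_ij = (gamma[apex][i]^2 + gamma[apex][j]^2 - gamma[i][j]^2) / 2
    Q = [[(gamma[apex][i] ** 2 + gamma[apex][j] ** 2 - gamma[i][j] ** 2) / 2
          for j in idx] for i in idx]
    n = len(Q)
    minors = [[det([[Q[r][c] for c in range(n) if c != j]
                    for r in range(n) if r != i])
               for j in range(n)] for i in range(n)]
    den = sum((-1) ** (i + j) * minors[i][j]
              for i in range(n) for j in range(n))
    return [sum((-1) ** (i + j) * minors[i][j] for j in range(n)) / den
            for i in range(n)]

# Sanity check: in a unit equilateral triangle, each vertex projects
# onto the midpoint of the opposite edge.
print(project_vertex([[0, 1, 1], [1, 0, 1], [1, 1, 0]], 0))  # [0.5, 0.5]
```

A positive coordinate vector indicates, as noted later in the example, that the vertex projects inside the opposite face.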
\section{Orthogonal Projection in Hyperbolic Simplices}\label{section:hyperbolic proj}
We now turn our attention to orthogonal projection in hyperbolic simplices.
In this Section we project the vertex $v_1$ onto the face $\tau_1$, as opposed to the last Section, where we projected $v_{n+1}$ onto $\tau_{n+1}$.
The reason for this change in notation is purely to make labeling subscripts easier.
A first thought may be to project within the convex hull $\sigma$ of the vertices of $\tau$, and then project this point onto the hyperboloid. But, in general, this does not work. The reason can be found in Remark \ref{rmk:non positive definite}: the quadratic form on $\mathbb{R}^{n,1}$, when restricted to the hyperplane containing the vertices of $\tau$, may not be positive definite, and in that case the procedure of Theorem \ref{thm:Euclid Projection} may fail.
\begin{theorem}\label{theorem:hyperbolic-projection}
Let $(\tau,g_\mathbb{H})$ be a hyperbolic $n$-simplex with vertices $( v_1,v_2,\cdots,v_{n+1} )$. Then the barycentric coordinates of the orthogonal projection $p \in \tau$ of $v_1$ onto $\tau_1$ are given by
\begin{equation}\label{eqn:hyperbolic proj coords}
\alpha_i = \frac{(-1)^{i+1}Q^\Sigma_{1i}}{\sum_{j=2}^{n+1}(-1)^{1+j}Q_{1j}^\Sigma}.
\end{equation}
where $2 \leq i \leq n+1$, $p = (0,\alpha_2,\alpha_3,\cdots,\alpha_{n+1})$, $Q_\Sigma$ is defined as in equation \eqref{eqn:Qij}, and $Q_{ij}^\Sigma$ denotes the $ij^{th}$ minor of $Q_{\Sigma}$.
\end{theorem}
\begin{remark}
Suppose the hyperbolic simplex $(\tau, g_\mathbb{H})$ is linearly isometrically embedded in the hyperboloid model for $\mathbb{H}^n$. The point $p$ described in Theorem \ref{theorem:hyperbolic-projection} would lie in the convex hull $\sigma$ of $\tau$ (it would actually lie on the convex hull of the points $(v_2, \dots, v_{n+1})$).
To find the actual point $\tilde{p}$ that is the orthogonal projection of $v_1$ onto $\tau_1$, you would need to project $p$ onto the hyperboloid from the origin. To do this, recall that $\tilde{p} = p/\sqrt{- \langle p, p \rangle }$.
A formula for the components of $\tilde{p}$ is given by
\[\tilde{p}=(0,\tilde{\alpha}_2,\tilde{\alpha}_3,\cdots,\tilde{\alpha}_{n+1}),\text{ where }\tilde{\alpha}_i = \frac{(-1)^{i+1}Q^\Sigma_{1i}}{\sqrt{\sum_{j,k=2}^{n+1}(-1)^{j+k}Q^\Sigma_{1j}Q^\Sigma_{1k}(\cosh(\gamma_{jk}))}}.\]
In practice though, it is easier to calculate $\alpha_i$ using equation \eqref{eqn:hyperbolic proj coords}, calculating $\sqrt{- \langle p , p \rangle}$ using $Q_\Sigma$, and then dividing.
\end{remark}
\begin{remark}
Note that the formula for calculating the orthogonal projection in Theorem \ref{thm:Euclid Projection} uses the $n \times n$ matrix $Q_\tau$ defined in equation \eqref{eqn:Q}, whereas the formula in Theorem \ref{theorem:hyperbolic-projection} uses the $(n+1) \times (n+1)$ matrix $Q_\Sigma$ defined in \eqref{eqn:Qij}.
This difference is why it is notationally easier to project $v_{n+1}$ onto $\tau_{n+1}$ in Theorem \ref{thm:Euclid Projection} and $v_1$ onto $\tau_1$ in Theorem \ref{theorem:hyperbolic-projection}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{theorem:hyperbolic-projection}]
Linearly isometrically embed the hyperbolic simplex $(\tau, g_\mathbb{H})$ into the hyperboloid model for $\mathbb{H}^n$.
By abuse of notation, identify each vertex $v_i$ with its image in the hyperboloid.
Let $\sigma$ denote the convex hull of the vertices $(v_1, \dots, v_{n+1})$, and let $\Sigma$ be the $(n+1)$-simplex obtained as the convex hull of $(v_0, v_1, \dots, v_{n+1})$ where $v_0$ is the origin in $\mathbb{R}^{n,1}$.
Just as in the proof of Theorem \ref{thm:Euclid Projection} we proceed via the method of Lagrange Multipliers. Let $p = \alpha_2v_2 + \alpha_3v_3 + \cdots + \alpha_{n+1}v_{n+1}$ be the point on $\sigma$ corresponding to the projection $\tilde{p}$ of $v_1$ onto $\tau_1$. Then $\tilde{p} = \frac{p}{\sqrt{-\langle p,p \rangle}}$ by Equation \eqref{eqn:curved-barycentric}.
The hyperbolic distance between $v_1$ and $\tilde{p}$ is given by
\begin{equation*}
d_\mathbb{H}(v_1, \tilde{p}) = \text{arccosh}(- \langle v_1, \tilde{p} \rangle).
\end{equation*}
Since $\text{arccosh}()$ is increasing for arguments greater than $1$, our goal is to maximize $\langle v_1,\tilde{p}\rangle$ subject to the constraint $\alpha_2+\alpha_3+\cdots + \alpha_{n+1}=1$.
Define our Lagrangian function $\mathscr{L}(\alpha_2,\cdots,\alpha_{n+1},\lambda):=\langle v_1,\tilde{p} \rangle -\lambda(\alpha_2+\dots+\alpha_{n+1}-1)$. Consider just one $\frac{\partial}{\partial \alpha_i}\mathscr{L}:$
\[
\frac{\partial}{\partial \alpha_i}\mathscr{L} = \langle v_1,\frac{\partial}{\partial \alpha_i}\tilde{p}\rangle - \lambda = \left\langle v_1, \frac{-1}{2}(-\langle p,p\rangle)^{-3/2}(-2\langle p,v_i\rangle)p + (-\langle p,p \rangle)^{-1/2}v_i \right\rangle
-\lambda\]
\[
= \frac{\langle p,v_i\rangle}{(-\langle p,p \rangle)^{3/2}}\langle v_1,p \rangle + \frac{\langle v_1,v_i \rangle}{(-\langle p,p\rangle)^{1/2}} - \lambda := 0
\]
Adding $\lambda$ to both sides and multiplying through by $(-\langle p,p \rangle)^{1/2}$, we obtain
\[
\frac{\langle p,v_i\rangle}{-\langle p,p \rangle}\langle v_1,p \rangle + \langle v_1,v_i \rangle = \lambda(-\langle p,p \rangle)^{1/2}
\]
And thus our system of equations can be written as
\begin{align}
\frac{\langle p,v_2\rangle}{-\langle p,p \rangle}\langle v_1,p \rangle + \langle v_1,v_2 \rangle &= \lambda(-\langle p,p \rangle)^{1/2} \label{eqn:2} \\
\frac{\langle p,v_3\rangle}{-\langle p,p \rangle}\langle v_1,p \rangle + \langle v_1,v_3 \rangle &= \lambda(-\langle p,p \rangle)^{1/2} \label{eqn:3} \\
\vdots \notag \\
\frac{\langle p,v_{n+1}\rangle}{-\langle p,p \rangle}\langle v_1,p \rangle + \langle v_1,v_{n+1} \rangle &= \lambda(-\langle p,p \rangle)^{1/2} \label{eqn:n+1}
\end{align}
The first step to solving this system of equations is to show that $\lambda = 0$.
To do this, we take $\alpha_2 \eqref{eqn:2} + \alpha_3 \eqref{eqn:3} + \dots + \alpha_{n+1} \eqref{eqn:n+1}$.
Recalling the constraint $\sum_{i=2}^{n+1}\alpha_i=1$, we obtain:
\[
\text{Right-Hand Side:} \quad \sum_{i=2}^{n+1}\alpha_i\lambda(-\langle p,p\rangle)^{1/2}=\lambda(-\langle p,p\rangle)^{1/2}
\]\[
\text{Left-Hand Side:} \quad \sum_{i=2}^{n+1}\alpha_i\left(\frac{\langle p,v_i\rangle}{-\langle p,p \rangle}\langle v_1,p \rangle + \langle v_1,v_i \rangle\right)\]\[ = \sum_{i=2}^{n+1}\left(\frac{\langle p,\alpha_iv_i \rangle}{-\langle p,p \rangle}\langle v_1,p\rangle + \langle v_1, \alpha_iv_i\rangle\right) = \frac{\langle p,p\rangle}{-\langle p,p\rangle}\langle v_1,p\rangle+\langle v_1,p\rangle = 0.
\] Thus, since $(-\langle p,p \rangle)^{1/2}\neq 0$, we must have $\lambda = 0$. Our system of equations becomes
\[
\begin{pmatrix}
\frac{\langle p,v_2\rangle}{-\langle p,p \rangle}\langle v_1,p \rangle + \langle v_1,v_2 \rangle\\
\frac{\langle p,v_3\rangle}{-\langle p,p \rangle}\langle v_1,p \rangle + \langle v_1,v_3 \rangle\\
\vdots\\
\frac{\langle p,v_{n+1}\rangle}{-\langle p,p \rangle}\langle v_1,p \rangle + \langle v_1,v_{n+1} \rangle
\end{pmatrix} = \vec{0}
\] But $-\langle p,p \rangle = \left(\sqrt{-\langle p,p \rangle}\right)^2$, and so we may distribute up into our inner products to obtain
\[\begin{pmatrix}
\langle \tilde{p},v_2 \rangle \langle v_1,\tilde{p} \rangle + \langle v_1,v_2 \rangle\\
\langle \tilde{p},v_3 \rangle \langle v_1,\tilde{p} \rangle + \langle v_1,v_3 \rangle\\
\vdots\\
\langle \tilde{p},v_{n+1} \rangle \langle v_1,\tilde{p} \rangle + \langle v_1,v_{n+1} \rangle
\end{pmatrix} = \vec{0}
\]
Now we may rearrange using the linearity of the bilinear form to achieve
\[\begin{pmatrix}
\langle \langle v_1,\tilde{p}\rangle \tilde{p} + v_1, v_2 \rangle\\
\langle \langle v_1,\tilde{p}\rangle \tilde{p} + v_1, v_3 \rangle\\
\vdots\\
\langle \langle v_1,\tilde{p}\rangle \tilde{p} + v_1, v_{n+1} \rangle
\end{pmatrix}=\vec{0}
\]
Therefore we must have that $\langle v_1,\tilde{p}\rangle \tilde{p} + v_1 \in \bigcap_{i=2}^{n+1}v_i^\perp$.
This intersection is one-dimensional, and is spanned by the first column of $Q_{\Sigma}^{-1}$ (for a proof, see Lemma \ref{lemma:orth-complement} below).
Since $\tilde{p}$ has a $v_1$ component of 0, we have
\begin{comment}
Note that, due to the linear independence of $(v_2, \dots, v_{n+1})$, the space $\bigcap_{i=2}^{n+1}v_i^\perp$ is 1-dimensional.
It is easy to see that a spanning vector for this space is the first column of $Q_\Sigma^{-1}$ (written with respect to the basis $\beta = (v_i)_{i=1}^{n+1}$ of $\mathbb{R}^{n,1}$).
To see this, let $x$ be the first column of $Q_{\Sigma}^{-1}$. Then, with respect to the basis $\beta$, we have that $v_i = e_i$ (the standard basis vector with a $1$ in the $i^{th}$ component and $0's$ everywhere else) and $Q_{\Sigma} x = e_1$. Then
\begin{equation*}
\langle v_i , x^* \rangle = e_i^T Q_\Sigma x = e_i^T e_1 = \vec{0}.
\end{equation*}
for all $2 \leq i \leq n+1$, and where $x^*$ just denotes $x$ but with respect to the standard basis.
\end{comment}
\[
\langle v_1,\tilde{p}\rangle \tilde{p} + v_1 = \begin{pmatrix}
1\\
\frac{-Q^\Sigma_{12}}{Q^\Sigma_{11}}\\
\vdots\\
\frac{(-1)^{(n+1)+1}Q^\Sigma_{1(n+1)}}{Q^\Sigma_{11}}
\end{pmatrix},
\text{ which then implies that }
\langle v_1,\tilde{p}\rangle \tilde{p}=\begin{pmatrix}
0\\
\frac{-Q^\Sigma_{12}}{Q^\Sigma_{11}}\\
\vdots\\
\frac{(-1)^{(n+1)+1}Q^\Sigma_{1(n+1)}}{Q^\Sigma_{11}}\end{pmatrix}.
\] Writing each $\tilde{p}$ as $p/(-\langle p,p \rangle)^{1/2}$ and dividing by $\frac{\langle v_1,p\rangle}{-\langle p,p \rangle}$, we have
\begin{equation}\label{eqn:p-vector}
p =
\begin{pmatrix}
0\\
\frac{Q^\Sigma_{12}\langle p,p \rangle}{Q^\Sigma_{11}\langle v_1,p\rangle}\\
\vdots\\
\frac{(-1)^{n+3}Q^\Sigma_{1(n+1)}\langle p,p \rangle}{Q^\Sigma_{11}\langle v_1,p\rangle}
\end{pmatrix}.
\end{equation}
Recall that $p = (0, \alpha_2, \dots, \alpha_{n+1})$. Thus we obtain a preliminary solution for each $\alpha_i$ by aligning the components of equation \eqref{eqn:p-vector}. But this is not a sufficient solution since $p$ is needed to compute each $\alpha_i$. But consider the sum of these equations:
\[
1 =\sum_{i=2}^{n+1}\alpha_i = \frac{-\langle p,p \rangle}{\langle v_1,p \rangle}\frac{\sum_{i=2}^{n+1}(-1)^{i+1}Q^\Sigma_{1i}}{Q^\Sigma_{11}},
\] and thus
\begin{equation}\label{eqn:p-dot-p-constant}
\frac{-\langle p,p \rangle}{\langle v_1,p \rangle}= \frac{Q^\Sigma_{11}}{\sum_{i=2}^{n+1}(-1)^{i+1}Q^\Sigma_{1i}}.
\end{equation}
Substituting into Equation \ref{eqn:p-vector}, we therefore have
\[p =
\begin{pmatrix}
0\\
\frac{-Q^\Sigma_{12}}{\sum_{i=2}^{n+1}(-1)^{i+1}Q^\Sigma_{1i}}\\
\frac{Q^\Sigma_{13}}{\sum_{i=2}^{n+1}(-1)^{i+1}Q^\Sigma_{1i}}\\
\vdots\\
\frac{(-1)^{(n+1)+1}Q^\Sigma_{1(n+1)}}{\sum_{i=2}^{n+1}(-1)^{i+1}Q^\Sigma_{1i}}
\end{pmatrix},
\] and thus $\alpha_i = \frac{(-1)^{i+1}Q^\Sigma_{1i}}{\sum_{j=2}^{n+1}(-1)^{j+1}Q^\Sigma_{1j}}$, proving the Theorem. To calculate a formula for $\tilde{p} = \frac{p}{\sqrt{-\langle p,p \rangle}}$, we must calculate $\sqrt{-\langle p,p \rangle}$ in terms of $Q_\Sigma$:
\[ \sqrt{-\langle p,p \rangle} = \sqrt{-\sum_{i,j=2}^{n+1} \alpha_i\alpha_j\langle v_i,v_j \rangle} =\sqrt{\frac{\sum_{i,j=2}^{n+1}(-1)^{i+j}Q^\Sigma_{1i}Q^\Sigma_{1j}(\cosh(\gamma_{ij}))}{\left(\sum_{i=2}^{n+1}(-1)^{i+1}Q^\Sigma_{1i}\right)^2}}\]
\[ =\frac{\sqrt{\sum_{i,j=2}^{n+1}(-1)^{i+j}Q^\Sigma_{1i}Q^\Sigma_{1j}(\cosh(\gamma_{ij}))}}{\sum_{i=2}^{n+1}(-1)^{i+1}Q^\Sigma_{1i}}. \]
Thus, we finally have that
\[\tilde{p} = \frac{(\sum_{i=2}^{n+1}\alpha_iv_i)(\sum_{i=2}^{n+1}(-1)^{i+1}Q^\Sigma_{1i})}{\sqrt{\sum_{i,j=2}^{n+1}(-1)^{i+j}Q^\Sigma_{1i}Q^\Sigma_{1j}(\cosh(\gamma_{ij}))}} = \frac{\sum_{i=2}^{n+1}(-1)^{i+1}Q^\Sigma_{1i}v_i}{\sqrt{\sum_{i,j=2}^{n+1}(-1)^{i+j}Q^\Sigma_{1i}Q^\Sigma_{1j}(\cosh(\gamma_{ij}))}},\]
and so $\tilde{\alpha}_i = \frac{(-1)^{i+1}Q^\Sigma_{1i}}{\sqrt{\sum_{j,k=2}^{n+1}(-1)^{j+k}Q^\Sigma_{1j}Q^\Sigma_{1k}(\cosh(\gamma_{jk}))}}$, as desired.
\end{proof}
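As a numerical sanity check of the theorem (a plain-Python sketch; all helper names here are ours), consider a hyperbolic 3-simplex with every edge length equal to 1. By symmetry the projection of $v_1$ onto $\tau_1$ must be the barycenter $(0,\frac13,\frac13,\frac13)$, and the vector $v_1 + \langle v_1,\tilde{p}\rangle\tilde{p}$ appearing in the proof must be orthogonal to $v_2, v_3, v_4$; both facts can be verified entirely through $Q_\Sigma$.

```python
import math

c = math.cosh(1.0)
# Q_Sigma for a hyperbolic 3-simplex with every edge length equal to 1:
# q_ij = <v_i, v_j> = -cosh(gamma_ij), and q_ii = -1
Q = [[-1.0 if i == j else -c for j in range(4)] for i in range(4)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def minor_1j(j):
    # first-row minor Q^Sigma_{1j}, with 1-based indices as in the theorem
    cols = [k for k in range(4) if k != j - 1]
    return det3([[Q[r][k] for k in cols] for r in (1, 2, 3)])

den = sum((-1) ** (j + 1) * minor_1j(j) for j in (2, 3, 4))
alpha = [0.0] + [(-1) ** (j + 1) * minor_1j(j) / den for j in (2, 3, 4)]

def form(x, y):
    # the bilinear form <x, y>, evaluated through Q_Sigma
    return sum(x[i] * Q[i][j] * y[j] for i in range(4) for j in range(4))

pt = [a / math.sqrt(-form(alpha, alpha)) for a in alpha]  # p-tilde on the hyperboloid
v1 = [1.0, 0.0, 0.0, 0.0]
r = [v1[i] + form(v1, pt) * pt[i] for i in range(4)]  # should lie in every v_i-perp
residuals = [sum(r[i] * Q[i][k] for i in range(4)) for k in (1, 2, 3)]
print([round(a, 6) for a in alpha], max(abs(x) for x in residuals))
# alpha ≈ (0, 1/3, 1/3, 1/3) by symmetry; residuals ≈ 0
```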
\begin{lemma}\label{lemma:orth-complement}
Using the notation in Theorem \ref{theorem:hyperbolic-projection}, the intersection of the orthogonal complements of the vertex vectors $v_i$ for $2\leq i \leq n+1$ is
\[\bigcap_{i=2}^{n+1}v_i^\perp = \left\langle\begin{pmatrix}
Q_{11}^\Sigma \\ -Q_{12}^\Sigma \\ \vdots \\ (-1)^{(n+1)+1}Q_{1(n+1)}^\Sigma\\
\end{pmatrix}\right\rangle,\] where $\langle \vec{v} \rangle$ denotes the span of $\vec{v}$.
\end{lemma}
\begin{proof}
We first note that $\bigcap_{i=2}^{n+1}v_i^\perp$ is a one-dimensional vector space. Let $\vec{x}$ be an $(n+1)\times 1$ column vector in $\bigcap_{i=2}^{n+1}v_i^\perp$. We also note that since we are working with respect to the vertex vectors, \[\langle v_i, \vec{x} \rangle = \sum_{j=1}^{n+1} \langle v_i, x_jv_j\rangle = \sum_{j=1}^{n+1} x_j\langle v_i,v_j\rangle = \sum_{j=1}^{n+1} x_jq_{ij},\] where $x_j$ is the $j$-th entry of $\vec{x}$. Because $\bigcap_{i=2}^{n+1}v_i^\perp$ is one-dimensional and does not lie in the hyperplane $x_1 = 0$, we may let $x_1=1$ without loss of generality (and this is the form of $\vec{x}$ that was needed in Theorem \ref{theorem:hyperbolic-projection}). Then we must solve $n$ equations in $n$ unknowns:
\[
\begin{pmatrix}
q_{12} + q_{22}x_2 + q_{23}x_3 + \cdots + q_{2(n+1)}x_{n+1}\\
q_{13} + q_{23}x_2 + q_{33}x_3 + \cdots + q_{3(n+1)}x_{n+1}\\
\vdots\\
q_{1(n+1)} + q_{2(n+1)}x_2 + q_{3(n+1)}x_3 + \cdots + q_{(n+1)(n+1)}x_{n+1}\\
\end{pmatrix}=\vec{0},
\] which can be rearranged to:
\[
\begin{pmatrix}
q_{22} & q_{23} & \cdots & q_{2(n+1)} \\
q_{23} & q_{33} & \cdots & q_{3(n+1)} \\
\vdots &\vdots& \ddots & \vdots \\
q_{2(n+1)} & q_{3(n+1)} & \cdots & q_{(n+1)(n+1)} \\
\end{pmatrix}
\begin{pmatrix}
x_2\\x_3\\\vdots\\x_{n+1}
\end{pmatrix}=
\begin{pmatrix}
-q_{12}\\-q_{13}\\\vdots\\-q_{1(n+1)}
\end{pmatrix}.
\]
First note that this matrix is the submatrix $Q_\Sigma(1,1)$ of $Q_\Sigma$ formed by removing the first row and first column. By Cramer's rule, $x_i = \frac{|A_{i-1}|}{Q^\Sigma_{11}}$, where $A_{i-1}$ is the matrix formed by replacing the $(i-1)^{th}$ column of $Q_\Sigma(1,1)$ with $\begin{bmatrix}
-q_{12} & -q_{13} & \cdots & -q_{1(n+1)}
\end{bmatrix}^T$. Thus, we have that
\[\vec{x}=\begin{pmatrix}
1\\
\frac{-Q^\Sigma_{12}}{Q^\Sigma_{11}}\\
\vdots\\
\frac{(-1)^{n+1}Q^\Sigma_{1(n+1)}}{Q^\Sigma_{11}}
\end{pmatrix}.\]
And thus, since $\vec{x} \in \bigcap_{i=2}^{n+1}v_i^\perp$ and $\bigcap_{i=2}^{n+1}v_i^\perp$ is one-dimensional, $\bigcap_{i=2}^{n+1}v_i^\perp = \langle \vec{x} \rangle$. Scaling $\vec{x}$ by $Q^\Sigma_{11}$, we have
\[\bigcap_{i=2}^{n+1}v_i^\perp = \left\langle\begin{pmatrix}
Q^\Sigma_{11} \\ -Q^\Sigma_{12} \\ \vdots \\ (-1)^{(n+1)+1}Q^\Sigma_{1(n+1)}\\
\end{pmatrix}\right\rangle,\] as desired.
\end{proof}
As with the Euclidean projection, to project onto an $(n-2)$-dimensional or smaller subsimplex, we form the simplex spanned by the relevant vertices and perform the same calculation. As one can see, this calculation proves far simpler than other methods, and it relies only on the edge lengths of the simplex.
\section{An example}\label{section:example}
The formulas in our Theorems, specifically Theorems \ref{thm:Euclid Projection} and \ref{theorem:hyperbolic-projection}, look more complicated than they are to use in practice.
The purpose of this Section is to work out an example to demonstrate the efficiency of these formulas.
Let $\tau = \langle v_1,v_2,v_3,v_4\rangle$ be a 3-simplex with edge lengths given by:
\begin{tabular}{c|c|c|c|c}
$\gamma_{ij}$ & 1 & 2 & 3 & 4 \\
\hline
1 & 0 & 2 & 3 & 4 \\
\hline
2 & 2 & 0 & 4 & 5 \\
\hline
3 & 3 & 4 & 0 & 3 \\
\hline
4 & 4 & 5 & 3 & 0 \\
\end{tabular}
In this section, we use the methods developed in this paper to compute various quantities associated with $\tau$. We first verify that $\tau$ is a non-degenerate simplex when given both the Euclidean metric $g_\mathbb{E}$ and the hyperbolic metric $g_\mathbb{H}$. We then find the distances between $p = (\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4})$ and $q = (\frac{1}{3},\frac{1}{3},\frac{1}{3},0)$ in the Euclidean and hyperbolic metrics, and finally find the orthogonal projection of $v_1$ onto $\tau_1$ with $\tau$ viewed as both a Euclidean and a hyperbolic simplex.
\subsection{Calculations in $(\tau,g_\mathbb{E})$}
\subsection*{Verifying that $(\tau, g_\mathbb{E})$ is a legitimate Euclidean simplex} We construct the matrix $Q$ from \eqref{eqn:Q}.
\[
Q =
\begin{pmatrix}
1/2(\gamma_{12}^2 +\gamma_{12}^2-\gamma_{22}^2) & 1/2(\gamma_{12}^2 +\gamma_{13}^2-\gamma_{23}^2) & 1/2(\gamma_{12}^2 +\gamma_{14}^2-\gamma_{24}^2) \\ \\
1/2(\gamma_{12}^2 +\gamma_{13}^2-\gamma_{23}^2) & 1/2(\gamma_{13}^2 +\gamma_{13}^2-\gamma_{33}^2) & 1/2(\gamma_{13}^2 +\gamma_{14}^2-\gamma_{34}^2) \\ \\
1/2(\gamma_{12}^2 +\gamma_{14}^2-\gamma_{24}^2) & 1/2(\gamma_{13}^2 +\gamma_{14}^2-\gamma_{34}^2) & 1/2(\gamma_{14}^2 +\gamma_{14}^2-\gamma_{44}^2)
\end{pmatrix} =
\begin{pmatrix}
4 & -3/2 & -5/2 \\ \\
-3/2 & 9 & 8\\ \\
-5/2 & 8 & 16
\end{pmatrix}
\]
\medskip
\noindent One can check that the eigenvalues of $Q$ are approximately 21.7, 3.81, and 3.48. Then, since $Q$ is positive-definite, $\tau$ is a legitimate Euclidean simplex.
\subsection*{Calculating distances in $(\tau, g_\mathbb{E})$} Let $p = (\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4})$ and $q=(\frac{1}{3},\frac{1}{3},\frac{1}{3},0)$. We wish to calculate $d_\mathbb{E}(p,q)$. The squared Euclidean distance between them, by Equation \eqref{euclid-dist}, is
\[d_\mathbb{E}^2(p,q) = [q-p]^TQ[q-p] =
\begin{pmatrix}
\frac{1}{12} & \frac{1}{12} & \frac{-1}{4}
\end{pmatrix}\begin{pmatrix}
4 & -3/2 & -5/2 \\
-3/2 & 9 & 8\\
-5/2 & 8 & 16
\end{pmatrix}\begin{pmatrix}
\frac{1}{12}\\\frac{1}{12}\\\frac{-1}{4}
\end{pmatrix} = \frac{121}{144},\] so $d_\mathbb{E}(p,q)=11/12$.
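This arithmetic can be checked exactly with rational arithmetic. Below is a minimal plain-Python sketch (variable names are ours): the vector is $q - p$ expressed in the basis $w_i = v_{i+1} - v_1$, i.e.\ the last three barycentric coordinates of $q - p$.

```python
from fractions import Fraction as F

# Gram matrix Q of the Euclidean simplex, as computed in the text
Q = [[F(4), F(-3, 2), F(-5, 2)],
     [F(-3, 2), F(9), F(8)],
     [F(-5, 2), F(8), F(16)]]

# last three barycentric coordinates of q - p
v = [F(1, 12), F(1, 12), F(-1, 4)]

d_sq = sum(v[i] * Q[i][j] * v[j] for i in range(3) for j in range(3))
print(d_sq)  # 121/144, so d_E(p, q) = 11/12
```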
\subsection*{Projecting $v_1$ onto $\tau_1$ in $(\tau, g_\mathbb{E})$} Now, we find the projection of $v_1$ onto $\tau_1$. Firstly, we have
\[
Q_{\tau_1} =
\begin{pmatrix}
1/2(\gamma_{23}^2 + \gamma_{23}^2-\gamma_{33}^2) & 1/2(\gamma_{23}^2 + \gamma_{24}^2-\gamma_{34}^2)\\
1/2(\gamma_{23}^2 + \gamma_{24}^2-\gamma_{34}^2) & 1/2(\gamma_{24}^2 + \gamma_{24}^2-\gamma_{44}^2)
\end{pmatrix} = \begin{pmatrix}
16 & 16 \\
16 & 25
\end{pmatrix}
\] So $|Q_{\tau_1}|=144$. We now compute the minors of $Q$:
\[Q_{11}=
\begin{vmatrix}
9 & 8 \\
8 & 16
\end{vmatrix}=80
\qquad Q_{12}=
\begin{vmatrix}
-3/2 & 8 \\
-5/2 & 16
\end{vmatrix}=-4
\qquad Q_{13}=
\begin{vmatrix}
-3/2 & 9 \\
-5/2 & 8
\end{vmatrix}=21/2
\]\[ Q_{22}=
\begin{vmatrix}
4 & -5/2 \\
-5/2 & 16
\end{vmatrix}=231/4
\qquad Q_{23}=
\begin{vmatrix}
4 & -3/2 \\
-5/2 & 8
\end{vmatrix}=113/4
\qquad Q_{33}=
\begin{vmatrix}
4 & -3/2 \\
-3/2 & 9
\end{vmatrix}=135/4
\]
Thus, we can then find each $\alpha_i$, noting that $Q_{ij} = Q_{ji}$ due to the symmetry of $Q$:
\[
\alpha_2 = \frac{Q_{11} - Q_{12} + Q_{13}}{|Q_{\tau_1}|} = \frac{80-(-4) + 21/2}{144} \approx 0.65625
\]\[
\alpha_3 = \frac{-Q_{21} + Q_{22} - Q_{23}}{|Q_{\tau_1}|} = \frac{-(-4) + 231/4 - 113/4}{144} \approx 0.23264
\]\[
\alpha_4 = \frac{Q_{31}-Q_{32}+Q_{33}}{|Q_{\tau_1}|} = \frac{21/2-113/4+135/4}{144} \approx 0.11111
\] Let us quickly remark that the subscripts of the $\alpha_i's$ are off by 1 from Theorem \ref{thm:Euclid Projection} since we are projecting $v_1$ as opposed to $v_4$ as in the Theorem.
Note that $\alpha_2+\alpha_3+\alpha_4=1$. So the orthogonal projection of $v_1$ onto $\tau_1$ has barycentric coordinates $(0, 0.65625, 0.23264, 0.11111)$. It is worth noting that each coordinate is positive, and so the projection $p$ lies inside the triangle $\tau_1$. Our formula gives an easy way to check whether a vertex projects inside or outside of the opposite face.
\subsection*{Calculating the ``height" of $(\tau, g_\mathbb{E})$}
Consider the altitude of $v_1$ over $p$:
\[
d_\mathbb{E}(v_1,p) = \sqrt{p^TQp} \approx 1.4136.
\]
We will use this result to compare to the analogous result in the hyperbolic example.
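The projection and altitude above can be reproduced in a few lines. Here is a plain-Python sketch (the helper name minor is ours):

```python
import math

Q = [[4, -1.5, -2.5],
     [-1.5, 9, 8],
     [-2.5, 8, 16]]  # Gram matrix of the Euclidean simplex from the text

def minor(M, i, j):
    # 2x2 minor of a 3x3 matrix: delete row i and column j, take the determinant
    rows = [r for k, r in enumerate(M) if k != i]
    sub = [[r[c] for c in range(3) if c != j] for r in rows]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

# the denominator equals |Q_{tau_1}| = 144 by the corollary in the Euclidean section
den = sum((-1) ** (i + j) * minor(Q, i, j) for i in range(3) for j in range(3))
alpha = [sum((-1) ** (i + j) * minor(Q, i, j) for j in range(3)) / den
         for i in range(3)]
height = math.sqrt(sum(alpha[i] * Q[i][j] * alpha[j]
                       for i in range(3) for j in range(3)))
print([round(a, 5) for a in alpha], round(height, 4))
# [0.65625, 0.23264, 0.11111] 1.4136
```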
\subsection{Calculations in $(\tau,g_\mathbb{H})$}
\subsection*{Verifying that $(\tau, g_\mathbb{H})$ is a legitimate hyperbolic simplex} Via Theorem \ref{theorem : hyperbolic-realizability}, we need to calculate $Q_{\Sigma}$ using equation \eqref{eqn:Qij}. We have that $Q_\Sigma$ =
\[
\begin{pmatrix}
-1 & -\cosh(\gamma_{12}) & -\cosh(\gamma_{13}) & -\cosh(\gamma_{14})\\
\\
-\cosh(\gamma_{12}) & -1 & -\cosh(\gamma_{23}) & -\cosh(\gamma_{24})\\
\\
-\cosh(\gamma_{13}) & -\cosh(\gamma_{23}) & -1 & -\cosh(\gamma_{34})\\
\\
-\cosh(\gamma_{14}) & -\cosh(\gamma_{24}) & -\cosh(\gamma_{34}) & -1
\end{pmatrix} = \begin{pmatrix}
-1 & -\cosh(2) & -\cosh(3) & -\cosh(4)\\
\\
-\cosh(2) & -1 & -\cosh(4) & -\cosh(5)\\
\\
-\cosh(3) & -\cosh(4) & -1 & -\cosh(3)\\
\\
-\cosh(4) & -\cosh(5) & -\cosh(3) & -1
\end{pmatrix}
\] The eigenvalues of $Q_\Sigma$ are approximately $-90.1$, $79.2$, $5.5$, and $1.4$. Since $Q_\Sigma$ has signature $(3,1)$, by Theorem \ref{theorem : hyperbolic-realizability} we know that $\tau$ is a legitimate hyperbolic simplex.
\subsection*{Calculating distances in $(\tau, g_\mathbb{H})$} We now wish to calculate $d_\mathbb{H}(p,q)$, where $p = (\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4})$ and $q=(\frac{1}{3},\frac{1}{3},\frac{1}{3},0)$. By Theorem \ref{thm:hyp dist}, we have
\[d_\mathbb{H}(p,q) = \text{arccosh}\left(\frac{-\langle p,q \rangle}{\sqrt{\langle p,p \rangle \cdot \langle q,q \rangle}}\right),\]
with:
\[
\langle p,q \rangle = p^TQ_\Sigma q \approx -16.40517, \quad \langle p,p \rangle = p^TQ_\Sigma p \approx -19.34049, \text{ and}\quad \langle q,q \rangle = q^TQ_\Sigma q \approx -9.47513.
\]
Thus, we have that \[d_\mathbb{H}(p,q) = \text{arccosh} \left(\frac{16.40517}{\sqrt{(-19.34049)(-9.47513)}}\right) \approx 0.63997.\]
\medskip
Notice here that $d_\mathbb{H}(p,q) < d_\mathbb{E}(p,q)$, as expected.
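These numbers are easy to reproduce directly from the edge-length table; below is a plain-Python sketch (all names are ours):

```python
import math

g = [[0, 2, 3, 4],
     [2, 0, 4, 5],
     [3, 4, 0, 3],
     [4, 5, 3, 0]]  # edge lengths gamma_ij from the table

# Q_Sigma: q_ij = -cosh(gamma_ij), so the diagonal entries are -1
Q = [[-math.cosh(g[i][j]) for j in range(4)] for i in range(4)]

def form(x, y):
    # the bilinear form <x, y>, evaluated through Q_Sigma
    return sum(x[i] * Q[i][j] * y[j] for i in range(4) for j in range(4))

p = [0.25, 0.25, 0.25, 0.25]
q = [1 / 3, 1 / 3, 1 / 3, 0]
d = math.acosh(-form(p, q) / math.sqrt(form(p, p) * form(q, q)))
print(round(d, 5))  # 0.63997
```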
\subsection*{Projecting $v_1$ onto $\tau_1$ in $(\tau, g_\mathbb{H})$}
We now consider the projection $\tilde{p}$ of $v_1$ onto $\tau_1$. First, we find the barycentric coordinates for the corresponding point $p = (0,\alpha_2,\alpha_3,\alpha_4)$ on the convex hull $\sigma$, before finding the coordinates of $\tilde{p}$. First, the relevant minors of $Q_\Sigma$ are:
\[
\qquad
Q_{12}^\Sigma = \begin{vmatrix}
-\cosh(2) & -\cosh(4) & -\cosh(5)\\
\\
-\cosh(3) & -1 & -\cosh(3)\\
\\
-\cosh(4) & -\cosh(3) & -1
\end{vmatrix}\approx -12350.57
\]
\medskip
\[
Q_{13}^\Sigma = \begin{vmatrix}
-\cosh(2) & -1 & -\cosh(5)\\
\\
-\cosh(3) & -\cosh(4) & -\cosh(3)\\
\\
-\cosh(4) & -\cosh(5) & -1
\end{vmatrix}\approx 2340.72 \quad
Q_{14}^\Sigma = \begin{vmatrix}
-\cosh(2) & -1 & -\cosh(4)\\
\\
-\cosh(3) & -\cosh(4) & -1\\
\\
-\cosh(4) & -\cosh(5) & -\cosh(3)
\end{vmatrix}\approx -718.81
\]
\bigskip
\noindent From Theorem \ref{theorem:hyperbolic-projection}, we know that
\begin{equation*}
\alpha_2 = \frac{-Q^\Sigma_{12}}{-Q^\Sigma_{12}+Q^\Sigma_{13}-Q^\Sigma_{14}} \qquad \alpha_3 = \frac{Q^\Sigma_{13}}{-Q^\Sigma_{12}+Q^\Sigma_{13}-Q^\Sigma_{14}} \qquad \alpha_4 = \frac{-Q^\Sigma_{14}}{-Q^\Sigma_{12}+Q^\Sigma_{13}-Q^\Sigma_{14}}.
\end{equation*}
Plugging in the values for the minors and calculating, we get that $p = (0, 0.80146, 0.15190, 0.04665)$. Note that $\alpha_2+\alpha_3+\alpha_4 = 1$, as expected.
Also by Theorem \ref{theorem:hyperbolic-projection}, we have
\[
\tilde{p} = (0,\tilde{\alpha}_2,\tilde{\alpha}_3,\tilde{\alpha}_4);
\quad \text{for } \tilde{\alpha}_i = \frac{(-1)^{i+1}Q^\Sigma_{1i}}{\sqrt{\sum_{j,k=2}^{n+1}(-1)^{j+k}Q^\Sigma_{1j}Q^\Sigma_{1k}(\cosh(\gamma_{jk}))}}
\]And thus, since \[\sqrt{\sum_{i,j=2}^{n+1}(-1)^{i+j}Q^\Sigma_{1i}Q^\Sigma_{1j}(\cosh(\gamma_{ij}))} \approx 55578.499 \text{, we have } \tilde{p} = \left(0,0.22222,0.04212, 0.01293 \right).\] Alternatively, one could just calculate $\langle p, p \rangle$, and then use that $\tilde{p} = \frac{p}{\sqrt{- \langle p, p \rangle}}$.
\subsection*{Calculating the ``height" of $(\tau, g_\mathbb{H})$}
Let us find the altitude of $v_1$ over $\tilde{p}$:
\[ d_\mathbb{E}^2(v_1,\tilde{p}) = \begin{pmatrix} -1 & 0.222 & 0.042 & 0.0129 \end{pmatrix}Q_\Sigma \begin{pmatrix} -1 \\ 0.222 \\ 0.042 \\ 0.0129 \end{pmatrix}\approx 1.22644 \]\[
d_\mathbb{H}(v_1,\tilde{p}) = \text{arccosh}\left(\frac{2+1.22644}{2}\right) \approx 1.0575.
\]
Note that $d_\mathbb{H}(v_1,\tilde{p}) < d_\mathbb{E}(v_1,p)$, as we would expect.
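The whole hyperbolic computation can likewise be scripted end to end (a plain-Python sketch; helper names are ours): the first-row minors give $p$, normalizing by $\sqrt{-\langle p,p\rangle}$ gives $\tilde{p}$, and the altitude is $\text{arccosh}(-\langle v_1,\tilde{p}\rangle)$.

```python
import math

g = [[0, 2, 3, 4], [2, 0, 4, 5], [3, 4, 0, 3], [4, 5, 3, 0]]
Q = [[-math.cosh(g[i][j]) for j in range(4)] for i in range(4)]  # Q_Sigma

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def minor_1j(j):
    # first-row minor Q^Sigma_{1j}, with 1-based indices as in the theorem
    cols = [c for c in range(4) if c != j - 1]
    return det3([[Q[r][c] for c in cols] for r in (1, 2, 3)])

den = sum((-1) ** (j + 1) * minor_1j(j) for j in (2, 3, 4))
alpha = [0.0] + [(-1) ** (j + 1) * minor_1j(j) / den for j in (2, 3, 4)]

def form(x, y):
    return sum(x[i] * Q[i][j] * y[j] for i in range(4) for j in range(4))

pt = [a / math.sqrt(-form(alpha, alpha)) for a in alpha]  # project onto hyperboloid
v1 = [1.0, 0.0, 0.0, 0.0]
height = math.acosh(-form(v1, pt))
print(alpha, pt, height)
# alpha ≈ (0, 0.80146, 0.15190, 0.04665); pt ≈ (0, 0.22222, 0.04212, 0.01293);
# height ≈ 1.0575
```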
\section{Analogous formulas for spherical simplices}\label{section:spherical}
Our model space for $\mathbb{S}^n$ is the unit sphere in $\mathbb{R}^{n+1}$. Similar to equation \eqref{eqn:hyperbolic distance} we have
\begin{equation}\label{eqn:spherical distance}
d_\mathbb{S}(x,y) = \arccos(\langle x,y \rangle).
\end{equation}
Also, in the same way as we did in Section \ref{section:realizability}, we can consider the $(n+1)$-simplex $\Sigma$ which is the convex hull of the vertices of a simplex and the origin. The Gram matrix $Q_\Sigma$ is calculated by the formula
\begin{equation}\label{eqn:spherical Qij}
q_{ij} = \langle w_i, w_j \rangle = \langle v_i, v_j \rangle = \cos(\gamma_{ij}).
\end{equation}
\begin{theorem}[Analogous to Theorem \ref{theorem : hyperbolic-realizability}]\label{theorem : spherical-realizability}
A collection of $n(n+1)/2$ positive real numbers $\{\gamma_{ij}\}_{1 \leq i < j \leq n+1}$ with $\gamma_{ij} < \pi/2$ for all $i, j$ are the edge lengths of a spherical $n$-simplex $(\tau, g_\mathbb{S})$ if and only if the $(n+1) \times (n+1)$ matrix $Q_\Sigma$ defined by equation \eqref{eqn:spherical Qij} is positive-definite.
\end{theorem}
\begin{proof}
This is essentially identical to the proof of Theorem \ref{theorem : hyperbolic-realizability}.
\begin{comment}
Let $(\tau, g_\mathbb{S})$ be a spherical simplex with edge lengths $\{\gamma_{ij}\}_{i,j=1}^{n+1}$. Suppose $(\tau, g_\mathbb{S})$ is a non-degenerate simplex. Isometrically embed $(\tau, g_\mathbb{S})$ into $S^n$. Then $(\tau, g_\mathbb{S})$ cannot be embedded in any spherical $n-1$ plane by definition. Thus, the edges $e_{0i}$ span $R^{n+1,0}$, so $\Sigma$ is non-degenerate, and by extension $Q_\Sigma$ has signature $(n+1,0)$.
Now, for the other direction of the equivalence, we consider the contraposition. Suppose that $(\tau, g_\mathbb{S})$ is degenerate. Isometrically embed $(\tau, g_\mathbb{S})$ into $S^n$. Then $(\tau, g_\mathbb{S})$ lies in a spherical $n-1$ plane by definition of degeneracy. This then implies that $(\tau, g_\mathbb{S})$ lies on some $n$-plane intersecting the origin by the definition of Minkowski space. So $\Sigma$ also lies in that $n$-plane, so $\Sigma$ is degenerate, so $Q_\Sigma$ does not have signature $(n+1,0)$.
\end{comment}
\end{proof}
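In practice, the positive-definiteness condition of Theorem \ref{theorem : spherical-realizability} can be tested with Sylvester's criterion: $Q_\Sigma$ is positive definite if and only if all of its leading principal minors are positive. A minimal plain-Python sketch (the function name is ours) follows; the second test case fails because its edge lengths violate the triangle inequality on the sphere.

```python
import math

def det(M):
    # Laplace expansion along the first row; fine for small matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_spherical_simplex(gamma):
    # Sylvester's criterion: Q_Sigma (q_ij = cos(gamma_ij)) is positive
    # definite iff every leading principal minor is positive
    n = len(gamma)
    Q = [[math.cos(gamma[i][j]) for j in range(n)] for i in range(n)]
    return all(det([row[:k] for row in Q[:k]]) > 0 for k in range(1, n + 1))

good = [[0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]]  # equilateral, side 0.5
bad = [[0, 0.5, 1.2], [0.5, 0, 0.5], [1.2, 0.5, 0]]   # violates the triangle
                                                      # inequality on the sphere
print(is_spherical_simplex(good), is_spherical_simplex(bad))  # True False
```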
Distances in spherical simplices are computed in the analogous way as they are in hyperbolic simplices: one projects the points onto the sphere using the techniques from Section \ref{section:coordinates}, and then calculates the distance using equation \eqref{eqn:spherical distance} and $Q_{\Sigma}$.
For orthogonal projection in spherical simplices, you just project within the convex hull of the points and then project that point onto the sphere. More precisely, let $(\tau, g_\mathbb{S})$ be a spherical simplex. Linearly isometrically embed $\tau$ into $\mathbb{S}^n$ in some way, and identify the vertices $v_i$ with their image in $\mathbb{S}^n$. Let $\sigma$ be the $n$-simplex formed by the convex hull of $(v_1, \dots, v_{n+1})$. Then, to calculate $\text{proj}_{\tau_1}(v_1)$, you calculate $\text{proj}_{\sigma_1}(v_1)$ using Theorem \ref{thm:Euclid Projection} and then project this point onto the sphere from the origin.
This process works for spherical simplices but not for hyperbolic simplices because the quadratic form restricted to the hyperplane containing $\sigma$ is always positive-definite for spherical simplices.
Finally, the distance and projection formulas in this paper can be extended to simplices with constant curvature $\kappa$ by adjusting equations \eqref{eqn:hyperbolic distance}, \eqref{eqn:Qij}, \eqref{eqn:spherical distance}, and \eqref{eqn:spherical Qij} accordingly.
\begin{comment}
\begin{remark}
The degeneracy of a simplex of arbitrary constant curvature $(\tau, g)$ is determined symmetrically: a set of edge forms a degenerate simplex if and only if $Q_\Sigma$ has the same signature as the model space.
\end{remark}
Now let us consider the distance between any two points in a simplex of arbitrary curvature $\kappa$. For any $\kappa$, the definition of the space of that curvature in $\mathbb{R}^{N,1}$ to be that $\langle x,x \rangle = \frac{\kappa}{\sqrt{|\kappa|}}$. Thus, for an arbitrary $\kappa$ we have:
\begin{equation}
d(\tilde{x},\tilde{y})=
\begin{cases}
\arccos\bigg(\frac{2 - \sqrt{\kappa}\cdot d^2_\mathbb{E}(\tilde{x},\tilde{y})}{2} \bigg) & \text{if } \kappa > 0\\
\\
\text{arccosh}\bigg( \frac{2 + \sqrt{-\kappa}\cdot d_{\mathbb{E}}^2(\tilde{x},\tilde{y})}{2} \bigg) & \text{if } \kappa < 0
\end{cases}
=
\begin{cases}
\arccos\bigg( \frac{2-\sqrt{\kappa}[\tilde{x}-\tilde{y}]^TQ_\Sigma[\tilde{x}-\tilde{y}]}{2} \bigg) & \text{if } \kappa > 0\\
\\
\text{arccosh}\bigg( \frac{2+\sqrt{-\kappa}[\tilde{x} - \tilde{y}]^T Q_\Sigma [\tilde{x} - \tilde{y}]}{2} \bigg) & \text{if } \kappa < 0
\end{cases}
\end{equation}
So for $\kappa = 1$, we have $d_\mathbb{E}^2(\tilde{x}, \tilde{y}) = 1 + 1 - 2\cos(d_S(\tilde{x}, \tilde{y})) = 2 - 2\cos(d_S(\tilde{x}, \tilde{y}))$, so with some rearrangement, we have our formula:
\begin{equation}\label{sphere dist eqn}
d_\mathbb{S}(\tilde{x},\tilde{y}) = \arccos\bigg( \frac{2-d_\mathbb{E}^2(\tilde{x},\tilde{y})}{2} \bigg) = \arccos\bigg( \frac{2-[\tilde{x}-\tilde{y}]^TQ_\Sigma[\tilde{x}-\tilde{y}]}{2} \bigg)
\end{equation}
Turning now to orthogonal projections in spherical space, we note that since both spherical and Euclidean spaces are positive definite, the projected point is precisely the one corresponding to the Euclidean orthogonal projection on $\sigma$.
\end{comment}
\bibliographystyle{amsplain}
\section{Introduction}
Frictional interfaces inside solids appear in a wide range of problems in science and engineering.
Examples in civil and mechanical engineering applications include cracks, slip surfaces, and soil--structure interfaces.
\revised{Mathematically, frictional contact is a constrained optimization problem, in which the constraints emanate from non-penetration of two contacting surfaces and frictional resistance in the surfaces.
Therefore, a numerical strategy for the contact problem should deal with two related aspects: (i) the algorithm to impose the constraints, and (ii) the discretization of the constraint variables.}
In modern finite element analysis, interfaces are often embedded in elements ({\it i.e.}~allowed to pass through the interior of elements) to simplify the meshing procedure dramatically.
When the interfaces are modeled as sharp (lower-dimensional) discontinuities, they are embedded by enriching basis functions either locally or globally.
Representative examples of locally or globally enriched finite element methods are the assumed enhanced strain (AES) method~\cite{simo1990class} and the extended finite element method (XFEM)~\cite{moes1999finite}, respectively.
At the expense of the simplified mesh, however, these methods pose a new challenge of treating frictional contact on embedded interfaces.
Over the last couple of decades, a variety of methods have been studied for modeling frictional interfaces embedded in finite elements.
Dolbow {\it et al.}~\cite{dolbow2001extended} were the first to propose a method for enforcing frictional contact on interfaces embedded with XFEM. Their method employs an iterative scheme called LATIN to solve a variational equation formulated in terms of the total displacements of the interface surfaces.
Yet the convergence behavior and accuracy of the LATIN iterative scheme appeared to have much room for improvement.
As such, a number of subsequent studies have developed and investigated various types of methods for a more efficient and robust treatment of embedded frictional interfaces ({\it e.g.}~\cite{khoei2007enriched,liu2008contact,nistor2009xfem}).
A notable example is the work of Liu and Borja~\cite{liu2008contact}, where the variational equation is reformulated based on the relative displacements of the discontinuity surfaces and the contact constraints are treated by the classical penalty method.
Other classical methods for handling frictional contact, such as the Lagrange multiplier method and the augmented Lagrangian method, have also been applied and tailored to embedded interfaces ({\it e.g.}~\cite{elguedj2007mixed,bechet2009stable,liu2010stabilized}).
\revised{At the same time, some researchers have pursued alternative discretization strategies for the contact variables, instead of discretizing them in a way similar to the classical node-to-surface method in computational contact mechanics~\cite{laursen2003computational,wriggers2006computational}.
A remarkable example is the mortared finite element formulation proposed by Kim {\it et al.}~\cite{kim2007mortared}, where the contact variables are discretized on an intermediate mortar surface independent from the bulk mesh.
In that formulation, the displacement fields in the bulk material and the mortar surface are weakly coupled via a Lagrange multiplier, and the contact constraints are handled by the penalty method.}
Nevertheless, it remains an unresolved challenge to handle frictional contact on embedded interfaces in a manner that is both computationally robust and efficient.
The Lagrange multiplier method provides exact enforcement of the contact constraints.
However, it introduces an additional field variable which not only increases the number of degrees of freedom but also gives rise to a stability issue in mixed discretization~\cite{ji2004strategies,bechet2009stable,liu2010stabilized}.
As a practical alternative, the penalty method is often used because it preserves the number of degrees of freedom and is straightforward to implement.
However, this method unavoidably permits inter-penetration (over-closure) to some extent and the degree of inter-penetration can only be reduced indirectly by enlarging the penalty stiffness parameter.
Unfortunately, too large a penalty parameter makes the resultant matrix system highly ill-conditioned.
So the determination of an appropriate penalty parameter is often a non-trivial issue.
\revised{The augmented Lagrangian method combines ideas from the Lagrange multiplier and the penalty methods to alleviate the drawbacks of each method.
However, it still suffers from the stability problem (when the multiplier is discretized independently as in Alart and Curnier~\cite{alart1991mixed}) or requires additional iterative steps to calculate the multiplier (when the multiplier is not discretized separately as in Simo and Laursen~\cite{simo1992augmented}).}
Recent studies have thus developed alternative and more advanced methods such as the weighted Nitsche method~\cite{annavarapu2013nitsche,annavarapu2014nitsche,annavarapu2015weighted} or explored diffuse approaches for algorithm-free modeling of frictional interfaces~\cite{fei2020phase-a,fei2020phase-b,fei2021double,fei2021phase}.
Yet none of the existing methods is considered optimally robust and efficient for modeling embedded interfaces with frictional contact.
The barrier method -- a simple yet robust algorithm for constrained optimization -- has also been used for imposing the contact constraints on external domain boundaries in multi-body dynamics~\cite{kane1999finite,pandolfi2002time,li2020incremental,Li2021CIPC,Ferguson:2021:RigidIPC,Lan2021MIPC}.
In essence, the method replaces the constraint of a constrained optimization problem by augmenting a barrier function to the objective functional, so that one can solve the problem using an algorithm for unconstrained optimization such as Newton's method.
In the context of contact mechanics, the non-penetration constraint can be approximated as a barrier energy function augmented to the potential energy functional.
The particular barrier energy function used in Li {\it et al.}~\cite{li2020incremental} is the product of two factors: (i) a $C^{2}$-continuous barrier function of the unsigned distance between domain boundaries,
and (ii) a scalar scaling parameter that converts the value of the barrier function to an energy.
Importantly, the barrier function has a free parameter that controls the size of the barrier region, whereas the scaling parameter can be tuned to avoid ill-conditioning of the resultant matrix system.
So this barrier method has several advantages over the classical methods, including:
(i) it does not introduce any additional degrees of freedom or iterative steps,
(ii) it is free of inter-penetration,
(iii) it avoids an ill-conditioned matrix system,
and (iv) it allows one to control the solution accuracy directly.
\revised{It is noted that although the barrier method results in a very small gap between two contacting surfaces, such a gap indeed exists in the real world due to asperities at microscopic scales.
In this regard, a gap in the barrier method is more physically realistic than an inter-penetration in the penalty method.
Further, whenever the crack opening displacement is used to calculate physical properties such as the hydraulic conductivity ({\it e.g.}~\cite{liu2017stabilized,choo2018cracking,liu2020modeling}), a gap must be preferred to an inter-penetration to ensure non-negative values of the properties.}
While the features of the barrier method are also attractive for embedded interfaces, the existing barrier method is incompatible with interfaces embedded in finite elements.
A few critical reasons are as follows.
First, the majority of embedded interfaces are initially closed, but the barrier method does not allow one to initialize perfectly closed surfaces because the barrier energy becomes infinite as the distance between the two surfaces approaches zero.
An intuitive way to address this issue is to introduce a new parameter representing the initial ``gap'' of a closed interface; however, it is unclear how to determine the value of this new parameter.
Second, it is uncertain as to how to determine the scalar scaling parameter for an embedded interface under quasi-static conditions, because the existing way to determine the parameter assumes dynamic contact between multiple bodies.
Third, in embedded finite element methods, numerical solutions may show severe oscillations in traction fields when the enriched degrees of freedom pose strong constraints.
In this work, we develop the first barrier method for embedded interfaces with frictional contact, addressing the aforementioned challenges in a physically meaningful and numerically efficient manner.
The work proceeds as follows.
After formulating a variational equation for a solid with internal discontinuities, we employ a barrier energy function to derive the contact pressure such that the non-penetration constraint is always satisfied.
Likewise, we utilize a smoothed friction law in which the stick--slip transition is a continuous function of the slip displacement.
We then discretize the formulation using XFEM to embed interfaces inside finite elements.
In doing so, we devise a surface integration scheme for XFEM that can simply suppress oscillations in traction fields.
Subsequently, we develop a way to tailor the parameters of the barrier method to embedded interfaces, such that the resulting method can be used without parameter tuning.
Through numerical examples with varied levels of complexity, we verify and investigate the proposed method for handling embedded frictional interfaces, and discuss its performance compared with that of the penalty method which has similar computational cost. To limit the scope of the present work, we shall focus on single and stationary interfaces \revised{in two dimensions} throughout.
\revised{It is noted that the present scope is comparable to that of previous work on frictional contact on embedded interfaces ({\it e.g.}~\cite{annavarapu2014nitsche}).}
\section{Formulation}
In this section, we first formulate a variational equation for a solid with internal discontinuities, which are subjected to constraints on the contact condition and frictional sliding.
We then handle the constraints employing a smooth barrier energy function and a smoothed friction law.
Without loss of generality, we assume the quasi-static condition, absence of body force, and infinitesimal deformation.
\subsection{Problem statement}
Consider a domain $\Omega\subset\mathbb{R}^{\rm dim}$ delimited by boundary $\partial\Omega$ with outward normal vector $\bm{\upsilon}$.
The boundary is suitably decomposed into the displacement (Dirichlet) boundary $\partial_{u}\Omega$ where the displacement vector is prescribed as $\bar{\bm{u}}$, and the traction (Neumann) boundary $\partial_{t}\Omega$ where the traction vector is prescribed as $\bar{\bm{t}}$, such that $\partial\Omega=\overline{\partial_{u}\Omega\cup\partial_{t}\Omega}$ and $\emptyset=\partial_{u}\Omega\cap\partial_{t}\Omega$.
The domain contains a set of embedded, lower-dimensional interfaces $\Gamma$, across which the displacement field $\bm{u}(\bm{x})$ can be discontinuous ($\bm{x}$ denotes the position vector).
Each interface segment has two surfaces, and the positive and negative surfaces are denoted by $\Gamma_{+}$ and $\Gamma_{-}$, respectively.
In what follows, we shall use the subscripts $(\cdot)_{+}$ and $(\cdot)_{-}$ to denote quantities pertaining to the positive and negative sides.
For example, the two sides of the domain are denoted by $\Omega_{+}$ and $\Omega_{-}$.
We denote by $\bm{n}\equiv\bm{n}_{+}=-\bm{n}_{-}$ the unit normal vector to the interface pointing toward $\Gamma_{+}$ and by $\bm{m}$ the unit tangent vector pointing toward the slip direction.
Let $\jump{\bm{u}}:=\bm{u}_{+}-\bm{u}_{-}$ denote the relative displacement (displacement jump) across the interface surfaces.
The gap distance between the two surfaces is given by
\begin{equation}
u_{N} = \jump{\bm{u}}\cdot\bm{n}.
\end{equation}
The non-penetration constraint then can be written as
\begin{equation}
u_{N} \geq 0.
\label{eq:non-penetration-constraint}
\end{equation}
Let $\bm{t}\equiv\bm{t}_{+}=-\bm{t}_{-}$ denote the traction vector acting on the contact surfaces.
The traction vector is decomposed into its normal and tangential components as
\begin{equation}
\bm{t} = -p_{N}\bm{n} + \bm{t}_{T},
\label{eq:contact-traction-decomposition}
\end{equation}
where $p_{N}$ is the contact pressure, and $\bm{t}_{T}$ is the tangential traction vector.
It is noted that the contact pressure should be non-negative, {\it i.e.}
\begin{equation}
p_{N} \geq 0.
\label{eq:contact-pressure-constraint}
\end{equation}
The frictional sliding behavior is constrained by a friction law.
In this work, we consider the standard Coulomb friction law, which can be written as
\begin{linenomath}
\begin{align}
\tau \leq \mu p_{N},
\label{eq:friction-law}
\end{align}
\end{linenomath}
where $\tau:=\|\bm{t}_{T}\|$ is the tangential stress (resolved shear stress), and $\mu$ is the friction coefficient.
Conventionally, the constraint imposed by the friction law has been cast into a yield function.
However, we shall not follow this convention; instead, we introduce a new approach later in this section.
For simplicity and without loss of generality, we assume that the bulk material is linear elastic.
The constitutive behavior of the bulk material can then be written as
\begin{equation}
\tensor{\stress} = \mathbb{C}^{\mathrm{e}}:\tensor{\strain},
\end{equation}
where $\tensor{\stress}$ is the Cauchy stress tensor, $\tensor{\strain}:=\symgrad\bm{u}$ is the infinitesimal strain tensor defined as the symmetric gradient of the displacement field, and $\mathbb{C}^{\mathrm{e}}$ is the fourth-order elasticity stiffness tensor.
The strong form of the boundary-value problem can be stated as follows: find the displacement field $\bm{u}$ such that
\begin{linenomath}
\begin{align}
\diver\bm{\sigma} = \bm{0} \;\; &\text{in}\;\; \Omega\setminus\Gamma,
\label{eq:linear-momentum} \\
\bm{u} = \bar{\bm{u}} \;\; &\text{on}\;\; \partial_{u}\Omega,
\label{eq:displacement-bc} \\
\bm{\upsilon}\cdot\tensor{\stress} = \bar{\bm{t}} \;\; &\text{on}\;\; \partial_{t}\Omega,
\label{eq:traction-bc} \\
\bm{n}\cdot\tensor{\stress} = \bm{t}_{-} \;\; &\text{on}\;\; \Gamma_{-}, \\
-\bm{n}\cdot\tensor{\stress} = \bm{t}_{+} \;\; &\text{on}\;\; \Gamma_{+}.
\end{align}
\end{linenomath}
To develop the weak form of the problem, we define the spaces of trial functions and variations, respectively, as
\begin{linenomath}
\begin{align}
\bm{\mathcal{U}} &:= \{\bm{u}\,|\,\bm{u}\in H^{1}(\Omega),\; \bm{u}=\bar{\bm{u}}\; \text{on}\; \partial_{u}\Omega\}, \\
\bm{\mathcal{V}} &:= \{\bm{\eta}\,|\,\bm{\eta}\in H^{1}(\Omega),\; \bm{\eta}=\bm{0}\; \text{on}\; \partial_{u}\Omega\},
\end{align}
\end{linenomath}
where $H^{1}$ is the Sobolev space of order one.
Through the standard procedure, we arrive at the following variational equation
\begin{linenomath}
\begin{align}
\int_{\Omega\setminus\Gamma} \symgrad{\bm{\eta}}:\tensor{\stress}\, \mathrm{d} V
+ \int_{\Gamma} \jump{\bm{\eta}}\cdot\bm{t}\, \mathrm{d} A
- \int_{\partial_{t}\Omega} \bm{\eta}\cdot\bar{\bm{t}}\, \mathrm{d} A
= 0,
\label{eq:variational-form}
\end{align}
\end{linenomath}
where $\jump{\bm{\eta}}:=\bm{\eta}_{+} - \bm{\eta}_{-}$.
The focus of this work is on how to efficiently handle the second term in Eq.~\eqref{eq:variational-form} -- the virtual work done by the contact traction vector -- subjected to the contact constraints.
For this purpose, we decompose the contact surface integral into its normal and tangential components by substituting Eq.~\eqref{eq:contact-traction-decomposition}, {\it i.e.}
\begin{equation}
\int_{\Gamma} \jump{\bm{\eta}}\cdot\bm{t}\, \mathrm{d} A
= \int_{\Gamma} \jump{\bm{\eta}}\cdot(-p_{N}\bm{n})\, \mathrm{d} A
+ \int_{\Gamma} \jump{\bm{\eta}}\cdot\bm{t}_{T}\, \mathrm{d} A.
\label{eq:contact-integral}
\end{equation}
In what follows, we first derive $p_{N}$ from a barrier energy function to address the contact normal behavior, and then use a smoothed friction law to evaluate $\bm{t}_{T}$ in the sliding behavior.
\subsection{Barrier method}
In this work, we make use of a barrier method to treat the contact normal behavior subjected to the non-penetration constraint.
While one can apply the barrier method in a purely mathematical manner as in Li {\it et al.}~\cite{li2020incremental}, here we do it from a physical point of view to provide another perspective of the method.
The physical perspective helps build intuition for the barrier method, much as the spring interpretation does for the penalty method.
To begin, let us introduce an elastic barrier between the two faces of a discontinuity.
The elastic energy stored in the barrier can be represented by a barrier energy density function taking the gap distance as an argument.
In this work, we adapt the smooth barrier energy function proposed by Li {\it et al.}~\cite{li2020incremental} to embedded interfaces, which gives
\begin{linenomath}
\begin{align}
B(u_{N}) :=
\begin{cases}
-\kappa(u_{N} - \hat{d})^{2}\ln\left(\dfrac{u_{N}}{\hat{d}}\right) & \text{if}\;\; 0<u_{N}<\hat{d}, \\
0 & \text{if}\;\; u_{N}\geq\hat{d}.
\end{cases}
\label{eq:barrier-function}
\end{align}
\end{linenomath}
Here, $\hat{d}$ is the value of $u_{N}$ above which the value of the barrier function is zero.
In other words, the value of $\hat{d}$ defines the maximum value of gap distance at which the two surfaces are considered ``in contact.''
In this sense, $\hat{d}$ can be interpreted as the thickness of the barrier.
Also, $\kappa>0$ is a scalar parameter having the unit of pressure per length, which is introduced to let the unit of $B(u_{N})$ be energy per area.
Later in this section, we will show that $\kappa$ controls the stiffness of the barrier for a given $\hat{d}$.
Figure~\ref{fig:barrier-functions} illustrates how the values of the barrier energy density function vary with $\hat{d}$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{figures/barrier-function-d-hat.pdf}
\caption{Variation of the barrier energy density function, $B(u_{N})$, with $\hat{d}$. ($\kappa = 1$ MPa/m)}
\label{fig:barrier-functions}
\end{figure}
The contact pressure can then be derived based on energy conjugacy, {\it i.e.}
\begin{linenomath}
\begin{align}
p_{N} :=
-\dfrac{\partial B(u_{N})}{\partial u_{N}} =
\begin{cases}
\kappa(u_{N} - \hat{d})\left[2\ln\left(\dfrac{u_{N}}{\hat{d}}\right) - \dfrac{\hat{d}}{u_{N}} + 1\right] & \text{if}\;\; 0<u_{N}<\hat{d}, \\
0 & \text{if}\;\; u_{N} \geq \hat{d}.
\end{cases}
\label{eq:contact-pressure}
\end{align}
\end{linenomath}
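For completeness, Eq.~\eqref{eq:contact-pressure} follows from the product rule applied to Eq.~\eqref{eq:barrier-function}: for $0<u_{N}<\hat{d}$,
\begin{linenomath}
\begin{align}
\dfrac{\partial B(u_{N})}{\partial u_{N}}
= -\kappa\left[2(u_{N}-\hat{d})\ln\left(\dfrac{u_{N}}{\hat{d}}\right) + \dfrac{(u_{N}-\hat{d})^{2}}{u_{N}}\right]
= -\kappa(u_{N}-\hat{d})\left[2\ln\left(\dfrac{u_{N}}{\hat{d}}\right) - \dfrac{\hat{d}}{u_{N}} + 1\right],
\notag
\end{align}
\end{linenomath}
and negating this derivative gives $p_{N}$.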
Figure~\ref{fig:contact-pressure} shows the relations between the contact pressure and the gap distance derived from the barrier functions in Fig.~\ref{fig:barrier-functions}.
One can see that the contact pressure becomes positive when the gap distance becomes smaller than $\hat{d}$ and it increases toward infinity as the gap approaches zero.
The contact pressure thus prevents the gap from becoming zero, let alone negative, ensuring satisfaction of the non-penetration constraint, Eq.~\eqref{eq:non-penetration-constraint}.
It is also noted that the contact pressure is always non-negative, satisfying Eq.~\eqref{eq:contact-pressure-constraint}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{figures/contact-pressure-d-hat.pdf}
\caption{Variations of the contact pressure, $p_{N}$, with the gap distance, $u_{N}$, derived from the barrier energy functions in Fig.~\ref{fig:barrier-functions}.}
\label{fig:contact-pressure}
\end{figure}
Further, we can gain insight into the physical meaning of $\kappa$ by calculating the tangent of the (negative) contact pressure with respect to the gap distance, which can be regarded as the normal stiffness of the interface.
The tangent is given by
\begin{linenomath}
\begin{align}
k_{N} :=
\dfrac{\partial (-p_{N})}{\partial u_{N}} =
\begin{cases}
-2\kappa\ln\left(\dfrac{u_{N}}{\hat{d}}\right) - \kappa\dfrac{(u_{N}-\hat{d})(3u_{N} + \hat{d})}{u_{N}^{2}} & \text{if}\;\; 0<u_{N}<\hat{d}, \\
0 & \text{if}\;\; u_{N} \geq \hat{d}.
\end{cases}
\label{eq:barrier-stiffness}
\end{align}
\end{linenomath}
The above equation shows that the value of $\kappa$ directly controls the normal stiffness of the barrier-treated interface.
For this reason, hereafter we shall refer to $\kappa$ as the barrier stiffness parameter.
It is noted that this parameter affects not only the physical behavior of the barrier but also the matrix condition of the discretized problem.
Later in Section~\ref{sec:parameterization}, we will explain how to determine this parameter for an embedded interface such that it results in a physically meaningful and numerically efficient formulation.
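The three interface relations above translate directly into code. The following is a minimal sketch, not the authors' implementation; the names \texttt{d\_hat} and \texttt{kappa} stand for $\hat{d}$ and $\kappa$, and the values used in any demonstration are illustrative only.

```python
import math

def barrier_energy(u, d_hat, kappa):
    """Barrier energy density B(u_N) = -kappa*(u - d_hat)^2 * ln(u/d_hat)
    for 0 < u < d_hat, and zero once the gap exceeds the barrier thickness."""
    if u >= d_hat:
        return 0.0
    return -kappa * (u - d_hat) ** 2 * math.log(u / d_hat)

def contact_pressure(u, d_hat, kappa):
    """Contact pressure p_N = -dB/du_N; non-negative, and unbounded as u -> 0."""
    if u >= d_hat:
        return 0.0
    return kappa * (u - d_hat) * (2.0 * math.log(u / d_hat) - d_hat / u + 1.0)

def normal_stiffness(u, d_hat, kappa):
    """Interface normal stiffness k_N = d(-p_N)/du_N, used in a Newton tangent."""
    if u >= d_hat:
        return 0.0
    return (-2.0 * kappa * math.log(u / d_hat)
            - kappa * (u - d_hat) * (3.0 * u + d_hat) / u ** 2)
```

A quick finite-difference check confirms that the pressure is minus the energy derivative and the stiffness is minus the pressure derivative, which is a convenient consistency test when porting these formulas into a penalty-based code.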
\smallskip
\begin{remark}
The barrier method can be easily implemented and utilized with a trivial modification of any existing penalty-based code.
Concretely, one only needs to replace the contact pressure--penetration relationship in the penalty method with the contact pressure--gap distance relationship in the barrier method, Eq.~\eqref{eq:contact-pressure}.
So the computational cost of the barrier method is practically the same as that of the penalty method.
\end{remark}
\smallskip
\begin{remark}
The form of the barrier function~\eqref{eq:barrier-function}, proposed in Li {\it et al.}~\cite{li2020incremental}, has two features that distinguish itself from the standard logarithmic barrier function in optimization~\cite{boyd2004convex}: (i) it is truncated at $\hat{d}$, and (ii) a quadratic term is multiplied to a logarithmic term.
The first feature not only prevents the barrier energy from becoming negative but also localizes the barrier energy such that the method can be efficiently utilized for multiple pairs of contacting surfaces.
The second feature makes the barrier energy function $C^{2}$-continuous despite the truncation.
The $C^{2}$-continuity is critical to solving the problem using a gradient-based algorithm such as Newton's method.
\end{remark}
\smallskip
\begin{remark}
The use of the barrier method has made the non-penetration constraint strict ({\it i.e.}~$u_{N}>0$), since $p_{N}(u_{N})\rightarrow\infty$ as $u_{N}\rightarrow 0$.
While this change may be viewed as a source of numerical inaccuracy, it can also be physically justified in that no real surfaces can be perfectly closed together due to asperities at smaller scales.
In other words, surfaces can only be closed at the scales of their asperities.
A similar argument can be found from Wriggers {\it et al.}~\cite{wriggers1990finite}, among others.
\label{rem:impossible-zero-gap}
\end{remark}
\smallskip
\begin{remark}
One important difference between embedded interface problems and multi-body contact problems is that embedded interfaces are often closed in the initial condition.
For the reason explained in Remark~\ref{rem:impossible-zero-gap}, however, a perfectly closed discontinuity cannot be initialized in the barrier method.
To approximate an initially closed discontinuity, we need a new parameter that represents the initial ``gap'' distance of a closed interface.
We will thoroughly discuss this new parameter later in Section~\ref{sec:parameterization}, because its appropriate value can be determined after discretization.
\end{remark}
\revised{
\smallskip
\begin{remark}
While the value of $\hat{d}$ is considered constant in the barrier method, it can be updated through algorithmic iterations -- like interior point methods~\cite{boyd2004convex} -- to attain a more accurate solution.
At the expense of improved accuracy, however, such an update makes the method more complicated.
As will be demonstrated later, the barrier method gives sufficiently good solutions for engineering purposes, provided that the value of $\hat{d}$ is small enough.
Further, because $\hat{d}$ is the upper bound of the tolerable separation gap between contacting surfaces, it would often be more desirable to specify this upper bound explicitly rather than letting it depend implicitly on numerical parameters as in interior point methods.
Therefore, here we treat the barrier parameter (and hence the barrier function) as constant such that the resultant method can be utilized as efficiently and conveniently as the classic penalty method.
\end{remark}
}
\subsection{Smoothed friction law}
We now shift our attention to the frictional sliding along the interface.
While the sliding process can be described independently from the foregoing barrier treatment,
here we utilize a smoothed friction law that models the stick--slip transition behavior as a continuous function of the slip displacement~\cite{li2020incremental}.
The use of the smoothed friction law is motivated for two reasons.
First, this smoothing approach is consistent with the barrier treatment where the contact pressure varies smoothly with the gap distance.
Second, it provides significant robustness for problems in which stick and slip conditions are mixed, because the tangential traction increases mildly at the onset of slip, in contrast to the jump-like increase in the classic yield-function approach.
To begin, we introduce a friction smoothing function $m(u_T)\in[0,1]$ which continuously varies with the slip displacement magnitude $u_T := \jump{\bm{u}} \cdot \bm{m}$.
Similar to how we obtained the barrier energy function~\eqref{eq:barrier-function}, we modify the smoothing function in Li {\it et al.}~\cite{li2020incremental} -- originally proposed for multi-body contact dynamics -- for an embedded interface under quasi-static conditions.
The modified version reads
\begin{equation}
m(u_T) :=
\begin{cases}
-\dfrac{u_T^2}{\hat{s}^2} + \dfrac{2 |u_T|}{\hat{s}} & \text{if}\; |u_T| < \hat{s}, \\
1 & \text{if}\; |u_T| \geq \hat{s}.
\end{cases}
\label{eq:smoothed-friction-magnitude}
\end{equation}
Here, $\hat{s}$ is a parameter defining the amount of slip displacement at which the kinetic (dynamic) friction is mobilized.
In other words, the function is designed to allow for a small amount of slip in the static friction regime, which is often called microslip.
In this regard, $\hat{s}$ may be called the maximum microslip displacement.
Figure~\ref{fig:smoothed-friction} shows how the friction smoothing function varies with $\hat{s}$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{figures/friction-smoothing.pdf}
\caption{Variation of the friction smoothing function, $m(u_{T})$, with $\hat{s}$.}
\label{fig:smoothed-friction}
\end{figure}
We then multiply the smoothing function to the tangential stress constrained by the friction law.
When the Coulomb friction law~\eqref{eq:friction-law} is used, the smoothed friction law is
\begin{equation}
\tau = m(u_T)\mu p_N.
\end{equation}
Accordingly, the tangential traction vector is given by
\begin{equation}
\bm{t}_T = \tau\bm{m} = m(u_T)\mu p_N\bm{m}.
\label{eq:smoothed-tangential-traction}
\end{equation}
Notably, Eq.~\eqref{eq:smoothed-tangential-traction} is used to calculate the tangential traction without any yield function.
The main advantage of this smoothed friction law is that the derivative of $\bm{t}_T$ with respect to $u_T$ is a continuous function of $u_T$, because $m(u_T)$ is $C^{1}$-continuous.
This is in contrast with the classic yield function approach whereby the derivative changes discontinuously between zero and a non-zero value during a stick--slip transition.
Such a continuous change in the derivative of the smoothed friction law provides significant robustness when a stick--slip transition takes place during a gradient-based solution stage such as a Newton iteration.
This aspect will be demonstrated later through a numerical example.
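The smoothed law is simple to code. Below is a minimal scalar (1D) sketch of ours, with \texttt{s\_hat} standing for $\hat{s}$ and illustrative parameter values; it is not the authors' implementation.

```python
def friction_smoothing(u_t, s_hat):
    """Smoothing function m(u_T): a quadratic ramp -u_T^2/s_hat^2 + 2|u_T|/s_hat
    for |u_T| < s_hat, saturating at 1 once kinetic friction is fully mobilized."""
    if abs(u_t) >= s_hat:
        return 1.0
    return -(u_t / s_hat) ** 2 + 2.0 * abs(u_t) / s_hat

def tangential_traction(u_t, s_hat, mu, p_n):
    """Scalar version of t_T = m(u_T) * mu * p_N, signed by the slip direction."""
    sign = 1.0 if u_t >= 0.0 else -1.0
    return sign * friction_smoothing(u_t, s_hat) * mu * p_n
```

In the vector setting of the text, the scalar sign is replaced by the unit tangent $\bm{m}$, {\it i.e.}~$\bm{t}_T = m(u_T)\,\mu\,p_N\,\bm{m}$; note that $m(u_T)$ ramps continuously from 0 to 1, so the tangent operator stays bounded through the stick--slip transition.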
\smallskip
\begin{remark}
Although we have distinguished the parameter $\hat{s}$ from $\hat{d}$ for its mathematical definition, in practice one can simply assign the same value for these two parameters.
According to our experience, any value of $\hat{s}$ works well as long as it is less than the maximum slip displacement.
Because a reasonable value of $\hat{d}$ must be much smaller than the maximum slip displacement, it is feasible and practical to let $\hat{s}=\hat{d}$.
By doing so, only the value of $\hat{d}$ needs to be prescribed at one's discretion.
\end{remark}
\smallskip
\begin{remark}
Like the gap displacement ($u_{N} < \hat{d}$) arising in the barrier treatment, the slip displacement in the static friction regime ($u_{T} < \hat{s}$) can also be justified from a physical viewpoint.
Due to asperities, real contacting surfaces often exhibit a measurable amount of microslip displacement in the static friction regime ({\it e.g.}~\cite{sandeep2019experimental}).
Indeed, some researchers have intentionally incorporated microslip displacement using a penalty-like formulation, see, {\it e.g.}~Wriggers {\it et al.}~\cite{wriggers1990finite}.
In this regard, the smoothed friction law may even be viewed as more realistic than the conventional method.
We also note that the smoothed friction law has two key advantages over the penalty-like model accounting for microslip: (i) it allows us to directly control the maximum microslip displacement via $\hat{s}$, and (ii) it is numerically more robust thanks to its smooth variation with the slip displacement.
\label{rem:microslip}
\end{remark}
\section{Discretization and algorithms}
In this section, we discretize the formulation using finite elements with embedded discontinuities.
Broadly speaking, there are two classes of finite element methods with embedded discontinuities: (i) the AES method, whereby the degrees of freedom of cracked elements are enriched locally~\cite{simo1990class}, and (ii) the XFEM approach, whereby the degrees of freedom are enriched globally~\cite{moes1999finite}.
Both classes of methods rely commonly on enrichment of finite elements, while the specific kinematic modes of enriched basis functions differ by the particular type of method.
Given that the XFEM enrichment introduces complete kinematic modes for cracked elements (see, {\it e.g.}~Cusini {\it et al.}~\cite{cusini2021simulation} for a relevant discussion), here we choose XFEM as our particular means to study the performance of the barrier method.
We note that the barrier formulation can also be discretized by other types of embedded finite element methods, and we believe that the conclusions drawn from the present work may apply equally well to interfaces embedded by other types of methods.
\subsection{Enrichment of finite elements}
Enrichment of finite elements builds on the idea of decomposing the solution field into its continuous part -- which can be approximated by the standard continuous finite elements -- and its discontinuous part -- which can be approximated by enriching the degrees of freedom of cracked elements.
For the displacement field $\bm{u}(\bm{x})$ in a domain cut by the interfaces
$\Gamma$, let $\bar{\bm{u}}(\bm{x})$ and $\jump{\bm{u}}(\bm{x})$ denote its continuous and discontinuous parts, respectively.
Then the decomposition of the displacement field can be written as~\cite{borja2008assumed}
\begin{equation}
\bm{u} (\bm{x}) = \bar{\bm{u}}(\bm{x}) + M_{\Gamma}(\bm{x}) \jump{\bm{u}} (\bm{x}).
\label{eq:displacement-decomposition}
\end{equation}
Here, $M_{\Gamma}(\bm{x})$ is a scalar function for enrichment, given by
\begin{equation}
M_{\Gamma}(\bm{x}) = H_{\Gamma} (\bm{x}) - f^h (\bm{x}),
\end{equation}
where $H_{\Gamma} (\bm{x})$ is the Heaviside function, which is discontinuous on the surface $\Gamma$ as
\begin{equation}
H_{\Gamma} (\bm{x}) =
\begin{cases}
1 & \text{if}\;\; \bm{x} \in \Omega_{+} \\
0 & \text{if}\;\; \bm{x} \in \Omega_{-}\,,
\end{cases}
\end{equation}
and $f^h(\bm{x})$ is a smooth blending function that satisfies
\begin{equation}
f^h(\bm{x}) \jump{\bm{u}}(\bm{x}) = \sum_{I \in n_{nodes}} N_I(\bm{x}) H_{\Gamma}(\bm{x}_I) \jump{\bm{u}}(\bm{x}_I)
\end{equation}
with $n_{nodes}$ denoting the number of nodes in an element.
Enriched finite elements are then constructed by interpolating $\bar{\bm{u}}(\bm{x})$ using the standard (continuous Galerkin) finite element method and interpolating $\jump{\bm{u}}(\bm{x})$ through enrichment.
The standard finite element interpolation of $\bar{\bm{u}} (\bm{x})$ may be written in two equivalent ways~\cite{cusini2021simulation}: (i) one using scalar shape functions $N_I(\bm{x})$ and vector-valued coefficients $\bm{d}_I$ for the $I$-th node, and (ii) the other using vector-valued shape functions $\bm{N}_i(\bm{x})$ and scalar coefficients $d_i$ for the $i$-th degree of freedom in standard finite elements.
So we can write the standard finite element approximation of $\bar{\bm{u}} (\bm{x})$ as
\begin{equation}
\bar{\bm{u}} (\bm{x}) \approx \sum_{I \in n_{nodes}} N_I (\bm{x}) \bm{d}_I = \sum_{i \in n_{std}} \bm{N}_i(\bm{x}) d_i,
\label{eq:interpolation-continuous-u}
\end{equation}
where $n_{std}$ is the number of degrees of freedom in standard finite elements. Next, enrichment of $\jump{\bm{u}}(\bm{x})$ can be written as
\begin{equation}
\jump{\bm{u}}(\bm{x}) = \sum_{j \in n_{enr}} \bm{\xi}_j(\bm{x}) a_j,
\label{eq:interpolation-discontinuous-u}
\end{equation}
where $\bm{\xi}_j(\bm{x})$ and $a_j$ are the vector-valued shape functions and scalar coefficients for the $j$-th enriched degree of freedom, respectively, and $n_{enr}$ is the number of enriched degrees of freedom.
Inserting Eqs.~\eqref{eq:interpolation-continuous-u} and~\eqref{eq:interpolation-discontinuous-u} into Eq.~\eqref{eq:displacement-decomposition} gives the enriched finite element approximation of $\bm{u}(\bm{x})$ as
\begin{linenomath}
\begin{align}
\bm{u} (\bm{x})
&\approx \sum_{i \in n_{std}} \bm{N}_i(\bm{x}) d_i + M_{\Gamma}(\bm{x}) \sum_{j \in n_{enr}} \bm{\xi}_j (\bm{x}) a_j
= \sum_{i \in n_{std}} \bm{N}_i(\bm{x}) d_i + \sum_{j \in n_{enr}} \bm{\phi}_j(\bm{x}) a_j,
\label{eq:displacement-expansion}
\end{align}
\end{linenomath}
where a new function $\bm{\phi}_j(\bm{x})$ is defined as
\begin{equation}
\bm{\phi}_j(\bm{x}) := H_{\Gamma}(\bm{x}) \bm{\xi}_j(\bm{x}) - \sum_{I \in n_{nodes}} N_I(\bm{x}) H_{\Gamma}(\bm{x}_I) \bm{\xi}_j (\bm{x}_I).
\end{equation}
The foregoing formulation is general for a wide class of enriched finite element methods, and the particular type of enriched method is determined by the specific form of $\bm{\phi}_j$.
In what follows, we specialize the formulation to XFEM.
\subsection{Extended finite element method}
The XFEM approach enriches global degrees of freedom to interpolate the discontinuous part of the displacement field, $\jump{\bm{u}}(\bm{x})$, in the same way as the continuous part, $\bar{\bm{u}}(\bm{x})$.
This enrichment allows XFEM to describe the complete kinematic modes of the discontinuities, unlike locally enriched AES methods where some kinematic modes are neglected by design.
In exchange for this feature, however, XFEM entails a larger number of enriched degrees of freedom than the AES methods.
Since XFEM interpolates $\jump{\bm{u}}(\bm{x})$ in the same manner as the standard finite element method, $n_{enr} = n_{std}$, $\bm{\xi}_j(\bm{x}) = \bm{N}_j(\bm{x})$, and $\bm{\phi}_j(\bm{x})$ takes the following form
\begin{equation}
\bm{\phi}_j(\bm{x}) = [H_{\Gamma}(\bm{x}) - H_{\Gamma}(\bm{x}_j)] \bm{N}_{j}(\bm{x}).
\end{equation}
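To make the shifted enrichment concrete, the following minimal sketch (with hypothetical numbers: a one-dimensional linear element on $[0,1]$ cut at $x_\Gamma=0.4$) evaluates $\bm{\phi}_j$ and checks its two defining properties: it vanishes at the element nodes, and it jumps by $N_j(x_\Gamma)$ across the interface.

```python
import numpy as np

# Hypothetical 1D setting: a linear element on [0, 1] cut by an interface
# at x_gamma = 0.4, with nodes at x = 0 and x = 1.
x_gamma = 0.4
nodes = np.array([0.0, 1.0])

def N(j, x):
    """Standard linear shape functions on [0, 1]."""
    return 1.0 - x if j == 0 else x

def H(x):
    """Heaviside function: 1 on Omega_+ (x > x_gamma), 0 on Omega_-."""
    return 1.0 if x > x_gamma else 0.0

def phi(j, x):
    """Shifted XFEM enrichment: phi_j(x) = [H(x) - H(x_j)] N_j(x)."""
    return (H(x) - H(nodes[j])) * N(j, x)

eps = 1e-9
# phi_j vanishes at both nodes, so the enrichment does not alter nodal
# values, while it is discontinuous across the interface with jump
# equal to N_j(x_gamma).
jump_phi0 = phi(0, x_gamma + eps) - phi(0, x_gamma - eps)
```

The same construction carries over verbatim to higher dimensions, with $N_j$ the multi-dimensional shape functions.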
Because XFEM employs the standard Galerkin method, it approximates the variation $\bm{\eta}$ in the same way as $\bm{u}$, {\it i.e.}
\begin{equation}
\bm{\eta} = \bar{\bm{\eta}} + M_{\Gamma} \jump{\bm{\eta}}.
\end{equation}
Note that $\bar{\bm{\eta}}$ and $\jump{\bm{\eta}}$ are two independent variations.
The variation can be further expanded like Eq.~\eqref{eq:displacement-expansion}, which we omit for brevity.
Substituting the enriched finite element approximations of $\bm{u}$ and $\bm{\eta}$ into Eq.~\eqref{eq:variational-form}, we arrive at the following two discrete variational equations (in residual form):
\begin{linenomath}
\begin{align}
\bm{\mathcal{R}}_i^{d} &:=
\int_{\Omega \setminus \Gamma} \symgrad \bm{N}_i : \bm{\sigma} \, \mathrm{d} V
- \int_{\partial_t \Omega} \bm{N}_i \cdot \bar{\bm{t}} \, \mathrm{d} A
= 0, \quad i = 1,2,\ldots, n_{std},
\label{eq:weak-form-standard} \\
\bm{\mathcal{R}}_j^{a} &:=
\int_{\Omega \setminus \Gamma} \symgrad \bm{\phi}_j : \bm{\sigma} \, \mathrm{d} V
+ \int_{\Gamma} \bm{N}_j \cdot \bm{t} \, \mathrm{d} A
= 0, \quad j = 1,2,\ldots, n_{enr}.
\label{eq:weak-form-enriched}
\end{align}
\end{linenomath}
We use Newton's method to solve Eqs.~\eqref{eq:weak-form-standard} and~\eqref{eq:weak-form-enriched}.
The $k$th Newton iteration proceeds in the following two steps:
\begin{linenomath}
\begin{align}
\text{solving} &\quad \bm{\mathcal{J}}^k \Delta \bm{X} = - \bm{R}^k, \label{eq:newton-linear-system}\\
\text{updating} &\quad \bm{X}^{k+1} = \bm{X}^{k} + \Delta \bm{X}.
\end{align}
\end{linenomath}
Here, $\bm{R}$ and $\bm{X}$ are the residual vector and the unknown vector, respectively, which are given by
\begin{linenomath}
\begin{align}
\bm{R}\left(\bm{X}\right)
:= \begin{Bmatrix}
\bm{\mathcal{R}}^d\\
\bm{\mathcal{R}}^a
\end{Bmatrix}, \quad
\bm{X}
:= \begin{Bmatrix}
\bm{d} \\
\bm{a}
\end{Bmatrix},
\label{eq:residual-unknown}
\end{align}
\end{linenomath}
with $\bm{d}$ and $\bm{a}$ denoting the nodal solution vectors for the standard and enriched degrees of freedom, respectively.
The Jacobian matrix, $\bm{\mathcal{J}}$, is given by
\begin{linenomath}
\begin{align}
\bm{\mathcal{J}} := \frac{\partial \bm{R}\left(\bm{X}\right)}{\partial \bm{X}} =
\begin{bmatrix}
\partial_{\bm{d}} \bm{\mathcal{R}}^d & \partial_{\bm{a}} \bm{\mathcal{R}}^d \\
\partial_{\bm{d}} \bm{\mathcal{R}}^a & \partial_{\bm{a}} \bm{\mathcal{R}}^a
\end{bmatrix}.
\label{eq:jacobian}
\end{align}
\end{linenomath}
In this work, we use a direct linear solver in each Newton iteration.
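The iteration can be sketched as follows; the residual below is a hypothetical two-equation toy system standing in for the assembled XFEM equations, and the linear system is solved with a direct solver as in the text.

```python
import numpy as np

# Hypothetical toy residual R(X) = 0 standing in for the assembled system
# of Eqs. (weak-form-standard)/(weak-form-enriched); its root is X = (1, 1).
def residual(X):
    x, y = X
    return np.array([x**2 + y - 2.0, x + y**2 - 2.0])

def jacobian(X):
    x, y = X
    return np.array([[2.0 * x, 1.0],
                     [1.0, 2.0 * y]])

def newton(X, tol=1e-10, max_iter=25):
    for _ in range(max_iter):
        R = residual(X)
        if np.linalg.norm(R) < tol:
            break
        dX = np.linalg.solve(jacobian(X), -R)  # direct solve of J dX = -R
        X = X + dX                             # update X^{k+1} = X^k + dX
    return X

X_sol = newton(np.array([2.0, 0.5]))
```

With an exact Jacobian, the iteration converges quadratically near the solution, which is the behavior targeted by the consistent tangent operator derived below.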
In the following, we describe two points that are new in the present work: (i) an integration-point-level algorithm to update the traction vector $\bm{t}$ treated by the barrier method, and (ii) a new surface integration scheme to alleviate traction oscillations that may occur in some problems.
\subsection{Traction update algorithm}
Algorithm~\ref{algo:traction-update} describes how to update the barrier-treated traction vector at an integration point during a Newton iteration.
For notational brevity, all quantities at the current time step are written without subscripts, while the displacement jump at the previous time step is denoted by $\jump{\bm{u}}_{n}$.
The calculation of the contact pressure ($p_{N}$) and the tangential traction ($\bm{t}_{T}$) was explained in the previous section.
It is noted that the tangential traction is computed without evaluating a yield function, due to the use of the smoothed friction law~\eqref{eq:smoothed-tangential-traction}.
The last step of the algorithm is to compute the tangent operator of the traction with respect to the displacement jump, which is critical to the optimal convergence of Newton's method.
Specifically, the tangent operator can be calculated as:
\begin{linenomath}
\begin{align}
\mathbb{C}_{\text{interface}}
:= \dfrac{\partial \bm{t}}{\partial \jump{\bm{u}}}
&= \dfrac{\partial (-p_N \bm{n})}{\partial \jump{\bm{u}}}
+ \dfrac{\partial \bm{t}_T}{\partial \jump{\bm{u}}} \nonumber\\
&= k_{N} (\bm{n} \otimes \bm{n})
- \mu k_{N} m(u_T)(\bm{m} \otimes \bm{n})
+ \mu p_N \dfrac{\partial m(u_T)}{\partial u_T}(\bm{m} \otimes \bm{m})
\label{eq:traction-CTO}
\end{align}
\end{linenomath}
where $k_{N} := -\partial{p_{N}}/\partial{u_{N}}$ is the normal stiffness defined in Eq.~\eqref{eq:barrier-stiffness}.
Notably, when $\|\bm{u}_{T}\|$ is found to be zero ({\it i.e.}~no slip), one may set $\bm{m}=\bm{0}$ in the subsequent steps.
\begin{algorithm}[h!]
\setstretch{1.2}
\caption{Traction update procedure in the barrier-treated enriched finite element method.}
\begin{algorithmic}[1]
\Require $\Delta \jump{\bm{u}}$ and $\bm{n}$ at the current step, and $\jump{\bm{u}}_{n}$ at the previous step.
\State Update the displacement jump in the current step, $\jump{\bm{u}} = \jump{\bm{u}}_{n} + \Delta \jump{\bm{u}}$.
\State Calculate the normal displacement jump $u_N = \jump{\bm{u}}\cdot\bm{n}$.
\State Compute the contact pressure, $p_N = -\partial B/\partial u_N$, as in Eq.~\eqref{eq:contact-pressure}.
\State Calculate the slip displacement vector, $\bm{u}_T = \jump{\bm{u}} - u_{N}\bm{n}$, the slip magnitude, $u_T = \|\bm{u}_T\|$, and the slip direction vector, $\bm{m}=\bm{u}_T/\|\bm{u}_T\|$.
\State Compute the tangential traction vector, $\bm{t}_T = m(u_T)\mu p_N\bm{m}$.
\State Calculate the traction vector, $\bm{t} = -p_N \bm{n} + \bm{t}_T$.
\State Compute the tangent operator, $\mathbb{C}_{\text{interface}}$, as in Eq.~\eqref{eq:traction-CTO}.
\Ensure $\bm{t}$ and $\mathbb{C}_{\text{interface}}$ at the current step.
\end{algorithmic}
\label{algo:traction-update}
\end{algorithm}
One can see that the traction update procedure of the barrier method is as simple as that of the classic penalty method -- perhaps even simpler because it does not require one to evaluate a yield function.
We again note that one can easily change a penalty-based code to a barrier-based code, by substituting Algorithm~\ref{algo:traction-update} into an existing traction update algorithm.
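As an illustration, the following sketch implements Algorithm~\ref{algo:traction-update} at a single integration point in two dimensions. The contact pressure and normal stiffness follow the barrier expressions used throughout this section, evaluated at the current gap $u_N$; the smoothing function $m(u_T)$, however, is an assumed quadratic stand-in for Eq.~\eqref{eq:smoothed-tangential-traction}, not the exact expression of the text.

```python
import numpy as np

def barrier_pressure(u_N, d_hat, kappa):
    """Contact pressure p_N = -dB/du_N and stiffness k_N = -dp_N/du_N,
    from the barrier expressions evaluated at the current gap u_N
    (the barrier keeps u_N > 0; both vanish once u_N >= d_hat)."""
    if u_N >= d_hat:
        return 0.0, 0.0
    p_N = kappa * (u_N - d_hat) * (2.0 * np.log(u_N / d_hat) - d_hat / u_N + 1.0)
    k_N = (-2.0 * kappa * np.log(u_N / d_hat)
           - kappa * (u_N - d_hat) * (3.0 * u_N + d_hat) / u_N**2)
    return p_N, k_N

def smoother(u_T, s_hat):
    """ASSUMED C1 quadratic microslip smoother m(u_T) and dm/du_T."""
    if u_T >= s_hat:
        return 1.0, 0.0
    return u_T * (2.0 * s_hat - u_T) / s_hat**2, 2.0 * (s_hat - u_T) / s_hat**2

def update_traction(jump_prev, d_jump, n, mu, d_hat, s_hat, kappa):
    """One pass of Algorithm 1 at an integration point (2D vectors)."""
    jump = jump_prev + d_jump                        # step 1
    u_N = jump.dot(n)                                # step 2
    p_N, k_N = barrier_pressure(u_N, d_hat, kappa)   # step 3
    u_T_vec = jump - u_N * n                         # step 4
    u_T = np.linalg.norm(u_T_vec)
    m_dir = u_T_vec / u_T if u_T > 0.0 else np.zeros_like(n)
    m_val, dm = smoother(u_T, s_hat)
    t_T = m_val * mu * p_N * m_dir                   # step 5
    t = -p_N * n + t_T                               # step 6
    C = (k_N * np.outer(n, n)                        # step 7, tangent operator
         - mu * k_N * m_val * np.outer(m_dir, n)
         + mu * p_N * dm * np.outer(m_dir, m_dir))
    return t, C
```

Note that, as stated above, no yield function is evaluated anywhere in the update: the smoothed law covers both the stick and slip regimes.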
\subsection{Averaged surface integration scheme}
As a final task of discretization, we devise a simple method that mitigates spurious oscillations in traction field solutions.
Embedded finite element solutions often exhibit non-physical oscillations in traction fields when the enriched degrees of freedom exert strong constraints on the kinematics.
The origin of such oscillations can be attributed to the inf--sup stability problem common in various types of constrained problems ({\it e.g.}~\cite{bochev2006stabilization,choo2015stabilized,choo2019stabilized,zhao2020stabilized}).
Several strategies have been proposed to stabilize traction oscillations in embedded finite elements employing the Lagrange multiplier or the penalty method~\cite{ji2004strategies,bechet2009stable,liu2010stabilized}.
While these methods have been found effective, their implementation often requires non-trivial effort.
Here, we present a simple approach to stabilizing traction oscillations in embedded finite elements that employ penalty-like methods, including the barrier method.
The approach recasts the weak penalty formulation proposed by Svenning~\cite{svenning2016weak} -- originally developed for a mixed discretization of non-embedded finite elements -- to an XFEM discretization.
In essence, the weak penalty method evaluates the surface integral in the variational equation after projecting $\tilde{\bm{u}}$ onto a piecewise constant space.
The projection operator is defined as
\begin{equation}
\mathcal{P} (\tilde{\bm{u}}) := \dfrac{1}{|\Gamma^e|} \int_{\Gamma^e} \tilde{\bm{u}} \, \mathrm{d} A, \;\; \text{where}\;\;
|\Gamma^e| := \int_{\Gamma^e} \, \mathrm{d} A
\label{eq:projection-opereator-original}
\end{equation}
with $\Gamma^e$ denoting the interface element in non-embedded finite elements.
Importantly, when the displacement field is approximated by linear shape functions, the projection should be performed over two elements across the interface element~\cite{svenning2016weak}.
We now modify the projection operator~\eqref{eq:projection-opereator-original} for XFEM.
As is standard, we consider a linear approximation of the displacement field, focus on an element cut by a single interface, and introduce two integration points on the interface to evaluate the surface integral in Eq.~\eqref{eq:weak-form-enriched}.
In this case, the cracked element may be viewed as a collection of two displacement elements (where volume integrals are evaluated) and an interface element (where surface integrals are evaluated).
Applying the projection operator~\eqref{eq:projection-opereator-original} to the cracked element then gives a constant value of $\tilde{\bm{u}}$ over the element.
To express this idea mathematically, let $\bm{x}_{Q1}$ and $\bm{x}_{Q2}$ denote the position vectors of the two surface integration points, $A_{Q1}$ and $A_{Q2}$ denote the surface areas calculated at the respective integration points, and $\jump{\bm{u}}_{Q1}$ and $\jump{\bm{u}}_{Q2}$ denote the displacement jump fields at the points.
Then the projection operator for XFEM can be written as
\begin{linenomath}
\begin{align}
\mathcal{P}_{\text{XFEM}}(\tilde{\bm{u}})
&= \dfrac{1}{A_{Q1} + A_{Q2}} [A_{Q1}\tilde{\bm{u}}_{Q1} + A_{Q2} \tilde{\bm{u}}_{Q2} ] \nonumber \\
&= \dfrac{1}{A_{Q1} + A_{Q2}} [A_{Q1}\sum_{I \in n_{nodes}} N_I (\bm{x}_{Q1}) \tilde{\bm{u}}_I + A_{Q2}\sum_{I \in n_{nodes}} N_I (\bm{x}_{Q2}) \tilde{\bm{u}}_I] \nonumber \\
&= \dfrac{1}{2} \sum_{I \in n_{nodes}} [ N_I (\bm{x}_{Q1}) + N_I (\bm{x}_{Q2}) ] \tilde{\bm{u}}_I.
\end{align}
\end{linenomath}
In words, the operator averages the displacement jumps calculated at the two integration points.
So we shall refer to this method as an averaged (surface) integration scheme.
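The projection can be sketched as follows, assuming a linear element cut by a straight interface segment with two Gauss points (so that $A_{Q1}=A_{Q2}$ and the weights reduce to $1/2$, as in the last line of the derivation above); the nodal jump coefficients and shape-function values in the example are hypothetical.

```python
import numpy as np

def averaged_jump(N_Q1, N_Q2, jump_nodal):
    """Project the jump field onto a single element-wise constant value.

    N_Q1, N_Q2 : shape-function values (n_nodes,) at the two surface points.
    jump_nodal : nodal jump coefficients (n_nodes, dim).
    Equal 1/2 weights assume A_Q1 = A_Q2 (two Gauss points on a straight
    segment)."""
    N_avg = 0.5 * (np.asarray(N_Q1) + np.asarray(N_Q2))
    return N_avg @ np.asarray(jump_nodal)

# Hypothetical example: a jump that varies linearly along the interface
# (1.6 at Q1, 2.4 at Q2) is replaced by its average, which both
# integration points then share.
jump_avg = averaged_jump([0.7, 0.3], [0.3, 0.7], [[1.0, 0.0], [3.0, 0.0]])
```

Both integration points then evaluate the traction with this single averaged jump, which is the entire implementation cost of the scheme.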
To summarize, we stabilize traction oscillations using the averaged integration scheme
\begin{equation}
\jump{\bm{u}}_{Q1} = \jump{\bm{u}}_{Q2} = \mathcal{P}_{\text{XFEM}}(\tilde{\bm{u}}),
\end{equation}
which is an XFEM version of the weak penalty formulation proposed by Svenning~\cite{svenning2016weak}.
The scheme uses a single value of the displacement jump to evaluate the surface integral in Eq.~\eqref{eq:weak-form-enriched} within a cracked element.
As will be demonstrated later, the averaged integration scheme can effectively suppress traction oscillations in XFEM.
The key advantage of this method over existing methods is that it can be implemented with trivial effort.
It would be worthwhile to note two points regarding the averaged integration scheme.
First, the scheme is applicable to general penalty-like methods that share the same mathematical structure.
Indeed, it works well for the barrier method because the barrier method has the same mathematical structure as the penalty method, as discussed earlier.
Second, when the standard two-point integration scheme does not show traction oscillations, the averaged scheme would be sub-optimal in terms of accuracy.
So we propose the averaged scheme as an option for problems in which the standard scheme gives severe traction oscillations, rather than as a wholesale alternative to the standard scheme for general contact problems.
Later in numerical examples, we use the standard integration scheme by default, and switch to the averaged scheme if severe oscillations are observed.
\section{Parameterization}
\label{sec:parameterization}
In this section, we tailor the parameters of the barrier method to embedded interfaces, such that the method can be used without parameter tuning.
The motivation is twofold: (i) application of the barrier method to embedded interfaces requires a new parameter that represents the initial ``gap'' of a closed interface, and (ii) the existing way to determine the barrier stiffness parameter ($\kappa$) cannot be applied to embedded interfaces under quasi-static conditions.
In what follows, we develop a way to constrain these two parameters for a given interface embedded in finite elements. Then the barrier method has only one free parameter, $\hat{d}$, which controls the solution accuracy ($\hat{s}=\hat{d}$ is assumed).
The value of $\hat{d}$ is then recommended as a function of the problem domain size, which gives sufficiently accurate numerical solutions in most cases.
To begin, let us denote by $d_0$ the initial gap of a closed interface.
By definition, $d_0$ should range between $0$ and $\hat{d}$, but it is not obvious how to choose a value within this range.
To answer this question, we recall that the gap distance determines the contact pressure and the normal stiffness of a barrier-treated interface.
So we can obtain the initial values of the contact pressure and the normal stiffness by inserting $d_0$ into Eqs.~\eqref{eq:contact-pressure} and~\eqref{eq:barrier-stiffness}, respectively.
This operation gives the initial contact pressure, $p_{N,0}$, and the initial normal stiffness, $k_{N,0}$, as follows:
\begin{linenomath}
\begin{align}
p_{N,0} &= \kappa(d_{0} - \hat{d})\left[2\ln\left(\dfrac{d_{0}}{\hat{d}}\right) - \dfrac{\hat{d}}{d_{0}} + 1\right], \label{eq:initial-contact-pressure} \\
k_{N,0} &= -2\kappa\ln\left(\dfrac{d_{0}}{\hat{d}}\right) - \kappa\dfrac{(d_{0}-\hat{d})(3d_{0} + \hat{d})}{d_{0}^{2}}. \label{eq:initial-barrier-stiffness}
\end{align}
\end{linenomath}
It can be seen that for a given value of $\hat{d}$, $d_0$ and $\kappa$ together determine $p_{N,0}$ and $k_{N,0}$.
Thus we seek to constrain the values of $d_0$ and $\kappa$ by prescribing the values of $p_{N,0}$ and $k_{N,0}$.
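Eqs.~\eqref{eq:initial-contact-pressure} and~\eqref{eq:initial-barrier-stiffness} can be evaluated directly; the sample inputs below are normalized ($\hat{d}=\kappa=1$) for illustration.

```python
import numpy as np

def initial_state(d0, d_hat, kappa):
    """Evaluate p_N0 and k_N0 from Eqs. (initial-contact-pressure) and
    (initial-barrier-stiffness) for a trial pair (d0, kappa), 0 < d0 < d_hat."""
    p_N0 = kappa * (d0 - d_hat) * (2.0 * np.log(d0 / d_hat) - d_hat / d0 + 1.0)
    k_N0 = (-2.0 * kappa * np.log(d0 / d_hat)
            - kappa * (d0 - d_hat) * (3.0 * d0 + d_hat) / d0**2)
    return p_N0, k_N0

# For instance, the recommended d0 = 0.376 d_hat (derived below) gives
# p_N0 ~ 2.256 kappa d_hat and k_N0 ~ 11.35 kappa.
p0, k0 = initial_state(0.376, 1.0, 1.0)
```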
We first prescribe the value of $p_{N,0}$ based on a physical argument: the optimal value of $p_{N,0}$ is the contact pressure that would act on the interface if it were perfectly closed in the initial condition.
This optimal value, which we denote by $(p_{N,0})_{\text{optimal}}$, can be calculated or approximated based on our knowledge of the geometry and boundary conditions of the problem at hand.
So we postulate that $(p_{N,0})_{\text{optimal}}$ is given and assign $p_{N,0}=(p_{N,0})_{\text{optimal}}$, in order to let the barrier-approximated interface behave as much like a perfectly closed interface as possible.
Next, we prescribe the value of $k_{N,0}$ such that it makes the condition of the Jacobian matrix~\eqref{eq:jacobian} as good as possible.
When the bulk material has a constant stiffness, the optimal value of $k_{N,0}$ is approximately $E/h$, where $E$ is Young's modulus and $h$ is the element size.
See \ref{appendix:A} for the details of the approximation.
We thus seek to find the value of $d_{0}$ that makes $k_{N,0}=E/h$ when $(p_{N,0})_{\text{optimal}}$ is given.
To this end, we rearrange Eq.~\eqref{eq:initial-contact-pressure} in terms of $\kappa$, insert the expression for $\kappa$ into Eq.~\eqref{eq:initial-barrier-stiffness}, equate $p_{N,0}$ to $(p_{N,0})_{\text{optimal}}$, and let $k_{N,0}=E/h$.
This gives
\begin{equation}
g(d_0/\hat{d}) = \dfrac{E}{(p_{N,0})_{\text{optimal}}}\dfrac{\hat{d}}{h},
\label{eq:d0_optimal}
\end{equation}
where
\begin{equation}
g(d_0/\hat{d}) := \dfrac{-2 \ln(d_0/\hat{d}) - [1-(d_0/\hat{d})^{-1}][3+(d_0/\hat{d})^{-1}]}{(d_0/\hat{d} - 1) \left[ 2 \ln (d_0/\hat{d}) - (d_0/\hat{d})^{-1} + 1 \right]}.
\end{equation}
One can try to solve Eq.~\eqref{eq:d0_optimal} to find the optimal value of $d_0/\hat{d}$.
Unfortunately, however, Eq.~\eqref{eq:d0_optimal} is usually not solvable, because the minimum value of the left-hand side, $g(d_0/\hat{d})$, is usually greater than the value of the right-hand side, $[E/(p_{N,0})_{\text{optimal}}](\hat{d}/h)$.
To illustrate this point, in Fig.~\ref{fig:bowl-shaped-function} we plot the values of $g(d_0/\hat{d})$ in the range of $0\leq d_0/\hat{d} \leq 1$.
As can be seen, $g(d_0/\hat{d})$ is a bowl-shaped function and attains its minimum value of $\sim 5.03$ when $d_0/\hat{d}=0.376$.
However, $[E/(p_{N,0})_{\text{optimal}}](\hat{d}/h)$ is typically lower than $\sim 5.03$ because $\hat{d}\ll h$ in practical analysis settings.
So Eq.~\eqref{eq:d0_optimal} does not have a solution in almost all cases.
Still, however, we can see that $d_0/\hat{d}=0.376$ brings the value of $g(d_0/\hat{d})$ closest to the target value, which means that it makes the matrix condition as good as possible under the given physical and numerical parameters.
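The location and value of this minimum can be verified with a simple grid search:

```python
import numpy as np

# Grid check of the bowl-shaped function g(x), x = d0/d_hat, reproducing
# the minimum reported in the text: g ~ 5.03 attained at x ~ 0.376.
def g(x):
    num = -2.0 * np.log(x) - (1.0 - 1.0 / x) * (3.0 + 1.0 / x)
    den = (x - 1.0) * (2.0 * np.log(x) - 1.0 / x + 1.0)
    return num / den

x = np.linspace(0.01, 0.99, 9801)
i_min = np.argmin(g(x))
x_min, g_min = x[i_min], g(x)[i_min]
```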
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{figures/bowl-shaped-function.pdf}
\caption{Variation of $g(d_0/\hat{d})$ with $d_0/\hat{d}\in[0,1]$.}
\label{fig:bowl-shaped-function}
\end{figure}
Based on the foregoing analysis, we set $d_0$ as
\begin{equation}
d_{0} = 0.376\,\hat{d}.
\end{equation}
Also, rearranging Eq.~\eqref{eq:initial-contact-pressure} and substituting $(p_{N,0})_{\rm optimal}$ for $p_{N,0}$, we can write $\kappa$ as a function of $d_0$ and then insert $d_{0} = 0.376\,\hat{d}$ as follows:
\begin{linenomath}
\begin{align}
\kappa &= \dfrac{(p_{N,0})_{\rm optimal}}{(d_{0} - \hat{d})\left[2\ln\left(\dfrac{d_{0}}{\hat{d}}\right) - \dfrac{\hat{d}}{d_{0}} + 1\right]} \label{eq:kappa} \\
&= \dfrac{(p_{N,0})_{\rm optimal}}{2.256\,\hat{d}}.
\end{align}
\end{linenomath}
The upshot of the above parameterization is that $d_0$ and $\kappa$ are now constrained to $\hat{d}$.
Then if we take $\hat{d}=\hat{s}$, $\hat{d}$ is the only free parameter of the method.
While the value of $\hat{d}$ can technically be any positive value chosen according to the desired accuracy of the numerical solution, we recommend $\hat{d}=10^{-4}L$, where $L$ is the size of the problem domain.
This recommendation is based on our own experience with a variety of numerical examples, as well as on the values of $\hat{d}$ used for multi-body contact problems~\cite{li2020incremental}.
In Li {\it et al.}~\cite{li2020incremental}, $\hat{d}=10^{-3}L$ is used for most of their multi-body contact problems.
Yet we have experienced that $\hat{d}=10^{-3}L$ gives slight errors under a large compression, presumably because $d_0$ is not sufficiently small.
According to our experience, however, $\hat{d}=10^{-4}L$ always gives sufficiently accurate solutions.
So we recommend $\hat{d}=10^{-4}L$ as a rule of thumb for embedded interface problems.
\revised{Note also that a lower value of $\hat{d}$ can be used if desired ({\it e.g.}~when the domain size is very large, as at a geologic scale).}
Table~\ref{tab:barrier-parameters} summarizes the parameters of the barrier method and their recommended values for an embedded interface problem.
Unless otherwise specified, we use these parameters in the numerical examples in the next section.
\begin{table}[h!]
\centering
\begin{tabular}{lll}
\toprule
Parameter & Symbol & Recommended Value \\
\midrule
Barrier thickness & $\hat{d}$ & $10^{-4}L$\\
Maximum microslip & $\hat{s}$ & $\hat{d}$ \\
Initial gap distance & $d_{0}$ & $0.376\,\hat{d}$ \\
Barrier stiffness & $\kappa$ &$(p_{N,0})_{\rm optimal}/(2.256\,\hat{d})$ \\
\bottomrule
\end{tabular}
\caption{Parameters of the barrier method and their recommended values for an embedded interface problem. The user can first set the value of $\hat{d}$ according to the domain size $L$, and then use the value of $\hat{d}$ to determine the other parameters. It is noted that $(p_{N,0})_{\rm optimal}$ is calculated based on the geometry and boundary conditions of the problem at hand, and that $(p_{N,0})_{\rm optimal}$ does not need to be calculated accurately (see Remark~\ref{rem:p_0_optimal}).}
\label{tab:barrier-parameters}
\end{table}
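The recommendations in Table~\ref{tab:barrier-parameters} can be collected into a small helper function; the inputs below ($L=1$ m and $(p_{N,0})_{\rm optimal}=0.55$ GPa, the latter borrowed from the first numerical example) are for illustration only.

```python
def barrier_parameters(L, p_N0_optimal):
    """Set all barrier parameters from the domain size L and a user-supplied
    estimate of (p_N0)_optimal, per the recommendations of the table above."""
    d_hat = 1.0e-4 * L                            # barrier thickness
    return {
        "d_hat": d_hat,
        "s_hat": d_hat,                           # maximum microslip
        "d_0": 0.376 * d_hat,                     # initial gap distance
        "kappa": p_N0_optimal / (2.256 * d_hat),  # barrier stiffness
    }

# Illustration: L = 1 m, (p_N0)_optimal = 0.55 GPa.
params = barrier_parameters(1.0, 0.55e9)
```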
\smallskip
\begin{remark}
In practice, it suffices to use an approximate value of $(p_{N,0})_{\rm optimal}$ to calculate $\kappa$ because $\kappa$ controls the condition of the Jacobian matrix~\eqref{eq:jacobian} but not the solution accuracy.
According to our experience, even when the value of $(p_{N,0})_{\rm optimal}$ is one or two orders of magnitude higher than the ``true'' value, the matrix is sufficiently well-conditioned.
We have observed, however, that if the assigned value for $(p_{N,0})_{\rm optimal}$ (and hence $\kappa$) is too low, the Newton iteration sometimes fails to converge at early steps.
It is therefore recommended that one assigns the highest possible value for $(p_{N,0})_{\rm optimal}$ to calculate the value of $\kappa$.
\label{rem:p_0_optimal}
\end{remark}
\smallskip
\begin{remark}
In the rare case where Eq.~\eqref{eq:d0_optimal} is solvable, one may determine $d_0$ by solving the equation and taking the smaller solution. (The equation usually has two solutions; see Fig.~\ref{fig:bowl-shaped-function}.) The solution can then be used to calculate $\kappa$ through Eq.~\eqref{eq:kappa}.
\end{remark}
\smallskip
\begin{remark}
The foregoing parameterization procedure has been developed assuming that the user wants to follow the classical model formulation in which asperity effects are neglected.
However, if the proposed method is used for incorporating asperity effects (see Remarks~\ref{rem:impossible-zero-gap} and~\ref{rem:microslip}), one can also assign the values of $\hat{d}$ and $\hat{s}$ freely as desired.
Still, $d_0$ and $\kappa$ may be determined as recommended in Table~\ref{tab:barrier-parameters} to avoid ill-conditioning.
\end{remark}
\section{Numerical examples}
\label{sec:numerical-examples}
This section has two purposes:
(i) to verify the barrier method for embedded interfaces,
and (ii) to investigate the performance of the barrier method under diverse conditions.
For these purposes, we apply the barrier method to numerical examples with varied levels of complexity, from a simple horizontal crack to a circular interface between two materials.
In doing so, we compare the accuracy and robustness of the barrier method with those of the penalty method, which has more or less the same computational cost.
The penalty method is incorporated into XFEM following the formulation of Liu and Borja~\cite{liu2008contact} in which a normal penalty parameter, $\alpha_{N}$, and a tangential penalty parameter, $\alpha_{T}$, are used as normal and tangential spring constants, respectively.
We note that such a penalty method has been commonly used in XFEM and compared with other contact algorithms, see, {\it e.g.}~\cite{khoei2007enriched,liu2010finite,annavarapu2014nitsche}.
It is noted, however, that whenever the contact pressure is positive, the penalty method unavoidably permits interpenetration ($u_{N}<0$) because it calculates the contact pressure as $p_{N}=\alpha_{N}(-u_{N})$.
The numerical examples in this section are prepared using an in-house finite element code \texttt{Geocentric}, which is built on the \texttt{deal.II} finite element library~\cite{arndt2021deal}.
Verification of the code for a variety of non-interface problems can be found from the literature ({\it e.g.}~\cite{choo2018enriched,choo2018large}).
In the following, we assume plane-strain conditions and use quadrilateral elements with linear shape functions.
\subsection{Horizontal crack under compression and shear}
Our first example applies the barrier method to the problem of a horizontal crack under compression and shear, which was simulated by Liu and Borja~\cite{liu2008contact} and Annavarapu {\it et al.}~\cite{annavarapu2014nitsche} using XFEM.
Figure~\ref{fig:horizontal-setup} depicts the geometry and boundary conditions of the problem.
The friction coefficient of the crack is $\mu=0.3$, and the Young's modulus and Poisson's ratio of the bulk material are $E=10$ GPa and $\nu=0.3$, respectively.
We calculate the barrier method parameters as suggested in Table~\ref{tab:barrier-parameters}, with $(p_{N,0})_{\text{optimal}} = 0.55$ GPa which is calculated assuming that the top boundary's displacement is uniform and equal to the average of the actual displacement.
To conduct a mesh refinement study with the embedded crack, we discretize the domain using three different meshes: Mesh 1 ($h=1/11$ m), Mesh 2 ($h=1/25$ m), and Mesh 3 ($h=1/51$ m).
Each mesh consists of square elements of uniform size.
\begin{figure}[h!]
\centering
\includegraphics[width=0.55\textwidth]{figures/horizontal-setup.pdf}
\caption{Horizontal crack under compression and shear: problem geometry and boundary conditions.}
\label{fig:horizontal-setup}
\end{figure}
Figure~\ref{fig:horizontal-mesh} presents the numerical solutions obtained by the three different meshes, in terms of the normal displacement jump ($u_{N}$), slip displacement ($u_{T}$), contact pressure ($p_{N}$), and the tangential stress ($\tau$).
It can be seen that all these quantities converge very well upon mesh refinement.
One may also find that the converged solutions agree well with those obtained by Annavarapu {\it et al.}~\cite{annavarapu2014nitsche} using the weighted Nitsche method and the penalty method.
\begin{figure}[h!]
\centering
\subfloat[Normal displacement jump]{\includegraphics[width=0.45\textwidth]{figures/horizontal-mesh-uN.pdf}}$\quad$
\subfloat[Slip displacement]{\includegraphics[width=0.45\textwidth]{figures/horizontal-mesh-uT.pdf}}\\
\subfloat[Contact pressure]{\includegraphics[width=0.45\textwidth]{figures/horizontal-mesh-pN.pdf}}$\quad$
\subfloat[Tangential stress]{\includegraphics[width=0.45\textwidth]{figures/horizontal-mesh-tau.pdf}}
\caption{Horizontal crack under compression and shear: mesh refinement study.}
\label{fig:horizontal-mesh}
\end{figure}
In Fig.~\ref{fig:horizontal-barrier-penalty-uN}, we demonstrate how the normal displacement jump -- a direct measure of numerical error -- of the barrier method changes with its parameter, in comparison with the penalty method.
When the barrier thickness parameter $\hat{d}$ increases from $10^{-4}$ m to $10^{-3}$ m, the gap distance in the barrier solution increases.
Likewise, when the normal penalty parameter $\alpha_{N}$ decreases by an order of magnitude, the amount of interpenetration in the penalty solution increases.
An important difference, however, exists between the barrier and penalty solutions: the maximum error in the barrier solution is guaranteed to be less than $\hat{d}$, whereas that in the penalty solution cannot be controlled directly.
\begin{figure}[h!]
\centering
\subfloat[Barrier]{\includegraphics[width=0.45\textwidth]{figures/horizontal-barrier-uN.pdf}}$\quad$
\subfloat[Penalty]{\includegraphics[width=0.45\textwidth]{figures/horizontal-penalty-uN.pdf}}
\caption{Horizontal crack under compression and shear: comparison between the barrier and penalty solutions to the normal displacement jump.}
\label{fig:horizontal-barrier-penalty-uN}
\end{figure}
In Fig.~\ref{fig:horizontal-barrier-penalty-uT} we further examine how sensitive the slip displacement solutions of the barrier and penalty methods are to their parameters.
As can be seen, the slip displacement solutions are virtually insensitive to the parameters.
The likely reason is that the magnitudes of slip displacements are much larger than those of the numerical errors.
While not presented for brevity, we have found that the slip displacement solution of the barrier method exhibits noticeable errors only when $\hat{s}$ is larger than the maximum amount of slip displacement.
\begin{figure}[h!]
\centering
\subfloat[Barrier]{\includegraphics[width=0.45\textwidth]{figures/horizontal-barrier-uT.pdf}}$\quad$
\subfloat[Penalty]{\includegraphics[width=0.45\textwidth]{figures/horizontal-penalty-uT.pdf}}
\caption{Horizontal crack under compression and shear: comparison between the barrier and penalty solutions to the slip displacement.}
\label{fig:horizontal-barrier-penalty-uT}
\end{figure}
\subsection{Inclined crack under compression}
In our second example, we examine the (in)sensitivity of the barrier method to the value of $(p_{N,0})_{\text{optimal}}$, which is used to calculate the parameter $\kappa$ as suggested in Table~\ref{tab:barrier-parameters}.
The motivation is that unlike the previous problem, it is sometimes not straightforward to calculate $(p_{N,0})_{\text{optimal}}$.
As an example, we adopt the problem of compression of a square domain with an inclined crack, which was first presented in Dolbow {\it et al.}~\cite{dolbow2001extended} and later used by other studies on embedded frictional interfaces ({\it e.g.}~\cite{annavarapu2014nitsche,fei2020phase-a}).
Figure~\ref{fig:inclined-crack-setup} illustrates the geometry and boundary conditions of the problem.
The interface is inclined with an angle $\theta = \tan^{-1}(0.2)$.
The friction coefficient of the interface is set as $\mu=0.19$ so that it undergoes slip under compression. Following the original problem, we assign the Young's modulus and Poisson's ratio of the material as $E=1$ GPa and $\nu=0.3$, respectively.
We discretize the domain into $160 \times 160$ square elements and apply the compression through 10 steps.
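The choice $\mu=0.19$ can be checked against elementary statics: under (approximately) uniaxial compression, the ratio of tangential to normal traction on a plane inclined at $\theta$ is $\tan\theta$, so slip requires $\mu < \tan\theta = 0.2$. A minimal sketch using the standard Mohr-circle relations, ignoring finite-domain effects of the actual boundary value problem (the stress magnitude is arbitrary):

```python
import math

theta = math.atan(0.2)  # interface inclination
mu = 0.19               # friction coefficient
sigma = 1.0             # uniaxial compressive stress magnitude (arbitrary units)

# tractions resolved on the inclined plane (Mohr-circle relations)
sigma_n = sigma * math.cos(theta) ** 2           # normal traction
tau = sigma * math.sin(theta) * math.cos(theta)  # tangential traction

# slip occurs when the tangential traction exceeds the frictional resistance
assert tau > mu * sigma_n          # mu = 0.19 < tan(theta) = 0.2 -> slip
assert not (tau > 0.21 * sigma_n)  # a slightly larger mu would stick
```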
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{figures/inclined-crack-setup.pdf}
\caption{Inclined crack under compression: problem geometry and boundary conditions.}
\label{fig:inclined-crack-setup}
\end{figure}
Because the value of $(p_{N,0})_{\text{optimal}}$ in this problem is not constant along the interface and not straightforward to calculate, we consider two options.
The first option is to use the upper bound on the initial contact pressure that would have been exerted on the interface if it were horizontal.
Specifically, the upper bound value is calculated to be $(p_{N,0})_{\rm horizontal}=10$ MPa under the compressive displacement at the first load step.
As the second option, we use $0.5(p_{N,0})_{\rm horizontal}$ to approximate $(p_{N,0})_{\text{optimal}}$.
These two options will lead to two different values of $\kappa$.
Note, however, that all other parameters are unaffected and set according to Table~\ref{tab:barrier-parameters}.
In Figs.~\ref{fig:inclined-crack-x-disp} and~\ref{fig:inclined-crack-pN-tau}, we compare the numerical solutions obtained with the two different approximations to $(p_{N,0})_{\rm optimal}$, in terms of the displacement field and the interface traction (contact pressure and tangential stress), respectively.
One can see that the two solutions are practically identical.
While not presented for brevity, we have confirmed that these solutions are virtually the same as the penalty solutions to the same problem.
Notably, we have also observed that the solutions remain unaffected even when $(p_{N,0})_{\rm optimal}$ is approximated with values much higher than $(p_{N,0})_{\rm horizontal}$.
This is not surprising because $(p_{N,0})_{\rm optimal}$ -- and thus $\kappa$ -- controls only the matrix condition, not the solution accuracy, as demonstrated in Figs.~\ref{fig:inclined-crack-x-disp} and~\ref{fig:inclined-crack-pN-tau}.
Yet we have also observed that when $(p_{N,0})_{\rm optimal}$ is approximated to be lower than $0.5(p_{N,0})_{\rm horizontal}$, the Newton iteration does not converge, presumably because the barrier becomes too ``soft'' in some places.
Therefore, as described in Remark~\ref{rem:p_0_optimal}, we recommend using an upper bound value to approximate $(p_{N,0})_{\rm optimal}$ in calculating the value of $\kappa$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{figures/inclined-crack-x-disp.pdf}
\caption{Inclined crack under compression: the $x$-displacement fields obtained with two different approximations to $(p_{N,0})_{\rm optimal}$ in calculating $\kappa$. The domains are deformed by the displacement fields. Color bar in meters.}
\label{fig:inclined-crack-x-disp}
\end{figure}
\begin{figure}[h!]
\centering
\subfloat[Contact pressure]{\includegraphics[width=0.45\textwidth]{figures/inclined-crack-pN.pdf}}$\quad$
\subfloat[Tangential stress]{\includegraphics[width=0.45\textwidth]{figures/inclined-crack-tau.pdf}}
\caption{Inclined crack under compression: (a) contact pressures and (b) tangential stresses obtained with two different approximations to $(p_{N,0})_{\rm optimal}$ in calculating $\kappa$.}
\label{fig:inclined-crack-pN-tau}
\end{figure}
Before moving to the next example, we note that the traction solutions in Fig.~\ref{fig:inclined-crack-pN-tau} show slight oscillations like those obtained by penalty and other methods (see Annavarapu {\it et al.}~\cite{annavarapu2014nitsche} for example).
As the oscillations are not so significant, we have not used the averaged surface integration scheme for this example.
In the last example of this section, we will deal with a more challenging problem that exhibits much more severe oscillations, and apply the averaged integration scheme to demonstrate its performance.
\subsection{Sliding between two blocks with stiffness contrast}
Our third example investigates the performance of the smoothed friction law for handling challenging sliding problems. To this end, we simulate sliding between two blocks with stiffness contrast, extending the problem of an elastic block on a rigid foundation that was designed originally by Oden and Pires~\cite{oden1984algorithms} and later used in several other studies ({\it e.g.}~\cite{simo1992augmented,annavarapu2014nitsche,fei2020phase-a}).
As the original problem is not an embedded interface problem, we modify the problem as in Fig.~\ref{fig:stiffness-contrast-setup}, similar to the modification in Annavarapu {\it et al.}~\cite{annavarapu2014nitsche}.
Fixing the Young's modulus of the soft block to be 1000 kPa (the same as the original problem~\cite{oden1984algorithms}), we consider two cases of stiffness contrast between the two blocks: $E_{\text{hard}}/E_{\text{soft}}=10^{1}$ and $E_{\text{hard}}/E_{\text{soft}}=10^{7}$, where $E_{\text{hard}}$ and $E_{\text{soft}}$ denote the Young's moduli of the hard and soft blocks, respectively.
The latter case -- higher stiffness contrast -- is intended to emulate the original problem where the foundation is rigid.
The Poisson's ratios of the two blocks are both $\nu=0.3$.
The friction coefficient of the interface is $\mu=0.5$.
It is noted that under this problem setup, stick and slip conditions are mixed along the interface.
The barrier stiffness parameter $\kappa$ is calculated setting $(p_{N,0})_{\text{optimal}}$ equal to the magnitude of the compressive traction, 200 kPa.
We discretize the domain into $40 \times 41$ elements, embedding the interface inside elements.
\begin{figure}[h!]
\centering
\includegraphics[width=0.55\textwidth]{figures/stiffness-contrast-setup.pdf}
\caption{Sliding between two blocks with stiffness contrast: problem geometry and boundary conditions.}
\label{fig:stiffness-contrast-setup}
\end{figure}
The main reason for selecting this problem is that the penalty method is known to not work well for this problem.
Annavarapu {\it et al.}~\cite{annavarapu2014nitsche} have pointed out that the penalty method does not converge for this problem unless the penalty parameter is chosen very carefully through trial and error, and that even if it converges, the resulting solution exhibits severe oscillations in normal and tangential tractions.
To examine whether the use of the smoothed friction law can remedy this problem, we repeat the same problem using the smoothed friction law and the penalty method.
To isolate the effect of normal contact behavior, we use the two different methods only for the tangential traction, while treating the normal traction commonly by the barrier approach.
We assign the tangential penalty parameter as $\alpha_{T}=1000E_{\text{hard}}$, without an attempt to find a parameter that gives a converging (but perhaps erroneous) solution.
Figure~\ref{fig:stiffness-contrast-convergence} shows the Newton convergence profiles of the smoothed friction law and the penalty method.
Consistent with what was reported by Annavarapu {\it et al.}~\cite{annavarapu2014nitsche}, the penalty method fails to converge in either case, despite the use of a direct linear solver.
While not presented for brevity, similar non-convergence occurs when the normal contact is treated by the penalty method instead of the barrier approach.
Conversely, when the smoothed friction law is used, the simulation does converge in both cases, showing optimal (asymptotically quadratic) rates of convergence.
From the numerical viewpoint, the main difference between the two methods is that the tangent operator of the tangential traction vector, $\partial\bm{t}_{T}/\partial u_{T}$, changes discontinuously during a stick--slip transition in the penalty method, whereas it varies continuously with the slip displacement in the smoothed friction law.
It can be seen that this difference provides significant numerical robustness to problems where stick and slip modes are mixed.
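This difference in the tangent operator can be illustrated with a scalar sketch. The penalty law below is the standard capped elastic-predictor form, while the smoothed law uses an IPC-style $C^1$ mollifier $f_1(s)=2(s/\hat{s})-(s/\hat{s})^2$ on $[0,\hat{s}]$; both the specific mollifier and the parameter values are illustrative assumptions, not necessarily the paper's exact choices:

```python
mu, p_N = 0.5, 200.0  # friction coefficient and contact pressure (illustrative)
alpha_T = 1e7         # tangential penalty parameter (hypothetical value)
s_hat = 1e-4          # slip smoothing parameter \hat{s} (hypothetical value)

def t_penalty(u):
    # classic penalty law: elastic slope alpha_T, capped at the Coulomb limit
    return min(alpha_T * u, mu * p_N)

def t_smoothed(u):
    # C1 mollifier: f1(s) = 2(s/s_hat) - (s/s_hat)^2 on [0, s_hat], else 1
    s = u / s_hat
    return mu * p_N * (2.0 * s - s * s if u < s_hat else 1.0)

def slope(f, u, h=1e-10):
    # central-difference estimate of the tangent dt_T/du_T
    return (f(u + h) - f(u - h)) / (2.0 * h)

u_star = mu * p_N / alpha_T  # penalty stick--slip transition point
jump_penalty = abs(slope(t_penalty, (1 - 1e-4) * u_star)
                   - slope(t_penalty, (1 + 1e-4) * u_star))
jump_smoothed = abs(slope(t_smoothed, (1 - 1e-4) * s_hat)
                    - slope(t_smoothed, (1 + 1e-4) * s_hat))
assert jump_penalty > 1e6   # tangent jumps by ~alpha_T at the transition
assert jump_smoothed < 1e3  # tangent varies continuously through s_hat
```

The assertions quantify the point made above: across its stick--slip transition, the penalty tangent jumps by roughly $\alpha_T$, while the smoothed tangent passes through the transition continuously.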
\begin{figure}[h!]
\centering
\subfloat[$E_{\mathrm{hard}}/E_{\mathrm{soft}}=10^1$]{\includegraphics[width=0.45\textwidth]{figures/stiffness-contrast-convergence-1e1.pdf}}
\subfloat[$E_{\mathrm{hard}}/E_{\mathrm{soft}}=10^7$]{\includegraphics[width=0.45\textwidth]{figures/stiffness-contrast-convergence-1e7.pdf}}$\quad$
\caption{Sliding between two blocks with stiffness contrast: Newton convergence profiles of the smoothed friction law and the penalty method.}
\label{fig:stiffness-contrast-convergence}
\end{figure}
We then verify the numerical solutions obtained by the smoothed friction law in Figs.~\ref{fig:stiffness-contrast-deformed-geometry} and~\ref{fig:stiffness-contrast-pN-tau}.
As shown in Fig.~\ref{fig:stiffness-contrast-deformed-geometry}, the deformed geometry of our numerical solution to the high stiffness contrast case is virtually identical to that obtained by Simo and Laursen~\cite{simo1992augmented} using an augmented Lagrangian method along with the standard (non-embedded) finite element method.
Figure~\ref{fig:stiffness-contrast-pN-tau} also demonstrates that the contact pressures and tangential stresses along the interface do not show any oscillations, unlike the penalty solutions (and the weighted Nitsche solutions) in Annavarapu {\it et al.}~\cite{annavarapu2014nitsche}.
Note, however, that the proposed method is not immune to severe pressure oscillations in general.
In the next example, we demonstrate a case where the barrier method shows severe oscillations and how to suppress the oscillations using the averaged surface integration scheme.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{figures/stiffness-contrast-deformed-geometry.pdf}
\caption{Sliding between two blocks with stiffness contrast: comparison of deformed geometries obtained by Simo and Laursen~\cite{simo1992augmented} and the current method. The barrier result also shows the $x$-displacement field in meter.}
\label{fig:stiffness-contrast-deformed-geometry}
\end{figure}
\begin{figure}[h!]
\centering
\subfloat[Contact pressure]{\includegraphics[width=0.45\textwidth]{figures/stiffness-contrast-pN.pdf}}$\quad$
\subfloat[Tangential stress]{\includegraphics[width=0.45\textwidth]{figures/stiffness-contrast-tau.pdf}}
\caption{Sliding between two blocks with stiffness contrast: contact pressures and tangential stresses. The reference results from Oden and Pires~\cite{oden1984algorithms} and Simo and Laursen~\cite{simo1992augmented} are obtained when the lower block is rigid.}
\label{fig:stiffness-contrast-pN-tau}
\end{figure}
\subsection{Separation of a circular inclusion}
\revised{The purpose of our final example is twofold: (i) to investigate the performance of the averaged surface integration scheme to alleviate traction oscillations, and (ii) to demonstrate the robustness of the barrier treatment when the contact condition changes with load.}
To this end, we consider the separation of a circular inclusion in an elastic domain due to compressive and tensile tractions, as depicted in Fig.~\ref{fig:inclusion-setup}.
Keer {\it et al.}~\cite{keer1973separation} derived an analytical solution to this problem in the case where the surrounding domain is infinitely large, the inclusion and the surrounding material have the same stiffness, and the interface between the inclusion and the surrounding material is frictionless.
We therefore first consider this particular case to verify our solution against the analytical solution.
According to the analytical solution, part of the interface (roughly, $\theta < 55^{\circ}$) is separated, while the rest is in contact and shows positive contact pressures.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{figures/inclusion-setup.pdf}
\caption{Separation of a circular inclusion: problem geometry and boundary conditions. $\bar{t}$ denotes the magnitude of the external traction.}
\label{fig:inclusion-setup}
\end{figure}
To simulate the case that permits an analytical solution, we assign the Young's modulus and Poisson's ratio of the inclusion and the surrounding material as $E=1000$ kPa and $\nu=0$, and the boundary tractions $\bar{t}_{h}=\bar{t}_{v}=10$ kPa.
To investigate the convergence of the averaged integration scheme, we use three different meshes composed of uniformly sized square elements: Mesh 1 ($h=0.2$ m), Mesh 2 ($h=0.1$ m), and Mesh 3 ($h=0.05$ m).
In each mesh, the center and its adjacent nodes inside the inclusion are supported by horizontal rollers to prevent the rotation of the inclusion.
We compute the barrier stiffness parameter $\kappa$ approximating the value of $(p_{N,0})_{\text{optimal}}$ to be the external traction.
Figure~\ref{fig:circular-integration-default} shows contact pressure solutions along the interface obtained by the standard integration scheme and the averaged integration scheme, comparing them with the analytical solution (the pressures are normalized by $\bar{t}\equiv\bar{t}_{v}=\bar{t}_{h}$).
It can be seen that the solutions obtained by the standard scheme show severe oscillations which are not well suppressed by mesh refinement.
Conversely, those obtained by the averaged scheme do not show any oscillations and converge well to the analytical solution as the element size decreases.
Notably, the results are slightly less oscillatory than the numerical solutions obtained by the augmented Lagrangian method in Belytschko {\it et al.}~\cite{belytschko2001arbitrary}.
\revised{In Fig.~\ref{fig:circular-integration-convergence}, we also plot the relative error norms of the pressure field solutions with the element size.
It can be seen that the numerical solutions show a good asymptotic convergence rate, which is comparable to that of Lagrange multiplier solutions in Kim {\it et al.}~\cite{kim2007mortared}.}
These results demonstrate that the averaged integration scheme is an effective and consistent method to suppress traction oscillations.
It is also noted that while not presented for brevity, we observe the same from the penalty solutions to this problem.
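As a cartoon of why element-wise averaging suppresses such oscillations, the sketch below pollutes a linear traction field with an alternating, zero-mean per-element error, and then replaces the point values in each element by their element average. This is only a one-dimensional illustration of the averaging idea, not the paper's actual surface quadrature rule:

```python
n_elem = 20
h = 1.0 / n_elem  # element size along the interface
osc = 2.0         # amplitude of the artificial oscillation

def exact(x):
    return 10.0 * (1.0 - x)  # smooth reference traction profile

# two integration points per element, polluted by +osc / -osc
raw = []
for e in range(n_elem):
    x1, x2 = (e + 0.25) * h, (e + 0.75) * h
    raw.append((x1, exact(x1) + osc))
    raw.append((x2, exact(x2) - osc))

# averaged scheme: replace both point values by their element mean
avg = []
for e in range(n_elem):
    (x1, t1), (x2, t2) = raw[2 * e], raw[2 * e + 1]
    mean = 0.5 * (t1 + t2)  # the zero-mean pollution cancels exactly
    avg.extend([(x1, mean), (x2, mean)])

err_raw = max(abs(t - exact(x)) for x, t in raw)
err_avg = max(abs(t - exact(x)) for x, t in avg)
assert err_raw > 1.9  # oscillation dominates the raw point values
assert err_avg < 0.2  # only an O(h) smoothing error remains
```

The key design feature visible here is consistency: the averaging removes the spurious within-element fluctuation while leaving only a smoothing error that vanishes with mesh refinement.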
\begin{figure}[h!]
\centering
\subfloat[Standard integration scheme]{\includegraphics[width=0.45\textwidth]{figures/inclusion-default-std.pdf}}$\quad$
\subfloat[Averaged integration scheme]{\includegraphics[width=0.45\textwidth]{figures/inclusion-default-avg.pdf}}
\caption{Separation of a circular inclusion: contact pressure solutions obtained by (a) the standard surface integration scheme and (b) the averaged surface integration scheme.}
\label{fig:circular-integration-default}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{figures/inclusion-convergence.pdf}
\caption{\revised{Separation of a circular inclusion: convergence of the contact pressure solutions with mesh refinement.}}
\label{fig:circular-integration-convergence}
\end{figure}
Having verified the averaged integration scheme against the analytical solution, we extend the problem to more challenging cases, by introducing (i) stiffness contrast between the inclusion and the surrounding material, and then (ii) friction along the interface.
For the former extension, we increase the Young's modulus of the inclusion by 100 times. For the latter, we set the friction coefficient of the interface as $\mu = 0.3$.
The resulting problem may be akin to a typical soil--pipe interaction problem.
We use Mesh 3 (the finest mesh) for both cases.
Figure~\ref{fig:circular-integration-extended} presents the contact pressure solutions in the two extended cases.
We confirm that the averaged integration scheme continues to perform well for these more challenging cases.
We thus conclude that the combination of the barrier method and the averaged surface integration scheme is robust for highly challenging frictional contact problems, at a low computational cost.
\begin{figure}[h!]
\centering
\subfloat[With stiffness contrast]{\includegraphics[width=0.45\textwidth]{figures/inclusion-stiffness-contrast.pdf}}$\quad$
\subfloat[With stiffness contrast and friction]{\includegraphics[width=0.45\textwidth]{figures/inclusion-friction-stiffness-contrast.pdf}}
\caption{Separation of a circular inclusion (a) with stiffness contrast and (b) with stiffness contrast and friction: contact pressure solutions obtained by the standard and averaged surface integration schemes.}
\label{fig:circular-integration-extended}
\end{figure}
\revised{Lastly, to investigate the robustness of the barrier treatment under evolving contact conditions, we further extend the problem by increasing the magnitude of the tensile traction in the horizontal direction, $\bar{t}_{h}$, from $\bar{t}_{h}=\bar{t}_{v}$ to $\bar{t}_{h}=5\bar{t}_{v}$.
Figure~\ref{fig:circular-integration-deformed-mesh} presents the deformed meshes during this course of loading, superimposing the distribution of $p_{N}/\bar{t}_{v}$ along the interface calculated by the averaged integration scheme.
It can be seen that as the domain is pulled more in the horizontal direction, the contacting area becomes narrower and hence the contact pressure increases.
The results affirm that the barrier method continues to provide robust numerical solutions under evolving contact conditions.}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{figures/inclusion-deformed-mesh.pdf}
\caption{\revised{Separation of a circular inclusion: deformed meshes and normalized contact pressures ($p_{N}/\bar{t}$) during an increase in the horizontal traction. Displacements are exaggerated by a factor of 5.}}
\label{fig:circular-integration-deformed-mesh}
\end{figure}
\section{Closure}
We have developed the first barrier method for frictional contact on interfaces embedded in finite elements.
The method derives the contact pressure from a smooth barrier energy function that satisfies the contact constraints.
The method also employs a smoothed friction law in which the stick--slip transition is approximated by a continuous function of the slip displacement.
The resulting formulation has the same mathematical structure as the penalty method, and hence it can be readily incorporated into an existing penalty-based XFEM.
Drawing on this similarity, we have also devised the averaged surface integration scheme which can alleviate traction oscillations in the barrier method and other penalty-type methods.
Last but not least, we have derived values for the parameters of the barrier method specialized to embedded interfaces, so that the method can be practically used without any parameter tuning.
Through various numerical examples, we have demonstrated that the proposed barrier method features a highly competitive combination of robustness, accuracy, and efficiency.
The barrier method has practically the same computational cost as the penalty method, which has been popular in practice due to its unparalleled efficiency.
Unlike the penalty method, however, the barrier method guarantees satisfaction of the non-penetration constraint with controllable accuracy. This feature is particularly desirable when inter-penetration must be strictly avoided, as in coupled multiphysics problems where the fracture opening should be non-negative.
The proposed method is also highly robust to common numerical issues such as non-convergence and erroneous traction oscillations.
Particularly, the barrier method aided by the averaged integration scheme has provided convergent and non-oscillatory solutions to all problems we have tested, including those not presented in this paper.
For these reasons, we believe that the proposed method is one of the most attractive options for handling frictional sliding in embedded finite element methods.
\revised{
It is believed that the proposed barrier method will continue to perform well for frictional contact under more complex conditions such as three-dimensional geometry, evolving discontinuity, and sophisticated friction laws.
The main reason is that the proposed method essentially provides an alternative expression to the pressure--displacement relationship of the classic penalty method, which has been well applied to embedded interfaces in more challenging settings ({\it e.g.}~\cite{liu2009extended,liu2013extended,prevost2016faults}).
The method may also be extended to handle frictional contact on interfaces embedded in other types of numerical methods ({\it e.g.}~\cite{liang2022shear}).
Note, however, that extending the current formulation to accommodate these additional complexities entails substantial effort on other aspects ({\it e.g.}~algorithms for handling three-dimensional and/or evolving interface geometry in embedded elements). We thus leave such an extension as a topic of future research.}
\section*{Acknowledgments}
The authors wish to thank the two reviewers for their expert comments.
Portions of the work were supported by the Research Grants Council of Hong Kong through Projects 17201419 and 27205918.
Financial support from KAIST is also acknowledged.
\section{Introduction}
\label{sec:intro}
Minkowski summation is a basic and ubiquitous operation on sets. Indeed, the Minkowski
sum $A+B = \{a+b : a \in A, b \in B\}$ of sets $A$ and $B$ makes sense as long as $A$ and $B$ are subsets
of an ambient set in which a closed binary operation denoted by $+$ is defined. In particular, this notion makes sense
in any group, and {\it additive combinatorics} (which arose out of exploring the additive structure of sets of integers, but then expanded to the consideration of additive structure in more general groups) is a field of mathematics that is preoccupied with studying what exactly this operation does in a quantitative way.
``Sumset estimates'' are a collection of inequalities developed in additive combinatorics that provide bounds on the cardinality of sumsets of finite sets in a group. In this paper, we use $\#(A)$ to denote the cardinality of a countable set $A$, and $|A|$ to denote the volume (i.e., $n$-dimensional Lebesgue measure) of $A$ when $A$ is a measurable subset of ${\bf R}^n$. The simplest sumset estimate is the two-sided inequality
$\#(A) \#(B) \geq \#(A+B) \geq \#(A)+\#(B)-1$, which holds for finite subsets $A, B$ of the integers; equality in the second inequality holds only when $A$ and $B$ are arithmetic progressions with the same common difference.
A much more sophisticated sumset estimate is Kneser's theorem \cite{Kne53} (cf., \cite[Theorem 5.5]{TV06:book}, \cite{Dev14}), which asserts that for finite, nonempty subsets $A, B$ in any abelian group $G$,
$\#(A+B) \geq \#(A+H) +\#(B+H)-\#(H)$, where $H$ is the stabilizer of $A+B$, i.e., $H=\{g\in G: A+B+g=A+B\}$. Kneser's theorem contains, for example, the Cauchy-Davenport inequality that provides a sharp lower bound on sumset cardinality in $\mathbb{Z}/p\mathbb{Z}$. In the reverse direction of finding upper bounds on cardinality of sumsets, there are the so-called Pl\"unnecke-Ruzsa inequalities \cite{Plu70, Ruz89}. One example of the latter states that if $\#(A+B)\leq \alpha \# A$, then $\#(A+k\cdot B)\leq \alpha^k \# A$, where $k\cdot B$ refers to the sum of $k$ copies of $B$.
Such sumset estimates form an essential part of the toolkit of additive combinatorics.
In the context of the Euclidean space ${\mathbb R}^n$, inequalities for the volume of Minkowski sums of convex sets, and more generally Borel sets, play a central role in geometry and functional analysis. For example, the well known Brunn-Minkowski inequality can be used to deduce the Euclidean isoperimetric inequality, which identifies the Euclidean ball as the set of any given volume with minimal surface area.
Therefore, it is somewhat surprising that the literature contains only rather limited exploration of geometric analogues of sumset estimates. We work towards correcting that oversight in this contribution.
The goal of this paper is to explore a variety of new inequalities for volumes of Minkowski sums of convex sets, which have a combinatorial flavor and are inspired by known inequalities in the discrete setting of additive combinatorics. These inequalities are related to the notion of supermodularity: we say that a set function $F:2^{[n]}\to{\mathbb R}$ is {\it supermodular} if
$F(s\cup t)+F(s\cap t) \geq F(s) + F(t)$
for all subsets $s, t$ of $[n]$, and that $F$ is {\it submodular} if $-F$ is supermodular.
Our study is motivated by two relatively recent observations.
The first observation motivating this paper, due to \cite{FMMZ18} (Theorem 4.5), states that given convex bodies $A, B_1, B_2$ in ${\mathbb R}^{n}$,
$|A+B_1+B_2|+|A|\geq |A+B_1|+|A+B_2|$.
This inequality has a form similar to that of Kneser's theorem-- indeed, observe that the latter can be written as
$\#(A+B+H)+\#(H) \geq \#(A+H) +\#(B+H)$, since adding the stabilizer to $A+B$ does not change it.
Furthermore, it implies that the function $v: 2^{[n]} \to {\mathbb R}$ defined, for given convex bodies $B_1, \dots, B_k$ in ${\mathbb R}^{n}$, by
$v(s)=\left|\sum_{i\in s} B_i \right|$
is supermodular.
Foldes and Hammer \cite{FH05} defined the notion of higher order supermodularity for set functions. In Section~\ref{sec:super}, we generalize their definition and main characterization theorem from \cite{FH05} to functions defined on ${\mathbb R}_+^n$, and apply it to show that volumes and mixed volumes satisfy this higher order supermodularity.
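The supermodularity of $v$ can be checked numerically in a simple planar case. Taking each $B_i=[0,v_i]$ to be a segment in ${\mathbb R}^2$, the sum $\sum_{i\in s}B_i$ is a zonotope whose area is given by the classical pairwise formula $\sum_{\{i<j\}\subset s}|\det(v_i,v_j)|$; the segments below are arbitrary:

```python
from itertools import combinations

V = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, -1.0)]  # arbitrary segments v_i

def vol(s):
    """Area of the zonotope sum_{i in s} [0, v_i] via the pairwise formula."""
    return sum(abs(V[i][0] * V[j][1] - V[i][1] * V[j][0])
               for i, j in combinations(sorted(s), 2))

ground = range(len(V))
subsets = [frozenset(c) for r in range(len(V) + 1)
           for c in combinations(ground, r)]
# supermodularity: v(s u t) + v(s n t) >= v(s) + v(t) for all subset pairs
assert all(vol(s | t) + vol(s & t) >= vol(s) + vol(t) - 1e-12
           for s in subsets for t in subsets)
```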
The second observation motivating this paper is due to Bobkov and the second named author \cite{BM12:jfa}, who proved that given convex bodies $A, B_1, B_2$ in ${\mathbb R}^{n}$,
\begin{equation}\label{eqBM3}
|A+B_1+B_2 | |A| \leq 3^n |A+B_1| |A+B_2|.
\end{equation}
The above inequality is inspired by an inequality in information theory analogous to the Pl\"unnecke-Ruzsa inequality (the most general version of which was proved by Ruzsa for compact sets in \cite{Ruz97}, and is discussed in Section \ref{secPR} below).
If not for the multiplicative factor of $3^n$ in (\ref{eqBM3}), this inequality would imply that the {\it logarithm} of the volume of the Minkowski sum of convex sets is submodular. In this sense, it goes in the reverse direction to the supermodularity of volume and thus complements it. However, the constant $3^n$ obtained by \cite{BM12:jfa} is rather loose. We take up the question of tightening this constant in Section~\ref{sec:Pl\"unnecke}.
Specifically, we obtain both upper and lower bounds for the optimal constant
\begin{eqnarray}\label{cn-def}
c_n=\sup \frac{|A+B+C |\, |A| }{|A+B|\, |A+C|} ,
\end{eqnarray}
where the supremum is taken over all convex bodies $A, B, C$ in ${\mathbb R}^n$,
in general dimension $n$. We get an upper bound of $c_n\leq \varphi^{n}$ in Section~\ref{ss:gen-ub}, where $\varphi=(1+\sqrt{5})/2$ is the golden ratio, and an asymptotic lower bound of $c_n\geq (4/3+o(1))^n$ in Section~\ref{ss:PR-LB}. In Section~\ref{ss:ub34}, we show that the optimal constant is $1$ in dimension $2$ and $\frac{4}{3}$ in dimension 3 (i.e., $c_2=1$ and $c_3=4/3$), and also that $c_4\leq 2$.
In Section~\ref{ss:improved}, we improve inequality (\ref{eqBM3}) in the special case where $A$ is an ellipsoid, $B_1$ is a zonoid, and $B_2$ is any convex body: in this case, the optimal constant is $1$. This result partially answers a question of Courtade, who asked (motivated by an analogous inequality in information theory) if $|A+B_1+B_2|\,|A|\le |A+B_1|\,|A+B_2|$ holds when $A$ is the Euclidean ball and $B_1, B_2$ are arbitrary convex bodies.
Finally, in Section~\ref{ss:compact}, we prove that (\ref{eqBM3}) cannot possibly hold in the more general setting of compact sets with any absolute constant, which signifies a sharp difference between the proof of this inequality compared with the tools used by Ruzsa in \cite{Ruz97}.
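As a baseline, in dimension one the supremum equals $1$: for segments of lengths $a,b,c$ one has $|A+B+C|\,|A|=(a+b+c)a$ and $|A+B|\,|A+C|=(a+b)(a+c)$, which differ by $bc\ge 0$. A quick numerical confirmation of this one-dimensional case:

```python
import random

random.seed(0)
for _ in range(1000):
    # segments on the line: volumes are lengths, and Minkowski sums add lengths
    a, b, c = (random.uniform(0.1, 10.0) for _ in range(3))
    # |A+B+C||A| <= |A+B||A+C|, since (a+b)(a+c) - (a+b+c)a = bc >= 0
    assert (a + b + c) * a <= (a + b) * (a + c) + 1e-12
```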
The last section of the paper is dedicated to questions surrounding Ruzsa's triangle inequality: if $A, B,$ and $C$ are finite subsets of an abelian group, then
$ \#(A)\#(B-C)\leq \#(A-B)\#(A-C)$. The inequality is also known to be true for volume of compact sets in ${\mathbb R}^n$: $|A|\,|B-C|\le |A-B|\,|A-C|$. We investigate the best constant $c$ such that the inequality
\begin{equation}\label{triang3}
|A|\,|A+B+C|\leq c |A-B|\,|A-C|
\end{equation}
is true for all convex sets $A,B$ and $C$ in ${\mathbb R}^n$. For example, in the plane, we observe that it holds with the sharp constant $c=\frac{3}{2}$.
Again, it is interesting to note that (\ref{triang3}) is different from Ruzsa's triangle inequality, and it is not true, with any absolute constant $c$, if one omits the assumption of convexity.
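The discrete triangle inequality can be stress-tested exhaustively over small integer sets:

```python
from itertools import combinations

def diffset(A, B):
    """Difference set A - B = {a - b : a in A, b in B}."""
    return {a - b for a in A for b in B}

ground = range(5)
sets = [set(c) for r in range(1, 6) for c in combinations(ground, r)]
# Ruzsa triangle inequality: #(A)#(B-C) <= #(A-B)#(A-C) for all nonempty A, B, C
assert all(len(A) * len(diffset(B, C))
           <= len(diffset(A, B)) * len(diffset(A, C))
           for A in sets for B in sets for C in sets)
```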
In a companion paper \cite{FMMZ22}, we explore the question of reducing the constant in the Pl\"unnecke-Ruzsa inequality for volumes from $\varphi^{n}$, when we restrict attention to the subclass of convex bodies known as zonoids. In another companion paper \cite{FLMZ22}, we explore measure-theoretic extensions of the preceding results for convex bodies, in the category of $\log$-concave and in particular Gaussian measures.
We also mention that there are probabilistic or entropic analogs of many of the inequalities in this paper. For example, the aforementioned observation due to \cite{BM12:jfa}, that a Pl\"unnecke-Ruzsa inequality for convex bodies holds with a constant $3^n$, emerges as a consequence of R\'enyi entropy comparisons for convex measures on the one hand, and the submodularity of entropy of convolutions on the other. The submodularity of entropy of convolutions refers
to the inequality $h(X)+h(X+Y+Z)\leq h(X+Y)+h(X+Z)$, where $h$ denotes entropy, and $X, Y, Z$ are independent ${\mathbb R}^n$-valued random variables, and may be thought of as an entropic analogue of the Pl\"unnecke-Ruzsa inequality. This latter inequality was obtained in \cite{Mad08:itw} as part of an attempt to develop an additive combinatorics of probability measures where cardinality or volume is replaced by entropy. A number of works have explored this avenue, starting with
\cite{Ruz09:1, Tao10, MMT12, ALM17, MWW21} for discrete probability measures on groups (e.g., when the random variables take values in finite groups or the integers), and
\cite{Mad08:itw, MK10:isit, MK18, Hoc22} for probability measures on ${\mathbb R}^n$ and more general locally compact abelian groups.
\vspace{.1in}
\noindent
{\bf Acknowledgments.}
Piotr Nayar and Tomasz Tkocz \cite{NT17} independently obtained upper and lower bounds on the optimal constants in the Pl\"unnecke-Ruzsa inequality for volumes (versions of Theorems~\ref{thm:ub} and \ref{thm:lb}, though with weaker bounds obtained using different methods); we are grateful to them for communicating their work.
We are indebted to Ramon Van Handel for pointing to us the original work of W. Fenchel on the local version of Alexandrov's inequality, to Daniel Hug for suggesting that we consider equality cases in Theorem \ref{lmcool2}, and to Mathieu Meyer and Dylan Langharst for a number of valuable discussions and suggestions.
\section{Preliminaries}
\label{sec:prelim}
\subsection{Mixed Volumes}
In this section, we introduce basic notation and collect essential facts and definitions
from convex geometry that are used in the paper. As a general reference on the theory we use \cite{Sch14:book}.
We write $x \cdot y$ for the inner product of vectors $x$ and $y$ in ${\mathbb R}^n$ and $|x|$ for the length of a vector $x \in {\mathbb R}^n$. The closed unit ball in ${\mathbb R}^n$ is denoted by $B_2^n$, and its boundary by $S^{n-1}$. We will also denote by $e_1, \dots, e_n$ the standard orthonormal basis in ${\mathbb R}^n$. Moreover, for any set $A \subset {\mathbb R}^n$, we denote its boundary by $\partial A$. A convex body is a convex, compact set with nonempty interior. We write $|K|_m$ for the $m$-dimensional Lebesgue measure (volume) of a measurable set $K \subset {\mathbb R}^n$, where $m = 1, \dots, n$ is the dimension
of the minimal affine space containing $K$; we will often use the shorter notation $|K|$ for the $n$-dimensional volume. A polytope which is the Minkowski sum of
finitely many line segments is called a zonotope. Limits of zonotopes in the Hausdorff
metric are called zonoids, see \cite{Sch14:book}, Section 3.2, for details.
From \cite[Theorem 5.1.6]{Sch14:book}, for any compact convex sets $K_1,\dots, K_r$ in ${\mathbb R}^n$ and any non-negative numbers $t_1, \dots, t_r$
one has
\begin{eqnarray}\label{eq:mvf}
\left|t_1K_1+\cdots+t_rK_r\right|=
\sum_{i_1,\dots,i_n=1}^rt_{i_1}\cdots t_{i_n}V(K_{i_1},\dots K_{i_n}),
\end{eqnarray}
for some non-negative numbers $V(K_{i_1},\dots, K_{i_n})$, which are called the mixed volumes of $K_1,\dots, K_r$.
One readily sees that the mixed volumes satisfy $V(K,\dots,K)=|K|$; moreover, they satisfy a number of properties which are crucial
for our study (see \cite{Sch14:book}), including the following: a mixed volume is symmetric in its arguments; it is multilinear, i.e. for any $\lambda, \mu \ge 0$ we have $V(\lambda K + \mu L, K_2, \dots, K_n)=\lambda V(K,K_2, \dots, K_n)+ \mu V(L,K_2, \dots, K_n);$
it is translation invariant, i.e. $V(K+a,K_2, \dots, K_n)= V(K,K_2, \dots, K_n)$ for any $a \in {\mathbb R}^n$; and it satisfies a monotonicity property, i.e. $V(K,K_2, K_3, \dots, K_n) \le V(L,K_2, K_3, \dots, K_n)$ whenever $K \subset L$.
We will also often use a two-body version of (\ref{eq:mvf}) -- the Steiner formula:
\begin{eqnarray}\label{eq:ste}
\left|A+tB\right|=
\sum_{k=0}^n {n \choose k} t^k V(A[n-k],B[k]),
\end{eqnarray}
for any $t>0$ and compact, convex sets $A, B$ in ${\mathbb R}^n$, where for simplicity we use the notation $A[m]$ for a convex set $A$ repeated $m$ times. Mixed volumes are also very useful for studying the volume of orthogonal projections of convex bodies. Let $P_H A$ be the orthogonal projection of a convex body $A$ onto an $m$-dimensional subspace $H$ of ${\mathbb R}^n$; then
\begin{eqnarray}\label{eq:proj}
|P_HA|_m|U|_{n-m}={{n}\choose{m}}V(A[m], U[n-m]),
\end{eqnarray}
where $U$ is any convex body of volume one in the subspace $H^\perp$ orthogonal to $H$. For example, if we denote by $\theta^\perp = \{x \in {\mathbb R}^n : x \cdot \theta =0\}$ a hyperplane orthogonal to $\theta\in S^{n-1}$, we obtain
\begin{eqnarray}\label{eq:proj1}
|P_{\theta^\perp} A|_{n-1}= n V(A[n-1], [0, \theta]).
\end{eqnarray}
Yet another useful formula connects the surface area with mixed volumes:
\begin{eqnarray}\label{surface}
|\partial A|= nV(A[n-1], B_2^n),
\end{eqnarray}
where by $|\partial A|$ we denote the surface area of the compact set $A$ in ${\mathbb R}^n$.
Mixed volumes satisfy a number of extremely useful inequalities. The first one is the Brunn-Minkowski inequality
\begin{eqnarray}\label{BM}
|A+B|^{1/n}\ge |A|^{1/n}+|B|^{1/n},
\end{eqnarray}
whenever $A,B$ and $A+B$ are measurable.
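Although trivial compared with the general statement, the case of axis-aligned boxes makes (\ref{BM}) easy to test numerically, since the Minkowski sum of two boxes is the box whose side lengths add. The following Python sketch (purely illustrative, our own names and parameters) checks the inequality on random boxes:

```python
import random

def vol(sides):
    # volume of an axis-aligned box with the given side lengths
    v = 1.0
    for s in sides:
        v *= s
    return v

random.seed(0)
ok = True
for n in (1, 2, 3, 5):
    for _ in range(100):
        a = [random.uniform(0.1, 5) for _ in range(n)]
        b = [random.uniform(0.1, 5) for _ in range(n)]
        # A + B is the box whose side lengths are a_i + b_i
        lhs = vol([ai + bi for ai, bi in zip(a, b)]) ** (1.0 / n)
        rhs = vol(a) ** (1.0 / n) + vol(b) ** (1.0 / n)
        ok = ok and lhs >= rhs - 1e-9
```

For boxes the inequality reduces to the AM--GM inequality, so the check passes with room to spare; equality requires homothetic boxes.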
The most powerful inequality for mixed volumes is the
Alexandrov--Fenchel inequality:
\begin{eqnarray}\label{AF}
V(K_1,K_2, K_3, \dots, K_n) \ge \sqrt{V(K_1,K_1, K_3, \dots, K_n)V(K_2,K_2, K_3, \dots, K_n)},
\end{eqnarray}
for any compact convex sets $K_1,\dots, K_n$ in ${\mathbb R}^n$.
We will also use the following classical local version of the Alexandrov-Fenchel inequality, which was proved by W. Fenchel (see \cite{Fen36} and also \cite{Sch14:book}) and further generalized in \cite{FGM03, AFO14, SZ16}:
\begin{equation}\label{alexloc}
|A|V(A[n-2], B, C)\le 2 V(A[n-1],B) V(A[n-1],C),
\end{equation}
for any convex compact sets $A,B,C$ in ${\mathbb R}^n$; moreover, it was noticed in \cite{SZ16} that (\ref{alexloc}) is true with constant one instead of two in the case when $A$ is a simplex. The inequality turned out to be a part of a rich class of Bezout inequalities proposed in \cite{SZ16, SSZ16}. The core tool of our work is the following inequality of J.~Xiao (Theorem 1.1 and Lemma 3.3 in \cite{Xia19}):
\begin{align}\label{eq:jxiao}
|A|V(A[n-j-m], &B[j], C[m]) \nonumber \\&\le \min\left( {n \choose j} , {n \choose m} \right)V(A[n-j],B[j])V(A[n-m],C[m]).
\end{align}
\subsection{Pl\"unnecke-Ruzsa inequality.}\label{secPR}
Pl\"unnecke-Ruzsa inequalities (see for example \cite{TV06:book}) form an important class of inequalities in the field of
additive combinatorics. These were introduced by Pl\"unnecke \cite{Plu70} and
generalized by Ruzsa \cite{Ruz89}, and a simpler proof was given by Petridis \cite{Pet12}; a more recent generalization is proved
in \cite{GMR08}, and entropic versions are developed in \cite{MMT12}.
For illustration, the form of Pl\"unnecke's inequality developed in \cite{Ruz89} states that,
if $A, B_1,\ldots ,B_m$ are finite sets in a commutative group, then there exists a non-empty subset $X \subset A$ such that
\begin{eqnarray*}
\#(A)^m\#(X+B_1+\ldots +B_m) \leq \#(X) \prod_{i=1}^m\#(A + B_i).
\end{eqnarray*}
In \cite{Ruz97}, Ruzsa generalized the above inequality to the case of compact sets on a locally compact commutative group, written additively, with the Haar measure. The volume case of this deep theorem is one of our main inspirations: for any compact sets $A,B_1, \ldots ,B_m$ in ${\bf R}^n,$ with $|A|>0$ and for every $\varepsilon >0$ there exists a compact set $A' \subset A $ such that
\begin{equation}\label{eq:ruzvol}
|A|^m|A'+B_1+\ldots +B_m| \le (1+\varepsilon)|A'| \prod_{i=1}^m|A + B_i|.
\end{equation}
It immediately follows that for any compact sets $A,B_1, \ldots ,B_m$ in ${\bf R}^n,$
\begin{equation}\label{eq:ruzvol1}
|A|^{m-1}|B_1+\ldots +B_m| \le \prod_{i=1}^m|A + B_i|.
\end{equation}
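For axis-aligned boxes, (\ref{eq:ruzvol1}) can be verified directly, since every Minkowski sum involved is again a box and the inequality factors coordinatewise into $a^{m-1}\sum_i b_i\le\prod_i(a+b_i)$. The following Python sketch (an illustration of ours, not part of the argument) confirms this on random boxes:

```python
import random

def vol(sides):
    # volume of an axis-aligned box with the given side lengths
    v = 1.0
    for s in sides:
        v *= s
    return v

random.seed(1)
ok = True
for _ in range(200):
    n = random.randint(1, 4)
    m = random.randint(2, 4)
    a = [random.uniform(0.1, 3) for _ in range(n)]
    bs = [[random.uniform(0.0, 3) for _ in range(n)] for _ in range(m)]
    # |A|^{m-1} |B_1 + ... + B_m|
    lhs = vol(a) ** (m - 1) * vol([sum(b[j] for b in bs) for j in range(n)])
    # prod_i |A + B_i|
    rhs = 1.0
    for b in bs:
        rhs *= vol([a[j] + b[j] for j in range(n)])
    ok = ok and lhs <= rhs * (1 + 1e-9)
```

Of course, the content of Ruzsa's theorem lies in the generality of the sets involved; the box case only illustrates the shape of the inequality.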
\subsection{Submodularity and supermodularity}
\label{sec:smod-prelim}
Let us first recall the notion of a supermodular set function.
\begin{defn}\label{def:supermod}
A set function $F:2^{[n]}\ra{\mathbb R}$ is {\it supermodular} if
\begin{eqnarray}\label{supmod:defn}
F(s\cup t)+F(s\cap t) \geq F(s) + F(t)
\end{eqnarray}
for all subsets $s, t$ of $[n]$.
\end{defn}
One says that a set function $F$ is submodular if $-F$ is supermodular.
Submodularity is closely related to a partial ordering on hypergraphs, as we will see below. This relationship is frequently attributed to Bollob\'as and Leader \cite{BL91} (cf. \cite{BB12}), who introduced the related notion of ``compressions''.
However, it seems to go back much further -- it is more or less explicitly discussed in a 1975 paper of Emerson \cite{Eme75}, where he says it is ``well known''.
To present this relationship, let us introduce some notation. Let $\mbox{${\cal M}$}(n,m)$ be the following family of
(multi)hypergraphs: each member consists of non-empty (ordinary) subsets $s_i$ of $[n]$,
with repetitions $s_i=s_j$ allowed, such that
$\sum_i |s_i|= m$.
Consider a given multiset
$\mathcal{C} = \{s_1, \dots, s_l\} \in \mbox{${\cal M}$}(n,m)$.
The idea is to consider an operation that takes two sets in $\mathcal{C}$
and replaces them by their union and intersection; however, note that
(i) if $s_i$ and $s_j$ are nested (i.e., either $s_i \subset s_j$ or
$s_j \subset s_i$), then replacing $(s_i,s_j)$ by
$(s_i \cap s_j ,s_i \cup s_j)$ does not change $\mathcal{C}$,
and
(ii) if $s_i \cap s_j = \emptyset$, the empty set may enter the collection,
which would be undesirable.
Thus, take any pair of non-nested sets $\{s_i,s_j\}\subset \mathcal{C}$
and let $\mathcal{C}' = \mathcal{C}(i,j)$ be obtained from $\mathcal{C}$ by replacing $s_i$ and $s_j$
by $s_i \cap s_j$ and $s_i \cup s_j$,
keeping only $s_i \cup s_j$ if $s_i \cap s_j = \emptyset$.
$\mathcal{C}'$ is called an {\it elementary compression} of $\mathcal{C}$. The result of a sequence of
elementary compressions is called a {\it compression}.
Define a partial order on $\mbox{${\cal M}$}(n,m)$ by setting $\mathcal{A} > \mathcal{A}'$ if $\mathcal{A}'$ is a
compression of $\mathcal{A}$. To check that this is indeed a partial order,
one needs to rule out the possibility of cycles, which can be done by noting that if $\mathcal{A}'$ is an elementary compression of $\mathcal{A}$ then
\begin{eqnarray*}
\sum_{s\in\mathcal{A}} |s|^2 < \sum_{s\in\mathcal{A}'} |s|^2 .
\end{eqnarray*}
\begin{thm}\label{thm:compr}
Suppose $F$ is a supermodular function on the ground set $[n]$.
Let $\mathcal{A}$ and $\mathcal{B}$ be finite multisets of subsets of $[n]$, with $\mathcal{A} > \mathcal{B}$.
Then
\begin{eqnarray*}
\sum_{s\in\mathcal{A}} F(s) \leq \sum_{t\in\mathcal{B}} F(t) .
\end{eqnarray*}
\end{thm}
\begin{proof}
When $\mathcal{B}$ is an {\it elementary} compression of $\mathcal{A}$, the statement is immediate
by definition, and transitivity of the partial order gives the full statement.
\end{proof}
Note that for every multiset $\mathcal{A} \in \mbox{${\cal M}$}(n,m)$ there is a unique minimal multiset
$\mathcal{A}^{\#}$ dominated by $\mathcal{A}$, i.e. $\mathcal{A}^{\#}<\mathcal{A},$ consisting of the sets
$s^{\#}_j = \{i \in [n] : i \text{ lies in at least } j \text{ of the sets } s \in \mathcal{A}\}$.
Thus a particularly nice instance of Theorem~\ref{thm:compr} is for the special case of $\mathcal{B}=\mathcal{A}^{\#}$ (we refer to \cite{BB12} page 132 for further discussion).
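The mechanism of Theorem~\ref{thm:compr} is easy to exercise on a small example. The Python sketch below (our own illustration; the function $F(s)=|s|^2$ is supermodular since $|s\cup t|+|s\cap t|=|s|+|t|$ and $x\mapsto x^2$ is convex) enumerates the elementary compressions of a multiset and checks that each one can only increase the total of $F$:

```python
from itertools import combinations

def elementary_compressions(col):
    # all elementary compressions of a multiset of non-empty subsets:
    # replace a non-nested pair (s, t) by (s & t, s | t), keeping only
    # the union when the intersection is empty
    for i, j in combinations(range(len(col)), 2):
        s, t = col[i], col[j]
        if s <= t or t <= s:
            continue  # nested pairs leave the multiset unchanged
        rest = [col[k] for k in range(len(col)) if k not in (i, j)]
        new = rest + [s | t]
        if s & t:
            new.append(s & t)
        yield new

def F(s):
    # a supermodular set function: F(s) = |s|^2
    return len(s) ** 2

start = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]
total = sum(F(s) for s in start)  # 4 + 4 + 4 = 12
# each elementary compression produces sets of sizes 3, 1, 2,
# with F-total 9 + 1 + 4 = 14 >= 12
ok = all(sum(F(s) for s in c) >= total for c in elementary_compressions(start))
```

Iterating the generator until no non-nested pair remains would reach the minimal multiset $\mathcal{A}^{\#}$ described above.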
We also have a notion of supermodularity on the positive orthant of the Euclidean space.
\begin{defn}\label{def:supermod-Rmaxmin}
A function $f:{\mathbb R}_{+}^{n} \ra{\mathbb R}$ is supermodular if
\begin{eqnarray*}
f(x\vee y) + f(x \wedge y) \ge f(x) + f(y)
\end{eqnarray*}
for any $x,y \in \mathbb{R}_+^n$, where $x \vee y$ denotes the componentwise maximum of $x$ and
$y$ and $x \wedge y$ denotes the componentwise
minimum of $x$ and $y$.
\end{defn}
We note that Definition \ref{def:supermod-Rmaxmin} can be viewed as an extension of Definition~\ref{def:supermod} if one considers set functions on $2^{[n]}$ as functions on $\{0,1\}^n$. Indeed, if $f:{\mathbb R}_{+}^{n} \ra{\mathbb R}$ is supermodular then we define the function $F:2^{[n]}\to{\mathbb R}$, for $s\subset[n]$, by $F(s)=f(e(s))$, where $e(s)_i=1$ if $i\in s$ and $e(s)_i=0$ if $i\notin s$. Then, the set function $F$ is supermodular:
\begin{lem}\label{lem:set-rn}
If $f:{\mathbb R}_{+}^n\ra{\mathbb R}$ is supermodular, and we set $F(s):=f(e(s))$ for each $s\subset [n]$,
then $F$ is a supermodular set function.
\end{lem}
\begin{proof}
Observe that
\begin{eqnarray*}\begin{split}
F(s\cup t) + F(s\cap t)
&= f(e(s\cup t)) + f(e(s\cap t)) \\
&= f(e(s) \vee e(t)) + f(e(s)\wedge e(t)) \\
&\geq f(e(s))+f(e(t)) \\
&= F(s) + F(t) .
\end{split}\end{eqnarray*}
\end{proof}
The fact that supermodular functions are closely related to functions
with increasing differences is classical (see, e.g., \cite{MG19} or \cite{Top98:book}, which describes
more general results involving arbitrary lattices).
We will denote by $\partial_i f$ the partial derivative of a function $f$ with respect to the $i$-th coordinate and by $\partial^m_{i_1, \dots, i_m}$ the mixed derivative with respect to the coordinates $i_1, \dots, i_m$.
\begin{prop}\label{prop:diff}
Suppose a function $f:{\mathbb R}_+^n\ra{\mathbb R}$ is in $C^2$, i.e., it is twice-differentiable with
a continuous Hessian matrix. Then $f$ is supermodular if and only if
\begin{eqnarray*}
\partial_{i,j}^2 f(x)\geq 0
\end{eqnarray*}
for every distinct $i, j \in [n]$, and for any $x\in {\mathbb R}_+^n$.
\end{prop}
We will prove Proposition \ref{prop:diff} as part of a more general statement on the mixed derivatives of higher-order supermodular functions (Theorem \ref{thm:diff} below).
\section{Higher order supermodularity of mixed volumes}
\label{sec:super}
\subsection{Local characterization of higher order supermodularity}
We now present analogues of the above development for higher-order supermodularity.
Let us notice that a set function $F:2^{[n]}\to{\mathbb R}$ is supermodular if and only if for any $s_0,s_1, s_2\in 2^{[n]}$ with $s_1\cap s_2=\emptyset$ one has
\[
F(s_0\cup s_1)+F(s_0\cup s_2)\le F(s_0\cup s_1\cup s_2)+F(s_0).
\]
Generalizing this property, Foldes and Hammer \cite{FH05} defined the notion of higher order supermodularity. In this section, we will adapt their definition and study the following property:
\begin{defn}\label{eq:hsup}
Let $1\le m\le n$. A function $F:2^{[n]}\to{\mathbb R}$ is $m$-supermodular if for any $s_0\in 2^{[n]}$ and for any mutually disjoint $s_1,\dots, s_m\in2^{[n]}$ one has
\[
\sum_{I\subset[m]}(-1)^{m-|I|}F\left(s_0\cup\bigcup_{i\in I}s_i\right)\ge0.
\]
\end{defn}
Note that for $m=2$ in the above definition, we recover a supermodular set function.
We also introduce the notion of higher-order supermodularity for functions defined on the positive orthant of a Euclidean space.
\begin{defn}\label{def:supermod-Rm}
Let $1\le m\le n$. A function $f:{\mathbb R}_{+}^{n} \ra{\mathbb R}$ is $m$-supermodular if
\begin{eqnarray*}
\sum_{I\subset[m]}(-1)^{m-|I|}f\left(x_0\vee\bigvee_{i\in I}x_i\right)\ge0,
\end{eqnarray*}
for any $x_0$ and any $x_1,\dots,x_m \in \mathbb{R}_+^n$, with mutually disjoint supports, that is such that $x_i\wedge x_j=0$, for any $1\le i<j\le m$.
\end{defn}
\begin{rem}\label{rk:setsuper}
Notice that, as in Lemma \ref{lem:set-rn}, if $f:{\mathbb R}_+^n\to{\mathbb R}$ is $m$-supermodular then $F:2^{[n]}\to{\mathbb R}$ defined by $F(s)=f(e(s))$ is $m$-supermodular.
\end{rem}
For $m=1$ in the above definition, we obtain that $f$ is $1$-supermodular if and only if it is non-decreasing in each coordinate.
For $m=2$, we recover a supermodular function on the orthant as we prove in the following lemma.
\begin{lem}
Let $f:{\mathbb R}_+^n\ra{\mathbb R}$. Then $f$ is supermodular if and only if for any $x,y,z\in{\mathbb R}_+^n$ such that $y\wedge z=0$ one has
\begin{eqnarray}\label{eq:disj}
f(x\vee y\vee z)+f(x)\ge f(x\vee y)+f(x\vee z).
\end{eqnarray}
\end{lem}
\begin{proof}
Suppose $f$ is supermodular and let $x,y,z\in{\mathbb R}_+^n$ be such that $y\wedge z=0$. Set $a=x\vee y$ and $b=x\vee z$; then $a\vee b=x\vee y\vee z$ and $a\wedge b=x$, since $y\wedge z=0$. Thus,
\[
f(x\vee y\vee z)+f(x)- f(x\vee y)-f(x\vee z)=f(a\wedge b)+ f(a\vee b)-f(a)-f(b)\ge0.
\]
Now assume that $f$ satisfies (\ref{eq:disj}) and let $a,b\in{\mathbb R}_+^n$. We set $x=a\wedge b$ and we define $y$ by putting $y_i=a_i$ for $i$ such that $b_i<a_i$ and $y_i=0$ otherwise. In the same way, we set $z_i=b_i$, for $i$ such that $a_i<b_i$ and $z_i=0$ otherwise. Then $x\vee y=a$, $x\vee z=b$ and $x\vee y\vee z=a\vee b$, hence, we conclude similarly.
\end{proof}
The next theorem generalizes Proposition \ref{prop:diff} to higher order supermodularity.
\begin{thm}\label{thm:diff}
Let $f:{\mathbb R}_+^n\ra{\mathbb R}$ be a $C^m$ function. Then $f$ is $m$-supermodular if and only if
\begin{eqnarray*}
\partial^m_{i_1, \dots, i_m} f(x)\geq 0
\end{eqnarray*}
for every distinct $i_1,\dots, i_m \in [n]$, and for any $x\in {\mathbb R}_+^n$.
\end{thm}
\begin{proof}
Let $x_0\in \mathbb{R}_+^n$ and $x_1,\dots,x_m \in \mathbb{R}_+^n$, with mutually disjoint supports. Then
\begin{equation}\label{eq:induction}
\begin{split}
\sum_{I\subset[m]}&(-1)^{m-|I|}f\left(x_0\vee\bigvee_{i\in I}x_i\right)\\
=&\sum_{I\subset[m-1]}(-1)^{m-|I|-1}f\left(x_0\vee\bigvee_{i\in I}x_i \vee x_m\right)
+\sum_{I\subset[m-1]}(-1)^{m-|I|}f\left(x_0\vee\bigvee_{i\in I}x_i\right)
\\
= & \sum_{I\subset[m-1]}(-1)^{m-1-|I|}\left[f\left(x_0\vee\bigvee_{i\in I}x_i \vee x_m\right) -f\left(x_0\vee\bigvee_{i\in I}x_i\right) \right]\\
=& \sum_{I\subset[m-1]}(-1)^{m-1-|I|}g_{x_m}\left(x_0\vee\bigvee_{i\in I}x_i\right),
\end{split}
\end{equation}
where $g_{z}(x)=f(x \vee z)-f(x),$ for any $x,z\in {\mathbb R}_+^n$. Thus $f$ is $m$-supermodular if and only if $x \mapsto g_{z}(x)$ is $(m-1)$-supermodular for any $z\in{\mathbb R}_+^n$ as a function on the coordinate subspace $H_z=\{x \in {\mathbb R}_+^n ; x_i z_i=0, \forall\ i =1, \dots, n\}$.
Now we are ready to prove the theorem for the case $m=2$. In this case, the above equivalence (\ref{eq:induction}) gives us that $f$ is supermodular if and only if $x\mapsto g_z(x)$ is $1$-supermodular for any $z\in{\mathbb R}_+^n$ as a function of $x\in H_z$, and thus $g_z$ is non-decreasing in each coordinate direction of $H_z$, i.e. for each coordinate index $i$ such that $z_i=0$. Thus, assuming differentiability, this is equivalent to $\partial_i g_z \ge 0 $ for all $i$ which is a coordinate direction in $H_z$. Taking $z=z_je_j$, $z_j > 0$, we get $\partial_i g_{z_je_j} \ge 0$ for all $i\not=j$. Thus $\partial_i f (x \vee z_je_j)-\partial_if(x) \ge 0,$ and finally $\partial^2_{i,j} f(x) \ge 0$. Conversely, assuming that $\partial^2_{i,j} f(x) \ge 0$ for all $i\neq j$ and all $x$, we get $D_y \partial_i f (x) \ge 0$ for $y\in {\mathbb R}_+^n$ such that $y_i=0$, where by $D_y f= y\cdot \nabla f $ we denote the directional derivative with respect to a vector $y\in {\mathbb R}^n$. Thus $\partial_i f(x+y)- \partial_i f(x) \ge 0$ for all $y\in {\mathbb R}_+^n$ with $y_i=0$. Thus, considering $y\in {\mathbb R}^n_+$ such that $y_i=0$ and $y_j=x_j\vee z_j - x_j,$ $j\not=i$, we get $\partial_i g_z(x)=\partial_if(x \vee z)-\partial_if(x) \ge 0$ for all $i$ not in the support of $z$.
We will finish the proof by applying an induction argument. Assume that the statement of the theorem is true for $m-1$, for some $m\ge 3$. Let $f:{\mathbb R}_+^n\ra{\mathbb R}$ be a $C^m$ $m$-supermodular function. Then applying (\ref{eq:induction}) we get that $x\mapsto g_z(x)$ is $(m-1)$-supermodular for any $z\in{\mathbb R}_+^n$ as a function of $x\in H_z$, which, applying the inductive assumption, gives us
\begin{eqnarray*}
\partial_{i_1}\cdots \partial_{i_{m-1}} [f(x \vee z)-f(x)]\geq 0
\end{eqnarray*}
for every distinct $i_1,\dots, i_{m-1}$ among the coordinates of $H_z$. Applying this to $z=z_{i_m} e_{i_m}$ we get
\begin{eqnarray*}
\partial_{i_1}\cdots \partial_{i_{m}} f(x)\geq 0.
\end{eqnarray*}
Now assume the partial derivative condition of the theorem. Then for every $z\in {\mathbb R}_+^n$ and $i_1,\dots, i_{m-1}$, coordinates of $H_z$, we have
\begin{eqnarray*}
\partial_{i_1}\cdots \partial_{i_{m-1}} g_z(x)=\partial_{i_1}\cdots \partial_{i_{m-1}} f(x \vee z)- \partial_{i_1}\cdots \partial_{i_{m-1}}f(x),
\end{eqnarray*}
but $\partial_{i_1}\cdots \partial_{i_{m-1}}\partial_{i_{m}}f(x) \ge 0$ for every $i_m\not= i_k,$ for $k=1, \dots, m-1$, and thus for every $i_m$ for which $z_{i_m}\not = 0$. So $\partial_{i_1}\cdots \partial_{i_{m-1}}f(x)$ is a non-decreasing function in each coordinate $i_m$ for which $z_{i_m}\not = 0$:
$$
\partial_{i_1}\cdots \partial_{i_{m-1}} f(x \vee z)- \partial_{i_1}\cdots \partial_{i_{m-1}}f(x) \ge 0
$$
and, applying the inductive assumption, we get that $x\mapsto g_z(x)$ is $(m-1)$-supermodular for any $z\in{\mathbb R}_+^n$ as a function of $x\in H_z$, which finishes the proof with the help of (\ref{eq:induction}).
\end{proof}
As an example, which will help in understanding the connection between supermodularity and Minkowski sums of sets, let $\varphi: {\mathbb R}_+\to{\mathbb R}$ be a convex function. Then for every $a_0,a_1,a_2\in{\mathbb R}_+$ one has
\begin{eqnarray*}
\varphi(a_0+a_1)+\varphi(a_0+a_2)\le \varphi(a_0+a_1+a_2)+\varphi(a_0).
\end{eqnarray*}
This property can be seen again as the supermodularity of the function $\Phi:2^{[2]}\to{\mathbb R}$ defined by $\Phi(s)=\varphi(a_0+e(s)_1a_1+e(s)_2a_2)$, for any $s\in 2^{[2]}$, where $e(s)$ is defined above Lemma~\ref{lem:set-rn}.
We remark in passing that the positivity of mixed partial derivatives and its global manifestation also arises in the theory of copulas in probability (see, e.g., \cite{Car09}). In particular, it is well known there that for smooth functions $C:[0,1]^m\ra [0,1]$, the condition $\partial^{m}_{1,2,\ldots,m} C\geq 0$
is equivalent to the condition that
$\sum_{z\in \{x_i, y_i\}^m} (-1)^{N(z)} C(z)\geq 0$ for every box $\prod_{i=1}^m [x_i,y_i] \subset [0,1]^m$, where
$N(z)=\#\{k:z_k=x_k\}$.
\subsection{Higher order supermodularity of volume}
\label{ss:vol-super}
\begin{thm}\label{thm:supmodvol}
Let $n,k\in\mathbb{N}$. Let $B_1, \dots, B_k$ be convex compact sets in ${\mathbb R}^{n}$. Then the function $v: {\mathbb R}^k_+ \to {\mathbb R}$ defined as
\begin{equation}\label{eq:upmodvol}
v(x)=\left|\sum_{i=1}^k x_i B_i \right|
\end{equation}
is $m$-supermodular for any $m\in\mathbb{N}$.
\end{thm}
\begin{proof}
From the mixed volume formula \eqref{eq:mvf}, the function $v$ is a polynomial with non-negative coefficients so its mixed derivatives of any order are non-negative on ${\mathbb R}_+^n$. By Theorem \ref{thm:diff}, we conclude that it is $m$-supermodular for any $m$.
\end{proof}
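For axis-aligned boxes $B_i$ the function $v$ is fully explicit, $v(x)=\prod_j\big(\sum_i x_i b_{ij}\big)$, so the defining alternating sum of Definition \ref{def:supermod-Rm} can be tested numerically. The following Python sketch (illustrative only; the choice of boxes and of the disjoint-support vectors is ours) checks $m$-supermodularity on random instances:

```python
import random
from itertools import combinations

def vol_of_combination(x, boxes):
    # volume of sum_i x_i B_i, where B_i is the axis-aligned box with
    # side-length vector boxes[i]; the sum is a box with added sides
    n = len(boxes[0])
    v = 1.0
    for j in range(n):
        v *= sum(xi * b[j] for xi, b in zip(x, boxes))
    return v

def alternating_sum(x0, xs, boxes):
    # sum over I subset of [m] of (-1)^(m-|I|) v(x0 v (max over i in I) x_i)
    m = len(xs)
    total = 0.0
    for r in range(m + 1):
        for I in combinations(range(m), r):
            y = list(x0)
            for i in I:
                y = [max(a, b) for a, b in zip(y, xs[i])]
            total += (-1) ** (m - r) * vol_of_combination(y, boxes)
    return total

random.seed(2)
k, n, m = 6, 3, 3
ok = True
for _ in range(100):
    boxes = [[random.uniform(0.1, 2) for _ in range(n)] for _ in range(k)]
    x0 = [random.uniform(0, 2) for _ in range(k)]
    coords = list(range(k))
    random.shuffle(coords)
    xs = [[0.0] * k for _ in range(m)]  # disjoint supports by construction
    for idx, c in enumerate(coords):
        xs[idx % m][c] = random.uniform(0, 2)
    ok = ok and alternating_sum(x0, xs, boxes) >= -1e-6
```

The small negative tolerance only absorbs floating-point cancellation; by Theorem \ref{thm:supmodvol} the exact alternating sum is non-negative.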
\begin{rem}\label{rk:mixed-vol-sm} Theorem \ref{thm:supmodvol} can be given in a more general form: for any natural number $l \le n$, any convex bodies $C_1,\dots, C_l$ in ${\mathbb R}^n$ and any convex sets $B_1, \dots, B_k$ in ${\mathbb R}^{n}$, the function $v: {\mathbb R}^k_+ \to {\mathbb R}$
$$v(x)=V\left(\left(\sum_{i=1}^k x_i B_i\right)[n-l], C_1, \dots, C_l \right)$$ is $m$-supermodular for any $m\in\mathbb{N}$.
\end{rem}
We notice that in Theorem \ref{thm:supmodvol} the convexity assumption is essential. Indeed, as was observed in \cite{FMMZ18}, for $k=3$, there exist non-convex sets $B_1, B_2, B_3$ such that the function $v$ defined above is not supermodular. We will discuss this issue in more detail in Section \ref{sec:notconv} below.
Using Theorem \ref{thm:supmodvol}, Remark \ref{rk:mixed-vol-sm}, Remark \ref{rk:setsuper} and Theorem \ref{thm:compr} we deduce the following corollary.
\begin{cor}\label{cor:supmod}
Let $n,k\in\mathbb{N}$ and let $B_1,\dots, B_k$ be compact convex sets in ${\mathbb R}^n$. Let $0\le l \le n$ and let $C_1,\dots, C_l$ be convex bodies in ${\mathbb R}^n$. Then
\begin{enumerate}
\item the function $\bar{v}:2^{[k]}\ra [0,\infty)$ defined by
\begin{eqnarray}\label{def:setfn-v}
\bar{v}(s)=V\left(\left(\sum_{i\in s} B_i\right) [n-l], C_1, \dots, C_l \right)
\end{eqnarray}
for each $s\subset [k]$, is an $m$-supermodular set function, for any $m\in\mathbb{N}$.
\item Let $\mathcal{A}$ and $\mathcal{B}$ be finite multisets of subsets of $[k]$, with $\mathcal{A} > \mathcal{B}$.
Then
\begin{eqnarray}
\sum_{s\in\mathcal{A}} \bar{v}(s) \leq \sum_{t\in\mathcal{B}} \bar{v}(t).
\end{eqnarray}
\end{enumerate}
\end{cor}
Let us note that the above $m$-supermodularity of the function $\bar{v}$ is equivalent to the fact that for any convex bodies $B_0, B_1, \dots, B_{k}, C_1, \dots, C_l$ in ${\mathbb R}^n$
$$
\sum_{s\subset[k]}(-1)^{k-|s|}V\left(\left(B_0+\sum_{i\in s}B_i\right)[n-l], C_1, \dots, C_l\right)\ge0.
$$
Applying the previous theorem to $l=0$, we get
\begin{equation}\label{eq_B_0}
\sum_{s\subset[k]}(-1)^{k-|s|}\left|B_0+\sum_{i\in s} B_i\right| \geq 0.
\end{equation}
The above inequality for $k=n$ and $B_0=\{0\}$ also follows directly from the following classical formula (see Lemma 5.1.4 in \cite{Sch14:book})
$$
\sum_{s\subset[n]}(-1)^{n-|s|}\left|\sum_{i\in s} B_i\right| = n!V(B_1, \dots, B_n).
$$
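For axis-aligned boxes both sides of this classical formula are explicit: if $B_i$ has side lengths $(b_{i1},\dots,b_{in})$, then $n!V(B_1,\dots,B_n)$ equals the permanent of the matrix $(b_{ij})$, which is read off by expanding $\prod_j\sum_i t_ib_{ij}$. The following Python sketch (our own illustration on a concrete example) confirms the identity:

```python
from itertools import combinations, permutations

def prod(values):
    p = 1.0
    for v in values:
        p *= v
    return p

def alternating_volume_sum(boxes):
    # sum over s subset of [n] of (-1)^(n-|s|) |sum_{i in s} B_i| for
    # axis-aligned boxes with side-length vectors boxes[i];
    # the empty sum has volume 0
    n = len(boxes)
    total = 0.0
    for r in range(n + 1):
        for s in combinations(range(n), r):
            v = prod(sum(boxes[i][j] for i in s) for j in range(n)) if s else 0.0
            total += (-1) ** (n - r) * v
    return total

def permanent(mat):
    n = len(mat)
    return sum(prod(mat[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# for these boxes in R^3 both quantities equal 50
boxes = [[1.0, 2.0, 3.0], [2.0, 1.0, 2.0], [3.0, 3.0, 1.0]]
lhs = alternating_volume_sum(boxes)
rhs = permanent(boxes)  # equals n! V(B_1, ..., B_n) for axis-aligned boxes
```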
In the same way, we can also give another proof of the general case of (\ref{eq_B_0}).
\begin{thm}\label{sch}
Let $B_0, B_1, \dots, B_{m}$ be convex bodies in ${\mathbb R}^n$. Then
\begin{eqnarray*}
\sum_{s\subset[m]} (-1)^{m-|s|} \left|B_0+\sum_{i\in s} B_i\right| =\!\!\!
\displaystyle\sum\limits_{\substack{\sum_{i=0}^m k_i=n;\\ k_1, \dots, k_m \ge 1}} {n \choose k_0, k_1, \dots, k_m} V(B_0[k_0], B_1[k_1], \dots, B_m[k_m])
\end{eqnarray*}
for $m\le n$ and zero otherwise.
\end{thm}
\begin{proof} We follow the proof of Lemma 5.1.4 in \cite{Sch14:book}. Define
$$
g(t_0, t_1, \dots, t_m)=\sum_{s\subset[m]} (-1)^{m-|s|} \left|t_0B_0+\sum_{i\in s} t_iB_i\right|.
$$
Observe that $g$ is a homogeneous polynomial of degree $n$ and note that $g(t_0, 0, t_2, \dots, t_m)=0$, which can be seen by noticing that, in this particular case, the sum is telescoping. This implies that, in the polynomial $g(t_0, \dots, t_m)$, all monomials with non-zero coefficients must contain a non-zero power of $t_1$. The same being true for each $t_i$, $i \ge 1$, there are no non-zero monomials if $m>n$. If $m \le n$, all non-zero monomials must come from the case $|s|=m$, i.e. from
$$
|t_0B_0+t_1 B_1+ \dots +t_mB_m|,
$$
which finishes the proof.
\end{proof}
Thanks to the fact that supermodular set functions taking the value 0 at the empty set are fractionally
superadditive (see, e.g., \cite{MP82, MT10}), we can immediately deduce the following inequality.
Let $n\ge1$, $k\ge2$ be integers and let $A_1, \dots, A_k$ be $k$ convex sets in ${\bf R}^n$. Then, for any
fractional partition $\beta$ using a hypergraph $\mathcal{C}$ on $[k]$,
\begin{eqnarray}\label{eq:fsa}
\left|\sum_{i=1}^kA_i\right| \ge \sum_{s\in\mathcal{C}} \beta(s)\left|\sum_{j\in s}A_j\right|.
\end{eqnarray}
It was shown in \cite{BM21}
that \eqref{eq:fsa} actually extends to all compact sets in ${\mathbb R}^n$, but supermodularity does not extend to compact sets as discussed in the next section.
\subsection{Going beyond convex bodies}\label{sec:notconv}
Consider sets $A, B \subset {\bf R}^n$, such that $0 \in B$. Define $\Delta_B(A)=(A+B)\setminus A$, and note that $A+B$ is always a superset of $A$ because of the assumption
that $0\in B$.
The supermodularity of volume also says something about set increments. Indeed, for any sets $A, B, C$ consider
\begin{eqnarray*}
\Delta_C\Delta_B(A)
=\Delta_C \big((A+B)\setminus A\big)
=\bigg(\big((A+B)\setminus A\big)+C \bigg)\setminus \big((A+B)\setminus A\big).
\end{eqnarray*}
We have, if $0 \in B \cap C$:
\begin{align}\label{difference}
\left|\Delta_C\Delta_B(A)\right|
&=\left|\big((A+B)\setminus A\big)+C\right|- \left|(A+B)\setminus A\right| \nonumber\\
&\geq \left|(A+B+C)\setminus (A+C)\right|- \left|A+B\right|+\left|A \right|\nonumber\\
&= |A+B+C|- |A+C|- |A+B|+ |A|,
\end{align}
where
the inequality follows from the general fact that
$(K+C)\setminus (L+C)\subset (K\setminus L)+C$. Moreover, if $A,B,C$ are convex, compact sets then the estimate is non-trivial, i.e., using Theorem~\ref{sch} we get that
the right-hand side of the above inequality is non-negative.
It is interesting to note that the $\Delta$ operation is not commutative, i.e. $\Delta_C\Delta_B(A) \not = \Delta_B\Delta_C(A)$; this can be seen, for example, in ${\mathbb R}^2$ by taking $A$ to be a square, $B$ to be a segment, and $C$ to be a Euclidean ball.
It is natural to ask if the higher-order analog of this observation remains true.
\begin{ques}
Let $m\in \mathbb{N}$ and let $B_1, \ldots, B_m \subset {\mathbb R}^n$ be compact sets containing the origin. For any compact $B_0 \subset {\mathbb R}^n$, is it true that
\begin{eqnarray*}
\left|\Delta_{B_1}\ldots \Delta_{B_m} (B_0)\right|\geq\sum_{s\subset[m]}(-1)^{m-|s|}\left|B_0+\sum_{i\in s} B_i\right| ?
\end{eqnarray*}
\end{ques}
The inequality (\ref{difference}) gives a positive answer to the above question in the case $m=2$. We also observe that if $B_0, B_1, \dots, B_m$ are convex, then the right hand side is non-negative thanks to Theorem~\ref{thm:supmodvol}. We note that it was observed in \cite{FMMZ18}, by considering $A=\{0,1\}$ and $B=C=[0,1]$ in ${\mathbb R}^1$,
that the volume of Minkowski sums cannot be supermodular (even in dimension 1) if the convexity assumption on the set $A$ is removed.
Nonetheless \cite{FMMZ18} observed that if $A, B, C\subset {\bf R}$ are compact, then
\begin{eqnarray*}
|A+B+C| +|\mathrm{conv}(A)|\ge |A+B| + |A+C|;
\end{eqnarray*}
it is unknown if this extends to higher dimension. In particular, we do not know if the following conjecture is true for $n\ge2$.
\begin{conj}\label{submodulnotconv} For any convex body $A$ and any compact sets $B$ and $C$ in ${\bf R}^n$,
$$|A+B+C| +|A|\ge |A+B| + |A+C|. $$
\end{conj}
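The one-dimensional example of \cite{FMMZ18} mentioned above is simple enough to verify mechanically. The Python sketch below (our own illustration; sets are represented as unions of closed intervals, with points encoded as degenerate intervals) checks both the failure of supermodularity for $A=\{0,1\}$, $B=C=[0,1]$ and the repaired inequality with $|\mathrm{conv}(A)|$:

```python
def union_length(intervals):
    # total length of a union of closed intervals [lo, hi] on the line
    total, cur_lo, cur_hi = 0.0, None, None
    for lo, hi in sorted(intervals):
        if cur_hi is None or lo > cur_hi:
            if cur_hi is not None:
                total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:
        total += cur_hi - cur_lo
    return total

def minkowski(X, Y):
    # Minkowski sum of two unions of intervals
    return [(x0 + y0, x1 + y1) for x0, x1 in X for y0, y1 in Y]

A = [(0.0, 0.0), (1.0, 1.0)]  # the two-point set {0, 1}
B = [(0.0, 1.0)]
C = [(0.0, 1.0)]

AB, AC = minkowski(A, B), minkowski(A, C)
ABC = minkowski(AB, C)
# supermodularity fails: |A+B+C| + |A| = 3 + 0 < 2 + 2 = |A+B| + |A+C|
fails = union_length(ABC) + union_length(A) < union_length(AB) + union_length(AC)
# replacing |A| by |conv(A)| = 1 restores the inequality
restored = union_length(ABC) + 1.0 >= union_length(AB) + union_length(AC)
```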
We can confirm Conjecture \ref{submodulnotconv} under the assumption that $B$ is a zonoid.
\begin{thm}
Assume $A$ is a convex compact set, $B$ is a zonoid and $C$ is any compact set in ${\mathbb R}^n$. Then
$$|A+B+C| +|A|\ge |A+B| + |A+C|. $$
\end{thm}
\begin{proof}
By approximation, we may assume that $B$ is a zonotope.
Using the definition of mixed volumes (\ref{eq:mvf})
and (\ref{eq:proj1}) we get that for any convex compact set $M$ in ${\mathbb R}^n$
$$
|M+[0,tu]|- |M|=t|P_{u^\perp} M|_{n-1}, \mbox{ for all } t>0, u\in S^{n-1}.
$$
The above formula can also be proved using a geometric approach and thus studied in the case of not necessarily convex $M$. Indeed, consider a compact set $M$ in ${\mathbb R}^n,$ $t>0$ and $u\in S^{n-1},$ and let $\partial_u M$ be the set of all $x \in \partial M$ such that $x \cdot u \ge y \cdot u,$ for all $y\in M$ for which $P_{u^\perp} y =P_{u^\perp}x.$ Note that
$$
(\partial_u M + (0,tu]) \cap M =\emptyset, \mbox{ but } M \cup (\partial_u M+ (0, tu]) \subseteq M+[0,tu].
$$
Thus
$$
|M+[0,tu]| \ge |M|+ t|P_{u^\perp} M|_{n-1}.
$$
Now, we are ready to prove the theorem for $B=[0,tu]$:
$$
|A+C+[0,tu]| -|A+C| \ge t|P_{u^\perp} (A+C)|_{n-1}
\ge t |P_{u^\perp} A|_{n-1}
= |A+[0,tu]| -|A|.
$$
Thus, we proved that, for any $u \in {\bf R}^n$,
\begin{eqnarray}\label{in:vec}
|A+C+[0,u]| - |A+[0,u]| \ge |A+C| -|A|.
\end{eqnarray}
Now, we can prove the theorem for the case of a zonotope. Indeed, let $Z_k=\sum_{i=1}^{k} [0, u_i]$ be a zonotope. Apply inequality (\ref{in:vec}) to the convex body $A+Z_{k-1}$ and the vector $u=u_{k}$ to get
$$
|A+C+Z_{k}| -|A+Z_{k}| \ge |A+C+Z_{k-1}| -|A+Z_{k-1}|.
$$
Iterate the above inequality to prove the theorem for the case of $B$ being a zonotope. The theorem now follows from continuity of the volume and the fact that every zonoid is a limit of zonotopes.
\end{proof}
\section{Pl\"unnecke-Ruzsa inequalities for convex bodies}
\label{sec:Pl\"unnecke}
\subsection{Existing Pl\"unnecke-Ruzsa inequality for convex bodies}
\label{ss:plun}
Bobkov and Madiman \cite{BM12:jfa} developed a technique for going from entropy to volume estimates,
by using certain reverse H\"older inequalities that hold for convex measures.
Specifically, \cite[Proposition 3.4]{BM12:jfa} shows that if $X_i$ are independent random variables
with $X_i$ uniformly distributed on a convex body $K_i \subset {\mathbb R}^n$ for each $i=1,\ldots, m$,
then
$h(X_1+\ldots+X_m)\geq \log |K_1+\ldots+K_m| -n\log m$, where the entropy of a random variable $X$ with density $f$ on ${\mathbb R}^n$ is defined by
\begin{equation}\label{eq:entropy}
h(X)=-\int f(x)\log f(x) dx.
\end{equation}
This is a reverse H\"older inequality in the sense that $h(X_1+\ldots+X_m)\leq \log |K_1+\ldots+K_m|$ may be seen
by applying H\"older's inequality and then taking a limit. More general sharp inequalities relating
R\'enyi entropies of various orders for measures having convexity properties are described in \cite{FLM20} (see also \cite{BM11:it, FMW16, BFLM17}). Applying this technique to the submodularity of entropy of sums discovered in \cite{Mad08:itw}, the authors of \cite{BM12:jfa} demonstrate the following inequality.
\begin{thm}\label{thm:frac-Pl\"unnecke}
Let $\mathcal{C}_k$ denote the collection of all subsets of $[m]=\{1,\ldots, m\}$ that are of cardinality $k$.
Let $A$ and $B_{1},\ldots, B_{m}$ be convex bodies in ${\mathbb R}^{n}$, and suppose
$$
\bigg|A+\sum_{i\in s} B_i \bigg|^{\frac{1}{n}} \leq c_{s} |A|^{\frac{1}{n}}
$$
for each $s\in\mathcal{C}_k$, with given numbers $c_{s}$.
Then
$$
\bigg|A+\sum_{i=1}^m B_i \bigg|^{\frac{1}{n}}
\leq (1+m) \bigg[\prod_{s\in\mathcal{C}_k} c_{s}\bigg]^{\frac{1}{\binom{m-1}{k-1}}} |A|^{\frac{1}{n}} .
$$
\end{thm}
In particular, by choosing $k=1$, one already obtains an interesting inequality for volumes of Minkowski sums.
\begin{cor}\label{cor:Pl\"unnecke}
Let $A$ and $B_{1},\ldots, B_{m}$ be convex bodies in ${\mathbb R}^{n}$. Then
$$
|A|^{m-1}\bigg|A+\sum_{i=1}^m B_i \bigg|
\leq (1+m)^n \prod_{i=1}^m |A+B_i| .
$$
\end{cor}
Thus, one may think of Corollary~\ref{cor:Pl\"unnecke} as providing yet another continuous analogue of the Pl\"unnecke-Ruzsa inequalities
in the context of volumes of convex bodies in Euclidean spaces (compare with (\ref{eq:ruzvol})), where going from the discrete to the continuous
incurs the extra factor of $(1+m)$, but one does not need to bother with taking subsets of the set $A$.
In particular, with $m=2$, one gets ``log-submodularity of volume up to an additive term'' on convex bodies.
\begin{cor}\label{cor:Pl\"unnecke2}
Let $A$ and $B_{1}, B_{2}$ be convex bodies in ${\mathbb R}^{n}$. Then
\begin{eqnarray}\label{3body-3n}
|A|\,|A+B_1+B_2 |
\leq 3^n |A+B_1| \,|A+B_2|.
\end{eqnarray}
\end{cor}
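As an illustrative sanity check (not part of the argument), one may test (\ref{3body-3n}) numerically on axis-aligned boxes, for which Minkowski sums and volumes are explicit: side lengths add coordinatewise. In fact, boxes satisfy the inequality even with constant $1$, since $a(a+b+c)\le(a+b)(a+c)$ holds in each coordinate, so the factor $3^n$ is trivially sufficient. A minimal Python sketch (all helper names are ours):

```python
import random

def box_volume(sides):
    """Volume of an axis-aligned box with the given side lengths."""
    v = 1.0
    for s in sides:
        v *= s
    return v

def box_sum(*boxes):
    """Minkowski sum of axis-aligned boxes: side lengths add coordinatewise."""
    return tuple(map(sum, zip(*boxes)))

random.seed(0)
n = 5
for _ in range(1000):
    A, B1, B2 = (tuple(random.uniform(0.1, 10.0) for _ in range(n))
                 for _ in range(3))
    lhs = box_volume(A) * box_volume(box_sum(A, B1, B2))
    rhs = box_volume(box_sum(A, B1)) * box_volume(box_sum(A, B2))
    # boxes satisfy the inequality even without the factor 3^n
    assert lhs <= rhs * (1 + 1e-12)
```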
Unfortunately, the dimension-dependent factor $3^n$ is a hindrance that one would like to remove or improve, which is the purpose of the next section.
\begin{rem} We notice that in the case where $B_1=B_2=B$ the inequality holds with constant $1$:
$$
|A|\,|A+B+B|
\leq |A+B|^2
$$
by the Brunn-Minkowski inequality. In the next section, we shall see that this is no longer true for $B_1\neq B_2$. Moreover, as we will see in Lemma \ref{lm:example},
the above inequality is not true with any absolute constant if we only assume that the sets $A$ and $B$ are compact, which exposes an essential difference between this inequality and (\ref{eq:ruzvol}).
\end{rem}
\subsection{Improved upper bounds in general dimension}
\label{ss:gen-ub}
In this section, we will present an improvement of the constant $3^n$ in the three body inequality from Corollary~\ref{cor:Pl\"unnecke2}. We define the constant $c_n$ by \eqref{cn-def}, or equivalently as the infimum of the constants $c>0$ such that, for all convex compact sets $A,B,C$ in ${\mathbb R}^n$,
\begin{eqnarray*}
|A|\,|A+B+C| \le c |A+B|\,|A+C|.
\end{eqnarray*}
We recall that $\varphi=(1+\sqrt{5})/2$ denotes the golden ratio.
\begin{thm}\label{thm:ub} Let $n\ge2$. Then, one has $1=c_2\le c_n\le \varphi^n$, i.e., for all convex compact sets $A, B, C \subset {\mathbb R}^n$,
$$
|A|\,|A+B+C| \le \varphi^{n} |A+B|\,|A+C|.
$$
\end{thm}
\begin{proof}
Observe that, taking $B=C=\{0\}$ we get $c_n\ge1$.
We apply (\ref{eq:mvf}) to get
\begin{align*}
|A|\,|A+B+C|=&\sum_{k+j+m=n} {n \choose k,j,m} |A| V(A[k], B[j],C[m])\\=&\sum_{0\le j+m\le n} {n \choose j,m,n-j-m} |A| V(A[n-j-m], B[j],C[m]),
\end{align*}
$$
|A+B|\,|A+C|=\sum_{j=0}^n \sum_{m=0}^n {n \choose j} {n \choose m} V(A[n-j], B[j]) V(A[n-m], C[m]).
$$
The comparison of the above sums term by term shows that $c_n\le d_n$ where $d_n$ satisfies
\begin{equation}
|A| V(A[n-j-m], B[j],C[m]) \le d_n\frac{ {n \choose j} {n \choose m}}{ {n \choose j,m,n-j-m}} V(A[n-j], B[j]) V(A[n-m], C[m]).
\end{equation}
Rewriting the above in a more symmetric way we get, for $m, j \ge 0$ and $m+j\le n$:
\begin{eqnarray}\label{eq:conj-mixed}
\frac{|A|}{n!}\frac{V(A[n-j-m], B[j], C[m])}{(n-j-m)!}\le d_n\frac{V(A[n-j],B[j])}{(n-j)!}\frac{V(A[n-m],C[m])}{(n-m)!}.
\end{eqnarray}
Notice that for $m=0$ or $j=0$, (\ref{eq:conj-mixed}) trivially holds for any $d_n\ge 1$.
Using inequality (\ref{eq:jxiao}) we get that $d_n$ will satisfy inequality (\ref{eq:conj-mixed}) as long as
\begin{equation}\label{est_b}
\min\left\{ {n-j \choose m}, {n-m \choose j} \right\} \le d_n.
\end{equation}
Note that the above is true with constant $d_n=1$ if $m+j=n$.
We also note that, if $m=j=1$, then the required inequality (\ref{eq:conj-mixed}) becomes
\begin{eqnarray}
|A|V(A[n-2], B, C)\le d_n\frac{n}{n-1} V(A[n-1],B) V(A[n-1],C).
\end{eqnarray}
Using (\ref{alexloc}), we see that in this case it is enough to select $d_n \ge \frac{2(n-1)}{n}$. In particular, we get that $c_2=d_2=1$. For the more general case, we can provide a bound for $d_n^{1/n}$ using Stirling's approximation formula. Indeed,
$$
{p \choose q}\le \frac{p^p}{(p-q)^{p-q} q^q} e^{\frac{1}{12p}-\frac{1}{12(p-q)+1}-\frac{1}{12q+1}} \sqrt{\frac{p}{2\pi (p-q)q}},
$$
and hence, using $(p-q)q\ge p-1$ for $1\le q\le p-1$,
$$
{p \choose q}\le \frac{p^p}{(p-q)^{p-q} q^q} e^{\frac{1}{12p}-\frac{12p+2}{(12(p-q)+1)(12q+1)}} \sqrt{\frac{p}{2\pi (p-1)}} \le \frac{p^p}{(p-q)^{p-q} q^q}.
$$
Next, let $j=yn$ and $m=xn$, where $x,y \ge 0$ and $x+y \le 1$; then it is sufficient for $d_n$ to satisfy
$$
\max_{\substack{x, y \ge 0\\ x+y\le 1}}\min\left\{\frac{(1-y)^{1-y}}{(1-x-y)^{1-x-y}x^x}, \frac{(1-x)^{1-x}}{(1-x-y)^{1-x-y}y^y} \right\}\le d_n^{1/n}.
$$
Without loss of generality we may assume that $|x-1/2| \le |y-1/2|$ and thus $(1-x)^{1-x}x^x \le (1-y)^{(1-y)}y^y$. Our next goal is to provide an upper estimate for
$$
\max \frac{(1-x)^{1-x}}{(1-x-y)^{1-x-y}y^y} ,
$$
where the maximum is taken over a set
\begin{align*}
\Omega&=\{(x, y)\in{\bf R}_+^2: x+y\le 1, |1/2-x| \le |1/2-y|\}\\
&=\{(x,1-x); 0\le x\le 1/2\}\cup\{(x,y)\in{\bf R}_+^2; y \le \min(x,1-x)\}.
\end{align*}
We note that the function $y\mapsto(1-x-y)^{1-x-y}y^y$ is decreasing for $y \in [0, (1-x)/2]$ and increasing on $[(1-x)/2, (1-x)]$. So we may consider two cases, comparing $x$ and $(1-x)/2$. Next,
$$
\max_{\Omega \cap \{x\in [0,1/3]\}} \frac{(1-x)^{1-x}}{(1-x-y)^{1-x-y}y^y} = \max_{[0,1/3]} \frac{(1-x)^{1-x}}{(1-2x)^{1-2x}x^x} = \frac{1+\sqrt{5}}{2}.
$$
The last equality follows from the fact that the maximum is achieved when
$
\frac{(1-2x)^2}{(1-x)x}=1,
$
i.e. at $x=(5-\sqrt{5})/10$. Finally
$$
\max_{\Omega \cap \{x\in [1/3,1]\}} \frac{(1-x)^{1-x}}{(1-x-y)^{1-x-y}y^y} \le \max_{x\in [1/3,1]} \frac{(1-x)^{1-x}}{((1-x)/2)^{1-x}} = 2^{2/3} < \frac{1+\sqrt{5}}{2}.
$$
\end{proof}
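The maximisation at the end of the proof is easy to check numerically. The following Python snippet (an illustration only, not part of the proof) evaluates $f(x)=(1-x)^{1-x}/\big((1-2x)^{1-2x}x^x\big)$ on $[0,1/3]$ and confirms that its maximum is the golden ratio $\varphi$, attained at $x=(5-\sqrt{5})/10$:

```python
import math

def f(x):
    """(1-x)^(1-x) / ((1-2x)^(1-2x) x^x), computed via t*log(t)."""
    def xlogx(t):
        return 0.0 if t <= 0 else t * math.log(t)
    return math.exp(xlogx(1 - x) - xlogx(1 - 2 * x) - xlogx(x))

phi = (1 + math.sqrt(5)) / 2
x_star = (5 - math.sqrt(5)) / 10       # the claimed maximiser on [0, 1/3]

assert abs(f(x_star) - phi) < 1e-9
# a grid search on (0, 1/3] never beats the golden ratio
assert max(f(k / 300000) for k in range(1, 100001)) <= phi + 1e-9
```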
The next proposition gives a different proof of (\ref{eq:ruzvol1}) in the special case of convex sets and, we hope, gives yet another example of how the methods of mixed volumes as well as the B\'ezout type inequality (\ref{eq:jxiao}) can be applied in this context.
\begin{prop}\label{lem:weak-3body}
Let $A, B_1, \dots, B_m$ be convex bodies in ${\mathbb R}^n$. Then
\begin{equation}\label{eq:threesimple}
|A|\,|B_1+\dots +B_m| \le \prod\limits_{i=1}^m|A+B_i|,
\end{equation}
with equality only if $|A|=0$.
\end{prop}
\begin{proof}
By induction, the general case follows immediately from the case $m=2$, so we assume $m=2$ and denote $B_1=B$ and $B_2=C$.
The inequality follows from the proof of Theorem \ref{thm:ub} and the observation that, after decomposing the left and right hand sides of (\ref{eq:threesimple}) into mixed volumes, it suffices to show that
$$
\sum_{m=0}^{n} {n \choose m} |A| V(B[n-m], C[m])
\le \sum_{j=0}^n \sum_{m=0}^n {n \choose j} {n \choose m} V(A[n-j], B[j]) V(A[n-m], C[m]).
$$
It turns out that it is enough to consider only the terms with $m+j=n$ on the right hand side, i.e. to show that
$$
\sum_{m=0}^{n} {n \choose m} |A| V(B[n-m], C[m])
\le \sum_{m=0}^{n} {n \choose n-m} {n \choose m} V(A[m], B[n-m]) V (A[n-m], C[m]),
$$
which is true term by term by using (\ref{eq:jxiao}) with $m+j=n$. Now assume that there is equality. This implies that the term with $j=m=0$ in the above double sum, which equals $|A|^2$, must vanish, i.e.
$|A|=0$.
\end{proof}
\subsection{Improved constants in dimensions 3 and 4}
\label{ss:ub34}
Theorem \ref{thm:ub} gives an optimal bound of $1$ for the three body inequality in dimension $2$. Next, we will show how to get better bounds for $c_n$ in dimensions $3$ and $4$.
\begin{thm}\label{th:r3} Let $A, B, C$ be convex compact sets in ${\mathbb R}^3$. Then
$$
|A|\,|A+B+C| \le \frac{4}{3} |A+B|\,|A+C|
$$
and the constant is best possible: $c_3=\frac{4}{3}$. Moreover, if $A$ is a simplex, then
$$
|A|\,|A+B+C| \le |A+B|\,|A+C|.
$$
\end{thm}
\begin{proof} We follow the same strategy as in the proof of Theorem \ref{thm:ub} and arrive at the inequality (\ref{eq:conj-mixed}) with $m, j \ge 0$ and $m+j\le 3$:
$$
\frac{|A|}{3!}\frac{V(A[3-j-m], B[j], C[m])}{(3-j-m)!}\le d_3\frac{V(A[3-j],B[j])}{(3-j)!}\frac{V(A[3-m],C[m])}{(3-m)!}.
$$
Again, the inequality is trivially true for $m=0$ or $j=0$ with any constant $d_3\ge1$. Thus, we are left with the following two inequalities:
\begin{equation}\label{eqXiao}
|A|V(B, C[2])\le3d_3V(B,A[2])V(A,C[2])
\end{equation}
and
\begin{equation}\label{eq:bez}
|A|V(A,B,C) \le \frac{3}{2} d_3 V(B,A[2])V(C, A[2]).
\end{equation}
We note that the inequality (\ref{eqXiao}) with $d_3\ge1$ follows from (\ref{eq:jxiao}).
Next we note that (\ref{eq:bez}) is true with $d_3 \ge 1$ when $A$ is a simplex (see \cite{SZ16}). The general case of (\ref{eq:bez}) follows from (\ref{alexloc}) with $d_3=4/3$, giving $c_3\le 4/3$. The proof that this bound is optimal is given in Section \ref{ss:PR-LB}, where we, in particular, establish that $c_n\ge 2-\frac{2}{n}$.
\end{proof}
\begin{thm} \label{th:r4} Let $A, B, C$ be convex compact sets in ${\mathbb R}^4$. Then
$$
|A|\,|A+B+C| \le 2 |A+B|\,|A+C|.
$$
Thus $c_4\le2$. Moreover, if $A$ is a simplex, then
$$
|A|\,|A+B+C| \le |A+B|\,|A+C|.
$$
\end{thm}
\begin{proof}
We will check inequality (\ref{eq:conj-mixed}) for $n=4$, $0\le m\le j\le4$ and $m+j\le 4$:
$$
\frac{|A|}{4!}\frac{V(A[4-j-m], B[j], C[m])}{(4-j-m)!}\le d_4\frac{V(A[4-j],B[j])}{(4-j)!}\frac{V(A[4-m],C[m])}{(4-m)!}.
$$
The inequality is trivially true for $m=0$ or $j=0$ with a constant $d_4 \ge 1$. Taking into account that the inequality is symmetric with respect to $m$ and $j$ and to $B$ and $C$, we see that it is enough to consider the cases $(j,m)\in\{(1,1); (1,2); (1,3); (2,2)\}$. For $(j,m)=(1,1)$, we need to show that
$$
|A| V(A[2],B,C) \leq \frac{4}{3} d_4 V(A[3],B) V(A[3],C).
$$
If $A$ is a simplex then the above is true with $\frac{4}{3} d_4=1$ (see \cite{SZ16}) and the general case follows from (\ref{alexloc}) with $\frac{4}{3} d_4 \ge 2$, that is, $d_4\ge \frac{3}{2}$. For $(j,m)=(1,2)$, we need to show that
$$
|A| V(A,B,C,C)\leq 2d_4 V(A[3],B) V(A[2],C[2]).
$$
We again observe that if $A$ is a simplex then the above is true with $2d_4=1$. To resolve the general case we apply (\ref{eq:jxiao}) with $n=4$ and $(j,m)=(1,2)$, and get that
$2d_4 \ge 4$ satisfies the requirement.
When $(j,m)=(1,3)$ we need to show that
$$
|A|V(B, C[3])\le 4 d_4 V(A[3], B) V(A,C[3]),
$$
which, from (\ref{eq:jxiao}), is true for $4d_4\ge 4$ for all convex, compact sets.
Finally, when $(j,m)=(2,2)$ we need to show that
$$
|A|V(B[2], C[2])\le 6 d_4 V(A[2],B[2])V(A[2],C[2]),
$$
which, by (\ref{eq:jxiao}), is true for $6 d_4 \ge 6$ for all convex compact sets. Collecting the cases, we may take $d_4=2$, and thus $c_4\le 2$.
\end{proof}
\begin{rem}
We conjecture that actually $c_4=3/2$.
\end{rem}
\begin{rem} We conjecture that, for $1\leq j\leq n$,
$$
|A| V(L_1, \ldots, L_j, A[n-j]) \leq j V(L_j, A[n-1]) V(L_1, \ldots, L_{j-1}, A[n-j+1]).
$$
The inequality (which is an improvement of a special case of (\ref{eq:jxiao})) would help to obtain the best constant in ${\mathbb R}^4$ and corresponds to the case $(j,m)=(1,2)$ in the proof of Theorem \ref{th:r4}.
\end{rem}
\subsection{Lower bounds in general dimension}
\label{ss:PR-LB}
In this section, we provide a lower bound for the Pl\"unnecke-Ruzsa inequality for convex bodies. A weaker lower bound was also independently obtained by Nayar and Tkocz \cite{NT17}.
We first observe that the best constant $c_n$ in the Pl\"unnecke-Ruzsa inequality
\begin{equation}\label{PR2}
|A|\,|A+B+C| \le c_n |A+B|\,|A+C|,
\end{equation}
satisfies $c_{n+m} \ge c_n c_m$. Indeed, this follows immediately by considering critical examples of $A_1, B_1, C_1$ in ${\mathbb R}^n$ and $A_2, B_2, C_2$ in ${\mathbb R}^m$ together with their direct products
$A_1 \times A_2, B_1\times B_2, C_1\times C_2$ in ${\mathbb R}^{n+m}$.
Next we notice that if (\ref{PR2}) is true in a class of convex bodies closed under linear transformations, then
\begin{equation}\label{projcontr}
|P_{E \cap H}K| |K| \le c_n |P_E K|\,|P_H K|,
\end{equation}
for any $K$ in this class and any subspaces $E,H$ of ${\mathbb R}^n,$ such that $\dim E =i,$ $\dim H=j,$ $i+j \ge n+1$ and $E^\perp \subset H$. To see this, consider $B=U$, with $\dim U = n-i,$ $|U|=1$, and $C=V$, with $\dim V = n-j,$ $|V|=1$, where $U, V$ belong to orthogonal subspaces of ${\bf R}^n$. Let $A=tK$, where $t > 0$, and set $k=n-(n-i)-(n-j)=i+j-n$. Then (\ref{PR2}), together with (\ref{eq:mvf}) and (\ref{eq:proj}), yields
\begin{align*}
t^n|K|\big(\sum_{m=k}^n &{{n}\choose{m}}V(K[m], (U+V)[n-m])t^m \big) \\ &\le c_n\big(\sum_{m=i}^n {{n}\choose{m}}V(K[m], U[n-m])t^m \big)\big(\sum_{m=j}^n {{n}\choose{m}}V(K[m], V[n-m])t^m \big).
\end{align*}
Dividing the above inequality by $t^{n+k}$ and letting $t\to 0$, we get
$$
|K|{{n}\choose{k}}V(K[k], (U+V)[n-k]) \le c_n
{{n}\choose{i}}V(K[i], U[n-i]) {{n}\choose{j}}V(K[j], V[n-j]).
$$
Finally, using (\ref{eq:proj}), we get (\ref{projcontr}).
It was proved in \cite{GHP02} that
\begin{equation}\label{eqGPH}
|P_{\{u,v\}^\perp}K| |K| \le \frac{2(n-1)}{n} |P_{u^\perp} K|\,|P_{v^\perp}K|,
\end{equation}
for any convex body $K \subset {\mathbb R}^n$ and a pair of orthogonal vectors $u,v \in S^{n-1}$. It was also shown in \cite{GHP02} that the constant $2(n-1)/n$ is optimal. Thus $c_n \ge 2 -\frac{2}{n}$ and this estimate gives a sharp constant in ${\bf R}^3$: $c_3 = 4/3.$ In the case when $n=4$, we get $c_4 \ge 3/2$.
Inequalities analogous to (\ref{eqGPH}) and (\ref{projcontr}) were studied in many other works, including \cite{FGM03, SZ16, AFO14, AAGHV17}. In particular, it was proved in \cite{AAGHV17} that (\ref{projcontr}) is sharp with
$$
c_n \ge c_n(i,j,k)=\frac{{{i}\choose{k}}{{j}\choose{k}}}{{{n}\choose{k}}}.
$$
Thus, to find a lower bound on $c_n$, one may maximize $c_n(i,j,k)$ under the restriction that $i+j \ge n+1$ and $k=i+j-n$. One may use Stirling's approximation with $i=j=2n/3$ and $k=n/3$ (when $n$ is a multiple of $3$, with minor modifications otherwise) to obtain the following theorem.
\begin{thm}\label{thm:lb}
For sufficiently large $n$, we have that
$
c_n
\geq \frac{2}{\sqrt{\pi n}} \left(\frac{4}{3}\right)^n
= \left(\frac{4}{3} +o(1)\right)^n.
$
\end{thm}
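As a numerical illustration of Theorem \ref{thm:lb} (not needed for the proof), one can compare the exact lower bound $c_n(i,j,k)$ with the stated asymptotics for a moderately large $n$; the helper name below is ours.

```python
import math

def c_lower(n, i, j, k):
    """The lower bound c_n(i,j,k) = C(i,k) C(j,k) / C(n,k) for c_n."""
    return math.comb(i, k) * math.comb(j, k) / math.comb(n, k)

n = 300                          # a multiple of 3
i = j = 2 * n // 3               # i = j = 200
k = i + j - n                    # k = 100
exact = c_lower(n, i, j, k)
asymptotic = 2 / math.sqrt(math.pi * n) * (4 / 3) ** n

# the two logarithms agree closely already at n = 300,
# consistent with the (4/3 + o(1))^n growth
assert abs(math.log(exact) - math.log(asymptotic)) < 0.05
assert exact > (4 / 3 - 0.05) ** n
```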
\subsection{Improved upper bound for subclasses of convex bodies}
\label{ss:improved}
The goal of this section is to prove the following theorem.
\begin{thm}\label{thm:zonoid-ellipsoid}
Let $n\ge1$ and $K$ be a convex body in ${\mathbb R}^n$. Let $B$ be an ellipsoid and $Z$ be a zonoid in ${\mathbb R}^n$. Then
$$
|B|\,|B+K+Z|\le |B+K|\,|B+Z|.
$$
\end{thm}
Theorem~\ref{thm:zonoid-ellipsoid} motivates us to pose the following conjecture.
\begin{conj}
Let $n\ge1$ and $A, B, C$ be zonoids in ${\mathbb R}^n$. Then
$$
|A|\,|A+B+C|\le |A+B|\,|A+C|.
$$
\end{conj}
A detailed study of this conjecture is undertaken in the forthcoming paper \cite{FMMZ22}.
Before proving Theorem \ref{thm:zonoid-ellipsoid}, we will prove a theorem which will help us to verify Pl\"unnecke-Ruzsa inequalities for convex bodies for a fixed body $A$.
\begin{thm}\label{thm:zonoid-projratio}
Let $n\ge1$ and $A,B$ be convex bodies in ${\mathbb R}^n$ such that for every $1\le k\le n$, every subspace $E$ of ${\mathbb R}^n$ of dimension $k$, and every $u\in S^{n-1}\cap E$, one has
\[
\frac{|P_E(A+B)|_k}{|P_{E\cap u^\bot} (A+B)|_{k-1}}\geq \frac{|P_EA|_k}{|P_{E\cap u^\bot}A|_{k-1}}.
\]
Then for any zonoid $Z$ in ${\mathbb R}^n$ one has
$$
|A|\,|A+B+Z|\le |A+B|\,|A+Z|.
$$
\end{thm}
\begin{proof}
Notice that it is enough to prove the inequality for $Z$ being a zonotope and use an approximation argument. In fact, we prove by induction on $k$ that for any $1\le k\le n$ and any subspace $E$ of ${\mathbb R}^n$ of dimension $k$ and zonoid $Z$ in $E$ one has
\begin{eqnarray}\label{eq:induc:sum-proj}
|P_EA|_k|P_E(A+B)+Z|_k\le |P_E(A+B)|_k|P_EA+Z|_k.
\end{eqnarray}
This statement is true for $k=1$, so let us assume that it is true for $k-1$, for some $2\le k\le n$, and let us prove it for $k$. Let $E$ be a subspace of dimension $k$. To prove that inequality (\ref{eq:induc:sum-proj}) holds for any zonotope in $E$, we proceed by induction on the number of segments in $Z$.
Notice that the inequality holds as an equality for $Z=0$. Let us assume that inequality (\ref{eq:induc:sum-proj}) holds for some fixed zonotope $Z$ in $E$ and prove it for $Z+t[0,u]$ where $t>0$ and $u\in S^{n-1} \cap E$.
Using (\ref{eq:ste}), we get
\[
|P_E(A+B)+Z+t[0,u]|_{k}=|P_E(A+B)+Z|_k+t|P_{E\cap u^\bot}(A+B+Z)|_{k-1}
\]
and
$$
|P_EA+Z+[0,tu]|_k=|P_EA+Z|_k+t|P_{E\cap u^\bot}(A+Z)|_{k-1}.
$$
Applying the induction hypothesis, it is enough to prove that
\begin{equation}\label{inductiontwob}
|P_EA|_k|P_{E\cap u^\bot}(A+B+Z)|_{k-1} \le |P_{E\cap u^\bot}(A+Z)|_{k-1} |P_E(A+B)|_k.
\end{equation}
But the inequality in the $(k-1)$-dimensional subspace $E\cap u^\perp$ for the zonotope $P_{u^\perp} Z$ gives
$$
|P_{E\cap u^\bot}(A)|_{k-1}|P_{E\cap u^\bot}(A+B+Z)|_{k-1} \le |P_{E\cap u^\bot}(A+B)|_{k-1} |P_{E\cap u^\bot}(A+Z)|_{k-1}.
$$
Multiplying this inequality by the assumption of the theorem:
\[
\frac{|P_EA|_k}{|P_{E\cap u^\bot}A|_{k-1}} \le \frac{|P_E(A+B)|_k}{|P_{E\cap u^\bot} (A+B)|_{k-1}},
\]
we get (\ref{inductiontwob}).
\end{proof}
Next we will prove that $B_2^n$ satisfies the conditions of Theorem \ref{thm:zonoid-projratio}.
\begin{thm}\label{thm:projection-ball}
Let $n\ge1$ and $K$ be a compact set in ${\mathbb R}^n$. Let $u\in S^{n-1}$. Then
$$
\frac{|K+B_2^n|}{|P_{u^\bot}(K+B_2^n)|_{n-1}}\ge\frac{|B_2^n|}{|B_2^{n-1}|_{n-1}}.
$$
\end{thm}
\begin{proof}
We will use a trick from \cite{AFO14} to reduce the proof to the case of $K \subset u^\perp$. Indeed, let $S_u$ be the Steiner symmetrization with respect to $u\in S^{n-1}$ (see \cite{Sch14:book} and \cite[Remark 9.3.2]{BZ88:book}). Then one has
$$
S_u(K+B_2^n)\supset S_u(K)+S_u(B_2^n)=S_u(K)+B_2^n.
$$
Hence
$$
|K+B_2^n|=|S_u(K+B_2^n)|\ge |S_u(K)+B_2^n|.
$$
Moreover, $P_{u^\bot}(S_u(K))=P_{u^\bot}K$ and $P_{u^\bot}K\subset S_u(K)$, hence,
$$
|K+B_2^n|\ge|P_{u^\bot}K+B_2^n|.
$$
It follows that we are reduced to the case when $K\subset u^\bot$, {\em i.e.} $K=P_{u^\bot}K$.
Without loss of generality, we may assume that $u=e_n$ and we write $B_2^n\cap u^\bot=B_2^{n-1}$.
In this case, we can describe the set $K+B_2^n$ by its slices by the hyperplanes orthogonal to $e_n$ denoted $H_t=\{x\in{\mathbb R}^n; x_n=t\}$, $t\in{\mathbb R}$.
We have
$$
(K+B_2^n)\cap H_t=K+B_2^n\cap H_t=K+\sqrt{1-t^2}B_2^{n-1}+te_n.
$$
Using (\ref{eq:ste}), we get that for all $t\in[-1,1]$
$$
|(K+B_2^n)\cap H_t|_{n-1}=\left|K+\sqrt{1-t^2}B_2^{n-1}\right|_{n-1}.
$$
It follows from \cite[Proposition 2.1]{FM14} (see also \cite{Sta76}) that
$$
\left|K+\sqrt{1-t^2}B_2^{n-1}\right|_{n-1} \ge \left|\sqrt{1-t^2} K+\sqrt{1-t^2}B_2^{n-1}\right|_{n-1}.
$$
Using Fubini's theorem, we get
$$
|K+B_2^n|= \int_{-1}^1|(K+B_2^n)\cap H_t|_{n-1}dt \ge \left| K+B_2^{n-1}\right|_{n-1}\int_{-1}^1(1-t^2)^\frac{n-1}{2}dt.
$$
We finish the proof by noticing that
$$
\int_{-1}^1(1-t^2)^\frac{n-1}{2}dt=\frac{|B_2^n|}{|B_2^{n-1}|}.
$$
\end{proof}
\noindent{\it Proof of Theorem \ref{thm:zonoid-ellipsoid}:}
Let $T$ be the affine transform such that $B=T(B_2^n)$. If $B$ lies in a hyperplane, then $|B|=0$ and the inequality holds. If not, then $T$ is invertible and, since the affine image of a zonoid is a zonoid, by applying $T^{-1}$ we may assume that $B=B_2^n$. Now the theorem follows immediately from Theorems \ref{thm:projection-ball} and \ref{thm:zonoid-projratio}. {\begin{flushright} $\Box $\end{flushright}}
\begin{rem}
By applying a linear transform, it is possible to show that, more generally, for any compact set $K$ and for any ellipsoid $B$,
$$
\frac{|K+B|}{|P_{u^\bot}(K+B)|_{n-1}}\ge\frac{|B|}{|P_{u^\bot}B|_{n-1}}.
$$
Indeed, let $T\in GL_n$ be such that $B=TB_2^n$. Denoting $H=(T^*u)^\bot$, one has that for any compact $K$, $P_{u^\bot}(TK)=TP_{H,T^{-1}u}K$, where $P_{H,T^{-1}u}K$ is the linear projection of $K$ onto $H$ along $T^{-1}u$.
Then the remark follows by applying Theorem \ref{thm:projection-ball} to $TK$.
\end{rem}
\begin{cor} Let $n\ge 1$ and $K$ be a convex body in ${\mathbb R}^n$. Let $B$ be an ellipsoid and $Z_1, \dots, Z_m,$ $m\ge 1$ be zonoids in ${\mathbb R}^n$. Then
$$
|B|^{m}|B+K+Z_1+\dots+Z_m|\le |B+K|\prod_{i=1}^m|B+Z_i|.
$$
\end{cor}
\begin{proof} The corollary follows immediately by applying induction on $m \ge 1$ together with Theorem \ref{thm:zonoid-ellipsoid}:
$$
|B|^{m}|B+K+Z_1+\dots+Z_m|\le |B|^{m-1} |B+K+ Z_1+\dots+Z_{m-1}|\,|B+Z_m|.
$$
$$
\end{proof}
\begin{rem}
Theorem \ref{thm:ub} was inspired by the following inequality
\begin{equation}\label{eq:HS}
V(B_2^n,Z)V(B_2^n,K)\geq \frac{n-1}{n} \kappa_n |B_2^n|V(B_2^n[n-2],Z,K),
\end{equation}
where $\kappa_n =|B_2^{n-1}|^2/(|B_2^{n-2}|\,|B_2^n|)>1$ (not to be confused with the constant $c_n$ of Theorem \ref{thm:ub}). This inequality was proved by Artstein-Avidan, Florentin, and Ostrover \cite{AFO14} when $Z$ is a zonoid and $K$ is an arbitrary convex body, as a generalisation of a result of Hug and Schneider \cite{HS11:1}, who proved it for $K$ and $Z$ zonoids.
It is an interesting question whether one can prove Theorem~\ref{thm:zonoid-ellipsoid} directly by using the decomposition into mixed volumes, as was done in the proof of Theorem \ref{thm:ub}, and applying inequality \eqref{eq:HS}. Inequality (\ref{eq:HS}) is a sharp improvement of (\ref{eq:jxiao}) in the case when $A=B_2^n$ and $m=j=1,$ and one of the bodies is a zonoid. Unfortunately, there seems to be no direct way to apply (\ref{eq:HS}) to prove Theorem \ref{thm:zonoid-ellipsoid} due to the lack of a sharp analog of this inequality for $V(B_2^n[n-2],Z[m],K[j])$ when $m,j >1$.
\end{rem}
\subsection{The case of compact sets}
\label{ss:compact}
Let us note that the inequality
\begin{eqnarray}\label{eq:cnonconv}
|A|\,|A+B+C|\le |A+B|\,|A+C|
\end{eqnarray}
is valid when $A,B$ are intervals and $C$ is any compact set in ${\mathbb R}$. Indeed, by approximation we may assume that $C$ is a finite union of closed intervals, $A=[0,a]$ and $B=[0,b],$ for some $a, b \ge 0$. Then we may assume that
$A+C=\cup_{i=1}^m [\alpha_i, \beta_i+a]$ where intervals $[\alpha_i, \beta_i+a]$ are mutually disjoint. Then
$$
|A+C| = ma+\sum\limits_{i=1}^m (\beta_i-\alpha_i),
$$
$$
|A+B+C| \le \sum \limits_{i=1}^m (\beta_i-\alpha_i +a+b)=m(a+b)+\sum\limits_{i=1}^m (\beta_i-\alpha_i)
$$
and (\ref{eq:cnonconv}) follows from
$$
a (m(a+b)+\sum\limits_{i=1}^m (\beta_i-\alpha_i))\le (ma+\sum\limits_{i=1}^m (\beta_i-\alpha_i))(a+b).
$$
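The one-dimensional argument above is easy to test numerically. The following Python sketch (illustrative only; helper names are ours) represents compact subsets of ${\mathbb R}$ as finite unions of closed intervals and checks (\ref{eq:cnonconv}) for intervals $A$, $B$ and a union of intervals $C$.

```python
def merge(intervals):
    """Merge closed intervals into a disjoint, sorted union."""
    out = []
    for a, b in sorted(intervals):
        if out and a <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return out

def measure(intervals):
    """Lebesgue measure of a union of closed intervals."""
    return sum(b - a for a, b in merge(intervals))

def msum(X, Y):
    """Minkowski sum of two unions of intervals."""
    return merge([(a + c, b + d) for a, b in X for c, d in Y])

A = [(0.0, 1.0)]                              # A = [0, a] with a = 1
B = [(0.0, 2.5)]                              # B = [0, b] with b = 2.5
C = [(0.0, 0.3), (5.0, 5.1), (9.0, 11.0)]     # a compact union of intervals
lhs = measure(A) * measure(msum(msum(A, B), C))
rhs = measure(msum(A, B)) * measure(msum(A, C))
assert lhs <= rhs + 1e-12
```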
We also note that, as we discussed before, inequality (\ref{eq:ruzvol}), as well as inequality (\ref{eq:threesimple}), is valid without additional convexity assumptions (as is Theorem \ref{thm:projection-ball} above). Still, we will show that, in sharp contrast to those inequalities, the convexity assumption in Theorem \ref{thm:ub} cannot be removed. The construction is inspired by the proof of Theorem 7.1 from \cite{Ruz96:1}:
\begin{lem}\label{lm:example}
Fix $n\ge 1$, then for any $\beta >0$ there exist two compact sets $A, B\subset {\bf R}^n$ such that
$$
|A|\,|A+B+B| > \beta |A+B|^2.
$$
\end{lem}
\begin{proof} It is enough to prove the lemma for the case $n=1$; indeed, for any dimension $n>1$, one can consider $A \times [-1, 1]^{n-1}$ and $B\times [-1, 1]^{n-1}$, where $A,B$ are the sets constructed in ${\bf R}$.
To construct the sets $A,B \subset {\bf R}$, we fix $m, l \in \mathbb{N}$ large enough and first define two discrete sets $A'$ and $B'$ in ${\bf R}$ to establish the analogous result for cardinality instead of volume. Let
$$
A'=\{x+y\sqrt{2}: x,y \in \{0,1,\dots, m-1\}\} \cup \{z\sqrt 3, z \in \{1,\dots, m\}\},
$$
thus $\#A'=m(m+1).$ We also define
$$
B'=\{x: x \in \{0,1,\dots, l\}\} \cup \{y\sqrt 2: y \in \{1,\dots, l\}\}.
$$
Thus, one has
$$
B'+B'= \{x+y\sqrt{2}: x,y \in \{0,1,\dots, l\}\} \cup \{x, x\in \{l+1,\dots, 2l\}\}\cup \{y\sqrt{2}, y\in \{l+1,\dots, 2l\}\}.
$$
It follows that $\#(B'+B')=l^2+4l+1$. It may help to imagine $A'$ and $B'$ as subsets of a $3$-dimensional lattice spanned by the vectors $e_1, \sqrt{2}e_2$ and $\sqrt{3}e_3$. Then it is easy to see that $A'+B'$ consists of an $m\times m$ square of integer points united with two $m\times l$ rectangles of integer points in the $\{e_1, e_2\}$ plane and two additional rectangles: one of size $m\times (l+1)$ in the $\{e_1, e_3\}$ plane and another of size $m\times (l+1)$ in the $\{e_2, e_3\}$ plane, where the last two rectangles overlap in $m$ points. Thus
$$
\#(A'+B')=m^2 +2ml +2m(l+1)-m=m(m+4l+1).
$$
Finally, we note that $A'+B'+B'$ contains the set
$$
\{z\sqrt{3}; z\in \{1,\dots, m\}\} + B'+B',
$$
thus $\#(A'+B'+B')\ge l^2 m$. Now consider any $\beta'>0$. Our goal is to select $m,l \in \mathbb{N}$ such that
\begin{equation}\label{eq:rusd}
\#(A')\#(A'+B'+B') > \beta' (\#(A'+B'))^2.
\end{equation}
For this it is enough to pick $m,l \in \mathbb{N}$ such that
$$
m(m+1) l^2 m \ge \beta' m^2(m+4l+1)^2
$$
or
$$
\sqrt{m+1} l \ge \sqrt{\beta'} (m+4l+1),
$$
which is true as long as $m+1> 16\beta'$ and $l$ is large enough.
Now we are ready to construct our continuous example in ${\bf R}$, for volume. For the fixed $\beta>0$, consider $\beta'=\frac{4}{3}\beta$ and sets $A'$ and $B'$ satisfying (\ref{eq:rusd}). Define $A=A'+[-\varepsilon, \varepsilon]$ and $B=B'+[-\varepsilon, \varepsilon]$, where $\varepsilon>0$ is small enough that
$$
|A|=2\varepsilon \#(A'), \quad |A+B|=4\varepsilon \#(A'+B') \quad \mbox{ and } \quad |A+B+B|=6\varepsilon \#(A'+B'+B'),
$$
which, together with (\ref{eq:rusd}), gives the required inequality.
\end{proof}
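The counting identities used in the construction can be verified by brute force. In the Python sketch below (an illustration only), a number $p+q\sqrt{2}+r\sqrt{3}$ is encoded as the integer triple $(p,q,r)$; this is faithful because $1,\sqrt{2},\sqrt{3}$ are linearly independent over the rationals, so two such numbers coincide exactly when their triples do.

```python
def sumset(X, Y):
    """Sumset of two sets of integer triples (coordinatewise addition)."""
    return {(x[0] + y[0], x[1] + y[1], x[2] + y[2]) for x in X for y in Y}

m, l = 7, 9
# A' = {x + y*sqrt(2): x, y in {0,...,m-1}} together with {z*sqrt(3): 1 <= z <= m}
A = {(x, y, 0) for x in range(m) for y in range(m)} | \
    {(0, 0, z) for z in range(1, m + 1)}
# B' = {0,...,l} together with {sqrt(2), 2*sqrt(2), ..., l*sqrt(2)}
B = {(x, 0, 0) for x in range(l + 1)} | {(0, y, 0) for y in range(1, l + 1)}

assert len(A) == m * (m + 1)
assert len(sumset(B, B)) == l * l + 4 * l + 1
assert len(sumset(A, B)) == m * (m + 4 * l + 1)
assert len(sumset(sumset(A, B), B)) >= l * l * m
```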
It turns out (see, for example, \cite{FLZ19}) that some sumset estimates can still be proved if the convexity assumption is relaxed to the assumption that the body is star-shaped. The next lemma shows that this is not the case for Theorem \ref{thm:ub}.
\begin{lem}
Fix $n\ge 3$. Then for any $\beta >0$ there exists a compact star-shaped symmetric body $A\subset {\bf R}^n$ such that
$$
|A|\,|A+A+A|\ge\beta|A+A|^2.
$$
\end{lem}
\begin{proof}
Let $n=3$. Consider the cube $Q=[-1,1]^3$ and the set
$
X=m([-e_1,e_1] \cup [-e_2,e_2] \cup [-e_3,e_3]),
$
i.e. the union of $3$ orthogonal segments of length $2m$, and let $A=Q\cup X$, which is a compact star-shaped symmetric body. Then
$$
|A+A| \le |X+X|+|Q+Q|+|X+Q| \le 0+4^3+3\cdot 4(2m+2),
$$
but
$
|A+A+A| \ge |X+X+X| =8m^3.
$
Since $|A|=8$, the left hand side of the desired inequality grows like $m^3$ while $|A+A|^2$ grows only like $m^2$, so the inequality holds for $m$ large enough.
We note that in dimension $n>3$ one can consider
the direct sum of the above three-dimensional example with $[-1,1]^{n-3}$.
\end{proof}
\section{On Ruzsa's triangle inequality}
\label{sec:diff}
In additive combinatorics, the Ruzsa distance is defined by
$
d(A,B)=\log \frac{\#(A-B)}{\sqrt{\#(A) \#(B)}},
$
where $A$ and $B$ are subsets of an abelian group. We refer to \cite{TV06:book} for more information and properties of this object, which is useful even though it is {\it not} a metric (since typically $d(A,A)>0$). The Ruzsa distance satisfies the triangle inequality, which is equivalent to
$\#(C-B)\cdot \# A \leq \#(A-C) \#(B-A).$
An analogue of Ruzsa's triangle inequality holds for compact sets in ${\bf R}^n$.
\begin{thm}(see, e.g., \cite[Lemma 3.12]{TV09:1})
For any compact sets $A,B,C \subset {\bf R}^n$,
\begin{equation}\label{triangleI}
|A|\,|B-C| \le |A-C|\,|A-B|.
\end{equation}
\end{thm}
This inequality has a short proof, which we provide here for the sake of completeness. Indeed,
\[
|B-A| |A-C|=\int_{{\bf R}^n} 1_{B-A}*1_{A-C}(x)dx\ge \int_{B-C} 1_{B-A}*1_{A-C}(x)dx,
\]
where $1_M$ is the characteristic function of a set $M \subset {\bf R}^n$ and $f*g$ is the convolution of functions $f,g: {\bf R}^n \to {\bf R}$. Now let $x\in B-C$; then there are $b\in B$ and $c\in C$ such that $x=b-c$. Thus, changing variables, one has
\begin{align*}1_{B-A}*1_{A-C}(b-c)&= \int_{{\bf R}^n}1_{B-A}(z)1_{A-C}(b-c-z)dz=\int_{{\bf R}^n}1_{B-A}(b-y)1_{A-C}(y-c)dy\\
&\ge\int_{A}1_{B-A}(b-y)1_{A-C}(y-c)dy =|A|.
\end{align*}
In view of Ruzsa's triangle inequality, it is natural to try to generalize Theorem \ref{thm:ub} to the case of differences of convex bodies. We recall that $\varphi=(1+\sqrt{5})/2$ denotes the golden ratio and that, for $n \ge 2$, the constant $c_n$ defined in Section \ref{ss:gen-ub} satisfies $1=c_2\le c_n\le \varphi^n$ by Theorem \ref{thm:ub}.
\begin{thm}\label{thm:ruzsa}
Let $A, B, C$ be convex bodies in ${\mathbb R}^n$. Then
\begin{equation}\label{eq:ruzsa}
|A|\,|A+B+C| \le \frac{1}{2^n}{ 2n \choose n}c_n
\min\{ |A-B|\,|A+C| , |A-B|\,|A-C|\}.
\end{equation}
\end{thm}
\begin{proof}
We first recall Litvak's observation (see \cite[pp. 534]{Sch14:book})
that
\begin{equation}\label{eq:lit}
|A+B| \le \frac{1}{2^n}{2n \choose n} |A-B|,
\end{equation}
for any convex bodies $A, B$ in ${\mathbb R}^n$, with equality when $A=-B$ are simplices. Litvak obtained this by simply combining the Rogers-Shephard inequality (applied to $A+B$) and the Brunn-Minkowski inequality.
Now we can write
$$
|A|\,|A+B+C| \le c_n|A+B|\,|A+C|
\le \frac{1}{2^n}{ 2n \choose n}c_n|A-B|\,|A+C|,
$$
where the first inequality follows from Theorem \ref{thm:ub}, and the
second from Litvak's observation. This gives the first half of the inequality (\ref{eq:ruzsa}) (i.e., the inequality with the first term inside the minimum).
Next, we notice that changing $B$ to $-B$ and $C$ to $-C$, Theorem \ref{thm:ub} becomes equivalent to
$$
|A|\,|A-(B+C)| \le c_n|A-B|\,|A-C|,
$$
and we finish the proof of the inequality (\ref{eq:ruzsa}) (i.e., we get the inequality with the second term inside the minimum) by applying Litvak's observation on the left hand side.
\end{proof}
\begin{rem} We note that the constant in (\ref{eq:ruzsa})
is sharp in the case $n=2$, no matter which term inside the minimum we consider. Indeed, if $C=\{0\}$, then (\ref{eq:ruzsa}) becomes
$
|A+B| \le \frac{3}{2}
|A-B|,
$
which is sharp for triangles $A,B$ such that $A=-B$.
\end{rem}
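The sharpness claim can be confirmed with exact polygon arithmetic: for convex polygons, the Minkowski sum is the convex hull of the pairwise sums of vertices. The Python sketch below (illustrative only; helper names are ours) checks that $|A+B|=\frac{3}{2}|A-B|$ when $A$ is a triangle and $B=-A$.

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(pts):
    """Andrew's monotone chain convex hull (counterclockwise)."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def area(poly):
    """Shoelace area of the convex hull of the given vertices."""
    poly = hull(poly)
    s = sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
            - poly[(i + 1) % len(poly)][0] * poly[i][1]
            for i in range(len(poly)))
    return abs(s) / 2

def msum(P, Q):
    """Minkowski sum of convex polygons: hull of pairwise vertex sums."""
    return hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])

A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
B = [(-x, -y) for (x, y) in A]            # B = -A
minus_B = [(-x, -y) for (x, y) in B]      # -B = A
# |A + B| = |A - A| = 6|A| while |A - B| = |A + A| = 4|A|
assert abs(area(msum(A, B)) - 1.5 * area(msum(A, minus_B))) < 1e-12
assert abs(area(msum(A, B)) - 6 * area(A)) < 1e-12
```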
Next we would like to discuss an improvement of Theorem \ref{thm:ruzsa} via an improvement of Litvak's inequality (\ref{eq:lit}) in ${\bf R}^2$.
\begin{thm}\label{lmcool2} Let $A, C$ be two convex sets in ${\bf R}^2$. Then
\begin{equation}\label{eqprs3}
|A-C|\le |A+C|+2\sqrt{|A|\,|C|},
\end{equation}
moreover, equality in the above inequality is only possible in the following cases:
\begin{itemize}
\item One of the sets $A$ or $C$ is a singleton or a segment and the other one is any convex body.
\item $A$ is a triangle and $C=t A+b,$ for some $t>0$ and $b\in {\bf R}^2.$
\end{itemize}
\end{thm}
\begin{proof}
Let us first prove the inequality. We note that (\ref{eqprs3}) is equivalent to
\begin{equation}\label{eqprs2}
V(A,-C)\le V(A,C)+\sqrt{|A|\,|C|}.
\end{equation}
Assume $A=A_1+A_2$ and
$$
V(A_1,-C)\le V(A_1,C)+\sqrt{|A_1|\,|C|} \mbox{ and }
V(A_2,-C)\le V(A_2,C)+\sqrt{|A_2|\,|C|}.
$$
Then
$$
V(A, -C)=V(A_1,-C)+V(A_2,-C) \le V(A,C)+\sqrt{|A_1|\,|C|}+\sqrt{|A_2|\,|C|}.
$$
Using that, by the Brunn-Minkowski inequality in the plane, $\sqrt{|A_1|}+\sqrt{|A_2|} \le \sqrt{|A|}$, we get
$$
V(A, -C) \le V(A,C)+\sqrt{|A|\,|C|}.
$$
Thus, to prove (\ref{eqprs2}) we may assume that both $A$ and $C$ are triangles. Indeed, any planar convex body can be approximated by polygons, and any planar polygon can be written as a Minkowski sum of triangles \cite{Sch14:book} (here we treat a segment as a degenerate triangle). If $A$ or $C$ is a segment, then the inequality becomes an equality. Thus, we may assume that $A$ and $C$ are non-degenerate triangles. We notice that (\ref{eqprs2}) is invariant under dilations and shifts of the convex body $C$, thus we may assume that $C \subset A$ and that $C$ has common points with all three edges of $A$. Then $V(A,C)=|A|$ (see, for example, \cite{SZ16}). Finally, our goal is to show that for any triangles $A,C$, such that $C \subset A$ and $C$ touches all edges of $A$, we have
\begin{equation}\label{shadow}
V(A, -C) \le |A|+\sqrt{|A|\,|C|}.
\end{equation}
To prove the above inequality, one may use the technique of shadow systems (\cite{RS58:2} or \cite{Sch14:book}, Section 10.4). In this particular case, the method can be applied directly.
Indeed, if $C=A$ then (\ref{shadow}) becomes an equality. Otherwise, let $C=\mbox{conv}\{c_1, c_2, c_3\}$, where one of the $c_i$'s is not a vertex of $A$. Assume, without loss of generality, that $c_1$ is not a vertex of $A$. Then, there exists a vertex $a_1$ of $A$ such that the segment $(a_1,c_1)$ does not intersect $C$. Let $c_{t}=tc_1 +(1-t)a_1$ for $t\in[t_0,t_1]$, where $[t_0,t_1]$ is the largest interval such that $c_{t}\in A$ and $c_{t}$ is not aligned with $c_2$ and $c_3$, for all $t \in (t_0,t_1)$. Notice that $c_{t_0}$ and $c_{t_1}$ are either vertices of $A$ or belong to the line containing $[c_2,c_3]$. Let $C_t=\mbox{conv}\{c_{t}, c_2, c_3\}$. Then
$$
V(A, -C_t)=V(-A, C_t)=\frac{1}{2}\sum_{i=1}^3 h_{C_t}(-u_i) |F_i|,
$$
where $u_i,$ $i=1,2,3$ is a normal vector to the edge $F_i$ of $A$. The function $h_{C_t}(-u_i)$ is convex in $t$ and thus the same is true for $V(A,-C_t)$. We also notice that $|C_t|$ is an affine function of $t$ on $[t_0,t_1]$ and thus
$$
g(t) = V(A, -C_t) - \sqrt{|A|\,|C_t|}
$$
is a convex function of $t\in [t_0,t_1]$, and thus $\max_{t\in [t_0,t_1]} g(t)=\max\{g(t_0), g(t_1)\}.$ Hence the maximum of $g(t)$ is achieved either when $C_{t}$ becomes a segment (and the proof is complete in this case) or when $c_{t}$ reaches a vertex of $A$. In the latter case, either $C=A$ or there is a vertex of $C$ which is not a vertex of $A$, and we repeat the procedure.
Now let us consider the equality case. Assume
\begin{equation}\label{equaltrian}
V(A,-C) = V(A,C)+\sqrt{|A|\,|C|}.
\end{equation}
First, let us assume that $A$ is not a triangle. Then $A$ is decomposable: it can be written as $A=A_1+A_2$ where $A_1$ is not homothetic to $A_2$. Then, applying (\ref{eqprs2}) we get
$$
V(A_1,-C)\le V(A_1,C)+\sqrt{|A_1|\,|C|} \mbox{ and }
V(A_2,-C)\le V(A_2,C)+\sqrt{|A_2|\,|C|}.
$$
From (\ref{equaltrian}), we have equality in the above inequalities and
$$
\sqrt{|A_1|\,|C|}+\sqrt{|A_2|\,|C|} =\sqrt{|A_1+A_2|\,|C|}.
$$
The above equality is only possible in two cases: either there is equality in the Brunn-Minkowski inequality, which would force $A_2$ to be homothetic to $A_1$ (and we assumed that this is not the case), or $|C|=0$, i.e., $C$ is a point or a segment.
If $C$ is not a triangle, then the same discussion, with the roles of $A$ and $C$ interchanged, shows that $|A|=0$.
Now we assume that $A$ and $C$ are non-degenerate triangles. Using the homogeneity of equality (\ref{equaltrian}), we may assume that the triangle $C$ touches all edges of $A$, and equality (\ref{equaltrian}) becomes
\begin{equation}
V(A,-C)- \sqrt{|A|\,|C|} = |A|.
\end{equation}
Assume towards a contradiction that $C \not = A$. Then there is a vertex $c_1$ of the triangle $C=\mathrm{conv}(c_1,c_2,c_3)$ which is not a vertex of $A$. We reproduce the same shadow system $(C_t)_{t\in[t_0,t_1]}$ as in the proof of the inequality. We only need to prove that the function $g(t) = V(A, -C_t) - \sqrt{|A|\,|C_t|}$ is not constant on $[t_0,t_1]$. To do this, we show that at least one of the two functions $V(A, -C_t)$ and $\sqrt{|C_t|}$ is not affine. There are two cases. If $|C_t|$ is not constant, then $\sqrt{|C_t|}$ is strictly concave, and thus $g$ is not constant. If $|C_t|$ is constant, then all vertices of $C$ are different from the vertices of $A$. Recall that $V(A, -C_t)=\frac{1}{2}\sum_{i=1}^3 h_{C_t}(-u_i)|F_i|$. Let $u_2, u_3$ be the normals of the edges of $A$ which do not contain $c_1$. Then it is easy to see that $h_{C_t}(-u_2)+h_{C_t}(-u_3)$ is not affine. Thus $V(A,-C_t)$ is not constant, a contradiction.
\end{proof}
The inequality \eqref{eqprs3} is an intriguing improvement (in dimension 2) of Litvak's observation. To see this, observe that by the Brunn-Minkowski inequality,
$|A+C|\geq (\sqrt{|A|}+\sqrt{|C|})^2
= |A|+|C|+2\sqrt{|A|\,|C|}
= (\sqrt{|A|}-\sqrt{|C|})^2+4\sqrt{|A|\,|C|}$,
and hence the right hand side of \eqref{eqprs3} is bounded by $\frac{3}{2}|A+C|$.
Thus, in dimension 2, since $c_2=1$, we obtain
\begin{eqnarray*}\begin{split}
|A|\,|A+B+C| &\le |A+B|\,|A+C| \\
&\le (|A-B|+2\sqrt{|A|\,|B|})\,|A+C|\\
&\le \bigg[\frac{3}{2}|A-B|-\frac{1}{2}(\sqrt{|A|}-\sqrt{|B|})^2\bigg] |A+C|,
\end{split}\end{eqnarray*}
which is an improvement of Theorem \ref{thm:ruzsa} in dimension 2.
Let us define the {\it additive asymmetry} of the pair $(A,C)$ by $$
\text{asym}(A,C)=\bigg| \frac{|A+C|}{|A-C|}-1\bigg|,
$$
and note that $\text{asym}(A,C)$ is trivially $0$ if either $A$ or $C$ is centrally symmetric.
Observe that inequality \eqref{eqprs3} may be rewritten as
\begin{eqnarray}\label{asym}
\text{asym}(A,C) \leq 2e^{-d(A,C)} ,
\end{eqnarray}
where $d(A,C)=\log \frac{|A-C|}{\sqrt{|A|\cdot |C|}}$ is the Euclidean analogue of the Ruzsa distance defined at the beginning of this section.
One wonders if this inequality extends to dimension higher than 2.
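As a sanity check (ours, not part of the argument), inequality \eqref{asym} can be tested numerically for random convex polygons in the plane. The Python sketch below uses only the standard library: it computes areas by the shoelace formula, Minkowski sums of convex polygons as convex hulls of pairwise vertex sums, and verifies $\text{asym}(A,C) \leq 2e^{-d(A,C)}$, i.e. $\big| |A+C|-|A-C| \big| \leq 2\sqrt{|A|\,|C|}$.

```python
import math
import random

def hull(points):
    # Andrew's monotone chain convex hull; returns extreme points in CCW order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and \
                    (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) \
                    - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0]) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = build(pts), build(reversed(pts))
    return lower[:-1] + upper[:-1]

def area(P):
    # Shoelace formula for a polygon given by its vertices in order.
    n = len(P)
    return abs(sum(P[i][0] * P[(i + 1) % n][1] - P[(i + 1) % n][0] * P[i][1]
                   for i in range(n))) / 2

def msum(P, Q):
    # Minkowski sum of convex polygons = convex hull of pairwise vertex sums.
    return hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])

random.seed(0)
for _ in range(300):
    A = hull([(random.random(), random.random()) for _ in range(5)])
    C = hull([(random.random(), random.random()) for _ in range(5)])
    if len(A) < 3 or len(C) < 3:
        continue  # skip degenerate samples
    neg_C = [(-x, -y) for x, y in C]
    plus, minus = area(msum(A, C)), area(msum(A, neg_C))
    # asym(A, C) <= 2 exp(-d(A, C)), i.e. | |A+C| - |A-C| | <= 2 sqrt(|A| |C|)
    assert abs(plus / minus - 1) <= 2 * math.sqrt(area(A) * area(C)) / minus + 1e-9
```

Degenerate samples (hulls with fewer than three vertices), for which the inequality is an equality, are simply skipped.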
Finally, we note that the comparison of the cardinalities of $A+C$ and $A-C$
has also been of interest in additive combinatorics (see, e.g., \cite{PF73, TV06:book}),
and there are also results comparing the entropies of sums and differences
of independent random variables in different abelian groups (see, e.g., \cite{MK10:isit, KM14, ALM17, LM19}).
\bibliographystyle{plain}
\section{Introduction}
The famous Tracy--Widom distribution appears as the limiting distribution for the largest eigenvalue of many classical random matrix ensembles \cite{MR3800840,MR4019881,MR4091114,MR2485010,MR2545551,MR2398763,MR3582818,MR2988403,MR4260217,MR2308592,MR4474568,MR1727234,MR3161313,MR1257246,MR1385083}. However, for many other matrix ensembles---such as those containing certain structural properties or significantly less independence---other distributions can appear as the limiting law for the largest eigenvalue \cite{MR2797947,MR2797949,MR2288065}.
The present paper focuses on the limiting distribution for the largest eigenvalue of random Laplacian matrices with Gaussian entries.
\begin{definition} \label{def:Laplacian}
For an $n \times n$ real symmetric matrix $A$, we define the \emph{Laplacian matrix} $\mathcal{L}_A$ of $A$ as
\[ \mathcal{L}_A := D_A - A, \]
where $D_A = (D_{ij})$ is the diagonal matrix containing the row sums of $A$:
\[ D_{ii} = \sum_{j=1}^n A_{ij}. \]
We refer to $\mathcal{L}_A$ as a \emph{Laplacian matrix}.
\end{definition}
Every real symmetric matrix that maps the all-ones vector to zero can be represented as the Laplacian matrix $\mathcal{L}_A$ of some real symmetric matrix $A$. In the literature, Laplacian matrices are also known as Markov matrices.
When $A$ is a random real symmetric matrix, we say $\mathcal{L}_A$ is a random Laplacian matrix.
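For concreteness, here is a minimal Python illustration (ours, using only the standard library) of Definition \ref{def:Laplacian}: it builds $\mathcal{L}_A$ from a random symmetric $A$ and checks that $\mathcal{L}_A$ maps the all-ones vector to zero.

```python
import random

random.seed(0)
n = 4
# A random real symmetric matrix A.
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        A[i][j] = A[j][i] = random.gauss(0.0, 1.0)

# Laplacian L_A = D_A - A, with D_A the diagonal matrix of row sums of A.
L = [[(sum(A[i]) if i == j else 0.0) - A[i][j] for j in range(n)]
     for i in range(n)]

# L_A maps the all-ones vector to zero: every row of L_A sums to zero.
row_sums = [sum(row) for row in L]
assert all(abs(s) < 1e-12 for s in row_sums)
```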
Random Laplacian matrices play an important role in many applications involving complex graphs \cite{MR2248695}, analysis of algorithms for the $\mathbb{Z}_2$ synchronization problem \cite{MR3777782}, community detection in the stochastic block model \cite{MR3349181}, and other optimization problems in semidefinite programming \cite{MR3777782}.
This paper focuses on the fluctuations of the largest eigenvalue of $\mathcal{L}_A$ when $A$ is drawn from the Gaussian Orthogonal Ensemble.
Recall that an $n \times n$ real symmetric matrix $A$ is drawn from the Gaussian Orthogonal Ensemble (GOE) if the upper-triangular entries $A_{ij}$, $1 \leq i \leq j \leq n$ are independent Gaussian random variables, where $A_{ij}$ has mean zero and variance $\frac{1 + \delta_{ij}}{n}$ and $\delta_{ij}$ is the Kronecker delta.
When $A$ is drawn from the GOE, the limiting empirical spectral distribution for $\mathcal{L}_A$ is given by the free convolution of the semicircle law and the Gaussian distribution \cite{MR2759729,MR2206341,MR4440252}. More generally, this is also the limiting spectral distribution of $\mathcal{L}_A$ when $A$ is a properly normalized Wigner matrix \cite{MR2759729,MR2206341,MR4440252}; extensions of this result are also known for generalized Wigner matrices \cite{MR4440252}, inhomogeneous and dilute Erd\H{os}--R\'{e}nyi random graphs \cite{MR4193182,MR2967963}, and block Laplacian matrices \cite{MR3321497}. Moreover, the asymptotic location of the largest eigenvalue of $\mathcal{L}_A$, for a large class of Wigner matrices $A$, has been established by Ding and Jiang \cite{MR2759729}. When $A$ is an $n \times n$ matrix drawn from the GOE, these results show that the largest eigenvalue of $\mathcal{L}_A$ is asymptotically close to $\sqrt{2 \log n}$ as the dimension $n$ tends to infinity. The generalized Wigner case was studied in \cite{MR4440252}, while the smallest eigenvalues were investigated in \cite{MR2948689}. Spectral norm bounds are also known \cite{MR2206341,MR4440252}.
The goal of this paper is to show that the Gumbel distribution is the limiting law for the largest eigenvalue of $\mathcal{L}_A$ when $A$ is drawn from the GOE. This particular random matrix appears in the $\mathbb{Z}_2$ synchronization problem of recovering binary labels with Gaussian noise \cite{MR3777782}. While we focus on the edge of the spectrum, the eigenvalues in the bulk (and corresponding eigenvectors) were studied by Huang and Landon \cite{MR4058984} for the case when $A$ is a Wigner matrix or the adjacency matrix of a sparse Erd\H{os}--R\'{e}nyi random graph. Similar to \cite{MR4058984}, we also use resolvent techniques to prove our main result.
For any $n \times n$ real symmetric matrix $M$, we let $\lambda_n(M) \leq \cdots \leq \lambda_1(M)$ be the ordered eigenvalues of $M$.
Recall that the standard Gumbel distribution has cumulative distribution function
\begin{equation} \label{def:GumbelCDF}
F(x) := \exp \left(-e^{-x} \right), \qquad x \in \mathbb{R}.
\end{equation}
Define
\begin{equation} \label{def:anbn}
a_n := \sqrt{2 \log n} \qquad \text{and} \qquad b_n := \sqrt{2 \log n} - \frac{ \log \log n + \log(4 \pi) - 2 }{2 \sqrt{2 \log n}}.
\end{equation}
Our main result is the following.
\begin{theorem}[Main result] \label{thm:main}
Let $A$ be an $n \times n$ matrix drawn from the GOE. Then the centered and rescaled largest eigenvalue of $\mathcal{L}_A$
\[ a_n \left( \lambda_1(\mathcal{L}_A) - b_n \right) \]
converges in distribution as $n \to \infty$ to the standard Gumbel distribution, where $a_n$ and $b_n$ are defined in \eqref{def:anbn}.
\end{theorem}
\begin{remark} \label{rem:symmetry}
By symmetry, an analogous version of Theorem \ref{thm:main} holds for the smallest eigenvalue of $\mathcal{L}_A$ as well.
\end{remark}
The value of the centering term $b_n$ is surprising as it does not coincide with the location of the largest diagonal entry of $\mathcal{L}_A$. It was widely suspected that the behavior of the largest eigenvalue of $\mathcal{L}_A$ was dictated by the largest diagonal entry.\footnote{In fact, this was the result of \cite{arenas2021extremal}, which unfortunately contains an error in its proof \cite{SV}.} The largest diagonal entry of $\mathcal{L}_A$ does indeed have a Gumbel distribution but with a different deterministic shift. Define
\begin{equation} \label{eq:def:bn'}
b_n' := \sqrt{2 \log n} - \frac{\log \log n + \log(4 \pi)}{2 \sqrt{2 \log n}}.
\end{equation}
It is shown in Appendix \ref{sec:appendix} that, when $A$ is drawn from the GOE, the centered and rescaled largest diagonal entry of $\mathcal{L}_A$
\[ a_n \left(\max_{1 \leq i \leq n} (\mathcal{L}_A)_{ii} - b_n' \right) \]
converges in distribution as $n \to \infty$ to the standard Gumbel distribution, where $a_n$ is specified in \eqref{def:anbn}. In other words, while the scaling factors are the same, the centering terms for the largest eigenvalue and the largest diagonal entry are different. The terms $a_n$ and $b_n'$ are the correct scaling and centering terms, respectively, when considering the limiting fluctuations for the maximum of $n$ independent and identically distributed (iid) standard Gaussian random variables (see, for example, \cite[Theorem 1.5.3]{MR691492}). We refer the reader to \cite{MR900810,MR691492} for more details concerning the extreme values of sequences of iid random variables.
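The centering $b_n'$ can be probed with a short Monte Carlo experiment. The Python sketch below (ours, standard library only; the sample size is modest, so only rough agreement is expected, since the convergence is logarithmically slow) estimates $\P\left(a_n (M_n - b_n') \leq 0\right)$ for $M_n$ the maximum of $n$ iid standard Gaussians, which the Gumbel limit predicts to approach $F(0) = e^{-1} \approx 0.368$.

```python
import math
import random

random.seed(1)
n, trials = 5_000, 500
a_n = math.sqrt(2 * math.log(n))
b_n_prime = a_n - (math.log(math.log(n)) + math.log(4 * math.pi)) / (2 * a_n)

# Empirical P(a_n (M_n - b_n') <= 0) for M_n the max of n iid standard Gaussians.
hits = 0
for _ in range(trials):
    m_n = max(random.gauss(0.0, 1.0) for _ in range(n))
    if a_n * (m_n - b_n_prime) <= 0.0:
        hits += 1
p_hat = hits / trials

# The Gumbel limit predicts exp(-1) ~ 0.368; convergence in log n is slow,
# so we only ask for rough agreement.
assert 0.25 < p_hat < 0.50
```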
In many matrix models with Gaussian entries (such as the GOE), there are explicit formulas for the density of the eigenvalues. The authors are not aware of any formulas for the eigenvalues of $\mathcal{L}_A$ when $A$ is drawn from the GOE. In particular, $\mathcal{L}_A$ is not orthogonally invariant in this case. The proof of Theorem \ref{thm:main}---which is outlined in Section \ref{sec:overview} below---instead relies on comparing the largest eigenvalue of $\mathcal{L}_A$ to the largest eigenvalue of a matrix model with independent entries and then applying resolvent techniques to analyze this new model. The fact that the entries are Gaussian is crucial to our method, but we conjecture that Theorem \ref{thm:main} should still hold when $A$ is a Wigner matrix with some appropriate moment assumptions on the entries.
\section*{Acknowledgements}
The authors thank Santiago Arenas-Velilla and Victor P\'erez-Abreu for comments on an earlier draft of this manuscript and for contributing Appendix \ref{sec:appendix}.
\section{Overview and main reduction} \label{sec:overview}
Let $A$ be an $n \times n$ matrix drawn from the GOE. Define
\begin{equation} \label{def:L}
L := D - A,
\end{equation}
where $D$ is an $n \times n$ diagonal matrix with iid standard normal entries, independent of $A$. We will show that the largest eigenvalue of $L$ has the same asymptotic behavior as that described in Theorem \ref{thm:main}.
\begin{theorem} \label{thm:main_L}
Let $L$ be the $n \times n$ random matrix defined above. The centered and rescaled largest eigenvalue of $L$
\[ a_n \left( \lambda_1(L) - b_n \right) \]
converges in distribution as $n \to \infty$ to the standard Gumbel distribution, where $a_n$ and $b_n$ are defined in \eqref{def:anbn}.
\end{theorem}
\begin{remark}
Since $L$ has the same distribution as $D + A$ and $A - D$, Theorem \ref{thm:main_L} also applies to the largest eigenvalues of these matrices. In fact, in the forthcoming proofs, it will sometimes be convenient to work with these other, equivalent expressions for $L$.
\end{remark}
Theorem \ref{thm:main_L} was motivated by many results in the literature for models of deformed Wigner matrices, including the results in \cite{MR3449389,MR3502606,MR3208886,MR3500269,MR3800833,MR2288065} and references therein.
In this section, we prove Theorem \ref{thm:main} using Theorem \ref{thm:main_L}. We begin by introducing the notation used here and throughout the paper.
\subsection{Notation}
For a complex number $z$, we let $\Re(z)$ be the real part and $\Im(z)$ be the imaginary part of $z$. We will use $i$ for both the imaginary unit and as an index in summations; the reader can tell the difference based on context. We denote the upper-half plane as $\mathbb{C}_+ := \{z \in \mathbb{C} : \Im(z) > 0\}$.
For a matrix $A$, we let $A_{ij}$ be the $(i,j)$-entry of $A$. The transpose of $A$ is denoted $A^\mathrm{T}$, and $A^\ast$ is the conjugate transpose of $A$. We use $\tr A$ to denote the trace of $A$, and $\det A$ is the determinant of $A$. If $A$ is an $n \times n$ real symmetric matrix, we let $\lambda_n(A) \leq \cdots \leq \lambda_1(A)$ be its eigenvalues. For any matrix $M$, let $\|M\|$ be its spectral norm (also known as the operator norm). We let $I$ be the identity matrix. If $M$ is a square matrix and $z \in \mathbb{C}$, we will sometimes write $M + z$ for the matrix $M + zI$.
For a vector $v$, we use $\|v\|_2$ to mean the standard Euclidean norm.
For an event $E$, $\mathbb{P}(E)$ is its probability.
We let $\oindicator{E}$ be the indicator function of the event $E$.
For a random variable $\xi$, $\mathbb{E} \xi$ is its expectation. $\mathbb{E}_A \xi$ is the expectation of $\xi$ with respect to the GOE matrix $A$ and $\mathbb{E}_D \xi$ is its expectation with respect to the random diagonal matrix $D$.
For a natural number $n$, we let $[n] = \{1, \ldots, n\}$ be the discrete interval. For a finite set $S$, $|S|$ will denote the cardinality of $S$. The function $\log(\cdot)$ will always denote the natural logarithm.
Asymptotic notation is used under the assumption that $n$ tends to infinity, unless otherwise noted. We use $X = O(Y)$, $Y=\Omega(X)$, $X \ll Y$, or $Y \gg X$ to denote the estimate $|X| \leq C Y$ for some constant $C > 0$, independent of $n$, and all $n \geq C$. If $C$ depends on other parameters, e.g. $C = C_{k_1, k_2, \ldots, k_p}$, we indicate this with subscripts, e.g. $X = O_{k_1, k_2, \ldots, k_p}(Y)$. The notation $X = o(Y)$ denotes the estimate $|X| \leq c_n Y$ for some sequence $(c_n)$ that converges to zero as $n \to \infty$, and, following a similar convention, $X = \omega(Y)$ means $|X| \geq c_n Y$ for some sequence $(c_n)$ that converges to infinity as $n \to \infty$. Finally, we write $X = \Theta(Y)$ if $X \ll Y \ll X$.
\subsection{Proof of Theorem \ref{thm:main}}
Using Theorem \ref{thm:main_L}, we now complete the proof of Theorem \ref{thm:main}.
We begin with an observation from \cite{MR4058984}. Let $A$ be an $n \times n$ matrix drawn from the GOE and $\mathcal{L}_A$ its corresponding Laplacian matrix as defined in Definition \ref{def:Laplacian}. Let $\tilde{R}$ be any fixed, $n \times n$ orthogonal matrix with last column
\begin{equation} \label{eq:def:e}
\mathbf{e} := (1/\sqrt{n}, \ldots, 1/\sqrt{n})^\mathrm{T}
\end{equation}
and we use the notation $\tilde{R} = (R|\mathbf{e})$, where $R$ contains the first $n-1$ columns of $\tilde{R}$. The eigenvalues of $\mathcal{L}_A$ coincide with those of $\tilde{R}^\mathrm{T} \mathcal{L}_A \tilde{R}$. The set of eigenvalues of $\tilde{R}^\mathrm{T} \mathcal{L}_A \tilde{R}$ are simply the $n-1$ eigenvalues of $R^\mathrm{T}
\mathcal{L}_A R$ along with a zero eigenvalue. In particular, with probability $1 - o(1)$, the behavior of the largest eigenvalue of $\mathcal{L}_A$ will be that of the largest eigenvalue of $R^\mathrm{T} \mathcal{L}_A R$ since these largest eigenvalues are positive with probability tending to one (see Theorem 1 in \cite{MR2759729} or, alternatively, see the proof of Proposition \ref{kprop:finalreduction} below).
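This reduction can be illustrated concretely. The sketch below (ours; standard library only) builds one admissible choice of $\tilde{R}$, a Householder reflection whose last column is $\mathbf{e}$, and checks that $\tilde{R}^\mathrm{T} \mathcal{L}_A \tilde{R}$ has vanishing last row and column, so its spectrum is that of $R^\mathrm{T} \mathcal{L}_A R$ together with a zero eigenvalue.

```python
import random

random.seed(2)
n = 5
e = [1 / n ** 0.5] * n
# Householder reflection H = I - 2 v v^T / (v^T v) with v = e - e_n;
# H is orthogonal and H e_n = e, so we may take R~ = H.
v = [e[i] - (1.0 if i == n - 1 else 0.0) for i in range(n)]
vv = sum(x * x for x in v)
H = [[(1.0 if i == j else 0.0) - 2 * v[i] * v[j] / vv for j in range(n)]
     for i in range(n)]

# A random symmetric A and its Laplacian L_A = D_A - A.
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        A[i][j] = A[j][i] = random.gauss(0.0, 1.0)
L = [[(sum(A[i]) if i == j else 0.0) - A[i][j] for j in range(n)]
     for i in range(n)]

# M = H^T L H; since L e = 0 and the last column of H is e,
# the last row and column of M vanish.
M = [[sum(H[k][i] * L[k][l] * H[l][j] for k in range(n) for l in range(n))
      for j in range(n)] for i in range(n)]
assert all(abs(M[n - 1][j]) < 1e-10 for j in range(n))
assert all(abs(M[i][n - 1]) < 1e-10 for i in range(n))
```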
\begin{lemma} [Proposition 2.10, \cite{MR4058984}] \label{klem:firstreduction}
The random matrix $R^\mathrm{T} \mathcal{L}_A R$ is equal in distribution to the matrix $A' + R^\mathrm{T} \tilde DR + gI$, where $A'$ is an $(n-1) \times (n-1)$ GOE matrix, $\tilde{D}$ is an $n \times n$ diagonal matrix with iid centered Gaussian random variables with variance $n/(n-1)$ along the diagonal, $g$ is a centered Gaussian random variable with variance $1/(n-1)$, and $A'$, $\tilde D$ and $g$ are jointly independent.
\end{lemma}
In the next lemma we make some minor reductions to remove some nuisances such as the slight dimension discrepancy and the awkward variances.
\begin{lemma} \label{klem:scalingandg}
In the notation of Lemma \ref{klem:firstreduction}, if we let $W = A' + \sqrt{\frac{n-1}{n}} R^\mathrm{T} \tilde{D} R$ and $W' = A' + R^\mathrm{T} \tilde{D}R + gI$ then for any fixed $\varepsilon > 0$ we have
\[
\lim_{n \rightarrow \infty} \P(|a_{n} \lambda_1 (W) - a_{n} \lambda_1(W')| \geq \varepsilon) = 0.
\]
\end{lemma}
\begin{proof}
We use a simple coupling argument by placing $W$ and $W'$ on the same probability space.
Since $a_n g$ converges to zero in probability, by Weyl's inequality (also known as Weyl's perturbation theorem, see Corollary III.2.6 in \cite{MR1477662}), with probability $1- o(1)$,
\[
|a_n \lambda_1(W') - a_n \lambda_1(A' + R^\mathrm{T} \tilde{D}R)| = o(1).
\]
Similarly, with probability $1-o(1)$
\[
|a_n \lambda_1(W) - a_n \lambda_1(A' + R^\mathrm{T} \tilde{D}R)| \leq \frac{a_n}{n} \|R^\mathrm{T} \tilde{D}R\| \leq \frac{a_n}{n} \| \tilde D \| = o(1),
\]
where $\| \tilde D \|$ was controlled using standard bounds on the maximum of iid normal random variables (see, for example, \cite[Theorem 3]{MR3389997}).
Therefore,
\begin{align*}
\lim_{n \rightarrow \infty} \P\left( a_{n} \left|\lambda_1 (W) - \lambda_1(W')\right| \geq \varepsilon \right) = 0,
\end{align*}
as desired.
\end{proof}
In the final reduction, we compare $R^\mathrm{T} \tilde{D} R + A'$ to a slightly augmented matrix.
\begin{proposition} \label{kprop:finalreduction}
We recall the notation of Lemma \ref{klem:firstreduction} and define
\begin{equation} \label{keq:D}
D := \sqrt{\frac{n-1}{n}} \tilde{D}
\end{equation}
\begin{equation} \label{keq:A}
A := \sqrt{\frac{n-1}{n}} \left(\begin{array}{cc} A' & Y \\
Y^\mathrm{T} & g' \end{array} \right)
\end{equation}
where $Y$ is a random vector in $\mathbb{R}^{n-1}$ whose entries are independent centered Gaussian random variables of variance $\frac{1}{n-1}$, and $g'$ is a centered Gaussian random variable with variance $\frac{2}{n-1}$. We consider a probability space in which $\tilde{D}$, $A'$, $Y$, and $g'$ are all defined and jointly independent.
Then,
for any $\varepsilon > 0$,
\[
\lim_{n \rightarrow \infty} \P(|a_n\lambda_1(R^\mathrm{T} D R + A') - a_{n} \lambda_1(\tilde{R}^\mathrm{T} D \tilde{R} + A) | \geq \varepsilon) = 0.
\]
\end{proposition}
\begin{remark}
The scalings in \eqref{keq:D} and \eqref{keq:A} are designed so that $D$ has iid standard Gaussian random variables along its diagonal and $A$ is drawn from the GOE, so that both are compatible with our previous notation.
\end{remark}
\begin{proof}[Proof of Proposition \ref{kprop:finalreduction}]
Again, we use a coupling argument by embedding $R^\mathrm{T} D R + A'$ in a slightly larger matrix. Note that $A$ is an $n \times n$ GOE matrix, and define
\[
Z := \tilde{R}^\mathrm{T} D \tilde{R} + \sqrt{\frac{n}{n-1}} A = \left( \begin{array}{cc} R^\mathrm{T} D R + A' & Y + R^\mathrm{T} D \mathbf{e} \\
\mathbf{e}^\mathrm{T} DR + Y^\mathrm{T} & \mathbf{e}^\mathrm{T} D \mathbf{e} + g' \end{array} \right).
\]
As $R^\mathrm{T} D R +A'$ is a submatrix of $Z$, by Cauchy's interlacing theorem, we immediately have that
\begin{equation} \label{keq:upperbound}
\lambda_1(R^\mathrm{T} D R + A') \leq \lambda_1(Z).
\end{equation}
To find a corresponding lower bound for $\lambda_1(R^\mathrm{T} D R + A')$, we consider the eigenvalue-eigenvector equation for $Z$:
\begin{equation}\label{keq:eigenvalue}
Z v = \left( \begin{array}{cc} R^\mathrm{T} D R + A' & Y + R^\mathrm{T} D \mathbf{e} \\
\mathbf{e}^\mathrm{T} DR + Y^\mathrm{T} & \mathbf{e}^\mathrm{T} D \mathbf{e} + g' \end{array} \right) \left(
\begin{array}{c} w \\
t \end{array} \right) = \lambda_1(Z) \left(
\begin{array}{c} w \\
t \end{array} \right).
\end{equation}
Here, $v = \begin{pmatrix} w \\ t \end{pmatrix}$ is a unit eigenvector of $\lambda_1(Z)$, $w \in \mathbb{R}^{n-1}$ and $t \in \mathbb{R}$. We observe that if $\|w\|_2 > 0$,
\[
\frac{w^\mathrm{T}}{\|w\|_2} (R^\mathrm{T} D R + A') \frac{w}{\|w\|_2} \leq \lambda_1(R^\mathrm{T} D R + A')
\]
by the Courant minimax principle.
Considering the top $n-1$ coordinates of equation \eqref{keq:eigenvalue} yields
\begin{equation} \label{keq:topn}
(R^\mathrm{T} D R + A') w + t (Y + R^\mathrm{T} D \mathbf{e}) = \lambda_1(Z) w.
\end{equation}
Multiplying this equation by $\frac{w^\mathrm{T}}{\|w\|_2^2}$ from the left and rearranging, we conclude that
\begin{equation} \label{keq:lowerbound}
\lambda_1(R^\mathrm{T} D R + A') \geq \frac{w^\mathrm{T}}{\|w\|_2} (R^\mathrm{T} D R + A') \frac{w}{\|w\|_2} = \lambda_1(Z) - \frac{t}{\|w\|_2^2} w^\mathrm{T} (Y + R^\mathrm{T} D \mathbf{e}).
\end{equation}
Our next goal is to control the size of $\frac{t}{\|w\|_2^2} w^\mathrm{T} (Y + R^\mathrm{T} D \mathbf{e})$.
We define the following events, which we later show hold with probability $1-o(1)$:
\[
\mathcal{E}_1 = \left\{\|A'\| \leq 10, \|\mathbf{e}^\mathrm{T} D R\|_2 \leq 10, \|Y\|_2 \leq 10, |\mathbf{e}^\mathrm{T} D \mathbf{e} + g'| \leq \frac{10}{\sqrt{n}} \right\},
\]
\[
\mathcal{E}_2 = \left\{ \|Y^\mathrm{T} R^\mathrm{T} D\|_2 \leq \sqrt{\log \log n}, \|\mathbf{e}^\mathrm{T} D R R^\mathrm{T} D\|_2 \leq \sqrt{\log \log n} \right\},
\]
\[
\mathcal{E}_3 = \left\{\sqrt{\log n} \leq \lambda_1(Z) \leq 2 \sqrt{\log n } \right\}.
\]
We extract the final coordinate of equation \eqref{keq:eigenvalue} to obtain
\begin{equation}\label{keq:lastcoordinate}
(\mathbf{e}^\mathrm{T} D R + Y^\mathrm{T}) w + t(\mathbf{e}^\mathrm{T} D \mathbf{e} + g') = \lambda_1(Z) t.
\end{equation}
On the event $\mathcal{E} = \mathcal{E}_1 \cap \mathcal{E}_2 \cap \mathcal{E}_3$, we have
\begin{equation} \label{keq:tbound}
|t| \leq 2 \frac{(\|\mathbf{e}^\mathrm{T} D R\|_2 + \|Y\|_2) \|w\|_2 }{\lambda_1(Z)} \leq \frac{40}{\sqrt{\log n}},
\end{equation}
which also assures us that $\|w\|_2 \geq 1/2> 0$ for all $n$ sufficiently large.
Left multiplying equation \eqref{keq:topn} by $Y^\mathrm{T}$ yields
\[
Y^\mathrm{T} R^\mathrm{T} D R w + t Y^\mathrm{T} (Y + R^\mathrm{T} D \mathbf{e}) = \lambda_1(Z) Y^\mathrm{T} w.
\]
This implies that on the event $\mathcal{E}$,
\begin{align}\label{keq:firstproduct}
|Y^\mathrm{T} w| &\leq \frac{\|Y^\mathrm{T} R^\mathrm{T} D\|_2 \|R w\|_2 + t \|Y\|_2^2 + t \|Y^\mathrm{T} R^\mathrm{T} D \|_2 \|\mathbf{e}\|_2 }{\lambda_1(Z)} \nonumber \\
&\leq \frac{\sqrt{\log \log n} + 4000/\sqrt{\log n} + 100 \sqrt{\log \log n}/\sqrt{\log n}}{\sqrt{\log n}} \nonumber \\
&\ll \sqrt{\frac{\log \log n}{\log n}}.
\end{align}
Left multiplying equation \eqref{keq:topn} by $\mathbf{e}^\mathrm{T} D R$ gives
\[
\mathbf{e}^\mathrm{T} D R R^\mathrm{T} D R w + \mathbf{e}^\mathrm{T} D R A' w + t \mathbf{e}^\mathrm{T} D R Y + t \mathbf{e}^\mathrm{T} D R R^\mathrm{T} D \mathbf{e} = \lambda_1(Z) \mathbf{e}^\mathrm{T} D R w.
\]
On the event $\mathcal{E}$, we deduce that
\begin{align} \label{keq:secondproduct}
|\mathbf{e}^\mathrm{T} D R w| &\leq \frac{\|\mathbf{e}^\mathrm{T} D RR^\mathrm{T} D\|_2 \| R w\|_2 + \|\mathbf{e}^\mathrm{T} D R\|_2 \|A'\| \|w\|_2 }{\lambda_1(Z)} \nonumber \\
&\qquad \qquad + \frac{t \|\mathbf{e}^\mathrm{T} D R\|_2 \|Y\|_2 + t \|\mathbf{e}^\mathrm{T} D RR^\mathrm{T}D\|_2 \|\mathbf{e}\|_2 }{\lambda_1(Z)} \nonumber \\
&\leq \frac{100 \sqrt{\log \log n} + 4000/\sqrt{\log n} + 40 \sqrt{\log \log n}/\sqrt{\log n}}{\sqrt{\log n}} \nonumber \\
&\ll \sqrt{\frac{\log \log n}{\log n}}.
\end{align}
Combining equations \eqref{keq:upperbound}, \eqref{keq:lowerbound}, \eqref{keq:tbound}, \eqref{keq:firstproduct}, and \eqref{keq:secondproduct}, we conclude that
\[
\lambda_1(Z) - O\left(\frac{\sqrt{\log \log n}}{\log n}\right) \leq \lambda_1(Z) - \left|\frac{t}{\|w\|_2^2} w^\mathrm{T} (Y + R^\mathrm{T} D \mathbf{e})\right|\leq \lambda_1(R^\mathrm{T} D R + A') \leq \lambda_1(Z).
\]
Therefore, since $a_n O(\sqrt{\log \log n}/\log n) = o(1)$,
\begin{align*}
\lim_{n \rightarrow \infty} \P(|a_n\lambda_1(&R^\mathrm{T} D R + A') - a_{n} \lambda_1(Z) | \geq \varepsilon) \\
&\leq \lim_{n \rightarrow \infty} \P(|a_n\lambda_1(R^\mathrm{T} D R + A') - a_{n} \lambda_1(Z) | \geq \varepsilon | \mathcal{E}) + \lim_{n \rightarrow \infty} \P(\mathcal{E}^c) \\
&= 0.
\end{align*}
We can then remove the factor $\sqrt{n/(n-1)}$ using Weyl's inequality (as in the proof of Lemma \ref{klem:scalingandg}) to conclude that
\[
\lim_{n \rightarrow \infty} \P(|a_n\lambda_1(R^\mathrm{T} D R + A') - a_{n} \lambda_1(\tilde{R}^\mathrm{T} D \tilde{R} + A) | \geq \varepsilon) =0.
\]
It remains to show that $\P(\mathcal{E}) = 1 - o(1)$. The bounds
\[
\|A'\| \leq 10, \|\mathbf{e}^\mathrm{T} D R\|_2 \leq 10, \|Y\|_2 \leq 10, |\mathbf{e}^\mathrm{T} D \mathbf{e} + g'| \leq \frac{10}{\sqrt{n}}
\]
are straightforward (and sub-optimal) Gaussian concentration or random matrix theory results, so the proofs are omitted (see \cite[Section 5.4]{MR3185193}, \cite[Chapter 5]{MR2567175}). For $\mathcal{E}_2$, we first observe that $\|Y^\mathrm{T} R^\mathrm{T}\|_2^2 = Y^\mathrm{T} R^\mathrm{T} R Y = \|Y\|_2^2$.
By Markov's inequality,
\[
\P\left(\|Y^\mathrm{T} R^\mathrm{T} D\|_2^2 \geq \log \log n\right) \leq \frac{\mathbb{E} \|Y^\mathrm{T} R^\mathrm{T} D\|_2^2}{\log \log n} = \frac{\mathbb{E} \|Y\|_2^2}{\log \log n} = o(1).
\]
Thus, with probability $1-o(1)$,
\[
\|Y^\mathrm{T} R^\mathrm{T} D\|_2 \leq \sqrt{\log \log n}.
\]
We now bound $\|\mathbf{e}^\mathrm{T} D R R^\mathrm{T} D\|_2$. As $\tilde{R}$ is an orthogonal matrix,
we have that
\[
(R R^\mathrm{T})_{ii} = 1 - \frac{1}{n} \text{ and } (R R^\mathrm{T})_{ij} = -\frac{1}{n}
\]
for $i \neq j$. Therefore,
\begin{align*}
\|\mathbf{e}^\mathrm{T} D R R^\mathrm{T} D\|_2^2 &= \sum_{i,j, k=1}^{n} \frac{1}{n} D_{ii} (R R^\mathrm{T})_{ij} D_{jj}^2 (R R^\mathrm{T})_{jk} D_{kk} \\
&= \frac{1}{n} \sum_{i=1}^n D_{ii}^4 \left(1 - \frac{1}{n}\right)^2 - \frac{2}{n^2} \sum_{i \neq j} D_{ii}^3 D_{jj} \left(1 - \frac{1}{n}\right) \\
&\qquad + \frac{1}{n^3} \sum_{i \neq j} D_{ii}^2 D_{jj}^2 + \frac{1}{n^3} \sum_{i \neq j, j \neq k, k \neq i} D_{ii} D_{jj}^2 D_{kk}.
\end{align*}
Each sum on the right hand side can be easily controlled via Markov's inequality.
For the first sum, which is the dominant one,
\[
\P\left(\frac{1}{n} \sum_{i=1}^{n} D_{ii}^4 \geq \frac{ \log \log n}{2}\right) \leq \frac{2 \sum_{i=1}^n \mathbb{E} D_{ii}^4}{n \log \log n} = \frac{6n}{n \log \log n} = o(1).
\]
For the second sum,
\begin{align*}
\P\left( \left|\frac{2}{n^2} \sum_{i \neq j} D_{ii}^3 D_{jj} \right| \geq 1\right) &\leq \frac{4 \mathbb{E}\left[ \left( \sum_{i \neq j} D_{ii}^3 D_{jj} \right)^2 \right]}{n^4 } \\
&= \frac{4 \sum_{i \neq j} (\mathbb{E} D_{ii}^6 \mathbb{E} D_{jj}^2 + \mathbb{E} D_{ii}^4 \mathbb{E} D_{jj}^4)}{n^4} \\
&= o(1),
\end{align*}
where in the first equality we make use of the fact that odd moments of a centered Gaussian random variable vanish.
Similarly, for the third sum,
\[
\P\left(\frac{1}{n^3} \sum_{i \neq j} D_{ii}^2 D_{jj}^2 \geq 1 \right) = o(1).
\]
For the final sum, we again make use of the fact that the odd moments of a centered Gaussian random variable vanish to obtain
\begin{align*}
\P\left( \left|\frac{1}{n^3} \sum_{i \neq j, j \neq k, k \neq i} D_{ii} D_{jj}^2 D_{kk} \right| \geq 1 \right) &\leq \frac{\mathbb{E} \left| \sum_{i \neq j, j \neq k, k \neq i} D_{ii} D_{jj}^2 D_{kk} \right|^2}{n^6} \\
&=O \left( \frac{1}{n^2} \right).
\end{align*}
Thus, asymptotically almost surely,
\[
\|\mathbf{e}^\mathrm{T} D R R^\mathrm{T} D\|_2 \leq \sqrt{\log \log n},
\]
which completes the proof for $\mathcal{E}_2$.
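The identity $(RR^\mathrm{T})_{ii} = 1 - \frac{1}{n}$ and $(RR^\mathrm{T})_{ij} = -\frac{1}{n}$ used above amounts to $RR^\mathrm{T} = I - \mathbf{e}\mathbf{e}^\mathrm{T}$. A quick check (ours; the Householder choice of $\tilde{R}$ is one admissible example, not the paper's):

```python
n = 6
e = [1 / n ** 0.5] * n
# Householder reflection H = I - 2 v v^T / (v^T v) with v = e - e_n;
# H is orthogonal with last column e, so we may take R~ = H.
v = [e[i] - (1.0 if i == n - 1 else 0.0) for i in range(n)]
vv = sum(x * x for x in v)
H = [[(1.0 if i == j else 0.0) - 2 * v[i] * v[j] / vv for j in range(n)]
     for i in range(n)]

# R R^T, for R the first n-1 columns of H, should equal I - e e^T.
for i in range(n):
    for j in range(n):
        RRT_ij = sum(H[i][k] * H[j][k] for k in range(n - 1))
        target = (1 - 1 / n) if i == j else -1 / n
        assert abs(RRT_ij - target) < 1e-12
```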
Finally, we show that $\P(\mathcal{E}_3) = 1- o(1)$. The eigenvalues of $\tilde{R}^\mathrm{T} D \tilde{R}$ are the diagonal entries of $D$, which are independent Gaussian random variables, so $\lambda_1(\tilde{R}^\mathrm{T} D \tilde{R})$ is the maximum of $n$ iid standard Gaussian random variables.
The result easily follows from Proposition \ref{kprop:maxgauss} and Weyl's inequality as $\|A\| \leq 10$ with probability $1 - o(1)$.
\end{proof}
We now summarize the reductions that culminate in the statement of Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
We again use the notation introduced in Lemma \ref{klem:firstreduction} and Proposition \ref{kprop:finalreduction}, where all the random elements are placed on the same probability space.
We have the decomposition
\begin{align*}
a_{n} (\lambda_1(A' + R^\mathrm{T} \tilde{D}R + gI) -b_{n}) &= a_{n} (\lambda_1(\tilde{R}^\mathrm{T} D \tilde{R} + A) - b_{n} ) \\
&\qquad + a_n (\lambda_1(R^\mathrm{T} D R + A') - \lambda_1(\tilde{R}^\mathrm{T} D \tilde{R} + A) ) \\
&\qquad + a_n ( \lambda_1(A' + R^\mathrm{T} \tilde{D}R + gI) - \lambda_1(R^\mathrm{T} D R + A')).
\end{align*}
By Theorem \ref{thm:main_L} and the rotational invariance of $A$, $a_n (\lambda_1(\tilde{R}^\mathrm{T} D\tilde{R} + A) - b_{n} )$ converges in distribution as $n \rightarrow \infty$ to the standard Gumbel distribution. By Lemma \ref{klem:scalingandg} and Proposition \ref{kprop:finalreduction}, both
\[
a_n (\lambda_1(R^\mathrm{T} D R + A') - \lambda_1(\tilde{R}^\mathrm{T} D \tilde{R} + A) )
\]
and
\[
a_n ( \lambda_1(A' + R^\mathrm{T} \tilde{D}R + gI) - \lambda_1(R^\mathrm{T} D R + A'))
\]
converge to zero in probability. Therefore, by Slutsky's theorem,
\[
a_{n} (\lambda_1(A' + R^\mathrm{T} \tilde{D}R + gI) -b_{n})
\]
converges in distribution to the standard Gumbel distribution.
We let $\mathcal{E}$ be the event that $\lambda_1(\mathcal{L}_A) > 0$ and $\mathcal{E'}$ the event that $\lambda_1(A' + R^\mathrm{T} \tilde{D}R + gI) > 0$.
Now, by Lemma \ref{klem:firstreduction},
\[
\lambda_1(\mathcal{L}_A) \oindicator{\mathcal{E}}
\]
and
\[
\lambda_1(A' + R^\mathrm{T} \tilde{D}R + gI) \oindicator{\mathcal{E}'}
\]
are equal in distribution. In addition, both
\[
\lambda_1(\mathcal{L}_A) \oindicator{\mathcal{E}^c}
\]
and
\[
\lambda_1(A' + R^\mathrm{T} \tilde{D}R + gI) \oindicator{\mathcal{E}'^c}
\]
converge to zero in probability
since $\P(\mathcal{E}^c) + \P(\mathcal{E}'^c) = o(1)$ by the remarks preceding Lemma \ref{klem:firstreduction}.
Therefore, we conclude that $a_n(\lambda_1(\mathcal{L}_A) - b_n)$ converges in distribution to the standard Gumbel distribution as well.
\end{proof}
\subsection{Overview}
The rest of the paper is devoted to the proof of Theorem \ref{thm:main_L}. Our proof is based on the resolvent approach, which compares the eigenvalues of $L$ with the eigenvalues of $D$. To this end, we define the resolvent matrices
\[ G(z) := (L - z)^{-1} \qquad \text{and} \qquad Q(z) := (D - z)^{-1} \]
for $z \in \mathbb{C}_+ := \{ z \in \mathbb{C} : \Im(z) > 0 \}$. Here, we use the convention that $(L-z)^{-1}$ (alternatively, $(D-z)^{-1}$) denotes the matrix $(L-zI)^{-1}$ (alternatively, $(D-zI)^{-1}$), where $I$ is the identity matrix. Often, we will simply write $G$ and $Q$ for the matrices $G(z)$ and $Q(z)$, respectively. We will define the Stieltjes transforms
\begin{equation} \label{def:mnsn}
m_n(z) := \frac{1}{n} \tr G(z) \qquad \text{and} \qquad s_n(z) := \frac{1}{n} \tr Q(z).
\end{equation}
The limiting Stieltjes transform of $s_n$ is given by
\begin{equation} \label{def:s}
s(z) := \int_{-\infty}^{\infty} \frac{\phi(x)}{x - z} \d x
\end{equation}
for $z \in \mathbb{C}_+$, where
\begin{equation} \label{def:phi}
\phi(x) := \frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\end{equation}
is the density of the standard normal distribution. The limiting Stieltjes transform $m$ of $m_n$ (namely, the Stieltjes transform of the free additive convolution of the semicircle law and the standard normal distribution) is uniquely defined as the solution of
\begin{equation} \label{def:fc}
m(z) = \int_{-\infty}^\infty \frac{\phi(x)}{x - z - m(z)} \d x, \qquad z \in \mathbb{C}_+,
\end{equation}
where $z m(z) \to -1$ as $|z| \to \infty$ in the upper-half plane.
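The defining relations \eqref{def:s} and \eqref{def:fc} are easy to probe numerically. The following Python sketch (an illustration only, with ad hoc grid and iteration choices; it is not used anywhere in the arguments) evaluates $s(z)$ by quadrature and solves the fixed-point equation for $m(z)$ by simple iteration, exhibiting the normalization $z m(z) \to -1$.

```python
# Numerical illustration (not part of the argument): evaluate the Gaussian
# Stieltjes transform s(z) by Riemann-sum quadrature and solve the
# fixed-point equation m(z) = s(z + m(z)) by iteration.
import numpy as np

GRID = np.linspace(-12.0, 12.0, 200001)          # truncated real line
WEIGHTS = np.exp(-GRID**2 / 2) / np.sqrt(2 * np.pi) * (GRID[1] - GRID[0])

def s(z):
    """s(z) = int phi(x) / (x - z) dx for Im(z) > 0."""
    return complex(np.sum(WEIGHTS / (GRID - z)))

def m(z, iters=200):
    """Solve m = s(z + m); the map is a contraction for large |z|."""
    w = s(z)
    for _ in range(iters):
        w = s(z + w)
    return w

z = 8.0 + 1.0j
mz = m(z)
print(abs(mz - s(z + mz)))   # fixed-point residual
print(z * mz)                # approaches -1 as |z| grows
```

The iteration converges quickly here because $|s'|$ is small far from the support of $\phi$; this is only a heuristic check of the defining equation, not a substitute for the uniqueness statement above.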
For fixed (small) $\delta > 0$, we define the spectral domains
\[ S_{\delta} := \{z \in \mathbb{C}_+ : \sqrt{ (2 - \delta) \log n} \leq \Re(z) \leq \sqrt{3 \log n}, n^{-1/4} \leq \Im(z) \leq 1\}, \]
\[ \tilde{S}_{\delta} := \{ z \in \mathbb{C}_+ : \sqrt{ (2 - \delta) \log n} \leq \Re(z) \leq \sqrt{3 \log n}, \Im(z) = n^{-1/4} \} \]
and
\[ \hat{S}_{\delta} := \{ z \in \mathbb{C}_+ : \sqrt{ (2 - \delta) \log n} \leq \Re(z) \leq \sqrt{3 \log n}, \Im(z) = \sqrt{2} n^{-1/4} \}. \]
Our method requires us to mostly work on $\tilde{S}_{\delta}$ and $\hat{S}_{\delta}$ for some fixed $\delta > 0$, but it will sometimes be more convenient to state results for the larger domain $S_{\delta}$.
As is common in the literature, we will often take $E := \Re(z)$ and $\eta := \Im(z)$.
For any fixed $z \in \mathbb{C}_+$, $m_n(z)$ is random and $\mathbb{E}_A m_n(z)$ will denote its expectation with respect to the GOE random matrix $A$.
Our key technical result is the following.
\begin{theorem} \label{thm:stieltjes_transforms}
There exists $\delta > 0$ so that
\[ \sup_{z = E + i \eta \in \tilde S_{\delta} \cup \hat{S}_{\delta}} n \eta \left| m_n(z) - s_n(z + \mathbb{E}_A m_n(z)) \right| = o(1) \]
with overwhelming probability\footnote{An event $E$ holds with overwhelming probability if, for every $p > 0$, $\mathbb{P}(E) \geq 1 - O_p(n^{-p})$; see Definition \ref{def:events} for details.}.
\end{theorem}
Theorem \ref{thm:stieltjes_transforms} will allow us to compare the largest eigenvalue of $L$ to the largest eigenvalue of $D$, up to a small shift, which we will need to track carefully. Theorem \ref{thm:stieltjes_transforms} should be compared to other local laws in the random matrix theory literature such as \cite{MR2784665,MR2669449,MR4134946,MR3183577,MR3770875,MR3622895,MR3602820,MR4078529,MR2481753,MR2871147,MR3098073,MR2964770,MR3068390,MR3699468,MR2567175,MR3962004,MR3800840,MR4019881,MR4091114,MR3208886} and references therein; this list is necessarily incomplete and represents only a small fraction of the known local law results.
\subsection{Outline of the remainder of the article}
In the next section, we gather some technical tools and their proofs that will be of use in the rest of the argument. In Section \ref{sec:stability}, we prove a quantitative stability theorem for approximate solutions of \eqref{def:fc}. Section \ref{sec:concentrationofstieltjes} is devoted to the concentration of $m_n(z)$ near its expectation $\mathbb{E}_A m_n(z)$. Section \ref{sec:proofofmaintechnical} contains the proof of our main technical result, Theorem \ref{thm:stieltjes_transforms}. Finally, we combine all the results in Section \ref{sec:proofofmain} to prove Theorem \ref{thm:main_L}.
\section{Tools}
This section introduces the tools we will need in the proof of Theorem \ref{thm:main_L}. We begin with a definition describing high probability events.
\begin{definition}[High probability events] \label{def:events}
Let $E$ be an event that depends on $n$.
\begin{itemize}
\item $E$ holds \emph{asymptotically almost surely} if $\mathbb{P}(E) = 1 - o(1)$.
\item $E$ holds \emph{with high probability} if $\mathbb{P}(E) = 1 - O(n^{-c})$ for some constant $c > 0$.
\item $E$ holds \emph{with overwhelming probability} if, for every $p > 0$, $\mathbb{P}(E) \geq 1 - O_p(n^{-p})$.
\end{itemize}
\end{definition}
For $z = E + i \eta \in \mathbb{C}_+$, the \emph{Ward identity} states that
\begin{equation} \label{eq:ward}
\sum_{j = 1}^n \left| G_{ij}(z) \right|^2 = \frac{1}{ \eta} \Im G_{ii}(z).
\end{equation}
If $A$ and $B$ are invertible matrices, the \emph{resolvent identity} states that
\begin{equation} \label{eq:resolvent}
A^{-1} - B^{-1} = A^{-1} (B - A) B^{-1} = B^{-1} (B - A) A^{-1}.
\end{equation}
If $\xi$ is a Gaussian random variable with mean zero and variance $\sigma^2$ and $f: \mathbb{R} \to \mathbb{C}$ is continuously differentiable, the \emph{Gaussian integration by parts formula} states that
\begin{equation} \label{eq:ibp}
\mathbb{E}[ \xi f(\xi)] = \sigma^2 \mathbb{E}[ f'(\xi) ],
\end{equation}
provided the expectations are finite.
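Each of these identities can be checked mechanically on small examples. The following sketch (illustrative only; the matrix size, spectral parameter, and the test function $f(x) = x^3$ are arbitrary choices) verifies all three numerically.

```python
# Sanity check of the Ward identity, the resolvent identity, and Gaussian
# integration by parts (with the arbitrary test function f(x) = x^3).
import numpy as np

rng = np.random.default_rng(0)
n, eta = 6, 0.3
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                       # real symmetric
z = 0.7 + 1j * eta
G = np.linalg.inv(W - z * np.eye(n))    # resolvent G(z)

# Ward identity: sum_j |G_ij|^2 = Im(G_ii) / eta, row by row.
ward_gap = np.max(np.abs(np.sum(np.abs(G)**2, axis=1)
                         - np.imag(np.diag(G)) / eta))

# Resolvent identity: A^{-1} - B^{-1} = A^{-1}(B - A)B^{-1}.
A = W - z * np.eye(n)
B = W - (z + 0.5j) * np.eye(n)
res_gap = np.max(np.abs((np.linalg.inv(A) - np.linalg.inv(B))
                        - np.linalg.inv(A) @ (B - A) @ np.linalg.inv(B)))

# Integration by parts, f(x) = x^3: E[xi * xi^3] = sigma^2 E[3 xi^2].
sigma = 2.0
xi = sigma * rng.standard_normal(10**6)
ibp_lhs, ibp_rhs = np.mean(xi * xi**3), sigma**2 * np.mean(3 * xi**2)

print(ward_gap, res_gap, ibp_lhs, ibp_rhs)
```

The first two gaps are zero up to rounding error; the Monte Carlo check of integration by parts agrees only up to sampling error, as expected.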
Our next result bounds (via Chernoff's inequality) the number of large entries in the diagonal matrix $D$.
\begin{proposition} \label{prop:count}
Let $D$ be the $n \times n$ diagonal matrix whose entries are iid standard normal random variables. Then, for any $\varepsilon \in (0, 1/2)$, there exists $\delta > 0$ so that
\[ \left| \{ 1 \leq i \leq n : D_{ii} \geq \sqrt{(2- \delta) \log n} \} \right| = O_{\varepsilon}(n^{\varepsilon}) \]
with overwhelming probability.
\end{proposition}
\begin{proof}
Fix $\varepsilon \in (0,1/2)$, and let $0<\delta<2\varepsilon$. Let $X_i=\indicator{D_{ii}\geq \sqrt{(2- \delta) \log n}}$ and $S_n=\sum_{i=1}^n X_i$. Chernoff's inequality (see Theorem 2.1.3 in \cite{MR2906465}) gives that, for any $\lambda>0$, \begin{equation}\label{eq:Chernoff's for Sn}
\P\left(S_n-\mathbb{E} S_n\geq \lambda\sqrt{\var (S_n)} \right)\leq C\max\left\{\exp(-c\lambda^2),\exp(-c\lambda\sqrt{\var(S_n)}) \right\}
\end{equation} for absolute constants $C,c>0$, where $\var (S_n)$ is the variance of $S_n$. From the standard bounds \begin{equation*}
\left(\frac{1}{x}-\frac{1}{x^3} \right)\frac{e^{-x^2/2}}{\sqrt{2\pi}}\leq \P\left(D_{11}\geq x\right)\leq\frac{e^{-x^2/2}}{x\sqrt{2\pi}},\qquad x>0
\end{equation*} on standard Gaussian random variables it is straightforward to show that \begin{equation*}
\mathbb{E} S_n\leq n^{\delta/2},
\end{equation*} and \begin{equation}\label{eq:A:Variance of Sn lower bounds}
\frac{n^{\delta/2}}{2\sqrt{2 \pi (2-\delta)\log n}}\left(1-\frac{1}{(2-\delta)\log n} \right)\leq \var(S_n)\leq n^{\delta/2}.
\end{equation} Letting $\lambda=\sqrt{\var(S_n)}$ in \eqref{eq:Chernoff's for Sn} gives that \begin{equation*}
\P\left(S_n> 2n^{\delta/2} \right)\leq C\exp(-c\var(S_n)),
\end{equation*} and lower bounding $\var(S_n)$ by \eqref{eq:A:Variance of Sn lower bounds} shows that $S_n\leq 2n^{\delta/2}\leq 2n^{\varepsilon}$ with overwhelming probability, completing the proof.
\end{proof}
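A quick simulation (illustrative only; the values of $n$, $\delta$, and the random seed are arbitrary) matches the $n^{\delta/2}$ scale appearing in the proof: the observed count is a small constant here, comfortably below $2 n^{\delta/2}$.

```python
# Count diagonal entries exceeding sqrt((2 - delta) log n); the proof shows
# this count is at most 2 n^{delta/2} with overwhelming probability.
import numpy as np

rng = np.random.default_rng(1)
n, delta = 10**6, 0.5
thresh = np.sqrt((2 - delta) * np.log(n))
count = int(np.sum(rng.standard_normal(n) >= thresh))
print(count, 2 * n**(delta / 2))
```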
We will need the following general concentration result for the Stieltjes transform of random symmetric matrices with independent entries.
\begin{proposition}[Naive concentration of the Stieltjes transform] \label{prop:concentration}
Let $W$ be an $n \times n$ real symmetric random matrix whose entries on and above the diagonal $W_{ij}$, $1 \leq i \leq j \leq n$ are independent random variables. Then
\[ \mathbb{P} \left( \left| \frac{1}{n} \tr (W - z)^{-1} - \mathbb{E} \frac{1}{n} \tr (W - z)^{-1} \right| \geq \frac{t}{\eta \sqrt{n}} \right) \leq C e^{-ct^2} \]
for any $t \geq 0$ and any $z = E + i \eta \in \mathbb{C}_+$, where $C, c > 0$ are absolute constants.
\end{proposition}
\begin{proof}
Fix $z=E+i\eta \in \mathbb{C}_+$ and let $M$ be an $n\times n$ real symmetric matrix. Let $M'$ be an $n\times n$ real symmetric matrix equal to $M$, up to possibly a single row and corresponding column being different. Define $R(z)=(M - z)^{-1}$ and $ R'(z)=(M' - z)^{-1}$. It follows from the resolvent identity \eqref{eq:resolvent} that\begin{equation}
\rank\left( R(z)- R'(z) \right) \leq 2.
\end{equation} It then follows that \begin{align*}
\left|\frac{1}{n} \tr (M - z)^{-1}-\frac{1}{n} \tr (M' - z)^{-1} \right|&=\left|\frac{1}{n} \tr \left((M - z)^{-1}- (M' - z)^{-1}\right) \right| \\
&\leq \frac{\rank\left(R(z)- R'(z) \right) }{n}\|R(z)- R'(z)\|\\
&\leq \frac{4}{n\eta}.
\end{align*} We can then conclude from McDiarmid's inequality (see \cite{MR1678578}) that\begin{equation}
\mathbb{P} \left( \left| \frac{1}{n} \tr (W - z)^{-1} - \mathbb{E} \frac{1}{n} \tr (W - z)^{-1} \right| \geq \frac{t}{\eta \sqrt{n}} \right) \leq C e^{-ct^2},
\end{equation} for any $t \geq 0$, where $C, c > 0$ are absolute constants.
\end{proof}
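The $1/(\eta\sqrt{n})$ scale can be observed in simulation. In the sketch below (illustrative only; the Gaussian entry distribution, seed, and all parameters are arbitrary choices, and the actual fluctuations of such linear statistics are typically much smaller than McDiarmid's bound), the empirical spread stays well below $1/(\eta\sqrt{n})$.

```python
# Empirical spread of (1/n) tr (W - z)^{-1} over independent draws of a
# symmetric Gaussian matrix, compared to McDiarmid's scale 1/(eta sqrt(n)).
import numpy as np

rng = np.random.default_rng(2)
n, eta, reps = 100, 0.5, 200
z = 0.3 + 1j * eta
samples = []
for _ in range(reps):
    W = rng.standard_normal((n, n))
    W = (W + W.T) / 2
    samples.append(np.trace(np.linalg.inv(W - z * np.eye(n))) / n)
spread = np.std(samples)   # std over complex samples (real, nonnegative)
print(spread, 1 / (eta * np.sqrt(n)))
```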
\subsection{Basic concentration and linear algebra identities}
We record several well-known concentration inequalities and algebraic identities that will be of use. The first proposition is a strong concentration result for the maximum of a sequence of iid Gaussian random variables.
\begin{proposition}[Theorem 3 from \cite{MR3389997}] \label{kprop:maxgauss}
Let $X_1, \dots, X_n$ be iid standard Gaussian random variables. Define $M_n = \max_{1 \leq i \leq n} X_i$. Then, for any $t \geq 0$,
\[
\P(|M_n - b_n'| > t) \leq C \exp(-c t \sqrt{\log n}),
\]
where $b_n'$ is defined in \eqref{eq:def:bn'} and $C, c > 0$ are absolute constants.
\end{proposition}
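This concentration is easy to observe numerically. The sketch below uses the classical centering sequence for Gaussian maxima as a stand-in for $b_n'$; that this matches \eqref{eq:def:bn'} is an assumption of the sketch, not something the code verifies.

```python
# Maximum of n iid standard Gaussians versus the classical centering
# sequence b_n = sqrt(2 log n) - (log log n + log(4 pi)) / (2 sqrt(2 log n)),
# used here as a stand-in for b_n' of \eqref{eq:def:bn'} (an assumption).
import numpy as np

rng = np.random.default_rng(3)
n = 10**5
L = np.log(n)
bn = np.sqrt(2 * L) - (np.log(L) + np.log(4 * np.pi)) / (2 * np.sqrt(2 * L))
M = float(np.max(rng.standard_normal(n)))
print(M, bn)  # the gap is of order 1 / sqrt(log n)
```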
The next lemma is a convenient moment bound for a martingale difference sequence.
\begin{lemma} [Lemma 2.12 from \cite{MR2567175}] \label{klem:burkholder2}
Let $\{X_k\}$ be a complex martingale difference sequence and $\mathcal{F}_k = \sigma(X_1, \dots, X_k)$ be the $\sigma$-algebra generated by $X_1, \dots, X_k$. Then, for any $p \geq 2$,
\[
\mathbb{E} \left|\sum_{k=1}^n X_k \right|^p \leq K_p \left(\mathbb{E}\left( \sum_{k=1}^n \mathbb{E}_{k-1} |X_k|^2 \right)^{p/2} + \mathbb{E} \sum_{k=1}^n |X_k|^p \right),
\]
where $K_{p}$ is a constant that only depends on $p$ and $\mathbb{E}_{k-1}[\cdot] := \mathbb{E}[\cdot | \mathcal{F}_{k-1}]$.
\end{lemma}
The next concentration lemma is helpful in controlling the deviation of a quadratic form from its expectation.
\begin{lemma} [Equation (3) from \cite{MR4164840}] \label{klem:quadraticform}
Let $X$ be an $n$-vector containing iid standard Gaussian random variables, $A$ a deterministic $n \times n$ matrix and $\ell \geq 1$ an integer. Then
\[
\mathbb{E}\left| X^* A X - \tr A \right|^{2 \ell} \leq K_{\ell} (\tr A A^* )^\ell,
\]
where $K_{\ell}$ is a constant that only depends on $\ell$.
\end{lemma}
Finally, we will require the following algebraic identity in Section \ref{sec:concentrationofstieltjes}.
\begin{lemma} [Theorem A.5 from \cite{MR2567175}] \label{klem:tracedifference}
Let $A$ be an $n \times n$ symmetric matrix and $A_k$ be the $k$-th major submatrix of size $(n-1) \times (n-1)$. If $A$ and $A_k$ are both invertible, then
\[
\tr( A^{-1}) - \tr(A_k^{-1}) = \frac{1+ \alpha_k^* A_k^{-2} \alpha_k}{A_{kk} - \alpha_k^* A_k^{-1} \alpha_k}
\]
where $\alpha_k$ is obtained from the $k$-th column of $A$ by deleting the $k$-th entry.
\end{lemma}
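The identity can be verified numerically on a random example (here $k$ is indexed from $0$, and the shift $10 I$ is an arbitrary choice made to keep the matrices safely invertible):

```python
# Verify the trace-difference identity of Lemma [tracedifference] on a
# well-conditioned random symmetric matrix.
import numpy as np

rng = np.random.default_rng(4)
n, k = 7, 2
A = rng.standard_normal((n, n))
A = A + A.T + 10 * np.eye(n)            # symmetric and safely invertible
keep = [i for i in range(n) if i != k]
Ak = A[np.ix_(keep, keep)]              # k-th major submatrix
alpha = A[keep, k]                      # k-th column, k-th entry deleted
Aki = np.linalg.inv(Ak)
lhs = np.trace(np.linalg.inv(A)) - np.trace(Aki)
rhs = (1 + alpha @ Aki @ Aki @ alpha) / (A[k, k] - alpha @ Aki @ alpha)
print(lhs, rhs)  # the two sides agree up to rounding error
```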
\subsection{Basic Stieltjes transform and free probability estimates}
This section isolates some useful estimates on $s(z)$ and $m(z)$, defined by \eqref{def:s} and \eqref{def:fc} respectively. Several of the proofs use a contour integral argument, which is given in detail in Lemma \ref{lemma:Free convolution is sub-Gaussian} below. We begin by stating a result of Biane \cite{MR1488333} on the free convolution of the semicircle distribution with an arbitrary distribution. For convenience, we specialize this result to the Gaussian distribution.
\begin{lemma}[Corollaries 3 and 4 from \cite{MR1488333}]\label{lemma:A:Biane result}
Define the function $v:\mathbb{R}\rightarrow[0,\infty)$ by\begin{equation}
v(u)=\inf\left\{v\geq0\,\Bigg|\,\frac{1}{\sqrt{2\pi}}\int_\mathbb{R}\frac{e^{-x^2/2}dx}{(u-x)^2+v^2}\leq 1 \right\}
\end{equation} and the function $\psi:\mathbb{R}\rightarrow\mathbb{R}$ by \begin{equation}
\psi(u)=u+\frac{1}{\sqrt{2\pi}}\int_\mathbb{R}\frac{(u-x)e^{-x^2/2}dx}{(u-x)^2+v(u)^2}.
\end{equation} Then $\psi$ is an increasing homeomorphism from $\mathbb{R}$ to $\mathbb{R}$, and the free additive convolution of the semicircle distribution and the standard Gaussian distribution has density $p: \mathbb{R} \to [0, \infty)$ with \begin{equation}
p(\psi(u))=\frac{v(u)}{\pi}.
\end{equation} Moreover $p$ is analytic where it is positive, and hence must be bounded.
\end{lemma}
The first lemma below is an important technical bound in our argument and is of interest in its own right. It demonstrates that the free convolution of the semicircle distribution with the standard Gaussian distribution is sub-Gaussian.
\begin{lemma}\label{lemma:Free convolution is sub-Gaussian}
Let $p: \mathbb{R} \to [0, \infty)$ be the density of the free additive convolution of the semicircle distribution and a standard Gaussian distribution. Then there exists some constant $C>0$ such that $p(x)\leq Ce^{-x^2/2}$ for all $x\in\mathbb{R}$.
\end{lemma}
\begin{proof}
Let $p,\ v,$ and $\psi$ be as in Lemma \ref{lemma:A:Biane result}. It is straightforward to see from the definitions that $v$ is an even function of $u$ and that $\psi$ (and hence $\psi^{-1}$) is odd. From now on we assume $u>0$, and throughout this proof we use asymptotic notation as $u\rightarrow\infty$. Consider $R>0$ and curves $\gamma_1,\gamma_2,\gamma_3$ in $\mathbb{C}$, where $\gamma_1$ is the straight line from $(0,0)$ to $(R,0)$, $\gamma_2$ is the counterclockwise circular arc from $(R,0)$ to $(2R/\sqrt{5},R/\sqrt{5})$, and $\gamma_3$ is the straight line from $(2R/\sqrt{5},R/\sqrt{5})$ to $(0,0)$. Let $\gamma=\gamma_1\cup\gamma_2\cup\gamma_3$. For $R$ sufficiently large, the residue theorem gives \begin{align*}
\frac{1}{\sqrt{2\pi}}\oint_\gamma \frac{e^{-z^2/2}dz}{(u-z)^2+v(u)^2}&=\frac{1}{\sqrt{2\pi}}\oint_\gamma \frac{e^{-z^2/2}dz}{(z-(u+iv(u)))(z-(u-iv(u)))}\\
&=\frac{2\pi i}{\sqrt{2\pi}}\frac{e^{-(u+iv(u))^2/2}}{2iv(u)}\\
&=\sqrt{\frac{\pi}{2}}e^{v(u)^2/2}e^{-iuv(u)}\frac{e^{-u^2/2}}{v(u)}.
\end{align*} Taking $R\rightarrow\infty$, it is straightforward to show\begin{equation}\label{eq:ResidueEquality}
\sqrt{\frac{\pi}{2}}e^{v(u)^2/2}e^{-iuv(u)}\frac{e^{-u^2/2}}{v(u)}=\frac{1}{\sqrt{2\pi}}\int_0^\infty\frac{e^{-x^2/2}dx}{(u-x)^2+v(u)^2}+\frac{1}{\sqrt{2\pi}}\int_{\gamma_3'} \frac{e^{-z^2/2}dz}{(u-z)^2+v(u)^2}
\end{equation} where $\gamma_3'=\left\{z=x+iy\in\mathbb{C}\,\Bigg|\,x\geq0,\,y=x/2 \right\}$ with orientation such that $\Re(z)$ is decreasing. Note that an equivalent definition of $v$ is that $v(u)$ is the unique solution of \begin{equation}
\frac{1}{\sqrt{2\pi}}\int_\mathbb{R}\frac{e^{-x^2/2}dx}{(u-x)^2+v(u)^2}= 1,
\end{equation} for $u\in\mathbb{R}$. For the first integral on the right-hand side, note that \begin{align}\label{eq:A:real integral bound}
\frac{1}{\sqrt{2\pi}}\int_0^\infty\frac{e^{-x^2/2}dx}{(u-x)^2+v(u)^2}&=1-\frac{1}{\sqrt{2\pi}}\int_{-\infty}^0\frac{e^{-x^2/2}dx}{(u-x)^2+v(u)^2}, \nonumber\\
&\geq 1-\frac{1}{2u^2+2v(u)^2}.
\end{align} The second integral on the right-hand side of \eqref{eq:ResidueEquality} can also be bounded in modulus by $C/u^2$ for some absolute constant $C>0$. Consider this bound and \eqref{eq:A:real integral bound} in \eqref{eq:ResidueEquality} yields that there exists some bounded function $f:[0,\infty)\rightarrow\mathbb{C}$ such that \begin{equation*}
\sqrt{\frac{\pi}{2}}e^{v(u)^2/2}e^{-iuv(u)}\frac{e^{-u^2/2}}{v(u)}=1-\frac{f(u)}{u^2},
\end{equation*} and \begin{equation}\label{eq:vequality}
v(u)=\left(1-\frac{f(u)}{u^2}\right)^{-1}\sqrt{\frac{\pi}{2}}e^{v(u)^2/2}e^{-iuv(u)}e^{-u^2/2}.
\end{equation} It follows from Lemma \ref{lemma:A:Biane result} that $v$ is bounded, and thus we get from \eqref{eq:vequality} that $v(u)\rightarrow0$ as $u\rightarrow\infty$ and there exists some constant $C>0$ such that \begin{equation}\label{eq:v bounds}
v(u)\leq Ce^{-u^2/2}
\end{equation} for all $u\in\mathbb{R}$, using that $v$ is even.
We now turn our attention to $\psi(u)$, in particular $\psi(u)-u$. Using the contour $\gamma$, we get \begin{equation*}
\frac{1}{\sqrt{2\pi}}\oint_\gamma \frac{(u-z)e^{-z^2/2}dz}{(u-z)^2+v(u)^2}=-i\sqrt{\frac{\pi}{2}}e^{v(u)^2/2}e^{-iuv(u)}e^{-u^2/2}.
\end{equation*} Again taking $R\rightarrow\infty$ gives\begin{align*}
\frac{1}{\sqrt{2\pi}}\int_0^\infty&\frac{(u-x)e^{-x^2/2}dx}{(u-x)^2+v(u)^2} \\
&=-\frac{1}{\sqrt{2\pi}}\int_{\gamma_3'} \frac{(u-z)e^{-z^2/2}dz}{(u-z)^2+v(u)^2}-i\sqrt{\frac{\pi}{2}}e^{v(u)^2/2}e^{-iuv(u)}e^{-u^2/2},
\end{align*} where the integral on the right-hand side is $O\left(\frac{1}{u}\right)$. It is also straightforward to show \begin{equation*}
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^0\frac{(u-x)e^{-x^2/2}dx}{(u-x)^2+v(u)^2}=O\left(\frac{1}{u}\right).
\end{equation*} Thus we get $\psi(u)=u+O\left(\frac{1}{u}\right)$. Thus there exists some bounded continuous function $g$ such that \begin{equation*}
\psi(u)=u+\frac{g(u)}{u},
\end{equation*} and \begin{equation*}
u=\psi\left(\psi^{-1}(u)\right)=\psi^{-1}(u)+\frac{g(\psi^{-1}(u))}{\psi^{-1}(u)}.
\end{equation*} Solving for $\psi^{-1}(u)$, for $u$ large enough, gives\begin{equation}\label{eq:psi bounds}
\psi^{-1}(u)=\frac{1}{2}\left(u+u\sqrt{1-\frac{4g(\psi^{-1}(u))}{u^2}}\right)=u+O\left(\frac{1}{u}\right).
\end{equation} Combining \eqref{eq:v bounds}, \eqref{eq:psi bounds}, and Lemma \ref{lemma:A:Biane result} we see there exists some absolute constant $C'>0$ such that $p(u)\leq C'e^{-u^2/2}$ for $u\in\mathbb{R}$.
\end{proof}
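The function $v$ and its sub-Gaussian decay can also be explored numerically. The sketch below (illustrative only; the grid, bisection bracket, and test points are ad hoc choices) solves the defining equation for $v(u)$ by bisection and exhibits the rapid decay, with $v(3)$ already comparable to $\sqrt{\pi/2}\,e^{-9/2}$.

```python
# Solve the defining equation for Biane's v(u),
#   (1/sqrt(2 pi)) int e^{-x^2/2} / ((u-x)^2 + v^2) dx = 1,
# by bisection, and observe the sub-Gaussian decay v(u) <= C e^{-u^2/2}.
import numpy as np

x = np.linspace(-14.0, 14.0, 400001)
w = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi) * (x[1] - x[0])

def mass(u, v):
    # Riemann-sum approximation of the integral above
    return float(np.sum(w / ((u - x)**2 + v**2)))

def v_of(u, lo=1e-6, hi=3.0):
    # mass(u, .) decreases from > 1 to < 1 on [lo, hi]; bisect for the root
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mass(u, mid) > 1 else (lo, mid)
    return hi

v1, v2, v3 = v_of(1.0), v_of(2.0), v_of(3.0)
print(v1, v2, v3, np.sqrt(np.pi / 2) * np.exp(-9 / 2))
```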
Our next lemma confirms that $s(z)$ is bounded and Lipschitz continuous.
\begin{lemma}\label{lemma:Gaussian Stieltjes is Bounded}
There exist constants $C,C'>0$ such that \begin{equation*}
|s(z)|\leq C,
\end{equation*} for all $z\in\mathbb{C}_+$ and $s$ is $C'$-Lipschitz continuous on $\mathbb{C}_+$.
\end{lemma}
\begin{proof}
It is clear that $s(z)$ and $s'(z)$ are both uniformly bounded by $1$ for $\Im(z)\geq 1$. Let $\gamma:\mathbb{R}\rightarrow\mathbb{C}_+$ be the curve $\gamma(t)=t+2i$. Then, for any $z$ with $0<\Im(z)\leq 1$, Cauchy's integral formula (after passing from finite contours to the image of $\gamma$, as in the proof of Lemma \ref{lemma:Free convolution is sub-Gaussian}) gives that \begin{equation*}
s(z)=i\sqrt{2\pi}e^{-z^2/2}+\int_\gamma\frac{1}{w-z}e^{-w^2/2}dw,
\end{equation*} and \begin{equation*}
s'(z)=-i\sqrt{2\pi}ze^{-z^2/2}+\int_\gamma\frac{1}{(w-z)^2}e^{-w^2/2}dw.
\end{equation*} Both expressions are uniformly bounded for $0<\Im(z)\leq 1$, which yields the claimed bound and Lipschitz continuity.
\end{proof}
The next result establishes some simple asymptotics for $s(z)$.
\begin{lemma}\label{Alemma:Gaussian Asymptotic Expansion}
The function $z \mapsto z^2\left(s(z)+\frac{1}{z} \right)$ is uniformly bounded on the strip $\{z \in \mathbb{C}:0< \Im(z)\leq 1 \}\subseteq\mathbb{C}_+$.
\end{lemma}
\begin{proof}
Let $z\in \mathbb{C}$ with $0< \Im(z)\leq 1$, and note \begin{equation*}
z^2\left(s(z)+\frac{1}{z} \right)=\frac{1}{\sqrt{2\pi}}\int_\mathbb{R}\frac{zx}{x-z}e^{-x^2/2}dx.
\end{equation*} Let $\gamma:\mathbb{R}\rightarrow\mathbb{C}_+$ be the curve which is given piecewise by \begin{equation*}
\gamma(t)=\begin{cases}
t-it/2,\ t\leq -4\\
t+2i,\ -4<t<4\\
t+it/2,\ t\geq 4
\end{cases},
\end{equation*} with left to right orientation. Similar to what was done in the proof of Lemma \ref{lemma:Free convolution is sub-Gaussian} we can approximate $\mathbb{R} \cup \gamma$ with finite contours moving from $(-R,0)$ to $(R,0)$, then counterclockwise from $(R,0)$ to $\gamma(2R/\sqrt{5})$, then along $\gamma$ to $\gamma(-2R/\sqrt{5})$ and finally counterclockwise from $\gamma(-2R/\sqrt{5})$ to $(-R,0)$. Applying Cauchy's integral formula we see that \begin{equation*}
\frac{1}{\sqrt{2\pi}}\int_\mathbb{R}\frac{zx}{x-z}e^{-x^2/2}dx=\sqrt{2\pi}iz^2e^{-z^2/2}+\frac{1}{\sqrt{2\pi}}\int_\gamma\frac{zw}{w-z}e^{-w^2/2}dw
\end{equation*} where the right-hand side is easily seen to be uniformly bounded for $z$ with $0< \Im(z)\leq 1$.
\end{proof}
Biane \cite[Lemma 3]{MR1488333} provides a region where $s$ is Lipschitz with Lipschitz constant strictly less than $1$. The next lemma gives a description of this region. This is a key component of the stability result of Section \ref{sec:stability}.
\begin{lemma}\label{lemma:LessthanoneLipschitz}
Fix $t\geq1$, and define $v_{t}:\mathbb{R}\rightarrow[0,\infty)$ by \begin{equation*}
v_{t}(u)=\inf\left\{v\geq0\,\Bigg|\,\frac{1}{\sqrt{2\pi}}\int_\mathbb{R}\frac{e^{-x^2/2}dx}{(u-x)^2+v^2}\leq \frac{1}{t} \right\}.
\end{equation*} Let $\Omega_t=\{x+iy\in\mathbb{C}_+:y\geq v_{t}(x)\}$ be the region defined in \cite[Lemma 3]{MR1488333}. Then there exists a constant $C_t>0$ such that $v_{t}(x)\leq C_t e^{-x^2/2}$ for all $x\in\mathbb{R}$ and $s$ is Lipschitz on $\Omega_t$ with Lipschitz constant at most $\frac{1}{t}$.
\end{lemma}
\begin{proof}
The sub-Gaussian bound on $v$ follows from a straightforward adaptation of the proof of Lemma \ref{lemma:Free convolution is sub-Gaussian}. The Lipschitz statement of $s$ is the result of Biane \cite[Lemma 3]{MR1488333}.
\end{proof}
The next lemma on the behavior of $m(z)$ can be deduced from the proof of Lemma 3.4 in \cite{MR4058984}.
\begin{lemma}\label{lemma:Bounds on m}
There exists a constant $C>0$ so that $m$ is $C$-Lipschitz continuous on $\mathbb{C}_+$ and \begin{equation*}
|m(z)|\leq 1
\end{equation*} for all $z\in\mathbb{C}_+$.
\end{lemma}
The following lemma exhibits the asymptotic behavior of $m(z)$ near infinity.
\begin{lemma}\label{Alemma:FC Asymptotic Expansion}
On $\mathbb{C}_+$, $m$ has the asymptotic expansion at infinity given by\begin{equation*}
m(z)=-\frac{1}{z}+O\left(\frac{1}{z^2}\right).
\end{equation*}
\end{lemma}
\begin{proof}
For $z$ bounded away from the real line the result follows from a straightforward application of the dominated convergence theorem. Applying Lemma \ref{Alemma:Gaussian Asymptotic Expansion} on the strip $\{z \in \mathbb{C}:0< \Im(z)\leq 1 \}$ and using that $m$ is bounded, we obtain \begin{align*}
m(z)&=s(z+m(z))\\
&=-\frac{1}{z+m(z)}+O\left(\frac{1}{(z+m(z))^2}\right)\\
&=-\frac{1}{z}+O\left(\frac{1}{z^2}\right),
\end{align*}
which completes the proof.
\end{proof}
The lemma that follows controls the imaginary part of $m(z)$ and is crucial in the iteration argument in the proof of Theorem \ref{thm:main_L}.
\begin{lemma}\label{lem:Imm_fc is small}
Fix $0<\delta<2/9$. If $z=E+i\eta \in \tilde S_{\delta} \cup \hat{S}_\delta$, then $\Im m(z)=o(\eta)$.
\end{lemma}
\begin{proof} Let $p$ be the density of the free additive convolution of the semicircle distribution and a standard Gaussian distribution. Applying Lemma \ref{lemma:Free convolution is sub-Gaussian}, we have
\begin{align*}
\Im m(z)&=\int_\mathbb{R}\frac{\eta}{(u-E)^2+\eta^2}p(u)du\\
&\leq C\int_\mathbb{R} \frac{\eta}{(u-E)^2+\eta^2}e^{-u^2/2}du\\
&\leq C\int_{|u-E|\leq E/4} \frac{\eta}{(u-E)^2+\eta^2}e^{-u^2/2}du \\
&\qquad \qquad +C\int_{|u-E|\geq E/4} \frac{\eta}{(u-E)^2+\eta^2}e^{-u^2/2}du\\
&\leq C'\left(\frac{E}{\eta}e^{-(3E/4)^2/2}+\frac{\eta}{E^2} \right)\\
&\leq C''\left(\sqrt{3\log n}n^{1/4}n^{-9(1-\frac{\delta}{2})/16}+\frac{n^{-1/4}}{(2-\delta)\log n} \right)\\
&=o(n^{-1/4}),
\end{align*}
for absolute constants $C,C',C''>0$, completing the proof.
\end{proof}
The next result is also utilized in the proof of Theorem \ref{thm:main_L}.
\begin{lemma}\label{Alemma:Difference of real FC} Fix $0<\delta<\frac{1}{2}$. Then \begin{equation}
\sup_{E+i\eta \in \tilde S_{\delta}}\left|\Re m(E+i\eta)-\Re m(E+i\sqrt{2}\eta)\right|=o\left( \frac{1}{n^{1/2}}\right).
\end{equation}
\end{lemma}
\begin{proof}
Let $p$ be the density of the free additive convolution of the standard semicircle measure and the standard Gaussian measure. Then\begin{equation}\label{eq:A:difference integral}
\Re m(E+i\eta)-\Re m(E+i\sqrt{2}\eta)=\eta^2\int_\mathbb{R}\frac{(u-E)p(u)}{\left((u-E)^2+\eta^2 \right)\left((u-E)^2+2\eta^2 \right)}du.
\end{equation} Let $0<c_2<c_1<1$ be functions of $E$ and $\eta$ such that $c_1E\rightarrow\infty$ and $\frac{\eta}{c_2E}\rightarrow0$. We break the right-hand side of \eqref{eq:A:difference integral} into integrals over four subsets of $\mathbb{R}$. Define first the set $I_1=\{u: |u-E|\geq c_1E \}$, where $|E-u|$ is large. Then define $I_2=\{u: c_1E\geq|u-E|\geq c_2E \}$, where $|E-u|$ is not too large, but much larger than $\eta$. Next define the set $I_3=\{u: c_2E\geq|u-E|\geq \eta \}$, where $|E-u|$ is roughly on the order of $\eta$. Finally, define $I_4=\{u: |u-E|\leq \eta \}$, where $|E-u|$ is smaller than $\eta$. Let \begin{equation*}
f_{E,\eta}(u)=\frac{(u-E)p(u)}{\left((u-E)^2+\eta^2 \right)\left((u-E)^2+2\eta^2 \right)}.
\end{equation*} We complete the proof by showing $\left|\int_{I_k}f_{E,\eta}(u)du \right|=o(1)$ uniformly on $\tilde S_\delta$ for $k=1,2,3,4$, with the choices $c_1=1/\sqrt{E}$ and $c_2=\sqrt{\eta}$; since $\eta=n^{-1/4}$ on $\tilde S_\delta$, the prefactor $\eta^2$ in \eqref{eq:A:difference integral} then yields the claimed $o(n^{-1/2})$ bound. Note that, by Lemma \ref{lemma:Free convolution is sub-Gaussian}, there exists $C>0$ such that $p(u)\leq Ce^{-u^2/2}$ for all $u\in\mathbb{R}$.
$\mathbf{I_1}$: On $I_1$ we have $|f_{E,\eta}(u)|\leq\frac{p(u)}{c_1^3E^3}$. Recalling that $p$ is a probability density, the conclusion is clear.
$\mathbf{I_2}$: On $I_2$ we have that \begin{equation*}
\left|\int_{I_2}f_{E,\eta}(u)du \right| \ll \frac{c_1E^2(c_1-c_2)}{c_2^4E^4}\exp\left(-(E-c_1E)^2/2\right)=o(1)
\end{equation*} uniformly on $\tilde S_\delta$ for $\delta<1/2$.
$\mathbf{I_3}$: On $I_3$ we have that \begin{equation*}
\left|\int_{I_3}f_{E,\eta}(u)du \right|\ll \frac{c_2E^2(c_2-\eta)}{\eta^4}\exp\left(-(E-c_2E)^2/2\right)=o(1)
\end{equation*} uniformly on $\tilde S_\delta$ for $\delta<1/2$.
$\mathbf{I_4}$: Finally on $I_4$ we have that \begin{equation*}
\left|\int_{I_4}f_{E,\eta}(u)du \right|\ll \frac{1}{\eta^2}\exp\left(-(E-\eta)^2/2\right)=o(1)
\end{equation*} uniformly on $\tilde S_\delta$ for $\delta<1$.
\end{proof}
In the final lemma of this section, we determine the existence and behavior of a solution to an equation that appears in our analysis.
\begin{lemma}\label{lemma:A:Existence of E}
Let $X_n >0$ and $\eta_n>0$ be such that $X_n \rightarrow\infty$ and $\eta_n\rightarrow 0$ as $n \rightarrow \infty$. Then for any fixed $n$ there exists a solution $E_n$ to the equation\begin{equation}\label{eq:A:Zero real part}
X_n-E_n-\Re m(E_n+i\eta_n)=0,
\end{equation} and $E_n=X_n+\frac{1}{X_n}+O\left(\frac{1}{X_n^2}\right)$. Moreover if $\sqrt{3\log n} \geq X_n\geq\sqrt{(2-\delta)\log n}$ for some $0<\delta<1/2$, $\eta_n= n^{-1/4}$, and $E_n$ is the solution to \eqref{eq:A:Zero real part}, then\begin{equation*}
X_n-E_n-\Re m(E_n+i\sqrt{2}\eta_n)=o(\eta_n^2).
\end{equation*}
\end{lemma}
\begin{proof}
Existence of a solution to \eqref{eq:A:Zero real part} follows from the intermediate value theorem. The asymptotic statement is then an immediate application of Lemma \ref{Alemma:FC Asymptotic Expansion}. The final statement follows from Lemma \ref{Alemma:Difference of real FC} after noting $E_n+i\eta_n \in \tilde S_\delta$.
\end{proof}
\section{Stability of the fixed point equation} \label{sec:stability}
In this section, we establish a stability property for approximate solutions of the fixed-point equation \eqref{def:fc} on $\tilde S_{\delta}$. Stability of similar equations was considered by the third author and Vu in \cite{MR3208886}, though the techniques used there are not applied here.
\begin{theorem}[Stability] \label{thm:stability}
For any $\delta \in (0,1)$, there exists $C > 0$ so that the following holds. Let $\{\varepsilon_n\}$ be a sequence of complex numbers so that $|\varepsilon_n| \leq 1$ for all $n$ and $\varepsilon_n = o(1)$. Assume $\tilde m_n$ is a complex number with $\Im \tilde m_n \geq 0$ which satisfies
\begin{equation}\label{eq:Approximate fixed point}
\tilde m_n=s(z+\tilde m_n)+\varepsilon_n
\end{equation}
for some $z \in \tilde S_\delta$. Then $\left|m(z)-\tilde m_n \right|\leq C|\varepsilon_n|$ for all $n > C$.
\end{theorem}
\begin{proof}
It follows from Lemma \ref{lemma:Gaussian Stieltjes is Bounded} that $\tilde m_n$ is bounded and from Lemma \ref{lemma:Bounds on m} that $|m(z)|\leq 1$. Let $\Omega_2$ and $v_2$ be defined as in Lemma \ref{lemma:LessthanoneLipschitz}. There exists a constant $N_\delta\in \mathbb{N}$ depending only on $\delta$ such that for $n\geq N_\delta$\begin{align*}
v_{2}(\Re(z+m(z)))&\leq Ce^{-(\Re(z+m(z)))^2/2}\\
&\leq Ce^{-(E-1)^2/2}\\
&=Ce^{-E^2/2}e^{E}e^{-1/2}\\
&\leq Ce^{-1/2}n^{-(1-\delta/2)}e^{\sqrt{3\log(n)}}\\
&\leq n^{-1/4}\\
&\leq \eta,
\end{align*} for some absolute constant $C>0$. Thus it follows that $z+m(z)\in\Omega_2$. A similar argument shows $z+\tilde m_n \in \Omega_2$. It then follows from Lemma \ref{lemma:LessthanoneLipschitz} that \begin{align*}
|m(z)-\tilde m_n|&\leq|s(z+m(z))-s(z+\tilde m_n)|+|\varepsilon_n|\\
&\leq\frac{1}{2}|m(z)-\tilde m_n|+|\varepsilon_n|.
\end{align*} A rearrangement completes the proof.
\end{proof}
\section{Concentration of the Stieltjes transform} \label{sec:concentrationofstieltjes}
This section is devoted to the following bound.
\begin{theorem}[Concentration] \label{thm:concentration}
There exists $\delta > 0$ so that
\[ \sup_{z = E + i \eta \in \tilde{S}_{\delta} \cup \hat{S}_{\delta}} n \eta \left| m_n(z) - \mathbb{E}_A m_n(z) \right| = o(1) \]
with overwhelming probability.
\end{theorem}
\begin{proof}
By Proposition \ref{prop:count}, there exists a $\delta' > 0$ such that
\[
\mathcal{E} = \left\{\left| \{1 \leq i \leq n: D_{ii} \geq \sqrt{(2 - \delta') \log n}\} \right| \leq n^{1/8} \right\}
\]
holds with overwhelming probability. Thus, it suffices to prove the theorem conditioned on the occurrence of $\mathcal{E}$. As $A$ is independent of $\mathcal{E}$, for the sake of brevity, we omit this conditioning from our notation. Furthermore, in the remainder of the proof, we will use $\mathbb{E}$ to mean $\mathbb{E}_A$.
We let $\delta = \delta'/2$ and begin by considering a fixed $z \in \tilde{S}_{\delta} \cup \hat{S}_\delta$.
Let $\mathbb{E}_k$ denote the conditional expectation with respect to the $\sigma$-field generated by $A_{ij}$ with $i,j > k$, so that $\mathbb{E}_n m_n(z) = \mathbb{E} m_n(z)$ and $\mathbb{E}_0 m_n(z) = m_n(z)$. Thus, $m_n(z) - \mathbb{E} m_n(z)$ can be written as the following telescopic sum
\[
m_n(z) - \mathbb{E} m_n(z) = \sum_{k=1}^n \left( \mathbb{E}_{k-1} m_n(z) - \mathbb{E}_k m_n(z) \right) := \sum_{k=1}^n \gamma_k.
\]
The following martingale argument is inspired by a similar calculation in \cite[Chapter 6]{MR2567175}. We let $L_k$ denote the matrix obtained from $L$ by removing the $k$-th row and column, $G_k$ its resolvent, and $a_k$ the $k$-th column of $L$ with the $k$-th entry removed.
We then have that
\begin{align*}
\gamma_k &= \frac{1}{n}(\mathbb{E}_{k-1} \tr (L-z)^{-1} - \mathbb{E}_k \tr (L-z)^{-1} ) \\
&= \frac{1}{n} \Big(\mathbb{E}_{k-1} \big[ \tr (L-z)^{-1} - \tr (L_k - z)^{-1} \big] - \mathbb{E}_k \big[ \tr (L-z)^{-1} - \tr (L_k - z)^{-1} \big] \Big) \\
&= \frac{1}{n} (\mathbb{E}_{k-1} - \mathbb{E}_k) \Bigg(\frac{a_k^* G_k^{2} a_k - \mathbb{E}_{a_k} a_k^* G_k^{2} a_k}{L_{kk} - z - a_k^* G_k a_k} + \frac{1 + \mathbb{E}_{a_k} a_k^* G_k^{2} a_k }{L_{kk} - z - a_k^* G_k a_k} \\
&\hspace{7cm}- \frac{1 + \mathbb{E}_{a_k} a_k^* G_k^{2} a_k}{L_{kk} - z - \mathbb{E}_{a_k} a_k^* G_k a_k} \Bigg) \\
&= \frac{1}{n} (\mathbb{E}_{k-1} - \mathbb{E}_k) \Bigg(\frac{a_k^* G_k^{2} a_k - \mathbb{E}_{a_k} a_k^* G_k^{2} a_k}{L_{kk} - z - a_k^* G_k a_k} \\
&\hspace{4cm} - \frac{(1 + \mathbb{E}_{a_k}a_k^* G_k^{2} a_k) (a_k^* G_k a_k - \mathbb{E}_{a_k} a_k^* G_k a_k)}{(L_{kk} - z - a_k^* G_k a_k)(L_{kk} - z - \mathbb{E}_{a_k} a_k^* G_k a_k)}\Bigg) \\
&= \frac{1}{n} (\mathbb{E}_{k-1} - \mathbb{E}_k) \Bigg(\frac{a_k^* G_k^{2} a_k - \frac{1}{n} \tr G_k^{2} }{L_{kk} - z - a_k^* G_k a_k} \\
&\hspace{4cm} - \frac{(1 + \frac{1}{n} \tr G_k^{2} ) (a_k^* G_k a_k - \frac{1}{n}\tr G_k )}{(L_{kk} - z - a_k^* G_k a_k)(L_{kk} - z - \frac{1}{n} \tr G_k )}\Bigg),
\end{align*}
where the third equality follows from Lemma \ref{klem:tracedifference} and $\mathbb{E}_{a_k}$ indicates expectation over the random variables in $a_k$.
We define the following quantities,
\[
\alpha_k = a_k^* G_k^{2} a_k - \frac{1}{n} \tr G_k^{2},
\]
\[
\beta_k = \frac{1}{ L_{kk} - z - a_k^* G_k a_k}, \quad \bar{\beta}_k = \frac{1}{L_{kk} - z - \frac{1}{n} \tr G_k},
\]
\[
b_k = \frac{1}{L_{kk} - z - \frac{1}{n} \mathbb{E}\tr G_k},
\]
\[
\delta_k = a_k^* G_k a_k - \frac{1}{n}\tr G_k, \quad \hat{\delta}_k = a_k^* G_k a_k - \frac{1}{n}\mathbb{E}\tr G_k,
\]
\[
\epsilon_k = 1 + \frac{1}{n} \tr G_k^{2} ,
\]
so that
\begin{align*}
m_n(z) - \mathbb{E} m_n(z) &= \frac{1}{n} \sum_{k=1}^n (\mathbb{E}_{k-1} - \mathbb{E}_k) \alpha_k \beta_k - \frac{1}{n} \sum_{k=1}^n (\mathbb{E}_{k-1} - \mathbb{E}_k) \epsilon_k \delta_k \beta_k \bar{\beta}_k \\
&:= S_1 - S_2.
\end{align*}
We will show that $n \eta |S_1| = o(1)$ and $n \eta |S_2| = o(1)$ with overwhelming probability uniformly for all $z \in \tilde{S}_\delta \cup \hat{S}_\delta$. This will be done via the method of moments. We begin with $S_1$. By Markov's inequality, it suffices to bound $\mathbb{E}| \eta n S_1|^{2 \ell} = \mathbb{E} |\eta \sum_{k =1}^n (\mathbb{E}_{k-1} - \mathbb{E}_k) \alpha_k \beta_k|^{2 \ell}$ for $\ell \in \mathbb{N}$.
By Lemma \ref{klem:burkholder2}, for any $\ell \geq 1$,
\[
\mathbb{E} |\eta \sum_{k =1}^n (\mathbb{E}_{k-1} - \mathbb{E}_k) \alpha_k \beta_k|^{2 \ell} \leq K_{\ell} \left( \mathbb{E} \left(\sum_{k=1}^n \mathbb{E}_{k} |\eta \alpha_k \beta_k|^2 \right)^{\ell} + \sum_{k=1}^n \mathbb{E} |\eta \alpha_k \beta_k|^{2 \ell} \right).
\]
We use $K_\ell$ to indicate a constant that only depends on $\ell$, but may change from line to line. Since $\Im a^*_k G_k a_k > 0$,
\[
|\beta_k| \leq \eta^{-1}.
\]
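Indeed, writing $z = E + i \eta$ and noting that $L_{kk}$ is real,
\[
|\beta_k| \leq \frac{1}{|\Im(L_{kk} - z - a_k^* G_k a_k)|} = \frac{1}{\eta + \Im(a_k^* G_k a_k)} \leq \frac{1}{\eta}.
\]
The same argument applied to $\bar{\beta}_k$ and $b_k$ shows that these quantities are also bounded by $\eta^{-1}$ in absolute value.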
Therefore,
\begin{equation} \label{keq:S1}
\mathbb{E} |\eta \sum_{k=1}^n (\mathbb{E}_{k-1} - \mathbb{E}_k) \alpha_k \beta_k|^{2 \ell} \leq K_{\ell} \left( \mathbb{E} \left(\sum_{k=1}^n \mathbb{E}_{k} |\eta \alpha_k \beta_k|^2 \right)^{\ell} + \sum_{k=1}^n \mathbb{E} |\alpha_k|^{2 \ell} \right).
\end{equation}
By Lemma \ref{klem:quadraticform},
\[
\mathbb{E}|\alpha_k|^{2 \ell} \leq K_{\ell} n^{-2 \ell} \mathbb{E}|\tr G_k^{2} G_k^{*2}|^\ell.
\]
Let $\mathcal{E}'$ denote the event that $\|A\| \leq 10$. We have already seen that $\mathcal{E}'$ occurs with overwhelming probability.
Note that on the event $\mathcal{E} \cap \mathcal{E}'$, by Cauchy's interlacing theorem and Weyl's inequality, the matrix obtained from $L$ by removing the $k$-th row and $k$-th column has at most $n^{1/4}$ eigenvalues larger than $\sqrt{(2- 2 \delta) \log n}$ (see Proposition \ref{prop:count}) for $n$ sufficiently large. Thus, we have
\begin{align} \label{keq:indicator}
\tr G_k^2 G_k^{*2} &= \left(\sum_{i=1}^n \frac{1}{((\lambda_i - E)^2 + \eta^2)^2} \right) \oindicator{\mathcal{E}'} + \left(\sum_{i=1}^n \frac{1}{((\lambda_i - E)^2 + \eta^2)^2} \right) \oindicator{\mathcal{E}'^c} \nonumber\\
&\leq n^{1/4} \eta^{-4} + O_\delta(n \log^{-2} n) + n \eta^{-4} \oindicator{\mathcal{E}'^c} \nonumber\\
&\leq 2 n^{5/4} + n^{2} \oindicator{\mathcal{E}'^c}.
\end{align}
The same argument establishes that with overwhelming probability,
\begin{equation} \label{keq:gkgk bound}
\tr G_k G_k^* \leq n + n^2 \oindicator{\mathcal{E}'^c}.
\end{equation}
We now have that
\begin{align*}
\mathbb{E}|\alpha_k|^{2 \ell} &\leq K_{\ell} n^{-2 \ell} \mathbb{E} | 2 n^{5/4} + n^{2} \oindicator{\mathcal{E}'^c}|^\ell \\
&\leq K_{\ell} n^{-2 \ell} (n^{5 \ell/4} + n^{2 \ell} \mathbb{E} \oindicator{\mathcal{E}'^c}) \\
&= K_{\ell} n^{-2 \ell} (n^{5 \ell/4} + n^{2 \ell} \P(\mathcal{E}'^c) ) \\
&\leq K_{\ell} n^{-3 \ell/4}
\end{align*}
where the last line follows from the observation that since $\mathcal{E}'$ occurs with overwhelming probability, $\P(\mathcal{E}'^c) = O_{\ell}(n^{-100 \ell})$, say.
Therefore, by equation \eqref{keq:S1},
\begin{equation*}
\mathbb{E} |\eta \sum_{k=1}^n (\mathbb{E}_{k-1} - \mathbb{E}_k) \alpha_k \beta_k|^{2 \ell} \leq K_{\ell} \left( \mathbb{E} \left(\sum_{k=1}^n \mathbb{E}_{k} |\eta \alpha_k \beta_k|^2 \right)^{\ell} + n^{-3\ell/4 + 1} \right).
\end{equation*}
We now direct our attention to the remaining sum on the right-hand side. Observe that
\begin{align*}
|\alpha_k \beta_k| &\leq \left| \frac{a_k^* G_k^{2} a_k}{ L_{kk} - z - a_k^* G_k a_k} \right|+ \left| \frac{\frac{1}{n} \tr G_k^{2}}{ L_{kk} - z - a_k^* G_k a_k} \right| \\
&\leq \frac{|a_k^* G_k^{2} a_k|}{ |\Im (L_{kk} - z - a_k^* G_k a_k)|} + \left|\frac{\frac{1}{n} \tr G_k^{2}}{ L_{kk} - z - a_k^* G_k a_k} \right| \\
&\leq \eta^{-1} + \eta^{-3},
\end{align*}
where the last inequality follows from the observation that
\[
|a_k^* G_k^{2} a_k| \leq 1 + a_k^* (L_k - z)^{-1} (L_k - \bar{z})^{-1} a_k = -\eta^{-1} \Im(L_{kk} - z - a^*_k G_k a_k)
\]
and $L_k$ denotes the matrix $L$ with the $k$-th row and $k$-th column removed.
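To justify the last display, note that since $L_k$ is real symmetric, its resolvent $G_k$ is a normal matrix, so by the spectral theorem, with $\mu_1, \dots, \mu_{n-1}$ denoting the eigenvalues of $L_k$ and $u_1, \ldots, u_{n-1}$ the corresponding orthonormal eigenvectors,
\[
a_k^* G_k G_k^* a_k = \sum_{i=1}^{n-1} \frac{|u_i^* a_k|^2}{|\mu_i - z|^2} = \eta^{-1} \Im(a_k^* G_k a_k).
\]
The inequality $|a_k^* G_k^2 a_k| \leq a_k^* G_k G_k^* a_k$ follows from the Cauchy--Schwarz inequality, and the final equality in the display holds since $\Im(L_{kk} - z) = -\eta$.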
Thus, for a fixed constant $K_0 > 0$,
\[
|\alpha_k \beta_k|^2 \leq 4 K_0^2 |\alpha_k|^2 + 4 \eta^{-6} \oindicator{|\beta_k| \geq 2 K_0}.
\]
We again have by Lemma \ref{klem:quadraticform} and \eqref{keq:indicator} that for some constant $K > 0$,
\[
\mathbb{E}_k |\alpha_k|^2 \leq \frac{K}{n^2} \mathbb{E}_k \tr G_k^2 G_k^{*2} \leq K (n^{-3/4} + \mathbb{E}_k \oindicator{\mathcal{E}'^c}) .
\]
We now return to bounding $S_1$. Let $I$ denote the set of indices $i$ such that $D_{ii} \geq \sqrt{(2 - \delta') \log n}$. On the event $\mathcal{E}$, $|I| \leq n^{1/8}$, so we have that
\begin{align} \label{keq:S1bound}
\P(n \eta |S_1| \geq \varepsilon) &\leq K_{\ell} \left( \mathbb{E} \left(\sum_{k=1}^n \mathbb{E}_{k} |\eta \alpha_k \beta_k|^2 \right)^{\ell} + n^{-3\ell/4 + 1}\right) \nonumber \\
&\leq K_{\ell} \left( \mathbb{E} \left(\sum_{k \in I} \mathbb{E}_{k} |\eta \alpha_k \beta_k|^2 \right)^{\ell} + \mathbb{E} \left(\sum_{k \notin I} \mathbb{E}_{k} |\eta \alpha_k \beta_k|^2 \right)^{\ell} +n^{-3\ell/4 + 1}\right) \nonumber \\
&\leq K_{\ell} \left( \mathbb{E} \left(\sum_{k \notin I} \mathbb{E}_{k} (4 K_0^2 |\alpha_k|^2 \eta^2 + 4 \eta^{-4} \oindicator{|\beta_k| \geq 2 K_0}) \right)^{\ell} + n^{-5\ell/8}\right) \nonumber\\
&\leq K_{\ell} \left( \eta^{2 \ell} n^{\ell/4} + \eta^{-4 \ell} n^{\ell-1} \sum_{k \notin I} \P(|\beta_k| \geq 2 K_0) + n^{-5\ell/8}\right) \nonumber\\
&\leq K_{\ell} \left( \eta^{-4 \ell} n^{\ell-1} \sum_{k \notin I} \P(|\beta_k| \geq 2 K_0) + n^{-\ell/4}\right)
\end{align}
where in the third inequality we have utilized the calculation
\begin{align} \label{keq:indicatorcalculation}
\mathbb{E} \left(\sum_{k \in I} \mathbb{E}_{k} |\eta \alpha_k \beta_k|^2 \right)^{\ell} &\leq \mathbb{E} \left(\sum_{k \in I} \mathbb{E}_{k} |\alpha_k|^2 \right)^{\ell} \nonumber \\
&\leq \mathbb{E} \left(\sum_{k \in I} K (n^{-3/4} + \mathbb{E}_k \oindicator{\mathcal{E}'^c}) \right)^{\ell} \nonumber \\
&\leq K_{\ell} \left(n^{-5 \ell/8} + \mathbb{E} \left(\sum_{k \in I} \mathbb{E}_k \oindicator{\mathcal{E}'^c} \right)^\ell \right) \nonumber \\
&\leq K_{\ell} \left(n^{-5 \ell/8} + n^{(\ell-1)/8} \mathbb{E} \left(\sum_{k \in I} \mathbb{E}_k \oindicator{\mathcal{E}'^c} \right) \right) \nonumber \\
&\leq K_{\ell} \left(n^{-5 \ell/8} + n^{(\ell-1)/8} \sum_{k \in I} \P(\mathcal{E}'^c) \right) \nonumber \\
&\leq K_{\ell} n^{-5 \ell/8}.
\end{align}
A nearly identical calculation justifies the fourth inequality in \eqref{keq:S1bound}.
To control $\P(|\beta_k| \geq 2 K_0)$ in \eqref{keq:S1bound}, we make the following observation. For $k \notin I$,
\[
|b_k| \leq \frac{1}{|\Re(L_{kk} - z- \frac{1}{n} \mathbb{E}\tr G_k)|} \leq \frac{1}{(\delta/10)\sqrt{ \log n} - |\frac{1}{n} \mathbb{E}\tr G_k|} \leq K_0.
\]
Therefore, if $|\beta_k| \geq 2 K_0$ then,
\begin{align*}
|L_{kk} - z - a_k^* G_k a_k| &= |L_{kk} -z - \frac{1}{n} \mathbb{E}\tr G_k + \frac{1}{n} \mathbb{E}\tr G_k - a_k^* G_k a_k| \\
&= |b_k^{-1} - \hat{\delta}_k| \\
&\leq \frac{1}{2 K_0}
\end{align*}
which, together with $|b_k^{-1}| \geq 1/K_0$, implies that $|\hat{\delta}_k| \geq \frac{1}{2 K_0}$.
Thus, continuing from \eqref{keq:S1bound},
\[
\P(n \eta |S_1| \geq \varepsilon) \leq K_{\ell} \left( \eta^{-4 \ell} n^{\ell-1} \sum_{k \notin I} \P(|\hat{\delta}_k| \geq 1/(2 K_0)) + n^{-\ell/4} \right).
\]
A straightforward calculation shows that
\begin{equation}
\mathbb{E}|\delta_k - \hat{\delta}_k|^{8 \ell} \leq K_{\ell} n^{-4 \ell}.
\end{equation}
As the proof is similar to the many calculations we have done above, it is omitted. One can find the analogous argument in Section 6.2.3 of \cite{MR2567175}.
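For orientation, the key identity behind the omitted calculation is
\[
\delta_k - \hat{\delta}_k = \frac{1}{n} \left( \mathbb{E} \tr G_k - \tr G_k \right),
\]
so the bound reduces to controlling the fluctuations of $\tr G_k$ about its mean; this can be done with the same telescoping decomposition and Lemma \ref{klem:burkholder2} applied to $\tr G_k$ in place of $\tr (L-z)^{-1}$.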
By Markov's inequality, Lemma \ref{klem:quadraticform}, and equation \eqref{keq:gkgk bound},
\begin{align*}
\P\left(|\hat{\delta}_k| \geq \frac{1}{2 K_0}\right) &\leq (2 K_0)^{8 \ell} \, \mathbb{E} |\hat{\delta}_k|^{8\ell} \\
&\leq K_{\ell} (\mathbb{E} |\hat{\delta}_k - \delta_k|^{8\ell} + \mathbb{E} |\delta_k|^{8 \ell}) \\
&\leq K_{\ell} (n^{-4 \ell} + \mathbb{E} |\delta_k|^{8 \ell}) \\
&\leq K_{\ell} (n^{-4 \ell} + n^{-8 \ell} \mathbb{E} |\tr G_k G_k^*|^{4 \ell}) \\
&\leq K_{\ell} n^{-4 \ell}.
\end{align*}
We then have that
\[
\P(n \eta |S_1| \geq \varepsilon) \leq K_{\ell} \left( \eta^{-4 \ell} n^{-3\ell} + n^{-\ell/4} \right) \leq K_{\ell} n^{-\ell/4}.
\]
As $\ell$ can be arbitrarily large, we have established that $n \eta |S_1| = o(1)$ uniformly in $z$, with overwhelming probability.
We now show that $n \eta |S_2| = o(1)$ with overwhelming probability. Again, the argument is quite similar to the above, so we will be more brief. By Markov's inequality,
\[
\P(n \eta |S_2| \geq \varepsilon) \leq \frac{\mathbb{E} \Bigg| \eta \sum_{k=1}^n (\mathbb{E}_{k-1} - \mathbb{E}_k) \epsilon_k \delta_k \beta_k \bar{\beta}_k \Bigg|^{2 \ell} }{\varepsilon^{2 \ell}}.
\]
By the same argument as in \eqref{keq:indicator}, we have that $|\epsilon_k| \leq 2 + n^2 \oindicator{\mathcal{E}'^c}$. As $\mathcal{E}'$ occurs with overwhelming probability, the $\oindicator{\mathcal{E}'^c}$ term is negligible, as seen from \eqref{keq:indicatorcalculation}, and we omit it in the remainder of this calculation.
We continue with
\begin{align*}
\mathbb{E} \Bigg| \eta \sum_{k=1}^n &(\mathbb{E}_{k-1} - \mathbb{E}_k) \epsilon_k \delta_k \beta_k \bar{\beta}_k \Bigg|^{2 \ell} \\
&\leq K_{\ell} \left( \mathbb{E} \left( \sum_{k=1}^n \mathbb{E}_k |\eta \epsilon_k \delta_k \beta_k \bar{\beta}_k|^2 \right)^\ell + \sum_{k=1}^n \mathbb{E} |\eta \epsilon_k \delta_k \beta_k \bar{\beta}_k|^{2 \ell} \right) \\
&\leq K_{\ell} \left( \mathbb{E} \left( \sum_{k=1}^n \mathbb{E}_k |\eta \delta_k \beta_k \bar{\beta}_k|^2 \right)^\ell + \eta^{-2 \ell} n^{-\ell + 1} \right) \\
&\leq K_{\ell} \left( \ \mathbb{E} \left(\sum_{k \notin I} \mathbb{E}_k |\eta \delta_k \beta_k \bar{\beta}_k|^2 \right)^\ell+ \eta^{-2 \ell} n^{-7\ell/8 } \right) \\
&\leq K_{\ell} \left( \ \mathbb{E} \left(\sum_{k \notin I} \mathbb{E}_k |\eta \delta_k \beta_k \bar{\beta}_k|^2 \right)^\ell+ n^{-3\ell/8 } \right).
\end{align*}
The summands on the right-hand side can be bounded as follows:
\begin{align*}
|\delta_k \beta_k \bar{\beta}_k|^2 &\leq (2 K_0)^4 |\delta_k|^2 + \eta^{-4} |\delta_k|^2 \oindicator{|\beta_k \bar{\beta}_k| \geq (2 K_0)^2} \\
&\leq (2 K_0)^4 |\delta_k|^2 + \eta^{-4} |\delta_k|^2 \oindicator{|\delta_k| \geq 1/(4 K_0) \text{ or } |\hat{\delta}_k| \geq 1/(4 K_0)}.
\end{align*}
The final inequality is due to the observation that on the event that
\[
|L_{kk} - z - a_k^* G_k a_k| \, \Big|L_{kk} - z - \frac{1}{n} \tr G_k \Big| \leq (2 K_0)^{-2},
\]
either
\[
|L_{kk} - z - a_k^* G_k a_k| \leq (2 K_0)^{-1}
\]
or
\[
\Big|L_{kk} - z - \frac{1}{n} \tr G_k \Big| \leq (2 K_0)^{-1}.
\]
If $|L_{kk} - z - a_k^* G_k a_k| \leq (2 K_0)^{-1}$, then since $|b_k| \leq K_0$ for $k \notin I$, we have that $|\hat{\delta}_k| \geq 1/(2 K_0)$ as before. On the other hand, if $|L_{kk} - z - \frac{1}{n} \tr G_k| \leq (2 K_0)^{-1}$, then, since $L_{kk} - z - \frac{1}{n} \tr G_k = b_k^{-1} - (\hat{\delta}_k - \delta_k)$,
\[
|b_k^{-1} - (\hat{\delta}_k - \delta_k)| \leq \frac{1}{2 K_0},
\]
which implies that $|\hat{\delta}_k - \delta_k| \geq 1/(2 K_0)$, and hence $|\delta_k| \geq 1/(4 K_0)$ or $|\hat{\delta}_k| \geq 1/(4 K_0)$. Returning to the moment calculation, we have
\begin{align*}
\mathbb{E} &\left| \eta \sum_{k=1}^n (\mathbb{E}_{k-1} - \mathbb{E}_k) \epsilon_k \delta_k \beta_k \bar{\beta}_k \right|^{2 \ell} \\
&\leq K_{\ell} \left( \mathbb{E} \left(\sum_{k \notin I} \mathbb{E}_k \eta^2 |\delta_k|^2 + \mathbb{E}_k \eta^{-2} |\delta_k|^2 \oindicator{|\delta_k| \text{ or } |\hat{\delta}_k| \geq 1/(4 K_0)}\right)^\ell+ n^{-3\ell/8 } \right) \\
&\leq K_{\ell} \left( \eta^{2 \ell} + \mathbb{E} \left(\sum_{k \notin I} \mathbb{E}_k \eta^{-2} |\delta_k|^2 \oindicator{|\delta_k| \text{ or } |\hat{\delta}_k| \geq 1/(4 K_0)}\right)^\ell+ n^{-3\ell/8 } \right) \\
&\leq K_{\ell} \left( \eta^{-2 \ell} n^{\ell -1} \sum_{k \notin I} \mathbb{E} |\delta_k|^{2 \ell} \oindicator{|\delta_k| \text{ or } |\hat{\delta}_k| \geq 1/(4 K_0)} +n^{-3\ell/8 } \right) \\
&\leq K_{\ell} \Bigg( \eta^{-2 \ell} n^{\ell -1} \sum_{k \notin I} (\mathbb{E} |\delta_k|^{4 \ell})^{1/2} \left(\P(|\delta_k| \geq 1/(4 K_0)) + \P( |\hat{\delta}_k| \geq 1/(4 K_0))\right)^{1/2} \\
&\qquad \qquad +n^{-3\ell/8 } \Bigg) \\
&\leq K_{\ell} \Bigg( \eta^{-2 \ell} n^{\ell -1} \sum_{k \notin I} \sqrt{\mathbb{E} |\delta_k|^{4 \ell}} \sqrt{\mathbb{E}|\delta_k|^{4 \ell} +\mathbb{E}|\hat{\delta}_k|^{4 \ell} } +n^{-3\ell/8 } \Bigg) \\
&\leq K_{\ell} \left( \eta^{-2 \ell} n^{-\ell} + n^{-3\ell/8 } \right) \\
&\leq K_{\ell} n^{-3\ell/8 }.
\end{align*}
Thus, we have shown that for a fixed $z \in \tilde{S}_\delta \cup \hat{S}_\delta$, $n \eta | m_n(z) - \mathbb{E} m_n(z)| = o(1)$ with overwhelming probability where the probability and the $o(1)$ error are uniform in $z$. To extend this result to all $z \in \tilde{S}_\delta \cup \hat S_{\delta}$, we observe that $m_n(z)$ is $n^2$-Lipschitz so it suffices to establish the result for an $n^{-3}$-net of $\tilde{S}_\delta \cup \hat{S}_\delta$. Such a net will be of size at most $O(n^4)$ and since an event that holds with overwhelming probability can tolerate a polynomial-sized union bound, the result is proved.
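For completeness, the Lipschitz bound invoked for the net argument follows from the elementary resolvent identity: for $z, z' \in \tilde{S}_\delta \cup \hat{S}_\delta$,
\[
|m_n(z) - m_n(z')| = \left| \frac{1}{n} \sum_{j=1}^n \frac{z - z'}{(\lambda_j(L) - z)(\lambda_j(L) - z')} \right| \leq \frac{|z - z'|}{\Im(z) \, \Im(z')} \leq n^{2} |z - z'|,
\]
where the last inequality uses $\eta \geq n^{-1/4}$ on the spectral domain (so the Lipschitz constant is in fact at most $\sqrt{n} \leq n^2$).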
\end{proof}
\section{Proof of Theorem \ref{thm:stieltjes_transforms}} \label{sec:proofofmaintechnical}
We now turn to the proof of Theorem \ref{thm:stieltjes_transforms}. We begin with some preliminary results and notation we will need for the proof. For convenience, we establish the result for $\tilde{S}_\delta$ as an identical argument applies to $\hat{S}_\delta$.
Fix $\varepsilon \in (0, 1/100)$. Since $A$ is a GOE matrix, it follows from standard norm bounds (see, for example, \cite{MR1863696,MR2963170} and references therein) that $\|A\| \leq 3$ with overwhelming probability.
Thus, by Proposition \ref{prop:count} and Weyl's perturbation theorem (see, for instance, Corollary III.2.6 in \cite{MR1477662}), there exists $\delta > 0$ so that the event
\begin{equation} \label{eq:s:Leig}
\left\{ | \{ 1 \leq i \leq n : \lambda_i(L) \geq \sqrt{(2 - 2\delta) \log n} \}| = O(n^{\varepsilon}) \right\}
\end{equation}
holds with overwhelming probability. Moreover, by taking $\delta$ smaller if needed, we can also ensure the same value of $\delta$ applies to the bounds in Theorem \ref{thm:concentration} and Proposition \ref{prop:count}. For convenience, we define the event
\begin{align*}
\mathcal{E} := &\left\{ | \{ 1 \leq i \leq n : \lambda_i(L) \geq \sqrt{(2 - 2\delta) \log n} \}| = O(n^{\varepsilon}) \right\} \\
&\qquad \bigcap \left\{ \sup_{z = E + i \eta \in \tilde S_{\delta}} n \eta \left| m_n(z) - \mathbb{E}_A m_n(z) \right| = o(1) \right\} \\
&\qquad \bigcap \left\{\left| \{ 1 \leq i \leq n : D_{ii} \geq \sqrt{(2- 2\delta) \log n} \} \right| = O_{\varepsilon}(n^{\varepsilon}) \right\}
\end{align*}
to be the intersection of the events from \eqref{eq:s:Leig}, Theorem \ref{thm:concentration}, and Proposition \ref{prop:count}. It follows that $\mathcal{E}$ holds with overwhelming probability, and we will work on this event throughout the proof.
For the remainder of the proof, we consider $\varepsilon$ and $\delta$ fixed. We will allow implicit constants in our asymptotic notation to depend on these parameters without denoting this dependence.
Since the event $\mathcal{E}$ holds with overwhelming probability, we will often be able to insert or remove the indicator function $\oindicator{\mathcal{E}}$ into the expected value with only negligible error. For example, using the naive bound, $\sup_{i,j} |G_{ij}(z)| \leq \|G(z)\| \leq \frac{1}{\eta}$, we have
\begin{align*}
\mathbb{E} \sum_{i,j} G_{ij}(z) = \mathbb{E} \sum_{i,j} G_{ij}(z) \oindicator{\mathcal{E}} + \mathbb{E} \sum_{i,j} G_{ij}(z) \oindicator{\mathcal{E}^c},
\end{align*}
where
\[ \left| \mathbb{E} \sum_{i,j} G_{ij}(z) \oindicator{\mathcal{E}^c} \right| \leq \frac{n^2}{\eta} \mathbb{P}(\mathcal{E}^c) = O_p \left( \frac{1}{n^p \eta} \right) \]
for any $p > 0$. Here, we use the convention that all indices in the sums are over $[n]$ unless otherwise noted. In particular, taking $p$ sufficiently large shows that
\[ \mathbb{E} \sum_{i,j} G_{ij}(z) = \mathbb{E} \sum_{i,j} G_{ij}(z) \oindicator{\mathcal{E}} + o \left( \frac{1}{n \eta} \right) \]
for any $z$ in the spectral domain $S_\delta$.
In a similar way, one can apply the same procedure to the conditional expectation $\mathbb{E}_A \sum_{i,j} G_{ij}(z)$, where the error term
\[ \mathbb{E}_A \sum_{i,j} G_{ij}(z) \oindicator{\mathcal{E}^c} \]
can be bounded with overwhelming probability using an $L^1$-norm bound as above and Markov's inequality.
We will often insert and remove indicator functions of events that hold with overwhelming probability in this way. As the arguments are all of a similar format, we will not always show all of the details, and we refer to this procedure as ``inserting the indicator function using naive bounds.''
Theorem \ref{thm:stieltjes_transforms} focuses on the spectral domain $\tilde S_{\delta} \cup \hat S_{\delta}$. However, it will sometimes be convenient to work on the larger spectral domain $S_{\delta}$. We begin with some initial bounds for $m_n$ in the spectral domain $S_\delta$.
\begin{lemma} \label{lemma:s:mn}
With overwhelming probability,
\[ \sup_{z \in S_{\delta}} \mathbb{E} |m_n(z)| + \sup_{z \in S_{\delta}} \mathbb{E}_A |m_n(z)| + \sup_{z \in S_{\delta}} |m_n(z)| = O\left( \frac{1}{ \sqrt{\log n}} \right). \]
\end{lemma}
\begin{proof}
We start by bounding
\[ \sup_{z \in S_{\delta}} \mathbb{E} |m_n(z)|. \]
By inserting the indicator function of $\mathcal{E}$ using naive bounds, it suffices to bound
\[ \sup_{z \in S_{\delta}} \mathbb{E} |m_n(z)\oindicator{\mathcal{E}}|. \]
For $z = E + i \eta \in S_\delta$, we have the uniform bound
\begin{align*}
\mathbb{E} \left| m_n(z)\oindicator{\mathcal{E}}\right| &\leq \mathbb{E} \frac{1}{n} \sum_{j=1}^n \frac{1}{ \sqrt{ ( \lambda_j(L) - E)^2 + \eta^2 }} \oindicator{\mathcal{E}}\\
&\leq \mathbb{E} \frac{1}{n} \sum_{j : \lambda_j(L) \geq \sqrt{(2 - 2\delta)\log n}} \frac{1}{\eta}\oindicator{\mathcal{E}} + \mathbb{E} \frac{1}{n} \sum_{j : \lambda_j(L) < \sqrt{(2 - 2\delta)\log n}} \frac{1}{ |\lambda_j(L) - E| }\oindicator{\mathcal{E}} \\
&\ll \frac{n^\varepsilon}{n \eta} + \frac{n}{n \sqrt{\log n}},
\end{align*}
where in the first sum we used that, on the event $\mathcal{E}$, there are only $O(n^{\varepsilon})$ terms in the sum and in the second sum we bounded the number of summands by $n$ and used that $\lambda_j(L) < \sqrt{(2 - 2\delta)\log n}$ while $E \geq \sqrt{(2 - \delta) \log n}$. Thus, since $\eta \geq n^{-1/4}$ and $\varepsilon < 1/100$, we conclude that
\[ \sup_{z \in S_{\delta}} \mathbb{E} |m_n(z)| = O \left( \frac{1}{\sqrt{\log n}} \right). \]
The same method also bounds $\sup_{z \in S_{\delta}} \mathbb{E}_A |m_n(z)|$, where we can again use naive bounds to insert the indicator function of $\mathcal{E}$ into the conditional expectation. In this case, one also needs to use a net argument and continuity to establish the result for all $z \in S_{\delta}$; we omit the details. The bound for $\sup_{z \in S_{\delta}} |m_n(z)|$ is also similar. However, in this case, we do not have any expectation and can simply repeat the arguments above by working on the event $\mathcal{E}$, which holds with overwhelming probability; we omit the details.
\end{proof}
With Theorem \ref{thm:concentration} and Lemma \ref{lemma:s:mn} in hand, we can now complete the proof of Theorem \ref{thm:stieltjes_transforms}.
\begin{proof}[Proof of Theorem \ref{thm:stieltjes_transforms}]
For $z \in S_{\delta}$, we define
\[ G := G(z) \]
and
\[ Q := Q(z + \mathbb{E}_A m_n(z)), \]
where $G(z) := (L-z)^{-1}$ is the resolvent of $L$ and $Q(z) := (D - z)^{-1}$ is the resolvent of $D$. In particular, $s_n(z + \mathbb{E}_A m_n(z)) = \frac{1}{n} \tr Q = \mathbb{E}_A \frac{1}{n} \tr Q$ since $Q$ does not depend on $A$. By the resolvent identity \eqref{eq:resolvent}, we have
\begin{align} \label{eq:s:res_start}
\mathbb{E}_A m_n(z) - s_n(z + \mathbb{E}_A m_n(z)) &= \mathbb{E}_A \frac{1}{n} \tr G - \frac{1}{n} \tr Q \\
&= \mathbb{E}_A \frac{1}{n} \tr (GAQ) - \mathbb{E}_A m_n(z) \mathbb{E}_A \frac{1}{n} \tr (GQ). \nonumber
\end{align}
We will apply the Gaussian integration by parts formula \eqref{eq:ibp} to
\begin{equation} \label{eq:s:ibp}
\mathbb{E}_A \frac{1}{n} \tr (GAQ) = \frac{1}{n} \sum_{i,j} Q_{ii} \mathbb{E}_A [ G_{ij} A_{ji} ],
\end{equation}
where we used the fact that $Q$ is a diagonal matrix and does not depend on $A$ (so we can pull it out of the conditional expectation). A simple computation involving the resolvent identity \eqref{eq:resolvent} shows that
\[ \frac{ \partial G_{kl} } { \partial A_{ij} } = \begin{cases}
G_{ki} G_{jl} + G_{kj} G_{il}, & \text{ if } i \neq j, \\
G_{ki} G_{jl}, & \text{ if } i = j.
\end{cases}
\]
Thus, returning to \eqref{eq:s:ibp} and applying \eqref{eq:ibp}, we obtain
\[ \mathbb{E}_A \frac{1}{n} \tr (GAQ) = \frac{1}{n^2} \mathbb{E}_A \sum_{i,j} Q_{ii} G_{ij}^2 + \frac{1}{n} \mathbb{E}_A m_n(z) \tr (QG), \]
which when combined with \eqref{eq:s:res_start} yields
\begin{align} \label{eq:s:res_mid}
\mathbb{E}_A m_n(z) - s_n(z + \mathbb{E}_A m_n(z)) &= \frac{1}{n^2} \mathbb{E}_A \sum_{i,j} Q_{ii} G_{ij}^2 + \frac{1}{n} \mathbb{E}_A m_n(z) \tr (QG) \\
&\qquad\qquad - \mathbb{E}_A m_n(z) \mathbb{E}_A \frac{1}{n} \tr (GQ). \nonumber
\end{align}
We aim to bound the terms on the right-hand side uniformly for $z \in \tilde{S}_\delta$.
For the first term, we apply the Ward identity \eqref{eq:ward} to get
\begin{align*}
\left| \frac{1}{n^2} \mathbb{E}_A \sum_{i,j} Q_{ii} G_{ij}^2 \right| &\leq \mathbb{E}_A \frac{1}{n^2} \sum_{i} | Q_{ii} | \sum_{j} |G_{ij}|^2 \\
&\leq \mathbb{E}_A \frac{1}{n^2 \eta} \sum_{i} |Q_{ii}| \Im G_{ii} \\
&\leq \mathbb{E}_A \frac{1}{n^2 \eta} \sum_{i} \frac{1}{ | D_{ii} - z - \mathbb{E}_A m_n(z) | } \Im G_{ii}.
\end{align*}
Define the event
\begin{equation} \label{eq:s:F}
\mathcal{F} := \left\{ \sup_{z \in \tilde S_{\delta}} \left| \mathbb{E}_A m_n(z) \right| = O \left( \frac{1}{\sqrt{ \log n}} \right) \right\},
\end{equation}
and note that, by Lemma \ref{lemma:s:mn}, $\mathcal{F}$ holds with overwhelming probability. By using naive bounds, we can insert the indicator function of the event $\mathcal{E} \cap \mathcal{F}$ into the above equations to obtain
\begin{align*}
&\mathbb{E}_A \frac{1}{n^2 \eta} \sum_{i} \frac{1}{ | D_{ii} - z - \mathbb{E}_A m_n(z) | } \Im G_{ii} \\
&= \mathbb{E}_A \frac{1}{n^2 \eta} \sum_{i} \frac{1}{ | D_{ii} - z - \mathbb{E}_A m_n(z) | } \Im G_{ii} \oindicator{\mathcal{E} \cap \mathcal{F}} + o \left( \frac{1}{n \eta} \right) \\
&= \mathbb{E}_A \frac{1}{n^2 \eta} \sum_{i: D_{ii} \geq \sqrt{ (2 - 2\delta) \log n}} \frac{1}{ | D_{ii} - z - \mathbb{E}_A m_n(z) | } \Im G_{ii} \oindicator{\mathcal{E} \cap \mathcal{F}} \\
&\qquad + \mathbb{E}_A \frac{1}{n^2 \eta}\sum_{i: D_{ii} < \sqrt{(2 - 2 \delta) \log n}} \frac{1}{ | D_{ii} - z - \mathbb{E}_A m_n(z) | } \Im G_{ii} \oindicator{\mathcal{E} \cap \mathcal{F}} + o \left( \frac{1}{n \eta} \right)
\end{align*}
with overwhelming probability.
Observe that since $\Re(z) \geq \sqrt{(2 - \delta) \log n}$, one has
\[ | D_{ii} - z - \mathbb{E}_A m_n(z) | \gg \sqrt{\log n} \]
on the event $\mathcal{F}$ whenever $D_{ii} < \sqrt{(2 - 2 \delta) \log n}$. Thus, we have
\begin{align*}
\mathbb{E}_A \frac{1}{n^2 \eta} &\sum_{i: D_{ii} < \sqrt{(2 - 2 \delta) \log n}} \frac{1}{ | D_{ii} - z - \mathbb{E}_A m_n(z) | } \Im G_{ii} \oindicator{\mathcal{E} \cap \mathcal{F}} \\
&\ll \frac{1}{n \eta \sqrt{\log n}} \mathbb{E}_A \Im m_n(z) \oindicator{\mathcal{E} \cap \mathcal{F}} \\
&= o \left( \frac{1}{n \eta} \right).
\end{align*}
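Quantitatively, the lower bound on the denominator used in the display above follows from the separation of scales: on the event $\mathcal{F}$, for $D_{ii} < \sqrt{(2-2\delta)\log n}$ and $E \geq \sqrt{(2-\delta)\log n}$,
\[
| D_{ii} - z - \mathbb{E}_A m_n(z) | \geq E - D_{ii} - |\mathbb{E}_A m_n(z)| \geq \left( \sqrt{2-\delta} - \sqrt{2-2\delta} \right) \sqrt{\log n} - O\left( \frac{1}{\sqrt{\log n}} \right) \gg \sqrt{\log n}.
\]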
For the other term, we apply the naive bounds $\|G\| \leq \frac{1}{ \eta}$ and $ \frac{1}{ | D_{ii} - z - \mathbb{E}_A m_n(z) | } \leq \frac{1}{\eta}$ to obtain
\begin{align*}
\mathbb{E}_A \frac{1}{n^2 \eta} \sum_{i: D_{ii} \geq \sqrt{ (2 - 2\delta) \log n}} \frac{1}{ | D_{ii} - z - \mathbb{E}_A m_n(z) | } \Im G_{ii} \oindicator{\mathcal{E} \cap \mathcal{F}} \ll \frac{n^\varepsilon}{n^2 \eta^3} = o \left( \frac{1}{n \eta} \right)
\end{align*}
uniformly for $z \in \tilde S_\delta$.
Combining the terms above, we conclude that, with overwhelming probability,
\[ \frac{1}{n^2} \mathbb{E}_A \sum_{i,j} Q_{ii} G_{ij}^2 = o \left( \frac{1}{n \eta} \right) \]
uniformly for $z \in \tilde S_\delta$. In view of \eqref{eq:s:res_mid}, it remains to show
\begin{equation} \label{eq:s:res_end}
\frac{1}{n} \mathbb{E}_A m_n(z) \tr (QG) - \mathbb{E}_A m_n(z) \mathbb{E}_A \frac{1}{n} \tr (GQ) = o \left( \frac{1}{n \eta} \right)
\end{equation}
with overwhelming probability, uniformly for $z \in \tilde S_{\delta}$.
We will use Theorem \ref{thm:concentration} to establish \eqref{eq:s:res_end}. Indeed, by using naive bounds, we can insert the indicator function of the event $\mathcal{E} \cap \mathcal{F}$, and it suffices to bound
\[ \sup_{z \in \tilde{S}_\delta} \mathbb{E}_A \left| \frac{1}{n} m_n(z) \tr (QG) - \mathbb{E}_A \left[ m_n(z)\right] \frac{1}{n} \tr (GQ) \right|\oindicator{\mathcal{E} \cap \mathcal{F}}. \]
In order to bound this term, we will need the following result.
\begin{lemma} \label{lem:s:GQbnd}
One has
\[ \sup_{z \in \tilde S_{\delta}} \left| \frac{1}{n} \tr (GQ) \right|\oindicator{\mathcal{E} \cap \mathcal{F}} = O\left(\frac{1}{\log n} \right) \]
with probability $1$.
\end{lemma}
We prove Lemma \ref{lem:s:GQbnd} below, but let us first complete the proof of Theorem \ref{thm:stieltjes_transforms}. Indeed, applying Lemma \ref{lem:s:GQbnd}, we obtain
\begin{align*}
\mathbb{E}_A &\left[ \left| \frac{1}{n} m_n(z) \tr (QG) - \mathbb{E}_A \left[ m_n(z)\right] \frac{1}{n} \tr (GQ) \right|\oindicator{\mathcal{E} \cap \mathcal{F}} \right] \\
&\qquad\leq \mathbb{E}_A \left[ \left| m_n(z) - \mathbb{E}_A \left[ m_n(z)\right] \right| \left| \frac{1}{n} \tr (GQ) \right|\oindicator{\mathcal{E} \cap \mathcal{F}} \right] \\
&\qquad\ll \frac{1}{\log n} \mathbb{E}_A \left[ \left| m_n(z) - \mathbb{E}_A \left[ m_n(z)\right] \right| \oindicator{\mathcal{E} \cap \mathcal{F}} \right]
\end{align*}
uniformly for $z \in \tilde S_{\delta}$. Applying Theorem \ref{thm:concentration} (which we included in the event $\mathcal{E}$ for just this purpose) establishes \eqref{eq:s:res_end}. In view of \eqref{eq:s:res_mid}, this completes the proof of Theorem \ref{thm:stieltjes_transforms}.
\end{proof}
We now give the proof of Lemma \ref{lem:s:GQbnd}.
\begin{proof}[Proof of Lemma \ref{lem:s:GQbnd}]
By the Cauchy--Schwarz inequality, it suffices to show
\begin{equation} \label{eq:s:GGast}
\sup_{z \in \tilde S_{\delta}} \frac{1}{n} \tr (G G^\ast)\oindicator{\mathcal{E} \cap \mathcal{F}} = O\left(\frac{1}{\log n} \right)
\end{equation}
and
\begin{equation} \label{eq:s:QQast}
\sup_{z \in \tilde S_{\delta}} \frac{1}{n} \tr (Q Q^\ast)\oindicator{\mathcal{E} \cap \mathcal{F}} = O\left(\frac{1}{\log n} \right)
\end{equation}
with probability $1$. Fix $z = E + i \eta \in \tilde S_{\delta}$; all of our bounds will be uniform in $z$ and hold with probability $1$.
For the term on the left-hand side of \eqref{eq:s:GGast}, we apply the spectral theorem to obtain
\begin{align*}
\frac{1}{n} \tr (G G^\ast)\oindicator{\mathcal{E} \cap \mathcal{F}} &= \frac{1}{n} \sum_{j} \frac{1}{ (\lambda_j(L) - E)^2 + \eta^2 }\oindicator{\mathcal{E} \cap \mathcal{F}} \\
&= \frac{1}{n} \sum_{j: \lambda_j(L) \geq \sqrt{(2-2\delta) \log n}} \frac{1}{ (\lambda_j(L) - E)^2 + \eta^2 }\oindicator{\mathcal{E} \cap \mathcal{F}} \\
&\qquad\qquad+ \frac{1}{n} \sum_{j: \lambda_j(L) < \sqrt{(2-2\delta) \log n}} \frac{1}{ (\lambda_j(L) - E)^2 + \eta^2 }\oindicator{\mathcal{E} \cap \mathcal{F}}.
\end{align*}
Observe that, on the event $\mathcal{E}$, the first sum only contains $O(n^\varepsilon)$ terms, and so using a naive bound we obtain
\[ \frac{1}{n} \sum_{j: \lambda_j(L) \geq \sqrt{(2-2\delta) \log n}} \frac{1}{ (\lambda_j(L) - E)^2 + \eta^2 }\oindicator{\mathcal{E} \cap \mathcal{F}} \ll \frac{n^{\varepsilon}}{n \eta^2} = O\left(\frac{1}{\log n} \right) \]
with probability $1$. For the second sum, we have
\[ \frac{1}{ (\lambda_j(L) - E)^2 + \eta^2 } \ll \frac{1}{\log n} \]
uniformly for $z \in \tilde S_{\delta}$ since $\lambda_j(L) < \sqrt{(2 - 2\delta) \log n}$ and $E \geq \sqrt{(2 - \delta) \log n}$. Thus, we find
\[ \frac{1}{n} \sum_{j: \lambda_j(L) < \sqrt{(2-2\delta) \log n}} \frac{1}{ (\lambda_j(L) - E)^2 + \eta^2 } \ll \frac{n}{n \log n} = O\left(\frac{1}{\log n} \right) \]
with probability $1$, where we bounded the total number of terms in this sum by $n$. This completes the proof of \eqref{eq:s:GGast}.
The proof of \eqref{eq:s:QQast} is similar. Since $Q$ is a diagonal matrix, we have
\begin{align*}
\frac{1}{n} \tr (Q Q^\ast)\oindicator{\mathcal{E} \cap \mathcal{F}} &= \frac{1}{n} \sum_{j} \frac{ 1} { |D_{jj} - z - \mathbb{E}_A m_n(z)|^2} \oindicator{\mathcal{E} \cap \mathcal{F}} \\
&= \frac{1}{n} \sum_{j: D_{jj} \geq \sqrt{(2 - 2\delta) \log n}} \frac{ 1} { |D_{jj} - z - \mathbb{E}_A m_n(z)|^2} \oindicator{\mathcal{E} \cap \mathcal{F}} \\
&\qquad \qquad + \frac{1}{n} \sum_{j: D_{jj} < \sqrt{(2 - 2\delta) \log n}} \frac{ 1} { |D_{jj} - z - \mathbb{E}_A m_n(z)|^2} \oindicator{\mathcal{E} \cap \mathcal{F}}.
\end{align*}
On the event $\mathcal{E}$, the first sum contains $O(n^{\varepsilon})$ terms, and so using a naive bound we obtain
\[ \frac{1}{n} \sum_{j: D_{jj} \geq \sqrt{(2 - 2\delta) \log n}} \frac{ 1} { |D_{jj} - z - \mathbb{E}_A m_n(z)|^2} \oindicator{\mathcal{E} \cap \mathcal{F}} \leq \frac{n^{\varepsilon}}{n \eta^2} = O\left(\frac{1}{\log n} \right) \]
with probability $1$. We now turn our attention to the second sum. On the event $\mathcal{F}$, $\mathbb{E}_A m_n(z)$ is uniformly bounded for $z \in \tilde S_{\delta}$. Thus, on the event $\mathcal{E} \cap \mathcal{F}$, whenever $D_{jj} < \sqrt{(2 - 2 \delta) \log n}$, we have
\[ \frac{ 1} { |D_{jj} - z - \mathbb{E}_A m_n(z)|^2} \ll \frac{1}{\log n} \]
uniformly for $z \in \tilde S_{\delta}$ since $E \geq \sqrt{(2-\delta) \log n}$. Therefore, we conclude that
\[ \frac{1}{n} \sum_{j: D_{jj} < \sqrt{(2 - 2\delta) \log n}} \frac{ 1} { |D_{jj} - z - \mathbb{E}_A m_n(z)|^2} \oindicator{\mathcal{E} \cap \mathcal{F}} \ll \frac{n}{n \log n } = O\left(\frac{1}{\log n} \right) \]
with probability $1$, and the proof is complete.
\end{proof}
We include the following extensions of Theorem \ref{thm:stieltjes_transforms}, which we will need in the next section.
\begin{theorem} \label{thm:s:expected_stieltjes_transforms}
There exists $\delta > 0$ so that
\begin{equation} \label{eq:s:explimit}
\sup_{z \in \tilde S_{\delta} \cup \hat{S}_\delta} \sqrt{n} \eta \left| \mathbb{E} m_n(z) - m(z) \right| = o (1).
\end{equation}
\end{theorem}
\begin{remark}
It is likely that the error term in \eqref{eq:s:explimit} can be improved. However, we will not need a sharp bound to prove our main results. In addition, the proof reveals that \eqref{eq:s:explimit} can be extended to hold for all $z \in S_{\delta}$, but we will not need a larger spectral domain in this work.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:s:expected_stieltjes_transforms}]
The proof is similar to the proof of Theorem \ref{thm:stieltjes_transforms}. Again, for notational convenience, we only prove this for $\tilde{S}_\delta$. We outline the main ideas of the proof here.
Fix $\varepsilon \in (0, 1/100)$. Since $A$ is a GOE matrix, it follows from standard norm bounds (see, for example, \cite{MR1863696,MR2963170} and references therein) that $\|A\| \leq 3$ with overwhelming probability.
Thus, by Proposition \ref{prop:count} and Weyl's perturbation theorem (see, for instance, Corollary III.2.6 in \cite{MR1477662}), there exists $\delta > 0$ so that the event given in \eqref{eq:s:Leig}
holds with overwhelming probability. Moreover, by taking $\delta$ smaller if needed, we can also ensure the same value of $\delta$ applies to the bounds in Proposition \ref{prop:count}. For convenience, we define the event
\begin{align*}
\Omega := &\left\{ | \{ 1 \leq i \leq n : \lambda_i(L) \geq \sqrt{(2 - 2\delta) \log n} \}| = O(n^{\varepsilon}) \right\} \\
&\qquad \bigcap \left\{ \sup_{z = E + i \eta \in \tilde S_{\delta}} \sqrt{n} \eta (\log n)^{3/4} \left| m_n(z) - \mathbb{E} m_n(z) \right| = o(1) \right\} \\
&\qquad \bigcap \left\{\left| \{ 1 \leq i \leq n : D_{ii} \geq \sqrt{(2- 2\delta) \log n} \} \right| = O_{\varepsilon}(n^{\varepsilon}) \right\}
\end{align*}
to be the intersection of the events from \eqref{eq:s:Leig}, Proposition \ref{prop:count}, and Proposition \ref{prop:concentration}. Here, we applied Proposition \ref{prop:concentration} with $t = (\log n)^{3/4}$ and then used continuity and a net argument to extend the bound to all $z \in \tilde S_{\delta}$. It follows that $\Omega$ holds with overwhelming probability, and we will work on this event throughout the proof.
For $z \in S_{\delta}$, we define
\[ G := G(z) \]
and
\[ Q := Q(z + \mathbb{E} m_n(z)), \]
where $G(z) := (L-z)^{-1}$ is again the resolvent of $L$ and $Q(z) := (D - z)^{-1}$ is the resolvent of $D$.
Similar to the proof of Theorem \ref{thm:stieltjes_transforms}, we can use the resolvent identity \eqref{eq:resolvent} to write
\begin{align*}
\mathbb{E} m_n(z) - \mathbb{E} s_n(z + \mathbb{E} m_n(z)) &= \mathbb{E} \frac{1}{n} \tr G - \mathbb{E} \frac{1}{n} \tr Q \\
&= \mathbb{E} \frac{1}{n} \tr (GAQ) - \mathbb{E} m_n(z) \mathbb{E} \frac{1}{n} \tr (GQ).
\end{align*}
Applying the same Gaussian integration by parts argument from the proof of Theorem \ref{thm:stieltjes_transforms} and noting that
\[ \mathbb{E} s_n(z + \mathbb{E} m_n(z)) = s(z + \mathbb{E} m_n(z)), \]
we obtain
\begin{align} \label{eq:s:res_mid2}
\mathbb{E} m_n(z) - s(z + \mathbb{E} m_n(z)) &= \frac{1}{n^2} \mathbb{E} \sum_{i,j} Q_{ii} G_{ij}^2 + \frac{1}{n} \mathbb{E} m_n(z) \tr (QG) \\
&\qquad\qquad - \mathbb{E} m_n(z) \mathbb{E} \frac{1}{n} \tr (GQ). \nonumber
\end{align}
Our goal is to bound the error terms on the right-hand side. The first term
\[ \frac{1}{n^2} \mathbb{E} \sum_{i,j} Q_{ii} G_{ij}^2 \]
is handled using the Ward identity in exactly the same way as in the proof of Theorem \ref{thm:stieltjes_transforms}, and we omit the details.
We now turn to bounding the error term
\begin{align*}
\left| \frac{1}{n} \mathbb{E} m_n(z) \tr (QG) - \mathbb{E} m_n(z) \mathbb{E} \frac{1}{n} \tr (GQ) \right| &\leq \mathbb{E} \left[ \left| m_n(z) - \mathbb{E} \left[ m_n(z) \right] \right| \left| \frac{1}{n} \tr (QG) \right| \right].
\end{align*}
Using naive bounds, we can insert the indicator function of $\Omega$, and it suffices to bound
\[ \mathbb{E} \left[ \left| m_n(z) - \mathbb{E} \left[ m_n(z) \right] \right| \left| \frac{1}{n} \tr (QG) \right| \oindicator{\Omega} \right]. \]
We will need the following analogue of Lemma \ref{lem:s:GQbnd}.
\begin{lemma} \label{lem:s:GQbnd2}
One has
\[ \sup_{z \in \tilde S_{\delta} \cup \hat{S}_\delta} \left| \frac{1}{n} \tr (GQ) \right|\oindicator{\Omega} = O\left(\frac{1}{\log n} \right) \]
with probability $1$.
\end{lemma}
We provide the proof of Lemma \ref{lem:s:GQbnd2} below, but first we complete the proof of Theorem \ref{thm:s:expected_stieltjes_transforms}. Indeed, we find
\begin{align*}
\mathbb{E} \left[ \left| m_n(z) - \mathbb{E} \left[ m_n(z) \right] \right| \left| \frac{1}{n} \tr (QG) \right|\oindicator{\Omega} \right] &\ll \frac{1}{\log n} \mathbb{E} \left[ \left| m_n(z) - \mathbb{E} \left[ m_n(z) \right] \right| \oindicator{\Omega} \right] \\
&\ll \frac{1}{\sqrt{n} (\log n)^{1/4} \eta} \\
&= o \left( \frac{1}{\sqrt{n} \eta} \right)
\end{align*}
uniformly for $z \in \tilde S_{\delta}$. Here, we used Lemma \ref{lem:s:GQbnd2} in the first bound and
\[ \sup_{z = E + i \eta \in \tilde S_{\delta}} \sqrt{n} \eta (\log n)^{3/4} \left| m_n(z) - \mathbb{E} m_n(z) \right| = o(1), \]
which is part of the event $\Omega$, in the second bound.
Combining the bounds above with \eqref{eq:s:res_mid2}, we conclude that
\[ \sup_{z \in \tilde S_{\delta} } \sqrt{n} \eta \left| \mathbb{E} m_n(z) - s(z + \mathbb{E} m_n(z)) \right| = o(1). \]
An application of Theorem \ref{thm:stability} now completes the proof.
\end{proof}
We now outline the proof of Lemma \ref{lem:s:GQbnd2}.
\begin{proof}[Proof of Lemma \ref{lem:s:GQbnd2}]
The proof of Lemma \ref{lem:s:GQbnd2} follows the proof of Lemma \ref{lem:s:GQbnd} nearly exactly. Only the following changes need to be made:
\begin{itemize}
\item One needs to replace the indicator function $\oindicator{\mathcal{E} \cap \mathcal{F}}$ with $\oindicator{\Omega}$ and any use of the event $\mathcal{E}$ by $\Omega$.
\item Occurrences of $\mathbb{E}_A m_n(z)$ need to be replaced by $\mathbb{E} m_n(z)$.
\item One does not need the event $\mathcal{F}$ to control $\mathbb{E} m_n(z)$ (in fact, $\mathbb{E} m_n(z)$ is deterministic). Instead, one can use that $\mathbb{E} m_n(z)$ is bounded uniformly for $z \in \tilde S_{\delta}$ by Lemma \ref{lemma:s:mn}.
\end{itemize}
\end{proof}
To conclude this section, we present the following concentration bound, which we will also need in the next section.
\begin{lemma} \label{lem:expected_stieltjes_transforms2}
For any fixed $\delta > 0$, asymptotically almost surely, one has
\begin{equation} \label{eq:s:conc}
\sup_{z \in \tilde S_{\delta} \cup \hat{S}_\delta} \sqrt{n} \eta \left| \mathbb{E} m_n(z) - \mathbb{E}_A m_n(z) \right| = o (1).
\end{equation}
\end{lemma}
\begin{proof}
We will establish \eqref{eq:s:conc} by applying the Gaussian Poincar\'{e} inequality. Since $\mathbb{E}_A m_n(z)$ only depends on the randomness from $D$, we will apply the tensorization property of the Poincar\'{e} inequality to the $n$ iid standard normal entries of $D$. We refer the reader to Section 4.4 of \cite{MR2760897} for further details concerning the Poincar\'{e} inequality and its uses in random matrix theory.
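For concreteness, the tensorized form of the inequality we use is the following (stated here as a sketch, and applied separately to the real and imaginary parts): if $f$ is a smooth function of the iid standard normal entries $D_{11}, \dots, D_{nn}$, then
\[ \mathbb{E} \left| f(D) - \mathbb{E} f(D) \right|^2 \leq \mathbb{E} \sum_{i=1}^n \left| \frac{\partial f}{\partial D_{ii}} \right|^2. \]
Taking $f = \mathbb{E}_A m_n(z)$ and computing the partial derivatives via the resolvent identity \eqref{eq:resolvent} leads to the first display below.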
For any $z := E + i \eta \in \tilde S_{\delta} \cup \hat{S}_\delta$, we begin with
\begin{align*}
\mathbb{E} \left| \mathbb{E}_A m_n(z) - \mathbb{E} m_n(z) \right|^2 &\leq \frac{1}{n^2} \mathbb{E} \sum_i \left| \sum_j \mathbb{E}_A G_{ji} G_{ij} \right|^2 \\
&\leq \frac{1}{n^2} \mathbb{E} \sum_i \left| \sum_j G_{ji} G_{ij} \right|^2,
\end{align*}
where the first inequality follows from the Gaussian Poincar\'{e} inequality and the second from Jensen's inequality. Here, we also used the fact that
\[ \frac{ \partial \mathbb{E}_A G_{jj}} {\partial D_{ii}} = - \mathbb{E}_A \left[ G_{ji} G_{ij} \right], \]
which can be deduced from the resolvent identity \eqref{eq:resolvent}. Applying the triangle inequality and the Ward identity \eqref{eq:ward}, we obtain
\begin{align*}
\mathbb{E} \left| \mathbb{E}_A m_n(z) - \mathbb{E} m_n(z) \right|^2 &\leq \frac{1}{n^2} \mathbb{E} \sum_i \left( \sum_j |G_{ij}|^2 \right)^2 \\
&\leq \frac{1}{n^2 \eta^2} \mathbb{E} \sum_i \left( \Im G_{ii} \right)^2 \\
&\leq \frac{1}{n^2 \eta^3} \mathbb{E} \Im m_n(z) \\
&\ll \frac{1}{n^2 \eta^3 \sqrt{\log n}},
\end{align*}
where we used Lemma \ref{lemma:s:mn} in the last step. Thus, by Markov's inequality, we conclude that
\begin{equation} \label{eq:s:markov}
\sup_{z \in \tilde S_{\delta} \cup \hat{S}_\delta} \mathbb{P} \left( \left| \mathbb{E}_A m_n(z) - \mathbb{E} m_n(z) \right| \geq \frac{1}{\sqrt{n} \eta (\log n)^{1/4}} \right) \ll \frac{1}{n \eta} = \frac{1}{n^{3/4}}.
\end{equation}
We now extend the bound in \eqref{eq:s:markov} to hold simultaneously for all $z \in \tilde S_{\delta} \cup \hat{S}_\delta$ by a net argument. We note that all of our bounds below hold uniformly for $z \in \tilde S_{\delta} \cup \hat{S}_\delta$. Let $\mathcal{N}$ be a $\frac{1}{\sqrt{n} (\log n)^{1/4}}$-net of $\tilde S_{\delta} \cup \hat{S}_\delta$. A counting argument shows that $\mathcal{N}$ can be chosen to have cardinality
\[ |\mathcal{N}| = O_{\delta}(\sqrt{n} \log n ). \]
Therefore, \eqref{eq:s:markov} together with the union bound implies that the event
\[ \left\{ \sup_{z \in \mathcal{N}} \left| \mathbb{E}_A m_n(z) - \mathbb{E} m_n(z) \right| \leq \frac{1}{\sqrt{n} \eta (\log n)^{1/4}} \right\} \]
holds with probability $1 - o(1)$. Let $\mathcal{F}'$ be the intersection of the event above with the event $\mathcal{F}$, defined in \eqref{eq:s:F}. It follows that $\mathcal{F}'$ holds with probability $1 - o(1)$, and we will prove that \eqref{eq:s:conc} holds on $\mathcal{F}'$. Indeed, let $z \in \tilde S_{\delta} \cup \hat{S}_\delta$ be arbitrary. Then there exists $z' \in \mathcal{N}$ so that
\begin{equation} \label{eq:s:net}
|z - z'| \leq \frac{1}{\sqrt{n} (\log n)^{1/4}}.
\end{equation}
Thus, on the event $\mathcal{F}'$, we have
\[ \left| \mathbb{E}_A m_n(z') - \mathbb{E} m_n(z') \right| \ll \frac{1}{\sqrt{n} \eta (\log n)^{1/4}}. \]
Moreover, by the resolvent identity \eqref{eq:resolvent}, we find
\begin{align*}
\left| \mathbb{E} m_n(z) - \mathbb{E} m_n(z') \right| &= \frac{1}{n} |z - z'| \left| \mathbb{E} \tr (G(z) G(z')) \right| \\
&\leq |z - z'| \frac{1}{n} \mathbb{E} \sum_{i,j} \left[|G_{ij}(z)|^2 + |G_{ij}(z')|^2 \right] \\
&= \frac{ |z - z'|}{\eta} \left[ \mathbb{E} \Im m_n(z) + \mathbb{E} \Im m_n(z') \right] \\
&\ll \frac{1}{\sqrt{n} \eta (\log n)^{3/4}},
\end{align*}
where we used the Ward identity \eqref{eq:ward}, Lemma \ref{lemma:s:mn}, and \eqref{eq:s:net}. Similarly, we have
\begin{align*}
\left| \mathbb{E}_A m_n(z) - \mathbb{E}_A m_n(z') \right| &\leq |z - z'| \frac{1}{n} \mathbb{E}_A \sum_{i,j} \left[|G_{ij}(z)|^2 + |G_{ij}(z')|^2 \right] \\
&= \frac{ |z - z'|}{\eta} \left[ \mathbb{E}_A \Im m_n(z) + \mathbb{E}_A \Im m_n(z') \right] \\
&\ll \frac{1}{\sqrt{n} \eta (\log n)^{3/4}},
\end{align*}
where the final inequality holds on the event $\mathcal{F} \supset \mathcal{F}'$. Combining the bounds above, we conclude that
\[ \left| \mathbb{E}_A m_n(z) - \mathbb{E} m_n(z) \right| = o \left( \frac{1}{\sqrt{n} \eta} \right). \]
Since this bound holds uniformly for $z \in \tilde S_{\delta} \cup \hat{S}_\delta$, the proof is complete.
\end{proof}
\section{Proof of Theorem \ref{thm:main_L}} \label{sec:proofofmain}
This section is devoted to the proof of Theorem \ref{thm:main_L}. In view of Theorem \ref{thm:stieltjes_transforms} (noting that $\tilde{S}_{\delta_1}\subseteq\tilde{S}_{\delta_2}$ for any $0<\delta_1<\delta_2$), there exists $\tilde\delta > 0$ so that for any $0<\delta<\tilde\delta$ \begin{equation}\label{eq:A:Recursive Estimate}
\sup_{z = E + i \eta \in \tilde S_{\delta} \cup \hat{S}_\delta} n \eta \left| m_n(z) - s_n(z + \mathbb{E}_A m_n(z)) \right| = o(1)
\end{equation}
with overwhelming probability. Throughout this section we will make repeated use of Proposition \ref{prop:count}, so we fix $0 < \delta < \tilde \delta$ such that
\[ \left| \{ 1 \leq i \leq n : D_{ii} \geq \sqrt{(2- \delta) \log n} \} \right| = O(n^{1/100}) \]
with overwhelming probability. Considering imaginary parts, we conclude that, on the same event as \eqref{eq:A:Recursive Estimate}, \begin{align}\label{eq:A:eigenvalue counting function}
\sup_{z = E + i \eta \in \tilde S_{\delta} \cup \hat{S}_\delta}\Bigg|&\sum_{j=1}^{n}\frac{\eta^2}{(\lambda_j(L)-E)^2+\eta^2} \nonumber\\ &\qquad -\sum_{j=1}^{n}\frac{\eta^2+\eta\Im\mathbb{E}_A m_n(z)}{(D_{jj}-E-\Re\mathbb{E}_Am_n(z))^2+(\eta+\Im\mathbb{E}_A m_n(z))^2} \Bigg|
\end{align} tends to $0$.
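The two sums above arise by taking imaginary parts: writing $w := z + \mathbb{E}_A m_n(z)$, a direct computation from the definitions gives
\[ n \eta \, \Im m_n(z) = \sum_{j=1}^{n} \frac{\eta^{2}}{(\lambda_{j}(L)-E)^{2}+\eta^{2}} \]
and
\[ n \eta \, \Im s_n(w) = \sum_{j=1}^{n} \frac{\eta^{2}+\eta\,\Im\mathbb{E}_A m_n(z)}{(D_{jj}-E-\Re\mathbb{E}_A m_n(z))^{2}+(\eta+\Im\mathbb{E}_A m_n(z))^{2}}, \]
so the difference of the two sums is exactly $n\eta\,\Im\left( m_n(z) - s_n(z+\mathbb{E}_A m_n(z))\right)$, which is $o(1)$ by \eqref{eq:A:Recursive Estimate}.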
If $E$ is chosen such that $(D_{jj}-E-\Re\mathbb{E}_Am_n(z))^2=0$ for some $j$, then the second sum in \eqref{eq:A:eigenvalue counting function} is bounded below by a quantity asymptotically close to $1$. Hence, the first sum must also be asymptotically at least $1$. One way in which the first sum can be close to $1$ is if $E$ is close to an eigenvalue of $L$ and the other terms are negligible. Bai and Silverstein (see \cite[Chapter 6]{MR2567175}) used this observation to show that the spectra of certain random matrix models separate.
In the remainder of this section, we use a similar method (and an iteration argument) to precisely locate $\lambda_1(L)$ and complete the proof of Theorem \ref{thm:main_L}.
For the remainder of this section we fix $\eta=n^{-1/4}$. For any $z_1=E+i\eta \in \tilde S_\delta$, let $z_2=E+i\sqrt{2}\eta$ and consider the differences \begin{equation*}
I_1=\sum_{j=1}^{n}\frac{\eta^2}{(\lambda_j(L)-E)^2+\eta^2}-\sum_{j=1}^{n}\frac{\eta^2}{(\lambda_j(L)-E)^2+2\eta^2},
\end{equation*} and \begin{align*}
I_2&=\sum_{j=1}^{n}\frac{\eta^2+\eta\Im\mathbb{E}_A m_n(z_1)}{(D_{jj}-E-\Re\mathbb{E}_Am_n(z_1))^2+(\eta+\Im\mathbb{E}_A m_n(z_1))^2}\\
&\quad-\sum_{j=1}^{n}\frac{\eta^2+\frac{1}{\sqrt{2}}\eta\Im\mathbb{E}_A m_n(z_2)}{(D_{jj}-E-\Re\mathbb{E}_Am_n(z_2))^2+(\sqrt{2}\eta+\Im\mathbb{E}_A m_n(z_2))^2}.
\end{align*} From \eqref{eq:A:Recursive Estimate} we have that \begin{equation}\label{eq:A:Difference of sums}
\sup_{z_1 = E + i \eta \in \tilde S_{\delta}}|I_1-I_2|=o(1),
\end{equation} with overwhelming probability. Note that \begin{equation*}
I_1=\sum_{j=1}^{n}\frac{\eta^4}{((\lambda_j(L)-E)^2+\eta^2)((\lambda_j(L)-E)^2+2\eta^2)},
\end{equation*} and \begin{equation*}
I_2=\sum_{j=1}^{n}\frac{N_{j,n}}{B_{j,n}},
\end{equation*} where\begin{align*}
B_{j,n}&=((D_{jj}-E-\Re\mathbb{E}_Am_n(z_1))^2+(\eta+\Im\mathbb{E}_A m_n(z_1))^2) \\
&\qquad \times ((D_{jj}-E-\Re\mathbb{E}_Am_n(z_2))^2+(\sqrt{2}\eta+\Im\mathbb{E}_A m_n(z_2))^2),
\end{align*} and \begin{align*}
N_{j,n}&=\eta^2\left[\left(\sqrt{2}\eta+\Im\mathbb{E}_A m_n(z_2) \right)^2-\left(\eta+\Im\mathbb{E}_A m_n(z_1) \right)^2 \right]\\
&\quad+\eta\Im\mathbb{E}_A m_n(z_1)\left(\sqrt{2}\eta+\Im\mathbb{E}_A m_n(z_2)\right)^2 \\
&\quad -\frac{1}{\sqrt{2}}\eta\Im\mathbb{E}_A m_n(z_2)\left(\eta+\Im\mathbb{E}_A m_n(z_1)\right)^2\\
&\quad+\eta^2\left[\left(D_{jj}-E-\Re\mathbb{E}_Am_n(z_2) \right)^2-\left(D_{jj}-E-\Re\mathbb{E}_Am_n(z_1) \right)^2 \right]\\
&\quad+\eta\Im\mathbb{E}_A m_n(z_1)\left(D_{jj}-E-\Re\mathbb{E}_Am_n(z_2)\right)^2 \\
&\quad-\frac{1}{\sqrt{2}}\eta\Im\mathbb{E}_A m_n(z_2)\left(D_{jj}-E-\Re\mathbb{E}_Am_n(z_1)\right)^2.
\end{align*}
In several steps we now show $N_{j,n}=\eta^4(1+o(1))$ uniformly on $\tilde{S}_\delta$. The implied constants in our asymptotic notation in this section are uniform over $j\in[n]$. Applying Lemma \ref{lem:Imm_fc is small} with Theorem \ref{thm:s:expected_stieltjes_transforms} and Lemma \ref{lem:expected_stieltjes_transforms2}, we deduce that \begin{equation*}
\sup_{z = E + i \eta \in \tilde{S}_{\delta} \cup \hat{S}_{\delta}} \Im\mathbb{E}_A m_n(z)=o(\eta)
\end{equation*} with overwhelming probability. From this we see that \begin{equation}\label{eq:A:first N term small}
\sup_{z_1 = E + i \eta \in \tilde{S}_{\delta}}\left|\eta^2\left[\left(\sqrt{2}\eta+\Im\mathbb{E}_A m_n(z_2) \right)^2-\left(\eta+\Im\mathbb{E}_A m_n(z_1) \right)^2 \right]\right|=\eta^4(1+o(1)),
\end{equation}
and
\begin{align}\label{eq:A:second N term small}
\! \eta\Im\mathbb{E}_A m_n(z_1)\left(\! \sqrt{2}\eta+\Im\mathbb{E}_A m_n(z_2)\!\right)^2\!\! -\frac{\eta\Im\mathbb{E}_A m_n(z_2)}{\sqrt{2}}\left( \eta+\Im\mathbb{E}_A m_n(z_1)\right)^2\! = o(\eta^4),
\end{align} uniformly for $z_1\in\tilde{S}_\delta$ with overwhelming probability. The following lemma allows us to control the terms of $N_{j,n}$ which involve $\Re \mathbb{E}_A m_n$.
\begin{lemma}\label{lemma:A:expected real parts are close}
There exists $\delta>0$ such that \begin{equation*}
\sup_{z_1 = E + i \eta \in \tilde S_{\delta}}\left| \Re\mathbb{E}_A m_n(z_1)-\Re\mathbb{E}_A m_n(z_2)\right|=o \left(n^{-1/2} \right),
\end{equation*} with overwhelming probability.
\end{lemma} Lemma \ref{lemma:A:expected real parts are close} is very similar to Lemma \ref{Alemma:Difference of real FC}; however, neither follows from the other, as we do not have a result comparing $\mathbb{E}_A m_n(z)$ to $m(z)$ within the appropriate error $o(n^{-1/2})$. The proof of Lemma \ref{lemma:A:expected real parts are close} is nearly identical to that of Lemma \ref{Alemma:Difference of real FC}, with summation replacing integration.
\begin{proof}[Proof of Lemma \ref{lemma:A:expected real parts are close}]
Fix $E+in^{-1/4}\in \tilde{S}_\delta$. We begin by considering
\begin{equation}\label{eq:Difference of transforms}
\Re m_n(z_1)-\Re m_n(z_2)=\frac{1}{n}\sum_{j=1}^n\frac{\eta^2(\lambda_j-E)}{\left((\lambda_j-E)^2+\eta^2 \right)\left((\lambda_j-E)^2+2\eta^2 \right)},
\end{equation}
where we let $\lambda_n \leq \cdots \leq \lambda_1$ denote the eigenvalues of $L$.
We divide $\{1,\dots, n\}$ into three subsets $J_1=\{j:|\lambda_j-E|\geq \sqrt{E} \}$, $J_2=\{j:n^{-1/8}E \leq |\lambda_j-E|\leq \sqrt{E} \}$, and $J_3=\{j:|\lambda_j-E|\leq n^{-1/8}E \}$. Note for any $E\in[\sqrt{(2-\delta)\log n},\sqrt{3\log n}]$, $J_2,J_3\subseteq\{j:\lambda_j\geq \sqrt{(2-\delta)\log n}-\left[(2-\delta)\log n\right]^{1/4} \}$. From standard bounds on $\|A\|$ (see, for example, \cite{MR1863696,MR2963170} and references therein) and taking $\delta$ sufficiently small in Proposition \ref{prop:count}, we have \begin{equation}\label{eq:A:J sizes}
|J_2|,|J_3|\leq\left|\{j:\lambda_j\geq \sqrt{(2-\delta)\log n}-\left[(2-\delta)\log n\right]^{1/4} \} \right|=O(n^{1/100})
\end{equation} with overwhelming probability. Note that\begin{equation}\label{eq:A:J_1 estimates}
\left|\frac{1}{n}\sum_{j\in J_1}\frac{\eta^2(\lambda_j-E)}{\left((\lambda_j-E)^2+\eta^2 \right)\left((\lambda_j-E)^2+2\eta^2 \right)}\right|\leq \frac{\eta^2}{(2-\delta)^{3/4}\log^{3/4}n},
\end{equation} and so we focus on the sets where $|\lambda_j-E|$ is small. For $J_2$, applying \eqref{eq:A:J sizes} gives \begin{align}\label{eq:A:J_2 estimates}
\left|\frac{1}{n}\sum_{j\in J_2}\frac{\eta^2(\lambda_j-E)}{\left((\lambda_j-E)^2+\eta^2 \right)\left((\lambda_j-E)^2+2\eta^2 \right)}\right|&\leq\frac{1}{n}\sum_{j\in J_2}\frac{\eta^2}{|\lambda_j-E|^3}\nonumber\\
& =\eta^2O\left(\frac{n^{1/100}}{n\eta^2E} \right)\\
&= O\left(\frac{1}{n^{99/100}}\right)\nonumber\\
&=o(\eta^2)\nonumber
\end{align} with overwhelming probability and implicit constants depending only on $\delta$. In a similar fashion, applying \eqref{eq:A:J sizes} gives \begin{align}\label{eq:A:J_3 estimates}
\left|\frac{1}{n}\sum_{j\in J_3}\frac{\eta^2(\lambda_j-E)}{\left((\lambda_j-E)^2+\eta^2 \right)\left((\lambda_j-E)^2+2\eta^2 \right)}\right|&\leq\frac{1}{n}\sum_{j\in J_3}\frac{\eta^2|\lambda_j-E|}{\eta^4}\nonumber \\
&\leq O\left(\eta^{5/2}\sqrt{3\log n}n^{1/100}\right)\\
&= o(\eta^2)\nonumber
\end{align} with overwhelming probability and implicit constants depending only on $\delta$. Combining \eqref{eq:Difference of transforms}, \eqref{eq:A:J_1 estimates}, \eqref{eq:A:J_2 estimates}, and \eqref{eq:A:J_3 estimates} we can conclude that\begin{equation*}
\Re m_n(z_1)-\Re m_n(z_2)=o(\eta^2)
\end{equation*} with overwhelming probability. Applying Theorem \ref{thm:concentration} we see that
\begin{align*}
\left|\Re\mathbb{E}_A m_n(z_1)-\Re\mathbb{E}_A m_n(z_2)\right|&\leq\left|\Re\mathbb{E}_A m_n(z_1)-\Re m_n(z_1)\right|\\
&\quad+\left|\Re\mathbb{E}_A m_n(z_2)-\Re m_n(z_2)\right|\\
&\quad +\left|\Re m_n(z_1)-\Re m_n(z_2)\right|\\
&=o(\eta^3)+o(\eta^2)
\end{align*} with overwhelming probability and implicit constants depending only on $\delta$. To extend to the supremum over $\tilde{S}_\delta$, note that $\mathbb{E}_A m_n$ (or any Stieltjes transform) is Lipschitz on $\tilde{S}_\delta$ with Lipschitz constant at most $\sqrt{n}$. Taking a $\frac{1}{n^4}$-net and using the union bound completes the proof.
\end{proof}
Thus we have from Lemma \ref{lemma:A:expected real parts are close} that \begin{align}\label{eq:A:third N piece is small}
&\eta^2\left[\left(D_{jj}-E-\Re\mathbb{E}_Am_n(z_2) \right)^2-\left(D_{jj}-E-\Re\mathbb{E}_Am_n(z_1) \right)^2 \right]\nonumber \\
&=\eta^2\left[\left(2D_{jj}-2E-\Re\mathbb{E}_Am_n(z_2)-\Re\mathbb{E}_Am_n(z_1) \right)\left(\Re\mathbb{E}_Am_n(z_1)-\Re\mathbb{E}_Am_n(z_2) \right) \right]\nonumber\\
&=o(\eta^4|D_{jj}-E|),
\end{align} with overwhelming probability.
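The algebraic identity invoked in the next display can be verified directly from the spectral representation of $m_n$: since $z_{1}=E+i\eta$ and $z_{2}=E+i\sqrt{2}\eta$,
\[ \eta\,\Im m_n(z_{1})=\frac{1}{n}\sum_{j=1}^{n}\frac{\eta^{2}}{(\lambda_{j}(L)-E)^{2}+\eta^{2}}, \qquad \frac{\eta}{\sqrt{2}}\,\Im m_n(z_{2})=\frac{1}{n}\sum_{j=1}^{n}\frac{\eta^{2}}{(\lambda_{j}(L)-E)^{2}+2\eta^{2}}, \]
and subtracting the second expression from the first gives precisely $\frac{1}{n}I_{1}$.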
To deal with the last term in $N_{j,n}$, note that some algebra leads to \begin{equation}\label{eq:A:recursive for sums}
\eta\Im m_n(z_1)-\frac{1}{\sqrt{2}}\eta\Im m_n(z_2)=\frac{1}{n}I_1,
\end{equation} and applying Theorem \ref{thm:concentration} to \eqref{eq:A:recursive for sums} gives \begin{align}\label{eq:A:fourth N piece is small}
&\eta\Im\mathbb{E}_A m_n(z_1)\left(D_{jj}-E-\Re\mathbb{E}_Am_n(z_2)\right)^2 \\
&-\frac{1}{\sqrt{2}}\eta\Im\mathbb{E}_A m_n(z_2)\left(D_{jj}-E-\Re\mathbb{E}_Am_n(z_1)\right)^2\\
&=\frac{\left(D_{jj}-E-\Re\mathbb{E}_Am_n(z_1) \right)^2}{n}I_1+o(\eta^4|D_{jj}-E|^2)
\end{align} uniformly for $E\in[\sqrt{(2-\delta)\log n},\sqrt{3\log n}]$ and $j\in[n]$, with overwhelming probability. Thus we conclude from \eqref{eq:A:first N term small}, \eqref{eq:A:second N term small}, \eqref{eq:A:third N piece is small}, and \eqref{eq:A:fourth N piece is small} that \begin{equation}\label{eq:A:simplified I_2}
I_2=\sum_{j=1}^{n}\frac{\eta^4(1+o(1))+n^{-1}\left(D_{jj}-E-\Re\mathbb{E}_Am_n(z_1) \right)^2I_1+o(\eta^4|D_{jj}-E|^2)}{B_{j,n}},
\end{equation} with overwhelming probability. A straightforward application of Proposition \ref{prop:count} with $\varepsilon=1/100$ (similar to \eqref{eq:A:J_2 estimates} and \eqref{eq:A:J_3 estimates} in the proof of Lemma \ref{lemma:A:expected real parts are close}) yields \begin{equation}\label{eq:A:controlling I_2 piece}
\sup_{z_1 = E + i \eta \in \tilde{S}_{\delta}}\left|\sum_{j=1}^{n}\frac{\eta^4\left(D_{jj}-E-\Re\mathbb{E}_Am_n(z_1) \right)^2}{B_{j,n}}\right|=o(1)
\end{equation} and \begin{equation}\label{eq:A:I_2 reduction}
\sup_{z_1 = E + i \eta \in \tilde{S}_{\delta}}\left|\sum_{j=1}^{n}\frac{o(\eta^4|D_{jj}-E|^2)}{B_{j,n}}\right|=o(1),
\end{equation} with overwhelming probability. Combining \eqref{eq:A:Difference of sums}, \eqref{eq:A:simplified I_2}, \eqref{eq:A:controlling I_2 piece}, and \eqref{eq:A:I_2 reduction} we arrive at the conclusion \begin{equation}\label{eq:A:simple description of differences}
\sup_{z_1 = E + i \eta \in \tilde{S}_{\delta}}\left|(1+o(1))I_1- (1+o(1))\sum_{j=1}^{n}\frac{\eta^4}{B_{j,n}}\right|=o(1),
\end{equation} with overwhelming probability, where we have pulled out $(1+o(1))$ terms which are uniform in $j$. At this point, we order the diagonal entries of $D$ as $D_{(1)}> D_{(2)}>\dots> D_{(n)}$. We will now complete the proof of Theorem \ref{thm:main_L}. Let $E_{(1)}$ be a solution to the equation \begin{equation*}
D_{(1)}-E-\Re m(E+i\eta)=0,
\end{equation*} whose existence is guaranteed by Lemma \ref{lemma:A:Existence of E}. From Lemma \ref{lemma:A:Existence of E} we see that asymptotically almost surely $E_{(1)}=D_{(1)}+\frac{1}{D_{(1)}}+O\left(\frac{1}{D_{(1)}^2}\right)$, and, after noting that $D_{(1)}=a_n+o(1)$ asymptotically almost surely, we obtain \begin{equation*}
\sup_{E\in\left[E_{(1)}-\frac{1}{a_n^{3/2}},E_{(1)}+\frac{1}{a_n^{3/2}} \right] } (1+o(1))\sum_{j=1}^{n}\frac{\eta^4}{B_{j,n}}\geq \frac{1}{2}.
\end{equation*} Thus, from \eqref{eq:A:simple description of differences} \begin{equation}\label{eq:A:I_1 is big at location.}
\sup_{E\in\left[E_{(1)}-\frac{1}{a_n^{3/2}},E_{(1)}+\frac{1}{a_n^{3/2}} \right] }I_1\geq 1/2,
\end{equation} asymptotically almost surely.
Assume, for the sake of contradiction, that no eigenvalue of $L$ lies within $1/a_n^{3/2}$ of $E_{(1)}$. For any $\delta'>0$, $E_{(1)}\geq \sqrt{(2-\delta')\log n}$ asymptotically almost surely. By taking $\delta'$ sufficiently small and applying Proposition \ref{prop:count}, there are at most $O(n^{1/100})$ eigenvalues of $L$ greater than $\sqrt{(2-2\delta')\log n}$. If $J=\{j: \lambda_j(L)\geq\sqrt{(2-2\delta')\log n} \}$, then asymptotically almost surely \begin{equation}
I_1\leq o(1)\sum_{j\notin J}\eta^4+\sum_{j\in J}\frac{\eta^4}{1/a_n^6}=o(1)+O\left(\frac{a_n^6}{n^{99/100}} \right)=o(1),
\end{equation} a contradiction of \eqref{eq:A:simple description of differences} and \eqref{eq:A:I_1 is big at location.}. Thus there must be at least one eigenvalue of $L$ within $1/a_n^{3/2}$ of $E_{(1)}$. Let $\delta''>0$ be sufficiently small such that from Proposition \ref{prop:count} $\tilde J=\{j: D_{jj}\geq\sqrt{(2-\delta'')\log n}\}$ has $O(n^{1/100})$ elements. Then\begin{align}\label{eq:A:no big eignevalues first bound}
\sup_{E\in\left[E_{(1)}+\frac{1}{a_n^{3/2}},\sqrt{3\log n} \right] }\left|\sum_{j=1}^{n}\frac{\eta^4}{B_{j,n}}\right|&\leq\sup_{E\in\left[E_{(1)}+\frac{1}{a_n^{3/2}},\sqrt{3\log n} \right] }\left|\sum_{j\in \tilde J}\frac{\eta^4}{B_{j,n}}\right| \nonumber\\
&\qquad+\sup_{E\in\left[E_{(1)}+\frac{1}{a_n^{3/2}},\sqrt{3\log n} \right] }\left|\sum_{j\notin \tilde J}^{n}\frac{\eta^4}{B_{j,n}}\right|\nonumber\\
&\leq \left|\sum_{j\in \tilde J}a_n^6\eta^4\right|+\left|\sum_{j\notin \tilde J}\frac{\eta^4}{(2-\delta'')^4\log^2 n}\right|\nonumber\\
&=o(1).
\end{align} Since $\|L \| \leq \sqrt{3 \log n}$ asymptotically almost surely, if there exists an eigenvalue of $L$ beyond $\frac{1}{a_n^{3/2}}$ of $E_{(1)}$, it would then follow from the definition of $I_1$ that\begin{equation}
\sup_{E\in\left[E_{(1)}+\frac{1}{a_n^{3/2}},\sqrt{3\log n} \right] } I_1\geq \frac{1}{2},
\end{equation} a contradiction of \eqref{eq:A:simple description of differences} and \eqref{eq:A:no big eignevalues first bound}. Thus there cannot be an eigenvalue of $L$ beyond $\frac{1}{a_n^{3/2}}$ of $E_{(1)}$ asymptotically almost surely, and hence $|\lambda_1(L)-E_{(1)}|=O\left(\frac{1}{a_n^{3/2}}\right)$ asymptotically almost surely. Using the asymptotic expansion in Lemma \ref{lemma:A:Existence of E} we see that with probability tending to one that \begin{equation*}
E_{(1)}=D_{(1)}+\frac{1}{\sqrt{2\log n}}+O\left(\frac{1}{\log n}\right).
\end{equation*} With $b_n'$ defined as in \eqref{eq:def:bn'}, we conclude the proof of Theorem \ref{thm:main_L} by noting that, with probability tending to one, \begin{align*}
a_n(\lambda_1(L)-b_n)&=a_n(E_{(1)}-b_n)+o(1)\\
&=a_n(D_{(1)}-b_n')+o(1)
\end{align*} and hence for any $x\in\mathbb{R}$\begin{align*}
\lim_{n\rightarrow\infty}\P\left(a_n(\lambda_1(L)-b_n)\leq x \right)&=\lim_{n\rightarrow\infty}\P\left(a_n(D_{(1)}-b_n')\leq x+o(1) \right)\\
&=e^{-e^{-x}},
\end{align*} where the last equality is the well-known limit theorem for the maximum of iid Gaussian random variables (see, for instance, \cite[Theorem 1.5.3]{MR691492}). This completes the proof of Theorem \ref{thm:main_L}.
\section{Introduction}
The singularity theorems \cite{PhysRevLett.14.57, senovilla1998singularity, ford2003classical} are among the most important results of general relativity. They have profound consequences in black hole mechanics, also known as black hole thermodynamics (BH). The singularity theorems are proved using the earlier work of Raychaudhuri \cite{PhysRev.98.1123} on the causal structure of geodesic congruences, generalized in \cite{abreu2011some}. However, the proofs of the singularity theorems in BH depend strongly on the null energy condition (see \cite{mandal2018revisiting, chrusciel2001regularity, kar2007raychaudhuri}). The null energy condition states that the energy-momentum tensor $T_{\mu\nu}$ of matter satisfies the following condition
\begin{equation}\label{NEC}
T_{\mu\nu}(x)n^{\mu}(x)n^{\nu}(x)\geq0,
\end{equation}
for any null vector $n^{\mu}$, i.e., one satisfying $g_{\mu\nu}(x)n^{\mu}(x)n^{\nu}(x)=0$, where $g_{\mu\nu}$ is the metric tensor of the spacetime manifold. However, over the years, violations of the null energy condition (NEC) \cite{rubakov2014null, bouhmadi2014wormholes, toshmatov2017energy, baccetti2012null, mandal2018revisiting} have been found in several physical situations. It was later proposed, based on the work \cite{jacobson1995thermodynamics}, that the average null energy condition (ANEC) \cite{klinkhammer1991averaged, yurtsever1995averaged, yurtsever1995remarks, PhysRevD.44.403, fewster2007averaged, fewster2003null, kontou2013averaged} is the minimum requirement for the study of BH and that it holds in any spacetime. The average null energy condition states that the integral of the projection of the matter stress-energy onto the tangent vector of a null geodesic cannot be negative:
\begin{equation}\label{ANEC}
\int_{\gamma}T_{\mu\nu}l^{\mu}l^{\nu}\equiv\int_{\gamma}T_{\mu\nu}(\lambda)l^{\mu}(\lambda)l^{\nu}(\lambda)d\lambda\geq0,
\end{equation}
where $l^{\mu}$ is the tangent vector to the null geodesic $\gamma$ and $\lambda$ is the affine parameter. It was later shown that the ANEC can also be violated \cite{visser1995scale, urban2010averaged, freivogel2018smeared}. If two points are connected by a null geodesic $\gamma$ parametrized by $\lambda$, then (\ref{ANEC}) can be expressed as
\begin{equation}
\int_{\gamma}T_{\mu\nu}l^{\mu}l^{\nu}=\int_{\gamma}T_{\mu\nu}(x)l^{\nu}(x)dx^{\mu}=\int_{\gamma}\mathcal{J}_{\mu}(x)dx^{\mu},
\end{equation}
where $\mathcal{J}_{\mu}(x)=T_{\mu\nu}(x)l^{\nu}(x)$. If there exist closed null geodesics \cite{klinkhammer1992vacuum, sarma2013vacuum}, then the same two points must be connected by two null geodesics, say $\gamma_{1}$ and $\gamma_{2}$; hence,
\begin{equation}
\int_{\gamma_{1}}T_{\mu\nu}l^{\mu}l^{\nu}-\int_{\gamma_{2}}T_{\mu\nu}l^{\mu}l^{\nu}=\int_{\mathcal{C}}\mathcal{J}_{\mu}(x)dx^{\mu}=\int_{\mathcal{S}}\nabla_{[\mu}\mathcal{J}_{\nu]}(x)dx^{\mu}\wedge dx^{\nu},
\end{equation}
where the boundary of the surface $\mathcal{S}$ is the closed null geodesic $\mathcal{C}$. In general, the flux integral on the right-hand side is non-zero. However, if it is positive definite or negative definite, then, as the above equation shows, there may exist null geodesics for which (\ref{ANEC}) is violated.
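The last equality in the display above is an application of Stokes' theorem to the one-form $\mathcal{J}=\mathcal{J}_{\mu}dx^{\mu}$ over the surface $\mathcal{S}$ with $\partial\mathcal{S}=\mathcal{C}$:
\[ \oint_{\mathcal{C}}\mathcal{J}_{\mu}\,dx^{\mu}=\int_{\mathcal{S}}(d\mathcal{J})_{\mu\nu}\,dx^{\mu}\wedge dx^{\nu}, \qquad (d\mathcal{J})_{\mu\nu}=\partial_{[\mu}\mathcal{J}_{\nu]}=\nabla_{[\mu}\mathcal{J}_{\nu]}, \]
where the partial derivatives may be replaced by covariant ones because the antisymmetrized Christoffel terms cancel for a torsion-free connection.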
In quantum field theory in curved spacetime (QFTCS), the stress-energy tensor of matter is given by a hermitian operator constructed out of field operators. Hence, both conditions (\ref{NEC}) and (\ref{ANEC}) depend strongly on the quantum states of matter in the curved spacetime. Unfortunately, it remains unclear under what conditions the quantum states of matter satisfy either (\ref{NEC}) or (\ref{ANEC}). Hence, it is important to classify the quantum states of matter which satisfy NEC (\ref{NEC}), since it is stronger than ANEC (\ref{ANEC}). However, it is quite clear that, in general, the collection of all quantum states of matter satisfying NEC does not form a vector space, since given a null vector $n^{\mu}$,
\begin{equation}\label{0.1}
\bra{\psi_{1}}\hat{T}_{\mu\nu}\ket{\psi_{1}}n^{\mu}n^{\nu}\geq0, \ \bra{\psi_{2}}\hat{T}_{\mu\nu}\ket{\psi_{2}}n^{\mu}n^{\nu}\geq0\notimplies(\bra{\psi_{1}}a^{*}+\bra{\psi_{2}}b^{*})\hat{T}_{\mu\nu}(a\ket{\psi_{1}}+b\ket{\psi_{2}})n^{\mu}n^{\nu}\geq0,
\end{equation}
for arbitrary complex numbers $a,b$ where $\ket{\psi_{1}}$ and $\ket{\psi_{2}}$ are two quantum states satisfying NEC.
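To see this explicitly, one can expand the quadratic form in (\ref{0.1}) using the hermiticity of $\hat{T}_{\mu\nu}$:
\begin{align*}
(\bra{\psi_{1}}a^{*}+\bra{\psi_{2}}b^{*})\hat{T}_{\mu\nu}(a\ket{\psi_{1}}+b\ket{\psi_{2}})n^{\mu}n^{\nu}&=|a|^{2}\bra{\psi_{1}}\hat{T}_{\mu\nu}\ket{\psi_{1}}n^{\mu}n^{\nu}+|b|^{2}\bra{\psi_{2}}\hat{T}_{\mu\nu}\ket{\psi_{2}}n^{\mu}n^{\nu}\\
&\quad+2\,\text{Re}\left(a^{*}b\bra{\psi_{1}}\hat{T}_{\mu\nu}\ket{\psi_{2}}n^{\mu}n^{\nu}\right).
\end{align*}
The diagonal terms are non-negative by assumption, but the cross term has no definite sign and, for suitable $a,b$, can dominate.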
The aim of the present article is to classify a collection of quantum states of the matter in a generic curved spacetime, satisfying NEC. For the sake of mathematical simplicity, we use a non-interacting real scalar field theory. In order to classify these quantum states, we use coherent state descriptions since we follow the canonical approach to QFTCS \cite{ford2002d3, dewitt1975quantum, fulling1989aspects}. We also briefly discuss the classification of quantum states based on ANEC.
\section{Introduction to the coherent states}
In this section, we briefly review the properties of coherent states as preliminary material for our later studies. In quantum mechanics, the classical phase-space observables are mapped to linear hermitian operators defined over the Hilbert space through the map $x\mapsto\hat{x}, \ p\mapsto\hat{p}, \ \{x,p\}=1\mapsto[\hat{x},\hat{p}]=i$. From the operators $\hat{x},\hat{p}$, the creation operator $\hat{a}^{\dagger}$ and the annihilation operator $\hat{a}$ can be constructed, satisfying the algebra $[\hat{a},\hat{a}^{\dagger}]=1$. A coherent state $\ket{\alpha}$, also known as a Glauber state, is an eigenstate of the annihilation operator $\hat{a}$ with eigenvalue $\alpha\in\mathbb{C}$, defined by $\hat{a}\ket{\alpha}=\alpha\ket{\alpha}$. Now, we define the displacement operator
\begin{equation}
\hat{\mathcal{D}}(\alpha)\equiv e^{\alpha\hat{a}^{\dagger}-\alpha^{*}\hat{a}}=e^{-\frac{1}{2}|\alpha|^{2}}e^{\alpha\hat{a}^{\dagger}}e^{-\alpha^{*}\hat{a}},
\end{equation}
where the Baker--Campbell--Hausdorff (BCH) formula is used in the second equality. The above definition implies $\hat{\mathcal{D}}^{\dagger}(\alpha)=\hat{\mathcal{D}}(-\alpha)=\hat{\mathcal{D}}^{-1}(\alpha)$. Further, it can be shown that
\begin{equation}\label{3}
\hat{\mathcal{D}}^{\dagger}(\alpha)\hat{a}\hat{\mathcal{D}}(\alpha)=\hat{a}+\alpha, \ \hat{\mathcal{D}}^{\dagger}(\alpha)\hat{a}^{\dagger}\hat{\mathcal{D}}(\alpha)=\hat{a}^{\dagger}+\alpha^{*},
\end{equation}
and $\hat{\mathcal{D}}(\alpha+\beta)=\hat{\mathcal{D}}(\alpha)\hat{\mathcal{D}}(\beta)e^{-i\text{Im}(\alpha\beta^{*})}$. The relation (\ref{3}) implies $\ket{\alpha}=\hat{\mathcal{D}}(\alpha)\ket{0}$, where $\ket{0}$ is the state annihilated by the annihilation operator, since $\hat{a}\,\hat{\mathcal{D}}(\alpha)\ket{0}=\hat{\mathcal{D}}(\alpha)(\hat{a}+\alpha)\ket{0}=\alpha\,\hat{\mathcal{D}}(\alpha)\ket{0}$. Therefore, coherent states can be obtained through the action of the displacement operator on the state $\ket{0}$ (see \cite{zhang1990coherent} for more details). As a result, the inner product between two coherent states is
\begin{equation}\label{inner-product}
\braket{\beta|\alpha} = \bra{0}\hat{\mathcal{D}}^{\dagger}(\beta)\hat{\mathcal{D}}(\alpha)\ket{0}=e^{-\frac{|\alpha-\beta|^{2}}{2}+i\text{Im}(\alpha\beta^{*})}.
\end{equation}
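As a quick sanity check, the overlap formula above can be verified numerically in a truncated Fock space. The sketch below is illustrative only; the truncation dimension \verb|N| is an arbitrary choice, and the displacement operator is built by exponentiating finite matrices.

```python
import numpy as np
from scipy.linalg import expm

N = 60                                    # Fock-space truncation (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator, a|n> = sqrt(n)|n-1>
ad = a.conj().T                           # creation operator

def coherent(alpha):
    """|alpha> = D(alpha)|0> in the truncated Fock basis."""
    D = expm(alpha * ad - np.conj(alpha) * a)
    vac = np.zeros(N); vac[0] = 1.0
    return D @ vac

alpha, beta = 0.7 + 0.4j, -0.3 + 1.1j
lhs = np.vdot(coherent(beta), coherent(alpha))            # <beta|alpha>, numerically
rhs = np.exp(-abs(alpha - beta)**2 / 2
             + 1j * np.imag(alpha * np.conj(beta)))       # closed-form overlap
print(abs(lhs - rhs))   # tiny truncation/roundoff error
```

The agreement holds to machine precision as long as $|\alpha|,|\beta|$ are small compared to $\sqrt{N}$, so that the coherent states fit inside the truncated basis.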
\section{Scalar field theory in a generic curved spacetime}
The action for a minimally coupled real massless scalar field theory in a generic curved spacetime is given by
\begin{equation}\label{action}
\mathcal{S}=-\int\sqrt{-g(x)}d^{4}x \ \frac{1}{2}g^{\mu\nu}(x)\partial_{\mu}\phi(x)\partial_{\nu}\phi(x).
\end{equation}
A complete set of mode solutions $\{f_{j},f_{j}^{*}\}$ of the Klein--Gordon equation is obtained by extremizing the above action, with $\{j\}$ a set of discrete or continuous labels distinguishing the independent solutions. These modes are normalized \cite{ford2002d3} with respect to the following inner product
\begin{equation}
\langle f,g\rangle=-i\int\sqrt{-g(x)}d^{3}x[f^{*}(t,\vec{x})\overleftrightarrow{\partial^{0}}g(t,\vec{x})],
\end{equation}
such that
\begin{equation}
\begin{split}
\langle f_{j},f_{j'}\rangle=\delta_{jj'}, & \ \langle f_{j}^{*},f_{j'}^{*}\rangle=-\delta_{jj'}\\
\langle f_{j}^{*},f_{j'}\rangle & =\langle f_{j},f_{j'}^{*}\rangle=0.
\end{split}
\end{equation}
The above inner product is well-defined since it is time-independent, which can easily be checked. The corresponding completeness relation reads
\begin{equation}
\sum_{j}[f_{j}(t,\vec{x})\partial^{0}f_{j}^{*}(t,\vec{x}')-f_{j}^{*}(t,\vec{x} )\partial^{0}f_{j}(t,\vec{x}')]=-\frac{i}{\sqrt{-g(x)}}\delta^{(3)}(\vec{x}-\vec{x}').
\end{equation}
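As an illustration, the orthonormality relations can be checked numerically in the familiar flat-space case. The sketch below assumes a 1+1-dimensional Minkowski box of length $L$ with periodic boundary conditions and a mass $m$ (included only so that $\omega_{n}\neq0$ for $n=0$), with signature $(-,+)$ so that $\partial^{0}=-\partial_{0}$; these choices are illustrative, not part of the general construction.

```python
import numpy as np

# Periodic box of length L, modes f_n = exp(-i(w_n t - k_n x)) / sqrt(2 w_n L),
# with k_n = 2 pi n / L and w_n = sqrt(k_n^2 + m^2).
L, m, M = 2 * np.pi, 1.0, 4096
x = np.linspace(0.0, L, M, endpoint=False)
dx = L / M

def mode(n, t=0.3):
    k = 2 * np.pi * n / L
    w = np.hypot(k, m)
    f = np.exp(-1j * (w * t - k * x)) / np.sqrt(2 * w * L)
    return f, -1j * w * f            # (f, d f / dt), time derivative analytic

def kg_inner(fa, fb):
    """<f,g> = -i * int dx f* <-> d^0 g, with d^0 = -d_t in (-,+) signature."""
    (f, ft), (g, gt) = fa, fb
    return 1j * np.sum(np.conj(f) * gt - np.conj(ft) * g) * dx

f1, f2 = mode(1), mode(2)
fs1 = (np.conj(f1[0]), np.conj(f1[1]))
print(kg_inner(f1, f1))              # ~ +1
print(kg_inner(f1, f2))              # ~  0
print(kg_inner(fs1, fs1))            # ~ -1
```

The positive-frequency modes come out with norm $+1$, the conjugate modes with norm $-1$, and distinct harmonics are orthogonal, matching the relations above.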
The field operator $\hat{\phi}(x)$ can be written in terms of the above basis solutions as follows
\begin{equation}\label{3.5}
\hat{\phi}(x)=\sum_{j}[\hat{a}_{j}f_{j}(x)+\hat{a}_{j}^{\dagger}f_{j}^{*}(x)],
\end{equation}
where the creation and annihilation operators satisfy the commutation relations
\begin{equation}\label{3.6}
[\hat{a}_{j},\hat{a}_{j'}]=0=[\hat{a}_{j}^{\dagger},\hat{a}_{j'}^{\dagger}], \
[\hat{a}_{j},\hat{a}_{j'}^{\dagger}]=\delta_{jj'}.
\end{equation}
The vacuum state $\ket{0}$ is defined by
\begin{equation}
\hat{a}_{j}\ket{0}=0, \ \forall j \ .
\end{equation}
Multi-particle states are obtained by applying products of the creation operators $\{\hat{a}_{j}^{\dagger}\}$ to the vacuum state, with a suitable normalization constant
\begin{equation}
\ket{j_{1}j_{2}\ldots j_{n}}=\mathcal{N}\sum_{\sigma\in S_{n}}\prod_{i=1}^{n}(\hat{a}_{j_{\sigma i}}^{\dagger})^{n_{i}}\ket{0},
\end{equation}
where $\mathcal{N}$ is a normalization factor. In terms of the creation and annihilation operators, the stress-energy tensor operator can be expressed as follows
\begin{equation}
\begin{split}
\hat{T}_{\mu\nu} & =\sum_{i,j}[\mathcal{T}_{\mu\nu}(f_{i},f_{j})\hat{a}_{i}\hat{a}_{j}+\mathcal{T}_{\mu\nu}(f_{i},f_{j}^{*})\hat{a}_{i}\hat{a}_{j}^{\dagger}+\mathcal{T}_{\mu\nu}(f_{i}^{*},f_{j})\hat{a}_{i}^{\dagger}\hat{a}_{j}+\mathcal{T}_{\mu\nu}(f_{i}^{*},f_{j}^{*})\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}]\\
\implies :\hat{T}_{\mu\nu}: & =\sum_{i,j}[\mathcal{T}_{\mu\nu}(f_{i},f_{j})\hat{a}_{i}\hat{a}_{j}+\mathcal{T}_{\mu\nu}(f_{i},f_{j}^{*})\hat{a}_{j}^{\dagger}\hat{a}_{i}+\mathcal{T}_{\mu\nu}(f_{i}^{*},f_{j})\hat{a}_{i}^{\dagger}\hat{a}_{j}+\mathcal{T}_{\mu\nu}(f_{i}^{*},f_{j}^{*})\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}],
\end{split}
\end{equation}
where $: \ :$ denotes normal ordering and $\mathcal{T}_{\mu\nu}(f_{i},f_{j})=\partial_{\mu}f_{i}\partial_{\nu}f_{j}$. Normal ordering ensures $\bra{0}:\hat{T}_{\mu\nu}:\ket{0}=0$. Contracting with $n^{\mu}n^{\nu}$, we obtain the relation
\begin{equation}\label{operator}
\begin{split}
:\hat{T}_{\mu\nu}:n^{\mu}n^{\nu} & =\sum_{i,j}\Big[\mathcal{N}(f_{i},f_{j})\hat{a}_{i}\hat{a}_{j}+\mathcal{N}(f_{i},f_{j}^{*})\hat{a}_{j}^{\dagger}\hat{a}_{i}+\mathcal{N}(f_{i}^{*},f_{j})\hat{a}_{i}^{\dagger}\hat{a}_{j}+\mathcal{N}(f_{i}^{*},f_{j}^{*})\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}\Big],
\end{split}
\end{equation}
where $\mathcal{N}(f_{i},f_{j})=\partial f_{i}\partial f_{j}\equiv n^{\mu}\partial_{\mu}f_{i}n^{\nu}\partial_{\nu}f_{j}$.
\section{Coherent states and NEC}\label{section 4}
Let us consider a generic coherent state given by
\begin{equation}\label{coherent1}
\ket{\mathcal{O}}=\hat{\mathcal{D}}(\{\mathcal{O}\})\ket{0}=\prod_{i}\hat{\mathcal{D}}(\mathcal{O}_{i})\ket{0},
\end{equation}
where $\hat{\mathcal{D}}(\mathcal{O}_{i})=e^{\mathcal{O}_{i}\hat{a}_{i}^{\dagger}-\mathcal{O}_{i}^{*}\hat{a}_{i}}$. Hence, the expectation value of the operator (\ref{operator}) with respect to the state (\ref{coherent1}) is given by
\begin{equation}
\begin{split}
\bra{\mathcal{O}}:\hat{T}_{\mu\nu}:n^{\mu}n^{\nu}\ket{\mathcal{O}} & =\sum_{i,j}\Big[\mathcal{N}(f_{i},f_{j})\mathcal{O}_{i}\mathcal{O}_{j}+\mathcal{N}(f_{i}^{*},f_{j})\mathcal{O}_{i}^{*}\mathcal{O}_{j}+\mathcal{N}(f_{i},f_{j}^{*})\mathcal{O}_{i}\mathcal{O}_{j}^{*}\\
& +\mathcal{N}(f_{i}^{*},f_{j}^{*})\mathcal{O}_{i}^{*}\mathcal{O}_{j}^{*}\Big].
\end{split}
\end{equation}
The above expression can further be simplified in the following way
\begin{equation}\label{result1}
\begin{split}
\bra{\mathcal{O}}:\hat{T}_{\mu\nu}:n^{\mu}n^{\nu}\ket{\mathcal{O}} & =\sum_{i,j}\Big[\mathcal{O}_{i}(\mathcal{O}_{j}\mathcal{N}(f_{i},f_{j})+\mathcal{N}(f_{i},f_{j}^{*})\mathcal{O}_{j}^{*})+\mathcal{O}_{i}^{*}(\mathcal{O}_{j}\mathcal{N}(f_{i}^{*},f_{j})\\
+\mathcal{N}(f_{i}^{*},f_{j}^{*})\mathcal{O}_{j}^{*})\Big] & =\sum_{i,j}\mathcal{N}(\mathcal{O}_{i}f_{i}+\mathcal{O}_{i}^{*}f_{i}^{*},\mathcal{O}_{j}f_{j}+\mathcal{O}_{j}^{*}f_{j}^{*})=4\mathcal{N}(\bar{\mathcal{O}},\bar{\mathcal{O}})\geq0, \ \bar{\mathcal{O}}=\sum_{i}\text{Re}(\mathcal{O}_{i}f_{i}).
\end{split}
\end{equation}
The above expression leads to the following theorem.
\begin{theorem}\label{Theorem1}
Every coherent state $\ket{\mathcal{O}}$ of the field operator $\hat{\phi}$ satisfies NEC.
\end{theorem}
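A minimal single-mode numerical illustration of theorem \ref{Theorem1}: here the complex number \verb|p| is an arbitrary stand-in for $n^{\mu}\partial_{\mu}f$ evaluated at one spacetime point (not a solution of any particular wave equation), so that $\mathcal{N}(f,f)=p^{2}$, $\mathcal{N}(f,f^{*})=|p|^{2}$, and the closed-form expectation value is $4\mathcal{N}(\bar{\mathcal{O}},\bar{\mathcal{O}})=(2\,\mathrm{Re}(\alpha p))^{2}\geq0$.

```python
import numpy as np
from scipy.linalg import expm

N = 80                                    # Fock truncation (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
ad = a.conj().T
vac = np.zeros(N); vac[0] = 1.0
coh = lambda al: expm(al * ad - np.conj(al) * a) @ vac   # |alpha> = D(alpha)|0>

def T_nn(p):
    # single-mode :T_{mu nu}: n^mu n^nu with N(f,f) = p**2, N(f,f*) = |p|**2
    return p**2 * a @ a + 2 * abs(p)**2 * ad @ a + np.conj(p)**2 * ad @ ad

rng = np.random.default_rng(0)
vals, exps = [], []
for _ in range(5):
    alpha = complex(*rng.normal(size=2))
    p = complex(*rng.normal(size=2))
    v = coh(alpha)
    vals.append(np.vdot(v, T_nn(p) @ v).real)      # numerical expectation
    exps.append((2 * np.real(alpha * p))**2)       # 4 N(Obar, Obar) >= 0
print(vals)
print(exps)
```

Each numerical expectation value matches the non-negative closed form $(2\,\mathrm{Re}(\alpha p))^{2}$, as the theorem requires.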
Let us now consider two such coherent states given by $\ket{\mathcal{O}^{(1)}}$ and $\ket{\mathcal{O}^{(2)}}$, then
\begin{equation}\label{interference}
\begin{split}
\bra{\mathcal{O}^{(1)}}:\hat{T}_{\mu\nu}:n^{\mu}n^{\nu}\ket{\mathcal{O}^{(2)}} & =\sum_{i,j}\Big[\mathcal{N}(f_{i},f_{j})\mathcal{O}_{i}^{(2)}\mathcal{O}_{j}^{(2)}+\mathcal{N}(f_{i},f_{j}^{*})\mathcal{O}_{i}^{(2)}\mathcal{O}_{j}^{(1)*}\\
& +\mathcal{N}(f_{i}^{*},f_{j})\mathcal{O}_{i}^{(1)*}\mathcal{O}_{j}^{(2)}+\mathcal{N}(f_{i}^{*},f_{j}^{*})\mathcal{O}_{i}^{(1)*}\mathcal{O}_{j}^{(1)*}\Big]\braket{\mathcal{O}^{(1)}|\mathcal{O}^{(2)}}\\
& =\sum_{i,j}\mathcal{N}(f_{i}\mathcal{O}_{i}^{(2)}+f_{i}^{*}\mathcal{O}_{i}^{(1)*},f_{j}\mathcal{O}_{j}^{(2)}+f_{j}^{*}\mathcal{O}_{j}^{(1)*})\braket{\mathcal{O}^{(1)}|\mathcal{O}^{(2)}}.
\end{split}
\end{equation}
Using the result in (\ref{inner-product}), we can express
\begin{equation}
\braket{\mathcal{O}^{(1)}|\mathcal{O}^{(2)}}=e^{-\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}+i\Delta(\mathcal{O}^{(1)},\mathcal{O}^{(2)})}\equiv\prod_{i}\Big[e^{-\frac{1}{2}|\mathcal{O}_{i}^{(1)}-\mathcal{O}_{i}^{(2)}|^{2}+i\text{Im}(\mathcal{O}_{i}^{(1)*}\mathcal{O}_{i}^{(2)})}\Big].
\end{equation}
Hence, the expression (\ref{interference}) can be expressed as
\begin{equation}\label{result2}
\bra{\mathcal{O}^{(1)}}:\hat{T}_{\mu\nu}:n^{\mu}n^{\nu}\ket{\mathcal{O}^{(2)}}=\mathcal{N}(F,F)e^{-\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}+i\Delta(\mathcal{O}^{(1)},\mathcal{O}^{(2)})},
\end{equation}
where $F=\sum_{i}(f_{i}\mathcal{O}_{i}^{(2)}+f_{i}^{*}\mathcal{O}_{i}^{(1)*})$ is a complex function. The expression (\ref{result2}) has an important implication, which we now discuss.
In equation (\ref{0.1}), it is noted that the quantum states which do not violate NEC do not necessarily form a vector space. Although theorem \ref{Theorem1} shows that every coherent state satisfies NEC, (\ref{result2}) shows that not all linear combinations of coherent states satisfy NEC, as expected. In the next few theorems, we discuss some combinations of coherent states, subject to certain constraints, that do satisfy NEC. This helps in constructing a subset of the Hilbert space containing quantum states that satisfy NEC; hence, the following theorems play an important role in classifying quantum states satisfying NEC.
Given two coherent states $\ket{\mathcal{O}^{(1)}}$ and $\ket{\mathcal{O}^{(2)}}$ satisfying NEC, their linear span is a two-dimensional vector space. Let us consider a generic state from this two-dimensional vector space, denoted by $\ket{\psi_{+}}=\mathcal{A}(\ket{\mathcal{O}^{(1)}}+e^{\delta_{1}+i\delta_{2}}\ket{\mathcal{O}^{(2)}})$; then
\begin{equation}\label{plus-state}
\begin{split}
\bra{\psi_{+}} & :\hat{T}_{\mu\nu}:n^{\mu}n^{\nu}\ket{\psi_{+}}=|\mathcal{A}|^{2}\Big[4\mathcal{N}(\bar{\mathcal{O}}_{1},\bar{\mathcal{O}}_{1})+4e^{2\delta_{1}}\mathcal{N}(\bar{\mathcal{O}}_{2},\bar{\mathcal{O}}_{2})\\
+\mathcal{N}(F,F) & e^{\delta_{1}-\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}+i(\Delta(\mathcal{O}^{(1)},\mathcal{O}^{(2)})+\delta_{2})}+\mathcal{N}(F^{*},F^{*})e^{\delta_{1}-\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}-i(\Delta(\mathcal{O}^{(1)},\mathcal{O}^{(2)})+\delta_{2})}\Big]\\
=4|\mathcal{A}|^{2} & \Big[\mathcal{N}(\bar{\mathcal{O}}_{1},\bar{\mathcal{O}}_{1})+e^{2\delta_{1}}\mathcal{N}(\bar{\mathcal{O}}_{2},\bar{\mathcal{O}}_{2})+\frac{e^{\delta_{1}}}{2}\text{Re}(\mathcal{N}(F,F)e^{i(\Delta(\mathcal{O}^{(1)},\mathcal{O}^{(2)})+\delta_{2})})e^{-\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}}\Big].
\end{split}
\end{equation}
Similarly, for the state $\ket{\psi_{-}}=\mathcal{A}(\ket{\mathcal{O}^{(1)}}-e^{\delta_{1}+i\delta_{2}}\ket{\mathcal{O}^{(2)}})$,
\begin{equation}\label{minus-state}
\begin{split}
\bra{\psi_{-}} & :\hat{T}_{\mu\nu}:n^{\mu}n^{\nu}\ket{\psi_{-}}=4|\mathcal{A}|^{2}\Big[\mathcal{N}(\bar{\mathcal{O}}_{1},\bar{\mathcal{O}}_{1})+e^{2\delta_{1}}\mathcal{N}(\bar{\mathcal{O}}_{2},\bar{\mathcal{O}}_{2})\\
& -\frac{e^{\delta_{1}}}{2}\text{Re}(\mathcal{N}(F,F)e^{i(\Delta(\mathcal{O}^{(1)},\mathcal{O}^{(2)})+\delta_{2})})e^{-\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}}\Big].
\end{split}
\end{equation}
The expectation values of $:\hat{T}_{\mu\nu}:n^{\mu}n^{\nu}$ with respect to the states $\ket{\psi_{\pm}}$ in (\ref{plus-state}) and (\ref{minus-state}) lead to the following theorem.
\begin{theorem}\label{Theorem2}
Given two coherent states $\ket{\mathcal{O}^{(1)}}$ and $\ket{\mathcal{O}^{(2)}}$, the states of the form $\ket{\psi_{\pm}}=\mathcal{A}(\ket{\mathcal{O}^{(1)}}\pm e^{\delta_{1}+i\delta_{2}}\ket{\mathcal{O}^{(2)}})$ satisfy NEC provided the following condition holds
\begin{equation}\label{inequality}
\begin{split}
\mathcal{N}(\bar{\mathcal{O}}_{1},\bar{\mathcal{O}}_{1}) & +e^{2\delta_{1}}\mathcal{N}(\bar{\mathcal{O}}_{2},\bar{\mathcal{O}}_{2})\pm\frac{e^{\delta_{1}}}{2}\text{Re}(\mathcal{N}(F,F)e^{i(\Delta(\mathcal{O}^{(1)},\mathcal{O}^{(2)})+\delta_{2})})e^{-\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}}\geq0\\
\implies e^{\delta_{1}}\Big|\text{Re}[\mathcal{N}(F,F) & e^{i(\Delta(\mathcal{O}^{(1)},\mathcal{O}^{(2)})+\delta_{2})}]\Big|e^{-\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}}\leq2[\mathcal{N}(\bar{\mathcal{O}}_{1},\bar{\mathcal{O}}_{1})+e^{2\delta_{1}}\mathcal{N}(\bar{\mathcal{O}}_{2},\bar{\mathcal{O}}_{2})].
\end{split}
\end{equation}
\end{theorem}
Since $\mathcal{N}(F,F)$ is a complex function, we can write $\mathcal{N}(F(x),F(x))=\mathcal{F}(x)e^{i\tilde{\Delta}_{12}(x)}$, where $\mathcal{F}(x)$ is a non-negative real-valued function. Hence, the inequality (\ref{inequality}) can be expressed as follows
\begin{equation}\label{inequality2}
\begin{split}
\mathcal{F}(x)|\cos(\Delta(\mathcal{O}^{(1)},\mathcal{O}^{(2)})+\tilde{\Delta}_{12}(x)+\delta_{2})|e^{-\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}} &\leq2[e^{-\delta_{1}}\mathcal{N}(\bar{\mathcal{O}}_{1},\bar{\mathcal{O}}_{1})+e^{\delta_{1}}\mathcal{N}(\bar{\mathcal{O}}_{2},\bar{\mathcal{O}}_{2})]\\
& =2(e^{-\delta_{1}}\mathcal{F}_{1}(x)+e^{\delta_{1}}\mathcal{F}_{2}(x)),
\end{split}
\end{equation}
where the $\mathcal{N}(\bar{\mathcal{O}}_{i},\bar{\mathcal{O}}_{i})=\mathcal{F}_{i}(x)$ are non-negative real-valued functions. An example of the inequality (\ref{inequality}) is provided in the Appendix. The inequality (\ref{inequality2}) leads to the following theorem.
\begin{theorem}\label{Theorem3}
If the following condition
\begin{equation}\label{condition}
\frac{e^{-\delta_{1}}\mathcal{F}_{1}(x)+e^{\delta_{1}}\mathcal{F}_{2}(x)}{\mathcal{F}(x)}e^{\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}}\geq\frac{1}{2},
\end{equation}
holds for all the points in the spacetime, then the states of the form $\ket{\psi_{\pm}}$ definitely hold NEC.
\end{theorem}
The above condition depends strongly on the states $\ket{\mathcal{O}^{(1)}},\ket{\mathcal{O}^{(2)}}$ and on the mode solutions of the Klein--Gordon equation. It is clear that the condition (\ref{condition}) is more likely to be satisfied if $||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||\gg1$ or $|\delta_{1}|\gg1$. It is important to note that the above theorems remain valid even if a mass term is included in the action (\ref{action}).
Recall the following definitions
\begin{equation}
\begin{split}
\bar{\mathcal{O}}_{1}=\frac{1}{2}\sum_{i}(\mathcal{O}_{i}^{(1)}f_{i}+\mathcal{O}_{i}^{(1)*}f_{i}^{*}), & \ \bar{\mathcal{O}}_{2}=\frac{1}{2}\sum_{i}(\mathcal{O}_{i}^{(2)}f_{i}+\mathcal{O}_{i}^{(2)*}f_{i}^{*}), \ F=\sum_{i}(f_{i}\mathcal{O}_{i}^{(2)}+f_{i}^{*}\mathcal{O}_{i}^{(1)*}),
\end{split}
\end{equation}
and therefore, we can write the following
\begin{equation}
\begin{split}
\mathcal{N}(\bar{\mathcal{O}}_{1},\bar{\mathcal{O}}_{2}) & =\frac{1}{4}\sum_{i,j}\Big[\mathcal{O}_{i}^{(1)}\mathcal{O}_{j}^{(2)}\partial f_{i}\partial f_{j}+\mathcal{O}_{i}^{(1)}\mathcal{O}_{j}^{(2)*}\partial f_{i}\partial f_{j}^{*}+\mathcal{O}_{i}^{(1)*}\mathcal{O}_{j}^{(2)}\partial f_{i}^{*}\partial f_{j}+\mathcal{O}_{i}^{(1)*}\mathcal{O}_{j}^{(2)*}\partial f_{i}^{*}\partial f_{j}^{*}\Big]\\
|\mathcal{N}(F,F)| & =\partial F\partial F^{*}=\sum_{i,j}\Big[\mathcal{O}_{i}^{(1)}\mathcal{O}_{j}^{(2)}\partial f_{i}\partial f_{j}+\mathcal{O}_{i}^{(1)}\mathcal{O}_{j}^{(1)*}\partial f_{i}\partial f_{j}^{*}+\mathcal{O}_{i}^{(2)*}\mathcal{O}_{j}^{(2)}\partial f_{i}^{*}\partial f_{j}+\mathcal{O}_{i}^{(2)*}\mathcal{O}_{j}^{(1)*}\partial f_{i}^{*}\partial f_{j}^{*}\Big],
\end{split}
\end{equation}
where $\partial\equiv n^{\mu}\partial_{\mu}$. Defining $\mathcal{O}_{f}^{(1,2)}\equiv\sum_{i}f_{i}\mathcal{O}_{i}^{(1,2)}$ and using the above expressions, we obtain the following inequality
\begin{equation}
\begin{split}
|\mathcal{N}(F,F)|-4\mathcal{N}(\bar{\mathcal{O}}_{1},\bar{\mathcal{O}}_{2}) & =\Big[\partial\mathcal{O}_{f}^{(1)}\partial\mathcal{O}_{f}^{(1)*}+\partial\mathcal{O}_{f}^{(2)}\partial\mathcal{O}_{f}^{(2)*}-\partial\mathcal{O}_{f}^{(1)}\partial\mathcal{O}_{f}^{(2)*}-\partial\mathcal{O}_{f}^{(1)*}\partial\mathcal{O}_{f}^{(2)}\Big]\\
& =\partial(\mathcal{O}_{f}^{(1)}-\mathcal{O}_{f}^{(2)})\partial(\mathcal{O}_{f}^{(1)*}-\mathcal{O}_{f}^{(2)*})=|\partial(\mathcal{O}_{f}^{(1)}-\mathcal{O}_{f}^{(2)})|^{2}\geq0\\
\implies\frac{\mathcal{N}(\bar{\mathcal{O}}_{1},\bar{\mathcal{O}}_{2})}{|\mathcal{N}(F,F)|} & \leq\frac{1}{4}.
\end{split}
\end{equation}
From theorem \ref{Theorem2} and theorem \ref{Theorem3}, it follows that if $\frac{e^{-\delta_{1}}\mathcal{F}_{1}(x)+e^{\delta_{1}}\mathcal{F}_{2}(x)}{\mathcal{F}(x)}e^{\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}}<\frac{1}{2}$ for at least one spacetime point $x$ and some $\delta_{1}$, and
\begin{equation}\label{NEC-violation}
\mathcal{F}(x)|\cos(\Delta(\mathcal{O}^{(1)},\mathcal{O}^{(2)})+\tilde{\Delta}_{12}(x)+\delta_{2})|e^{-\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}}>2[e^{-\delta_{1}}\mathcal{N}(\bar{\mathcal{O}}_{1},\bar{\mathcal{O}}_{1})+e^{\delta_{1}}\mathcal{N}(\bar{\mathcal{O}}_{2},\bar{\mathcal{O}}_{2})],
\end{equation}
then the states $\ket{\psi_{\pm}}$ violate NEC. Since $\frac{e^{-\delta_{1}}\mathcal{F}_{1}(x)+e^{\delta_{1}}\mathcal{F}_{2}(x)}{\mathcal{F}(x)}e^{\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}}<\frac{1}{2}$, there always exists a relative phase $\bar{\delta}_{2}$ for which (\ref{NEC-violation}) is satisfied. This shows that not all states belonging to the linear span of two coherent states satisfy NEC. Further, the minimum over $\delta_{1}$ of the function $\frac{e^{-\delta_{1}}\mathcal{F}_{1}(x)+e^{\delta_{1}}\mathcal{F}_{2}(x)}{\mathcal{F}(x)}$ is $2\frac{\sqrt{\mathcal{F}_{1}(x)\mathcal{F}_{2}(x)}}{\mathcal{F}(x)}$. Therefore, if there exists a $\bar{\delta}_{2}$ for which the following condition
\begin{equation}
|\cos(\Delta(\mathcal{O}^{(1)},\mathcal{O}^{(2)})+\tilde{\Delta}_{12}(x)+\delta_{2})|e^{-\frac{1}{2}||\mathcal{O}^{(1)}-\mathcal{O}^{(2)}||^{2}}>4\frac{\sqrt{\mathcal{F}_{1}(x)\mathcal{F}_{2}(x)}}{\mathcal{F}(x)},
\end{equation}
is satisfied for at least one spacetime point $x$, then NEC is violated. It is important to remember that $\delta_{2}\in[0,\pi]$ for the states $\ket{\psi_{\pm}}$.
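To make the violation concrete, here is a single-mode toy example (with \verb|p| again an arbitrary stand-in for $n^{\mu}\partial_{\mu}f$ at one point): the superposition $\ket{\alpha}+\ket{-\alpha}$ with purely imaginary $\alpha$ corresponds to $\ket{\psi_{+}}$ with $\mathcal{O}^{(2)}=-\mathcal{O}^{(1)}$ and $\delta_{1}=\delta_{2}=0$, and for real $p$ its expectation value is $2p^{2}|\alpha|^{2}(\tanh|\alpha|^{2}-1)<0$.

```python
import numpy as np
from scipy.linalg import expm

N = 80
a = np.diag(np.sqrt(np.arange(1, N)), 1)
ad = a.conj().T
vac = np.zeros(N); vac[0] = 1.0
coh = lambda al: expm(al * ad - np.conj(al) * a) @ vac

p = 1.0          # stand-in for n.df at one spacetime point (chosen real)
T = p**2 * a @ a + 2 * abs(p)**2 * ad @ a + np.conj(p)**2 * ad @ ad

alpha = 1j       # purely imaginary coherent amplitude
psi = coh(alpha) + coh(-alpha)        # even "cat" superposition |a> + |-a>
psi = psi / np.linalg.norm(psi)
cat_val = np.vdot(psi, T @ psi).real
print(cat_val)   # 2*(tanh(1) - 1), negative: NEC violated by the superposition
```

Here $\langle\hat{a}\hat{a}\rangle=\alpha^{2}=-1$ while $\langle\hat{a}^{\dagger}\hat{a}\rangle=\tanh(1)<1$, so the interference term overwhelms the diagonal terms, exactly the mechanism behind the inequality above.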
The above approach can be generalized to an $m$-dimensional subspace of the Hilbert space of matter quantum states. To see this, let us consider a collection of $m$ linearly independent coherent states $\{\ket{\mathcal{O}^{(i)}}\}_{i=1}^{m}$. A generic state belonging to the linear span of $\{\ket{\mathcal{O}^{(i)}}\}_{i=1}^{m}$ can be expressed as follows
\begin{equation}\label{4.17}
\ket{\psi}=\mathcal{A}[\ket{\mathcal{O}^{(1)}}+e^{\delta_{2}+i\phi_{2}}\ket{\mathcal{O}^{(2)}}+e^{\delta_{3}+i\phi_{3}}\ket{\mathcal{O}^{(3)}}+\ldots+e^{\delta_{m}+i\phi_{m}}\ket{\mathcal{O}^{(m)}}],
\end{equation}
where $\{\delta_{i}\}_{i=2}^{m}\subset\mathbb{R}$ and $\{\phi_{i}\}_{i=2}^{m}\subset[0,2\pi)$. The expectation value of $:\hat{T}_{\mu\nu}:n^{\mu}n^{\nu}$ with respect to the state $\ket{\psi}$ can be expressed as follows
\begin{equation}\label{4.18}
\begin{split}
\bra{\psi}:\hat{T}_{\mu\nu}: & n^{\mu}n^{\nu}\ket{\psi}=4|\mathcal{A}|^{2}\Big[\sum_{i=1}^{m}e^{2\delta_{i}}\mathcal{N}(\bar{\mathcal{O}}_{i},\bar{\mathcal{O}}_{i})+\sum_{i<j}\frac{e^{\delta_{i}+\delta_{j}}}{2}\text{Re}\left(\mathcal{N}(F_{ij},F_{ij})e^{i(\Delta(\mathcal{O}^{(i)},\mathcal{O}^{(j)})+\phi_{j}-\phi_{i})}\right)e^{-\frac{1}{2}||\mathcal{O}^{(i)}-\mathcal{O}^{(j)}||^{2}}\Big]\\
F_{ij}(x) & =\sum_{k}(f_{k}(x)\mathcal{O}_{k}^{(j)}+f_{k}^{*}(x)\mathcal{O}_{k}^{(i)*}), \ \delta_{1}=\phi_{1}=0, \ \Delta(\mathcal{O}^{(i)},\mathcal{O}^{(j)})=\sum_{k}\text{Im}(\mathcal{O}_{k}^{(i)*}\mathcal{O}_{k}^{(j)}).
\end{split}
\end{equation}
Expressing $\mathcal{N}(F_{ij}(x),F_{ij}(x))=\mathcal{F}_{ij}(x)e^{i\Delta_{ij}(x)}$, the above expression can be re-expressed as follows
\begin{equation}
\begin{split}
\bra{\psi}:\hat{T}_{\mu\nu}(x):n^{\mu}n^{\nu}\ket{\psi} & =4|\mathcal{A}|^{2}\Bigg[\sum_{i=1}^{m}e^{2\delta_{i}}\mathcal{N}(\bar{\mathcal{O}}_{i}(x),\bar{\mathcal{O}}_{i}(x))+\sum_{i<j}\Big[\frac{e^{\delta_{i}+\delta_{j}}\mathcal{F}_{ij}(x)}{2}e^{-\frac{1}{2}||\mathcal{O}^{(i)}-\mathcal{O}^{(j)}||^{2}}\\
& \times\cos(\Delta(\mathcal{O}^{(i)},\mathcal{O}^{(j)})+\Delta_{ij}(x)+\phi_{j}-\phi_{i})\Big]\Bigg].
\end{split}
\end{equation}
The above expression leads to the following theorem, which generalizes theorem \ref{Theorem2} to $m$ states.
\begin{theorem}\label{Theorem4}
Given a collection of $m$ linearly independent coherent states $\{\ket{\mathcal{O}^{(i)}}\}_{i=1}^{m}$, a state of the form $\ket{\psi}$ in (\ref{4.17}) violates NEC if, for some spacetime point $x$ and some choice of the parameters $\{\delta_{i}\}_{i=2}^{m}, \{\phi_{i}\}_{i=2}^{m}$,
\begin{equation}
\sum_{i<j}\frac{e^{\delta_{i}+\delta_{j}}\mathcal{F}_{ij}(x)}{2}\cos(\Delta(\mathcal{O}^{(i)},\mathcal{O}^{(j)})+\Delta_{ij}(x)+\phi_{j}-\phi_{i})e^{-\frac{1}{2}||\mathcal{O}^{(i)}-\mathcal{O}^{(j)}||^{2}}<-\sum_{i=1}^{m}e^{2\delta_{i}}\mathcal{N}(\bar{\mathcal{O}}_{i}(x),\bar{\mathcal{O}}_{i}(x)),
\end{equation}
is satisfied; this is possible only if
\begin{equation}
\sum_{i=1}^{m}e^{2\delta_{i}}\mathcal{N}(\bar{\mathcal{O}}_{i}(x),\bar{\mathcal{O}}_{i}(x))<\sum_{i<j}\frac{e^{\delta_{i}+\delta_{j}}\mathcal{F}_{ij}(x)}{2}e^{-\frac{1}{2}||\mathcal{O}^{(i)}-\mathcal{O}^{(j)}||^{2}}.
\end{equation}
\end{theorem}
If the above condition holds for some value of $m$, then we conclude that NEC is violated by the matter state. Theorems \ref{Theorem2} and \ref{Theorem4} give the conditions, in the form of mathematical inequalities, under which a linear combination of coherent states of the form (\ref{4.17}) violates NEC. Theorem \ref{Theorem3}, on the other hand, follows directly from theorem \ref{Theorem2}.
Suppose the matter in a generic curved spacetime is described by a density matrix \cite{fano1957description, sabbaghzadeh2007role} $\hat{\rho}$, which could be either a pure or a mixed state. Then we obtain the following relation
\begin{equation}
\begin{split}
\langle:\hat{T}_{\mu\nu}:n^{\mu}n^{\nu}\rangle & =\text{Tr}[\hat{\rho}:\hat{T}_{\mu\nu}(x):n^{\mu}n^{\nu}]=\sum_{j}\bra{\mathcal{O}^{(j)}}\hat{\rho}:\hat{T}_{\mu\nu}(x):n^{\mu}n^{\nu}\ket{\mathcal{O}^{(j)}}\\
& =\sum_{i,j}\bra{\mathcal{O}^{(j)}}\hat{\rho}\ket{\mathcal{O}^{(i)}}\bra{\mathcal{O}^{(i)}}:\hat{T}_{\mu\nu}(x):n^{\mu}n^{\nu}\ket{\mathcal{O}^{(j)}}\\
& =\sum_{i,j}\Big[\bra{\mathcal{O}^{(j)}}\hat{\rho}\ket{\mathcal{O}^{(i)}}\mathcal{N}(F_{ij},F_{ij})e^{-\frac{1}{2}||\mathcal{O}^{(i)}-\mathcal{O}^{(j)}||^{2}}e^{i\Delta(\mathcal{O}^{(i)},\mathcal{O}^{(j)})}\Big],
\end{split}
\end{equation}
where $\{\ket{\mathcal{O}^{(i)}}\}$ is a complete set of coherent states. Defining $\rho_{ji}\equiv\frac{\bra{\mathcal{O}^{(j)}}\hat{\rho}\ket{\mathcal{O}^{(i)}}}{\braket{\mathcal{O}^{(j)}|\mathcal{O}^{(i)}}}$, the above expression can be simplified further as follows
\begin{equation}
\begin{split}
\langle:\hat{T}_{\mu\nu}:n^{\mu}n^{\nu}\rangle & =\sum_{i,j}\Big[\rho_{ji} \ \mathcal{N}(F_{ij},F_{ij})e^{-||\mathcal{O}^{(i)}-\mathcal{O}^{(j)}||^{2}}\Big]\\
& =\sum_{i,j}\Big[\text{Re}\left(\rho_{ji} \ \mathcal{N}(F_{ij},F_{ij})\right)e^{-||\mathcal{O}^{(i)}-\mathcal{O}^{(j)}||^{2}}\Big],
\end{split}
\end{equation}
where we have used the following relation
\begin{equation}
\Delta(\mathcal{O}^{(i)},\mathcal{O}^{(j)})=-\Delta(\mathcal{O}^{(j)},\mathcal{O}^{(i)}).
\end{equation}
\begin{theorem}
A density matrix $\hat{\rho}$ describing the state of matter in a generic curved spacetime violates NEC if and only if there exists a spacetime point $x$ such that the following condition
\begin{equation}
\sum_{i,j}\Big[\text{Re}[\rho_{ji} \ \mathcal{N}(F_{ij}(x),F_{ij}(x))]e^{-||\mathcal{O}^{(i)}-\mathcal{O}^{(j)}||^{2}}\Big]<0,
\end{equation}
is satisfied.
\end{theorem}
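One immediate consequence of this criterion: for a density matrix diagonal in the coherent-state basis, $\hat{\rho}=\sum_{i}w_{i}\ket{\mathcal{O}^{(i)}}\bra{\mathcal{O}^{(i)}}$ with $w_{i}\geq0$, the expectation value is a convex combination of the non-negative coherent-state values, so such mixtures satisfy NEC. A single-mode numerical sketch (with \verb|p| an arbitrary stand-in for $n^{\mu}\partial_{\mu}f$ at one point, and arbitrarily chosen weights and amplitudes):

```python
import numpy as np
from scipy.linalg import expm

N = 80
a = np.diag(np.sqrt(np.arange(1, N)), 1)
ad = a.conj().T
vac = np.zeros(N); vac[0] = 1.0
coh = lambda al: expm(al * ad - np.conj(al) * a) @ vac

p = 0.8 - 0.3j
T = p**2 * a @ a + 2 * abs(p)**2 * ad @ a + np.conj(p)**2 * ad @ ad

# statistical mixture of two coherent states: no interference term survives
v1, v2 = coh(1.0 + 0.5j), coh(-0.7j)
rho = 0.4 * np.outer(v1, v1.conj()) + 0.6 * np.outer(v2, v2.conj())
mix_val = np.trace(rho @ T).real
print(mix_val)   # 0.4*(2*Re(p*a1))^2 + 0.6*(2*Re(p*a2))^2 >= 0
```

This contrasts with a coherent superposition of the same two states, where the off-diagonal $\rho_{ji}$ terms can drive the sum negative.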
\section{Dynamical violation of NEC}
Earlier, we showed that certain combinations of coherent states satisfy NEC. However, it is not clear under what circumstances the time evolution of a NEC-satisfying quantum state also satisfies NEC. The result of this section is relevant for establishing the existence of spacetime singularities from the Raychaudhuri equation, which becomes an evolution equation when the affine parameter is chosen to be the coordinate time. Moreover, in quantum theory, the nature of a solution of the Raychaudhuri equation depends on the many-body matter quantum state and, in particular, on the null energy condition of that state, expressed by the quantity $\bra{\psi(t)}:\hat{T}_{\mu\nu}(t,\vec{x}):n^{\mu}(t,\vec{x})n^{\nu}(t,\vec{x})\ket{\psi(t)}$ in the interaction picture. In this section, it is shown that time evolution can indeed map a NEC-satisfying quantum state outside the set of NEC-satisfying quantum states. The possibility of this kind of NEC violation depends on the Hamiltonian of the system, the underlying curved spacetime, and the initial quantum state $\ket{\psi(t_{0})}$.
\subsection{Mathematical formulation}
In this section, we address whether a quantum state of matter can violate NEC dynamically, i.e., whether the time evolution of a NEC-satisfying quantum state of matter can generate a state that violates NEC. In order to compute the time evolution of quantum states and observables in an interacting QFT, the interaction picture is often used. To define the interaction picture, the Hamiltonian is divided into two parts, $\hat{H}=\hat{H}_{0}+\hat{H}_{\text{int}}$. In this picture, the operators carry the time dependence through the free Hamiltonian $\hat{H}_{0}$, whereas the states carry the time dependence through the interaction Hamiltonian $\hat{H}_{\text{int}}$, in the following way
\begin{equation}\label{6.1}
i\frac{d}{dt}\ket{\psi(t)}_{I}=\hat{H}_{\text{int},I}\ket{\psi(t)}_{I}, \ \hat{\mathcal{O}}_{I}(t)=e^{i\hat{H}_{0}t}\hat{\mathcal{O}}_{S}e^{-i\hat{H}_{0}t},
\end{equation}
where $\hat{\mathcal{O}}_{S}$ is the operator in the Schr\"{o}dinger picture and $\hat{\mathcal{O}}_{I}(t)$ is the operator in the interaction picture. From now on, we drop the labels $I$ and $S$.
Let us consider a state $\ket{\psi(t_{1})}$ satisfying $\bra{\psi(t_{1})}:\hat{T}_{\mu\nu}(t_{1},\vec{x}):n^{\mu}(t_{1},\vec{x})n^{\nu}(t_{1},\vec{x})\ket{\psi(t_{1})}\geq0$. Then, at time $t_{2}>t_{1}$, the quantum state becomes
\begin{equation}
\ket{\psi(t_{2})}=\mathcal{T}\left(e^{-i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}\right)\ket{\psi(t_{1})},
\end{equation}
where $\mathcal{T}$ is the time-ordering operator. Using the above time-evolved state, we obtain the following relation
\begin{equation}\label{6.3}
\begin{split}
\bra{\psi(t_{2})} & :\hat{T}_{\mu\nu}(t_{2},\vec{x}):n^{\mu}(t_{2},\vec{x})n^{\nu}(t_{2},\vec{x})\ket{\psi(t_{2})}\\
& =\bra{\psi(t_{1})}\bar{\mathcal{T}}\left(e^{i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}\right):\hat{T}_{\mu\nu}(t_{2},\vec{x}):\mathcal{T}\left(e^{-i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}\right)\ket{\psi(t_{1})}n^{\mu}(t_{2},\vec{x})n^{\nu}(t_{2},\vec{x})\\
& \equiv\bra{\psi(t_{1})}\tilde{\mathcal{T}}\left(e^{i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}:\hat{T}_{\mu\nu}(t_{2},\vec{x}):e^{-i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}\right)\ket{\psi(t_{1})}n^{\mu}(t_{2},\vec{x})n^{\nu}(t_{2},\vec{x}),
\end{split}
\end{equation}
where $\bar{\mathcal{T}}$ is the anti-time-ordering operator. If $\hat{H}_{0}$ is time-independent, then $\mathcal{T}\left(e^{-i\int_{t_{1}}^{t_{2}}\hat{H}_{0}dt}\right)=e^{-i\hat{H}_{0}(t_{2}-t_{1})}$. For the sake of simplicity, we have defined the operation $\tilde{\mathcal{T}}$ in the last line of equation (\ref{6.3}).
Further, the exponent of the time-evolution operator can be expressed in terms of the Hamiltonian density, $\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt=\int d^{4}y \ \hat{\mathcal{H}}_{\text{int}}(y)$, where $\hat{\mathcal{H}}_{\text{int}}(y)$ is the interaction Hamiltonian density operator. Hence, we obtain the following relation
\begin{equation}\label{commutation}
\begin{split}
\tilde{\mathcal{T}} & \left(e^{i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}:\hat{T}_{\mu\nu}(t_{2},\vec{x}):e^{-i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}\right)=\bar{\mathcal{T}}\left(e^{i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}\right):\hat{T}_{\mu\nu}(t_{2},\vec{x}):\mathcal{T}\left(e^{-i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}\right)\\
& =:\hat{T}_{\mu\nu} (t_{2},\vec{x}):+i\int d^{4}y[\hat{\mathcal{H}}_{\text{int}}(y),:\hat{T}_{\mu\nu}(t_{2},\vec{x}):]+\ldots.
\end{split}
\end{equation}
The above relation shows that, at leading order, the violation of NEC by a quantum state under time evolution depends on the commutator in the second term of the above equation. This commutator takes into account the effect of the curved spacetime, as the Hamiltonian density and the energy-momentum tensor both depend explicitly on the metric for a minimally coupled field theory. From the micro-causality condition, it follows that $[\hat{\mathcal{O}}_{1}(x),\hat{\mathcal{O}}_{2}(y)]=0$ whenever the spacetime points $x$ and $y$ are spacelike separated. As a consequence, only the operators located within the past light-cone of the spacetime point $(t_{2},\vec{x})$ give rise to non-zero commutators in (\ref{commutation}).
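The leading-order structure of (\ref{commutation}) is the operator identity $e^{iA}\hat{B}e^{-iA}=\hat{B}+i[A,\hat{B}]+O(A^{2})$. A finite-dimensional numerical sketch, with random Hermitian matrices standing in for $\int\hat{\mathcal{H}}_{\text{int}}$ and $:\hat{T}_{\mu\nu}:n^{\mu}n^{\nu}$ (the matrix size and coupling scale are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def herm(n):
    """Random Hermitian matrix."""
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

A = 1e-3 * herm(6)   # plays the role of int H_int dt (small coupling)
B = herm(6)          # plays the role of :T_mn: n^m n^n

exact = expm(1j * A) @ B @ expm(-1j * A)   # full conjugation
first = B + 1j * (A @ B - B @ A)           # B + i[A, B], leading order
print(np.linalg.norm(exact - first))        # residual is O(|A|^2)
```

The first-order truncation reproduces the exact conjugation far better than keeping $\hat{B}$ alone, confirming that the commutator term controls the leading correction.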
Analogously, in the case of a system described by a mixed state with the density matrix $\hat{\rho}$, we obtain the following relation
\begin{equation}\label{6.7}
\begin{split}
\langle:\hat{T}_{\mu\nu} & (t_{2},\vec{x}):\rangle n^{\mu}(t_{2},\vec{x})n^{\nu}(t_{2},\vec{x})=\text{Tr}[\hat{\rho}(t_{2}):\hat{T}_{\mu\nu}(t_{2},\vec{x}):]n^{\mu}(t_{2},\vec{x})n^{\nu}(t_{2},\vec{x})\\
& =\text{Tr}\Bigg[\mathcal{T}\left(e^{-i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}\right)\hat{\rho}(t_{1})\bar{\mathcal{T}}\left(e^{i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}\right):\hat{T}_{\mu\nu}(t_{2},\vec{x}):\Bigg]n^{\mu}(t_{2},\vec{x})n^{\nu}(t_{2},\vec{x})\\
& =\text{Tr}\Bigg[\hat{\rho}(t_{1})\tilde{\mathcal{T}}\left(e^{i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}:\hat{T}_{\mu\nu}(t_{2},\vec{x}): \ e^{-i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}\right)\Bigg]n^{\mu}(t_{2},\vec{x})n^{\nu}(t_{2},\vec{x}),
\end{split}
\end{equation}
where the cyclicity of the trace is used. Hence, the results (\ref{6.3}) and (\ref{6.7}) are essentially the same. If the density matrix is a pure state ($\hat{\rho}(t_{1})=\ket{\psi(t_{1})}\bra{\psi(t_{1})}$), the expression (\ref{6.7}) reduces to (\ref{6.3}); however, (\ref{6.7}) is also valid for a mixed state. Therefore, (\ref{6.7}) is more general than (\ref{6.3}) in describing the null energy condition of a time-evolved quantum state.
\subsection{Massless $\phi^{3}$ scalar field theory in a static spacetime}
Let us consider a massless scalar field theory in a static curved spacetime with the $\phi^{3}$-interaction. The corresponding action is given by
\begin{equation}\label{interacting action}
\mathcal{S}=-\int\sqrt{-g} \ d^{4}x\Big[\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi+V(\phi)\Big], \ V(\phi)=\frac{\lambda}{3!}\phi^{3}.
\end{equation}
The conjugate momentum variable is given by
\begin{equation}
\Pi(x)=-\sqrt{-g(x)}g^{0\nu}\partial_{\nu}\phi(x)=-\sqrt{-g(x)}\partial^{0}\phi(x),
\end{equation}
and the only non-zero equal-time commutation bracket is
\begin{equation}
[\hat{\phi}(x),\hat{\Pi}(y)]=i\delta^{(3)}(\vec{x}-\vec{y})\implies[\hat{\phi}(x),\partial^{0}\hat{\phi}(y)]=-i\frac{\delta^{(3)}(\vec{x}-\vec{y})}{\sqrt{-g(x)}}, \ x^{0}=y^{0}.
\end{equation}
Hence, the Hamiltonian density is given by
\begin{equation}
\begin{split}
\mathcal{H}(x) & =-\sqrt{-g(x)}\partial^{0}\phi(x)\partial_{0}\phi(x)+\sqrt{-g(x)}\Big[\frac{1}{2}g^{\mu\nu}(x)\partial_{\mu}\phi(x)\partial_{\nu}\phi(x)+V(\phi(x))\Big]\\
& =\sqrt{-g(x)}\Big[-\frac{1}{2}\partial_{0}\phi(x)\partial^{0}\phi(x)+\frac{1}{2}\partial_{i}\phi(x)\partial^{i}\phi(x)+V(\phi(x))\Big].
\end{split}
\end{equation}
In a static spacetime, the above expression reduces to the following
\begin{equation}
\mathcal{H}(x)=\sqrt{-g(x)}\Big[\frac{1}{2g(x)g^{00}(x)}\Pi^{2}(x)+\frac{1}{2}\partial_{i}\phi(x)\partial^{i}\phi(x)+V(\phi(x))\Big].
\end{equation}
In this section, we discuss the effect of the $V(\phi)=\frac{\lambda}{3!}\phi^{3}$ interaction, treated perturbatively, on the violation of NEC by quantum states under time evolution. In order to apply the perturbative technique, we follow the interaction picture of quantum field theory \cite{Schwartz:2013pla}. Hence, the field operator can be expanded in a basis consisting of mode solutions of the free d'Alembert equation in the following manner
\begin{equation}
\hat{\phi}(x)=\sum_{i}[\hat{a}_{i}g_{i}(x)+\hat{a}_{i}^{\dagger}g_{i}^{*}(x)],
\end{equation}
where $\Box g_{i}(x)=0$, and the creation and annihilation operators satisfy the algebra $[\hat{a}_{i},\hat{a}_{j}^{\dagger}]=\delta_{ij}$. As a result of this expansion, the general commutation bracket is given by
\begin{equation}
[\hat{\phi}(x),\hat{\phi}(y)]=\Delta_{\phi}^{(0)}(x,y), \ [\hat{\Pi}(x),\hat{\Pi}(y)]=\Delta_{\Pi}^{(0)}(x,y), \ [\hat{\phi}(x),\hat{\Pi}(y)]=\Delta^{(0)}(x,y),
\end{equation}
where
\begin{equation}
\begin{split}
\Delta_{\phi}^{(0)}(x,y) & =\sum_{i}[g_{i}(x)g_{i}^{*}(y)-g_{i}^{*}(x)g_{i}(y)]=2i\sum_{i}\text{Im}(g_{i}(x)g_{i}^{*}(y))\\
\Delta_{\Pi}^{(0)}(x,y) & =\sqrt{g(x)g(y)}\partial_{x}^{0}\partial_{y}^{0}\Delta_{\phi}^{(0)}(x,y), \ \Delta^{(0)}(x,y)=-\sqrt{-g(y)}\partial_{y}^{0}\Delta_{\phi}^{(0)}(x,y).
\end{split}
\end{equation}
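The first line of the display above rests on the algebraic identity $z-z^{*}=2i\,\text{Im}(z)$, so $\Delta_{\phi}^{(0)}(x,y)$ is purely imaginary and antisymmetric under $x\leftrightarrow y$. As an informal check (not part of the derivation), the identity can be verified numerically with arbitrary complex amplitudes standing in for the mode solutions $g_{i}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in mode amplitudes g_i(x), g_i(y) at two spacetime points. In a real
# computation these would be solutions of Box g_i = 0; here random complex
# numbers suffice, since the identity is purely algebraic.
N = 8
gx = rng.normal(size=N) + 1j * rng.normal(size=N)
gy = rng.normal(size=N) + 1j * rng.normal(size=N)

def delta_phi(gx, gy):
    # Delta_phi^(0)(x, y) = sum_i [ g_i(x) g_i*(y) - g_i*(x) g_i(y) ]
    return np.sum(gx * np.conj(gy) - np.conj(gx) * gy)

d = delta_phi(gx, gy)

# z - z* = 2i Im(z): the mode sum equals 2i sum_i Im(g_i(x) g_i*(y)) ...
assert np.isclose(d, 2j * np.sum(np.imag(gx * np.conj(gy))))
# ... is purely imaginary, and is antisymmetric under x <-> y.
assert abs(d.real) < 1e-12
assert np.isclose(delta_phi(gy, gx), -d)
```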
On the other hand, we also obtain the following relation
\begin{equation}\label{6.16}
\partial_{\mu}(n^{\mu}(x)n^{\nu}(x)\partial_{\nu}\hat{\phi}(x))=\sum_{i}[\hat{a}_{i}\bar{\Delta}_{g_{i}}(x)+\hat{a}_{i}^{\dagger}\bar{\Delta}_{g_{i}^{*}}(x)]=\sum_{i}[\hat{a}_{i}\bar{\Delta}_{g_{i}}(x)+\hat{a}_{i}^{\dagger}\bar{\Delta}_{g_{i}}^{*}(x)],
\end{equation}
where
\begin{equation}
\bar{\Delta}_{g_{i}}(x)=\partial_{\mu}(n^{\mu}(x)n^{\nu}(x)\partial_{\nu}g_{i}(x)), \ \bar{\Delta}_{g_{i}^{*}}(x)=\partial_{\mu}(n^{\mu}(x)n^{\nu}(x)\partial_{\nu}g_{i}^{*}(x))=\bar{\Delta}_{g_{i}}^{*}(x).
\end{equation}
Since $:\hat{T}_{\mu\nu}(x):n^{\mu}(x)n^{\nu}(x)=:(\partial\hat{\phi}(x))^{2}:$, we obtain the following commutation relation
\begin{equation}\label{6.18}
[\hat{\mathcal{H}}_{\text{int}}(y),:\hat{T}_{\mu\nu}(x):n^{\mu}(x)n^{\nu}(x)]=2\sqrt{-g(y)}\Delta_{\phi}^{(0)}(x,y)\frac{\delta V(\phi)}{\delta\phi(y)}\partial_{\mu}(n^{\mu}(x)n^{\nu}(x)\partial_{\nu}\hat{\phi}(x)).
\end{equation}
Considering $V(\phi)=\frac{\lambda}{3!}\phi^{3}$, we obtain the following relation
\begin{equation}
\frac{\delta V(\phi)}{\delta\phi}=\frac{\lambda}{2}\phi^{2}(y)=\sum_{i,j}\Big[\mathcal{V}_{ij}(y)\hat{a}_{i}\hat{a}_{j}+\mathcal{V}_{\bar{i}j}(y)\hat{a}_{i}^{\dagger}\hat{a}_{j}+\mathcal{V}_{i\bar{j}}(y)\hat{a}_{i}\hat{a}_{j}^{\dagger}+\mathcal{V}_{\bar{i}\bar{j}}(y)\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}\Big],
\end{equation}
where
\begin{equation}
\begin{split}
\mathcal{V}_{ij}(y) & =\frac{\lambda}{2}g_{i}(y)g_{j}(y), \ \mathcal{V}_{\bar{i}j}(y)=\frac{\lambda}{2}g_{i}^{*}(y)g_{j}(y)\\
\mathcal{V}_{i\bar{j}}(y) & =\frac{\lambda}{2}g_{i}(y)g_{j}^{*}(y), \ \mathcal{V}_{\bar{i}\bar{j}}(y)=\frac{\lambda}{2}g_{i}^{*}(y)g_{j}^{*}(y).
\end{split}
\end{equation}
As a result of the above relations, we can write
\begin{equation}\label{6.21}
\begin{split}
n^{\mu}(x)n^{\nu}(x)\tilde{\mathcal{T}} & \left(e^{i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}:\hat{T}_{\mu\nu}(t_{2},\vec{x}):e^{-i\int_{t_{1}}^{t_{2}}\hat{H}_{\text{int}}(t)dt}\right)=:\hat{T}_{\mu\nu}(t_{2},\vec{x}):n^{\mu}(x)n^{\nu}(x)\\
+2i\partial_{\mu} & (n^{\mu}(x)n^{\nu}(x)\partial_{\nu}\hat{\phi}(x))\int d^{4}y\sqrt{-g(y)}\Big[\Delta_{\phi}^{(0)}(x,y)\frac{\delta V(\phi)}{\delta\phi(y)}\Big]-\ldots,
\end{split}
\end{equation}
where $\hat{H}_{\text{int}}(t)$ denotes the interaction Hamiltonian in the interaction picture which in this case is given by $\int\sqrt{-g(x)}d^{3}x\frac{\lambda}{3!}\phi^{3}(x)$. The integral in the above expression can further be simplified as follows
\begin{equation}
2i\int d^{4}y\sqrt{-g(y)}\Delta_{\phi}^{(0)}(x,y)\frac{\delta V(\phi)}{\delta\phi(y)}=i\sum_{i,j}\Big[\bar{\mathcal{V}}_{ij}(x)\hat{a}_{i}\hat{a}_{j}+\bar{\mathcal{V}}_{\bar{i}j}(x)\hat{a}_{i}^{\dagger}\hat{a}_{j}+\bar{\mathcal{V}}_{i\bar{j}}(x)\hat{a}_{i}\hat{a}_{j}^{\dagger}+\bar{\mathcal{V}}_{\bar{i}\bar{j}}(x)\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}\Big],
\end{equation}
where
\begin{equation}
\begin{split}
\bar{\mathcal{V}}_{ij}(x) & =\lambda\int d^{4}y\sqrt{-g(y)}\Delta_{\phi}^{(0)}(x,y)g_{i}(y)g_{j}(y), \ \bar{\mathcal{V}}_{\bar{i}j}(x)=\lambda\int d^{4}y\sqrt{-g(y)}\Delta_{\phi}^{(0)}(x,y)g_{i}^{*}(y)g_{j}(y)\\
\bar{\mathcal{V}}_{i\bar{j}}(x) & =\lambda\int d^{4}y\sqrt{-g(y)}\Delta_{\phi}^{(0)}(x,y)g_{i}(y)g_{j}^{*}(y), \ \bar{\mathcal{V}}_{\bar{i}\bar{j}}(x)=\lambda\int d^{4}y\sqrt{-g(y)}\Delta_{\phi}^{(0)}(x,y)g_{i}^{*}(y)g_{j}^{*}(y).
\end{split}
\end{equation}
Hence, the second term in the equation (\ref{6.21}) can be expressed as
\begin{equation}\label{6.24}
\begin{split}
i & \sum_{k}[\hat{a}_{k}\bar{\Delta}_{g_{k}}(x)+\hat{a}_{k}^{\dagger}\bar{\Delta}_{g_{k}}^{*}(x)]\sum_{i,j}\Big[\bar{\mathcal{V}}_{ij}(x)\hat{a}_{i}\hat{a}_{j}+\bar{\mathcal{V}}_{\bar{i}j}(x)\hat{a}_{i}^{\dagger}\hat{a}_{j}+\bar{\mathcal{V}}_{i\bar{j}}(x)\hat{a}_{i}\hat{a}_{j}^{\dagger}+\bar{\mathcal{V}}_{\bar{i}\bar{j}}(x)\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}\Big]\\
=i & \sum_{i,j,k}\Bigg[\bar{\Delta}_{g_{k}}(x)\Big[\bar{\mathcal{V}}_{ij}(x)\hat{a}_{i}\hat{a}_{j}\hat{a}_{k}+\bar{\mathcal{V}}_{\bar{i}j}(x)\hat{a}_{i}^{\dagger}\hat{a}_{j}\hat{a}_{k}+\bar{\mathcal{V}}_{i\bar{j}}(x)\hat{a}_{j}^{\dagger}\hat{a}_{i}\hat{a}_{k}+\bar{\mathcal{V}}_{\bar{i}\bar{j}}(x)\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}\hat{a}_{k}\Big]\\
& +\bar{\Delta}_{g_{k}}^{*}(x)\Big[\bar{\mathcal{V}}_{ij}(x)\hat{a}_{k}^{\dagger}\hat{a}_{i}\hat{a}_{j}+\bar{\mathcal{V}}_{\bar{i}j}(x)\hat{a}_{i}^{\dagger}\hat{a}_{k}^{\dagger}\hat{a}_{j}+\bar{\mathcal{V}}_{i\bar{j}}(x)\hat{a}_{j}^{\dagger}\hat{a}_{k}^{\dagger}\hat{a}_{i}+\bar{\mathcal{V}}_{\bar{i}\bar{j}}(x)\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}\hat{a}_{k}^{\dagger}\Big]\Bigg]\\
+i & \Big[2\sum_{i,j}\bar{\Delta}_{g_{i}}(x)(\bar{\mathcal{V}}_{\bar{i}j}(x)\hat{a}_{j}+\bar{\mathcal{V}}_{\bar{i}\bar{j}}(x)\hat{a}_{j}^{\dagger})+\sum_{i,k}\bar{\mathcal{V}}_{i\bar{i}}(x)(\hat{a}_{k}\bar{\Delta}_{g_{k}}(x)+\bar{\Delta}_{g_{k}}^{*}(x)\hat{a}_{k}^{\dagger})\Big].
\end{split}
\end{equation}
The above equation clearly shows that coherent states, i.e.\ eigenstates of the annihilation operators, do not remain such eigenstates under time evolution. Further, the interaction term can in principle lead to the violation of NEC by quantum states under time evolution, as argued below. It also shows that the violation of NEC depends on the functions $\{\bar{\mathcal{V}}_{ij}(x),\ldots,\bar{\mathcal{V}}_{\bar{i}\bar{j}}(x)\}$, which consist of mode solutions and are therefore directly connected to the geometry. Moreover, the violation also depends on the properties of the quantum states, since it depends on the action of the creation and annihilation operators on those states.
The terms in equation (\ref{6.24}) affect the NEC of the time-evolved states significantly at leading order. If this expression, sandwiched between $\bra{\psi(t_{1})}$ and $\ket{\psi(t_{1})}$, is negative and its magnitude exceeds $\bra{\psi(t_{1})}:\hat{T}_{\mu\nu}(t_{2},\vec{x}):n^{\mu}(t_{2},\vec{x})n^{\nu}(t_{2},\vec{x})\ket{\psi(t_{1})}$, then $\bra{\psi(t_{2})}:\hat{T}_{\mu\nu}(t_{2},\vec{x}):n^{\mu}(t_{2},\vec{x})n^{\nu}(t_{2},\vec{x})\ket{\psi(t_{2})}<0$ at leading order in the quantum correction. The expressions in (\ref{4.18}) and (\ref{6.24}) are built out of solutions of the Klein-Gordon equation. The term in the first parenthesis of (\ref{6.24}), sandwiched between $\bra{\psi(t_{1})}$ and $\ket{\psi(t_{1})}$, and the expression (\ref{4.18}) are cubic and quadratic in $\{\mathcal{O}_{i}^{(k)}\}_{k=1}^{m}$, respectively; hence, for a state such that $\{|\mathcal{O}_{i}^{(k)}|\rightarrow0\}_{k=1}^{m}$, these terms can effectively be neglected compared to the terms in the second parenthesis of (\ref{6.24}) sandwiched between $\bra{\psi(t_{1})}$ and $\ket{\psi(t_{1})}$. On the other hand, there exist spacetimes \cite{Ford:1997hb, jacobson2005introduction, parker2009quantum, alsing2001phase, al2018dirac, lehn2018klein} in which the mode solutions of the Klein-Gordon equation are oscillatory. As a result, in this class of spacetimes we can always find a spacetime point at which $\bra{\psi(t_{2})}:\hat{T}_{\mu\nu}(t_{2},\vec{x}):n^{\mu}(t_{2},\vec{x})n^{\nu}(t_{2},\vec{x})\ket{\psi(t_{2})}$ becomes negative at leading order in the quantum correction, owing to the oscillatory nature of the terms in the second parenthesis of (\ref{6.24}). Moreover, since $\{\mathcal{O}_{i}^{(k)}\}_{k=1}^{m}$ are state-dependent free parameters, it is possible to choose them suitably, in other words, to construct a quantum state, such that NEC is violated after time evolution.
This clearly shows that the time-evolution can map a NEC satisfying quantum state to a NEC violating quantum state.
\section{Average null energy condition}
In this section, we show the dependence of ANEC on the geometry of spacetime and on the quantum states of matter. To show this, we rewrite the mathematical inequality of ANEC in a different manner. As shown earlier, the ANEC \textit{w.r.t} a quantum state $\ket{\psi}$ mathematically demands
\begin{equation}\label{7.1}
\int_{\gamma}\langle T_{\mu\nu}(X(\lambda))\rangle l^{\mu}(\lambda)l^{\nu}(\lambda)d\lambda\geq0,
\end{equation}
where $l^{\mu}(\lambda)=\frac{dX^{\mu}(\lambda)}{d\lambda}$ is the tangent vector of the null geodesic $\gamma$ and $\langle T_{\mu\nu}(X(\lambda))\rangle=\bra{\psi}T_{\mu\nu}(X(\lambda))\ket{\psi}$. The above expression can also be expressed as follows
\begin{equation}\label{ANEC1}
\begin{split}
\int_{\gamma} & \langle T_{\mu\nu}(X(\lambda))\rangle l^{\mu}(\lambda)l^{\nu}(\lambda)d\lambda=\int d^{4}x\langle T_{\mu\nu}(x)\rangle\int\frac{d^{4}k}{(2\pi)^{4}}\int_{\gamma}e^{ik.(x-X(\lambda))}\dot{X}^{\mu}(\lambda)\dot{X}^{\nu}(\lambda)d\lambda\\
& =\int\frac{d^{4}k}{(2\pi)^{4}}\langle T_{\mu\nu}(-k)\rangle\int_{\gamma} e^{-ik.X(\lambda)}\dot{X}^{\mu}(\lambda)\dot{X}^{\nu}(\lambda)d\lambda,
\end{split}
\end{equation}
where `dot' denotes the derivative \textit{w.r.t} $\lambda$. We make further progress by using the following result
\begin{equation}\label{ANEC2}
\begin{split}
\int_{\gamma} e^{-ik.X(\lambda)} & \dot{X}^{\mu}(\lambda)\dot{X}^{\nu}(\lambda)d\lambda=\int_{\gamma}\frac{i\dot{X}^{\mu}(\lambda)\dot{X}^{\nu}(\lambda)}{k.\dot{X}(\lambda)}\frac{d}{d\lambda}e^{-ik.X(\lambda)}d\lambda\\
& =i\Big[\frac{\dot{X}_{2}^{\mu}\dot{X}_{2}^{\nu}}{k.\dot{X}_{2}}e^{-ik.X_{2}}-\frac{\dot{X}_{1}^{\mu}\dot{X}_{1}^{\nu}}{k.\dot{X}_{1}}e^{-ik.X_{1}}\Big]-i\int_{\gamma}\frac{d}{d\lambda}\left(\frac{\dot{X}^{\mu}(\lambda)\dot{X}^{\nu}(\lambda)}{k.\dot{X}(\lambda)}\right)e^{-ik.X(\lambda)}d\lambda,
\end{split}
\end{equation}
where $\dot{X}(\lambda_{1,2})=\dot{X}_{1,2}$ and $X(\lambda_{1,2})=X_{1,2}$. In the above expression, we obtain a boundary and a bulk term. Plugging (\ref{ANEC2}) in (\ref{ANEC1}), we obtain the following expression
\begin{equation}
\begin{split}
\int_{\gamma} & \langle T_{\mu\nu}(X(\lambda))\rangle l^{\mu}(\lambda)l^{\nu}(\lambda)d\lambda=i\int\frac{d^{4}k}{(2\pi)^{4}}\langle T_{\mu\nu}(-k)\rangle\Big[\frac{\dot{X}_{2}^{\mu}\dot{X}_{2}^{\nu}}{k.\dot{X}_{2}}e^{-ik.X_{2}}-\frac{\dot{X}_{1}^{\mu}\dot{X}_{1}^{\nu}}{k.\dot{X}_{1}}e^{-ik.X_{1}}\Big]\\
& -i\int\frac{d^{4}k}{(2\pi)^{4}}\langle T_{\mu\nu}(-k)\rangle\int_{\gamma}\frac{e^{-ik.X(\lambda)}}{(k.\dot{X}(\lambda))^{2}}k_{\rho}[2\ddot{X}^{(\mu}(\lambda)\dot{X}^{\nu)}(\lambda)\dot{X}^{\rho}-\ddot{X}^{\rho}\dot{X}^{\mu}\dot{X}^{\nu}]d\lambda.
\end{split}
\end{equation}
Using the geodesic equations, the above expression reduces to
\begin{equation}\label{7.5}
\begin{split}
\int_{\gamma} & \langle T_{\mu\nu}(X(\lambda))\rangle l^{\mu}(\lambda)l^{\nu}(\lambda)d\lambda=i\int\frac{d^{4}k}{(2\pi)^{4}}\langle T_{\mu\nu}(-k)\rangle\Big[\frac{\dot{X}_{2}^{\mu}\dot{X}_{2}^{\nu}}{k.\dot{X}_{2}}e^{-ik.X_{2}}-\frac{\dot{X}_{1}^{\mu}\dot{X}_{1}^{\nu}}{k.\dot{X}_{1}}e^{-ik.X_{1}}\Big]\\
& +i\int\frac{d^{4}k}{(2\pi)^{4}}\langle T_{\mu\nu}(-k)\rangle\int_{\gamma}\Big[\frac{e^{-ik.X(\lambda)}}{(k.\dot{X}(\lambda))^{2}}k_{\rho}[2\Gamma_{ \ \sigma_{1}\sigma_{2}}^{(\mu}(X(\lambda))\dot{X}^{\nu)}(\lambda)\dot{X}^{\rho}(\lambda)\\
& -\Gamma_{ \ \sigma_{1}\sigma_{2}}^{\rho}(X(\lambda))\dot{X}^{\mu}(\lambda)\dot{X}^{\nu}(\lambda)]\dot{X}^{\sigma_{1}}(\lambda)\dot{X}^{\sigma_{2}}(\lambda)\Big]d\lambda.
\end{split}
\end{equation}
The first term in the above expression is a boundary term, as it depends only on the end-points of the null geodesic $\gamma$, while the second term is a bulk term, as it depends on the entire null geodesic $\gamma$. However, both terms depend on the quantum state $\ket{\psi}$ through $\langle T_{\mu\nu}(-k)\rangle$. Hence, for an arbitrary stress-energy tensor, ANEC demands
\begin{equation}\label{7.6}
\begin{split}
\text{Im}\Bigg[\int\frac{d^{4}k}{(2\pi)^{4}} & \langle T_{\mu\nu}(-k)\rangle\Big[\frac{\dot{X}_{2}^{\mu}\dot{X}_{2}^{\nu}}{k.\dot{X}_{2}}e^{-ik.X_{2}}-\frac{\dot{X}_{1}^{\mu}\dot{X}_{1}^{\nu}}{k.\dot{X}_{1}}e^{-ik.X_{1}}\Big]\Bigg]\leq-\text{Im}\Bigg[\int\frac{d^{4}k}{(2\pi)^{4}}\langle T_{\mu\nu}(-k)\rangle\\
\times\int_{\gamma}\frac{e^{-ik.X(\lambda)}}{(k.\dot{X}(\lambda))^{2}}k_{\rho} & \Big[[2\Gamma_{ \ \sigma_{1}\sigma_{2}}^{(\mu}(X(\lambda))\dot{X}^{\nu)}(\lambda)\dot{X}^{\rho}(\lambda)-\Gamma_{ \ \sigma_{1}\sigma_{2}}^{\rho}(X(\lambda))\dot{X}^{\mu}(\lambda)\dot{X}^{\nu}(\lambda)]\dot{X}^{\sigma_{1}}(\lambda)\dot{X}^{\sigma_{2}}(\lambda)\Big]d\lambda\Bigg].
\end{split}
\end{equation}
On the other hand, since $\int_{\gamma}\langle T_{\mu\nu}(X(\lambda))\rangle l^{\mu}(\lambda)l^{\nu}(\lambda)d\lambda$ is a real quantity, we expect the following equality
\begin{equation}\label{7.7}
\begin{split}
\text{Re}\Bigg[\int\frac{d^{4}k}{(2\pi)^{4}} & \langle T_{\mu\nu}(-k)\rangle\Big[\frac{\dot{X}_{2}^{\mu}\dot{X}_{2}^{\nu}}{k.\dot{X}_{2}}e^{-ik.X_{2}}-\frac{\dot{X}_{1}^{\mu}\dot{X}_{1}^{\nu}}{k.\dot{X}_{1}}e^{-ik.X_{1}}\Big]\Bigg]=-\text{Re}\Bigg[\int\frac{d^{4}k}{(2\pi)^{4}}\langle T_{\mu\nu}(-k)\rangle\\
\times\int_{\gamma}\frac{e^{-ik.X(\lambda)}}{(k.\dot{X}(\lambda))^{2}}k_{\rho} & \Big[[2\Gamma_{ \ \sigma_{1}\sigma_{2}}^{(\mu}(X(\lambda))\dot{X}^{\nu)}(\lambda)\dot{X}^{\rho}(\lambda)-\Gamma_{ \ \sigma_{1}\sigma_{2}}^{\rho}(X(\lambda))\dot{X}^{\mu}(\lambda)\dot{X}^{\nu}(\lambda)]\dot{X}^{\sigma_{1}}(\lambda)\dot{X}^{\sigma_{2}}(\lambda)\Big]d\lambda\Bigg].
\end{split}
\end{equation}
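The integration-by-parts step (\ref{ANEC2}) underlying these rewritings can be checked numerically in a one-dimensional toy reduction, where $k.X\rightarrow kX(\lambda)$ and the identity becomes $\int e^{-ikX}\dot{X}^{2}d\lambda=\frac{i}{k}\big[\dot{X}e^{-ikX}\big]_{\lambda_{1}}^{\lambda_{2}}-\frac{i}{k}\int\ddot{X}e^{-ikX}d\lambda$. The curve $X(\lambda)=\lambda+0.3\sin\lambda$ below is purely illustrative, not taken from the paper:

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal rule
    return np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1]))

# Illustrative curve and sample wavenumber; all quantities are the
# one-dimensional reductions of those in the text.
k = 2.5
lam = np.linspace(0.0, 2.0, 100_001)
X = lam + 0.3 * np.sin(lam)
Xd = 1.0 + 0.3 * np.cos(lam)    # \dot{X}
Xdd = -0.3 * np.sin(lam)        # \ddot{X}
phase = np.exp(-1j * k * X)

lhs = trap(phase * Xd**2, lam)                                 # direct integral
boundary = (1j / k) * (Xd[-1] * phase[-1] - Xd[0] * phase[0])  # end-point term
bulk = -(1j / k) * trap(Xdd * phase, lam)                      # bulk term

assert np.isclose(lhs, boundary + bulk, atol=1e-6)
```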
For an interacting scalar field theory given by (\ref{interacting action}), we only need the following contribution
\begin{equation}
\langle T_{\mu\nu}(-k)\rangle=\int\frac{d^{4}q}{(2\pi)^{4}}\langle\phi(q)\phi(-k-q)\rangle q_{\mu}(k_{\nu}+q_{\nu}).
\end{equation}
Therefore, in this case, we obtain the following relations
\begin{equation}\label{7.9}
\begin{split}
\int\frac{d^{4}k}{(2\pi)^{4}}\langle T_{\mu\nu}(-k)\rangle & \Big[\frac{\dot{X}_{2}^{\mu}\dot{X}_{2}^{\nu}}{k.\dot{X}_{2}}e^{-ik.X_{2}}-\frac{\dot{X}_{1}^{\mu}\dot{X}_{1}^{\nu}}{k.\dot{X}_{1}}e^{-ik.X_{1}}\Big]=\int\frac{d^{4}k}{(2\pi)^{4}}\int\frac{d^{4}q}{(2\pi)^{4}}\langle\phi(q)\phi(-k-q)\rangle\\
\times\Bigg[\Big[\left(\frac{(q.\dot{X}_{2})^{2}}{k.\dot{X}_{2}}\right) & e^{-ik.X_{2}}-\left(\frac{(q.\dot{X}_{1})^{2}}{k.\dot{X}_{1}}\right)e^{-ik.X_{1}}\Big]+[q.\dot{X}_{2}e^{-ik.X_{2}}-q.\dot{X}_{1}e^{-ik.X_{1}}]\Bigg],
\end{split}
\end{equation}
and
\begin{equation}\label{7.10}
\begin{split}
\int\frac{d^{4}k}{(2\pi)^{4}}\langle T_{\mu\nu}(-k)\rangle & \int_{\gamma}\frac{e^{-ik.X(\lambda)}}{(k.\dot{X}(\lambda))^{2}}k_{\rho}[2\Gamma_{ \ \sigma_{1}\sigma_{2}}^{(\mu}(X(\lambda))\dot{X}^{\nu)}(\lambda)\dot{X}^{\rho}(\lambda)\\
& -\Gamma_{ \ \sigma_{1}\sigma_{2}}^{\rho}(X(\lambda))\dot{X}^{\mu}(\lambda)\dot{X}^{\nu}(\lambda)]\dot{X}^{\sigma_{1}}(\lambda)\dot{X}^{\sigma_{2}}(\lambda)d\lambda\\
=\int\frac{d^{4}k}{(2\pi)^{4}}\int\frac{d^{4}q}{(2\pi)^{4}} & \langle\phi(q)\phi(-k-q)\rangle\int_{\gamma} e^{-ik.X(\lambda)}\Big[2\frac{q.\ddot{X}(\lambda)q.\dot{X}(\lambda)}{k.\dot{X}(\lambda)}+q.\ddot{X}(\lambda)\\
& -\frac{k.\ddot{X}(\lambda)(q.\dot{X}(\lambda))^{2}}{[k.\dot{X}(\lambda)]^{2}}\Big]d\lambda.
\end{split}
\end{equation}
The results in (\ref{7.5}), (\ref{7.6}) and (\ref{7.7}) classify the ANEC-satisfying quantum states for a given null geodesic, since the expressions in these equations depend on the chosen null geodesic and on the quantum state through the expectation value of the stress-energy tensor. This also suggests that if the null geodesic between two points in spacetime is not unique (existence and uniqueness of solutions of the geodesic equation hold only locally), then a quantum state satisfying ANEC for a null geodesic $\gamma_{1}$ need not satisfy ANEC for another null geodesic $\gamma_{2}$ between the same two boundary points. Therefore, a quantum state does not in general satisfy (\ref{7.1}) for all null curves, and one can classify the quantum states satisfying ANEC for a given null geodesic. Moreover, equations (\ref{7.9}) and (\ref{7.10}) suggest that for an interacting scalar field theory with a polynomial interaction, the state dependence of ANEC enters only through the two-point function of the scalar field \textit{w.r.t} a quantum state, and this two-point function also depends on the nature of the interaction. We also want to highlight that for non-unique null geodesics, the geodesic dependence in (\ref{7.5}) comes from the projection of the momentum modes along the tangent vector of the null geodesic and its derivative \textit{w.r.t} the affine parameter.
\section{Discussion}
Violation of NEC by quantum states of matter in curved spacetimes puts a restriction on the possible quantum states of matter that lead to stable configurations (see \cite{dubovsky2006null, buniy2006null, buniy2006instabilities}). It is also shown in \cite{dubovsky2006null} that even if a violation of NEC occurs in stable quantum matter, it leads to modes with superluminal propagation, which violates causality. Further, the dynamical violation of NEC is another important aspect of cosmological models, discussed in \cite{Vikman:2007sj}.
Here, we discuss the violation of NEC by quantum states of matter in a generic curved spacetime using coherent states of the field operators. Further, we provide some important theorems giving criteria under which a quantum matter state can satisfy NEC. These lead to a classification of the NEC-violating quantum states of matter, which helps in avoiding states that lead to violations of causality and to instabilities. Furthermore, we also classify quantum states based on ANEC. In order to do that, we rewrite the ANEC inequality in a form in which the geometric and state dependences appear explicitly.
\section{Acknowledgement}
SM wants to thank IISER Kolkata for supporting this work through a doctoral fellowship.
\bibliographystyle{unsrt}
\section{Concluding Remarks}
\label{sec:conclusion}
In this paper, we studied the obnoxious facility location game with
dichotomous preferences. This game generalizes the obnoxious facility location game to more realistic scenarios.
All of the mechanisms presented in this paper run in polynomial time,
except that the running time of Mechanism~\ref{mech:sum-min} has
exponential dependence on $k$ (and polynomial dependence on $n$).
We can extend the results of Section~\ref{subsec:min-min-square} to obtain an analogue of Theorem~\ref{thm:min-min-opt-eg-plane} that holds for an arbitrary convex polygon.
We showed
that Mechanism~\ref{mech:sum-min} is WGSP for all $k$ and is efficient
for $k \le 3$.
Properties~1 and~2 in the proof of our associated theorem, Theorem~\ref{thm:opt-sum-min}, do not hold for $k>3$. Nevertheless, we conjecture that Mechanism~\ref{mech:sum-min} is efficient for all $k$.
It remains an interesting open problem to reduce the gap
between the $\Omega(\sqrt{n})$ and $O(n)$ bounds on the approximation ratio $\alpha$ of WGSP $\alpha$-egalitarian mechanisms.
\subsection{Egalitarian mechanisms for the cycle}
\label{subsec:min-min-cycle}
In this section, we present egalitarian mechanisms for the case where the agents are located on the unit-circumference circle $C$.
We denote the point antipodal to $u$ on $C$ by $\widehat{u}$. We begin by considering the natural adaptation of Mechanism~\ref{mech:min-min} to a cycle.
\begin{custommech}{8}
\label{mech:min-min-circle}
Let $I = (n, 1, \mathbf{x}, \mathbf{a})$ denote the reported $\DOFL$ instance and let $H$ denote $\haters(I, 1)$.
If $H$ is empty, build facility $F_1$ at $0$. If $H$ has only one agent $i$, then build $F_1$ at $\widehat{x_i}$.
Otherwise, build $F_1$ at the midpoint of the largest gap between any two consecutive agents from $H$. Formally, let $H$ have $\ell$ agents $z_0, \dots, z_{\ell-1}$ such that $x_{z_0} \le x_{z_1} \le \dots \le x_{z_{\ell-1}}$. Let $\oplus$ denote addition modulo $\ell$. Build $F_1$ at the midpoint of $x_{z_o}$ and $x_{z_{o\oplus 1}}$ where $o$ is the smallest number in $\{0, \dots \ell-1\}$ such that $\Delta(x_{z_{o\oplus 1}}, x_{z_o}) = \max_{j \in \{0, \dots \ell-1\} } \Delta(x_{z_{j\oplus 1}}, x_{z_j})$.
\end{custommech}
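As an informal aid (not part of the formal mechanism description), the largest-gap rule of Mechanism~8 can be sketched in a few lines of Python, with agent positions given as numbers in $[0,1)$ on the unit-circumference circle:

```python
def circ_dist(a, b):
    """Distance between two points on the unit-circumference circle."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def mechanism8(hater_positions):
    """Sketch of Mechanism 8: return the location of facility F_1 given the
    positions of the agents in H (those who dislike {F_1})."""
    H = sorted(hater_positions)
    if not H:
        return 0.0
    if len(H) == 1:
        return (H[0] + 0.5) % 1.0  # antipodal point
    # Gaps between consecutive agents, including the wrap-around gap.
    # Python's max returns the first maximal element, matching the
    # smallest-index tie-breaking rule of the mechanism.
    gaps = [((H[(j + 1) % len(H)] - H[j]) % 1.0, j) for j in range(len(H))]
    best_gap, o = max(gaps, key=lambda g: g[0])
    return (H[o] + best_gap / 2.0) % 1.0

assert mechanism8([]) == 0.0
assert mechanism8([0.25]) == 0.75
# Largest gap is between 0.2 and 0.9; midpoint is 0.55.
assert abs(mechanism8([0.1, 0.2, 0.9]) - 0.55) < 1e-12
```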
\begin{lemma}
\label{lem:min-min-circle-sp}
Mechanism~\ref{mech:min-min-circle} is SP for single-facility $\OOFLG$.
\end{lemma}
\begin{proof}
Let $(I,I')$ denote a single-facility $\OOFLG$ instance with $I = (n, 1, \mathbf{x}, \mathbf{a})$, $I' = (n, 1, \mathbf{x}, \mathbf{a}'),$ and let $i$ be an agent such that $\mathbf{a}' = (\mathbf{a}_{-i}, a_i')$.
Using the same arguments as in the proof of Lemma~\ref{lem:min-min-sp-sfo}, we restrict our attention to the case where $F_1$ is in $a_i \setminus a'_i$.
Let Mechanism~\ref{mech:min-min-circle} build $F_1$ at $y$ when agent $i$ reports truthfully.
Let $H = \haters(I, 1)$ denote the set of agents who dislike $\{F_1\}$.
Note that Mechanism~\ref{mech:min-min-circle} does not build $F_1$ at the location of any agent in $H$, that is, $y \neq x_{i'}$ for any $i'$ in $H$.
Let the arc of $C$ that goes clockwise from $y$ to $x_i$ be $r_1$ and let the arc of $C$ that goes anti-clockwise from $y$ to $x_i$ be $r_2$. Both arcs $r_1$ and $r_2$ include the end-points $y$ and $x_i$. We consider three cases.
Case 1: No agent in $H - i$ is located on $r_1$ or $r_2$. Hence $H = \{i\}$. Thus $y = \widehat{x_i}$, and $\Delta(x_i, y) = 1/2$. When agent $i$ reports $a_i'$, $F_1$ is built at $0$. Since $\Delta(x_i, 0) \le 1/2$, reporting $a_i'$ does not benefit agent $i$.
Case 2: There are agents in $H - i$ located on $r_1$ or $r_2$, but not both.
Without loss of generality, we assume that there is an agent in $H - i$ located on $r_2$ (and that there are no agents in $H - i$ on $r_1$).
Let $i'$ be the closest agent to $y$ in $H-i$ on $r_2$, and let $d' = \Delta(y, x_{i'})$. Thus $y$ is the midpoint of $x_{i'}$ and $x_i$, and hence $d' = \Delta(x_i, y)$. Let $i''$ be the closest agent in $H-i$ in the clockwise direction from $x_i$; thus $\Delta(x_{i''}, x_i) \le 2d'$. Consequently, when agent $i$ reports $a_i'$, $F_1$ is built on $r_1$, which does not benefit agent~$i$.
Case 3: There are agents in $H - i$ on both $r_1$ and $r_2$. Let the closest agents to $y$ in $H - i$ on $r_2$ and $r_1$ be $a$ and $b$, respectively. We have $\Delta(x_a, y) = \Delta(y, x_b)$. Let $d'$ denote $\Delta(x_a, y)$. Note that $\Delta(x_i, y) \ge d'$. Let the first agents from $H - i$ in the anti-clockwise and clockwise directions from $x_i$ be $i'$ and $i''$, respectively. We have $\Delta(x_i, x_{i'}) \le 2d'$ and $\Delta(x_i, x_{i''}) \le 2d'$. Thus, when agent $i$ reports $a_i'$, either $F_1$ is built at $y$ or $F_1$ is built within distance $d'$ of $x_i$, neither of which benefits agent $i$.
Thus, agent $i$ does not benefit by reporting $a_i'$.
\end{proof}
\begin{lemma}
\label{lem:min-min-circle-opt}
Mechanism~\ref{mech:min-min-circle} is egalitarian for single-facility $\OOFLG$.
\end{lemma}
\begin{proof}
Let $I = (n, 1, \mathbf{x}, \mathbf{a})$ denote the reported $\DOFL$ instance and let $H$ denote $\haters(I, 1)$.
Using the same arguments as in the proof of Lemma~\ref{lem:min-min-opt-sfo}, we assume that all agents belong to $H$.
Let $y^*$ denote an optimal location for the facility and let $\textsc{OPT}$ denote $\MMW (I, y^*)$.
Let $y'$ denote the location at which Mechanism~\ref{mech:min-min-circle} builds the facility and let $\textsc{ALG}$ denote $\MMW (I, y')$.
Below we establish that $\textsc{ALG} \ge \textsc{OPT}$, which implies that Mechanism~\ref{mech:min-min-circle} is egalitarian.
If $H$ is empty, then trivially Mechanism~\ref{mech:min-min-circle} is egalitarian. For the remainder of the
proof, assume that $H$ is non-empty. We say that an agent in $H$ is
\emph{tight} if it is as close to $y^*$ as any other agent in $H$.
Thus for any tight agent $i$, $\textsc{OPT} = \Delta(y^*,x_i)$.
Let $i$ be a tight agent. Assume that in the shorter arc between $x_i$ and $y^*$, $x_i$ is anti-clockwise from $y^*$ (the other case can be handled similarly). Thus $\textsc{OPT} = \Delta(x_i, y^*)$.
Let $i'$ be the closest agent in $H$ in the clockwise direction from $y^*$.
The definition of $i'$ implies that $\Delta(x_{i'}, y^*) \ge \textsc{OPT}$. It follows that $\Delta(x_i, x_{i'}) \ge 2 \cdot \textsc{OPT}$. Since $\Delta(x_i, x_{i'})$ is the gap between two consecutive agents in $H$ and Mechanism~\ref{mech:min-min-circle} builds the facility at the midpoint of the largest gap, we deduce that $\textsc{ALG} \ge \textsc{OPT}$.
\end{proof}
We define Mechanism~9 as the $\OOFLG$ mechanism $\Parallel{M}$, where
$M$ denotes Mechanism~\ref{mech:min-min-circle}.
Using Lemmas~\ref{lem:min-min-sfo-ooflg-sp}, \ref{lem:min-min-sfo-ooflg-opt}, \ref{lem:min-min-circle-sp}, and~\ref{lem:min-min-circle-opt}, we immediately obtain
Theorem~\ref{thm:min-min-opt-eg-circle} below.
\begin{theorem}
\label{thm:min-min-opt-eg-circle}
Mechanism~9 is SP and egalitarian.
\end{theorem}
Theorem~\ref{thm:min-min-circle-lowerbound} below extends Theorem~\ref{thm:min-min-lowerbound} to the case of the cycle. Theorem~\ref{thm:min-min-circle-lowerbound} implies that Mechanism~9 is not WGSP.
\begin{theorem}
\label{thm:min-min-circle-lowerbound}
Let $M$ be a WGSP $\alpha$-egalitarian mechanism. Then $\alpha$ is $\Omega(\sqrt{n})$, where $n$ is the number of agents.
\end{theorem}
\begin{proof}
It is easy to verify that the construction used in the proof of Theorem~\ref{thm:min-min-lowerbound} also works for the cycle and establishes the same lower bound. (We identify the point $1$ with the point $0$.)
\end{proof}
The following variant of Mechanism~9 is SGSP. As in the SGSP mechanism for the case when the agents are located in the unit interval, in this variant, we
first replace the reported dislikes of all agents with $\{F_1\}$ and
use Mechanism~\ref{mech:min-min-circle} to determine where to build
$F_1$. Then we build all of the remaining facilities at the same location
as $F_1$. This mechanism is SGSP because it disregards the reported
aversion profile. We claim that this mechanism is $n$-egalitarian,
where $n$ denotes the number of agents. To prove this claim, we first
observe that the largest gap between two consecutive agents who report disliking $\{F_1\}$ is at least $1/n$. Thus the minimum welfare achieved by the mechanism is at
least $1/(2n)$.
Since the optimal minimum welfare is at most
$1/2$, the claim holds.
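The pigeonhole step behind this claim ($n$ gaps summing to the full circumference $1$, so the largest gap is at least $1/n$) can be checked numerically; the random instances below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    n = int(rng.integers(1, 20))
    pos = np.sort(rng.random(n))
    # n gaps between consecutive agents on the circle, including wrap-around
    gaps = np.diff(pos, append=pos[0] + 1.0)
    assert abs(gaps.sum() - 1.0) < 1e-9   # gaps cover the whole circle
    assert gaps.max() >= 1.0 / n - 1e-12  # so the largest gap is >= 1/n
```

Building the facility at the midpoint of the largest gap then guarantees distance at least $1/(2n)$ to every agent who dislikes it.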
\section{Egalitarian Mechanisms}
\label{sec:min-min}
We now design egalitarian mechanisms for $\OOFLG$ when the agents are
located on an interval, cycle, or square.
In Definition~\ref{def:parallel} below, we introduce a
simple way to convert a single-facility $\OOFLG$ mechanism into a $\OOFLG$ mechanism.
Observe that
for single-facility $\OOFLG$ mechanisms, specifying the input $\DOFL$ instance $I = (n, 1, \mathbf{x}, \mathbf{a})$ is
equivalent to specifying $(n, 1, \mathbf{x}, \haters(I, 1))$.
\begin{definition}
\label{def:parallel}
For any single-facility $\OOFLG$ mechanism $M$, $\Parallel{M}$ denotes the $\OOFLG$ mechanism that takes as input a $\DOFL$ instance $I =(n,k,\Location, \Aversion)$ and outputs $\mathbf{y} = (y_1, \dots,
y_k)$, where $y_j$ is the location at which $M$ builds the facility on input
$(n, 1, \mathbf{x}, \haters(I, j))$.
\end{definition}
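A minimal sketch of the $\Parallel{M}$ construction: run the single-facility mechanism once per facility, feeding it only that facility's haters. The stand-in single-facility rule below (build at $0$ if there are no haters, else at the antipode of the first hater on the unit circle) is purely illustrative and not one of the paper's mechanisms:

```python
def parallel(single_mech, haters_by_facility):
    """Sketch of Definition (parallel): the location of facility F_j is
    whatever the single-facility mechanism outputs on F_j's haters."""
    return [single_mech(H) for H in haters_by_facility]

def toy_single(H):
    # Illustrative stand-in for a single-facility mechanism M.
    return 0.0 if not H else (H[0] + 0.5) % 1.0

locs = parallel(toy_single, [[0.25], [], [0.75]])
assert locs == [0.75, 0.0, 0.25]
```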
Lemmas~\ref{lem:min-min-sfo-ooflg-sp} and
\ref{lem:min-min-sfo-ooflg-opt} below reduce the task of designing
a SP egalitarian $\OOFLG$ mechanism to the single-facility case.
\begin{restatable}{lemma}{minminsfoooflgsp}
\label{lem:min-min-sfo-ooflg-sp}
Let $M$ be a SP single-facility $\OOFLG$ mechanism. Then
$\Parallel{M}$ is a SP $\OOFLG$ mechanism.
\end{restatable}
\begin{proof}
Let $(I,I')$ denote a $\OOFLG$ instance with $I = (n,k,\Location, \Aversion)$, $I' = (n,k,\Location,\LieAversion)$, and let $i$ be an agent such that $\mathbf{a}' = (\mathbf{a}_{-i}, a_i')$.
Let $\mathbf{y} = (y_1, \dots, y_k)$ (resp., $\mathbf{y'} = (y'_1, \dots, y'_k)$) denote $\Parallel{M}(I)$ (resp., $\Parallel{M}(I')$).
Since $M$ is SP, we have $\Delta(x_i, y_j) \ge \Delta(x_i, y_j')$ for each facility $F_j$ in $a_i$. Thus $w(I,i, \mathbf{y}) \ge w(I, i, \mathbf{y'})$, implying that agent $i$ does not benefit by reporting $a_i'$ instead of $a_i$.
\end{proof}
\begin{restatable}{lemma}{minminsfoooflgopt}
\label{lem:min-min-sfo-ooflg-opt}
Let $M$ be an egalitarian single-facility $\OOFLG$ mechanism. Then
$\Parallel{M}$ is an egalitarian $\OOFLG$ mechanism.
\end{restatable}
\begin{proof}
Let $I = (n,k,\Location, \Aversion)$ denote the reported $\DOFL$ instance.
Let an optimal solution be $\mathbf{y^*} = (y_1^*, \dots, y_k^*)$ and let the optimal (maximum) value of the minimum welfare be $\textsc{OPT} = \MMW(I, \mathbf{y^*})$. Let $\Parallel{M}$ build the facilities at $\mathbf{y}' = (y'_1, \dots, y'_k)$, resulting in a minimum welfare $\textsc{ALG} = \MMW(I, \mathbf{y}')$.
For each facility $F_j$, we define $\textsc{OPT}_j$ (resp., $\textsc{ALG}_j$), as the distance from $y_j^*$ (resp., $y'_j$) to the nearest agent in $\haters(I, j)$ (or $\infty$ if $\haters(I, j)$ is empty).
We have,
\begin{equation*}
\textsc{OPT} = \min\left(\min_j \textsc{OPT}_j, \min_{i\in \indiff(I)} w(I,i, \mathbf{y^*})\right)
\end{equation*}
and
\begin{equation*}
\textsc{ALG} = \min\left(\min_j \textsc{ALG}_j, \min_{i\in \indiff(I)} w(I, i, \mathbf{y}')\right).
\end{equation*}
Since $M$ is egalitarian, we have $\textsc{OPT}_j = \textsc{ALG}_j$ for all $j$. The welfare of agents in $\indiff(I)$ does not depend on the locations of the facilities. Thus, $\textsc{ALG} = \textsc{OPT}$ implying that $\Parallel{M}$ is egalitarian.
\end{proof}
\subsection{Egalitarian mechanisms for the unit interval}
\label{subsec:min-min-interval}
We begin by describing a SP egalitarian mechanism for single-facility
$\OOFLG$ when the agents are located in the unit interval.
\begin{custommech}{6}
\label{mech:min-min}
Let $I = (n, 1, \mathbf{x}, \mathbf{a})$ denote the reported $\DOFL$ instance and let $H$ denote $\haters(I, 1)$.
If $H$ is empty, build $F_1$ at $0$. Otherwise, let $H$ contain
$\ell$ agents $z_1, \dots, z_\ell$ such that $x_{z_1} \le x_{z_2} \le
\dots \le x_{z_\ell}$. Let $d_1 = x_{z_1}$ and $d_3 = 1 -
x_{z_\ell}$. If $\ell = 1$, then build $F_1$ at $0$ if $d_1 \ge d_3$,
and at $1$ otherwise. If $\ell>1$, let $m$ be the midpoint of the
leftmost largest interval between consecutive agents in
$H$. Formally, $m = (x_{z_o} + x_{z_{o+1}}) / 2$, where $o$ is the
smallest number in $[\ell-1]$ such that $x_{z_{o+1}} - x_{z_o} =
\max_{j \in [\ell-1]} (x_{z_{j+1}} - x_{z_j})$. Let $d_2 = m -
x_{z_o}$. Then build facility $F_1$ at $0$ if $d_1 \ge d_2$ and $d_1
\ge d_3$, at $m$ if $d_2 \ge d_3$, and at $1$ otherwise.
\end{custommech}
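As an informal illustration (Python is used here only as executable pseudocode and is not part of the formal development; the function and variable names are ours), Mechanism~\ref{mech:min-min} can be sketched as follows:

```python
def mechanism6(locations, reports):
    """Sketch of Mechanism 6: build one obnoxious facility on [0, 1].

    locations[i] is agent i's position in [0, 1]; reports[i] is True
    if agent i reports disliking F1.  Returns the point at which F1
    is built.
    """
    haters = sorted(x for x, r in zip(locations, reports) if r)
    if not haters:                        # H is empty
        return 0.0
    d1, d3 = haters[0], 1.0 - haters[-1]
    if len(haters) == 1:
        return 0.0 if d1 >= d3 else 1.0
    gaps = [b - a for a, b in zip(haters, haters[1:])]
    # Python's max returns the first maximal index, i.e. the leftmost
    # largest gap, matching the mechanism's tie-breaking.
    o = max(range(len(gaps)), key=lambda j: gaps[j])
    m = (haters[o] + haters[o + 1]) / 2   # midpoint of that gap
    d2 = m - haters[o]
    if d1 >= d2 and d1 >= d3:
        return 0.0
    return m if d2 >= d3 else 1.0
```

For example, with two haters at $0.2$ and $0.8$ the facility is built at the midpoint $0.5$, at distance $0.3$ from the nearest hater.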
The following lemma shows that Mechanism~\ref{mech:min-min} is SP. It
is established by examining how the location of the facility changes
when an agent misreports.
\begin{restatable}{lemma}{minminspsfo}
\label{lem:min-min-sp-sfo}
Mechanism~\ref{mech:min-min} is SP for single-facility $\OOFLG$.
\end{restatable}
\begin{proof}
Let $(I,I')$ denote a single-facility $\OOFLG$ instance with $I = (n, 1, \mathbf{x}, \mathbf{a})$, $I' = (n, 1, \mathbf{x}, \mathbf{a}')$, and let $i$ be an agent such that $\mathbf{a}' = (\mathbf{a}_{-i}, a_i')$.
If $F_1$ does not belong to $a_i$, the welfare of agent $i$ is independent of the location of $F_1$ and agent $i$ does not benefit by reporting $a'_i$. Moreover, if $F_1$ is in $a_i \cap a_i'$, then the location of $F_1$ does not change by reporting $a'_i$ instead of $a_i$, and again, agent $i$ does not benefit by reporting $a'_i$. Thus for the remainder of the proof, we assume that $F_1$ is in $a_i \setminus a'_i$.
Let Mechanism~\ref{mech:min-min} build $F_1$ at $y$ when agent $i$ reports truthfully. We assume that $y < x_i$ (the other case can be handled symmetrically). Let $H = \haters(I, 1)$ denote the set of agents who dislike $\{F_1\}$. Note that Mechanism~\ref{mech:min-min} does not build $F_1$ at the location of any agent in $H$, that is, $y \neq x_{i'}$ for any $i'$ in $H$. Let $d_1$, $d_2$, and $d_3$ be as defined in Mechanism~\ref{mech:min-min} when all agents report truthfully.
We consider two cases based on whether there is an agent from $H$ between $y$ and $x_i$.
Case 1: No agent in $H - i$ is located in $[y, x_i]$. We consider two cases.
Case 1.1: $y = 0$. Thus $d_1 = x_i$. When agent $i$ reports $a'_i$,
$F_1$ is built at $0$, which does not benefit agent $i$.
Case 1.2: $y \neq 0$. Thus $d_2 = x_i - y$, there is an agent $i'$ in $H$ at $y - d_2$, and there are no agents in $H$ in $(y - d_2, y + d_2)$. We consider two cases.
Case 1.2.1: No agent in $H$ is located to the right of $x_i$. Hence $x_i \ge 1 - d_2$. Thus when agent $i$ reports $a'_i$, $F_1$ is built at $1$, which does not benefit agent $i$.
Case 1.2.2: There is an agent in $H$ located to the right of $x_i$. Let $i''$ be the leftmost such agent, breaking ties arbitrarily. Then $x_{i''} - x_i \le 2d_2$. Thus when agent $i$ reports $a'_i$, $F_1$ is built in $[y, x_i]$, which does not benefit agent $i$.
Case 2: There is an agent in $H - i$ in $[y, x_i]$. Let $i'$ be the leftmost agent of $H - i$ to the right of $y$.
Let $d$ denote $d_1 = x_{i'}$
if $y = 0$, and $d_2 = x_{i'} - y$ otherwise. We deduce that $x_i - y \ge d$. We consider two cases.
Case 2.1: No agent in $H$ is located to the right of $x_i$. Hence $x_i \ge 1 - d$. Thus when agent $i$ reports $a'_i$, $F_1$ is either built at $y$ or at $1$, neither of which benefits agent $i$.
Case 2.2: There is an agent in $H$ located to the right of $x_i$. Let $b$ be the leftmost such agent, breaking ties arbitrarily.
Let $a$ be the agent in $H - i$ located in $[0, x_i]$ that is closest to agent $i$, breaking ties arbitrarily.
We deduce that $x_i - x_a \le 2d$ and $x_b - x_i \le 2d$. When agent $i$ reports $a'_i$, $F_1$ is built at $y$ or in $[x_i - d, x_i + d]$, neither of which benefits agent $i$.
Thus agent $i$ does not benefit by reporting $a'_i$.
\end{proof}
\begin{lemma}
\label{lem:min-min-opt-sfo}
Mechanism~\ref{mech:min-min} is egalitarian for single-facility $\OOFLG$.
\end{lemma}
\begin{proof}
Let $I = (n, 1, \mathbf{x}, \mathbf{a})$ denote the reported $\DOFL$ instance and let $H$ denote $\haters(I, 1)$.
The welfare of any agent in $[n] \setminus H$ is independent of the
location of the facility. Thus, a mechanism is egalitarian if it
maximizes the minimum welfare of any agent in $H$.
Mechanism~\ref{mech:min-min} ignores the agents that are not in $H$.
Thus it is sufficient to show that Mechanism~\ref{mech:min-min} maximizes the minimum welfare on instances where all agents belong to $H$.
Hence for the remainder of the proof, we assume that all agents belong to
$H$.
Let $y^*$ denote an optimal location for the facility and let
$\textsc{OPT}$ denote $\MMW(I,y^*)$. Let $y'$ denote the location where
Mechanism~\ref{mech:min-min} builds the facility and let $\textsc{ALG}$ denote
$\MMW(I,y')$. Below we establish that $\textsc{ALG} \ge \textsc{OPT}$, which implies
that Mechanism~\ref{mech:min-min} is egalitarian.
If $H$ is empty then trivially Mechanism~\ref{mech:min-min} is egalitarian. For the remainder of the
proof, assume that $H$ is non-empty. We say that an agent in $H$ is
\emph{tight} if it is as close to $y^*$ as any other agent in $H$.
Thus for any tight agent $i$, $\textsc{OPT} = |y^* - x_i|$. Similarly, $\textsc{ALG}$
is the distance from $y'$ to a closest agent in $H$.
If $y^* = 0$, consider any tight agent $i$. Then no agent in $H$ is
located in $[0, x_i)$. It follows that $d_1 = x_i = \textsc{OPT}$. As $\textsc{ALG}
\ge d_1$, we have $\textsc{ALG} \ge \textsc{OPT}$. A symmetric argument can be made for the case $y^* = 1$.
It remains to consider the case where $y^*$ does not belong to $\{0,1\}$. Let $i$ be a tight agent and assume that $x_i < y^*$ (the other
case can be handled symmetrically). We have $\textsc{OPT} = y^* - x_i$. Thus
no agent in $H$ is located in the open interval $(y^* - \textsc{OPT}, y^* + \textsc{OPT})$, whose left endpoint is $x_i$. We
consider two cases.
Case 1: There is no agent in $H$ to the right of $y^*$. Thus $d_3 \ge \textsc{OPT}$. Since $\textsc{ALG} \ge d_3$, we have $\textsc{ALG} \ge \textsc{OPT}$.
Case 2: There is an agent in $H$ to the right of $y^*$. Consider the leftmost such agent $i'$. Since $x_{i'} \ge y^* + \textsc{OPT}$, we have $d_2 \ge \textsc{OPT}$. Since $\textsc{ALG} \ge d_2$, we have $\textsc{ALG} \ge \textsc{OPT}$.
\end{proof}
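To make the lemma concrete, the following self-contained Python snippet (an informal sanity check under our own naming, not part of the proof) restates the mechanism's rule for hater locations only and compares the minimum welfare it achieves against a fine grid approximation of the optimum on random instances:

```python
import random

def mech6(haters):
    """Mechanism 6 restricted to a non-empty list of hater locations."""
    zs = sorted(haters)
    d1, d3 = zs[0], 1.0 - zs[-1]
    if len(zs) == 1:
        return 0.0 if d1 >= d3 else 1.0
    gaps = [b - a for a, b in zip(zs, zs[1:])]
    o = max(range(len(gaps)), key=lambda j: gaps[j])  # leftmost largest gap
    m = (zs[o] + zs[o + 1]) / 2
    d2 = m - zs[o]
    if d1 >= d2 and d1 >= d3:
        return 0.0
    return m if d2 >= d3 else 1.0

def min_welfare(haters, y):
    """Distance from the build point y to the nearest hater."""
    return min(abs(x - y) for x in haters)

random.seed(1)
for _ in range(300):
    hs = [random.random() for _ in range(random.randint(1, 8))]
    alg = min_welfare(hs, mech6(hs))
    # OPT, approximated on a grid of 1001 candidate build points
    opt_grid = max(min_welfare(hs, g / 1000) for g in range(1001))
    assert alg >= opt_grid - 1e-9   # ALG matches OPT up to grid error
```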
We define Mechanism~7 as the $\OOFLG$ mechanism $\Parallel{M}$, where
$M$ denotes Mechanism~\ref{mech:min-min}.
Using Lemmas~\ref{lem:min-min-sfo-ooflg-sp}
through \ref{lem:min-min-opt-sfo}, we immediately obtain
Theorem~\ref{thm:min-min-opt-eg} below.
\begin{theorem}
\label{thm:min-min-opt-eg}
Mechanism~7 is SP and egalitarian.
\end{theorem}
Below we provide a lower bound on the
approximation ratio of any WGSP egalitarian mechanism.
Theorem~\ref{thm:min-min-lowerbound} implies that Mechanism~7 is not WGSP.
\begin{theorem}
\label{thm:min-min-lowerbound}
Let $M$ be a WGSP $\alpha$-egalitarian mechanism. Then $\alpha$ is $\Omega(\sqrt{n})$, where $n$ is the number of agents.
\end{theorem}
\begin{proof}
Let $q$ be a large even integer, let $p$ denote $q^2 + 1$, and let $U$ (resp., $V$) denote the set of all integers $i$ such that $0<i<q^2/2$ (resp., $q^2/2<i<q^2$).
We construct two $(p+3)$-agent two-facility
$\OOFLG$ instances $(I,I)$ and $(I,I')$.
In both $(I,I)$ and $(I,I')$,
there is an agent located at $i/q^2$, called agent $i$, for each $i$ in $U \cup V$, and there are two agents each at $0$,
$1/2$, and $1$.
In $I$, each agent $i$ such that $i$ is in $U$ dislikes $\{F_2\}$, each agent $i$ such that $i$ is in $V$ dislikes $\{F_1\}$, one agent at $0$ (resp.,
$1/2$, $1$) dislikes $\{F_1\}$, and the other agent at $0$ (resp.,
$1/2$, $1$) dislikes $\{F_2\}$.
In $I'$, the agents $i$ with $i$ in $U \setminus \{1, \dots, q-1\}$ have alternating
reports: agent $q$ reports $\{F_1\}$, agent $q+1$ reports
$\{F_2\}$, agent $q+2$ reports $\{F_1\}$, and so on. Symmetrically,
the agents $i$ with $i$ in $V \setminus \{q^2-q+1, \dots, q^2-1\}$ have alternating reports:
agent $q^2-q$ reports $\{F_2\}$, agent $q^2-q-1$ reports
$\{F_1\}$, agent $q^2-q-2$ reports $\{F_2\}$, and so on. All other agents in $I'$ report truthfully.
Let the optimal minimum welfare for $\DOFL$ instance $I$ (resp., $I'$) be
$\textsc{OPT}$ (resp., $\textsc{OPT}'$). It is easy
to see that $\textsc{OPT}=1/4$ and $\textsc{OPT}'=\frac{1}{2q}$ (obtained by
building the facilities at $(1/4, 3/4)$ and $(\frac{1}{2q},
1-\frac{1}{2q})$, respectively).
Let $\textsc{ALG}$ (resp., $\textsc{ALG}'$) denote the minimum welfare achieved by $M$ on instance $I$ (resp., $I'$).
Below we
prove that either $\textsc{OPT}/\textsc{ALG}\ge\frac{q}{4}$ or
$\textsc{OPT}'/\textsc{ALG}'\ge q/2$.
Let $M$ build facilities at $(y_1, y_2)$ (resp., $(y'_1, y'_2)$)
on instance $I$ (resp., $I'$). We
consider two cases.
Case~1: $0 \le y'_1 < 1/q$ and $1-1/q < y'_2 \le 1$.
Let $S$ denote the set of agents whose reports in $I'$ differ from those in $I$.
If $y'_1 < y_1$ and $y'_2 > y_2$, then all agents in $S$ benefit by lying. Hence for $M$ to be WGSP, either
$y'_1 \ge y_1$ or $y'_2 \le y_2$. Let us assume that $y'_1 \ge
y_1$; the other case can be handled symmetrically. Since $y'_1 <
1/q$, we have $y_1 < 1/q$. Note that there is an agent at $0$
who reported $\{F_1\}$. Thus $\textsc{ALG} \le y_1 < 1/q$. Hence
$\textsc{OPT}/\textsc{ALG}\ge \frac{q}{4}$.
Case~2: $y'_1 \ge 1/q$ or $y'_2 \le 1-1/q$. If $y'_1 \ge 1/q$, then at least one agent within distance $1/q^2$ of
$y'_1$ reported $\{F_1\}$ in $I'$. A similar observation holds for the case
$y'_2 \le 1-1/q$. Thus $\textsc{ALG}' \le 1/q^2$. Hence $\textsc{OPT}' / \textsc{ALG}' \ge
q/2$.
The preceding case analysis shows that $\alpha \ge q/4$.
Since $q = \sqrt{p-1} = \sqrt{n-4}$, the theorem holds.
\end{proof}
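The instance $I$ from the preceding proof is small enough to verify by brute force. The following Python snippet (an informal check with our own naming, not part of the proof) constructs $I$ for $q = 4$ and confirms that the optimal minimum welfare is $1/4$:

```python
from itertools import product

q = 4
# Agents at i/q^2 for i in U (dislike F2, index 1) and i in V
# (dislike F1, index 0), plus one {F1}-hater and one {F2}-hater
# at each of 0, 1/2, and 1.
agents = [(i / q**2, {1}) for i in range(1, q**2 // 2)]
agents += [(i / q**2, {0}) for i in range(q**2 // 2 + 1, q**2)]
for x in (0.0, 0.5, 1.0):
    agents += [(x, {0}), (x, {1})]

def min_welfare(y):
    """Minimum over all agents of the distance to the nearest disliked facility."""
    return min(min(abs(x - y[j]) for j in a) for x, a in agents)

# Search facility pairs (y1, y2) over a grid of multiples of 1/(2q^2),
# which includes the optimal pair (1/4, 3/4).
grid = [i / (2 * q**2) for i in range(2 * q**2 + 1)]
opt = max(min_welfare(y) for y in product(grid, grid))
```

Since the three $\{F_1\}$-haters at $0$, $1/2$, and $1$ already force the minimum welfare to at most $1/4$, the grid search attains exactly $1/4$.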
The following variant of Mechanism~7 is SGSP. In this variant, we
first replace the reported dislikes of all agents with $\{F_1\}$ and
use Mechanism~\ref{mech:min-min} to determine where to build
$F_1$. Then we build all of the remaining facilities at the same location
as $F_1$. This mechanism is SGSP because it disregards the reported
aversion profile. We claim that this mechanism is $2(n+1)$-egalitarian,
where $n$ denotes the number of agents. To prove this claim, we first
observe that when Mechanism~\ref{mech:min-min} is run as a subroutine
within this mechanism, all $n$ agents belong to $H$, so $d_1$, $d_3$,
and the at most $n-1$ gaps between consecutive agents (each of length
at most $2d_2$) sum to $1$; hence $\max(d_1, 2d_2, d_3) \ge
1/(n+1)$. Thus the minimum welfare achieved by the mechanism is at
least $1/(2(n+1))$.
Since the optimal minimum welfare is at most
$1$, the claim holds.
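The inequality $\max(d_1, 2d_2, d_3) \ge 1/(n+1)$ used above can be spot-checked numerically; the Python snippet below (illustration only, with our own naming) samples random profiles in which all $n$ agents dislike $F_1$:

```python
import random

def bound_holds(xs):
    """Check max(d1, 2*d2, d3) >= 1/(n+1) when all n agents dislike F1."""
    zs = sorted(xs)
    n = len(zs)
    d1, d3 = zs[0], 1.0 - zs[-1]
    gaps = [b - a for a, b in zip(zs, zs[1:])]
    d2 = max(gaps) / 2 if gaps else 0.0
    # small slack guards against floating-point rounding
    return max(d1, 2 * d2, d3) >= 1.0 / (n + 1) - 1e-12

random.seed(0)
assert all(bound_holds([random.random() for _ in range(random.randint(1, 20))])
           for _ in range(1000))
```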
\subsection{An egalitarian mechanism for the unit square}
\label{subsec:min-min-square}
We extend Mechanism~\ref{mech:min-min} to a SP egalitarian mechanism for single-facility $\OOFLG$ when the agents are located in the unit square. Let $S$ denote $[0,1]^2$, let $B$ denote the boundary of $S$, and let $x_i = (a_i, b_i)$ denote the location of agent $i$. For convenience, we assume that all agents are located at distinct points; the results below generalize easily to the case where this assumption does not hold.
\begin{custommech}{10}
\label{mech:min-min-plane}
Let $I = (n, 1, \mathbf{x}, \mathbf{a})$ denote the reported $\DOFL$ instance and let $H$ denote $\haters(I, 1)$.
If $H$ is empty, build $F_1$ at $(0,0)$. Otherwise, construct the Voronoi diagram $D$ associated with the locations of the agents in $H$.
Let $V$ be the union of the following three sets of vertices: the vertices of $D$ in the interior of $S$; the points of intersection between $B$ and $D$; the four vertices of $S$.
For each $v$ in $V$, let $d_v$ denote the minimum distance from $v$ to any agent in $H$. Build $F_1$ at a vertex $v$ maximizing $d_v$, breaking ties first by $x$-coordinate and then by $y$-coordinate.
\end{custommech}
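As an informal stand-in for the Voronoi-based computation (illustration only; Mechanism~10 itself is exact and uses the candidate set $V$), the build point can be approximated by scanning a grid over the unit square:

```python
import math

def grid_mechanism10(haters, res=100):
    """Approximate Mechanism 10 on [0,1]^2 by brute force.

    Returns the grid point maximizing the distance to the nearest
    hater.  Scanning in increasing x then y order with a strict
    comparison keeps the smallest x (then smallest y) among ties,
    our reading of the mechanism's lexicographic tie-breaking.
    """
    best_p, best_d = None, -1.0
    for i in range(res + 1):
        for j in range(res + 1):
            p = (i / res, j / res)
            d = min(math.dist(p, h) for h in haters)
            if d > best_d:
                best_p, best_d = p, d
    return best_p, best_d
```

For haters at the four corners, the grid optimum is the center $(0.5, 0.5)$ at distance $\sqrt{1/2}$ from the nearest hater.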
Toussaint presents an efficient $O(n\log n)$ algorithm to find the optimal $v$ in Mechanism~\ref{mech:min-min-plane} \cite{TOUSSAINT1983}. The following lemma establishes that Mechanism~\ref{mech:min-min-plane} is egalitarian. The lemma follows from Theorem~2 of \cite{TOUSSAINT1983}, which addresses the largest empty circle with location constraints problem.
\begin{restatable}{lemma}{minminplaneopt}
\label{lem:min-min-plane-opt}
Mechanism~\ref{mech:min-min-plane} is egalitarian for single-facility $\OOFLG$.
\end{restatable}
\begin{proof}
Let $I = (n, 1, \mathbf{x}, \mathbf{a})$ denote the reported $\DOFL$ instance and let $H$ denote $\haters(I, 1)$.
Using the same arguments as in the proof of Lemma~\ref{lem:min-min-opt-sfo}, we assume that all agents belong to $H$. Let $y^*$ denote an optimal location for the facility and let $\textsc{OPT}$ denote $\MMW(I, y^*)$.
Let $y'$ denote the location where Mechanism~\ref{mech:min-min-plane} builds the facility and let $\textsc{ALG}$ denote $\MMW(I, y')$.
Below we establish that $\textsc{ALG} = \textsc{OPT}$, which implies that Mechanism~\ref{mech:min-min-plane} is egalitarian.
If $H$ is empty, then clearly $\textsc{ALG} = \textsc{OPT}$. Otherwise, finding the optimal location at which to build facility $F_1$ is equivalent to finding the maximum-radius circle centered in the interior or on the boundary of $S$ such that the interior of the circle contains no points from $\{x_i \mid i \in H\}$. This problem is known as the largest empty circle with location constraints problem \cite{TOUSSAINT1983}. Toussaint \cite[Theorem~2]{TOUSSAINT1983}\footnote{Toussaint assumes that no three points are collinear and no four points are cocircular, but the result continues to hold without these assumptions.} shows that the optimal center for the circle is either a vertex of the Voronoi diagram $D$, a point of intersection of $D$ with $B$, or a vertex of $S$. Hence $\textsc{ALG} = \textsc{OPT}$.
\end{proof}
We use a case analysis to establish Lemma~\ref{lem:min-min-opt-sfo-plane} below.
The most interesting case deals with an agent $i$ who dislikes $F_1$ but does not report it.
In this case, the key insight is that when agent $i$ misreports, facility $F_1$ is built either (1) at the same location as when agent $i$ reports truthfully, or (2) inside or on the boundary of the Voronoi region that contains $x_i$ when agent $i$ reports truthfully.
\begin{restatable}{lemma}{minminoptsfoplane}
\label{lem:min-min-opt-sfo-plane}
Mechanism~\ref{mech:min-min-plane} is SP for single-facility $\OOFLG$.
\end{restatable}
\begin{proof}
Let $(I,I')$ denote a single-facility $\OOFLG$ instance with $I = (n, 1, \mathbf{x}, \mathbf{a})$, $I' = (n, 1, \mathbf{x}, \mathbf{a}')$, and let $i$ be an agent such that $\mathbf{a}' = (\mathbf{a}_{-i}, a_i')$.
Using the same arguments as in the proof of Lemma~\ref{lem:min-min-sp-sfo}, we restrict our attention to the case where $F_1$ is in $a_i \setminus a'_i$.
Let Mechanism~\ref{mech:min-min-plane} build $F_1$ at $y$ ($y'$, resp.) when agent $i$ reports $a_i$ ($a'_i$, resp.).
Let $H = \haters(I, 1)$ denote the set of agents who dislike $\{F_1\}$. Note that Mechanism~\ref{mech:min-min-plane} does not build $F_1$ at the location of any agent in $H$, that is, $y \neq x_{i'}$ for all $i'$ in $H$.
When all agents report truthfully,
the Voronoi diagram partitions $S$ into $|H|$ non-overlapping cells, each containing exactly one agent of $H$.
Let $P$ be the cell containing agent $i$ when all agents report truthfully. When agent $i$ reports $a_i'$, the only change to the Voronoi diagram is in the interior and on the boundary of $P$. So when agent $i$ reports $a_i'$, $F_1$ is built either at $y$ or inside $P$ (boundary inclusive). If it is built at $y$, agent $i$ does not benefit. Thus, for the remainder of the proof, we assume that $y'$ belongs to $P$.
Let $\textsc{OPT}$ (resp., $\textsc{OPT}'$) be the distance from the point $y$ (resp., $y'$) to a closest agent in $H$ (resp., $H - i$). Let $d$ and $d'$ denote $\Delta(x_i, y)$ and $\Delta(x_i, y')$, respectively. Hence $d \ge \textsc{OPT}$. Since $y'$ maximizes the minimum distance to the agents in $H - i$ (by Lemma~\ref{lem:min-min-plane-opt}) and the distance from $y$ to any agent in $H - i$ is at least $\textsc{OPT}$, we have $\textsc{OPT}' \ge \textsc{OPT}$.
Assume for the sake of contradiction that agent $i$ benefits by reporting $a_i'$, that is, $d' > d$. We begin by showing that $\textsc{OPT} = \textsc{OPT}'$. Suppose $\textsc{OPT} \neq \textsc{OPT}'$. Since $\textsc{OPT}' \ge \textsc{OPT}$, we have $\textsc{OPT}' > \textsc{OPT}$. Consider building $F_1$ at $y'$ when agent $i$ reports truthfully. Agent $i$ is at distance $d' > d \ge \textsc{OPT}$ from $y'$, and the agent in $H - i$ closest to $y'$ is at distance $\textsc{OPT}' > \textsc{OPT}$ from $y'$. Hence $\MMW(I, y') > \MMW(I, y)$, which contradicts Lemma~\ref{lem:min-min-plane-opt}: the lemma establishes that Mechanism~\ref{mech:min-min-plane} is egalitarian, so $y$ maximizes $\MMW(I, \cdot)$. Thus $\textsc{OPT} = \textsc{OPT}'$.
Recall that $y'$ belongs to $P$. Hence the closest agent in $H$ to $y'$ is agent $i$. Thus $d' \le \textsc{OPT}'$. We have established that $\textsc{OPT} \le d$.
Since $\textsc{OPT} \le d$, $d < d'$, and $d' \le \textsc{OPT}'$, we obtain $\textsc{OPT} < \textsc{OPT}'$, which contradicts $\textsc{OPT} = \textsc{OPT}'$. Thus $d' \le d$, and hence agent $i$ does not benefit by reporting $a_i'$.
\end{proof}
We define Mechanism~11 as the $\OOFLG$ mechanism $\Parallel{M}$, where
$M$ denotes Mechanism~\ref{mech:min-min-plane}. Using Lemmas~\ref{lem:min-min-sfo-ooflg-sp}, \ref{lem:min-min-sfo-ooflg-opt}, \ref{lem:min-min-plane-opt}, and~\ref{lem:min-min-opt-sfo-plane}, we immediately obtain
Theorem~\ref{thm:min-min-opt-eg-plane} below.
\begin{theorem}
\label{thm:min-min-opt-eg-plane}
Mechanism~11 is SP and egalitarian.
\end{theorem}
\section{Introduction}
The facility location game ($\FLG$) was introduced by Procaccia and
Tennenholtz~\cite{PROCACCIA2009}. In this setting, a central
planner wants to build a facility that serves agents located on a path. The agents report their locations, which are fed to a
mechanism that decides where the facility should be built. Procaccia
and Tennenholtz studied two different objectives that the planner
seeks to minimize: the sum of the distances from the facility to all
agents and the maximum distance of any agent to the facility.
Every agent aims to maximize their welfare, which increases as their
distance to the facility decreases. An agent or a coalition of agents
can misreport their location(s) to try to increase their welfare.
It is natural to seek
strategyproof~(SP) or group-strategyproof (GSP) mechanisms,
which incentivize truthful reporting. Often such mechanisms cannot
simultaneously optimize the planner's objective. In these cases, it is desirable to approximately optimize the planner's objective.
In real scenarios, an agent might dislike a certain facility, such as
a power plant, and want to stay away from it. This variant, called the
obnoxious facility location game ($\OFLG$), was introduced by Cheng et
al., who studied the problem of building an obnoxious facility on a
path~\cite{CHENG2011}. In the present paper, we consider the
problem of building multiple obnoxious facilities on a path. With
multiple facilities, there are different ways to define the welfare
function. For example, in the case of two facilities, the welfare of
the agent can be the sum, minimum, or maximum of the distances to the
two facilities. In our work, as all the facilities are obnoxious, a
natural choice for welfare is the minimum distance to
any obnoxious facility: the closest facility to an agent causes them
the most annoyance, and if it is far away, then the agent
is satisfied.
A facility might not be universally obnoxious. Consider, for example,
a school or sports stadium. An agent with no children might consider a
school to be obnoxious due to the associated noise and traffic, while
an agent with children might not consider it to be obnoxious. Another
agent who is not interested in sports might similarly consider a
stadium to be obnoxious. We assume that each agent has dichotomous
preferences; they dislike some subset of the facilities and are
indifferent to the others. Each agent reports a subset of
facilities to the planner. As the dislikes are private information,
the reported subset might not be the subset of facilities that the
agent truly dislikes. On the other hand, we assume that the agent
locations are public and cannot be misreported.
In this paper, we study a variant of $\FLG$, which we call $\OOFLG$
(Dichotomous Obnoxious Facility Location Game), that
combines the three aspects mentioned above: multiple (heterogeneous)
obnoxious facilities, minimum distance as welfare, and dichotomous
preferences.
We seek to design mechanisms that perform well with respect to either a utilitarian or egalitarian objective.
The utilitarian objective
is to maximize the social welfare, that is, the total welfare of
all the agents. A mechanism that maximizes social welfare is said to be
efficient. The egalitarian objective is to maximize the minimum
welfare of any agent. For both objectives, we
seek mechanisms that are SP, or better yet, weakly or
strongly group-strategyproof (WGSP / SGSP).
\subsection{Our contributions}
We study $\OOFLG$ with $n$ agents.
In Section~\ref{sec:sum-min}, we consider the utilitarian objective.
We present $2$-approximate SGSP mechanisms for any number of facilities when the agents are located on a path, cycle, or square.
We obtain the following two additional results for the path setting.
In the first main result of the paper, we obtain a mechanism that is WGSP for any number of facilities and efficient for up to three facilities.
To show that this mechanism is
WGSP, we relate it to a weighted approval voting mechanism.
To prove its efficiency, we identify two crucial properties that the welfare
function satisfies, and we use an exchange argument.
For the path setting, we also show that no SGSP mechanism can achieve an approximation ratio better than $5/4$, even for one facility.
In Section~\ref{sec:min-min}, we consider the egalitarian objective. We provide SP mechanisms for
any number of facilities when the agents are located on a path, cycle,
or square. In the second main result of the paper, we prove that the approximation ratio achieved by any WGSP mechanism is $\Omega(\sqrt{n})$, even for two facilities.
Also, we
present a straightforward $O(n)$-approximate WGSP
mechanism. Both of the results for WGSP mechanisms hold
for $\OOFLG$ when the agents are located on a path or
cycle. Table~\ref{tab:results} summarizes our results.
\begin{table}
\centering
\caption{Summary of our results for
$\OOFLG$ when the agents are located on a path. The heading LB (resp., UB) stands for lower (resp., upper) bound. The results in
the egalitarian column also hold when the agents are
located on a cycle. Boldface results hold when the agents are
located on a path, cycle, or square.}
\begin{tabular}{ l | c c | c c |}
\cline{2-5}
& \multicolumn{2}{c|}{Utilitarian} & \multicolumn{2}{c|}{Egalitarian} \\
\cline{2-5}
& LB & UB & LB & UB \\
\hline
\multicolumn{1}{|l|}{SP} & \multirow{2}{*}{$\mathbf{1}$} & \multirow{2}{*}{$1$ for $k \le 3$} & $\mathbf{1}$ & $\mathbf{1}$\\
\cline{1-1} \cline{4-5}
\multicolumn{1}{|l|}{WGSP} & & & \multirow{2}{*}{$\Omega(\sqrt{n})$} & \multirow{2}{*}{$O(n)$}\\ \cline{1-3}
\multicolumn{1}{|l|}{SGSP} & $5/4$ & $\mathbf{2}$ & & \\
\hline
\end{tabular}
\label{tab:results}
\end{table}
\section{Preliminaries}
\label{sec:preliminaries}
The problems considered in this paper involve a set of agents located
on a path, cycle, or square. In the path (resp., cycle, square)
setting, we assume without loss of generality that the path (resp.,
cycle, square) is the unit interval (resp., unit-circumference circle,
unit square). We map the points on the unit-circumference circle to
$[0,1)$ in the natural manner. Thus, in the path (resp., cycle,
square) setting, each agent $i$ is located in $[0,1]$ (resp., $[0,1)$,
$[0,1]^2$). The distance between any two points $x$ and $y$ is
denoted $\Delta(x,y)$. In the path and square settings, $\Delta(x,y)$ is
defined as the Euclidean distance between $x$ and $y$. In the cycle
setting, $\Delta(x,y)$ is defined as the length of the shorter arc between
$x$ and $y$. In all settings, we index the agents from $1$. Each
agent has a specific location in the path, cycle, or square. A
\emph{location profile} $\mathbf{x}$ is a vector $(x_1,\ldots,x_n)$ of
points, where $n$ denotes the number of agents and $x_i$ is the
location of agent $i$.
Sections~\ref{subsec:sum-min-interval}
and~\ref{subsec:min-min-interval} (resp., Sections~\ref{subsec:sum-min-cycle}
and~\ref{subsec:min-min-cycle}, Sections~\ref{subsec:sum-min-square}
and~\ref{subsec:min-min-square}) present our results for the path (resp., cycle, square)
setting.
Consider a set of agents $1$ through $n$ and a set of facilities
$\mathcal{F}$, where we assume that each agent dislikes (equally)
certain facilities in $\mathcal{F}$ and is indifferent to the rest. In
this context, we define an \emph{aversion profile} $\mathbf{a}$ as a
vector $(a_1,\ldots,a_n)$ where each component $a_i$ is a subset of
$\mathcal{F}$. We say that such an aversion profile is \emph{true} if
each component $a_i$ is equal to the subset of $\mathcal{F}$ disliked
by agent $i$. In this paper, we also consider \emph{reported}
aversion profiles where each component $a_i$ is equal to the set of
facilities that agent $i$ claims to dislike. Since agents can lie, a
reported aversion profile need not be true. For any aversion profile $\mathbf{a}$ and any subset $C$ of agents $[n]$, $\mathbf{a}_C$ (resp., $\mathbf{a}_{-C}$) denotes the aversion profile for the agents in (resp., not in) $C$.
For a singleton set
of agents $\{i\}$, we abbreviate $\mathbf{a}_{-\{i\}}$ as $\mathbf{a}_{-i}$.
An instance of the dichotomous obnoxious facility location ($\DOFL$)
problem is given by a tuple $(n,k,\mathbf{x},\mathbf{a})$ where $n$ denotes the
number of agents, there is a set of $k$ facilities
$\mathcal{F}=\{F_1,\ldots,F_k\}$ to be built, $\mathbf{x}=(x_1,\ldots,x_n)$
is a location profile for the agents, and $\mathbf{a}=(a_1,\ldots,a_n)$ is
an aversion profile (true or reported) for the agents with respect to
$\mathcal{F}$. A solution to such a $\DOFL$ instance is a vector
$\mathbf{y}=(y_1,\ldots,y_k)$ where component $y_j$ specifies the point at
which to build $F_j$. We say that a $\DOFL$ instance is true (resp.,
reported) if the associated aversion profile is true (resp.,
reported).
For any $\DOFL$ instance $I = (n,k,\Location, \Aversion)$ and any $j$ in $[k]$, we define $\haters(I, j)$ as $\{i \in [n] \mid F_j \in a_i\}$,
and $\indiff(I)$ as $\{ i \in [n] \mid a_i = \emptyset \}$.
For any $\DOFL$ instance $I=(n,k,\mathbf{x},\mathbf{a})$ and any associated
solution $\mathbf{y}$, we define the \emph{welfare} of agent $i$, denoted
$w(I,i,\mathbf{y})$, as $\min_{j:F_j \in a_i} \Delta(x_i, y_j)$, i.e., the minimum
distance from $x_i$ to any facility in $a_i$. Remark: If $a_i$ is
empty, we define $w(I,i,\mathbf{y})$ as $1/2$ in the cycle setting,
$\max(\Delta(x_i, 0), \Delta(x_i, 1))$ in the path setting, and the maximum
distance from $x_i$ to a corner in the square setting.
The foregoing definition of agent welfare is suitable for true $\DOFL$
instances, and is only meaningful for reported $\DOFL$ instances where
the associated aversion profile is close to true. In this paper,
reported aversion profiles arise in the context of mechanisms that
incentivize truthful reporting, so it is reasonable to expect such
aversion profiles to be close to true. We define the \emph{social
welfare} (resp., \emph{minimum welfare}) as the sum (resp., minimum)
of the individual agent welfares. When the facilities are built at
$\mathbf{y}$, the social welfare and minimum welfare are denoted by
$\SSW(I,\mathbf{y})$ and $\MMW(I,\mathbf{y})$, respectively. Thus
$\SSW(I,\mathbf{y})=\sum_{i \in [n]}w(I,i,\mathbf{y})$ and
$\MMW(I,\mathbf{y})=\min_{i \in [n]}w(I,i,\mathbf{y})$.
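To fix ideas, the welfare definitions for the path setting can be transcribed directly (an illustrative Python sketch with our own naming, including the convention above for agents with empty $a_i$):

```python
def welfare(x, a, y):
    """Welfare of an agent at x on [0, 1] who dislikes the facilities
    with indices in the set a, when facility j is built at y[j]."""
    if not a:                     # empty aversion set: farther endpoint
        return max(x, 1.0 - x)
    return min(abs(x - y[j]) for j in a)

def social_welfare(xs, avs, y):
    """Sum of the individual agent welfares (SW)."""
    return sum(welfare(x, a, y) for x, a in zip(xs, avs))

def minimum_welfare(xs, avs, y):
    """Minimum of the individual agent welfares (MW)."""
    return min(welfare(x, a, y) for x, a in zip(xs, avs))
```

For example, with agents at $0$, $1/2$, and $1$, aversion sets $\{F_1\}$, $\emptyset$, and $\{F_2\}$, and facilities built at $(1/4, 3/4)$, the individual welfares are $1/4$, $1/2$, and $1/4$.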
\begin{definition}
For $\alpha \ge 1$, a $\DOFL$ algorithm $A$ is $\alpha$-efficient if for any $\DOFL$ instance $I$,
\begin{equation*}
\max_{\mathbf{y}} \SSW(I, \mathbf{y}) \le \alpha \SSW (I, A(I)).
\end{equation*} Similarly, $A$ is $\alpha$-egalitarian if for any $\DOFL$ instance $I$,
\begin{equation*}
\max_{\mathbf{y}} \MMW(I, \mathbf{y}) \le \alpha \MMW (I, A(I)).
\end{equation*}
\end{definition}
A $1$-efficient (resp., $1$-egalitarian) $\DOFL$ algorithm is said to be efficient (resp., egalitarian).
We are now ready to define a $\DOFL$-related game, which we call $\OOFLG$. It is
convenient to describe a $\OOFLG$ instance in terms of a pair $(I,I')$ of
$\DOFL$ instances where $I=(n,k,\mathbf{x},\mathbf{a})$ is true and
$I'=(n,k,\mathbf{x},\mathbf{a}')$ is reported. There are $n$ agents indexed
from $1$ to $n$, and a planner. There is a set of $k$ facilities
$\mathcal{F}=\{F_1,\ldots,F_k\}$ to be built. The numbers $n$ and $k$
are publicly known, as is the location profile $\mathbf{x}$ of the
agents. Each component $a_i$ of the true aversion profile $\mathbf{a}$ is
known only to agent $i$. Each agent $i$ submits component $a_i'$ of
the reported aversion profile $\mathbf{a}'$ to the planner. The planner,
who does not have access to $\mathbf{a}$, runs a $\DOFL$ algorithm, call it
$A$, to map $I'$ to a solution. The input-output behavior of $A$
defines a $\OOFLG$ mechanism, call it $M$; in the special case where
$k=1$, we say that $M$ is a single-facility $\OOFLG$ mechanism. We would
like to choose $A$ so that $M$ enjoys strong game-theoretic
properties. We say that $M$ is $\alpha$-efficient (resp.,
$\alpha$-egalitarian, efficient, egalitarian) if $A$ is $\alpha$-efficient (resp.,
$\alpha$-egalitarian, efficient, egalitarian). As indicated earlier, such properties (which
depend on the notion of agent welfare) are only meaningful if the
reported aversion profile is close to true. To encourage truthful
reporting, we require our mechanisms to be SP, as defined below; we
also consider the stronger properties WGSP and SGSP.
The SP property says
that no agent can increase their welfare by lying about their
dislikes.
\begin{definition}
A $\OOFLG$ mechanism $M$ is SP if for any $\OOFLG$ instance $(I, I')$ with $I = (n,k,\Location, \Aversion)$, and $I' = (n,k,\Location,\LieAversion)$,
and any agent $i$ in $[n]$ such that $\mathbf{a}' = (\mathbf{a}_{-i}, a_i')$, we have
\begin{equation*}
w(I, i, M(I)) \ge w(I, i, M(I')).
\end{equation*}
\end{definition}
The WGSP property says that if a non-empty coalition $C \subseteq [n]$ of agents
lies, then
at least one agent in $C$ does not increase their welfare.
\begin{definition}
A $\OOFLG$ mechanism $M$ is WGSP if for any $\OOFLG$ instance $(I, I')$ with $I = (n,k,\Location, \Aversion)$, and $I' = (n,k,\Location,\LieAversion)$, and any non-empty coalition $C \subseteq [n]$ such that $\mathbf{a}' = (\mathbf{a}_{-C}, \mathbf{a}'_C)$, there exists an agent $i$ in $C$ such that
\begin{equation*}
w(I, i, M(I)) \ge w(I, i, M(I')).
\end{equation*}
\end{definition}
The SGSP property says that if a coalition $C \subseteq [n]$ of agents lies
and some agent in $C$ increases their welfare then some agent in $C$ decreases their welfare.
\begin{definition}
A $\OOFLG$ mechanism $M$ is SGSP if for any $\OOFLG$ instance $(I, I')$ with $I = (n,k,\Location, \Aversion)$, and $I' = (n,k,\Location,\LieAversion)$, and any coalition $C \subseteq [n]$ such that $\mathbf{a}' = (\mathbf{a}_{-C}, \mathbf{a}'_C)$, if there exists an agent $i$ in $C$ such that
\begin{equation*}
w(I, i, M(I)) < w(I, i, M(I')),
\end{equation*} then there exists an agent $i'$ in $C$ such that
\begin{equation*}
w(I, i', M(I)) > w(I, i', M(I')).
\end{equation*}
\end{definition}
Every SGSP mechanism is WGSP and every WGSP mechanism is SP.
\subsection{Related work}
\label{sec:related-works}
$\FLG$ was introduced by Procaccia and Tennenholtz
\cite{PROCACCIA2009}. Many generalizations and extensions of
$\FLG$ have been studied
\cite{ALON2010,DOKOW2012,FELDMAN2013,FILOS0217,FOTAKIS2010,FOTAKIS2014,LU2010,ZHANG2014}; here
we highlight some of the most relevant work.
Cheng et al.\ introduced $\OFLG$ and presented a WGSP
mechanism to build a single facility on a path
\cite{CHENG2011}. Later they extended the model to cycles and
trees \cite{CHENG2013}. A complete characterization of
single-facility SP/WGSP mechanisms for paths has
been developed \cite{HAN2012,IBARA2012}. Duan et al.\ studied the
problem of locating two obnoxious facilities at least distance $d$
apart \cite{DUAN2019}. Other variants of $\OFLG$ have been
considered \cite{CHENG2013BOUNDEDSERVICE,FUKUI2019,OOMINE2016,YE2015}.
Agent preferences over the facilities were introduced to $\FLG$ in
\cite{FEIGENBAUM2015} and \cite{ZOU2015}. Serafino and Ventre studied
$\FLG$ for building two facilities where each agent likes a subset of
the facilities \cite{SERAFINO201627}. Anastasiadis and Deligkas extended this model to allow the agents to like, dislike, or be
indifferent to the facilities \cite{ANASTASIADIS2018}. The
aforementioned works address a linear (sum) welfare function. Yuan et al.\ studied non-linear welfare functions (max and min) for building two
non-obnoxious facilities \cite{YUAN2016}; their results have
subsequently been strengthened \cite{CHEN2020,LI2020}. In the present
paper, we initiate the study of a non-linear welfare function (min) for building
multiple obnoxious facilities.
\subsection{Efficient mechanism for the cycle}
\label{subsec:sum-min-cycle}
Now we present a simple adaptation of Mechanism~\ref{mech:sum-min-sgsp} to the case where the agents are located on a cycle.
\begin{custommech}{4}
\label{mech:sum-min-sgsp-circle}
Let $(n,k,\Location, \Aversion)$ denote the reported $\DOFL$ instance. Build all facilities at $0$ if
\begin{equation*}
\sum_{i\in [n]} \Delta(x_i,0) \ge \sum_{i \in [n]} \Delta(x_i, 1/2);
\end{equation*}
otherwise, build all facilities at $1/2$.
\end{custommech}
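For concreteness, Mechanism~4 admits a direct implementation. The sketch below assumes the cycle is the unit-circumference circle $[0,1)$ with $\Delta(x,y)=\min(|x-y|,\,1-|x-y|)$; the function names are ours, not from the paper.

```python
def cycle_dist(x, y):
    """Distance Delta on the unit-circumference cycle [0, 1)."""
    d = abs(x - y)
    return min(d, 1.0 - d)

def mechanism4(locations, k):
    """Mechanism 4: build all k facilities at 0 if the total cycle
    distance to 0 is at least the total cycle distance to 1/2;
    otherwise build all of them at 1/2.  Reported dislikes are
    ignored, which is why the mechanism is SGSP."""
    d0 = sum(cycle_dist(x, 0.0) for x in locations)
    dhalf = sum(cycle_dist(x, 0.5) for x in locations)
    site = 0.0 if d0 >= dhalf else 0.5
    return [site] * k
```

Note that $\Delta(x,0)+\Delta(x,1/2)=1/2$ holds exactly on this cycle, so the chosen site always collects total distance at least $n/4$, in line with the bound used in the proof of Theorem~\ref{thm:sum-min-sgsp-circle-opt}.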
As with Mechanism~\ref{mech:sum-min-sgsp}, reported dislikes do not affect the locations at which Mechanism~\ref{mech:sum-min-sgsp-circle} builds the facilities. Hence Mechanism~\ref{mech:sum-min-sgsp-circle} is SGSP.
\begin{theorem}
\label{thm:sum-min-sgsp-circle-sgsp}
Mechanism~\ref{mech:sum-min-sgsp-circle} is SGSP.
\end{theorem}
\begin{theorem}
\label{thm:sum-min-sgsp-circle-opt}
Mechanism~\ref{mech:sum-min-sgsp-circle} is $2$-efficient.
\end{theorem}
\begin{proof}
We sketch a proof that is similar to our proof of Theorem~\ref{thm:opt-sum-min-sgsp}.
Let $I = (n,k,\Location, \Aversion)$ denote the reported $\DOFL$ instance. Let $\textsc{ALG}$ denote the social
welfare obtained by Mechanism~\ref{mech:sum-min-sgsp-circle} on this instance, and let $\textsc{OPT}$ denote the maximum possible social welfare on this instance.
We need to prove that
$2\cdot\textsc{ALG}\geq\textsc{OPT}$.
Assume without loss of generality that
Mechanism~\ref{mech:sum-min-sgsp-circle} builds all facilities at
$0$. (A symmetric argument handles the case where all
facilities are built at $1/2$).
Using similar arguments, we obtain $\textsc{ALG} \ge \sum_{i\in [n]} \Delta(x_i,0)$. Also, we have $\sum_{i\in [n]} \Delta(x_i,0) \ge \sum_{i \in [n]} \Delta(x_i, 1/2)$, and $\Delta(x_i, 0) + \Delta(x_i, 1/2) \ge 1/2$ for all agents $i$, implying that $\sum_{i\in [n]} \Delta(x_i, 0) \ge n/4$. Thus $\textsc{ALG} \ge n/4$.
Since no agent has welfare greater than $1/2$, we have $n/2 \ge \textsc{OPT}$. Thus, $2 \cdot \textsc{ALG} \ge n/2 \ge \textsc{OPT}$, as required.
\end{proof}
\subsection{Efficient mechanism for the unit square}
\label{subsec:sum-min-square}
We now show how to adapt Mechanism~\ref{mech:sum-min-sgsp} to the case where the agents are located in the unit square.
\begin{custommech}{5}
\label{mech:sum-min-sgsp-square}
Let $(n,k,\Location, \Aversion)$ denote the reported $\DOFL$ instance.
For each point $p$ in $\{0,1\}^2$, let $d_p$ denote $\sum_{i\in [n]} \Delta(x_i,p)$.
Let $q$ be the point in $\{0,1\}^2$ that maximizes $d_q$, breaking ties lexicographically.
Build all facilities at $q$.
\end{custommech}
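Mechanism~5 likewise reduces to a four-way comparison. A sketch under the same conventions (Euclidean distance on the unit square; names ours), where listing the corners in lexicographic order and keeping the first maximizer implements the tie-breaking rule:

```python
import math

def mechanism5(locations, k):
    """Mechanism 5: among the four corners of the unit square, build
    all k facilities at the corner maximizing the total Euclidean
    distance to the agents."""
    corners = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
    # max() returns the first maximizer, so listing the corners in
    # lexicographic order breaks ties lexicographically.
    best = max(corners, key=lambda p: sum(math.dist(x, p) for x in locations))
    return [best] * k
```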
As in the case of Mechanism~\ref{mech:sum-min-sgsp}, reported dislikes do not affect the locations at which Mechanism~\ref{mech:sum-min-sgsp-square} builds the facilities. Hence Mechanism~\ref{mech:sum-min-sgsp-square} is SGSP.
\begin{theorem}
\label{thm:sum-min-sgsp-square-sgsp}
Mechanism~\ref{mech:sum-min-sgsp-square} is SGSP.
\end{theorem}
\begin{theorem}
\label{thm:sp-sum-min-sgsp-square-opt}
Mechanism~\ref{mech:sum-min-sgsp-square} is $2$-efficient.
\end{theorem}
\begin{proof}
We sketch a proof that is similar to our proof of Theorem~\ref{thm:opt-sum-min-sgsp}.
Let $I = (n,k,\Location, \Aversion)$ denote the reported $\DOFL$ instance. Let $\textsc{ALG}$ denote the social
welfare obtained by Mechanism~\ref{mech:sum-min-sgsp-square} on this instance, and let $\textsc{OPT}$ denote the maximum possible social welfare on this instance.
We need to prove that
$2\cdot\textsc{ALG}\geq\textsc{OPT}$.
Assume without loss of generality that
Mechanism~\ref{mech:sum-min-sgsp-square} builds all facilities at
$(0, 0)$. (A symmetric argument handles other cases).
Using similar arguments, we obtain $\textsc{ALG} \ge \sum_{i\in [n]} \Delta(x_i,(0,0))$.
Also, we have $$\sum_{i\in [n]} \Delta(x_i,(0,0)) \ge \max_{p\in \{(0,1), (1,0), (1,1)\}} \sum_{i \in [n]} \Delta(x_i, p),$$ and $$\Delta(x_i, (0,0)) + \Delta(x_i, (0,1)) + \Delta(x_i, (1,0)) + \Delta(x_i, (1,1))\ge 2\sqrt{2}$$ for all agents $i$, implying that $\sum_{i\in [n]} \Delta(x_i, (0,0)) \ge n/\sqrt{2}$. Thus $\textsc{ALG} \ge n/\sqrt{2}$.
Since no agent has welfare greater than $\sqrt{2}$, we have $\sqrt{2}n \ge \textsc{OPT}$.
Thus, $2 \cdot \textsc{ALG} \ge \sqrt{2}n \ge \textsc{OPT}$, as required.
\end{proof}
\section{Efficient Mechanisms}
\label{sec:sum-min}
\subsection{Efficient mechanisms for the unit interval}
\label{subsec:sum-min-interval}
We now present our efficient mechanism for $\OOFLG$.
\begin{mechanism}
\label{mech:sum-min}
For a given reported $\DOFL$ instance $I = (n,k,\Location, \Aversion)$,
output the lexicographically least solution $\mathbf{y}$ in $\{0, 1\}^k$ that maximizes the social welfare $\SSW (I,\mathbf{y})$.
\end{mechanism}
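Since the candidate set $\{0,1\}^k$ is finite, Mechanism~\ref{mech:sum-min} can be realized by brute force. The sketch below represents a dislike set $a_i$ by a set of facility indices and, following the welfare convention used later in the proof of Theorem~\ref{thm:opt-sum-min-sgsp}, assigns an indifferent agent the welfare $\max(x_i, 1-x_i)$; the names are ours.

```python
from itertools import product

def welfare(x, dislikes, y):
    """Welfare of an agent at x with dislike set `dislikes` (facility
    indices) under solution y: distance to the nearest disliked
    facility, or max(x, 1-x) if the agent is indifferent."""
    if not dislikes:
        return max(x, 1.0 - x)
    return min(abs(x - y[j]) for j in dislikes)

def mechanism1(locations, dislike_sets, k):
    """Output the lexicographically least y in {0,1}^k maximizing the
    social welfare (the sum of the agents' welfares)."""
    best, best_sw = None, -1.0
    for y in product((0, 1), repeat=k):  # enumerated in lexicographic order
        sw = sum(welfare(x, a, y) for x, a in zip(locations, dislike_sets))
        if sw > best_sw:  # strict inequality keeps the lexicographically least
            best, best_sw = y, sw
    return best
```

The enumeration is exponential in $k$, which is acceptable here since the mechanism is analysed for constant $k$.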
\begin{theorem}
\label{thm:sp-sum-min}
Mechanism~\ref{mech:sum-min} is WGSP.
\end{theorem}
\begin{proof}
To establish this theorem, we show that Mechanism~\ref{mech:sum-min} can be equivalently expressed in terms of the approval voting mechanism. Hence Theorem~\ref{thm:wgsp-app-vote} implies the theorem.
Let $(I,I')$ denote a $\OOFLG$ instance where $I=(n,k,\Location, \Aversion)$ and $I'=(n,k,\Location,\LieAversion)$.
We view each agent $i \in [n]$ as a voter, and each $\mathbf{y}$ in $\{0,1\}^k$ as a candidate. We obtain the top-tier candidates $C_i$ of voter $i$, and their reported top-tier candidates $C'_i$, from $a_i$ and $a'_i$, respectively. Assume without loss of generality that $x_i \le 1/2$ (the other case can be handled similarly). Set $C_i = \{\mathbf{y} = (y_1, \dots, y_k) \in \{0,1\}^k \mid y_j = 1 \text{ for all } F_j \in a_i\}$ and similarly $C'_i = \{\mathbf{y} = (y_1, \dots, y_k) \in \{0,1\}^k \mid y_j = 1 \text{ for all } F_j \in a'_i\}$. Also set $w^+_i = 1-x_i$ and $w_i^- = x_i$. With this notation, it is easy to see that $A(\mathbf{y}) = \SSW (I', \mathbf{y})$, and that choosing the $\mathbf{y}$ with the highest social welfare in Mechanism~\ref{mech:sum-min} is the same as electing the candidate with the highest approval in Mechanism~\ref{mech:weighted-approval-voting}.
\end{proof}
We show that Mechanism~\ref{mech:sum-min} is efficient for $k = 3$. First, we note a well-known result about the 1-Maxian problem. In this problem, there are $n$ points located at $z_1, \dots, z_n$ in the interval $[a,b]$, and the task is to choose a point in $[a,b]$ such that the sum of the distances from that point to all $z_i$s is maximized.
\begin{lemma}[Optimality of the 1-Maxian Problem]
\label{lem:1-maxian}
Let $[a,b]$ be a real interval, let $z_1, \dots, z_n$ belong to $[a,b]$, and let $f(z)$ denote $\sum_{i\in [n]} |z-z_i|$.
Then $\max_{z\in [a,b]} f(z) = \max(f(a), f(b))$; indeed, $f$ is convex, so its maximum over an interval is attained at an endpoint.
\end{lemma}
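Lemma~\ref{lem:1-maxian} is a consequence of convexity: $f$ is a sum of convex absolute-value terms, so its maximum over an interval is attained at an endpoint. A quick numerical check of this fact (helper names ours):

```python
def f(z, points):
    """Objective of the 1-Maxian problem: total distance from z to the points."""
    return sum(abs(z - p) for p in points)

def endpoint_is_max(a, b, points, grid=400):
    """Check numerically that the maximum of f over [a, b] is attained
    at one of the endpoints a or b."""
    best_on_grid = max(f(a + t * (b - a) / grid, points) for t in range(grid + 1))
    return best_on_grid <= max(f(a, points), f(b, points)) + 1e-9
```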
Before proving the main theorem, we establish Lemma~\ref{lem:opt-sum-min}, which follows from Lemma~\ref{lem:1-maxian}.
\begin{restatable}{lemma}{optsummin}
\label{lem:opt-sum-min}
Let $I = (n, k, \mathbf{x}, \mathbf{a})$ denote the reported $\DOFL$ instance,
let $Y$ denote the set of all $y$ in $[0,1]$ such that it is efficient to build all $k$ facilities at $y$, and assume that $Y$ is non-empty. Then $Y \cap \{0,1\}$ is non-empty.
\end{restatable}
\begin{proof}
Let $U$ denote $\indiff(I)$. When all of the facilities are built at $y$,
\begin{equation*}
\SSW (I, (y, \dots, y)) = \sum_{i\in [n] \setminus U} |x_i - y| + \sum_{i \in U} w(I, i, y).
\end{equation*}
Since $Y$ is non-empty, $\max_y \SSW (I, (y, \dots, y)) = \max_\mathbf{y} \SSW(I, \mathbf{y})$.
Moreover, since $\sum_{i \in U} w(I, i, y)$ does not depend on $y$, Lemma~\ref{lem:1-maxian} implies that
\begin{equation*}
\max(\SSW (I, (0, \dots, 0)), \SSW (I, (1, \dots, 1))) = \max_y \SSW (I, (y, \dots, y)).
\end{equation*}
Thus, if $\SSW (I, (0, \dots, 0)) \ge \SSW (I, (1, \dots, 1))$, it is efficient to build all $k$ facilities at $0$.
Otherwise, it is efficient to build all $k$ facilities at $1$.
\end{proof}
\begin{theorem}
\label{thm:opt-sum-min}
Mechanism~\ref{mech:sum-min} is efficient for $k=3$.
\end{theorem}
\newcommand{\tuple}{ y_{1}^*,y_{3}^* }
\begin{proof}
Let $I = (n,k,\Location, \Aversion)$ denote the reported $\DOFL$ instance and let $\mathbf{y}^* = (y_1^*,y_2^*, y_3^*)$ be an efficient solution for $I$ such
that $y_1^*\leq y_2^*\leq y_3^*$.
Consider fixing variables $y_1$ and $y_3$ in the social welfare
function $\SSW(I, \mathbf{y})$. That is, we have
\begin{equation*}
\SSW(I, \mathbf{y})|_{y_1 = y_1^*, y_3 =y_3^*} = \sum_{i\in [n]} w(I,i, \mathbf{y})|_{y_1 = y_1^*, y_3 =y_3^*}.
\end{equation*}
For convenience, let $\SSW(y_2)$ denote $\SSW(I,\mathbf{y})|_{y_1 = y_1^*, y_3 =y_3^*} $ and
let $w_i(y_2)$ denote $w(I,i, \mathbf{y})|_{y_1 = y_1^*, y_3 =y_3^*}$ for each agent
$i$.
Claim~1: For each agent $i$, the welfare function
$w_i(y_2)$ with $y_2 \in \ensuremath{[\tuple ]}$ satisfies at least one of the
following two properties:
\begin{enumerate}
\item $w_i(y_2) = |y_2-x_i|$;
\item $w_i(y_1^*) = w_i(y_3^*) = \max_{y\in \ensuremath{[\tuple ]}} w_i(y)$.
\end{enumerate}
Proof: Consider an agent $i$. We consider five cases.
Case~1: $F_2 \notin a_i$. Since the welfare of agent $i$ is independent of the location of $F_2$, $w_i$ is a constant function. Hence property~2 is satisfied.
Case~2: $a_i = \{F_2\}$. By definition, we
have $w_i(y_2) = |y_2-x_i|$. Hence property~1 is satisfied.
Case~3: $a_i= \{F_1,F_2\}$. By definition, we
have $w_i(y_2) = \min(|y_1^*-x_i|,|y_2-x_i|)$. Notice that
$w_i(y_1^*) = \min(|y_1^*-x_i|,|y_1^*-x_i|) = |y_1^*-x_i|= \max_{y\in
\ensuremath{[\tuple ]}} w_i(y)$. Moreover, $w_i(y_3^*) =
\min(|y_1^*-x_i|,|y_3^*-x_i|)$. We consider two cases.
Case~3.1: $|y_1^*-x_i| > |y_3^*-x_i|$. Then $w_i(y_3^*)= |y_3^*-x_i|$
and hence $w_i(y_2) = |y_2- x_i|$ for all $y_2$ in $\ensuremath{[\tuple ]}$, that is,
$w_i(y_2)$ satisfies property~1.
Case~3.2: $|y_1^*-x_i|\le |y_3^*-x_i|$. Then $w_i(y_3^*)=
|y_1^*-x_i|=\max_{y\in \ensuremath{[\tuple ]}} w_i(y) = w_i(y_1^*)$ and hence $w_i(y_2)$
satisfies property~2.
Case~4: $a_i= \{F_2,F_3\}$. This case is symmetric to Case~3 and can be handled similarly.
Case~5: $a_i= \{F_1,F_2,F_3\}$. By definition, we
have $w_i(y_2) = \min(|y_1^*-x_i|,|y_2-x_i|,|y_3^*-x_i|)$.
Notice that $w_i(y_1^*) = w_i(y_3^*) = \min(|y_1^*-x_i|,|y_3^*-x_i|)$.
Also notice that for any $y_2$ in $\ensuremath{[\tuple ]}$, $w_i(y_2) = \min(|y_1^*-x_i|,|y_2-x_i|,|y_3^*-x_i|) \le \min(|y_1^*-x_i|,|y_3^*-x_i|) = w_i(y_1^*)$.
Hence property~2 holds.
This concludes our proof of Claim~1.
Claim~2: There is a solution that maximizes $\SSW(I, \mathbf{y})$ over all $\mathbf{y}$ and builds the facilities in at most two locations.
Proof:
We establish the claim by proving
that either $\SSW(I,(y_1^*,y_{1}^*, y_{3}^*)) \ge \SSW(I,\mathbf{y}^*) $
or $\SSW(I, (y_1^*,y_{3}^*, y_{3}^*)) \ge \SSW(I, \mathbf{y}^*)$.
Claim~1 implies that the set of agents $[n]$ can be partitioned into two sets
$(S,\overline{S})$ such that $w_i(y_2)$ satisfies property~1 for all
$i$ in $S$, and $w_i(y_2)$ satisfies property~2 for all $i$ in
$\overline S$. Thus, we have $\SSW(y_2) = \sum_{i\in [n]}
w_i(y_2) = \sum_{i\in S} w_i(y_2) + \sum_{i\in \overline S} w_i(y_2)$.
By Lemma~\ref{lem:1-maxian}, there is a $b$ in $\{ y_{1}^*,y_{3}^* \}$ such
that $\sum_{i\in S} w_i(b)\ge \sum_{i\in S} w_i(y_2)$ for all $y_2$ in
\ensuremath{[\tuple ]}. For any $i$ in $\overline{S}$, we deduce from property~2
that $w_i(b)\ge w_i(y_2)$ for all $y_2$ in \ensuremath{[\tuple ]}. Therefore,
$\SSW(b)\ge \SSW(y_2)$ for all $y_2$ in $\ensuremath{[\tuple ]}$. This
completes our proof of Claim~2.
Having established Claim~2, we can assume without loss of generality
that $y_2^*=y_3^*$.
A similar argument as above can be used to
prove that either $(0, y_2^*,y_2^*)$ or $(y_2^*, y_2^*,y_2^*)$ is an
efficient solution. Now if $(0, y_2^*,y_2^*)$ is efficient, then one can
use a similar argument to prove that either $(0,0,0)$ or $(0,1,1)$ is
efficient. And if $(y_2^*, y_2^*,y_2^*)$ is efficient, then by applying
Lemma~\ref{lem:opt-sum-min} with $k=3$, we deduce that either $(0,0,0)$
or $(1,1,1)$ is efficient. Thus, there is a $0$-$1$ efficient solution.
The efficiency of Mechanism~\ref{mech:sum-min} follows.
\end{proof}
When $k=2$ (resp., $1$), we can add one (resp., two) dummy facilities and use Theorem~\ref{thm:opt-sum-min} to establish that Mechanism~\ref{mech:sum-min} is efficient for $k = 2$ (resp., $1$). Theorem~\ref{thm:sum-min-sgsp-lower-bound} below provides a lower bound on the
approximation ratio of any SGSP efficient mechanism; this result implies that Mechanism~\ref{mech:sum-min} is not SGSP.
\begin{restatable}{theorem}{summinsgsplowerbound}
\label{thm:sum-min-sgsp-lower-bound}
There is no SGSP $\alpha$-efficient $\OOFLG$ mechanism with
$\alpha<5/4$.
\end{restatable}
\begin{proof}
Let $n$ be a large even integer.
We construct two $\left(\frac{3n}{2} + 1\right)$-agent single-facility $\OOFLG$ instances $(I,I)$ and $(I,I')$. In both $(I,I)$ and $(I,I')$, agent $1$ is located at $0$ and dislikes $\{F_1\}$, $n/2$ agents are located at $1$ and dislike $\{F_1\}$, and the remaining $n$ agents, which we denote by the set $U$, are located at $0$ and dislike $\emptyset$. In $I$, all agents report truthfully, while in $I'$, all agents in $U$ report $\{F_1\}$ and the remaining agents report truthfully.
Let the maximum social welfare for instances $I$ and $I'$ be $\textsc{OPT}$ and $\textsc{OPT}'$, respectively. It is easy to see that $\textsc{OPT} = 3n/2$ and $\textsc{OPT}' = n+1$ (obtained by building $F_1$ at $0$ and $1$, respectively). Let the social welfare achieved by some SGSP $\OOFLG$ mechanism $M$ on these instances be $\textsc{ALG}$ and $\textsc{ALG}'$, respectively.
Let $M$ build $F_1$ at $y$ on $I$. It follows that $\textsc{ALG} = y + \frac{3n}{2} - \frac{ny}{2}$. If the agents in $U$ and agent $1$ form a coalition in $I$ and the agents in $U$ report $\{F_1\}$, then the instance becomes $I'$. Thus, as $M$ is SGSP, $M$ cannot build $F_1$ to the right of $y$ in $I'$. Using this fact, it is easy to see that $\textsc{ALG}' \le (n+1)y + \frac{n}{2}(1-y) = \frac{ny}{2} + \frac{n}{2} + y$.
Using $\textsc{OPT} = \frac{3n}{2}$ and $\textsc{ALG} = y + \frac{3n}{2} - \frac{ny}{2}$, we obtain
\begin{equation}
\label{eqn:alpha-bound-one}
\alpha \ge \frac{\frac{3n}{2}}{y + \frac{3n}{2} - \frac{ny}{2}}.
\end{equation}
Similarly, using $\textsc{OPT}' = n + 1$ and $\textsc{ALG}' \le \frac{ny}{2} + \frac{n}{2} + y$, we obtain
\begin{equation}
\label{eqn:alpha-bound-two}
\alpha \ge \frac{n + 1}{\frac{ny}{2} + \frac{n}{2} + y}.
\end{equation}
Let $f(y)$ denote
\begin{equation*}
\max\left( \frac{\frac{3n}{2}}{y + \frac{3n}{2} - \frac{ny}{2}}, \frac{n + 1}{\frac{ny}{2} + \frac{n}{2} + y} \right).
\end{equation*}
From \eqref{eqn:alpha-bound-one} and \eqref{eqn:alpha-bound-two} we deduce that $\alpha \ge f(y)$.
Let $y^*$ denote a value of $y$ in $[0,1]$ minimizing $f(y)$.
It is easy to verify that $y^*$ satisfies $f(y^*) = \frac{5n^2 + 4n - 4}{4n(n+1)}$. Thus, $\alpha \ge f(y^*)$. As $n$ approaches infinity, $f(y^*)$ approaches $5/4$. Thus, for any SGSP $\alpha$-efficient mechanism, we have $\alpha \ge 5/4$.
\end{proof}
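The value $f(y^*)$ stated in the proof can be verified numerically by minimizing $f$ over a fine grid on $[0,1]$; a sketch in our notation:

```python
def f(y, n):
    """Maximum of the two lower bounds on alpha from the proof."""
    bound1 = (3 * n / 2) / (y + 3 * n / 2 - n * y / 2)
    bound2 = (n + 1) / (n * y / 2 + n / 2 + y)
    return max(bound1, bound2)

def min_f(n, grid=10**5):
    """Approximate min_{y in [0,1]} f(y) on a uniform grid."""
    return min(f(t / grid, n) for t in range(grid + 1))
```

For large $n$ the minimum approaches $5/4$, matching the stated lower bound.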
In view of Theorem~\ref{thm:sum-min-sgsp-lower-bound}, it is natural to try to determine the minimum value of $\alpha$ for which an SGSP $\alpha$-efficient $\OOFLG$ mechanism exists.
Below we present a $2$-efficient SGSP
mechanism. It remains an interesting open problem to improve the
approximation ratio of $2$, or to establish a tighter lower bound for the
approximation ratio.
\begin{mechanism}
\label{mech:sum-min-sgsp}
Let $(n,k,\Location, \Aversion)$ denote the reported $\DOFL$ instance. Build all facilities at $0$ if
$\sum_{i\in [n]} x_i \ge \sum_{i \in [n]} (1-x_i)$;
otherwise, build all facilities at $1$.
\end{mechanism}
\begin{theorem}
\label{thm:sp-sum-min-sgsp}
Mechanism~\ref{mech:sum-min-sgsp} is SGSP.
\end{theorem}
\begin{proof}
Reported dislikes do not affect the locations at which the facilities are built. Hence the theorem follows.
\end{proof}
\begin{restatable}{theorem}{optsumminsgsp}
\label{thm:opt-sum-min-sgsp}
Mechanism~\ref{mech:sum-min-sgsp} is 2-efficient.
\end{restatable}
\begin{proof}
Let $I = (n,k,\Location, \Aversion)$ denote the reported $\DOFL$ instance. Let $\textsc{ALG}$ denote the social
welfare obtained by Mechanism~\ref{mech:sum-min-sgsp} on this instance, and let $\textsc{OPT}$ denote the maximum possible social welfare on this instance.
We need to prove that
$2\cdot\textsc{ALG}\geq\textsc{OPT}$.
Assume without loss of generality that
Mechanism~\ref{mech:sum-min-sgsp} builds all facilities at
$0$. (A symmetric argument handles the case where all
facilities are built at $1$). Then the welfare of an agent $i$ not in $\indiff(I)$ is $x_i$ and the welfare of an agent $i'$ in $\indiff(I)$ is $\max(x_{i'}, 1-x_{i'}) \ge x_{i'}$. Thus, $\textsc{ALG} \ge \sum_{i\in [n]} x_i$. As Mechanism~\ref{mech:sum-min-sgsp} builds the facilities at $0$ and not $1$, we have $\sum_{i\in [n]} x_i \ge \sum_{i \in [n]} (1-x_i)$, which implies that $\sum_{i\in [n]} x_i \ge n/2$. Combining the above two inequalities, we have $\textsc{ALG} \ge n/2$.
Since no agent has welfare greater than $1$, we have $n \ge \textsc{OPT}$.
Thus, $2 \cdot \textsc{ALG} \ge n \ge \textsc{OPT}$, as required.
\end{proof}
We now establish that the analysis of Theorem~\ref{thm:opt-sum-min-sgsp} is
tight by exhibiting a two-facility $\DOFL$ instance on which Mechanism~\ref{mech:sum-min-sgsp} achieves
half of the optimal social welfare.
For the reported $\DOFL$ instance $I = (2, 2, (0, 1), (\{F_1\}, \{F_2\}))$, it is easy to verify that the optimal social welfare is
$\SSW(I, (1,0)) = 2$, while the social welfare obtained by
Mechanism~\ref{mech:sum-min-sgsp} is $\SSW (I, (0,0))=1$.
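The tight instance above is easy to reproduce. The sketch below implements Mechanism~\ref{mech:sum-min-sgsp} together with the social welfare function, using, as before, the convention that an indifferent agent has welfare $\max(x, 1-x)$; the names are ours.

```python
def mechanism2(locations, k):
    """Mechanism 2: build all k facilities at 0 if sum x_i >= sum (1 - x_i),
    i.e. if the agents' mass leans towards 1; otherwise build all at 1.
    Reported dislikes are ignored, hence the mechanism is SGSP."""
    n = len(locations)
    site = 0.0 if sum(locations) >= n - sum(locations) else 1.0
    return [site] * k

def social_welfare(locations, dislike_sets, y):
    """Sum of min-distance welfares; indifferent agents get max(x, 1-x)."""
    total = 0.0
    for x, dislikes in zip(locations, dislike_sets):
        if dislikes:
            total += min(abs(x - y[j]) for j in dislikes)
        else:
            total += max(x, 1.0 - x)
    return total
```

On the instance $I = (2, 2, (0, 1), (\{F_1\}, \{F_2\}))$ the mechanism builds at $(0,0)$ for a social welfare of $1$, while the solution $(1,0)$ achieves $2$.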
\section{Weighted Approval Voting}
\label{sec:weight-app-vote}
Before studying efficient mechanisms for our problem, we review a variant of the approval voting mechanism \cite{BRAMS1978}.
An instance of Dichotomous Voting ($\DV$) is a tuple $(m, n, \mathbf{C}, \mathbf{w}^+, \mathbf{w}^-)$ where $m$ voters $1, \dots, m$ have to elect a candidate from the set of candidates $C = \{c_1, \dots, c_n\}$.
Each voter $i$ has dichotomous preferences, that is, voter $i$ partitions all of the candidates into two equivalence classes:
a top (most preferred) tier $C_i$ and a bottom tier $\overline{C_i} = C \setminus C_i$.
Each voter $i$ has associated (and publicly known) weights $w_i^+ \ge w_i^- \ge 0$.
The symbols $\mathbf{C}$, $\mathbf{w}^+$, and $\mathbf{w}^-$ denote length-$m$ vectors with $i$th element $C_i$, $w^+_i$, and $w^-_i$, respectively.
We now present our weighted approval voting mechanism.\footnote{Our mechanism differs from the homonymous mechanism of Massó et al., which has weights for the candidates instead of the voters \cite{MASSO2008}.}
\begin{mechanism}
\label{mech:weighted-approval-voting}
Given a $\DV$ instance $(m, n,\mathbf{C}, \mathbf{w}^+, \mathbf{w}^-)$, every voter $i$ votes by partitioning $C$ into $C'_i$ and $\overline{C'_i}$.
Let the weight function $w$ be such that for voter $i$ and candidate $c_j$, $w(i, j) = w_i^+$ if $c_j$ is in $C'_i$ and $w(i, j) = w_i^-$ otherwise.
For any $j$ in $[n]$, we define $A(j) = \sum_{i \in [m]} w(i, j)$ as the approval of candidate $c_j$.
The candidate $c_j$ with highest approval $A(j)$ is declared the winner.
Ties are broken according to a fixed ordering of the candidates (e.g., in favor of lower indices).
\end{mechanism}
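The weighted approval voting mechanism can be sketched directly; candidate indices double as the fixed tie-breaking order, and the names are ours.

```python
def weighted_approval_winner(n, reported_top_tiers, w_plus, w_minus):
    """Weighted approval voting: candidate j gains w_i^+ from each voter i
    whose reported top tier contains j, and w_i^- from every other voter.
    The candidate with the highest total approval wins; ties are broken
    in favor of the lowest candidate index."""
    approvals = [
        sum(w_plus[i] if j in tops else w_minus[i]
            for i, tops in enumerate(reported_top_tiers))
        for j in range(n)
    ]
    return max(range(n), key=lambda j: (approvals[j], -j))
```

Setting $w_i^+ = 1$ and $w_i^- = 0$ for every voter recovers the classical approval voting mechanism.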
We note that the approval voting mechanism can be obtained from the weighted approval voting mechanism by setting weights $w_i^+$ to $1$ and $w_i^-$ to $0$ for all voters $i$.
In Section~\ref{sec:preliminaries}, we defined SP, WGSP, and SGSP in the $\OOFLG$ setting.
These definitions are easily generalized to the voting setting.
Brams and Fishburn proved that the approval voting mechanism is SP~\cite{BRAMS1978}.
Below we prove that our weighted approval voting mechanism is WGSP (and hence also SP).
\begin{restatable}{theorem}{wgspappvote}
\label{thm:wgsp-app-vote}
Mechanism~\ref{mech:weighted-approval-voting} is WGSP.
\end{restatable}
\begin{proof}
Assume for the sake of contradiction that there is an
instance in which a coalition of voters $U$ with true preferences
$\{(C_i,\overline{C_i})\}_{i\in U}$ all benefit by misreporting their
preferences as $\{(C'_i,\overline{C'_i})\}_{i\in U}$. For any
candidate $c_j$, let $A(j)$ denote the approval of $c_j$ when
coalition $U$ reports truthfully, and let $A'(j)$ denote the approval
of $c_j$ when coalition $U$ misreports.
Let $c_k$ be the winning candidate when coalition $U$ reports
truthfully, and let $c_\ell$ be the winning candidate when coalition
$U$ misreports. Since every voter in $U$ benefits when the coalition
misreports, we know that $c_k$ belongs to
$\bigcap_{i\in U}\overline{C_i}$ and $c_\ell$ belongs to $\bigcap_{i\in U}C_i$.
Since $c_k$ belongs to $\bigcap_{i\in U}\overline{C_i}$, we deduce that
$A'(k)=A(k)+\sum_{i\in U:c_k\in C'_i}(w_i^+-w_i^-)$ and hence
$A'(k)\geq A(k)$. Similarly, since $c_\ell$ belongs to $\bigcap_{i\in U}C_i$, we
deduce that
$A'(\ell)=A(\ell)+\sum_{i\in U:c_\ell\in\overline{C'_i}}(w_i^--w_i^+)$
and hence $A(\ell)\geq A'(\ell)$.
Since $c_k$ wins when coalition $U$ reports truthfully, one of the following
two cases is applicable.
Case~1: $A(k)>A(\ell)$. Since $A'(k)\geq A(k)$ and
$A(\ell)\geq A'(\ell)$, the case condition implies that
$A'(k)>A'(\ell)$. Hence $c_\ell$ does not win when coalition $U$
misreports, a contradiction.
Case~2: $A(k)=A(\ell)$ and $c_k$ has higher priority than $c_\ell$.
Since $A'(k)\geq A(k)$ and $A(\ell)\geq A'(\ell)$, the case condition
implies that $A'(k)\geq A'(\ell)$ and $c_k$ has higher priority than
$c_\ell$. Hence $c_\ell$ does not win when coalition $U$ misreports, a
contradiction.
\end{proof}
\begin{theorem}
Mechanism~\ref{mech:weighted-approval-voting} is not SGSP.
\end{theorem}
The above theorem can be established by adapting the instance shown in Section~\ref{sec:sum-min} to prove that Mechanism~\ref{mech:sum-min} is not SGSP.
\section{Introduction}
Quantum Unique Ergodicity (QUE) in a disordered or chaotic quantum system asserts
that the eigenvectors of the Hamilton operator tend to become uniformly distributed
in the phase space, see~\cite{MR0402834, MR818831, MR916129, MR1266075, MR1810753, MR3961083}
for the seminal results and~\cite{2012.13215} for more recent references.
We study a particularly strong form of this phenomenon
for Wigner random matrices, the simplest prototype of a fully chaotic Hamiltonian.
These are \(N\times N\) random Hermitian matrices \(W=W^*\)
with centred, independent, identically distributed \emph{(i.i.d.)} entries up to the symmetry constraint \(w_{ab} = \overline{w_{ba}}\).
Let \(\set{\bm{u}_i}_{i=1}^N\) be an orthonormal eigenbasis of \(W\) corresponding to the eigenvalues
\({\bm \lambda} = (\lambda_i)_{i=1}^N\) listed in increasing order.
Recently we showed~\cite{2012.13215} that for any deterministic matrix \(A\) with \(\|A\|\le1\),
the eigenvector overlaps \(\braket{\bm{u}_i, A \bm{u}_i}\)
converge to \(\braket{A}:=\frac{1}{N}\Tr A\), the normalized trace of \(A\), in the large \(N\) limit. More generally,
we proved that
\begin{equation}\label{eq:eth}
\max_{i,j}\abs[\Big]{\braket{ \bm{u}_i, A \bm{u}_j}-\braket{A}\delta_{ij}}
\lesssim \frac{N^\epsilon}{\sqrt{N}}
\end{equation}
holds with very high probability. We note that the bound~\eqref{eq:eth} is optimal for high-rank deterministic matrices \(A\); the statement itself is known as the \emph{Eigenstate Thermalization Hypothesis}, coined by
Deutsch~\cite{9905246} and Srednicki~\cite{9962049}; see also~\cite[Eq.~(20)]{Dalessio2015}.
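The bound~\eqref{eq:eth} is easy to probe numerically: for a sampled GUE-type Wigner matrix and a fixed high-rank observable, the diagonal overlaps concentrate around $\braket{A}$ on scale $N^{-1/2}$. A minimal illustration (our normalizations; not part of the proofs):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300
# complex Hermitian Wigner matrix with entry variance 1/N (spectrum ~ [-2, 2])
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
W = (G + G.conj().T) / np.sqrt(2 * N)
_, U = np.linalg.eigh(W)  # columns of U are the eigenvectors u_i

# high-rank observable: projection onto the first N/2 coordinates, <A> = 1/2
A = np.diag(np.r_[np.ones(N // 2), np.zeros(N - N // 2)])
tr_A = np.trace(A).real / N

# diagonal overlaps <u_i, A u_i>
overlaps = np.einsum('ai,ab,bi->i', U.conj(), A, U).real
max_dev = np.max(np.abs(overlaps - tr_A))  # empirically of order N^{-1/2}
```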
The main result of the current paper, Theorem~\ref{theo:flucque},
asserts that \(\braket{\bm{u}_i, A \bm{u}_i}\) has a Gaussian fluctuation
on scale \(N^{-1/2}\),
more precisely
\begin{equation}\label{eq:flu}
\sqrt{N} \Big[ \braket{\bm{u}_i, A \bm{u}_i} - \braket{A}\Big]
\end{equation}
converges to a normal distribution for any Hermitian observable \(A=A^*\) of high rank and
for any eigenvector \({\bm u}_i\) whose eigenvalue belongs to the bulk of the spectrum.
For Gaussian ensembles and $A$ being a projection onto macroscopically many coordinates,~\eqref{eq:flu} can be proven by using the special invariance property of the eigenvectors (see~\cite[Theorem 2.4]{MR3534074}). Our result concerns general Wigner matrices and has two main features: it applies to individual eigenvectors \emph{and} it is valid for general high-rank observables.
We now explain related previous results which all addressed only one of these features.
First, Gaussianity of~\eqref{eq:flu} after a small averaging
in the index \(i\) has recently been established in~\cite[Theorem 2.3]{2012.13218} using resolvent methods. Second, fluctuations involving individual eigenvectors in the bulk spectrum for general Wigner
matrices can only be accessed by the
Dyson Brownian motion approach which has only been developed for finite rank observables~\cite{MR3606475, 2005.08425, MR4164858}.
We now explain the background of these concepts.
\subsection{Dyson Brownian motion for eigenvectors}
For the simplest rank one case,
\(A=|{\bm q}\rangle \langle {\bm q}|\) with some deterministic unit vector \(\bm{q}\),
Bourgade and Yau~\cite{MR3606475} showed that the squared normalised overlaps
\(N |\langle \bm{u}_i, \bm{q}\rangle |^2\) converge in distribution to the square of a standard Gaussian variable as \(N\to\infty\) (see also~\cite{MR3034787, MR2930379} for
the same result without DBM but under four moment matching condition in the bulk). Similar results have been obtained for deformed Wigner matrices~\cite{MR4164858}, for sparse matrices~\cite{MR3690289}, and for L\'evy matrices~\cite{MR4260468}. Note that both the scaling and the limit distribution for the rank one case are different from~\eqref{eq:flu}. The basic intuition
is that the coordinates of \(\bm{u}_i\) are roughly independent, thus the sum in \(\langle \bm{u}_i, \bm{q}\rangle=
\sum_a \bm{u}_i(a) \bm{q}(a)\) obeys a central limit theorem (CLT) on scale \(N^{-1/2}\).
In fact,~\cite{MR3606475} also considers the joint distribution of
finitely many eigenvectors tested against one fixed vector \(\bm{q}\)
and the joint distribution
of a single eigenvector with finitely many test vectors \(\bm{q}_1, \bm{q}_2, \ldots \bm{q}_K\) for any fixed \(K\), independent of
\(N\).
Very recently, Marcinek and Yau~\cite{2005.08425} have established that the overlaps of
finitely many eigenvectors \emph{and} finitely many orthogonal test vectors are also asymptotically independent (squared) normal. Their method is very general and also applies to a large class of other random matrix ensembles, such as sparse or L\'evy matrices.
The fundamental method behind all results involving individual eigenvectors for general Wigner matrices
is the Dyson Brownian motion (DBM) for
eigenvectors, also called the \emph{stochastic eigenstate equation}
generated by
a simple matrix Brownian motion for \(W\), introduced by Bourgade and Yau in~\cite{MR3606475}. We briefly summarize the key steps in~\cite{MR3606475}
in order to highlight the new ideas we needed to prove the Gaussianity of~\eqref{eq:flu}.
For each fixed \(n\), the evolution of the joint \(n\)-th order moments of the
overlaps \(N |\langle \bm{u}_i, \bm{q}\rangle |^2\) for different \(i\)'s and fixed \(\bm{q}\)
is described by
a system of parabolic evolution equations, called the \emph{eigenvector moment flow}. %
Interpreting
each such overlap as a particle sitting at location \(i\) in the discrete one dimensional
index space \([N] = \{ 1,2, \ldots , N\}\), the moment flow naturally corresponds to
a Markovian jump process of \(n\) particles. It turns out that the rate of a jump from site \(i\) to
site \(j\) is proportional to \(N^{-1}(\lambda_i-\lambda_j)^{-2}\).
Different \(\bm{q}\)'s can be incorporated by appropriately
assigning \emph{colours} to the particles.
By the fast local equilibration property of the DBM,
the moments of \(N |\langle \bm{u}_i, \bm{q}\rangle |^2\) quickly become essentially independent of the index \(i\)
at least for indices corresponding to nearby eigenvalues \(\lambda_i\),
hence they can be computed by locally averaging over \(i\). For example, in the
simplest \(n=1\) case we have
\begin{equation}\label{2ndmoment}
f_i:= \E \big[ N |\langle \bm{u}_i, \bm{q}\rangle |^2 \big| {\bm \lambda}\big] \approx
f_{i'}= \E \big[ N |\langle \bm{u}_{i'}, \bm{q}\rangle |^2 \big| {\bm \lambda}\big],
\quad |i-i'|\ll N,
\end{equation}
already after a very short time \(t\gg |i-i'|/N\). Here we consider the conditional expectation of the
eigenvectors given that the eigenvalues are fixed.
Since the global equilibrium of the
DBM is the constant function \(f_i=1\), equilibration directly implies \emph{smoothing} or \emph{regularisation} in the
dependence on the indices \(i\).
On the other hand, by the spectral theorem
\begin{equation}\label{qGq}
\langle \bm{q}, \Im G(\lambda_i + \mathrm{i} \eta)\bm{q}\rangle = \frac{1}{N} \sum_{i'=1}^N \frac{\eta}{ (\lambda_i-\lambda_{i'})^2+\eta^2}
N|\langle \bm{u}_{i'}, \bm{q}\rangle |^2,
\end{equation}
where \(G=G(z)=(W-z)^{-1}\) is the resolvent at a spectral parameter \(z\in \mathbf{C}\setminus\mathbf{R}\).
Using that the eigenvalues \(\lambda_{i'}\) are \emph{rigid}, i.e.\ they are very close to the
corresponding quantiles \(\gamma_{i'}\) of the Wigner semicircle density (see~\eqref{def:Omega} later),
the \(i'\)-summation in~\eqref{qGq} is a regularised averaging over indices \(|i'-i|\lesssim N\eta\).
Performing the \(i'\) summation in~\eqref{qGq} by using~\eqref{2ndmoment} we obtain
\[
\E \big[ N |\langle \bm{u}_i, \bm{q}\rangle |^2 \big| {\bm \lambda}\big]
\approx \frac{1}{\Im m_{sc}(\gamma_i)} \E \big[ \langle \bm{q}, \Im G(\gamma_i + \mathrm{i} \eta)\bm{q}\rangle\big| {\bm \lambda}\big],
\]
for times \(t\gg \eta\) where \(m_{sc}\) is the Stieltjes transform of the Wigner semicircle law.
Choosing \(\eta\) slightly above the local eigenvalue spacing, \(\eta=N^{-1+\epsilon}\) in the bulk of the spectrum, we have
\(\langle \bm{q}, \Im G(\gamma_i + \mathrm{i} \eta)\bm{q}\rangle\approx \Im m_{sc}(\gamma_i)\)
not only in expectation but even in high probability by the \emph{isotropic local law} for Wigner matrices~\cite{MR3103909}.
Combining these inputs we obtain \(\E N |\langle \bm{u}_i, \bm{q}\rangle |^2\approx 1\) along the DBM after a short time
\(t\gg N^{-1+\epsilon}\). A similar argument holds for higher moments.
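The statement~\eqref{2ndmoment} that $\E\, N |\langle \bm{u}_i, \bm{q}\rangle|^2 \approx 1$ can likewise be illustrated by direct Monte Carlo sampling over matrix realizations (a toy sketch with our normalizations, fixing a bulk index):

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples = 100, 200
q = np.zeros(N)
q[0] = 1.0  # fixed deterministic unit test vector

vals = []
for _ in range(samples):
    # complex Hermitian Wigner matrix with entry variance 1/N
    G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    W = (G + G.conj().T) / np.sqrt(2 * N)
    _, U = np.linalg.eigh(W)
    u_mid = U[:, N // 2]  # eigenvector from the bulk of the spectrum
    vals.append(N * abs(np.vdot(u_mid, q)) ** 2)

mean_overlap = float(np.mean(vals))  # concentrates near 1
```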
Finally, the small Gaussian component added by the DBM can be removed by standard perturbation methods, by the
so called \emph{Green function comparison} theorems.
\subsection{Dyson Brownian motion for general overlaps}
Given the method to handle \(N |\langle \bm{u}_i, \bm{q}\rangle |^2\) described above,
the Gaussianity of overlaps \(\braket{\bm{u}_i, A \bm{u}_i}\) with a general high rank matrix \(A\)
can be approached in two natural ways. We now explain both of them to justify our choice.
The first approach is to write
\(A=\sum_{k=1}^N a_k |\bm{q}_k\rangle \langle \bm{q}_k|\) in spectral decomposition
with \(|a_k|\lesssim 1\) and an orthonormal set \(\{ \bm{q}_k\}_{k=1}^N\) to have
\begin{equation}\label{spe}
\braket{\bm{u}_i, A \bm{u}_i} = \sum_{k=1}^N a_k |\langle \bm{u}_i, \bm{q}_k \rangle|^2.
\end{equation}
If all overlaps \(|\langle \bm{u}_i, \bm{q}_k \rangle|^2\), \(k=1, 2, \ldots, N\), were independent, then the central
limit theorem applied to the summation in~\eqref{spe} would prove the normality
of \(\braket{\bm{u}_i, A \bm{u}_i}\). This requires that the number of nonzero summands in~\eqref{spe}, i.e.\ the rank of \(A\), also goes to infinity as \(N\) increases. Hence, viewed through the spectral decomposition of \(A\),
the Gaussianity
of \(\braket{\bm{u}_i, A \bm{u}_i}\) appears to be an effect of the approximate independence of
the overlaps \(|\langle \bm{u}_i, \bm{q}_k \rangle|^2\) for different \(k\)'s rather than of their actual limiting distribution.
The analysis of the eigenvector moment flow~\cite{MR3606475,2005.08425} yields this independence for finitely many \(k\)'s,
but it is not well suited for tracking overlaps \(|\langle \bm{u}_i, \bm{q}_k \rangle|^2\)
with a very large number of \(\bm{q}_k\) vectors simultaneously.
%
Hence we discarded this approach.
The second natural approach is to generalise the eigenvector moment flow to moments of \(\braket{\bm{u}_i, A \bm{u}_i}\);
this was first achieved in~\cite{MR4156609}.
Such flow naturally involves off-diagonal overlaps \(\braket{\bm{u}_i, A \bm{u}_j}\) as well.
Therefore, we need to describe conditional
moments of the form \(\E \big[ \prod_{r=1}^n \braket{\bm{u}_{i_r}, A \bm{u}_{j_r}} \big|{\bm \lambda}\big]\)
with different collections
of \emph{index pairs} \((i_1, j_1), (i_2, j_2), \ldots, (i_n, j_n)\)
with the constraint that every index appears an even number of times. Thus the relevant moments can naturally be represented
in an \(n\)-dimensional subset \(\Lambda^n\) of \([N]^{2n}\) (see Section~\ref{sec:equivrep}).
Moreover, in~\cite[Eq. (2.15)]{MR4156609} a certain symmetrised linear combination of \(n\)-th order moments,
the \emph{perfect matching observable} was found that satisfies a closed equation along the Dyson Brownian motion,
see~\eqref{eq:deff} and~\eqref{eq:1dequa}. Moments of diagonal overlaps \(\braket{\bm{u}_i, A \bm{u}_i}\) can then be recovered
from the perfect matching observable by setting all indices equal. Off-diagonal overlaps \(\braket{\bm{u}_i, A \bm{u}_j}\)
in general cannot be recovered (except in the \(n=2\) case using an additional anti-symmetric (``fermionic'') version
of the perfect matching observable~\cite{MR4242625}).
The main obstacle along this second approach
is the lack of the analogue of~\eqref{qGq} for general overlaps \(\braket{\bm{u}_i, A \bm{u}_j}\). Consider the \(n=2\) case. A (regularised) local averaging in \emph{one} index yields
\begin{equation}\label{1av}
\frac{1}{N} \sum_{i'=1}^N \frac{\eta}{ (\lambda_i-\lambda_{i'})^2+\eta^2}
N|\braket{\bm{u}_{i'}, A \bm{u}_j} |^2 = \braket{\bm{u}_j, A \Im G(\gamma_i + \mathrm{i} \eta) A \bm{u}_j}
\end{equation}
which still involves an eigenvector \(\bm{u}_j\), hence is not accessible solely by resolvent methods.
Note that for \(A=|\bm{q}\rangle \langle \bm{q}|\) the overlap \(\braket{\bm{u}_{i'}, A \bm{u}_j}\) factorizes and
the averaging in \(i'\) can be done independently of \(j\).
For general \(A\) we can handle an averaging
in \emph{both} indices, i.e.\ we will use that
\begin{equation}\label{2av}
\frac{1}{N^2} \sum_{i', j'=1}^N \frac{\eta}{ (\lambda_i-\lambda_{i'})^2+\eta^2}\frac{\eta}{ (\lambda_j-\lambda_{j'})^2+\eta^2}
N|\braket{\bm{u}_{i'}, A \bm{u}_{j'}} |^2 = \braket{ A \Im G(\gamma_i + \mathrm{i} \eta) A \Im G(\gamma_j + \mathrm{i} \eta)}.
\end{equation}
The normalised trace on the right hand side
is accessible by resolvent methods using the recent multi-\(G\) local law proven in~\cite[Prop. 3.4]{2012.13215}.
However, the generator of the eigenvector moment flow~\eqref{eq:1dkernel} involves the \emph{sum} of averaging
operators as in~\eqref{1av} in all
coordinate directions and not their \emph{product} as needed in~\eqref{2av}. Higher moments (\(n>2\)) require
averaging in more than two indices
simultaneously, which is apparently not available in the generator.
To remedy this situation, we now review the ways in which the equilibration (smoothing) property of
the parabolic equation for the perfect matching observable can be exploited.
\subsection{Local smoothing of the eigenvector moment flow: an overview}
The technically simplest way to exploit the smoothing effect is via the maximum principle introduced in~\cite{MR3606475}. However, this requires that the generator is negative and itself has
the necessary local averaging property to obtain a quantity computable by a local law;
this is the case for eigenfunction overlaps as in~\eqref{qGq} but not for general overlaps \(\braket{\bm{u}_i, A \bm{u}_j}\) in~\eqref{1av}. We remark that the maximum principle was also used in~\cite{MR4156609} for more general overlaps, but only for
getting an a priori bound and not for establishing their distribution. For this cruder purpose
a rougher bound on~\eqref{1av} was sufficient; this bound could be iteratively improved, but it always remained an \(N^\epsilon\) factor
off the optimal value.
A technically much more demanding way to exploit the equilibration of the eigenvector moment flow
would be via homogenisation theory. In random matrix theory
homogenisation was originally introduced for the Dyson eigenvalue flow in~\cite{MR3541852, MR3914908}
by noticing that the generator is a discrete approximation of the one dimensional fractional
Laplacian operator \(|p|=\sqrt{-\Delta}\) with translation invariant kernel \((x-y)^{-2}\) whose heat kernel is explicitly known.
Unfortunately, the eigenvector flow is more complicated and a good approximation with a well-behaving continuous heat
kernel is missing although homogenisation might also be accessible via a sequence of maximum principles
as in~\cite{1812.10376}.
Finally, the last and most flexible method for equilibration is given by ultracontractivity estimates on the heat kernel that can
be obtained by the standard Nash method from Poincar\'e or Sobolev inequalities for the Dirichlet form
determined by the generator. In random matrix theory, these ideas have been introduced in~\cite{MR3372074}
for the eigenvalue gap statistics
and have later been used as a priori bounds for the homogenisation theory. However, in the bulk regime
they fall just short of the strength needed to reach the necessary precision for individual eigenvalues; they had
to be complemented either by De Giorgi-Nash-Moser H\"older regularity estimates~\cite{MR3372074} or
homogenisation~\cite{MR3541852, MR3914908}.
The recent work by Marcinek and Yau~\cite{2005.08425} remedies this shortcoming of the ultracontractivity
bound by combining it with an energy method. The main motivation of~\cite{2005.08425}
was to consider the joint distribution of the overlaps \(|\braket{\bm{u}_i, \bm{q}_k}|^2\) for several eigenvectors
and several test vectors simultaneously. The generator of the resulting \emph{coloured eigenvector moment flow}
lacks the positivity preserving property rendering the simple argument via maximum principle
impossible. It turns out that
this lack of positivity is due to a new \emph{exchange term} in the generator that is present only
because several \(\bm{q}_k\)'s (distinguished by colours) are considered simultaneously. However, the
generator with the problematic exchange term is still positive in \(L^2\)-sense and its Dirichlet
form satisfies the usual Poincar\'e inequality from which ultracontractivity bounds can still be derived.
The additional smallness now comes from an effective decay of the \(L^2\)-norm of the solution
where local averaging like~\eqref{qGq} can be exploited.
\subsection{Main ideas of the current paper} The proof of Gaussianity of~\eqref{eq:flu} consists of three steps.
\begin{enumerate}[label=Step \arabic*.]
\item\label{energy} We use the energy method inspired by~\cite{2005.08425} together with
the recent two-\(G\) local law from~\cite[Prop. 3.4]{2012.13215}
and more general multi-\(G\) local laws, proven in Section~\ref{sec:llaw}, to exploit
an effective averaging mechanism to reduce the \(L^2\)-norm of the solution.
In particular, to understand~\eqref{2av} we need a two-\(G\) local law
instead of the single-\(G\) isotropic law
used in~\eqref{qGq}.
\item\label{ultracontractivity} We use an \(L^2\to L^\infty\) ultracontractivity bound of
the colourblind eigenvector moment flow from~\cite[Proposition 6.29]{2005.08425}.
\item\label{gft step} The first two steps prove the Gaussianity of the overlap~\eqref{eq:flu}
for Wigner matrices with a tiny Gaussian component. With a standard Green function comparison
argument combined with the a priori bound~\eqref{eq:eth} proven in~\cite{2012.13215}
we remove this Gaussian component.
\end{enumerate}
\ref{ultracontractivity} and~\ref{gft step} are standard adaptations of existing previous results, so we focus only on explaining~\ref{energy}.
We use the energy method in a very different way and for a very different purpose than~\cite{2005.08425}, but
for the same reason: its robustness. In the standard energy argument, if \(f_t\) satisfies the
parabolic evolution equation \(\partial_t f_t = {\mathcal L}_t f_t\) with a (time-dependent)
generator \({\mathcal L}_t\), then
\[
\frac{1}{2}\partial_t \| f_t\|_2^2 =\braket{ f_t, {\mathcal L_t}f_t} =: - D_t(f_t)\le 0,
\]
where \(D_t\) is the Dirichlet form (energy) associated to the generator \(\mathcal{L}_t\). The goal is to give a good lower bound
\begin{equation}\label{Df}
D_t(f)\ge c\|f \|_2^2 - \mbox{error},
\end{equation}
and use a Gronwall argument to conclude an effective \(L^2\)-decay along the
dynamics. However, before doing so, the Dirichlet form may first be replaced by a smaller one,
\(\widetilde D_t(f)\lesssim D_t(f) \), for which an effective lower bound~\eqref{Df} is easier to obtain.
In our case, the gain comes from estimating the error term in~\eqref{Df} by exploiting the
local averaging in all directions as in~\eqref{2av}, so that we can use the multi-\(G\) local law.
How to find \(\widetilde D\)?
Very heuristically, the generator of the eigenvector moment flow is a discrete analogue of \(|p_1|+|p_2|+ \cdots + |p_n|\), i.e.
the \emph{sum} of \(|p|\)-operators along all the \(n\) coordinate directions in the \(n\)-dimensional space \(\Lambda^n\).
However, the necessary averaging in~\eqref{2av}
is rather the product of these one dimensional operators.
Normally, sums of first order differential
operators cannot be compared with their product since they scale differently with the length. But our
operators have a short range regularisation on the scale \(\eta\), i.e.\ they rather correspond to \(\eta^{-1}[1-e^{-\eta|p|}]\) than just \(|p|\) (see~\cite[Theorem 7.12]{MR1817225}). Therefore, we will prove the discrete analogue of the operator inequality
\begin{equation}\label{replace}
\frac{1}{\eta} \prod_{r=1}^n \left(1-e^{-\eta|p_r|}\right)\le C(n) \sum_{r=1}^n \frac{1}{\eta}\left[1-e^{-\eta|p_r| }\right]
\end{equation}
on \(\mathbf{R}^n\) and their quadratic forms will be the two Dirichlet forms \(\widetilde D\) and \(D\).
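For commuting scalar symbols the inequality~\eqref{replace} is elementary, even with \(C(n)=1\): each factor \(1-e^{-\eta x_r}\) lies in \([0,1]\), so the product is dominated by any single factor and hence by the sum. A quick numerical sanity check of this scalar version (grid and \(\eta\) chosen arbitrarily):

```python
import itertools
import math

def lhs(xs, eta):
    # (1/eta) * prod_r (1 - exp(-eta * x_r))
    prod = 1.0
    for x in xs:
        prod *= 1.0 - math.exp(-eta * x)
    return prod / eta

def rhs(xs, eta):
    # sum_r (1/eta) * (1 - exp(-eta * x_r))
    return sum((1.0 - math.exp(-eta * x)) / eta for x in xs)

eta = 0.05
grid = [0.0, 0.1, 1.0, 10.0, 100.0]
scalar_ineq_holds = all(
    lhs(xs, eta) <= rhs(xs, eta) + 1e-12
    for xs in itertools.product(grid, repeat=3)   # n = 3 symbols
)
```

The genuine difficulty in the paper is that the operators act in different coordinate directions of the discrete lattice and carry random coefficients; the scalar computation only motivates the form of the inequality.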
Since the generator of \(\widetilde D\) now averages in all directions, these
averages yield traces of products \(\Im G A \Im G A \ldots \Im G A\) for which we have
a good local law, hence the corresponding error in~\eqref{Df} is smaller than its naive a priori bound
using only~\eqref{eq:eth}. This crucial gain provides the additional smallness to
overcome the general fact that ultracontractivity bounds alone fall just short of
giving sufficiently precise information on individual eigenvalues and eigenvectors in the bulk.
The actual proof requires several technical steps such as (i) localising
the dynamics by considering a short range approximation and treating the
long range part as a perturbation; (ii) establishing finite speed of propagation for the short range dynamics;
(iii) cutting off the initial data in a smooth way so that the cutoff and the time evolution almost commute.
Since these steps have appeared in the literature earlier, we will not reprove them here,
we just refer to~\cite{2005.08425} where they have been adapted to the eigenvector moment flow.
We will give full details only for Step 1.
In parallel with, but independently of, the current work, Benigni and Lopatto~\cite{2103.12013} proved the CLT for $\braket{{\bm u}_i,A{\bm u}_i}$ for observables $A$ projecting onto a deterministic set of orthonormal vectors, $A=\sum_{\alpha\in I} |q_\alpha\rangle\langle q_\alpha|$, with $N^\epsilon\le |I|\le N^{1-\epsilon}$ for some small fixed $\epsilon>0$. For this class of projection operators their low rank assumption is complementary to our condition $\braket{\mathring{A}^2}\ge c$; moreover, their result also covers the edge regime. The low rank assumption allowed them to operate with the eigenvector moment flow from~\cite{MR4242625, MR4156609}. However, their method can handle overlaps with at most $N^{1-\epsilon}$ vectors $q_\alpha$ simultaneously; this appears to be a natural limitation preventing its use for high rank observables, e.g.\ for $|I|\sim N$. In contrast, we consider overlaps $\braket{{\bm u}_i,A{\bm u}_j}$ directly, without relying on the spectral decomposition of $A$.
\subsection*{Notation and conventions}
We introduce some notations we use throughout the paper. For integers \(k\in\mathbf{N} \) we use the notation \([k]:= \set{1,\ldots, k}\). For positive quantities \(f,g\) we write \(f\lesssim g\) and \(f\sim g\) if \(f \le C g\) or \(c g\le f\le Cg\), respectively, for some constants \(c,C>0\) which depend only on the constants appearing in~\eqref{eq:momentass}. We denote vectors by bold-faced lower case Roman letters \({\bm x}, {\bm y}\in\mathbf{C} ^N\), for some \(N\in\mathbf{N}\). Vector and matrix norms, \(\norm{\bm{x}}\) and \(\norm{A}\), indicate the usual Euclidean norm and the corresponding induced matrix norm. For any \(N\times N\) matrix \(A\) we use the notation \(\braket{ A}:= N^{-1}\Tr A\) to denote the normalized trace of \(A\). Moreover, for vectors \({\bm x}, {\bm y}\in\mathbf{C}^N\) we define
\[
\braket{ {\bm x},{\bm y}}:= \sum_{i=1}^N \overline{x}_i y_i.
\]
We will use the concept of ``with very high probability'' meaning that for any fixed \(D>0\) the probability of the \(N\)-dependent event is bigger than \(1-N^{-D}\) if \(N\ge N_0(D)\). Moreover, we use the convention that \(\xi>0\) denotes an arbitrary small
positive constant which is independent of \(N\).
\subsection*{Acknowledgement} L.E. would like to thank Zhigang Bao for many illuminating discussions in an
early stage of this research. The authors are also grateful to Paul Bourgade for his comments on the manuscript and the anonymous referee for several useful suggestions.
\section{Main results}
Let \(W\) be an \(N\times N\) real symmetric or complex Hermitian Wigner matrix. We formulate the following assumptions on \(W\).
\begin{assumption}\label{ass:entr}
We assume that the matrix elements \(w_{ab}\) are independent up to the Hermitian symmetry \(w_{ab}=\overline{w_{ba}}\) and identically distributed in the sense that \(w_{ab}\stackrel{\mathrm{d}}{=} N^{-1/2}\chi_{\mathrm{od}}\), for \(a<b\), \(w_{aa}\stackrel{\mathrm{d}}{=}N^{-1/2} \chi_{\mathrm{d}}\), with \(\chi_{\mathrm{od}}\) being a real or complex random variable and \(\chi_{\mathrm{d}}\) being a real random variable such that \(\E \chi_{\mathrm{od}}=\E \chi_{\mathrm{d}}=0\) and \(\E |\chi_{\mathrm{od}}|^2=1\). In the complex case we also assume that \(\E \chi_{\mathrm{od}}^2=0\). In addition, we assume the existence of the high moments of \(\chi_{\mathrm{od}}\), \(\chi_{\mathrm{d}}\), i.e.\ that there exist constants \(C_p>0\), for any \(p\in\mathbf{N} \), such that
\begin{equation}
\label{eq:momentass}
\E \abs*{\chi_{\mathrm{d}}}^p+\E \abs*{\chi_{\mathrm{od}}}^p\le C_p.
\end{equation}
\end{assumption}
Let \(\lambda_1\le \lambda_2\le \ldots \le \lambda_N\) denote the eigenvalues of \(W\) in
increasing order and
denote by \({\bm u}_1,\dots, {\bm u}_N\) the corresponding orthonormal eigenvectors.
For any \(N\times N\) matrix \(A\) we denote by \(\mathring{A}:= A-\braket{A}\) the traceless part of \(A\). We now state our main result.
\begin{theorem}[Central Limit Theorem in the QUE]\label{theo:flucque} Let \(W\) be a real symmetric (\(\beta=1\)) or complex Hermitian (\(\beta=2\))
Wigner matrix satisfying Assumption~\ref{ass:entr}.
Fix small \(\delta,\delta'>0\) and let \(A=A^*\) be a deterministic \(N\times N\) matrix with \(\norm{A}\lesssim 1\)
and \(\braket{\mathring{A}^2}\ge \delta'\). In the real symmetric case we also assume that \(A\in\mathbf{R}^{N\times N}\)
is real. Then for any \(i\in [\delta N, (1-\delta) N]\) it holds
\begin{equation}
\sqrt{\frac{\beta N}{2\braket{\mathring{A}^2}}} \big[\braket{{\bm u}_i,A {\bm u}_i}-\braket{A}\big]\Rightarrow \mathcal{N},\qquad \mbox{as
\;\; \(N\to\infty\)}
\end{equation}
in the sense of moments, with \(\mathcal{N}\) being a standard real Gaussian random variable. The speed of convergence is explicit, see~\eqref{eq:cltgcomp}.
\end{theorem}
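Theorem~\ref{theo:flucque} can be probed numerically. The following Monte Carlo sketch (illustrative only, with arbitrary \(N\), sample count and choice of \(A\)) checks that in the real symmetric case \(\beta=1\), for a traceless diagonal \(A\) with \(\braket{A^2}=1\), the normalised overlap statistic \(\sqrt{N/2}\,\braket{{\bm u}_i,A{\bm u}_i}\) has mean \(\approx 0\) and variance \(\approx 1\):

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples = 100, 400
# traceless test matrix with <A^2> = 1: alternating +1 / -1 on the diagonal
A_diag = np.array([1.0 if k % 2 == 0 else -1.0 for k in range(N)])
stats = []
for _ in range(samples):
    x = rng.standard_normal((N, N))
    W = (x + x.T) / np.sqrt(2 * N)      # GOE-normalised Wigner matrix, beta = 1
    _, U = np.linalg.eigh(W)
    u = U[:, N // 2]                    # bulk eigenvector
    overlap = np.sum(A_diag * u * u)    # <u, A u>, already centred since <A> = 0
    stats.append(np.sqrt(N / 2.0) * overlap)
stats = np.asarray(stats)
mean, var = float(stats.mean()), float(stats.var())
```

The tolerances in any such check must account for the \(O(1/\sqrt{\mathrm{samples}})\) Monte Carlo error and finite-\(N\) corrections.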
\section{Perfect Matching observables}
For definiteness, we present the
proof for the real symmetric case; the analysis for the complex
Hermitian case is completely analogous and is therefore omitted. We only mention that the main difference
between the two symmetry classes is that the perfect matching observables \(f_{\bm\lambda,t}\)
in~\eqref{eq:deff} are defined slightly differently (see~\cite[Eq. (A.3)]{MR4156609}) but the current proof can be easily adapted
to this case.
Consider the matrix flow
\begin{equation}\label{eq:matdbm}
\operatorname{d}\!{} W_t=\frac{\operatorname{d}\!{} \widetilde{B}_t}{\sqrt{N}}, \qquad W_0=W,
\end{equation}
with \(\widetilde{B}_t\) being a standard real symmetric Brownian motion (see e.g.~\cite[Definition 2.1]{MR3606475}). We denote the resolvent of \(W_t\) by \(G=G_t(z):=(W_t-z)^{-1}\), for \(z\in\mathbf{C}\setminus\mathbf{R}\). It is well known (see e.g.~\cite{MR2871147,MR3103909,MR3183577}) that as \(N\to \infty\) the resolvent \((W-z)^{-1}\) becomes approximately deterministic; its deterministic approximation is given by the unique solution of the scalar quadratic equation
\begin{equation}
-\frac{1}{m(z)}=z+m(z), \qquad \Im m(z)\Im z>0.
\end{equation}
In particular, \(m(z)=m_{\mathrm{sc}}(z)\), \(m_{\mathrm{sc}}(z)\) being the Stieltjes transform of the semicircular law \(\rho_{\mathrm{sc}}(x):=(2\pi)^{-1}\sqrt{(4-x^2)_+}\). The deterministic approximation of \(G_t(z)\) is given by \(m_t(z)\), with \(m_t\) the solution of
\begin{equation}
\partial_t m_t(z)=-m_t\partial_z m_t(z), \qquad m_0=m.
\end{equation}
From now on by \(\rho_t=\rho_t(z)\) we denote \(\rho_t(z):=\pi^{-1}\Im m_t (z)\), for any \(t\ge 0\). In fact, starting from the standard semicircle \(\rho_0=\rho_{\mathrm{sc}}\), the density \(\rho_t(x+\mathrm{i} 0)\) is just a rescaling of \(\rho_0\) by a factor \(\sqrt{1+t}\).
By~\cite[Definition 2.2]{MR3606475} it follows that the eigenvectors \({\bm u}_1(t),\dots, {\bm u}_N(t)\) of \(W_t\), corresponding to the eigenvalues \(\lambda_1(t)\le \lambda_2(t)\le \dots\le \lambda_N(t)\), are a solution of the following system of SDEs (dropping the time dependence):
\begin{align}\label{eq:evaluflow}
\operatorname{d}\!{} \lambda_i&=\frac{\operatorname{d}\!{} B_{ii}}{\sqrt{N}}+\frac{1}{N}\sum_{j\ne i} \frac{1}{\lambda_i-\lambda_j} \operatorname{d}\!{} t \\\label{eq:evectorflow}
\operatorname{d}\!{} {\bm u}_i&=\frac{1}{\sqrt{N}}\sum_{j\ne i} \frac{\operatorname{d}\!{} B_{ij}}{\lambda_i-\lambda_j}{\bm u}_j-\frac{1}{2N}\sum_{j\ne i} \frac{{\bm u}_i}{(\lambda_i-\lambda_j)^2}\operatorname{d}\!{} t,
\end{align}
with \(\{B_{ij}\}_{i,j\in [N]}\) being the entries of a standard real symmetric Brownian motion. See~\cite[Theorem 2.3]{MR3606475} for the existence and uniqueness of the strong solution of~\eqref{eq:evaluflow}--\eqref{eq:evectorflow}.
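In simulation, the matrix flow~\eqref{eq:matdbm} is simply the addition of an independent Gaussian component: with the standard symmetric Brownian motion convention (entry variance \(t\) off-diagonal), the entries of \(W_t\) have variance \((1+t)/N\), so the spectral edge sits near \(2\sqrt{1+t}\). A minimal numerical sketch, with arbitrary size and time:

```python
import numpy as np

rng = np.random.default_rng(2)

def goe(n):
    # GOE-normalised Wigner matrix: off-diagonal entry variance 1/n
    x = rng.standard_normal((n, n))
    return (x + x.T) / np.sqrt(2 * n)

N, t = 400, 3.0
W0 = goe(N)
# B_t / sqrt(N) has the same law as sqrt(t) times an independent GOE matrix
Wt = W0 + np.sqrt(t) * goe(N)
edge0 = float(np.abs(np.linalg.eigvalsh(W0)).max())
edge_t = float(np.abs(np.linalg.eigvalsh(Wt)).max())
# heuristically: edge0 near 2, edge_t near 2 * sqrt(1 + t) = 4
```

This is only a sanity check of the scaling of the spectrum under the flow; the fluctuations of the edge are of order \(N^{-2/3}\).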
By~\eqref{eq:evectorflow} it follows that the flow for the diagonal overlaps \(\braket{{\bm u}_i, A {\bm u}_i}\) naturally depends also on the off-diagonal overlap \(\braket{{\bm u}_i, A {\bm u}_j}\), hence our analysis will concern not only diagonal overlaps, but also off-diagonal ones.
Since \(\braket{{\bm u}_i, A {\bm u}_i}-\braket{A}=\braket{{\bm u}_i, \mathring{A} {\bm u}_i}\) and
\(\braket{{\bm u}_i, A {\bm u}_j}= \braket{{\bm u}_i, \mathring{A} {\bm u}_j}\) for \(i\ne j\), without loss of generality we
may assume for the rest of the paper, that \(A\) is traceless, \(\braket{A}=0\), i.e.\ \(A=\mathring{A}\). For traceless \(A\)
we introduce the short-hand notation
\begin{equation}
p_{ij}=p_{ij}(t):=\braket{{\bm u}_i(t), A {\bm u}_j(t)},
\quad i, j\in [N].
\end{equation}
We are now ready to write the flow for monomials of \(p_{ii}\), \(p_{ij}\) (see~\cite[Theorem 2.6]{MR4156609} for the derivation of the flow). For any fixed \(n\), we
will only need to consider monomials of the form \(\prod_{k=1}^n p_{i_k j_k}\) where each index appears an even number of times;
it turns out that the linear combinations of such monomials with a fixed degree \(n\) are invariant under the flow.
To encode general monomials, we use a particle picture (introduced in~\cite{MR3606475} and developed in~\cite{MR4156609, 2005.08425}) where each particle on
the set of integers \([N]\) corresponds to two occurrences of an index \(i\) in the monomial product.
We use the same notation as in~\cite{MR4156609} and we define \({\bm \eta}:[N] \to \mathbf{N}\), where \(\eta_j:={\bm \eta}(j)\) is interpreted as the number of particles at the site \(j\), and \(n({\bm \eta}):=\sum_j \eta_j= n\) denotes the total number of particles that is conserved under the flow. The space of \(n\)-particle configurations is denoted by \(\Omega^n\).
Moreover, for any index pair \(i\ne j\in[N]\), we define \({\bm \eta}^{ij}\) to be the configuration obtained moving a particle from the site \(i\) to the site \(j\), if there is no particle in \(i\) then we define \({\bm \eta}^{ij}={\bm \eta}\). For any configuration \({\bm \eta}\) consider the set of vertices
\begin{equation}
\mathcal{V}_{{\bm \eta}}:=\{(i,a): 1\le i \le N, 1\le a\le 2\eta_i\},
\end{equation}
and let \(\mathcal{G}_{\bm \eta}\) be the set of perfect matchings on \(\mathcal{V}_{{\bm \eta}}\).
Note that every particle in the configuration \({\bm\eta}\) gives rise to two vertices in \(\mathcal{V}_{{\bm \eta}}\), thus the elements of
\(\mathcal{V}_{{\bm \eta}}\) represent the indices in the product \(\prod_{k=1}^n p_{i_k j_k}\).
There is no closed equation for individual products \(\prod_{k=1}^n p_{i_k j_k}\), but there is one for a certain symmetrized
linear combination, see~\cite[Eq. (2.15)]{MR4156609}. Therefore, for any perfect matching \(G\in \mathcal{G}_{\bm \eta}\) we
define
\begin{equation}\label{eq:defpg}
P(G):=\prod_{e\in \mathcal{E}(G)}p(e), \qquad p(e):=p_{i_1i_2},
\end{equation}
where \(e=\{(i_1,a_1),(i_2,a_2)\}\) is an edge with vertices in \(\mathcal{V}_{\bm \eta}\), and \(\mathcal{E}(G)\) denotes the set of edges of \(G\).
For example, for \(n=2\) and for the configuration
\({\bm \eta}\) defined by \({\bm\eta}(i)= {\bm\eta}(j)= 1\) with some \(i\ne j\) and zero otherwise, we have three perfect matchings
corresponding to \(p_{ii} p_{jj}\) and twice \(p_{ij}^2\).
For \(n=3\) and \({\bm \eta}\) defined by \({\bm\eta}(i)= {\bm\eta}(j) = {\bm\eta}(k) =1\), we have 15 perfect matchings:
\(p_{ii} p_{jj} p_{kk}\), two copies each of \(p_{ij}^2p_{kk}\), \(p_{ik}^2 p_{jj}\) and \(p_{jk}^2 p_{ii}\), and 8 copies of \(p_{ij}p_{jk}p_{ki}\).
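These matching counts (and the general count \((2n-1)!!\) for \(2n\) distinct vertices) can be verified by brute-force enumeration; a small illustrative script:

```python
def perfect_matchings(vertices):
    # recursively enumerate all perfect matchings of an even-sized vertex list
    if not vertices:
        yield []
        return
    first, rest = vertices[0], vertices[1:]
    for k, partner in enumerate(rest):
        for m in perfect_matchings(rest[:k] + rest[k + 1:]):
            yield [(first, partner)] + m

def dfact(k):
    # double factorial with the convention (-1)!! = 0!! = 1
    return 1 if k <= 0 else k * dfact(k - 2)

counts = [len(list(perfect_matchings(list(range(2 * n))))) for n in range(1, 5)]
# counts agrees with [(2n-1)!! for n = 1..4] = [1, 3, 15, 105]
```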
We are now ready to define the \emph{perfect matching observable} for any given configuration \({\bm \eta}\),
\begin{equation}
\label{eq:deff}
f_{{\bm \lambda},t}({\bm \eta}):= \frac{N^{n/2}}{ [2\braket{{A}^2}]^{n/2}} \frac{1}{(n-1)!!}\frac{1}{\mathcal{M}({\bm \eta}) }\E\left[\sum_{G\in\mathcal{G}_{\bm \eta}} P(G)\Bigg|
{\bm \lambda}\right], \quad \mathcal{M}({\bm \eta}):=%
\prod_{i=1}^N (2\eta_i-1)!!,
\end{equation}
with \(n\) being the number of particles in the configuration \({\bm \eta}\). Here we took the conditioning on the entire flow of eigenvalues, \({\bm \lambda} =\{\bm \lambda(t)\}_{t\in [0,T]}\) for some fixed \(T>0\). From now on we will always assume that $T\ll 1$ (even if not stated explicitly). The observable \(f_{\bm\lambda,t}\) satisfies a parabolic partial differential equation, see~\eqref{eq:1dequa} below.
\begin{remark}
For any \(k\in \mathbf{N}\) the double factorial \(k!!\) is defined by \(k!!=k(k-2)!!\), \(1!!=0!!=(-1)!!=1\). We remark that in~\cite{MR4156609, 2005.08425} the authors use a different convention for the double factorial, i.e.\ in these papers \(k!!=(k-1)(k-2)!!\).
\end{remark}
Note that \(f\) in~\eqref{eq:deff} is defined slightly differently
compared to the definition in~\cite[Eq. (2.15)]{MR4156609}, where the authors do not have the additional
\((N/(2\braket{{A}^2}))^{n/2}[(n-1)!!]^{-1}\) factor. Our normalisation factor is dictated by
the principle that for traceless \(A\) we expect \(\sqrt{N}p_{ii}=\sqrt{N}[\langle {\bm u}_i, A{\bm u}_i\rangle ]\) to be approximately
a centred normal random variable with variance \(2\braket{{A}^2}\). In particular the \(n\)-th moment of
\((N/ 2\braket{{A}^2})^{1/2}p_{ii}\) for even \(n\) is close to \((n-1)!!\).
Therefore if \({\bm\eta}\) is a configuration with \(n\)
particles all sitting at the same site \(i\), i.e.\ \({\bm \eta}(i)=n\) and zero otherwise, then
\(\mathcal{M}({\bm \eta})= (2n-1)!!\) is the number of perfect matchings and therefore
we expect \(f_{{\bm \lambda},t}({\bm \eta}) \approx 1\).
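The normalisation is thus calibrated to the even moments of a standard Gaussian, \(\E g^{2m}=(2m-1)!!=(2m)!/(2^m m!)\); a short arithmetic check of this classical identity:

```python
import math

def dfact(k):
    # double factorial with the convention (-1)!! = 0!! = 1
    return 1 if k <= 0 else k * dfact(k - 2)

# E g^{2m} for g ~ N(0,1) has the closed form (2m)! / (2^m m!)
gaussian_moments_match = all(
    math.factorial(2 * m) // (2 ** m * math.factorial(m)) == dfact(2 * m - 1)
    for m in range(1, 10)
)
```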
Note that using the a priori bound \(|p_{ij}|\le N^{-1/2+\xi}\), for any \(\xi>0\), proven in~\cite[Theorem 2.2]{2012.13215}
%
we have \(|f_{{\bm \lambda},t}|\lesssim N^\xi\) with very high probability, while the analogous
quantity \(f_{{\bm \lambda},t}\) defined in~\cite[Eq. (2.15)]{MR4156609} has an a priori bound of order \(N^{-n/2+\xi}\).
We always assume that the entire eigenvalue trajectory
\(\{\bm \lambda(t)\}_{t\in [0,T]}\) satisfies the usual rigidity estimate (see e.g.~\cite[Theorem 7.6]{MR3068390} or~\cite{MR2871147}). More precisely, for any fixed \(\xi>0\) we define
\begin{equation}\label{def:Omega}
\Omega=\Omega_\xi:= \Big\{ \sup_{0\le t \le T} \max_{i\in[N]} N^{2/3}\widehat{i}^{1/3} | \lambda_i(t)-\gamma_i(t)| \le N^\xi\Big\}
\end{equation}
where \(\widehat{i}:=i\wedge (N+1-i)\),
then we have
\[
\mathbf{P} (\Omega_\xi)\ge 1- C(\xi, D) N^{-D}
\]
for any (small) \(\xi>0\) and (large) \(D>0\).
Here \(\gamma_i(t)\) are the classical eigenvalue locations (\emph{quantiles}) defined by
\begin{equation}\label{eq:quantin}
\int_{-\infty}^{\gamma_i(t)} \rho_t(x)\, \operatorname{d}\!{} x=\frac{i}{N}, \qquad i\in [N],
\end{equation}
where \(\rho_t(x)= \frac{1}{2\pi(1+t)}\sqrt{(4(1+t)-x^2)_+}\) is the rescaled semicircle law corresponding to \(W_t\).
Note that \(|\gamma_i(t)-\gamma_i(s)|\lesssim |t-s|\) in the bulk, for any \(t,s\ge 0\), as a consequence of the smoothness of \(t\to\rho_t\) in the bulk.
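At \(t=0\) the quantiles in~\eqref{eq:quantin} can be computed numerically by bisecting the semicircle distribution function \(F(x)=\tfrac12+\tfrac{x\sqrt{4-x^2}}{4\pi}+\tfrac1\pi\arcsin(x/2)\); a minimal sketch (the value of \(N\) and the tolerance are arbitrary):

```python
import math

def semicircle_cdf(x):
    # integral of sqrt(4 - s^2)/(2 pi) over s from -2 to x
    x = max(-2.0, min(2.0, x))
    return 0.5 + x * math.sqrt(4.0 - x * x) / (4.0 * math.pi) \
               + math.asin(x / 2.0) / math.pi

def quantile(p, tol=1e-10):
    # bisection: find x with F(x) = p on [-2, 2]
    lo, hi = -2.0, 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if semicircle_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N = 1000
gamma = [quantile(i / N) for i in range(1, N + 1)]   # gamma_i(0), i = 1..N
```

With this convention \(\gamma_{N/2}(0)=0\), \(\gamma_N(0)=2\), and by symmetry of the density \(\gamma_i(0)=-\gamma_{N-i}(0)\).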
By~\cite[Theorem 2.6]{MR4156609} we have that
\begin{align}\label{eq:1dequa}
\partial_t f_{{\bm \lambda},t}&=\mathcal{B}(t)f_{{\bm \lambda},t}, \\\label{eq:1dkernel}
\mathcal{B}(t)f_{{\bm \lambda},t}&=\sum_{i\ne j} c_{ij}(t) 2\eta_i(1+2\eta_j)\big(f_{{\bm \lambda},t}({\bm \eta}^{ij})-f_{{\bm \lambda},t}({\bm \eta})\big),
\end{align}
where
\begin{equation}\label{eq:defc}
c_{ij}(t):= \frac{1}{N(\lambda_i(t) - \lambda_j(t))^2}.
\end{equation}
Note that \(c_{ij}\) depends on \(\{{\bm \lambda}(t)\}_{t\in [0, T]}\), for some \(T>0\), but we omit this fact from the notation.
We note that this flow was originally derived for special observables given in~\cite[Eq. (2.6)]{MR4156609}, but the same derivation immediately holds
for arbitrary \(A\) (see~\cite[Remark 2.8]{MR4156609}).
The main technical ingredient that will be used in the proof of Theorem~\ref{theo:flucque}
is the following proposition, whose proof is postponed to Section~\ref{sec:DBM}.
\begin{proposition}\label{pro:flucque}
For any \(n\in\mathbf{N}\) there exists \(c(n)>0\) such that for any \(\epsilon>0\), and for any \(T\ge N^{-1+\epsilon}\) it holds
\begin{equation}\label{eq:mainbthissec}
\sup_{{\bm \eta}}\big|f_T({\bm\eta})-\bm1(n\,\, \mathrm{even})\big|\lesssim N^{-c(n)},
\end{equation}
with very high probability, where the supremum is taken over configurations \({\bm \eta}\) such that \(\sum_i \eta_i=n\) and \(\eta_i=0\) for \(i\notin [\delta N, (1-\delta) N]\), with \(\delta>0\) from Theorem~\ref{theo:flucque}. The implicit constant in~\eqref{eq:mainbthissec} depends on \(n\), \(\epsilon\), \(\delta\).
\end{proposition}
\begin{proof}[Proof of Theorem~\ref{theo:flucque}]
We fix \(i\in [\delta N,(1-\delta) N]\) and choose \({\bm \eta}\) to be the configuration with \(\eta_i=n\) and all other \(\eta_j=0\). Then all the terms \(P(G)\) in the definition~\eqref{eq:deff} of \(f\) are equal to \(p_{ii}^n\). Using~\eqref{eq:mainbthissec} for this particular \({\bm \eta}\), we conclude that
\begin{equation}\label{eq:cltgcomp}
\E\left[\left(\sqrt{\frac{N}{2\braket{A^2}}}\braket{{\bm u}_i(T),A{\bm u}_i(T)}\right)^{\!n}\right]=\bm1(n\,\, \mathrm{even})(n-1)!!+\mathcal{O}\left(N^{-c(n)}\right),
\end{equation}
for any \(i\in [\delta N,(1-\delta) N]\) and \(T\gg N^{-1}\), where we used that \(\norm{f_{T}}_\infty\le N^{n/2}\) deterministically on the complement of the high probability set on which~\eqref{eq:mainbthissec} holds. With~\eqref{eq:cltgcomp} we have proved that Theorem~\ref{theo:flucque} holds for Wigner matrices with a small Gaussian component. For the general case, Theorem~\ref{theo:flucque} follows from~\eqref{eq:cltgcomp} and a standard application of the Green function comparison theorem (GFT), relating the eigenvectors/eigenvalues of \(W_T\) to those of \(W\); see Appendix~\ref{app:GFT} where we recall the argument for completeness.
\end{proof}
\section{DBM analysis}\label{sec:DBM}
In this section we focus on the analysis of the eigenvector moment flow~\eqref{eq:1dequa}--\eqref{eq:1dkernel}. Since in our proof we use some results proven in~\cite{2005.08425}, we start by giving an equivalent representation of~\eqref{eq:deff}, which is the same as the one used in~\cite{2005.08425} without distinguishing the several colours.
\subsection{Equivalent representation of the flow}\label{sec:equivrep}
Fix \(n\in\mathbf{N}\), then in the remainder of this section we will consider configurations \({\bm \eta}\in\Omega^n\), i.e.\ such that \(\sum_j \eta_j=n\). Following~\cite{2005.08425} (but without the extra complication involving colours),
we now give an equivalent representation of the flow~\eqref{eq:1dequa}--\eqref{eq:1dkernel}
which will be defined on the \(2n\)-dimensional lattice \([N]^{2n}\) instead of configurations
of \(n\) particles. Let \({\bm x}\in [N]^{2n}\) and define
\begin{equation}
n_i({\bm x}):=|\{a\in [2n]:x_a=i\}|,
\end{equation}
for all \(i\in [N]\). We define the configuration space
\[
\Lambda^n:= \big\{ {\bm x}\in [N]^{2n} \, : \,\mbox{\(n_i({\bm x})\) is even for every \(i\in [N]\)} \big\}.%
\]
Note that \(\Lambda^n\) is an \(n\)-dimensional subset of the \(2n\) dimensional
lattice \([N]^{2n}\) in the sense that \(\Lambda^n\) is a finite union of \(n\)-dimensional sublattices of \([N]^{2n}\).
From now on we will only consider configurations \({\bm x}\in\Lambda^n\). In particular, in this representation each particle carries a label \(a\in [2n]\), i.e.\ there is a particle at a site \(i\in [N]\) iff there exists \(a\in [2n]\) such that \(x_a=i\). Additionally, by the definition of \(\Lambda^n\) it follows that the number of particles at any site \(i\in [N]\) is always even.
\begin{remark}
Note that in~\cite{2005.08425} the authors consider \({\bm x}\) to be an \(n\)-dimensional vector that lives in the \(n/2\)-dimensional subset \(\Lambda^n\). For notational simplicity, in the current paper we assume that \({\bm x}\) is a \(2n\)-dimensional vector and that \(\Lambda^n\) is \(n\)-dimensional.
\end{remark}
The natural correspondence between the two representations is given by
\begin{equation}\label{xeta}
{\bm \eta} \leftrightarrow {\bm x}\qquad \eta_i=\frac{n_i( {\bm x})}{2}.
\end{equation}
Note that \({\bm x}\) uniquely determines \({\bm \eta}\), but \({\bm \eta}\) determines the coordinates
of \({\bm x}\) only as a multi-set, not their ordering. As an example, the configuration with one particle at each of two distinct sites \(i_1\ne i_2\) (hence two doubled particles in the doubled representation) corresponds to six vectors \(\bm x\in\Lambda^2\), as in
\begin{align*}
\underbrace{\begin{tikzpicture}
[every node/.style={fill, circle, inner sep = 1.5pt},baseline=0pt]
\draw (-.5,0) -- (1.5,0);
\node[label=below:$i_1$] (i1) at (0,0) {};
\node[label=below:$i_2$] (i2) at (1,0) {};
\end{tikzpicture}}_{\bm\eta\text{-repr.}}\quad \Leftrightarrow \quad
\underbrace{\begin{tikzpicture}
[every node/.style={fill, circle, inner sep = 1.5pt},baseline=0pt]
\draw (-.5,0) -- (1.5,0);
\node[label=below:$i_1$] (i1) at (0,0) {};
\node[label=below:$i_2$] (i2) at (1,0) {};
\node (i11) at ($(i1)+(0,.33)$) {};
\node (i21) at ($(i2)+(0,.33)$) {};
\end{tikzpicture}}_{\text{doubled }\bm\eta\text{-repr.}}\quad \Leftrightarrow \quad
\underbrace{\begin{pmatrix}i_1\\ i_1\\ i_2\\ i_2\end{pmatrix}\equiv
\begin{pmatrix}i_1\\ i_2\\ i_1\\ i_2\end{pmatrix}\equiv
\begin{pmatrix}i_1\\ i_2\\ i_2\\ i_1\end{pmatrix}\equiv\cdots}_{\bm x\text{-repr.}}.
\end{align*}
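More generally, a simple counting remark: the number of \({\bm x}\)-representatives of a given configuration \({\bm \eta}\) is the multinomial coefficient counting the orderings of the corresponding multi-set of coordinates,
\[
\big|\big\{{\bm x}\in [N]^{2n} \, : \, n_i({\bm x})=2\eta_i \,\,\mbox{for all \(i\in[N]\)}\big\}\big| = \frac{(2n)!}{\prod_{i=1}^N (2\eta_i)!}\,;
\]
for the configuration above (\(n=2\), \(\eta_{i_1}=\eta_{i_2}=1\)) this gives \(4!/(2!\,2!)=6\), while a configuration with both particles at a single site has a unique representative.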
Let \(\phi\colon\Lambda^n\to \Omega^n\), \(\phi({\bm x})={\bm \eta}\)
denote the map that projects the \({\bm x}\)-configuration space to the \({\bm \eta}\)-configuration space using~\eqref{xeta}.
This map naturally pulls back functions \(f\) of \({\bm \eta}\) to functions of \({\bm x}\)
\[
(\phi^* f)({\bm x}) = f(\phi({\bm x})).
\]
We will always consider
functions \(g\) on \([N]^{2n}\) that are pull-backs of some function \(f\) on \(\Omega^n\),
\(g= f\circ \phi\), i.e.
they correspond to functions on the configurations
\[
f({\bm \eta})= f(\phi({\bm x}))= g({\bm x}).
\]
In particular \(g\) is supported on \(\Lambda^n\) and it is invariant under permutations of its
arguments, i.e.\ it
depends on \({\bm x}\) only as a multiset. %
We therefore consider the observable
\begin{equation}\label{eq:defg}
g_{{\bm \lambda},t}({\bm x}):= f_{{\bm \lambda},t}( \phi({\bm x}))
\end{equation}
where \( f_{{\bm \lambda},t}\) was defined in~\eqref{eq:deff}.
In the following we will often use the notation \(g_t({\bm x})=g_{{\bm \lambda},t}({\bm x})\), dropping the dependence of \(g_t({\bm x})\) on the eigenvalues.
The flow~\eqref{eq:1dequa}--\eqref{eq:1dkernel} can be written in the \({\bm x}\)-representation as follows:
\begin{align}\label{eq:g1deq}
\partial_t g_t({\bm x})&=\mathcal{L}(t)g_t({\bm x}) \\\label{eq:g1dker}
\mathcal{L}(t):=\sum_{j\ne i}\mathcal{L}_{ij}(t), \quad \mathcal{L}_{ij}(t)g({\bm x}):&= c_{ij}(t) \frac{n_j({\bm x})+1}{n_i({\bm x})-1}\sum_{a\ne b\in[2 n]}\big(g({\bm x}_{ab}^{ij})-g({\bm x})\big),
\end{align}
where
\begin{equation}
\label{eq:jumpop}
{\bm x}_{ab}^{ij}:={\bm x}+\delta_{x_a i}\delta_{x_b i} (j-i) ({\bm e}_a+{\bm e}_b),
\end{equation}
with \({\bm e}_a(c)=\delta_{ac}\), \(a,c\in [2n]\). Clearly this flow preserves the permutation invariance of \(g\),
i.e.\ it is a map on functions defined on \(\Lambda^n\). The jump operator \({\bm x}_{ab}^{ij}\) defined in~\eqref{eq:jumpop} changes \(x_a,x_b\) from \(i\) to \(j\) if \(x_a=x_b=i\) and otherwise leaves \(\bm x\) unchanged. In the particle picture \({\bm \eta}\) this corresponds to moving one particle (if there is any) from the site \(i\) to the site \(j\); see the following example for \(n=2\) (with \(i=i_1\), \(j=j_1\) and \(a=1\), \(b=2\)):
\[ \begin{tikzpicture}
[every node/.style={fill, circle, inner sep = 1.5pt},baseline=-15pt,>=stealth']
\draw (-1.5,0) -- (1.5,0);
\draw (-1.5,-1) -- (1.5,-1);
\node[label=below:$i_1$] (i1) at (0,0) {};
\node[label=below:$i_2$] (i2) at (1,0) {};
\node[label=below:$j_1$] (j1) at (-1,-1) {};
\node[label=below:$i_2$] (j2) at (1,-1) {};
\draw [shorten >=1pt,shorten <=1pt,->] (i1) -- (j1);
\end{tikzpicture}\quad \Leftrightarrow \quad
\begin{tikzpicture}
[every node/.style={fill, circle, inner sep = 1.5pt},baseline=-15pt,>=stealth']
\draw (-1.5,0) -- (1.5,0);
\draw (-1.5,-1) -- (1.5,-1);
\node[label=below:$i_1$] (i1) at (0,0) {};
\node[label=below:$i_2$] (i2) at (1,0) {};
\node[label=below:$j_1$] (j1) at (-1,-1) {};
\node[label=below:$i_2$] (j2) at (1,-1) {};
\node (i1p) at ($(i1)+(0,.33)$) {};
\node (i2p) at ($(i2)+(0,.33)$) {};
\node (j1p) at ($(j1)+(0,.33)$) {};
\node (j2p) at ($(j2)+(0,.33)$) {};
\draw[thick,gray,rounded corners] ($(i1p.north west)+(-0.1,0.1)$) rectangle ($(i1.south east)+(0.1,-0.1)$);
\draw[thick,gray,rounded corners] ($(j1p.north west)+(-0.1,0.1)$) rectangle ($(j1.south east)+(0.1,-0.1)$);
\draw [shorten >=8pt,shorten <=8pt,->] ($(i1)!0.5!(i1p)$) -- ($(j1)!0.5!(j1p)$);
\end{tikzpicture}\quad \Leftrightarrow\quad
\bm x=\begin{pmatrix}i_1\\ i_1\\ i_2\\ i_2\end{pmatrix}\mapsto\bm x_{ab}^{i_1j_1}=\begin{pmatrix}j_1\\ j_1\\ i_2\\ i_2\end{pmatrix}.
\]
Define the measure
\begin{equation}\label{eq:revmeasure}
\pi({\bm x}):=\prod_{i=1}^N ((n_i({\bm x})-1)!!)^2
\end{equation}
on \(\Lambda^n\) and the corresponding \(L^2(\Lambda^n)=L^2(\Lambda^n,\pi)\) space equipped with the scalar product
\begin{equation}\label{eq:scalpro}
\braket{f, g}_{\Lambda^n}=\braket{f, g}_{\Lambda^n, \pi}:=\sum_{{\bm x}\in \Lambda^n}\pi({\bm x})\bar f({\bm x})g({\bm x}).
\end{equation}
We will often drop the dependence on the measure \(\pi\) in the scalar product. We also define the following norm on \(L^p(\Lambda^n)\):
\begin{equation}
\norm{f}_p:=\left(\sum_{{\bm x}\in \Lambda^n}\pi({\bm x})|f({\bm x})|^p\right)^{1/p}.
\end{equation}
The measure \(\pi({\bm x})\) clearly satisfies
\begin{equation}\label{eq:boundsrevmeasure}
1\le \pi({\bm x}) \le ((2n-1)!!)^2,
\end{equation}
uniformly in \({\bm x}\in\Lambda^n\).
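For concreteness, evaluating \(\pi\) from~\eqref{eq:revmeasure} in the two extreme cases for \(n=2\) gives
\[
\pi\big((i_1,i_1,i_2,i_2)\big)=\big((2-1)!!\big)^2\big((2-1)!!\big)^2=1, \qquad
\pi\big((i,i,i,i)\big)=\big((4-1)!!\big)^2=9,
\]
i.e.\ the measure is smallest on configurations with all doubled entries at distinct sites and largest when all entries coincide.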
A direct calculation in~\cite[Appendix A.2]{2005.08425}
shows that the operator \(\mathcal{L}=\mathcal{L}(t)\) is symmetric with respect to the measure \(\pi\)
and it is a negative operator on the space \(L^2(\Lambda^n)\) with Dirichlet form
\[
D(g)=\braket{g, (-\mathcal{L}) g}_{\Lambda^n} = \frac{1}{2} \sum_{{\bm x}\in \Lambda^n}\pi({\bm x})
\sum_{i\ne j} c_{ij}(t) \frac{n_j({\bm x})+1}{n_i({\bm x})-1}
\sum_{a\ne b\in[2 n]}\big|g({\bm x}_{ab}^{ij})-g({\bm x})\big|^2.
\]
We will often omit the time dependence of the generator \(\mathcal{L}(t)\). We denote by \(\mathcal{U}(s,t)\)
the semigroup associated to \(\mathcal{L}\) from~\eqref{eq:g1dker}, i.e.\ for any \(0\le s\le t\) it holds
\[
\partial_t\mathcal{U}(s,t)=\mathcal{L}(t)\mathcal{U}(s,t), \quad \mathcal{U}(s,s)=I.
\]
\subsection{Short-range approximation}
Before proceeding we introduce a localised version of~\eqref{eq:g1deq}--\eqref{eq:g1dker}.
Choose
an (\(N\)-dependent) parameter \(1\ll K\le \sqrt{N}\)
and define the \emph{averaging operator} as a simple multiplication operator by a ``smooth'' cut-off function:
\begin{equation}
\Av(K,{\bm y})h({\bm x}):=\Av({\bm x};K,{\bm y})h({\bm x}), \qquad \Av({\bm x}; K, {\bm y}):=\frac{1}{K}\sum_{j=K}^{2K-1} \bm1(\norm{{\bm x}-{\bm y}}_1<j),
\end{equation}
with \(\norm{{\bm x}-{\bm y}}_1:=\sum_{a=1}^{2n} |x_a-y_a|\).
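Writing \(d_1:=\norm{{\bm x}-{\bm y}}_1\), the sum of indicators evaluates to an explicit piecewise linear profile,
\[
\Av({\bm x}; K, {\bm y})=\min\Big\{1,\frac{(2K-1-d_1)_+}{K}\Big\},
\]
i.e.\ \(\Av\) equals \(1\) for \(d_1<K\), decreases linearly in the regime \(K\le d_1\le 2K-1\), and vanishes for \(d_1\ge 2K-1\); this is the sense in which the cut-off is ``smooth''.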
Although this operator was denoted and called an averaging operator in~\cite{MR4156609, 2005.08425}, it is rather a \emph{localization}, i.e.\ a multiplication
by a ``smooth'' cut-off function \({\bm x}\to \Av({\bm x}; K, {\bm y})\) which
is centered at \({\bm y}\) and has a soft range of size \(K\). The parameters \(K, {\bm y}\) are considered fixed
and often omitted from the notation. In particular, throughout the paper we will assume that \({\bm y}\) is supported in the bulk, i.e.\ that every coordinate of \({\bm y}\) lies in \(\mathcal{J}\) (see the definition of \(\mathcal{J}\) in \eqref{eq:defintJ} below).
Now we define a short range version of the dynamics~\eqref{eq:g1deq}. Fix an integer \(\ell\) with \(1\ll\ell\ll K\)
and define the short range coefficients
\begin{equation}\label{eq:ccutoff}
c_{ij}^{\mathcal{S}}(t):=\begin{cases}
c_{ij}(t) &\mathrm{if}\,\, i,j\in \mathcal{J} \,\, \mathrm{and}\,\, |i-j|\le \ell \\
0 & \mathrm{otherwise},
\end{cases}
\end{equation}
where \(c_{ij}(t)\) is defined in~\eqref{eq:defc}. Here
\begin{equation}\label{eq:defintJ}
\mathcal{J}=\mathcal{J}_\delta:=\{ i\in [N]:\, \gamma_i(0)\in \mathcal{I}_\delta\}, \qquad \mathcal{I}_\delta:=(-2+\delta,2-\delta)
\end{equation}
with \(\delta>0\) from Theorem~\ref{theo:flucque}, so that \(\mathcal{I}_\delta\) lies entirely in the bulk spectrum.
We define \(h_t({\bm x})\) as the time evolution of a localized initial data \(g_0\)
by the short range dynamics: %
\begin{equation}\label{g-1}
\begin{split}
h_0({\bm x};\ell, K,{\bm y})=h_0({\bm x};K,{\bm y}):&=\Av({\bm x}; K,{\bm y})(g_0({\bm x})-\bm1(n \,\, \mathrm{even})), \\\partial_t h_t({\bm x}; \ell, K,{\bm y})&=\mathcal{S}(t) h_t({\bm x}; \ell, K,{\bm y}),
\end{split}
\end{equation}
where
\begin{equation}\label{g-2}
\mathcal{S}(t):=\sum_{j\ne i}\mathcal{S}_{ij}(t), \quad \mathcal{S}_{ij}(t)h({\bm x}):=c_{ij}^{\mathcal{S}}(t)\frac{n_j({\bm x})+1}{n_i({\bm x})-1}\sum_{a\ne b\in [2n]}\big(h({\bm x}_{ab}^{ij})-h({\bm x})\big).
\end{equation}
Here we used the notation \(h({\bm x})=h({\bm x}; \ell, K,{\bm y})\) to indicate all relevant parameters:
\(\ell\) indicates
the short range of the dynamics, \({\bm y}\) is the centre
and \(K\) is the range of the cut-off in the initial condition, and we always choose \(\ell \ll K\).
In~\eqref{g-1} we already subtracted \(\bm1(n \,\, \mathrm{even})\) since in our application the initial condition \(g_0({\bm x})\),
after some local averaging, will be close to \(\bm1(n \,\, \mathrm{even})\); hence for larger times we expect \(h_t\) to tend to zero, since
the dynamics has a smoothing effect and is an \(L^1\) contraction.
\subsection{\texorpdfstring{\(L^2\)}{L2}-bound}\label{sec:l2}
Define the distance on \(\Lambda^n\) as
\begin{equation}
d({\bm x}, {\bm y}):=\sup_{a\in [2n]}|\mathcal{J}\cap \big[\min(x_a,y_a), \max(x_a,y_a)\big)|,
\end{equation}
with \(\mathcal{J}\) defined in~\eqref{eq:defintJ}. Note that \(d\) is not a metric since it is degenerate, but it is still symmetric and satisfies the triangle inequality~\cite[Eq.~(5.6)]{2005.08425}. The key ingredient to prove the \(L^2\)-bound in~\eqref{eq:l2b} below is to show that the short range dynamics~\eqref{g-1}--\eqref{g-2} is close to the original dynamics~\eqref{eq:g1deq}--\eqref{eq:g1dker}. This will be achieved using the following finite speed of propagation estimate for \(\mathcal{U}_{\mathcal{S}}(s,t)=\mathcal{U}_{\mathcal{S}}(s,t;\ell)\), the transition semigroup associated to the short range generator \(\mathcal{S}(t)\); the estimate was proven in~\cite[Theorem 2.1, Lemma 2.4]{MR3690289} and~\cite[Proposition 5.2]{2005.08425} (see also~\cite[Eq.~(3.15)]{MR4156609}). For any \({\bm x}\in\Lambda^n\) define
the ``delta-function'' on \(\Lambda^n\) as
\[
\delta_{{\bm x}}({\bm u}):=\begin{cases}
\pi({\bm x})^{-1} &\mathrm{if} \,\, {\bm u}={\bm x} \\
0 &\mathrm{otherwise},
\end{cases}
\]
and denote the matrix entries of \(\mathcal{U}_{\mathcal{S}}(s,t)\) by \(\mathcal{U}_{\mathcal{S}}(s,t)_{{\bm x}{\bm y}}:=\braket{\delta_{{\bm x}}, \mathcal{U}_{\mathcal{S}}(s,t) \delta_{{\bm y}}}\).
\begin{proposition}\label{pro:finitespeed}
Fix any small \(\epsilon>0\), and \(\ell\ge N^\epsilon\). Then for any \({\bm x}, {\bm y}\in \Lambda^n\) with \(d({\bm x}, {\bm y})>N^\epsilon\ell\) it holds
\begin{equation}
\sup_{0\le s_1\le s_2\le s_1+\ell N^{-1}} |\mathcal{U}_{\mathcal{S}}(s_1,s_2;\ell)_{{\bm x}{\bm y}}|\le e^{-N^\epsilon/2},
\end{equation}
on the very high probability event \(\Omega\).
\end{proposition}
This finite speed of propagation, together with the fact that the initial condition \(h_0\) is localized in
a \(K\)-neighbourhood of a fixed center \({\bm y}\), implies that \(h_t\) is supported in a \(K+N^\epsilon\ell \le 2K\)
neighbourhood of \({\bm y}\) up to an exponentially small tail.
Using Proposition~\ref{pro:finitespeed}, by~\cite[Corollary 5.3]{2005.08425}, we immediately conclude the following lemma.
\begin{lemma}\label{lem:exchav}
For any times \(s_1,s_2\) such that \(0\le s_1\le s_2\le s_1+\ell N^{-1}\), and for any \({\bm y}\in \Lambda^n\) supported on \(\mathcal{J}\) (i.e.\ \({\bm y}_a\in \mathcal{J}\) for any \(a\in [2n]\))
for the commutator of the evolution \(\mathcal{U}_\mathcal{S}\) and the averaging operator we have
\begin{equation}\label{eq:comm}
\norm{[\mathcal{U}_\mathcal{S}(s_1,s_2;\ell), \Av({\bm y},K)]}_{\infty,\infty}\le C(n)\frac{N^\epsilon\ell}{K},
\end{equation}
for some constant \(C(n)>0\) and for any small \(\epsilon>0\), on the very high probability event \(\Omega\).
\end{lemma}
Another straightforward application of the finite speed of propagation estimate in Proposition~\ref{pro:finitespeed} is the following bound on \(\mathcal{U}(s_1,s_2)-\mathcal{U}_{\mathcal{S}}(s_1,s_2;\ell)\). This result was proven in~\cite[Proposition 5.7]{2005.08425} for a specific \(f\), but the same proof applies to a general function \(f\).
\begin{lemma}\label{lem:shortlongapprox}
Let \(0\le s_1\le s_2\le s_1+\ell N^{-1}\), and let \(f\) be a function on \(\Lambda^n\).
Then for any \({\bm x}\in \Lambda^n\) supported on \(\mathcal{J}\) it holds
\begin{equation}\label{eq:shortlong}
\Big| (\mathcal{U}(s_1,s_2)-\mathcal{U}_{\mathcal{S}}(s_1,s_2;\ell) ) f({\bm x}) \Big|\lesssim N^{1+n\xi}\frac{s_2-s_1}{\ell} \| f\|_\infty,
\end{equation}
for any small \(\xi>0\).
\end{lemma}
\begin{proof}
Using Proposition~\ref{pro:finitespeed}, the proof of~\eqref{eq:shortlong} is completely analogous to the proof of~\cite[Proposition 5.7]{2005.08425}, since the only input used in~\cite[Proposition 5.7]{2005.08425} is that
\[
\sum_{j: |j-i|>\ell}\frac{1}{N(\lambda_i-\lambda_j)^2}\le \frac{N^{1+\xi}}{\ell}
\]
on \(\Omega\), which follows by rigidity.
\end{proof}
Before stating the main result of this section we define the set \(\widehat{\Omega}\) on which the local laws for certain products of resolvents and traceless matrices \(A\) hold, i.e.\ for a small \(\omega>2\xi>0\) we define
\begin{equation}
\label{eq:hatomega}
\begin{split}
\widehat{\Omega}=\widehat{\Omega}_{\omega, \xi}:&=\bigcap_{\substack{z_i: \Re z_i\in [-3,3], \atop |\Im z_i|\in [N^{-1+\omega},10]}}\Bigg[\bigcap_{k=3}^n \left\{\sup_{0\le t \le T}(\rho_t^*)^{-1/2}\big|\braket{G_t(z_1)A\dots G_t(z_k)A}\big|\le \frac{N^{\xi+(k-3)/2}}{\sqrt{\eta_*}}\right\} \\
&\quad \cap\left\{\sup_{0\le t \le T}(\rho_{1,t}\rho_{2,t})^{-1}\big|\braket{\Im G_t(z_1)A\Im G_t(z_2)A}-\Im m_t(z_1)\Im m_t(z_2)\braket{A^2}\big|\le \frac{N^\xi}{\sqrt{N\eta^*}}\right\}\\
&\quad\cap \left\{\sup_{0\le t \le T}(\rho_{1,t})^{-1/2}\big|\braket{G_t(z_1)A}\big|\le \frac{N^\xi}{N\sqrt{|\Im z_1|}}\right\}\Bigg],
\end{split}
\end{equation}
where \(\eta_*:=\min\set[\big]{|\Im z_i|\given i\in[k]}\), $\rho_{i,t}:=|\Im m_t(z_i)|$, and $\rho_t^*:=\max_i\rho_{i,t}$. The fact that \(\widehat{\Omega}\) is a very high probability set follows by~\cite[Theorem 2.6]{2012.13215} for \(k=1\), by~\cite[Eq.~(3.10)]{2012.13215} for \(k=2\), and by Proposition~\ref{pro:llaw} for \(k\ge 3\). In particular, since \(\Im m_t(z_1)\Im m_t(z_2)\braket{A^2}\) is bounded by $\rho_{1,t}\rho_{2,t}$ for \(k=2\), we have
\[
\sup_{0\le t\le T}\sup_{z_1, z_2}(\rho_{1,t}\rho_{2,t})^{-1} \braket{\Im G_t(z_1) A\Im G_t(z_2) A}\lesssim 1,
\]
on the very high probability event \(\widehat{\Omega}_{\omega,\xi}\),
which, by the spectral theorem, implies
\begin{equation}\label{eq:apriori}
\sup_{0\le t\le T}\max_{i,j \in[N]} |\braket{{\bm u}_i(t), A {\bm u}_j(t)}|\le N^{-1/2+\omega} \qquad \,\,
\mbox{on \(\;\widehat{\Omega}_{\omega,\xi}\cap \Omega_\xi\)}.
\end{equation}
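For the reader's convenience we spell out this standard step. Spectrally decomposing \(\Im G_t(z)=\sum_k \frac{\eta}{(\lambda_k-E)^2+\eta^2}|{\bm u}_k\rangle\langle {\bm u}_k|\) for \(z=E+\mathrm{i}\eta\), and choosing \(E_1=\lambda_i\), \(E_2=\lambda_j\), \(\eta=N^{-1+\omega}\), the \((k,l)=(i,j)\) term of the resulting double sum is non-negative and can be isolated:
\[
\braket{\Im G_t(z_1)A\Im G_t(z_2)A}=\frac{1}{N}\sum_{k,l}\frac{\eta}{(\lambda_k-E_1)^2+\eta^2}\cdot\frac{\eta}{(\lambda_l-E_2)^2+\eta^2}\,|\braket{{\bm u}_k, A{\bm u}_l}|^2\ge \frac{|\braket{{\bm u}_i, A{\bm u}_j}|^2}{N\eta^2},
\]
so that \(|\braket{{\bm u}_i, A{\bm u}_j}|^2\le N\eta^2\braket{\Im G_t(z_1)A\Im G_t(z_2)A}\lesssim N\eta^2= N^{-1+2\omega}\) on \(\widehat{\Omega}_{\omega,\xi}\cap\Omega_\xi\), which is~\eqref{eq:apriori}.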
\begin{proposition}\label{prop:mainimprov}
For any scale satisfying \(N^{-1}\ll \eta\ll T_1\ll \ell N^{-1}\ll K N^{-1}\), and any small \(\epsilon, \xi>0\) it holds
\begin{equation}\label{eq:l2b}
\norm{h_{T_1}(\cdot; \ell, K, {\bm y})}_2\lesssim K^{n/2}\mathcal{E},
\end{equation}
with
\begin{equation}\label{eq:basimpr}
\mathcal{E}:= N^{n\xi}\left(\frac{N^\epsilon\ell}{K}+\frac{NT_1}{\ell}+\frac{N\eta}{\ell}+\frac{N^\epsilon}{\sqrt{N\eta}}+\frac{1}{\sqrt{K}}\right),
\end{equation}
uniformly for particle configuration \({\bm y}\in \Lambda^n\) supported on \(\mathcal{J}\) and eigenvalue trajectory \({\bm \lambda}\) on the high probability event \(\Omega_\xi \cap \widehat{\Omega}_{\omega,\xi}\).
\end{proposition}
\begin{proof}
Before presenting the formal proof, we explain the main idea.
In the sense of Dirichlet forms, we will replace the generator \(\mathcal{S}(t)\)~\eqref{g-1}--\eqref{g-2}, which
is the \emph{sum} of one-dimensional generators, with the generator \(\mathcal{A}(t)\) that corresponds to the \emph{product} of
such operators
(see~\eqref{eq:defAgen} below for its definition). Since \(c_{ij}\) decays
like \(|i-j|^{-2}\) (using rigidity in~\eqref{eq:defc}), it is the kernel of a discrete approximation
of the one-dimensional operator \(|p|=\sqrt{-\Delta}\) on \(\mathbf{R}\), lifted to the \(n\)-dimensional space \(\Lambda^{n}\). Therefore
one may think of \(\mathcal{L}(t)\), and of its short range approximation
\(\mathcal{S}(t)\), as a discrete analogue of \(|p_1|+|p_2|+ \cdots + |p_n|\), i.e.\
the sum of \(|p|\)-operators along all the \(n\) coordinate directions.
As explained in the introduction, using the short distance regularisation of the underlying lattice,
we really have \(\eta^{-1}[1-e^{-\eta|p|}]\) instead of \(|p|\) and the operator inequality~\eqref{replace} holds.
The left hand side of~\eqref{replace}
corresponds to the positive operator \((-\mathcal{A})\) and
the right hand side corresponds to \((-\mathcal{S})\). The key Lemma~\ref{lem:replacement} below asserts that
\(0\le (-\mathcal{A})\le C(n)(-\mathcal{S})\) in the sense of quadratic forms.
The main purpose of this replacement is that \(\mathcal{A}\) averages independently
in every direction, therefore \(\mathcal{A}\) acting on the function \(g=f\circ \phi\)
has the effect that it averages in \emph{all} the \(i_1, i_2, \ldots\) indices in the definition of \(P(G)\),~\eqref{eq:defpg}.
These averages yield traces of products \(\Im G A \Im G A \ldots \Im G A\) for which we have
a good local law on the set \(\widehat\Omega\).
We now explain the origin of the errors in~\eqref{eq:basimpr}.
The error in the multi-\(G\) local laws gives the crucial fourth error \(1/\sqrt{N\eta}\)
in~\eqref{eq:basimpr}. The other errors come from various approximations:
the dynamics commutes with the localization
up to an error of order \(\ell/K\) by Lemma~\ref{lem:exchav}, the short range cutoff dynamics
approximates the original one up to time \(T_1\) with an error of order \(NT_1/\ell\),
while the removed long range part contributes an error of order \(N\eta/\ell\) to the Dirichlet form.
The last error term, \(1/\sqrt{K}\), is technical: we carry out the analysis for typical index configurations
in which no two indices coincide; the exceptional configurations with coinciding indices occupy a volume
smaller by a factor of order \(1/K\) than the total volume, which yields the \(1/\sqrt{K}\) error after taking a square root.
Now we start with the actual proof. All the estimates in this proof hold uniformly for \({\bm y}\in \Lambda^n\) supported on \(\mathcal{J}\), hence from now on we fix a particle configuration \({\bm y}\). To make the presentation clearer we drop the parameters \({\bm y}, K, \ell\)
and use the short-hand notations \(h_t({\bm x})=h_t({\bm x}; \ell, K, {\bm y})\), \(\Av=\Av(K,{\bm y})\), \(\Av({\bm x})=\Av({\bm x}; K,{\bm y})\), etc. Moreover, for any \({\bm i}, {\bm j}\in [N]^n\) by \(\sum_{{\bm i}}^*\) or \(\sum_{{\bm i}{\bm j}}^*\) we denote the summations over indices that are all distinct, i.e.\ the indices \(i_1,\dots, i_n\) in the first sum, and \(i_1,\dots, i_n, j_1,\dots, j_n\) in the second sum, are all different. The same convention holds for summations over \({\bm a}, {\bm b}\in [2n]^n\).
Let
\begin{equation}
a_{ij}=a_{ij}(t):=\frac{\eta}{N((\lambda_i(t)-\lambda_j(t))^2+\eta^2)},
\end{equation}
and define their short range version \(a_{ij}^\mathcal{S}\) as in~\eqref{eq:ccutoff}. Define the operator \(\mathcal{A}=\mathcal{A}(t)\) by
\begin{equation}\label{eq:defAgen}
\mathcal{A}(t):=\sum_{{\bm i}, {\bm j}\in [N]^n}^*\mathcal{A}_{ {\bm i}{\bm j}}(t), \quad \mathcal{A}_{{\bm i}{\bm j}}(t)h({\bm x}):=\frac{1}{\eta}\left(\prod_{r=1}^n a_{i_r,j_r}^\mathcal{S}(t)\right)\sum_{{\bm a}, {\bm b}\in [2n]^n}^*(h({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})-h({\bm x})),
\end{equation}
where
\begin{equation}
\label{eq:lotsofjumpsop}
{\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}}:={\bm x}+\left(\prod_{r=1}^n \delta_{x_{a_r}i_r}\delta_{x_{b_r}i_r}\right)\sum_{r=1}^n (j_r-i_r) ({\bm e}_{a_r}+{\bm e}_{b_r}).
\end{equation}
We now explain the difference between the jump operator~\eqref{eq:lotsofjumpsop} and the one defined in~\eqref{eq:jumpop}. The operator~\eqref{eq:jumpop} changes two entries of \({\bm x}\) at a time, whereas \({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}}\) changes all the coordinates of \({\bm x}\) at the same time: for \({\bm i}=(i_1,\dots, i_n)\), \({\bm j}=(j_1,\dots, j_n)\in [N]^n\) with \(\{i_1,\dots,i_n\}\cap\{j_1,\dots, j_n\}=\emptyset\), we have \({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}}\ne {\bm x}\) iff \(x_{a_r}=x_{b_r}=i_r\) for all \(r\in [n]\), see e.g.
\[ \begin{tikzpicture}
[every node/.style={fill, circle, inner sep = 1.5pt},baseline=-15pt,>=stealth']
\draw (-1.25,0) -- (2.25,0);
\draw (-1.25,-1) -- (2.25,-1);
\node[label=below:$i_1$] (i1) at (0,0) {};
\node[label=below:$i_2$] (i2) at (1,0) {};
\node[label=below:$j_1$] (j1) at (-1,-1) {};
\node[label=below:$j_2$] (j2) at (2,-1) {};
\draw [shorten >=1pt,shorten <=1pt,->] (i1) -- (j1);
\draw [shorten >=1pt,shorten <=1pt,->] (i2) -- (j2);
\end{tikzpicture}\quad \Leftrightarrow \quad
\begin{tikzpicture}
[every node/.style={fill, circle, inner sep = 1.5pt},baseline=-15pt,>=stealth']
\draw (-1.25,0) -- (2.25,0);
\draw (-1.25,-1) -- (2.25,-1);
\node[label=below:$i_1$] (i1) at (0,0) {};
\node[label=below:$i_2$] (i2) at (1,0) {};
\node[label=below:$j_1$] (j1) at (-1,-1) {};
\node[label=below:$j_2$] (j2) at (2,-1) {};
\node (i1p) at ($(i1)+(0,.33)$) {};
\node (i2p) at ($(i2)+(0,.33)$) {};
\node (j1p) at ($(j1)+(0,.33)$) {};
\node (j2p) at ($(j2)+(0,.33)$) {};
\draw[thick,gray,rounded corners] ($(i1p.north west)+(-0.1,0.1)$) rectangle ($(i1.south east)+(0.1,-0.1)$);
\draw[thick,gray,rounded corners] ($(j1p.north west)+(-0.1,0.1)$) rectangle ($(j1.south east)+(0.1,-0.1)$);
\draw[thick,gray,rounded corners] ($(i2p.north west)+(-0.1,0.1)$) rectangle ($(i2.south east)+(0.1,-0.1)$);
\draw[thick,gray,rounded corners] ($(j2p.north west)+(-0.1,0.1)$) rectangle ($(j2.south east)+(0.1,-0.1)$);
\draw [shorten >=8pt,shorten <=8pt,->] ($(i1)!0.5!(i1p)$) -- ($(j1)!0.5!(j1p)$);
\draw [shorten >=8pt,shorten <=8pt,->] ($(i2)!0.5!(i2p)$) -- ($(j2)!0.5!(j2p)$);
\end{tikzpicture}\quad \Leftrightarrow\quad
\bm x=\begin{pmatrix}i_1\\ i_1\\ i_2\\ i_2\end{pmatrix}\mapsto \bm x_{\bm a\bm b}^{\bm i\bm j}=\begin{pmatrix}j_1\\ j_1\\ j_2\\ j_2\end{pmatrix}.
\]
Note that \(\mu({\bm x})\equiv 1\) on \(\Lambda^n\) is a reversible measure for the generator \(\mathcal{A}(t)\) (as a consequence of \(({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})_{{\bm a}{\bm b}}^{{\bm j}{\bm i}}={\bm x}\) for any fixed \({\bm a}, {\bm b}\) and for any \({\bm x}\) such that \({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}}\ne {\bm x}\)), and that \(\pi({\bm x})\sim C(n) \mu({\bm x})\) for all \({\bm x}\in\Lambda^n\) (see~\eqref{eq:boundsrevmeasure}). We define the scalar product with respect to the measure \(\mu({\bm x})\) analogously to~\eqref{eq:scalpro}, and we denote it by \(\braket{\cdot,\cdot}_{\Lambda^n,\mu}\).
We now analyse the time evolution of \( \norm{h_t}_2^2\):
\begin{equation}\label{eq:l2der}
\partial_t \norm{h_t}_2^2=2\braket{h_t, \mathcal{S}(t) h_t}_{\Lambda^n}.
\end{equation}
The main ingredient to give an upper bound on~\eqref{eq:l2der} is the following lemma, whose proof is postponed to the end of this section.
\begin{lemma}\label{lem:replacement}
Let \(\mathcal{S}(t)\), \(\mathcal{A}(t)\) be the generators defined in~\eqref{g-2} and~\eqref{eq:defAgen}, respectively. Then there exists a constant \(C(n)>0\), which depends only on \(n\), such that
\begin{equation}\label{eq:fundbound}
\braket{h, \mathcal{S}(t) h}_{\Lambda^n, \pi}\le C(n) \braket{h, \mathcal{A}(t) h}_{\Lambda^n,\mu}\le 0,
\end{equation}
for any \(h\in L^2(\Lambda^n)\), on the very high probability set \( \Omega_\xi\cap\widehat{\Omega}_{\xi,\omega}\).
\end{lemma}
From now on by \(C(n)\) we denote a constant that depends only on \(n\) and that may change from line to line.
Next, combining~\eqref{eq:l2der}--\eqref{eq:fundbound}, and using that \({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}}={\bm x}\) unless \(x_{a_r}=x_{b_r}=i_r\) for all \(r\in [n]\), we conclude that
\begin{equation}\label{eq:boundderl2}
\begin{split}
\partial_t \norm{h_t}_2^2&\le C(n) \braket{h_t,\mathcal{A}(t) h_t}_{\Lambda^n,\mu} \\
&=\frac{C(n) }{2\eta}\sum_{{\bm x}\in \Lambda^n}\sum_{{\bm i},{\bm j}\in [N]^n}^* \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\right)\sum_{{\bm a}, {\bm b}\in [2n]^n}^*\overline{h_t}({\bm x})\big(h_t({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})-h_t({\bm x})\big)\Psi({\bm x}),
\end{split}
\end{equation}
where for any fixed \({\bm i},{\bm a},{\bm b}\) we defined
\[
\Psi(\bm x)=\Psi_{{\bm i},{\bm a},{\bm b}}(\bm x):=\left(\prod_{r=1}^n \delta_{x_{a_r}i_r}\delta_{x_{b_r}i_r}\right).
\]
Define
\begin{equation}
\Gamma:=\{ {\bm x}\in\Lambda^n : \, d({\bm x},{\bm y})\le 3 K\} \subset\Lambda^n,
\end{equation}
and note that by the finite speed of propagation estimate in Proposition~\ref{pro:finitespeed}
and the support of \(h_0({\bm x})\), the function \(h_t({\bm x})\) is supported on \(\Gamma\)
up to an exponentially small error term (see~\cite[Eqs. (5.76)-(5.77)]{2005.08425} for
a more detailed calculation). For simplicity,
for the rest of the proof we treat \(h_t({\bm x})\) as if it were supported on \(\Gamma\), neglecting the
exponentially small error term of size \(\pi(\Lambda^n\setminus \Gamma)e^{-N^\epsilon}\le N^{2n}e^{-N^\epsilon}\).
Since the dynamics is a linear contraction in \(L^\infty\), this small error term remains small throughout the
whole evolution.
Now we consider the term with \(|h_t({\bm x})|^2\) in~\eqref{eq:boundderl2} (here we use the notation \(\Psi({\bm x})=\Psi_{{\bm i},{\bm a},{\bm b}}({\bm x})\)):
\begin{equation}\label{eq:neweq}
\begin{split}
&-\sum_{{\bm x}\in\Gamma}|h_t({\bm x})|^2\sum_{{\bm a},{\bm b}\in [2n]^n}^*\sum_{{\bm i}, {\bm j}}^*\Psi({\bm x}) \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\right) \\
&\qquad=-\sum_{{\bm x}\in\Gamma}|h_t({\bm x})|^2\sum_{{\bm a},{\bm b}\in [2n]^n}^*\sum_{{\bm i}}^*\Psi({\bm x})\prod_{r=1}^n\left(\sum_{j_r} a_{i_r j_r}^\mathcal{S}(t)+\mathcal{O}\left(\frac{1}{N\eta}\right)\right) \\
&\qquad=-\sum_{{\bm x}\in\Gamma}|h_t({\bm x})|^2\sum_{{\bm a},{\bm b}\in [2n]^n}^*\sum_{{\bm i}}^*\Psi({\bm x})\prod_{r=1}^n\left( \sum_{j_r} a_{i_r j_r}(t)+\mathcal{O}\left(\frac{1}{N\eta}+\frac{N\eta}{\ell}\right)\right)\\
&\qquad\le - C(n)\sum_{{\bm x}\in\Gamma}|h_t({\bm x})|^2\sum_{{\bm a},{\bm b}\in [2n]^n}^*\sum_{{\bm i}}^* \Psi({\bm x})
\end{split}
\end{equation}
on the very high probability event \(\Omega_\xi\), where the error term in the second line comes from adding back the
finitely many excluded summands \( j_r\in \{i_1,\dots, i_n\}\) and \(j_r\in \{j_1, \ldots, j_{r-1},j_{r+1}, \ldots j_n\}\).
The new error in the third line comes from removing the short range restriction from \(a_{i_rj_r}^{\mathcal S}\), i.e.\ adding back the regimes \(|j_r-i_r|> \ell\) using
\begin{equation}\label{eq:longreg}
\sum_{j_r: |j_r-i_r|> \ell } a_{i_r j_r}(t) \le \frac{N\eta}{\ell}.
\end{equation}
Finally, to go from the third to the fourth line in~\eqref{eq:neweq} we used the local law
\begin{equation}\label{eq:singlegllaw}
\sum_{j_r}a_{i_r j_r}(t)=\braket{\Im G_t(\lambda_{i_r}+\mathrm{i}\eta)}= \Im m_t(\lambda_{i_r}+\mathrm{i}\eta)+\mathcal{O}\big(N^\xi (N\eta)^{-1}\big),
\end{equation}
with very high probability on the event \(\Omega_\xi\), and that \(\Im m_t(\lambda_{i_r}+\mathrm{i}\eta)\sim 1\) in the bulk
whenever \(\eta\ge N^{-1+\xi}\).
We now bound the last line in~\eqref{eq:neweq} in terms of \(-\norm{h_t}_2^2\) plus a small error term by removing the restriction from the \({\bm i}\)-summation:
\begin{equation}\label{eq:newbdiag}
\begin{split}
-\sum_{{\bm x}\in\Gamma}|h_t({\bm x})|^2\sum_{{\bm a},{\bm b}\in [2n]^n}^*\sum_{{\bm i}}^* \Psi({\bm x})&=-\sum_{{\bm x}\in\Gamma}|h_t({\bm x})|^2\sum_{{\bm a},{\bm b}\in [2n]^n}^*\sum_{{\bm i}} \Psi({\bm x}) \\
&\quad +\sum_{{\bm x}\in\Gamma}|h_t({\bm x})|^2\sum_{{\bm a},{\bm b}\in [2n]^n}^*\left(\sum_{{\bm i}}-\sum_{{\bm i}}^*\right)\Psi({\bm x}) \\
&\le - C(n) \norm{h_t}_2^2+C(n)N^\xi K^{n-1}.
\end{split}
\end{equation}
To estimate the first term in the right hand side we used that
\(\sum_{{\bm a}{\bm b}}^*\sum_{{\bm i}} \Psi_{{\bm i},{\bm a},{\bm b}}({\bm x})\ge 1\), \(1\ge C(n) \pi({\bm x})\) for all \({\bm x}\in \Gamma\), and that we can add back the regime \(\Lambda^n\setminus \Gamma\) at the price of a negligible \(N^n e^{-N^\epsilon}\) error term, by finite speed of propagation.
For the second term in the right hand side of~\eqref{eq:newbdiag}
we estimated \(\norm{h_t}_\infty\le N^\xi\)
as a consequence of \(\norm{h_0}_\infty\le N^\xi\) and the fact that the evolution is an \(L^\infty\)-contraction. Finally we used the fact that
\[
\sum_{{\bm a},{\bm b}\in [2n]^n}^*\left(\sum_{{\bm i}}-\sum_{{\bm i}}^*\right)\Psi_{{\bm i},{\bm a},{\bm b}}({\bm x})\ne 0,
\]
only if there exist \(a,b,c,d\in [2n]\), all distinct, such that \(x_a=x_b=x_c=x_d\). The volume of this codimension-one subset of \(\Gamma\) is \(C(n) K^{n-1}\), i.e.\ smaller by a factor \(K^{-1}\) than the volume of \(\Gamma\), which is of order \(K^n\).
Finally, combining~\eqref{eq:neweq} and~\eqref{eq:newbdiag}, we conclude the estimate for the term containing \(|h_t({\bm x})|^2\) in~\eqref{eq:boundderl2}:
\begin{equation}\label{eq:addb}
-\sum_{{\bm x}\in\Gamma}|h_t({\bm x})|^2\sum_{{\bm a},{\bm b}\in [2n]^n}^*\sum_{{\bm i}, {\bm j}}^* \Psi({\bm x})\left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\right)\le -C_1(n)\norm{h_t}_2^2+C(n) N^\xi K^{n-1}.
\end{equation}
Then, using~\eqref{eq:boundderl2} together with~\eqref{eq:addb}, we conclude that
\begin{equation}\label{eq:halfwatro}
\begin{split}
&\partial_t\norm{h_t}_2^2\le -\frac{C_1(n)}{\eta}\norm{h_t}_2^2+\frac{C(n)N^\xi K^{n-1}}{\eta} \\
&\quad +\frac{C_2(n)}{\eta}\sum_{{\bm x}\in \Gamma}|h_t({\bm x})|\sum_{{\bm a}, {\bm b}\in [2n]^n}^*\sum_{{\bm i}}^*\Psi({\bm x})\Bigg|\sum_{{\bm j}}^* \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\right) h_t({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})\Bigg|
\end{split}
\end{equation}
for some constants \(C_1(n), C_2(n)>0\), on the event \(\Omega_\xi\) with \(\xi>0\) arbitrarily small. In order to conclude the bound of \(\partial_t\norm{h_t}_2^2\) we are now left with the estimate of the last line in~\eqref{eq:halfwatro}.
In the remainder of the proof we will show that
\begin{equation}\label{eq:remaingoal}
\begin{split}
&\frac{C_2(n)}{\eta}\sum_{{\bm x}\in \Gamma}|h_t({\bm x})|\sum_{{\bm a} , {\bm b}\in [2n]^n}^*\sum_{{\bm i}}^*\Psi({\bm x})\Bigg|\sum_{{\bm j}}^* \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\right) h_t({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})\Bigg| \\
&\qquad\quad\le \frac{C_1(n)}{2\eta}\norm{h_t}_2^2+ \frac{C_3(n)}{\eta} \mathcal{E}^2K^n,
\end{split}
\end{equation}
with \(\mathcal{E}\) defined in~\eqref{eq:basimpr} and \(C_1(n)\) being the constant from the first line of~\eqref{eq:halfwatro}. Note that using~\eqref{eq:remaingoal} we readily conclude the proof of~\eqref{eq:l2b} by
\begin{equation}
\partial_t\norm{h_t}_2^2\le -\frac{C_1(n)}{2\eta}\norm{h_t}_2^2+\frac{C_3(n)}{\eta}\mathcal{E}^2K^n,
\end{equation}
which implies \(\norm{h_{T_1}}_2^2\le C(n) \mathcal{E}^2 K^n\), by a simple Gronwall inequality, using that \(T_1\gg \eta\).
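Spelled out, the Gronwall step reads as follows: multiplying the differential inequality by the integrating factor \(e^{C_1(n)t/(2\eta)}\) and integrating from \(0\) to \(T_1\) gives
\[
\norm{h_{T_1}}_2^2\le e^{-C_1(n) T_1/(2\eta)}\norm{h_0}_2^2+\frac{2C_3(n)}{C_1(n)}\mathcal{E}^2 K^n,
\]
and since \(\norm{h_0}_2^2\lesssim N^{2\xi}K^n\) (by \(\norm{h_0}_\infty\le N^\xi\) and the volume bound on the support of \(h_0\)) while \(T_1\gg \eta\), the first term is negligible compared to \(\mathcal{E}^2K^n\).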
We now conclude the proof of~\eqref{eq:l2b} by proving the bound in~\eqref{eq:remaingoal}. We start with the analysis of
\begin{equation}\label{eq:interest}
\sum_{{\bm j}\in [N]^n}^* \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S} (t)\right) h_t({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})
\end{equation}
for any fixed \({\bm x}\in\Gamma\), \({\bm i}\in [N]^n\), \({\bm a}, {\bm b}\in [2n]^n\) with all distinct coordinates such that \(\Psi({\bm x})\ne 0\).
It will be very important that the configuration \(\phi( {\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}} )\) contains
exactly one particle at every index \(j_r\), i.e.\ we have
\begin{equation}\label{one}
\prod_{l=1}^N (n_l({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})-1)!!=1.
\end{equation}
Similarly to~\cite[Eqs. (5.89)--(5.91), Eqs. (5.95)--(5.97)]{2005.08425}, using that the function \(f({\bm x})\equiv \bm1(n \,\, \mathrm{even})\) is in the kernel of \(\mathcal{S}(t)\), for any fixed \({\bm x}\in\Gamma\), and for any fixed \({\bm i}\), \({\bm a}\), \({\bm b}\) we conclude that
\begin{equation}\label{eq:fundrelicon}
\begin{split}
&h_t({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})\\
&=\mathcal{U}_\mathcal{S}(0,t)\big((\Av g_0)({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})-(\Av\bm1(n \,\, \mathrm{even}))({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})\big) \\
&=\Av({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})\big(\mathcal{U}_\mathcal{S}(0,t)g_0({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})-\bm1(n\,\, \mathrm{even})\big)+\mathcal{O}\left(\frac{N^{\epsilon+n\xi} \ell}{K}\right) \\
&=\left(\Av({\bm x})+\mathcal{O}\left(\frac{\ell}{K}\right)\right)\left(\mathcal{U}(0,t)g_0({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})-\bm1(n\,\, \mathrm{even})+\mathcal{O}\left(\frac{N^{1+n\xi}t}{\ell}\right)\right)+\mathcal{O}\left(\frac{N^{\epsilon+n\xi} \ell}{K}\right) \\
&=\Av({\bm x})\big(g_t({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})-\bm1(n\,\, \mathrm{even})\big)+\mathcal{O}\left(\frac{N^{\epsilon+n\xi} \ell}{K}+\frac{N^{1+n\xi}t}{\ell}\right),
\end{split}
\end{equation}
where the error terms are uniform in \({\bm x}\in\Gamma\). Note that to go from the first to the second line in
~\eqref{eq:fundrelicon} we used Lemma~\ref{lem:exchav}, to go from the second to the third line we used Lemma~\ref{lem:shortlongapprox} together with the a priori bound \(\norm{g_t}_\infty \le N^{n\xi}\) for any \(0\le t\le T\)
on the very high probability event \(\widehat{\Omega}_{\omega,\xi}\), and that
\[
|\Av({\bm x})-\Av({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})|\le \frac{1}{K}\norm{{\bm x}-{\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}}}_1\le \frac{2n\ell}{K},
\]
where \(\| {\bf x}\|_1= \sum_{c=1}^{2n} |x_c|\). To go from the third to the fourth line in~\eqref{eq:fundrelicon}
we used that \(|\Av({\bm x})|\le 1\) and again that \(\norm{g_t}_\infty \le N^{n\xi}\).
Then, from~\eqref{eq:fundrelicon}, we conclude that
\begin{equation}\label{eq:medgoal}
\begin{split}
\sum_{{\bm j}}^* \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\right) h_t({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})&=\Av({\bm x})\sum_{{\bm j}}^* \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\right) \big(g_t({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})-\bm1(n\,\, \mathrm{even})\big) \\
&\quad +\mathcal{O}\left(\frac{N^{\epsilon+n\xi}\ell}{K}+\frac{N^{1+n\xi}T_1}{\ell}\right).
\end{split}
\end{equation}
From now on we will omit the \(\Av({\bm x})\) prefactor
in~\eqref{eq:medgoal}, since \(|\Av({\bm x})|\le 1\).
Using the definition of \(g_t\) from~\eqref{eq:defg}
and~\eqref{eq:deff}, for any \({\bm x}\in\Gamma\) such that \(\Psi({\bm x})\ne 0\), and for any fixed \({\bm i}\), \({\bm a}, {\bm b}\), dropping the \(t\)-dependence of the eigenvalues \(\lambda_i=\lambda_i(t)\), we have
\begin{equation}\label{eq:leadorcancel1}
\begin{split}
&\sum_{{\bm j}}^*\left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\right)\big(g_t({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})-\bm1(n\,\, \mathrm{even})\big) \\
&=\sum_{{\bm j}}^*\left(\prod_{r=1}^n a_{i_r j_r}(t)\right)\left(\frac{N^{n/2}}{\braket{A^2}^{n/2} 2^{n/2}(n-1)!!}\sum_{G\in \mathcal{G}_{{\bm \eta}^{{\bm j}}}}P(G)-\bm1(n\,\, \mathrm{even})\right)+\mathcal{O}\left(\frac{N^{1+n\xi}\eta}{\ell}\right) \\
&=\sum_{{\bm j}}\left(\prod_{r=1}^n a_{i_r j_r}(t)\right)\left(\frac{N^{n/2}}{\braket{A^2}^{n/2} 2^{n/2}(n-1)!!}\sum_{G\in \mathcal{G}_{{\bm \eta}^{{\bm j}}}} P(G)-\bm1(n\,\, \mathrm{even})\right)+\mathcal{O}\left(\frac{N^{n\xi}}{N\eta}+\frac{N^{1+n\xi}\eta}{\ell}\right).
\end{split}
\end{equation}
Note that in~\eqref{eq:leadorcancel1} we used the notation \({\bm \eta}^{{\bm j}}:=\phi({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})\) to denote the particle configuration which has exactly one particle at each site \(\{j_1,\dots, j_n\}\). Note that in the last line of~\eqref{eq:leadorcancel1} we do not exclude the possibility that two indices \(j\) may assume the same value, since the sum is unrestricted. In the second and third lines of~\eqref{eq:leadorcancel1} we simply omitted the conditional expectation \(\E[\cdots|{\bm \lambda}]\) to shorten the formulas. Since all subsequent
estimates hold with high probability, the conditional expectation does not
play a role. When going from the first to the second line of~\eqref{eq:leadorcancel1} we removed
the short range restriction, as in~\eqref{eq:longreg}, by adding back the summations over the regimes \(|j_r-i_r|> \ell\), and we also used~\eqref{one}
since the coordinates of \({\bm j}\) are all distinct, so that
\({\mathcal M}({\bm \eta}^{{\bm j}})=1\) in the definition of \(g_t\) in~\eqref{eq:defg} and~\eqref{eq:deff}. Additionally, the error term in the third line of~\eqref{eq:leadorcancel1} comes from adding back the missing \(j_r\)-summations; in this bound we used the a priori bound \(|P(G)|\le N^{n\xi-n/2}\) on the very high probability event \(\widehat{\Omega}_{\omega,\xi}\) and~\eqref{eq:singlegllaw}.
We now use the definition of \(P(G)\) in~\eqref{eq:defpg} on the right hand side of~\eqref{eq:leadorcancel1}. Since every particle is doubled we may rewrite the sum over perfect matchings as
\begin{equation}\label{eq PG sum}
\sum_{G\in\mathcal{G}_{\bm\eta^{\bm j}}} P(G) = \sum_{G\in \Gr_2[n]} \prod_{(v_1\cdots v_k)\in\Cyc(G)} (2k-2)!! p_{j_{v_1}j_{v_2}}\cdots p_{j_{v_k}j_{v_1}},
\end{equation}
where \(\Gr_2[n]\) denotes the set of 2-regular multi-graphs (possibly with loop-edges) on \([n]\) and \(\Cyc(G)\) denotes the collection of cycles in any such graph \(G\in \Gr_2[n]\). The combinatorial factor \((2k-2)!!\) is due to the fact that for each cycle of length \(k\) in \(G\) there are \((2k-2)!!\) equivalent perfect matchings giving the very same cyclic monomial. For example, for \(n=2\) there are two 2-regular multi-graphs, \((11),(22)\) and \((12),(12)\), and thus \(\sum_{G\in\mathcal{G}_{\bm\eta^{\bm j}}}P(G)=2p_{j_1j_2}^2 + p_{j_1j_1}p_{j_2j_2}\). Similarly, for \(n=3\) there are the graphs \[\set{(11),(22),(33)},\set{(12),(12),(33)},\set{(13),(13),(22)},\set{(23),(23),(11)},\set{(12),(23),(13)}\] yielding
\[\sum_{G\in\mathcal{G}_{\bm\eta^{\bm j}}}P(G)=p_{j_1j_1}p_{j_2j_2}p_{j_3j_3} + 2p_{j_1j_1}p_{j_2j_3}^2 + 2p_{j_2j_2}p_{j_1j_3}^2 + 2p_{j_3j_3}p_{j_1j_2}^2 + 8p_{j_1j_2}p_{j_2j_3}p_{j_1j_3}.\]
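These counts can be verified by brute force; the following sketch (our own illustration, with hypothetical helper names) enumerates all perfect matchings of the \(2n\) doubled particles and tallies the resulting monomials in the \(p_{j_a j_b}\):

```python
from collections import Counter

def perfect_matchings(elems):
    """Yield all perfect matchings of an even-length list."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for k, partner in enumerate(rest):
        for sub in perfect_matchings(rest[:k] + rest[k + 1:]):
            yield [(first, partner)] + sub

def matching_monomials(n):
    """Tally the monomials produced by matching 2n doubled particles.

    Each site j in {1,...,n} carries two copies; a matched pair contributes
    a factor p_{j_a j_b}.  Returns Counter{sorted monomial: multiplicity}.
    """
    halves = [(j, copy) for j in range(1, n + 1) for copy in (0, 1)]
    tally = Counter()
    for matching in perfect_matchings(halves):
        monomial = tuple(sorted(tuple(sorted((a[0], b[0]))) for a, b in matching))
        tally[monomial] += 1
    return tally

# n = 2 recovers p_{11} p_{22} + 2 p_{12}^2;
# n = 3 gives the triangle p_{12} p_{23} p_{13} the factor (2*3-2)!! = 8.
```

Running `matching_monomials(3)` reproduces the five monomials listed above with coefficients 1, 2, 2, 2 and 8, consistent with the factor \((2k-2)!!\) per cycle of length \(k\).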
For each graph \(G\in\Gr_2[n]\) we may use the spectral theorem to perform the \(\bm j\) summation as
\begin{equation}\label{j performed sum}
\sum_{j_{v_1},\ldots,j_{v_k}} \Bigl(\prod_{r\in[k]}a_{i_{v_r}j_{v_r}}(t)\Bigr) p_{j_{v_1}j_{v_2}}\cdots p_{j_{v_k}j_{v_1}} = N^{1-k}F_k(v_1,\ldots,v_k)
\end{equation}
with
\[F_k(v_1,\ldots,v_k):=\braket{\Im G_t(\lambda_{i_{v_1}}+\mathrm{i}\eta)A\cdots \Im G_t(\lambda_{i_{v_k}}+\mathrm{i}\eta)A}.\]
Since each vertex appears in exactly one cycle, we can use~\eqref{j performed sum} to perform the summation for the indices corresponding to any cycle separately and obtain
\begin{equation}\label{eq P(G) F}
\begin{split}
\sum_{{\bm j}}\left(\prod_{r=1}^n a_{i_r j_r}(t)\right)\sum_{G\in \mathcal{G}_{{\bm \eta}^{{\bm j}}}} P(G) = \sum_{E\in \Gr_2[n]} \prod_{(v_1\cdots v_k)\in\Cyc(E)} (2k-2)!!N^{1-k} F_k(v_1,\ldots, v_k).
\end{split}
\end{equation}
We note that from~\eqref{eq:hatomega} for each \(k\ge 1\) we have the estimate
\begin{equation}\label{F est}
F_k(v_1,\ldots, v_k) = \bm 1(k=2) \braket{A^2}\Im m(z_{i_{v_1}})\Im m(z_{i_{v_k}}) + \landauO*{N^\xi \frac{N^{k/2-1}}{\sqrt{N\eta}}}
\end{equation}
on the high-probability set \(\widehat\Omega\). By using~\eqref{F est} within~\eqref{eq P(G) F} and using the fact that there are \(\bm1(n\text{ even})(n-1)!!\) graphs in \(\Gr_2[n]\) all of whose cycles have length two, it follows that
\begin{equation}
\mathrm{l.h.s.}~\eqref{eq P(G) F} = \bm1(n\text{ even}) (n-1)!! 2^{n/2} N^{-n/2} \braket{A^2}^{n/2} \prod_{r\in[n]}\Im m(z_{i_r}) + \landauO*{N^\xi\frac{N^{-n/2}}{\sqrt{N\eta}}}
\end{equation}
and from~\eqref{eq:leadorcancel1} we conclude
\begin{equation}\label{eq:finbound}
\Psi({\bm x})\sum_{{\bm j}}^*\left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\right)\big(g_t({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})-\bm1(n\,\, \mathrm{even})\big) = \Psi({\bm x})\mathcal{O}\left(\frac{N^\xi}{\sqrt{N\eta}}+\frac{N^\xi}{N\eta}+\frac{N^{1+\xi}\eta}{\ell}\right).
\end{equation}
We remark that in estimating the error term we used that $\braket{A^2}\ge \delta'$.
Combining~\eqref{eq:medgoal} and~\eqref{eq:finbound}, we get that
\begin{equation}\label{eq:almthere}
\Psi({\bm x})\left|\sum_{{\bm j}}^* \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\right) h_t({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})\right| \le C(n) \Psi({\bm x})\mathcal{E}
\end{equation}
and finally, by~\eqref{eq:almthere}, we conclude that
\begin{equation}\label{eq:gronw}
\mathrm{l.h.s.}~\eqref{eq:remaingoal}\le \frac{C_1(n)}{2\eta}\norm{h_t}_2^2+ \frac{C_3(n)}{\eta} \mathcal{E}^2K^n,
\end{equation}
where we used that for any fixed \({\bm x}\in\Lambda^n\) we have
\[
\sum_{{\bm a}, {\bm b}\in [2n]^n}^*\sum_{{\bm i}}^*\Psi_{{\bm i},{\bm a},{\bm b}}({\bm x})\le C(n),
\]
and that
\[
\left|\sum_{{\bm x}\in \Gamma} h_t({\bm x})\mathcal{E}\right|\le \frac{C_1(n)}{2} \sum_{{\bm x}\in \Gamma} \pi({\bm x}) |h_t({\bm x})|^2+C_3(n)\mathcal{E}^2K^n,
\]
by the Schwarz inequality, the bound \(1\le \pi({\bm x})\) from~\eqref{eq:boundsrevmeasure}, and \(\sum_{{\bm x}\in \Gamma} \pi({\bm x})\le C(n) K^n\).
Note that, by balancing the two terms in the Schwarz inequality, we can arrange that the \(\norm{h_t}_2^2\) term on the right hand side of~\eqref{eq:gronw} carries the same constant \(C_1(n)\), with an additional factor \(1/2\), as the leading negative term in~\eqref{eq:halfwatro}. This concludes the proof of the bound in~\eqref{eq:remaingoal}.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:replacement}]
All along the proof \(C(n)>0\) is a constant that depends only on \(n\) and that may change from line to line.
We consider
\begin{equation}\label{eq:squaform}
\begin{split}
\braket{h, \mathcal{S} (t)h}_{\Lambda^n,\pi}&=-\frac{1}{2}\sum_{{\bm x}\in \Lambda^n}\pi({\bm x})\sum_{j\ne i} c_{ij}^{\mathcal{S}}(t)\frac{n_j({\bm x})+1}{n_i({\bm x})-1}\sum_{a\ne b\in [2n]}\big|h({\bm x}_{ab}^{i j})-h({\bm x})\big|^2 \\
&\le -\frac{C(n)}{\eta}\sum_{{\bm x}\in \Lambda^n}\sum_{j\ne i}a_{ij}^{\mathcal{S}} (t)\sum_{a\ne b\in [2n]}\big|h({\bm x}_{ab}^{ij})-h({\bm x})\big|^2
\end{split}
\end{equation}
and
\begin{equation}\label{eq:aquaform}
\braket{h, \mathcal{A} (t)h}_{\Lambda^n,\mu}=-\frac{1}{2\eta}\sum_{{\bm x}\in \Lambda^n}\sum_{{\bm i}, {\bm j}}^* \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\right)\sum_{{\bm a},{\bm b}\in [2n]^n}^*\big|h({\bm x}_{{\bm a} {\bm b}}^{{\bm i} {\bm j}})-h({\bm x})\big|^2.
\end{equation}
Note that in~\eqref{eq:squaform} we used that \(a_{ij}^{\mathcal{S}}(t)\le \eta c_{ij}^{\mathcal{S}}(t)\) to compare the kernels, that \(\pi ({\bm x})\ge 1\) uniformly in \({\bm x}\in\Lambda^n\) and finally
that \(n_j({\bm x})+1\ge 1\), \(1\le n_i({\bm x})-1\le n\) for \({\bm x}\) and \(i\) such that \(h({\bm x}_{ab}^{ij})\ne h({\bm x})\).
We start with the bound
\begin{equation}\label{eq:tel}
\begin{split}
&\sum_{{\bm x}\in \Lambda^n}\sum_{{\bm i}, {\bm j}\in [N]^n}^* \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\bm1(n_{i_r}({\bm x})>0)\right)\sum_{{\bm a},{\bm b}\in [2n]^n}^*\big|h({\bm x}_{{\bm a}{\bm b}}^{{\bm i} {\bm j}})-h({\bm x})\big|^2 \\
&\quad\le C(n)\sum_{{\bm x}\in \Lambda^n}\sum_{{\bm i} ,{\bm j}}^* \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\bm1(n_{i_r}({\bm x})>0)
\right)\sum_{l=1}^n\sum_{{\bm a},{\bm b}\in [2n]^n}^*\big|h(({\bm y}_{l-1})_{a_l b_l}^{i_l j_l})-h({\bm y}_{l-1})\big|^2,
\end{split}
\end{equation}
where we recursively defined
\({\bm y}_0={\bm x}, {\bm y}_1, {\bm y}_2 \ldots, {\bm y}_n={\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}}\)
by performing the jumps \(i_1\to j_1\), \(i_2\to j_2\), etc., one by one (assuming that the choice of \((a_l,b_l)\) allows it, otherwise \({\bm y}_l={\bm y}_{l-1}\)):
\begin{equation}
{\bm y}_0={\bm y}_0({\bm x}):={\bm x}, \qquad {\bm y}_{l}={\bm y}_{l}({\bm x}):=({\bm y}_{l-1})_{a_l b_l}^{i_l j_l}.
\end{equation}
In the first line of~\eqref{eq:tel} we could add the indicator \(\bm1(n_{i_r}(\bm x)>0)\) since in case \(n_{i_r}(\bm x)=0\) for some \(r\) it holds that \(\bm x_{\bm a \bm b}^{\bm i \bm j}=\bm x\).
Note that to go from the first to the second line of~\eqref{eq:tel} we wrote a telescopic sum
\[
h({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})-h({\bm x})=\sum_{l=1}^n \big[ h(({\bm y}_{l-1})_{a_l b_l}^{i_l j_l})-h({\bm y}_{l-1})\big],
\]
and used the Schwarz inequality.
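The Schwarz step here is the elementary bound \(|\sum_{l=1}^n d_l|^2\le n\sum_{l=1}^n |d_l|^2\) applied to the telescopic increments; spelled out (our own remark), it reads

```latex
\big|h({\bm x}_{{\bm a}{\bm b}}^{{\bm i}{\bm j}})-h({\bm x})\big|^2
=\Big|\sum_{l=1}^n \big[h(({\bm y}_{l-1})_{a_l b_l}^{i_l j_l})-h({\bm y}_{l-1})\big]\Big|^2
\le n\sum_{l=1}^n \big|h(({\bm y}_{l-1})_{a_l b_l}^{i_l j_l})-h({\bm y}_{l-1})\big|^2,
```

so one may take \(C(n)=n\) in the second line of~\eqref{eq:tel}.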
Next we consider
\begin{equation}\label{eq:finsec}
\begin{split}
&\sum_{l=1}^n\sum_{{\bm x}\in \Lambda^n}\sum_{{\bm i}, {\bm j}}^*\sum_{{\bm a},{\bm b}\in [2n]^n}^* \left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\bm1(n_{i_r}({\bm x})>0)
\right)\big|h(({\bm y}_{l-1})_{a_l b_l}^{i_l j_l})-h({\bm y}_{l-1})\big|^2 \\
&\qquad=\sum_{l=1}^n\sum_{{\bm w}\in \Lambda^n}\sum_{{\bm i}, {\bm j}}^* \sum_{{\bm a},{\bm b}\in [2n]^n}^*\left(\prod_{r=1}^n a_{i_r j_r}^\mathcal{S}(t)\bm1(n_{i_r}({\bm z}_{l-1})>0) \right)\big|h({\bm w}_{a_l b_l}^{i_l j_l})-h({\bm w})\big|^2 \\
&\qquad\le C(n) \sum_{{\bm w}\in \Lambda^n}\sum_{l=1}^n \sum_{i_l\ne j_l}a_{i_l j_l}^{\mathcal{S}}(t)\sum_{a_l\ne b_l\in [2n]}\big|h({\bm w}_{a_l b_l}^{i_l j_l})-h({\bm w})\big|^2 \\
&\qquad\qquad\quad \times \left(\prod_{r\ne l} \sum_{i_r, j_r} a_{i_r j_r}\big[\bm1(n_{i_r}({\bm w})>0)+\bm1(n_{j_r}({\bm w})>0)\big]\right) \\
&\qquad\le C(n) \sum_{{\bm w}\in \Lambda^n} \sum_{l=1}^n\sum_{i_l\ne j_l}a_{i_l j_l}^{\mathcal{S}}(t) \sum_{a_l\ne b_l\in [2n]}\big|h({\bm w}_{a_l b_l}^{i_l j_l})-h({\bm w})\big|^2 \\
&\qquad\le C(n) \sum_{{\bm w}\in \Lambda^n}\sum_{i\ne j}a_{i j}^{\mathcal{S}}(t) \sum_{a\ne b\in [2n]}\big|h({\bm w}_{a b}^{i j})-h({\bm w})\big|^2.
\end{split}
\end{equation}
Note that to go from the first to the second line we did the change of variables \({\bm w}= {\bm y}_{l-1}({\bm x})\), we used that \(({\bm x}_{a_lb_l}^{i_lj_l})_{a_lb_l}^{j_l i_l}={\bm x}\) for any \({\bm x}\in \Lambda^n\) such that \(\prod_r \bm1(n_{i_r}({\bm x})>0)=1\), and we defined \({\bm z}_{l-1}=(({\bm w}_{a_{l-1} b_{l-1}}^{j_{l-1} i_{l-1}})\dots)_{a_1 b_1}^{j_1 i_1}\). Moreover, to go from the second to the third line in~\eqref{eq:finsec} we used that
\begin{equation}
\begin{split}
\prod_{r\in[n]\setminus\set{l}}\bm1(n_{i_r}({\bm z}_{l-1})>0)&\le C(n) \left(\prod_{r=1}^{l-1}\Big[\bm1(n_{j_r}({\bm w})>0)+\bm1(n_{i_r}({\bm w})>0)\Big] \right)\\
&\qquad\qquad\times\left(\prod_{r=l+1}^n \bm1(n_{i_r}({\bm w})>0)\right)
\end{split}
\end{equation}
for \(i_1,\dots, i_n, j_1,\dots, j_n\) all distinct, which follows from \(n_{i_r}({\bm z}_{l-1})=n_{i_r}({\bm w})\) for \(r \ge l+1\) and
\[
\bm1(n_{i_r}({\bm z}_{l-1})>0)\le \bm1(n_{i_r}({\bm w})>0)+\bm1(n_{j_r}({\bm w})>0)
\]
for \(r\le l-1\). In the penultimate inequality in~\eqref{eq:finsec} we also used that
\begin{equation}
\prod_{r\ne l} \sum_{i_r, j_r} a_{i_r j_r}\big[\bm1(n_{i_r}({\bm w})>0)+\bm1(n_{j_r}({\bm w})>0)\big]\le C(n),
\end{equation}
on the very high probability event \(\widehat{\Omega}\).
Combining~\eqref{eq:squaform}--\eqref{eq:aquaform},~\eqref{eq:tel} and~\eqref{eq:finsec}, we finally conclude~\eqref{eq:fundbound}.
\end{proof}
\subsection{Proof of Proposition~\ref{pro:flucque}}
Fix \(1\ll NT_1\ll \ell_1\ll K\) and \(1\ll NT_2\ll \ell_2 \ll K\),
with \(T_1\le T_2/2\). Define the \emph{lattice generator} \(\mathcal{W}(t)\) by
\begin{equation}
\mathcal{W}(t):=\sum_{i\ne j\in [N]} \mathcal{W}_{ij}(t), \qquad \mathcal{W}_{ij}(t):=c_{ij}^{\mathcal{W}}(t)\frac{n_j({\bm x})+1}{n_i({\bm x})-1}\sum_{a\ne b\in [2n]}\big(h({\bm x}_{ab}^{ij})-h({\bm x})\big),
\end{equation}
with
\begin{equation}
c_{ij}^{\mathcal{W}}(t):=\begin{cases}
c_{ij}(t) &\mathrm{if}\, i,j\in\mathcal{J}\,\, \mathrm{and} \, 1\le |i-j|\le \ell_2 \\
\frac{N}{|i-j|^2} &\mathrm{otherwise}.
\end{cases}
\end{equation}
Denote by \(\mathcal{U}_\mathcal{W}(s,t)\) the semigroup associated to the generator \(\mathcal{W}(t)\). Note that \(\mathcal{W}(t)\) is the original generator of the Dyson eigenvector flow \(\mathcal{L}\)
from~\eqref{eq:g1dker} on short scales and in the interval \(\mathcal{J}\) well inside the bulk,
while on large scales it has an equidistant jump rate. In~\cite{2005.08425} this replacement
made up for the missing rigidity (regularity) control of the eigenvalues outside of a local interval \(\mathcal{J}\);
in our case its role is just to handle the somewhat different scaling of the eigenvalues near the edges. We follow the setup of~\cite{2005.08425} for convenience.
On the event \(\Omega_\xi\) the coefficients \(c_{ij}^{\mathcal{W}}(t)\) satisfy~\cite[Assumption 6.8]{2005.08425} with a rate \(v=N^{1-\xi}\), for any arbitrary small \(\xi>0\), hence all the results in~\cite[Section 6]{2005.08425} apply to the generator \(\mathcal{W}(t)\).
Most importantly, the Dirichlet form of \(\mathcal{W}(t)\) satisfies a Poincar\'e inequality and, consequently
we have an \(L^2\to L^\infty\) ultracontractive decay bound for the corresponding semigroup.
The scaling properties of these bounds confirm the intuition that \(\mathcal{W}(t)\) is a discrete analogue of the \(|p|= \sqrt{-\Delta}\)
operator in \(\mathbf{R}^{2n}\). In the continuous setting, the standard Sobolev inequality combined with the Nash method implies that
\begin{equation}
\label{nash}
\| e^{-t|p|} f \|_{L^\infty(\mathbf{R}^{2n})} \le \frac{C(n)}{t^{n/2}}\| f\|_{L^2(\mathbf{R}^{2n})}
\end{equation}
holds for any \(L^2\) function on \(\mathbf{R}^{2n}\). The same decay holds for the semigroup generated by \(\mathcal{W}(t)\)
by~\cite[Proposition 6.29]{2005.08425} (recall that~\cite{2005.08425} uses \(n\) to denote
the dimension of the space of \({\bm x}\)'s, we use \(2n\)). We remark
that the proofs in~\cite[Section 6]{2005.08425} are designed for the more involved \emph{coloured} dynamics; here we need
only its simpler \emph{colourblind} version, which immediately follows from the coloured version by ignoring the colours. In particular, in our case the \emph{exchange operator} \(\mathcal{E}_{ij}\) is identically zero. While a direct proof of the colourblind version is possible and would require less combinatorial complexity, for brevity we directly use the results of~\cite[Section 6]{2005.08425}.
For each \({\bm y}\) supported on \(\mathcal{J}\), let \(q_t({\bm x})=q_t({\bm x};{\bm y})\) be the solution of
\begin{equation}\label{eq:q}
\begin{cases}
q_0({\bm x})=\Av({\bm x};K,{\bm y})\big(g_0({\bm x})-\bm1(n\,\, \mathrm{even})\big) \\
\partial_tq_t({\bm x})=\mathcal{S}(t)q_t({\bm x}) &\mathrm{for}\,\, 0< t\le T_1 \\
\partial_tq_t({\bm x})=\mathcal{W}(t)q_t({\bm x}) &\mathrm{for}\,\, T_1< t\le T_2,
\end{cases}
\end{equation}
with \(\mathcal{S}(t)\) being the short-range generator on a scale \(\ell=\ell_1\) from~\eqref{g-2}. Note that \(q_t=h_t\) for any \(0\le t\le T_1\), with \(h_t\) being the solution of~\eqref{g-1}.
By Proposition~\ref{prop:mainimprov}, choosing \(\eta=N^{-\epsilon} T_1\), we have
\begin{equation}\label{eq:boundt1}
\sup_{{\bm y}: y_a\in \mathcal{J}}\norm{q_{T_1}(\cdot;{\bm y})}_2\lesssim N^\epsilon K^{n/2}\left(\frac{\ell_1}{K}+\frac{NT_1}{\ell_1}+\frac{1}{\sqrt{NT_1}}+\frac{1}{\sqrt{K}}\right),
\end{equation}
for any arbitrary small \(\epsilon>0\), where the supremum is over all the \({\bm y}\) supported on \(\mathcal{J}\). We recall that by the finite speed of propagation estimate in Proposition~\ref{pro:finitespeed}, together with~\cite[Eq. (7.12)]{2005.08425}, the function \(q_t\) is supported on a subset \(\Gamma\subset\Lambda^n\) whose configurations satisfy \(d({\bm x},{\bm y})\le 3K\) for any \({\bm x}\in\Gamma\) (modulo a negligible exponentially small error term). Then, using the ultracontractivity bound for the dynamics of \(\mathcal{W}(t)\) from~\cite[Proposition 6.29]{2005.08425}, with \(v=N^{1-\xi}\), we get that
\begin{equation}\label{eq:boundt2}
\begin{split}
\sup_{{\bm y}}\norm{q_{T_2}(\cdot;{\bm y})}_\infty&=\sup_{{\bm y}}\norm{\mathcal{U}_\mathcal{W}(T_1,T_2)q_{T_1}(\cdot;{\bm y})}_\infty\\
&\le \sup_{{\bm y}}\norm{(1-\Pi)\mathcal{U}_\mathcal{W}(T_1,T_2)q_{T_1}(\cdot;{\bm y})}_\infty+\sup_{{\bm y}}\norm{\Pi\mathcal{U}_\mathcal{W}(T_1,T_2)q_{T_1}(\cdot;{\bm y})}_\infty \\
&\lesssim \frac{N^{n\xi}}{[N(T_2-T_1)]^{n/2}}\sup_{{\bm y}}\norm{q_{T_1}(\cdot;{\bm y})}_2+\frac{K^n}{N^{n (1-\xi)}},
\end{split}
\end{equation}
where \(\Pi\) is the orthogonal projection onto the kernel of \(\mathcal{L}\), \(\ker(\mathcal{L})=\bigcap_{i\ne j}\ker(\mathcal{L}_{ij})\), defined in~\cite[Lemma 4.17]{2005.08425}. Note that in~\eqref{eq:boundt2} we used that by~\cite[Corollary 4.20]{2005.08425} it holds that
\[
\norm{\Pi q_{T_2}}_{\infty}\le C(n) N^{-n} \norm{q_{T_2}}_1\le K^n N^{-n+n\xi},
\]
since \(\norm{q_{T_2}}_\infty\le N^{n\xi}\) on the very high probability set \(\widehat{\Omega}\). We remark that in~\cite[Proposition 6.29]{2005.08425} \(\mathcal{U}_\mathcal{W}\) is replaced by \(\mathcal{U}\), but this does not play any role since the only assumption on \(\mathcal{L}_{ij}\) used in~\cite[Section 6]{2005.08425} is that \(c_{ij}(t)\ge N^{1-\xi}|i-j|^{-2}\) (see~\cite[Definition 6.8]{2005.08425}). Combining~\eqref{eq:boundt1}--\eqref{eq:boundt2} we conclude
\begin{equation}\label{eq:boundcomb}
\sup_{{\bm y}}\norm{q_{T_2}(\cdot;{\bm y})}_\infty \lesssim N^{2\epsilon} \left(\frac{K}{NT_2}\right)^{n/2}\left(\frac{\ell_1}{K}+\frac{NT_1}{\ell_1}+\frac{1}{\sqrt{NT_1}}+\frac{1}{\sqrt{K}}\right),
\end{equation}
where we used that \(T_1\le T_2/2\).
Now we compare the solution \(q_t\) from~\eqref{eq:q} with the original dynamics \(g_t\) from~\eqref{eq:g1deq}.
This is done, after several steps, using
~\cite[Proposition 7.2]{2005.08425} with \(F_t({\bm y};{\bm y})\) replaced by \(\bm1(n\,\, \mathrm{even})\), asserting that
\begin{equation}\label{eq:approxincon}
\sup_{{\bm y}}\big|q_{T_2}({\bm y}; {\bm y})-(g_{T_2}({\bm y})-\bm1(n\,\, \mathrm{even}))\big|\lesssim N^\epsilon\left(\frac{\ell_1}{K}+\frac{NT_1}{\ell_1}+\frac{\ell_2}{K}+\frac{NT_2}{\ell_2}\right).
\end{equation}
In particular, the only thing used about \(F_t({\bm y};{\bm y})\) in the proof of~\cite[Proposition 7.2]{2005.08425} is that \(F_t\) is in the kernel of all \(\mathcal{L}_{ij}\), and this is clearly the case for \(\bm1(n\,\, \mathrm{even})\) as well. The origins of the error terms in~\eqref{eq:approxincon} are as follows. The smooth cutoff given by the \(\mbox{Av}\) localising operator in the initial condition~\eqref{eq:q} commutes with the
time evolution generated by \(\mathcal{S}\) up to an error of order \(\ell_1/K\), see Lemma~\ref{lem:exchav}.
The difference between the original dynamics and the short range dynamics
in the time interval \(t\in [0, T_1]\) yields the error \(NT_1/\ell_1\),
see Lemma~\ref{lem:shortlongapprox}. Similar errors hold for the approximation
of the original dynamics by the time evolution generated by \(\mathcal{W}\) on the time interval \(t\in [T_1, T_2]\),
giving rise to the errors \(\ell_2/K\) and \(N(T_2-T_1)/\ell_2\le NT_2/\ell_2\).
Combining~\eqref{eq:boundcomb}--\eqref{eq:approxincon}, we conclude that
\begin{equation}\label{eq:finbfin}
\begin{split}
&\sup_{{\bm y}}\big|g_{T_2}({\bm y})-\bm1(n\,\, \mathrm{even})\big| \\
&\lesssim N^{2\epsilon}\left(\frac{\ell_1}{K}+\frac{NT_1}{\ell_1}+\frac{\ell_2}{K}+\frac{NT_2}{\ell_2}+\left(\frac{K}{NT_2}\right)^{n/2}\left(\frac{\ell_1}{K}+\frac{NT_1}{\ell_1}+\frac{1}{\sqrt{NT_1}}+\frac{1}{\sqrt{K}}\right) \right) \\
&\lesssim N^{-c/(20 n)},
\end{split}
\end{equation}
on the very high probability event \(\Omega_\xi\cap\widehat\Omega_{\xi, \epsilon}\),
choosing a very small \(\xi\).
In the last step we optimised the error terms in the second line of~\eqref{eq:finbfin} with the choice of
\[
K=N^c, \qquad T_2=N^{-1-c/(10 n)} K, \qquad \ell_2=\sqrt{NKT_2}, \qquad \ell_1=\sqrt{NKT_1}, \qquad T_1=\frac{\sqrt{K}}{N},
\]
with some small fixed \(0< c\le 1/2\). Finally, using that
\[
\sup_{{\bm y}: y_a\in\mathcal{J}}\big|g_{T_2}({\bm y})-\bm1(n\,\, \mathrm{even})\big|=\sup_{{\bm \eta}}\big|f_{T_2}({\bm\eta})-\bm1(n\,\, \mathrm{even})\big|
\]
by~\eqref{eq:defg}, where the supremum on the right-hand side is taken over configurations \({\bm \eta}\) such that \(\eta_i=0\) for \(i\in [\delta N, (1-\delta) N]^c\) and \(\sum_i \eta_i=n\), the bound in~\eqref{eq:finbfin} concludes the proof of Proposition~\ref{pro:flucque}.
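As a sanity check of this optimisation (our own bookkeeping, not part of the original argument), one can track the exponents of \(N\) in all error terms of~\eqref{eq:finbfin} with exact rational arithmetic and confirm that each is at most \(-c/(20n)\):

```python
from fractions import Fraction as F

def error_exponents(n, c):
    """N-exponents of the error terms in the final bound, for the choices
    K = N^c, T1 = sqrt(K)/N, l1 = sqrt(N*K*T1),
    T2 = N^{-1-c/(10n)} K, l2 = sqrt(N*K*T2)."""
    K = F(c)                      # exponent of N in K
    T1 = K / 2 - 1                # exponent of T1 = sqrt(K)/N
    l1 = (1 + K + T1) / 2         # exponent of l1 = sqrt(N K T1)
    T2 = K - 1 - K / (10 * n)     # exponent of T2
    l2 = (1 + K + T2) / 2         # exponent of l2 = sqrt(N K T2)
    short = [l1 - K, 1 + T1 - l1, -(1 + T1) / 2, -K / 2]
    long_ = [l2 - K, 1 + T2 - l2]
    # the ultracontractive prefactor (K/(N T2))^{n/2} multiplies the short-time terms
    ultra = [(K - 1 - T2) * F(n, 2) + e for e in short]
    return short + long_ + ultra
```

For instance, with \(n=2\) and \(c=1/2\) the maximal exponent equals \(-1/80=-c/(20n)\), as claimed.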
\section{Local law bounds}\label{sec:llaw}
In this section we prove the local laws needed to estimate the probability of the event \(\widehat\Omega\) in~\eqref{eq:hatomega}. We recall~\cite{MR2871147} that the resolvent \(G=(W-z)^{-1}\) of the Wigner matrix \(W\) is approximately equal,
\begin{equation}\label{local law}
G_{ab}=\delta_{ab}m+\landauO*{\frac{N^{\xi}}{\sqrt{N\Im z}}},\quad \braket{G}=m+\landauO*{\frac{N^{\xi}}{N\Im z}}
\end{equation} to the Stieltjes transform \(m=m_\mathrm{sc}(z)\) of the semicircular distribution \(\rho_\mathrm{sc}(x)=\frac{\sqrt{4-x^2}}{2\pi}\), which solves the equation
\begin{equation}\label{mde}
-\frac{1}{m}=m+z.
\end{equation}
\begin{proposition}\label{pro:llaw}
Let \(k\ge 3\) and \(z_1,\ldots,z_k\in\mathbf{C}\setminus\mathbf{R}\) with \(N\min_i(\rho_i\eta_i)\ge N^\epsilon\) for some \(\epsilon>0\) with \(\eta_i:=\abs{\Im z_i}\) and \(\rho_i:=\rho(z_i)\), \(\rho(z):=\abs{\Im m(z)}/\pi\). Then for arbitrary traceless matrices \(A_1,\ldots,A_k\) with \(\norm{A_i}\lesssim 1\) we have
\begin{equation}
\label{eq:llawbound}
\abs*{\braket*{G_1A_1\ldots G_k A_k}}\lesssim N^{\xi+(k-3)/2}\sqrt{\frac{\rho^\ast}{\eta_\ast}},
\end{equation}
with very high probability for any \(\xi>0\), where \(\rho^\ast:=\max_i \rho_i\) and \(\eta_\ast:=\min\eta_i\).
\end{proposition}
\begin{proof}
Using \(WG-zG=I\) and~\eqref{mde} we write
\begin{equation}\label{G m eq}
G=m-m\underline{WG}+m\braket{G-m}G
\end{equation}
where
\[
\underline{WG} = WG + \braket{G}G
\]
denotes a renormalization of \(WG\). More generally, for functions \(f(W)\) we define
\[
\underline{Wf(W)} := Wf(W)- \widetilde \E \widetilde W (\partial_{\widetilde W}f)(W)
\]
with \(\partial_{\widetilde W}\) denoting the directional derivative in direction \(\widetilde W\) and \(\widetilde W\) being an independent GUE-matrix with expectation \(\widetilde\E\). We now use~\eqref{G m eq} and~\eqref{local law} for \(G_1=G(z_1)\) and \(m_1=m(z_1)\) to obtain
\begin{equation}
\begin{split}
\Bigl(1-\landauO[\Big]{\frac{N^\xi}{N\eta_\ast}}\Bigr)\braket*{\prod_{i=1}^k (G_i A_i)} = m_1 \braket*{A_1 \prod_{i=2}^k (G_i A_i)} - m_1 \braket*{\underline{WG_1}A_1 \prod_{i=2}^k (G_i A_i)}.
\end{split}
\end{equation}
Together with
\[ \begin{split}
\braket*{\underline{W\prod_{i=1}^k (G_i A_i)}} &= \braket*{W\prod_{i=1}^k (G_i A_i)} + \sum_{j=1}^k\widetilde\E \braket*{\widetilde W \biggl[\prod_{i=1}^{j-1} (G_iA_i)\biggr] G_{j} \widetilde W \prod_{i=j}^k (G_i A_i)}\\
&= \braket*{\underline{WG_1}A_1 \prod_{i=2}^k (G_i A_i)} + \sum_{j=2}^k\braket*{\biggl[\prod_{i=1}^{j-1} (G_iA_i)\biggr] G_{j}}\braket*{ \prod_{i=j}^k (G_i A_i)}
\end{split} \]
we thus have
\begin{equation}\label{GA eq}
\begin{split}
\Bigl(1-\landauO[\Big]{\frac{N^\xi}{N\eta_\ast}}\Bigr)\braket*{\prod_{i=1}^k (G_i A_i)} &= m_1 \braket*{A_1\prod_{i=2}^k (G_i A_i)} - m_1 \braket*{\underline{W\prod_{i=1}^k (G_i A_i)}} \\
&\quad + \sum_{j=2}^k\braket*{\biggl[\prod_{i=1}^{j-1} (G_iA_i)\biggr] G_{j}}\braket*{ \prod_{i=j}^k (G_i A_i)}.
\end{split}
\end{equation}
We now apply the inequality~\cite[Eq.~(5.35)]{2012.13215}
\[ \abs{\braket{XY}} \le \Bigl[\braket{X^\ast X(YY^\ast)^{1/2}}\braket{(Y^\ast Y)^{1/2}}\Bigr]^{1/2} \]
for arbitrary matrices \(X,Y\) to \(X=\prod_{i=1}^{j-1}(G_i A_i),Y=G_j\) to obtain
\begin{equation*}
\begin{split}
\abs*{\braket*{\prod_{i=1}^{j-1}(G_i A_i)G_{j} }} &\le \eta_1^{-1/2}\Bigl(\braket*{ A_{j-1}^\ast G_{j-1}^\ast \cdots A_1^\ast \Im G_1 A_1 \cdots G_{j-1} A_{j-1} \abs{G_j} } \braket{ \abs{G_j}}\Bigr)^{1/2}
\end{split}
\end{equation*}
from \(G^\ast G=(\Im G)/\eta\). By spectral decomposition we may further estimate with very high probability for any \(\xi>0\)
\[
\begin{split}
&\abs*{\braket*{ A_{j-1}^\ast G_{j-1}^\ast \cdots A_1^\ast \Im G_1 A_1 \cdots G_{j-1} A_{j-1} \abs{G_j} }} \\
&= \abs*{\frac{1}{N} \sum_{\bm a} \frac{\braket{\bm u_{a_{j}},A_{j-1}^\ast \bm u_{a_{1-j}}} \cdots \braket{\bm u_{a_{-2}},A_{1}^\ast \bm u_{a_1}}\braket{\bm u_{a_{1}},A_{1} \bm u_{a_{2}}}\cdots \braket{\bm u_{a_{j-1}},A_{j-1} \bm u_{a_{j}}} }{\bigl[(\lambda_{a_{j-1}}-z_{j-1})(\lambda_{a_{1-j}}-\overline{z_{j-1}})\cdots (\lambda_{a_2}-z_2)(\lambda_{a_{-2}}-\overline{z_2})\bigr]\abs{\lambda_{a_j}-z_j}}\Im \frac{1}{\lambda_{a_1}-z_1}}\\
&\lesssim N^{\xi+j-2} \rho(z_1),
\end{split}\]
where we used the overlap bound \(\abs{\braket{\bm u_a, A \bm u_b}}\lesssim N^{\xi-1/2}\), and where \(\sum_{\bm a}\) denotes the summation over the \(2j-2\) indices \(a_1,a_{\pm 2},\ldots,a_{\pm(j-1)},a_j\). We conclude
\begin{equation}
\abs*{\braket*{\prod_{i=1}^{j-1}(G_i A_i)G_{j} }} \lesssim N^{\xi+j/2-1}\frac{\sqrt{\rho_1}}{\sqrt{\eta_1}}.
\end{equation}
Similarly we also have
\begin{equation*}
\abs*{\braket*{ \prod_{i=j+1}^{k} (G_i A_i)}} \lesssim N^{\xi+(k-j)/2-1},
\end{equation*}
and the claim follows from~\eqref{GA eq} and the bound
\begin{equation}
\abs*{\braket*{\underline{W\prod_{i=1}^k (G_i A_i)}}} \lesssim N^\xi N^{(k-3)/2}\frac{\sqrt{\rho^\ast}}{\sqrt{\eta_\ast}}
\end{equation}
on the underlined term in~\cite[Theorem 4.1, Remark 4.3]{2012.13215}.
\end{proof}
\section{Introduction}
Stars form in molecular cores that are cold (temperatures below 10 K) and dense (volume densities of up to 10$^{5}$ cm$^{-3}$). These cores, known as prestellar cores \citep{1994MNRAS.268..276W,2016MNRAS.463.1008W}, have low Jeans masses and are prone to further collapse into stars. The surrounding gas and dust grains are heated by the embedded protostar and can be affected by the feedback of star formation processes. Thus, the molecular chemistry is expected to differ between prestellar cores and star-forming cores. Previous studies of carbon-chain molecules (CCMs) have mainly focused on the prestellar core phase \citep{1992ApJ...392..551S,2004ApJ...617..399H,2006ApJ...646..258H,2009ApJ...699..585H}. The chemistry of CCMs in low-mass star-forming cores remains to be explored.
It was previously thought that CCMs are abundant in early-stage dark molecular cores and absent in star-forming cores \citep{1992ApJ...392..551S,2009ApJ...699..585H}. The absence of CCMs is attributed to gas-phase destruction and depletion onto dust grains.
A new type of chemistry, warm carbon-chain chemistry (WCCC), was proposed based on the detection of high-excitation lines of CCMs in the lukewarm corinos of the low-mass protostars L1527 and Lupus I-1 (IRAS 15398-3359) \citep{2008ApJ...672..371S,2009ApJ...697..769S}. In the gas phase, CH$_4$ can evaporate and then react with C$^+$ to efficiently form CCMs in a warm dense region heated by the emerging protostar \citep{2008ApJ...681.1385H}.
In addition, abnormally high abundances of C$_3$S were recently found in L1251A, L1221, and IRAS 20582+7724 \citep{2019MNRAS.488..495W}. Shocks can increase the abundance of S$^+$ in the gas phase and thus drive the generation of S-bearing CCMs, which is referred to as shocked carbon-chain chemistry \citep[SCCC,][]{2019MNRAS.488..495W}. Finding other low-mass star-forming cores with abundant CCMs is key to improving our understanding of the CCM chemistry in these cores.
In this paper, we report a survey of the CCMs of hydrocarbons (C$_2$H, C$_4$H, c--C$_3$H$_2$), HC$_{\rm 2n+1}$N (n=1,2), and C$_{\rm n}$S (n=2,3), together with three dense gas tracers, SO, HNC, and N$_2$H$^+$, toward four sources, IRAS 04181+2655 (I04181), HH211, L1524, and L1598, drawn from a catalog of high-velocity molecular outflows \citep[and references therein]{2004A&A...426..503W}. The aim was to probe the chemical evolution of CCMs in the gas phase of these regions.
I04181 is a Class I low-mass protostar, and the associated molecular core shows a bipolar outflow in CO (J=2--1) \citep{1996A&A...311..858B}. The jet-driven molecular outflow of HH211 was reported by \citet{1999A&A...343..571G}. The bipolar outflow of the Class 0 protostar HH211 detected in CO (J=2--1) is compact and does not extend beyond the H$_2$ jet \citep{1994ApJ...436L.189M,2011ApJ...736...25F}.
L1524 contains Haro 6-10, which has an infrared companion \citep[Class I protostar IRAS 04263+2426;][]{1953ApJ...117...73H,1998MNRAS.299..789C}.
L1598 contains a bipolar HH complex (HH 117 and HH 118) centered on the IRAS 05496+0812 source \citep[Class I protostar,][]{2015A&A...584A..92M}. The dynamical age of L1598 is $\sim 2.2 \times10^4$ yr \citep{1988ApJ...327..350S,2002ApJS..141..157Y}. The outflow motions, as in the case of the optical jet, can heat the surrounding gas and can also produce shocks. Therefore, the four sources are good targets for probing the chemistry of CCMs. Table \ref{samples} summarizes the properties of our four target sources.
This paper is organized as follows. The observations are introduced in Sect. \ref{obs}. The results of the molecular line observations are presented in Sect. \ref{results}. We discuss the column density ratio of CCMs, N-bearing inorganic molecules, and the chemistry of CCMs in Sect. \ref{discuss}. We summarize our result in Sect. \ref{sum}.
\section{Observations \label{obs}}
\subsection{Observations with the PMO 13.7 m telescope}
Observations were carried out with the 13.7-m telescope of the Purple Mountain Observatory (PMO) from May 25 to May 29 and from Jun 1 to Jun 6, 2017. The $3\times3$-beam sideband-separation Superconducting Spectroscopic Array Receiver system was used as the frontend \citep{2012ITTST...2..593S}. The HPBW is $52''$ in the 90 GHz band (3 mm). The mean beam efficiency ($\eta$) is about $50\%$. The pointing and tracking accuracies are both better than $5''$. Fast Fourier transform spectrometers (FFTSs) were used as the backend. Each FFTS provides a total bandwidth of 1 GHz and 16384 channels. The spectral resolution is about 61 kHz, corresponding to a velocity resolution of $\sim$0.21 km s$^{-1}$. The upper sideband (USB) was set to 90--91 GHz with a system temperature (T$_{sys}$) of around 210 K and a typical rms of 0.06 K for the antenna temperature (T$_a$). The lower sideband (LSB) was set to 85--86 GHz with T$_{sys}$ around 120 K and a typical rms of 0.08 K for T$_a$. Another frequency setup covered 92--93 GHz in the USB and 87--88 GHz in the LSB; its T$_{sys}$ and typical rms are similar to those of the first setup.
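The quoted velocity resolution follows directly from the channel width via $\delta v = c\,\delta\nu/\nu$. A minimal sketch of the conversion, using the 61 kHz channel width and the 90 GHz band centre from above:

```python
C_KMS = 2.99792458e5  # speed of light [km/s]

def velocity_resolution(delta_nu_hz, nu_hz):
    """Velocity width of one spectral channel: dv = c * dnu / nu."""
    return C_KMS * delta_nu_hz / nu_hz

# 61 kHz channels at 90 GHz give roughly 0.2 km/s
dv = velocity_resolution(61e3, 90e9)
```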
The mapping observations were carried out with the on-the-fly (OTF) mode with LSB of 85-86 GHz and USB of 90-91 GHz from Aug. 31 to Sept. 7, 2017. The antenna continuously scanned a region of $18'\times 18'$ centered on the sources with a scanning rate of $30''$ per second. Because of the high rms noise level at the edges of OTF maps, only data within the center $10'\times 10'$ regions were used for analyses in this work.
\subsection{Observations with the SHAO 65 m telescope}
The spectral lines at Ku band were observed with the Tian Ma Radio Telescope (TMRT) of Shanghai Observatory on Jul. 25, 2016. The TMRT is a fully steerable radio telescope located in the western suburbs of Shanghai \citep{2016ApJ...824..136L}. The frontend is a cryogenically cooled receiver covering the frequency range of 11.55--18.5 GHz. The pointing accuracy is better than 10$''$. An FPGA-based spectrometer based upon the design of the Versatile GBT Astronomical Spectrometer (VEGAS) was employed as the digital backend system (DIBAS) \citep{2012AAS...21944610B}. For molecular line observations, DIBAS supports a variety of observing modes, including 19 single sub-band modes and 10 eight-sub-band modes. The center frequency of each sub-band is tunable to an accuracy of 10 kHz. In our observations, DIBAS mode 22 was adopted. Each sideband has eight sub-bands within 1 GHz, and each sub-band has a bandwidth of 23.4 MHz and 16384 channels. The observations of L1598 were carried out with two sidebands, and those of the other three sources with three sidebands. As shown in Table \ref{transitions}, only SO (J=1$_2$--1$_1$) was not observed in L1598. The main beam efficiency ($\eta$) is 60\% at the Ku band \citep{2015AcASn..56...63W,2016ApJ...824..136L}.
The observed lines are listed in Table \ref{transitions}. The data were reduced with CLASS and GREG in the IRAM software package GILDAS \citep{2000ASPC..217..299G,2005sf2a.conf..721P} and analysed with open-source Python packages.
\section{Results \label{results}}
\subsection{Spectral lines}
In the 3 mm band, CCH (N=1--0), N$_2$H$^+$ (J=1--0), c--C$_3$H$_2$ (J=2$_{1,2}$--1$_{0,1}$), HNC (J=1--0), and HC$_3$N (J=10--9) are detected in all the target sources, except for L1598, with non-detections of the transitions J=3/2--1/2 F=1--1 and J=1/2--1/2 F=0--1, F=1--0 of CCH (N=1--0). CCS (J=7--6) is detected with signal-to-noise ratio (S/N) of 5 in I04181 and is tentatively detected with S/N of 3 in HH211. The S/N is defined as T$_a$/T$_{rms}$. The transitions J=17/2--15/2 and J=19/2--17/2 of C$_4$H are detected with S/N larger than 3 in I04181, HH211, and L1524 and are tentatively detected with S/N $\sim$2.5 in L1598. The spectral lines of I04181, HH211, L1524, and L1598 are plotted in Fig. \ref{PMO}.
All six hyperfine structure (HFS) components of CCH (N=1--0) are well resolved in the detected sources except for L1598. The CCH spectra are shown in Fig. \ref{appendix-cch}. The HFS components of N$_2$H$^+$ (J=1--0) are classified into three groups, labeled F$_1$=1--1, F$_1$=2--1, and F$_1$=0--1. The groups F$_1$=1--1 and F$_1$=2--1 each contain three blended HFS components.
In the Ku band, HC$_3$N (J=2--1) is detected in I04181, L1524, and HH211. HC$_5$N (J=6--5), and C$_3$S (J=3--2) are only detected in I04181 and L1524. SO (J=1$_2$--1$_1$) is only detected in HH211. These detected spectral lines are presented in Figs. \ref{HC3N} and \ref{HC5N}.
From Fig. \ref{HC3N}, we can see that four HFS components of HC$_3$N (J=2--1) are well resolved for the source L1524, and three HFS components of HC$_3$N (J=2--1) are detected in I04181. Only the main component, namely, F=3--2 of HC$_3$N (J=2--1), is detected in HH211. Due to the low S/N, the HFS components of HC$_5$N cannot be resolved. None of these transitions is detected in L1598, which is thus regarded as a source lacking CCMs.
The spectral lines were fitted with Gaussian functions. The line parameters, including the centroid velocity V$_{\rm LSR}$, the antenna temperature T$_{\rm a}$, the full width at half maximum ($\Delta$V), and the integrated intensity ($\int$T$_{\rm a}$dV), were obtained and are listed in Tables \ref{line-para-PMO} and \ref{line-para-TMRT}.
The line widths and antenna temperatures of the two groups of N$_2$H$^+$ with blended HFS lines were not obtained. The velocity-integrated intensities are listed in Table \ref{line-para-PMO}. The error of the velocity-integrated intensity was calculated as
$T_{rms}\times\sqrt{N_{channels}}\times\delta V_{res}$, where T$_{\rm rms}$ is the 1$\sigma$ noise, N$_{\rm channels}$ is the total number of line channels, and $\delta V_{\rm res}$ is the velocity resolution.
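This error estimate can be sketched as follows; the rms, channel count, and channel width in the example are illustrative values, not entries from the tables:

```python
import math

def integrated_intensity_error(t_rms, n_channels, dv_res):
    """sigma(int T_a dV) = T_rms * sqrt(N_channels) * dV_res [K km/s]."""
    return t_rms * math.sqrt(n_channels) * dv_res

# e.g., an rms of 0.06 K over 25 line channels of 0.21 km/s each
err = integrated_intensity_error(0.06, 25, 0.21)
```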
HH211 has the highest integrated intensities of HNC and N$_2$H$^+$, followed by L1598. The integrated intensities of the CCM species in L1598 are the weakest.
The HFS components of CCH (N=1--0), N$_2$H$^+$ (J=1--0), and HC$_3$N (J=2--1) were fitted by the HFS fitting program in GILDAS/CLASS.
The HFS fitting parameters, namely the product of the antenna temperature and the optical depth (T$_{\rm a}*\tau$), the centroid velocity (V$_{\rm LSR}$), the full width at half maximum ($\Delta$V), and the optical depth ($\tau$), are listed in Table \ref{hfs}. The optical depth obtained by HFS fitting has a considerable uncertainty.
\subsection{Excitation}
Based on the optical depth ($\tau$) obtained by hyperfine structure fitting, the excitation temperature (T$_{\rm ex}$) can be derived by:
\begin{eqnarray}
T_{ex} &=& \frac{h\nu}{k} \ln^{-1}\left\{\frac{h\nu/k}{\frac{T_r}{f\left(1-e^{-\tau}\right)}+J(T_{bg})}+1\right\},
\end{eqnarray}
where
\begin{eqnarray}
J(T) &=& \frac{h\nu/k}{e^{h\nu/kT}-1},
\end{eqnarray}
and $f$ is the beam filling factor, which is assumed to be 1, $\nu$ is the rest frequency, T$_{bg}$ (2.73 K) is the temperature of the cosmic background radiation, and T$_{\rm r}$=T$_{\rm a}$/$\eta$ is the brightness temperature. The derived T$_{\rm ex}$ of CCH, N$_2$H$^+$, and HC$_3$N are listed in column (6) of Table \ref{hfs}. The T$_{\rm ex}$ of CCH toward L1598 cannot be derived because of the low S/N and because only three HFS components were detected.
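The inversion above can be written compactly in code. A minimal sketch, assuming a beam filling factor of 1 and a placeholder line frequency (not a fitted value from Table \ref{hfs}):

```python
import numpy as np

H_OVER_K = 4.7992e-11  # h/k [K s]

def j_nu(t, nu):
    """Radiation temperature J(T) = (h nu / k) / (exp(h nu / kT) - 1)."""
    x = H_OVER_K * nu
    return x / np.expm1(x / t)

def t_ex(t_r, tau, nu, f=1.0, t_bg=2.73):
    """Excitation temperature from the inverted radiative transfer relation."""
    x = H_OVER_K * nu
    denom = t_r / (f * (1.0 - np.exp(-tau))) + j_nu(t_bg, nu)
    return x / np.log(x / denom + 1.0)
```

As a sanity check, generating T$_r$ from an assumed T$_{\rm ex}$ of 5 K via $T_r = f(1-e^{-\tau})\,[J(T_{ex})-J(T_{bg})]$ and then inverting recovers 5 K.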
The T$_{\rm ex}$ of CCH ranges from 4.8 to 6.4 K and is slightly larger than the values of L1498 and CB246 \citep[$\sim$4 K,][]{2009A&A...505.1199P}. Although the HFS components of N$_2$H$^+$ are not well resolved, we also carried out HFS fitting and derived T$_{\rm ex}$ values ranging from 3.4 to 6.1 K, similar to the typical value of 5 K \citep{2002ApJ...572..238C}. The T$_{\rm ex}$ of HC$_3$N for L1524 is 8.7$\pm$3.9 K. The large error may come from the large uncertainty of the optical depth obtained from the HFS fitting.
Due to the lack of observed transitions from various excitation levels, the excitation temperatures of HNC, c--C$_3$H$_2$, CCS, CCCS, SO, HC$_5$N, and C$_4$H cannot be reliably derived.
Under the local thermodynamic equilibrium (LTE) assumption, we took the dust temperatures as the excitation temperatures to calculate the column densities of these molecules. The dust temperatures were derived via spectral energy distribution (SED) fitting (see Fig. \ref{SED}).
\subsection{Column density and abundance}
The column densities of CCH were derived with the RADEX code\footnote{https://personal.sron.nl/$\sim$vdtak/radex/} \citep{2019ApJ...884..167T}, which is a computer program for non-LTE analyses of molecular lines toward interstellar clouds that are assumed to be homogeneous \citep{2007A&A...468..627V}. A data file for CCH, containing the energy levels, transition frequencies, Einstein A coefficients, and collisional rate coefficients with H$_2$, is needed to run RADEX. The file is provided by the LAMDA database\footnote{https://home.strw.leidenuniv.nl/$\sim$moldata/} \citep{2005A&A...432..369S}. The kinetic temperature was assumed to be the dust temperature of each source. The volume densities were estimated as $\frac{\mathrm{N_{H_2}}}{\mathrm{D}\ \mathrm{HPBW}}$, where N$_{\mathrm{H_2}}$ is the H$_2$ column density (see Fig. \ref{SED}), D is the distance from the Sun, and HPBW is the half power beam width of 53$''$. The volume densities of I04181, HH211, L1524, and L1598 are 4.6 $\times$ 10$^4$ cm$^{-3}$, 24 $\times$ 10$^4$ cm$^{-3}$, 18 $\times$ 10$^4$ cm$^{-3}$, and 1.5 $\times$ 10$^4$ cm$^{-3}$, respectively. The background temperature is 2.73 K. For each line, we minimized the deviation of the intensity modeled by RADEX (M$_{\rm flux}$) from the observed intensity (O$_{\rm flux}$), that is, the quantity $\frac{\mathrm{M_{flux}}}{\mathrm{O_{flux}}}-1$. The column densities of CCH are listed in Table \ref{density}, and the error was estimated as $\sqrt{\sum{(\frac{\mathrm{M_{flux}}}{\mathrm{O_{flux}}}-1)^2}}$.
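The beam-averaged volume density estimate can be sketched as follows, assuming the 53$''$ beam; the column density and distance in the example are illustrative placeholders, not the fitted values:

```python
PC_CM = 3.0857e18          # 1 pc in cm
ARCSEC_TO_RAD = 1.0 / 206265.0

def volume_density(n_h2, dist_pc, hpbw_arcsec=53.0):
    """n(H2) = N(H2) / (D * HPBW), with D*HPBW the physical beam diameter [cm]."""
    beam_cm = dist_pc * PC_CM * hpbw_arcsec * ARCSEC_TO_RAD
    return n_h2 / beam_cm

# e.g., N(H2) ~ 5e21 cm^-2 at 140 pc gives n of a few 1e4 cm^-3
n = volume_density(5e21, 140.0)
```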
The column densities of the other species were derived following \citet{2015PASP..127..266M}:
\begin{eqnarray}
N &=& \frac{3k}{8\pi^3\nu} \frac{Q}{\Sigma S_{ij}\mu^2} \frac{J(T_{ex}) \exp\left(\frac{E_u}{kT_{ex}}\right)}{J(T_{ex})-J(T_{bg})} \int T_{r}\,dV,
\end{eqnarray}
where the permanent dipole moment, $\mu$, the line strength, S$_{\rm ij}$, and the upper level energy, E$_{\rm u}$, were adopted from the Cologne Database for Molecular Spectroscopy\footnote{http://www.astro.uni-koeln.de/cdms/} and are listed in Table \ref{transitions}. The partition function $Q$ was estimated as $\frac{kT_{ex}}{hB}\exp\left(\frac{hB}{3kT_{ex}}\right)$ \citep{2015PASP..127..266M}.
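As a sketch, the column density expression and the linear-rotor partition function can be combined as below. The spectroscopic constants in the example (rotational constant, $S\mu^2$, and $E_u/k$, roughly appropriate for HC$_3$N J=10--9) are illustrative assumptions, not values from Table \ref{transitions}:

```python
import numpy as np

K_CGS = 1.380649e-16   # Boltzmann constant [erg/K]
H_OVER_K = 4.7992e-11  # h/k [K s]
DEBYE = 1e-18          # 1 Debye [esu cm]

def j_nu(t, nu):
    """Radiation temperature J(T) = (h nu / k) / (exp(h nu / kT) - 1)."""
    x = H_OVER_K * nu
    return x / np.expm1(x / t)

def partition_linear(t_ex, b_hz):
    """Q ~ (kT/hB) exp(hB/(3kT)) for a linear rotor."""
    x = H_OVER_K * b_hz
    return (t_ex / x) * np.exp(x / (3.0 * t_ex))

def column_density(nu, s_mu2_d2, e_u_k, t_ex, int_tr, b_hz, t_bg=2.73):
    """LTE column density [cm^-2]; int_tr is the integral of T_r dV in K km/s."""
    pre = 3.0 * K_CGS / (8.0 * np.pi**3 * nu)
    q_over_smu2 = partition_linear(t_ex, b_hz) / (s_mu2_d2 * DEBYE**2)
    excit = j_nu(t_ex, nu) * np.exp(e_u_k / t_ex) / (j_nu(t_ex, nu) - j_nu(t_bg, nu))
    return pre * q_over_smu2 * excit * int_tr * 1e5  # K km/s -> K cm/s

# HC3N (J=10-9)-like example: nu ~ 90.979 GHz, S*mu^2 ~ 139 D^2,
# E_u/k ~ 24 K, T_ex = 10 K, 1 K km/s, B ~ 4.549 GHz (assumed constants)
n_col = column_density(90.979e9, 139.3, 24.0, 10.0, 1.0, 4.549e9)
```

For these assumed inputs the result lands near 10$^{13}$ cm$^{-2}$, the order of magnitude typical for HC$_3$N columns in such cores.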
The column densities are listed in Table \ref{density}.
The large difference in column densities derived from the HC$_3$N (J=2--1) and (J=10--9) transitions is evident in Table \ref{density}. The effective excitation densities\footnote{The critical density has been widely used as a representative value for the density of the region traced by a certain transition. However, this is misleading and should only be used when no better alternative exists. As a case in point, widespread CCH has been observed in a low density region (n(H$_2$) $\sim$10$^3$ cm$^{-3}$) whereas the corresponding critical density can be more than one order of magnitude higher. A better indicator of the density, namely, the effective excitation density, has been proposed by \citet{Shirley_2015}.} of HC$_3$N (J=2--1) and (J=10--9) are 9.7$\times$10$^2$ cm$^{-3}$ and 1.6$\times$10$^5$ cm$^{-3}$, respectively, when assuming the kinetic temperature to be 10 K \citep{Shirley_2015}. The higher-J HC$_3$N (J=10--9) is a good tracer of dense gas \citep{2020MNRAS.496.2790L}. The lower-J HC$_3$N (J=2--1) has a lower critical density than HC$_3$N (J=10--9) and thus can trace more extended gas.
Moreover, higher-J HC$_3$N traces dense regions with sizes smaller than the beam size; thus, the column density derived from HC$_3$N (J=10--9) can be underestimated. Alternatively, the underestimated column densities derived from transitions with high upper-level energies (E$_{up}$/k$_B$) may be due to overestimated excitation temperatures.
With the H$_2$ surface densities generated from the released Herschel data (Fig. \ref{SED}), the abundances of these molecules were derived and are listed in Table \ref{abundance}.
\subsection{Emission regions}
The integrated intensity maps of HNC (J=1--0), c--C$_3$H$_2$ (J=2$_{1,2}$--1$_{0,1}$), C$_4$H, and HC$_3$N (J=10--9) were made for the four sources, except for those of C$_4$H and HC$_3$N (J=10--9) in L1598, owing to the low S/N (see Figs. \ref{inte-map} and \ref{inte-map2}). As presented in Table \ref{line-para-PMO}, the integrated intensities of C$_4$H (J=17/2--15/2) and C$_4$H (J=19/2--17/2) in the four sources are all very weak, with values smaller than 0.1 K km s$^{-1}$. To better display the distribution of C$_4$H with higher S/N, the integrated intensities of these two transitions were summed and are shown in Fig. \ref{inte-map2}.
The first moment maps of the HNC (J=1--0) emission for the four sources are shown in Fig. \ref{moment}.
\subsubsection{IRAS 04181+2655}
According to the HNC and HC$_3$N image maps (see Figs. \ref{inte-map} and \ref{inte-map2}), we divided I04181 into I04181SE and I04181W. Ring-like structures appear in the HNC and c--C$_3$H$_2$ emission regions toward I04181SE, as presented in Fig. \ref{inte-map}. The C$_4$H emission seems to surround the HC$_3$N emission found in I04181SE. There is an obvious velocity gradient between I04181SE and I04181W, ranging from 6.7 km s$^{-1}$ to 7.2 km s$^{-1}$.
\subsubsection{HH211}
According to HNC emissions, the HH211 region was divided into HH211mm, HH211E, HH211NE, HH211N, and HH211NW. HH211mm harbors a Class 0 protostar. The Class 0 protostar IC348mm is located in HH211N \citep{2013ApJ...768..110C}.
It seems that a velocity gradient from east to west occurs across HH211mm, HH211NE, and HH211N. HH211E has the largest LSR velocity among these cores and thus may be far away from the other gas regions.
\subsubsection{L1524}
The weakest HNC emission is found in L1524, with a peak value of $\sim$0.8 K km s$^{-1}$ (see Fig. \ref{inte-map}). The ammonia condensation peaks at 2.5$'$ north of the protostar and extends over the northern regions \citep{1989ApJ...341..208A}. We divided the L1524 region into L1524N and L1524S. L1524S has stronger HC$_3$N emission but weaker C$_4$H emission than L1524N (see Fig. \ref{inte-map2}). In Fig. \ref{moment}, the difference in LSR velocity between L1524N and L1524S is $\sim$0.5 km s$^{-1}$.
\subsubsection{L1598}
HNC emissions are very strong in L1598 (see Table \ref{line-para-PMO}), but the emissions of HC$_3$N and C$_4$H are too weak to generate integrated intensity maps.
As shown in Fig. \ref{inte-map}, the HNC gas cores always deviate from the protostars, as can be seen in I04181, L1524, and L1598. In the HH211mm region, the HNC emission is concentrated toward the Class 0 protostar HH211. Relative to the HNC emission regions, the c--C$_3$H$_2$ (J=2$_{1,2}$--1$_{0,1}$) emission is abundant around the protostars in I04181 and L1524.
\section{Discussion \label{discuss}}
\subsection{Chemical status}
\subsubsection{The molecular pair of CCH and N$_2$H$^+$\label{CCH_vs_N2H+}}
In the early stage of dark clouds, CCH is the most abundant hydrocarbon \citep{2018ApJ...856..151L} with an extended distribution \citep{2008ApJ...675L..33B,2017ApJ...836..194P}, while N$_2$H$^+$ formation is impeded because CO gas may react with its precursor H$_3^+$ and may also directly destroy N$_2$H$^+$ through the reaction $\mathrm{N}_2\mathrm{H}^++\mathrm{CO}\rightarrow\mathrm{HCO}^++\mathrm{N}_2$ \citep{2002ApJ...570L.101B}.
In the later stage of cores, CCH is oxidized to form other species such as CO, OH, and H$_2$O in the dense central regions \citep[e.g.,][]{2008ApJ...675L..33B,2014A&A...562A...3M,2016A&A...592A..21F}, while N$_2$H$^+$ shows centrally peaked emission in the dense region where CO depletion has occurred \citep{2002ApJ...570L.101B}. The two molecules can be regarded as good evolution tracers for the Planck Galactic cold clump cores \citep{2019A&A...622A..32L}, as the abundance ratio x[CCH]/x[N$_2$H$^+$] decreases with evolution.
The relationships among the abundances of CCH and N$_2$H$^+$ and their ratios are plotted in Fig. \ref{N2H_vs_CCH}, where our four sources are marked as red squares and the circled data points are taken from \citet{2019A&A...622A..32L}. The red circled data points are marked as harboring an \emph{IRAS} source within 1 arcmin \citep{1988SSSC..C......0H}; the corresponding cores are thus believed to contain heating sources and are considered to be in the late stage of star formation. The early-stage cores are shown in blue. In the left panel of Fig. \ref{N2H_vs_CCH}, it is clear that most of the red data points are located in the top-right corner and most of the blue data points are located in the bottom-left corner. It seems that the abundances of CCH and N$_2$H$^+$ increase when going from starless cores to star-forming cores. Two facts may explain the increasing CCH abundance: 1) CCH arises from dense gas with densities of n(H$_2$)$\sim$10$^4$--10$^5$ cm$^{-3}$ but not from the densest regions with n(H$_2$) > 10$^6$ cm$^{-3}$ \citep{1988A&A...195..257W}; in the four sources, the volume densities of H$_2$ range from 1.5$\times$10$^4$ to 2.4$\times$10$^5$ cm$^{-3}$ within a beam size of 53$''$; 2) the gas can be affected by star formation feedback, as heating and radiation may lead to regenerated CCH.
As shown in the right panel of Fig. \ref{N2H_vs_CCH}, N[CCH]/N[N$_2$H$^+$] decreases from early-stage cores to later-stage cores. Among the four star-forming cores, the more evolved cores I04181 and L1524 have larger values of N[CCH]/N[N$_2$H$^+$] than HH211. However, L1598, which is also an evolved core, has the lowest ratio. The spread of the ratio among our four protostars may reflect source-to-source variety, or L1598 may be a peculiar source deficient in CCH.
\subsubsection{Dense gas tracers of N$_2$H$^+$, C$_4$H, and HC$_3$N \label{N2H+_vs_HC3N}}
CCMs are formed from C and C$^+$ which are abundant in photo-dissociation regions \citep[PDR,][]{2005A&A...435..885P}. Thus, CCMs are often used as extended gas tracers \citep[and references therein]{2015ApJ...808..114J}.
However, C$_4$H and HC$_3$N can be formed from CH$_4$ \citep{1984MNRAS.207..405M,2008ApJ...681.1385H} and C$_2$H$_2$ \citep{2009MNRAS.394..221C}, which are sublimated from grain mantles at dust temperatures of 25 K and 50 K, respectively \citep{1983A&A...122..171Y,1984MNRAS.207..405M}. Regarding the hot components of HC$_3$N, its vibrationally excited lines have been detected around massive young stellar objects \citep[e.g.,][]{2020ApJ...898...54T}. Thus, high-J HC$_3$N lines are capable of tracing dense cores quite well \citep{2020MNRAS.496.2790L}.
The abundances of C$_4$H and HC$_3$N increase in high-mass protostellar objects (HMPOs) and WCCC sources \citep{2008ApJ...672..371S,2019ApJ...872..154T}. In these places, dust temperatures are higher than 30 K and can be up to 50 K \citep{2002ApJ...566..931S,2008ApJ...672..371S}. Recently, the molecular pair of HC$_3$N and N$_2$H$^+$ have been used as a chemical evolutionary indicator in massive star-forming regions \citep{2019ApJ...872..154T}.
We plotted the relationships among N(HC$_3$N)/N(N$_2$H$^+$), x(N$_2$H$^+$), and x(HC$_3$N) as shown in Fig. \ref{N2H_vs_HC3N} combined with the data from \citet{2004A&A...416..603J}. The error of the data was not given in \citet{2004A&A...416..603J}.
We can see that the N(HC$_3$N)/N(N$_2$H$^+$) ratio and the x(HC$_3$N) value tend to increase from Class 0 to Class I in Fig. \ref{N2H_vs_HC3N}, which is consistent with the trend found in massive star-forming regions \citep{2019ApJ...872..154T}. We summarize the statistical analyses in Table \ref{stt}. The mean values of the N(HC$_3$N)/N(N$_2$H$^+$) ratio are 0.43 and 2.55 in Class 0 and Class I protostars, respectively. The mean values of x(HC$_3$N) are 3.4$\times$10$^{-10}$ and 20.2$\times$10$^{-10}$ in Class 0 and Class I protostars, respectively. The two quantities are thus very different between the two groups. We conducted the Kolmogorov-Smirnov (K-S) test to verify whether N(HC$_3$N)/N(N$_2$H$^+$) and x(HC$_3$N) in the Class 0 and Class I groups are drawn from the same distribution. The K-S test gives p-values of 0.74\% and 4.5\% for N(HC$_3$N)/N(N$_2$H$^+$) and x(HC$_3$N), respectively. The low p-values indicate that the differences between the two groups are significant.
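The two-sample K-S comparison can be reproduced with SciPy. The samples below are synthetic stand-ins drawn around the quoted group means (0.43 and 2.55), not the measured ratios:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-ins for the Class 0 and Class I N(HC3N)/N(N2H+)
# ratios, log-normally distributed around the quoted mean values.
class0 = rng.lognormal(mean=np.log(0.43), sigma=0.5, size=12)
class1 = rng.lognormal(mean=np.log(2.55), sigma=0.5, size=12)

# A small p-value argues against the two groups sharing one distribution.
stat, p = stats.ks_2samp(class0, class1)
```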
We can see in the left panel of Fig. \ref{N2H_vs_HC3N} that x(N$_2$H$^+$) shows a flat distribution with the N(HC$_3$N)/N(N$_2$H$^+$) ratio and displays no obvious difference between Class 0 and Class I protostars. The increasing trend of the N$_2$H$^+$ abundance can be inhibited in the central lukewarm region of the protostellar cores where CO evaporation has occurred \citep{1983A&A...122..171Y}. \citet{2018ApJ...865..135Y} also reported that the abundance of N$_2$H$^+$ begins to decrease or reaches a plateau when the dust temperature rises to $\sim$27 K.
In the right panel of Fig. \ref{N2H_vs_HC3N}, it is clear that x(HC$_3$N) and N(HC$_3$N)/N(N$_2$H$^+$) increase with the evolution of the protostar.
The increasing trend of the N(HC$_3$N)/N(N$_2$H$^+$) ratio with evolution found in high-mass protostellar objects \citep{2019ApJ...872..154T} also holds in low-mass protostellar cores with sizes of less than 0.05 pc. The increased x(HC$_3$N) found here should not entirely come from sublimated CH$_4$, since the SED fitting results toward the four sources give dust temperatures lower than the sublimation temperature of CH$_4$ \citep[25 K,][]{1983A&A...122..171Y}.
It is worth noting that the special Class 0 protostar shown in Fig. \ref{N2H_vs_HC3N} is L1527, the first identified WCCC source \citep{2008ApJ...672..371S}. The detection of CH$_4$ in the protoplanetary disk of the protostar in L1524 was reported by \citet{2013ApJ...776L..28G}. This serves as important evidence for WCCC, since CH$_4$ has not been directly detected in a WCCC source thus far. Although the abundances of C$_4$H and HC$_3$N in L1524 are lower than those in the WCCC source L1527, L1524 can be regarded as a WCCC candidate. Since I04181 has a high CCM abundance, similar to L1527, we also consider it a WCCC candidate.
\subsubsection{S-bearing molecules and shocked carbon-chain chemistry \label{S}}
Since CS was first detected in the interstellar medium \citep{1971ApJ...168L..53P}, S-bearing molecules have often been used to probe the physical structure of star-forming regions and sulfur chemistry \citep[e.g.,][]{1989ApJ...346..168Z,1991ApJ...368..432L,1993Ap&SS.200..183W,1999MNRAS.306..691R}. The S-bearing CCMs, CCS and CCCS, are abundant in cold quiescent molecular cores and decrease in abundance in star-forming cores \citep[e.g.,][]{1992ApJ...392..551S,1998ApJ...506..743B,2006ApJ...642..319D,2008ApJ...678.1049S}. CCS lines were not detected or only marginally detected in low-mass star-forming regions \citep{1992ApJ...392..551S}. However, high detection rates of CCS and CCCS were found in a CCM survey toward embedded low-mass protostars \citep{2018ApJ...863...88L}. Abnormally abundant CCS and CCCS were also found toward B1-a, where the column densities of CCCS are larger than those of HC$_5$N \citep{2018ApJ...863...88L}. Recently, CCCS column densities larger than those of HC$_7$N were found in the Ku band in three molecular outflow sources \citep{2019MNRAS.488..495W}. Shocked carbon-chain chemistry (SCCC) can explain this abnormal phenomenon \citep{2019MNRAS.488..495W}.
In our observations, CCCS (J=3--2) is detected in I04181 and L1524, while HC$_7$N is not detected in either source. CCCS is thus more abundant than HC$_7$N, which meets the criterion of SCCC \citep{2019MNRAS.488..495W}.
Additionally, the CCCS abundance of I04181 (9.0$\times$10$^{-10}$) is several times those of the SCCC sources \citep[2.1$\times$10$^{-10}$ for L1251A, 1.7$\times$10$^{-10}$ for IRAS 20582+7724, and 1.0$\times$10$^{-10}$ for L1221,][]{2019MNRAS.488..495W}. The CCCS abundance in L1524 (1.4$\times$10$^{-10}$) is higher than that in L1221. The abundances of sulfur atoms and ions increase in shocked regions, and the abundance of CCCS then increases \citep{2019MNRAS.488..495W}. Considering the detection of outflow motions in I04181 and the association between Haro 6-10 and L1524, the abundant CCCS seems to result from shocks and can be explained by SCCC.
The SO emission is only detected in HH211.
The SO abundance starts to increase at $\sim$10$^4$ yr after the temperature rises to 100 K \citep{2017MNRAS.469..435V,2018MNRAS.474.5575V}. However, the gas cannot be heated to 100 K by the central protostar on the large scale of $\sim$2$\times$10$^4$ au, and the dust temperature of the HH211 source is only 13.6 K. Moreover, the dynamical age of the CO outflow of HH211 is less than 1000 yr \citep{1994ApJ...436L.189M}. Such a short timescale is not enough for SO to grow naturally. The SO emission detected here may instead be formed from S$^+$ evaporated from the grain mantles.
SO has been used as an outflow and jet tracer for some years \citep{2005MNRAS.361..244C,2015A&A...581A..85P}. Recently, \citet{2020MNRAS.496.2790L} investigated G9.62+0.19 with the ALMA three-millimeter observations of massive star-forming regions (ATOMS) survey and found a correlation between SO and SiO with a principal component analysis, which supports the idea that the two molecules trace similar shocked gas. In general, the formation of long CCMs involves complex processes. Thus, it is possible that SO is detected while CCCS is not.
\subsection{Distributions of N-bearing inorganic molecules and CCMs \label{distributions}}
N-bearing inorganic molecules, especially NH$_3$ and N$_2$H$^+$, are often used as dense gas tracers \citep{1999ApJS..125..161J}. NH$_3$ mapping observations toward the I04181, HH211, and L1524 regions were carried out previously \citep{1987A&A...173..324B,1989ApJ...341..208A,2015ApJ...805..185S}. HNC is also one of the most commonly used dense gas tracers, with a critical density larger than 10$^4$ cm$^{-3}$.
I04181W shows three HNC emission peaks and four NH$_3$ peaks, but the peak positions of HNC and NH$_3$ deviate from each other. Similar results are found in HH211NE and HH211N. Deviations among the emission peaks of different molecules are very common \citep[e.g.,][]{1997ApJ...486..862P,2014A&A...569A..11S,2016ApJ...833...97K}.
HNC and HC$_3$N (J=10--9) can be used as dense gas tracers \citep{2012MNRAS.419..238B,2020MNRAS.496.2790L}.
In our observations, HC$_3$N always emits within the HNC emission regions, as shown in the I04181SE, HH211mm, HH211NW, and L1524S images. However, HC$_3$N is only tentatively detected or not detected toward I04181W, HH211N, HH211NE, and L1598, where there are also strong HNC emissions.
For I04181SE, the size of the HNC emission structure is larger than 4$'$, while the HC$_3$N emission structure has a source size of $\sim$1$'$. With the distance adopted as 140 pc \citep[and references therein]{2004A&A...426..503W}, the size of the HNC emission structure of I04181SE is $\sim$0.2 pc, which is consistent with the typical clump size of 0.1--1 pc. The substructures of HC$_3$N have sizes of $\sim$0.05 pc, similar to the typical core size \citep{2002ARA&A..40...27C}. Besides, the HC$_3$N peaks coincide well with the NH$_3$ peaks in I04181SE, HH211mm, HH211N, and HH211NW.
Thus, HC$_3$N (J=10--9) is a better tracer of dense cores than HNC (J=1--0) in such cores.
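The size comparison above is just the small-angle relation $s = \theta D$; a minimal sketch with the adopted 140 pc distance and the 4$'$ and 1$'$ angular sizes from the text:

```python
def angular_to_physical_pc(theta_arcmin, dist_pc):
    """Physical size [pc] subtended by theta at dist_pc (small-angle approx.)."""
    return dist_pc * theta_arcmin * 60.0 / 206265.0

clump_pc = angular_to_physical_pc(4.0, 140.0)  # HNC structure, ~0.16 pc
core_pc = angular_to_physical_pc(1.0, 140.0)   # HC3N substructure, ~0.04 pc
```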
In the L1524 region, NH$_3$ peaks at 2.5 arcmin north (L1524N) of the protostar, while HNC peaks at $\sim$1 arcmin southwest (L1524S) of the protostar. The HC$_3$N emission in L1524S is also stronger than that in L1524N. This is similar to TMC-1, where the ammonia peak (TMC-1A) and the cyanopolyyne peak (TMC-1CP) are located in different places \citep{1997ApJ...486..862P}. Most of the elemental nitrogen tends to be in the form of gaseous N$_2$ and NH$_3$ according to steady-state chemical models \citep{2006Natur.442..425M}. Although HNC can be formed from NH$_3$ via the pathways NH$_3$+C$^+\rightarrow$ H$_2$NC$^+$+H and H$_2$NC$^+$+e$^-\rightarrow$ HNC+H, a high density of N$_2$ is regarded as key to the presence of abundant HNC \citep{2018MNRAS.481.4662R}.
It is thus suggested that N$_2$ and NH$_3$ are concentrated in L1524S and L1524N, respectively.
\subsection{Co-evolution between linear hydrocarbon and cyanopolyynes in I04181SE}
As seen in the emission intensity maps of I04181 (Fig. \ref{inte-map2}), the emissions of HC$_3$N and C$_4$H are widespread in I04181SE, while the HC$_3$N emission in I04181W is concentrated in a few peaks and the C$_4$H emission there does not show any core-like structures. This is consistent with the emission of HC$_7$N (J=21--20) detected by \citet{2019ApJ...871..134S}. In contrast, the emissions of HNC and c--C$_3$H$_2$ are similar in I04181SE and I04181W.
The cyanopolyynes can be formed mainly through two pathways \citep{2016ApJ...817..147T,2018MNRAS.481.4662R}: in the first (pathway 1), the nitrogen atoms in cyanopolyynes are brought in through the reaction HCN+C$_2$H$_2$$^+$+e$^-$$\rightarrow$ HC$_3$N+H$_2$, with subsequent reactions between shorter cyanopolyynes and C$_2$H$_2$$^+$ forming longer cyanopolyynes; in the second (pathway 2), cyanopolyynes are directly synthesized from CN and unsaturated hydrocarbons such as C$_2$H$_2$. In lukewarm regions, cyanopolyynes are formed via a mechanism similar to pathway 2 but in different temperature regions, depending on the sublimation temperatures of their parent species \citep{2019ApJ...881...57T}.
The chemical divergences between I04181SE and I04181W can be traced back to different distributions of CN, considering that c--C$_3$H$_2$ is mainly formed through C$_2$H$_2$+CH $\rightarrow$ C$_3$H$_2$+H \citep{2015ApJ...807...66Y}. This may explain the co-evolution between linear hydrocarbons and cyanopolyynes around I04181SE.
The C$_4$H emission is weak at the center of I04181SE, which is similar to that of HC$_7$N (J=21--20), as
detected by \citet{2019ApJ...871..134S}.
This may suggest annular distributions of hydrocarbons and cyanopolyynes in I04181SE, with non-negligible depletion in the central dense regions \citep[e.g.,][]{2009A&A...505.1199P}.
The hydrocarbons and cyanopolyynes in I04181SE are formed from C atoms and ions. The C and C$^+$ arise in the diffuse phase of the cloud through photo-dissociation and photo-ionization \citep{1992ApJ...392..551S,2008ApJ...672..371S,2019MNRAS.488..495W}. The C$_4$H detected around the protostar of I04181 should therefore be regenerated. Regenerated C$_4$H can only be formed from sublimated CH$_4$, which results in WCCC \citep{2008ApJ...681.1385H}. However, the excitation temperatures of CCH and N$_2$H$^+$ are less than 5 K, and the dust temperature derived from Herschel data is 14.3 K. All these temperatures are too low to sublimate CH$_4$ from the dust-grain surfaces. \cite{2010ApJ...722.1633S} suggested that WCCC occurs within the centrifugal barrier of 500--1000 au. The spatial resolution of our data is larger than 8000 au, which cannot resolve the region where WCCC occurs. High-resolution observations are needed to confirm whether the temperature in the lukewarm region exceeds 30 K so that WCCC can occur.
\section{Conclusions \label{sum}}
In this study, we carried out molecular line observations toward four low-mass outflow sources, IRAS 04181+2655 (I04181), HH211, L1524, and L1598, in the 3 mm band using the PMO 13.7 m telescope and in the Ku band using the SHAO 65 m TMRT. HNC (J=1--0), c--C$_3$H$_2$ (2$_{1,2}$--1$_{0,1}$), CCH (N=1--0), N$_2$H$^+$ (J=1--0), and HC$_3$N (J=10--9) are detected in all four sources. The transitions J=17/2--15/2 and J=19/2--17/2 of C$_4$H are detected in I04181, HH211, and L1524, and tentatively detected in L1598. CCS (J=7--6) is detected in I04181 and marginally detected in HH211. HC$_3$N (J=2--1) is detected in I04181, HH211, and L1524. HC$_5$N (J=6--5) and CCCS (J=3--2) are detected in I04181 and L1524. SO (J=1$_2$--1$_1$) is only detected in HH211. The mapping observations of HNC (J=1--0), c--C$_3$H$_2$ (2$_{1,2}$--1$_{0,1}$), HC$_3$N (J=10--9), and C$_4$H (J=17/2--15/2 and J=19/2--17/2) were carried out in the 3 mm band. The column density of CCH was derived with the RADEX code in a non-LTE analysis. For the other species, the column densities were derived under the LTE assumption. The main findings are summarized as follows.
\begin{enumerate}
\item We tested the molecule pair HC$_3$N and N$_2$H$^+$ as a chemical evolutionary indicator in low-mass star-forming cores. A two-sample K-S test on N(HC$_3$N)/N(N$_2$H$^+$) for the cores associated with Class 0 and Class I protostars gives a p-value of 0.74\%, indicating that the difference in N(HC$_3$N)/N(N$_2$H$^+$) between Class 0 and Class I protostars is reliable. N(HC$_3$N)/N(N$_2$H$^+$) increases with evolution, which occurs not only in high-mass protostellar objects \citep{2019ApJ...872..154T} but also in low-mass star-forming cores.
\item Abundant CCMs are detected in I04181 and L1524. The CCM abundances in I04181 are similar to those of the WCCC source L1527. Methane has been detected in L1524 \citep{2013ApJ...776L..28G}. The two sources are therefore regarded as WCCC candidates.
\item In both I04181 and L1524, we detect CCCS (J=3--2), whereas HC$_7$N is not detected. The CCCS abundances of the two sources are larger than that of the shocked carbon-chain chemistry (SCCC) source L1221. The abundant CCCS in the two sources can be explained by SCCC. In HH211, the SO line is detected, but the CCCS and CCS emissions are too weak to detect. The SO may be formed from S$^+$ sublimated from the grain mantles.
\item Two filamentary structures I04181SE and HH211NW are shown in HC$_3$N (J=10--9) integrated intensity maps. The substructures of the HC$_3$N emission in I04181SE have sizes of $\sim$0.05 pc and are dense cores, indicating that HC$_3$N (J=10--9) is a good tracer of dense cores.
\item The HC$_3$N, HC$_7$N, and C$_4$H emissions are only detected in I04181SE, while the HNC and c--C$_3$H$_2$ emissions are similar between I04181SE and I04181W \citep{2019ApJ...871..134S}. The chemical divergences between the two clumps may imply different distributions of the CN radical. In I04181SE, the strong correlation between C$_4$H and HC$_3$N, HC$_7$N suggests a co-evolution between linear hydrocarbons and cyanopolyynes. The annular distributions of C$_4$H and HC$_7$N indicate that depletion is non-negligible in the central dense regions.
\end{enumerate}
The observations of carbon-chain molecules toward the four low-mass star-forming cores show that I04181 and L1524 are carbon-chain-rich sources with abundant sulfur-bearing carbon-chain molecules. The velocity-integrated intensity maps present annular distributions of the C$_4$H and HC$_7$N emissions \citep[observed by][]{2019ApJ...871..134S}. We speculate that carbon-chain molecules may dissipate in prestellar cores and reappear in star-forming cores. N(HC$_3$N)/N(N$_2$H$^+$) increases with evolution in low-mass protostellar cores with sizes of less than 0.05 pc.
\begin{acknowledgements}
We are grateful to the staff at the Qinghai Station of PMO and Shanghai Station of SHAO for their assistance during the observations. This work was supported by the NSFC No. 11988101, 12033005, 11433008, 11373009, 11503035 and 11573036, and China Ministry of Science and Technology under State Key Development Program for Basic Research (No. 2012CB821800).
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\begin{figure}[t]
\setlength{\abovecaptionskip}{-0.2cm}
\begin{center}
\includegraphics[width=1\linewidth]{figure_1.pdf}
\end{center}
\caption{Understanding group activity in a multi-person scene requires accurately determining the relevant relations between actors. Our model learns to represent the scene by an actor relation graph and performs reasoning about the group activity (``left spike" in the illustrated example) according to the graph structure and node features. Each node denotes an actor, and each edge represents the relation between two actors.}
\vspace{-4mm}
\label{fig:figure_1}
\end{figure}
Group activity recognition is an important problem in video understanding~\cite{WangL0G18,SimonyanZ14,TranBFTP15,gan2015devnet} and has many practical applications, such as surveillance, sports video analysis, and social behavior understanding. To understand the scene of multiple persons, the model needs to not only describe the individual action of each actor in the context, but also infer their collective activity.
The ability to accurately capture the relevant relations between actors and to perform relational reasoning is crucial for understanding the group activity of multiple people~\cite{cite_a_12, cite_a_10, cite_d_1, cite_a_17, cite_a_18, cite_a_5, cite_a_2, cite_a_7}.
However, modeling the relation between actors is challenging, as we only have access to individual action labels and collective activity labels, without knowledge of the underlying interaction information.
Relations between actors must therefore be inferred from other cues, such as {\em appearance similarity} and {\em relative location}.
Effective deep models for group activity understanding thus need to capture these two important cues.
Recent deep learning methods have shown promising results for group activity recognition in videos~\cite{cite_a_1,cite_a_2,cite_a_4,cite_a_5,cite_a_6,cite_a_7,cite_a_17,cite_a_18}. Typically, these methods follow a two-stage recognition pipeline. First, person-level features are extracted by a convolutional neural network (CNN). Then, a global module is designed to aggregate these person-level representations to yield a scene-level feature. Existing methods model the relations between actors with an inflexible graphical model~\cite{cite_a_17}, whose structure is manually specified in advance, or using complex yet unintuitive message passing mechanisms~\cite{cite_a_5,cite_a_18}. To capture temporal dynamics, a recurrent neural network (RNN) is usually used to model the temporal evolution of densely sampled frames~\cite{cite_a_1,cite_a_2}. These models are generally computationally expensive and sometimes lack the flexibility to deal with group activity variations.
In this work, we address the problem of capturing appearance and position relation between actors for group activity recognition. Our basic aim is to model actor relation in a more flexible and efficient way, where the graphical connection between actors could be automatically learned from video data, and inference for group activity recognition could be efficiently performed.
Specifically, we propose to model the actor-actor relations by building an {\em Actor Relation Graph} (ARG), illustrated in Figure~\ref{fig:figure_1}, where each node denotes an actor's features and each edge represents the relation between two actors. The ARG can easily be placed on top of any existing 2D CNN to form a unified group activity recognition framework. Thanks to the operation of graph convolution~\cite{cite_g_10}, the connections in the ARG can be automatically optimized in an end-to-end manner. Thus, our model can discover and learn the potential relations among actors in a more flexible way. Once trained, our network can not only recognize individual actions and the collective activity of a multi-person scene, but also generate a video-specific actor relation graph on the fly, facilitating further insights for group activity understanding.
To further improve the efficiency of the ARG for long-range temporal modeling in videos, we come up with two techniques to sparsify its connections. In the spatial domain, we design a {\em localized} ARG by restricting the connections between actors to a local neighborhood. For temporal information, we observe that slowness is a natural video prior: frames are densely captured, but semantics vary very slowly. Instead of connecting every pair of frames, we propose a {\em randomized} ARG by randomly dropping several frames and keeping only a few. This random dropping operation not only greatly improves modeling efficiency but also largely increases the diversity of training samples, reducing the overfitting risk of the ARG.
In experiment, to fully utilize visual content, we empirically study different methods to compute pair-wise relation from the actor appearance features.
We then construct multiple relation graphs over the same set of actors to enable the model to focus on more diverse relation information among actors.
We report performance on two group activity recognition benchmarks: the Volleyball dataset~\cite{cite_d_2} and the Collective Activity dataset~\cite{cite_d_1}. Our experimental results demonstrate that our ARG is able to obtain superior performance to the existing state-of-the-art approaches.
The major contribution of this paper is summarized as follows:
\begin{itemize}
\item We construct flexible and efficient actor relation graphs to simultaneously capture the appearance and position relations between actors for group activity recognition. This provides an interpretable mechanism to explicitly model the relevant relations among people in the scene, and thus the capability to discriminate between different group activities.
\item We introduce an efficient inference scheme over the actor relation graphs by applying the GCN with sparse temporal sampling strategy. The proposed network is able to conduct relational reasoning over actor interactions for the purpose of group activity recognition.
\item The proposed approach achieves the state-of-the-art results on two challenging benchmarks: the Volleyball dataset~\cite{cite_d_2} and the Collective Activity dataset~\cite{cite_d_1}. Visualizations of the learned actor graphs and relation features show that our approach has the ability to attend to the relation information for group activity recognition.
\end{itemize}
\begin{figure*}[t]
\setlength{\abovecaptionskip}{-0.2cm}
\begin{center}
\includegraphics[width=1.0\linewidth]{figure_2.pdf}
\end{center}
\caption{An overview of our network framework for group activity recognition.
We first extract feature vectors of actors from sampled video frames.
We use a $d$-dimensional vector to represent each actor bounding box.
The total number of bounding boxes in the sampled frames is $N$.
Multiple actor relation graphs are built to capture relation information among actors.
Afterwards, Graph Convolutional Networks are used to perform relational reasoning on graphs.
The outputs of all graphs are then fused to produce the relational feature vectors of actors.
Finally, original feature and relational feature are aggregated and fed into classifiers of group activity and individual action.
}
\label{fig:figure_2}
\end{figure*}
\section{Related Work}
{\bf Group activity recognition.}
Group activity recognition has been extensively studied by the research community.
Earlier approaches are mostly based on a combination of hand-crafted visual features with probabilistic graphical models~\cite{cite_a_10,cite_a_11,cite_a_12,cite_a_13,cite_a_15,cite_a_16,cite_a_19} or AND-OR grammar models~\cite{cite_a_9,cite_a_14}.
Recently, the wide adoption of deep convolutional neural networks (CNNs) has demonstrated significant performance improvements on group activity recognition~\cite{cite_a_1,cite_a_2,cite_a_3,cite_a_4,cite_a_5,cite_a_6,cite_a_7,cite_a_17,cite_a_18}.
Ibrahim \etal ~\cite{cite_a_2} designed a two-stage deep temporal model, which builds an LSTM model to represent the action dynamics of individual people and another LSTM model to aggregate person-level information.
Bagautdinov \etal ~\cite{cite_a_1} presented a unified framework for joint detection and activity recognition of multiple people.
Ibrahim \etal ~\cite{cite_a_17} proposed a hierarchical relational network that builds a relational representation for each person.
There are also efforts that explore modeling the scene context via structured recurrent neural networks ~\cite{cite_a_5,cite_a_7,cite_a_18} or generating captions ~\cite{cite_a_6}.
Our work differs from these approaches in that it explicitly models interaction information by building a flexible and interpretable ARG. Moreover, instead of using an RNN for information fusion, we employ a GCN with a sparse temporal sampling strategy, which enables relational reasoning in an efficient manner.
{\bf Visual relation.}
Modeling or learning relation between objects or entities is an important problem in computer vision~\cite{cite_r_6,cite_r_7,cite_r_8,cite_r_9,Wang0T16}.
Several recent works focus on detecting and recognizing human-object interactions (HOI)~\cite{cite_r_10,cite_r_11,cite_r_12,cite_r_13,cite_r_15}, which usually requires additional annotations of interactions.
In scene understanding, a lot of efforts have been made on modeling pair-wise relationships for scene graph generation~\cite{cite_r_1,cite_r_2,cite_r_3,cite_r_4,cite_r_5,cite_g_16}.
Santoro \etal ~\cite{cite_r_16} proposed a relation network module for relational reasoning between objects, which achieves super-human performance in visual question answering.
Hu \etal ~\cite{cite_r_17} applied an object relation module to object detection, and verified the efficacy of modeling object relations in CNN based detection.
Besides, many works showed that modeling interactions information can help action recognition~\cite{cite_r_18,cite_r_19,cite_r_20,cite_r_21,cite_r_22}.
We show that explicitly exploiting the relation information can achieve significant gain on group activity recognition accuracy.
{\bf Neural networks on graphs.}
Recently, integrating graphical models with deep neural networks is an emerging topic in deep learning research.
A considerable number of models have been proposed for reasoning on graph-structured data in various tasks, such as classification of graphs
~\cite{cite_g_5,cite_g_6,cite_g_7,cite_g_8,cite_g_9},
classification of nodes in graphs
~\cite{cite_g_10,cite_g_11,cite_g_12},
and modeling multi-agent interacting physical systems
~\cite{cite_g_1,cite_g_2,cite_g_3,cite_g_13}.
In our work, we apply the Graph Convolutional Network (GCN)~\cite{cite_g_10} which was originally proposed for semi-supervised learning on the problem of classifying nodes in a graph.
There are also applications of GCNs to single-human action recognition problems~\cite{cite_g_14,cite_g_15}.
However, it would be inefficient to compute all pair-wise relations across all video frames to model a video as a fully connected graph.
Therefore, we model the multi-person scene as a sparse graph according to relative location.
Meanwhile, we propose to combine GCN with sparse temporal sampling strategy~\cite{cite_o_1} for more efficient learning.
\section{Approach}
Our goal is to recognize group activity in multi-person scenes by explicitly exploiting relation information. To this end, we build an {\em Actor Relation Graph} (ARG) to represent the multi-person scene and perform relational reasoning on it for group activity recognition. In this section, we give detailed descriptions of our approach. First, we present an overview of our framework. Then, we introduce how to build the ARG. Finally, we describe efficient training and inference algorithms for the ARG.
\subsection{Group Activity Recognition Framework}
\label{section:Framework}
The overall network framework is illustrated in Figure~\ref{fig:figure_2}.
Given a video sequence and the bounding boxes of the actors in the scene, our framework takes three key steps.
First, we uniformly sample a set of $K$ frames from the video and extract feature vectors of actors from sampled frames.
We follow the feature extraction strategy used in~\cite{cite_a_1}, which adopts Inception-v3~\cite{cite_o_4} to extract a multi-scale feature map for each frame. We have also conducted experiments on other backbone models to verify the generality and effectiveness of our approach. We apply RoIAlign~\cite{cite_o_3} to extract the features for each actor bounding box from the frame feature map. After that, a fully connected (fc) layer is applied to the aligned features to obtain a $d$-dimensional appearance feature vector for each actor.
The total number of bounding boxes in $K$ frames is denoted as $N$.
We use a $ N \times d$ matrix $\mathbf{X}$ to represent feature vectors of actors.
Afterwards, upon these original features of actors, we build actor relation graphs, where each node denotes an actor.
Each edge in the graphs is a scalar weight, which is computed according to two actors' appearance features and their relative location.
To represent diverse relation information, we construct multiple relation graphs from the same set of actor features.
Finally, we perform learning and inference to recognize individual actions and group activity.
We apply the GCN to conduct relational reasoning based on ARG.
After graph convolution, the outputs of the ARGs are fused together to generate a relational representation for the actors, which also has dimension $N \times d$.
Two classifiers, for individual actions and for group activity respectively, are then applied to the pooled relational and original representations of the actors.
We apply a fully connected layer on individual representation for individual action classification. The actor representations are maxpooled together to generate scene-level representation, which is used for group activity classification through another fully connected layer.
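The aggregation step above can be sketched as follows. This is a minimal NumPy illustration with hypothetical dimensions and random stand-ins for the learned fc-layer weights, not the actual trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 12, 1024                 # hypothetical: 12 actors, 1024-d features
n_actions, n_activities = 9, 8  # label counts from the Volleyball dataset

X = rng.standard_normal((N, d))            # per-actor representations

# Individual action scores: one fc layer applied to each actor's feature.
W_ind = rng.standard_normal((d, n_actions)) * 0.01
action_scores = X @ W_ind                  # shape (N, n_actions)

# Scene-level representation: max-pool over the actor axis.
scene = X.max(axis=0)                      # shape (d,)

# Group activity scores from the pooled scene feature via another fc layer.
W_grp = rng.standard_normal((d, n_activities)) * 0.01
activity_scores = scene @ W_grp            # shape (n_activities,)
```

Max-pooling makes the scene representation invariant to the number and ordering of actors, which is why a single fc classifier can then be applied regardless of how many people are in the frame.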
\subsection{Building Actor Relation Graphs}
\label{section:Graph}
As mentioned above, ARG is the key component in our framework.
We utilize the graph structure to explicitly model pair-wise relation information for group activity understanding.
Our design is inspired by the recent success of relational reasoning and graph neural networks~\cite{cite_r_16,cite_g_10}.
{\bf Graph definition.}
Formally, the nodes in our graph correspond to a set of actors $A=\{(\mathbf{x}_i^a,\mathbf{x}_i^s)|i=1,\cdots,N\}$, where $N$ is the number of actors, $\mathbf{x}_i^a \in \mathbb{R}^{d}$ is actor $i$'s appearance feature, and $\mathbf{x}_i^s=(t_i^x,t_i^y)$ denotes the center coordinates of actor $i$'s bounding box. We construct a graph $\mathbf{G} \in \mathbb{R}^{N \times N}$ to represent the pair-wise relations among actors, where the relation value $\mathbf{G}_{ij}$ indicates the importance of actor $j$'s feature to actor $i$.
In order to obtain sufficient representational power to capture underlying relation between two actors, both appearance features and position information need to be considered. Moreover, we note that appearance relation and position relation have different semantic attributes. To this end, we model the appearance relation and position relation in a separate and explicit way. The relation value is defined as a composite function below:
\begin{equation}
\mathbf{G}_{ij}=h\left( f_a(\mathbf{x}_i^a,\mathbf{x}_j^a),f_s(\mathbf{x}_i^s,\mathbf{x}_j^s) \right) ,
\label{eq:graph}
\end{equation}
where $f_a(\mathbf{x}_i^a,\mathbf{x}_j^a)$ denotes the appearance relation between two actors, and the position relation is computed by $f_s(\mathbf{x}_i^s,\mathbf{x}_j^s)$. The function $h$ fuses appearance and position relation to a scalar weight.
In our experiments, we adopt the following function to compute relation value:
\begin{equation}
\mathbf{G}_{ij}=\frac{ f_s(\mathbf{x}_i^s,\mathbf{x}_j^s) ~ \mathrm{exp} \left( f_a(\mathbf{x}_i^a,\mathbf{x}_j^a) \right) }{ \sum_{j=1}^N f_s(\mathbf{x}_i^s,\mathbf{x}_j^s) ~ \mathrm{exp} \left( f_a(\mathbf{x}_i^a,\mathbf{x}_j^a) \right) } ,
\label{eq:graph_softmax}
\end{equation}
where we perform normalization on each actor node using a softmax function, so that the relation values of each actor node $i$ sum to $1$.
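Given precomputed appearance and position relation matrices, this row-wise normalization can be sketched in NumPy; the toy values below are purely illustrative:

```python
import numpy as np

def relation_graph(f_a, f_s):
    # G_ij = f_s(i,j) exp(f_a(i,j)) / sum_j f_s(i,j) exp(f_a(i,j))
    w = f_s * np.exp(f_a)
    return w / w.sum(axis=1, keepdims=True)

# Toy appearance relation values for 3 actors; f_s = 1 (no position masking here).
f_a = np.array([[ 0.0, 1.0, -1.0],
                [ 1.0, 0.0,  0.5],
                [-1.0, 0.5,  0.0]])
f_s = np.ones((3, 3))
G = relation_graph(f_a, f_s)   # each row of G sums to 1
```

With f_s equal to 1 everywhere this reduces to a plain softmax over each row of f_a; a zero entry in f_s removes that edge from the softmax entirely.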
{\bf Appearance relation.}
Here we discuss different choices for computing appearance relation value between actors:
\begin{enumerate} [fullwidth, itemindent=1.2em, nosep, label=(\arabic*)]
\item {\em Dot-Product}: The dot-product similarity of appearance features can be considered as a simple form of relation value. It is computed as:
\begin{equation}
f_a(\mathbf{x}_i^a,\mathbf{x}_j^a)=\frac{(\mathbf{x}_i^a)^\mathrm{T} \mathbf{x}_j^a}{ \sqrt{d} },
\end{equation}
where $\sqrt{d}$ acts as a normalization factor.
\item {\em Embedded Dot-Product}: Inspired by the Scaled Dot-Product Attention mechanism~\cite{cite_s_1}, we can extend the dot-product operation to compute similarity in an embedding space, and the corresponding function can be expressed as:
\begin{equation}
f_a(\mathbf{x}_i^a,\mathbf{x}_j^a)=\frac{\theta(\mathbf{x}_i^a)^\mathrm{T}\phi(\mathbf{x}_j^a)}{\sqrt{d_k}},
\label{eq:embedded_dot_product}
\end{equation}
where $\theta(\mathbf{x}_i^a)=\mathbf{W}_\theta \mathbf{x}_i^a+\mathbf{b}_\theta$ and $\phi(\mathbf{x}_j^a)=\mathbf{W}_\phi \mathbf{x}_j^a+\mathbf{b}_\phi$ are two learnable linear transformations. $\mathbf{W}_\theta \in \mathbb{R}^{d_k \times d}$ and $\mathbf{W}_\phi \in \mathbb{R}^{d_k \times d}$ are weight matrices, $\mathbf{b}_\theta \in \mathbb{R}^{d_k}$ and $\mathbf{b}_\phi \in \mathbb{R}^{d_k}$ are weight vectors. By learnable transformations of original features, we can learn the relation value between two actors in a subspace.
\item {\em Relation Network}: We also evaluate the Relation Network module proposed in~\cite{cite_r_16}. It can be written as:
\begin{equation}
f_a(\mathbf{x}_i^a,\mathbf{x}_j^a)=\mathrm{ReLU} \left( \mathbf{W}[\theta(\mathbf{x}_i^a), \phi(\mathbf{x}_j^a)]+\mathbf{b} \right) ,
\end{equation}
where $[ \cdot,\cdot ]$ is the concatenation operation and $\mathbf{W}$ and $\mathbf{b}$ are learnable weights that project the concatenated vector to a scalar, followed by a ReLU non-linearity.
\end{enumerate}
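The three appearance relation functions can be sketched as follows, with random matrices standing in for the learnable transformations $\mathbf{W}_\theta$, $\mathbf{W}_\phi$, $\mathbf{W}$ (toy dimensions; this is an illustration of the formulas, not the trained model):

```python
import numpy as np

d, d_k = 8, 4
rng = np.random.default_rng(1)
xi, xj = rng.standard_normal(d), rng.standard_normal(d)

# (1) Dot-product, scaled by sqrt(d).
def dot_product(xi, xj):
    return xi @ xj / np.sqrt(d)

# Stand-ins for the learnable embeddings theta and phi.
W_t, b_t = rng.standard_normal((d_k, d)), rng.standard_normal(d_k)
W_p, b_p = rng.standard_normal((d_k, d)), rng.standard_normal(d_k)

# (2) Embedded dot-product in a d_k-dimensional subspace.
def embedded_dot_product(xi, xj):
    return (W_t @ xi + b_t) @ (W_p @ xj + b_p) / np.sqrt(d_k)

# (3) Relation network: ReLU over a projection of the concatenated embeddings.
W, b = rng.standard_normal(2 * d_k), 0.1
def relation_network(xi, xj):
    cat = np.concatenate([W_t @ xi + b_t, W_p @ xj + b_p])
    return max(0.0, W @ cat + b)
```

All three map a pair of appearance features to a single scalar $f_a(\mathbf{x}_i^a,\mathbf{x}_j^a)$, which then enters the softmax normalization that builds $\mathbf{G}$.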
{\bf Position relation.}
In order to add spatial structural information to the actor graph, the position relations between actors need to be considered. To this end, we investigate two approaches to using spatial features in our work:
\begin{enumerate} [fullwidth, itemindent=1.2em, nosep, label=(\arabic*)]
\item {\em Distance Mask}: Generally, signals from local entities are more important than those from distant entities, and relation information in the local scope is more significant than global relations for modeling group activity. Based on these observations, we set $\mathbf{G}_{ij}$ to zero for two actors whose distance is above a certain threshold. We call the resulting ARG a {\em localized} ARG. The $f_s$ is defined as:
\begin{equation}
f_s(\mathbf{x}_i^s,\mathbf{x}_j^s)=\mathbb{I} \left( d(\mathbf{x}_i^s,\mathbf{x}_j^s) \leq \mu \right) ,
\end{equation}
where $\mathbb{I}(\cdot)$ is the indicator function, $d(\mathbf{x}_i^s,\mathbf{x}_j^s)$ denotes the Euclidean distance between center points of two actors' bounding boxes, and $\mu$ acts as a distance threshold which is a hyper-parameter.
\item {\em Distance Encoding}: Alternatively, we can use the recent approaches~\cite{cite_s_1} for learning position relation. Specifically, the position relation value is computed as
\begin{equation}
f_s(\mathbf{x}_i^s,\mathbf{x}_j^s)=\mathrm{ReLU} \left( \mathbf{W}_{s}\mathcal{E}(\mathbf{x}_i^s,\mathbf{x}_j^s)+\mathbf{b}_s \right) ,
\end{equation}
where the relative distance between two actors is embedded into a high-dimensional representation by $\mathcal{E}$, using cosine and sine functions of different wavelengths. The feature dimension after embedding is $d_s$. We then transform the embedded feature into a scalar using the weights $\mathbf{W}_{s}$ and $\mathbf{b}_s$, followed by a ReLU activation.
\end{enumerate}
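The distance mask variant of $f_s$ is straightforward to sketch: the indicator function becomes a boolean comparison on pairwise Euclidean distances between box centers (toy coordinates below are illustrative):

```python
import numpy as np

def distance_mask(centers, mu):
    """f_s as an indicator: 1 if two bounding-box centers are within mu, else 0."""
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)   # (N, N) pairwise Euclidean distances
    return (dist <= mu).astype(float)      # (N, N) {0, 1} mask

# Three toy actor centers; actors 0 and 1 are 5 units apart, the rest farther.
centers = np.array([[0.0, 0.0], [3.0, 4.0], [10.0, 0.0]])
mask = distance_mask(centers, mu=6.0)
```

Because $f_s$ multiplies the exponentiated appearance term in the softmax, a zero in this mask removes the edge entirely, yielding the localized ARG.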
{\bf Multiple graphs.}
A single ARG $\mathbf{G}$ typically focuses on a specific relation signal between actors, thereby discarding a considerable amount of context information. In order to capture diverse types of relation signals, we extend the single actor relation graph to multiple graphs. That is, we build a group of graphs $\mathcal{G}=(\mathbf{G}^1,\mathbf{G}^2,\cdots,\mathbf{G}^{N_g})$ over the same set of actors, where $N_g$ is the number of graphs.
Every graph $\mathbf{G}^i$ is computed in the same way according to \myref{eq:graph_softmax}, but with unshared weights. Building multiple relation graphs allows the model to jointly attend to different types of relation between actors. Hence, the model can make more robust relational reasoning upon the graphs.
{\bf Temporal modeling.}
Temporal context information is a crucial cue for activity recognition. Different from prior works, which employ recurrent neural networks to aggregate temporal information over dense frames, our model merges information in the temporal domain via a sparse temporal sampling strategy~\cite{cite_o_1}. During training, we randomly sample a set of $K=3$ frames from the entire video and build temporal graphs upon the actors in these frames. We call the resulting ARG a {\em randomized} ARG. At test time, we use a sliding window approach, and the activity scores from all windows are mean-pooled to form the global activity prediction.
Empirically, we find that sparsely sampling frames during training yields significant improvements in recognition accuracy. A key reason is that existing group activity recognition datasets (e.g., the Collective Activity dataset and the Volleyball dataset) remain limited in both size and diversity. Therefore, randomly sampling the video frames results in more diversity during training and reduces the risk of over-fitting. Moreover, this sparse sampling strategy preserves temporal information at a dramatically lower cost, thus enabling end-to-end learning under a reasonable budget in both time and computing resources.
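A segment-based sparse sampling scheme in the spirit of~\cite{cite_o_1} can be sketched as follows; this is an assumed implementation of the idea (split the clip into $K$ segments, pick one frame per segment: random during training, the segment center at test time), not the paper's exact code:

```python
import random

def sample_frames(num_frames, k=3, training=True):
    """Sparsely sample k frame indices from a clip of num_frames frames."""
    seg = num_frames / k
    if training:
        # Random frame within each of the k equal segments.
        return [int(i * seg) + random.randrange(max(1, int(seg)))
                for i in range(k)]
    # Deterministic: the center frame of each segment.
    return [int(i * seg + seg / 2) for i in range(k)]

random.seed(0)
train_idx = sample_frames(10, k=3, training=True)
test_idx = sample_frames(10, k=3, training=False)
```

Each training epoch thus sees a different frame combination from the same clip, which is the source of the extra sample diversity discussed above.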
\subsection{Reasoning and Training on Graphs}
Once the ARGs are built, we can perform relational reasoning on them for recognizing individual actions and group activity. We first review a graph reasoning module, called Graph Convolutional Network (GCN)~\cite{cite_g_10}. GCN takes a graph as input, performs computations over the structure, and returns a graph as output, which can be considered as a ``graph-to-graph" block. For a target node $i$ in the graph, it aggregates features from all neighbor nodes according to the edge weight between them. Formally, one layer of GCN can be written as:
\begin{equation}
\mathbf{Z}^{(l+1)}=\sigma \left( \mathbf{G}\mathbf{Z}^{(l)}\mathbf{W}^{(l)} \right) ,
\end{equation}
where $\mathbf{G} \in \mathbb{R}^{N \times N}$ is the matrix representation of the graph. $\mathbf{Z}^{(l)} \in \mathbb{R}^{N \times d}$ is the feature representation of the nodes in the $l^{th}$ layer, with $\mathbf{Z}^{(0)}=\mathbf{X}$. $\mathbf{W}^{(l)} \in \mathbb{R}^{d \times d}$ is the layer-specific learnable weight matrix. $\sigma (\cdot)$ denotes an activation function, and we adopt ReLU in this work. This layer-wise propagation can be stacked into multiple layers. For simplicity, we use only a single GCN layer in this work.
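A single GCN layer is one matrix chain followed by the nonlinearity, $\mathbf{Z}^{(l+1)}=\sigma(\mathbf{G}\mathbf{Z}^{(l)}\mathbf{W}^{(l)})$. A minimal NumPy sketch with toy sizes and random stand-ins for the learned quantities:

```python
import numpy as np

def gcn_layer(G, Z, W):
    """One GCN layer: Z' = ReLU(G Z W)."""
    return np.maximum(G @ Z @ W, 0.0)

rng = np.random.default_rng(2)
N, d = 5, 16
G = rng.random((N, N))
G /= G.sum(axis=1, keepdims=True)   # row-normalized relation graph
Z = rng.standard_normal((N, d))     # Z^(0) = X, the actor features
W = rng.standard_normal((d, d))     # layer weight matrix
Z1 = gcn_layer(G, Z, W)             # shape (N, d)
```

The multiplication by $\mathbf{G}$ mixes each actor's feature with its neighbors' features in proportion to the relation values, which is exactly the aggregation described above.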
The original GCN operates on a single graph structure.
How to fuse a group of graphs after the GCN remains an open question. In this work, we employ a late fusion scheme, namely fusing the features of the same actor across different graphs after the GCN:
\begin{equation}
\mathbf{Z}^{(l+1)} = \sum^{N_g}_{i=1}{\sigma \left( \mathbf{G}^i\mathbf{Z}^{(l)}\mathbf{W}^{(l,i)} \right)} ,
\end{equation}
where we employ element-wise sum as a fusion function. We also evaluate concatenation as fusion function. Alternatively, a group of graphs can also be fused by early fusion, that is, fused via summation to one graph before GCN. We compare different methods of fusing a group of graphs in our experiments.
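The late fusion scheme can be sketched as follows: one GCN layer per graph with unshared weights, with the outputs summed element-wise (NumPy; toy sizes, random stand-ins for learned weights):

```python
import numpy as np

def multi_graph_gcn(graphs, Z, weights):
    """Late fusion: a GCN layer per graph with unshared weights,
    then element-wise sum of the outputs."""
    return sum(np.maximum(G @ Z @ W, 0.0) for G, W in zip(graphs, weights))

rng = np.random.default_rng(3)
N, d, n_g = 4, 8, 3
graphs = [rng.random((N, N)) for _ in range(n_g)]       # N_g relation graphs
weights = [rng.standard_normal((d, d)) for _ in range(n_g)]  # W^(l,i), unshared
Z = rng.standard_normal((N, d))
out = multi_graph_gcn(graphs, Z, weights)               # shape (N, d)
```

Early fusion would instead sum the graphs first and run a single GCN layer; keeping separate weight matrices per graph is what lets each graph specialize to a different relation type.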
Finally, the output relational features from the GCN are fused with the original features via summation to form the scene representation. As illustrated in Figure~\ref{fig:figure_2}, the scene representation is fed to two classifiers to generate individual action and group activity predictions.
The whole model can be trained in an end-to-end manner with backpropagation.
Using the standard cross-entropy loss, the final loss function is formed as
\begin{equation}
\mathcal{L}=\mathcal{L}_1(y^{G},\hat{y}^{G}) + \lambda\mathcal{L}_2(y^{I},\hat{y}^{I}) ,
\end{equation}
where $\mathcal{L}_1$ and $\mathcal{L}_2$ are cross-entropy losses, $y^{G}$ and $y^{I}$ denote the ground-truth labels of group activity and individual actions, and $\hat{y}^{G}$ and $\hat{y}^{I}$ are the predictions for group activity and individual actions. The first term is the group activity classification loss, and the second is the individual action classification loss. The weight $\lambda$ balances these two tasks.
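The two-term objective can be sketched as follows, averaging the individual-action loss over actors (an assumed detail; the paper does not specify the reduction) and weighting it by $\lambda$:

```python
import numpy as np

def cross_entropy(scores, label):
    """Softmax cross-entropy for a single example, computed stably."""
    s = scores - scores.max()
    log_probs = s - np.log(np.exp(s).sum())
    return -log_probs[label]

def total_loss(act_scores, act_label, ind_scores, ind_labels, lam=1.0):
    """L = L1(group activity) + lambda * mean L2(individual actions)."""
    l1 = cross_entropy(act_scores, act_label)
    l2 = np.mean([cross_entropy(s, y) for s, y in zip(ind_scores, ind_labels)])
    return l1 + lam * l2

# Toy scores: 3 hypothetical activity classes, 2 actors with 2 action classes.
act_scores = np.array([2.0, 0.5, -1.0])
ind_scores = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
loss = total_loss(act_scores, 0, ind_scores, [0, 1], lam=1.0)
```

Setting $\lambda=0$ recovers pure group activity training, which is one way to probe how much the auxiliary individual-action supervision contributes.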
\begin{table}
\begin{subtable}{0.48\textwidth}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Accuracy \\
\hline\hline
base model & 89.8\% \\
\hline
dot-product & {\bf 91.3\%} \\
embedded dot-product & {\bf 91.3\%} \\
relation network & 90.7\% \\
\hline
\end{tabular}
\caption{Exploration of different appearance relation functions.}
\label{table:table_1a}
\end{center}
\label{}
\end{subtable}
\begin{subtable}{0.48\textwidth}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Accuracy \\
\hline\hline
no position relation & 91.3\% \\
\hline
distance mask & {\bf 91.6\%} \\
distance encoding & 91.5\% \\
\hline
\end{tabular}
\caption{Exploration of different position relation functions.}
\label{table:table_1b}
\end{center}
\label{}
\end{subtable}
\begin{subtable}{0.48\textwidth}
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Number & 1 & 4 & 8 & 16 & 32 \\
\hline
Accuracy & 91.6\% & 92.0\% & 92.0\% & {\bf 92.1\%} & 92.0\% \\
\hline
\end{tabular}
}
\caption{Exploration of number of graphs.}
\label{table:table_1c}
\end{center}
\label{}
\end{subtable}
\begin{subtable}{0.48\textwidth}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Accuracy \\
\hline\hline
early fusion & 90.8\% \\
late fusion (summation) & {\bf 92.1\%} \\
late fusion (concatenation) & 91.9\% \\
\hline
\end{tabular}
\caption{Exploration of different methods for fusing multiple graphs.}
\label{table:table_1d}
\end{center}
\end{subtable}
\begin{subtable}{0.48\textwidth}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Accuracy \\
\hline\hline
single frame & 92.1\% \\
\hline
TSN (3 frames) & 92.3\% \\
temporal-graphs (3 frames) & {\bf 92.5\%} \\
\hline
\end{tabular}
\caption{Exploration of temporal modeling methods.}
\label{table:table_1e}
\end{center}
\end{subtable}
\caption{Ablation studies for group activity recognition accuracy on the Volleyball dataset.}
\label{table:table_1}
\end{table}
\section{Experiments}
\label{section:Experiments}
In this section, we first introduce two widely adopted datasets and the implementation details of our approach. Then, we perform a number of ablation studies to understand the effects of the proposed components of our model. We also compare the performance of our model with state-of-the-art methods. Finally, we visualize our learned actor relation graphs and features.
\subsection{Datasets and Implementation Details}
{\bf Datasets.}
We conduct experiments on two publicly available group activity recognition datasets, namely the Volleyball dataset and the Collective Activity dataset.
The Volleyball dataset~\cite{cite_d_2} is composed of 4830 clips gathered from 55 volleyball games, with 3493 training clips and 1337 testing clips. Each clip is labeled with one of 8 group activity labels (right set, right spike, right pass, right winpoint, left set, left spike, left pass and left winpoint). Only the middle frame of each clip is annotated with the players' bounding boxes and their individual actions from 9 personal action labels (waiting, setting, digging, falling, spiking, blocking, jumping, moving and standing). Following~\cite{cite_a_2}, we use 10 frames to train and test our model, corresponding to 5 frames before the annotated frame and 4 frames after it. To obtain ground-truth bounding boxes for the unannotated frames, we use the tracklet data provided by~\cite{cite_a_1}.
The Collective Activity dataset~\cite{cite_d_1} contains 44 short video sequences (about 2500 frames) covering 5 group activities (crossing, waiting, queueing, walking and talking) and 6 individual actions (NA, crossing, waiting, queueing, walking and talking). The group activity label of a frame is defined by the activity in which most people participate. We follow the evaluation scheme of~\cite{cite_a_18} and select $1/3$ of the video sequences for testing and the rest for training.
{\bf Implementation details.}
We extract a 1024-dimensional feature vector for each actor from the ground-truth bounding boxes, using the method described in Section~\ref{section:Framework}.
During the ablation studies, we adopt Inception-v3 as the backbone network. We also experiment with the VGG~\cite{cite_o_6} network for fair comparison with prior methods.
Due to memory limits, we train our model in two stages: first, we fine-tune the ImageNet pre-trained model, without the GCN, on a single frame randomly selected from each video.
We refer to this fine-tuned model as the base model throughout the experiments.
The base model performs group activity and individual action classification on the original actor features without relational reasoning.
Then we fix the weights of the feature-extraction part of the network and further train the network with the GCN.
We use the Adam optimizer with hyper-parameters fixed to $\beta_1=0.9,\beta_2=0.999,\epsilon=10^{-8}$.
For the Volleyball dataset, we train the network for 150 epochs using a mini-batch size of 32 and a learning rate annealed from $0.0002$ to $0.00001$.
For the Collective Activity dataset, we use a mini-batch size of 16 with a learning rate of $0.0001$, and train the network for 80 epochs.
The individual action loss weight is $\lambda=1$.
The GCN parameters are set to $d_k=256,d_s=32$, and the distance mask threshold $\mu$ is set to $1/5$ of the image width.
Our implementation is based on PyTorch deep learning framework.
Inference on a single video takes approximately 0.2\,s on a single TITAN-XP GPU.
\subsection{Ablation Studies}
In this subsection, we perform detailed ablation studies on the Volleyball dataset to understand the contribution of each proposed component to relation modeling, using group activity recognition accuracy as the evaluation metric. The results are shown in Table~\ref{table:table_1}.
{\bf Appearance relation.}
We begin by studying the effect of modeling the appearance relation between actors and the different functions used to compute appearance relation values. Using a single frame, we build a single ARG without position relation. The results are listed in Table~\ref{table:table_1a}.
We first observe that explicitly modeling the relation between actors brings a significant performance improvement: all models with the GCN outperform the base model.
The dot-product and embedded dot-product yield the same recognition accuracy of $91.3\%$, and both perform better than the relation network.
We conjecture that the dot-product operation is more stable for representing relation information.
In the following experiments, the embedded dot-product is used to compute appearance relation values.
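A minimal sketch of the embedded dot-product relation follows; the linear embeddings, the $1/\sqrt{d_k}$ scaling, and the row-wise softmax normalization are assumptions consistent with the implementation details ($d_k=256$), not an exact transcription of our code:

```python
import numpy as np

def softmax(v):
    """Row-wise softmax."""
    e = np.exp(v - v.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def embedded_dot_product_graph(X, W_theta, W_phi):
    """Relation matrix G[i, j] from embedded dot-product of actor features.

    X: (N, d) actor features; W_theta, W_phi: (d, d_k) learned embedding
    matrices. Each row of G is softmax-normalized so it sums to 1.
    """
    E = (X @ W_theta) @ (X @ W_phi).T / np.sqrt(W_theta.shape[1])
    return softmax(E)
```

Each row of the returned matrix is a normalized distribution over the other actors, which can serve directly as graph-convolution weights.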
{\bf Position relation.}
We further add spatial structural information to the ARG. In Section~\ref{section:Graph}, we presented two methods for using spatial features: the distance mask and distance encoding.
Results comparing these two methods are reported in Table~\ref{table:table_1b}.
Both methods obtain better performance than the model without spatial features, demonstrating the effectiveness of modeling position relation, and the distance mask yields slightly better accuracy than distance encoding.
In the rest of the paper, we choose the distance mask to represent position relation.
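As a sketch of the distance mask (filling masked pairs with $-\infty$ before the softmax normalization is one plausible realization; function names are ours):

```python
import numpy as np

def distance_mask(centers, mu):
    """Boolean mask M[i, j] = True if actors i and j are within distance mu.

    centers: (N, 2) actor bounding-box centers; mu: threshold (we use 1/5
    of the image width).
    """
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return d <= mu

def apply_mask(E, mask):
    """Suppress relation values of distant pairs so they vanish after softmax."""
    return np.where(mask, E, -np.inf)
```

For an image 1280 pixels wide, $\mu = 256$, so actors 100 pixels apart stay connected while those 400 pixels apart are masked out.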
{\bf Multiple graphs.}
We also investigate the effectiveness of building a group of graphs to capture different kinds of relation information.
First, we compare the performance of using different numbers of graphs.
As shown in Table~\ref{table:table_1c}, building multiple graphs leads to a consistent and significant gain over a single graph, boosting accuracy from $91.6\%$ to $92.1\%$.
Then we evaluate three methods to fuse a group of graphs: (1) early fusion, (2) late fusion via summation, and (3) late fusion via concatenation. The results using 16 graphs are summarized in Table~\ref{table:table_1d}. Late fusion via summation achieves the best performance.
We note that the early fusion scheme, which aggregates the group of graphs by summation before the GCN, causes a dramatic performance drop.
This observation indicates that the relation values learned by different graphs encode different semantic information, and fusing them before graph convolution causes confusion in relational reasoning.
We adopt $N_g=16$ and late fusion via summation in the following experiments.
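Late fusion via summation can be sketched as one graph-convolution layer per graph followed by a sum; the ReLU placement and per-graph weight matrices are assumptions for illustration:

```python
import numpy as np

def multi_graph_gcn_layer(X, graphs, weights):
    """One GCN layer over a group of relation graphs, fused late by summation.

    X: (N, d) actor features; graphs: list of (N, N) relation matrices G;
    weights: list of (d, d) per-graph weight matrices W.
    Early fusion would instead sum the graphs first and run a single GCN.
    """
    relu = lambda z: np.maximum(z, 0.0)
    # run graph convolution per graph, then sum the outputs (late fusion)
    return sum(relu(G @ X @ W) for G, W in zip(graphs, weights))
```

With identity graphs and weights on non-negative features, the output of two fused graphs is simply twice the input, which makes the summation semantics easy to verify.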
{\bf Temporal modeling.}
With all the design choices set, we now extend our model to the temporal domain. As mentioned in Section~\ref{section:Graph}, we employ the sparse temporal sampling strategy~\cite{cite_o_1} and uniformly sample a set of $K=3$ frames from the entire video during training.
In the simplest setting, we can handle the input frames separately and then fuse the prediction scores of the different frames, as in Temporal Segment Networks (TSN)~\cite{cite_o_1}.
Alternatively, we can build temporal graphs over the actors in the input frames and fuse temporal information through the GCN.
We report the accuracies of these two temporal modeling methods in Table~\ref{table:table_1e}.
TSN-style fusion improves the performance of our model.
Moreover, building temporal graphs further boosts accuracy to $92.5\%$, which demonstrates that temporal reasoning helps to differentiate between group activity categories.
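The simplest TSN-style consensus above amounts to averaging per-frame prediction scores before taking the argmax (a sketch; averaging is the standard TSN consensus, though other aggregations exist):

```python
import numpy as np

def tsn_fusion(frame_scores):
    """TSN-style consensus over K sampled frames.

    frame_scores: (K, C) per-frame class scores; returns the predicted class
    after averaging the scores across frames.
    """
    avg = frame_scores.mean(axis=0)  # (K, C) -> (C,)
    return int(avg.argmax())
```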
\begin{table}[t]
\begin{subtable}{0.48\textwidth}
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{|l|c|c|c|}
\hline
\multirow{2}*{Method} & \multirow{2}*{Backbone} & Group & Individual \\
~ & ~ & activity & action \\
\hline\hline
HDTM~\cite{cite_a_2} & AlexNet & 81.9\% & - \\
CERN~\cite{cite_a_4} & VGG16 & 83.3\% & - \\
stagNet (GT)~\cite{cite_a_18} & VGG16 & 89.3\% & - \\
stagNet (PRO)~\cite{cite_a_18} & VGG16 & 87.6\% & - \\
HRN~\cite{cite_a_17} & VGG19 & 89.5\% & - \\
SSU (GT)~\cite{cite_a_1} & Inception-v3 & 90.6\% & 81.8\% \\
SSU (PRO)~\cite{cite_a_1} & Inception-v3 & 86.2\% & 77.4\% \\
\hline\hline
OURS (GT) & Inception-v3 & 92.5\% & 83.0\% \\
OURS (PRO) & Inception-v3 & 91.5\% & - \\
OURS (GT) & VGG16 & 91.9\% & \bf{83.1}\% \\
OURS (GT) & VGG19 & \bf{92.6\%} & 82.6\% \\
\hline
\end{tabular}
}
\caption{Comparison with state of the art on the Volleyball dataset.}
\label{table:table_2a}
\end{center}
\end{subtable}
\begin{subtable}{0.48\textwidth}
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{|l|c|c|}
\hline
Method & Backbone & Group activity \\
\hline\hline
SIM~\cite{cite_a_5} & AlexNet & 81.2\% \\
HDTM~\cite{cite_a_2} & AlexNet & 81.5\% \\
Cardinality Kernel~\cite{cite_a_19} & None & 83.4\% \\
SBGAR~\cite{cite_a_6} & Inception-v3 & 86.1\% \\
CERN~\cite{cite_a_4} & VGG16 & 87.2\% \\
stagNet (GT)~\cite{cite_a_18} & VGG16 & 89.1\% \\
stagNet (PRO)~\cite{cite_a_18} & VGG16 & 87.9\% \\
\hline\hline
OURS (GT) & Inception-v3 & \bf{91.0\%} \\
OURS (PRO) & Inception-v3 & 90.2\% \\
OURS (GT) & VGG16 & 90.1\% \\
\hline
\end{tabular}
}
\end{center}
\caption{Comparison with state of the art on the Collective dataset.}
\label{table:table_2b}
\end{subtable}
\vspace{-2mm}
\caption{Comparison with state of the art methods. GT and PRO indicate using ground-truth and proposal-based bounding boxes, respectively.}
\vspace{-2mm}
\label{table:table_2}
\end{table}
\begin{figure*}[t]
\setlength{\abovecaptionskip}{-0.3cm}
\setlength{\belowcaptionskip}{-0.2cm}
\begin{center}
\includegraphics[width=0.92\linewidth]{visual.pdf}
\end{center}
\caption{Visualization of learned actor relation graphs.
Each row shows two examples.
For each example, we plot: (1) the input frame with ground-truth bounding boxes and the group activity label; (2) the matrix $\mathbf{G}$ of the learned relation graph with ground-truth individual action labels. The actor with the largest column sum of $\mathbf{G}$ in each frame is marked with a red star.
}
\label{fig:visual}
\end{figure*}
\begin{figure*}
\setlength{\abovecaptionskip}{-0.3cm}
\setlength{\belowcaptionskip}{-0.2cm}
\begin{center}
\includegraphics[width=0.92\linewidth]{tsne.pdf}
\end{center}
\caption{t-SNE~\cite{cite_o_2} visualization of the video representations on the Volleyball dataset learned by different model variants: base model, single graph, multiple graphs, temporal multiple graphs. Each video is visualized as one point, and colors denote different group activities (best viewed in color).}
\label{fig:tsne}
\end{figure*}
\subsection{Comparison with the State of the Art}
Now, we compare our best models with the state-of-the-art methods in Table~\ref{table:table_2}.
For fair comparison with prior methods, we report results with both the Inception-v3 and VGG backbone networks.
We also perform a proposal-based experiment: we train a Faster R-CNN~\cite{cite_o_7} detector on the training data, and our model still achieves promising accuracy when using its bounding boxes at testing time.
Table~\ref{table:table_2a} shows the comparison with previous results on the Volleyball dataset for group activity and individual action recognition.
Our method surpasses all existing methods by a good margin, establishing a new state of the art.
Our model with Inception-v3 uses the same feature extraction strategy as~\cite{cite_a_1} and outperforms it by about $2\%$ in group activity recognition accuracy, since our model can capture and exploit the relation information among actors.
We also achieve better performance on the individual action recognition task.
Meanwhile, our method outperforms the recent methods based on hierarchical relational networks~\cite{cite_a_17} or semantic RNNs~\cite{cite_a_18}, mostly because we explicitly model the appearance and position relation graphs and adopt a more efficient temporal modeling method.
We further evaluate the proposed model on the Collective Activity dataset. The results and comparison with previous methods are listed in Table~\ref{table:table_2b}. Our temporal multiple graphs model again achieves the state-of-the-art performance with $91.0\%$ group activity recognition accuracy.
This outstanding performance shows the effectiveness and generality of the proposed ARG for capturing relation information in multi-person scenes.
\subsection{Model Visualization}
{\bf Actor relation graph visualization.}
We visualize several examples of the relation graph generated by our model in Figure~\ref{fig:visual}.
We use the single-graph model on a single frame because it is easier to visualize.
The visualizations help us understand how the ARG works.
We can see that our model captures relation information for group activity recognition, and the generated ARG automatically discovers the key actor that determines the group activity in the scene.
{\bf t-SNE visualization of the learned representation.}
Figure~\ref{fig:tsne} shows the t-SNE~\cite{cite_o_2} visualization of the video representations learned by different model variants. Specifically, we project the representations of videos from the validation set of the Volleyball dataset into a 2-dimensional space using t-SNE.
We observe that the scene-level representations learned with the ARG are better separated.
Moreover, building multiple graphs and aggregating temporal information lead to better differentiation of group activities.
These visualization results indicate that our ARG models are more effective for group activity recognition.
\vspace{-3mm}
\section{Conclusion}
This paper has presented a flexible and efficient approach to determining the relevant relations between actors in a multi-person scene.
We learn {\em Actor Relation Graph} (ARG) to perform relational reasoning on graphs for group activity recognition.
We also evaluate the proposed model on two datasets and establish new state-of-the-art results.
The comprehensive ablation experiments and visualization results show that our model is able to learn relation information for understanding group activity.
In the future, we plan to further understand how ARG works, and incorporate more global scene information for group activity recognition.
\vspace{-2mm}
\section*{Acknowledgement}
This work is supported by the National Science Foundation of China under Grant No.61321491, and Collaborative Innovation Center of Novel Software Technology and Industrialization.
{\small
\bibliographystyle{ieee}
}
\section{Gauge-Higgs unification}
The existence of a Higgs boson with a mass of $125\,$GeV has been firmly established.
It confirms the unification scenario of the electromagnetic and weak forces.
In the standard model (SM) electromagnetic and weak forces are unified as $SU(2)_L \times U(1)_Y$
gauge forces. The $SU(2)_L \times U(1)_Y$ gauge symmetry is spontaneously broken
by the Higgs scalar fields, whose neutral component appears as the observed Higgs boson.
Although almost all experimental data are consistent with the SM, it is not clear whether
the observed Higgs boson is precisely what the SM assumes to exist.
The gauge sector of the SM is beautiful. The gauge principle dictates how quarks and leptons
interact with each other by gauge forces. In the SM the Higgs field gives masses to
quarks, leptons, and weak bosons. However, the potential for the Higgs boson must be
prepared by hand such that it induces the spontaneous breaking of $SU(2)_L \times U(1)_Y$
symmetry.
To put it differently, there is no principle for the Higgs field that determines how the Higgs
boson interacts with itself and with other fields. This lack of a principle results in
arbitrariness in the choice of parameters in the theory.
Furthermore, the Higgs boson acquires a divergent correction to its mass at the quantum level,
which must be cancelled by fine-tuning of bare parameters. This is the gauge hierarchy problem.
In addition, even the ground state for the Higgs boson may become unstable against quantum corrections.
Gauge-Higgs unification (GHU) naturally solves those problems.
The 4d Higgs boson appears as a fluctuation mode of an Aharonov-Bohm (AB) phase in
the fifth dimension of spacetime, thus becoming a part of gauge fields.
By dynamics of the AB phase the Higgs boson acquires a finite mass at the quantum level,
which is protected from divergence by the gauge principle.
The interactions of the Higgs boson are governed by the gauge principle, too.
In short, gauge fields and the Higgs boson are unified.\cite{Hosotani1983}-\cite{Hatanaka1998}
A realistic model of gauge-Higgs unification has been proposed:
the $SO(5) \times U(1)_X \times SU(3)_C$ gauge-Higgs unification in the Randall-Sundrum
warped space. It reproduces the SM gauge-field and matter content and
the SM phenomenology at low energies.
It leads to small deviations in the Higgs couplings, and it predicts new particles at the
5 TeV to 10 TeV scale as Kaluza-Klein (KK) excitation modes in the fifth dimension.
Signals of these new particles can be seen both at the LHC and at the ILC.\cite{Kubo2002}-\cite{FHHOY2019}
One of the distinct features of gauge-Higgs unification is large parity violation in the couplings
of quarks and leptons to the KK excited states of the gauge bosons.
Right-handed quarks and leptons have much larger couplings than left-handed ones to the first KK
excited states of the photon, the $Z$ boson, and the $Z_R$ boson (collectively called $Z'$ bosons).
These $Z'$ bosons have masses of around 7 TeV to 8 TeV.
We will show below that even at the 250 GeV ILC with 250 fb$^{-1}$ of data,
large deviations from the SM in various cross sections of $e^+ e^- \rightarrow f \bar f$ processes
can be seen by measuring the dependence on the polarization of the electron beam.
The key technique is to exploit the interference between the contributions of the photon and $Z$
boson and those of the $Z'$ bosons.
We comment that there may be variations in the matter content of the
$SO(5) \times U(1)_X \times SU(3)_C$ gauge-Higgs unification. Recently a new way of
introducing quark and lepton multiplets has been found, which can be embedded
in the $SO(11)$ gauge-Higgs grand unification.\cite{FHHOY2019}
Other options for the fermion content have also been proposed.\cite{Yoon2018}
These models can be clearly distinguished from each other
by investigating the polarization dependence of electron/positron beams in fermion pair production at ILC.
Note also that the gauge-Higgs unification scenario provides new approaches to dark matter,
Higgs, and neutrino physics.\cite{FHHOS-DM2014}-\cite{Lim2018}
\section{$SO(5) \times U(1)\times SU(3)$ GHU in Randall-Sundrum warped space}
The theory is defined in the Randall-Sundrum (RS) warped space whose metric is given by
\begin{align}
ds^2= e^{-2\sigma(y)} \eta_{\mu\nu}dx^\mu dx^\nu+dy^2,
\label{RSmetric1}
\end{align}
where $\mu,\nu=0,1,2,3$, $\eta_{\mu\nu}=\mbox{diag}(-1,+1,+1,+1)$,
$\sigma(y)=\sigma(y+ 2L)=\sigma(-y)$, and $\sigma(y)=ky$ for $0 \le y \le L$.
It has the topological structure $S^1/Z_2$.
In terms of the conformal coordinate $z=e^{ky}$
($1\leq z\leq z_L=e^{kL}$) in the region $0 \leq y \leq L$
\begin{align}
ds^2= \frac{1}{z^2} \bigg(\eta_{\mu\nu}dx^{\mu} dx^{\nu} + \frac{dz^2}{k^2}\bigg) .
\label{RSmetric-2}
\end{align}
The bulk region $0<y<L$ ($1<z<z_L$) is anti-de Sitter (AdS) spacetime
with a cosmological constant $\Lambda=-6k^2$, which is sandwiched by the
UV brane at $y=0$ ($z=1$) and the IR brane at $y=L$ ($z=z_L$).
The KK mass scale is $m_{\rm KK}=\pi k/(z_L-1) \simeq \pi kz_L^{-1}$
for $z_L\gg 1$.
Gauge fields $A_M^{SO(5)}$, $A_M^{U(1)_X}$ and $A_M^{SU(3)_C}$ of
$SO(5) \times U(1)_X \times SU(3)_C$, with gauge couplings
$g_A$, $g_B$ and $g_C$, satisfy the orbifold conditions\cite{HOOS2008, FHHOS2013, FHHOY2019}
\begin{align}
&\begin{pmatrix} A_\mu \cr A_{y} \end{pmatrix} (x,y_j-y) =
P_{j} \begin{pmatrix} A_\mu \cr - A_{y} \end{pmatrix} (x,y_j+y)P_{j}^{-1}
\label{BC-gauge1}
\end{align}
where $(y_0, y_1) = (0, L)$. For $A_M^{SO(5)}$
\begin{align}
P_0=P_1 = P_{\bf 5}^{SO(5)} = \mbox{diag} (I_{4},-I_{1} ) ~,
\label{BC-matrix1}
\end{align}
whereas $P_0=P_1=I$ for $A_M^{U(1)_X}$ and $A_M^{SU(3)_C}$.
With this set of boundary conditions $SO(5)$ gauge symmetry is broken to
$SO(4) \simeq SU(2)_L \times SU(2)_R$. At this stage there appear zero modes
of 4D gauge fields in $SU(3)_C$, $SU(2)_L \times SU(2)_R$ and $U(1)_X$.
There appear zero modes in the $SO(5)/SO(4)$ part of $A_y^{SO(5)}$,
which constitute an $SU(2)_L$ doublet and become 4D Higgs fields.
As a part of gauge fields the 4D Higgs boson $H(x)$ appears as an AB phase in the fifth
dimension;
\begin{align}
&\hat W = P \exp \bigg\{ i g_A \int_{-L}^L dy \, A_y \bigg\} \cdot P_1 P_0
\sim \exp \bigg\{ i \bigg(\theta_H + \frac{H(x)}{f_H} \bigg) 2 T^{(45)} \bigg\} ~,
\label{ABphase1}
\end{align}
where
\begin{align}
&f_H = \frac{2}{g_w} \sqrt{ \frac{k}{L(z_L^2 -1)}} \sim
\frac{2 ~ m_{\rm KK}}{\pi g_w \sqrt{kL}} ~.
\label{ABphase2}
\end{align}
$g_w = g_A/\sqrt{L}$ is the 4D weak coupling.
Gauge invariance implies that physics is periodic in $\theta_H $ with a period $2\pi$.
A brane scalar field $\hat \Phi_{(1,2,2, \frac{1}{2})} (x)$ or $\hat \Phi_{(1,1,2, \frac{1}{2})} (x)$ is
introduced on the UV brane where subscripts indicate the
$SU(3)_C \times SU(2)_L \times SU(2)_R \times U(1)_X$ content.
Nonvanishing $\langle \hat \Phi \rangle$ spontaneously breaks $SU(2)_R \times U(1)_X$
to $U(1)_Y$, resulting in the SM symmetry $SU(3)_C \times SU(2)_L \times U(1)_Y$.
Once the fermion content is specified, the effective potential $V_{\rm eff} (\theta_H)$ is
evaluated. The location of the global minimum of $V_{\rm eff} (\theta_H)$ determines
the value of $\theta_H$. When $\theta_H \not= 0$, $SU(2)_L \times U(1)_Y$ symmetry
is dynamically broken to $U(1)_{\rm EM}$. This is called the Hosotani mechanism.\cite{Hosotani1983}
The $W$ boson mass is given by
\begin{align}
m_W \sim \sqrt{\frac{k}{L}} ~ z_L^{-1} \, \sin \theta_H
\sim \frac{\sin \theta_H}{\pi \sqrt{kL}} ~ m_{\rm KK} ~.
\label{Wmass1}
\end{align}
As typical values, for $\theta_H = 0.10$ and $z_L = 3.6 \times 10^4$ one finds
$m_{\rm KK} = 8.1\,$TeV and $f_H = 2.5\,$TeV.
A natural little hierarchy thus appears between the weak scale ($m_Z$) and the KK scale ($m_{\rm KK}$).
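As a quick numerical check of the $W$-mass estimate in Eq.~(\ref{Wmass1}) (an illustrative snippet only, with $kL = \ln z_L$ and values taken from the text):

```python
import math

def m_w_approx(theta_H, m_KK_TeV, z_L):
    """Approximate W mass in TeV: m_W ~ sin(theta_H) * m_KK / (pi * sqrt(kL)),
    with kL = ln(z_L), as in Eq. (6) of the text."""
    kL = math.log(z_L)
    return math.sin(theta_H) * m_KK_TeV / (math.pi * math.sqrt(kL))
```

For $\theta_H=0.10$, $m_{\rm KK}=8.1\,$TeV and $z_L=3.6\times 10^4$, this gives roughly $80\,$GeV, consistent with the observed $W$ mass at the level expected for an order-of-magnitude relation.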
Quark and lepton multiplets are introduced in the vector representation {\bf 5} of $SO(5)$.
Further dark fermions are introduced in the spinor representation {\bf 4} of $SO(5)$.
This model, called the A-model, has been investigated intensively
so far.\cite{FHHOS2013}-\cite{FHHO2017ILC}
Recently an alternative way of introducing matter has been found.\cite{FHHOY2019}
This model, called the B-model, can be implemented in the $SO(11)$ gauge-Higgs
grand unification.\cite{HosotaniYamatsu2015, Furui2016, HosotaniYamatsu2017}
The matter content of the two models is summarized in Table \ref{Table-matter}.
In this talk phenomenological consequences of the A-model are presented.
\begin{table}[tbh]
\renewcommand{\arraystretch}{1.4}
\begin{center}
\caption{Matter fields. $SU(3)_C\times SO(5) \times U(1)_X$ content
is shown. In the A-model only $SU(3)_C\times SO(4) \times U(1)_X$
symmetry is maintained on the UV brane so that the $SU(2)_L \times SU(2)_R$ content
is shown for brane fields. In the B-model given in ref.\ \cite{FHHOY2019} the full
$SU(3)_C\times SO(5) \times U(1)_X$ invariance is preserved on the UV brane.
}
\vskip 10pt
\begin{tabular}{|c|c|c|}
\hline
&A-model
&B-model
\\
\hline \hline
quark
&$({\bf 3}, {\bf 5})_{\frac{2}{3}} ~ ({\bf 3}, {\bf 5})_{-\frac{1}{3}}$
&$({\bf 3}, {\bf 4})_{\frac{1}{6}} ~ ({\bf 3}, {\bf 1})_{-\frac{1}{3}}^+
~ ({\bf 3}, {\bf 1})_{-\frac{1}{3}}^-$
\\
\hline
lepton
&$({\bf 1}, {\bf 5})_{0} ~ ({\bf 1}, {\bf 5})_{-1}$
&$\strut ({\bf 1}, {\bf 4})_{-\frac{1}{2}}$
\\
\hline
dark fermion
&$({\bf 1}, {\bf 4})_{\frac{1}{2}}$
& $({\bf 3}, {\bf 4})_{\frac{1}{6}} ~ ({\bf 1}, {\bf 5})_{0}^+ ~ ({\bf 1}, {\bf 5})_{0}^-$
\\
\hline \hline
brane fermion
&$\begin{matrix} ({\bf 3}, [{\bf 2, 1}])_{\frac{7}{6}, \frac{1}{6}, -\frac{5}{6}} \cr
\noalign{\kern -4pt}
({\bf 1}, [{\bf 2, 1}])_{\frac{1}{2}, -\frac{1}{2}, -\frac{3}{2}} \end{matrix}$
&$({\bf 1}, {\bf 1})_{0} $
\\
\hline
brane scalar
&$({\bf 1}, [{\bf 1,2}])_{\frac{1}{2}}$
&$({\bf 1}, {\bf 4})_{\frac{1}{2}} $
\\
\hline
Sym.\ on UV brane
&$SU(3)_C \times SO(4) \times U(1)_X$
&$SU(3)_C \times SO(5) \times U(1)_X$
\\
\hline
\end{tabular}
\label{Table-matter}
\end{center}
\end{table}
The correspondence between the SM in four dimensions and the gauge-Higgs unification
in five dimensions is summarized as
\begin{align}
\begin{matrix}
{\rm SM} && {\rm GHU} \cr
\noalign{\kern 3pt}
\displaystyle \int d^4x \Big\{ {\cal L}^{\rm gauge} + {\cal L}^{\rm Higgs}_{\rm kinetic} \Big\}
&~ \Rightarrow ~
&\displaystyle \int d^5 x \sqrt{-g} ~ {\cal L}^{\rm gauge}_{\rm 5d} \cr
\noalign{\kern 5pt}
\displaystyle \int d^4 x \Big\{ {\cal L}^{\rm fermion} + {\cal L}^{\rm Yukawa} \Big\}
&\Rightarrow
&\displaystyle \int d^5 x \sqrt{-g} ~ {\cal L}^{\rm fermion}_{\rm 5d} \cr
\noalign{\kern 5pt}
- \displaystyle \int d^4 x ~ {\cal L}^{\rm Higgs}_{\rm potential}
&\Rightarrow
& \displaystyle \int d^4 x ~ V_{\rm eff} (\theta_H)
\end{matrix}
\label{correspondence}
\end{align}
In the SM, ${\cal L}^{\rm gauge}$, $ {\cal L}^{\rm Higgs}_{\rm kinetic}$ and ${\cal L}^{\rm fermion}$
are governed by the gauge principle, but ${\cal L}^{\rm Yukawa}$ and ${\cal L}^{\rm Higgs}_{\rm potential}$
are not. On the GHU side in (\ref{correspondence}), ${\cal L}^{\rm gauge}_{\rm 5d}$ and
${\cal L}^{\rm fermion}_{\rm 5d}$ are governed by the gauge principle and
$V_{\rm eff} (\theta_H)$
follows from them.
\section{Gauge couplings and Higgs couplings}
Let us focus on the A-model.
The SM quark-lepton content is reproduced with no exotic light fermions.
The one-loop effective potential $V_{\rm eff} (\theta_H)$ is displayed in Fig.\ \ref{figure-Veff}.
The finite Higgs boson mass $m_H \sim 125\,$GeV is generated naturally with $\theta_H \sim 0.1$.
Relevant parameters in the theory are determined from quark-lepton masses,
$m_Z$, and electromagnetic, weak, and strong gauge coupling constants.
Many physical quantities depend on the value of $\theta_H$, but not on the other parameters.
In the SM the $W$ and $Z$ couplings of quarks and leptons are universal:
they depend only on the representations of the group $SU(2)_L \times U(1)_Y$.
In GHU the $W$ and $Z$ couplings of quarks and leptons may depend on the more detailed
behavior of the wave functions in the fifth dimension.
The four-dimensional couplings are obtained by integrating the product of the
$W/Z$ and quark/lepton wave functions over the fifth-dimensional coordinate.
\begin{figure}[tbh]
\begin{center}
\includegraphics[bb=0 0 360 255, height=4.5cm]{Veff2.pdf}
\includegraphics[bb=0 0 360 227, height=4.5cm]{Veff1.pdf}
\end{center}
\vskip -10pt
\caption{Effective potential $V_{\rm eff} (\theta_H)$ is displayed in the unit of $(k z_L^{-1})^4/16 \pi^2$
for $z_L = 3.56 \times 10^4$ and $m_{\rm KK} = 8.144\,$TeV.
The minimum of $V_{\rm eff}$ is located at $\theta_H = 0.10$.
The curvature at the minimum determines the Higgs boson mass by
$m_H^2 = f_H^{-2} V_{\rm eff}''(\theta_H)|_{\rm min}$, yielding $m_H = 125.1\,$GeV. }
\label{figure-Veff}
\end{figure}
Surprisingly, the $W$ and $Z$ couplings of quarks and leptons and the $WWZ$ coupling
in GHU turn out to be very close to those in the SM.
The result is tabulated in Table \ref{Table-gaugeCoupling1}.
In the last column the values in the SM are listed.
The deviations from the SM are very small.
The $W$ couplings of left-handed light quarks and leptons are approximately
given by
\begin{align}
g_L^W\sim g_w \, \frac{\sqrt{2kL}}{\sqrt{2kL - \frac{3}{4} \sin^2 \theta_H }}
\sim g_w \, \Big( 1 + \frac{3 \sin^2 \theta_H}{16 kL} \Big) ~.
\label{Wcoupling1}
\end{align}
Here $kL = \ln z_L$. The $W$ couplings of right-handed quarks and leptons are
negligibly small.
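The approximation in Eq.~(\ref{Wcoupling1}) can be checked numerically (an illustrative snippet; the expanded form agrees with the exact expression to high precision at these small angles):

```python
import math

def gLW_ratio(theta_H, z_L):
    """g_L^W / g_w ~ sqrt(2kL) / sqrt(2kL - (3/4) sin^2 theta_H), kL = ln(z_L),
    as in Eq. (11) of the text."""
    kL = math.log(z_L)
    return math.sqrt(2 * kL) / math.sqrt(2 * kL - 0.75 * math.sin(theta_H) ** 2)
```

For $\theta_H = 0.115$ and $z_L \sim 3.56 \times 10^4$ the ratio deviates from unity only at the $10^{-4}$ level, in line with the entries of Table~\ref{Table-gaugeCoupling1}.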
\begin{table}[tbh]
\renewcommand{\arraystretch}{1.1}
\begin{center}
\caption{Gauge ($W, Z$) couplings of quarks and leptons. $WWZ$ coupling is also listed
at the bottom. The values in the SM are listed in the last column.
}
\vskip 10pt
\label{Table-gaugeCoupling1}
\begin{tabular}{|c|c|cc|cc|c|}
\hline
\multicolumn{2}{|c|}{~}
&\multicolumn{2}{|c|}{$\theta_H = 0.115$}
&\multicolumn{2}{|c|}{$\theta_H=0.0737$} & SM\\
\hline
&$(\nu_e , e)$
&\multicolumn{2}{|c|}{1.00019} &\multicolumn{2}{|c|}{ 1.00009} & \\
&$(\nu_\mu , \mu)$
&\multicolumn{2}{|c|}{1.00019} & \multicolumn{2}{|c|}{1.00009 }&1 \\
$g_L^W/g_w$
&$(\nu_\tau , \tau)$ &\multicolumn{2}{|c|}{1.00019} & \multicolumn{2}{|c|}{1.00009 } & \\
\cline{2-7}
&$(u,d)$ &\multicolumn{2}{|c|}{1.00019} & \multicolumn{2}{|c|}{1.00009 } & \\
&$(c,s)$ &\multicolumn{2}{|c|}{1.00019} & \multicolumn{2}{|c|}{1.00009 } &1 \\
&$(t,b)$ &\multicolumn{2}{|c|}{0.9993} & \multicolumn{2}{|c|}{0.9995} & \\
\hline
&$\nu_e, \nu_\mu, \nu_\tau$
&0.50014 &0 &0.50008 &0 & 0.5 \qquad 0\\
\cline{2-7}
&$e, \mu, \tau$
&-0.2688 &0.2314 &-0.2688 &0.2313 & -0.2688 $\,$ 0.2312\\
\cline{2-7}
$(g_L^Z,g_R^Z)/g_w$ &$u, c$
&0.3459 &-0.1543 &0.3459 &-0.1542 & 0.3458 $\,$ -0.1541\\
&$t$
&0.3449 &-0.1553 &0.3453 &-0.1549 & \\
\cline{2-7}
&$d, s$
&-0.4230 &0.0771 &-0.4230 &0.0771 & -0.4229 $\,$ 0.0771\\
&$b$
&-0.4231 &0.0771 &-0.4230 &0.0771 & \\
\hline
\multicolumn{2}{|c|}{$g_{WWZ}/g_w \cos \theta_W$}
&\multicolumn{2}{|c|}{0.9999998} &\multicolumn{2}{|c|}{0.99999995} &1\\
\hline
\end{tabular}
\end{center}
\end{table}
\def\noalign{\kern 3pt}{\noalign{\kern 3pt}}
Yukawa couplings of quarks and leptons, and $WWH$, $ZZH$ couplings are well approximated by
\begin{align}
\begin{pmatrix} g_{\rm Yukawa} \cr \noalign{\kern 3pt} g_{WWH} \cr \noalign{\kern 3pt} g_{ZZH} \end{pmatrix}
&\sim
\begin{pmatrix} g_{\rm Yukawa}^{\rm SM} \cr \noalign{\kern 3pt} g_{WWH}^{\rm SM} \cr
\noalign{\kern 3pt} g_{ZZH}^{\rm SM} \end{pmatrix} \times \cos\theta_H
\label{HiggsCoupling1}
\end{align}
where $g_{\rm Yukawa}^{\rm SM}$ on the right side, for instance, denotes the value in the SM.
For $\theta_H \sim 0.1$ the deviation amounts to only 0.5\%.
Larger deviations are expected in the cubic and quartic self-couplings of the Higgs boson.
They are approximately given by
\begin{align}
\lambda_3^{\rm Higgs} &\sim 156.9 \, \cos\theta_H + 17.6 \, \cos^2 \theta_H ~~({\rm GeV}), \cr
\noalign{\kern 5pt}
\lambda_4^{\rm Higgs} &\sim - 0.257+ 0.723\cos 2 \theta_H + 0.040 \cos 4 \theta_H ~.
\label{HiggsCoupling2}
\end{align}
In the SM, $\lambda_3^{\rm Higgs, SM} = 190.7\,$GeV and $\lambda_4^{\rm Higgs, SM} = 0.774$.
In the $\theta_H \rightarrow 0$ limit, $\lambda_3^{\rm Higgs}$ and $\lambda_4^{\rm Higgs}$ are
8.5\% and 35\% smaller, respectively, than their SM values.
$\lambda_3^{\rm Higgs}$ can be measured at the ILC.
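The quoted deviations follow directly from the fits in Eq.~(\ref{HiggsCoupling2}); a short numerical check (illustrative only):

```python
import math

def lambda3(theta_H):
    """Cubic Higgs self-coupling in GeV, fit from Eq. (13) of the text."""
    c = math.cos(theta_H)
    return 156.9 * c + 17.6 * c ** 2

def lambda4(theta_H):
    """Quartic Higgs self-coupling, fit from Eq. (13) of the text."""
    return -0.257 + 0.723 * math.cos(2 * theta_H) + 0.040 * math.cos(4 * theta_H)
```

Evaluating at $\theta_H = 0$ and comparing with the SM values $\lambda_3^{\rm SM} = 190.7\,$GeV and $\lambda_4^{\rm SM} = 0.774$ reproduces the 8.5\% and $\sim$35\% suppressions quoted above.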
GHU gives nearly the same phenomenology at low energies as the SM.
To distinguish GHU from the SM, one needs to look for signals of the new particles that GHU predicts.
\section{New particles -- KK excitation}
KK excitations of each particle appear as new particles. The existence of an extra dimension
would be confirmed by observing the KK excitations of quarks, leptons, and gauge bosons.
The KK spectrum is shown in Table \ref{table-KKspectrum1}.
$Z_R$ is the gauge field associated with $SU(2)_R$, and has no zero mode.
$Z^{(1)}$, $\gamma^{(1)}$ and $Z_R^{(1)}$ are called as $Z'$ bosons.
Clean signals can be found in the process $q \, \bar q \rightarrow Z' \rightarrow e^+ e^- , \mu^+ \mu^-$
at the LHC. So far no $Z'$ event has been observed, which puts the limit $\theta_H < 0.11$.
\begin{table}[htb]
\caption{The mass spectrum $\{ m_n \}$ ($n \ge 1$) of KK excited modes of gauge bosons and quarks
for $\theta_H = 0.10, n_F = 4$, where $n_F$ is the number of dark fermion multiplets.
Masses are given in the unit of TeV.
Pairs $(W^{(n)}, Z^{(n)})$, $(W_R^{(n)}, Z_R^{(n)})$, $(t^{(n)}, b^{(n)})$, $(c^{(n)}, s^{(n)})$,
$(u^{(n)}, d^{(n)})$ ($n \ge 1$) have almost degenerate masses.
The spectrum of $W_R$ tower is the same as that of $Z_R$ tower.
The gluon tower has the same spectrum as the photon ($\gamma$) tower.
}
\label{table-KKspectrum1}
\vskip 5pt
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{|c|c|c|c|c||c|c|c|c|}
\hline
\multicolumn{9}{|c|}{$\theta_H = 0.10, ~ n_F = 4, ~ m_{\rm KK} = 8.144 \,{\rm TeV}, ~ z_L = 3.56 \times 10^4$} \\
\hline
&\multicolumn{2}{c|}{$Z^{(n)}$} &$\gamma^{(\ell)}$ &$Z_R^{(\ell)}$
&\multicolumn{2}{c|}{$t^{(n)}$} &$c^{(n)}$ &$u^{(n)}$ \\
\hline
$n\, (\ell)$ &$m_n$ &$\mfrac{m_n}{m_{\rm KK}}$ &$m_\ell$ &$m_\ell$
&$m_n$ &$\mfrac{m_n}{m_{\rm KK}}$ &$m_n$ &$m_n$ \\
\hline
$1\, (1)$&6.642 &$0.816$ &6.644 &6.234
&7.462&0.916 &8.536 &10.47 \\
2 ~~~~~&9.935 &1.220 &-- &--
&8.814 &1.082 &12.01 &13.82 \\
$3\, (2)$&14.76 &1.812&14.76 &14.31
&15.58 &1.913 &16.70 &18.76 \\
4 ~~~~~&18.19 &2.233 &-- &--
&16.99 &2.087 &20.41 &22.37 \\
\hline
\end{tabular}
\end{table}
The KK mass scale as a function of $\theta_H$ is approximately given by
\begin{align}
&m_{\rm KK} (\theta_H) \sim \frac{1.36\,{\rm TeV}}{(\sin \theta_H )^{0.778}} ~,
\label{KKscale1}
\end{align}
irrespective of the other parameters of the theory.
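As a quick consistency check, the approximate formula (\ref{KKscale1}) can be compared with the exact value $m_{\rm KK} = 8.144\,$TeV quoted in Table \ref{table-KKspectrum1}; a Python sketch:

```python
import math

def m_KK(theta_H):
    """Approximate KK mass scale of eq. (KKscale1), in TeV."""
    return 1.36 / math.sin(theta_H) ** 0.778

# For theta_H = 0.10 this reproduces the exact value m_KK = 8.144 TeV of
# Table (table-KKspectrum1) to within about 0.3%.
print(m_KK(0.10))   # ≈ 8.17
```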
In GHU many of physical quantities such as the Higgs couplings in (\ref{HiggsCoupling1})
and (\ref{HiggsCoupling2}), the KK scale (\ref{KKscale1}), and KK masses of gauge bosons
are approximately determined by the value of $\theta_H$ only.
This property is called the $\theta_H$ universality.
Once the $Z^{(1)}$ particle is found and its mass is determined, then the value of $\theta_H$
is fixed and the values of other physical quantities are predicted.\cite{FHHOS2013}
Although $Z'$ bosons are heavy with masses around 6 -- 8$\,$TeV, their effects can be
seen in $e^+ e^-$ collisions at the 250$\,$GeV ILC (fig.~\ref{fig:ILC-Zprime}).
The couplings of right-handed quarks and leptons to $Z'$ bosons are much stronger
than those of left-handed quarks and leptons. This large parity violation manifests
as an interference effect in $e^+ e^-$ collisions.\cite{FHHO2017ILC}
\begin{figure}[htb]
\begin{center}
\includegraphics[bb=10 57 690 191, width=10cm]{ILC-Zprime1.pdf}
\end{center}
\caption{
Dominant diagrams in the process
$e^+ e^- \rightarrow \mu^+ \mu^-$
}
\label{fig:ILC-Zprime}
\end{figure}
Left-handed light quarks and leptons are localized near the UV brane (at $y=0$), whereas right-handed
ones near the IR brane (at $y=L$). Wave functions of top and bottom quarks spread over the entire
fifth dimension. In GHU both left- and right-handed fermions are in the same gauge multiplet so that
if a left-handed fermion is localized near the UV brane, then its partner right-handed fermion is
necessarily localized near the IR brane. KK modes of gauge bosons in the RS space are always
localized near the IR brane. $Z'$ couplings of quarks and leptons are given by
overlap-integrals of wave functions of $Z'$ bosons and left- or right-handed quarks and leptons.
Consequently right-handed quarks/leptons have larger couplings to $Z'$.
Typical behavior of wave functions is depicted in fig.~\ref{fig-wavefunctions}.
\begin{figure}[bht]
\begin{center}
\includegraphics[bb=2 88 425 375, width=8.0cm]{wavefunctions.pdf}
\end{center}
\vskip 5pt
\caption{
Wave functions of various fermions and gauge bosons for $\theta_H = 0.1$.
Only some of the relevant components in $SO(5)$ are displayed.
Wave functions of light quarks and leptons are qualitatively similar to those of $(u_L, u_R)$.
Wave functions of $(b_L, b_R)$ are similar to those of $(t_L, t_R)$.
$Z$ boson wave function is almost constant, whereas
$Z^{(1)}$'s wave function becomes large near the IR brane at $z=z_L$.
}
\label{fig-wavefunctions}
\end{figure}
Gauge couplings of quarks and leptons to $Z^{(1)}$, $\gamma^{(1)}$ and $Z_R^{(1)}$ are
summarized in Table~\ref{table-Zprimecoupling1}.
Except for $b$ and $t$ quarks, right-handed quarks and leptons have much larger couplings
than left-handed ones.
\begin{table}[h]
\caption{
Gauge couplings of quarks and leptons to $Z^{(1)}$, $\gamma^{(1)}$ and $Z_R^{(1)}$
for $\theta_H = 0.0917$ and $\sin^2 \theta_W = 0.2312$.
Couplings are given in the unit of $g_w/\cos \theta_W$.
The $Z$ couplings in the SM, $I_3 - \sin^2 \theta_W Q_{\rm EM}$, are also shown.
}
\label{table-Zprimecoupling1}
\vskip 5pt
\begin{center}
\renewcommand{\arraystretch}{1.0}
\begin{tabular}{|c|cc|cc|cc|cc|}
\hline
&\multicolumn{2}{c|}{SM: $Z$}
&\multicolumn{2}{c|}{$Z^{(1)}$}
&\multicolumn{2}{c|}{$Z_R^{(1)}$}
&\multicolumn{2}{c|}{$\gamma^{(1)}$}\\
&Left &Right & Left & Right & Left & Right & Left & Right \\
\hline
$\nu_e$ & & & $-0.183$ & 0 & 0 & 0 & 0 & 0 \\
$\nu_{\mu}$ & $0.5$ & 0 & $-0.183$ & 0 & 0 & 0 & 0 & 0 \\
$\nu_{\tau}$ & $$ & & $-0.183$ & 0 & 0 & 0 & 0 & 0 \\
\hline
$e$
& $$ & $$ & $0.099$ & $0.916$ & 0 & $-1.261$ & $0.155$ & $-1.665$ \\
$\mu$
&$-0.269$ &$0.231$ & $0.099$ & $0.860$ & 0 & $-1.193$ & $0.155$ & $-1.563$ \\
$\tau$
& $$ & $$ & $0.099$ & $0.814$ & 0 & $-1.136$ & $0.155$ & $-1.479$ \\
\hline
$u$
& $$ & $$ & $-0.127$ & $-0.600$ & $0$ & $0.828$ & $-0.103$ & $1.090$ \\
$c$
&$0.346$ &$-0.154$ & $-0.130$ & $-0.555$ & $0$ & $0.773$ & $-0.103$ & $1.009$ \\
$t$
& $$ & $$ & $0.494$ & $-0.372$ & $0.985$ & $0.549$& $0.404$ & $0.678$ \\
\hline
$d$
& $$ & $$ & $0.155$ & $0.300$ & $0$ & $-0.414$ & $0.052$ & $-0.545$ \\
$s$
&$-0.423$ &$0.077$ & $0.155$ & $0.277$ & $0$ & $-0.387$ & $0.052$ & $-0.504$ \\
$b$
& $$ & $$ & $-0.610$ & $0.186$ & $0.984$ & $-0.274$ & $-0.202$ & $-0.339$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{$e^+ e^- $ collisions}
The amplitude ${\cal M}$ for the $e^+ e^- \rightarrow \mu^+ \mu^-$ process at the tree level
in fig.~\ref{fig:ILC-Zprime} can be expressed
as the sum of two terms ${\cal M}_0$ and ${\cal M}_{Z'}$.
\begin{align}
{\cal M} &= {\cal M}_0 + {\cal M}_{Z'} \cr
&= {\cal M}(e^+ e^- \rightarrow \gamma \, , \, Z \rightarrow \mu^+ \mu^-)
+ {\cal M}(e^+ e^- \rightarrow Z' \rightarrow \mu^+ \mu^-) ~.
\label{pairproduction1}
\end{align}
For $s = (250\,{\rm GeV})^2 \sim (1\,{\rm TeV})^2$, we have $m_Z^2 \ll s \ll m_{Z'}^2$
so that the amplitude can be approximated by
\begin{align}
{\cal M} &\simeq \frac{g_w^2}{\cos^2 \theta_W} \sum_{\alpha, \beta= L,R} J_\alpha^{(e)\nu} (p,p')
\bigg\{ \frac{\kappa_{\rm SM}^{\alpha\beta}}{s} - \frac{\kappa_{Z'}^{\alpha\beta}}{m_{Z'}^2} \bigg\}
J_{\beta\nu}^{(\mu)} (k,k')
\label{pairproduction2}
\end{align}
where $J_{\alpha\nu}^{(e)} (p,p')$ and $J_{\beta\nu}^{(\mu)} (k,k')$ represent momentum and polarization
configurations of the initial and final states, respectively.
$\kappa_{\rm SM}^{\alpha\beta}$ and $\kappa_{Z'}^{\alpha\beta}$ are found from
Table~\ref{table-Zprimecoupling1} to be
\begin{align}
(\kappa_{\rm SM}^{LL}, \kappa_{\rm SM}^{LR}, \kappa_{\rm SM}^{RL}, \kappa_{\rm SM}^{RR})
&= (0.25, 0.1156, 0.1156, 0.2312)~, \cr
\noalign{\kern 5pt}
(\kappa_{Z'}^{LL}, \kappa_{Z'}^{LR}, \kappa_{Z'}^{RL}, \kappa_{Z'}^{RR})~
&= (0.034, -0.158, -0.168, 4.895) ~.
\label{pairproduction3}
\end{align}
Compared with the value in the SM, $\kappa_{Z'}^{RR}$ is very large whereas $\kappa_{Z'}^{LL}$
is very small.
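The $\kappa_{Z'}^{\alpha\beta}$ values in (\ref{pairproduction3}) can be reproduced from Table~\ref{table-Zprimecoupling1} by summing the products of the electron and muon couplings over $Z^{(1)}$, $Z_R^{(1)}$ and $\gamma^{(1)}$; since the three $Z'$ masses are nearly degenerate, a common factor $1/m_{Z'}^2$ is pulled out, as in (\ref{pairproduction2}). A Python sketch (the small residual differences reflect the rounding of the table entries):

```python
# e and mu couplings to (Z^(1), Z_R^(1), gamma^(1)) from Table
# (table-Zprimecoupling1), in units of g_w / cos(theta_W).
eL,  eR  = (0.099, 0.0, 0.155), (0.916, -1.261, -1.665)
muL, muR = (0.099, 0.0, 0.155), (0.860, -1.193, -1.563)

def kappa(e_c, mu_c):
    # Sum of coupling products over the three nearly degenerate Z' bosons;
    # the common factor 1/m_{Z'}^2 appears explicitly in eq. (pairproduction2).
    return sum(ge * gm for ge, gm in zip(e_c, mu_c))

print(kappa(eL, muL), kappa(eL, muR), kappa(eR, muL), kappa(eR, muR))
# ≈ 0.034, -0.157, -0.167, 4.895
```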
Although direct production of $Z'$ particles is not possible with
$s= (250\,{\rm GeV})^2 \sim (1\,{\rm TeV})^2$, the interference term becomes appreciable.
Suppose that the electron beam is polarized in the right-handed mode. Then the interference
term gives
\begin{align}
\frac{{\cal M}_0 {\cal M}_{Z'}^*}{| {\cal M}_0 |^2}
&\sim - \frac{ \kappa_{Z'}^{RR} + \kappa_{Z'}^{RL}}{ \kappa_{\rm SM}^{RR} + \kappa_{\rm SM}^{RL}} \,
\frac{s}{m_{Z'}^2} \sim - 13.6 \, \frac{s}{m_{Z'}^2} \cr
\noalign{\kern 10pt}
&\sim -0.017 \quad {\rm at} ~ \sqrt{s} = 250\,{\rm GeV} ~.
\label{pairproduction4}
\end{align}
This is a sizable number. As the number of fermion pair production events expected
at the proposed ILC is huge, a 1.7\% correction can certainly be confirmed.
One recognizes that polarized electron and/or positron beams play an important role
in investigating physics beyond
the SM.\cite{FHHO2017ILC}, \cite{Yoon2018}, \cite{Bilokin2017}-\cite{ILC2019}
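The estimate (\ref{pairproduction4}) is elementary arithmetic; the Python sketch below reproduces it, taking a representative $m_{Z'} = 7\,$TeV (an illustrative round number; the $Z'$ masses in Table~\ref{table-KKspectrum1} are $6.2$ -- $6.6\,$TeV):

```python
# Interference-to-SM ratio of eq. (pairproduction4) for a right-handed
# electron beam, with the kappa values of eq. (pairproduction3).
kSM_RR, kSM_RL = 0.2312, 0.1156
kZp_RR, kZp_RL = 4.895, -0.168

ratio = -(kZp_RR + kZp_RL) / (kSM_RR + kSM_RL)
print(ratio)                     # ≈ -13.6

# At sqrt(s) = 250 GeV, with an assumed representative m_Z' = 7 TeV:
s, m_Zp2 = 0.250**2, 7.0**2      # TeV^2
print(ratio * s / m_Zp2)         # ≈ -0.017
```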
\subsection{Energy and polarization dependence}
In the $e^+ e^-$ collision experiments one can control both the energy and polarization of
incident electron and positron beams. First consider the total cross section for
$ e^+ e^- \rightarrow \mu^+ \mu^-$;
\begin{align}
F_1 = \frac{\sigma (e^+ e^- \rightarrow \mu^+ \mu^- )^{\rm GHU}}{\sigma (e^+ e^- \rightarrow \mu^+ \mu^- )^{\rm SM}} ~.
\label{mupair1}
\end{align}
Both the electron and positron beams are polarized with polarization $P_{e^-}$ and $P_{e^+}$.
For purely right-handed (left-handed) electrons $P_{e^-} = +1 (-1)$.
At $\sqrt{s} \ge 250\,$GeV, $e^+$ and $e^-$ in the initial state may be viewed as massless particles.
The ratio $F_1$ in (\ref{mupair1}) depends on the effective polarization
\begin{align}
P_{\rm eff} = \frac{P_{e^-} - P_{e^+}}{1 - P_{e^-} P_{e^+} } ~.
\label{Peff1}
\end{align}
At the proposed 250 GeV ILC, $|P_{e^-}| \le 0.8$ and $|P_{e^+}| \le 0.3$
so that $|P_{\rm eff}| \le 0.877$.
The $\sqrt{s}$ dependence of $F_1$ is depicted in fig.~\ref{fig:mupair} (a).
The deviation from the SM becomes very large at $\sqrt{s} = 1.5\,{\rm TeV} \sim
2\,{\rm TeV}$ for $\theta_H = 0.09 \sim 0.07$, particularly with $P_{\rm eff} \sim 0.8$.
For $P_{\rm eff} \sim - 0.8$ the deviation is tiny.
At the energy $\sqrt{s} = 250\,$GeV the deviation might look small.
However, the event number expected at the ILC is so huge that the deviation can be
unambiguously observed even at $\sqrt{s} = 250\,$GeV.
In fig.~\ref{fig:mupair} (b) the polarization $P_{\rm eff}$ dependence of $F_1$ is
depicted for $\sqrt{s} = 250\,$GeV and 500$\,$GeV.
As the polarization $P_{\rm eff}$ varies from $-1$ to $+1$, the deviation from the SM
becomes significantly larger. The grey band in fig.~\ref{fig:mupair} (b) indicates
the statistical uncertainty in the SM at $\sqrt{s} = 250\,$GeV with a 250$\,$fb$^{-1}$ data set.
The signal of GHU can thus be clearly identified by measuring
the polarization dependence in the early stage of the 250$\,$GeV ILC.
\begin{figure}[thb]
\begin{center}
\includegraphics[bb=0 0 288 293, width=6.0cm]{sigma-mu-ratio.pdf}
\quad
\includegraphics[bb=0 0 360 232, width=7.cm]{sigma-mu-ILC2.pdf}
\\
(a) \hskip 6cm (b)
\end{center}
\caption{
$F_1 = \sigma (\mu^+ \mu^- )^{\rm GHU}/\sigma (\mu^+ \mu^- )^{\rm SM}$ in (\ref{mupair1}) is plotted.
(a) The $\sqrt{s}$ dependence is shown.
Blue curves a, c and green curve e are for $\theta_H = 0.0917$, whereas red curves b, d are
for $\theta_H = 0.0737$. Curves a and b are with $P_{\rm eff} =0$. Curves c and d are with $P_{\rm eff} =0.877$.
Curve e is with $P_{\rm eff} =- 0.877$.
(b) The polarization $P_{\rm eff}$ dependence is shown.
Solid (dashed) lines are for $\sqrt{s} = 250\,$GeV (500$\,$GeV).
Blue lines are for $\theta_H = 0.0917$, whereas red lines are for $\theta_H = 0.0737$.
The grey band indicates statistical uncertainty at $\sqrt{s} = 250\,$GeV with
250$\,$fb$^{-1}$ data set.
}
\label{fig:mupair}
\end{figure}
\subsection{Forward-backward asymmetry}
Not only in the total cross sections but also in differential cross sections for
$e^+ e^- \rightarrow \mu^+ \mu^- $
significant deviation from the SM can be seen.\cite{Richard2018, Suehara2018}
Even with unpolarized beams the differential cross section $d\sigma/d\cos\theta$
becomes 8\% (4\%) smaller than in the SM in the forward direction for $\theta_H = 0.0917$ ($0.0737$).
The forward-backward asymmetry $A_{\rm FB}$ characterizes this behavior.
In fig.~\ref{fig:AFB}(a) the $\sqrt{s}$-dependence of $A_{\rm FB}$ for $e^+ e^- \rightarrow \mu^+ \mu^- $
is shown. As $\sqrt{s}$ increases the deviation from the SM becomes evident.
Again the deviation becomes largest around $\sqrt{s} = 1.5 \sim 2\,$TeV with $P_{\rm eff} = 0.877$
for $\theta_H = 0.0917 \sim 0.0737$. The sign of $A_{\rm FB}$ flips around $\sqrt{s} = 1.1 \sim 1.5\,$TeV.
Even at $\sqrt{s} = 250\,$GeV, significant deviation from the SM can be seen in the
dependence on the polarization ($P_{\rm eff}$) of the electron/positron beam as depicted in
fig.~\ref{fig:AFB}(b). With $250\text{ fb}^{-1}$ data the deviation amounts to 6$\sigma$ (4$\sigma$)
at $P_{\rm eff} = 0.8$ for $\theta_H = 0.0917 \, ( 0.0737)$, whereas
the deviation is within an error at $P_{\rm eff} = - 0.8$.
Observing the polarization dependence is a definitive way of investigating the details of the theory.
\begin{figure}[thb]
\begin{center}
\includegraphics[bb=0 0 360 243, width=6.7cm]{AFB-GHU.pdf}
\quad
\includegraphics[bb=0 0 360 230, width=6.8cm]{AFB-ILC.pdf}
\\
(a) \hskip 6.5cm (b)
\end{center}
\caption{
Forward-backward asymmetry $A_{\rm FB} (\mu^+ \mu^-)$.
(a) The $\sqrt{s}$ dependence is shown.
Blue curves a, b, c are for $\theta_H = 0.0917$, red curves d, e are for $\theta_H = 0.0737$,
and black curves f, g, h are for the SM.
Solid curves a, d, f are for unpolarized beams.
Dashed curves b, e, g are with $P_{\rm eff} =0.877$.
Dotted curves c and h are with $P_{\rm eff} =- 0.877$.
(b) $(A_{\rm FB}^{\rm GHU} - A_{\rm FB}^{\rm SM})/A_{\rm FB}^{\rm SM}(\mu^+\mu^-)$
as functions of the effective polarization $P_{\rm eff}$.
Solid and dotted lines are for $\sqrt{s} = 250\,$GeV and $500 \,$GeV, respectively.
Blue and red lines correspond to $\theta_H = 0.0917$ and $0.0737$, respectively.
The gray band indicates the statistical uncertainty at $\sqrt{s}=250\,$GeV with $250\text{ fb}^{-1}$ data.
}
\label{fig:AFB}
\end{figure}
\subsection{Left-right asymmetry}
Systematic errors in the normalization of the cross sections are reduced in the measurement of
\begin{align}
R_{f, LR} (\overline{P})
=\frac{\sigma( \bar{f}f \, ; \, P_{e^-} = + \overline{P}, P_{e^+}=0 )}
{\sigma( \bar{f}f \, ; \, P_{e^-} = - \overline{P}, P_{e^+}=0 )}
\label{defRfRL}
\end{align}
where the electron beams are polarized with $P_{e^-} = + \overline{P}$ and $- \overline{P}$.
Only the polarization of the electron beams is flipped in experiments.
Let $\sigma_{LR}^f$ ($\sigma_{RL}^f$) denote
the $e_L^-e_R^+ (e_R^- e_L^+) \to f\bar{f}$ scattering cross section.
Then the left-right asymmetry $A_{LR}^f$ is related to $R_{f, LR} $ by
\begin{align}
A_{LR}^f &=
\frac{\sigma_{LR}^f- \sigma_{RL}^f}{\sigma_{LR}^f + \sigma_{RL}^f}
= \frac{1}{\overline{P}} \, \frac{1- R_{f,LR}}{1+ R_{f,LR}} ~.
\label{LRasym}
\end{align}
The predicted $R_{f, LR} (\overline{P})$ is summarized in Table \ref{tbl:LRasym}
for $\overline{P} = 0.8$.
Even at $\sqrt{s} = 250\,{\rm GeV}$ with $L_{int} = 250\,{\rm fb}^{-1}$ data,
namely in the early stage of the ILC experiment,
significant deviation from the SM is seen.
The difference between $R_{\mu, LR}$ and $R_{b, LR}$ stems from the different behavior of
wave functions of $\mu$ and $b$ in the fifth dimension.
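The relation (\ref{LRasym}) between $R_{f,LR}$ and $A_{LR}^f$ can be verified with a few lines of Python, using arbitrary illustrative cross sections (not model predictions):

```python
# Check of eq. (LRasym): A_LR computed directly from the helicity cross
# sections agrees with the value reconstructed from R_{f,LR}(Pbar).
def R_LR(s_LR, s_RL, P):
    # An electron beam of polarization P has right-handed fraction (1+P)/2
    # and left-handed fraction (1-P)/2 (positron beam unpolarized).
    sigma = lambda p: 0.5 * (1 - p) * s_LR + 0.5 * (1 + p) * s_RL
    return sigma(+P) / sigma(-P)

s_LR, s_RL, P = 2.0, 1.2, 0.8            # arbitrary illustrative values
A_direct = (s_LR - s_RL) / (s_LR + s_RL)
R = R_LR(s_LR, s_RL, P)
A_from_R = (1 / P) * (1 - R) / (1 + R)
print(A_direct, A_from_R)                # both 0.25

# The SM entry R_{mu,LR} = 0.890 of Table (tbl:LRasym) corresponds to
# A_LR ≈ 0.073 at Pbar = 0.8.
print((1 / 0.8) * (1 - 0.890) / (1 + 0.890))
```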
\begin{table}[htbp]
\caption{$R_{f, LR} (\overline{P})$ in the SM, and
deviations of $R_{f, LR} (\overline{P})^{\rm GHU} / R_{f, LR} (\overline{P})^{\rm SM}$
from unity are tabulated for $\overline{P} = 0.8$.
Statistical uncertainties of $R_{f,LR}^{\rm SM}$ are estimated with $L_{int}$ data for both
$\sigma( \bar{f}f ; P_{e^-} = + \overline{P})$
and $\sigma( \bar{f}f ; P_{e^-} = - \overline{P})$, namely with $2 L_{int}$ data in all.}
\label{tbl:LRasym}
\vskip 8pt
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{|c|c|c|cc|}
\hline
$f$ & $\sqrt{s}$~~~,~~~ $L_{int}$ & SM &\multicolumn{2}{c|}{ GHU} \\
&& $R_{f,LR}^{SM}$ (uncertainty) & $\theta_H=0.0917$ & $\theta_H = 0.0737$ \\
\hline
$\mu$ & $250\,{\rm GeV}$, $250\,{\rm fb}^{-1}$ & $0.890$ ($0.3\%$) & $-3.4\%$ & $-2.2\%$ \\
& $500\,{\rm GeV}$, $500\,{\rm fb}^{-1}$ & $0.900$ ($0.4\%$) & $-13.2\%$ & $-8.6\%$ \\
\hline
$b$ & $250\,{\rm GeV}$, $250\,{\rm fb}^{-1}$ & $0.349$ ($0.3\%$) & $-3.1\%$ & $-2.1\%$ \\
& $500\,{\rm GeV}$, $500\,{\rm fb}^{-1}$ & $0.340$ ($0.5\%$) & $-12.3\%$ & $-8.3\%$ \\
\hline
$t$ & $500\,{\rm GeV}$, $500\,{\rm fb}^{-1}$ & $0.544$ ($0.4\%$) & $-13.0\%$ & $-8.2\%$ \\
\hline
\end{tabular}
\end{table}
\section{Summary}
Gauge-Higgs unification predicts large parity violation in the quark-lepton couplings to
the $Z'$ bosons ($Z^{(1)}, \gamma^{(1)},Z_R^{(1)}$). Although these $Z'$ bosons are
very heavy with masses 7 -- 8$\,$TeV, they give rise to significant interference effects
in $e^+ e^-$ collisions at $\sqrt{s} = 250\,{\rm GeV} \sim 1\,$TeV.
We examined the A-model of $SO(5) \times U(1) \times SU(3)$ gauge-Higgs unification,
and found that significant deviation can be seen at 250$\,$GeV ILC with 250$\,{\rm fb}^{-1}$ data.
Polarized electron and positron beams are indispensable.
All of the total cross section, differential cross section, forward-backward asymmetry,
and left-right asymmetry for $e^+ e^- \rightarrow f \bar f$ processes show distinct dependence
on the energy and polarization.
We stress that new particles with masses of 7 -- 8$\,$TeV can be explored at the 250$\,$GeV ILC
through the interference effect, though not by direct production.
This is possible at $e^+ e^-$ colliders because the number of $e^+ e^- \rightarrow f \bar f$ events
is huge. Although the probability of directly producing $Z'$ bosons is suppressed
by a factor $(s/m_{Z'}^2)^2$, the interference term is suppressed only by a factor
of $s/m_{Z'}^2$. This gives a big advantage over $p p$ colliders such as LHC.
In this talk the predictions coming from the A-model are presented.
It is curious to see how predictions change in the B-model.
A preliminary study indicates that the pattern of the polarization dependence is reversed
in the B-model in comparison with the A-model.
The B-model is motivated by the idea of grand unification, which, in my opinion, is
an absolute necessity for the ultimate form of GHU. The A-model cannot be
implemented in natural grand unification.
Satisfactory grand unification in GHU has not been achieved
yet.\cite{HosotaniYamatsu2015, Furui2016, HosotaniYamatsu2017},
\cite{Burdman2003}-\cite{MaruYatagai2019}
There are many other issues to be solved in GHU.
Mixing in the flavor sector, behavior at finite temperature, inflation in cosmology,
and baryon number generation are among them.
I would like to come back to these issues in due course.
\section*{Acknowledgement}
This work was supported in part by Japan Society for the Promotion of Science,
Grants-in-Aid for Scientific Research, No.\ 15K05052 and No.\ 19K03873.
\def\jnl#1#2#3#4{{#1}{\bf #2}, #3 (#4)}
\def{\em Prog.\ Theoret.\ Phys.\ }{{\em Prog.\ Theoret.\ Phys.\ }}
\def{\em Prog.\ Theoret.\ Exp.\ Phys.\ }{{\em Prog.\ Theoret.\ Exp.\ Phys.\ }}
\def{\em Nucl.\ Phys.} B{{\em Nucl.\ Phys.} B}
\def{\it Phys.\ Lett.} B{{\it Phys.\ Lett.} B}
\def\em Phys.\ Rev.\ Lett. {\em Phys.\ Rev.\ Lett. }
\def{\em Phys.\ Rev.} D{{\em Phys.\ Rev.} D}
\def{\em Ann.\ Phys.\ (N.Y.)} {{\em Ann.\ Phys.\ (N.Y.)} }
\def{\em Mod.\ Phys.\ Lett.} A{{\em Mod.\ Phys.\ Lett.} A}
\def{\em Int.\ J.\ Mod.\ Phys.} A{{\em Int.\ J.\ Mod.\ Phys.} A}
\def{\em Int.\ J.\ Mod.\ Phys.} B{{\em Int.\ J.\ Mod.\ Phys.} B}
\def{\em Phys.\ Rev.} {{\em Phys.\ Rev.} }
\def{\em JHEP} {{\em JHEP} }
\def{\em JCAP} {{\em JCAP} }
\def{\em J.\ Phys.} A{{\em J.\ Phys.} A}
\def{\em J.\ Phys.} G{{\em J.\ Phys.} G}
\def{\em ibid.} {{\em ibid.} }
\renewenvironment{thebibliography}[1]
{\begin{list}{[$\,$\arabic{enumi}$\,$]}
{\usecounter{enumi}\setlength{\parsep}{0pt}
\setlength{\itemsep}{0pt} \renewcommand{\baselinestretch}{1.2}
\settowidth
{\labelwidth}{#1 ~ ~}\sloppy}}{\end{list}}
\section*{References}
\section{Introduction}
Hydrodynamics on curved manifolds is relevant for a wide range of physical phenomena. Examples range from the motion of electrons in graphene at the micro-scale \citep{Giordanelli18}, through thin liquid films \citep{Schwartz95, Howell03}, confined active matter \citep{Keber14,Henkes18,Janssen17} and bio-membranes \citep{Henle10,Arroyo09} at the meso-scale, to relativistic flows in astrophysics \citep{Marti15} and at the cosmological scale \citep{Ellis12}. However, despite its importance, the study of flows on curved space has received much less attention when compared to corresponding investigations on two- and three-dimensional flat space. Suitable numerical approaches to study these problems are also still limited, especially when the flow phenomena of interest involve several fluid components.
Here our focus is on multicomponent flow on curved two-dimensional surfaces. An important motivation to study such problems arises from biological membranes and their synthetic counterparts. Experimentally it has been observed that self-assembled lipid and polymer membranes can adopt an astonishing range of shapes and morphologies \citep{Seddon95}, from single bilayers to stacks and convoluted periodic structures. Moreover, these membranes are usually composed of several species, which can mix or demix depending on the thermodynamic conditions under which they are prepared \citep{Baumgart03,Bacia05,Roberts17}. The interplay between curvature and composition is a ubiquitous structural feature of bio-membranes, and it is key to biological functions and synthetic membrane-based applications \citep{McMahon05,Groves06,Pontani13}.
There is much interest to understand this interplay between membrane curvature and composition. However, to date continuum modelling of membranes with several lipid components has largely focussed on their equilibrium configurations \citep{Julicher96,Hu11,Paillusson17,Fonda2018}. Several dynamic studies of phase separation on curved surfaces have been carried out in the literature. However, apart from a few exceptions \citep{Nitschke12}, they usually involve diffusive dynamics and ignore the importance of hydrodynamics \citep{Marenduzzo13,Jeong15,Gera17}. The aim of this paper is to develop a flexible lattice Boltzmann framework to simulate multicomponent flow on arbitrary curved surfaces. For simplicity, here we will assume the two-dimensional flow is Newtonian. For lipid membranes, this assumption is supported by both Molecular Dynamics simulations and experimental observations \citep{Dimova06,Otter07,Cicuta07,Camley11}.
Our approach is based on the lattice Boltzmann method (LBM) \citep{KrugerBook,Succi01}, which has recently become increasingly popular to study multicomponent flow phenomena, with good agreement against experiments and other simulation methods, including for drop dynamics, liquid phase separation, microfluidics and porous media \citep{Liu16, Sadullah18, Varagnolo2013, Liu2015}. Within the lattice Boltzmann literature, there are several models for multicomponent flow, including the so-called free energy \citep{Swift96}, pseudo-potential \citep{Shan93} and color \citep{Gunstensen91} models. In this work, we have chosen to employ the free energy model, though our framework can be adapted to account for the pseudo-potential and color models. Our approach can also be extended to account for more fluid components \citep{Semprebon16,Wohrwag18,Abadi18,Liang16,RidlWagner2018}, as well as coupled to other dynamical equations, including those for liquid crystals \citep{Spencer06,Denniston01} and viscoelastic fluids \citep{Malaspinas10,Gupta15}.
The standard lattice Boltzmann method is based on regular Cartesian grids, which makes it unsuitable for use on curved surfaces. Thus, an important contribution of this work is the use of the vielbein formalism \citep{busuioc19} to solve the hydrodynamics equations of motion on curved surfaces, which we combine with the free energy binary fluid model. To our knowledge, this is the first time this has been done. Previous lattice Boltzmann simulations on curved surfaces have been carried out for single-component flows \citep{Mendoza13,busuioc19,hejranfar17pre}. Since it is not always possible to have a regular grid on curved surfaces and for the discrete velocity sets to coincide with the lattice discretizations, here we solve the discrete Boltzmann equation using a finite difference approach, rather than a collision-propagation scheme. The latter is usually the case in standard lattice Boltzmann implementations.
The capabilities of our new method are demonstrated using several problems. Firstly,
we study drift motion of fluid droplets and stripes when placed on the surface of a torus.
This drift is due to non-uniform curvature, and as such, is not present on flat space, or for surfaces with uniform curvature (e.g. a sphere). For the stripes, analytical results are available for their equilibrium configuration, Laplace pressure, and relaxation dynamics \citep{Busuioc19bench}, thus providing an excellent platform to systematically examine the accuracy of our method. We demonstrate that these predictions are accurately captured in our simulations. Secondly, we simulate binary phase separation on the surface of a torus for equal and unequal compositions, both in diffusive and hydrodynamic regimes. We compare and contrast the results for tori of various shapes against those for flat two-dimensional surface \citep{Bray02,Kendon01,Wagner97}.
\section{Computational Model and Method}
\label{sec:gen}
In this section we develop a framework that allows simulations of multicomponent
flow on arbitrary curved surfaces. Our vielbein lattice Boltzmann approach has
three key features. Firstly, similar to standard lattice Boltzmann method, we exploit the Boltzmann
equation to solve the continuum equations of motion, and we use a discrete and finite set of
fluid distribution functions. Secondly, unlike standard lattice Boltzmann method, the discrete
velocity sets do not coincide with the neighbouring lattice points. Thus, rather than solving
the Boltzmann equation using a sequence of collision and propagation steps, we
take advantage of a finite difference method. Thirdly, to describe the curved surface,
we employ a vielbein field, which decouples the velocity space from the
coordinate space \citep{cardall13,busuioc19}. This simplifies the formulation and computation
of the governing Boltzmann equation.
\subsection{Brief Introduction to Vielbein Fields}\label{sec:gen:vielb}
Let us begin by considering a two-dimensional curved surface embedded in three dimensions.
Vector fields, such as the velocity field $\bm{u}(\bm{x})$, on the two-dimensional surface
can be expressed in the curvilinear coordinate system using
\begin{equation}
\bm{u}(\bm{x}) = u^a(q^b) \bm{\partial_a },
\end{equation}
where $u^{a}(q^b)$ represent the components of the velocity field
on a manifold parametrised using the coordinates $q^b$
($1 \le a, b\le 2$ for two-dimensional manifolds).
Furthermore, the squared norm of the velocity field $\bm{u}$ can be computed as
\begin{equation}
\bm{u}^2 = g_{ab} u^{a} u^{b}.
\end{equation}
$g_{ab}$ is called the metric tensor.
This description of vector fields in curvilinear
coordinates can become inconvenient for practical computations.
This is because the elements of the metric tensor $g_{ab}$
may become singular at various points
due to the choice of surface parametrisation. In such instances, the
contravariant components $u^a$ of the velocity must diverge in order for the
squared norm $\bm{u}^2$ to remain finite.
The difficulty described above can be alleviated by introducing, as an
interface between the coordinate space and the velocity space,
the vielbein vector fields (frame) $\bm{e_{{\hat{a}}} }= e_{{\hat{a}}}^a \bm{\partial_a}$.
Dual to the vielbein vector fields are the vielbein one-forms (co-frame)
$\bm{\omega^{\hat{a}}} = \omega^{\hat{a}}_a \bm{dq^a}$.
We reserve the hatted indices to denote the vielbein framework.
The vielbein frame and co-frame have to satisfy the following relations
\begin{equation}
\braket{\bm{\omega^{\hat{a}}}, \bm{e_{\hat{b}}}} \equiv \omega^{\hat{a}}_a e^a_{\hat{b}} =
\delta^{\hat{a}}{}_{{\hat{b}}}, \qquad
\omega^{\hat{a}}_a e_{\hat{a}}^b = \delta^b{}_a, \qquad
g_{ab} e^a_{\hat{a}} e^b_{\hat{b}} = \delta_{{\hat{a}}{\hat{b}}}.
\label{eq:vielb_contr}
\end{equation}
With the above vielbein frame and co-frame, the vector field $\bm{u}$ can be
written as
\begin{equation}
\bm{u} = u^{\hat{a}} \bm{e_{{\hat{a}}}},
\end{equation}
where the vector field components are
\begin{equation}
u^{\hat{a}} = \omega^{\hat{a}}_a u^a, \qquad
u^a = e^a_{\hat{a}} u^{\hat{a}},
\end{equation}
and the squared norm
\begin{equation}
\bm{u}^2 = \delta_{{\hat{a}}{\hat{b}}} u^{\hat{a}} u^{\hat{b}}.
\end{equation}
In the vielbein framework, the information on the metric tensor
is effectively absorbed in the components of the vector field,
which makes the formulation and derivation of the lattice Boltzmann approach
significantly less cumbersome.
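As a concrete illustration of the relations (\ref{eq:vielb_contr}), consider the familiar flat metric in polar coordinates $(q^1, q^2) = (r, \theta)$, $ds^2 = dr^2 + r^2 d\theta^2$, with frame $\bm{e_{\hat{1}}} = \bm{\partial_r}$, $\bm{e_{\hat{2}}} = r^{-1} \bm{\partial_\theta}$ (the torus case is treated in Appendix \ref{app:curved}). A minimal numerical check in Python:

```python
import numpy as np

# Duality relations (eq:vielb_contr) for flat space in polar coordinates:
# metric g = diag(1, r^2), frame e_1 = d/dr, e_2 = (1/r) d/dtheta.
r = 1.7                                    # arbitrary sample point (r > 0)
g = np.diag([1.0, r**2])                   # g_ab
e = np.array([[1.0, 0.0],                  # rows: e_{hat a}^a
              [0.0, 1.0 / r]])
w = np.array([[1.0, 0.0],                  # rows: omega^{hat a}_a
              [0.0, r]])

assert np.allclose(w @ e.T, np.eye(2))     # <omega^a, e_b> = delta^a_b
assert np.allclose(w.T @ e, np.eye(2))     # omega^a_a e_a^b = delta^b_a
assert np.allclose(e @ g @ e.T, np.eye(2)) # g_ab e^a e^b = delta_ab
```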
In the lattice Boltzmann implementation used in this paper, we need to introduce two more geometrical objects.
First, the Cartan coefficients $c_{{\hat{a}}{\hat{b}}}{}^{\hat{c}}$ are defined as
\begin{equation}
c_{{\hat{a}}{\hat{b}}}{}^{\hat{c}} = \braket{\bm{\omega^{\hat{c}}}, [\bm{e_{\hat{a}}}, \bm{e_{\hat{b}}}]} = \omega^{\hat{c}}_a ([e_{\hat{a}}, e_{\hat{b}}])^a,\label{eq:cartan}
\end{equation}
with the commutator $([\bm{e_{\hat{a}}}, \bm{e_{\hat{b}}}])^a = e_{\hat{a}}^b \partial_{b} e_{\hat{b}}^a -
e_{\hat{b}}^b \partial_b e_{\hat{a}}^a$.
Second, $\Gamma^{\hat{a}}{}_{{\hat{b}}{\hat{c}}}$ and $\Gamma_{{\hat{a}}{\hat{b}}{\hat{c}}}$
represent the connection coefficients, which are defined as
\begin{equation}
\Gamma^{\hat{d}}{}_{{\hat{b}}{\hat{c}}} = \delta^{{\hat{d}}{\hat{a}}} \Gamma_{{\hat{a}}{\hat{b}}{\hat{c}}}, \qquad
\Gamma_{{\hat{a}}{\hat{b}}{\hat{c}}} = \frac{1}{2} (c_{{\hat{a}}{\hat{b}}{\hat{c}}} + c_{{\hat{a}}{\hat{c}}{\hat{b}}} -
c_{{\hat{b}}{\hat{c}}{\hat{a}}}).\label{eq:conn}
\end{equation}
In Appendix \ref{app:curved}, we detail the application of the vielbein formalism for a torus.
It is worth noting that our approach is general and
other curved geometries can be handled in a similar way.
\subsection{Binary Fluid Model and Equations of Motion}
We consider a binary mixture of fluids $A$ and $B$, characterised by an order
parameter $\phi$, such that
$\phi = 1$ corresponds to a bulk $A$ fluid and $\phi = -1$ to a bulk $B$ fluid.
A simple free energy model that allows the coexistence of these two bulk fluids
is given by the following Landau free energy \citep{Briant04,KrugerBook}
\begin{equation}
\Psi = \int_V \left[ \frac{A}{4} (1-\phi^2)^2 + \frac{\kappa}{2} (\nabla \phi)^2 \right]
dV, \label{eq:Landau}
\end{equation}
where
$A$ and $\kappa$ are free parameters, which are related to the
interface width $\xi_0$ and surface tension $\gamma$ through
\begin{equation}
\xi_0 = \sqrt{\frac{\kappa}{A}}, \qquad
\gamma = \sqrt{\frac{8 \kappa A}{9}}.
\label{eq:xi_gamma}
\end{equation}
The chemical potential can be derived by taking the functional derivative of
the free energy with respect to the order parameter, giving
\begin{equation}
\mu({\bm x}) = \frac{\delta \Psi}{\delta \phi({\bm x})} = -A\phi(1- \phi^2) - \kappa \Delta \phi.
\label{eq:mu}
\end{equation}
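The relations (\ref{eq:xi_gamma}) can be checked numerically: the flat one-dimensional interface solving $\mu = 0$ is $\phi(x) = \tanh[x/(\sqrt{2}\,\xi_0)]$, and integrating the excess free energy density across it recovers $\gamma$. A Python sketch with arbitrary parameter values:

```python
import numpy as np

# Verify gamma = sqrt(8 kappa A / 9) of eq. (xi_gamma) by integrating the
# excess free energy across the flat equilibrium interface
# phi(x) = tanh(x / (sqrt(2) xi_0)), which solves mu = 0 in eq. (mu).
A, kappa = 0.75, 0.3                      # arbitrary positive parameters
xi0 = np.sqrt(kappa / A)

x = np.linspace(-20 * xi0, 20 * xi0, 20001)
dx = x[1] - x[0]
phi = np.tanh(x / (np.sqrt(2) * xi0))
dphi = np.gradient(phi, x)

# Excess free energy density (the bulk term vanishes at phi = +/-1).
f = 0.25 * A * (1 - phi**2) ** 2 + 0.5 * kappa * dphi**2
gamma_num = np.sum(f) * dx
gamma_exact = np.sqrt(8 * kappa * A / 9)
print(gamma_num, gamma_exact)             # agree to well below 0.1%
```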
The evolution of the order parameter $\phi$ is specified by the Cahn-Hilliard equation.
In covariant form it is given by
\begin{equation}
\partial_t \phi + \nabla_{\hat{a}} (u^{\hat{a}} \phi) = \nabla_{\hat{a}} (M \nabla^{\hat{a}} \mu),
\label{eq:CH}
\end{equation}
where the hatted indices are taken with respect to the orthonormal vielbein basis.
Equivalently, indices with respect to the coordinate basis can be used, e.g.
$\nabla_{{\hat{a}}} (u^{\hat{a}} \phi) = \nabla_a (u^a \phi)$.
In the above, $M$ is the mobility parameter, $\mu$ is the chemical potential,
and the fluid velocity $\bm{u}$ is a solution of the continuity and Navier-Stokes equations
\begin{equation}
\partial_t n + \nabla_{\hat{a}} (u^{\hat{a}} n) = 0, \qquad
n m \frac{Du^{\hat{a}}}{Dt} = - \nabla_{{\hat{b}}} T^{{\hat{a}}{\hat{b}}} +
n F^{\hat{a}},
\label{eq:hydro}
\end{equation}
where $D/Dt = \partial_t + u^{\hat{b}} \nabla_{\hat{b}}$ is the material (convective) derivative,
$m$ is the particle mass, $n$ is the number density and
$T^{{\hat{a}}{\hat{b}}} = p_{\rm i} \delta^{{\hat{a}}{\hat{b}}} + \sigma^{{\hat{a}}{\hat{b}}}$ is the
ideal gas stress tensor. $T^{{\hat{a}}{\hat{b}}}$ consists of the ideal gas pressure $p_{\rm i} = n k_B T$ and
the viscous stress for the Newtonian fluid $\sigma^{{\hat{a}}{\hat{b}}} =
-\eta(\nabla^{\hat{a}} u^{\hat{b}} + \nabla^{\hat{b}} u^{\hat{a}} - \delta^{{\hat{a}}{\hat{b}}}
\nabla_{\hat{c}} u^{\hat{c}}) - \eta_v \delta^{{\hat{a}}{\hat{b}}} \nabla_{\hat{c}} u^{\hat{c}}$.
The latter is written in terms of the dynamic (shear) and volumetric (bulk)
viscosities $\eta$ and $\eta_v$.
The thermodynamic force term $F^{\hat{a}}$
takes the following form
\begin{eqnarray}
& n F^{{\hat{a}}} = - \phi \nabla^{\hat{a}} \mu = - \nabla^{\hat{a}} p_{\rm binary} +
\kappa \phi \nabla^{\hat{a}} \Delta \phi, \nonumber \\
& p_{\rm binary} = A\left(-\frac{1}{2} \phi^2 + \frac{3}{4} \phi^4\right).
\label{eq:pbin}
\end{eqnarray}
A summary of how the differential operators are applied in the
Cartesian and torus geometries is provided in the Supplementary Information \citep{suppl}.
\subsection{The Vielbein Lattice Boltzmann Approach}
\label{sec:gen:boltz}
In this paper, we employ the lattice Boltzmann approach to solve the
hydrodynamics equations [Eq.~\eqref{eq:hydro}], while the Cahn-Hilliard equation [Eq.~\eqref{eq:CH}]
is solved directly using a finite difference method. The details of the numerical
implementation are discussed in the Supplementary Information \citep{suppl}.
It is possible to solve the Cahn-Hilliard equation using a lattice Boltzmann
scheme, and on flat manifolds, it has been suggested that extension to more fluid components
is more straightforward in this approach \citep{li2007symmetric,RidlWagner2018}.
However, for our purpose here, it is more expensive and requires a
higher order quadrature.
We use a discretised form of the Boltzmann equation that reproduces the fluid
equations of motion in the continuum limit. In covariant form, the Boltzmann equation
on an arbitrary geometry is given by \citep{busuioc19}:
\begin{equation}
\frac{\partial f}{\partial t} +
\frac{1}{\sqrt{g}} \frac{\partial}{\partial q^b}
\left(v^{\hat{a}} e_{\hat{a}}^ b f \sqrt{g}\right) +
\frac{\partial}{\partial v^{\hat{a}}} \left[\left(\frac{F^{\hat{a}}}{m} -
\Gamma^{\hat{a}}{}_{{\hat{b}}{\hat{c}}} v^{\hat{b}} v^{\hat{c}}\right) f\right]
= J[f],
\label{eq:boltz_cons}
\end{equation}
where $\sqrt{g}$ is the square root of the determinant of the metric tensor,
and $J[f]$ is the collision operator.
For the specific case of a torus, the Boltzmann equation reads
\begin{multline}
\frac{\partial f}{\partial t} + \frac{v^{\hat{\varphi}}}{R + r\cos\theta} \frac{\partial f}{\partial \varphi}
+ \frac{v^{\hat{\theta}}}{r(1 + a\cos\theta)} \frac{\partial [f(1 + a\cos\theta)]}{\partial\theta}
+ \frac{F^{\hat{\varphi}}}{m} \frac{\partial f}{\partial v^{\hat{\varphi}}}
+ \frac{F^{\hat{\theta}}}{m} \frac{\partial f}{\partial v^{\hat{\theta}}} \\
- \frac{\sin\theta}{R + r\cos\theta}
\left[v^{\hat{\varphi}} \frac{\partial (f v^{\hat{\varphi}})}{\partial v^{\hat{\theta}}} -
v^{\hat{\theta}} \frac{\partial (fv^{\hat{\varphi}})}{\partial v^{\hat{\varphi}}} \right]
= -\frac{1}{\tau}[f - \ensuremath{f^{(\mathrm{eq})}}].\label{eq:boltz_tor}
\end{multline}
The steps needed to derive Eq.~\eqref{eq:boltz_tor} from Eq.~\eqref{eq:boltz_cons}
are summarised in Appendix~\ref{app:curved}.
Here $r$ and $R$ represent the small (tube) and large (ring) radii,
$a = r/R$ is the radii ratio, while
the angle $\theta$ goes around the tube and $\varphi$ around the large
circle. The range of both $\theta$ and $\varphi$ is $[0, 2\pi)$ and the
system is periodic with respect to both these angles. The last term on the
left hand side of Eq. \eqref{eq:boltz_tor} corresponds to inertial and reaction forces
that arise when we have flow on curved surfaces, since fluid motion is constrained
on the surface.
As is common in the lattice Boltzmann literature, we employ
the BGK approximation for the collision operator,
\begin{equation}
J[f] = -\frac{1}{\tau} [f - f^{\rm eq}].
\end{equation}
The relaxation time $\tau$ is related to the fluid kinematic viscosity $\nu$,
dynamic viscosity $\eta$ and volumetric viscosity $\eta_v$
by \citep{KrugerBook,Dellar2001}
\begin{equation}
\nu = \frac{\eta}{n m} = \frac{\eta_v}{n m} =
\frac{\tau k_B T}{m},
\label{eq:nu}
\end{equation}
such that $\sigma^{{\hat{a}}{\hat{b}}} = -\eta(\nabla^{{\hat{a}}} u^{\hat{b}}
+ \nabla^{{\hat{b}}} u^{\hat{a}})$.
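In the units adopted below ($k_B T = m = 1$; the reference density $n = 1$ is an assumption for this sketch), the relaxation time corresponding to a target viscosity follows directly from Eq.~\eqref{eq:nu}:

```python
# Relaxation time for a target kinematic viscosity, Eq. (nu),
# in the units k_B*T = m = 1 used in this paper
kB_T = 1.0
m = 1.0
nu = 2.5e-3        # kinematic viscosity used in the drift-dynamics simulations
tau = nu * m / kB_T  # nu = tau*k_B*T/m  =>  tau = nu*m/(k_B*T)
n = 1.0            # reference number density (assumed here)
eta = n * m * nu   # dynamic viscosity
print(tau, eta)
```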
Rather than considering fluid distribution functions $f(\bm v)$
with continuous velocity space $\bm{v} = (v^{{\hat{\theta}}}, v^{{\hat{\varphi}}})$, we discretise the velocity
space using $\bm{v}_{\bm{k}} = (v_{k_\theta}, v_{k_\varphi})$. Due to the inertial and reaction force terms in Eq.~\eqref{eq:boltz_cons},
we need at least a fourth order quadrature ($Q = 4$) when non-Cartesian coordinates are employed.
We find that simulation results become inaccurate when a third order (or lower) quadrature is used.
The 16 velocity directions, corresponding to $Q = 4$, are illustrated in Fig. \ref{fig:veldir} \citep{sofonea18pre}.
The possible values of $v_{k_\theta}$ and $v_{k_\varphi}$ ($1 \le k_\theta, k_\varphi \le 4$) are given as the roots of
the fourth order Hermite polynomial,
\begin{equation}
\begin{pmatrix}
v_1 \\ v_2 \\ v_3 \\ v_4
\end{pmatrix} =
\begin{pmatrix}
-\sqrt{3 + \sqrt{6}} \\
-\sqrt{3 - \sqrt{6}} \\
\sqrt{3 - \sqrt{6}} \\
\sqrt{3 + \sqrt{6}}
\end{pmatrix}. \label{eq:vmatrix}
\end{equation}
We use Hermite polynomials orthogonal with respect to the weight function $e^{-v^2/2}/\sqrt{2\pi}$,
which are described in detail,
e.g. in the Appendix of Ref.~\citep{sofonea18pre}.
It is worth noting that, unlike in the standard lattice Boltzmann algorithm, the velocity directions do not in general coincide with the neighbouring lattice points.
For simplicity, we have set $k_BT = 1$ and $m = 1$, such that the reference scale for the velocity $v_{\rm ref} = (k_BT/m)^{1/2} = 1$,
which is also the sound speed in an isothermal fluid.
The particle number density $n$ and velocity ${\bm u}$ can be computed as zeroth and first
order moments of the distribution functions
\begin{equation}
n = \sum_{\bm k} f_{\bm k}, \qquad
n {\bm u} = \sum_{\bm k} f_{\bm k} {\bm v}_{\bm k}.
\end{equation}
With the discretisation of the velocity space, we also replace the Maxwell-Boltzmann equilibrium
distribution with a set of distribution functions $f^{\rm eq}_{\bm{k}}$ corresponding to the discrete
velocity vectors $\bm{v}_{\bm{k}}$.
Due to the use of the vielbein formalism, the expression for
$f^{\rm eq}_{\bm{k}}$ coincides with the one employed on the flat Cartesian geometry
\citep{sofonea18pre}
\begin{equation}
f^{\rm eq}_{\bm{k}} = n w_{k_\theta} w_{k_\varphi}
\left\{1 + \bm{v}_{\bm{k}} \cdot \bm{u} +
\frac{1}{2} [(\bm{v}_{\bm{k}} \cdot \bm{u})^2 - \bm{u}^2]
+ \frac{1}{6} \bm{v}_{\bm{k}} \cdot \bm{u} [
(\bm{v}_{\bm{k}} \cdot \bm{u})^2 - 3 \bm{u}^2]\right\}.
\label{eq:feq}
\end{equation}
The quadrature weights $w_k$ can be computed using the formula
\begin{equation}
w_k = \frac{Q!}{H_{Q+1}^2(v_k)},
\end{equation}
where $Q$ is the order of the quadrature and $H_n(x)$ is the Hermite polynomial of order $n$.
For $Q = 4$, the weights have the following values \citep{sofonea18pre}
\begin{equation}
w_1 = w_4 = \frac{3-\sqrt{6}}{12}, \qquad
w_2 = w_3 = \frac{3 + \sqrt{6}}{12}.
\end{equation}
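The abscissae, the weights, and the moment properties of the discrete equilibria of Eq.~\eqref{eq:feq} can all be verified with a short numerical sketch (the density $n = 1.3$ and velocity $\bm u = (0.05, -0.02)$ below are arbitrary test values, not simulation parameters):

```python
import math
import numpy as np

def He(n, x):
    """Probabilists' Hermite polynomial He_n (weight exp(-v^2/2)/sqrt(2*pi))."""
    x = np.asarray(x, dtype=float)
    h_prev, h = np.ones_like(x), x.copy()
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

Q = 4
v = np.array([-math.sqrt(3 + math.sqrt(6)), -math.sqrt(3 - math.sqrt(6)),
               math.sqrt(3 - math.sqrt(6)),  math.sqrt(3 + math.sqrt(6))])

# The abscissae are the roots of He_4(x) = x^4 - 6x^2 + 3
assert np.allclose(He(4, v), 0, atol=1e-12)

# Weights w_k = Q!/H_{Q+1}^2(v_k) reproduce (3 -/+ sqrt(6))/12 and sum to 1
w = math.factorial(Q) / He(Q + 1, v) ** 2
assert np.allclose(w[[0, 3]], (3 - math.sqrt(6)) / 12)
assert np.allclose(w[[1, 2]], (3 + math.sqrt(6)) / 12)
assert abs(w.sum() - 1) < 1e-14

# The 16-velocity equilibrium recovers n and n*u exactly as its moments
n, u = 1.3, np.array([0.05, -0.02])
vk = np.array([(vt, vp) for vt in v for vp in v])
wk = np.array([wt * wp for wt in w for wp in w])
vu = vk @ u
feq = n * wk * (1 + vu + 0.5 * (vu**2 - u @ u)
                + vu * (vu**2 - 3 * (u @ u)) / 6)
assert abs(feq.sum() - n) < 1e-13
assert np.allclose(feq @ vk, n * u, atol=1e-13)
```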
To compute the force terms in the Boltzmann
equation, Eq. \eqref{eq:boltz_tor}, we consider a unidimensional expansion of the distribution
with respect to the velocity space degrees of freedom~\citep{busuioc19,busuioc17arxiv}.
In particular, the velocity derivatives appearing
in Eq.~\eqref{eq:boltz_tor} can be computed as
\begin{align}
& \left(\frac{\partial f}{\partial v^{\hat{\theta}}}\right)_{k_\theta k_\varphi} =
\sum_{k_{\theta}' = 1}^Q \mathcal{K}^H_{k_\theta,k_\theta'}
f_{k_\theta', k_\varphi}, \quad
\left(\frac{\partial f}{\partial v^{\hat{\varphi}}}\right)_{k_\theta k_\varphi} =
\sum_{k_\varphi' = 1}^Q \mathcal{K}^H_{k_\varphi,k_\varphi'}
f_{k_\theta, k_\varphi'}, \nonumber\\
& \left(\frac{\partial (fv^{\hat{\varphi}})}{\partial v^{\hat{\varphi}}}\right)_{k_\theta k_\varphi} =
\sum_{k_\varphi' = 1}^Q \widetilde{\mathcal{K}}^{H}_{k_\varphi,k_\varphi'}
f_{k_\theta, k_\varphi'},
\end{align}
where the kernels $\mathcal{K}^H_{i,m}$ and
$\widetilde{\mathcal{K}}^H_{i,m}$ can be written in terms of
Hermite polynomials \citep{busuioc19}
\begin{align}
\mathcal{K}^{H}_{i,m} =& -w_i \sum_{\ell = 0}^{Q - 1}
\frac{1}{\ell!} H_{\ell+1}(v_i) H_{\ell}(v_{m}), \\
\widetilde{\mathcal{K}}^{H}_{i,m} =& -w_i \sum_{\ell = 0}^{Q - 1}
\frac{1}{\ell!} H_{\ell+1}(v_i) [H_{\ell+1}(v_{m}) +
\ell H_{\ell-1}(v_{m})]. \nonumber
\end{align}
We list below the components of the above matrices for the case of $Q = 4$:
\begin{align}
\mathcal{K}^H_{i,m} =&
\begin{pmatrix}
\frac{1}{2} \sqrt{3 + \sqrt{6}} &
\frac{\sqrt{3 + \sqrt{3}}}{2(3 + \sqrt{6})} &
-\frac{\sqrt{3 - \sqrt{3}}}{2(3 + \sqrt{6})} &
\frac{1}{2} \sqrt{1 - \sqrt{\frac{2}{3}}}\\
-\sqrt{\frac{5 + 2\sqrt{6}}{2(3-\sqrt{3})}} &
\frac{1}{2}\sqrt{3-\sqrt{6}} & \frac{1}{2}\sqrt{1 + \sqrt{\frac{2}{3}}} &
-\frac{\sqrt{27+11\sqrt{6}}-\sqrt{3+\sqrt{6}}}{2\sqrt{6}}\\
\frac{\sqrt{27+11\sqrt{6}}-\sqrt{3+\sqrt{6}}}{2\sqrt{6}} &
-\frac{1}{2} \sqrt{1 + \sqrt{\frac{2}{3}}} &
-\frac{1}{2}\sqrt{3 - \sqrt{6}} &
\frac{\sqrt{27+11\sqrt{6}} + \sqrt{3+\sqrt{6}}}{2\sqrt{6}} \\
-\frac{\sqrt{3-\sqrt{6}}}{2\sqrt{3}} &
\frac{\sqrt{3-\sqrt{3}}}{2(3 + \sqrt{6})} &
-\frac{\sqrt{3+\sqrt{3}}}{2(3 + \sqrt{6})} &
-\frac{1}{2} \sqrt{3 + \sqrt{6}}
\end{pmatrix}, \nonumber\\
\widetilde{\mathcal{K}}^H_{i,m} =&
\begin{pmatrix}
-\frac{3+\sqrt{6}}{2} &
\frac{2 - 5\sqrt{2} + \sqrt{6(9-4\sqrt{2})}}{4} &
\frac{2 + 5\sqrt{2} - \sqrt{6(9+4\sqrt{2})}}{4} &
\frac{1}{2} \\
\frac{2 + 5\sqrt{2} + 4\sqrt{3} + \sqrt{6}}{4} &
-\frac{3 - \sqrt{6}}{2} &
\frac{1}{2} &
\frac{2 - 5\sqrt{2} - 4\sqrt{3} + \sqrt{6}}{4} \\
\frac{2 - 5\sqrt{2} - 4\sqrt{3} + \sqrt{6}}{4} &
\frac{1}{2} &
-\frac{3 - \sqrt{6}}{2} &
\frac{2 + 5\sqrt{2} + 4\sqrt{3} + \sqrt{6}}{4} \\
\frac{1}{2} &
\frac{2 + 5\sqrt{2} - \sqrt{6(9+4\sqrt{2})}}{4} &
\frac{2 - 5\sqrt{2} + \sqrt{6(9-4\sqrt{2})}}{4} &
-\frac{3+\sqrt{6}}{2}
\end{pmatrix}.
\end{align}
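The tabulated entries can be cross-checked against the defining sums; a short numerical sketch, spot-checking a few matrix elements:

```python
import math
import numpy as np

def He(n, x):
    """Probabilists' Hermite polynomial He_n(x); He_{-1} is taken as 0."""
    if n < 0:
        return 0.0
    h_prev, h = 1.0, float(x)
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

Q = 4
v = [-math.sqrt(3 + math.sqrt(6)), -math.sqrt(3 - math.sqrt(6)),
      math.sqrt(3 - math.sqrt(6)),  math.sqrt(3 + math.sqrt(6))]
w = [math.factorial(Q) / He(Q + 1, vi) ** 2 for vi in v]

K = np.zeros((Q, Q))
Kt = np.zeros((Q, Q))
for i in range(Q):
    for mm in range(Q):
        K[i, mm] = -w[i] * sum(He(l + 1, v[i]) * He(l, v[mm]) / math.factorial(l)
                               for l in range(Q))
        Kt[i, mm] = -w[i] * sum(He(l + 1, v[i])
                                * (He(l + 1, v[mm]) + l * He(l - 1, v[mm]))
                                / math.factorial(l) for l in range(Q))

# Spot checks against the closed-form entries listed above
assert math.isclose(K[0, 0], 0.5 * math.sqrt(3 + math.sqrt(6)), rel_tol=1e-9)
assert math.isclose(K[3, 3], -0.5 * math.sqrt(3 + math.sqrt(6)), rel_tol=1e-9)
assert math.isclose(K[0, 3], 0.5 * math.sqrt(1 - math.sqrt(2 / 3)), rel_tol=1e-9)
assert math.isclose(Kt[0, 0], -(3 + math.sqrt(6)) / 2, rel_tol=1e-9)
assert math.isclose(Kt[0, 3], 0.5, rel_tol=1e-9)
```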
\begin{figure}
\begin{center}
\includegraphics[scale=0.85,angle=0]{lattice.pdf}
\caption{A fourth order quadrature ($Q = 4$) lattice Boltzmann model with $16$ velocities.
The filled black circle in the centre of the figure corresponds to a lattice point in space.
Here we have an off-lattice implementation, where
the velocity directions do not coincide with neighbouring lattice points.
\label{fig:veldir}}
\end{center}
\end{figure}
\section{Drift Dynamics of Fluid Stripes and Droplets}\label{sec:drift}
In this section we begin by studying the behaviour of fluid stripes on the torus geometry.
By minimising the interface length subject to area conservation, we find there is a second
order phase transition in the location of the equilibrium position as we vary the stripe area.
In particular we observe bistability when the stripe area exceeds a critical value. We validate the ability
of our method to capture this effect in Subsec.~\ref{sec:drift:stripeeq}.
We then consider the Laplace pressure test in Subsec.~\ref{sec:drift:laplace}.
The Laplace pressure takes a different form on a curved torus geometry compared to
that on a flat geometry \citep{Busuioc19bench}. Furthermore,
the approach to equilibrium configuration through a damped harmonic motion
is investigated in Subsec.~\ref{sec:drift:damp}. We show that we recover
the damping coefficient and the angular frequency as derived in
\cite{Busuioc19bench}.
Finally, we contrast the drift dynamics of fluid stripes with that of droplets on the torus
in Subsec.~\ref{sec:drift:drop}. While the former drift to the inside of the torus,
the latter move to the outside.
\subsection{Equilibrium positions of fluid stripes} \label{sec:drift:stripeeq}
The basic idea behind establishing the equilibrium position of fluid stripes
is that the interface length must attain a minimum for a fixed stripe area.
We consider a stripe of angular width $\Delta \theta$, centred on $\theta = \theta_c$,
such that its interfaces are located at
\begin{equation}
\theta_- = \theta_c - \Delta \theta / 2, \qquad
\theta_+ = \theta_c + \Delta \theta / 2. \label{eq:stripe_thetapm}
\end{equation}
As a convention, here the stripe is identified with the minority, rather than the majority,
fluid component.
The area $\Delta A$ enclosed between the upper and lower interfaces
can be obtained as follows
\begin{equation}
\Delta A = 2\pi r R \int_{\theta_-}^{\theta_+} d\theta (1 + a \cos\theta)
= 2\pi r R [\Delta \theta + 2a \sin(\Delta \theta / 2) \cos \theta_c],
\label{eq:stripe_area}
\end{equation}
where $a = r / R$. Area conservation relates a variation of the stripe width
$\Delta \theta$ to a variation of the stripe centre
$\theta_c$: setting $d (\Delta A) = 0$ gives
\begin{equation}
d\frac{\Delta \theta}{2} = \frac{a \sin (\Delta \theta/2) \sin \theta_c}
{1 + a \cos(\Delta\theta/2) \cos\theta_c} \, d\theta_c.
\label{eq:stripe_dA0}
\end{equation}
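The closed form of Eq.~\eqref{eq:stripe_area} can be checked by direct numerical integration of the area element (a sketch; the stripe parameters below are arbitrary test values):

```python
import numpy as np

r, R = 0.8, 2.0
a = r / R
theta_c, dtheta = 0.7 * np.pi, 0.5 * np.pi   # arbitrary test stripe

# Trapezoidal integration of the area element 2*pi*r*R*(1 + a*cos(theta))
th = np.linspace(theta_c - dtheta / 2, theta_c + dtheta / 2, 100001)
f = 1 + a * np.cos(th)
h = th[1] - th[0]
dA_num = 2 * np.pi * r * R * h * (f.sum() - 0.5 * (f[0] + f[-1]))

# Closed form, Eq. (stripe_area)
dA = 2 * np.pi * r * R * (dtheta + 2 * a * np.sin(dtheta / 2) * np.cos(theta_c))

print(abs(dA_num - dA))  # negligible discretisation error
```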
The total interface length $\ell_{\rm total} = \ell_+ + \ell_-$
can be computed as
\begin{equation}
\ell_{\rm total} = 4\pi R \left(1 + a \cos\theta_c \cos \frac{\Delta \theta}{2}\right).
\label{eq:stripe_ltot}
\end{equation}
Imposing $d\ell_{\rm total} = 0$ yields an equation involving
the stripe width $\Delta \theta_{eq}$ and stripe centre
$\theta_c^{eq}$ at equilibrium
\begin{equation}
\left(a \cos\theta_c^{eq} + \cos\frac{\Delta \theta_{eq}}{2}\right)
\sin\theta_c^{eq} = 0.
\label{eq:minimaring}
\end{equation}
The above equation has different solutions depending on the stripe width.
For narrow stripes, the equilibrium position is located
at $\theta_c^{eq} = \pi$.
There is a critical point corresponding to stripe width
$\Delta \theta_{eq} = \Delta \theta_{\rm crit} = 2\arccos(a)$, or alternatively
stripe area
\begin{equation}
\Delta A_{\rm crit} = 4\pi r R (\arccos a - a\sqrt{1 - a^2}).
\label{eq:stripe_dA_crit}
\end{equation}
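For the torus used in the simulations below ($r = 0.8$, $R = 2$, $a = 0.4$), the critical area evaluates to $\Delta A_{\rm crit} \simeq 15.94$. A short sketch reproducing the area ratios quoted later in this subsection for stripes initialised at $\theta_0 = \pi/2$, for which $\Delta A = 2\pi r R \,\Delta\theta_0$:

```python
import math

r, R = 0.8, 2.0
a = r / R

# Critical area, Eq. (stripe_dA_crit)
dA_crit = 4 * math.pi * r * R * (math.acos(a) - a * math.sqrt(1 - a**2))
print(dA_crit)  # ~15.94

# Stripes initialised at theta_0 = pi/2 have area dA = 2*pi*r*R*dtheta_0
for dth0_over_pi, expected in [(0.95, 1.88), (0.65, 1.29), (0.6, 1.19), (0.3, 0.59)]:
    dA = 2 * math.pi * r * R * dth0_over_pi * math.pi
    assert round(dA / dA_crit, 2) == expected
```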
For stripes with areas larger than this critical value,
two equilibrium positions are possible, namely
\begin{equation}
\theta^{eq}_{c} = \pi \pm \arccos\left[\frac{1}{a}
\cos\frac{\Delta \theta_{eq}}{2}\right].
\label{eq:stripe_thceq}
\end{equation}
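These equilibria can be recovered by a direct numerical minimisation of $\ell_{\rm total}$ under the area constraint; a sketch for a supercritical stripe started at $\theta_0 = \pi/2$ with $\Delta\theta_0 = 0.8\pi$, for which the conserved quantity $\Delta A/(2\pi r R)$ equals $\Delta\theta_0$:

```python
import numpy as np

r, R = 0.8, 2.0
a = r / R
C = 0.8 * np.pi   # conserved dA/(2*pi*r*R) for a stripe started at theta_0 = pi/2

def width(theta_c):
    """Solve dtheta + 2a*sin(dtheta/2)*cos(theta_c) = C for dtheta (bisection;
    the left-hand side is monotonic in dtheta since a < 1)."""
    lo, hi = 0.0, 2 * np.pi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid + 2 * a * np.sin(mid / 2) * np.cos(theta_c) < C:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Scan the stripe centre over the upper half and minimise the interface length
thc = np.linspace(0.01, np.pi - 0.01, 4001)
ell = [4 * np.pi * R * (1 + a * np.cos(t) * np.cos(width(t) / 2)) for t in thc]
thc_num = thc[int(np.argmin(ell))]

# Analytic prediction: substituting a*cos(theta_c) = -cos(dtheta/2) into the
# area constant gives dtheta_eq - sin(dtheta_eq) = C; then, on the upper half,
# theta_c_eq = pi - arccos(cos(dtheta_eq/2)/a)
lo, hi = 0.0, 2 * np.pi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mid - np.sin(mid) < C else (lo, mid)
dth_eq = 0.5 * (lo + hi)
thc_eq = np.pi - np.arccos(np.cos(dth_eq / 2) / a)

print(thc_num, thc_eq)  # both ~0.63*pi
```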
We now reproduce the above phenomenon using our lattice Boltzmann approach.
Unless stated otherwise, in section \ref{sec:drift}, we use a torus with
$r = 0.8$ and $R = 2$ ($a = r / R = 0.4$). We set the parameters in our free energy model,
Eq.\eqref{eq:Landau}, to $\kappa = 5 \times 10^{-4}$ and $A = 0.5$, and set the
kinematic viscosity $\nu = 2.5 \times 10^{-3}$ and mobility parameter in the Cahn-Hilliard
equation $M = 2.5 \times 10^{-3}$. Since the system is homogeneous with respect to
$\varphi$, it is essentially one dimensional, and
a single node is used in the $\varphi$ direction (i.e., $N_\varphi = 1$).
The discretisation along the $\theta$ direction is performed using
$N_\theta = 320$ nodes. Throughout this paper we ensure that our
discretization is such that the spacing is always smaller than the interface width $\xi_0$,
as given in Eq.\eqref{eq:xi_gamma}.
The time step is set to $\delta t = 5\times 10^{-4}$.
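A sketch of this resolution check: with $N_\theta = 320$ nodes, the arc-length spacing along $\theta$ is indeed smaller than the interface width:

```python
import math

r, N_theta = 0.8, 320
kappa, A = 5e-4, 0.5

spacing = 2 * math.pi * r / N_theta   # arc length between adjacent theta-nodes
xi0 = math.sqrt(kappa / A)            # interface width, Eq. (xi_gamma)

print(spacing, xi0)  # ~0.0157 < ~0.0316
assert spacing < xi0
```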
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.45\columnwidth]{data/fork/fork_thc-eps-converted-to.pdf} &
\includegraphics[width=0.45\columnwidth]{data/fork/fork_theq-eps-converted-to.pdf}
\vspace{2pt}
\end{tabular}
\begin{tabular}{ccc}
\hspace{5pt} \includegraphics[width=0.3\columnwidth]{data/stripe/examples/stripe-rad0325.png}&
\hspace{5pt} \includegraphics[width=0.3\columnwidth]{data/stripe/examples/stripe-rad015.png}&
\hspace{5pt} \includegraphics[width=0.3\columnwidth]{data/stripe/examples/stripe-rad03.png}\vspace{2pt}\\
\hspace{15pt}(c)&\hspace{15pt}(d)&\hspace{15pt}(e)\vspace{2pt}\\
\hspace{-10pt} \includegraphics[width=0.31\columnwidth]{data/fork/ltotal_rad0325-eps-converted-to.pdf} &
\includegraphics[width=0.31\columnwidth]{data/fork/ltotal_rad015-eps-converted-to.pdf} &
\includegraphics[width=0.31\columnwidth]{data/fork/ltotal_rad03-eps-converted-to.pdf}
\end{tabular}
\end{center}
\caption{(a) Time evolution of the stripe centre $\theta_c$ for stripes
initialised at $\theta_0 = \pi / 2$ on the torus with
$r = 0.8$ and $R = 2$ ($a = 0.4$), for several values of the
initial angular width $\Delta \theta_0$.
(b) Equilibrium position $\theta_c^{eq}$ as a function of the
initial angular width $\Delta \theta_0$, in comparison
with the analytical prediction, Eq.~\eqref{eq:stripe_thceq}.
(c-e) Examples of stripes equilibrated at
(c) $\theta_{c}^{eq} > \pi$ ($\Delta \theta_0 =0.65\pi$),
(d) $\theta_{c}^{eq} = \pi$ ($\Delta \theta_0 =0.3\pi$), and
(e) $\theta_{c}^{eq} < \pi$ ($\Delta \theta_0 =0.6\pi$).
(f-h) Interface length $\ell_{\rm total}$ as a function of the
stripe centre position (solid line) for the stripe parameters considered
in (c-e). The symbols highlight the interface lengths at maximum
oscillation amplitude at initialisation ($0$) and after each
half period ($1$, $2$, etc).
\label{fig:tor_stripe_fork}}
\end{figure}
We initialise the fluid stripes using a hyperbolic tangent profile
\begin{equation}
\phi_{\rm stripe}(\theta) = \phi_0 +
\tanh \left[\frac{r}{\xi_0 \sqrt{2}}\left(
|\widetilde{\theta - \theta_c}| - \frac{\Delta \theta}{2}\right)\right],
\label{eq:stripe_tanh}
\end{equation}
where $\phi_0$ is an offset due to the Laplace pressure (see next subsection)
\begin{equation}
\phi_0 = \frac{\xi_0}{3R \sqrt{2}}
\frac{\cos\theta_c \sin(\Delta \theta / 2)}
{1 + a \cos\theta_c \cos (\Delta \theta/2)}. \label{eq:stripe_phi0}
\end{equation}
We consider stripes having the same initial position centred
at $\theta_0 = \pi / 2$, but initialised with different initial widths $\Delta \theta_0$.
The area of these stripes is given by
\begin{equation}
\Delta A = 2\pi r R \Delta \theta_0.
\end{equation}
The equilibrium positions $\theta_c^{eq}$
for four different stripes are shown in Fig.~\ref{fig:tor_stripe_fork}(a).
The first case corresponds to a very large stripe
($\Delta \theta_0 = 0.95\pi$, $\Delta A \simeq 1.88 \Delta A_{\rm crit}$),
for which the possible equilibria $\theta^{eq}_{c}$ are close to
$\pi/2$ and $3\pi /2$. Due to the initial condition, the stripe is
attracted by the equilibrium point on the upper side of
the torus, where it will eventually stabilise.
As the stripe size decreases,
its kinetic energy as it slides towards the equilibrium point
will be sufficiently large for it to go over the ``barrier''
at $\theta_c = \pi$ to the lower side of the torus. Because of energy
loss due to viscous dissipation, its kinetic energy may be insufficient
to overcome this barrier again, so the stripe remains trapped
on the lower side. This is the case for the second stripe
having $\Delta \theta_0 = 0.65\pi$ ($\Delta A \simeq 1.29 \Delta A_{\rm crit}$).
Further decreasing the stripe size
causes the peak at $\theta_c = \pi$
to also decrease, allowing the stripe to overcome it a second time
as it migrates back towards the upper side.
The third stripe, initialised with
$\Delta \theta_0 = 0.6 \pi$ ($\Delta A \simeq 1.19 \Delta A_{\rm crit}$),
stabilises on the upper side of the torus.
Finally, the fourth stripe is initialised with
$\Delta \theta_0 = 0.3 \pi$, such that its area $\Delta A \simeq 0.59 \Delta A_{\rm crit}$
is below the critical value. Thus, the fourth stripe will perform oscillations
around the equilibrium at $\theta_c = \pi$, where it will eventually stabilise.
Judging by the number of times that the stripe centre $\theta_c$ crosses the barrier at
$\theta_c = \pi$, two types of stripes having $\Delta A > \Delta A_{\rm crit}$
can be distinguished: (i) the
ones that cross the $\theta_c = \pi$ line an even number of times
stabilise on the upper half of the torus, while (ii) the ones that cross it an odd
number of times stabilise on the lower half of the torus.
This is presented in Fig. \ref{fig:tor_stripe_fork}(b), where
the equilibrium position $\theta_c^{eq}$ for stripes initialised at
$\theta_0 = \pi / 2$ is represented as a function of $\Delta \theta_0$
in comparison with the analytical predictions in Eq.~\eqref{eq:stripe_thceq}.
Panels (c-e) in Fig.~\ref{fig:tor_stripe_fork} illustrate the three scenarios where
the stripes are equilibrated at $\theta_{c}^{eq} > \pi$, $\theta_{c}^{eq} = \pi$, and
$\theta_{c}^{eq} < \pi$ respectively.
The total interface length $\ell_{\rm total}$
(which is proportional to the free energy $\Psi$) for the stripes shown in (c-e) is represented
in panels (f-h) of Fig.~\ref{fig:tor_stripe_fork}.
The interface lengths corresponding to the initial state, as well as to
the turning points reached after each half-period, are also shown using symbols, numbered sequentially
in the legend ($0$ corresponds to the initial state). It can be seen that
$\ell_{\rm total}$ measured at these turning points decreases monotonically.
When $\ell_{\rm total}$ decreases below its value at $\theta_c = \pi$,
the stripe centre can no longer cross the $\theta_c = \pi$ line and
becomes trapped in one of the minima.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.5\columnwidth]{data/diagram/diagram-eps-converted-to.pdf}
\end{tabular}
\end{center}
\caption{Diagram indicating the location of the equilibrium position $\theta_c^{eq}$
as a function of the stripe width $\Delta \theta_0$ and the radii ratio $a = r/R$,
for stripes initialised at $\theta_0 = \pi / 2$.
\label{fig:tor_stripe_map}}
\end{figure}
Fig.~\ref{fig:tor_stripe_map} further summarises the location of the equilibrium
stripe position as a function of the stripe width $\Delta \theta_0$ and
the radii ratio $a = r / R$. Our simulations are performed by keeping $R = 2$ constant,
such that the various values of $a$ are obtained by changing $r$.
As before, the stripe is initialised at $\theta_0 = \pi / 2$.
Moving from the top right corner of the diagram towards the bottom left corner,
the successive regions distinguish stripes that stabilise on
the top half ($\theta_c^{eq} < \pi$) from those that stabilise on the bottom half
($\theta_c^{eq} > \pi$), according to the number of times
that $\theta_c$ crosses $\pi$. In the bottom left corner, the stripes
stabilise at $\theta_c^{eq} = \pi$. The black region between the purple band and
the lower left region corresponds to stripes that cross $\pi$ more
than $3$ times but stabilise away from $\pi$ ($\theta_c^{eq} \neq \pi$).
Due to the diffuse nature of the interface, the stripes evaporate when
$r \Delta \theta \lesssim 5 \xi_0$
($\xi_0 = \sqrt{\kappa / A} \simeq 0.031$).
These regions correspond to the top left and bottom right
corners of the diagram and are shown in red.
\subsection{Laplace pressure test}
\label{sec:drift:laplace}
Since the stripe interfaces have a non-vanishing curvature, a pressure
difference, often termed the Laplace pressure, is expected across the interface.
On a torus, this pressure difference was recently derived analytically, with the result
\citep{Busuioc19bench}
\begin{equation}
\Delta p = -\frac{\gamma}{R} \frac{\cos\theta_c \sin(\Delta \theta / 2)}
{1 + a \cos\theta_c \cos(\Delta \theta / 2)}.\label{eq:stripe_laplace_gen}
\end{equation}
This expression can be simplified for the two types of minima highlighted in the
previous subsection
\begin{equation}
\Delta p =
\begin{cases}
{\displaystyle \frac{\gamma}{R} \frac{\sin(\Delta \theta_{eq} / 2)}
{1 - a \cos(\Delta \theta_{eq} / 2)}}, & \Delta A < \Delta A_{\rm crit},\\
{\displaystyle \frac{\gamma}{r} \cot \frac{\Delta \theta_{eq}}{2}},
& \Delta A > \Delta A_{\rm crit},
\end{cases}\label{eq:stripe_laplace_eq}
\end{equation}
We remind the readers that, on the first branch, $\theta_c^{eq} = \pi$.
On the second branch, the equilibrium position is determined via
$a \cos \theta_c^{eq} + \cos(\Delta \theta_{eq} / 2) = 0$.
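As a consistency check (a sketch), the two branches of Eq.~\eqref{eq:stripe_laplace_eq} match continuously at the critical width $\Delta \theta_{eq} = 2\arccos a$, where both reduce to $\gamma/(R\sqrt{1-a^2})$:

```python
import math

r, R = 0.8, 2.0
a = r / R
kappa, A = 5e-4, 0.5
gamma = math.sqrt(8 * kappa * A / 9)

dth = 2 * math.acos(a)   # critical stripe width
dp_sub = (gamma / R) * math.sin(dth / 2) / (1 - a * math.cos(dth / 2))
dp_super = (gamma / r) / math.tan(dth / 2)

common = gamma / (R * math.sqrt(1 - a**2))
print(dp_sub, dp_super, common)  # all three agree
```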
\begin{figure}
\begin{center}
\includegraphics[width=0.45\columnwidth]{data/stripe-laplace/stripe-laplace-eps-converted-to.pdf}
\end{center}
\caption{Comparison of the Laplace pressure obtained numerically (dashed lines and circles)
against the analytic formula, Eq.~\eqref{eq:stripe_laplace_eq}, for $\kappa = 5\times 10^{-4}$
and $2.5 \times 10^{-4}$. The analytic prediction almost everywhere
overlaps the numerical results.
\label{fig:stripe_laplace}}
\end{figure}
In order to validate our numerical scheme against the Laplace pressure test on
the torus, we perform numerical simulations for two values of $\kappa$ in our free energy model,
$\kappa = 2.5\times 10^{-4}$ and $5 \times 10^{-4}$. These effectively change
the surface tension and interface width in our simulations, see Eq.~\eqref{eq:xi_gamma}.
All of the other simulation parameters are kept the same as in the previous subsection:
$R = 2$, $r = 0.8$, $A = 0.5$, $\nu = 2.5 \times 10^{-3}$ and $M = 2.5 \times 10^{-3}$.
We consider stripes of various areas $\Delta A$ in Fig.~\ref{fig:stripe_laplace}.
After the stationary state is reached, we measure the total pressure
$p = p_{\rm i} + p_{\rm binary} = n k_B T + A(-\frac{1}{2} \phi^2 +
\frac{3}{4} \phi^4)$ in the interior and exterior of the stripe, and
compute the difference $\Delta p$ between these two values.
The simulation results are shown using dashed lines and symbols in Fig.~\ref{fig:stripe_laplace}.
We observe an excellent agreement with the analytic results, Eq.~\eqref{eq:stripe_laplace_eq},
which are shown using the solid lines.
\subsection{Approach to equilibrium}
\label{sec:drift:damp}
For stripes close to their equilibrium position, the
time evolution of the departure $\delta\theta = \theta_c - \theta_c^{eq}$
can be described as a damped harmonic oscillation:
\begin{equation}
\delta \theta \simeq \delta \theta_0 \cos(\omega_0 t + \varsigma) e^{-\alpha t},
\label{eq:stripe_hydro_sol}
\end{equation}
where the damping coefficient $\alpha = \alpha_\nu + \alpha_\mu$ receives
contributions from the viscous damping due to the fluid~\citep{Busuioc19bench}
\begin{equation}
\alpha_\nu = \frac{\nu}{R^2 - r^2},\label{eq:stripe_alphanu}
\end{equation}
as well as from the diffusion due to the mobility of the order parameter, $\alpha_\mu$ \citep{Busuioc19bench}.
In the applications considered in
this paper, $\alpha_\mu \ll \alpha_\nu$, such that
we will only consider the approximation $\alpha \simeq \alpha_\nu$.
For the case of subcritical stripes ($\Delta A < \Delta A_{\rm crit}$),
which equilibrate at $\theta_c^{eq}= \pi$, the oscillation frequency
is \citep{Busuioc19bench}
\begin{equation}
\omega_0^2 = \frac{\gamma \sqrt{1 - a^2}}{\pi r^2 R n m}
\frac{\cos(\Delta \theta_{eq} / 2) - a}{[1 - a \cos(\Delta \theta_{eq} / 2)]^3}.
\label{eq:stripe_hydro_omega0_pi}
\end{equation}
For the supercritical stripes, ($\Delta A > \Delta A_{\rm crit}$),
when the equilibrium position is at $a \cos\theta_c^{eq} + \cos(\Delta \theta_{eq}/2) = 0$,
$\omega_0^2$ is given by
\begin{equation}
\omega_0^2 = \frac{2\gamma}{\pi r R^2 n m (1 - a^2)^{3/2}}
\left[\frac{\sin \theta_c^{eq}}{\sin(\Delta \theta_{eq} / 2)}\right]^2.
\label{eq:stripe_hydro_omega0_npi}
\end{equation}
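For orientation (a sketch; we approximate $\Delta\theta_{eq}$ by the initial width, and the reference values $n = m = 1$ are assumptions of this estimate), the damping coefficient and oscillation frequency for the subcritical stripe considered below can be evaluated as:

```python
import math

r, R = 0.8, 2.0
a = r / R
n, m = 1.0, 1.0              # reference density and particle mass (assumed)
nu = 2.5e-3
kappa, A = 5e-4, 0.5
gamma = math.sqrt(8 * kappa * A / 9)

# Viscous damping coefficient, Eq. (stripe_alphanu)
alpha_nu = nu / (R**2 - r**2)

# Subcritical stripe equilibrating at theta_c = pi; width taken as ~0.28*pi
dth = 0.28 * math.pi
w0 = math.sqrt(gamma * math.sqrt(1 - a**2) / (math.pi * r**2 * R * n * m)
               * (math.cos(dth / 2) - a) / (1 - a * math.cos(dth / 2))**3)

print(alpha_nu, w0)  # ~7.4e-4 and ~8.1e-2: the motion is strongly underdamped
```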
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.48\columnwidth]{data/damp/dth-pi-eps-converted-to.pdf} &
\includegraphics[width=0.48\columnwidth]{data/damp/dth-nonpi-eps-converted-to.pdf}
\end{tabular}
\end{center}
\caption{Time evolution of the stripe center $\theta_c$ for stripes
initialised at (a) $\theta_0 = 0.95\pi$ with $\Delta \theta_0 = 0.280\pi$
(equilibrating at $\theta_c^{eq} = \pi$);
and (b) $\theta_0 = 0.7 \pi$ with $\Delta \theta_0 = 0.786\pi$
(equilibrating at $\theta_c^{eq} = 3\pi/4$).
The numerical results are shown using dotted lines and symbols,
while the analytic solutions are shown
using solid lines.
\label{fig:stripe_hydro_dth}}
\end{figure}
We will now demonstrate that our lattice Boltzmann implementation captures the dynamical
approach to equilibrium as described by the analytical results.
First, we consider a torus with $r = 0.8$ and $R = 2$ ($a = 0.4$), and set
$\kappa = 5 \times 10^{-4}$, $A = 0.5$ and $\tau = M = 2.5 \times 10^{-3}$.
The number of nodes is $N_\theta = 320$, and the order parameter $\phi$
is initialised according to Eq.~\eqref{eq:stripe_tanh}, where the stripe centre is
located at an angular distance
$\delta \theta_0 = \theta_c - \theta_c^{eq} = -\pi / 20$ away
from the expected equilibrium position.
Fig.~\ref{fig:stripe_hydro_dth} shows a comparison between the
numerical and analytical results for the time evolution of
$(\theta_c^{eq} - \theta_c) / \pi$ for the cases
(a) $\theta_c^{eq} = \pi$ with initial stripe width $\Delta \theta_0 = 0.28 \pi$,
and (b) $\theta_c^{eq} = 3\pi/4$ with $\Delta \theta_0 = 0.786 \pi$.
For the analytical solution, the angular velocity $\omega_0$
is computed using Eqs.~\eqref{eq:stripe_hydro_omega0_pi}
and \eqref{eq:stripe_hydro_omega0_npi} for cases (a) and (b)
respectively, and the damping factor $\alpha \simeq \alpha_\nu$
is computed using Eq.~\eqref{eq:stripe_alphanu}. We have also
set the offset to $\varsigma = 0$. It can be seen that the analytic expression provides
an excellent match to the simulation results for the
stripe that goes to $\theta_c^{eq} = \pi$. For the stripe
equilibrating to $3\pi/4$, we observe a small discrepancy,
especially during the first oscillation period.
However, the overall agreement is still very good.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.48\columnwidth]{data/damp/var-tau-nu-pi-eps-converted-to.pdf} &
\includegraphics[width=0.48\columnwidth]{data/damp/var-tau-nu-nonpi-eps-converted-to.pdf}
\end{tabular}
\end{center}
\caption{The damping coefficient $\alpha$
obtained by fitting Eq.~\eqref{eq:stripe_hydro_sol} to the simulation
results (points), for stripes initialised at (a) $\theta_0=0.95\pi$
with $\theta_c^{eq}=\pi$; and (b) $\theta_0 = 0.7\pi$ with
$\theta_c^{eq}=0.75\pi$.
The dashed lines represent the viscous damping coefficient
$\alpha_\nu$, given in Eq.~\eqref{eq:stripe_alphanu}.\label{fig:stripe_hydro_alpha}
}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.48\columnwidth]{data/damp/var-gamma-w-pi-eps-converted-to.pdf} &
\includegraphics[width=0.48\columnwidth]{data/damp/var-gamma-w-nonpi-eps-converted-to.pdf}
\end{tabular}
\end{center}
\caption{The angular frequency $\omega_0$, obtained by fitting Eq.~\eqref{eq:stripe_hydro_sol} to the simulation
results (points).
The black dash-dotted curves correspond to the analytic expressions, as given by
Eq.~\eqref{eq:stripe_hydro_omega0_pi} for panel (a), when $\theta_c^{eq} = \pi$; and
Eq.~\eqref{eq:stripe_hydro_omega0_npi} for panel (b), when $\theta_c^{eq} = 3\pi/4$.
\label{fig:stripe_hydro_w}}
\end{figure}
Next we consider three tori having radii ratio $a = r/R = 0.4$, with
$r = 0.8$, $1$ and $1.2$, and perform two sets of simulations.
In the first set of simulations, the
initial configuration corresponds to a stripe centred on
$\theta_0 = 0.95 \pi$, with initial width $\Delta \theta_0 = 0.28\pi$.
These stripes relax towards $\theta_c^{eq} = \pi$.
In the second set of simulations, the stripes are initially centred at $\theta_0 = 0.7\pi$,
and they equilibrate at $\theta_c^{eq} = 3\pi/4$,
with initial width $\Delta \theta_0 = 0.786\pi$.
The simulations are performed using
$N_\theta = 320$, $400$ and $480$ nodes for $r = 0.8$, $1$ and $1.2$,
respectively. The best-fit values of $\alpha$ and $\omega_0$ for the three torus geometries
are shown in Fig.~\ref{fig:stripe_hydro_alpha} and
Fig.~\ref{fig:stripe_hydro_w}, respectively, as functions of the kinematic viscosity $\nu$
(varied between $2.5 \times 10^{-3}$ and $7.5 \times 10^{-3}$) at fixed
$\kappa = 5 \times 10^{-4}$,
and of the surface tension parameter $\kappa$ (varied between $2.5 \times 10^{-4}$ and $6.25 \times 10^{-4}$) at fixed $\nu = 2.5 \times 10^{-3}$.
For each simulation, Eq.~\eqref{eq:stripe_hydro_sol} is fitted to the numerical data
for the time evolution of the stripe centre as it relaxes towards equilibrium,
using $\alpha$ and $\omega_0$ as free parameters, with $\varsigma$ fixed to
$0$. For simplicity, we have used $M = \nu$ and $A = 0.5$ in Fig.~\ref{fig:stripe_hydro_alpha} and
Fig.~\ref{fig:stripe_hydro_w}.
Panels (a) in Fig.~\ref{fig:stripe_hydro_alpha} and
Fig.~\ref{fig:stripe_hydro_w} correspond to stripes
equilibrating at $\theta_c^{eq} = \pi$,
while panels (b) in Fig.~\ref{fig:stripe_hydro_alpha} and
Fig.~\ref{fig:stripe_hydro_w} are for $\theta_c^{eq} = 3\pi / 4$.
It can be seen that the analytic expressions are in good agreement
with the numerical data in all instances simulated.
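The fitting procedure used above can be sketched with a standard nonlinear least-squares routine. The ansatz is the damped-oscillator form of the stripe-centre solution with $\varsigma = 0$; the parameter values, amplitude $b_0$ and the synthetic data below are illustrative assumptions, not the actual simulation output:

```python
import numpy as np
from scipy.optimize import curve_fit

def stripe_centre(t, alpha, omega0, theta_eq=np.pi, b0=-0.05 * np.pi):
    """Damped-oscillator ansatz for the stripe centre theta_c(t),
    with the phase varsigma fixed to zero, as in the fits above."""
    return theta_eq + b0 * np.exp(-alpha * t) * np.cos(omega0 * t)

# Synthetic data standing in for the measured time series of theta_c.
t = np.linspace(0.0, 100.0, 1000)
data = stripe_centre(t, alpha=0.02, omega0=0.15)

# Fit alpha and omega0; a reasonable initial guess is required since
# the least-squares problem is nonlinear in omega0.
popt, _ = curve_fit(stripe_centre, t, data, p0=(0.03, 0.14))
alpha_fit, omega0_fit = popt
```

The fitted $\alpha$ and $\omega_0$ can then be compared against the analytic expressions, as done in the figures.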
Finally we investigate the applicability of
Eqs.~\eqref{eq:stripe_hydro_omega0_pi} and \eqref{eq:stripe_hydro_omega0_npi}
with respect to various values of the stripe area, $\Delta A$. The simulations
are now performed on the torus with $r =0.8$ and $R = 2$, using
$\kappa = 5 \times 10^{-4}$, $A = 0.5$, $\tau = M = 2.5 \times 10^{-3}$.
Fig.~\ref{fig:stripe_hydro_w_all} shows the values of $\omega_0$
obtained by fitting Eq.~\eqref{eq:stripe_hydro_sol} to the numerical
data (points) and the analytic expressions (solid lines).
As before, for the fitting, we set $\varsigma = 0$, and use $\alpha$ and $\omega_0$ as free
parameters. An excellent agreement can be seen, even for
the nearly critical stripe, for which $\omega_0$ is greatly decreased.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.48\columnwidth]{data/w0/w0-eps-converted-to.pdf}
\end{tabular}
\end{center}
\caption{
Comparison between the values of $\omega_0$ obtained by fitting
Eq.~\eqref{eq:stripe_hydro_sol} to the numerical results, shown
with points, and
the analytic expressions, Eq.~\eqref{eq:stripe_hydro_omega0_pi}
for $\Delta A< \Delta A_{\rm crit}$ and Eq.~\eqref{eq:stripe_hydro_omega0_npi} for $\Delta A > \Delta A_{\rm crit}$, shown with solid black lines.
\label{fig:stripe_hydro_w_all}}
\end{figure}
\subsection{Droplets on Tori} \label{sec:drift:drop}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\hspace{-20pt} \includegraphics[width=0.5\columnwidth]{data/drops/drops-thc-eps-converted-to.pdf} \\
\vspace{2 mm}
\begin{tabular}{ccc}
\includegraphics[width=0.31\columnwidth]{data/drops/drop09-t0.png}\vspace{2 mm}&
\includegraphics[width=0.31\columnwidth]{data/drops/drop09-t650.png} \vspace{2 mm}&
\includegraphics[width=0.31\columnwidth]{data/drops/drop09-t1775.png} \vspace{2 mm} \\
(b) $t = 0$ & (c) $t = 650$ & (d) $t = 1775$
\end{tabular}
\end{tabular}
\caption{
(a) Time evolution of the position of the centre $\theta_c / \pi$
for drops initialised according to Eq.~\eqref{eq:drop_init} with
$(\theta_0, R_0) \in \{(5\pi/10,0.938),
(7\pi/10,0.924), (9\pi/10,0.910)\}$.
(b--d) Snapshots of the evolution of the drop corresponding to
$\theta_0= 9 \pi / 10$ for $t = 0$, $650$ and $1775$.
\label{fig:drops}}
\end{center}
\end{figure}
We will now show that, when placed on a torus, a fluid droplet will also exhibit
a drift motion. However, in contrast to stripes, the drops will move towards the outer
rather than the inner side of the torus. To study this phenomenon quantitatively,
we initialise drops on a torus using the following equation
\begin{equation}
\phi_{\rm drop}(\theta_0, R_0; \theta, \varphi) =
\tanh \frac{r - R_0}{\xi_0 \sqrt{2}},
\label{eq:drop_init}
\end{equation}
where $r = \sqrt{(x-x_c)^2 + (y-y_c)^2 + (z-z_c)^2}$ is the Euclidean distance between
the point with coordinates $(x,y,z)$ and the centre of the drop $(x_c, y_c, z_c)$,
corresponding to $(\theta, \varphi)$ and $(\theta_0, 0)$ in polar coordinates respectively.
The relation between the Cartesian and polar coordinates is given in Eq.~\eqref{eq:torus}
of Appendix \ref{app:curved}.
The parameter $\theta_0$ specifies the centre of the drop, $R_0$ is
a measure of its radius, and $\xi_0$ is the interface width derived for the
Cartesian case. In principle the interfacial profile will be different on a torus,
but currently we are not aware of a closed analytical formula.
We also do not
introduce in Eq.~\eqref{eq:drop_init} the offset $\phi_0$ responsible for the
Laplace pressure difference, since the analysis of this quantity is less
straightforward than for the azimuthally-symmetric stripe domains discussed
in the previous subsections.
In order for the drops to have approximately the same areas, for a given value of $\theta_0$,
$R_0$ is obtained as a solution of
\begin{equation}
\int_0^{2\pi} d\varphi \int_0^{2\pi} d\theta (R + r\cos\theta)
[\phi_{\rm drop}(0, 30\xi_0; \theta, \varphi) -
\phi_{\rm drop}(\theta_0, R_0; \theta, \varphi)] = 0,
\end{equation}
where the first term in the brackets corresponds to the configuration in which the droplet
is centred on the outer equator with $R_0 = 30\xi_0$. The drift phenomenon we report
here is robust with respect to the drop size, but we choose a relatively large drop size because
small drops are known to evaporate in diffuse interface models.
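The area-matching condition above can be solved numerically with a bracketing root finder. The sketch below assumes the standard torus embedding and an interface width $\xi_0 = 0.03$ (the text does not fix $\xi_0$, so the resulting $R_0$ is only indicative); the function names are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq

r, R = 0.8, 2.0          # torus radii used in this sub-section
xi0 = 0.03               # assumed interface width (not fixed by the text)

def phi_drop(theta0, R0, theta, phi):
    """Tanh profile in the Euclidean (chord) distance from the drop
    centre at (theta0, 0), using the standard torus embedding."""
    def xyz(th, ph):
        return ((R + r * np.cos(th)) * np.cos(ph),
                (R + r * np.cos(th)) * np.sin(ph),
                r * np.sin(th))
    x, y, z = xyz(theta, phi)
    xc, yc, zc = xyz(theta0, 0.0)
    d = np.sqrt((x - xc) ** 2 + (y - yc) ** 2 + (z - zc) ** 2)
    return np.tanh((d - R0) / (xi0 * np.sqrt(2.0)))

def area_mismatch(theta0, R0, n=256):
    """Integral of the difference between the reference drop (centred
    at theta0 = 0 with R0 = 30*xi0) and the trial drop, weighted by
    the torus metric factor (R + r cos theta)."""
    th = (np.arange(n) + 0.5) * 2.0 * np.pi / n
    ph = (np.arange(n) + 0.5) * 2.0 * np.pi / n
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    w = (R + r * np.cos(TH)) * (2.0 * np.pi / n) ** 2
    diff = phi_drop(0.0, 30.0 * xi0, TH, PH) - phi_drop(theta0, R0, TH, PH)
    return np.sum(w * diff)

# Radius a drop centred at theta0 = 9*pi/10 needs for its area to match
# the reference; the mismatch grows monotonically with R0, so a simple
# bracket suffices.
R0_match = brentq(lambda R0: area_mismatch(0.9 * np.pi, R0), 0.5, 1.5)
```

Since the mismatch integral is monotonic in $R_0$, `brentq` only needs a bracket with opposite signs at its endpoints.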
The simulation parameters are the same as in Sec.~\ref{sec:drift:damp},
namely $r =0.8$, $R = 2$, $\kappa = 5 \times 10^{-4}$, $A = 0.5$
and $\tau = M = 2.5 \times 10^{-3}$.
As shown in Fig.~\ref{fig:drops}(a), similar to the stripe configuration in the
previous sub-section, we observe a damped oscillatory motion.
Here the three drops are initialised at different positions on the torus.
Moreover, as is commonly the
case for an underdamped harmonic motion, the drops initially overshoot
the stable equilibrium position, but they eventually relax to the minimum energy configuration.
We find that all drops eventually drift to $\theta = 0$ (the outer side of the torus). Typical drop configurations
during the oscillatory motion are shown in Fig.~\ref{fig:drops}(b-d). Compared to the oscillatory
dynamics of the stripe configurations, we also observe that the oscillations die out more quickly for the
drops.
\section{Phase Separation}\label{sec:res:growth}
In this section we investigate binary phase separation on the torus
and compare the results against those on flat surfaces. We consider
the hydrodynamic and diffusive regimes for both even (section
\ref{sec:res:growth:even}) and uneven (section
\ref{sec:res:growth:uneven}) mixtures.
The fluid order parameter at lattice point $(s,q)$ is initialised as
\begin{equation}
\phi_{s,q} = \overline{\phi} + (\delta \phi)_{s,q},\label{eq:phirand}
\end{equation}
where $\overline{\phi}$ is a constant and $(\delta \phi)_{s,q}$ is
randomly distributed in the interval $(-0.1, 0.1)$.
We characterise the coarsening dynamics using the instantaneous
domain length scale $L_d(t)$, defined as
\begin{equation}
L_d(t) = \frac{A_{\rm total}}{L_I(t)},\label{eq:cart_scaling_L0}
\end{equation}
where $A_{\rm total}$ is the total area of the simulation domain.
The total interface length at time $t$, $L_I(t)$, is computed by
visiting each cell $(s, q)$ exactly once, starting from the bottom left
corner, where $s = q = 1$, and progressing towards the top right corner,
where $s = N_1$ and $q = N_2$. Here, $N_1 = N_x$ and $N_2 = N_y$
for the Cartesian domains, while $N_1 = N_\varphi$ and $N_2 = N_\theta$
for the torus domains.
For each cell where $\phi_{s,q} \times \phi_{s+1,q} < 0$,
the length of the vertical interface between cells $(s,q)$
and $(s + 1, q)$ is added to $L_I$.
In the case of the Cartesian geometry, this length is $\delta y$, while for
the torus, the length is given by $r \delta \theta$.
Similarly, if $\phi_{s,q} \times \phi_{s,q+1} < 0$, the length of the horizontal
interface ($\delta x$ for the Cartesian case and
$(R + r \cos\theta_{q + 1/2}) \delta \varphi$ for the torus case,
where $\theta_{q+1/2} = \theta_q + \delta \theta / 2$ is the
coordinate of the cell interface) is added to $L_I$. The periodic
boundary conditions allow the cells $(N_1 + 1, q)$ and
$(s, N_2 + 1)$ to be identified with the cells $(1, q)$ and $(s, 1)$,
respectively.
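The interface-length bookkeeping described above amounts to detecting sign changes between neighbouring cells; a minimal sketch, assuming the order parameter is stored as an array `phi[s, q]` with periodic wrap-around in both directions:

```python
import numpy as np

def domain_length(phi, r, R, dtheta, dphi):
    """Instantaneous domain size L_d = A_total / L_I on a torus grid,
    with phi[s, q] indexed by the azimuthal (s) and poloidal (q)
    directions, both periodic."""
    n_theta = phi.shape[1]
    theta = np.arange(n_theta) * dtheta                  # theta_q
    # Sign change between (s, q) and (s+1, q): a vertical interface
    # segment of length r * dtheta (wrap-around handled by np.roll).
    vertical = phi * np.roll(phi, -1, axis=0) < 0
    L_I = np.sum(vertical) * r * dtheta
    # Sign change between (s, q) and (s, q+1): a horizontal interface
    # segment of length (R + r cos(theta_{q+1/2})) * dphi.
    horizontal = phi * np.roll(phi, -1, axis=1) < 0
    L_I += np.sum(horizontal * (R + r * np.cos(theta + 0.5 * dtheta))) * dphi
    return 4.0 * np.pi**2 * r * R / L_I

# Sanity check: a stripe covering half the poloidal circle has two
# interfaces wrapping around the torus; for this grid they sit at
# theta = 7*pi/8 and 15*pi/8, whose cos(theta) contributions cancel,
# so L_I = 4*pi*R and hence L_d = pi * r.
n_phi, n_theta = 4, 8
dphi, dtheta = 2.0 * np.pi / n_phi, 2.0 * np.pi / n_theta
stripe = np.tile(np.where(np.arange(n_theta) < n_theta // 2, 1.0, -1.0),
                 (n_phi, 1))
L_d = domain_length(stripe, r=1.0, R=2.5, dtheta=dtheta, dphi=dphi)
```

Using `np.roll` implements the periodic identification of cells $(N_1+1,q)$ and $(s,N_2+1)$ described above without explicit index arithmetic.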
Unless specified otherwise, we use the following
parameters in this phase separation section: $M = \tau = 2.5 \times 10^{-3}$,
$\delta t = 5 \times 10^{-4}$, $A = 0.5$ and $\kappa = 5 \times 10^{-4}$.
In the initial state, the distributions for the
LB solver are initialised using Eq.~\eqref{eq:feq} with
a constant density $n_0 = 20$ and vanishing velocity.
\subsection{Even mixtures}\label{sec:res:growth:even}
\subsubsection{Cartesian Geometry}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.4\columnwidth]{data/cscal/cscal-ev-inertial-eps-converted-to.pdf} &
\includegraphics[width=0.4\columnwidth]{data/cscal/cscal-ev-diff-eps-converted-to.pdf} \\
\end{tabular}
\begin{tabular}{cccc}
\includegraphics[width=0.225\columnwidth]{data/cscal/even10-t40.png} &
\includegraphics[width=0.225\columnwidth]{data/cscal/even10-t100.png} &
\includegraphics[width=0.225\columnwidth]{data/cscal/even10-t250.png} &
\includegraphics[width=0.225\columnwidth]{data/cscal/even10-t3000.png} \vspace{1 mm}\\
(c) $t = 40$ & (d) $t = 100$ & (e) $t = 250$ & (f) $t = 3000$ \vspace{1 mm}\\
\end{tabular}
\begin{tabular}{cccc}
\includegraphics[width=0.225\columnwidth]{data/cscal/nohce10-t300.png} &
\includegraphics[width=0.225\columnwidth]{data/cscal/nohce10-t2700.png} &
\includegraphics[width=0.225\columnwidth]{data/cscal/nohce10-t15000.png} &
\includegraphics[width=0.225\columnwidth]{data/cscal/nohce10-t168000.png} \vspace{1 mm}\\
(g) $t = 300$ & (h) $t = 2700$ & (i) $t = 15000$ & (j) $t = 168000$
\end{tabular}
\caption{
Growth of the fluid domain size $L_d(t)$ for an even mixture in two dimensions in (a) the inertial-hydrodynamics
and (b) the diffusive regimes. For the diffusive regime, we remove the convective term in the Cahn-Hilliard
equation.
(c-f) Snapshots of the typical fluid configurations at $t = 40$, $100$, $250$ and $3000$
corresponding to the case indicated in panel (a).
(g-j) Snapshots of the fluid configurations corresponding to the case indicated in panel (b),
at times $t = 300$, $2700$, $15000$, and $168000$. These are selected such that $L_d(t)$
matches the values corresponding to panels (c-f).
\label{fig:cart_scal}
}
\end{center}
\end{figure}
We begin by considering the coarsening dynamics of a phase separating binary fluid with
even mixtures on a flat two-dimensional surface. We use a simulation domain of
$N_x \times N_y = 512 \times 512$ with a grid spacing of $\delta x = \delta y = 0.02$.
The linear size of the simulation domain is $L = 512 \times 0.02 = 10.24$ and its
total area is $A_{\rm total} = L^2$.
As shown in Fig.~\ref{fig:cart_scal}(a), we observe that the fluid domain grows with an
exponent of $2/3$. This exponent is often associated with the so-called inertial-hydrodynamics
regime for binary fluid phase separation in three dimensions \citep{Bray02,Kendon01}.
However, in two dimensions, it has been argued that self-similar growth in the
inertial-hydrodynamics regime may be absent \citep{Wagner97}. The apparent exponent of $2/3$
is in fact due to a combination of the viscous exponent of $1$ for the growth of the connected domains and
the exponent of $1/3$ for the diffusive dissolution of circular droplets.
Classical morphologies typical of a spinodal decomposition phenomenon are shown
in Fig.~\ref{fig:cart_scal}(c-f). The deviation from this apparent scaling law is observed
at early times when domains of fluid components A and B are formed from the initial perturbation,
and at late times when the domains become comparable in size to the simulation box.
For the latter, there are very few domains left (see e.g. Fig.~\ref{fig:cart_scal}(f)) and coarsening
slows down because of the lack of coalescence events between the fluid domains.
To access the diffusive regime, in this work we remove the advection term in
the Cahn-Hilliard equation and decouple it from the Navier-Stokes equation.
In this case, coarsening can only occur via diffusive dynamics, and indeed we do observe
a growth exponent of $1/3$, as shown in Fig.~\ref{fig:cart_scal}(b), as expected for diffusive
dynamics \citep{Bray02,Kendon01}. Representative configurations from the coarsening
evolution are shown in Fig.~\ref{fig:cart_scal}(g-j). These snapshots look somewhat similar to
those shown in Fig.~\ref{fig:cart_scal}(c-f) for the apparent $2/3$ scaling regime. The key difference
between the morphologies is that more small droplets accumulate during coarsening when
hydrodynamics is on. It is also worth noting that the coarsening dynamics are much slower
in the diffusive regime. At late times we see a deviation from the diffusive scaling exponent, where
$L_d(t)$ appears to grow faster than the $1/3$ power law. In this limit, as illustrated in
Fig.~\ref{fig:cart_scal}(j), the increase in $L_d(t)$ is primarily driven by finite size effects.
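The growth exponents quoted here can be extracted from $L_d(t)$ by a linear fit in log-log coordinates; a minimal sketch with synthetic data (the prefactor is an illustrative assumption):

```python
import numpy as np

def growth_exponent(t, L_d):
    """Apparent coarsening exponent: least-squares slope of
    log L_d versus log t over the supplied time window."""
    slope, _intercept = np.polyfit(np.log(t), np.log(L_d), 1)
    return slope

# Synthetic diffusive growth L_d ~ t^{1/3}.
t = np.linspace(100.0, 1000.0, 50)
exponent = growth_exponent(t, 0.1 * t ** (1.0 / 3.0))
```

In practice the fit window must exclude both the early-time transient and the late-time finite-size regime discussed above.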
\subsubsection{Torus Geometry}
We now consider the coarsening dynamics of a phase separating binary fluid on the
surface of a torus. Initially we simulate a torus domain with $R = 2.5$ and $r = 1$
($a = r / R = 0.4$). These parameters are chosen such that the total area,
$A_{\rm total} = 4\pi^2 r R$, is close to the one employed in the Cartesian case.
The $\varphi$ direction is discretised using $N_\varphi = 800$ nodes,
while the $\theta$ direction is discretised using $N_\theta =400$ nodes.
The fluid order parameter at lattice point $(s,q)$ is initialised
according to Eq.~\eqref{eq:phirand} with $\overline{\phi} = 0$.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.4\columnwidth]{data/tscal/tscal-ev-inertial-eps-converted-to.pdf} &
\includegraphics[width=0.4\columnwidth]{data/tscal/tscal-ev-diff-eps-converted-to.pdf} \\
\end{tabular}
\begin{tabular}{ccc}
\includegraphics[width=.3\columnwidth]{data/tscal/hteR2p5r1-t40.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/hteR2p5r1-t100.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/hteR2p5r1-t250.png} \vspace{1 mm}\\
(c) $t = 40$ & (d) $t = 100$ & (e) $t = 250$ \vspace{1 mm}\\
\includegraphics[width=.3\columnwidth]{data/tscal/nohteR2p5r1-t350.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/nohteR2p5r1-t4250.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/nohteR2p5r1-t25500.png} \vspace{1 mm}\\
(f) $t = 350$ & (g) $t = 4250$ & (h) $t = 25500$
\end{tabular}
\caption{
Growth of the fluid domain size $L_d(t)$ for an even mixture on the surface of a torus
with $R = 2.5$ and $r = 1$ in (a) the inertial-hydrodynamics
and (b) the diffusive regimes. For the diffusive regime, the convective term in the Cahn-Hilliard
equation is removed.
(c-e) Snapshots of the typical fluid configurations at $t = 40$, $100$ and $250$
corresponding to the case indicated in panel (a).
(f-h) Snapshots of the typical fluid configurations at $t = 350$, $4250$ and $25500$
corresponding to the case indicated in panel (b). The times are chosen such that
$L_d$ matches the ones corresponding to the panels (c-e).
\label{fig:tscal-ev}
}
\end{center}
\end{figure}
Our simulation results are shown
in Figs.~\ref{fig:tscal-ev}(a) and \ref{fig:tscal-ev}(b)
respectively for cases with and without coupling to
hydrodynamics. Qualitatively, we find behaviour similar to that obtained
in the Cartesian case, Fig.~\ref{fig:cart_scal}. In panel (a), it can be seen that $L_d(t)$ grows with
an apparent exponent of $2/3$ when hydrodynamics is on.
Turning off the hydrodynamics, the $1/3$ diffusive exponent emerges, as demonstrated in panel (b).
The coarsening dynamics are also much faster with hydrodynamics
on the torus. Snapshots of the order parameter configuration
at various times for the case of the even mixture with and without hydrodynamics are shown
in panels (c-e) and (f-h) respectively.
Quantitatively, we observe that finite size effects occur earlier (smaller $L_d$) for the torus considered in Fig.~\ref{fig:tscal-ev}
compared to the Cartesian case. This is expected since the effective length scale in the poloidal direction,
$2\pi r$, is smaller than the width of the simulation box in the Cartesian case, even though the total surface areas
are comparable. Indeed, we can observe that
the departure from the $2/3$ (panel a) and $1/3$ (panel b) exponents
occur when the fluid domains start to wrap around the circle in the poloidal direction.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.4\columnwidth]{data/tscal/tscal-ev-inertial-fat-eps-converted-to.pdf} &
\includegraphics[width=0.4\columnwidth]{data/tscal/tscal-ev-inertial-thin-eps-converted-to.pdf}
\end{tabular}
\begin{tabular}{ccc}
\includegraphics[width=.32\columnwidth]{data/tscal/hteR2r1p25-t40.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/hteR2r1p25-t100.png} &
\includegraphics[width=.31\columnwidth]{data/tscal/hteR2r1p25-t250.png} \vspace{1 mm} \\
(c) $t = 40$ & (d) $t = 100$ & (e) $t = 250$ \vspace{2 mm} \\
\includegraphics[width=.3\columnwidth]{data/tscal/hteR5r0p5-t40.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/hteR5r0p5-t100.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/hteR5r0p5-t250.png} \vspace{1 mm} \\
(f) $t = 40$ & (g) $t = 100$ & (h) $t = 250$
\end{tabular}
\caption{
Comparing the growth of the fluid domain size $L_d(t)$ for an even mixture on
(a) a thick torus ($R = 2$, $r = 1.25$) and (b)
a thin torus ($R = 5$, $r = 0.5$).
(c-e) and (f-h) Snapshots of the typical fluid configurations at $t = 40$, $100$ and $250$
corresponding to the cases indicated in panels (a) and (b) respectively.\label{fig:tscal-ev-fat}
}
\end{center}
\end{figure}
In Fig.~\ref{fig:tscal-ev-fat} we further show simulation results for a thicker
($R = 2$ and $r = 1.25$; $a = 0.625$) and
a thinner ($R = 5$ and $r = 0.5$; $a = 0.1$) torus, having a total
area equal to the one considered at the beginning of this section.
The simulation parameters are kept the same as before, except that
for the thicker torus, the time step must
be decreased to $\delta t = 5 \times 10^{-5}$,
since the minimum spacing along the $\varphi$
direction occurring on the inner equator
is $2\pi(R-r)/N_\varphi \sim 0.00589$. Comparing
Figs.~\ref{fig:tscal-ev}(a), \ref{fig:tscal-ev-fat}(a)
and \ref{fig:tscal-ev-fat}(b), we can further conclude that
finite size
effects appear sooner for the thinner torus and later for the thicker one.
This further strengthens the argument that the determining lengthscale
for the finite size effects is the circumference
in the poloidal direction,
rather than the circumference
on the inner side of the torus
(at $\theta = \pi$), $2\pi (R-r)$. Were the latter the relevant lengthscale, the thicker torus
would display finite size effects the earliest among the three geometries simulated.
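The quoted minimum spacing follows directly from the geometry; a short check for the three tori of this section, assuming $N_\varphi = 800$ is kept from the $R = 2.5$, $r = 1$ runs:

```python
import numpy as np

def min_azimuthal_spacing(r, R, n_phi):
    """Smallest physical grid spacing along the phi direction, reached
    on the inner equator (theta = pi), where the circumference
    is 2*pi*(R - r)."""
    return 2.0 * np.pi * (R - r) / n_phi

# The three tori compared in this section, all with N_phi = 800.
spacings = {name: min_azimuthal_spacing(r_, R_, 800)
            for name, (r_, R_) in {"reference": (1.0, 2.5),
                                   "thick": (1.25, 2.0),
                                   "thin": (0.5, 5.0)}.items()}
```

The thick torus has the smallest spacing (about $0.00589$), which is what forces the reduced time step quoted above.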
Given that fluid stripes generally form in the poloidal rather
than the toroidal direction during phase separation, the
drift phenomenon reported in sub-section \ref{sec:drift:damp} for
stripe configurations cannot be clearly visualised.
However, domain drifts for drops, as reported in
section~\ref{sec:drift:drop}, can be seen in Figs.~\ref{fig:tscal-ev}
and \ref{fig:tscal-ev-fat} during the late stages of the coarsening phenomenon.
This drift phenomenon can be observed even more clearly when we
study uneven mixtures, as discussed in the next sub-section.
\subsection{Uneven Mixtures}
\label{sec:res:growth:uneven}
\subsubsection{Cartesian Geometry}
The simulation results for a mixture with asymmetric composition
are shown in Fig.~\ref{fig:cart_scal_uneven}. We use the same
simulation parameters as in Fig.~\ref{fig:cart_scal}, except that $\overline{\phi} = -0.3$.
Fig.~\ref{fig:cart_scal_uneven}(a) shows how the typical domain size scales with time both
when hydrodynamics is turned on and off. Interestingly, in both
cases we observe an exponent of $1/3$, albeit with different
prefactors. This is in contrast to our results for the even mixtures,
for which an apparent exponent of $2/3$
is obtained with hydrodynamics. It has been suggested in the literature that the effect of
hydrodynamics decreases as a function of the asymmetry of the mixture, though
we do not yet know of a convincing systematic study of this effect.
For example, \cite{wagner2001phase} showed that at high concentrations
droplets with hydrodynamics exhibit the viscous hydrodynamic coarsening
regime, but as droplet coalescence is reduced at lower volume fractions
the effect of hydrodynamics diminishes. Here we observe the limit in which
the scaling is typical of diffusive dynamics.
The fluid configurations at various times in the simulation are shown
in Fig.~\ref{fig:cart_scal_uneven}, panels (b-e), when hydrodynamics is
taken into account.
These can be compared to Fig.~\ref{fig:cart_scal_uneven}, panels (f-i),
when the advection term is
switched off in the Cahn-Hilliard equation. The main difference is
that the morphologies with hydrodynamics coarsen faster. In the
non-hydrodynamic simulation more coalescence events are visible
because the restoration of a round shape takes more time.
Thus, while the scaling exponent is the same with and without hydrodynamics, hydrodynamics
still plays an important role in that it allows coalescing droplets
to return to a round shape more quickly.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.45\columnwidth]{data/cscal/cscal-unev-comb-eps-converted-to.pdf}
\end{tabular}
\begin{tabular}{cccc}
\includegraphics[width=0.225\columnwidth]{data/cscal/uneven10-t50.png} &
\includegraphics[width=0.225\columnwidth]{data/cscal/uneven10-t200.png}&
\includegraphics[width=0.225\columnwidth]{data/cscal/uneven10-t500.png}&
\includegraphics[width=0.225\columnwidth]{data/cscal/uneven10-t4000.png} \vspace{1 mm}\\
(b) $t = 50$ & (c) $t = 250$ & (d) $t = 500$ & (e) $t = 4000$ \vspace{1 mm} \\
\includegraphics[width=0.225\columnwidth]{data/cscal/nohcue10-t150.png} &
\includegraphics[width=0.225\columnwidth]{data/cscal/nohcue10-t1050.png}&
\includegraphics[width=0.225\columnwidth]{data/cscal/nohcue10-t1500.png}&
\includegraphics[width=0.225\columnwidth]{data/cscal/nohcue10-t6500.png} \vspace{1 mm}\\
(f) $t = 150$ & (g) $t = 1050$ & (h) $t = 1500$ & (i) $t = 6500$
\end{tabular}
\caption{
(a) Growth of the fluid domain $L_d(t)$ for an uneven mixture ($\overline{\phi} = -0.3$)
in two dimensions with and without hydrodynamics.
In both cases, an exponent of $1/3$ characteristic of the diffusive
regime is observed at late times.
(b-e) Snapshots of the typical fluid configurations at times
$t = 50$, $250$, $500$ and $4000$, corresponding to the case with hydrodynamics.
(f-i) Snapshots of the fluid configurations corresponding to the case without
hydrodynamics,
at times $t = 150$, $1050$, $1500$ and $6500$. These are selected such that the
values of $L_d(t)$ correspond to those in panels (b-e).
\label{fig:cart_scal_uneven}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.5\columnwidth]{data/tscal/tscal-unev-comb-eps-converted-to.pdf}
\end{tabular}
\begin{tabular}{ccc}
\includegraphics[width=.3\columnwidth]{data/tscal/htueR2p5r1-t40.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/htueR2p5r1-t250.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/htueR2p5r1-t1500.png} \vspace{1 mm}\\
(b) $t = 40$ & (c) $t = 250$ & (d) $t = 1500$ \vspace{1 mm}\\
\includegraphics[width=.3\columnwidth]{data/tscal/nohtueR2p5r1-t100.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/nohtueR2p5r1-t1250.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/nohtueR2p5r1-t3500.png} \vspace{1 mm}\\
(e) $t = 100$ & (f) $t = 1250$ & (g) $t = 3500$
\end{tabular}
\caption{(a)
Growth of the fluid domain size $L_d(t)$ for an uneven mixture
on a torus with $R = 2.5$ and $r = 1$ with and without hydrodynamics.
(b-d) Snapshots of the typical fluid configurations at $t = 40$, $250$ and $1500$
corresponding to the case with hydrodynamics.
(e-g) Snapshots of the typical fluid configurations at $t = 100$, $1250$ and $3500$
corresponding to the case without hydrodynamics. The times are chosen such that
the values of $L_d$ match those in panels (b-d).
\label{fig:tscal-unev}
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\linewidth]{data/tscal/tscal-unev-phiavg-eps-converted-to.pdf}
\begin{tabular}{ccc}
\includegraphics[width=.3\columnwidth]{data/tscal/htueR2p5r1-t2500.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/htueR2p5r1-t5000.png} &
\includegraphics[width=.3\columnwidth]{data/tscal/htueR2p5r1-t18000.png} \vspace{1 mm}\\
(b) $t = 2500$ & (c) $t = 5000$ & (d) $t = 18000$ \vspace{1 mm}\\
\end{tabular}
\caption{(a) The average distribution of the minority component, $(\braket{\phi} + 1)/2$, as a function of $\theta$ at various times. (b-d) Snapshots of the fluid configurations at $t = 2500$, $5000$ and $18000$.
\label{fig:tscal-unev-phiavg}
}
\end{center}
\end{figure}
\subsubsection{Torus Geometry}
Here we consider a torus geometry with $R = 2.5$ and $r = 1$
(same geometry and simulation parameters as in Fig.~\ref{fig:tscal-ev}), and
the order parameter is initialised according to Eq.~\eqref{eq:phirand} with $\overline{\phi} = -0.3$.
The simulation results for the uneven mixture are shown in Fig.~\ref{fig:tscal-unev}.
We find behaviour similar to that of the Cartesian case:
both with and without hydrodynamics, we observe a $1/3$ exponent in our simulations.
Similar to the even mixture shown in Fig.~\ref{fig:tscal-ev},
we also find that finite size effects occur earlier (smaller $L_d$)
for the torus compared to the Cartesian geometry. As discussed in the case of even mixtures,
this occurs when the fluid domains
start to wrap around the circle in the poloidal direction. Snapshots of the fluid configurations
during phase separation are shown in panels (b-d) and (e-g) respectively for simulations
with and without hydrodynamics.
At late times, the effect of the curvature on the domain dynamics becomes important.
In Sec.~\ref{sec:drift:drop} we discussed how droplet domains migrate to the outer side
of the torus. To quantify this effect during phase separation of uneven mixtures, we consider
the average of $\phi$ with respect to the azimuthal angle $\varphi$:
\begin{equation}
\braket{\phi} = \int_0^{2\pi} \frac{d\varphi}{2\pi} \phi(\theta,\varphi).
\label{eq:phiavg}
\end{equation}
The discrete equivalent of the above relation is
\begin{equation}
\braket{\phi}_q = \frac{1}{N_{\varphi}}
\sum_{s = 1}^{N_\varphi} \phi_{s,q}.
\end{equation}
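The discrete average above is a single reduction over the azimuthal index; a minimal sketch, with an illustrative toy field (the function name is hypothetical):

```python
import numpy as np

def azimuthal_average(phi):
    """Discrete azimuthal average <phi>_q: mean of phi[s, q] over the
    azimuthal index s at each fixed poloidal index q."""
    return phi.mean(axis=0)

# Example: a field that is -1 everywhere except a patch of +1 at one
# poloidal location, spread over half of the azimuthal nodes.
field = -np.ones((4, 3))
field[:2, 1] = 1.0
profile = (azimuthal_average(field) + 1.0) / 2.0   # quantity plotted above
```

Here `profile` is the local fraction of the $\phi = +1$ component at each poloidal angle, the quantity shown in the figure.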
We plot $(\braket{\phi} + 1)/2$ as a function of the poloidal angle $\theta$ at various times in
Fig.~\ref{fig:tscal-unev-phiavg}(a).
At late times, see e.g. Fig.~\ref{fig:tscal-unev-phiavg}(c),
the typical configuration corresponds to the majority phase ($\phi = -1$)
forming a continuum with several large droplets of the minority phase ($\phi = +1$)
primarily on the outer side of the torus. At $t = 18000$ [Fig.~\ref{fig:tscal-unev-phiavg}(d)],
when the steady state is reached, the inner stripe spans $0.65\pi \lesssim \theta \lesssim 1.35\pi$.
Our convention is to identify the stripe with the minority fluid component.
The maximum of $(\braket{\phi} + 1)/2$ is clearly reached at $\theta = 0$,
indicating that the outer side of the torus is populated by droplets centred on $\theta = 0$.
\section{Conclusions}\label{sec:conc}
In this work we developed a vielbein lattice Boltzmann scheme to solve the hydrodynamic equations of motion of a binary fluid on an arbitrary curved surface. To illustrate the application of our vielbein lattice Boltzmann method to curved surfaces, here we focussed on the torus geometry and studied two classes of problems. First, owing to the non-uniform curvature of a torus, we showed that fluid droplets and stripes drift on its surface. Such dynamics are absent on a flat surface or on surfaces of uniform curvature. Interestingly, the fluid droplets and stripes display a preference for different regions of the torus: fluid droplets migrate to the outer side of the torus, while fluid stripes move to the inner side. The exhibited dynamics are typical of a damped oscillatory motion. Moreover, for the fluid stripes, the corresponding dynamics can effectively be reduced to a one-dimensional problem by taking advantage of the symmetry with respect to
the azimuthal angle. Our simulation results are in excellent agreement with the analytical predictions for the equilibrium position of the stripes, the Laplace pressure difference between
the inside and outside of the stripes, and the relaxation dynamics of the stripes towards equilibrium.
We also studied phase separation dynamics on tori of various shapes. For even mixtures, the $2/3$ and $1/3$ scaling exponents characteristic of the hydrodynamic and diffusive regimes are observed. In contrast, for uneven mixtures, we only observe a $1/3$ scaling exponent, both when hydrodynamics is turned on and off. Compared to the Cartesian geometry, we saw that finite size effects kick in earlier for the torus geometry. By comparing the results for three torus aspect ratios, we conclude
that the determining lengthscale for the finite size effects seems to be
the circumference in the poloidal direction, corresponding to fluid domains wrapping around the circle in the poloidal direction. The fact that the stripes form in the poloidal rather than the toroidal direction prevents the observation of the drift of fluid stripes towards the inner side of the torus during phase separation. However, the drift of fluid drops towards the outer side of the torus can be clearly observed at the late stage of phase separation.
While we focussed on the torus geometry, our approach can be applied to arbitrary curved geometries. Moreover, one interesting area for future work is to extend the method to unstructured meshes, where the geometrical objects needed for the Boltzmann equation must be evaluated numerically. A major challenge is to construct a numerical scheme which is accurate to second order or higher. Another important avenue for future investigations is to couple the hydrodynamic equations of motion with more complex dynamical equations, such as those for (active and passive) liquid crystals and viscoelastic fluids. We believe this work extends the applicability of lattice Boltzmann approaches to a new class of problems, complex flows on curved manifolds, which are difficult to tackle using the standard lattice Boltzmann method.
{\bf Acknowledgements:} We acknowledge funding from EPSRC (HK; EP/J017566/1 and EP/P007139/1), Romanian Ministry of Research and Innovation (VEA and SB; CCCDI-UEFISCDI, project number PN-III-P1-1.2-PCCDI-2017-0371/VMS, within PNCDI III), and the EU COST action MP1305 Flowing Matter (VEA and HK; Short Term Scientific Mission 38607).
VEA gratefully acknowledges the support of NVIDIA Corporation with the donation of a
Tesla K40 GPU used for this research. VEA and SB thank Professor
Victor Sofonea (Romanian Academy, Timi\cb{s}oara Branch) for
encouragement, as well as for sharing with us the GPU infrastructure
available at the Timi\cb{s}oara Branch of the Romanian Academy.
\section{Equations of motion and asymptotic behavior of relaxion}\label{sec:app}
The equations of motion for relaxion and dark photon are given as
\begin{eqnarray}
0 &=& \ddot{\phi} + 3 H \dot{\phi} + \frac{\partial V(v,\phi)}{\partial \phi} + \frac{r_X}{4f a^4} \langle X_{\mu\nu} \widetilde{X}^{\mu\nu} \rangle,
\\
0 &=& X_\pm'' + (k^2 \mp r_X k \theta' ) X_\pm,
\label{dp_eom}
\end{eqnarray}
where the prime and the overdot denote derivatives with respect to the conformal time and the physical time, respectively, and $\theta \equiv \phi / f$.
The metric is given as
\begin{eqnarray}
ds^2 = dt^2 - a^2(t) \delta_{ij} dx^i dx^j.
\end{eqnarray}
To investigate how the particle production affects the relaxion evolution, it is more convenient to write the source term in the relaxion equation of motion in Fourier space,
\begin{eqnarray}
\frac{1}{4 a^4} \langle X_{\mu\nu} \widetilde{X}^{\mu\nu} \rangle = \frac{1}{a^4} \int \frac{d^3k}{(2\pi)^3} {k \over 2} \sum_{\lambda=\pm} \lambda \frac{d}{d\tau} |X_\lambda|^2 .
\end{eqnarray}
Since the relaxion velocity is $\theta' > 0 $ in our convention, only the $\lambda = +$ helicity is exponentially produced, while the $\lambda = -$ helicity state remains close to its vacuum fluctuation.
The relaxion evolution before the particle production is dominantly governed by the slope of the relaxion potential.
At the very beginning of the relaxion evolution, its solution in a radiation-dominated universe is approximated as
\begin{eqnarray}
\dot{\phi}(t) &=& \frac{2}{5} g \Lambda^3 t \bigg[ 1 - \Big(\frac{t_{\rm rh}}{t}\Big)^{5/2} \bigg],
\label{rel_vel_no}
\\
|\Delta \phi(t)| &=& \frac{1}{5} g \Lambda^3 t^2 \left[ 1 - 5 \Big(\frac{t_{\rm rh}}{t} \Big)^2 + 4 \Big( \frac{t_{\rm rh}}{t} \Big)^{5/2} \right],
\label{rel_ex_no}
\end{eqnarray}
where $t_{\rm rh}$ is the physical time at the reheating.
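As a quick numerical sanity check (a Python sketch, not taken from the text), one can verify that Eq.~\eqref{rel_vel_no} solves $\ddot{\phi} + 3H\dot{\phi} = g\Lambda^3$, i.e. the equation of motion with a constant potential slope $\partial V / \partial \phi = -g\Lambda^3$ and $H = 1/(2t)$ in a radiation-dominated universe:

```python
# Verify that  phi_dot(t) = (2/5) g L^3 t [1 - (t_rh/t)^(5/2)]
# solves  phi_ddot + 3H phi_dot = g L^3  with H = 1/(2t)
# (radiation domination).  Units with g*Lambda^3 = 1, t_rh = 1.

def phi_dot(t, gL3=1.0, t_rh=1.0):
    return 0.4 * gL3 * t * (1.0 - (t_rh / t) ** 2.5)

def eom_residual(t, gL3=1.0, t_rh=1.0, h=1e-6):
    # central finite difference for phi_ddot
    phi_ddot = (phi_dot(t + h, gL3, t_rh) - phi_dot(t - h, gL3, t_rh)) / (2 * h)
    H = 1.0 / (2.0 * t)
    return phi_ddot + 3.0 * H * phi_dot(t, gL3, t_rh) - gL3

# residuals at a few times after reheating (should vanish identically)
residuals = [eom_residual(t) for t in (2.0, 5.0, 50.0)]
```

The residual vanishes for all $t > t_{\rm rh}$, and the solution satisfies the initial condition $\dot{\phi}(t_{\rm rh}) = 0$.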
Using this approximate solution, we can estimate the time scale at which the particle production becomes important.
For this purpose, we use WKB approximation to solve the equation of motion for dark photon, and find
\begin{eqnarray}
X_+ (k,\tau) \approx \frac{e^{ \int^\tau d\tau' \, \Omega_k (\tau') }}{\sqrt{2\Omega_k(\tau)}} \equiv \frac{e^{g_k(\tau)}}{\sqrt{2\Omega_k(\tau)}},
\end{eqnarray}
where the frequency is defined as $\Omega_k^2(\tau) = r_X k \theta' -k^2$, and $g_k(\tau) \equiv \int^\tau d\tau' \, \Omega_k(\tau')$.
This approximation is valid only when $|\Omega_k' / \Omega_k^2| \ll 1$, which is translated into
\begin{eqnarray}
\frac{1}{4r_X} \left| \frac{\theta''^2}{\theta'^4} \right|< k/ | \theta' | < r_X .
\end{eqnarray}
Substituting this solution into the source term, we find
\begin{eqnarray}
\frac{1}{4a^4} \langle X_{\mu\nu} \widetilde{X}^{\mu\nu} \rangle
\approx \frac{1}{4\pi^2 a^4} \! \int dk \, k^3 e^{2 g_k(\tau)}
\sim \frac{k_*^4}{4\pi^2 a^4} e^{2 g_{k_*}(\tau)}.
\nonumber
\end{eqnarray}
For the last expression, we evaluate the integral at a specific wavenumber, $k_* = r_X | \theta'(\tau)|$, the mode that becomes marginally stable at $\tau$.
With this wavenumber, we estimate the source term as
\begin{eqnarray}
\frac{1}{4a^4} \langle X_{\mu\nu} \widetilde{X}^{\mu\nu} \rangle
\!\sim\! \frac{r_X^4 |\dot{\theta}(t)|^4}{4\pi^2} \exp \bigg[ \frac{4r_X}{5} \frac{\dot{\theta}(t)}{H} \bigg] .
\label{before_pp}
\end{eqnarray}
When this term becomes comparable to the other terms in the equation of motion, for instance, $\partial V(v,\phi) / \partial \phi$, the dark photon begins to affect the relaxion evolution.
Equating the source term with the slope of relaxion potential, we estimate the Hubble scale at the particle production as
\begin{eqnarray}
H_{\rm pp} = m_\phi \sqrt{\frac{r_X}{5\xi}},
\end{eqnarray}
where $\xi$ is given as a solution of
\begin{eqnarray}
\xi = \frac{5}{2} \ln \left( \frac{10\pi}{r_X^{3/2} \xi} \frac{f}{m_\phi} \right) \sim{\cal O}(10 \,\,\textrm{--} \,\,10^2).
\end{eqnarray}
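This transcendental equation is easily solved by fixed-point iteration, which converges rapidly because the logarithm varies slowly. The sketch below (with an illustrative value of $f/m_\phi$ and $r_X$ that is not taken from the text) confirms $\xi \sim {\cal O}(10\,\textrm{--}\,10^2)$:

```python
import math

def solve_xi(f_over_m=1e9, r_X=1.0, tol=1e-10, max_iter=200):
    """Fixed-point solve  xi = (5/2) ln(10*pi*f / (r_X^(3/2) xi m_phi)).
    f_over_m = f/m_phi and r_X are illustrative values only."""
    xi = 10.0  # initial guess
    for _ in range(max_iter):
        xi_new = 2.5 * math.log(10.0 * math.pi * f_over_m / (r_X ** 1.5 * xi))
        if abs(xi_new - xi) < tol:
            return xi_new
        xi = xi_new
    raise RuntimeError("fixed-point iteration did not converge")

xi = solve_xi()
resid = xi - 2.5 * math.log(10.0 * math.pi * 1e9 / xi)
```

The iteration is a contraction since $|d\xi_{\rm new}/d\xi| = 5/(2\xi) \ll 1$ near the solution.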
The relaxion evolution after this time scale is difficult to estimate as the equation of motion becomes an integro-differential equation.
Still, we know that the relaxion field velocity must decrease with time.
To see this, suppose the relaxion kinetic energy were constant.
In this case, the dark photon field within a fixed interval of wavenumbers, $0\leq k \leq r_X |\theta'|$, experiences tachyonic instability at a constant rate.
This indicates that these wavenumbers of the dark photon keep being exponentially produced, leading to an exponentially growing source.
Therefore, the source term cannot asymptote to the slope of the relaxion potential with a constant relaxion field velocity.
The relaxion field velocity must be a decreasing function of time.
Since we do not know the form of the asymptotic solution, we introduce an ansatz for relaxion evolution, and ask which ansatz satisfies the equation of motion asymptotically.
We introduce
\begin{eqnarray}
\theta'(\tau) = \theta'(\tau_{\rm pp}) \left( \frac{\tau_{\rm pp}}{\tau} \right)^n,
\end{eqnarray}
where the subscript indicates a value computed at the time scale of particle production.
It is interesting to observe that $n=1$ allows scale-invariant production of the dark photon.
From Eq.~\eqref{dp_eom}, on a $k\tau =$ constant surface, the dark photon field is enhanced by exactly the same amount relative to its vacuum fluctuation, regardless of wavenumber.
However, $n=1$ cannot be the asymptotic solution because $\langle X \widetilde{X}\rangle$ enters the relaxion equation of motion with a factor $a^{-4}$, and also because the exponentially produced wavenumbers redshift such that the integral over the tachyonic band also scales as $a^{-4}$.
In other words, although the dark photon field, $\sqrt{2k}X(k,\tau)$, is enhanced by exactly the same amount, its contribution to the relaxion equation of motion redshifts as $\propto 1/a^8$, while $\partial V(v,\phi) / \partial \phi$ remains constant.
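Explicitly, using $k_* \propto \theta' \propto 1/\tau$ for $n=1$ and $a \propto \tau$ in a radiation-dominated universe, a fixed enhancement factor gives
\begin{eqnarray}
\frac{1}{a^4} \langle X_{\mu\nu} \widetilde{X}^{\mu\nu} \rangle
\sim \frac{k_*^4}{a^4}\, e^{2 g_{k_*}}
\propto \frac{e^{2 g_{k_*}}}{(a\tau)^4}
\propto \frac{e^{2 g_{k_*}}}{a^8}.
\nonumber
\end{eqnarray}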
The relaxion evolution that allows the dark photon source term to asymptote to the slope of the potential would be the one with $n<1$.
To estimate this more carefully, we use the WKB solution again in addition to saddle point approximation, and find
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\! \frac{1}{4 a^4} \langle X_{\mu\nu} \widetilde{X}^{\mu\nu} \rangle
\simeq \frac{e^{ 2 g_{\tilde{k}} (\tau)}}{8\pi^{3/2}} \bigg( {\tilde{k} \over a} \bigg)^4
\left( - \tilde{k}^2 \frac{ \partial^2 g_{k}}{ \partial k^2} \bigg|_{\tilde k} \right)^{-\frac{1}{2}} \!\!.
\label{expanded_source}
\end{eqnarray}
The saddle point $\tilde k$ is obtained as a solution of $\partial g_{k}(\tau) / \partial k|_{k=\tilde{k}} = 0$, and is generally a function of conformal time.
To find the asymptotic solution, it is crucial to know how $g_{\tilde k}(\tau)$ scales because the source is exponentially sensitive to it.
By definition, the saddle point satisfies
\begin{eqnarray}
\int^\tau d\tau' \, \frac{r_X \theta' (\tau') - 2 \tilde{k}}{2\sqrt{r_X \theta' (\tau') {\tilde k} - {\tilde k}^2}} = 0,
\label{saddle}
\end{eqnarray}
for any conformal time.
If we shift $\tau \to \lambda \tau$ with a scaling parameter $\lambda$, the saddle point must scale as
\begin{eqnarray}
\tilde{k} \to \tilde{k}_\lambda = \lambda^{-n} \tilde{k},
\end{eqnarray}
in order to satisfy Eq.~\eqref{saddle}.
This immediately leads to
\begin{eqnarray}
\!\!\!\!\!\!\!\!\! g_{\tilde k}(\tau) \to \lambda^{1-n} g_{\tilde k}(\tau),
\quad
\frac{\partial^2 g_k}{\partial k^2}\bigg|_{\tilde{k}} \!\!\!\!\! \to \lambda^{n+1}\frac{\partial^2 g_{ k}}{\partial k^2}\bigg|_{\tilde{k}} \, .
\end{eqnarray}
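The first of these scalings can be made explicit by substituting $\tau' = \lambda\sigma$ in the definition of $g_{\tilde k}$ and using $\theta'(\lambda\sigma) = \lambda^{-n}\theta'(\sigma)$ together with $\tilde{k}_\lambda = \lambda^{-n}\tilde{k}$:
\begin{eqnarray}
g_{\tilde{k}_\lambda}(\lambda\tau)
= \int^{\lambda\tau} \!\! d\tau' \sqrt{r_X \theta'(\tau') \tilde{k}_\lambda - \tilde{k}_\lambda^2}
= \lambda^{1-n} \int^{\tau} \!\! d\sigma \sqrt{r_X \theta'(\sigma) \tilde{k} - \tilde{k}^2}
= \lambda^{1-n} g_{\tilde{k}}(\tau).
\nonumber
\end{eqnarray}
The second follows because each derivative with respect to $k$ removes one factor of $\tilde{k} \propto \lambda^{-n}$, giving $\lambda^{1-n}\lambda^{2n} = \lambda^{1+n}$.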
Substituting this scaling behavior into the source term, Eq.~\eqref{expanded_source}, we notice that the source decreases polynomially in the scaling parameter for $n\geq1$, while it grows exponentially for $n \ll 1$.
Thus, it is $0<1-n \ll 1$ that allows the source term to asymptote to the slope of the relaxion potential.
In this respect, we write $1-n \equiv \epsilon \ll 1$.
We find
\begin{eqnarray}
\frac{a^{-4}\langle X_{\mu\nu} \widetilde{X}^{\mu\nu} \rangle (\tau)}{a_{\rm pp}^{-4}\langle X_{\mu\nu} \widetilde{X}^{\mu\nu} \rangle(\tau_{\rm pp})}
\approx \left( \frac{\tau}{\tau_{\rm pp}} \right)^{-8 +2 \epsilon g_{\tilde k}(\tau_{\rm pp})}.
\end{eqnarray}
From this, we find
\begin{eqnarray}
\epsilon \approx \frac{4}{g_{\tilde k}(\tau_{\rm pp})} \sim \frac{5}{\xi} < 1 ,
\end{eqnarray}
in order for $a^{-4} \langle X\widetilde{X} \rangle$ to asymptote to the slope of relaxion potential.
Here, we have estimated the exponent $g_{\tilde k}(\tau_{\rm pp})$ from Eq.~\eqref{before_pp}.
Using this result, we find the asymptotic relaxion evolution as
\begin{eqnarray}
\!\!\theta' &\simeq& \theta'_{\rm pp} \left( \frac{\tau_{\rm pp}}{\tau} \right)^{1-\epsilon} = r_\xi (aH) \left( \frac{\tau}{\tau_{\rm pp}} \right)^\epsilon,
\\
\!\!\dot{\theta} &\simeq& r_\xi H \left( \frac{t}{t_{\rm pp}} \right)^{\epsilon/2}
\simeq r_\xi H \left[ 1 + \frac{\epsilon}{2} \ln\bigg(\frac{t}{t_{\rm pp}}\bigg) \right],
\end{eqnarray}
where $r_\xi \equiv \xi / r_X$.
The relaxion evolution after the particle production scales as $\dot{\theta} \propto H$ up to a small logarithmic time dependence, which we ignore in the main text.
\end{document}
\section{Introduction}
A distortion is the pathway by which a system transforms in its transition between two or more physical states. Distortions are ubiquitous in nature and crucial in our understanding of physical processes. Recently, the concept of distortion symmetry, an analog of the symmetry of static structures, has been introduced by VanLeeuwen and Gopalan\cite{VanLeeuwen2015} as a framework to study distortions. The versatility of distortion symmetry is shown through an array of insightful examples and valuable applications, including molecular distortions and pseudorotations, minimum energy paths, tensor properties of crystals, and electronic structures and Berry phases \cite{VanLeeuwen2015}. The application to the discovery of minimum energy paths (MEPs) was further explored by Munro \textit{et al.}\cite{Munro2018}, where MEPs overlooked by conventional techniques are consistently and systematically generated through symmetry-adapted perturbations made possible by distortion symmetry. These results agree with and extend MEPs found previously by other researchers using approaches involving physical intuition and single-structure perturbations.
We introduce a thorough treatment of distortion symmetry and its application to nudged elastic band (NEB) calculations employed by Munro \textit{et al.}\cite{Munro2018}, providing a complete introduction to the concepts, mathematics, and methods of distortion symmetry. Furthermore, we illustrate its synergy with powerful methods in representation theory for generating symmetry-adapted perturbations in the context of materials science. In particular, we show the application of our method to ferroelectric switching in LiNbO$_3$ using a software implementation (\texttt{DiSPy})\cite{dispy}.
\section{Theory and Background}
\subsection{Distortion Symmetry Groups}
Consider a periodic cell of a crystalline solid in three-dimensions which contains $N$ atoms. This cell can be represented by the collection of vectors $V = \{\mathbf{r}_i^\alpha\ |\ i=1,\ldots,N\}$, where $\mathbf{r}_i^\alpha$ give the fractional coordinate of atom $i$ with type $\alpha$ in the cell basis. The spatial symmetry of the structure described by this cell is characterized by the group ($S$) of conventional spatial operations ($h \in S$) represented by matrix-vector pairs $(R,\mathbf{t})$, which leave $V$ invariant:
\begin{equation}
R\mathbf{r}_{i}^{\alpha}+\mathbf{t}=\mathbf{r}_{i'}^\alpha\in V,
\end{equation}
for all $h$ and $\mathbf{r}_i^\alpha$. Here, $R$ is the matrix representation of the proper or improper rotation associated with $h$, and $\mathbf{t}$ is the spatial translation vector.
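The invariance test above amounts to checking that each transformed fractional coordinate lands on a site of the same atom type, modulo the lattice. A minimal sketch (illustrative only; the names are not \texttt{DiSPy}'s or \texttt{SPGLIB}'s API) is:

```python
# Test whether a matrix-vector pair (R, t) leaves a structure invariant:
# R r_i + t must again be a site of the same atom type, with coordinates
# compared modulo 1 (a small eps handles sites wrapping near 0/1 loosely).

def wrap(x, eps=1e-8):
    return [(c + eps) % 1.0 - eps for c in x]

def apply_op(R, tvec, r):
    return wrap([sum(R[a][b] * r[b] for b in range(3)) + tvec[a]
                 for a in range(3)])

def is_symmetry(R, tvec, sites, tol=1e-6):
    # sites: list of (fractional_coords, atom_type)
    for r, typ in sites:
        image = apply_op(R, tvec, r)
        if not any(typ2 == typ and
                   max(abs(image[a] - r2[a]) for a in range(3)) < tol
                   for r2, typ2 in sites):
            return False
    return True

# Example: a pair of like atoms related by inversion through the origin.
sites = [([0.1, 0.2, 0.3], "Li"), ([0.9, 0.8, 0.7], "Li")]
inversion = [[-1, 0, 0], [0, -1, 0], [0, 0, -1]]
ok = is_symmetry(inversion, [0, 0, 0], sites)
bad = is_symmetry(inversion, [0.25, 0, 0], sites)
```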
Whereas conventional symmetry groups represent the symmetry of a static structure, distortion symmetry and distortion symmetry groups characterize the symmetry of a distortion pathway. By considering a pathway described by a set of structures $P = \{V_m\}$, the distortion symmetry group ($G$) of a given distortion pathway is the closed set of operations $(G=\{g\})$ which, when applied to said pathway, leave it unchanged or result in an equivalent pathway configuration:
\begin{equation}
R\mathbf{r}_{i,m}^{\alpha}+\mathbf{t}=\mathbf{r}_{i',m'}^\alpha\in P,
\end{equation}
for all $g$ and $\mathbf{r}_{i,m}^\alpha$. When considering the transformations appropriate for distortion symmetry, VanLeeuwen and Gopalan \cite{VanLeeuwen2015} first note that a distortion is described by the positions of all atoms in space at every point in the distortion. It is thus parameterized by spatial coordinates and a time-like dimension representing the progress or extent of the distortion, both of which should be operated upon. To operate on space (independent of progress/extent), the elements of conventional symmetry groups are applied to entire distortion pathways by transforming all atoms undergoing the distortion at every point in the path. To operate on the extent of distortion, a new operation known as distortion reversal ($1^*$) is introduced, which reverses the entire distortion and generates a pathway from the final state to the initial state (Figure \ref{dsym}). $1^*$ can be combined with conventional symmetry operations, and as such, any element in a distortion symmetry group is an element of the direct product of the group of all conventional symmetry elements and the group $\{1,1^*\}$.
However, merely referring to the ``extent'' of a distortion is problematic. One can define the extent of any point in a physical distortion in any number of ways (for instance, the elapsed time or the position of one atom), each method potentially yielding different results when distortion reversal is applied. VanLeeuwen and Gopalan\cite{VanLeeuwen2015} introduce a parameter ($\lambda$) as a reaction (or distortion) coordinate which is consistent and universal.
As $1^*$ commutes with and is different from any element of a conventional symmetry group, every element of a distortion symmetry group is either mathematically equivalent to a conventional symmetry operation or can be uniquely decomposed into the product of $1^*$ and a conventional symmetry operation. Those elements that fall into the former group are denoted ``unstarred'' and those that fall into the latter group ``starred''.
In nature, distortions are continuous -- they contain an uncountably infinite number of intermediate states. Given a continuous definition of the distortion parameter $\lambda$, it is possible to determine the distortion symmetry group of a path. However, this is difficult and unnecessary, as a smooth path can be approximated very precisely with discrete snapshots, or images, of the path. We will focus on determining distortion symmetry given only the set of these images $\{V_\lambda\}$. As they do not contain the entirety of the information of the path, we must reformulate the terms of distortion symmetry to best suit this approximation.
It would be most precise and convenient if the images are evenly spaced in $\lambda$. This turns out to be the case for images in an NEB calculation, due to the nature of the simulated spring forces at work (assuming images are near enough to each other to be approximated by differentials). With this, applying distortion reversal to a path results in the new distortion coordinate of every image coinciding with the initial distortion coordinate of its ``opposite'' image at $-\lambda$. Thus, distortion reversal implies, quite simply, mapping the first image to the distortion coordinate of the last, the second to that of the second to last, and so forth. Furthermore, it is also preferable to work with an odd number of images; if a middle image exists, it is often a critical transition image. Moreover, distortion reversal leaves the distortion coordinate of the middle image unchanged, providing a bridge between the conventional symmetry of the middle image and the distortion symmetry of the path.
With this framework, it is now rather simple to construct the group of distortion symmetry operations from a set of appropriate images $\{V_\lambda\}$ parameterized with $\lambda$ corresponding to a distortion pathway ($P$).\cite{VanLeeuwen2015} First, the operations are separated into the unstarred and starred categories. If an element is unstarred, it must leave all images in the distortion pathway invariant. Thus, these elements are able to be obtained through a simple intersection of the conventional symmetry groups of all images. Let us denote all such elements as $H$, which can be written as
\begin{equation}
H = \bigcap_{-1\leq\lambda\leq+1} S(\lambda),
\end{equation}
where $S(\lambda)$ is the conventional symmetry group of the image $V_\lambda$. $H$ is a group, as it is the intersection of groups. If an element is starred, its spatial component must be a symmetry operation of the middle image $V_0$. These elements can be determined by finding all symmetry operations of the middle image that map images at $\lambda$ to those at $-\lambda$ $(V_\lambda \rightarrow V_{-\lambda})$. The set of all such operations is denoted as $A$.
Seeing that all of the elements of a distortion group belong to one of these two categories, we can write the distortion group ($G$) as:
\begin{equation}
G = H \cup 1^*A
\end{equation}
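The construction of $H$ and $A$ can be illustrated with a toy one-dimensional model (a sketch, not the paper's implementation): the candidate operations are the identity $E$ and a mirror $m$ ($x \rightarrow -x$), and each image is a tuple of atom coordinates.

```python
# Toy construction of a distortion group G = H ∪ 1*A for a 1D path.

OPS = {"E": lambda x: x, "m": lambda x: -x}

def transform(op, image):
    return tuple(sorted(OPS[op](x) for x in image))

def distortion_group(path, tol=1e-9):
    def equal(a, b):
        return all(abs(p - q) < tol for p, q in zip(a, b))
    # H: unstarred ops leaving every image invariant
    H = [op for op in OPS
         if all(equal(transform(op, img), img) for img in path)]
    # A: ops mapping the image at lambda to the image at -lambda
    A = [op for op in OPS
         if all(equal(transform(op, path[i]), path[len(path) - 1 - i])
                for i in range(len(path)))]
    return H, ["1*" + op for op in A]

# One atom sliding from -0.2 to +0.2 through the origin (5 images):
path = [(-0.2,), (-0.1,), (0.0,), (0.1,), (0.2,)]
H, starred = distortion_group(path)
```

Here the mirror fixes no individual off-center image, so it is absent from $H$, but it maps every image at $\lambda$ to the one at $-\lambda$, yielding the starred element $1^*m$.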
It is also very convenient that all distortion symmetry groups are isomorphic to at least one space group, and thus have identical irreducible representations (irreps) \cite{Vanleeuwen2014}. We first note that $1^*$ is not an element of any physically significant distortion symmetry group: if the opposite were true, the path could be broken down into two identical distortions, which could be analyzed in place of the original. Any distortion group $G=H \cup 1^*A$ is isomorphic to the set $G'=H\cup A$, obtained by multiplying each starred element by $1^*$; this correspondence is one-to-one precisely because $1^*$ is not an element of $G$. Furthermore, it is simple to show that $G'$ is closed and thus is a valid space group. One can thus generate a space group isomorphic to any physically significant distortion symmetry group by making all of its starred elements unstarred.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{DSM_Fig.png}}
\caption{A set of displacements of a collection of atoms constituting a distortion. The distortion is parameterized by a reaction coordinate $\lambda$, which varies between $\lambda=-1$ and $\lambda=+1$. Applying the $1^*$ antisymmetry operation reverses the distortion by taking $\lambda \rightarrow -\lambda$.\label{dsym}}
\end{figure}
\subsection{Perturbing Initial Paths}
We now turn to the application of distortion symmetry to the nudged elastic band (NEB) method employed in the paper by Munro \textit{et al.}\cite{Munro2018}. NEB calculations have the potential to neglect a number of potential minimum energy pathways. This can be thought of as an analogue to the common failure of static structure relaxation calculations in providing the ground state due to force balancing and symmetry conservation. However, instead of static symmetry being conserved, it is instead the distortion symmetry of a path\cite{VanLeeuwen2015}. In the case of static structure calculations, the issue can be solved by perturbing the generated structure with unstable phonon modes to lower its symmetry. This, however, is computationally costly and becomes exponentially more so in the case of entire paths, making a calculation of unstable modes of a path infeasible in many circumstances. Instead, we can utilize the related and less computationally intensive framework of distortion symmetry. Using distortion groups, basis vectors for unstable modes of a path can be generated and used as alternative perturbations. Since the NEB algorithm is unable to lower the distortion symmetry of the initial pathway\cite{VanLeeuwen2015}, generating perturbations that selectively break distortion symmetry and invoke path instabilities is useful in enabling NEB to explore additional pathways that may exist.
\subsubsection{Distortion space and irreducible representations}
Consider the vector space $U$, hereon referred to as ``distortion space", of all possible distortions whose initial and final images are identical to those of the initial path, with addition and scalar multiplication defined as the addition and scalar multiplication of the vector displacements of every atom at every $\lambda$ between $-1$ and $1$. A vector in this space ($\mathbf{v}\in U$) can be written as a linear combination of $3Np$ basis vectors of the space, where $p$ is the number of images in the path, and $N$ is the number of atoms in each image. In other words,
\begin{equation}
\mathbf{v} = \sum_{i=1}^{3N}\sum_{j=1}^{p}C_i^j\mathbf{d}_{i}^j
\end{equation}
where $C_i^j$ is a constant, $\mathbf{d}_{i}^j$ is a basis vector of $U$ representing a displacement of one of the $N$ atoms in image $j$ in one of the three cell vector directions, and for all $i$,
\begin{equation}
|\mathbf{d}_{i}^1| = |\mathbf{d}_{i}^p| = 0.
\end{equation}
It is simple to show that applying a perturbation (adding a vector in distortion space to the $3Np$-dimensional vector constructed with the $\mathbf{r}_{i,m}^\alpha$ vectors that represents the original path) preserves all distortion symmetry operations shared between the initial path and the perturbation, and breaks all symmetry operations of the initial path that are not symmetry operations of the perturbation. To control the distortion symmetry of the perturbed path, we should therefore control the distortion symmetry of the perturbation.
Suitable perturbations can be obtained by considering the irreducible representations (irreps) of the distortion symmetry group of the initial path. For a brief description of irreps, see Supplementary Note 1. Similar to the way phonon modes in a crystal can be categorized using the irreps of its space group, distortion group irreps allow us to categorize displacive modes of atoms in a path. Since $U$ is invariant with respect to the elements $g \in G$,
\begin{equation}
g \mathbf{d}_{i}^j = \sum_{i'=1}^{3N}\sum_{j'=1}^{p}D^{U}_{i'j'ij}(g)\,\mathbf{d}_{i'}^{j'} \in U,
\end{equation}
for all $g$ and $\mathbf{d}_{i}^j$, where $D^{U}(g)$ is the matrix of $g$ in the reducible representation $D^{U}$ constructed using $U$. It is then possible to obtain a set of distortion vectors adapted to the irreps of $G$. Mathematically, this corresponds to generating basis vectors of symmetry-invariant subspaces of the distortion space ($W_\mu \subset U$):
\begin{equation}
g\mathbf{w}_{i}^{\mu} = \sum_{i'=1}^{l_n}D^{\mu}_{i'i}(g)\,\mathbf{w}_{i'}^{\mu} \in W_\mu,
\end{equation}
for all $g$ and $\mathbf{w}_{i}^{\mu}$, where $D^{\mu}(g)$ is the matrix of $g$ in the irreducible representation $D^{\mu}$ constructed using $W_\mu$, and $\mathbf{w}_i^\mu$ is one of its $l_n$ basis vectors. It should be noted that, in practice, the $D^U$ matrices can be generated as:
\begin{equation}
D^U(g) = A(g) \otimes R(g),
\end{equation}
where $A(g)$ is the matrix representing the mapping between atoms in the path for a given symmetry operation $g\in G$, and $R(g)$ is the $3\times3$ matrix representing the proper or improper rotation associated with $g$ in the fractional crystal basis of the system.
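Assembling $D^U(g) = A(g) \otimes R(g)$ is a Kronecker product; a minimal sketch (with the product written out in pure Python, though in practice \texttt{numpy.kron} does the same job) for an operation that exchanges two atoms under inversion:

```python
# D^U(g) = A(g) ⊗ R(g): A(g) is the atom-permutation matrix, R(g) the 3x3
# rotation in the fractional crystal basis.

def kron(A, B):
    n, m = len(A), len(A[0])
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(m * q)] for i in range(n * p)]

# Example op: two atoms exchanged (permutation A) under inversion (R = -I).
A_g = [[0, 1],
       [1, 0]]
R_g = [[-1, 0, 0],
       [0, -1, 0],
       [0, 0, -1]]
D_U = kron(A_g, R_g)   # 6x6 matrix acting on the displacement basis of one image
```

Each row of the resulting matrix contains a single entry of $\pm 1$, reflecting that the operation maps every displacement basis vector onto (the negative of) another.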
Therefore, for a given reciprocal lattice vector in the first Brillouin zone, and commensurate distortion space (see Supplementary Note 1), $U$ can be decomposed into a finite set of symmetry-invariant irreducible subspaces:
\begin{equation}
U = \bigoplus_\mu m_\mu W_\mu,
\end{equation}
where $m_\mu$ are positive integers indicating the number of times $W_\mu$ appears in $U$.
Consequently, all of the basis vectors ($\mathbf{w}_i^\mu$) are orthogonal, and together form a complete basis for $U$,
\begin{equation}
[\mathbf{w}_i^\mu]^\text{T}\mathbf{w}_{i'}^{\mu'} = \delta_{i,i'}\delta_{\mu,\mu'}.
\end{equation}
Additionally, a similar orthonormality relation exists between irrep matrices,
\begin{equation}
\sum_{g \in G} D_{ij}^{\mu}(g)\left[D_{i'j'}^{\mu'}(g)\right]^* = \dfrac{h}{l_n}\delta_{i,i'}\delta_{j,j'}\delta_{\mu,\mu'},
\label{ortho}
\end{equation}
where $h$ is the number of elements $g \in G$, and $l_n$ is the dimension of the irrep.
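Eq.~\eqref{ortho} is easily checked numerically for a small group. A sketch for the simplest nontrivial case, the two-element group $\{e, s\}$ (isomorphic to $\{1, 1^*\}$) with its two one-dimensional irreps:

```python
# Check the irrep orthogonality relation for Z2 = {e, s}:
# sum_g D^mu(g) [D^nu(g)]* = (h / l_n) * delta_{mu,nu},  h = 2, l_n = 1.

irreps = {
    "trivial": {"e": 1, "s": 1},
    "sign":    {"e": 1, "s": -1},
}

def overlap(mu, nu):
    # irrep entries are real here, so no conjugation is needed
    return sum(irreps[mu][g] * irreps[nu][g] for g in ("e", "s"))
```

The diagonal overlaps give $h/l_n = 2$ and the off-diagonal overlap vanishes, as required.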
These symmetry-adapted basis vectors are prime candidates for perturbations that break some, but not all, distortion symmetry because they have the symmetry that transforms as the kernel of the associated irrep: if $D^{\mu}_{ii'}(g) = \delta_{i,i'}$ for $1\leq i\leq l_n$ (for one dimensional irreps, this simply means that $D^{\mu}(R) = [1]$), $g$ leaves $\mathbf{w}_i^\mu$ unchanged and is thus a symmetry operation of $\mathbf{w}_i^\mu$.
As mentioned, any unstable displacive modes of a path transform as a specific irrep of its distortion symmetry group, and so can be expressed as a linear combination of the described symmetry-adapted basis vectors of the subspace associated with that irrep. Perturbing a path with these therefore provides a feasible, systematic, and accurate alternative to perturbing with any unstable modes that may exist.
\subsubsection{Generating perturbations with projection operators}
One way to generate symmetry-adapted basis vectors is through the method of projection operators \cite{Dresselhaus2008,Bradley2009}. A projection operator $\hat{P}_{kl}^{\mu}$ is an operator that transforms a basis vector $\mathbf{w}_l^\mu$ corresponding to the symmetry invariant subspace associated with $D^\mu$ into another basis vector $\mathbf{w}_k^\mu$ of the same subspace\cite{Dresselhaus2008}:
\begin{equation}
\hat{P}_{kl}^{\mu} \mathbf{w}_l^\mu = \mathbf{w}_k^\mu
\label{proj_app}
\end{equation}
To obtain these operators, let the projection operator be written as the linear combination of symmetry operations $g$:
\begin{equation}
\hat{P}_{kl}^{\mu} = \dfrac{l_n}{h}\sum_{g \in G} A_{kl}(g) g \label{proj_coeff}
\end{equation}
where $A_{kl}(g)$ is a constant. By substituting Eq.~\ref{proj_coeff} into Eq.~\ref{proj_app}, we obtain:
\begin{equation}
\dfrac{l_n}{h}\sum_{g \in G} A_{kl}(g) g \mathbf{w}_l^\mu = \mathbf{w}_k^\mu.
\label{proj_sub1}
\end{equation}
Multiplying both sides by $\left[\mathbf{w}_k^\mu\right]^\text{T}$ then yields
\begin{eqnarray}
\dfrac{l_n}{h}\sum_{g \in G} A_{kl}(g)\left[\mathbf{w}_k^\mu\right]^\text{T} g \mathbf{w}_l^\mu &=& \left[\mathbf{w}_k^\mu\right]^\text{T}\mathbf{w}_k^\mu
\\
\dfrac{l_n}{h}\sum_{g \in G} A_{kl}(g)D^{\mu}_{kl}(g) &=& 1.
\end{eqnarray}
Using the orthonormality relation in Eq.~\ref{ortho}, $A_{kl}(g)$ can then be written as:
\begin{equation}
A_{kl}(g) = \left[D^{\mu}_{kl}(g)\right]^*.
\end{equation}
This finally allows us to write the projection operator $\hat{P}_{kl}^{\mu}$ as:
\begin{equation}
\hat{P}_{kl}^{\mu} = \dfrac{l_n}{h}\sum_{g \in G} \left[D^{\mu}_{kl}(g)\right]^* g \label{proj_op}
\end{equation}
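A minimal worked example of Eq.~\eqref{proj_op} (illustrative, not \texttt{DiSPy} code): take the distortion group to be $\{1, 1^*\}$ acting on the free interior images of a 5-image path by reversing their order. Its two one-dimensional irreps have $D(1^*) = +1$ (even) and $D(1^*) = -1$ (odd), so the projectors reduce to $\hat{P}_\pm = \tfrac{1}{2}(1 \pm 1^*)$.

```python
# Project an arbitrary perturbation onto the even and odd irreps of {1, 1*}.

def reversal(v):
    # 1* acting on the interior-image amplitudes: reverse their order
    return v[::-1]

def project(v, character):
    # P v = (l_n / h) sum_g [D(g)]* g v,  with h = 2, l_n = 1
    return [0.5 * (x + character * y) for x, y in zip(v, reversal(v))]

v = [1.0, 2.0, 5.0]    # arbitrary amplitudes on interior images 1..3
even = project(v, +1)  # symmetric under distortion reversal
odd = project(v, -1)   # antisymmetric: breaks the starred symmetry
```

The two projections recombine to the original vector, and the odd component is exactly the part of the perturbation that lowers the path symmetry by breaking $1^*$.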
It is important to note that applying the projection operator $\hat{P}_{kk}^{\mu}$ (which maps $\mathbf{w}_k^\mu$ to itself) to an arbitrary vector generates the orthogonal projection of said vector onto $\mathbf{w}_k^\mu$. For irreps with $l_n > 1$, we can also generate the other basis vectors by applying $\hat{P}_{kl}^{\mu}$ to $\mathbf{w}_l^\mu$ once it is projected out of an arbitrary vector.
To generate a symmetry adapted perturbation to a path with a distortion symmetry group $G$ that transforms as irrep $D^{\mu}$, $\hat{P}_{kk}^{\mu}$ is applied to the basis vectors of distortion space. This will project out of each basis vector the component of the symmetry adapted basis vector $\mathbf{w}_k^{\mu}$ of $W_\mu \subset U$:
\begin{equation}
\hat{P}_{kk}^{\mu} \mathbf{d}_i^j = c^j_i\mathbf{w}_{k}^{\mu},
\end{equation}
where $c^j_i$ is a constant.
By applying the operator to every basis vector of $U$, all basis vectors of $W_{\mu}$ can be generated. However, repeats of symmetry-adapted basis vectors are likely to be generated, from which a single copy of each linearly independent vector can be chosen to construct the perturbation. For subspaces of dimension larger than one, or for multiple subspaces that may exist in a particular $D^U$, a perturbation created from a linear combination of the basis vectors with different random coefficients will bring the path symmetry down to that of the kernel of the irrep. However, in the former case, path symmetries corresponding to epikernels may be imposed with specific weights for each vector, which can also be exploited to explore higher symmetries if desired.
It should be noted that the sum in Eq.~(\ref{proj_op}) that runs over all group elements $g\in G$ is infinite for distortion groups that include translational symmetry for periodic structures. This is circumvented by choosing a unit cell of the structure, imposing periodic boundary conditions by using a fractional coordinate system, and only summing over symmetry elements within that cell. Depending on the irrep that is chosen to construct the perturbation, the cell choice becomes an important consideration. For irreps at $\mathbf{b}\neq(0,0,0)$, where $\mathbf{b}$ is a reciprocal lattice vector in the first Brillouin zone (see Supplementary Note 1), perturbations will result in a loss of translational symmetry, and a cell must be chosen such that this is accommodated. To help with this, we can use the fact that the elements $D^{\mu}(g)_{kk}$ can be written as
\begin{equation}
D^{\mu}_{kk}(g) = e^{-i\mathbf{b}\boldsymbol{\cdot}\mathbf{A}} D^{\mu}_{kk}(r)
\end{equation}
where $r$ consists of the rotation and nonelementary translational parts of the element $g\in G$, and $\mathbf{A}$ is a vector of the elementary translational component\cite{Kovalev1964,Koster1957,Bradley2009}. Once $\mathbf{b}$ is chosen, if the term $e^{-i\mathbf{b}\boldsymbol{\cdot}\mathbf{A}} \neq 1$ for a particular translation vector $\mathbf{A}$, and for all $\mathbf{b}$ in the star of the group of the wave vector (see Supplementary Note 1), it is those symmetries that will be broken for perturbations constructed with any irrep associated with $\mathbf{b}$. In response, a supercell should be chosen that explicitly includes atoms generated from elements with elementary translations $\mathbf{A}$.
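The supercell criterion can be sketched numerically (assuming the common convention in which the phase is $e^{-2\pi i\,\mathbf{b}\boldsymbol{\cdot}\mathbf{A}}$ with $\mathbf{b}$ in fractional reciprocal coordinates):

```python
# For b = (1/2, 0, 0), the elementary translation A = (1, 0, 0) picks up a
# phase of -1 != 1, so that translation is broken and the cell must be
# doubled along the corresponding direction; A = (2, 0, 0) survives.
import cmath

def phase(b, A):
    dot = sum(bi * Ai for bi, Ai in zip(b, A))
    return cmath.exp(-2j * cmath.pi * dot)

def translation_preserved(b, A, tol=1e-12):
    return abs(phase(b, A) - 1.0) < tol
```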
\section{Implementation and Examples}
To enable the use of the methods described thus far, the projection operator approach to generating the symmetry-adapted perturbations was implemented into a Python package (\texttt{DiSPy})\cite{dispy}. This utility has been made to interface with many of the most popular software packages that support the NEB method, and allows for the generation of images describing perturbed initial paths.
To identify standard spatial symmetries from which the path symmetry group is generated, the well-documented and open-source \texttt{SPGLIB} package\cite{A.2009} is utilized. In order to employ projection operators, the matrix representation of all distortion group elements for any given irrep is required. Since non-magnetic distortion groups are isomorphic to space groups, the listing containing irrep matrices for all of the space groups by Stokes \textit{et al.}\cite{Stokes2013} is used.
A flowchart illustrating the function of \texttt{DiSPy} is presented in Figure \ref{chart}. First, the elements of the distortion group of an inputted initial path are generated from the spatial symmetry elements of the space groups of the individual images obtained using \texttt{SPGLIB}. These consist of matrix-vector pairs representing the symmetry operations of the group in the fractional crystal basis of the inputted structures. Next, these matrices and vectors are transformed into the standard basis of the isomorphic space group as defined in the International Tables for Crystallography (ITA)\cite{Hahn2006}. Using the specific irrep designated by the user, the irrep matrices can then be looked up using these standardized matrix-vector pairs. From here, symmetry-adapted perturbations for the given irrep are generated using projection operators and the basis vectors of the distortion space. These are subsequently applied to the images of the path. Finally, the elements of the new distortion symmetry group are obtained, and the perturbed images are outputted.
\begin{figure}
\centerline{\includegraphics[width=2.6in]{Flowchart.png}}
\caption{Flowchart illustrating the steps taken by \texttt{DiSPy} to generate perturbed initial paths for the NEB algorithm. \label{chart}}
\end{figure}
\subsection{LiNbO$_3$ ferroelectric switching}
To illustrate the described procedure, NEB calculations are applied to the study of ferroelectric switching pathways in LiNbO$_3$, with the first example consisting of uniform bulk switching. Although this provides a simplistic model of how switching proceeds in the material, it can still produce useful insight into the kinds of atomic motion that may be involved.
LiNbO$_3$ is a ferroelectric material with wide use in optical, electro-optical, and piezoelectric applications \cite{Inbar1997}. Below \SI{1480}{\kelvin}, it exists in a polar ground state with rhombohedral symmetry ($R3c$), and above, in a non-polar paraelectric phase ($R\bar{3}c$) \cite{Veithen2002}. The origin of its ferroelectricity can be attributed to a polar mode associated with the Li cations being displaced along the three-fold rhombohedral axis. Reversing the polarization of the structure then involves the motion of these cations along this axis from one oxygen octahedron to another (see Figure \ref{lno_unit}). This switching behavior has previously been explored using density functional theory calculations by Ye and Vanderbilt \cite{Ye2016}. Due to the simplicity of the system, two options for a coherent switching pathway were put forward and studied: one in which the two Li cations move simultaneously (obtained by a simple linear interpolation between the images in Figure \ref{lno_unit}), and one in which they move sequentially for at least some portion of the path. For the pathway resulting from simultaneous motion, the intermediate image is that of the high-temperature paraelectric structure ($R\bar{3}c$), and for the sequential pathway, it is a lower symmetry structure with $R\bar{3}$ symmetry \cite{Ye2016}. In order to calculate the energy profile of the sequential path, DFT calculations were run to obtain the total energy of static structures at various points along the reaction pathway \cite{Ye2016}. To incorporate the sequential motion of the Li cations into the calculations, structural optimizations of the static images were completed with the Li atoms frozen in place at various positions along the three-fold axis.
The resulting sequential path showed a lower energy profile than that of the path with simultaneous motion, revealing the surprising fact that the midpoint of the switching path is not the high-symmetry paraelectric structure with $R\bar{3}c$ symmetry.
\begin{figure}
\centerline{\includegraphics[width=3.0in]{Fig1.png}}
\caption{The initial and final states of the primitive unit cell for bulk polarization switching in LiNbO$_3$. Arrows indicate the direction of the polarization along $[111]$. The switching process consists of coordinated motion of the Li cations along this three-fold axis as they move from one oxygen octahedron to another.\label{lno_unit}}
\end{figure}
Although the above result was obtained without the use of the NEB method, the problem can be simplified by using NEB alongside considerations of distortion symmetry. Choosing the simultaneous (linearly interpolated) path as the starting point for an NEB calculation produces the high energy profile shown in Fig.~\ref{lno_dat1}b. As mentioned, since the NEB algorithm cannot break the distortion symmetry of a path, the group of the path is conserved throughout the calculation. This, in turn, makes the lower energy sequential path inaccessible without perturbation. To find the distortion group, the space group of each of the images is obtained. These are listed above each of the image illustrations for the linear path outlined in black in Fig.~\ref{lno_dat1}c. Next, the elements of $H$ and $A$ can be found as,
\begin{eqnarray}
&H& = \{1,3^{+}_{111},3^{-}_{111},n_{\bar{1}01},n_{1\bar{1}0},n_{01\bar{1}}\}\\
&A& = \{\bar{1},2_{\bar{1}01},2_{1\bar{1}0},2_{01\bar{1}},\bar{3}_{111}^{+},\bar{3}_{111}^{-}\},
\end{eqnarray}
written in ITA\cite{Hahn2006} notation in the rhombohedral cell basis. Finally, the distortion group can be obtained:
\begin{equation}
G = H \cup 1^* A = R\bar{3}^*c
\end{equation}
By considering the non-trivial irreps of the group $R\bar{3}^*c$, symmetry-adapted perturbations to the initial relaxed path can be constructed and applied. Since the path images contain just the primitive rhombohedral cell, only non-trivial irreps at $\Gamma$, $\mathbf{b} = (0,0,0)$, can be used, as no perturbations that break translational symmetry can be accommodated. The characters of the matrices in these irreps can be seen in the character table shown in Fig.~\ref{lno_dat1}a. The full irrep matrices can be found using the listing by Stokes \textit{et al.}, and projection operators can then be used to generate perturbations to the path. For most of the perturbations (those constructed using the irreps highlighted in black in Fig.~\ref{lno_dat1}a), running an NEB calculation simply returns the path to one with a distortion group of $R\bar{3}^*c$. However, for the new initial path resulting from perturbing with symmetry-adapted basis vectors constructed with the $\Gamma_{2+}$ irrep (see Fig.~\ref{lno_dat1}c), the lower energy sequential path is obtained. This path has a distortion symmetry group of $R\bar{3}^*$, which is equal to that of the kernel of $\Gamma_{2+}$ (Fig.~\ref{lno_dat1}a). From here, further perturbations can be applied to the newly obtained path. Using the irreps of the group $R\bar{3}^*$, symmetry-adapted perturbations are constructed and applied to the respective relaxed paths. NEB calculations are then run, with all final paths returning to the initial $R\bar{3}^*$ path. This indicates that there are no additional lower energy paths of similar character to be found.
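The projection-operator step can be illustrated for a one-dimensional irrep. The representation assumed below (each group element given by a Cartesian rotation, an atom permutation, and a flag marking starred elements, which reverse the image order of the path) is a simplified stand-in for the full \texttt{DiSPy} machinery:

```python
import numpy as np

def project(perturbation, group, chars):
    """Projection operator for a one-dimensional irrep:
    P = (1/|G|) * sum_g chi(g)* D(g), acting on a trial path
    perturbation of shape (N_images, N_atoms, 3).  Each group element
    is (R, atom_perm, starred): a 3x3 Cartesian rotation, the atom
    permutation it induces, and whether it is starred (starred
    elements reverse the order of the path images)."""
    out = np.zeros_like(perturbation, dtype=float)
    for (R, perm, starred), chi in zip(group, chars):
        moved = perturbation[::-1] if starred else perturbation
        moved = moved[:, perm] @ np.asarray(R, float).T
        out += chi * moved
    return out / len(group)
```

For the two-element toy group $\{E,\, \bar{1}^*\}$, the two 1D irreps (characters $[1,1]$ and $[1,-1]$) split any trial perturbation into complementary invariant subspaces, and the projectors are idempotent and mutually orthogonal.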
\begin{figure*}
\centerline{\includegraphics[width=\linewidth]{ALT_FIG1.png}}
\caption{Path perturbations and NEB data for bulk ferroelectric switching in LiNbO$_3$. (a) Character table of the distortion symmetry group of the initial simultaneous path ($R\bar{3}^*c$) at $\vec{b}=(0,0,0)$. Perturbations constructed for a specific irrep will reduce the path symmetry to that of the kernel shown in the last column. (b) The energy relative to the initial and final state as a function of reaction coordinate for the simultaneous ($R\bar{3}^*c$) and sequential ($R\bar{3}^*$) pathways obtained from NEB calculations. The sequential path is obtained by perturbing the initial relaxed path using symmetry-adapted perturbations constructed from the $\Gamma_2^{+}$ irrep. (c) Snapshots of images along the relaxed simultaneous and sequential pathways given by the NEB algorithm. The $R\bar{3}^*c$ and $R\bar{3}^*$ pathways are illustrated by a black and blue outline respectively. The space group of each image is shown above it. Perturbations to the initial $R\bar{3}^*c$ path are shown for the atoms involved in the distortion (Li and O). Black, brown, green, and orange arrows indicate perturbations along $[111]$, $[121]$, $[112]$, and $[211]$ respectively.\label{lno_dat1}}
\end{figure*}
\begin{figure*}
\centerline{\includegraphics[width=\linewidth]{Fig4_v3.png}}
\caption{Path perturbations using other reciprocal lattice vectors. (a) Illustration of the unit cell and $2\times2\times2$ supercell of LiNbO$_3$. The high symmetry structure with $R\bar{3}c$ symmetry is shown, along with the positions of all the Li atoms in each of the eight primitive rhombohedral unit cells. Arrows and labels indicate atomic displacements of Li from the high-symmetry positions. (b) The energy relative to the initial and final state as a function of reaction coordinate for the simultaneous ($R\bar{3}^*c$) and sequential ($R\bar{3}^*$) paths, as well as the lower energy paths from their perturbation. Paths found by perturbing the simultaneous and sequential paths are shown with solid and dotted lines respectively. The perturbations were obtained using symmetry-adapted basis vectors constructed from the $F_{2-}$ and $F_{1-}$ irreps of $R\bar{3}^*c$ and the $F_{1-}$ and $F_{1+}$ irreps of $R\bar{3}^*$. (c) Plot of the displacement of both Li atoms within each primitive rhombohedral unit cell along each of the cell vectors in panel (a) as a function of reaction coordinate for the blue path in panel (b). The high-symmetry structure with $R\bar{3}c$ symmetry is taken as reference. What is illustrated is a switching process with a lower overall energy barrier that involves the step wise transition of different pairs of primitive cells.\label{lno_dat2}}
\end{figure*}
In order to construct perturbations using other high-symmetry $\mathbf{b}$-vectors, a $2\times2\times2$ rhombohedral supercell can instead be used for all of the images in the initial $R\bar{3}^*c$ path. In turn, this allows for the exploration of the potential ways in which coordinated motion between different unit cells can reduce the cost of switching.
An illustration of the supercell can be seen in Fig.~\ref{lno_dat2}a. Running NEB calculations on the initial $R\bar{3}^*c$ path results in the energy profile shown in black in Fig.~\ref{lno_dat2}b. Choosing the $F$ reciprocal lattice point at $\mathbf{b} = (1/2,1/2,0)$, path perturbations can then be generated. The loss of translational symmetry can be identified from the three $\mathbf{b}$-vectors in the star of the wave-vector, $\mathbf{b}=(1/2,1/2,0),(0,1/2,1/2),(1/2,0,1/2)$ (see Supplementary Note 1). Since only the unit translation vectors $\mathbf{A} = (0,0,0)$ and $\mathbf{A} = (1,1,1)$ result in $e^{-i\mathbf{b}\boldsymbol{\cdot}\mathbf{A}} = 1$ for every member of the star, all other unit translations in the supercell will be broken after perturbation. Furthermore, it is important to note that all irreps associated with the $F$-point are three-dimensional. Consequently, three symmetry-adapted basis vectors will be generated by applying the projection operators, which in linear combination make up the final path perturbation. Since we would like to perturb to the kernel symmetry of each irrep, a random and different coefficient for each is chosen.
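This translation-counting argument can be checked directly: a supercell translation $\mathbf{A}$ survives the perturbation only if the phase factor equals one for every $\mathbf{b}$ in the star (here written as $e^{-2\pi i\,\mathbf{b}\cdot\mathbf{A}}$ with $\mathbf{b}$ in fractional reciprocal coordinates):

```python
import cmath
from itertools import product

star = [(0.5, 0.5, 0.0), (0.0, 0.5, 0.5), (0.5, 0.0, 0.5)]  # star of F

def phase(b, A):
    """e^{-2 pi i b.A} for fractional b and an integer translation A."""
    return cmath.exp(-2j * cmath.pi * sum(bi * Ai for bi, Ai in zip(b, A)))

# unit translations of the 2x2x2 supercell that survive the perturbation
kept = [A for A in product((0, 1), repeat=3)
        if all(abs(phase(b, A) - 1.0) < 1e-12 for b in star)]
# kept contains only (0, 0, 0) and (1, 1, 1)
```

Note that the full star is needed: $\mathbf{b}=(1/2,1/2,0)$ alone would also leave $\mathbf{A}=(1,1,0)$ unbroken.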
After constructing perturbations with both $F$-point irreps, two new lower energy paths are obtained that both have trivial path symmetry groups ($P1$). Both are similar in character, and their energy profiles can be seen in Fig.~\ref{lno_dat2}b. In each case, the switching process proceeds in a stepwise manner, with different pairs of unit cells switching simultaneously. In both paths, the two Li atoms in each primitive cell move together, and their displacement from their positions in the high-symmetry $R\bar{3}c$ structure is plotted in Fig.~\ref{lno_dat2}c for the path resulting from perturbing with the $F_{2-}$ irrep. The same procedure has also been completed for the sequential path with $R\bar{3}^*$ symmetry. Two paths with lower energy than the initial path are also found, and their energy profiles are plotted in Fig.~\ref{lno_dat2}b. Similar coordinated motion between Li atoms across primitive cells can be seen, but with four of the eight cells transitioning at a time. The Li displacements are plotted in Supplementary Figs. 2-3. Overall, the coordinated motion between different primitive cells exhibited by all of the obtained paths allows for a lowering of the maximum switching barrier, and may inform how nucleation and growth switching processes could begin at the unit cell scale at domain walls in the material.
\section{Conclusion}
In this paper, we have presented the mathematical formalism of the distortion symmetry method described by Munro \textit{et al.}\cite{Munro2018}. By considering the distortion space for a given path, projection operators can be applied to generate symmetry-adapted basis vectors of its symmetry-invariant subspaces. These basis vectors can then be used to perturb a path, reducing its symmetry to that of the kernel of the irreducible representation associated with a particular subspace. In turn, this allows one to break free of the distortion symmetry conservation exhibited by the NEB algorithm, induce path instabilities, and explore additional low energy paths that may exist. The described procedure has been implemented in a Python package (\texttt{DiSPy})\cite{dispy}, and has been applied to bulk ferroelectric switching in LiNbO$_3$. Previously reported paths are recreated, and additional low-energy paths are found that involve the breaking of translational symmetry. These provide insight into the coordinated motion of atoms across unit cells that may be involved in the switching process. We foresee the generation and application of symmetry-adapted perturbations being an integral part of NEB calculations in the future. Furthermore, we envision possible extensions to the distortion symmetry framework that involve other types of symmetry, such as distortion translation\cite{Padmanabhan2017,Liu2018}.
\section*{Methods}
All NEB \cite{JONSSON1998} calculations were completed using the Vienna Ab Initio Simulation Package (VASP) \cite{Kresse1993,Kresse1996,Kresse1996a,Kresse1999} after obtaining the optimized geometries of the end state structures. All perturbations were generated and applied using the \texttt{DiSPy} package \cite{dispy}, and were normalized such that the maximum displacement of any one atom along any of the rhombohedral cell vector directions was 0.05~\AA.
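The stated normalization amounts to a single rescaling; \texttt{normalize\_perturbation} is a hypothetical helper (not the \texttt{DiSPy} internals), with displacements assumed to be given per cell-vector component in \AA:

```python
import numpy as np

def normalize_perturbation(displacements, target=0.05):
    """Rescale a perturbation so that the largest displacement of any
    one atom along any cell-vector direction equals `target` (in
    Angstrom).  `displacements` has shape (N_images, N_atoms, 3)."""
    return displacements * (target / np.abs(displacements).max())
```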
All first-principles calculations were completed using the revised Perdew-Burke-Ernzerhof generalized-gradient approximation functional \cite{Perdew2008} (PBEsol), which has been shown to improve the description of densely packed solids. 6$\times$6$\times$6 and 3$\times$3$\times$3 $k$-point meshes were used for the unit cell and supercell calculations respectively. A \SI{600}{\electronvolt} plane-wave cutoff was used for all calculations, with energy error thresholds of \SI{1e-6}{\electronvolt} and \SI{1e-5}{\electronvolt} for the geometry optimizations and NEB calculations respectively. The two types of calculations were run until forces were below \SI{0.001}{\electronvolt\per\angstrom} and \SI{0.01}{\electronvolt\per\angstrom} respectively. The projector augmented wave method was used to represent the ionic cores. There were 3 electrons for Li ($1s^{2}2s^{1}$), 13 electrons for Nb ($4s^{2}4p^{6}4d^{4}5s^{1}$), and 6 electrons for O ($2s^{2}2p^{4}$) treated explicitly.
\section*{Data Availability}
The data that support the findings of this study are available from the corresponding author upon request.
\section*{Acknowledgments}
This material is based upon work supported by the National Science Foundation under Grants No. 1807768 and No. 1210588. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), and the NSF-MRSEC Center for Nanoscale Science at the Pennsylvania State University, Grant No. DMR-1420620. J.M.M. and I.D. also acknowledge partial support from the Soltis faculty support award and the Ralph E. Powe junior faculty award from Oak Ridge Associated Universities.
\vspace{-0.3cm}
\section*{Author Contributions}
The implementation of the distortion symmetry method was completed by J.M. and V.L. All calculations presented were completed by J.M. The manuscript was written by J.M., V.S., and I.D. All authors discussed the results and implications, and commented on the manuscript at all stages.
\\
\noindent \textbf{Competing interests:} The authors declare no competing financial or non-financial interests.
\vfill
\section{Introduction}
In relativistic heavy-ion collisions carried out at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC), one aims at searching for a new form of matter --- the Quark-Gluon Plasma (QGP)~\cite{PBM_QGP} --- and studying its properties in the laboratory. The production of J/$\psi$ and dileptons in heavy-ion collisions provides key measurements to probe the formation of the QGP. Due to the color screening of the quark-antiquark potential in the deconfined medium, the production of J/$\psi$ would be significantly suppressed, which was proposed as a direct signature of QGP formation~\cite{MATSUI1986416}. After decades of experimental and theoretical efforts, it is recognized that other mechanisms, such as the recombination of deconfined charm quarks in the QGP and cold nuclear matter (CNM) effects, also modify J/$\psi$ production significantly in heavy-ion collisions. Currently, the interplay of these effects can qualitatively explain the J/$\psi$ yields measured so far at the SPS, RHIC, and the LHC~\cite{STAR_Jpsi_AuAu_BES}. Dileptons have been proposed as ``penetrating probes'' of the hot and dense medium~\cite{SHURYAK198071}, because they are not subject to the violent strong interactions in the medium. Various dilepton measurements have been performed in heavy-ion collisions. A clear enhancement in the low mass region ($M_{ll} < M_{\phi}$) has been observed, which is consistent with in-medium broadening of the $\rho$ mass spectrum~\cite{PhysRevLett.79.1229,KOHLER2014665}, while the excess in the intermediate mass region ($M_{\phi} < M_{ll} < M_{J/\psi}$) is believed to originate from QGP thermal radiation~\cite{PhysRevC.63.054907}.
J/$\psi$ and dileptons can also be generated by the intense electromagnetic fields accompanying the relativistic heavy ions~\cite{UPCreview}. The intense electromagnetic field can be viewed as a spectrum of equivalent photons via the equivalent photon approximation~\cite{KRAUSS1997503}. A quasi-real photon emitted by one nucleus can fluctuate into a $c\bar{c}$ pair, scatter off the other nucleus, and emerge as a real J/$\psi$. A virtual photon from one nucleus can also interact with a photon from the other, resulting in the production of dileptons, represented as $\gamma + \gamma \rightarrow l^{+} + l^{-}$. The coherent nature of these interactions gives the processes distinctive characteristics: the final products consist of a J/$\psi$ (or dilepton pair) with very low transverse momentum, two intact nuclei, and nothing else. Conventionally, these reactions are only visible and studied in Ultra-Peripheral Collisions (UPC), in which the impact parameter ($b$) is larger than twice the nuclear radius ($R_{A}$), so that hadronic interactions are avoided.
Can coherent photon products also exist in Hadronic Heavy-Ion Collisions (HHIC, $b < 2R_{A}$), where violent strong interactions occur in the overlap region? The story starts with the measurements from ALICE: significant excesses of the J/$\psi$ yield at very low $p_{T}$ ($< 0.3$ GeV/c) were observed in peripheral Pb+Pb collisions at $\sqrt{s_{\rm{NN}}} =$ 2.76 TeV~\cite{LOW_ALICE}, which cannot be explained by hadronic J$/\psi$ production with the known cold and hot medium effects. STAR performed the same measurements in Au+Au collisions at $\sqrt{s_{\rm{NN}}} =$ 200 GeV~\cite{1742-6596-779-1-012039}, and also observed a significant enhancement at very low $p_{T}$ in peripheral collisions. The observed anomalous excesses possess the characteristics of coherent photoproduction and can be quantitatively described by theoretical calculations with a coherent photon-nucleus production mechanism~\cite{PhysRevC.93.044912,PhysRevC.97.044910,SHI2018399}, which points to evidence of coherent photon-nucleus reactions in HHIC. If coherent photonuclear production is the underlying mechanism for the observed J/$\psi$ excess, coherent photon-photon production should also be present and contribute to dilepton pair production in HHIC. Based on this line of thought, STAR measured the dielectron spectrum at very low $p_{T}$ in peripheral collisions, and indeed significant excesses were observed~\cite{PhysRevLett.121.132301}, which can be reasonably described by the coherent photon-photon production mechanism~\cite{ZHA2018182,PhysRevC.97.054903}. The isobaric collision experiment recently completed in the 2018 run at RHIC ($^{96}_{44}\rm{Ru} + ^{96}_{44}\rm{Ru}$ and $^{96}_{40}\rm{Zr} + ^{96}_{40}\rm{Zr}$) provides a unique opportunity to further address this issue.
The idea is as follows: the production yield originating from coherent photon-nucleus interactions should be proportional to $Z^{2}$, while the production rate of the coherent photon-photon process is proportional to $Z^{4}$; consequently, the excesses of J$/\psi$ and dielectrons would differ significantly between isobaric collisions and Au+Au collisions for centralities with the same hadronic background. In this letter, we report calculations of the coherent production of J/$\psi$ and dielectrons in isobaric collisions to provide a theoretical baseline for further experimental tests. The centrality dependence of the coherent products is presented and compared to that in Au+Au collisions. The difference in the $t$ distributions of J/$\psi$ between isobaric collisions and Au+Au collisions is also discussed within the current framework.
\section{Theoretical formalism}
According to the equivalent photon approximation, the coherent photon-nucleus and photon-photon interactions in heavy-ion collisions can be factorized into a semiclassical part and a quantum part. The semiclassical part deals with the distribution of quasi-real photons induced by the colliding ions, while the quantum part handles the photon-Pomeron or photon-photon interactions. The cross section for J/$\psi$ from coherent photon-nucleus interactions can be written as~\cite{UPC_JPSI_PRC,UPC_JPSI_PRL}:
\begin{equation}
\label{equation1}
\sigma({A + A} \rightarrow {A + A} + \text{J}/\psi) = \int d\omega n(\omega)\sigma(\gamma A \rightarrow \text{J}/\psi A),
\end{equation}
where $\omega$ is the photon energy, $n(\omega)$ is the photon flux at energy $\omega$, and $\sigma(\gamma A \rightarrow \text{J}/\psi A)$ is the photonuclear interaction cross-section for J$/\psi$.
Similarly, the dielectron production from coherent photon-photon reactions can be calculated via~\cite{Klein:2016yzr}:
\begin{equation}
\begin{aligned}
&\sigma (A + A \rightarrow A + A + e^{+}e^{-})
\\
& =\int d\omega_{1}d\omega_{2} \frac{n(\omega_{1})}{\omega_{1}}\frac{n(\omega_{2})}{\omega_{2}}\sigma(\gamma \gamma \rightarrow e^{+}e^{-}),
\label{equation2}
\end{aligned}
\end{equation}
where $\omega_{1}$ and $\omega_{2}$ are the photon energies from the two colliding beams, and $\sigma(\gamma \gamma \rightarrow e^{+}e^{-})$ is the photon-photon reaction cross-section for dielectron.
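The structure of this double folding can be sketched numerically. The flux and elementary cross section below are toy stand-ins chosen so the quadrature can be checked analytically; the actual calculation folds the Weizs\"acker-Williams flux of the ions with the Breit-Wheeler cross section:

```python
import math

def fold_two_photon(n_flux, sigma_gg, w_max, steps=200):
    """Midpoint-rule evaluation of
        sigma = Int dw1 dw2  n(w1)/w1 * n(w2)/w2 * sigma_gg(W),
    with pair mass W = 2*sqrt(w1*w2).  n_flux and sigma_gg are
    user-supplied callables (toy stand-ins in the test below)."""
    h = w_max / steps
    total = 0.0
    for i in range(steps):
        w1 = (i + 0.5) * h
        for j in range(steps):
            w2 = (j + 0.5) * h
            W = 2.0 * math.sqrt(w1 * w2)
            total += n_flux(w1) / w1 * n_flux(w2) / w2 * sigma_gg(W) * h * h
    return total
```

With $n(\omega)=\omega$ and $\sigma_{\gamma\gamma}=1$ over the unit square, the integral is exactly 1, which provides a simple check of the quadrature.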
The photon flux induced by the heavy ions can be modelled using the Weizs\"acker-Williams method~\cite{KRAUSS1997503}:
\begin{equation}
\label{equation3}
\begin{aligned}
& n(\omega,r) = \frac{4Z^{2}\alpha}{\omega} \bigg | \int \frac{d^{2}q_{\bot}}{(2\pi)^{2}}q_{\bot} \frac{F(q)}{q^{2}} e^{iq_{\bot} \cdot r} \bigg |^{2}
\\
& q = (q_{\bot},\frac{\omega}{\gamma})
\end{aligned}
\end{equation}
where $n(\omega,r)$ is the flux of photons with energy $\omega$ at distance $r$ from the center of the nucleus, $\alpha$ is the electromagnetic coupling constant, $\gamma$ is the Lorentz factor, and the form factor $F(q)$ is the Fourier transform of the charge distribution in the nucleus. In the calculations, we employ the Woods-Saxon form to model the nucleon distribution of the nucleus in spherical coordinates:
\begin{equation}
\rho_{A}(r,\theta)=\frac{\rho_{0}}{1+\exp[(r-R_{\rm{WS}}-\beta_{2}R_{\rm{WS}}Y_{2}^{0}(\theta))/d]},
\label{equation4}
\end{equation}
where $\rho_{0} = 0.16 \rm{\ fm}^{-3}$, $R_{\rm{WS}}$ and $d$ are the ``radius'' and the surface diffuseness parameter, respectively, and $\beta_{2}$ is the deformation parameter of the nucleus. The deformation parameter $\beta_{2}$ for Ru and Zr is ambiguous and is important for bulk correlation physics~\cite{PhysRevC.97.044901}; however, it has only a minor effect on our calculations. For simplicity, $\beta_{2}$ is set to 0. The ``radius'' $R_{\rm{WS}}$ (Au: 6.38 fm, Ru: 5.02 fm, Zr: 5.02 fm) and surface diffuseness parameter $d$ (Au: 0.535 fm, Ru: 0.46 fm, Zr: 0.46 fm) are based on fits to electron scattering data~\cite{0031-9112-29-7-028}. Fig.~\ref{figure1} shows the two-dimensional distributions of the photon flux induced in isobaric collisions at $\sqrt{s_{\rm{NN}}} =$ 200 GeV as a function of distance $r$ and photon energy $\omega$ for Ru + Ru (left panel) and Zr + Zr (right panel).
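The Woods-Saxon form above transcribes directly (lengths in fm, densities in fm$^{-3}$; $\beta_{2}$ enters through the spherical harmonic $Y_{2}^{0}$ and is set to zero here, as in the calculation):

```python
import math

def woods_saxon(r, R_ws, d, rho0=0.16, beta2=0.0, theta=0.0):
    """Woods-Saxon nucleon density (fm^-3); the beta2 deformation
    enters through Y_2^0 and is zero in the present calculation."""
    Y20 = math.sqrt(5.0 / (16.0 * math.pi)) * (3.0 * math.cos(theta) ** 2 - 1.0)
    R = R_ws + beta2 * R_ws * Y20
    return rho0 / (1.0 + math.exp((r - R) / d))

# parameter sets quoted in the text
rho_Au = lambda r: woods_saxon(r, 6.38, 0.535)
rho_Ru = lambda r: woods_saxon(r, 5.02, 0.46)
```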
\renewcommand{\floatpagefraction}{0.75}
\begin{figure*}[htbp]
\includegraphics[keepaspectratio,width=0.45\textwidth]{flux_distribution_Ru.pdf}
\includegraphics[keepaspectratio,width=0.45\textwidth]{flux_distribution_Zr.pdf}
\caption{Two-dimensional distributions of the photon flux as a function of distance $r$ and photon energy $\omega$ for Ru + Ru (left panel) and Zr + Zr (right panel) collisions at $\sqrt{s_{\rm{NN}}} =$ 200 GeV.}
\label{figure1}
\end{figure*}
The cross-section for $\gamma A \rightarrow \text{J}/\psi A$ reaction can be derived from a quantum Glauber approach coupled with the parameterized forward scattering cross section $\frac{d\sigma(\gamma p \rightarrow \text{J}/\psi p)}{dt}|_{t=0}$ as input~\cite{PhysRevC.93.044912,UPC_JPSI_PRC,PhysRevC.97.044910}:
\begin{equation}
\label{equation3_1}
\begin{split}
&\sigma(\gamma A \rightarrow \text{J}/\psi A)=\frac{d\sigma(\gamma A \rightarrow \text{J}/\psi A)}{dt}\bigg|_{t=0} \times\\
&\int|F_{P}(\vec{k}_{P})|^{2}d^{2}{\vec{k}_{P\bot}} \ \ \ \ \ \ \vec{k}_{P}=(\vec{k}_{P\bot},\frac{ \omega_{P}}{\gamma_{c}})\\
& \omega_{P} = \frac{1}{2}M_{\text{J}/\psi} e^{\pm y} = \frac{M_{\text{J}/\psi}^{2}}{4\omega_{\gamma}}
\end{split}
\end{equation}
\begin{equation}
\label{equation3_2}
\frac{d\sigma(\gamma A \rightarrow \text{J}/\psi A)}{dt}\bigg|_{t=0}=C^{2}\frac{\alpha \sigma_{tot}^{2}(\text{J}/\psi A)}{4f_{\text{J}/\psi}^{2}}
\end{equation}
\begin{equation}
\label{equation3_3}
\sigma_{tot}(\text{J}/\psi A)=2\int(1-\exp(-\frac{1}{2}\sigma_{tot}(\text{J}/\psi p)T_{A}(x_{\bot})))d^{2}x_{\bot}
\end{equation}
\begin{equation}
\label{equation3_4}
\sigma_{tot}^{2}(\text{J}/\psi p)=16\pi\frac{d\sigma(\text{J}/\psi p \rightarrow \text{J}/\psi p)}{dt}\bigg|_{t=0}
\end{equation}
\begin{equation}
\label{equation3_5}
\frac{d\sigma(\text{J}/\psi p \rightarrow \text{J}/\psi p)}{dt}\bigg|_{t=0}=\frac{f_{\text{J}/\psi}^{2}}{4\pi \alpha C^{2}}\frac{d\sigma(\gamma p \rightarrow \text{J}/\psi p)}{dt}\bigg|_{t=0}
\end{equation}
where $T_{A}(x_{\bot})$ is the nuclear thickness function, $-t$ is the squared four-momentum transfer, and $f_{\text{J}/\psi}$ is the J/$\psi$-photon coupling. Eqs.~\ref{equation3_2} and~\ref{equation3_5} are relations from the vector meson dominance model~\cite{RevModPhys.50.261}, and the correction factor $C$ is adopted to account for the non-diagonal coupling through higher mass vector mesons~\cite{HUFNER1998154}, as implemented in the generalized vector dominance model~\cite{PhysRevC.57.2648}. Eq.~\ref{equation3_4} is the optical theorem relation, and the parametrization of the forward cross section $\frac{d\sigma(\gamma p \rightarrow \text{J}/\psi p)}{dt}|_{t=0}$ in Eq.~\ref{equation3_5} is taken from~\cite{Klein:2016yzr}.
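The Glauber relation for $\sigma_{tot}(\text{J}/\psi A)$ can be evaluated with simple quadrature; the grid sizes and the default Au Woods-Saxon parameters below are illustrative choices, not the production settings of the calculation:

```python
import math

def thickness(b, R_ws=6.38, d=0.535, rho0=0.16, zmax=20.0, nz=400):
    """Nuclear thickness T_A(b) = Int rho(sqrt(b^2 + z^2)) dz (fm^-2),
    for a Woods-Saxon density (Au parameters by default)."""
    h = 2.0 * zmax / nz
    total = 0.0
    for i in range(nz):
        z = -zmax + (i + 0.5) * h
        r = math.sqrt(b * b + z * z)
        total += rho0 / (1.0 + math.exp((r - R_ws) / d)) * h
    return total

def sigma_tot_jpsiA(sigma_jpsip, bmax=15.0, nb=300):
    """Glauber total J/psi-nucleus cross section,
    2 * Int (1 - exp(-sigma(J/psi p) T_A(b) / 2)) d^2 b,
    with sigma_jpsip in fm^2; returns fm^2."""
    h = bmax / nb
    total = 0.0
    for i in range(nb):
        b = (i + 0.5) * h
        total += 2.0 * (1.0 - math.exp(-0.5 * sigma_jpsip * thickness(b))) \
                 * 2.0 * math.pi * b * h
    return total
```

In the dilute limit the expression reduces to $\sigma_{tot}(\text{J}/\psi p)\int T_{A}\,d^{2}b$, i.e. it scales linearly with the elementary cross section times the nucleon number, which provides a useful sanity check.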
The elementary cross-section to produce an electron-positron pair with electron mass $m$ and pair invariant mass $W$ is given by the Breit-Wheeler formula~\cite{PhysRevD.4.1532}:
\begin{equation}
\label{equation5}
\begin{aligned}
& \sigma (\gamma \gamma \rightarrow l^{+}l^{-}) =
\\
&\frac{4\pi \alpha^{2}}{W^{2}} [(2+\frac{8m^{2}}{W^{2}} - \frac{16m^{4}}{W^{4}})\text{ln}(\frac{W+\sqrt{W^{2}-4m^{2}}}{2m})
\\
& -\sqrt{1-\frac{4m^{2}}{W^{2}}}(1+\frac{4m^{2}}{W^{2}})].
\end{aligned}
\end{equation}
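The Breit-Wheeler formula above can be transcribed directly (natural units, with $W$ and $m$ in GeV and the result in GeV$^{-2}$; writing $x = 4m^{2}/W^{2}$ for brevity):

```python
import math

def sigma_breit_wheeler(W, m=0.000511, alpha=1.0 / 137.035999):
    """Breit-Wheeler gamma gamma -> l+ l- cross section; zero below
    the pair-production threshold W = 2m."""
    if W <= 2.0 * m:
        return 0.0
    x = 4.0 * m * m / (W * W)   # so 8m^2/W^2 = 2x and 16m^4/W^4 = x^2
    pref = 4.0 * math.pi * alpha ** 2 / (W * W)
    log_term = math.log((W + math.sqrt(W * W - 4.0 * m * m)) / (2.0 * m))
    return pref * ((2.0 + 2.0 * x - x * x) * log_term
                   - math.sqrt(1.0 - x) * (1.0 + x))
```

The cross section vanishes at threshold and falls off roughly as $\ln(W/m)/W^{2}$ at large $W$, which is why the yield is concentrated among near-threshold pairs.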
The angular distribution of these positron-electron pairs can be given by
\begin{equation}
G(\theta) = 2 + 4(1-\frac{4m^{2}}{W^{2}})\frac{(1-\frac{4m^{2}}{W^{2}})\text{sin}^{2}(\theta)\text{cos}^{2}(\theta)+\frac{4m^{2}}{W^{2}}}{(1-(1-\frac{4m^{2}}{W^{2}})\text{cos}^{2}(\theta))^{2}},
\label{equation6}
\end{equation}
where $\theta$ is the angle between the beam direction and one of the electrons in the positron-electron center of mass frame.
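The angular distribution can be transcribed similarly (using $\beta^{2} = 1 - 4m^{2}/W^{2}$ as shorthand, so $4m^{2}/W^{2} = 1 - \beta^{2}$):

```python
import math

def G_angular(theta, W, m=0.000511):
    """Angular distribution of Breit-Wheeler pairs; theta is the angle
    between the beam and one lepton in the pair c.m. frame."""
    beta2 = 1.0 - 4.0 * m * m / (W * W)
    c2 = math.cos(theta) ** 2
    s2 = math.sin(theta) ** 2
    return 2.0 + 4.0 * beta2 * (beta2 * s2 * c2 + (1.0 - beta2)) \
        / (1.0 - beta2 * c2) ** 2
```

The distribution is symmetric under $\theta \rightarrow \pi - \theta$, isotropic at threshold, and strongly forward-backward peaked for $W \gg 2m$.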
The approaches employed in this calculation are well established in UPC, where they quantitatively describe the experimental measurements~\cite{SHI2018399,PhysRevC.70.031902,DYNDAL2017281,Abbas2013,2017489}. However, the energetic strong interactions in HHIC could have a significant impact on the coherent production. The possible disruptive effects can be factorized into two distinct sub-processes: photon emission and external disturbance in the overlap region. The equivalent photon field is highly contracted into the transverse plane and travels along with the colliding nuclei in the laboratory frame, so the coherent photon-nucleus and photon-photon interactions occur at almost the same time as the violent hadronic collisions. Due to the retarded potential, the quasi-real photons are likely to be emitted before the hadronic collision by about $\Delta t = \gamma R/c$, where $\gamma$ is the Lorentz factor and $R$ is the transverse distance from the colliding nuclei. Hence, the photon emission should be unaffected by the hadronic collisions. In the overlap region, however, the photoproduced particles could be affected by the violent hadronic interactions, leading to a loss of coherence. For coherent photon-photon interactions, the final product is an electron-positron pair, which is not subject to strong interactions, so the disruptive effect from the overlap region should be small enough to be neglected in the calculations. The J/$\psi$'s from coherent photon-nucleus interactions, in contrast, are sensitive to the hadronic interactions, and thus their production in the overlap region is prohibited in our approach. Furthermore, as described in Ref.~\cite{PhysRevC.97.044910}, the interference effect is included in the calculations of the coherent photon-nucleus process.
\section{Results}
\renewcommand{\floatpagefraction}{0.75}
\begin{figure}[htbp]
\includegraphics[keepaspectratio,width=0.45\textwidth]{drawyield.pdf}
\caption{Yields of coherent J/$\psi$ production as a function of $N_{\rm{part}}$ at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au (solid line), Ru+Ru (dotted line), and Zr+Zr (dashed line) collisions.}
\label{figure2}
\end{figure}
\renewcommand{\floatpagefraction}{0.75}
\begin{figure}[htbp]
\includegraphics[keepaspectratio,width=0.45\textwidth]{drawt.pdf}
\caption{The $t$ distribution of coherently produced J/$\psi$ at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au collisions for $60-80\%$ centrality class (solid line), Ru+Ru collisions for $47-75\%$ centrality class (dotted line), and Zr+Zr collisions for $47-75\%$ centrality class (dashed line). }
\label{figure3}
\end{figure}
\renewcommand{\floatpagefraction}{0.75}
\begin{figure}[htbp]
\includegraphics[keepaspectratio,width=0.45\textwidth]{compair_Au_Zr_Ru.pdf}
\caption{The invariant mass spectrum of electron-positron pairs at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au collisions for the $60-80\%$ centrality class (solid line), Ru+Ru collisions for the $47-75\%$ centrality class (dotted line), and Zr+Zr collisions for the $47-75\%$ centrality class (dashed line). The experimental measurements~\cite{PhysRevLett.121.132301} in the $60-80\%$ centrality class from STAR are also plotted for comparison. The results are filtered to match the fiducial acceptance described in the text.}
\label{figure4}
\end{figure}
Figure~\ref{figure2} shows the coherent J/$\psi$ yields, including the interference effect, as a function of the number of participants ($N_{\rm{part}}$) at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au (solid line), Ru+Ru (dotted line), and Zr+Zr (dashed line) collisions. The predictions in Au+Au and isobaric collisions all rise and then decline towards central collisions. The increase of the cross section from peripheral to semi-peripheral collisions results from the larger photon flux induced at smaller impact parameters. The subsequent inversion of the trend originates from the destructive interference and the external disturbance from the overlap region, which prevail over the increasing photon flux towards central collisions. The turning point of the trend in Au+Au collisions occurs at a higher $N_{\rm{part}}$ value than in isobaric collisions, owing to the differences in collision geometry and nuclear profile. The production rate in Ru+Ru collisions is 1.2 times that in Zr+Zr collisions, following the exact $Z^{2}$ scaling. The yields in isobaric collisions are dramatically smaller than those in Au+Au collisions, driven by the large difference in $Z$ combined with the different nuclear profiles and collision geometries. The significant differences in production rate among the three collision systems lead to dramatically different enhancements with respect to the hadronic background, which provides a sensitive probe to test coherent photoproduction in HHIC.
The differential distributions of coherently produced J/$\psi$ in the three collision systems are also studied in this paper. Fig.~\ref{figure3} shows the $t$ distributions of coherently produced J/$\psi$ at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au collisions for the $60-80\%$ centrality class (solid line), Ru+Ru collisions for the $47-75\%$ centrality class (dotted line), and Zr+Zr collisions for the $47-75\%$ centrality class (dashed line). The Mandelstam variable $t$ is expressed as $t = t_{\parallel} +t_{\perp}$, with $t_{\parallel} = -M_{\text{J}/\psi}^{2}/(\gamma^{2}e^{\pm y})$ and $t_{\perp} = - p_{T}^{2}$. At top RHIC energies, $t_{\parallel}$ is very small and is neglected here ($t \simeq -p_{T}^{2}$). The specified centrality classes are chosen to guarantee the same hadronic backgrounds in the three collision systems. The rapid drops towards $p_{T} \rightarrow 0$ in the $t$ distributions come from the destructive interference between the two colliding nuclei, while the falloffs at relatively higher $p_{T}$ reveal the diffraction pattern, which is determined by the nuclear density distributions. The $t$ value at the peak position in Au+Au collisions is smaller than those in the isobaric collisions, owing to the larger impact parameter in Au+Au collisions for the selected centrality class. The slope of the falloff at relatively higher $p_{T}$ in Au+Au collisions is also steeper than those in the isobaric collisions, owing to the larger nuclear profile of the Au nucleus. There is no difference in the $t$ distributions between the isobaric collisions, since we use the same nuclear density distributions for Zr+Zr and Ru+Ru.
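As a quick numerical cross-check of this approximation, the sketch below evaluates $t_{\parallel}$ at midrapidity; the nucleon mass used to estimate the beam Lorentz factor and the PDG J/$\psi$ mass are inserted here as assumptions, not values quoted from this work:

```python
from math import exp

M_JPSI = 3.0969      # J/psi mass [GeV] (PDG value, assumed)
M_NUCLEON = 0.9383   # nucleon mass [GeV], used to estimate the beam Lorentz factor

def t_parallel(sqrt_s_nn, y):
    """Longitudinal part of t, t_par = -M^2 / (gamma^2 * exp(+-y)),
    following the expression quoted in the text."""
    gamma = sqrt_s_nn / (2.0 * M_NUCLEON)  # beam Lorentz factor
    return -M_JPSI**2 / (gamma**2 * exp(y))

def t_perp(pt):
    """Transverse part, t_perp = -pT^2."""
    return -pt**2

print(t_parallel(200.0, 0.0))
```

At $\sqrt{s_{\rm{NN}}} = 200$ GeV and $y = 0$ this gives $|t_{\parallel}| \approx 8\times10^{-4}$ GeV$^2$, small on the scale of the $t$ distributions shown, which supports the approximation $t \simeq -p_{T}^{2}$.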
In comparison to coherent photon-nucleus interactions, the $Z^{4}$ dependence of the coherent photon-photon process makes it an even more significant signal to be tested in HHIC. The cross section of photon-photon interactions is heavily concentrated among near-threshold pairs, which are not visible to existing detectors, so calculations of the total cross section are not particularly useful. Instead, we calculate the cross section and kinematic distributions with acceptances that match those used by STAR. Fig.~\ref{figure4} shows the invariant mass spectrum of electron-positron pairs at $\sqrt{s_{\rm{NN}}} =$ 200 GeV in Au+Au collisions for the $60-80\%$ centrality class (solid line), Ru+Ru collisions for the $47-75\%$ centrality class (dotted line), and Zr+Zr collisions for the $47-75\%$ centrality class (dashed line). The results are filtered to match the fiducial acceptance at STAR: daughter track transverse momentum $p_{T} >$ 0.2 GeV/c, track pseudo-rapidity $|\eta| <$ 1, and pair rapidity $|y| <$ 1. The experimental measurements~\cite{PhysRevLett.121.132301} in the $60-80\%$ centrality class from STAR are also shown for comparison, and they are reasonably described by our calculation. The mass distribution shapes for the three collision systems are almost the same, while the relative yield ratios are 7.9 : 1.5 : 1.0 for Au+Au, Ru+Ru, and Zr+Zr collisions, respectively. The large differences provide excellent experimental feasibility to test the production mechanism in isobaric collisions.
\section{Summary}
In summary, we have performed calculations of J/$\psi$ production from coherent photon-nucleus interactions and electron-positron pair production from coherent photon-photon interactions in hadronic isobaric collisions. We show that the production rates of the coherent photon products at the top RHIC energy differ significantly between the isobaric collision systems, and that the differences grow even larger in comparison with Au+Au collisions. The differential $t$ distributions of coherently produced J/$\psi$ in isobaric collisions are also studied; they possess different shapes compared with that in Au+Au collisions due to the different nuclear profiles and collision geometries. The predictions for isobaric collisions carried out in this paper provide a theoretical baseline for further experimental tests, which could be performed in the near future.
\section{Acknowledgements}
We thank Dr. Spencer Klein and Prof. Pengfei Zhuang for useful discussions. This work was funded by the National Natural Science Foundation of China under Grant Nos. 11775213, 11505180 and 11375172, the U.S. DOE Office of Science under contract No. DE-SC0012704, and MOST under Grant No. 2014CB845400.
\nocite{*}
\bibliographystyle{aipnum4-1}
\section{Introduction}
\IEEEPARstart{S}{mart} Meters (SMs) are a cornerstone for the development of smart electrical grids. These devices are able to report power consumption measurements of a house to a utility provider every hour or even every few minutes. This feature generates a considerable amount of useful data which enables several applications in almost real-time, such as power quality monitoring, timely fault detection, demand response, energy theft prevention, etc.~\cite{alahakoon2016,wang2019,depuru2011}. However, this fine-grained power consumption monitoring poses a threat to consumers' privacy. In fact, it has been shown that simple algorithms, known in general as Non-Intrusive Load Monitoring (NILM) methods, can readily be used to infer the types of appliances being used in a home at a given time, even without any prior knowledge about the household \cite{molina2010}. Since these features are highly correlated with the presence of people at the dwelling and their personal habits \cite{giaconi2018privacy}, this raises serious privacy concerns which can have an impact on the acceptance and deployment pace of SMs \cite{mckenna2012,cuijpers2013}. The natural challenge raised here is: how can privacy be enhanced while preserving the utility of the data? Although this problem has been widely studied in the context of data science \cite{jain2016}, the time series structure of SMs data requires a particular treatment \cite{asghar2017}. For further details the reader may be referred to~\cite{giaconi2018privacy}.
Simple approaches for privacy preservation in the context of SMs include data aggregation and encryption \cite{li2010,rottondi2013}, the use of pseudonyms rather than the real identities of users \cite{efthymiou2010smart}, downsampling of the data \cite{mashima2015authenticated,eibl2015}, and random noise addition \cite{barbosa2016}. However, these methods often restrict the potential applications of the SMs data. For instance, downsampling of the data may incur time delays in the detection of critical events, while data aggregation degrades the localization and accuracy of the power measurements.
A formal approach to the privacy problem has been presented in \cite{sankar2013smart} from an information-theoretic perspective, where it was proposed to assess privacy by the Mutual Information (MI) between the sensitive and released variables. More specifically, the authors model the power measurements of SMs with a hidden Markov model in which the distribution of the measurements is controlled by the state of the appliances, and for each particular state the distribution of power consumption is modeled as Gaussian. This model is then used to obtain the privacy-utility trade-off using tools from rate-distortion theory \cite{cover2006elements}. Although this approach is very appealing, it has two important limitations for its application to real-time scenarios with actual data. First, the privacy measure does not capture the causal time dependence and processing of the data, which is an essential feature of the problem. Second, the Gaussian model is quite restrictive in practice. The first limitation has been addressed in \cite{erdogdu2015}, where it is shown that, for an online scenario, the privacy measure should be based on Directed Information (DI) \cite{massey1990causality}, which is the privacy measure that we adopt in the present work. We further address the second limitation by taking a data-based approach in which no explicit constraints on the distributions or statistics of the involved variables are assumed.
A more sophisticated approach considers the use of Rechargeable Batteries (RBs) and Renewable Energy Sources (RES) in homes in order to modify the actual energy consumption of users with the goal of hiding the sensitive information \cite{kalogridis2010,tan2013,acs2012,zhao2014achieving,li2018information,giaconi2018,erdemir2019privacy}. Many of these works borrow ideas from the well-known principle of differential privacy \cite{dwork2008}, which seems to be better suited for fixed databases than for time series data \cite{asghar2017}. The main motivation to introduce physical resources into the privacy problem comes from the observation that this approach does not require any distortion of the actual SMs measurements, which means that there is no loss in terms of utility. However, the incorporation of physical resources does not only make the problem more complex and limited in scope, but can also generate a significant cost to users due to the faster wear of the RBs as a consequence of the increased charging/discharging rate \cite{giaconi2018privacy}. On the other hand, the required level of distortion for a specific privacy goal in a realistic scenario, in which the attacker threatening privacy has only partial information, is still an open question. Thus, the need for and convenience of these solutions is questionable. As a matter of fact, in this work, we show that under some conditions the privacy-utility trade-off may be much less severe than expected. However, it is important to note that these approaches are complementary to, rather than alternatives to, the ones based on distorting the power measurements. Thus, for simplicity, we assume that no RBs and/or RESs are available and that distortion is the only means to achieve a desired privacy level.
The use of neural networks to model an attacker has been considered in \cite{wang2017}. However, a more powerful formulation of the problem assumes that both the releaser (or privatizer) and the attacker are deep neural networks (DNNs) that are trained simultaneously based on a minimax game, an idea inspired by the well-known Generative Adversarial Networks (GANs) \cite{goodfellow2014}. This concept can be referred to as Generative Adversarial Privacy (GAP) \cite{huang2018} and is the basis for our approach. It should be mentioned that the concept of GAP has been studied for different applications related to images \cite{tripathy2019,feutry2018} but, to the best of our knowledge, not in the context of SMs time series data. In these works, the authors consider i.i.d. data and deep feed-forward neural networks for the releaser and attacker, while in this paper we consider deep Recurrent Neural Networks (RNNs) to capture and exploit the time correlation. The idea of time-series generation with an adversarial approach has been considered in \cite{esteban2018} for medical data, based on the principle of differential privacy. As mentioned previously, our approach is instead based on DI, an information-theoretic measure of privacy.
In summary, the main contributions of this paper are the following:
\begin{enumerate}[(i)]
\item We apply DI as a privacy measure, similarly to \cite{erdogdu2015}. However, unlike this and previous works, we impose no explicit assumptions on the generating model of the power measurements, but take a more versatile data-driven approach.
\item We study different possible distortion measures which provide more flexibility to control the specific features to be preserved in the released signals, that is, the relevant features for the target applications.
\item For the sake of computational tractability, we propose a loss function for training the privacy-preserving releaser based on an upper bound to DI. Then, considering an attacker that minimizes a standard cross-entropy loss, we show that this leads to an adversarial framework based on two RNNs to train the releaser.
\item We perform an extensive statistical study with actual data from three different data sets and scenarios motivated by real-world threats to characterize the utility-privacy trade-offs and the nature of the distortion generated by the releaser network.
\item We investigate the data mismatch problem in the context of SMs privacy, which occurs when the data available to the attacker is not the same as the one used for training the releaser mechanism, and show that it has an important impact on the privacy-utility trade-off.
\end{enumerate}
The rest of the paper is organized as follows. In Section \ref{sec:formulation}, we present the theoretical formulation of the problem that motivates the loss functions for the releaser and attacker. Then, in Section \ref{sec:model}, the privacy-preserving adversarial framework is introduced along with the training algorithm. Extensive results are presented and discussed in Section \ref{sec:results}. Finally, some concluding remarks are presented in Section \ref{sec:conclusion}.
\subsection*{Notation and conventions}
\begin{itemize}
\item $X^T = (X_1, \ldots, X_T)$ : A sequence of random variables, or a time series, of length $T$;
\item $x^T = (x_1, x_2, \ldots, x_T)$: A realization of $X^T$;
\item $x^{(i)T} = (x^{(i)}_1, x^{(i)}_2, \ldots, x^{(i)}_T)$: The $i^{\text{th}}$ sample in a minibatch used for training;
\item $\E[X]$: The expectation of a random variable $X$;
\item $p_X$: The distribution of $X$;
\item $I(X;Y)$: Mutual information between random variables $X$ and $Y$ \cite{cover2006elements};
\item $H(X)$: Entropy of random variable $X$;
\item $I(X^T \to Y^T)$: Directed information between two time series $X^T$ and $Y^T$;
\item $H(X^T||Y^T)$: Causally conditional entropy of $X^T$ given $Y^T$ \cite{kramer2003};
\item ${X -\!\!\!\!\minuso\!\!\!\!- Y -\!\!\!\!\minuso\!\!\!\!- Z}$: Markov chain among $X$, $Y$ and $Z$.
\end{itemize}{}
\section{Problem Formulation and Training Loss}
\label{sec:formulation}
\subsection{Main definitions}
Consider the private time series $X^T$, the useful process $Y^T$, and the observed signal $W^T$. We assume that $X_t$ takes values on a fixed discrete alphabet $\mathcal{X}$, for each $t \in \{ 1, \ldots, T \}$. A releaser $\mathcal{R}_{\theta}$ (this notation is used to denote that the releaser is controlled by a vector of parameters $\theta$) produces the release process $Z_t$ based on the observation $W^t$, for each time $t \in \{ 1, \ldots, T \}$, while an attacker $\mathcal{A}_{\phi}$ attempts to infer $X_t$ based on $Z^t$ by finding an approximation of $p_{X^T|Z^T}$, which we shall denote by $p_{\hat{X}^T|Z^T}$. Thus, the Markov chain $(X^t,Y^t) -\!\!\!\!\minuso\!\!\!\!- W^t -\!\!\!\!\minuso\!\!\!\!- Z^t -\!\!\!\!\minuso\!\!\!\!- \hat{X}^t$ holds for all $t \in \{ 1, \ldots, T \}$. In addition, due to causality, the distribution $p_{Z^T\hat{X}^T|W^T}$ can be decomposed as follows:
\begin{equation} p_{Z^T\hat{X}^T|W^T}(z^T,\hat{x}^T|w^T) = \prod_{t=1}^{T} p_{Z_t|W^t}(z_t|w^t) p_{\hat{X}_t|Z^t}(\hat{x}_t|z^t). \end{equation}
The goal of the releaser $\mathcal{R}_\theta$ is to minimize the flow of information from the sensitive process $X^T$ to $\hat{X}^T$ while simultaneously keeping the distortion between the release time series $Z^T$ and the useful signal $Y^T$ small. On the other hand, the goal of the attacker $\mathcal{A}_{\phi}$ (again, this notation is used to denote that the attacker is controlled by a vector of parameters $\phi$) is to learn the optimal decision rule based on the distribution $p_{X_t|Z^t}$, for each $t$, as accurately as possible. Note that after the approximation $p_{\hat{X}_t|Z^t}$ is obtained, the attacker can estimate the realization $x^T$ corresponding to $z^T$ in an online fashion, by solving the following $T$ problems:
\begin{equation} \underset{\hat{x}_t \in \mathcal{X}}{\text{argmax}} \; p_{\hat{X}_t|Z^t}(\hat{x}_t|z^t), \qquad t \in \{1,\ldots,T\}. \end{equation}
Thus, the attacker can be interpreted as a hypothesis test, as stated in \cite{li2019}. However, in the present case, we consider the more realistic scenario in which the statistical test is suboptimal due to the fact that the attacker has no access to the actual conditional distributions $p_{X_t|Z^t}$ but only to $p_{\hat{X}_t|Z^t}$, i.e., an inference of them.
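The online estimation step above can be sketched as follows; the list-of-dicts interface for the attacker's (approximate) conditional pmfs is purely illustrative:

```python
def online_map_estimate(cond_dists):
    """Per-step MAP decisions: for each t, pick argmax over the alphabet of
    the approximate conditional pmf p(x_t | z^t), supplied here as a dict
    {symbol: probability} per time step (hypothetical interface)."""
    return [max(p, key=p.get) for p in cond_dists]

# Toy example with a binary alphabet {0, 1} and T = 3 steps.
dists = [{0: 0.9, 1: 0.1}, {0: 0.3, 1: 0.7}, {0: 0.5001, 1: 0.4999}]
print(online_map_estimate(dists))  # -> [0, 1, 0]
```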
In order to take into account the causal relation between $X^T$ and $\hat{X}^T$, the flow of information is quantified by DI~\cite{massey1990causality}:
\begin{equation} \label{eqtr1} I\big(X^T\rightarrow \hat{X}^T\big)\triangleq \sum_{t=1}^{T} I(X^t;\hat{X}_t|\hat{X}^{t-1}), \end{equation}
where $I(X^t;\hat{X}_t|\hat{X}^{t-1})$ is the conditional mutual information between $X^t$ and $\hat{X}_t$ conditioned on $\hat{X}^{t-1}$\cite{cover2006elements}. The normalized expected distortion between $Z^T$ and $Y^T$ is defined as:
\begin{equation} \label{Distortion} \mathcal{D}(Z^T,Y^T) \triangleq \frac{\E[d(Z^T,Y^T)]}{T}, \end{equation}
where $d : \R^T \times \R^T \to \R$ is any distortion function (i.e., a metric on $\R^T$). To ensure the quality of the release, it is natural to impose the following constraint: $\mathcal{D}(Z^T,Y^T) \le \varepsilon$ for some given $\varepsilon \ge 0$. In previous works, the normalized squared error was considered as a distortion function (e.g., \cite{sankar2013smart}).
Besides this, other distortion measures can be relevant within the framework of SMs. For instance, demand response programs usually require an accurate knowledge of peak power consumption, so a distortion function closer to the infinity norm would be more meaningful for those particular applications. Thus, for the sake of generality and to keep the distortion function simple, we propose to use an $\ell_p$ distance:
\begin{equation} \label{eq:p_distortion} d(z^T,y^T) \triangleq \| z^T - y^T \|_p = \left( \sum_{t=1}^T |z_t - y_t|^p \right)^{1/p}, \end{equation}
where $p \ge 2$ is a fixed parameter. Note that this distortion function recovers the squared error case for $p = 2$, while it converges to the maximum error between the components of $z^T$ and $y^T$ as $p \to \infty$.
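A minimal implementation of this distortion function, illustrating both the Euclidean case and the convergence towards the maximum per-sample error:

```python
def lp_distortion(z, y, p):
    """d(z^T, y^T) = (sum_t |z_t - y_t|^p)^(1/p), the l_p distance
    between the released and useful sequences."""
    return sum(abs(a - b)**p for a, b in zip(z, y))**(1.0 / p)

z, y = [3.0, 0.0, -1.0], [0.0, 4.0, -1.0]
print(lp_distortion(z, y, 2))   # Euclidean case: sqrt(9 + 16) = 5.0
# As p grows, the distortion approaches the maximum per-sample error:
print(lp_distortion(z, y, 50))  # close to max_t |z_t - y_t| = 4.0
```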
Therefore, the problem of finding an optimal releaser $\mathcal{R}_\theta$ subject to the aforementioned attacker $\mathcal{A}_{\phi}$ and distortion constraint can be formally written as follows:
\begin{align} \label{eqtr2} & \underset{\theta}{\text{min}} \; \quad I\left(X^T\rightarrow \hat{X}^T\right), \nonumber \\ & \text{s.t. } \quad \mathcal{D}(Z^T,Y^T) \le \varepsilon. \end{align}
Note that the solution to this optimization problem depends on $p_{\hat{X}^T|Z^T}$, i.e., the conditional distributions that represent the attacker $\mathcal{A}_{\phi}$. Thus, a joint optimization of the releaser $\mathcal{R}_{\theta}$ and the attacker $\mathcal{A}_{\phi}$ is required.
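For intuition, DI can be evaluated by brute force from a joint pmf when the alphabet and horizon are tiny; the sketch below (illustrative only, with a cost exponential in $T$) follows the definition of DI term by term via $H(\hat{X}_t|\hat{X}^{t-1}) = H(\hat{X}^t) - H(\hat{X}^{t-1})$:

```python
from collections import defaultdict
from math import log2

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as {outcome: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def directed_information(joint, T):
    """Brute-force DI(X^T -> Y^T) = sum_t [H(Y_t|Y^{t-1}) - H(Y_t|Y^{t-1}, X^t)]
    from a joint pmf {(x_tuple, y_tuple): prob}. Cost is exponential in T."""
    def marginal(key):
        m = defaultdict(float)
        for (x, y), p in joint.items():
            m[key(x, y)] += p
        return m
    di = 0.0
    for t in range(1, T + 1):
        # H(Y_t | Y^{t-1}) = H(Y^t) - H(Y^{t-1})
        h1 = entropy(marginal(lambda x, y: y[:t])) - entropy(marginal(lambda x, y: y[:t - 1]))
        # H(Y_t | Y^{t-1}, X^t) = H(Y^t, X^t) - H(Y^{t-1}, X^t)
        h2 = (entropy(marginal(lambda x, y: (y[:t], x[:t])))
              - entropy(marginal(lambda x, y: (y[:t - 1], x[:t]))))
        di += h1 - h2
    return di

# Y copies X (uniform binary, T = 2): DI equals H(X^2) = 2 bits.
copy = {((a, b), (a, b)): 0.25 for a in (0, 1) for b in (0, 1)}
print(directed_information(copy, 2))  # -> 2.0
```

With independent $X^T$ and $Y^T$ the same routine returns zero, as expected.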
\subsection{A novel training loss}
The optimization problem in \eqref{eqtr2} can be exploited to motivate an objective function for $\mathcal{R}_\theta$. However, note that the cost of computing DI term is $O(|\mathcal{X}|^T)$, where $|\mathcal{X}|$ is the size of $\mathcal{X}$. Thus, for the sake of tractability, DI will be replaced with the following surrogate upper bound:
\begin{align} \label{eqtr5}
I\left(X^T\rightarrow \hat{X}^T\right) & = \sum_{t=1}^{T}\left[H(\hat{X}_t|\hat{X}^{t-1})-H(\hat{X}_t|\hat{X}^{t-1},X^t)\right]
\nonumber \\ &\overset{\text{(i)}}{\leq}\sum_{t=1}^{T}\left[H(\hat{X}_t|\hat{X}^{t-1})-H(\hat{X}_t|\hat{X}^{t-1},X^t,Z^t)\right] \nonumber \\
&\overset{\text{(ii)}}{\triangleq} \sum_{t=1}^{T} \left[ H(\hat{X}_t|\hat{X}^{t-1})-H(\hat{X}_t|Z^t)\right]\nonumber \\
& \overset{\text{(iii)}}{\leq} T \log|\mathcal{X}| - \sum_{t=1}^{T}H(\hat{X}_t|Z^t)\nonumber \\
& \overset{\text{(iv)}}{=} \textrm{constant} - H\big(\hat{X}^T \| Z^T \big),
\end{align}
where (i) is due to the fact that conditioning reduces entropy; equality (ii) is due to the Markov chains $X^t -\!\!\!\!\minuso\!\!\!\!- Z^t -\!\!\!\!\minuso\!\!\!\!- \hat{X}^t$ and $\hat{X}^{t-1} -\!\!\!\!\minuso\!\!\!\!- Z^t -\!\!\!\!\minuso\!\!\!\!- \hat{X}^t$; (iii) is due to the trivial bound $H(\hat{X}_t|\hat{X}^{t-1}) \le H(\hat{X}_t) \le \log |\mathcal{X}|$; and (iv) follows by the definition of the \emph{causally} conditional entropy ~\cite{kramer2003} and the Markov chain $\hat{X}^{t-1} -\!\!\!\!\minuso\!\!\!\!- Z^t -\!\!\!\!\minuso\!\!\!\!- \hat{X}^t$. Therefore, the loss function for $\mathcal{R}_\theta$ can be written as:
\begin{equation} \label{eq:releaser_loss}
\mathcal{L}_{\mathcal{R}}(\theta, \phi,\lambda) = \mathcal{D}(Z^T,Y^T) - \frac{\lambda}{T} H\big(\hat{X}^T \| Z^T \big),
\end{equation}
where $\lambda \ge 0$ controls the privacy-utility trade-off and the factor $1/T$ has been introduced for normalization purposes. It should be noted that, for $\lambda = 0$, the loss function $\mathcal{L}_{\mathcal{R}}(\theta,\phi,\lambda)$ reduces to the expected distortion and is independent of the attacker $\mathcal{A}_{\phi}$. In such a scenario, $\mathcal{R}_\theta$ offers no privacy guarantees. Conversely, for very large values of $\lambda$, the loss function $\mathcal{L}_{\mathcal{R}}(\theta,\phi,\lambda)$ is dominated by the upper bound on DI, so that privacy is the only goal of $\mathcal{R}_\theta$. In this regime, we expect the attacker $\mathcal{A}_{\phi}$ to fail completely to infer $X^T$, i.e., to approach random-guessing performance.
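The releaser loss can be sketched as follows, with the causally conditional entropy supplied as externally estimated per-step entropies (in practice these would come from the attacker's output probabilities; the scalar interface here is illustrative):

```python
def releaser_loss(distortion, step_entropies, lam):
    """L_R = D(Z^T, Y^T) - (lambda / T) * H(hatX^T || Z^T), with the causally
    conditional entropy approximated by a sum of per-step entropy estimates
    H(hatX_t | Z^t) (illustrative scalar interface)."""
    T = len(step_entropies)
    return distortion - (lam / T) * sum(step_entropies)

ents = [0.4, 0.9, 0.7]                 # hypothetical per-step entropies [bits]
print(releaser_loss(0.25, ents, 0.0))  # lambda = 0: pure distortion -> 0.25
print(releaser_loss(0.25, ents, 3.0))  # large lambda: privacy term dominates
```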
On the other hand, the attacker $\mathcal{A}_{\phi}$ is a classifier which optimizes the following cross-entropy loss:
\begin{equation} \label{eq:attacker_loss} \mathcal{L}_{\mathcal{A}}(\phi) \triangleq \frac{1}{T} \sum_{t=1}^T \E\left[- \log p_{\hat{X}_t|Z^t}(X_t|Z^t) \right], \end{equation}
where the expectation should be understood w.r.t. $p_{X_tZ^t}$. It is important to note that
\begin{align} \label{eq:loss_a_bound} \mathcal{L}_{\mathcal{A}}(\phi) & = \frac{1}{T} \sum_{t=1}^T \E\left[- \log p_{\hat{X}_t|Z^t}(\hat{X}_t|Z^t) \right] + \E\left[ \log \frac{p_{\hat{X}_t|Z^t}(\hat{X}_t|Z^t)}{p_{\hat{X}_t|Z^t}(X_t|Z^t)} \right] \nonumber \\ & \ge \frac{1}{T} H\big(\hat{X}^T \| Z^T \big), \end{align}
since the second term in \eqref{eq:loss_a_bound} is a Kullback-Leibler divergence, which is non-negative. Thus, by minimizing $\mathcal{L}_{\mathcal{R}}$, the releaser is preventing the attacker from inferring the sensitive process while also minimizing the distortion between the useful and released processes. This shows that $\mathcal{R}_{\theta}$ and $\mathcal{A}_{\phi}$ are indeed trained in an adversarial fashion. It should be noted that $\mathcal{A}_{\phi}$ here is an artificial attacker used for training $\mathcal{R}_{\theta}$. Once the training is complete, and $\mathcal{R}_{\theta}$ is fixed, a new attacker should be trained from scratch, using the loss \eqref{eq:attacker_loss}, in order to assess the privacy-utility trade-off in an unbiased way.
\section{Privacy-Preserving Adversarial Learning} \label{sec:model}
Based on the previous theoretical formulation, an adversarial modeling framework consisting of two RNNs, a releaser $\mathcal{R}_{\theta}$ and an attacker $\mathcal{A}_{\phi}$, is considered (see Fig. \ref{had1}). Note that independent noise $U^T$ is appended to $W^T$ in order to randomize the released variables $Z^T$, which is a popular approach in privacy-preserving methods. In addition, the available theoretical results show that, for Gaussian distributions, the optimal release contains such a noise component \cite{sankar2013smart,tripathy2017privacy}. For both networks, an LSTM architecture is selected (see Fig. \ref{had2}), which has been shown to be successful in several problems dealing with sequences of data (e.g., see \cite{goodfellow2016} and references therein for more details). Training in the suggested framework is performed using Algorithm \ref{Al1}, which requires $k$ gradient steps to train $\mathcal{A}_\phi$ followed by one gradient step to train $\mathcal{R}_\theta$. It is worth emphasizing that $k$ should be larger than $1$ in order to ensure that $\mathcal{A}_\phi$ represents a strong attacker. However, if $k$ is too large, this could lead to overfitting and thus a poor attacker.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\linewidth]{Model.png}
\caption{Privacy-Preserving framework. The seed noise $U^T$ is generated from i.i.d. samples according to a uniform distribution: $U_t \sim U[0,1]$.}
\label{had1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\linewidth]{LSTM.png}
\caption{LSTM recurrent network cell diagram. The cell includes four gating units to control the flow of information. All the gating units have a sigmoid activation function ($\sigma$) except for the input unit (which uses a hyperbolic tangent activation function ($\tanh$) by default). The parameters $b,V,K$ are respectively biases, input weights, and recurrent weights. In the LSTM architecture, the forget gate $f_t = \sigma(b^f + K^fh_{t-1} + V^fw_t)$ uses the output of the previous cell (called the hidden state $h_{t-1}$) to control the cell state $C_t$ and remove irrelevant information. On the other hand, the input gate $g_t = \sigma(b^g + K^g h_{t-1} + V^g w_t)$ and the input unit add new information to $C_t$ from the current input. Finally, the output gate $o_t = \sigma(b^o + K^oh_{t-1} + V^ow_t)$ generates the output of the cell from the current input and cell state.}
\label{had2}
\end{figure}
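The gate equations in the caption of Fig.~\ref{had2} can be collected into a single cell update; the following NumPy sketch is illustrative only (random toy weights, not the trained model):

```python
import numpy as np

def lstm_step(w_t, h_prev, C_prev, params):
    """One step of the LSTM cell of Fig. 2. params holds, for each unit
    q in {f, g, o, c}, a bias b[q], recurrent weights K[q], input weights V[q]."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    b, K, V = params["b"], params["K"], params["V"]
    f = sigmoid(b["f"] + K["f"] @ h_prev + V["f"] @ w_t)        # forget gate
    g = sigmoid(b["g"] + K["g"] @ h_prev + V["g"] @ w_t)        # input gate
    o = sigmoid(b["o"] + K["o"] @ h_prev + V["o"] @ w_t)        # output gate
    c_tilde = np.tanh(b["c"] + K["c"] @ h_prev + V["c"] @ w_t)  # input unit
    C = f * C_prev + g * c_tilde  # cell state update
    h = o * np.tanh(C)            # hidden state (cell output)
    return h, C

rng = np.random.default_rng(0)
n_h, n_in = 4, 3  # toy hidden and input sizes (assumptions)
params = {
    "b": {q: np.zeros(n_h) for q in "fgoc"},
    "K": {q: 0.1 * rng.standard_normal((n_h, n_h)) for q in "fgoc"},
    "V": {q: 0.1 * rng.standard_normal((n_h, n_in)) for q in "fgoc"},
}
h, C = lstm_step(rng.standard_normal(n_in), np.zeros(n_h), np.zeros(n_h), params)
print(h.shape, C.shape)  # (4,) (4,)
```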
\begin{algorithm*}
\caption{Algorithm for training privacy-preserving data releaser neural network.}
\label{Al1}
\textbf{Input:} Data set (which includes sample sequences of useful data $y^T$ and sensitive data $x^T$); seed noise samples $u^T$; seed noise dimension $m$; batch size $B$; number of steps to apply to the attacker $k$; gradient clipping value $C$; $\ell_2$ recurrent regularization parameter $\beta$. \\
\textbf{Output:} Releaser network $\mathcal{R}_\theta$.
\begin{algorithmic}[1]
\FOR {number of training iterations}
\FOR {$k$ steps}
\STATE Sample minibatch of $B$ examples: $\mathcal{B} = \Big\{w^{(b)T}=\Big(x^{(b)T},y^{(b)T},u^{(b)T}\Big); \; b=1,2,\ldots,B\Big\}$.
\STATE Compute the gradient of $\mathcal{L}_{\mathcal{A}}(\phi)$, approximated with the minibatch $\mathcal{B}$, w.r.t. $\phi$.
\STATE Update the attacker by applying the RMSprop optimizer with clipping value $C$.
\ENDFOR
\STATE Sample minibatch of $B$ examples: $\mathcal{B} = \Big\{w^{(b)T}=\Big(x^{(b)T},y^{(b)T},u^{(b)T}\Big); \; b=1,2,\ldots,B\Big\}$.
\STATE Compute the gradient of $\mathcal{L}_{\mathcal{R}}(\theta,\phi,\lambda)$, approximated with the minibatch $\mathcal{B}$, w.r.t. $\theta$.
\STATE Use $\textrm{Ridge}(L_2)$ recurrent regularization with value $\beta$ and update the releaser by applying RMSprop optimizer with clipping value $C$.
\ENDFOR
\end{algorithmic}
\end{algorithm*}
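The alternation in Algorithm~\ref{Al1} can be summarized by the following control-flow skeleton, in which the actual minibatch sampling, RMSprop updates, gradient clipping and regularization are abstracted into two callables (illustrative only):

```python
def train(releaser_step, attacker_step, num_iterations, k):
    """Control-flow skeleton of the adversarial training: k attacker gradient
    steps per releaser step. Optimizer details are hidden in the callables."""
    for _ in range(num_iterations):
        for _ in range(k):
            attacker_step()  # one gradient step on the attacker loss L_A
        releaser_step()      # one gradient step on the releaser loss L_R

# Verify the alternation with counting stubs.
counts = {"A": 0, "R": 0}
train(lambda: counts.__setitem__("R", counts["R"] + 1),
      lambda: counts.__setitem__("A", counts["A"] + 1),
      num_iterations=5, k=4)
print(counts)  # -> {'A': 20, 'R': 5}
```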
\section{Results and Discussion} \label{sec:results}
\subsection{Description of data sets} \label{sec:dataset}
Three different data sets are considered:
\begin{itemize}
\item The Electricity Consumption \& Occupancy (ECO) data set, collected and published by \cite{beckel2014eco}, which includes 1 Hz power consumption measurements and occupancy information of five houses in Switzerland over a period of $8$ months. In this study, we re-sampled the data to obtain hourly samples.
\item The Pecan Street data set, which contains hourly SMs data of houses in Austin, Texas, and was collected by Pecan Street Inc. \cite{street2019dataport}. The Pecan Street project is a smart grid demonstration research program which provides electricity, water, natural gas, and solar energy generation measurements for over $1000$ houses in Austin, Texas.
\item The Low Carbon London (LCL) data set, which includes half-hourly energy consumption for more than $5000$ households over the period $2011-2014$ in London \cite{LCLdatastore}. Each household is allocated to a CACI Acorn group \cite{caci1989acorn}, which includes three categories: affluent, comfortable and adversity.
\end{itemize}
We model the time dependency over each day, so the data sets were reshaped into sample sequences of length $24$ for ECO and Pecan Street (data rate of 1 sample per hour), while sample sequences of length $48$ were used for the LCL data set (data rate of 1 sample per half hour). For the ECO, Pecan Street, and LCL data sets, a total number of $11225$, $9120$, and $19237$ sample sequences were collected, respectively. The data sets were split into train and test sets with a ratio of roughly \mbox{85:15}, while $10 \%$ of the training data is used as the validation set. The network architectures and hyperparameters used for training and testing in the different applications are summarized in Table \ref{tab:hyperparameters}.
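The reshaping and splitting described above can be sketched as follows (the 85:15 ratio and the 10\% validation carve-out match the text; the flat-list input and contiguous split are illustrative simplifications):

```python
def make_day_sequences(readings, seq_len):
    """Chunk a flat list of meter readings into non-overlapping sequences of
    length seq_len (24 for hourly data, 48 for half-hourly), dropping leftovers."""
    n = len(readings) // seq_len
    return [readings[i * seq_len:(i + 1) * seq_len] for i in range(n)]

def split(seqs, train_ratio=0.85, val_ratio=0.10):
    """Roughly 85:15 train/test split, then 10% of training kept for validation."""
    n_train = int(train_ratio * len(seqs))
    train, test = seqs[:n_train], seqs[n_train:]
    n_val = int(val_ratio * len(train))
    return train[n_val:], train[:n_val], test

seqs = make_day_sequences(list(range(24 * 100)), 24)  # 100 synthetic "days"
train, val, test = split(seqs)
print(len(train), len(val), len(test))  # -> 77 8 15
```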
\begin{table*}[htbp]
\centering
\caption{Model architectures and hyperparameters values used for each application.}
\begin{adjustbox}{width=0.95\textwidth}
\begin{tabular}{c c c c c c c c}
\toprule
\midrule
\textbf{SMs Application}& \textbf{\makecell{Releaser}}& \textbf{\makecell{Training Attacker}}& \textbf{\makecell{Test Attacker}}& \textbf{\makecell{$B$}}& \textbf{\makecell{$k$}}& \textbf{\makecell{$m$}}\\
\midrule[0.5pt]
\multicolumn{1}{l}{\textbf{Inference of household occupancy}} &\makecell{$4$ LSTM layers each\\with 64 cells and $\beta=1.5$} &\makecell{$2$ LSTM layers each\\ with 32 cells}& \makecell{$3$ LSTM layers each\\ with 32 cells}&128 &4&8 \\
\midrule[0.1pt]
\multicolumn{1}{l}{\textbf{Inference of household identity}}&\makecell{$6$ LSTM layers each\\ with 128 cells and $\beta=2$} &\makecell{$4$ LSTM layers each\\ with 32 cells}& \makecell{$4$ LSTM layers each\\ with 32 cells}&128 &5&3\\
\midrule[0.1pt]
\multicolumn{1}{l}{\textbf{Inference of household Acorn type}} &\makecell{$5$ LSTM layers each\\ with 100 cells and $\beta=0.1$} &\makecell{$3$ LSTM layers each\\ with 32 cells}& \makecell{$4$ LSTM layers each\\ with 32 cells}&128 &7&3 \\
\midrule
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:hyperparameters}
\end{table*}
To assess the distortion with respect to the actual power consumption measurements, we define the Normalized Error (NE) for the different $\ell_p$ distortion functions as follows
\begin{equation} \text{NE}_p \triangleq \frac{\E\left[ \| Y^T - Z^T \|_p \right]}{\E\left[ \| Y^T \|_p \right]}. \end{equation}
\subsection{$\ell_2$ Distortion} \label{subsec:L2norm}
First, the $\ell_2$ distortion function is considered (i.e., $p = 2$ in \eqref{eq:p_distortion}). In the following subsections, three different privacy applications are studied (one for each of the data sets presented in Section \ref{sec:dataset}).
\subsubsection{Inference of households occupancy} \label{sec:household_occupancy}
The first practical case study regarding privacy preservation in time series data concerns inferring the presence/absence of residents at home from the total power consumption collected by SMs \cite{kleiminger2015household,jia2014human}. For this application, the electricity consumption measurements from the ECO data set are considered as the useful data, while occupancy labels are defined as the private data. Therefore, in this case, the releaser attempts to minimize a trade-off between the distortion incurred on the total electricity consumption and the probability of inferring the presence of an individual at home from the released signal. Note from Table \ref{tab:hyperparameters} that a stronger attacker, composed of 3 LSTM layers, is used for the test.
In Fig. \ref{ECOTradeoff} we show the empirically obtained privacy-utility trade-off for this application. Note that by adding distortion, the accuracy of the attacker drops from more than $80 \%$ (no privacy) to $50 \%$ (full privacy), which corresponds to the performance of a random guessing classifier.
\begin{figure}[htbp]
\centering
\includegraphics[width=.45\linewidth]{ECO_Tradeoff.png}
\caption{Privacy-utility trade-off for house occupancy inference application. Since in this application the attacker is a binary classifier, the random guessing (balanced) accuracy is 50$\%$. The fitted curve is based on an exponential function and is included only for illustration purposes.}
\label{ECOTradeoff}
\end{figure}
In order to provide more insight into the release mechanism, the Power Spectral Density (PSD) of the input signal and of the error signal is estimated, using Welch's method \cite{stoica2005spectral}, for three different cases along the privacy-utility trade-off curve of Fig.~\ref{ECOTradeoff}. For each case, we use 10 release signals and average the PSD estimates to reduce the variance of the estimator. Results are shown in Fig. \ref{ECONoise}. Looking at the PSD of the input signal (useful data), some harmonics are clearly visible. The PSDs of the error signals show that the model controls the privacy-utility trade-off by modifying the noise floor and the distortion of these harmonics.
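Welch's method is a segment-averaged periodogram. The numpy-only sketch below is a simplified version of what a library routine such as \texttt{scipy.signal.welch} provides, and illustrates the estimator used here:

```python
import numpy as np

def welch_psd(x, fs=1.0, nperseg=256):
    """Minimal Welch PSD estimate: split the signal into 50%-overlapping
    Hann-windowed segments and average their periodograms."""
    x = np.asarray(x, dtype=float)
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = fs * np.sum(win ** 2)
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.zeros(nperseg // 2 + 1)
    for s in segs:
        spec = np.fft.rfft(win * (s - s.mean()))   # detrend, window, FFT
        psd += np.abs(spec) ** 2 / scale
    psd /= len(segs)
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd
```

Averaging the resulting PSD over several release signals, as done in the text, further reduces the variance of the estimate.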
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\linewidth]{ECO_Noise.png}
\caption{PSD of the actual electricity consumption and error signals for the house occupancy inference application.}
\label{ECONoise}
\end{figure}
It should be mentioned that two stationarity tests, the Augmented Dickey-Fuller test \cite{dickey1979distribution} and the Kwiatkowski, Phillips, Schmidt, and Shin (KPSS) test \cite{kwiatkowski1992testing}, were applied to the data. Both provided enough evidence to treat the data as stationary, thus supporting our PSD analysis.
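In practice these tests are available in standard libraries (e.g. \texttt{statsmodels}). As an illustration of the idea only, the core of the simplest (non-augmented, zero-mean) Dickey-Fuller variant reduces to a single regression of the first difference on the lagged level:

```python
import numpy as np

def dickey_fuller_tstat(y):
    """Simplified Dickey-Fuller statistic: regress diff(y) on the lagged
    level and return the t-statistic of the slope.  Strongly negative
    values are evidence against a unit root (i.e. for stationarity)."""
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    rho = np.dot(ylag, dy) / np.dot(ylag, ylag)
    resid = dy - rho * ylag
    s2 = np.dot(resid, resid) / (len(dy) - 1)
    se = np.sqrt(s2 / np.dot(ylag, ylag))
    return rho / se
```

White noise yields a strongly negative statistic, while a random walk (non-stationary) yields one close to zero; the full ADF test adds lagged difference terms and deterministic components.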
\subsubsection{Inference of household identity} \label{sec:household_identity}
The second practical case study regarding privacy preservation in SM measurements is identity recognition from the total power consumption of households \cite{efthymiou2010smart}. It is assumed that both the releaser and the attacker have access to the total power consumption of different households in a region (training data), and the attacker then attempts to determine the identity of a house using the released data obtained from the test data. Thus, our model aims at generating release data of the total power consumption of households in a way that prevents the attacker from performing identity recognition while keeping the distortion on the total power minimal. For this task, the total power consumption of five houses is used.
The empirical privacy-utility trade-off curve obtained for this application is presented in Fig. \ref{PEECANTradeoff}. Comparing Fig. \ref{PEECANTradeoff} with Fig. \ref{ECOTradeoff}, we see that a high level of privacy requires a high level of distortion. For instance, in order to obtain an attacker accuracy of 30 $\%$, NE$_2$ should be approximately equal to 0.30. This is attributed to the fact that this task is harder from the learning viewpoint than the one considered in Section \ref{sec:household_occupancy}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\linewidth]{PEECAN_Five_Houses.png}
\caption{Privacy-utility trade-off for house identity inference application. Since in this application the attacker is a five-class classifier, the random guessing (balanced) accuracy is 20$\%$. The fitted curve is based on an exponential function and is included only for illustration purposes.}
\label{PEECANTradeoff}
\end{figure}
\subsubsection{Inference of household acorn type} \label{sec:household_acorntype}
As the third practical case study, we consider family acorn type identification, which can reveal the family's economic status to any third party having access to the SM data \cite{thorve2018}. Thus, for this application, the SM power consumption is used as the useful data, while the acorn type is considered the private data.
The empirical privacy-utility trade-off curve obtained for this application is presented in Fig. \ref{LondonTradeoff}. Once again, we see a large variation in the accuracy of the attacker as the distortion is modified.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\linewidth]{London_Tradeoff.png}
\caption{Privacy-utility trade-off for acorn type inference application. Since in this application the attacker is a three-class classifier, the random guessing (balanced) accuracy is 33$\%$. The fitted curve is based on an exponential function and is included only for illustration purposes.}
\label{LondonTradeoff}
\end{figure}
It should be noted that the PSD analysis for this application and the previous one leads to results similar to those of the first application and is therefore not reported.
To assess the quality of the release signal, utility providers may be interested in several different indicators. These include, for instance, the mean, skewness, kurtosis, standard deviation to mean ratio, and maximum to mean ratio \cite{shen2019}. Thus, for completeness, we present these indicators in Table \ref{tab:tab1} for three different cases along the privacy-utility trade-off curve. These results show that, in general, the error in these indicators is small when the privacy constraints are lax and increases as they become strict. However, no simple relation should be expected between NE$_2$ and the values of the corresponding indicators.
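The indicators and their absolute relative errors can be sketched in a few lines of numpy. This sketch uses excess kurtosis; the exact conventions of \cite{shen2019} may differ.

```python
import numpy as np

def quality_indicators(x):
    """Power-quality style summary indicators of a consumption series."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std()
    z = (x - mu) / sd
    return {
        "mean": mu,
        "skewness": np.mean(z ** 3),
        "kurtosis": np.mean(z ** 4) - 3.0,   # excess kurtosis convention
        "std/mean": sd / mu,
        "max/mean": x.max() / mu,
    }

def abs_relative_errors(y, z):
    """Absolute relative error (in %) of each indicator of the release z
    with respect to the target y."""
    iy, iz = quality_indicators(y), quality_indicators(z)
    return {k: 100.0 * abs(iz[k] - iy[k]) / abs(iy[k]) for k in iy}
```

Applied to an actual/release pair, this reproduces the kind of entries reported in Table \ref{tab:tab1}.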
\begin{table*}
\centering
\caption{Errors in power quality indicators for three applications along the privacy-utility trade-off.}
\begin{adjustbox}{width=0.9\textwidth}
\renewcommand{\arraystretch}{0.55}
\begin{tabular}{c c c c c c c c}
\toprule
\midrule
\multirow{2}[4]{*}{SMs Application}&\multirow{2}[4]{*}{\textbf{NE$_2$}}&\multirow{2}[4]{*}{\textbf{Accuracy(\%)}} & \multicolumn{5}{c}{\textbf{Absolute relative error of quality indicators(\%) } }\\
\cmidrule(rl){4-8}
& & & \textit{Mean} & \textit{Skewness}& \textit{Kurtosis}& \textit{Std. Dev./Mean}& \textit{Max./Mean} \\
\midrule
\multirow{3}[4]{*}{\textbf{Inference of household occupancy}}&0.04& 78 &1.42&1.06&0.70&0.67&0.46\\
&0.12&65&9.69&4.32&5.81&4.58&4.92\\
&0.18&57& 13.26&12.83&2.57&16.44&13.89\\
\midrule
\multirow{3}[4]{*}{\textbf{Inference of household identity}}&0.05&54 &3.42&2.22&2.01&3.50&2.51\\
&0.17&39&4.63&3.18&1.79&15.74&9.32\\
&0.36&29 &12.49&6.71&1.44&19.12&9.98\\
\midrule
\multirow{3}[4]{*}{\textbf{Inference of household acorn type}}&0.03&85&1.86&0.66&0.44&0.02&0.02\\
&0.29&47 &2.49&9.46&14.54&24.97&13.24\\
&0.60&35& 13.21&45.92&24.03&55.38&41.68\\
\midrule
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:tab1}
\end{table*}
\subsection{$\ell_p$ Distortion}
As already discussed in Section \ref{sec:formulation}, the distortion function should be properly matched to the intended application of the release variables $Z^T$ in order to preserve the characteristics of the target variables $Y^T$ that are considered essential. In this section, we consider the $\ell_p$ distortion \eqref{eq:p_distortion} with $p=4,5$ as an alternative to the $\ell_2$ distortion function of Section \ref{subsec:L2norm} and study its potential benefits.
The privacy-utility trade-off curve for the inference of households occupancy application is shown in Fig. \ref{LossesEco}.
As a first observation, it appears clear that the choice of the distortion measure has a non-negligible impact on the privacy-utility trade-off curve. In fact, it can be seen that for a given amount of distortion, the releasers trained with the $\ell_4$ and $\ell_5$ distortion measures achieve a higher level of privacy than the one trained with the $\ell_2$ distortion function. It should be mentioned that we also considered other norms, such as the $\ell_{10}$, and the privacy-utility trade-off was observed to be similar, but slightly better, than the one corresponding to the $\ell_4$ norm.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{L4L5_tradeoff.png}
\caption{Privacy-utility trade-off for house occupancy inference application based on the different $\ell_p$ distortion functions. For each figure, the dashed line, shown for comparison purposes, is the fitted curve found in Fig. \ref{ECOTradeoff} for the $\ell_2$ distortion function.}
\label{LossesEco}
\end{figure}
As we discussed in Section \ref{sec:formulation}, in demand response programs, the utilities are mostly interested in the peak power consumption of the customers. It is also expected that higher-order $\ell_p$ norms are better at preserving these signal characteristics than the $\ell_2$ norm. To verify this notion, we considered 60 random days of the ECO data set in a full privacy scenario (i.e., with an attacker accuracy very close to $50 \%$) and plotted the actual power consumption along with the corresponding release signals for both the $\ell_4$ and $\ell_2$ distortion functions. Results are shown in Fig. \ref{TimeL4L2ECO}, which clearly indicates that the number of peaks preserved by the releaser trained with the $\ell_4$ distortion function is much higher than the ones kept by the releaser trained with the $\ell_2$ distortion function. This suggests that for the demand response application, higher order $\ell_p$ distortion functions should be considered.
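The intuition that higher-order norms penalize peaked errors more heavily can be checked with a toy numpy example (the numbers are purely illustrative): two error patterns of equal $\ell_2$ size, one spread evenly over the samples and one concentrated in a single peak.

```python
import numpy as np

def lp_distortion(y, z, p):
    """Per-example l_p distortion ||y - z||_p used as the utility loss."""
    return np.linalg.norm(y - z, ord=p, axis=-1)

# Two error patterns with the same l_2 size: one spread evenly,
# one concentrated in a single sample (a "peak" error).
spread = np.full(16, 0.25)               # ||spread||_2 = 1.0
peak = np.zeros(16); peak[3] = 1.0       # ||peak||_2 = 1.0
```

Both patterns have unit $\ell_2$ norm, but the concentrated error has an $\ell_4$ norm twice that of the spread one, so a releaser trained with the $\ell_4$ loss is pushed away from solutions that distort isolated peaks.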
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\linewidth]{Time_L24.png}
\caption{Example of the release power consumption in the time domain compared with the actual power consumption over 60 random days with almost full privacy for the $\ell_4$ and $\ell_2$ distortion functions.}
\label{TimeL4L2ECO}
\end{figure}
\subsection{Attacker with Data Mismatch Problem}
All the previous results are based on the assumption that the attacker has access to exactly the same training data set used by the releaser. This should be considered a worst-case analysis of the performance of the privacy-preserving networks. However, this assumption may not hold in practice. To examine this scenario, we revisit the application of Section \ref{sec:household_occupancy} in two different cases. In the first case, we assume that, out of the data set of five houses (ECO data set), the releaser uses the data of houses $\{1,2,4,5\}$ for training while the attacker has access only to the data of house $3$. In the second case, we assume that the releaser is trained on the data set of all houses but only the data sets of houses $1$ and $3$ are available to the attacker. These scenarios capture different degrees of the data mismatch problem, which could affect the privacy-utility trade-off due to the different generalization errors.
The results are presented in Fig. \ref{DiffDataset} along with the worst-case scenario. They clearly show how the overlap between the training data sets of the releaser and the attacker affects the performance of the model. In fact, when the attacker has access to only a portion of the releaser's data set, the performance of the attacker degrades considerably, which means that a target level of privacy requires much less distortion. In the extreme case where the attacker has no access to the releaser training data set, a very high level of privacy can be achieved with negligible distortion. This should be considered a best-case scenario. It should be mentioned that we repeated this experiment with different shufflings of the houses and obtained similar results.
\begin{figure}[htbp]
\centering
\includegraphics[width=.45\linewidth]{drawing13.png}
\caption{Privacy-utility trade-off for house occupancy inference application when an attacker (trained separately to infer private data from the release) does not have full access to the releaser training data set.}
\label{DiffDataset}
\end{figure}
\section{Summary and Discussion} \label{sec:conclusion}
Privacy problems associated with smart meter measurements are an important societal concern, which can affect both their deployment pace and the advancement of smart grid technologies. Thus, it is essential to understand the real privacy risks associated with them in order to provide an adequate solution to this problem.
In this paper, we proposed to measure the privacy based on Directed Information (DI) between the sensitive time series and its inference by a potential attacker optimized for that task. DI captures the causal time dependencies present in the time series data and its processing. Unlike previous approaches, we impose no explicit assumption on the statistics or distributions of the involved random variables. We believe that this data-driven approach can provide a more accurate assessment of the information leakage in practice than purely theoretical studies based on worst-case assumptions.
We considered a privacy-preserving adversarial learning framework that balances the trade-off between privacy and distortion on release data. More precisely, we defined a tractable training objective (or loss) based on an upper bound to DI and a general distortion measure. The desired releaser is then trained in an adversarial framework using RNNs to optimize such objective, while an artificial attacker is trained with an opposite goal. After convergence, a new attacker is trained to test the level of privacy achieved by the releaser.
A detailed study of different applications, including inference of household occupancy (ECO data set), inference of household identity (Pecan Street data set), and inference of household acorn type (LCL data set), shows that the privacy-utility trade-off is strongly dependent upon the considered application and distortion measure. We showed that the usual $\ell_p$-norm based distortion measure for $p=2$ can have a worse privacy-utility trade-off than for $p>2$. In addition, we showed that the $\ell_4$ distortion measure generates a release that preserves most of the power consumption peaks even under a full privacy regime, which is not the case for the $\ell_2$ distortion function. This result is of considerable importance for demand response applications.
Finally, we studied the impact of the data mismatch problem in this application, which occurs when the training data set of the releaser is not the same as the one used by the attacker. Results show that this effect can greatly affect the privacy-utility trade-off. Since this phenomenon is expected in practice, at least to some degree, these findings suggest that the level of required distortion to achieve desired privacy targets may not be too significant in several cases of interest. In such scenarios, our approach may offer a simpler and more general solution than the ones offered by methods based on rechargeable batteries and renewable energy sources.
\bibliographystyle{ieeetr}
\section{Introduction}
At the CMS experiment of the Large Hadron Collider, the final stage of data processing is typically carried out over several terabytes of numerical data residing on a shared cluster of servers used for batch processing. The data consist of columns of features for the recorded particles such as electrons, muons, jets and photons for each event in billions of rows, one row per event. In addition to the columns of purely kinematic information of particle momentum, each particle carries a number of features that describe the reconstruction details and other high-level properties of the reconstructed particles. For example, for muons, we might record the number of tracker layers where an associated hit was found, whether or not it reached the outer muon chambers and, in MC simulation, the index of the associated generator-level particle, thereby cross-linking two collections. Typical compressed event sizes for such reduced data formats at CMS are a few kilobytes per event. In practice, reduced-precision floating-point formats are used to store values only up to the experimental precision, and the number of features per event is on the order of a few hundred.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{flowchart.pdf}
\caption{The flowchart of the accelerated analysis workflow for the benchmark CMS $H(\mu\mu)$ analysis. The necessary ROOT data are accessed from a networked filesystem with a total size of about 880 GB. In the caching step, the relevant feature columns are decompressed and extracted and optional preselection is applied. The caching step takes approximately 2 hours for a 24-core job processing 1.2 billion events and is largely limited by IO. The cache consists of about 200 GB of data, 600M events of memory-mappable contiguous arrays and is portable between different platforms. This analysis-specific cache can be processed with the $H(\mu\mu)$ selection in 5 minutes on a 24-core server and in about 20 minutes on a commodity laptop.}
\label{fig:flowchart}
\end{figure}
A typical physics analysis at the LHC, such as the precision measurement of a particle property or the search for a new phenomenon, involves billions of real and simulated events. The final result typically requires tens to hundreds of iterations over this dataset while the analysis is ongoing over a period of months to a year. For each iteration of the analysis, hundreds of batch jobs running custom reduction software are executed over these data. The maintenance and operation of this data pipeline take up valuable researcher time and can result in delays between iterations of the analysis, slowing down innovation in experimental methods.
We demonstrate that by efficiently accessing and caching only the needed columns from the data, sizeable batches of data can be loaded from disk to memory at the speed of several MHz (millions of events per second). By processing the data using efficient vectorizable kernels on the arrays, applying multithreading and GPU acceleration to the final stage of data analysis becomes natural, such that data can also be processed at MHz-level speeds already on a single server. This means that a complete physics analysis of billions of events can be run as a single integrated Python code on a single machine with a rate of about a billion events per hour. Although we use a specific and relatively simple CMS analysis as an example, the method based on array computing with accelerated kernels is generic and can be used for other collider analyses. The purpose of this paper is to detail the issue of processing terabyte-scale data in HEP efficiently. We release the \texttt{hepaccelerate} library for further discussion~\cite{hepaccelerate}.
In the following parts, we explore this approach based on columnar data analysis in more detail. In section~\ref{sec:data}, we describe the structure of the data and discuss how data sparsity is handled efficiently. We introduce physics-specific computational kernels in section~\ref{sec:kernels} and describe the measured performance under a variety of conditions in section~\ref{sec:performance}. Finally, we conclude with a summary and outlook in section~\ref{sec:summary}.
\section{Data structure}
\label{sec:data}
We can represent HEP collider data in the form of two-dimensional matrices, where $N$ rows correspond to events and columns correspond to features in the event such as the momentum components of all measured particles. However, due to the nature of the underlying physics processes that produce a varying number of particles per event, the number of features varies from one collider event to another, such that a fixed-size two dimensional representation is not memory-efficient. Therefore, the standard HEP software framework based on ROOT includes mechanisms for representing dynamically-sized arrays as well as complete C++ classes with arbitrary structure as the feature columns and a mechanism for serializing and deserializing these dynamic arrays~\cite{Antcheva:2009zz}.
Based on the approach first introduced in the \texttt{uproot}~\cite{uproot} and \texttt{awkward-array}~\cite{awkward} python libraries, many existing HEP data files with a varying number of particles per event can be represented and efficiently loaded as sparse arrays with an underlying one-dimensional array for a single feature. Event boundaries are encoded in an offset array derived from the cumulative particle counts per event. Therefore, the full particle structure of the events can be represented by a contiguous offset array of size $N+1$ and a contiguous data array of size $M$ for each particle feature. This can easily be extended to event formats where the event contains particle collections of different types, e.g. jets, electrons and muons. By using the data and offset arrays as the basis for computation, efficient computational kernels can be implemented and evaluated on the data. We illustrate the jagged data structure on figure~\ref{fig:jagged}.
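The data/offset representation can be sketched in a few lines of numpy (the collection name and values are illustrative):

```python
import numpy as np

# Jagged jet-pT content of three events with 2, 0 and 3 jets.
jets_per_event = [[50.0, 30.0], [], [80.0, 40.0, 20.0]]

# Contiguous representation: one data array for the feature and one
# offsets array with the event boundaries (N events -> N+1 offsets).
data = np.array([pt for ev in jets_per_event for pt in ev])
counts = np.array([len(ev) for ev in jets_per_event])
offsets = np.concatenate([[0], np.cumsum(counts)])

# The jets of event i live in data[offsets[i]:offsets[i + 1]].
```

Empty events simply contribute repeated offsets, so no padding is needed and the memory cost is exactly one value per particle plus one offset per event.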
In practice, analysis-level HEP event data are stored in compressed ROOT files storing raw features in a so-called ``flat analysis ntuple'' form. Out of hundreds of stored features, a typical analysis might use approximately 50 to 100, discarding the rest and only using them rarely for certain auxiliary calibration purposes. When the same features are accessed multiple times, the cost of decompressing the data can be significant. Therefore, in order to maximize the computational efficiency of the analysis, we have implemented a simple cache based on memory mapped files for the feature and offset data. The efficiency of the uncompressed cache depends on the ratio of features used for analysis. We find that for the representative CMS analysis, the average uncompressed cache size is approximately 0.5 kB/event after choosing only the necessary columns, down from approximately 2 kB/event in compressed form. The choice of compression algorithms can and should be addressed further in optimizing the file formats for cold storage and analysis~\cite{rootio}.
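A minimal sketch of such a cache, assuming one plain \texttt{.npy} file per feature column (the actual cache layout of the library may differ): columns are decompressed once, then memory-mapped back on every subsequent analysis iteration.

```python
import os
import tempfile
import numpy as np

# Illustrative cache directory; a real cache would live on a fast SSD.
cachedir = tempfile.mkdtemp()

def cache_column(name, arr):
    """Write a decompressed feature (or offset) column to the cache."""
    np.save(os.path.join(cachedir, name + ".npy"), arr)

def load_column(name):
    """Memory-map a cached column: no decompression, lazy page-in."""
    return np.load(os.path.join(cachedir, name + ".npy"), mmap_mode="r")
```

Because the files are raw memory-mappable arrays, repeated reads skip decompression entirely and the operating system's page cache serves hot columns directly.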
\begin{figure}
\centering
\raisebox{30pt}{\includegraphics[width=0.4\linewidth]{jagged.pdf}}
\includegraphics[width=0.4\linewidth]{jagged_matrices.pdf}
\caption{A visual representation of the jagged data structure of the jet $p_T$, $\eta$ and $\phi$ content in 50 simulated events. On the diagram on the left, we illustrate how three events with a varying number of jets are recorded as contiguous arrays. On the figure on the right, we show the number of jets per event, one event per row, which is derived from the offset array. In the three rightmost columns, we show the jet content in events, visualizing the $p_T$, $\eta$ and $\phi$ of the first 10 jets for each event.}
\label{fig:jagged}
\end{figure}
\section{Computational kernels}
\label{sec:kernels}
In the context of this report, a kernel is a function that is mapped across an array to transform the underlying data. A simple kernel could compute the square root of all the values in the array. More complicated kernels such as convolutions might involve relations between neighbouring array elements or involve multiple inputs and outputs of varying sizes. When the individual kernel calls across the data are independent of each other, these kernels can be evaluated in parallel over the data using SIMD processors. The use of efficient computational kernels with an easy-to-use API has proven to be successful for machine learning frameworks such as \texttt{tensorflow}. Therefore, we propose to implement HEP analyses similarly using common kernels that are steered from a single high-level code.
In the following, we will demonstrate that by using the data and offset arrays as described in section~\ref{sec:data}, common HEP operations such as computing the total scalar sum of momenta of all selected particles in an event can be formulated as kernels and dispatched to vector processing units (CPUs, GPUs), efficiently processing the event data in parallel. The central result of this paper is that only a small number of simple kernels, easily implemented in e.g. Python or C, are needed to implement a realistic HEP analysis.
As mentioned above, a prototypical HEP-specific kernel would be to find the scalar sum $H_T$ of all particles passing some quality criteria in an event. We show the Python implementation for this on listing~\ref{lst:sumht}. This kernel takes as input the $M$-element data array of all particle transverse momenta \texttt{pt\_data} and an $N+1$-element array of the event offsets. In addition, as we wish to include only selected particles in selected events in this analysis, we use an $N$-element boolean mask for events and $M$-element boolean mask for the particles that have passed selection. These masks can be propagated to further functions, making it possible to efficiently chain computations without resorting to expensive data copying. Finally, the output is stored in a preallocated array of size $N$ that is initialized to zeroes.
\begin{minipage}{0.95\linewidth}
\begin{center}
\vspace{0.1cm}
\begin{lstlisting}[frame=single,language=Python,caption={Python code for the kernel computing the scalar sum of selected particle momenta $H_T$. The inputs are \texttt{pt\_data}, an $M$-element array of the $p_T$ of all particles, the $(N+1)$-element \texttt{offsets} array with the indices of the event boundaries in the particle collections, as well as masks for the events and particles that should be considered. The outer loop over events is executed in parallel through the Numba \texttt{prange} iterator, which creates multithreaded code across the loop iterations, while the particles within each event are iterated over sequentially.},label={lst:sumht},abovecaptionskip=10pt]
from numba import njit, prange

@njit(parallel=True)
def sum_ht(
pt_data, offsets,
mask_rows, mask_content,
out):
N = len(offsets) - 1
M = len(pt_data)
#loop over events in parallel
for iev in prange(N):
if not mask_rows[iev]:
continue
#indices of the particles in this event
i0 = offsets[iev]
i1 = offsets[iev + 1]
#loop over particles in this event
for ielem in range(i0, i1):
if mask_content[ielem]:
out[iev] += pt_data[ielem]
\end{lstlisting}
\end{center}
\end{minipage}
This kernel generically reduces a data array using a pairwise operator $(+)$ within the offsets and can be reused in other cases, such as counting the number of particles per event passing a certain selection. Other kernels that turn out to be useful are related to finding the minimum or maximum value within the offsets or retrieving or setting the $m$-th value of an array within the event offsets.
The generic kernels we have implemented for the $H(\mu\mu)$ analysis are the following:
\begin{itemize}
\item \verb|get_in_offsets|: given jagged data with offsets, retrieves the $n$-th elements for all rows. This can be used to create a contiguous array of e.g. the leading jet $p_T$ for further numerical analysis.
\item \verb|set_in_offsets|: as above, but sets the $n$-th element to a value. This can be used to selectively mask objects in a collection, e.g. to select the first two jets ordered by $p_T$.
\item \verb|sum_in_offsets|: given jagged data with offsets, calculates the sum of the values within the rows. As we have described above, this can be used to compute a total within events, either to count objects passing selection criteria by summing masks or to compute observables such as $H_T$.
\item \verb|max_in_offsets|: as above, but calculates the maximum of the values within rows.
\item \verb|min_in_offsets|: as above, but calculates the minimum.
\item \verb|fill_histogram|: given a data array, a weight array, histogram bin edges and contents, fills the weighted data to the histogram. This is used to create 1-dimensional histograms that are common in HEP. Extension to multidimensional histograms is straightforward.
\item \verb|get_bin_contents|: given a data array and a lookup histogram, retrieves the bin contents corresponding to each data array element. This is used for implementing histogram-based reweighting.
\end{itemize}
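As an illustration, a pure-Python sketch of \verb|get_in_offsets| following the same conventions as listing \ref{lst:sumht} (the library version is Numba-compiled and its exact signature may differ):

```python
import numpy as np

def get_in_offsets(data, offsets, indices, mask_rows, mask_content, out):
    """For each unmasked event, copy the indices[iev]-th unmasked element
    of the event's collection into out[iev] (e.g. indices of all zeros
    extracts the leading-jet pT into a contiguous array)."""
    for iev in range(len(offsets) - 1):
        if not mask_rows[iev]:
            continue
        k = 0  # running index over unmasked elements of this event
        for ielem in range(offsets[iev], offsets[iev + 1]):
            if mask_content[ielem]:
                if k == indices[iev]:
                    out[iev] = data[ielem]
                    break
                k += 1
```

Events with fewer unmasked elements than the requested index simply leave the preallocated output value untouched.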
This demonstrates that a small number of dedicated kernels can be used to offload a significant part of the analysis. There are also a number of less generic analysis operations which do not easily decompose into other fundamental array operations, but are still useful for HEP analysis. A particular example would be to find the first two muons of opposite charge in the event, or to perform $\Delta R(\eta, \phi)$-matching between two collections of particles. In the standard approach, the physicist might simply write down procedural code in the form of a nested loop over the particles in the event which is terminated when the matching criterion is satisfied. These functions can similarly be expressed in the form of dedicated kernels that do a single pass over the data and that are easy to write down and debug procedurally by the physicist before being dispatched to SIMD processors. We choose to implement these specifically rather than rely on more general array operations that perform cross-collection joins, for speed and simplicity of implementation. These specialized kernels are as follows:
\begin{itemize}
\item \verb|mask_deltar_first|: given two collections of objects, masks all objects in the first collections that are closer than a predefined value in $\Delta R^2 = \Delta \eta^2 + \Delta \phi^2$ to an object in the second collection
\item \verb|select_opposite_sign_muons|: given a collection of objects with charge (i.e. muons), masks all but the first two objects ordered by $p_T$ which are of opposite charge
\end{itemize}
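A possible pure-Python sketch of the $\Delta R$-masking kernel over two jagged collections (the compiled library version follows the same loop structure, but its exact signature may differ):

```python
import numpy as np

def mask_deltar_first(etas1, phis1, offsets1,
                      etas2, phis2, offsets2,
                      drcut2, mask_out):
    """For each event, set mask_out[i] = False for every object in
    collection 1 that lies within sqrt(drcut2) in Delta-R of any object
    in collection 2 of the same event."""
    for iev in range(len(offsets1) - 1):
        for i1 in range(offsets1[iev], offsets1[iev + 1]):
            for i2 in range(offsets2[iev], offsets2[iev + 1]):
                deta = etas1[i1] - etas2[i2]
                # wrap the azimuthal difference into (-pi, pi]
                dphi = (phis1[i1] - phis2[i2] + np.pi) % (2 * np.pi) - np.pi
                if deta ** 2 + dphi ** 2 < drcut2:
                    mask_out[i1] = False
                    break
```

The inner loop terminates as soon as one match is found, mirroring the procedural early-exit logic a physicist would write by hand.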
With a complete set of about 10 kernels, implemented in Python and just-in-time compiled to either multithreaded CPU or GPU/CUDA code using the Numba package~\cite{lam2015numba}, a standard HEP analysis such as the search for $H(\mu \mu)$ can be carried out from a simple controller script. We have chosen Python and Numba to implement the kernels in the spirit of quickly prototyping this idea, but this approach is not restricted to a particular programming language. The total number of lines of code for both the CPU and GPU implementations of all kernels is approximately 250, reflecting the simplicity of the implementations. Using a 24 core Intel Xeon E5-2687W v4 @ 3.00GHz with Intel Optane SSDs, networked storage using CephFS and an nVidia Geforce Titan X, we have benchmarked the performance of the kernels on preloaded data. The results are shown on figure~\ref{fig:kernel_benchmarks}. For complex kernels such as $\Delta R$ masking, with 24 cores we observe a speedup of approximately 5 times over single-core performance. On the GPU, we find a total speedup of approximately 15x over single-core performance.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{kernel_benchmarks.pdf}
\caption{Computational kernels benchmarked on the reference server. We compare the performance of the kernels on approximately 150k preloaded events, showing the scaling with respect to the single-core baseline. The absolute performance is on the scale of tens to hundreds of MHz and is displayed in the legend. In particular, we find that the kernel for computing $\Delta R$ masking runs at a speed of 34 MHz on a single-core of the CPU and is sped up by about a factor x5 (x15) by multithreading using the CPU (GPU).}
\label{fig:kernel_benchmarks}
\end{figure}
\section{Analysis benchmark}
\label{sec:performance}
Besides testing the individual kernels in a controlled setting, we benchmark a complete physics analysis based on multithreaded kernels using a CMS analysis for $H(\mu\mu)$. This analysis requires trigger selection, jet and lepton selection, matching of leptons to trigger objects as well as generator-level objects and of jets to leptons. The dimuon candidate is constructed from muons that pass quality criteria. Events are categorized based on the number of additional leptons and jets, and the distribution of several control variables as well as the variable of interest, the dimuon invariant mass, is saved to histograms for further analysis. We also evaluate a high-level pretrained DNN discriminator using \texttt{tensorflow} based on about 25 input variables. Pileup reweighting is applied on the fly to correct the MC distribution of the number of reconstructed vertices to the one observed in data, with varied histograms being used to account for systematic uncertainties. We use existing CMS code for lepton scale factor and Rochester corrections, which is integrated using Python CFFI and multi-threaded using OpenMP. In general, the most computationally complex steps in high-level analysis are the evaluation of machine learning models and the filling of hundreds to thousands of histograms with systematic variations. Both of these are highly efficient on SIMD processors such as GPUs. In order to emulate a more complex analysis, once the data are loaded to memory in batches, we study the scaling behaviour with respect to analysis complexity by rerunning the analysis multiple times on the same data, which reflects the possibility of easily doing parameter scans in this approach.
We use approximately 90 million MC events in 80 GB of compressed CMS NanoAOD files to benchmark the analysis, reflecting about 10\% of the total needed for a single year of data taking. First, we decompress and cache data at a speed of approximately 100-1000 kHz on the benchmark machine, creating an uncompressed cache of approximately 40 GB which is stored on a local SSD. This step is crucial for increased physics throughput, but needs to be repeated only when additional branches need to be included in the analysis. We find that it is advantageous to store the input data on a fast parallel storage with a capacity of several terabytes that supports random reads at speeds exceeding 100 MB/s, but caching speeds from networked bulk storage such as HDFS-FUSE are still acceptable at around 50-100 kHz. The cache creation speed is largely IO limited and can vary significantly depending on the hardware. Further optimization of the decompression and loading step of columnar ROOT data would be beneficial.
After the branch cache is created, the physics analysis code can be run efficiently. We observe baseline event processing rates of approximately 0.8 MHz on the multicore CPU backend and approximately 0.9 MHz using the GPU backend for a single analysis iteration. It is important to note that the absolute event processing speed depends on the analysis complexity. In HEP data analysis it is common to apply some preselection or ``skimming'' such as trigger bit requirements to the data that are stored or cached for regular reanalysis. By doing this, we avoid loading data that we know will never be used, and the effective data processing rate is thus increased by the preselection factor, which in the case of $H(\mu\mu)$ is approximately three. We choose to report the speed with respect to raw event numbers from CMS simulation without preselection, since quoting rates on preselected events would artificially inflate the reported speed. For the baseline analysis, the CPU and GPU backends perform about equally. The GPU backend is advantageous in the case of more complex analyses: when the analysis complexity is scaled up by a factor of 10, we find that the GPU-based backend completes the analysis approximately 2x faster than a pure CPU implementation, and provides a 2.5-fold speedup with respect to a naive implementation. These results are summarized in figure~\ref{fig:analysis_scaling}. In the implementation of the data flow pipeline, we use a multithreaded approach where a worker thread preloads cached data to system memory and feeds it into a thread-safe queue, which is processed by either the CPU or GPU backend on a separate thread. The batch size of the preloaded data determines the memory usage of the code, which is currently around 10-15 GB total ($ < 0.7$ GB/core). The complete analysis is encapsulated in a single multithreaded python script which can be steered directly from the command line and does not require running additional server software or services.
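The producer/consumer pattern described above can be reduced to a minimal sketch (names of our own choosing); the bounded queue is what limits the resident batch memory while still overlapping IO with compute:

```python
import threading
import queue

def run_pipeline(load_batch, batch_ids, process, maxsize=2):
    """Overlap IO (loading cached batches) with compute (kernels)."""
    q = queue.Queue(maxsize=maxsize)  # bounds memory held in flight
    SENTINEL = object()

    def producer():
        for bid in batch_ids:
            q.put(load_batch(bid))    # blocks while the queue is full
        q.put(SENTINEL)

    t = threading.Thread(target=producer)
    t.start()
    results = []
    while True:
        batch = q.get()
        if batch is SENTINEL:
            break
        results.append(process(batch))  # CPU or GPU backend
    t.join()
    return results
```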
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{analysis_scaling.pdf}
\caption{Extrapolated processing time for the $H(\mu\mu)$ analysis on 1 billion events and scaling with respect to analysis complexity. More complex analyses are emulated by iterating the same physics code on data already loaded to memory either 5x or 10x. We compare the expected time of a naive reanalysis on the CPU backend (blue) with respect to reanalyzing a loaded batch on the CPU backend (orange) and on the GPU backend (green). A naive reanalysis scales linearly with complexity, whereas analyzing data already in memory ensures the code is compute-bound rather than IO-bound. The GPU backend provides the best scaling with respect to analysis complexity.}
\label{fig:analysis_scaling}
\end{figure}
We have not performed a like-for-like benchmark against the currently common non-array-based codes for this analysis, as these cannot easily be run from a single integrated workflow and thus cannot be reliably benchmarked end-to-end. However, neglecting the time spent in batch job preparation and management, event processing speeds for ROOT-based event loop analyses are commonly in the range of 50--100 kHz, which also factors in data fetching and decompression. We therefore stress that the benchmark shown here is not meant as a direct comparison with standard analysis codes, but rather reflects the possibilities afforded by using array-based techniques on local data for analysis, which allows for fast experimentation and iteration on sizeable datasets before resorting to extensive distributed computing infrastructure. We hope this encourages further development and optimization of HEP data analysis codes along the performance and scaling axes.
In addition to benchmarking the scaling performance on a small dataset of 90 million simulated events as above, we have found that an end-to-end physics processing of roughly a billion events of data and MC is possible within minutes. The input data consist of approximately 1.2 billion events for the 2017 data taking period in approximately 880 GB of ROOT files. After the initial column caching and decompression step, which takes about an hour for a single 24-core job on the benchmark machine, a cache of approximately 210 GB is produced, preselecting 600M events based on the isolated muon trigger bit and saving only the approximately 50 analysis-specific columns. This cache is stored on an SSD for fast access and is portable between machines. The benchmarks are summarized in table~\ref{tab:analysis_bench}.
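The trigger-bit preselection and column pruning used to build the cache can be illustrated schematically; the branch names below are placeholders, not actual CMS NanoAOD names:

```python
import numpy as np

def preselect(events, trigger_column, keep_columns):
    """Cache only trigger-passing events and the columns the analysis uses.

    'events' is a dict of equal-length arrays (one per NanoAOD branch).
    """
    mask = events[trigger_column].astype(np.bool_)
    return {c: events[c][mask] for c in keep_columns}
```

Dropping the roughly half of events failing the isolated-muon trigger and all unused branches is what shrinks the 880 GB input to the ~210 GB cache quoted above.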
\begin{table}[t]
\begin{center}
\begin{tabular}{ c|cc }
Platform & Time to create caches & Time to run physics code \\
\hline
24-core server + GPU & 50 minutes & 10 minutes \\
4-core laptop & N/A & 30 minutes
\end{tabular}
\caption{Time to run the analysis on 1.2 billion raw events on a server \textsuperscript{a} and on a laptop \textsuperscript{b}. We list the time to decompress and cache approximately 880 GB of ROOT data, which is done only on the server, and the time to run the physics analysis code on approximately 210 GB of uncompressed and preselected caches.}
\label{tab:analysis_bench}
\small\textsuperscript{a} 24-core Intel Xeon E5-2687W v4 @ 3.00 GHz, Intel P4608 SSD, nVidia GTX Titan X\\
\small\textsuperscript{b} 4-core 2.3 GHz Intel i5 laptop, 8GB RAM, Thunderbolt 3 SSD
\end{center}
\end{table}
Although some computationally demanding analysis features such as jet energy scale variations are not yet implemented, the scaling behaviour suggests that even a significantly more complex analysis could be completed within a few hours using only a single machine. Both horizontal and vertical scaling are possible. The work can be distributed across multiple machines using data parallelism within environments such as Apache Spark, or via more traditional batch systems. Further engineering work on multithreading, kernel optimization, GPU streaming and caching is expected to increase the achievable analysis speeds on a single machine.
\section{Summary and outlook}
\label{sec:summary}
The central message of this report is that significant physics analysis processing can be carried out on a single machine, with simple multithreaded code, within minutes to hours. Using memory-mappable caching, array processing approaches and a small number of specialized kernels for jagged arrays implemented in Python using Numba, it is possible to do real physics analysis tasks with event rates reaching up to a million events per second. It is also possible to offload parts of these array computations to accelerators such as GPUs, which are highly efficient at SIMD processing. Once the batched data are in device memory, reprocessing is computationally efficient, such that multiple iterations of the analyses can be run for optimization purposes. We have demonstrated a prototypical Higgs analysis implementation using these computational kernels which can be evaluated on a billion MC and data events in less than an hour with optional GPU offloading. Several optimizations remain possible in future work, among them optimizing the data access via efficient decompression and caching, scaling across multiple machines, and improving the threading performance. We hope the approach shown here will spark discussion and further development of fast analysis tools which would be useful for scientists involved in HEP data analysis and more widely in data-intensive fields.
\section*{Acknowledgment}
We would like to thank Jim Pivarski and Lindsey Gray for helpful feedback at the start of this project. We are grateful to Nan Lu and Irene Dutta for providing a reference implementation of the $H(\mu\mu)$ analysis that could be adapted to vectorized code. We would like to thank Christina Reissel for being an independent early tester of these approaches and for helpful feedback on this report. The availability of the excellent Python libraries \texttt{uproot}, \texttt{awkward}, \texttt{coffea}, \texttt{Numba}, \texttt{cupy} and \texttt{numpy} was imperative for this project and we are grateful to the developers of those projects. Part of this work was conducted at ``\textit{iBanks}'', the AI GPU cluster at Caltech. We acknowledge NVIDIA, SuperMicro and the Kavli Foundation for their support of ``\textit{iBanks}''.
\section{Introduction}
In this paper we consider semiclassical asymptotics for a class of Schr\"odinger operators
on bounded sets $\Omega\subset \mathbb{R}^d$ with potentials which are singular at the boundary and subject to Dirichlet boundary conditions. Specifically, for a bounded open set $\Omega\subset\mathbb{R}^d$ with $C^1$ boundary we consider Schr\"odinger operators
\begin{equation}\label{eq: informal op def}
-\Delta+ W(x) \quad \mbox{with } W(x) \approx \textup{dist}(x, \Omega)^{-2} \mbox{ as } \textup{dist}(x, \partial\Omega)\to 0\,.
\end{equation}
These operators have purely discrete spectrum and our main interest is in the asymptotic behavior of their eigenvalues. Our main result is a two-term asymptotic formula for the sum of the eigenvalues.
Before we formulate our main result it is necessary to explain more precisely how~\eqref{eq: informal op def} is to be interpreted. We shall assume that our potential decomposes as one part which is in $L^\infty_{\rm loc}(\Omega)$ and has the prescribed singular behavior at the boundary and a part which, in comparison, is well-behaved. To simplify the exposition we write
\begin{equation*}
H_{\Omega, b, V}(h) = -h^2\Delta+h^2\Bigl(b^2(x)-\frac{1}{4}\Bigr)\frac{1}{\textup{dist}(x, \partial\Omega)^2}+h^2V(x)-1\quad \mbox{for }h>0\,.
\end{equation*}
Technically, the operator $H_{\Omega, b, V}(h)$ is defined through the quadratic form
\begin{equation}\label{eq: H quad form}
u \mapsto \int_\Omega\Bigl(h^2|\nabla u(x)|^2 + h^2\Bigl(b^2(x)-\frac{1}{4}\Bigr)\frac{|u(x)|^2}{\textup{dist}(x, \partial\Omega)^2}+h^2 V(x)|u(x)|^2-|u(x)|^2\Bigr)\,dx
\end{equation}
with form domain $\{ u\in H_0^1(\Omega): V_+u^2 \in L^1(\Omega)\}$. Throughout we shall assume that $V \in L^1(\Omega)$, $V_- \in L^{1+d/2}(\Omega)$, and that $b \in L^\infty(\Omega)$ is positive and satisfies
\begin{equation}\label{eq: b regularity}
\lim_{r\to 0^+}\int_{\partial\Omega}\biggl[\,\sup_{y\in B_r(x) \cap \Omega}b(y)-\inf_{y\in B_r(x) \cap \Omega}b(y)\biggr] \,d\mathcal{H}^{d-1}(x) = 0\,.
\end{equation}
Here and in what follows we define $x_\pm = \frac{|x|\pm x}{2}$ and note that with this convention both $x_+$ and $x_-$ are non-negative. As a consequence of Hardy's inequality, the assumptions on $V$ and $b$ ensure that the quadratic form \eqref{eq: H quad form} is bounded from below and closed. Therefore it generates a self-adjoint operator $H_{\Omega,b,V}(h)$ in $L^2(\Omega)$ which is bounded from below.
We emphasize that by positivity of $b$, we mean $\inf_\Omega b>0$. This assumption can naturally be relaxed to require positivity only in a neighborhood of the boundary by adjusting $V$ correspondingly. The regularity assumption~\eqref{eq: b regularity} implies that $b|_{\partial\Omega}$ can be made sense of as an element of $L^\infty(\partial\Omega)$; indeed, by~\eqref{eq: b regularity}, $b$ has a well-defined limit $\mathcal{H}^{d-1}$-almost everywhere on $\partial\Omega$ which is finite since $b\in L^\infty(\Omega)$. Our main result can now be stated as follows:
\begin{thm}\label{thm: main result}
Let $\Omega \subset \mathbb{R}^d$ be open and bounded with $C^1$ boundary, $V \in L^1(\Omega)$ with $V_- \in L^{1+d/2}(\Omega)$, and let $b\in L^\infty(\Omega)$ be positive and satisfy~\eqref{eq: b regularity}. Then, as $h\to 0^+$,
\begin{equation*}
\textup{Tr}(H_{\Omega, b, V}(h))_- = L_dh^{-d}|\Omega| - \frac{L_{d-1}}{2}h^{-d+1}\int_{\partial\Omega}b(x)\,d\mathcal{H}^{d-1}(x) + o(h^{-d+1})\,,
\end{equation*}
where $L_d = (4\pi)^{-d/2}\Gamma(2+d/2)^{-1}.$
\end{thm}
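For orientation, evaluating the stated formula for $L_d$ in low dimensions gives
\begin{equation*}
L_1 = \frac{1}{2\sqrt{\pi}\,\Gamma(5/2)} = \frac{2}{3\pi}\,,\qquad
L_2 = \frac{1}{4\pi\,\Gamma(3)} = \frac{1}{8\pi}\,,\qquad
L_3 = \frac{1}{(4\pi)^{3/2}\,\Gamma(7/2)} = \frac{1}{15\pi^2}\,.
\end{equation*}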
As a corollary of Theorem~\ref{thm: main result} we deduce:
\begin{cor}\label{cor: convergence of density}
Let $\Omega\subset \mathbb{R}^d$ be open and bounded with $C^1$ boundary. Then, with $\Delta_\Omega$ denoting the Dirichlet Laplace operator in $\Omega$, as $h \to 0^+$ and in the sense of measures
\begin{equation*}
h^{d+1}\frac{\mathds{1}(-h^2\Delta_\Omega\leq 1)(x, x)}{\mathrm{dist}(x, \partial\Omega)^2}\,dx \to \frac{L_{d-1}}{2}\mathcal{H}^{d-1}|_{\partial\Omega}\,.
\end{equation*}
\end{cor}
\begin{proof}
The corollary follows from a standard Feynman--Hellmann argument (cf.~\cite{LiebSimon_AdvMath77}) and Theorem~\ref{thm: main result} applied with the potential $W(x) = t f(x)/\mathrm{dist}(x, \partial\Omega)^2$ for $f \in C(\overline{\Omega})$ and sending first $h$ then $t$ to zero.
\end{proof}
Spectral asymptotics for differential operators that degenerate at the boundary of the domain are not new. However, the results in the literature mainly concern cases in which the operator degenerates at leading order and how this affects the first term in the asymptotics, see~\cite{BirmanSolomjak_Survey79,BirmanSolomjak_QuantEmbedding} and references therein.
While the class of operators considered here is drastically less singular, our interest is in the effect of the degeneracy on the second term in the asymptotics.
In the special case of the Dirichlet Laplacian, i.e.\ $V\equiv 0$ and $b\equiv1/2$, Theorem~\ref{thm: main result} was proved in~\cite{FrankGeisinger_11,FrankGeisinger_12}. The strategy of our proof follows closely that developed there, but several new obstacles need to be circumvented in the presence of the potential, which is singular at the boundary. The idea is to localize the operator in balls whose size varies depending on the distance to the boundary and $h$. In a ball far from the boundary both the boundary conditions and the potential have a negligible effect and precise asymptotics can be obtained through standard methods. In a ball close to the boundary the regularity of the boundary allows us to map the problem to a half-space where asymptotics are obtained by explicitly diagonalizing an effective operator. The main new ingredients needed here are to control how the straightening of the boundary affects the singular part of the potential and to understand how the potential enters in the half-space problem.
The works~\cite{FrankGeisinger_11,FrankGeisinger_12} for domains with $C^1$ boundaries were extended to the case of Lipschitz boundaries in \cite{FrankLarson}, see also \cite{FrankLarson_20}. Since the (weak) Hardy constant can be smaller than $1/4$ for Lipschitz domains, it is not clear how to generalize the results of the present paper to this setting.
The plan for the paper is as follows. In Section~\ref{sec: Prelmininaries} we recall a number of results concerning changes of variables mapping $\partial \Omega$ locally to a hyperplane. In particular, Lemma~\ref{lem: A geometric lemma} describes how such a mapping affects the singular part of our potential. We also prove a local Hardy--Lieb--Thirring inequality which will be crucial in controlling error terms appearing in our analysis, and which replaces the Lieb--Thirring inequality in~\cite{FrankGeisinger_11} in the absence of a singular potential. In Section~\ref{sec: Local asymptotics} we provide local asymptotics, both in the bulk of our domain and close to the boundary. Finally, in Section~\ref{sec: local to global} we adapt the localization procedure developed in~\cite{FrankGeisinger_11,FrankGeisinger_12,FrankLarson} to our current setting and use it to piece together the local asymptotics of Section~\ref{sec: Local asymptotics}, thus proving Theorem~\ref{thm: main result}.
The letter $C$ will denote a constant whose value can change at each occurrence.
\medskip
We are deeply grateful to Ari Laptev for sharing his fascination for spectral estimates and Hardy's inequality with us and we would like to dedicate this paper to him on the occasion of his 70th birthday.
\section{Preliminaries}\label{sec: Prelmininaries}
\subsection{Straightening the boundary}\label{sec: Straightening of bdry}
Let $\mathbb{R}^d_+ = \{ y\in \mathbb{R}^d: y_d >0\}$. Let $B\subset \mathbb{R}^d$ be an open ball of radius $\ell$ centred at a point $x_0\in \partial\Omega$. By rotating and translating we may assume that $x_0=0$ and $\nu_0=(0, \ldots, 0, 1)$ is the inward pointing unit normal to $\partial \Omega$ at $x_0$. Since $\Omega$ is bounded with $C^1$ boundary, there exists a non-decreasing modulus of continuity $\omega\colon \mathbb{R}_+\to [0, 1]$ such that, if $\ell$ is small enough, there exists a function $f\colon \mathbb{R}^{d-1}\to \mathbb{R}$ satisfying $|\nabla f(x')| \leq \omega(|x'|)$ such that
\begin{equation*}
\partial\Omega \cap B_{2\ell}(0) = \{(x', x_d) \in \mathbb{R}^{d-1}\times \mathbb{R}: x_d = f(x')\}\cap B_{2\ell}(0)\,.
\end{equation*}
Note that, by the choice of coordinates, $f(0)=0$ and $\nabla f(0)=0$.
Set $\,\mathcal{X}=\{(x', x_d)\in \mathbb{R}^{d-1}\times \mathbb{R}: |x'|<2\ell\}$. We define a diffeomorphism $\Phi \colon \mathcal{X} \to \mathbb{R}^d$
by $\Phi_j(x)=x_j$ for $j=1, \ldots, d-1$ and $\Phi_d(x)= x_d-f(x')$. Note that the Jacobian determinant of $\Phi$ equals $1$ and that the inverse of $\Phi$ is well-defined on $\Phi(\mathcal{X})=\mathcal{X}$.
The inverse is given by $\Phi_j^{-1}(y)=y_j$ for $j=1, \ldots, d-1$ and $\Phi^{-1}_d(y)=y_d+f(y')$.
In the following lemma we gather some results whose proofs are standard and can be found, for instance, in~\cite[Section 4]{FrankGeisinger_12}.
\begin{lem}[Straightening of the boundary]\label{lem: Straightening of boundary}
Let $B, \Phi$ be as above and for $u\colon B \to \mathbb{R}$ set $\tilde u = u \circ \Phi^{-1}$. For $0<\ell \leq c(\omega)$ and with $C$ depending only on $d$, we have:
\begin{enumerate}
\item \label{itm: volume preservation} if $u \in L^1(B)$ then
\begin{equation*}
\int_{B} u(x)\,dx = \int_{\Phi(B)}\tilde u(y)\,dy\,.
\end{equation*}
\item\label{itm: boundary integral change} if $u \in L^\infty(\partial\Omega\cap B)$ then
\begin{equation*}
\biggl|\int_{\partial\Omega \cap B}u(x)\,d\mathcal{H}^{d-1}(x)- \int_{\partial\mathbb{R}^{d}_+ \cap \Phi(B)}\tilde u(y)\,d\mathcal{H}^{d-1}(y)\biggr|\leq C \ell^{d-1}\omega(\ell)^2\|u\|_{L^\infty}\,.
\end{equation*}
\item\label{itm: gradient estimate} if $u\in H^1_0(\Omega \cap B)$ then $\tilde u \in H^1_0(\mathbb{R}^d_+ \cap \Phi(B))$ and
\begin{equation*}
\biggl|\int_{\Omega\cap B} |\nabla u(x)|^2\,dx - \int_{\mathbb{R}^d_+ \cap \Phi(B)} |\nabla \tilde u(y)|^2\,d y \biggr| \leq C \omega(\ell) \int_{\mathbb{R}^d_+ \cap \Phi(B)} |\nabla \tilde u(y)|^2\,dy\,.
\end{equation*}
\item\label{itm: sup gradient estimate} if $u \in C^1_0(\mathbb{R}^d)$ is supported in $\overline{B}$ then, after extension by zero, $\tilde u \in C^1_0(\mathbb{R}^d)$ with $\mathrm{supp}\,\tilde u\subseteq \overline{B_{2\ell}(0)}$ and $\|\nabla \tilde u\|_{L^\infty} \leq C\|\nabla u\|_{L^\infty}$.
\end{enumerate}
\end{lem}
In addition to the properties in Lemma~\ref{lem: Straightening of boundary} we will need the following result which enables us to control the change of the singular part of our potentials:
\begin{lem}\label{lem: A geometric lemma}
Let $B, \Phi$ be as above. There is a constant $C$ depending only on $d$ such that for any $x \in B \cap \Omega$,
\begin{equation}\label{eq:distdist}
0 \leq \frac{1}{\textup{dist}(x, \partial\Omega)^2}- \frac{1}{\textup{dist}(\Phi(x),\partial \mathbb{R}^d_+)^2} \leq C\frac{\omega(2\ell)^2}{\textup{dist}(\Phi(x),\partial \mathbb{R}^d_+)^2}\,.
\end{equation}
\end{lem}
\begin{proof}
By definition of $f$, $(x', f(x'))\in \partial\Omega$, thus $\textup{dist}(x, \partial\Omega) \leq |x-(x', f(x'))|=|x_d-f(x')| = \textup{dist}(\Phi(x), \partial \mathbb{R}^d_+)$ which implies the lower bound in \eqref{eq:distdist}.
To prove the upper bound, let $z= (z', f(z'))\in \partial\Omega$ be such that $\textup{dist}(x, \partial\Omega)=|x- z|$. Since $\partial\Omega$ is parametrized by $f$ in the larger ball $B_{2\ell}(x_0)$ it is clear that such a point exists and that $z\in B_{2\ell}(x_0)$. The point $z$ might not be uniquely determined but that will not play any role in what follows.
We begin by rewriting the expression we want to bound in terms of $z$:
\begin{align*}
\frac{1}{\textup{dist}(x, \partial\Omega)^2}- \frac{1}{\textup{dist}(\Phi(x),\partial \mathbb{R}^d_+)^2}
&=
\frac{1}{|x-z|^2}- \frac{1}{|x_d-f(x')|^2}\\
&=
\frac{(f(x')-f(z'))(f(x')+f(z')-2x_d)-|x'-z'|^2}{|x-z|^2|x_d-f(x')|^2}\,.
\end{align*}
Since $f$ is $C^1$ and by the definition of $z$ it holds that
\begin{equation*}
x= z+ |x-z| \frac{(-\nabla f(z'), 1)}{\sqrt{1+|\nabla f(z')|^2}}\,.
\end{equation*}
Consequently,
\begin{equation}\label{eq: comparison distances}
|x'-z'|^2 = |x-z|^2\frac{|\nabla f(z')|^2}{1+|\nabla f(z')|^2}
\quad\mbox{and}\quad
|x_d-f(z')|^2 = \frac{ |x-z|^2}{1+|\nabla f(z')|^2}\,.
\end{equation}
Note also that $f(x')\leq f(z')\leq x_d$. From the above identities one finds
\begin{equation}\label{eq: distances in terms of f}
\begin{aligned}
\frac{1}{\textup{dist}(x, \partial\Omega)^2}- \frac{1}{\mathrm{dist}(\Phi(x), \partial \mathbb{R}^d_+)^2}
&=
\frac{1}{|x_d-f(x')|^2}\Biggl[\frac{|f(x')-f(z')|^2}{|x-z|^2} \\
&\quad
+2\frac{|f(x')-f(z')|}{|x-z|\sqrt{1+|\nabla f(z')|^2}}- \frac{|\nabla f(z')|^2}{1+|\nabla f(z')|^2}\Biggr]\,.
\end{aligned}
\end{equation}
By the fundamental theorem of calculus and~\eqref{eq: comparison distances}
\begin{align*}
|f(x')-f(z')| = \biggl|(x'-z')\int_0^1 \nabla f(t x'+(1-t)z')\,dt\biggr|
\leq
\omega(2\ell)^2|x-z| \,.
\end{align*}
Therefore
\begin{align*}
\frac{|f(x')-f(z')|^2}{|x-z|^2}+ 2 \frac{|f(x')-f(z')|}{|x-z|\sqrt{1+|\nabla f(z')|^2}}
- \frac{|\nabla f(z')|^2}{1+|\nabla f(z')|^2}
\leq C \omega(2\ell)^2\,.
\end{align*}
Combined with~\eqref{eq: distances in terms of f} this completes the proof of Lemma~\ref{lem: A geometric lemma}.
\end{proof}
\subsection{A local Hardy--Lieb--Thirring inequality}
The aim of this subsection is to prove a bound for localized traces of our operator. Before stating the result we recall the following Hardy inequality due to Davies~\cite{Davies95} (combine his Theorems 2.3 and 2.4).
\begin{lem}\label{lem: Davies Hardy ineq}
Let $\Omega \subset \mathbb{R}^d$ be open and bounded with $C^1$-boundary. Then for any $\varepsilon>0$ there is a $c_H(\varepsilon, \Omega)\geq 0$ such that for all $u \in H^1_0(\Omega)$,
\begin{equation*}
\int_{\Omega}|\nabla u(x)|^2 \,dx + \Bigl(\varepsilon-\frac{1}{4}\Bigr) \int_\Omega \frac{|u(x)|^2}{\mathrm{dist}(x, \partial\Omega)^2}\,dx \geq - c_H(\varepsilon, \Omega) \int_\Omega |u(x)|^2\,dx\,.
\end{equation*}
\end{lem}
\begin{rem}
Lemma~\ref{lem: Davies Hardy ineq} can be proved in a direct manner by using a partition of unity and appealing to Lemmas~\ref{lem: Straightening of boundary} and~\ref{lem: A geometric lemma}. In particular, this allows one to quantify the best constant $c_H$ in terms of the $C^1$-regularity of $\partial\Omega$. Indeed, such a proof yields the bound $c_H(\varepsilon, \Omega) \leq \tfrac{C}{\omega^{-1}(\varepsilon)^2}$ for a constant $C$ depending only on the dimension, where $\omega^{-1}$ is the inverse of the $C^1$-modulus of continuity of $\partial \Omega$.
\end{rem}
With Lemma~\ref{lem: Davies Hardy ineq} in hand we move on to the main result of this subsection, namely the following local Hardy--Lieb--Thirring-type inequality for $H_{\Omega, b, V}$ (cf.~\cite{FrankLoss_HSM_2012}):
\begin{lem}
\label{lem: local HLT}
Let $\Omega, b, V$ be as in Theorem~\ref{thm: main result}. Let $\phi \in C^1_0(\mathbb{R}^d)$ be supported in a ball $\overline{B}$ of radius $\ell$ and set $\underline{b} = \inf_{\Omega \cap B} b$.
If $0<h\leq K \min\{\ell, c_H(\underline{b}^2/2, \Omega)^{-1/2}\}$, then
\begin{equation*}
\textup{Tr}(\phi H_{\Omega, b, V}(h)\phi)_- \leq C \min\{\underline{b}, 1\}^{-d}\ell^dh^{-d} \Bigl(1 + h^2 \|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega \cap B)}\Bigr)\,,
\end{equation*}
where the constant $C$ depends only on $d, K,$ and $\|\phi\|_{L^\infty}$.
\end{lem}
\begin{proof}[Proof of Lemma~\ref{lem: local HLT}]
By assumption, $\underline{b}>0$.
By the variational principle and for any $\delta \in (0, 1/2]$, we find
\begin{align*}
\phi H_{\Omega, b, V}(h) \phi &\geq
\phi\Bigl(-h^2 \delta \Delta - h^2 V_-(x) -1\\
&\quad
- h^2(1-\delta)\Bigl(-\Delta + (1-\delta)^{-1}\Bigl(\underline{b}^2- \frac{1}{4}\Bigr) \frac{1}{\mathrm{dist}(x, \partial\Omega)^2}\Bigr) \Bigr)\phi\,.
\end{align*}
Since $\delta \in (0, 1/2]$ we have
\begin{equation*}
(1-\delta)^{-1}\Bigl(\underline{b}^2- \frac{1}{4}\Bigr) \geq (1+2\delta)\Bigl(\underline{b}^2- \frac{1}{4}\Bigr) > \underline{b}^2 - \frac{\delta}{2} - \frac{1}{4}\,.
\end{equation*}
Thus, setting $\delta = \min\{\underline{b}^2, 1/2\} \leq 1/2$, Lemma~\ref{lem: Davies Hardy ineq} implies with $c_0 = c_H(\underline{b}^2/2, \Omega)$ that
\begin{equation}\label{eq: lower bound by Hardy}
\phi H_{\Omega, b, V}(h)\phi \geq
\phi(-h^2 \delta \Delta - c_0h^2
- h^2 V_-(x) -1) \phi\,.
\end{equation}
Consequently, for any $0<\rho< 1$, the variational principle and~\eqref{eq: lower bound by Hardy} yields
\begin{align*}
\mathrm{Tr}(\phi H_{\Omega, b, V}(h)\phi)_-
&\leq \mathrm{Tr}(\phi(-h^2\delta(1-\rho)\Delta -c_0h^2-1)\phi)_-\\
&\quad +
\mathrm{Tr}(\phi(-h^2\delta\rho\Delta - h^2V_-)\phi)_-\,.
\end{align*}
Using the Berezin--Li--Yau inequality we find
\begin{align*}
\mathrm{Tr}(\phi(-h^2\delta(1-\rho)&\Delta -c_0h^2-1)\phi)_- \\
&\leq C (1+c_0h^2)^{1+d/2}(1-\rho)^{-d/2}\delta^{-d/2}h^{-d}\ell^{d}\,,
\end{align*}
with $C>0$ depending on $d$ and $\|\phi\|_{L^\infty}$. For the remaining term the Lieb--Thirring inequality implies
\begin{equation*}
\mathrm{Tr}(\phi(-h^2\delta\rho\Delta - h^2V_-)\phi)_- \leq C h^2\delta^{-d/2}\rho^{-d/2}\|V_-\|_{L^{1+d/2}(\Omega \cap B_\ell)}^{1+d/2}\,,
\end{equation*}
for some $C>0$ depending only on $d$. Gathering the estimates and setting $\rho = h^2/(2K^2\ell^2)< 1$ completes the proof.
\end{proof}
\section{Local asymptotics}\label{sec: Local asymptotics}
\subsection{Local asymptotics in the bulk}
\begin{lem}
\label{lem: local asymptotics bulk}
Let $\phi \in C^1_0(\mathbb{R}^d)$ be supported in a ball $\overline{B}$ of radius $\ell>0$ and satisfy
\begin{equation}\label{eq: phi grad bound bulk asymptotics}
\|\nabla \phi\|_{L^\infty(\mathbb{R}^d)}\leq M \ell^{-1}\,.
\end{equation}
If $V\in L^1(B)$ is such that $V_- = V_0+V_1$ with $0\leq V_0 \in L^\infty(B)$ and $V_1 \in L^{1+d/2}(B)$ then, for $0<h\leq K \min\{\ell, \|V_0\|_\infty^{-1/2}\}$,
\begin{align*}
\Bigl|\textup{Tr}(\phi (-h^2&\Delta+h^2V-1)\phi)_- - L_d h^{-d}\int_B \phi^2(x)\,dx \Bigr|\\
& \leq
C h^{-d+2}\Bigl[\ell^{d-2}+\ell^d\|V_0\|_{L^\infty(B)} + \ell^d\|V_1\|^{1+d/2}_{L^{1+d/2}(B)}+\|V_+\|_{L^1(B)}\Bigr]\,,
\end{align*}
where the constant $C$ depends only on $d, M, K$.
\end{lem}
\begin{proof}
Throughout the proof we set $H_V = H_{\mathbb R^d,0,V}=-h^2\Delta+ h^2V-1$ in $L^2(\mathbb R^d)$.
To prove the lower bound, consider the operator $\gamma$ with integral kernel
\begin{equation*}
\gamma(x, y) = \frac{1}{(2\pi)^d}\chi(x)\int_{|\xi|< h^{-1}}e^{i \xi(x-y)}\,d\xi\,\chi(y)\,,
\end{equation*}
where $\chi \in C_0^\infty(\mathbb{R}^d)$ with $0\leq \chi \leq 1$ and $\chi \equiv 1$ on $B$. The operator $\gamma$ is trace class and satisfies $0\leq \gamma \leq \mathbf{1}$. Therefore, the variational principle implies that
\begin{align*}
\textup{Tr}(\phi H_V \phi)_-
&\geq
\textup{Tr}(\phi H_{V_+}\phi)_-\\
&\geq
-\textup{Tr}(\gamma \phi H_{V_+}\phi)\\
&=
- \frac{1}{(2\pi)^d}\int_{|\xi|< h^{-1}} \Bigl(h^2 \|\nabla e^{i\xi\, \cdot \,}\phi\|_{L^2(\mathbb{R}^d)}^2+h^2\|V_+ \phi^2\|_{L^1(\mathbb{R}^d)}-\|\phi\|_{L^2(\mathbb{R}^d)}^2\Bigr)\,d\xi\\
&=
L_d h^{-d}\int_B \phi^2(x)\,dx
- C h^{-d+2}\Bigl(\|\nabla \phi\|_{L^2(\mathbb{R}^d)}^2+\|V_+ \phi^2\|_{L^1(\mathbb{R}^d)}\Bigr)\,.
\end{align*}
Since, by~\eqref{eq: phi grad bound bulk asymptotics}, $\|\phi\|_{L^\infty}\leq M$ and $\|\nabla \phi\|_{L^2(\mathbb{R}^d)}^2 \leq C \ell^{d-2}$ this proves the lower bound.
It remains to prove the upper bound.
For any $0<\rho \leq 1/2$
\begin{align*}
\textup{Tr}(\phi H_{V}\phi)_-
&\leq \textup{Tr}(\phi H_{V_-}\phi)_-\\
&\leq
\textup{Tr}(\phi (-h^2(1-\rho)\Delta-h^2V_0-1)\phi)_- + h^2\textup{Tr}(\phi(-\rho\Delta-V_1)\phi)_-\,.
\end{align*}
To bound the second term we apply the Lieb--Thirring inequality to conclude that
\begin{align*}
h^2\textup{Tr}(\phi(-\rho\Delta-V_1)\phi)_-
&\leq
h^2\textup{Tr}(\phi(-\rho\Delta-V_1\mathds{1}_B)_-\phi)
\leq
Ch^2\rho^{-d/2} \int_{B}|V_1(x)|^{1+d/2}\,dx\,,
\end{align*}
where we again used $\|\phi\|_{L^\infty}\leq M$. Since $V_0\in L^\infty(B)$, we can bound
\begin{align*}
\textup{Tr}(\phi (-h^2(1-\rho)\Delta-h^2V_0-1)\phi)_-
&\leq
\textup{Tr}(\phi(-h^2(1-\rho)\Delta - h^2 \sup_{B}V_0-1)\phi)_-\\
&=
(1+h^2\sup_B V_0) \textup{Tr}(\phi (-\tilde h^2\Delta-1)\phi)_-
\end{align*}
with $\tilde h = h(1-\rho)^{1/2}(1+h^2\sup_B V_0)^{-1/2}$.
By the Berezin--Li--Yau inequality,
\begin{equation*}
\textup{Tr}(\phi (-\tilde h^2\Delta-1)\phi)_-\leq L_d \tilde h^{-d}\int_B \phi^2(x)\,dx\,.
\end{equation*}
Combining the above we have arrived at
\begin{align*}
\textup{Tr}(\phi H_{V}\phi)_-
&\leq
L_d h^{-d}\int_B \phi^2(x)\,dx +
C h^2 \rho^{-d/2}\int_B |V_1(x)|^{1+d/2}\,dx\\
&\quad +
L_d h^{-d}\Bigl[(1-\rho)^{-d/2}(1+h^2\sup_B V_0)^{1+d/2}-1\Bigr]\int_B \phi^2(x)\,dx \\
&\leq
L_d h^{-d}\int_B \phi^2(x)\,dx
+
C h^2 \rho^{-d/2}\int_B |V_1(x)|^{1+d/2}\,dx\\
&\quad +
C h^{-d}\Bigl[\rho+h^2\sup_B V_0\Bigr]\int_B \phi^2(x)\,dx\,,
\end{align*}
where $C$ depends only on $d, K, M$. Setting $\rho = h^2/(2K^2\ell^2)\leq 1/2$ and using $\int \phi^2 \leq C \ell^d$ completes the proof.
\end{proof}
\subsection{Local asymptotics near the boundary}
In this section we prove the following local asymptotic expansion close to the boundary:
\begin{thm}\label{thm: local asymptotics boundary}
Let $\Omega, b, V$ be as in Theorem~\ref{thm: main result}. Let $\phi \in C^1_0(\mathbb{R}^d)$ be supported in a ball $\overline{B}$ of radius $\ell$ and satisfy
\begin{equation*}
\|\nabla \phi\|_{L^\infty(\mathbb{R}^d)}\leq M \ell^{-1}\,.
\end{equation*}
Assume that $\textup{dist}(B, \partial\Omega)\leq 2\ell$, and set $\underline{b} = \inf_{B\cap \Omega} b$. For $0<\ell\leq c(\Omega, \underline{b})$ and $0<h\leq K\ell$,
\begin{align*}
\biggl|\textup{Tr}&(\phi H_{\Omega, b, V}(h)\phi)_- - L_dh^{-d}\int_\Omega \phi^2(x)\,dx + \frac{L_{d-1}}{2}h^{-d+1}\int_{\partial\Omega}\phi^2(x)b(x)\,d\mathcal{H}^{d-1}(x)\biggr|\\
&\leq \ell^dh^{-d}o_{\ell\to 0^+}(1)
+ O(h^{-d+1})\int_{\partial\Omega}\phi^2(x)\biggl[\sup_{y\in B_{2\ell}(x)}b(y)-\inf_{y\in B_{2\ell}(x)}b(y)\biggr]\,d\mathcal{H}^{d-1}(x)\\
&\quad
+ O(h^{-d+2})\Bigl(\ell^{d-2}\log(\ell/h)+\|V_+\|_{L^1(\Omega \cap B)}+\ell^d \|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega \cap B)}\Bigr)
\end{align*}
Moreover, the error terms and the implicit constants can be quantified in terms of the $C^1$-regularity of $\partial\Omega$ and $M, K, \|b\|_{L^\infty(\Omega\cap B)}, \underline{b}$.
\end{thm}
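Before turning to the proof it may be instructive to see the two-term structure in the simplest possible instance (our own illustration, not part of the argument; the interval length and the values of $h$ are arbitrary choices): for $d=1$ and constant $b=1/2$ the Hardy term vanishes, so the operator is the Dirichlet realization of $-h^2\frac{d^2}{dx^2}-1$ on an interval, with explicit eigenvalues. Numerically the Riesz mean matches $L_1 h^{-1}|\Omega| - \tfrac{L_0}{2}\,b\,\#\partial\Omega$ with $L_1 = 2/(3\pi)$ and $L_0 = 1$, up to an error of order $h^{-d+2}=h$:

```python
import math

def riesz_mean(h, length=math.pi):
    # Tr(-h^2 d^2/dx^2 - 1)_- for the Dirichlet Laplacian on (0, length),
    # whose eigenvalues are (pi k / length)^2, k = 1, 2, ...
    total, k = 0.0, 1
    while True:
        lam = (h * math.pi * k / length) ** 2
        if lam >= 1.0:
            break
        total += 1.0 - lam
        k += 1
    return total

def two_term(h, length=math.pi, b=0.5):
    L1 = 2.0 / (3.0 * math.pi)   # L_1 = (2 pi)^{-1} int (|xi|^2 - 1)_- d xi
    L0 = 1.0
    # L_1 h^{-1} |Omega|  -  (L_0 / 2) b * (number of boundary points)
    return L1 * length / h - 0.5 * L0 * b * 2.0

err_coarse = abs(riesz_mean(0.01) - two_term(0.01))
err_fine = abs(riesz_mean(0.001) - two_term(0.001))
```

The error shrinks roughly linearly when $h$ is decreased by a factor of ten, consistent with the $O(h^{-d+2})$ remainder.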
The proof of Theorem~\ref{thm: local asymptotics boundary} will be split into several lemmas, the first of which reduces our problem to the corresponding one in a half-space:
\begin{lem}\label{lem: reduction to halfspace}
Let $\Omega, b , V$ be as in Theorem~\ref{thm: main result}. Let $\phi \in C^1_0(\mathbb{R}^d)$ be supported in a ball $\overline{B}$ of radius $\ell$ such that $\textup{dist}(B, \partial\Omega)\leq 2\ell$, and $\inf_{B\cap \Omega} b = \underline{b}>0$. For $0<\ell\leq c(\Omega, \underline{b})$ and $0<h\leq K\ell$ with $\tilde \phi = \phi\circ \Phi^{-1}, \tilde V = V \circ \Phi^{-1}$,
\begin{align*}
\textup{Tr}(\tilde\phi H_{\mathbb{R}^d_+, \overline{b}, \tilde V}&(h)\tilde\phi)_- - \ell^dh^{-d}o_{\ell\to 0^+}(1)\Bigl(1 + h^2\|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega \cap B)}\Bigr) \\
&\leq \textup{Tr}(\phi H_{\Omega, b, V}(h)\phi)_-\\
&\leq \textup{Tr}(\tilde\phi H_{\mathbb{R}^d_+, \underline{b}, \tilde V}(h)\tilde\phi)_- + \ell^dh^{-d}o_{\ell\to 0^+}(1)\Bigl(1 + h^2\|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega \cap B)}\Bigr)
\end{align*}
where $\overline{b} = \sup_{x\in B\cap \Omega}b(x)$. Moreover, the error terms and the implicit constants can be quantified in terms of the $C^1$-regularity of $\partial\Omega$ and $K, \underline{b}, \overline{b}, \|\phi\|_{L^\infty}$.
\end{lem}
\begin{proof}
Provided $\ell$ is small enough there exists a ball $B'\supset B$ with centre on $\partial \Omega$ and radius $4\ell$ which satisfies the assumptions in Section~\ref{sec: Straightening of bdry}. Let $\Phi$ be the associated diffeomorphism.
We split the proof into two parts, in the first part we prove the upper bound and in the second we prove the lower bound.
\smallskip
{\noindent\bf Part 1:} (Proof of the upper bound) By the variational principle
\begin{equation*}
\textup{Tr}(\phi H_{\Omega, b, V}(h)\phi)_- \leq \textup{Tr}(\phi H_{\Omega, \underline{b}, V}(h)\phi)_-\,.
\end{equation*}
Moreover, by Lemma~\ref{lem: Straightening of boundary} there exists $C_0>0$ depending only on $d$ such that
\begin{equation*}
\textup{Tr}(\phi H_{\Omega, \underline{b}, V}(h)\phi)_- \leq \textup{Tr}\Bigl(\tilde \phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{\underline{b}^2-1/4}{\textup{dist}(\Phi^{-1}(\,\cdot\,), \partial\Omega)^2}+h^2 \tilde V-1\Bigr)\tilde \phi\Bigr)_-\,.
\end{equation*}
We claim that
\begin{align*}
\textup{Tr}\Bigl(&\tilde \phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{\underline{b}^2-1/4}{\textup{dist}(\Phi^{-1}(\,\cdot\,), \partial\Omega)^2}+h^2 \tilde V-1\Bigr)\tilde \phi\Bigr)_-\\
&\leq
\textup{Tr}\Bigl(\tilde \phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{\underline{b}^2-1/4-C\omega(8\ell)^2}{\textup{dist}(\,\cdot\,, \partial\mathbb{R}^d_+)^2}+h^2 \tilde V-1\Bigr)\tilde \phi\Bigr)_-
\end{align*}
for a constant $C$ depending only on $d$. Indeed, if $\underline{b}\geq 1/2$, Lemma~\ref{lem: A geometric lemma} and the variational principle imply
\begin{align*}
\textup{Tr}\Bigl(&\tilde \phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{\underline{b}^2-1/4}{\textup{dist}(\Phi^{-1}(\,\cdot\,), \partial\Omega)^2}+h^2 \tilde V-1\Bigr)\tilde \phi\Bigr)_-\\
&\leq
\textup{Tr}\Bigl(\tilde \phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{\underline{b}^2-1/4}{\textup{dist}(\,\cdot\,, \partial\mathbb{R}^d_+)^2}+h^2 \tilde V-1\Bigr)\tilde \phi\Bigr)_-\\
&\leq
\textup{Tr}\Bigl(\tilde \phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{\underline{b}^2-1/4-C\omega(8\ell)^2}{\textup{dist}(\,\cdot\,, \partial\mathbb{R}^d_+)^2}+h^2 \tilde V-1\Bigr)\tilde \phi\Bigr)_-\,.
\end{align*}
Similarly, if $0< \underline{b}< 1/2$, Lemma~\ref{lem: A geometric lemma} and the variational principle imply
\begin{align*}
\textup{Tr}\Bigl(&\tilde \phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{\underline{b}^2-1/4}{\textup{dist}(\Phi^{-1}(\,\cdot\,), \partial\Omega)^2}+h^2 \tilde V-1\Bigr)\tilde \phi\Bigr)_-\\
&\leq
\textup{Tr}\Bigl(\tilde \phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{(\underline{b}^2-1/4)(1+C\omega(8\ell)^2)}{\textup{dist}(\,\cdot\,, \partial\mathbb{R}^d_+)^2}+h^2 \tilde V-1\Bigr)\tilde \phi\Bigr)_-\\
&\leq
\textup{Tr}\Bigl(\tilde \phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{\underline{b}^2-1/4-C\omega(8\ell)^2}{\textup{dist}(\,\cdot\,, \partial\mathbb{R}^d_+)^2}+h^2 \tilde V-1\Bigr)\tilde \phi\Bigr)_-\,.
\end{align*}
For any $2C_0\omega(4\ell)< \rho \leq 1/2$ we estimate
\begin{align*}
\textup{Tr}\Bigl(&\tilde \phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{\underline{b}^2-1/4-C\omega(8\ell)^2}{\textup{dist}(\,\cdot\,, \partial\mathbb{R}^d_+)^2}+h^2 \tilde V-1\Bigr)\tilde \phi\Bigr)_-\\
&\leq
\textup{Tr}(\tilde \phi H_{\mathbb{R}^d_+, \underline{b}, \tilde V}(h)\tilde \phi)_-\\
&\quad
+
\textup{Tr}\Bigl(\tilde \phi \Bigl(-h^2(\rho-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{\rho(\underline{b}^2-1/4)-C\omega(8\ell)^2}{\textup{dist}(\,\cdot\,, \partial\mathbb{R}^d_+)^2}+h^2\rho\tilde V -\rho\Bigr)\tilde \phi\Bigr)_-\,.
\end{align*}
Provided
\begin{equation}\label{eq: HLT smallness half-space reduction upper bound}
\frac{\rho(\underline{b}^2-1/4)-C\omega(8\ell)^2}{\rho-C_0\omega(4\ell)} = \Bigl(\underline{b}^2-1/4\Bigr)\frac{1}{1-C_0\omega(4\ell)\rho^{-1}} - C \frac{\omega(8\ell)^2}{\rho-C_0\omega(4\ell)}> -\frac{1}{4}\,,
\end{equation}
we can apply the local Hardy--Lieb--Thirring inequality of Lemma~\ref{lem: local HLT} in $\mathbb{R}^d_+$ to bound
\begin{align*}
&\textup{Tr}\Bigl(\tilde \phi \Bigl(-h^2(\rho-C_0\omega(4\ell))\Delta_{\mathbb{R}^d_+}+ h^2\frac{\rho(\underline{b}^2-1/4)-C\omega(8\ell)^2}{\textup{dist}(\,\cdot\,, \partial\mathbb{R}^d_+)^2}+h^2\rho \tilde V-\rho\Bigr)\tilde \phi\Bigr)_-\\
&\ \leq
C \rho^{1+d/2} \ell^d h^{-d} (\rho-C_0\omega(4\ell))^{-d/2}\Bigl(1+h^2\rho^{d/2}(\rho-C_0\omega(4\ell))^{-d/2}\|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega \cap B)}\Bigr)\\
&\ \leq
C \rho \ell^d h^{-d}\Bigl(1+h^2\|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega \cap B)}\Bigr)\,.
\end{align*}
Set $\rho = \sqrt{\omega(4\ell)}+ \omega(8\ell)$. Then $\rho>2C_0\omega(4\ell)$ and~\eqref{eq: HLT smallness half-space reduction upper bound} are valid provided $\ell$ is small enough. Therefore, upon collecting the estimates above we arrive at the bound
\begin{align*}
\textup{Tr}(\phi H_{\Omega, b, V}(h)\phi)_- &\leq \textup{Tr}(\tilde\phi H_{\mathbb{R}^d_+, \underline{b}, \tilde V}(h)\tilde\phi)_- \\
&\quad + C\ell^d h^{-d}\Bigl(\sqrt{\omega(4\ell)}+ \omega(8\ell)\Bigr)\Bigl(1+ h^2\|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega \cap B)}\Bigr)\,,
\end{align*}
thus completing the proof of the upper bound.
\smallskip
{\noindent \bf Part 2:} (Proof of the lower bound) The proof of the lower bound proceeds as the upper bound but with the roles of $\Omega$ and $\mathbb{R}^d_+$ exchanged.
By Lemma~\ref{lem: Straightening of boundary},
\begin{equation*}
\textup{Tr}(\tilde \phi H_{\mathbb{R}^d_+, \overline{b}, \tilde V}(h)\tilde \phi)_- \!\leq \textup{Tr}\Bigl(\phi \Bigl(-h^2(1+C_0\omega(4\ell))^{-1}\!\Delta_{\Omega}+ h^2\frac{\overline{b}^2-1/4}{\textup{dist}(\Phi(\,\cdot\,), \partial\mathbb{R}^d_+)^2}+h^2 V-1\Bigr)\phi\Bigr)_-.
\end{equation*}
If $\ell$ is sufficiently small so that $C_0\omega(4\ell)\leq 1/2$ then $(1+C_0\omega(4\ell))^{-1}\geq 1-C_0\omega(4\ell)>0$, and hence
\begin{equation*}
\textup{Tr}(\tilde \phi H_{\mathbb{R}^d_+, \overline{b}, \tilde V}(h)\tilde \phi)_- \leq \textup{Tr}\Bigl(\phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\Omega}+ h^2\frac{\overline{b}^2-1/4}{\textup{dist}(\Phi(\,\cdot\,), \partial\mathbb{R}^d_+)^2}+h^2 V-1\Bigr)\phi\Bigr)_-\,.
\end{equation*}
By splitting into cases depending on the sign of $\overline{b}^2-1/4$ as in the proof of the upper bound one finds
\begin{align*}
\textup{Tr}\Bigl(&\phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\Omega} +h^2\frac{\overline{b}^2-1/4}{\textup{dist}(\Phi(\,\cdot\,), \partial\mathbb{R}^d_+)^2}+h^2 V-1\Bigr) \phi\Bigr)_-\\
&\leq
\textup{Tr}\Bigl(\phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\Omega}+ h^2\frac{\overline{b}^2-1/4-C\omega(8\ell)^2}{\textup{dist}(\,\cdot\,, \partial\Omega)^2}+h^2 V-1\Bigr)\phi\Bigr)_-
\end{align*}
for a constant $C$ depending on $d, \overline{b}$.
For any $2C_0\omega(4\ell)< \rho \leq 1/2$ we estimate
\begin{align*}
\textup{Tr}\Bigl(&\phi \Bigl(-h^2(1-C_0\omega(4\ell))\Delta_{\Omega}+ h^2\frac{\overline{b}^2-1/4-C\omega(8\ell)^2}{\textup{dist}(\,\cdot\,, \partial\Omega)^2}+h^2 V-1\Bigr)\phi\Bigr)_-\\
&\leq
\textup{Tr}(\phi H_{\Omega, \overline{b}, V}(h)\phi)_-\\
&\quad
+
\textup{Tr}\Bigl(\phi \Bigl(-h^2(\rho -C_0\omega(4\ell))\Delta_{\Omega}+ h^2\frac{\rho (\overline{b}^2-1/4)-C\omega(8\ell)^2}{\textup{dist}(\,\cdot\,, \partial\Omega)^2}+h^2\rho V-\rho \Bigr)\phi\Bigr)_-\\
&\leq
\textup{Tr}(\phi H_{\Omega, b, V}(h)\phi)_-\\
&\quad
+
\textup{Tr}\Bigl(\phi \Bigl(-h^2(\rho -C_0\omega(4\ell))\Delta_{\Omega}+ h^2\frac{\rho (\overline{b}^2-1/4)-C\omega(8\ell)^2}{\textup{dist}(\,\cdot\,, \partial\Omega)^2}+h^2\rho V-\rho \Bigr)\phi\Bigr)_-\,.
\end{align*}
Provided the analogue of~\eqref{eq: HLT smallness half-space reduction upper bound} with $\overline{b}$ instead of $\underline{b}$ holds we can apply the local Hardy--Lieb--Thirring inequality of Lemma~\ref{lem: local HLT} to bound
\begin{align*}
\textup{Tr}\Bigl(&\phi \Bigl(-h^2(\rho -C_0\omega(4\ell))\Delta_{\Omega}+ h^2\frac{\rho (\overline{b}^2-1/4)-C\omega(8\ell)^2}{\textup{dist}(\,\cdot\,, \partial\Omega)^2}+h^2\rho V-\rho \Bigr)\phi\Bigr)_-\\
&\leq
C \rho \ell^dh^{-d}\Bigl(1+ h^2\|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega \cap B)}\Bigr)\,.
\end{align*}
Again we can set $\rho = \sqrt{\omega(4\ell)}+ \omega(8\ell)$ and combine the above estimates to arrive at
\begin{align*}
\textup{Tr}(\tilde\phi H_{\mathbb{R}^d_+, \overline{b}, \tilde V}(h)\tilde \phi)_- &\leq \textup{Tr}(\phi H_{\Omega, b, V}(h)\phi)_- \\
&\quad + C\ell^d h^{-d}\Bigl(\sqrt{\omega(4\ell)}+ \omega(8\ell)\Bigr)\Bigl(1+ h^2\|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega \cap B)}\Bigr)\,.
\end{align*}
This completes the proof of the lower bound and hence the proof of Lemma~\ref{lem: reduction to halfspace}.
\end{proof}
The proof of Theorem~\ref{thm: local asymptotics boundary} has been reduced to understanding the asymptotics of $\textup{Tr}(\phi H_{\mathbb{R}^d_+, b, V}(h)\phi)_-$ with $b(x)\equiv b>0$.
\begin{lem}\label{lem: halfspace asymptotics}
Let $\Omega, V$ be as in Theorem~\ref{thm: main result}. Let $\phi \in C^1_0(\mathbb{R}^d)$ be supported in a ball $\overline{B}$ of radius $\ell$ and satisfy
\begin{equation}\label{eq: grad bound halfspace asymptotics}
\|\nabla \phi\|_{L^\infty}\leq M \ell^{-1}\,.
\end{equation}
With $b(x) \equiv b> 0$ we have, for $0<h\leq K\ell$,
\begin{align*}
\biggl|\textup{Tr}(\phi H_{\mathbb{R}^d_+, b, V}&(h)\phi)_- - L_d h^{-d} \int_{\mathbb{R}^d_+}\phi^2(y)\,dy + \frac{b\,L_{d-1}}{2}h^{-d+1}\int_{\partial \mathbb{R}^d_+} \phi^2(y)\,d\mathcal{H}^{d-1}(y)\biggr|\\
& \leq C h^{-d+2}\Bigl(\ell^{d-2}|{\log(\ell/h)}|+\|V_+\|_{L^1(\mathbb{R}^d_+ \cap B)}+ \ell^d\|V_-\|^{1+d/2}_{L^{1+d/2}(\mathbb{R}^d_+ \cap B)}\Bigr)\,,
\end{align*}
where $C$ depends only on $d, M, K, b$ and can be uniformly bounded for $b$ in compact subsets of $[0, \infty)$.
\end{lem}
\begin{proof}
Our proof proceeds by diagonalizing the operator $H_{\mathbb{R}^d_+, b, 0}(h)$. For the general background on what follows, see~\cite[Chapter XIII]{DunfordSchwartz_II}.
For $f\in C^2(\mathbb{R}_+)$ define the differential expression
\begin{equation*}
L_b f(x) = f''(x) - \Bigl(b^2-\frac{1}{4}\Bigr)\frac{f(x)}{x^2}\,.
\end{equation*}
The operator $H_{\mathbb{R}^d_+, b, 0}(h)$ can then be decomposed as
\begin{equation*}
H_{\mathbb{R}^d_+,b,0}(h) = -h^2\Delta' - h^2L_b\,,
\end{equation*}
where $\Delta' = \sum_{j=1}^{d-1}\frac{\partial^2}{\partial y_j^2}$ and $L_b$ acts in the $y_d$-coordinate.
For $b > 0, \mu \geq 0$ the ODE
\begin{equation*}
-L_b u(x)=\mu u(x)
\end{equation*}
has two linearly independent solutions
\begin{align*}
\psi_{b, \mu}(x)= x^{1/2}J_{b}(x\sqrt{\mu})\quad \mbox{and}\quad \eta_{b, \mu}(x)=x^{1/2}Y_{b}(x\sqrt{\mu})\,.
\end{align*}
If $b\geq 1/2$ only $\psi$ vanishes at $x=0$ while for $b \in (0, 1/2)$ both solutions vanish; indeed $\psi \sim x^{1/2+b}$ and $\eta\sim x^{1/2-b}$ as $x\to 0^+$. However, for any $b\neq \frac{1}{2}$ only the first solution $\psi_{b, \mu}$ is in $H^1$ around zero. Consequently, our effective operator $H_{\mathbb{R}^d_+, b, 0}(h)$ is diagonalized through a Fourier transform with respect to $y'$ and a Hankel transform $\mathfrak{H}_{b}$ with respect to $y_d$. Recall that the Hankel transform $\mathfrak{H}_\alpha\colon L^2(\mathbb{R}_+)\to L^2(\mathbb{R}_+)$ is initially defined by
\begin{equation*}
\mathfrak{H}_\alpha(g)(s)= \int_0^\infty g(t)J_\alpha(s t)\sqrt{st}\,dt \quad \mbox{for }g\in L^1(\mathbb{R}_+)
\end{equation*}
and extended to $L^2(\mathbb{R}_+)$ in the same manner as the Fourier transform. Moreover, $\mathfrak{H}_\alpha$ is unitary and its own inverse, $\mathfrak{H}_\alpha^2=\mathbf{1}$. Furthermore, for $G\in L^\infty(\mathbb{R}_+)$ with compact support and $f\in H_0^1(\mathbb{R}_+)\cap H^2(\mathbb{R}_+)$
\begin{equation*}
\langle f, G(-L_b) f\rangle_{L^2(\mathbb{R}_+)} = \int_0^\infty G(s^2)|\mathfrak{H}_{b}(f)(s)|^2\,ds\,.
\end{equation*}
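As a concrete sanity check of these properties (our own illustration, not part of the proof): for $b=1/2$ one has $J_{1/2}(x)=\sqrt{2/(\pi x)}\sin x$, so $\mathfrak{H}_{1/2}$ is exactly the Fourier sine transform. The sketch below verifies numerically, for the test function $g(t)=t e^{-t}$ (whose sine transform is $\sqrt{2/\pi}\,2s/(1+s^2)^2$, an assumed closed form one can check by hand), that applying the transform twice returns $g$, i.e.\ $\mathfrak{H}_{1/2}^2=\mathbf{1}$:

```python
import math

SQ2PI = math.sqrt(2.0 / math.pi)

def sine_transform(f, s, T, n):
    # Simpson approximation of sqrt(2/pi) * int_0^T f(t) sin(s t) dt, which is
    # the Hankel transform of order 1/2 since J_{1/2}(x) = sqrt(2/(pi x)) sin x.
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        coef = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += coef * f(t) * math.sin(s * t)
    return SQ2PI * total * h / 3.0

def g(t):
    return t * math.exp(-t)

def Hg(s):
    # closed form of the order-1/2 Hankel (= sine) transform of g
    return SQ2PI * 2.0 * s / (1.0 + s * s) ** 2

# the numerical forward transform matches the closed form ...
forward_err = max(abs(sine_transform(g, s, 40.0, 4000) - Hg(s))
                  for s in (0.5, 1.0, 2.0))
# ... and transforming once more recovers g
roundtrip_err = max(abs(sine_transform(Hg, x, 300.0, 30000) - g(x))
                    for x in (0.5, 1.0, 2.0))
```

The truncation points $T=40$ and $T=300$ are chosen so that the exponential and $O(s^{-3})$ tails are negligible.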
By a similar argument as in the proof of Lemma \ref{lem: local HLT} the upper bound can be reduced to the case $V\equiv 0$. Indeed, for any $0<\rho\leq 1/2$,
\begin{align*}
\textup{Tr}(\phi& H_{\mathbb{R}^d_+, b, V}(h)\phi)_- \\
&\leq
\textup{Tr}(\phi H_{\mathbb{R}^d_+, b, 0}(h(1-\rho))\phi)_-
+
\textup{Tr}\Bigl(\phi\Bigl(-h^2\rho\Delta_{\mathbb{R}^d_+}+ h^2\rho \frac{b^2-1/4}{\textup{dist}(\,\cdot\,, \partial\mathbb{R}^d_+)^2}+h^2V\Bigr)\phi\Bigr)_- \\
&\leq
\textup{Tr}(\phi H_{\mathbb{R}^d_+, b, 0}(h(1-\rho))\phi)_- + C h^2\rho^{-d/2}\|V_-\|^{1+d/2}_{L^{1+d/2}(\mathbb{R}^d_+ \cap B)}\,.
\end{align*}
Set $\rho = h^2/(2K^2\ell^2)$ so that $h^2\rho^{-d/2}=O(\ell^dh^{-d+2})$ and $(h(1-\rho))^{-\beta} = h^{-\beta}(1 + O(\ell^{-2}h^2))$.
The claimed upper bound now follows from the case $V\equiv 0$.
Using the inequality $\textup{Tr}(\phi H \phi)_- \leq \textup{Tr}(\phi H_- \phi)$, applying the Fourier transform with respect to $y'$ and the Hankel transform in the $y_d$-direction yields
\begin{equation}\label{eq: Upper tracebound halfspace}
\begin{aligned}
\textup{Tr}(\phi H_{\mathbb{R}^d_+, b, 0}(h)\phi)_-
&\leq
\textup{Tr}(\phi (H_{\mathbb{R}^d_+, b, 0}(h))_-\phi)\\
&=
\frac{1}{(2\pi)^{d-1}}\iint_{\mathbb{R}^d_+\times \mathbb{R}^d_+} \phi^2(y)(h^2|\xi|^2-1)_- \xi_d y_d J_{b}(\xi_d y_d)^2\,d\xi dy\,.
\end{aligned}
\end{equation}
For the lower bound define the operator $\gamma$ with integral kernel
\begin{equation*}
\gamma(x, y) = \frac{1}{(2\pi)^{d-1}}\chi(x) \int_{\mathbb{R}^d_+\cap B_{h^{-1}}(0)} e^{i \xi'(x'-y')}\sqrt{\xi_d x_d}J_{b}(\xi_d x_d) \sqrt{\xi_d y_d}J_{b}(\xi_d y_d) \,d \xi\, \chi(y)\,,
\end{equation*}
where $\chi\in C_0^\infty(\mathbb{R}^d)$ is such that $0\leq \chi \leq 1$ and $\chi\equiv 1$ on $\textup{supp}\, \phi$. The operator $\gamma$ is trace class, satisfies $0\leq \gamma \leq \mathbf{1}$, and its range is contained in the domain of $H_{\mathbb{R}^d_+, b, V}$. Thus, by the variational principle,
\begin{equation}\label{eq: lower trace bound halfspace}
\begin{aligned}
-\textup{Tr}(\phi &H_{\mathbb{R}^d_+, b, V}(h)\phi)_-\\
&\leq
\textup{Tr}(\gamma \phi H_{\mathbb{R}^d_+, b, V_+}(h)\phi)\\
&=
\frac{1}{(2\pi)^{d-1}}\iint_{\mathbb{R}^d_+\times\mathbb{R}^d_+}(h^2|\xi|^2 -1)_- \phi^2(x)\xi_d x_dJ_{b}(\xi_dx_d)^2 d\xi dx \\
&\quad
+ h^{-d+2}\int_{\mathbb{R}^d_+}(V_+(x)\phi^2(x)+|\nabla\phi(x)|^2) \int_0^{1}(x_d th^{-1})J_{b}(x_d th^{-1})^2\,dt dx \\
&\leq
\frac{1}{(2\pi)^{d-1}}\iint_{\mathbb{R}^d_+\times\mathbb{R}^d_+}(h^2|\xi|^2 -1)_- \phi^2(x)\xi_d x_dJ_{b}(\xi_dx_d)^2 d\xi dx\\
&\quad
+ Ch^{-d+2}\int_{\mathbb{R}^d_+}(V_+(x)\phi^2(x)+|\nabla\phi(x)|^2)\,dx\,,
\end{aligned}
\end{equation}
with $C$ uniformly bounded for $b$ in compact subsets of $[0, \infty)$, since $\|\sqrt{\,\cdot\,}J_b\|_{L^\infty(\mathbb{R}_+)}<\infty$ uniformly for $b$ in compact subsets of $[0, \infty)$ (see~\cite[Chapter 7]{Watson_BesselFunctions}). By~\eqref{eq: grad bound halfspace asymptotics} we can estimate $\|\phi\|_{L^\infty}\leq M$ and $\int_{\mathbb{R}^d_+}|\nabla \phi(x)|^2\,dx \leq C \ell^{d-2}.$
What remains is to understand the common integral in~\eqref{eq: Upper tracebound halfspace} and~\eqref{eq: lower trace bound halfspace}. We begin by extracting the desired leading term:
\begin{align}
&\frac{1}{(2\pi)^{d-1}}\iint_{\mathbb{R}^d_+\times \mathbb{R}^d_+} \phi^2(y)(h^2|\xi|^2-1)_- \xi_d y_d J_{b}(\xi_d y_d)^2\,d\xi dy\nonumber \\
&\ =
L_dh^{-d}\int_{\mathbb{R}^d_+} \phi^2(y)dy \label{eq: extraction of main term}\\
&\ \quad
-L_{d-1}h^{-d+1}\int_0^\infty\hspace{-5pt}\int_{\mathbb{R}^{d-1}} \phi^2(y', h t)\,dy'\int_0^1(1-\xi_d^2)^{(d+1)/2} \Bigl(\frac{1}{\pi}-\xi_d t J_{b}(\xi_d t)^2\Bigr)\,d\xi_d dt\,. \nonumber
\end{align}
Define, for $b \geq 0$ and $t\geq 0$,
\begin{equation*}
P_b(t) = \int_0^1(1-\xi^2)^{(d+1)/2} \Bigl(\frac{1}{\pi}-\xi t J_{b}(\xi t)^2\Bigr)\,d\xi\,.
\end{equation*}
In Lemmas~\ref{lem: Pnu asymptotics} and~\ref{lem: Pnu integral identity} we shall prove that
\begin{equation}\label{eq: P properties}
\int_0^\infty P_b(t)\, dt = \frac{b}{2}\quad \mbox{and} \quad P_b(t)= O(t^{-2}) \mbox{ as } t\to \infty\,,
\end{equation}
with the implicit constant uniformly bounded for $b$ in compact subsets of $[0, \infty)$.
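Both properties can be checked numerically in a case where the Bessel function is elementary (a sanity check of ours; the choices $d=1$, $b=1/2$ and the truncation parameters are arbitrary): since $J_{1/2}(x)=\sqrt{2/(\pi x)}\sin x$, the integrand simplifies to $(1-\xi^2)\cos(2\xi t)/\pi$, and one finds $\int_0^\infty P_{1/2}(t)\,dt = 1/4 = b/2$ together with the $O(t^{-2})$ decay:

```python
import math

def P_half(t, n=800):
    # P_{1/2}(t) for d = 1 by Simpson's rule; since J_{1/2}(x) = sqrt(2/(pi x)) sin x,
    # 1/pi - xi*t*J_{1/2}(xi*t)^2 = (1 - 2 sin^2(xi t))/pi = cos(2 xi t)/pi.
    h = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        xi = i * h
        coef = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += coef * (1.0 - xi * xi) * math.cos(2.0 * xi * t) / math.pi
    return total * h / 3.0

def integral_P(T=60.0, N=1200):
    # int_0^T P_{1/2}(t) dt by Simpson's rule; the oscillatory O(t^{-2}) tail
    # beyond T contributes only O(1e-5) here.
    h = T / N
    total = 0.0
    for i in range(N + 1):
        coef = 1 if i in (0, N) else (4 if i % 2 else 2)
        total += coef * P_half(i * h)
    return total * h / 3.0

I = integral_P()  # should be close to b/2 = 1/4
decay = max(t * t * abs(P_half(t)) for t in (10.5, 20.25, 33.7, 47.1, 59.3))
```

Here `decay` stays bounded (of order $1/(2\pi)$), illustrating $t^2 P_{1/2}(t) = O(1)$.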
Using~\eqref{eq: P properties} we can estimate
\begin{align*}
\int_0^\infty\int_{\mathbb{R}^{d-1}} &\phi^2(y', h t)\,dy'P_b(t) dt\\
&=
\int_0^{2\ell/h}\int_{\mathbb{R}^{d-1}} \phi^2(y', h t)\,dy' P_b(t) dt\\
&=
\frac{b}{2}\int_{\mathbb{R}^{d-1}} \phi^2(y', 0)\,dy'
- \int_{2\ell/h}^{\infty}\int_{\mathbb{R}^{d-1}} \phi^2(y', 0)\,dy' P_b(t) dt\\
&\quad
+ 2\int_0^{2\ell/h}ht\int_{\mathbb{R}^{d-1}} \int_0^1\phi(y', h t s)\partial_{y_d}\phi(y', h t s)\,ds\,dy' P_b(t) dt\\
&=
\frac{b}{2}\int_{\mathbb{R}^{d-1}} \phi^2(y', 0)\,dy'
+ O(h\ell^{d-2}|{\log(\ell/h)}|)\,.
\end{align*}
Combined with~\eqref{eq: extraction of main term},~\eqref{eq: Upper tracebound halfspace}, and~\eqref{eq: lower trace bound halfspace} this completes the proof of Lemma~\ref{lem: halfspace asymptotics}.
\end{proof}
We are now ready to prove Theorem~\ref{thm: local asymptotics boundary}.
\begin{proof}[Proof of Theorem~\ref{thm: local asymptotics boundary}]
By combining Lemma~\ref{lem: reduction to halfspace} and Lemma~\ref{lem: halfspace asymptotics} the claimed estimate follows from
\begin{align*}
\int_{\partial\Omega}\phi^2(x)&\biggl[b(x)-\inf_{y\in \Omega \cap B}b(y)\biggr]\,d\mathcal{H}^{d-1}(x)\\
& \leq \int_{\partial\Omega}\phi^2(x)\biggl[\,\sup_{y\in \Omega \cap B}b(y)-\inf_{y\in \Omega \cap B}b(y)\biggr]\,d\mathcal{H}^{d-1}(x) \,,
\end{align*}
and the corresponding inequality for the $\sup$ and the fact that $\textup{supp}\,\phi \subseteq \overline{B} \subset \overline{B_{2\ell}(x)}$ for any $x\in \textup{supp}\,\phi$.
\end{proof}
\section{From local to global asymptotics}\label{sec: local to global}
In this section we prove our main result by piecing together the local asymptotics obtained above. The key ingredient is the following construction of a continuum partition of unity due to Solovej and Spitzer~\cite{MR2013804}.
Let
\begin{equation*}
\ell(u) = \frac{1}{2}\max\{\mathrm{dist}(u, \Omega^c), 2\ell_0\}
\end{equation*}
with a small parameter $0<\ell_0$ to be determined. Note that $0<\ell(u)\leq \max\{\tfrac{r_{in}(\Omega)}{2}, \ell_0\}$ and, since $|\nabla \textup{dist}(u, \Omega^c)|=1$ a.e., $\|\nabla \ell\|_{L^\infty}\leq \frac{1}{2}$. Note also that $\mathrm{dist}(B_{\ell(u)}(u), \Omega^c)\leq 2\ell(u)$ if and only if $\mathrm{dist}(u, \partial \Omega)\leq 2\ell_0$, in which case $\ell(u)=\ell_0$. In particular, if $\mathrm{dist}(u, \Omega)>\ell_0$ then $B_{\ell(u)}(u)\cap\Omega = \emptyset$.
Fix a function $\phi\in C^\infty_0(\mathbb{R}^d)$ with $\textup{supp}\,\phi \subseteq \overline{B_1(0)}$ and $\|\phi\|_{L^2}=1$. By~\cite[Theorem~22]{MR2013804} (see also~\cite[Lemma 2.5]{FrankLarson}) the functions
$$
\phi_u(x) = \phi\left(\frac{x-u}{\ell(u)}\right)\, \sqrt{1+ \nabla\ell(u)\cdot\frac{x-u}{\ell(u)}} \,,
\qquad x\in\mathbb R^d \,,\ u\in\mathbb R^d \,,
$$
belong to $C_0^\infty(\mathbb R^d)$ with $\mathrm{supp}\, \phi_u \subseteq \overline{B_{\ell(u)}(u)}$, satisfy
\begin{equation}
\label{eq:phi_properties1}
\int_{\mathbb{R}^d} \phi_u(x)^2 \ell(u)^{-d}\, du = 1
\qquad\text{for all}\ x\in\mathbb{R}^d
\end{equation}
and, with a constant $C$ depending only on $d$,
\begin{equation*}
\|\phi_u\|_{L^\infty}\leq \sqrt 2\, \|\phi\|_{L^\infty} \quad\text{and}\quad \|\nabla \phi_u\|_{L^\infty} \leq C\ell(u)^{-1} \|\nabla\phi\|_{L^\infty}
\quad\text{for all}\ u\in\mathbb{R}^d \, .
\end{equation*}
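The exact normalization~\eqref{eq:phi_properties1} follows from a change of variables and can be checked numerically. The following sketch (our own illustration; the domain $\Omega=(0,10)$, the value $\ell_0 = 0.3$ and the bump profile are arbitrary choices) implements the construction in $d=1$ and verifies $\int \phi_u(x)^2\,\ell(u)^{-1}\,du = 1$ at several points $x$, including near the boundary and at the kink of $\ell$:

```python
import math

A, B = 0.0, 10.0   # toy domain Omega = (0, 10)
L0 = 0.3           # the small parameter ell_0

def ell(u):
    # ell(u) = (1/2) max{dist(u, Omega^c), 2 ell_0}
    dist = max(0.0, min(u - A, B - u))
    return 0.5 * max(dist, 2.0 * L0)

def ell_prime(u):
    # a.e. derivative of ell (the finitely many kinks form a null set)
    dist = max(0.0, min(u - A, B - u))
    if dist <= 2.0 * L0:
        return 0.0
    return 0.5 if u < (A + B) / 2.0 else -0.5

def bump(v):
    # unnormalized C_0^infinity bump supported in [-1, 1]
    return math.exp(-1.0 / (1.0 - v * v)) if abs(v) < 1.0 else 0.0

# L^2-normalize the bump by the midpoint rule
_n = 20000
_h = 2.0 / _n
NORM = math.sqrt(sum(bump(-1.0 + (i + 0.5) * _h) ** 2 for i in range(_n)) * _h)

def phi_u(x, u):
    l = ell(u)
    v = (x - u) / l
    if abs(v) >= 1.0:
        return 0.0
    jac = 1.0 + ell_prime(u) * v   # >= 1/2 since |ell'| <= 1/2 and |v| <= 1
    return (bump(v) / NORM) * math.sqrt(jac)

def normalization(x, n=120000):
    # int phi_u(x)^2 ell(u)^{-1} du by the midpoint rule; the support of
    # u -> phi_u(x) is contained in [x - 3, x + 3] since ell <= 2.5 here
    lo, hi = x - 3.0, x + 3.0
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        u = lo + (i + 0.5) * h
        total += phi_u(x, u) ** 2 / ell(u)
    return total * h

max_err = max(abs(normalization(x) - 1.0) for x in (0.2, 1.0, 5.0))
```

The residual error stems only from quadrature; the identity itself is exact.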
The application to our problem here is summarized in the following lemma:
\begin{lem}\label{lem: localization Schrodinger}
Let $\Omega, b, V$ be as in Theorem~\ref{thm: main result} and define $\ell, \{\phi_u\}_{u\in \mathbb{R}^d}$ as above. Then, for $0<\ell_0 \leq c(\Omega, b)$ and $0<h \leq K \ell_0$,
\begin{align*}
\Biggl|\textup{Tr}( H_{\Omega, b, V}(h))_- &- \int_{\mathbb{R}^d}\textup{Tr}(\phi_u H_{\Omega, b, V}(h) \phi_u)_- \ell(u)^{-d}\,du\Biggr| \\
&\leq C h^{-d+2} \int_{\mathrm{dist}(u, \Omega)\leq \ell_0}\Bigr(1+ h^2\|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega \cap B_{\ell(u)}(u))}\Bigr)\ell(u)^{-2}\,du\,,
\end{align*}
where the constant $C$ depends only on $\Omega, b, K, \|\phi\|_{L^\infty}.$
\end{lem}
For the sake of brevity, we omit the proof of Lemma~\ref{lem: localization Schrodinger} and instead refer the reader to the proof of~\cite[Lemma 2.8]{FrankLarson}. Lemma~\ref{lem: localization Schrodinger} can be proved in the same manner, with the use of a local Berezin--Li--Yau inequality replaced by an application of Lemma~\ref{lem: local HLT}.
With the above results in hand we are ready to prove Theorem~\ref{thm: main result}.
\begin{proof}[Proof of Theorem~\ref{thm: main result}]
Set $\ell_0=h/\varepsilon_0$ with $0<h\leq \varepsilon_0 r_{in}(\Omega)/2$ for a parameter $\varepsilon_0 \in (0, 1]$ which will eventually tend to zero.
We divide the set of $u\in\mathbb{R}^d$ such that $B_{\ell(u)}(u)\cap \Omega \neq \emptyset$ into two disjoint parts:
\begin{equation}\label{eq: partition}
\Omega_* = \{u \in \mathbb{R}^d: 2\ell_0<\delta_\Omega(u)\}\, \quad \mbox{and}\quad
\Omega^* = \{u \in \mathbb{R}^d: -\ell_0< \delta_{\Omega}(u)\leq 2\ell_0\}\,,
\end{equation}
where $\delta_\Omega$ denotes the signed distance function to the boundary,
$\delta_\Omega(u) = \textup{dist}(u, \Omega^c)-\textup{dist}(u, \Omega).$
Note that for all $u\in \Omega^*$ we have $\ell(u)=\ell_0$.
By Lemma~\ref{lem: localization Schrodinger} we need to understand the integral with respect to $u$ of the local traces $\textup{Tr}(\phi_u H_{\Omega, b, V}(h)\phi_u)_-$. Breaking the integral according to the partition~\eqref{eq: partition} we have
\begin{align*}
\int_{\mathbb{R}^d}\textup{Tr}(\phi_u H_{\Omega, b, V}(h)\phi_u)_- \ell(u)^{-d}\,du
&=
\int_{\Omega_*}\textup{Tr}(\phi_u H_{\Omega, b, V}(h)\phi_u)_- \ell(u)^{-d}\,du\\
&\quad
+
\int_{\Omega^*}\textup{Tr}(\phi_u H_{\Omega, b, V}(h)\phi_u)_- \ell_0^{-d}\,du\,.
\end{align*}
For the first term Lemma~\ref{lem: local asymptotics bulk} with $V_0(x)= \tfrac{(b(x)^2-1/4)_-}{\textup{dist}(x, \partial\Omega)^{2}}$ and $V_1(x) = V_-(x)$ yields
\begin{align*}
&\int_{\Omega_*}\textup{Tr}(\phi_u H_{\Omega, b, V}(h)\phi_u)_- \ell(u)^{-d}\,du\\
&\ \ \ =
L_d h^{-d}\int_{\Omega_*}
\int_\Omega \phi_u^2(x)\ell(u)^{-d}\,dxdu\\
&\ \ \ + O(h^{-d+2})\!\!
\int_{\Omega_*}\!\Bigl[
\ell(u)^{-2}\bigl(1+ \|b\|_{L^\infty}^2\bigr)
+ \|V_-\|_{L^{1+d/2}(B_{\ell(u)}(u))}^{1+d/2}+\ell(u)^{-d}\|V_+\|_{L^1(B_{\ell(u)}(u))}\Bigr]du
\end{align*}
where we used $\|V_0\|_{L^\infty} \leq \tfrac{C}{(\textup{dist}(u, \partial\Omega)-\ell(u))^{2}}\leq C\ell(u)^{-2}$ and $\tfrac{(b(x)^2-1/4)_+}{\mathrm{dist}(x, \partial\Omega)^{2}}\leq C \|b\|_{L^\infty}^2 \ell(u)^{-2}$.
\medskip
For the integral over the boundary region $\Omega^*$ Theorem~\ref{thm: local asymptotics boundary}, for $\varepsilon_0, \ell_0, h$ sufficiently small, implies
\begin{align*}
&\int_{\Omega^*}\textup{Tr}(\phi_u H_{\Omega, b, V}(h)\phi_u)_- \ell_0^{-d}\,du\\
&\quad=
L_d h^{-d}\int_{\Omega^*}\int_\Omega \phi_u^2(x)\ell_0^{-d}\,dxdu - \frac{L_{d-1}}{2}h^{-d+1}\int_{\Omega^*}\int_{\partial\Omega}\phi_u^2(x)b(x)\ell_0^{-d}\,d\mathcal{H}^{d-1}(x)du\\
&\qquad
+ O(h^{-d})|\Omega^*|(
o_{\ell_0\to 0^+}(1) + \varepsilon_0^{2}|{\log(\varepsilon_0)}|) + h^{-d+1}o_{\ell_0\to 0^+}(1)\\
&\qquad
+ O(h^{-d+2})\int_{\Omega^*}\Bigl[\|V_-\|^{1+d/2}_{L^{1+d/2}(B_{\ell(u)}(u))}+\ell_0^{-d}\|V_+\|_{L^1(B_{\ell(u)}(u))}\Bigr]du\,.
\end{align*}
Here we used the fact that $b$ satisfies~\eqref{eq: b regularity}.
Combining the estimates for the contribution from the bulk and boundary region, using~\eqref{eq:phi_properties1}, and estimating the integrals of the norms of $V_-, V_+$, we find
\begin{equation}\label{eq: integral of local traces asymptotics}
\begin{aligned}
&\int_{\mathbb{R}^d}\textup{Tr}(\phi_u H_{\Omega, b, V}(h)\phi_u)_- \ell(u)^{-d}\,du\\
&\quad=
L_d h^{-d}|\Omega| - \frac{L_{d-1}}{2}h^{-d+1}\int_{\partial\Omega}b(x)\,d\mathcal{H}^{d-1}(x)\\
&\qquad
+
O(h^{-d})|\Omega^*|(o_{\ell_0\to 0^+}(1)+\varepsilon_0^2|{\log(\varepsilon_0)}|) + h^{-d+1}o_{\ell_0\to 0^+}(1)\\
&\qquad
+ O(h^{-d+2})\bigl(1+\|b\|_{L^\infty}^2\bigr)\int_{\Omega_*}\ell(u)^{-2}\,du
+O(h^{-d+2})\Bigl[\|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega)}+ \|V_+\|_{L^1(\Omega)}\Bigr]\,.\hspace{-26pt}
\end{aligned}
\end{equation}
By~\cite[eqs.~(4.6)--(4.8)]{FrankLarson}, $\int_{\Omega_*}\ell(u)^{-2}\,du \leq C \ell_0^{-1}$ and $|\Omega^*|\leq C \ell_0$ with $C$ depending only on $\Omega$. Thus by Lemma~\ref{lem: localization Schrodinger},~\eqref{eq: integral of local traces asymptotics}, and since $h^2/\ell(u)^{2}\leq \varepsilon_0^2$ we conclude that
\begin{align*}
h^{d-1}\biggl|\textup{Tr}(H_{\Omega, b, V}&(h))_- - L_d h^{-d}|\Omega| + \frac{L_{d-1}}{2}h^{-d+1}\int_{\partial\Omega}b(x)\,d\mathcal{H}^{d-1}(x)\biggr|\\
&\leq
\varepsilon_0^{-1}o_{h/\varepsilon_0\to 0^+}(1)+O(\varepsilon_0|{\log(\varepsilon_0)}|) + o_{h/\varepsilon_0\to 0^+}(1)\\
&\quad
+ O(\varepsilon_0)\bigl(1+\|b\|_{L^\infty}^2\bigr)
+O(h)\Bigl[\|V_-\|^{1+d/2}_{L^{1+d/2}(\Omega)}+ \|V_+\|_{L^1(\Omega)}\Bigr]\,.
\end{align*}
Letting first $h$ and then $\varepsilon_0$ tend to $0$ completes the proof of Theorem~\ref{thm: main result}.
\end{proof}
\section{Introduction}
Entangled photons form the basis of many quantum applications, notably in computing and communication \cite{brien--science--2007, Rudolph-Why-optimistic-2016, Ma2012, Guenthner2017}. For this purpose, one would like to have sources that produce single photons, photon pairs or entangled photons at high rates with high quality. In practical realizations, various technological challenges arise which affect the efficiency or quality of the prepared photon states. This can range from the simple dark noise in a detector to complex, parasitic nonlinear optical interactions in an optical component.
While some strategies to handle and mitigate detrimental effects are often internal lab-knowledge (e.g. the ubiquitous black masking tape at the right places to shield detectors from background light), others are well-known techniques and approaches in the experiment or data post-processing. For example, in both bulk and integrated experiments, pulsed operation, time gating, and spectral and spatial filtering are commonly employed \cite{aspect1981,kwiat1995,patel2012,eraerds2010,schweickert2018}.
To be more specific, many quantum optics experiments are plagued by uncorrelated background light that produces spurious events and reduces the quality of the photon state and the signal-to-noise ratio \cite{shields2007}. This is not necessarily limited to the photon pair sources: In quantum cryptography applications, for example, light leakage can lead to compromised link security \cite{brassard2000}.
Integrated semiconductor quantum light sources are particularly susceptible to the parasitic influence of background light. Once the photonic chips have been fabricated, there are only limited options to include additional filters afterwards. Moreover, the presence of imperfections in semiconductor materials causes many complex light-matter interactions that are difficult to track down or get rid of.
Nevertheless, integrated photonic circuits have the advantage of overall stability and compact dimensions. Small dimensions lead to high field strengths which increase nonlinear interactions. The further integration of light sources drastically improves the wall-plug efficiency of these chips, compared to bulk setups. This is all beneficial for many potential practical applications \cite{brien--science--2007, politi--science--2008}, but especially for satellite technology \cite{armengol2008, Yin-Pan-Satellite-2017}. Thus, it is worthwhile to look at measures to suppress the parasitic effects already at the source level on the chip.
One example of an integrated quantum light source is the Bragg-reflection waveguide (BRW), which produces photon pairs via parametric down-conversion (PDC) \cite{lanco2006semiconductor,sarrafi2013continuous,Gregor-Monolithic-Source-2012}. BRWs are made of multiple epitaxial layers of different alloys of aluminum gallium arsenide (AlGaAs). This material system possesses a large second-order optical nonlinearity \cite{Boyd2003} and is versatile for designing and fabricating samples with the help of well-established techniques. A great benefit of AlGaAs is its potential to seamlessly integrate electro-optic elements, like light sources and modulators, with PDC on chip. The waveguides can be designed to operate at almost any temperature where the material is stable. While the technology is still not as mature as silicon or lithium niobate in certain aspects, most notably linear loss and homogeneity, it is improving quickly \cite{Pressl--2015, porkolab2014algaasreflow, Pressl-2017-advanced-BRW}.
Recently, BRWs have become increasingly popular for PDC \cite{Gregor-Monolithic-Source-2012,Gunthner-2015,Claire2016Multi-user,kang2016monolithic}, and considerable effort has been dedicated towards optimizing their performance for different tasks. For example, not only could integrated, electrically injected pump lasers be demonstrated on the same chip as the PDC sources \cite{Boitier-Ducci-Electrically-2013,bijlani2013semiconductor}, but also various aspects of the preparation of polarization-entangled states \cite{Valles-2013,Horn-scientific-reports-2013,kang2015two}, like the compensation of the birefringent group delay \cite{schlager--2017--temporally}.
In this paper, we conduct a series of experiments to determine the driving factors that affect the generation of photoluminescence in our BRWs, such as wavelength and power dependencies as well as the spatial field distribution. Photoluminescence in BRW structures similar to the ones investigated here has been reported by other groups \cite{Horn-scientific-reports-2013,Boitier-Ducci-Electrically-2013} as well, but was never studied in depth. It was informally hypothesized that impurities in the substrate are driving factors of the photoluminescence.
We note that such luminescence is not exclusive to PDC in semiconductors, but also exists in commonly employed crystals, like BaB$_2$O$_4$ or KTiOPO$_4$, especially when pumped with high-energy light \cite{machulka-2014-bbo-luminescence,Bhar-89-evaluation-bbo,Chen-09-pdc-ppktp}. At similar wavelengths to our BRWs, fluorescence is observed in periodically poled silica fibers \cite{Zhu-11-pp-fiber} and spontaneous Raman scattering in four-wave-mixing schemes (FWM) \cite{Takesue-05-sfwm-fiber}. In integrated quantum optics, parasitic background light has also been observed in p-i-n diode based single-photon sources \cite{yuan2002} and quantum-dot based systems \cite{stevenson2006}.
\subsection{Aspects of photoluminescence in integrated PDC sources}
The challenge posed by parasitic photoluminescence in integrated PDC sources of photon pairs is that these sources operate in the linear low-gain regime \cite{christ2013}. In contrast to classical second-harmonic generation, difference-frequency generation or optical parametric oscillation schemes, the efficiency \textit{does not} increase with increasing pump power. Typical PDC signal rates are in the kHz to GHz range, which is many orders of magnitude smaller than the photon rate of the pump light ($\gg$\si{\peta\hertz}). These drastically different amounts of optical power in the device mean that even very inefficient photoluminescence processes can easily produce photons at rates similar to those of the PDC.
The observed photoluminescence rates depend on the power nonlinearity of the underlying process: if the PDC is pumped with a pulsed laser with high peak power, nonlinear photoluminescence generation is either greatly enhanced or somewhat suppressed compared to the continuous wave (CW) case. The former corresponds to unrestricted two- or multi-photon absorption, while the latter indicates saturation.
PDC in BRWs is based on the interaction between the fundamental and higher-order spatial modes. One challenging aspect in this regard is that BRWs do not operate in a single-mode regime, but support higher-order modes with different mode profiles. These characteristic profiles are the result of the stratified layout made from different material compositions and the horizontal confinement of the ridge. Hence, the layers are exposed to different amounts of light depending on whether we look at the pump or PDC wavelength. For example, the pump mode is localized in the central core layer, while the fundamental telecom mode of the PDC photons mostly propagates in the two layers right adjacent to the core. The material composition of these layers differs from that of the core, which results in additional complications when analysing the interplay of pump light, PDC and photoluminescence.
Due to the complicated modal structure, it is easy to excite many spatial modes simultaneously, not always intentionally. Therefore, a considerable amount of light can excite the semiconductor material but will not partake in down-conversion \cite{Pressl--2015}. There are also many modes which not only excite the impurities but also guide any photoluminescence along the ridge. These conditions are sub-optimal, as the PDC also propagates in the waveguide. Various external improvements have been proposed to reduce the effective multi-modeness, for example beam-shaping via holographic elements or integrating the pump laser \cite{Boitier-Ducci-Electrically-2013,bijlani2013semiconductor}. Narrow temporal filtering below \SI{3}{ns} is commonly employed, and we have previously shown that additional spectral filtering is highly effective \cite{Gunthner-2015}.
In this work, however, we \textit{intentionally ignore} these \textit{best practices} in order to obtain a clear photoluminescence signal. We use an excitation laser with a Gaussian beam profile and only wide time-gates of \SI{13}{\nano\second} (= the pulse repetition time of the laser). This allows us to model the photoluminescence by studying the single and coincidence rates with varying frequency and incident power. One goal is to separate the contribution of the photoluminescence from the PDC. In this context, we also study whether the specific design of the sample can be improved to reduce noise.
The data presented here reveals that the photoluminescence results from linear absorption at the bandgap of one or more layers near the core of the waveguide. Once an electron-hole pair is excited, the radiative recombination takes place at impurities at half the bandgap energy. Photoluminescence from these deep levels can be related to lattice defects, like antisites (arsenic) or vacancies, or certain rare dopants \cite{pavesi-1994-photoluminescence}. Moreover, we operate our device at room temperature; therefore, a rather broad, quasi-uniform photoluminescence spectrum is expected \cite{pavesi-1994-photoluminescence}. This is consistent with measurements in previous experiments \cite{laiho--2016--uncovering}.
\section{Methods}
\subsection{Sample and setup}
The BRW sample under investigation is a state-of-the-art, low-loss, matching-layer enhanced design optimized for simple fabrication, while simultaneously allowing a bright type\nobreakdash-II PDC process \cite{Pressl-2017-advanced-BRW}. Our experimental setup is shown in Fig.~\ref{fig:main-setup}. For illumination, we switch between two different pump lasers. The first is a Coherent MIRA titanium sapphire laser running in femtosecond mode, with the pulses being stretched to \SI{1.5}{ps} by a pulse stretcher. The pulse repetition rate is \SI{76.2}{MHz}, which corresponds to a \SI{13.1}{ns} delay between pulses. The second is a continuous wave Tekhnoscan T\&D\nobreakdash-scan X1 titanium sapphire laser. The selected pump laser is coupled into the waveguides via an aspheric lens (AL) or a microscope objective (MO) on one side; the generated PDC is collimated with another AL at the output facet. Two longpass dichroic interference filters remove the residual pump beam. An optional \SI{40}{nm} or \SI{12}{nm} bandpass filter, nominally centered at \SI{1550}{nm}, follows before the photon pairs are split by a polarizing beam splitter (PBS). The central wavelength of the bandpass is slightly tunable by rotating the filter a few degrees with respect to the beam. After the PBS, we couple the photons via collimation optics into single mode fibers connected to high-efficiency SingleQuantum EOS superconducting nanowire single photon detectors (SNSPDs). The output of the SNSPDs is amplified, routed to threshold discriminators and detected by a quTools quTau time\nobreakdash-to\nobreakdash-digital converter (TDC). All optics have the necessary broadband coating and suitable glass. This ensures a reflectivity of less than \SI{1}{\percent} even \SI{100}{nm} away from the nominal operating wavelengths.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{fig-setup-main-mo.pdf}
\caption{\label{fig:main-setup} Schematic of the main setup for the pulsed excitation. The selected pump laser is coupled into the waveguide via an aspheric lens (AL) or a microscope objective (MO) on one side and the generated PDC is collected with another AL at the output facet. After collimation, the pump is suppressed by two longpass filters, followed by an optional bandpass. The photon pair is split deterministically at the polarizing beam splitter (PBS) and coupled to fibers via ALs on the XYZ translation stages and detected and correlated subsequently.}
\end{figure}
We conduct three experiments: The first in section~\ref{sec:spatial} serves as a sanity check, testing the spatial distribution of the PDC and photoluminescence signals at the facet. Here, we move the collection fiber (single\nobreakdash-mode) parallel to the facet to verify that both signals are indeed coming from the waveguide. The second experiment in section~\ref{sec:rate-model} yields a precise model of the PDC and noise: we vary the pump power and employ various bandpass filters after the waveguide while recording the channel rates and coincidences. The final experiment in section~\ref{sec:off-resonant} reveals more about the cause of the photoluminescence by measuring the signal power at pump wavelengths strongly detuned from the degeneracy point.
For convenience, we operate two almost identical setups according to Fig.~\ref{fig:main-setup}. In sections~\ref{sec:spatial} and \ref{sec:off-resonant} we employ a waveguide with a length of \SI{2.04}{\milli\meter} and a degeneracy wavelength of \SI{767}{nm}, as well as a microscope objective for the in-coupling. In sections~\ref{sec:rate-model} and \ref{sec:future-devices} a \SI{1.3}{\milli\meter} long waveguide with a \SI{763}{\nano\meter} degeneracy wavelength is utilized, and an aspheric lens is employed for in-coupling. To ensure comparability, both are from the same wafer. These wavelengths are shorter than the design wavelength of \SI{775}{\nano\metre}; depending on the location of the chip on the wafer and the waveguide width, some variability was observed. We intentionally choose waveguides with shorter operating wavelengths, as they are closer to the bandgap of the material and yield a clearer combined PDC-photoluminescence signal, as investigated in section~\ref{sec:off-resonant}.
The main experimental challenge is keeping all light sources and the setup stable under comparable and reproducible conditions. This imposes a practical limit on how much usable data we can actually acquire in a certain time-frame. As some components, like filters, need to be changed for every set, the coupling varies and quick re-alignment is necessary. Therefore, we optimize the setup on metrics that can be derived in real-time in the lab\footnote{With a more time-consuming and elaborate optimization procedure, values similar to or better than the ones reported in Ref.~\cite{Gunthner-2015} can readily be achieved (e.g. Ref.~\cite{chen-2018-time-bin})}, most importantly the raw coincidence count rate. We operate the setup in a tightly controlled environment suitable for precision interferometry, as reported in our previous work~\cite{Kauten2017}.
\subsection{Rate models for the PDC and photoluminescence}
\label{sec:rate-model-theory}
For analysis, we model the observed rates on the detectors (``singles'') as well as the coincidence rates to simultaneously estimate the intrinsic PDC pair production rate and photoluminescence. This approach has been applied to various processes, such as PDC \cite{Pearson2010ratemodels, Schneeloch2019} or FWM \cite{Silverstone2014,Faruque2019}. In our case, the model is the system of equations,
\begin{align}
R_s &= \eta_s\left(\xi P + f(P)\right)+R_\text{bg},\label{eq:Rs}\\
R_i &= \eta_i\left(\xi P + f(P)\right)+R_\text{bg} \;\text{and}\label{eq:Ri}\\
R_{c} &= \eta_s\eta_i\xi P + \tau_cR_sR_i,\label{eq:Rsi}
\end{align}
which we solve for the detected coincidence rate $R_{c}$. Empirically, we know that the background rate $R_\text{bg}$ is equal for both channels, which we include directly in the model\footnote{In principle, as the channels are separated by polarization in the setup in Fig.~\ref{fig:main-setup}, polarized background light split at the PBS and coupled to the fibers could cause an imbalance. However, we have not observed such an effect in our system. In fact, the SNSPD bias current is set such that a mean dark rate of \SI{300}{\per\second} is achieved. Thus, the quantum efficiency might be slightly different for different detectors. This is taken care of by measuring the heralding efficiencies.}. The single rates in Eqs.~\eqref{eq:Rs} and \eqref{eq:Ri} are eliminated, allowing us to write $R_{c}$ in terms of the incident optical power $P$, the efficiency to generate photon pairs from that power $\xi$, the heralding efficiencies of ``signal'' and ``idler'' $\eta_s$ and $\eta_i$ according to the Klyshko scheme \cite{klyshko1980}, and the coincidence~window~$\tau_c$. Similar to the intra-waveguide PDC generation rate $\xi P$, the noise model $f(P)$ describes the amount of photoluminescence generated as a function of pump power.
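The elimination of the single rates can be sketched numerically. The following is a minimal illustration of the rate model (not our analysis code; the noise function $f$ is passed in as a parameter):

```python
def coincidence_rate(P, xi, eta_s, eta_i, tau_c, R_bg, f):
    """Detected coincidence rate R_c: the singles expressions for R_s and
    R_i are substituted into the accidentals term tau_c * R_s * R_i."""
    generated = xi * P + f(P)        # intra-waveguide PDC plus photoluminescence
    R_s = eta_s * generated + R_bg   # signal singles rate
    R_i = eta_i * generated + R_bg   # idler singles rate
    return eta_s * eta_i * xi * P + tau_c * R_s * R_i
```

With $f \equiv 0$ and $\tau_c = 0$ this reduces to the familiar $R_c = \eta_s\eta_i\xi P$.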
A power law is the simplest noise function, which is given by
\begin{equation}
f(P)=\gamma_P P^\alpha,\label{eq:model-power-law}
\end{equation}
with $\gamma_P$ being the photoluminescence generation efficiency, analogous to $\xi$. This model is easy to interpret via the exponent $\alpha$ in terms of the number of photons that are involved. If the process is dominated by two- or multi-photon processes or exponential avalanche effects, like in laser resonators, the exponent $\alpha$ is greater than or equal to $2$. A value of $1$ corresponds to linear absorption, and a value smaller than $1$ indicates saturation.
The second proposed noise function describes a saturable absorber; here, we assume a ``dead-time'' model \cite{davidson1968} given by
\begin{equation}
f(P)=\frac{\gamma_S P}{1+\beta \gamma_S P}.\label{eq:model-dead-time}
\end{equation}
Here, $\beta$ corresponds to an effective lifetime (or dead-time) of an ensemble of light-emitting defects and $\gamma_S$ corresponds to $\gamma_P$ in Eq.~\eqref{eq:model-power-law}. We focus only on these two noise models, as others, such as those based on error functions $\propto \text{erf}(P)$ or saturating exponentials $\propto 1-\exp(-P)$, fail to converge satisfactorily over the whole power range for the data presented in section~\ref{sec:rate-model}.
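For illustration, the two candidate noise functions can be written as a short sketch (parameter values are arbitrary):

```python
def noise_power_law(P, gamma_P, alpha):
    """Power-law noise f(P) = gamma_P * P**alpha:
    alpha >= 2 for multi-photon processes, 1 for linear absorption,
    alpha < 1 indicating saturation."""
    return gamma_P * P ** alpha

def noise_saturation(P, gamma_S, beta):
    """'Dead-time' saturable-absorber noise f(P) = gamma_S*P / (1 + beta*gamma_S*P):
    linear (~gamma_S*P) at low power, approaching the ceiling 1/beta at high power."""
    return gamma_S * P / (1.0 + beta * gamma_S * P)
```

Note the hard ceiling $1/\beta$ of the saturation model, which the power law lacks; this asymptotic difference is what separates the two fits outside the measured power range.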
\section{Results}
\subsection{Spatial distribution of the photons at the facet}
\label{sec:spatial}
To start, we determine the spatial distribution of the PDC and background counts of the \SI{2.04}{\milli\meter} waveguide. This is done to verify that both coincidences and background originate in the waveguide and are not collected from somewhere else. Moreover, there have been previous hypotheses that the background results from impurities in the GaAs substrate \cite{Boitier-Ducci-Electrically-2013}, which can be tested by comparison with the spatial distribution of the PDC signal.
The collection fiber is held by a clamp mounted on an Elliot Scientific MDE510 fiber launch system. The fiber can be moved in X, Y, and Z-directions relative to the fixed collimating lens. We measure the distribution by moving the fiber in the imaging plane parallel to the waveguide facet. Due to the different focal lengths of the collimating lenses, the image of the waveguide facet is magnified 5.8 times at the plane of the collection fiber.
This approach is limited in resolution by the collection spot size and the difficulty of determining the absolute position of the spot on the facet. The latter can be partially circumvented, as we optimize the coupling for maximum coincidences. According to simulations, the maximum of the coincidences lies at the center of the ridge and the center of the core layer. We choose this easy-to-find position as our reference point. First, we scan the distribution of the coincidences, starting at the maximum. Then, we optimize for maximum coincidences again and change the wavelength of the pump laser so that no PDC can be detected. This allows us to repeat the same measurement for the photoluminescence in the same reference frame.
In the horizontal direction, which is parallel to the epitaxial layers, both distributions are perfectly centered, but the photoluminescence is much wider. In the vertical direction, which is perpendicular to the layers of the BRW structure, the photoluminescence is also wider and its maximum is shifted by approximately \SI{1}{\micro\meter} towards the substrate, as shown in Fig.~\ref{fig:spatial-pdc}. Apart from this slight shift of the photoluminescence peak, we observe little to no light from the substrate itself. A possible explanation for the wider and shifted emission of the photoluminescence is that there are many modes the impurities can emit into, instead of just the total-internal-reflection modes available to the PDC. These modes may even be only weakly guided and simply diffract through the sample, but only the parts that are actually collected are relevant in further experiments.
\begin{figure}[ht]
\centering
\includegraphics{fig-spatial-pdc.pdf}
\caption{\label{fig:spatial-pdc} Vertical slice of the photoluminescence signal (orange, left scale) and the coincidences of the PDC (blue, right scale). The peak of the background is slightly below the peak of the PDC coincidences.}
\end{figure}
The GaAs substrate is semi-insulating, which increases the total number of impurities and potentially the photoluminescence. In contrast to previous hypotheses about the spatial distribution \cite{Boitier-Ducci-Electrically-2013}, our data suggests that the substrate causes little or no relevant photoluminescence. This is important for electrically active samples: If a heavily doped substrate is required to contact the waveguide from below, it will not affect the noise generation rate. Nevertheless, higher doping in the core regions increases the number of impurities, so more photoluminescence has to be expected.
\subsection{Rate models from power sweeps at the degeneracy wavelength}
\label{sec:rate-model}
In the second experiment, we record the single count rates as well as the coincidences in a \SI{13}{ns} time window over three orders of magnitude of input power and compare them with the case of a narrower time gate of \SI{1.13}{ns}. The power sweeps are carried out at a pump wavelength of \SI{763}{nm}, which is approximately \SI{0.3}{\nano\metre} below the degeneracy wavelength of the employed \SI{1.3}{\milli\meter} long waveguide. This wavelength was chosen empirically, as it is a stable and repeatable laser operating point. We select average pump powers with logarithmic spacing between \SI{10}{\micro\watt} and \SI{2000}{\micro\watt}, the typical operating range of our waveguides when pumped externally.
Each power sweep corresponds to one of four conditions: (1) pulsed pump without filter, (2) pulsed pump with a \SI{40}{nm} bandpass filter, or (3) pulsed pump with a \SI{12}{nm} bandpass filter. The bandpass filters are centered at the degeneracy wavelength of the produced signal and idler photons. For condition (4), we remove the bandpass and couple a CW laser into the waveguide. The rationale behind this approach is two-fold: First, the SNSPDs are able to detect light well outside the telecom C-band, so we can test the effectiveness of bandpass filtering under pulsed pump conditions. We know from previous measurements that the spectrum of the photoluminescence is very broad and uniform, and that spectral filtering is effective in increasing the signal-to-noise ratio \cite{laiho--2016--uncovering}. Second, the peak power in the pulsed case is roughly $10^6$ times higher than in the CW case for the same average power. Any nonlinear effects, like two-photon absorption, should therefore be clearly discernible.
The coefficients of the two PDC models with different noise functions are given in table~\ref{tab:model-coefficients}; the raw data with the power-law and saturation-model fits is depicted in Fig.~\ref{fig:rate-models}. For comparison, the analysis with a narrower time gate of \SI{1.13}{ns} is shown in Fig.~\ref{fig:time-gated-models}; here, only the rate model for the PDC without photoluminescence is fitted. Looking at the data for CW, we suspect that the data point at \SI{20}{\micro\watt} is an outlier and exclude it from further analysis. Note that the \SI{12}{nm} bandpass filter cut too much signal to obtain enough data for a sound analysis.
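The fits in table~\ref{tab:model-coefficients} were obtained by nonlinear least squares. A minimal sketch with SciPy illustrates the procedure for the power-law noise model; for simplicity, the heralding efficiencies and background are treated here as known constants (they are determined independently in our setup), and the data is synthetic, with values merely of the order of magnitude of the table:

```python
import numpy as np
from scipy.optimize import curve_fit

# assumed-known constants (illustrative values only)
ETA_S, ETA_I = 1.9e-4, 1.5e-4   # Klyshko heralding efficiencies
R_BG = 300.0                    # background rate (1/s)
TAU_C = 13.1e-9                 # coincidence window (s), = 1/repetition rate

def coincidences(P, xi, gamma_P, alpha):
    """Coincidence rate with power-law photoluminescence noise."""
    f = gamma_P * P ** alpha
    R_s = ETA_S * (xi * P + f) + R_BG
    R_i = ETA_I * (xi * P + f) + R_BG
    return ETA_S * ETA_I * xi * P + TAU_C * R_s * R_i

# noise-free synthetic sweep, logarithmically spaced like the experiment
P = np.logspace(1, 3.3, 20)              # 10 uW ... 2000 uW
data = coincidences(P, 5e5, 2e6, 0.74)   # 'true' parameters

popt, _ = curve_fit(coincidences, P, data, p0=(1e5, 1e6, 1.0), maxfev=20000)
```

In the actual analysis, the single rates are fitted simultaneously with the coincidences; this sketch only shows the coincidence part.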
\begin{figure}[ht]
\centering
\includegraphics{fig-rate-models.pdf}
\caption{\label{fig:rate-models} Recorded coincidences with a fit representing the power law and the saturation model for the photoluminescence using a \SI{13}{ns} window. The two models produce almost identical curves in the displayed range.}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics{fig-time-gate2.pdf}
\caption{\label{fig:time-gated-models} Recorded coincidences using a narrow \SI{1.13}{ns} window. Here, no photoluminescence model is needed ($f(P)=0$) to explain the curves statistically well.}
\end{figure}
\begin{table*}
\caption{\label{tab:model-coefficients} Fit coefficients of the two noise models for different pump/filter conditions according to Eqs.~\eqref{eq:Rs}-\eqref{eq:Rsi}. The data measured with the \SI{40}{nm} bandpass filter can be well explained without a noise model. Most quoted parameter values exhibit a $p$-value lower than 0.01, except for $\xi$ in the \textit{No filter (CW)} case and $\beta$ for \textit{No filter (pulsed)} (both $p=0.1$). The $R^2$ is always better than $0.95$. The coincidence window is \SI{13.1}{ns} (= 1/laser repetition rate), except where stated.}
\begin{ruledtabular}
\begin{tabular}{llccc}
PL model & Parameter & No filter (pulsed) & \SI{40}{nm} BPF (pulsed) & No filter (CW)\\
\hline
\\[-0.5em]
Power law Eq.~\eqref{eq:model-power-law} & $\xi$ ($10^6$ pairs/s/\textmu W) & \SI{0.5(1)}{} & \SI{0.060(1)}{} & \SI{0.5(2)}{} \\
Noise $\propto \gamma_P P^\alpha$& $\gamma_P$ ($10^6$ photons/s/\textmu W) & \SI{2.0(4)}{} & n/a & \SI{8.5(5)}{} \\
& $\alpha$ & \SI{0.74(4)}{} & n/a & \SI{0.70(3)}{} \\
\\[-0.5em]
Saturation Eq.~\eqref{eq:model-dead-time} & $\xi$ ($10^6$ pairs/s/\textmu W) & \SI{0.78(6)}{} & \SI{0.060(1)}{} & \SI{1.40(6)}{} \\
Noise $\propto \gamma_S P/(1+\beta \gamma_S P)$& $\gamma_S$ ($10^6$ photons/s/\textmu W) & \SI{0.4(1)}{} & n/a & \SI{2.5(2)}{} \\
& $\beta$ (ns) & \SI{8(6)}{} & n/a & \SI{5(1)}{} \\
\\[-0.5em]
Short time gate (\SI{1.13}{ns}) & $\xi$ ($10^6$ pairs/s/\textmu W) & \SI{0.80(1)}{} & \SI{0.062(1)}{} & \SI{1.80(3)}{} \\
\\[-0.5em]
Klyshko efficiencies & $\eta_s\times10^{-4}$ & 1.9(1) & 4.9(2) & 2.6(1) \\
& $\eta_i\times10^{-4}$ & 1.5(1) & 8.5(3) & 3.2(1) \\
\end{tabular}
\end{ruledtabular}
\end{table*}
We then take a closer look at the estimated PDC pair generation rates $\xi$. As the setup and waveguide remain identical between experiments, these values attest to the consistency of our measurements. Most of the rates are within the range of \SIrange{5e5}{8e5}{pairs\per\micro\watt\per\second} without filters, and an order of magnitude lower with the \SI{40}{nm} bandpass filter. The only exception is the saturation model in the \textit{No filter (CW)} case, which reports pair rates more than twice as high as the other cases. This might be an artifact of the low dead-time it converged to; because of the known repetition rate, we expect an effective dead-time slightly above \SI{10}{ns}.
However, this raises the question of whether a reduction to a tenth of the PDC rate is acceptable when employing the \SI{40}{nm} bandpass filter. As the spectrum of the PDC is more than \SI{90}{nm} (FWHM) wide \cite{Gunthner-2015}, a considerable reduction in power can already be expected. Including the transmission profile of the filter, we determine that only approximately \SI{32}{\percent} of the signal/idler photons are transmitted, while the rest is absorbed or reflected by the filter. As coincidence rates are affected twice, due to the involvement of two correlated photons, the total PDC transmission is expected to be about \SI{10}{\percent}, which is consistent with our results for the PDC rates.
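The factor-of-ten estimate follows from simple arithmetic: both photons of a pair must pass the filter independently, so the single-photon transmission enters squared.

```python
# single-photon transmission of the 40 nm bandpass over the >90 nm wide
# PDC spectrum, as estimated in the text
t_single = 0.32

# a coincidence survives only if both photons of the pair are transmitted
t_pair = t_single ** 2
print(f"expected pair transmission: {t_pair:.0%}")  # prints "expected pair transmission: 10%"
```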
Moreover, as the noise is spectrally broader than the PDC \cite{laiho--2016--uncovering}, the signal-to-noise ratio increases. This is evident as the model without photoluminescence ($\gamma_S$ or $\gamma_P=0$) yields a statistically significant fit in Table~\ref{tab:model-coefficients}.
Our results show that the exponents of the photoluminescence power law model are significantly below 1. This indicates that the driving factor of the photoluminescence is saturable linear absorption, and that higher-order photon processes play only marginal roles in our experimental conditions. In the following section~\ref{sec:off-resonant}, the dependence of the photoluminescence generation rates on the pump wavelength supports this hypothesis. The saturation is modeled as an excitation with a lifetime, which assumes that the photoluminescence stems from impurities that can only be re-excited after a delay when emitting a photon.
We emphasize that the photoluminescence scale factors $\gamma_S$ and $\gamma_P$ serve the same purpose, but their values cannot be compared directly without taking $f(P)$ into account, as the true rate is only given by $f(P)$. For example, at high powers $f(P)$ predicts a much higher photoluminescence rate for the saturation model than for the power-law model; at low powers it is the other way around. Both models explain the overall shape of the curves well and in a statistically sound manner. Further discussion of the differences is given in section~\ref{sec:future-devices} and Fig.~\ref{fig:car}.
Note that the Klyshko efficiencies are two orders of magnitude lower than in our previous work, which is caused by the nature of this experiment: as we are deliberately trying to measure the photoluminescence, we refrain from tight spatial, temporal or spectral filtering. Hence, the single rates increase by a large amount, while the coincidence rates stay relatively low. In our previous work \cite{Gunthner-2015}, we report up to $\eta\sim \SI{6}{\percent}$ when employing the proper filters.
\subsection{Off-resonant photoluminescence generation rates}
\label{sec:off-resonant}
In the final experiment, we move the excitation wavelength away from the degeneracy point, mostly towards longer wavelengths where no PDC is produced, because the phasematching condition is no longer fulfilled. On occasion, we have observed PDC processes of various types that probably involve higher-order modes in several waveguides. We avoid these wavelengths in this experiment to focus solely on the photoluminescence. We measure the magnitude and linearity of the photoluminescence at each wavelength for different excitation powers.
In this section, we employ the setup with the microscope objective and the sample with a length of \SI{2.04}{\milli\meter} (degeneracy wavelength \SI{767}{nm}). We use a \SI{40}{nm} bandpass filter and a pulsed pump to emulate the conditions for a typical broadband PDC experiment with BRWs.
Without PDC, we can neglect coincidences, so the full rate model is no longer necessary. Instead, we simply record the rates on one of the SNSPDs, measuring only dark counts plus the photoluminescence. This can be described by a power law with an offset
\begin{equation}
R_s(P)=A P^\alpha + R_\text{0},\label{eq:power-law}
\end{equation}
where $P$ is the power before the waveguide in-coupling objective. The scale factor $A$ can be interpreted as the photoluminescence generation rate, and $R_0$ is the count rate at the lowest power measured, including dark counts and background. The exponent $\alpha$ corresponds to the one from the fits and models discussed in the previous section~\ref{sec:rate-model}, but is determined independently for each photoluminescence data set. This model shows excellent agreement with the measured data in Fig.~\ref{fig:off-resonant-rates}. The resulting fit coefficients are listed in table~\ref{tab:power-law-noise}. We find that $\alpha$ is very close to one, with a tendency towards slightly lower exponents at shorter wavelengths. This is a further hint at saturation effects, which are more noticeable the closer the excitation wavelength is to the bandgap of the materials. We note that the value of the exponent can only be compared qualitatively between samples: the exact nature, realisation and excitation of the impurities can vary between the waveguides, resulting in slightly different rates and exponents.
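Fitting Eq.~\eqref{eq:power-law} is a standard three-parameter least-squares problem; a minimal sketch (synthetic, noise-free data generated from the \SI{775}{nm} row of table~\ref{tab:power-law-noise}):

```python
import numpy as np
from scipy.optimize import curve_fit

def singles(P, A, alpha, R0):
    """Photoluminescence power law with offset: R_s(P) = A*P**alpha + R0."""
    return A * P ** alpha + R0

P = np.logspace(1, 3, 15)               # excitation power in uW
data = singles(P, 360.0, 1.00, 4400.0)  # values from the 775 nm fit

(A_fit, alpha_fit, R0_fit), _ = curve_fit(singles, P, data, p0=(100.0, 1.0, 1000.0))
```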
\begin{figure}[ht]
\centering
\includegraphics{fig-off-resonant-rates.pdf}
\caption{\label{fig:off-resonant-rates} Raw data of the off-resonant photoluminescence generation rates at telecom wavelengths for different average excitation powers and pump wavelengths between \SI{760}{nm} and \SI{850}{nm}. The data is fitted with the power law from Eq.~\eqref{eq:power-law}. The fit parameters are listed in Table~\ref{tab:power-law-noise}, the generation rate $A$ is plotted in Fig.~\ref{fig:lorentzian}.}
\end{figure}
\begin{table}
\caption{\label{tab:power-law-noise} Fit coefficients of the power law with offset from Eq.~\eqref{eq:power-law}.}
\begin{ruledtabular}
\begin{tabular}{@{}cccc@{}}
Wavelength & $A$ (\si{photons\per\micro\watt\per\second}) & $\alpha$ & $R_0$ (\si{photons\per\second}) \\
\hline
760\,\si{nm} & 1800(300) & 0.86(3) & \SI{100(1400)}{} \\
763\,\si{nm} & 1130(150) & 0.91(3) & \SI{3000(1000)}{} \\
775\,\si{nm} & 360(50) & 1.00(2) & \SI{4400(400)}{} \\
800\,\si{nm} & 87(13) & 1.07(3) & \SI{5000(140)}{} \\
850\,\si{nm} & 27(10) & 1.00(6) & \SI{4200(300)}{}
\end{tabular}
\end{ruledtabular}
\end{table}
The photoluminescence generation rate $A$ (table~\ref{tab:power-law-noise}) shows a distinct behaviour in Fig.~\ref{fig:lorentzian}: over the span of \SI{100}{nm} above the bandgap, it decreases by two orders of magnitude. To model this behaviour, we tried a variety of functions, like polynomials or exponentials similar to the overlap integrals known from solid state physics \cite{gross2014}. It turns out that the only viable model is a Lorentzian function given by
\begin{equation}
A(h\nu)=\frac{\mathcal{N}}{1+\left(\frac{h\nu-E_g}{\sigma}\right)^2}.\label{eq:lorentzian}
\end{equation}
While $\mathcal{N}$ and $\sigma$ are just scale parameters, the position of the resonance, $E_g=\SI{1.654(4)}{eV} \approx \SI{750}{nm}$, can be explained physically: Its value is very close to the bandgap of the matching layers with an aluminum concentration of nominally $\SI{20}{\percent}$. The Lorentzian function is an excellent approximation for the real and imaginary parts of the dielectric function near the bandgap of Al$_x$Ga$_{1-x}$As alloys with an aluminum concentration of $x$ \cite{Kim1993dielectricalgaas}. In the literature, the quoted bandgap of Al$_x$Ga$_{1-x}$As varies slightly \cite{Kim1993dielectricalgaas,Gehrsitz-refractive-index-2000,adachi1993properties}. If we also take the typical fabrication error of the concentration of about two percentage points into account, our estimation of $E_g$ corresponds to an aluminum concentration of \SIrange{18.5}{19.5}{\percent}, which is well within the manufacturing specification. Together with the rate modeling from the previous section~\ref{sec:rate-model}, this strongly indicates that linear absorption is indeed the driving factor of the photoluminescence.
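The correspondence between the fitted resonance energy and the quoted wavelength is a direct unit conversion; a small sketch of Eq.~\eqref{eq:lorentzian} together with the conversion:

```python
H_C_EV_NM = 1239.842  # h*c in eV*nm

def lorentzian(E, N, Eg, sigma):
    """Photoluminescence generation rate A(h*nu) vs. photon energy (all in eV)."""
    return N / (1.0 + ((E - Eg) / sigma) ** 2)

# the fitted resonance at 1.654 eV corresponds to a wavelength of ~750 nm
print(f"{H_C_EV_NM / 1.654:.0f} nm")  # prints "750 nm"
```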
\begin{figure}[ht]
\centering
\includegraphics{fig-lorentzian.pdf}
\caption{\label{fig:lorentzian} Noise generation rate $A$ from table~\ref{tab:power-law-noise} with Lorentzian fit according to Eq.~\eqref{eq:lorentzian}. The resonance is centered at \SI{1.654(4)}{eV} (\SI{750}{nm}), which is approximately the bandgap of the matching layer materials.}
\end{figure}
\subsection{Considerations for future devices}
\label{sec:future-devices}
Identifying and modelling the driving factors of the photoluminescence allows us to improve future samples. The first and foremost measure is increasing the spectral separation between the lowest bandgap and the design wavelength. We propose two measures to reduce the noise: longer operating wavelengths and materials with a higher bandgap.
Quantitatively, we can estimate the effect of these proposals for the sample at hand from the fit in Fig.~\ref{fig:lorentzian}. First, moving from a \SI{767}{nm} to a \SI{780}{nm} pump wavelength already reduces the amount of photoluminescence by \SI{70}{\percent}. Second, our powerful automated sample design suite \cite{Pressl-2017-advanced-BRW} allows us to easily modify the sample to increase the aluminum concentration. For example, changing just the two matching layers next to the core from a concentration of \SI{20}{\percent} to \SI{22}{\percent}, reduces the photoluminescence by another \SI{60}{\percent}. Both measures increase the spread between excitation and bandgap edge and result in a \SI{90}{\percent} total reduction compared to the samples investigated here. It is important to note that these small changes do not affect the critical performance metrics \cite{Pressl-2017-advanced-BRW} of the waveguide, like the mode overlap and the effective nonlinearity.
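The quoted total follows from treating the two measures as independent reductions, so the remaining photoluminescence is the product of the remaining fractions; a one-line check:

```python
# reductions quoted in the text for the two independent measures
r_pump = 0.70      # pump shift 767 nm -> 780 nm
r_material = 0.60  # matching-layer Al concentration 20% -> 22%

# remaining photoluminescence is the product of the remaining fractions
total = 1.0 - (1.0 - r_pump) * (1.0 - r_material)
print(f"total reduction: {total:.0%}")  # prints "total reduction: 88%"
```

This is consistent with the roughly \SI{90}{\percent} total reduction stated above.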
Furthermore, since we now have the full rate model description (Eqs.~\eqref{eq:Rs}-\eqref{eq:Rsi} and Table~\ref{tab:model-coefficients}) of our \SI{1.3}{mm} long waveguide at hand, we calculate the coincidence-to-accidentals ratio (CAR) curves for different filters and potential alternative designs, which may have modified material compositions, other geometries or shifted design wavelengths. A great advantage of this model-based approach is the separation of the PDC and noise signal. This allows us to evaluate the effects of different coincidence windows on the figures of merit.
The results for two different coincidence windows are depicted in Fig.~\ref{fig:car}. It is clearly visible that spectral and temporal filtering are highly effective, as a model without photoluminescence explains the measured data well. A narrow time gate increases the maximum CAR by a factor of 10. Without a spectral filter, a noise-optimized design could increase the usable pump power substantially for a fixed CAR. The saturation model without spectral filters (solid blue line) in Fig.~\ref{fig:car} yields a CAR of 7 at \SI{100}{\micro\watt} pump power. Keeping the CAR constant and moving to the dashed line, which represents a sample with a \SI{90}{\percent} reduction in photoluminescence, yields a pump power of around \SI{200}{\micro\watt}, which corresponds to a doubling of the pair rate.
Furthermore, it is clearly visible that both noise models have their merits. We believe the power law in Eq.~\eqref{eq:model-power-law} provides a good estimate of the PDC production rate, while the saturation model in Eq.~\eqref{eq:model-dead-time} provides a good description of the processes at lower powers. The slight discrepancy with the measured data, however, also shows that we still cannot capture the full physics of our system. Moreover, the fact that two parameters in Table~\ref{tab:model-coefficients} could only be fitted to a $p$-value of 0.1 indicates the limitations of the statistics and the models in certain cases. Building the model from a pure quantum optics approach, i.e. mean photon numbers, failed to produce reliable results over the whole power range.
We emphasize that the values reported in Fig.~\ref{fig:car} are conservative in the sense that we did not explicitly align for maximum CAR. Properly optimizing the pumping and coupling, also in connection with narrower filtering (e.g. a \SI{12}{nm} bandpass), yields values at least an order of magnitude higher \cite{chen-2018-time-bin}. This is immediately apparent from the fact that the curve for the \SI{90}{\percent} reduction does not even approach the \SI{40}{nm} case, which it should. With better alignment, however, there is much headroom for optimizing the individual case. Nevertheless, it follows that for these hypothetical samples the filtering requirements are relaxed significantly compared to the state-of-the-art. This is especially interesting for integrated detection systems: realizing high-fidelity time filtering at the picosecond level is much harder than at the nanosecond level. The same holds for on-chip spectral bandpass filters.
\begin{figure*}[ht]
\centering
\includegraphics{fig-car-combined.pdf}
\caption{\label{fig:car} Predicted CAR values for different conditions with a pulsed pump at \SI{76.2}{\mega\hertz} repetition rate. At low powers, the CAR is limited by the dark counts of the detectors (\SI{300}{\per\second}), at high powers by the accidental coincidences. Both the raw data (symbols) from Figs.~\ref{fig:rate-models} and \ref{fig:time-gated-models} and curves derived from the different models (\SI{40}{nm}, red; no filter, black and blue) are shown. The black solid line depicts the no-filter power law model, while the blue line is the corresponding saturation model, showing better agreement with the measurements, especially at low powers. The dashed line is a hypothetical sample with the photoluminescence reduced by \SI{90}{\percent}. Here, choosing either the power law or the saturation model made no significant difference. Panel (a) shows the time-filtered case with a \SI{1.13}{ns} time gate, panel (b) the unfiltered case with \SI{13.1}{ns}. Note that the vertical scales differ by an order of magnitude.}
\end{figure*}
Having the full model depending on the input power allows us not only to estimate the CAR, but also to predict pair rates for an on-chip pump and photonic network. As all the stated input powers are measured before the aspheric lens, we need to determine the individual loss factors for coupling into the waveguide. The actual power guided in the pump mode can be recovered by multiplying the aspheric lens transmission (\SI{70}{\percent}) by the total in-coupling efficiency ($<$\SI{35}{\percent}) and the typical relative pump mode excitation ($>$\SI{4}{\percent}) \cite{Pressl--2015}. Thus, only about \SI{1}{\percent} of the power reaches the necessary pump mode. Hence, we estimate that the \textit{true} coefficient of the pair generation rate, e.g. for an on-chip pump, is in fact on the order of \SI{5e7}{\per\micro\watt\per\second}. In a purely externally pumped system, this is not achievable due to absorption in the glass of the objective, even with proper beam shaping to match the far-field mode shape. In contrast, an active, electrically pumped waveguide laser runs intrinsically in the correct mode \cite{bijlani2013semiconductor, Boitier-Ducci-Electrically-2013}. This means that for \SI{1}{\milli\watt} of internal laser power, a pair rate of at least \SI{5}{\giga\hertz} can be expected. Such rates are tremendously useful as they can be harnessed by a fully integrated (quantum) optic network.
\section{Conclusion}
We have presented three different measurements designed to gain insight into the nature of BRW photoluminescence. We proposed two rate models to describe the photon generation process from a big-picture point of view. There is strong evidence that the main cause of photoluminescence is electron-hole pair excitation via linear absorption of a pump photon, followed by a short lived radiative decay via deep impurity levels. The defects that provide these deep levels are located in the matching layers with a low aluminum concentration right next to the core. Furthermore, we have proposed small modifications in the sample design that promise to greatly reduce the photoluminescence. Our calculations predict a reduction by \SI{90}{\percent}, while promising high non-linearity and photon pair rates in the GHz regime.
\section*{Author contributions}
Conceptualization, S.A., K.L., B.P., G.W.; Formal analysis, S.A., B.P.; Methodology, S.A., A.S., K.L., B.P, G.W.; Investigation, S.A., A.S., K.L. B.P., H.T.; Resources, H.S., M.K., S.H., C.S.; Software, B.P.; Supervision, B.P., G.W.; Writing - original draft, S.A., B.P.; Writing - review \& editing, A.S., K.L., H.T., C.S., G.W.; Funding acquisition, C.S., G.W.;
\section*{Acknowledgments}
This work was supported by the Austrian Science Fund (FWF) through the project I2065 and the Special Research Program (SFB) project \textit{BeyondC} no. F7114, the DFG project no. {SCHN1376/2-1}, the ERC project {\textit{EnSeNa} (Grant No. 257531)} and EU H2020 quantum flagship program {\textit{UNIQORN} (Grant No. 820474)} and the State of Bavaria. S.A. is supported by the EU H2020 FET open project {\textit{PIEDMONS} (Grant No. 801285)}. B.P. acknowledges support by the FWF SFB project no. F6806. We thank A. Wolf and S. Kuhn for assistance during sample growth and fabrication. We thank T. G{\"{u}}nthner and H. Chen for laboratory assistance and S. Frick, M. Sassermann, R. Chapman and M. Prilm{\"{u}}ller for fruitful discussions and comments.
\section{Effective low energy time-independent Hamiltonian}
Two massive Dirac materials can hybridize through interlayer hopping, which can be described by the minimal $\rm{\mathbf{k \cdot p}}$ model \cite{SM2017Topological}:
\begin{equation}\label{EqS1}
H_{\tau}(\mathbf{k})=
\begin{pmatrix}
-\mathit{\Delta}/2+M_{u} &{\nu}_{u}(\tau k_{x}-i\epsilon k_{y}) & t_{cc} & t_{cv} \\
{\nu}_{u}(\tau k_{x}+i\epsilon k_{y}) & -\mathit{\Delta}/2 & t_{vc} & t_{vv} \\
t^{*}_{cc} & t^{*}_{vc} &\mathit{\Delta}/2 &{\nu}_{l}(\tau k_{x}-ik_{y}) \\
t^{*}_{cv} & t^{*}_{vv} &{\nu}_{l}(\tau k_{x}+ik_{y}) &\mathit{\Delta}/2-M_{l}
\end{pmatrix},
\end{equation}
in which $\nu_{u}$ ($\nu_{l}$) and $M_{u}$ ($M_{l}$) correspond to the Fermi velocity and mass in the upper (lower) layer. $\tau=\pm 1$ is the valley index of the hexagonal bilayer. $\epsilon=\pm 1$ denotes the two types of stacking orientations, where $\epsilon=+1$ ($\epsilon=-1$) corresponds to R-type (H-type) stacking in the main text. $t_{ij}$ ($i,j=c,v$) are the interlayer hopping amplitudes between band $i$ of the upper layer and band $j$ of the lower layer at the $K/K{'}$ valley. Here $\mathit{\Delta}=\mathit{\Delta}_{g}-U$ is the heterobilayer band gap under an external electric field, where $U$ is the interlayer bias proportional to the perpendicular electric field and $\mathit{\Delta}_{g}$ is the intrinsic heterobilayer band gap. It is worth noting that certain interlayer hopping channels must vanish owing to the three-fold rotational symmetry $C_3$ of the hexagonal bilayer \cite{SM2017Topological}. Herein, we choose the $H_X^M$ stacking with the characteristics of the valley quantum spin Hall (VQSH) state as a representative example to elaborate the topological phases. Consequently, Eq.~\eqref{EqS1} becomes
\begin{equation}\label{EqS2}
\begin{split}
H_{\tau}(\mathbf{k})&= \boldsymbol s_x
(\tau v_1k_{x}\boldsymbol\sigma_{x}+ v_2 k_{y}\boldsymbol\sigma_{y}) +
\boldsymbol s_y
( v_1k_{y}\boldsymbol\sigma_{x}- \tau v_2 k_{x}\boldsymbol\sigma_{y}) \\
& +\frac{\Delta}{2}\boldsymbol s_z\boldsymbol\sigma_{z} +M_2(\boldsymbol s_z\boldsymbol\sigma_{0}-\boldsymbol s_0\boldsymbol\sigma_{0}) +
M_1(\boldsymbol s_0\boldsymbol\sigma_{z}-\boldsymbol s_z\boldsymbol\sigma_{z}) \\
& + \lambda(\boldsymbol s_x\boldsymbol\sigma_{0} - \boldsymbol s_x\boldsymbol\sigma_{z}),
\end{split}
\end{equation}
in which $v_{1,2} = {(v_l \pm v_u)}/{2}$, $M_{1,2} = {(M_l \pm M_u)}/{4}$, and $\lambda=\frac{1}{2}t_{vv}$ denotes the interlayer hopping for holes. In this case, the interlayer hopping for electrons vanishes at the $K$ points. A gap-closing topological phase transition regulated by the electric field occurs when $\mathit{\Delta}$ is tuned from $\mathit{\Delta}>0$ to $\mathit{\Delta}<0$. Here we focus only on the neighborhood of the critical point $\mathit{\Delta}\approx 0$, i.e., $M_{u}(M_{l}) \gg \mathit{\Delta}$. In this case, the $4\times4$ Hamiltonian can be projected onto a $2\times2$ Hamiltonian:
\begin{equation}\label{EqS3}
\begin{split}
H_{\tau}(\mathbf{k}) &\cong (\frac{\mathit{\Delta}}{2}+\frac{\mathit{P}}{2}k^2)\boldsymbol{\sigma}_{z}+\frac{\mathit{Q}}{2}k^2+2\lambda\frac{\nu_{l}}{M_{l}} (\tau k_{x} - ik_{y})(\boldsymbol{\sigma}_{x}+i\boldsymbol{\sigma}_{y})+h.c. \\
&= d_{0}(k)\sigma_{0}+\mathbf{d_{\tau}(k)} \cdot \boldsymbol{\sigma},
\end{split}
\end{equation}
where $\mathit{P}=\nu^{2}_{u}/M_{u}+\nu^{2}_{l}/M_{l}$ and $\mathit{Q}=\nu^{2}_{u}/M_{u}-\nu^{2}_{l}/M_{l}$. We have $d_{0}(k) = -Dk^2$ and $\mathbf{d_{\tau}(k)}=(\tau Ck_{x}, Ck_{y}, M-Bk^{2})$, where $D=-\frac{Q}{2}$, $C=4\lambda \frac{\nu_{l}}{M_{l}}$, $M=\frac{\mathit{\Delta}}{2}$, and $B=-\frac{P}{2}$.
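The spectrum of this projected Hamiltonian is $E_{\pm}(\mathbf{k}) = d_{0}(k) \pm |\mathbf{d}_{\tau}(\mathbf{k})|$, so the direct gap at the valley center is $2|M| = |\mathit{\Delta}|$. A minimal numerical sketch (the parameter values below are illustrative placeholders, not values fitted in this work):

```python
import math

def dispersion(kx, ky, C=1.0, B=-0.25, D=0.25, M=-0.025):
    """E_+/-(k) = d0(k) +/- |d(k)| for the projected two-band model.

    C, B, D, M are illustrative numbers; the valley index enters d only
    as tau*C*kx, so the spectrum itself is valley-independent.
    """
    k2 = kx * kx + ky * ky
    d0 = -D * k2
    d_norm = math.sqrt((C * kx) ** 2 + (C * ky) ** 2 + (M - B * k2) ** 2)
    return d0 + d_norm, d0 - d_norm

# Direct gap at k = 0 equals 2|M| = |Delta|
e_up, e_dn = dispersion(0.0, 0.0)
assert abs((e_up - e_dn) - 2 * 0.025) < 1e-12
```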
Based on the Floquet theorem, we consider irradiation by time-periodic circularly polarized light (CPL) with vector potential $\mathbf{A}(t)=\mathbf{A}({t}+T)= A[\cos (\omega t ), \eta \sin (\omega t ), 0]$, where $\eta=\pm 1$ denotes left- (right-) handed CPL. Here $A$, $\omega$, and $T=2\pi /\omega$ are its amplitude, frequency, and period, respectively. The vector potential $\mathbf{A}(t)$ is coupled to the Hamiltonian through the minimal coupling substitution, $H_{\tau}(\mathbf{k},t)=H_{\tau}(\mathbf{k}+e\mathbf{A(t)})$ \cite{SMPhysRevA.27.72,SMPhysRevLett.110.200403}. The time-periodic Hamiltonian can be expanded as
\begin{equation}\label{EqS4}
H_{\tau}(\mathbf{k},t)=\sum_{m}H_{\tau}^{m}(\mathbf{k})e^{im\omega t},
\end{equation}
and then the effective Floquet Hamiltonian in the high frequency approximation can be expressed as \cite{SMPhysRevX.4.031027}
\begin{equation}\label{EqS5}
H_{\tau}^{F}(\mathbf{k}) = H^{0}(\mathbf{k})+\sum_{m\ge 1}\frac{\left[H_{\tau}^{-m},H_{\tau}^{m}\right]}{m\hbar \omega},
\end{equation}
with
\begin{equation}
\begin{split}\label{EqS6}
&H^{0}(\mathbf{k}) = \frac{Q}{2}[(\frac{eA}{\hbar})^{2}+k^{2}]\sigma_{0}+4\tau\lambda\frac{\nu_{l}}{M_{l}} k_{x}\sigma_{x}+4\lambda\frac{\nu_{l}}{M_{l}}k_{y}\sigma_{y}+[\frac{P}{2}(\frac{eA}{\hbar})^{2}+\frac{P}{2}k^{2}+\frac{\mathit{\Delta}}{2}]\sigma_{z}, \\
&H^{\pm1}(\mathbf{k}) = \frac{Q}{2}\frac{eA}{\hbar}(k_{x} \pm i\eta k_{y})\sigma_{0}+2\tau \lambda P\frac{eA}{\hbar}\frac{\nu_{l}}{M_{l}}\sigma_{x}\pm 2i\eta\lambda\frac{eA}{\hbar} \frac{\nu_{l}}{M_{l}}\sigma_{y}+\frac{P}{2}\frac{eA}{\hbar}(k_{x} \pm i\eta k_{y})\sigma_{z}, \\
&H^{\pm2}(\mathbf{k}) = 0.
\end{split}
\end{equation}
By directly calculating the commutators,
\begin{equation}
\begin{split}\label{EqS7}
[H^{-1},H^{1}] &= -4\eta\lambda\frac{\nu_{l}}{M_{l}}P(\frac{eA}{\hbar})^{2}k_{x}\sigma_{x} - 4\eta\tau\lambda\frac{\nu_{l}}{M_{l}}P(\frac{eA}{\hbar})^{2}k_{y}\sigma_{y}+8\lambda(\frac{\nu_{l}}{M_{l}})^{2}(\frac{eA}{\hbar})^{2}\sigma_{z} \\
[H^{-2},H^{2}] &= 0
\end{split}
\end{equation}
we obtain
\begin{equation}\label{EqS8}
H_{\tau}^{F}(\mathbf{k}) = \tilde{d_{0}}(k)\sigma_{0}+\tilde{\mathbf{d}}_{\tau}(\mathbf{k}) \cdot \boldsymbol{\sigma},
\end{equation}
where the light-renormalized terms are $\tilde{d_{0}}(k) = \tilde{D}-Dk^2$ and $\tilde{\mathbf{d}}_{\tau}(\mathbf{k})=(\tau \tilde{C}k_{x}, \tilde{C}k_{y}, \tilde{M}_{\tau}-Bk^{2})$, with $\tilde{D}=-D(\frac{eA}{\hbar})^2$, $\tilde{C}=C-\frac{4\eta}{\hbar \omega}\frac{\nu_{l}}{M_{l}}P\left(\frac{eA}{\hbar}\right)^{2}\lambda$, and $\tilde{M}_{\tau}=\frac{\mathit{\Delta}}{2}+(\frac{eA}{\hbar})^{2}[\frac{P}{2}+\frac{8\eta\tau}{\hbar \omega}(\frac{\nu_{l}}{M_{l}})^{2}\lambda]$. As shown in Fig.~\ref{Fig. S1}, the Dirac mass $\tilde{M}_{\tau}$ is valley-dependent, leading to two valley sectors with different responses to the CPL. The valley-resolved Chern number can be obtained analytically as $C_{\tau}=-\frac{\tau}{2}[\mathrm{sgn}(\tilde{M}_{\tau}) + \mathrm{sgn}(B)]$.
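The sign structure of $\tilde{M}_{\tau}$ and the resulting valley-resolved Chern numbers can be explored with a short numerical sketch. Here $\mathit{\Delta}$, $\lambda$, and $\nu_{l}/M_{l}$ follow the caption of Fig.~\ref{Fig. S1}, while $P$, $B$, and $\hbar\omega$ are illustrative placeholders not fixed by the caption, so the crossover intensity below is not the one in the figure:

```python
import math

def valley_mass(A, tau, eta, Delta=-0.05, lam=1.5, nu_over_M=0.5,
                P=0.5, hbar_omega=15.0):
    """Light-renormalized Dirac mass M~_tau (e = hbar = 1 units)."""
    a2 = A * A  # (eA/hbar)^2
    return Delta / 2 + a2 * (P / 2
                             + 8 * eta * tau / hbar_omega * nu_over_M ** 2 * lam)

def valley_chern(A, tau, eta, B=-0.25, **kw):
    """C_tau = -(tau/2) [sgn(M~_tau) + sgn(B)]."""
    m = valley_mass(A, tau, eta, **kw)
    return -tau / 2 * (math.copysign(1.0, m) + math.copysign(1.0, B))

# Without light both valleys are inverted (VQSH-like): C_{+1} = 1, C_{-1} = -1
assert (valley_chern(0.0, +1, +1), valley_chern(0.0, -1, +1)) == (1.0, -1.0)
# Left-handed light (eta = +1) re-opens the K-valley gap trivially first,
# leaving a single chiral valley (VQAH-like): C_{+1} = 0, C_{-1} = -1
assert (valley_chern(0.3, +1, +1), valley_chern(0.3, -1, +1)) == (0.0, -1.0)
```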
\begin{figure}[H]
\centering
\includegraphics[width=12cm]{FigS1.pdf}
\caption{The light-renormalized Dirac mass $\tilde{M}_{\tau}$ as a function of light intensity $A$ for (a) left-handed and (b) right-handed CPL. Here, $\mathit{\Delta}=-0.05$, $\lambda=1.5$, and $\frac{\nu_{l}}{M_{l}}=\frac{\nu_{u}}{M_{u}}=0.5$, owing to the similar band structures of the lower and upper layers in these heterobilayers.
}
\label{Fig. S1}
\end{figure}
\section{Computational methods}
The first-principles calculations were performed within the framework of density functional theory (DFT) \cite{SMPhysRev.136.B864,SMPhysRev.140.A1133} using the projector augmented-wave method encoded in the Vienna $ab$ $initio$ Simulation Package (VASP) \cite{SMPhysRevB.54.11169}. The exchange-correlation functional was described by the generalized gradient approximation in the Perdew-Burke-Ernzerhof (GGA-PBE) formalism \cite{SMPhysRevLett.77.3865}. The plane-wave cutoff energy was set to 500 eV. The first Brillouin zone was sampled by a $15 \times 15 \times 1$ Monkhorst-Pack mesh grid. We adopted $1 \times 1$ primitive cells of two different 2H-TMDs to build the heterobilayer. A vacuum layer larger than 20~\text{\AA} was introduced to decouple the interactions between neighboring heterobilayer slabs. All geometric structures were fully relaxed until the energy and forces converged to $10^{-6}$ eV and 0.01 eV/\text{\AA}, respectively. Weak van der Waals (vdW) interactions were included in our calculations using the Grimme DFT-D3 method \cite{SMDensity}. The 56 projected atomic orbitals, comprising $s$ and $d$ orbitals centered at the metal atoms and $s$ and $p$ orbitals centered at the chalcogen atoms in the primitive unit cell, were used to construct the WFTB model based on the maximally localized Wannier function method in the WANNIER90 package \cite{SMMostofi2014}. The iterative Green's function method \cite{SMSancho_1985} as implemented in WannierTools \cite{SMWU2017} was used for the chiral edge-state calculations.
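For orientation, the quoted settings correspond to an input fragment of roughly the following form (a sketch assembled from the values above, not the actual input files used in this work; tags not mentioned in the text are omitted):

```
# INCAR sketch (values quoted in the text; all other tags left at defaults)
ENCUT  = 500      ! plane-wave cutoff in eV
EDIFF  = 1E-6     ! electronic convergence criterion in eV
EDIFFG = -0.01    ! ionic convergence criterion on forces in eV/Angstrom
IVDW   = 11       ! Grimme DFT-D3 dispersion correction
# KPOINTS: 15 x 15 x 1 Monkhorst-Pack grid
```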
\section{Lattice structures and stacking-dependent electronic band structures}
\begin{table}[H]
\caption{Structural properties of the $H_X^M$ stacking in different heterobilayers. Here $a$, $\delta$, $d$, $E_{b}$, and $\Delta\Phi$ denote the optimized lattice constant, lattice mismatch, average interlayer distance, binding energy, and work-function difference between the two isolated monolayers, respectively.}
\begin{center}
\renewcommand\arraystretch{1.5}
\begin{tabular}{p{2.5cm}<{\centering} p{2cm}<{\centering} p{2cm}<{\centering} p{2cm}<{\centering} p{2cm}<{\centering} p{2cm}<{\centering}}
\hline
\hline
& $a(\text{\AA})$ & $\delta(\%)$ & $d(\text{\AA})$ & $E_{b}$(eV) & $\Delta\Phi$(eV)\\
\hline
MoS$_2$/WTe$_2$ & 3.33 & 4.50 & 3.21 & -0.30 & 1.32\\
WS$_2$/WTe$_2$ & 3.33 & 4.20 & 3.21 & -0.31 & 1.06\\
MoSe$_2$/WTe$_2$ & 3.41 & 2.63 & 3.24 & -0.31 & 0.63\\
WSe$_2$/WTe$_2$ & 3.40 & 2.35 & 3.24 & -0.32 & 0.49\\
MoTe$_2$/WTe$_2$ & 3.52 & 0.85 & 3.39 & -0.33 & 0.26\\
\hline
\hline
\end{tabular}
\end{center}\label{Tab. S1}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=8.6cm]{FigS2.pdf}
\caption{The WS$_2$/WTe$_2$ heterobilayer band structure without SOC. The illustration depicts the type-\uppercase\expandafter{\romannumeral2} band alignment, where holes (electrons) at the two valleys reside in the lower (upper) layer.}
\label{Fig. S2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=17.3cm]{FigS3.pdf}
\caption{(a)-(e) The spin-resolved band structures for a series of potential TMD heterobilayers in the presence of SOC. Red (blue) color denotes the spin-up (spin-down) component.}
\label{Fig. S3}
\end{figure}
Fig.~\ref{Fig. S2} shows that holes (electrons) at the two valleys are predominantly localized in individual layers, exhibiting type-\uppercase\expandafter{\romannumeral2} band-alignment features. This separation of electrons and holes allows them to bind into interlayer excitons. Here, without loss of generality, we started from a series of potential TMD heterobilayers MX$_2$/WTe$_2$ (M=Mo, W; X=S, Se), listed in Tab.~\ref{Tab. S1}. These candidates were chosen on the basis of the work-function differences between the upper and lower layers. As illustrated in Fig.~\ref{Fig. S3}, the calculated results confirm that the band gaps of the heterobilayers decrease with increasing work-function difference. To obtain band inversion in the presence of SOC, a large work-function difference between the two layers is preferred.
\begin{figure}[H]
\centering
\includegraphics[width=15cm]{FigS4.pdf}
\caption{(a)-(c) The enlarged views of spin (left panel) and orbit (right panel) components at the $K$ valley for different high-symmetry stacking order. The component of W-$d_{z^{2}}$ orbital of upper WS$_2$ layer (W-$d_{xy}$\&$d_{x^{2}-{y^{2}}}$ orbital of lower WTe$_2$ layer) is proportional to the width of the purple (orange) curve. (d)-(f) Corresponding semi-infinite local density of states (LDOS).}
\label{Fig. S4}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=15cm]{FigS5.pdf}
\caption{For $H_X^M$ stacking, the band structure evolution as external E-field strength at $K$/$K'$ valleys. From left to right panel, the external E-field strength is set to 0, -0.03,-0.09 and -0.1 V/\AA, respectively. Red (blue) color denotes the spin-up (spin-down) component.}
\label{Fig. S5}
\end{figure}
\section{Implementation of the Floquet theorem in Wannier-function-based tight-binding (WFTB) model}
Starting from the real-space Wannier tight-binding (TB) Hamiltonian
\begin{equation}\label{Eq.9}
H(\mathbf{r})=\sum_{mn}\sum_{j}t_{j}^{mn}C_{m}^{\dag}(\mathbf{R}_{j})C_{n}(\mathbf{R}_{0})+h.c.,
\end{equation}
where $t_{j}^{mn}$ is the hopping parameter from orbital $n$ located at $\mathbf{R}_{0}$ to orbital $m$ located at $\mathbf{R}_{j}$, and $C_m^{\dag}(\mathbf{R}_{j})$ ($C_n(\mathbf{R}_{0})$) creates (annihilates) an electron on site $\mathbf{R}_{j}$ ($\mathbf{R}_{0}$). We consider this system under the irradiation of an external time-periodic CPL with vector potential $\mathbf{A}(t)=\mathbf{A}(t+T)= A[\cos (\omega t ), \eta \sin (\omega t ), 0]$. The time-dependent hopping parameter $t_j^{mn}(t)$ is obtained by introducing the Peierls substitution \cite {SMPhysRevA.27.72,SMPhysRevLett.110.200403}
\begin{equation}\label{Eq.10}
t_j^{mn}(t) =t_j^{mn}e^{i\frac{e}{\hbar}\mathbf{A}(t)\cdot \mathbf{d}_{mn}},
\end{equation}
where $\mathbf{d}_{mn}$ is the related position vector between two Wannier orbitals. This time-dependent tight-binding Hamiltonian can be written as
\begin{equation}\label{Eq.11}
H(\mathbf{k},t)=\sum_{mn}\sum_{j}t_j^{mn}(t)e^{i\mathbf{k}\cdot\mathbf{R}_j}C_{m}^{\dag}(\mathbf{k}, t)C_{n}(\mathbf{k}, t)+h.c.,
\end{equation}
Taking into account the lattice and time translation invariance, the time-dependent Hamiltonian can be treated effectively with the Floquet theorem by performing a dual Fourier transformation in space and time. Here, the Floquet-Bloch creation (annihilation) operators can be expressed as
\begin{equation}\label{Eq.12}
\begin{split}
C_{m}^{\dag}(\mathbf{k}, t)&=\sum_{j}\sum_{\alpha=-\infty}^{\infty} C_{\alpha m}^{\dag}(\mathbf{R}_{j})e^{+i\mathbf{k}\cdot \mathbf{R}_{j} - i\alpha \omega t}, \\
C_{m}(\mathbf{k}, t)&=\sum_{j}\sum_{\alpha=-\infty}^{\infty} C_{\alpha m}(\mathbf{R}_{j})e^{-i\mathbf{k}\cdot \mathbf{R}_{j} + i\alpha \omega t}.
\end{split}
\end{equation}
Plugging Eq. \ref{Eq.12} into Eq. \ref{Eq.11}, we can obtain an effective static Hamiltonian in the frequency and momentum space
\begin{equation}\label{Eq.13}
H({\mathbf{k}}, \omega)=\sum_{m, n}\sum_{\alpha, \beta}[H_{mn}^{\alpha-\beta}({\mathbf{k}}, \omega)
+\alpha \hbar \omega \delta_{mn}\delta_{\alpha \beta}]C_{\alpha m}^{\dag}(\mathbf{k})C_{\beta n}(\mathbf{k})+h.c,
\end{equation}
where $\hbar \omega$ is the photon energy, $\alpha$ and $\beta$ are Floquet indices ranging from $-\infty$ to $+\infty$, and the matrix element $H_{mn}^{\alpha-\beta}({\mathbf{k}}, \omega)$ is
\begin{equation}\label{Eq.14}
H_{mn}^{\alpha-\beta}({\mathbf{k}}, \omega)=\sum_{j} e^{i\mathbf{k} \cdot \mathbf{R}_j}\bigg(\frac{1}{T}\int_{0}^{T}t_j^{mn}\times e^{i\frac{e}{\hbar}\mathbf{A}(t)\cdot \mathbf{d}_{mn}}e^{i(\alpha-\beta)\omega t}dt \bigg).
\end{equation}
We give $\mathbf{A}(t)$ and $\mathbf{d}_{mn}$ as the following general forms
\begin{equation}\label{Eq.15}
\mathbf{A}(t) = [A_{x}\sin{(\omega t + {\varphi_1})},A_{y}\sin{(\omega t + {\varphi_2})},A_{z}\sin{(\omega t + {\varphi_3})}],
\end{equation}
\begin{equation}\label{Eq.16}
{\mathbf{d}_{mn}} = (d_{x},d_{y},d_{z}),
\end{equation}
Then the matrix element ${H_{mn}^q} (\mathbf{k}, \omega)$ ($q=\alpha-\beta$) can be rewritten as
\begin{equation}\label{Eq.17}
H_{mn}^q ({\mathbf{k}}, \omega)=\sum_{j} t_j^{mn}e^{i\mathbf{k}\cdot \mathbf{R}_j}\cdot J_{q}(\frac{e}{\hbar}A_{max}),
\end{equation}
in which $J_{q}$ is the $q$-th Bessel function
\begin{equation}\label{Eq.18}
J_{q}(\frac{e}{\hbar}A_{max}) = e^{-iq \varphi} \frac{1}{T} \int_{0}^{T} e^{i[\frac{e}{\hbar}A_{max}\sin{(\omega t + \varphi)]}}e^{iq (\omega t+\varphi)}dt ,
\end{equation}
with
\begin{gather}\label{Eq.19}
A_{max} = \sqrt{(A_{x}d_{x}\sin{\varphi _1} + A_{y}d_{y}\sin{\varphi _2}+A_{z}d_{z}\sin{\varphi _3})^2 + (A_{x}d_{x}\cos{\varphi _1} + A_{y}d_{y}\cos{\varphi _2}+A_{z}d_{z}\cos{\varphi _3})^2}, \nonumber \\
\varphi = \arctan {(\frac{A_{x}d_{x}\sin{\varphi _1} + A_{y}d_{y}\sin{\varphi _2}+A_{z}d_{z}\sin{\varphi _3}} {A_{x}d_{x}\cos{\varphi _1} + A_{y}d_{y}\cos{\varphi _2}+A_{z}d_{z}\cos{\varphi _3}})}.
\end{gather}
The Floquet-Bloch Hamiltonian takes the block-matrix form
\begin{equation}\label{Eq.20}
H({\mathbf{k}}, \omega)=
\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \begin{sideways}$\ddots$\end{sideways} \\
\cdots & H_{0}-\hbar \omega & H_{-1} & H_{-2} & \cdots \\
\cdots & H_{1} & H_{0} & H_{-1} & \cdots \\
\cdots & H_{2} & H_{1} &H_{0}+\hbar \omega & \cdots \\
\begin{sideways}$\ddots$\end{sideways} & \vdots & \vdots & \vdots & \ddots \\
\end{pmatrix}
\end{equation}
As shown in Eq.~(\ref{Eq.20}), the infinite set of Floquet subbands can be truncated at first order $(\left| \alpha - \beta \right|=0,1)$, which is sufficient to obtain the desired convergence, as shown in Fig.~\ref{Fig. S6}. The photon energy $\hbar \omega = 15$ eV is chosen to be larger than the bandwidth so that the overlap between different Floquet subbands is negligible. As a result, the Floquet bands near the Fermi energy are determined by the leading-order block $H_0$.
\begin{figure}[H]
\centering
\includegraphics[width=8.6cm]{FigS6.pdf}
\caption{The comparison of Floquet band structures among different truncated orders q=0, 1, 2. We set $eA/\hbar=0.075\rm{\AA^{-1}}$.}
\label{Fig. S6}
\end{figure}
\section{Optically switchable topological properties dependent on handedness}
Herein, we give the evolution of the topological phases under irradiation by left- and right-handed CPL.
As shown in Figs.~\ref{Fig. S7}(a)-(d), with increasing intensity of left-handed CPL, the band gap of the spin-up states at the $K$ valley first closes and then reopens; that is, only the spin-down states of the $K'$ valley preserve the inverted band topology, so a VQSH-to-VQAH topological phase transition occurs at the $K$ valley. Conversely, as shown in Figs.~\ref{Fig. S7}(e)-(h), the corresponding phase transition at the $K'$ valley can be obtained with right-handed CPL. To further verify the light-driven topological phases, we calculate the evolution of the Wannier charge centers (WCCs) and the Berry curvature under different helicities of CPL, shown in Figs.~\ref{Fig. S8} and \ref{Fig. S9}, which indicates that the helicity of the CPL can effectively control this topological spin-valley filter.
\begin{figure}[H]
\centering
\includegraphics[width=17.3cm]{FigS7.pdf}
\caption{(a)-(d) The evolution of spin-resolved band structures of $H_X^M$ stacking WS$_2$/WTe$_2$ heterobilayer around the $K$ and $K'$ valleys under left-handed CPL with a light intensity $eA/\hbar$ of 0, 0.045, 0.051, and 0.053 {\AA}$^{-1}$. (e)-(h) Likewise under right-handed CPL. The red and blue colors indicate the $z$-component of spin-up and spin-down states, respectively. A perpendicular electric field is fixed at -0.02 V/\AA.}
\label{Fig. S7}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=13cm]{FigS8.pdf}
\caption{Evolution of Wannier charge centers (WCCs) as a function of $k_y$ (a) without light irradiation and (b) with left-handed or (c) right-handed CPL irradiation ($eA/\hbar=0.053\rm{\AA^{-1}}$).}
\label{Fig. S8}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=17.3cm]{FigS9.pdf}
\caption{The distribution of the Berry curvature $\Omega_z(\mathbf{k})$ for (a) $\mathcal{T}$-broken (or invariant) VQSH state, (b) VQAH state of $K'$ valley driven by left-handed CPL, and (c) VQAH state of $K$ valley driven by right-handed CPL in the $k_x-k_y$ plane, respectively. The hexagonal Brilllouin zone is marked by dashed lines.}
\label{Fig. S9}
\end{figure}
\section{Introduction}
Experiments have shown that shocks in granular media can become unstable. One example is the instability observed in vertically oscillated granular beds \cite{Bizon1998}; another is the jet formation observed in explosively dispersed granular media \cite{Frost2012}.
Previous studies have addressed the unique structure of shock waves through granular media, although instabilities had not been identified \cite{Goldshteinetalch31996, Kamenetsky_etal2000}. These structures were identified by studying the problem of piston-propagated shock waves. In this picture, a piston drives a shock wave through a granular medium, causing the granular temperature to increase. Due to the inelasticity and the increased rate of collisions within this ``fluidized'' region, the granular temperature then decreases and the density increases. Eventually, the density becomes high enough that collisions subside, characterizing the ``frozen'' equilibrium region.
The structure of granular shock waves is similar to that of strong shock waves through molecular gases that undergo strong relaxation effects \cite{Zeldovich&Raizer1966}; the difference lies in the equilibrium region, where a fraction of the translational kinetic energy is conserved. Experiments have shown that such shock waves can become unstable due to endothermic reactions (e.g., ionization \cite{Grunetal1991, Griffithetal1976} and dissociation \cite{Griffithetal1976}). Although there is significant experimental evidence of instability in relaxing shock waves, the mechanism controlling the instability is not well understood.
Shock waves through granular media and molecular gases undergoing endothermic reactions share a common feature, where the shock is characterized by dissipative collisions \textit{within} the shock structure. Therefore, investigating the controlling mechanism of instability in granular gases may have direct bearing on the understanding of instabilities seen in molecular gases.
Previously, we have shown shock instability in a two-dimensional system of inelastic disks, with the disk collisions modelled deterministically \cite{RadulescuSirmas2011}. Introducing an activation energy above which disks collide inelastically yields the formation of high-density non-uniformities and convective rolls within the relaxing region of the shock structure. By studying the time evolution of the material undergoing shock compression and subsequent relaxation, we find that the granular gas develops the instability on the same time scales as the clustering instability in homogeneous granular gases, confirming that the clustering instability is the dominant mechanism controlling the instability in our model \cite{SirmasTBD}.
Previous studies investigating Faraday instabilities in granular media have shown a similar jetting instability \cite{Carrilo2008}. These studies were performed both with the same molecular dynamics model \textit{and} with continuum modelling of the granular Navier-Stokes equations. The two approaches are in good agreement, both reproducing the Faraday instability.
Thus, we pose the following question pertaining to our investigation: \textit{Can we observe a similar shock instability in granular gases at the continuum level?} Specifically, we wish to examine whether the instability can be seen using a simple hydrodynamic description of granular media, absent of higher order viscous effects.
For a granular system with a constant coefficient of restitution, the hydrodynamic descriptions are well understood \cite{Brilliantov&Poschel2004}. However, in our study we introduce an activation energy necessary for inelastic collisions. This requires modifications to the general transport coefficients, using methods from traditional kinetic theory of monatomic gases.
This paper is organized as follows. First, we describe the model that is used to study the stability of shock waves through granular gases, with the presence of an activation energy. Secondly, we describe the numerical methods we use for the molecular and continuum models. Specifics are outlined for the modifications made to the governing equations. Finally, the shock structures from both methods are compared.
\section{Problem Overview}
The medium we investigate is a system of colliding hard disks. Each binary collision is elastic, unless an \textit{activation} threshold is met. Quantitatively, collisions are inelastic if the impact velocity (normal relative velocity) exceeds a velocity threshold $u^*$, a classical activation formalism in chemical kinetics. If the collision is inelastic, the disks collide with a constant coefficient of restitution $\varepsilon$.
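For concreteness, the activated collision rule for two equal-mass smooth disks can be sketched as follows (function and variable names are ours for illustration, not taken from the simulation code used in this work):

```python
def collide(v1, v2, n, u_star, eps):
    """Binary collision of two equal-mass smooth disks.

    v1, v2 : pre-collision velocities (vx, vy)
    n      : unit normal along the line of centers, from disk 1 to disk 2
    The normal relative velocity is reduced by the restitution coefficient
    eps only when the impact velocity exceeds the activation threshold
    u_star; otherwise the collision is elastic.
    """
    g_n = (v1[0] - v2[0]) * n[0] + (v1[1] - v2[1]) * n[1]  # impact velocity
    e = eps if abs(g_n) > u_star else 1.0                  # activated dissipation
    j = 0.5 * (1.0 + e) * g_n                              # impulse per unit mass
    return ((v1[0] - j * n[0], v1[1] - j * n[1]),
            (v2[0] + j * n[0], v2[1] + j * n[1]))

# Head-on collision above threshold: the normal relative velocity is
# reversed and scaled by eps
v1p, v2p = collide((1.0, 0.0), (-1.0, 0.0), (1.0, 0.0), u_star=0.5, eps=0.95)
```

Momentum is conserved in either branch, while kinetic energy is dissipated only in the activated (inelastic) branch.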
The system we study is a classical shock propagation problem, whereby the motion of a suddenly accelerated piston driven into a thermalized medium drives a strong shock wave. The driving piston is initially at rest and suddenly acquires a constant velocity $u_p$. This model allows for the dissipation of the non-equilibrium energy accumulated within the shock structure, which terminates once the collision amplitudes fall back below the activation threshold. In this manner, the activation threshold also acts as a tunable parameter to control the equilibrium temperature in the post shock media.
\section{Numerical Models}
\subsection{Molecular Dynamics Model}
The MD simulations reconstruct the dynamics of smooth inelastic disks using the Event Driven Molecular Dynamics technique \cite{Alder&Wainright1959}. We use the implementation of P\"{o}schel and Schwager \cite{Poschel&Schwager2005}, which we extended to treat a moving wall. The particles were initialized with equal speeds and random directions. The system was allowed to thermalize and attain Maxwell-Boltzmann statistics. Once thermalized, the piston started moving with constant speed.
The initial packing fraction of the disks was chosen to be $\eta_1=N\pi d^2/(4A)=0.012$, where $N\pi d^2/4$ is the area occupied by $N$ hard disks of diameter $d$ and $A=L_x\times L_y$ is the domain area; the initial gas is thus in the ideal-gas regime \cite{Sirmasetal2012}.
All distances and velocities are normalized by the diameter $d$ of the disks and the initial root-mean-square velocity $u_{rms_o}$, respectively, which fixes the time scale $\overline{t}=t/(d/u_{rms_o})$. The initial domain size was $A=4550d\times455d$, occupied by 30,000 hard disks.
\subsection{Continuum Model}
In the continuum model, we consider a two-dimensional granular gas, by modelling a system of smooth inelastic disks. For such a system, the Euler hydrodynamic equations for mass, momentum and energy take the form:
\begin{eqnarray}
\frac{\partial \rho}{\partial t}+\vec{\nabla}\cdot\left(\rho\vec{u}\right)&=&0\notag\\
\frac{\partial \rho\vec{u}}{\partial t}+\vec{\nabla}\cdot\left(\rho\vec{u}\vec{u}\right)&=&-\vec{\nabla}p\\
\frac{\partial E}{\partial t}+\vec{\nabla}\cdot\left(\vec{u}(E+p)\right)&=&\zeta\notag
\end{eqnarray}
where $E=\rho T+\frac{1}{2}\rho \vec{u}^2$ is the total energy density, in terms of density $\rho$, granular temperature $T$ and velocity $\vec{u}$.
For granular systems, the hydrostatic pressure $p$ can be approximated as:
\begin{equation}\label{EOS1}
p=\rho e\left[ 1+(1+\varepsilon)\eta g_2(\eta)\right]
\end{equation}
where $e=T$ is the internal energy in 2D and $g_2(\eta)=\left(1-\frac{7}{16}\eta\right)/(1-\eta)^2$ is the pair correlation function for a system of hard disks \cite{Torquato1995}.
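A minimal sketch of the equation of state \eqref{EOS1}, with the hard-disk pair correlation function of \cite{Torquato1995}, $g_2(\eta)=(1-\frac{7}{16}\eta)/(1-\eta)^2$, might read as follows (function names are illustrative):

```python
def g2(eta):
    """Pair correlation function at contact for hard disks (Torquato)."""
    return (1.0 - 7.0 * eta / 16.0) / (1.0 - eta) ** 2

def pressure(rho, T, eta, eps):
    """Granular hydrostatic pressure, Eq. (EOS1), with e = T in 2D."""
    return rho * T * (1.0 + (1.0 + eps) * eta * g2(eta))
```

In the dilute limit $\eta\to0$ this reduces to the ideal-gas law $p=\rho T$, and the excluded-volume correction raises the pressure at finite packing fraction.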
The cooling coefficient $\zeta$ for constant $\varepsilon$ may be written as:
\begin{equation}\label{zeta_nocorrect}
\zeta=-\frac{4}{d\sqrt{\pi}}\left(1-\varepsilon^2\right)\rho T^{3/2}\eta g_2(\eta)
\end{equation}
\subsubsection{Modification of Cooling Rate}
In our model, inelastic collisions occur in only a fraction of all collisions, making the cooling rate \eqref{zeta_nocorrect} invalid. Therefore, the cooling rate must be modified to account solely for energy losses from activated inelastic collisions.
To adjust the cooling rate, one must examine the energy involved in collisions.
To do so, we first look at the rate of binary collisions per unit volume \cite{Vincenti&Kruger1975}:
\begin{equation}\label{distribution}
n^2 d \frac{m}{2 kT}\exp\left\{-\frac{m g^2}{4kT}\right\}g^2 \cos \psi \,dg\, d\psi
\end{equation}
This term gives the rate of binary collisions of a system of disks of mass $m$ with a number density $n=N/A$ that have a relative speed in the range of $g$ to $g+dg$, with an angle between the relative velocity and the line of action in the range of $\psi$ to $\psi+d\psi$. Along the line of action, the relative velocity is $u_n=g\cos\psi$. Multiplying \eqref{distribution} by $u_n^2=(g\cos\psi)^2$, and integrating over a range of $u_n$, one recovers the energy along the line of action for collisions with impact velocities within this range. Integrating with $u_n$ from 0 to $\infty$ recovers the energy directly applicable to \eqref{zeta_nocorrect}. Integrating $u_n$ from $u^*$ to $\infty$ yields the energy seen along the line of action for impact velocities exceeding $u^*$.
To modify the cooling rate \eqref{zeta_nocorrect} (which treats all collisions as inelastic), we calculate the ratio of the average energy involved in activated collisions to the energy of all collisions. This ratio is:
\begin{eqnarray}
\frac{\langle{u_{n}^2}\rangle(u_n>u^*)}{\langle{u_{n}^2}\rangle(u_n>0)}=\exp\left\{-\frac{m {u^*}^2}{4kT}\right\}\left(1+\frac{m {u^*}^2}{4kT}\right)
\end{eqnarray}
Since $\frac{1}{2}{u^*}^2/u_{rms}^2=E_a/T$ for disks of equal mass \cite{Vincenti&Kruger1975}, multiplying the cooling rate \eqref{zeta_nocorrect} by this ratio yields the following modified cooling rate:
\begin{equation}
\zeta^*=-\frac{4}{d\sqrt{\pi}}\left(1-\varepsilon^2\right)\rho T^{3/2}\eta g_2(\eta)\exp \left\{ -\frac{E_a}{T}\right\}\left(1+\frac{E_a}{T}\right)
\end{equation}
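The energy fraction carried by impacts above threshold can be checked numerically. Note that, as a fraction, it must lie between 0 and 1, which requires a negative argument in the exponential: with $a=m/(4kT)$, the flux-weighted normal-velocity distribution gives $\int_{u^*}^{\infty}u^3e^{-au^2}\,du\,/\int_{0}^{\infty}u^3e^{-au^2}\,du=e^{-a{u^*}^2}(1+a{u^*}^2)$. The sketch below (names illustrative) verifies this by brute-force quadrature:

```python
import math

def energy_fraction_numeric(a, u_star, u_max=50.0, n=200_000):
    """Midpoint-rule quadrature of u^3 exp(-a u^2) (flux-weighted
    normal-velocity energy): fraction above u_star over the total."""
    du = u_max / n
    num = tot = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        w = u**3 * math.exp(-a * u * u) * du
        tot += w
        if u > u_star:
            num += w
    return num / tot

def energy_fraction_closed(a, u_star):
    """Closed form used in the modified cooling rate:
    exp(-a u*^2) (1 + a u*^2), with a = m/(4kT)."""
    x = a * u_star**2
    return math.exp(-x) * (1.0 + x)
```

For any $a>0$ the closed form stays below unity and decays as the threshold grows, as a fraction of the total collisional energy must.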
This cooling rate is validated by comparison against the evolution of granular temperature obtained via MD for a homogeneously cooling granular gas with an activation threshold.
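The homogeneous-cooling comparison can be sketched as follows. In a uniform, quiescent state the energy equation reduces to $\rho\,dT/dt=\zeta^*$; lumping all prefactors into a constant $C$ (an assumption made purely for illustration) gives $dT/dt=-C\,T^{3/2}e^{-E_a/T}(1+E_a/T)$, integrated here with forward Euler:

```python
import math

def cool(T0, Ea, C=1.0, dt=1e-3, steps=20_000):
    """Forward-Euler integration of homogeneous cooling with activation:
    dT/dt = -C T^{3/2} exp(-Ea/T) (1 + Ea/T), constants lumped into C."""
    T = T0
    for _ in range(steps):
        T += dt * (-C * T**1.5 * math.exp(-Ea / T) * (1.0 + Ea / T))
    return T

# Without activation the gas cools in a Haff-like fashion; with a
# threshold, dissipation shuts off as T drops toward Ea:
T_free = cool(T0=10.0, Ea=0.0)
T_act = cool(T0=10.0, Ea=5.0)
```

The qualitative behaviour matches the role of the threshold described in the text: the activated gas settles toward a finite equilibrium temperature instead of cooling indefinitely.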
\subsubsection{Details of Hydrodynamic Solver}
The software package \textit{MG} is used to solve the governing equations of the described system. The \textit{MG} package utilizes a second order Godunov solver with adaptive mesh refinement.
The hydrodynamics of the granular gas are solved by investigating the flow in the piston frame of reference. A reflective wall boundary condition is implemented on the left (piston face), with flow travelling towards the wall at constant velocity $u_p$. The upper and lower boundaries have reflective wall boundary conditions, and the right boundary has a free boundary condition. In order to compare with the MD results, a domain height of $455d$ was used. A resolution of $\Delta x=d$ was found to be sufficient for the model.
In order to investigate the unstable shock structure, the initial and incoming flow densities are perturbed. Random density perturbations with a variance of $10\%$ are applied in patches of area $11.375d\times 11.375d$ (40 patches high in the current domain). The patch size is chosen so as not to impose a preferred instability frequency, while remaining large enough to avoid numerical diffusion of the perturbations. The perturbation amplitude is chosen to match the density fluctuations of the MD model at equal dimensions and packing fraction.
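One way to realize such a perturbation field is sketched below. Each coarse patch of edge $455d/40=11.375d$ receives one random density factor, which every hydrodynamic cell ($\Delta x=d$) inside the patch inherits; the grid sizes and the interpretation of the $10\%$ amplitude as a relative standard deviation are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hydro grid: dx = d, domain 4550d x 455d as in the MD setup.
nx, ny = 4550, 455
cell = 455 / 40.0                 # perturbed-patch edge: 11.375 d, 40 patches high
sigma = 0.10                      # relative perturbation amplitude (assumed std dev)

# One random factor per coarse patch; each hydro cell inherits the factor
# of the patch it falls in, so patches are uniform blocks of ~11.375d.
fac = 1.0 + sigma * rng.standard_normal((40, int(np.ceil(nx / cell))))
iy = (np.arange(ny) / cell).astype(int)
ix = (np.arange(nx) / cell).astype(int)
rho = 1.0 * fac[np.ix_(iy, ix)]
```

Index arithmetic (rather than an integer `repeat`) handles the non-integer patch-to-cell ratio of 11.375.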
\section{Results}
Figure \ref{fig:compareMDHD2D} shows the evolution of shock morphology obtained via the MD and continuum models for $u_p=20$, $u^*=10$, and $\varepsilon=0.95$. Both models exhibit an irregular shock structure with high-density non-uniformities extending ahead of the piston. The continuum model shows a higher-frequency instability, evidenced by the more closely spaced bumps.
\begin{figure}[h]
\captionsetup[subfigure]{labelformat=empty, oneside,margin={-1cm,0cm}}
\subfloat{\includegraphics[width=0.19\linewidth]{./Figures/2DMD/cropevent000025.jpg}}
\subfloat{\includegraphics[width=0.19\linewidth]{./Figures/2DMD/cropevent000125.jpg}}
\subfloat{\includegraphics[width=0.19\linewidth]{./Figures/2DMD/cropevent000225.jpg}}
\subfloat{\includegraphics[width=0.19\linewidth]{./Figures/2DMD/cropevent000325.jpg}}
\subfloat{\includegraphics[width=0.19\linewidth]{./Figures/2DMD/cropevent000425.jpg}}\\
\vspace*{-5pt}\hspace*{-4pt}
\subfloat[$\overline{t}=45.5$]{\includegraphics[width=0.25\linewidth]{./Figures/2DMD/densityplot0002}}\hspace*{-28.5pt}
\subfloat[$\overline{t}=91.0$]{\includegraphics[width=0.25\linewidth]{./Figures/2DMD/densityplot0004}}\hspace*{-28.5pt}
\subfloat[$\overline{t}=136.5$]{\includegraphics[width=0.25\linewidth]{./Figures/2DMD/densityplot0006}}\hspace*{-28.5pt}
\subfloat[$\overline{t}=182.0$]{\includegraphics[width=0.25\linewidth]{./Figures/2DMD/densityplot0008}}\hspace*{-28.5pt}
\subfloat[$\overline{t}=227.5$]{\includegraphics[width=0.25\linewidth]{./Figures/2DMD/densityplot0010}}\\
\caption{Comparison of the evolution of shock morphology from MD (top) and continuum (bottom) models for $u_p=20$, $u^*=10$ and $\varepsilon=0.95$.}
\label{fig:compareMDHD2D}
\end{figure}
\begin{figure}[h]
\centering
\captionsetup[subfigure]{labelformat=empty, oneside,margin={1cm,.5cm}}
\subfloat{\includegraphics[width=0.3\linewidth]{./Figures/new/tempplot0025-eps-converted-to.pdf}}\hspace*{-26.2pt}
\subfloat{\includegraphics[width=0.3\linewidth]{./Figures/new/nolab_temp_plot0050-eps-converted-to.pdf}}\hspace*{-26.2pt}
\subfloat{\includegraphics[width=0.3\linewidth]{./Figures/new/nolab_temp_plot0125-eps-converted-to.pdf}}\hspace*{-26.2pt}
\subfloat{\includegraphics[width=0.3\linewidth]{./Figures/new/nolab_temp_plot0251-eps-converted-to.pdf}}\
\subfloat[$\overline{t}=11.15$]{\includegraphics[width=0.3\linewidth]{./Figures/new/densityplot0025-eps-converted-to.pdf}}\hspace*{-25.2pt}
\subfloat[$\overline{t}=22.3$]{\includegraphics[width=0.3\linewidth]{./Figures/new/nolab_densityplot0050-eps-converted-to.pdf}}\hspace*{-25.2pt}
\subfloat[$\overline{t}=55.75$]{\includegraphics[width=0.3\linewidth]{./Figures/new/nolab_densityplot0125-eps-converted-to.pdf}}\hspace*{-25.2pt}
\subfloat[$\overline{t}=111.5$]{\includegraphics[width=0.3\linewidth]{./Figures/new/nolab_densityplot0251-eps-converted-to.pdf}}
\caption{Evolution of one-dimensional temperature (top) and density (bottom) distributions, comparing MD (dashed) and continuum (solid) models for $u_p=20$, $u^*=10$ and $\varepsilon=0.95$.}
\label{fig:distributions}
\end{figure}
Figure \ref{fig:distributions} shows the evolution of the macroscopic 1D distributions of granular temperature and density for the case shown in Figure \ref{fig:compareMDHD2D}. The MD results are obtained by ensemble and coarse-grain averaging over 50 simulations with statistically different initial conditions. The granular temperature $u_{rms}^2$ is scaled by the activation energy $u_{max}^2$, a scaling previously found to yield identical distributions for equal values of $u_p/u_{max}$ and $\varepsilon$ \cite{SirmasTBD}.
The temperature and density distributions from the MD and continuum models show good agreement with regard to shock structure. Owing to the lack of viscous effects in the continuum model, the shock front appears as a sharp discontinuity in the temperature and density distributions. The locations of the peak temperature are similar for the two models, demonstrating that the shock waves travel at the same velocity.
\section{Conclusions}
In this study, we have demonstrated that shock instability through granular gases can be seen at the discrete and continuum levels. Although we do not model viscous effects in the continuum model, instability is still present, albeit at a higher frequency than that seen in the MD simulations. These results may shed light on the role higher order effects have on the overall mechanism controlling shock instabilities in dissipative gases, which will be a subject of further investigation.
\bibliographystyle{iopart-num}
\section{Introduction}\label{intro}
In the past decade our picture of the Milky Way's stellar halo has dramatically
changed thanks to the advent of several observational surveys, which have
shown the richness and complexity of the substructure in the Galactic halo
\citep{ibat01,newb02,maj03,yan03,martin04CMa,grilldion07b,belok06,belok07orphan,
belok07quintet}. Our Galaxy is still undergoing an assembling process, where part
of the infalling material has already been accreted and become dynamically relaxed
\citep{helmi99,sheff12}, part of it is still dynamically cold \citep{bell08,juric08}
and another part is in the process of being dynamically stripped or even
approaching its first dynamical encounter with our Galaxy \citep{kall06feb,
kall06dec,piatek08,besla10,rocha12}.
The most prominent example of a currently ongoing disruption is that of the
Sagittarius stream (Sgr stream). Since its discovery in 1996 \citep{mat96},
the stream has been mapped wrapping over $\pi$ radians
on the sky, first through 2MASS \citep{maj03} and later through
SDSS \citep{belok06,kopos12}. There is general agreement that it is the stellar
debris of a disrupting satellite galaxy, the Sagittarius dwarf galaxy \citep{ibat94},
which is currently being accreted by the Milky Way \citep{velaz95,ibat97,no10}.
The stream is composed of the leading and the trailing tails of this disruption
event \citep{mat96,ibat01,dp01,md01,md04,maj03,belok06, belok13}, which wrap at least
once around the Galaxy but have been predicted to wrap more than once
\citep{penarrub10,law10}. In addition, a bifurcation and what resembles an extra
branch parallel to the main component of the Sgr stream have been discovered both
in the northern hemisphere \citep{belok06} and in the southern hemisphere
\citep{kopos12}. The origin of this bifurcation and the meaning of the two
branches are still debated: they could represent wraps of different age
\citep{fellh06}, they could have arisen due to the internal dynamics of the progenitor
\citep{penarrub10,penarrub11} or they could indeed be due to different
progenitors and a multiple accretion event \citep{kopos12}.
On the other hand, one of the simplest and neatest examples of a disrupting
satellite is that of the Palomar 5 globular cluster \citep{sand77,oden02,dehnen04}
and its stream \citep{oden01,oden03}. This stream extends over $20^{\mathrm{o}}$
along its narrow leading and trailing tails. It displays an inhomogeneous stellar
density in what resembles gaps or underdensities \citep{grilldion06pal5};
the origin of this stellar distribution has been attributed both to interactions
with dark satellites \citep{carlb12} and to epicyclic motions of stars along the
tails \citep{mb12}.
Finally, there are also cases of streams with unknown progenitors, such as the
so-called Orphan stream \citep{grill06orphan,belok06,belok07orphan,newb10orphan}.
This stream extends for $\sim50^{\mathrm{o}}$ in the North Galactic cap, and the
chemical signatures from recent spectroscopic observations associate its
progenitor with a dwarf galaxy \citep{casey13I,casey13II}. A number of plausible
progenitors have been suggested \citep{zucker06,fellh07,jin07,sales08}, but it
is still possible that the true progenitor remains undiscovered in the southern
hemisphere \citep{casey13I}.
In general, the discovery of most of the substructures in the halo of the
Milky Way has been possible thanks to photometric multi-colour wide area surveys.
Such surveys pose several advantages for this kind of search. First, their
multiple-band photometry allows for stellar population selections (halo or
thick disk; red clump, main sequence turnoff point, etc.) based on colour-colour
stellar loci. These selection criteria can be used to make stellar density maps
that track the streams all through the survey's coverage area \citep{maj03,
belok06}. Second, their continuous coverage of a large area allows the fields
adjacent to the substructure to act as control fields. In this way, the
colour-magnitude diagrams (CMDs) of the control fields can be used to statistically
subtract the foreground and the background stars from the fields probing the
substructure. This enhances the signature of the stellar population belonging
to the stream or satellite (by removing the noise), and makes it possible to
identify age and distance indicators such as the red clump or the main sequence
turnoff point \citep{belok06,kopos12,slater13}.
In this paper we explore the possibilities of using deep two-band
pencil-beam surveys instead of the usual wide-area multi-colour surveys in order
to detect and characterize stellar streams of the halo and, in particular, we revisit
the Sagittarius, the Palomar 5 and the Orphan streams. We derive photometric distances
using purely the main sequence turnoff point and --unlike other works-- regardless of
the giant branch and its red clump.
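The turnoff-based distance follows directly from the distance modulus, $m-M=5\log_{10}(d/10\,\mathrm{pc})+A$. A minimal sketch (the absolute turnoff magnitude $M\approx4$ used here is an illustrative value for an old, metal-poor population, not a number from this work):

```python
def turnoff_distance_kpc(m_turnoff, M_turnoff=4.0, A=0.0):
    """Photometric distance from the apparent main sequence turnoff
    magnitude via the distance modulus m - M = 5 log10(d / 10 pc) + A.

    M_turnoff ~ 4 is an illustrative absolute turnoff magnitude for an
    old, metal-poor population; A is any residual extinction left after
    dereddening.
    """
    return 10.0 ** ((m_turnoff - M_turnoff - A + 5.0) / 5.0) / 1000.0
```

For example, an apparent turnoff at $m\approx19$ with $M=4$ corresponds to a heliocentric distance of 10 kpc.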
\section{Observations and data processing }\label{data}
\subsection{Description of data set}\label{subsec:dataset}
We use deep photometric imaging from the MENeaCS and the CCCP surveys
\citep{sand12meneacs,hoekstra12cccp,bildfell12combined} as well as several additional
archival cluster fields, observed with the CFHT-MegaCam instrument. These surveys
targeted pre-selected samples of galaxy clusters; the survey geometry therefore takes
the form of a pencil-beam survey whose pointings are distributed without prior
knowledge of the halo substructure (a blind survey).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{mapequat_differentiatedpointings_upon_SgrFromKop2012.eps}
\caption{Equatorial map showing the position of all the fields from our survey (white
hexagons) and highlighting the ones that lay on the Sagittarius stream
(green circles for the faint branch and green squares for the bright branch),
on the Palomar 5 stream and on the Orphan stream (blue diamonds).
The background image is the SDSS-DR8 map of the Sgr stream from \citet{kopos12}.
}
\label{fig:mapPointings}
\end{figure*}
Our pointings are one square degree wide and spread over the sky visible to CFHT. Each
consists of several exposures through the g' and r' filters with image quality of
sub-arcsecond seeing. After stacking the individual exposures, the limiting magnitudes
reach $\sim25.0$ at the $5.0\sigma$ level. Out of
the 97 fields, at least 25 fall on the structure of the Sagittarius (Sgr) stream
and show distinct signatures in their CMDs,
one on the Orphan stream,
one on the Palomar 5 stream and
three to seven on the Virgo Overdensity and the Virgo Stellar Stream \citep{duffau06,
juric08,casetti09,prior09,bonaca12virgo} (see figure~\ref{fig:mapPointings}).
Further away from the plane of the Sgr stream, we also find
three fields to be coincident with the Triangulum-Andromeda structure
\citep{rochapinto04,bonaca12triangulum},
two to three with the Pisces Overdensity \citep{watkins09,sesar10,sharma10},
one transitional between the Triangulum-Andromeda and the Pisces Overdensity,
four with the Anticenter Structure \citep{grillmair06anticenter} and
two to three with the NGC5466 stream \citep{grillmairjohnson06,fellhauer07ngc5466}.
We also find
two fields on the Lethe stream \citep{grillmair09new4treams},
four on the Styx stream \citep{grillmair09new4treams},
one on a region apparently common to the Styx and Cocytos streams
\citep{grillmair09new4treams} and
two on the Canis Major overdensity \citep{martin04CMa}.
In this paper we concentrate on the clearest structures (those where the
contrast-to-noise in the CMD is higher) in order to test the capabilities of our
method. In particular, we address the Sagittarius stream, the Palomar 5 stream and
the Orphan stream.
\subsection{Correction of the PSF distortion and implications for
the star/galaxy separation}\label{subsec:PSF}
Before building catalogues and in order to perform an accurate star/galaxy separation,
it is necessary to rectify the varying PSF across the fields of the CFHT images.
In order to correct for this effect, we make use of a `PSF-homogenizing' code
(K. Kuijken et al., in prep.). The code uses the shapes of bright objects
unambiguously classified as stars to map the PSF across the image, and then convolves
the image with an appropriate spatially variable kernel designed to render the PSF
Gaussian everywhere.
With a view to obtaining a PSF as homogeneous as possible, we treat the data as follows
\citep{vdBurg13}:
i) we implement an accurate selection of sufficiently bright stars based on an
initial catalogue,
ii) we run the code on the individual exposures for each field, and
iii) we reject bad exposures based on a seeing criterion \footnotemark[1] before stacking
them into one final image, on which we perform the final source extraction and photometry.
\footnotetext[1]{
The rejection of exposures derives from trying to optimize the image quality while
achieving the desired photometric depth. Thus our seeing criterion is a variable number
dependent on the field itself, the seeing distribution for individual exposures and the
individual plus total exposure time. In general it takes values $\lesssim0.9$.}
The advantages of this procedure are twofold. First, because the resulting PSF for each
exposure is gaussian, all the stars become round.
Second, because the PSF anisotropy is removed from all exposures before stacking, the
dispersion in size for the point-source objects becomes smaller, even if the average
value increases after stacking the individual exposures (see figure~\ref{fig:GaussedSize}).
These two improvements significantly reduce the galaxy contamination when performing
the star selection (illustrated in figure~\ref{fig:starsSelec}).
Additionally, homogenizing the PSF allows colours to be measured in fixed apertures.
\begin{figure}
\centering
\includegraphics[width=95mm]{gaussed_magsize_A119_g.eps}
\caption{Brightness versus size diagram of all the sources in one of our pointings.
The stellar locus prior to the PSF-homogenization (black) is wider and
therefore subject to greater galaxy contamination at the faint end than the
stellar locus posterior to the correction (blue) because the PSF initially
varies across the field.}
\label{fig:GaussedSize}
\end{figure}
From the final images, we extract the sources and produce photometric catalogues
using SExtractor \citep{bertinSExtractor}.
To derive the stellar catalogues, we use a code that filters the source catalogues
as follows:
i) finds the saturated stars and removes them from the stellar catalogue;
ii) evaluates the distribution of bright sources ($r'=[18.0,20.0]\ \mathrm{mag}$) in
the brightness-size parameter space, assumes a Gaussian distribution in the size and
in the ellipticity parameters ($e_1$, $e_2$)\footnotemark[2] of stars, and uses this
information to define the boundaries of the stellar locus along the bright range;
iii) evaluates the dependence of the width of the stellar locus on brightness and
extrapolates the relation to fainter magnitudes;
iv) applies the extended stellar locus and an ellipticity criterion to drop galaxies
from the stellar catalogue.
\footnotetext[2]{
\begin{displaymath}
e_1 = \frac{1-q^2}{1+q^2}\cos{2\theta} \, , \; \;
e_2 = \frac{1-q^2}{1+q^2}\sin{2\theta} \, , \; \;
\end{displaymath}
where $q=$axis ratio, $\theta=$position angle of the major axis.
}
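The filtering steps ii)--iv) can be sketched as follows. This is a toy version under our own assumptions: the stellar locus is located robustly (median/MAD rather than the Gaussian fit described above), its width is extrapolated linearly toward faint magnitudes with a placeholder slope, and the ellipticity cut uses an illustrative threshold.

```python
import numpy as np

def select_stars(mag, size, e1, e2, bright=(18.0, 20.0), nsig=4.0, e_max=0.1):
    """Toy star/galaxy separation: locate the stellar locus from bright
    sources, extrapolate its width to faint magnitudes, cut on ellipticity.

    nsig, e_max and the width slope are illustrative placeholders, not
    the values used in the paper.
    """
    bright_sel = (mag >= bright[0]) & (mag <= bright[1])
    mu = np.median(size[bright_sel])                       # locus centre
    mad = 1.4826 * np.median(np.abs(size[bright_sel] - mu))
    # Locus half-width, growing toward the faint end (placeholder slope):
    width = nsig * mad * (1.0 + 0.2 * np.clip(mag - bright[1], 0.0, None))
    in_locus = np.abs(size - mu) <= width
    round_enough = np.hypot(e1, e2) <= e_max
    return in_locus & round_enough
```

On a synthetic catalogue of compact stars plus extended galaxies, the selection keeps the narrow stellar size locus while rejecting the extended population, mimicking figure~\ref{fig:starsSelec}.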
For the stars resulting from this selection (figure~\ref{fig:starsSelec}), we correct
their photometry for Galactic reddening using the extinction maps of
\citet{schlegel98dustmaps}. The final stellar catalogues are used to build the CMDs
employed for our analysis. The PSF-corrected catalogues yield much cleaner CMDs than
the catalogues with similar star/galaxy separation but no PSF-correction
(figure~\ref{fig:GaussedCMD}).
\begin{figure}
\centering
\includegraphics[width=95mm]{starSelec_A119_g.eps}
\caption{Brightness versus size diagram showing all the PSF-corrected sources
(blue) and the subset of sources selected as stars through our star/galaxy
separation algorithm (red) for one of our pointings. Although the star
selection may not be complete at the faint end due to increasing scatter,
our algorithm minimizes the galaxy contamination, which otherwise would be
the main obstacle for detecting faint structures in the CMD.}
\label{fig:starsSelec}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{gaussed_CMD_A119.eps}
\caption{Colour-Magnitude Diagram (CMD) displaying the selection of sources
considered stars (selected as explained in section 2.2). The plume on
the red side ($g-r\approx1.2$) is composed of the nearby M-dwarfs, whereas
the main sequence on the bluer side ($0.18<g-r<0.6$) corresponds to a halo
overdensity located at a particular well-defined distance.
The cloud of sources at faint magnitudes consists of faint galaxies that enter the
star selection.
\emph{Left:} CMD derived from an image that has not been PSF-corrected.
\emph{Right:} CMD derived from a PSF-corrected image. After homogenizing
the PSF, the galaxy contamination decreases markedly below $r \approx 22$.}
\label{fig:GaussedCMD}
\end{figure*}
\subsection{Identification of the main sequence turnoff point}\label{subsec:turnoffpoint}
The photometric depth of our data allows us to detect a number of halo substructures
several magnitudes below their main sequence turnoff point. However, because our
survey is a pencil-beam survey lacking control fields adjacent to our target fields,
we have no reference-CMDs representing a clean foreground plus a smooth halo, and thus
a simple foreground subtraction is not possible. Instead the halo substructures in our
survey can only be detected in those fields where the contrast in density between the main
sequence stream stars and the foreground and background stars is significant in the CMD.
Thus, in order to search for main sequences in the CMDs, we build a cross-correlation
algorithm that runs across a region of the CMD (the `search region'), focused on the
colour range associated with the halo turnoff stars ($0.18 \leq g-r \leq 0.30$). Within
the boundaries of this search region, we slide a template main sequence-shaped 2D
function that operates over the number of stars and, for each step, yields an integral
representing the weighted density of stars in such a main sequence-shaped area. When the
template main sequence function coincides with a similarly shaped
overdensity in the CMD, the value of the cross-correlation (the weighted density) is
maximized, and a value for the turnoff point is assigned. This process is illustrated
in figure~\ref{fig:ccCMDbins}.
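The cross-correlation can be sketched as follows. This toy version (our own simplification: an unweighted count inside the template envelope rather than the paper's weighted density) slides a broadened main-sequence template over a grid of trial turnoff positions and keeps the position of maximum response; the quadratic spine used in the test is an illustrative isochrone shape.

```python
import numpy as np

def turnoff_search(g_r, r, spine, width, trial_colours, trial_mags):
    """Toy CMD cross-correlation for the turnoff search.

    g_r, r        : colours and magnitudes of catalogue stars
    spine(dm)     : template colour offset vs. depth dm below the trial
                    turnoff (the isochrone shape)
    width(dm)     : template colour half-width at that depth
    Returns the trial (colour, magnitude) maximizing the number of stars
    inside the template envelope, plus that count.
    """
    best, best_score = None, -1
    for c0 in trial_colours:
        for m0 in trial_mags:
            dm = r - m0                        # depth below trial turnoff
            below = dm >= 0.0
            expected = c0 + spine(dm)          # template spine colour
            inside = below & (np.abs(g_r - expected) <= width(dm))
            score = int(inside.sum())
            if score > best_score:
                best, best_score = (c0, m0), score
    return best, best_score
```

With a curved spine, shifted trial positions lose alignment with the synthetic main sequence, so the response peaks at the true turnoff despite a uniform background.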
In some cases a CMD presents more than one main sequence signature with sufficient
contrast to noise. When this happens we use the detection of the primary
main sequence (the position of its turnoff point and its characteristic width-function)
to randomly subtract a percentage of the stars associated with it (lowering its
density to the foreground level) and detect the next prominent main sequence feature.
We rank these main sequence detections as primary, secondary, etc., by their
signal-to-noise ratio. We require the signal-to-noise ratio to be $>3.5\sigma$ for primary MSs and
$>4\sigma$ for the secondary or tertiary MSs after partially removing the primary one.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{cc_slrCMDdenBins_A401.eps}
\caption{\emph{Left}: Dereddened CMD (black dots) with the search region (pink
solid-line rectangle) for the cross-correlation and the template main
sequence-shaped function (green solid line) at the position of maximum
density (peak of the cross-correlation).
\emph{Right}: Binned diagram representing the weighted density of stars
resulting from the cross-correlation process. The density in each bin
corresponds to the integral of the template main sequence-shaped
function with top left corner in the position of the bin.}
\label{fig:ccCMDbins}
\end{figure*}
\subsubsection{Shape of the template main sequence function}
When constructing the template main sequence-shaped 2D function (from now on, 'template-MS'),
we use two ingredients.
The first one is a theoretical isochrone\footnotemark[3] of age $t = 10 \ \mathrm{Gyr}$ and
metallicity $[Fe/H] = -1.58$, which is used to define the central spine of the template-MS.
The position of this central spine is later shifted in magnitude and colour steps during
the cross-correlation.
Since we are only interested in the shape of this isochrone (its absolute values are
irrelevant because it will be shifted) and since we are searching for halo substructures,
we choose the above age and metallicity values because they yield an isochrone shape
representative of old metal-poor stellar populations.
The second ingredient is a magnitude-dependent colour-width, which is used to broaden the
isochrone template, as illustrated in the left panel of figure~\ref{fig:ccCMDbins}.
\footnotetext[3]{Throughout this work we use a subset of theoretical isochrones from
http://stev.oapd.inaf.it/cmd. The theoretical isochrones (\citet{marigo08}, with
the corrections from Case A in \citet{girardi10} and the bolometric corrections
for Carbon stars from \citet{loidl01}) are provided as observable quantities
transformed into the CFHT photometric system.}
The width is in general directly derived from the width of the locus of nearby M-dwarfs
($1.0<g-r<1.4$). The width of this feature is calculated for a number of magnitude bins
as three times the standard deviation in colour for each bin. Then a functional form
dependent on magnitude is obtained through polynomial fitting. In a few cases, minor
tweaking is needed to compensate for extremely large widths (the colour steps become
insensitive to any substructure) or for extremely small widths (the density values become
meaningless due to the built-in weight; see below). This way of defining the width of
the template-MS accounts for the observational broadening of intrinsically well defined
stellar loci due to increasing photometric uncertainties at faint magnitudes.
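A minimal sketch of this width determination, assuming a binned standard deviation and a quadratic fit (the polynomial order, the bin handling and the sparse-bin cut are our assumptions, not a statement of the actual pipeline):

```python
import numpy as np

def template_width(mag, colour, mag_bins):
    """Magnitude-dependent colour width for the template-MS, derived
    from the nearby M-dwarf locus (1.0 < g-r < 1.4): three times the
    binned colour standard deviation, smoothed by a polynomial fit.
    Returns a callable width(mag)."""
    dwarfs = (colour > 1.0) & (colour < 1.4)
    centres, widths = [], []
    for lo, hi in zip(mag_bins[:-1], mag_bins[1:]):
        sel = dwarfs & (mag >= lo) & (mag < hi)
        if sel.sum() > 10:                 # ignore sparsely populated bins
            centres.append(0.5 * (lo + hi))
            widths.append(3.0 * np.std(colour[sel]))
    coeffs = np.polyfit(centres, widths, deg=2)  # polynomial order is a choice
    return np.poly1d(coeffs)
```

The returned callable widens toward faint magnitudes whenever the photometric scatter of the M-dwarf locus does, which is the behaviour the text relies on.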
\subsubsection{Weights within the template MS-function}
In addition to a theoretically and observationally motivated shape for the template-MS,
we also give a different weight to each region of the template. This means that, for each
step of the cross-correlation, the stars contained within will contribute differently to
the enclosed stellar density depending on how far they are from the spine of the template-MS.
The weight in colour (stars near the spine of the template-MS are more likely to belong
to the main sequence than stars close to the boundaries) is assigned through the exponential
term in a gaussian weight function.
We match the standard deviation of the gaussian weight to the standard deviation of the
template-MS width ($3\sigma=\omega_{MS}$) so that all the stars contained within the
template-MS are assigned a weight.
To guarantee that the weight does not favour bright features, we choose the amplitude of the
gaussian function to be such that the integral of the weight function between the edges
of the template-MS function is the same for all magnitudes.
The resulting weight function for a given star in the template-MS at a particular
step of the cross-correlation then follows:
\begin{equation}
W_{\ast}(mag,colour) = \frac{A}{\sqrt{2\pi}\,\sigma(mag)}\,\exp\left\{-\frac{[colour-\eta_{CC}(mag)]^2}{2[\sigma(mag)]^2}\right\}
\end{equation}
where $mag$ and $colour$ are the magnitude and colour of the weighted star,
$\eta_{CC}(mag)$ represents the theoretical isochrone at that particular step
of the cross-correlation,
and $\sigma(mag)=\frac{1}{3}\omega_{MS}(mag)$ is proportional to the width of the template-MS
function for that particular CMD.
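The weight function above translates directly into code; this is a sketch of the displayed equation with the amplitude fixed to a constant ($A=1$), which, thanks to the $1/\sigma$ prefactor, already makes the integral of the weight between the template edges ($\pm3\sigma$) the same at every magnitude:

```python
import numpy as np

def ms_weight(colour, spine_colour, width):
    """Gaussian colour weight for a star inside the template-MS.

    colour       : colour of the star (scalar or array).
    spine_colour : eta_CC(mag), spine colour at the star's magnitude
                   for the current cross-correlation step.
    width        : omega_MS(mag), template width; sigma = width / 3,
                   so the template edges sit at 3 sigma.
    """
    sigma = width / 3.0
    A = 1.0  # constant amplitude: the integral of the weight between
             # the template edges is then the same for all magnitudes
    z = (colour - spine_colour) / sigma
    return A / (np.sqrt(2.0 * np.pi) * sigma) * np.exp(-0.5 * z ** 2)
```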
\subsection{Uncertainties in the turnoff point}\label{subsec:uncertainties}
The colour and magnitude values for the turnoff point of a given main sequence, ($c_{TO}$,
$mag_{TO}$), are derived from the position of the template at which the cross-correlation
peaks. Therefore the uncertainties for these turnoff point values derive from the contribution
of individual stars to the position and shape of the main sequence (the uncertainty from
the CMD itself).
To evaluate this uncertainty, we carry out a bootstrapping process. In this process first
we generate re-sampled stellar catalogues by randomly withdrawing stars from one of our
true catalogues. Second we run the cross-correlation and obtain the turnoff points for each
of these re-samples. Third we consider the offsets between these turnoff points and the original
turnoff point and derive the standard deviation of the distribution. The contribution of any
CMD to the uncertainty of its turnoff point can then be calculated as a function of a
reference (bootstrapped) standard deviation, $s$:
\begin{displaymath}
\ \ \ \ E_{\mathrm{mag,CMD}} =\ f_{\mathrm{mag,BS}}\cdot\frac{s_{\mathrm{mag,BS}}}{\left.\frac{\partial^2 \rho_{\mathrm{CC}}}{\partial mag^2}\right|_{\mathrm{TO}}} \, , \; \; \;
E_{\mathrm{c,CMD}} =\ f_{\mathrm{c,BS}}\cdot\frac{s_{\mathrm{c,BS}}}{\left.\frac{\partial^2 \rho_{\mathrm{CC}}}{\partial c^2}\right|_{\mathrm{TO}}} \, , \; \;
\end{displaymath}
where, in practice, $s_{\mathrm{mag,BS}}$ and $s_{\mathrm{c,BS}}$ are the standard deviations
calculated for a number of representative fields, $f_{\mathrm{mag,BS}}$ and $f_{\mathrm{c,BS}}$
are scale factors that allow us to obtain the uncertainty for any field from the standard
deviation of the bootstrapped fields, and $\frac{\partial^2 \rho_{\mathrm{CC}}}{\partial mag^2}$
and $\frac{\partial^2 \rho_{\mathrm{CC}}}{\partial c^2}$ evaluate the prominence of the
particular overdensity as a function of magnitude or of colour. In practice,
$E_{\mathrm{mag,CMD}}=s_{{\mathrm{mag,BS}}}$ and $E_{\mathrm{c,CMD}}=s_{{\mathrm{c,BS}}}$ for
the bootstrapped fields used as a reference.
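The bootstrap described above can be sketched like this (resampling with replacement and the `find_turnoff` callable are our assumptions; the text only specifies random re-sampling of the catalogue and re-running the cross-correlation):

```python
import numpy as np

def bootstrap_turnoff(catalogue, find_turnoff, n_resamples=200, seed=0):
    """Estimate the turnoff point uncertainty of one field: resample the
    stellar catalogue, re-run the turnoff search on each resample, and
    return the standard deviations of the offsets in magnitude and in
    colour with respect to the original turnoff point."""
    rng = np.random.default_rng(seed)
    n = len(catalogue)
    mag0, col0 = find_turnoff(catalogue)
    d_mag, d_col = [], []
    for _ in range(n_resamples):
        resample = catalogue[rng.integers(0, n, size=n)]
        m, c = find_turnoff(resample)
        d_mag.append(m - mag0)
        d_col.append(c - col0)
    return np.std(d_mag), np.std(d_col)  # s_mag_BS, s_c_BS
```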
The photometric turnoff point distances are derived from the distance modulus. Therefore the
uncertainties in the distances can be calculated as a combination of two sources of error:
the uncertainty derived from the observed brightness of the turnoff point ($E_{\mathrm{mag,CMD}}$,
discussed above) and the uncertainty derived from the absolute brightness of the
turnoff point, which depends on the choice of isochrone (and thus on the uncertainty in the
age or in the metallicity).
\begin{eqnarray}
E_{\mathrm{\mu,TO}} & = & \sqrt{E_{\mathrm{mag,CMD}}^2 + E_{\mathrm{mag,isoch}}^2} \,;
\end{eqnarray}
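Folding both error terms together and propagating them to a heliocentric distance follows directly from the distance modulus; a sketch (the function name and argument set are ours):

```python
import numpy as np

def turnoff_distance(m_to, M_to, e_mag_cmd, e_mag_isoch):
    """Distance (kpc) from the apparent and absolute turnoff magnitudes,
    with E_mu,TO = sqrt(E_mag,CMD^2 + E_mag,isoch^2) propagated to kpc
    to first order."""
    mu = m_to - M_to                           # distance modulus
    d_kpc = 10.0 ** (mu / 5.0 + 1.0) / 1000.0  # d[pc] = 10^(mu/5 + 1)
    e_mu = np.hypot(e_mag_cmd, e_mag_isoch)
    e_d = d_kpc * np.log(10.0) / 5.0 * e_mu    # |dd/dmu| * E_mu
    return d_kpc, e_d
```

For example, $\mu = 16.2$ gives $d \approx 17.4\ \mathrm{kpc}$, of the order of the faint-branch values in table~\ref{table:distSgr}.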
\section{The Sagittarius stream}\label{results}
\subsection{Turnoff point distances to the Sgr stream}\label{subsec:resultsSgr}
The Sagittarius stream is clearly probed by at least 25 of our 97 fields (see the red
and pink squares in figure~\ref{fig:mapPointings}). They probe both the faint and the
bright branches of the stream (the faint branch lying to the North of the bright one)
and also two transitional areas, indicating that the transverse drop in stellar
counts between the two branches is not dramatic.
Some of these fields present more than one main sequence in their CMDs; for those fields
the secondary turnoff points are calculated by subtracting the primary MS and
re-running the cross-correlation (as explained in section~\ref{subsec:turnoffpoint}).
Based on the turnoff point values obtained from the cross-correlation, we calculate
the distances to the Sagittarius stream in these 25 fields for 31 detections. For this
calculation, we assume a single stellar population represented by a theoretical
isochrone with age $t_{age} = 10.2 \ \mathrm{Gyr}$ and metallicity $[Fe/H]=-1.0 \ \mathrm{dex}$
(for a detailed description of the set of isochrones see footnote~3 in
section~\ref{subsec:turnoffpoint}). We choose these age and metallicity values because
they match the age-metallicity relation for the Sgr dwarf galaxy \citep{layden00}
--which is also expected to hold for its debris-- and are consistent with the range that
characterizes old metal-poor populations.
To account for the potential influence on our distance measurements of a possible
metallicity gradient along the different Sgr arms \citep{chou07,shi12,vivas05,carlin12},
we analyse the dependency of the
isochrones turnoff point absolute brightness ($M_{TO}$) with metallicity throughout
the Sgr metal-poor range (see figure~\ref{fig:metallicitySgr}). We find that for
$-1.53 \ \mathrm{dex} <[Fe/H]< -0.8 \ \mathrm{dex}$ the absolute brightness remains
nearly constant in the r band, with a maximum variation of $\Delta M=\pm0.1 \ \mathrm{mag}$.
We conclude that if we take this variation in absolute brightness as the isochrone
uncertainty in the distance modulus ($E_{\mathrm{mag,isoch}}=\Delta M$), we can use
the $t_{age} = 10.2 \ \mathrm{Gyr}$ and $[Fe/H]=-1.0 \ \mathrm{dex}$ isochrone to
calculate distances to any region of the Sgr stream.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Sgrstream_TOmagageZrelation_zoomin.eps}
\caption{Absolute brightness of the turnoff point in the r band as a function of
metallicity and age for metal-poor populations (green circles).
The values in this diagram meet the age/metallicity relation for the Sgr
dwarf galaxy from \citet{layden00}. The isochrone used in this paper to
derive distances to the Sgr stream is represented with a yellow star,
and its maximum difference to the other brightness values in this
range is $\Delta M=\pm0.1\mathrm{mag}$.}
\label{fig:metallicitySgr}
\end{figure*}
\begin{table*}
\caption{Position and distances to the Sgr stream,
together with a tag for faint or bright branch membership, a tentative
classification as leading or trailing arm and a number specifying the hierarchy
of the detection in the CMD (primary, secondary, etc.). The distances are
indicated both as distance modulus and as heliocentric distance, with the
distance uncertainty ($E_{\mathrm{d}}$) in kpc.
}
\label{table:distSgr}
\centering
\begin{tabular}{l l c r r c c c}
\hline\hline
Field & arm & detection & RA (deg) & DEC (deg) & $\mu$ (mag) & $d$ (kpc) & $E_{\mathrm{d}}$ (kpc) \\
\hline
A2104$^{\mathrm{f}}$ & lead & 1 & 235.040644 & -3.33158 & 18.8 & 56.6 & 3.1 \\
RXJ1524$^{\mathrm{f}}$ & trail & 1 & 231.170583 & 9.93498 & 16.2 & 17.1 & 2.0 \\
A2050$^{\mathrm{f}}$ & lead & 1 & 229.080749 & 0.08773 & 18.7 & 54.1 & 8.7 \\
A1942$^{\mathrm{f}}$ & lead & 1 & 219.654126 & 3.63573 & 18.7 & 54.1 & 3.7 \\
A1882$^{\mathrm{b}}$ & lead & 1 & 213.667817 & -0.30598 & 18.5 & 49.3 & 5.7 \\
A1835$^{\mathrm{b}}$ & lead & 1 & 210.259355 & 2.83093 & 18.4 & 47.1 & 4.2 \\
RXJ1347$^{\mathrm{b}}$ & ? & 1 & 206.889060 & -11.80299 & 15.5 & 12.4 & 7.3 \\
ZwCl1215$^{\mathrm{b}}$ & ? & 1 & 184.433196 & 3.67469 & 16.7 & 21.5 & 2.9 \\
ZwCl1215$^{\mathrm{b}}$ & ? & 3 & 184.433196 & 3.67469 & 15.0 & 9.8 & 2.6 \\
A1413$^{\mathrm{f}}$ & lead & 1 & 178.842420 & 23.42207 & 17.5 & 31.1 & 2.7 \\
A1413$^{\mathrm{f}}$ & trail & 2 & 178.842420 & 23.42207 & 16.2 & 17.1 & 1.9 \\
A1246$^{\mathrm{f}}$ & lead & 1 & 170.987824 & 21.40913 & 17.6 & 32.6 & 9.2 \\
A1185$^{\mathrm{f}}$ & ? & 1 & 167.694750 & 28.68127 & 16.3 & 18.7 & 12 \\
ZwCl1023$^{\mathrm{b}}$ & ? & 1 & 156.489424 & 12.69030 & 17.4 & 29.7 & 11 \\
A795$^{\mathrm{b}}$ & lead & 1 & 141.030063 & 14.18190 & 16.0 & 14.2 & 2.8 \\
A795$^{\mathrm{b}}$ & ? & 2 & 141.030063 & 14.18190 & 15.6 & 14.2 & 2.8 \\
A763$^{\mathrm{b}}$ & ? & 1 & 138.150298 & 15.99992 & 16.7 & 21.5 & 2.6 \\
A763$^{\mathrm{b}}$ & lead & 2 & 138.150298 & 15.99992 & 15.8 & 14.2 & 1.0 \\
RXJ0352$^{\mathrm{f}}$ & lead & 1 & 58.263173 & 19.70387 & 15.7 & 13.6 & 0.7 \\
RXJ0352$^{\mathrm{f}}$ & trail & 2 & 58.263173 & 19.70387 & 17.7 & 34.1 & 4.3 \\
A401$^{\mathrm{f}}$ & trail & 1 & 44.759558 & 13.58975 & 17.4 & 29.7 & 3.4 \\
A399$^{\mathrm{f}}$ & trail & 1 & 44.478652 & 13.05185 & 17.6 & 32.6 & 11 \\
A370$^{\mathrm{b}}$ & trail & 1 & 39.963713 & -1.65806 & 17.6 & 32.6 & 4.8 \\
A223$^{\mathrm{b}}$ & trail & 1 & 24.557005 & -12.77010 & 17.0 & 24.7 & 1.7 \\
RXJ0132$^{\mathrm{f}}$ & trail & 1 & 23.169048 & -8.04556 & 17.1 & 25.9 & 2.3 \\
A133$^{\mathrm{b}}$ & trail & 1 & 15.673483 & -21.88113 & 16.6 & 20.6 & 2.4 \\
A119$^{\mathrm{f}}$ & trail & 1 & 14.074919 & -1.23337 & 16.9 & 23.6 & 2.9 \\
A85$^{\mathrm{b}}$ & trail & 1 & 10.469662 & -9.28824 & 16.9 & 23.6 & 1.6 \\
A2670$^{\mathrm{f}}$ & trail & 1 & 358.564313 & -10.40142 & 16.6 & 20.6 & 1.1 \\
RXJ2344$^{\mathrm{f}}$ & trail & 1 & 356.059633 & -4.36345 & 16.7 & 21.5 & 5.6 \\
RXJ2344$^{\mathrm{f}}$ & lead & 2 & 356.059633 & -4.36345 & 15.6 & 13.0 & 1.2 \\
A2597$^{\mathrm{f}}$ & trail & 1 & 351.336736 & -12.11193 & 16.9 & 23.6 & 1.4 \\
\hline
\end{tabular}
\begin{list}{}{}
\item[$^{\mathrm{b}}$] Bright branch
\item[$^{\mathrm{f}}$] Faint branch
\end{list}
\end{table*}
The resulting distances and distance uncertainties for these fields can be found in
table~\ref{table:distSgr},
together with the central position of each field (in equatorial coordinates), a
'faint/bright branch' tag (derived from figure~\ref{fig:mapPointings}), a tentative
classification as leading or trailing arm where
possible (see below), a 'primary/secondary detection' tag and the distance modulus ($\mu$).
In figure~\ref{fig:DistVsRA} we compare our results to values from the
literature\footnotemark[4], split into two diagrams (top panel for the faint branches and
bottom panel for the bright branches in both hemispheres).
\footnotetext[4]{The SDSS-DR8 measurements shown in this paper for the southern bright
arm have been corrected for the difference in the calibration of the red clump absolute
magnitude, as pointed out in \citet{slater13} and corrected in \citet{kopos13erratum}.
The SDSS-DR5 measurements have been decreased by $0.15 \ \mathrm{mag}$ to match the
BHB signal from SDSS, as prescribed in \citet{belok13}.}
Remarkably our turnoff point distances are not only in agreement with previous distance
measurements to known wraps, but also compatible with the distance predictions for nearby
wraps by the models of \citet{penarrub10} and \citet{law10}. In the following section
we discuss these findings in detail.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Sgr_distance_vs_RA_witherrors.eps}
\caption{Photometric main sequence turnoff point distances for the Sagittarius
stream along right ascension (northern-leading tail and southern-trailing
tail). The top panel shows results for the faint
branch, whereas the bottom panel corresponds to the bright arm. Our data
(blue circles for leading tails and red circles for trailing tails) are
based on the theoretical isochrones by
\citet{marigo08} and the corrections by \citet{girardi10}, for a
$10.2 \ \mathrm{Gyr}$ old stellar population with $[Fe/H]=-1.0$.
Other distance values correspond to \citet{belok06} (green and grey triangles),
\citet{kopos12,kopos13erratum} (green asterisks), \citet{belok13} (pink triangles)
and \citet{slater13}
(yellow squares for $>3\sigma$ detections and white squares for $<3\sigma$).
White circles denote detections that cannot be unambiguously tagged as
leading or trailing.}
\label{fig:DistVsRA}
\end{figure*}
\subsection{Comparison with models of the Sgr stream}\label{subsec:compareModels}
Using the model predictions shown in figures~\ref{fig:modelJorge} and \ref{fig:modelLaw},
we classify each field as belonging to the leading or trailing arm, by matching the distance
and the sky position.
Of these two models, the model by \citet{penarrub10} seems to better recover the separation
in stellar density that gives rise to the northern bifurcation into faint
and bright branches (figure~\ref{fig:modelJorge}, upper panels), whereas the model by
\citet{law10} seems to better reproduce the projected 2MASS stellar density distribution
(figure~\ref{fig:modelJorge}, lower panels). As noted in \citet{belok13}, the
northern-trailing arm is more distant and has a steeper distance gradient than predicted
by any Sgr model. Although neither model clearly recovers the southern bifurcation, both
succeed in reproducing the general distribution observed in that section of the stream.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Sgr_positionANDdistance_model_Jorge.eps}
\caption{Our data compared to the predictions by the model from \citet{penarrub10}.
\emph{Top panel}: Equatorial map with the position of our fields plotted
over the simulation.
\emph{Bottom panel}: Distance vs RA diagram with our results compared to the
model predictions.
Fields on the faint branch are denoted with circles, and fields on the bright
branch are denoted with squares.
Measurements matching the leading arm are denoted in pink, whereas those matching
the trailing arm are denoted in light blue. White markers represent detections
that cannot be unambiguously tagged as leading or trailing; grey markers in the
upper panel correspond to fields with more than one MS detection (they unfold in
the bottom panel).
The colour scales represent the time since the particles from the
simulations became unbound.}
\label{fig:modelJorge}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Sgr_positionANDdistance_model_Law.eps}
\caption{Same as in figure~\ref{fig:modelJorge} but for the model from \citet{law10}.}
\label{fig:modelLaw}
\end{figure*}
\subsubsection{Northern leading arm}\label{subsec:SgrNorthLead}
From our eighteen measurements on the bright and the faint branches of the northern-leading arm
(branches A and B, in the terminology of \citet{belok06}), nine clearly reproduce the
distance trends of \citet{belok06} and \citet{belok13} based on red giant and blue
horizontal branch stars (blue circles in figure~\ref{fig:DistVsRA}).
For the faint branch, we extend the distance measurements westwards beyond those of
SDSS, and we provide its most distant detections so far --out to $56\ \mathrm{kpc}$ at
RA~$\sim235^{\mathrm{o}}$. Comparing these most distant detections to the distance trend
of the models and to the bright branch at a similar right ascension, one can argue that these
detections likely lie close to the aphelion of the faint branch (or represent the aphelion
itself), and therefore they probably provide a good estimate of its distance.
For the other nine detections, we find that the derived distances are either in mild disagreement
with the trends of the leading arm (four cases, white circles in figure~\ref{fig:DistVsRA}) or
incompatible with the leading arm (five cases, red and white circles in figure~\ref{fig:DistVsRA}).
In the single case of mild disagreement for the faint branch (A1185, RA~$\sim168^{\mathrm{o}}$)
the distance is well below the trends of both this and previous work (offset $\approx10 \
\mathrm{kpc}$); however, its large uncertainty prevents us from ruling out that
it belongs to the faint branch. We will discuss an alternative membership in
subsection~\ref{subsec:SgrNorthTrail}. The three cases of mild disagreement for the bright branch
(ZwCl1023, A795-2 and A763-1, RA~$\sim150^{\mathrm{o}}$) are slightly above the distance
trend of this branch. Particularly, fields A795 and A763 also display two additional detections
(primary and secondary, respectively) slightly under the expected distance trend. Fields A795 and
A763 lie close in the sky (less than $4^{\mathrm{o}}$ apart) and both yield primary and secondary
distance measurements very consistent with each other and with this dichotomy. We interpret this
as possibly indicating a region of the sky where the bright branch runs broader in distance.
Out of the five detections incompatible with the distance trends of the leading arm, we will
discuss three (RXJ1524, A1413-2 and ZwCl1215-1) in
subsection~\ref{subsec:SgrNorthTrail}, together with the above mentioned A1185. Regarding the
other two (RXJ1347 and ZwCl1215-3, RA~$\sim205^{\mathrm{o}}$ and RA~$\sim185^{\mathrm{o}}$
respectively), we have studied them individually and found the following. On the one hand,
ZwCl1215-3 matches the distance to the Virgo Overdensity \citep{bonaca12virgo} when using the
appropriate age and metallicity values for the theoretical isochrone, so it is likely a detection
of this cloud. On the other hand, RXJ1347 matches the distance and position predicted by the model
from \citet{penarrub10} for an older northern-wrap of the leading arm, but also the distances
predicted by the two models for the northern-trailing wrap. However, we cannot draw conclusions
regarding membership from an isolated detection and we lack kinematic data, so at the moment
we cannot discriminate between these options (or even a third one).
\subsubsection{Northern trailing arm}\label{subsec:SgrNorthTrail}
In this subsection we revisit four detections in the galactic northern hemisphere which
yield distances incompatible with (or off) the leading arm. These detections are RXJ1524, A1413-2
(red circles in figure~\ref{fig:DistVsRA}), A1185 (compatible with the faint
leading branch thanks to its large error bars, but severely offset from the distance trend)
and ZwCl1215-1 (white marker at RA~$\sim185^{\mathrm{o}}$ on the bright branch).
The three detections in the faint branch (RXJ1524, A1413-2 and A1185) show distances strongly
consistent with each other ($\sim17 \ \mathrm{kpc}$). The three of them are also the fields
farthest from the Sgr orbital plane in our northern sample, spanning $60^{\mathrm{o}}$ along
the orbit.
Remarkably both their distances and their positions in the sky are in extremely good agreement
with the predictions from the above mentioned models for the Sgr debris in the northern-trailing arm,
but at odds with the claim in \citet{belok13} that branch C (at lower declinations and more
distant) is indeed the continuation of the northern-trailing arm for this range of RA
(RA~$>160^{\mathrm{o}}$). In this sense it is worth noting that the model from \citet{penarrub10}
predicts two nearly parallel branches for the northern-trailing arm (not only for the
northern-leading arm). Therefore it is dynamically feasible that both the measurements for
branch C \citep{belok06} and the measurements in this work trace the trailing arm in the
galactic northern hemisphere, as long as they probe two different branches.
Given the consistency of our distance measurements with each other and with the simulations, and
given the distribution of the fields along the stream, we believe our detections in the faint branch
are a previously undetected part of a Sgr wrap, most likely a continuation of the section of the
northern-trailing arm presented in \citet{belok13}. However, kinematic data or spatially
broader photometric coverage are needed to confirm this.
Additionally, ZwCl1215-1, which lies on the bright branch, yields a distance measurement compatible
with the trend predicted for the northern-trailing arm. But its position on the sky (on the bright
branch) cannot be reconciled with the current models for the trailing tail, nor with the
age, metallicity and distance values for the Virgo Overdensity. Thus, its membership
and meaning in the puzzle of the halo remain an open question.
\subsubsection{Southern trailing arm}\label{subsec:SgrSouthTrail}
Our measurements on the bright and the faint branches of the southern-trailing arm
reproduce the distance trends of \citet{kopos12, kopos13erratum} and \citet{slater13}
based on red clump and turnoff point stars.
For the faint branch, we confirm the trend set by the $<3\sigma$ detections in
\citet{slater13}, and we briefly extend westwards and eastwards the distance measurements.
Contrary to \citet{slater13}, we find no evidence for a difference in distance
between the faint and the bright branches of the southern-trailing tail. However it is
possible that such difference remains hidden in our distance uncertainties.
When comparing to the above mentioned models, we find that the measurements are in general
agreement with the predictions for both the faint and the bright branches. However the
distance gradient in the faint branch seems to be less steep in the data than in the
models, and the branch seems to be thinner in distance than predicted for any value of
the probed RA range. In this sense it is worth noting that, in contrast to what happens
to many of our northern hemisphere fields, only two of the CMDs in the southern
galactic hemisphere show secondary MS detections (RXJ0352 and RXJ2344, at
RA~$\sim58^{\mathrm{o}}$ and RA~$\sim356^{\mathrm{o}}$, respectively). And the
difference between the turnoff point brightness of these double detections does not
favour a thick branch, but rather the detection of a previously unknown nearby wrap (see
subsection~\ref{subsec:SgrSouthLead}).
\subsubsection{Southern leading arm}\label{subsec:SgrSouthLead}
In this subsection we revisit the double detections of subsection~\ref{subsec:SgrSouthTrail},
namely RXJ0352-1 and RXJ2344-2 (RA~$\sim58^{\mathrm{o}}$ and RA~$\sim356^{\mathrm{o}}$, primary
and secondary detections, respectively). We show their CMDs and their cross-correlation
density diagrams in figures~\ref{fig:CMD-leadSouthRXJ0352} and \ref{fig:CMD-leadSouthRXJ2344}.
We find that, using the same isochrone we have
used to derive distances to all the Sgr fields, both yield a distance of $\sim13 \
\mathrm{kpc}$. These distances are in excellent agreement with the predictions from the two
simulations for the leading arm in the South, and also with the trend set by the northern-leading data.
We thus claim to have detected the continuation of the northern-leading arm into the southern
hemisphere for the first time. The positions of these fields, however, suggest that the leading
arm dives into the southern hemisphere at higher declinations than predicted, overlapping in
projection with the faint branch of the trailing arm.
If the detection of the southern-leading arm or the northern-trailing arm proposed in this
paper are confirmed in future works (with kinematic measurements for membership or
photometric follow-up for spatial coverage), our measurements will be the closest and the oldest
debris of the Sgr stream detected to date.
If so, this would mean that our method has succeeded in detecting nearby substructure in
regions of the sky that had already been explored. The explanation for this performance would
lie in the fact that we identify overdensities in the CMD using a larger sample of stars
(a large part of the main sequence) than the usual halo tracers provide (red clump, red
giant or blue horizontal branch stars), which could increase the contrast in regions of
low concentration and thick-disk contamination.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{slr_CMDdensBins_RXJ0352.eps}
\caption{\emph{Left}: Dereddened CMD for the westernmost pointing probing the leading arm
in the southern hemisphere; the template main sequence function and the turnoff
point (green) are plotted for the maximum of the primary cross-correlation.
\emph{Right}: Weighted-density diagram resulting from the primary
cross-correlation. The maximum (white bin, black cross) marks the top left corner of the
template-MS at the position of the southern-leading arm main sequence, whereas the
red overdensity at fainter magnitudes corresponds to the southern-trailing arm.
}
\label{fig:CMD-leadSouthRXJ0352}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{slr_CMDdensBins_RXJ2344.eps}
\caption{\emph{Left}: Dereddened CMD for the easternmost pointing probing the leading arm
in the southern hemisphere; the template main sequence function and the turnoff
point (green) are plotted for the maximum of the secondary cross-correlation.
We have randomly removed $50\%$ of the stars contributing to the primary
detection, which corresponds to the southern-trailing arm of the Sgr stream.
\emph{Right}: Weighted-density diagram resulting from the secondary
cross-correlation.
The maximum (white bin, black cross) marks the top left corner of the template-MS
function at the position of the southern-leading arm's main sequence. The primary
detection has been partially removed, and its remainder can be seen as a weak
tail at fainter magnitudes and slightly bluer colours.}
\label{fig:CMD-leadSouthRXJ2344}
\end{figure*}
\section{The Palomar 5 stream and the Orphan stream}\label{sec:Pal5Orphan}
\subsection{Turnoff point distances to the Pal5 stream and the Orphan stream}
The Palomar 5 stream and the Orphan stream are also probed by our survey, each in
one field (see pink circles in figure~\ref{fig:mapPointings}). Their CMDs and their
corresponding turnoff points are shown in figures~\ref{fig:CMD-Pal5} and
\ref{fig:CMD-Orph}, together with their cross-correlation maps.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{slr_CMDdensBins_A2050.eps}
\caption{\emph{Left}: Dereddened CMD for the pointing containing the Palomar 5 stream as
its primary feature; the template main sequence function and the turnoff
point (green) are plotted for the maximum of the cross-correlation. The
secondary main sequence at fainter magnitudes corresponds to the faint arm
of the Sgr stream.
\emph{Right}: Weighted-density diagram resulting from the cross-correlation.
The maximum (white bin, black cross) marks the top left corner of the template-MS
function at the position of the Palomar 5 stream's main sequence, whereas
the cyan overdensity at fainter magnitudes corresponds to the Sgr stream.}
\label{fig:CMD-Pal5}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{slr_CMDdensBins_ZWCL1023.eps}
\caption{\emph{Left}: Dereddened CMD for the pointing containing the Orphan stream as
its secondary feature; the template main sequence function and the turnoff
point (green) are plotted for the maximum of the secondary cross-correlation.
We have randomly removed $60\%$ of the stars contributing to the primary
detection, which corresponds to the bright arm of the Sgr stream.
\emph{Right}: Weighted-density diagram resulting from the secondary
cross-correlation.
The maximum (white bin, black cross) marks the top left corner of the template-MS
function at the position of the Orphan stream's main sequence. The primary
detection has been removed, and thus it does not show in the density diagram.
}
\label{fig:CMD-Orph}
\end{figure*}
We use these turnoff point values to calculate photometric distances to each of the
streams. Once again we assume single stellar populations characterized by theoretical
isochrones but now with $t_{age} = 11.5 \ \mathrm{Gyr}$ \citep{martell02} and
metallicity $[Fe/H]=-1.43$ \citep{harris96} in the case of the Pal5 stream,
and $t_{age} = 10.0 \ \mathrm{Gyr}$ and metallicity $[Fe/H]=-1.63$ in the case of
the Orphan stream \citep{casey13II}. These values correspond to measurements for
these particular streams, which are more metal-poor than the Sgr stream for similar
ages. Since the absolute brightness of the turnoff point for a given stellar population
depends on its age and metallicity, it is important to select representative
values in order to derive the right photometric distances.
The resulting distances are collected in table~\ref{table:distOthers} and displayed
in figures~\ref{fig:Pal5DistVsRA} and \ref{fig:OrphDistVsDEC}, respectively, where
they are compared to previous findings by other groups. Both results show good
agreement for the adopted age and metallicity values.
\begin{table*}
\caption{Positions of and distances to the Palomar 5 and Orphan streams.}
\label{table:distOthers}
\centering
\begin{tabular}{l c r c c c c}
\hline\hline
Field & stream & RA (deg) & DEC (deg) & $\mu$ (mag) & $d$ (kpc) & $\Delta d$ (kpc) \\
\hline
A2050 & Pal5 & 229.080749 & 0.08773 & 17.0 & 23.1 & 1.1 \\
ZwCl1023 & Orphan & 235.040644 & -3.33158 & 16.6 & 23.8 & 2.2 \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Pal5_distance_vs_RA_witherrors.eps}
\caption{Photometric main sequence turnoff point distances along right ascension
for the Palomar5 stream. Our data point (blue circle) is based on a
single stellar population of age $11.5 \ \mathrm{Gyr}$ and metallicity
$[Fe/H]=-1.43$. The other values correspond to \citet{grilldion06pal5}
(green triangles) and \citet{vivas06} (pink star).
}
\label{fig:Pal5DistVsRA}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Orph_distance_vs_DEC_witherrors.eps}
\caption{Photometric main sequence turnoff point distances along declination
for the Orphan stream.
Our data point (blue circle) is based on the theoretical isochrone
for a $10.0 \ \mathrm{Gyr}$ old stellar population with $[Fe/H]=-1.63$.
The other values correspond to \citet{belok07orphan} (green triangles),
\citet{newb10orphan} (orange diamonds) and \citet{casey13II} (cyan star).
}
\label{fig:OrphDistVsDEC}
\end{figure*}
\subsection{Influence of the age/Z isochrone values on the distances}\label{subsec:discussPal5Orph}
For the Palomar 5 stream and the Orphan stream, our pencil-beam survey returns only
one detection each. We compare their derived distance measurements
(table~\ref{table:distOthers}) to previous work (figures~\ref{fig:Pal5DistVsRA} and
\ref{fig:OrphDistVsDEC}, respectively) and find that our measurements are
consistent and well within the values to be expected from interpolations. We interpret
this as an independent validation of the stellar population parameters for these streams
in the literature: $11.5 \ \mathrm{Gyr}$
and $[Fe/H]=-1.43$ for the Pal5 stream \citep{martell02,harris96}, and
$10.0 \ \mathrm{Gyr}$ and $[Fe/H]=-1.63$ for the Orphan stream \citep{casey13II}.
Variations in the absolute magnitude for the turnoff point of the
theoretical isochrone assigned to a given stellar population (characterized by a given
age and metallicity) propagate into the distance modulus, thus yielding variations in
the distance. For the Pal5 stream, our distance measurement can tolerate a relative
variation of $\Delta d_{rel}\approx0.15$ and still be in agreement with the previous
distance measurements; this variation threshold translates into an absolute magnitude
variation threshold of $\Delta M\approx0.35$. We find that variations of
$\Delta t = {}^{+1.7}_{-3.2}$ Gyr (limited by the formation time of the first stars) or
of $\Delta Z = {}^{+30}_{-6}\cdot10^{-4}$ dex (limited by the minimum metallicity available
in the set of theoretical isochrones) in the age and metallicity of the employed
theoretical isochrones meet this tolerance criterion.
For the Orphan stream, our distance measurement can tolerate a relative variation of
$\Delta d_{rel} = {}^{+0.24}_{-0.05}$, which translates into $\Delta M = {}^{+0.60}_{-0.11}$.
Variations of $\Delta t = {}^{+3.2}_{-1.3}$ Gyr or of $\Delta Z = {}^{+26}_{-3}\cdot10^{-4}$
dex (same limitations as above) in the age and metallicity of the theoretical
isochrones respect this requirement.
\section{Conclusions}\label{conc}
In this work we have used data from two deep cluster surveys, which provide randomly
scattered photometric pencil-beams in g' and r' with a field of view of $1\,\mathrm{deg}^2$ per
pointing. We have used these data to characterize previously
known substructure in the stellar halo of the Milky Way.
We analysed these data using two novel ingredients: a PSF-homogenization for
the images and a cross-correlation algorithm for the colour-magnitude diagram (CMD).
The PSF-homogenization algorithm corrects the inhomogeneous distortion of the
sources across an image caused by the telescope's optics. In this
way, it recovers the true shapes and size distribution of the sources, improving
the performance of any star/galaxy separation procedure, especially at the
faint end. The cross-correlation algorithm explores the CMD of each
field searching for an overdensity with the shape of a stellar main sequence,
and returns the (colour,magnitude) coordinates of the corresponding turnoff
point, from which distances can be derived. Through this method, we have
shown that it is possible to exploit a two-filter pencil-beam survey to perform
such a study of streams or satellites, provided
that the contrast-to-noise ratio of the substructure's main sequence is moderately
significant. In this way our method bypasses the need for nearby control-fields
that can be used to subtract a reference foreground from the target CMDs.
Using a set of theoretical isochrones \citep{marigo08,girardi10}, we have
calculated the distances to different regions
of the Sagittarius stream (faint and bright branches in both the northern and
southern arms) and obtained results in good agreement with previous work
\citep{belok06,kopos12,kopos13erratum,slater13} (see figure~\ref{fig:DistVsRA}).
We detect for the first time the continuation of the northern-leading arm into the
Southern hemisphere; we find that its distances are in excellent agreement with the
predictions by the models in \citet{penarrub10} and \citet{law10}, while the trajectory
seems to be located at higher declinations.
We also find evidence for a nearby branch of the northern-trailing arm at RA~$>160^{\circ}$.
Both the distances and the footprint on the sky are in good agreement with the predictions
from the models. It is also compatible with being the continuation of the northern-trailing
region detected in \citet{belok13} if it turns or broadens to higher latitudes as it evolves
westwards, but it does not follow the same distance trend as branch C \citep{belok06}. However,
it is plausible that both trends represent the trailing arm in the Galactic northern hemisphere
if they belong to two different branches, as predicted by the model of \citet{penarrub10}.
We have also used age and metallicity measurements from previous work \citep{martell02,
harris96,casey13II}, to calculate distances to the Pal5 stream and the Orphan stream.
These distances are
in good agreement with the results in the literature \citep{grilldion06pal5,
vivas06,belok07orphan,newb10orphan,casey13II}, attesting -- together with the results
from the Sgr stream for previously known regions of the stream -- to the robustness and
accuracy of the cross-correlation.
The methods presented in this paper open up
the possibility of using deeper existing pencil-beam surveys (perhaps originally
intended for extragalactic studies) to measure accurate distances (or ages or
metallicities, provided that two of the three parameters are known) to streams,
globular clusters or dwarf galaxies. The existence of such pencil-beam surveys, or
the reduced requirements of prospective ones, allows for more complete maps
of the Galactic halo substructure at reduced observational costs.
\begin{acknowledgements}
B.P.D. is supported by NOVA, the Dutch Research School of Astronomy.
H.H. and R.vdB. acknowledge support from the Netherlands Organisation for
Scientific Research (NWO) grant number 639.042.814.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Optimization in the space of probability measures has wide applications across various domains, including advanced generative models in machine learning~\cite{arjovsky2017wasserstein}, the training of two-layer neural networks~\cite{chizat2018global}, variational inference using Stein's method~\cite{liu2016stein}, super-resolution in signal processing~\cite{bredies2013inverse}, and interacting particles in physics~\cite{mccann1997convexity}.
Optimization in probability spaces goes beyond the conventional optimization in Euclidean space. \cite{ambrosio2005gradient} extends
the notion of steepest descent in Euclidean space to the space of probability measures with the Wasserstein metric. This notion traces back to studies of the Fokker–Planck equation, a partial differential equation (PDE) describing the density evolution of an Itô diffusion. The Fokker–Planck equation can be interpreted as a gradient flow in the space of probability distributions with the Wasserstein metric~\cite{jordan1998variational}. Gradient flows have become general tools to go beyond optimization in Euclidean space~\cite{absil2009optimization,santambrogio2017euclidean,chizat2022sparse,carrillo2018measure,carrillo2022global,carrillo2021equilibrium}.
Gradient flows enjoy fast global convergence on an important function class, the displacement convex functions~\cite{ambrosio2005gradient}, which was introduced to analyze equilibrium states of physical systems~\cite{mccann1997convexity}.
Despite their fast convergence rate, gradient flows are hard to implement: numerical solvers exist only for the limited class of linear functions with an entropy regularizer.
We study a different method to optimize functions of probability measures called particle gradient descent~\cite{chizat2018global,chizat2022sparse}. This method restricts optimization to sparse measures with finite support ~\cite{nitanda2017stochastic,chizat2018global,chizat2022sparse,li2022sampling} as
\begin{align}\label{eq:particle_program}
\min_{w_1,\dots, w_n} F \left( \frac{1}{n} \sum_{i=1}^n \delta_{w_i} \right),
\end{align}
where $\delta_{w_i}$ is the Dirac measure at $w_i \in \Omega \subset {\mathbb{R}}^d$. Points $w_1, \dots, w_n$ are called particles. \textit{Particle gradient descent}
is the standard gradient descent optimizing the particles \cite{chizat2018global,chizat2022sparse}. This method is widely used to optimize neural networks~\cite{chizat2018global}, draw samples from a broad family of distributions~\cite{li2022sampling}, and simulate gradient flows in physics~\cite{carrillo2022global}. As will be discussed, $F$ is not convex in the particles due to its invariance under permutations of the particles. In that regard, the convergence of particle gradient descent is not guaranteed for general functions.
Particle gradient descent is linked to gradient flows as $n\to \infty$. In this asymptotic regime, \cite{chizat2018global} proves that the empirical distribution over the particles $w_1,\dots, w_n$ implements a (Wasserstein) gradient flow for $F$. Although the associated gradient flow globally optimizes displacement convex functions, the implications of such convergence have remained unknown for a finite number of particles.
\subsection{Main contributions.}
We prove that particle gradient descent efficiently optimizes displacement convex functions. Consider
the sparse measure $\mu_n$ with support of size $n$. The error for $\mu_n$ can be decomposed as
\begin{multline*} F(\mu_n) - F^* = \\ \underbrace{F(\mu_n) - \min_{\mu_n} F(\mu_n)}_{\text{optimization error}} + \underbrace{\min_{\mu_n} F(\mu_n)-F^*}_{\text{approximation error}}.
\end{multline*}
The optimization error in the above equation measures how much the function value at $\mu_n$ can be reduced by particle gradient descent. The approximation error is induced by the sparsity constraint. While optimizing the particles reduces the optimization error, the approximation error is independent of the optimization and depends only on $n$.
\paragraph{Optimization error.} For displacement convex functions, we establish the global convergence of variants of particle gradient descent. Table~\ref{tab:rate_summary} presents the computational complexity of particle gradient descent optimizing smooth and Lipschitz displacement convex functions. To demonstrate the applications of these results, we provide examples of displacement convex functions that have emerged in machine learning, tensor decomposition, and physics.
\paragraph{Approximation error.} Under a certain Lipschitz continuity condition, we prove the approximation error is bounded by $O(\frac{1}{\sqrt{n}})$ with a high probability. Furthermore, we prove this bound can be improved to $O(1/n)$ for convex and smooth functions in measures.
Finally, we demonstrate the application of the established results for a specific neural network with two-dimensional inputs, and zero-one activations. When the inputs are drawn uniformly from the unit circle, we prove that $n$-neurons achieve $O(1/n)$-function approximation in polynomial time for a specific function class.
\begin{table*}[t!]
\centering
\begin{tabular}{|l|l|l|}
\hline
Function class & Regularity & Complexity
\\
\hline
$\lambda$-displacement convex & $\ell$-smooth & $nd\left(\frac{\ell-\lambda}{\ell+\lambda}\right)\log(\ell/\epsilon)$ \\
\hline
star displacement convex & $\ell$-smooth & $nd \ell \left(\frac{1}{\epsilon}\right)$ \\
\hline
$\lambda$-displacement convex & $L$-Lipschitz & $ndL^2/(\lambda \epsilon)$ \\
\hline
star displacement convex &$L$-Lipschitz & $nd L^{2}/\epsilon^{2}$ \\
\hline
\end{tabular}
\caption{\textit{Computational complexity to reach an $\epsilon$-optimization error.} See Theorems~\ref{thm:smooth} and \ref{thm:nonsmooth} for formal statements. }
\label{tab:rate_summary}
\end{table*}
\section{Related works}
There are alternatives to particle gradient descent for optimization in the space of measures. For example, conditional gradient descent optimizes smooth convex functions with a sub-linear convergence rate~\cite{frank1956algorithm}. This method constructs a sparse measure with support of size $n$ using an iterative approach, and the resulting sparse measure is $O(\frac{1}{n})$-accurate in $F$~\cite{dunn1979rates,jaggi2013revisiting}. However, each iteration of the conditional gradient method casts to a non-convex optimization problem without efficient solvers. In contrast, the iterations of particle gradient descent are computationally efficient.
\cite{chizat2018global} establishes the link between Wasserstein gradient flows and particle gradient descent. This study proves that particle gradient descent implements the gradient flows in the limit of infinite particles for a rich function class. The neurons in single-layer neural networks can be interpreted as the particles whose density simulates a gradient flow. The elegant connection between gradient descent and gradient flows has provided valuable insights into the optimization of neural networks~\cite{chizat2018global} and their statistical efficiency~\cite{chizat2020implicit}. In practice, particle gradient descent is limited to a finite number of particles. Thus, it is essential to study particle gradient descent in a non-asymptotic regime. In this paper, we analyze optimization with a finite number of particles for displacement convex functions.
Displacement convexity has been used in recent studies of neural networks~\cite{javanmard,hadi22}. \cite{javanmard} establishes the global convergence of radial basis function networks using an approximate displacement convexity. \cite{hadi22} proves the global convergence of gradient descent for a single-layer network with two-dimensional inputs and zero-one loss in realizable settings. Motivated by these examples, we analyze optimization for general (non-)smooth displacement convex functions.
Displacement convexity relates to the rich literature on geodesic convex optimization. Although
the optimization of geodesic convex functions is extensively analyzed by ~\cite{zhang2016first,udriste2013convex,absil2009optimization} for Riemannian manifolds, less is known for the non-Riemannian manifold of probability measures with the Wasserstein-2 metric ~\cite{jordan1998variational}.
In machine learning, various objective functions do not have any spurious local minima. This property was observed in early studies of neural networks. \cite{baldi1989neural} show that the training objective of two-layer neural networks with linear activations does not have suboptimal local minima. This proof is extended to a family of matrix factorization problems, including matrix sensing, matrix completion, and robust PCA~\cite{ge2017no}. Smooth displacement convex functions studied in this paper inherently do not admit spurious local minima~\cite{javanmard2020analysis}.
For functions with no spurious minima, escaping the saddle points is crucial, which is extensively studied for smooth functions~\cite{jin2017escape,daneshmand2018escaping}. Although gradient descent may converge to suboptimal saddle points, random initialization effectively avoids the convergence of gradient descent to saddle points~\cite{lee2016gradient}. Yet, gradient descent may need a long time to escape saddles~\cite{du2017gradient}. To speed up the escape, \cite{jin2017escape} leverages noise that allows escaping saddles in polynomial time. Building on these studies, we analyze the escaping of saddles for displacement convex functions.
\section{Displacement convex functions}
Note that the objective function $F$ is invariant to permutations of the particles. This permutation invariance implies that $F$ is not convex, as the next proposition states.
\begin{proposition} \label{proposition:non-convex}
Suppose that $w_1^*, \dots, w_n^*$ is the unique minimizer of an arbitrary function $F(\frac{1}{n}\sum_{i=1}^n \delta_{w_i})$ such that $w_1^* \neq w_2^*$. If $F$ is invariant to the permutation of $w_1, \dots, w_n$, then it is non-convex.
\end{proposition}
The condition of distinct optimal particles, required in the last proposition, rules out the trivial minimizer in which all the particles are equal.
Since there is no global optimization method for general non-convex functions, we study the optimization of the specific family of displacement convex functions.
\subsection{Optimal transport} To introduce displacement convexity, we need to review the basics of optimal transport theory.
Consider two probability measures $\mu$ and $\nu$ over ${\mathbb{R}}^d$. A transport map from $\mu$ to $\nu$ is a function $T:{\mathbb{R}}^d \to {\mathbb{R}}^d$ such that
\begin{align}
\int_A \nu(x) dx = \int_{ T^{-1}(A)} \mu(x)dx
\end{align}
holds for any Borel subset $A$ of ${\mathbb{R}}^d$ \cite{santambrogio2017euclidean}. The optimal transport $T^*$ has the minimum transportation cost:
\begin{align*}
T^* = \arg\min_T \int \text{cost}(T(x), x) d\mu(x).
\end{align*}
We use the standard squared Euclidean distance function for the transportation
cost~\cite{santambrogio2017euclidean}. Notably, a transport map between two distributions may not exist. For example, one cannot transport a Dirac measure to a continuous measure.
In this paper, we frequently use the optimal transport map for two $n$-sparse measures in the following form \begin{equation} \label{eq:disc_measure}
\mu = \frac{1}{n}\sum_{i=1}^n \delta_{w_i}, \quad \nu= \frac{1}{n} \sum_{i=1}^n \delta_{v_i}.\end{equation}
For the sparse measures, a permutation of $[1,\dots, n]$, denoted by $\sigma$, transports $\mu$ to $\nu$. Consider the set $\Lambda$, containing all permutations of $[1,\dots, n]$ and define
\begin{align}
\label{eq:optimal_permuation}
\sigma^* = \arg\min_{\sigma \in \Lambda} \sum_{i=1}^n \| w_i - v_{\sigma(i)}\|^2.
\end{align}
The optimal permutation in the above equation yields the optimal transport map from $\mu$ to $\nu$ as $T^*(w_i) = v_{\sigma^*(i)}$, and the Wasserstein-2 distance between $\mu$ and $\nu$:
\begin{align} \label{eq:wdist}
W_2^2(\mu,\nu) = \sum_{i=1}^n \| w_i- v_{\sigma^*(i)}\|^2_2.
\end{align}
Note that we omit the factor $1/n$ in $W_2^2$ for ease of notation.
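For small $n$, the optimal permutation $\sigma^*$ of Eq.~\eqref{eq:optimal_permuation} can be found by enumerating the permutation set $\Lambda$, exactly as in the definition. The brute-force sketch below is illustrative only; in practice one would use the Hungarian algorithm, which solves the same assignment problem in $O(n^3)$.

```python
from itertools import permutations

def squared_dist(w, v):
    """Squared Euclidean distance between two points given as tuples."""
    return sum((a - b) ** 2 for a, b in zip(w, v))

def optimal_transport_sparse(W, V):
    """Brute-force the optimal permutation sigma* between two n-sparse measures.

    W, V: lists of n particle positions (tuples in R^d).
    Returns (sigma*, W_2^2), with the 1/n factor omitted as in the text.
    Enumerates all n! permutations, so this is feasible only for small n.
    """
    n = len(W)
    best_sigma, best_cost = None, float("inf")
    for sigma in permutations(range(n)):
        cost = sum(squared_dist(W[i], V[sigma[i]]) for i in range(n))
        if cost < best_cost:
            best_sigma, best_cost = sigma, cost
    return best_sigma, best_cost
```

For example, transporting $\{(0,0),(1,1)\}$ onto $\{(1,1),(0,0)\}$ yields the swap permutation with zero cost, reflecting the permutation invariance of sparse measures.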
\subsection{Displacement convex functions}
The displacement interpolation between $\mu$ and $\nu$ is defined by the optimal transport map as \cite{mccann1997convexity}
\begin{align}
\mu_t = \left((1-t)\,\text{Identity} + t\, T^*\right)_\# \mu,
\end{align}
where $G_\# \mu$ denotes the measure obtained by pushing $\mu$ with $G$.
Note that the above interpolation is different from the convex combination of measures, i.e., $(1-t) \mu + t \nu$. For sparse measures, the displacement interpolation moves each particle along the segment $(1-t) w_i + t\, v_{\sigma^*(i)}$, where $\sigma^*$ is the optimal permutation defined in Eq.~\eqref{eq:optimal_permuation}.
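On the real line the optimal transport map is monotone, so the displacement interpolation of two sparse measures can be computed by simply sorting the particles. The following minimal sketch (illustrative, not from the paper) makes this concrete.

```python
def displacement_interpolation_1d(w, v, t):
    """Particles of mu_t = ((1 - t) Identity + t T*)_# mu for 1-D sparse measures.

    On R the optimal transport map is monotone, so sigma* pairs the sorted
    particles of mu with the sorted particles of nu; each particle then
    moves along the segment (1 - t) * w_i + t * v_{sigma*(i)}.
    """
    return [(1.0 - t) * wi + t * vi for wi, vi in zip(sorted(w), sorted(v))]
```

At $t=0$ the interpolation returns $\mu$ and at $t=1$ it returns $\nu$, unlike the convex combination of measures, whose support is the union of both supports for all $0<t<1$.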
$\lambda$-displacement convexity asserts Jensen's inequality along the displacement interpolation \cite{mccann1997convexity} as
\begin{multline*}
F(\mu_t) \leq (1-t) F(\mu) \\ + t F(\nu) -\frac{\lambda}{2}t(1-t) W_2^2(\mu,\nu).
\end{multline*}
A standard example of a displacement convex function is a convex quadratic function of measures.
\begin{example}
Consider
\begin{align*}
Q(\mu) = \int K(x-y) d\mu(x) d\mu(y)
\end{align*}
where $\mu$ is a measure over ${\mathbb{R}}^d$ and $K(\Delta)$ is convex in $\Delta \in {\mathbb{R}}^d$; then, $Q$ is $0$-displacement convex~\cite{mccann1997convexity}.
\end{example}
The optimization of $Q$ over a sparse measure is convex\footnote{$Q$ does not satisfy the condition of Proposition~\ref{proposition:non-convex}.}. However, this is a very specific example; in general, displacement convex functions are not necessarily convex.
Recall the sparse measures defined in Eq.~\eqref{eq:disc_measure}. While convexity asserts Jensen's inequality for the interpolation of $\{w_i\}$ with all $n!$ permutations of $\{v_j\}$, displacement convexity only relies on a specific permutation. In that regard, displacement convexity is weaker than convexity. In the following example, we elaborate on this difference.
\begin{example} \label{example:energy}
The energy distance between measures over ${\mathbb{R}}$ is defined as
\begin{multline} \label{eq:mmd}
E(\mu,\nu) = 2 \int | x - y | d\mu(x) d\nu(y) \\- \int | x - y | d\mu(x) d\mu(y) - \int| x- y | d\nu(x) d\nu(y).
\end{multline}
$E(\mu,\nu)$ is $0$-displacement convex in $\mu$~\cite{carrillo2018measure}.
\end{example}
According to Proposition~\ref{proposition:non-convex}, $E$ does not obey Jensen's inequality along interpolations with an arbitrary transport map. In contrast, $E$ obeys Jensen's inequality along the optimal transport map, since that map is monotone on ${\mathbb{R}}$~\cite{carrillo2018measure}. This key property implies that $E$ is displacement convex.
Remarkably, the optimization of the energy distance has applications in machine learning and physics.
\cite{hadi22} show that the training of two-layer neural networks with two-dimensional inputs (uniformly drawn from the unit sphere) casts to minimizing $E(\mu,\nu)$ in a sparse measure $\mu$. The optimization of the energy distance has also been used in clustering \cite{szekely2017energy}. In physics, the gradient flow on the energy distance describes interacting particles from two different species \cite{carrillo2018measure}.
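For two sparse measures on ${\mathbb{R}}$, the integrals in Eq.~\eqref{eq:mmd} reduce to finite sums over the particles. A minimal sketch (illustrative only):

```python
def energy_distance(xs, ys):
    """Energy distance E(mu, nu) for sparse measures on R.

    mu is uniform over the particles xs and nu is uniform over ys;
    each integral in the definition becomes a double sum over particles.
    """
    n, m = len(xs), len(ys)
    cross = sum(abs(x - y) for x in xs for y in ys) / (n * m)
    within_x = sum(abs(a - b) for a in xs for b in xs) / (n * n)
    within_y = sum(abs(a - b) for a in ys for b in ys) / (m * m)
    return 2.0 * cross - within_x - within_y
```

As expected for a distance, the value vanishes when $\mu = \nu$ and is positive otherwise (e.g., $E(\delta_0, \delta_1) = 2$).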
\subsection{Star displacement convex functions}
Our convergence analysis extends to a broader family of functions. Let $\widehat{\mu}$ denote the optimal $n$-sparse solution of the optimization in Eq.~\eqref{eq:particle_program}, and let $\mu_t$ be obtained by the displacement interpolation between $\mu$ and $\widehat{\mu}$. A star displacement convex function $F$ obeys
\begin{align*}
\sum_{i} \langle w_i - T(w_i), \partial_{w_i} F(\mu) \rangle \geq F(\mu) - F(\widehat{\mu}),
\end{align*}
where $T$ is the optimal transport map from $\mu$ to $\widehat{\mu}$. The above definition is inspired by the notion of star-convexity~\cite{nesterov2006cubic}. It is easy to check that $0$-displacement convex functions are star displacement convex.
Star displacement convex optimization is used for generative models in machine learning.
An important family of generative models optimizes the Wasserstein-2 metric~\cite{arjovsky2017wasserstein}. Although the squared Wasserstein-2 distance is not displacement convex \cite{santambrogio2017euclidean}, it is star displacement convex.
\begin{example} \label{lemma:weakconvex_w2}
$W_2^2(\mu,\nu)$ is star displacement convex in $\mu$ as long as $\mu$ and $\nu$ have sparse supports of the same size.
\end{example}
Star displacement convexity also holds for complete orthogonal tensor decomposition. Specifically, we consider the following example.
\begin{example} \label{example:tensor}
Consider the orthogonal complete tensor decomposition of order $3$, namely
\begin{align*}
\min_{w_1,\dots, w_d \in {\mathbb{R}}^d} \left( G\left(\frac{1}{d}\sum_{i=1}^d \delta_{w_i}\right)=- \sum_{i=1}^d \sum_{j=1}^d \left\langle \frac{w_j}{\|w_j\|}, v_i\right\rangle^3\right),
\end{align*}
where $v_1, \dots, v_d$ are orthonormal vectors on the unit sphere, denoted by $\mathcal{S}_{d-1}$.
\end{example}
Although orthogonal tensor decomposition is not convex~\cite{anandkumar2014tensor}, the next lemma proves that it is star displacement convex.
\begin{lemma} \label{theorem:star_tensor}
$G$ is star displacement convex for $w_1, \dots, w_d \in \mathcal{S}_{d-1}$.
\end{lemma}
To prove the above lemma, we leverage the properties of the optimal transport map used for displacement interpolation.
There are more examples of displacement convex functions in machine learning~\cite{javanmard2020analysis} and physics~\cite{carrillo2009example}. Motivated by these examples, we analyze displacement convex optimization.
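As a numerical illustration of Example~\ref{example:tensor}, the objective $G$ can be evaluated directly from its definition. The sketch below takes the standard basis as the orthonormal directions $v_i$ (an admissible choice, assumed here purely for illustration):

```python
import math

def tensor_objective(W, V):
    """G of Example (tensor decomposition): minus the sum of cubed inner
    products between the normalized particles w_j / ||w_j|| and the
    orthonormal directions v_i.
    """
    total = 0.0
    for v in V:
        for w in W:
            nrm = math.sqrt(sum(x * x for x in w))
            dot = sum(a * b for a, b in zip(w, v)) / nrm
            total += dot ** 3
    return -total

# At the global optimum w_j = v_j, each term contributes <v_j, v_i>^3 = delta_ij,
# so the objective equals -d.
```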
\section{Optimization of smooth functions}
Gradient descent is a powerful method for optimizing smooth functions, enjoying a dimension-free convergence rate to a critical point~\cite{nesterov1999global}. More interestingly, variants of gradient descent converge to a local optimum~\cite{daneshmand2018escaping,jin2017escape,ge2015escaping,xu2018first,zhang2017hitting}. Here, we prove that gradient descent globally optimizes the class of (star) displacement convex functions. Our results are established for standard gradient descent, namely the following iterates
\begin{multline}
w_{i}^{(k+1)} = w_i^{(k)} - \gamma \partial_{w_i} F(\mu_k),\\ \mu_k := \frac{1}{n}\sum_{i=1}^n \delta_{w_i^{(k)}}
\end{multline}
where $\partial_{w_i} F$ denotes the gradient of $F$ with respect to $w_i$.
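To make the iterates concrete, the following toy sketch runs particle gradient descent on the star displacement convex example $F(\mu) = W_2^2(\mu,\nu)$ for one-dimensional particles, where the optimal permutation is recomputed at each step by sorting (monotone transport on ${\mathbb{R}}$). This is an illustrative sketch, not the authors' implementation.

```python
def pgd_w2(w, v, step=0.25, iters=60):
    """Particle gradient descent on F(mu) = W_2^2(mu, nu) for 1-D particles.

    The optimal permutation sigma* is recomputed each iteration by ranking
    the particles (monotone transport on R); the gradient with respect to
    w_i is 2 * (w_i - v_{sigma*(i)}).
    """
    v_sorted = sorted(v)
    w = list(w)
    for _ in range(iters):
        order = sorted(range(len(w)), key=lambda i: w[i])  # sigma* via sorting
        for rank, i in enumerate(order):
            w[i] -= step * 2.0 * (w[i] - v_sorted[rank])
        # each matched pair contracts toward its target by |1 - 2*step| per step
    return w
```

With `step = 0.25` each particle halves its distance to its matched target per iteration, so the iterates converge geometrically to the particles of $\nu$, as the global convergence results predict for this function class.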
The next Theorem establishes the convergence of gradient descent.
\begin{theorem}\label{thm:smooth}
Assume $F$ is $\ell$-smooth, and particle gradient descent starts from distinct particles $w_1^{(0)}\neq \dots \neq w_n^{(0)}$. Let $\widehat{\mu}$ denote the optimal solution of \eqref{eq:particle_program}.
\begin{itemize}
\item[(a)] For $(\lambda>0)$-displacement functions,
\begin{multline*}
F(\mu_{k+1}) - F(\widehat{\mu}) \\\leq \ell \left(1-\left(\frac{2 \lambda \ell \gamma}{\ell+\lambda}\right)\right)^k W_2^2(\mu_0,\widehat{\mu})
\end{multline*}
holds as long as $\gamma\leq 2/(\lambda+\ell)$.
\item[(b)] Under $0$-displacement convexity,
\begin{multline*}
F(\mu_{k+1}) - F(\widehat{\mu}) \\ \leq \frac{2 (F(\mu_0) - F(\widehat{\mu}))W_2^2(\mu_0,\widehat{\mu})}{2 W_2^2(\mu_0,\widehat{\mu}) + (F(\mu_0) - F(\widehat{\mu})) \gamma k }
\end{multline*}
holds for $\gamma \leq 1/\ell$.
\item[(c)] Suppose that $F$ is star displacement convex and $\max_{m\in\{1,\dots, k\}} W_2^2(\mu_m,\widehat{\mu})\leq r^2$; then
\begin{multline*}
F(\mu_{k+1}) - F(\widehat{\mu}) \\\leq \frac{2 (F(\mu_0) - F(\widehat{\mu}))r^2}{2 r^2 + (F(\mu_0) - F(\widehat{\mu})) \gamma k }
\end{multline*}
holds for $\gamma \leq 1/\ell$.
\end{itemize}
\end{theorem}
\begin{table*}[t!]
\centering
\begin{tabular}{|l|l|}
\hline
Function class & Convergence rate
\\
\hline
$\lambda$-disp. convex & $\ell \left(\frac{\ell-\lambda}{\ell+\lambda}\right)^k W_2^2(\mu_0,\widehat{\mu})$ \\
$\lambda$-strongly convex & $\frac{\ell}{2} \left(\frac{\ell-\lambda}{\ell+\lambda}\right)^k \sum_{i}\| w_i - w^*_i\|^2_2$ \\
\hline
$0$-disp. convex & $2\ell\, W_2^2(\mu_0,\widehat{\mu})k^{-1}$ \\
convex & $2\ell \left(\sum_{i}\| w_i - w^*_i\|^2_2\right)(k+4)^{-1}$ \\
\hline
\end{tabular}
\caption{\textit{Convergence rates for the optimization of $\ell$-smooth functions.} We use the optimal choice for the stepsize $\gamma$ to achieve the best possible rate. Recall $\widehat{\mu} = \frac{1}{n}\sum_{i=1}^n\delta_{w_i^*}$ denotes the optimal solution for Eq.~\eqref{eq:particle_program}. Rates for convex functions: \cite{nesterov1999global}. Rates for displacement convex functions: Theorem~\ref{thm:smooth}. }
\label{tab:ratesmooth}
\end{table*}
Table~\ref{tab:ratesmooth} compares convergence rates for convex and displacement convex functions. The rates are analogous; the main difference is that the Euclidean distance is replaced by the Wasserstein distance for displacement convex functions. This replacement is due to the permutation invariance of $F$:
the Euclidean distance between $(w_1^*, \dots, w_n^*)$ and the permuted particles $(w_{\sigma(1)}^*, \dots , w_{\sigma(n)}^*)$ can be arbitrarily large, while $F$ is invariant to the permutation of the particles $w_1,\dots, w_n$. As proven in Theorem~\ref{thm:smooth}, the Wasserstein distance effectively replaces the Euclidean distance for permutation-invariant displacement convex functions.
Smooth displacement convex functions are non-convex, hence they may have saddle points. However, displacement convex functions do not have suboptimal local minima \cite{javanmard2020analysis}. Such a property has been frequently observed for various objective functions in machine learning. To optimize functions without suboptimal local minima, escaping saddle points is crucial, since saddle points may prevent the convergence of gradient descent~\cite{ge2015escaping}. \cite{lee2016gradient} proves that random initialization effectively avoids the convergence of gradient descent to saddle points. Similarly, our global convergence results rely on a weak condition on the initialization: the particles have to be distinct. Any standard random initialization satisfies this condition.
Escaping saddles with random initialization may require considerable time for general functions: \cite{du2017gradient} constructs a smooth function on which escaping saddles takes time exponential in the dimension. Notably, the result of the last theorem holds specifically for displacement convex functions. For this function class, random initialization not only enables escaping saddles but also leads to global convergence.
\section{Optimization of Lipschitz functions}
Various objective functions are not smooth. For example, the training loss of neural networks with the standard ReLU activation is not smooth. In physics, energy functions are often not smooth~\cite{mccann1997convexity,carrillo2022global}. Furthermore, recent sampling methods are built on non-smooth optimization with particle gradient descent~\cite{li2022sampling}. Motivated by these broad applications, we study the optimization of non-smooth displacement convex functions. In particular, we focus on $L$-Lipschitz functions, whose gradients are bounded by $L$.
To optimize non-smooth functions, we add noise to gradient iterations as
\begin{multline}
\tag{PGD}\label{eq:noisy_descent}
w_{i}^{(k+1)} = w_i^{(k)} \\- \gamma_k \left( \partial_{w_i} F(\mu_k) + \frac{1}{\sqrt{n}}\xi_i^{(k)} \right)
\end{multline}
where $\xi_1^{(k)}, \dots \xi_n^{(k)} \in {\mathbb{R}}^d$ are random vectors uniformly drawn from the unit ball. The above perturbed gradient descent (PGD) is widely used in smooth optimization to escape saddle points~\cite{ge2015escaping}. The next Theorem proves this random perturbation can be leveraged for optimization of non-smooth functions, which are (star) displacement convex.
\begin{theorem}\label{thm:nonsmooth}
Consider the optimization of an $L$-Lipschitz function with \ref{eq:noisy_descent}, starting from $w_1^{(0)}\neq \dots \neq w_n^{(0)}$.
\begin{itemize}
\item[a.] If $F$ is $\lambda$-displacement convex, then
\begin{align*}
\min_{k \in \{1, \dots, m \}} \left\{ {\mathbf E} \left[ F(\mu_k) - F(\widehat{\mu})\right] \right\} & \leq \frac{2(L^2 +1)}{\lambda(m+1)}
\end{align*}
holds for $\gamma_k = 2/(\lambda(k+1))$.
\item[b.] If $F$ is star displacement convex, then
\begin{multline*}
\min_{k \in \{1, \dots, m \}} \left\{ {\mathbf E} \left[ F(\mu_k) - F(\widehat{\mu}) \right] \right\} \\\leq \frac{1}{\sqrt{m}}\left( W_2^2(\mu_0,\widehat{\mu}) + L +1\right)
\end{multline*}
holds for $\gamma_1 = \dots = \gamma_m = 1/\sqrt{m}$.
\end{itemize}
Notably, the above expectations are taken over random vectors $\xi_1^{(k)}, \dots \xi_n^{(k)}$.
\end{theorem}
Thus, \ref{eq:noisy_descent} reaches an $\epsilon$-suboptimal solution within $O(1/\epsilon^2)$ iterations for Lipschitz displacement convex functions. This rate holds for the optimization of the energy distance, since it is $2$-Lipschitz and $0$-displacement convex~\cite{carrillo2018measure}.
\cite{hadi22} also establishes the convergence of gradient descent on the specific example of the energy distance. The last Theorem extends this convergence to the general function class of non-smooth Lipschitz displacement convex functions. While the convergence of \cite{hadi22} is in the Wasserstein distance, our convergence results are in terms of the function value.
\section{Approximation error} \label{sec:app_error}
Now, we turn our focus to the approximation error. We provide bounds on the approximation error for two important function classes:
\begin{itemize}
\item[(i)] Lipschitz functions in measures.
\item[(ii)] Convex and smooth functions in measures.
\end{itemize}
For (i), we provide a probabilistic bound of $O\left(\frac{1}{\sqrt{n}}\right)$ on the approximation error; we then improve the bound to $O(\frac{1}{n})$ for (ii).
\subsection{Lipschitz functions in measures} We introduce a specific notion of Lipschitz continuity for functions of probability measures. This notion relies on Maximum Mean Discrepancy (MMD) between probability measures. Given a positive definite kernel $K$, MMD$_K$ is defined as
\begin{multline*}
\left(\text{MMD}_K(\mu, \nu)\right)^2 = \int K(w,v) d\mu(w) d\mu(v) \\- 2 \int K(w,v) d\mu(w) d\nu (v) + \int K(w,v) d\nu(w) d\nu(v)
\end{multline*}
\noindent
MMD is widely used for the two-sample test in machine learning~\cite{gretton2012kernel}.
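For sparse measures, the three double integrals above reduce to kernel averages over the particles. A minimal sketch (the Gaussian kernel here is just an illustrative choice of positive definite $K$):

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    # K(w, v) = exp(-||w - v||^2 / (2 * bandwidth^2)), a positive definite kernel
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth**2))

def mmd_squared(w, v, kernel=gaussian_kernel):
    """(MMD_K(mu, nu))^2 for mu = (1/n) sum_i delta_{w_i}, nu = (1/m) sum_j delta_{v_j}.

    The three terms mirror the double integrals in the definition:
    int K dmu dmu - 2 int K dmu dnu + int K dnu dnu.
    """
    return kernel(w, w).mean() - 2.0 * kernel(w, v).mean() + kernel(v, v).mean()
```

Since $K$ is positive definite, `mmd_squared` is non-negative and vanishes when the two empirical measures coincide.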
Leveraging MMD, we define the following Lipschitz property.
\begin{definition}[$L$-MMD$_K$ Lipschitz]
$F$ is $L$-MMD$_K$ Lipschitz if there exists a positive definite kernel $K$ such that
\begin{align*}
| F(\mu) - F(\nu)| \leq L \times \text{MMD}_K(\mu,\nu)
\end{align*}
holds for all probability measures $\mu$ and $\nu$.
\end{definition}
Indeed, the above Lipschitz continuity is an extension of the standard Lipschitz continuity to functions of probability measures. A wide range of objective functions obeys the above Lipschitz continuity. Particularly, \cite{chizat2018global}
introduces a unified formulation for training two-layer neural networks, sparse deconvolution, and tensor decomposition as
\begin{align} \label{eq:nn_chizat}
R\left(\int \Phi(w) d\mu(w) \right)
\end{align}
where $\Phi: {\mathbb{R}}^d \to \mathcal{H}$ is a map whose range lies in the Hilbert space $\mathcal{H}$ and $R: \mathcal{H} \to {\mathbb{R}}_+$. Under a weak assumption on $R$, the objective in Eq.~\eqref{eq:nn_chizat} is $L$-MMD$_K$ Lipschitz.
\begin{proposition} \label{prop:Lipschitz}
If $R$ is $L$-Lipschitz in its input, then the objective $\mu \mapsto R\left(\int \Phi(w) d\mu(w)\right)$ is $L$-MMD$_K$ Lipschitz for $K(w,v) = \langle \Phi(w),\Phi(v) \rangle$.
\end{proposition}
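The identity behind this proposition, namely that $\|\int \Phi\, d\mu - \int \Phi\, d\nu\|$ equals MMD$_K$ for $K(w,v)=\langle \Phi(w), \Phi(v)\rangle$, can be checked numerically. The finite-dimensional feature map below is a hypothetical example of our own choosing:

```python
import numpy as np

def phi(w):
    # hypothetical finite-dimensional feature map Phi: R -> H = R^3
    return np.stack([np.cos(w), np.sin(w), w**2], axis=-1)

def feature_kernel(a, b):
    # K(w, v) = <Phi(w), Phi(v)>
    return phi(a) @ phi(b).T

def mmd_squared_feature(w, v):
    # squared MMD_K between the empirical measures supported on w and v
    return feature_kernel(w, w).mean() - 2.0 * feature_kernel(w, v).mean() \
        + feature_kernel(v, v).mean()

rng = np.random.default_rng(2)
w, v = rng.normal(size=30), rng.normal(size=30)
# norm of the difference of mean embeddings equals MMD_K
lhs = np.linalg.norm(phi(w).mean(axis=0) - phi(v).mean(axis=0)) ** 2
rhs = mmd_squared_feature(w, v)
```

The two quantities agree up to floating-point error, mirroring the proof of Proposition~\ref{prop:Lipschitz}.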
Thus, the class of MMD-Lipschitz functions is rich. For this function class, an $O(\frac{1}{\sqrt{n}})$ approximation error is achievable.
\begin{proposition}
\label{lemma:approximation} Suppose that there exists a uniformly bounded kernel $K$ with $\|K\|_{\infty} \leq B$ such that $F$ is $L$-MMD$_K$ Lipschitz; then,
\begin{align*}
\min_{\mu_n} F(\mu_n)-F^* \leq \frac{3\sqrt{B}}{\sqrt{n}}
\end{align*}
holds with probability at least $1- \exp(-1/n)$.
\end{proposition}
The last proposition is a straightforward application of Theorem 7 in \cite{gretton2012kernel}.
Combining the above result with Theorem~\ref{thm:nonsmooth} yields a total complexity of $O(d/\epsilon^4)$ for finding an $\epsilon$-optimal solution for Lipschitz displacement convex functions. The complexity improves to $O(d/\epsilon^2)$ for smooth functions according to Theorem~\ref{thm:smooth}.
The established bound $O(1/\sqrt{n})$ can be improved under additional assumptions on the kernel $K$ associated with the Lipschitz continuity. For $d$-times differentiable shift-invariant kernels, \cite{xu2022accurate} establishes the considerably tighter bound $O(\frac{\log(n)^d}{n})$ when the support of the optimal measure is a subset of the unit hypercube.
\subsection{Convex functions in measures}
If $F$ is convex and smooth in $\mu$, we can get a tighter bound on the approximation error.
\begin{lemma} \label{lemma:approx_convex}
Suppose $F$ is convex and smooth in $\mu$. If the probability measure $\mu$ is defined over a compact set, then
\begin{align*}
\min_{\mu_n} F(\mu_n) - F^* = O\left(\frac{1}{n}\right)
\end{align*}
holds for all $n$.
\end{lemma}
The proof of the last lemma is based on the convergence rate of the Frank-Wolfe algorithm~\cite{jaggi2013revisiting}. This algorithm optimizes a smooth convex function by adding particles one by one. After $n$ iterations, it obtains an $n$-sparse measure that is $O(1/n)$-suboptimal. \cite{bach2017breaking} uses this proof technique to bound the approximation error for neural networks. The last lemma extends this result to a broader function class.
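The mechanism can be illustrated on a toy instance: minimizing the convex quadratic $F(\mu) = \|\int \Phi\, d\mu - y\|^2$ over probability measures on a finite grid, adding one Dirac per iteration. The grid, the feature map, and the stepsize $2/(k+2)$ are our illustrative choices:

```python
import numpy as np

def frank_wolfe_measure(phi_grid, y, steps):
    """Frank-Wolfe over probability measures on a finite grid,
    minimizing F(mu) = || sum_x mu(x) * phi(x) - y ||^2.

    phi_grid : (m, p) array of feature vectors Phi(x) per grid point
    y        : (p,) target in feature space
    """
    m = phi_grid.shape[0]
    mu = np.zeros(m)
    mu[0] = 1.0  # start from a single Dirac
    for k in range(steps):
        residual = mu @ phi_grid - y
        # linearization F'(mu)(x) = 2 <residual, Phi(x)>; pick its minimizer
        s = np.argmin(phi_grid @ (2.0 * residual))
        gamma = 2.0 / (k + 2)
        mu *= 1.0 - gamma   # shrink the old weights
        mu[s] += gamma      # add a Dirac at the new atom
    return mu

# grid of directions on the unit circle; the target is a mixture of two atoms
angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
phi_grid = np.stack([np.cos(angles), np.sin(angles)], axis=1)
y = 0.5 * (phi_grid[3] + phi_grid[40])
mu = frank_wolfe_measure(phi_grid, y, steps=1000)
final_gap = np.linalg.norm(mu @ phi_grid - y) ** 2
```

After $k$ steps the iterate is supported on at most $k+1$ atoms, and the objective decays at the $O(1/k)$ rate of \cite{jaggi2013revisiting}.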
Remarkably, the energy distance is convex and smooth in $\mu$ and hence enjoys an $O(1/n)$ approximation error, as stated in the next lemma.
\begin{lemma} \label{lemma:energy_distance}
$E(\mu,\nu)$ is convex and smooth in $\mu$ when $\mu$ and $\nu$ have a bounded support.
\end{lemma}
\subsection{Applications for neural networks}
The established theoretical analysis has a notable application to function approximation with neural networks. Consider the class of functions of the form
\begin{align}
f(x) = \int \varphi(x^\top w) d\nu(w)
\end{align}
where $x, w \in {\mathbb{R}}^2$ lie on the unit circle and $\nu$ is a measure whose support is contained in the upper-half unit circle. Here, $\varphi$ is the standard zero-one ridge function:
\begin{align}
\varphi(a) = \begin{cases}
1 & a >0 \\
0 & a \leq 0
\end{cases}.
\end{align}
The above function is used in the original McCulloch-Pitts model of neural networks~\cite{mcculloch1943logical}. To approximate the function $f$, one may use a neural network with a finite number of neurons implementing the following output function:
\begin{align}
f_n(x) = \frac{1}{n} \sum_{i=1}^n \varphi(x^\top w_i),
\end{align}
where $w_1, \dots, w_n$ are points on the unit circle representing the parameters of the neurons. To optimize the locations of $w_1,\dots, w_n$,
one may minimize the standard mean-squared loss:
\begin{align}
\min_{w_1,\dots,w_n} \left( L(w):= {\mathbf E}_x \left( f_n(x) - f(x)\right)^2 \right).
\end{align}
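A minimal numerical sketch of this setup follows; the Monte Carlo estimate of the loss and the particular target atoms are our own illustration:

```python
import numpy as np

def ridge(a):
    # zero-one ridge function: 1 if a > 0, else 0
    return (a > 0).astype(float)

def f_target(x, v):
    # f(x) = int ridge(x^T w) dnu(w) for nu uniform over the atoms v
    return ridge(x @ v.T).mean(axis=1)

def f_network(x, w):
    # finite network f_n(x) = (1/n) sum_i ridge(x^T w_i)
    return ridge(x @ w.T).mean(axis=1)

def mean_squared_loss(w, v, num_samples=4000, seed=0):
    """Monte Carlo estimate of L(w) = E_x (f_n(x) - f(x))^2,
    with x uniform on the unit circle."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=num_samples)
    x = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    return ((f_network(x, w) - f_target(x, v)) ** 2).mean()

# target neurons on the upper-half unit circle
ang = np.linspace(0.2, np.pi - 0.2, 5)
v = np.stack([np.cos(ang), np.sin(ang)], axis=1)
```

With matched neurons ($w = v$) the loss vanishes, while mismatched neurons incur a strictly positive loss.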
As stated in the next corollary, \ref{eq:noisy_descent} optimizes $L$ up to the approximation error when the input $x$ is distributed uniformly over the unit circle.
\begin{corollary} \label{cor:neural_nets}
Suppose that the input $x$ is drawn uniformly from the unit circle. After a specific transformation of the coordinates of $w_1,\dots, w_n$, \ref{eq:noisy_descent} with $n$ particles and stepsize $\gamma_k =1/\sqrt{k}$ obtains $w^{(k)}:=[w_1^{(k)},\dots, w_n^{(k)}]$ after $k$ iterations such that
\begin{align}
{\mathbf E} \left[ L(w^{(k)})\right] =O(\frac{n}{\sqrt{k}}+\frac{1}{n})
\end{align}
holds where the expectation is taken over the algorithmic randomness of \ref{eq:noisy_descent}.
\end{corollary}
The last corollary is a consequence of part b of Theorem~\ref{thm:nonsmooth} and the approximation error established in Lemma~\ref{lemma:approx_convex}. For the proof, we use the connection between $L$ and the energy distance derived by \cite{hadi22}. While \cite{hadi22} focuses on realizable settings, the last corollary also holds in non-realizable settings, where the measure $\nu$ is not an $n$-sparse measure.
\section{Experiments}
We experimentally validate the established bounds on the approximation and optimization errors. Specifically, we validate the results on the example of the energy distance, which satisfies the required conditions for our theoretical results.
\subsection{Optimization of the energy distance}
As noted in Example~\ref{example:energy}, the energy distance is displacement convex. Furthermore, it is easy to check that this function is $2$-Lipschitz. For the sparse measures in Eq.~\eqref{eq:disc_measure}, the energy distance has the following form
\begin{multline*}
n^2 E(\mu,\nu) = 2\sum_{i,j=1}^n |w_i - v_j| \\
- \sum_{i,j=1}^n | v_i - v_j | - \sum_{i,j=1}^n | w_i - w_j|,
\end{multline*}
where $n=100$ in this experiment.
We draw $v_1,\dots, v_n$ at random from uniform$[0,1]$. Since $E$ is not a smooth function, we use \ref{eq:noisy_descent} to optimize $w_1,\dots, w_n \in {\mathbb{R}}$. In particular, we draw the noise vectors $\xi_i^{(k)}$ i.i.d. from uniform$[-0.05,0.05]$. For the stepsize, we use $\gamma_k = 1/\sqrt{k}$, in line with the convergence result of Theorem~\ref{thm:nonsmooth} (part b). In Figure~\ref{fig:energy_distance}, we observe a match between the theoretical and experimental convergence rates for \ref{eq:noisy_descent}.
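A condensed version of this experiment can be sketched as follows; the seed, the horizon, and the initial particle placement (away from the target, so the decrease is visible) are our own choices:

```python
import numpy as np

def energy_distance(w, v):
    # discrete energy distance:
    # n^2 E = 2 sum |w_i - v_j| - sum |v_i - v_j| - sum |w_i - w_j|
    n = len(w)
    d_wv = np.abs(w[:, None] - v[None, :]).sum()
    d_ww = np.abs(w[:, None] - w[None, :]).sum()
    d_vv = np.abs(v[:, None] - v[None, :]).sum()
    return (2.0 * d_wv - d_ww - d_vv) / n**2

def energy_grad(w, v):
    # subgradient of E with respect to each particle w_i
    n = len(w)
    g_wv = np.sign(w[:, None] - v[None, :]).sum(axis=1)
    g_ww = np.sign(w[:, None] - w[None, :]).sum(axis=1)
    return (2.0 * g_wv - 2.0 * g_ww) / n**2

rng = np.random.default_rng(0)
n = 100
v = rng.uniform(0.0, 1.0, size=n)      # target particles
w = rng.uniform(2.0, 3.0, size=n)      # initial particles, away from the target
e_init = energy_distance(w, v)
for k in range(1, 3001):
    gamma = 1.0 / np.sqrt(k)
    xi = rng.uniform(-0.05, 0.05, size=n)   # noise injection of PGD
    w = w - gamma * (energy_grad(w, v) + xi / np.sqrt(n))
e_final = energy_distance(w, v)
```

The energy distance stays non-negative throughout and decreases markedly over the run.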
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{images/energy_distance_dc.pdf}
\caption{\footnotesize{\textbf{The convergence of \ref{eq:noisy_descent} for the energy distance.} Horizontal: $\log(k)$; vertical: $\log(E(\mu_k,\nu)- E(\widehat{\mu},\nu))$. The red dashed line is the theoretical convergence rate. The blue line is the convergence observed in practice for the average of 10 independent runs. }}
\label{fig:energy_distance}
\end{figure}
\subsection{Approximation error for the energy distance}
Lemma~\ref{lemma:approx_convex} establishes an $O(1/n)$ approximation error for convex functions of measures. Although the energy distance $E(\mu,\nu)$ is not convex in the support of $\mu$, it is convex and smooth in $\mu$, as stated in Lemma~\ref{lemma:energy_distance}. Thus, the $O(1/n)$ approximation error holds for the energy distance. We experimentally validate this result. Consider recovering $\nu=$ uniform$[-1,1]$ by minimizing the energy distance
\begin{multline*}
E(\mu,\nu) = \frac{2}{n}\sum_{i=1}^n \int | w_i - v| d\nu(v) \\ - \frac{1}{n^2}\sum_{i,j=1}^n | w_i- w_j| - \int | v- v'|d\nu(v) d\nu(v').
\end{multline*}
The above integrals can be computed in closed form using $\int_{-1}^1|w-v|\,dv = w^2+1$ for $|w| \leq 1$. Hence, we can compute the derivative of $E$ with respect to $w_i$. We run \ref{eq:noisy_descent} with the stepsize determined in part b of Theorem~\ref{thm:nonsmooth} for $k=3\times10^5$ iterations and various $n \in \{2^2, \dots, 2^8\}$. Figure~\ref{fig:my_label} shows how the error decreases with $n$ on a log-log scale. In this plot, we observe that $E$ enjoys a mildly better approximation error than the established bound $O(1/n)$.
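The closed form can be sanity-checked numerically. The sketch below verifies $\int_{-1}^{1}|w-v|\,dv = w^2+1$ for $|w|\leq 1$ with respect to the Lebesgue measure on $[-1,1]$ (under the uniform density $1/2$, the value is halved):

```python
import numpy as np

def abs_integral_exact(w):
    # closed form of int_{-1}^{1} |w - v| dv for |w| <= 1
    return w**2 + 1.0

def abs_integral_midpoint(w, num=200000):
    # midpoint-rule approximation of the same Lebesgue integral;
    # the rule is exact on each cell of the piecewise-linear integrand
    # except the single cell containing the kink at v = w
    h = 2.0 / num
    v = np.linspace(-1.0 + h / 2.0, 1.0 - h / 2.0, num)
    return np.abs(w - v).sum() * h

for w in [-0.9, -0.3, 0.0, 0.4, 1.0]:
    assert abs(abs_integral_exact(w) - abs_integral_midpoint(w)) < 1e-6
```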
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{images/approximation.pdf}
\caption{\footnotesize{\textbf{Approximation error for the energy distance.} Horizontal: $n$; vertical: $E(\widehat{\mu}_n, \nu)$ where $\widehat{\mu}_n$ is obtained by $3\times 10^5$ iterations of \ref{eq:noisy_descent} with $n$ particles. The red dashed line is the theoretical $O(1/n)$-bound for the approximation error. The plot is in the $\log$-scale for both axes. The (blue) plot shows the average of 10 independent runs. }}
\label{fig:my_label}
\end{figure}
\section{Discussions}
We establish a non-asymptotic convergence rate for particle gradient descent when optimizing displacement convex functions of measures. Leveraging this convergence rate, we prove that the optimization of displacement convex functions of (infinite-dimensional) measures can be solved in time polynomial in the input dimension and the desired accuracy. This finding will be of interest to various communities, including non-convex optimization, optimal transport theory, particle-based sampling, and theoretical physics.
The established convergence rates are limited to particle gradient descent. Yet, there may be other algorithms that converge faster. The convex optimization literature has established lower bounds on the complexity of optimizing convex functions (with first-order derivatives)~\cite{nesterov1999global}. Given that displacement convex functions do not obey the conventional notion of convexity, it is not clear whether these lower bounds extend to this specific class of non-convex functions. More research is needed to establish (oracle-based) lower bounds on the computational complexity of displacement convex optimization.
Nesterov's accelerated gradient descent enjoys a considerably faster convergence compared to gradient descent in convex optimization. Indeed, this method attains the optimal convergence rate using only first-order derivatives of smooth convex functions~\cite{nesterov1999global}. This motivates future research to analyze the convergence of accelerated gradient descent on displacement convex functions.
We provided examples of displacement convex functions, including the energy distance. Displacement convex functions are not limited to these examples. A natural progression of this work is to assess the displacement convexity of various non-convex functions. In particular, non-convex functions invariant to permutations of their coordinates, including latent variable models and matrix factorization~\cite{anandkumar2014tensor}, may obey displacement convexity under weak assumptions.
A major limitation of our result is the exclusion of displacement convex functions with entropy regularizers, which emerge frequently in physics~\cite{mccann1997convexity}. Although the entropy is displacement convex, it is not well-defined for measures supported on a sparse set of particles. Thus, particle gradient descent is not practical for the optimization of functions with entropy regularizers. To optimize such functions, the existing literature uses a system of interacting particles solving a stochastic differential equation~\cite{philipowski2007interacting}. In the asymptotic regime, this algorithm implements a gradient flow converging to the globally optimal measure~\cite{philipowski2007interacting}. To assess the complexity of these particle-based algorithms, we need non-asymptotic analyses for a finite number of particles.
\section*{Acknowledgments and Disclosure of Funding}
We thank Francis Bach, Lenaic Chizat and Philippe Rigollet for their helpful discussions on the related literature on particle-based sampling, the energy distance minimization and Riemannian optimization. This project was funded by the Swiss National Science Foundation (grant P2BSP3\_195698).
\section{Displacement convexity}
\subsection{Proof of Proposition~\ref{proposition:non-convex}}
Suppose that $w_1^*, \dots, w_n^*$ is the unique minimizer of $F$ with $w_1^* \neq w_2^*$. Let $\sigma$ be a non-monotone permutation of $\{1,\dots, n\}$. Interpolating the parameters $\{w_i^*\}$ with the permuted $\{w_{\sigma(i)}^*\}$ shows that $F$ is not convex, since
\begin{align*}
F\left(\frac{1}{n}\sum_{i=1}^n \delta_{(1-t)w_i^* + t w_{\sigma(i)}^* }\right) > (1-t) F\left(\frac{1}{n}\sum_{i=1}^n \delta_{w_i^*}\right) + t F\left(\frac{1}{n}\sum_{i=1}^n \delta_{w_{\sigma(i)}^*}\right)
\end{align*}
holds for $t \in (0,1)$ by the uniqueness of the minimizer. Thus, $F$ is not convex.
\subsection{Proof of Proposition~\ref{prop:Lipschitz}}
The range of $\Phi$ lies in a Hilbert space with a norm induced by an inner product. Thus,
\begin{align*}
\left\| \int \Phi d\mu - \int \Phi d\nu \right\|^2 & = \left\langle \int \Phi d\mu - \int \Phi d\nu, \int \Phi d\mu - \int \Phi d\nu \right\rangle \\
& = \left\langle \int \Phi d\mu, \int \Phi d\mu \right\rangle - 2 \left\langle \int \Phi d\mu, \int \Phi d\nu \right\rangle + \left\langle \int \Phi d\nu, \int \Phi d\nu \right\rangle \\
& = \left( \text{MMD}_K(\mu,\nu)\right)^2.
\end{align*}
The above result together with the Lipschitz property of $R$ concludes the proof.
\subsection{Example~\ref{lemma:weakconvex_w2}}
\begin{align*}
\text{Define:} \quad \quad &
\sigma^*= \arg \underbrace{\min_{\sigma} \sum_{i=1}^n \| w_i - v_{\sigma(i)}\|^2}_{=W_2^2(\mu,\nu)}. \\
\text{Define:} \quad \quad & \mu_t = \frac{1}{n} \sum_{i=1}^n \delta_{(1-t)w_i + t v_{\sigma^*(i)}}.
\end{align*}
According to the definition of $W_2$, we have
\begin{align*}
W_2^2(\mu_t,\nu) \leq \sum_{i=1}^n \| (1-t) w_i + t v_{\sigma^*(i)} - v_{\sigma^*(i)}\|^2 = (1-t)^2 W_2^2(\mu,\nu) \leq (1-t) W_2^2(\mu,\nu) + \underbrace{t W_2^2(\nu,\nu)}_{=0}.
\end{align*}
\subsection{Example~\ref{example:tensor}}
\begin{align*}
\text{Define:} \quad \quad &\sigma = \arg\min_{\sigma'} \sum_{i} \left\| w_{i}- v_{\sigma'(i)} \right\|^2. \\
\text{Define:} \quad \quad &
g(t) = G\left(\underbrace{(1-t) w_1 + t v_{\sigma(1)}}_{w_1(t)},\dots, (1-t) w_n + t v_{\sigma(n)}\right).
\end{align*}
To prove that $G$ is star displacement convex, we need to show that $-g'(0) \geq G - G^*$ holds.
To validate this inequality, we take the derivative of $g$ with respect to $t$:
\begin{align*}
\frac{1}{3}g'(t) = -\sum_{j} \sum_{i} \left( \frac{\langle v_i, w_j(t) \rangle^2 \langle v_i, (v_{\sigma(j)}-w_j(t)) \rangle}{\| w_j(t) \|^3} + \frac{\langle v_i,w_j(t)\rangle^3 \langle w_j(t),w_j- v_{\sigma(j)}\rangle }{\|w_j(t)\|^3}\right).
\end{align*}
For $w_1,\dots, w_n \in \mathcal{S}_{d-1}$, we get
\begin{align}
\frac{1}{3}g'(0) = \sum_{j} \langle w_j,v_{\sigma(j)}\rangle \left( \sum_{i} \langle v_i, w_j \rangle^3 - \langle w_j, v_{\sigma(j)} \rangle \right),
\end{align}
where we used the orthogonality of $v_1,\dots, v_n$.
Since $\sigma$ is an optimal permutation, we have
\begin{align}
n \sum_{j} \langle w_j , v_{\sigma(j)} \rangle^2 \geq \sum_{i} \sum_{j} \langle w_i, v_{j} \rangle^2 = n,
\end{align}
where we use the orthogonality of $v_1,\dots, v_n$ to get the last equality.
Therefore,
\begin{align}
-\frac{1}{3}g'(0) & \geq 1 - \sum_{i,j=1}^n \langle v_i, w_j \rangle^3 = G - G^*
\end{align}
\subsection{Properties of $\lambda$-displacement convex functions}
Let $T$ denote the optimal transport map from $\mu$ to $\nu$, which achieves
\begin{align}
W_2^2(\mu,\nu) =\sum_{i=1}^n \| w_i - T(w_i)\|^2_2
\end{align}
Let $g(t) = F(\mu_t)$, where $\mu_t$ is obtained by the displacement interpolation of $\mu$ and $\nu$. Taking the derivative of $g$ with respect to $t$ leads to the following important inequality~\cite{santambrogio2017euclidean}:
\begin{align} \label{eq:gradient_bound}
\underbrace{\sum_{i} \langle w_i-T(w_i), \partial_{w_i} F(\mu)\rangle}_{-g'(0)} &\geq F(\mu) - F(\nu) +\frac{\lambda}{2} W_2^2(\mu,\nu)
\end{align}
\subsection{Properties of smooth and displacement convex functions}
The next Theorem establishes properties of smooth and displacement convex functions.
\begin{theorem} \label{thm:smoothness_consequences}
An $\ell$-smooth and $(\lambda\geq 0)$-displacement convex function $F$ obeys
\begin{equation} \tag{i} \sum_{i} \| \partial_{w_i} F(\mu) - \partial_{T(w_i)} F(\nu) \|^2 \leq \ell^2 W_2^2(\mu,\nu)
\end{equation}
\begin{equation} \tag{ii}\| \partial_{w_i} F(\mu) - \partial_{w_j} F(\mu) \| \leq \ell \| w_i - w_j\|
\end{equation}
\begin{equation} \tag{iii} F(\nu) \geq F(\mu) + \sum_{i} \langle \partial_{w_i} F(\mu), T(w_i) - w_i \rangle +\frac{1}{2\ell} \sum_{i} \| \partial_{T(w_i)}F(\nu) - \partial_{w_i} F(\mu)\|^2
\end{equation}
\begin{equation} \tag{iv} \label{eq:iv} \sum_{i} \langle \partial_{T(w_i)} F(\nu) - \partial_{w_i} F(\mu), T(w_i) - w_i \rangle \geq \frac{1}{\ell} \sum_{i} \| \partial_{T(w_i)}F(\nu) - \partial_{w_i} F(\mu)\|^2
\end{equation}
\begin{multline} \quad \tag{v}
\sum_{i}\langle \partial_{w_i} F(\mu)- \partial_{T(w_i)} F(\nu), w_i - T(w_i) \rangle \geq \frac{\lambda \ell}{\ell+\lambda} \sum_i \| w_i - T(w_i) \|^2 \\ + \frac{1}{\lambda + \ell}\sum_{i}\| \partial_{w_i} F(\mu) - \partial_{T(w_i)} F(\nu)\|^2 \label{eq:v}
\end{multline}
\end{theorem}
\begin{proof}
\textbf{(i)}
Since the smoothness holds for all permutations of the particles, we get
\begin{align}
\sum_{i=1}^n \| \partial_{w_i} F(\mu) - \partial_{T(w_i)} F(\nu) \|^2
& = \| \partial F(w) - \partial F(v)\|^2\\
& \leq \ell^2 \|w - v \|^2 = \ell^2 W_2^2(\mu,\nu),
\end{align}
where we use the optimal permutation of particles to obtain the last equality.\\
\textbf{(ii)}
Suppose that $v$ is obtained by swapping $w_i$ and $w_j$ for $i\neq j$ in $w=(w_1,\dots, w_n)$. Then, the smoothness implies
\begin{align}
\| \partial_{w_i} F(w) - \partial_{w_j} F(v) \| \leq \ell \| w_i - w_j\| \label{eq:smoothness_consequence}
\end{align}
\textbf{(iii)}
Akin to the proof of
Theorem 2.1.5 in \cite{nesterov1999global}, we define
\begin{align*}
Q(\nu) = F(\nu) - \sum_{i} \langle T(w_i), \partial_{w_i} F(\mu) \rangle
\end{align*}
Displacement convexity, more precisely Eq.~\eqref{eq:gradient_bound}, ensures that $\mu$ is the minimizer of the above functional since
\begin{align}
F(\nu) - F(\mu) - \sum_{i} \langle T(w_i)-w_i, \partial_{w_i} F(\mu)\rangle \geq 0
\end{align}
$\ell$-smoothness ensures \cite{nesterov1999global}
\begin{align}
Q(\mu) & \leq Q\left( \frac{1}{n} \sum_{i=1}^n \delta_{v_i - \frac{1}{\ell} \partial_{v_i} Q(\nu)}\right) \\
&\leq Q(\nu) - \frac{1}{2\ell} \underbrace{\sum_{i=1}^n \| \partial_{v_i} Q(\nu)\|^2}_{= \sum_{i} \|\partial_{T(w_i)} F(\nu) - \partial_{w_i} F(\mu) \|^2 }
\end{align}
\end{proof}
We will repeatedly use the above inequalities in the subsequent analysis. The proofs of the remaining parts (iv) and (v) follow.
\noindent
\textbf{(iv)} Inequality (iii) ensures
\begin{align}
F(\nu) & \geq F(\mu) + \sum_{i} \langle \partial_{w_i} F(\mu), T(w_i) - w_i \rangle +\frac{1}{2\ell} \sum_{i} \| \partial_{T(w_i)}F(\nu) - \partial_{w_i} F(\mu)\|^2 \\
F(\mu) & \geq F(\nu) + \sum_{i} \langle \partial_{T(w_i)} F(\nu), w_i - T(w_i) \rangle +\frac{1}{2\ell} \sum_{i} \| \partial_{T(w_i)}F(\nu) - \partial_{w_i} F(\mu)\|^2
\end{align}
Summing up the above two inequalities concludes (iv).
\noindent
\textbf{(v)} The proof is similar to that of Theorem 2.1.11 in \cite{nesterov1999global}. Recall that $T$ is the optimal transport map from $\mu$ to $\nu$ and $\mu_t$ is the displacement interpolation between $\mu$ and $\nu$. First, we define the function $\phi(\mu) = F(\mu) - \frac{\lambda}{2} \sum_{i=1}^n \| w_i \|^2$ and prove that $\phi$ is $0$-displacement convex.
\begin{align}
F(\mu_t) & \leq (1-t) F(\mu) + t F(\nu) - \left(\frac{(1-t) t \lambda }{2}\right)W_2^2(\mu,\nu) \\
\phi(\mu) & = F(\mu) - \frac{\lambda}{2} \sum_{i=1}^n \| w_i\|^2 \\
\phi(\nu) & = F(\nu) - \frac{\lambda}{2} \sum_{i=1}^n \| T(w_i)\|^2
\end{align}
Putting the above three equations together yields
\begin{multline}
(1-t) \phi(\mu) + t \phi(\nu) - \phi(\mu_t) \geq \left(\frac{\lambda t (1-t)}{2}\right) \sum_{i} \| w_i - T(w_i)\|^2 \\
- \frac{\lambda(1-t)}{2} \sum_{i}\|w_i\|^2 - \frac{\lambda t}{2} \sum_{i} \|T(w_i)\|^2 \\
+ \frac{\lambda}{2} \sum_{i} \| (1-t) w_i + t T(w_i)\|^2
\end{multline}
Expanding the last term shows that $\phi$ is $0$-displacement convex. Since $\phi$ is moreover $(\ell-\lambda)$-smooth, we can use (\ref{eq:iv}) to get
\begin{align}
\sum_{i} \langle \partial_{w_i} \phi(\mu) - \partial_{T(w_i)} \phi(\nu), w_i - T(w_i) \rangle \geq \frac{1}{\ell-\lambda} \sum_{i} \| \partial_{w_i} \phi(\mu) - \partial_{T(w_i)} \phi(\nu) \|^2
\end{align}
A rearrangement of the terms concludes (v).
\section{Smooth displacement convex optimization}
\subsection{Proof of Theorem~\ref{thm:smooth}.a}
\subsection*{Contraction}
We first prove that the particles contract in $W_2$.
\begin{lemma}[Contraction] \label{lemma:contraction}
Suppose that $\mu_{k}$ is obtained by particle gradient descent starting from $n$-sparse measures with distinct particles. If $F$ is $\lambda$-displacement convex and $\ell$-smooth, then
\begin{align}
W_2^2(\mu_{(k+1)},\widehat{\mu}) \leq \left(1-\left(\frac{2 \lambda \ell \gamma}{\ell+\lambda}\right)\right) W_2^2(\mu_{k},\widehat{\mu})
\end{align}
holds for $\gamma \leq 2/(\ell+\lambda)$.
\end{lemma}
\begin{proof}
In order to leverage displacement convexity, we need to ensure that the optimal transport map from $\mu_k$ to $\widehat{\mu}$ exists for all $k$.
The next lemma proves that $\mu_k$ has distinct particles whenever $\mu_0$ does; hence, the optimal transport map exists under weak conditions on $\mu_0$.
\begin{lemma} \label{lemma:smooth_dist} For an $\ell$-smooth $F$ and $\gamma<1/\ell$, $w_i^{(k+1)} = w_j^{(k+1)}$ holds only if $w_i^{(k)} = w_j^{(k)}$.
\end{lemma}
Therefore, the optimal transport map from $\mu:= \mu_k$ to $\widehat{\mu}$ is well defined and denoted by $T$. We leverage this transport to couple $\mu_+ := \mu_{k+1}$ with $\widehat{\mu}$. The optimality of the transport implies that
\begin{align}
\sum_{i} \| w_i - T(w_i) \|^2 = W_2^2(\mu,\widehat{\mu}).
\end{align}
Using the recurrence of gradient descent, we get
\begin{align}
W_2^2 \left( \mu_{+} , \widehat{\mu} \right) \leq \sum_{i} \left( \| w_i - T(w_i) \|^2 + 2 \gamma \langle T(w_i) - w_i, \partial_{w_i} F(\mu) \rangle\right) +
\gamma^2 \sum_{i} \| \partial_{w_i} F(\mu) - \underbrace{\partial_{T(w_i)} F(\widehat{\mu})}_{=0} \|^2.
\end{align}
According to Theorem~\ref{thm:smoothness_consequences}.\ref{eq:v}, we have
\begin{align*}
\sum_{i}
\left(\langle w_i - T(w_i), \partial_{w_i} F(\mu) \rangle\right) \geq \frac{\lambda \ell}{\ell+\lambda} W_2^2(\mu,\widehat{\mu}) + \frac{1}{\ell+\lambda} \sum_{i} \| \partial_{w_i} F(\mu)- \partial_{T(w_i)} F(\widehat{\mu}) \|^2
\end{align*}
Combining the last two inequalities, we get
\begin{align*}
W_2^2(\mu_{+},\widehat{\mu}) \leq W_2^2(\mu,\widehat{\mu}) - \left(\frac{2 \lambda \ell \gamma}{\ell+\lambda}\right)W_2^2(\mu,\widehat{\mu}) + \left(\gamma^2 -\frac{2\gamma}{\ell+\lambda}\right)\sum_{i} \| \partial_{w_i} F(\mu) - \underbrace{\partial_{T(w_i)} F(\widehat{\mu})}_{=0} \|^2
\end{align*}
For $\gamma \leq 2/(\ell+\lambda) $, we get
\begin{align}
W_2^2(\mu_{+},\widehat{\mu}) \leq \left(1-\left(\frac{2 \lambda \ell \gamma}{\ell+\lambda}\right)\right) W_2^2(\mu,\widehat{\mu}).
\end{align}
\end{proof}
\subsection*{Convergence proof}
A straightforward application of Cauchy-Schwarz to Eq.~\eqref{eq:gradient_bound} yields
\begin{align}
W_2(\mu,\widehat{\mu}) \sqrt{ \sum_{i} \| \partial_{w_i} F(\mu)\|^2 } \geq F(\mu) - F(\widehat{\mu})
\end{align}
holds for all $\mu$ with $n$ particles.
Invoking Theorem~\ref{thm:smoothness_consequences}.(i), we get
\begin{align}
\sum_{i} \| \partial_{w_i} F(\mu)\|^2 \leq \ell^2 W_2^2(\mu,\widehat{\mu})
\end{align}
Combining the last two inequalities, we get
\begin{align}
F(\mu) - F(\widehat{\mu}) \leq \ell W_2^2(\mu,\widehat{\mu})
\end{align}
Combining the above inequality with the contraction established in the last lemma, we get
\begin{align}
F(\mu_{k+1}) - F(\widehat{\mu}) \leq \ell \left(1- \frac{2 \lambda \ell \gamma}{\ell+\lambda}\right)^{k+1} W_2^2(\mu_0,\widehat{\mu}).
\end{align}
\subsection{Proof of Theorem~\ref{thm:smooth}.c.}
It is known that gradient descent decreases smooth functions in each iteration as long as $\gamma \leq 1/\ell$ (see Theorem 2.1.13 of \cite{nesterov1999global}):
\begin{align} \label{eq:f_decrease_smooth}
F(\mu_{k+1}) \leq F(\mu_k) - \gamma \left(1- \ell \gamma/2 \right) \sum_{i} \| \partial_{w_i} F(\mu_k)\|^2
\end{align}
Star displacement convexity ensures
\begin{align}
\sum_{i} \langle w_i-T(w_i), \partial_{w_i} F(\mu_k) \rangle \geq F(\mu_k) - F(\widehat{\mu}).
\end{align}
A straightforward application of Cauchy-Schwarz yields
\begin{align}
\sum_i \langle w_i - T(w_i), \partial_{w_i} F(\mu) \rangle &\leq \sum_{i} \| w_i - T(w_i)\| \| \partial_{w_i} F(\mu)\| \\
& \leq W_2(\mu_k,\widehat{\mu}) \sqrt{\sum_{i} \| \partial_{w_i} F(\mu)\|^2 }
\end{align}
where the last inequality is Cauchy-Schwarz. Note that $W_2^2(\mu_k,\widehat{\mu})$ is non-increasing in $k$ for $\gamma<1/\ell$ (see the proof of part b), hence it is bounded by $r^2$.
Combining the last two inequalities, we get
\begin{align}
\sum_i \| \partial_{w_i} F(\mu)\|^2 \geq \frac{\left( F(\mu_k) - F(\widehat{\mu})\right)^2}{W_2^2(\mu_k,\widehat{\mu})}
\end{align}
We introduce the compact notation $\Delta_k = F(\mu_k) - F(\widehat{\mu})$. Plugging the last inequality into Eq.~\eqref{eq:f_decrease_smooth} obtains
\begin{align}
\Delta_{k+1} & \leq \Delta_k - \underbrace{\left(\frac{\gamma}{W_2^2(\mu_k,\widehat{\mu})}\right)}_{\geq \gamma/r^2}(1-\ell\gamma/2) \Delta_k^2
\end{align}
Setting $c = (\gamma/r^2)(1-\ell \gamma/2)$ and dividing by $\Delta_k \Delta_{k+1}$ yields
\begin{align}
\frac{1}{\Delta_{k+1}} \geq \frac{1}{\Delta_k (1-c\Delta_k)} \geq \frac{1}{\Delta_k} + c.
\end{align}
Following the argument of Theorem 2.1.13 in \cite{nesterov1999global}, summing this inequality over $k$ concludes the proof.
\subsection{Proof of Theorem~\ref{thm:smooth}.b}
Suppose that $T$ is the optimal transport map from $\mu_k$ to $\widehat{\mu}$. Expanding the coupling induced by $T$ along the gradient update, we have
\begin{align}
W_2^2 (\mu_{k+1}, \widehat{\mu} ) \leq \sum_{i} \left( \| T(w_i) - w_i \|^2 + 2 \gamma \langle T(w_i) - w_i, \partial_{w_i} F(\mu_k) \rangle +
\gamma^2 \| \partial_{w_i} F(\mu_k) \|^2 \right)
\end{align}
Replacing $\nu = \widehat{\mu}$ and $\mu= \mu_k$ in Theorem~\ref{thm:smoothness_consequences}.(iii) yields
\begin{align}
\sum_i \langle w_i-T(w_i),\partial_{w_i} F(\mu_k) \rangle \geq \frac{1}{2\ell} \sum_{i} \| \partial_{w_i} F(\mu_k) \|^2
\end{align}
Incorporating the above inequality into the established bound for $W_2$ leads to
\begin{align}
W_2^2(\mu_{k+1},\widehat{\mu}) \leq W_2^2(\mu_k,\widehat{\mu}) + (\gamma^2-\gamma/\ell) \sum_i \| \partial_{w_i} F(\mu_k)\|^2.
\end{align}
Thus, $W_2(\mu_k,\widehat{\mu})$ is non-increasing in $k$ for $\gamma\leq 1/\ell$. Since a $0$-displacement convex function is also star displacement convex, invoking part c of Theorem~\ref{thm:smooth} with $r^2 = W_2^2(\mu_0,\widehat{\mu})$ concludes the proof.
\subsection{Proof of Lemma~\ref{lemma:smooth_dist}}
The proof is similar to the analysis of smooth programs in \cite{lee2016gradient}. According to the update rule, $w_{i}^{(k+1)}= w_j^{(k+1)}$ holds if
\begin{align}
w_i^{(k)} - \gamma \partial_{w_i} F(\mu_k) = w_j^{(k)} - \gamma \partial_{w_j} F(\mu_k)
\end{align}
A rearrangement of terms together with Theorem~\ref{thm:smoothness_consequences}.(ii) concludes the proof:
\begin{align}
\| w_i^{(k)} - w_j^{(k)}\| = \gamma \|\partial_{w_i} F(\mu_k)- \partial_{w_j} F(\mu_k) \| \leq \gamma \ell \| w_i^{(k)}- w_j^{(k)} \| < \| w_i^{(k)} - w_j^{(k)}\|.
\end{align}
\section{Lipschitz displacement convex optimization}
\subsection{Proof of Theorem~\ref{thm:nonsmooth}.a}
The proof is inspired by the convergence analysis of gradient descent for non-smooth convex functions (Theorem 3.9 of \cite{bubeck2015convex}). The noise injection ensures that the particles remain distinct with probability one; hence, the optimal transport map from $\mu_k$ to $\widehat{\mu}$, denoted by $T$, exists with probability one. Leveraging the optimal transport $T$ and inequality~\eqref{eq:gradient_bound} obtained from $\lambda$-displacement convexity, we get
\begin{multline}
{\mathbf E}\left[ W_2^2 \left( \mu_{k+1} , \widehat{\mu} \right) \right] \leq {\mathbf E} \left[ \sum_{i} \| T(w_i) - w_i \|^2 + \gamma_k\underbrace{2 \sum_{i} \langle T(w_i) - w_i, \partial_{w_i} F(\mu_k) \rangle}_{\leq -\lambda W_2^2(\mu_k,\widehat{\mu}) + 2F(\widehat{\mu})-2F(\mu_k) } \right] \\ +
\gamma^2_k \underbrace{\sum_{i} {\mathbf E} \| \partial_{w_i} F(\mu_k) \|^2 }_{\leq L^2}
+ \frac{\gamma^2_k}{n}\sum_{i=1}^n {\mathbf E} \left[ \| \xi_i^{(k)}\|^2 \right]
\end{multline}
A rearrangement of terms yields
\begin{multline}
k {\mathbf E} \left[ F(\mu_k) - F(\widehat{\mu}) \right] \\ \leq \lambda k(k-1) {\mathbf E} \left[ W_2^2(\mu_k,\widehat{\mu}) \right] - \lambda k(k+1) {\mathbf E} \left[ W_2^2(\mu_{k+1},\widehat{\mu})\right]+\frac{(L^2+1)}{\lambda}
\end{multline}
Summing over $k=1,\dots,m$ concludes the proof as
\begin{align}
\left(\frac{m(m+1)}{2}\right) \min_{k\leq m} \left( {\mathbf E} \left[ F(\mu_k) - F(\widehat{\mu}) \right] \right) \leq \sum_{k=1}^m k\left( {\mathbf E} \left[ F(\mu_k) - F(\widehat{\mu})\right] \right) \leq \frac{m(L^2+1)}{\lambda}.
\end{align}
\subsection{Proof of Theorem~\ref{thm:nonsmooth}.b} \noindent The proof is inspired by convex analysis for non-smooth functions (Theorem 3.2.2 of \cite{nesterov1999global}). Suppose that $\widehat{\mu}$ is the minimizer of the particle program and let $T$ denote the optimal transport map from $\mu_{k}$ to $\widehat{\mu}$. The noise injection ensures that the particles of $\mu_k$ are distinct; hence, $T$ exists.
\begin{multline}
{\mathbf E} \left[ W_2^2 \left( \mu_{k+1} , \widehat{\mu} \right) \right] \leq {\mathbf E} \left[ \sum_{i} \left( \| T(w_i) - w_i \|^2 + 2 \gamma \langle T(w_i) - w_i, \partial_{w_i} F(\mu_k) \rangle \right) \right] \\ +
\gamma^2 \underbrace{ \sum_{i} {\mathbf E} \| \partial_{w_i} F(\mu) \|^2 }_{\leq L^2}
+ \frac{\gamma^2}{n}\sum_{i=1}^n {\mathbf E} \| \xi_i^{(k)}\|^2
\end{multline}
Using Eq.~\eqref{eq:gradient_bound}, we conclude that
\begin{align}
-\int \langle T(w) - w, \partial_w F(\mu)\rangle d \mu_k(w) \geq \underbrace{F(\mu_k) - F(\widehat{\mu})}_{\Delta_k} \geq 0
\end{align}
Therefore,
\begin{align}
{\mathbf E} \left[ W_2^2 \left( \mu_{k+1} , \widehat{\mu} \right) \right] \leq {\mathbf E} \left[ W_2^2 \left( \mu_{k} , \widehat{\mu}\right) \right] - 2\gamma {\mathbf E} \left[ \Delta_k \right] + \gamma^2 (L^2+1)
\end{align}
Summing over $k$ concludes the proof
\begin{align}
(m\gamma) \min_{k \in \{1, \dots, m\}} {\mathbf E} \left[ \Delta_k \right] \leq \sum_{k=1}^m \gamma {\mathbf E} \left[ \Delta_k \right] \leq W_2^2(\mu_0,\widehat{\mu}) + m \gamma^2 (L^2+1)
\end{align}
\section{Approximation error}
\subsection{Proof of Lemma~\ref{lemma:approx_convex}}
The proof is based on the convergence of Frank-Wolf algorithm. This proof technique is previously used for specific functions in neural networks~\cite{bach2017breaking}. Since this algorithm uses a (infinite-dimensional) non-convex optimization in each step, it is not implementable. Yet, we can use its convergence properties to bound the approximation error.
To introduce the algorithm, we first need to formulate the optimization over a compact domain in Banach space. We optimize $F$ over $L^2$ Hilbert spaces of functions. Let $D$ is the set of probability measures over a compact set, which is a subset of $L^2$. Frank-Wolfe method optimizes $F$ through the following iterations \cite{jaggi2013revisiting}
\begin{align}
\mu^{(k+1)} = (1-\gamma) \mu^{(k)} + \gamma s , \quad s = \arg\max_{\nu\in D} \int F'(\mu^{(k)})(x) d\nu(x),
\end{align}
where $F'$ is the functional derivative of $F$ with respect to $\mu$~\cite{santambrogio2017euclidean}.
It is easy to check that $s$ is always a Dirac measure at $\max_x F'(\mu^{(k)})(x)$. Hence, $\mu^{(n-1)}$ is a sparse measure over $n$ particles as long as $\mu^{(0)} = \delta_{w_0}$. The compactness of $D$, convexity and smoothness of $F$ ensures the rate $O(1/n)$ for the convergence of Frank-Wolfe method~\cite{jaggi2013revisiting}.
\subsection{Proof of Lemma~\ref{lemma:energy_distance}}
The proof is a straightforward application of the key observation in \cite{hadi22}. \citet{hadi22} show the energy distance can be written alternatively as an MMD$_K$ distance with a positive definite kernel $K$, which is quadratic convex function in $\mu$. Since the kernel $K$ is Lipschitz, $L$ is smooth in the measure.
\section{Applications for neural networks}
\subsection{Proof of Corollary~\ref{cor:neural_nets}}
\cite{hadi22} proves that $L$ is equivalent to the $E$ in polar coordinates. Invoking the part b of Theorem~\ref{thm:nonsmooth} concludes the rate.
\section{Displacement convexity}
\subsection{Proof of Proposition~\ref{proposition:non-convex}}
Suppose that $w_1^*, \dots, w_n^*$ is the unique minimizer of $F$ such that $w_1^* \neq w_2^*$. Let $\sigma$ be a permutation of $\{1,\dots, n\}$ other than the identity. Since $F$ is invariant to permutations of the particles, $\frac{1}{n}\sum_{i=1}^n \delta_{w_{\sigma(i)}^*}$ is also a minimizer, while for $t\in(0,1)$ the interpolated configuration differs from the unique minimizer. Hence
\begin{align*}
F\left(\frac{1}{n}\sum_{i=1}^n \delta_{(1-t)w_i^* + t w_{\sigma(i)}^* }\right) > (1-t) F\left(\frac{1}{n}\sum_{i=1}^n \delta_{w_i^*}\right) + t F\left(\frac{1}{n}\sum_{i=1}^n \delta_{w_{\sigma(i)}^*}\right),
\end{align*}
which violates the defining inequality of convexity. Thus, $F$ is not convex.
\subsection{Proof of Proposition~\ref{prop:Lipschitz}}
The range of $\Phi$ lies in a Hilbert space with a norm induced by an inner product. Thus,
\begin{align*}
\left\| \int \Phi d\mu - \int \Phi d\nu \right\|^2 & = \left\langle \int \Phi d\mu - \int \Phi d\nu, \int \Phi d\mu - \int \Phi d\nu \right\rangle \\
& = \left\langle \int \Phi d\mu, \int \Phi d\mu \right\rangle - 2 \left\langle \int \Phi d\mu, \int \Phi d\nu \right\rangle + \left\langle \int \Phi d\nu, \int \Phi d\nu \right\rangle \\
& = \left( \text{MMD}_K(\mu,\nu)\right)^2.
\end{align*}
The above result together with the Lipschitz property of $R$ concludes the proof.
\subsection{Example ~\ref{lemma:weakconvex_w2}}
\begin{align*}
\text{Define:} \quad \quad &
\sigma^*= \arg \underbrace{\min_{\sigma} \sum_{i=1}^n \| w_i - v_{\sigma(i)}\|^2}_{=W_2^2(\mu,\nu)}. \\
\text{Define:} \quad \quad & \mu_t = \frac{1}{n} \sum_{i=1}^n \delta_{(1-t)w_i + t v_{\sigma^*(i)}}.
\end{align*}
According to the definition of $W_2$, we have
\begin{align*}
W_2^2(\mu_t,\nu) \leq \sum_{i=1}^n \| (1-t) w_i + t v_{\sigma^*(i)} - v_{\sigma^*(i)}\|^2 = (1-t)^2 W_2^2(\mu,\nu) \leq (1-t) W_2^2(\mu,\nu) + \underbrace{t W_2^2(\nu,\nu)}_{=0}.
\end{align*}
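For uniform discrete measures, $W_2^2$ reduces to an optimal assignment problem, so the inequality above can be checked numerically. The following Python sketch is illustrative only; the particle values and dimensions are assumed, and SciPy's Hungarian solver plays the role of the minimization over $\sigma$:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, d = 6, 3
W = rng.normal(size=(n, d))  # particles of mu (assumed)
V = rng.normal(size=(n, d))  # particles of nu (assumed)

def w2_sq(A, B):
    # W_2^2 between uniform discrete measures = optimal assignment cost
    cost = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum(), cols

base, sigma = w2_sq(W, V)
for t in [0.25, 0.5, 0.75]:
    Wt = (1 - t) * W + t * V[sigma]      # displacement interpolation mu_t
    val, _ = w2_sq(Wt, V)
    assert val <= (1 - t) * base + 1e-9  # W_2^2(mu_t, nu) <= (1-t) W_2^2(mu, nu)
```

The interpolated assignment $i \mapsto \sigma^*(i)$ already costs $(1-t)^2 W_2^2(\mu,\nu)$, and the optimal assignment can only be cheaper.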
\subsection{Example~\ref{example:tensor}}
\begin{align*}
\text{Define:} \quad \quad &\sigma = \arg\min_{\sigma'} \sum_{i} \left\| w_{i}- v_{\sigma'(i)} \right\|^2. \\
\text{Define:} \quad \quad &
g(t) = G\left(\underbrace{(1-t) w_1 + t v_{\sigma(1)}}_{w_1(t)},\dots, (1-t) w_n + t v_{\sigma(n)}\right).
\end{align*}
To prove $G$ is star displacement convex, we need to show that $-g'(0) \geq G - G^*$ holds.
To validate this inequality, we take the derivative of $g$ with respect to $t$:
\begin{align*}
\frac{1}{3}g'(t) = -\sum_{j} \sum_{i} \left( \frac{\langle v_i, w_j(t) \rangle^2 \langle v_i, (v_{\sigma(j)}-w_j(t)) \rangle}{\| w_j(t) \|^3} + \frac{\langle v_i,w_j(t)\rangle^3 \langle w_j(t),w_j- v_{\sigma(j)}\rangle }{\|w_j(t)\|^3}\right).
\end{align*}
For $w_1,\dots, w_n \in \mathcal{S}_{d-1}$, we get
\begin{align}
\frac{1}{3}g'(0) = \sum_{j} \langle w_j,v_{\sigma(j)}\rangle \left( \sum_{i} \langle v_i, w_j \rangle^3 - \langle w_j, v_{\sigma(j)} \rangle \right),
\end{align}
where we used the orthogonality of $v_1,\dots, v_n$.
Since $\sigma$ is an optimal permutation, we have
\begin{align}
n \sum_{j} \langle w_j , v_{\sigma(j)} \rangle^2 \geq \sum_{i} \sum_{j} \langle w_i, v_{j} \rangle^2 = n,
\end{align}
where we use the orthogonality of $v_1,\dots, v_n$ to get the last equality.
Therefore,
\begin{align}
-\frac{1}{3}g'(0) & \geq 1 - \sum_{i,j=1}^n \langle v_i, w_j \rangle^3 = G - G^*
\end{align}
\subsection{Properties of $\lambda$-convex functions}
Let $T$ denote the optimal transport from $\mu$ to $\nu$ which achieves
\begin{align}
W_2^2(\mu,\nu) =\sum_{i=1}^n \| w_i - T(w_i)\|^2_2
\end{align}
Let $g(t) = F(\mu_t)$ where $\mu_t$ is obtained by the displacement interpolation of $\mu$ and $\nu$. Taking the derivative of $g$ with respect to $t$ leads to the following important inequality \cite{santambrogio2017euclidean}:
\begin{align} \label{eq:gradient_bound}
\underbrace{\sum_{i} \langle w_i-T(w_i), \partial_{w_i} F(\mu)\rangle}_{-g'(0)} &\geq F(\mu) - F(\nu) +\frac{\lambda}{2} W_2^2(\mu,\nu)
\end{align}
\subsection{Properties of smooth and displacement convex functions}
The next Theorem establishes properties of smooth and displacement convex functions.
\begin{theorem} \label{thm:smoothness_consequences}
An $\ell$-smooth $F$ and $(\lambda\geq 0)$-displacement convex function obeys
\begin{equation} \tag{i} \sum_{i} \| \partial_{w_i} F(\mu) - \partial_{T(w_i)} F(\nu) \|^2 \leq \ell^2 W_2^2(\mu,\nu)
\end{equation}
\begin{equation} \tag{ii}\| \partial_{w_i} F(\mu) - \partial_{w_j} F(\mu) \| \leq \ell \| w_i - w_j\|
\end{equation}
\begin{equation} \tag{iii} F(\nu) \geq F(\mu) + \sum_{i} \langle \partial_{w_i} F(\mu), T(w_i) - w_i \rangle +\frac{1}{2\ell} \sum_{i} \| \partial_{T(w_i)}F(\nu) - \partial_{w_i} F(\mu)\|^2
\end{equation}
\begin{equation} \tag{iv} \label{eq:iv} \sum_{i} \langle \partial_{T(w_i)} F(\nu) - \partial_{w_i} F(\mu), T(w_i) - w_i \rangle \geq \frac{1}{\ell} \sum_{i} \| \partial_{T(w_i)}F(\nu) - \partial_{w_i} F(\mu)\|^2
\end{equation}
\begin{multline} \quad \tag{v}
\sum_{i}\langle \partial_{w_i} F(\mu)- \partial_{T(w_i)} F(\nu), w_i - T(w_i) \rangle \geq \frac{\lambda \ell}{\ell+\lambda} \sum_i \| w_i - T(w_i) \|^2 \\ + \frac{1}{\lambda + \ell}\sum_i \| \partial_{w_i} F(\mu) - \partial_{T(w_i)} F(\nu)\|^2 \label{eq:v}
\end{multline}
\end{theorem}
\begin{proof}
\textbf{(i)}
Since smoothness holds for every permutation of the particles, we get
\begin{align}
\sum_{i=1}^n \| \partial_{w_i} F(\mu) - \partial_{T(w_i)} F(\nu) \|^2
& = \| \partial F(w) - \partial F(v)\|^2\\
& \leq \ell^2 \|w - v \|^2 = \ell^2 W_2^2(\mu,\nu),
\end{align}
where we use a permutation of the particles to get the last equality.\\
\textbf{(ii)}
Suppose that $v$ is obtained by swapping $w_i$ and $w_j$ for $i\neq j$ in $w=(w_1,\dots, w_n)$. Then, the smoothness implies
\begin{align}
\| \partial_{w_i} F(w) - \partial_{w_j} F(v) \| \leq \ell \| w_i - w_j\| \label{eq:smoothness_consequence}
\end{align}
\textbf{(iii)}
Akin to the proof of
Theorem 2.1.5 in \cite{nesterov1999global}, we define
\begin{align*}
Q(\nu) = F(\nu) - \sum_{i} \langle T(w_i), \partial_{w_i} F(\mu) \rangle
\end{align*}
Displacement convexity, more precisely Eq.~\eqref{eq:gradient_bound}, ensures that $\mu$ is the minimizer of the above functional since
\begin{align}
F(\nu) - F(\mu) - \sum_{i} \langle T(w_i)-w_i, \partial_{w_i} F(\mu)\rangle \geq 0
\end{align}
$\ell$-smoothness ensures \cite{nesterov1999global}
\begin{align}
Q(\mu) & \leq Q\left( \frac{1}{n} \sum_{i=1}^n \delta_{v_i - \frac{1}{\ell} \partial_{v_i} Q(\nu)}\right) \\
&\leq Q(\nu) - \frac{1}{2\ell} \underbrace{\sum_{i=1}^n \| \partial_{v_i} Q(\nu)\|^2}_{= \sum_{i} \|\partial_{T(w_i)} F(\nu) - \partial_{w_i} F(\mu) \|^2 }
\end{align}
\end{proof}
We will repeatedly use the above inequalities in the subsequent analysis; the proofs of parts (iv) and (v) follow.
\noindent
\textbf{(iv)} Inequality (iii) ensures
\begin{align}
F(\nu) & \geq F(\mu) + \sum_{i} \langle \partial_{w_i} F(\mu), T(w_i) - w_i \rangle +\frac{1}{2\ell} \sum_{i} \| \partial_{T(w_i)}F(\nu) - \partial_{w_i} F(\mu)\|^2 \\
F(\mu) & \geq F(\nu) + \sum_{i} \langle \partial_{T(w_i)} F(\nu), w_i - T(w_i) \rangle +\frac{1}{2\ell} \sum_{i} \| \partial_{T(w_i)}F(\nu) - \partial_{w_i} F(\mu)\|^2
\end{align}
Summing up the above two inequalities concludes (iv).
\noindent
\textbf{(v)} The proof is similar to the proof of Theorem 2.1.11 in \cite{nesterov1999global}. Recall $T$ is the optimal transport map from $\mu$ to $\nu$ and $\mu_t$ is the displacement interpolation between $\mu$ and $\nu$. First, we define function $\phi(\mu) = F(\mu) - \frac{\lambda}{2} \sum_{i=1}^n \| w_i \|^2$. We prove $\phi$ is $0$-displacement convex.
\begin{align}
\phi(\mu_t) & \leq (1-t) F(\mu) + t F(\nu) - \left(\frac{(1-t) t \lambda }{2}\right)W_2^2(\mu,\nu) - \frac{\lambda}{2} \sum_{i=1}^n \| (1-t) w_i + t T(w_i)\|^2 \\
\phi(\mu) & = F(\mu) - \frac{\lambda}{2} \sum_{i=1}^n \| w_i\|^2 \\
\phi(\nu) & = F(\nu) - \frac{\lambda}{2} \sum_{i=1}^n \| T(w_i)\|^2
\end{align}
Putting the above three equations together yields
\begin{multline}
(1-t) \phi(\mu) + t \phi(\nu) - \phi(\mu_t) \geq \left(\frac{\lambda t (1-t)}{2}\right) \sum_{i} \| w_i - T(w_i)\|^2 \\
- \frac{\lambda(1-t)}{2} \sum_{i}\|w_i\|^2 - \frac{\lambda t}{2} \sum_{i} \|T(w_i)\|^2 \\
+ \frac{\lambda}{2} \sum_{i} \| (1-t) w_i + t T(w_i)\|^2
\end{multline}
Expanding the last term shows the right-hand side vanishes, hence $\phi$ is $0$-displacement convex. Since $\phi$ is also $(\ell-\lambda)$-smooth, we can use \eqref{eq:iv} applied to $\phi$ to get
\begin{align}
\sum_{i} \langle \partial_{w_i} \phi(\mu) - \partial_{T(w_i)} \phi(\nu), w_i - T(w_i) \rangle \geq \frac{1}{\ell-\lambda} \sum_{i} \| \partial_{w_i} \phi(\mu) - \partial_{T(w_i)} \phi(\nu) \|^2
\end{align}
A rearrangement of the terms concludes (v).
\section{Smooth displacement convex optimization}
\subsection{Proof of Theorem~\ref{thm:smooth}.a}
\subsection*{Contraction}
We first prove that the particles contract in $W_2$.
\begin{lemma}[Contraction] \label{lemma:contraction}
Suppose that $\mu_{k}$ is obtained by particle gradient descent starting from $n$-sparse measures with distinct particles. If $F$ is $\lambda$-displacement convex and $\ell$-smooth, then
\begin{align}
W_2^2(\mu_{k+1},\widehat{\mu}) \leq \left(1-\left(\frac{2 \lambda \ell \gamma}{\ell+\lambda}\right)\right) W_2^2(\mu_{k},\widehat{\mu})
\end{align}
holds for $\gamma \leq 2/(\ell+\lambda)$.
\end{lemma}
\begin{proof}
In order to leverage displacement convexity, we need to ensure the optimal transport map from $\mu_{k}$ to $\widehat{\mu}$ exists for all $k$.
The next Lemma proves that $\mu_k$ has distinct particles assuming $\mu_0$ has distinct particles, hence the optimal transport map exists under weak conditions on $\mu_0$.
\begin{lemma} \label{lemma:smooth_dist} For $\ell$-smooth $F$ and $\gamma<1/\ell$, $w_i^{(k+1)} = w_j^{(k+1)}$ hold only if $w_i^{(k)} = w_j^{(k)}$.
\end{lemma}
Therefore, the optimal transport from $\mu:= \mu_k$ to $\widehat{\mu}$ is well defined and denoted by $T$. We leverage this transport to couple $\mu_+ := \mu_{k+1}$ with $\widehat{\mu}$. The optimality of the transport implies that
\begin{align}
\sum_{i} \| w_i - T(w_i) \|^2 = W_2^2(\mu,\widehat{\mu}).
\end{align}
Using the recurrence of gradient descent, we get
\begin{align}
W_2^2 \left( \mu_{+} , \widehat{\mu} \right) \leq \sum_{i} \left( \| w_i - T(w_i) \|^2 + 2 \gamma \langle T(w_i) - w_i, \partial_{w_i} F(\mu) \rangle\right) +
\gamma^2 \sum_{i} \| \partial_{w_i} F(\mu) - \underbrace{\partial_{T(w_i)} F(\widehat{\mu})}_{=0} \|^2.
\end{align}
According to Theorem~\ref{thm:smoothness_consequences}.\eqref{eq:v}, we have
\begin{align*}
\sum_{i}
\left(\langle w_i - T(w_i), \partial_{w_i} F(\mu) \rangle\right) \geq \frac{\lambda \ell}{\ell+\lambda} W_2^2(\mu,\widehat{\mu}) + \frac{1}{\ell+\lambda} \sum_{i} \| \partial_{w_i} F(\mu)- \partial_{T(w_i)} F(\widehat{\mu}) \|^2
\end{align*}
Combining the last two inequalities, we get
\begin{align*}
W_2^2(\mu_{+},\widehat{\mu}) \leq W_2^2(\mu,\widehat{\mu}) - \left(\frac{2 \lambda \ell \gamma}{\ell+\lambda}\right)W_2^2(\mu,\widehat{\mu}) + \left(\gamma^2 -\frac{2\gamma}{\ell+\lambda}\right)\sum_{i} \| \partial_{w_i} F(\mu) - \underbrace{\partial_{T(w_i)} F(\widehat{\mu})}_{=0} \|^2
\end{align*}
For $\gamma \leq 2/(\ell+\lambda) $, we get
\begin{align}
W_2^2(\mu_{+},\widehat{\mu}) \leq \left(1-\left(\frac{2 \lambda \ell \gamma}{\ell+\lambda}\right)\right) W_2^2(\mu,\widehat{\mu}).
\end{align}
\end{proof}
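As a sanity check of the contraction lemma, consider the assumed toy objective $F(\mu) = \sum_i \frac{1}{2} w_i^\top A w_i$ with the spectrum of $A$ in $[\lambda, \ell]$; this separable $F$ is $\lambda$-displacement convex and $\ell$-smooth, and its minimizer places all particles at the origin. A minimal Python sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 2
lam, ell = 0.5, 2.0                      # strong convexity / smoothness (assumed)
A = np.diag([lam, ell])                  # V(w) = 0.5 w^T A w, minimized at 0
W = rng.normal(size=(n, d))              # initial particles (assumed)

gamma = 2.0 / (ell + lam)
rate = 1.0 - 2.0 * lam * ell * gamma / (ell + lam)  # contraction factor of the lemma

w2_sq = (W ** 2).sum()                   # W_2^2 to the minimizer (all mass at 0)
for _ in range(20):
    W = W - gamma * W @ A                # particle gradient descent on F
    new = (W ** 2).sum()
    assert new <= rate * w2_sq + 1e-12   # the contraction holds at each step
    w2_sq = new
```

With this quadratic potential the bound is attained exactly: both coordinates contract by the factor $\max(|1-\gamma\lambda|, |1-\gamma\ell|)$, whose square equals the rate above at $\gamma = 2/(\ell+\lambda)$.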
\subsection*{Convergence proof}
A straightforward application of the Cauchy--Schwarz inequality to Eq.~\eqref{eq:gradient_bound} yields
\begin{align}
W_2(\mu,\widehat{\mu}) \sqrt{ \sum_{i} \| \partial_{w_i} F(\mu)\|^2 } \geq F(\mu) - F(\widehat{\mu})
\end{align}
holds for all $\mu$ with $n$ particles.
Invoking Theorem~\ref{thm:smoothness_consequences}.(i) with $\nu = \widehat{\mu}$ (so that $\partial_{T(w_i)} F(\widehat{\mu})=0$), we get
\begin{align}
\sum_{i} \| \partial_{w_i} F(\mu)\|^2 \leq \ell^2 W_2^2(\mu,\widehat{\mu})
\end{align}
Combining the last two inequalities, we get
\begin{align}
F(\mu) - F(\widehat{\mu}) \leq \ell W_2^2(\mu,\widehat{\mu})
\end{align}
Combining the above inequality with the contraction established in the last lemma, we get
\begin{align}
F(\mu_{k}) - F(\widehat{\mu}) \leq \ell \left(1- \frac{2 \lambda \ell \gamma}{\ell+\lambda}\right)^k W_2^2(\mu_0,\widehat{\mu})
\end{align}
\subsection{Proof of Theorem~\ref{thm:smooth}.c.}
It is known that gradient descent decreases smooth functions in each iteration as long as $\gamma \leq 1/\ell$ (see Theorem 2.1.13 of \cite{nesterov1999global}):
\begin{align} \label{eq:f_decrease_smooth}
F(\mu_{k+1}) \leq F(\mu_k) - \gamma \left(1- \ell \gamma/2 \right) \sum_{i} \| \partial_{w_i} F(\mu_k)\|^2
\end{align}
Star displacement convexity ensures
\begin{align}
\sum_{i} \langle w_i-T(w_i), \partial_{w_i} F(\mu_k) \rangle \geq F(\mu_k) - F(\widehat{\mu}).
\end{align}
A straightforward application of Cauchy-Schwarz yields
\begin{align}
\sum_i \langle w_i - T(w_i), \partial_{w_i} F(\mu_k) \rangle &\leq \sum_{i} \| w_i - T(w_i)\| \| \partial_{w_i} F(\mu_k)\| \\
& \leq W_2(\mu_k,\widehat{\mu}) \sqrt{\sum_{i} \| \partial_{w_i} F(\mu_k)\|^2 },
\end{align}
where the last inequality is Cauchy--Schwarz together with $\sum_i \|w_i - T(w_i)\|^2 = W_2^2(\mu_k,\widehat{\mu})$. Recall also that $W_2^2(\mu_k,\widehat{\mu})$ is non-increasing in $k$ for $\gamma<1/\ell$, hence $W_2^2(\mu_k,\widehat{\mu}) \leq r^2$.
Combining the last two inequalities, we get
\begin{align}
\sum_i \| \partial_{w_i} F(\mu_k)\|^2 \geq \frac{\left( F(\mu_k) - F(\widehat{\mu})\right)^2}{W_2^2(\mu_k,\widehat{\mu})}
\end{align}
We introduce the compact notation $\Delta_k = F(\mu_k) - F(\widehat{\mu})$. Plugging the last inequality into Eq.~\eqref{eq:f_decrease_smooth} yields
\begin{align}
\Delta_{k+1} & \leq \Delta_k - \underbrace{\left(\frac{\gamma}{W_2^2(\mu_k,\widehat{\mu})}\right)}_{\geq \gamma/r^2}(1-\ell\gamma/2) \Delta_k^2
\end{align}
The above inequality implies $\Delta_{k+1} \leq \Delta_k(1-c\Delta_k)$ with $c = \gamma(1-\ell\gamma/2)/r^2$; hence
\begin{align}
\frac{1}{\Delta_{k+1}} \geq \frac{1}{\Delta_k (1-c\Delta_k)} \geq \frac{1}{\Delta_k} + c.
\end{align}
Summing over $k$ yields $\Delta_k = O(1/k)$, which concludes the proof akin to Theorem 2.1.13 of \cite{nesterov1999global}.
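The $O(1/k)$ rate of part c can be observed numerically on an assumed toy instance $F(\mu) = \sum_i \log\cosh(w_i)$ in one dimension, which is $1$-smooth and $0$-displacement convex (hence star displacement convex) with all optimal particles at the origin:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10
w = rng.uniform(-2.0, 2.0, size=n)     # initial particles (assumed)
r0_sq = (w ** 2).sum()                 # W_2^2(mu_0, mu_hat), minimizer at 0

gamma = 1.0                            # gamma = 1/ell with ell = 1
gaps = []
for k in range(1, 101):
    w = w - gamma * np.tanh(w)         # particle gradient descent
    gaps.append(np.log(np.cosh(w)).sum())        # Delta_k = F(mu_k) - F*
    assert gaps[-1] <= 2.0 * r0_sq / k + 1e-12   # O(r^2 / k) decay
```

The asserted bound is loose for this instance (convergence is in fact much faster near the origin) but illustrates the $\Delta_k \lesssim r^2/k$ behaviour the theorem guarantees.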
\subsection{Proof of Theorem~\ref{thm:smooth}.b}
Suppose that $T$ is the optimal transport map from $\mu_k$ to $\widehat{\mu}$. Using the gradient-descent update and the coupling induced by $T$, we have
\begin{align}
W_2^2 (\mu_{k+1}, \widehat{\mu} ) \leq \sum_{i} \left( \| T(w_i) - w_i \|^2 + 2 \gamma \langle T(w_i) - w_i, \partial_{w_i} F(\mu_k) \rangle +
\gamma^2 \| \partial_{w_i} F(\mu_k) \|^2 \right)
\end{align}
Replacing $\nu = \widehat{\mu}$ and $\mu= \mu_k$ in Theorem~\ref{thm:smoothness_consequences}.(iii) yields
\begin{align}
\sum_i \langle w_i-T(w_i),\partial_{w_i} F(\mu_k) \rangle \geq \frac{1}{2\ell} \sum_{i} \| \partial_{w_i} F(\mu_k) \|^2
\end{align}
Incorporating the above inequality into the established bound for $W_2$ leads to
\begin{align}
W_2^2(\mu_{k+1},\widehat{\mu}) \leq W_2^2(\mu_k,\widehat{\mu}) + (\gamma^2-\gamma/\ell) \sum_i \| \partial_{w_i} F(\mu_k)\|^2.
\end{align}
Thus, $W_2(\mu_k,\widehat{\mu})$ is non-increasing in $k$ for $\gamma\leq 1/\ell$. Since a $0$-displacement convex function is also star displacement convex, invoking part c. of Theorem~\ref{thm:smooth} with $r^2 = W_2^2(\mu_0,\widehat{\mu})$ concludes the proof.
\subsection{Proof of Lemma~\ref{lemma:smooth_dist}}
The proof is similar to the analysis of smooth programs in \cite{lee2016gradient}. According to the definition, $w_{i}^{(k+1)}= w_j^{(k+1)}$ holds only if
\begin{align}
w_i^{(k)} - \gamma \partial_{w_i} F(\mu_k) = w_j^{(k)} - \gamma \partial_{w_j} F(\mu_k)
\end{align}
A rearrangement of terms together with Theorem~\ref{thm:smoothness_consequences}.(ii) concludes the proof:
\begin{align}
\| w_i^{(k)} - w_j^{(k)}\| = \gamma \|\partial_{w_i} F(\mu_k)- \partial_{w_j} F(\mu_k) \| \leq \gamma \ell \| w_i^{(k)}- w_j^{(k)} \| < \| w_i^{(k)} - w_j^{(k)}\|.
\end{align}
\section{Lipschitz displacement convex optimization}
\subsection{Proof of Theorem~\ref{thm:nonsmooth}.a}
The proof is inspired by the convergence analysis of gradient descent for non-smooth convex functions (Theorem 3.9 of \cite{bubeck2015convex}). The injection of noise ensures that the particles remain distinct with probability one, hence the optimal transport from $\mu_k$ to $\widehat{\mu}$, denoted by $T$, exists with probability one. Leveraging the optimal transport $T$ and inequality~\eqref{eq:gradient_bound} obtained from $\lambda$-displacement convexity, we get
\begin{multline}
{\mathbf E}\left[ W_2^2 \left( \mu_{k+1} , \widehat{\mu} \right) \right] \leq {\mathbf E} \left[ \sum_{i} \| T(w_i) - w_i \|^2 + \gamma_k\underbrace{2 \sum_{i} \langle T(w_i) - w_i, \partial_{w_i} F(\mu_k) \rangle}_{\leq -\lambda W_2^2(\mu_k,\widehat{\mu}) + 2F(\widehat{\mu})-2F(\mu_k) } \right] \\ +
\gamma^2_k \underbrace{\sum_{i} {\mathbf E} \| \partial_{w_i} F(\mu_k) \|^2 }_{\leq L^2}
+ \frac{\gamma^2_k}{n}\sum_{i=1}^n {\mathbf E} \left[ \| \xi_i^{(k)}\|^2 \right]
\end{multline}
A rearrangement of terms yields
\begin{multline}
k {\mathbf E} \left[ F(\mu_k) - F(\widehat{\mu}) \right] \\ \leq \lambda k(k-1) {\mathbf E} \left[ W_2^2(\mu_k,\widehat{\mu}) \right] - \lambda k(k+1) {\mathbf E} \left[ W_2^2(\mu_{k+1},\widehat{\mu})\right]+\frac{(L^2+1)}{\lambda}
\end{multline}
Summing over $k=1,\dots,m$ concludes the proof as
\begin{align}
\left(\frac{m(m+1)}{2}\right) \min_{k\leq m} \left( {\mathbf E} \left[ F(\mu_k) - F(\widehat{\mu}) \right] \right) \leq \sum_{k=1}^m k\left( {\mathbf E} \left[ F(\mu_k) - F(\widehat{\mu})\right] \right) \leq \frac{m(L^2+1)}{\lambda}.
\end{align}
\subsection{Proof of Theorem~\ref{thm:nonsmooth}.b} \noindent The proof is inspired by convex analysis for non-smooth functions (Theorem 3.2.2 of \cite{nesterov1999global}). Suppose $\widehat{\mu}$ is the minimizer of the particle program and let $T$ denote the optimal transport map from $\mu_{k}$ to $\widehat{\mu}$. The injection of noise ensures that the particles of $\mu_k$ are distinct, hence $T$ exists.
\begin{multline}
{\mathbf E} \left[ W_2^2 \left( \mu_{k+1} , \widehat{\mu} \right) \right] \leq {\mathbf E} \left[ \sum_{i} \left( \| T(w_i) - w_i \|^2 + 2 \gamma \langle T(w_i) - w_i, \partial_{w_i} F(\mu_k) \rangle \right) \right] \\ +
\gamma^2 \underbrace{ \sum_{i} {\mathbf E} \| \partial_{w_i} F(\mu_k) \|^2 }_{\leq L^2}
+ \frac{\gamma^2}{n}\sum_{i=1}^n {\mathbf E} \| \xi_i^{(k)}\|^2
\end{multline}
Using Eq.~\eqref{eq:gradient_bound}, we conclude that
\begin{align}
-\int \langle T(w) - w, \partial_w F(\mu_k)\rangle d \mu_k(w) \geq \underbrace{F(\mu_k) - F(\widehat{\mu})}_{\Delta_k} \geq 0
\end{align}
Therefore,
\begin{align}
{\mathbf E} \left[ W_2^2 \left( \mu_{k+1} , \widehat{\mu} \right) \right] \leq {\mathbf E} \left[ W_2^2 \left( \mu_{k} , \widehat{\mu}\right) \right] - 2\gamma {\mathbf E} \left[ \Delta_k \right] + \gamma^2 (L^2+1)
\end{align}
Summing over $k=1,\dots,m$ concludes the proof as
\begin{align}
(m\gamma) \min_{k \in \{1, \dots, m\}} {\mathbf E} \left[ \Delta_k \right] \leq \sum_{k=1}^m \gamma {\mathbf E} \left[ \Delta_k \right] \leq W_2^2(\mu_0,\widehat{\mu}) + m \gamma^2 (L^2+1)
\end{align}
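A minimal numerical sketch of the noisy scheme, under assumed choices $V(w) = |w|$ in one dimension (so $F$ is $1$-Lipschitz and displacement convex with minimizer at the origin), a fixed step $\gamma = 1/\sqrt{m}$, and an assumed noise scale; the exact noise model of the theorem may differ:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 400
W = rng.uniform(-1.0, 1.0, size=n)     # 1-D particles (assumed)
gamma = 1.0 / np.sqrt(m)               # fixed step, as in the O(1/sqrt(m)) regime

def F(W):
    # V(w) = |w| is 1-Lipschitz and convex, so F is displacement convex
    return np.abs(W).mean()

best = F(W)
for _ in range(m):
    noise = 0.1 * rng.normal(size=n)          # small injected noise (assumed scale)
    W = W - gamma * (np.sign(W) + noise)      # noisy particle (sub)gradient descent
    best = min(best, F(W))
assert best < 0.15                            # min_k F(mu_k) - F* = O(1/sqrt(m))
```

The particles march toward the origin and then oscillate within a band of width $O(\gamma)$, so the best iterate tracks the $O(1/\sqrt{m})$ bound.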
\section{Approximation error}
\subsection{Proof of Lemma~\ref{lemma:approx_convex}}
The proof is based on the convergence of the Frank--Wolfe algorithm. This proof technique was previously used for specific functions arising in neural networks~\cite{bach2017breaking}. Since the algorithm solves an (infinite-dimensional) non-convex optimization problem in each step, it is not implementable. Yet, we can use its convergence properties to bound the approximation error.
To introduce the algorithm, we first need to formulate the optimization over a compact domain in a Banach space. We optimize $F$ over the Hilbert space $L^2$ of functions. Let $D$ be the set of probability measures over a compact set, which is a subset of $L^2$. The Frank--Wolfe method optimizes $F$ through the following iterations \cite{jaggi2013revisiting}
\begin{align}
\mu^{(k+1)} = (1-\gamma) \mu^{(k)} + \gamma s , \quad s = \arg\max_{\nu\in D} \int F'(\mu^{(k)})(x) d\nu(x),
\end{align}
where $F'$ is the functional derivative of $F$ with respect to $\mu$~\cite{santambrogio2017euclidean}.
It is easy to check that $s$ is always a Dirac measure at $\arg\max_x F'(\mu^{(k)})(x)$. Hence, $\mu^{(n-1)}$ is a sparse measure supported on at most $n$ particles as long as $\mu^{(0)} = \delta_{w_0}$. The compactness of $D$ together with the convexity and smoothness of $F$ ensures the rate $O(1/n)$ for the convergence of the Frank--Wolfe method~\cite{jaggi2013revisiting}.
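The iteration above can be sketched numerically for an assumed instance $F(\mu) = \|\int \Phi\, d\mu - y\|^2$ with feature map $\Phi(x) = (x, x^2)$ on $[-1,1]$; here the linear minimization over $D$ returns a Dirac measure found by grid search, and the target $y$ is a hypothetical reachable embedding:

```python
import numpy as np

# Feature map Phi(x) = (x, x^2) on [-1, 1]; F(mu) = ||int Phi d(mu) - y||^2
# is convex and smooth in the measure (assumed toy instance).
grid = np.linspace(-1.0, 1.0, 2001)
Phi = np.stack([grid, grid ** 2], axis=1)
y = np.array([0.3, 0.5])                 # assumed target inside the feature hull

m = Phi[1000]                            # embedding of mu^(0) = delta_0
for k in range(50):
    grad = 2.0 * (m - y)                 # functional derivative at the iterate
    s = Phi[np.argmin(Phi @ grad)]       # linear minimization: a Dirac measure
    gamma = 2.0 / (k + 2)
    m = (1.0 - gamma) * m + gamma * s    # Frank-Wolfe update on the embedding
assert ((m - y) ** 2).sum() < 0.31       # within the O(1/k) Frank-Wolfe guarantee
```

After $n$ iterations the iterate is a mixture of at most $n$ Dirac measures, mirroring the $n$-particle approximation argument in the proof.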
\subsection{Proof of Lemma~\ref{lemma:energy_distance}}
The proof is a straightforward application of the key observation in \cite{hadi22}. \citet{hadi22} show that the energy distance can be written alternatively as an MMD$_K$ distance with a positive definite kernel $K$, which is a convex quadratic function of $\mu$. Since the kernel $K$ is Lipschitz, $L$ is smooth in the measure.
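The identity behind this lemma, energy distance $= \mathrm{MMD}_K^2$ with the (conditionally positive definite) kernel $K(x,y) = \|x\| + \|y\| - \|x-y\|$, can be verified numerically on empirical measures; the sample sizes below are assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 3))   # particles of mu (assumed)
Y = rng.normal(size=(7, 3))   # particles of nu (assumed)

def pdist(A, B):
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

# Energy distance between the empirical measures
energy = 2 * pdist(X, Y).mean() - pdist(X, X).mean() - pdist(Y, Y).mean()

# MMD^2 with the kernel K(x, y) = |x| + |y| - |x - y|
def K(A, B):
    na = np.linalg.norm(A, axis=1)
    nb = np.linalg.norm(B, axis=1)
    return na[:, None] + nb[None, :] - pdist(A, B)

mmd_sq = K(X, X).mean() + K(Y, Y).mean() - 2 * K(X, Y).mean()
assert abs(energy - mmd_sq) < 1e-10
```

The agreement is exact because the identity holds termwise: the $\|x\|$ and $\|y\|$ contributions cancel when the three kernel averages are combined.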
\section{Applications for neural networks}
\subsection{Proof of Corollary~\ref{cor:neural_nets}}
\cite{hadi22} proves that $L$ is equivalent to $E$ in polar coordinates. Invoking part b of Theorem~\ref{thm:nonsmooth} concludes the rate.
\section{Introduction}
Optimization in the space of probability measures has wide applications across various domains, including advanced generative models in machine learning~\cite{arjovsky2017wasserstein}, the training of two-layer neural networks~\cite{chizat2018global}, variational inference using Stein's method~\cite{liu2016stein}, super-resolution in signal processing~\cite{bredies2013inverse}, and interacting particles in physics~\cite{mccann1997convexity}.
Optimization in probability spaces goes beyond the conventional optimization in Euclidean space. \cite{ambrosio2005gradient} extends
the notion of steepest descent in Euclidean space to the space of probability measures with the Wasserstein metric. This notion traces back to studies of the Fokker–Planck equation, a partial differential equation (PDE) describing the density evolution of Ito diffusion. The Fokker–Planck equation can be interpreted as a gradient flow in the space of probability distributions with the Wasserstein metric~\cite{jordan1998variational}. Gradient flows have become general tools to go beyond optimization in Euclidean space ~\cite{absil2009optimization,santambrogio2017euclidean,chizat2022sparse,carrillo2018measure,carrillo2022global,carrillo2021equilibrium}.
Gradient flows enjoy fast global convergence on an important function class called displacement convex functions~\cite{ambrosio2005gradient}, a class introduced to analyze equilibrium states of physical systems~\cite{mccann1997convexity}.
Despite their fast convergence rate, gradient flows are hard to implement. Specifically, there are numerical solvers only for the limited class of linear functions with an entropy regularizer.
We study a different method to optimize functions of probability measures called particle gradient descent~\cite{chizat2018global,chizat2022sparse}. This method restricts optimization to sparse measures with finite support ~\cite{nitanda2017stochastic,chizat2018global,chizat2022sparse,li2022sampling} as
\begin{align}\label{eq:particle_program}
\min_{w_1,\dots, w_n} F \left( \frac{1}{n} \sum_{i=1}^n \delta_{w_i} \right),
\end{align}
where $\delta_{w_i}$ is the Dirac measure at $w_i \in \Omega \subset {\mathbb{R}}^d$. Points $w_1, \dots, w_n$ are called particles. \textit{Particle gradient descent}
is the standard gradient descent optimizing the particles \cite{chizat2018global,chizat2022sparse}. This method is widely used to optimize neural networks~\cite{chizat2018global}, take samples from a broad family of distributions~\cite{li2022sampling}, and simulate gradient flows in physics~\cite{carrillo2022global}. As will be discussed, $F$ is not convex in the particles due to its invariance to permutations of them. In that regard, the convergence of particle gradient descent is not guaranteed for general functions.
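A minimal sketch of particle gradient descent on an assumed permutation-invariant objective, $F(\frac{1}{n}\sum_i \delta_{w_i}) = \|\frac{1}{n}\sum_i w_i - y\|^2$, where the target $y$ is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 16, 2
W = rng.normal(size=(n, d))                      # particles w_1, ..., w_n

target = np.array([1.0, -1.0])                   # assumed target mean embedding

def F(W):
    # permutation-invariant objective: squared error of the mean embedding
    return ((W.mean(axis=0) - target) ** 2).sum()

def grad(W):
    # dF/dw_i = (2/n)(mean(W) - target), identical for every particle
    return np.tile(2.0 / n * (W.mean(axis=0) - target), (n, 1))

gamma = 0.5
for _ in range(200):
    W = W - gamma * grad(W)                      # particle gradient descent
assert F(W) < 1e-6
```

Note that swapping any two rows of `W` leaves `F(W)` unchanged, which is exactly the permutation invariance that rules out convexity in the particles.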
Particle gradient descent links to gradient flows as $n\to \infty$. In this asymptotic regime, \cite{chizat2018global} proves that the empirical distribution over the particles $w_1,\dots, w_n$ implements a (Wasserstein) gradient flow for $F$. Although the associated gradient flow globally optimizes displacement convex functions, the implications of this convergence have remained unknown for a finite number of particles.
\subsection{Main contributions.}
We prove that particle gradient descent efficiently optimizes displacement convex functions. Consider
the sparse measure $\mu_n$ with support of size $n$. The error for $\mu_n$ can be decomposed as
\begin{multline*} F(\mu_n) - F^* = \\ \underbrace{F(\mu_n) - \min_{\mu'_n} F(\mu'_n)}_{\text{optimization error}} + \underbrace{\min_{\mu'_n} F(\mu'_n)-F^*}_{\text{approximation error}},
\end{multline*}
where the minimum is taken over measures $\mu'_n$ supported on at most $n$ points.
The optimization error in the above equation measures how much the function value of $\mu_n$ can be reduced by particle gradient descent. The approximation error is induced by the sparsity constraint. While the optimization of particles reduces the optimization error, the approximation error is independent of the optimization and depends on $n$.
\paragraph{Optimization error.} For displacement convex functions, we establish the global convergence of variants of particle gradient descent. Table~\ref{tab:rate_summary} presents the computational complexity of particle gradient descent optimizing smooth and Lipschitz displacement convex functions. To demonstrate the applications of these results, we provide examples of displacement convex functions that have emerged in machine learning, tensor decomposition, and physics.
\paragraph{Approximation error.} Under a certain Lipschitz continuity condition, we prove the approximation error is bounded by $O(\frac{1}{\sqrt{n}})$ with high probability. Furthermore, we prove this bound can be improved to $O(1/n)$ for functions that are convex and smooth in the measure.
Finally, we demonstrate the application of the established results for a specific neural network with two-dimensional inputs, and zero-one activations. When the inputs are drawn uniformly from the unit circle, we prove that $n$-neurons achieve $O(1/n)$-function approximation in polynomial time for a specific function class.
\begin{table*}[t!]
\centering
\begin{tabular}{|l|l|l|}
\hline
Function class & Regularity & Complexity
\\
\hline
$\lambda$-displacement convex & $\ell$-smooth & $nd\left(\frac{\ell-\lambda}{\ell+\lambda}\right)\log(\ell/\epsilon)$ \\
\hline
star displacement convex & $\ell$-smooth & $nd \ell \left(\frac{1}{\epsilon}\right)$ \\
\hline
$\lambda$-displacement convex & $L$-Lipschitz & $ndL^2/(\lambda \epsilon)$ \\
\hline
star displacement convex &$L$-Lipschitz & $nd L^2 \left(\frac{1}{\epsilon^2}\right)$ \\
\hline
\end{tabular}
\caption{\textit{Computational complexity to reach an $\epsilon$-optimization error.} See Theorems~\ref{thm:smooth} and \ref{thm:nonsmooth} for formal statements. }
\label{tab:rate_summary}
\end{table*}
\section{Related works}
There are alternatives to particle gradient descent for optimization in the space of measures. For example, conditional gradient descent optimizes smooth convex functions with a sub-linear convergence rate~\cite{frank1956algorithm}. This method constructs a sparse measure with support of size $n$ using an iterative approach. This sparse measure is $O(\frac{1}{n})$-accurate in $F$~\cite{dunn1979rates,jaggi2013revisiting}. However, each iteration of the conditional gradient method reduces to a non-convex optimization problem without efficient solvers. The iterations of particle gradient descent, in contrast, are computationally efficient.
\cite{chizat2018global} establishes the link between Wasserstein gradient flows and particle gradient descent. This study proves that particle gradient descent implements the gradient flows in the limit of infinite particles for a rich function class. The neurons in single-layer neural networks can be interpreted as the particles whose density simulates a gradient flow. The elegant connection between gradient descent and gradient flows has provided valuable insights into the optimization of neural networks~\cite{chizat2018global} and their statistical efficiency~\cite{chizat2020implicit}. In practice, particle gradient descent is limited to a finite number of particles. Thus, it is essential to study particle gradient descent in a non-asymptotic regime. In this paper, we analyze optimization with a finite number of particles for displacement convex functions.
Displacement convexity has been used in recent studies of neural networks~\cite{javanmard,hadi22}. \cite{javanmard} establishes the global convergence of radial basis function networks using an approximate displacement convexity. \cite{hadi22} proves the global convergence of gradient descent for a single-layer network with two-dimensional inputs and zero-one loss in realizable settings. Motivated by these examples, we analyze optimization for general (non-)smooth displacement convex functions.
Displacement convexity relates to the rich literature on geodesic convex optimization. Although
the optimization of geodesic convex functions is extensively analyzed by ~\cite{zhang2016first,udriste2013convex,absil2009optimization} for Riemannian manifolds, less is known for the non-Riemannian manifold of probability measures with the Wasserstein-2 metric ~\cite{jordan1998variational}.
In machine learning, various objective functions do not have any spurious local minima. This property was observed in early studies of neural networks. \cite{baldi1989neural} show that the training objective of two-layer neural networks with linear activations does not have suboptimal local minima. This proof is extended to a family of matrix factorization problems, including matrix sensing, matrix completion, and robust PCA~\cite{ge2017no}. Smooth displacement convex functions studied in this paper inherently do not admit spurious local minima~\cite{javanmard2020analysis}.
For functions with no spurious minima, escaping saddle points is crucial; this is extensively studied for smooth functions~\cite{jin2017escape,daneshmand2018escaping}. Although gradient descent may converge to suboptimal saddle points, random initialization effectively avoids the convergence of gradient descent to saddle points~\cite{lee2016gradient}. Yet, gradient descent may need a long time to escape saddles~\cite{du2017gradient}. To speed up the escape, \cite{jin2017escape} leverages noise that allows escaping saddles in polynomial time. Building on these studies, we analyze escaping saddles for displacement convex functions.
\section{Displacement convex functions}
Note that the objective function $F$ is invariant to the permutation of the particles. This permutation invariance implies that $F$ is not convex, as the next proposition states.
\begin{proposition} \label{proposition:non-convex}
Suppose that $w_1^*, \dots, w_n^*$ is the unique minimizer of an arbitrary function $F(\frac{1}{n}\sum_{i=1}^n \delta_{w_i})$ such that $w_1^* \neq w_2^*$. If $F$ is invariant to the permutation of $w_1, \dots, w_n$, then it is non-convex.
\end{proposition}
The condition of having distinct optimal particles, required in the last proposition, rules out the trivial minimizer in which all particles coincide.
Since non-convex functions cannot, in general, be optimized globally, we restrict our attention to the specific family of displacement convex functions.
\subsection{Optimal transport} To introduce displacement convexity, we need to review the basics of optimal transport theory.
Consider two probability measures $\mu$ and $\nu$ over ${\mathbb{R}}^d$. A transport map from $\mu$ to $\nu$ is a function $T:{\mathbb{R}}^d \to {\mathbb{R}}^d$ such that
\begin{align}
\int_A \nu(x) dx = \int_{ T^{-1}(A)} \mu(x)dx
\end{align}
holds for any Borel subset $A$ of ${\mathbb{R}}^d$ \cite{santambrogio2017euclidean}. The optimal transport $T^*$ has the minimum transportation cost:
\begin{align*}
T^* = \arg\min_T \int \text{cost}(T(x), x) d\mu(x).
\end{align*}
We use the standard squared Euclidean distance for the transportation cost~\cite{santambrogio2017euclidean}. Note that a transport map between two distributions may not exist; for example, one cannot transport a Dirac measure to a continuous measure.
In this paper, we frequently use the optimal transport map for two $n$-sparse measures in the following form \begin{equation} \label{eq:disc_measure}
\mu = \frac{1}{n}\sum_{i=1}^n \delta_{w_i}, \quad \nu= \frac{1}{n} \sum_{i=1}^n \delta_{v_i}.\end{equation}
For such sparse measures, any permutation $\sigma$ of $[1,\dots, n]$ induces a transport map from $\mu$ to $\nu$. Consider the set $\Lambda$ of all permutations of $[1,\dots, n]$ and define
\begin{align}
\label{eq:optimal_permuation}
\sigma^* = \arg\min_{\sigma \in \Lambda} \sum_{i=1}^n \| w_i - v_{\sigma(i)}\|^2.
\end{align}
The optimal permutation in the above equation yields the optimal transport map from $\mu$ to $\nu$ as $T^*(w_i) = v_{\sigma^*(i)}$, and the Wasserstein-2 distance between $\mu$ and $\nu$:
\begin{align} \label{eq:wdist}
W_2^2(\mu,\nu) = \sum_{i=1}^n \| w_i- v_{\sigma^*(i)}\|^2_2.
\end{align}
Note that we omit the factor $1/n$ in $W_2^2$ for ease of notation.
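As a concrete sketch, the optimal permutation in Eq.~\eqref{eq:optimal_permuation} can be found by brute force for small $n$ (the function name and the tiny example below are purely illustrative; a scalable implementation would use a linear assignment solver instead of enumerating all $n!$ permutations):

```python
import itertools
import math

def optimal_permutation(ws, vs):
    """Brute-force search for the optimal permutation sigma* above, returning
    (W2^2, sigma*) for two n-sparse measures on the real line. Following the
    text, the 1/n factor is omitted from W2^2. Feasible only for small n."""
    best_cost, best_sigma = math.inf, None
    for sigma in itertools.permutations(range(len(ws))):
        cost = sum((w - vs[j]) ** 2 for w, j in zip(ws, sigma))
        if cost < best_cost:
            best_cost, best_sigma = cost, sigma
    return best_cost, best_sigma

# On the line, the optimal transport map matches particles in sorted order:
cost, sigma = optimal_permutation([0.0, 1.0, 2.0], [2.1, 0.2, 0.9])
```

The returned permutation defines the optimal transport map $T^*(w_i) = v_{\sigma^*(i)}$.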
\subsection{Displacement convex functions}
The displacement interpolation between $\mu$ and $\nu$ is defined by the optimal transport map as \cite{mccann1997convexity}
\begin{align}
\mu_t = \left((1-t)\text{Identity} + T^*\right)_\# \mu,
\end{align}
where $G_\# \mu$ denotes the measure obtained by pushing $\mu$ with $G$.
Note that the above interpolation is different from the convex combination of measures, i.e., $(1-t) \mu + t \nu$. For sparse measures, the displacement interpolation moves each particle along the segment $(1-t) w_i + t\, v_{\sigma^*(i)}$, where $\sigma^*$ is the optimal permutation defined in Eq.~\eqref{eq:optimal_permuation}.
$\lambda$-displacement convexity asserts Jensen's inequality along the displacement interpolation \cite{mccann1997convexity} as
\begin{multline*}
F(\mu_t) \leq (1-t) F(\mu) \\ + t F(\nu) -\frac{\lambda}{2}\,t(1-t) W_2^2(\mu,\nu).
\end{multline*}
A standard example of a displacement convex function is a convex quadratic function of measures.
\begin{example}
Consider
\begin{align*}
Q(\mu) = \int K(x-y) d\mu(x) d\mu(y)
\end{align*}
where $\mu$ is a measure over ${\mathbb{R}}^d$ and $K(\Delta)$ is convex in $\Delta \in {\mathbb{R}}^d$; then, $Q$ is $0$-displacement convex~\cite{mccann1997convexity}.
\end{example}
The optimization of $Q$ over a sparse measure is convex~\footnote{$Q$ does not satisfy the condition of Proposition~\ref{proposition:non-convex}.}. However, this is a very specific example of displacement convex functions. Generally, displacement convex functions are not necessarily convex.
Recall the sparse measures defined in Eq.~\eqref{eq:disc_measure}. While convexity asserts Jensen's inequality for the interpolation of $\{w_i\}$ with all $n!$ permutations of $\{v_j\}$, displacement convexity only relies on a specific permutation. In that regard, displacement convexity is weaker than convexity. In the following example, we elaborate on this difference.
\begin{example} \label{example:energy}
The energy distance between measures over ${\mathbb{R}}$ is defined as
\begin{multline} \label{eq:mmd}
E(\mu,\nu) = 2 \int | x - y | d\mu(x) d\nu(y) \\- \int | x - y | d\mu(x) d\mu(y) - \int| x- y | d\nu(x) d\nu(y).
\end{multline}
$E(\mu,\nu)$ is $0$-displacement convex in $\mu$~\cite{carrillo2018measure}.
\end{example}
According to Proposition~\ref{proposition:non-convex}, $E$ does not obey Jensen's inequality for interpolations with an arbitrary transport map. In contrast, $E$ obeys Jensen's inequality along the optimal transport map, since that map is monotone on ${\mathbb{R}}$~\cite{carrillo2018measure}. This key property implies that $E$ is displacement convex.
Remarkably, the optimization of the energy distance has applications in machine learning and physics.
\cite{hadi22} show that the training of two-layer neural networks with two-dimensional inputs (uniformly drawn from the unit sphere) reduces to minimizing $E(\mu,\nu)$ in a sparse measure $\mu$. The optimization of the energy distance has also been used in clustering \cite{szekely2017energy}. In physics, the gradient flow on the energy distance describes interacting particles from two different species \cite{carrillo2018measure}.
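For two $n$-sparse measures as in Eq.~\eqref{eq:disc_measure}, the energy distance reduces to double sums over particles. The following sketch (naming is illustrative) evaluates it directly:

```python
def energy_distance(ws, vs):
    """Energy distance E(mu, nu) between two n-sparse measures on R:
    (1/n^2) * [2*sum_ij |w_i - v_j| - sum_ij |v_i - v_j| - sum_ij |w_i - w_j|]."""
    n = len(ws)
    pair = lambda xs, ys: sum(abs(x - y) for x in xs for y in ys)
    return (2 * pair(ws, vs) - pair(vs, vs) - pair(ws, ws)) / n ** 2

assert energy_distance([0.0, 1.0], [0.0, 1.0]) == 0.0  # E(mu, mu) = 0
assert energy_distance([0.0, 1.0], [0.5, 1.5]) > 0.0   # E is non-negative
```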
\subsection{Star displacement convex functions}
Our convergence analysis extends to a broader family of functions. Let $\widehat{\mu}$ denote the optimal $n$-sparse solution of the optimization in Eq.~\eqref{eq:particle_program}, and let $\mu_t$ be the displacement interpolation between $\mu$ and $\widehat{\mu}$. A star displacement convex function $F$ obeys
\begin{align*}
\sum_{i} \langle w_i - T(w_i), \partial_{w_i} F(\mu) \rangle \geq F(\mu) - F(\widehat{\mu}),
\end{align*}
where $T$ is the optimal transport map from $\mu$ to $\widehat{\mu}$. The above definition is inspired by the notion of star-convexity~\cite{nesterov2006cubic}. It is easy to check that $0$-displacement convex functions are star displacement convex.
Star displacement convex optimization is used for generative models in machine learning.
An important family of generative models minimizes the Wasserstein-2 metric~\cite{arjovsky2017wasserstein}. Although $W_2^2$ is not displacement convex \cite{santambrogio2017euclidean}, it is star displacement convex.
\begin{example} \label{lemma:weakconvex_w2}
$W_2^2(\mu,\nu)$ is star displacement convex in $\mu$ as long as $\mu$ and $\nu$ have sparse supports of the same size.
\end{example}
Star displacement convexity holds for complete orthogonal tensor decomposition. Specifically, we consider the following example of tensor decomposition.
\begin{example} \label{example:tensor}
Consider the orthogonal complete tensor decomposition of order $3$, namely
\begin{align*}
\min_{w_1,\dots, w_d \in {\mathbb{R}}^d} \left( G\left(\frac{1}{d}\sum_{i=1}^d \delta_{w_i}\right)=- \sum_{i=1}^d \sum_{j=1}^d \left\langle \frac{w_j}{\|w_j\|}, v_i\right\rangle^3\right),
\end{align*}
where $v_1, \dots, v_d$ are orthogonal vectors over the unit sphere denoted by $\mathcal{S}_{d-1}$.
\end{example}
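To make the objective concrete, the following sketch (function names are illustrative) evaluates $G$ and checks that placing one particle on each component $v_i$ attains the value $-d$:

```python
def tensor_objective(ws, vs):
    """The objective G from the example above: ws are the particles (normalized
    inside), vs are the orthonormal components; both are lists of d-vectors."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    total = 0.0
    for w in ws:
        norm = dot(w, w) ** 0.5
        u = [x / norm for x in w]          # project the particle onto the sphere
        total += sum(dot(u, v) ** 3 for v in vs)
    return -total

d = 3
basis = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
assert tensor_objective(basis, basis) == -3.0  # w_i = v_i recovers all d components
```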
Although orthogonal tensor decomposition is not convex~\cite{anandkumar2014tensor}, the next lemma proves that it is star displacement convex.
\begin{lemma} \label{theorem:star_tensor}
$G$ is star displacement convex for $w_1, \dots, w_d \in \mathcal{S}_{d-1}$.
\end{lemma}
To prove the above lemma, we leverage the properties of the optimal transport map used for displacement interpolation.
There are more examples of displacement convex functions in machine learning~\cite{javanmard2020analysis} and physics~\cite{carrillo2009example}. Motivated by these examples, we analyze displacement convex optimization.
\section{Optimization of smooth functions}
Gradient descent is a powerful method for optimizing smooth functions, enjoying a dimension-free convergence rate to a critical point~\cite{nesterov1999global}. More interestingly, variants of gradient descent converge to a local optimum~\cite{daneshmand2018escaping,jin2017escape,ge2015escaping,xu2018first,zhang2017hitting}. Here, we prove that gradient descent globally optimizes the class of (star) displacement convex functions. Our results are established for standard gradient descent, namely the iterates
\begin{multline}
w_{i}^{(k+1)} = w_i^{(k)} - \gamma \partial_{w_i} F(\mu_k),\\ \mu_k := \frac{1}{n}\sum_{i=1}^n \delta_{w_i^{(k)}}
\end{multline}
where $\partial_{w_i} F$ denotes the gradient of $F$ with respect to $w_i$.
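As an illustration, the iterates above can be sketched as follows. The chosen test objective is the quadratic interaction $Q$ from the earlier example with $K(\Delta)=\Delta^2$ (an assumption for illustration), whose minimizer collapses all particles to their common mean:

```python
def particle_gd(ws, grad, gamma, steps):
    """Particle gradient descent: w_i <- w_i - gamma * dF/dw_i for every particle."""
    for _ in range(steps):
        g = grad(ws)
        ws = [w - gamma * gi for w, gi in zip(ws, g)]
    return ws

def grad_Q(ws):
    # Q(mu) = (1/n^2) * sum_{i,j} (w_i - w_j)^2, so dQ/dw_i = (4/n^2) * sum_j (w_i - w_j)
    n = len(ws)
    return [4.0 * sum(w - u for u in ws) / n ** 2 for w in ws]

ws = particle_gd([0.0, 1.0, 4.0], grad_Q, gamma=0.5, steps=200)
# the particles contract toward their (preserved) mean 5/3, the minimizer of Q
```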
The next Theorem establishes the convergence of gradient descent.
\begin{theorem}\label{thm:smooth}
Assume $F$ is $\ell$-smooth, and particle gradient descent starts from distinct particles $w_1^{(0)}\neq \dots \neq w_n^{(0)}$. Let $\widehat{\mu}$ denote the optimal solution of \eqref{eq:particle_program}.
\begin{itemize}
\item[(a)] For $\lambda$-displacement convex functions with $\lambda>0$,
\begin{multline*}
F(\mu_{k+1}) - F(\widehat{\mu}) \\\leq \ell \left(1-\left(\frac{2 \lambda \ell \gamma}{\ell+\lambda}\right)\right)^k W_2^2(\mu_0,\widehat{\mu})
\end{multline*}
holds as long as $\gamma\leq 2/(\lambda+\ell)$.
\item[(b)] Under $0$-displacement convexity,
\begin{multline*}
F(\mu_{k+1}) - F(\widehat{\mu}) \\ \leq \frac{2 (F(\mu_0) - F(\widehat{\mu}))W_2^2(\mu_0,\widehat{\mu})}{2 W_2^2(\mu_0,\widehat{\mu}) + (F(\mu_0) - F(\widehat{\mu})) \gamma k }
\end{multline*}
holds for $\gamma \leq 1/\ell$.
\item[(c)] Suppose that $F$ is star displacement convex and $\max_{m\in\{1,\dots, k\}} W_2^2(\mu_m,\widehat{\mu})\leq r^2$; then
\begin{multline*}
F(\mu_{k+1}) - F(\widehat{\mu}) \\\leq \frac{2 (F(\mu_0) - F(\widehat{\mu}))r^2}{2 r^2 + (F(\mu_0) - F(\widehat{\mu})) \gamma k }
\end{multline*}
holds for $\gamma \leq 1/\ell$.
\end{itemize}
\end{theorem}
\begin{table*}[t!]
\centering
\begin{tabular}{|l|l|}
\hline
Function class & Convergence rate
\\
\hline
$\lambda$-disp. convex & $\ell \left(\frac{\ell-\lambda}{\ell+\lambda}\right)^k W_2^2(\mu_0,\widehat{\mu})$ \\
$\lambda$-strongly convex & $\frac{\ell}{2} \left(\frac{\ell-\lambda}{\ell+\lambda}\right)^k \sum_{i}\| w_i - w^*_i\|^2_2$ \\
\hline
$0$-disp. convex & $2\ell\, W_2^2(\mu_0,\widehat{\mu})k^{-1}$ \\
convex & $2\ell \left(\sum_{i}\| w_i - w^*_i\|^2_2\right)(k+4)^{-1}$ \\
\hline
\end{tabular}
\caption{\textit{Convergence rates for the optimization of $\ell$-smooth functions.} We use the optimal choice for the stepsize $\gamma$ to achieve the best possible rate. Recall $\widehat{\mu} = \frac{1}{n}\sum_{i=1}^n\delta_{w_i^*}$ denotes the optimal solution for Eq.~\eqref{eq:particle_program}. Rates for convex functions: \cite{nesterov1999global}. Rates for displacement convex functions: Theorem~\ref{thm:smooth}. }
\label{tab:ratesmooth}
\end{table*}
Table~\ref{tab:ratesmooth} compares convergence rates for convex and displacement convex functions. We observe an analogy between the rates. The main difference is the replacement of Euclidean distance by the Wasserstein distance in the rates for displacement convex functions. This replacement is due to the permutation invariance of $F$.
The Euclidean distance between $(w_1^*, \dots, w_n^*)$ and the permuted particles $(w_{\sigma(1)}^*, \dots , w_{\sigma(n)}^*)$ can be arbitrarily large, while $F$ is invariant to such permutations. As proven in Theorem~\ref{thm:smooth}, the Wasserstein distance effectively replaces the Euclidean distance for permutation-invariant displacement convex functions.
Smooth displacement convex functions are non-convex and hence have saddle points. However, displacement convex functions do not have suboptimal local minima \cite{javanmard2020analysis}. Such a property has been frequently observed for various objective functions in machine learning. To optimize functions without suboptimal local minima, escaping saddle points is crucial, since saddle points may prevent the convergence of gradient descent~\cite{ge2015escaping}. \cite{lee2016gradient} proves that random initialization effectively avoids the convergence of gradient descent to saddle points. Similarly, our global convergence results rely on a weak condition on initialization: the particles have to be distinct. A regular random initialization satisfies this condition.
Escaping saddles with random initialization may require considerable time for general functions. \cite{du2017gradient} constructs a smooth function on which escaping saddles takes time exponential in the dimension. Notably, the result of the last theorem holds specifically for displacement convex functions. For this function class, random initialization not only enables escaping saddles but also leads to global convergence.
\section{Optimization of Lipschitz functions}
Various objective functions are not smooth. For example, the training loss of neural networks with the standard ReLU activation is not smooth. In physics, energy functions are often not smooth~\cite{mccann1997convexity,carrillo2022global}. Furthermore, recent sampling methods are built on non-smooth optimization with particle gradient descent~\cite{li2022sampling}. Motivated by these broad applications, we study the optimization of non-smooth displacement convex functions. In particular, we focus on $L$-Lipschitz functions, whose gradients are bounded by $L$.
To optimize non-smooth functions, we add noise to gradient iterations as
\begin{multline}
\tag{PGD}\label{eq:noisy_descent}
w_{i}^{(k+1)} = w_i^{(k)} \\- \gamma_k \left( \partial_{w_i} F(\mu_k) + \frac{1}{\sqrt{n}}\xi_i^{(k)} \right)
\end{multline}
where $\xi_1^{(k)}, \dots, \xi_n^{(k)} \in {\mathbb{R}}^d$ are random vectors drawn uniformly from the unit ball. The above perturbed gradient descent (PGD) is widely used in smooth optimization to escape saddle points~\cite{ge2015escaping}. The next theorem proves that this random perturbation can be leveraged for the optimization of non-smooth (star) displacement convex functions.
\begin{theorem}\label{thm:nonsmooth}
Consider the optimization of an $L$-Lipschitz function with \ref{eq:noisy_descent}, starting from $w_1^{(0)}\neq \dots \neq w_n^{(0)}$.
\begin{itemize}
\item[a.] If $F$ is $\lambda$-displacement convex, then
\begin{align*}
\min_{k \in \{1, \dots, m \}} \left\{ {\mathbf E} \left[ F(\mu_k) - F(\widehat{\mu})\right] \right\} & \leq \frac{2(L^2 +1)}{\lambda(m+1)}
\end{align*}
holds for $\gamma_k = 2/(\lambda(k+1))$.
\item[b.] If $F$ is star displacement convex, then
\begin{multline*}
\min_{k \in \{1, \dots, m \}} \left\{ {\mathbf E} \left[ F(\mu_k) - F(\widehat{\mu}) \right] \right\} \\\leq \frac{1}{\sqrt{m}}\left( W_2^2(\mu_0,\widehat{\mu}) + L +1\right)
\end{multline*}
holds for $\gamma_1 = \dots = \gamma_m = 1/\sqrt{m}$.
\end{itemize}
Notably, the above expectations are taken over the random vectors $\xi_1^{(k)}, \dots, \xi_n^{(k)}$.
\end{theorem}
Thus, \ref{eq:noisy_descent} requires $O(1/\epsilon^2)$ iterations to reach an $\epsilon$-suboptimal solution for Lipschitz displacement convex functions. This rate holds for the optimization of the energy distance, since it is $2$-Lipschitz and $0$-displacement convex~\cite{carrillo2018measure}.
\cite{hadi22} also establishes the convergence of gradient descent on the specific example of the energy distance. The last theorem extends this convergence to the general class of non-smooth Lipschitz displacement convex functions. While the convergence of \cite{hadi22} is in the Wasserstein distance, our convergence results are in terms of the function value.
\section{Approximation error} \label{sec:app_error}
Now, we turn our focus to the approximation error. We provide bounds on the approximation error for two important function classes:
\begin{itemize}
\item[(i)] Lipschitz functions in measures.
\item[(ii)] Convex and smooth functions in measures.
\end{itemize}
For (i), we provide the probabilistic bound $O\left(\frac{1}{\sqrt{n}}\right)$ on the approximation error; then, we improve the bound to $O(\frac{1}{n})$ for (ii).
\subsection{Lipschitz functions in measures} We introduce a specific notion of Lipschitz continuity for functions of probability measures. This notion relies on Maximum Mean Discrepancy (MMD) between probability measures. Given a positive definite kernel $K$, MMD$_K$ is defined as
\begin{multline*}
\left(\text{MMD}_K(\mu, \nu)\right)^2 = \int K(w,v) d\mu(w) d\mu(v) \\- 2 \int K(w,v) d\mu(w) d\nu (v) + \int K(w,v) d\nu(w) d\nu(v)
\end{multline*}
\noindent
MMD is widely used for the two-sample test in machine learning~\cite{gretton2012kernel}.
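As a small sketch, the squared MMD above can be evaluated directly for sparse measures; the Gaussian kernel below is an illustrative choice of positive definite kernel, not one fixed by the text:

```python
import math

def mmd_squared(ws, vs, kernel=lambda a, b: math.exp(-(a - b) ** 2)):
    """Squared MMD_K between two sparse measures on R (definition above)."""
    avg = lambda xs, ys: sum(kernel(x, y) for x in xs for y in ys) / (len(xs) * len(ys))
    return avg(ws, ws) - 2 * avg(ws, vs) + avg(vs, vs)

assert mmd_squared([0.0, 1.0], [0.0, 1.0]) == 0.0  # identical samples
assert mmd_squared([0.0, 1.0], [5.0, 6.0]) > 0.1   # well-separated samples
```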
Leveraging MMD, we define the following Lipschitz property.
\begin{definition}[$L$-MMD$_K$ Lipschitz]
$F$ is $L$-MMD$_K$ Lipschitz, if there exists a positive definite Kernel $K$ such that
\begin{align*}
| F(\mu) - F(\nu)| \leq L \times \text{MMD}_K(\mu,\nu)
\end{align*}
holds for all probability measures $\mu$ and $\nu$.
\end{definition}
Indeed, the above Lipschitz continuity extends the standard Lipschitz continuity to functions of probability measures. A wide range of objective functions obeys this Lipschitz continuity. In particular, \cite{chizat2018global}
introduces a unified formulation for training two-layer neural networks, sparse deconvolution, and tensor decomposition as
\begin{align} \label{eq:nn_chizat}
R\left(\int \Phi(w) d\mu(w) \right)
\end{align}
where $\Phi: {\mathbb{R}}^d \to \mathcal{H}$ is a map whose range lies in the Hilbert space $\mathcal{H}$ and $R: \mathcal{H} \to {\mathbb{R}}_+$. Under a weak assumption, $R$ is $L$-MMD$_K$ Lipschitz.
\begin{proposition} \label{prop:Lipschitz}
If $R$ is $L$-Lipschitz in its input, then it is $L$-MMD$_K$-Lipschitz for $K(w,v) = \langle \Phi(w),\Phi(v) \rangle$.
\end{proposition}
Thus, the class of Lipschitz functions is rich. For this function class, $O(\frac{1}{\sqrt{n}})$-approximation error is achievable.
\begin{proposition}
\label{lemma:approximation} Suppose that there exists a uniformly bounded kernel $\|K\|_{\infty} \leq B$ such that $F$ is $L$-MMD$_K$ Lipschitz; then,
\begin{align*}
\min_{\mu_n} F(\mu_n)-F^* \leq \frac{3\sqrt{B}}{\sqrt{n}}
\end{align*}
holds with probability at least $1- \exp(-1/n)$.
\end{proposition}
The last Proposition is a straightforward application of Theorem 7 in \cite{gretton2012kernel}.
Combining the above result with Theorem~\ref{thm:nonsmooth} yields a total complexity of $O(d/\epsilon^4)$ to find an $\epsilon$-optimal solution for Lipschitz displacement convex functions. The complexity improves to $O(d/\epsilon^2)$ for smooth functions, according to Theorem~\ref{thm:smooth}.
The established bound $O(1/\sqrt{n})$ can be improved under assumptions on the kernel $K$ associated with the Lipschitz continuity. For $d$-differentiable shift-invariant kernels, \cite{xu2022accurate} establishes a considerably tighter bound $O(\frac{\log(n)^d}{n})$ when the support of the optimal measure is a subset of the unit hypercube.
\subsection{Convex functions in measures}
If $F$ is convex and smooth in $\mu$, we can get a tighter bound on the approximation error.
\begin{lemma} \label{lemma:approx_convex}
Suppose $F$ is convex and smooth in $\mu$. If the probability measure $\mu$ is defined over a compact set, then
\begin{align*}
\min_{\mu_n} F(\mu_n) - F^* = O\left(\frac{1}{n}\right)
\end{align*}
holds for all $n$.
\end{lemma}
The proof of the last lemma is based on the convergence rate of the Frank--Wolfe algorithm~\cite{jaggi2013revisiting}. This algorithm optimizes a smooth convex function by adding particles one by one; after $n$ iterations, it obtains an $n$-sparse measure that is $O(1/n)$-suboptimal. \cite{bach2017breaking} uses this proof technique to bound the approximation error for neural networks. The last lemma extends this result to a broader function class.
Remarkably, the energy distance is convex and smooth in $\mu$, hence enjoys $O(1/n)$-approximation error as stated in the next lemma.
\begin{lemma} \label{lemma:energy_distance}
$E(\mu,\nu)$ is convex and smooth in $\mu$ when $\mu$ and $\nu$ have bounded support.
\end{lemma}
\subsection{Applications for neural networks}
The established theoretical analysis has a subtle application for the function approximation with neural networks. Consider the class of functions in the following form
\begin{align}
f(x) = \int \varphi(x^\top w) d\nu(w)
\end{align}
where $x, w \in {\mathbb{R}}^2$ lie on the unit circle and $\nu$ is a measure whose support is contained in the upper-half unit circle. Here, $\varphi$ is the standard zero-one ridge function:
\begin{align}
\varphi(a) = \begin{cases}
1 & a >0 \\
0 & a \leq 0
\end{cases}.
\end{align}
The above function class is used in the original McCulloch--Pitts model of neural networks~\cite{mcculloch1943logical}. To approximate $f$, one may use a neural network with a finite number of neurons implementing the output function
\begin{align}
f_n(x) = \frac{1}{n} \sum_{i=1}^n \varphi(x^\top w_i),
\end{align}
where $w_1, \dots, w_n$ are points on the unit circle representing the parameters of the neurons. To optimize the locations of $w_1,\dots, w_n$,
one may minimize the standard mean-squared loss
\begin{align}
\min_{w_1,\dots,w_n} \left( L(w):= {\mathbf E}_x \left( f_n(x) - f(x)\right)^2 \right).
\end{align}
As stated in the next corollary, \ref{eq:noisy_descent} optimizes $L$ up to the approximation error when the input $x$ is distributed uniformly over the unit circle.
\begin{corollary} \label{cor:neural_nets}
Suppose that the input $x$ is drawn uniformly from the unit circle. After a specific transformation of the coordinates of $w_1,\dots, w_n$, \ref{eq:noisy_descent} with $n$ particles and stepsize $\gamma_k =1/\sqrt{k}$ obtains $w^{(k)}:=[w_1^{(k)},\dots, w_n^{(k)}]$ after $k$ iterations such that
\begin{align}
{\mathbf E} \left[ L(w^{(k)})\right] =O\left(\frac{n}{\sqrt{k}}+\frac{1}{n}\right)
\end{align}
holds where the expectation is taken over the algorithmic randomness of \ref{eq:noisy_descent}.
\end{corollary}
The last corollary is a consequence of part (b) of Theorem~\ref{thm:nonsmooth} and the approximation error established in Lemma~\ref{lemma:approx_convex}. For the proof, we use the connection between $L$ and the energy distance derived by \cite{hadi22}. While \cite{hadi22} focuses on realizable settings, the last corollary also holds for non-realizable settings, where the measure $\nu$ is not an $n$-sparse measure.
\section{Experiments}
We experimentally validate the established bounds on the approximation and optimization errors. Specifically, we validate the results on the example of the energy distance, which satisfies the required conditions of our theoretical results.
\subsection{Optimization of the energy distance}
As noted in Example~\ref{example:energy}, the energy distance is displacement convex. Furthermore, it is easy to check that this function is $2$-Lipschitz. For the sparse measures in Eq.~\eqref{eq:disc_measure}, the energy distance has the following form
\begin{multline*}
n^2 E(\mu,\nu) = 2\sum_{i,j=1}^n |w_i - v_j| \\
- \sum_{i,j=1}^n | v_i - v_j | - \sum_{i,j=1}^n | w_i - w_j|,
\end{multline*}
where $n=100$ for this experiment.
We draw $v_1,\dots, v_n$ i.i.d. from uniform$[0,1]$. Since $E$ is not smooth, we use \ref{eq:noisy_descent} to optimize $w_1,\dots, w_n \in {\mathbb{R}}$. In particular, we draw the $\xi_i^{(k)}$ i.i.d. from uniform$[-0.05,0.05]$. For the stepsize, we use $\gamma_k = 1/\sqrt{k}$, as required for the convergence result in Theorem~\ref{thm:nonsmooth} (part b). In Figure~\ref{fig:energy_distance}, we observe a match between the theoretical and experimental convergence
rate for \ref{eq:noisy_descent}.
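A minimal version of this experiment can be sketched as follows (smaller $n$ and fewer iterations than in the figure, with the noise scale of the setup above; the $1/\sqrt{n}$ factor of \ref{eq:noisy_descent} is folded into the noise scale, as an assumption for illustration):

```python
import random

def energy(ws, vs):
    """Energy distance E(mu, nu) between two n-sparse measures on R."""
    n = len(ws)
    pair = lambda xs, ys: sum(abs(x - y) for x in xs for y in ys)
    return (2 * pair(ws, vs) - pair(vs, vs) - pair(ws, ws)) / n ** 2

def subgrad(ws, vs):
    """A subgradient of E with respect to each particle w_i."""
    n = len(ws)
    sgn = lambda a: float(a > 0) - float(a < 0)
    return [(2 * sum(sgn(w - v) for v in vs) - 2 * sum(sgn(w - u) for u in ws)) / n ** 2
            for w in ws]

def pgd(ws, vs, steps, rng):
    """Perturbed (sub)gradient descent with stepsize 1/sqrt(k) and uniform noise."""
    for k in range(1, steps + 1):
        gamma = 1.0 / k ** 0.5
        g = subgrad(ws, vs)
        ws = [w - gamma * (gi + rng.uniform(-0.05, 0.05)) for w, gi in zip(ws, g)]
    return ws

rng = random.Random(0)
vs = [rng.uniform(0.0, 1.0) for _ in range(20)]
ws0 = [rng.uniform(2.0, 3.0) for _ in range(20)]  # start away from the target measure
ws = pgd(ws0, vs, steps=2000, rng=rng)
# the energy distance to the target decreases over the run
```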
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{images/energy_distance_dc.pdf}
\caption{\footnotesize{\textbf{The convergence of \ref{eq:noisy_descent} for the energy distance.} Horizontal: $\log(k)$; vertical: $\log(E(\mu_k,\nu)- E(\widehat{\mu},\nu))$. The red dashed line is the theoretical convergence rate. The blue line is the convergence observed in practice for the average of 10 independent runs. }}
\label{fig:energy_distance}
\end{figure}
\subsection{Approximation error for the energy distance
}
Lemma~\ref{lemma:approx_convex} establishes an $O(1/n)$ approximation error for convex functions of measures. Although the energy distance $E(\mu,\nu)$ is not convex in the support of $\mu$, it is convex and smooth in $\mu$, as stated in Lemma~\ref{lemma:energy_distance}. Thus, the $O(1/n)$ approximation error holds for the energy distance. We experimentally validate this result. Consider the recovery of $\nu=\,$uniform$[-1,1]$ by minimizing the energy distance
\begin{multline*}
E(\mu,\nu) = \frac{2}{n}\sum_{i=1}^n \int | w_i - v| d\nu(v) \\ - \frac{1}{n^2}\sum_{i,j=1}^n | w_i- w_j| - \int | v- v'|d\nu(v) d\nu(v').
\end{multline*}
The above integrals can be computed in closed form using $\int_{-1}^1|w-v|\,d\nu(v) = (w^2+1)/2$ for $|w|\le 1$. Hence, we can compute the derivative of $E$ with respect to $w_i$. We run \ref{eq:noisy_descent} with the stepsize determined in part (b) of Theorem~\ref{thm:nonsmooth} for $k=3\times10^5$ iterations and various $n \in \{2^2, \dots, 2^8\}$. Figure~\ref{fig:my_label} shows how the error decreases with $n$ on a log-log scale. In this plot, we observe that $E$ enjoys a mildly better approximation error than the established bound $O(1/n)$.
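As a sanity check on this closed form (note the factor $1/2$ coming from the uniform probability density on $[-1,1]$), a quick midpoint-rule integration confirms $\int_{-1}^1 |w-v|\,d\nu(v) = (w^2+1)/2$ for $|w|\le 1$:

```python
def uniform_abs_moment(w, m=100000):
    """Midpoint-rule approximation of the integral of |w - v| against the
    uniform probability density 1/2 on [-1, 1]."""
    h = 2.0 / m
    return sum(abs(w - (-1.0 + (i + 0.5) * h)) * 0.5 * h for i in range(m))

for w in (0.0, 0.3, -0.7):
    assert abs(uniform_abs_moment(w) - (w * w + 1) / 2) < 1e-4
```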
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{images/approximation.pdf}
\caption{\footnotesize{\textbf{Approximation error for the energy distance.} Horizontal: $n$; vertical: $E(\widehat{\mu}_n, \nu)$ where $\widehat{\mu}_n$ is obtained by $3\times 10^5$ iterations of \ref{eq:noisy_descent} with $n$ particles. The red dashed line is the theoretical $O(1/n)$-bound for the approximation error. The plot is in the $\log$-scale for both axes. The (blue) plot shows the average of 10 independent runs. }}
\label{fig:my_label}
\end{figure}
\section{Discussions}
We establish non-asymptotic convergence rates for particle gradient descent when optimizing displacement convex functions of measures. Leveraging these rates, we prove that the optimization of displacement convex functions of (infinite-dimensional) measures can be solved in time polynomial in the input dimension and the desired accuracy. This finding will be of interest to various communities, including non-convex optimization, optimal transport theory, particle-based sampling, and theoretical physics.
The established convergence rates are limited to particle gradient descent. Yet, there may be other algorithms that converge faster. The convex optimization literature has established lower bounds on the complexity required to optimize convex functions (with first-order oracles)~\cite{nesterov1999global}. Given that displacement convex functions do not obey the conventional notion of convexity, it is not clear whether these lower bounds extend to this specific class of non-convex functions. More research is needed to establish (oracle-based) lower complexity bounds for displacement convex optimization.
Nesterov's accelerated gradient descent enjoys a considerably faster convergence compared to gradient descent in convex optimization. Indeed, this method attains the optimal convergence rate using only first-order derivatives of smooth convex functions~\cite{nesterov1999global}. This motivates future research to analyze the convergence of accelerated gradient descent on displacement convex functions.
We provided examples of displacement convex functions, including the energy distance. Displacement convex functions are not limited to these examples. A progression of this work is to assess the displacement convexity of various non-convex functions. In particular, non-convex functions invariant to permutation of the coordinates, including latent variable models and matrix factorization~\cite{anandkumar2014tensor}, may obey displacement convexity under weak assumptions.
A major limitation of our result is the exclusion of displacement convex functions with entropy regularizers, which emerge frequently in physics~\cite{mccann1997convexity}. The entropy is displacement convex, but restricting the support of measures to a sparse set precludes estimating the entropy. Thus, particle gradient descent is not practical for optimizing functions with entropy regularizers. To optimize such functions, the existing literature uses a system of interacting particles solving a stochastic differential equation~\cite{philipowski2007interacting}. In asymptotic regimes, this algorithm implements a gradient flow converging to the globally optimal measure~\cite{philipowski2007interacting}. To assess the complexity of these particle-based algorithms, we need non-asymptotic analyses for a finite number of particles.
\section*{Acknowledgments and Disclosure of Funding}
We thank Francis Bach, Lenaic Chizat and Philippe Rigollet for their helpful discussions on the related literature on particle-based sampling, the energy distance minimization and Riemannian optimization. This project was funded by the Swiss National Science Foundation (grant P2BSP3\_195698).
\section{Introduction}
The AdS/CFT correspondence \cite{Maldacena:1997re, Gubser:1998bc, Witten:1998qj} is a mysterious duality between field theories and gravity theories. It gives a new geometric interpretation to a special class of field theories. To check this correspondence, the conformal symmetry often plays an important role in explicit computations of the partition function and correlation functions. It is an interesting problem to construct computable examples of generalizations of the AdS/CFT correspondence by deforming the conformal symmetry. The authors of \cite{McGough:2016lol} proposed such an example: the AdS$_3$/CFT$_2$ correspondence with the $T\overline{T}$ deformation and a finite radius cutoff.
Let us briefly review the $T\overline{T}$ deformation of a 2d CFT on flat spacetime \cite{Zamolodchikov:2004ce,Smirnov:2016lqw, Cavaglia:2016oda}. Consider a CFT deformed by the $T\overline{T}$ operator with a deformation parameter $\mu$. The action $S^{(\mu)}_{\textrm{QFT}}$ of this deformed CFT is defined via the differential equation
\begin{align}
\frac{d S^{(\mu)}_{\textrm{QFT}}}{d \mu}=\int \mathrm{d}^2 x \,(T\overline{T})_{\mu}, \;\;\;\;S^{(\mu)}_{\textrm{QFT}}\Bigr|_{\mu=0}=S_\textrm{CFT},
\end{align}
where $S_\textrm{CFT}$ is the action of the undeformed CFT, and $(T\overline{T})_{\mu}$ is a local operator which is constructed by the energy momentum tensor of the deformed CFT. {See Eq. (6.10) in \cite{Cavaglia:2016oda} for a simple example.}
To first order in $\mu$, the perturbative action $S_{\textrm{QFT}}$ is given by
\begin{align}
S_{\textrm{QFT}}=S_\textrm{CFT}+\mu \int \mathrm{d}^2 x \, T\overline{T}\,,\label{pa}
\end{align}
where
\begin{align}
T:=T_{ww}\,, \qquad \overline{T}:=T_{\bar{w}\bar{w}} \,,
\end{align}
are the energy momentum tensors of the undeformed CFT, and $w$ and $\bar{w}$ are complex coordinates.\footnote{We omit the $\Theta^2:=T_{w\bar{w}}T_{w\bar{w}}$ term because correlation functions involving $\Theta^2$ vanish on the cylinder.}
The $T\overline{T}$ deformation of 2d CFT has a solvable structure. In particular, the energy spectrum on a spatial circle can be computed non-perturbatively \cite{Smirnov:2016lqw, Cavaglia:2016oda}. For recent studies on the $T\overline{T}$ deformation see, for example, \cite{Cardy:2015xaa, Giribet:2017imm, Dubovsky:2017cnj, Cardy:2018sdv, Aharony:2018vux, Bonelli:2018kik, Dubovsky:2018bmo, Datta:2018thy, Conti:2018jho, Chen:2018keo, Aharony:2018bad, Conti:2018tca, Santilli:2018xux, Baggio:2018rpv, Chang:2018dge, Jiang:2019tcq, LeFloch:2019rut, Jiang:2019hux, Conti:2019dxg, Chang:2019kiu, Frolov:2019nrr}. Non-Lorentz invariant cases are also studied in, for example, \cite{Guica:2017lia, Chakraborty:2018vja, Aharony:2018ics, Cardy:2018jho, Nakayama:2018ujt, Guica:2019vnb}.
The proposal in \cite{McGough:2016lol} is that the gravity dual of the $T\overline{T}$-deformed 2d holographic CFT with $\mu>0$ is AdS$_3$ gravity with a finite radius cutoff. This proposal has been studied and checked by various methods. In particular, the energy spectrum of the deformed CFT is matched to the quasi-local energy in the cutoff spacetime $r\le r_c$ \cite{McGough:2016lol}. See also recent developments of this holography in \cite{Shyam:2017znq, Kraus:2018xrn, Cottrell:2018skz, Bzowski:2018pcy, Taylor:2018xcy, Hartman:2018tkw, Shyam:2018sro, Caputa:2019pam}. Another gravity dual of the deformed CFT for vacua of string theory was proposed in \cite{Giveon:2017nie}, see also \cite{Giveon:2017myj, Asrat:2017tzd, Baggio:2018gct, Apolo:2018qpq, Babaro:2018cmq, Chakraborty:2018aji, Araujo:2018rho, Giveon:2019fgr, Chakraborty:2019mdf, Nakayama:2019mvq, Dei:2018mfl, Dei:2018jyj}.
Holographic entanglement entropy \cite{Ryu:2006bv, Ryu:2006ef} is a well studied topic in the AdS/CFT correspondence. The entanglement entropy in the $T\overline{T}$-deformed CFT and its holographic dual have been studied in \cite{Chakraborty:2018kpr, Donnelly:2018bef, Chen:2018eqk, Gorbenko:2018oov, Park:2018snf, Sun:2019ijq, Banerjee:2019ewu, Murdia:2019fax, Ota:2019yfe}.
In particular, a perturbative computation of the entanglement entropy in the $T\overline{T}$-deformed 2d CFT on a cylinder \cite{Chen:2018eqk} and its non-perturbative computation at large central charge on a sphere \cite{Donnelly:2018bef} are consistent with the holographic entanglement entropy with a radius cutoff.
Without the $T\overline{T}$ deformation, the entanglement entropy of a single interval in the ground state of a 2d CFT is given by a well known formula \cite{Calabrese:2004eu, Calabrese:2009qy}. Although it has been reproduced by holography, we note that it is valid in {\it any} CFT, not only in the {\it holographic CFT}, {where the conditions for the holographic CFT are a large central charge and a sparse spectrum.}
This universality follows from the universality of two point functions in CFT.
On the other hand, the entanglement entropy of multiple intervals is related to higher point functions of the twist operators, which depend on the details of the CFT. Thus, in order to check the AdS$_3$/CFT$_2$ correspondence for the entanglement entropy of multiple intervals, we need to use the conditions for the {\it holographic CFT}. In \cite{Hartman:2013mia}, the entanglement entropy of multiple intervals in the 2d holographic CFT was computed from the dominant contribution (the vacuum conformal block) to the correlation functions, and it agrees with the holographic entanglement entropy formula.
This agreement is an important consistency check for the holographic entanglement entropy formula because it is confirmed under the conditions of {\it holographic CFT} in the field theory side.
With the first order $T\overline{T}$ deformation, the R\'{e}nyi entropy of a {\it single} interval in the deformed 2d free-fermion CFT was studied at {\it zero} temperature \cite{Chakraborty:2018kpr, Sun:2019ijq} {by using the twist operators.
In this paper, we develop a formula for R\'{e}nyi entanglement entropy of {\it multiple} intervals in 2d CFT with the first order $T\overline{T}$ deformation at {\it finite} temperature by using the twist operators. Our formula reproduces the R\'{e}nyi entropy of a {\it single} interval in the deformed 2d CFT at finite temperature~\cite{ Chen:2018eqk}, where a different method, a conformal map from a replica manifold to a complex plane, was used. }
We find that the entanglement entropy of multiple intervals in the deformed holographic CFT is a sum of single-interval contributions because of the dominant contribution from the vacuum conformal block.
The R\'{e}nyi entropy of two intervals also becomes a sum of single-interval contributions {if} the distance between the intervals is large enough. These `additivity' properties from the field theory are consistent with the holographic computation with a radius cutoff.
\begin{figure}[]
\centering
\subfigure[$s$-channel]
{\includegraphics[width=7.3cm]{sCHN} \label{}}
\subfigure[$t$-channel]
{\includegraphics[width=7.3cm]{tCHN} \label{}}
\caption{The holographic entanglement entropy for two intervals $[x'_1, x'_2]\cup[x'_3, x'_4]$ at $u=u_c$: schematic pictures of minimal surfaces (red curves). }\label{hee}
\end{figure}
From the holographic perspective, the entanglement entropy is identified with the area of the minimal surface anchored at the boundary points of the interval at the cutoff $u=u_c$. For two intervals, there are two configurations of minimal surfaces: the disconnected phase ($s$-channel) and the connected phase ($t$-channel), as shown in Figure \ref{hee}. By comparing the two areas of the minimal surfaces, we can determine `phase transition' points of the holographic entanglement entropy.
This phase structure of the holographic entanglement entropy will be useful to understand the entanglement entropy in the $T\overline{T}$ deformed CFT.
In \cite{Ota:2019yfe}, the phase transition of the holographic entanglement entropy of two intervals with the {\it same} lengths at {\it zero} temperature was investigated. In this paper, we generalize it to two intervals with {\it different} lengths and at {\it finite} temperature.
At high temperature, our holographic computation shows that the $s$-channel is always favored, so there is no phase transition.
We show that this agrees with the field theory computations.
However, at zero temperature and intermediate temperature there is a phase transition between $s$-channel and $t$-channel. We also discuss the cutoff dependence of the phase transition of the entanglement entropy.
The organization of this paper is as follows. In section \ref{section2}, we provide the formulas for the R\'{e}nyi entropy with the first order $T\overline{T}$ deformation by using the twist operators. Based on these formulas, in sections \ref{section3} and \ref{section4}, we explicitly compute the entanglement entropy and the R\'{e}nyi entropy, respectively. In section \ref{section5}, we study the holographic entanglement entropy of two intervals with the finite radius cutoff and its phase structure. We conclude in section \ref{summary}.
\section{Formulas of the R\'{e}nyi entropy in the deformed CFT }\label{section2}
In this section, we develop a formalism to compute the R\'{e}nyi entropy in a 2d CFT at finite temperature with a first order perturbation by the $T\overline{T}$ deformation. We use the twist operators to compute correlation functions on the $n$-sheeted surface in the replica method. This formalism generalizes the computation of the R\'{e}nyi entropy of a single interval \cite{Chen:2018eqk} to multiple intervals.
Consider the $T\overline{T}$-deformed CFT living on the manifold $\mathcal{M}$.
By the replica method \cite{Calabrese:2004eu, Calabrese:2009qy}, the R\'{e}nyi entropy of a subsystem $A\subset\mathcal{M}$ can be expressed as follows
\begin{align}
S_n(A):=\frac{1}{1-n}\log\frac{Z_n(A)}{Z^n},\label{ee1}
\end{align}
where $Z$ is the partition function defined on $\mathcal{M}$ and $Z_n(A)$ is the one defined on the $n$-sheeted surface $\mathcal{M}^n(A)$, constructed by sewing $n$ copies of $\mathcal{M}$ cyclically along $A$ on each copy.
An example of this $n$-sheeted surface is displayed in Fig. \ref{ManifoldFig}.
\begin{figure}[]
\centering
{\includegraphics[width=10cm]{MANIFOLD} \label{}}
\caption{Schematic picture of manifold $\mathcal{M}^{3}$.}\label{ManifoldFig}
\end{figure}
Note that this R\'{e}nyi entropy \eqref{ee1} reduces to the entanglement entropy in the $n\rightarrow1$ limit:
\begin{align}
S(A)=\lim_{n\to1}S_n(A)\,.
\end{align}
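The $n\to1$ limit can be illustrated with a small numerical sketch (our addition, not part of the paper): for a reduced density matrix with eigenvalue spectrum $\{p_k\}$, the R\'{e}nyi entropy $S_n=\frac{1}{1-n}\log\sum_k p_k^n$ approaches the von Neumann entropy $-\sum_k p_k\log p_k$ as $n\to1$. The spectrum below is a hypothetical toy example, not derived from any CFT.

```python
import numpy as np

# Hypothetical spectrum of a reduced density matrix.
p = np.array([0.5, 0.3, 0.2])

def renyi(n):
    """S_n = log(Tr rho^n) / (1 - n) for a diagonal rho with spectrum p."""
    return np.log(np.sum(p**n)) / (1.0 - n)

# von Neumann entropy, i.e. the n -> 1 limit of S_n.
von_neumann = -np.sum(p * np.log(p))

# n = 2 Renyi entropy has the simple closed form -log(Tr rho^2).
s2 = renyi(2)
```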
In this paper, we consider the deformed CFT by the first order perturbation of $T\overline{T}$ {at finite temperature, i.e. on a cylinder $\mathcal{M}$.} The perturbative action is
\begin{align}
S_\textrm{QFT}=S_{\textrm{CFT}}+\mu\int_\mathcal{M}T\overline{T},\label{Sqft}
\end{align}
where $T$ and $\overline{T}$ are the energy momentum tensors of the undeformed CFT. We use coordinates $w=x+i\tau$ and $\bar{w}=x-i\tau$ on the cylinder $\mathcal{M}$. Here, $\tau$ is periodic, $\tau\sim\tau+\beta$, and $\beta$ can be interpreted as the inverse temperature. Thus, the integral sign in (\ref{Sqft}) is understood as $\int_{\mathcal{M}}:=\int^{\infty}_{-\infty} \mathrm{d} x\int^{\beta}_0 \mathrm{d}\tau$. With this perturbative action, the first order perturbation of $S_n(A)$ is \cite{Chen:2018eqk}
\begin{align}
\delta S_n(A)=\frac{\mu}{n-1}\left(\int_{\mathcal{M}^n}\langle T\overline{T}\rangle_{\mathcal{M}^n}-n\int_{\mathcal{M}} \langle T\overline{T}\rangle_{\mathcal{M}}\right).\label{ee2}
\end{align}
Since we consider the first order perturbation by $\mu$, we can use CFT techniques to compute the correlation functions in (\ref{ee2}).
Let us express $\int_{\mathcal{M}^n}\langle T\overline{T}\rangle_{\mathcal{M}^n}$ by the twist operators. Consider $m$ intervals $A=[x_1, x_2]\cup\cdots\cup[x_{2m-1}, x_{2m}]$ as the subsystem. In this subsystem, $\int_{\mathcal{M}^n}\langle T\overline{T}\rangle_{\mathcal{M}^n}$ is given by \cite{Cardy:2007mb, Calabrese:2009qy, Sun:2019ijq}
\begin{align}
\begin{split}
\int_{\mathcal{M}^n}\langle T\overline{T}\rangle_{\mathcal{M}^n}
&= \sum_{k=1}^{n} \int_\mathcal{M} \frac{\langle T_{k}(w)\overline{T}_{k}(\bar{w})\prod_{i=1}^m \sigma_n(w_{2i-1}, \bar{w}_{2i-1})\bar{\sigma}_n(w_{2i}, \bar{w}_{2i})\rangle_{\mathcal{M}}}{\langle\prod_{i=1}^m\sigma_n(w_{2i-1}, \bar{w}_{2i-1})\bar{\sigma}_n(w_{2i}, \bar{w}_{2i})\rangle_{\mathcal{M}}} \\
&=\int_\mathcal{M}\frac{1}{n}\frac{\langle T^{(n)}(w)\overline{T}^{(n)}(\bar{w})\prod_{i=1}^m\sigma_n(w_{2i-1}, \bar{w}_{2i-1})\bar{\sigma}_n(w_{2i}, \bar{w}_{2i})\rangle_{\mathcal{M}}}{\langle\prod_{i=1}^m\sigma_n(w_{2i-1}, \bar{w}_{2i-1})\bar{\sigma}_n(w_{2i}, \bar{w}_{2i})\rangle_{\mathcal{M}}} \,, \label{ee3}
\end{split}
\end{align}
where $\sigma_n$ and $\bar{\sigma}_n$ are the twist operators and $w_{i}$ denotes an end point of each interval.
Note that $k$ in the first line is a replica index and $T_{k}(w)\overline{T}_{k}(\bar{w})$ is defined from the $k$-th replica fields. In the second line, we use the following identity in a correlation function:
\begin{align}
\begin{split}
\sum_{k=1}^{n} \, \langle T_{k}(w) \, \overline{T}_{k}(\bar{w}) \, \cdots \rangle = \frac{1}{n} \, \langle T^{(n)}(w) \, \overline{T}^{(n)}(\bar{w}) \, \cdots \rangle \,,
\label{REPLICA}
\end{split}
\end{align}
where $T^{(n)}(w)$ and $\overline{T}^{(n)}(\bar{w})$ are the total energy momentum tensors of $n$ replica fields, which are defined as follows:
\begin{align}
\begin{split}
T^{(n)}(w) := \sum_{k=1}^{n} \, T_{k}(w) \,, \qquad \overline{T}^{(n)}(\bar{w}) := \sum_{k=1}^{n} \, \overline{T}_{k}(\bar{w}) \,.
\label{REPLICA2}
\end{split}
\end{align}
Note that \eqref{REPLICA} is valid when the operators ``$\cdots$'' therein have cyclic symmetry under a shift of the replica indices.
In order to compute the correlation functions on the cylinder $\mathcal{M}$, consider a conformal map
\begin{align}
z=e^{\frac{2\pi w}{\beta}}\,, \qquad \bar{z}=e^{\frac{2\pi \bar{w}}{\beta}},
\end{align}
from $w$ on $\mathcal{M}$ to $z$ on a complex plane $\mathcal{C}$. Under this transformation, the total energy momentum tensors of $n$ replica fields transform as
\begin{equation}
\begin{split}
&T^{(n)}(w)=\left(\frac{2\pi}{\beta}z\right)^2T^{(n)}(z)-\frac{\pi^2nc}{6\beta^2}, \\
&\overline{T}^{(n)}(\bar{w})=\left(\frac{2\pi}{\beta} \bar{z}\right)^2\overline{T}^{(n)}(\bar{z})-\frac{\pi^2nc}{6\beta^2},\label{temt}
\end{split}
\end{equation}
{where $c$ is the central charge of the undeformed CFT. }
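The constant term in \eqref{temt} is the Schwarzian anomaly of the exponential map. As a symbolic sanity check (our addition, not part of the paper's derivation), one can verify that $\{z;w\}=z'''/z'-\frac{3}{2}(z''/z')^2=-2\pi^2/\beta^2$ for $z=e^{2\pi w/\beta}$, so that $\frac{nc}{12}\{z;w\}=-\frac{\pi^2 nc}{6\beta^2}$, reproducing the anomalous term for $n$ replica copies:

```python
import sympy as sp

w = sp.symbols('w')
beta, n, c = sp.symbols('beta n c', positive=True)

z = sp.exp(2*sp.pi*w/beta)              # conformal map from the cylinder
zp = sp.diff(z, w)

# Schwarzian derivative {z; w} = z'''/z' - (3/2)(z''/z')^2
schwarzian = sp.simplify(sp.diff(z, w, 3)/zp
                         - sp.Rational(3, 2)*(sp.diff(z, w, 2)/zp)**2)

# Anomalous term for the total energy momentum tensor of n replicas.
anomaly = n*c/12 * schwarzian
```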
For the calculation of \eqref{ee3}, we use (\ref{temt}) and the Ward identity with the energy momentum tensor (see, for example, \cite{Guica:2019vnb})
\begin{align}
\begin{split}
\langle T^{(n)}(z)\mathcal{O}_1(z_1, \bar{z}_1) \cdots & \mathcal{O}_{2m}(z_{2m}, \bar{z}_{2m}) \rangle_\mathcal{C} \\
& =\sum_{j=1}^{2m}\left(\frac{h_j}{(z-z_j)^2}+\frac{1}{z-z_j}\partial_{z_j}\right)\langle \mathcal{O}_1(z_1, \bar{z}_1)\cdots\mathcal{O}_{2m}(z_{2m}, \bar{z}_{2m})\rangle_\mathcal{C}, \\
\langle \overline{T}^{(n)}(\bar{z})\mathcal{O}_1(z_1, \bar{z}_1) \cdots & \mathcal{O}_{2m}(z_{2m}, \bar{z}_{2m})\rangle_\mathcal{C} \\
& = \sum_{j=1}^{2m}\left(\frac{\bar{h}_j}{(\bar{z}-\bar{z}_j)^2}+\frac{1}{\bar{z}-\bar{z}_j}\partial_{\bar{z}_j}\right)\langle \mathcal{O}_1(z_1, \bar{z}_1)\cdots\mathcal{O}_{2m}(z_{2m}, \bar{z}_{2m})\rangle_\mathcal{C},
\end{split}
\end{align}
where $h_i$ and $\bar{h}_i$ are the conformal dimensions of the primary operators $\mathcal{O}_i$\footnote{In the usual conformal Ward identity, the number of operators $\mathcal{O}_i$ can be arbitrary. Here we restrict it to an even number $2m$ because our interest in this paper is $m$ intervals.}.
The conformal dimensions of the twist operators are \cite{Calabrese:2004eu, Calabrese:2009qy}
\begin{align} \label{CDC}
h_{\sigma_n} =\bar{h}_{\sigma_n}=\frac{c}{24}\left(n-\frac{1}{n}\right).
\end{align}
Then, we obtain
\begin{align}
&\frac{\langle T^{(n)}(w)\overline{T}^{(n)}(\bar{w})\prod_{i=1}^m\sigma_n(w_{2i-1}, \bar{w}_{2i-1})\bar{\sigma}_n(w_{2i}, \bar{w}_{2i})\rangle_{\mathcal{M}}}{\langle\prod_{i=1}^m\sigma_n(w_{2i-1}, \bar{w}_{2i-1})\bar{\sigma}_n(w_{2i}, \bar{w}_{2i})\rangle_{\mathcal{M}}}\notag\\
=&\frac{1}{\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}}\left[-\frac{\pi^2nc}{6\beta^2}+\left(\frac{2\pi}{\beta}z\right)^2\sum_{j=1}^{2m}\left(\frac{c(n-\frac{1}{n})}{24(z-z_j)^2}+\frac{1}{z-z_j}\partial_{z_j}\right)\right]\notag\\
\times&\left[-\frac{\pi^2nc}{6\beta^2}+\left(\frac{2\pi}{\beta}\bar{z}\right)^2\sum_{j=1}^{2m}\left(\frac{c(n-\frac{1}{n})}{24(\bar{z}-\bar{z}_j)^2}+\frac{1}{\bar{z}-\bar{z}_j}\partial_{\bar{z}_j}\right)\right]
\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}},
\label{ee5}
\end{align}
where the differential operators $\partial_{z_j}$ and $\partial_{\bar{z}_j}$ in (\ref{ee5}) act only on the correlation function.
Finally, with (\ref{ee2}), (\ref{ee3}), and (\ref{ee5}), we obtain an expression of the R\'{e}nyi entropy in the deformed CFT of the multiple intervals by using the twist operators
\begin{align}
&\delta S_n(A)\notag = - \frac{\mu c}{12(n-1)}\frac{8\pi^4}{\beta^4} \\
&\times \int_{\mathcal{M}}\Biggm[z^2\sum_{j=1}^{2m}\left(\frac{c(n-1/n)}{24(z-z_j)^2}+\frac{\partial_{z_j}\log\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}}{(z-z_j)}\right)\notag\\
&\;\;\;\;\;\;\;\;\;\;+\bar{z}^2\sum_{j=1}^{2m}\left(\frac{c(n-1/n)}{24(\bar{z}-\bar{z}_j)^2}+\frac{\partial_{\bar{z}_j}\log\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}}{(\bar{z}-\bar{z}_j)}\right)\Biggm]\notag\\
&+\frac{\mu}{n(n-1)}\frac{16\pi^4}{\beta^4}\frac{1}{\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}}\notag\\
&\times\int_{\mathcal{M}}z^2\left(\sum_{j=1}^{2m}\left(\frac{c(n-1/n)}{24(z-z_j)^2}+\frac{\partial_{z_j}}{(z-z_j)}\right)\right)
\bar{z}^2\left(\sum_{j=1}^{2m}\left(\frac{c(n-1/n)}{24(\bar{z}-\bar{z}_j)^2}+\frac{\partial_{\bar{z}_j}}{(\bar{z}-\bar{z}_j)}\right)\right)\notag\\
&\times\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}.\label{dsna2}
\end{align}
Taking the limit $n\to1$, the entanglement entropy $\delta S(A):=\lim_{n\to1}\delta S_n(A)$ is given by
\begin{align}
&\delta S(A)\notag = -\mu \left(\frac{c}{12}\right)^2\frac{8\pi^4}{\beta^4} \\
&\;\;\; \times \int_{\mathcal{M}}\Biggm[z^2\sum_{j=1}^{2m}\left(\frac{1}{(z-z_j)^2}+\lim_{n\to1}\frac{12\partial_{z_j}\log\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}}{c(n-1)(z-z_j)}\right)\notag\\
&\;\;\;\;\;\;\;\;\;\;\;\;\;+\bar{z}^2\sum_{j=1}^{2m}\left(\frac{1}{(\bar{z}-\bar{z}_j)^2}+\lim_{n\to1}\frac{12\partial_{\bar{z}_j}\log\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}}{c(n-1)(\bar{z}-\bar{z}_j)}\right)\Biggm].\label{dsa}
\end{align}
If the correlation function is factorized as
\begin{align}
\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}=f(z_1, \cdots, z_{2m})g(\bar{z}_1, \cdots, \bar{z}_{2m}),\label{fcf}
\end{align}
we can obtain a simpler expression of $\delta S_n(A)$
\begin{align}
&\delta S_n(A)\notag\\
=&- \frac{\mu c}{12(n-1)}\frac{8\pi^4}{\beta^4}\int_{\mathcal{M}}\Biggm[z^2\sum_{j=1}^{2m}\left(\frac{c(n-1/n)}{24(z-z_j)^2}+\frac{\partial_{z_j}\log\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}}{(z-z_j)}\right)\notag\\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+\bar{z}^2\sum_{j=1}^{2m}\left(\frac{c(n-1/n)}{24(\bar{z}-\bar{z}_j)^2}+\frac{\partial_{\bar{z}_j}\log\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}}{(\bar{z}-\bar{z}_j)}\right)\Biggm]\notag\\
&+\frac{\mu}{n(n-1)}\frac{16\pi^4}{\beta^4}\int_{\mathcal{M}}z^2\sum_{j=1}^{2m}\left(\frac{c(n-1/n)}{24(z-z_j)^2}+\frac{\partial_{z_j}\log\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}}{(z-z_j)}\right)\notag\\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\times \bar{z}^2\sum_{j=1}^{2m}\left(\frac{c(n-1/n)}{24(\bar{z}-\bar{z}_j)^2}+\frac{\partial_{\bar{z}_j}\log\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}}{(\bar{z}-\bar{z}_j)}\right).\label{dsna}
\end{align}
The correlation functions which are studied explicitly in section \ref{section3} and \ref{section4} satisfy (\ref{fcf}).
{From here on, we set all $\tau_i$ to be equal, which is equivalent to setting $\tau_i =0$ because of the periodicity of $\tau$. }
\section{Explicit computation of the entanglement entropy $\delta S(A)$ }\label{section3}
In this section, we explicitly compute $\delta S(A)$ (\ref{dsa}) for a single interval, two intervals, and multiple intervals. We show that $\delta S(A)$ of multiple intervals is a sum of single-interval contributions if the multi-interval correlation function of the twist operators factorizes into two point functions, as happens for the dominant vacuum conformal block contribution in the holographic CFT. This property of $\delta S(A)$ is consistent with the holographic entanglement entropy.
\subsection{Single interval}
Let us first compute $\delta S(A)$ of the single-interval case by the twist operator method (\ref{dsa}).
The correlation function of the twist operators for the single interval $A=[x_1, x_2]$ is
\begin{align}
\langle\sigma_n(z_1, \bar{z}_1)\bar{\sigma}_n(z_2, \bar{z}_2)\rangle_{\mathcal{C}} &= \frac{c_n}{|z_1-z_2|^{2(h_{\sigma_{n}}+\bar{h}_{\sigma_{n}})}}
= \frac{c_{n}}{|z_1-z_2|^{\frac{c}{6}\left(n-\frac{1}{n}\right)}} \,, \label{tp}
\end{align}
where $c_n$ is a constant and \eqref{CDC} is used.
Thus, $\delta S(A)$ (\ref{dsa}) of the single interval is
\begin{align}
\begin{split}
\delta S(A) & \\
=&-\mu \left(\frac{c}{12}\right)^2 \frac{8\pi^4}{\beta^4} \int_{\mathcal{M}} \Biggl[z^2 \biggl(\frac{1}{(z-z_1)^2} + \frac{1}{(z-z_2)^2} \\
&\qquad\qquad\qquad\qquad\qquad\quad +\frac{-2}{(z-z_1)(z_1-z_2)}+\frac{-2}{(z-z_2)(z_2-z_1)}\biggr) + \textrm{h.c.}\Biggr] \\
=&-\mu \left(\frac{c}{12}\right)^2\frac{8\pi^4}{\beta^4}\int_{\mathcal{M}}\left(\frac{z^2(z_1-z_2)^2}{(z-z_1)^2(z-z_2)^2}+\textrm{h.c.}\right) \\
=& - \mu \, \frac{\pi ^4 c^2 \, (x_{2}-x_{1})}{9 \beta ^3}\coth \left(\frac{\pi (x_{2}-x_{1}) }{\beta }\right) \label{dsasi} \,,
\end{split}
\end{align}
where we used $z_{i} := e^{\frac{2\pi}{\beta}x_{i}}$ with $\tau_i=0$ in the last equality. For a detailed calculation of the integration in \eqref{dsasi}, see Appendix \ref{appendixA}.
Note that the $``-2"$ in the numerator of the first equality comes from the derivative of the logarithm term in \eqref{dsa}, i.e. we only need to read off the power of $z_1-z_2$ in \eqref{tp}, $-\frac{c}{12}(n-\frac{1}{n})$.\footnote{It is not $-\frac{c}{6}(n-\frac{1}{n})$ because $|z_1-z_2| = \sqrt{(z_1-z_2)(\bar{z}_1-\bar{z}_2)}$. }
Finally, by considering the remaining factor $\frac{12}{c(n-1)}$ in \eqref{dsa}, we have $\frac{12}{c(n-1)} \times \left(-\frac{c}{12}\right)\left(n-\frac{1}{n}\right) = -\frac{n+1}{n}$, which becomes $``-2"$ in the $n\rightarrow1$ limit.
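The second equality in \eqref{dsasi} combines the four terms in the bracket via a partial-fraction identity. A quick symbolic check of this step (our addition):

```python
import sympy as sp

z, z1, z2 = sp.symbols('z z1 z2')

# Bracket in the first equality of (dsasi), holomorphic part only.
lhs = (1/(z - z1)**2 + 1/(z - z2)**2
       - 2/((z - z1)*(z1 - z2)) - 2/((z - z2)*(z2 - z1)))

# Combined form appearing in the second equality.
rhs = (z1 - z2)**2 / ((z - z1)**2 * (z - z2)**2)
```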
Taking $x_{1}=0$, $x_2=\ell$ in \eqref{dsasi}, we reproduce the result of \cite{Chen:2018eqk}, where a different approach was used: a conformal map between $\mathcal{M}^n$ and $\mathcal{C}$.
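As an independent numerical check (our addition, with arbitrarily chosen values $\beta=1$, $x_1=0$, $x_2=0.7$), the $\tau$-integral of the integrand in \eqref{dsasi} can be evaluated by quadrature: by contour reasoning it vanishes for $x$ outside the interval and equals $2\beta\coth(\pi(x_2-x_1)/\beta)$ for $x_1<x<x_2$, so the remaining $x$-integration directly produces the factor $(x_2-x_1)\coth(\pi(x_2-x_1)/\beta)$ in \eqref{dsasi}.

```python
import numpy as np
from scipy.integrate import quad

beta = 1.0
x1, x2 = 0.0, 0.7                       # interval endpoints (tau_i = 0)
z1 = np.exp(2*np.pi*x1/beta)
z2 = np.exp(2*np.pi*x2/beta)

def integrand(tau, x):
    # f(z) + h.c. with z = exp(2*pi*(x + i*tau)/beta)
    z = np.exp(2*np.pi*(x + 1j*tau)/beta)
    f = z**2*(z1 - z2)**2/((z - z1)**2*(z - z2)**2)
    return 2*f.real

def tau_integral(x):
    val, _ = quad(integrand, 0.0, beta, args=(x,), limit=200)
    return val

inside = tau_integral(0.35)             # x1 < x < x2
outside = tau_integral(1.5)             # x > x2: residues cancel
expected = 2*beta/np.tanh(np.pi*(x2 - x1)/beta)
```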
\subsection{Two intervals}\label{sec32}
Let us now compute $\delta S(A)$ of two intervals $A=[x_1, x_2]\cup[x_3, x_4]$. The correlation function of the twist operators for two intervals is the four point function $\langle\sigma_n(z_1, \bar{z}_1)\bar{\sigma}_n(z_2, \bar{z}_2)$ \\ $\sigma_n(z_3, \bar{z}_3)\bar{\sigma}_n(z_4, \bar{z}_4)\rangle_{\mathcal{C}}$. Generally, four point functions in CFT are not universal and depend on the details of the CFT. Thus, to proceed, we consider the {\it holographic CFT}, because
in the holographic CFT, it was argued that the vacuum conformal block is a dominant contribution in the four point function of the twist operators~\cite{Hartman:2013mia}\footnote{Rigorously speaking, there is a possibility that the vacuum conformal block is not dominant in some region of $\eta$ defined in \eqref{crosr}. In this paper, we assume that the vacuum conformal block is dominant in the entire region $0\le\eta\le1$. }. Furthermore, the four point function in the limit $n\to1$ is factorized into the two point functions in the leading order (see, also \cite{Fitzpatrick:2014vua, Perlmutter:2015iya}).
{In more detail, let us consider the four point function in the holographic CFT classified by the cross ratio\footnote{In this paper, $\eta$ is real such that $\eta=\bar{\eta}$.} $\eta$
\begin{equation} \label{crosr}
\eta := \frac{(z_1-z_2)(z_3-z_4)}{(z_1-z_3)(z_2-z_4)} \,.
\end{equation}
This cross ratio is invariant under the global conformal transformation~\cite{DiFrancesco:1997nk} and it plays a role in classifying convergence of the conformal block expansion in each channel~(see, for example, \cite{Hartman:2015lfa}). We will call the region of $0\le\eta\le1/2$ ``$s$-channel" and the region of $1/2\le\eta\le1$ ``$t$-channel".
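Since the channel classification rests entirely on $\eta$, the M\"{o}bius invariance of the cross ratio is worth a small numerical sketch (our addition, with arbitrary sample points): under $z\to(az+b)/(cz+d)$ with $ad-bc\ne0$, $\eta$ is unchanged.

```python
import numpy as np

def eta(z1, z2, z3, z4):
    """Cross ratio (z1-z2)(z3-z4) / ((z1-z3)(z2-z4))."""
    return ((z1 - z2)*(z3 - z4)) / ((z1 - z3)*(z2 - z4))

# Arbitrary sample endpoints on the real axis (hypothetical values).
z = [1.0, 2.0, 5.0, 11.0]

# An arbitrary Mobius map with ad - bc != 0.
a, b, c, d = 2.0, 1.0, 0.5, 3.0
w = [(a*zi + b)/(c*zi + d) for zi in z]
```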
The four point function can be approximated by the vacuum conformal block as\footnote{We omit constant terms in the normalization of the correlation functions.
}, for example in the $s$-channel\footnote{For the $t$-channel, we exchange $z_2 \leftrightarrow z_4$ (in this case $\eta$ becomes $1-\eta).$}
\cite{Fitzpatrick:2014vua, Perlmutter:2015iya}
\begin{align}
\begin{split}
\log\langle\sigma_n(z_{1}, \bar{z}_{1})\bar{\sigma}_n(z_{2}, \bar{z}_{2})\sigma_n(z_{3}, \bar{z}_{3})\bar{\sigma}_n(z_{4}, \bar{z}_{4})\rangle_{\mathcal{C}} \,\sim\, -\frac{nc}{6}f_\text{vac}(z_1,z_2,z_3,z_4) \,+\, \text{h.c} \,,
\end{split}
\end{align}
with
\begin{align}
\begin{split}
f_\text{vac}(z_1,z_2,z_3,z_4) &= \epsilon_n\left(2\log[z_2-z_1] + 2\log[z_4-z_3]-\frac{\epsilon_n}{3}\eta^2\;_2F_1(2, 2; 4; \eta)\right) \\
& \quad + \mathcal{O}\left(\left(n-1\right)^3\right), \\
\epsilon_n &:= \frac{6}{nc}h_{\sigma_n} = \frac{n+1}{4n^2}\left(n-1\right)\,, \label{fp}
\end{split}
\end{align}
where $ _2F_1(\alpha, \beta; \gamma; \delta)$ is the hypergeometric function and $h_{\sigma_n}$ is defined in \eqref{CDC}. }
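The closed form of $\epsilon_n$ in \eqref{fp} follows directly from the twist-operator dimension \eqref{CDC}; a one-line symbolic verification (our addition):

```python
import sympy as sp

n, c = sp.symbols('n c', positive=True)

h_sigma = c/24 * (n - 1/n)               # twist-operator dimension, eq. (CDC)
eps = 6/(n*c) * h_sigma                  # definition of epsilon_n in eq. (fp)
closed_form = (n + 1)*(n - 1)/(4*n**2)   # claimed closed form
```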
In the limit $n\to1$, we can ignore $\epsilon_n^2 \, \eta^2 \, _2F_1(2, 2; 4; \eta)$ and the higher order terms of $n-1$ in (\ref{fp}). Thus, for the case $n\to1$, the four point function in $s$-channel $(0\le\eta\le1/2)$ is factorized as
\begin{align}
\langle\sigma_n(z_1, \bar{z}_1)\bar{\sigma}_n(z_2, \bar{z}_2)\sigma_n(z_3, \bar{z}_3)\bar{\sigma}_n(z_4, \bar{z}_4)\rangle_{\mathcal{C}}\sim\frac{1}{|z_1-z_2|^{2(h_{\sigma_{n}}+\bar{h}_{\sigma_{n}})}|z_3-z_4|^{2(h_{\sigma_{n}}+\bar{h}_{\sigma_{n}})}}. \label{sc}
\end{align}
Similarly, the four point function in $t$-channel $(1/2\le\eta\le1)$ is factorized as
\begin{align}
\langle\sigma_n(z_1, \bar{z}_1)\bar{\sigma}_n(z_2, \bar{z}_2)\sigma_n(z_3, \bar{z}_3)\bar{\sigma}_n(z_4, \bar{z}_4)\rangle_{\mathcal{C}}\sim\frac{1}{|z_1-z_4|^{2(h_{\sigma_{n}}+\bar{h}_{\sigma_{n}})}|z_3-z_2|^{2(h_{\sigma_{n}}+\bar{h}_{\sigma_{n}})}}. \label{tc}
\end{align}
Substituting (\ref{sc}) and (\ref{tc}) into (\ref{dsa}), we obtain
\begin{align}
\begin{split}
\delta S_{\,\text{s-ch}}(A) & \;\; \\
\sim&-\mu \left(\frac{c}{12}\right)^2\frac{8\pi^4}{\beta^4}\int_{\mathcal{M}} \Biggl[ \left(\frac{z^2(z_1-z_2)^2}{(z-z_1)^2(z-z_2)^2}+\textrm{h.c.}\right) + \left(\frac{z^2(z_3-z_4)^2}{(z-z_3)^2(z-z_4)^2}+\textrm{h.c.}\right) \Biggr] \\
=& - \mu \, \frac{\pi ^4 c^2 \, (x_{2}-x_{1})}{9 \beta ^3}\coth \left(\frac{\pi (x_{2}-x_{1}) }{\beta }\right) - \mu \, \frac{\pi ^4 c^2 \, (x_{4}-x_{3})}{9 \beta ^3}\coth \left(\frac{\pi (x_{4}-x_{3}) }{\beta }\right) \label{scds}
\end{split}
\end{align}
and
\begin{equation}
\delta S_{\,\text{t-ch}}(A) = \quad z_2 \leftrightarrow z_4 \quad \mathrm{and} \quad x_2 \leftrightarrow x_4 \quad \mathrm{in} \quad \delta S_{\,\text{s-ch}}(A) \,. \label{tcds}
\end{equation}
Eqs. \eqref{scds} and \eqref{tcds} can be regarded as a sum of two copies of \eqref{dsasi}.
Therefore, in the deformed holographic CFT, $\delta S(A)$ of two intervals is the sum of the single-interval contributions.
Indeed, this additive property comes from the $\log$ term in \eqref{dsa} together with the factorization in \eqref{sc} and \eqref{tc}.
{The author} in~\cite{Hartman:2013mia} {argued} that, {by using the vacuum conformal block approximation,} $\delta S(A)$ of the two intervals in the \textit{undeformed} {holographic} CFT is the sum of the single-interval contributions. This additive property is also shown in~\cite{Ryu:2006bv, Ryu:2006ef, Headrick:2010zt} within a holographic framework without introducing a cutoff.
Here, we have shown that this additive property still holds in the $T\overline{T}$-deformed {holographic} CFT by the field theory computation from \eqref{scds} and \eqref{tcds}. In section \ref{section5}, we will show that this additive property also works in the holographic theory (with a finite cutoff dual to the $T\overline{T}$ deformation) and is consistent with the field theory results here.
\subsection{Multiple intervals}
Finally, we compute $\delta S(A)$ of multiple intervals $A=[x_1, x_2]\cup\cdots\cup[x_{2m-1}, x_{2m}]$. Consider the holographic CFT in which the correlation function of the twist operators in the limit $n\to1$ is factorized into the two point functions because of the dominant vacuum conformal block \cite{Hartman:2013mia} as
\begin{align}
\langle\prod_{i=1}^m\sigma_n(z_{2i-1}, \bar{z}_{2i-1})\bar{\sigma}_n(z_{2i}, \bar{z}_{2i})\rangle_{\mathcal{C}}\sim\prod_{i=1}^m\frac{1}{|z_{k_i}-z_{l_i}|^{2(h_{\sigma_{n}}+\bar{h}_{\sigma_{n}})}} \;\;\;\;(n\to1),\label{mi}
\end{align}
where $k_i$ and $l_i$ are determined by the configuration of the multiple intervals in the same manner as the correlation function for the two intervals. Substituting (\ref{mi}) into (\ref{dsa}), we obtain
\begin{align}
\begin{split}
\delta S(A)\sim&-\sum_{i=1}^m\mu \left(\frac{c}{12}\right)^2\frac{8\pi^4}{\beta^4}\int_{\mathcal{M}}\left(\frac{z^2(z_{k_i}-z_{l_i})^2}{(z-z_{k_i})^2(z-z_{l_i})^2}+\textrm{h.c.}\right)\\
=& - \sum_{i=1}^m \mu \, \frac{\pi ^4 c^2 \, (x_{k_{i}}-x_{l_{i}})}{9 \beta ^3}\coth \left(\frac{\pi (x_{k_{i}}-x_{l_{i}}) }{\beta }\right). \label{mids}
\end{split}
\end{align}
Therefore, $\delta S(A)$ of multiple intervals in the deformed holographic CFT, which has the property (\ref{mi}), is a sum of single-interval contributions. This additive property comes from the $\log$ term in \eqref{dsa} together with the factorization in \eqref{mi}.
This property of $\delta S(A)$ is consistent with the Ryu-Takayanagi formula with the radius cutoff.
\section{Explicit computation of the R\'{e}nyi entropy $\delta S_n(A)$} \label{section4}
In this section, we evaluate the R\'{e}nyi entropy $\delta S_n(A)$.
First, we compute $\delta S_n(A)$ of the single-interval case by using our twist operator method \eqref{dsna}. Our result agrees with the one in~\cite{Chen:2018eqk}, where a different method was used.
Second, we consider the two-interval case and show that $\delta S_n(A)$ of two intervals can be expressed as a sum of single-interval contributions in a certain limit. Finally, we make some comments about a holographic interpretation of our results.
\subsection{Single interval}
Consider $\delta S_n(A)$ of a single interval $A=[x_1, x_2]$. Substituting (\ref{tp}) into (\ref{dsna}), we obtain
\begin{align}
\begin{split}
\delta S_n(A) &\\
=&- \frac{(n+1)\mu }{2n}\left(\frac{c}{12}\right)^2\frac{8\pi^4}{\beta^4}\int_{\mathcal{M}}\left(\frac{z^2(z_1-z_2)^2}{(z-z_1)^2(z-z_2)^2}+\textrm{h.c.}\right)\\
&+\frac{(n+1)^2(n-1)\mu}{2n^3}\left(\frac{c}{12}\right)^2\frac{8\pi^4}{\beta^4}\int_{\mathcal{M}}\frac{z^2(z_1-z_2)^2}{(z-z_1)^2(z-z_2)^2}\frac{\bar{z}^2(\bar{z}_1-\bar{z}_2)^2}{(\bar{z}-\bar{z}_1)^2(\bar{z}-\bar{z}_2)^2}.
\label{dsnasi}
\end{split}
\end{align}
{Here the first term comes from the first and second terms in \eqref{dsna}, and the second term comes from the third term in \eqref{dsna}. This second term is a ``mixing term'' between the holomorphic and anti-holomorphic parts and will play an important role when we discuss the additivity of $\delta S_n(A)$ for two intervals.
Note also that \eqref{dsnasi} gives the same result as \eqref{dsasi} for $n=1$, as expected, and in this case the mixing term vanishes. }
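The $n$-dependence of the two prefactors in \eqref{dsnasi} can be checked symbolically (a trivial sketch we add here): at $n=1$ the first coefficient reduces to $1$ and the mixing coefficient vanishes, so \eqref{dsnasi} reduces to \eqref{dsasi}.

```python
import sympy as sp

n = sp.symbols('n')

first_coeff = (n + 1)/(2*n)                 # prefactor of the first term
mixing_coeff = (n + 1)**2*(n - 1)/(2*n**3)  # prefactor of the mixing term
```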
The integration in the second term in \eqref{dsnasi} needs a regularization procedure\footnote{See \cite{Chen:2018eqk} for the result after performing a regularization. In this paper, we will not perform the regularization because our interest is to study the additive property of the R\'{e}nyi entropy.}. One can see this divergence from the more general form in \eqref{RenyiResult}: by substituting $\bar{z}_{3} \rightarrow z_{1}$ and $\bar{z}_{4} \rightarrow z_{2}$ in \eqref{RenyiResult}, we find a divergence. The upshot of this twist-operator method is that our result for $\delta S_n(A)$ of a single interval \eqref{dsnasi} reproduces the result reported in \cite{Chen:2018eqk}, {where a direct conformal map between $\mathcal{M}^n$ and $\mathcal{C}$ is used (while here we used a map between $\mathcal{M}$ and $\mathcal{C}$, and $\mathcal{M}^n$ and $\mathcal{M}$ are related by the twist operators).}
\subsection{Two intervals}
Consider $\delta S_n(A)$ of two intervals $A=[x_1, x_2]\cup[x_3, x_4]$. In general, $\delta S_n(A)$ of two intervals cannot be expressed as a sum of single-interval results, for two reasons.
{First, the four point function of the twist operators with $n\ne 1$ in the holographic CFT does not factorize into two point functions, as can be seen in \eqref{fp}, where $\epsilon_n \sim n-1$ with $n\ne1$.
If we instead consider the limit $\eta\to0$ or $\eta\to1$ of the cross ratio, the four point function factorizes into two point functions. Note that this factorization is valid not only in the holographic CFT but in any CFT obeying cluster decomposition~\cite{Calabrese:2009ez, Headrick:2010zt}.
Second, even if the four point function factorizes, $\delta S_n(A)$ of two intervals may not be the sum of the single-interval results because of the ``mixing term'' between the holomorphic and anti-holomorphic parts, the third term in \eqref{dsna}.
We will show that this mixing vanishes only for $\eta\to0$ but does not vanish for $\eta\to1$.
}
Let us first consider the limit $\eta\to0$. By substituting (\ref{sc}) into (\ref{dsna}), we obtain $\delta S_n(A)$ of the two intervals:
\begin{equation}
\delta S_n(A)|_{\eta\to0}
\sim \,\, \delta S^{\,\,\,\text{single}}_n(A) \,+\, \delta S^{\,\,\,\text{single}}_n(A)\Bigr|_{z_{1} \,\rightarrow\, z_{3}, \,\, z_{2} \,\rightarrow\, z_{4}} + \delta S^{\,\,\,\text{mixing}}_n(A) \,,
\label{dsnati}
\end{equation}
where $\delta S^{\,\,\,\text{single}}_n(A)$ denotes the R\'{e}nyi entropy of a single interval \eqref{dsnasi} and
\begin{equation} \label{mix987}
\begin{split}
\delta S^{\,\,\,\text{mixing}}_n(A)
=& \frac{(n+1)^2(n-1)\mu}{2n^3}\left(\frac{c}{12}\right)^2\frac{8\pi^4}{\beta^4} \\
&\times \int_{\mathcal{M}}\left(\frac{z^2(z_1-z_2)^2}{(z-z_1)^2(z-z_2)^2}\frac{\bar{z}^2(\bar{z}_3-\bar{z}_4)^2}{(\bar{z}-\bar{z}_3)^2(\bar{z}-\bar{z}_4)^2}+\textrm{h.c.}\right) \,.
\end{split}
\end{equation}
Note that if $n=1$, i.e. for the entanglement entropy, the mixing term always vanishes. However, if $n \ne 1$, it appears that $\delta S_n(A)$ of two intervals cannot in general be expressed as a sum of single-interval results because of the mixing term \eqref{mix987}.
To see when this mixing term is negligible, we compute the integral in Appendix \ref{appendixA}; it reduces to
\begin{equation} \label{mixing123}
\begin{split}
\frac{\delta S^{\,\,\,\text{mixing}}_n(A)}{c} &= \left[\frac{(n+1)^2(n-1)}{n^3} \frac{\pi^3}{72} \right] \, \, \hat{\mu} \times \mathbb{M} \,, \\
&\hat{\mu} := \frac{\mu c}{\beta^2} \,, \\
&\mathbb{M} := \biggl( \frac{z_{1}^2 + \bar{z}_{3}^2}{(z_{1}-\bar{z}_{3})^2} + \frac{z_{1}^2+\bar{z}_{4}^2}{(z_{1}-\bar{z}_{4})^2} + \frac{z_{2}^2+\bar{z}_{3}^2}{(z_{2}-\bar{z}_{3})^2} + \frac{z_{2}^2+\bar{z}_{4}^2}{(z_{2}-\bar{z}_{4})^2} - 4 \\
&\qquad + 2\frac{(z_{1}+z_{2}) (\bar{z}_{3}+\bar{z}_{4})}{(z_{1}-z_{2}) (\bar{z}_{3}-\bar{z}_{4})}\ln \frac{(z_{1}-\bar{z}_{4}) (z_{2}-\bar{z}_{3})}{(z_{1}-\bar{z}_{3}) (z_{2}-\bar{z}_{4})} \, \biggr) \,,
\end{split}
\end{equation}
where $\hat{\mu}$ is a dimensionless parameter.
Let us choose the parameters ($0 = x_1 < x_2 <x_3 < x_4 $) as
\begin{equation}
\ell_{12} := x_2 - x_1 \,, \quad \ell_{23} := x_3 - x_2\,, \quad \ell_{34} := x_4 - x_3 \,.
\end{equation}
If we fix $x_2$ and $\ell_{34}$ and take $x_3 (x_4) \rightarrow \infty$, then $z_1$ and $z_2$ are fixed and $\bar{z}_3(\bar{z}_4) \rightarrow \infty$, with the relation (at $\tau_i=0$)
\begin{equation} \label{SubSti}
\begin{split}
\bar{z}_{4} = e^{\frac{2\pi}{\beta} x_{4}}
= e^{\frac{2\pi}{\beta} (x_{3} + \ell_{34})}
= \bar{z}_{3} \, e^{\frac{2\pi}{\beta} \ell_{34}} \,.
\end{split}
\end{equation}
Thus, the first five terms in $\mathbb{M}$ in \eqref{mixing123} sum to zero. The last term also vanishes ($\sim \ln 1$) by itself.
In other words, when two intervals are far from each other, $\delta S^{\,\,\,\text{mixing}}_n(A)$ vanishes, so $\delta S_n(A)$ of the two intervals is the summation of $\delta S_n(A)$ of the single interval.
{This is indeed the requirement $\eta \rightarrow 0$, which is the condition we have already imposed to have a four point function factorized.}
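This limiting behavior can be checked numerically by evaluating $\mathbb{M}$ in \eqref{mixing123} directly (a minimal sketch; the map $z_i = \bar{z}_i = e^{2\pi x_i/\beta}$ at $\tau_i = 0$ and the symmetric configuration $\ell_{12}=\ell_{34}=0.01$, $\beta=2\pi$ are taken from the text, while the sampled separations $\ell_{23}$ are illustrative):

```python
import math

def mixing_M(x, beta=2 * math.pi):
    """Mixing factor M of eq. (mixing123) at tau_i = 0, where
    z_i = zbar_i = exp(2*pi*x_i/beta) for endpoints x = (x1, x2, x3, x4)."""
    z1, z2, b3, b4 = (math.exp(2 * math.pi * xi / beta) for xi in x)
    # first five terms: four cross terms minus 4
    cross = sum((zi**2 + bj**2) / (zi - bj)**2
                for zi in (z1, z2) for bj in (b3, b4)) - 4
    # logarithmic term
    log_term = (2 * (z1 + z2) * (b3 + b4) / ((z1 - z2) * (b3 - b4))
                * math.log((z1 - b4) * (z2 - b3) / ((z1 - b3) * (z2 - b4))))
    return cross + log_term

def config(l23, l12=0.01, l34=0.01):
    # endpoints 0 = x1 < x2 < x3 < x4 of the symmetric configuration
    return (0.0, l12, l12 + l23, l12 + l23 + l34)

near, mid_sep, far = (mixing_M(config(l23)) for l23 in (0.001, 2.0, 6.0))
print(near, mid_sep, far)  # M blows up as the intervals approach and decays as they separate
```

The first five terms and the logarithmic term each decay like $e^{-\ell_{23}}$ but cancel against each other, so $\mathbb{M}$ itself falls off much faster with the separation.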
On the other hand, if the two intervals become close ($x_3 \rightarrow x_2$), the mixing term blows up because of the third term in $\mathbb{M}$.
These two extreme limits are interpolated as we dial $\ell_{23}$. To see this we plot $\mathbb{M}$ in \eqref{mixing123} in Fig. \ref{CROSSFiga}, where we choose
\begin{equation}
{ \ell_{12}= \ell_{34} = 0.01 \,, \quad \beta = 2\pi \,. }
\end{equation}
Because we are considering the $\eta \rightarrow 0$ limit, {$\ell_{23}/\ell_{12}$} should be large, so only the range $\ell_{23} \gg 1$\footnote{More precisely, $\ell_{23}/(\beta/2\pi) \gg 1$. In fact, our choice of $\beta=2\pi$ corresponds to using the rescaled parameter $\tilde{\ell}_{ij} := 2\pi\ell_{ij}/\beta$. With this understanding, not to clutter the notation, we use $\ell_{ij}$ without the tilde.} is valid.
\begin{figure}[]
\centering
\subfigure[$\eta\to0$: $\ell_{12} = 0.01, \, \beta = 2\pi$]
{\includegraphics[width=7.3cm]{MIXING1} \label{CROSSFiga}}
\subfigure[$\eta\to1$: $\ell_{23} = 0.01, \, \beta = 2\pi$]
{\includegraphics[width=7.3cm]{MIXING2} \label{CROSSFigb}}
\caption{ {The mixing effect $\mathbb{M}$ in \eqref{mixing123} of symmetric configuration, $\ell_{12} = \ell_{34}$, where $\ell_{12} := |x_2 - x_1|$ and $\ell_{34} := |x_4 - x_3|$. } In order to satisfy the condition $\eta\to0$ (a) and $\eta\to1$ (b) only the ranges of $\ell_{23} \gg 1$ (a) or $\ell_{23} \ll 1$ with a fixed $\ell_{12}$(b) are valid. } \label{CROSSFig}
\end{figure}
\begin{figure}[]
\centering
\subfigure[ $\eta\sim0$ ]
{\includegraphics[width=7.3cm]{eta0} \label{}}
\subfigure[ $\eta\sim1$ ]
{\includegraphics[width=7.1cm]{eta1} \label{}}
\caption{The mixing effect $\mathbb{M}$ in \eqref{mixing123} vs $\eta$: $\ell_{12}$ = 0.5, 1, 5 (red, green, blue).} \label{xxxc}
\end{figure}
{
In the limit $\eta\to1$, by using the behavior of the correlation function in 2d CFT under $\eta\to 1-\eta$, as explained in \cite{Calabrese:2009ez, Headrick:2010zt}, one can perform a similar analysis. Then $\delta S_n(A)$ of the two intervals is obtained by exchanging $z_{2} \leftrightarrow z_{4}$ in \eqref{dsnati}.
In Fig. \ref{CROSSFigb} we display the $\mathbb{M}$ in \eqref{mixing123} after exchanging $z_{2} \leftrightarrow z_{4}$ with
\begin{equation}
\ell_{23} = 0.01 \,, \quad \beta = 2\pi \,.
\end{equation}
Here, we chose a small $\ell_{23}$ because we are considering the $\eta \rightarrow 1$ limit, which is equivalent to {$\ell_{23} \ll 1$} with a fixed $\ell_{12}$.}
$\mathbb{M}$ saturates to $4$ as $\ell_{12} = \ell_{34}$ increases, while it diverges as $\ell_{12} = \ell_{34}$ decreases. This divergence originates from the second and third terms in $\mathbb{M}$ in \eqref{mixing123}, after exchanging $z_{2} \leftrightarrow z_{4}$.
Note that, roughly speaking, $\eta \to 1$ restricts the range of $\ell_{12} \gg 1 $.
{
We noted that, in Fig.~\ref{CROSSFig}, only the right part of both figures (large $\ell_{23}$ in (a) and large $\ell_{12}$ in (b)) is valid, in order to satisfy the conditions $\eta \sim 0$ (a) and $\eta \sim 1$ (b). To demonstrate this more clearly, we plot $\mathbb{M}$ in \eqref{mixing123} versus $\eta$ in Fig.~\ref{xxxc} for $\ell_{12} = 0.5, 1, 5$. If $\eta \sim 0$, $\mathbb{M}$ is very small and vanishes as $\eta \rightarrow 0$, while as $\eta \rightarrow 1$, $\mathbb{M}$ saturates to a finite value, which depends on $\ell_{12}$. The saturation value for a given $\ell_{12}$ is approximately the same as the value in Fig. \ref{CROSSFigb}, since $\eta \rightarrow 1$ corresponds to $\ell_{23} \rightarrow 0$. There are two interesting facts in the limit $\eta \rightarrow 1$:
\begin{itemize}
\item[(a)] $\mathbb{M}$ is non-zero
\item[(b)] $\mathbb{M}$ is saturated to the minimum value ($\sim 4$) as $\ell_{12}$ increases as shown in Fig. \ref{CROSSFigb}.
\end{itemize}
These two properties can be intuitively explained by holography. See the end of the next section.
However, there is one subtlety in our results\footnote{{We thank the referee for pointing out this issue.}}. The numerical value of $\mathbb{M}$ can become very large as $\ell_{12}$ becomes very small, as shown in Fig.~\ref{xxxc}. In this case, the perturbation theory breaks down if $\mathbb{M}$ reaches a value of order $1/\hat{\mu}$\footnote{{The numerical value of the square bracket in the first line of \eqref{mixing123} is around $0.5$. It does not vary much as $n$ changes.}}. Note that this is problematic only if $\ell_{12}$ is small. Thus, we suspect that our perturbation theory may be justified by a lower bound on $\ell_{12}$ set by the cut-off scale $\hat{\mu}$: {because $\ell_{12}$ needs to be much longer than the length scale of the deformation $\mu$}\footnote{{The effective field theory at small scales comparable to the deformation becomes non-local, where it is not clear that the entanglement entropy makes sense; see, e.g.,~\cite{Dubovsky:2012wk, Dubovsky:2013ira, Cooper:2013ffa}.}}, it is bounded below, so $\mathbb{M}$ may stay well below $1/\hat{\mu}$. Currently, we do not have a more rigorous understanding of this issue and leave it for future work.}
\subsection{Holographic interpretation}
The author of \cite{Dong:2016fnf} proposed the gravity dual of R\'{e}nyi entropy in the holographic CFT as
\begin{align}
n^2\partial_n\left(\frac{n-1}{n}S_n(A)\right)=\frac{\textrm{Area}(\textrm{Cosmic Brane}_n)}{4G},
\end{align}
where $G$ is Newton's constant, and the cosmic branes are anchored at the boundary of $A$.
In order to compute the area of the cosmic branes, we need to consider the back-reaction of the cosmic branes on the bulk geometry. Thus, for $n\ne1$, the holographic R\'{e}nyi entropy of two intervals is generally not the sum of single-interval contributions because of the back-reaction between the two cosmic branes.
However, if the two cosmic branes are far from each other, the back-reaction between them is negligible. {This is the limit $\eta \rightarrow 0$.} In this limit, the holographic R\'{e}nyi entropy of the two intervals becomes the sum of single-interval contributions. Even though we introduce the radius cutoff, which corresponds to the $T\overline{T}$ deformation of the holographic CFT, this property of the holographic R\'{e}nyi entropy remains valid. Thus, it is consistent with our field theory result in the previous subsection.
{Away from the limit $\eta \rightarrow 0$,}
$\delta S_n(A)$ of the two intervals in the deformed holographic CFT includes corrections to \eqref{mixing123} and higher order terms of $\eta$ in the vacuum conformal block. These corrections may be related to the back-reaction between the two cosmic branes in the holographic R\'{e}nyi entanglement entropy formula with the radius cutoff.
{Let us turn to the holographic interpretation of the two properties (a) and (b) in the previous section.
For $\eta \rightarrow 1$, our field theory result shows, at given temperature $\beta$, there is a saturation of the mixing term when $\ell_{12}$ increases. It may be understood holographically as follows: i) the cosmic brane for the range $\ell_{23}$ is almost a point at the cutoff because $\eta \sim 1$ means $\ell_{23} \sim 0$ , ii) the cosmic brane for the range $\ell_{14}$ will be close to and bounded by the horizon if $\ell_{12} \gg 1 (\ell_{14} \sim 2\ell_{12})$. Thus, the effective distance between two cosmic branes is saturated, which is basically the distance between the cut-off and the horizon. Thus, the effect of the back-reaction is also saturated. {Only if the cut-off or temperature becomes zero, the effective distance between two cosmic branes becomes infinite and their back-reaction may be negligible. This information is encoded in $\hat{\mu}$ in \eqref{mixing123}.} Note that if $\eta \rightarrow 0$, two cosmic branes are far away always so their interaction is negligible regardless of the cut-off.}
\section{Holographic entanglement entropy and phase transitions}\label{section5}
In this section, we study the holographic entanglement entropy with a finite cutoff and compare it with the previous field theory result. While the perturbative field theory is valid only for small deformations, the holographic method can be used for general deformations.
We specify the parameter regime that the field theory and holographic results agree. We also investigate the phase transitions between the $s$-channel and the $t$-channel for the two interval cases with a finite radius cutoff in the holographic framework.
\subsection{Holography: single interval}
Let us consider a planar BTZ black hole:
\begin{align}
\mathrm{d} s^2=\frac{r^2-r_h^2}{L^2}\mathrm{d} t^2+\frac{L^2}{r^2-r^2_h}\mathrm{d} r^2+\frac{r^2}{L^2} \mathrm{d} \tilde{x}^2\,,
\end{align}
where $L$ is the AdS radius, and $r_h$ is the horizon radius.
At the cutoff radius $r=r_c$,
\begin{equation}
\mathrm{d} s^2 \sim \mathrm{d} t^2+\frac{1}{1-r_h^2/r_c^2} \mathrm{d} \tilde{x}^2\ = \mathrm{d} t^2+ \mathrm{d} x^2 \,,
\end{equation}
where
\begin{equation}
x := \frac{\tilde{x}}{\sqrt{1-r_h^2/r_c^2}} \,.
\end{equation}
The holographic entanglement entropy of a single interval between $\tilde{x}=\tilde{x}_i$ and $\tilde{x}=\tilde{x}_j$ at the cutoff radius $r=r_c$ in this black hole geometry is~\cite{Ryu:2006bv, Chen:2018eqk}
\begin{equation} \label{PTEE}
\begin{split}
S^{H}(\ell_{ij}) &= \frac{L}{4G} \log\left( \mathcal{A}(\ell_{ij}) + \sqrt{\mathcal{A}(\ell_{ij})^2 -1} \right) \,, \\
\mathcal{A}(\ell_{ij}) :&= 1 + 2 \, \frac{u_{h}^2}{u_{c}^2} \sinh\left( \frac{\ell_{ij}}{2 L^2 u_{h}}\sqrt{1 - \frac{u_{c}^2}{u_{h}^2}} \right)^2,
\end{split}
\end{equation}
where $G$ is the Newton constant, $u_c:=1/r_c$, and
$u_{h}:=1/r_{h}$ is proportional to the inverse temperature
\begin{equation} \label{temp123}
\beta = 2 \pi L^2 \, u_{h} \,.
\end{equation}
The length $\ell_{ij} := |\tilde{x}_{i}-\tilde{x}_{j}|/\sqrt{1 - u_{c}^2/u_{h}^2}$ corresponds to the length of the single interval $|x_{i}-x_{j}|$ in the dual field theory.
Eq.~\eqref{PTEE} reduces to the usual holographic entanglement entropy in \cite{Ryu:2006bv} as $u_{c} \rightarrow 0$.
To compare this with the field theory result \eqref{dsasi}, let us consider a small deformation or cutoff ($u_{c} \ll u_{h}$)~\cite{Chen:2018eqk},
\begin{align}
\begin{split}
S^{H}(\ell_{ij})
&= \frac{L}{2G} \log \Biggl[\frac{2 u_{h} \sinh \left(\frac{\ell_{ij} }{2 L^2 \,u_{h} }\right)}{u_{c}} \\
&\qquad\qquad\quad \times \left(1 - \frac{u_{c}^2}{4 L^2 u_{h}^3} \left( \ell_{ij} \coth \left(\frac{\ell_{ij} }{2 L^2 \, u_{h} }\right) - \frac{ L^2 u_{h}}{\sinh \left(\frac{\ell_{ij} }{2 L^2 \, u_{h} }\right)^2} \right) \right) + \mathcal{O}\left(\frac{u_{c}^3}{u_{h}^3}\right)\Biggr] \\
&\sim \frac{L}{2G} \log \left(\frac{2 u_{h} \sinh \left(\frac{\ell_{ij} }{2 L^2 \,u_{h} }\right)}{u_{c}}\right) - \frac{u_{c}^2}{8 G L \, u_{h}^3} \left( \ell_{ij} \coth \left(\frac{\ell_{ij} }{2 L^2 \, u_{h} }\right) - \frac{L^2 u_{h}}{\sinh \left(\frac{\ell_{ij} }{2 L^2 \, u_{h} }\right)^2} \right) \,.\label{}
\end{split}
\end{align}
By further considering the ``high temperature'' limit ($ \beta = 2\pi L^2 \, u_{h} \ll \ell_{ij}$), that is to say, $u_{c} \ll u_{h} \ll \frac{\ell_{ij}}{2\pi L^2} $, the holographic entanglement entropy $S^{H}(\ell_{ij})$ in \eqref{PTEE} becomes
\begin{align}
\begin{split}
S^{H}_{\, \text{High T}}(\ell_{ij}) &\sim \frac{L}{2G} \log \left(\frac{2 u_{h} \sinh \left(\frac{\ell_{ij} }{2 L^2 \,u_{h} }\right)}{u_{c}}\right) - \frac{\ell_{ij} \, u_{c}^2}{8 G L \, u_{h}^3}\coth \left(\frac{\ell_{ij} }{2 L^2 \, u_{h} }\right) \\
&= \frac{c}{3} \log \left(\frac{\beta \sinh \left(\frac{\pi \ell_{ij} }{\beta }\right)}{\pi \epsilon}\right) - \mu \, \frac{\pi ^4 c^2 \, \ell_{ij}}{9 \beta ^3}\coth \left(\frac{\pi \ell_{ij} }{\beta }\right)\,,\label{HighTcase2}
\end{split}
\end{align}
with~\cite{Chen:2018eqk, McGough:2016lol, Strominger:1997eq}
\begin{equation} \label{Relation}
\begin{split}
c = \frac{3L}{2G}\,, \quad \epsilon = L^2 u_{c} \,, \quad \mu = \frac{6 L^4}{\pi c}u_{c}^2 \,,
\end{split}
\end{equation}
where $c$ is the central charge and $\epsilon$ is the corresponding UV cutoff in the dual field theory.
Note that the second term of \eqref{HighTcase2} agrees with \eqref{dsasi}, i.e. the first order correction in $\mu$ to the holographic entanglement entropy matches the field theory result {\it only in the high temperature limit} ($ \beta \ll \ell_{ij}$)~\cite{Chen:2018eqk}.
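The dictionary \eqref{Relation}, together with \eqref{temp123}, can be verified numerically: the gravity-side expressions in the first line of \eqref{HighTcase2} should reproduce the CFT-side expressions in the second line term by term (a minimal sketch; the bulk parameter values are arbitrary illustrative choices):

```python
import math

# Illustrative bulk parameters (arbitrary choices)
L, G, uh, uc, l = 1.3, 0.7, 0.9, 0.05, 10.0

# Dictionary (Relation) and the inverse temperature (temp123)
c = 3 * L / (2 * G)
eps = L**2 * uc
mu = 6 * L**4 * uc**2 / (math.pi * c)
beta = 2 * math.pi * L**2 * uh

# First (log) term: gravity-side vs CFT-side form
log_grav = L / (2 * G) * math.log(2 * uh * math.sinh(l / (2 * L**2 * uh)) / uc)
log_cft = c / 3 * math.log(beta * math.sinh(math.pi * l / beta) / (math.pi * eps))

# Second (correction) term: gravity-side vs CFT-side form
cor_grav = -l * uc**2 / (8 * G * L * uh**3) / math.tanh(l / (2 * L**2 * uh))
cor_cft = -mu * math.pi**4 * c**2 * l / (9 * beta**3) / math.tanh(math.pi * l / beta)

print(log_grav - log_cft, cor_grav - cor_cft)  # both differences vanish
```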
\subsection{Field theory: two intervals and phase transition}\label{field123}
In section \ref{sec32}, for two intervals, we found that there are {two phases of the entanglement entropy}: the $s$-channel and the $t$-channel. {In the 2d holographic CFT without the deformation, it has been shown that $\eta=1/2$ is the transition point between the two channels~\cite{Hartman:2013mia}, under the assumption that there are no other phases.} We may ask what the effect of a small deformation on the phase transition is: does it enhance the phase transition or not? To answer this question in the perturbed field theory, we express the entanglement entropy of the deformed holographic CFT up to first order in perturbation theory:
\begin{align} \label{FULL}
\begin{split}
S_{\,\text{s-ch}}(A)
&=
\frac{c}{3} \log \left(\frac{\beta \sinh \left(\frac{\pi |x_{2}-x_{1}| }{\beta }\right)}{\pi \epsilon}\right)
+ \frac{c}{3} \log \left(\frac{\beta \sinh \left(\frac{\pi |x_{4}-x_{3}| }{\beta }\right)}{\pi \epsilon}\right)
+ \delta S_{\,\text{s-ch}}(A) \,, \\
S_{\,\text{t-ch}}(A)
&= \quad x_2 \leftrightarrow x_4 \quad \mathrm{and}\quad \text{s-ch} \to \text{t-ch} \quad \mathrm{in} \quad
S_{\,\text{s-ch}}(A) \,.
\end{split}
\end{align}
Here, $S_{\,\text{s-ch}}(A)$ and $S_{\,\text{t-ch}}(A)$ are the entanglement entropies of the $s$-channel and the $t$-channel up to first order in perturbation theory, respectively, and $\delta S_{\,\text{s-ch}}(A)$ and $\delta S_{\,\text{t-ch}}(A)$ are the corresponding first order corrections given in \eqref{scds} and \eqref{tcds}, respectively.
{In the previous subsection we saw that the field theory result matches the holographic entanglement entropy in the high temperature limit: $\beta \ll \ell_{ij} = |x_{i}-x_{j}|$.}
Thus, we expand \eqref{FULL} in terms of $\beta/ \ell_{ij} $
\begin{align} \label{stch11}
\begin{split}
S_{\,\text{s-ch}}(A)
& = \frac{c}{3} \left( \frac{\pi \left(|x_2-x_1|+|x_4-x_3|\right)}{\beta} - 2 \log\left(\frac{2\pi\epsilon}{\beta}\right) \right) - \mu \, \frac{\pi^4 c^2 \left(|x_2 - x_1| + |x_4 - x_3|\right)}{9 \beta^3} \,, \\
S_{\,\text{t-ch}}(A)
& = \quad x_2 \leftrightarrow x_4 \quad \mathrm{in} \quad
S_{\,\text{s-ch}}(A) \,,
\end{split}
\end{align}
where we used $\sinh(\pi\ell_{ij}/\beta) \sim \frac{e^{\pi\ell_{ij}/\beta}}{2}$ and $\coth(\pi\ell_{ij}/\beta) \sim 1$. Then we have
\begin{align} \label{FIELDresult}
\begin{split}
S_{\,\text{s-ch}}(A) - S_{\,\text{t-ch}}(A)
& = -\frac{2 \, c \, \pi(x_3-x_2)}{3 \beta}\left( 1 - \mu \, \frac{c \, \pi^3 }{3 \beta^2}\right) < 0 \,.
\end{split}
\end{align}
Because our field theory method is reliable only in the regime of small $\hat{\mu} = \mu c/\beta^2$, the factor in parentheses in \eqref{FIELDresult} is positive, so the difference is always negative and there is no phase transition: the $s$-channel is always dominant at high temperature.
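A short numerical check (a sketch with illustrative parameter values, not part of the derivation) confirms that the channel difference computed from \eqref{stch11} reproduces the closed form \eqref{FIELDresult}:

```python
import math

# Illustrative parameters in the high-temperature regime (beta << interval lengths)
beta, c, mu, eps = 0.1, 1.0, 1e-4, 1e-3
x1, x2, x3, x4 = 0.0, 2.0, 5.0, 8.0

def S_channel(la, lb):
    """High-temperature entanglement entropy (stch11) for interval lengths la, lb."""
    return (c / 3 * (math.pi * (la + lb) / beta - 2 * math.log(2 * math.pi * eps / beta))
            - mu * math.pi**4 * c**2 * (la + lb) / (9 * beta**3))

S_s = S_channel(abs(x2 - x1), abs(x4 - x3))  # s-channel
S_t = S_channel(abs(x4 - x1), abs(x2 - x3))  # t-channel: x2 <-> x4
diff = S_s - S_t
closed = -2 * c * math.pi * (x3 - x2) / (3 * beta) * (1 - mu * c * math.pi**3 / (3 * beta**2))
print(diff, closed)  # agree; negative, so the s-channel dominates
```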
\subsection{Holography: two intervals (symmetric case)}
As we showed in the previous subsection, our field theory computation cannot capture any phase transition within its validity regime: small deformation and high temperature.
However, since the holographic entanglement entropy formula \eqref{PTEE} can be defined at any temperature (any $u_{h} > 0$ via \eqref{temp123}) and any cutoff $u_{c}<u_{h}$,\footnote{This inequality comes from the positive definite condition in \eqref{PTEE}. } we can explore the transition in the whole region $0< u_{c} <u_{h}$.
{Note that, from the field theory point of view, \eqref{PTEE} may not make sense for the entanglement entropy when $\ell_{ij}< u_c$ (See footnote 12). Therefore, if the cutoff becomes bigger, the {\it holographic} entanglement entropy may not be dual to the field theory entanglement entropy even though the {\it holographic} entanglement entropy is a well-defined object in gravitational language.
Because our purpose in this section is to investigate the phase transition of the \textit{holographic} entanglement entropy \eqref{PTEE}~\cite{Ryu:2006bv, Chen:2018eqk} itself, our computation includes the $\ell_{ij}< u_c$ case. However, at least we do not need to worry about this issue in the high temperature limit ($\ell_{ij} \gg u_{h}>u_{c}$).}
In the case of two intervals, we have two configurations of minimal surfaces as shown in Fig. \ref{hee}.
Then, the holographic entanglement entropy is chosen as the one having a smaller minimal surface:
\begin{equation} \label{kjhuy}
\begin{split}
S^{H} &=\text{min} \left\{ S_{\,\text{s-ch}}^{H} ,\,\, S_{\,\text{t-ch}}^{H} \right \},
\end{split}
\end{equation}
where
\begin{align} \label{yui678}
\begin{split}
S_{\,\text{s-ch}}^{H}:=S^{H}(\ell_{12}) + S^{H}(\ell_{34}),& \qquad S_{\,\text{t-ch}}^{H}:=S^{H}(\ell_{14}) + S^{H}(\ell_{23}) \,,
\end{split}
\end{align}
and here $S^{H}(\ell_{ij})$ is the holographic entanglement entropy for the single interval $\ell_{ij} = |x_i-x_j|$. For convenience, we assume $x_1 < x_2 < x_3 < x_4$.
In summary, to find $S^H$ our task is i) to compute $S_{\,\text{s-ch}}^{H}$ and $S_{\,\text{t-ch}}^{H}$ in \eqref{yui678} by using the single interval formula \eqref{PTEE}, ii) to compare them and pick up the one with a smaller value, which is \eqref{kjhuy}.
In order to quantify the transition points we define
\begin{equation} \label{RatioEq}
\begin{split}
\mathcal{S}_{c} : = \frac{ S_{\,\text{s-ch}}^{H} } { S_{\,\text{t-ch}}^{H} } \,.
\end{split}
\end{equation}
Thus, if $\mathcal{S}_{c} > 1$, $S^{H} = S_{\,\text{t-ch}}^{H} $, while if $\mathcal{S}_{c} < 1$, $S^{H} = S_{\,\text{s-ch}}^{H} $. The curves of $\mathcal{S}_{c} = 1$ are the transition points or the phase boundaries.
Using $\ell_{12}$ as a scaling parameter, we define the following scaled parameters:
\begin{align} \label{DEF}
\begin{split}
\bar{\ell}_{34}:= \frac{\ell_{34}}{\ell_{12}}, \quad \bar{\ell}_{23}:= \frac{\ell_{23}}{\ell_{12}}, \quad \bar{L}:= \frac{L}{\ell_{12}}, \quad \bar{u}_{c}:= u_{c} \, \ell_{12}, \quad \bar{u}_{h}:= u_{h} \, \ell_{12}\,.
\end{split}
\end{align}
From here on, we take $\bar{L} = 1$ without loss of generality.
\begin{figure}[]
\centering
\subfigure[$\bar{u}_{h}=2$. The red dot will be explained in Fig.~\ref{phastTa}.]
{\includegraphics[width=7.2cm]{FIGSUB11} \label{LOWTEM}}
\subfigure[$\bar{u}_{h}=2,\, 0.5,\, 0.1$]
{ \includegraphics[width=7.2cm]{FIGSUB22} \label{AllTEM}}
\caption{{Transition curves of symmetric case {($\bar{\ell}_{34}= \ell_{34}/\ell_{12} =1$), where $\bar{\ell}_{23} = \ell_{23}/\ell_{12}$, $\bar{u}_c = u_c \ell_{12}$ and $\bar{u}_h = u_h \ell_{12}$.} All solid curves represent phase transition points satisfying $\mathcal{S}_{c} = 1$ (see \eqref{RatioEq}) with the various temperature $\bar{u}_h$, and the black dashed curves represent approximate formulas \eqref{Asymp}. The region above(below) the solid curves corresponds to the $s(t)$-channel.}} \label{TransitionPointFigure}
\end{figure}
Let us first consider a symmetric case ($\bar{\ell}_{34}= 1$).
Fig. \ref{TransitionPointFigure} shows the curves of $\mathcal{S}_{c} = 1$ (see \eqref{RatioEq}) in the plane of $\bar{\ell}_{23}$ and $\bar{u}_c$ at fixed temperature $\bar{u}_h$. Fig. \ref{LOWTEM} is for $\bar{u}_h=2$ and Fig. \ref{AllTEM} is for $\bar{u}_h=$ 2 (blue), 0.5 (green), 0.1 (red). The region above (below) the solid curves corresponds to the $s(t)$-channel. The black dashed curves display the approximate results near $\bar{u}_{c} = 0$ in \eqref{Asymp}.
From Fig. \ref{TransitionPointFigure} we find the following.
\paragraph{Field theory results:} The perturbative field theory regime we studied in section \ref{field123} corresponds qualitatively to the left-upper corner of Fig. \ref{TransitionPointFigure}: roughly, `left' corresponds to small deformation and `upper' to the high temperature limit\footnote{Rigorously speaking, the high temperature limit also includes $\bar{u}_h \ll 1$.}. The left-upper corner is always in the $s$-channel, which is consistent with the field theory result.
\paragraph{Separation dependence:} For fixed $\bar{u}_c$ and $\bar{u}_h$, as the separation $\bar{\ell}_{23}$ increases, the $s$-channel is favored. Note that there is {\it always} a phase transition because $\bar{u}_c < \bar{u}_h$.
\paragraph{Cutoff dependence:} Let us first define $\bar{\ell}_{23}^{(0)}$ as the maximum $\bar{\ell}_{23}$ {for fixed $\bar{u}_h$} allowing the phase transition. For example, $\bar{\ell}_{23}^{(0)} \approx 0.4$ in Fig. \ref{LOWTEM}. If $\bar{\ell}_{23} > \bar{\ell}_{23}^{(0)}$, there is no phase transition; the $s$-channel is always favored (Fig. \ref{TABLEFIG1}). If $\bar{\ell}_{23} < \bar{\ell}_{23}^{(0)}$, as the cutoff $\bar{u}_{c}$ increases, the $s$-channel becomes favored (Fig. \ref{TABLEFIG2}). Even for $\bar{\ell}_{23} \sim 0$ there is a phase transition near $\bar{u}_c \sim \bar{u}_h$ (Fig. \ref{TABLEFIG3}).
\begin{figure}[]
\centering
\subfigure[$\bar{\ell}_{23} > \bar{\ell}_{23}^{\,\,\,\, c}$]
{\includegraphics[width=4.831cm]{TABLEFIG1} \label{TABLEFIG1}}
\subfigure[$\bar{\ell}_{23} < \bar{\ell}_{23}^{\,\,\,\, c}$]
{\includegraphics[width=4.831cm]{TABLEFIG2} \label{TABLEFIG2}}
\subfigure[$\bar{\ell}_{23} \simeq 0$]
{\includegraphics[width=4.831cm]{TABLEFIG3} \label{TABLEFIG3}}
\caption{{Cutoff ($\bar{u}_c$) dependence of the phase transition.
(a) If the separation is big enough ($\bar{\ell}_{23} > \bar{\ell}_{23}^{\,\,\,\, c}$, see \eqref{Asymp}) there is no phase transition. The $s$-channel is always favored. (b)(c) Otherwise, there is a phase transition from the $t$-channel to $s$-channel as the cutoff $\bar{u}_c$ increases.}} \label{TABLEFIG}
\end{figure}
\paragraph{Temperature dependence:}
Our result at low temperature is qualitatively consistent with the one at zero temperature~\cite{Ota:2019yfe}. In general, large separation and large cutoff favor the $s$-channel. There are some ranges for the $t$-channel around the corner $\bar{u}_{c} \sim \bar{\ell}_{23} \sim 0$. However, as the temperature increases (blue to green to red in Fig. \ref{AllTEM}), this $t$-channel range shrinks towards $\bar{u}_{c} \sim \bar{\ell}_{23} \sim 0$, and finally vanishes for all cutoffs ($\bar{u}_{c}$) and separations $\bar{\ell}_{23}$ at $\bar{u}_h = 0$.
\paragraph{Maximum separation for the phase transition:}
By expanding \eqref{RatioEq} in terms of $\bar{u}_{c} \ll 1$, we obtain an approximate formula $\bar{\ell}_{23}^c(\bar{u}_c, \bar{u}_h, \bar{\ell}_{34})$ for transition points $\mathcal{S}_{c} = 1$:
\begin{equation} \label{Asymp}
\begin{split}
\bar{\ell}_{23}^c(\bar{u}_c, \bar{u}_h,\bar{\ell}_{34}=1) \,=\, \bar{\ell}_{23}^{\,\,\,(0)} + \frac{ \bar{u}_{c}^2}{2 \, \bar{u}_{h}^2}\left( \bar{\ell}_{23}^{\,\,\,(0)} + 1 + \frac{ 1 - 2\bar{u}_{h} - (1 + 2 \bar{u}_{h}) e^\frac{2}{ \bar{u}_{h}} }{\sqrt{\left(-1 + e^{\frac{1}{\bar{u}_{h}}}\right)^2 \left(1 + e^{\frac{2}{\bar{u}_{h}}} \right)} } \right) + \mathcal{O}\left(\bar{u}_{c}^4\right) \,,
\end{split}
\end{equation}
where $\bar{\ell}_{23}^{\,\,\,(0)} = \bar{\ell}_{23}(0, \bar{u}_h, 1)$ is the value without a cutoff:
\begin{align}
\bar{\ell}_{23}^{\,\,\,(0)} &= -2 + \bar{u}_{h} \log \left[1 - e^{\frac{1}{\bar{u}_{h}}} + e^{\frac{2}{\bar{u}_{h}}} + \sqrt{\left( -1 + e^{\frac{1}{\bar{u}_{h}}} \right)^2 \left( 1+e^{\frac{2}{\bar{u}_{h}}} \right)} \right] \,, \label{Asymp2} \\
&\sim\,
\hspace*{-0cm}\begin{cases}
\sqrt{2}-1 \,, \qquad\qquad \left(\bar{u}_{h} \gg 1\right) \,, \\
(\log 2) \, \bar{u}_{h} \,, \quad\qquad \left(\bar{u}_{h} \ll 1\right) \,.
\hspace*{-0cm}\end{cases} \label{TempDep}
\end{align}
The asymptotic formula \eqref{Asymp} is shown as black dashed curves in Fig. \ref{TransitionPointFigure}.
At a given temperature, $\bar{\ell}_{23}^{\,\,\,(0)}$ is the maximum separation which allows the $t$-channel. If two intervals are farther from each other than $\bar{\ell}_{23}^{\,\,\,(0)}$ only the $s$-channel is allowed. Note that $\bar{\ell}_{23}^{\,\,\,(0)} \sim 0.4$ at the zero temperature limit {($\bar{u}_{h} \to \infty$)}, which is already close to the case at $\bar{u}_h = 2$ in Fig. \ref{LOWTEM}.
\subsection{Holography: two intervals (asymmetric case)}
\begin{figure}[]
\centering
\subfigure[$\bar{\ell}_{34} = 0.1$]
{ {\includegraphics[width=4.53cm]{RedFig}} \label{TransitionPointFigure2a}}
\subfigure[$\bar{\ell}_{34} = 1$]
{ {\includegraphics[width=4.53cm]{GreenFig}}\label{TransitionPointFigure2b}}
\subfigure[$\bar{\ell}_{34} = 10$]
{ {\includegraphics[width=4.53cm]{BlueFig}}\label{TransitionPointFigure2c}}
\caption{{The transition surface in ($\bar{u}_{c}, \bar{u}_{h}, \bar{\ell}_{23}$) space for $\bar{\ell}_{34} = 0.1,1,10$. The region above(below) the transition surface is for the $s$($t$)-channel. As $\bar{\ell}_{34}$ increases, the parameter region for the $t$-channel increases.}}\label{TransitionPointFigure2}
\end{figure}
Next, we consider the asymmetric case ($\bar{\ell}_{34} \neq 1$).
Fig. \ref{TransitionPointFigure2} shows the phase transition surface for $\bar{\ell}_{34} = 0.1, 1, 10$. The region above the transition surface is for the $s$-channel. Fig. \ref{TransitionPointFigure2b} in particular corresponds to the symmetric case ($\bar{\ell}_{34} = 1$), and is the three dimensional version of Fig. \ref{AllTEM}. As $\bar{\ell}_{34}$ decreases, the $t$-channel is more suppressed. As $\bar{\ell}_{34}$ increases, the parameter region for the $t$-channel grows and saturates to a maximum region.
\paragraph{Maximum separation for the phase transition:}
Similarly to the symmetric case, there is a maximum separation for the phase transition. If the separation is bigger than this, only the $s$-channel is available. From Fig. \ref{TransitionPointFigure2} we find that the transition point at $\bar{u}_c=0$ and $\bar{u}_h \rightarrow \infty$ has the maximum separation. By collecting these points for various $\bar{\ell}_{34}$ ($\bar{u}_{c}=0$, $\bar{u}_{h}=100$), we obtain the black curve in Fig. \ref{MAXVAL}. The maximum separation increases as $\bar{\ell}_{34}$ increases, but it does not exceed 1.
Indeed, this curve can be understood as follows.
Without a cutoff ($\bar{u}_c = 0$), in the limit $\bar{u}_h \rightarrow \infty$, $\bar{\ell}_{23}$ satisfying $\mathcal{S}_{c} = 1$ saturates to
\begin{align} \label{MAXanal}
\begin{split}
\bar{\ell}_{23}^{c}(0,\infty,\bar{\ell}_{34}) \,=\, \frac{-1 - \bar{\ell}_{34} + \sqrt{(1+\bar{\ell}_{34})^2 + 4 \, \bar{\ell}_{34} }}{2} \,,
\end{split}
\end{align}
which is plotted as the yellow dashed curve in Fig. \ref{MAXVAL}. Note that
\begin{equation}
\bar{\ell}_{23}^{c}(0,\infty,\bar{\ell}_{34}) \,\rightarrow 1 \,,\label{lmax1}
\end{equation}
as $\bar{\ell}_{34} \rightarrow \infty $.
\begin{figure}[]
\centering
{\includegraphics[width=9cm]{MaxVal}}
\caption{{$\bar{\ell}_{23}^{\,\,\,\, max}$ vs $\bar{\ell}_{34}$. If the separation ($\bar{\ell}_{23}$) is bigger than $\bar{\ell}_{23}^{\,\,\,\, max}$, only the $s$-channel is available.
The black solid line is the numerical plot with ($\bar{u}_{c}=0$, $\bar{u}_{h}=100$). The yellow dashed line represents the analytic result in \eqref{MAXanal}. The red dot corresponds to the symmetric case ($\bar{\ell}_{34}=1$).} }\label{MAXVAL}
\end{figure}
Alternatively, we can also understand \eqref{MAXanal} via the cross ratio $\eta$. This $\eta$ is defined in the 2d CFT, and in this case the phase transition occurs at $\eta = 1/2$~\cite{Hartman:2013mia, Headrick:2010zt}\footnote{In higher dimensional cases, the phase transition of the holographic entanglement entropy of two strips at finite temperature was also studied in \cite{Fischler:2012uv, BabaeiVelni:2019pkw}.}. Since we are considering the case $\bar{u}_c = 0$ and $\bar{u}_h \rightarrow \infty$, we may use that criterion, i.e.
\begin{align} \label{ETARELATION}
\begin{split}
\eta = \frac{(z_1-z_2)(z_3-z_4)}{(z_1-z_3)(z_2-z_4)}
= \frac{(x_1-x_2)(x_3-x_4)}{(x_1-x_3)(x_2-x_4)}
= \frac{\bar{\ell}_{34}}{(1+\bar{\ell}_{23})(\bar{\ell}_{23}+\bar{\ell}_{34})} = \frac{1}{2} \,,
\end{split}
\end{align}
which implies \eqref{MAXanal}.
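As a quick numerical cross-check (not part of the original derivation), one can verify that the positive root of $\eta = 1/2$ in \eqref{ETARELATION} reproduces \eqref{MAXanal}; a minimal sketch:

```python
import numpy as np

def ell23_critical(ell34):
    """Analytic critical separation from the closed-form expression."""
    return (-1.0 - ell34 + np.sqrt((1.0 + ell34) ** 2 + 4.0 * ell34)) / 2.0

def eta(ell23, ell34):
    """Cross ratio for two intervals of lengths 1 and ell34, gap ell23."""
    return ell34 / ((1.0 + ell23) * (ell23 + ell34))

# the critical separation solves eta = 1/2, and saturates at 1 for large ell34
```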
{
All phase transitions in Fig. \ref{TransitionPointFigure2} are first order phase transitions. For example, let us consider the phase transition at $\bar{u}_h=2$ and $\bar{u}_c=0.5$. See the red dot in Fig.~\ref{LOWTEM}. As we increase $\bar{\ell}_{23}$, the configuration of smaller area changes from the $t$-channel to the $s$-channel.
This can be seen concretely in Fig.~\ref{phastTa}. As $\bar{\ell}_{23}$ increases, the entanglement entropy of the $s$-channel ($S^H_\text{s-ch}$) is constant (dotted line) because the entanglement surface does not change, while the entanglement entropy of the $t$-channel ($S^H_\text{t-ch}$) monotonically increases (solid line) because the entanglement surface becomes bigger. The same argument applies to all transition points in Fig.~\ref{TransitionPointFigure2}, so it is natural to have first order phase transitions.
Intuitively, a first order phase transition is natural if two configurations are available over the whole parameter range, as in Fig.~\ref{phastTa}. In this case, the two curves cross at the phase transition, so the first derivative at that point is in general discontinuous (`first' order transition). To have a second order (or continuous) phase transition, usually only one configuration exists before the phase transition and two configurations become available after it. See the schematic picture in Fig.~\ref{phastTb}, where the blue dotted curve goes to the red curve after the phase transition. A good example of this type of phase transition is a holographic superconductor~\cite{Gubser:2008px, Hartnoll:2008vx}.
}
\begin{figure}[]
\centering
\subfigure[First order phase transition]
{\includegraphics[width=7.2cm]{FIGSUB112} \label{phastTa}}
\subfigure[(Schematic) second order phase transition]
{ \includegraphics[width=7.2cm]{Finitebeta3Tem3} \label{phastTb}}
\caption{Intuitive understanding of phase transitions} \label{phastT}
\end{figure}
\section{Conclusions} \label{summary}
In this work we have studied the entanglement entropy and the R\'{e}nyi entropy of multiple intervals in 2d CFT at finite temperature with the first order perturbation by the $T\overline{T}$ deformation.
To compute the R\'{e}nyi entropy, computations of the correlation functions between the twist operators and $T\overline{T}$ are crucial.
We have derived a general formula to compute the R\'{e}nyi entropy (and the entanglement entropy as a special case) in a general CFT up to the first {order} deformation.
By using this formula we have found that the entanglement entropy of multiple intervals in the deformed holographic CFT is the sum of the entanglement entropies of the single intervals. This is a non-trivial result from the field theory side, while it looks straightforward from a holographic viewpoint via the Ryu-Takayanagi formula. In other words, it provides a non-trivial consistency check of holography with the $T\overline{T}$ deformation.
On the contrary, the R\'{e}nyi entropy of two intervals is the sum of the single-interval R\'{e}nyi entropies only if the distance between the two intervals is large enough. This can be understood intuitively from the fact that the holographic R\'{e}nyi entropy is related to a cosmic brane, which, contrary to the Ryu-Takayanagi surface, has a tension. Thus, in general there will be a back-reaction, which disappears only if the two intervals are far away from each other.
{Moreover, we have also made an interesting observation when the cross ratio goes to unity. We show that in this case there is a finite mixing effect between the holomorphic and anti-holomorphic parts, which vanishes for the entanglement entropy but not for the R\'{e}nyi entropy. Because of this mixing, $\delta S_n(A)$ of the two intervals does not become the sum of the single-interval contributions. We provide arguments to understand this non-vanishing mixing effect from the perspectives of field theory and holography.
In general, if $\eta \rightarrow 0$, there is no mixing term and the cluster decomposition (such as the factorization of the CFT correlation function) is allowed, while if $\eta \rightarrow 1$ there should be a mixing term and the cluster decomposition is not allowed. In field theories with conformal symmetry, the cluster decomposition is valid also for $\eta\to1$ thanks to the conformal map, so there is no mixing term. However, if the $T\overline{T}$ deformation is considered, conformal symmetry is broken (i.e., we cannot use a conformal map), so the cluster decomposition property may not work as $\eta\to1$, and the mixing term may be non-zero. Our results can be considered as an example of this case: at finite $\mu$, the mixing term as $\eta\to1$ is zero when $n=1$ (the entanglement entropy) but finite when $n\neq1$ (the R\'{e}nyi entropy).
In holography, we can use a holographic prescription of the R\'{e}nyi entropy, the area of cosmic branes, to understand the mixing effect in the $\eta\to1$ case.
If we interpret the mixing term as coming from the interaction between two cosmic branes through the back-reaction ($n\neq1$), we can argue that the mixing effect may remain for $\mu\neq0$ (finite cutoff) as follows.
i) If the two cosmic branes are very far from each other, as in the case $\eta \rightarrow 0$, the interaction between them would be negligible (vanishing mixing term).
ii) However, in the $\eta\to1$ case, the distance between the two branes is always finite because one brane shrinks to a point on the finite radius cutoff (which is the definition of $\eta\to1$) and the other is bounded by the horizon. Thus, we may expect a remaining interaction (non-vanishing mixing term). {This mixing effect will vanish if the cut-off or temperature goes to zero.}}
For two intervals, there are two configurations for the entanglement entropy, the so-called $s$-channel and $t$-channel. In holography, they correspond to the disconnected and connected Ryu-Takayanagi surfaces, respectively. Mathematically, both are available, but the entanglement entropy corresponds to the one with the smaller value.
From our field theory computation, we have shown that the $s$-channel is always favored in the {\it small} deformation if the lengths of the intervals ($\ell_{12}$ and $\ell_{34}$) and the separation between them (${\ell}_{23}$) are much bigger than the temperature (${u}_h$), i.e. ${\ell}_{ij} \gg {u}_h \sim \beta$. From our holography computation, we have confirmed it and, furthermore, shown that it is true also in the {\it large} (arbitrary) deformation.
The holographic framework can deal with an arbitrary deformation and temperature, contrary to the field theory method. Taking advantage of this, we explored the whole parameter space of the deformation and temperature to identify the parameter range for the $t$-channel.
We find that, at a given deformation, temperature, and $\ell_{34}$,
if $\bar{\ell}_{23} = \ell_{23}/\ell_{12}$ becomes smaller there will be a phase transition from the $s$-channel to the $t$-channel at some critical length, say $\bar{\ell}_{23}^c$. The critical length $\bar{\ell}_{23}^c$ increases as the temperature or the deformation parameter decreases or as $\ell_{34}$ increases. Thus, the maximum value of $\bar{\ell}_{23}^c$ is determined by the CFT (zero deformation) at zero temperature with ${\ell}_{34} \rightarrow \infty$, which gives $\bar{\ell}_{23}^c \rightarrow 1$.
For the R\'{e}nyi entropy of two intervals in this paper, we focused on the case $\eta \to 0$ or $\eta \to 1$ to use the factorization property of the four point function. However, in principle, it is possible to consider an arbitrary $\eta$ in the holographic CFT.
Comparing it with the holographic mutual R\'{e}nyi information {at least to first order in $n-1$ \cite{Dong:2016fnf}} may serve as another important consistency check of holography.
It will be also interesting to generalize the formalism for the R\'{e}nyi entropy in this paper for other entanglement measures (for example, \cite{Calabrese:2012ew, Tamaoka:2018ned, Caputa:2018xuf, Dutta:2019gen}) by considering the suitable twist operators.
We leave these as future work.
\acknowledgments
We would like to thank Toshihiro Ota for fruitful discussions.
This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT $\&$ Future Planning (NRF-2017R1A2B4004810) and a GIST Research Institute (GRI) grant funded by GIST in 2019.
\subsection{Patient population}
The data used in this study was obtained from The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA).
We identified 120 patients from TCGA lower-grade glioma collection\footnote{\url{https://cancergenome.nih.gov/cancersselected/lowergradeglioma}} who had preoperative imaging data available, containing at least a fluid-attenuated inversion recovery (FLAIR) sequence.
Ten patients had to be excluded since they did not have genomic cluster information available.
The final group of 110 patients from the TCGA LGG collection came from the following 5 institutions: Thomas Jefferson University (TCGA-CS, 16 patients), Henry Ford Hospital (TCGA-DU, 45 patients), UNC (TCGA-EZ, 1 patient), Case Western (TCGA-FG, 14 patients), and Case Western – St. Joseph's (TCGA-HT, 34 patients).
The complete list of patients used in this study is included in Online Resource 1.
The entire set of 110 patients was split into 22 non-overlapping subsets of 5 patients each.
This was done for evaluation with cross-validation.
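A minimal sketch of such a non-overlapping split (illustrative only; the actual assignment of patients to subsets is given in Online Resource 1, and the random seed here is a hypothetical choice):

```python
import random

def make_folds(patient_ids, fold_size=5, seed=0):
    """Split patients into non-overlapping folds of equal size."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)  # fixed seed for reproducibility
    return [ids[i:i + fold_size] for i in range(0, len(ids), fold_size)]
```

With 110 patients and a fold size of 5 this yields the 22 subsets used for cross-validation.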
\subsection{Imaging data}
Imaging data was obtained from The Cancer Imaging Archive\footnote{\url{https://wiki.cancerimagingarchive.net/display/Public/TCGA-LGG}} which contains the images corresponding to the TCGA patients and is sponsored by the National Cancer Institute.
We used all modalities when available and only FLAIR in case any other modality was missing.
There were 101 patients with all sequences available, 9 patients with missing post-contrast sequence, and 6 with missing pre-contrast sequence.
The complete list of available sequences for each patient is included in Online Resource 1.
The number of slices varied among patients from 20 to 88.
In order to capture the original pattern of tumor growth, we only analyzed preoperative data.
The assessment of tumor shape was based on FLAIR abnormality since enhancing tumor in LGG is rare.
A researcher in our laboratory, who was a medical school graduate with experience in neuroradiology imaging, manually annotated FLAIR images by drawing an outline of the FLAIR abnormality on each slice to form training data for the automatic segmentation algorithm.
We used software developed in our laboratory for this purpose.
A board-eligible radiologist verified all annotations and modified those that were identified as incorrect.
The dataset of registered images, together with the manual segmentation masks for each case used in our study, has been released and is publicly available at the following link: \url{https://kaggle.com/mateuszbuda/lgg-mri-segmentation}.
\subsection{Genomic data}
Genomic data used in this study consisted of DNA methylation, gene expression, DNA copy number, and microRNA expression, as well as IDH mutation and 1p/19q co-deletion status.
Specifically, in our analysis we consider six previously identified molecular classifications of LGG that are known to be correlated with some tumor shape features~\cite{mazurowski2017radiogenomics}:
\begin{enumerate}[noitemsep,topsep=0em]
\item Molecular subtype based on IDH mutation and 1p/19q co-deletion (three subtypes: IDH mutation-1p/19q co-deletion, IDH mutation-no 1p/19q co-deletion, IDH wild type)
\item RNASeq clusters (4 clusters: R1-R4)
\item DNA methylation clusters (5 clusters: M1-M5)
\item DNA copy number clusters (3 clusters: C1-C3)
\item microRNA expression clusters (4 clusters: mi1-mi4)
\item Cluster of clusters (3 clusters: coc1-coc3)
\end{enumerate}
\subsection{Automatic segmentation}
Figure~\ref{fig:fig1} shows the overview of the segmentation algorithm.
The fully automatic algorithm for obtaining the segmentation mask comprises the following phases: image preprocessing, segmentation, and post-processing.
Once the segmentation masks are generated, we extract shape features that were identified as predictive of molecular subtypes.
The following sections provide details on each of the steps.
Source code of the algorithm described in this section is also available at the following link: \url{https://github.com/mateuszbuda/brain-segmentation}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{images/Figure_1.png}
\caption{A schema showing data processing steps of our system for molecular subtype inference from a sequence of brain MRI}
\label{fig:fig1}
\end{figure}
\subsubsection{Preprocessing}
Images varied significantly between patients in terms of size.
The preprocessing of the image sequences consisted of the following steps:
\begin{itemize}[noitemsep,topsep=0em]
\item Scaling of the images to the common frame of reference.
\item Removal of the skull to focus the analysis on the brain region (a.k.a., skull stripping).
\item Adaptive window and level adjustment based on the image histogram to normalize intensities of tissues between cases.
\item Z-score normalization of the entire data set.
\end{itemize}
The details of all the pre-processing steps are included in the Online Resource 2.
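The final normalization step of the list above could look as follows (a minimal sketch; the adaptive window/level adjustment and skull stripping are separate steps described in Online Resource 2):

```python
import numpy as np

def zscore(volume, eps=1e-8):
    """Z-score normalize intensities: zero mean, unit standard deviation.

    eps guards against division by zero for constant inputs
    (a hypothetical safeguard, not taken from the paper).
    """
    v = np.asarray(volume, dtype=float)
    return (v - v.mean()) / (v.std() + eps)
```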
\subsubsection{Segmentation}
The main segmentation step was performed using a fully convolutional neural network with the U-Net architecture~\cite{ronneberger2015u} shown in Figure~\ref{fig:fig2}.
It comprises four levels of blocks, each containing two convolutional layers with ReLU activation and one max pooling layer in the encoding part, with up-convolutional layers in place of max pooling in the decoding part~\cite{glorot2011deep, lecun1995convolutional, lecun1998gradient, krizhevsky2012imagenet, noh2015learning}.
Consistent with the U-Net architecture, from the encoding layers we use skip connections to the corresponding layers in the decoding part.
They provide a shortcut for gradient flow in shallow layers during the training phase~\cite{drozdzal2016importance}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{images/Figure_2.pdf}
\caption{U-Net architecture used for skull stripping and segmentation. Below each layer specification we provide dimensionality of a single example that this layer outputs}
\label{fig:fig2}
\end{figure}
Manual segmentation served as a ground truth for training a model for automatic segmentation.
We trained two networks, one for cases with three sequences available (pre-contrast, FLAIR, and post-contrast) and the other that used only FLAIR.
For the second network, instead of missing sequences we used neighboring FLAIR slices from both sides of a slice of interest as additional channels.
Since in this scenario the two sequences that occupied channels 1 and 3 of the input are not available, we filled these channels with neighboring tumor slices to provide additional information to the network.
The number of slices containing tumor was considerably lower than those with only background class present.
Therefore, to account for this fact, we applied oversampling with data augmentation, which has been shown to help in training convolutional neural networks~\cite{buda2018systematic}.
We did this by including three instances of each tumor slice in our training set.
To one oversampled slice we applied a random rotation by 5 to 15 degrees, and to the other a random scaling by 4\% to 8\%.
To further reduce the imbalance between tumor and non-tumor pixels, we discarded empty slices that did not contain any brain or other tissue after applying skull stripping.
This step has been undertaken since training a fully convolutional neural network with images that do not contain any positive voxels can be highly detrimental.
Please note that a significant majority of voxels in the abnormal slices are still normal and therefore sufficient negative data is available for training.
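The rotation part of the augmentation above can be sketched as follows (a minimal nearest-neighbor rotation; the 4--8\% scaling copy and the interpolation scheme actually used are not specified in the text, so this is an illustrative stand-in):

```python
import numpy as np

def rotate_nn(img, deg):
    """Rotate a 2-D image about its center, nearest-neighbor sampling."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: source coordinate for each output pixel
    sy = cy + (ys - cy) * np.cos(a) - (xs - cx) * np.sin(a)
    sx = cx + (ys - cy) * np.sin(a) + (xs - cx) * np.cos(a)
    sy = np.clip(np.round(sy).astype(int), 0, h - 1)
    sx = np.clip(np.round(sx).astype(int), 0, w - 1)
    return img[sy, sx]
```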
\subsubsection{Post-processing}
To further improve the accuracy, we implemented a post-processing algorithm that removes false positives.
Specifically, we extracted all tumor volumes using connected components algorithm on a three-dimensional segmentation mask for each patient.
We used 6-connectivity in three dimensions, i.e., neighboring pixels are defined as those connected along the primary axes.
Finally, we included in the final segmentation mask only the pixels comprising the largest connected tumor volume.
This post-processing strategy benefits extraction of shape features (described in the following section) since they are sensitive to isolated false positive pixel segmentations.
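A self-contained sketch of this largest-component filter, assuming a pure-NumPy flood fill in place of a library connected-components routine:

```python
import numpy as np

def largest_component(mask):
    """Keep only the largest 6-connected component of a 3-D binary mask
    (neighbors share a face along one of the primary axes)."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros_like(mask)
    best, best_size = np.zeros_like(mask), 0
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(mask)):
        if visited[start]:
            continue
        stack, comp = [start], [start]
        visited[start] = True
        while stack:  # depth-first flood fill of one component
            z, y, x = stack.pop()
            for dz, dy, dx in offsets:
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < mask.shape[0] and 0 <= ny < mask.shape[1]
                        and 0 <= nx < mask.shape[2]
                        and mask[nz, ny, nx] and not visited[nz, ny, nx]):
                    visited[nz, ny, nx] = True
                    stack.append((nz, ny, nx))
                    comp.append((nz, ny, nx))
        if len(comp) > best_size:
            best_size = len(comp)
            best = np.zeros_like(mask)
            for p in comp:
                best[p] = True
    return best
```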
\subsubsection{Extraction of shape features}
We consider three shape features of a segmented tumor that were identified as important in the context of lower grade glioma radiogenomics~\cite{mazurowski2017radiogenomics}:
\textit{Angular standard deviation (ASD)} is the average of the radial distance standard deviations from the centroid of the mass across ten equiangular bins in one slice, as described in~\cite{georgiou2007multi}.
Before calculating the value of this feature, we normalize radial distances to have a mean of one.
Angular standard deviation of a tumor shape is a quantitative measure of variation in the tumor margin within relatively small parts of the tumor.
It also captures non-circularity of the tumor, i.e. low value indicates circle like shape.
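A minimal sketch of ASD for one slice, assuming the boundary is given as an $(N, 2)$ array of pixel coordinates (binning and normalization follow the description above; details such as bin-edge handling are illustrative assumptions):

```python
import numpy as np

def angular_standard_deviation(boundary, n_bins=10):
    """ASD: mean of radial-distance standard deviations over
    n_bins equiangular bins around the centroid (one slice)."""
    pts = np.asarray(boundary, dtype=float)
    centroid = pts.mean(axis=0)
    d = pts - centroid
    r = np.hypot(d[:, 0], d[:, 1])
    r = r / r.mean()                      # normalize to mean 1
    theta = np.arctan2(d[:, 1], d[:, 0])  # angle in (-pi, pi]
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(theta, bins) - 1, 0, n_bins - 1)
    stds = [r[idx == b].std() for b in range(n_bins) if np.any(idx == b)]
    return float(np.mean(stds))
```

For a perfect circle the radial distances are constant in every bin, so ASD is (numerically) zero, matching the note that a low value indicates a circle-like shape.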
\textit{Bounding ellipsoid volume ratio (BEVR)} is the ratio between the volume of segmented FLAIR abnormality and its minimum bounding ellipsoid.
This feature captures the irregularity of the tumor in three dimensions.
If the tumor fits well into its bounding ellipsoid (high value of BEVR), it is considered more regular while if more space in the bounding ellipsoid is unfilled, the shape is considered irregular.
\textit{Margin fluctuation (MF)} is computed as follows.
First, we find the centroid of the tumor and distances from it to all pixels on the tumor boundary in one slice.
Then, we apply averaging filter of length equal to 10\% of the tumor perimeter measured in the number of pixels.
Margin fluctuation is the standard deviation of the difference between values before and after smoothing, i.e. applying averaging filter.
As with ASD, radial distances are normalized to have a mean of one.
This is done in order to remove the impact of tumor size on the value of this feature.
Margin fluctuation is a two-dimensional feature that quantifies the amount of high frequency changes, i.e. smoothness of the tumor boundary and was previously used for analysis of spiculation in breast tumors~\cite{giger1994computerized, pohlman1996quantitative}.
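The MF computation above can be sketched as follows, assuming the radial distances are given in order along the (closed) boundary; the circular padding used here for the averaging filter is an illustrative assumption:

```python
import numpy as np

def margin_fluctuation(radii, frac=0.1):
    """MF: std of the difference between radial distances before and
    after smoothing with an averaging filter of length frac * perimeter."""
    r = np.asarray(radii, dtype=float)
    r = r / r.mean()                       # normalize to mean 1
    k = max(1, int(round(frac * len(r))))  # 10% of perimeter in pixels
    kernel = np.ones(k) / k
    # circular padding: the boundary is a closed curve
    padded = np.concatenate([r[-k:], r, r[:k]])
    smoothed = np.convolve(padded, kernel, mode="same")[k:-k]
    return float((r - smoothed).std())
```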
\subsection{Statistical analysis}
Our hypothesis was that fully automatically-assessed shape features are predictive of tumor molecular subtypes.
Since we considered 6 definitions of molecular subtypes based on genomic assays and multiple imaging features, we focused our analysis on the relationships between imaging and genomics that were found significant (with manual tumor segmentation) in a previous study~\cite{mazurowski2017radiogenomics}.
Specifically, those were the following relationships: bounding ellipsoid volume ratio with RNASeq, miRNA, CNC, and COC; margin fluctuation with RNASeq; and angular standard deviation with IDH/1p19q, RNASeq, Methylation, CNC, and COC, resulting in 10 specific hypotheses. To assess the statistical significance of these associations, we conducted the Fisher exact test (the fisher.test function in R) for each of the 10 combinations of imaging and genomic features.
For the purpose of this test, we converted each continuous imaging variable into a number from 1 to 4 according to the quartile of the feature value it fell into.
For each imaging and genomic feature combination, we used only the cases that had both imaging and genomic subtype data available.
We conducted a total of 10 statistical tests for each pair of imaging feature and genomic subtype for our primary hypothesis.
To account for multiple hypothesis testing, we applied a Bonferroni correction.
P-values lower than 0.005 (0.05/10) were considered statistically significant for our primary radiogenomics hypotheses.
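The quartile transformation used for the Fisher test can be sketched as follows (a minimal illustration in Python, not the original R code):

```python
import numpy as np

def quartile_bin(values):
    """Map each continuous feature value to its quartile, coded 1-4."""
    v = np.asarray(values, dtype=float)
    q = np.quantile(v, [0.25, 0.5, 0.75])  # inner quartile boundaries
    return np.digitize(v, q) + 1           # 0-3 bin index -> 1-4 code
```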
Additionally, we evaluated performance of the deep learning-based segmentation itself.
We used Dice similarity coefficient~\cite{dice1945measures} as the evaluation metric which measures the overlap between the segmentation provided by the algorithm and the manually-annotated gold standard.
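A minimal sketch of the Dice similarity coefficient for two binary masks (the epsilon guard for empty masks is an illustrative assumption):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```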
In the evaluation process, we used cross-validation.
Specifically, we divided our entire dataset into 22 subsets, each containing exactly 5 cases.
The model training was conducted on the training subsets and then the model was applied to the test cases.
This was repeated 22 times until each subset served once as the test set.
The cases then were pooled for the analysis as described above.
The number of cases included in the training and test sets (which determines the number of folds) is a trade-off between computational cost of training multiple models and having more data to train each of them.
The two extremes of this approach are the leave-one-out strategy, which results in one-case folds, and a 50\% split, which gives 2 folds.
We found folds of 5 patients to be a good balance between a training set size and computational cost.
\section{Introduction}
\label{sec:introduction}
\subfile{introduction}
\section{Dataset}
\label{sec:dataset}
\subfile{dataset}
\section{Methods}
\label{sec:methods}
\subfile{methods}
\section{Results}
\label{sec:results}
\subfile{results}
\section{Discussion}
\label{sec:discussion}
\subfile{discussion}
\section{Limitations}
\label{sec:limitations}
\subfile{limitations}
\section{Conclusions}
\label{sec:conclusions}
\subfile{conclusions}
\vspace{2em}
\section{Introduction}
Anomaly detection aims to discover unexpected events or rare items in data. It is popular in many industrial applications and is an important research area in data mining. Accurate anomaly detection can trigger prompt troubleshooting, help to avoid loss in revenue, and maintain the reputation and branding for a company. For this purpose, large companies have built their own anomaly detection services to monitor their business, product and service health~\cite{yahooo2015EGADS,twitter-ad}. When anomalies are detected, alerts are sent to the operators to make timely decisions related to incidents. For instance, Yahoo released EGADS~\cite{yahooo2015EGADS} to automatically monitor and raise alerts on millions of time-series of different Yahoo properties for various use-cases. At Microsoft, we built an anomaly detection service to monitor millions of metrics coming from Bing, Office and Azure, which enables engineers to move faster in solving live site issues. In this paper, we focus on the pipeline and algorithm of our anomaly detection service specialized for time-series data.
There are many challenges in designing an industrial service for time-series anomaly detection:
\textbf{Challenge 1: Lack of Labels.} To provide anomaly detection services for a single business scenario, the system must process millions of time-series simultaneously. There is no easy way for users to label each time-series manually. Moreover, the data distribution of time-series is constantly changing, which requires the system to recognize anomalies even when similar patterns have not appeared before. This makes supervised models insufficient in the industrial scenario.
\textbf{Challenge 2: Generalization.} Various kinds of time-series from different business scenarios are required to be monitored. As shown in Figure \ref{fig:series_pattern}, there are several typical categories of time-series patterns; and it is important for industrial anomaly detection services to work well on all kinds of patterns. However, existing approaches are not generalized enough for different patterns. For example, Holt-Winters~\cite{holtwinters} performs poorly on (b) and (c), and SPOT~\cite{siffer2017anomaly} performs poorly on (a). Thus, we need to find a solution of better generality.
\begin{figure}[htbp]
\centering
\subfigure[seasonal]{
\label{Fig.seasonal}
\includegraphics[width=0.14\textwidth]{img/13-ad.png}}
\subfigure[stable]{
\label{Fig.stable}
\includegraphics[width=0.14\textwidth]{img/8-ad.png}}
\subfigure[unstable]{
\label{Fig.unstable}
\includegraphics[width=0.14\textwidth]{img/1-ad.png}}
\caption{Different types of time-series.}
\label{fig:series_pattern}
\end{figure}
\textbf{Challenge 3: Efficiency.} In business applications, a monitoring system must process millions, even billions of time-series in near real time. Especially for minute-level time-series, the anomaly detection procedure needs to be finished within limited time. Therefore, efficiency is one of the major prerequisites for online anomaly detection service. Even though the models with large time complexity are good at accuracy, they are often of little use in an online scenario.
To tackle the aforementioned problems, our goal is to develop an anomaly detection approach which is accurate, efficient and general. Traditional statistical models~\cite{twitter-ad,siffer2017anomaly,holtwinters,rosner1983percentage,lu2009network,mahimkar2011rapid,zhang2005network,rasheed2009fourier} can be easily adopted online, but their accuracies are not sufficient for industrial applications. Supervised models~\cite{liu2015opprentice,anomalyGoogle} are superior in accuracy, but they are insufficient in our scenario because of lacking labeled data. There are other unsupervised approaches, for instance, Luminol~\cite{linkedin/luminol} and DONUT~\cite{xu2018unsupervised}. However, these methods are either too time-consuming or parameter-sensitive. Therefore, we aim to develop a more competitive method in the unsupervised manner which favors accuracy, efficiency and generality simultaneously.
In this paper, we borrow the Spectral Residual model \cite{hou2007saliency} from the visual saliency detection domain to our anomaly detection application. Spectral Residual (SR) is an efficient unsupervised algorithm, which demonstrates outstanding performance and robustness in the visual saliency detection tasks. To the best of our knowledge, our work is the first attempt to borrow this idea for time-series anomaly detection. The motivation is that the time-series anomaly detection task is similar to the problem of visual saliency detection essentially. Saliency is what "stands out" in a photo or scene, enabling our eye-brain connection to quickly (and essentially unconsciously) focus on the most important regions. Meanwhile, when anomalies appear in time-series curves, they are always the most salient part in vision.
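The Spectral Residual idea described above can be sketched for a 1-D series as follows (a minimal illustration of the general SR recipe — log-amplitude spectrum, local average, residual, back-transform with the original phase; the filter width `q` is a hypothetical choice, not a value from this paper):

```python
import numpy as np

def spectral_residual_saliency(x, q=3):
    """Saliency map of a 1-D series via the Spectral Residual method."""
    x = np.asarray(x, dtype=float)
    spec = np.fft.fft(x)
    amp = np.abs(spec)
    log_amp = np.log(amp + 1e-8)              # avoid log(0)
    kernel = np.ones(q) / q
    avg_log_amp = np.convolve(log_amp, kernel, mode="same")
    residual = log_amp - avg_log_amp          # the spectral residual
    # back-transform keeping the original phase spectrum
    saliency = np.abs(np.fft.ifft(np.exp(residual + 1j * np.angle(spec))))
    return saliency
```

Points with large saliency values are candidate anomalies; a sudden spike in a smooth series shows up as a pronounced peak in the saliency map.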
Moreover, we propose a novel approach based on the combination of SR and CNN. CNN is a state-of-the-art method for supervised saliency detection when sufficient labeled data is available; while SR is a state-of-the-art approach in the unsupervised setting. Our innovation is to unite these two models by applying CNN directly on the output of the SR model. As the problem of anomaly discrimination becomes much easier upon the output of the SR model, we can train the CNN on automatically generated anomalies and achieve significant performance enhancement over the original SR model. Because the anomalies used for CNN training are fully synthetic, the SR-CNN approach remains unsupervised and establishes a new state-of-the-art performance when no manually labeled data is available.
As shown in the experiments, our proposed algorithm is more accurate and general than state-of-the-art unsupervised models. Furthermore, we also apply it as an additional feature in the supervised learning model. The experimental results demonstrate that the performance can be further improved when labeled data is available; and the additional features do provide complementary information to existing anomaly detectors. Up to the date of paper submission, the $F_1$-score of our unsupervised and supervised approaches are both the best ever achieved on the open datasets.
The \textbf{contributions} of this paper are highlighted as below:
\begin{itemize}
\item For the first time in the anomaly detection field, we borrow the technique of visual saliency detection to detect anomalies in time-series data. The inspiring results prove the possibility of using computer vision technologies to solve anomaly detection problems.
\item We combine the SR and CNN model to improve the accuracy of time-series anomaly detection. The idea is innovative and the approach outperforms current state-of-the-art methods by a large margin. Especially, the $F_1$-score is improved by more than 20\% on Microsoft production data.
\item From the practical perspective, the proposed solution has good generality and efficiency. It can be easily integrated with online monitoring systems to provide quick alerts for important online metrics. This technique has enabled product teams to move faster in detecting issues, save manual efforts, and accelerate the process of diagnostics.
\end{itemize}
The rest of this paper is organized as follows. First, in Section \ref{sys_design}, we describe the details of system design, including data ingestion, experimentation platform and online compute. Then, we share our experience of real applications in Section 3 and introduce the methodology in Section 4. Experimental results are analyzed in Section 5 and related works are presented in Section 6. Finally, we conclude our work and put forward future work in Section 7.
\section{system overview}
\label{sys_design}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{"img/System_Overview".png}
\caption{System Overview}
\end{figure*}
The whole system consists of three major components: \textbf{data ingestion}, \textbf{experimentation platform} and \textbf{online compute}. Before going into more detail about these components, we will introduce the whole pipeline first. Users can register monitoring tasks by ingesting time-series to the system. Ingesting time-series from different data sources (including Azure storage, databases and online streaming data) is supported. The \textit{ingestion worker} is responsible for updating each time-series according to the designated granularity, for example, minute, hour, or day. Time-series points enter the streaming pipeline through Kafka and are stored in the time-series database. The \textit{anomaly detection processor} calculates the anomaly status for incoming time-series points online. In a common scenario of monitoring business metrics, users ingest a collection of time-series simultaneously. As an example, the Bing team ingests time-series representing the usage of different markets and platforms. When an incident happens, the \textit{alert service} combines anomalies of related time-series and sends them to users through emails and paging services. The combined anomalies show the overall status of an incident and help users to shorten the time in diagnosing issues. Figure 2 illustrates the general pipeline of the system.
\subsection{Data Ingestion}
Users can register a monitoring task by creating a \textit{Datafeed}. Each datafeed is identified by \textit{Connect String} and \textit{Granularity}. The Connect String is used to connect the user's storage system to the anomaly detection service. The Granularity indicates the update frequency of a datafeed; the minimum granularity is one minute. An ingestion task will ingest the data points of time-series to the system according to the given granularity. For example, if a user sets minute as the granularity, the ingestion module will create a task every minute to ingest a new data point. Time-series points are ingested into InfluxDB\footnote{https://www.influxdata.com/} and Kafka\footnote{https://kafka.apache.org/}. Throughput of this module varies from 10,000 to 100,000 data points per second.
\subsection{Online Compute}
The online compute module processes each data point immediately after it enters the pipeline. Detecting the anomaly status of an incoming point requires a sliding window over the time-series data points. Therefore, we use Flink\footnote{https://flink.apache.org/} to manage the points in memory and optimize computation efficiency. Currently, the streaming pipeline processes more than 4 million time-series every day in production, and the maximum throughput can reach 4 million per minute. The \textit{anomaly detection processor} detects anomalies for each single time-series. In practice, a single anomaly is not enough for users to diagnose their service efficiently. Thus, the \textit{smart alert processor} correlates the anomalies from different time-series and generates an incident report accordingly. As anomaly detection is the main topic of this paper, smart alert is not discussed in further detail.
\subsection{Experimentation Platform}
We build an experimentation platform to evaluate the performance of anomaly detection models. Before a new model is deployed, offline experiments and online A/B tests are conducted on the platform. A labeling service is provided to human editors, who can mark a point as an anomaly or not on the portal. Editors first label the true anomaly points of a single time-series and then label the false anomaly points among the detection results of a specific model. The labeled data is used to evaluate the accuracy of the anomaly detection model. We also evaluate the efficiency and generality of each model on the platform. In online experiments, we flight several datafeeds to the new model. A couple of metrics, such as the click-through rate of alerts, the percentage of anomalies and the false anomaly rate, are used to decide whether the new model can be deployed to production. The experimentation platform is built on the Azure machine learning service\footnote{https://azure.microsoft.com/en-us/services/machine-learning-service/}. If a model is verified to be effective, the platform exposes it as a web service and hosts it on Kubernetes\footnote{https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/}.
\section{applications}
\begin{figure*}[t]
\subfigure[Alert Page]{
\label{fig:alert}
\includegraphics[width=0.45\textwidth,height=5cm]{img/alert-v2.png}}
\hspace*{\fill}
\subfigure[Incident Report]{
\label{fig:incident}
\includegraphics[width=0.45\textwidth,height=5cm]{img/incident.png}}
\caption{An illustration of an example application from Microsoft Bing}
\label{fig:sys_exp}
\end{figure*}
At Microsoft, it is a common need to monitor business metrics and act quickly to address issues when anything falls outside of the normal pattern. To tackle this problem, we build a scalable system with the ability to monitor minute-level time-series from various data sources. Automated diagnostic insights are provided to help users resolve their issues efficiently. The service has been used by more than 200 product teams within Microsoft, across the Office 365, Windows, Bing and Azure organizations, with more than 4 million time-series ingested and monitored continuously.
As an example, Michael from the Bing team would like to monitor the usage of their service in the global marketplace. In the anomaly detection system, he created a new \textit{datafeed} to ingest thousands of time-series, each indicating the usage of a specific market (US, UK, etc.), device (PC, windows phone, etc.) or channel (PORE, QBRE, etc.). Within 5 minutes, Michael saw the ingested time-series on the portal. At 9am, Oct-14, 2017, the time-series associated with the UK market encountered an incident. Michael was notified through e-mail alerts (as shown in Figure \ref{fig:alert}) and started to investigate the problem. He opened the incident report, where the time-series whose anomalies around 9am are most strongly correlated with the incident are selected from the whole set. As shown in Figure \ref{fig:incident}, usage on PC devices and the PORE channel can be found in the incident report. Michael brought this insight to the team and finally found that the problem was caused by a relevance issue which made users issue many pagination requests (PORE) to get satisfactory search results.
As another example, the Outlook anti-spam team used to leverage a rule-based method to monitor the effectiveness of their spam detection system. However, this method was hard to maintain and often produced bad cases in some Geo-locations. Therefore, they ingested key metrics into our anomaly detection service to monitor the effectiveness of their spam detection model across different Geo-locations. Through our API, they have integrated the anomaly detection ability into the Office DevOps platform. By using this automatic detection service, they have covered more Geo-locations and received fewer false positive cases compared to the original rule-based solution.
\section{methodology}
The problem of time-series anomaly detection is defined as below.
\newtheorem{problem}{Problem}
\begin{problem}
\label{def}
Given a sequence of real values, i.e., $\textbf{x}=\{x_1, x_2, ..., x_n\}$, the task of time-series anomaly detection is to produce an output sequence, $\textbf{y}=\{y_1, y_2, ..., y_n\}$, where $y_i \in \{0, 1\}$ denotes whether $x_i$ is an anomaly point.
\end{problem}
As emphasized in the Introduction, our challenge is to develop a general and efficient algorithm with no labeled data. Inspired by the domain of visual computing, we adopt Spectral Residual (SR)~\cite{hou2007saliency}, a simple yet powerful approach based on the Fast Fourier Transform (FFT)~\cite{van1992computational}. The SR approach is unsupervised and has proven efficient and effective in visual saliency detection applications. We believe that visual saliency detection and time-series anomaly detection are essentially similar tasks, because anomaly points are usually salient from the visual perspective.
Furthermore, recent saliency detection research has favored end-to-end training with Convolutional Neural Networks (CNNs) when sufficient labeled data is available~\cite{zhao2015saliency}. Nevertheless, this is prohibitive for our application, as large-scale labeled data is difficult to collect online. As a trade-off, we propose a novel method, SR-CNN, which applies a CNN directly on the output of the SR model. The CNN is responsible for learning a discriminative rule to replace the single threshold adopted by the original SR solution. Learning the CNN model on SR results is much easier than learning it on the original input sequence. Specifically, we can use artificially generated anomaly labels to train the CNN-based discriminator. In the following sub-sections, we introduce the details of the SR and SR-CNN methods respectively.
\subsection{SR (Spectral Residual)}
\label{section:spectral residual}
The Spectral Residual (SR) algorithm consists of three major steps: (1) Fourier Transform to get the log amplitude spectrum; (2) calculation of the \textit{spectral residual}; and (3) Inverse Fourier Transform to transform the sequence back to the spatial domain. Mathematically, given a sequence $\textbf{x}$, we have
\begin{align}
& A(f) = Amplitude(\mathfrak{F}(\textbf{x})) \\
& P(f) = Phase(\mathfrak{F}(\textbf{x})) \\
& L(f) = \log(A(f)) \\
& AL(f) = h_q(f) \cdot L(f) \\
& R(f) = L(f) - AL(f) \\
& S(\textbf{x}) = \left\Vert \mathfrak{F}^{-1}(\exp(R(f) + iP(f))) \right\Vert
\end{align}
where $\mathfrak{F}$ and $\mathfrak{F}^{-1}$ denote the Fourier Transform and Inverse Fourier Transform respectively. \textbf{x} is the input sequence with shape $n \times 1$; $A(f)$ is the amplitude spectrum of sequence \textbf{x}; $P(f)$ is the corresponding phase spectrum of sequence \textbf{x}; $L(f)$ is the log representation of $A(f)$; and $AL(f)$ is the average spectrum of $L(f)$, which can be approximated by convolving $L(f)$ with $h_q(f)$, where $h_q(f)$ is a $q \times q$ matrix defined as:
\begin{align*}
h_q(f)
=
\frac{1}{q^{2}}
\begin{bmatrix}
1 & 1 & 1 & \dots & 1 \\
1 & 1 & 1 & \dots & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 1 & 1 & \dots & 1
\end{bmatrix}
\end{align*}
$R(f)$ is the \textit{spectral residual}, i.e., the log spectrum $L(f)$ minus the averaged log spectrum $AL(f)$. The \textit{spectral residual} serves as a compressed representation of the sequence in which the innovation part of the original sequence becomes more significant. Finally, we transform the sequence back to the spatial domain via the Inverse Fourier Transform. The resulting sequence $S(\textbf{x})$ is called the \textit{saliency map}.
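As a minimal sketch (not the production implementation), the steps above can be written in a few lines of NumPy; the 1-D mean filter standing in for $h_q(f)$ and the small constant guarding the logarithm are our own assumptions:

```python
import numpy as np

def spectral_residual(x, q=3):
    """Compute the SR saliency map of a 1-D sequence (sketch of Eq. (1)-(6))."""
    eps = 1e-8                                   # guard against log(0); an assumption
    fft = np.fft.fft(x)
    amplitude = np.abs(fft)                      # A(f)
    phase = np.angle(fft)                        # P(f)
    log_amp = np.log(amplitude + eps)            # L(f)
    kernel = np.ones(q) / q                      # 1-D analogue of h_q(f)
    avg_log_amp = np.convolve(log_amp, kernel, mode="same")  # AL(f)
    residual = log_amp - avg_log_amp             # R(f), the spectral residual
    # back to the spatial domain: the saliency map S(x)
    return np.abs(np.fft.ifft(np.exp(residual + 1j * phase)))
```

On a smooth series with a single spike, the saliency map peaks sharply at the spike while the regular component is largely whitened away.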
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{img/sr-example-dark-blue.png}
\caption{Example of SR model results}
\label{fig:sr_example}
\end{figure}
Figure \ref{fig:sr_example} shows an example of an original time-series and the corresponding \textit{saliency map} after SR processing. As shown in the figure, the innovation point (shown in red) in the \textit{saliency map} is much more significant than in the original input. Based on the \textit{saliency map}, it is easy to leverage a simple rule to annotate the anomaly points correctly. We adopt a simple threshold $\tau$ to annotate anomaly points. Given the saliency map $S(\textbf{x})$, the output sequence $O(\textbf{x})$ is computed by:
\begin{align}
O(x_i) =
\begin{cases}
1, &\text{if $\frac{S(x_i) - \overline{S(x_i)}}{\overline{S(x_i)}} > \tau $,} \\
0, &\text{otherwise,}
\end{cases}
\label{fuc:threshold}
\end{align}
where $x_i$ represents an arbitrary point in sequence \textbf{x}; $S(x_i)$ is the corresponding point in the saliency map; and $\overline{S(x_i)}$ is the local average of the preceding $z$ points of $S(x_i)$.
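Assuming, as stated, that $\overline{S(x_i)}$ is the mean of the $z$ preceding saliency values, the threshold rule can be sketched as:

```python
import numpy as np

def detect(saliency, tau=3.0, z=21):
    """Flag points whose saliency exceeds the local average by a factor tau (sketch of the rule above)."""
    out = np.zeros(len(saliency), dtype=int)
    for i in range(1, len(saliency)):
        local_avg = saliency[max(0, i - z):i].mean()   # mean of the preceding z points
        if local_avg > 0 and (saliency[i] - local_avg) / local_avg > tau:
            out[i] = 1
    return out
```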
In practice, the FFT operation is conducted within a \textit{sliding window} of the sequence. Moreover, we expect the algorithm to discover anomaly points with low latency. That is, given a stream $x_1, x_2, ..., x_n$ where $x_n$ is the most recent point, we want to tell whether $x_n$ is an anomaly point as soon as possible. However, the SR method works better if the target point is located at the center of the sliding window. Thus, we add several \textit{estimated points} after $x_n$ before inputting the sequence to the SR model. The value of the estimated point $x_{n+1}$ is calculated by:
\begin{align}
\overline{g} = \frac{1}{m}\sum_{i=1}^{m}{g(x_{n}, x_{n-i})} \\
x_{n+1} = x_{n-m+1} + \overline{g} \cdot m
\end{align}
where $g(x_i, x_j)$ denotes the gradient of the straight line between points $x_i$ and $x_j$; and $\overline{g}$ represents the average gradient of the preceding points. $m$ is the number of preceding points considered, and we set $m=5$ in our implementation. We find that the first estimated point plays a decisive role. Thus, we simply copy $x_{n+1}$ $\kappa$ times and append these points to the tail of the sequence.
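Under our reading of the indices, this extrapolation step can be sketched as:

```python
import numpy as np

def extend_series(x, m=5, kappa=5):
    """Append kappa estimated points so the recent point sits nearer the window center (sketch)."""
    x = np.asarray(x, dtype=float)
    # g(x_n, x_{n-i}): gradient of the line between the last point and each of the m preceding points
    grads = [(x[-1] - x[-1 - i]) / i for i in range(1, m + 1)]
    g_bar = float(np.mean(grads))
    x_next = x[-m] + g_bar * m          # x_{n+1} = x_{n-m+1} + g_bar * m
    # the first estimate is copied kappa times, as described in the text
    return np.concatenate([x, np.full(kappa, x_next)])
```

For a perfectly linear series the estimated points simply continue the trend.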
To summarize, the SR algorithm contains only a few hyper-parameters, i.e., sliding window size $\omega$, estimated points number $\kappa$, and anomaly detection threshold $\tau$. We set them empirically and show their robustness in our experiments. Therefore, the SR algorithm is a good choice for online anomaly detection service.
\subsection{SR-CNN}
The original SR method utilizes a single threshold upon the \textit{saliency map} to detect anomaly points, as defined in Equation (\ref{fuc:threshold}). However, this rule is na\"ive, and it is natural to seek more sophisticated decision rules. Our philosophy is to train a discriminative model on well-designed synthetic data as the anomaly detector. The synthetic data can be generated by injecting anomaly points into a collection of \textit{saliency maps} that are not included in the evaluated data. The injected points are labeled as anomalies while the others are labeled as normal. Concretely, we randomly select several points in the time-series, calculate the injection value to replace each original point, and compute its \textit{saliency map}. The values of anomaly points are calculated by:
\begin{align}
x = (\overline{x} + mean)(1 + var) \cdot r + x
\end{align}
where $\overline{x}$ is the local average of the preceding points; $mean$ and $var$ are the mean and variance of all points within the current sliding window; and $r \sim \mathcal{N}(0,1)$ is randomly sampled.
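The injection step can be sketched as follows; the window size and the exact span used for the local average, mean and variance are our assumptions, since the text leaves them to the sliding-window context:

```python
import numpy as np

def inject_anomaly(x, idx, window=64, rng=None):
    """Replace x[idx] with a synthetic anomaly following the injection formula (sketch)."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.asarray(x, dtype=float).copy()
    start = max(0, idx - window)
    local_avg = x[start:idx].mean()            # x_bar: local average of the preceding points
    seg = x[start:idx + 1]                     # current sliding window (assumption)
    mean, var = seg.mean(), seg.var()
    r = rng.standard_normal()                  # r ~ N(0, 1)
    x[idx] = (local_avg + mean) * (1 + var) * r + x[idx]
    return x
```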
We choose a CNN as our discriminative model architecture. CNNs are commonly used as supervised models for saliency detection~\cite{zhao2015saliency}. However, as we do not have enough labeled data in our scenario, we apply the CNN on the basis of the \textit{saliency map} instead of the raw input, which makes the problem of anomaly annotation much easier. In practice, we collect production time-series with synthetic anomalies as training data. The advantage is that the detector can adapt to changes in the time-series distribution, while no manually labeled data is required. In our experiments, we use a total of 65 million points for training.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{img/CNN-arc-2.png}
\caption{SR-CNN architecture}
\label{fig:arc}
\end{figure}
The architecture of SR-CNN is visualized in Figure \ref{fig:arc}. The network is composed of two 1-D convolutional layers (with filter size equal to the sliding window size $\omega$) and two fully connected layers. The channel size of the first convolutional layer is equal to $\omega$, while the channel size is doubled in the second convolutional layer. Two fully connected layers are stacked before the Sigmoid output. Cross entropy is adopted as the loss function, and the SGD optimizer is utilized in the training process.
\section{experiments}
\subsection{Datasets}
\label{section:dataset}
We use three datasets to evaluate our model. KPI and Yahoo are public datasets\footnote{These two datasets are used for research purposes only and are not leveraged in production.} that are commonly used for evaluating the performance of time-series anomaly detection, while Microsoft is an internal dataset collected from production. These datasets cover time-series of different time intervals and a broad spectrum of time-series patterns. In these datasets, anomaly points are labeled as positive samples and normal points as negative. The statistics of these datasets are shown in Table \ref{Statistics of datasets}.
\textbf{KPI} is released by the AIOPS data competition~\cite{kpidataset,kpicomptition}. The dataset consists of multiple KPI curves with anomaly labels collected from various Internet companies, including Sogou, Tencent, eBay, etc. Most KPI curves have an interval of 1 minute between two adjacent data points, while some of them have an interval of 5 minutes.
\textbf{Yahoo} is an open dataset for anomaly detection released by Yahoo Labs\footnote{\url{https://yahooresearch.tumblr.com/post/114590420346/a-benchmark-dataset-for-time-series-anomaly}}. Part of the time-series curves are synthetic (i.e., simulated), while the rest come from the real traffic of Yahoo services. The anomaly points in the simulated curves are generated algorithmically, and those in the real-traffic curves are labeled by editors manually. The interval of all time-series is one hour.
\textbf{Microsoft} is a dataset obtained from our internal anomaly detection service at Microsoft. We randomly select a collection of time-series for evaluation. The selected time-series reflect different KPIs, including revenues, active users, number of pageviews, etc. The anomaly points are labeled manually by customers or editors, and the interval of these time-series is one day.
\begin{table}
\renewcommand\arraystretch{1.2}
\caption{Statistics of datasets}
\label{Statistics of datasets}
\begin{threeparttable}
\begin{tabularx}{0.48\textwidth}{cccc} \hline
\textbf{DataSet}& \textbf{Total Curves} & \textbf{Total Points} & \textbf{Anomaly Points}\\ \hline
\textbf{KPI} & 58& 5922913& 134114/2.26\% \\
\textbf{Yahoo} & 367& 572966& 3896/0.68\% \\
\textbf{Microsoft} & 372 & 66132 & 1871/2.83\% \\
\hline
\end{tabularx}
\end{threeparttable}
\end{table}
\subsection{Metrics}
\label{section:metrics}
We evaluate our model from three aspects, \textbf{accuracy}, \textbf{efficiency} and \textbf{generality}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{img/evaluate.png}
\caption{\label{fig:eval_strategy}Illustration of the evaluation strategy. There are 10 contiguous points in the time-series, where the first row indicates ground truth; the second row shows the point-wise anomaly detection results; and the third row shows adjusted results according to the evaluation strategy.}
\end{figure}
We use precision, recall and $F_{1}$-score to indicate the \textbf{accuracy} of our model. In real applications, human operators do not care about point-wise metrics. It is acceptable for an algorithm to trigger an alert for any point in a contiguous anomaly segment if the delay is not too long. Thus, we adopt the evaluation strategy\footnote{The evaluation script is available at \url{https://github.com/iopsai/iops/tree/master/evaluation}} following~\cite{xu2018unsupervised}. We mark a whole segment of continuous anomalies as a single positive sample, which means that no matter how many anomalies are detected in this segment, only one effective detection is counted. If any point in an anomaly segment is detected by the algorithm, and the delay of this point is no more than $k$ from the start point of the segment, we say the segment is detected correctly. In that case, all points in the segment are treated as correct, while points outside the anomaly segments are treated as normal.
The evaluation strategy is illustrated in Figure \ref{fig:eval_strategy}. As shown in the first row of Figure \ref{fig:eval_strategy}, there are 10 contiguous points and two anomaly segments in the example time-series. The prediction results are shown in the second row. In this case, if we allow a delay of one point, i.e., $k=1$, the first segment is treated as correct and the second as incorrect (because the delay is more than one point). The adjusted results are illustrated in the third row. Based on the adjusted results, the values of precision, recall and $F_1$-score can be calculated accordingly. In our experiments, we set $k=7$ for minutely time-series, $k=3$ for hourly time-series and $k=1$ for daily time-series, following the requirements of the real application.
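The adjustment can be sketched as follows (one reading of the strategy: a segment counts as detected only if some prediction falls within its first $k+1$ points, i.e., with delay at most $k$):

```python
import numpy as np

def adjust_predictions(truth, pred, k):
    """Adjust point-wise predictions per the delay-aware evaluation strategy (sketch)."""
    truth = np.asarray(truth, dtype=int)
    pred = np.asarray(pred, dtype=int)
    adjusted = pred.copy()
    i, n = 0, len(truth)
    while i < n:
        if truth[i] == 1:
            j = i
            while j < n and truth[j] == 1:         # find the end of the anomaly segment
                j += 1
            hit = pred[i:min(j, i + k + 1)].any()  # detected within delay k?
            adjusted[i:j] = 1 if hit else 0        # whole segment correct or missed
            i = j
        else:
            i += 1                                 # points outside segments are kept as-is
    return adjusted
```

For example, with truth `[0,1,1,1,0,0,1,1,0,0]`, prediction `[0,0,1,0,0,0,0,0,1,0]` and $k=1$, the first segment is counted as fully detected and the second as fully missed.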
\textbf{Efficiency} is another key indicator of anomaly detection models, especially for those applied in online services. In the system, we must complete hundreds of thousands of calculations per second. The latency of the model needs to be small enough that it does not block the whole computation pipeline. In our experiments, we evaluate the total execution time on the three datasets to compare the efficiency of different anomaly detection approaches.
Besides accuracy and efficiency, we also emphasize \textbf{generality} in our evaluation. As illustrated previously, an industrial anomaly detection model should have the ability to handle different types of time-series. To evaluate generality, we manually group the time-series in the Yahoo dataset into 3 major classes (seasonal, stable and unstable, as shown in Figure \ref{fig:series_pattern}) and compare the $F_1$-score on each class separately.
\subsection{SR/SR-CNN Experiment}
\label{section: SR experiment}
\begin{table*}[ht]
\centering
\setlength\tabcolsep{4.0pt}
\renewcommand\arraystretch{1.2}
\caption{Result comparison of cold-start}
\label{table:result}
\begin{threeparttable}
\begin{tabularx}{\textwidth}{l|cccc|cccc|cccc} \hline
& \multicolumn{4}{c|}{\textbf{KPI}}& \multicolumn{4}{c|}{\textbf{Yahoo}}& \multicolumn{4}{c}{\textbf{Microsoft}}\\\hline
Model& $F_{1}$-score&$Precision$&$Recall$&$Time(s)$& $F_{1}$-score&$Precision$&$Recall$&$Time(s)$& $F_{1}$-score&$Precision$&$Recall$&$Time(s)$\\\hline
\textbf{FFT}& \textbf{0.538} & 0.478 & 0.615& 3756.63 & 0.291& 0.202& 0.517& 356.56 & 0.349& 0.812& 0.218& 8.38\\
\textbf{Twitter-AD}& 0.330& 0.411& 0.276& 523232.0 & 0.245& 0.166& 0.462& 301601.50 & 0.347& 0.716& 0.229& 6698.80\\
\textbf{Luminol}& 0.417 & 0.306 &
0.650 & 14244.92 & \textbf{0.388}& 0.254& 0.818 & 1071.25 & \textbf{0.443}& 0.776 & 0.310& 16.26 \\ \hline
\textbf{SR}& 0.666 & 0.637 & 0.697& 1427.08 & 0.529& 0.404 & 0.765 & 43.59 & 0.484 & 0.878& 0.334& 2.45\\
\textbf{SR-CNN}& \textbf{0.732} & 0.811 & 0.667 & 6805.13 &\textbf{0.655} & 0.786&0.561 &279.97 &\textbf{0.537} &0.468 & 0.630& 25.26\\
\hline
\end{tabularx}
\end{threeparttable}
\end{table*}
\begin{table*}[!ht]
\centering
\renewcommand\arraystretch{1.2}
\caption{Result comparison on test data}
\label{table:result_training}
\begin{threeparttable}
\begin{tabularx}{\textwidth}{l|cccc|cccc|cccc} \hline
& \multicolumn{4}{c|}{\textbf{KPI}}& \multicolumn{4}{c|}{\textbf{Yahoo}}& \multicolumn{4}{c}{\textbf{Microsoft}}\\\hline
Model& $F_{1}$-score & $Precision$ & $Recall$ & $Time(s)$ & $F_{1}$-score & $Precision$ & $Recall$ & $Time(s)$ &$F_{1}$-score & $Precision$ & $Recall$ & $Time(s)$\\\hline
\textbf{SPOT}& 0.217& 0.786& 0.126& 9097.85 & \textbf{0.338}& 0.269& 0.454& 2893.08& 0.244& 0.702& 0.147& 9.43\\
\textbf{DSPOT}& \textbf{0.521}& 0.623& 0.447& 1634.41 & 0.316& 0.241& 0.458& 339.62 & 0.190& 0.394& 0.125& 1.37 \\
\textbf{DONUT}& 0.347&0.371& 0.326& 24248.13 & 0.026& 0.013& 0.825& 2572.76 & \textbf{0.323}& 0.241& 0.490& 288.36\\\hline
\textbf{SR}& 0.622 & 0.647 & 0.598 & 724.02 & 0.563& 0.451& 0.747& 22.71 & 0.440& 0.814& 0.301& 1.55 \\
\textbf{SR-CNN}& \textbf{0.771} & 0.797 & 0.747 & 2724.33 & \textbf{0.652} & 0.816 & 0.542 & 125.37 &\textbf{0.507} &0.441 & 0.595&16.13\\
\hline
\end{tabularx}
\end{threeparttable}
\end{table*}
\begin{table}
\renewcommand\arraystretch{1.2}
\caption{Generality Comparison on Yahoo dataset}
\label{table:generality}
\begin{threeparttable}
\begin{tabularx}{0.48\textwidth}{ccccccccc} \hline
& Seasonal & Stable & Unstable & Overall & $Var$\\ \hline
\textbf{FFT} & \textbf{0.446} & 0.370 & 0.301 & 0.364 & \textbf{0.060} \\
\textbf{Twitter-AD} & 0.397 & \textbf{0.924} & \textbf{0.438} & \textbf{0.466} & 0.268 \\
\textbf{Luminol} & 0.374 & 0.763 & 0.428 & 0.430 & 0.195 \\
\textbf{SPOT} & 0.199 & 0.879 & 0.356 & 0.338 & 0.322 \\
\textbf{DSPOT} & 0.211 & 0.485 & 0.379 & 0.316 & 0.120 \\
\textbf{DONUT} & 0.023 & 0.032 & 0.029 & 0.026 & 0.004 \\\hline
\textbf{SR} & 0.558 & 0.601 & \textbf{0.556} & 0.563 & \textbf{0.023} \\
\textbf{SR-CNN} & \textbf{0.716} & \textbf{0.752} & 0.464 & \textbf{0.652} & 0.128 \\
\hline
\end{tabularx}
\begin{tablenotes}
\footnotesize \item {$Var$ indicates the standard deviation of the overall $F_{1}$-scores for the three classes}
\end{tablenotes}
\end{threeparttable}
\end{table}
We compare SR and SR-CNN with state-of-the-art unsupervised time-series anomaly detection methods. The baseline models include FFT (Fast Fourier Transform)~\cite{rasheed2009fourier}, Twitter-AD (Twitter Anomaly Detection)~\cite{twitter-ad}, Luminol (LinkedIn Anomaly Detection)~\cite{linkedin/luminol}, DONUT~\cite{xu2018unsupervised}, SPOT and DSPOT~\cite{siffer2017anomaly}. Among these methods, FFT, Twitter-AD and Luminol do not need additional data to start, so we compare these models in a cold-start setting by treating all the time-series as test data. On the other hand, SPOT, DSPOT and DONUT need additional data to train their models. Therefore, we split the points of each time-series into two halves according to time order. The first half is utilized to train those unsupervised models, while the second half is leveraged for evaluation. Note that DONUT can leverage additional labeled data to improve anomaly detection performance. However, as we aim for a fair comparison in the fully unsupervised setting, we do not use additional labeled data in the implementation\footnote{https://github.com/haowen-xu/donut}.
The experiments are conducted in a streaming pipeline. The points of a time-series are ingested into the evaluation pipeline sequentially. In each turn, we only detect whether the most recent point is an anomaly, while the succeeding points are invisible. In the cold-start setting, the recommended configurations, which come from the papers or code published by the authors, are applied to the baseline models. For SR and SR-CNN, we set the hyper-parameters empirically. In SR, the shape $q$ of $h_q(f)$ is set to 3, the number of preceding points $z$ for the local average is set to 21, the threshold $\tau$ is set to 3, the number of estimated points $\kappa$ is set to 5, and the sliding window size $\omega$ is set to 1440 on KPI, 64 on Yahoo and 30 on Microsoft. For SR-CNN, $q$, $z$, $\kappa$ and $\omega$ are set to the same values.
We report (1) $F_{1}$-score; (2) $Precision$; (3) $Recall$; and (4) CPU execution time separately for each dataset. We can see that SR significantly outperforms the current state-of-the-art unsupervised models. Furthermore, SR-CNN achieves further improvement on all three datasets, which shows the advantage of replacing the single threshold with a CNN discriminator. Table \ref{table:result} shows the comparison with FFT, Twitter-AD and Luminol in the cold-start scenario. We improve the $F_1$-score by 36.1\% on the KPI dataset, 68.8\% on the Yahoo dataset and 21.2\% on the Microsoft dataset compared to the best results achieved by the baseline solutions. Table \ref{table:result_training} shows the comparison with those unsupervised models which need to be trained on the first half of the dataset (labels are excluded). As shown in Table \ref{table:result_training}, the $F_1$-score is improved by 48.0\% on the KPI dataset, 92.9\% on the Yahoo dataset and 57.0\% on the Microsoft dataset over the best state-of-the-art results.
Moreover, SR is the most efficient method, as indicated by the total CPU execution time in Tables \ref{table:result} and \ref{table:result_training}. SR-CNN achieves better accuracy with a reasonable increase in latency. For the generality comparison, we conduct experiments on the second half of the Yahoo dataset, which is classified into three classes manually. The $F_{1}$-score on each class of the Yahoo dataset is reported separately in Table \ref{table:generality}. SR and SR-CNN achieve outstanding results on various patterns of time-series. SR is the most stable method across the three classes, and SR-CNN also demonstrates good generalization capability.
\subsection{SR+DNN}
\label{section:supervised}
\begin{table*}[t]
\centering
\renewcommand\arraystretch{1.2}
\caption{Features used in the supervised DNN model}
\label{features}
\begin{threeparttable}
\begin{tabularx}{1\textwidth}{|l|X|} \hline
Feature & Description \\ \hline
Transformations & Transformations to the value of each data point. We use logarithm as our transformation function and leverage the result value as a feature. \\ \hline
Statistics & We apply sliding windows to the time-series and treat the statistics calculated in each sliding window as features. The statistics we use include the mean, exponentially weighted mean, min, max, standard deviation, and the quantity of the data point values within a sliding window. We use multiple sliding window sizes to generate different features; the sizes are [10, 50, 100, 200, 500, 1440]. \\ \hline
Ratios & The ratios of current point value against other statistics or transformations\\ \hline
Differences & The differences of current point value against other statistics or transformations \\ \hline
\end{tabularx}
\end{threeparttable}
\end{table*}
From the previous experiments, we can see that the SR model shows convincing results in the unsupervised anomaly detection scenario. However, when labels of anomalies are available, we can obtain more satisfactory results, as illustrated in previous works~\cite{liu2015opprentice}. Thus, we would like to know whether our methodology contributes to the supervised scenario as well. Concretely, we treat the intermediate results of SR as an additional feature in a supervised anomaly detection model. We conduct the experiment on the KPI dataset as it has been extensively studied in the AIOPS data competition~\cite{kpicomptition}.
We adopt the DNN-based supervised model~\cite{dnn-champion} which is the champion of the AIOPS data competition. The DNN architecture is composed of an input layer, an output layer and two hidden layers (shown in Figure \ref{fig:supervise}). We add a dropout layer after the second hidden layer and set the dropout ratio to 0.5. In addition, we apply $L_1=L_2=0.0001$ regularization to the weights of all layers. Since the output of the model indicates the likelihood of a data point being an anomaly, we search for the optimal threshold on the training set.
Each data point is associated with a feature vector, which consists of different types of features including transformations, statistics, ratios, and differences (Table \ref{features}). We follow the official train/test split of the dataset, whose statistics are shown in Table \ref{train and test}. We can see that the proportion of positive and negative samples is extremely imbalanced. Thus, we train our model by over-sampling anomalies to keep the positive/negative proportion at 1:2.
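The over-sampling step can be sketched as follows; sampling the positives with replacement is our assumption:

```python
import numpy as np

def oversample(X, y, ratio=0.5, rng=None):
    """Duplicate positive (anomaly) rows until positives/negatives reaches `ratio` (1:2 here)."""
    rng = rng if rng is not None else np.random.default_rng()
    X, y = np.asarray(X), np.asarray(y)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    target_pos = int(len(neg) * ratio)
    # draw extra positive indices with replacement (assumption)
    extra = rng.choice(pos, size=max(0, target_pos - len(pos)), replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]
```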
Experimental results are shown in Table \ref{Supervised results on KPI dataset}. We can see that the SR feature brings a 1.6\% improvement in $F_1$-score to the vanilla DNN model. Notably, the SR-powered DNN model establishes a new state-of-the-art on the KPI dataset. To the best of our knowledge, it is the best result reported on the KPI dataset as of the date of paper submission. Moreover, we draw the P-R curves of the SR+DNN and DNN methods. As illustrated in Figure \ref{fig:pr_curve}, SR+DNN outperforms the vanilla DNN consistently across various thresholds.
\begin{table}[t]
\centering
\renewcommand\arraystretch{1.2}
\caption{Train and test split of KPI dataset}
\label{train and test}
\begin{threeparttable}
\begin{tabular}{l|c|c} \hline
\textbf{DataSet} & \textbf{Total points} & \textbf{Anomaly points} \\ \hline
Train & 3004066 & 79554/2.65\% \\ \hline
Test & 2918847 & 54560/1.87\% \\ \hline
\end{tabular}
\end{threeparttable}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{img/supervised_model.png}
\caption{DNN architecture}
\label{fig:supervise}
\end{figure}
\begin{table}
\centering
\renewcommand\arraystretch{1.2}
\caption{Supervised results on KPI dataset}
\label{Supervised results on KPI dataset}
\begin{threeparttable}
\begin{tabular}{l|c|c|c} \hline
\textbf{Model} & \textbf{$F_{1}$-score} & \textbf{Precision} & \textbf{Recall} \\ \hline
DNN & 0.798& 0.849& 0.753\\ \hline
SR+DNN & \textbf{0.811}& 0.915& 0.728 \\
\hline
\end{tabular}
\end{threeparttable}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth, height=0.25\textwidth]{img/P-R.png}
\caption{P-R curves of SR+DNN and DNN methods}
\label{fig:pr_curve}
\end{figure}
\section{related works}
\subsection{Anomaly detectors}
Previous works can be categorized into statistical, supervised and unsupervised approaches. Over the past years, several models have been proposed in the statistics literature, including hypothesis testing~\cite{rosner1983percentage}, wavelet analysis~\cite{lu2009network}, SVD~\cite{mahimkar2011rapid} and auto-regressive integrated moving average (ARIMA)~\cite{zhang2005network}. The Fast Fourier Transform (FFT)~\cite{van1992computational} is another traditional method for time-series processing. For example,~\cite{rasheed2009fourier} highlighted areas with high-frequency change using FFT and reconfirmed them with a Z-value test. In 2015, Twitter~\cite{twitter-ad} proposed a model to detect anomalies in time-series of both application metrics (e.g., Tweets Per Sec) and system metrics (e.g., CPU utilization). In 2017, SPOT and DSPOT~\cite{siffer2017anomaly} were proposed on the basis of Extreme Value Theory~\cite{de2007extreme}; their thresholds can be selected automatically.
The performance of traditional statistical models is not satisfactory in real applications. Thus, researchers have investigated supervised models to improve anomaly detection accuracy.
Opprentice~\cite{liu2015opprentice} outperformed traditional detectors by using statistical detectors as feature extractors and leveraging a Random Forest classifier~\cite{liaw2002classification} to detect anomalies. Yahoo EGADS~\cite{yahooo2015EGADS} utilized a collection of anomaly detection and forecasting models with an anomaly filtering layer for scalable anomaly detection on time-series data. In 2017, Google leveraged deep learning models to detect anomalies on their own dataset~\cite{anomalyGoogle} and achieved promising results. However, continuous labels cannot be obtained in industrial environments, which makes these supervised approaches insufficient for online applications.
As a result, advanced unsupervised approaches have been studied to tackle the problem in industrial applications. In 2018, \cite{xu2018unsupervised} proposed DONUT, an unsupervised anomaly detection method based on the Variational Auto-Encoder (VAE)~\cite{doersch2016tutorial}. The VAE was leveraged to model the reconstruction probabilities of normal time-series, and points were reported as abnormal if the reconstruction error exceeded a threshold. In addition, LinkedIn developed Luminol \cite{linkedin/luminol} based on \cite{linkedinref}, which segmented time-series into chunks and used the frequency of similar chunks to calculate anomaly scores.
\subsection{Saliency detection approaches}
\label{Saliency detection}
Our work has been inspired by visual saliency detection models. Hou et al.~\cite{hou2007saliency} introduced the Spectral Residual (SR) model for saliency detection and demonstrated impressive performance in their experiments. They assumed that an image can be divided into a redundant part and an innovation part, and that human vision is more sensitive to the innovation part. Under this assumption, the log amplitude spectrum of an image minus the average log amplitude spectrum captures the salient part of the image. Guo et al.~\cite{guo2008spatio} argued that the phase spectrum alone is enough to detect the salient part of an image and simplified the algorithm of \cite{hou2007saliency}. Hou et al.~\cite{hou2012image} also proposed an image signature approach for highlighting sparse salient regions, with theoretical proof. Although the latter two solutions showed improvement in their publications, we found that Spectral Residual (SR) was more effective in our time-series anomaly detection scenario. Moreover, supervised models based on neural networks have also been used in saliency detection. For instance, Zhao et al.~\cite{zhao2015saliency} tackled salient object detection with a multi-context deep learning framework based on a CNN architecture.
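As a rough illustration of the SR idea, here is a minimal 1-D sketch (the window size and regularization constant are illustrative choices, not taken from the cited papers): the spectral residual is the log amplitude spectrum minus its local average, recombined with the original phase before inverting the transform.

```python
import numpy as np

def spectral_residual(x, q=3):
    """Saliency map of a 1-D series via a minimal Spectral Residual sketch.

    q is an illustrative window size for the local average of the log
    amplitude spectrum; eps avoids log(0).
    """
    eps = 1e-8
    fft = np.fft.fft(x)
    log_amp = np.log(np.abs(fft) + eps)
    # local average of the log amplitude spectrum
    avg_log_amp = np.convolve(log_amp, np.ones(q) / q, mode="same")
    residual = log_amp - avg_log_amp
    # keep the original phase, replace the amplitude by exp(residual)
    saliency = np.abs(np.fft.ifft(np.exp(residual + 1j * np.angle(fft))))
    return saliency
```

On a smooth series with an injected spike, the saliency map is expected to stand out around the spike, which is the property the anomaly detector exploits.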
\section{Conclusion \& Future Work}
Time-series anomaly detection is a critical module to ensure the quality of online services. An efficient, general and accurate anomaly detection system is indispensable in real applications. In this paper, we have introduced a time-series anomaly detection service at Microsoft. The service has been used by more than 200 teams within Microsoft, including Bing, Office and Azure. At peak, anomalies are detected from 4 million time-series per minute in production. Moreover, we are the first to apply the Spectral Residual (SR) model to the time-series anomaly detection task, and we combine the SR and CNN models to achieve outstanding performance. In the future, we plan to ensemble state-of-the-art methods to provide a more robust anomaly detection service to our customers. Besides internal serving, our time-series anomaly detection service will shortly be published on Microsoft Azure as part of Cognitive Services\footnote{https://azure.microsoft.com/en-us/services/cognitive-services/} for external customers.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The present paper is concerned with nonparametric regression by means of a reproducing kernel Hilbert space (RKHS) associated with a reproducing kernel \cite{aronszajn1950theory, wahba1990spline, scholkopf2001learning, gu2013smoothing}. There is a large body of literature on the application of kernel machines in many areas of science and engineering \cite{scholkopf2001learning, shawe2004kernel, zhang2007local}, a review of which is beyond the scope of this paper.
A family of linear estimators called \textit{spectral filter estimators} \cite{engl1996regularization, Yao2007, bauer2007regularization, baldassarre2012multi} can be seen as particular instances of iterative learning algorithms. This family includes two famous examples: gradient descent and iterative (Tikhonov) ridge regression. In several papers, it was observed empirically and proved theoretically that these two algorithms are closely related \cite{friedman2004gradient, Yao2007, raskutti2014early, 2015arXiv151005684A, 2018arXiv181010082A}. For example, \cite{2018arXiv181010082A} showed that in the linear regression model, under the calibration $t = 1 / \lambda$, where $t$ is the time parameter in gradient descent and $\lambda$ the tuning parameter in ridge regression, the risk error of gradient descent cannot be much higher than that of ridge regression. This gives some intuition for why implicit regularization works.
\textit{Early stopping rule} (ESR) is a form of regularization that chooses when to stop an iterative algorithm according to some design criterion. Its main idea is to lower the computational complexity of an iterative algorithm while preserving its statistical optimality. This approach is quite old and was initially developed for Landweber iterations to solve ill-posed matrix problems in the 1970s \cite{wahba1987three, engl1996regularization}. The next wave of interest in this topic came in the 1990s, when early stopping was applied to learning neural network parameters with (stochastic) gradient descent \cite{prechelt1998early, caruana2001overfitting}. For instance, \cite{prechelt1998early} suggested heuristics that rely on simultaneously monitoring the training and validation errors to stop the learning process, and reported consistent simulation findings. Nevertheless, until the 2000s, there was a lack of theoretical understanding of this phenomenon. Recent papers provided some insights into the connection between early stopping and boosting methods \cite{buhlmann2003boosting, zhang2005boosting, bartlett2007adaboost, wei2017early}, gradient descent, and Tikhonov regularization in a reproducing kernel Hilbert space (RKHS) \cite{Yao2007, bauer2007regularization, raskutti2014early}. For instance, \cite{buhlmann2003boosting} established the first optimal in-sample convergence rate of $L^2$-boosting with early stopping. Raskutti et al. \cite{raskutti2014early} provided a stopping rule that achieves the minimax-optimal rate for kernelized gradient descent and ridge regression over different smoothness classes. This work established an important connection between early stopping and the localized Rademacher complexities \cite{bartlett2005local, koltchinskii2006local, wainwright2019high}, which characterize the size of the explored function space.
The main drawback of that result is that one needs to know the RKHS-norm of the regression function, or a tight upper bound on it, in order to apply the early stopping rule in practice. Besides that, the rule is design-dependent, which limits its practical application as well. In subsequent work, \cite{wei2017early} showed how to control early stopping optimality via the localized Gaussian complexities in RKHS for different boosting algorithms ($L^2$-boosting, LogitBoost, and AdaBoost). Another theoretical result for a non-data-driven ESR was established by \cite{blanchard2016convergence}, where the authors proved a minimax-optimal (in the ${L}_2(\mathbb{P}_X)$ out-of-sample norm) stopping rule for conjugate gradient descent in the nonparametric regression setting. \cite{2015arXiv151005684A} proposed a different approach, focusing on both time and memory computational savings by combining early stopping with the Nystr\"{o}m subsampling technique.
Some stopping rules that could (potentially) be applied in practice were provided by \cite{blanchard2016optimal, blanchard2018early} and \cite{stankewitz2019smoothed}, and were based on the so-called \textit{minimum discrepancy principle} \cite{engl1996regularization, hansen2010discrete, blanchard2012discrepancy, blanchard2016convergence}. This principle consists of monitoring the empirical risk and determining the first iteration at which a given learning algorithm starts to fit the noise. In the papers mentioned, the authors considered spectral filter estimators such as gradient descent, Tikhonov (ridge) regularization, and spectral cut-off regression for the linear Gaussian sequence model, and derived several oracle-type inequalities for the proposed ESR. The main deficiency of the works \cite{blanchard2016optimal, blanchard2018early, stankewitz2019smoothed} is that the authors dealt only with the linear Gaussian sequence model, and the minimax optimality result was restricted to the spectral cut-off estimator. It is worth mentioning that \cite{stankewitz2019smoothed} introduced the so-called \textit{polynomial smoothing} strategy to achieve optimality of the minimum discrepancy principle ESR over Sobolev balls for the spectral cut-off estimator. More recently, \cite{CelWah2020} studied a minimum discrepancy principle stopping rule and a modified (also called smoothed) version of it, providing the range of values of the regression function regularity for which these stopping rules are optimal for different spectral filter estimators in RKHS.
\textbf{Contribution.} Hence, to the best of our knowledge, there is no \textit{fully data-driven} stopping rule for gradient descent or ridge regression in RKHS that does not use a validation set, does not depend on the parameters of the model such as the RKHS-norm of the regression function, and is provably statistically optimal. In our paper, we combine techniques from \cite{raskutti2014early}, \cite{blanchard2016optimal}, and \cite{stankewitz2019smoothed} to construct such an ESR. Our analysis is based on the bias and variance trade-off of an estimator, and we aim to catch the iteration at which they intersect by means of the \textit{minimum discrepancy principle} \cite{blanchard2012discrepancy, blanchard2016optimal, CelWah2020} and the \textit{localized Rademacher complexities} \cite{mendelson2002geometric, bartlett2005local, koltchinskii2006local, wainwright2019high}. In particular, for kernels with infinite rank, we propose to use a special smoothing technique \cite{blanchard2012discrepancy, stankewitz2019smoothed} for the empirical risk in order to reduce its variance. Further, we introduce the new notions of \textit{smoothed empirical Rademacher complexity} and \textit{smoothed critical radius} to achieve minimax optimality bounds for the functional estimator based on the proposed rule; these are obtained by solving the associated fixed-point equation. This implies that the bounds in our analysis cannot be improved (up to numeric constants). It is important to note that in the present paper, we establish an important connection between early stopping and a smoothed version of the \textit{statistical dimension} of the $n$-dimensional kernel matrix, introduced by \cite{yang2017randomized} for randomized projections in kernel ridge regression (see Section \ref{optimality_section} for more details). We also show how to estimate the variance $\sigma^2$ of the model, specifically for the class of polynomial eigenvalue decay kernels.
In addition, we provide experimental results on artificial data indicating the consistent performance of the proposed rules.
\textbf{Outline of the paper.} The organization of the paper is as follows. In Section \ref{sec:2}, we introduce the background on nonparametric regression and reproducing kernel Hilbert spaces, and describe the updates of the two spectral filter iterative algorithms that will be studied: gradient descent and (iterative) kernel ridge regression. In Section \ref{sec:3}, we explain how to compute our first early stopping rule for finite-rank kernels and provide an oracle-type inequality (Theorem \ref{th:1}) and an upper bound for the risk error of this stopping rule with fixed covariates (Corollary \ref{corollary_empirical_norm}). After that, we present a similar upper bound for the risk error with random covariates (Theorem \ref{th:2}) that is proved to be minimax-rate optimal. By contrast, Section \ref{sec:4} is devoted to the development of a new stopping rule for infinite-rank kernels based on the \textit{polynomial smoothing} \cite{blanchard2012discrepancy, stankewitz2019smoothed} strategy. There, Theorem \ref{th:3} establishes, under quite general assumptions on the eigenvalues of the kernel matrix, a high-probability upper bound for the performance of this stopping rule measured in the $L_2(\mathbb{P}_n)$ in-sample norm. In particular, this upper bound leads to minimax optimality over Sobolev smoothness classes. In Section \ref{sec:5}, we compare our stopping rules to other rules, such as methods using hold-out data and $V$-fold cross-validation. After that, we propose a strategy for estimating the variance $\sigma^2$ of the regression model. Section \ref{sec:6} summarizes the content of the paper and describes some perspectives. Supplementary and more technical proofs are deferred to the Appendix.
\section{Nonparametric regression and reproducing kernel framework} \label{sec:2}
\subsection{Probabilistic model and notation}
The context of the present work is that of nonparametric regression, where an i.i.d. sample $\{(x_i, y_i), \ i=1, \ldots, n \}$ of cardinality $n$ is given, with $x_i \in \mathcal{X} \ (\textnormal{feature space})$ and $\ y_i \in \mathbb{R}$.
The goal is to estimate the regression function $f^*: \mathcal{X} \to \mathbb{R}$ from the model
\begin{equation}\label{main}
y_i = f^*(x_i) + \overline{\varepsilon}_i, \qquad i = 1, \ldots, n,
\end{equation}
where the error variables $\overline{\varepsilon}_i$ are i.i.d. zero-mean Gaussian random variables $\mathcal{N}(0, \sigma^2)$, with $\sigma > 0$. In all that follows (except for Section \ref{sec:5}, where results of empirical experiments are reported), the value of $\sigma^2$ is assumed to be known, as in \cite{raskutti2014early} and \cite{wei2017early}.
Along the paper, calculations are mainly derived in the \emph{fixed-design} context, where the $\{ x_i \}_{i=1}^n$ are assumed to be fixed, and only the error variables $\{ \overline{\varepsilon}_i \}_{i=1}^n$ are random.
In this context, the performance of any estimator $\widehat{f}$ of the regression function $f^*$ is measured in terms of the so-called \emph{empirical norm}, that is, the $L_2(\mathbb{P}_n)$-norm defined by
\begin{equation*}
\lVert \widehat{f} - f^* \rVert_n^2 \coloneqq \frac{1}{n}\sum_{i=1}^n \Big[ \widehat{f}(x_i) - f^*(x_i) \Big]^2 ,
\end{equation*}
where $\lVert h \rVert_n \coloneqq \sqrt{ 1/n \sum_{i=1}^n h(x_i)^{2} }$ for any bounded function $h$ over $\mathcal{X}$, and $\langle \cdot, \cdot \rangle_n$ denotes the related inner-product defined by $\langle h_1, h_2 \rangle_n \coloneqq 1/n \sum_{i=1}^n h_1(x_i) h_2(x_i)$ for any functions $h_1$ and $h_2$ bounded over $\mathcal{X}$.
In this context, $\mathbb{P}_{\varepsilon}$ and $\mathbb{E}_{\varepsilon}$ denote the probability and expectation, respectively, with respect to the $\{ \overline{\varepsilon}_i\}_{i=1}^n$.
By contrast, Section~\ref{sec.random.design} discusses some extensions of the previous results to the \emph{random design} context, where both the covariates $\{ x_i \}_{i=1}^n$ and the responses $\{ y_i \}_{i=1}^n$ are random variables.
In this random design context, the performance of an estimator $\widehat{f}$ of $f^*$ is measured in terms of the $L_2(\mathbb{P}_X)$-norm defined by
\begin{equation*}
\lVert \widehat{f} - f^* \rVert_2^2 \coloneqq \mathbb{E}_{X} \Big[ (\widehat{f}(X) - f^*(X))^2 \Big],
\end{equation*}
where $\mathbb{P}_X$ denotes the probability distribution of the $\{ x_i \}_{i=1}^n$.
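As a toy numerical illustration of the two norms (the estimator, the regression function, and the choice $\mathbb{P}_X$ uniform on $[0,1]$ below are all hypothetical), the empirical norm averages the squared discrepancy over the fixed design, while the $L_2(\mathbb{P}_X)$ norm can be approximated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical estimator and regression function, for illustration only
f_hat = lambda x: np.sin(2 * np.pi * x)
f_star = lambda x: np.sin(2 * np.pi * x) + 0.1 * x

# empirical (in-sample) L2(P_n) norm over a fixed design of size 50
x_fixed = rng.uniform(0, 1, 50)
emp_norm2 = np.mean((f_hat(x_fixed) - f_star(x_fixed)) ** 2)

# L2(P_X) (out-of-sample) norm, approximated by Monte Carlo with P_X = U[0, 1]
x_mc = rng.uniform(0, 1, 100_000)
pop_norm2 = np.mean((f_hat(x_mc) - f_star(x_mc)) ** 2)
```

Here the exact population value is $\int_0^1 (0.1x)^2 \, dx = 0.01/3$, which the Monte Carlo average recovers up to sampling error.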
In what follows,
$\mathbb{P}$ and $\mathbb{E}$, respectively, stand for the probability and expectation with respect to the couples
$\{ (x_i,y_i) \}_{i=1}^n$.
\paragraph{Notation.}
Throughout the paper, $\lVert \cdot \rVert$ and $\langle \cdot, \cdot \rangle$ are the usual Euclidean norm and inner product in $\mathbb{R}^n$.
We shall write $a_n \lesssim b_n$ whenever $a_n \leq C b_n$ for some numeric constant $C > 0$ and all $n \geq 1$, and $a_n \gtrsim b_n$ whenever $a_n \geq C b_n$ for some numeric constant $C > 0$ and all $n \geq 1$. Similarly, $a_n \asymp b_n$ means that both $a_n \lesssim b_n$ and $a_n \gtrsim b_n$ hold. $\left[ M \right] \equiv \{1, \ldots, M \}$ for any $M \in \mathbb{N}$. For $a \geq 0$, we denote by $\floor*{a}$ the largest natural number smaller than or equal to $a$, and by $\ceil*{a}$ the smallest natural number greater than or equal to $a$. Throughout the paper, $c, c_1, C, \widetilde{c}, \widetilde{C}, \ldots$ denote numeric constants that do not depend on the parameters considered; their values may change from line to line.
\subsection{Statistical model and assumptions}
\subsubsection{Reproducing Kernel Hilbert Space (RKHS)}
Let us start by introducing a reproducing kernel Hilbert space (RKHS) denoted by $\mathcal{H}$ \cite{aronszajn1950theory, berlinet2011reproducing}.
Such a RKHS $\mathcal{H}$ is a class of functions associated with a \emph{reproducing kernel} $\mathbb{K}: \mathcal{X}^2 \to \mathbb{R}$ and endowed with an inner-product denoted by $\langle \cdot,\cdot \rangle_{\mathcal{H}}$, and satisfying $\langle \mathbb{K}(\cdot, x), \mathbb{K}(\cdot, y) \rangle_{\mathcal{H}} = \mathbb{K}(x,y)$ for all $x,y\in\mathcal{X}$.
Each function within $\mathcal{H}$ admits a representation as an element of $L_2(\mathbb{P}_{X})$, which justifies the slight abuse when writing $\mathcal{H} \subset L_2(\mathbb{P}_{X})$ (see \cite{cucker2002mathematical} and \cite[Assumption~3]{CelWah2020}).
Assuming the RKHS $\mathcal{H}$ is separable, Mercer's theorem \cite{scholkopf2001learning} guarantees that the kernel can be expanded as
$$\mathbb{K}(x, x^\prime) = \sum_{k=1}^{\infty} \mu_k \phi_k(x) \phi_k(x^\prime),\quad \forall x,x^\prime \in\mathcal{X},$$
where $\mu_1 \geq \mu_2 \geq ... \geq 0$ and $\{ \phi_k \}_{k=1}^{\infty}$ are, respectively, the eigenvalues and corresponding eigenfunctions of the kernel integral operator $T_k$, which is given by
\begin{align}\label{kernel.integral.operator}
T_k(f)(x) = \int_{\mathcal{X}} \mathbb{K}(x,u) f(u) d\mathbb{P}_X(u),\quad \forall f\in\mathcal{H}, \ x \in \mathcal{X}.
\end{align}
It is then known that the family $\{ \phi_k \}_{k=1}^{\infty}$ is an orthonormal basis of $L_2(\mathbb{P}_{X})$, while $\{ \sqrt{\mu_k} \phi_k \}_{k=1}^{\infty}$ is an orthonormal basis of $\mathcal{H}$.
Then, any function $f \in \mathcal{H} \subset L_2(\mathbb{P}_{X})$ can be expanded as
\begin{align*}
f = \sum_{k=1}^{\infty} \sqrt{\mu_k} \theta_k \phi_k ,
\end{align*}
where the coefficients $\{ \theta_k \}_{k=1}^{\infty}$ are given by
\begin{equation} \label{coefficients}
\theta_k = \langle f, \sqrt{\mu_k}\phi_k \rangle_{\mathcal{H}} = \frac{1}{\sqrt{\mu_k}} \langle f, \phi_k \rangle_{L_2(\mathbb{P}_X)} = \int_{\mathcal{X}} \frac{ f(x) \phi_k(x)}{\sqrt{\mu_k}} d \mathbb{P}_{X}(x).
\end{equation}
Therefore, any two functions $f,g \in \mathcal{H}$ can be represented by respective sequences $\{ a_k \}_{k=1}^{\infty}, \{ b_k \}_{k=1}^{\infty} \in \ell_2(\mathbb{N})$ such that
\begin{equation*}
f = \sum_{k=1}^{+\infty} a_k \phi_k, \quad \mbox{and} \quad g = \sum_{k=1}^{+\infty} b_k \phi_k ,
\end{equation*}
with the inner-product in the Hilbert space $\mathcal{H}$ given by
\begin{equation*}
\langle f, g \rangle_{\mathcal{H}} = \sum_{k=1}^{\infty} \frac{a_k b_k}{\mu_k} .
\end{equation*}
This leads to the following representation of $\mathcal{H}$ as an ellipsoid
\begin{align*}
\mathcal{H} = \left\{ f = \sum_{k=1}^{+\infty} a_k \phi_k,\quad \sum_{k=1}^{+\infty} a_k^2<+\infty, \mbox{ and } \sum_{k=1}^{+\infty} \frac{a_k^2}{\mu_k}<+\infty \right\} .
\end{align*}
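The expansions above can be checked numerically on a truncated Mercer series. The sketch below uses hypothetical eigenpairs ($\phi_k(x) = \sqrt{2}\sin(k\pi x)$ with $\mu_k = (k\pi)^{-2}$, chosen only for illustration) and verifies the reproducing property $\langle f, \mathbb{K}(\cdot, x_0) \rangle_{\mathcal{H}} = f(x_0)$ at a test point:

```python
import numpy as np

# Truncated Mercer expansion with hypothetical eigenpairs (illustration only):
# phi_k(x) = sqrt(2) sin(k pi x), mu_k = (k pi)^{-2}, truncated at M terms.
M = 50
ks = np.arange(1, M + 1)
mu = 1.0 / (ks * np.pi) ** 2

def phi(x):
    """Eigenfunction values phi_k(x) for k = 1..M, shape (M,)."""
    return np.sqrt(2.0) * np.sin(ks * np.pi * x)

def kernel(x, y):
    # K(x, y) = sum_k mu_k phi_k(x) phi_k(y)
    return np.sum(mu * phi(x) * phi(y))

def rkhs_inner(a, b):
    # <f, g>_H = sum_k a_k b_k / mu_k  for f = sum a_k phi_k, g = sum b_k phi_k
    return np.sum(a * b / mu)

a = mu.copy()        # coefficients of f, decaying fast enough for f to lie in H
x0 = 0.3
c = mu * phi(x0)     # coefficients of the representer K(., x0)
f_x0 = np.sum(a * phi(x0))
```

Within the truncation, the identities $\langle f, \mathbb{K}(\cdot,x_0) \rangle_{\mathcal{H}} = f(x_0)$ and $\langle \mathbb{K}(\cdot,x_0), \mathbb{K}(\cdot,x_0) \rangle_{\mathcal{H}} = \mathbb{K}(x_0,x_0)$ hold exactly, term by term.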
\subsubsection{Main assumptions}
From the initial model given by Eq. \eqref{main}, we make the following assumption.
\begin{assumption}[Statistical model] \label{a1}
Let $\mathbb{K}(\cdot, \cdot)$ denote a reproducing kernel as defined above, and $\mathcal{H}$ is the induced separable RKHS. Then, there exists a constant $R > 0$ such that the $n$-sample $(x_1,y_1), \ldots,(x_n,y_n) \in\mathcal{X}^n \times \mathbb{R}^n$ satisfies the statistical model
\begin{align}\label{assum.a1}
y_i = f^*(x_i) + \overline{\varepsilon}_i, \quad \mbox{with}\quad f^* \in \mathbb{B}_{\mathcal{H}}(R) = \{ f \in \mathcal{H}: \lVert f \rVert_{\mathcal{H}} \leq R \},
\end{align}
where the $\{ \overline{\varepsilon}_i \}_{i=1}^n$ are i.i.d. Gaussian random variables with $\mathbb{E}[\overline{\varepsilon}_i \mid x_i] = 0$ and $\mathbb{V}[\overline{\varepsilon}_i \mid x_i] = \sigma^2$.
\end{assumption}
The model from Assumption~\ref{a1} can be vectorized as
\begin{equation}\label{vector.model}
Y = [y_1, ..., y_n]^\top = F^* + \overline{\varepsilon} \in \mathbb{R}^n,
\end{equation}
where $F^* = [f^*(x_1), \ldots, f^*(x_n)]^\top$ and $\overline{\varepsilon} = [\overline{\varepsilon}_1, \ldots, \overline{\varepsilon}_n]^\top$, which turns to be useful all along the paper.
Let us emphasize that Assumption~\ref{a1} encapsulates a (mild) smoothness assumption on $f^*$, encoded by the specification of the reproducing kernel $\mathbb{K}(\cdot, \cdot)$. For instance, this affects the convergence rates one can achieve \cite{raskutti2012minimax}. More precisely, in terms of the kernel operator $T_k$ \eqref{kernel.integral.operator}, which is self-adjoint and trace-class, the smoothness of $f^*$ can be quantified by means of a so-called \textit{source condition} expressed as
\begin{equation} \label{sc}
f^* = T_k^s u \ \ \ \textnormal{ with } \ \ \ u \in L_2(\mathbb{P}_X),\ \lVert u \rVert_{2} \leq \rho,
\end{equation}
where $s > 0$ and $\rho > 0$ are constants.
For instance, assuming $s \geq \frac{1}{2}$ is equivalent to requiring $f^* \in \mathcal{H}$. See also \cite[Assumption 3]{CelWah2020} for a deeper discussion about the source condition.
Examples of celebrated reproducing kernels that are used in practice include the Gaussian RBF kernel \cite[Section~3.2]{arlot2019kernel}, the Sobolev kernel \cite{raskutti2014early}, polynomial kernels of degree $d$ \cite{yang2017randomized}, \ldots For more examples, see \cite{wahba1990spline, scholkopf2001learning,gartner2008kernels}.
In the present paper, we make a boundedness assumption on the reproducing kernel $\mathbb{K}(\cdot, \cdot)$.
\begin{assumption} \label{a2}
Let us assume that the reproducing kernel $\mathbb{K}(\cdot, \cdot)$ is uniformly bounded, meaning that there exists a constant $B > 0$ such that $$\underset{x \in \mathcal{X}}{\sup} \mathbb{K}(x, x) = \underset{x \in \mathcal{X}}{\sup} \lVert \mathbb{K}(\cdot, x) \rVert_{\mathcal{H}}^2 \leq B .$$
Moreover in what follows, we assume that $B=1$ without loss of generality.
\end{assumption}
Assumption \ref{a2} holds for many kernels. On the one hand, it is fulfilled for a bounded kernel (e.g., the Gaussian or Laplace kernel) on a possibly unbounded domain $\mathcal{X}$. On the other hand, for an unbounded kernel such as the polynomial or Sobolev kernel \cite{scholkopf2001learning}, it amounts to assuming that the domain $\mathcal{X}$ is bounded.
Let us also mention that Assumptions~\ref{a1} and~\ref{a2} (combined with the reproducing property) imply that $f^*$ is uniformly bounded since
\begin{equation} \label{h_norm_infty_norm}
\lVert f^* \rVert_{\infty} = \underset{x \in \mathcal{X}}{\sup} \left| \langle f^*, \mathbb{K}(\cdot, x) \rangle_{\mathcal{H}} \right| \leq \lVert f^* \rVert_{\mathcal{H}} \underset{x \in \mathcal{X}}{\sup}\lVert \mathbb{K}(\cdot, x) \rVert_{\mathcal{H}} \leq R.
\end{equation}
\medskip
Considering now the Gram matrix $K = \{\mathbb{K}(x_i, x_j)\}_{1\leq i,j\leq n}$, the related \emph{normalized Gram matrix} $K_n = \{ \mathbb{K}(x_i, x_j) / n\}_{1\leq i,j \leq n}$ turns out to be symmetric and positive semidefinite. This entails the existence of
the empirical eigenvalues $\widehat{\mu}_1, \ldots, \widehat{\mu}_n$ (respectively, the eigenvectors $\widehat{u}_1, \ldots, \widehat{u}_n$) such that $K_n \widehat{u}_i = \widehat{\mu}_i \cdot \widehat{u}_i $ for all $i \in [n]$. Let us further assume that the rank of $K_n$ satisfies $\mathrm{rk}(K_n) = r \leq n$ with
\begin{equation*}
\widehat{\mu}_1 \geq \widehat{\mu}_2 \geq \ldots \geq \widehat{\mu}_r > \widehat{\mu}_{r+1} = \ldots = \widehat{\mu}_n = 0.
\end{equation*}
Remark that Assumption~\ref{a2} implies $0\leq \max( \widehat{\mu}_1,\mu_1) \leq 1$.
For technical convenience, it turns out to be useful to rephrase the model \eqref{vector.model} using the SVD of the normalized Gram matrix $K_n$. This leads to the new (rotated) model
\begin{equation} \label{rotated_model}
Z_i = \langle \widehat{u}_i, Y \rangle = G_i^* + \varepsilon_i, \quad i = 1, \ldots, n,
\end{equation}
where $G_i^* = \langle \widehat{u}_i, F^* \rangle $, and $\varepsilon_i = \langle \widehat{u}_i, \overline{\varepsilon} \rangle$ is a zero-mean Gaussian random variable with variance $\sigma^2$.
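The rotation above is straightforward to compute. The sketch below (with a hypothetical design, regression function, and Gaussian kernel, none of which are prescribed by the paper) forms $K_n$, extracts its eigenpairs, and rotates the observations; since the change of basis is orthonormal, Euclidean norms are preserved:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 100, 0.5

# hypothetical design, regression function, and kernel (illustration only)
x = np.sort(rng.uniform(0, 1, n))
F_star = np.sin(2 * np.pi * x)
Y = F_star + sigma * rng.standard_normal(n)

K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))  # Gaussian kernel
K_n = K / n                                                    # normalized Gram matrix

mu_hat, U = np.linalg.eigh(K_n)          # eigh returns ascending eigenvalues
mu_hat, U = mu_hat[::-1], U[:, ::-1]     # reorder so that mu_1 >= ... >= mu_n

Z = U.T @ Y            # rotated observations: Z_i = <u_i, Y>
G_star = U.T @ F_star  # rotated signal:       G_i^* = <u_i, F^*>
```

Because the columns of $U$ are orthonormal, the rotated noise coordinates $\varepsilon_i = \langle \widehat{u}_i, \overline{\varepsilon} \rangle$ remain i.i.d. $\mathcal{N}(0, \sigma^2)$, which is exactly what makes the rotated model convenient.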
\subsection{Spectral filter algorithms}
\label{sec.spectral.filters}
Spectral filter algorithms were first introduced for solving ill-posed inverse problems with deterministic noise \cite{engl1996regularization}. One typical example of such an algorithm is gradient descent (also known as $L^2$-boosting \cite{buhlmann2003boosting}). These algorithms were more recently brought to the supervised learning community, for instance, by \cite{article, bauer2007regularization, Yao2007, gerfo2008spectral}.
For estimating the vector $F^*$ from Eq. \eqref{vector.model} in the fixed-design context, such a spectral filter estimator is a linear estimator, which can be expressed as
\begin{align}\label{spectral.estimator.vector}
F^\lambda \coloneqq \left(f^\lambda(x_1), \ldots, f^\lambda(x_n)\right)^\top = g_\lambda(K_n)K_n Y,
\end{align}
where $g_{\lambda}:\ [0,1]\to \mathbb{R}$ is called the \emph{spectral filter function} and is defined as follows.
\begin{definition}[see, e.g., \cite{gerfo2008spectral}]\label{def.spectral.filters}
$\lambda \mapsto g_{\lambda}$ is called an admissible spectral filter function if it is continuous, non-increasing, and satisfies the following four conditions:
\begin{enumerate}
\item There exists $\overline{B} > 0$ such that $\underset{0 < \xi \leq 1}{\sup}|g_{\lambda}(\xi)| \leq \frac{\overline{B}}{\lambda}, \quad \forall \lambda \in [0, \infty)$.
\item For all $\xi \in (0, 1]$, $\underset{\lambda \to 0}{\lim}[\xi g_{\lambda}(\xi)] = 1$.
\item There exists $\overline{D} > 0$ such that $\underset{0 < \xi \leq 1}{\sup} |\xi g_{\lambda}(\xi)| \leq \overline{D},\quad \forall \lambda \in [0, \infty)$.
\item There exists $\Bar{\nu} > 0$ called the \textit{qualification} of $g_{\lambda}$ and a constant $C_{\nu} > 0$ independent of $\lambda$ such that
\begin{equation} \label{qualification}
\underset{0 < \xi \leq 1}{\sup} |1 - \xi g_{\lambda}(\xi)|\xi^{\nu} \leq C_{\nu}\lambda^{\nu}, \ \ \forall \ 0 < \nu \leq \bar{\nu}.
\end{equation}
%
\end{enumerate}
\end{definition}
The choice $g_{\lambda}(\xi) = \frac{1}{\xi + \lambda}$, which corresponds to the kernel ridge estimator with regularization parameter $\lambda>0$, is an admissible spectral filter function with $\overline{B} = \overline{D} = 1$, for which the qualification inequality \eqref{qualification} holds with $C_{\nu} = 1$ for $0 < \nu \leq 1 = \Bar{\nu}$ (see \cite{blanchard2016optimal,CelWah2020} for other possible choices).
From the model expressed in the empirical eigenvectors basis \eqref{rotated_model}, the resulting spectral filter estimator \eqref{spectral.estimator.vector} can be expressed as
\begin{equation} \label{iterations}
G^{\lambda_t}_i = \langle \widehat{u}_i, F^{\lambda_t} \rangle = \gamma_i^{(t)} Z_i, \quad\forall i=1,\ldots,n,
\end{equation}
where $t \mapsto \lambda_t> 0$ is a decreasing function mapping $t$ to a regularization parameter value at time $t$, and $t \mapsto \gamma_i^{(t)}$ is defined by
\begin{equation*}
\gamma_i^{(t)} = \widehat{\mu}_i g_{\lambda_t}(\widehat{\mu}_i), \quad \forall i = 1, \ldots, n.
\end{equation*}
From Definition~\ref{def.spectral.filters}, it can be proved that $\gamma_i^{(t)}$ is a non-decreasing function of $t$ with $\gamma_i^{(0)} = 0$ and $\underset{t \to \infty}{\lim} \gamma_i^{(t)} = 1$. Moreover, $\widehat{\mu}_i = 0$ implies $\gamma_i^{(t)} = 0$, as is the case for kernels of finite rank, that is, when $\mathrm{rk}(K_n) = r < n$.
Thanks to the remark above, we introduce the convenient notation $f^t \coloneqq f^{\lambda_t}$ (for functions) and $F^t \coloneqq F^{\lambda_t}$ (for vectors), with a continuous iteration (time) $t > 0$.
In what follows, we introduce an assumption on the function $\gamma_i^{(t)}$ that will play a crucial role in our analysis.
\begin{assumption} \label{additional_assumption_gd_krr}
\begin{equation*}
c \min \{1, \eta t \widehat{\mu}_i \} \leq \gamma_i^{(t)} \leq \min \{1, \eta t \widehat{\mu}_i \}, \quad i = 1, \ldots, n
\end{equation*}
for some positive constants $c \in (0, 1)$ and $\eta > 0$.
\end{assumption}
Let us mention two famous examples of spectral filter estimators that satisfy Assumption \ref{additional_assumption_gd_krr} with $c=1/2$ (see Lemma \ref{gamma_bounds} in Appendix). These examples will be further studied in the present paper.
\begin{itemize}
\item Gradient descent (GD) with a constant step-size $0<\eta<1/\widehat{\mu}_1$:
\begin{equation}
\gamma_i^{(t)} = 1 - (1 - \eta \widehat{\mu}_i)^t, \quad \forall t>0, \ \forall i=1,\ldots,n.
\end{equation}
%
Note that GD satisfies the qualification condition \eqref{qualification} with arbitrary $\bar{\nu} > 0$ (see, e.g., \cite{bauer2007regularization} for more discussion on the qualification).
The constant step-size $\eta$ can be replaced by any non-increasing sequence $\{\eta^t\}_{t=0}^{+\infty}$ satisfying \cite{raskutti2014early}
\begin{itemize}
\item $(\widehat{\mu}_1)^{-1} \geq \eta^{t} \geq \eta^{t+1}\geq \dots$, for $t = 0, 1, \ldots$,
\item $\sum_{s = 0}^{t - 1} \eta^s \to +\infty$ as $t \to + \infty$.
\end{itemize}
\item Kernel ridge regression (KRR) with the regularization parameter $\lambda_t = 1/(\eta t)$ with $\eta>0$:
\begin{equation}
\gamma_i^{(t)} = \frac{\widehat{\mu}_i}{\widehat{\mu}_i + \lambda_t}, \quad \forall t>0, \ \forall i=1,\ldots,n.
\end{equation}
%
The linear parameterization $\lambda = 1/(\eta t)$ is chosen for theoretical convenience and could be replaced by any alternative choice, such as the exponential parameterization $\lambda = 1 / (e^{\eta t} - 1)$.
\end{itemize}
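As a numerical sanity check, the following sketch (the eigenvalues, step-size, and time below are illustrative choices, not taken from the paper) evaluates both filters and verifies the two-sided bound of Assumption~\ref{additional_assumption_gd_krr} with $c = 1/2$:

```python
import numpy as np

def gamma_gd(t, mu_hat, eta):
    # gradient descent filter: gamma_i^{(t)} = 1 - (1 - eta mu_i)^t
    return 1.0 - (1.0 - eta * mu_hat) ** t

def gamma_krr(t, mu_hat, eta):
    # kernel ridge filter with lambda_t = 1/(eta t): gamma_i^{(t)} = mu_i/(mu_i + lambda_t)
    lam = 1.0 / (eta * t)
    return mu_hat / (mu_hat + lam)

mu_hat = np.array([1.0, 0.3, 0.05, 1e-3])  # hypothetical empirical eigenvalues
eta, t = 0.9, 25.0                          # step-size eta < 1 / mu_1

# two-sided bound of the assumption, with c = 1/2
lower = 0.5 * np.minimum(1.0, eta * t * mu_hat)
upper = np.minimum(1.0, eta * t * mu_hat)
```

For these values, both $\gamma_i^{(t)}$ sequences fall inside $[\tfrac{1}{2}\min\{1, \eta t \widehat{\mu}_i\}, \min\{1, \eta t \widehat{\mu}_i\}]$, as the assumption requires.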
We refer interested readers, for instance, to \cite[Sections 4.1 and 4.4]{raskutti2014early} for the derivation of the $\gamma_i^{(t)}$ expressions.
The expressions in the two examples above have been derived with the initialization $F^0 = [f^0(x_1), \ldots, f^0(x_n)]^\top = [0, \ldots, 0]^\top$, without loss of generality.
\subsection{Reference stopping rule and oracle-type inequality}
From a set of iterations $t \in \mathcal{T} = \{0, \ldots, T \}$ for an iterative learning algorithm (like the spectral filter described in Section~\ref{sec.spectral.filters}), the present goal is to design $\widehat{t} = \widehat{t}(\{x_i, y_i \}_{i=1}^n)$ from the data $\{ x_i , y_i \}_{i=1}^n$ such that the functional estimator $f^{\widehat{t}}$ is as close as possible to the optimal one among $\mathcal{T}.$
Numerous classical model selection procedures for choosing $\widehat{t}$ already exist, e.g. the (generalized) cross validation \cite{wahba1977practical}, AIC and BIC criteria \cite{schwarz1978estimating, akaike1998information}, the unbiased risk estimation \cite{cavalier2002oracle}, or Lepski's balancing principle \cite{mathe2003geometry}.
Their main drawback in the present context is that they require the practitioner to first compute all the estimators $\{f^t, \ t \in \mathcal{T}\}$ and only then choose the optimal one among these candidates, which can be computationally demanding.
By contrast, early stopping is a less time-consuming approach. It is based on observing one estimator at each iteration $t \in \mathcal{T}$ and deciding to stop the learning process according to some criterion. Its aim is to reduce the computational cost induced by this selection procedure while preserving the statistical optimality properties of the output estimator.
\medskip
The prediction error (risk) of an estimator $f^t$ at iteration $t$ is split into a bias and a variance term \cite{raskutti2014early} as
\begin{equation*}
R(t) = \mathbb{E}_{\varepsilon} \lVert f^t - f^* \rVert_n^2 = \lVert \mathbb{E}_{\varepsilon} f^t - f^* \rVert_n^2 + \mathbb{E}_{\varepsilon} \lVert f^t - \mathbb{E}_{\varepsilon}f^t\rVert_n^2 = B^2(t) + V(t) \end{equation*}
with
\begin{equation}
B^2(t) = \frac{1}{n}\sum_{i=1}^n (1 - \gamma_i^{(t)})^2 (G_i^*)^2, \ \ \ \ \ \ V(t) = \frac{\sigma^2}{n}\sum_{i=1}^n (\gamma_i^{(t)})^2.
\end{equation}
From the properties of Definition~\ref{def.spectral.filters}, the bias term is a non-increasing function of $t$ converging to zero, while the variance term is a non-decreasing function of $t$ converging to $\frac{r \sigma^2}{n}$ ($\mathrm{rk}(K_n)=r$).
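As a sanity check of this behavior, the following Python sketch (hypothetical spectrum, signal, and noise level, chosen only for illustration) evaluates $B^2(t)$ and $V(t)$ for the gradient-descent filter and confirms the stated monotonicity and the limit $r\sigma^2/n$ of the variance:

```python
import numpy as np

# Hypothetical rotated signal G*, eigenvalues, and noise level (for illustration).
mu = np.array([1.0, 0.3, 0.05])
G_star = np.array([0.8, 0.4, 0.1])
sigma = 0.2
n = r = len(mu)                      # full-rank toy example
eta = 1.0 / (1.2 * mu[0])

def gamma(t):
    """Gradient-descent filter coefficients at iteration t."""
    return 1.0 - (1.0 - eta * mu) ** t

ts = np.arange(1, 200)
B2 = np.array([np.sum((1 - gamma(t)) ** 2 * G_star ** 2) / n for t in ts])
V = np.array([sigma ** 2 * np.sum(gamma(t) ** 2) / n for t in ts])
risk = B2 + V                        # R(t) = B^2(t) + V(t)
```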
Since the risk cannot be minimized in practice (it depends on the unknown regression function $f^*$), the empirical risk $R_t$, which measures the size of the residuals, is introduced with the notation of Eq.~\eqref{rotated_model}:
\begin{equation} \label{empirical_risk}
R_t
= \frac{1}{n}\sum_{i=1}^n (1 - \gamma_i^{(t)})^2 Z_i^2 = \frac{1}{n}\sum_{i=1}^r (1 - \gamma_i^{(t)})^2 Z_i^2 + \frac{1}{n}\sum_{i=r+1}^n Z_i^2.
\end{equation}
This is a non-increasing function of $t$, which measures how well an estimator $f^t$ fits the data (or equivalently, how much information is still contained within the residuals).
An illustration of the typical behavior of the risk, empirical risk, bias, and variance is displayed by Figure~\ref{fig:bvr}.
The risk achieves its (global) minimum at $t \approx 1000$. Performing additional iterations beyond this point wastes computational resources and worsens the statistical performance, which empirically justifies the need for a data-driven early stopping rule.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\linewidth]{./fig/bias_var_risk.pdf}
\caption{Bias, variance, risk, and empirical risk behavior.}
\label{fig:bvr}
\end{figure}
\medskip
Let us now introduce our ``reference stopping rule''.
This stopping rule balances the bias and variance described above, which is a common strategy for model selection in the nonparametric statistics literature, since it usually yields a minimax-optimal estimator (see, e.g., \cite{tsybakov2008introduction}).
This reference stopping rule is defined as the first time the bias term becomes smaller than or equal to the variance term, that is,
\begin{equation} \label{t_b}
t^{b} = \inf \{ t > 0 \ | \ B^2(t) \leq V(t) \}.
\end{equation}
This is a purely theoretical stopping rule since it strongly depends on unknown quantities.
However, its main interest lies in the way it compares with the global optimum performance, that is, with the oracle performance. This is the purpose of the next lemma.
\begin{lemma} \label{oracle}
Under the monotonicity of the bias and variance terms,
\begin{equation*}
\mathbb{E}_{\varepsilon} \lVert f^{t^b} - f^* \rVert_n^2 \leq 2 \ \underset{t > 0}{\inf} \Big[ \mathbb{E}_{\varepsilon} \lVert f^t - f^* \rVert_n^2 \Big].
\end{equation*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{oracle}]
The proof is quite simple and can be deduced from \cite[\textnormal{p.8}]{blanchard2016optimal}. For any $t > 0$,
\begin{equation*}
B^2(t) + V(t) \geq \min \{ B^2(t^b), V(t^b) \} = \frac{1}{2}\Big[ B^2(t^b) + V(t^b) \Big] = \frac{1}{2}\mathbb{E}_{\varepsilon} \lVert f^{t^b} - f^* \rVert_n^2.
\end{equation*}
To finish the proof, it is sufficient to take $t = \underset{t > 0}{\textnormal{argmin}} \Big[ \mathbb{E}_{\varepsilon} \lVert f^t - f^* \rVert_n^2 \Big].$
\end{proof}
This lemma provides a fundamental result that guarantees the optimality of $t^b$ for any iterative estimator, for which the bias is a non-increasing function of $t$, and the variance is a non-decreasing function of $t$. It also implies that the risk of any spectral filter estimator computed at $t^b$ cannot be higher than two times the risk of the oracle rule. This is the main reason for considering $t^b$ as a reference stopping rule in our analysis.
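The reference rule and the factor-two guarantee of Lemma \ref{oracle} can be illustrated numerically. The sketch below (Python, our own illustration; artificial monotone curves stand in for the unknown $B^2(t)$ and $V(t)$) returns the first $t$ with $B^2(t) \leq V(t)$ and checks the factor-two bound on this toy instance:

```python
import numpy as np

def balancing_rule(B2, V):
    """Reference rule t^b: first t (1-based) with B^2(t) <= V(t)."""
    crossing = np.nonzero(B2 <= V)[0]
    return int(crossing[0]) + 1 if crossing.size else len(B2)

# Artificial monotone curves standing in for the (unknown) bias and variance.
t = np.arange(1, 101)
B2 = 1.0 / t                  # non-increasing, converges to 0
V = 0.01 * t                  # non-decreasing
tb = balancing_rule(B2, V)
risk = B2 + V                 # R(t) = B^2(t) + V(t)
```

On these curves the crossing happens at $t = 10$, where the risk equals its global minimum, so the factor-two bound holds with room to spare.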
It is also worth mentioning that even if we knew $B^2(t)$ for all $t \leq t_1$ for some $t_1 > 0$, the bias could still drop abruptly at some later time $t_2>t_1$. Stopping at $t_1$ could then result in a much worse performance than stopping at time $t_2$, where the bias term is zero. This remark suggests that recovering the oracle performance cannot be achieved in full generality in the present framework, where one has access to a limited number of ``observations'' of the risk curve. This is why the balancing stopping rule $t^b$ serves as a reference: its performance can nevertheless be linked to that of the oracle stopping rule.
\medskip
Our main concern is formulating a data-driven stopping rule (a mapping from the data $\{(x_i, y_i)\}_{i=1}^n$ to a positive time $\widehat{t}$) so that the prediction errors $\mathbb{E}_{\varepsilon}\lVert f^{\widehat{t}} - f^* \rVert_n^2$ or, equivalently, $\mathbb{E}\lVert f^{\widehat{t}} - f^* \rVert_2^2$ are as small as possible.
A classical tool commonly used in model selection for quantifying the performance of a procedure is the oracle-type inequality \cite{cavalier2002oracle, koltchinskii2006local, tsybakov2008introduction, wainwright2019high}. In the fixed design context, an oracle inequality (in expectation) can be formulated as follows
\begin{equation}
\mathbb{E}_{\varepsilon} \lVert f^{\widehat{t}} - f^* \rVert_n^2 \leq C_n \underset{t \in \mathcal{T}}{\inf} \Big[ \mathbb{E}_{\varepsilon} \lVert f^t - f^* \rVert_n^2 \Big] + r_n,
\end{equation}
where the constant $C_n\geq 1$ in the right-hand side can depend on various parameters of the problem (except $f^*$). The main term $\underset{t \in \mathcal{T}}{\inf} \Big[ \mathbb{E}_{\varepsilon} \lVert f^t - f^* \rVert_n^2 \Big]$ is the best possible performance any estimator among $\{ f^t,\ t\in\mathcal{T}\}$ can achieve. Ideally, for the oracle inequality to be meaningful, the last term $r_n$ in the right-hand side should be negligible compared to the oracle performance.
\subsection{Localized empirical Rademacher complexity}
The analysis of the forthcoming early stopping rules involves the use of a model complexity measure known as the \emph{localized empirical Rademacher complexity} \cite{bartlett2005local, koltchinskii2006local, wainwright2019high}.
\begin{definition}
For any given $\epsilon > 0$ and a function class $\mathcal{F}$, consider the localized empirical Rademacher complexity
\begin{equation} \label{def_localized_empirical_rk}
\widehat{\mathcal{R}}_n(\epsilon, \mathcal{F}) = \mathbb{E}_{\mathbf{r}}\left[ \underset{\underset{\lVert f \rVert_n \leq \epsilon R}{f \in \mathcal{F}}}{\textnormal{sup}} \left| \frac{1}{n}\sum_{i=1}^n \mathbf{r}_i f(x_i) \right| \right],
\end{equation}
where $\{ \mathbf{r}_i \}_{i=1}^n$ are i.i.d. Rademacher variables ($\{-1, +1\}$-random variables with equal probability $\frac{1}{2}$).
%
\end{definition}
Usually, the localized empirical Rademacher complexity is defined for $R = 1$, but due to the scaling factor of $\lVert f^* \rVert_{\mathcal{H}}$, one needs to consider the radius $\epsilon R$ within the supremum.
Throughout the analysis, more explicit lower and upper bounds on the above localized empirical Rademacher complexity have to be derived. This is the purpose of introducing the so-called \textit{kernel complexity function} \cite{mendelson2002geometric, mendelson2003performance}, which is proved to be of the same size (up to numeric constants) as the localized empirical Rademacher complexity of $\mathcal{F} = \mathbb{B}_{\mathcal{H}}(R)$, that is,
\begin{equation} \label{empirical_rademacher_complexity_def}
\widehat{\mathcal{R}}_n(\epsilon, \mathcal{H}) = R \Big[ \frac{1}{n}\sum_{j=1}^r \textnormal{min}\{ \epsilon^2, \widehat{\mu}_j \} \Big]^{1/2}.
\end{equation}
It corresponds to a rescaled sum of the empirical eigenvalues truncated at $\epsilon^2$.
For a given RKHS $\mathcal{H}$ and noise level $\sigma$, let us finally define the \textit{empirical critical radius} $\widehat{\epsilon}_n$ as the smallest positive value $\epsilon$ such that
\begin{equation} \label{RK_critical_radius_empirical}
\frac{\widehat{\mathcal{R}}_n(\epsilon, \mathcal{H})}{\epsilon R} \leq \frac{2 \epsilon R}{\sigma}.
\end{equation}
There is an extensive literature on the empirical critical equation and the related empirical critical radius \cite{mendelson2002geometric, bartlett2005local, raskutti2014early}, and providing an exhaustive review of this topic is beyond the scope of the present paper.
Nevertheless, it has been proved that $\widehat{\epsilon}_n$ always exists and is unique.
The constant $2$ in Ineq. \eqref{RK_critical_radius_empirical} is chosen for theoretical convenience only.
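Numerically, $\widehat{\epsilon}_n$ can be found by a simple grid (or bisection) search, since the left-hand side of Ineq. \eqref{RK_critical_radius_empirical} is non-increasing in $\epsilon$ while the right-hand side is increasing. The following Python sketch (our own illustration, with a toy rank-$5$ spectrum) does exactly this and recovers the finite-rank scaling $\widehat{\epsilon}_n^2 \asymp r\sigma^2/(nR^2)$:

```python
import numpy as np

def kernel_complexity(eps, mu, n, R):
    """Kernel complexity function: R * sqrt((1/n) * sum_j min(eps^2, mu_j))."""
    return R * np.sqrt(np.sum(np.minimum(eps ** 2, mu)) / n)

def empirical_critical_radius(mu, n, R, sigma):
    """Smallest eps on a log-grid with complexity/(eps*R) <= 2*eps*R/sigma."""
    for eps in np.logspace(-4, 1, 4000):
        if kernel_complexity(eps, mu, n, R) / (eps * R) <= 2.0 * eps * R / sigma:
            return float(eps)
    raise ValueError("no solution on the grid")

# Rank-5 toy spectrum: the theory predicts eps_n ~ sigma * sqrt(r / n) / (2 R).
mu = np.ones(5)
n, R, sigma = 100, 1.0, 0.5
eps_hat = empirical_critical_radius(mu, n, R, sigma)
```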
\section{Data-driven early stopping rule and minimum discrepancy principle} \label{sec:3}
Let us start by recalling that, by the expression of the empirical risk in Eq.~\eqref{empirical_risk}, the empirical risk is a non-increasing function of $t$ (as illustrated by Fig.~\ref{fig:bvr} as well). This is consistent with the intuition that the amount of available information within the residuals decreases as the number of iterations grows. If there exists an iteration $t$ such that $f^{t} \approx f^*$, then the empirical risk is approximately equal to $\sigma^2$ (the noise level), that is,
\begin{equation} \label{approximation}
\mathbb{E}_{\varepsilon}R_t = \mathbb{E}_{\varepsilon}\Big[ \lVert F^t - Y \rVert_n^2 \Big] \approx \mathbb{E}_{\varepsilon} \Big[ \lVert F^* - Y \rVert_n^2 \Big] = \mathbb{E}_{\varepsilon} \Big[\lVert \varepsilon \rVert_n^2 \Big] = \sigma^2.
\end{equation}
Additional iterations would result in fitting to the noise (overfitting).
Introducing, moreover, the reduced empirical risk $\widetilde{R}_t, \ t > 0,$ and recalling that $r$ denotes the rank of the Gram matrix, we obtain
\begin{equation} \label{reduced_empir_risk_def}
\mathbb{E}_{\varepsilon}R_t = \mathbb{E}_{\varepsilon} \Big[ \frac{1}{n}\sum_{i=1}^n (1 - \gamma_i^{(t)})^2 Z_i^2 \Big] = \mathbb{E}_{\varepsilon} \underbrace{\Big[ \frac{1}{n}\sum_{i=1}^r (1 - \gamma_i^{(t)})^2 Z_i^2 \Big]}_{\coloneqq \widetilde{R}_t} + \frac{n - r}{n}\sigma^2 \overset{(\textnormal{i})}{\approx} \sigma^2,
\end{equation}
where $(\textnormal{i})$ is due to Eq. (\ref{approximation}). This heuristic argument gives rise to a first deterministic stopping rule $t^*$ involving the reduced empirical risk and given by
\begin{equation} \label{t_star}
t^* = \inf \left\{ t > 0 \ | \ \mathbb{E}_{\varepsilon} \widetilde{R}_t \leq \frac{r \sigma^2}{n} \right\}.
\end{equation}
Since $t^*$ is \textit{not achievable} in practice, an estimator of $t^*$ is given by the data-driven stopping rule $\tau$, based on the so-called minimum discrepancy principle (MDP):
\begin{equation}\label{tau}
\tau = \inf \left\{ t > 0 \ | \ \widetilde{R}_t \leq \frac{r \sigma^2}{n}\right\}.
\end{equation}
The existing literature considering the MDP-based stopping rule usually defines $\tau$ by the event $\{ R_t \leq \sigma^2 \}$ \cite{engl1996regularization, hansen2010discrete, blanchard2012discrepancy, blanchard2016convergence, blanchard2016optimal, stankewitz2019smoothed}.
On the one hand, with a full-rank kernel ($r=n$), the reduced empirical risk $\widetilde{R}_t$ is equal to the classical empirical risk, leading then to the same stopping rule.
On the other hand, with a finite-rank kernel ($r\ll n$), using the reduced empirical risk and the event $\{ \widetilde{R}_t \leq \frac{r \sigma^2}{n} \}$ rather than the empirical risk and $\{ R_t \leq \sigma^2 \}$ should lead to a less variable stopping rule. From a practical perspective, the knowledge of the rank of the Gram matrix (which is exploited by the reduced empirical risk, unlike the classical empirical risk) avoids estimating the last $n-r$ components of the vector $G^*$, which are already known to be zero (see Appendix \ref{general_appendix} for more details).
Intuitively, if the empirical risk is close to its expectation, then $\tau$ should be optimal in some sense. Therefore, the main analysis in the paper will concern quantifying how close $\tau$ and $t^*$ are to each other. In practice, it appears that if the model is rather simple, e.g.\ the kernel has finite rank or the variance $\sigma^2$ is low compared to the signal $f^*$, then $\tau$ is close to $t^*$ and performs well. As soon as the model becomes complex, e.g.\ an infinite-rank kernel or a variance $\sigma^2$ that is high compared to the signal $f^*$, $\tau$, as a random variable, has a high variance that should be reduced. Of course, the smoothness of the regression function plays a role as well. This informal statement will be further developed in Section \ref{pdk}.
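A minimal implementation of the rule \eqref{tau} for the gradient-descent filter is sketched below (Python, our own illustration; the data $Z$, the spectrum `mu`, and the threshold follow the notation above, but the numerical values are hypothetical):

```python
import numpy as np

def mdp_stopping_time(Z, mu, sigma, eta, t_max=10_000):
    """Rule (tau): first t whose reduced empirical risk drops below r*sigma^2/n,
    for the gradient-descent filter gamma_i^(t) = 1 - (1 - eta*mu_i)^t."""
    n = len(Z)
    r = int(np.sum(mu > 1e-12))                  # rank of the Gram matrix
    for t in range(1, t_max + 1):
        residual = (1.0 - eta * mu[:r]) ** (2 * t)   # (1 - gamma_i^(t))^2
        if np.sum(residual * Z[:r] ** 2) / n <= r * sigma ** 2 / n:
            return t
    return t_max

# Hypothetical rank-3 example (n = 5 observations).
mu = np.array([1.0, 0.5, 0.2, 0.0, 0.0])
Z = np.array([1.0, 0.8, 0.5, 0.1, 0.05])
tau = mdp_stopping_time(Z, mu, sigma=0.1, eta=1.0 / (1.2 * mu[0]))
```

Only the first $r$ components of $Z$ enter the computation, reflecting the use of the reduced (rather than classical) empirical risk.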
\subsection{Finite-rank kernels}
\subsubsection{Fixed-design framework}
Let us start the discussion of our results with the case of RKHSs of finite-rank kernels with rank $r < n$, for which $\widehat{\mu}_i = 0$ and $\mu_i = 0$ for all $i > r$. Examples of such kernels include the linear kernel $\mathbb{K}(x_1, x_2) = x_1^\top x_2$ and the polynomial kernel of degree $d \in \mathbb{N}$, $\mathbb{K}(x_1, x_2) = (1 + x_1^\top x_2)^d$. It is easy to show that the polynomial kernel has finite rank at most $d + 1$, meaning that the kernel matrix $K_n$ has at most $\min \{d + 1, n \}$ nonzero eigenvalues.
The following theorem applies to any functional sequence $\{ f^t \}_{t=0}^{\infty}$ generated by (\ref{iterations}) and initialized at $f^0 = 0$. The main part of the proof of this result consists of properly upper bounding $\mathbb{E}_{\varepsilon}|\mathbb{E}_{\varepsilon}\widetilde{R}_{t^*} - \widetilde{R}_{t^*} |$ and follows the same lines as Proposition 3.1 in \cite{blanchard2016optimal}.
\begin{theorem}\label{th:1}
Under Assumptions \ref{a1} and \ref{a2}, given the stopping rule (\ref{tau}),
\begin{equation} \label{general_res}
\mathbb{E}_{\varepsilon} \lVert f^{\tau} - f^* \rVert_n^2 \leq 2(1+\theta^{-1}) \mathbb{E}_{\varepsilon} \lVert f^{t^*} - f^* \rVert_n^2 + 2(\sqrt{3} + \theta) \frac{\sqrt{r} \sigma^2}{n}
\end{equation}
for any positive $\theta$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{th:1}]
In this proof, we will use the following inequalities: for any $a, b \geq 0, \ (a - b)^2 \leq |a^2 - b^2|$, and $2ab \leq \theta a^2 + \frac{1}{\theta} b^2$ for any $\theta > 0$.
Let us first prove the subsequent oracle-type inequality for the difference between $f^\tau$ and $f^{t^*}$. Consider
\begin{align*}
\lVert f^{t^*} - f^{\tau} \rVert_n^2 & = \frac{1}{n} \sum_{i=1}^r \Big( \gamma_i^{(t^*)} - \gamma_i^{(\tau)}\Big)^2 Z_i^2 \leq \frac{1}{n} \sum_{i=1}^r |(1 - \gamma_i^{(t^*)})^2 - (1 - \gamma_i^{(\tau)})^2 | Z_i^2 \\
%
& = (\widetilde{R}_{t^*} - \widetilde{R}_{\tau})\mathbb{I}\left\{ \tau \geq t^*\right\} + (\widetilde{R}_{\tau} - \widetilde{R}_{t^*})\mathbb{I}\left\{ \tau < t^*\right\} \\
%
%
& \leq (\widetilde{R}_{t^*} - \mathbb{E}_{\varepsilon}\widetilde{R}_{t^*})\mathbb{I}\left\{ \tau \geq t^*\right\} + (\mathbb{E}_{\varepsilon}\widetilde{R}_{t^*} - \widetilde{R}_{t^*})\mathbb{I}\left\{ \tau < t^* \right\} \\
%
& \leq |\widetilde{R}_{t^*} - \mathbb{E}_{\varepsilon}\widetilde{R}_{t^*} |.
\end{align*}
From the definition of $\widetilde{R}_t$ (\ref{reduced_empir_risk_def}), one notices that
\begin{align*}
| \widetilde{R}_{t^*} - \mathbb{E}_{\varepsilon}\widetilde{R}_{t^*} | & = \left| \sum_{i=1}^r (1 - \gamma_i^{(t^*)})^2 \Big[\frac{1}{n}(\varepsilon_i^2 - \sigma^2) + \frac{2}{n} \varepsilon_i G_i^* \Big]\right|.
\end{align*}
From $\mathbb{E}_{\varepsilon}| X(\varepsilon)| \leq \sqrt{\text{var}_{\varepsilon}X(\varepsilon)}$ for $X(\varepsilon)$ centered, $\sqrt{a + b} \leq \sqrt{a} + \sqrt{b}$ for any $a, b \geq 0$, and $\mathbb{E}_{\varepsilon}\left( \varepsilon^{4} \right) \leq 3 \sigma^4$, we obtain
\begin{align*}
\mathbb{E}_{\varepsilon} |\widetilde{R}_{t^*} - \mathbb{E}_{\varepsilon}\widetilde{R}_{t^*} | & \leq \sqrt{\frac{2 \sigma^2}{n^2} \sum_{i=1}^r (1 - \gamma_i^{(t^*)})^4 \left[ \frac{3}{2} \sigma^2 + 2 (G_i^*)^2 \right]} \\
&\leq \sqrt{\frac{3 \sigma^4}{n^2} \sum_{i=1}^r (1 - \gamma_i^{(t^*)})^2} + \sqrt{\frac{4 \sigma^2}{n^2} \sum_{i=1}^r (1 - \gamma_i^{(t^*)})^2 (G_i^*)^2} \\
& \leq \frac{\sqrt{3}\sigma^2 \sqrt{r}}{n} + \theta \frac{\sigma^2}{n} + \theta^{-1}B^2(t^*) \\ &\leq \theta^{-1}B^2(t^*) + (\sqrt{3} + \theta) \frac{\sqrt{r} \sigma^2}{n}.
\end{align*}
Applying the inequalities $(a + b)^2 \leq 2a^2 + 2b^2$ for any $a, b \geq 0$ and $B^2(t^*) \leq \mathbb{E}_{\varepsilon} \lVert f^{t^*} - f^* \rVert_n^2$, we arrive at
\begin{align*}
& \mathbb{E}_{\varepsilon} \lVert f^{\tau} - f^* \rVert_n^2 \\
%
& \leq 2 \mathbb{E}_{\varepsilon} \lVert f^{t^*} - f^* \rVert_n^2 + 2 \mathbb{E}_{\varepsilon} \lVert f^{\tau} - f^{t^*} \rVert_n^2 \\
%
%
& \leq 2(1 + \theta^{-1})\mathbb{E}_{\varepsilon}\lVert f^{t^*} - f^* \rVert_n^2 + 2 (\sqrt{3} + \theta)\frac{\sqrt{r}\sigma^2}{n}.
\end{align*}
The claim is proved.
\end{proof}
First of all, it is worth noting that the risk of the estimator $f^{t^*}$ is proved to be \textit{optimal} for gradient descent and kernel ridge regression regardless of the kernel (see Appendix \ref{finite_rank_appendix} for the proof), so it remains to focus on the remainder term on the right-hand side of Ineq. (\ref{general_res}). Theorem \ref{th:1} applies to any reproducing kernel, but note that for infinite-rank kernels, $r = n$, and we only achieve the rate $\mathcal{O}(1/\sqrt{n})$.
This rate is suboptimal since, for instance, RKHSs with polynomial eigenvalue decay kernels (considered in the next subsection) enjoy the minimax-optimal risk rate of order $\mathcal{O}(n^{-\frac{\beta}{\beta + 1}})$, with $\beta > 1$. Therefore, the oracle-type inequality (\ref{general_res}) is useful only for finite-rank kernels, thanks to the fast $\mathcal{O}(\sqrt{r}/n)$ rate of the remainder term.
Notice that, in order to artificially turn the term $\mathcal{O}(\frac{\sqrt{r}}{n})$ into a remainder term (even in cases corresponding to infinite-rank kernels), \cite{blanchard2016optimal, blanchard2018early} introduced in the definitions of their stopping rules a restriction on the ``starting time'' $t_{0}$. However, in the mentioned works, this restriction came at the price of possibly missing the designed stopping time $\tau$. For instance, in \cite{blanchard2016optimal}, the authors took $t_0$ as the first time at which the variance becomes of the order $\frac{\sqrt{r}\sigma^2}{n}$ ($\sqrt{D} \delta^2$ in their notation). Besides that, \cite{blanchard2018early} developed an additional procedure, based on standard model selection criteria such as the AIC criterion, for the spectral cut-off estimator in order to recover the ``missing'' stopping rule and to achieve optimality over Sobolev-type ellipsoids. In our work, we remove such a strong assumption.
As a corollary of Theorem \ref{th:1}, one can prove that $f^{\tau}$ provides a minimax estimator of $f^*$ over the ball of radius $R$.
\begin{corollary} \label{corollary_empirical_norm}
Under Assumptions \ref{a1}, \ref{a2}, \ref{additional_assumption_gd_krr}, if a kernel has finite rank $r$, then
\begin{equation}
\mathbb{E}_{\varepsilon} \lVert f^{\tau} - f^* \rVert_n^2 \leq c_u R^2 \widehat{\epsilon}_n^2,
\end{equation}
where the constant $c_u$ is numeric.
\end{corollary}
\begin{proof}[Proof of Corollary \ref{corollary_empirical_norm}]
From Theorem \ref{th:1} and Lemma \ref{cl1l2} in Appendix,
\begin{equation}
\mathbb{E}_{\varepsilon} \lVert f^{\tau} - f^* \rVert_n^2 \leq 16 (1 + \theta^{-1})R^2 \widehat{\epsilon}_n^2 + 2(\sqrt{3} + \theta)\frac{\sqrt{r}\sigma^2}{n}.
\end{equation}
Further, from Lemma \ref{critical_radius_empirical} in Appendix, $\widehat{\epsilon}_n^2 = c \frac{r \sigma^2}{n R^2}$ with a positive numeric constant $c$, and it implies that
\begin{equation}
\mathbb{E}_{\varepsilon} \lVert f^{\tau} - f^* \rVert_n^2 \leq \Big[ 16(1 + \theta^{-1}) + \frac{2(\sqrt{3} + \theta)}{c} \Big] R^2 \widehat{\epsilon}_n^2.
\end{equation}
\end{proof}
Note that the critical radius $\widehat{\epsilon}_n$ cannot be arbitrarily small since it has to satisfy Ineq. (\ref{RK_critical_radius_empirical}). As will be clarified later, the squared empirical critical radius is essentially optimal.
\subsubsection{Random-design framework}\label{sec.random.design}
We would like to transfer the minimax optimality bound for the estimator $f^{\tau}$ from the empirical $L_2(\mathbb{P}_n)$-norm to the $L_2(\mathbb{P}_X)$ in-sample norm by means of the so-called localized population Rademacher complexity. This complexity measure became a standard tool in empirical processes and nonparametric regression \cite{bartlett2005local, koltchinskii2006local, raskutti2014early, wainwright2019high}.
For any kernel function class studied in the paper, we consider the localized Rademacher complexity that can be seen as a population counterpart of the empirical Rademacher complexity (\ref{empirical_rademacher_complexity_def}) introduced earlier:
\begin{equation} \label{radamacher_complexity}
\overline{\mathcal{R}}_n(\epsilon, \mathcal{H}) = R \left[ \frac{1}{n}\sum_{i=1}^{\infty} \min \{ \mu_i, \epsilon^2 \} \right]^{1/2}.
\end{equation}
Using the localized population Rademacher complexity, we define its \textit{population critical radius} $\epsilon_n > 0$ to be the smallest positive solution $\epsilon$ that satisfies the inequality
\begin{equation} \label{RK_critical_radius}
\frac{\overline{\mathcal{R}}_n(\epsilon, \mathcal{H})}{\epsilon R} \leq \frac{2 \epsilon R}{\sigma}.
\end{equation}
In contrast to the empirical critical radius $\widehat{\epsilon}_n$, this quantity is not data-dependent, since it is specified by the population eigenvalues of the kernel operator $T_k$ underlying the RKHS.
Recalling the definition of the population critical radius (\ref{RK_critical_radius}), the following result provides a fundamental lemma for transferring between the $L_2(\mathbb{P}_n)$ and $L_2(\mathbb{P}_X)$ functional norms. In what follows, we assume that $\mathcal{H}$ is a star-shaped function class, meaning that for any $f \in \mathcal{H}$ and scalar $\omega \in [0, 1],$ the function $\omega f$ belongs to $\mathcal{H}$. The assumption that $\mathcal{H}$ is star-shaped holds if $f$ is assumed to lie in the $\mathcal{H}$-norm ball of an \textit{arbitrary} finite radius.
\begin{lemma}\cite[\textnormal{Theorem 14.1}]{wainwright2019high} \label{Lemma:change_norm}
Assume a star-shaped kernel function class $\mathcal{H}$ and Assumption \ref{a2} of bounded kernel. Let $\epsilon_n$ be as in Ineq. (\ref{RK_critical_radius}), then for any $f \in \mathbb{B}_{\mathcal{H}}(cR)$, where $c > 1$ is a numeric constant, and any $h \geq \epsilon_n$, one has
\begin{equation}
\left|\lVert f \rVert_n^2 - \lVert f \rVert_2^2 \right| \leq \frac{1}{2}\lVert f \rVert_2^2 + c_1 R^2 h^2
\end{equation}
with probability at least $1 - c_2 e^{-c_3 \frac{n h^2 R^2}{\sigma^2}}$, for some positive numeric constants $c_1, c_2$, and $c_3$.
\end{lemma}
We deduce from Lemma \ref{Lemma:change_norm} that with probability at least $1 - c_2 e^{-c_3 \frac{n h^2 R^2}{\sigma^2}}$,
\begin{align*}
\frac{1}{2} \lVert f \rVert_2^2 - c_1 R^2 h^2 \leq \lVert f \rVert_n^2 \leq \frac{3}{2}\lVert f \rVert_2^2 + c_1 R^2 h^2.
\end{align*}
The previous lemma can be read as follows: if we are able to prove that, for some $t > 0$, $\lVert f^{t} - f^* \rVert_{\mathcal{H}}^2 \leq c R$ with high probability for a positive numeric constant $c$, then we can directly transfer the optimality result in terms of $\lVert f^t - f^* \rVert_n^2$ to an optimality result in terms of the $L_2(\mathbb{P}_X)$-norm $\lVert f^t - f^* \rVert_2^2$, losing only $c_1 R^2 h^2 \asymp R^2 \epsilon_n^2$ by choosing $h = \epsilon_n$.
Equipped with the localized Rademacher complexity (\ref{radamacher_complexity}), we can state the optimality theorem for finite-rank kernels for any functional sequence $\{ f^t \}_{t = 0}^{\infty}$ generated by (\ref{iterations}) and initialized at $f^0 = 0$.
\begin{theorem}\label{th:2}
Under Assumptions \ref{a1}, \ref{a2}, and \ref{additional_assumption_gd_krr}, given the stopping rule (\ref{tau}), there is a numeric constant $\widetilde{c}_{u}$ so that for finite-rank kernels with rank $r$,
\begin{equation}
\mathbb{E} \lVert f^{\tau} - f^* \rVert_2^2 \leq \widetilde{c}_{u} \frac{r \sigma^2}{n}.
\end{equation}
\end{theorem}
\textit{Proof intuition}.
The full proof is deferred to Section \ref{proof_for_change_of_norm}. Its main ingredient is Lemma \ref{change_norm_bound} in Appendix, which states the following: $\lVert f^t \rVert_{\mathcal{H}} \leq \sqrt{7}R$ with high probability for any $t \leq \overline{t}_{\epsilon}$, where $\overline{t}_{\epsilon} = \inf \Big\{ t > 0 \ | \ B^2(t) = \frac{\sigma^2}{2n}\sum_{i=1}^r \gamma_i^{(t)} \Big\}$. With this argument, we can apply the triangle inequality and Lemma \ref{Lemma:change_norm}, provided $\tau \leq \overline{t}_{\epsilon}$ w.h.p.
\medskip
\begin{remark}
Theorem \ref{th:2} provides a rate for the $L_2(\mathbb{P}_X)$-norm that matches up to a constant the minimax bound (see, e.g., \cite[Theorem 2(a)]{raskutti2012minimax} with $s = d = 1$), when $f^*$ belongs to the $\mathcal{H}$-norm ball of a fixed radius $R$, thus not improvable in general. A similar bound for finite-rank kernels was achieved in \cite[Corollary 4]{raskutti2014early}.
\end{remark}
We summarize our findings in the following corollary.
\begin{corollary} \label{finite_rank_corollary}
Under Assumptions \ref{a1}, \ref{a2}, \ref{additional_assumption_gd_krr}, and a finite-rank kernel, the early stopping rule $\tau$ satisfies
\begin{equation}
\mathbb{E} \lVert f^{\tau} - f^* \rVert_2^2 \asymp \underset{\widehat{f}}{\inf} \underset{\lVert f^* \rVert_{\mathcal{H}} \leq R}{\sup} \mathbb{E} \lVert \widehat{f} - f^* \rVert_2^2,
\end{equation}
where the infimum is taken over all measurable functions of the input data.
\end{corollary}
\subsection{Practical behavior of $\tau$ with infinite-rank kernels} \label{pdk}
A typical example of an RKHS inducing a ``smooth'' infinite-rank kernel is the $k^{\textnormal{th}}$-order Sobolev space for some fixed integer $k \geq 1$, with the Lebesgue measure on a bounded domain. We consider Sobolev spaces consisting of functions whose $k^{\textnormal{th}}$-order weak derivatives $f^{(k)}$ are Lebesgue integrable and which satisfy $f(0) = f^{(1)}(0) = \ldots = f^{(k-1)}(0) = 0$. It is worth mentioning that for such classes, the eigenvalues of the Gram matrix satisfy $\widehat{\mu}_i \asymp i^{-\beta}, \ i \in [r]$. Another example of a kernel with this eigenvalue decay condition is the Laplace kernel $\mathbb{K}(x_1, x_2) = e^{-|x_1 - x_2|}, \ x_1, x_2 \in \mathbb{R}$ (see \cite{scholkopf2001learning}, p.~402).
Let us first illustrate the practical behavior of the ESR (\ref{tau}) (via its histogram) for gradient descent (\ref{iterations}) with the step-size $\eta = 1/(1.2 \widehat{\mu}_1)$ and the one-dimensional Sobolev kernel $\mathbb{K}(x_1, x_2) = \min\{x_1, x_2\}$, which generates the reproducing space
\begin{equation} \label{H:def}
\mathcal{H} = \left\{f: [0, 1] \to \mathbb{R} \ | \ f(0) = 0, \int_{0}^1 (f^\prime(x))^2 dx < \infty \right\}.
\end{equation}
We deal with the model (\ref{main}) for two regression functions: a smooth piece-wise linear function $f^*(x) = |x - 1/2|-1/2$ and a nonsmooth heavisine function $f^*(x) = 0.093 \ [4 \ \textnormal{sin}(4 \pi x) - \textnormal{sign}(x - 0.3) - \textnormal{sign}(0.72 - x)]$. The design points are random, $x_i \overset{\textnormal{i.i.d.}}{\sim} \mathbb{U}[0, 1]$, and the number of observations is $n = 200$. For both functions, $\lVert f^* \rVert_n \approx 0.28$, and we set a moderate noise level $\sigma = 0.15$. The number of repetitions is $N = 200$.
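One run of this simulation can be reproduced along the following lines (a Python sketch under our own implementation choices, e.g.\ the random seed and the numerical rank threshold; it draws one sample, forms the normalized Gram matrix of the first-order Sobolev kernel, rotates the observations, and evaluates $\tau$ for kernel gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)        # seed is our choice, not the paper's
n, sigma = 200, 0.15
x = np.sort(rng.uniform(0.0, 1.0, n))

def heavisine(x):
    return 0.093 * (4 * np.sin(4 * np.pi * x)
                    - np.sign(x - 0.3) - np.sign(0.72 - x))

y = heavisine(x) + sigma * rng.normal(size=n)

# Normalized Gram matrix of the first-order Sobolev kernel min(x1, x2).
K = np.minimum.outer(x, x) / n
mu, U = np.linalg.eigh(K)
mu, U = mu[::-1], U[:, ::-1]          # decreasing eigenvalue order
Z = U.T @ y                           # rotated observations
r = int(np.sum(mu > 1e-12))           # numerical rank

eta = 1.0 / (1.2 * mu[0])
tau = next(t for t in range(1, 100_000)
           if np.sum((1 - eta * mu[:r]) ** (2 * t) * Z[:r] ** 2) / n
           <= r * sigma ** 2 / n)
```

Repeating this over $N$ independent samples yields the histograms of $\tau$ reported below.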
\begin{figure}
\begin{subfigure}{9cm}
\centering\includegraphics[width=9cm]{./fig/3d_hist_smooth0.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{9cm}
\centering\includegraphics[width=9cm]{./fig/3d_hist_heavisine0.pdf}
\caption{}
\end{subfigure}
\caption{Histogram of $\tau$ vs $t^b$ vs $t^*$ vs $t_{\textnormal{or}} \coloneqq \underset{t > 0}{\textnormal{argmin}} \Big[ \mathbb{E}_{\varepsilon}\lVert f^t - f^* \rVert_n^2 \Big]$ for kernel gradient descent with the step-size $\eta = 1 / (1.2 \widehat{\mu}_1)$ for the piece-wise linear $f^*(x) = |x - 1/2| - 1/2$ (panel (a)) and heavisine $f^*(x) = 0.093 \ [4 \ \textnormal{sin}(4 \pi x) - \textnormal{sign}(x - 0.3) - \textnormal{sign}(0.72 - x)]$ (panel (b)) regression functions, and the first-order Sobolev kernel $\mathbb{K}(x_1, x_2) = \min \{x_1, x_2 \}$.}
\label{fig:hist}
\end{figure}
In panel (a) of Figure \ref{fig:hist}, we observe that our stopping rule $\tau$ has a high variance. This can be explained by the variability of $\tau$ around its proxy version $t^*$, or by the variability of the empirical risk $R_t$ around its expectation at $t^*$. To understand this phenomenon, we return to Theorem \ref{th:1} and notice that the remainder term there vanishes at the fast rate $\mathcal{O}(\sqrt{r}/n)$ when the kernel rank is fixed. If the kernel is not of finite rank, the worst-case rate is $\mathcal{O}(1/\sqrt{n})$, and we cannot guarantee that this term is truly negligible. Thus, the high variance stems from a large remainder term. Moreover, it has been shown in \cite{blanchard2018early} that the term $\mathcal{O}(\sqrt{r}/n)$ is unavoidable for the spectral cut-off algorithm (in their notation, it corresponds to $\sqrt{D}\delta^2$, where $\delta^2 = \frac{\sigma^2}{n}$).
If we change the signal $f^*$ from the smooth to the nonsmooth one, the regression function no longer belongs to $\mathcal{H}$ defined in (\ref{H:def}). In this case (panel (b) of Figure \ref{fig:hist}), the stopping rule $\tau$ performs much better than for the previous regression function. One conclusion is that, for smooth functions in $\mathcal{H}$, the variance of the empirical risk needs to be reduced. In order to do so, and to obtain a stable early stopping rule close to $t^*$, we propose a special smoothing technique for the empirical risk.
\section{Polynomial smoothing} \label{sec:4}
As mentioned earlier, the main cause of the poor behavior of the stopping rule $\tau$ for ``smooth'' infinite-rank kernels is the variability of the empirical risk around its expectation, which we would like to reduce. To gain additional intuition about this variability, consider the expectation of the empirical risk $\mathbb{E}_{\varepsilon}R_t \approx \frac{1}{n} \sum_{i=1}^r (1 - \gamma_i^{(t)})^2 (G^*_{i})^2$ and note that there exist components $i \in [r]$ for which $(G^*_{i})^2 \leq \varepsilon_i^2$: on such components one essentially observes pure noise, so recovering the corresponding part of the regression function is extremely difficult. Our goal is therefore to reduce the number of these components and, by doing so, to reduce the variability of the empirical risk. The solution we propose is to smooth the empirical risk by means of the eigenvalues of the normalized Gram matrix.
\subsection{Polynomial smoothing and minimum discrepancy principle rule}
We start by defining the squared $\alpha$-norm as $\lVert f \rVert_{n, \alpha}^2 \coloneqq \langle K_n^{\alpha} F, F \rangle_n$ for all $F = \left[f(x_1), \ldots, f(x_n) \right]^\top \in \mathbb{R}^n$ and $\alpha \in [0, 1]$, from which we also introduce the smoothed risk, bias, and variance of a spectral filter estimator as
\begin{equation*}
R_{\alpha}(t) = \mathbb{E}_{\varepsilon}\lVert f^t - f^*\rVert_{n, \alpha}^2 = \lVert \mathbb{E}_{\varepsilon}f^t - f^*\rVert_{n, \alpha}^2 + \mathbb{E}_{\varepsilon}\lVert f^t - \mathbb{E}_{\varepsilon}f^t\rVert_{n, \alpha}^2 = B^2_{\alpha}(t) + V_{\alpha}(t),
\end{equation*}
with
\begin{equation}
B^2_{\alpha}(t) = \frac{1}{n}\sum_{i=1}^r \widehat{\mu}_i^{\alpha} (1 - \gamma_i^{(t)})^2 (G_i^*)^2, \ \ \ \ V_{\alpha}(t) = \frac{\sigma^2}{n}\sum_{i=1}^r \widehat{\mu}_i^{\alpha} (\gamma_i^{(t)})^2.
\end{equation}
The smoothed empirical risk is
\begin{equation}
R_{\alpha, t} = \lVert F^t - Y\rVert_{n, \alpha}^2 = \lVert G^t - Z \rVert_{n, \alpha}^2 = \frac{1}{n}\sum_{i=1}^r \widehat{\mu}_i^{\alpha} (1 - \gamma_i^{(t)})^2 Z_i^2 , \quad \textnormal{ for } t > 0.
\end{equation}
Recall that the kernel is bounded by $B = 1$, so that $\widehat{\mu}_i \leq 1$ for all $i = 1, \ldots, r$; hence the smoothed bias $B_{\alpha}^2(t)$ and the smoothed variance $V_{\alpha}(t)$ are smaller than their non-smoothed counterparts.
Analogously to the heuristic derivation leading to the stopping rule (\ref{tau}), the new stopping rule is based on the discrepancy principle applied to the $\alpha-$smoothed empirical risk, that is,
\begin{equation} \label{t_alpha}
\tau_{\alpha} = \inf \left\{ t > 0 \ | \ R_{\alpha, t} \leq \sigma^2\frac{\mathrm{tr}(K_n^\alpha)}{n} \right\},
\end{equation}
where $\sigma^2 \mathrm{tr}(K_n^\alpha)/n = \sigma^2\sum_{i=1}^r \widehat{\mu}_i^{\alpha}/n$ is the natural counterpart of $r \sigma^2/n$ in the case of a full-rank kernel and the $\alpha-$norm.
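To make the rule concrete, here is a minimal sketch of computing $\tau_{\alpha}$ from the spectral decomposition of $K_n$, assuming kernel gradient descent with filter $1 - \gamma_i^{(t)} = (1 - \eta \widehat{\mu}_i)^t$, a full-rank Gram matrix ($r = n$) for simplicity, and access to the rotated observations $Z_i$ and the noise variance $\sigma^2$; the function name and signature are ours, not from any library.

```python
import numpy as np

def tau_alpha(mu, Z, sigma2, alpha, eta, t_max=10_000):
    """Minimum discrepancy stopping time for the alpha-smoothed empirical risk.

    mu     : eigenvalues of the normalized Gram matrix K_n (nonincreasing, <= 1)
    Z      : coordinates of the observations in the eigenbasis of K_n
    sigma2 : noise variance sigma^2
    alpha  : smoothing exponent in [0, 1]
    eta    : gradient-descent step size
    """
    n = mu.size  # assuming full rank, r = n
    # threshold sigma^2 * tr(K_n^alpha) / n, the smoothed analogue of r * sigma^2 / n
    threshold = sigma2 * np.sum(mu ** alpha) / n
    for t in range(1, t_max + 1):
        # gradient-descent spectral filter: 1 - gamma_i^(t) = (1 - eta * mu_i)^t
        residual2 = (1.0 - eta * mu) ** (2 * t)
        R_alpha_t = np.sum(mu ** alpha * residual2 * Z ** 2) / n
        if R_alpha_t <= threshold:
            return t
    return t_max
```

As expected from the discrepancy principle, a larger noise variance raises the threshold and stops the iterations earlier.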
Since there is no straightforward connection between $\tau_{\alpha}$ and the former reference stopping rule $t^b = \inf \{ t > 0 \ | \ B^2(t) \leq V(t) \}$, we need to introduce a new reference one for the theoretical analysis of the behavior of $\tau_{\alpha}$.
We first define a new smoothed reference stopping rule (which balances between the smoothed bias and variance)
\begin{equation} \label{t^b_alpha}
t^{b}_{\alpha} = \inf \left\{ t > 0 \ | \ B^2_{\alpha}(t) \leq V_{\alpha}(t) \right\}
\end{equation}
and also the analogue of \eqref{t_star} with the $\alpha-$norm:
\begin{equation} \label{t_star_alpha}
t_{\alpha}^* = \inf \left\{t > 0 \ | \ \mathbb{E}_{\varepsilon}R_{\alpha, t} \leq \frac{\sigma^2}{n}\sum_{i=1}^r \widehat{\mu}_i^{\alpha} \right\}.
\end{equation}
\subsection{Related work}
The idea of smoothing the empirical risk (the residuals) is not new in the literature. For instance, \cite{blanchard2010conjugate, blanchard2012discrepancy, blanchard2016convergence} discussed various smoothing strategies applied to (kernelized) conjugate gradient descent, and \cite{CelWah2020} considered spectral regularization with spectral filter estimators. %
More closely related to the present work, \cite{stankewitz2019smoothed} studied a statistical performance improvement allowed by polynomial smoothing of the residuals (as we do here) but restricted to the spectral cut-off estimator.
In \cite{blanchard2010conjugate, blanchard2012discrepancy}, the authors considered the following statistical inverse problem: $z = Ax + \sigma \zeta$, where $A$ is a self-adjoint operator and $\zeta$ is Gaussian noise. In their case, for the purpose of achieving optimal rates, the usual discrepancy principle rule $\lVert Ax_m - z \rVert \leq \vartheta \delta$ ($m$ is an iteration number, $\vartheta$ is a parameter) was modified and took the form $\lVert \rho_{\lambda}(A)( A x_m - z )\rVert \leq \vartheta \delta$, where $\rho_{\lambda}(t) = \frac{1}{\sqrt{t + \lambda}}$ and $\delta$ is the normalized variance of Gaussian noise.
In \cite{blanchard2016convergence}, the minimum discrepancy principle was modified as follows: each iteration $m$ of conjugate gradient descent was represented by a vector $\widehat{\alpha}_m = K_n^{\dagger} Y$, where $K_n^{\dagger}$ is the pseudo-inverse of the normalized Gram matrix, and the learning process was stopped if $\lVert Y - K_n \widehat{\alpha}_m \rVert_{K_n} < \Omega$ for some positive $\Omega$, where $\lVert \alpha \rVert_{K_n}^2 = \langle \alpha, K_n \alpha \rangle.$ Thus, this method corresponds (up to a threshold) to the stopping rule (\ref{t_alpha}) with $\alpha = 1.$
In the work \cite{stankewitz2019smoothed}, the authors concentrated on the inverse problem $Y = A \xi + \delta W$ and its corresponding Gaussian vector observation model $Y_i = \Tilde{\mu}_{i} \xi_i + \delta \varepsilon_i, \ i \in [r]$, where $\{ \Tilde{\mu}_i \}_{i=1}^r$ are the singular values of the linear bounded operator $A$ and $\{ \varepsilon_i \}_{i=1}^r$ are Gaussian noise variables. They recovered the signal $\{ \xi_i \}_{i=1}^r$ by a cut-off estimator of the form $\widehat{\xi}_i^{(t)} = \mathbb{I}\{i \leq t \}\widetilde{\mu}_{i}^{-1}Y_i, \ i \in [r]$. The discrepancy principle in this case was $\lVert (A A^\top)^{\alpha/2}(Y - A \widehat{\xi}^{(t)}) \rVert^2 \leq \kappa$ for some positive $\kappa.$ They found that, if the smoothing parameter $\alpha$ lies in the interval $[\frac{1}{4p}, \frac{1}{2p})$, where $p$ is the polynomial decay of the singular values $\{ \widetilde{\mu}_i \}_{i=1}^r$, then the cut-off estimator is adaptive to Sobolev ellipsoids. Therefore, our work could be considered as an extension of \cite{stankewitz2019smoothed} that generalizes the polynomial smoothing strategy to more complex filter estimators, such as gradient descent and (Tikhonov) ridge regression, in the reproducing kernel framework.
\subsection{Optimality result (fixed-design)} \label{optimality_section}
To account for the use of the $\alpha-$norm in our analysis, we define a modified version of the localized empirical Rademacher complexity, which we call the \textit{smoothed empirical Rademacher complexity}. The derivation of the next expression is deferred to Appendix~\ref{smooth_complexity_derivation}.
\begin{definition}
The smoothed empirical Rademacher complexity of $\mathbb{B}_{\mathcal{H}}(R)$ is defined as
\begin{equation} \label{smoothed_empirical_rad_compl}
\widehat{\mathcal{R}}_{n, \alpha}(\epsilon, \mathcal{H}) = R \sqrt{\frac{1}{n} \sum_{i=1}^r \widehat{\mu}_i^{\alpha} \min \{\widehat{\mu}_i, \epsilon^2 \}},
\end{equation}
where $\alpha \in [0, 1]$ and $\{ \widehat{\mu}_i \}_{i=1}^r$ are the eigenvalues of the Gram matrix $K_n$.
\end{definition}
This new definition leads to a smoothed version of the critical inequality and its associated empirical critical radius.
\begin{definition} \label{def_smooth_fixed_point}
Define the \textit{smoothed empirical critical radius} $\widehat{\epsilon}_{n, \alpha}$ as the smallest positive solution $\epsilon > 0$ to the following fixed-point inequality
\begin{equation} \label{fixed_point_smooth}
\frac{\widehat{\mathcal{R}}_{n, \alpha}(\epsilon, \mathcal{H})}{\epsilon R} \leq \frac{2 R}{\sigma} \epsilon^{1 + \alpha}.
\end{equation}
\end{definition}
Appendix~\ref{auxiliary} establishes that the smoothed empirical critical radius $\widehat{\epsilon}_{n, \alpha}$ exists, is unique, and achieves equality in Ineq.~(\ref{fixed_point_smooth}).
We pursue the analogy a bit further by defining the \textit{smoothed statistical dimension} as
\begin{equation}
d_{n, \alpha} \coloneqq \min \Big\{ j \in [r]: \widehat{\mu}_j \leq \widehat{\epsilon}_{n, \alpha}^2 \Big\},
\end{equation}
and $d_{n, \alpha} = r$ if no such index exists.
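Given the (sorted) eigenvalues of $K_n$ and a value of $\widehat{\epsilon}_{n, \alpha}^2$, the smoothed statistical dimension is straightforward to compute; the following one-function sketch (our own naming) makes the definition explicit.

```python
import numpy as np

def smoothed_statistical_dimension(mu, eps2):
    """d_{n,alpha}: smallest (1-based) index j with mu_j <= eps2, or r if none.

    mu   : eigenvalues of K_n in nonincreasing order
    eps2 : squared smoothed empirical critical radius
    """
    below = np.nonzero(mu <= eps2)[0]
    return int(below[0]) + 1 if below.size > 0 else mu.size
```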
Combined with \eqref{smoothed_empirical_rad_compl}, this implies that
\begin{equation} \label{useful_inequalities_smooth_}
\widehat{\mathcal{R}}_{n, \alpha}^2(\widehat{\epsilon}_{n, \alpha}, \mathcal{H}) \geq \frac{ \sum_{j=1}^{d_{n, \alpha}}\widehat{\mu}_j^{\alpha}}{n} R^2 \widehat{\epsilon}_{n, \alpha}^2, \quad \textnormal{ and } \quad \widehat{\epsilon}_{n, \alpha}^{2(1+\alpha)} \geq \frac{\sigma^2 \sum_{j=1}^{d_{n, \alpha}} \widehat{\mu}_j^{\alpha}}{4 R^2 n},
\end{equation}
where the second statement results from Ineq.~\eqref{fixed_point_smooth}.
Let us emphasize that \cite{yang2017randomized} already introduced the so-called \emph{statistical dimension} (corresponds to $\alpha = 0$ in our notations). It appeared that the statistical dimension provides an upper bound on the minimax-optimal dimension of randomized projections for kernel ridge regression (see \cite[Theorem 2, Corollary 1]{yang2017randomized}).
In our case, $d_{n, \alpha}$ can be seen as an ($\alpha$-smoothed) version of the statistical dimension. One motivation is that this notion turns out to be useful in the derivation of minimax rates.
In particular, this can be achieved by means of the following assumptions that involve this quantity.
\begin{assumption} \label{assumption_on_variance}
There exists a numeric $\mathcal{A} > 0$ such that for all $\alpha \in [0, 1]$,
\begin{equation}
\sum_{i=d_{n, \alpha} + 1}^r \widehat{\mu}_i \leq \mathcal{A} d_{n, \alpha} \widehat{\epsilon}_{n, \alpha}^2 .
\end{equation}
\end{assumption}
This assumption enables the transfer from the smoothed critical inequality (\ref{fixed_point_smooth}) to its non-smoothed version (\ref{RK_critical_radius_empirical}). Indeed, under Assumption \ref{assumption_on_variance}, if $\epsilon$ satisfies Ineq. (\ref{fixed_point_smooth}), then it also satisfies Ineq. (\ref{RK_critical_radius_empirical}), where the constant $2$ on the right-hand side is replaced by $2\sqrt{1+\mathcal{A}}$. Although there exist reproducing kernels for which Assumption \ref{assumption_on_variance} does not hold, it holds true for most of them \cite{yang2017randomized}, including all the examples in the present paper. We detail one of them below.
\begin{example}[$\beta$-polynomial eigenvalue decay]
Let us assume that the eigenvalues of the normalized Gram matrix satisfy that there exist numeric constants $0 < c \leq C $ such that
\begin{align} \label{beta-polynomial}
c i^{-\beta} \leq \widehat{\mu}_i \leq C i^{-\beta},\quad i=1,\ldots, r,
\end{align} for some $\beta > 1$.
Instances of kernels in this class are mentioned at the beginning of Section~\ref{pdk}.
Then, Assumption \ref{assumption_on_variance} holds with $ \mathcal{A} = \frac{C}{c}\frac{1}{\beta - 1}$.
\end{example}
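For completeness, let us sketch how the constant $\mathcal{A}$ arises in this example (assuming $d_{n, \alpha} < r$, so that $\widehat{\mu}_{d_{n, \alpha}} \leq \widehat{\epsilon}_{n, \alpha}^2$ by the definition of $d_{n, \alpha}$). An integral comparison gives
\begin{equation*}
\sum_{i=d_{n, \alpha} + 1}^r \widehat{\mu}_i \leq C \sum_{i=d_{n, \alpha} + 1}^r i^{-\beta} \leq C \int_{d_{n, \alpha}}^{\infty} x^{-\beta} \, dx = \frac{C}{\beta - 1} \, d_{n, \alpha}^{1 - \beta} \leq \frac{C}{c}\frac{1}{\beta - 1} \, d_{n, \alpha} \widehat{\epsilon}_{n, \alpha}^2,
\end{equation*}
where the last inequality uses $c \, d_{n, \alpha}^{-\beta} \leq \widehat{\mu}_{d_{n, \alpha}} \leq \widehat{\epsilon}_{n, \alpha}^2$.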
Another key property for the smoothing to yield optimal results is that the value of $\alpha$ has to be large enough to control the tail sum of the smoothed eigenvalues by the corresponding cumulative sum, which is the purpose of the assumption below.
\begin{assumption} \label{sufficient_smoothing}
There exists $\Upsilon = [\alpha_0, 1], \ \alpha_0 \geq 0$, such that for all $\alpha \in \Upsilon$,
\begin{equation}
\sum_{i=d_{n, \alpha} + 1}^r \widehat{\mu}_i^{2 \alpha} \leq \mathcal{M} \sum_{i=1}^{d_{n, \alpha}}\widehat{\mu}_i^{2 \alpha},
\end{equation}
where $\mathcal{M} \geq 1$ denotes a numeric constant.
\end{assumption}
Let us remark that controlling the tail sum of the empirical eigenvalues has already been done, for example, by \cite{bartlett2020benign} (effective rank) and more recently by \cite[Assumption 6]{CelWah2020}.
Let us also mention that Assumption~\ref{sufficient_smoothing} does not imply that Assumption~\ref{assumption_on_variance} holds.
Let us enumerate several classical examples for which this assumption holds.
\begin{example}[$\beta$-polynomial eigenvalue decay kernels (\ref{beta-polynomial})]
For the polynomial eigenvalue-decay kernels, Assumption~\ref{sufficient_smoothing} holds with
\begin{equation}\label{lower.bound.alpha}
\mathcal{M} = 2 \Big( \frac{C}{c} \Big)^{2} \quad \textnormal{and} \quad 1 \geq \alpha \geq \frac{1}{\beta + 1}=\alpha_0.
\end{equation}
\end{example}
\begin{example}[$\gamma$-exponential eigenvalue-decay kernels]
Let us assume that the eigenvalues of the normalized Gram matrix satisfy that there exist numeric constants $0 < c \leq C $ and a constant $\gamma>0$ such that
\begin{align*}
c e^{-i^{\gamma}} \leq \widehat{\mu}_i \leq C e^{-i^{\gamma}}.
\end{align*}
Instances of kernels within this class include the Gaussian kernel with respect to the Lebesgue measure on the real line (with $\gamma = 2$) or on a compact domain (with $\gamma = 1$) (up to $\log$ factor in the exponent).
Then, Assumption~\ref{sufficient_smoothing} holds with
\begin{equation*}
\mathcal{M} = \Big( \frac{C}{c} \Big)^2 \frac{\int_{0}^{\infty} e^{-y^{\gamma}}dy}{\int_{(2\alpha_0)^{1/\gamma}}^{2 (2\alpha_0)^{1/\gamma}} e^{-y^{\gamma}}dy} \quad \textnormal{and}\quad \alpha \in [\alpha_0, 1], \quad \mbox{for any}\quad \alpha_0 \in (0, 1).
\end{equation*}
\end{example}
For \emph{any reproducing kernel} satisfying the above assumptions, the next theorem provides a high probability bound on the performance of $f^{\tau_{\alpha}}$ (measured in terms of the $L_2(\mathbb{P}_n)$-norm), which depends on the smoothed empirical critical radius.
\begin{theorem}[Upper bound]\label{th:3}
Under Assumptions~\ref{a1}, \ref{a2}, \ref{additional_assumption_gd_krr}, \ref{assumption_on_variance}, and \ref{sufficient_smoothing}, given the stopping rule (\ref{t_alpha}),
\begin{equation} \label{main_inequality}
\lVert f^{\tau_{\alpha}} - f^* \rVert_n^2 \leq c_u R^2 \widehat{\epsilon}_{n, \alpha}^2
\end{equation}
with probability at least $1 - 5 \exp \Big[ - c_1 \frac{R^2}{\sigma^2}n \widehat{\epsilon}_{n, \alpha}^{2(1+\alpha)} \Big]$ for some positive constants $c_1$ and $c_u$, where $c_1$ depends only on $\mathcal{M}$, $c_u$ depends only on $\mathcal{A}$.
Moreover,
\begin{equation} \label{in_expectation}
\mathbb{E}_{\varepsilon} \lVert f^{\tau_{\alpha}} - f^* \rVert_n^2 \leq C R^2 \widehat{\epsilon}_{n, \alpha}^2 + 6 \max \{ \sigma^2, R^2 \} \exp \Big[ - c_3 \frac{R^2}{\sigma^2}n \widehat{\epsilon}_{n, \alpha}^{2(1+\alpha)} \Big],
\end{equation}
for the constant $C$ only depending on $\mathcal{A}$, constant $c_3$ only depending on $\mathcal{M}$.
\end{theorem}
First of all, Theorem \ref{th:3} is established in the fixed-design framework, and Ineq.~\eqref{in_expectation} is a direct consequence of the high probability bound \eqref{main_inequality}.
The main message is that the final performance of the estimator $f^{\tau_{\alpha}}$ is controlled by the smoothed critical radius $\widehat{\epsilon}_{n, \alpha}$. From the existing literature on the empirical critical radius \cite{raskutti2012minimax, raskutti2014early, yang2017randomized, wainwright2019high}, it is already known that the non-smooth version $\widehat{\epsilon}_n^2$ is the typical quantity that leads to minimax rates in the RKHS (see also Theorem~\ref{theorem_yang} below).
In particular, tight upper bounds on $\widehat{\epsilon}_n^2$ can be computed from a priori information about the RKHS, e.g. the decay rate of the empirical/population eigenvalues.
However, the behavior of $\widehat{\epsilon}_{n,\alpha}^2$ with respect to $n$ is likely to depend on $\alpha$, as emphasized by the notation. Intuitively, this suggests that there could exist a range of values of $\alpha$, for which $\widehat{\epsilon}_{n,\alpha}^2$ is of the same order as (or faster than) $\widehat{\epsilon}_{n}^2$, leading therefore to optimal rates. But there could also exist ranges of values of $\alpha$, where this does not hold true, leading to suboptimal rates.
Another striking aspect of Ineq.~\eqref{in_expectation} is the additional term involving the exponential function. Since \eqref{main_inequality} is a "high probability" statement, this term is expected to converge to 0 at a rate depending on $n\widehat{\epsilon}_{n,\alpha}^2$. Therefore, the final convergence rate, as well as whether this term is negligible, depends on $\alpha$.
\begin{proof}[Sketch of the proof of Theorem~\ref{th:3}]
The complete proof is given in Appendix~\ref{polynomial_appendix} and starts from splitting the risk error $ \lVert f^{\tau_{\alpha}} - f^* \rVert_n^2$ into two parts:
\begin{equation} \label{upper_bounding_proof}
2 B^2(\tau_{\alpha}) + 2 v(\tau_{\alpha}),
\end{equation}
where $v(t) \coloneqq \frac{1}{n}\sum_{i=1}^n (\gamma_i^{(t)})^2 \varepsilon_i^2$ is called the stochastic variance at iteration $t$.
The key ingredients of the proof are the next two deviation inequalities.
\begin{align*}
i)\ \mathbb{P}_{\varepsilon}\left( \tau_{\alpha} > \overline{t}_{\epsilon, \alpha} \right) &\leq 2 \ \exp \Big[ - c_1 \frac{R^2}{\sigma^2}n \widehat{\epsilon}_{n, \alpha}^{2(1+\alpha)} \Big],\\
ii)\ \mathbb{P}_{\varepsilon} \left( \tau_{\alpha} < \widetilde{t}_{\epsilon, \alpha} \right) &\leq 2 \ \exp \Big[ -c_2 \frac{R^2}{\sigma^2}n \widehat{\epsilon}_{n, \alpha}^{2(1 + \alpha)} \Big],
\end{align*}
where $\overline{t}_{\epsilon, \alpha}$ and $\widetilde{t}_{\epsilon, \alpha}$ are some properly chosen upper and lower bounds of $t_{\alpha}^*$.
Since it can be shown that $\eta \widetilde{t}_{\epsilon, \alpha} \asymp \eta \overline{t}_{\epsilon, \alpha} \asymp (\widehat{\epsilon}_{n, \alpha}^2)^{-1}$, these two inequalities show that $\tau_{\alpha}$ stays of the optimal order $(\widehat{\epsilon}_{n, \alpha}^2)^{-1}$ with high probability. After that, it is sufficient to upper bound each term in (\ref{upper_bounding_proof}), and the claim follows.
\end{proof}
\medskip
The purpose of the following result is to give more insight into the understanding of Theorem~\ref{th:3} regarding the influence of the different terms in the convergence rate.
\begin{theorem}[Lower bound from Theorem~1 in \cite{yang2017randomized}] \label{theorem_yang}
If Assumption~\ref{assumption_on_variance} holds with $\alpha=0$, then for any estimator $\widetilde{f}$ of $f^* \in \mathbb{B}_{\mathcal{H}}(R)$ satisfying the nonparametric model defined in Eq.~\eqref{main}, we get
\begin{equation*}
\mathbb{E}_{\varepsilon} \lVert \widetilde{f} - f^* \rVert_n^2 \geq c_l R^2 \widehat{\epsilon}_n^2,
\end{equation*}
for some positive numeric constant $c_l$ that only depends on $\mathcal{A}$ from Assumption~\ref{assumption_on_variance}.
\end{theorem}
Firstly, Theorem~\ref{theorem_yang} has been proved in \cite{yang2017randomized} with $R=1$, and a simple rescaling argument provides the above statement, so we do not reproduce the proof here.
Secondly, Theorem~\ref{theorem_yang} applies to any kernel as long as Assumption~\ref{assumption_on_variance} is fulfilled with $\alpha=0$, which is in particular true for the reproducing kernels from Theorem~\ref{th:3}. Therefore, the fastest achievable rate for an estimator of $f^*$ is $\widehat{\epsilon}_n^2$. As a consequence, as long as there exist values of $\alpha$ such that $\widehat{\epsilon}_{n, \alpha}^2$ is at most as large as $\widehat{\epsilon}_n^2$, the estimator $f^{\tau_\alpha}$ is optimal.
\subsection{Consequences for $\beta$-polynomial eigenvalue-decay kernels}
The leading idea in the present section is to identify values of $\alpha$ for which the bound (\ref{main_inequality}) from Theorem~\ref{th:3} scales as $R^2 \widehat{\epsilon}_n^2$.
Let us recall the definition of a polynomial decay kernel from \eqref{beta-polynomial}:
\begin{equation*}
c i^{-\beta} \leq \widehat{\mu}_i \leq C i^{-\beta}, \ i \in [r], \ \ \textnormal{ for some } \beta > 1 \textnormal{ and numeric constants } c, C > 0.
\end{equation*}
One typical example of the reproducing kernel satisfying this condition is the Sobolev kernel on $[0, 1] \times [0, 1]$ given by $\mathbb{K}(x, x^\prime) = \min \{x, x^\prime \}$ with $\beta = 2$ \cite{raskutti2014early}. The corresponding RKHS is the first-order Sobolev class, that is, the class of functions that are almost everywhere differentiable with the derivative in $L_2[0, 1]$.
\begin{lemma} \label{epsilons_comparaison}
Assume there exists $\beta>1$ such that the $\beta$-polynomial decay assumption from (\ref{beta-polynomial}) holds. Then there exist numeric constants $c_1, c_2 > 0$ such that for $\alpha < 1/\beta$, one has
\begin{align*}
c_1 \widehat{\epsilon}_n^2 \leq \widehat{\epsilon}_{n, \alpha}^2 \leq c_2 \widehat{\epsilon}_n^2 \asymp \left[ \frac{\sigma^2}{2 R^2 n} \right]^{\frac{\beta}{\beta + 1}} .
\end{align*}
\end{lemma}
The proof of Lemma~\ref{epsilons_comparaison}, which can be derived from combining Lemmas~\ref{critical_radius_empirical} and~\ref{critical_radius_empirical_smooth} from Appendix~\ref{general_appendix}, is not reproduced here.
Therefore, if $\alpha \beta < 1$, then $\widehat{\epsilon}_{n, \alpha}^2 \asymp \widehat{\epsilon}_n^2 \asymp \left[ \frac{\sigma^2}{2 R^2 n} \right]^{\frac{\beta}{\beta + 1}}$.
Let us now recall from \eqref{lower.bound.alpha} that Assumption~\ref{sufficient_smoothing} holds for $\alpha\geq (\beta+1)^{-1}$.
All these arguments lead us to the next result, which establishes the minimax optimality of $\tau_\alpha$ with any kernel satisfying the $\beta$-polynomial eigenvalue-decay assumption, as long as $\alpha \in [\frac{1}{\beta + 1}, \frac{1}{\beta})$.
\begin{corollary} \label{pdk_corollary}
Under Assumptions~\ref{a1},~\ref{a2}, \ref{additional_assumption_gd_krr}, and the $\beta$-polynomial eigenvalue decay (\ref{beta-polynomial}), for any $\alpha \in [\frac{1}{\beta + 1},\frac{1}{\beta})$, the early stopping rule $\tau_{\alpha}$ satisfies
\begin{equation}
\mathbb{E}_{\varepsilon}\lVert f^{\tau_{\alpha}} - f^* \rVert_n^2 \asymp \underset{\widehat{f}}{\inf} \underset{\lVert f^* \rVert_{\mathcal{H}} \leq R}{\sup} \mathbb{E}_{\varepsilon} \lVert \widehat{f} - f^* \rVert_n^2,
\end{equation}
where the infimum is taken over all measurable functions of the input data.
\end{corollary}
Corollary~\ref{pdk_corollary} establishes an optimality result in the fixed-design framework since as long as $(\beta+1)^{-1}\leq \alpha<\beta^{-1}$, the upper bound matches the lower bound up to multiplicative constants. Moreover, this property holds uniformly with respect to $\beta>1$ provided the value of $\alpha$ is chosen appropriately.
An interesting feature of this bound is that the optimal value of $\alpha$ only depends on the (polynomial) decay rate of the empirical eigenvalues of the normalized Gram matrix. This suggests that any effective estimator of the unknown parameter $\beta$ could be plugged into the above (fixed-design) result and would lead to an optimal rate.
Note that \cite{stankewitz2019smoothed} has recently emphasized a similar trade-off ($(\beta+1)^{-1}\leq \alpha<\beta^{-1}$) for the smoothing parameter $\alpha$ (polynomial smoothing), considering the spectral cut-off estimator in the Gaussian sequence model.
Regarding convergence rates, Corollary~\ref{pdk_corollary} combined with Lemma~\ref{epsilons_comparaison} suggests that the convergence rate of the expected (fixed-design) risk is of the order $\mathcal{O}(n^{-\frac{\beta}{\beta+1}})$. This is the same as the already known one in nonparametric regression in the random design framework \cite{stone1985additive,raskutti2014early}, which is known to be minimax-optimal as long as $f^*$ belongs to the RKHS $\mathcal{H}$.
\section{Empirical comparison with existing stopping rules} \label{sec:5}
The present section aims at illustrating the practical behavior of several stopping rules discussed along the paper as well as making a comparison with existing alternative stopping rules.
\subsection{Stopping rules involved}
The empirical comparison is carried out between the stopping rules $\tau$ \eqref{tau} and $\tau_{\alpha}$ with $\alpha \in [\frac{1}{\beta + 1}, \frac{1}{\beta})$ \eqref{t_alpha}, and four alternative stopping rules that are briefly described in what follows. For the sake of comparison, most of them correspond to early stopping rules already considered in \cite{raskutti2014early}.
\subsubsection*{Hold-out stopping rule}
We consider a procedure based on the hold-out idea \cite{arlot2010survey}. The data $\{(x_i, y_i)\}_{i=1}^n$ are split into two parts: the training sample $S_{\textnormal{train}} = (x_{\textnormal{train}}, y_{\textnormal{train}})$ and the test sample $S_{\textnormal{test}} = (x_{\textnormal{test}}, y_{\textnormal{test}})$, each containing half of the whole dataset.
We train the learning algorithm for $t = 0, 1, \ldots$ and estimate the risk for each $t$ by $R_{\textnormal{ho}}(f^t) = \frac{1}{n}\sum_{i \in S_{\textnormal{test}}}((\widehat{y}_{\textnormal{test}})_i - y_i)^2$, where $(\widehat{y}_{\textnormal{test}})_i$ denotes the output of the algorithm trained at iteration $t$ on $S_{\textnormal{train}}$ and evaluated at the point $x_i$ of the test sample.
The final stopping rule is defined as
\begin{equation} \label{t_ho}
\widehat{\textnormal{T}}_{\textnormal{HO}} = \textnormal{argmin} \Big\{ t \in \mathbb{N} \ | \ R_{\textnormal{ho}} (f^{t + 1}) > R_{\textnormal{ho}} (f^t) \Big\} - 1.
\end{equation}
Although it does not use all the data for training (loss of information), the hold-out strategy has been proved to output minimax-optimal estimators in various contexts (see, for instance, \cite{article, caponnetto2010cross} with Sobolev spaces and $\beta \leq 2$).
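The rule \eqref{t_ho} reduces to scanning the estimated risk curve for its first increase. A minimal sketch, assuming the test-split risks $R_{\textnormal{ho}}(f^0), R_{\textnormal{ho}}(f^1), \ldots$ have already been computed:

```python
def holdout_stopping_time(risk_estimates):
    """Hold-out rule: stop one step before the estimated risk first increases.

    risk_estimates : sequence R_ho(f^0), R_ho(f^1), ... computed on the test split.
    Returns the last t before the first increase (the last index if it never increases).
    """
    for t in range(len(risk_estimates) - 1):
        if risk_estimates[t + 1] > risk_estimates[t]:
            return t
    return len(risk_estimates) - 1
```

The V-fold rule \eqref{t_vf} below has the same structure, with the risk curve averaged over the $V$ test blocks.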
\subsubsection*{V-fold stopping rule}
The observations $\{(x_i, y_i)\}_{i=1}^n$ are randomly split into $V = 4$ equal sized blocks.
At each round (among the $V$ ones), $V - 1$ blocks are devoted to training $S_{\textnormal{train}} = (x_{\textnormal{train}}, y_{\textnormal{train}})$, and the remaining one serves for the test sample $S_{\textnormal{test}} = (x_{\textnormal{test}}, y_{\textnormal{test}})$.
At each iteration $t = 0, 1, \ldots$, the risk is estimated by $R_{\textnormal{VFCV}}(f^t) = \frac{1}{V - 1} \sum_{j=1}^{V-1} \frac{1}{n/V}\sum_{i \in S_{\textnormal{test}}(j)} ((\widehat{y}_{\textnormal{test}})_i - y_i)^2$, where $\widehat{y}_{\textnormal{test}}$ was described for the hold-out stopping rule.
The final stopping rule is
\begin{equation} \label{t_vf}
\widehat{\textnormal{T}}_{\textnormal{VFCV}} = \textnormal{argmin} \big\{ t \in \mathbb{N} \ | \ R_{\textnormal{VFCV}}(f^{t+1}) > R_{\textnormal{VFCV}}(f^t) \big\} - 1.
\end{equation}
V-fold cross validation is widely used in practice since, on the one hand, it is more computationally tractable than other splitting-based methods such as leave-one-out or leave-p-out (see the survey \cite{arlot2010survey}), and on the other hand, it enjoys a better statistical performance than the hold-out (lower variability).
\subsubsection*{Raskutti-Wainwright-Yu stopping rule (from \cite{raskutti2014early})}
The use of this stopping rule heavily relies on the assumption that $\lVert f^* \rVert_{\mathcal{H}}^2 $ is known, which is a strong requirement in practice.
It controls the bias-variance trade-off by using upper bounds on the bias and variance terms. The latter involves the localized empirical Rademacher complexity $\widehat{\mathcal{R}}_{n}(\frac{1}{\sqrt{\eta t}}, \mathcal{H})$. Similarly to $t^b$, it stops as soon as (upper bound of) the bias term becomes smaller than (upper bound on) the variance term, which leads to
\begin{equation} \label{t_w}
\widehat{\textnormal{T}}_{\textnormal{RWY}} = \textnormal{argmin} \Big\{ t \in \mathbb{N} \ | \ \widehat{\mathcal{R}}_n\Big(\frac{1}{\sqrt{\eta t}}, \mathcal{H}\Big) > (2 e \sigma \eta t)^{-1} \Big\} - 1.
\end{equation}
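As an illustration, the rule \eqref{t_w} can be evaluated from the eigenvalues of $K_n$ alone, using the empirical Rademacher complexity $\widehat{\mathcal{R}}_n(\epsilon, \mathcal{H}) = \sqrt{\frac{1}{n}\sum_i \min\{\widehat{\mu}_i, \epsilon^2\}}$ with $R = 1$ (i.e. assuming $\lVert f^* \rVert_{\mathcal{H}} \leq 1$); the following sketch and its naming are ours.

```python
import numpy as np

def rwy_stopping_time(mu, n, sigma, eta, t_max=10_000):
    """Raskutti-Wainwright-Yu rule (sketch, assuming ||f*||_H <= 1, i.e. R = 1).

    Stops one step before the localized Rademacher complexity (bias bound)
    exceeds the variance proxy (2 e sigma eta t)^{-1}.
    """
    for t in range(1, t_max + 1):
        eps2 = 1.0 / (eta * t)
        rademacher = np.sqrt(np.sum(np.minimum(mu, eps2)) / n)
        if rademacher > 1.0 / (2 * np.e * sigma * eta * t):
            return t - 1
    return t_max
```

Consistent with the discussion in Section~\ref{sec.simul.experiments.results}, a larger noise level makes the rule stop earlier.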
\subsubsection*{Theoretical minimum discrepancy-based stopping rule $t^*$}
The fourth stopping rule is the one introduced in \eqref{t_star}. It relies on the minimum discrepancy principle and involves the (theoretical) expected empirical risk $\mathbb{E}_{\varepsilon} R_t$:
\begin{equation*}
t^* = \inf \left\{ t > 0 \ | \ \mathbb{E}_{\varepsilon} R_t \leq \sigma^2 \right\}.
\end{equation*}
This stopping rule is introduced for comparison purposes only since it cannot be computed in practice.
This rule is proved to be optimal (see Appendix~\ref{finite_rank_appendix}) for \textit{any bounded reproducing kernel}, so it could serve as a reference in the present empirical comparison.
\subsubsection*{Oracle stopping rule}
The "oracle" stopping rule defines the first time the risk curve starts to increase.
\begin{equation} \label{t_or}
t_{\textnormal{or}} = \textnormal{argmin} \big\{ t \in \mathbb{N} \ | \ \mathbb{E}_{\varepsilon}\lVert f^{t+1} - f^* \rVert_n^2 > \mathbb{E}_{\varepsilon}\lVert f^t - f^* \rVert_n^2 \big\} - 1.
\end{equation}
In situations where the risk admits a unique global minimum, this rule coincides with its location.
Its formulation reflects the realistic constraint that we do not have access to the whole risk curve (unlike in the classical model selection setup).
\subsection{Simulation design}
Artificial data are generated according to the regression model $y_j = f^*(x_j) + \varepsilon_j$, $j = 1,\ldots, n$, where $\varepsilon_j \overset{\textnormal{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2)$ with the equidistant $x_j = j/n, \ j = 1, \ldots, n$, and $\sigma = 0.15$.
The same experiments have also been carried out with $x_i \sim \mathbb{U}[0, 1]$ (not reported here), without any change in the conclusions.
The sample size $n$ varies from $40$ to $400$.
The gradient descent algorithm \eqref{iterations} has been used with the step-size $\eta = (1.2\, \widehat{\mu}_1)^{-1}$ and initialization $F^0 = [0, \ldots, 0]^\top$.
The present comparison involves two regression functions with the same $L_2(\mathbb{P}_n)$-norms of the signal $\lVert f^* \rVert_n \approx 0.28$: $(i)$ a piecewise linear function called "smooth" $f^*(x) = |x - 1/2|-1/2$, and $(ii)$ a "sinus" $f^*(x) = 0.9 \ \textnormal{sin}(8 \pi x) x^2 $.
An illustration of the corresponding curves is displayed in Figure~\ref{fig:funcs}.
To ease the comparison, the piecewise linear regression function was set up as in \cite[Figure 3]{raskutti2014early}.
The case of finite-rank kernels is addressed in Section~\ref{sec.finite.rank.simuls} with the so-called polynomial kernel of degree $3$ defined by $\mathbb{K}(x_1, x_2) = (1 + x_1^{\top}x_2)^3$ on the unit square $[0, 1] \times [0, 1]$.
By contrast, Section~\ref{sec.poly.decay.simuls} tackles the polynomial decay kernels with the first-order Sobolev kernel $\mathbb{K}(x_1, x_2) = \min \{ x_1, x_2 \}$ on the unit square $[0, 1] \times [0, 1]$.
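The simulation design above can be sketched in a few lines; this is our own reconstruction of the setup (data generation and the normalized Gram matrix of the first-order Sobolev kernel), assuming the normalization $K_n = K/n$.

```python
import numpy as np

def simulate(n, sigma=0.15, signal="sinus", seed=0):
    """Generate data from the design y_j = f*(x_j) + eps_j with x_j = j/n."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, n + 1) / n
    if signal == "smooth":
        f_star = np.abs(x - 0.5) - 0.5                 # piecewise linear "smooth" signal
    else:
        f_star = 0.9 * np.sin(8 * np.pi * x) * x ** 2  # "sinus" signal
    y = f_star + sigma * rng.standard_normal(n)
    # first-order Sobolev kernel K(x, x') = min(x, x'), normalized Gram matrix
    K = np.minimum.outer(x, x) / n
    return x, y, f_star, K
```

One can check numerically that $\lVert f^* \rVert_n \approx 0.28$ for both signals, as stated in the text.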
The performance of the early stopping rules is measured in terms of the $L_2(\mathbb{P}_n)$ squared norm $\lVert f^t - f^* \rVert_n^2$ averaged over $N = 100$ independent trials.
For our simulations, we use a variance estimation method that is described in Section~\ref{variance_decay}. This method is asymptotically unbiased, which is sufficient for our purposes.
\begin{figure}[!htb]
\minipage{0.45\textwidth}
\includegraphics[width=1\linewidth]{./fig/smooth.pdf}
\endminipage\hfill
\minipage{0.45\textwidth}
\includegraphics[width=1\linewidth]{./fig/doppler.pdf}
\endminipage\hfill
\caption{Smooth and sinus regression functions}
\label{fig:funcs}
\end{figure}
\subsection{Results of the simulation experiments}\label{sec.simul.experiments.results}
\subsubsection{Finite-rank kernels}\label{sec.finite.rank.simuls}
\begin{figure}
\begin{subfigure}{9cm}
\centering\includegraphics[width=9cm]{./fig/paper_polynom_smooth0.pdf}
\caption{\label{smooth}}
\end{subfigure}
\begin{subfigure}{9cm}
\centering\includegraphics[width=9cm]{./fig/paper_polynom_sinus0.pdf}
\caption{\label{sinus}}
\end{subfigure}
\caption{Kernel gradient descent with the step-size $\eta = 1 / (1.2 \widehat{\mu}_1)$ and polynomial kernel $\mathbb{K}(x_1, x_2) = (1 + x_1^{\top}x_2)^3, \ x_1, x_2 \in [0, 1]$, for the estimation of two noised regression functions from Figure \ref{fig:funcs}: the smooth $f^*(x) = |x - 1/2| - 1/2$ for panel (a), and the "sinus" $f^*(x) = 0.9 \ \textnormal{sin}(8 \pi x)x^2$ for panel (b), with the equidistant covariates $x_j = j/n$. Each curve corresponds to the $L_2(\mathbb{P}_n)$ squared norm error for the stopping rules (\ref{t_or}), (\ref{t_star}), (\ref{t_w}), (\ref{t_vf}), (\ref{tau}) averaged over $100$ independent trials, versus the sample size $n = \{40, 80, 120, 200, 320, 400 \}$.}
\label{fig:finite}
\end{figure}
Figure~\ref{fig:finite} displays the (averaged) $L_2(\mathbb{P}_n)$-norm error of the oracle stopping rule (\ref{t_or}), our stopping rule $\tau$ (\ref{tau}), $t^*$ (\ref{t_star}), minimax-optimal stopping rule $\widehat{\textnormal{T}}_{\textnormal{RWY}}$ (\ref{t_w}), and $4$-fold cross validation stopping rule $\widehat{\textnormal{T}}_{\textnormal{VFCV}}$ (\ref{t_vf}) versus the sample size.
Figure~\ref{smooth} shows the results for the piecewise linear regression function whereas Figure~\ref{sinus} corresponds to the "sinus" regression function.
All the curves decrease as $n$ grows.
From these graphs, the overall worst performance is achieved by $\widehat{\textnormal{T}}_{\textnormal{VFCV}}$, especially for small sample sizes, which can be attributed to the additional randomness induced by the preliminary random splitting in $4$-fold cross-validation.
By contrast, the minimum discrepancy-based stopping rules ($\tau$ and $t^*$) exhibit the best performances compared to the results of $\widehat{\textnormal{T}}_{\textnormal{VFCV}}$ and $\widehat{\textnormal{T}}_{\textnormal{RWY}}$.
The averaged mean-squared error of $\tau$ gets closer to that of $t^*$ as the sample size $n$ increases, as expected from the theory and also intuitively, since $\tau$ was introduced as an estimator of $t^*$.
From Figure~\ref{smooth}, $\widehat{\textnormal{T}}_{\textnormal{RWY}}$ is less accurate for small sample sizes, but improves considerably as $n$ grows, eventually achieving a performance similar to that of $\tau$. This can result from the fact that $\widehat{\textnormal{T}}_{\textnormal{RWY}}$ is built from upper bounds on the bias and variance terms, which are likely to be loose for small sample sizes but achieve an optimal convergence rate as $n$ increases.
In Figure~\ref{sinus}, the reason why $\tau$ exhibits (strongly) better results than $\widehat{\textnormal{T}}_{\textnormal{RWY}}$ is the main assumption on the regression function, namely that $ \lVert f^* \rVert_{\mathcal{H}} \leq 1$, which could be violated for the "sinus" function.
\subsubsection{Polynomial eigenvalue decay kernels}\label{sec.poly.decay.simuls}
Figure~\ref{comp:pdk} displays the resulting (averaged over $100$ repetitions) $L_2(\mathbb{P}_n)$-error of $\tau_\alpha$ (with $\alpha = \frac{1}{\beta + 1} = 0.33$) \eqref{t_alpha}, $\widehat{\textnormal{T}}_{\textnormal{RWY}}$ \eqref{t_w}, $t^*$ \eqref{t_star}, and $\widehat{\textnormal{T}}_{\textnormal{HO}}$ \eqref{t_ho} versus the sample size.
\begin{figure} \label{fig:finite_kernel}
\begin{subfigure}{9cm}
\centering\includegraphics[width=9cm]{./fig/to_the_paper_1.pdf}
\caption{\label{fig.smooth.poly.Decay}}
\end{subfigure}
\begin{subfigure}{9cm}
\centering\includegraphics[width=9cm]{./fig/to_the_paper_2.pdf}
\caption{\label{fig.sinus.poly.Decay}}
\end{subfigure}
\caption{Kernel gradient descent (\ref{iterations}) with the step-size $\eta = 1 / (1.2 \widehat{\mu}_1)$ and Sobolev kernel $\mathbb{K}(x_1, x_2) = \min \{ x_1, x_2\}, \ x_1, x_2 \in [0, 1]$ for the estimation of two noised regression functions from Figure \ref{fig:funcs}: the smooth $f^*(x) = |x - 1/2| - 1/2$ for panel (a) and the "sinus" $f^*(x) = 0.9 \ \textnormal{sin}(8\pi x)x^2$ for panel (b), with the equidistant covariates $x_j = j/n$. Each curve corresponds to the $L_2(\mathbb{P}_n)$ squared norm error for the stopping rules (\ref{t_or}), (\ref{t_star}), (\ref{t_w}), (\ref{t_ho}), (\ref{t_alpha}) with $\alpha = 0.33$, averaged over $100$ independent trials, versus the sample size $n = \{40, 80, 120, 200, 320, 400 \}$.}
\label{comp:pdk}
\end{figure}
Figure~\ref{fig.smooth.poly.Decay} shows that all stopping rules seem to work equally well, although there is a slight advantage for $\widehat{\textnormal{T}}_{\textnormal{HO}}$ and $\widehat{\textnormal{T}}_{\textnormal{RWY}}$ compared to $t^*$ and $\tau_\alpha$. However, as $n$ grows to $n=400$, the performances of all stopping rules become very close to each other. Let us mention that the true value of $\beta$ is not known in these experiments. Therefore, the value $\frac{1}{\beta + 1} = 0.33$ has been estimated from the decay of the empirical eigenvalues of the normalized Gram matrix. This can explain why the performance of $\tau_\alpha$ remains worse than that of $\widehat{\textnormal{T}}_{\textnormal{RWY}}$.
The story described by Figure~\ref{fig.sinus.poly.Decay} is somewhat different.
The first striking remark is that $\widehat{\textnormal{T}}_{\textnormal{RWY}}$ completely fails on this example, which still stems from the (unsatisfied) constraint on the $\mathcal{H}$-norm of $f^*$.
However, the best performance is still achieved by the Hold-out stopping rule, although $\tau_\alpha$ and $t^*$ remain very close to the latter.
The fact that $t^*$ remains close to the oracle stopping rule (without any need for smoothing) supports the idea that the minimum discrepancy is a reliable principle for designing an effective stopping rule. The deficiency of $\tau$ (by contrast with $\tau_\alpha$) then results from the variability of the empirical risk, which does not remain close enough to its expectation. This bad behavior is balanced by introducing the polynomial smoothing at level $\alpha$ within the definition of $\tau_\alpha$, which enjoys close-to-optimal practical performance.
Let us also mention that $\widehat{\textnormal{T}}_{\textnormal{HO}}$ exhibits some variability, in particular for small sample sizes, as illustrated by Figures~\ref{fig.smooth.poly.Decay} and~\ref{fig.sinus.poly.Decay}.
The overall conclusion is that the smoothed minimum discrepancy-based stopping rule $\tau_\alpha$ leads to almost optimal performance provided $\alpha = (\beta+1)^{-1}$, where $\beta$ quantifies the polynomial decay of the empirical eigenvalues of the normalized Gram matrix.
\subsection{Estimation of variance and decay rate for polynomial eigenvalue decay kernels} \label{variance_decay}
The purpose of the present section is to describe two strategies for estimating: $(i)$ the decay rate of the empirical eigenvalues of the normalized Gram matrix, and $(ii)$ the variance parameter $\sigma^2$.
\subsubsection{Polynomial decay parameter estimation}
From the polynomial decay assumption \eqref{beta-polynomial}, one easily derives upper and lower bounds for $\beta$ as
\begin{equation*}
\frac{\log(\widehat{\mu}_i / \widehat{\mu}_{i+1}) - \log(C / c)}{\log(1 + 1/i)} \leq \beta \leq \frac{\log(\widehat{\mu}_i / \widehat{\mu}_{i+1}) + \log(C / c)}{\log(1 + 1/i)}.
\end{equation*}
The difference between these upper and lower bounds is equal to $\frac{2 \log (C / c)}{\log(1 + 1/i)}$, which is minimized for $i=1$. Then the best precision on the estimated value of $\beta$ is reached with $i=1$, which yields the estimator
\begin{equation}\label{beta_est}
\widehat{\beta} = \frac{\log ( \widehat{\mu}_1 / \widehat{\mu}_2 )}{\log2}.
\end{equation}
Note that the estimator $\widehat{\beta}$ from \eqref{beta_est} is not rigorously grounded but only serves as a rough choice in our simulation experiments (see Section~\ref{sec.simul.experiments.results}).
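As a concrete illustration (not part of the original exposition), the estimator \eqref{beta_est} amounts to one line of arithmetic on the two leading empirical eigenvalues. The sketch below, in Python with hypothetical names, assumes the empirical eigenvalues $\widehat{\mu}_1 \geq \widehat{\mu}_2 > 0$ of the normalized Gram matrix have already been computed:

```python
import numpy as np

def estimate_beta(mu):
    """Rough estimate of the polynomial decay exponent beta from the two
    leading empirical eigenvalues mu[0] >= mu[1] > 0 of the normalized
    Gram matrix: beta_hat = log(mu_1 / mu_2) / log(2)."""
    return np.log(mu[0] / mu[1]) / np.log(2.0)

# If the eigenvalues decay exactly as mu_i = i^(-beta), the estimator
# recovers beta, since mu_1 / mu_2 = 2^beta.
beta = 2.0
mu = np.array([1.0, 2.0 ** (-beta)])
print(estimate_beta(mu))  # ≈ 2.0
```

The same one-liner applies to any index $i$, but as noted above the bound gap $2\log(C/c)/\log(1+1/i)$ is smallest at $i=1$.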
\subsubsection{Variance parameter estimation}
There are many proposals for variance estimation with linear smoothers; see, e.g., Section 5.6 of \cite{wasserman2006all}.
In our simulation experiments, two cases are distinguished: the situation where the reproducing kernel has finite rank $r$, and the situation where the empirical eigenvalues of the normalized Gram matrix exhibit a polynomial decay. In both cases, an asymptotically unbiased estimator of $\sigma^2$ is designed.
\paragraph{Finite-rank kernel.}
With such a finite-rank kernel, the noise is estimated from the coordinates $\{ Z_i \}_{i=r+1}^n$, for which $G_i^* = 0, \ i > r$ (see Lemma \ref{zero_coeff} in Appendix \ref{general_appendix}).
Actually, these coordinates (which are pure noise) are exploited to build an easy-to-compute estimator of $\sigma^2$, that is,
\begin{equation} \label{sigma_est_const}
\widehat{\sigma}^2 = \frac{1}{n - r}\sum_{i=r+1}^n Z_i^2.
\end{equation}
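This estimator is a mean of squares over the pure-noise coordinates. A minimal sketch (in Python, with hypothetical names; the vector of rotated observations $Z$ is assumed given):

```python
import numpy as np

def sigma2_finite_rank(Z, r):
    """Noise-variance estimate for a rank-r kernel: the rotated
    observations Z[r:], which carry no signal (G_i^* = 0 for i > r),
    are pure noise, so their mean square estimates sigma^2."""
    n = len(Z)
    return np.sum(Z[r:] ** 2) / (n - r)

# Toy check with a deterministic vector: the last n - r = 4 coordinates
# are all 2, so the estimate is 2^2 = 4.
Z = np.array([10.0, -7.0, 2.0, 2.0, 2.0, 2.0])
print(sigma2_finite_rank(Z, r=2))  # → 4.0
```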
\paragraph{Polynomial decay kernel.}
If the empirical eigenvalues of $K_n$ satisfy the polynomial eigenvalue decay assumption \eqref{beta-polynomial}, we suggest overly-smoothing the residuals by choosing $\alpha=1$, which intuitively results in reducing by a large amount the variability of the corresponding smoothed empirical risk around its expectation, that is,
$\mathbb{E}_{\varepsilon} R_{1, t} \approx R_{1, t}$.
Therefore, the smoothed empirical risk can be approximated by $R_{1, t} \approx B_{1}^2(t) + \frac{\sigma^2}{n}\sum_{i=1}^r \widehat{\mu}_i ( 1 - \gamma_i^{(t)} )^2$, and $$\sigma^2 \approx \frac{R_{1, t} - B_{1}^2(t)}{\frac{1}{n}\sum_{i=1}^r \widehat{\mu}_i (1 - \gamma_i^{(t)})^2}.$$
Using furthermore that $B_{1}^2(t) \to 0$ as $t$ increases to $+\infty$, the final choice is
\begin{align*}\label{estim.sigma}
\widehat{\sigma}^2 = \frac{R_{1, t}}{\frac{1}{n}\sum_{i=1}^r \widehat{\mu}_i (1 - \gamma_i^{(t)})^2}.
\end{align*}
Following the above heuristic argument, let us emphasize that $\widehat{\sigma}^2$ is likely to overestimate the true variance $\sigma^2$, since the non-negative bias term $B_1^2(t)$ has been dropped from the numerator. Nevertheless, the next result justifies this choice.
\begin{lemma} \label{var_est}
Under the polynomial eigenvalue decay assumption \eqref{beta-polynomial}, any value of $t$ satisfying $ t \cdot \eta \widehat{\epsilon}_n^2 \to +\infty$ as $n\to +\infty$ yields that $\widehat{\sigma}^2 = \frac{R_{1, t}}{\frac{1}{n}\sum_{i=1}^r \widehat{\mu}_i (1 - \gamma_i^{(t)})^2}$ is an asymptotically unbiased estimator of $\sigma^2$.
\end{lemma}
A sketch of the proof of Lemma~\ref{var_est} is given in Appendix~\ref{lemma_var_est}.
Based on this lemma, we suggest taking $t = T$, where $T$ is the maximum number of iterations allowed by the computational budget.
Notice that, since a closed-form expression of the estimator is available, there is no need to compute the estimator for every $t$ with $1 \leq t \leq T$.
The final estimator of $\sigma^2$ used in the experiments of Section~\ref{sec.simul.experiments.results} is given by
\begin{equation} \label{sigma_est_full}
\widehat{\sigma}^2 = \frac{R_{1, T}}{\frac{1}{n} \sum_{i=1}^r \widehat{\mu}_i(1 - \gamma_i^{(T)})^2}.
\end{equation}
\section{Conclusion} \label{sec:6}
In this paper, we describe spectral filter estimators (gradient descent, kernel ridge regression) for non-parametric regression in RKHS. Two new data-driven early stopping rules, $\tau$ (\ref{tau}) and $\tau_{\alpha}$ (\ref{t_alpha}), for these iterative algorithms are designed. In more detail, we show that for infinite-rank reproducing kernels, $\tau$ has a high variance due to the variability of the empirical risk around its expectation, and we propose a way to reduce this variability by smoothing the empirical $L_2(\mathbb{P}_n)$-norm (and, as a consequence, the empirical risk) by the eigenvalues of the normalized kernel matrix. We demonstrate in Corollaries \ref{finite_rank_corollary} and \ref{pdk_corollary} that our stopping rules $\tau$ and $\tau_{\alpha}$ yield minimax-optimal rates, in particular, for finite-rank kernel classes and Sobolev spaces. It is worth mentioning that computing our stopping rules (for a general reproducing kernel) requires \textit{only} the estimation of the variance $\sigma^2$ and the computation of $(\widehat{\mu}_1, \ldots, \widehat{\mu}_r)$. The theoretical results are confirmed empirically: $\tau$ and $\tau_{\alpha}$ with the smoothing parameter $\alpha = \frac{1}{\beta + 1}$, where $\beta$ is the polynomial decay rate of the eigenvalues of the normalized Gram matrix, perform favorably in comparison with stopping rules based on hold-out data and 4-fold cross-validation.
There are various open questions raised by our results. A deficiency of our strategy is that the construction of $\tau$ and $\tau_{\alpha}$ is based on the assumption that the regression function belongs to a known RKHS, which restricts (mildly) the smoothness of the regression function. We would like to understand how our results extend to loss functions other than the squared loss (for example, in the classification framework), as was done in \cite{wei2017early}. Another research direction could be to combine early stopping with fast kernel approximation techniques \cite{2015arXiv151005684A, rudi2015less} to avoid the computation of all eigenvalues of the normalized Gram matrix, which can be prohibitive for large-scale problems.
\printbibliography
\newpage
\section{Introduction}
High frequency errors are always present in numerical simulations due to inaccuracies of the spatial derivatives at high wave numbers. These errors can be exacerbated by aliasing or by steep gradients in the solution, which can exist (or arise) in solutions of partial differential equations (PDEs) with predominantly hyperbolic character. To combat such errors, one technique is to use a filter operator that removes high wave number oscillations, e.g. \cite{chaudhur2017,Gassner:2013qf,Hestahven:1008th,meister2012}. The filtering procedure is separate from the scheme itself, and is often done in an ad hoc fashion (filtering is applied ``as little as possible, but as much as needed'').
There exist a multitude of numerical methods to approximate the solution of hyperbolic time-dependent PDEs. The family of discontinuous Galerkin (DG) spectral methods is well-suited for hyperbolic problems due to its high-order nature and ability to propagate waves accurately over long times \cite{Airnsworth2004}. In particular, nodal collocation DG methods are attractive because of their computational efficiency \cite{Kopriva:2009nx}. Additionally, if a nodal DG method is constructed on the Legendre-Gauss-Lobatto (LGL) nodes then the discrete differentiation and integration operators satisfy a summation-by-parts (SBP) property \cite{kreiss1974finite,kreiss1977,strand1994,svard2014} for any polynomial order \cite{gassner_skew_burgers}. This allows for semi-discrete stability estimates to be constructed for such high-order nodal DG methods, e.g \cite{carpenter_esdg,Gassner:2013ij}.
Recently, Lundquist and Nordstr\"{o}m \cite{lundquist2020stable} removed the ad hoc nature of filtering in the context of finite difference methods. Therein, they discuss a contractivity condition on the explicit filter matrix by re-framing the filtering procedure as a transmission problem \cite{nordstrom2018well}. Further, the work in \cite{lundquist2020stable} develops a necessary condition on the existence of an auxiliary filter matrix and on how it must be related to the particular discrete integration (quadrature). Also, an implicit implementation of filtering was proved to be stable, i.e. sufficiency was obtained. The explicit implementation was easier to implement, slightly more accurate, and numerically shown to be stable, but a proof was not obtained.
The goal of the present work is to re-interpret the theoretical work from \cite{lundquist2020stable} into the nodal DG context. In doing so, we will remove the ad hoc nature of the nodal DG filtering procedure and prove that the commonly used explicit filter technique from the spectral community \cite{don1994numerical,hesthaven2008filtering,vandeven1991} is stable in time. Just as in the case of finite difference methods, the temporal stability of the filtering for nodal DG relies upon the existence of a high-accuracy auxiliary filter matrix as well as a semi-discrete bound on the solution.
The remainder of the paper is organized as follows: Section \ref{sec:DGOverview} provides an overview of the nodal DG method and the commonly used filtering procedure. Then, Section \ref{sec:stableFilterProof} generalizes the theoretical time stability results from \cite{lundquist2020stable} into the DG context. Numerical results that support and verify the theoretical findings are given in Section \ref{sec:numResults}. Our concluding remarks and outlook are given in the final section.
\section{Overview of nodal DG approximations}\label{sec:DGOverview}
Discontinuous Galerkin methods are principally designed to approximate solutions of hyperbolic conservation laws \cite{Hestahven:1008th,Kopriva:2009nx}. Here, we consider such a conservation law in one spatial dimension
\begin{equation}\label{eq:linearAdvection}
\pderivative{u}{t} + \pderivative{f}{x} = 0,\qquad x \in [x_L,x_R],
\end{equation}
where $u\equiv u(x,t)$ is the solution and $f\equiv f(u)$ is the flux function. The conservation law is then equipped with an initial condition $u(x,0)\equiv u_{\text{ini}}(x)$ and suitable boundary condition(s). Two prototypical examples of conservation laws are the linear advection equation and Burgers' equation whose corresponding flux functions are
\begin{equation}
\text{linear advection: }\, f(u) = a(x,t) u,\qquad\text{Burgers': }\, f(u) = \frac{u^2}{2}.
\end{equation}
Next, we provide the broad strokes to arrive at the nodal DG approximation of the conservation law \eqref{eq:linearAdvection}. First, we transform the domain $[x_L,x_R]$ to the reference interval $[-1,1]$. To do so, we apply the mapping
\begin{equation}
\xi(x) = 2\left(\frac{x-x_L}{x_R-x_L}\right) - 1,\quad\textrm{such that}\quad \xi\in[-1,1],
\end{equation}
and rewrite the conservation law in the computational coordinate:
\begin{equation}\label{eq:mappedLinAdv}
\pderivative{u}{t} + \pderivative{f}{x} = 0 \qquad\Rightarrow\qquad \frac{\Delta x}{2}\pderivative{u}{t} + \pderivative{f}{\xi} = 0.
\end{equation}
\subsection{The numerical scheme}
The DG approximation is built from the weak form of the mapped equation \eqref{eq:mappedLinAdv}. We list the nodal DG approximation steps below with full details given in \cite{Kopriva:2009nx}:
\begin{enumerate}
\item Multiply by a test function $\varphi$ and integrate over the reference domain.
\item Integrate-by-parts once and resolve discontinuities at the physical boundaries with a numerical flux function $f^*(u^L,u^R)$.
\item Integrate-by-parts again to obtain \textit{strong form DG}.
\item Approximate the solution and flux with nodal polynomials of degree $N$ written in the Lagrange basis, e.g.
\begin{equation}
u(x,t) \approx U(x,t) = \sum_{j=0}^NU_j(t)\ell_j(\xi)
\end{equation}
where the interpolation nodes are taken to be the $N+1$ Legendre-Gauss-Lobatto (LGL) nodes.
\item Select the test function to be the Lagrange polynomials $\varphi = \ell_i(\xi)$ with $i=0,\ldots,N$.
\item Approximate integrals with LGL quadrature such that the DG scheme is collocated.
\item Arrive at the semi-discrete approximation that can be integrated in time.
\end{enumerate}
The resulting strong form, nodal DG approximation is then
\begin{equation}\label{eq:semiDG}
\frac{\Delta x}{2}\dot{\Nvec{U}} + \Dmat\,\Nvec{F} + \Mmat^{-1}\Bmat\left(\Nvec{F}^* - \Nvec{F}\right) = 0
\end{equation}
where
\begin{equation}\label{eq:vecNotation}
\Nvec{U} = \left[U_0,U_1,\ldots,U_N\right]^T
\end{equation}
is the vector form for the degrees of freedom. The matrices in the semi-discrete form \eqref{eq:semiDG} are the discrete derivative matrix \cite{Kopriva:2009nx}
\begin{equation}\label{eq:discDer}
\Dmat_{ij} = \ell'_j(\xi_i),\quad i,j = 0,\ldots,N,
\end{equation}
the matrix $\Mmat$ containing the LGL quadrature weights and the boundary matrix $\Bmat$ given by
\begin{equation}\label{eq:massBndyMats}
\Mmat = \text{diag}(\omega_0,\ldots,\omega_N)\qquad\text{and}\qquad\Bmat = \text{diag}(-1,0,\ldots,0,1).
\end{equation}
There exist several numerical flux functions depending on the particular mathematical flux function $f(u)$, see Toro \cite{toro2009} for details. A general (and simple) numerical flux which we use in the present work is the local Lax-Friedrichs flux
\begin{equation}
F^*(U^L,U^R) = \frac{1}{2}\left(F^L + F^R\right) - \frac{\lambda_{\max}}{2}\left(U^R - U^L\right),
\end{equation}
where $\lambda_{\max}$ is the maximum wave speed of the flux Jacobian.
Two features of this flavour of nodal DG approximation are noteworthy. The diagonal mass matrix $\Mmat$ corresponds to a quadrature rule that is exact for polynomials up to degree $2N-1$, so there is equality between the continuous and discrete integrals
\begin{equation}\label{eq:equalityDiscCont}
\iprodN{V,W} = \Nvec{V}^T\,\Mmat\,\Nvec{W} = \sum_{i=0}^N V_i\Mmat_{ii}W_i = \int\limits_{-1}^1 v\,w\,\mathrm{d}\xi = \iprod{v,w}
\end{equation}
provided the product of the functions $v$ and $w$ is a polynomial of degree $\leq 2N-1$.
From \eqref{eq:equalityDiscCont} the continuous and discrete $L_2$ norms are denoted
\begin{equation}\label{eq:discNorm}
\iprod{v,v} = \inorm{v}^2\quad\text{and}\quad\iprodN{V,V} = \inormN{V}^2.
\end{equation}
The mass and derivative matrices of the LGL collocation DG scheme \eqref{eq:semiDG} form a summation-by-parts (SBP) operator pair \cite{kreiss1974finite,kreiss1977,strand1994,svard2014} for any nodal polynomial order $N$ \cite{gassner_skew_burgers}, i.e.
\begin{equation}\label{eq:SBP}
\Mmat\,\Dmat + (\Mmat\,\Dmat)^T = \Bmat.
\end{equation}
From the SBP property \eqref{eq:SBP}, stable versions of the nodal DG method can be constructed, e.g. \cite{carpenter_esdg,chan2018,gassner_skew_burgers,Gassner:2016ye}.
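The SBP property \eqref{eq:SBP} can also be checked numerically. The sketch below (an illustration, not part of the scheme itself) hard-codes the LGL operators for polynomial degree $N = 2$, whose nodes are $\{-1, 0, 1\}$ with weights $\{1/3, 4/3, 1/3\}$, and verifies $\Mmat\,\Dmat + (\Mmat\,\Dmat)^T = \Bmat$:

```python
import numpy as np

# LGL operators for polynomial degree N = 2 on [-1, 1]: nodes {-1, 0, 1}.
M = np.diag([1/3, 4/3, 1/3])          # LGL quadrature weights
D = np.array([[-1.5,  2.0, -0.5],     # Lagrange derivative matrix,
              [-0.5,  0.0,  0.5],     # D_ij = l_j'(xi_i)
              [ 0.5, -2.0,  1.5]])
B = np.diag([-1.0, 0.0, 1.0])         # boundary matrix

Q = M @ D
print(np.allclose(Q + Q.T, B))  # → True: the SBP property holds
```

The same check passes for any $N$ once the LGL nodes, weights, and derivative matrix are computed, which is the content of the cited result \cite{gassner_skew_burgers}.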
\subsection{Construction of filtering for nodal DG}\label{sec:standardDG}
In the context of DG methods, the general idea of filtering exploits the fact that the polynomial representation of the function $U$ is unique, and hence can be written in terms of other polynomial basis functions. In short, the filtering procedure is:
\begin{enumerate}
\item Transform the coefficients of the nodal approximation into a modal set of basis functions, e.g., the orthogonal Legendre polynomials $\{L_j(\xi)\}_{j=0}^N$
\begin{equation}
U(\xi,t) = \sum_{j=0}^N U_j(t)\ell_j(\xi) = \sum_{j=0}^N\widetilde{U}_j(t)L_j(\xi).
\end{equation}
\item Because the modal basis is hierarchical, it is straightforward to perform a cutoff in modal space to filter the high-order modes.
\item Transform the filtered modal solution coefficients back to the nodal Lagrange basis.
\end{enumerate}
At present, we select the modal (normalized) Legendre basis polynomials $\{L_j(\xi)\}_{j=0}^N$ to construct the filtering. It is straightforward to compute the Vandermonde matrix $\Vmat$ associated with the LGL nodal interpolation nodes $\{\xi_i\}_{i=0}^N$
\begin{equation}\label{eq:Vandermonde}
\Vmat_{ij} = L_j(\xi_i),\quad i,j = 0,\ldots,N,
\end{equation}
which allows us to transform the nodal degrees of freedom, $\{U_i\}_{i=0}^N$, to modal degrees of freedom, $\{\widetilde{U}_j\}_{j=0}^N$ and vice versa
\begin{equation}
\Nvec{U} = \Vmat\Nvec{\widetilde{U}}\quad\text{and}\quad\Nvec{\widetilde{U}} = \Vinvmat\Nvec{U}.
\end{equation}
The modal cutoff matrix is then constructed as \cite{vandeven1991}
\begin{equation}
\Cmat_{ij} = \delta_{ij}\sigma_i,\quad i,j=0,\ldots,N,
\end{equation}
where $\Cmat$ is a diagonal modal cutoff matrix. The conditions on the filter function $\sigma(\eta)$, as defined by Vandeven \cite{vandeven1991}, are:
\begin{itemize}
\renewcommand\labelitemi{\textbullet}
\item $\sigma: \mathbb{R}^+ \mapsto[0,1]$
\item $\sigma(\eta)$ must have $s$ continuous derivatives where
\begin{equation}\label{eq:Vandevenfilter}
\begin{cases}
\sigma(0) = 1&\\[0.05cm]
\sigma^{(k)}(0) = 0,& k = 1,\ldots,s-1\\[0.05cm]
\sigma(\eta) = 0,& \eta\geq 1\\[0.05cm]
\sigma^{(k)}(1) = 0, & k = 1,\ldots,s-1.
\end{cases}
\end{equation}
\end{itemize}
In the nodal DG community a typical choice is an exponential filter function, e.g. \cite{chaudhur2017,Gassner:2013qf,Hestahven:1008th}, to define the coefficients
\begin{equation}\label{eq:filterSigmas}
\sigma_i = \begin{cases}
1 & \textrm{if }0 \leq i \leq N_c -1\\[0.1cm]
\exp\left(-\alpha\left(\frac{i+1-N_c}{N+1-N_c}\right)^s\right) & \textrm{if }N_c \leq i \leq N
\end{cases}
\end{equation}
where $\alpha$, $s$ and $N_c$ are the filter parameters. The value $N_c$ indicates the number of the unaffected modes, $\alpha$ is chosen such that $\exp(-\alpha)$ is machine epsilon and $s$ is an even number determining the order (sometimes referred to as the strength) of the filter. Two common choices for the filtering parameters are to take $\alpha = 36$, $N_c = 4$ and the filter strength is either ``strong'' with $s = 16$ or ``weak'' with $s = 32$ \cite{chaudhur2017,Gassner:2013qf,Hestahven:1008th}.
For any choice of the filter parameters, the filter coefficients are constructed such that $0\leq\sigma_i\leq 1$. Note that this exponential filter does not strictly adhere to Vandeven's definition of the filter function \eqref{eq:Vandevenfilter}, but it does so in practice by choosing $\alpha$ such that $\sigma(1)$ is below machine accuracy \cite{canuto2006}.
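The exponential filter coefficients \eqref{eq:filterSigmas} are cheap to evaluate. A minimal sketch (in Python, with the typical parameters quoted above; the function name is ours):

```python
import numpy as np

def filter_sigmas(N, Nc=4, alpha=36.0, s=16):
    """Exponential filter coefficients: the first Nc modes pass
    unchanged, higher modes are damped down to exp(-alpha), which for
    alpha = 36 is on the order of double-precision machine epsilon."""
    sigma = np.ones(N + 1)
    i = np.arange(Nc, N + 1)
    sigma[Nc:] = np.exp(-alpha * ((i + 1 - Nc) / (N + 1 - Nc)) ** s)
    return sigma

sigma = filter_sigmas(N=15)
print(sigma[0])   # → 1.0 (low modes are untouched)
print(sigma[-1])  # exp(-36), i.e. roughly machine epsilon
```

Choosing $s = 32$ instead of $s = 16$ keeps the profile closer to 1 over more modes, which is why the larger $s$ is the ``weak'' filter.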
In summary, the filter matrix for the nodal DG approximation is given as
\begin{equation}\label{HWFilter}
\Filtermat = \Vmat\,\Cmat\,\Vinvmat.
\end{equation}
The filter of the form \eqref{HWFilter} retains the high-order accuracy of the nodal DG approximation for smooth functions \cite{hesthaven2008filtering,vandeven1991}, as shown numerically in Section~\ref{sec:conv}.
\begin{rem}
Filtering has often been used as a stabilization technique for numerical methods such as finite difference \cite{kennedy1997,yee1999} as well as discontinuous Galerkin \cite{chaudhur2017,Gassner:2013qf,Hestahven:1008th} methods. We strongly advise against such use. Instead, one should first construct a (provably) stable numerical scheme. After this, the solution quality can be addressed and cleaned up, possibly using filtering.
\end{rem}
\section{Stability}\label{sec:stableFilterProof}
As previously mentioned, it is possible to develop semi-discrete stability estimates for the nodal DG approximation via the SBP property \eqref{eq:SBP}.
The filtering is a separate procedure which changes the approximate solution during the time integration procedure, for example after each explicit time step or even after each stage of a Runge-Kutta method. Here, we explore what influence this filtering step has on the stability estimate for the nodal DG approximation.
To discuss the filtering procedure and its effect on stability, we re-interpret the work on provably stable filtering from \cite{lundquist2020stable} in the nodal DG context.
In a broad sense, with homogeneous boundary conditions, semi-discrete stability ensures that the discrete norm of the approximate solution is bounded by the discrete norm of the initial conditions, see \cite{Nordstrom:2016jk} for complete details. For the nodal DG approximation such a stability statement takes the form
\begin{equation}
\inormN{U(t)} \leq \inormN{U_{\text{ini}}},
\end{equation}
where $U_{\text{ini}}$ is the initial condition evaluated at the LGL nodes.
Pursuant to the work \cite{lundquist2020stable}, we view the application of a filter matrix to a discrete solution at some intermediate time $t_1$ as a transmission problem \cite{nordstrom2018well}:
\begin{equation}\label{eq:transmissionProblem}
\begin{aligned}
\Nvec{U}_t + \mathbb{D}(\Nvec{U}) &= 0,\qquad 0 \leq t \leq t_1\\[0.1cm]
\Nvec{V}_t + \mathbb{D}(\Nvec{V}) &= 0,\qquad t \geq t_1\\[0.1cm]
\Nvec{U}(0) &= \Nvec{U}_{\text{ini}}\\[0.1cm]
\Nvec{V}(t_1) &= \Filtermat\Nvec{U}(t_1)\\[0.1cm]
\end{aligned}
\end{equation}
where the operator $\mathbb{D}$ contains the derivative matrix $\Dmat$ as well as the boundary conditions. For the present discussion, the filtering stated in the final line of \eqref{eq:transmissionProblem} is performed in an explicit fashion.
For stability it must hold that the filter is \textit{contractive}, i.e.
\begin{equation}\label{eq:discNormContract}
\inormN{V(t_1)} \leq \inormN{U(t_1)}.
\end{equation}
In turn, this contractive property guarantees that the filter procedure is \textit{stable} because
\begin{equation}
\inormN{V(t_1)} \leq \inormN{U(t_1)} \leq \inormN{U_{\text{ini}}}.
\end{equation}
The contractivity property in the discrete norm \eqref{eq:discNormContract} then implies that the following contractivity condition on the explicit filter matrix $\Filtermat$ must hold
\begin{equation}\label{eq:contract}
\Filtermat^T\,\Mmat\,\Filtermat - \Mmat \leq 0.
\end{equation}
This contractive matrix property was first identified in \cite{nordstrom2018well} for stable transmission problems. The contractivity condition \eqref{eq:contract} expresses a precise interplay between the filter matrix and the mass matrix. As demonstrated in \cite{lundquist2020stable}, a necessary condition for the explicit filter matrix to satisfy \eqref{eq:contract} is that an auxiliary filter matrix
\begin{equation}\label{eq:auxFilter}
\explicitFiltermat = \Mmat^{-1}\Filtermat^T\Mmat,
\end{equation}
exists and possesses the same accuracy as $\Filtermat$. The accuracy requirement on $\explicitFiltermat$ is necessary since otherwise \eqref{eq:contract} is provably indefinite \cite{lundquist2020stable}.
We will show that the auxiliary filter matrix \eqref{eq:auxFilter} is identical to the original filter matrix \eqref{HWFilter} for the LGL nodal DG approximation. Furthermore, the filter matrix $\Filtermat$ indeed satisfies the contractivity condition \eqref{eq:contract}. Both results require the following Lemma.
\begin{lemma}\label{lem:quad}
The matrix product $\Vmat^T\Mmat\,\Vmat$ is the LGL quadrature rule applied to the (normalized) Legendre polynomial functions $\{L_j(\xi)\}_{j=0}^N$ and results in a diagonal matrix
\begin{equation}\label{eq:quadMatDef}
\Vmat^T\Mmat\,\Vmat = \textnormal{diag}\left(1,1,\ldots,1,2 + \frac{1}{N}\right) \coloneqq \quadmat.
\end{equation}
\end{lemma}
\begin{proof}
The entries of this matrix product, written in terms of discrete inner products, are
\begin{equation}\label{eq:quadVandermonde}
\Vmat^T\Mmat\,\Vmat = \begin{bmatrix}
\inormN{L_0}^2 & \iprodN{L_0,L_1} & \cdots & \iprodN{L_0,L_N} \\[0.05cm]
\iprodN{L_1,L_0} & \inormN{L_1}^2 & \cdots & \iprodN{L_1,L_N} \\[0.05cm]
\vdots & \vdots & \ddots & \vdots\\[0.05cm]
\iprodN{L_N,L_0} & \iprodN{L_N,L_1} & \cdots & \inormN{L_N}^2 \\[0.05cm]
\end{bmatrix}.
\end{equation}
From the accuracy of the LGL quadrature and the fact that $\{L_j(\xi)\}_{j=0}^N$ are polynomials, we have equality between the discrete and continuous inner products
\begin{equation}\label{eq:LegendreInner}
\iprodN{L_j,L_k} = \iprod{L_j,L_k} = \inorm{L_j}^2\delta_{jk} = \delta_{jk},
\end{equation}
provided $j+k \leq 2N-1$. The result above utilizes that the Legendre basis is orthonormal. The quadratures in \eqref{eq:quadVandermonde} are therefore exact for all inner products except the one related to $L_N$ with itself since $2N > 2N-1$. Thus,
\begin{equation}\label{eq:preK}
\Vmat^T\Mmat\,\Vmat = \text{diag}(1,1,\ldots,1,\inormN{L_N}^2).
\end{equation}
The discrete and continuous norms are equivalent \cite{canuto2006}. In particular, for the LGL quadrature the highest Legendre mode satisfies $\inormN{L_N} = \sqrt{2 + 1/N}\,\inorm{L_N}$ \cite{ISI:A1982NE30900005}.
Using this fact, \eqref{eq:preK} becomes the diagonal matrix $\quadmat$.
\end{proof}
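Lemma~\ref{lem:quad} can also be verified numerically for small $N$. The sketch below (an illustration only) builds the Vandermonde matrix of the normalized Legendre polynomials at the $N = 2$ LGL nodes and checks that $\Vmat^T\Mmat\,\Vmat = \mathrm{diag}(1, 1, 2 + 1/N)$:

```python
import numpy as np

# N = 2: LGL nodes and quadrature weights on [-1, 1].
xi = np.array([-1.0, 0.0, 1.0])
M = np.diag([1/3, 4/3, 1/3])

# Vandermonde matrix of the normalized Legendre polynomials
# (unit continuous L2 norm) evaluated at the LGL nodes.
V = np.column_stack([
    np.full(3, 1/np.sqrt(2)),           # L_0(x) = 1/sqrt(2)
    np.sqrt(3/2) * xi,                  # L_1(x) = sqrt(3/2) x
    np.sqrt(5/2) * (3*xi**2 - 1) / 2,   # L_2(x) = sqrt(5/2)(3x^2 - 1)/2
])

K = V.T @ M @ V
print(np.allclose(K, np.diag([1.0, 1.0, 2.5])))  # → True: diag(1, ..., 1, 2 + 1/N)
```

Only the $(N,N)$ entry deviates from the identity, exactly because the LGL quadrature is inexact for the degree-$2N$ integrand $L_N^2$.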
We can now prove
\begin{proposition}\label{prop:auxFilter}
The auxiliary filter is identical to the DG filter matrix, i.e. $\explicitFiltermat = \Filtermat$.
\end{proposition}
\begin{proof}
We examine the difference between the two filter matrices
\begin{equation}
\explicitFiltermat - \Filtermat = \Mmat^{-1}\Filtermat^T\Mmat - \Filtermat = \Mmat^{-1}\mathcal{V}^{-T}\Cmat\,\Vmat^T\Mmat - \Vmat\,\Cmat\,\Vinvmat.
\end{equation}
Next, we factor out the matrix $\Vmat$ on the left and $\Vinvmat$ on the right to have
\begin{equation}
\begin{aligned}
\explicitFiltermat - \Filtermat &=\Vmat\left[\left(\Vinvmat\Mmat^{-1}\mathcal{V}^{-T}\right)\Cmat\left(\Vmat^T\Mmat\Vmat\right) - \Cmat\right]\Vinvmat\\[0.15cm]
&=\Vmat\left[\left(\Vmat^T\Mmat\Vmat\right)^{-1}\Cmat\left(\Vmat^T\Mmat\Vmat\right) - \Cmat\right]\Vinvmat.
\end{aligned}
\end{equation}
Applying the result from Lemma~\ref{lem:quad} gives
\begin{equation}
\explicitFiltermat - \Filtermat =\Vmat\left[\quadmat^{-1}\Cmat\quadmat - \Cmat\right]\Vinvmat =\Vmat\left[\quadmat^{-1}\quadmat\Cmat - \Cmat\right]\Vinvmat = 0,
\end{equation}
where we use that the matrices $\Cmat$ and $\quadmat$ are diagonal to obtain the desired result.
\end{proof}
\begin{rem}
The accuracy of the filter $\Filtermat$ lies entirely in the filter function $\sigma$ used to create the diagonal entries in the matrix $\Cmat$, as shown by Vandeven \cite{vandeven1991}.
\end{rem}
The following result is then self-evident.
\begin{corollary}
The auxiliary filter $\explicitFiltermat$ exists and is as accurate as $\Filtermat$.
\end{corollary}
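Proposition~\ref{prop:auxFilter} is also easy to check numerically. The sketch below (ours; an exponential filter function serves as a stand-in for the modal cutoff matrix $\Cmat$, since any diagonal $\Cmat$ works) confirms $\Mmat^{-1}\Filtermat^T\Mmat = \Filtermat$ to machine precision:

```python
import numpy as np
from numpy.polynomial import legendre as leg

N = 8
cN = np.zeros(N + 1); cN[N] = 1.0
x = np.concatenate(([-1.0], np.sort(leg.legroots(leg.legder(cN))), [1.0]))
w = 2.0 / (N * (N + 1) * leg.legval(x, cN) ** 2)       # LGL weights
V = np.column_stack([np.sqrt((2 * j + 1) / 2.0) * leg.legval(x, np.eye(N + 1)[j])
                     for j in range(N + 1)])
M = np.diag(w)                                         # diagonal LGL mass matrix

# any diagonal modal cutoff works; here an exponential filter function
sigma = np.exp(-36.0 * (np.arange(N + 1) / N) ** 8)
F = V @ np.diag(sigma) @ np.linalg.inv(V)              # nodal DG filter matrix
F_aux = np.linalg.inv(M) @ F.T @ M                     # auxiliary filter matrix
assert np.allclose(F_aux, F)
```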
Further, the result from Lemma~\ref{lem:quad} allows us to prove
\begin{proposition}\label{prop:contract}
The nodal DG filter matrix $\Filtermat$ is contractive in the sense of \eqref{eq:contract}.
\end{proposition}
\begin{proof}
We substitute the form of the filter matrix \eqref{HWFilter} into the contractivity condition \eqref{eq:contract} to obtain
\begin{equation}\label{eq:coerce}
\Filtermat^T\,\Mmat\,\Filtermat - \Mmat = (\Vmat\,\Cmat\,\Vinvmat)^T\,\Mmat\,(\Vmat\,\Cmat\,\Vinvmat) - \Mmat
= (\Vinvmat)^T\Cmat\,(\Vmat^T\Mmat\,\Vmat)\,\Cmat\,\Vinvmat - \Mmat.
\end{equation}
The middle term, $\Vmat^T\Mmat\,\Vmat$, grouped above is precisely that from Lemma~\ref{lem:quad}, which gives
\begin{equation}\label{eq:coerce2}
\Filtermat^T\,\Mmat\,\Filtermat - \Mmat = (\Vinvmat)^T\Cmat\,\quadmat\,\Cmat\,\Vinvmat - \Mmat.
\end{equation}
Next, we recall that the modal cutoff matrix is diagonal with entries $\{\sigma_i\}_{i=0}^N$ and, by construction, the term $\sigma_N = 0$, see \eqref{eq:Vandevenfilter}. Thus,
\begin{equation}\label{eq:diagCmat}
\Cmat = \text{diag}(\sigma_0,\sigma_1,\ldots,\sigma_{N-1},0).
\end{equation}
We combine this fact with the diagonal quadrature matrix result \eqref{eq:quadMatDef} to simplify the middle term of \eqref{eq:coerce2} to be $\Cmat\,\quadmat\,\Cmat = \Cmat^2$. The contractivity condition then becomes
\begin{equation}
\Filtermat^T\,\Mmat\,\Filtermat - \Mmat = (\Vinvmat)^T\Cmat^2\Vinvmat - \Mmat = (\Vinvmat)^T\left(\Cmat^2 - \quadmat\right)\Vinvmat,
\end{equation}
where we, again, apply the result from Lemma~\ref{lem:quad}. From \eqref{eq:quadMatDef} and \eqref{eq:diagCmat} we see that
\begin{equation}
\Cmat^2 - \quadmat = \text{diag}\left(\sigma^2_0-1,\sigma^2_1-1,\ldots,\sigma^2_{N-1}-1,-2-\frac{1}{N}\right) \leq 0,
\end{equation}
because $0\leq\sigma_i\leq 1$ for $i=0,\ldots,N$.
Thus, \eqref{eq:contract} holds.
\end{proof}
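Contractivity can likewise be observed numerically. In the sketch below (ours; again with an exponential filter whose highest mode is explicitly clipped so that $\sigma_N = 0$), all eigenvalues of $\Filtermat^T\Mmat\,\Filtermat - \Mmat$ are non-positive up to roundoff:

```python
import numpy as np
from numpy.polynomial import legendre as leg

N = 10
cN = np.zeros(N + 1); cN[N] = 1.0
x = np.concatenate(([-1.0], np.sort(leg.legroots(leg.legder(cN))), [1.0]))
w = 2.0 / (N * (N + 1) * leg.legval(x, cN) ** 2)       # LGL weights
V = np.column_stack([np.sqrt((2 * j + 1) / 2.0) * leg.legval(x, np.eye(N + 1)[j])
                     for j in range(N + 1)])
M = np.diag(w)

sigma = np.exp(-36.0 * (np.arange(N + 1) / N) ** 8)
sigma[N] = 0.0                        # "clip" the highest mode
F = V @ np.diag(sigma) @ np.linalg.inv(V)

S = F.T @ M @ F - M                   # should be negative semi-definite
eigs = np.linalg.eigvalsh(0.5 * (S + S.T))
assert np.all(eigs <= 1e-12)
```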
\begin{rem}
The proof above holds provided the filter function is chosen such that $\sigma(\eta)\in[0,1]$ and it ``clips'' the highest mode, i.e. $\sigma(N) = 0$; then the nodal DG filter matrix $\Filtermat$ satisfies the contractivity condition \eqref{eq:contract}. Thus, other proposed filter functions, like those found in Hussaini et al. \cite{hussaini1985spectral}, also produce a provably stable filter matrix.
\end{rem}
\section{Numerical results}\label{sec:numResults}
Here we apply the nodal DG filter matrix $\Filtermat$ \eqref{HWFilter} to several test problems. For these tests we select the parameters for a ``strong'' version of the nodal DG filter $\Filtermat$ from Section~\ref{sec:standardDG}. We do so to demonstrate, in practice, the high-order accuracy of the filtered DG approximation for a smooth solution and how it performs for test cases that develop non-smooth solutions over time. To integrate the semi-discrete DG approximation \eqref{eq:semiDG} in time, we use a third-order Runge-Kutta method from Williamson \cite{williamson1980}.
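For reference, Williamson's three-stage low-storage scheme can be sketched as follows (the coefficients are the standard ones from the 1980 paper; the convergence test on $u' = -u$ is our own):

```python
import numpy as np

# Williamson's 2N-storage, three-stage, third-order Runge-Kutta coefficients
A = (0.0, -5.0 / 9.0, -153.0 / 128.0)
B = (1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0)

def rk3_step(u, dt, rhs):
    du = np.zeros_like(u)
    for a, b in zip(A, B):
        du = a * du + dt * rhs(u)     # update the single storage register
        u = u + b * du
    return u

def integrate(u0, T, dt, rhs):
    u = np.array(u0, dtype=float)
    for _ in range(int(round(T / dt))):
        u = rk3_step(u, dt, rhs)
    return u

# third-order convergence check on u' = -u, u(0) = 1
err = lambda dt: abs(integrate([1.0], 1.0, dt, lambda u: -u)[0] - np.exp(-1.0))
ratio = err(0.05) / err(0.025)
assert 6.0 < ratio < 10.0             # error drops ~8x when dt is halved
```

For this linear test the stability function of the scheme is exactly $1 + z + z^2/2 + z^3/6$, so the global error scales as $\Delta t^3$.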
\subsection{High-order convergence for linear advection}\label{sec:conv}
For the convergence test we consider the linear advection equation with flux function $f(u)=a u$ and take the wave speed to be the constant $a=1$ on the domain $\Omega = [0,1]$.
We use a smooth Gaussian pulse to set the initial and boundary conditions as well as compute errors
\begin{equation}
u(x,t) = \exp\left(-\zeta\left(x - 0.25 - t\right)^2\right),\quad\text{with}\quad \zeta = \frac{\ln(2)}{0.2^2}.
\end{equation}
We vary the polynomial degree $N$ over the range $[7,64]$ and integrate up to the final time $T=0.5$. In Fig.~\ref{fig:spec_conv} we present a semilog plot of the $L_{\infty}$ error versus the polynomial order. We see that the error decays exponentially until the errors in the approximation are dominated by the time integration. Thereafter, we halve the time step size and see that the stagnation point of the error drops by a factor of eight, as expected for a third-order time integration technique.
\begin{figure}[htbp]
\begin{center}
{
{\includegraphics[scale=0.49, trim=8 11 10 10, clip]{figs/eoc_solution.pdf}}
}
{
{\includegraphics[scale=0.34, trim=30 170 65 180, clip]{figs/spec_cge.pdf}}
}
\caption{(\textit{left}) Exact and approximate solution with $N=29$ for the linear advection with smooth Gauss pulse at $T=0.5$. (\textit{right}) Space and time convergence.}
\label{fig:spec_conv}
\end{center}
\end{figure}
This experimentally demonstrates that the nodal DG approximation filtered in every time step remains high-order accurate in space and time for a smooth solution.
\subsection{Variable wave speed for linear advection}
Next, we consider a more complicated test proposed in \cite{hesthaven2008filtering}. For this case, the solution remains bounded, but develops steep gradients. Due to the high-degree polynomial approximation of the DG method, spurious oscillations can develop near these steep gradients and propagate throughout the domain, polluting the solution quality.
We consider the linear advection problem with a variable wave speed on the domain $\Omega=[-1,1]$ written in the form
\begin{equation}\label{eq:varCoeff}
\pderivative{u}{t} + a(x)\pderivative{u}{x} = 0,\quad\text{where}\quad a(x) = \frac{1}{\pi}\sin(\pi x - 1).
\end{equation}
This wave speed remains positive at the boundaries of the domain, but it can change sign within the domain. We take the initial condition to be
$u_{\text{ini}}(x) = \sin(\pi x)$
which gives the corresponding analytical solution \cite{gottlieb1981stability}
\begin{equation}\label{eq:varCoeffAnalytical}
u(x,t) = \sin\left(2\tan^{-1}\left(e^{-t}\tan\left(\frac{\pi x -1}{2}\right)\right) + 1\right).
\end{equation}
The solution in \eqref{eq:varCoeffAnalytical} develops a steep gradient around the point $x = (1-\pi)/\pi\approx -0.68$ before it finally decays to a constant value
$\lim_{t\to\infty}u(x,t) = \sin(1)$.
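A quick finite-difference check (our own sketch) confirms that \eqref{eq:varCoeffAnalytical} indeed satisfies \eqref{eq:varCoeff}: the residual $u_t + a(x)u_x$ vanishes to the accuracy of the central differences at sample points away from the steep gradient.

```python
import numpy as np

a = lambda x: np.sin(np.pi * x - 1.0) / np.pi
u = lambda x, t: np.sin(2.0 * np.arctan(np.exp(-t)
                        * np.tan((np.pi * x - 1.0) / 2.0)) + 1.0)

h = 1e-5                                   # central-difference step
for x0 in (0.0, 0.2, 0.4):
    for t0 in (0.5, 1.0):
        u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
        u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
        assert abs(u_t + a(x0) * u_x) < 1e-6
```

At $t=0$ the formula also collapses to the initial condition, since $\arctan(\tan(\theta)) = \theta$ for the relevant branch.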
We compare the analytical and approximate solutions at a final time $T=4$ on a single spectral element using polynomial order $N=256$ and $\Delta t = 1/2000$ as the explicit time step. The polynomial order and general setup are chosen such that a comparison can be made to the results in \cite{hesthaven2008filtering}. In Figure \ref{fig:varCoeff} we present the unfiltered approximation on the left and the filtered approximation on the right. Clearly, the unfiltered DG scheme contains spurious oscillations whereas the filtered solution suppresses such behaviour. Also, qualitatively, the results of the new DG filter scheme are very similar to those from \cite{hesthaven2008filtering}.
\begin{figure}[htbp]
\begin{center}
{
{\includegraphics[scale=0.49, trim=10 12 10 10, clip]{figs/unfiltered.pdf}}
}
{
\includegraphics[scale=0.49, trim=35 12 10 10, clip]{figs/filtered.pdf}
\vspace{-0.3cm}
}
\caption{Unfiltered (\textit{left}) and filtered (\textit{right}) nodal DG solution for the variable wave speed problem \eqref{eq:varCoeff} at $T=4$ with polynomial order $N=256$ on a single spectral element.}
\label{fig:varCoeff}
\end{center}
\end{figure}
\subsection{Stability demonstration for Burgers'}
This example is designed to illustrate the importance of the stability of the underlying spatial discretization and how the filtering can influence the behavior of the solution in time. For this we consider Burgers' equation and two forms of the nodal DG spatial discretization. One discretizes the conservative form of the PDE where the nonlinear Burgers' flux is simply $f(u) = u^2/2$ while the other skew-symmetric discretization writes the spatial derivative of the flux in a split formulation
\begin{equation}
f^{\text{skew}}_x(u) = \frac{2}{3}\left(\frac{u^2}{2}\right)_{\!x} + \frac{1}{3}\left(uu_x\right).
\end{equation}
On the continuous level these two forms of Burgers' equation are equivalent; however, on the discrete level they exhibit different behavior. Most notably, the solution energy $u^2/2$ is bounded for the nodal DG discretization constructed from the skew-symmetric formulation whereas no such bound exists for the discretization of the conservative form, see \cite{gassner_skew_burgers} for complete details.
Therefore, the discretization of the skew-symmetric form is provably stable and the conservative form \textit{is not}. As discussed in the previous Sections, the DG filtering is a procedure divorced from the spatial discretization. If the underlying numerical scheme is provably stable, and the filtering is contractive, it will remove energy from the solution in a stable way. However, if the underlying scheme is unstable the filtering still removes energy and the approximation \textit{might} be stable, but no further conclusions regarding the solution energy can be drawn.
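The energy mechanics behind this can be illustrated with a minimal sketch (ours, with a periodic central-difference matrix as a simplified stand-in for the spatial operator; the key property it shares with the periodic spectral operator is antisymmetry with respect to the discrete norm): the split form is exactly energy-neutral, while the conservative form is not once the data are under-resolved.

```python
import numpy as np

# periodic, antisymmetric central-difference matrix on n nodes with spacing h
n, h = 64, 2.0 / 64
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
D[0, -1], D[-1, 0] = -1 / (2 * h), 1 / (2 * h)

rng = np.random.default_rng(42)
u = rng.random(n)                           # generic (aliased) nodal data

rhs_cons = -D @ (u**2 / 2)                                # conservative form
rhs_skew = -(2 / 3) * D @ (u**2 / 2) - (1 / 3) * u * (D @ u)  # split form

# discrete energy rate: d/dt (h * sum u^2 / 2) = h * u . rhs
assert abs(h * u @ rhs_skew) < 1e-12    # skew form: exactly energy-neutral
assert abs(h * u @ rhs_cons) > 1e-8     # conservative form: not
```

The cancellation for the split form is algebraic: with $D^T = -D$, the two thirds-weighted terms are equal and opposite, independent of the data $u$.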
To illustrate this we consider the domain $\Omega=[0,2]$ with periodic boundary conditions and the initial condition
\begin{equation}\label{eq:burgersinitial}
u_{\text{ini}}(x) = \frac{1}{5}[1 + \cos(\pi x)].
\end{equation}
We run the simulation with polynomial order $N=128$ up to the final time $T=2.25$. Further, the solution is filtered at 16 equally spaced times during the simulation. Due to the nonlinear nature of Burgers' equation the initial condition will steepen and eventually a shock will form.
We run four variants of the nodal DG scheme relevant to the present discussion:
\begin{itemize}[label=$\bullet$]
\item Conservative formulation; Unfiltered.
\item Conservative formulation; Filtered.
\item Skew-symmetric formulation; Unfiltered.
\item Skew-symmetric formulation; Filtered.
\end{itemize}
On the left in Figure~\ref{fig:burgers} we present the evolution of the solution energy, normalized with its initial value, over time. On the right in Figure~\ref{fig:burgers} we give the approximate solution at the final time produced by the filtered skew-symmetric DG formulation as well as a reference solution created with a standard finite volume scheme on 10000 grid cells.
\begin{figure}[htbp]
\begin{center}
{
\includegraphics[scale=0.49, trim=5 12 10 12, clip]{figs/stable/time_stable_energy_longer_time.pdf}
}
{
\includegraphics[scale=0.49, trim=10 12 10 12, clip]{figs/burgers_sol/burgers_filt.pdf}
}
\caption{(\textit{left}) Solution energy evolution of four nodal DG variants for Burgers' equation with the initial condition \eqref{eq:burgersinitial}. (\textit{right}) Plot of the filtered skew-symmetric solution ($N=128$) and a reference finite volume solution at $T=2.25$.}
\label{fig:burgers}
\end{center}
\end{figure}
Due to the high polynomial order, the simulation is well resolved and the four variants are nearly indistinguishable for most of the simulation time. However, as the gradients steepen we see that the conservative formulation, for which no energy stability statement exists, behaves erratically. The unfiltered conservative form simulation crashes at $T\approx 1.8$. The filtered conservative form simulation runs successfully because the filtering keeps the solution energy ``under control.'' This is illustrated in Figure~\ref{fig:burgers}, where we observe growth in the solution energy between the filter applications because the underlying spatial discretization is unstable. The solution energies of the unfiltered and filtered skew-symmetric simulations both remain bounded because the underlying spatial discretization possesses an energy bound. Note, there is a small amount of dissipation in the solution energy for the unfiltered skew-symmetric scheme due to the formation of the shock \cite{jameson2008_energy}. Further, the filtered skew-symmetric simulation is less energetic, as expected, because the act of filtering removes some solution energy.
\section{Closing remarks}
We proved that the commonly used nodal DG filter matrix $\Filtermat$ satisfied a contractivity condition. Further, we proved that a high-order auxiliary filter matrix $\explicitFiltermat$ exists for the nodal DG approximation which is necessary for the contractivity condition to be satisfied. Together, these results implied that the explicit filtering procedure in the context of nodal DG methods ``removes'' information in a stable way when measured in the norm induced by the Legendre-Gauss-Lobatto quadrature.
Numerical results were provided to demonstrate and verify that the filtering retained the high-order accuracy of the nodal DG approximation, that it suppressed spurious oscillations near steep gradients and was stable provided the underlying spatial discretization of the method had a semi-discrete bound.
The generalization of the results described in this work to multiple spatial dimensions is straightforward due to the tensor product nature of the nodal DG method. The same is true for problems involving curved physical boundaries, similar to those discussed in \cite{lundquist2020stable}. An interesting open question for future work is the extension of provably stable filtering to problems involving multiple DG elements where the interface coupling will play a key role.
\section{Declarations}
\subsection{Funding}
This work was supported by Vetenskapsr{\aa}det, Sweden (award number 2018-05084 VR).
\subsection{Conflicts of interest}
The authors declare that they have no conflicts of interest in the present work.
\subsection{Availability of data and material}
Not applicable.
\subsection{Code availability}
The code used to generate the results in this work is available upon request from Andrew Winters ([email protected]).
\bibliographystyle{spmpsci}
\section{Introduction}
Recently, neural networks have progressed rapidly in computer vision. Various efficient methods \cite{Redmon2015You} \cite{Redmon2017YOLO9000} \cite{tian2019fcos} \cite{kong2019foveabox} depend on large labeled datasets. However, when data are insufficient, training may result in overfitting and hurt generalization performance. In this respect, the human vision system differs markedly from computer vision systems: for unlabeled data, humans can classify, locate and describe, while computer systems cannot. Although most state-of-the-art methods succeed, they require more expensive datasets labeled with auxiliary descriptions, such as shape, scene or color.
Prior work proposes few-shot learning methods \cite{wertheimer2019few-shot} \cite{lifchitz2019dense} \cite{ShahTask} to address these issues; few-shot learning includes classification, detection and segmentation. Few-shot detection \cite{bansal2018zero-shot} \cite{PorikliZero} \cite{SaligramaZero} is one of the most challenging tasks. This paper identifies two main challenges. First, with only a few samples, the features extracted by standard CNNs are not directly suitable for few-shot learning. In most previous state-of-the-art few-shot learning methods, classification is regarded as the standard task; in each training iteration, classification is a binary task for YOLOv2 \cite{Redmon2017YOLO9000}, which introduces a bias problem and hurts performance on the other classes. Second, many methods \cite{PinheiroAdaptive} \cite{XieDual} \cite{HebertWatch} rely on auxiliary features related to descriptions. However, it is difficult to ensure that external datasets are beneficial and to tell which parts are noise. Consequently, many methods \cite{PinheiroAdaptive} \cite{ZemelIncremental} \cite{XieDual} learn auxiliary features via sub-modules to improve performance, requiring more labeled data and more parameters.
\begin{figure*}
\scriptsize
\subfigure[The visualization of category grouping.]
{
\begin{minipage}[c]{0.5\textwidth}
\includegraphics[width=1.0\textwidth]{figure1.pdf}
\end{minipage}
}
\subfigure[Category-based grouping table on Pascal VOC dataset.]
{
\begin{minipage}[c]{0.5\textwidth}
\center
\begin{tabular}{|l|llllll|}
\hline
group&\multicolumn{6}{c|}{Class}\\
\hline
1& aero& bird&&&& \\
\hline
2&cow&horse&cat&sheep&dog&\\
\hline
3&sofa&chair&&&&\\
\hline
4&tv&plant&table&&&\\
\hline
5&boat&bicycle&train&car&bus&mbike\\
\hline
6&bottle&person&&&&\\
\hline
\end{tabular}
\end{minipage}
}
\caption{Overall scheme of the category-based grouping mechanism. In (a), all categories are divided into $K$ groups; the categories in each row are similar in appearance and in the environments where they appear, and each row forms one group. When a ``bird'' is flying, it looks like an ``aero''. ``Cow, horse, cat, sheep and dog'' have four legs and similar shapes, and they often appear in similar environments, so we regard them as one group. In this work, we experiment on the Pascal VOC dataset. As detailed in (b), all categories of the Pascal VOC dataset are divided into 6 (i.e., $K=6$) groups according to (a). The appearance and the typical environment are very similar between categories within a group.}
\label{figure1}
\end{figure*}
To address these problems, building on \cite{kang2019few-shot}, we propose a new top-C classification loss (i.e., TCL-C) for few-shot learning to improve few-shot detection performance. For YOLOv2 \cite{Redmon2017YOLO9000}, the classification task is binary and ignores the other predictions. Although the Cross-Entropy loss \cite{Rubinstein1999The} or Focal loss \cite{Lin2017Focal} can, to a certain extent, reduce the trust in the original label and increase the trust in the other labels, they cannot ensure that category-based features other than the true label are harmful to learning, which hurts detection performance when samples are few. Many researchers exploit label smoothing with Cross-Entropy to alleviate the problem. However, these approaches neither eliminate category-irrelevant features nor enhance similar semantic features between categories; smoothing labels may increase irrelevant semantic features of other classes and hurt performance for few-shot learning. \textbf{Therefore, for classification, in addition to the true label we constrain the C-1 predictions with the highest classification scores. Because these C-1 false classes are the ones most likely to be predicted as the true label, we regard them as the most likely C-1 false classification predictions}, and a simple constraint on them suffices to enhance the semantic features related to the true label and suppress irrelevant semantic information.
For few-shot detection, many previous methods \cite{wang2020frustratingly} \cite{yan2019meta} \cite{wang2019meta} \cite{hsieh2019one} \cite{fan2020few} \cite{zhu2021semantic} \cite{2021FSCE} exploit Faster R-CNN with FPN, multi-relation networks and ResNet with various other mechanisms as backbones to detect few-shot objects well. However, their networks are very complex, and they fail to consider the strong bias problem (i.e., the model detects classes with sufficient training samples well, while performance on classes with few samples is poor). In this paper, we propose a category-based grouping mechanism that uses only labels to alleviate the strong bias problem. As shown in Figure \ref{figure1} (a), the leftmost object in the first row is an ``aero'' and the last one in the same row is a ``bird''; they are similar in appearance (i.e., visual appearance, shape, limbs, etc.) and, in most conditions, they appear in the same environment. The scenes are also similar between the objects in the 1st, 2nd, and 4th columns, and between the objects in the 3rd and 5th columns. As seen in Figure \ref{figure1}, we split the classes into disjoint groups. Therefore, this work proposes a category-based grouping mechanism to help the model learn meta-features better. Few methods have exploited this characteristic for few-shot detection without additional datasets or modules. With category-based grouping, we alleviate the strong bias problem between classes and further improve few-shot detection APs. Based on the few-shot detection method of \cite{kang2019few-shot}, our contributions are as follows:
\begin{quote}
\begin{itemize}
\item We design a top-C classification loss (i.e., TCL-C), which constrains the true-label prediction and the most likely C-1 false classification predictions to improve performance on few-shot classes.
\item Based on similar appearance (i.e., visual appearance, shape, limbs, etc.) and the environments in which objects often appear, we group categories into mutually disjoint sub-groups. We then construct a category-based grouping loss on the grouped meta-features, which alleviates the strong bias problem and further improves detection APs.
\item We experiment with different classification losses for few-shot detection on Pascal VOC, and all results show that our TCL-C performs better.
\item Combining the TCL-C with the category-based grouping mechanism, for $k$-shot detection with $k=1, 2, 3$, the detection APs reach almost 20\%, 25\% and 30\%, respectively. Experimental results show that ours outperforms the state-of-the-art methods, and grouping helps concentrate detection APs across classes.
\end{itemize}
\end{quote}
\section{Related Work}
\textbf{Classification Loss.} Different classification losses have been proposed, such as the BCEWithLogits loss and the Cross-Entropy loss with SoftMax \cite{Rubinstein1999The} \cite{liu2016large-margin} \cite{liu2017sphereface:}. Most computer vision tasks are trained with the Cross-Entropy loss. Later, \cite{Lin2017Focal} proposes the focal loss to alleviate the imbalance between positive and negative samples. However, many tasks based on YOLOv2 \cite{Redmon2017YOLO9000} just exploit a binary classification loss, which suffers from this imbalance and ignores the correlation between categories. In this paper, for few-shot classes, we assume that too much noise can hurt detection, while using only the true label fails to learn relations with other categories. Therefore, we propose the TCL-C for the classification task, which focuses only on the true label and the most similar C-1 false classes. Compared with \cite{rahman2018polarity}, our TCL-C exploits only semantic information to promote performance.
\textbf{Meta-Learning.} Recently, different meta-learning algorithms have been proposed, including metric-based methods \cite{li2019revisiting} \cite{lifchitz2019dense} \cite{kim2019variational}, memory networks \cite{santoro2016meta-learning} \cite{oreshkin2018tadam:} \cite{mishra2017simple}, and optimization-based methods \cite{grant2018recasting} \cite{lee2018gradient-based} \cite{finn2017meta-learning} \cite{finn2017model-agnostic} \cite{kang2019few-shot}. The first type learns a metric from the few given samples and scores the label of a target image according to similarity. The second type performs cross-task learning, and many memory-network approaches build on model-agnostic adaptation \cite{finn2017model-agnostic}: a model is learned on a variety of different tasks, making it possible to solve new learning tasks with only a few samples. Many variants have been proposed \cite{nichol2018on, sun2019meta-transfer} \cite{antoniou2018how} \cite{rusu2019meta-learning}. The last type is parameter prediction. \cite{kang2019few-shot} detects objects with YOLOv2 \cite{Redmon2017YOLO9000}, and based on it, we further improve performance and alleviate several problems in few-shot detection.
\textbf{Few-shot Detection.} Most previous few-shot detection methods focus on fine-tuning and metric learning \cite{wang2020frustratingly} \cite{kang2019few-shot} \cite{hsieh2019one} \cite{fan2020few} \cite{zhu2021semantic} \cite{2021FSCE}. \cite{wang2019meta}, based on \cite{2017Faster}, predicts category-specific parameters with a weight-prediction meta-model, but the category-agnostic parameters are obtained from base-class samples. \cite{wang2020frustratingly} \cite{hsieh2019one} \cite{fan2020few} use Faster R-CNN and ResNet-101 with FPN as the backbone to detect objects. \cite{chen2018lstd:} transfers the base domain to the novel one. \cite{fan2020few}, based on \cite{2017Faster}, exploits an attention-RPN and three relation modules to improve few-shot detection performance. However, these methods fail to consider unequal detection APs and add more parameters, which slows training and degrades performance on few-shot classes. \cite{2021FSCE} uses a contrastive branch to measure the similarity between proposals, but fails to consider the similarity between categories. \cite{zhu2021semantic} projects features into a category-based embedding space obtained from a large corpus of text, but the cost of the embedding space is much higher. Therefore, based on \cite{kang2019few-shot}, we only split all categories into disjoint groups to improve detection performance without additional sub-modules (i.e., FPN, multi-relation modules, metric functions, etc.), and capture the correlation between groups or categories from the category-based meta-features to reduce unequal detection.
\section{Our Approach}
As shown in Figure \ref{figure2}, we propose the TCL-C for classification and a category-based grouping mechanism to help the meta-model $M$ learn related features between categories. The input of the meta-model $M$ is an image together with a mask of a single, randomly selected object; the mask is 1 inside the object and 0 elsewhere. In every iteration, $M$ receives the same number of samples as there are categories. The meta-model $M$ extracts category-wise meta-feature vectors used as weights for reweighting the features from the feature extractor $D$; the classifier and detector then complete the classification and regression tasks. During training, we use the TCL-C to train the classification branch and the category-based grouping mechanism to guide the meta-model $M$. According to the category-based grouping mechanism, we split the category-based meta-feature vectors into groups to learn better meta-features.
\begin{figure*}
\centering
\includegraphics[width=2.0\columnwidth]{figure2.pdf}
\caption{Overall structure of our method. The detection model consists of a feature extractor $D$ and a meta-model $M$.}
\label{figure2}
\end{figure*}
\subsection{Feature Reweighting for Detection}
Different categories may share a common feature distribution while each retains unique components. As shown in Figure \ref{figure2}, based on YOLOv2 \cite{Redmon2017YOLO9000}, the method uses a meta-model $M$ to obtain category-wise meta-features for reweighting. The meta-model takes an annotated sample ($I_i$, $B_i$) for each category $i$, $i = 1, 2, ..., N$, where $N$ is the number of categories, and $I_i$ and $B_i$ denote the image and the mask of a single selected object of the $i$th class. $M$ learns to predict $N$ vectors $W=\{ w_1, w_2, w_3,...,w_N\}$, where $w_i = M(I_i, B_i)$ is the meta-feature vector of the $i$th category. Built on Darknet-19, the feature extractor $D$ extracts basis features $F_j$ from the image $S_j$: $F_j = D(S_j)$. Then, for class $i$, the reweighted feature is obtained from $w_i$ and $F_j$: $F_{j,i}=F_j\otimes w_i$. Finally, based on $F_{j,i}$, a classifier and a detector perform classification and regression. See \cite{kang2019few-shot} for details.
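The reweighting step $F_{j,i} = F_j \otimes w_i$ is a channel-wise product, which can be sketched as follows (the shapes are illustrative assumptions: a $13\times13$ grid with 1024 channels and 20 classes, in the spirit of Darknet-19 on Pascal VOC):

```python
import numpy as np

def reweight(F, W):
    """Channel-wise reweighting F (x) w_i for every class i.

    F : basis features from the extractor D, shape (C, H, W)
    W : N class meta-feature vectors, shape (N, C)
    returns one reweighted feature map per class, shape (N, C, H, W)
    """
    return W[:, :, None, None] * F[None, :, :, :]

F = np.random.default_rng(0).random((1024, 13, 13))   # assumed feature grid
W = np.ones((20, 1024))                               # e.g. 20 VOC classes
out = reweight(F, W)
assert out.shape == (20, 1024, 13, 13)
assert np.allclose(out[3], F)    # unit weights leave the features unchanged
```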
\subsection{TCL-C}
For few-shot detection, especially with meta-learning, we use the TCL-C to train classification with the true-label prediction and the most likely C-1 false classification predictions, enhancing the semantic features related to the true label and suppressing irrelevant semantic information. As shown in Equation \ref{equation1}, the TCL-C drives the features towards the true label and controls the effect of the most similar C-1 classes via ${\beta}^+$ and ${\beta_c}^-$, respectively, improving performance on novel (i.e., few-shot) classes. Here $\bm{{\beta}^+}$ denotes the expected classification score of the true label, and $\bm{{\beta_c}^-}$ the expected score of the $c$th highest-scoring class other than the true label. Because similarities differ between categories, $\bm{{\beta_c}^-}$ is obtained through extensive experiments; in this paper, our TCL-C experiments use only C=2, as detailed in Equation \ref{equation1111}. Finally, $\eta$ and $\gamma$ affect the convergence rate, as detailed in Equation \ref{equation1} below.
\begin{equation}
\begin{split}
L_{cls}&={L_{cls}}^{pos}+{L_{cls}}^{neg}\\
{L_{cls}}^{pos}&=log(\eta+e^{\gamma({\beta}^+-P_t)})\\
{L_{cls}}^{neg}=&log(\eta+e^{\gamma({F_t}^1-{\beta_1}^-)})+log(\eta+e^{\gamma({F_t}^2-{\beta_2}^-)})\\
&+\cdots+log(\eta+e^{\gamma({F_t}^c-{\beta_c}^-)})+\cdots+\\
&log(\eta+e^{\gamma({F_t}^C-{\beta_C}^-)}), 2\leq C\leq N \\
\end{split}
\label{equation1}
\end{equation}
where ${L_{cls}}^{pos}$ and ${L_{cls}}^{neg}$ are the loss terms for the true-label class and the most likely C-1 false classes, respectively. $P_t$ is the prediction score of the true-label class, and ${F_t}^c$ are the prediction scores of the most similar C-1 classes (i.e., the most likely C-1 false classes). $0\leq {\beta_c}^- \leq1$, where ${\beta_c}^-$ is the expected threshold for the $c$th (i.e., $c=1, 2, ..., C$) most similar category. In our analysis, as detailed in Equation \ref{equation1111}, we only experiment with TCL-2. For C larger than 2, determining how much influence classes other than the true label have on detection would require dynamically adjusting the threshold of each category, which demands extensive experiments that we leave for future work. The loss thus urges the model to distinguish the semantic features of the two most similar categories, improving detection APs on few-shot classes.
\begin{equation}
{L_{cls}}^{neg}=log(\eta+e^{\gamma(F_t-{\beta}^-)})\\
\label{equation1111}
\end{equation}
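A minimal sketch of TCL-2 follows (ours; the hyperparameter values $\beta^+$, $\beta^-$, $\eta$, $\gamma$ below are illustrative assumptions, not the tuned values used in our experiments):

```python
import numpy as np

def tcl2_loss(scores, true_idx, beta_pos=0.9, beta_neg=0.3,
              eta=1.0, gamma=5.0):
    """TCL-2 sketch: true-label term plus the single most likely false class.
    `scores` are per-class classification scores in [0, 1]."""
    p_t = scores[true_idx]
    f_t = np.max(np.delete(scores, true_idx))   # highest false-class score
    l_pos = np.log(eta + np.exp(gamma * (beta_pos - p_t)))
    l_neg = np.log(eta + np.exp(gamma * (f_t - beta_neg)))
    return l_pos + l_neg

good = tcl2_loss(np.array([0.05, 0.90, 0.10]), true_idx=1)
bad  = tcl2_loss(np.array([0.40, 0.50, 0.45]), true_idx=1)
assert good < bad   # a confident, well-separated prediction is penalized less
```

The positive term decreases as $P_t$ approaches $\beta^+$, and the negative term decreases as the most likely false score drops below $\beta^-$.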
\subsection{Category-Based Grouping}
As detailed in Figure \ref{figure2}, meta-learning exploits the correlation between categories for few-shot detection. As shown in Figure \ref{figure1}, our grouping mechanism focuses on appearance first and environment second, splitting all categories into $K$ mutually disjoint groups. We mainly analyze the mean and variance of the category-based meta-feature distributions from $M$. Following this principle (see Figure \ref{figure1}), we propose the category-related group loss in Equation \ref{equation2}, which makes the intra-group distance smaller and the inter-group distance larger. It encourages a small variance of the per-category mean feature values within every group, making the semantic feature distribution more compact between categories within each group, and keeps the distributions of different groups apart, improving detection APs and reducing the dispersion of detection performance over all categories.
\begin{equation}
L_{re-meta}=\sum\limits_{j=1}^Klog(\tau+{{L^j}_{group}})
\label{equation2}
\end{equation}
As detailed in Equation \ref{equation2}, ${{L^j}_{group}}$ is the cost composed of a within-group term for the $j$th group and between-group terms relating the $j$th group to the other groups, and $\tau$ is set to 1.0, as shown in Equation \ref{groupj}.
\begin{equation}
\begin{split}
{{L^j}_{group}}=\frac{{W_{mean-std}}^j}{\epsilon+\frac{1}{{W_{mean-std}}^j}+\sum\limits_{k=j+1}^KL_{j,k}}\\
L_{j,k}=e^{{({W_{std}}^j-{W_{std}}^k)}^2} +e^{{({W_{mean}}^j-{W_{mean}}^k)}^2}
\end{split}
\label{groupj}
\end{equation}
where $L_{j,k}$ represents the meta-feature distribution difference between the $j$th group and the $k$th group; we expect this value to be as large as possible. $\bm{{W_{mean-std}}^j}$ represents the dispersion of the concentrations of the meta-feature space across categories within the $j$th group, and we expect it to be as small as possible (i.e., the distribution across categories within a group is more compact). $\bm{{W_{std}}^j}$ and $\bm{{W_{mean}}^j}$ represent the dispersion metric and the concentration metric of meta-features across categories within the $j$th group, respectively. According to Equation \ref{equation2}, we expect the distributions of different categories within a group to be more compact, and different groups to be farther from each other (i.e., the distinction between groups is obvious): as $\bm{{W_{mean-std}}^j}$ becomes smaller, $\bm{{W_{std}}^j-{W_{std}}^k}$ and $\bm{{W_{mean}}^j-{W_{mean}}^k}$ become larger. Each term is detailed below.
\begin{equation}
{W_{mean-std}}^j=\sqrt{ \frac{1}{||C_j||} \sum\limits_{m=1}^{||C_j||}{( {u_m}^j - u^j )}^2 }
\label{equation3}
\end{equation}
where we expect this value to be small. $C_j$ is the $j$th group, and $||C_j||$ represents the number of classes within it. $u^j$ is the mean value of all meta-features for the $j$th group, and ${u_m}^j$ is the mean value of the meta-features for the $m$th class of the $j$th group.
\begin{equation}
\label{equation41}
{W_{std}}^j=\left\{
\begin{aligned}
\sqrt{ \frac{1}{||C_j||} \sum\limits_{m=1}^{||C_j||}{( {\delta_m}^j - \delta^j )}^2 } & , &||C_j||>1. \\
{\delta_m}^j& , &||C_j||=1.
\end{aligned}
\right.
\end{equation}
\begin{equation}
\label{equation42}
{W_{mean}}^j=\left\{
\begin{aligned}
\frac{1}{||C_j||} \sum\limits_{m=1}^{||C_j||} {u_m}^j & , &||C_j||>1. \\
{u_m}^j& , &||C_j||=1.
\end{aligned}
\right.
\end{equation}
\begin{equation}
\label{equation5}
u_i=\frac{1}{|F|} \sum\limits_{f=1}^{|F|} {x_f}^i , \quad \delta_i=\sqrt{ \frac{1}{|F|} \sum\limits_{f=1}^{|F|} {( {x_f}^i- u_i )}^2 }
\end{equation}
where $x$ denotes the category-related $|F|$-dimensional meta-feature vectors, $|F| = 1024$, $i=1,2,3,...,N$. $u_i$ and $\delta_i$ represent the mean value and variance of the meta-features for the $i$th category, respectively. Within the $j$th group, ${\delta_m}^j$ and $\delta^j$ are the variance of the $m$th class and the variance of all meta-features, respectively. As shown in Figure \ref{figure1}, all 20 categories are divided into 6 groups, $K = 6$, $C_j\in\{C_1, C_2, ..., C_6\}$. Because the per-group correlation loss can make the argument of the $\log$ function less than 1, its value can be negative; the parameter $\tau$ is therefore used to ensure that the loss is positive, and it must be greater than or equal to 1. In this way, the method alleviates the phenomenon whereby performance varies greatly across categories in few-shot detection. See Appendix A for details.
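The statistics and grouping loss above can be sketched as follows, assuming NumPy arrays of per-class meta-features; the `max(..., eps)` guard against division by zero is our addition, not part of the paper's formula.

```python
import numpy as np

def group_stats(features_by_class):
    """Per-class mean u_m and std delta_m of the meta-features,
    then group-level statistics W_mean-std, W_std, W_mean.
    features_by_class: list of 1-D meta-feature vectors, one per class."""
    u = np.array([f.mean() for f in features_by_class])      # u_m^j
    delta = np.array([f.std() for f in features_by_class])   # delta_m^j
    w_mean_std = np.sqrt(np.mean((u - u.mean()) ** 2))       # W_mean-std^j
    w_std = np.sqrt(np.mean((delta - delta.mean()) ** 2)) if len(u) > 1 else delta[0]
    w_mean = u.mean() if len(u) > 1 else u[0]
    return w_mean_std, w_std, w_mean

def re_meta_loss(groups, tau=1.0, eps=5e-5):
    """Sketch of L_re-meta: compact within groups, far apart between groups.
    groups: list of groups, each a list of per-class meta-feature vectors."""
    stats = [group_stats(g) for g in groups]
    total = 0.0
    for j, (wms_j, wstd_j, wmean_j) in enumerate(stats):
        # Between-group term: sum of L_{j,k} over the remaining groups.
        between = sum(
            np.exp((wstd_j - wstd_k) ** 2) + np.exp((wmean_j - wmean_k) ** 2)
            for _, wstd_k, wmean_k in stats[j + 1:]
        )
        # Guard against wms_j == 0 (our numerical addition).
        l_group = wms_j / (eps + 1.0 / max(wms_j, eps) + between)
        total += np.log(tau + l_group)
    return total
```

With $\tau=1$ the per-group $\log$ term is non-negative, matching the constraint $\tau \geq 1$ discussed above.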
\subsection{Loss Details}
\textbf{Category-Based Grouping.} Considering that categories in different environments can share a similar appearance and that different categories can occur in a similar environment, in order to simplify the setting we mainly focus on appearance similarity, followed by the environment, and assign classes with similar appearance and scenes to the same group. As shown in Figure \ref{figure1} (b), we divide Pascal VOC with 20 categories into 6 groups, namely $K = 6$. As used in Equation \ref{equation2}, we set the parameters $\tau$ and $\epsilon$ to 1 and 0.00005, respectively.
\textbf{Loss Functions.} In order to train the meta-model and ensure that the shared features (i.e., within a group) are more compact between the meta-features of semantically similar objects, we jointly train the classification, category-based grouping, and regression losses, as shown in Equation \ref{equation6}. Compared with state-of-the-art classification methods, our TCL-2 is more suitable for few-shot detection.
\begin{equation}
\begin{split}
L_{loc}&=L_{loc}(x)+L_{loc}(y)+L_{loc}(w)+L_{loc}(h)\\
L&=\alpha L_{cls}+\omega L_{re-meta}+\lambda L_{loc}
\end{split}
\label{equation6}
\end{equation}
where $L_{re-meta}$ denotes the category-based grouping loss. $L_{loc}$ includes the center location losses $L_{loc}(x)$, $L_{loc}(y)$ and the scale losses $L_{loc}(w)$, $L_{loc}(h)$. In this experiment, the classification, similarity, and regression balance parameters $\alpha$, $\omega$ and $\lambda$ are set to 1, 6, and 1, respectively.
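The joint objective of Equation \ref{equation6} is then a weighted sum; a minimal sketch with the balance weights reported above (1, 6, 1) as defaults:

```python
def total_loss(l_cls, l_re_meta, l_loc_xywh, alpha=1.0, omega=6.0, lam=1.0):
    """Joint objective: classification + grouping + localization.
    l_loc_xywh: the (x, y, w, h) localization loss terms."""
    l_loc = sum(l_loc_xywh)
    return alpha * l_cls + omega * l_re_meta + lam * l_loc
```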
\section{Experiments and Results}
This experiment consists of base training and few-shot fine-tuning. As shown in Figure \ref{figure2}, the output of the meta-model $M$ is related to the number of categories, and each category-based meta-feature vector is represented as a 1024-dimension vector. We experiment with different classification losses (BCEwithLogits, Focal \cite{Lin2017Focal}, Cross-Entropy \cite{Rubinstein1999The}, and TCL-2), each combined with the category-based grouping. Methods that combine these classification losses with our proposed category-based grouping are denoted Re-BCEwithLogits, Re-Focal, Re-Cross-Entropy, and Ours (i.e., Re-TCL-2), respectively. Details follow.
\subsection{DataSets and Setting}
The Pascal VOC dataset contains 20 categories. We randomly select 5 categories as novel (i.e., few-shot) categories with few samples (i.e., $k$-shot, $k=1,2,3,5$) for fine-tuning, and the remaining 15 categories as base classes with sufficient samples for the base model. The 20 categories are randomly divided into 6 novel parts, and we experiment with 3 of the obtained parts as the novel classes for fine-tuning $k$-shot, $k=1,2,3,5$. Our setting is the same as \cite{kang2019few-shot}. During base-class training, only samples of the 15 base categories are used, and the remaining 5 categories, regarded as novel classes, are fine-tuned with only $k$ shots each, named 5-way $k$-shot. For the meta-model, when there are multiple objects in an image, only one object mask corresponding to the class is randomly selected. All models are trained on 4 GPUs with a batch size of 64, and we train for 80,000 iterations for the base model. In our work, we use the test set of VOC2007 as our test set, and the training/validation sets of VOC2007 and VOC2012 as our training sets. We use SGD with momentum 0.9 and L2 weight decay 0.0005 for the detector and meta-model.
\subsection{Ablation Studies}
Our experiments are mainly for 5-way $k$-shot. We analyze the detection performance on Pascal VOC by the category-based meta-feature grouping and different classification losses. The details are as follows.
\begin{table*}
\small
\centering
\begin{center}
\begin{tabular}{|l|llll|llll|llll|}
\hline
&\multicolumn{4}{c|}{Novel Set1} & \multicolumn{4}{c|}{Novel Set2} &\multicolumn{4}{c|}{Novel Set3}\\
\hline
\centering
Method/shot& 1& 2& 3& 5 & 1& 2& 3& 5 & 1& 2& 3& 5 \\
\hline
\centering
BCEwithLogits&16.42&18.51&27.41&36.07&13.59&14.71&26.3&35.2&15.1&15.62&26.14&31.6\\
Re-BCEwithLogits&13.26&17.46&24.31&33.76&\textbf{18.29}&19.71&26.99&35.3&11.0&13.0&20.74&31.95\\
\hline
Focal&16.27&21.63&27.91&\textbf{37.43}&10.39&15.23&18.36&34.09&9.6&8.87&20.16&27.54\\
Re-Focal&18.22&20.05&20.45&36.15&14.16&15.88&23.13&27.2&7.3&8.67&16.35&28.6\\
\hline
Cross-Entropy&15.37&19.11&23.11&35.18&16.04&19.2&25.46&35.84&12.19&15.3&20.31&31.91\\
Re-Cross-Entropy&18.55&21.02&22.25&36.5&15.15&20.81&26.07&33.45&13.07&14.53&23.93&35.58\\
\hline
TCL-2 &19.15&21.23&28.64&36.94&17.56&22.25&25.57&38.45&12.27&17.33&\textbf{30.81}&35.08\\
Ours&\textbf{20.08}&\textbf{26.75}&\textbf{29.76}&36.28&18.07&\textbf{24.66}&\textbf{30.94}&\textbf{39.04}&\textbf{19.42}&\textbf{17.43}&23.24&\textbf{37.66}\\
\hline
\end{tabular}
\end{center}
\caption{The results of detection APs (\%) on the novel classes. For few-shot detection on Pascal VOC, our method significantly outperforms the others.}
\label{sum_result}
\end{table*}
\begin{figure}[htb]
\subfigure[Curves of the normalized APs (\%) for all methods.]
{
\begin{minipage}[c]{0.47\textwidth}
\centering
\includegraphics[width=0.93\textwidth]{novel1shot2novelap.png}
\end{minipage}
}
\subfigure[Curves of the normalized APs (\%) for different $\beta^-$.]
{
\begin{minipage}[c]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{beta_compare.png}
\end{minipage}
}
\caption{For Pascal VOC, in (a), for novel set 1 2-shot, the curves show the detection APs (\%) on the novel classes over all fine-tuning epochs. Our method clearly outperforms the others. In (b), results with solid lines are normalized APs on the novel classes, and results with dashed lines are normalized detection APs on all categories. For TCL-2, when $\beta^-$ is set to 0.5, our method performs best when fine-tuning novel set 1 2-shot.}
\label{beta}
\end{figure}
\subsubsection{The Importance of the TCL}
\textbf{Impact of $\beta^-$.} As shown in Equation \ref{equation1}, $\beta^-$ is essential for our TCL-2. The threshold of the true label, $\beta^+$, is 1.0; the threshold of the most likely false classification prediction, $\beta^-$, is set to 0.5. $\beta^-$ can be neither too large nor too small. If it is too large, the meta-model drives the other classes towards the true label, which causes semantically similar categories to become ambiguous and is unconducive to classification. If it is too small, the model trusts only the true label, which widens the gap between similar categories and fails to make full use of shared semantics for mutual assistance. As illustrated in Figure \ref{beta}(b), when $\beta^-$ is greater than 0.5, the APs distribution on the novel classes (see the solid line) is consistent, and the best APs on all categories (see the dashed line) are lower than with our setting, $\beta^-=0.5$. When $\beta^-$ is less than 0.5, the APs on the novel classes tend to be smooth, because the semantics of the true label and the most likely false classification are clearly separated, making the model trust the true label most and preventing the most likely false class from assisting. Therefore, when $\beta^-$ is set to 0.5, our method can best exploit the similar semantic distributions between different categories to improve performance on the novel classes.
\textbf{Comparison with the state-of-the-art losses.} As shown in Table 1,
compared with the state-of-the-art BCEwithLogits, Focal \cite{Lin2017Focal}, and Cross-Entropy \cite{Rubinstein1999The}, our TCL-2 improves few-shot detection performance. For novel set 1, the 1-shot detection APs of TCL-2 are 2.73\%, 2.88\%, and 3.78\% better than the other classification losses, respectively. TCL-2 is better than the other classification methods by 1.23\%, 0.73\%, and 5.53\% for 3-shot, respectively.
TCL-2 has another advantage, namely alleviating the strong bias problem. As shown in Table 2, for example, the dispersion of detection APs of our TCL-2 is 2.04\%, 4.0\%, and 4.15\% better than BCEwithLogits, Focal, and Cross-Entropy on novel set 1, respectively.
\begin{table*}
\small
\centering
\resizebox{1.0\textwidth}{!}{
\centering
\begin{tabular}{|l|ll|ll|ll|ll|}
\hline
Shot/Method& BCEwithLogits& Re-BCEwithLogits& Focal& Re-Focal&Cross-Entropy&Re-Cross-Entropy&TCL-2&Ours \\
\hline
\centering
1&55.04&58.58&57.0&54.77&57.15&53.86&53.0&\textbf{52.63}\\
\hline
2&52.84&54.46&57.0&51.99&53.36&50.63&50.93&\textbf{46.45}\\
\hline
3&45.81&48.28&45.67&51.22&49.63&48.6&45.46&\textbf{44.37}\\
\hline
5&41.1&42.07&\textbf{40.74}&32.57&41.95&40.64&41.82&41.36\\
\hline
\end{tabular}
}
\caption{Dispersion of the detection APs (\%) on all categories. For novel set 1, our method clearly alleviates the strong bias, reducing the dispersion of detection performance.}
\label{divergence_fine}
\end{table*}
\subsubsection{Analysis of the Category-based Grouping}
\textbf{Impact of every strategy.} We design the scheme in Equation \ref{equations} based on the category-based grouping mechanism (i.e., Figure \ref{figure1} and Equation \ref{equation2}). The experiment then analyzes each component (i.e., $q_j$, $Q_j$ and $U_j$).
\begin{equation}
L_{re-meta}=\sum\limits_{j=1}^K\log\left(\tau+\frac{q_j}{\epsilon+Q_j+\sum\limits_{k=j+1}^KU_j}\right).
\label{equations}
\end{equation}
Besides the best strategy (Equation \ref{equation2} and Equation \ref{groupj}), we experiment with three other strategies, detailed as follows:
\begin{equation}
q_j=1, Q_j=0, U_j=e^{{({W_{std}}^j-{W_{std}}^k)}^2}
\label{equation10}
\end{equation}
where grouping is related only to the dispersion of the meta-feature distributions between groups; this strategy fails to take into account the similarity of the meta-feature distributions between groups. Therefore, we optimize the component $U_j$, as detailed below:
\begin{equation}
\begin{split}
q_j&=1, Q_j=0\\
U_j&=e^{{({W_{std}}^j-{W_{std}}^k)}^2} +e^{{({W_{mean}}^j-{W_{mean}}^k)}^2}
\end{split}
\label{equation11}
\end{equation}
$U_j$ can learn features between groups. However, this strategy fails to learn the meta-feature distribution within a group.
Our best category-based grouping metric (Equation \ref{equation2}) instead makes the distribution of meta-features more compact within a group and the differences between groups more obvious.
On the other hand, if the category-based grouping attends only to the dispersion and similarity of meta-features between groups, the grouping mechanism cannot learn the differences of meta-features between categories within a group, failing to distinguish the meta-feature distributions of different categories within a group, as detailed below:
\begin{equation}
\begin{split}
q_j&={W_{mean-std}}^j, Q_j=\frac{1}{{W_{mean-std}}^j}.\\
U_j&=e^{{({W_{std}}^j-{W_{std}}^k)}^2}.
\end{split}
\label{equation12}
\end{equation}
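The ablation strategies above can be read as pluggable components $q_j$, $Q_j$, $U_j$ of the general scheme in Equation \ref{equations}; a hedged sketch, where each entry of `stats` packs the triple $({W_{mean-std}}^j, {W_{std}}^j, {W_{mean}}^j)$ and the zero-division guard in `strategy_12` is our addition:

```python
import numpy as np

def re_meta_general(stats, q_fn, Q_fn, U_fn, tau=1.0, eps=5e-5):
    """General grouping loss with pluggable components q_j, Q_j, U_j.
    stats[j] = (w_mean_std, w_std, w_mean) for the j-th group."""
    total = 0.0
    for j, sj in enumerate(stats):
        between = sum(U_fn(sj, sk) for sk in stats[j + 1:])
        total += np.log(tau + q_fn(sj) / (eps + Q_fn(sj) + between))
    return total

# The three ablation strategies, in the order they appear above:
strategy_10 = (lambda s: 1.0, lambda s: 0.0,
               lambda s, t: np.exp((s[1] - t[1]) ** 2))
strategy_11 = (lambda s: 1.0, lambda s: 0.0,
               lambda s, t: np.exp((s[1] - t[1]) ** 2) + np.exp((s[2] - t[2]) ** 2))
strategy_12 = (lambda s: s[0], lambda s: 1.0 / max(s[0], 1e-12),
               lambda s, t: np.exp((s[1] - t[1]) ** 2))
```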
As shown in Table 3,
for every strategy (Equations \ref{equation2}, \ref{equations}, \ref{equation10}, \ref{equation11}, and \ref{equation12}), we experiment with different Re-TCL methods that combine our TCL-2 with different category-based grouping methods. Ours, which combines TCL-2 with the strategy in Equation \ref{equation2}, is the best for few-shot detection.
\begin{table*}
\small
\centering
\begin{tabular}{|ll|llllll|l|l|}
\hline
\centering
& &\multicolumn{6}{c|}{Novel Set 1} &\multicolumn{2}{c|}{APs}\\
\hline
\centering
Shot&Method&boat&cat&mbike&sheep&sofa&mean&base AP& AP\\
\hline
\centering
\multirow{4}{*}{1}& Re-TCL-2 (Equation \ref{equations} and \ref{equation10})&9.39&\textbf{37.6}&28.63&17.93&12.84&\textbf{21.27}&65.72&54.61\\
&Re-TCL-2 (Equation \ref{equations} and \ref{equation11})&4.55&32.78&29.89&18.28&10.87&19.27&\textbf{66.51}&\textbf{54.7}\\
& Re-TCL-2 (Equation \ref{equations} and \ref{equation12})&9.09&34.75&21.63&17.11&\textbf{19.67}&20.45&65.14&53.97\\
&Ours (Equation \ref{equation2})&\textbf{9.53}&33.58&\textbf{32.28}&\textbf{19.66}&5.34&20.08&65.44&54.1\\
\hline
\multirow{4}{*}{2}& Re-TCL-2 (Equation \ref{equations} and \ref{equation10})&7.22&41.24&20.34&32.36&13.66&22.96&64.83&\textbf{55.61}\\
&Re-TCL-2 (Equation \ref{equations} and \ref{equation11})&5.22&39.38&\textbf{33.79}&33.46&11.9&24.75&65.05&54.97\\
&Re-TCL-2 (Equation \ref{equations} and \ref{equation12})&2.19&\textbf{45.05}&25.01&27.84&17.8&23.56&64.83&54.51\\
&Ours (Equation \ref{equation2})&\textbf{10.61}&35.11&33.75&\textbf{35.89}&\textbf{18.38}&\textbf{26.75}&\textbf{65.18}&55.58\\
\hline
\multirow{4}{*}{3}& Re-TCL-2 (Equation \ref{equations} and \ref{equation10})&6.32&\textbf{47.77}&22.45&27.92&29.99&26.89&65.55&55.89\\
&Re-TCL-2 (Equation \ref{equations} and \ref{equation11})&\textbf{10.46}&47.35&27.08&26.12&\textbf{37.72}&29.75&65.11&56.27\\
&Re-TCL-2 (Equation \ref{equations} and \ref{equation12})&10.29&39.55&18.76&28.67&33.84&26.22&65.18&55.44\\
&Ours (Equation \ref{equation2})&10.29&46.05&\textbf{28.11}&\textbf{29.81}&34.52&\textbf{29.76}&\textbf{65.64}&\textbf{56.67}\\
\hline
\end{tabular}
\caption{The detection APs (\%) on Pascal VOC obtained by combining our TCL-2 with different category-based grouping methods. For $k$-shot detection, $k=1,2,3$, ours is better than the others for novel set 1. The table reports the APs of every novel class, the mean APs on the novel classes, the mean APs on the base classes, and the mean APs on all categories.}
\label{compare_related_methods}
\end{table*}
\textbf{Impact of the category-based grouping mechanism.} Without additional datasets, as detailed in Equation \ref{equation2}, we mainly focus on the appearance similarity between categories, followed by similar scenes, exploiting these relationships to promote performance. For Equation \ref{equation2}, we analyze the category-based grouping and compare it with every ablation. As shown in Table 1,
compared with the classification losses alone, splitting the 20 categories into 6 disjoint groups improves performance for few-shot detection.
\textbf{Impact on dispersion.} As shown in Table 2,
Re-BCEwithLogits, Re-Focal, and Re-Cross-Entropy are compared with BCEwithLogits, Focal, and Cross-Entropy, respectively. We find that a better meta-feature distribution between categories can alleviate the unbalanced performance across all categories. Especially for novel set 1 2-shot, the dispersions of Re-Focal and Re-Cross-Entropy are reduced by 5.01\% and 2.73\%, respectively. Therefore, our category-based grouping mechanism makes the distribution of semantically similar classes more compact, exploits the correlation between categories better, and alleviates the strong bias problem.
As shown in Figure \ref{figure4}, in each subgraph, each category-based meta-feature is represented by a different color histogram, and each subgraph represents one group of categories. We find that the meta-feature distributions are very similar within a group, while the differences between groups are obvious.
\begin{figure*}
\centering
\includegraphics[width=2.0\columnwidth]{plot_group.png}
\caption{Histograms of the meta-feature distributions of the 20 categories. In each sub-figure, the feature vectors of one group are represented, and different colors represent different categories within the group. The meta-features within a subgraph are very similar, and the distribution differences between subgraphs are obvious. Our method can better extract the shared meta-features within a group.}
\label{figure4}
\end{figure*}
\subsection{Visualization and Results}
Our TCL-$C$ and category-based grouping can improve detection performance on few-shot classes, as seen in Appendix B. First, as detailed in Equation \ref{equation1} and Figure \ref{beta} (a), our TCL-2 performs better for similar semantics to improve the detection APs on few-shot classes.
Then, according to the similar appearance and environments in which different categories appear, as detailed in Figure \ref{figure1} and Equation \ref{equation2}, we split all categories into $K$ groups that are disjoint from each other. The distribution of meta-features is compact between categories within a group, while the distributions of different groups are far apart. This category-based grouping exploits the similar distributions of category-based meta-features and reduces the detection dispersion on all categories. As can be seen from Figure \ref{beta} (a), Figure \ref{figure4} and Table 1,
the category-based grouping helps meta-model extract the shared meta-features between categories and ours improves the detection APs by similar semantics between categories. As shown in Table 2, ours reduces the dispersion of the detection APs on all classes.
Combining TCL-2 with category-based grouping is more beneficial for few-shot detection. As shown in Table 1,
for $k$-shot detection, $k=1,2,3$, ours is better, and the detection APs are close to 20.0\%, 25\%, and 30\%, respectively. For novel set 3, the novel classes are ``aero, bottle, cow, horse, sofa'' and the remaining are the base classes. Although no novel class is associated with the base categories, our method is 4.32\%, 9.82\% and 6.35\% better than BCEwithLogits, Focal, and Re-Cross-Entropy for novel set 3 1-shot, respectively.
\section{Conclusions}
For few-shot detection, we present TCL-$C$, which exploits the true label and the most similar C-1 classes to improve detection performance on few-shot classes, and a category-based grouping method that helps the meta-model extract category-related meta-features better, alleviating the strong bias problem and further improving performance. Based on similar appearance or the environments in which they often appear, this paper splits categories into disjoint groups manually. This helps the meta-model extract meta-feature vectors, making the distribution of meta-features within a group more compact and the differences of meta-features between groups more obvious. For 1-shot, 2-shot, and 3-shot detection, our method obtains detection APs of almost 20\%, 25\%, and 30\%, respectively. In the future, rather than grouping manually, we will combine category-embedded features with unsupervised clustering to group more categories dynamically, to improve performance for few-shot detection.
{\small
\bibliographystyle{ieee_fullname}
}
\section{Introduction}
Since Darwin's and Mendel's works on the theory of evolution by natural selection~\cite{darwin2004origin} and on the laws of biological inheritance~\cite{mendel1866versuche} respectively, evolutionary theories have focused mainly on the role of selection acting on randomly-generated genetic material in the origination of \emph{phenotypic diversification}---and finally \emph{speciation}.
This way of thinking---summarised by the so-called ``Neo-Darwinian Synthesis'' or ``Modern Synthesis''---fostered the idea that the responsibility for the generation and the subsequent establishment of ``evolutionary novelty'' was prerogative of genetic material.
So, the differential survival and reproduction success of biological organisms has been ascribed to genotype.
Parallel to the development of ``Modern Synthesis'', Mayr~\cite{Mayr2091} argued that: ``[...] it is the
phenotype which is the part of the individual that is \emph{visible}
to selection.''
Mayr's argument, together with Waddington's work on the \emph{Epigenetic Landscape}~\cite{waddington1942epigenotype}, has paved the way for the formulation of a theoretical framework according to which the phenotype---and not the genotype---, the environment~\footnote{With the term \emph{environment} here we refer---without losing generality---to all external perturbations by which a subject can be influenced; some of these could be the external world itself, organisms of the same or other species, etc.} and above all the development process play a primary role in the origin of novelty from an evolutionary point of view.
The epigenetic landscape metaphor, which finds a formal basis in dynamical systems theory, stresses the concept for which there is no trivial deterministic mapping between genotype and phenotype~\cite{huang2012molecular}.
It is the dynamics of the complex network of interaction among genes, and between genes and the environment, which will determine the stable expression patterns and so ultimately will affect the phenotype determination.
Therefore, the ensemble of dynamics that can be generated by the genes composing the organism's genetic code represents a source of diversification that can explain the birth of new phenotypes and, consequently, their affirmation on the evolutionary scale. It is important here to emphasise the role of the environment in constraining and shaping these actual dynamics.
In biology, the capacity of a genotype to produce different phenotypes depending on the environment in which it is located is defined as \emph{phenotypic plasticity}~\cite{PFENNIG2010459,kelly2011phenotypic}, or \emph{developmental plasticity} if the differences emerge during development~\cite{fusco-minelli-2010,gilbert2016developmental}.
The specific dynamics that shapes an organism's phenotype during its development is indeed the response to various influences, among which we inevitably find the external environment, other organisms and noise~\cite{longo_how_2018}.
More in general, these external agents influence the process of regulation so that they might destabilise reached (meta)stable patterns of gene expression and induce a network dynamics reconfiguration, able to accommodate and possibly give appropriate responses to the new state of the external environment. In other words, they stimulate the process of construction of a new internal model of the external world.
Biologists call this process \emph{developmental recombination}; in the works~\cite{West-Eberhard6543,PFENNIG2010459} reasons and evidences why this process is held responsible for the origin of differences between species are presented.
Noteworthy is the hypothesis that phenotypic plasticity allows the crossing of the valleys present in the fitness landscape, a crossing that would be precluded to evolution-by-mutations, as the valleys' phenotypes would be selectively disadvantageous.
Therefore, even if mutations (random or not) contribute to the creation of diversification, by modifying the topology of gene regulatory networks and therefore the constraints imposed on their dynamics, they are not a necessary condition for phenotypic plasticity and, in the light of the previous discussion, assume the role of supporting actors.
They are, however, implicated in the \emph{genetic accommodation} process~\cite{West-Eberhard6543}, that is the process following the selection of the phenotypic variant with a genetic component; or when a reorganisation of the genotype allows individuals of subsequent generations to reach the same phenotype at a lower cost~\cite{bateson_gluckman_2011}, in terms of time, resources, etc.
From a cybernetics point of view~\cite{wiener1948cybernetics,ashby-cybernetics}, this capability of producing an internal model of the external world, which is provided by phenotypic plasticity, is of great importance. In abstract terms, this process makes it possible for an organism to compress the wealth of information coming from the external world into an internal representation that values only the pieces of information relevant for the organism's survival; on the basis of this internal model the organism acts so as to achieve its tasks, \textit{in primis} to attain homeostasis, i.e. maintaining its \textit{essential variables} within physiological ranges~\cite{design-for-a-brain}. We can then state that phenotypic plasticity not only is a vital property for living organisms, but it may also be of great value for artificial ones.
Therefore, a fundamental question arises as to what are the \emph{generic properties} that allow organisms to exhibit the phenotypic plasticity observed in the process of organisms development and so attain an effective level of adaptivity.
If these properties are found, on the one hand they may provide us insights about the mechanisms underlying the adaptive behaviours of organisms during their development process;
on the other hand, given the reported relevance that the development process may have on evolutionary-scale changes~\cite{arthur_2004}, they can be the key to understand the onset of the differences, and at the same time the common traits, between the species~\footnote{Evolutionary Developmental Biology (evo-devo) was born around the end of the twentieth century with the intention of answering these and other related questions.
In particular, it focuses on the role of the developmental process, and the effects of its alteration, on evolutionary changes~\cite{Hall2012,WallaceEvoDevo}}.
An approach based on generic properties provides an alternative to comparative studies between different species, which, although they have led to great results (see above all the discoveries of the \emph{homeobox} and \emph{Pax6} genes~\cite{Gehring2007,WallaceEvoDevo,Hall2012,Xu383}), have the limitation of being highly costly and not easily generalisable.
In addition, general properties supporting phenotypic plasticity may provide an effective design principle for artificial systems capable to adapt.
To this aim we believe it is necessary to start from the known and most relevant properties of the organisms and check whether they can also provide plausible hypotheses for the construction of general principles that can first explain the phenotypic plasticity and then hopefully can bring us to link development and evolution.
We believe that one of these principles can be found in \textit{criticality}.
A long-standing conjecture in complex system science---the \emph{criticality hypothesis}---emphasises the optimal balance between robustness and adaptiveness of those systems that are in a dynamical regime between order and chaos~\cite{Kauf93,Kau1996}.\footnote{A recent account of dynamical criticality can be found in ~\cite{roli2018dynamical,munoz2018colloquium}.}
Theoretical studies on properties of such systems and a bunch of empirical evidences led to a reshape of this conjecture into ``life exists at the edge of chaos''~\cite{Lan1990,Pac1988}, or in the field of information processing into
``computation at the edge of chaos''~\cite{CruYou1990,Pro2013}.
Among the most remarkable experimental studies that have brought evidence for the criticality hypothesis, we focus our attention on those that belong to the field of biology, since we are interested in biological development and evolution.
In many papers, it emerges that biological cells---or more precisely their gene networks---operate in the critical dynamic regime.
This has been repeatedly corroborated by different authors and with the help of different techniques, models and working hypotheses.
The comparison of time sequences of microarray gene-expression data against data generated by random Boolean network (RBN) models~\cite{kauffman69} led Shmulevich and others~\cite{shmulevich2005eukaryotic} to conclude that the genetic regulatory network of HeLa cells---a eukaryotic cancer cell line---operates in the ordered or critical regime, but not in the chaotic one.
In~\cite{DanielsWalkerCriticality}, by exploiting the CellCollective~\cite{helikar2012cell} database of Boolean models of real regulatory networks, the authors showed, using the \emph{sensitivities} measure~\cite{Sensitivities}, that all the networks examined were critical or near-critical.
Further, Serra and Villani showed that Boolean networks that best fit the knock-out avalanches in the yeast \textit{S.~Cerevisiae} are ordered, but very close to the critical boundary.
Many noteworthy papers~\cite{beggsCriticality,BeggsBrain,ChialvoNeural} that focus on analyses of the brain, mainly making use of models, brought evidence that the brain also works in a critical condition.
As an example, evidence concerning models of \textit{C.~elegans}' nervous system activity in free locomotion conditions shows that its brain functioning bears some signatures of criticality~\cite{cElegansIzquierdo}.
At the same time, there is evidence suggesting that organisms that operate in critical condition are the most advantaged, evolutionarily speaking.
Aldana et al. in~\cite{AldBalKauRes2007} pointed out that a well-known model of biological genetic networks, RBNs (formally introduced in the following), shows in the critical regime the properties of robustness and, in particular, \emph{evolvability} at the same time.
More in detail, they introduced network \emph{mutations} (see experimental details in the original paper) and assessed the degree to which the original attractors (those exhibited before mutations) are retained and, simultaneously, the capacity to give rise to new attractors.
Although both ordered and critical networks have proven capable of retaining the original attractors with high probability, critical networks were those with the greatest tendency to produce new attractors, and so \emph{evolvability}.
Torres-Sosa and others~\cite{TorresSosa} have reached the same conclusion by following a slightly different path.
By modelling natural selection acting on Boolean networks as an evolutionary algorithm with mutation and gene duplication operators, they showed that dynamical criticality emerges as a consequence of the attractor landscape evolvability property at the evolutionary level.
A remarkable property of critical systems is that they can respond reliably to inputs while being capable of reacting with a wide repertoire of possible actions~\cite{kauffman2000investigations}: this functionality is essential for organisms that must select, filter and compress the information coming from the environment that is relevant for their life.
At this point, we wonder whether criticality can foster the emergence of phenotypic plasticity and, if so, whether it can at the same time be responsible for the establishment of the robustness and adaptivity properties characteristic of organisms produced by both phylogenesis and ontogenesis.
In this paper we begin to tackle these fundamental open questions by investigating whether dynamical criticality can favour phenotypic plasticity.
Indeed, if this were the case, not only could we bring further evidence to the ``criticality hypothesis'', but we could also start to shed light on the relationship between phenotypic plasticity, criticality and evolution.
\section{Creation of novelty in robotics}
In this work, we make use of robotic agents as a proxy to start investigating the questions raised in the previous section.
The robotics literature is full of examples in which techniques inspired by the natural world are used to allow robots---or swarms of robots---to perform complex tasks~\cite{bonabeau1999swarm,nolfi-evorobot-book,braccini2017applications}.
Dually, we can employ artificial agents to represent or mimic natural dynamics, and thereby investigate issues and open questions otherwise impossible to study.
Indeed, robot development and design costs are very low (especially considering the possibilities offered by simulation), making them ideal for these analyses.
Although this artificial approach has intrinsic limits---its results are not easily generalisable \emph{as they are} to the natural counterpart---it can provide us with new clues, hypotheses or different perspectives that could lead to the formation of new models or theories, besides suggesting specific experiments to be undertaken.
Artificial devices have already proven to be able to give rise to the emergence of diversity and the creation of novelty.
In this regard, Gordon Pask in the 1950s conducted a remarkable experiment involving truly evolvable hardware~\cite{pask1958physical,pask1960natural,cariani1993evolve}.
Pask built an electrochemical device with emerging sensory abilities.
In particular, the experimental structure he created---composed of electrodes immersed in a ferrous sulphate solution---was able to evolve from scratch the ability to recognise sounds or magnetic fields.
In other words, the assembly~\footnote{It can be considered an example of evolutionary robotic device~\cite{cariani1992some}.} developed its own sensors from scratch and therefore its own \emph{relevance criteria} from the outside world.
Inspired by Pask's works, Peter Cariani proposed a classification of the kinds of adaptive behaviour attainable by physical devices~\cite{cariani1992some,Cariani2008EmergenceAC}.
He calls \emph{nonadaptive robotic devices} those devices that are not able to modify their internal structure based on their past experiences.
\emph{Adaptive computational devices} is instead the category for devices that can change their computational module, if advantageous.
However, they can only improve the mapping between their fixed inputs and outputs.
With the term \emph{structurally adaptive devices} he refers to devices capable of constructing new hardware, sensors and effectors for themselves.
As Cariani points out, this is analogous to the biological evolution of organs.
Through the building of sensors and effectors, and so with the freedom to gather and manipulate the kind of information needed to perform a given task, robotic agents can create new semantic states: \emph{relevance criteria} different from those that the designer may have imposed on the robot, initially.
Cariani identifies this last category as a necessary condition for the construction of agents with \emph{epistemic autonomy}: in this condition, the agent is completely autonomous also as regards the creation of new perspectives or points of view of itself, of the world in which it is immersed, and of the relationship between them.
Although they follow a more abstract approach than Pask's pioneering work, attempts to evolve sensors in robotic agents are present in the literature.
For example, in~\cite{mark_framework_1998} both the number and size of sensors are evolved in a simulated environment, finding a preliminary correlation between the number of sensors (in this case artificial eyes) and the complexity of the task achieved.
A different---and in some ways more general---perspective is pursued by approaches that deal with the coevolution of agents' bodies and brains~\cite{lund_evolving_1997,Dellaert_1996,bongard_evolving_2003}.
Although many of these are mainly inspired by cellular processes and biological development, they are \emph{offline} approaches.
There are works that try to apply some elements of evolution to robotics in an online setting~\cite{bredeche2018embodied}, as well as a kind of online epigenetic adaptation~\cite{brawer2017epigenetic}.\footnote{Of course, we do not consider here all the works concerning generic online adaptation of some parameters of the robotic system.}
The relevance of development in the design of artificial agents has so far been underestimated; only recently have its prospective role and the properties that can derive from it (phenotypic plasticity above all) begun to emerge in the robotic field~\cite{hunt_phenotypic_2020,jin2011morphogenetic}.
The development process in the design of artificial entities, by analogy with its biological counterpart, can be represented by an \emph{online adaptation process}~\footnote{The one we propose in the following sections is just one possible example of it.}.
According to the biological evidence reported above and to preliminary experiments conducted in artificial contexts, this mechanism has the potential to bring out novelties in the behaviour of artificial agents.
In this context we define \emph{novelty} as a behaviour or outcome reached by the artificial agent that would not have been contemplated by the designer's model of the robot.
Indeed, the robot's actual perceptual experiences and real contingencies give the agent the opportunity to overcome and override the initial designer's constraints and biases.
To conclude, development-inspired (online) approaches are viable alternatives, or complementary tools, for techniques of automatic design of robotic agents.
\section{Proof of Concept}
In light of previous discussions, we \emph{(i)} propose an \emph{online} adaptive mechanism capable of giving rise to the observable phenotypic plasticity property typical of biological organisms without requiring mutations; \emph{(ii)} start analysing the conditions under which robotic agents---using the mechanism referred to in the previous point---obtain a measurable advantage.
Firstly, we briefly introduce the Boolean networks model, since it represents the actual substrate on which our mechanism is based.
Boolean networks (BNs) were introduced by Kauffman~\cite{kauffman69} as an abstract model of gene regulatory networks.
They are discrete-state and discrete-time non-linear dynamical systems capable of complex dynamics.
Formally, a BN is represented by a directed graph with $n$ nodes, each node $i$ having a variable number $k$ of incoming nodes, a Boolean variable $x_i$, $i = 1,\ldots,n$, and a Boolean function $f_i(x_{i_1},\ldots,x_{i_k})$.
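As an illustrative sketch (in our own notation; function and variable names are not part of the original formulation), a random BN with fixed in-degree $k$ and its synchronous update can be written as:

```python
import random

def random_bn(n, k, bias, seed=42):
    """Random BN: each node gets k random input nodes and a truth table
    whose entries are 1 with probability `bias`."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[int(rng.random() < bias) for _ in range(2 ** k)]
              for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: x_i(t+1) = f_i(x_{i_1}(t), ..., x_{i_k}(t))."""
    new_state = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:  # pack the input values into a truth-table index
            idx = (idx << 1) | state[j]
        new_state.append(tables[i][idx])
    return new_state
```

Iterating `step` from an initial state yields the trajectory of the network, which eventually falls into an attractor.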
They have received much attention not only for their capacity to reproduce important properties of biological cells~\cite{shmulevich2005eukaryotic,roli2018dynamical,serra2006,helikar2012cell,DanielsWalkerCriticality} but also for their capacity to express rich dynamics combined with their relatively compact description, characteristics that make them appealing also for artificial applications.
For this reason, the so-called Boolean network robotics takes advantage of them by employing BN instances as control software in robots.
Some examples of the remarkable results that could be obtained through their employment are reported in the following works~\cite{roli2012preliminary,roli-aiia2015,RoliAttractorLandscape}.
The approach we propose---which is grounded on the BN-robotics---consists of using a Boolean network as a robot program.
Its dynamics, in a way similar to that performed by gene regulatory networks in biological cells~\cite{braccini2017applications}, determines the behaviour of the robot and ultimately its final phenotype.
The word \emph{phenotype} is used in this context with a more generic meaning than its biological counterpart: regardless of the specific physical characteristics of the robot, it identifies the overall observable behaviour achieved by the artificial agent.
As illustrated in~\cite{roli2011design} the first step to take when designing a robot control software based on Boolean networks is to choose the coupling between the nodes and the robot actuators and sensors.
Usually, this mapping is chosen at design time and stays the same throughout the design, simulation and, possibly, real-world application phases.
The mapping itself can be subject to optimisation during the design phase but, once the desired level of robot performance is reached---according to a defined fitness function---it will not undergo any further variation.
These approaches are referred to as offline design methods.
With the intention of conferring the property of phenotypic plasticity observed in the development phase of biological organisms (see \emph{(i)}), we propose a novel \emph{online} adaptive mechanism for the design of control software for robots
~\footnote{In the present discussion, we will refer only to robots with fixed morphology, although this mechanism finds natural application in self-assembling robots. They indeed can build their own sensors and really capture their relevance criteria.}.
The BN chosen as control software for the robotic agent is generated once and remains unchanged during all the robot's life.
What distinguishes our approach from past ones is the fact that what changes is the coupling between the BN nodes and the sensors of the agents~\footnote{Although this mechanism is abstract enough to be able to contemplate the variations of both sensors and actuators, in this discussion, we will consider varying only the former.}.
The coupling changes are not externally imposed by the designer: the robot determines which couplings are more suitable to it by continually interacting with the environment in which it is located.
The task chosen for our proof of concept is that of navigation with obstacle avoidance.
The robot, equipped with proximity sensors, must, therefore, try to move in its environment, represented by an arena and at the same time avoid colliding with the obstacles present in it.
This problem can be formally considered as a dynamic classification problem.
Indeed, the artificial agent is faced with a problem of classification of time series which are not independent of the agent's behaviour since they are conditioned by the dynamics belonging to its \emph{sensorimotor loop}~\cite{lungarella2001robots}.
Through a designer defined objective function, we provide a figure of merit assessing the degree of adaptation attained by the robot. This function will act as selective pressure and guide the robot adaptation process. It should be considered as an abstraction of the rewarding mechanisms (both intrinsic and extrinsic) that characterise adaptation in natural and artificial systems.
In Figure~\ref{fig:img_adaptive_sensors_image_pp}, we see a schematic representation of two consecutive steps of the process.
\begin{figure}[h!]
\centering
\includegraphics[width=.9\textwidth]{image_pp.pdf}
\caption{Schematic representation of two consecutive steps of the proposed online adaptive mechanism. The topology of the network, the number of sensors, as well as the nodes coupled with them are only used for example purposes and do not reflect the experimental conditions used in our experiments.}
\label{fig:img_adaptive_sensors_image_pp}
\end{figure}
Albeit in a more abstract form, this mechanism takes inspiration from Pask's evolvable and self-organising device.
Here, the space of possibilities among which the robot can choose is not open-ended, like the one in Pask's experiment, but is limited by the possible set of dynamics of the Boolean network, the number of sensors and the coupling combinations between the two.
Simultaneously, it can be considered an artificial counterpart of the adaptive behaviour without mutations present in the development phases of biological organisms.
Indeed, the robot exploits the feedback it receives from the environment and consequently tries to re-organise the \emph{raw genetic material} it owns.
In doing so, it does not modify the Boolean network functions or topology, but rather exploits the BN's intrinsic information-processing capabilities.
In addition, our adaptive mechanism resembles a step that takes place in the biological phenomenon of neuroplasticity or brain plasticity~\cite{neuroplasticity}.
In neuroplasticity the creation of synaptic connections and changes to neurones occur mainly during the development phase.
However, the process of refinement of the neural network that starts at birth plays a crucial role.
The latter occurs as a function of the environmental stimuli and feedback the individual receives through its activities and interactions with the environment.
The different ensembles of cognitive skills, sensorimotor coordination and, in general, all processes influenced by the brain's activity which an individual develops through experience represent the sets of possible phenotypes.
At a very high level of abstraction, our ``scrambling phase''---the phase that changes the coupling between BN nodes and robot sensors---acts like the activity-driven refinement mechanism found in the child's brain.
A further investigative idea behind this experiment, expressed in point \emph{(ii)}, is to start finding out which general principles govern the best-performing robotic agents, i.e. which principles promote and at the same time take advantage of the phenotypic plasticity characteristic of our adaptive mechanism.
Fortunately, the literature relating to Boolean networks provides us with a wide range of both theoretical and experimental results to start from.
A natural starting point for an analysis of differential performances obtained through the use of Boolean network models is that concerning the dynamical regimes in which they operate.
So, in the next sections we investigate which Boolean network dynamical regime---ordered, critical or chaotic---provides an advantage for robots equipped with the adaptive mechanism we have just introduced.
\subsection{Experimental setting}
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{footbot.png}
\caption{The robot used in the experiments.}
\label{fig:footbot}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{arena.png}
\caption{The arena used in the experiments.}
\label{fig:arena}
\end{figure}
In our experiments we used a robot model equipped with 24 proximity sensors (evenly placed along its main circumference) and controlled by two motorised wheels (see figure~\ref{fig:footbot}). The robot moves inside a squared arena, delimited by walls, with a central box (see figure~\ref{fig:arena}). The goal we want the robot to achieve is to move as fast as possible around the central box without colliding against walls and the box itself. The robot is controlled by a BN.
The coupling between the BN and the robot is as follows: two nodes are randomly chosen and their value is taken to control the two motors, which can be either ON (node with value 1) or OFF (node with value 0) and control the wheels at constant speed. The sensor readings return a value in $[0,1]$ and so are binarised by a simple step function with threshold $\theta$\footnote{In our experiments we set $\theta = 0.1$}: if the sensor value is greater than $\theta$, then the BN node is set to 1, otherwise it is set to 0. The 24 sensors are randomly associated to 24 randomly chosen nodes in the network, excluding the output ones. At each network update, the binarised values from the sensors are overridden to the current values of the corresponding nodes, so as to provide an external signal to the BN.
The adaptive mechanism consists in randomly rewiring $q$ connections between sensors and BN nodes (excluding output nodes, of course). The actual value of $q$ is randomly chosen at each iteration in $\{1,2,\ldots,6\}$. The robot is then run for $T=1200$ steps (corresponding to $120$ seconds of real time); if the current binding enables the robot to perform better, then it is kept, otherwise it is rejected and the previous one is taken as the basis for a new perturbation. We remark that the binding between proximity sensors and BN ``input'' nodes is the only change made to the network: in this way we address the question as to what extent a random BN can indeed provide a sufficient bouquet of behaviours to enable a robot to adapt to a given (minimally cognitive) task.
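The accept/reject loop just described can be sketched as follows (a minimal sketch in our notation; the `evaluate` callable is a hypothetical stand-in for running the robot for $T$ steps and returning the accumulated objective):

```python
import random

def adapt_binding(n_nodes, n_sensors, evaluate, iterations, seed=0):
    """Hill climbing on the sensor-to-node binding: rewire q random
    connections, keep the new binding only if the score improves."""
    rng = random.Random(seed)
    # sensor i is bound to node binding[i] (output nodes assumed excluded)
    binding = rng.sample(range(n_nodes), n_sensors)
    best = evaluate(binding)
    for _ in range(iterations):
        candidate = list(binding)
        q = rng.randint(1, 6)  # number of connections to rewire
        for s in rng.sample(range(n_sensors), q):
            candidate[s] = rng.randrange(n_nodes)
        score = evaluate(candidate)
        if score > best:  # otherwise the previous binding is kept
            binding, best = candidate, score
    return binding, best
```

Note that only the binding changes: the BN itself, and hence its dynamical regime, is untouched.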
BNs are generated with $n$ nodes, $k=3$ inputs per node and random Boolean functions defined by means of the bias $b$, i.e. $b$ is the probability of assigning a 1 to a truth table entry. In the experiments we tested $n \in \{100,1000\}$ and $b \in \{0.1,0.21,0.5,0.79,0.9\}$. According to~\cite{sole_critical_points}, random BNs with $k=3$ generated with bias equal to 0.1 or 0.9 are likely to be ordered, those with bias equal to 0.5 are likely to be chaotic, and biases equal to 0.21 or 0.79 characterise criticality.\footnote{Along the critical line, $k$ and $b$ are linked by this relation: $k = \dfrac{1}{2b(1-b)}$.} Only the BN nodes controlling the wheels have functions randomly chosen with bias $0.5$; this is to avoid naively conditioning the behaviour of the robot, which would tend to be always moving (resp. resting) for high biases (resp. low biases). This choice anyway makes a negligible contribution to the overall dynamical regime of the network.
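These regime assignments can be checked against the average-sensitivity criterion $s = 2b(1-b)k$ (ordered for $s<1$, critical at $s=1$, chaotic for $s>1$); a small sketch, with names of our choosing:

```python
import math

def sensitivity(k, b):
    """Average sensitivity of a random BN with in-degree k and bias b."""
    return 2 * b * (1 - b) * k

def critical_biases(k):
    """The two bias values on the critical line k = 1 / (2 b (1 - b)),
    obtained by solving b^2 - b + 1/(2k) = 0."""
    d = math.sqrt(1 - 2 / k)
    return (1 - d) / 2, (1 + d) / 2
```

For $k=3$ this yields $b \approx 0.21$ and $b \approx 0.79$, matching the values used in the experiments, while $b=0.5$ gives $s = 1.5$ (chaotic) and $b=0.1$ gives $s = 0.54$ (ordered).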
The performance is evaluated by an objective function that is accumulated along the robot execution steps and then normalised. The function is defined as follows:
\begin{center}
\begin{math}
F = (1-p_{max}) \; (1-|v_l-v_r|) \; \frac{(v_l+v_r)}{2}
\end{math}
\end{center}
\noindent
where $p_{max}$ is the maximal value returned among the proximity sensors, and $v_l$ and $v_r$ are the binarised values used to control the left and right motor, respectively. The intuition behind the function is to favour trajectories that are fast, as straight as possible and far from the obstacles~\cite{nolfi-evorobot-book}.
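The per-step objective and the binarisation above can be transcribed directly (a sketch; function names are ours, and the threshold $\theta = 0.1$ follows the setting described earlier):

```python
def binarise(reading, theta=0.1):
    """Step-function binarisation of a proximity reading in [0, 1]."""
    return 1 if reading > theta else 0

def objective(p_max, v_l, v_r):
    """Per-step score F: far from obstacles (1 - p_max), straight
    (1 - |v_l - v_r|) and fast ((v_l + v_r) / 2)."""
    return (1 - p_max) * (1 - abs(v_l - v_r)) * (v_l + v_r) / 2
```

Moving straight at full speed far from any obstacle scores 1, while turning in place or sitting next to an obstacle scores 0.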
Experiments are run in simulations with ARGoS~\cite{pinciroli2012-argos}.\footnote{The controller has been implemented in Lua and it is available from the authors upon request; raw data of the results are available as well.}
\subsection{Results}
\begin{figure}[t]
\centering
\includegraphics[scale=0.38]{boxplot-100_title.pdf}
\includegraphics[scale=0.38]{boxplot-1000_title.pdf}
\caption{Boxplots summarising the performance as a function of BN bias for BNs with $n=100$ and $n=1000$.}
\label{fig:xor}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.38]{boxplot-100-dual_title.pdf}
\includegraphics[scale=0.38]{boxplot-1000-dual_title.pdf}
\caption{Boxplots summarising the performance as a function of BN bias for BNs with $n=100$ and $n=1000$ for robots controlled by BNs with a dual encoding (i.e., 0 denotes that an obstacle is detected).}
\label{fig:xor-dual}
\end{figure}
We ran 1000 random replicas for each configuration of BN parameters and collected statistics on the best performance attained after a trial of $1.44 \times 10^4$ seconds (corresponding to $1.44 \times 10^5$ steps in total). In order to avoid variance due to the initialisation of the network, all nodes are initially set to 0.
Since the evaluation function cannot be maximal across the whole run---as the robot must anyway turn to remain in the arena and avoid the obstacles---values of $F$ greater than 0.7 correspond to a good performance.
As we can observe in figure~\ref{fig:xor}, despite the simple adaptation mechanism, a large fraction of BNs attains a good performance. Notably, critical networks attain the best performance---a result which is striking for large BNs ($n=1000$).
In particular, for $n=100$ the results achieved with $b=0.21$ are significantly better (Wilcoxon test, with $\alpha=0.05$) than all the other cases, with the exception of $b=0.5$ for which we could not reject the null hypothesis. As for $n=1000$ the case with $b=0.21$ is significantly better than all the other ones.
We observe, however, that just one of the two bias values corresponding to the critical regime corresponds to a good performance. The reason is that in our experiment the symmetry between 0 and 1 is broken, because a 1 means that an obstacle is detected. To test the robot in the dual condition, we ran the same experiments with a negative convention on the values (if the obstacle is near, then the node is set to 0; similarly, the wheels are activated if the corresponding output node state is 0\footnote{We kept this dual condition uniformly across all the choices, even if, being the bias of output nodes 0.5, the encoding has no effect on average on the wheels.}). As expected, results (see figure~\ref{fig:xor-dual}) are perfectly specular to the previous ones (and the same results of the statistical test hold).
\section{Discussion}\label{discussion}
The picture emerging from our experiments is neat: one bias value characterises the best overall performance, and this value is one of the two along the critical line. The reason for the asymmetry between the two critical bias values is to be ascribed to the symmetry breaking introduced by the binarisation of the sensor values. Anyway, the remarkable observation is that random BNs generated with a bias corresponding to the critical regime adapt better than the other kinds of BNs. Since the adaptive mechanism only acts on the mapping between sensors and input nodes, the dynamical regime of the BNs is preserved; therefore, we have further evidence that critical BNs achieve the best performance in discriminating external signals.
One might ask to what extent the adaptive mechanism we have implemented can be said to be a case of phenotypic plasticity. To answer this question we first observe that, in our setting, adaptation involves only the way external information is filtered by the robot, and so it concerns the sensing module of the system; second, this adaptation takes place during the ``life'' of the individual and is based on a feedback that rewards specific behaviours (i.e. those favouring wandering with collision avoidance), without changing the actual ``genetic'' structure. In other words, our mechanism mimics a kind of sensory development tailored to the specific environment: in general, the robot can be coupled with the environment in a huge number of possible combinations, each constraining the system to express a particular behaviour (phenotype); the mapping between sensor readings and network nodes is the result of the embodied adaptation of the sensory-motor loop and manifests one particular phenotype, emerging from the interaction between the robot and the environment.
\section{Conclusion}\label{conclusions}
In this work we have shown that robots controlled by critical BNs subject to sensor adaptation achieve the highest level of performance in wandering behaviour with collision avoidance. The sensor adaptation mechanism used in this work consists in varying the coupling between robot sensors and BN nodes---whose values are then set by the binarised readings of the associated sensors. Other possible adaptive mechanisms can be chosen, e.g. varying the coupling between BN nodes and robot actuators, and can also be combined; in general, structural modifications of the BN are also possible, such as those acting on Boolean functions, network topology and network size. These adaptive mechanisms are the subject of ongoing work, and preliminary results suggest that sensor and actuator adaptation mechanisms perform markedly better than structural ones, and that critical BNs again attain superior performance.
As a next step, we plan to investigate the relation between criticality of controlling BNs, their performance and the maximisation of some information theory measures, such as predictive information~\cite{ay2008predictive}, integrated information~\cite{edlund2011integrated} and transfer entropy~\cite{lizier2008information}.
Besides providing evidence to the \textit{criticality hypothesis}, the results we have presented make it possible to speculate further: criticality may be a property that enables phenotypic plasticity---at least as long as sensory adaptation is concerned. We believe that this outcome provides a motivation for deeper investigations, which may be primarily conducted in simulation or anyway with artificial systems. Nevertheless, we also envisage the possibility of devising wet experiments, in which the dynamical regime of an organism is externally controlled and its ability to exhibit phenotypic plasticity can be estimated.
In addition, a mechanism like the one we have introduced may be an effective tool for tuning artificial systems to the specific environment in which they have to operate. As a futuristic application, we imagine the construction of miniaturised robots that can accomplish missions precluded to humans, such as being inoculated into higher organisms to repair them, or recovering polluted environments.
In fact, recent technological advances have made it possible to build incredibly small robots, down to sizes of tens of nanometres. The current smallest robots---built from biological matter---can perform only a few predetermined actions,\footnote{See, e.g., the recent prominent case of Xenobots~\cite{xenobots}.} and therefore cannot attain the level of adaptivity and robustness needed for a complex mission. On the other hand, Artificial Intelligence software has recently made tremendous advancements and has proved capable of learning and accomplishing difficult tasks with a high degree of reliability. This software, however, cannot be run on tiny robots. A viable way of filling this gap is provided by control programs based on unconventional computation, such as those derived from cell dynamics models, where phenotypic plasticity may play an important role.
\section{Realizing a controlled unitary gate}
\label{suppcU}
In this section, we demonstrate how the multi-qubit C$_k$Z$^m$ gates in the main text can be generalized to generic C$_kU_1 \cdots U_m$ gates, in which an arbitrary unitary $U_i$ gate is applied to the $i$th qubit. This is achieved in a manner that is similar to the generalization for two-qubit gates and relies on single-qubit gates \cite{Barenco1995}. In particular, we rely on the fact that a single-qubit unitary can be written in the form $U = e^{i \delta} W$, where $W \in SU(2)$. Additionally, we take advantage of the fact that there exist matrices $A,B,C \in SU(2)$ such that $A B C = I$ and $W =A Z B Z C$ \cite{Barenco1995}.
The latter of these two identities implies a simple way to realize a two-qubit C$W$ gate: apply $A$, CZ, $B$, CZ, $C$, where $A,B,C$ are applied to the target qubit. The generalization to multiple controls and targets is straightforward. For each target qubit, we choose $A_i, B_i, C_i$ such that $W_i$ is applied to that qubit.
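As an illustrative sketch of one such choice (our own parametrisation; \cite{Barenco1995} gives the analogous decomposition with $X$ conjugation), write $W = R_x(\alpha) R_y(\theta) R_x(\beta)$ and take
\begin{equation}
A = R_x(\alpha) R_y(\theta/2), \qquad B = R_y(-\theta/2) R_x\!\left(-\tfrac{\alpha+\beta}{2}\right), \qquad C = R_x\!\left(\tfrac{\beta-\alpha}{2}\right).
\end{equation}
Direct multiplication gives $ABC = I$, while conjugation by $Z$ flips both rotation axes, $Z R_x(\phi) Z = R_x(-\phi)$ and $Z R_y(\phi) Z = R_y(-\phi)$, so that $A Z B Z C = R_x(\alpha) R_y(\theta) R_x(\beta) = W$.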
To fully realize general $U_i$, a phase needs to be applied. This can be achieved through a slight modification of the pulse sequence on the target atoms. In the main text, we assumed that the $2 \pi$ pulse was applied using the same Rabi frequency throughout. If we instead apply two $\pi$ pulses using Rabi frequencies with different phases, the $|1 0 \rangle$ state will pick up an extra phase based on the difference of the Rabi frequency phases, and $|1 0 \rangle \to - e^{i \delta/2} |10\rangle$. By applying an extra pair of $\pi$ pulses from the $|1\rangle$ state of the target qubits to the $|t\rangle$ state, we can realize $|1 1 \rangle \to e^{i \delta/2} |1 1 \rangle$. Applying the necessary Pauli-X gates to the target qubit, the resulting gate applied to the target qubit is $Z_{\delta} \equiv e^{i \delta/2} Z$. Using this controlled gate in place of the CZ gates above, the corresponding unitary is then $U_i = e^{i \delta_i} W_i$. Since the phases for all the $U_i$ add together, we only need to realize a single overall phase $\sum_i \delta_i$. To reduce the local control of the phase needed, we use the same phase $\langle \delta \rangle = \frac{1}{m}\sum_i \delta_i$ for all the target qubits. This gate sequence is illustrated in Fig.~\ref{cU}.
We could alternatively apply the necessary phase $\sum_i \delta_i$ through just one of the target qubits, while the remaining target qubits contribute no phase. Additionally, in the case where there is only a single control qubit, we only need to apply a phase $\sum_i \delta_i$ to the $|1\rangle$ state of the control qubit via a single-qubit gate.
\begin{figure}[h]
\centering
\includegraphics[scale=.33]{ControlUAve.pdf}
\caption{Generalization of C$_k$Z$^m$ gates to C$_kU_1 \cdots U_m$ for general unitaries $U_i$. The control and target qubits are denoted by $c_i$ and $t_i$, respectively. $A_i, B_i, C_i \in SU(2)$ with $A_i B_i C_i = I$, and $Z_{\langle \delta \rangle} = e^{i \langle \delta \rangle/2} Z$. The unitaries applied to each target qubit are $U_i = e^{i \langle \delta \rangle} A_i Z B_i Z C_i$, which is equivalent to applying the unitaries $U_i = e^{i \delta_i} A_i Z B_i Z C_i$, where $ \frac{1}{m}\sum_i \delta_i = \langle \delta \rangle$.}
\label{cU}
\end{figure}
\section{Different driving for control and target atoms}
\label{ctdriveSupp}
In this section, we discuss the scenario in which the control and target atoms can be dressed with different fields. This would rely on using dressing fields whose wavelengths are small compared to the atomic spacing. This could be achieved, for example, via a two-photon process through a low-energy state (e.g.,~a ground state) which couples the $|p_0 \rangle$ and $|p_+\rangle$ states, although large Rabi frequencies will be more difficult to achieve.
For fixed magnitudes of the coefficients of the dressed states, the intraspecies interaction is maximized when $t_0, t_+$ are real and have opposite signs. This can be achieved, for example, by taking $\Omega_{c,0} = \Omega_{t,0}$, $\Omega_{c,+} = - \Omega_{t,+}$, where the first subscript denotes whether the drive is applied to a control ($c$) or target ($t$) atom and the second subscript denotes the polarization of the drive. Taking normalization factors into account, the maximal interaction is given by
\begin{equation}
\langle c t| V_{dd} |c t \rangle = \pm \frac{4 c_0 t_0 \mu_0^2}{\mathcal{N}_0^2 \mathcal{N}_1^2} \frac{1 - 3 \cos^2 \theta}{r^3},
\end{equation}
where $\mathcal{N}_0^2 = 1+ (1+2 M^2) |c_0|^2$, $\mathcal{N}_1^2 = 1 + (1+2 M^2)|t_0|^2$ are normalization factors and $M = \mu_0/\mu_+$. The interaction is maximized (in magnitude) when $c_0 = \pm t_0 = \sqrt{1+2M^2}^{-1}$, where it takes the value
\begin{equation}
\langle c t| V_{dd} |c t \rangle = \pm \frac{1}{\mu_0^{-2} + 2 \mu_+^{-2}} \frac{1 - 3 \cos^2 \theta}{r^3},
\end{equation}
which is the harmonic mean of the two individual dipole-dipole interactions. This is generally larger than what can be achieved when the atoms are dressed with the same drives. Off-diagonal interactions need not be considered in general, as the different drives result in different light shifts, making the off-diagonal interactions off-resonant.
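As a consistency check, the claimed optimum and the harmonic-mean form of the maximal interaction can be verified symbolically. A minimal SymPy sketch, with the angular factor $(1 - 3\cos^2\theta)/r^3$ dropped and $c_0, t_0$ taken real and positive:

```python
import sympy as sp

# c0, t0: dressed-state coefficients; mu0, mup: the two dipole matrix elements.
c0, t0, mu0, mup = sp.symbols('c0 t0 mu0 mup', positive=True)
M = mu0 / mup
f = 4*c0*t0*mu0**2 / ((1 + (1 + 2*M**2)*c0**2) * (1 + (1 + 2*M**2)*t0**2))

copt = 1 / sp.sqrt(1 + 2*M**2)                 # claimed optimum c0 = t0
# The gradient vanishes at the claimed optimum:
assert sp.simplify(sp.diff(f, c0).subs({c0: copt, t0: copt})) == 0
# The maximal value equals the harmonic-mean form 1/(mu0^-2 + 2 mu_+^-2):
fmax = sp.simplify(f.subs({c0: copt, t0: copt}))
assert sp.simplify(fmax - 1/(mu0**-2 + 2*mup**-2)) == 0
```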
\section{Calculation of van der Waals interactions}
\label{vdWsupp}
In this section, we present the method used for calculating van der Waals interactions for the dressed states. We assume that the atomic separation is sufficiently large that $C_6$ can be determined via second-order perturbation theory. Due to the degeneracy of the states used in the dressing with undressed states and the fact that the microwave Rabi frequencies are large compared to the dipole-dipole interactions, it is important to take into account light shifts due to the dressing. In the rotating frame, the Hamiltonian of the system is given by
\begin{equation}
H = E_c |c \rangle \langle c| + E_t |t \rangle \langle t| + E_3 |3 \rangle \langle 3| + \sum_u E_u |u \rangle \langle u| + V_1 + V_2 + V_3,
\end{equation}
where $|3 \rangle$ is the third dressed state which is not involved in the Rydberg gate, $|u \rangle$ are undressed Rydberg states, $E_\mu$ is the energy (including light shifts) of state $|\mu \rangle$ in the rotating frame, and there are three types of dipole-dipole interaction terms
\begin{subequations}
\begin{equation}
V_1 = \sum_{u, u'} \sum_{\sigma, \sigma'} V_{w_\sigma u}^{w_{\sigma'} u'} e^{i (\nu_\sigma + \nu_{\sigma'}) t} |w_\sigma w_{\sigma '} \rangle \langle u u'| + H.c.,
\end{equation}
\begin{equation}
V_2 = \sum_{d_1,d_2,d_3,d_4} V_{d_1 d_3}^{d_2 d_4} |d_1 d_2 \rangle \langle d_3 d_4| + H.c.,
\end{equation}
\begin{equation}
V_3 = \sum_{d_1,d_2,d_3,u} V_{d_1 d_3}^{d_2 u}(t) (|d_1 d_2 \rangle \langle d_3 u|+|d_2 d_1 \rangle\langle u d_3|) + H.c.,
\end{equation}
\end{subequations}
where $\sigma = \{s, 0, +\}$, $\{w_s,w_0,w_+\} \equiv \{s, p_0, p_+\}$, $d_i \in \{c, t, 3\}$, and $V^{\psi' \phi'}_{\psi \phi} = \langle \psi \psi' | V_{dd} | \phi \phi' \rangle$, where $V_{dd}$ is the dipole-dipole interaction. Thus $V_1$ corresponds to undressed intermediate states, $V_2$ to dressed intermediate states, and $V_3$ to a mix of dressed and undressed intermediate states. The mixed term $V_3$ must be put in the basis of the dressed states in order to properly identify the energies of the intermediate states. Due to the rotating frames, $V_1$ possesses rotating terms; $V_3$ likewise possesses rotating terms, which are more complicated due to the change of basis. Defining $|w_\sigma \rangle = \sum_i R_{\sigma i} |d^i \rangle$ with $\{d^1, d^2, d^3\} \equiv \{c, t, 3 \}$, the $V_3$ terms can be written
\begin{equation}
V_{d_1 d_3}^{d_2 u}(t) = \sum_{\sigma, \sigma', \sigma''} V_{w_\sigma w_{\sigma''}}^{w_{\sigma'} u} e^{i (\nu_\sigma + \nu_{\sigma'} - \nu_{\sigma''})t} R_{\sigma 1} R_{\sigma' 2} R_{\sigma '' 3}^*,
\end{equation}
which may possess multiple terms rotating with different frequencies.
Due to the presence of multiple rotating frames, there is no single rotating frame that removes all time dependence, as would be required to apply time-independent perturbation theory. Instead, we apply a Floquet approach and expand the states in a quasi-energy series
\begin{equation}
|u u' \rangle = \sum_{n, m} e^{i n \nu_m t} |u u'^{m} \rangle,
\end{equation}
where $n$ labels the harmonic and $m$ labels the different rotating frames needed for a given state. Practically speaking, this has the effect of shifting the energy defect for a given rotating term by $n \nu_m$. With this in mind, we can write the perturbative corrections from each of the three interaction terms
\begin{subequations}
\label{vdWterms}
\begin{equation}
V_{1,cc}^{vdW} = \sum_{\sigma, \sigma'} \sum_{u,u'} \frac{ |a_\sigma a_{\sigma'} V_{w_\sigma u}^{w_{\sigma'} u'}|^2}{2 E_c + \nu_\sigma + \nu_{\sigma'} - E_{u} - E_{u'}},
\label{vdW1}
\end{equation}
\begin{equation}
V_{2,cc}^{vdW} = \sum_{(d_1, d_2) \neq(c,c)} \frac{|V_{d_1 c}^{d_2 c}|^2}{2 E_c - E_{d_1} -E_{d_2}},
\label{vdW2}
\end{equation}
\begin{equation}
V_{3,cc}^{vdW} = 2 \sum_{ \sigma, \sigma', \sigma''} \sum_{d, u} \frac{ |R_{\sigma c} R_{\sigma' c} R_{\sigma'' d}^* V_{w_\sigma w_{\sigma''}}^{w_{\sigma'} u}|^2}{2 E_c + \nu_\sigma + \nu_{\sigma'} - \nu_{\sigma ''} - E_d - E_{u}},
\end{equation}
\end{subequations}
where we have, without loss of generality, focused on the control-control vdW interactions, and $a_\sigma$ are the normalized coefficients of $|c\rangle$. Eqs.~(\ref{vdW1},\ref{vdW2}) are the contributions of the typical $s$- and $p$-state vdW interactions with the effect of the light shifts included. The third term is a new contribution due to the dressing. In all three cases, the effects of the light shifts must be included, as they are needed to tune the vdW interactions to zero. This approach is easily generalized to cases where additional states are coupled by the drives.
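Each term in Eqs.~(\ref{vdWterms}) is a second-order sum of the form $\sum_k |V_k|^2/\Delta_k$. A minimal numeric sketch, assuming the dressed-state coefficients ($a_\sigma$, $R$ factors) are already folded into the couplings; the helper `vdw_shift` is hypothetical, not from the paper:

```python
import numpy as np

def vdw_shift(couplings, defects):
    """Second-order (van der Waals) energy shift: sum_k |V_k|^2 / Delta_k.

    couplings: effective dipole-dipole matrix elements to intermediate pair
               states (dressed-state coefficients already folded in),
               in units of C3 / r^3.
    defects:   energy defects, e.g. 2*E_c + n*nu_m - E_intermediate, including
               light shifts from the dressing and Floquet shifts n*nu_m.
    """
    V = np.asarray(couplings, dtype=complex)
    D = np.asarray(defects, dtype=float)
    return float(np.sum(np.abs(V)**2 / D))

# Toy numbers (illustrative only): two intermediate channels with opposite
# defects can cancel, as exploited in the text to tune the vdW interactions to zero.
print(vdw_shift([1.0, 1.0], [2.0, -2.0]))  # 0.0
```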
\section{GHZ state preparation}
\label{GHZsupp}
In this section, we present the details of the protocol used to prepare large GHZ states using the multi-qubit gates in the main text. This approach is inspired by the protocol in Ref.~\cite{Eldredge2017}. In order to prepare a GHZ state, we rely on the fact that a controlled NOT (CNOT) gate has the following behavior:
\begin{equation}
\text{CNOT} \left(\frac{|0 0 \rangle + |1 0 \rangle}{\sqrt{2}}\right) = \frac{|0 0 \rangle + |11 \rangle}{\sqrt{2}}.
\end{equation}
By using any qubit that is already part of the GHZ state as a control and a qubit in the $|0\rangle$ state as a target, the size of the GHZ state can be sequentially increased. By using the multi-qubit gates developed in the main text, many qubits can be incorporated into the GHZ state in a single step. Although the gate in the main text is a C$_k$Z$^m$ gate, a C$_k$NOT$^m$ can be realized either via a modification to the pulse sequence or by applying single-qubit Hadamard gates to the target qubits before and after the C$_k$Z$^m$ gate; we consider the latter implementation.
The GHZ state preparation protocol is as follows: Initially, all atoms are in a square lattice in the $|0\rangle$ state except for a single atom in the $(|0 \rangle + |1 \rangle)/\sqrt{2}$ state. This single atom will be the control atom, while its four nearest neighbors are target atoms. A Hadamard gate is applied to the target qubits, taking them to the $(|0\rangle + |1 \rangle)/\sqrt{2}$ state, upon which a C$_1$Z$^4$ gate is applied. A Hadamard gate is applied to the target qubits once more, ending the first step and creating a 5-atom GHZ state. For the subsequent steps, the outermost atoms of the GHZ state are controls while their nearest neighbors outside of the GHZ state are targets. These steps are illustrated in Fig.~\ref{GHZfig}.
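The growth mechanism can be checked with an ideal state-vector sketch that ignores the Rydberg-level dynamics and gate errors entirely; it only illustrates that sequential CNOTs from a qubit in superposition produce a GHZ state, as in the multi-target C$_k$NOT$^m$ case:

```python
import numpy as np

def cnot(state, control, target, n):
    """Apply an ideal CNOT to an n-qubit state vector (qubit 0 = leftmost bit)."""
    out = state.copy()
    for idx in range(2**n):
        c_bit = (idx >> (n - 1 - control)) & 1
        t_bit = (idx >> (n - 1 - target)) & 1
        if c_bit == 1 and t_bit == 0:      # swap each target-flipped pair once
            flipped = idx | (1 << (n - 1 - target))
            out[idx], out[flipped] = state[flipped], state[idx]
    return out

# Grow a GHZ state: control in (|0>+|1>)/sqrt(2), targets in |0>.
n = 3
psi = np.zeros(2**n)
psi[0b000] = psi[0b100] = 1 / np.sqrt(2)
for t in (1, 2):                           # one CNOT per newly added qubit
    psi = cnot(psi, control=0, target=t, n=n)

ghz = np.zeros(2**n)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)
assert np.allclose(psi, ghz)               # (|000> + |111>)/sqrt(2)
```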
\begin{figure}[h]
\subfloat[]{
\includegraphics[scale=.4]{GHZBG1.pdf}
}
\qquad
\subfloat[]{
\includegraphics[scale=.4]{GHZBG2.pdf}
}
\qquad
\subfloat[]{
\includegraphics[scale=.4]{GHZBG3.pdf}
}
\caption{GHZ state preparation steps. The light blue circles denote the control atoms, the dark green circles denote the target atoms, and the white and black circles indicate atoms not involved in a given step. The light blue and black atoms together are in a GHZ state. After each step, the new GHZ state includes (a) 5, (b) 13, or (c) 25 atoms. \label{GHZfig}}
\end{figure}
The resulting error for each step is
\begin{equation}
\epsilon = (N_c + N_t) \frac{\pi}{4 \Omega \tau} + N_t \langle V_b^{-2} \rangle \Omega^2,
\end{equation}
which has a minimum
\begin{equation}
\epsilon = \frac{3 \pi^{2/3} N_t^{1/3} (N_c+N_t)^{2/3}}{4 (v \tau)^{2/3}},
\end{equation}
where $N_c$ ($N_t$) is the number of control (target) atoms and $\langle V_b^{-2} \rangle = v^{-2}$ is the average value of $V_b^{-2}$ for the target atoms. We have dropped contributions from vdW interactions since they can be made negligible with suitable dressing. For the first, second, and third steps, $\langle (V_{nn}/V_b)^{2} \rangle = 1, .44, .32$, where $V_{nn}$ is the nearest-neighbor interaction. This ratio continues to decrease before reaching a limit of $\langle (V_{nn}/V_b)^{2} \rangle = .196$ for large steps.
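The quoted minimum follows from optimizing $\epsilon(\Omega)$ over the Rabi frequency. A SymPy sketch verifying both the optimal $\Omega$ and the resulting error:

```python
import sympy as sp

Om, tau, v, Nc, Nt = sp.symbols('Omega tau v N_c N_t', positive=True)
# Per-step error: rotation error + blockade leakage, as in the text.
eps = (Nc + Nt) * sp.pi / (4 * Om * tau) + Nt * Om**2 / v**2

# Stationary point of eps(Omega):
Om_opt = (sp.pi * (Nc + Nt) * v**2 / (8 * Nt * tau))**sp.Rational(1, 3)
assert sp.simplify(sp.diff(eps, Om).subs(Om, Om_opt)) == 0

# The resulting minimum matches the quoted expression:
eps_min = sp.simplify(eps.subs(Om, Om_opt))
claimed = 3 * sp.pi**sp.Rational(2, 3) * Nt**sp.Rational(1, 3) \
        * (Nc + Nt)**sp.Rational(2, 3) / (4 * (v * tau)**sp.Rational(2, 3))
assert sp.simplify(eps_min - claimed) == 0
```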
Based on the dressing of Fig.~3 of the main text, where vdW interactions are made negligible, the maximal nearest-neighbor control-target interaction is $2 \pi \times 2.7$ MHz, which is 80 times smaller than the smallest microwave Rabi frequency. A factor of 10 ensures the microwave fields are stronger than the undressed dipole-dipole interactions, while the factor of 8 accounts for the reduction of the dressed dipole-dipole interactions relative to the undressed ones. For $\tau_{c/t} \approx .44$ ms, the errors in the GHZ state preparation are 2\%, 4.5\%, and 7.8\% for 5-, 13-, and 25-atom GHZ states, respectively.
\section{Dataset}
\label{sec:data}
We first give an overview of the setup and content of our dataset.
\PAR{Locations:}
The initial release of the dataset contains 3 large locations representative of AR use cases:
1) HGE (18'000 m$^2$) is the ground floor of a historical university building composed of multiple large halls and large esplanades on both sides.
2) CAB (12'000 m$^2$) is a multi-floor office building composed of multiple small and large offices, a kitchen, storage rooms, and 2 courtyards.
3) LIN (15'000 m$^2$) is a few blocks of an old town with shops, restaurants, and narrow passages.
HGE and CAB contain both indoor and outdoor sections with many symmetric structures.
Each location underwent structural changes over the span of a year, e.g.\@\xspace, the front of HGE turned into a construction site and the indoor furniture was rearranged.
See \cref{fig:locations} and \cref{sec:visuals} for visualizations.
\PAR{Data collection:}
We collected data using Microsoft HoloLens 2 and Apple iPad Pro devices with custom raw sensor recording applications.
10 participants were each given one device and asked to walk through a common designated area.
They were only given the instructions to freely walk through the environment to visit, inspect, and find their way around.
This yielded diverse camera heights and motion patterns.
Their trajectories were not planned or restricted in any way.
Participants visited each location, both during the day and at night, at different points in time over the course of up to 1 year.
In total, each location is covered by more than 100 sessions of 5 minutes.
We did not need to prepare the capturing site in any way before recording.
This enables easy, barrier-free, crowd-sourced data collection.
Each location was also captured two to three times by NavVis M6 trolley or VLX backpack mapping platforms, which generate textured dense 3D models of the environment using laser scanners and panoramic cameras.
\PAR{Privacy:}
We paid special attention to comply with privacy regulations.
Since the dataset is recorded in public spaces, our pipeline anonymizes all visible faces and licence plates.
\PAR{Sensors:}
We provide details about the recorded sensors in \cref{tab:sensors}.
The HoloLens has a specialized large field-of-view (FOV) multi-camera tracking rig (low resolution, global shutter) \cite{ungureanu2020hololens}, while the iPad has a single, higher-resolution camera with rolling shutter and more limited FOV.
We also recorded outputs of the real-time AR tracking algorithms available on each device, which includes relative camera poses and sensor calibration.
All images are undistorted.
All sensor data is registered into a common reference frame with accurate absolute GT poses using the pipeline described in the next section.
\begin{figure}[t]
\centering
\input{figures/trajectories}
\vspace{2mm}
\caption{%
\textbf{The locations feature diverse indoor and outdoor spaces.}
High-quality meshes, obtained from lidar, are registered with numerous AR sequences, each shown here as a different color.
%
}
\label{fig:locations}
\end{figure}
\section{Ground-truth generation}
\label{sec:method}
The GT estimation process takes as input the raw data from the different sensors.
The entire pipeline is fully automated and does not require any manual alignment or input.
\PAR{Overview:}
We start by aligning different sessions of the laser scanner by using the images and the 3D lidar point cloud.
When registered together, they form the GT reference map, which accurately captures the structure and appearance of the scene.
We then register each AR sequence individually to the reference map using local feature matching and relative poses from the on-device tracker.
Finally, all camera poses are refined jointly by optimizing the visual constraints within and across sequences.
\PAR{Notation:}
We denote by ${}_i\*T_j \in \text{SE}(3)$ the 6-DoF pose, encompassing rotation and translation, that transforms a point from frame $j$ to frame $i$.
Our goal is to compute globally-consistent absolute poses ${}_w\*T_i$ for all cameras $i$ of all sequences and scanning sessions into a common reference world frame $w$.
\subsection{Ground-truth reference model}
\label{sec:method:ref}
Each capture session $S \in \mathcal S$ of the NavVis laser-scanning platform is processed by a proprietary inertial-lidar SLAM that estimates, for each image $i$, a pose ${}_0\*T_i^S$ relative to the beginning of the session.
The software filters out noisy lidar measurements, removes dynamic objects, and aggregates the remainder into a globally-consistent colored 3D point cloud with a grid resolution of 1cm.
To recover visibility information, we compute a dense mesh using the Advancing Front algorithm~\cite{cohen2004greedy}.
Our first goal is to align the sessions into a common GT reference frame.
We assume that the scan trajectories are drift-free and only need to register each with a rigid transformation~${}_w\*T_0^S$.
Scan sessions can be captured between extensive periods of time and therefore exhibit large structural and appearance changes.
We use a combination of image and point cloud information to obtain accurate registrations without any manual initialization.
The steps are inspired by the reconstruction pipeline of Choi~et~al.\@\xspace~\cite{choi2015robust,Zhou2018}.
\PAR{Pair-wise registration:}
We first estimate a rigid transformation ${}_A\*T_B$ for each pair of scanning sessions $(A, B) \in \mathcal{S}^2$.
For each image $I_i^A$ in $A$, we select the $r$ most similar images $(I^B_j)_{1 \leq j \leq r}$ in $B$ based on global image descriptors~\cite{vlad,arandjelovic2016netvlad,apgem}, which helps the registration scale to large scenes.
We extract sparse local image features and establish 2D-2D correspondences $\{\*p^A_i,\*p^B_j\}$ for each image pair $(i, j)$.
The 2D keypoints $\*p_i \in \mathbb{R}^2$ are lifted to 3D, $\*P_i \in \mathbb{R}^3$, by tracing rays through the dense mesh of the corresponding session.
This yields 3D-3D correspondences $\{\*P^A_i,\*P^B_j\}$, from which we estimate an initial relative pose~\cite{umeyama1991least} using RANSAC~\cite{fischler1981random}.
This pose is refined with the point-to-plane Iterative Closest Point (ICP) algorithm~\cite{rusinkiewicz2001efficient} applied to the pair of lidar point clouds.
We use state-of-the-art local image features that can match across drastic illumination and viewpoint changes~\cite{sarlin2019,detone2018superpoint,revaudr2d2}.
Combined with the strong geometric constraints in the registration, our system is robust to long-term temporal changes and does not require manual initialization.
Using this approach, we have successfully registered building-scale scans captured at more than a year of interval with large structural changes.
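The initial 3D-3D estimate of \cite{umeyama1991least} can be sketched as follows; the RANSAC inlier loop and the subsequent ICP refinement are omitted, and the function name is illustrative:

```python
import numpy as np

def rigid_from_3d3d(P_a, P_b):
    """Least-squares rigid transform (R, t) with P_a ~ R @ p + t for matched
    points p in P_b, via the SVD (Umeyama/Kabsch) solution.
    P_a, P_b: (N, 3) arrays of 3D-3D correspondences."""
    ca, cb = P_a.mean(axis=0), P_b.mean(axis=0)
    H = (P_b - cb).T @ (P_a - ca)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                          # guard against reflections
    t = ca - R @ cb
    return R, t

# Synthetic check: recover a known rotation about z and a known translation.
rng = np.random.default_rng(0)
P_b = rng.normal(size=(20, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P_a = P_b @ R_true.T + t_true
R, t = rigid_from_3d3d(P_a, P_b)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```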
\PAR{Global alignment:}
We gather all pairwise constraints and jointly refine all absolute scan poses $\{{}_w\*T_0^S\}$ by optimizing a pose graph~\cite{grisetti2010tutorial}.
The edges are weighted with the covariance matrices of the pair-wise ICP estimates.
The images of all scan sessions are finally combined into a unique reference trajectory $\{{}_w\*T_i^\text{ref}\}$.
The point clouds and meshes are aligned according to the same transformations.
They define the reference representation of the scene, which we use as a basis to obtain GT for the AR sequences.
\PAR{Ground-truth visibility:}
The accurate and dense 3D geometry of the mesh allows us to compute accurate visual overlap between two cameras with known poses and calibration.
Inspired by Rau~et~al.\@\xspace~\cite{rau2020imageboxoverlap}, we define the overlap of image $i$ wrt.~a reference image $j$ by the ratio of pixels in $i$ that are visible in $j$:
\begin{equation}
O(i\rightarrow j) = \frac{\sum_{k\in(W,H)} \mathbbm{1}\left[
\mathrm{\Pi}_j({}_w\*T_j, \mathrm{\Pi}_i^{-1}({}_w\*T_i, \*p^i_k, z_k)) \in (W, H)
\right]\:\alpha_k
}{W\cdot H} \enspace,
\end{equation}
where $\mathrm{\Pi}_i$ projects a 3D point $k$ to camera $i$, $\mathrm{\Pi}_i^{-1}$ conversely backprojects it using its known depth $z_k$ with $(W, H)$ as the image dimensions.
The contribution of each pixel is weighted by the angle $\alpha_k = \cos(\*n_{i,k}, \*n_{j,k})$ between the two rays.
To handle scale changes, it is averaged both ways $i\rightarrow j$ and $j\rightarrow i$.
This score is efficiently computed by tracing rays through the mesh, with an occlusion check for robustness.
The score $O\in[0,1]$ favors images that observe the same scene from similar viewpoints.
Unlike sparse co-visibility in an SfM model~\cite{radenovic2018fine}, our formulation is independent of the amount of texture and the density of the feature detections.
This score correlates with matchability -- we thus use it as GT when evaluating retrieval and to determine an upper bound on the theoretically achievable performance of our benchmark.
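The overlap score can be sketched as follows, with depth maps standing in for mesh ray-tracing; the angle weighting $\alpha_k$ and the symmetric average over both directions are omitted, and all names are illustrative:

```python
import numpy as np

def overlap(K_i, T_wi, depth_i, K_j, T_wj, depth_j, tol=0.05):
    """Fraction of pixels of image i whose 3D points reproject inside image j
    without being occluded. K_*: 3x3 intrinsics; T_w*: 4x4 camera-to-world
    poses; depth maps play the role of ray-tracing the mesh."""
    H, W = depth_i.shape
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    rays = np.linalg.inv(K_i) @ np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
    P_i = rays * depth_i.reshape(1, -1)             # back-project with depth
    P_w = T_wi[:3, :3] @ P_i + T_wi[:3, 3:4]        # camera i -> world
    P_j = T_wj[:3, :3].T @ (P_w - T_wj[:3, 3:4])    # world -> camera j
    z = P_j[2]
    with np.errstate(divide='ignore', invalid='ignore'):
        uv = (K_j @ P_j)[:2] / z
        inside = (z > 1e-6) & (uv[0] >= 0) & (uv[0] < W) \
                            & (uv[1] >= 0) & (uv[1] < H)
    visible = np.zeros(H * W, dtype=bool)
    idx = np.flatnonzero(inside)
    px = np.floor(uv[:, idx]).astype(int)
    # Occlusion check: the point must agree with j's depth at that pixel.
    visible[idx] = np.abs(depth_j[px[1], px[0]] - z[idx]) < tol * z[idx]
    return visible.mean()

K = np.array([[100., 0., 32.], [0., 100., 24.], [0., 0., 1.]])
depth = np.ones((48, 64))
print(overlap(K, np.eye(4), depth, K, np.eye(4), depth))  # 1.0: full self-overlap
```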
\subsection{Sequence-to-scan alignment}
\label{sec:method:seq}
We now aim to register each AR sequence individually into the dense GT reference model (see \cref{fig:seq-to-scan}).
Given a sequence of $n$ frames, we introduce a simple algorithm that estimates the per-frame absolute pose $\{{}_w\*T_i\}_{1 \leq i \leq n}$.
A frame refers to an image taken at a given time or, when the device is composed of a camera rig with known calibration (e.g.\@\xspace, HoloLens), to a collection of simultaneously captured images.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/pipeline.pdf}
\caption{{\bf Sequence-to-scan alignment.}
We first estimate the absolute pose of each sequence frame using image retrieval and matching.
This initial localization prior is used to obtain a single rigid alignment between the input trajectory and the reference 3D model via voting.
The alignment is then relaxed by optimizing the individual frame poses in a pose graph based on both relative and absolute pose constraints.
We bootstrap this initialization by mining relevant image pairs and re-localizing the queries.
Given these improved absolute priors, we optimize the pose graph again and finally include reprojection errors of the visual correspondences, yielding a refined trajectory.
%
}
\label{fig:seq-to-scan}
\end{figure}
\PAR{Inputs:}
We assume given trajectories $\{{}_0\*T^\text{track}_{i}\}$ estimated by a visual-inertial tracker -- we use ARKit for iPhone/iPad and the on-device tracker for HoloLens.
The tracker also outputs per-frame camera intrinsics $\{\*C_i\}$, which account for auto-focus or calibration changes and are for now kept fixed.
\PAR{Initial localization:}
For each frame of a sequence $\{I^{\text{query}}_{i}\}$, we retrieve a fixed number $r$ of relevant reference images $(I^\text{ref}_j)_{1 \leq j \leq r}$ using global image descriptors.
We match sparse local features~\cite{lowe2004distinctive,detone2018superpoint,revaudr2d2} extracted in the query frame to each retrieved image $I^\text{ref}_j$ obtaining a set of 2D-2D correspondences $\{\*p_{i,k}^\text{q},\*p_{j,k}^\text{ref}\}_k$.
The 2D reference keypoints are lifted to 3D by tracing rays through the mesh of the reference model, yielding a set of 2D-3D correspondences $\mathcal{M}_{i, j} := \{\*p_{i,k}^\text{q},\*P_{j,k}^\text{ref}\}_k$.
We combine all matches per query frame $\mathcal{M}_{i} = \cup_{j=1}^{r} \mathcal{M}_{i, j}$ and estimate an initial absolute pose ${}_w\*T^\text{loc}_i$ using the (generalized) P3P algorithm~\cite{hee2016minimal} within a LO-RANSAC scheme~\cite{chum2003locally} followed by a non-linear refinement~\cite{Schoenberger2016Structure}.
Because of challenging appearance conditions, structural changes, or lack of texture, some frames cannot be localized in this stage.
We discard all poses that are supported by a low number of inlier correspondences.
\PAR{Rigid alignment:}
We next recover a coarse initial pose $\{{}_w\*T^\text{init}_i\}$ for all frames, including those that could not be localized.
Using the tracking, which is for now assumed drift-free, we find the rigid alignment ${}_w\*T^\text{init}_0$ that maximizes the consensus among localization poses.
This voting scheme is fast and effectively rejects poses that are incorrect, yet confident, due to visual aliasing and symmetries.
Each estimate is a candidate transformation ${}_w\*T^i_0 = {}_w\*T^\text{loc}_i \left({}_0\*T^\text{track}_{i}\right)^{-1}$, for which other frames can vote, if they are consistent within a threshold $\tau_\text{rigid}$.
We select the candidate with the highest count of inliers:
\begin{equation}
{}_w\*T^\text{init}_0 = \argmax_{\*T \in \{{}_w\*T^i_0\}_{1 \leq i \leq n}} \sum_{1 \leq j \leq n}
\mathbbm{1}\left[\text{dist} \left( {}_w\*T^\text{loc}_j, \*T\cdot{}_0\*T^\text{track}_{j} \right) < \tau_\text{rigid}\right] \enspace,
\end{equation}
where $\mathbbm{1}\left[\cdot\right]$ is the indicator function and $\text{dist} \left( \cdot, \cdot \right)$ returns the magnitude, in terms of translation and rotation, of the difference between two absolute poses.
We then recover the per-frame initial poses as $\{{}_w\*T^\text{init}_i := {}_w\*T^\text{init}_0 \cdot {}_0\*T^\text{track}_{i}\}_{1 \leq i \leq n}$.
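The voting scheme can be sketched as follows; the threshold values are assumptions, not those used in the paper, and the function names are illustrative:

```python
import numpy as np

def pose_dist(T_a, T_b):
    """Rotation angle (rad) and translation distance between two 4x4 poses."""
    dR = T_a[:3, :3].T @ T_b[:3, :3]
    ang = np.arccos(np.clip((np.trace(dR) - 1) / 2, -1.0, 1.0))
    return ang, np.linalg.norm(T_a[:3, 3] - T_b[:3, 3])

def rigid_alignment(T_loc, T_track, max_ang=np.deg2rad(5), max_t=0.5):
    """Vote for the rigid alignment w_T_0 between localization poses
    {w_T_loc_i} and tracking poses {0_T_track_i}."""
    best, best_votes = None, -1
    for T_l, T_t in zip(T_loc, T_track):
        cand = T_l @ np.linalg.inv(T_t)          # candidate w_T_0
        votes = 0
        for T_l2, T_t2 in zip(T_loc, T_track):
            ang, dt = pose_dist(T_l2, cand @ T_t2)
            votes += (ang < max_ang) and (dt < max_t)
        if votes > best_votes:
            best, best_votes = cand, votes
    return best, best_votes

def Rz(th):
    return np.array([[np.cos(th), -np.sin(th), 0.0],
                     [np.sin(th),  np.cos(th), 0.0],
                     [0.0, 0.0, 1.0]])

# Synthetic check: 10 frames, one confident-but-wrong localization.
rng = np.random.default_rng(1)
T0 = np.eye(4); T0[:3, :3] = Rz(0.7); T0[:3, 3] = [3.0, -1.0, 2.0]
track, loc = [], []
for _ in range(10):
    T = np.eye(4)
    T[:3, :3] = Rz(rng.uniform(-3, 3))
    T[:3, 3] = rng.normal(size=3)
    track.append(T)
    loc.append(T0 @ T)
loc[0] = np.eye(4)                   # incorrect localization due to aliasing
best, votes = rigid_alignment(loc, track)
assert votes == 9 and np.allclose(best, T0)
```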
\PAR{Pose graph optimization:}
We refine the initial absolute poses by maximizing the consistency of tracking and localization cues within a pose graph.
The refined poses $\{{}_w\*T^\text{PGO}_i\}$ minimize the energy function
\begin{equation}
E(\{{}_w\*T_i\}) =
\sum_{i = 1}^{n - 1} \mathcal{C}_\text{PGO}\left({}_w\*T_{i+1}^{-1}\:{}_w\*T_i,\ {}_{i+1}\*T^\text{track}_{i} \right)
+ \sum_{i = 1}^n \mathcal{C}_\text{PGO}\left({}_w\*T_i,\ {}_w\*T^\text{loc}_i\right) \enspace,
\end{equation}
where $\mathcal{C}_\text{PGO}\left(\*T_1, \*T_2\right) := \left\Vert\text{Log}\left(\*T_1\:\*T_2^{-1}\right)\right\Vert^2_{\Sigma,\gamma}$ is the distance between two absolute or relative poses, weighted by covariance matrix $\Sigma \in \mathbb{R}^{6\times6}$ and loss function $\gamma$.
Here, $\text{Log}$ maps from the Lie group $\text{SE}(3)$ to the corresponding algebra $\mathfrak{se}(3)$.
We robustify the absolute term with the Geman-McClure loss function and anneal its scale via a Graduated Non-Convexity scheme~\cite{yang2020graduated}.
This ensures convergence in case of poor initialization, e.g.\@\xspace, when the tracking exhibits significant drift, while remaining robust to incorrect localization estimates.
The covariance of the absolute term is propagated from the preceding non-linear refinement performed during localization.
The covariance of the relative term is recovered from the odometry pipeline, or, if not available, approximated as a factor of the motion magnitude.
This step can fill the gaps from the localization stage using the tracking information and conversely correct for tracker drift using localization cues.
In rare cases, the resulting poses might still be inaccurate when both the tracking drifts and the localization fails.
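The effect of the robustified absolute term can be illustrated with a simplified Geman-McClure loss under an assumed annealing schedule; the exact GNC surrogate and schedule of \cite{yang2020graduated} differ in their details:

```python
import numpy as np

def geman_mcclure(r, mu):
    """Geman-McClure-style robust loss with GNC control parameter mu:
    approximately quadratic (convex) for large mu, saturating near mu as it
    is annealed, which suppresses outlier absolute-pose residuals.
    Simplified form, not the exact surrogate of the cited work."""
    return mu * r**2 / (mu + r**2)

residuals = np.array([0.1, 0.2, 8.0])   # last entry: an incorrect localization
for mu in [1e4, 100.0, 1.0]:            # annealing schedule (assumed values)
    print(mu, np.round(geman_mcclure(residuals, mu), 4))
```

As `mu` decreases, the inlier residuals keep an essentially quadratic cost while the outlier's contribution saturates near `mu`, so it no longer dominates the optimization.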
\PAR{Guided localization via visual overlap:}
To further increase the pose accuracy, we leverage the current pose estimates $\{{}_w\*T^\text{PGO}_i\}$ to mine for additional localization cues.
Instead of relying on global visual descriptors, which are easily affected by aliasing, we select reference images with a high overlap using the score defined in \cref{sec:method:ref}.
For each sequence frame $i$, we select $r$ reference images with the largest overlap and again match local features and estimate an absolute pose.
These new localization priors improve the pose estimates in a second optimization of the pose graph.
\PAR{Bundle adjustment:}
For each frame $i$, we recover the set of 2D-3D correspondences $\mathcal{M}_i$ used by the guided re-localization.
We now refine the poses $\{{}_w\*T^\text{BA}_i\}$ by jointly minimizing a bundle adjustment problem with relative pose graph costs:
\begin{align}
\begin{split}
E(\{{}_w\*T_i\}) =
& \sum_{i = 1}^{n - 1} \mathcal{C}_\text{PGO}\left({}_w\*T_{i+1}^{-1}\:{}_w\*T_i,\ {}_{i+1}\*T^\text{track}_{i} \right) \\
+& \sum_{i=1}^{n} \sum_{\mathcal{M}_{i,j}\in \mathcal{M}_{i}}\sum_{(\*p^\text{q}_{i,k}, \*P^\text{ref}_{j,k}) \in \mathcal M_{i,j}}
\left\Vert\mathrm{\Pi} ({}_w\*T_i, \*P^\text{ref}_{j,k}) - \*p^\text{q}_{i,k}\right\Vert^2_{\sigma^2} \enspace,
\label{eq:pgo-ba}
\end{split}
\end{align}
where the second term evaluates the reprojection error of a 3D point $\*P^\text{ref}_{j,k}$ for observation $k$ to frame $i$.
The covariance is the noise $\sigma^2$ of the keypoint detection algorithm.
We pre-filter correspondences that are behind the camera or have an initial reprojection error greater than $\sigma\,\tau_\text{reproj}$.
As the 3D points are sampled from the lidar, we also optimize them with a prior noise corresponding to the lidar specifications.
We use the Ceres~\cite{ceres-solver} solver.
\subsection{Joint global refinement}
\label{sec:refinement}
Once all sequences are individually aligned, we refine them jointly by leveraging sequence-to-sequence visual observations.
This is helpful when sequences observe parts of the scene not mapped by the LiDAR.
We first triangulate a sparse 3D model from scan images, aided by the mesh.
We then triangulate additional observations, and finally jointly optimize the whole problem.
\PAR{Reference triangulation:}
We estimate image correspondences of the reference scan using pairs selected according to the visual overlap defined in \cref{sec:method:seq}.
Since the image poses are deemed accurate and fixed, we filter the correspondences using the known epipolar geometry.
We first consider feature tracks consistent with the reference surface mesh before triangulating more noisy observations within LO-RANSAC using COLMAP~\cite{Schoenberger2016Structure}.
The remaining feature detections, which could not be reliably matched or triangulated, are lifted to 3D by tracing through the mesh.
This results in an accurate, sparse SfM model with tracks across reference images.
\PAR{Sequence optimization:}
We then add each sequence to the sparse model.
We first establish correspondences between images of the same and of different sequences.
The image pairs are again selected by highest visual overlap computed using the aligned poses $\{{}_w\*T^\text{BA}_i\}$.
The resulting tracks are sequentially triangulated, merged, and added to the sparse model.
Finally, all 3D points and poses are jointly optimized by minimizing the joint pose-graph and bundle adjustment (\cref{eq:pgo-ba}).
As in COLMAP~\cite{Schoenberger2016Structure}, we alternate optimization and track merging.
To scale to large scenes, we subsample keyframes from the full frame-rate captures and only introduce absolute pose and reprojection constraints for keyframes while maintaining all relative pose constraints from tracking.
\begin{figure}[t]
\centering
\begin{minipage}{0.52\textwidth}
\includegraphics[width=\linewidth]{figures/uncertainties/uncertainty_CAB_t_keyrigs_scatter.png}
\end{minipage}%
\hspace{0.01\textwidth}%
\begin{minipage}{0.46\textwidth}
\centering
\includegraphics[width=.49\textwidth]{figures/qualitative_renderings/6493927726_ios_2022-02-27_18.05.07_000_cam_phone_6493927726.jpg}%
\hspace{0.01\textwidth}%
\includegraphics[width=.49\textwidth]{figures/qualitative_renderings/131582775_hl_2022-01-18-12-58-38-108.000_hetlf.jpg}
\includegraphics[width=.49\linewidth]{figures/qualitative_renderings/2714445675_hl_2021-06-02-11-59-31-495.001_hetlf.jpg}%
\hspace{0.01\textwidth}%
\includegraphics[width=.49\linewidth]{figures/qualitative_renderings/116376895_hl_2022-01-18-12-58-38-108.000_hetlf.jpg}
\end{minipage}
\vspace{2mm}
\caption{%
\textbf{Uncertainty of the GT poses for the CAB scene.}
Left: The overhead map shows that the translation uncertainties are larger in long corridors and outdoor spaces.
Right: Pairs of captured images (left) and renderings of the mesh at the estimated camera poses (right). They are pixel-aligned, which confirms that the poses are sufficiently accurate for our evaluation.
}%
\label{fig:uncertainties}%
\end{figure}
\subsection{Ground-truth validation}
\PAR{Potential limits:}
Brachmann~et~al.\@\xspace~\cite{brachmann2021limits} observe that algorithms generating pseudo-GT poses by minimizing either 2D or 3D cost functions alone can yield noticeably different results.
We argue that there exists a single underlying, true GT.
Reaching it requires fusing large amounts of redundant data with a sufficient number of sensors of sufficiently low noise.
Our GT poses optimize complementary constraints from visual and inertial measurements, guided by an accurate lidar-based 3D structure.
Careful design and propagation of uncertainties reduces the bias towards one of the sensors.
All sensors are factory- and self-calibrated during each recording by the respective commercial, production-grade SLAM algorithms.
We do not claim that our GT is perfect but analyzing the optimization uncertainties sheds light on its degree of accuracy.
\PAR{Pose uncertainty:}
We estimate the uncertainties of the GT poses by inverting the Hessian of the refinement.
To obtain calibrated covariances, we scale them by the empirical keypoint detection noise, estimated as $\sigma{=}1.33$ pixels for the CAB scene.
We consider the maximum noise in translation, i.e., the size of the major axis of the uncertainty ellipsoid, given by the largest eigenvalue $\sigma_t^2$ of the covariance matrix.
\Cref{fig:uncertainties} shows its distribution for the CAB scene.
We retain images whose poses are correct within $10$cm with a confidence of $99.7$\%.
For normally distributed errors, this corresponds to a maximum uncertainty $\sigma_t{=}3.33\text{cm}$ and discards $0.8$\% of all frames.
For visual inspection, we render images at the estimated GT camera poses using the colored mesh.
They appear pixel-aligned with the original images, supporting that the poses are accurate.
We provide additional visualizations in \cref{sec:uncertainties}.
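The frame filtering described above can be sketched as follows, with a toy covariance standing in for the inverted Hessian; the helper name and values are illustrative:

```python
import numpy as np

def max_translation_sigma(cov_t, keypoint_sigma=1.33):
    """Largest 1-sigma translation uncertainty (major axis of the uncertainty
    ellipsoid) from the 3x3 translation covariance block, scaled by the
    empirical keypoint noise as described in the text."""
    scaled = keypoint_sigma**2 * np.asarray(cov_t)
    return float(np.sqrt(np.linalg.eigvalsh(scaled)[-1]))  # largest eigenvalue

# Keep frames whose pose is correct within 10 cm at 99.7% (3 sigma) confidence.
threshold = 0.10 / 3                      # sigma_t ~ 3.33 cm
cov_t = np.diag([1e-4, 4e-4, 2.5e-4])     # toy covariance in m^2 (illustrative)
keep = max_translation_sigma(cov_t, keypoint_sigma=1.0) <= threshold
print(keep)  # True: sigma_t = 2 cm <= 3.33 cm
```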
\subsection{Selection of mapping and query sequences}
We divide the set of sequences into two disjoint groups for mapping and localization.
Mapping sequences are selected such that they have a minimal overlap between each other yet cover the area visited by all remaining sequences.
This simulates a scenario of minimal coverage and maximizes the number of localization query sequences.
We cast this as a combinatorial optimization problem solved with a depth-first search.
We provide more details in \cref{sec:map-query-split}.
\section{Evaluation}
We evaluate state-of-the-art approaches in both single-frame and sequence settings and summarize our results in \cref{fig:baselines}.
We build maps using both types of AR devices and evaluate the localization accuracy for 1000 randomly-selected queries of each device for each location.
All results are averaged across all locations.
\Cref{sec:supp:distribution} provides more details about the distribution of the evaluation data.
\PAR{Single-frame:}
We first consider in \cref{sec:eval:single-frame} the classical academic setup of single-frame queries (single image for phones and single rig for HoloLens 2) without additional sensors.
We then look at how radio signals can be beneficial.
We also analyze the impact of various settings: FOV, type of mapping images, and mapping algorithm.
\PAR{Sequence:}
Second, by leveraging the real-time AR tracking poses, we consider the problem of sequence localization in \cref{sec:eval:chunk}.
This corresponds to a real-world AR application retrieving the content attached to a target map using the real-time sensor stream from the device.
In this context, we care not only about accuracy and recall but also about the time required to localize accurately, which we call the \emph{time-to-recall}.
\begin{figure}[t]
\centering
\includegraphics[width=0.70\textwidth]{figures/main_results_v3}
\caption{\textbf{Main results.} We show results for Fusion image retrieval with SuperPoint local features and SuperGlue matcher on both HoloLens 2 and phone queries.
We consider several tracks: single-image / single-rig localization with / without radios and similarly for sequence (10 seconds) localization.
In addition, we report the percentage of queries with at least 5\% ground-truth overlap with respect to the best mapping image.
}%
\label{fig:baselines}
\end{figure}
\subsection{Single-frame localization}\label{sec:eval:single-frame}
We first evaluate several algorithms representative of the state of the art in the classical single-frame academic setup.
We consider the hierarchical localization framework with different approaches for image retrieval and matching.
Each of them first builds a sparse SfM map from reference images.
For each query frame, we then retrieve relevant reference images, match their local features, lift the reference keypoints to 3D using the sparse map, and finally estimate a pose with PnP+RANSAC.
We report the recall of the final pose at two thresholds~\cite{sattler2018benchmarking}:
1) a fine threshold at ($1^\circ, 10$cm), which we see as the minimum accuracy required for a good AR user experience in most settings.
2) a coarse threshold at ($5^\circ, 1$m) to show the room for improvement for current approaches.
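The recall at these two thresholds can be computed as in the following sketch (assuming poses are given as world-to-camera rotations $R$ and camera centers $c$; the helper names are illustrative):

```python
import numpy as np

def pose_error(R_est, c_est, R_gt, c_gt):
    """Rotation error (degrees) and camera-center error (meters)."""
    # angle of the relative rotation, from its trace
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    dr = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    dt = float(np.linalg.norm(np.asarray(c_est) - np.asarray(c_gt)))
    return dr, dt

def recall_at(errors, max_deg, max_m):
    """Fraction of queries whose pose error is within both thresholds."""
    ok = [dr <= max_deg and dt <= max_m for dr, dt in errors]
    return sum(ok) / len(ok)
```

For example, `recall_at(errors, 1.0, 0.10)` gives the fine recall and `recall_at(errors, 5.0, 1.0)` the coarse one.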
We evaluate global descriptors computed by NetVLAD~\cite{arandjelovic2016netvlad} and by a fusion~\cite{humenberger2020robust} of NetVLAD and APGeM~\cite{apgem}, which are representative of the field~\cite{pion2020benchmarking}.
We retrieve the 10 most similar images.
For matching, we evaluate handcrafted SIFT~\cite{lowe2004distinctive}, SOSNet~\cite{tian2019sosnet} as a learned patch descriptor extracted from DoG~\cite{lowe2004distinctive} keypoints, and a robust deep-learning based joint detector and descriptor R2D2~\cite{revaudr2d2}.
Those are matched by exact mutual nearest neighbor search.
We also evaluate SuperGlue~\cite{sarlin2020superglue} -- a learned matcher based on SuperPoint~\cite{detone2018superpoint} features.
To build the map, we retrieve neighboring images filtered by frustum intersection from reference poses, match these pairs, and triangulate a sparse SfM model using COLMAP~\cite{Schoenberger2016Structure}.
We report the results in \cref{tab:baselines+radio} (left).
Even the best methods have a large gap to perfect scores and much room for improvement.
In the remaining ablation, we solely rely on SuperPoint+SuperGlue~\cite{detone2018superpoint,sarlin2020superglue} for matching as it clearly performs the best.
\begin{table}[t]
\centering
\begin{minipage}{0.47\linewidth}
\scriptsize{\input{tables/baselines.tex}}
\end{minipage}
\begin{minipage}{0.52\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/retrieval_small_v3.pdf}
\end{minipage}
\vspace{0.5mm}
\caption{%
\textbf{Left: single-frame localization.}
We report the recall at ($1^\circ, 10$cm)/($5^\circ, 1$m) for baselines representative of the state of the art.
Our dataset is challenging while most others are saturated.
There is a clear progress from SIFT but also large room for improvement.
\textbf{Right: localization with radio signals.}
Increasing the number \{5, 10, 20\} of retrieved images increases the localization recall at ($1^\circ, 10$cm).
The best-performing visual retrieval (Fusion, orange) is however far worse than the GT overlap.
Filtering with radio signals (blue) improves the performance in all settings.
}%
\label{tab:baselines+radio}
\end{table}
\PAR{Leveraging radio signals:}\label{sec:eval:single-frame-radio}
In this experiment, we show that radio signals can be used to constrain the search space for image retrieval.
This has two main benefits: 1) it reduces the risk of incorrectly considering visual aliases, and 2) it lowers the compute requirements by reducing the number of images that need to be retrieved and matched.
We implement this filtering as follows.
We first split the scene into a sparse 3D grid considering only voxels containing at least one mapping frame.
For each frame, we gather all radio signals in a $\pm2$s window and associate them to the corresponding voxel.
If the same endpoint is observed multiple times in a given voxel, we average the received signal strengths (RSSI) in dBm.
For a query frame, we similarly aggregate signals over the past 2s and rank voxels by the L2 distance between their RSSIs, considering only voxels with at least one endpoint in common.
We thus restrict image retrieval to $2.5\%$ of the map.
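The fingerprinting scheme described above can be sketched as follows (a simplified sketch; the voxel size and the data layout of frames and signals are assumptions, not the paper's exact implementation):

```python
import numpy as np
from collections import defaultdict

def voxel_of(xyz, size):
    """Sparse 3D grid index of a position (size in meters)."""
    return tuple(np.floor(np.asarray(xyz) / size).astype(int))

def build_radio_map(mapping_frames, voxel_size):
    """mapping_frames: iterable of (position, {endpoint_id: rssi_dbm}).
    Averages the RSSI per endpoint within each voxel."""
    acc = defaultdict(lambda: defaultdict(list))
    for pos, signals in mapping_frames:
        v = voxel_of(pos, voxel_size)
        for ep, rssi in signals.items():
            acc[v][ep].append(rssi)
    return {v: {ep: float(np.mean(r)) for ep, r in eps.items()}
            for v, eps in acc.items()}

def rank_voxels(query_signals, radio_map):
    """Rank voxels by L2 distance over shared endpoints; voxels without
    any common endpoint are discarded."""
    scored = []
    for v, fp in radio_map.items():
        common = set(fp) & set(query_signals)
        if not common:
            continue
        d = float(np.sqrt(sum((fp[ep] - query_signals[ep]) ** 2
                              for ep in common)))
        scored.append((d, v))
    return [v for d, v in sorted(scored)]
```

Image retrieval is then restricted to the mapping frames associated with the top-ranked voxels.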
\Cref{tab:baselines+radio} (right) shows that radio filtering always improves the localization accuracy over vanilla vision-only retrieval, irrespective of how many images are matched.
The upper bound based on the GT overlap, defined in \cref{sec:method:ref}, shows that there is still much room for improvement for both image and radio retrieval.
As the GT overlap baseline is far from the perfect 100\% recall, frame-to-frame matching and pose estimation have also much room to improve.
\PAR{Varying field-of-view:}
We study the impact of the FOV of the HoloLens 2 device via two configurations:
1) Each camera in a rig is seen as a single-frame and localized using LO-RANSAC + P3P.
2) We consider all four cameras in a frame and localize them together using the generalized solver GP3P.
With fusion retrieval, SuperPoint, and SuperGlue, single images (1) only achieve 45.6\%~/~61.3\% recall,
while using rigs (2) yields 64.2\%~/~77.4\%~(\cref{tab:baselines+radio}).
Rig localization is thus highly beneficial, especially in hard cases where single cameras face texture-less areas, such as the ground and walls.
\PAR{Mapping modality:}
We study whether the high-quality lidar mesh can be used for localization.
We consider two approaches to obtain a sparse 3D point cloud:
1) By triangulating sparse visual correspondences across multiple views.
2) By lifting 2D keypoints in reference images to 3D by tracing rays through the mesh.
Lifting can leverage dense correspondences, which cannot be efficiently triangulated with conventional multi-view geometry.
We thus compare 1) and 2) with SuperGlue to 2) with LoFTR~\cite{sun2021loftr}, a state-of-the-art dense matcher.
The results in \cref{tab:mapping+tri} (right) show that the mesh brings some improvements.
Points could also be lifted by dense depth from multi-view stereo.
We however did not obtain satisfactory results with a state-of-the-art approach~\cite{wang2020patchmatchnet} as it cannot handle very sparse mapping images.
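Lifting a 2D keypoint to 3D amounts to casting its viewing ray against the mesh. A minimal sketch, assuming a pinhole camera, with a single M\"oller--Trumbore ray-triangle intersection standing in for a full mesh ray tracer:

```python
import numpy as np

def pixel_to_ray(uv, K, R, c):
    """Ray (origin, unit direction) in world frame for pixel uv.
    K: 3x3 intrinsics; R: world-to-camera rotation; c: camera center."""
    d_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    d_world = R.T @ d_cam
    return np.asarray(c, float), d_world / np.linalg.norm(d_world)

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore intersection; returns the 3D hit point or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None  # ray parallel to the triangle plane
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ q) * inv  # distance along the ray
    return origin + t * direction if t > eps else None
```

In practice one would query an accelerated structure (e.g.\@\xspace a BVH over the lidar mesh) rather than loop over triangles.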
\PAR{Mapping scenario:}
We study the accuracy of localization against maps built from different types of images: 1) crowd-sourced, dense AR sequences; 2) curated, sparser HD 360 images from the NavVis device; 3) a combination of the two.
The results are summarized in \cref{tab:mapping+tri} (left), showing that the mapping scenario has a large impact on the final numbers.
On the other hand, image pair selection for mapping matters little.
Crowd-sourcing and manual scans can complement each other well to address an imperfect scene coverage.
We hope that future work can close the gap between the scenarios to achieve better metrics from crowd-sourced data without curation.
\begin{table*}[t]
\centering
\begin{minipage}{0.57\linewidth}
\scriptsize{\input{tables/mapping.tex}}%
\end{minipage}%
\hspace{0.03\linewidth}%
\begin{minipage}{0.399\linewidth}
\includegraphics[width=1.0\textwidth]{figures/map_algo_v2.pdf}%
\end{minipage}%
\vspace{1.5mm}
\caption{%
\textbf{Impact of mapping.}
\textbf{Left: Scenarios.}
Building the map with HD 360 images from NavVis scanners, instead of or with dense AR sequences, does not consistently boost the performance as they are usually sparser, do not fully cover each location, and have different characteristics than AR images.
\textbf{Right: Modalities.}
Lifting 2D points to 3D using the lidar mesh instead of triangulating with SfM is beneficial.
This can also leverage dense matching, e.g.\@\xspace with LoFTR.
}%
\label{tab:mapping+tri}%
\end{table*}
\subsection{Sequence localization}\label{sec:eval:chunk}
In this section, inspired by typical AR use cases, we consider the problem of sequence localization. The task is to align multiple consecutive frames using sensor data aggregated over short time intervals.
Our baseline for this task is based on the ground-truthing pipeline and has as such relatively high compute requirements.
However, we are primarily interested in demonstrating the potential performance gains by leveraging multiple frames.
First, we run image retrieval and single-frame localization, followed by a first PGO with tracking and localization poses.
Then, we do a second localization with retrieval guided by the poses of the first PGO, followed by a second PGO.
Finally, we run a pose refinement by considering reprojections to query frames and tracking cost.
We can also use radio signals to restrict image retrieval throughout the pipeline.
As previously, we consider the localization recall but only of the last frame in each sequence, which is the one that influences the current AR user experience in a real-time scenario.
\begin{figure}[t]
\centering
\begin{minipage}{0.66\linewidth}
\includegraphics[width=\linewidth]{figures/ttr_v2.pdf}%
\end{minipage}%
\begin{minipage}{0.33\linewidth}
\centering
\scriptsize{\input{tables/ttr.tex}}
\end{minipage}
\vspace{1mm}
\caption{%
\textbf{Sequence localization.}
We report the localization recall at ($1^\circ, 10$cm) of SuperPoint features with SuperGlue matcher as we increase the duration of each sequence.
The pipeline leverages both on-device tracking and absolute localization, as vision-only (solid) or combined with radio signals (dashed).
We show the time-to-recall (TTR) at 80\% for HL2 and at 70\% for phone queries.
Using radio signals reduces the TTR from over 10s to 1.40s and 3.58s, respectively.
}
\label{fig:chunks}
\end{figure}
We evaluate various query durations and introduce the \emph{time-to-recall} metric as the sequence length (time) required to successfully localize X\% (recall) of the queries within (1$^\circ$, 10cm), or, in short, TTR@X\%.
Localization algorithms should aim to minimize this metric to render retrieved content as quickly as possible after starting an AR experience.
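Given a recall-vs-duration curve, the metric reduces to a simple lookup (a sketch; the input format is an assumption):

```python
def time_to_recall(durations_s, recalls, target):
    """Smallest sequence duration (seconds) whose recall reaches the target,
    or None if the target is never reached. Inputs are parallel lists with
    durations sorted in ascending order."""
    for d, r in zip(durations_s, recalls):
        if r >= target:
            return d
    return None
```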
\Cref{fig:chunks} reports the results averaged over all locations.
While current methods do not yet achieve a TTR@90\% under 10 seconds, sequence localization yields significant gains of 20\%.
The radio signals improve the performance in particular with shorter sequences and thus effectively reduce the time-to-recall.
\section{Introduction}
Placing virtual content in the physical 3D world, persisting it over time, and sharing it with other users are typical scenarios for Augmented Reality (AR).
In order to reliably overlay virtual content in the real world with pixel-level precision, these scenarios require AR devices to accurately determine their 6-DoF pose at any point in time.
While visual localization and mapping is one of the most studied problems in computer vision, its use for AR entails specific challenges and opportunities.
First, modern AR devices, such as mobile phones or the Microsoft HoloLens or \mbox{MagicLeap One}, are often equipped with multiple cameras and additional inertial or radio sensors.
Second, they exhibit characteristic hand-held or head-mounted motion patterns.
The on-device real-time tracking systems provide spatially-posed sensor streams.
However, many AR scenarios require positioning beyond local tracking, both indoors and outdoors, and robustness to common temporal changes of appearance and structure.
Furthermore, given the plurality of temporal sensor data, the question is often not whether, but how quickly the device can localize at any time to ensure a compelling end-user experience.
Finally, as AR adoption grows, crowd-sourced data captured by users with diverse devices can be mined for building large-scale maps without a manual and costly scanning effort.
Crowd-sourcing offers great opportunities but poses additional challenges on the robustness of algorithms, e.g.\@\xspace, to enable cross-device localization~\cite{dusmanu2021cross}, mapping from incomplete data with low accuracy~\cite{Schoenberger2016Structure,brachmann2021limits}, privacy-preservation of data~\cite{speciale2019a,geppert2020privacy,shibuya2020privacy,geppert2021privacy,dusmanu2021privacy}, etc.\@\xspace
However, the academic community is mainly driven by benchmarks that are disconnected from the specifics of AR.
They mostly evaluate localization and mapping using single still images and either lack temporal changes~\cite{shotton2013scene,advio} or accurate ground truth~(GT)~\cite{sattler2018benchmarking,kendall2015,taira2018inloc}, are restricted to small scenes~\cite{Balntas2017HPatches,shotton2013scene,kendall2015,wald2020,schops2017multi} or landmarks~\cite{Jin2020Image,Schonberger2017Comparative} with perfect coverage and limited viewpoint variability, or disregard temporal tracking data or additional visual, inertial, or radio sensors~\cite{sattler2012aachen,sattler2018benchmarking,taira2018inloc,lee2021naver,nclt,sun2017dataset}.
Our first contribution is to introduce {\bf a large-scale dataset captured using AR devices in diverse environments}, notably a historical building, a multi-story office building, and part of a city center.
The initial data release contains both indoor and outdoor images with illumination and semantic changes as well as dynamic objects.
Specifically, we collected multi-sensor data streams (images, depth, tracking, IMU, BT, WiFi) totalling more than 100 hours using head-mounted HoloLens 2 and hand-held iPhone / iPad devices covering 45'000 square meters over the span of one year (\cref{fig:teaser}).
Second, we develop {\bf a GT pipeline to automatically and accurately register AR trajectories} against large-scale 3D laser scans.
Our pipeline does not require any manual labelling or setup of custom infrastructure (e.g.\@\xspace, fiducial markers).
Furthermore, the system robustly handles crowd-sourced data from heterogeneous devices captured over longer periods of time and can be easily extended to support future devices.
Finally, we present {\bf a rigorous evaluation of localization and mapping in the context of AR} and provide {\bf novel insights for future research}.
Notably, we show that the performance of state-of-the-art methods can be drastically improved by considering additional data streams generally available in AR devices, such as radio signals or sequence odometry.
Thus, future algorithms in the field of AR localization and mapping should always consider these sensors in their evaluation to show real-world impact.
The LaMAR dataset, benchmark, GT pipeline, and the implementations of baselines integrating additional sensory data are all publicly available at~\href{https://lamar.ethz.ch/}{\texttt{lamar.ethz.ch}}.
We hope that this will spark future research addressing the challenges of AR.
\section{Conclusion}
LaMAR is the first benchmark that faithfully captures the challenges and opportunities of AR for visual localization and mapping.
We first identified several key limitations of current benchmarks that make them unrealistic for AR.
To address these limitations, we developed a new ground-truthing pipeline to accurately and robustly register AR sensor streams in large and diverse scenes aided by laser scans without any manual labelling or custom infrastructure.
With this new benchmark, initially covering 3 large locations, we revisited the traditional academic setup and showed a large performance gap for existing state-of-the-art methods when evaluated using more realistic and challenging data.
We implemented simple yet representative baselines to take advantage of the AR-specific setup and we presented new insights that pave promising avenues for future works.
We showed the large potential of leveraging other sensor modalities like radio signals, depth, or query sequences instead of single images.
We also hope to direct the attention of the community towards improving map representations for crowd-sourced data and towards considering the time-to-recall metric, which is currently largely ignored.
We publicly release at \href{https://lamar.ethz.ch/}{\texttt{lamar.ethz.ch}} the complete LaMAR dataset, our ground-truthing pipeline, and the implementation of all baselines.
The evaluation server and public leaderboard facilitates the benchmarking of new approaches to keep track of the state of the art.
We hope this will spark future research addressing the challenges of AR.
\PAR{Acknowledgements.}
LaMAR would not have been possible without the hard work and contributions of %
Gabriela Evrova,
Silvano Galliani,
Michael Baumgartner,
Cedric Cagniart,
Jeffrey Delmerico,
Jonas Hein,
Dawid Jeczmionek,
Mirlan Karimov,
Maximilian Mews,
Patrick Misteli,
Juan Nieto,
Sònia Batllori Pallarès,
R\'emi Pautrat,
Songyou Peng,
Iago Suarez,
Rui Wang,
Jeremy Wanner,
Silvan Weder
and our colleagues in \mbox{CVG at ETH Zurich} and the wider Microsoft Mixed Reality \& AI team.
\section{Related work}
\begin{table*}[t]
\centering
\resizebox{\textwidth}{!}{%
\scriptsize{\input{tables/datasets.tex}}%
}%
\vspace{1mm}
\caption{\textbf{Overview of existing datasets.}
No dataset, besides ours, exhibits at the same time short-term appearance and structural changes due to moving people~\iconDynamic, weather~\iconWeather, or day-night cycles~\iconNight, but also long-term changes due to displaced furniture~\iconChair\ or construction work~\iconStruct.
}%
\label{tab:datasets}
\end{table*}
\PAR{Image-based localization}
is classically tackled by estimating a camera pose from correspondences established between sparse local features~\cite{lowe2004distinctive,bay2008speeded,Rublee2011ORB,mikolajczyk2004ijcv} and a 3D Structure-from-Motion (SfM)~\cite{Schoenberger2016Structure} map of the scene~\cite{fischler1981random,li2012worldwide,sattler2012improving}.
This pipeline scales to large scenes using image retrieval~\cite{arandjelovic2012three,vlad,apgem,asmk,cao2020unifying,rau2020imageboxoverlap,densevlad}.
Recently, many of these steps or even the end-to-end pipeline have been successfully learned with neural networks~\cite{detone2018superpoint,sarlin2020superglue,Dusmanu2019CVPR,schoenberger2018semantic,arandjelovic2016netvlad,NIPS2017_831caa1b,tian2019sosnet,sarlin2019,yi2016lift,Hyeon2021,sarlin21pixloc,lindenberger2021pixsfm}.
Other approaches regress absolute camera pose~\cite{kendall2015,kendall2017geometric,ng2021reassessing} or scene coordinates~\cite{shotton2013scene,valentin2015cvpr,meng2017backtracking,massiceti2017random,angle_scr,brachmann2019esac,Wang2021,brachmann2021dsacstar}.
However, all these approaches typically fail whenever there is a lack of context (e.g.\@\xspace, limited field-of-view) or the map has repetitive elements.
Leveraging the sequential ordering of video frames~\cite{seqslam,johns2013feature} or modelling the problem as a generalized camera~\cite{pless2003using,hee2016minimal,sattler2018benchmarking,speciale2019a} can improve results.
\PAR{Radio-based localization:}
Radio signals, such as WiFi and Bluetooth, are spatially bounded (logarithmic decay)~\cite{radar,khalajmehrabadi2016modern,radio_fingerprint}, thus can distinguish similarly looking (spatially distant) locations.
Their unique identifiers can be uniquely hashed, which makes them computationally attractive (compared with high-dimensional image descriptors).
Several methods use the signal strength, angle, direction, or time of arrival~\cite{radio_aoa,radio_toa,radio_tdoa} but the most popular is model-free map-based fingerprinting~\cite{khalajmehrabadi2016modern,radio_fingerprint,fault_tolerant}, as it only requires to collect unique identifiers of nearby radio sources and received signal strength.
GNSS provides absolute 3-DoF positioning but is not applicable indoors and has insufficient accuracy for AR scenarios, especially in urban environments due to multi-pathing, etc.\@\xspace
\PAR{Datasets and ground-truth:}
Many of the existing benchmarks (cf.\@\xspace~\cref{tab:datasets}) are captured in small-scale environments \cite{shotton2013scene,wald2020,dai2017scannet,hodan2018bop}, do not contain sequential data~\cite{sattler2012aachen,Jin2020Image,sanfrancisco,taira2018inloc,sun2017dataset,schops2017multi,Balntas2017HPatches,Schonberger2017Comparative}, lack characteristic hand-held/head-mounted motion patterns ~\cite{sattler2018benchmarking,Badino2011,RobotCarDatasetIJRR,wenzel2020fourseasons}, or their GT is not accurate enough for AR~\cite{advio,kendall2015}.
None of these datasets contain WiFi or Bluetooth data (\cref{tab:datasets}).
The closest to our work are Naver Labs~\cite{lee2021naver}, NCLT~\cite{nclt} and ETH3D~\cite{schops2017multi}.
Both Naver Labs~\cite{lee2021naver} and NCLT~\cite{nclt} are less accurate than ours and do not contain AR-specific trajectories or radio data.
The Naver Labs dataset~\cite{lee2021naver} also does not contain any outdoor data.
ETH3D~\cite{schops2017multi} is highly accurate; however, it is only small-scale and contains neither significant changes nor any radio data.
To establish ground-truth, many datasets rely on off-the-shelf SfM algorithms~\cite{Schoenberger2016Structure} for unordered image collections~\cite{sattler2012aachen,Jin2020Image,kendall2015,wald2020,advio,sun2017dataset,taira2018inloc,Jin2020Image}.
Pure SfM-based GT generation has limited accuracy~\cite{brachmann2021limits} and completeness, which biases the evaluations to scenarios in which visual localization already works well.
Other approaches rely on RGB(-D) tracking~\cite{wald2020,shotton2013scene}, which usually drifts in larger scenes and cannot produce GT in crowd-sourced, multi-device scenarios.
Specialized capture rigs of an AR device with a more accurate sensor (lidar)~\cite{lee2021naver,nclt} prevent capturing of realistic AR motion patterns.
Furthermore, scalability is limited for these approaches, especially if they rely on manual selection of reference images~\cite{sun2017dataset}, laborious labelling of correspondences~\cite{sattler2012aachen,taira2018inloc}, or placement of fiducial markers~\cite{hodan2018bop}.
For example, the accuracy of ETH3D~\cite{schops2017multi} is achieved by using a single stationary lidar scan, manual cleaning, and aligning very few images captured by tripod-mounted DSLR cameras.
Images thus obtained are not representative of AR devices and the process cannot scale or take advantage of crowd-sourced data.
In contrast, our fully automatic approach does not require any manual labelling or special capture setups, thus enables light-weight and repeated scanning of large locations.
\section*{Appendix}
\section{Visualizations}
\label{sec:visuals}
\PAR{Diversity of devices:}
We show in \cref{fig:supp:samples} some samples of images captured in the HGE location.
NavVis and phone images are colored while HoloLens2 images are grayscale.
NavVis images are always perfectly upright, while the viewpoint and height of HoloLens2 and phone images vary significantly.
Despite the automatic exposure, phone images easily appear dark in night-time low-light conditions.
\PAR{Diversity of environments:}
We show an extensive overview of the three locations CAB, HGE, and LIN in Figures~\ref{fig:supp:CAB}, \ref{fig:supp:HGE}, and \ref{fig:supp:LIN}, respectively.
In each image, we show a rendering of the lidar mesh along with the ground truth trajectories of a few sequences.
\begin{figure}[!b]
\centering
\includegraphics[width=\textwidth]{figures/image_samples_compressed.pdf}
\caption{\textbf{Sample of images from the different devices:} NavVis M6, HoloLens2, phone.
Each column shows a different scene of the HGE location with large illumination changes.
}%
\label{fig:supp:samples}%
\end{figure}
\begin{figure}[p]
\centering
\input{supp/trajectories_CAB}
\vspace{2mm}
\caption{\textbf{The CAB location} features 1-2) a staircase spanning 5 similar-looking floors, 3) large and small offices and meeting rooms, 4) long corridors, 5) large halls, and 6) outdoor areas with repeated structures.
This location includes the \emph{Facade}, \emph{Courtyard}, \emph{Lounge}, \emph{Old Computer}, \emph{Storage Room}, and \emph{Office} scenes of the ETH3D~\cite{schops2017multi} dataset and is thus much larger than each of them.
}%
\label{fig:supp:CAB}%
\end{figure}
\begin{figure}[!ht]
\centering
\input{supp/trajectories_HGE}
\vspace{2mm}
\caption{\textbf{The HGE location} features a highly-symmetric building with 1-2) hallways, 3) long corridors, 4) two esplanades, and 5) a section of sidewalk.
This location includes the \emph{Relief}, \emph{Door}, and \emph{Statue} scenes of the ETH3D~\cite{schops2017multi} dataset.
}%
\label{fig:supp:HGE}
\vspace{1cm}
\end{figure}
\begin{figure}[ht]
\centering
\input{supp/trajectories_LIN}
\vspace{2mm}
\caption{\textbf{The LIN location} features large outdoor open spaces (top row), narrow passages with stairs (middle row), and both residential and commercial street-level facades.
}%
\label{fig:supp:LIN}
\vspace{1cm}
\end{figure}
\PAR{Long-term changes:}
Because spaces are actively used and managed, they undergo significant appearance and structural changes over the year-long data recording.
This is captured by the laser scans, which are aligned based on elements that do not change, such as the structure of the buildings.
We show in \cref{fig:supp:changes} a visual comparison between scans captured at different points in time.
\begin{figure}[p]
\centering
\input{supp/pointcloud_changes}
\vspace{1mm}
\caption{\textbf{Long-term structural changes.}
Lidar point clouds captured over a year reveal the geometric changes that spaces undergo at different time scales:
1) very rarely (construction work),
2-4) sparsely (displacement of furniture), or even
5-6) daily due to regular usage (people, objects).
}%
\label{fig:supp:changes}%
\end{figure}
\newpage
\section{Uncertainties of the ground truth}
\label{sec:uncertainties}
We show overhead maps and histograms of uncertainties for all scenes in \cref{fig:supp:uncertainties}.
We also show additional rendering comparisons in \cref{fig:renderings}.
Since we do not use the mesh for any color-accurate tasks (e.g., photo-metric alignment), we use a simple vertex-coloring based on the NavVis colored lidar pointcloud.
The renderings are therefore not realistic but nevertheless allow an inspection of the final alignment.
The proposed ground-truthing pipeline yields poses that allow pixel-accurate rendering.
\begin{figure}[p]
\centering
\input{supp/uncertainties}
\vspace{4mm}
\caption{\textbf{Translation uncertainties of the ground truth camera centers} for the CAB (top), LIN (middle) and HGE (bottom) scenes.
Left: The overhead map shows that the uncertainties are larger in areas that are not well covered by the 3D scanners or where the scene is further away from the camera, such as in long corridors and large outdoor space.
Right: The histogram of uncertainties shows that most images have an uncertainty far lower than $\sigma_t{=}3.33\text{cm}$.
}%
\label{fig:supp:uncertainties}%
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/116376895_hl_2022-01-18-12-58-38-108.000_hetlf.jpg}~
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/2714445675_hl_2021-06-02-11-59-31-495.001_hetlf.jpg}~
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/453870575_hl_2021-06-02-11-31-59-805.000_hetlf.jpg}\\
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/131582775_hl_2022-01-18-12-58-38-108.000_hetlf.jpg}~
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/492852500_hl_2021-06-02-11-31-59-805.000_hetlf.jpg}~
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/375274845_hl_2021-06-02-11-31-59-805.000_hetrf.jpg}\\
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/4953298178_ios_2022-02-27_17.39.20_000_cam_phone_4953298178.jpg}~ \includegraphics[width=.30\textwidth]{figures/qualitative_renderings/6493927726_ios_2022-02-27_18.05.07_000_cam_phone_6493927726.jpg}~ \includegraphics[width=.30\textwidth]{figures/qualitative_renderings/7165878684_ios_2022-02-27_20.29.13_000_cam_phone_7165878684.jpg}\\ \includegraphics[width=.30\textwidth]{figures/qualitative_renderings/72874052732_ios_2022-01-20_14.56.34_000_cam_phone_72874052732.jpg}~ \includegraphics[width=.30\textwidth]{figures/qualitative_renderings/7763152056_ios_2022-02-27_20.41.27_000_cam_phone_7763152056.jpg}~
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/95570500269_ios_2021-06-02_14.31.38_000_cam_phone_95570500269.jpg}\\
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_hg1.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_hg2.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_lin1.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_lin2.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_cab1.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_cab2.jpg}\\
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_hg3.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_hg4.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_lin3.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_lin4.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_cab3.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_cab4.jpg}\\
\vspace{1mm}
\caption{{\bf Qualitative renderings from the mesh.}
Using the estimated ground-truth poses, we render images from the vertex-colored mesh (right) and compare them to the originals (left).
The first two rows show six HoloLens 2 images while the next two show six phone images.
We overlay a regular grid to facilitate the comparison.
The bottom rows shows 2x2 mosaics alternating between originals (top-left, bottom-right) and renders (top-right, bottom-left).
Best viewed when zoomed in.
}
\label{fig:renderings}
\end{figure}
\newpage
\section{Selection of mapping and query sequences}
\label{sec:map-query-split}
We now describe in more details the algorithm that automatically selects mapping and query sequences, whose distributions are shown in \cref{fig:supp:qualitymaps}.
The coverage $C(i,j)_k$ is a boolean that indicates whether the image $k$ of sequence $i$ shares sufficient covisibility with at least one image in sequence $j$.
Here two images are deemed covisible if they co-observe a sufficient number of 3D points in the final, full SfM sparse model~\cite{radenovic2018fine} or according to the ground truth mesh-based visual overlap.
The coverage of sequence $i$ with a set of other sequences $\mathcal{S} = \{j\}$ is the ratio of images in $i$ that are covered by at least one image in $\mathcal{S}$:
\begin{equation}
C(i,\mathcal{S}) = \frac{1}{|i|}\sum_{k\in i} \bigcup_{j \in \mathcal{S}} C(i,j)_k
\end{equation}
\begin{figure}[p]
\centering
\input{supp/qualitymaps}
\vspace{4mm}
\caption{\textbf{Spatial distribution of AR sequences} for the CAB (top), HGE (middle), and LIN (bottom) locations.
We show the ground truth trajectories overlaid on the lidar point clouds along 3 orthogonal directions.
All axes are in meters and $z$ is aligned with the gravity.
Left: Types of AR devices among all registered sequences.
Right: Map and query sequences selected for evaluation.
CAB spans multiple floors while HGE and LIN are mostly 2D but include a range of ground heights.
The space is well covered by both types of devices and sequences.
}%
\label{fig:supp:qualitymaps}%
\end{figure}
We seek to find the set of mapping sequences $\mathcal{M}$ and remaining query sequences $\mathcal{Q} = \mathcal{S}\backslash\mathcal{M}$ that minimize the coverage between map sequences while ensuring that each query is sufficiently covered by the map:
\begin{align}
\begin{split}
\mathcal{M}^* = \argmin_{\mathcal{M}} \frac{1}{|\mathcal{M}|} \sum_{i \in\mathcal{M}} C(i,\mathcal{M}\backslash\{i\}) \\
\text{such that}\quad C(i,\mathcal{M}) > \tau\quad \forall i \in \mathcal{Q}\enspace,
\end{split}
\end{align}
where $\tau$ is the minimum query coverage.
We ensure that query sequences are out of coverage for at most $t$ consecutive seconds, where $t$ can be tuned to adjust the difficulty of the localization and generally varies from 1 to 5 seconds.
This combinatorial problem cannot be solved exactly in reasonable time.
We solve it approximately with a depth-first search that iteratively adds new images and checks for the feasibility of the solution.
At each step, we consider the query sequences that are the least covisible with the current map.
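As an illustration only, a brute-force variant of this selection can be written as follows. The function name and the \texttt{cov} callback are assumptions, and the actual pipeline replaces the exhaustive enumeration with the depth-first search described above:

```python
import itertools

def select_map_sequences(seqs, cov, tau):
    """Toy exhaustive variant of the map/query split: among all candidate
    map sets, keep those where every query sequence is covered above `tau`,
    and return the one with minimal mean map self-coverage.  `cov(i, S)` is
    the coverage of sequence i by the set S.  The real pipeline explores
    this space with a depth-first search instead of enumerating it.
    """
    best, best_score = None, float('inf')
    for r in range(1, len(seqs)):
        for M in itertools.combinations(seqs, r):
            M = set(M)
            Q = set(seqs) - M
            if any(cov(q, M) <= tau for q in Q):
                continue  # some query insufficiently covered: infeasible
            score = sum(cov(i, M - {i}) for i in M) / len(M)
            if score < best_score:
                best, best_score = M, score
    return best
```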
\section{Data distribution}
\label{sec:supp:distribution}
We show in \cref{fig:supp:qualitymaps} (left) the spatial distribution of the registered sequences for the two device types, HoloLens 2 and phone.
We select a subset of all registered sequences for evaluation and split it into mapping and localization groups according to the algorithm described in \cref{sec:map-query-split}.
The spatial distribution of these groups is shown in \cref{fig:supp:qualitymaps} (right).
We enforce that night-time sequences are not included in the map, which is a realistic assumption for crowd-sourced scenarios.
We do not enforce an equal distribution of device types in either group but observe that this occurs naturally.
For the evaluation, mapping images are sampled at intervals of at most 2.5~FPS, 50cm of distance, and 20\degree of rotation.
This ensures a sufficient covisibility between subsequent frames while reducing the computational cost of creating maps.
The queries are sampled every 1s/1m/20\degree and, for each device type, 1000 poses are selected out of those with sufficiently low uncertainty.
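The sampling rules above can be sketched as a simple keyframe filter. One plausible reading, assumed here, is that a frame is kept once enough motion has accumulated since the last kept frame, with a minimum time gap capping the rate (0.4\,s $\approx$ 2.5\,FPS); the thresholds and the exact AND/OR logic are illustrative:

```python
import numpy as np

def sample_frames(timestamps, positions, angles_deg,
                  min_dt=0.4, min_dist=0.5, min_rot=20.0):
    """Subsample a trajectory: keep a frame when it moved at least
    `min_dist` meters or rotated `min_rot` degrees since the last kept
    frame, and at least `min_dt` seconds have elapsed (rate cap).
    Hypothetical helper; `angles_deg` stands in for a full rotation
    parameterization.
    """
    kept = [0]
    for k in range(1, len(timestamps)):
        last = kept[-1]
        if (timestamps[k] - timestamps[last] >= min_dt
                and (np.linalg.norm(positions[k] - positions[last]) >= min_dist
                     or abs(angles_deg[k] - angles_deg[last]) >= min_rot)):
            kept.append(k)
    return kept
```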
\section{Additional evaluation results}
\subsection{Impact of the condition and environment}
We now investigate the impact of different capture conditions (day vs.\ night) and environments (indoor vs.\ outdoor) of the query images.
Query sequences are labeled as day or night based on the time and date of capture.
We manually annotate overhead maps into indoor and outdoor areas.
We report the results for single-image localization of phone images in \cref{tab:supp:cond-env}.
In regular day-time conditions, outdoor areas exhibit distinctive texture and are thus easier to coarsely localize in than texture-less, repetitive indoor areas.
The scene structure is however generally further away from the camera, so optimizing reprojection errors yields less accurate camera poses.
Indoor scenes generally benefit from artificial light and are thus minimally affected by the night-time drop of natural light.
Outdoor scenes receive little artificial light, mostly from sparse street lighting, and thus change widely in appearance between day and night.
As a result, the localization performance drops to a larger extent outdoors than indoors.
\begin{table}[t]
\centering
{%
\setlength\tabcolsep{5pt}
\begin{tabular}{cccccc}
\toprule
\multirowcell{2}[-0.1cm]{Condition} & \multicolumn{2}{c}{CAB scene} & \multicolumn{2}{c}{HGE scene} & LIN scene\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
\cmidrule(lr){6-6}
& Indoor & Outdoor & Indoor & Outdoor & Outdoor \\
\midrule
day & 66.5 / 74.7 & 73.9 / 88.1 & 52.7 / 65.9 & 43.0 / 64.3 & 71.2 / 82.5\\
night & 30.3 / 44.8 & 18.8 / 40.6 & 47.9 / 59.4 & 12.1 / 33.6 & 38.6 / 55.6\\
\bottomrule
\end{tabular}
}
\vspace{2mm}
\caption{\textbf{Impact of the condition and environment} on single-image phone localization.
During the day, localizing indoors can be more accurate (10cm threshold) but less robust (1m threshold) than outdoors due to visual aliasing and a lack of texture.
Night-time localization is more challenging outdoors than indoors because of a larger drop of illumination.
}
\label{tab:supp:cond-env}
\end{table}
\subsection{Additional results on sequence localization}
We run a detailed ablation of the sequence localization on an extended set of queries and report our findings below.
\PAR{Ablation:} We ablate the different parts of our proposed sequence localization pipeline on sequences of 20 seconds.
The localization recall at \{$1^\circ, 10$cm\} and \{$5^\circ, 1$m\} can be seen in \cref{tab:multi-frame-ablation} for both HoloLens 2 and Phone queries.
The initial PGO with tracking and absolute constraints already offers a significant boost in performance compared to single-frame localization.
We notice that the re-localization with image retrieval guided by the PGO poses achieves much better results than the first localization, which points to retrieval, rather than feature matching, being the bottleneck.
Next, the second PGO is able to leverage the improved absolute constraints and yields better results.
Finally, the pose refinement optimizing reprojection errors while also taking into account tracking constraints further improves the performance, notably at the tighter threshold.
\begin{table}[t]
\centering
\begin{tabular}{cccccccc}
\toprule
\multirow{2}{*}{Device} & \multirow{2}{*}{Radios} & \multicolumn{6}{c}{Steps} \\
\cmidrule(lr){3-8}
& & Loc. & Init. & PGO1 & Re-loc. & PGO2 & BA\\
\midrule
\multirow{2}{*}{HL2} & \sxmark & 66.0 / 79.9 & 66.1 / 92.5 & 71.8 / 92.4 & 74.2 / 88.0 & 74.9 / 92.5 & {\bf 79.3} / {\bf 92.8} \\
& \scmark & 67.7 / 82.3 & 66.4 / 94.5 & 74.3 / 94.3 & 76.2 / 90.1 & 76.7 / 94.4 & {\bf 81.6} / {\bf 94.9} \\
\midrule
\multirow{2}{*}{Phone} & \sxmark & 54.2 / 65.5 & 52.4 / 88.0 & 62.7 / 87.7 & 61.8 / 77.4 & 66.1 / 88.4 & {\bf 69.0} / {\bf 88.6} \\
& \scmark & 56.7 / 71.5 & 54.1 / {\bf 90.2} & 64.4 / 89.8 & 63.1 / 79.5 & 66.9 / 90.1 & {\bf 71.0} / {\bf 90.2} \\
\bottomrule
\end{tabular}
\vspace{1mm}
\caption{{\bf Ablation of the sequence localization.}
We report recall for the different steps of the sequence localization pipeline for 10s sequences on the CAB location.
The second localization, guided by the poses of the first PGO, drastically improves over the initial localization, especially when no radio signals are used.
The final pose refinement optimizing reprojection errors while also taking into account tracking constraints offers a significant boost for the tighter threshold.}
\label{tab:multi-frame-ablation}
\end{table}
\section{Phone capture application}
We wrote a simple iOS Swift application that saves the data exposed by the ARKit, CoreBluetooth, CoreMotion, and CoreLocation services.
The user can adjust the frame rate prior to capturing.
The interface displays the current input image and the trajectory estimated by ARKit as an AR overlay.
It also displays the amount of free disk space on the device, the recording time, the number of captured frames, and the status of the ARKit tracking and mapping algorithms.
The data storage is optimized such that a single device can capture hours of data without running out of space.
After capture, the data can be inspected on-device and shared over Airdrop or cloud storage.
Screenshots of the user interface are shown in \cref{fig:supp:app}.
\begin{figure}[t]
\centering
\begin{minipage}{0.25\textwidth}
\includegraphics[width=\linewidth]{figures/ios_app/init.jpg}
\end{minipage}%
\hspace{1mm}%
\begin{minipage}{0.25\textwidth}
\includegraphics[width=\linewidth]{figures/ios_app/record.jpg}
\end{minipage}%
\hspace{1mm}%
\begin{minipage}{0.25\textwidth}
\includegraphics[width=\linewidth]{figures/ios_app/result.jpg}
\end{minipage}
\vspace{2mm}
\caption{\textbf{iOS capture application.}
}%
\label{fig:supp:app}%
\vspace{0.5cm}
\end{figure}
\section{Implementation details}
\PAR{Scan-to-scan alignment:}
The pairwise alignment is initialized by matching the $r{=}5$ most similar images and running 3D-3D RANSAC with a threshold $\tau{=}5\text{cm}$.
The ICP refinement searches for correspondences within a $5$cm radius.
\PAR{Sequence-to-scan alignment:}
The single-image localization is only performed for keyframes, which are selected every $1$ second, $20$\degree of rotation, or $1$ meter of traveled distance.
In RANSAC, the inlier threshold depends on the detection noise and thus on the image resolution: $3\sigma$ for PnP and $1\sigma$ for GPnP, as camera rigs are better constrained.
Poses with fewer than $50$ inliers are discarded.
In the rigid alignment, inliers are selected for pose errors lower than $\tau_\text{rigid}{=}(2\text{m}, 20\degree)$.
Sequences with fewer than 4 inliers are discarded.
When optimizing the pose graph, we apply the $\text{arctan}$ robust cost function to the absolute pose term, with a scale of $100$ after covariance whitening.
In the bundle adjustment, reprojection errors larger than $5\sigma$px are discarded at initialization.
A robust Huber loss function is applied to the remaining ones with a scale of $2.5$ after covariance whitening.
\PAR{Radio transfer:}
As mentioned in the main paper, currently, Apple devices only expose partial radio signals.
Notably, WiFi signals cannot be recovered, Bluetooth beacons are not exposed, and the remaining Bluetooth signals are anonymized.
This makes it impossible to match them to those recovered by other devices (e.g., HoloLens, NavVis).
To show the potential benefit of exposing these radios, we implement a simple radio transfer algorithm from HoloLens 2 devices to phones.
First, we estimate the location of each HoloLens radio detection by linearly interpolating the poses of temporally adjacent frames.
For each phone query, we aggregate all radios within at most 3m in any direction and 1.5m vertically (to avoid cross-floor transfer) with respect to the ground-truth pose.
If the same radio signal is observed multiple times in this radius, we only keep the spatially closest 5 detections.
The final RSSI estimate for each radio signal is then obtained by a distance-based weighted-average of these observations.
Note that this step is done on the raw data, after the alignment.
Thus, we can safely release radios for phone queries without divulging the ground-truth poses.
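The transfer steps above can be sketched as follows. The data layout (a dictionary from radio id to located detections) and the inverse-distance weighting are assumptions for illustration; the released code may differ:

```python
import numpy as np

def transfer_rssi(query_pos, detections, r_xy=3.0, r_z=1.5, k=5):
    """Distance-weighted RSSI transfer from HoloLens detections to a phone
    query pose: aggregate detections within 3m (and 1.5m vertically, to
    avoid cross-floor transfer), keep the k spatially closest per radio,
    and average their RSSI weighted by inverse distance.
    `detections` maps a radio id to a list of (position, rssi) pairs.
    """
    out = {}
    for radio_id, obs in detections.items():
        near = []
        for pos, rssi in obs:
            d = pos - query_pos
            if np.linalg.norm(d) <= r_xy and abs(d[2]) <= r_z:
                near.append((np.linalg.norm(d), rssi))
        if not near:
            continue
        near.sort(key=lambda x: x[0])   # keep the k spatially closest
        near = near[:k]
        w = np.array([1.0 / (d + 1e-6) for d, _ in near])
        r = np.array([rssi for _, rssi in near])
        out[radio_id] = float(np.sum(w * r) / np.sum(w))
    return out
```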
\section{Dataset}
\label{sec:data}
We first give an overview of the setup and content of our dataset.
\PAR{Locations:}
The initial release of the dataset contains 3 large locations representative of AR use cases:
1) HGE (18'000 m$^2$) is the ground floor of a historical university building composed of multiple large halls and large esplanades on both sides.
2) CAB (12'000 m$^2$) is a multi-floor office building composed of multiple small and large offices, a kitchen, storage rooms, and 2 courtyards.
3) LIN (15'000 m$^2$) is a few blocks of an old town with shops, restaurants, and narrow passages.
HGE and CAB contain both indoor and outdoor sections with many symmetric structures.
Each location underwent structural changes over the span of a year, e.g.\@\xspace, the front of HGE turned into a construction site and the indoor furniture was rearranged.
See \cref{fig:locations} and \cref{sec:visuals} for visualizations.
\PAR{Data collection:}
We collected data using Microsoft HoloLens 2 and Apple iPad Pro devices with custom raw sensor recording applications.
10 participants were each given one device and asked to walk through a common designated area.
They were only given the instructions to freely walk through the environment to visit, inspect, and find their way around.
This yielded diverse camera heights and motion patterns.
Their trajectories were not planned or restricted in any way.
Participants visited each location, both during the day and at night, at different points in time over the course of up to 1 year.
In total, each location is covered by more than 100 sessions of 5 minutes.
We did not need to prepare the capturing site in any way before recording.
This enables easy barrier-free crowd-sourced data collections.
Each location was also captured two to three times by NavVis M6 trolley or VLX backpack mapping platforms, which generate textured dense 3D models of the environment using laser scanners and panoramic cameras.
\PAR{Privacy:}
We paid special attention to comply with privacy regulations.
Since the dataset is recorded in public spaces, our pipeline anonymizes all visible faces and licence plates.
\PAR{Sensors:}
We provide details about the recorded sensors in \cref{tab:sensors}.
The HoloLens has a specialized large field-of-view (FOV) multi-camera tracking rig (low resolution, global shutter) \cite{ungureanu2020hololens}, while the iPad has a single, higher-resolution camera with rolling shutter and more limited FOV.
We also recorded outputs of the real-time AR tracking algorithms available on each device, which includes relative camera poses and sensor calibration.
All images are undistorted.
All sensor data is registered into a common reference frame with accurate absolute GT poses using the pipeline described in the next section.
\begin{figure}[t]
\centering
\input{figures/trajectories}
\vspace{2mm}
\caption{%
\textbf{The locations feature diverse indoor and outdoor spaces.}
High-quality meshes, obtained from lidar, are registered with numerous AR sequences, each shown here as a different color.
%
}
\label{fig:locations}
\end{figure}
\section{Ground-truth generation}
\label{sec:method}
The GT estimation process takes as input the raw data from the different sensors.
The entire pipeline is fully automated and does not require any manual alignment or input.
\PAR{Overview:}
We start by aligning different sessions of the laser scanner by using the images and the 3D lidar point cloud.
When registered together, they form the GT reference map, which accurately captures the structure and appearance of the scene.
We then register each AR sequence individually to the reference map using local feature matching and relative poses from the on-device tracker.
Finally, all camera poses are refined jointly by optimizing the visual constraints within and across sequences.
\PAR{Notation:}
We denote ${}_i\*T_j \in \text{SE}(3)$ the 6-DoF pose, encompassing rotation and translation, that transforms a point in frame $j$ to another frame $i$.
Our goal is to compute globally-consistent absolute poses ${}_w\*T_i$ for all cameras $i$ of all sequences and scanning sessions into a common reference world frame $w$.
\subsection{Ground-truth reference model}
\label{sec:method:ref}
Each capture session $S \in \mathcal S$ of the NavVis laser-scanning platform is processed by a proprietary inertial-lidar SLAM that estimates, for each image $i$, a pose ${}_0\*T_i^S$ relative to the beginning of the session.
The software filters out noisy lidar measurements, removes dynamic objects, and aggregates the remainder into a globally-consistent colored 3D point cloud with a grid resolution of 1cm.
To recover visibility information, we compute a dense mesh using the Advancing Front algorithm~\cite{cohen2004greedy}.
Our first goal is to align the sessions into a common GT reference frame.
We assume that the scan trajectories are drift-free and only need to register each with a rigid transformation~${}_w\*T_0^S$.
Scan sessions can be separated by extensive periods of time and therefore exhibit large structural and appearance changes.
We use a combination of image and point cloud information to obtain accurate registrations without any manual initialization.
The steps are inspired by the reconstruction pipeline of Choi~et~al.\@\xspace~\cite{choi2015robust,Zhou2018}.
\PAR{Pair-wise registration:}
We first estimate a rigid transformation ${}_A\*T_B$ for each pair of scanning sessions $(A, B) \in \mathcal{S}^2$.
For each image $I_i^A$ in $A$, we select the $r$ most similar images $(I^B_j)_{1 \leq j \leq r}$ in $B$ based on global image descriptors~\cite{vlad,arandjelovic2016netvlad,apgem}, which helps the registration scale to large scenes.
We extract sparse local image features and establish 2D-2D correspondences $\{\*p^A_i,\*p^B_j\}$ for each image pair $(i, j)$.
The 2D keypoints $\*p_i \in \mathbb{R}^2$ are lifted to 3D, $\*P_i \in \mathbb{R}^3$, by tracing rays through the dense mesh of the corresponding session.
This yields 3D-3D correspondences $\{\*P^A_i,\*P^B_j\}$, from which we estimate an initial relative pose~\cite{umeyama1991least} using RANSAC~\cite{fischler1981random}.
This pose is refined with the point-to-plane Iterative Closest Point (ICP) algorithm~\cite{rusinkiewicz2001efficient} applied to the pair of lidar point clouds.
We use state-of-the-art local image features that can match across drastic illumination and viewpoint changes~\cite{sarlin2019,detone2018superpoint,revaudr2d2}.
Combined with the strong geometric constraints in the registration, our system is robust to long-term temporal changes and does not require manual initialization.
Using this approach, we have successfully registered building-scale scans captured at more than a year of interval with large structural changes.
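The 3D-3D estimation step can be sketched with the closed-form SVD solution of Umeyama (without scale). This is a minimal sketch of one sub-step only; the pipeline wraps it in RANSAC and refines the result with point-to-plane ICP:

```python
import numpy as np

def rigid_align(P_a, P_b):
    """Least-squares rigid transform (R, t) such that P_a ~ R @ P_b + t,
    via the SVD (Kabsch/Umeyama) solution without scale.
    P_a, P_b: (N, 3) arrays of corresponding 3D points.
    """
    ca, cb = P_a.mean(axis=0), P_b.mean(axis=0)
    H = (P_b - cb).T @ (P_a - ca)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                     # enforce det(R) = +1
    t = ca - R @ cb
    return R, t
```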
\PAR{Global alignment:}
We gather all pairwise constraints and jointly refine all absolute scan poses $\{{}_w\*T_0^S\}$ by optimizing a pose graph~\cite{grisetti2010tutorial}.
The edges are weighted with the covariance matrices of the pair-wise ICP estimates.
The images of all scan sessions are finally combined into a unique reference trajectory $\{{}_w\*T_i^\text{ref}\}$.
The point clouds and meshes are aligned according to the same transformations.
They define the reference representation of the scene, which we use as a basis to obtain GT for the AR sequences.
\PAR{Ground-truth visibility:}
The accurate and dense 3D geometry of the mesh allows us to compute accurate visual overlap between two cameras with known poses and calibration.
Inspired by Rau~et~al.\@\xspace~\cite{rau2020imageboxoverlap}, we define the overlap of image $i$ wrt.~a reference image $j$ by the ratio of pixels in $i$ that are visible in $j$:
\begin{equation}
O(i\rightarrow j) = \frac{\sum_{k\in(W,H)} \mathbbm{1}\left[
\mathrm{\Pi}_j({}_w\*T_j, \mathrm{\Pi}_i^{-1}({}_w\*T_i, \*p^i_k, z_k)) \in (W, H)
\right]\:\alpha_k
}{W\cdot H} \enspace,
\end{equation}
where $\mathrm{\Pi}_j$ projects a 3D point into camera $j$, $\mathrm{\Pi}_i^{-1}$ conversely back-projects pixel $\*p^i_k$ using its known depth $z_k$, and $(W, H)$ are the image dimensions.
The contribution of each pixel is weighted by the angle $\alpha_k = \cos(\*n_{i,k}, \*n_{j,k})$ between the two rays.
To handle scale changes, it is averaged both ways $i\rightarrow j$ and $j\rightarrow i$.
This score is efficiently computed by tracing rays through the mesh, which also provides occlusion checks for robustness.
This score $O\in[0,1]$ favors images that observe the same scene from similar viewpoints.
Unlike sparse co-visibility in an SfM model~\cite{radenovic2018fine}, our formulation is independent of the amount of texture and the density of the feature detections.
This score correlates with matchability -- we thus use it as GT when evaluating retrieval and to determine an upper bound on the theoretically achievable performance of our benchmark.
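A stripped-down version of this overlap can be sketched with pinhole projections alone. The sketch below omits the occlusion test, the normal-based weight $\alpha_k$, and the two-way averaging that the full mesh-based score adds; poses are $4{\times}4$ world-from-camera matrices and all names are illustrative:

```python
import numpy as np

def overlap(depth_i, K_i, T_wi, K_j, T_wj, shape_j):
    """Fraction of pixels of image i whose lifted 3D point projects inside
    image j.  depth_i: (H, W) per-pixel depths; K_*: 3x3 intrinsics;
    T_w*: 4x4 world-from-camera poses; shape_j: (H_j, W_j)."""
    H, W = depth_i.shape
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    rays = np.linalg.inv(K_i) @ np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1)
    P_cam = rays * depth_i.reshape(1, -1)                # points in camera i
    P_w = T_wi[:3, :3] @ P_cam + T_wi[:3, 3:4]           # to world
    P_j = T_wj[:3, :3].T @ (P_w - T_wj[:3, 3:4])         # to camera j
    valid = P_j[2] > 0                                    # in front of camera j
    p = K_j @ (P_j / np.maximum(P_j[2], 1e-9))            # project to pixels
    Hj, Wj = shape_j
    inside = valid & (p[0] >= 0) & (p[0] < Wj) & (p[1] >= 0) & (p[1] < Hj)
    return inside.mean()
```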
\subsection{Sequence-to-scan alignment}
\label{sec:method:seq}
We now aim to register each AR sequence individually into the dense GT reference model (see \cref{fig:seq-to-scan}).
Given a sequence of $n$ frames, we introduce a simple algorithm that estimates the per-frame absolute pose $\{{}_w\*T_i\}_{1 \leq i \leq n}$.
A frame refers to an image taken at a given time or, when the device is composed of a camera rig with known calibration (e.g.\@\xspace, HoloLens), to a collection of simultaneously captured images.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/pipeline.pdf}
\caption{{\bf Sequence-to-scan alignment.}
We first estimate the absolute pose of each sequence frame using image retrieval and matching.
This initial localization prior is used to obtain a single rigid alignment between the input trajectory and the reference 3D model via voting.
The alignment is then relaxed by optimizing the individual frame poses in a pose graph based on both relative and absolute pose constraints.
We bootstrap this initialization by mining relevant image pairs and re-localizing the queries.
Given these improved absolute priors, we optimize the pose graph again and finally include reprojection errors of the visual correspondences, yielding a refined trajectory.
%
}
\label{fig:seq-to-scan}
\end{figure}
\PAR{Inputs:}
We assume given trajectories $\{{}_0\*T^\text{track}_{i}\}$ estimated by a visual-inertial tracker -- we use ARKit for iPhone/iPad and the on-device tracker for HoloLens.
The tracker also outputs per-frame camera intrinsics $\{\*C_i\}$, which account for auto-focus or calibration changes and are for now kept fixed.
\PAR{Initial localization:}
For each frame of a sequence $\{I^{\text{query}}_{i}\}$, we retrieve a fixed number $r$ of relevant reference images $(I^\text{ref}_j)_{1 \leq j \leq r}$ using global image descriptors.
We match sparse local features~\cite{lowe2004distinctive,detone2018superpoint,revaudr2d2} extracted in the query frame to each retrieved image $I^\text{ref}_j$ obtaining a set of 2D-2D correspondences $\{\*p_{i,k}^\text{q},\*p_{j,k}^\text{ref}\}_k$.
The 2D reference keypoints are lifted to 3D by tracing rays through the mesh of the reference model, yielding a set of 2D-3D correspondences $\mathcal{M}_{i, j} : = \{\*p_{i,k}^\text{q},\*P_{j,k}^\text{ref}\}_k$.
We combine all matches per query frame $\mathcal{M}_{i} = \cup_{j=1}^{r} \mathcal{M}_{i, j}$ and estimate an initial absolute pose ${}_w\*T^\text{loc}_i$ using the (generalized) P3P algorithm~\cite{hee2016minimal} within a LO-RANSAC scheme~\cite{chum2003locally} followed by a non-linear refinement~\cite{Schoenberger2016Structure}.
Because of challenging appearance conditions, structural changes, or lack of texture, some frames cannot be localized in this stage.
We discard all poses that are supported by a low number of inlier correspondences.
\PAR{Rigid alignment:}
We next recover a coarse initial pose $\{{}_w\*T^\text{init}_i\}$ for all frames, including those that could not be localized.
Using the tracking, which is for now assumed drift-free, we find the rigid alignment ${}_w\*T^\text{init}_0$ that maximizes the consensus among localization poses.
This voting scheme is fast and effectively rejects poses that are incorrect, yet confident, due to visual aliasing and symmetries.
Each estimate is a candidate transformation ${}_w\*T^i_0 = {}_w\*T^\text{loc}_i \left({}_0\*T^\text{track}_{i}\right)^{-1}$, for which other frames can vote, if they are consistent within a threshold $\tau_\text{rigid}$.
We select the candidate with the highest count of inliers:
\begin{equation}
{}_w\*T^\text{init}_0 = \argmax_{\*T \in \{{}_w\*T^i_0\}_{1 \leq i \leq n}} \sum_{1 \leq j \leq n}
\mathbbm{1}\left[\text{dist} \left( {}_w\*T^\text{loc}_j, \*T\cdot{}_0\*T^\text{track}_{j} \right) < \tau_\text{rigid}\right] \enspace,
\end{equation}
where $\mathbbm{1}\left[\cdot\right]$ is the indicator function and $\text{dist} \left( \cdot, \cdot \right)$ returns the magnitude, in terms of translation and rotation, of the difference between two absolute poses.
We then recover the per-frame initial poses as $\{{}_w\*T^\text{init}_i := {}_w\*T^\text{init}_0 \cdot {}_0\*T^\text{track}_{i}\}_{1 \leq i \leq n}$.
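The voting equation above translates directly into code. A minimal sketch, where the `dist` callback returning translation and rotation magnitudes is an assumption:

```python
import numpy as np

def vote_rigid_alignment(T_loc, T_track, dist, tau):
    """Pick the candidate world-from-tracking transform with the most
    inliers.  T_loc[i], T_track[i]: 4x4 localization and tracking poses;
    dist(T1, T2) returns (translation, rotation) error magnitudes;
    tau: (translation, rotation) inlier thresholds."""
    best_T, best_count = None, -1
    for i in range(len(T_loc)):
        cand = T_loc[i] @ np.linalg.inv(T_track[i])   # candidate {}_w T_0
        count = sum(
            1 for j in range(len(T_loc))
            if all(d < t for d, t in zip(dist(T_loc[j], cand @ T_track[j]), tau))
        )
        if count > best_count:
            best_T, best_count = cand, count
    return best_T
```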
\PAR{Pose graph optimization:}
We refine the initial absolute poses by maximizing the consistency of tracking and localization cues within a pose graph.
The refined poses $\{{}_w\*T^\text{PGO}_i\}$ minimize the energy function
\begin{equation}
E(\{{}_w\*T_i\}) =
\sum_{i = 1}^{n - 1} \mathcal{C}_\text{PGO}\left({}_w\*T_{i+1}^{-1}\:{}_w\*T_i,\ {}_{i+1}\*T^\text{track}_{i} \right)
+ \sum_{i = 1}^n \mathcal{C}_\text{PGO}\left({}_w\*T_i,\ {}_w\*T^\text{loc}_i\right) \enspace,
\end{equation}
where $\mathcal{C}_\text{PGO}\left(\*T_1, \*T_2\right) := \left\Vert\text{Log}\left(\*T_1\:\*T_2^{-1}\right)\right\Vert^2_{\Sigma,\gamma}$ is the distance between two absolute or relative poses, weighted by covariance matrix $\Sigma \in \mathbb{R}^{6\times6}$ and loss function $\gamma$.
Here, $\text{Log}$ maps from the Lie group $\text{SE}(3)$ to the corresponding algebra $\mathfrak{se}(3)$.
We robustify the absolute term with the Geman-McClure loss function and anneal its scale via a Graduated Non-Convexity scheme~\cite{yang2020graduated}.
This ensures convergence in case of poor initialization, e.g.\@\xspace, when the tracking exhibits significant drift, while remaining robust to incorrect localization estimates.
The covariance of the absolute term is propagated from the preceding non-linear refinement performed during localization.
The covariance of the relative term is recovered from the odometry pipeline, or, if not available, approximated as a factor of the motion magnitude.
This step can fill the gaps from the localization stage using the tracking information and conversely correct for tracker drift using localization cues.
In rare cases, the resulting poses might still be inaccurate when both the tracking drifts and the localization fails.
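The cost $\mathcal{C}_\text{PGO}$ can be sketched numerically. The sketch below uses a decoupled rotation/translation logarithm, a common approximation of the true $\text{SE}(3)$ Log (which couples both), and omits the robust loss:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_residual(T1, T2):
    """6-vector residual of the relative error T1 @ T2^{-1}: rotation part
    as a rotation vector, translation part taken directly (a decoupled
    approximation of Log on SE(3))."""
    E = T1 @ np.linalg.inv(T2)
    rot = Rotation.from_matrix(E[:3, :3]).as_rotvec()
    return np.concatenate([rot, E[:3, 3]])

def cost_pgo(T1, T2, Sigma_inv):
    """Squared Mahalanobis norm of the residual, before the robust loss."""
    r = pose_residual(T1, T2)
    return float(r @ Sigma_inv @ r)
```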
\PAR{Guided localization via visual overlap:}
To further increase the pose accuracy, we leverage the current pose estimates $\{{}_w\*T^\text{PGO}_i\}$ to mine for additional localization cues.
Instead of relying on global visual descriptors, which are easily affected by aliasing, we select reference images with a high overlap using the score defined in \cref{sec:method:ref}.
For each sequence frame $i$, we select $r$ reference images with the largest overlap and again match local features and estimate an absolute pose.
These new localization priors improve the pose estimates in a second optimization of the pose graph.
\PAR{Bundle adjustment:}
For each frame $i$, we recover the set of 2D-3D correspondences $\mathcal{M}_i$ used by the guided re-localization.
We now refine the poses $\{{}_w\*T^\text{BA}_i\}$ by jointly minimizing a bundle adjustment problem with relative pose graph costs:
\begin{align}
\begin{split}
E(\{{}_w\*T_i\}) =
& \sum_{i = 1}^{n - 1} \mathcal{C}_\text{PGO}\left({}_w\*T_{i+1}^{-1}\:{}_w\*T_i,\ {}_{i+1}\*T^\text{track}_{i} \right) \\
+& \sum_{i=1}^{n} \sum_{\mathcal{M}_{i,j}\in \mathcal{M}_{i}}\sum_{(\*p^\text{q}_k, \*P^\text{ref}_k) \in \mathcal M_{i,j}}
\left\Vert\mathrm{\Pi} ({}_w\*T_i, \*P^\text{ref}_{j,k}) - \*p^\text{q}_{i,k}\right\Vert^2_{\sigma^2} \enspace,
\label{eq:pgo-ba}
\end{split}
\end{align}
where the second term evaluates the reprojection error of a 3D point $\*P^\text{ref}_{j,k}$ for observation $k$ to frame $i$.
The covariance is the noise $\sigma^2$ of the keypoint detection algorithm.
We pre-filter correspondences that are behind the camera or have an initial reprojection error greater than $\sigma\,\tau_\text{reproj}$.
As the 3D points are sampled from the lidar, we also optimize them with a prior noise corresponding to the lidar specifications.
We use the Ceres~\cite{ceres-solver} solver.
\subsection{Joint global refinement}
\label{sec:refinement}
Once all sequences are individually aligned, we refine them jointly by leveraging sequence-to-sequence visual observations.
This is helpful when sequences observe parts of the scene not mapped by the LiDAR.
We first triangulate a sparse 3D model from scan images, aided by the mesh.
We then triangulate additional observations, and finally jointly optimize the whole problem.
\PAR{Reference triangulation:}
We estimate image correspondences of the reference scan using pairs selected according to the visual overlap defined in \cref{sec:method:seq}.
Since the image poses are deemed accurate and fixed, we filter the correspondences using the known epipolar geometry.
We first consider feature tracks consistent with the reference surface mesh before triangulating more noisy observations within LO-RANSAC using COLMAP~\cite{Schoenberger2016Structure}.
The remaining feature detections, which could not be reliably matched or triangulated, are lifted to 3D by tracing through the mesh.
This results in an accurate, sparse SfM model with tracks across reference images.
\PAR{Sequence optimization:}
We then add each sequence to the sparse model.
We first establish correspondences between images of the same and of different sequences.
The image pairs are again selected by highest visual overlap computed using the aligned poses $\{{}_w\*T^\text{BA}_i\}$.
The resulting tracks are sequentially triangulated, merged, and added to the sparse model.
Finally, all 3D points and poses are jointly optimized by minimizing the joint pose-graph and bundle adjustment (\cref{eq:pgo-ba}).
As in COLMAP~\cite{Schoenberger2016Structure}, we alternate optimization and track merging.
To scale to large scenes, we subsample keyframes from the full frame-rate captures and only introduce absolute pose and reprojection constraints for keyframes while maintaining all relative pose constraints from tracking.
\begin{figure}[t]
\centering
\begin{minipage}{0.52\textwidth}
\includegraphics[width=\linewidth]{figures/uncertainties/uncertainty_CAB_t_keyrigs_scatter.png}
\end{minipage}%
\hspace{0.01\textwidth}%
\begin{minipage}{0.46\textwidth}
\centering
\includegraphics[width=.49\textwidth]{figures/qualitative_renderings/6493927726_ios_2022-02-27_18.05.07_000_cam_phone_6493927726.jpg}%
\hspace{0.01\textwidth}%
\includegraphics[width=.49\textwidth]{figures/qualitative_renderings/131582775_hl_2022-01-18-12-58-38-108.000_hetlf.jpg}
\includegraphics[width=.49\linewidth]{figures/qualitative_renderings/2714445675_hl_2021-06-02-11-59-31-495.001_hetlf.jpg}%
\hspace{0.01\textwidth}%
\includegraphics[width=.49\linewidth]{figures/qualitative_renderings/116376895_hl_2022-01-18-12-58-38-108.000_hetlf.jpg}
\end{minipage}
\vspace{2mm}
\caption{%
\textbf{Uncertainty of the GT poses for the CAB scene.}
Left: The overhead map shows that the translation uncertainties are larger in long corridors and outdoor spaces.
Right: Pairs of captured images (left) and renderings of the mesh at the estimated camera poses (right). They are pixel-aligned, which confirms that the poses are sufficiently accurate for our evaluation.
}%
\label{fig:uncertainties}%
\end{figure}
\subsection{Ground-truth validation}
\PAR{Potential limits:}
Brachmann~et~al.\@\xspace~\cite{brachmann2021limits} observe that algorithms generating pseudo-GT poses by minimizing either 2D or 3D cost functions alone can yield noticeably different results.
We argue that there exists a single underlying, true GT.
Reaching it requires fusing large amounts of redundant data with sufficient sensors of sufficiently low noise.
Our GT poses optimize complementary constraints from visual and inertial measurements, guided by an accurate lidar-based 3D structure.
Careful design and propagation of uncertainties reduces the bias towards one of the sensors.
All sensors are factory- and self-calibrated during each recording by the respective commercial, production-grade SLAM algorithms.
We do not claim that our GT is perfect but analyzing the optimization uncertainties sheds light on its degree of accuracy.
\PAR{Pose uncertainty:}
We estimate the uncertainties of the GT poses by inverting the Hessian of the refinement.
To obtain calibrated covariances, we scale them by the empirical keypoint detection noise, estimated as $\sigma{=}1.33$ pixels for the CAB scene.
The maximum noise in translation is the size of the major axis of the uncertainty ellipsoids, which is given by the largest eigenvalue $\sigma_t^2$ of the covariance matrices.
\Cref{fig:uncertainties} shows its distribution for the CAB scene.
We retain images whose poses are correct within $10$cm with a confidence of $99.7$\%.
For normally distributed errors, this corresponds to a maximum uncertainty $\sigma_t{=}3.33\text{cm}$ and discards $0.8$\% of all frames.
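The retention criterion can be sketched as follows. This is an illustrative NumPy snippet, not the released code: the translation-block Hessian input and the function names are assumptions for exposition.

```python
import numpy as np

def max_translation_sigma(H_tt: np.ndarray, keypoint_sigma: float = 1.33) -> float:
    """Largest translation std (meters) from the translation block of the
    refinement Hessian, calibrated by the empirical keypoint noise (pixels)."""
    # The covariance is the inverse Hessian, scaled by the keypoint noise variance.
    cov = keypoint_sigma**2 * np.linalg.inv(H_tt)
    # The worst-case 1-sigma error is along the major axis of the uncertainty
    # ellipsoid, i.e. the square root of the largest eigenvalue of the covariance.
    return float(np.sqrt(np.linalg.eigvalsh(cov)[-1]))

def keep_frame(H_tt: np.ndarray, max_error_m: float = 0.10) -> bool:
    """Retain a frame if its pose is within max_error_m at 99.7% (3-sigma)
    confidence, i.e. sigma_t <= max_error_m / 3 (about 3.33 cm for 10 cm)."""
    return max_translation_sigma(H_tt) <= max_error_m / 3.0
```

With a well-conditioned Hessian, frames whose major-axis uncertainty exceeds 3.33\,cm are discarded, matching the $0.8$\% rejection rate reported above.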
For visual inspection, we render images at the estimated GT camera poses using the colored mesh.
They appear pixel-aligned with the original images, supporting that the poses are accurate.
We provide additional visualizations in \cref{sec:uncertainties}.
\subsection{Selection of mapping and query sequences}
We divide the set of sequences into two disjoint groups for mapping and localization.
Mapping sequences are selected such that they have a minimal overlap between each other yet cover the area visited by all remaining sequences.
This simulates a scenario of minimal coverage and maximizes the number of localization query sequences.
We cast this as a combinatorial optimization problem solved with a depth-first search.
We provide more details in \cref{sec:map-query-split}.
\section{Evaluation}
We evaluate state-of-the-art approaches in both single-frame and sequence settings and summarize our results in \cref{fig:baselines}.
We build maps using both types of AR devices and evaluate the localization accuracy for 1000 randomly-selected queries of each device for each location.
All results are averaged across all locations.
\Cref{sec:supp:distribution} provides more details about the distribution of the evaluation data.
\PAR{Single-frame:}
We first consider in \cref{sec:eval:single-frame} the classical academic setup of single-frame queries (single image for phones and single rig for HoloLens 2) without additional sensors.
We then look at how radio signals can be beneficial.
We also analyze the impact of various settings: FOV, type of mapping images, and mapping algorithm.
\PAR{Sequence:}
Second, by leveraging the real-time AR tracking poses, we consider the problem of sequence localization in \cref{sec:eval:chunk}.
This corresponds to a real-world AR application retrieving the content attached to a target map using the real-time sensor stream from the device.
In this context, we care not only about accuracy and recall but also about the time required to localize accurately, which we call the \emph{time-to-recall}.
\begin{figure}[t]
\centering
\includegraphics[width=0.70\textwidth]{figures/main_results_v3}
\caption{\textbf{Main results.} We show results for Fusion image retrieval with SuperPoint local features and SuperGlue matcher on both HoloLens 2 and phone queries.
We consider several tracks: single-image / single-rig localization with / without radios and similarly for sequence (10 seconds) localization.
In addition, we report the percentage of queries with at least 5\% ground-truth overlap with respect to the best mapping image.
}%
\label{fig:baselines}
\end{figure}
\subsection{Single-frame localization}\label{sec:eval:single-frame}
We first evaluate several algorithms representative of the state of the art in the classical single-frame academic setup.
We consider the hierarchical localization framework with different approaches for image retrieval and matching.
Each of them first builds a sparse SfM map from reference images.
For each query frame, we then retrieve relevant reference images, match their local features, lift the reference keypoints to 3D using the sparse map, and finally estimate a pose with PnP+RANSAC.
We report the recall of the final pose at two thresholds~\cite{sattler2018benchmarking}:
1) a fine threshold at ($1^\circ, 10$cm), which we see as the minimum accuracy required for a good AR user experience in most settings.
2) a coarse threshold at ($5^\circ, 1$m) to show the room for improvement for current approaches.
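The recall at a pose-error threshold can be computed with a short helper; this is an illustrative sketch, with hypothetical function and argument names, of the metric described above.

```python
import numpy as np

def pose_recall(rot_err_deg, trans_err_m, rot_thresh_deg, trans_thresh_m):
    """Fraction of queries whose estimated pose is within BOTH the rotation
    and translation thresholds, e.g. (1 deg, 10 cm) or (5 deg, 1 m)."""
    rot_err_deg = np.asarray(rot_err_deg)
    trans_err_m = np.asarray(trans_err_m)
    ok = (rot_err_deg <= rot_thresh_deg) & (trans_err_m <= trans_thresh_m)
    return float(ok.mean())
```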
We evaluate global descriptors computed by NetVLAD~\cite{arandjelovic2016netvlad} and by a fusion~\cite{humenberger2020robust} of NetVLAD and APGeM~\cite{apgem}, which are representative of the field~\cite{pion2020benchmarking}.
We retrieve the 10 most similar images.
For matching, we evaluate handcrafted SIFT~\cite{lowe2004distinctive}, SOSNet~\cite{tian2019sosnet} as a learned patch descriptor extracted from DoG~\cite{lowe2004distinctive} keypoints, and a robust deep-learning based joint detector and descriptor R2D2~\cite{revaudr2d2}.
Those are matched by exact mutual nearest neighbor search.
We also evaluate SuperGlue~\cite{sarlin2020superglue} -- a learned matcher based on SuperPoint~\cite{detone2018superpoint} features.
To build the map, we retrieve neighboring images filtered by frustum intersection from reference poses, match these pairs, and triangulate a sparse SfM model using COLMAP~\cite{Schoenberger2016Structure}.
We report the results in \cref{tab:baselines+radio} (left).
Even the best methods have a large gap to perfect scores and much room for improvement.
In the remaining ablation, we solely rely on SuperPoint+SuperGlue~\cite{detone2018superpoint,sarlin2020superglue} for matching as it clearly performs the best.
\begin{table}[t]
\centering
\begin{minipage}{0.47\linewidth}
\scriptsize{\input{tables/baselines.tex}}
\end{minipage}
\begin{minipage}{0.52\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/retrieval_small_v3.pdf}
\end{minipage}
\vspace{0.5mm}
\caption{%
\textbf{Left: single-frame localization.}
We report the recall at ($1^\circ, 10$cm)/($5^\circ, 1$m) for baselines representative of the state of the art.
Our dataset is challenging while most others are saturated.
There is a clear progress from SIFT but also large room for improvement.
\textbf{Right: localization with radio signals.}
Increasing the number \{5, 10, 20\} of retrieved images increases the localization recall at ($1^\circ, 10$cm).
The best-performing visual retrieval (Fusion, orange) is however far worse than the GT overlap.
Filtering with radio signals (blue) improves the performance in all settings.
}%
\label{tab:baselines+radio}
\end{table}
\PAR{Leveraging radio signals:}\label{sec:eval:single-frame-radio}
In this experiment, we show that radio signals can be used to constrain the search space for image retrieval.
This has two main benefits: 1) it reduces the risk of incorrectly considering visual aliases, and 2) it lowers the compute requirements by reducing the number of images that need to be retrieved and matched.
We implement this filtering as follows.
We first split the scene into a sparse 3D grid considering only voxels containing at least one mapping frame.
For each frame, we gather all radio signals in a $\pm2$s window and associate them to the corresponding voxel.
If the same endpoint is observed multiple times in a given voxel, we average the received signal strengths (RSSI) in dBm.
For a query frame, we similarly aggregate signals over the past 2s and rank voxels by the L2 distance between their RSSIs, considering only voxels with at least one endpoint in common.
We thus restrict image retrieval to $2.5\%$ of the map.
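The voxel fingerprinting and ranking steps can be sketched as below. The data layout (tuples of voxel id and endpoint-to-RSSI mappings) is an assumption for illustration, not the released format.

```python
import numpy as np
from collections import defaultdict

def build_radio_map(mapping_frames):
    """Aggregate radio measurements into a sparse voxel fingerprint map.
    mapping_frames: list of (voxel_id, {endpoint_id: rssi_dbm}) tuples."""
    obs = defaultdict(lambda: defaultdict(list))
    for voxel, signals in mapping_frames:
        for endpoint, rssi in signals.items():
            obs[voxel][endpoint].append(rssi)
    # Average the RSSI (dBm) when an endpoint is observed multiple times in a voxel.
    return {v: {e: float(np.mean(r)) for e, r in eps.items()}
            for v, eps in obs.items()}

def rank_voxels(radio_map, query_signals):
    """Rank voxels by the L2 distance between RSSIs over shared endpoints,
    considering only voxels with at least one endpoint in common."""
    scored = []
    for voxel, fingerprint in radio_map.items():
        common = fingerprint.keys() & query_signals.keys()
        if not common:
            continue
        dist = np.sqrt(sum((fingerprint[e] - query_signals[e]) ** 2 for e in common))
        scored.append((dist, voxel))
    return [v for _, v in sorted(scored)]
```

Image retrieval is then restricted to mapping frames falling in the top-ranked voxels.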
\Cref{tab:baselines+radio} (right) shows that radio filtering always improves the localization accuracy over vanilla vision-only retrieval, irrespective of how many images are matched.
The upper bound based on the GT overlap, defined in \cref{sec:method:ref}, shows that there is still much room for improvement for both image and radio retrieval.
As the GT overlap baseline is far from the perfect 100\% recall, frame-to-frame matching and pose estimation have also much room to improve.
\PAR{Varying field-of-view:}
We study the impact of the FOV of the HoloLens 2 device via two configurations:
1) Each camera in a rig is seen as a single frame and localized using LO-RANSAC + P3P.
2) We consider all four cameras in a frame and localize them together using the generalized solver GP3P.
With fusion retrieval, SuperPoint, and SuperGlue, single images (1) only achieve 45.6\%~/~61.3\% recall,
while using rigs (2) yields 64.2\%~/~77.4\%~(\cref{tab:baselines+radio}).
Rig localization is thus highly beneficial, especially in hard cases where single cameras face texture-less areas, such as the ground and walls.
\PAR{Mapping modality:}
We study whether the high-quality lidar mesh can be used for localization.
We consider two approaches to obtain a sparse 3D point cloud:
1) By triangulating sparse visual correspondences across multiple views.
2) By lifting 2D keypoints in reference images to 3D by tracing rays through the mesh.
Lifting can leverage dense correspondences, which cannot be efficiently triangulated with conventional multi-view geometry.
We thus compare 1) and 2) with SuperGlue to 2) with LoFTR~\cite{sun2021loftr}, a state-of-the-art dense matcher.
The results in \cref{tab:mapping+tri} (right) show that the mesh brings some improvements.
Points could also be lifted by dense depth from multi-view stereo.
We however did not obtain satisfactory results with a state-of-the-art approach~\cite{wang2020patchmatchnet} as it cannot handle very sparse mapping images.
\PAR{Mapping scenario:}
We study the accuracy of localization against maps built from different types of images: 1) crowd-sourced, dense AR sequences; 2) curated, sparser HD 360 images from the NavVis device; 3) a combination of the two.
The results are summarized in \cref{tab:mapping+tri} (left), showing that the mapping scenario has a large impact on the final numbers.
On the other hand, image pair selection for mapping matters little.
Crowd-sourcing and manual scans can complement each other well to address an imperfect scene coverage.
We hope that future work can close the gap between the scenarios to achieve better metrics from crowd-sourced data without curation.
\begin{table*}[t]
\centering
\begin{minipage}{0.57\linewidth}
\scriptsize{\input{tables/mapping.tex}}%
\end{minipage}%
\hspace{0.03\linewidth}%
\begin{minipage}{0.399\linewidth}
\includegraphics[width=1.0\textwidth]{figures/map_algo_v2.pdf}%
\end{minipage}%
\vspace{1.5mm}
\caption{%
\textbf{Impact of mapping.}
\textbf{Left: Scenarios.}
Building the map with HD 360 images from NavVis scanners, instead of or with dense AR sequences, does not consistently boost the performance as they are usually sparser, do not fully cover each location, and have different characteristics than AR images.
\textbf{Right: Modalities.}
Lifting 2D points to 3D using the lidar mesh instead of triangulating with SfM is beneficial.
This can also leverage dense matching, e.g.\@\xspace with LoFTR.
}%
\label{tab:mapping+tri}%
\end{table*}
\subsection{Sequence localization}\label{sec:eval:chunk}
In this section, inspired by typical AR use cases, we consider the problem of sequence localization. The task is to align multiple consecutive frames using sensor data aggregated over short time intervals.
Our baseline for this task builds on the ground-truthing pipeline and as such has relatively high compute requirements.
However, we are primarily interested in demonstrating the potential performance gains by leveraging multiple frames.
First, we run image retrieval and single-frame localization, followed by a first PGO with tracking and localization poses.
Then, we do a second localization with retrieval guided by the poses of the first PGO, followed by a second PGO.
Finally, we run a pose refinement by considering reprojections to query frames and tracking cost.
We can also use radio signals to restrict image retrieval throughout the pipeline.
As previously, we consider the localization recall but only of the last frame in each sequence, which is the one that influences the current AR user experience in a real-time scenario.
\begin{figure}[t]
\centering
\begin{minipage}{0.66\linewidth}
\includegraphics[width=\linewidth]{figures/ttr_v2.pdf}%
\end{minipage}%
\begin{minipage}{0.33\linewidth}
\centering
\scriptsize{\input{tables/ttr.tex}}
\end{minipage}
\vspace{1mm}
\caption{%
\textbf{Sequence localization.}
We report the localization recall at ($1^\circ, 10$cm) of SuperPoint features with SuperGlue matcher as we increase the duration of each sequence.
The pipeline leverages both on-device tracking and absolute localization, as vision-only (solid) or combined with radio signals (dashed).
We show the time-to-recall (TTR) at 80\% for HL2 and at 70\% for phone queries.
Using radio signals reduces the TTR from over 10s to 1.40s and 3.58s, respectively.
}
\label{fig:chunks}
\end{figure}
We evaluate various query durations and introduce the \emph{time-to-recall} metric as the sequence length (time) required to successfully localize X\% (recall) of the queries within (1$^\circ$, 10cm), or, in short, TTR@X\%.
Localization algorithms should aim to minimize this metric to render retrieved content as quickly as possible after starting an AR experience.
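Given a recall curve over sequence durations, the TTR@X\% metric can be computed as follows; the function name and inputs are illustrative.

```python
def time_to_recall(durations_s, recalls, target):
    """Smallest sequence duration (seconds) at which the localization recall
    reaches `target`, or None if it is never reached (TTR@target)."""
    for duration, recall in sorted(zip(durations_s, recalls)):
        if recall >= target:
            return duration
    return None
```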
\Cref{fig:chunks} reports the results averaged over all locations.
While current methods do not yet achieve a satisfactory TTR@90\% under 10 seconds, sequence localization leads to significant gains of 20\%.
The radio signals improve the performance in particular with shorter sequences and thus effectively reduce the time-to-recall.
\section{Introduction}
Placing virtual content in the physical 3D world, persisting it over time, and sharing it with other users are typical scenarios for Augmented Reality (AR).
In order to reliably overlay virtual content in the real world with pixel-level precision, these scenarios require AR devices to accurately determine their 6-DoF pose at any point in time.
While visual localization and mapping is one of the most studied problems in computer vision, its use for AR entails specific challenges and opportunities.
First, modern AR devices, such as mobile phones or the Microsoft HoloLens or \mbox{MagicLeap One}, are often equipped with multiple cameras and additional inertial or radio sensors.
Second, they exhibit characteristic hand-held or head-mounted motion patterns.
The on-device real-time tracking systems provide spatially-posed sensor streams.
However, many AR scenarios require positioning beyond local tracking, both indoors and outdoors, and robustness to common temporal changes of appearance and structure.
Furthermore, given the plurality of temporal sensor data, the question is often not whether, but how quickly the device can localize at any time to ensure a compelling end-user experience.
Finally, as AR adoption grows, crowd-sourced data captured by users with diverse devices can be mined for building large-scale maps without a manual and costly scanning effort.
Crowd-sourcing offers great opportunities but poses additional challenges on the robustness of algorithms, e.g.\@\xspace, to enable cross-device localization~\cite{dusmanu2021cross}, mapping from incomplete data with low accuracy~\cite{Schoenberger2016Structure,brachmann2021limits}, privacy-preservation of data~\cite{speciale2019a,geppert2020privacy,shibuya2020privacy,geppert2021privacy,dusmanu2021privacy}, etc.\@\xspace
However, the academic community is mainly driven by benchmarks that are disconnected from the specifics of AR.
They mostly evaluate localization and mapping using single still images and either lack temporal changes~\cite{shotton2013scene,advio} or accurate ground truth~(GT)~\cite{sattler2018benchmarking,kendall2015,taira2018inloc}, are restricted to small scenes~\cite{Balntas2017HPatches,shotton2013scene,kendall2015,wald2020,schops2017multi} or landmarks~\cite{Jin2020Image,Schonberger2017Comparative} with perfect coverage and limited viewpoint variability, or disregard temporal tracking data or additional visual, inertial, or radio sensors~\cite{sattler2012aachen,sattler2018benchmarking,taira2018inloc,lee2021naver,nclt,sun2017dataset}.
Our first contribution is to introduce {\bf a large-scale dataset captured using AR devices in diverse environments}, notably a historical building, a multi-story office building, and part of a city center.
The initial data release contains both indoor and outdoor images with illumination and semantic changes as well as dynamic objects.
Specifically, we collected multi-sensor data streams (images, depth, tracking, IMU, BT, WiFi) totalling more than 100 hours using head-mounted HoloLens 2 and hand-held iPhone / iPad devices covering 45'000 square meters over the span of one year (\cref{fig:teaser}).
Second, we develop {\bf a GT pipeline to automatically and accurately register AR trajectories} against large-scale 3D laser scans.
Our pipeline does not require any manual labelling or setup of custom infrastructure (e.g.\@\xspace, fiducial markers).
Furthermore, the system robustly handles crowd-sourced data from heterogeneous devices captured over longer periods of time and can be easily extended to support future devices.
Finally, we present {\bf a rigorous evaluation of localization and mapping in the context of AR} and provide {\bf novel insights for future research}.
Notably, we show that the performance of state-of-the-art methods can be drastically improved by considering additional data streams generally available in AR devices, such as radio signals or sequence odometry.
Thus, future algorithms in the field of AR localization and mapping should always consider these sensors in their evaluation to show real-world impact.
The LaMAR dataset, benchmark, GT pipeline, and the implementations of baselines integrating additional sensory data are all publicly available at~\href{https://lamar.ethz.ch/}{\texttt{lamar.ethz.ch}}.
We hope that this will spark future research addressing the challenges of AR.
\section{Conclusion}
LaMAR is the first benchmark that faithfully captures the challenges and opportunities of AR for visual localization and mapping.
We first identified several key limitations of current benchmarks that make them unrealistic for AR.
To address these limitations, we developed a new ground-truthing pipeline to accurately and robustly register AR sensor streams in large and diverse scenes aided by laser scans without any manual labelling or custom infrastructure.
With this new benchmark, initially covering 3 large locations, we revisited the traditional academic setup and showed a large performance gap for existing state-of-the-art methods when evaluated using more realistic and challenging data.
We implemented simple yet representative baselines to take advantage of the AR-specific setup and we presented new insights that pave promising avenues for future works.
We showed the large potential of leveraging other sensor modalities like radio signals, depth, or query sequences instead of single images.
We also hope to direct the attention of the community towards improving map representations for crowd-sourced data and towards considering the time-to-recall metric, which is currently largely ignored.
We publicly release at \href{https://lamar.ethz.ch/}{\texttt{lamar.ethz.ch}} the complete LaMAR dataset, our ground-truthing pipeline, and the implementation of all baselines.
The evaluation server and public leaderboard facilitate the benchmarking of new approaches to keep track of the state of the art.
We hope this will spark future research addressing the challenges of AR.
\PAR{Acknowledgements.}
LaMAR would not have been possible without the hard work and contributions of %
Gabriela Evrova,
Silvano Galliani,
Michael Baumgartner,
Cedric Cagniart,
Jeffrey Delmerico,
Jonas Hein,
Dawid Jeczmionek,
Mirlan Karimov,
Maximilian Mews,
Patrick Misteli,
Juan Nieto,
Sònia Batllori Pallarès,
R\'emi Pautrat,
Songyou Peng,
Iago Suarez,
Rui Wang,
Jeremy Wanner,
Silvan Weder
and our colleagues in \mbox{CVG at ETH Zurich} and the wider Microsoft Mixed Reality \& AI team.
\section{Related work}
\begin{table*}[t]
\centering
\resizebox{\textwidth}{!}{%
\scriptsize{\input{tables/datasets.tex}}%
}%
\vspace{1mm}
\caption{\textbf{Overview of existing datasets.}
No dataset, besides ours, exhibits at the same time short-term appearance and structural changes due to moving people~\iconDynamic, weather~\iconWeather, or day-night cycles~\iconNight, but also long-term changes due to displaced furniture~\iconChair\ or construction work~\iconStruct.
}%
\label{tab:datasets}
\end{table*}
\PAR{Image-based localization}
is classically tackled by estimating a camera pose from correspondences established between sparse local features~\cite{lowe2004distinctive,bay2008speeded,Rublee2011ORB,mikolajczyk2004ijcv} and a 3D Structure-from-Motion (SfM)~\cite{Schoenberger2016Structure} map of the scene~\cite{fischler1981random,li2012worldwide,sattler2012improving}.
This pipeline scales to large scenes using image retrieval~\cite{arandjelovic2012three,vlad,apgem,asmk,cao2020unifying,rau2020imageboxoverlap,densevlad}.
Recently, many of these steps or even the end-to-end pipeline have been successfully learned with neural networks~\cite{detone2018superpoint,sarlin2020superglue,Dusmanu2019CVPR,schoenberger2018semantic,arandjelovic2016netvlad,NIPS2017_831caa1b,tian2019sosnet,sarlin2019,yi2016lift,Hyeon2021,sarlin21pixloc,lindenberger2021pixsfm}.
Other approaches regress absolute camera pose~\cite{kendall2015,kendall2017geometric,ng2021reassessing} or scene coordinates~\cite{shotton2013scene,valentin2015cvpr,meng2017backtracking,massiceti2017random,angle_scr,brachmann2019esac,Wang2021,brachmann2021dsacstar}.
However, all these approaches typically fail whenever there is a lack of context (e.g.\@\xspace, limited field-of-view) or the map has repetitive elements.
Leveraging the sequential ordering of video frames~\cite{seqslam,johns2013feature} or modelling the problem as a generalized camera~\cite{pless2003using,hee2016minimal,sattler2018benchmarking,speciale2019a} can improve results.
\PAR{Radio-based localization:}
Radio signals, such as WiFi and Bluetooth, are spatially bounded (logarithmic decay)~\cite{radar,khalajmehrabadi2016modern,radio_fingerprint} and can thus distinguish similar-looking yet spatially distant locations.
Their unique identifiers can be hashed, which makes them computationally attractive compared with high-dimensional image descriptors.
Several methods use the signal strength, angle, direction, or time of arrival~\cite{radio_aoa,radio_toa,radio_tdoa}, but the most popular is model-free, map-based fingerprinting~\cite{khalajmehrabadi2016modern,radio_fingerprint,fault_tolerant}, as it only requires collecting the unique identifiers of nearby radio sources and the received signal strength.
GNSS provides absolute 3-DoF positioning but is not applicable indoors and has insufficient accuracy for AR scenarios, especially in urban environments due to multi-pathing, etc.\@\xspace
\PAR{Datasets and ground-truth:}
Many of the existing benchmarks (cf.\@\xspace~\cref{tab:datasets}) are captured in small-scale environments \cite{shotton2013scene,wald2020,dai2017scannet,hodan2018bop}, do not contain sequential data~\cite{sattler2012aachen,Jin2020Image,sanfrancisco,taira2018inloc,sun2017dataset,schops2017multi,Balntas2017HPatches,Schonberger2017Comparative}, lack characteristic hand-held/head-mounted motion patterns~\cite{sattler2018benchmarking,Badino2011,RobotCarDatasetIJRR,wenzel2020fourseasons}, or their GT is not accurate enough for AR~\cite{advio,kendall2015}.
None of these datasets contain WiFi or Bluetooth data (\cref{tab:datasets}).
The closest to our work are Naver Labs~\cite{lee2021naver}, NCLT~\cite{nclt} and ETH3D~\cite{schops2017multi}.
Both, Naver Labs~\cite{lee2021naver} and NCLT~\cite{nclt} are less accurate than ours and do not contain AR specific trajectories or radio data.
The Naver Labs dataset~\cite{lee2021naver} also does not contain any outdoor data.
ETH3D~\cite{schops2017multi} is highly accurate, however, it is only small-scale, does not contain significant changes, or any radio data.
To establish ground-truth, many datasets rely on off-the-shelf SfM algorithms~\cite{Schoenberger2016Structure} for unordered image collections~\cite{sattler2012aachen,Jin2020Image,kendall2015,wald2020,advio,sun2017dataset,taira2018inloc}.
Pure SfM-based GT generation has limited accuracy~\cite{brachmann2021limits} and completeness, which biases the evaluations to scenarios in which visual localization already works well.
Other approaches rely on RGB(-D) tracking~\cite{wald2020,shotton2013scene}, which usually drifts in larger scenes and cannot produce GT in crowd-sourced, multi-device scenarios.
Specialized capture rigs that couple an AR device with a more accurate sensor (lidar)~\cite{lee2021naver,nclt} prevent capturing realistic AR motion patterns.
Furthermore, scalability is limited for these approaches, especially if they rely on manual selection of reference images~\cite{sun2017dataset}, laborious labelling of correspondences~\cite{sattler2012aachen,taira2018inloc}, or placement of fiducial markers~\cite{hodan2018bop}.
For example, the accuracy of ETH3D~\cite{schops2017multi} is achieved by using a single stationary lidar scan, manual cleaning, and aligning very few images captured by tripod-mounted DSLR cameras.
Images thus obtained are not representative for AR devices and the process cannot scale or take advantage of crowd-sourced data.
In contrast, our fully automatic approach does not require any manual labelling or special capture setups, thus enables light-weight and repeated scanning of large locations.
\section*{Appendix}
\section{Visualizations}
\label{sec:visuals}
\PAR{Diversity of devices:}
We show in \cref{fig:supp:samples} some samples of images captured in the HGE location.
NavVis and phone images are colored while HoloLens2 images are grayscale.
NavVis images are always perfectly upright, while the viewpoint and height of HoloLens2 and phone images varies significantly.
Despite the automatic exposure, phone images easily appear dark in night-time low-light conditions.
\PAR{Diversity of environments:}
We show an extensive overview of the three locations CAB, HGE, and LIN in Figures~\ref{fig:supp:CAB}, \ref{fig:supp:HGE}, and \ref{fig:supp:LIN}, respectively.
In each image, we show a rendering of the lidar mesh along with the ground truth trajectories of a few sequences.
\begin{figure}[!b]
\centering
\includegraphics[width=\textwidth]{figures/image_samples_compressed.pdf}
\caption{\textbf{Sample of images from the different devices:} NavVis M6, HoloLens2, phone.
Each column shows a different scene of the HGE location with large illumination changes.
}%
\label{fig:supp:samples}%
\end{figure}
\begin{figure}[p]
\centering
\input{supp/trajectories_CAB}
\vspace{2mm}
\caption{\textbf{The CAB location} features 1-2) a staircase spanning 5 similar-looking floors, 3) large and small offices and meeting rooms, 4) long corridors, 5) large halls, and 6) outdoor areas with repeated structures.
This location includes the \emph{Facade}, \emph{Courtyard}, \emph{Lounge}, \emph{Old Computer}, \emph{Storage Room}, and \emph{Office} scenes of the ETH3D~\cite{schops2017multi} dataset and is thus much larger than each of them.
}%
\label{fig:supp:CAB}%
\end{figure}
\begin{figure}[!ht]
\centering
\input{supp/trajectories_HGE}
\vspace{2mm}
\caption{\textbf{The HGE location} features a highly-symmetric building with 1-2) hallways, 3) long corridors, 4) two esplanades, and 5) a section of sidewalk.
This location includes the \emph{Relief}, \emph{Door}, and \emph{Statue} scenes of the ETH3D~\cite{schops2017multi} dataset.
}%
\label{fig:supp:HGE}
\vspace{1cm}
\end{figure}
\begin{figure}[ht]
\centering
\input{supp/trajectories_LIN}
\vspace{2mm}
\caption{\textbf{The LIN location} features large outdoor open spaces (top row), narrow passages with stairs (middle row), and both residential and commercial street-level facades.
}%
\label{fig:supp:LIN}
\vspace{1cm}
\end{figure}
\PAR{Long-term changes:}
Because spaces are actively used and managed, they undergo significant appearance and structural changes over the year-long data recording.
This is captured by the laser scans, which are aligned based on elements that do not change, such as the structure of the buildings.
We show in \cref{fig:supp:changes} a visual comparison between scans captured at different points in time.
\begin{figure}[p]
\centering
\input{supp/pointcloud_changes}
\vspace{1mm}
\caption{\textbf{Long-term structural changes.}
Lidar point clouds captured over a year reveal the geometric changes that spaces undergo at different time scales:
1) very rarely (construction work),
2-4) sparsely (displacement of furniture), or even
5-6) daily due to regular usage (people, objects).
}%
\label{fig:supp:changes}%
\end{figure}
\newpage
\section{Uncertainties of the ground truth}
\label{sec:uncertainties}
We show overhead maps and histograms of uncertainties for all scenes in \cref{fig:supp:uncertainties}.
We also show additional rendering comparisons in \cref{fig:renderings}.
Since we do not use the mesh for any color-accurate tasks (e.g.\@\xspace, photometric alignment), we use a simple vertex coloring based on the NavVis colored lidar point cloud.
The renderings are therefore not realistic but nevertheless allow an inspection of the final alignment.
The proposed ground-truthing pipeline yields poses that allow pixel-accurate rendering.
\begin{figure}[p]
\centering
\input{supp/uncertainties}
\vspace{4mm}
\caption{\textbf{Translation uncertainties of the ground truth camera centers} for the CAB (top), LIN (middle) and HGE (bottom) scenes.
Left: The overhead map shows that the uncertainties are larger in areas that are not well covered by the 3D scanners or where the scene is further away from the camera, such as in long corridors and large outdoor space.
Right: The histogram of uncertainties shows that most images have an uncertainty far lower than $\sigma_t{=}3.33\text{cm}$.
}%
\label{fig:supp:uncertainties}%
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/116376895_hl_2022-01-18-12-58-38-108.000_hetlf.jpg}~
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/2714445675_hl_2021-06-02-11-59-31-495.001_hetlf.jpg}~
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/453870575_hl_2021-06-02-11-31-59-805.000_hetlf.jpg}\\
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/131582775_hl_2022-01-18-12-58-38-108.000_hetlf.jpg}~
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/492852500_hl_2021-06-02-11-31-59-805.000_hetlf.jpg}~
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/375274845_hl_2021-06-02-11-31-59-805.000_hetrf.jpg}\\
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/4953298178_ios_2022-02-27_17.39.20_000_cam_phone_4953298178.jpg}~ \includegraphics[width=.30\textwidth]{figures/qualitative_renderings/6493927726_ios_2022-02-27_18.05.07_000_cam_phone_6493927726.jpg}~ \includegraphics[width=.30\textwidth]{figures/qualitative_renderings/7165878684_ios_2022-02-27_20.29.13_000_cam_phone_7165878684.jpg}\\ \includegraphics[width=.30\textwidth]{figures/qualitative_renderings/72874052732_ios_2022-01-20_14.56.34_000_cam_phone_72874052732.jpg}~ \includegraphics[width=.30\textwidth]{figures/qualitative_renderings/7763152056_ios_2022-02-27_20.41.27_000_cam_phone_7763152056.jpg}~
\includegraphics[width=.30\textwidth]{figures/qualitative_renderings/95570500269_ios_2021-06-02_14.31.38_000_cam_phone_95570500269.jpg}\\
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_hg1.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_hg2.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_lin1.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_lin2.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_cab1.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_cab2.jpg}\\
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_hg3.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_hg4.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_lin3.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_lin4.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_cab3.jpg}~
\includegraphics[width=.15\textwidth]{figures/qualitative_renderings/mosaic_cab4.jpg}\\
\vspace{1mm}
\caption{{\bf Qualitative renderings from the mesh.}
Using the estimated ground-truth poses, we render images from the vertex-colored mesh (right) and compare them to the originals (left).
The first two rows show six HoloLens 2 images while the next two show six phone images.
We overlay a regular grid to facilitate the comparison.
The bottom rows show $2\times 2$ mosaics alternating between originals (top-left, bottom-right) and renders (top-right, bottom-left).
Best viewed when zoomed in.
}
\label{fig:renderings}
\end{figure}
\newpage
\section{Selection of mapping and query sequences}
\label{sec:map-query-split}
We now describe in more detail the algorithm that automatically selects mapping and query sequences, whose distributions are shown in \cref{fig:supp:qualitymaps}.
The coverage $C(i,j)_k$ is a boolean that indicates whether image $k$ of sequence $i$ shares sufficient covisibility with at least one image in sequence $j$.
Here two images are deemed covisible if they co-observe a sufficient number of 3D points in the final, full SfM sparse model~\cite{radenovic2018fine} or according to the ground truth mesh-based visual overlap.
The coverage of sequence $i$ with a set of other sequences $\mathcal{S} = \{j\}$ is the ratio of images in $i$ that are covered by at least one image in $\mathcal{S}$:
\begin{equation}
C(i,\mathcal{S}) = \frac{1}{|i|}\sum_{k\in i} \bigvee_{j \in \mathcal{S}} C(i,j)_k
\end{equation}
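In code, this computation amounts to a logical OR over the per-sequence boolean coverage arrays followed by an average. The following Python sketch assumes a hypothetical nested mapping \texttt{cover[i][j]} holding the boolean array $C(i,j)$ over the images of sequence $i$:

```python
import numpy as np

def coverage(i, S, cover):
    """Ratio of images of sequence i covered by at least one sequence in S.

    cover[i][j][k] is True if image k of sequence i is covisible with at
    least one image of sequence j (C(i, j)_k in the text).
    """
    n_images = len(next(iter(cover[i].values())))
    covered = np.zeros(n_images, dtype=bool)
    for j in S:
        covered |= np.asarray(cover[i][j])  # union: "at least one image in S"
    return covered.mean()
```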
\begin{figure}[p]
\centering
\input{supp/qualitymaps}
\vspace{4mm}
\caption{\textbf{Spatial distribution of AR sequences} for the CAB (top), HGE (middle), and LIN (bottom) locations.
We show the ground truth trajectories overlaid on the lidar point clouds along 3 orthogonal directions.
All axes are in meters and $z$ is aligned with the gravity.
Left: Types of AR devices among all registered sequences.
Right: Map and query sequences selected for evaluation.
CAB spans multiple floors while HGE and LIN are mostly 2D but include a range of ground heights.
The space is well covered by both types of devices and sequences.
}%
\label{fig:supp:qualitymaps}%
\end{figure}
We seek to find the set of mapping sequences $\mathcal{M}$ and remaining query sequences $\mathcal{Q} = \mathcal{S}\backslash\mathcal{M}$ that minimize the coverage between map sequences while ensuring that each query is sufficiently covered by the map:
\begin{align}
\begin{split}
\mathcal{M}^* = \argmin_{\mathcal{M}\subset\mathcal{S}} \frac{1}{|\mathcal{M}|} \sum_{i \in\mathcal{M}} C(i,\mathcal{M}\backslash\{i\}) \\
\text{such that}\quad C(i,\mathcal{M}) > \tau\quad \forall i \in \mathcal{Q}\enspace,
\end{split}
\end{align}
where $\tau$ is the minimum query coverage.
We ensure that query sequences are out of coverage for at most $t$ consecutive seconds, where $t$ can be tuned to adjust the difficulty of the localization and generally varies from 1 to 5 seconds.
This problem is combinatorial and has no tractable exact solution.
We solve it approximately with a depth-first search that iteratively adds new images and checks for the feasibility of the solution.
At each step, we consider the query sequences that are the least covisible with the current map.
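The selection itself can be approximated greedily; the sketch below is a simplification of the depth-first search described above (with a hypothetical \texttt{coverage} callback) that grows the map set until every remaining query is covered above $\tau$:

```python
def select_map_sequences(sequences, coverage, tau):
    """Greedy sketch (not the exact depth-first search of the paper):
    repeatedly move the least-covered query into the map until every
    remaining query sequence is covered above tau by the map."""
    map_set, queries = [], list(sequences)
    while queries:
        # query sequence least covisible with the current map
        worst = min(queries, key=lambda q: coverage(q, map_set))
        if coverage(worst, map_set) > tau:
            break  # all remaining queries are sufficiently covered
        map_set.append(worst)
        queries.remove(worst)
    return map_set, queries
```

Unlike the exact search, this greedy variant does not minimize map-map redundancy, but it illustrates the feasibility constraint on query coverage.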
\section{Data distribution}
\label{sec:supp:distribution}
We show in \cref{fig:supp:qualitymaps} (left) the spatial distribution of the registered sequences for the two device types, HoloLens 2 and phone.
We select a subset of all registered sequences for evaluation and split it into mapping and localization groups according to the algorithm described in \cref{sec:map-query-split}.
The spatial distribution of these groups is shown in \cref{fig:supp:qualitymaps} (right).
We enforce that night-time sequences are not included in the map, which is a realistic assumption for crowd-sourced scenarios.
We do not enforce an equal distribution of device types in either group but observe that this occurs naturally.
For the evaluation, mapping images are sampled at intervals of at most 2.5~FPS, 50~cm of distance, and 20\degree of rotation.
This ensures a sufficient covisibility between subsequent frames while reducing the computational cost of creating maps.
The queries are sampled every 1s/1m/20\degree and, for each device type, 1000 poses are selected out of those with sufficiently low uncertainty.
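This spatio-temporal sampling can be sketched as follows (with hypothetical inputs: per-frame timestamps, positions, and a scalar rotation angle; the real implementation operates on full 6-DoF poses):

```python
import numpy as np

def sample_keyframes(stamps, positions, yaws, dt=1.0, dx=1.0, da=20.0):
    """Keep a frame once enough time, distance, or rotation has elapsed
    since the previously kept frame (1 s / 1 m / 20 deg for queries)."""
    keep = [0]
    for k in range(1, len(stamps)):
        last = keep[-1]
        if (stamps[k] - stamps[last] >= dt
                or np.linalg.norm(positions[k] - positions[last]) >= dx
                or abs(yaws[k] - yaws[last]) >= da):
            keep.append(k)
    return keep
```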
\section{Additional evaluation results}
\subsection{Impact of the condition and environment}
We now investigate the impact of different capture conditions (day vs night) and environment (indoor vs outdoor) of the query images.
Query sequences are labeled as day or night based on the time and date of capture.
We manually annotate overhead maps into indoor and outdoor areas.
We report the results for single-image localization of phone images in \cref{tab:supp:cond-env}.
In regular day-time conditions, outdoor areas exhibit distinctive texture and are thus easier to coarsely localize in than texture-less, repetitive indoor areas.
The scene structure is however generally further away from the camera, so optimizing reprojection errors yields less accurate camera poses.
Indoor scenes generally benefit from artificial light and are thus minimally affected by the night-time drop of natural light.
Outdoor scenes receive little artificial light, mostly sparse street lighting, and thus change widely in appearance between day and night.
As a result, the localization performance drops to a larger extent outdoors than indoors.
\begin{table}[t]
\centering
{%
\setlength\tabcolsep{5pt}
\begin{tabular}{cccccc}
\toprule
\multirowcell{2}[-0.1cm]{Condition} & \multicolumn{2}{c}{CAB scene} & \multicolumn{2}{c}{HGE scene} & LIN scene\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
\cmidrule(lr){6-6}
& Indoor & Outdoor & Indoor & Outdoor & Outdoor \\
\midrule
day & 66.5 / 74.7 & 73.9 / 88.1 & 52.7 / 65.9 & 43.0 / 64.3 & 71.2 / 82.5\\
night & 30.3 / 44.8 & 18.8 / 40.6 & 47.9 / 59.4 & 12.1 / 33.6 & 38.6 / 55.6\\
\bottomrule
\end{tabular}
}
\vspace{2mm}
\caption{\textbf{Impact of the condition and environment} on single-image phone localization.
During the day, localizing indoors can be more accurate (10cm threshold) but less robust (1m threshold) than outdoors due to visual aliasing and a lack of texture.
Night-time localization is more challenging outdoors than indoors because of a larger drop of illumination.
}
\label{tab:supp:cond-env}
\end{table}
\subsection{Additional results on sequence localization}
We run a detailed ablation of the sequence localization on an extended set of queries and report our findings below.
\PAR{Ablation:} We ablate the different parts of our proposed sequence localization pipeline on sequences of 20 seconds.
The localization recall at \{$1^\circ, 10$cm\} and \{$5^\circ, 1$m\} can be seen in \cref{tab:multi-frame-ablation} for both HoloLens 2 and Phone queries.
The initial PGO with tracking and absolute constraints already offers a significant boost in performance compared to single-frame localization.
We notice that the re-localization with image retrieval guided by the PGO poses achieves much better results than the first localization -- this suggests that retrieval, rather than feature matching, is the bottleneck.
Next, the second PGO is able to leverage the improved absolute constraints and yields better results.
Finally, the pose refinement optimizing reprojection errors while also taking into account tracking constraints further improves the performance, notably at the tighter threshold.
\begin{table}[t]
\centering
\begin{tabular}{cccccccc}
\toprule
\multirow{2}{*}{Device} & \multirow{2}{*}{Radios} & \multicolumn{6}{c}{Steps} \\
\cmidrule(lr){3-8}
& & Loc. & Init. & PGO1 & Re-loc. & PGO2 & BA\\
\midrule
\multirow{2}{*}{HL2} & \sxmark & 66.0 / 79.9 & 66.1 / 92.5 & 71.8 / 92.4 & 74.2 / 88.0 & 74.9 / 92.5 & {\bf 79.3} / {\bf 92.8} \\
& \scmark & 67.7 / 82.3 & 66.4 / 94.5 & 74.3 / 94.3 & 76.2 / 90.1 & 76.7 / 94.4 & {\bf 81.6} / {\bf 94.9} \\
\midrule
\multirow{2}{*}{Phone} & \sxmark & 54.2 / 65.5 & 52.4 / 88.0 & 62.7 / 87.7 & 61.8 / 77.4 & 66.1 / 88.4 & {\bf 69.0} / {\bf 88.6} \\
& \scmark & 56.7 / 71.5 & 54.1 / {\bf 90.2} & 64.4 / 89.8 & 63.1 / 79.5 & 66.9 / 90.1 & {\bf 71.0} / {\bf 90.2} \\
\bottomrule
\end{tabular}
\vspace{1mm}
\caption{{\bf Ablation of the sequence localization.}
We report recall for the different steps of the sequence localization pipeline for 10s sequences on the CAB location.
The second localization, guided by the poses of the first PGO, drastically improves over the initial localization, especially when no radio signals are used.
The final pose refinement optimizing reprojection errors while also taking into account tracking constraints offers a significant boost for the tighter threshold.}
\label{tab:multi-frame-ablation}
\end{table}
\section{Phone capture application}
We wrote a simple iOS Swift application that saves the data exposed by the ARKit, CoreBluetooth, CoreMotion, and CoreLocation services.
The user can adjust the frame rate prior to capturing.
The interface displays the current input image and the trajectory estimated by ARKit as an AR overlay.
It also displays the amount of free disk space on the device, the recording time, the number of captured frames, and the status of the ARKit tracking and mapping algorithms.
The data storage is optimized such that a single device can capture hours of data without running out of space.
After capture, the data can be inspected on-device and shared over AirDrop or cloud storage.
Screenshots of the user interface are shown in \cref{fig:supp:app}.
\begin{figure}[t]
\centering
\begin{minipage}{0.25\textwidth}
\includegraphics[width=\linewidth]{figures/ios_app/init.jpg}
\end{minipage}%
\hspace{1mm}%
\begin{minipage}{0.25\textwidth}
\includegraphics[width=\linewidth]{figures/ios_app/record.jpg}
\end{minipage}%
\hspace{1mm}%
\begin{minipage}{0.25\textwidth}
\includegraphics[width=\linewidth]{figures/ios_app/result.jpg}
\end{minipage}
\vspace{2mm}
\caption{\textbf{iOS capture application.}
}%
\label{fig:supp:app}%
\vspace{0.5cm}
\end{figure}
\section{Implementation details}
\PAR{Scan-to-scan alignment:}
The pairwise alignment is initialized by matching the $r{=}5$ most similar images and running 3D-3D RANSAC with a threshold $\tau{=}5\text{cm}$.
The ICP refinement searches for correspondences within a $5$cm radius.
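A minimal sketch of this pairwise step is given below, using the Kabsch algorithm inside a 3D-3D RANSAC loop (assuming hypothetical noise-free point correspondences; the actual system establishes correspondences by matching retrieved image pairs first):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point rows of P onto Q."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1, 1, d]) @ U.T  # proper rotation (det = +1)
    return R, cQ - R @ cP

def ransac_3d3d(P, Q, thresh=0.05, iters=200, seed=0):
    """3D-3D RANSAC sketch: sample 3 correspondences, fit with Kabsch,
    count inliers within thresh (5 cm in the paper)."""
    rng = np.random.default_rng(seed)
    best, best_inl = None, -1
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = kabsch(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inl = int((err < thresh).sum())
        if inl > best_inl:
            best, best_inl = (R, t), inl
    return best, best_inl
```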
\PAR{Sequence-to-scan alignment:}
The single-image localization is only performed for keyframes, which are selected every $1$ second, $20$\degree of rotation, or $1$ meter of traveled distance.
In RANSAC, the inlier threshold depends on the detection noise and thus on the image resolution: $3\sigma$ for PnP and $1\sigma$ for GPnP, as camera rigs are better constrained.
Poses with fewer than $50$ inliers are discarded.
In the rigid alignment, inliers are selected for pose errors lower than $\tau_\text{rigid}{=}(2\text{m}, 20\degree)$.
Sequences with fewer than 4 inliers are discarded.
When optimizing the pose graph, we apply the $\text{arctan}$ robust cost function to the absolute pose term, with a scale of $100$ after covariance whitening.
In the bundle adjustment, reprojection errors larger than $5\sigma$px are discarded at initialization.
A robust Huber loss function is applied to the remaining ones with a scale of $2.5$ after covariance whitening.
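For illustration, the two robust costs applied to whitened residuals can be written as follows (a sketch of the loss shapes only, not the exact solver internals):

```python
import numpy as np

def huber(r, delta=2.5):
    """Huber loss on a whitened residual r (scale 2.5 for the BA
    reprojection terms): quadratic near zero, linear in the tails."""
    r = np.abs(r)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

def arctan_cost(r, scale=100.0):
    """Saturating arctan cost on a whitened residual r (scale 100 for the
    PGO absolute-pose terms): approximately quadratic near zero and
    bounded for outliers, so gross mislocalizations cannot dominate."""
    return scale**2 * np.arctan(r**2 / scale**2)
```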
\PAR{Radio transfer:}
As mentioned in the main paper, currently, Apple devices only expose partial radio signals.
Notably, WiFi signals cannot be recovered, Bluetooth beacons are not exposed, and the remaining Bluetooth signals are anonymized.
This makes it impossible to match them to those recovered by other devices (e.g., HoloLens, NavVis).
To show the potential benefit of exposing these radios, we implement a simple radio transfer algorithm from HoloLens 2 devices to phones.
First, we estimate the location of each HoloLens radio detection by linearly interpolating the poses of temporally adjacent frames.
For each phone query, we aggregate all radios within at most 3m in any direction and 1.5m vertically (to avoid cross-floor transfer) with respect to the ground-truth pose.
If the same radio signal is observed multiple times in this radius, we only keep the spatially closest 5 detections.
The final RSSI estimate for each radio signal is then obtained by a distance-based weighted-average of these observations.
Note that this step is done on the raw data, after the alignment.
Thus, we can safely release radios for phone queries without divulging the ground-truth poses.
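The transfer can be sketched as follows (with hypothetical detection tuples of radio id, interpolated 3D position, and RSSI; the inverse-distance weighting is an assumption, as the text only states a distance-based weighted average):

```python
import numpy as np

def transfer_radios(query_pos, detections, r_all=3.0, r_z=1.5, k=5):
    """Radio-transfer sketch: aggregate HoloLens radio detections within
    3 m in any direction and 1.5 m vertically of the query pose, keep the
    5 spatially closest detections per radio id, and return an
    inverse-distance weighted average RSSI per radio id."""
    per_radio = {}
    for rid, pos, rssi in detections:
        d = np.linalg.norm(pos - query_pos)
        if d <= r_all and abs(pos[2] - query_pos[2]) <= r_z:
            per_radio.setdefault(rid, []).append((d, rssi))
    out = {}
    for rid, obs in per_radio.items():
        obs = sorted(obs)[:k]                       # closest k detections
        w = np.array([1.0 / (d + 1e-3) for d, _ in obs])
        out[rid] = float(np.sum(w * [r for _, r in obs]) / w.sum())
    return out
```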
\section{Introduction}
Fluidic systems play a vital role in today's industry and engineering, supporting applications ranging from jet engines and hydraulic actuators to heart valves and bioreactors.
Computational design of a fluidic system that manifests precise functionality and complex geometry remains a substantial challenge. Despite the rapid advent of additive manufacturing, which has enabled the fabrication of intricate flow-driven systems at an unprecedented printing resolution, computationally exploring the design space of even a simple flow ``twister'' (one that converts a laminar input flow into a swirling pattern at its outlet) remains difficult due to the interleaving complexities of flow physics simulation, topological structure exploration, and accurate boundary representation.
We identify two underlying challenges for building a fluidic system design framework. First, an accurate flow simulator needs to enforce both the incompressibility constraint and accurate boundary conditions (i.e., slip and non-slip) within a topologically complicated domain. Especially, when a fluid structure gets narrow, an accurate model to characterize the solid boundary's impacts on the near-boundary flow behavior is essential for evaluating the system's performance and design sensitivities.
Conventional approaches, with their strict adherence to a no-slip boundary condition (a modeling hypothesis rather than a physical principle), can effectively ``clog up'' such narrow fluid pathways and hinder the optimization process from generating new design features.
Second, a capable optimizer is expected to effectively explore the high-dimensional design space of both \textit{shape} and \textit{topology} without being constrained by any parameterization priors.
Traditional shape optimization frameworks, despite their ability to capture the local geometry of the solid-fluid boundary accurately, cannot generate new topological features that differ drastically from the current shapes.
Considering these two aspects, we need to carefully choose the design's discrete representation that can balance its geometric expressiveness (i.e., accurately featuring the local shape) and topological complexities (i.e., freely evolving the global topology).
Traditional fluidic design frameworks, categorized into \emph{field-based} methods and \emph{shape-based} methods according to their design representations, suffer from a number of limitations.
In particular, current field-based approaches lack an accurate boundary representation, and shape-based approaches are limited in topological flexibility.
A field-based approach (e.g., see~\cite{borrvall2003topology}) represents the fluid domain using a density field discretized on a background grid. Akin to the volume-of-fluid (VOF) method~\cite{hirt1981volume}, the density of each cell specifies the fraction between fluid and solid phases occupying the cell's volume. Such a fraction-based representation, like VOF, suffers from inherent ambiguities in reconstructing the accurate geometry of a sharp interface and therefore cannot enforce accurate boundary conditions.
Shape-based approaches, exemplified by implicit level sets~\cite{fedkiw2002level} and explicit parametric shapes~\cite{du2020stokes}, lack flexibility in exploring complex topological changes.
In particular, the inability to handle topological changes such as merging and splitting (for parametric shapes) or adding and removing holes (for level sets) constrains the algorithm's exploration of the design space to a limited scope.
Hybrid methods that combine the merits of both field-based and shape-based representations have been successful in forward simulation problems in computational physics, e.g., the Coupled Level-Set and Volume-of-Fluid (CLSVOF) method~\cite{sussman2000coupled} and the particle level-set method~\cite{enright2002hybrid}, but such a combination remains largely unexplored in the computational design of fluidic systems.
To address these two challenges, we propose a non-parametric topology optimization framework enhanced by accurate boundary treatments to enable large-scale fluidic system designs. The critical challenge we addressed in this work is to devise a geometric representation that can express the phase (solid or fluid), sharp interface (with boundary normals), and anisotropicity (with local flow directions) in a unified geometric representation.
Our method was inspired by the anisotropic material model for elastic simulation \cite{Li2015Anisotropic} and the diffusive imaging model in Magnetic Resonance Imaging (MRI) \cite{Basser1994Tensor}, which use tensor fields to encode the local, anisotropic geometry.
We propose to represent the solid, fluid, and their boundary as an anisotropic tensor field discretized on a background grid, which enables us to accurately handle different boundary types and maintain flexibility in evolving topology.
Based upon this novel representation, we further formulate the differentiable simulation and optimization models in conjunction with our novel block-based incompressibility constraint to explore designs in a high-dimensional parameter space. Compared with the flow optimization literature, our design system tackles topologically complicated flow design problems by expressing their spatially filling, multi-phase, and heterogeneous material features in a continuous and unified fashion.
We validate the efficacy of our approach on multiple fluidic device design problems with as many as two million design parameters, many of which show, for the first time, designs with intricate solid structures and free-slip flow fields in complex fluid domains that were impractical for previous methods.
We summarize the main contributions of our paper as follows:
\begin{itemize}
\item
We propose an anisotropic Stokes flow model, as well as its discretization scheme, numerical solver, and gradient computation, which jointly enable flexible modeling of both free-slip and no-slip boundary conditions.
\item
We propose an approach to incorporate volume-preserving constraints aggregating in rectangular regions that improve the conditioning of our system.
\item
We propose a field-based topology optimization framework for computational optimization of fluidic systems with accurate, flexible solid-fluid boundaries.
\item
We demonstrate the capacity of our framework for obtaining a variety of complex fluidic device designs.
\end{itemize}
\section{Related Work}
\paragraph{Flow optimization}
Beginning with the pioneering work of \citet{borrvall2003topology}, a vast literature has been devoted to the optimization of fluid systems \cite{fluids2020review}. Given a predefined design domain with boundary conditions, a typical optimization objective is to maximize some performance functional of a fluid system (e.g., the power loss of the system) constrained by the physical equations. Similar to a conventional structural optimization problem, the design domain is discretized. The optimization algorithm decides for each element whether it should be fluid or solid to optimize some performance function such as the power loss.
Examples of flow optimization applications include Stokes flow \cite{borrvall2003topology,guest2006topology,aage2008topology,challis2009level}, steady-state flow \cite{zhou2008variational}, weakly compressible flow \cite{evgrafov2006topology}, unsteady flow \cite{deng2012navierstokesTopOpt}, channel flow \cite{gersborg2005topology}, ducted flow \cite{othmer2007implementation}, viscous flow \cite{kontoleontos2013adjoint}, fluid-structure interaction (FSI) \cite{yoon2010topology,casas2017optimization,andreasen2013topology}, fluid-thermal interaction \cite{matsumori2013topology,yaji2015topology}, microfluidics \cite{andreasen2009topology}, and aerodynamics \cite{Jameson95AerodynamicsOpt, maute2004conceptual}, to name a few.
Nevertheless, topology optimization algorithms for exploring functional flow systems remain underdeveloped due to the complexities of both the simulation and the optimization.
In computer graphics, \citet{du2020stokes} developed a differentiable framework to simulate and optimize Stokes flow systems governed by design specifications with different types of boundary conditions. Although this work realized a multitude of design examples, such as the flow averager, gates, mixer, and twister, the generated designs were all limited within the design space spanned by the predefined shape parameters.
\paragraph{Topology optimization}
Topology optimization has demonstrated its efficacy in creating mechanical designs with complex structures and extreme properties in many engineering problems (see \cite{rozvany2009critical,sigmund2013topology,deaton2014survey} for surveys).
Starting from a volumetric domain with uniform material distribution, a topology optimization algorithm iteratively redistributes material to develop a structure that minimizes a design objective (e.g., structural compliance), given the prescribed target volume and boundary conditions.
In computer graphics, a wide range of topology optimization algorithms have been developed to accommodate computational fabrication applications and 3D printing designs, including examples of elastic structures \cite{Liu2018Narrow}, shells \cite{skouras2014designing}, porous materials \cite{wu2017infill}, and microstructures \cite{zhu2017,Panetta:2015,Schumacher:2015}.
Despite their successes in structural optimization, research on topology optimization algorithms for solving flow systems with accurate boundary conditions is scarce, limiting their applications in designing thin and delicate structures in fluidic devices. Finally, we share inspiration from various works from the graphics community on differentiable simulation \cite{hu2018chainqueen, li2022diffcloth, du21diffpd} and shape optimization \cite{spin2017, want2016buoyancyopt} for computational design and control applications.
\paragraph{Anisotropic methods}
Anisotropic methods have been explored in many areas of computer graphics. Examples include meshing \cite{narain2012adaptive}, texturing \cite{mccormack1999feline}, rendering \cite{wang2008modeling}, surface reconstruction \cite{yu2013reconstructing}, and various physics simulations such as cloth \cite{narain2012adaptive}, solids \cite{li2015stable,schreck2020practical}, and fluids \cite{pfaff2010scalable,xiao2020adaptive}. Typically, an anisotropic method encodes the local orientation information either in discrete spatial discretizations, such as anisotropic mesh elements or grid cells, or through continuous tensor representations, such as anisotropic elastic material models or fluid turbulence models. In this paper, we explore anisotropic tensor representations discretized on a uniform grid, mimicking anisotropic continuum-mechanics models and applying them in a new context: representing solid-fluid boundaries in topology optimization.
\section{Method Overview}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.96\textwidth]{figures/figure2-pipeline-small.pdf}
\vspace{-1em}
\caption{An overview of our pipeline: (1) Our system takes as input the specification of a fluidic device design task on a regular grid, including the locations and desired profiles of the Stokes flow at the inlet and outlet; (2) The system then represents the fluidic device using anisotropic materials in each voxel, which we further parametrize using its anisotropic direction (the principal axes of the ellipse in the illustration), viscosity (radius of the ellipse), and impedance for fluid (shown as the background color of the voxel); (3) A numerical differentiable simulator receives the design and solves the Stokes flow; (4) The pipeline then compares the simulated flow at the outlet with the target profile given in the specification and computes a loss function characterizing their discrepancy. The loss is then backpropagated through the numerical simulator to compute its gradients with respect to the anisotropic material parameters. The pipeline runs the method of moving asymptotes (MMA,~\cite{svanberg1987method}), a gradient-based optimizer, to improve the design; (5) The pipeline outputs a final design after post-processing the results from a converged optimization process.}
\label{fig:systempipeline}
\end{figure*}
We present an overview of our method in Fig.~\ref{fig:systempipeline}. The input to our method is the specification of a fluidic device defined as the target inlet and outlet flow profiles. The fluidic device is represented using a regular grid filled with anisotropic materials in each voxel. The materials are parametrized with scalar fields describing their anisotropy, viscosity, and impedance for flow. The material distribution induces a multi-phase fluidic device design whose solid-fluid boundaries can be extracted from cells with highly anisotropic materials. A numerical differentiable simulator then simulates the design to compute its Stokes flow, which is compared with the target outlet flow profile to evaluate its performance. The pipeline computes the gradients of the performance metric with respect to the material parameters by backpropagating them through the numerical differentiable simulator. The pipeline then runs MMA~\cite{svanberg1987method}, the standard gradient-based optimizer in topology optimization, to improve the performance of the design by evolving its anisotropic material distribution. After the optimization converges, a post-processing step extracts the design surface from the resulting design.
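The outer loop of this pipeline can be sketched as follows, with plain projected gradient descent standing in for MMA and with hypothetical \texttt{simulate} and \texttt{grad} callbacks supplied by the differentiable simulator:

```python
import numpy as np

def optimize_design(theta0, simulate, grad, lr=0.1, iters=100):
    """Schematic design loop (projected gradient descent in place of MMA,
    which the paper uses): simulate the flow for the current material
    parameters, step along the backpropagated gradient of the
    outlet-profile loss, and clamp parameters to their box bounds."""
    theta = theta0.copy()
    for _ in range(iters):
        v = simulate(theta)           # differentiable Stokes solve
        theta -= lr * grad(theta, v)  # gradient from backpropagation
        theta = np.clip(theta, 0.0, 1.0)
    return theta
```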
We organize the remainder of our paper as follows. We first describe the governing equations for our Stokes flow model in Sec.~\ref{sec:eq}. Then, we discuss its discretization and the numerical solver in Sec.~\ref{sec:disc}. We next formulate the fluidic system design problem as a numerical optimization problem and present our optimization algorithm in Sec.~\ref{sec:optimization}. Finally, we present applications and evaluation of our method in Sec.~\ref{sec:evalapp} and provide conclusions in Sec.~\ref{sec:conclusion}.
\section{Governing Partial Differential Equations}\label{sec:eq}
This section describes the physical model of the Stokes flow problem used in this paper. While Stokes equations have been extensively studied for decades, we revisit this problem with a focus on developing a novel anisotropic constitutive model that jointly represents different phases (solid and fluid) and boundary conditions (no-slip and free-slip) in a unified manner. Our constitutive model provides a uniform, grid-friendly parametrization of the design space of fluidic devices without sacrificing the flexibility and accuracy in solid-fluid boundary conditions.
\subsection{Isotropic Stokes Equations}
\paragraph{Quasi-incompressible Stokes flow} We briefly review the quasi-incompressible Stokes flow model described in~\citet{du2020stokes}. Consider a problem domain $\Omega \subset\R^d$ ($d=2$ or $3$). The velocity field $\vvec: \Omega\rightarrow \R^d$ of the quasi-incompressible Stokes flow is given by the following energy minimization problem:
\begin{align}
\min_{\vvec} & \quad \int_\Omega \mu\|\nabla \vvec\|^2_F d\x + \int_{\Omega} \lambda (\nabla\cdot\vvec)^2 d\x, \label{eq:stokes_energy} \\
s.t. & \quad \vvec(\x) = \vvec_D(\x), \quad\forall \x\in\partial\Omega_D. \label{eq:stokes_dirichlet} \\
& \quad \vvec(\x)\cdot \n(\x) = 0, \quad \forall \x\in\partial\Omega_F. \label{eq:stokes_free_slip}
\end{align}
Note that Eqn. (\ref{eq:stokes_energy}) excludes the external-force energy defined in~\citet{du2020stokes} because we assume no external forces (e.g., gravity) in our design problem. Here, the notation $\|\cdot\|_F$ is the Frobenius norm of a matrix, and $\mu\in\R^+$ and $\lambda\in\R^+$ are two scalar parameters denoting the flow's dynamic viscosity and incompressibility, respectively. In particular, $\lambda\rightarrow+\infty$ implies perfectly incompressible Stokes flow. The problem considers the following boundary conditions defined on a partition of the domain boundary $\partial\Omega = \partial\Omega_D \cup \partial\Omega_F \cup \partial\Omega_O$: The \emph{Dirichlet boundary} condition specifies a desired velocity profile $\vvec_D$ on the boundary $\partial\Omega_D$, which is either from prescribed inlet flow profiles or from \emph{no-slip boundary} conditions ($\vvec_D=\bm{0}$); the \emph{free-slip boundary} condition defined on $\partial\Omega_F$ requires the velocity's projection along the normal direction $\n$ be zero; finally, the \emph{open boundary} on $\partial\Omega_O$ imposes no explicit constraints on the velocity and automatically satisfies zero-traction conditions once the energy in Eqn. (\ref{eq:stokes_energy}) is minimized, which is suitable for modeling free flows at an outlet of a fluidic system.
Although we do not consider external forces or non-zero-traction boundary conditions in our problem, they can be accommodated in a similar way described in~\citet{du2020stokes}. We refer readers to~\citet{du2020stokes} for a comprehensive discussion on the derivation of this quasi-incompressible Stokes flow model and its numerical benefits in fluidic device design problems. While their paper presented a computational design pipeline for fluidic devices and demonstrated examples with moderately sophisticated solid structures, the method constrained the designs in the space of parametric shapes, which inhibits topologically different designs from emerging.
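For intuition (a standard variational computation, not spelled out in this form in~\citet{du2020stokes}), the first-order optimality condition of the energy in Eqn. (\ref{eq:stokes_energy}) at interior points reads
\begin{equation}
-\mu\,\Delta\vvec - \lambda\,\nabla(\nabla\cdot\vvec) = \bm{0} \quad \text{in } \Omega,
\end{equation}
which, after identifying the pressure as $p := -\lambda\,\nabla\cdot\vvec$, recovers the Stokes momentum equation $-\mu\Delta\vvec + \nabla p = \bm{0}$; the limit $\lambda\rightarrow+\infty$ then enforces the incompressibility condition $\nabla\cdot\vvec = 0$.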
\subsection{Anisotropic Stokes Equations}
The challenges in previous methods motivate us to develop a new geometric representation that simultaneously accommodates expressive topology, flexible boundary conditions, and accurate simulation in Stokes-flow fluidic systems. Noting that fluid near solid-fluid boundaries satisfies different physical constraints in the normal and tangent directions, we propose an anisotropic material model that uniformly represents solid, fluid, and solid-fluid boundaries, which we describe in detail below.
\paragraph{Anisotropic, quasi-incompressible Stokes flow} We propose the following energy minimization problem that modifies the previous isotropic Stokes flow:
\begin{align}
\min_{\vvec} & \quad E_{m,\mu}[\vvec] + E_{m,\lambda}[\vvec] + E_f[\vvec], \label{eq:aniso_stokes} \\
s.t. & \quad \vvec(\x) = \vvec_D(\x), \quad \forall \x\in\partial\B_D. \label{eq:aniso_stokes_dirichlet}
\end{align}
where each energy component is defined as follows:
\begin{align}
E_{m,\mu}[\vvec] := &\int_{\B}\mu \|\nabla\vvec \K_m^{\frac{1}{2}}(\x)\|_F^2 d\x, \label{eq:aniso_stokes_em1} \\
E_{m,\lambda}[\vvec] := & \int_{\B}\lambda(\x) (\nabla\cdot \vvec)^2 d\x, \label{eq:aniso_stokes_em2}\\
E_f[\vvec] := &\int_{\B} \|\K_f^{\frac{1}{2}}(\x)\vvec\|_2^2 d\x, \label{eq:aniso_stokes_friction}
\end{align}
where we use the subscripts $m$ and $f$ in the energy names to indicate that they model the material and the frictional effects, respectively. Note that this formulation incorporates the standard, isotropic Stokes model as a special case, namely by setting $\K_m=\mathbf{I}$ and $\K_f=\bm{0}$. The new energy minimization problem introduces a few critical modifications to the original problem in Eqns. (\ref{eq:stokes_energy}-\ref{eq:stokes_free_slip}): First, we change the problem domain from $\Omega$ to $\B\subset \R^d$, which we assume to be an axis-aligned, sufficiently large box that encloses the fluidic region $\Omega$. Second, we introduce two symmetric positive semi-definite matrix fields $\K_m,\K_f:\B\rightarrow \SPD^d_+$ and replace the scalar parameter $\lambda$ with a spatially varying field $\lambda:\B\rightarrow \R^+$. These three new fields define a material model that enables anisotropic responses to velocities in different directions.
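To make the roles of $\K_m$, $\K_f$, and $\lambda$ concrete, the following minimal numpy sketch (with illustrative names of our own; not part of any implementation) evaluates the pointwise integrands of Eqns. (\ref{eq:aniso_stokes_em1}-\ref{eq:aniso_stokes_friction}) and checks that, with $\K_m=\I$ and $\K_f=\bm{0}$, the density reduces to the isotropic quasi-incompressible Stokes density:

```python
import numpy as np

def sqrtm_spd(K):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(K)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def energy_density(grad_v, v, Km, Kf, mu, lam):
    """Pointwise integrand of E_{m,mu} + E_{m,lambda} + E_f at one material point."""
    e_mu = mu * np.linalg.norm(grad_v @ sqrtm_spd(Km), 'fro') ** 2
    e_lam = lam * np.trace(grad_v) ** 2   # div(v) is the trace of the velocity gradient
    e_f = np.linalg.norm(sqrtm_spd(Kf) @ v) ** 2
    return e_mu + e_lam + e_f

# Km = I, Kf = 0 recovers the isotropic density mu*||grad v||_F^2 + lam*(div v)^2
gv, vel = np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([0.5, -1.0])
iso = 2.0 * np.linalg.norm(gv, 'fro') ** 2 + 3.0 * np.trace(gv) ** 2
assert np.isclose(energy_density(gv, vel, np.eye(2), np.zeros((2, 2)), 2.0, 3.0), iso)
```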
Finally, with the domain changing from $\Omega$ to a box-shaped $\B$, we adjust the boundary conditions as follows: We consider a partition of the boundary $\partial \B$ into $\partial \B = \partial \B_D \cup \partial \B_O$, where $\partial \B_D$ and $\partial \B_O$ represent the locations of the Dirichlet boundary and the open boundary conditions, respectively. The Dirichlet boundary $\partial \B_D$ now consists of the inlet of the fluidic system, where we enforce a prescribed flow profile, and the border of the solid phase, on which we directly assign zero velocities. The open boundary $\partial \B_O$ still models a zero-traction, free-flow outlet as before.
The new formulation of boundary conditions does not mean we exclude no-slip or free-slip solid-fluid boundary conditions in our problem, however. In fact, solid-fluid boundaries are now absorbed into the interior of $\B$ and will be represented by a careful choice of $\K_m$, $\K_f$, and $\lambda$. We illustrate the new Stokes flow model in Fig.~\ref{fig:anisotropic_material}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/figure3-AnisotropicStokes.pdf}
\vspace{-2em}
\caption{An illustration of the anisotropic material model in the quasi-incompressible Stokes flow. We selected three representative material points from the solid phase (orange), the fluid phase (green), and the free-slip solid-fluid boundary (yellow). We associate each material point with an ellipse visualization of $\K_f$ such that the material inhibits flow along its minor principal axes (small radii). For example, a solid material point forces flow along all directions to be zero and is therefore associated with a small circle. Similarly, a material point on the free-slip boundary impedes flow only in the normal direction, so its $\K_f$ is a highly eccentric ellipse aligned with the tangent and normal directions of the boundary.}
\label{fig:anisotropic_material}
\end{figure}
The new energy minimization problem uses a single material model characterized by spatially-varying $\K_m$, $\K_f$, and $\lambda$ in the new problem domain $\B$. We stress that this material model design is not arbitrary but inspired by strong physics intuition. Below, we will discuss the subtleties in these three parameters by demonstrating their capability of expressing different phases (solid and fluid) and various boundary types (no-slip and free-slip). Concretely speaking, we will show that by properly setting them everywhere in $\B$, we can draw an analogy between the two physical models described in Eqns. (\ref{eq:stokes_energy}-\ref{eq:stokes_free_slip}) and Eqns. (\ref{eq:aniso_stokes}-\ref{eq:aniso_stokes_dirichlet}).
\paragraph{Fluid-phase material} For any point $\x$ in the interior of the fluid phase in Eqns. (\ref{eq:stokes_energy}-\ref{eq:stokes_free_slip}), i.e., $\x$ belongs to the interior of $\Omega$, we choose $\K_m=\I$, $\K_f=\bm{0}$, and $\lambda=\lambda_0$ where $\lambda_0$ indicates the scalar parameter used in the original quasi-incompressible Stokes problem (Eqn. (\ref{eq:stokes_energy})). This way, the energy in Eqns. (\ref{eq:aniso_stokes_em1}-\ref{eq:aniso_stokes_em2}) becomes identical to Eqn. (\ref{eq:stokes_energy}), confirming that it preserves the physics model of quasi-incompressible Stokes flow in the fluid phase.
\paragraph{Solid-phase material} Similarly, for any point $\x$ in the interior of the solid phase, we set $\K_f=k_f\I$ where $k_f\rightarrow+\infty$. According to the energy in Eqn. (\ref{eq:aniso_stokes_friction}), this will force the velocity $\vvec$ at $\x$ to be $\bm{0}$ just as expected. As a result, $\vvec$ will be an all-zero field in the interior of the solid phase, leading to $E_{m,\mu}=E_{m,\lambda}=0$ regardless of the choice of $\K_m$ and $\lambda$. We suggest $\K_m=\I$ and $\lambda=\lambda_0$ in this case, modeling the solid phase as an isotropic, quasi-incompressible material that impedes fluid.
\paragraph{No-slip-boundary material} It remains to show how a proper combination of $\K_m$, $\K_f$, and $\lambda$ can represent solid-fluid boundary conditions inside the domain $\B$. We consider two types of solid-fluid boundaries in this work: no-slip and free-slip. Note that a no-slip solid-fluid boundary simply states the flow velocity near the boundary should be zero, so we can treat it the same way as we define the solid material above, i.e., $\K_f=k_f\I$ with $k_f\rightarrow+\infty$, $\K_m=\I$, and $\lambda=\lambda_0$.
\paragraph{Free-slip-boundary material} The last and most challenging case is to model free-slip boundary conditions (Eqn. (\ref{eq:stokes_free_slip})) with a proper choice of ($\K_m$, $\K_f$, $\lambda$). For brevity, we will present our results in 3D only. Consider a point $\x$ on a free-slip solid-fluid boundary, and let $\n$ be its unit normal vector. We augment $\n$ with two unit vectors $\tvec_1, \tvec_{2}$ orthogonal to $\n$ so that the matrix $\Rmat:=(\n, \tvec_1,\tvec_2)$ defines an orthonormal basis in $\R^3$.
To derive a proper $\K_m$ for free-slip boundaries, we recall that the isotropic energy $\|\nabla\vvec\|_F^2$ in the original Stokes flow model is the variational form of $\Delta \vvec$, the Laplacian of the velocity field component-by-component, in the PDE form of Stokes flow~\cite{borrvall2003topology,du2020stokes}. Intuitively, this states that Stokes flow creates a component-by-component as-harmonic-as-possible velocity field, subject to the (quasi-)incompressibility constraint.
We further point out that both the Frobenius norm and the Laplacian are rotationally invariant, allowing us to change the coordinate systems freely when computing $\nabla\vvec$. Therefore, we can consider computing $\nabla\vvec$ in a local frame spanned by the normal and tangent directions at $\x$, i.e., the columns of $\Rmat$:
\begin{align}
\vvec_{\Rmat} := & \Rmat^\top \vvec, \\
\x_{\Rmat} := & \Rmat^\top \x, \\
\nabla_{\x_{\Rmat}} \vvec_{\Rmat} = & \Rmat^\top \nabla \vvec \Rmat.
\end{align}
Here, the subscript $(\cdot)_{\Rmat}$ means the quantity is defined in the local frame spanned by $\Rmat$. For example, the first row in $\nabla_{\x_{\Rmat}} \vvec_{\Rmat}$ represents the spatial gradient of the normal flow magnitude.
It is now straightforward to see how the physical intuition behind free-slip boundaries can motivate the definition of $\K_m$: Essentially, free-slip boundaries retain the physical property of Stokes flow along the tangent directions and dismiss any spatial gradients along the normal direction. From the perspective of the Laplacian operator, this means directly dropping the second-order derivative along the normal direction and requiring the flow to be harmonically smooth only along the tangent directions. Mapping it back to the variational form, we see it is equivalent to zeroing out the column in $\nabla_{\x_{\Rmat}} \vvec_{\Rmat}$ that corresponds to the normal direction, leading to the following energy density to be integrated in $E_{m, \mu}$:
\begin{align}
\Psi_{m,\mu} :=& \mu \|\nabla_{\x_{\Rmat}} \vvec_{\Rmat} \bm{\Lambda}(0, 1, 1)\|_F^2 \\
=& \mu \|\Rmat^\top (\nabla \vvec) \Rmat \bm{\Lambda}(0, 1, 1) \|_F^2 \\
=& \mu \|(\nabla \vvec) (\bm{0}, \tvec_1, \tvec_2) \|_F^2 \\
=& \mu \textrm{trace}((\nabla\vvec) (\I - \n \n^\top) (\nabla\vvec)^\top) \\
=& \mu \|\nabla \vvec (\I - \n\n^\top)^{\frac{1}{2}} \|_F^2 \label{eq:psi_free_slip}
\end{align}
where $\bm{\Lambda}$ constructs a diagonal matrix from its input. Comparing Eqn. (\ref{eq:psi_free_slip}) with Eqn. (\ref{eq:aniso_stokes_em1}), we see $\K_m$ should be defined as follows:
\begin{align}
\K_m = \I - \n\n^\top. \label{eq:Km_free_slip}
\end{align}
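As a sanity check, the chain of equalities leading to Eqn. (\ref{eq:psi_free_slip}) can be verified numerically. The following self-contained numpy sketch (variable names are our own) evaluates all four expressions for a random velocity gradient and a random unit normal, exploiting the fact that $\I - \n\n^\top$ is an orthogonal projection and therefore equals its own square root:

```python
import numpy as np

rng = np.random.default_rng(0)
grad_v = rng.standard_normal((3, 3))      # arbitrary velocity gradient at x
n = rng.standard_normal(3)
n /= np.linalg.norm(n)                    # unit boundary normal
# complete n into an orthonormal frame R = (n, t1, t2) via QR factorization
R, _ = np.linalg.qr(np.column_stack([n, rng.standard_normal((3, 2))]))
t1, t2 = R[:, 1], R[:, 2]                 # tangent directions orthogonal to n

Lam = np.diag([0.0, 1.0, 1.0])            # zero out the normal column
psi1 = np.linalg.norm(R.T @ grad_v @ R @ Lam, 'fro') ** 2
psi2 = np.linalg.norm(grad_v @ np.column_stack([np.zeros(3), t1, t2]), 'fro') ** 2
Km = np.eye(3) - np.outer(n, n)           # Eqn. (Km for free-slip boundaries)
psi3 = np.trace(grad_v @ Km @ grad_v.T)
psi4 = np.linalg.norm(grad_v @ Km, 'fro') ** 2  # Km is a projection: Km^{1/2} = Km
assert np.allclose([psi1, psi2, psi3], psi4)
```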
Similarly, we propose the following $\K_f$ to impede the normal flow in $E_f$:
\begin{align}
\K_f = \Rmat \bm{\Lambda}(k_f, 0, 0) \Rmat^\top = k_f \n \n^\top,
\end{align}
where $k_f\rightarrow+\infty$. Plugging this definition into $E_f$ in Eqn. (\ref{eq:aniso_stokes_friction}) will confirm that the proposed $\K_f$ leads to the expected behavior: It first converts $\vvec$ to a local frame spanned by $\Rmat$ then forces the normal component of the flow to be zero and keeps the tangent flow intact.
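This behavior is easy to confirm directly: since $\n\n^\top$ is a rank-one orthogonal projection, $\K_f^{1/2}=\sqrt{k_f}\,\n\n^\top$, and the friction energy density collapses to $k_f(\n^\top\vvec)^2$, penalizing only the normal flow component. A short numpy check (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n = rng.standard_normal(3)
n /= np.linalg.norm(n)                    # unit boundary normal
v = rng.standard_normal(3)                # arbitrary flow velocity

kf = 1e5
Kf = kf * np.outer(n, n)
# n n^T is a rank-1 orthogonal projection, hence Kf^{1/2} = sqrt(kf) * n n^T
Kf_sqrt = np.sqrt(kf) * np.outer(n, n)
assert np.allclose(Kf_sqrt @ Kf_sqrt, Kf)

# the friction energy density penalizes only the normal component of v
assert np.isclose(np.linalg.norm(Kf_sqrt @ v) ** 2, kf * np.dot(n, v) ** 2)
```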
Finally, regarding the choice of $\lambda$, we set its value to 0 so that $E_{m,\lambda}$ is ignored for points on free-slip boundaries. We justify this decision from two perspectives: First, a positive $\lambda$ in this case would attempt to preserve fluidic volume at free-slip boundaries, which typically means enforcing zero divergence in a neighborhood of the boundary after discretization. For example, it may require that a mixed cell's solid and fluid velocities be jointly divergence-free even though they come from different phases, a physically problematic constraint on the discretized problem. Such constraints typically lead to conditioning issues in the numerical system and call for a more careful, subcell-accurate treatment of these mixed cells. Second, and more importantly, the real role that incompressibility plays near the boundary is to prevent fluid from leaking across it, which is automatically achieved as long as we disallow normal flows crossing the boundary and ensure the interior of the fluid phase is properly incompressible. Both reasons imply that it is unnecessary to use a positive $\lambda$ to drive the (ill-defined) divergence at free-slip boundary locations to zero, hence our decision to set $\lambda = 0$.
\paragraph{Summary} To conclude, we have presented an anisotropic material model which provides a uniform representation of different phases and boundary conditions encountered in the quasi-incompressible Stokes flow problem. We summarize the anisotropic material parameters $\K_m$, $\K_f$, and $\lambda$ for all cases in Table~\ref{tab:material}.
\begin{table}[htb]
\centering
\caption{We summarize the anisotropic material parameters for modeling different phases and boundary conditions in quasi-incompressible Stokes problem: $\lambda_0\in\R^+$ represents a predefined scalar parameter controlling the incompressibility in the quasi-incompressible Stokes flow; $k_f\in\R^+$ is a scalar parameter that determines the material's impedance for fluid flow, with $k_f\rightarrow+\infty$ creating the solid phase; $\n\in\R^d$ is the unit normal vector on the solid-fluid boundary that decides the material's anisotropic responses to normal and tangent flows.}
\begin{tabular}{c|cccc}
\toprule
& Fluid & Solid & No-slip boundary & Free-slip boundary \\
\midrule
$\K_m$ & $\I$ & $\I$ & $\I$ & $\I - \n \n^\top$\\
$\K_f$ & $\bm{0}$ & $k_f\I$ & $k_f\I$ & $k_f\n\n^\top$\\
$\lambda$ & $\lambda_0$ & $\lambda_0$ & $\lambda_0$ & 0 \\
\bottomrule
\end{tabular}
\label{tab:material}
\end{table}
\section{Numerical Simulation}\label{sec:disc}
This section outlines the discretization scheme and the numerical solver for the continuous Stokes flow model described in the previous section. The Stokes flow system, described in Section \ref{sec:eq}, is modeled discretely over the entire domain and parameterized to represent fluid, solid, and solid-fluid interface regions of the domain. Our discretization enables easy specification of boundary conditions, allows seamless application of additional design constraints over the domain, such as volume fraction, and accommodates a highly flexible parameter space that helps optimize complex designs of fluid-solid interfaces. We use a uniform Cartesian lattice to discretize domain descriptors and state variables. Fluid velocities $\vvec$ are stored at grid nodes and bilinearly (2D) or trilinearly (3D) interpolated, while the design parameters $\K_m$, $\K_f$, and $\lambda$ are treated as constant on each grid cell (as we will see, they are stored indirectly, through other parameters that are ultimately stored per cell).
We design a solver that computes the flow for a given parameter set using a symmetric positive-definite (SPD) stiffness matrix. The solver also incorporates additional constraints, such as block divergence, which satisfies the flow-divergence constraint in an aggregate fashion.
\subsection{Anisotropic Material Parametrization}
Our anisotropic material is characterized by SPD matrix fields $\K_m$ and $\K_f$ and a scalar field $\lambda$ over the whole domain. From Table~\ref{tab:material}, we notice that these fields can be induced from the solid/fluid material distributions and boundary normals, which have fewer degrees of freedom than the full SPD matrices. This inspires us to reparametrize $\K_m$, $\K_f$, and $\lambda$ with three new fields: the fluidity $\rho:\B\rightarrow[0, 1]$, which assigns 0 to pure solid and 1 to pure fluid; the isotropy $\epsilon:\B\rightarrow[0, 1]$, where a larger $\epsilon$ means a more isotropic material; and the anisotropic orientation $\bm{\alpha}: \B\rightarrow\R^{d-1}$, which is a field of rotation angles in 2D and spherical coordinates in 3D. Furthermore, we compute a unit normal field $\n:\B\rightarrow \R^d$ induced from $\bm{\alpha}$.
\paragraph{Constructing $\K_m$} We define the material matrix field $\K_m$ as follows:
\begin{align}
\K_m = \I - (1 - \epsilon)\rho \n \n^\top.
\end{align}
Therefore, $\K_m\approx \I$ whenever $\epsilon\approx1$ (isotropic material) or $\rho\approx 0$ (solid phase), and $\K_m$ becomes highly anisotropic only if $\epsilon\approx 0$ and $\rho\approx 1$, i.e., anisotropic fluid that we use to represent free-slip boundaries.
\paragraph{Constructing $\K_f$} We define $k_f$ with the following nonlinear mapping function borrowed from previous work~\cite{borrvall2003topology}:
\begin{align}\label{eq:interp_func_kf}
k_f(\rho) = {k_f}_{\max} + ({k_f}_{\min} - {k_f}_{\max})\rho \frac{1+q}{\rho+q},
\end{align}
where ${k_{f}}_{\min}\approx0$ and ${k_f}_{\max}=10^5$ indicate the range of $k_f$, and $q=0.1$ is a hyperparameter that controls the sharpness of the mapping: a smaller $q$ generates a more binary mapping, with the output $k_f$ concentrating on the bound values. Intuitively, $k_f(\rho)$ is a monotonically decreasing function that maps small $\rho$ (solid phase) to ${k_f}_{\max}$ and large $\rho$ (fluid phase) to ${k_f}_{\min}$. We then define $\K_f$ as follows:
\begin{align}
\K_f = k_f(\rho)\I + (k_f(\epsilon\rho) - k_f(\rho))\n\n^\top.
\end{align}
We can see such a definition matches what Table~\ref{tab:material} suggests for each type of material: For fluid phase, which satisfies $\epsilon\approx\rho\approx1$, we have $k_f(\epsilon\rho)\approx k_f(\rho)\approx 0$, and therefore $\K_f\approx \bm{0}$; for solid phases and no-slip boundaries, we have $\epsilon\approx 1$ but $\rho\approx0$, which leads to a large $k_f(\rho)$ and $\K_f\approx {k_f}_{\max}\I$; finally, for free-slip boundaries, we model them as anisotropic fluid with $\epsilon\approx0$ and $\rho\approx 1$, which means $k_f(\epsilon\rho) \gg k_f(\rho)\approx0$ in the definition, and it follows that $\K_f\approx {k_f}_{\max}\n\n^\top$.
\paragraph{Constructing $\lambda$} Finally, we construct the $\lambda$ field as follows:
\begin{align}
\lambda = \lambda_{\min} + [1 - (1 - \epsilon)\rho]^p \lambda_{\max},
\end{align}
where $\lambda_{\min}=0.1$ and $\lambda_{\max}=10^3$ set the range of $\lambda$, and $p=12$ is a power index that pushes $\lambda$ towards a binary choice between $\lambda_{\min}$ and $\lambda_{\min} + \lambda_{\max}$. Similar to $\K_m$ defined above, this definition of $\lambda$ ensures that $\lambda \approx \lambda_{\min}$ only if $\epsilon\approx0$ and $\rho\approx1$, i.e., near free-slip boundaries.
\paragraph{Summary} In this manner, the anisotropic material distribution in the entire domain is parameterized by three fields $\rho$, $\epsilon$, and $\bm{\alpha}$. Comparing the results above with Table~\ref{tab:material}, we can see that these new fields implement the intended properties of the anisotropic material in fluid, solid, and flexible solid-fluid boundaries.
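The following numpy sketch (our own function names; ${k_f}_{\min}$ set exactly to 0 for simplicity) implements this reparametrization and checks the three regimes against the targets listed in Table~\ref{tab:material}:

```python
import numpy as np

KF_MIN, KF_MAX, Q = 0.0, 1e5, 0.1       # range and sharpness of the kf mapping
LAM_MIN, LAM_MAX, P = 0.1, 1e3, 12      # range and power index of the lambda mapping

def kf(rho):
    """Monotonically decreasing mapping from fluidity rho to friction strength."""
    return KF_MAX + (KF_MIN - KF_MAX) * rho * (1 + Q) / (rho + Q)

def material(rho, eps, n):
    """Induce (Km, Kf, lambda) from fluidity rho, isotropy eps, unit normal n."""
    Km = np.eye(len(n)) - (1 - eps) * rho * np.outer(n, n)
    Kf = kf(rho) * np.eye(len(n)) + (kf(eps * rho) - kf(rho)) * np.outer(n, n)
    lam = LAM_MIN + (1 - (1 - eps) * rho) ** P * LAM_MAX
    return Km, Kf, lam

n = np.array([1.0, 0.0])
# fluid (rho=1, eps=1): Km = I, Kf = 0, lambda large
Km, Kf, lam = material(1.0, 1.0, n)
assert np.allclose(Km, np.eye(2)) and np.allclose(Kf, 0.0) and lam > LAM_MAX
# solid / no-slip boundary (rho=0, eps=1): Kf = kf_max * I
Km, Kf, lam = material(0.0, 1.0, n)
assert np.allclose(Kf, KF_MAX * np.eye(2))
# free-slip boundary (rho=1, eps=0): Km = I - n n^T, Kf = kf_max n n^T, lambda small
Km, Kf, lam = material(1.0, 0.0, n)
assert np.allclose(Km, np.eye(2) - np.outer(n, n))
assert np.allclose(Kf, KF_MAX * np.outer(n, n))
assert np.isclose(lam, LAM_MIN)
```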
\subsection{Discretization Scheme}
We now describe our discretization scheme for solving the Stokes flow problem in Sec.~\ref{sec:eq}. We solve the energy minimization problem defined by Eqns. (\ref{eq:aniso_stokes}-\ref{eq:aniso_stokes_dirichlet}) on a regular grid of $N^d$ cells and store all physical quantities at the center of each cell except for flow velocities, which we store at grid nodes. We first describe how we discretize the material parameters. Next, we discretize the variational form and introduce block divergence, a novel numerical treatment of the (quasi-)incompressibility constraint that improves the condition number of the numerical system to be solved.
\paragraph{Discretizing material parameters} We discretize the continuous fields $\rho$, $\epsilon$, and $\bm{\alpha}$ on the regular grid by storing their values at each cell center. For brevity, we use $\bm{\theta}$ from now on to refer to the set of discretized material parameters $\rho$, $\epsilon$, and $\bm{\alpha}$:
\begin{align}
\bm{\theta} := \{\rho, \epsilon, \bm{\alpha} \}.
\end{align}
The induced fields $\K_m$, $\K_f$, and $\lambda$ are also computed and stored at these cell centers based on the mapping described previously.
These parameters are treated as constant throughout the spatial extent of each cell; the quadrature rules discussed next, which need to access these parameters at quadrature-point locations throughout a cell, simply use the constant value assigned to that cell.
\paragraph{Discretizing fluid velocity} We discretize and store the fluid velocity field $\vvec$ on the nodes of each cell in the grid. Then, we compute the velocity and its spatial gradients at any point with bi/trilinear interpolation of the nodal values.
\paragraph{Discretizing variational energy terms} With the material parameters and fluid velocity fully discretized, we can now discretize the variational form in Eqns. (\ref{eq:aniso_stokes}-\ref{eq:aniso_stokes_dirichlet}). Here, we will strategically use a different numerical integration scheme for each of the energy terms in Eqns. (\ref{eq:aniso_stokes_em1}-\ref{eq:aniso_stokes_friction}) as detailed next:
For the term $E_{m,\mu}$ in Eqn. (\ref{eq:aniso_stokes_em1}) -- we can label this the \emph{anisotropic Laplace} term, recognizing that the isotropic case $\K_m=\mathbf{I}$ would yield the Laplace term in the PDE version of Stokes flow, as in \citet{du2020stokes} -- we use the Gauss--Legendre quadrature rule to integrate this energy numerically within each grid cell. We employ four quadrature points in 2D and eight quadrature points in 3D, in a fashion directly analogous to prior work \cite{du2020stokes}. As mentioned, we use the single value of $\K_m$ associated with the cell at all quadrature points.
The term $E_{m,\lambda}$ in Eqn. (\ref{eq:aniso_stokes_em2}) can be labeled the \emph{cell-wise divergence term}, and essentially seeks to enforce volume preservation at each individual cell where it is applied with $\lambda\ne 0$. For a given grid cell $\mathcal{C}$, we approximate this term with the following expression:
$$
E_{m,\lambda}^\mathcal{C} \approx \lambda_\mathcal{C}W_\mathcal{C}\left[\frac{1}{W_\mathcal{C}}\int_\mathcal{C}(\nabla\cdot\vvec)dx\right]^2
$$
where $W_\mathcal{C}$ is the volume ($h^2$ in 2D, $h^3$ in 3D) of the cell $\mathcal{C}$. Intuitively, this approximation suggests that instead of penalizing a non-zero divergence at each individual interior point of the cell, we only seek to drive the \emph{net flux}
$\mbox{Flux}(\mathcal{C})=\int_\mathcal{C}(\nabla\cdot\vvec)dx$ to zero, which treats the entire cell in an aggregate fashion (allowing a non-zero divergence at interior locations, as long as the aggregate net flux is zero). This is an important modification to the numerical discretization that circumvents locking behaviors for highly incompressible materials, and it has been shown effective and compatible with mixed FEM formulations for such materials \cite{patterson2012simulation}. What is more, since the integrand is linear in the nodal velocities (when using bilinear/trilinear interpolation), it is possible to obtain an analytic expression for the flux, namely in 2D (if $\vvec=(u,v)$ are the individual scalar velocity components):
\begin{equation}
\mbox{Flux}(\mathcal{C})=h\left(\frac{u_{10}+u_{11}}{2}+\frac{v_{01}+v_{11}}{2}-\frac{u_{00}+u_{01}}{2}-\frac{v_{00}+v_{10}}{2}\right)
\label{eq:cell_flux}
\end{equation}
where the subscripts denote each of the four cell vertices. The four terms of this expression can be easily and intuitively identified as, respectively, the (average) signed fluxes through the right, top, left, and bottom faces of the cell. We would arrive at exactly the same analytic expression if we simply used a 4-point Gauss quadrature for the flux integral, since the linear integrand would be integrated exactly. An exactly analogous expression can be derived in 3D; the only differences are that the average velocity of each face is computed by averaging four nodal velocities, and a scale factor of $h^2$ replaces $h$ in Eqn. (\ref{eq:cell_flux}) to account for the area of each face of the 3D cell.
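The agreement between the analytic face-average expression and Gauss quadrature of the divergence of the bilinear interpolant can be checked directly; the following numpy sketch (our own variable names, random nodal velocities) performs this comparison for a single 2D cell:

```python
import numpy as np

h = 0.5
rng = np.random.default_rng(2)
# nodal velocity components at the four corners, keyed by the (ix, iy) subscripts
u = {k: rng.standard_normal() for k in ['00', '10', '01', '11']}
v = {k: rng.standard_normal() for k in ['00', '10', '01', '11']}

def div(xi, eta):
    """Divergence of the bilinear interpolant at local coordinates (xi, eta)."""
    du_dx = ((u['10'] - u['00']) * (1 - eta) + (u['11'] - u['01']) * eta) / h
    dv_dy = ((v['01'] - v['00']) * (1 - xi) + (v['11'] - v['10']) * xi) / h
    return du_dx + dv_dy

# 2x2 Gauss-Legendre quadrature on the unit square: points (1 +/- 1/sqrt(3))/2
g = [(1 - 1 / np.sqrt(3)) / 2, (1 + 1 / np.sqrt(3)) / 2]
flux_quad = h * h * sum(0.25 * div(xi, eta) for xi in g for eta in g)

# analytic face-average flux, Eqn. (cell flux)
flux_analytic = h * ((u['10'] + u['11']) / 2 + (v['01'] + v['11']) / 2
                     - (u['00'] + u['01']) / 2 - (v['00'] + v['10']) / 2)
assert np.isclose(flux_quad, flux_analytic)
```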
Finally, for the term $E_f$ in Eqn. (\ref{eq:aniso_stokes_friction}) we employ a quadrature scheme that uses the cell vertices themselves as quadrature points, namely:
$$
E_f^\mathcal{C} \approx \frac{W_\mathcal{C}}{2^d}\sum_{I}
\|\K_f^{\frac{1}{2}}(\mathcal{C})\vvec_I\|_2^2
$$
where the index $I$ traverses all cell vertices (4 in 2D; 8 in 3D). We observe that this term seeks to model a cell as viscous/rigid (in the solid phase) or permeable (in the fluid phase), hence it is perfectly reasonable to apply this viscous penalty on a per-vertex basis. Doing so, in fact, avoids certain hazards of low-order quadrature schemes (for example a single-point quadrature scheme evaluated at the cell center could yield a zero result even if non-zero velocities at the vertices happen to average out to zero at the cell center).
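The hazard mentioned above is easy to reproduce. In the sketch below (our own construction), a solid 2D cell carries alternating nodal velocities that interpolate to exactly zero at the cell center, so a single-point center quadrature of $E_f$ reports zero energy while the vertex-based rule correctly penalizes the motion:

```python
import numpy as np

h, d = 1.0, 2
Kf = 1e5 * np.eye(2)                     # solid cell: Kf = kf * I
# alternating nodal velocities that cancel under bilinear interpolation at the center
v_nodes = {'00': np.array([ 1.0, 0.0]), '10': np.array([-1.0, 0.0]),
           '01': np.array([-1.0, 0.0]), '11': np.array([ 1.0, 0.0])}
v_center = sum(v_nodes.values()) / 4     # bilinear interpolation at the cell center

# note ||Kf^{1/2} v||^2 = v^T Kf v, so no explicit matrix square root is needed
E_center = h ** d * v_center @ Kf @ v_center                       # single-point rule
E_vertex = h ** d / 2 ** d * sum(vI @ Kf @ vI for vI in v_nodes.values())
assert np.isclose(E_center, 0.0) and E_vertex > 0.0
```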
After combining all terms in this discretization scheme, we arrive at a quadratic energy form, given by $E = \vvec^\top \K \vvec - \bvec^\top\vvec$, where $\vvec$ stacks all velocity degrees of freedom, and $\K$ and $\bvec$ are the SPD matrix and the vector assembled from the material parameters, respectively. We enforce the Dirichlet boundary condition by forcing the corresponding nodal velocities to take the desired values. Putting these together, we state the discrete variational form as the following quadratic programming problem:
\begin{align}
\min_{\vvec} & \quad \vvec^\top \K(\bm{\theta}) \vvec - \bvec(\bm{\theta})^\top \vvec \label{eq:aniso_stokes_discret_energy}\\
s.t. & \quad \vvec_i = (\vvec_D)_i, \forall (i, (\vvec_D)_i)\in \D, \label{eq:aniso_stokes_discret_constraint}
\end{align}
where $\D$ states the Dirichlet boundary conditions.
\subsection{Block divergence}\label{sec:blk_div}
The variational form and the anisotropic material parametrization frequently use large hyperparameters to satisfy constraints that are supposed to be strict; e.g., perfect incompressibility is approximated with $E_{m,\lambda}$ penalizing the divergence with a coefficient $\lambda_{\max}\rightarrow+\infty$, and free-slip boundaries are replaced with $E_f$ scaled by $(k_f)_{\max}\rightarrow +\infty$, which penalizes nonzero normal flows. Unfortunately, while setting such parameters to near infinity tightens these constraints from an optimization perspective, it also leads to undesirable behaviors. Specifically, using an exceptionally high $\lambda$ value is known to compromise the conditioning of the system. Using a high $k_f$ value has an even more obscure side effect: since boundary cells use this penalty to prevent flows from having a component along the direction \emph{normal} to the boundary, if two neighboring boundary cells have even slightly different directions of anisotropy, a high $k_f$ value will effectively drive the velocities at any vertices shared between the two cells to zero, as their projections onto two non-parallel directions (the normals of the two cells) will jointly and strongly be required to vanish. Hence, this might inadvertently drive us back to a situation where we unintentionally force a no-slip condition.
Using moderately-high values for $\lambda$ and $k_f$ certainly helps alleviate these issues. The risk of doing so, however, is that we open up the possibility of volume loss that, even if not egregious at the local level, could add up to a substantial flow loss at the global scale. This is aggravated by the fact that (with sound motivation) we do not enforce incompressibility (we use $\lambda=0$) at boundary cells paired with a free-slip condition, relying instead on the zero-normal-flow condition (which is not strictly enforced if $k_f$ is not very high) to prevent leakage at free-slip boundaries.
We introduce an original solution to this issue: we pair the moderately-high parameters at the per-cell level with a hard constraint of absolute volume preservation at a more aggregate scale, namely over large blocks (typically rectangular boxes 4 or 8 cells across) into which we partition our domain. This is illustrated in Figure \ref{fig:ab_study:block_divergence} (right); we refer to the associated ablation study for an illustration of the effect of this technique (and the consequences of its omission).
We partition the domain $\B$ into large blocks (indexed by $b$) of uniform sizes and enforce the (aggregate) net flux over each block to be zero. Since the per-cell flux is a linear expression on the nodal velocities, the aggregate constraint is simply the sum:
$$
0 = \mbox{Flux}(\B_b)=\sum_{\mathcal{C}\in\B_b}\mbox{Flux}(\mathcal{C})
$$
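Since each per-cell flux is linear in the nodal velocities, this sum telescopes. The following numpy sketch (our own indexing convention: nodes indexed by $(i_x, i_y)$) sums the analytic fluxes of two neighboring 2D cells and confirms that the shared interior face drops out of the block flux:

```python
import numpy as np

rng = np.random.default_rng(4)
h = 1.0
# nodal velocity components on a 2x1 block of cells; nodes indexed (ix, iy)
u = rng.standard_normal((3, 2))
v = rng.standard_normal((3, 2))

def cell_flux(ix):
    """Analytic face-average flux of the cell with lower-left node (ix, 0)."""
    return h * ((u[ix + 1, 0] + u[ix + 1, 1]) / 2 + (v[ix, 1] + v[ix + 1, 1]) / 2
                - (u[ix, 0] + u[ix, 1]) / 2 - (v[ix, 0] + v[ix + 1, 0]) / 2)

block = cell_flux(0) + cell_flux(1)
# only the block's boundary faces remain: the shared face at ix = 1 cancels
boundary = h * ((u[2, 0] + u[2, 1]) / 2 - (u[0, 0] + u[0, 1]) / 2
                + (v[0, 1] + v[1, 1]) / 2 + (v[1, 1] + v[2, 1]) / 2
                - (v[0, 0] + v[1, 0]) / 2 - (v[1, 0] + v[2, 0]) / 2)
assert np.isclose(block, boundary)
```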
As one would intuitively expect, the net flux over the block $\B_b$ is a linear expression of the averaged (signed) fluxes through the faces of the aggregate block; the contributions of faces interior to $\B_b$ cancel out when the fluxes of neighboring cells are summed. The net flux constraint on each block thus yields a single linear equation (with a zero right-hand side), and these \emph{block-divergence constraints} for the entire domain are distilled into a linear constraint system $\mathbf{C}\vvec=\mathbf{0}$. Ultimately, we reformulate our governing anisotropic Stokes equations as a linearly-constrained quadratic optimization problem, where we minimize the functional $E(\vvec)=\vvec^\top\mathbf{K}(\bm{\theta})\vvec-\bvec(\bm{\theta})^\top\vvec$ (resulting from the quadrature schemes above), subject to the linear constraint $\mathbf{C}\vvec=\mathbf{0}$; here $\bm{\theta}$ represents the design parameters. We solve this constrained optimization problem via the Karush-Kuhn-Tucker conditions, in the system:
$$
\left(
\begin{array}{cc}
2\mathbf{K}(\bm{\theta}) & \mathbf{C}^T \\
\mathbf{C} & 0
\end{array}
\right)\left(
\begin{array}{c}
\vvec \\
\mathbf{q}
\end{array}
\right)=\left(
\begin{array}{c}
\bvec(\bm{\theta}) \\
0
\end{array}
\right)
$$
(where $\mathbf{q}$ are the Lagrange multipliers associated with the block-divergence constraint). We use direct factorization methods (we employ PARDISO \cite{pardiso-7.2a}) to solve these symmetric indefinite problems both for forward simulation and for the inverse problems associated with optimization; for brevity, we may omit an explicit reference to the constraint in our upcoming discussion of the optimization pipeline, with the understanding that it is always present in our pipeline. From a practical perspective, we found this technique to be highly effective; in our 3D examples, enforcing a hard block-divergence constraint on $4\times4\times4$ blocks, paired with moderate to moderately-high parameters $\lambda$ and $k_f$, produced results that were practically indistinguishable from using a very high $\lambda$, and much more resilient to artifacts associated with high $k_f$ values. Note that the block-divergence constraint does not depend on the design parameters $\bm{\theta}$, in contrast to $\mathbf{K}$ and $\bvec$, which do. Ultimately, we view the solution of this constrained optimization problem as the \emph{simulation} function $\vvec=F(\bm{\theta})$ that maps the design parameters to the corresponding simulation result.
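For illustration only, the following dense numpy sketch solves a small KKT system of this form with random stand-ins for $\mathbf{K}$, $\bvec$, and $\mathbf{C}$ (not our actual assembled system, which is sparse and solved with PARDISO), and verifies the constraint and stationarity conditions of the solution:

```python
import numpy as np

rng = np.random.default_rng(3)
n_v, n_c = 8, 2
A = rng.standard_normal((n_v, n_v))
K = A @ A.T + n_v * np.eye(n_v)      # stand-in SPD stiffness matrix
b = rng.standard_normal(n_v)         # stand-in load vector
C = rng.standard_normal((n_c, n_v))  # stand-in block-divergence constraint rows

# KKT system: [[2K, C^T], [C, 0]] [v; q] = [b; 0]
kkt = np.block([[2 * K, C.T], [C, np.zeros((n_c, n_c))]])
sol = np.linalg.solve(kkt, np.concatenate([b, np.zeros(n_c)]))
v, q = sol[:n_v], sol[n_v:]

assert np.allclose(C @ v, 0.0)             # block-divergence constraints hold
assert np.allclose(2 * K @ v - b, -C.T @ q)  # stationarity of the Lagrangian
```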
\section{Optimization}\label{sec:optimization}
We now describe our optimization problem which is built upon the numerical simulator described before. We first formulate the fluidic device design task as a numerical optimization problem and state its formal definition. Next, we describe the algorithm for solving this optimization problem numerically.
\subsection{Problem Statement}
We formally define the task of designing a fluidic device as the following numerical optimization problem:
\begin{align}
\min_{\bm{\theta}} & \quad L_f(\vvec) + w_c L_c(\bm{\theta}) + w_d L_d(\bm{\theta}) + w_a L_a(\bm{\theta}), \\
s.t. & \quad \vvec = F(\bm{\theta}), \label{eq:opt_sim} \\
& \quad \bm{\theta}_{\min}\leq \bm{\theta} \leq \bm{\theta}_{\max}, \label{eq:opt_bound} \\
& \quad 0 \leq V_{\textrm{iso-fluid}}(\bm{\theta}) \leq V_{\max}, \label{eq:opt_vol} \\
& \quad 0 \leq V_{\textrm{all-fluid}}(\bm{\theta}) \leq V_{\textrm{b}} + V_{\max}. \label{eq:opt_edge_vol}
\end{align}
Here, $L_f$ denotes the \emph{functional} loss given by the task specification, typically defined as the $L_2$ difference between the outlet flow in simulation and the desired outlet flow profile.
\paragraph{Regularizers} The next three terms in the objective are regularizers on the material parameters $\bm{\theta}$. Following the standard practice in the previous field-based fluidic topology optimization method~\cite{borrvall2003topology}, the \emph{compliance} regularizer $L_c$ computes the elastic energy that accumulates when the outlet flow profile is enforced to match the target via extra Dirichlet constraints:
\begin{align}
L_c(\bm{\theta}) := & \vvec_c^\top \K(\bm{\theta}) \vvec_c - \bvec(\bm{\theta})^\top \vvec_c, \\
\vvec_c = & F(\bm{\theta}; \D \cup \D_O),
\end{align}
where $\D_O$ summarizes the extra Dirichlet conditions from the target outlet flow profile. The motivation behind $L_c$ is that a lower elastic energy means the outlet flow profile will be more similar to the target if we release the extra Dirichlet conditions on the outlet.
Next, the \emph{directional} regularizer $L_d$ penalizes differences between the normal direction of an anisotropic cell and the normals in its neighborhood. We first identify cells that contain anisotropic materials using two thresholds $\epsilon_0$ and $\rho_0$: a cell is considered anisotropic if its $\epsilon < \epsilon_0$ and $\rho > \rho_0$, i.e., it is modeled as anisotropic fluid. Letting $\Aset$ be the set of anisotropic cells, we define $L_d$ as follows:
\begin{align}
L_d := \sum_{c\in\Aset} \left(1 - \psi_{\cos}(\bm{\alpha}_c, \bm{\alpha}_{\textrm{nbr}})\right)
\end{align}
where $\bm{\alpha}_c$ is the anisotropic direction (a unit vector) of cell $c$, $\bm{\alpha}_{\textrm{nbr}}$ is an average direction fitted from its local neighborhood ($3\times3\times3$ in our implementation), and $\psi_{\cos}$ is the cosine similarity between them. Minimizing $L_d$ therefore encourages smooth free-slip solid-fluid boundaries.
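A direct sketch of $L_d$ on a voxel grid follows; we use a plain $3\times3\times3$ average as the neighborhood direction $\bm{\alpha}_{\textrm{nbr}}$ (the paper fits it, so this averaging is our simplifying assumption), and all names are illustrative:

```python
import numpy as np

def directional_regularizer(alpha, aniso_mask):
    """Sketch of the directional regularizer L_d on a 3D grid.

    alpha: (nx, ny, nz, 3) unit anisotropic directions per cell.
    aniso_mask: boolean (nx, ny, nz), True for cells classified as
    anisotropic (eps < eps0 and rho > rho0).
    """
    L_d = 0.0
    for i, j, k in zip(*np.nonzero(aniso_mask)):
        # Average direction over the 3x3x3 neighborhood (clipped at borders)
        nbr = alpha[max(i-1, 0):i+2, max(j-1, 0):j+2, max(k-1, 0):k+2]
        avg = nbr.reshape(-1, 3).mean(axis=0)
        avg /= np.linalg.norm(avg) + 1e-12
        # Cosine similarity between the cell direction and the average;
        # aligned directions contribute ~0 to the loss.
        L_d += 1.0 - float(alpha[i, j, k] @ avg)
    return L_d
```

A field of identical directions yields $L_d \approx 0$, consistent with the regularizer rewarding smooth boundary orientations.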
Finally, the \emph{anisotropic} regularizer $L_a$ concentrates anisotropic cells near solid-fluid boundaries:
\begin{align}
L_a(\bm{\theta}) := \sum_c \epsilon_c \rho_c (\rho^{local}_{\max} - \rho^{local}_{\min}),
\end{align}
where the sum loops over each cell $c$, and $\rho_{\max}^{local}$ and $\rho_{\min}^{local}$ are the maximum and minimum fluidity in its small neighborhood ($3\times3\times3$ in our implementation). This loss encourages cells near the solid-fluid boundaries (large $\rho^{local}_{\max} - \rho^{local}_{\min}$) to use anisotropic materials (small $\epsilon$). To avoid chasing a moving target, we freeze $\rho^{local}_{\max}$ and $\rho^{local}_{\min}$ in $L_a$ when optimizing $\rho$ in each iteration.
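The anisotropic regularizer can be sketched as follows; the neighborhood handling and names are illustrative, and the $3\times3\times3$ window is clipped at the domain boundary:

```python
import numpy as np

def anisotropic_regularizer(eps, rho):
    """Sketch of the anisotropic regularizer L_a on a 3D grid.

    eps, rho: (nx, ny, nz) isotropy and fluidity fields. The local
    max/min of rho are taken over each cell's 3x3x3 neighborhood
    (and would be frozen when differentiating w.r.t. rho).
    """
    nx, ny, nz = rho.shape
    L_a = 0.0
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                nbr = rho[max(i-1, 0):i+2, max(j-1, 0):j+2, max(k-1, 0):k+2]
                # A large fluidity spread in the neighborhood indicates a
                # solid-fluid boundary; penalize isotropy (large eps) there.
                L_a += eps[i, j, k] * rho[i, j, k] * (nbr.max() - nbr.min())
    return L_a
```

On a uniform fluidity field the spread vanishes everywhere, so $L_a = 0$: no cell is near a boundary and no anisotropy is encouraged.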
\paragraph{Constraints} Apart from the objective function, the optimization problem also contains a number of constraints: Eqn. (\ref{eq:opt_sim}) ensures the flow field $\vvec$ is computed from the numerical simulator, and Eqn. (\ref{eq:opt_bound}) states the bound constraints on the material parameters. The last two equations (\ref{eq:opt_vol}) and (\ref{eq:opt_edge_vol}) define volume constraints on the fluidic region and the free-slip fluid-solid boundaries:
\begin{align}
V_{\textrm{iso-fluid}} := & \sum_c \epsilon_c\rho_c, \\
V_{\textrm{all-fluid}} := & \sum_c \rho_c.
\end{align}
Their difference gives the volume of anisotropic cells; $V_{\max}$ and $V_b$ are two task-dependent thresholds defining the maximum fluidic volume and the maximum volume of anisotropic cells (free-slip boundaries), respectively.
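The two volume measures above are direct sums over the cell fields, as in this small sketch (names are illustrative):

```python
import numpy as np

def fluid_volumes(eps, rho):
    """Volume measures used in the constraints.

    eps, rho: per-cell isotropy and fluidity arrays (any shape).
    Returns (V_iso_fluid, V_all_fluid); their difference is the volume
    occupied by anisotropic (free-slip boundary) cells.
    """
    V_iso = float(np.sum(eps * rho))   # isotropic fluid volume
    V_all = float(np.sum(rho))         # total fluid volume
    return V_iso, V_all
```

For example, two fluid cells ($\rho=1$) with $\epsilon = 1$ and $\epsilon = 0.5$ give $V_{\textrm{iso-fluid}} = 1.5$ and $V_{\textrm{all-fluid}} = 2$, so the anisotropic volume is $0.5$.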
\paragraph{Summary} In summary, the optimization problem aims to find an optimal material distribution $\bm{\theta}$ that minimizes the functional loss under bound constraints and volume constraints. The optimization process is facilitated by three regularizers that encourage spatially smooth yet clear solid-fluid boundaries.
\subsection{Numerical Optimizer}
Standard topology optimization typically involves hundreds of thousands of decision variables even for moderate-size problems (e.g., on a $64^3$ grid), and our numerical optimization is no exception. In fact, due to the additional anisotropic material parameters, our optimization problem has more decision variables than its isotropic topology optimization counterparts. Following standard practice in topology optimization, we use the method of moving asymptotes (MMA)~\cite{svanberg1987method}, a widely used gradient-based optimization algorithm, to solve this large-scale optimization problem. As MMA requires gradients with respect to the material parameters $\bm{\theta}$, we extend the numerical simulation method in Sec.~\ref{sec:disc} to a differentiable simulator in a manner similar to~\citet{du2020stokes}, so the gradients can be computed by backpropagating the loss function. To encourage more structured designs during the optimization process, we dynamically update the upper bound on the isotropy $\epsilon_c$ for each cell $c$ based on the following heuristic:
\begin{align}
(\epsilon_c)_{\max} := 1 - (\rho^{local}_{\max} - \rho^{local}_{\min}).
\end{align}
In other words, when a cell is surrounded by both solid and fluid cells, we force it to choose anisotropic materials (a small upper bound on $\epsilon$). Our empirical experience suggests that this dynamic update scheme biases the optimization process towards more structured fluidic device designs.
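The heuristic above can be sketched directly; the $3\times3\times3$ neighborhood and boundary clipping are our assumptions about how the local max/min are gathered:

```python
import numpy as np

def update_eps_upper_bound(rho):
    """Dynamic per-cell upper bound on the isotropy eps.

    rho: (nx, ny, nz) fluidity field. Cells whose 3x3x3 neighborhood
    mixes solid (rho ~ 0) and fluid (rho ~ 1) receive a small bound,
    forcing them toward anisotropic materials.
    """
    nx, ny, nz = rho.shape
    eps_max = np.empty_like(rho)
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                nbr = rho[max(i-1, 0):i+2, max(j-1, 0):j+2, max(k-1, 0):k+2]
                # (eps_c)_max := 1 - (rho_local_max - rho_local_min)
                eps_max[i, j, k] = 1.0 - (nbr.max() - nbr.min())
    return eps_max
```

A cell deep inside a uniform region keeps the full bound of 1, while a cell straddling a solid-fluid transition gets a bound near 0.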
\subsection{Choice of Parameters and Interpolation Functions}
For the hyperparameter settings, we reuse the value from~\cite{borrvall2003topology} for $k_{f_{\min}}$ and choose a near-zero value for $\lambda_{\min}$; both choices follow topology-optimization convention, and we did not observe noticeable differences when their values were perturbed. For $k_{f_{\max}}$ and $\lambda_{\max}$, we choose values that balance solver performance and block divergence (see our discussion in Sec.~\ref{sec:blk_div}). We choose the current block size by experimenting with the minimum block size that gives satisfactory free-slip boundaries without introducing large divergence errors in small neighborhoods; we include an experiment on the sensitivity of our method to block size in the supplementary material. For the interpolation functions, we use the interpolation function (Eq.~\ref{eq:interp_func_kf}) from~\cite{borrvall2003topology} with the same $q$ value for $k_f$, and a power-indexed interpolation function for $\lambda$, which is conventional in density-based topology optimization, to encourage the design's binary convergence. We did not observe noticeable differences between these two functions.
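For concreteness, the two interpolation functions might look as follows. The convex form for $k_f$ follows the Borrvall-Petersson style; since Eq.~\ref{eq:interp_func_kf} is not reproduced here, its exact form, as well as the power exponent $p$ for $\lambda$, are assumptions for illustration:

```python
def kf_interp(rho, kf_min, kf_max, q):
    """Borrvall-Petersson-style convex interpolation for k_f (assumed
    form): rho = 1 (fluid) gives kf_min, rho = 0 (solid) gives kf_max."""
    return kf_max + (kf_min - kf_max) * rho * (1.0 + q) / (rho + q)

def lam_interp(rho, lam_min, lam_max, p=3):
    """Power-indexed interpolation for lambda, conventional in
    density-based topology optimization (exponent p is an assumption)."""
    return lam_min + (lam_max - lam_min) * (1.0 - rho) ** p
```

Both functions interpolate monotonically between the fluid and solid limits; small $q$ (or large $p$) sharpens the transition and thereby pushes intermediate densities toward binary designs.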
\section{Results}\label{sec:evalapp}
In this section, we present various 3D design problems to evaluate the performance of our differentiable anisotropic Stokes flow simulator as well as the optimization pipeline.
Next, we compare our method with two previous state-of-the-art baseline methods and validate our method with ablation studies. A complete demonstration of our design problems with the evolution of our optimization process can be found in our supplemental video.
\subsection{Applications}\label{sec:evalapp:app}
\begin{table*}[ht!]
\caption{We report the statistics from optimizing the design problems in Sec.~\ref{sec:evalapp:app}, including the maximum fluidic volume fraction, the functional loss before and after optimization, and the time decomposition in each optimization iteration: ``Forward'' represents the time spent on computing the loss function, which is dominated by the numerical simulator; ``Backprop.'' stands for the time spent on computing the gradients; ``MMA optimizer'' reports the time cost from running one iteration of MMA after obtaining the loss and gradients; ``Total'' is the sum of all the time above.}
\begin{tabular}{l|c|c|c|c|cccc}
\toprule
& \textbf{Resolution} & \textbf{Volume Limit} & \multicolumn{2}{c|}{\textbf{Functional Loss} ($L_f$)} & \multicolumn{4}{c}{\textbf{Time per Optimization Iteration (s)}} \\
& & ($V_{\max}$) & Initial & Final & \multicolumn{1}{l|}{Forward} & \multicolumn{1}{l|}{Backprop.} & \multicolumn{1}{l|}{MMA optimizer} & Total \\ \midrule
\textbf{Twister} & $100\times100\times100$ & 0.30 & 22.575 & 0.519 & \multicolumn{1}{l|}{1374.1} & \multicolumn{1}{l|}{152.6} & \multicolumn{1}{l|}{153.2} & 1679.9 \\
\textbf{Tree Diffuser} & $80\times80\times80$ & 0.25 & 3.966 & 0.133 & \multicolumn{1}{l|}{721.9} & \multicolumn{1}{l|}{85.0} & \multicolumn{1}{l|}{61.1} & 868.0 \\
\textbf{Circuit-1} & $80\times80\times80$ & 0.25 & 86.281 & 2.125 & \multicolumn{1}{l|}{737.2} & \multicolumn{1}{l|}{86.5} & \multicolumn{1}{l|}{83.1} & 896.7 \\
\textbf{Circuit-2} & $80\times80\times80$ & 0.50 & 86.153 & 1.490 & \multicolumn{1}{l|}{721.3} & \multicolumn{1}{l|}{86.4} & \multicolumn{1}{l|}{82.7} & 889.7 \\ \bottomrule
\end{tabular}
\label{tab:opt_statistics}
\end{table*}
We demonstrate a variety of complex fluidic device designs obtained using our optimization pipeline. We use a grid resolution of $100\times100\times100$ for the Fluid Twister example and $80\times80\times80$ for all other optimization examples, and initialize the material parameters to be isotropic ($\epsilon=1$) with fluidity $\rho=V_{\max}$. We run all optimizations for 300 iterations and use the design that achieves the minimum final loss as our optimized design. We report the task-specific volume fraction limit, the initial and optimized functional loss, and the execution time for each task in Table~\ref{tab:opt_statistics}. We additionally include design domain illustrations and task specifications in the supplementary material.
We implement our optimization pipeline in C++, using PARDISO~\cite{pardiso-7.2a} to solve our linear systems and MMA~\cite{svanberg1987method} as our optimizer. Since the sparsity pattern of the system matrix remains the same across optimization iterations, we also reduce matrix factorization time by performing symbolic factorization only once. We run our experiments on an Intel Ice Lake 128-core server running Ubuntu 20.04.
\paragraph{Motivating examples}
As motivating examples, we present the 3D design problems of an amplifier and a mixer on an $80\times80\times80$ grid. The 2D versions of both examples are presented in prior works \cite{borrvall2003topology, du2020stokes}. In \textit{Amplifier}, we enforce a constant circular input with inflow velocity $(v_{in},0,0)$. The objective is to amplify the flow by a factor of $\frac{5}{3}$. In \textit{Mixer}, the design objective is to mix a high-pressure and a low-pressure flow from two inlets to produce equal middle-pressure flows at the two target outlets. Specifically, the two inlets have inflow velocities $(v,0,0)$ and $(0,2v,0)$, and the two outlets have outflow velocities $(1.5v,0,0)$ and $(0,1.5v,0)$, respectively. The volume fraction limit is $0.3$ for \textit{Amplifier} and $0.4$ for \textit{Mixer}. We visualize the optimized designs in Fig.~\ref{fig:amplifier_mixer}.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figures/figure4-amplifier-mixer.pdf}
\vspace{-2em}
\caption{
Designing an \textit{Amplifier} (left) and a \textit{Mixer} (right) under a grid resolution of $80\times80\times80$. The anisotropic boundaries of the optimized designs are visualized using small colored disks. }
\label{fig:amplifier_mixer}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figures/figure5-brancher-small.pdf}
\vspace{-2em}
\caption{
Our pipeline generates a tree diffuser on a $80\times80\times80$ grid. Top left: The anisotropic boundary of the optimized design is visualized using small colored disks. Bottom: The 3 images in the middle visualize the perpendicular cross-sections of the optimized design at different depths from the inlet. These cross-sections highlight the progressive branching from a single inlet to multiple outlets. Top right: The fluid flow, simulated from the optimized design, is visualized as streamlines.
}
\label{fig:tree_diffuser}
\end{figure}
\paragraph{Tree Diffuser}
The goal of this example is to generate a fluidic diffuser that directs fluid from one constant circular-shaped inlet to 16 small square-shaped outlets while bypassing a small obstacle at the center of the domain, which we enforce as zero-velocity Dirichlet constraints. In the optimized design (Fig.~\ref{fig:tree_diffuser}), an interesting tree-like topology automatically emerged from our pipeline, where the fluid first branches into four chambers and then into the 16 outlets. The resulting shape is intuitive: branching proceeds gradually from the inlet toward the interior of the domain, with the branching factor increasing in that direction. The cross-sections in Fig.~\ref{fig:tree_diffuser} highlight the progression of the branching in the optimized design. This example highlights the ability of our method to synthesize an intricate structure with 16 outlets without any prior on its shape or topology.
\paragraph{Fluid Twister}
In this example, we enforce a circular-shaped constant inlet with inflow velocity $(v_{in},0,0)$. The objective of the task is to generate a swirling flow in the $yz$-plane at the outlet of the domain. This example is solved on a $100\times100\times100$ grid with nearly $4$ million decision variables, and the final design is shown in Fig.~\ref{fig:teaser}. The streamline visualization in the middle of Fig.~\ref{fig:teaser} shows that the optimized design successfully generates a swirling flow at the outlet. Using our topology optimization approach, a propeller-like structure automatically emerged from a constant fluidity parameter field, highlighting the ability of our pipeline to create new topology.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figures/figure6-circuit.pdf}
\vspace{-2em}
\caption{
We design the \textit{Circuit} with maximum volume fraction $V_{\max}=0.25$ (top) and $V_{\max}=0.5$ (bottom). Inset: We visualize the domain setup. Different faces of the domain are marked as inlets and outlets with different flow velocities. Left: The anisotropic boundary of the generated design is visualized using small colored disks. Right: The resulting fluid flow from the optimized design is visualized using color-coded streamlines.
}
\label{fig:fluid_circuit}
\end{figure}
\paragraph{Fluid Circuit}
In this example, we mimic a fluid circuit that connects multiple inlets, located at two faces of the cubic domain, to multiple outlets located at the remaining four faces of the cubic domain. The inlets have three types of inlet velocities, and the goal of the circuit is to connect the inlets to produce equal flows at the outlets. The result of our optimization is shown in Fig.~\ref{fig:fluid_circuit}. The optimized design that emerges from our pipeline is intuitive, as it seems to connect the nearest pairs of inlets and outlets that can produce equal flows in order to meet the small volume fraction constraint (0.25 in this example). For instance, the top-right inlet on the left face (of the domain) is connected to the nearest outlet on the top face (of the domain). We present two results using different volume constraints $V_{\max}=0.25$ and $V_{\max}=0.5$ and observe different topological structures, which exhibit different routing plans between the inlets and outlets.
\subsection{Evaluation}
Below we show experiments evaluating our method and comparing it to previous works \cite{du2020stokes, borrvall2003topology}. We include an additional evaluation of the sensitivity of our results to initialization, an experiment validating our optimized designs, and an ablation study on our new anisotropic material model in the supplementary material.
\paragraph{Solver evaluation}
To verify that our simulation results converge under refinement, we evaluate our solver on a 2D fluid amplifier design represented by two symmetric B\'ezier curves. The left of the domain has horizontal inflow. We simulate the design in square domains of dimensions 32, 64, 128, 256, 512, 1024, and 2048, and observe that the velocity fields converge to a limit (Fig.~\ref{fig:eval_solverconvergence_comparison}). To compare our solver with more traditional Stokes flow solvers, we simulate the same amplifier design using the solver from~\citet{du2020stokes}, which is an exact interface solver that supports free-slip boundaries.
As shown in Fig.~\ref{fig:eval_solverconvergence_comparison}, the main bodies of the velocity fields are similar, and the discrepancy appears mainly near the solid boundary due to the different boundary treatments. Specifically, \citet{du2020stokes} assumes an exact solid-fluid interface and simulates interface cells with subcell precision, while our method only assumes the direction of the interface within each cell.
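The convergence metric used in this study can be sketched as a simple relative $L_2$ error against the finest-resolution solution; resampling both fields onto common nodes beforehand is assumed:

```python
import numpy as np

def relative_error(v, v_ref):
    """Relative L2 error of a velocity field against a reference solution
    (both sampled on the same nodes, e.g. the coarse field interpolated
    onto the fine grid beforehand)."""
    return float(np.linalg.norm(v - v_ref) / np.linalg.norm(v_ref))
```

Monotonic decay of this quantity as the grid is refined from 32 up to 2048 is what Fig.~\ref{fig:eval_solverconvergence_comparison} reports.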
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/figure7-convergence-comparison.pdf}
\vspace{-2em}
\caption{Evaluation of our solver on a 2D fluid amplifier design. We visualize the norm of the nodal velocity field simulated in square domains of dimension 32, 256, and 2048 (top left, top middle, bottom left). The relative error (to the solution at dimension 2048) monotonically decreases as the resolution increases. We additionally simulate the same design at $2048\times2048$ using a traditional Stokes flow solver from~\citet{du2020stokes} (bottom middle), and visualize the difference field (bottom right, normalized to inflow value). }
\label{fig:eval_solverconvergence_comparison}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{figures/figure8-pipe2.pdf}
\vspace{-1em}
\caption{Comparisons between our method and~\cite{borrvall2003topology} using a 2D design problem ($60\times60$ cells). The domain has one inlet and one outlet at its left and right, respectively. The goal is to synthesize a structure that transports the inlet flow with velocity $(v_i, 0)$ to the outlet. From left to right: we run both methods to solve this design problem with volume fraction limits varying from 0.6 to 0.2. For both methods, we visualize the optimized fluidity field (rows 1, 2) and velocity field (rows 4, 5). We additionally visualize the optimized isotropy field $\epsilon$ for our method (row 3). As we decrease the volume fraction from left to right, the method of~\citet{borrvall2003topology} starts to synthesize non-physical designs and flow, i.e., solid cells near the inlet and outlet with nonzero fluid velocities on them.}
\label{fig:comparedBorrvall}
\end{figure}
\paragraph{Sensitivity of results to block size}\label{sec:apdx_sensitivy_blocksize}
To evaluate the sensitivity of the optimization results to block size, we optimize the 2D amplifier design problem on a $100\times100$ grid with a volume fraction limit of 0.5. The domain has an inlet with parallel horizontal inflow on its left side and an outlet on its right side. We initialize the optimization with $\epsilon=1$ and $\rho=0.5$ and repeat the optimization with block sizes of 4, 8, and 16 (Fig.~\ref{fig:eval_blocksize}). We observe that the optimized designs are similar.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/figure9-eval-blocksize.pdf}
\vspace{-2.5em}
\caption{Sensitivity of optimization results to block size. We optimize the 2D amplifier design problem with block sizes 4, 8, and 16 where initial $L_f=141.938$. We visualize the norm of the velocity field (top) and the optimized isotropy field (bottom) and report the optimized $L_f$ value (bottom). }
\label{fig:eval_blocksize}
\end{figure}
\paragraph{Comparison with baselines}
We compare our method with two representative baseline algorithms in the design and optimization of fluidic devices: one is the field-based fluidic topology optimization method of~\citet{borrvall2003topology}, which uses isotropic materials with varying stiffness to model porous materials that allow or impede the passage of water; the other is a parametric-shape optimization algorithm~\cite{du2020stokes}, which uses parametric shapes to represent the device boundary and optimizes the design by evolving the shape parameters. We implemented the method of~\citet{borrvall2003topology} exactly as described in the paper with the same set of hyperparameters and verified that we can reproduce the results presented therein. We used the open-source implementation for the comparison with~\cite{du2020stokes}.
The main difference between our approach and~\cite{borrvall2003topology} is the introduction of an anisotropic material model into fluidic topology optimization. To validate the merits of anisotropic materials, we consider a 2D design problem that aims to synthesize the internal structure between an inlet and an outlet on opposite sides of a square domain ($60\times60$ cells). The inlet flow is given by $(v_i, 0)$ and enforced as Dirichlet boundaries, and the goal is to create a design so that the outlet flow at each node is $(v_i, 0)$ too. While this design problem has a trivial solution of a straight pipe connecting the inlet and the outlet, we stress-test it by imposing volume fractions $V_{\max}$ ranging from $60\%$ to $20\%$ of the domain (Fig.~\ref{fig:comparedBorrvall}, left to right) and run both methods. As we decrease $V_{\max}$, the method of~\citet{borrvall2003topology} starts to generate physically implausible flow: nonzero velocities co-exist with solid cells near the inlet and the outlet (Fig.~\ref{fig:comparedBorrvall} top). The underlying reason is that traditional isotropic field-based methods typically produce blurred solid-fluid boundaries that occupy more fluid volume than a physical boundary should; when the volume fraction becomes tight, such methods must trade away fluid volume that should have remained in the interior of the fluid phase. In contrast, our method maintains a sharp boundary and physically plausible flow velocities even when we push $V_{\max}$ to its minimum (Fig.~\ref{fig:comparedBorrvall} bottom), which we attribute to the anisotropic material model.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/figure10-diffstokes-small.pdf}
\vspace{-2em}
\caption{Solving the Fluid Twister example using the open-source implementation from~\citet{du2020stokes}. Left: The parametric shape and fluid flow visualization before optimization, with the top-right insets showing the designs viewed from their outlets. Right: The corresponding visualizations after optimization.}
\label{fig:compare_parametric_shape}
\end{figure}
The difference between our method and~\cite{du2020stokes} is that our representation of a fluidic system is field-based, while theirs is based on parametric shapes. We run both methods on the Fluid Twister example, which was also one of the design problems studied in their paper, and report the optimization results in Figs.~\ref{fig:teaser} and~\ref{fig:compare_parametric_shape}, respectively. It is evident that the design space in~\cite{du2020stokes} is limited to parametric shapes that only evolve a simple surface (Fig.~\ref{fig:compare_parametric_shape}) without changing its topology. In contrast, our method automatically synthesizes a propeller-like structure without geometric or topological priors. Furthermore, our final design achieves a much lower functional loss (0.52) than their approach (3.31), indicating that we explored a much larger design space thanks to the expressiveness of the field-based representation.
\subsection{Ablation Study}
\paragraph{Block divergence}
To understand the effects of the block-divergence constraints in our numerical simulation, we consider a 2D example with two inlets and two outlets on the left and right sides of a square domain (Fig.~\ref{fig:ab_study:block_divergence}). We use the moderately large material parameters $k_f$ and $\lambda_0$ defined in Table~\ref{tab:material} to model the design and simulate the Stokes flow with and without block divergence. To quantify the divergence over the whole domain, we compute the ratio between the outflux at the two outlets and the influx from the two inlets, which is about 67\% without block divergence and 100\% with it. These numbers indicate that moderate material parameters, while friendly to a numerical solver, produce leaky flows that disappear into the solid phase. With block divergence, however, the whole domain remains divergence-free in an aggregated sense without harming the conditioning of the numerical system.
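The leakage diagnostic used here reduces to a flux ratio over the inlet and outlet faces, as in this sketch (sampling and names are illustrative):

```python
import numpy as np

def flux_ratio(v_out, v_in, dx=1.0):
    """Ratio of outflux to influx, used to quantify leakage.

    v_in, v_out: arrays of signed normal velocity samples on the inlet
    and outlet faces; dx is the face-element size. A ratio near 1 means
    the flow is divergence-free in the aggregate sense."""
    influx = float(np.sum(v_in) * dx)
    outflux = float(np.sum(v_out) * dx)
    return outflux / influx
```

In the experiment above, the ratio is about 0.67 without the block-divergence constraints and 1.0 with them.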
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/figure11-blockdivergence.pdf}
\vspace{-2em}
\caption{We study the effects of block divergence for simulating divergence-free Stokes flow using this two-inlet, two-outlet design problem in 2D ($30\times30$). The inlets and outlets are defined on the left and right sides of the domain, respectively. Left: We model the design with moderate $k_f$ and $\lambda_0$ without imposing the block divergence constraints in simulation. The outflux on the two outlets is 67\% of the influx from the two inlets, indicating the flow velocities dissipate into the solid phase inside the domain. Right: We rerun the same simulation with block divergence constraints with block sizes of $8\times 8$ (dashed red lines). The resulting outflux is 100\% of the influx.}
\label{fig:ab_study:block_divergence}
\end{figure}
\section{Conclusions}\label{sec:conclusion}
In this paper, we presented a density-based fluidic topology optimization pipeline that handles flexible boundary conditions in the fluidic phase. Our core contribution is an anisotropic material model that uniformly represents different phases and flexible boundaries in the Stokes flow model. Building on this physical model, we developed numerical solutions for its geometric representation, simulation, and optimization. Ablation studies validated our approach, and comparisons with existing methods confirmed its advantages in designing fluidic devices with delicate structures and flexible boundary types.
\section{Limitation and Future Work}
We identify certain current limitations of our approach and discuss possible future directions that could address and overcome them.
First, our model deliberately makes certain simplifications of the governing physics of a fluidic system. For example, our approach models the fluid phase as steady-state Stokes flow and ignores -- as has been the case with almost any prior work comparable to our feature set -- the effects of time-dependent flow behaviors on the system's performance. Developing optimization frameworks for dynamic fluid systems is a natural next step building on our Stokes flow optimizer. In addition, our method models the solid phase as rigid; extending the model to a compliant solid phase, so as to develop optimization tools for the interaction between incompressible flow and compliant structures, arises as an important problem for fluid-driven soft robot design.
Second, our model is currently limited in scalability. Even though our demonstrated resolutions are quite competitive in the context of fluidic topology optimization, they fall short, by orders of magnitude, of the resolutions achieved by topology optimization pipelines for purely solid/elastic (not fluidic) devices.
Our current framework employs a direct solver for the algebraic systems arising from our discretization. To achieve a significant next leap in scalability, we will need to devise iterative and possibly multi-resolution (e.g., multigrid or multigrid-preconditioned) solvers. Although there is precedent for scalable multigrid solvers performing well for Stokes flow \cite{gaspar2008distributive}, a number of complications related to our specific needs will have to be addressed. For example, most prior Stokes multigrid solvers rely on staggered discretizations and mixed variational formulations for proper performance. We would likely need to adopt a mixed formulation as well \cite{patterson2012simulation}, but preserving the convergence qualities established for staggered discretizations in the context of a collocated discretization like ours will require attention to stability issues and careful adaptation of relaxation techniques. At the same time, we will need to address the complications that our anisotropic terms might impart on the convergence of techniques designed for pure Stokes problems.
In addition to addressing model accuracy and solver scalability, we anticipate devising new optimization objectives and constraints that account for a system's manufacturability. Fabrication methods and performance benchmarks for evaluating optimized fluidic devices also remain an unexplored yet essential area for bridging the gap between simulation and fabrication.
\section{Additional Evaluation}
\paragraph{Sensitivity of results to initialization}\label{sec:apdx_sensitivy_init}
To evaluate the sensitivity of optimization results to initialization, we optimize the same 2D amplifier design problem with different initial values. Specifically, we perturb the initial fluidity field $\rho$ and anisotropic orientation field $\bm{\alpha}$ by noise sampled from $\mathcal{U}(-k, k)$, where $k$ is $0.001$, $0.01$, and $0.09$, respectively (Fig.~\ref{fig:eval_initialization}).
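The perturbation procedure can be sketched as follows; renormalizing the orientation vectors after perturbation is our assumption to keep them unit-length, and all names are illustrative:

```python
import numpy as np

def perturb_init(rho, alpha, k, rng=None):
    """Perturb the initial fluidity and orientation fields with uniform
    noise from U(-k, k), as in the sensitivity study.

    rho: fluidity field in [0, 1]; alpha: (..., d) unit direction field.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Perturb fluidity and keep it a valid density in [0, 1]
    rho_p = np.clip(rho + rng.uniform(-k, k, size=rho.shape), 0.0, 1.0)
    # Perturb directions and renormalize to unit vectors (our assumption)
    alpha_p = alpha + rng.uniform(-k, k, size=alpha.shape)
    alpha_p /= np.linalg.norm(alpha_p, axis=-1, keepdims=True)
    return rho_p, alpha_p
```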
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/figure12-eval-initialization.pdf}
\vspace{-2em}
\caption{Sensitivity of optimization results to initial values. We optimize the 2D amplifier design problem with perturbed initial values. Specifically, we perturb the fluidity field $\rho$ and the anisotropic orientation field $\bm{\alpha}$ by noise sampled from $\mathcal{U}(-k, k)$, where $k$ is $0.001$, $0.01$, and $0.09$ (left, middle, right), respectively. }
\label{fig:eval_initialization}
\end{figure}
\paragraph{Validation of optimized result}\label{sec:apdx_validation}
To validate the design optimized by our anisotropic mixture model, we extract an exact interface from our optimized design of a 2D amplifier design problem on a $100\times100$ grid and simulate the design using our solver and the conventional FEM solver from~\cite{du2020stokes} (Fig.~\ref{fig:eval_validation}). To extract the optimized interface, we first binarize the optimized $\epsilon$ field, then fit four cubic B\'ezier curves to the binarized field. We compare the simulation of the extracted design using our solver and the solver from~\cite{du2020stokes}. The initial $L_f$, the optimized $L_f$, and the $L_f$ achieved with the reparameterized and binarized design (for validation purposes) are $142.70$, $6.42$, and $26.51$, respectively. The increased $L_f$ after the two post-processing steps is expected, due to the boundary geometry changes introduced during binarization and fitting, and to the different boundary-handling approaches of the two methods, which introduce simulation differences near the domain boundary.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/figure13-eval-validation.pdf}
\vspace{-2em}
\caption{\hlrevision{Validation of the optimized result. We optimize the 2D amplifier problem (top left) and extract an exact interface by first binarizing the optimized $\epsilon$ field (top middle), then fitting the binarized field with four cubic B\'ezier curves whose upper and lower parts are symmetric. We visualize the velocity field (normalized to the inflow value) of the extracted design simulated using our method (bottom left) and~\cite{du2020stokes} (bottom middle), and the error between the two fields (bottom right, normalized to the target flow value). }}
\label{fig:eval_validation}
\end{figure}
\paragraph{Anisotropic material model} We study the effects of the two critical anisotropic material parameters $\K_m$ and $\K_f$ on modeling sharp, free-slip solid-fluid boundary conditions. Specifically, we analyze the Stokes flow problem in a 2D slanted pipe (Fig.~\ref{fig:ab_study:material}) with straight, parallel free-slip boundaries and a constant inlet flow parallel to them. Since these boundaries are frictionless, we expect a physically plausible simulator to generate a constant flow inside the pipeline and, in particular, a constant outlet flow velocity parallel to the boundaries. We consider the four possible combinations of isotropic/anisotropic $\K_m$ and $\K_f$ to model and simulate this slanted pipe example discretized on a grid of $20\times20$ cells and report the resulting flow fields in Fig.~\ref{fig:ab_study:material}. Comparing these results, we can see that the simulation generates a near-constant flow field only when $\K_m$ and $\K_f$ are both anisotropic (Fig.~\ref{fig:ab_study:material}, bottom right), using the values suggested in Table~\ref{tab:material}. In particular, whenever $\K_m$ or $\K_f$ is isotropic, it significantly damps the flow near the solid-fluid boundaries, leading to a non-constant outlet flow profile. This comparison shows that isotropic material models suffer from an inherent difficulty in modeling free-slip boundary conditions, necessitating anisotropic $\K_m$ and $\K_f$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/figure14-slantedpipe.pdf}
\vspace{-2em}
\caption{
We study the effects of anisotropic material parameters $\K_m$ and $\K_f$ on simulating our Stokes flow model in a 2D slanted pipe ($20\times 20$ cells). We repeat the simulation with all four combinations of anisotropic ($\K_m$, $\K_f$) and their isotropic counterparts ($\K_m=\I$ and $\K_f=k_f\I$). The slanted straight pipe has parallel free-slip solid-fluid boundaries (dashed blue lines) and a constant inlet flow parallel to them from left of the domain. We visualize the flow velocity at each node in the pipe as blue arrows and expect the velocity at each outlet node to be identical to the inlet velocity. We additionally highlight the flow norm profile passing through a cross-section in the insets.}
\label{fig:ab_study:material}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/appendix/figure15-setupall.pdf}
\caption{We visualize the domain specification for the \textit{Amplifier}, \textit{Mixer}, \textit{Tree Diffuser} and \textit{Twister} design problems. }
\label{fig:apdxsetup1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/appendix/figure16-circuit-setup.pdf}
\caption{We visualize the domain specification for the \textit{Circuit} design problem.}
\label{fig:apdxsetup2}
\end{figure}
\section{Optimization Problems Illustration}
We illustrate the design specifications for all of the examples from Sec.~\ref{sec:evalapp:app} in Fig.~\ref{fig:apdxsetup1} and Fig.~\ref{fig:apdxsetup2}. In each example, the nodes on the faces not labeled with ``outlet'' have zero-velocity Dirichlet conditions specified on the non-fluid domain. On faces labeled with ``outlet'', the non-fluid nodes have zero-velocity Dirichlet conditions specified when computing $L_c$, but these conditions are released when computing $L_f$, as explained in Sec.~\ref{sec:optimization}.
\section{Mixed formulation for Multigrid solver}
\section{Introduction}
\IEEEPARstart{G}{iven} a non-singular square matrix $G$ of size $l\times l$ over the finite field $\mathbb{F}_q$ with $q$ elements ($q$ a power of a prime), it can be used to design the kernel of a polar code. For such a construction it is interesting to study the exponent of the matrix and the information set.
Concerning the information set, it was proved in \cite{Bardet} that given a binary symmetric channel the construction based on the binary matrix $G_A=\begin{bmatrix} 1&0\\1&1\end{bmatrix}$ can be analyzed in terms of the elements of the polynomial ring $\mathbb{F}_2[x_1,\ldots,x_n]/\langle x_1^2-x_1,\ldots,x_n^2-x_n\rangle$. That is, for a fixed monomial order (which induces a divisibility relation on the ring and a weight on the variables $x_i$, $i=1,\ldots, n$) one can obtain information about its information set. In the same paper the authors also devised a formula for the minimum distance, derived the dual code, and proved that the permutation group of the code is ``large''.
In this paper we will show some applications of algebraic curves to polar codes. We apply some restrictions to a discrete memoryless channel (DMC)
$W:\mathbb{F}_q\rightarrow\mathcal{O}$ and study under which assumptions a matrix $G$ polarizes in terms of the curve used to construct its kernel. We also describe the information set, the minimum distance, and the dual of a polar code based on the curve, generalizing some of the results in \cite{Bardet}; in particular, we keep the notation used there for some properties of polar codes. All the restrictions that we impose on our curves are satisfied by Castle curves \cite{castillo1}; this is the reason for the title, since Vard{\o}hus (Norway) is the only castle known to the authors in the polar region.
The structure of the paper is as follows. In Section~\ref{sec:pre} we compile some basic facts on polar codes and algebraic geometric curves needed to understand the paper. Section~\ref{sec:AGkernels} reviews some results in \cite{Anderson} and adapts them to construct kernel matrices arising from algebraic curves for a SOF channel, that is, a discrete memoryless channel which is symmetric w.r.t.\ the field operations. Section~IV deals with the computation of the minimum distance and the dual of the codes proposed in the previous section. Finally, Section~V is devoted to the study of the exponent of such codes and how to reduce them to get a better exponent for a given matrix size.
\section{Preliminaries}\label{sec:pre}
\subsection{Polar codes}
Ar\i kan introduced in \cite{Arikan} a method to obtain efficient capacity-achieving binary source and channel codes, generalized later by \c Sa\c so\u glu et al. in \cite{Sasoglu}. In this section we briefly review some results used in the rest of the work.
Given a non-singular matrix $G$ over $\mathbb{F}_q$ ($q=p^r$ a power of a prime) of size $l\times l$ and a discrete memoryless channel $W:\mathbb{F}_q\rightarrow\mathcal{O}$, take $N=l^n$ and define $W_n:\mathbb{F}_q^N\rightarrow\mathcal{O}^N$ such that
$$W_n(y_1^N\ |\ u_1^N)=\prod_{i=1}^N W(y_i\ |\ u_1^N G_n)$$
with $G_n=B_n G^{\otimes n}$, where $G^{\otimes n}$ is the Kronecker product of $G$ with itself $n$ times and $B_n$ is the $N\times N$ matrix such that for the vector $v_1^N=u_1^NB_n$, if $(i_n,\ldots,i_1)$ is the $l$-ary expansion of $i$, then $v_i=u_{i'}$, where $i'$ has the expansion $(i_1,\ldots,i_n)$. The channel $W_n$ is split into $N$ channels $W_n^{(i)}:\mathbb{F}_q\rightarrow\mathcal{O}^{N}\times\mathbb{F}_q^{i-1}$ with
$$W_n^{(i)}(y_1^N,u_1^{i-1}\ |\ u_i)=\sum_{u_{i+1}^N\in\mathbb{F}_q^{N-i}}W_n(y_1^N\ |\ u_1^N)$$
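The construction $G_n=B_nG^{\otimes n}$ can be checked with a short Python sketch (function names are ours, not from the paper); over the binary field with Ar\i kan's kernel it reproduces the familiar $4\times 4$ transform.

```python
def kron(A, B, q=2):
    """Kronecker product of square matrices over F_q (entries as ints mod q)."""
    la, lb = len(A), len(B)
    N = la * lb
    return [[(A[i // lb][j // lb] * B[i % lb][j % lb]) % q
             for j in range(N)] for i in range(N)]

def digit_reverse(i, l, n):
    """Reverse the l-ary expansion (i_n, ..., i_1) of i (0-indexed)."""
    r = 0
    for _ in range(n):
        r = r * l + i % l
        i //= l
    return r

def polar_transform(G, n, q=2):
    """G_n = B_n G^{(kron) n}: Kronecker power followed by the digit-reversal row permutation."""
    Gk = [[1]]
    for _ in range(n):
        Gk = kron(Gk, G, q)
    return [Gk[digit_reverse(i, len(G), n)] for i in range(len(Gk))]

# Arikan's kernel: B_2 swaps the two middle rows of the Kronecker square.
G_A = [[1, 0], [1, 1]]
assert polar_transform(G_A, 2) == [[1, 0, 0, 0], [1, 0, 1, 0], [1, 1, 0, 0], [1, 1, 1, 1]]
```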
These channels are compared via their rate; that is, for a channel $W:\mathbb{F}_q\rightarrow\mathcal{O}$
$$I(W)=\sum_{y\in\mathcal{O}}\sum_{x\in\mathbb{F}_q}\frac{1}{q}W(y|x)\log_q\left(\frac{W(y|x)}{W_{\mathcal{O}}(y)}\right)$$
\noindent where $W_{\mathcal{O}}(y)$ represents the probability of receiving $y$ through $W$. We say that $G$ polarizes $W$ if for each $\delta\in(0,1)$ we have:
\begin{align*}
&\lim_{n\rightarrow\infty}\frac{\left|\left\{i\in\{1,\ldots,N\}\ |\ I\left(W_n^{(i)}\right)\in(1-\delta,1]\right\}\right|}{N}=I(W)\\
\\
&\lim_{n\rightarrow\infty}\frac{\left|\left\{i\in\{1,\ldots,N\}\ |\ I\left(W_n^{(i)}\right)\in[0,\delta)\right\}\right|}{N}=1-I(W)
\end{align*}
If a matrix $G$ polarizes, we say that $G$ is the kernel of the polarization. Ar\i kan's original construction using $G_A=\begin{bmatrix} 1&0\\ 1&1\end{bmatrix}$ polarizes any binary symmetric channel $W$. Defining a channel over $\mathbb{F}_q$ as symmetric if for each $x\in\mathbb{F}_q$ there exists a permutation $\sigma_x$ such that $W(y|x)=W(\sigma_{x'-x}(y)|x')$ for each $y\in\mathcal{O}$ and $x',x\in\mathbb{F}_q$, Mori and Tanaka~\cite{moriq} showed that source polarization is the same as symmetric channel polarization and
\begin{theorem}[\cite{moriq}, Theorem 14]
Let $G$ be a non-singular matrix of size $l\times l$ over $\mathbb{F}_q$, $V$ an invertible upper triangular matrix and $P$ a permutation matrix. If $G'=VGP$ is a lower triangular matrix with units on its diagonal (we call it a standard form of $G$), then for a $G$ with a non-identity standard form the following statements are equivalent
\begin{itemize}
\item Any symmetric channel is polarized by $G$.
\item $\mathbb{F}_p(G')=\mathbb{F}_q$ for any standard form $G'$ of $G$, where $\mathbb{F}_p(G')$ denotes the field generated by the adjunction of the elements in $G'$.
\item $\mathbb{F}_p(G')=\mathbb{F}_q$ for one standard form $G'$ of $G$.
\end{itemize}
\end{theorem}
When a matrix $G$ polarizes some channel $W$, we obtain efficient codes by choosing the best channels $W_n^{(i)}$. For this purpose we use the Bhattacharyya parameter of a channel $W:\mathbb{F}_q\rightarrow\mathcal{O}$, defined as follows
$$Z(W)=\frac{1}{q(q-1)}\sum_{x,x'\in\mathbb{F}_q, x\neq x'}\sum_{y\in\mathcal{O}}\sqrt{W(y|x)W(y|x')}.$$
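As a numerical sanity check of the two parameters $I(W)$ and $Z(W)$, the following Python sketch (our own helper names; channels represented as row-stochastic matrices with $W[x][y]=W(y|x)$) recovers the closed forms $1-H_2(p)$ and $2\sqrt{p(1-p)}$ for a binary symmetric channel.

```python
import math

def capacity(W, q):
    """Symmetric capacity I(W), measured in q-ary units; W[x][y] = W(y|x)."""
    ny = len(W[0])
    Wo = [sum(W[x][y] for x in range(q)) / q for y in range(ny)]  # output marginal
    return sum((W[x][y] / q) * math.log(W[x][y] / Wo[y], q)
               for x in range(q) for y in range(ny) if W[x][y] > 0)

def bhattacharyya(W, q):
    """Average Bhattacharyya parameter Z(W) over ordered pairs x != x'."""
    ny = len(W[0])
    Z = sum(math.sqrt(W[x][y] * W[xp][y])
            for x in range(q) for xp in range(q) if x != xp for y in range(ny))
    return Z / (q * (q - 1))

p = 0.11
W_bsc = [[1 - p, p], [p, 1 - p]]
assert abs(bhattacharyya(W_bsc, 2) - 2 * math.sqrt(p * (1 - p))) < 1e-12
h2 = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
assert abs(capacity(W_bsc, 2) - (1 - h2)) < 1e-9
```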
We construct a polar code choosing an information set $\mathcal{A}_n\subset\{1,\ldots,N\}$ with the condition that for each $i\in\mathcal{A}_n$ and $j\notin\mathcal{A}_n$ yields
$$Z\left(W_n^{(i)}\right)\leq Z\left(W_n^{(j)}\right).$$
The polar code $C_{\mathcal{A}_n}$ is generated by the rows of $G_n$ indexed by $\mathcal{A}_n$. Due to the polarization of $G$ over $W$, this code will have a low block error probability. To see this, we use the concept of rate of polarization or exponent of $G$ introduced by Korada et al. in \cite{korada} and generalized by Mori and Tanaka in \cite{morinl}. The exponent of $G$ is defined as
$$E(G)=\frac{1}{l\ln l}\sum_{i=1}^l\ln D_i,$$
where $D_i$ is called the partial distance and is defined as $D_i=d(G_i,\langle G_{i+1},\ldots,G_l\rangle)$, where $G_i$ is the $i$-th row of $G$. Ar\i kan's original matrix $G_A$ has exponent $\frac{1}{2}$. The exponent is the value such that
\begin{itemize}
\item for any fixed $\beta<E(G)$
$$\liminf_{n\rightarrow\infty} P[Z_n\leq 2^{-N^\beta}]=I(W).$$
\item For any fixed $\beta>E(G)$
$$\liminf_{n\rightarrow\infty} P[Z_n\geq 2^{-N^\beta}]=1,$$
\end{itemize}
where $Z_n=Z(W'_n)$ and $W'_n={W'_{n-1}}_1^{(B_n)}$ with $\{B_n\}_{n\in\mathbb{N}}$ independent random variables identically distributed over $\{1,\ldots,l\}$.
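For small kernels the partial distances $D_i$, and hence $E(G)$, can be computed by brute force; the sketch below (our own code) recovers $E(G_A)=\frac{1}{2}$.

```python
import math
from itertools import product

def partial_distances(G, q=2):
    """D_i = d(G_i, <G_{i+1}, ..., G_l>), by enumerating the spanned subspace."""
    l = len(G)
    D = []
    for i in range(l):
        rows = G[i + 1:]
        best = l + 1
        for coeffs in product(range(q), repeat=len(rows)):
            v = [sum(c * r[j] for c, r in zip(coeffs, rows)) % q for j in range(l)]
            best = min(best, sum(1 for a, b in zip(G[i], v) if a != b))
        D.append(best)
    return D

def exponent(G, q=2):
    """E(G) = (1 / (l ln l)) * sum_i ln(D_i)."""
    l = len(G)
    return sum(math.log(d) for d in partial_distances(G, q)) / (l * math.log(l))

G_A = [[1, 0], [1, 1]]
assert partial_distances(G_A) == [1, 2]
assert abs(exponent(G_A) - 0.5) < 1e-12
```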
Anderson and Matthews proved in \cite{Anderson} that this means that, for any $\beta<E(G)$, polar coding using kernel $G$ over a DMC $W$ at a fixed rate $0<R<I(W)$ and block length $N=l^n$ implies
$$P_e=O(2^{-N^\beta}),$$
where $P_e$ is the probability of block error.
The partial distances $D_i$ can be estimated from the sequence of nested codes $\langle G_i,\ldots, G_l\rangle$, and shortening matrices leads to good exponents at smaller sizes. Computing the exponent and the information set $\mathcal{A}_n$ are two of the main problems in polar coding. Regarding the latter, Bardet et al.~\cite{Bardet} proved that for $G_A$ the structure of the information set can be derived from a monomial order over $\mathbb{F}_2[x_1,\ldots,x_n]$; they also proved that minimum distances are computable and that duals of polar codes have similar structures, using the fact that rational curves over $\mathbb{F}_2$ have nice properties.
All these conditions lead us to consider a special type of algebraic curves, the Castle-like curves, which have a nested code structure. They can be described in terms of a finitely generated algebra and satisfy the isometry-dual property.
\subsection{Algebraic pointed curves and AG codes}
Let us recall some facts about algebraic geometry (AG) codes over curves (for an extensive account on AG codes see for example~\cite{AGC}). By a curve we mean a projective, non-singular, geometrically irreducible algebraic curve $\mathcal{X}$ over $\mathbb{F}_q$, and we denote by $\mathcal{X}(\mathbb{F}_q)$ its rational points, by $\mathbb{F}_q(\mathcal{X})$ its function field and by $g=g(\mathcal{X})$ its genus. We will consider two rational divisors: $D=\sum_{i=1}^l P_i$, where the $P_i$, $i=1,\ldots , l$, are distinct rational points of the curve (rational places), and $G$ such that $\mathrm{supp}\ D\cap\mathrm{supp}\ G=\emptyset$ and $1\leq \mathrm{deg}(G)\leq l+2g-1$. We define the evaluation map $ev_D:\mathcal{L}(G)\rightarrow\mathbb{F}_q^l$ as
$$ev_D(f)=(f(P_1),\ldots,f(P_l)),$$
where $\mathcal{L}(G)$ is the vector space of rational functions over the curve such that either $f=0$ or $\mathrm{div}(f)+G\geq 0$.
We define the evaluation code as $C(D,G)=ev_D(\mathcal{L}(G))$. The kernel of $ev_D$ is $\mathcal{L}(G-D)$ and the length of $C(D,G)$ is $\deg D$, its dimension $k=l(G)-l(G-D)$ and its minimum distance $\delta(C(D,G))\geq \deg D-\deg G$.
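On the rational curve (genus $0$) these parameters can be checked by brute force: the codes $C(D,mQ)$ are Reed--Solomon codes, and the sketch below (illustrative code, not from the paper) verifies over $\mathbb{F}_5$ that the bound $\delta\geq \deg D-\deg G$ is attained with equality.

```python
from itertools import product

q, m = 5, 2
# Rational curve over F_5: D = sum of the 5 affine rational points, G = mQ with Q the pole
# of x, so L(mQ) has basis 1, x, ..., x^m and C(D, mQ) is a Reed-Solomon code of length 5.
pts = list(range(q))
basis = [[pow(a, i, q) for a in pts] for i in range(m + 1)]  # rows ev_D(x^i), i = 0..m

dmin = q
for coeffs in product(range(q), repeat=m + 1):
    if any(coeffs):
        cw = [sum(c * b[j] for c, b in zip(coeffs, basis)) % q for j in range(q)]
        dmin = min(dmin, sum(1 for v in cw if v))
assert dmin == q - m  # minimum distance 3 = deg D - deg G
```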
Given $Q\in\mathcal{X}(\mathbb{F}_q)$, we call the pair $(\mathcal{X},Q)$ a pointed curve. We denote by $H(Q)$ the Weierstrass semigroup of $Q$ and, given $D$ as before, we denote $$H^\ast(Q)=\{m\in\mathbb{N}_0\ |\ C(D,(m-1)Q)\neq C(D,mQ)\}.$$ Clearly, $|H^\ast(Q)|=l$ and we can write $H^\ast(Q)=\{m_1,\ldots,m_l\}$. Most of the information about the codes $\{C(D,mQ)\}_{m\in\mathbb{N}_0}$ is contained in $H^\ast(Q)$.
If $\mathcal{X}$ is a curve of genus $g$ we say that $H(Q)$ is symmetric if
$$h\in H(Q)\Longleftrightarrow 2g-1-h\notin H(Q).$$
When $H(Q)$ is symmetric and $D\equiv lQ$ we have $H^\ast(Q)=\left(H(Q)\cap\{0,1,\ldots,l-1\}\right)\cup\{l+l_1,\ldots,l+l_g\}$, where the $l_i$ are the gaps of $Q$~\cite{diegorder}. The isometry-dual condition for a sequence of nested codes $\{C_i\}_{i=1}^l$ of length $l$, $C_i\subsetneq C_{i+1}$, means that there is $x\in\mathbb{F}_q^l$ such that for each $i\in\{1,\ldots, l\}$, $C_i^\perp$ is isometric by $x$ to $C_{l-i}$. It was also proved in \cite{diegorder} that the following statements are equivalent when $l \geq 2g-2$:
\begin{itemize}
\item The set $\{C(D,mQ)\}_{m\in H^\ast(Q)}$ satisfies the isometry-dual condition,
\item the divisor $(l+2g-2)Q-D$ is canonical,
\item $l+2g-1\in H^\ast(Q)$.
\end{itemize}
Then, if $l\geq 2g-2$ and $D\equiv lQ$, the sequence of nested codes $\{C(D,m_iQ)\}_{m_i\in H^\ast(Q)}$ satisfies the isometry-dual condition. Observe that a rational curve satisfies these conditions.
\section{Algebraic Curves and Kernels}\label{sec:AGkernels}
From now on $G$ will be a non-singular square matrix of size $l\times l$ over the finite field $\mathbb{F}_q$ with $q$ elements ($q=p^r$ a power of a prime) and $W:\mathbb{F}_q\rightarrow\mathcal{O}$ a DMC. Let $G_n$ be the matrix used for constructing a polar code of length $N=l^n$ based on $G$ and consider $$W_n(y_1^N|u_1^N)=\prod_{k=1}^N W(y_k|u_1^N(G_n)_{\ast,k})$$
and the partitions given by
$$W_n^{(i)}(y_1^N,u_1^{i-1}|u_i)=\sum_{u_{i+1}^N} W_n(y_1^N|u_1^N).$$
Note that the channels
$\left(W_{n-1}^{(i)}\right)_1^{(j)}$ and $W_n^{((i-1)l+j)}$ are the same in the sense that the parameters $I(W)$ and $Z(W)$ are equal in both cases.
\begin{proposition}\label{degchanpart}
If $W:\mathbb{F}_q\rightarrow\mathcal{O}$ is a DMC and $1\leq i\leq l^{n-1}$ and $1\leq j\leq l$ are two integers, then
$$\left(W_{n-1}^{(i)}\right)_1^{(j)}= W_n^{((i-1)l+j)}.$$
\end{proposition}
\begin{IEEEproof}
Let $$f:\mathcal{O}^{l^n}\times\mathbb{F}_q^{l(i-1)}\rightarrow\left(\mathcal{O}^{l^{n-1}}\times\mathbb{F}_q^{i-1}\right)^l$$ be defined as follows
$$f(y_1^{l^n},u_1^{l(i-1)})=(y_{(k-1)l^{n-1}+1}^{kl^{n-1}},u_1^lG_{\ast,k},\ldots,u_{(i-2)l+1}^{(i-1)l}G_{\ast,k})_{k=1}^l.$$
Then we have that
\begin{align*}
&W_n^{((i-1)l+j)}(y_1^{l^n},u_1^{(i-1)l+j-1}|u_{(i-1)l+j})\\=&\frac{1}{q^{l^{n}-1}}\sum_{u_{(i-1)l+j+1}^{l^n}}\prod_{k=1}^{l^n}W\left(y_k\left|\sum_{h=1}^{l^n} u_h(G_n)_{h,k}\right.\right)\\
=&\frac{1}{q^{l^{n}-1}}\sum_{u_{(i-1)l+j+1}^{l^n}}\prod_{k=1}^{l}\prod_{k'=1}^{l^{n-1}}W\left(y_{(k-1)l^{n-1}+k'}\left|\left(u_{(h-1)l+1}^{hl}G_{\ast,k}\right)_{h=1}^{l^{n-1}}(G_{n-1})_{\ast,k'}\right.\right)\\
\stackrel{(\ast)}{=}&\frac{1}{q^{l-1}}\sum_{u_{(i-1)l+j+1}^{il}}\prod_{k=1}^l\left[\frac{1}{q^{l^{n-1}-1}}\sum_{v_{i+1}^{l^{n-1}}}\prod_{k'=1}^{l^{n-1}}W\left(y_{(k-1)l^{n-1}+k'}\left|((u_{(h-1)l+1}^{hl}G_{\ast,k})_{h=1}^{i},v_{i+1}^{l^{n-1}})(G_{n-1})_{\ast,k'}\right.\right)\right]\\
=&\frac{1}{q^{l-1}}\sum_{u_{(i-1)l+j+1}^{il}}\prod_{k=1}^lW_{n-1}^{(i)}\left(f(y_1^{l^n},u_1^{l(i-1)})_k\left|u_{(i-1)l+1}^{il}G_{\ast,k}\right.\right)\\
=&\left(W_{n-1}^{(i)}\right)_1^{(j)}\left(f(y_1^{l^n},u_1^{l(i-1)}),u_{(i-1)l+1}^{(i-1)l+j-1}\left|u_{(i-1)l+j}\right.\right)
\end{align*}
where the equality $(\ast)$ follows from the fact that the space generated by the last $l^n-(i-1)l-j+1$ rows of the matrix $G_n$ has the same dimension as the $l$-fold Cartesian product of the space generated by the last $l^{n-1}-i+1$ rows of $G_{n-1}$. Therefore, since there is a bijection between the output alphabets of both channels, their parameters are the same.
\end{IEEEproof}
We will be interested in those channels where we can \textit{recognize} the operations among their elements, more formally
\begin{definition}
Let $W:\mathbb{F}_q\rightarrow\mathcal{O}$ be a DMC. We say that $W$ is symmetric w.r.t the field addition if for each $x\in\mathbb{F}_q$ there is a permutation $\sigma_x\in\mathrm{SG}(\mathcal{O})$ such that
$$W(y|x)=W(\sigma_{x'-x}(y)|x'),\ \ \ \ \ \forall x,x'\in\mathbb{F}_q,\ \forall y\in\mathcal{O}.$$
We say that $W$ is symmetric w.r.t. the field product if for each $\alpha\in\mathbb{F}_q^\ast$ there is a permutation $\psi_\alpha$ such that
$$W(y|x)=W(\psi_\alpha(y)|\alpha x),\ \ \ \ \ \forall x\in\mathbb{F}_q,\ \forall y\in\mathcal{O}$$
We say that $W$ is symmetric w.r.t. the field operations in $\mathbb{F}_q$ (SOF) if $W$ is symmetric w.r.t. the field addition and product.
\end{definition}
\begin{remark} Note that if the channel
$W$ is symmetric w.r.t.\ the field addition, then for each $\alpha\in\mathbb{F}_q$ we have that
$$W(y|x)=W(\sigma_\alpha(y)|\alpha+x).$$
\end{remark}
\begin{ex} Consider the channel $W_{Sq}:\mathbb{F}_q\rightarrow\mathbb{F}_q$ with transition probabilities given by
$$W_{Sq}(y|x)=(1-p)\chi_{x}(y)+\frac{p}{q}.$$
Then
$W_{Sq}$ is a SOF channel. This channel has been studied in \cite{qSC4,qSC3,qSC2,qSC1}.
\end{ex}
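The SOF property of $W_{Sq}$ can be verified directly for a prime field, where $\sigma_a(y)=y+a$ and $\psi_\alpha(y)=\alpha y$; the Python sketch below takes the denominator in the transition probabilities to be $q$ (our reading of the example) so that each row is a probability distribution.

```python
q, p = 5, 0.2
# W_Sq(y|x) = (1 - p) * 1[y == x] + p / q, as a q x q transition matrix over F_q = Z/qZ.
W = [[(1 - p) * (y == x) + p / q for y in range(q)] for x in range(q)]

# Each row is a probability distribution.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in W)

# Additive symmetry: W(y|x) = W(sigma_{x'-x}(y) | x') with sigma_a(y) = y + a.
for x in range(q):
    for xp in range(q):
        assert all(W[x][y] == W[xp][(y + xp - x) % q] for y in range(q))

# Multiplicative symmetry: W(y|x) = W(psi_alpha(y) | alpha x) with psi_alpha(y) = alpha * y.
for alpha in range(1, q):
    for x in range(q):
        assert all(W[x][y] == W[(alpha * x) % q][(alpha * y) % q] for y in range(q))
```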
\begin{corollary}
In the binary case $q=2$ we have that any channel symmetric w.r.t the field addition is also a SOF channel.
\end{corollary}
\noindent Using Proposition~\ref{degchanpart} the polarization process can be analyzed inductively using the following result.
\begin{proposition}
If $W:\mathbb{F}_q\rightarrow\mathcal{O}$ is a SOF channel and $G$ a non-singular square matrix of size $l\times l$ over $\mathbb{F}_q$, then $W_1^{(i)}$ is also a SOF channel.
\end{proposition}
\begin{IEEEproof} The symmetry w.r.t.\ the addition is known; see \cite{moriq}. Therefore we only check the symmetry w.r.t.\ the product.
Let $i\in\{1,\ldots,N\}$ and $\alpha\in\mathbb{F}_q^\ast$, then we have that
\begin{align*}
W_n^{(i)}(y_1^N,u_1^{i-1}|u_i)&=\sum_{u_{i+1}^N}W_n(y_1^N|u_1^N)\\
&=\sum_{u_{i+1}^N}\prod_{k=1}^N W\left(y_k\left|\sum_{j=1}^N u_j(G_n)_{j,k}\right.\right)\\
&=\sum_{u_{i+1}^N}\prod_{k=1}^N W\left(\psi_{\alpha}(y_k)\left|\alpha\sum_{j=1}^N u_j(G_n)_{j,k}\right.\right)\\
&=W_n^{(i)}\left((\psi_{\alpha}(y_k))_{k=1}^N,\alpha u_1^{i-1} | \alpha u_i\right)
\end{align*}
since $(u_{i+1},\ldots,u_N)\mapsto(\alpha u_{i+1},\ldots,\alpha u_N)$ is a bijection. Hence we define
$$\Psi_{\alpha}(y_1^N,u_1^{i-1})=((\psi_{\alpha}(y_k))_{k=1}^N,\alpha u_1^{i-1})$$
and we get the result.
\end{IEEEproof}
The following result also follows from \cite{moriq}.
\begin{proposition}\label{scsf}
Let $G$ be a non-singular square matrix of size $l\times l$ over $\mathbb{F}_q$, $V$ an invertible upper triangular matrix and $P$ a permutation matrix, and consider $G'=VGP$. Let $W:\mathbb{F}_q\rightarrow\mathcal{O}$ be a SOF channel and $W_1^{(i)}$ and ${W'}_1^{(i)}$ the channels associated to the polarization processes with the matrices $G$ and $G'$, respectively. Then we have
$$I(W_1^{(i)})=I({W'}_1^{(i)}),$$
$$Z(W_1^{(i)})=Z({W'}_1^{(i)}).$$
\end{proposition}
\begin{corollary}\label{mcfs}
If $G$ polarizes a SOF channel $W$ and the matrices $G'$ and $G$ are given as in the above proposition, then $G'$ also polarizes $W$. Moreover, if $\mathcal{A}_n$ and $\mathcal{A}'_n$ are the information sets generated by $G$ and $G'$ respectively, then
$$\mathcal{A}_n=\mathcal{A}'_n.$$
\end{corollary}
\begin{IEEEproof}
It follows from Proposition~\ref{scsf} and Proposition~\ref{degchanpart}.
\end{IEEEproof}
As we have seen before, when the channel $W$ is symmetric w.r.t.\ the addition, the kernel of the polar code has all the information in the spaces
$$\langle G_{l,\ast}\rangle\subset\ldots\subset\langle G_{1,\ast},\ldots,G_{l,\ast}\rangle.$$
It is natural to associate this structure with a nested sequence of codes arising from an algebraic curve. Let $\mathcal{X}$ be an algebraic curve, let $D=\sum_{i=1}^l P_i$, where $P_i\neq P_j$ if $i\neq j$ and the $P_i$ are rational points (places of $\mathcal{X}$ of degree 1), and let us suppose that there exist divisors $A_1,\ldots,A_l$ such that the support of $A_i$ is disjoint from that of $D$ for each $i$, $A_i\leq A_{i+1}$ and
\begin{equation}
\mathbb{F}_q=C(D,A_1)\subsetneq\ldots\subsetneq C(D,A_l)=\mathbb{F}_q^l.
\label{kerag}
\end{equation}
We consider now
functions $f_1,\ldots,f_l$ such that $\langle ev_D(f_1),\ldots,ev_D(f_i)\rangle =C(D,A_i)$, and we build the evaluation matrix $G$ given by $G_{i,\ast}=ev_D(f_i)$.
Pointed algebraic curves satisfy the above construction: given a pointed curve $(\mathcal{X},Q)$ and $D=\sum_{i=1}^l P_i$, where the $P_i$ are distinct rational points, writing $H^\ast(Q)=\{m_1,\ldots,m_l\}$ and taking $A_i=m_i Q$, we get the desired structure.
\begin{ex}
Consider the field with 4 elements $\mathbb{F}_4$ and the Hermitian curve $x^3=y^2+y$. If we take $Q$ as the common pole of $x$ and $y$ and the divisor $D=\sum P_{\alpha,\beta}$ where $P_{\alpha,\beta}$ is the common zero of $x-\alpha$ and $y-\beta$, then $\deg D=8$ and $H^\ast(Q)=\{0,2,3,4,5,6,7,9\}$. It follows that
$$\begin{array}{r|cccccccc}
& 00 & 01 & 1\alpha & 1\alpha^2& \alpha\alpha & \alpha\alpha^2 & \alpha^2\alpha & \alpha^2\alpha^2\\\hline
x^3y&0&0&\alpha&\alpha^2&\alpha&\alpha^2&\alpha&\alpha^2\\
x^2y & 0& 0&\alpha&\alpha^2&1&\alpha&\alpha^2&1\\
x^3& 0&0&1&1&1&1&1&1\\
xy & 0&0&\alpha&\alpha^2&\alpha^2&1&1&\alpha\\
x^2& 0&0&1&1&\alpha^2&\alpha^2&\alpha&\alpha\\
y&0&1&\alpha&\alpha^2&\alpha&\alpha^2&\alpha&\alpha^2\\
x&0&0&1&1&\alpha&\alpha&\alpha^2&\alpha^2\\
1&1&1&1&1&1&1&1&1
\end{array}$$
\end{ex}
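The set $H^\ast(Q)$ in this example can be recomputed from the semigroup data alone: $H(Q)=\langle 2,3\rangle$ (the pole orders of $x$ and $y$) and $l=\deg D=8$. A short sketch (our own code), using the fact that for $l\geq 2g$ the set $H^\ast(Q)$ consists of the non-gaps below $l$ together with $l$ plus each gap:

```python
def semigroup(gens, bound):
    """Elements of the numerical semigroup <gens> up to `bound` (inclusive)."""
    H, frontier = {0}, [0]
    while frontier:
        h = frontier.pop()
        for a in gens:
            if h + a <= bound and h + a not in H:
                H.add(h + a)
                frontier.append(h + a)
    return H

def h_star(gens, l):
    """H*(Q) = (H(Q) ∩ {0,...,l-1}) ∪ {l + gap : gap of Q}, valid for l >= 2g."""
    H = semigroup(gens, 2 * l)
    gaps = [m for m in range(l) if m not in H]  # every gap is < 2g <= l
    return sorted({m for m in H if m < l} | {l + g for g in gaps})

# Hermitian example over F_4: H(Q) = <2, 3>, l = deg D = 8.
assert h_star((2, 3), 8) == [0, 2, 3, 4, 5, 6, 7, 9]
```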
When the channel $W$ is symmetric w.r.t.\ the addition, we call \emph{the kernel associated to the pointed curve $(\mathcal{X},Q)$} any evaluation matrix generated by a basis $\{f_1,\ldots,f_l\}$ where each $f_i\in\mathcal{L}(m_iQ)\setminus\mathcal{L}(m_{i-1}Q)$. Note that this is well defined, since by Corollary~\ref{mcfs} any matrix of this form produces the same set $\mathcal{A}_n$.
In order to study the structure of those matrices associated to curves note that $\mathcal{L}(\infty Q)=\bigcup_{m=0}^\infty \mathcal{L}(mQ)$ is a finitely generated algebra.
\begin{proposition}[\cite{ruudorder}, Proposition 5.2]\label{polrayo}
Let $(\mathcal{X},Q)$ be a pointed curve and $H(Q)=\langle a_1,\ldots,a_s\rangle$, where $\{a_1,\ldots,a_s\}$ is a minimal generator set of $H(Q)$, then there exists an ideal $I\subset\mathbb{F}_q[t_1,\ldots,t_s]$ such that
$$\mathcal{L}(\infty Q)=\mathbb{F}_q[t_1,\ldots,t_s]/I.$$
\end{proposition}
\begin{proposition}
Let $D=\sum_{i=1}^l P_i$ be a divisor of rational places and let us suppose that there exists a $z\in\mathcal{L}(\infty Q)$ such that
$$(z)=D-lQ.$$
If $f_z\in\mathbb{F}_q[t_1,\ldots,t_s]$ is a polynomial such that $f_z$ represents $z$, then
$$ev_D(\mathcal{L}(\infty Q))=\mathbb{F}_q[t_1,\ldots,t_s]/\langle I,f_z\rangle.$$
\end{proposition}
\begin{IEEEproof}
If $x\in\ker(ev_D)\cap\mathcal{L}(mQ)$, then $x\in\mathcal{L}(mQ-D)$ and since $(z)=D-lQ$ then we have
$$x\in z\mathcal{L}((m-l)Q).$$
That is, the image of $x$ in $\mathbb{F}_q[t_1,\ldots,t_s]$ lies in the ideal generated by the equivalence class represented by $f_z$. Since $m$ was chosen arbitrarily, we obtain one containment. The other containment follows since $ev_D(yz)=ev_D(y)\ast ev_D(z)=0$.
\end{IEEEproof}
\section{Information sets for SOF channels}
In this section we analyze the information set $\mathcal{A}_n$ for a SOF channel. The main tool we will use is channel degradation.
\begin{definition}\normalfont
Let $W:\mathcal{I}\rightarrow\mathcal{O}$ and $W':\mathcal{I}\rightarrow\mathcal{O}'$ be two DMCs. We say that $W'$ is a degradation of $W$, denoted $W'\preceq W$, if there exists a channel $W'':\mathcal{O}\rightarrow\mathcal{O}'$ such that
$$W'(y|x)=\sum_{z\in\mathcal{O}}W''(y|z)W(z|x)$$
for any $y\in\mathcal{O}',x\in\mathcal{I}$.
\end{definition}
One can think of degradation as a ``composition'' of channels, in the sense that the transition probability of $W'$ represents the probability of the event that, if we send $x$ through channel $W$ and the received symbol is transmitted through channel $W''$, we get $y$. That is
\[\begin{tikzcd}
x\arrow[r, "W"]\arrow{rd}{}[swap]{W'} &z \arrow[d, "W''"]\\
& y
\end{tikzcd}\]
Therefore degradation makes the transmission worse.
\begin{proposition}\label{degpar}
If channels $W:\mathcal{I}\rightarrow\mathcal{O}$ and $W':\mathcal{I}\rightarrow\mathcal{O'}$ satisfy $W'\preceq W$, then
\begin{align*}
Z(W)&\leq Z(W'),\\
I(W)&\geq I(W').
\end{align*}
\end{proposition}
\begin{IEEEproof}
Let $a,b\in\mathcal{I}$, then
\begin{align*}
Z_{a,b}(W')&=\sum_{y\in\mathcal{O'}}\sqrt{W'(y|a)W'(y|b)}\\
&=\sum_{y\in\mathcal{O'}}\sqrt{\sum_{z\in\mathcal{O}} W''(y|z)W(z|a)\sum_{z\in\mathcal{O}} W''(y|z)W(z|b)}\\
&\geq \sum_{y\in\mathcal{O'}}\sum_{z\in\mathcal{O}}\sqrt{W(z|a)W(z|b)}W''(y|z)\\
&=\sum_{z\in\mathcal{O}}\sqrt{W(z|a)W(z|b)}\\
&=Z_{a,b}(W)
\end{align*}
\noindent where the inequality follows from the Cauchy-Schwarz inequality. Taking the mean over all pairs $(a,b)\in\mathcal{I}^2$, $a\neq b$, yields the desired result.
The second inequality follows from the data processing inequality \cite{dpi}.
\end{IEEEproof}
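Proposition~\ref{degpar} can be illustrated numerically: composing a BSC with a second BSC yields a degraded channel, and the inequalities on $Z$ and $I$ can be checked directly (the helper code below is ours; channels are row-stochastic matrices with $W[x][y]=W(y|x)$).

```python
import math

def compose(W, W2):
    """Degraded channel W'(y|x) = sum_z W2(y|z) W(z|x)."""
    return [[sum(W[x][z] * W2[z][y] for z in range(len(W2)))
             for y in range(len(W2[0]))] for x in range(len(W))]

def Z(W):
    q, ny = len(W), len(W[0])
    s = sum(math.sqrt(W[x][y] * W[xp][y])
            for x in range(q) for xp in range(q) if x != xp for y in range(ny))
    return s / (q * (q - 1))

def I(W):
    q, ny = len(W), len(W[0])
    Wo = [sum(W[x][y] for x in range(q)) / q for y in range(ny)]
    return sum((W[x][y] / q) * math.log(W[x][y] / Wo[y], q)
               for x in range(q) for y in range(ny) if W[x][y] > 0)

bsc = lambda p: [[1 - p, p], [p, 1 - p]]
W = bsc(0.1)
W_deg = compose(W, bsc(0.05))  # BSC(0.1) followed by BSC(0.05) = BSC(0.14)
assert Z(W_deg) >= Z(W) and I(W_deg) <= I(W)
```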
Moreover, degradation is preserved by the polarization process. More formally:
\begin{proposition}\label{degpolar}
Let $W:\mathbb{F}_q\rightarrow\mathcal{O}$ and $W':\mathbb{F}_q\rightarrow\mathcal{O}'$ be two channels such that $W'\preceq W$ and let $G$ be a non-singular square matrix of size $l\times l$ over the finite field $\mathbb{F}_q$, then
$${W'}_1^{(i)}\preceq W_1^{(i)}.$$
\end{proposition}
\begin{IEEEproof}
\begin{align*}
{W'}_1^{(i)}(y_1^l,u_1^{i-1}|u_i)&=\sum_{u_{i+1}^l}\prod_{k=1}^l W'(y_k|uG_{\ast,k})\\
&=\sum_{u_{i+1}^l}\prod_{k=1}^l\sum_{z\in\mathcal{O}} W''(y_k|z)W(z|uG_{\ast,k})\\
&=\sum_{u_{i+1}^l} \sum_{z_1^l}\prod_{k=1}^l W''(y_k|z_k)W(z_k|uG_{\ast,k})\\
&=\sum_{z_1^l}\prod_{k=1}^l W''(y_k|z_k)\sum_{u_{i+1}^l}\prod_{k=1}^lW(z_k|uG_{\ast,k})\\
&=\sum_{z_1^l}\prod_{k=1}^l W''(y_k|z_k)W_1^{(i)}(z_1^l,u_1^{i-1}|u_i).
\end{align*}
If we define $W'''(y_1^l,u_1^{i-1}|z_1^l,u_1^{i-1})=\prod_{k=1}^l W''(y_k|z_k)$ we conclude the proof.
\end{IEEEproof}
\begin{lemma}\label{lemma}
Let $(\mathcal{X},Q)$ be a pointed curve of genus $g$ such that $l\geq 2g$, and let $\{a_1,\ldots,a_s\}$ be a minimal set of generators of $H(Q)$. Let us define $f_i:H^\ast(Q)\rightarrow H^\ast(Q)$ as
$$f_i(m)=\begin{cases}
m-a_i& m-a_i\in H^\ast(Q)\\
l+m-a_i& m-a_i\notin H^\ast(Q)
\end{cases},$$
then $f_i$ is a bijection.
\end{lemma}
\begin{IEEEproof}
Since $l\geq 2g$ we know that $H^\ast(Q)=H(Q)\setminus(l+H(Q))=\left(H(Q)\cap \{0,\ldots,l-1\}\right)\cup\{l+l_1,\ldots,l+l_g\}$, where the $l_i$ are the gaps of $Q$.
If $a_i\leq m<l$, then either $m-a_i\in H(Q)$ (and therefore in $H^\ast(Q)$) or $m-a_i\notin H(Q)$, and hence $l+m-a_i\in H^\ast(Q)$, while $l+m\notin H^\ast(Q)$ (otherwise $m$ would be a gap).
If $m>l$ and $m-a_i<l$, then $m-a_i\in H^\ast(Q)$ but $m-a_i\notin f_i(\{a_i,\ldots,l-1\}\cap H(Q))\subset \{0,\ldots,l-a_i-1\}$.
On the other hand, if $m>l$ and $m-a_i>l$, then $m-a_i\in H^\ast(Q)$. Indeed, if $m-a_i\notin H^\ast(Q)$ then $m-a_i-l$ is a non-gap, but $m-l$ is a gap, which contradicts $a_i\in H(Q)$. Therefore $m-a_i\neq f_i(m')$ for any $a_i\leq m'<l$.
Finally, if $m<a_i$ then $-a_i\leq m-a_i<0$, and hence $l-a_i\leq l+m-a_i<l$; since $m$ is a non-gap, this value is not covered in the previous cases. Therefore $f_i$ is injective and, by cardinality, bijective.
\end{IEEEproof}
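The bijection of Lemma~\ref{lemma} can be checked on the Hermitian example, where $H(Q)=\langle 2,3\rangle$ and $l=8$ (illustrative code, not from the paper):

```python
def f(m, a, Hstar, l):
    """The map f_i of the lemma: subtract the generator a, wrapping by l on leaving H*(Q)."""
    return m - a if m - a in Hstar else l + m - a

# Hermitian example: H*(Q) for H(Q) = <2, 3> and l = 8.
Hstar = {0, 2, 3, 4, 5, 6, 7, 9}
for a in (2, 3):
    assert {f(m, a, Hstar, 8) for m in Hstar} == Hstar  # f_i permutes H*(Q)
```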
From now on let $T=(t_1,\ldots,t_s)$ and let $R[T]=\mathbb{F}_q[t_1,\ldots,t_s]/I(T)$, where the ideal $I(T)$ is the one given in Proposition~\ref{polrayo}.
\begin{theorem}\label{degsem}
Let $W$ be a SOF channel. Consider a pointed curve $(\mathcal{X},Q)$ and a divisor on the curve $D=(z)_0$ with $l=\deg (z)_0\geq 2g$, and let $H^\ast(Q)=\{m_0,\ldots,m_{l-1}\}$. Consider also an element $m_{l-i}\in H^\ast(Q)$ with $m_{l-i}<l$. If $m_{l-j}=m_{l-i}-a_r\in H^\ast(Q)$, where $a_r$ is one of the generators of $H(Q)$, then
$$W_1^{(i)}\preceq W_1^{(j)}.$$
\end{theorem}
\begin{IEEEproof}
We can choose monomials $M_k$ in $R[T]$ such that $v_Q(M_k)=m_{l-k}$, such that if $m_{l-k}-a_r\in H^\ast(Q)$ then $t_r|M_k$, and such that if $b=\max\{a\in\mathbb{N}_0\ |\ t_r^a|f,\ f\in\mathcal{L}(m_{l-k}Q)\setminus\mathcal{L}((m_{l-k}-1)Q)\}$ then $t_r^b|M_k$. We construct the kernel $G$ by evaluating those monomials. Thus, if $t_r|M_k$ for some $k$, then $\frac{M_k}{t_r}=M_{k'}$ for some $k'$, and it is clear that $m_{l-k}=m_{l-k'}+a_r$.
Consider the polynomial $f=\sum_{k=1}^l u_kM_k$ and write $W_1(y|f)=\prod_{k=1}^l W(y_k|f(P_{k}))$. We have that $W_1(y|f)=W_1(y|u_1^l)$. If we consider $A=\{k\in\{1,\ldots,l\}\ |\ m_{l-k}-a_r\in H^\ast(Q)\}$, then
$$W_1(y|f)=W_1\left(y\left|\sum_{k\in A} u_kM_k+\sum_{k\notin A} u_kM_k\right.\right).$$
Let $F_r$ be the function defined by $F_r(M_k)=M_{k'}\Leftrightarrow f_r(m_{l-k})=m_{l-k'}$. Applying Lemma~\ref{lemma} we have a bijection of the chosen monomials, and also
$$W_1(y|f)=W_1\left(y\left|\sum_{k\in A}u_kF_r(M_k)t_r+\sum_{k\notin A} u_kM_k\right.\right),$$
where $u_iM_i=u_iM_jt_r$. We define $\overline{y}=(y_{\alpha_1},\ldots,y_{\alpha_z})$ where $\mathrm{supp}\ t_r=\{\alpha_1<\ldots<\alpha_z\}$ and if $g$ is a polynomial we define
$$\sigma_g(\overline{y})=(\sigma_{g(P_{\alpha_1})}(y_{\alpha_1}),\ldots,\sigma_{g(P_{\alpha_z})}(y_{\alpha_z})),$$
$$\psi_{t_r^{-1}}(\overline{y})=\left(\psi_{t_r^{-1}(P_{\alpha_1})}(y_{\alpha_1}),\ldots,\psi_{t_r^{-1}(P_{\alpha_z})}(y_{\alpha_z})\right).$$
Note that
{
\begin{align*}
W_1(y|f)&=W_1\left(y\left|\sum_{k\in A} u_kF_r(M_k)t_r+\sum_{k\notin A}u_kM_k\right.\right)\\
&=\prod_{\alpha\notin\mathrm{supp}\ t_r}W\left(y_\alpha\left|\sum_{k\notin A}u_kM_k(P_{\alpha})\right.\right)\prod_{h=1}^zW\left(y_{\alpha_h}\left|\left(\sum_{k\in A}u_kF_r(M_k)t_r+\sum_{k\notin A}u_kM_k\right)(P_{\alpha_h})\right.\right)
\end{align*}
}
Let $\hat{u}$ be the vector obtained by keeping only those entries $u_k$ with $k\notin A$, and let $g(\hat{u})=\displaystyle\sum_{k\notin A}u_k(F_r(M_k)t_r-M_k)$. Since we are in a SOF channel it follows that
{
\begin{align*}
W_1(y|f)&=\prod_{\alpha\notin\mathrm{supp}\ t_r}W\left(y_\alpha\left|\sum_{k\notin A}u_kM_k(P_{\alpha})\right.\right)\prod_{h=1}^zW\left(y_{\alpha_h}\left|\left(\sum_{k\in A}u_kF_r(M_k)t_r+\sum_{k\notin A}u_kM_k\right)(P_{\alpha_h})\right.\right)\\
&=\prod_{\alpha\notin\mathrm{supp}\ t_r}W\left(y_\alpha\left|\sum_{k\notin A}u_kM_k(P_{\alpha})\right.\right)\prod_{h=1}^zW\left(\sigma_{g(\hat{u})}(\overline{y})_h\left|\left(t_r\sum_{k=1}^l u_kF_r(M_k)\right)(P_{\alpha_h})\right.\right)\\
&=\prod_{\alpha\notin\mathrm{supp}\ t_r}W\left(y_\alpha\left|\sum_{k\notin A}u_kM_k(P_{\alpha})\right.\right)\prod_{h=1}^zW\left(\psi_{t_r^{-1}}\left(\sigma_{g(\hat{u})}(\overline{y})\right)_h\left|\sum_{k=1}^l u_kF_r(M_k)(P_{\alpha_h})\right.\right).
\end{align*}}
Now since $F_r$ is a bijection there is a permutation $\varphi:\{1,\ldots,l\}\rightarrow\{1,\ldots,l\}$ such that
$$\sum_{k=1}^l u_kF_r(M_k)=\sum_{k=1}^lu_{\varphi(k)}M_k,$$
that also satisfies
$$\varphi(j)=i$$
$$\varphi^{-1}(k)>j\Longleftrightarrow k\in A,\ k>i.$$
This last fact holds because if $m_{l-k}-a_r\notin H^\ast(Q)$ then $l+m_{l-k}-a_r\geq l-a_r>m_{l-i}-a_r$. Now let us
consider the channel $Q:\mathcal{Y}^l\times\mathbb{F}_q^{j-1}\rightarrow\mathcal{Y}^l\times\mathbb{F}_q^{i-1}$ given as follows
$$Q(y,u_1^{i-1}|z,v_1^{j-1})=\prod_{\alpha\notin\mathrm{supp}\ t_r}W\left(y_\alpha\left|\sum_{k\notin A} v_{\varphi^{-1}(k)}M_{k}(P_{\alpha})\right.\right)$$
if $u_{\varphi(k)}=v_k$ for $1\leq k\leq i-1$ and $\overline{y}=\sigma_{g(v)}\left(\psi_{t_r}(\overline{z})\right)$ with $g(v)=\sum_{k\notin A} v_{\varphi^{-1}(k)}(F_r(M_k)-M_k)$ and $0$ elsewhere.
\noindent If $v\in\mathbb{F}_q^l$ is a vector where $v_j=u_i$, then
\begin{align*}
&\sum_{z,v_1^{j-1}}Q(y,u_1^{i-1}|z,v_1^{j-1})W_1^{(j)}(z,v_1^{j-1}|u_i)\\\stackrel{(\ast)}{=}&\frac{1}{q^{l-1}}\sum_{z,v_1^{j-1}}\sum_{v_{j+1}^l}\prod_{\alpha\notin\mathrm{supp}\ t_r}W\left(y_\alpha\left|\sum_{k\notin A}v_{\varphi^{-1}(k)}M_k(P_{\alpha})\right.\right)W_1(z|v)\\
=&\frac{1}{q^{l-1}}\sum_{z,v_1^{j-1}}\sum_{v_{j+1}^l}\prod_{\alpha\notin\mathrm{supp}\ t_r}W\left(y_\alpha\left|\sum_{k\notin A}v_{\varphi^{-1}(k)}M_k(P_{\alpha})\right.\right)\prod_{\alpha\notin\mathrm{supp}\ t_r}W\left(z_\alpha\left|\sum_{k\notin A}v_kM_k(P_{\alpha})\right.\right)\\
&\prod_{h=1}^zW\left(\psi_{t_r^{-1}}\left(\sigma_{g(v)}(\overline{y})\right)_h\left|\sum_{k=1}^l v_kM_k(P_{\alpha_h})\right.\right)\\
=&\frac{1}{q^{l-1}}\sum_{v_1^{j-1}}\sum_{v_{j+1}^l}\prod_{\alpha\notin\mathrm{supp}\ t_r}W\left(y_\alpha\left|\sum_{k\notin A}v_{\varphi^{-1}(k)}M_k(P_{\alpha})\right.\right)\prod_{h=1}^zW\left(\psi_{t_r^{-1}}\left(\sigma_{g(v)}(\overline{y})\right)_h\left|\sum_{k=1}^l v_kM_k(P_{\alpha_h})\right.\right)
\end{align*}
\begin{align*}
=&\frac{1}{q^{l-1}}\sum_{u_{i+1}^l}\prod_{\alpha\notin\mathrm{supp}\ t_r}W\left(y_\alpha\left|\sum_{k\notin A}u_kM_k(P_{\alpha})\right.\right)\prod_{h=1}^zW\left(\psi_{t_r^{-1}}\left(\sigma_{g(\hat{u})}(\overline{y})\right)_h\left|\sum_{k=1}^lu_{\varphi(k)}M_k(P_{\alpha_h})\right.\right)\\
=&\frac{1}{q^{l-1}}\sum_{u_{i+1}^l}W_1(y|u)\\
=&W_1^{(i)}(y,u_1^{i-1}|u_i)
\end{align*}
\noindent where from step $(\ast)$ on, the sum over $z,v_1^{j-1}$ runs only over those indices for which $\overline{z}=\psi_{t_r^{-1}}(\sigma_{g(v)}(\overline{y}))$ and $u_{\varphi(k)}=v_k$ for $1\leq k\leq i-1$. Finally, since the information set does not depend on the particular matrix
$G$ defining the kernel, the result follows.
\end{IEEEproof}
In order to fix a matrix, given a pointed curve $(\mathcal{X},Q)$, so that we can describe the polar code in terms of the function field, we will take $D=(z)_0=\sum_{i=0}^{l-1} P_i$ and $H^\ast(Q)=\{m_1=0,\ldots,m_l\}$. It is known that $ev_D(\mathcal{L}(\infty Q))=R[T]/\langle f_z\rangle$, so a basis is given by $\Delta(I(T),f_z)=\{M_0,\ldots,M_{l-1}\}$ where $(M_i)_\infty=m_{i+1}Q$. Evaluating that basis we construct the matrix $G$ for the polarization process. From now on we shall always consider the matrix for constructing polar codes that comes from the pointed curve $(\mathcal{X},Q)$.
For each $n\in\mathbb{N}$ and each $i\in\{0,\ldots,l^n-1\}$ we denote by $(i_n,\ldots,i_1)$ the $l$-ary expansion of $i$, i.e. $0\leq i_k\leq l-1$ and
$$i=\sum_{k=1}^n i_kl^{k-1}.$$
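For concreteness, the $l$-ary expansion above is easy to compute; the following is a small illustrative sketch (the function name is ours, not from the text).

```python
def l_ary_expansion(i, l, n):
    """Return the digits (i_1, ..., i_n), least significant first,
    so that i = sum_k i_k * l**(k-1)."""
    digits = []
    for _ in range(n):
        digits.append(i % l)
        i //= l
    return digits

# With l = 8 and n = 2: 19 = 3 + 2*8, so (i_1, i_2) = (3, 2),
# and the tuple (i_n, ..., i_1) of the text is (2, 3).
print(l_ary_expansion(19, 8, 2))  # → [3, 2]
```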
Let $P^n_i=(P_{i_n},\ldots,P_{i_1})$, consider the ring $\mathbb{F}_q[X_1,\ldots,X_n]$ with $X_k=(x_{k1},\ldots,x_{ks})$, and set
$$I_n=\langle I(X_1),\ldots,I(X_n),f_z(X_1),\ldots,f_z(X_n)\rangle.$$
In the polynomial ring $\mathbb{F}_q[X_1,\ldots,X_n]$ we take the monomial ordering inherited from the weights in the variables $x_i$, i.e. the monomial ordering defined by the vectors with entries
$$w_{n+1-i,il+j}=-v_Q(x_{ij}),$$
and we will break ties with RevLex if needed. In summary, we take $n$ copies of the original ring $R[T]$ and order them with the weights inherited from the valuation at $Q$.
\begin{proposition}\label{orderg}
Let $M^n_k(X_1,\ldots,X_n)=M_{k_{1}}(X_1)\cdots M_{k_n}(X_n)$, where $(k_n,\ldots,k_1)$ is the $l$-ary expansion of $0\leq k\leq l^n-1$ and
$$M^n_k(P^n_j)=M_{k_1}(P_{j_n})\cdots M_{k_n}(P_{j_1}),$$
then $\Delta(I_n)=\{ M^n_k(X_1,\ldots,X_n)\ |\ k\in\{0,\ldots,l^n-1\}\}$ and the matrix $G_n$ in the polarization process satisfies
$$G_n(i,j)=M^n_{(l^n-i)}(P^n_j).$$
\end{proposition}
\begin{IEEEproof}
The equality in $\Delta(I_n)$ is clear. The proof of the statement related to $G_n$ and the columns is also clear, since it is just an application of the Kronecker product. For checking the property on the rows we will use induction.
Case $n=1$ is clear so let us suppose it is true in the step $n-1$.
\noindent Due to the bit-reversal in the polarization matrix we know that the row $jl^{n-1}+i$, whose $l$-ary expansion is $(j,i_n,\ldots,i_1)$, is the row in the Kronecker product's matrix with $l$-ary expansion $(i_1,\ldots,i_n,j)$. That row corresponds to the product of the monomials $M_j(X_n)$ and $M^n_{l^{n-1}-i}(X_1,\ldots,X_{n-1})$ by the induction hypothesis, so the result follows.
\end{IEEEproof}
\noindent As a corollary of
Theorem~\ref{degsem} we have
\begin{corollary}\label{order1}
Let $G$ be the matrix associated to the pointed curve $(\mathcal{X},Q)$ and $M_i\in\Delta(I(X_1),f_z)$ with $\deg M_i\leq l$. If $M_i\notin\Delta(I,M_j)$ with $i>j$, then
$$W_1^{(l-i)}\preceq W_1^{(l-j)}.$$
In particular the result follows if $M_j|M_i$ (where the division is in the ring $R[T]$).
\end{corollary}
Consider the set $\mathcal{A}_n\subset\Delta(I_n)$ and the code given by
$$C_{\mathcal{A}_n}=\langle\{(M^n_k(P^n_j))_{j=0}^{l^n-1}\ |\ M^n_k\in\mathcal{A}_n\}\rangle,$$
we say that $C_{\mathcal{A}_n}$ is a polar code if for each $M^n_k\in\mathcal{A}_n$ and for each $M^n_j\notin\mathcal{A}_n$ we have that
$$Z(M^n_k):=Z(W_n^{(l^n-k)})\leq Z(M^n_j).$$
\begin{proposition}\label{infsetd}
Let $C_{\mathcal{A}_n}$ be a polar code constructed from a pointed curve $(\mathcal{X},Q)$. If $M^n_i\in\mathcal{A}_n$ satisfies, for all $k$ with $i_k<j_k$, $1\leq k\leq n$, that $\deg M_{i_k}\leq l$ and $M_{i_k}\notin\Delta(I(X_k),M_{j_k})$, then $M^n_j\in\mathcal{A}_n$.
In particular, if $M^n_j|M^n_i$ then $M^n_j\in\mathcal{A}_n$.
\end{proposition}
\begin{IEEEproof}
It follows from induction taking into account that
$$W\preceq W'\Longrightarrow W_1^{(k)}\preceq {W'_1}^{(k)}$$
and
$$\left(W_{n-1}^{(k)}\right)^{(k')}=W_n^{((k-1)l+k')}.$$
Thus if $i_1<j_1$, $M^n_i=M_{i_1}(X_1)M^{n-1}_{i'}$ and $M^n_j=M_{j_1}(X_1)M^{n-1}_{j'}$ then
\begin{align*}
W_n^{(l^n-i)}&=W_n^{((l^{n-1}-i'-1)l+l-i_1)}\\&=(W_{n-1}^{(l^{n-1}-i')})^{(l-i_1)}\\&
\preceq (W_{n-1}^{(l^{n-1}-i')})^{(l-j_1)}\\&\preceq (W_{n-1}^{(l^{n-1}-j')})^{(l-j_1)}\\&=W_n^{(l^n-j)}
\end{align*}
The induction step follows from Corollary~\ref{order1} above.
\end{IEEEproof}
\begin{definition}
We say that the code $C_{\mathcal{A}_n}$ is weakly decreasing if for all $M^n_k\in\mathcal{A}_n$ we have that if $j$, with $l$-ary expansion $(j_n,\ldots,j_1)$, satisfies $k_i\geq j_i$ for all $i$, then $M^n_j\in\mathcal{A}_n$.
\end{definition}
\begin{remark} The name \emph{weakly decreasing} is taken from
\cite{Bardet}. We do not have a way of ensuring that a polar code is weakly decreasing; however, since any polar code is a shortening of a weakly decreasing code, using the proposition above we will check that in some cases the difference between a polar code and a weakly decreasing code is small (measured by the number of rows that must be removed).
\end{remark}
\begin{ex} Consider the Hermitian curve $x^3=y^2+y$ over $\mathbb{F}_4$ pointed at $Q$, the common pole of $x$ and $y$.
In this case $$\Delta(I,x^4-x)=\{x^3y,x^2y,x^3,xy,x^2,y,x,1\}$$ corresponding to the pole orders $H^\ast(Q)=\{9,7,6,5,4,3,2,0\}$. If we choose $x^2y\in\mathcal{A}_1$ then $xy,x^2,y,x,1\in\mathcal{A}_1$; if moreover $x^3\in\mathcal{A}_1$, then we have a weakly decreasing code. On the other hand, if $x^3\in\mathcal{A}_1$ then $x^2,y,x,1\in\mathcal{A}_1$, and to obtain a weakly decreasing code it is enough to check that $xy\in\mathcal{A}_1$.
\end{ex}
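The footprint and pole orders in the example above are easy to reproduce programmatically. The following sketch assumes the standard pole orders $v_Q(x)=2$ and $v_Q(y)=3$ for the Hermitian curve over $\mathbb{F}_4$; it only recomputes $H^\ast(Q)$, not the codes themselves.

```python
# Footprint Delta(I, x^4 - x) for the Hermitian curve x^3 = y^2 + y
# over F_4: monomials x^a y^b with a < 4 and b < 2, with pole orders
# at Q given by v_Q(x) = 2 and v_Q(y) = 3.
basis = [(a, b) for b in range(2) for a in range(4)]
pole_order = {(a, b): 2 * a + 3 * b for (a, b) in basis}

# Order by decreasing pole order, matching the kernel's row order.
ordered = sorted(basis, key=lambda m: -pole_order[m])
print([pole_order[m] for m in ordered])  # → [9, 7, 6, 5, 4, 3, 2, 0]
```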
\begin{corollary}
Rational curves provide kernels for polar codes that are weakly decreasing.
\end{corollary}
\begin{IEEEproof}
For those curves there are no gaps and $H^\ast(Q)=\{0,1,\ldots,q-1\}$; therefore for all $m\in H^\ast(Q)$ we have $m<l$, and for nonzero $m$, $m-1\in H^\ast(Q)$.
\end{IEEEproof}
\begin{remark} The result stated in the corollary above generalizes
the corresponding statement of \cite{Bardet} for rational curves over $\mathbb{F}_2$.
\end{remark}
\begin{definition} We say that a code
$C_{\mathcal{A}_n}$ is decreasing if it is weakly decreasing and, whenever there are $h_1,\ldots,h_k\in\{0,\ldots,l-1\}$ (not necessarily distinct) such that
$$M_{h_1}(X_{i_1})\cdots M_{h_k}(X_{i_k})\in\mathcal{A}_n,$$
then for any $j_v\leq i_v$, $v\in\{1,\ldots,k\}$ we have
$$M_{h_1}(X_{j_1})\cdots M_{h_k}(X_{j_k})\in\mathcal{A}_n.$$
\end{definition}
This extra property required for being decreasing will be called the degrading property. It is natural, since at each step of the polarization process the new channels are degraded versions of the previous ones.
\begin{proposition}\label{polardegr}
If $C_{\mathcal{A}_n}$ is a polar code and $M_{h_1}(X_{i_1})\cdots M_{h_k}(X_{i_k})\in\mathcal{A}_n$ then
$$M_{h_1}(X_{j_1})\cdots M_{h_k}(X_{j_k})\in\mathcal{A}_n$$
for all $j_v\leq i_v$, $1\leq v\leq k$.
\end{proposition}
\begin{IEEEproof}
We will prove it by induction on $n$. For $n=2$ remember that $G_2=B_2G^{\otimes 2}$ where $B_2$ interchanges the $i$-th row with $l$-ary expansion $(i_2,i_1)$ with the one with expansion $(i_1,i_2)$ (rows are indexed from $0$ to $l^2-1$) and therefore $(B_2)^{-1}=B_2$. Moreover $G_2B_2=G^{\otimes 2}$. We have that
$${G_2}_{i,j}=M_{i_1}(P_{j_2})M_{i_2}(P_{j_1})$$
that multiplied by $B_2$ returns
$${G_2B_2}_{i,j}=M_{i_1}(P_{j_1})M_{i_2}(P_{j_2})={G^{\otimes 2}}_{i,j}.$$
Hence if $u\in\mathbb{F}_q^{l^2}$, then
\begin{equation}
(uB_2)(G_2B_2)=u(G^{\otimes 2}B_2)=uG_2\tag{$\ast$}.
\end{equation}
Moreover note that the last $l$ entries in $uB_2$ correspond with $(u_{l-1},u_{2l-1},\ldots,u_{l^2-1})$. Whence suppose that $M_h(X_2)\in\mathcal{A}_2$; that monomial is associated with the row $l^2-lh=l(l-h)$ while the monomial $M_h(X_1)$ is associated with $l^2-h$. If we define $Q:\mathcal{Y}^{l^2}\times\mathbb{F}_q^{l^2-h-1}\rightarrow\mathcal{Y}^{l^2}\times\mathbb{F}_q^{l^2-lh-1}$ with probabilities
$$Q(y_1^{l^2},u_1^{l^2-lh-1}|z_1^{l^2},v_1^{l^2-h-1})=1$$
if $B_2y_1^{l^2}=z_1^{l^2}$ and $(vB_2)_1^{l^2-lh-1}=u_1^{l^2-lh-1}$. Then by $(\ast)$, it follows that
\begin{align*}
&\sum_{z_1^{l^2},v_{1}^{l^2-h-1}}Q(y_1^{l^2},u_1^{l^2-lh-1}|z_1^{l^2}v_1^{l^2-h-1})W_2^{(l^2-h)}(z_1^{l^2},v_1^{l^2-h-1}|u_{l^2-lh})\\
=&\sum_{z_1^{l^2},v_{1}^{l^2-h-1}}\sum_{v_{l^2-h+1}^{l^2}}\frac{1}{q^{l^2-1}}Q(y_1^{l^2},u_1^{l^2-lh-1}|z_1^{l^2}v_1^{l^2-h-1})W_2(z|v)\\
=&\sum_{u_{l^2-lh+1}^{l^2}} \frac{1}{q^{l^2-1}}W_2(B_2y|uB_2)\\
=&\sum_{u_{l^2-lh+1}^{l^2} }\frac{1}{q^{l^2-1}}W_2(y|u)\\
=&W_2^{(l^2-lh)}(y_1^{l^2},u_1^{l^2-lh-1}|u_{l^2-lh}).
\end{align*}
In other words, $W_2^{(l^2-lh)}\preceq W_2^{(l^2-h)}$ and therefore $M_h(X_1)\in\mathcal{A}_2$.
Let us suppose that it is true for $n-1$.
Note that $\left(W_{n-1}^{(i)}\right)^{(l)}=W_n^{(li)}$, hence if $M_{h_1}(X_{i_1})\cdots M_{h_k}(X_{i_k})\in\mathcal{A}_n$ and $i_1\geq 2$ we have that $M_{h_1}(X_{j_1})\cdots M_{h_k}(X_{j_k})\in\mathcal{A}_n$ for all $j_v\leq i_v$, $v\in\{1,\ldots,k\}$, such that $2\leq j_1\leq i_1$. If $2\leq j_1<i_1$ then, writing $i=\sum_{v=1}^k h_vl^{i_v-1}$ and $j=\sum_{v=1}^k h_vl^{j_v-1}$, we have $l|i$ and $l|j$ by the induction hypothesis for $n-1$, therefore
$$W_n^{(l^n-i)}=\left(W_{n-1}^{\left(\frac{l^n-i}{l}\right)}\right)^{(l)}\preceq\left(W_{n-1}^{\left(\frac{l^n-j}{l}\right)}\right)^{(l)}=W_n^{(l^n-j)},$$
and the result follows. The same reasoning guarantees the result if $i_1=j_1=1$. It only remains to treat the case $1=j_1<i_1$. Taking into account that degradation is a transitive relation, we can suppose that $i_1=2$ and choose $i'=i-h_1l$ and $j'=j-h_1$, where $i$ and $j$ are as before. Thus applying the induction step to the case $n-2$, we have that $W_{n-2}^{\left(\frac{l^n-i'}{l^2}\right)}\preceq W_{n-2}^{\left(\frac{l^n-j'}{l^2}\right)}$. Applying induction for the case $2$
we have
\begin{align*}
W_n^{(l^n-i)}&=\left(W_{n-2}^{\left(\frac{l^n-i'}{l^2}\right)}\right)_2^{(l^2-lh_1)}\\
&\preceq\left(W_{n-2}^{\left(\frac{l^n-i'}{l^2}\right)}\right)_2^{(l^2-h_1)}\\
&\preceq\left(W_{n-2}^{\left(\frac{l^n-j'}{l^2}\right)}\right)_2^{(l^2-h_1)}\\
&=W_n^{(l^n-j)}
\end{align*}
and we conclude the proof.
\end{IEEEproof}
\begin{remark}The previous result does not make use of the SOF property. Thus a weakly decreasing polar code is decreasing.
\end{remark}
\begin{ex}\label{hermit2}\normalfont
Take the Hermitian curve from the previous examples and $n=2$. If $y_1x_2\in\mathcal{A}_2$ then, using Proposition~\ref{infsetd}, we get
$$x_2,y_1,1\in\mathcal{A}_2$$
and applying the proposition above $x_1\in\mathcal{A}_2$. If $x_1x_2\in\mathcal{A}_2$, together with the previous elements we have a decreasing polar code.
\end{ex}
\section{Minimum Distance and Dual of Polar Codes}
Let us check some properties of the structure of a polar code constructed from a pointed curve $(\mathcal{X},Q)$.
\begin{proposition}\label{minpol}
Let $C_{\mathcal{A}_n}$ be a decreasing code and let $(K_n,\ldots,K_1)$ be a tuple such that for each $M^n_k\in\mathcal{A}_n$ the $l$-ary expansion of $k$, $(k_n,\ldots,k_1)$, satisfies $k_i\leq K_i$ for each $i\in\{1,\ldots,n\}$. Take $H^\ast(Q)=\{m_1,\ldots,m_l\}$ and $d_i=\delta(C(\mathcal{X},D,m_{K_i+1}Q))$, let $k'$ be such that $M^n_k\in\mathcal{A}_n$ for each $k\geq k'$, and let $d'_i=\delta(C(\mathcal{X},D,m_{k'_i+1}Q))$; then we have
$$\prod_{i=1}^n d'_i\geq \delta(C_{\mathcal{A}_n})\geq\prod_{i=1}^n d_i.$$
\end{proposition}
\begin{IEEEproof}
We proceed by induction. The case $n=1$ is clear. Suppose the statement is true for $n$; we prove it for $n+1$. First note that $K_1\geq K_2\geq\ldots\geq K_{n+1}$: if $M_k^n\in\mathcal{A}_n$ and $M_{K_j}(X_j)|M_k^n$ (from the hypothesis in the proposition), then $M_{K_j}(X_j)\in\mathcal{A}_n$ and $M_{K_j}(X_{j-1})\in\mathcal{A}_n$, since the code is decreasing, and therefore $K_j\leq K_{j-1}$. Let $C_1$ be the generator matrix of $C(\mathcal{X},D,m_{K_1+1}Q)$ and let $A$ be the matrix whose rows are the evaluations of the $M_k^{n+1}\in\mathcal{A}_{n+1}$ with $k>l-1$. Then $C_{\mathcal{A}_n}$ is contained in the code generated by $A\otimes C_1$, which is the generator matrix for the matrix product code
$$[C_1\cdots C_1]A$$
and then, by \cite[Theorem 2.2]{dmpc} we have the result. The other inequality follows in a similar way.
\end{IEEEproof}
\begin{ex}
Consider Example~\ref{hermit2} again, taking $\mathcal{A}_2=\{y_1x_2,y_1,x_2,x_1,1\}$. This code is contained in the decreasing code generated by $\mathcal{A}_2\cup\{x_2x_1\}$, so $K_1=2>K_2=1$. We know that $m_3=3$ and $m_2=2$, and the minimum distances of these Hermitian codes are $3$ and $2$ respectively \cite{hermdist}, therefore
$$\delta(C_{\mathcal{A}_2})\geq 6.$$
\end{ex}
Recall that the isometry-dual condition for a sequence of codes $\{C_i\}_{i=1}^l$ means that there exists $x\in\mathbb{F}_q^l$ such that, for each $i\in\{1,\ldots,l\}$, $C_i^\perp$ and $C_{l-i}$ are isometric with respect to $x$. Codes constructed from pointed curves $(\mathcal{X},Q)$ with $D\sim lQ$ satisfy this condition, and we say that the curve satisfies the isometry-dual condition (\cite{diegorder}). We will see that polar codes constructed from these curves preserve a similar condition.
\begin{proposition}\label{dualpolar}
Let $G$ be the kernel, of size $l\times l$, for an isometry-dual curve $(\mathcal{X},Q)$. Let $C_{\mathcal{A}_n}$ be a decreasing code and define
$$\mathcal{A}^\perp_n=\{ M^n_j\in\Delta(I_n)\ |\ j_i=l-1-k_i,\ 1\leq i\leq n,\ M^n_k\in\mathcal{A}_n\}^c$$
Then $(C_{\mathcal{A}_n})^\perp$ is isometric to $C_{\mathcal{A}^\perp_n}$, and this code is also decreasing.
\end{proposition}
\begin{IEEEproof}
Comparing the sizes of the sets, we just have to check that $C_{\mathcal{A}_n^\perp}\subset (C_{\mathcal{A}_n})^\perp$. It is also clear that $C_{\mathcal{A}_n^\perp}$ is a decreasing code.
Let $f(T)\in\mathcal{L}(\infty Q)$ be the element which establishes the isometry between $C(\mathcal{X},D,m_iQ)^\perp$ and $C(\mathcal{X},D,m_{l-i}Q)$ for each $i\in\{1,\ldots,l\}$. Then we have that $\sum_{i=0}^{l-1} f(P_i)M_j(P_i)M_k(P_i)=0$ for each $j\in\{0,\ldots,l-1\}$ and every $k\in\{0,\ldots,l-1-j\}$. Take $F=f(X_1)\cdots f(X_n)$, $M^n_k\in\mathcal{A}_n$ and $M^n_{k'}\in\mathcal{A}_n^\perp$. Then we have
$$\sum_{i=0}^{l^n-1}F(P^n_i)M^n_k(P^n_i)M^n_{k'}(P^n_i)=\prod_{j=1}^n \left(\sum_{i=0}^{l-1} f(P_i)M_{k_j}(P_i)M_{k'_j}(P_i)\right).$$
We claim that there exists $j\in\{1,\ldots,n\}$ such that $k'_j\leq l-1-k_j$. If this does not happen we would have
$$k'_j>l-1-k_j,\ \forall j\in\{1,\ldots,n\}$$
$$l-1-k'_j<k_j,\ \forall j\in\{1,\ldots,n\}$$
but $C_{\mathcal{A}_n}$ is decreasing, so for $\overline{k'}=\sum_{j=1}^n (l-1-k'_j)l^{j-1}$ we would get $M^n_{\overline{k'}}\in\mathcal{A}_n\setminus\mathcal{A}_n^\perp$, which is a contradiction. Therefore such a $j$ exists, the corresponding factor vanishes, and we have the result.
\end{IEEEproof}
\begin{corollary}
If in the proof of Proposition~\ref{dualpolar} we have that the function $f$ evaluates to $ev(f)=(1,1,\ldots,1)$, then
$$C_{\mathcal{A}_n}^\perp=C_{\mathcal{A}_n^\perp}$$
Codes with kernel $G_q$ satisfy this condition.
\end{corollary}
\begin{corollary}
Let $C_{\mathcal{A}_n}$ be a decreasing code from an isometry-dual curve. Let $\mathcal{B}_n$ and $\mathcal{C}_n$ be decreasing sets such that
$$\mathcal{C}_n\subset\mathcal{A}_n\subset\mathcal{B}_n$$
then
$$C_{\mathcal{B}^\perp_n}\subset C^\perp_{\mathcal{A}_n}\subset C_{\mathcal{C}^\perp_n}.$$
\end{corollary}
\begin{ex}
We have already mentioned that each polar code can be seen as a shortening of a decreasing code; thus we can recover its dual from the dual of the decreasing one. Let us take again
$$\mathcal{A}_2=\{y_1x_2,x_2,y_1,x_1,1\}$$
and
$$\mathcal{A}'_2=\mathcal{A}_2\cup\{x_1x_2\}.$$
This is a decreasing set and we have
\begin{align*}
{\mathcal{A}'}^\perp_2=&\{x_1y_1x_2^3y_2, x_1^2x_2^3y_2, y_1x_2^3y_2, x_1x_2^3y_2,
x_2^3y_2, x_1y_1x_2^2y_2, x_1^2x_2^2y_2, y_1x_2^2y_2,\\&
x_1x_2^2y_2, x_2^2y_2, x_1^3y_1x_2^3, x_1^2y_1x_2^3, x_1^3x_2^3, x_1y_1x_2^3, x_1^2x_2^3, y_1x_2^3, x_1x_2^3, x_2^3,\\&
x_1^3y_1x_2y_2, x_1^2y_1x_2y_2, x_1^3x_2y_2, x_1y_1x_2y_2, x_1^2x_2y_2, y_1x_2y_2, x_1x_2y_2, x_2y_2,\\&
x_1^3y_1x_2^2, x_1^2y_1x_2^2, x_1^3x_2^2, x_1y_1x_2^2, x_1^2x_2^2, y_1x_2^2, x_1x_2^2, x_2^2, x_1^3y_1y_2, x_1^2y_1y_2,\\&
x_1^3y_2, x_1y_1y_2, x_1^2y_2, y_1y_2, x_1y_2, y_2, x_1^3y_1x_2, x_1^2y_1x_2, x_1^3x_2, x_1y_1x_2, x_1^2x_2, y_1x_2,\\&
x_1x_2, x_2, x_1^3y_1, x_1^2y_1, x_1^3, x_1y_1, x_1^2, y_1, x_1, 1\}
\end{align*}
In this case the isometry is given by $(1,\ldots,1)$, and if we add a vector orthogonal to the evaluations of $\mathcal{A}_2$ but not to that of $x_1x_2$, we obtain a generator set for $C^\perp_{\mathcal{A}_2}$. One such vector is the evaluation of $g=x_2x_1(x_1y_1+1)$, so ${\mathcal{A}'}_2^\perp\cup\{g\}$ generates $C^\perp_{\mathcal{A}_2}$.
\end{ex}
All the conditions required of the pointed curves $(\mathcal{X},Q)$ are satisfied by weak Castle and Castle curves \cite{castillo1}. We say that a pointed curve $(\mathcal{X},Q)$ over $\mathbb{F}_q$ is weak Castle if $H(Q)$ is symmetric and there is a morphism $\phi:\overline{\mathbb{P}}\rightarrow\overline{\mathbb{P}}$ with $(\phi)_\infty=hQ$ and $\alpha_1,\ldots,\alpha_a\in\mathbb{F}_q$ such that
$$\left|\phi^{-1}(\alpha_i)\cap\mathcal{X}(\mathbb{F}_q)\right|=h.$$
We say that $(\mathcal{X},Q)$ is a pointed Castle curve if it is weak Castle, $h$ is the multiplicity of $H(Q)$, and $a=q$.
\section{Modifying kernels from algebraic curves}
Remember that given a square matrix $G$ of size $l\times l$ over $\mathbb{F}_q$ with rows $G_1,\ldots,G_l$, the exponent $E(G)$ of the matrix $G$ is defined as
$$E(G)=\frac{1}{l\ln l}\sum_{i=1}^l \ln D_i,$$
where $D_i$ is the so-called partial distance, the minimum of the Hamming distances $d(G_i,v)$ with $v\in\langle G_{i+1},\ldots, G_l\rangle$.
Suppose $G$ is non-singular over $\mathbb{F}_q$ of size $l\times l$ and $G'$ is likewise. If ${G'}G^{-1}$ is an upper-triangular invertible matrix, then $E(G)=E(G')$; therefore, every matrix coming from a given pointed curve has the same exponent. When looking for the best matrices of a given size, shortening codes is a good way to find them; for example, this was how the best matrix over $\mathbb{F}_2$ of size $16$ was found (see \cite{korada}).
The next theorem was proved by Anderson and Matthews in \cite{Anderson}. It says that shortening kernels coming from algebraic curves does not change the structure of the resulting code.
\begin{theorem}\label{qpag}
Let $G$ be a kernel from a pointed curve $(\mathcal{X},Q)$ with $D=\sum_{i=1}^l P_i$. Taking the $j$-th column, we can shorten $G$ to obtain the matrix $G'$. Then we have that $G'$ is the kernel arising from the codes $\{C(D-P_j,mQ-P_j)\}_{m\in H^\ast(Q)}$.
\end{theorem}
We can repeat this process to obtain polar codes from kernels associated to divisors of the form $mQ-\sum P$. However, if we take points coming from the zero divisor of elements in $\mathcal{L}(\infty Q)$ we obtain a matrix with the same structure.
\begin{proposition}
Let $(\mathcal{X},Q)$ be a pointed curve and $z\in\mathbb{F}_q(\mathcal{X})$ with $(z)=D-lQ$, $D=\sum_{i=1}^l P_i$. Suppose there is $z'\in\mathbb{F}_q(\mathcal{X})$ such that $(z')=\sum_{i=1}^s P_{k_i}-sQ$ with $k_i\neq k_j$ if $i\neq j$; define $D'=(z')_0$.
Let $\varphi:\mathbb{F}_q^l\rightarrow\mathbb{F}_q^s$ be the mapping such that $\varphi(c)$ is the word $c$ with the entries indexed by $\{i\in\{1,\ldots,l\}\ |\ P_i\notin\mathrm{supp}\ z'\}$ erased. Let $\psi:R/\langle I,f_z\rangle\rightarrow R/\langle I,f_{z'}\rangle$ be the natural mapping between both rings. Then the diagram
\[\begin{tikzcd}
\arrow[d, "\psi"]R/\langle I,f_z\rangle \arrow[r, "ev_D"] &\mathbb{F}_q^l \arrow[d, "\varphi"]\\
R/\langle I,f_{z'}\rangle
\arrow[r,"ev_{D'}"]& \mathbb{F}_q^{s}
\end{tikzcd}\]
is commutative. The kernel $G$ constructed from $D$ and $Q$ contains as a submatrix $G'$, the kernel constructed from $D'$ and $Q$.
\end{proposition}
\begin{IEEEproof}
Take $f,f'\in R/\langle I,f_z\rangle$ such that
$$\varphi(ev_D(f))=\varphi(ev_D(f')).$$
This occurs if and only if
$$ev_D(f)_j=ev_D(f')_j\quad \forall j \text{ such that } P_j\in\mathrm{supp}\ z'.$$
That is, $ev_{D'}(f)=ev_{D'}(f')$; then $f-f'\in\langle I,f_{z'}\rangle$, implying $\psi(f)=\psi(f')$, as we wanted.
\end{IEEEproof}
\begin{corollary}
The matrix $G'$ of the proposition above is isometric to the one obtained after shortening $G$ with the process described in Theorem \ref{qpag}.
\end{corollary}
\medskip
\begin{corollary}
A Castle-like curve with $D=(z)_0=\left(\prod_{i=1}^a (\phi-\alpha_i)\right)_0$ produces a sequence of $a$ kernels (each one a submatrix of the next) coming from the divisors of $\prod_{i=1}^j (\phi-\alpha_i)$, $j\in\{1,\ldots,a\}$.
\end{corollary}
\medskip
\begin{ex}
Take again the Hermitian curve $x^3=y^2+y$ over $\mathbb{F}_4$, where $\alpha$ is a primitive element. This is a Castle curve with kernel
$$\begin{array}{r|cccccccc}
& 00 & 01 & 1\alpha & 1\alpha^2& \alpha\alpha & \alpha\alpha^2 & \alpha^2\alpha & \alpha^2\alpha^2\\\hline
x^3y&0&0&\alpha&\alpha^2&\alpha&\alpha^2&\alpha&\alpha^2\\
x^2y & 0& 0&\alpha&\alpha^2&1&\alpha&\alpha^2&1\\
x^3& 0&0&1&1&1&1&1&1\\
xy & 0&0&\alpha&\alpha^2&\alpha^2&1&1&\alpha\\
x^2& 0&0&1&1&\alpha^2&\alpha^2&\alpha&\alpha\\
y&0&1&\alpha&\alpha^2&\alpha&\alpha^2&\alpha&\alpha^2\\
x&0&0&1&1&\alpha&\alpha&\alpha^2&\alpha^2\\
1&1&1&1&1&1&1&1&1
\end{array}$$
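The rows of this kernel can be checked by direct evaluation. Below is a minimal sketch of $\mathbb{F}_4$ arithmetic (our own encoding: $0,1,\alpha,\alpha^2$ as $0,1,2,3$, addition as XOR, multiplication via discrete logarithms); the point order follows the column labels above.

```python
# Minimal GF(4) arithmetic: elements 0, 1, A (= alpha), A2 (= alpha^2)
# encoded as 0, 1, 2, 3; addition is XOR and multiplication uses logs.
A, A2 = 2, 3
LOG = {1: 0, A: 1, A2: 2}
EXP = {0: 1, 1: A, 2: A2}

def mul(u, v):
    return 0 if 0 in (u, v) else EXP[(LOG[u] + LOG[v]) % 3]

def power(u, e):
    r = 1
    for _ in range(e):
        r = mul(r, u)
    return r

# The eight affine points of x^3 = y^2 + y over F_4, in column order:
# 00, 01, 1a, 1a^2, aa, aa^2, a^2a, a^2a^2.
points = [(0, 0), (0, 1), (1, A), (1, A2),
          (A, A), (A, A2), (A2, A), (A2, A2)]

def row(a, b):
    """Evaluation of the monomial x^a y^b at all eight points."""
    return [mul(power(x, a), power(y, b)) for (x, y) in points]

# First kernel row, x^3 y: matches (0, 0, a, a^2, a, a^2, a, a^2).
print(row(3, 1))  # → [0, 0, 2, 3, 2, 3, 2, 3]
```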
If we shorten this kernel at the points with $x=0$ as in Theorem \ref{qpag}, starting with $00$, we obtain:
$$\begin{array}{r|cccccc}
& 1\alpha & 1\alpha^2& \alpha\alpha & \alpha\alpha^2 & \alpha^2\alpha & \alpha^2\alpha^2\\\hline
x^3y&\alpha&\alpha^2&\alpha&\alpha^2&\alpha&\alpha^2\\
x^2y &\alpha&\alpha^2&1&\alpha&\alpha^2&1\\
x^3&1&1&1&1&1&1\\
xy &\alpha&\alpha^2&\alpha^2&1&1&\alpha\\
x^2&1&1&\alpha^2&\alpha^2&\alpha&\alpha\\
x&1&1&\alpha&\alpha&\alpha^2&\alpha^2
\end{array}.$$
This matrix comes from the codes with divisor $(x^3-1)_0$ and $P_\infty-P_{00}-P_{01}$.
From the original kernel, if we instead remove the columns of those points and the rows that are multiples of $x^3$, we obtain:
$$\begin{array}{r|cccccc}
& 1\alpha & 1\alpha^2& \alpha\alpha & \alpha\alpha^2 & \alpha^2\alpha & \alpha^2\alpha^2\\\hline
x^2y &\alpha&\alpha^2&1&\alpha&\alpha^2&1\\
xy &\alpha&\alpha^2&\alpha^2&1&1&\alpha\\
x^2&1&1&\alpha^2&\alpha^2&\alpha&\alpha\\
y&\alpha&\alpha^2&\alpha&\alpha^2&\alpha&\alpha^2\\
x&1&1&\alpha&\alpha&\alpha^2&\alpha^2\\
1&1&1&1&1&1&1
\end{array}$$
This matrix comes from the divisor $(x^3-1)_0$ and $P_\infty$. The isometry between both matrices is clear, and this second matrix has the same structure as the original one, so we can carry out the analysis of the information set, the minimum distance and the dual as before.
As an example of the last corollary we can give the following matrix sequence from the Hermitian curve:
$$\begin{array}{r|cc}
&00&01\\\hline
y&0&1\\
1&1&1\end{array}\ \ \ \begin{array}{r|cccc}
&00&01&1\alpha&1\alpha^2\\\hline
xy&0&0&\alpha&\alpha^2\\
y&0&1&\alpha&\alpha^2\\
x&0&0&1&1\\
1&1&1&1&1\end{array}
\ \ \begin{array}{r|cccccc}
&00&01&1\alpha&1\alpha^2&\alpha\alpha&\alpha\alpha^2\\\hline
x^2y&0&0&\alpha&\alpha^2&1&\alpha\\
xy&0&0&\alpha&\alpha^2&\alpha^2&1\\
x^2&0&0&1&1&\alpha^2&\alpha^2\\
y&0&1&\alpha&\alpha^2&\alpha&\alpha^2\\
x&0&0&1&1&\alpha&\alpha\\
1&1&1&1&1&1&1\end{array}
$$
$$
\begin{array}{r|cccccccc}
& 00 & 01 & 1\alpha & 1\alpha^2& \alpha\alpha & \alpha\alpha^2 & \alpha^2\alpha & \alpha^2\alpha^2\\\hline
x^3y&0&0&\alpha&\alpha^2&\alpha&\alpha^2&\alpha&\alpha^2\\
x^2y & 0& 0&\alpha&\alpha^2&1&\alpha&\alpha^2&1\\
x^3& 0&0&1&1&1&1&1&1\\
xy & 0&0&\alpha&\alpha^2&\alpha^2&1&1&\alpha\\
x^2& 0&0&1&1&\alpha^2&\alpha^2&\alpha&\alpha\\
y&0&1&\alpha&\alpha^2&\alpha&\alpha^2&\alpha&\alpha^2\\
x&0&0&1&1&\alpha&\alpha&\alpha^2&\alpha^2\\
1&1&1&1&1&1&1&1&1
\end{array}.$$
Their exponents are, respectively,
$$\frac{1}{2},\ \frac{1}{2},\ \frac{\ln(6\cdot 4\cdot 3\cdot 2\cdot 2\cdot 1)}{6\ln(6)}\approx 0.5268,\ \frac{\ln(8\cdot 6\cdot 5\cdot 4\cdot 3\cdot 2\cdot 2\cdot 1)}{8\ln (8)}\approx 0.5622.$$
\end{ex}
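The exponents above follow directly from the definition $E(G)=\frac{1}{l\ln l}\sum_i \ln D_i$. A small sketch (the partial-distance lists are read off from the products displayed above, not recomputed from the matrices):

```python
import math

def exponent(partial_distances):
    """E(G) = (1 / (l ln l)) * sum_i ln(D_i) for an l x l kernel."""
    l = len(partial_distances)
    return sum(math.log(d) for d in partial_distances) / (l * math.log(l))

print(round(exponent([2, 1]), 4))                    # → 0.5
print(round(exponent([6, 4, 3, 2, 2, 1]), 4))        # → 0.5268
print(round(exponent([8, 6, 5, 4, 3, 2, 2, 1]), 4))  # → 0.5622
```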
Now we examine another way to search for matrices with good exponents arising from AG codes. We will need the following result.
\begin{proposition}
Let $G$ and $G'$ be two matrices over $\mathbb{F}_q$ of size $l\times l$ and $l'\times l'$ respectively, non-singular and with partial distances $\{D_i(G)\}_{i=1}^l$ and $\{D_i(G')\}_{i=1}^{l'}$. Then for the matrix $G'\otimes G$ we have
$$D_k(G'\otimes G)=D_{i'}(G')\cdot D_i(G)$$
where $k=(i'-1)l+i$.
\end{proposition}
\begin{IEEEproof}
For the first $l$ rows the result is clear, since they are just copies of the original $G$. Let us suppose the result holds for the first $l'l-(kl+l)+1$ rows ($0\leq k\leq l'-1$) and let us prove it for the rows $l'l-(k+1)l-j$, $0\leq j\leq l-1$.
If we begin with $h=l'l-(k+1)l$ we observe that the partial distance $D_h(G'\otimes G)$ is the same as the one of
\begin{equation}
\begin{bmatrix} \{G'_i\}_{i=1}^k\otimes G\\
\{G'_i\}_{i=k+1}^{l'}\otimes I_l\end{bmatrix}\tag{$\ast$}
\end{equation}
where $I_l$ is the identity matrix of size $l$ and $G'_i$ is the $i$-th row of $G'$. Since the matrix $G$ is non-singular and the $h$-th partial distance is the distance $d(G'\otimes G_h,\langle G'\otimes G_{h+1},\ldots,G'\otimes G_{l'l}\rangle)$, the latter vector space is generated by the tensor products of the last $l'-k$ rows of $G'$ with $I_l$. Also we know that
$$G'_{l'-k}\otimes G_l=\sum_{j=1}^l G'_{l'-k}\otimes (G_{l,j}e_j),$$
where $e_j$ is the $j$-th vector of the canonical basis for $\mathbb{F}_q^l$. Notice that if $u$ and $v$ are two vectors with disjoint supports, then the Hamming weight $w(v+u)=w(v)+w(u)$; also, $w(v\otimes u)=w(v)\cdot w(u)$. Then if we take some elements $\alpha_i\in\mathbb{F}_q$, $l'l-(k+1)l+1\leq i\leq l'l$ we have
\begin{align*}
&w\left(G'_{l'-k}\otimes G_l+\sum_{i=l'-k+1,j=1}^{l',l}\alpha_{(i-1)l+j}G'_i\otimes e_j\right)\\
=&w\left(\sum_{j=1}^l G'_{l'-k}\otimes G_{l,j}e_j+\sum_{i=l'-k+1,j=1}^{l',l}\alpha_{(i-1)l+j}G'_i\otimes e_j\right)\\
=&w\left(\sum_{j\in\mathrm{supp}\ G_l}\left(G'_{l'-k}+\sum_{i=l'-k+1}^{l'}\frac{\alpha_{(i-1)l+j}}{G_{l,j}}G'_i\right)\otimes e_j+\sum_{j\notin\mathrm{supp}\ G_l}\left(\sum_{i=l'-k+1}^{l'} \alpha_{(i-1)l+j}G'_i\right)\otimes e_j\right)\\
\stackrel{\circ}{=}&\sum_{j\in\mathrm{supp}\ G_l}w\left(\left(G'_{l'-k}+\sum_{i=l'-k+1}^{l'}\frac{\alpha_{(i-1)l+j}}{G_{l,j}}G'_i\right)\otimes e_j\right)+\sum_{j\notin\mathrm{supp}\ G_l}w\left(\left(\sum_{i=l'-k+1}^{l'} \alpha_{(i-1)l+j}G'_i\right)\otimes e_j\right)\\
\geq&\sum_{j\in\mathrm{supp}\ G_l}w\left(\left(G'_{l'-k}+\sum_{i=l'-k+1}^{l'}\frac{\alpha_{(i-1)l+j}}{G_{l,j}}G'_i\right)\otimes e_j\right)\\
=&\sum_{j\in\mathrm{supp}\ G_l}w\left(G'_{l'-k}+\sum_{i=l'-k+1}^{l'}\frac{\alpha_{(i-1)l+j}}{G_{l,j}}G'_i\right)w(e_j)\\
\geq &\sum_{j\in\mathrm{supp}\ G_l} D_{l'-k}(G')\\
=&D_l(G)\cdot D_{l'-k}(G'),
\end{align*}
where $\circ$ holds since the vectors $v\otimes e_j$ have disjoint supports for different $j$.
The result follows if we replace $G_l$ by any vector of weight $D_j(G)$.
\end{IEEEproof}
The next corollary is a general version of the one in \cite{Kron}.
\begin{corollary}\label{expkron}
Let $G_1$ and $G_2$ be two non-singular matrices over $\mathbb{F}_q$ of sizes $l_1$ and $l_2$ respectively. Then
$$E(G_1\otimes G_2)=\frac{E(G_1)}{\log_{l_1}(l_1l_2)}+\frac{E(G_2)}{\log_{l_2}(l_1l_2)}.$$
\end{corollary}
\begin{IEEEproof}
We know that $G_1\otimes G_2$ has size $l_1l_2$. For each $k\in\{1,\ldots,l_1l_2\}$ we can rewrite $k$ as $k=(j-1)l_2+s$ where $1\leq j\leq l_1$ and $1\leq s\leq l_2$, therefore
\begin{align*}
E(G_1\otimes G_2)&=\frac{1}{l_1l_2\ln(l_1l_2)}\sum_{k=1}^{l_1 l_2}\ln(D_k(G_1\otimes G_2))\\
&\stackrel{(a)}{=}\frac{1}{l_1l_2\ln(l_1l_2)}\sum_{j=1}^{l_1}\sum_{s=1}^{l_2}\ln(D_j(G_1)D_s(G_2))\\
&=\frac{1}{l_1\ln(l_1l_2)}\sum_{j=1}^{l_1} \ln(D_j(G_1))+\frac{1}{l_2\ln(l_1l_2)}\sum_{s=1}^{l_2}\ln(D_s(G_2))\\
&=\frac{E(G_1)\ln l_1}{\ln l_1l_2}+\frac{E(G_2)\ln l_2}{\ln l_1l_2}
\end{align*}
where equality $(a)$ follows from the previous proposition.
\end{IEEEproof}
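Corollary~\ref{expkron} can be checked numerically from the previous proposition: the partial distances of the product are the pairwise products. A sketch with hypothetical partial-distance lists:

```python
import math

def exponent(D):
    """Exponent of a kernel from its list of partial distances."""
    l = len(D)
    return sum(math.log(d) for d in D) / (l * math.log(l))

def kron_partial_distances(D1, D2):
    # D_k(G1 (x) G2) = D_j(G1) * D_s(G2), with k = (j - 1) * l2 + s.
    return [d1 * d2 for d1 in D1 for d2 in D2]

D1, D2 = [2, 1], [6, 4, 3, 2, 2, 1]
l1, l2 = len(D1), len(D2)
lhs = exponent(kron_partial_distances(D1, D2))
rhs = (exponent(D1) / math.log(l1 * l2, l1)
       + exponent(D2) / math.log(l1 * l2, l2))
print(abs(lhs - rhs) < 1e-9)  # → True
```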
We can extend the analysis done for the kernel defined by one curve to the product of two kernels arising from two curves defined over the same field.
Let $\left(\mathcal{X},(z)_0=\sum_{i=0}^{l_1-1} P_i,Q\right)$ and $\left(\mathcal{Y},(z')_0=\sum_{j=0}^{l_2-1}P'_j,Q'\right)$ be two pointed curves over $\mathbb{F}_q$, and let $S[X]=\mathbb{F}_q[x_1,\ldots,x_s]$ and $S[Y]=\mathbb{F}_q[y_1,\ldots,y_{s'}]$ be the polynomial rings for which there exist $I\subset S[X]$ and $I'\subset S[Y]$ such that $S[X]/\langle I,f_z\rangle$ and $S[Y]/\langle I',f_{z'}\rangle$ are isomorphic to the codes associated to the respective curves. We will denote by $G_X$ and $G_Y$ their respective kernels.
We denote by $S[X,Y]=\mathbb{F}_q[x_1,\ldots,x_s,y_1,\ldots,y_{s'}]$ and by $I_{XY}=\langle I, I',f_z,f_{z'}\rangle$. We will endow $R[X,Y]=S[X,Y]/\langle I,I'\rangle$ with the weight generated by the inherited vectors $$w_1=(w(x_1),\ldots,w(x_s),0,\ldots,0)\hbox{ and } w_2=(0,\ldots,0,w(y_1),\ldots,w(y_{s'})).$$
\begin{proposition}
If $\Delta(I)=\{M_0,\ldots, M_{l_1-1}\}$ and $\Delta(I')=\{M'_0,\ldots,M'_{l_2-1}\}$, then
$$\Delta(I_{XY})=\{M_0M'_0,M_0M'_1,\ldots,M_{l_1-1}M_{l_2-1}\}$$
and the rows of the matrix $G_X\otimes G_Y$ are evaluations of the elements in $\Delta(I_{XY})$
$$M\cdot M'(Q_i)=M(P_j)M'(P'_k)\ \ \ i=jl_2+k$$
in decreasing order w.r.t. the induced ordering.
\end{proposition}
\begin{IEEEproof}
The equality for $\Delta(I_{XY})$ is clear and the equality on the rows follows from the definition of the Kronecker product since
\begin{align*}
(G_X\otimes G_Y)_{i,j}&=(G_X)_{\lfloor i/l_2\rfloor,\lfloor j/l_2\rfloor}(G_Y)_{i\ \mathrm{mod}\ l_2,j\ \mathrm{mod}\ l_2}\\
&=M_{l^n-\lfloor i/l_2\rfloor}(P_{\lfloor j/l_2\rfloor})M'_{l^n-i\ \mathrm{mod}\ l_2}(P'_{j\ \mathrm{mod}\ l_2}).
\end{align*}
\end{IEEEproof}
Thus we have a set of monomials $\tilde{M}_i=M_{\lfloor i/l_2\rfloor}M'_{i\mathrm{mod}\ l_2}$ and $Q_i$ as before. Let $l=l_1l_2$ and consider the polar code constructed from the kernel $G_{XY}$. Now we will work on the polynomial ring $R[X_1,Y_1,X_2,Y_2,\ldots,X_n,Y_n]$.
We will define an ordering on $\mathbb{Z}$ as follows: $i\triangleleft j$ if and only if, writing $i=hl_2+k$ and $j=h'l_2+k'$, we have $h<h'$ and $k<k'$. With this new ordering our previous definitions translate easily.
\begin{itemize}
\item A code $C_{\mathcal{A}_n}$ with kernel $G_{XY}$ is called weakly decreasing if $\tilde{M}^n_k\in\mathcal{A}_n$ and $\tilde{M}^n_{k'}\mid\tilde{M}^n_k$ imply $\tilde{M}^n_{k'}\in\mathcal{A}_n$. As a corollary, a polar code over a SOF channel is weakly decreasing.
\item A code is decreasing if it is weakly decreasing and also $\tilde{M}_{i_1}(X_{j_1}Y_{j_1})\cdots\tilde{M}_{i_k}(X_{j_k}Y_{j_k})\in\mathcal{A}_n$ implies
$$\tilde{M}_{i_1}(X_{j'_1}Y_{j'_1})\cdots\tilde{M}_{i_k}(X_{j'_k}Y_{j'_k})\in\mathcal{A}_n$$
for $j'_v\leq j_v$, $v\in\{1,\ldots,k\}$. As before, this last property will be called the degrading property.
\end{itemize}
\begin{ex} From Corollary~\ref{expkron} we can see that a matrix $G^{\otimes n}$ has the same exponent as the original matrix $G$.
Let us consider the field $\mathbb{F}_4$, the field of rational functions in the variable $t$, and the Hermitian curve in $\mathbb{F}_4[x,y]$. We compute the kernels using $Q=P_\infty$ (for the Hermitian curve, the common pole of $x$ and $y$) and construct the matrices $G_H$ and $G_R$. The monomial bases we have to consider are
$$L_H=\{x^3y,x^2y,x^3,xy,x^2,x,y,1\},$$
$$L_R=\{t^3,t^2,t,1\}.$$
Therefore $G_H\otimes G_R$ is the evaluation of the monomials
\begin{align*}
L_{H\otimes R}=&\{x^3yt^3,x^3yt^2,x^3yt,x^3y,x^2yt^3,x^2yt^2,x^2yt,x^2y,\\
&x^3t^3,x^3t^2,x^3t,x^3,xyt^3,xyt^2,xyt,xy,\\ &x^2t^3,x^2t^2,x^2t,x^2,yt^3,yt^2,yt,y,\\
&xt^3,xt^2,xt,x,t^3,t^2,t,1\}.
\end{align*}
The rational curve has exponent $\frac{\ln 4!}{4\ln 4}$ and the Hermitian one has exponent $\frac{\ln(8\cdot 2\cdot 6!)}{8\ln 8}$, thus the kernel $G_H\otimes G_R$ has exponent
$$E(G_H\otimes G_R)\approx 0.5665\,.$$
If we construct a polar code from this kernel over a SOF channel and $n=1$ we have that if $xyt^2,x^2t^2\in\mathcal{A}_n$ then
$$xyt,xy,x^2t,x^2,yt^2,yt,y,xt^2,xt,x,t^2,t,1\in\mathcal{A}_n.$$
If those are the only elements in the code then we have a decreasing code with minimum distance $6$ (since the code associated to $xy$ has minimum distance 3 and the one associated to $t^2$ has minimum distance 2).
\end{ex}
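The exponent claimed in the example can be reproduced directly from the two closed forms quoted above and Corollary~\ref{expkron}; the following quick numerical check is illustrative only.

```python
import math

# Exponents of the two kernels, from the closed forms in the example.
E_R = math.log(math.factorial(4)) / (4 * math.log(4))          # rational curve, l = 4
E_H = math.log(8 * 2 * math.factorial(6)) / (8 * math.log(8))  # Hermitian curve, l = 8

# Corollary: E(G_H (x) G_R) = E(G_H)/log_8(32) + E(G_R)/log_4(32)
E_HR = E_H * math.log(8) / math.log(32) + E_R * math.log(4) / math.log(32)
print(round(E_HR, 4))  # 0.5665
```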
\section{Conclusion}
In this paper we have established a construction of polar codes from pointed algebraic curves for a
discrete memoryless channel which is symmetric w.r.t. the field operations. These results extend some results in \cite{Bardet} for a binary symmetric channel. Note that both the families of weak Castle and Castle curves provide good candidates for designing the proposed polar codes since they satisfy the conditions needed in the construction. Even if the nature of the results is mainly theoretical, we believe that they can contribute to a deeper understanding of polar codes over non-binary alphabets.
\section{Introduction}
Tsunamis are among the most dangerous natural disasters and have recently resulted in hundreds of thousands of fatalities and significant infrastructure damage (e.g. the 2004 Indian Ocean tsunami, the 2010 Chile tsunami, the 2011 Japan tsunami, and more recently the 2018 Indonesia tsunami).
A wide range of research has been conducted to simulate tsunami phenomena.
These studies are essential for evacuation planning \citep[e.g.,][]{scheer2012generic, lammel2010emergency},
designing vertical evacuation structures \citep[e.g.,][]{ash2015design,Gonzalez2013},
risk assessment \citep[e.g.,][]{gonzalez2013probabilistic,Adams2017,geist2006probabilistic,gonzalez2009probabilistic,annaka2007logic}
and early warning systems \citep[e.g.,][]{taubenbock2009last, liu2009tsunami}.
The latter two, in particular, can benefit from fast simulators.
For instance, Probabilistic Tsunami Hazard Assessment (PTHA) often requires thousands of simulations to generate tsunami hazard curves and maps, and some early warning systems depend on rapid simulation results to allow longer evacuation time.
The numerical modeling of tsunamis often includes modeling multiple phases in very different scales, including tsunami generation from the source \citep[e.g.,][]{nosov2014tsunami,okada1985surface},
long-distance propagation \citep[e.g.,][]{George2006thesis,choi2003simulation,titov1997implementation},
local inundation of coastal regions \citep[e.g.,][]{park2013tsunami,qin2017multi,qin2018comparison}
and its interaction with coastal structures \citep[e.g.,][]{motley2015tsunami,qin2018three,winter2017tsunami}.
Some computer codes use separate models for different phases of tsunamis, while some integrate the modeling of multiple phases into a single simulation \citep[e.g.,][]{titov1997implementation,zhang2008efficient,macias2016comparison}, facing the computational challenges induced by very different scales (from thousands of kilometers to tens of meters) in the problem.
The simulation speed can be increased by either reducing computational cost, using more powerful machines, or both.
Since being proposed by \citet{berger1984adaptive}, the Adaptive Mesh Refinement (AMR) algorithm has been shown to effectively reduce computational cost in the numerical simulation of multi-scale problems.
It can track features much smaller than the overall scale of the problem and adjust the computational grid during the simulation.
The algorithm has been implemented and developed into several frameworks and can be categorized into three major variants.
The first one is often referred to as patch-based or structured AMR \citep{Berger1989}. It allows rectangular grid patches of arbitrary size and any integer refinement ratio between two levels of grid patches \citep[e.g.,][]{hornung2002samrai,bryan2014enzo,clawpack,zhang2016boxlib,adams2015chombo}.
Another variant is the cell-based AMR, which refines individual cells and
often uses a quadtree or octree data structure to store the grid patch information.
The last variant is a combination of the first two, often referred to as block-based AMR.
Unlike the patch-based AMR, which stores the multi-resolution grid hierarchy as overlapping and nested grid patches, this approach stores the grid hierarchy as non-overlapping fixed-size grid patches, each of which is stored as a leaf in a forest of quadtrees or octrees \citep[e.g.,][]{burstedde2014forestclaw,burstedde2011p4est,fryxell2000flash,macneice2000paramesh}.
In the past two decades, the AMR algorithm has been extensively applied for geophysical applications \citep[e.g.,][]{leng2011implementation,burstedde2013large,LeVeque2011}.
In particular, tsunami models that simulate both large-scale transoceanic tsunami propagation and inundation of small-scale coastal regions save several orders of magnitude in computational cost by using AMR.
Another approach to increasing tsunami simulation speed is to use faster hardware and/or a greater degree of parallelism to take advantage of modern architectures.
Several codes parallelize tsunami modeling on multi-core CPUs \citep[e.g.,][]{pophet2011high}, and GeoClaw takes this approach via OpenMP.
ForestClaw \citep{burstedde2014forestclaw}, on the other hand, can simulate a tsunami on distributed-memory machines with MPI parallelism.
Recently, the use of Graphics Processing Units (GPUs), which are accessible even on general desktop PCs, has become increasingly popular in the scientific computing community.
Many researchers have reported decent speed-ups in simulating tsunamis using GPUs.
However, most of these earlier studies involved implementing the PDE solvers on grids with constant spatial resolution, including those of \citet{acuna2009real}, \citet{Lastra2009}, \citet{de2011simulation}, \citet{brodtkorb2012efficient}, \citet{de2013efficient}, \citet{smith2013towards}, and \citet{DeLaAsuncion2016}.
This approach may incur excessive computational cost when modeling transoceanic tsunami propagation, since no AMR is used.
The complexity of the AMR algorithm and data structure add challenges to the implementation of tsunami simulation model on the GPUs.
There have been some implementations of AMR algorithms on GPUs with applications to astrophysics \citep[e.g.,][]{wang2010adaptive,schive2010gamer}.
However, very few models for simulating tsunamis with AMR on the GPU have been developed.
Relevant work includes the simulation of landslide-generated tsunamis by \citet{de2017simulation}.
In this paper, a CUDA \citep{nickolls2008scalable} implementation of the patch-based AMR algorithm is developed and used to simulate tsunamis on the GPU.
The Godunov-type wave-propagation scheme with 2nd-order limiters is implemented to solve the nonlinear shallow water system with varying topography.
Both canonical Cartesian grid coordinates for modeling tsunamis in small regions and spherical coordinates for transoceanic tsunami propagation are supported.
The use of AMR adds challenges to the implementation, including dynamic memory structure creation and manipulation, balanced distribution of computing loads between the CPU and the GPU, and optimizations to minimize global memory access and maximize arithmetic efficiency in the GPU kernel.
Numerical experiments on several realistic tsunami modeling problems are conducted to illustrate the correctness and efficiency of the implementation, showing speed-ups from 3.6 to 6.4 when compared to the original model running in parallel on a 16-core CPU.
The paper is structured as follows.
Section \ref{sec:swe} gives an overview of the shallow water equations and numerical schemes implemented to solve them.
Section \ref{sec:amr} briefly reviews the AMR algorithm and how it is combined with the numerical scheme described in Section \ref{sec:swe}.
Section \ref{sec:implementation} describes the implementation details.
Section \ref{sec:results} shows the simulation results and performance statistics from several tsunami cases.
Finally the results are summarized in Section \ref{sec:conclusions}.
\section{Shallow Water Equation and Numerical Scheme}\label{sec:swe}
\subsection{Shallow Water Equation with Variable Topography}
The shallow water equations (SWEs) have been used broadly by many researchers in the modeling of tsunamis, storm surge, and flooding.
They can be written as a nonlinear system of hyperbolic conservation laws for water depth and momentum:
\begin{subequations}
\begin{align}
\label{eq:swe_1}
h_t + \left( hu \right)_x + \left( hv \right)_y & = 0, \\
\label{eq:swe_2}
\left( hu \right)_t + \left( hu^2 + \frac{1}{2}gh^2 \right)_x + \left( huv \right)_y &= -ghB_x - Dhu, \\
\label{eq:swe_3}
\left( hv \right)_t + \left( huv \right)_x + \left( hv^2 + \frac{1}{2}gh^2 \right)_y &= -ghB_y - Dhv,
\end{align}
\label{eq:swe}
\end{subequations}
where $u(x,y,t)$ and $v(x,y,t)$ are the depth-averaged velocities in the two horizontal directions, $B(x,y,t)$ is the bathymetry/topography, and $D = D(h,u,v)$ is the drag coefficient.
The subscript $t$ represents a time derivative, while
the subscripts $x$ and $y$ represent spatial derivatives in the two horizontal directions. The value of $B(x,y,t)$ is positive for topography above sea level and negative for bathymetry.
Coriolis terms can also be added to the momentum equations but are generally negligible for tsunami problems and are not used here.
The drag coefficient
used in the current implementation is
\begin{equation}
D(h,u,v) = n^2gh^{-7/3}\sqrt{u^2+v^2},
\end{equation}
where $n$ is the \textit{Manning coefficient} and depends on the roughness of the ground.
A constant value of $n=0.025$ is often used for tsunami modeling, and this
value is used for all benchmark problems in this study.
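As a minimal illustration of the friction term, the sketch below evaluates $D(h,u,v)$ with the Manning coefficient above; the dry-state guard and its tolerance are assumptions of this sketch, not GeoClaw's actual dry-cell treatment.

```python
import math

def drag_coefficient(h, u, v, n=0.025, g=9.81):
    """Manning-type drag D(h,u,v) = n^2 g h^(-7/3) sqrt(u^2 + v^2).

    Guards against the dry state h -> 0, where the formula blows up;
    the tolerance used here is an illustrative choice."""
    if h <= 1e-6:  # dry-cell tolerance (assumed value)
        return 0.0
    return n ** 2 * g * h ** (-7.0 / 3.0) * math.sqrt(u * u + v * v)
```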
\subsection{Finite Volume Methods and Wave Propagation Algorithm}
A one-dimensional homogeneous system of hyperbolic equations in non-conservative form can be written as:
\begin{equation}
q_t + A(q)q_x = 0.
\label{eq:hyperbolic_non_conservative}
\end{equation}
For this non-conservative form, the wave propagation algorithm \citep{LeVeque1997,rjl:fvmhp} can be used to update the solution:
\begin{equation}
\qinp = \qin - \frac{\Delta t}{\Delta x} \left( {\mathcal A}^+\Delta \Q_{i-1/2} + {\mathcal A}^-\Delta \Q_{i+1/2} \right),
\label{eq:wave_propagation_form}
\end{equation}
where ${\mathcal A}^+\Delta \Q_{i-1/2}$ is the net effect of all right-going waves propagating into cell $\mathcal{C}_i$ from its left boundary, and ${\mathcal A}^-\Delta \Q_{i+1/2}$ is the net effect of all left-going waves propagating into cell $\mathcal{C}_i$ from its right boundary.
Namely,
\begin{subequations}
\begin{align}
{\mathcal A}^+\Delta \Q_{i-1/2} & = \sum _{p=1}^m (\lambda ^p)^+ \mathcal{W}_{i-1/2}^p,\\
{\mathcal A}^-\Delta \Q_{i+1/2} & = \sum _{p=1}^m (\lambda ^p)^- \mathcal{W}_{i+1/2}^p,
\end{align}
\end{subequations}
where $m$ is the total number of waves, $\mathcal{W}^p$ is the $p$th wave from the Riemann problem, $\lambda ^p$ is the wave speed of the $p$th wave, and
\begin{equation}
(\lambda ^p)^+ = \max(\lambda ^p,0), \quad (\lambda ^p)^- = \min(\lambda ^p,0).
\end{equation}
The notations here are motivated by the linear case where $f(q) = Aq$.
In such a case, the waves are simply the decomposition of the initial jump into the eigenvector basis of the coefficient matrix $A$, propagating at speeds given by the eigenvalues:
\begin{equation}
q_r-q_l = \sum _{p = 1}^m \mathcal{W}^p = \sum _{p = 1}^m \alpha ^pr^p,
\end{equation}
where $q_r$ and $q_l$ are the right and left states of the Riemann problem, $r^p$ is the $p$th eigenvector of the matrix $A$, and $\alpha ^p$ is the coordinate in the direction of $r^p$.
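For the shallow water equations linearized about still water of depth $h_0$, this decomposition can be written out explicitly. The sketch below is illustrative only (the function `swe_fluctuations` and its interface are hypothetical, not GeoClaw code): it solves the linear Riemann problem for $A = \begin{pmatrix}0 & 1\\ gh_0 & 0\end{pmatrix}$ and returns the fluctuations ${\mathcal A}^-\Delta Q$ and ${\mathcal A}^+\Delta Q$.

```python
import math

def swe_fluctuations(h0, dq, g=9.81):
    """Linear Riemann solver for the SWE linearized about still water of
    depth h0, i.e. A = [[0, 1], [g*h0, 0]] with eigenvalues -c, +c where
    c = sqrt(g*h0), and eigenvectors r^1 = (1, -c), r^2 = (1, +c).
    Splits the jump dq = (d_eta, d_hu) into the two waves and returns the
    left-going fluctuation A^- dq and the right-going fluctuation A^+ dq."""
    c = math.sqrt(g * h0)
    d1, d2 = dq
    a1 = (c * d1 - d2) / (2 * c)     # strength of the wave with speed -c
    a2 = (c * d1 + d2) / (2 * c)     # strength of the wave with speed +c
    w1 = (a1, -a1 * c)               # W^1 = alpha^1 r^1
    w2 = (a2, a2 * c)                # W^2 = alpha^2 r^2
    amdq = (-c * w1[0], -c * w1[1])  # lambda^1 = -c (left-going)
    apdq = (c * w2[0], c * w2[1])    # lambda^2 = +c (right-going)
    return amdq, apdq
```

A first-order cell update then reads $Q_i^{n+1} = Q_i^n - \frac{\Delta t}{\Delta x}\left({\mathcal A}^+\Delta Q_{i-1/2} + {\mathcal A}^-\Delta Q_{i+1/2}\right)$, and one can check that the two fluctuations always sum to $A\,\Delta q$.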
The wave propagation form of Godunov's method (equation \ref{eq:wave_propagation_form}) is only first-order accurate and introduces a great amount of numerical diffusion into the solution.
This often smears out the steep gradients in the solution which are common in the surface elevation near the shoreline in tsunami simulations.
To obtain second-order resolution and maintain steep gradients, additional terms are added to equation \ref{eq:wave_propagation_form}:
\begin{align}
\begin{split}
\qinp = \qin & - \frac{\Delta t}{\Delta x} \left( {\mathcal A}^+\Delta \Q_{i-1/2} + {\mathcal A}^-\Delta \Q_{i+1/2} \right) \\
& - \frac{\Delta t}{\Delta x} \left( \tilde F_{i+1/2}^n - \tilde F_{i-1/2}^n \right).
\label{eq:wave_propagation_2nd_order}
\end{split}
\end{align}
The second-order correction terms are computed as
\begin{equation}
\tilde F_{i-1/2}^n = \frac{1}{2}\sum _{p=1}^m \left( 1-\frac{\Delta t}{\Delta x}|\lambda ^p| \right) |\lambda ^p| \widetilde{\mathcal{W}}^p_{i-1/2},
\end{equation}
where the time step index $n$ is dropped and the superscript $p$ refers to the wave family.
The wave $\widetilde{\mathcal{W}}^p_{i-1/2} = \Phi(\theta ^p_{i-1/2})\mathcal{W}^p_{i-1/2}$ is a limited version of the original wave $\mathcal{W}^p_{i-1/2}$, where $\theta^p_{i-1/2}$ is a scalar that measures the strength of wave $\mathcal{W}^p_{i-1/2}$ relative to waves in the same wave family arising from a neighboring Riemann problem:
\begin{equation}
\theta ^p_{i-1/2} = \frac{\mathcal{W}^p_{I-1/2} \cdot \mathcal{W}^p_{i-1/2}}{ \| \mathcal{W}^p_{i-1/2} \| ^2 },
\label{eq:theta}
\end{equation}
where the index $I$ represents the interface on the upwind side of interface $x_{i-1/2}$
\begin{equation}
I =
\begin{cases}
i-1, & \text{if } \lambda^p_{i-1/2} > 0, \\
i+1, & \text{if } \lambda^p_{i-1/2} < 0,
\end{cases}
\end{equation}
$\Phi(\theta)$ is a limiter function that gives values near 1 where the solution is smooth and close to 0 near discontinuities.
This property of a limiter function preserves second-order accuracy in regions where the solution is smooth while avoiding non-physical oscillations near discontinuities.
Note that computing limited waves adds complexity to the parallel algorithm implemented on the GPU since limited waves at one interface cannot be computed until its neighboring waves are solved.
This is detailed in section \ref{sec:implementation}.
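For a scalar wave the strength ratio in equation \ref{eq:theta} reduces to $\theta = \mathcal{W}_{I-1/2}/\mathcal{W}_{i-1/2}$. The sketch below uses the minmod limiter as one common choice of $\Phi$; the particular limiter and the function names are assumptions of this sketch, not necessarily the choices made in GeoClaw.

```python
def limited_wave(w_upwind, w):
    """Limit a scalar wave w: theta is the ratio of the upwind-side wave
    to this wave, and Phi is the minmod limiter (illustrative choice)."""
    if w == 0.0:
        return 0.0
    theta = w_upwind / w
    phi = max(0.0, min(1.0, theta))  # minmod limiter
    return phi * w

def correction_flux(lam, dtdx, w_upwind, w):
    """Second-order correction F~ = 1/2 |lambda| (1 - dt/dx |lambda|) W~."""
    wt = limited_wave(w_upwind, w)
    return 0.5 * abs(lam) * (1.0 - dtdx * abs(lam)) * wt
```

Note that near an extremum (where the upwind wave has opposite sign, $\theta < 0$) the limited wave vanishes and the scheme locally falls back to the diffusive first-order update, which is exactly the behavior described above.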
A two-dimensional hyperbolic system in non-conservative form
\begin{equation}
q_t + A(q)q_x + B(q)q_y = 0
\end{equation}
is a general extension of the one-dimensional hyperbolic system (equation \ref{eq:hyperbolic_non_conservative}) in two-dimensional space.
The Godunov-type finite volume algorithms discussed above can be naturally extended to two-dimensional space by dimensional splitting, which splits the two-dimensional problem into a sequence of one-dimensional problems.
The wave propagation algorithm now becomes
\begin{align}
\begin{split}
Q_{ij}^{*} = \qijn & - \frac{\Delta t}{\Delta x} \left( \apdqimhj + \amdqiphj \right) \\
& - \frac{\Delta t}{\Delta x} \left( \tilde F_{i+1/2,j}^n - \tilde F_{i-1/2,j}^n \right),
\label{eq:wave_propagation_2nd_order_x}
\end{split}
\end{align}
\begin{align}
\begin{split}
\qijnp = Q_{ij}^* & - \frac{\Delta t}{\Delta y} \left( \bpdqijmh + \bmdqijph \right) \\
& - \frac{\Delta t}{\Delta y} \left( \tilde G_{i,j+1/2}^n - \tilde G_{i,j-1/2}^n \right),
\label{eq:wave_propagation_2nd_order_y}
\end{split}
\end{align}
where $\apdqimhj$ and $\amdqiphj$ are the net effect of all waves propagating into cell $\mathcal{C}_{ij}$ from its left and right edges,
while $\tilde F_{i-1/2,j}^n$ and $\tilde F_{i+1/2,j}^n$ are 2nd-order correction fluxes through its left and right edges.
Similarly,
$\bpdqijmh$ and $\bmdqijph$ are the net effect of all waves propagating into cell $\mathcal{C}_{ij}$ from its bottom and top edges,
while $\tilde G_{i,j-1/2}^n$ and $\tilde G_{i,j+1/2}^n$ are 2nd-order correction fluxes through its bottom and top edges.
In GeoClaw, the shallow water equation is written in non-conservative form, which augments the system by introducing equations for topography and momentum flux \citep{LeVeque2011}.
An approximate Riemann solver has been implemented by \citet{George2008} to solve the Riemann problems at each cell interface for this augmented system.
This Riemann solver has some nice properties for tsunami modeling, including the capability of preserving steady state of the ocean, handling dry states in the Riemann problem, and maintaining non-negative depth in the solution.
The time step size $\Delta t$ for time integration must be chosen and adapted carefully at each time step if a variable time step is used, as is typical for tsunami modeling.
The Courant, Friedrichs and Lewy (CFL) condition implies that the time step size for a certain AMR level must satisfy
\begin{equation}
\nu \equiv \left| \frac{s \Delta t}{\Delta x} \right| \leq 1
\end{equation}
where $\nu$ is the CFL number and $s$ is the maximum wave speed seen at the AMR level.
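In practice the next step size is usually chosen so that the CFL number hits a target safely below 1; a minimal sketch (the target value of 0.9 is an assumed default, not necessarily the one used in GeoClaw):

```python
def stable_dt(smax, dx, cfl_target=0.9):
    """Choose dt so that nu = smax * dt / dx equals cfl_target < 1,
    given the maximum wave speed smax seen at this AMR level."""
    return cfl_target * dx / smax
```

If the wave speeds observed during a step turn out to violate the condition, a typical strategy is to retake the step with a smaller $\Delta t$.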
\section{Adaptive Mesh Refinement}\label{sec:amr}
The block-structured adaptive mesh refinement algorithm implemented in Clawpack is
described in numerous papers, including \citet{berger1984adaptive} and \citet{Berger1989}, and is only briefly summarized here.
A collection of rectangular grid patches are used to store the solution.
Grid patches at different levels have different cell sizes.
The coarsest grid patches (level 1) cover the entire domain.
Grid patches at level $l$+1 are finer than the coarser level $l$ grid patches by integer refinement ratios $r^l_x$ and $r^l_y$ in the two spatial directions, $\Delta x^{l+1} = \Delta x^l/r^l_x, \Delta y^{l+1} = \Delta y^l/r^l_y$, and cover sub-regions of the level $l$ grid patches.
In this study, the refinement ratios in the two spatial directions are always taken to be equal, $r^l_x = r^l_y$.
Typically, the time step size is also refined by the same factor for level $l$+1 grid patches, $\Delta t^{l+1} = \Delta t^l/r^l_t$, with $r^l_t = r^l_x = r^l_y$.
The higher-level grid patches are regenerated every $b$ time steps so that they move with features in the solution.
When level $l$+1 grid patches need to be regenerated, some cells at level $l$ are flagged for refinement based on some criterion (in GeoClaw, typically where the amplitude of the wave is above some specified tolerance, or in specified regions where higher refinement is required, for example near the target coastal location, or where the wave will affect the solution at the destination during the time interval of interest, as indicated by the backward adjoint solution \citep{davis2016adjoint}).
The flagged cells are then clustered into new rectangular grid patches, which usually include some cells that are not flagged as well, using an algorithm proposed by \citet{Berger1991}.
The algorithm tries to keep a balance between minimizing the number of grid patches and minimizing the number of unflagged cells that are included in the resulting rectangular grid patches.
The newly generated level $l$+1 grid patches get their initial solution from either copying data from existing old level $l$+1 grid patches or, if no such grid patch exists, interpolating from level $l$ grid patches.
We say the level $l$+1 patch cells are ``on top'' of some level $l$ cells that cover the same spatial region.
Note that the algorithm described below integrates the underlying level $l$ grid patches before the level $l+1$ patches. After updating the finer patches, any level $l$ cells under level $l+1$ cells have their values updated to the average of level $l+1$ cell values. After each regridding step, the new level $l+1$ patches need not cover the same level $l$ cells as previously.
\subsection{Time Integration} \label{sec:time_integration}
Each grid patch in the AMR grid hierarchy, despite different resolution, can be integrated in time with the wave-propagation form of Godunov's method described in the previous section.
Specifically, the following steps are applied recursively, starting from the coarsest grid patches at level $l=1$, as illustrated in a simple case in Figure \ref{fig:amr_time_integration}.
\begin{enumerate}
\item Advance the solution in all level $l$ grid patches at $t_n$ by one step of length $\Delta t^l$ to get solution at $t_n+\Delta t^l$.
\item Fill the ghost cells for all level $l+1$ grid patches, by either copying cell values from adjacent level $l+1$ grid patches if any exists, or interpolating in space and time from the cell values at level $l$ at $t_n$ and $t_n+\Delta t^l$ if no adjacent level $l+1$ grid patch exists.
Note that interpolation in time is generally required because the finer grids are integrated with smaller time steps.
\item Advance the solution at level $l+1$ for $r^l_t$ time steps such that solution at level $l+1$ is at the same time as solution at level $l$.
Each time level $l+1$ is advanced, this entire algorithm (step 1--5) is applied recursively to the next finer level (with $l$ replaced by $l+1$ in these steps) if additional level(s) exist.
\item For any grid cell at level $l$ that is covered by level $l+1$ grid cells, the solution $Q$ in that cell is replaced with an appropriate weighted average of the values from the $r^l_x r^l_y$ level $l+1$ cells on top. This is referred to below as the {\em updating process}.
\item For any grid cell at level $l$ that is adjacent to level $l+1$ grid cells, the solution $Q$ in that cell is adjusted to replace the value originally computed using fluxes found on level $l$ with the potentially more accurate value obtained by using level $l+1$ fluxes on the side of this cell adjacent to the level $l+1$ patch. This step also preserves conservation for certain problems and is referred to below as the {\em refluxing process}. (This step is dropped in our implementation, see below.)
\end{enumerate}
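The recursion in steps 1--4 (with the refluxing of step 5 dropped, as in our GPU implementation) can be sketched as follows; the `levels[l]` objects and their methods `step`, `fill_ghost`, and `average_down` are hypothetical stand-ins for the actual grid-patch data structures.

```python
def advance_level(levels, l, t, dt):
    """Recursively advance AMR level l (and all finer levels) from t to
    t + dt.  levels[l] holds all patches of one level and carries the
    integer refinement ratio r to the next finer level."""
    lev = levels[l]
    lev.step(t, dt)                           # 1. advance level l by dt
    if l + 1 < len(levels):
        fine, r = levels[l + 1], lev.r
        for k in range(r):                    # 3. r substeps on level l+1
            fine.fill_ghost(t + k * dt / r)   # 2. ghost cells from level l
            advance_level(levels, l + 1, t + k * dt / r, dt / r)
        lev.average_down(fine)                # 4. replace covered coarse cells
```

With three levels and refinement ratio 2, one call on the coarsest level results in 1, 2, and 4 steps on levels 1, 2, and 3 respectively, matching the ordering illustrated in Figure \ref{fig:amr_time_integration}.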
Step 5 is important for some problems where exact conservation is expected, e.g., of a conserved tracer or for strong shock waves in nonlinear problems, and is necessary in this case to avoid the use of different numerical fluxes at the same interface on the side of the fine patch and the side where it abuts a coarser grid.
However, this step requires storing additional flux information at every time step and communicating this information between levels and was found to have a large negative effect on the ability to speed up the code on the GPU.
We also found that this refluxing step has very little effect on the numerical results obtained for tsunami modeling (as shown in Section \ref{sec:japan2011}). In this application we do not expect conservation of momentum at any rate (due to the topographic and friction source terms) and even conservation of mass is sacrificed when AMR is applied to a cell near the coast (as described in \citet{LeVeque2011}). For these reasons we omit Step 5 in the GPU implementation.
This greatly helps to optimize the logistics of the code and achieve very impressive performance in the benchmark problems, while only introducing negligible changes to the solution.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{./figures/amr_time_integration.png}
\caption{Advancing the coarsest level by one time step, for an AMR hierarchy with 3 levels of grid patches.
The refinement ratio is 2 for both level $l=1$ and level $l=2$.
Each black horizontal arrow in a solid line represents taking one time step on a specific level.
Each blue vertical arrow in a dashed line represents one updating process that averages the solution from a fine level onto a coarse level, together with the refluxing process that preserves global conservation.
The numbers (1) to (10) indicate the order in which the operations are performed.
}
\label{fig:amr_time_integration}
\end{figure}
\subsection{Regridding}
Every time a level is advanced by $b$ time steps, regridding based on this level is conducted (except on the finest allowed level).
Typically, $b$ is chosen as 2--4 in tsunami modeling.
A larger $b$ results in less frequent regridding, which reduces time spent on the regridding process.
However, in order to ensure that waves in the solution do not propagate beyond the refined region before the next regridding, an extra layer of $b$ cells surrounding the originally flagged cells is usually also flagged.
This makes each grid patch $2b$ cells wider in each of the two horizontal dimensions and thus introduces more cells, which increases the computational time of time integration.
In the regridding process, cells must be flagged before they are clustered into new grid patches.
A variety of different flagging criteria have been implemented, including flagging based on the slope of the sea surface, sea surface elevation, or adjoint methods.
For all the benchmarks in this study, the sea surface elevation is used for flagging.
In addition to this, some spatio-temporal regions might also be specified to enforce flagging in these regions, which is very useful for problems where both the transoceanic propagation and local inundation of a tsunami must be modeled, and thus require grid cells that are $O(10^3) \sim O(10^4)$ finer than the coarsest resolution in some near shore regions like a harbor or bay.
During regridding, if a newly generated grid cell cannot copy values from old cells at the same level, its initial value must be interpolated from coarser levels.
In the updating process, coarse cell values get updated with the appropriate averaged value of fine grid cells on top.
An important requirement for both the interpolation and averaging strategies in tsunami modeling is to maintain the steady state of the ocean at rest, since refinement generally occurs before the tsunami waves arrive in an undisturbed area of the ocean.
For areas far from the shoreline, the interpolation strategy can be simple linear interpolation, and the averaging strategy is to average the surface elevation of the fine cells arithmetically and then compute the depth in each coarse cell from the topography and the averaged surface elevation.
However, near the shoreline where one or more cells is dry, it is impossible to maintain conservation of mass and also preserve the flat sea surface during the interpolation or averaging in some circumstances.
Details of the strategies can be found in \citet{LeVeque2011}.
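A minimal sketch of the surface-elevation averaging for the fully wet case (the dry-cell clipping shown is only an illustrative choice; see \citet{LeVeque2011} for the actual near-shore strategy):

```python
def average_down(h_fine, B_fine, B_coarse):
    """Average fine cells onto one coarse cell in a way that preserves a
    flat sea surface: average the surface elevation eta = h + B rather
    than the depth h, then recover the coarse depth from the coarse-cell
    topography.  Sketch for wet cells; dry results are simply clipped."""
    etas = [h + B for h, B in zip(h_fine, B_fine)]
    eta_avg = sum(etas) / len(etas)
    return max(0.0, eta_avg - B_coarse)  # coarse depth, clipped at dry
```

For an ocean at rest ($\eta = 0$ everywhere) this returns exactly the depth $-B_{\rm coarse}$, so refinement and coarsening in undisturbed water introduce no spurious waves.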
\section{A Hybrid CPU/GPU Implementation}\label{sec:implementation}
One very basic question to answer in designing a GPU implementation of some code is which part of the program should be done by the CPU and which part should be done by the GPU.
In the current implementation we have put the Riemann solvers, wave limiter and CFL reduction on the GPU while letting the CPU take care of the rest, including the updating process, regridding process, filling ghost cells, and updating gauge values (finding the best grid patch to sample quantities of interest from, interpolating from cell values, and output), etc.
Since the GPU and the CPU considered in the study have separate physical memory, this design requires the transfer of solution data on each grid patch back and forth between the GPU and CPU memory through a PCI express 2.0 interface, which has relatively low bandwidth compared to the main memory of the CPU and the GPU.
However, as we show later in the benchmark results, the extra time introduced by such data transfer takes less than $5\%$ of the total running time, since these operations can be carefully hidden by performing other operations concurrently.
If we instead put all procedures on the GPU, although it might save time by eliminating much of the data transfer, the code might suffer from having tiny GPU kernels that add significant overhead to the running time, and running inefficient GPU kernels that can be even slower than the CPU counterpart.
One example is filling ghost cells on the GPU.
Each GPU kernel that fills ghost cells for a two-dimensional grid patch of $a$ by $a$ cells can have parallelism of $O(a)$ at most ($2a$ ghost cells to fill on each side), which is much less than the parallelism exposed in time integration of the same grid patch, which has parallelism of $O(a^2)$ and often involves much more computation.
The overhead of launching a GPU kernel is almost a fixed amount of time regardless of the actual execution time of the kernel.
This overhead is often much longer than the execution time of a tiny GPU kernel like the GPU kernel that would be needed for filling ghost cells.
As a result, the total cost (including kernel launching overhead and kernel execution) of doing such operations on the GPU can be even higher than the cost of doing so on the CPU in some cases.
In addition to the consideration from kernel launching overhead, code that puts all procedures on the GPU wastes CPU computational resources.
Since a typical machine considered in this study consists of a multi-core CPU and a GPU, ideally the work load of the entire program would be distributed between the two as evenly as possible, such that the CPU stays busy as much as possible during the entire execution and so does the GPU.
In Section \ref{sec:results}, two metrics are proposed to measure such characteristics of a code in numerical experiments.
\subsection{Procedure Dependencies and Concurrent Execution}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{./figures/dependency_graph.png}
\caption{Dependency graph of some major procedures in non-AMR portion of the code.
The color indicates the hardware resource a procedure requires.
}
\label{fig:dependency_graph}
\end{figure}
Figure \ref{fig:dependency_graph} shows the major procedures of the code, hereafter referred to as the {\em non-AMR portion} of the code. (Omitted are the regridding and updating processes, essential components of AMR.)
An arrow from procedure $A$ to procedure $B$ indicates that procedure $A$ must be finished before procedure $B$ can start.
The color indicates the type of hardware resource a procedure needs.
Four colors represent the four major types of hardware resource involved in the execution of the code.
A blue block uses one CPU core, a green block uses the GPU Streaming Multiprocessors, a red block uses the memory transfer engine that transfers the data from the CPU memory to the GPU memory, and a purple block uses the memory transfer engine that transfers data in the opposite way.
Note that the two transfer engines are separate pieces of hardware, but there is only one engine for each direction.
Any two procedures without dependency can be done concurrently as long as relevant hardware resources are available.
The dependencies in the current implementation are enforced through a combination of rearrangement of CPU procedures and GPU kernel launches, use of OpenMP directives and CUDA streams, and proper synchronization between CPU threads and between the CPU and the GPU.
Figure \ref{fig:time_line} shows an example of these procedures being processed concurrently by four types of hardware on a machine with a three-core CPU.
Procedures follow the dependency specified in figure \ref{fig:dependency_graph}.
Procedures that use the same hardware resource must wait in queue for the hardware to become available.
Note that procedures that need CPU cores can use any available CPU core, so procedures processed by different CPU cores can be executed concurrently.
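The scheduling behavior described above can be modeled in a few lines. The sketch below (procedure names such as \texttt{copy\_h2d} and the function \texttt{run\_graph} are our own illustration, not the actual code, which enforces dependencies with OpenMP directives, CUDA streams, and explicit synchronization) runs each procedure as soon as its prerequisites have finished and a worker is free:

```python
# Toy dependency-driven scheduler: each procedure starts as soon as its
# prerequisites have finished, mirroring the concurrency rules above.
# Procedure names below are hypothetical.

import threading
from concurrent.futures import ThreadPoolExecutor

def run_graph(deps):
    """deps: procedure -> list of prerequisite procedures."""
    done = {p: threading.Event() for p in deps}
    order, lock = [], threading.Lock()

    def task(p):
        for d in deps[p]:
            done[d].wait()              # block until prerequisite finishes
        with lock:
            order.append(p)             # "execute" the procedure
        done[p].set()

    # One worker per procedure so a blocked task can never starve the
    # task it is waiting for (a production scheduler would use a ready
    # queue instead).
    with ThreadPoolExecutor(max_workers=len(deps)) as pool:
        for f in [pool.submit(task, p) for p in deps]:
            f.result()                  # propagate any exception
    return order

deps = {"copy_h2d": [], "fill_ghost": [],
        "advance": ["copy_h2d", "fill_ghost"],
        "copy_d2h": ["advance"]}
order = run_graph(deps)                 # "copy_d2h" always comes last
```

Running the toy graph returns one valid execution order; any order consistent with the arrows of figure \ref{fig:dependency_graph} may occur in practice.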
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{./figures/time_line.png}
\caption{
An example of different procedures in non-AMR portion of the code running concurrently along the timeline.
}
\label{fig:time_line}
\end{figure}
In figure \ref{fig:time_line}, during the entire time period when the red block for grid patch 12 is processed by one of the two memory transfer engines, the GPU Streaming Multiprocessors are processing the green block for grid patch 14.
In this case, transferring the solution data of grid patch 12 from the CPU memory to the GPU memory does not induce any extra cost.
During part of the red block for grid patch 14, however, no GPU computation is conducted and the GPU Streaming Multiprocessors are idle, for instance because the data they need is not yet available on the GPU.
This time segment does induce extra cost due to transferring data between the CPU memory and the GPU memory.
Later in section \ref{sec:results}, such extra cost will be quantified to reveal the influence of transferring data between the CPU memory and the GPU memory on performance of the code.
Two additional metrics will also be defined and measured later in section \ref{sec:results}, the proportion of time during which the CPU has some work to do instead of waiting for the GPU to finish, and the proportion of time during which GPU Streaming Multiprocessors are doing computations.
\subsection{Memory Pool}
During regridding, new grid patches are generated and old ones are destroyed.
New memory must be allocated on both the CPU and the GPU for storing solution data and auxiliary data for the new grid patches, while old memory for removed grid patches must be freed.
The total overhead of calling the CUDA runtime library to conduct these frequent memory operations cannot be neglected, and can even dominate the execution time when grid patches are so small that the overhead is large relative to the time spent advancing the solution on them.
To save the cost of such frequent memory operations, a memory pool is implemented: it requests a huge chunk of memory from the system through the CUDA runtime library at start-up, keeps it until the end of execution, and requests additional chunks only when needed.
All memory allocation and deallocation requests from the code then go through this memory pool at much lower cost, with no need to actually allocate or free system memory.
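A minimal host-side model of such a pool is sketched below; the class name and the per-size free-list policy are our own illustration (the real pool manages GPU memory obtained via \texttt{cudaMalloc}):

```python
# Minimal memory-pool sketch: one big slab obtained up front, a bump
# pointer for fresh allocations, and per-size free lists for reuse.
# In the real code the slab would come from cudaMalloc; here it is
# just a range of offsets.

class MemoryPool:
    def __init__(self, capacity):
        self.capacity = capacity   # size of the initial slab
        self.offset = 0            # bump pointer into the slab
        self.free_lists = {}       # size -> list of recycled offsets

    def allocate(self, size):
        # Reuse a previously freed block of the same size if available.
        blocks = self.free_lists.get(size)
        if blocks:
            return blocks.pop()
        # Otherwise carve a new block off the slab.
        if self.offset + size > self.capacity:
            raise MemoryError("pool exhausted; the real pool grabs another chunk")
        ptr = self.offset
        self.offset += size
        return ptr

    def free(self, ptr, size):
        # No call into the CUDA runtime: just remember the block.
        self.free_lists.setdefault(size, []).append(ptr)

pool = MemoryPool(capacity=1 << 20)
a = pool.allocate(4096)   # carved from the slab
pool.free(a, 4096)
b = pool.allocate(4096)   # recycled: same offset, no CUDA runtime call
```

Only the initial slab request (and any later chunk requests) touch the CUDA runtime; every other allocation or free is a cheap bookkeeping operation.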
\subsection{Efficient Design of the Solver Kernels}
\subsubsection{The CUDA Programming Model}
The current implementation is based on the CUDA programming model and targets Nvidia GPUs.
The architecture of Nvidia GPUs and the CUDA programming model are detailed in the Nvidia CUDA C programming guide \citep{nvidia}.
Here only a brief review is given to provide sufficient knowledge for understanding the implementation details introduced in this section.
In the CUDA programming model, each function that is written to run on the GPU is called a CUDA kernel (or GPU kernel).
The code in a CUDA kernel specifies a set of instructions to be executed by multiple CUDA threads in parallel.
The code can specify that some of the instructions should be executed by a certain group of threads but not the others.
All threads assigned to execute a CUDA kernel are grouped into CUDA blocks.
All such CUDA blocks then form a CUDA grid.
CUDA blocks are independent of each other, can be sent to different Streaming Multiprocessors, and run concurrently.
During the execution of a CUDA kernel, each thread is provided with information regarding which CUDA block and which thread within that block it is.
Based on this information, each thread can perform its own set of instructions on a specific portion of the data.
The GPU has many different types of hardware for data storage.
The three relevant types here are registers, shared memory and main memory.
The registers are the fastest storage and have very low access latency but each Streaming Multiprocessor has a very limited number of registers.
Each thread is assigned its own registers and thus can only access its own registers (unless special instructions are used).
The shared memory has lower bandwidth and longer access latency than the registers, but its bandwidth is still much higher and its access latency much shorter than those of the main memory.
Each CUDA block is assigned a specified amount of shared memory, which is accessible by all CUDA threads in the CUDA block.
The quantity of registers and shared memory in the Streaming Multiprocessor is limited and fixed.
As a result, the number of CUDA blocks that can reside in a Streaming Multiprocessor at the same time is limited by the total number of registers and the amount of shared memory these blocks request.
If too few CUDA blocks can reside in a Streaming Multiprocessor at the same time, the Streaming Multiprocessor has low \textit{occupancy} and thus runs the CUDA kernel less efficiently.
For this reason, it is important to minimize the number of registers used by each thread and the amount of shared memory used by each CUDA block when the CUDA kernel is designed.
The main memory is located the farthest from the chip and thus has the lowest bandwidth and the longest access latency.
In principle, a CUDA kernel should perform a minimal number of reads and writes to main memory, especially given that stencil computations for partial differential equations are often memory-bandwidth bound.
A simple idea in CUDA kernel design is to load all data as efficiently as possible from the main memory to the shared memory, conduct all computations using the shared memory as a buffer to avoid unnecessary accesses of main memory, and then write new data back to the main memory.
However, this often causes too much shared memory usage for each CUDA block and results in very inefficient execution of the CUDA kernel.
\subsubsection{Data Layout}
Every 32 threads within a CUDA block are grouped as a warp, which executes the same instruction at the same time, including memory load and write operations.
The hardware can execute memory request from all threads in a warp most efficiently if they access a contiguous piece of memory.
This is called a coalesced access.
Such characteristic of the GPU hardware makes Structures-of-Arrays (SoA) preferable over Arrays-of-Structures (AoS).
With the SoA layout, each state variable, e.g. the water depth $h$, is stored contiguously in memory across the entire grid patch,
whereas with the AoS layout, all state variables within the same grid cell are stored contiguously in memory.
The AoS layout therefore results in strided access of the GPU memory: consecutive CUDA threads access memory locations that are not consecutive.
This can greatly reduce effective memory bandwidth since the memory accesses cannot be coalesced.
The current implementation contributes to the Clawpack eco-system \citep{Mandli2016}, which uses an AoS layout since Fortran arrays are dimensioned so that $q(m,i,j)$ is the $m$th component (depth or momenta) in the $(i,j)$ grid cell.
However, many applications within the Clawpack eco-system will be affected if the AoS data layout is changed.
Thus we continue to use the AoS layout in the current implementation.
In the first half of the dimensional splitting method, the CUDA kernel that solves the equation in the $x$ direction reads data in the AoS layout but writes the intermediate solution in the SoA layout, so the writes are coalesced.
The CUDA kernel that solves the equation in the $y$ direction then reads the data in the SoA layout in a coalesced manner and writes the new solution back in the AoS layout.
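The difference between the two layouts is just index arithmetic. The small sketch below (variable and function names are ours) shows why consecutive threads reading the same state variable get strided addresses under AoS but consecutive, coalescable addresses under SoA:

```python
# Flattened-index arithmetic for a patch of `ncells` cells with
# `nvars` state variables (e.g. h, hu, hv). Names are illustrative.

def aos_index(m, i, nvars):
    # Arrays-of-Structures: the variables of one cell are contiguous,
    # like Fortran q(m,i,j) with m varying fastest.
    return i * nvars + m

def soa_index(m, i, ncells):
    # Structures-of-Arrays: one variable over all cells is contiguous.
    return m * ncells + i

nvars, ncells = 3, 8
# Addresses touched by four consecutive threads reading variable m = 0:
aos = [aos_index(0, i, nvars) for i in range(4)]    # stride nvars
soa = [soa_index(0, i, ncells) for i in range(4)]   # unit stride (coalesced)
```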
\subsubsection{CUDA Kernel Implementations}
In designing a CUDA kernel, one essential goal is to assign computational tasks to each thread.
To perform time integration on grid patches with Godunov-type wave-propagation methods, one must decide how to distribute among the threads the tasks of solving the Riemann problems at the cell edges, limiting the waves, and updating the cell values.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{./figures/1d_slice.png}
\caption{
A one-dimensional slice of a grid patch along the $x$ direction.
The arrows in dashed lines represent waves from the Riemann problems at the cell edges.
}
\label{fig:slice_1d}
\end{figure}
Figure \ref{fig:slice_1d} shows a one-dimensional slice of a grid patch along the $x$ direction when the first step of the dimensional splitting method is conducted to get the intermediate state $Q^*$.
Updating cell $C_{j}$ depends on the two sets of waves at cell edges $x_{j-1/2}$ and $x_{j+1/2}$.
When waves are limited, the waves at cell edge $x_{j-1/2}$ depend on the waves at cell edges $x_{j-3/2}$ and $x_{j+1/2}$, while the waves at cell edge $x_{j+1/2}$ depend on the waves at cell edges $x_{j-1/2}$ and $x_{j+3/2}$.
Each of these waves depends on the two cell values adjacent to its edge.
As a result, the solution at cell $C_{j}$ depends on four neighboring sets of waves, which in turn depend on the 5 cell values around cell $C_j$ (including itself).
If each CUDA thread is assigned to update a cell in this one-dimensional slice, it needs to solve the four Riemann problems the cell depends on.
Redundant work is then performed, since some of these Riemann problems are solved, and some waves limited, by neighboring CUDA threads as well.
On the other hand, if each thread is assigned to solve a Riemann problem at one cell edge in this one-dimensional slice, limit the waves, and then update the two neighboring cells with the left- and right-going waves, the code must carefully avoid data races, since each cell is updated by two CUDA threads.
This typically requires a synchronization mechanism such as a lock, which decreases the execution efficiency of the CUDA kernel.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{./figures/mapping.png}
\caption{
Assign CUDA threads to grid cells and cell edges.
}
\label{fig:mapping}
\end{figure}
The current implementation combines the two ideas above.
Figure \ref{fig:mapping} shows how CUDA threads are first assigned to cell edges for solving Riemann problems and limiting waves, and then re-assigned to grid cells for updating the solution.
Each solid arrow denotes assigning one CUDA thread on a cell edge.
The thin arrows show the initial assignment to edges, while the thick arrows show the final assignment to cells.
In the first stage, each CUDA thread is assigned to a cell edge to solve the Riemann problem there.
The left and right state for the Riemann problem for a thread is loaded from the main memory, while resulting waves from the Riemann problem are written into the shared memory.
In the second stage, each CUDA thread limits its waves to get correction fluxes at the cell edge it is assigned to.
This requires reading waves from the two neighboring edges, which were produced by the two neighboring CUDA threads and stored in the shared memory.
Each thread then writes the limited waves (correction fluxes) back to the shared memory.
In the last stage, each CUDA thread is assigned to update a cell (thick arrows).
At this time, each thread already has left-going waves and correction fluxes at the right edge of its cell in its registers, which can be directly applied to update the cell value.
The right-going waves and correction fluxes at the left edge of the same cell were produced by its left neighboring thread and stored in the shared memory in the previous stage.
Thus each thread needs to read in these waves and fluxes from the shared memory and apply them to update the cell it is assigned to.
Each thread then writes the updated value $Q^*$ back to the main memory.
A similar kernel is then launched for the second step of the dimensional splitting method to obtain the new state $Q^{n+1}$.
This implementation requires a kernel to read solution data from the main memory only once at the beginning of the kernel execution and write the updated solution back to the main memory only once at the end of the kernel execution.
This is done by using only a reasonable number of GPU registers for each thread and a reasonable amount of shared memory for each CUDA block.
The registers used by each CUDA thread hold only the left and right states of one Riemann problem, the waves and wave speeds from that problem, and any intermediate variables created while solving it,
while the shared memory holds only the waves of one Riemann problem per thread in a CUDA block.
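The three stages can be modeled serially for a scalar advection equation, where each Riemann problem yields a single wave. In the sketch below (illustrative only, not GeoClaw's actual solver), wave limiting is omitted for brevity, a list stands in for shared memory, and local variables stand in for registers:

```python
# Serial model of the per-edge / per-cell thread assignment on a 1-D
# slice, for scalar advection q_t + u q_x = 0 with u > 0, so each
# Riemann problem yields a single right-going wave.

def advance_slice(q, u, dt_dx):
    n = len(q)
    shared = [0.0] * n              # one wave slot per interior edge

    # Stage 1: one "thread" per interior edge solves its Riemann
    # problem and stores the wave in shared memory.
    for edge in range(1, n):
        wave = q[edge] - q[edge - 1]    # held in this thread's registers
        shared[edge] = wave

    # Stage 2 (limiting) would read the two neighboring waves from
    # `shared` and write back correction fluxes; skipped here.

    # Stage 3: threads are re-assigned to cells. With u > 0 every wave
    # is right-going, so cell c is updated with the wave at its left
    # edge, read back from shared memory.
    q_new = q[:]
    for cell in range(1, n):
        q_new[cell] = q[cell] - dt_dx * u * shared[cell]
    return q_new    # cell 0 acts as a fixed inflow/ghost cell
```

For this scalar case the staged update reduces to first-order upwinding; the staging itself, not the solver, is what carries over to the CUDA kernels.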
\section{Numerical Results}\label{sec:results}
This section focuses on evaluating the performance of the current GPU implementation.
The four machines used for benchmarking, two for the current GPU implementation (machines 1 and 2) and two for the original CPU implementation (machines 3 and 4), are listed below.
\begin{enumerate}
\item a single Nvidia Kepler K20x GPU with a 16-core AMD Opteron 6274 CPU running at 2.2 GHz as the host;
\item a single Nvidia TITAN X (Pascal) GPU with a 20-core Intel E5-2698 CPU running at 2.2 GHz as the host (but only 16 CPU threads are used for fair comparison with others);
\item a single 16-core AMD Opteron 6274 CPU running at 2.2 GHz;
\item a single 16-core Intel Xeon E-2650 CPU running at 2.0 GHz;
\end{enumerate}
As shown in the previous sections, the GPU implementation consists of jobs that are done by the CPU and jobs that are done by the GPU.
In the benchmarks, the CPU implementation always runs in parallel with 16 OpenMP threads, using 16 CPU cores.
The CPU part of the GPU implementation is also always processed in parallel by 16 OpenMP threads, using 16 CPU cores.
The GPU implementation solves the benchmark problems on machine 1 and 2 while the CPU implementation solves the same benchmark problem on machine 3 and 4.
Note that machine 1 and machine 3 have the same AMD CPU, while machine 2 and machine 4 have similar Intel CPUs.
Thus in this section, all speed-ups will be computed by comparing results on machine 1 to results on machine 3 and comparing results on machine 2 to results on machine 4.
We propose three metrics that measure the absolute performance of the current GPU implementation, without requiring a comparison between a GPU implementation and a CPU implementation.
We first define some quantities used in the definition of the three metrics.
Along the program execution time line $t\in\mathds{R}^+$, define $E^{GPU}_{i} = [t^{GPU}_{i,start}, t^{GPU}_{i,stop}]$ as the time interval that the $i$th GPU computation event (e.g. one of the green blocks in figure \ref{fig:time_line}) happens.
$E^{GPU}_i$ is essentially a set of all moments that the $i$th GPU computation event is happening.
Similarly, define $E^{CPU}_{i}$, $E^{h2d}_{i}$ and $E^{d2h}_{i}$ for the $i$th CPU computation, the $i$th memory transfer from the CPU to the GPU memory and the $i$th memory transfer from the GPU to the CPU memory, respectively.
Then, all time intervals during which the GPU is doing computation, $\Omega ^{GPU}$, can be represented as $\Omega ^{GPU} = \bigcup_{i=1}^{N^{GPU}} E^{GPU}_i$, where $\bigcup$ is the union operation for sets and $N^{GPU}$ is total number of GPU computation events.
Similarly, we define another three sets of intervals for the other three types of events.
All time intervals during which the CPU is doing computation, $\Omega ^{CPU}$, can be represented as $\Omega ^{CPU} = \bigcup_{i=1}^{N^{CPU}} E^{CPU}_i$, where $N^{CPU}$ is total number of CPU computation events.
All time intervals during which the memory transfer from the CPU memory to the GPU memory is happening, $\Omega ^{h2d}$, can be represented as $\Omega ^{h2d} = \bigcup_{i=1}^{N^{h2d}} E^{h2d}_i$, where $N^{h2d}$ is total number of such memory transfers.
All time intervals during which memory transfer from the GPU memory to the CPU memory is happening, $\Omega ^{d2h}$, can be represented as $\Omega ^{d2h} = \bigcup_{i=1}^{N^{d2h}} E^{d2h}_i$, where $N^{d2h}$ is the total number of such memory transfers.
Lastly, define $E^{total} = [t_{start},t_{end}]$ as the time interval that the entire program runs.
The first metric measures the proportion of time during which the GPU is doing computation, defined as
\begin{equation}
P_1 = \frac{|\Omega ^{GPU}|}{|E^{total}|},
\end{equation}
where $|\Omega|$ represents size of the set $\Omega$, which essentially computes the total length of all time intervals in $\Omega$ in this case.
Similarly, the second metric is defined as
\begin{equation}
P_2 = \frac{|\Omega ^{CPU}|}{|E^{total}|},
\end{equation}
which measures proportion of time during which the CPU is doing computation.
The last metric measures the proportion of extra time introduced by transferring data between the CPU memory and the GPU memory
\begin{equation}
P_3 = \frac{|(\Omega ^{h2d} \bigcup \Omega ^{d2h}) - (\Omega ^{CPU} \bigcup \Omega^{GPU})|}{|E^{total}|},
\end{equation}
where $-$ is the subtract operation for sets.
For two sets $A$ and $B$, $A-B$ denotes all elements in $A$ but not in $B$.
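Given recorded $(start, stop)$ pairs for each event type, the three metrics can be computed by merging intervals. The following sketch (function names are ours) implements the set operations above:

```python
# Computing P1, P2, P3 from recorded event intervals. Each event list
# holds (start, stop) pairs; unions are formed by sorting and merging.

def merge(intervals):
    """Union of intervals as a list of disjoint (start, stop) pairs."""
    out = []
    for s, e in sorted(intervals):
        if out and s <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], e))
        else:
            out.append((s, e))
    return out

def length(intervals):
    return sum(e - s for s, e in merge(intervals))

def subtract(a, b):
    """Portions of intervals in `a` not covered by any interval in `b`."""
    result = []
    for s, e in merge(a):
        cur = s
        for bs, be in merge(b):
            if be <= cur or bs >= e:
                continue
            if bs > cur:
                result.append((cur, bs))
            cur = max(cur, be)
        if cur < e:
            result.append((cur, e))
    return result

def metrics(gpu, cpu, h2d, d2h, total):
    t = total[1] - total[0]
    p1 = length(gpu) / t
    p2 = length(cpu) / t
    # P3: transfer time not hidden behind CPU or GPU computation.
    p3 = length(subtract(h2d + d2h, cpu + gpu)) / t
    return p1, p2, p3
```

For example, a transfer over $[3,6]$ overlapping GPU work on $[0,4]$ and CPU work on $[5,8]$ contributes only the uncovered segment $[4,5]$ to $P_3$.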
\subsection{2011 Japan Tsunami}\label{sec:japan2011}
\subsubsection{Problem Setup}
The first benchmark problem is the 2011 Japan tsunami, which was triggered by the magnitude 9.0--9.1 earthquake off the Pacific coast of Tōhoku that occurred at 14:46 JST (05:46 UTC) on Friday, March 11th, 2011.
The earthquake source deformation files were obtained from NOAA Pacific Marine Environmental Laboratory (PMEL).
They were not initially on a uniform latitude-longitude grid, and were converted to deformation information on uniform grids for use in our implementation.
The computational domain is from longitude $-240$ to $-100$ and latitude $-31$ to 65 in spherical coordinates.
Three levels of refinement are set across the ocean and around the source region (before getting close to the destination).
Starting from the coarsest level (level 1), which has a resolution of 2 degrees, the refinement ratios are 5 and 6, giving a resolution of 24 minutes on level 2 and 4 minutes on level 3.
A refinement tolerance parameter can be specified to guide the mesh refinement.
The smaller this parameter is set, the more likely the grid will be refined to the highest level allowed in a particular region.
The refinement tolerance parameter is chosen to be the wave amplitude and is set to $0.005$ meter.
Thus, if a region is allowed to use any of the resolutions above (2 degrees, 24 minutes, 4 minutes), the region will be refined up to a maximum of level 3 when the amplitude of a wave is higher than $0.005$ meters.
In addition to specifying a tolerance for flagging individual cells, regions of the domain can be specified so that all cells in the region, over some time interval also specified, will be refined to at least some level and at most some level.
In the simulation, the three refinement levels mentioned above are allowed in the entire region, with other constraints in specific sub-regions.
In the first 7 hours after the earthquake, a 4-minute resolution (level 3) in the region from longitude $-231$ to $-170$ and from latitude 18 to 62 is enforced.
This is reverted to the choices of 2 degrees or 24 minutes after 7 hours when the wave amplitudes are below tolerance.
Then moving onward toward the destination, a 4-minute resolution in the region from longitude $-170$ to $-120$ and from latitude 18 to 62 is enforced starting at 7 hours till the end of the 13-hour simulation.
Near Crescent City, the destination we are interested in, three finer refinement regions are enforced to resolve smaller-scale flow features near the coast:
\begin{enumerate}
\item Level 4 with 1-minute resolution is enforced starting from 8 hours after the earthquake, in the region from longitude $-126.995$ to $-123.535$ and from latitude 40.515 to 44.495.
\item Level 5 with 12-second resolution is enforced starting from 8 hours after the earthquake, in the region from longitude $-124.6$ to $-124.05$ and from latitude 41.502 to 41.998.
\item Level 6 with 2-second resolution is enforced starting from 8.5 hours after the earthquake, in the region from longitude $-124.234$ to $-124.143$ and from latitude 41.717 to 41.783.
\end{enumerate}
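The resolutions quoted above can be checked from the refinement ratios: a level's cell size is the coarsest resolution divided by the product of the ratios down to that level. The ratios 4, 5 and 6 for levels 4--6 are inferred here from the quoted resolutions, not stated explicitly in the setup:

```python
# Cell size on each AMR level: coarsest resolution divided by the
# cumulative product of refinement ratios. Ratios 5 and 6 (levels
# 1->2->3) are given in the text; 4, 5 and 6 (levels 3->4->5->6) are
# inferred from the resolutions quoted for the Crescent City regions.

coarsest_seconds = 2 * 3600          # 2 degrees = 7200 arc-seconds
ratios = [5, 6, 4, 5, 6]             # level k -> level k+1

res = [coarsest_seconds]
for r in ratios:
    res.append(res[-1] / r)
# res (arc-seconds): 7200 (2 deg), 1440 (24 min), 240 (4 min),
#                    60 (1 min), 12 (12 sec), 2 (2 sec)
```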
Figure \ref{fig:crescent_city} shows the three refinement regions as well as location of gauge 2, where the time series of water surface elevation is recorded.
\begin{figure*}[t]
\centering
\subfloat[]{\includegraphics[width=0.32\textwidth]{./figures/japan2011_region/frame0000fig0.png} }
\hfill
\subfloat[]{\includegraphics[width=0.32\textwidth]{./figures/japan2011_region/frame0000fig1.png} }
\hfill
\subfloat[]{\includegraphics[width=0.32\textwidth]{./figures/japan2011_region/frame0000fig2.png} }
\caption{
Three refinement regions around Crescent city with higher resolution and location of gauge 2. Red: level 4, 1-minute resolution; Blue: level 5, 12-second resolution; White: level 6, 2-second resolution.
}
\label{fig:crescent_city}
\end{figure*}
To ensure solution data on an entire grid patch can fit into the cache of the CPU for data locality, the size of each grid patch is limited to 128 by 128 for both the GPU and CPU cases.
The Godunov-type dimensional splitting scheme is used, with the second-order MC limiter applied to the waves.
The problem is simulated for $13$ hours of physical time.
\subsubsection{Simulation Results}
\begin{figure*}[t]
\centering
\subfloat[]{\includegraphics[width=0.49\textwidth]{./figures/surface_elevation_japan2011_full/frame0022fig0.png} }
\hfill
\subfloat[]{\includegraphics[width=0.49\textwidth]{./figures/surface_elevation_japan2011_full/frame0038fig0.png} }
\caption{$\zeta(x,y,t)$ at $5.5$ hours and $9.5$ hours after the Japan 2011 earthquake.
}
\label{fig:simulation_japan_2011}
\end{figure*}
\begin{figure*}[t]
\centering
\subfloat[]{\includegraphics[width=0.45\textwidth]{./figures/surface_elevation_japan2011_zoom_crescent_city/frame0037fig1.png} }
\hfill
\subfloat[]{\includegraphics[width=0.45\textwidth]{./figures/surface_elevation_japan2011_zoom_crescent_city/frame0039fig1.png} }
\caption{$\zeta(x,y,t)$ at $9.25$ hours and $9.75$ hours after the Japan 2011 earthquake, zoomed in near Crescent city.
}
\label{fig:simulation_japan_2011_zoom}
\end{figure*}
Figure \ref{fig:simulation_japan_2011} and Figure \ref{fig:simulation_japan_2011_zoom} show snapshots during the simulation for the entire computational domain and near Crescent city, colored by $\zeta(x,y,t)$ defined as
\begin{equation}
\zeta(x,y,t) =
\begin{cases}
h(x,y,t), & \text{if } B(x,y) > 0 \text{ (the flow depth),} \\
h(x,y,t)+B(x,y), & \text{if } B(x,y) \leq 0 \text{ ($\eta$, water surface elevation).}
\end{cases}
\label{eq:zeta}
\end{equation}
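Equation \ref{eq:zeta} translates directly into a small helper; the sample values below are illustrative:

```python
# Direct transcription of the definition of zeta: on land (B > 0) plot
# the flow depth h; offshore (B <= 0) plot the water surface elevation
# eta = h + B. The sample values are illustrative.

def zeta(h, B):
    return h if B > 0 else h + B

onshore = zeta(0.7, 2.0)          # 0.7 m of water on land -> depth 0.7
offshore = zeta(4001.2, -4000.0)  # column over -4000 m bathymetry -> eta near 1.2
```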
During the tsunami, wave height data were recorded at 4 DART buoys (Deep-ocean Assessment and Reporting of Tsunamis) near the earthquake source, the locations of which are shown in Figure \ref{fig:dart}.
The blue rectangle in the figure indicates the extent of the earthquake source.
However, most of the sea floor deformation is inside the red rectangular region, where one-minute topography files are used to make sure the region is well resolved.
The wave heights predicted by the current GPU implementation at the 4 DART buoys are shown in figure \ref{fig:wave_height_dart} and compared against observed data.
The comparison shows the predicted results agree quite well with observed data at the 4 DART buoys.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth]{./figures/japan_source_and_dart.png}
\caption{Japan 2011 earthquake source and DART buoys locations.
The coordinates of the DART buoys are:
1) gauge 21401, longitude $-207.417$, latitude $42.617$;
2) gauge 21413, longitude $-207.883$, latitude $30.515$;
3) gauge 21418, longitude $-211.306$, latitude $38.711$;
4) gauge 21419, longitude $-204.264$, latitude $44.455$.
}
\label{fig:dart}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{./figures/japan2011_DART.png}
\caption{Water surface elevation at 4 DART buoys. From top to bottom: gauge 21401, gauge 21413, gauge 21418, gauge 21419.}
\label{fig:wave_height_dart}
\end{figure}
Recall that the current implementation uses a dimensional splitting scheme with no refluxing.
To show that this simplification gives comparable results to the original GeoClaw code, Figure \ref{fig:wave_height_crescent_gauge} gives time series of surface elevation recorded at a tide gauge near Crescent City, California, United States, during the 2011 Japan Tsunami.
The observation has been detided by subtracting the predicted tide level, to remove the influence of the tides.
Sample results from another well-known tsunami model, MOST \citep{titov1997implementation}, are also included for comparison.
The comparison shows that a simplified GeoClaw CPU code implementing this dimensional splitting scheme with no refluxing produces results very close to those of the original GeoClaw, and agrees well with both the MOST model and the observed data.
The current GPU implementation gives identical results to this simplified version of the original GeoClaw and is not shown in the Figure.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{./figures/surface_elevation_crescent_city_gauge2.png}
\caption{Water surface elevation at gauge 2 (location: longitude $-124.1840$, latitude $41.7451$) near Crescent city.
Time series from the MOST model are shifted by 6 minutes.
All other time series from numerical results are shifted by 6.5 minutes.}
\label{fig:wave_height_crescent_gauge}
\end{figure}
Figure \ref{fig:japan_total_time} shows total running time and proportion of the three components on 4 machines.
The total speed-ups are $4.3$ on machine 1 and $6.4$ on machine 2 for current GPU implementation.
Note that since the time spent on the non-AMR portion decreases on machines 1 and 2, the regridding and updating processes take up a larger portion of the total run time.
However, only a very limited additional performance increase could be gained even if the regridding and updating processes were implemented on the GPU.
Amdahl's law states that the theoretical speed-up of the execution of a whole program is
\begin{equation}
S(s) = \frac{1}{(1-p)+\frac{p}{s}},
\label{eq:amdahl}
\end{equation}
where $S$ is the theoretical speed-up of the whole program, $s$ is the speed-up of the accelerated portion, and $p$ is the proportion of the total running time taken by the accelerated portion.
Furthermore, one has $S(s) \leq \frac{1}{1-p}$,
where the bound is approached as $s$ tends to $\infty$ in equation \ref{eq:amdahl}.
From Amdahl's law, even if the regridding and updating processes were implemented on the GPU and accelerated infinitely, the entire program would only gain a speed-up of roughly $1.2$ on machines 1 and 2.
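As a quick check of equation \ref{eq:amdahl}: with the regridding/updating share of run time around $p \approx 0.17$ (an illustrative value consistent with the bound of roughly $1.2$ quoted above), the limiting speed-up is $1/(1-p) \approx 1.2$:

```python
# Amdahl's law: speed-up of the whole program when a fraction p of the
# run time is accelerated by a factor s. p = 0.17 is an illustrative
# value for the regridding/updating share, not a measured one.

def amdahl(p, s):
    return 1.0 / ((1.0 - p) + p / s)

p = 0.17
bound = 1.0 / (1.0 - p)     # limit as s -> infinity, about 1.2
```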
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{./figures/japan_2011_total_time.pdf}
\caption{Wall time (in seconds) of entire program on simulating the Japan 2011 tsunami, for original CPU implementation running on machine 3 and machine 4, and current GPU implementation running on machine 1 and machine 2.
}
\label{fig:japan_total_time}
\end{figure}
Table \ref{tb:metrics_japan} shows the three metrics for current GPU implementation running on machine 1 and machine 2 when the Japan 2011 tsunami is simulated.
The proportion of GPU computation ($P_1$) reaches about $47\%$ on machine 1 and a higher $64\%$ on machine 2.
This could be because machine 2 has a newer GPU with much lower overhead for kernel launches and memory transfers.
The proportion of CPU computation ($P_2$) is around $80\%$ on both machines.
In other words, the CPU is idle during roughly $20\%$ of the total running time.
$P_3$, the extra time introduced by transferring data between the CPU and the GPU memory, is less than $5\%$ for both machines.
This shows that even if the data can be transferred infinitely fast between the CPU and the GPU memory so the data transfer has no effect on execution time at all, the total running time of the entire program can be reduced by at most $5\%$.
Thus having to transfer data between the CPU and the GPU memory is not a critical issue that affects the performance of the code.
\begin{table}[t]
\caption{
The three metrics measured from simulating the Japan 2011 tsunami on machine 1 and machine 2.
}
\centering
\begin{tabular}{|l|l|l|}
\hline
& machine 1 & machine 2 \\ \hline
$P_1$ & 46.92\% & 64.20\% \\ \hline
$P_2$ & 84.50\% & 79.30\% \\ \hline
$P_3$ & 3.98\% & 2.90\% \\ \hline
\end{tabular}
\label{tb:metrics_japan}
\end{table}
\subsection{A local Tsunami Triggered by Near-field Sources}
\subsubsection{Problem Setup}
The second benchmark problem models a local tsunami triggered by a near-field earthquake, which typically hits the shoreline much earlier than a tsunami triggered by a far-field earthquake.
The tsunami is triggered by a hypothetical Mw 7.3 earthquake on the Seattle Fault, which cuts across Puget Sound
(through Seattle and Bainbridge Island, see figure \ref{fig:seattle_fault})
and can create a tsunami that can cause significant inundation and high currents in some coastal communities around the Puget Sound.
The event was designed to model an earthquake that occurred roughly 1100 years ago, and for which geologic data is available for the uplift or subsidence at several locations.
Here we focus on modeling this local tsunami and predicting its impact on Eagle Harbor on Bainbridge Island, the location of which is shown in figure \ref{fig:eagle_domain}.
The ground deformation file for generating the tsunami was obtained from PMEL; it has been used in a recent comparison study of GeoClaw and MOST as part of a tsunami hazard assessment of Bainbridge Island \citep{THA_Bainbridge2018}.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{./figures/SFL_BI.png}
\caption{
Surface displacement for the hypothetical Seattle Fault earthquake, with Bainbridge Island labelled BI. Eagle Harbor is just north of the fault on the east side of the island.
Red contours show uplift at levels $0.5,~1,~1.5,~\ldots$ meters,
blue contours show subsidence at levels
$-0.05,$ $-0.1,~\ldots$ meters.
}
\label{fig:seattle_fault}
\end{figure}
\begin{figure*}[t]
\centering
\subfloat[]{\includegraphics[width=0.32\textwidth]{./figures/bainbridge_region/frame0000fig0.png} }
\hfill
\subfloat[]{\includegraphics[width=0.32\textwidth]{./figures/bainbridge_region/frame0000fig1.png} }
\hfill
\subfloat[]{\includegraphics[width=0.32\textwidth]{./figures/bainbridge_region/frame0000fig2.png} }
\caption{
Computational domain and refinement regions for tsunami inundation triggered by the Seattle fault.
Red rectangle shows the region where level 2 refinement is enforced.
Blue rectangle shows the region where level 3 refinement is enforced.
White rectangle shows the region where level 4 refinement is enforced.
}
\label{fig:eagle_domain}
\end{figure*}
Figure \ref{fig:eagle_domain} also shows the computational domain, which is from longitude $-123.61$ to $-122.16$ and latitude 47 to 48.7.
Since this is a local tsunami in an enclosed Sound surrounded by land, the waves are quickly reflected by shorelines and spread out to cover the full domain soon after the earthquake.
Thus, instead of using a refinement tolerance parameter, we enforce mesh refinement everywhere in the domain regardless of wave amplitude, and never regenerate new grid patches.
Four levels of refinement are used, as indicated by the rectangles in figure \ref{fig:eagle_domain}, which mark the regions where refinement is enforced.
Starting from the coarsest level (level 1), which has a resolution of 30 minutes, the refinement ratios are 5, 3 and 6, giving a resolution of 6 minutes on level 2, 2 minutes on level 3 and $\frac{1}{3}$ minutes on level 4.
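The resolution ladder above follows directly from the refinement ratios; a small sketch (values taken from the text, helper name is ours):

```python
def level_resolutions(coarsest_minutes, ratios):
    """Resolution (in arc-minutes) on each AMR level, starting from the
    coarsest level and dividing by each refinement ratio in turn."""
    res = [coarsest_minutes]
    for r in ratios:
        res.append(res[-1] / r)
    return res

# Level 1 at 30 minutes with ratios 5, 3, 6 gives 6, 2, and 1/3 minutes.
resolutions = level_resolutions(30.0, [5, 3, 6])
```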
Note that for this benchmark problem, a large proportion of the domain is dry land and the shorelines are much longer and more complex.
As a result, many branches occur along the execution path when solving Riemann problems for the shallow water system, since a wider variety of situations arises, e.g.\ a Riemann problem with one state that is initially dry and either becomes wet or stays dry, depending on the flow depth and velocity in the neighboring cell.
For the GPU, if the 32 threads in a warp do not take the same execution path, each extra branch will be executed by the entire warp, introducing significant extra execution time.
Thus the irregular wet/dry pattern in this benchmark problem makes it challenging for some CUDA kernels to use the GPU hardware efficiently.
\subsubsection{Simulation Results}
\begin{figure*}[t]
\centering
\subfloat[]{\includegraphics[width=0.49\textwidth]{./figures/surface_elevation_eagle/frame0001fig12.png} }
\hfill
\subfloat[]{\includegraphics[width=0.49\textwidth]{./figures/surface_elevation_eagle/frame0005fig12.png} }
\caption{
$\zeta(x,y,t)$ in Puget Sound after a tsunami triggered by Seattle fault rupture.
}
\label{fig:simulation_eagle}
\end{figure*}
\begin{figure*}[t]
\centering
\subfloat[]{\includegraphics[width=0.49\textwidth]{./figures/surface_elevation_eagle_zoom/frame0001fig13.png} }
\hfill
\subfloat[]{\includegraphics[width=0.49\textwidth]{./figures/surface_elevation_eagle_zoom/frame0004fig13.png} }
\caption{
$\zeta(x,y,t)$ in Eagle Harbor of Bainbridge island after a tsunami triggered by Seattle fault rupture.
The solid line denotes the initial location of the shoreline.
}
\label{fig:simulation_eagle_zoom}
\end{figure*}
Figures \ref{fig:simulation_eagle} and \ref{fig:simulation_eagle_zoom} show snapshots at several moments during the simulation, in Puget Sound and near Eagle Harbor, colored by $\zeta(x,y,t)$ as defined in equation \ref{eq:zeta}.
The black solid line denotes the original shoreline before the earthquake.
At the entrance of the harbor, deep inundation occurred at several places as early as 3 minutes after the earthquake.
Even at the far end of Eagle Harbor, the influence of the tsunami is significant, causing inundation more than 2 meters deep in several places starting 9 minutes after the earthquake.
One wave gauge is placed inside Eagle Harbor to record the inundation depth during the tsunami.
Figure \ref{fig:eagle_domain} shows the location of the wave gauge.
As this is a hypothetical event modeling an earthquake roughly 1100 years ago, no surface elevation observations are available for comparison.
Hence, the results from the current implementation are compared with those from the MOST tsunami model \citep{titov1997implementation}. Additional comparisons of GeoClaw and MOST results can be found in the comparison study recently performed by \citet{THA_Bainbridge2018}. For that study the original CPU version of GeoClaw was used, with the unsplit algorithm and refluxing, but we have confirmed that very similar results are obtained with the GPU code, at least in Eagle Harbor.
\begin{figure*}[t]
\includegraphics[width=0.9\textwidth]{./figures/surface_elevation_bainbridge_gauge14.png}
\caption{
Water surface elevation at a gauge (location: longitude $-122.5089$, latitude $47.6222$) inside Eagle Harbor of Bainbridge island.
}
\label{fig:wave_height_eagle}
\end{figure*}
Figure \ref{fig:eagle_total_time} shows the total running time and the proportions of the two components on the 4 machines (there is no regridding component since regridding is never performed).
The total speed-ups are $3.7$ on machine 1 and $5.0$ on machine 2 for this benchmark problem.
For the original CPU implementation, the non-AMR portion takes $98\%$ and $99\%$ of the total computational time, indicating a high potential benefit from optimizing the performance of this portion.
Although the proportion of the non-AMR portion decreases in the current implementation on machine 1 and machine 2, it still takes more than $95\%$ of the total computational time, showing great potential for further improvement.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{./figures/eagle_total_time.pdf}
\caption{
Wall time (in seconds) of the entire program simulating the Seattle Fault tsunami, for the original CPU implementation running on machine 3 and machine 4, and the current GPU implementation running on machine 1 and machine 2.
}
\label{fig:eagle_total_time}
\end{figure}
\begin{table}[t]
\caption{
The three metrics measured from execution of the code on machine 1 and machine 2, simulating the Seattle Fault tsunami.
}
\centering
\begin{tabular}{|l|l|l|}
\hline
& machine 1 & machine 2 \\ \hline
$P_1$ & 60.76\% & 57.39\% \\ \hline
$P_2$ & 84.80\% & 84.60\% \\ \hline
$P_3$ & 6.87\% & 1.77\% \\ \hline
\end{tabular}
\label{tb:metrics_eagle}
\end{table}
Table \ref{tb:metrics_eagle} shows the three metrics for the current GPU implementation running on machine 1 and machine 2 when the Seattle Fault tsunami is simulated.
Similar values are obtained for all three metrics, showing the consistency and validity of the three metrics for evaluating the GPU implementation on different tsunami problems.
\section{Conclusions}\label{sec:conclusions}
The shocking fatalities and infrastructure damage caused by tsunamis in the past two decades highlight the importance of developing fast and accurate tsunami models for both forecasting and hazard assessment.
This paper presents a fast and accurate GPU-based version of the GeoClaw code using patch-based AMR.
Arbitrary levels of refinement and refinement ratios between levels are supported.
The surface elevations at DART buoys and wave gauges in the benchmark problems show the ability of the current tsunami model to produce accurate results.
With the GPU, the entire tsunami model runs $3.6$--$6.4$ times faster than an original CPU-based tsunami model for several benchmark problems on different machines.
As a result, the Japan 2011 Tōhoku tsunami can be fully simulated for 13 hours in under 3.5 minutes wall-clock time, using a single Nvidia TITAN X GPU.
Three metrics for measuring the absolute performance of a GPU-based model are also proposed, allowing the current GPU implementation to be evaluated without comparison to others; they show that the current model efficiently utilizes GPU hardware resources.
Other hazards such as storm surge (e.g. \citet{MandliDawson2014}) and dam failures (e.g. \citet{George:Malpasset}) can also be modeled with GeoClaw and can similarly benefit from this GPU-enhanced version of GeoClaw.
\acknowledgments
The first author would like to thank Weiqun Zhang, Max Katz and Ann Almgren for many discussions with them during a summer internship at Lawrence Berkeley National Lab, supported by the AMReX project, which inspired many ideas and design strategies chosen in this work. This work was also supported in part by NSF grant EAR-1331412 and the University of Washington Department of Applied Mathematics.
\section*{Abstract}
The emergence of replicases that can replicate themselves is a central issue in the origin of life. Recent experiments suggest that such replicases can be realized if an RNA polymerase ribozyme is divided into fragments short enough to be replicable by the ribozyme and if these fragments self-assemble into a functional ribozyme. However, the continued self-replication of such replicases requires that the production of every essential fragment be balanced and sustained. Here, we use mathematical modeling to investigate whether and under what conditions fragmented replicases achieve continued self-replication. We first show that under a simple batch condition, the replicases fail to display continued self-replication owing to positive feedback inherent in these replicases. This positive feedback inevitably biases replication toward a subset of fragments, so that the replicases eventually fail to sustain the production of all essential fragments. We then show that this inherent instability can be resolved by small rates of random content exchange between loose compartments (i.e., horizontal transfer). In this case, the balanced production of all fragments is achieved through negative frequency-dependent selection operating in the population dynamics of compartments. This selection mechanism arises from an interaction mediated by horizontal transfer between intracellular and intercellular symmetry breaking. The horizontal transfer also ensures the presence of all essential fragments in each compartment, sustaining self-replication. Taken together, our results underline the roles of compartmentalization and horizontal transfer in the origin of the first self-replicating replicases.
\section*{Author summary}
How evolution got started is a crucial question in the origin of life. One possibility is that RNA molecules gained the ability to catalyze self-replication. Researchers recently proposed how this possibility might have been realized: a long RNA catalyst was divided into short replicable fragments, and these fragments self-assembled into the original long catalyst. Ingenious though this proposal is, we expose a hidden flaw in it. An auto-catalytic system based on fragmented catalysts involves positive feedback, which necessarily biases replication toward specific fragments and eventually halts the replication of the whole system. However, we also propose an effective remedy to this flaw: compartmentalization and content exchange among compartments generate negative feedback, which tightly coordinates the replication of different fragments.
\section*{Introduction}
One of the crucial questions in the origin of life is how
molecules acquired the capability of undergoing open-ended Darwinian
evolution \cite{joyce2002antiquity, higgs2015rna}. A potential answer is
offered by the template-directed self-replication of a replicase, a
replicase that can replicate itself. To determine whether such
self-replication is demonstrable in RNA, considerable effort has been
devoted to the artificial evolution of RNA polymerase ribozymes
\cite{Johnston2001, Zaher2007, Wochner2011, Attwater2013,
mutschler2015freeze, Horning2016, 2017OriginsJoyce, Attwater2018}. A
recent milestone in this effort is the demonstration of `riboPCR,' the
exponential amplification of RNA through a PCR-like mechanism catalyzed
entirely by RNA \cite{Horning2016}. The glaring issue, however, has been
that the replicases synthesized so far have limitations in processivity
and fidelity, so that they can replicate only oligomers much shorter
than themselves (or long unstructured cytidine-rich polymers, which
exclude the ribozymes themselves).
As a potential solution to this problem, Mutschler et al.\ and Horning
et al.\ have recently proposed the fragmentation and self-assembly of a
replicase. According to their proposals, a replicase is fragmented into
multiple sequences that are short enough to be replicable by the
replicase and, moreover, capable of self-assembling into a functional
replicase \cite{mutschler2015freeze, 2017OriginsJoyce}. The possibility
of reconstituting a functional ribozyme from its fragments through
self-assembly has been experimentally demonstrated
\cite{mutschler2015freeze, 2017OriginsJoyce, Attwater2018}, attesting to
the chemical plausibility of the proposals.
However, the exponential amplification of multiple distinct fragments
raises a question about the dynamical stability of the proposed
autocatalytic system. The continued replication of fragmented replicases
requires the sustained production of all its essential fragments in
yields proportional to the stoichiometric ratio of the fragments in a
replicase \cite{segre2000compositional, furusawa2003zipf,
Kamimura2018}. However, each fragment is replicated by the replicase
and thus grows exponentially. If some fragments were replicated
persistently faster than the others, the former would out-compete the
latter, causing the loss of some essential fragments and hence the
cessation of self-replication.
The above consideration led us to examine whether and under what
conditions fragmented replicases achieve continued
self-replication. Using mathematical modeling, we discovered that the
fragmented replicases fail to display continued self-replication under a
simple batch condition. Replication is inevitably biased toward a subset
of the fragments owing to positive feedback inherent in the replication
cycle of the fragmented replicases, and the loss of fragment diversity
eventually halts self-replication.
To find a way to resolve the above instability, we next examined the
role of compartmentalization. Our model assumes a population of
protocells (primitive cells; hereinafter referred to as ``cells''), each encapsulating a finite number of
fragments and replicases. We found that compartmentalization, in
principle, allows the continued self-replication of the replicases by
the stochastic correction mechanism \cite{szathmary1987group,
szathmary1995major}. This mechanism is, however, severely limited in
its applicability by its strict requirements on the number of fragments
per cell and the population size of cells. Moreover, this
mechanism is inefficient because it necessitates the continuous
discarding of cells lacking some essential fragments and,
therewith, a large number of fragments that could have produced
functional replicases if combined across discarded cells.
Finally, we show that horizontal transfer between cells provides an
efficient and essential mechanism for the continued replication of the
fragmented ribozymes. Previous studies on imperfect
compartmentalization indicate that such horizontal transfer impedes the
stochastic-correction mechanism \cite{Fontanari2014,
PhysRevLett.120.158101}. Therefore, horizontal transfer might be
expected to be detrimental to the continued self-replication of the
fragmented replicases. On the contrary, we found that the horizontal
transfer at intermediate frequencies substantially stabilizes the system
to such an extent that the parameter constraints imposed by the
stochastic-correction mechanism are almost completely removed. This
stabilization is caused by negative frequency-dependent selection, which
arises through the interaction between the two distinct types of
symmetry breaking, symmetry breaking between cells and symmetry
breaking within cells, mediated by horizontal transfer.
\section*{Model}
We consider the simplest model of fragmented replicases, in which a
catalyst consists of two fragments. The fragments (denoted by $X$ and
$Y$) self-assemble into the catalyst (denoted by $C$), and the catalyst
disassembles into the fragments as follows:
\begin{align}
X + Y \xrightarrow{k^f} C, \hspace{5mm} C \xrightarrow{k^b} X + Y.
\label{Reaction1}
\end{align}
We assume that the catalyst cannot replicate its own copies, but can
replicate its fragments because shorter templates are more amenable to
ribozyme-catalyzed replication as mentioned above. Therefore,
\begin{align}
X + C \xrightarrow{k_x} 2X + C,\hspace{2mm} Y + C \xrightarrow{k_y} 2Y + C,
\label{Reaction2}
\end{align}
where the monomers are ignored under the assumption that their
concentrations are buffered at constant levels, and complementary
replication is ignored for simplicity. In the presence of the catalyst,
each fragment replicates at a rate proportional to its copy
number. Hence, the fragments undergo exponential amplification.
\section*{Results}
\subsection*{Failure of balanced replication of fragments under a batch
condition}
First, we show that the replication of the fragments $X$ and $Y$ is
unstable under a batch condition: replication is biased toward one of
the fragments even if the rate constants for $X$ and $Y$ are identical,
and the minor fragment is gradually diluted out from the system, so that
the replication of the catalysts eventually stops. In this paper, we
mainly focus on the situation where the rate constants are equal
($k_x = k_y = k$) because our results remain
qualitatively the same as long as the difference between $k_x$ and
$k_y$ is sufficiently small.
We assume that the reactions proceed in a well-mixed batch condition, so that
the dynamics of the concentrations of $X$, $Y$, and $C$ (denoted by $x$,
$y$, and $c$, respectively) are written as follows:
\begin{align}
\frac{dx}{dt} &= \left( - k^f xy + k^b c + k xc \right) - x \phi
\label{xdynamics}
\\
\frac{dy}{dt} &= \left( - k^f yx + k^b c + k yc \right) - y \phi
\label{ydynamics}
\\
\frac{dc}{dt} &= \left( k^f xy - k^b c \right) - c \phi,
\label{cdynamics}
\end{align}
where $\phi = k(x + y) c$. On the right-hand side of these equations,
the first terms in the brackets represent chemical reactions, and the
second terms multiplied by $\phi$ represent dilution. These dilution terms
are introduced to fix the total concentration of fragments, $x+y+2c$,
since $x$ and $y$ increase through replication. Within the brackets
enclosing the reaction terms, the first and second terms represent
forward and backward reactions of \ref{Reaction1}, respectively. The
third terms, which are present only in Equations \ref{xdynamics} and \ref{ydynamics}, denote the replication of $X$ and $Y$ through
reactions \ref{Reaction2}, respectively.
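The biased replication can be verified numerically. Below is a minimal sketch (forward Euler with assumed values $k^f=k^b=k=1$, not the integrator used for any figures), starting from a slightly $X$-biased state with total concentration $x+y+2c=1$; the ratio $x_{tot}/y_{tot}$ grows monotonically:

```python
KF = KB = K = 1.0  # assumed rate constants (k^f = k^b = k = 1)

def step(x, y, c, dt):
    """One forward-Euler step of the batch model."""
    phi = K * (x + y) * c                       # dilution flux
    dx = -KF * x * y + KB * c + K * x * c - x * phi
    dy = -KF * x * y + KB * c + K * y * c - y * phi
    dc = KF * x * y - KB * c - c * phi
    return x + dt * dx, y + dt * dy, c + dt * dc

# Start with x_tot = x + c = 0.55 slightly above y_tot = y + c = 0.45,
# with the total concentration x + y + 2c fixed at 1.
x, y, c = 0.35, 0.25, 0.2
for _ in range(200_000):                        # integrate to t = 200
    x, y, c = step(x, y, c, dt=1e-3)

ratio = (x + c) / (y + c)                       # x_tot / y_tot keeps growing
```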
By introducing variables $x_{tot} = x + c$ and $y_{tot} = y + c$, one
can write
\begin{align}
\frac{d}{dt} \left( \frac{x_{tot}}{y_{tot}} \right) = \frac{kc^2}{y^2_{tot}} \left( x_{tot} - y_{tot} \right).
\end{align}
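For completeness, this relation follows from Equations \ref{xdynamics}--\ref{cdynamics} by noting that the dilution terms cancel in the quotient rule:

```latex
\begin{align*}
\frac{dx_{tot}}{dt} &= k x c - x_{tot}\,\phi, \qquad
\frac{dy_{tot}}{dt} = k y c - y_{tot}\,\phi, \\
\frac{d}{dt}\left(\frac{x_{tot}}{y_{tot}}\right)
&= \frac{\dot{x}_{tot}\, y_{tot} - x_{tot}\, \dot{y}_{tot}}{y^2_{tot}}
= \frac{k c \left( x\, y_{tot} - y\, x_{tot} \right)}{y^2_{tot}}
= \frac{kc^2}{y^2_{tot}} \left( x_{tot} - y_{tot} \right),
\end{align*}
```

where the last step uses $x\, y_{tot} - y\, x_{tot} = x(y+c) - y(x+c) = c(x-y) = c(x_{tot} - y_{tot})$.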
This equation indicates that a steady-state solution satisfies
$x_{tot} = y_{tot}$. This solution is, however, unstable: a small
increase in, say, $x_{tot}$ over $y_{tot}$ gets amplified because
$kc^2/y^2_{tot}$ is always positive, and, as a consequence, replication
is biased to $X$. Intuitively, when $x_{tot}$ is slightly greater than
$y_{tot}$, the amount of free fragments $x$ must also be greater than
$y$ because equal amounts of $X$ and $Y$ are incorporated into
catalysts. Therefore, the replication of
$X$ occurs more frequently than that of $Y$ because more templates of
$X$ are available. As a result, the increase of $x_{tot}$ is greater
than that of $y_{tot}$. Because of this positive feedback, the
concentration of the minor fragment $Y$ gradually decreases as it is
diluted out from the system, and, as a consequence, that of the
catalysts $C$ also decreases. Finally, the replication reaction stops
once the catalysts are lost from the system.
The instability of replication under a batch condition can be generally
demonstrated for catalysts composed of an arbitrary number of fragments
by straightforwardly extending the above model [see Supporting text section 1].
\subsection*{Compartmentalization can overcome the unstable replication
by selecting out non-growing cells, but only under strong constraints on
cell volume and population size}
The introduction of compartments and competition among them can overcome the
unstable replication. When the system is compartmentalized into a
number of cells, stochasticity in cell composition, together with
competition among cells for growth and division, provides a possible way to avoid the
loss of fragments: As the cells grow and eventually split into two with
fragments distributed randomly between the daughter cells, cells with
both $X$ and $Y$ fragments continue growth, while cells without either
of them cannot grow. By introducing such a stochastic-correction
mechanism at the cell level \cite{szathmary1987group}, one expects that
the instability by the positive feedback at the molecule level can be
resolved. To investigate this, we assume that the fragments and the
catalysts they assemble into are partitioned into $N_{cell}$
cells, with the reactions occurring within each cell.
We adopted stochastic simulations using the Gillespie
algorithm \cite{Gillespie} for reactions
\ref{Reaction1} and \ref{Reaction2}. We assume that the volume of each
cell is proportional to the number of fragments inside, and as the
number of fragments increases in a cell, the cell grows. When the total
number of fragments reaches a threshold $V_{Div}$, the cell divides with
the components randomly partitioned into two daughter cells. Here, at
the division event, one randomly-chosen cell is removed to fix the total
number of cells at $N_{cell}$. Through this cell-cell competition, cells with a
biased composition of $X$ and $Y$ are selected against because they grow
slowly.
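The random partitioning at division can be sketched as follows (an illustrative fragment, not the full Gillespie code; the dictionary representation and function name are ours):

```python
import random

def divide_cell(counts, rng=random):
    """Binomially partition each molecular species of a dividing cell
    between two daughters: each molecule independently goes to either
    daughter with probability 1/2."""
    d1, d2 = {}, {}
    for species, n in counts.items():
        k = sum(1 for _ in range(n) if rng.random() < 0.5)
        d1[species], d2[species] = k, n - k
    return d1, d2

# Example composition at the division threshold (X + Y + 2C = 1000).
mother = {"X": 400, "Y": 350, "C": 125}
a, b = divide_cell(mother)
```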
The relevant parameters for controlling the effect of compartmentalization are the division threshold $V_{Div}$ and the number of cells $N_{cell}$.
Figure \ref{fig:1}A shows sets of the parameters with which the stochastic-correction mechanism can avoid the unstable replication, by suppressing the positive feedback and
selecting for cells that retain both fragments.
If $V_{Div}$ is very small (of the order of $10$), the stochasticity of the cell contents is too strong to maintain both fragments continuously, and one of them is eventually lost in every cell.
Hence, the system cannot continue to grow.
On the other hand, if $V_{Div}$ is too large, stochasticity in components decreases.
In each cell, the balance of fragments is broken, and replication is biased toward either $X$ or $Y$.
The contents of each cell then become dominated by free $X$ or free $Y$, and
the number of catalysts in dividing cells gradually decreases to one, the minimum required to replicate fragments.
Even when the $N_{cell}$ cells are split evenly into $X$-dominant and $Y$-dominant cells,
there is no frequency-dependent selection between the two types.
Thus, random drift eventually biases the population toward either $X$-dominant or $Y$-dominant cells.
At division events, daughter cells without catalysts randomly replace the remaining cells; therefore, the cells with catalysts are eventually removed from the system.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{Fig1_reduced.eps}
\caption{{\bf Sets of division threshold $V_{Div}$ and the number of cells $N_{cell}$ with which the unstable replication of reactions \ref{Reaction1} and \ref{Reaction2} are avoided by (A) compartmentalization alone and (B) that with horizontal transfer of the transfer constant $D = 0.01$.}
For the sets shown as stable [red circles], the system can continuously have cells with both fragments in the simulations up to $4\times 10^5$ division events from an initial condition where $V_{Div}/4$ copies of each $X$ and $Y$ are in each cell.
For the sets shown as unstable [blue squares], all cells with both fragments are lost from the system and it cannot continue growth.
For the sets at the boundary between the stable and unstable regions [red triangles], the outcome depends on the simulation run. }
\label{fig:1}
\end{center}
\end{figure}
For intermediate values of $V_{Div}$, some of the $N_{cell}$ cells keep both $X$ and $Y$ and can continue replication.
Besides $V_{Div}$, the number of cells $N_{cell}$ is also constrained if cells keeping both fragments are to be maintained.
At division events, dividing cells lacking one of the fragments randomly replace the remaining cells. Hence, when $N_{cell}$ is small, all the cells with both fragments will eventually be removed.
As $N_{cell}$ increases, the probability that all the cells with both fragments are removed decreases. As a result, the range of $V_{Div}$ allowing stable replication widens.
Note that this cell-level selection mechanism is based on the removal of non-growing cells, and it is inefficient: a large number of fragments in cells lacking some fragment must be continuously discarded from the system, even though these fragments would still be functional if combined across the non-growing cells.
\subsection*{Horizontal transfer of fragments at small rates removes the constraints on compartments for stable replication}
Even without selection in the cell population or the restriction on $V_{Div}$,
horizontal transfer of fragments between cells can rescue the loss of fragments and enable continued replication by maintaining the balance between $X$ and $Y$.
If $X$-dominant and $Y$-dominant cells coexist in the population,
transfer between cells prevents the loss of fragments in both cell types, because each fragment is in excess in cells of one type
but lacking in cells of the other type, so the two types supply fragments to each other.
To this end, we consider random mutual transfer of molecules among the $N_{cell}$ cells.
To implement the transfer, we introduce the reactions $X \xrightarrow{D} 0$, $Y \xrightarrow{D} 0$, and $C \xrightarrow{D} 0$,
so that $X$, $Y$, and $C$ are removed from a cell at rates proportional to their concentrations, i.e., $Dx$, $Dy$, and $Dc$, respectively.
This represents diffusion out of the cell.
At the same time, each removed component is added to another randomly-chosen cell.
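A single transfer event can be sketched as follows (illustrative only; cells are represented as dictionaries of molecule counts, and the function name is ours):

```python
import random

def transfer_one(cells, species, src, rng=random):
    """Move one molecule of `species` out of cell `src` and add it to a
    uniformly chosen cell (possibly `src` itself), mimicking the removal
    reactions X -> 0, Y -> 0, C -> 0 followed by random re-injection."""
    cells[src][species] -= 1
    dst = rng.randrange(len(cells))
    cells[dst][species] += 1
    return dst

# Two cells with complementary compositions; transfer conserves totals.
cells = [{"X": 10, "Y": 0}, {"X": 0, "Y": 10}]
transfer_one(cells, "X", 0)
```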
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{Fig2.eps}
\caption{
{\bf (i) The number of fragments $X_{tot}$ and $Y_{tot}$ of dividing cells and (ii) the number of $X$-dominant and $Y$-dominant cells at corresponding time for the transfer rates (A) $D = 0$ (B) $D = 0.001$ (C) $D = 0.01$ and (D) $D = 0.02$.}
Initially, the numbers of $X_{tot}$ and $Y_{tot}$ are approximately equal and, as time goes on,
cells are differentiated into either of $X$-dominant or $Y$-dominant compositions.
For $D=0$ (A), the system is unstable: only $X$-dominant cells (for this run) dominate (ii) and finally, cells cannot continue growth.
For $D = 0.001$ (B) and $0.01$ (C), the system is stable; $X$ and $Y$ fragments coexist in each cell with unequal population (i).
Here, the asymmetry between the major and minor fragments gets smaller as $D$ increases. In addition, the two types of cells, $X$-dominant and $Y$-dominant
cells coexist in equal numbers (ii). As $D$ increases further [$D = 0.02$ (D)], the system becomes unstable and only one of $X$ or $Y$ remains (ii).
The parameters are $V_{Div} = 1000$, $N_{cell} = 100$, $k^f = k^b = 1$, $k_x = k_y = 1$.
}
\label{fig:2}
\end{center}
\end{figure}
With transfer among cells, the replication of the fragments is stabilized when the transfer constant $D$ is small.
In fact, the constraints on $V_{Div}$ and $N_{cell}$ are drastically relaxed [Figure \ref{fig:1}B]. As long as the parameters are not extremely small, stable replication continues.
For small positive values of $D$ [Figures \ref{fig:2}B and C], the cell keeps on growing with the coexistence of $X$ and $Y$ molecules in each cell, even for large $V_{Div}$ where only $X$-dominant or $Y$-dominant cells remain for $D = 0$ [Figure \ref{fig:2}A].
Here, the asymmetry between the fractions of the major and minor fragments gets smaller as $D$ increases.
In addition, two types of cells, $X$-dominant and $Y$-dominant cells coexist roughly with equal population [Figures \ref{fig:2}B(ii) and C(ii)].
As $D$ is increased further [Figure \ref{fig:2}D], the system becomes unstable and only one of $X$ or $Y$ remains.
This is natural because, in the large-$D$ limit, the system is well mixed and reduces back to the case without compartmentalization.
\subsection*{Bifurcation analysis of two subsystems approximating the cell population explains the stable replication at small rates of horizontal transfer}
To understand why small rates of transfer stabilize the system, we approximate the dynamics of the population of cells by the dynamics of two subsystems between which the fragments are transferred.
We represent the equal populations of $X$-dominant and $Y$-dominant cells as two subsystems of equal volume, labeled $1$ and $2$, respectively.
We write the total concentration of $X$ (the total of free $X$s and $C$s) of each subsystem as $x^1_{tot}$ and
$x^2_{tot}$, and the total of $Y$ as $y^1_{tot}$ and $y^2_{tot}$.
The dynamics of $x^i_{tot}$ in each subsystem ($i=1,2$) are written as
\begin{align}
\dot{x}^1_{tot} = \frac{dx^1_{tot}}{dt} &= F^1 - \frac{D}{2} x^1_{tot} + \frac{D}{2}x^2_{tot},
\label{meanx1}
\end{align}
\begin{align}
\dot{x}^2_{tot} = \frac{dx^2_{tot}}{dt} &= F^2 - \frac{D}{2} x^2_{tot} + \frac{D}{2}x^1_{tot},
\label{meanx2}
\end{align}
where $F^i = k (x^i_{tot} - c^i)c^i - x^i_{tot} \mu_i$.
The first term in $F^i$ represents the replication of the component $X$ by the first reaction of
\ref{Reaction2}, where $k$, $(x^i_{tot} - c^i)$, and $c^i$ denote the
rate constant, the concentrations of free $X$ and $C$ in
subsystem $i$, respectively.
The second term multiplied by $\mu_i$ in $F^i$ represents the dilution effect of the component due to the volume growth of the subsystem.
The volume growth is assumed to keep the total concentration at unity (i.e., $x^i_{tot} + y^i_{tot} = 1$).
Accordingly, $\mu_i$ is defined as $\mu_i = k (x^i_{tot} - c^i) c^i + k (y^i_{tot} - c^i) c^i = k ( 1 - 2c^i ) c^i$.
Thus, in each subsystem, the components are diluted at the total rate $\mu_i$. The dilution rate of each component is proportional to its amount; therefore, $x^i_{tot}$ is diluted at the rate $x^i_{tot} \mu_i$. Along with the volume growth of each subsystem, we also assume that the total volume of the two subsystems is kept fixed by removing components of each subsystem in proportion to its volume.
This process corresponds to the random removal of cells to fix $N_{cell}$ cells in our simulations, and the volume ratio of the subsystem 1 to 2 dynamically changes in general. In this section, we fix the two subsystems with an equal volume, whereas the dynamics of the volumes is investigated in the next section.
Next, the second and third terms in Equations \ref{meanx1} and \ref{meanx2} denote the average outflow and inflow
of component $X$ due to the transfer, respectively.
These average flows are estimated as follows:
the amount of the fragment $X$ diffused out from subsystem $1$ is $Dx^1_{tot}$,
but half of it is returned to the subsystem itself: in our simulation, the population of cells is divided into $X$-dominant and $Y$-dominant cells with equal populations of $N_{cell}/2$, and each fragment diffused out from a cell is randomly re-distributed to one of the cells, i.e., half of the fragments are distributed to $X$-dominant cells.
Thus, the effective amount of fragments diffused out of subsystem $1$ to $2$ is $Dx^1_{tot}/2$.
In the same manner, the effective amount of the fragment $X$ received by subsystem $1$ from subsystem $2$ is approximated as half of $Dx^2_{tot}$, i.e., $Dx^2_{tot}/2$.
The dynamics for $Y$ are estimated in the same manner.
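As an illustration of these mean-field equations (a sketch of our own, not the authors' simulation code), one can integrate the two-subsystem dynamics with explicit Euler steps, using $k = 1$ and the fast-assembly approximation $c^i \approx x^i_{tot}(1 - x^i_{tot})/2$ employed in the fixed-point analysis below; the initial condition and step size are arbitrary illustrative choices:

```python
import math

def f_rep(x):
    # Replication minus dilution in one subsystem, with k = 1 and the
    # fast-assembly approximation c = x*(1-x)/2 for the catalyst:
    # F(x) = (x - c)*c - x*(1 - 2c)*c
    c = x * (1.0 - x) / 2.0
    return (x - c) * c - x * (1.0 - 2.0 * c) * c

def step(x1, x2, D, dt):
    # One explicit Euler step of the two-subsystem mean-field dynamics;
    # the transfer terms are -D*x/2 (outflow) and +D*x'/2 (inflow).
    dx1 = f_rep(x1) - D * x1 / 2.0 + D * x2 / 2.0
    dx2 = f_rep(x2) - D * x2 / 2.0 + D * x1 / 2.0
    return x1 + dt * dx1, x2 + dt * dx2

D = 0.01                 # transfer rate below the bifurcation at 0.02
x1, x2 = 0.6, 0.4        # slightly asymmetric initial condition
for _ in range(200000):  # integrate to t = 10^4
    x1, x2 = step(x1, x2, D, 0.05)

# Analytic fixed point of the approximated dynamics (k = 1)
x_star = 0.5 * (1.0 + math.sqrt(1.0 - 4.0 * math.sqrt(2.0 * D)))
```

Starting from the slightly asymmetric state, the trajectory relaxes to a state in which subsystem 1 is dominated by $X$ and subsystem 2 by $Y$.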
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{Fig3.eps}
\caption{{\bf The concentration $x$ and $x_{tot}$ at division events for $Y$-dominant cells as a function of $D$.} Free and Total indicate $x$ and $x_{tot}$, respectively.
For the free fragments [$x$], the results of simulations [Blue and black curves for $V_{Div} = 10^3$ and $10^4$] agree well with the solution of Equation \ref{fixpoint2} [Red curve].
For the total fragments [$x_{tot}$], the simulations [Magenta and orange curves for $V_{Div} = 10^3$ and $10^4$] agree with the solution of Equation \ref{fixpoint2} [Green curve]
for larger $D$, but shift to larger values for smaller $D$. This is because cells must possess at least one catalyst to divide; therefore, the total fragment concentration including $c$ shifts to larger values as it approaches the minimum requirement.
For reference, the values $x_{tot} = 1/V_{Div}$, at which the number of catalysts $C$ equals one, are shown by horizontal dotted lines for $V_{Div} = 10^3$ and $10^4$, respectively.
}
\label{fig:3}
\end{center}
\end{figure}
The fixed-point solutions of Equation \ref{meanx1} are analytically obtained by approximating $c^i \approx 1 - \sqrt{1 - x^i_{tot} y^i_{tot}} \approx x^i_{tot} y^i_{tot}/2$.
The first approximation assumes that the dynamics of reaction \ref{Reaction1} is much faster than
those of reactions \ref{Reaction2} and transfers.
The concentration of $c^i$ is, then, approximated from the condition $dc^i/dt = 0$ as
$c^i = 1 - \sqrt{1 - x^i_{tot} y^i_{tot}}$ for $k^f = k^b$.
In addition, as we are interested in the stability of the system against biased replication, we consider the case of a highly asymmetric composition, i.e., $x^i_{tot} y^i_{tot} \ll 1$.
Then, the second approximation $c^i = x^i_{tot} y^i_{tot} /2$ is obtained by using $(1-\epsilon)^{1/2} = 1-\epsilon/2$ for $|\epsilon| \ll 1$.
Using these approximations, the stable fixed point is obtained as
\begin{align}
x^1_{tot} = \frac{1}{2} \left( 1 + \sqrt{1 - 4\sqrt{2D/k}} \right),
\label{fixpoint1}
\end{align}
and
\begin{align}
x^2_{tot} = \frac{1}{2} \left( 1 - \sqrt{1 - 4\sqrt{2D/k}} \right).
\label{fixpoint2}
\end{align}
Here, we assume the dominant fragment of subsystem 1 is $X$ and that of subsystem 2 is $Y$ because $x^1_{tot} >1/2$ and $x^2_{tot} <1/2$.
Further, only the ratio of the transfer rate $D$ to the replication rate $k$ is essential, so we fix $k=1$ below.
The solution of the minor fragment $x^2_{tot}$ is plotted as a function of the transfer rate $D$ in Figure \ref{fig:3}.
The free-fragment concentration $x^2 = x^2_{tot} - c^2$ agrees well with the results of our simulation.
The total fragment concentration $x^2_{tot}$ agrees with the simulations for larger $D$, although the simulation results shift to larger values for smaller $D$: there the number of fragments decreases, but cells must possess at least one catalyst to divide, so the total fragment concentration including $c$ shifts to larger values as it approaches the minimum requirement, i.e.,
the total concentration of minor fragments must be $\geq 1/V_{Div}$.
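As a consistency check (our sketch, not part of the original analysis), substituting the fixed point of Equations \ref{fixpoint1} and \ref{fixpoint2} back into the mean-field dynamics with $c^i = x^i_{tot} y^i_{tot}/2$ and $k = 1$ confirms that the right-hand side vanishes:

```python
import math

def rhs(x1, x2, D):
    # Right-hand side of the two-subsystem dynamics with c = x(1-x)/2, k = 1
    def f(x):
        c = x * (1.0 - x) / 2.0
        return (x - c) * c - x * (1.0 - 2.0 * c) * c
    return (f(x1) - D * x1 / 2.0 + D * x2 / 2.0,
            f(x2) - D * x2 / 2.0 + D * x1 / 2.0)

residual = 0.0
for D in [0.005, 0.010, 0.015, 0.019]:  # below the bifurcation at D* = 0.02
    s = math.sqrt(1.0 - 4.0 * math.sqrt(2.0 * D))
    x1 = 0.5 * (1.0 + s)                # fixed-point value of x^1_tot
    x2 = 0.5 * (1.0 - s)                # fixed-point value of x^2_tot
    d1, d2 = rhs(x1, x2, D)
    residual = max(residual, abs(d1), abs(d2))
```

The residual is zero up to floating-point precision, since $x^*(1-x^*) = \sqrt{2D}$ at the fixed point makes the replication and transfer terms cancel exactly.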
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{Fig4_reduced.eps}
\caption{{\bf Flow diagram of Equation \ref{meanx1}.} As schematically indicated in the left-top panel, the nullclines $\dot{x}^1_{tot} = 0$ and $\dot{x}^2_{tot}= 0$ are shown in blue and orange, respectively, and their crossing points are the solutions. The directions of $v_1 = (1,1)$ and $v_2 = (1, -1)$ are also indicated. Among the solutions, stable fixed points are shown in red: those with stable growth [i.e., both fragments are present in each subsystem] as red circles, and those without growth [either fragment is lost from the subsystems or the whole system] as red triangles. Unstable solutions are shown as light-blue squares, and solutions neutral in the $v_1$-direction as green stars at $D = 0.02$ (E). For $D = 0$ (A), a solution exists at $(x^1_{tot}, x^2_{tot}) = (1/2, 1/2)$, but it is unstable. For small values of $D$ (B to D), stable fixed points with growth [red circles] appear in addition to fixed points without growth. At $D = 0.02$ (E), the fixed points with growth become unstable in the $v_1$-direction (shown as green stars). As $D$ increases further (F), the two fixed points remain stable in the $v_2$-direction, while the solution at $(x^1_{tot}, x^2_{tot}) = (1/2, 1/2)$ is unstable in that direction. At $D = 0.03125$ (G), the system transitions from three fixed points to one. }
\label{fig:4}
\end{center}
\end{figure}
To study the stability of the solutions further, we plot the flow [the direction of the vector $(\dot{x}^1_{tot}, \dot{x}^2_{tot})$]
of Equation \ref{meanx1} in Figure \ref{fig:4}.
The steady-state solutions satisfy both $\dot{x}^1_{tot} = 0$ and $\dot{x}^2_{tot} = 0$, therefore, they
are represented as the crossing points of two nullclines [set of $(x^1_{tot}, x^2_{tot})$ satisfying $\dot{x}^1_{tot} = 0$ or $\dot{x}^2_{tot} = 0$, indicated by blue and orange curves (see left-top panel)].
For $D = 0$ [Figure \ref{fig:4}A], a solution exists at $(x^1_{tot}, x^2_{tot}) = (1/2, 1/2)$ (indicated by the light-blue square).
However, it is unstable because the flows (arrows) point outward from the solution.
Then, the system moves away from the solution by any tiny perturbation.
The flows point toward the corners of the plane (indicated by the red triangles), where either $X$ or $Y$ is lost and cells cannot grow.
For small positive values of $D$ [Figures \ref{fig:4}B to D],
stable fixed points (shown as red circles) appear, toward which the flows are directed from all directions,
in addition to unstable fixed points (shown as light-blue squares) and the trivial solutions $(x^1_{tot}, x^2_{tot})=(0,0),(1,1)$ (shown as red triangles).
Note that there exist two stable fixed points [red circles] for each $D$ [Figures \ref{fig:4}B to D], and the solution in Equation \ref{fixpoint1} corresponds to the right-bottom one.
As $D$ increases, a bifurcation occurs at $D = 0.02$ [Figure \ref{fig:4}E], so that the fixed points that were stable for $D < 0.02$ become unstable (green stars).
To understand this bifurcation, we consider the eigenvectors $v_1$, $v_2$ of the Jacobian matrix of Equation \ref{meanx1} with the eigenvalues $\lambda_1$ and $\lambda_2$.
At the stable fixed points, they are obtained as $v_1 = (1,1)$ and $v_2 = (1,-1)$ [see left-top panel in Figure \ref{fig:4}].
The direction of $v_1$ determines the asymmetry between $X$ and $Y$ in both subsystems.
By moving along the $v_1$-direction of the plane, the amount of $x^1_{tot} + x^2_{tot}$ either increases or decreases
while $y^1_{tot} + y^2_{tot} = 2 - (x^1_{tot} + x^2_{tot})$ decreases or increases, respectively.
On the other hand, the direction of $v_2$ corresponds to the asymmetry between subsystems $1$ and $2$ for the fragments of $X$.
By moving along the $v_2$-direction of the plane, the amount of $x^1_{tot}$ increases or decreases while $x^2_{tot}$ decreases or increases, respectively.
The corresponding eigenvalues for $v_1$ and $v_2$ are calculated as
$\lambda_1 = 5D - \frac{\sqrt{2D}}{2}$ and $\lambda_2 = 4D - \frac{\sqrt{2D}}{2}$, respectively.
As $D$ increases, a bifurcation first occurs in the $v_1$-direction at $D^* = 0.02$, which is obtained from $5D^* - \frac{\sqrt{2D^*}}{2} = 0$.
In fact, the flows (arrows) at the fixed points (green stars) are parallel to $v_2$,
and point outward in the $v_1$-direction as $D$ is increased further.
This corresponds to the case in which the symmetry between $X$ and $Y$ breaks and only one of $X$ and $Y$ remains in both subsystems.
The estimated value of $D^*$ agrees with the results of our simulation [Figure \ref{fig:2}].
For the two subsystems, a bifurcation also occurs in the $v_2$-direction at $D^+ = 0.03125$, as obtained from $4D^+ - \frac{\sqrt{2D^+}}{2} = 0$,
corresponding to the symmetry between subsystems $1$ and $2$.
At this bifurcation point, the three fixed points (one unstable and two stable in the $v_2$-direction; all shown as light-blue squares)
merge into one fixed point [Figure \ref{fig:4}G].
The behavior of the bifurcations can be understood as follows.
The system has two kinds of symmetry, one between fragments $X$ and $Y$, and one between subsystems $1$ and $2$.
For stable replication, the symmetry between $X$ and $Y$ should be maintained because both fragments are essential.
On the other hand, the symmetry between subsystems $1$ and $2$ should be broken because each fragment should be in excess in one subsystem
but lacking in the other. The two subsystems `help' each other through the transfer of molecules.
The former symmetry is maintained for $0 \leq D < D^*$ and breaks for $D > D^*$.
On the other hand, the latter symmetry is broken in the range $0 \leq D < D^+$.
To meet the two conditions for stable replication, the values of $D$ are restricted to $0 < D < D^* = 0.02$ because $D^* < D^+$
($D = 0$ is excluded by the condition that each subsystem should contain both fragments).
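The two thresholds can also be recovered numerically from the eigenvalue expressions; the following sketch (ours, for illustration) locates the non-trivial zeros of $\lambda_1(D)$ and $\lambda_2(D)$ by bisection:

```python
import math

def lam1(D):
    # Eigenvalue along v1 = (1, 1): lambda_1 = 5D - sqrt(2D)/2
    return 5.0 * D - math.sqrt(2.0 * D) / 2.0

def lam2(D):
    # Eigenvalue along v2 = (1, -1): lambda_2 = 4D - sqrt(2D)/2
    return 4.0 * D - math.sqrt(2.0 * D) / 2.0

def bisect(f, lo, hi, tol=1.0e-12):
    # Bisection for the non-trivial zero of f on [lo, hi], with f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

D_star = bisect(lam1, 0.001, 0.1)  # bifurcation in the v1-direction
D_plus = bisect(lam2, 0.001, 0.1)  # bifurcation in the v2-direction
```

Equivalently, setting $cD = \sqrt{2D}/2$ gives $D = 1/(2c^2)$, i.e., $D^* = 1/50 = 0.02$ for $c = 5$ and $D^+ = 1/32 = 0.03125$ for $c = 4$.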
\subsection*{Frequency-dependent selection: why the balance of fragments is achieved at the cell population}
In the previous section, we confirmed stable replication for small rates of horizontal transfer, assuming that the populations of the two cell types are equal.
Here, we verify that the state of equal volumes, i.e., equal populations of $X$- and $Y$-dominant cells,
is stable and selected as a result of frequency-dependent selection.
To analytically investigate the stability of this state, we consider volume fractions of subsystems $1$ and $2$ slightly different from $1/2$, namely
$1/2 + \epsilon$ and $1/2 - \epsilon$, respectively, with $\epsilon$ a small parameter.
Then the rate equations \ref{meanx1} and \ref{meanx2} are
\begin{align}
\frac{dx^1_{tot}}{dt} = F^1 - D \left( \frac{1}{2} + \epsilon \right) x^1_{tot} + D \left( \frac{1}{2} - \epsilon \right) x^2_{tot},
\label{stabilityFDS1}
\\
\frac{dx^2_{tot}}{dt} = F^2 - D \left( \frac{1}{2} - \epsilon \right) x^2_{tot} + D \left( \frac{1}{2} + \epsilon \right) x^1_{tot},
\label{stabilityFDS2}
\end{align}
where the replication and the dilution terms due to the volume growth are written as $F^i = - x^{i2}_{tot} (1-x^i_{tot})^2 (1 - 2x^i_{tot})/4$ by substituting the approximation $c^i = x^i_{tot} y^i_{tot}/2$.
Below, we show that the growth rate of the minor subsystem 2 with fraction $1/2 - \epsilon$ (for $\epsilon > 0$) increases while that of the major subsystem 1 decreases, driving the volume fractions of the two subsystems back toward equality.
First, we write the concentrations of $X$ at the steady state as $x^1_{tot} = x^* + \delta_1$ and $x^2_{tot} = 1 - x^* + \delta_2$ where
$x^* = \frac{1}{2} \left( 1 + \sqrt{1 - 4\sqrt{2D}} \right)$ is the solution for $\epsilon = 0$ (Equation \ref{fixpoint1}),
and $\delta_1$ and $\delta_2$ are the deviations of $x^1_{tot}$ and $x^2_{tot}$, respectively, caused by
the introduction of $\epsilon$.
Then, from the steady state condition of Equations \ref{stabilityFDS1} and \ref{stabilityFDS2}, one gets
$\delta_1 = \delta_2 = \frac{\sqrt{D} \epsilon}{\sqrt{2}/2 - 4\sqrt{D}}$.
The growth rates $\mu_i$ $(i = 1,2)$ are given by $(x^i_{tot} - c^i) c^i + (y^i_{tot} - c^i) c^i$ so that
\begin{align}
\mu_1 = \mu^* - \gamma(D) \epsilon,
\label{growth1}
\\
\mu_2 = \mu^* + \gamma(D) \epsilon,
\label{growth2}
\end{align}
where $\mu^* = \frac{\left\{ 1 - x^* (1-x^*) \right\} x^* (1-x^*) }{2}$ is the growth rate at $\epsilon = 0$,
and $\gamma(D) = \frac{1-2\sqrt{2}D}{\sqrt{2}\sqrt{1-4\sqrt{2D}}} \sqrt{D} > 0$, showing that, to first order in $\epsilon$, the growth rate of subsystem 1 decreases and that of subsystem 2 increases.
When $\epsilon > 0$, i.e., the volume of subsystem $1$ exceeds that of $2$,
the concentrations of $X$ in both subsystems $1$ and $2$ increase ($\delta_1 = \delta_2 > 0$).
In subsystem $1$, the fragment $X$ is the majority ($x^1_{tot} = x^* > 1/2$); therefore, the asymmetry between $X$ and $Y$ is enhanced by the increase of $X$.
On the other hand, the fragment $X$ is the minority in subsystem $2$, and the composition of $X$ and $Y$ becomes more symmetric owing to $\delta_2$.
Because the growth rate is maximized when the concentrations of $X$ and $Y$ are equal, the rate $\mu_1$ of subsystem $1$ decreases, while that of $2$, $\mu_2$, increases by the factor $\gamma(D) > 0$ (see Equations \ref{growth1} and \ref{growth2}).
Consequently, the volume ratio of the two subsystems eventually returns to equality.
(Note that the frequency-dependent selection persists for any non-zero $D$; however, replication is unstable in our simulation for small values of $D$ when the discreteness of molecule numbers is taken into account [see Supporting text section 2].)
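The selection pressure can be made concrete with a small numerical sketch (ours; $\epsilon$ and the sampled values of $D$ are arbitrary illustrative choices) that evaluates $\gamma(D)$ and the perturbed growth rates of Equations \ref{growth1} and \ref{growth2} over the admissible range $D < 1/32$:

```python
import math

def gamma(D):
    # gamma(D) = (1 - 2*sqrt(2)*D) * sqrt(D) / (sqrt(2) * sqrt(1 - 4*sqrt(2*D)))
    return ((1.0 - 2.0 * math.sqrt(2.0) * D) * math.sqrt(D)
            / (math.sqrt(2.0) * math.sqrt(1.0 - 4.0 * math.sqrt(2.0 * D))))

def mu_star(D):
    # Unperturbed growth rate mu* = {1 - x*(1 - x*)} x*(1 - x*)/2,
    # using x*(1 - x*) = sqrt(2D) at the fixed point
    s = math.sqrt(2.0 * D)
    return (1.0 - s) * s / 2.0

eps = 1.0e-3                                # small, positive volume perturbation
ok = True
for D in [0.001, 0.005, 0.01, 0.02, 0.03]:  # admissible range D < 1/32
    g = gamma(D)
    mu1 = mu_star(D) - g * eps              # larger subsystem grows more slowly
    mu2 = mu_star(D) + g * eps              # smaller subsystem grows faster
    ok = ok and g > 0.0 and mu2 > mu1 and mu_star(D) > 0.0
```

Since $\gamma(D) > 0$ throughout, the smaller subsystem always outgrows the larger one, which is the frequency-dependent selection described above.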
\section*{Discussion}
In summary, we have shown that the self-replication of fragmented
replicases is unstable under a simple batch condition. Replication is
biased towards a subset of the fragments and eventually stops due to the
lack of an essential fragment. Although the stochastic-correction
mechanism induced by compartmentalization helps, it imposes severe
restrictions on the number of molecules per cell and the population
size of cells. In addition, the mechanism is inefficient
because a large number of fragments in non-growing cells must be
discarded. Finally, we have shown that horizontal transfer at
intermediate frequencies provides an efficient and favorable solution to
the instability of the fragmented replicases.
Recent experimental studies have attempted to use self-assembling
fragmented ribozymes that synthesize each of their component fragments, in
order to achieve the RNA-catalyzed exponential amplification of the ribozyme
itself \cite{2017OriginsJoyce}. The self-assembly of functional RNA
polymerase ribozymes from short RNA oligomers has been demonstrated
by driving entropically disfavored reactions under iterated freeze-thaw
cycles \cite{mutschler2015freeze}. Our theoretical results predict that
these approaches to (re-)constructing RNA-based evolving systems face
a serious issue: the replication of fragments is inevitably biased, so
that it eventually fails to produce copies of the ribozymes.
Simultaneously, our study proposes a solution for this issue:
the random exchange of fragments between loose compartments at intermediate frequencies.
Recent experiments suggest that the random exchange of contents between
compartments is plausible. The freeze-thaw cycles, which enhance the
assembly of fragments \cite{mutschler2015freeze}, also induce content
exchange between giant unilamellar vesicles through
diffusion \cite{Schwille2018} or through fusion and fission \cite{Tsuji590}.
Also, transient compartmentalization, which involves
the occasional complete mixing of contents between compartments, is
considered to be relevant to maintaining functional
replicators \cite{matsuura2002importance, ichihashi2013darwinian,
bansho2016host, matsumura2016transient, PhysRevLett.120.158101}. Taken
together, it seems natural to assume that compartmentalization
was imperfect enough to allow the random exchange of fragments between
compartments at the primitive stages of life.
The model of fragmented replicases investigated above can be
conceptually compared to the hypercycle \cite{eigen_hypercycle:_1979}, a
model proposed to solve error catastrophes: Both models posit
that multiple distinct sequences are integrated into an auto-catalytic
system, which as a whole maintains a greater amount of information than
is possible with a single sequence. However, the two models differ sharply in
dynamical aspects. In the fragmented replicases, the dynamics involves
positive feedback, which biases replication toward a subset of the
fragments. In the hypercycle, the dynamics involves negative feedback,
which balances the replication of distinct sequences on a long
timescale, but also causes oscillatory instability on a short timescale.
Given these comparisons, horizontal transfer as studied here will also be relevant to hypercycles.
In addition, hypercycles entail evolutionary instability due
to parasites \cite{smith1979hypercycles}.
It would be interesting to study the effect of parasites on
the fragmented ribozymes in the future.
\section*{Supporting information}
\paragraph*{Supporting text.}
\label{S1_text}
{\bf Supporting information with sections on 1) General extension of the replication to $N$-fragments ribozymes; 2) Unstable growth for small transfer rate in our simulation of compartments is due to discreteness of molecules in cells.}
\section*{Acknowledgments}
This research is partially supported by a Grant-in-Aid for Scientific Research (S) (15H05746) from the Japan Society for the Promotion of Science (JSPS).
\nolinenumbers
\section{General extension of the replication to $N$-fragments ribozymes}
One can straightforwardly extend the two-fragment system to a general $N$-fragment system, in which $N$ fragments $X_i$ $(i=1,\dots,N)$ assemble to form the catalyst. The assembly of the catalyst from the $N$ fragments is written as
\begin{align}
X_1 + X_2 + ... + X_N \rightarrow C,
\label{R1}
\end{align}
and the catalyst disassembles into the $N$ fragments as
\begin{align}
C \rightarrow X_1 + X_2 + ... + X_N.
\label{R2}
\end{align}
Each of the fragments replicates with the aid of the catalyst $C$ as
\begin{align}
X_i + C \rightarrow 2X_i + C,
\label{R3}
\end{align}
for $i = 1,...,N$.
The dynamics of the fragments $X_i$ and the catalyst $C$ are written as
\begin{align}
\frac{dx^i}{dt} = \left( - x^1 x^2 \cdots x^N + c \right) + x^i c - x^i \phi,
\label{S1}
\end{align}
and
\begin{align}
\frac{dc}{dt} = \left( x^1 x^2 \cdots x^N - c \right) - c \phi,
\label{S2}
\end{align}
where $x^i$ and $c$ denote the concentrations of $X_i$ and $C$, respectively. Here, all the rate constants are fixed to one.
On the right-hand side of the equations, the terms in brackets denote the assembly (\ref{R1}) and disassembly (\ref{R2}) of the catalyst,
and the second term in Equation (\ref{S1}) denotes the replication of the fragment $X_i$ (\ref{R3}).
Each of the last terms with $\phi$ represents dilution.
The dilution terms with $\phi = (1-Nc)c$ are introduced to fix the total concentration of fragments as $\sum_i (x^i + c) = \sum_i x^i + Nc = 1$.
By adding Equations (\ref{S1}) and (\ref{S2}), one can write the dynamics of the total concentration of $X_i$ (free $X_i$ plus that bound in the catalyst), $x^i_{tot}$ $(i = 1,\dots,N)$, as
\begin{align*}
\frac{dx^i_{tot}}{dt} = ( x^i_{tot} - c )c - x^i_{tot} \phi,
\end{align*}
where the concentration of the free $X_i$, $x^i$, is written as $x^i_{tot} - c$.
Then, for an arbitrary pair $i$ and $j$ ($i, j = 1,\dots,N$), one obtains
\begin{align}
\frac{d}{dt} \left( \frac{x^i_{tot}}{x^j_{tot}} \right) = \frac{c^2}{x^{j2}_{tot}} \left( x^i_{tot} - x^j_{tot} \right)
\label{generalratio}
\end{align}
Given that $(c/x^{j}_{tot})^2$ is positive, this indicates that a small excess of $x^i_{tot}$ over $x^j_{tot}$ grows further, so that the balanced-replication solution $x^i_{tot} = x^j_{tot}$ is unstable for every pair of $i$ and $j$.
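This instability can be illustrated by a short Euler integration of Equations (\ref{S1}) and (\ref{S2}) (a sketch with illustrative parameters of our choosing): for $N = 3$ and a slightly unbalanced initial composition, the ratio $x^1_{tot}/x^3_{tot}$ grows monotonically while the total concentration stays fixed at unity.

```python
# Explicit Euler integration of the N-fragment dynamics for N = 3,
# starting from a slightly unbalanced composition.
N = 3
x = [0.11, 0.10, 0.09]            # free fragment concentrations x^1, x^2, x^3
c = 0.70 / N                      # catalyst concentration; sum(x) + N*c = 1
ratio0 = (x[0] + c) / (x[2] + c)  # initial ratio x^1_tot / x^3_tot

dt = 0.01
for _ in range(5000):             # integrate to t = 50
    prod = 1.0
    for xi in x:
        prod *= xi
    phi = (1.0 - N * c) * c       # dilution term keeping sum(x) + N*c = 1
    dx = [(c - prod) + xi * c - xi * phi for xi in x]
    dc = (prod - c) - c * phi
    x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    c += dt * dc

ratio1 = (x[0] + c) / (x[2] + c)
total = sum(x) + N * c            # conserved total concentration
```

The small initial excess of $X_1$ over $X_3$ amplifies itself, in agreement with Equation (\ref{generalratio}).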
\section{Unstable growth for small transfer rate in our simulation of compartments is due to discreteness of molecules in cells}
The bifurcation analysis presented in the main text shows that the two-subsystem model is stable for arbitrarily small but non-zero values of $D$.
On the other hand, the system becomes unstable in our simulation of compartments for small non-zero values of $D$,
because the population becomes dominated by either $X$- or $Y$-dominant cells and the symmetry between $X$ and $Y$ is broken.
As $D$ decreases, fluctuations in the numbers of $X$- and $Y$-dominant cells increase and, once the number of either type of cells reaches the population size $N_{cell}$, the symmetry between $X$ and $Y$ is irreversibly broken.
Indeed, the variance $\sigma^2$ of the number of $X$-dominant cells increases as $D$ decreases [Fig. \ref{fig:S1}A].
This increase suggests that the pressure of the frequency-dependent selection to maintain the symmetry weakens as $D$ decreases.
However, the analysis up to the first order of $\epsilon$ presented in the main text does not explain such dependence on $D$ for the two subsystems.
The pressure to maintain the symmetry is represented by relative values of the factor $\gamma(D)$ to the average growth rate $\mu^*$
in Equations 13 and 14 of the main text.
If these relative values decreased with decreasing $D$, the pressure would weaken because the relative increase (decrease) of the growth rate of the smaller (larger) subsystem would become smaller.
Indeed, the factor $\gamma(D)$ scales as $\sqrt{D}$ for small $D$ [see Fig. \ref{fig:S2}], but the average growth rate $\mu^*$ also scales as $\sqrt{D}$.
Therefore, the relative pressure of the frequency-dependent selection does not change with decreasing $D$.
This suggests that the unstable growth in our simulation of compartments is due to the discrete nature of molecules in cells, an effect neglected in the two-subsystem model.
As $D$ decreases, the number of minor fragments decreases [Fig. 3 of the main text]; therefore, cells come to contain only a few catalysts, or none at all.
In fact, the number of cells with the catalysts $C$ gradually decreases with decreasing $D$ [Fig. \ref{fig:S1}B: the case for both $X/Y$-fragments and catalysts
also shows similar curves] in accordance with the increase of the variances $\sigma^2$ of the number of $X$-dominant cells [Fig. \ref{fig:S1}A].
Therefore, the discrete nature of molecules in cells contributes to the enhancement of variations between cells and consequently results in the increase of the variances.
A scaling behavior between the transfer rate $D$ and the division threshold $V_{Div}$ is also consistent with the enhancement of the variances by the discreteness of molecules.
The values of the transfer rate $D$ at which the effect of the discrete number of molecules appears scale as $1/V^2_{Div}$.
For sufficiently small $D$, the concentration of minor fragments in Equation 10 of the main text can be written as $x^2_{tot} \approx \sqrt{2D}$.
Therefore, the value of $D$ at which the total number of minor fragments is approximately equal to one scales as $D \approx 1/(2V^2_{Div})$.
In fact, the numbers of cells with $X$ fragments and catalysts $C$, as well as the variances, collapse onto a single curve as a function of $DV^2_{Div}$ for different $V_{Div}$ and $N_{cell}$ [Fig. \ref{fig:S1}C and D].
This suggests that, when $V_{Div}$ is infinitely large, the variances do not increase even as $D$ becomes arbitrarily small.
Further, the system is stable if the number of cells $N_{cell}$ is large, because fluctuations in the ratio of $X$-dominant to $Y$-dominant cells
scale as $1/\sqrt{N_{cell}}$.
The variance $\sigma^2$ of the number of $X$-dominant cells divided by $N_{cell}$ collapses onto the same curve for different $N_{cell}$ [Fig. \ref{fig:S1}D].
This indicates that the standard deviation of the number of $X$-dominant cells increases as $\sqrt{N_{cell}}$, so that fluctuations in the ratio of $X$-dominant cells to the total population scale as $1/\sqrt{N_{cell}}$.
Thus, fluctuations in the ratio of $X$-dominant to $Y$-dominant cells decrease as $N_{cell}$ becomes large.
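The crossover scaling above amounts to simple arithmetic; as a sketch (using the small-$D$ approximation $x_{tot} \approx \sqrt{2D}$ for the minor fragment):

```python
import math

# At the crossover, the cell of volume V_div holds about one minor fragment:
# sqrt(2D) * V_div = 1, hence D = 1 / (2 V_div^2).
crossover = {}
for V_div in [1.0e3, 1.0e4]:
    D_c = 1.0 / (2.0 * V_div ** 2)
    n_minor = math.sqrt(2.0 * D_c) * V_div  # minor fragments per cell
    crossover[V_div] = (D_c, n_minor)
```

For $V_{Div} = 10^3$ this gives the crossover transfer rate $D \approx 5 \times 10^{-7}$, and for $V_{Div} = 10^4$ a value a hundred times smaller, consistent with the $DV^2_{Div}$ collapse.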
\newpage
\begin{figure}
\includegraphics[width=\textwidth]{FigS1_reduced.eps}
\caption{(A) Variances $\sigma^2$ of the number of $X$-dominant cells as functions of $D$ and $V_{Div}$. Here, $N_{cell} = 100$. (B) The number of cells with catalysts $C$ scales with $DV^2_{Div}$ for $V_{Div} = 1000, 5000, 10000$. The numbers of cells with both $C$ and free $X$/$Y$ also show a similar curve.
Here, $N_{cell} = 100$. (C) Variances $\sigma^2$ of the number of $X$-dominant cells divided by $N_{cell}$ and (D) Ratio of cells with the catalysts $C$ to $N_{cell}$ are shown for different values of $N_{cell}$. All the points follow the same curve under the scaling $DV^2_{Div}$. }
\label{fig:S1}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{FigS2_reduced.eps}
\caption{Growth rate $\mu^*$ and factor $\gamma(D)$ of Equations 13 and 14 of the main text as functions of $D$ on (A) linear and (B) log-log scales for small $D$. The slope $\sqrt{D}$ is also shown in (B).}
\label{fig:S2}
\end{figure}
\end{document}
\section{Introduction}
With the miniaturization of electrochemical devices and the trend towards nanometer scale electrochemistry \cite{ying2017advanced,stevenson2017materials,lin2018understanding}, understanding the behavior of liquid electrolytes at the micro- and nanoscale becomes increasingly important.
Of particular interest are correct double layer models, the behavior of electroosmotic flows at the nanoscale and their interaction with species transport.
Due to the complex physical interactions present in electrolyte flows, numerical simulation techniques are important tools for developing a deeper understanding of the flow behavior.
This contribution introduces a simulation approach for coupled ion transport and electrolyte flow.
Section \ref{sec:continuum} introduces a modified Nernst--Planck--Poisson--Navier--Stokes model which has its foundations in first principles of nonequilibrium thermodynamics \cite{dreyer2013overcoming, dreyer2014mixture,landstorfer2016theory} and takes into account ion-solvent interactions, finite ion size and solvation effects.
Section \ref{sec:numerical} introduces a finite volume discretization approach for ion transport in a self-consistent electric field which is motivated by results from semiconductor device simulation \cite{fuhrmann2016numerical}.
Pressure-robust mixed finite element methods for fluid flow \cite{linke2014role} and a fixed-point approach for coupling flow to ion transport are introduced.
Section \ref{sect:float} provides a numerical example which shows the potential significance of the proposed approach for modeling electrochemical processes at the nanoscale.
\section{The model}\label{sec:continuum}
Electroosmotic flows are characterized by the presence of an electric field that exerts a net force on the fluid molecules in regions where the local net charge due to the present ions is nonzero.
Being part of the momentum balance for the barycentric velocity of the fluid, this net force induces fluid motion.
Correspondingly, ions are advected by the barycentric velocity field.
Their motion relative to the barycentric velocity is induced by the gradients of their chemical potential and the electrostatic potential.
The elastic interactions between the ions and the solvent exert a counterforce against ion motion.
The ion charge density varies with the redistribution of the ions and contributes to the electric field.
The \textit{dilute solution} assumption at the foundation of classical electrolyte models \cite{nernst1888kinetik,planck1890ueber} states that the ion concentration in the electrolyte is small compared to the concentration of the solvent; consequently, the ion-solvent interaction is neglected.
However, accumulation of ions in polarization boundary layers violates this assumption.
As a consequence, the resulting electrolyte model, for example, severely overestimates double layer capacitances at ideally polarizable electrodes \cite{BardFaulkner}.
The model used throughout this paper has been introduced in \cite{dreyer2013overcoming, dreyer2014mixture, landstorfer2016theory} based on consistent application of the principles of nonequilibrium thermodynamics \cite{deGrootMazur1962}.
It includes ion volume and solvation effects and consistently couples the transport equations to the momentum balance.
It generalizes previous approaches to include steric (ion size) effects \cite{bikerman1942,freise1952theorie,carnahan1969equation,mansoori1971equilibrium,kornyshev1981conductivity}, see also \cite{kilic2007steric}.
\subsection{Model equations}
In a given bounded space-time domain $\Omega\times (0,t^\sharp)\subset \mathbb R^d\times (0,\infty)$, and with appropriate initial and boundary conditions,
system \eqref{eq:balances}--\eqref{eq:constitutivefunc} describes the isothermal evolution of the molar concentration of $N$ charged species with molar densities (concentrations) $c_1\dots c_N$ with charge numbers $z_1\dots z_N$ dissolved in a solvent of concentration $c_0$.
Species are further characterized by their molar volumes $v_i$ and molar masses $M_i$.
The electric field is described as the gradient of the electrostatic potential $\psi$.
The barycentric velocity of the mixture is denoted by $\vec u$, and $p$ is the pressure.
The following equations are considered:
\begin{subequations}\label{eq:balances}
\begin{align}
\rho \partial_t \vec u -\nu\Delta \vec{u} + {\rho}(\vec{u} \cdot \nabla) \vec{u} + \nabla p &= {q\nabla \psi} \label{eq:moment}\\
\nabla \cdot (\rho \vec{u}) &=0 \label{eq:div0} \\
\partial_t c_i + \nabla \cdot (\vec N_i + c_i {\vec{u}}) &=0 & (i=1\dots N) \label{eq:cont}\\
-\nabla \cdot (\varepsilon \nabla \psi) &= q. \label{eq:poisson}
\end{align}
\end{subequations}
Equation \eqref{eq:moment} together with \eqref{eq:div0} comprises the incompressible Navier--Stokes equations for a fluid of viscosity $\nu$ and constant density $\rho$.
In the general case, where molar volumes and molar masses are not equal, $\rho$ would depend on the local composition of the electrolyte.
In regions where the charge density $q=F\sum_{i=1}^N z_i c_i$ ($F$ being the Faraday constant) is nonzero, the electric field $-\nabla \psi$
becomes a driving force of the flow.
The partial mass balance equations \eqref{eq:cont} describe the redistribution of species concentrations due to advection in the velocity field
$\vec u$ and molar diffusion fluxes $\vec N_i$.
The Poisson equation \eqref{eq:poisson} describes the distribution of the electrostatic potential $\psi$ under a given configuration of the charge density $q$.
The constant $\varepsilon$ is the dielectric permittivity of the medium.
The fluxes $\vec N_i$, the molar chemical potentials $\mu_i$ and the incompressibility constraint for a liquid electrolyte are given by
\begin{subequations}\label{sys:dgml}
\begin{align}
\vec N_i
&= - \frac{D_i}{RT} c_i \left( \nabla \mu_i - \frac{\kappa_i M_0+M_i}{M_0}\nabla \mu _0 + z_i F \nabla \psi \right)
& (i=1\dots N) \label{eq:npfull}\\
\mu_i
&= (\kappa_i v_0+v_i)(p-p^\circ) + RT \ln \frac{c_i}{\overline c}
& (i=0\dots N) \label{eq:constfull}\\
1 &= v_0 c_0 + \sum_{i=1}^{N} ( \kappa_i v_0 + v_i)c_i.\label{eq:incompressfull}
\end{align}
\end{subequations}
The modified Nernst--Planck flux \eqref{eq:npfull} combines the gradients of the species chemical potentials $\nabla \mu_i$, the gradient of the solvent chemical potential $\nabla \mu_0$ and the electric field $-\nabla \psi$ as driving forces for the motion of ions of species $i$ relative to the barycentric velocity $\vec u$.
In this equation, $D_i$ are the diffusion coefficients, $R$ is the molar gas constant, and $T$ is the temperature.
Equation \eqref{eq:constfull} is a constitutive relation for the chemical potential $\mu_i$ depending on the local pressure and concentration.
Here, $p^\circ$ is a reference pressure, and $\overline c=\sum_{i=0}^N c_i$ is the total species concentration.
In \eqref{eq:incompressfull} a simple model for solvated ions is applied, see \cite{dreyer2014mixture,fuhrmann2016numerical,dreyer2017new}, which describes the mass and volume of a solvated ion by $\kappa_i M_0+M_i$ and $\kappa_i v_0+v_i$, respectively.
The incompressibility constraint \eqref{eq:incompressfull} limits the accumulation of ions in the polarization boundary layer to physically reasonable values \cite{dreyer2014mixture,fuhrmann2016numerical}.
The mass density of the mixture is
\begin{align}
\rho&=M_0 c_0 + \sum_{i=1}^{N}(\kappa_i M_0+M_i) c_i.
\label{eq:rho}
\end{align}
For reasonable solvation numbers $\kappa_i\approx 10$, the molar masses and molar volumes of the solvated ions are dominated by the solvent mass and volume.
Therefore we assume, for simplicity, $(\kappa_i M_0+M_i)\approx (\kappa_i+1)M_0$ and $(\kappa_i v_0 + v_i) \approx (\kappa_i+1)v_0$.
This approximation leads to a simplified constitutive function and, in particular, to a constant mass density of the mixture,
\begin{subequations}\label{eq:constitutivefunc}
\begin{align}
\vec N_i
&= - \frac{D_i}{RT} c_i \left( \nabla \mu_i - (\kappa_i+1)\nabla \mu _0 + z_i F \nabla \psi \right)
& (i=1\dots N) \label{eq:np}\\
\mu_i
&= v_0(\kappa_i+1)(p-p^\circ) + RT \ln \frac{c_i}{\overline c}
& (i=0\dots N) \label{eq:const}\\
1 &= v_0 c_0 + \sum_{i=1}^{N} ( \kappa_i + 1)v_0c_i\label{eq:incompress}\\
\rho &= \frac{M_0}{v_0}.\label{eq:constmass}
\end{align}
\end{subequations}
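For illustration, the simplified constitutive relations can be evaluated numerically. The following minimal Python sketch uses assumed, roughly water-like parameter values (not taken from the simulations in this paper): it recovers the solvent concentration $c_0$ from the incompressibility constraint \eqref{eq:incompress} and confirms that the mixture density \eqref{eq:rho} then collapses to the constant value $M_0/v_0$ of \eqref{eq:constmass}.

```python
# Illustrative, assumed parameter values (roughly water; not from the paper):
v0 = 1.8e-5            # solvent molar volume [m^3/mol]
M0 = 1.8e-2            # solvent molar mass [kg/mol]
kappa = [10.0, 10.0]   # solvation numbers of two ionic species
c_ion = [100.0, 100.0] # ion concentrations [mol/m^3]

# Solvent concentration from the incompressibility constraint (eq:incompress):
#   1 = v0*c0 + sum_i (kappa_i + 1) * v0 * c_i
c0 = (1.0 - v0 * sum((k + 1.0) * c for k, c in zip(kappa, c_ion))) / v0

# With (kappa_i M0 + M_i) ~ (kappa_i + 1) M0, the mixture density (eq:rho)
# collapses to the constant value M0/v0 (eq:constmass):
rho = M0 * c0 + sum((k + 1.0) * M0 * c for k, c in zip(kappa, c_ion))
rho_const = M0 / v0
```

Note that the cancellation is exact: substituting the incompressibility constraint into \eqref{eq:rho} removes all dependence on the individual concentrations.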
Comparing the constitutive equations \eqref{eq:np}-\eqref{eq:incompress} to the classical Nernst--Planck flux \cite{nernst1888kinetik,planck1890ueber}
\begin{align}\label{eq:classflux}
\vec N_i &= - D_i \left(\nabla c_i + z_i c_i \frac{F}{RT} \nabla \psi \right) & (i=1\dots N),
\end{align}
which considers dilute solutions, one observes that in \eqref{eq:classflux} the ion-solvent interaction described by the term $\nabla \mu_0$ is ignored.
Moreover, \eqref{eq:classflux} implicitly assumes a material model that neglects the pressure dependence of $\mu_i$, which is inappropriate in charged boundary layers \cite{dreyer2013overcoming}.
\subsection{Reformulation in species activities}
In order to develop a space discretization approach for the Nernst--Planck fluxes \eqref{eq:np}, after \cite{Fuhrmann2015xCPC}, the system is reformulated in terms of (effective) species activities $a_i= \exp\left(\frac{ \mu_i-(\kappa_i+1)\mu_0}{RT}\right)$.
The quantity $\mu_i - (\kappa_i+1)\mu_0$ is sometimes denoted as entropy variable \cite{jungel2015boundedness}.
Introducing the activity coefficients $\gamma_i=\frac{a_i}{c_i}$ and their reciprocals $\beta_i=\frac1{\gamma_i}=\frac{c_i}{a_i}$ allows one to transform the Nernst--Planck--Poisson system consisting of \eqref{eq:cont}, \eqref{eq:poisson} and \eqref{eq:np} into
\begin{subequations}\label{sys:act}
\begin{align}
-\nabla \cdot (\varepsilon \nabla \psi) &= F\sum_{i=1}^N z_i \beta_i a_i = q, \label{eq:apoisson}\\
0 &=\partial_t (\beta_i a_i) + \nabla \cdot (\vec N_i + \beta_i a_i {\vec{u}}) & (i=1\dots N) \label{eq:acont}\\
\vec N_i&= -D_i \beta_i\left(\nabla a_i + a_i z_i \frac{F}{RT}\nabla\psi\right)
& (i=1\dots N). \label{eq:actflux}
\end{align}
\end{subequations}
From \eqref{eq:const} and \eqref{eq:incompress} one obtains
\begin{align*}
a_i&=\frac{v_0 \beta_i a_i}{1- v_0\sum_{j=1}^N \beta_j a_j (\kappa_j+1)} & (i=1\dots N)
\end{align*}
which is a linear system of equations that allows one to express $\beta_1,\dots,\beta_N$ in terms of $a_1,\dots,a_N$.
It has the unique solution \cite{Fuhrmann2015xCPC}
\begin{align}\label{eq:beta}
\beta_i=\beta=&\frac{1}{v_0+v_0\sum_{j=1}^Na_j(\kappa_j+1)}& (i=1\dots N).
\end{align}
It follows immediately that for any nonnegative solution $a_1,\dots,a_N$ of system \eqref{sys:act}, the resulting concentrations are bounded in a physically meaningful way:
\begin{align}\label{eq:cbound}
0\leq c_i=\beta_i a_i\leq \frac1{v_0}.
\end{align}
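As a quick numerical illustration, the sketch below evaluates \eqref{eq:beta} for hypothetical activities and an assumed solvent molar volume (both chosen for illustration only), verifies the concentration bound \eqref{eq:cbound}, and checks consistency with the defining linear system:

```python
# Hypothetical activities and solvation numbers for N = 2 species; v0 is an
# assumed solvent molar volume, not a value from the paper.
a = [0.5, 2.0]
kappa = [10.0, 10.0]
v0 = 1.8e-5  # [m^3/mol]

# Inverse activity coefficient, identical for all species (eq:beta):
beta = 1.0 / (v0 * (1.0 + sum(aj * (kj + 1.0) for aj, kj in zip(a, kappa))))

# Resulting concentrations c_i = beta * a_i satisfy 0 <= c_i <= 1/v0 (eq:cbound):
c = [beta * ai for ai in a]

# Consistency with the defining linear system for the beta_i:
denom = 1.0 - v0 * sum(cj * (kj + 1.0) for cj, kj in zip(c, kappa))
residual = [ai - v0 * ci / denom for ai, ci in zip(a, c)]
```

The residual vanishes up to rounding, since inserting \eqref{eq:beta} into the linear system reproduces the activities identically.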
In the general case with different molar volumes and molar masses, the system determining the activity coefficients becomes nonlinear: the quantities $\beta_i$ differ between species and in addition depend on the pressure $p$ \cite{Fuhrmann2015xCPC, fuhrmann2016numerical}, leading to a nonlinear system of equations
\begin{align}\label{eq:nlactcoeff}
\beta_i&=B_i(a_1\dots a_N, \beta_1\dots \beta_N,p) & (i=1\dots N).
\end{align}
\section{A coupled Finite-Volume-Finite-Element discretization}\label{sec:numerical}
\subsection{Thermodynamically consistent finite volume methods for species transport }\label{subsec:fvol}
A two point flux finite volume method on boundary conforming Delaunay meshes is used to approximate the Nernst--Planck--Poisson part of the problem.
It has been inspired by the successful Scharfetter--Gummel box method for the solution of charge transport problems in semiconductors \cite{scharfetter1969large,bank1983numerical}.
For a recent overview on this method see \cite{farrell2017numerical}.
It was initially developed for drift-diffusion problems in non-degenerate semiconductors exhibiting Boltzmann statistics for charge carrier densities whose fluxes are equivalent to the classical Nernst--Planck flux \eqref{eq:classflux}.
The time axis is subdivided into intervals
\begin{align*}
0=t^0< t^1 <\dots < t^{N_t}=t^\sharp.
\end{align*}
The simulation domain $\Omega$ is partitioned into a finite number of closed convex polyhedral control volumes $K\in\mathcal K$ such that $K\cap L= \partial K\cap \partial L$ and $\overline\Omega=\bigcup_{K\in \mathcal K} K$.
With each control volume a node $\vec{x}_K\in K$ is associated. If the control volume intersects with the boundary $\partial \Omega$, its corresponding node shall be situated on the boundary: $\vec{x}_K\in\partial\Omega\cap K$.
The partition shall be \textit{admissible} \cite{EGH}, that is for two neighboring control volumes $K, L$, the \textit{edge} $\overline{\vec{x}_K\vec{x}_L}$ is orthogonal to the interface between the control volumes $\partial K \cap \partial L$.
Let $\vec h_{KL} = \vec x_L -\vec x_K$ and $h_{KL}=|\vec h_{KL}|$.
Then, the normal vectors to $\partial K$ can be calculated as $\vec n_{KL}=\frac1{h_{KL}}\vec h_{KL}$.
A constructive way to obtain such a partition is based on the creation of a boundary conforming Delaunay triangulation resp. tetrahedralization of the domain and the subsequent construction of its dual, the Voronoi tessellation intersected with the domain, see e.g. \cite{bank1983numerical,SiFuhrmannGaertner2010,farrell2017numerical},
see also Figure \ref{FIG:sgfv}.
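The geometric quantities introduced above can be sketched for the pair of control volumes of Figure \ref{FIG:sgfv}. The coordinates below are assumed for illustration; the check confirms the admissibility condition, i.e.\ that the edge $\overline{\vec x_K \vec x_L}$ is orthogonal to the interface $\partial K\cap\partial L$ lying in its perpendicular bisector.

```python
import math

# Collocation points of two neighboring control volumes (assumed coordinates
# loosely matching the sketch in the figure):
xK, xL = (0.0, 0.0), (4.0, 0.0)

h_vec = (xL[0] - xK[0], xL[1] - xK[1])   # edge vector h_KL
h = math.hypot(*h_vec)                    # edge length h_KL = |h_KL|
n = (h_vec[0] / h, h_vec[1] / h)          # unit normal n_KL = h_KL / h_KL

# Admissibility: the Voronoi interface between K and L lies in the
# perpendicular bisector of the edge, so any interface tangent t obeys t.n = 0.
t = (0.0, 1.0)                            # tangent of the vertical interface
tangent_dot_normal = t[0] * n[0] + t[1] * n[1]
```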
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.7,>=stealth,shorten >=2pt]
\filldraw[fill=gray!10, draw=black]
( 2.00000, 1.50000) --
( 4.50000, 1.83333) --
( 5.21429, 1.35714) --
( 5.75000,-0.25000) --
( 4.25000,-1.75000) --
( 2.00000,-1.50000) --
cycle;
\filldraw[fill=gray!10, draw=black]
( 2.00000, 1.50000) --
( 1.16667, 1.83333) --
(-1.78571, 0.35714) --
(-1.21429,-1.35714) --
( 0.50000,-2.50000) --
( 2.00000,-1.50000) --
cycle;
\draw[thick,gray!80] (0,0) -- (4,0);
\draw (0,0) node[inner sep=0.5mm,shape=circle,fill] {} ;
\draw (4,0) node[inner sep=0.5mm,shape=circle,fill] {} ;
\draw (-0.5,0.25) node {$\vec{x}_K$};
\draw (4.25,0.25) node {$\vec{x}_L$};
\draw(0.3,1) node {$K$};
\draw(0.2,-0.8) node {$\textcolor{black}{a_{K}}$};
\draw (4,-1) node {$L$};
\draw(3.1,-0.8) node {$\textcolor{black}{a_{L}}$};
\draw[thick,black,->] (1,0.2) -- (3,0.2);
\draw(1.5,0.6) node {$\textcolor{black}{N_{KL}}$};
\draw [thick, gray!80] (2.4,0.0) arc (360:270:0.5);
\draw (2.15,-0.15) node[thick,gray!80,inner sep=0.2mm,shape=circle,fill] {};
\end{tikzpicture}
\caption{Two neighboring control volumes $K$ and $L$ with
collocation points $\vec x_K, \vec x_L$
stored activities $a_K, a_L$ and flux $N_{KL}$.\label{FIG:sgfv}}
\end{figure}
Denote by $\vec J_i= c_i\vec u + \vec N_i = \beta_i a_i\vec u + \vec N_i$ the convection diffusion flux of the model under consideration.
The general approach to derive a two point flux finite volume scheme for a conservation law
(index $i$ omitted)
\begin{align*}
\partial_t c + \nabla\cdot \vec J=0
\end{align*}
consists in integrating the equation over a space-time control volume $K\times [t^{n-1},t^n]$:
\begin{align*}
0
= & \int\limits_{K} (c^n-c^{n-1}) \ d\vec x + \sum\limits_{\substack{L\; \text{neighbor}\\\text{ of}\; K}}\ \int\limits_{t^{n-1}}^{t^n}\int\limits_{\partial K\cap \partial L}\vec J\cdot \vec n_{KL}\ ds \ dt.
\end{align*}
This is approximated via
\begin{align*}
0 = \lvert K \rvert\left(c_K^n-c_K^{n-1} \right)+
\sum\limits_{\substack{L\; \text{neighbor of}\; K}}
\left(t^n-t^{n-1}\right) \lvert\partial K\cap\partial L\rvert \, J^n_{KL},
\end{align*}
It remains to define the numerical fluxes $J^n_{KL}$, which approximate the continuous fluxes between two neighboring control volumes. In order to obtain a fully implicit scheme in time, they depend on the unknown values at the two collocation points $\vec x_K$ and $\vec x_L$ at moment $t^n$.
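A minimal sketch of this space-time integration, under assumptions made purely for illustration (one space dimension, pure diffusion with two-point fluxes $J_{KL}=-D(c_L-c_K)/h$, no-flux boundaries, a single implicit Euler step), shows the local conservation property of the scheme:

```python
import math

# Assumed toy setup: 1D uniform grid of nK cells of size h, diffusion D,
# one implicit Euler step of size dt, no-flux boundaries.
nK, D, dt = 20, 1.0, 1e-3
h = 1.0 / nK
c_old = [1.0 + math.cos(math.pi * (K + 0.5) * h) for K in range(nK)]

# Assemble |K|/dt (c^n - c^{n-1}) + sum_L |dK∩dL| J^n_KL = 0 as a
# tridiagonal system A c^n = b (interface measure |dK∩dL| = 1 in 1D):
lo = [-D / h] * nK                 # coupling to the left neighbor
di = [h / dt + 2.0 * D / h] * nK   # diagonal
hi = [-D / h] * nK                 # coupling to the right neighbor
di[0] -= D / h                     # boundary cells have only one neighbor
di[-1] -= D / h
b = [h / dt * c for c in c_old]

# Thomas algorithm for the tridiagonal solve:
for K in range(1, nK):
    w = lo[K] / di[K - 1]
    di[K] -= w * hi[K - 1]
    b[K] -= w * b[K - 1]
c_new = [0.0] * nK
c_new[-1] = b[-1] / di[-1]
for K in range(nK - 2, -1, -1):
    c_new[K] = (b[K] - hi[K] * c_new[K + 1]) / di[K]

# Local conservation: interior fluxes cancel pairwise, so the total mass
# is preserved exactly (up to rounding).
mass_old = h * sum(c_old)
mass_new = h * sum(c_new)
```

Summing the discrete balance over all control volumes makes the interior flux contributions cancel in pairs, which is the discrete counterpart of the divergence theorem.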
The modification of the Scharfetter-Gummel scheme \cite{scharfetter1969large} proposed in \cite{Fuhrmann2015xCPC} is based on the similarity of the expressions \eqref{eq:actflux} and \eqref{eq:classflux}.
The flux expression \eqref{eq:classflux} is the same as the drift-diffusion flux in non-degenerate semiconductors, for which this discretization scheme was initially derived.
The only difference between \eqref{eq:actflux} and \eqref{eq:classflux} is the multiplication with the pre-factor $\beta$.
Therefore it appears to be reasonable to mimic this structure at the discrete level, and to derive a discrete equivalent of \eqref{eq:actflux} from the discrete version of \eqref{eq:classflux} by multiplication with a suitable average of $\beta$: set
\begin{align*}
J_{KL}=D\,\beta_{KL}\cdot\frac{B\left(-\delta_{KL} - \frac{u_{KL}}{D} \right)a_{K}-B\left(\delta_{KL}+\frac{u_{KL}}{D}\right)a_{L}}{h_{KL}},
\end{align*}
where $B(\xi)=\frac{\xi}{e^\xi-1}$ is the Bernoulli function, $\beta_{KL}$ is an average of the inverse activity coefficients $\beta_K$ and $\beta_L$, $\delta_{KL}=\frac{zF}{RT}(\psi_K -\psi_L)$ is proportional to the local electric force, and
\begin{align}\label{eq:fluxint}
u_{KL} & = \int_{\partial K \cap \partial L} \vec u \cdot \vec h_{KL} \, ds
\end{align}
is the normal integral over the interface $\partial K \cap \partial L$ of the convective flux scaled by $h_{KL}$.
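The flux expression above can be sketched directly in code. The sketch below (with illustrative arguments; the stable evaluation of $B$ near zero is a standard numerical precaution, not prescribed by the paper) checks two structural properties: without electric field and convection the flux reduces to a central difference of the activities, and in thermodynamic equilibrium, $a_L = a_K e^{\delta_{KL}}$, the discrete flux vanishes exactly.

```python
import math

def bernoulli(xi):
    """Bernoulli function B(xi) = xi / (exp(xi) - 1), stable near xi = 0."""
    if abs(xi) < 1e-8:
        return 1.0 - 0.5 * xi        # Taylor expansion avoids 0/0
    return xi / math.expm1(xi)       # expm1 is accurate for small xi

def sg_flux(aK, aL, betaKL, delta, u, D, h):
    """Scharfetter--Gummel type flux J_KL with activity pre-factor betaKL;
    delta = zF/(RT)(psi_K - psi_L), u the scaled convective projection."""
    arg = delta + u / D
    return D * betaKL * (bernoulli(-arg) * aK - bernoulli(arg) * aL) / h

# Check 1: no field, no convection -> central difference D*beta*(aK - aL)/h:
J_diff = sg_flux(aK=2.0, aL=1.0, betaKL=1.0, delta=0.0, u=0.0, D=1.0, h=0.5)

# Check 2: thermodynamic equilibrium aL = aK * exp(delta) -> zero flux,
# since B(-x) = exp(x) * B(x):
J_eq = sg_flux(aK=1.0, aL=math.e, betaKL=1.0, delta=1.0, u=0.0, D=1.0, h=0.5)
```

The identity $B(-\xi)=e^{\xi}B(\xi)$ is exactly what makes the discrete flux vanish when the discrete electrochemical potentials $a\,e^{zF\psi/RT}$ agree in both cells, mirroring the exact zero fluxes under equilibrium conditions mentioned below.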
If the continuous flux is divergence free, i.e. it fulfills equation \eqref{eq:div0}, the flux projections $u_{KL}$ fulfill the discrete divergence condition
\begin{align}\label{eq:discdiv0}
\sum\limits_{\substack{L\; \text{neighbor of}\; K}} \lvert \partial K \cap \partial L \rvert \, u_{KL}=0
\end{align}
which in the absence of charges and coupling through the activity coefficients guarantees a discrete
maximum principle of the approximate solution \cite{fuhrmann2011numerical}.
The resulting time discrete implicit Euler finite volume upwind scheme guarantees nonnegativity of discrete activities and exact zero fluxes under thermodynamic equilibrium conditions.
Moreover it guarantees the bounds \eqref{eq:cbound} \cite{Fuhrmann2015xCPC}.
\subsection{Pressure robust, divergence free finite elements for fluid flow.}\label{subsec:pr}
Mixed finite element methods approximate the Stokes resp. Navier--Stokes equations based on an inf-sup stable pair of velocity ansatz space $V_h$ and pressure ansatz space $Q_h$.
A fundamental property of the Stokes and Navier--Stokes equations consists in the fact that --- under appropriate boundary conditions --- the addition of a gradient force to the body force on the right-hand side of the momentum balance \eqref{eq:moment} leaves the velocity unchanged, as it just can be compensated by a change in the pressure.
Most classical mixed finite element methods for the Navier--Stokes equations do not preserve this property \cite{girault2012finite}.
As a consequence, the corresponding error estimates for the velocity depend on the pressure \cite{john2016finite}.
Moreover, the discrete solution $\vec u_h$ fulfills the divergence constraint
only in a discrete finite element sense.
This raises problems when coupling the flow simulation to a transport simulation using finite volume methods, because the maximum principle for the species concentrations is directly linked to the divergence constraint in the finite volume sense \eqref{eq:discdiv0} \cite{fuhrmann2011numerical}, which differs from the finite element sense.
The pressure robust mixed methods first introduced in \cite{linke2014role}
and comprehensively described in \cite{john2017divergence}, are based on the introduction of a divergence free velocity \textit{reconstruction operator} $\Pi$ into the discrete weak formulation of the flow problem.
The resulting discretization of the stationary Stokes equation (provided here for simplicity) reads as: find $(\vec{u}_h, p_h)\in V_h\times Q_h$ such that
\begin{align*}
\int_{\Omega} \nu \nabla \vec{u}_h : \nabla \vec{v}_h dx +\int_{\Omega} p_h \nabla \cdot \vec{v}_h dx
& = \int_{\Omega} \vec{f} \cdot ({\Pi}\vec{v}_h) dx && \text{for all } \vec{v}_h \in V_h,\\
\int_\Omega q_h \nabla \cdot \vec{u}_h \, dx &= 0 && \text{for all } q_h \in Q_h.
\end{align*}
This formulation differs from that of the classical mixed methods only in the right hand side. The reconstruction operator $\Pi$ has the following properties:
\begin{enumerate}
\item[(i)] If \(\vec{v}_h \in V_h\) is divergence free in the weak sense,
then its reconstruction $\Pi \vec{v}_h$ is pointwise divergence free:
\begin{align*}
\int_\Omega q_h \nabla \cdot \vec v_h \, dx = 0 \quad \text{for all } q_h \in Q_h
\qquad \Longrightarrow \quad
\nabla\cdot ({\Pi} \vec{v}_h) = 0 \quad \text{in } \Omega.
\end{align*}
\item[(ii)] The change of the test function by the reconstruction operator causes a consistency error that should have the same asymptotic convergence rate as the original method and should not depend on the pressure.
\end{enumerate}
Under these conditions, the resulting velocity error estimate is independent of the pressure \cite{john2017divergence}
and has the optimal expected convergence rate.
Furthermore,
a good velocity approximation can be obtained without the need to resort to high order pressure approximations.
This allows significant reduction of degrees of freedom that are necessary to obtain a prescribed accuracy of the velocity.
The action of $\Pi$ on a discrete velocity field can be calculated locally, on elements or element patches. Therefore its implementation leads to low overhead in calculations.
\subsection{Coupling strategy.}
The coupling approach between the Navier--Stokes solver and the
Nernst--Planck--Poisson solver is based on a fixed point iteration strategy:
\begin{algorithm}[H]\SetAlgoLined \label{algo:coupling}
Set $(\vec{u}_h, p_h)$ to zero, calculate initial solution for \eqref{eq:poisson}--\eqref{eq:incompress}\;
\While{not converged}
{
Provide $\psi_h,q_h$ to Navier--Stokes solver\;
Solve \eqref{eq:moment}--\eqref{eq:div0} for $(\vec{u}_h, p_h)$\;
Project $\Pi\vec{u}_h, p_h$ to the Poisson--Nernst--Planck solver\;
Solve \eqref{eq:apoisson}--\eqref{eq:actflux}\;
}
\end{algorithm}
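A structural sketch of this fixed point loop is given below. The two solver callbacks are mock stand-ins (simple contractive scalar maps, assumed purely so that the iteration converges); in the actual implementation they are the pressure robust finite element Navier--Stokes solver and the finite volume Nernst--Planck--Poisson solver.

```python
# Mock solvers standing in for the actual components; a scalar "state"
# replaces the discrete fields psi_h, q_h, u_h, p_h.
def navier_stokes_solve(psi, q):
    # placeholder: velocity responds linearly to the charge density
    return 0.5 * q, 0.0                      # (velocity u, pressure p)

def nernst_planck_poisson_solve(u, p):
    # placeholder: charge responds weakly to convection (p unused here)
    psi = q = 1.0 + 0.1 * u
    return psi, q

u, p = 0.0, 0.0                              # start from zero flow
psi, q = nernst_planck_poisson_solve(u, p)   # initial NPP solution
iterations = 0
while True:                                  # fixed point loop
    iterations += 1
    u_new, p = navier_stokes_solve(psi, q)           # flow for current charge
    psi, q = nernst_planck_poisson_solve(u_new, p)   # transport for new flow
    if abs(u_new - u) < 1e-12 or iterations > 100:   # convergence check
        u = u_new
        break
    u = u_new
```

With the chosen mock maps the composite update is a contraction with factor $0.05$, so the loop converges to the fixed point $u = 0.5/0.95$ in a handful of iterations; the convergence criterion on the velocity increment is an assumption of this sketch.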
The projection of the pressure to the Poisson--Nernst--Planck solver just requires the evaluation of the pressure $p_h$ in the nodes of the triangulation.
According to \eqref{eq:fluxint}, the projection of the velocity requires the integration of the reconstructed finite element solution $\Pi\vec u_h$ over interfaces between neighboring control volumes of the finite volume method.
In the implementation, these integrals are calculated by quadrature rules.
Sufficient accuracy of this step guarantees that the projected velocity is divergence free in the sense \eqref{eq:discdiv0}.
For a detailed explanation of this algorithmically challenging step, see \cite{fuhrmann2011numerical}.
The projection operator can be assembled into a sparse matrix, allowing for computationally efficient repeated application of the projection operator during the fixed point iteration.
As a consequence, in the case of electroneutral, inert transported species, the maximum principle for species concentrations is guaranteed, see \cite{fuhrmann2011numerical} for more details.
In combination with pressure robust finite element methods, this coupling approach was first applied to modeling of thin layer flow cells \cite{merdon2016inverse}.
\section{DC induced charge electroosmosis (ICEO) over an electrode with floating voltage}
\label{sect:float}
The discretization methods and the coupling strategy introduced above are implemented in the framework of the toolbox \texttt{pdelib} \cite{pdelib} that is developed at WIAS.
The solution of the Nernst--Planck--Poisson system is performed using Newton's method.
As the convergence of this method is guaranteed only in the vicinity of the solution, due to the strong nonlinearities, several measures need to be taken in order to obtain a solution.
Time embedding is an approach to solve a stationary problem from a given initial value by solving a time dependent problem with increasing time steps by an implicit Euler method.
Time step size control guarantees that the difference between the solutions at two subsequent time steps is small enough to ensure the convergence of Newton's method.
Parameter ramping controls certain parameters (solvation number, surface charge) in the initial phase of the time evolution in such a way that the nonlinearities are easy to solve.
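The interplay of time embedding, Newton's method and step size control can be sketched on a scalar toy problem (an assumption made purely for illustration; the actual system is the full discretized PDE): the evolution $\partial_t c = -f(c)$ is driven to the stationary state $f(c)=0$ by implicit Euler, each step is solved by Newton's method, and the step size grows while Newton converges quickly.

```python
# Toy stationary residual with a single real root near 0.68233:
def f(c):  return c**3 + c - 1.0
def df(c): return 3.0 * c**2 + 1.0

c, dt = 0.0, 1e-3                      # initial value and initial time step
for step in range(200):
    c_old, converged = c, False
    for newton_it in range(20):        # Newton for the implicit Euler step:
        g = (c - c_old) / dt + f(c)    #   g(c) = (c - c_old)/dt + f(c) = 0
        c -= g / (1.0 / dt + df(c))
        if abs(g) < 1e-12:
            converged = True
            break
    if not converged:
        c, dt = c_old, 0.5 * dt        # step rejected: retry with smaller dt
    elif newton_it < 5:
        dt *= 2.0                      # fast Newton convergence: enlarge dt
    if abs(f(c)) < 1e-12:              # stationary state reached
        break
```

As $dt$ grows, the implicit Euler step approaches a plain Newton step for the stationary problem, so the iteration ends at the steady state; the concrete thresholds (5 Newton iterations, doubling/halving) are assumptions of this sketch.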
For the flow part of the problem, the stationary Stokes problem is solved using a second order finite element method. Its velocity space consists of piecewise quadratic
continuous vector fields enhanced with cell bubble functions and its pressure space consists of piecewise linear and discontinuous scalar fields \cite{BernardiRaugel}.
For this method, an efficient divergence free reconstruction operator into the
Raviart--Thomas finite element space of first order is constructed by standard interpolation \cite{john2017divergence,linke2016pressure}.
The method has been verified against classical Helmholtz--Smoluchowski theory and corresponding asymptotic expressions \cite{overbeek1952electrokinetic,burgreen1964electrokinetic,FGLMMPreprint}.
In order to demonstrate the capabilities of the model and the method, we simulate direct current (DC) induced charge electroosmosis (ICEO) over an electrode with a floating voltage.
The effect of induced charge electroosmosis is based on a space charge region which appears at conducting surfaces immersed in an electric field.
High electric conductivity keeps the potential of the conducting surface at a constant value.
The external application of an electric field then must result in a potential gradient at the surface which triggers the formation of a space charge region.
In this space charge region, electric forces act on the fluid, eventually setting the fluid in motion.
For a more thorough discussion of this phenomenon including the alternating current case and other numerical approaches mainly based on the Helmholtz--Smoluchowski approximation for thin double layers see e.g. \cite{squires2004induced,soni2007nonlinear,gregersen2009numerical,pimenta2018numerical}.
We consider two large reservoirs that are separated by an impermeable wall with a narrow channel connecting the reservoirs.
We assume that both reservoirs are filled with the same liquid electrolyte and that the ionic concentrations and the pressure are the same on both sides.
For simplicity, we assume that the channel is of infinite width with planar parallel walls.
Then, the computational domain $\Omega$ is restricted to the rectangular channel region
in a cut plane, see Figure \ref{fig:iceomodel}.
On the boundary segment $\Gamma_e$ in the middle of the bottom wall there is a metal electrode which is not connected to any outer circuit.
\begin{figure}
\centering
\begin{minipage}[b]{0.49\textwidth}
\begin{tikzpicture}[scale=0.6]
\draw[very thick] (-4,0)--(4,0);
\draw[very thick] (-4,4)--(4,4);
\draw[dashed] (-4,0)--(-4,4);
\draw[dashed] (4,0)-- (4,4);
\draw[very thick, red] (-1,0)--(1,0);
\draw[thin,|-|] (-4,-1.0) -- (4,-1.0);
\draw (2,-0.8) node {{\small $L$}};
\draw[thin,|-|] (-1,-0.5) -- (1,-0.5);
\draw (0.7,-0.3) node {{\small $L_{e}$}};
\draw[thin,|-|] (-4.45,0) -- (-4.45,4);
\draw (-4.75,2) node {{\small $H$}};
\draw (-3.6,2) node {$\Gamma_l$};
\draw (3.6,2) node {$\Gamma_r$};
\draw (0,0.4) node {$\Gamma_{e}$};
\end{tikzpicture}
\end{minipage}
\includegraphics[width=0.49\textwidth]{iceo-floating-nref1.png}
\caption{Left: sketch of the simulation domain $\Omega$. Right: coarse
grid (level 1) consisting of 940 nodes}
\label{fig:iceomodel}
\end{figure}
The channel walls on the top and on the bottom side of $\Omega$ are impermeable, i.e.\ $\vec N_i\cdot \vec n=0$ and a no slip condition is imposed, i.e.\ $\vec u=0$.
Except at the metal electrode $\Gamma_e$, the channel walls are electrically insulating and uncharged, i.e. $\varepsilon\nabla \psi \cdot \vec n=0$.
The metal electrode $\Gamma_e$ is assumed to consist of an ideal metallic conductor with zero resistivity
embedded in an insulating environment such that the electric potential within the metal is constant.
The value of this constant is defined by the applied electric field.
In this sense, the constant voltage at this electrode is \textit{floating}.
At the interfaces $\Gamma_l$ and $\Gamma_r$ to the reservoirs, a zero stress boundary condition $\nu \nabla \vec u \cdot \vec n = p \vec n$ for the flow is imposed which allows unimpeded motion of the fluid through the boundary of the simulation domain.
A partial justification of this boundary condition is that at these boundaries we can ignore the electric forces due to local electroneutrality.
Also, the same problem was simulated with a no-slip boundary condition $(\vec u=0)$ with similar results.
Since the reservoir volume is considered large compared to the channel,
we impose the rather crude approximating condition of constant prescribed
species concentrations $c_i$ at the interfaces.
The electric field is applied by imposing a potential difference between the electrolyte reservoirs.
As an approximation, bias values of the same magnitude but opposite sign
are applied at the reservoir boundaries $\Gamma_l$ and $\Gamma_r$.
Due to symmetry, it then can be assumed that the floating potential value at the electrode on $\Gamma_e$ is zero: $\psi|_{\Gamma_e}=0$.
The potential difference between the reservoirs induces an electric current
that is carried by (positively charged) cations moving
from regions of higher potential to places with lower potential
and (negatively charged) anions moving nearly in the opposite way.
Without any charge accumulation, due to electroneutrality, there would be no influence on the velocity, and the flow velocity would be zero.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{iceo-floating-DGL15-1000mM-nref5-stream}
\includegraphics[width=0.49\textwidth]{iceo-floating-GCH15-1000mM-nref5-stream}
\caption{Charge density (color coded) and
streamlines of the induced charge electro-osmotic flow. Left:
Solvation model \eqref{eq:npfull} with $\kappa=15$. Right: Classical Nernst--Planck flux \eqref{eq:classflux}. Please note the significantly different color ranges which exhibit
the overestimation of the boundary layer charge in the case of the classical Nernst--Planck flux.
}
\label{fig:prof}
\end{figure}
The electric field in lateral direction triggers the formation of a polarization boundary layer at the electrode.
Due to symmetry, the charge density in this layer changes sign at the center of the electrode.
As a consequence, there are electroosmotic forces of opposite sign acting on the fluid
that lead to the creation of nanofluidic vortices.
The stream plots in Figure~\ref{fig:prof} give a qualitative impression of the emerging flow
and charge distribution for a 1M electrolytic solution.
For the simulation, a series of discretization grids of increasing refinement level, consisting of 465, 940, 1875, 3690, 7596, and 16383 nodes, respectively, is used.
The results of the modified and classical Nernst--Planck models can be compared.
While one observes a significant difference in the width of the polarization boundary layer (due to the finite size limitations in the solvation based model), the flow pattern appears qualitatively very similar.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{iceo-floating-DGL15-ref-xprofile}
\includegraphics[width=0.49\textwidth]{iceo-floating-GCH15-ref-xprofile}
\caption{$y$-component of the velocity at $y=2\nano\meter$. Left:
Solvation model \eqref{eq:npfull} with $\kappa=15$. Right: Classical Nernst--Planck flux \eqref{eq:classflux}.}
\label{fig:profx}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{iceo-floating-DGL15-ref-yprofile}
\includegraphics[width=0.49\textwidth]{iceo-floating-GCH15-ref-yprofile}
\caption{$y$-component of the velocity at $x=0$. Left:
Solvation model \eqref{eq:npfull} with $\kappa=15$. Right: Classical Nernst--Planck flux \eqref{eq:classflux}.}
\label{fig:profy}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{iceo-floating-nref3-vy-xprofile}
\includegraphics[width=0.49\textwidth]{iceo-floating-nref3-vy-yprofile}
\caption{Velocity ($y$-component) profiles for solvation based model
with different values of the solvation number $\kappa$.
Left: $y=2\nano\meter$. Right: $x=0$.}
\label{fig:profkappa}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{iceo-floating-nref3-pot-xprofile}
\includegraphics[width=0.49\textwidth]{iceo-floating-nref3-pot-yprofile}
\caption{Profiles of electrostatic potential $\psi$ for classical Nernst--Planck (``GC'')
and the solvation based model (``DGLM'') with $\kappa=15$.
Left: $y=0$. Right: $x=1.25\nano\meter$.}
\label{fig:profpot}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{iceo-floating-nref3-cp-xprofile}
\includegraphics[width=0.49\textwidth]{iceo-floating-nref3-cp-yprofile}
\caption{Profiles of positive ion concentration $c^+$ for classical Nernst--Planck (``GC'')
and the solvation based model (``DGLM'') with $\kappa=15$.
Left: $y=0$. Right: $x=1.25\nano\meter$.}
\label{fig:profcp}
\end{figure}
In order to obtain a more precise picture, Figures \ref{fig:profx} and \ref{fig:profy} exhibit profiles of the $y$-component of the flow. One observes good grid convergence behavior for both kinds of models,
suggesting accurate numerical approximation of the respective solutions.
The flow velocity for the solvation based model is considerably larger than for the classical Nernst--Planck model.
Figure \ref{fig:profkappa} provides some insight into the influence of
the solvation parameter $\kappa$ on the induced flow field.
With increasing $\kappa$, a slight increase of the velocity is observed, again consistent with the finite size effect.
Potential and concentration profiles in horizontal and in vertical direction
are given in Figures \ref{fig:profpot} and \ref{fig:profcp}.
They demonstrate the strong differences between the classical Nernst--Planck model and the solvation based model with respect to the distribution of the electrostatic potential and the positive ions.
\section{Conclusions}
We presented a model for electro-osmotic flow at the nanoscale which includes finite ion size effects in a way that is consistent with first principles of non-equilibrium thermodynamics.
We proposed a novel numerical solution approach which preserves important structural aspects of the continuous model.
This method allows one to simulate ion distributions in the vicinity of electrodes
in such a way that ion concentrations in polarization boundary layers are not drastically overestimated
and thus a meaningful coupling with the electro-osmotic flow due to the Coulomb force
is possible.
The capabilities of the numerical method have been demonstrated by a simulation of induced charge electroosmotic flow at a nanoscale electrode with floating potential.
The simulation results for a 1M electrolyte show a considerable influence of finite ion size and solvation on electroosmotic flow velocity.
Nanoscale ion transport, electric field distribution and electrolyte flow are processes which need to be thoroughly understood in order to provide meaningful interpretations of experimental work in the emerging field of nanoscale electrochemistry.
We hope to provide valuable contributions to future research efforts in this field by amending the model with more transported species and by incorporating electrochemical reactions at electrode surfaces into the model.
\section{Acknowledgement}
The research described in this paper has been supported by the German Federal Ministry
of Education and Research Grant 03EK3027D (Network ``Perspectives for Rechargeable
Magnesium-Air batteries'') and the Einstein Foundation Berlin
within the \textsc{Matheon} Project CH11 ``Sensing with Nanopores''.
\section{Bibliography}
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{sec:Intro}
In 2000, Jan Hendrik Sch\"{o}n became famous for his work coaxing materials into superconductors, and he published eight papers in Science and Nature. Julia Hsu and Lynn Loo accidentally stumbled across duplicated data used in one of Sch\"{o}n's papers while verifying the experimental progress of their own work. That raised an alarm bell and brought the attention of the scientific community to Sch\"{o}n's work, and evidence of scientific misconduct was found in at least 16 of his papers. It also raised questions about the publication policies of these high impact journals.
Scandals like Hwang Woo-Suk's fake stem-cell lines~\citep{saunders2008research} or Jan Hendrik Sch\"{o}n's duplicated graphs~\citep{service2003more} showed how easily scientists can publish fabricated data and put human health at risk, along with wasting millions of dollars of government research money.
These scandals raised questions about the role and responsibility of coauthors and reviewers of scientific articles. Similarly, a Japanese bone-health researcher, Yoshihiro Sato, fabricated data, plagiarized work, and forged authorships in more than 60 studies from 1996 to 2013~\citep{kupferschmidt2018researcher, else2019universities}.
There are multiple reasons behind the retraction of papers, including duplicate publication, falsification of data, fabrication of data, plagiarism, and ethical or legal misconduct \citep{steen2013has, bohannon2016s}. Some time ago, a Chinese journal found 31\% of its submissions plagiarized~\citep{zhang2010chinese, mallapaty2020china}. \cite{grieneisen2012comprehensive} discussed the reasons for and proportions of retractions: research misconduct (20\%), questionable data or interpretation (42\%), and publishing misconduct (47\%).
A published research work depends completely on the trust that a reader builds with a writer, where the reader expects honesty and fairness in the reported results~\citep{rennie1997authorship}.
When publishing any research work, authors must be prepared to accept the public responsibility that goes with it, i.e., authors certify the integrity of their work~\citep{shapiro1994contributions}.
With the rise in scientific collaborations, the number of retractions has also increased over the last two decades. Although scientific collaborations increase research productivity by pooling individuals' knowledge and research skills, a wrong collaboration can create distrust among peers and spoil reputations~\citep{abramo2011relationship, mongeon2016costly}. \cite{bennett2003unethical} showed how unethical practices in the authorship of scientific papers have increased over the past few decades.
\cite{khan1999controlled} explained the rising trend of multiple authorship, which is especially sharp in medical papers~\citep{onwude1993multiple, king2018analysis}. Much of the fraud happens in the medical field, and many studies have highlighted scientific misconduct in medicine~\citep{steen2011retractions, cassao2018retracted}.
A large number of authors keep on publishing fraudulent research~\citep{fanelli2009many, kuroki2018repeating}. This raises a serious concern: are authors deliberately committing research fraud?~\citep{steen2011retractions}
\cite{halevi2016post} explored post-retraction citations to retracted papers and found that the vast majority of the citations are positive, and that the citing papers usually fail to mention that the cited article was retracted.
Earlier studies indicate that a journal's retraction rate and its retractions for fraud are positively associated with its impact factor~\citep{fang2011retracted, fang2012misconduct}. \cite{trikalinos2008falsified} reported that falsified papers in high-impact journals were slow to be retracted. Nowadays, high-impact journals have revised their retraction policies~\citep{atlas2004retraction, resnik2015retraction}, which has lowered the number of retractions.
Over the years, a number of studies have been conducted on the characteristics of scientific misconduct \citep{he2013retraction, bar2018temporal}, the rise of retractions and its causes \citep{cokol2008retraction, steen2011retractions}, the effect of collaborations \citep{franceschet2010effect, asubiaro2019collaboration, zhang2020collaboration, sharma2020growth}, author behavior \citep{martinson2005scientists}, the impact on authors' careers~\citep{mongeon2016costly}, the ongoing citation of retracted papers \citep{neale2010analysis}, highly cited retracted papers~\citep{da2017highly}, etc. Our study provides a systematic view of the growth of team sizes in retracted papers, the relationship between authors' collaboration and retractions, the association between retractions and retracted citations, the relationship between journal impact factor and retracted citations, and the disciplines with higher retractions and team sizes.
The study is organized as follows: Section~\ref{sec:data} describes the data. Results are explained in Section~\ref{sec:result}. Finally, the concluding remarks are presented in Section~\ref{sec:conclusion}.
\section{Data description}
\label{sec:data}
We downloaded the data from the Web of Science (WoS) managed by Clarivate Analytics (\url{https://apps.webofknowledge.com/}). We searched for papers whose \textit{Document Type} in WoS is \textit{Retracted Publication} or \textit{Retraction}, published from 1981-2020. This condition returned 12231 records. The metadata contains a unique paper ID, author names, affiliations, journal name, document type, research area, number of citations, etc.
The metadata covers 27822 authors, 3127 journals, and 146 research areas.
Further, to retrieve the \textit{Document Type} of all citations of each paper, we first selected the papers with at least one citation, which yielded 6754 papers. We then searched WoS for every citation received by these 6754 papers and checked whether the citing paper is documented as either \textit{Retracted Publication} or \textit{Retraction}. In this way we obtained 3110 papers that have at least one citing paper documented as a retracted publication.
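The three filtering steps above can be sketched as follows. This is a hedged illustration on a toy in-memory sample, not the actual WoS export; the field names (\texttt{doc\_type}, \texttt{n\_citations}, \texttt{cited\_by\_doc\_types}) are assumptions for illustration and do not reflect the real WoS metadata schema.

```python
# Toy stand-in for the WoS export; field names are illustrative assumptions.
records = [
    {"id": "P1", "doc_type": "Retracted Publication", "n_citations": 3,
     "cited_by_doc_types": ["Article", "Retracted Publication"]},
    {"id": "P2", "doc_type": "Retraction", "n_citations": 0,
     "cited_by_doc_types": []},
    {"id": "P3", "doc_type": "Article", "n_citations": 5,
     "cited_by_doc_types": ["Article"]},
]

RETRACTED_TYPES = ("Retracted Publication", "Retraction")

# Step 1: keep only retracted publications / retraction notices
# (12231 records in the paper's dataset).
retracted = [r for r in records if r["doc_type"] in RETRACTED_TYPES]

# Step 2: of those, keep papers with at least one citation (6754 in the paper).
cited = [r for r in retracted if r["n_citations"] > 0]

# Step 3: keep papers where at least one citing paper is itself
# documented as retracted (3110 in the paper).
with_retracted_citations = [
    r for r in cited
    if any(t in RETRACTED_TYPES for t in r["cited_by_doc_types"])
]

print([r["id"] for r in with_retracted_citations])  # → ['P1']
```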
\section{Results}
\label{sec:result}
\subsection{Retractions and authors' collaboration}
The growth in the number of retracted papers by number of authors (team size) from 1981-2020 is shown in Fig.\ref{fig:Fig1}(a). The trend is shown for teams of size 1, 2, 3-5, 6-10, $>$10, and for the total number of retractions. The number of retractions rose sharply during 2010-2011, fell immediately afterwards in 2012, and rose sharply again during 2015-2016.
The decade-wise numbers of retractions, 1981-1990 ($0.2\%$), 1991-2000 ($2.7\%$), 2001-2010 ($20.4\%)$, and 2011-2020 ($76.7\%$), are shown in the inset of Fig.\ref{fig:Fig1}(a). Papers written in collaborations of between 3 and 10 authors have a large retraction count. Fig.\ref{fig:Fig1}(b) also shows that larger teams have fewer retracted papers than smaller teams. Single-author papers account for 13.3\% and two-author papers for 13.9\% of retractions. Teams of size 3-5 account for 42.3\% and teams of size 6-10 for 25.7\% of retracted papers. The percentage of retracted papers decreases for larger team sizes, and 69.5\% of retractions come from teams of at most 5 authors. This highlights that most authors write alone or tend to collaborate with only a few others. \cite{steen2013has} also studied the growth of scientific retractions and authors with multiple retractions; here, we demonstrate the authors' collaboration patterns in retracted papers.
\begin{figure}[!h]
\centering
\includegraphics[width=0.95\linewidth]{Fig1.png}
\llap{\parbox[b]{6.2in}{(a)\\\rule{0ex}{1.7in}}}
\llap{\parbox[b]{2.4in}{(b)\\\rule{0ex}{1.7in}}}
\caption{(a) The number of papers retracted from 1981-2020. The trend line colored in brown represents the total number of retracted papers. The other trend lines show retractions by number of authors (team size). Teams of size 3-5 have a large number of retractions. The decade-wise number of retractions is shown in the inset: $20.4\%$ of papers were retracted in 2001-2010 and $76.7\%$ in 2011-2020. (b) The number of retracted papers for varying team sizes. The minimum team size is 2 authors and the maximum is 81 authors. The highest number of papers were retracted from teams of size 1-5. }
\label{fig:Fig1}
\end{figure}
The geolocation of authors by number of retracted papers is shown in Fig.\ref{fig:Fig2}(a). Authors from China have the most retractions (25.7\%), followed by the USA (16.1\%), India (5.3\%), Japan (5.2\%), Iran (4.4\%), Germany (3.2\%), England (2.8\%), South Korea (2.6\%), Italy (2.4\%), and so on. The co-authorship network has 27822 authors and 167645 collaborations among them. The relationship between the number of authors and retracted papers follows a power law with exponent $-2.27$, as shown in Fig.\ref{fig:Fig2}(b). Of the 27822 authors, $61.5\%$ have only one retracted paper and $24.6\%$ have two; however, $65.4\%$ of retracted papers belong to just 27 individuals ($0.1\%$).
The share of collaborations among authors also follows a power law, with exponent $-2.31$, as shown in Fig.\ref{fig:Fig2}(c). A large number of authors work within a closed group. Authors who share many collaborations also have a large number of retracted papers, as shown in Fig.\ref{fig:Fig2}(d). The number of retracted papers per pair of authors follows a power law with exponent $-3.02$, as shown in Fig.\ref{fig:Fig2}(e); 28.8\% of author pairs have more than one retracted paper. This analysis shows how the same sets and pairs of authors commit fraud again and again. The relationship between authors and retractions thus shows scale-free behavior, with exponents varying between 2 and 3.
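The power-law exponents quoted above can be estimated by an ordinary least-squares fit in log-log space, a common (if simple) approach for distributions like those in Fig.\ref{fig:Fig2}(b)-(e). The sketch below uses synthetic counts generated from $f(x) = C\,x^{-2.27}$ so the recovered exponent is known in advance; it is not the paper's data, and a maximum-likelihood estimator would be more robust for real heavy-tailed counts.

```python
import math

def fit_power_law_exponent(xs, ys):
    """Least-squares slope of log(y) vs log(x); returns k in f(x) ~ x^(-k)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    return -slope

# Synthetic data drawn exactly from f(x) = 1000 * x^(-2.27).
xs = [1, 2, 4, 8, 16]
ys = [1000.0 * x ** -2.27 for x in xs]
print(round(fit_power_law_exponent(xs, ys), 2))  # → 2.27
```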
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\linewidth]{Fig2a.png}
\llap{\parbox[b]{5.5in}{(a)\\\rule{0ex}{2.0in}}}
\includegraphics[width=0.75\linewidth]{Fig2b.png}
\llap{\parbox[b]{5.0in}{(b)\\\rule{0ex}{3.7in}}}
\llap{\parbox[b]{2.5in}{(c)\\\rule{0ex}{3.7in}}}
\llap{\parbox[b]{5.0in}{(d)\\\rule{0ex}{1.9in}}}
\llap{\parbox[b]{2.6in}{(e)\\\rule{0ex}{1.9in}}}
\caption{(a) Geolocation of authors with a number of retractions. China has produced the largest number of authors with retracted papers, followed by the USA, India, Japan, and Iran, each with more than 500 authors. The grey area in the map indicates no record. (b) The number of papers retracted by authors follows a power law, $f(x) \propto x^{-k}$, with exponent -2.27 and $R^2$ = 0.93. (c) The number of collaborations shared by authors follows a power law with exponent -2.31 and $R^2$ = 0.89. (d) Authors with a number of collaborations and a number of retracted papers. (e) The number of papers retracted by pairs of authors follows a power law with exponent -3.02 and $R^2$ = 0.95. A few author pairs have a higher retraction rate.}
\label{fig:Fig2}
\end{figure}
\subsection{Retracted citations}
\cite{da2017highly} discussed why citing retracted papers is a problem for academia. \cite{da2017some} also tried to explain the reasons behind retracted citations.
We study the effect of citing flawed research by analyzing the citations of retracted papers.
55.2\% (6754) of the retracted papers have received at least one citation, while 25.4\% (3110) have at least one citation that is itself a retraction. Fig.\ref{fig:Fig3} analyzes these 3110 retracted papers whose citing papers include retractions. The cumulative growth of the number of retracted citations from 1981-2020 is shown in Fig.\ref{fig:Fig3}(a). The relationship between the number of citations received by retracted papers and the number of retractions among those citations is plotted in Fig.\ref{fig:Fig3}(b). A few papers have a large number of retracted citations, as shown in Fig.\ref{fig:Fig3}(c); in one such paper, 33.3\% of the citing papers are retractions. There could be two reasons behind this pattern: (i) those who cited the paper were unaware of the authenticity of the work, or the cited paper was retracted only later; (ii) the authors citing the retracted papers are among the authors of those retracted papers. However, examining authors' profiles is beyond the aim of this paper.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\linewidth]{Fig3.png}
\llap{\parbox[b]{5.3in}{(a)\\\rule{0ex}{3.9in}}}
\llap{\parbox[b]{5.5in}{(b)\\\rule{0ex}{2in}}}
\llap{\parbox[b]{2.65in}{(c)\\\rule{0ex}{1.9in}}}
\caption{(a) The number of retracted citations from 1981-2020. (b) The relationship between the number of citations received by retracted papers and the number of retracted citations. (c) The relationship between the number of retracted citations and the number of retractions. A few papers have a large number of retracted citations.}
\label{fig:Fig3}
\end{figure}
\subsection{Journal impact factor and research areas}
Fig.\ref{fig:Fig4}(a) shows the cumulative growth of the number of retracted papers in journals with more than 100 retractions since 1981. \textit{International Conference on Energy and Environmental Science 2011} (ICEES) has the largest number of retracted papers (6.2\%) with an average citation count of 0.24, whereas the \textit{Journal of Biological Chemistry} (JBC) has 2.7\% of retracted papers with an average citation count of 28.4. Details of the journals with more than 100 retractions are listed in Table~\ref{tabel:journal}. Fig.\ref{fig:Fig4}(b) shows the scatter plot of journals with more than 20 retractions. No significant relationship exists between the journal impact factor and the number of retractions. Similarly, the scatter plot of journals whose retracted papers have received more than 100 citations is shown in Fig.\ref{fig:Fig4}(c). Highly cited retracted papers show no significant relationship with the journal impact factor either.
Further, we investigated the citations received by retracted papers published in both higher and lower impact factor journals. One quarter of these papers received citations from papers that themselves turned out to be retractions; however, no significant relationship exists between higher or lower impact journals and the number of retractions or citations (see Fig.\ref{fig:Fig4}(d)). As Fig.\ref{fig:Fig4} shows, the number of retractions is independent of the journal impact factor, even though all journals accept papers only after peer review. This raises a serious question about the checks performed by the scientific community before accepting a paper for publication: how responsible are the journal and its editorial team for fraudulent research?
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\linewidth]{Fig4.png}
\llap{\parbox[b]{5.65in}{(a)\\\rule{0ex}{2.73in}}}
\llap{\parbox[b]{5.6in}{(b)\\\rule{0ex}{1.33in}}}
\llap{\parbox[b]{3.8in}{(c)\\\rule{0ex}{1.3in}}}
\llap{\parbox[b]{2.0in}{(d)\\\rule{0ex}{1.25in}}}
\caption{(a) The cumulative growth of retractions in journals with more than 100 retracted papers. Relation of journal impact factor to retractions for (b) number of retracted papers $\geq 20$; (c) number of citations $\geq 100$. (d) Relationship between journal impact factor and the number of retracted citations. The number of retractions, citations, and retracted citations are independent of the journal impact factor. }
\label{fig:Fig4}
\end{figure}
\begin{table}[!h]
\caption{Journals with more than 100 retracted papers from 1981-2020.}
\begin{tabular}{|l|c|c|c|c|}
\hline
Journal & \multicolumn{1}{l|}{Impact Factor} & \multicolumn{1}{l|}{\#Retraction} & \multicolumn{1}{l|}{Avg. Citations} & \multicolumn{1}{l|}{Avg. Team Size} \\ \hline
ICEES & - & 760 & 0.24 & 3.0 \\ \hline
JOURNAL OF BIOLOGICAL CHEMISTRY & 4.23 & 334 & 28.4 & 6.5 \\ \hline
PLOS ONE & 2.74 & 306 & 4.8 & 4.7 \\ \hline
TUMOR BIOLOGY & 3.65 & 291 & 6.9 & 4.6 \\ \hline
CANCER RESEARCH & 91.3 & 106 & 46 & 6.6 \\ \hline
\end{tabular}
\label{tabel:journal}
\end{table}
Fig.\ref{fig:Fig5}(a) shows the number of retractions per research area (for areas with more than 100 retractions). A large number of retractions occur in the biology and oncology disciplines; the ecology and energy disciplines have also experienced many retractions.
The average number of collaborators (team size) for each research area is shown in Fig.\ref{fig:Fig5}(b). The biological and medical disciplines have, on average, large team sizes. Fig.\ref{fig:Fig5}(c) shows the average number of retracted citations per research area. The results suggest that journals should perform extra checks on papers from the disciplines that have experienced a large number of retractions. The disciplines with many retractions also have, on average, large team sizes.
\begin{figure}[!h]
\centering
\includegraphics[width=0.85\linewidth]{Fig5a.png}
\llap{\parbox[b]{5.7in}{(a)\\\rule{0ex}{2.5in}}}
\includegraphics[width=0.85\linewidth]{Fig5b.png}
\llap{\parbox[b]{3.1in}{(b)\\\rule{0ex}{3in}}}
\llap{\parbox[b]{1.5in}{(c)\\\rule{0ex}{3in}}}
\caption{(a) Research areas with more than 100 retractions. Results for the first 20 research areas: (b) Average team size (arranged in ascending order); (c) Average number of retracted citations.}
\label{fig:Fig5}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
This study provides a number of important insights into retractions over four decades. In addition to a larger sample encompassing all retractions, this study differs from previous analyses in its treatment of team-size growth, the impact of collaborations on retractions, and retracted citations. Smaller teams account for a large number of retractions.
We further noticed that a handful of author pairs are especially active in scientific misconduct and account for a large number of retractions. Also, with the growth of scientific collaboration, the number of retractions is growing. We analyzed the citations of retracted papers and, for the first time, showed the effect of citing retracted papers on the papers citing them. We found that one quarter of the retracted papers received at least one citation from a paper that itself turned out to be retracted. This shows how fraud that is not recognized and retracted in time affects new research. Further, to check whether high-impact retracted papers are highly cited, we investigated the relationship between the journal impact factor and the number of retractions, citations of retractions, and retracted citations. We found that the journal impact factor is independent of the number of retractions and citations. Nowadays, with the increase in new journals, many low-impact journals have a large number of retracted papers; a large share of retractions occur in journals with low or no impact factor. Our findings show the effect of citing retracted papers and that such citations have no relation to the journal impact factor. Furthermore, our findings suggest a need for increased attention to papers that refer to retracted papers. The rise in the number of retractions raises concern in the scientific community about the degree of responsibility of coauthors and reviewers of scientific articles.
Our results show that scientific misconduct is independent of the journal impact factor; journals across the impact-factor spectrum have experienced large numbers of retractions even after peer review. That both high- and low-impact journals experience retractions further questions the policies and peer-review processes of high-impact journals.
This raises an open question for social scientists and other scientific communities: to investigate the rationale or psychology behind such fraud.
\section*{Acknowledgment}
We are thankful to Parul Khurana for his help in data extraction and to Prof. A. Chakraborti for feedback.
\bibliographystyle{cas-model2-names}
\section{\label{sec:intro}Introduction}
Graphene \cite{Novoselov2004}, a two-dimensional (2D) allotrope of carbon arranged in a honeycomb lattice of hexagons \cite{RMPCastroNeto}, has shown exceptional physical, chemical, and mechanical properties. It has a large theoretical specific surface area (2,630 m$^2$\,g$^{-1}$) \cite{zhu2010graphene}, high electron mobility at room temperature (250,000 cm$^2$\,V$^{-1}$\,s$^{-1}$) \cite{novoselov2005two}, thermal conductivity on the order of 5000 W\,m$^{-1}$\,K$^{-1}$ \cite{Balandin2008}, a high Young's modulus ($\sim$1.1 to 2.0 TPa) \cite{Lee385,JiangPRB2009}, and good electrical conductivity \cite{PAPAGEORGIOU201775,gluchowski2020}. More recently, Cao \textit{et al.} \cite{cao2018unconventional} experimentally demonstrated that bilayer graphene, which normally consists of two vertically stacked monolayers in an AB (Bernal) stacking configuration, exhibits intrinsic unconventional superconductivity with a critical temperature around 1.7 K when twisted to the so-called ``magic angle'' of $\sim$1.1°.
Owing to this impressive gamut of properties, graphene has demonstrated outstanding performance in several applications such as catalysis \cite{machado2012catalysis}, gas sensors \cite{yuan2013gassensors}, anti-corrosive coatings \cite{cui2019comprehensive}, flexible touchscreens \cite{vlasov2017touchscreen}, supercapacitors \cite{wang2009supercapacitor}, transistors \cite{schwierz2010graphene}, spintronics \cite{han2014spintronics}, twistronics \cite{CarrPRBtwistronics}, energy storage \cite{olabi135energyStorage}, and more.
However, the synthesis of high-quality, large-area monolayer graphene in a cost-effective process remains a drawback for industrial-scale applications \cite{lin2019NM}. In addition, unmodified graphene has certain limitations, such as weak electrochemical activity, easy agglomeration, and difficult processing, which greatly limit its applications \cite{yu2020review}. Furthermore, this 2D nanomaterial is intrinsically a zero-gap semiconductor, or semimetal, which makes it unsuitable for switching devices \cite{pulizzi2019graphene}. These are some of the reasons for the huge increase in research focused on the functionalization of graphene, including reactions with organic and inorganic molecules, chemical modification of the large graphene surface, and the general description of various covalent and noncovalent interactions with it \cite{georgakilas2012ChemRev}. Chemical functionalization of graphene is a suitable approach to induce a tunable band-gap opening \cite{pumera2014heteroatom}, or to enable this material to be processed by solvent-assisted techniques such as layer-by-layer assembly, spin-coating, and filtration \cite{kuila2012chemical}.
Chemical functionalization by attaching hydrogen adatoms to form fully hydrogenated graphene changes the hybridization of the carbon atoms from $sp^2$ to $sp^3$, thus removing the conducting $\pi$-bands and opening a direct band gap at the $\Gamma$ point with a magnitude of $3.5$-$3.7$ eV \cite{sofo2007PRBgraphane}. The so-called graphane is a nonmagnetic semiconductor composed of 100\% hydrogenated graphene, resulting in a CH stoichiometry. This graphene-based material was predicted to be stable as an extended covalently bonded 2D hydrocarbon with two favorable conformations: a chair-like conformer, with the hydrogen atoms alternating on both sides of the plane, and a boat-like conformer, with the hydrogen adatoms alternating in pairs \cite{sofo2007PRBgraphane}. Graphane was first synthesized by Elias \textit{et al.} using free-standing graphene, and the authors further showed that the reaction with hydrogen is reversible, so that the original metallic state, the lattice spacing, and even the quantum Hall effect can be restored by annealing \cite{elias2009Science}.
Another important stable graphene derivative has been obtained using fluorine adatoms that bind strongly to the carbons, giving rise to the so-called fluorographene, a fully fluorinated 2D graphene analogue of Teflon, the fully fluorinated one-dimensional carbon chain. Fluorographene is a wide-band-gap semiconductor with $E_{\text{g}}\,{=}\,3.8$ eV, wide enough for optoelectronic applications in the blue/UV spectrum \cite{nair2010fluorographene,jeon2011fluorographene}.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{Fig1.pdf}
\caption{Top and side view of optimized structure of (a) Me-graphene and (b) Me-graphane. We describe three types of carbon atoms, labeled as C$_1$ ($sp^3$) (orange), C$_2$/C$_{2'}$ ($sp^2$) (green) and C$_3$/C$_{3'}$ ($sp^2$) (blue). For Me-graphane, the hydrogen atoms are represented by white spheres. The buckling height is denoted by $h$. The unit cells are represented by doted square and lattice vectors $\mathbf{a}$ and $\mathbf{b}$. (c) Representative structure used in the molecular dynamics simulations of the hydrogenation process of Me-graphene, in which we included a frame of graphene to prevent bending or folding of the membrane.}
\label{fig:me-graph_structure}
\end{figure*}
Janus graphene (J-GN) has been predicted theoretically and prepared experimentally to study the asymmetric chemistry of graphene functionalization \cite{yang2013JGN,li2015JGN,zhang2013JGN}. J-GN is prepared by achieving asymmetric covalent functionalization with a variety of functional groups on the opposite sides of graphene. Hydrofluorinated J-GN, namely fluorographone, is a semiconducting graphene derivative formed by covalent functionalization with hydrogen and fluorine adatoms adsorbed onto opposite sides of monolayer graphene \cite{jin2016SciRepJGN}. Recently, we studied the modulation of hydrogen adsorption and the corresponding variation in the electronic properties of hydrofluorinated J-GN \cite{PRM-Schleder-Marinho2020}.
The innovation arising from graphene has boosted and inspired the research and development of novel graphene-like 2D carbon allotropes, such as the graphyne family \cite{baughman1987structure}, biphenylene carbon (BPC, also called graphenylene) \cite{baughman1987structure, brunetto2012nonzero, enyashin2011graphene}, penta-graphene \cite{PuruJena2015P-GN}, pentahexoctite \cite{sharma2014pentahexoctite}, T-graphene \cite{liu2012structural}, and octagraphene \cite{sheng2012octagraphene}.
The carbon atoms in these allotropes are either $sp^2$ or $sp$ hybridized. Interestingly, all these allotropes are planar and exhibit unique electronic as well
as mechanical properties \cite{sharma2014pentahexoctite}.
Zhuo \textit{et al.} \cite{zhuo2020M-GN} have predicted a new dynamically stable 2D carbon allotrope named tertiary-methyl-graphene or Me-graphene (M-GN). This 2D carbon-based material is composed of both $sp^2$- and $sp^3$-hybridized carbon obtained by topological assembly of \ce{C-(C3H2)4} molecules. With $sp^2$- and $sp^3$-hybridized carbon atoms in the ratio $12:1$, M-GN is a transition between graphene (ratio $1:0$) and penta-graphene (ratio $2:1$). As expected, M-GN has properties intermediate between those of graphene and penta-graphene. For example, its band gap of 1.08 eV lies between those of graphene (a semimetal) and penta-graphene ($E_{\text{g}}\,{=}\,3.25$ eV \cite{PuruJena2015P-GN}). Furthermore, M-GN presents an unusual near-zero Poisson's ratio of $-0.002$ to $0.009$ in the $xy$-plane, different from that of graphene ($0.169$) and penta-graphene ($-0.068$). M-GN also exhibits a high hole mobility of $1.60{\times} 10^5$ cm$^2$\,V$^{-1}$\,s$^{-1}$ at 300 K \cite{zhuo2020M-GN}. These interesting properties of M-GN can be tuned by chemical functionalization, for example by adsorbing hydrogen adatoms onto its surfaces. Indeed, this route is likely to be explored in the future to open up new application possibilities for functionalized graphene-based materials.
Herein, we carry out a comprehensive atomistic study of the structural and electronic properties of M-GN covalently functionalized with hydrogen. For this purpose, we performed \textit{ab initio} calculations based on density functional theory, as well as fully atomistic reactive molecular dynamics within the ReaxFF force field. We predict drastic changes in the electronic and structural properties of M-GN upon hydrogenation, pushing forward the frontiers of novel semiconducting 2D materials based on the covalent functionalization of graphene.
\section{\label{sec:method} COMPUTATIONAL DETAILS}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{Fig2.pdf}
\caption{Binding energies for hydrogenation of Me-graphene: (a) single-hydrogenated, (b) double-hydrogenated on top side and (c) bottom side, and (d) binding energies per hydrogen adatom as function of hydrogen coverage. The upper panels show the respective adsorption sites in Me-graphene.}
\label{fig:binding-energies}
\end{figure*}
We performed \textit{ab initio} calculations based on density functional theory \cite{HohenbergKohn1964,KohnSham1965} as implemented in the Vienna ab initio simulation package (\textsc{vasp}) \cite{VASP}. The projector augmented-wave method \cite{PAW} was used to treat the electron-ion interaction, and the exchange-correlation functional was described by the generalized gradient approximation (GGA) as proposed by Perdew, Burke, and Ernzerhof (PBE) \cite{PBE}. Structural optimizations were performed using a conjugate-gradient algorithm until the residual forces on the atoms were smaller than $0.025$ eV\,\AA$^{-1}$. The Kohn-Sham orbitals were expanded in a plane-wave basis set with a kinetic-energy cutoff of 500 eV. The Brillouin zone was sampled using a $\Gamma$-centered $17{\times}17{\times}1$ $k$-point mesh for the structural optimization of M-GN, following the scheme proposed by Monkhorst and Pack \cite{Monkhorst-Pack}. The M-GN monolayer and its periodic images were separated by a vacuum space of $\sim$20 \AA\, to avoid spurious interactions.
The crystal structure of M-GN is formed by twelve $sp^2$-hybridized and one $sp^3$-hybridized carbon atoms, as illustrated in Fig.~\ref{fig:me-graph_structure}\textcolor{blue}{(a)}, and can be described as methane with the four hydrogen atoms replaced by cyclopropenylidenes \cite{zhuo2020M-GN}. The ground-state structure possesses P$\bar{4}m$2 symmetry (space group \#115), containing carbon pentagons, hexagons, and octagons. The representative structure of fully hydrogenated M-GN, named Me-graphane, is shown in Fig.~\ref{fig:me-graph_structure}\textcolor{blue}{(b)}. The modification of the $sp^2$-$sp^2$ carbon bonds by the formation of $sp^3$-derived carbon-hydrogen bonds leads to the variation in structural conformation.
In addition, we carried out fully atomistic molecular dynamics (MD) simulations to study hydrogen adsorption and the structural properties of hydrogenated M-GN at different temperatures. We employed the reactive force field ReaxFF, an empirical force field for reactive systems developed by van Duin \textit{et al.} \cite{vanDuinReaxFF}. ReaxFF employs a bond order/bond energy relationship, which allows for bond formation and bond dissociation during MD simulations. The bond orders are obtained from the interatomic distances and are updated at every MD or energy-minimization step \cite{raju2013reaxff}. The ReaxFF MD simulations were run with the large-scale atomic/molecular massively parallel simulator (\textsc{lammps}) code \cite{lammps}. Specifically, we applied the well-established \ce{C-C} interaction parameters developed by Chenoweth \textit{et al.} \cite{chenoweth2008reaxff}. All ReaxFF MD simulations were performed in the canonical (NVT) ensemble, with a time step of $0.25$ fs, using the Nosé-Hoover thermostat with a coupling time constant of $25$ fs to control the temperature of the entire system. The hydrogenation was carried out on a suspended M-GN membrane supported by a graphene frame, as shown in Fig.~\ref{fig:me-graph_structure}\textcolor{blue}{(c)}. The hydrogen atmosphere was composed of $1{,}000$ atoms in a volume of $58{,}300$ \AA$^3$ on both sides of the membrane. We also restricted the hydrogenation to the suspended M-GN area to avoid edge effects.
\section{\label{sec:results} Results and Discussions}
\subsection{Structural properties of the pristine and hydrogenated Me-graphene}
\begin{figure*}[t]
\centering
\includegraphics[width=.9\textwidth]{Fig3.pdf}
\caption{Orbital-resolved bandstructures for the atomic orbitals (a) $s$, (b) $p_x$, (c) $p_y$ and (d) $p_z$ of Me-graphene, and for the orbitals (e) $s$, (f) $p_x$, (g) $p_y$ and (h) $p_z$ of Me-graphane. The scale indicates the magnitude of the projection. The Fermi level was set to zero of energy.}
\label{fig:bands_orbitals}
\end{figure*}
We analyze the structural modifications of M-GN due to hydrogen adsorption onto its surface. Table~\ref{tab:struc-parameters} presents the optimized lattice constants of M-GN, regarding the $a$ and $b$ lattice vectors and the buckling height $h$. We obtained $a=b=5.745$ \AA, with a thickness of $h=0.985$ \AA, in close agreement with Ref.~\cite{zhuo2020M-GN} ($a=b=5.744$ \AA\, and $h=0.988$ \AA). We estimated three types of bond lengths between the $sp^2$-hybridized carbons, with values of $1.416$, $1.449$, and $1.406$ \AA, similar to the \ce{C-C} bond length in graphene ($1.426$ \AA). Between the $sp^3$-hybridized (\ce{C1}) and $sp^2$-hybridized (\ce{C2}) carbons, the bond length was estimated at $1.563$ \AA. The \ce{C1-C2-C3} bond angle is $114.2^{\circ}$, larger than the $109.47^{\circ}$ of standard $sp^3$ hybridization, while the \ce{C2-C1-C$_{2'}$} angle is $95.71^{\circ}$.
\begin{table}[t]
\centering
\caption{Structural parameters for Me-graphene and Me-graphane.}
{\def\arraystretch{1.2}\tabcolsep=3.5pt
\begin{tabular}{lccc}
\hline\hline
& & Me-graphene & Me-graphane \\\hline
\multirow{3}{*}{\shortstack[l]{Lattice \\parameters (\AA)}} & $a$ & 5.745 & 5.904 \\
& $b$ &5.745 &5.905 \\
& $h$ & 0.985 &1.110 \\[3pt]
\multirow{3}{*}{\shortstack[l]{Bond \\lengths (\AA)}} &\ce{C1-C2} & 1.563 &1.614\\
&\ce{C2-C3} & 1.416 &1.530\\
&\multirow{2}{*}{\ce{C3-C$_{3'}$}} &1.449 &1.579\\
& &1.406 &1.549\\[3pt]
\multirow{2}{*}{\shortstack[l]{Bond \\angles ($^{\circ}$)}} &\ce{C1-C2-C3} &114.22 &114.16\\
&\ce{C2-C1-C$_{2'}$} &95.71 &91.59\\\hline\hline
\end{tabular}
}
\label{tab:struc-parameters}
\end{table}
Comparing the results for M-GN to those obtained for Me-graphane, we notice that hydrogenation promotes an overall increase in the structural parameters, inducing a lattice strain of $\sim2.8\%$. Every \ce{C-C} bond length in Me-graphane is comparable to or longer than the $sp^3$ bond length of $1.548$ \AA\, in diamond, and much greater than the $1.426$ \AA\, characteristic of the $sp^2$ \ce{C-C} bond in graphene \cite{sofo2007PRBgraphane,zhuo2020M-GN}. We find that the boat-like conformer of Me-graphane is the most favorable conformation, with a buckling height of $1.110$ \AA. By comparison, the \ce{C-C} bond lengths in graphane are $1.52$ \AA, very close to that of diamond and longer than the \ce{C-C} bond length in graphene; moreover, graphane has two favorable conformations, the chair-like and boat-like conformers \cite{sofo2007PRBgraphane}.
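The quoted lattice strain follows directly from the lattice constants in Table~\ref{tab:struc-parameters}; a quick numerical check:

```python
# Hydrogenation-induced lattice strain from the lattice constants in the
# structural-parameters table (values in Angstrom).
a_me_graphene = 5.745
a_me_graphane = 5.904

strain_percent = 100.0 * (a_me_graphane - a_me_graphene) / a_me_graphene
print(round(strain_percent, 1))  # → 2.8
```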
\subsection{Hydrogen binding energy}
We evaluate the most stable adsorption sites for hydrogenation by calculating the hydrogen binding energies for different sites, defined as the difference in total energy between the hydrogenated compounds and their component parts:
\begin{equation}
E_{\text{b}} = -\left[\frac{E_{\text{M-GN$+n$H}}-\left(E_{\text{M-GN}}+E_{\text{$n$H}}\right)}{n}\right]\,,
\end{equation}
where $E_{\text{M-GN$+n$H}}$ is the ground-state energy of M-GN containing a total of $n$ adsorbed hydrogen atoms, while $E_{\text{M-GN}}$ and $E_{\text{$n$H}}$ are the energies of pristine M-GN and of $n$ isolated hydrogen atoms, respectively. The binding energies per hydrogen in M-GN are shown in Fig.~\ref{fig:binding-energies}. The adsorption of one H onto the \ce{C2} site (index 2 in the figure) of M-GN, corresponding to $7\%$ hydrogen coverage, leads to a binding energy of 3.50 eV/H-atom (Fig.~\ref{fig:binding-energies}\textcolor{blue}{(a)}). When H is adsorbed on the \ce{C2} site, the \ce{C1-C2} bond is broken, resulting in a decrease of the free energy due to a gain of conformational entropy, which yields the highest binding energy.
We also evaluate the hydrogen binding energy for a second adatom inserted into single-hydrogenated M-GN, corresponding to $8\%$ hydrogen coverage (Fig.~\ref{fig:binding-energies}\textcolor{blue}{(b)-(c)}). For H adsorption on the top side of M-GN, the highest binding energies were obtained for the nearest-neighbor \ce{C3} site (index 1 in the figure) and for the \ce{C1} site (index 4 in the figure), although H adsorption on the other two sites also results in high binding energies, above 3 eV/H-atom. For H adsorption on the bottom side, the highest binding energy was achieved for the \ce{C1} site. Finally, we study the binding energy for $62\%$-hydrogenated M-GN and for Me-graphane; in both cases, all hydrogen binding energies were higher than 3 eV/H-atom. For comparison, the hydrogen binding energy in hydrogenated penta-graphene is 3.65 eV/H-atom \cite{li2016PCCP-hfpg}, slightly higher than those obtained for hydrogenated M-GN. Therefore, the hydrogenation of M-GN results in highly stable chemisorption, and the resultant hydrogenated M-GN tends to be more thermodynamically favorable owing to the saturation of all nonplanar $sp^2$-hybridized carbon atoms.
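The binding-energy expression above is straightforward to evaluate once total energies are available; below is a minimal sketch using hypothetical energies, not the DFT values of this work:

```python
def binding_energy_per_H(E_total, E_pristine, E_H, n):
    """Hydrogen binding energy per adatom:
    E_b = -[E(M-GN + nH) - (E(M-GN) + n*E(H))] / n.

    All energies in eV; E_H is the energy of a single isolated H atom.
    A positive E_b indicates energetically favorable adsorption.
    """
    return -(E_total - (E_pristine + n * E_H)) / n

# Hypothetical illustrative numbers (eV), not the values reported in the text:
E_b = binding_energy_per_H(E_total=-110.0, E_pristine=-105.5, E_H=-0.5, n=1)
print(E_b)  # 4.0
```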
\subsection{Electronic structures}
\begin{figure}[t]
\centering
\includegraphics[width=0.46\textwidth]{Fig4.pdf}
\caption{Partial charge densities of the VBM and CBM for (a-b) Me-graphene and (c-d) Me-graphane, with an isosurface value of 0.005 $e$\,Å$^{-3}$. (e) Charge density difference ($\Delta \rho$) plot for Me-graphane with an isosurface value of 0.015 $e$\,Å$^{-3}$. In the $\Delta \rho$ results, blue and red regions depict charge accumulation and depletion, respectively.}
\label{fig:vbm-cbm-deltarho}
\end{figure}
In Fig.~\ref{fig:bands_orbitals}, we present the orbital-resolved band structures for M-GN and Me-graphane. The color scale indicates the orbital character of each state. Our results show that M-GN is an indirect band gap semiconductor with a band gap of 0.64 eV using the GGA-PBE functional, with the VBM at M ($0.5, 0.5, 0$) and the CBM at the $\Gamma$ point, in excellent agreement with the reported theoretical band gap of $0.65$ eV for M-GN \cite{zhuo2020M-GN} with identical band-edge coordinates in the Brillouin zone. In M-GN, the states at both the VBM and the CBM have predominantly C-$p_z$ character, with no significant contributions from the C-$p_x$ and C-$p_y$ orbitals. On the other hand, the electronic band structure of Me-graphane indicates that it is a wide band gap 2D semiconductor, presenting an indirect band gap of 2.81 eV in the GGA-PBE approach, with the band edges located also at M (VBM) and $\Gamma$ (CBM). Near the VBM, we notice the formation of a fully filled intermediate band with a bandwidth of about $1$ eV. The CBM states in Me-graphane have mostly C-$p_z$ character, analogous to M-GN, whereas the VBM is mainly formed by hybridization of the C-$p_x$, C-$p_y$ and C-$p_z$ orbitals. As a result, the intermediate band near the VBM of Me-graphane arises from the structural conformation of hydrogenated M-GN, with main contributions from carbon-related orbitals and not directly from the presence of the adsorbed hydrogen atoms.
\begin{table}[t]
\centering
\caption{Band gap and effective electron ($m^*_e$) and hole ($m^*_h$) masses, in units of the free electron mass $m_0$, at the $\Gamma$ (CBM) and M (VBM) points, respectively, using the GGA-PBE approach. Each effective mass was obtained along two high-symmetry directions in the Brillouin zone: $\Gamma{-}$X and $\Gamma{-}$M for $m^*_e$, and M$-$X and M$-\Gamma$ for $m^*_h$. }
\label{tab:eg-and-masses}
{\def\arraystretch{1.2}\tabcolsep=3.5pt
\begin{tabular}{lcccccc}
\hline\hline
\multirow{2}{*}{H coverage (\%)} & \multirow{2}{*}{$E_{\text{g}}$ (eV)} & \multicolumn{2}{c}{$m^*_e/m_0$} & & \multicolumn{2}{c}{$m^*_h/m_0$} \\\cline{3-4}\cline{6-7}
& & $\Gamma{-}$X& $\Gamma{-}$M & & M${-}$X & M${-}\Gamma$ \\\hline
0 (Me-graphene) & 0.64 & 0.23 & 0.26 & & $-0.21$ & $-0.24$\\
8 & 0 & -- & -- & & -- & --\\
15 & 0.11 & 0.14 & 0.18 & &$-0.27$ & $-0.31$\\
62 & 0 & -- & -- & & -- & --\\
92 (Me-graphane)& 2.81 & 1.02 & 1.02 & & $-3.71$ &$-3.37$\\\hline\hline
\end{tabular}
}
\end{table}
The partial charge density distributions for CBM and VBM of M-GN and Me-graphane are shown in Fig.~\ref{fig:vbm-cbm-deltarho}\textcolor{blue}{(a)-(d)}. For M-GN, occupied-electron states of VBM and also empty CBM states are localized at octagonal rings. With regard to Me-graphane, CBM states lie in \ce{C3-C3} bonds of octagonal rings, whereas VBM states in Me-graphane are delocalized over all carbon bonds, in striking agreement with the orbital-resolved band structures and representing the hybridization of the C-$p_x$, C-$p_y$ and C-$p_z$ orbitals.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{Fig5.pdf}
\caption{Representative snapshots from the reactive molecular dynamics simulations of hydrogen incorporation process, where black spheres represent H atoms. (a) Early stage, (b) intermediate stage, and (c) final stage of the H adsorption dynamics. (d) Hydrogen adsorption rate in atomic percent as a function of reaction time at 300 K, regarding the adsorption sites \ce{C1}, \ce{C2}, and \ce{C3}. }
\label{fig:snaps-adscurve}
\end{figure*}
To describe the effects of hydrogenation in the electron distribution of functionalized M-GN relative to the unperturbed system, we compute the charge density difference as follows
\begin{equation}
\Delta\rho = \rho_{\text{M-GN$+n$H}} - \left(\rho_{\text{M-GN}}+ \rho_{\text{$n$H}}\right)\,,
\end{equation}
where $\rho_{\text{M-GN$+n$H}}$ is the electron density of the hydrogenated M-GN containing a total of $n$ hydrogen adatoms, and $\rho_{\text{M-GN}}$ and $\rho_{\text{$n$H}}$ are the unperturbed electron densities of the substrate and sorbate, respectively. The $\Delta \rho$ result is shown in Fig.~\ref{fig:vbm-cbm-deltarho}\textcolor{blue}{(e)}. Our results indicate that the $p$ orbitals of carbon lose a small amount of electron density, as seen in the depletion volumes in red. We also notice that the hydrogen adatoms effectively gain electrons, as shown by the accumulation volumes in blue around the hydrogen atoms, consistent with the formation of \ce{C-H} chemical bonds after adsorption.
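To illustrate how $\Delta\rho$ can be evaluated in practice, the sketch below computes a plane-averaged charge density difference on a toy real-space grid; the grid shapes, density values and cell area are illustrative placeholders, not outputs of the calculations in this work:

```python
import numpy as np

def plane_averaged_delta_rho(rho_total, rho_substrate, rho_adsorbate, cell_area):
    """Charge density difference averaged over the xy-plane.

    Each rho_* is a 3D array on the same real-space grid (e/Angstrom^3);
    returns a 1D profile along z in e/Angstrom.
    """
    delta = rho_total - (rho_substrate + rho_adsorbate)
    # Average over in-plane grid points; multiply by the cell area -> e/Angstrom
    return delta.mean(axis=(0, 1)) * cell_area

# Toy grids: charge accumulates at the "adsorbate plane" (z index 5)
nx = ny = 4
nz = 8
rho_sub = np.full((nx, ny, nz), 0.02)
rho_ads = np.zeros((nx, ny, nz))
rho_ads[:, :, 5] = 0.01
rho_tot = rho_sub + rho_ads
rho_tot[:, :, 5] += 0.005  # extra charge gained upon bonding

profile = plane_averaged_delta_rho(rho_tot, rho_sub, rho_ads, cell_area=33.0)
print(profile.argmax())  # 5 -- the peak sits at the adsorbate plane
```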
Furthermore, to analyze the tuning of the electronic properties of M-GN by hydrogenation, we calculate the electronic band gap and the effective electron and hole masses as a function of the hydrogen coverage; the results are listed in Table~\ref{tab:eg-and-masses}. The effective mass is a convenient quantity for obtaining quantitative insight into the mobility of charge carriers. The effective electron ($m^*_e$) and hole ($m^*_h$) masses were derived from parabolic fits to the GGA-PBE band structures at the band extrema, the CBM located at $\Gamma$ and the VBM located at M, respectively, along the principal directions M${-}$X and M${-}\Gamma$ for the VBM, and $\Gamma{-}$X and $\Gamma{-}$M for the CBM.
Our results indicate a dramatic variation of the band gap of M-GN with hydrogenation, ranging from 0.64 eV for pristine M-GN to 2.81 eV for Me-graphane. Moreover, for some intermediate concentrations of adsorbed hydrogen, such as 8\% and 62\%, the hydrogenation produces a metallic ground state of functionalized M-GN, with a vanishing band gap. In turn, for $15\%$ hydrogen coverage in M-GN we found an indirect band gap of 0.11 eV, which is 530 meV smaller than the band gap of pristine M-GN. This hydrogenated M-GN system was modeled by adsorbing one H on the \ce{C2} site (top side) and the other H onto the nearest-neighbor \ce{C3} site.
The effective electron and hole masses were also effectively tailored by changes in the hydrogen coverage of M-GN. The lowest values for pristine M-GN are similar, with $m^*_e = 0.23\,m_0$ in the $\Gamma{-}$X direction and $m^*_h = 0.21\,m_0$ in M${-}$X. Compared to penta-graphene, which presents $m^*_e = 0.24\,m_0$ and $m^*_h = 0.50\,m_0$ using the GGA-PBE functional \cite{deb2020P-GN}, the effective hole mass in M-GN tends to be lower while the effective electron mass is comparable. Our results for Me-graphane show that the effective masses increase significantly upon full hydrogenation of M-GN: we compute $m^*_e = 1.02\,m_0$ and $m^*_h = 3.71\,m_0$ in GGA-PBE for Me-graphane. Similarly, the effective masses for fully hydrogenated penta-graphene, named penta-graphane, also tend to be higher than those of pristine penta-graphene, with $m^*_e = 1.2\,m_0$ and $m^*_h = 0.58\,m_0$ applying the GGA-revised-PBE functional \cite{einollahzadeh2016P-GN-H}. Conversely, the effective electron mass in 15\%-hydrogenated M-GN decreases to $m^*_e = 0.14\,m_0$ in $\Gamma{-}$X, with the effective hole mass kept in the same order of magnitude, $m^*_h = 0.27\,m_0$ in the M${-}$X direction.
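The parabolic-fit procedure for the effective masses can be sketched as follows; the band energies below are synthetic (generated from a target mass), not the DFT data of this work, and we use $\hbar^2/m_0 \approx 7.62$ eV\,\AA$^2$:

```python
import numpy as np

HBAR2_OVER_M0 = 7.6199682  # hbar^2 / m0 in eV * Angstrom^2

def effective_mass(k, E):
    """Effective mass m*/m0 from a parabolic fit E(k) = E0 + a*k^2 near a band edge.

    k in 1/Angstrom (measured from the extremum), E in eV.
    m*/m0 = hbar^2 / (m0 * d^2E/dk^2) = HBAR2_OVER_M0 / (2a).
    """
    a = np.polyfit(k, E, 2)[0]  # curvature coefficient of the quadratic fit
    return HBAR2_OVER_M0 / (2.0 * a)

# Synthetic conduction band with target m*/m0 = 0.23
k = np.linspace(-0.05, 0.05, 21)
m_target = 0.23
E = HBAR2_OVER_M0 / (2.0 * m_target) * k**2

print(round(effective_mass(k, E), 2))  # 0.23
```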
\subsection{Reactive molecular dynamics}
The hydrogen adsorption dynamics was analyzed by reactive MD simulations. In Fig.~\ref{fig:snaps-adscurve}\textcolor{blue}{(a)-(c)} we present representative snapshots from
a $0.12$ ns MD simulation of the hydrogenation process in M-GN at 300 K, for an atmosphere composed only of H atoms introduced on both the top and bottom sides of the membrane. In the initial stage, Fig.~\ref{fig:snaps-adscurve}\textcolor{blue}{(a)}, the H atoms are mostly incorporated on the \ce{C1} and \ce{C2} sites. In the snapshot of the intermediate stage, Fig.~\ref{fig:snaps-adscurve}\textcolor{blue}{(b)}, hydrogen-adsorbed \ce{C2} sites act as seeds for the growth of hydrogen islands, owing to geometric changes caused by the modification of the carbon hybridization from $sp^2$ to $sp^3$. In Fig.~\ref{fig:snaps-adscurve}\textcolor{blue}{(d)} we show the curves of hydrogenation per total adsorption sites (at.\%) as a function of reaction time. At the beginning of hydrogenation, H adsorption occurs mostly on the \ce{C1} and \ce{C2} sites of M-GN, which agrees with the snapshots (see dashed square and circles in Fig.~\ref{fig:snaps-adscurve}).
In our reactive MD simulations, we considered hydrogen adsorption at temperatures of $150$, $300$, and $800$ K. The hydrogen incorporation rate exhibits two phases: a linear upward curve followed by a plateau indicating the saturation of adsorption and dynamic equilibrium. The saturation is reached at different reaction times for each analyzed temperature, suggesting that the hydrogenation of M-GN is a temperature-dependent reaction, in agreement with reported results on the covalent functionalization of graphene \cite{paupitz2012graphene, PRM-Schleder-Marinho2020}. Our reactive MD simulations also show that the \ce{C1-C2} bond tends to break, favoring the formation of defects and turning the \ce{C1} site into a more favorable adsorption site.
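The two-phase shape of the adsorption curves (linear rise, then a plateau) is qualitatively captured by simple first-order adsorption kinetics. The toy model below is only a schematic analogue of the reactive MD results, with a hypothetical rate constant, not a fit to the simulation data:

```python
import numpy as np

def coverage(t, k):
    """First-order Langmuir-type kinetics: d(theta)/dt = k * (1 - theta),
    giving theta(t) = 1 - exp(-k t), where theta is the fractional H coverage."""
    return 1.0 - np.exp(-k * t)

t = np.linspace(0.0, 0.12, 121)  # ns, same time span as the MD runs
theta = coverage(t, k=60.0)      # k: hypothetical rate constant in 1/ns

# Early regime is approximately linear in t; late regime saturates near 1
early_slope = (theta[1] - theta[0]) / (t[1] - t[0])
print(round(early_slope, 1), round(theta[-1], 3))
```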
\section{\label{sec:conclusion} Conclusions}
Motivated by the promising properties of a new graphene-based semiconductor named Me-graphene, we have studied the effects of hydrogenation on the structural and electronic properties of this 2D nanomaterial. Our \textit{ab initio} DFT calculations show an extreme modulation of the electronic properties of M-GN by hydrogenation. M-GN is a semiconductor with an indirect band gap of 0.64 eV, whereas fully hydrogenated M-GN (named Me-graphane) is a wide band gap semiconductor with $E_{\text{g}}=2.81$ eV in the GGA-PBE approach. Analyzing intermediate hydrogen concentrations for partial functionalization of M-GN, we found metallic ground states, as well as a semiconducting state for 15\%-hydrogenated M-GN with a narrow band gap of 0.11 eV. In Me-graphane, the effective masses of the charge carriers are at least four times higher than those of pristine M-GN, although for 15\%-hydrogenated M-GN the effective electron mass is almost halved and the effective hole mass is not significantly altered. The hydrogen atoms bind strongly to M-GN, indicating chemisorption, with binding energies higher than 3 eV. Me-graphane presents longer bond lengths than M-GN, with the boat-like conformer being its most favorable conformation. The reactive molecular dynamics simulations show that the hydrogenation of M-GN is a temperature-dependent reaction, with the formation of hydrogen islands starting with adsorption on the most stable carbon site predicted in our DFT calculations. In conclusion, we believe our results will motivate interest in the synthesis of M-GN and Me-graphane, which could broaden the range of potential applications of carbon allotropes.
\begin{acknowledgments}
This work was supported by the Brazilian agencies
FAPESP, CAPES, and CNPq (Process No. 310045/2019-3). Computational resources were provided by the high performance computing center at UFABC. The authors thank Mr. Matheus Medina for his technical support in reactive molecular dynamics simulations.
\end{acknowledgments}
\section{Introduction}
The Gromov-Hausdorff distance $d_\mathcal{GH}$ was independently introduced by Edwards \cite{edwards1975structure} and Gromov \cite{gromov1981groups} in order to quantify the difference between two given metric spaces. Let $(X,d_X)$ and $(Y,d_Y)$ be two compact metric spaces. Then, the Gromov-Hausdorff distance between them is defined as follows:
\begin{equation}\label{eq:dgh}
d_\mathcal{GH}\left(X,Y\right)\coloneqq\inf_{Z}d_\mathcal{H}^Z\left(\varphi_X\left(X\right),\varphi_Y\left(Y\right)\right),
\end{equation}
where $d_\mathcal{H}^Z$ denotes the Hausdorff distance between nonempty closed subsets of $Z$ (cf. \Cref{def:dH}) and the infimum is taken over all metric spaces $Z$ and isometric embeddings $\varphi_X:X\hookrightarrow Z$ and $\varphi_Y:Y\hookrightarrow Z$.
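For finite subsets of a common metric space, the Hausdorff distance appearing in the infimand above can be computed directly; here is a small illustrative sketch in the Euclidean plane:

```python
from math import dist

def hausdorff(A, B):
    """Hausdorff distance between finite nonempty subsets A, B of R^n:
    max( sup_{a in A} d(a, B), sup_{b in B} d(b, A) )."""
    d_ab = max(min(dist(a, b) for b in B) for a in A)
    d_ba = max(min(dist(a, b) for a in A) for b in B)
    return max(d_ab, d_ba)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (3.0, 0.0)]
print(hausdorff(A, B))  # 2.0: the point (3, 0) lies at distance 2 from A
```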
Let $\mathcal{M}$ denote the collection of isometry classes of compact metric spaces. In this regard, it has been established that when endowed with $d_\mathcal{GH}$, $\left(\mathcal{M},d_\mathcal{GH}\right)$ is a complete and separable metric space \cite{burago2001course,petersen2006riemannian}. The geometry of $\left(\mathcal{M},d_\mathcal{GH}\right)$ has been studied extensively recently \cite{ivanov2017local,ivanov2019isometry,klibus2018convexity}.
A metric measure space $\mathcal{X}=(X,d_X,\mu_X)$ is a metric space $(X,d_X)$ endowed with a Borel probability measure $\mu_X$. Denote by $\mathcal{M}^w$ the collection of isomorphism classes (cf. \Cref{def:isomorphism}) of compact metric measure spaces with full support. By replacing the Hausdorff distance in \Cref{eq:dgh} with the $\ell^p$-Wasserstein distance (cf. \Cref{def:p-w-dist}) between probability measures, Sturm introduced the \emph{$L^p$-transportation distance} on $\mathcal{M}^w$ as a counterpart to $d_\mathcal{GH}$ \cite{sturm2006geometry,sturm2012space} and in this paper, we denote by $\dgws{p}$ (for each $p\in[1,\infty]$) his $L^p$-transportation distance. In \cite{memoli2007use,memoli2011gromov} the first author introduced the Gromov-Wasserstein distance $\dgw{p}$ (for each $p\in[1,\infty]$) on $\mathcal{M}^w$ based on an alternative representation of $d_\mathcal{GH}$ (cf. \Cref{thm:dgh-dual}). $\dgws{p}$ and $\dgw{p}$ are both legitimate metrics on $\mathcal{M}^w$ which generate the same topology, yet they do not coincide in general.
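To give a concrete feel for the Wasserstein ingredient of these distances: on the real line, the $\ell^p$-Wasserstein distance between uniform empirical measures with equally many atoms is realized by the monotone (sorted) coupling, a standard fact which the following sketch illustrates:

```python
def wasserstein_p_1d(xs, ys, p=2):
    """l^p-Wasserstein distance between the uniform empirical measures
    (1/n) sum_i delta_{x_i} and (1/n) sum_i delta_{y_i} on the real line.
    On R, the monotone (sorted) coupling is optimal."""
    assert len(xs) == len(ys)
    n = len(xs)
    cost = sum(abs(x - y) ** p for x, y in zip(sorted(xs), sorted(ys))) / n
    return cost ** (1.0 / p)

print(wasserstein_p_1d([0.0, 1.0], [2.0, 3.0], p=1))  # 2.0: each atom moves by 2
```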
In recent years, the Gromov-Hausdorff and the Gromov-Wasserstein distances have found applications in shape analysis \cite{memoli2004comparing,memoli2005theoretical,memoli2007use,bronstein2008numerical,bronstein2010gromov}, machine learning \cite{peyre2019computational,alvarez2018gromov,titouan2019optimal,bunne2019learning,chowdhury2020generalize,xu2019scalable,vayer2019sliced,le2019fast}, biology \cite{liebscher2018new,demetci2020gromov, cao2020manifold}, and network analysis \cite{hendrikson2016using,chowdhury2019gromov, chowdhury2019metric}. It is worth noting that the three distances $d_\mathcal{GH}$, $\dgws{p}$ and $\dgw{p}$ and their variants have been applied in topological data analysis \cite{chazal2009gromov,chazal2014persistence,blumberg2014robust,memoli2019quantitative,chowdhury2019gromov,blumberg2020stability,rolle2020stable} to establish stability results of invariants. As such, clarifying the properties of these distances, especially with respect to the structure of their associated geodesics, may help develop suitable concomitant statistical methods (see \cite{sturm2012space,chowdhury2020gromov}).
\subsection{Geodesic properties of $(\mathcal{M},d_\mathcal{GH})$, $\left(\mathcal{M}^w,\dgws{p}\right)$ and $\left(\mathcal{M}^w,\dgw{p}\right)$}
The fact that $\left(\mathcal{M}^w,\dgw{p}\right)$ is a geodesic space for each $p\in[1,\infty)$ was proved by K.T. Sturm in \cite[Theorem 3.1]{sturm2012space}.\footnote{Sturm proved the geodesic property for the collection $\mathcal{M}^w_p$ larger than $\mathcal{M}^w$ for each $p\in[1,\infty]$ which contains complete metric measure spaces with finite $\ell^p$-size, a generalization of the notion of diameter. However, the same technique applies to the case of $\mathcal{M}^w$ without any changes.} His proof is constructive in that a special type of geodesics, which in this paper we call \emph{straight-line $\dgw p$ geodesics}, was identified for any two given metric measure spaces (cf. \cite[Theorem 3.1]{sturm2012space}). Furthermore, Sturm provided a complete characterization of geodesics in $(\mathcal{M}^w,\dgw p)$ for each $p\in(1,\infty)$ by proving that \emph{every} geodesic in $\left(\mathcal{M}^w,\dgw{p}\right)$ is a straight-line $\dgw p$ geodesic.
The geodesic property of $(\mathcal{M},d_\mathcal{GH})$ was first proved in \cite{ivanov2016gromov} via the so called \emph{mid-point criterion} (cf. \Cref{thm:mid-pt-geo}) which did not yield explicit geodesics. Later in \cite{chowdhury2018explicit}, the fact that $(\mathcal{M},d_\mathcal{GH})$ is geodesic was reproved by identifying the \emph{straight-line $d_\mathcal{GH}$ geodesics} (cf. \Cref{thm:str-line-geo}) which are analogous to the straight-line $\dgw p$ geodesics constructed in \cite{sturm2012space}. In {\cite{sturm2020email} Sturm proved that $\left(\mathcal{M}^w,\dgws p\right)$ (for each $p\in[1,\infty)$) is geodesic by constructing what we call the \emph{straight-line $\dgws p$ geodesic} between any two metric measure spaces, which is a slight variant of the straight-line $\dgw p$ geodesic with respect to $\dgw p$ (cf. \Cref{thm:straight-line-gw-geo}).}
Unlike the case of $\left(\mathcal{M}^w,\dgw{p}\right)$, where the only geodesics are the straight-line $\dgw p$ geodesics, neither straight-line $d_\mathcal{GH}$ geodesics nor straight-line $\dgws p$ geodesics completely characterize geodesics in $(\mathcal{M},d_\mathcal{GH})$ and $\left(\mathcal{M}^w,\dgws p\right)$, respectively. Indeed, the authors of \cite{chowdhury2018explicit} discovered \emph{deviant} geodesics in $(\mathcal{M},d_\mathcal{GH})$, i.e., geodesics which are not straight-line $d_\mathcal{GH}$ geodesics. Following a strategy similar to the one used in \cite{chowdhury2018explicit}, we discover non-straight-line $\dgws p$ geodesics in $\left(\mathcal{M}^w,\dgws p\right)$ and present our construction in \Cref{app:d-b-geodesic} of this paper. This inspires us to seek a better understanding of geodesics in $(\mathcal{M},d_\mathcal{GH})$ and $\left(\mathcal{M}^w,\dgws{p}\right)$.
\subsection{Our results}
In this paper we elucidate characterization results for geodesics in $(\mathcal{M},d_\mathcal{GH})$ and $\left(\mathcal{M}^w,\dgws{p}\right)$. These characterizations accommodate both straight-line $d_\mathcal{GH}$ / $\dgws p$ geodesics and the above mentioned deviant geodesics.
\paragraph{Hausdorff-realizable Gromov-Hausdorff geodesics.} {In this paper, the terms ``Gromov-Hausdorff geodesics'' and ``$d_\mathcal{GH}$ geodesics'' are used interchangeably when referring to geodesics in $(\mathcal{M},d_\mathcal{GH})$.}
Given a metric space $X$, its Hausdorff hyperspace $\mathcal{H}\left(X\right)$ is the set of all nonempty bounded closed subsets of $X$ endowed with the Hausdorff distance $d_\mathcal{H}^X$ (cf. \Cref{def:dH}). Blaschke's compactness theorem (see for example \cite[Theorem 7.3.8]{burago2001course}) states that $\mathcal{H}\left(X\right)$ is compact whenever $X$ is compact. Furthermore, it is known that if $X$ is a compact geodesic space then so is $\mathcal{H}\left(X\right)$ \cite{bryant1970convexity,serra1998hausdorff}. We call a geodesic in the Hausdorff hyperspace of some metric space a \emph{Hausdorff geodesic}. Hausdorff geodesics are easier to study than Gromov-Hausdorff geodesics since the definition of $d_\mathcal{GH}$ relies on finding the infimum of Hausdorff distances over certain metric embeddings (cf. \Cref{def:dGH}). This inspires us to relate Gromov-Hausdorff geodesics with Hausdorff geodesics.
It was first observed and proved in \cite{ivanov2019hausdorff} that every straight-line $d_\mathcal{GH}$ geodesic can be realized as a Hausdorff geodesic with respect to some ambient metric space (cf. \Cref{prop:hausdorff-geodesic-straight-line}). We further show that under mild conditions, such a metric space construction is actually compact and thus the corresponding straight-line $d_\mathcal{GH}$ geodesic can be realized as a geodesic in the Hausdorff hyperspace of a certain compact metric space (cf. \Cref{prop:strline-hausdorff}). We call any such Gromov-Hausdorff geodesic \emph{Hausdorff-realizable}. It turns out that Hausdorff-realizability is a universal phenomenon:
\begin{restatable}{thm}{thmhreal}\label{thm:main-h-realizable}
Every Gromov-Hausdorff geodesic is Hausdorff-realizable.
\end{restatable}
\paragraph{Wasserstein-realizable Gromov-Wasserstein geodesics.} Given a metric space $X$, the $\ell^p$-\emph{Wasserstein hyperspace} $\mathcal{W}_p\left(X\right)$ is the set of all Borel probability measures on $X$ with finite moments, endowed with the \emph{$\ell^p$-Wasserstein distance} $d_{\mathcal{W},p}$ (cf. \Cref{def:p-w-dist}). Sturm's $L^p$-transportation distance $\dgws{p}$ is defined by replacing the Hausdorff distance term in the definition of $d_\mathcal{GH}$ with an $\ell^p$-Wasserstein distance term (cf. \Cref{def:dGW}). For the sake of symmetry of nomenclature, we call the $L^p$-transportation distance $\dgws{p}$ \emph{Sturm's $\ell^p$-Gromov-Wasserstein distance}. Since we only focus on $\dgws{p}$ instead of $\dgw{p}$ in this paper, we also simply call $\dgws{p}$ the $\ell^p$-Gromov-Wasserstein distance without risk of confusion. See also \Cref{fig:gh vs gw} for an illustration. {Then, in this paper, the terms ``$\ell^p$-Gromov-Wasserstein geodesics'' and ``$\dgws{p}$ geodesics'' are used interchangeably when referring to geodesics in $\left(\mathcal{M}^w,\dgws p\right)$.}
\begin{figure}[htb]
\centering \includegraphics[width=0.2\textwidth]{figure/ghvsgw.eps}
\caption{{\textbf{Nomenclature.} The Gromov-Hausdorff distance between two metric spaces is defined via infimizing Hausdorff distances on certain ambient spaces (cf. \Cref{def:dGH}). This procedure of infimizing some quantities over certain ambient spaces is called ``gromovization" in \cite{memoli2011gromov}, e.g., the Gromov-Hausdorff distance is the gromovization of the Hausdorff distance. In this way, Sturm's $L^p$-transportation distance is the gromovization of the Wasserstein distance and hence we call it the $\ell^p$-Gromov-Wasserstein distance in our paper. The figure illustrates the respective gromovization processes for $d_\mathcal{GH}$ and $\dgws p$.}} \label{fig:gh vs gw}
\end{figure}
Inspired by \Cref{thm:main-h-realizable}, we analogously define the so-called $\ell^p$-Wasserstein-realizable geodesics in $\left( \mathcal{M}^w,\dgws p\right)$ (cf. \Cref{def:w-real-geo}). {For $p\in[1,\infty)$, denote by $\Gamma^p$ the collection of all $\dgws{p}$ geodesics. Let $d_{\infty,p}$ be the uniform metric on $\Gamma^p$, i.e.,
$$d_{\infty,p}\left(\gamma_1,\gamma_2\right)\coloneqq\sup_{t\in[0,1]}\dgws{p}\left(\gamma_1\left(t\right),\gamma_2\left(t\right)\right)$$
for any $\gamma_1,\gamma_2\in\Gamma^p$. Let $\Gamma_\mathcal{W}^p$ denote the subset of $\Gamma^p$ consisting of all Wasserstein-realizable geodesics. Then, by applying techniques and strategies similar to those used in proving \Cref{thm:main-h-realizable}, we obtain that $\Gamma^p_\mathcal{W}$ is a dense subset of $\Gamma^p$ (cf. \Cref{prop:main-w-real-dense}).
}
Though we conjecture that every $\dgws{p}$ geodesic is Wasserstein-realizable, we have not been able to establish the closedness of $\Gamma_\mathcal{W}^p$, and the conjecture remains open. We then turn to geodesics satisfying certain constraints and eventually, in \Cref{thm:bdd-geo-w-real}, we identify a certain type of $\dgws{p}$ geodesics, called \emph{Hausdorff-bounded} (cf. \Cref{def:hausdorff-bdd}), which turn out to be Wasserstein-realizable. For any two metric measure spaces $\gamma(s)=(X_s,d_s,\mu_s)$ and $\gamma(t)=(X_t,d_t,\mu_t)$ along a given Hausdorff-bounded geodesic $\gamma$, the Gromov-Hausdorff distance $d_\mathcal{GH}(X_s,X_t)$ between their underlying metric spaces is bounded above by $f\left(\dgws{p}(\gamma(s),\gamma(t))\right)$ for some suitable function $f$. Such control over Gromov-Hausdorff distances allows us to exploit the techniques used for proving \Cref{thm:main-h-realizable} in order to also establish Wasserstein-realizability.
\begin{restatable}{thm}{thmbddgeo}\label{thm:bdd-geo-w-real}
Given $p\in[1,\infty)$, every Hausdorff-bounded $\ell^p$-Gromov-Wasserstein geodesic is $\ell^p$-Wasserstein-realizable.
\end{restatable}
It turns out that both the straight-line $\dgws p$ geodesics introduced in \cite{sturm2020email} and the deviant geodesics constructed in \Cref{app:d-b-geodesic} are Hausdorff-bounded and thus Wasserstein-realizable (cf. \Cref{prop:strline-gw-h-bdd} and \Cref{rmk:deviant and branching GW}). Then, \Cref{thm:bdd-geo-w-real} indeed characterizes a large class of Gromov-Wasserstein geodesics as Wasserstein geodesics.
\paragraph{Dynamic (Gromov-)Hausdorff geodesics.} Since every Gromov-Hausdorff geodesic is a Hausdorff geodesic, we turn to studying properties of Hausdorff geodesics. The last part of the paper is devoted to drawing a connection between Hausdorff geodesics and Wasserstein geodesics. In particular, we establish a counterpart to the theory of displacement interpolation for Hausdorff geodesics.
For $p>1$, a geodesic $\gamma:[0,1]\rightarrow \mathcal{W}_p(X)$ of probability measures in $\mathcal{W}_p\left(X\right)$ is also called a \emph{displacement interpolation} since it is characterized by a probability measure on the set of all geodesics in $X$ itself (see \cite[Chapter 7]{villani2008optimal} and \Cref{thm:dis-int} for a precise statement). More precisely, let $\Gamma([0,1],X)$ denote the set of all geodesics in $X$, then there exists a probability measure $\Pi\in\mathcal{P}(\Gamma([0,1],X))$ such that $\gamma(t)=(e_t)_\#(\Pi)$ for each $t\in[0,1]$, where $e_t:\Gamma([0,1],X)\rightarrow X$ is the evaluation map at $t$ sending any function $\gamma:[0,1]\rightarrow X$ to $\gamma(t)$.
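In the simplest one-dimensional case, displacement interpolation can be made concrete: place uniform mass on the straight-line geodesics determined by the monotone coupling (which is optimal on the line) and push it forward with the evaluation map $e_t$; a brief sketch:

```python
def displacement_interpolation(xs, ys, t):
    """Push a uniform measure Pi on straight-line geodesics through e_t.

    xs, ys are the atoms of two uniform empirical measures on R; the monotone
    (sorted) coupling is optimal on the line. Returns the atoms of
    gamma(t) = (e_t)_# Pi, i.e., each geodesic evaluated at time t."""
    return [(1 - t) * x + t * y for x, y in zip(sorted(xs), sorted(ys))]

mu0, mu1 = [0.0, 1.0], [4.0, 5.0]
mid = displacement_interpolation(mu0, mu1, 0.5)
print(mid)  # [2.0, 3.0]: every atom has traveled half of its geodesic
```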
Now, we state an analogous characterization for Hausdorff geodesics. Let $X\in\mathcal{M}$ and $A,B\subseteq X$ be two closed subsets. Let $\rho\coloneqq d_\mathcal{H}^X\left(A,B\right)>0$. We define
$$\mathfrak{L}\left(A,B\right)\coloneqq\left\{\gamma:[0,1]\rightarrow X:\,\gamma(0)\in A,\,\gamma(1)\in B\text{ and }\forall s,t\in[0,1],\,d_X\left(\gamma(s),\gamma(t)\right)\leq|s-t|\,\rho\right\}. $$
{In other words, $\mathfrak{L}\left(A,B\right)$ is the set of all $\rho$-Lipschitz curves (cf. \Cref{sec:geo}) in $X$ starting in $A$ and ending in $B$.}
We call a closed subset $\mathfrak{D}\subseteq\mathfrak{L}\left(A,B\right)$ a \emph{Hausdorff displacement interpolation} between $A$ and $B$ if $e_0\left(\mathfrak{D}\right)=A$ and $e_1\left(\mathfrak{D}\right)=B$, where $e_0$ and $e_1$ are evaluation maps at $t=0$ and $t=1$, respectively. Then, we have the following characterization of Hausdorff geodesics via the Hausdorff displacement interpolation:
\begin{restatable}{thm}{thmdynhaus}\label{thm:main-dyn-hausdorff}
Given a compact metric space $X$, let $\gamma:[0,1]\rightarrow \mathcal{H}\left(X\right)$ be any map. We assume that $\rho\coloneqq d_\mathcal{H}^X\left(\gamma\left(0\right),\gamma\left(1\right)\right)>0$. Then, the following two statements are equivalent:
\begin{enumerate}
\item $\gamma$ is a Hausdorff geodesic;
\item there exists a \emph{nonempty} closed subset $\mathfrak{D}\subseteq\mathfrak{L}\left(\gamma\left(0\right),\gamma\left(1\right)\right) $ such that $\gamma\left(t\right)=e_t\left(\mathfrak{D}\right)$ for all $t\in[0,1]$.
\end{enumerate}
\end{restatable}
As in the theory of optimal transport where people accept that ``a geodesic in the space of laws is the law of a geodesic'' \cite[page 126]{villani2008optimal}, \emph{a geodesic in the space of closed subsets is the closed subset of a certain set of Lipschitz curves}. {It is tempting to ask whether one can require $\mathfrak{D}$ to contain only geodesics instead of Lipschitz curves. There are, however, counterexamples to this; see \Cref{rmk:couterex-haus-geo}.} As an application of \Cref{thm:main-dyn-hausdorff}, in \Cref{thm:exist infinite geod} we prove the existence of \emph{infinitely} many distinct Gromov-Hausdorff geodesics connecting any two compact metric spaces.
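As a small numerical illustration of the characterization above, take $X=[0,2]$, $A=\{0\}$ and $B=\{1,2\}$, so that $\rho=d_\mathcal{H}^X(A,B)=2$. The $\rho$-Lipschitz curves $t\mapsto t$ and $t\mapsto 2t$ form a Hausdorff displacement interpolation between $A$ and $B$, and the sketch below checks numerically that $e_t$ traces a Hausdorff geodesic:

```python
def d_H(A, B):
    """Hausdorff distance between finite nonempty subsets of the real line."""
    return max(max(min(abs(a - b) for b in B) for a in A),
               max(min(abs(a - b) for a in A) for b in B))

A, B = [0.0], [1.0, 2.0]
rho = d_H(A, B)  # 2.0, attained by the point 2 of B

# e_t of the two rho-Lipschitz curves t -> t*1 and t -> t*2 (both start at 0 in A)
gamma = lambda t: [t * b for b in B]

# Geodesic property: d_H(gamma(s), gamma(t)) = |s - t| * rho for all s, t
for s, t in [(0.0, 1.0), (0.25, 0.75), (0.1, 0.9)]:
    assert abs(d_H(gamma(s), gamma(t)) - abs(s - t) * rho) < 1e-12
print("Hausdorff geodesic check passed")
```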
We further extend our characterization of Hausdorff geodesics via displacement interpolation to Gromov-Hausdorff geodesics. Though defined via Hausdorff distances, there is a dual formula for $d_\mathcal{GH}$ which involves \emph{correspondences} between sets (cf. \Cref{sec:gh-detail}). This notion is akin to the notion of coupling between probability measures which are inherent to the Wasserstein distance (see \Cref{rmk:relation-dyn-coup-corr} for a more detailed comparison). We extend the notion of correspondence to the so-called \emph{dynamic optimal correspondence} (cf. \Cref{def:dyn-cor}), which is a concept analogous to \emph{dynamic optimal couplings} (cf. \Cref{def:dyn-coup}) in the theory of measure displacement interpolation. We say that a geodesic $\gamma$ in $\mathcal{M}$ is \emph{dynamic} if it possesses a dynamic optimal correspondence.
{The above facts are put together as follows: via \Cref{thm:main-h-realizable} we identify any given Gromov-Hausdorff geodesic with a Hausdorff geodesic; then we invoke \Cref{thm:main-dyn-hausdorff} to generate a Hausdorff displacement interpolation, which gives rise to a dynamic optimal correspondence. In the end, we obtain the following characterization of Gromov-Hausdorff geodesics:}
\begin{restatable}{thm}{thmdyngh}\label{thm:main-dyn-gh}
Every Gromov-Hausdorff geodesic $\gamma:[0,1]\rightarrow \mathcal{M}$ is dynamic.
\end{restatable}
\paragraph{Miscellaneous results.} In the course of proving the main results described above, we established various supporting results and provided novel proofs of some known results, both of which are of independent interest. Here we list some of these results:
\begin{enumerate}
\item In \Cref{thm:hyper-geod-iff} we prove that a compact metric space is geodesic \textit{if and only if} its Hausdorff hyperspace is geodesic.
\item In \Cref{ex:geod-sphere} we reconstruct the Gromov-Hausdorff geodesic defined in \cite{chowdhury2018explicit} between $\mathbb S^0$ and $\mathbb S^n$ via a Hausdorff geodesic construction.
\item In \Cref{thm:W-1-lip} we prove that the Wasserstein extensor $\mathcal{W}_p$ for each $p\in[1,\infty]$ is 1-Lipschitz.
\item On page \pageref{succinct proof} we provide a {novel} succinct proof of the fact that $(\mathcal{M},d_\mathcal{GH})$ is geodesic.
\item In \Cref{app:d-b-geodesic} we construct examples of deviant and branching $\dgws p$ geodesics.
\end{enumerate}
\subsection{Organization of the paper}
In \Cref{sec:background}, we collect necessary background material regarding real functions, geodesics, the Gromov-Hausdorff distance and the Gromov-Wasserstein distance. In \Cref{sec:metric-ext}, we further examine properties of metric extensions including Hausdorff hyperspaces, Wasserstein hyperspaces and the Urysohn universal metric space. In \Cref{sec:real-geo}, we study Hausdorff-realizable and Wasserstein-realizable geodesics and prove \Cref{thm:main-h-realizable} and \Cref{thm:bdd-geo-w-real}. In \Cref{sec:dyn-geo}, we study the Hausdorff displacement interpolation and dynamic Gromov-Hausdorff geodesics and we prove \Cref{thm:main-dyn-hausdorff} and \Cref{thm:main-dyn-gh}. In \Cref{sec:discussion}, we discuss some open problems and in the Appendix, we provide extra proofs and constructions.
\section{Background material}\label{sec:background}
In this section, we first collect some notions and basic results for real functions. Then, we provide the necessary background material regarding continuous curves and geodesics in metric spaces. We also introduce definitions and certain results concerning both the Gromov-Hausdorff distance and the Gromov-Wasserstein distance.
\subsection{Elementary properties of real functions}\label{sec:real function}
{In this section we collect some notions and basic results of real functions which we will use in \Cref{sec:W-reall-geo}.}
\begin{definition}[Proper functions]\label{def:proper-function}
For any function $f:I\rightarrow J$ where $I$ and $J$ are intervals in $\overline{\mathbb
R}\coloneqq[0,\infty]$ containing 0, we say $f$ is \emph{proper} if $f(0)=0$ and $f$ is continuous at $0$.
\end{definition}
For any increasing real function $f:[0,\infty)\rightarrow [0,\infty)$, we define its \emph{inverse} $f^{-1}:[0,\infty)\rightarrow[0,\infty]$ as follows:
$$f^{-1}(y)\coloneqq\inf\{x\geq 0:\,f(x)\geq y\},\quad\forall y\in [0,\infty), $$
where we adopt the convention that $\inf\emptyset=\infty$. Then, we have the following elementary properties of inverses of increasing functions:
\begin{proposition}[Some properties of inverses of increasing functions]\label{prop:increasing-prop}
Fix an increasing function $f:[0,\infty)\rightarrow[0,\infty)$. Then, we have the following:
\begin{enumerate}
\item $f^{-1}:[0,\infty)\rightarrow[0,\infty]$ is still an increasing function;
\item if $f$ is unbounded, i.e., $\lim_{x\rightarrow\infty}f(x)=\infty$, then $f^{-1}(y)<\infty$ for any $y\in [0,\infty)$;
\item for all $x,y\in[0,\infty)$, $x<f^{-1}(y)$ implies that $f(x)\leq y$;
\item if $f$ is \emph{strictly} increasing, then for $x,y\in[0,\infty)$, $f(x)\leq y$ implies that $x\leq f^{-1}(y)$;
\item if $f$ is proper, then for any $y>0$, $f^{-1}(y)>0$ and $f^{-1}(0)=0$. If $f$ is moreover \emph{strictly} increasing, then $f^{-1}$ is continuous at $0$ and in particular $f^{-1}$ is proper.
\end{enumerate}
\end{proposition}
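The generalized inverse above can be probed numerically. The following sketch (illustrative only; the bisection bounds \texttt{hi} and \texttt{tol} are arbitrary choices of ours, not part of the text) approximates $f^{-1}(y)=\inf\{x\geq 0:\,f(x)\geq y\}$ for an increasing $f$, including the convention $\inf\emptyset=\infty$ when $f$ is bounded.

```python
import math

def generalized_inverse(f, y, hi=1e6, tol=1e-9):
    """Approximate f^{-1}(y) = inf{x >= 0 : f(x) >= y} for an increasing f.

    Since {x : f(x) >= y} is an up-set, bisection on [0, hi] converges to
    its infimum; if f stays below y on [0, hi] we return infinity,
    mirroring the convention inf(emptyset) = infinity."""
    if f(0.0) >= y:
        return 0.0
    if f(hi) < y:
        return math.inf
    lo_x, hi_x = 0.0, hi            # invariant: f(lo_x) < y <= f(hi_x)
    while hi_x - lo_x > tol:
        mid = (lo_x + hi_x) / 2
        if f(mid) >= y:
            hi_x = mid
        else:
            lo_x = mid
    return hi_x
```

For the (non-strictly) increasing step function $f=\mathbf 1_{[1,\infty)}$ one gets $f^{-1}(0.5)=1$ even though $f$ never takes the value $0.5$, consistent with the definition via an infimum.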
\subsection{Curves and geodesics in metric spaces}\label{sec:geo}
A \emph{metric space} $(X,d_X)$ is a pair such that $X$ is a set and $d_X:X\times X\rightarrow\mathbb R_{\geq 0}$ is a function satisfying the following conditions:
\begin{enumerate}
\item for any $x,x'\in X$, $d_X(x,x')\geq 0$ and the equality holds if and only if $x=x'$;
\item for any $x,x'\in X$, $d_X(x,x')=d_X(x',x)$;
\item for any $x,x',x''\in X$, $d_X(x,x')\leq d_X(x,x'')+d_X(x'',x')$.
\end{enumerate}
We often abbreviate $(X,d_X)$ to $X$ to represent a metric space.
A map $\varphi:X\rightarrow Y$ between metric spaces is called an \textit{isometric embedding}, usually denoted by $\varphi:X\hookrightarrow Y$, if for each $x,x'\in X$
$$d_Y(\varphi(x),\varphi(x'))=d_X(x,x'). $$
We say a metric space $X$ is \emph{isometric} to another metric space $Y$ if there exists a surjective isometric embedding $\varphi:X\hookrightarrow Y$. When $X$ is isometric to $Y$, we write $X\cong Y$.
Given a metric space $X$, a \textit{curve} in $X$ is any continuous map $\gamma:[0,1]\rightarrow X$. For $C>0$, a $C$-Lipschitz curve is any curve $\gamma$ such that $d_X(\gamma(s),\gamma(t))\leq C\cdot |s-t|$ for $s,t\in[0,1]$. One important result that we use in the sequel is the following variant of the Arzel\`a-Ascoli theorem (compare with the version given in \cite[Theorem 2.5.14]{burago2001course}). We omit the proof here since it is essentially the same as the one for \cite[Theorem 2.5.14]{burago2001course} and it is also a direct consequence of a more general statement which we prove later (cf. \Cref{thm:general-AA}).
\begin{theorem}[Arzel\`a-Ascoli theorem]\label{thm:AA}
Let $X$ be a compact metric space and let $\{\gamma_i:[0,1]\rightarrow X\}_{i=0}^\infty$ be a sequence of $C$-Lipschitz curves for a fixed $C>0$, i.e., $d_X\left(\gamma_i\left(s\right),\gamma_i\left(t\right)\right)\leq C\cdot|s-t|$ for any $s,t\in[0,1]$ and $i=0,1,\ldots$. Then, there is a uniformly convergent subsequence of $\{\gamma_i\}_{i=0}^\infty$ with a $C$-Lipschitz limit $\gamma:[0,1]\rightarrow X$.
\end{theorem}
There is one special type of Lipschitz curve called a geodesic:
\begin{definition}[Geodesics]
A curve $\gamma:[0,1]\rightarrow X$ is called a \emph{geodesic} if for any $s,t\in[0,1]$ one has
$d_X\left(\gamma\left(s\right),\gamma\left(t\right)\right)\leq|t-s|\cdot d_X\left(\gamma\left(0\right),\gamma\left(1\right)\right),$ i.e., $\gamma$ is $d_X(\gamma(0),\gamma(1))$-Lipschitz.
\end{definition}
By the triangle inequality, the inequality in the definition is in fact an equality: $d_X\left(\gamma\left(s\right),\gamma\left(t\right)\right)=|t-s|\cdot d_X\left(\gamma\left(0\right),\gamma\left(1\right)\right)$ for any $s,t\in[0,1]$.
If for any $x_0,x_1\in X$, there exists a geodesic $\gamma:[0,1]\rightarrow X$ such that $\gamma(0)=x_0$ and $\gamma(1)=x_1$, we call $X$ a \emph{geodesic space}. The following is a useful criterion for checking whether a metric space is geodesic or not.
\begin{theorem}[Mid-point criterion, {\cite[Theorem 2.4.16]{burago2001course}}]\label{thm:mid-pt-geo}
A \emph{complete} metric space $X$ is geodesic if and only if for any $x,y\in X$, there exists $z\in X$ (which we call a mid-point between $x$ and $y$) such that
$$d_X(x,z)=d_X(y,z)=\frac{1}{2}d_X(x,y). $$
\end{theorem}
Concatenation is one way of constructing new geodesics from existing ones.
\begin{proposition}[Geodesic concatenation]\label{prop:geo-concatenate}
Let $X$ be a metric space and for $i=1,\ldots,n$ let $\gamma_i:[0,1]\rightarrow X$ be a geodesic. For $i=1,\ldots,n$, let $\rho_i\coloneqq d_X(\gamma_i(0),\gamma_i(1))$. Assume that $\gamma_i(1)=\gamma_{i+1}(0)$ for $i=1,\ldots,n-1$ and $\rho_i>0$ for all $i=1,\ldots,n$. Let $\rho\coloneqq\sum_{i=1}^n\rho_i. $
Then, the curve $\gamma:[0,1]\rightarrow X$ defined as follows is a $\rho$-Lipschitz curve:
$$\gamma(t)\coloneqq\begin{cases}
\gamma_1\left(\frac{\rho}{\rho_1}t\right), & t\in\left[0,\frac{\rho_1}{\rho}\right]\\
\gamma_2\left(\frac{\rho}{\rho_2}\left( t-\frac{\rho_1}{\rho}\right)\rc, & t\in \left(\frac{\rho_1}{\rho},\frac{\rho_1+\rho_2}{\rho}\right]\\
\cdots,&\cdots\\
\gamma_n\left(\frac{\rho}{\rho_n}\left( t-\sum_{i=1}^{n-1}\frac{\rho_i}{\rho}\right)\rc, & t\in \left(\sum_{i=1}^{n-1}\frac{\rho_i}{\rho},1\right]
\end{cases}$$
In particular, if $\rho=d_X(\gamma_1(0),\gamma_n(1))$, then $\gamma$ is a geodesic.
\end{proposition}
\begin{proof}
Given any $s,t\in[0,1]$, there are two cases to consider.
\begin{enumerate}
\item There exists $k$ such that $s,t\in\left[\sum_{i=1}^{k-1}\frac{\rho_i}{\rho},\sum_{i=1}^{k}\frac{\rho_i}{\rho}\right]$. Then,
\begin{align*}
d_X(\gamma(s),\gamma(t))&=d_X\left(\gamma_k\left(\frac{\rho}{\rho_k}\left( s-\sum_{i=1}^{k-1}\frac{\rho_i}{\rho}\right)\rc,\gamma_k\left(\frac{\rho}{\rho_k}\left( t-\sum_{i=1}^{k-1}\frac{\rho_i}{\rho}\right)\rc\right)\\
&=\left|\frac{\rho}{\rho_k}\left( s-\sum_{i=1}^{k-1}\frac{\rho_i}{\rho}\right)-\frac{\rho}{\rho_k}\left( t-\sum_{i=1}^{k-1}\frac{\rho_i}{\rho}\right)\right| \rho_k\\
&=|s-t|\rho.
\end{align*}
\item There exist $k,l>0$ such that $s\in\left[\sum_{i=1}^{k-1}\frac{\rho_i}{\rho},\sum_{i=1}^{k}\frac{\rho_i}{\rho}\right]$ and $t\in\left[\sum_{i=1}^{k+l-1}\frac{\rho_i}{\rho},\sum_{i=1}^{k+l}\frac{\rho_i}{\rho}\right]$. Then, by case 1, we have that
$$d_X\left(\gamma(s),\gamma\left(\sum_{i=1}^{k}\frac{\rho_i}{\rho}\right)\rc=\left|s-\sum_{i=1}^{k}\frac{\rho_i}{\rho}\right|\rho, $$
and
$$d_X\left(\gamma(t),\gamma\left(\sum_{i=1}^{k+l-1}\frac{\rho_i}{\rho}\right)\rc=\left|t-\sum_{i=1}^{k+l-1}\frac{\rho_i}{\rho}\right|\rho. $$
Then, by the triangle inequality,
\begin{align*}
d_X(\gamma(s),\gamma(t))&\leq d_X\left(\gamma(s),\gamma\left(\sum_{i=1}^{k}\frac{\rho_i}{\rho}\right)\rc+ d_X\left(\gamma(t),\gamma\left(\sum_{i=1}^{k+l-1}\frac{\rho_i}{\rho}\right)\rc\\
&+\sum_{j=0}^{l-2} d_X\left(\gamma\left(\sum_{i=1}^{k+j}\frac{\rho_i}{\rho}\right),\gamma\left(\sum_{i=1}^{k+j+1}\frac{\rho_i}{\rho}\right)\rc\\
&=\left(\left|s-\sum_{i=1}^{k}\frac{\rho_i}{\rho}\right|+\left|t-\sum_{i=1}^{k+l-1}\frac{\rho_i}{\rho}\right|+ \sum_{j=0}^{l-2}\frac{\rho_{k+j+1}}{\rho}\right) \rho\\
&=|t-s|\rho.
\end{align*}
\end{enumerate}
Therefore, $\gamma$ is a $\rho$-Lipschitz curve.
\end{proof}
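The reparametrization in the proposition can be checked numerically. Below is a minimal sketch (the function names are ours, and we use curves on the real line, which is a geodesic space) of the piecewise formula above.

```python
def concatenate(geodesics, lengths):
    """Concatenate unit-interval curves gamma_i with rho_i = lengths[i] > 0
    (endpoints assumed to match) into one rho-Lipschitz curve on [0, 1],
    using the piecewise reparametrization of the proposition."""
    rho = sum(lengths)
    breaks = [sum(lengths[:i]) / rho for i in range(len(lengths) + 1)]
    def gamma(t):
        for i in range(len(lengths)):
            # pick the first segment whose parameter interval contains t
            if t <= breaks[i + 1] or i == len(lengths) - 1:
                return geodesics[i]((t - breaks[i]) * rho / lengths[i])
    return gamma

# Two colinear segments on the real line: [0, 1] followed by [1, 3].
g = concatenate([lambda t: t, lambda t: 1 + 2 * t], [1.0, 2.0])
```

Here $\rho=3$ equals the distance between the endpoints $0$ and $3$, so $g(t)=3t$ traverses $[0,3]$ at constant speed and is a geodesic, as the proposition asserts.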
\subsection{Gromov-Hausdorff distance}\label{sec:gh-detail}
Given a metric space $X$, there is a well-known notion of distance between closed subsets of $X$: the \emph{Hausdorff distance}.
\begin{definition}[Hausdorff distance]\label{def:dH}
For nonempty closed subsets $A,B\subseteq X$, the Hausdorff distance $d_\mathcal{H}^X$ between them is defined by
$$d_\mathcal{H}^X\left(A,B\right)\coloneqq\inf\{r:\,A\subseteq B^r,\,B\subseteq A^r\}, $$
where $A^r\coloneqq\{x\in X:\,d_X\left(x,A\right)\leq r\}$ is called the $r$-thickening of $A$.
\end{definition}
\begin{remark}\label{rmk:alternative dH}
It is easy to see that $d_\mathcal{H}^X\left(A,B\right)=\max\left(\sup_{x\in A}\inf_{y\in B} d_X(x,y),\sup_{y\in B}\inf_{x\in A} d_X(x,y)\right)$ (cf. \cite[Exercise 7.3.2]{burago2001course}). This formula is also sometimes given as the definition of the Hausdorff distance.
\end{remark}
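For finite subsets the formula in the remark is directly computable; the sketch below (restricted to finite point sets, an assumption we impose for computability) implements the max of the two directed sup-inf quantities.

```python
def hausdorff(A, B, d):
    """Hausdorff distance between finite nonempty sets A and B in a metric
    space with distance function d, via the sup-inf formula of the remark."""
    dir_AB = max(min(d(a, b) for b in B) for a in A)  # how far A sticks out of B
    dir_BA = max(min(d(a, b) for a in A) for b in B)  # how far B sticks out of A
    return max(dir_AB, dir_BA)
```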
\begin{lemma}[Hausdorff distance under isometric embedding]\label{lm:dH under embedding}
Let $\varphi:X\hookrightarrow Y$ be an isometric embedding of two compact metric spaces. Let $A,B$ be nonempty closed subsets of $X$. Then, we have that
\[d_\mathcal{H}^X(A,B)=d_\mathcal{H}^Y(\varphi(A),\varphi(B)).\]
\end{lemma}
\begin{proof}
Since $\varphi$ is an isometric embedding, by \Cref{rmk:alternative dH} we have that
\begin{align*}
d_\mathcal{H}^X\left(A,B\right)&=\max\left(\sup_{x\in A}\inf_{y\in B} d_X(x,y),\sup_{y\in B}\inf_{x\in A} d_X(x,y)\right) \\
&=\max\left(\sup_{x\in A}\inf_{y\in B} d_Y(\varphi(x),\varphi(y)),\sup_{y\in B}\inf_{x\in A} d_Y(\varphi(x),\varphi(y))\right)\\
&=\max\left(\sup_{x\in \varphi(A)}\inf_{y\in \varphi(B)} d_Y(x,y),\sup_{y\in \varphi(B)}\inf_{x\in\varphi(A)} d_Y(x,y)\right)\\
&=d_\mathcal{H}^Y(\varphi(A),\varphi(B)).
\end{align*}
\end{proof}
Now we recall the definition of the Gromov-Hausdorff distance defined in \Cref{eq:dgh} as follows:
\begin{definition}[Gromov-Hausdorff distance]\label{def:dGH}
Given two metric spaces $X$ and $Y$, the Gromov-Hausdorff distance between them is defined by
$$d_\mathcal{GH}\left(X,Y\right)\coloneqq\inf_{Z}d_\mathcal{H}^Z\left(\varphi_X\left(X\right),\varphi_Y\left(Y\right)\right), $$
where the infimum is taken over all metric spaces $Z$ and isometric embeddings $\varphi_X:X\hookrightarrow Z$ and $\varphi_Y:Y\hookrightarrow Z$.
\end{definition}
If $X$ and $Y$ are compact, then the infimum can be restricted to only compact metric spaces $Z$.
For two compact metric spaces $X$ and $Y$, $d_\mathcal{GH}(X,Y)=0$ if and only if $X\cong Y$. Recall that $\mathcal{M}$ denotes the set of all isometry classes of compact metric spaces. Then, $\left(\mathcal{M},d_\mathcal{GH}\right)$ is a metric space. Moreover, $\left(\mathcal{M},d_\mathcal{GH}\right)$ is a Polish\footnote{A metric space is Polish if it is complete and separable.} space; see \cite[Theorem 7.3.30]{burago2001course} and \cite[Propositions 42 and 43]{petersen2006riemannian} for more details.
\medskip
\noindent\textbf{Note}: although $\mathcal{M}$ is the set of isometry classes, by a slight abuse of notation, we write $X\in \mathcal{M}$ to refer to an individual compact metric space $X$ instead of its isometry class.
\medskip
One important description of $d_\mathcal{GH}$ is the following duality formula (cf. \Cref{thm:dgh-dual}) via correspondences between sets \cite[Chapter 7]{burago2001course}. Given $\left(X,d_X\right),\left(Y,d_Y\right)\in\mathcal{M}$, define $\mathcal{R}\left(X,Y\right)$ as the set of all $R\subseteq X\times Y$ such that $\pi_X(R)=X$ and $\pi_Y(R)=Y$ where $\pi_X:X\times Y\rightarrow X$ and $\pi_Y:X\times Y\rightarrow Y$ are canonical projections. We call each $R\in\mathcal{R}\left(X,Y\right)$ a \emph{correspondence} between $X$ and $Y$. For a correspondence $R$, we define its \emph{distortion} with respect to $d_X$ and $d_Y$ by
$$\mathrm{dis}\left(R\right)\coloneqq\sup_{\left(x,y\right),\left(x',y'\right)\in R}|d_X\left(x,x'\right)-d_Y\left(y,y'\right)|. $$
\begin{theorem}\label{thm:dgh-dual}
For any $X,Y\in\mathcal{M}$, we have
$$d_\mathcal{GH}\left(X,Y\right)=\frac{1}{2}\inf_{R\in\mathcal{R}\left(X,Y\right)}\mathrm{dis}\left(R\right). $$
\end{theorem}
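The duality formula can be evaluated exactly for very small finite metric spaces by enumerating all correspondences. The brute-force sketch below (our own illustration; the enumeration is exponential in $|X|\cdot|Y|$, so it is feasible only for tiny spaces) takes distance matrices as input.

```python
def distortion(R, dX, dY):
    """Distortion of a correspondence R (a list of index pairs) with
    respect to the distance matrices dX and dY."""
    return max(abs(dX[x][xp] - dY[y][yp]) for (x, y) in R for (xp, yp) in R)

def gh_bruteforce(dX, dY):
    """d_GH via the duality formula: one half of the minimum distortion
    over all subsets of X x Y whose projections are surjective."""
    n, m = len(dX), len(dY)
    pairs = [(i, j) for i in range(n) for j in range(m)]
    best = float("inf")
    for mask in range(1, 1 << len(pairs)):
        R = [pairs[k] for k in range(len(pairs)) if mask >> k & 1]
        if {i for i, _ in R} == set(range(n)) and {j for _, j in R} == set(range(m)):
            best = min(best, distortion(R, dX, dY))
    return best / 2
```

For instance, for a two-point space of diameter $2$ and a one-point space, the unique full correspondence has distortion $2$, recovering the classical value $d_\mathcal{GH}(X,\ast)=\mathrm{diam}(X)/2=1$.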
We let $\mathcal{R}^\mathrm{opt}\left(X,Y\right)$ denote the set of all correspondences such that the equality in \Cref{thm:dgh-dual} holds. It is proved in \cite[Proposition 1.1]{chowdhury2018explicit} that $\mathcal{R}^\mathrm{opt}\left(X,Y\right)\neq \emptyset$ and there exists an $R\in\mathcal{R}^\mathrm{opt}\left(X,Y\right)$ which is a \emph{compact} subset of $\left(X\times Y,\max\left(d_X,d_Y\right)\right)$. A direct consequence of this fact is the following result:
\begin{lemma}\label{lm:dgh_hausdorff-realizable}
If $X$ and $Y$ are compact, then there exists a compact metric space $Z$ and isometric embeddings $\varphi_X:X\hookrightarrow Z$ and $\varphi_Y:Y\hookrightarrow Z$ such that
$$d_\mathcal{GH}\left(X,Y\right)= d_\mathcal{H}^Z\left(\varphi_X\left(X\right),\varphi_Y\left(Y\right)\right). $$
\end{lemma}
\begin{proof}
Let $R\in\mathcal{R}^\mathrm{opt}\left(X,Y\right)$ and $Z\coloneqq X\cup Y$. Let $d_Z:Z\times Z\rightarrow\mathbb{R}$ be such that $d_Z|_{X\times X}=d_X$, $d_Z|_{Y\times Y}=d_Y$ and for $x\in X$ and $y\in Y$
$$d_Z\left(x,y\right)\coloneqq\inf_{\left(x',y'\right)\in R}\left(d_X\left(x,x'\right)+d_Y\left(y,y'\right)+\frac{1}{2}\mathrm{dis}\left(R\right)\right).$$
It is proved in \cite[Lemma 2.8]{memoli2018sketching} that $\left(Z,d_Z\right)$ is a metric space and $d_\mathcal{H}^Z\left(X,Y\right)=d_\mathcal{GH}\left(X,Y\right).$ $\left(Z,d_Z\right)$ is obviously compact since any sequence $\{z_i\}_{i=0}^\infty\subseteq Z$ must contain either a convergent subsequence in $X$ or a convergent subsequence in $Y$.
\end{proof}
\paragraph{Geodesics.} The following result is proved in \cite[Theorem 1]{ivanov2016gromov} using the mid-point criterion (cf. \Cref{thm:mid-pt-geo}):
\begin{restatable}{theorem}{thmgeoGH}\label{thm:GH-geo}
$\left(\mathcal{M},d_\mathcal{GH} \right)$ is a geodesic metric space.
\end{restatable}
In {\cite[Theorem 1.2]{chowdhury2018explicit}}, the authors proved the existence of optimal correspondences, which they used to give an explicit construction of Gromov-Hausdorff geodesics and, as a consequence, provided an alternative proof of \Cref{thm:GH-geo}:
\begin{theorem}[Straight-line $d_\mathcal{GH}$ geodesic \cite{chowdhury2018explicit}]\label{thm:str-line-geo}
For $X,Y\in\mathcal{M}$ and any $R\in\mathcal{R}^\mathrm{opt}\left(X,Y\right)$, the curve $\gamma_R:[0,1]\rightarrow\mathcal{M}$ defined as follows is a geodesic:
$$\gamma_R\left(0\right)=\left(X,d_X\right),\gamma_R\left(1\right)=\left(Y,d_Y\right)\text{ and }\gamma_R\left(t\right)=\left(R,d_{R_t}\right)\text{ for }t\in\left(0,1\right), $$
where $d_{R_t}\left(\left(x,y\right),\left(x',y'\right)\right)\coloneqq\left(1-t\right)\,d_X\left(x,x'\right)+t\,d_Y\left(y,y'\right).$
\end{theorem}
We will henceforth use the notation: $R_t\coloneqq \gamma_R(t)$ for $t\in[0,1]$.
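The straight-line construction can be probed numerically: between any two times $s$ and $t$, the identity correspondence on $R$ has distortion exactly $|s-t|\cdot\mathrm{dis}(R)$ with respect to $d_{R_s}$ and $d_{R_t}$, which is one way to see that $\gamma_R$ moves at constant speed. A small sketch (distance matrices and helper names are ours):

```python
def interpolated_metric(R, dX, dY, t):
    """d_{R_t}((x,y),(x',y')) = (1-t) dX(x,x') + t dY(y,y') on the
    correspondence R, stored as a dict keyed by pairs of points of R."""
    return {(p, q): (1 - t) * dX[p[0]][q[0]] + t * dY[p[1]][q[1]]
            for p in R for q in R}

def dis_between_times(R, dX, dY, s, t):
    """Distortion of the identity correspondence on R between the spaces
    gamma_R(s) and gamma_R(t); this equals |s - t| * dis(R)."""
    ds = interpolated_metric(R, dX, dY, s)
    dt = interpolated_metric(R, dX, dY, t)
    return max(abs(ds[k] - dt[k]) for k in ds)
```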
\paragraph{Convergence.} Recall that for $\varepsilon>0$ and $X\in \mathcal{M}$, the covering number $\mathrm{cov}_\varepsilon\left(X\right)$ is the least number of $\varepsilon$-balls\footnote{An $\varepsilon$-ball is a closed ball in $X$ with radius $\varepsilon$.} required to cover the whole space $X$.
\begin{definition}[Uniformly totally bounded class]\label{def:CND}
We say a class $\mathcal{K}$ of compact metric spaces is \emph{uniformly totally bounded}, if there exist a bounded function $Q:\left(0,\infty\right)\rightarrow\mathbb{N}$ and $D>0$ such that each $X\in\mathcal{K}$ satisfies the following:
\begin{enumerate}
\item $\mathrm{diam}\left(X\right)\leq D$,
\item for any $\varepsilon>0$, $\mathrm{cov}_\varepsilon\left(X\right)\leq Q\left(\varepsilon\right)$.
\end{enumerate}
We denote by $\mathcal{K}\left(Q,D\right)$ the uniformly totally bounded class consisting of all $X\in\mathcal{M}$ satisfying the conditions above.
\end{definition}
\begin{theorem}[Gromov's pre-compactness theorem]\label{thm:pre-compact}
For any given bounded function $Q:\left(0,\infty\right)\rightarrow\mathbb{N}$ and $D>0$, the class $\mathcal{K}\left(Q,D\right)$ is pre-compact in $\left(\mathcal{M},d_\mathcal{GH}\right)$, i.e., any sequence in $\mathcal{K}\left(Q,D\right)$ has a convergent subsequence.
\end{theorem}
Interested readers are referred to \cite[Section 7.4.2]{burago2001course} for a proof.
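Membership in a uniformly totally bounded class $\mathcal{K}(Q,D)$ only requires an upper bound $\mathrm{cov}_\varepsilon(X)\leq Q(\varepsilon)$, and for a finite metric space such a bound can be certified by a greedy sweep. The sketch below (greedy, so it yields only an upper bound on the covering number, not its exact value) is our own illustration.

```python
def greedy_cover_bound(points, d, eps):
    """Greedy upper bound on the eps-covering number of a finite metric
    space: repeatedly take an uncovered point as the center of a closed
    eps-ball and discard everything that ball covers."""
    centers = []
    uncovered = list(points)
    while uncovered:
        c = uncovered[0]
        centers.append(c)
        uncovered = [x for x in uncovered if d(x, c) > eps]
    return len(centers)
```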
\subsection{Sturm's Gromov-Wasserstein distance}\label{sec:gw-detail}
\paragraph{Wasserstein distance.}Given a metric space $X$ and any $p\in[1,\infty]$, there exists a natural distance $d_{\mathcal{W},p}^X$, the \emph{$\ell^p$-Wasserstein distance}, comparing certain Borel probability measures on $X$.
\begin{definition}[$\ell^p$-Wasserstein distance]\label{def:p-w-dist}
For a metric space $X$ (not necessarily compact) and $p\in[1,\infty)$, let $\mathcal{P}_p\left(X\right)$ denote the collection of all Borel probability measures $\alpha$ on $X$ such that
$$\forall x_0\in X,\quad\int_{X}d_X^p\left(x,x_0\right)d\alpha\left(x\right)<\infty.$$
For $\alpha,\beta\in\mathcal{P}_p\left(X\right)$, the $\ell^p$-Wasserstein distance between $\alpha$ and $\beta$ is defined as follows:
$$d_{\mathcal{W},p}^X\left(\alpha,\beta\right)\coloneqq\inf_{\mu\in\mathcal{C}\left(\alpha,\beta\right)}\left(\int_{X\times X}d_X^p\left(x_1,x_2\right)\,d\mu\left(x_1,x_2\right)\right)^\frac{1}{p}, $$
where $\mathcal{C}\left(\alpha,\beta\right)$ denotes the set of measure couplings between $\alpha$ and $\beta$.
For $p=\infty$, let $\mathcal{P}_\infty\left(X\right)$ denote the collection of all Borel probability measures on $X$ with bounded support. We define the $\ell^\infty$-Wasserstein distance between $\alpha,\beta\in\mathcal{P}_\infty\left(X\right)$ by
$$d_{\mathcal{W},\infty}^X\left(\alpha,\beta\right)\coloneqq\inf_{\mu\in\mathcal{C}\left(\alpha,\beta\right)}\sup_{\left(x_1,x_2\right)\in\mathrm{supp}\left(\mu\right)}d_X\left(x_1,x_2\right). $$
\end{definition}
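On the real line the monotone coupling (matching order statistics) is optimal for every $p\in[1,\infty]$, so for uniform empirical measures on two samples of equal size the Wasserstein distance reduces to matching sorted samples. The sketch below adopts this equal-size, real-line setting purely for illustration.

```python
import math

def wasserstein_p_line(xs, ys, p):
    """l^p-Wasserstein distance between the uniform empirical measures of
    two equal-size samples on the real line; the monotone coupling given
    by sorting both samples is optimal for every p in [1, infinity]."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    if p == math.inf:
        return max(abs(a - b) for a, b in zip(xs, ys))
    n = len(xs)
    return (sum(abs(a - b) ** p for a, b in zip(xs, ys)) / n) ** (1 / p)
```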
\begin{lemma}[{\cite[Theorem 4.1]{villani2008optimal}}]
Fix $p\in[1,\infty)$. For a compact metric space $X$ and $\alpha,\beta\in\mathcal{P}_p(X)$, there exists $\mu\in\mathcal{C}(\alpha,\beta)$ such that
$$d_{\mathcal{W},p}^X\left(\alpha,\beta\right)=\left(\int_{X\times X}d_X^p\left(x_1,x_2\right)\,d\mu\left(x_1,x_2\right)\right)^\frac{1}{p}. $$
We call such $\mu$ an \emph{optimal transference plan} between $\alpha$ and $\beta$ (with respect to $d_{\mathcal{W},p}^X$) and denote by $\mathcal{C}^\mathrm{opt}_p(\alpha,\beta)$ the collection of all optimal transference plans.
\end{lemma}
\paragraph{Sturm's Gromov-Wasserstein distance.} A metric measure space is a triple $\mathcal{X}=(X,d_X,\mu_X)$ where $(X,d_X)$ is a metric space and $\mu_X$ is a Borel probability measure on $(X,d_X)$. We use script letters such as $\mathcal{X}$ to denote a metric measure space $\mathcal{X}=(X,d_X,\mu_X)$.
\begin{definition}[Isomorphism of metric measure spaces]\label{def:isomorphism}
Given two metric measure spaces $\mathcal{X}$ and $\mathcal{Y}$, we say that they are \emph{isomorphic}, if there exists an isometry $\varphi:X\rightarrow Y$ such that $\mu_Y=\varphi_\#\mu_X$, where $\varphi_\#$ denotes the pushforward map under $\varphi$. Whenever $\mathcal{X}$ is isomorphic to $\mathcal{Y}$, we write $\mathcal{X}\cong_w\mathcal{Y}$.
\end{definition}
Now, we provide the definition of the Gromov-Wasserstein distance given by Sturm in \cite{sturm2006geometry,sturm2012space}.
\begin{definition}[Gromov-Wasserstein distance]\label{def:dGW}
Let $p\in[1,\infty]$ and let $\mathcal{X}=(X,d_X,\mu_X)$ and $\mathcal{Y}=(Y,d_Y,\mu_Y)$ be two compact metric measure spaces with full support. The $\ell^p$-Gromov-Wasserstein distance $\dgws{p}$ between $\mathcal{X}$ and $\mathcal{Y}$ is defined by
$$\dgws{p}\left(\mathcal{X},\mathcal{Y}\right)\coloneqq\inf_{Z}\dW{p}^Z\left((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y\right), $$
where the infimum is taken over all metric spaces $Z$ and isometric embeddings $\varphi_X:X\hookrightarrow Z$ and $\varphi_Y:Y\hookrightarrow Z$.
\end{definition}
For notational simplicity, we sometimes identify $(\varphi_X)_\#\mu_X$ with $\mu_X$ and simply write $\dW{p}^Z\left(\mu_X,\mu_Y\right)$ to avoid carrying heavy notations of pushforward maps from isometric embeddings.
Let $p\in[1,\infty]$ and let $\mathcal{X}=(X,d_X,\mu_X)$ and $\mathcal{Y}=(Y,d_Y,\mu_Y)$ be two compact metric measure spaces with full support. Then, $\dgws{p}(\mathcal{X},\mathcal{Y})=0$ if and only if $\mathcal{X}$ and $\mathcal{Y}$ are isomorphic to each other. {The case when $p\in[1,\infty)$ was mentioned in \cite[Proposition 2.4]{sturm2012space}, whereas the case $p=\infty$ follows directly from \cite[Theorem 5.1 (a) and (g)]{memoli2011gromov}.} Let $\mathcal{M}^w$ denote the collection of all isomorphism classes of compact metric measure spaces with full support. Then, for each $p\in[1,\infty]$, $\left(\mathcal{M}^w,\dgws p\right)$ is a metric space.
\noindent\textbf{Note}: although $\mathcal{M}^w$ is a set of isomorphism classes, by a slight abuse of notation, we write $\mathcal{X}\in \mathcal{M}^w$ to refer to an individual metric measure space $\mathcal{X}$ instead of its isomorphism class.
The following is a useful alternative formulation of the Gromov-Wasserstein distance:
{\begin{remark}[Metric coupling formulation]\label{rmk:metric coupling}
Let $\mathcal{D}(d_X,d_Y)$ denote the set of all metrics $d:(X\sqcup Y)\times (X\sqcup Y)\rightarrow\mathbb R_{\geq 0}$ such that $d|_{X\times X}=d_X$ and $d|_{Y\times Y}=d_Y$. We call each element in $\mathcal{D}(d_X,d_Y)$ a \emph{metric coupling} between $d_X$ and $d_Y$. Then, it is easy to check that
$$\dgws{p}\left(\mathcal{X},\mathcal{Y}\right)=\inf_{d\in\mathcal{D}(d_X,d_Y)}\dW{p}^{(X\sqcup Y,d)}\left(\mu_X,\mu_Y\right). $$
\end{remark}}
{The following result is analogous to \Cref{lm:dgh_hausdorff-realizable} for the Gromov-Hausdorff distance.}
\begin{lemma}\label{lm:dgw_w-realizable}
Let $\mathcal{X}=(X,\mu_X),\mathcal{Y}=(Y,\mu_Y)\in\mathcal{M}^w$ and $p\in[1,\infty)$. Then, there exists a compact metric space $Z$ and isometric embeddings $\varphi_X:X\hookrightarrow Z$ and $\varphi_Y:Y\hookrightarrow Z$ such that
$$\dgws{p}\left(\mathcal{X},\mathcal{Y}\right)= \dW{p}^Z\left((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y\right). $$
\end{lemma}
\begin{proof}
It is proved in \cite[Proposition 2.4]{sturm2012space} that there exists a metric space $\hat{Z}$ and isometric embeddings ${\varphi}_X:X\hookrightarrow \hat{Z}$ and ${\varphi}_Y:Y\hookrightarrow \hat{Z}$ such that $\dgws{p}\left(\mathcal{X},\mathcal{Y}\right)= \dW{p}^{\hat{Z}}\left(({\varphi}_X)_\#\mu_X,({\varphi}_Y)_\#\mu_Y\right). $ Now let $Z\coloneqq{\varphi}_X(X)\cup{\varphi}_Y(Y)$. Then, $Z$ is compact. Since $\mathrm{im}(\varphi_X),\mathrm{im}(\varphi_Y)\subseteq Z$, both $\varphi_X$ and $\varphi_Y$ are actually isometric embeddings ${\varphi}_X:X\hookrightarrow Z$ and ${\varphi}_Y:Y\hookrightarrow Z$, respectively. Then, it is easy to see that
$$\dgws{p}\left(\mathcal{X},\mathcal{Y}\right)=\dW{p}^{\hat{Z}}\left((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y\right)= \dW{p}^Z\left((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y\right). $$
\end{proof}
\section{Metric extensions}\label{sec:metric-ext}
For any two metric spaces $X$ and $Y$, if there exists an isometric embedding $X\hookrightarrow Y$, then we call $Y$ a \emph{metric extension} of $X$. A \emph{metric extensor} is any map $\mathcal{F}$ taking a compact metric space $X$ to another metric space $\mathcal{F}\left(X\right)$ such that $\mathcal{F}(X)$ is a metric extension of $X$. In this section, we examine three standard models of metric extensions, namely, the Hausdorff hyperspace, the Wasserstein hyperspace and the Urysohn universal metric space. Properties of these metric extensions and their corresponding metric extensors are essential for proving our main results.
\subsection{Hausdorff hyperspaces}
Given a metric space $X$, the \emph{Hausdorff hyperspace} $\mathcal{H}\left(X\right)$ of $X$ is composed of all nonempty bounded closed subsets of $X$ and is endowed with the Hausdorff distance $d_\mathcal{H}^X$ as its metric.
\begin{theorem}\label{thm:hyper-complete}
If $X$ is a complete metric space, then $\left(\mathcal{H}\left(X\right),d_\mathcal{H}^X\right)$ is also complete.
\end{theorem}
\begin{theorem}[Blaschke's theorem]\label{thm:blaschke}
If $X$ is a compact metric space, then $\left(\mathcal{H}\left(X\right),d_\mathcal{H}^X\right)$ is also compact.
\end{theorem}
See \cite[Section 7.3]{burago2001course} for proofs of the above two results. Note that $\mathcal{H}$ mapping $X$ to $\mathcal{H}(X)$ is then a map from $\mathcal{M}$ to $\mathcal{M}$. The map sending $x\in X$ to the singleton $\{x\}\in \mathcal{H}\left(X\right)$ for each $x\in X$ is an isometric embedding from $X$ to $\mathcal{H}\left(X\right)$. This implies that $\mathcal{H}:\mathcal{M}\rightarrow\mathcal{M}$ is a metric extensor, which we call the \emph{Hausdorff extensor}. One interesting aspect of $\mathcal{H}$ as a map is its stability. In fact, it is proved in \cite{mikhailov2018hausdorff} that $\mathcal{H}$ is a 1-Lipschitz map:
\begin{theorem}[{\cite[Theorem 2]{mikhailov2018hausdorff}}]\label{thm:H-stb}
For any $X,Y\in\mathcal{M}$, we have
$$d_\mathcal{GH}\left(\mathcal{H}\left(X\right),\mathcal{H}\left(Y\right)\right)\leq d_\mathcal{GH}\left(X,Y\right). $$
\end{theorem}
Given an isometric embedding $\varphi:X\hookrightarrow Z$, for any closed subset $A\subseteq X$, the image $\varphi(A)$ is a closed subset of $Z$. This induces an isometric embedding $\varphi_*:(\mathcal{H}(X),d_\mathcal{H}^X)\hookrightarrow (\mathcal{H}(Z),d_\mathcal{H}^Z)$ mapping $A\in \mathcal{H}(X)$ to $\varphi(A)\in \mathcal{H}(Z)$. Then, \Cref{thm:H-stb} is a direct consequence of the following interesting result:
\begin{theorem}[{\cite[Theorem 1]{mikhailov2018hausdorff}}]\label{thm:H-equal}
Given two compact metric spaces $\left(X,d_X\right),\left(Y,d_Y\right)$, suppose there exist a metric space $Z$ (not necessarily compact) and isometric embeddings $\varphi_X:X\hookrightarrow Z$ and $\varphi_Y:Y\hookrightarrow Z$. Then, we have that
$$d_\mathcal{H}^{\mathcal{H}(Z)}\big((\varphi_X)_*(\mathcal{H}(X)),(\varphi_Y)_*(\mathcal{H}(Y))\big)=d_\mathcal{H}^Z(\varphi_X(X),\varphi_Y(Y)). $$
\end{theorem}
\paragraph{Geodesics in Hausdorff hyperspaces.} One interesting fact about $\mathcal{H}\left(X\right)$ is that it preserves the geodesic property of $X$:
\begin{theorem}\label{thm:hgeo}
Given $X\in \mathcal{M}$, if $X$ is geodesic, then so is $\mathcal{H}\left(X\right)$.
\end{theorem}
The above theorem was first proved in \cite{bryant1970convexity} using the mid-point criterion (cf. \Cref{thm:mid-pt-geo}) and was later reproved in \cite{serra1998hausdorff} via the following explicit construction:
\begin{theorem}\label{thm:haus-geo-cons}
Let $X\in\mathcal{M}$ be a geodesic space. Let $A,B$ be two closed subsets of $X$. Let $\rho\coloneqq d_\mathcal{H}^X\left(A,B\right)$. Then, $\gamma:[0,1]\rightarrow \mathcal{H}\left(X\right)$ defined by $\gamma\left(t\right)\coloneqq A^{t\,\rho}\cap B^{\left(1-t\right)\,\rho}$ is a Hausdorff geodesic connecting $A$ and $B$, where for any $r\geq 0$, $A^r\coloneqq\{x\in X:\,\exists a\in A \text{ such that }d_X\left(a,x\right)\leq r\}$.
\end{theorem}
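The construction of the theorem can be visualized on a grid discretization of an interval of the real line (a geodesic space); the discretization and the function name below are our own illustration, not part of the theorem.

```python
def thickening_curve(A, B, grid, t):
    """gamma(t) = intersection of the (t*rho)-thickening of A with the
    ((1-t)*rho)-thickening of B, evaluated on a finite grid approximating
    a geodesic subset of the real line; rho is the Hausdorff distance."""
    d = lambda x, S: min(abs(x - s) for s in S)
    # rho = d_H(A, B) via the directed sup-inf formula
    rho = max(max(d(a, B) for a in A), max(d(b, A) for b in B))
    return [x for x in grid if d(x, A) <= t * rho and d(x, B) <= (1 - t) * rho]
```

With $A=\{0\}$ and $B=\{1\}$ in $[0,1]$ one has $\rho=1$ and $\gamma(t)=\{t\}$: a single point sliding from $A$ to $B$ at unit speed, which is visibly a Hausdorff geodesic.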
Though the construction is correct, the proof of \Cref{thm:haus-geo-cons} given in \cite{serra1998hausdorff} is based on the following false claim:
\begin{claim}[False claim in the proof of {\cite[Theorem 1]{serra1998hausdorff}}]\label{claim:false claim}
Given $X\in\mathcal{M}$ and a map $\gamma:[0,1]\rightarrow X$, if $d_X\left(\gamma\left(0\right),\gamma\left(t\right)\right)=t\cdot d_X(\gamma(0),\gamma(1))$ and $d_X\left(\gamma\left(t\right),\gamma\left(1\right)\right)=(1-t)\cdot d_X(\gamma(0),\gamma(1))$ hold for all $t\in[0,1]$, then $\gamma$ is a geodesic.
\end{claim}
A simple counterexample goes as follows: let $Y\coloneqq[0,3]\times [0,1]\subseteq\mathbb R^2$ endowed with the usual Euclidean metric and let $X$ be the quotient space of $Y$ obtained by collapsing each of $\{0\}\times[0,1]$ and $\{3\}\times[0,1]$ to a point. Then, any ``reasonable'' curve connecting the two collapsed points will satisfy the condition in the claim while not necessarily being a geodesic. See \Cref{app:geo-hyp} for details and a correct proof of \Cref{thm:haus-geo-cons} which still follows the main idea in \cite{serra1998hausdorff}.
{It is worth noting that based on a new technique which we introduce later, i.e., the Hausdorff displacement interpolation, we are able to provide efficient alternative proofs for both \Cref{thm:hgeo} and \Cref{thm:haus-geo-cons} in \Cref{sec:hausdorff displacement interpolation}. In particular, regarding \Cref{thm:hgeo} we prove a stronger result which provides both necessary and sufficient conditions instead of just a one way implication.}
The following is a simple observation which we will use heavily in the sequel for transforming a Hausdorff geodesic to a Gromov-Hausdorff geodesic.
\begin{lemma}\label{lm:hgeo-to-dghgeo}
Let $X,Y,Z\in\mathcal{M}$. Suppose that $Z$ is geodesic and there exist $\varphi_X:X\hookrightarrow Z$ and $\varphi_Y:Y\hookrightarrow Z$ such that $d_\mathcal{H}^Z\left(\varphi_X\left(X\right),\varphi_Y\left(Y\right)\right)=d_\mathcal{GH}\left(X,Y\right)$. Then, any Hausdorff geodesic $\gamma:[0,1]\rightarrow \mathcal{H}\left(Z\right)$ such that $\gamma\left(0\right)=X$ and $\gamma\left(1\right)=Y$ is actually a Gromov-Hausdorff geodesic, i.e., for any $s,t\in[0,1]$
\[d_\mathcal{GH}\left(\gamma\left(s\right),\gamma\left(t\right)\right)=|s-t|\,d_\mathcal{GH}\left(X,Y\right). \]
\end{lemma}
\begin{proof}
We only need to show that $d_\mathcal{H}^Z\left(\gamma\left(s\right),\gamma\left(t\right)\right)=d_\mathcal{GH}\left(\gamma\left(s\right),\gamma\left(t\right)\right)$ for any $s,t\in[0,1]$. By \Cref{def:dGH}, we have $d_\mathcal{H}^Z\left(\gamma\left(s\right),\gamma\left(t\right)\right)\geq d_\mathcal{GH}\left(\gamma\left(s\right),\gamma\left(t\right)\right)$. Without loss of generality, we assume that $s\leq t$. Since $\gamma$ is a Hausdorff geodesic, we have
\begin{align*}
d_\mathcal{GH}\left(X,Y\right)&=d_\mathcal{H}^Z\left(\gamma\left(0\right),\gamma\left(1\right)\right)\\
&=d_\mathcal{H}^Z\left(\gamma\left(0\right),\gamma\left(s\right)\right)+d_\mathcal{H}^Z\left(\gamma\left(s\right),\gamma\left(t\right)\right)+d_\mathcal{H}^Z\left(\gamma\left(t\right),\gamma\left(1\right)\right)\\
&\geq d_\mathcal{GH}\left(\gamma\left(0\right),\gamma\left(s\right)\right)+d_\mathcal{GH}\left(\gamma\left(s\right),\gamma\left(t\right)\right)+d_\mathcal{GH}\left(\gamma\left(t\right),\gamma\left(1\right)\right)\\
&\geq d_\mathcal{GH}\left(\gamma\left(0\right),\gamma\left(1\right)\right)\\
&=d_\mathcal{GH}\left(X,Y\right).
\end{align*}
Therefore, every equality holds. In particular, $d_\mathcal{H}^Z\left(\gamma\left(s\right),\gamma\left(t\right)\right)=d_\mathcal{GH}\left(\gamma\left(s\right),\gamma\left(t\right)\right)$.
\end{proof}
\begin{example}[$d_\mathcal{GH}$ geodesic connecting $\mathbb S^0$ and $\mathbb S^n$]\label{ex:geod-sphere}
In \cite{chowdhury2018explicit} the authors constructed explicit Gromov-Hausdorff geodesics between the spheres $\mathbb S^0$ and $\mathbb S^n$ with the canonical geodesic distance, for each $n\in\mathbb N$. We recover their construction via the techniques introduced in this section as follows. Note that if we identify $\mathbb S^0$ with any pair of antipodal points, e.g., the north and south poles, in $\mathbb S^n$, then $d_\mathcal{H}^{\mathbb S^n}(\mathbb S^0,\mathbb S^n)=\frac{\pi}{2}$. By \cite[Proposition 1.2]{chowdhury2018explicit}, $d_\mathcal{GH}(\mathbb S^0,\mathbb S^n)=\frac{\pi}{2}=d_\mathcal{H}^{\mathbb S^n}(\mathbb S^0,\mathbb S^n).$ Then, by \Cref{thm:haus-geo-cons} and \Cref{lm:hgeo-to-dghgeo}, the Hausdorff geodesic $\gamma:[0,1]\rightarrow \mathcal{H}(\mathbb S^n)$ defined by $t\mapsto (\mathbb S^0)^{t\cdot\frac{\pi}{2}}\cap(\mathbb S^n)^{(1-t)\cdot\frac{\pi}{2}}=(\mathbb S^0)^{t\cdot\frac{\pi}{2}}$ for $t\in[0,1]$ is a Gromov-Hausdorff geodesic connecting $\mathbb S^0$ and $\mathbb S^n$. Note that $\gamma$ is exactly the same geodesic connecting $\mathbb S^0$ and $\mathbb S^n$ constructed in \cite[Proposition 1.3]{chowdhury2018explicit}. See also \Cref{fig:geod-circle} for an illustrative representation of $\gamma$.
\end{example}
\begin{figure}[htb]
\centering \includegraphics[width=0.3\textwidth]{figure/geoS.eps}
\caption{\textbf{Illustration of \Cref{ex:geod-sphere}.} In the figure we identify $\mathbb S^0$ with the north and south poles of $\mathbb S^1$ and illustrate $\gamma(t)$ for some $t\in(0,1)$ as the thickened subset of $\mathbb{S}^1$.} \label{fig:geod-circle}
\end{figure}
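As a numeric sanity check of the example above (not part of the argument: a sketch assuming NumPy, with $\mathbb S^1$ parametrized by angle and approximated by a finite grid; the helper names \texttt{hausdorff} and \texttt{gamma} are illustrative), one can verify that $d_\mathcal{H}^{\mathbb S^1}(\mathbb S^0,\mathbb S^1)=\pi/2$ and that the thickening curve $t\mapsto(\mathbb S^0)^{t\cdot\frac{\pi}{2}}$ moves at constant Hausdorff speed:

```python
import numpy as np

def circle_dist(a, b):
    # geodesic distance on the unit circle, points given as angles
    d = np.abs(a[:, None] - b[None, :]) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def hausdorff(A, B):
    # Hausdorff distance between two finite subsets of the circle
    D = circle_dist(A, B)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

circle = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)  # grid model of S^1
poles = np.array([0.0, np.pi])                              # a copy of S^0 in S^1

# d_H^{S^1}(S^0, S^1) = pi/2
assert abs(hausdorff(poles, circle) - np.pi / 2) < 1e-2

def gamma(t):
    # the (t * pi/2)-thickening of the poles: gamma(t) = (S^0)^{t pi/2}
    d = circle_dist(circle, poles).min(axis=1)
    return circle[d <= t * np.pi / 2 + 1e-9]

# the thickening curve is a Hausdorff geodesic from S^0 to S^1
for s, t in [(0.0, 0.5), (0.25, 0.75), (0.5, 1.0)]:
    assert abs(hausdorff(gamma(s), gamma(t)) - abs(s - t) * np.pi / 2) < 1e-2
```

Up to the grid resolution, both checks agree with the closed-form values above.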
\subsection{Wasserstein hyperspaces}\label{sec:W-geo}
Given a metric space $X$, let $\mathcal{W}_p\left(X\right)\coloneqq\left(\mathcal{P}_p\left(X\right),d_{\mathcal{W},p}\right)$ (cf. \Cref{def:p-w-dist}). We call $\mathcal{W}_p\left(X\right)$ the \emph{$\ell^p$-Wasserstein hyperspace} of $X$. Note that when $X$ is compact, $\mathcal{P}_p\left(X\right)=\mathcal{P}\left(X\right)$ for any $p\in[1,\infty]$, where $\mathcal{P}(X)$ denotes the collection of all Borel probability measures on $X$. The following two theorems are standard results about Wasserstein hyperspaces; see, for example, \cite[Section 6]{villani2008optimal} for proofs.
\begin{theorem}\label{thm:complete-w}
For $p\in[1,\infty)$, if $X$ is Polish, then $\mathcal{W}_p\left(X\right)$ is also Polish.
\end{theorem}
\begin{theorem}\label{thm:compact-w}
For $p\in[1,\infty)$, if $X$ is compact, then $\mathcal{W}_p\left(X\right)$ is also compact.
\end{theorem}
Note that the assignment sending each $X\in\mathcal{M}$ to $\mathcal{W}_p(X)\in\mathcal{M}$ defines a map from $\mathcal{M}$ to $\mathcal{M}$, analogously to the case of $\mathcal{H}$. Moreover, the map sending each $x\in X$ to the Dirac measure $\delta_x\in\mathcal{P}(X)$ is an isometric embedding of $X$ into $\mathcal{P}(X)$. Therefore, $\mathcal{W}_p:\mathcal{M}\rightarrow\mathcal{M}$ is a metric extensor, which we call the ($\ell^p$-)\emph{Wasserstein extensor} in the sequel.
Inspired by \Cref{thm:H-equal}, we establish the following result:
\begin{theorem}\label{thm:W-equal}
Given two compact metric spaces $\left(X,d_X\right)$ and $\left(Y,d_Y\right)$, suppose there exist a (not necessarily compact) metric space $\left(Z,d_Z\right)$ and isometric embeddings $\varphi_X:X\hookrightarrow Z$ and $\varphi_Y:Y\hookrightarrow Z$. Then, for $p\in[1,\infty]$, we have that both $(\varphi_X)_\#:\mathcal{W}_p(X)\rightarrow\mathcal{W}_p(Z)$ and $(\varphi_Y)_\#:\mathcal{W}_p(Y)\rightarrow\mathcal{W}_p(Z)$ are isometric embeddings. Moreover, we have that
\[d_\mathcal{H}^{\mathcal{W}_p\left(Z\right)}\big((\varphi_X)_\#\left( \mathcal{W}_p\left(X\right)\right),(\varphi_Y)_\#\left( \mathcal{W}_p\left(Y\right)\right)\big)= d_\mathcal{H}^Z\left(\varphi_X(X),\varphi_Y(Y)\right).\]
\end{theorem}
See \Cref{app:ot} for a proof. As a direct yet unexpected consequence, we obtain the following 1-Lipschitz property of the Wasserstein extensor $\mathcal{W}_p:\mathcal{M}\rightarrow\mathcal{M}$:
\begin{theorem}\label{thm:W-1-lip}
Given $X,Y\in\mathcal{M}$ and any $p\in[1,\infty]$, we have
$$d_\mathcal{GH}\left(\mathcal{W}_p\left(X\right),\mathcal{W}_p\left(Y\right)\right)\leq d_\mathcal{GH}\left(X,Y\right). $$
\end{theorem}
\begin{remark}[Comparison with related results]
In the literature, there are studies about the stability of $\mathcal{W}_2$ on $\mathcal{M}$. For example, {one can derive from \cite[Proposition 4.1]{lott2009ricci} that
$$d_\mathcal{GH}\left(\mathcal{W}_2\left(X\right),\mathcal{W}_2\left(Y\right)\right)\leq f_{XY}\left(d_\mathcal{GH}\left(X,Y\right)\right)$$
for all $X,Y\in\mathcal{M}$ and for some function $f_{XY}$ depending on the diameters of $X$ and $Y$. \Cref{thm:W-1-lip} is novel in that we not only proved stability of the Wasserstein extensor $\mathcal{W}_p$ for \emph{all} $p\in[1,\infty]$, but we also obtained the stronger result that $\mathcal{W}_p$ is 1-Lipschitz.}
\end{remark}
\paragraph{Geodesics in $\mathcal{W}_p\left(X\right)$.} We now state some results regarding geodesics in Wasserstein hyperspaces.
\begin{theorem}[{\cite[Corollary 7.22]{villani2008optimal}}]\label{thm:wp-geo}
If $X$ is a geodesic metric space, then $\mathcal{W}_p(X)$ is a geodesic metric space for all $p\in[1,\infty)$.
\end{theorem}
The case when $p=1$ is special in that $\mathcal{W}_1(X)$ is geodesic regardless of whether $X$ is itself geodesic:
\begin{theorem}\label{thm:W-geodesic}
For \emph{any} Polish metric space $X$, $\mathcal{W}_1\left(X\right)$ is geodesic.
\end{theorem}
{In fact, this theorem follows directly from the following explicit construction:
\begin{lemma}[{\cite[Theorem 5.1]{bottou2018geometrical}}]\label{lm:W-geodesic}
Let $X$ be a Polish metric space and let $\alpha,\beta\in\mathcal{P}_1\left(X\right)$. We define $\gamma:[0,1]\rightarrow\mathcal{P}(X)$ as follows: for each $t\in[0,1]$, let $\gamma(t)\coloneqq \left(1-t\right)\alpha+t\beta$. Then, for each $t\in[0,1]$, $\gamma(t)\in\mathcal{P}_1(X)$ and $\gamma$ is an $\ell^1$-Wasserstein geodesic connecting $\alpha$ and $\beta$. We call $\gamma$ the \emph{linear interpolation} between $\alpha$ and $\beta$.
\end{lemma}}
{\Cref{lm:W-geodesic} was first mentioned and proved in \cite{bottou2018geometrical} via Kantorovich duality. In \Cref{app:ot}, we provide an alternative proof which proceeds by calculating $\dW{1}^X(\gamma(s),\gamma(t))$ for all $s,t\in[0,1]$ via an explicit construction of an optimal coupling between $\gamma(s)$ and $\gamma(t)$.}
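On the real line, the conclusion of \Cref{lm:W-geodesic} can be checked numerically for discrete measures (a sketch, not part of the proof; it assumes SciPy, whose one-dimensional \texttt{wasserstein\_distance} routine computes $d_{\mathcal{W},1}$ from weighted samples, and the helper name \texttt{gamma} is illustrative):

```python
import numpy as np
from scipy.stats import wasserstein_distance

# alpha, beta: discrete probability measures supported on {0, 1, 3} in R
support = np.array([0.0, 1.0, 3.0])
alpha = np.array([0.5, 0.5, 0.0])   # uniform on {0, 1}
beta = np.array([0.0, 0.0, 1.0])    # Dirac mass at 3

def gamma(t):
    # linear interpolation (1 - t) * alpha + t * beta on the common support
    return (1 - t) * alpha + t * beta

# W_1(alpha, beta) = 0.5 * 3 + 0.5 * 2 = 2.5, computed on the reduced supports
w_ab = wasserstein_distance([0.0, 1.0], [3.0], [0.5, 0.5], [1.0])

# gamma is an l^1-Wasserstein geodesic:
# W_1(gamma(s), gamma(t)) = |s - t| * W_1(alpha, beta)
for s, t in [(0.2, 0.7), (0.1, 0.9), (0.25, 0.5)]:
    w_st = wasserstein_distance(support, support, gamma(s), gamma(t))
    assert abs(w_st - abs(s - t) * w_ab) < 1e-9
```

The interior interpolation times avoid zero weights; the check succeeds for any pair $s,t\in(0,1)$.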
\begin{remark}
For $p\neq 1$, a statement similar to that of \Cref{lm:W-geodesic} for $\mathcal{W}_p\left(X\right)$ is not necessarily true. For example, consider the {two-point space $X=\{0,1\}$ with interpoint distance $1$}. Then, for $p\in[1,\infty)$, one can easily verify that $\mathcal{W}_p\left(X\right)\cong\left([0,1],d^\frac{1}{p}\right)$ where $d$ denotes the Euclidean metric on $[0,1]$. In this case, $\mathcal{W}_p\left(X\right)$ is geodesic if and only if $p=1$.
\end{remark}
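The isometry $\mathcal{W}_p\left(X\right)\cong\left([0,1],d^{\frac{1}{p}}\right)$ claimed in this remark can be checked by brute force: on a two-point space a coupling has a single free parameter, over which one can minimize directly (a sketch assuming NumPy; the function name \texttt{wp\_two\_point} is illustrative):

```python
import numpy as np
from itertools import product

def wp_two_point(a, b, p):
    """l^p-Wasserstein distance between (1-a)delta_0 + a delta_1 and
    (1-b)delta_0 + b delta_1 on the two-point space {0, 1} with distance 1."""
    # couplings are parametrized by m = pi({(1,1)}) in [max(0, a+b-1), min(a, b)]
    lo, hi = max(0.0, a + b - 1.0), min(a, b)
    best = np.inf
    for m in np.linspace(lo, hi, 1001):
        cost_p = (a - m) + (b - m)   # off-diagonal mass; each unit costs 1^p = 1
        best = min(best, cost_p)
    return best ** (1.0 / p)

# W_p(mu_a, mu_b) = |a - b|^{1/p}: the snowflake ([0,1], d^{1/p})
for a, b, p in product([0.0, 0.3, 0.8], [0.1, 0.9], [1, 2, 4]):
    assert abs(wp_two_point(a, b, p) - abs(a - b) ** (1.0 / p)) < 1e-9
```

For $p=2$, for instance, the linear-interpolation midpoint of $\delta_0$ and $\delta_1$ sits at $\mathcal{W}_2$-distance $2^{-1/2}$ from each endpoint, which exceeds $\frac{1}{2}\,d_{\mathcal{W},2}(\delta_0,\delta_1)=\frac{1}{2}$; in fact no metric midpoint exists, consistent with the remark.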
The next lemma is a counterpart to \Cref{lm:hgeo-to-dghgeo} in the setting of metric measure spaces.
\begin{lemma}\label{lm:wgeo-to-dgwgeo}
Let $\mathcal{X}=(X,d_X,\mu_X),\mathcal{Y}=(Y,d_Y,\mu_Y)\in\mathcal{M}^w$ and let $Z\in\mathcal{M}$. Fix $p\in[1,\infty)$. Suppose there exist $\varphi_X:X\hookrightarrow Z$ and $\varphi_Y:Y\hookrightarrow Z$ such that
$$\dW{p}^Z\left((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y\right)=\dgws{p}\left(\mathcal{X},\mathcal{Y}\right).$$
Then, any $\ell^p$-Wasserstein geodesic $\gamma:[0,1]\rightarrow \mathcal{W}_p\left(Z\right)$ such that $\gamma\left(0\right)=(\varphi_X)_\#\mu_X$ and $\gamma\left(1\right)=(\varphi_Y)_\#\mu_Y$ is actually an $\ell^p$-Gromov-Wasserstein geodesic. More precisely, if for each $t\in[0,1]$ we let $X_t\coloneqq\mathrm{supp}(\gamma(t))\subseteq Z$ and denote by $\tilde{\gamma}(t)$ the metric measure space $\left(X_t,d_Z|_{X_t\times X_t},\gamma(t)\right)$, then $\tilde{\gamma}:[0,1]\rightarrow\mathcal{M}^w$ is a geodesic, i.e.,
$$\dgws{p}\left(\tilde{\gamma}\left(s\right),\tilde{\gamma}\left(t\right)\right)=|s-t|\cdot\dgws{p}\left(\mathcal{X},\mathcal{Y}\right) \quad\forall s,t\in[0,1].$$
\end{lemma}
The proof is essentially the same as the one for \Cref{lm:hgeo-to-dghgeo} and we omit details.
\paragraph{A new proof of \Cref{thm:GH-geo}.}
As mentioned in the introduction, $\left(\mathcal{M},d_\mathcal{GH}\right)$ is a geodesic metric space. {This was proved in \cite{ivanov2016gromov} via the mid-point criterion and in \cite{chowdhury2018explicit} via an explicit construction of geodesics}. We end this section by presenting a novel and succinct proof of this fact based on geodesic properties of $\mathcal{W}_1$ and $\mathcal{H}$.
\thmgeoGH*
\begin{proof}\label{succinct proof}
Given $X,Y\in\mathcal{M}$, let $\eta\coloneqq d_\mathcal{GH} \left(X,Y\right).$ Then, there exist $Z\in\mathcal{M}$ and isometric embeddings $\varphi_X:X\hookrightarrow Z$ and $\varphi_Y:Y\hookrightarrow Z$ such that $d_\mathcal{H}^Z\left(\varphi_X(X),\varphi_Y(Y)\right)=\eta$ (cf. \Cref{lm:dgh_hausdorff-realizable}). Without loss of generality, we assume that $Z$ is geodesic (otherwise we replace $Z$ with its extension $\mathcal{W}_1\left(Z\right)$, which is geodesic by \Cref{thm:W-geodesic}). Then, $\mathcal{H}\left(Z\right)$ is geodesic by \Cref{thm:hgeo}. Consequently, there exists a Hausdorff geodesic $\gamma:[0,1]\rightarrow \mathcal{H}\left(Z\right)$ such that $\gamma\left(0\right)=X$ and $\gamma\left(1\right)=Y$ (we regard $X$ and $Y$ as subsets of $Z$). Then, by \Cref{lm:hgeo-to-dghgeo}, one concludes that $\gamma$ is a geodesic in $\mathcal{M}$ connecting $X$ and $Y$. Therefore, $\mathcal{M}$ is itself geodesic.
\end{proof}
\subsection{Urysohn universal metric space}\label{sec:urysohn}
In this section we introduce the Urysohn universal metric space, a remarkable construction by Urysohn \cite{urysohn1927espace}. This metric space can be regarded as an ambient space into which every Polish metric space can be isometrically embedded. It has natural connections with both the Gromov-Hausdorff distance and the Gromov-Wasserstein distance, which we elucidate below; these connections play an important role in the proofs of our main theorems.
\begin{theorem}[Urysohn universal metric space]\label{thm:urysohn}
There exists a unique (up to isometry) Polish space $\left(\mathbb{U},d_{\mathbb{U}}\right)$, {which we call the \emph{Urysohn universal metric space}, satisfying the following two properties:}
\begin{enumerate}
\item (Universality) For any separable metric space $X$, there exists an isometric embedding $X\hookrightarrow \mathbb{U}$.
\item (Homogeneity) For any isometry $\varphi$ between two finite subsets $A,A'\subseteq\mathbb{U}$, there exists an isometry $\tilde{\varphi}:\mathbb{U}\rightarrow\mathbb{U}$ such that $\tilde{\varphi}|_A=\varphi$.
\end{enumerate}
\end{theorem}
The universality property of $\mathbb{U}$ makes the constant map taking each compact metric space $X$ to $\mathbb{U}$ a metric extensor.
The homogeneity property in \Cref{thm:urysohn} above can be generalized to compact metric spaces:
\begin{theorem}[\cite{huhunaivsvili1955property}]\label{thm:u-compact-homo}
Given two \emph{compact} subsets $A,A'\subseteq\mathbb{U}$ and an isometry $\varphi:A\rightarrow A'$, there exists an isometry $\tilde{\varphi}:\mathbb{U}\rightarrow\mathbb{U}$ such that $\tilde{\varphi}|_A=\varphi$.
\end{theorem}
\paragraph{Connection with the Gromov-Hausdorff distance.} For any $X,Y\in\mathcal{M}$, universality makes $\mathbb{U}$ a common ambient space for $X$ and $Y$, so we may consider the Hausdorff distance $d_\mathcal{H}^\mathbb U$ between isometric copies of $X$ and $Y$; this suggests a connection between $\mathbb U$ and $d_\mathcal{GH}$. The following relation between the Gromov-Hausdorff distance and the Urysohn universal metric space was pointed out in \cite{berestovskii1992manifolds,antonyan2020gromov}:
\begin{proposition}
For any $X,Y\in\mathcal{M}$, we have
$$d_\mathcal{GH}\left(X,Y\right)=\inf_{\varphi_X,\varphi_Y} d_\mathcal{H}^\mathbb{U}\left(\varphi_X\left(X\right),\varphi_Y\left(Y\right)\right), $$
where the infimum is taken over all isometric embeddings $\varphi_X:X\hookrightarrow \mathbb{U}$ and $\varphi_Y:Y\hookrightarrow \mathbb{U}$.
\end{proposition}
We strengthen this proposition via the following lemma, which shows that the infimum is in fact attained:
\begin{lemma}\label{lm:u-dgh}
For any $X,Y\in\mathcal{M}$, there exist isometric embeddings $\varphi_X:X\hookrightarrow \mathbb{U}$ and $\varphi_Y:Y\hookrightarrow \mathbb{U}$ such that
$$d_\mathcal{GH}\left(X,Y\right)=d_\mathcal{H}^\mathbb{U}\left(\varphi_X\left(X\right),\varphi_Y\left(Y\right)\right). $$
\end{lemma}
\begin{proof}
By \Cref{lm:dgh_hausdorff-realizable}, there exist $Z\in\mathcal{M}$ and isometric embeddings $\psi_X:X\hookrightarrow Z$ and $\psi_Y:Y\hookrightarrow Z$ such that $d_\mathcal{H}^Z\left(\psi_X\left(X\right),\psi_Y\left(Y\right)\right)=d_\mathcal{GH}\left(X,Y\right)$. Then, since $Z$ is compact, there exists an isometric embedding $\varphi:Z\hookrightarrow\mathbb{U}$ (cf. \Cref{thm:urysohn}). Let $\varphi_X=\varphi|_{\psi_X(X)}\circ\psi_X$ and $\varphi_Y=\varphi|_{\psi_Y(Y)}\circ\psi_Y$. Then, by \Cref{lm:dH under embedding} we have that
$$d_\mathcal{GH}\left(X,Y\right)= d_\mathcal{H}^Z\left(\psi_X\left(X\right),\psi_Y\left(Y\right)\right)=d_\mathcal{H}^{\varphi\left(Z\right)}\left(\varphi_X\left(X\right),\varphi_Y\left(Y\right)\right)=d_\mathcal{H}^\mathbb{U}\left(\varphi_X\left(X\right),\varphi_Y\left(Y\right)\right).$$
\end{proof}
Then, combining with \Cref{thm:u-compact-homo}, we derive the following lemma:
\begin{lemma}\label{lm:u-dgh-any}
For any $X,Y\in\mathcal{M}$, let $\varphi_X:X\hookrightarrow\mathbb{U}$ be an isometric embedding. Then, there exists an isometric embedding $\varphi_Y:Y\hookrightarrow \mathbb{U}$ such that
$$d_\mathcal{GH}\left(X,Y\right)=d_\mathcal{H}^\mathbb{U}\left(\varphi_X\left(X\right),\varphi_Y\left(Y\right)\right). $$
\end{lemma}
\begin{proof}
By \Cref{lm:u-dgh}, there exist isometric embeddings $\psi_X:X\hookrightarrow\mathbb{U}$ and $\psi_Y:Y\hookrightarrow\mathbb{U}$ such that $d_\mathcal{GH}\left(X,Y\right)=d_\mathcal{H}^\mathbb{U}\left(\psi_X\left(X\right),\psi_Y\left(Y\right)\right).$ Now, both $\varphi_X\left(X\right)$ and $\psi_X\left(X\right)$ are compact subsets of $\mathbb{U}$ and $\tau\coloneqq\varphi_X\circ\psi_X^{-1}:\psi_X\left(X\right)\rightarrow\varphi_X\left(X\right)$ is an isometry. By \Cref{thm:u-compact-homo}, there exists an isometry $\tilde{\tau}:\mathbb{U}\rightarrow\mathbb{U}$ such that $\tilde{\tau}|_{\psi_X\left(X\right)}=\tau$. Let $\varphi_Y\coloneqq\tilde{\tau}|_{\psi_Y(Y)}\circ\psi_Y:Y\rightarrow\mathbb{U}$. It is clear that $\varphi_Y$ is an isometric embedding and thus
$$d_\mathcal{H}^\mathbb{U}\left(\varphi_X\left(X\right),\varphi_Y\left(Y\right)\right)= d_\mathcal{H}^\mathbb{U}\left(\tilde{\tau}^{-1}\circ\varphi_X\left(X\right),\tilde{\tau}^{-1}\circ\varphi_Y\left(Y\right)\right)=d_\mathcal{H}^\mathbb{U}\left(\psi_X\left(X\right),\psi_Y\left(Y\right)\right)=d_\mathcal{GH}\left(X,Y\right).$$
\end{proof}
See \Cref{fig:u-dgh} for an illustration of the proof for \Cref{lm:u-dgh-any}.
\begin{figure}[htb]
\centering \includegraphics[width=0.8\textwidth]{figure/illustration-Urysohn.eps}
\caption{\textbf{Illustration of the proof of \Cref{lm:u-dgh-any}}} \label{fig:u-dgh}
\end{figure}
\Cref{lm:u-dgh-any} then leads us to the following key observation which is instrumental in proving \Cref{thm:main-h-realizable}.
\begin{lemma}
For any Gromov-Hausdorff geodesic $\gamma:[0,1]\rightarrow \mathcal{M}$, let $\rho\coloneqq d_\mathcal{GH}\left(\gamma\left(0\right),\gamma\left(1\right)\right)$. Then, for any finite sequence $0\leq t_0<t_1<\ldots<t_n\leq 1$, there exist isometric embeddings $\varphi_i:\gamma\left(t_i\right)\hookrightarrow \mathbb{U}$ for $i=0,\ldots,n$ such that
$$d_\mathcal{H}^\mathbb{U}\big(\varphi_i\left(\gamma\left(t_i\right)\right),\varphi_j\left(\gamma\left(t_j\right)\right)\big)=|t_i-t_j|\,\rho,\quad\forall0\leq i,j\leq n.$$
\end{lemma}
\begin{proof}
We prove the lemma by induction on $n$. When $n=0$, the statement holds true trivially. Now, suppose the statement holds for $n\geq 0$. Consider a sequence $0\leq t_0<t_1<\ldots<t_n<t_{n+1}\leq 1$. By the induction assumption, there exist $\varphi_i:\gamma\left(t_i\right)\hookrightarrow\mathbb{U}$ for $0\leq i\leq n$ such that
\[d_\mathcal{H}^\mathbb{U}\left(\varphi_i\left(\gamma\left(t_i\right)\right),\varphi_j\left(\gamma\left(t_j\right)\right)\right)=|t_i-t_j|\rho,\,\forall 0\leq i,j\leq n.\]
By \Cref{lm:u-dgh-any}, there exists an isometric embedding $\varphi_{n+1}:\gamma\left(t_{n+1}\right)\hookrightarrow\mathbb{U}$ such that \[d_\mathcal{H}^\mathbb{U}\big(\varphi_n\left(\gamma\left(t_n\right)\right),\varphi_{n+1}\left(\gamma\left(t_{n+1}\right)\right)\big)=|t_n-t_{n+1}|\rho.\]
Then, for any $i<n$, we have
\begin{align*}
d_\mathcal{H}^\mathbb{U}\big(\varphi_i\left(\gamma\left(t_i\right)\right),\varphi_{n+1}\left(\gamma\left(t_{n+1}\right)\right)\big)&\leq d_\mathcal{H}^\mathbb{U}\big(\varphi_i\left(\gamma\left(t_i\right)\right),\varphi_{n}\left(\gamma\left(t_{n}\right)\right)\big)+d_\mathcal{H}^\mathbb{U}\big(\varphi_n\left(\gamma\left(t_n\right)\right),\varphi_{n+1}\left(\gamma\left(t_{n+1}\right)\right)\big)\\
&\leq \left(t_n-t_i\right)\rho+\left(t_{n+1}-t_n\right)\rho\\
&=\left(t_{n+1}-t_i\right)\rho.
\end{align*}
Since $d_\mathcal{H}^\mathbb{U}\big(\varphi_i\left(\gamma\left(t_i\right)\right),\varphi_{n+1}\left(\gamma\left(t_{n+1}\right)\right)\big)\geq d_\mathcal{GH}\big(\varphi_i\left(\gamma\left(t_i\right)\right),\varphi_{n+1}\left(\gamma\left(t_{n+1}\right)\right)\big)=\left(t_{n+1}-t_i\right)\rho$, we have that $d_\mathcal{H}^\mathbb{U}\big(\varphi_i\left(\gamma\left(t_i\right)\right),\varphi_{n+1}\left(\gamma\left(t_{n+1}\right)\right)\big)=\left(t_{n+1}-t_i\right)\rho$ for all $i<n$. This concludes the induction step.
\end{proof}
\begin{coro}\label{coro:finite-seq-h-dgh}
Given any Gromov-Hausdorff geodesic $\gamma:[0,1]\rightarrow \mathcal{M}$, we let $\rho\coloneqq d_\mathcal{GH}\left(\gamma\left(0\right),\gamma\left(1\right)\right)$. Then, for any finite sequence $0\leq t_0<t_1<\ldots<t_n\leq 1$, there exist $X\in \mathcal{M}$ and isometric embeddings $\varphi_i:\gamma\left(t_i\right)\hookrightarrow X$ for $i=0,\ldots,n$ such that
$$d_\mathcal{H}^X\big(\varphi_i\left(\gamma\left(t_i\right)\right),\varphi_j\left(\gamma\left(t_j\right)\right)\big)=|t_i-t_j|\,\rho,\quad\forall0\leq i,j\leq n.$$
\end{coro}
\begin{proof}
By the previous lemma, there exist isometric embeddings $\varphi_i:\gamma\left(t_i\right)\hookrightarrow \mathbb{U}$ such that
$$d_\mathcal{H}^\mathbb{U}\big(\varphi_i\left(\gamma\left(t_i\right)\right),\varphi_j\left(\gamma\left(t_j\right)\right)\big)=|t_i-t_j|\rho.$$
Let $X\coloneqq\cup_{i=0}^n\varphi_i\left(\gamma\left(t_i\right)\right)\subseteq\mathbb{U}$. Then, since each $\varphi_i\left(\gamma\left(t_i\right)\right)$ is compact, we have that $X$ is compact and thus
$$d_\mathcal{H}^X\big(\varphi_i\left(\gamma\left(t_i\right)\right),\varphi_j\left(\gamma\left(t_j\right)\right)\big)=d_\mathcal{H}^\mathbb{U}\big(\varphi_i\left(\gamma\left(t_i\right)\right),\varphi_j\left(\gamma\left(t_j\right)\right)\big)=|t_i-t_j|\rho,\,\forall 0\leq i,j\leq n.$$
\end{proof}
\paragraph{Gromov-Wasserstein counterparts.} All the previous results in this section have counterparts for the Gromov-Wasserstein distance. We list the most useful ones here and defer their proofs to the end of this section. {Recall from \Cref{sec:gw-detail} that we use script letters such as $\mathcal{X}$ to denote metric measure spaces $\mathcal{X}=(X,d_X,\mu_X)$.}
\begin{lemma}\label{lm:u-dgw}
For any $p\in[1,\infty)$ and any $\mathcal{X},\mathcal{Y}\in\mathcal{M}^w$, there exist isometric embeddings $\varphi_X:X\hookrightarrow \mathbb{U}$ and $\varphi_Y:Y\hookrightarrow \mathbb{U}$ such that
$$\dgws{p}\left(\mathcal{X},\mathcal{Y}\right)=\dW{p}^\mathbb{U}\left((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y\right). $$
\end{lemma}
\begin{lemma}\label{lm:u-dgw-any}
For any $p\in[1,\infty)$ and any $\mathcal{X},\mathcal{Y}\in\mathcal{M}^w$, let $\varphi_X:X\hookrightarrow\mathbb{U}$ be an isometric embedding. Then, there exists an isometric embedding $\varphi_Y:Y\hookrightarrow \mathbb{U}$ such that
$$\dgws{p}\left(\mathcal{X},\mathcal{Y}\right)=\dW{p}^\mathbb{U}\left((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y\right). $$
\end{lemma}
\begin{lemma}\label{lm:pgw-geo-finite-embedding-u}
For any $p\in[1,\infty)$, let $\gamma:[0,1]\rightarrow \mathcal{M}^w$ be an $\ell^p$-Gromov-Wasserstein geodesic. Let $\rho\coloneqq \dgws{p}\left(\gamma\left(0\right),\gamma\left(1\right)\right)$ and write $\gamma(t)\coloneqq(X_t,d_t,\mu_t)$ for each $t\in[0,1]$. Then, for any finite sequence $0\leq t_0<t_1<\ldots<t_n\leq 1$, there exist isometric embeddings $\varphi_i:X_{t_i}\hookrightarrow \mathbb{U}$ such that
$$\dW{p}^\mathbb{U}\left((\varphi_i)_\#\left(\mu_{t_i}\right),(\varphi_j)_\#\left(\mu_{t_j}\right)\right)=|t_i-t_j|\,\rho,\quad\forall0\leq i,j\leq n.$$
\end{lemma}
\begin{coro}\label{coro:finite-seq-w-dgw}
Assume the same conditions as in \Cref{lm:pgw-geo-finite-embedding-u}. Then, for any finite sequence $0\leq t_0<t_1<\ldots<t_n\leq 1$, there exist $X\in \mathcal{M}$ and isometric embeddings $\varphi_i:X_{t_i}\hookrightarrow X$ for $i=0,\ldots,n$ such that
$$\dW{p}^X\left((\varphi_i)_\#\mu_{t_i},(\varphi_j)_\#\mu_{t_j}\right)=|t_i-t_j|\,\rho,\quad\forall0\leq i,j\leq n.$$
\end{coro}
\paragraph{Relegated proofs.}
\begin{proof}[Proof of \Cref{lm:u-dgw}]
By \Cref{lm:dgw_w-realizable}, there exist $Z\in\mathcal{M}$ and isometric embeddings $\psi_X:X\hookrightarrow Z$ and $\psi_Y:Y\hookrightarrow Z$ such that $\dW{p}^Z\left((\psi_X)_\#\mu_X,(\psi_Y)_\#\mu_Y\right)=\dgws{p}\left(\mathcal{X},\mathcal{Y}\right)$. Then, since $Z$ is compact, there exists an isometric embedding $\varphi:Z\hookrightarrow\mathbb{U}$ (cf. \Cref{thm:urysohn}). Let $\varphi_X=\varphi|_{\psi_X(X)}\circ\psi_X$ and $\varphi_Y=\varphi|_{\psi_Y(Y)}\circ\psi_Y$. Then,
\begin{align*}
\dgws{p}\left(\mathcal{X},\mathcal{Y}\right)&= \dW{p}^Z\left((\psi_X)_\#\mu_X,(\psi_Y)_\#\mu_Y\right)\\
&=\dW{p}^{\varphi\left(Z\right)}\left((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y\right)\\
&=\dW{p}^\mathbb{U}\left((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y\right).
\end{align*}
\end{proof}
\begin{proof}[Proof of \Cref{lm:u-dgw-any}]
By \Cref{lm:u-dgw}, there exist isometric embeddings $\psi_X:X\hookrightarrow\mathbb{U}$ and $\psi_Y:Y\hookrightarrow\mathbb{U}$ such that $\dgws{p}\left(\mathcal{X},\mathcal{Y}\right)=\dW{p}^\mathbb{U}\left((\psi_X)_\#\mu_X,(\psi_Y)_\#\mu_Y\right).$ Now, both $\varphi_X\left(X\right)$ and $\psi_X\left(X\right)$ are compact subsets of $\mathbb{U}$ and $\tau\coloneqq\varphi_X\circ\psi_X^{-1}:\psi_X\left(X\right)\rightarrow\varphi_X\left(X\right)$ is an isometry. By \Cref{thm:u-compact-homo}, there exists an isometry $\tilde{\tau}:\mathbb{U}\rightarrow\mathbb{U}$ such that $\tilde{\tau}|_{\psi_X\left(X\right)}=\tau$. Let $\varphi_Y\coloneqq\tilde{\tau}\circ\psi_Y:Y\rightarrow\mathbb{U}$. It is clear that $\varphi_Y$ is an isometric embedding and thus
\begin{align*}
\dW{p}^\mathbb{U}\left((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y\right)&= \dW{p}^\mathbb{U}\left((\tilde{\tau}^{-1})_\#\circ(\varphi_X)_\#\mu_X,(\tilde{\tau}^{-1})_\#\circ(\varphi_Y)_\#\mu_Y\right)\\
&=\dW{p}^\mathbb{U}\left((\psi_X)_\#\mu_X,(\psi_Y)_\#\mu_Y\right)\\
&=\dgws{p}\left(\mathcal{X},\mathcal{Y}\right).
\end{align*}
\end{proof}
\begin{proof}[Proof of \Cref{lm:pgw-geo-finite-embedding-u}]
We prove the lemma by induction on $n$. When $n=0$, the statement holds true trivially. Now, suppose the statement holds for $n\geq 0$. Consider a sequence $0\leq t_0<t_1<\ldots<t_n<t_{n+1}\leq 1$. By the induction assumption, there exist isometric embeddings $\varphi_i:X_{t_i}\hookrightarrow\mathbb{U}$ for $0\leq i\leq n$ such that $\dW{p}^\mathbb{U}\left((\varphi_i)_\#\mu_{t_i},(\varphi_j)_\#\mu_{t_j}\right)=|t_i-t_j|\rho,\,\forall 0\leq i,j\leq n$. By \Cref{lm:u-dgw-any}, there exists an isometric embedding $\varphi_{n+1}:X_{t_{n+1}}\hookrightarrow\mathbb{U}$ such that $\dW{p}^\mathbb{U}\left((\varphi_n)_\#\mu_{t_n},(\varphi_{n+1})_\#\mu_{t_{n+1}}\right)=|t_n-t_{n+1}|\rho.$ Then, for any $i<n$, we have
\begin{align*}
&\dW{p}^\mathbb{U}\left((\varphi_i)_\#\mu_{t_i},(\varphi_{n+1})_\#\mu_{t_{n+1}}\right)\\
\leq &\dW{p}^\mathbb{U}\left((\varphi_i)_\#\mu_{t_i},(\varphi_{n})_\#\mu_{t_{n}}\right)+\dW{p}^\mathbb{U}\left((\varphi_n)_\#\mu_{t_n},(\varphi_{n+1})_\#\mu_{t_{n+1}}\right)\\
\leq &\left(t_n-t_i\right)\rho+\left(t_{n+1}-t_n\right)\rho\\
=&\left(t_{n+1}-t_i\right)\rho.
\end{align*}
Since $\dW{p}^\mathbb{U}\left((\varphi_i)_\#\mu_{t_i},(\varphi_{n+1})_\#\mu_{t_{n+1}}\right)\geq \dgws{p}\left(\gamma\left(t_i\right),\gamma\left(t_{n+1}\right)\right)=\left(t_{n+1}-t_i\right)\rho$, we have that $\dW{p}^\mathbb{U}\left((\varphi_i)_\#\mu_{t_i},(\varphi_{n+1})_\#\mu_{t_{n+1}}\right)=\left(t_{n+1}-t_i\right)\rho$ for all $i<n$. This concludes the induction step.
\end{proof}
\begin{proof}[Proof of \Cref{coro:finite-seq-w-dgw}]
By \Cref{lm:pgw-geo-finite-embedding-u}, there exist isometric embeddings $\varphi_i:X_{t_i}\hookrightarrow \mathbb{U}$ for $i=0,\ldots,n$ such that $\dW{p}^\mathbb{U}\left((\varphi_i)_\#\mu_{t_i},(\varphi_j)_\#\mu_{t_j}\right)=|t_i-t_j|\rho.$ Let $X\coloneqq\cup_{i=0}^n\varphi_i\left(X_{t_i}\right)\subseteq\mathbb{U}$. Then, since each $\varphi_i\left(X_{t_i}\right)$ is compact, we have that $X$ is compact and thus for all $0\leq i,j\leq n$ we have
$$\dW{p}^X\left((\varphi_i)_\#\mu_{t_i},(\varphi_j)_\#\mu_{t_j}\right)=\dW{p}^\mathbb{U}\left((\varphi_i)_\#\mu_{t_i},(\varphi_j)_\#\mu_{t_j}\right)=|t_i-t_j|\rho.$$
\end{proof}
\section{Hausdorff and Wasserstein-realizable geodesics}\label{sec:real-geo}
In this section, we study both Hausdorff and Wasserstein-realizable geodesics and prove \Cref{thm:main-h-realizable} and \Cref{thm:bdd-geo-w-real}. Both proofs rely on convergence results of Lipschitz curves under certain metric extensors and we first study such convergence results in \Cref{sec:convergence}.
\subsection{Convergence}\label{sec:convergence}
Given a metric space $X$ and a Hausdorff convergent sequence\footnote{A Hausdorff convergent sequence in a metric space $X$ is a sequence of compact subsets of $X$ converging under the Hausdorff distance $d_\mathcal{H}^X$.} of subsets $\{A_i\}_{i=0}^\infty$ with limit $A\subseteq X$, we have $\lim_{i\rightarrow\infty}d_\mathcal{GH}(A_i,A)=0$, since $d_\mathcal{GH}(A_i,A)\leq d_\mathcal{H}^X(A_i,A)$ for all $i$. Conversely, a Gromov-Hausdorff convergent sequence\footnote{A Gromov-Hausdorff convergent sequence is a sequence of compact metric spaces converging under the Gromov-Hausdorff distance $d_\mathcal{GH}$.} of compact metric spaces $\{X_i\}_{i=0}^\infty$ with limit $X\in\mathcal{M}$ can be realized as a Hausdorff convergent sequence in some ambient space. A similar statement was mentioned in \cite[Chapter 10]{petersen2006riemannian}, with a proof that proceeds by passing to a subsequence. We provide a proof of our statement which involves the Urysohn universal metric space (cf. \Cref{sec:urysohn}).
\begin{lemma}\label{lm:ghconv=hconv}
Let $\{X_i\}_{i=0}^\infty$ be a convergent sequence in $(\mathcal{M},d_\mathcal{GH})$ with limit $X\in\mathcal{M}$. Then, there exist a \emph{Polish} metric space $Z$ and isometric embeddings $\varphi:X\hookrightarrow Z$ and $\varphi_i:X_i\hookrightarrow Z$ for $i=0,\ldots$ such that $\lim_{i\rightarrow\infty}d_\mathcal{H}^Z(\varphi_i(X_i),\varphi(X))=0$.
\end{lemma}
\begin{proof}
Let $Z=\mathbb{U}$, the Urysohn universal metric space. Then, $Z$ is Polish. By universality (cf. \Cref{thm:urysohn}), there exists an isometric embedding $\varphi:X\hookrightarrow \mathbb{U}$. By \Cref{lm:u-dgh-any}, there exist isometric embeddings $\varphi_i:X_i\hookrightarrow \mathbb{U}$ such that $d_\mathcal{H}^\mathbb{U}(\varphi(X),\varphi_i(X_i))=d_\mathcal{GH}(X,X_i)$ for $i=0,\ldots$. Therefore, $\lim_{i\rightarrow\infty}d_\mathcal{H}^Z(\varphi_i(X_i),\varphi(X))=0$.
\end{proof}
\paragraph{A generalized Arzel\`a-Ascoli theorem.} The version of the Arzel\`a-Ascoli theorem in \Cref{thm:AA} requires a fixed range $X$ for all curves $\gamma_i:[0,1]\rightarrow X$. This can be generalized to curves $\gamma_i:[0,1]\rightarrow X_i$ with convergent ranges, i.e., where $X_i$ converges (in a suitable sense) to some $X\in\mathcal{M}$ as $i\rightarrow\infty$.
\begin{theorem}[Generalized Arzel\`a-Ascoli theorem]\label{thm:general-AA}
Let $(Z,d_Z)$ be a complete metric space and let $\{X_i\}_{i=0}^\infty$ be a Hausdorff convergent sequence of compact subsets of $Z$. Let $X\in \mathcal{H}(Z)$ be the limit of $\{X_i\}_{i=0}^\infty$ under $d_\mathcal{H}^Z$. Let $\{\gamma_i:[0,1]\rightarrow X_i\}_{i=0}^\infty$ be a sequence of $C$-Lipschitz curves for some $C>0$ fixed. Then, there is a uniformly convergent (in the sense of $d_Z$) subsequence of $\{\gamma_i\}_{i=0}^\infty$ with a $C$-Lipschitz limit $\gamma:[0,1]\rightarrow X$.
\end{theorem}
\begin{proof}
Let $T\coloneqq\{t_n\}_{n=0}^\infty$ be a countable dense subset of $[0,1]$. Let $\rho_i\coloneqq d_\mathcal{H}^Z(X_i,X)$ for $i=0,\ldots$. Then, for each $\gamma_i(t_0)\in X_i$, there exists $x_i^0\in X$ such that $d_Z(\gamma_i(t_0),x_i^0)\leq\rho_i$. Since $X$ is compact, there exists a subsequence of $\{x_i^0\}_{i=0}^\infty$, still denoted by $\{x_i^0\}_{i=0}^\infty$, converging to a point $x^0\in X$. Then, since $\rho_i\rightarrow0$ as $i\rightarrow\infty$, we have
$$0\leq\lim_{i\rightarrow\infty}d_Z(\gamma_i(t_0),x^0)\leq\lim_{i\rightarrow\infty}d_Z(\gamma_i(t_0),x_i^0)+ \lim_{i\rightarrow\infty}d_Z(x_i^0,x^0)=0,$$
and consequently, $\lim_{i\rightarrow\infty}d_Z(\gamma_i(t_0),x^0)=0$. We treat $t_1,t_2,\ldots$ analogously, constructing $x^n\in X$ for $n=1,2,\ldots$ as in the construction of $x^0$. Then, by a standard diagonal argument, there exist a subsequence of $\{\gamma_i\}_{i=0}^\infty$ (still denoted by $\{\gamma_i\}_{i=0}^\infty$) and points $x^n\in X$ for $n=0,\ldots$ such that $\lim_{i\rightarrow\infty}d_Z(\gamma_i(t_n),x^n)=0$ for $n=0,\ldots$. Then, for $m,n\in\mathbb{N}$, we have
\begin{equation}\label{eq:gamma-T-C-Lip}
d_Z(x^m,x^n)=\lim_{i\rightarrow\infty}d_Z(\gamma_i(t_m),\gamma_i(t_n))\leq C\cdot |t_m-t_n|.
\end{equation}
Now, we define $\gamma:[0,1]\rightarrow X$ as follows:
$$\gamma(t) \coloneqq \begin{cases} x^n,&t=t_n\in T\\
\lim_{j\rightarrow\infty} x^{n_j},&t\in [0,1]\backslash T\text{ and }\{t_{n_j}\}_{j=0}^\infty\text{ is a subsequence of }T\text{ converging to }t
\end{cases}.$$
The existence of the limit $\lim_{j\rightarrow\infty} x^{n_j}$ follows from the completeness of $Z$ and \Cref{eq:gamma-T-C-Lip}. The map $\gamma$ is well-defined, i.e., its value at $t$ is independent of the choice of $\{t_{n_j}\}_{j=0}^\infty$, and it is easy to check that $\gamma$ is also $C$-Lipschitz. Now, it remains to prove that $\{\gamma_i\}_{i=0}^\infty$ converges to $\gamma$ uniformly. For any $\varepsilon>0$, pick a finite subsequence $T_N\coloneqq\{t_0,t_1,\ldots, t_N\}$ of $T$ (possibly relabeled) such that $T_N$ is an $\frac{\varepsilon}{3C}$-net of $[0,1]$. Since $\gamma_i(t_n)\rightarrow\gamma(t_n)$ as $i\rightarrow\infty$ for each $n$, there exists $M>0$ such that for all $i>M$ and all $n=0,\ldots,N$ we have $d_Z(\gamma(t_n),\gamma_i(t_n))\leq\frac{\varepsilon}{3}$. Then, for any $t\in [0,1]$, there exists $t_n\in T_N$ such that $|t-t_n|\leq \frac{\varepsilon}{3C}$, so that for $i>M$
\begin{align*}
d_Z(\gamma(t),\gamma_i(t))&\leq d_Z(\gamma(t),\gamma(t_n))+d_Z(\gamma(t_n),\gamma_i(t_n))+d_Z(\gamma_i(t_n),\gamma_i(t))\\
&\leq C\cdot |t-t_n|+\frac{\varepsilon}{3} + C\cdot |t-t_n|\\
&\leq \varepsilon.
\end{align*}
This implies that $\{\gamma_i\}_{i=0}^\infty$ converges to $\gamma$ uniformly.
\end{proof}
\subsection{Hausdorff-realizable geodesics}\label{sec:haus-real-geo}
We first define Hausdorff-realizable geodesics as follows:
\begin{definition}[Hausdorff-realizable geodesic]
A geodesic $\gamma:[0,1]\rightarrow\mathcal{M}$ is called \emph{Hausdorff-realizable}, if there exist $X\in\mathcal{M}$ and, for each $t\in[0,1]$, an isometric embedding $\varphi_t:\gamma\left(t\right)\hookrightarrow X$ such that
$$d_\mathcal{H}^X\big(\varphi_s\left(\gamma\left(s\right)\right),\varphi_t\left(\gamma\left(t\right)\right)\big)=d_\mathcal{GH}\left(\gamma\left(s\right),\gamma\left(t\right)\right),\quad\forall s,t\in[0,1].$$
In this case, we say that $\gamma$ is \emph{$X$-Hausdorff-realizable}.
\end{definition}
\begin{remark}[Hausdorff-realizable geodesics are Hausdorff geodesics]
{Suppose that $\gamma:[0,1]\rightarrow\mathcal{M}$ is an $X$-Hausdorff-realizable Gromov-Hausdorff geodesic via the family of isometric embeddings
\[\{\varphi_t:\gamma(t)\hookrightarrow X\}_{t\in[0,1]}.\]
Then, obviously the curve defined by $t\mapsto\varphi_t(\gamma(t))$ for $t\in[0,1]$ is a geodesic in the Hausdorff hyperspace $\mathcal{H}(X)$ of $X$. This is the converse of \Cref{lm:hgeo-to-dghgeo}. In short, we emphasize that \emph{a Hausdorff-realizable Gromov-Hausdorff geodesic is the same as a Hausdorff geodesic with respect to some underlying ambient space}.}
\end{remark}
\begin{example}[Trivial Gromov-Hausdorff geodesics are Hausdorff-realizable]
Let $\gamma:[0,1]\rightarrow\mathcal{M}$ be a ``trivial'' Gromov-Hausdorff geodesic, i.e., there exists $X\in\mathcal{M}$ such that $\gamma(t)\cong X$ for all $t\in[0,1]$. Then, it is obvious that $\gamma$ is $X$-Hausdorff-realizable.
\end{example}
In \cite{ivanov2019hausdorff}, the authors show that any straight-line $d_\mathcal{GH}$ geodesic can be realized as a Hausdorff geodesic in some (possibly non-compact) ambient metric space:
\begin{proposition}[{\cite[Corollary 3.1]{ivanov2019hausdorff}}]\label{prop:hausdorff-geodesic-straight-line}
Let $X,Y\in\mathcal{M}$ and $R\in\mathcal{R}^\mathrm{opt}\left(X,Y\right)$ and let $\rho\coloneqq d_\mathcal{GH}\left(X,Y\right)$. Let $\gamma_R$ be the straight-line $d_\mathcal{GH}$ geodesic connecting $X$ and $Y$ based on $R$ (cf. \Cref{thm:str-line-geo}). Let $Z\coloneqq R\times[0,1]$ and define $d_Z:Z\times Z\rightarrow\mathbb{R}_+$ by
$$d_Z\left(\left(\left(x,y\right),t\right),\left(\left(x',y'\right),t'\right)\right)\coloneqq\inf_{\left(x'',y''\right)\in R}\left(d_{R_t}\left(\left(x,y\right),\left(x'',y''\right)\right)+d_{R_{t'}}\left(\left(x',y'\right),\left(x'',y''\right)\right)\right)+\rho\,|t-t'|,$$
for any $\left(x,y\right),\left(x',y'\right)\in R$ and $t,t'\in [0,1]$. Then, by canonically identifying $R_t$ with $R\times\{t\}\subseteq Z$, we have $d_\mathcal{H}^Z\left(R_s,R_t\right)=d_\mathcal{GH}\left(R_s,R_t\right)$.
\end{proposition}
\begin{remark}\label{rmk:quotient-str-line}
In fact, $Z$ is a pseudo-metric space since for any $\left(x,y\right),\left(x,y'\right)\in R$, we have at $t=0$ that $d_Z\big(\left(\left(x,y\right),0\right),\left(\left(x,y'\right),0\right)\big)=d_X\left(x,x\right)=0$. A similar result holds for $t=1$. By identifying points at zero distance, we obtain a new metric space $\tilde{Z}$. It is obvious that the result in \Cref{prop:hausdorff-geodesic-straight-line} still holds by replacing $Z$ with $\tilde{Z}$.
\end{remark}
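The following small example, which we work out here for concreteness (it is a direct verification, not taken from \cite{ivanov2019hausdorff}), illustrates the construction of \Cref{prop:hausdorff-geodesic-straight-line} in the simplest nontrivial case.
\begin{example}[Straight-line realization between a one-point and a two-point space]
Let $X=\{x\}$ and $Y=\{a,b\}$ with $d_Y(a,b)=2$. The only correspondence is $R=\{(x,a),(x,b)\}$, with $\mathrm{dis}(R)=2$, so $\rho=d_\mathcal{GH}(X,Y)=1$. Since $d_{R_t}\left((x,a),(x,b)\right)=(1-t)\,d_X(x,x)+t\,d_Y(a,b)=2t$, the space $R_t$ is a two-point space of diameter $2t$. A direct computation with the formula for $d_Z$ gives
$$d_Z\big(((x,a),s),((x,a),t)\big)=|s-t|\quad\text{and}\quad d_Z\big(((x,a),s),((x,b),t)\big)=2\min(s,t)+|s-t|=s+t,$$
so that $d_\mathcal{H}^Z(R_s,R_t)=|s-t|=d_\mathcal{GH}(R_s,R_t)$, as claimed. After the identification of \Cref{rmk:quotient-str-line}, the space $\tilde{Z}$ is the wedge of two unit segments glued at the point corresponding to $t=0$.
\end{example}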
Now, we show that the space constructed in \Cref{prop:hausdorff-geodesic-straight-line} (or more precisely the quotient space discussed in the remark above) is compact for compact correspondences and thus show that straight-line $d_\mathcal{GH}$ geodesics corresponding to compact correspondences are Hausdorff-realizable.
\begin{proposition}\label{prop:strline-hausdorff}
Assuming the same notation as in \Cref{prop:hausdorff-geodesic-straight-line}, if $R$ is compact in the product space $(X\times Y,d_{X\times Y}\coloneqq\max(d_X,d_Y))$, then
$\gamma_R$ is Hausdorff-realizable.
\end{proposition}
\begin{proof}
We only need to show that the construction $Z$ in \Cref{prop:hausdorff-geodesic-straight-line} is sequentially compact. Then, the metric space $\tilde{Z}$ in \Cref{rmk:quotient-str-line} is also sequentially compact and thus compact.
For any sequence $\{\left(\left(x_i,y_i\right),t_i\right)\}_{i=1}^\infty$ in $Z$, by compactness of $[0,1]$ and $R$, there exists a subsequence, still denoted by $\{\left(\left(x_i,y_i\right),t_i\right)\}_{i=1}^\infty$, such that $\{t_i\}_{i=1}^\infty$ converges to some $t\in[0,1]$ and that $\{\left(x_i,y_i\right)\}_{i=1}^\infty$ converges to some $\left(x,y\right)\in R$ under $d_{X\times Y}\coloneqq\max\left(d_X,d_Y\right)$.
Now, we show that $\lim_{i\rightarrow\infty}d_Z\big(\left(\left(x_i,y_i\right),t_i\right),\left(\left(x,y\right),t\right)\big)=0$. Indeed,
\begin{align*}
0\leq d_Z\big(\left(\left(x,y\right),t\right),\left(\left(x_i,y_i\right),t_i\right)\big)&=\inf_{\left(x',y'\right)\in R}\left(d_{R_t}\left(\left(x,y\right),\left(x',y'\right)\right)+d_{R_{t_i}}\left(\left(x',y'\right),\left(x_i,y_i\right)\right)\right)+\rho\,|t-t_i|\\
&\leq d_{R_t}\left(\left(x,y\right),\left(x_i,y_i\right)\right)+\rho\,|t-t_i|\\
&=\left(1-t\right)\,d_X\left(x,x_i\right)+t\,d_Y\left(y,y_i\right)+\rho\,|t-t_i|\\
&\leq d_{X\times Y}\left(\left(x,y\right),\left(x_i,y_i\right)\right)+\rho\,|t-t_i|.
\end{align*}
Then, by assumptions on the sequence $\{((x_i,y_i),t_i)\}_{i=1}^\infty$,
$$\lim_{i\rightarrow\infty}d_Z\big(\left(\left(x_i,y_i\right),t_i\right),\left(\left(x,y\right),t\right)\big)=0.$$
As a result, $Z$ is sequentially compact and hence we conclude the proof.
\end{proof}
The following lemma provides an interesting description of Hausdorff-realizable geodesics: for any $X$-Hausdorff-realizable geodesic $\gamma$, there is a smallest closed subset $\mathcal{G}_X\subseteq X$ which Hausdorff-realizes $\gamma$.
\begin{lemma}\label{lm:union-geo-closed}
Given a compact metric space $X$ and a Hausdorff geodesic $\gamma:[0,1]\rightarrow \mathcal{H}\left(X\right)$, the union $\mathcal{G}_X\coloneqq\cup_{t\in[0,1]}\gamma\left(t\right)$ is a closed (and thus compact) subset of $X$.
\end{lemma}
\begin{proof}
Let $\rho\coloneqq d_\mathcal{H}^X\left(\gamma\left(0\right),\gamma\left(1\right)\right)$. If $\rho=0$, then $\gamma(t)=\gamma(0)$ for all $t\in[0,1]$. Thus $\mathcal{G}_X=\gamma(0)$ is closed.
Now, we assume that $\rho>0$. Fix an arbitrary $x\in X$. Define $f_x:[0,1]\rightarrow\mathbb{R}$ by taking $t\in[0,1]$ to $d_X\left(x,\gamma\left(t\right)\right)\coloneqq\inf\{d_X\left(x,x_t\right):\,x_t\in\gamma\left(t\right)\}$. We first show that $f_x$ is continuous. Fix $t_0\in[0,1]$. Since $\gamma\left(t_0\right)$ is compact, there exists $x_{t_0}\in\gamma\left(t_0\right)$ such that $f_x\left(t_0\right)=d_X\left(x,x_{t_0}\right)$. For each $\varepsilon>0$, let $\delta=\frac{\varepsilon}{\rho}>0$. For any $t\in[0,1]$ such that $|t-t_0|<\delta$, there exists $x_t\in \gamma\left(t\right)$ such that \[d_X\left(x_t,x_{t_0}\right)\leq d_\mathcal{H}^X\left(\gamma\left(t\right),\gamma\left(t_0\right)\right)=|t-t_0|\rho<\delta\rho=\varepsilon.\]
Then,
$$f_x\left(t\right)\leq d_X\left(x,x_t\right)\leq d_X\left(x,x_{t_0}\right)+d_X\left(x_{t_0},x_t\right)< f_x\left(t_0\right)+\varepsilon.$$
Now, assume $x_t'\in\gamma\left(t\right)$ is such that $f_x\left(t\right)=d_X\left(x,x_t'\right)$. Let $x_{t_0}'\in\gamma\left(t_0\right)$ be such that
\[d_X\left(x_{t_0}',x_t'\right)\leq d_\mathcal{H}^X\left(\gamma\left(t_0\right),\gamma\left(t\right)\right)<\varepsilon.\]
Then,
$$f_x\left(t\right)= d_X\left(x,x_t'\right)\geq d_X\left(x,x_{t_0}'\right)-d_X\left(x_{t_0}',x_t'\right)> f_x\left(t_0\right)-\varepsilon.$$
Therefore, $|f_x\left(t\right)-f_x\left(t_0\right)|<\varepsilon$ for any $|t-t_0|<\delta$. This implies the continuity of $f_x$.
Let $\{x_i\}_{i=0}^\infty$ be a convergent sequence in $X$ such that $x_i\in\gamma\left(t_i\right)$ for some $t_i\in[0,1]$ and $i=0,\ldots$. Suppose $x\in X$ is its limit. Assume that $x\not\in\mathcal{G}_X$, so that $f_x\left(t\right)>0$ for each $t\in[0,1]$. Then, by continuity of $f_x$, there exists a constant $c>0$ such that $f_x>c$ on $[0,1]$. But $f_x\left(t_i\right)\leq d_X\left(x_i,x\right)$ and the right-hand side approaches 0 as $i\rightarrow\infty$, a contradiction. Hence, there exists $t\in[0,1]$ such that $f_x\left(t\right)=0$, and thus $x\in\gamma\left(t\right)\subseteq\mathcal{G}_X$ since $\gamma\left(t\right)$ is compact. This proves that $\mathcal{G}_X$ is closed in $X$.
\end{proof}
Let $\Gamma$ be the collection of all Gromov-Hausdorff geodesics $\gamma:[0,1]\rightarrow\mathcal{M}$. Let $d_\infty$ be the uniform metric on $\Gamma$, i.e., $d_\infty\left(\gamma_1,\gamma_2\right)\coloneqq\sup_{t\in[0,1]}d_\mathcal{GH}\left(\gamma_1\left(t\right),\gamma_2\left(t\right)\right)$ for any $\gamma_1,\gamma_2\in\Gamma$. Let $\Gamma_\mathcal{H}$ denote the subset of $\Gamma$ consisting of all Hausdorff-realizable geodesics in $\mathcal{M}$. Then, \Cref{thm:main-h-realizable} is equivalent to saying that $\Gamma_\mathcal{H}=\Gamma$. Before proving \Cref{thm:main-h-realizable}, we apply the properties developed in \Cref{sec:W-geo} towards proving the following {preliminary} result:
\begin{proposition}\label{prop:main-h-dense}
$\Gamma_\mathcal{H}$ is a dense subset of $\Gamma$.
\end{proposition}
\begin{proof}
Fix any Gromov-Hausdorff geodesic $\gamma:[0,1]\rightarrow\mathcal{M}$ and $\varepsilon>0$, and let $\rho\coloneqq d_\mathcal{GH}\left(\gamma\left(0\right),\gamma\left(1\right)\right)$. If $\rho=0$, then $\gamma$ is a trivial geodesic and hence $\gamma\in\Gamma_\mathcal{H}$, so we may assume that $\rho>0$. Let $0=t_0<\ldots<t_n=1$ be a sequence such that $t_{i+1}-t_i<\frac{\varepsilon}{2\rho}$ for $i=0,\ldots,n-1$. Then, by \Cref{coro:finite-seq-h-dgh}, there exist $X\in\mathcal{M}$ and isometric embeddings $\varphi_i:\gamma\left(t_i\right)\hookrightarrow X$ such that $d_\mathcal{H}^X\big(\varphi_i(\gamma\left(t_i\right)),\varphi_j\left(\gamma\left(t_j\right)\right)\big)=|t_i-t_j|\rho.$ Let $Z\coloneqq \mathcal{W}_1\left(X\right)$ and still denote by $\varphi_i$ the composition of $\varphi_i:\gamma(t_i)\hookrightarrow X$ and the canonical embedding $X\hookrightarrow\mathcal{W}_1(X)=Z$ for $i=0,\ldots,n$. Then, we still have $d_\mathcal{H}^Z\big(\varphi_i(\gamma\left(t_i\right)),\varphi_j(\gamma\left(t_j\right))\big)=|t_i-t_j|\rho.$ Since $Z$ is compact and geodesic (cf. \Cref{thm:compact-w} and \Cref{thm:W-geodesic}), $\mathcal{H}(Z)$ is geodesic (cf. \Cref{thm:hgeo}). Hence, there exist Hausdorff geodesics $\gamma_i:[0,1]\rightarrow \mathcal{H}\left(Z\right)$ such that $\gamma_i\left(0\right)=\varphi_i(\gamma\left(t_i\right))$ and $\gamma_i\left(1\right)=\varphi_{i+1}(\gamma\left(t_{i+1}\right))$ for $i=0,\ldots,n-1$. Then, $\gamma_i(1)=\gamma_{i+1}(0)$ for $i=0,\ldots,n-1$ and
\begin{align*}
d_\mathcal{H}^Z(\gamma_0(0),\gamma_{n-1}(1))&=d_\mathcal{H}^Z\big(\varphi_0(\gamma(t_0)),\varphi_n(\gamma(t_n))\big)\\
&=\sum_{i=0}^{n-1}d_\mathcal{H}^Z\big(\varphi_i(\gamma(t_i)),\varphi_{i+1}(\gamma(t_{i+1}))\big)\\
&=\sum_{i=0}^{n-1}d_\mathcal{H}^Z\big(\gamma_i(0),\gamma_{i}(1)\big).
\end{align*}
Therefore, we can concatenate (cf. \Cref{prop:geo-concatenate}) all the $\gamma_i$s to obtain a new geodesic $\tilde{\gamma}:[0,1]\rightarrow \mathcal{H}\left(Z\right)$ such that $\tilde{\gamma}(t_i)=\varphi_i(\gamma(t_i))$ for each $i=0,\ldots,n$. By \Cref{lm:hgeo-to-dghgeo}, $\tilde{\gamma}$ is a Gromov-Hausdorff geodesic and by construction, $\tilde{\gamma}\in\Gamma_\mathcal{H}$. Now, for any $t\in[0,1]$, suppose $t\in[t_i,t_{i+1}]$ for some $i\in\{0,\ldots,n-1\}$. Then,
\begin{align*}
d_\mathcal{GH}\left(\gamma\left(t\right),\tilde{\gamma}\left(t\right)\right)&\leq d_\mathcal{GH}\left(\gamma\left(t\right),{\gamma}\left(t_i\right)\right)+d_\mathcal{GH}\left(\gamma\left(t_i\right),\tilde{\gamma}\left(t_i\right)\right)+d_\mathcal{GH}\left(\tilde{\gamma}\left(t_i\right),\tilde{\gamma}\left(t\right)\right) \\
&=|t-t_i|\,\rho+0+|t-t_i|\,\rho\leq 2\cdot \frac{\varepsilon}{2\rho}\cdot\rho =\varepsilon.
\end{align*}
So, $d_\infty(\gamma,\tilde{\gamma})\leq\varepsilon$. Therefore, we conclude that $\Gamma_\mathcal{H}$ is dense in $\Gamma$.
\end{proof}
\paragraph{Proof of \Cref{thm:main-h-realizable}.} Now, we obtain a proof of \Cref{thm:main-h-realizable} by showing that $\Gamma_\mathcal{H}$ is closed in $\Gamma$. {The proof is an intricate application of the generalized Arzel\`a-Ascoli theorem (\Cref{thm:general-AA}). In order to meet the conditions in \Cref{thm:general-AA}, one needs to exploit the stability of the Hausdorff extensor (\Cref{thm:H-equal}) and carefully leverage Gromov's pre-compactness theorem (\Cref{thm:pre-compact}).}
\thmhreal*
\begin{proof}
By \Cref{prop:main-h-dense}, we only need to show that $\Gamma_\mathcal{H}$ is closed in $\Gamma$.
Let $\{\gamma_i:[0,1]\rightarrow\mathcal{M}\}_{i=0}^\infty$ be a sequence in $\Gamma_\mathcal{H}$ converging to a limit $\gamma:[0,1]\rightarrow \mathcal{M}$ in $\Gamma$, i.e., $\lim_{i\rightarrow\infty}d_\infty(\gamma_i,\gamma)=0$. Moreover, we let $\rho\coloneqq d_\mathcal{GH}\left(\gamma\left(0\right),\gamma\left(1\right)\right)$.
\begin{claim}\label{clm:key-in-main}
There exist $X_i\in\mathcal{M}$ such that $\gamma_i$ is $X_i$-Hausdorff-realizable for $i=0,\ldots$ and moreover $\{X_i\}_{i=0}^\infty$ has a $d_\mathcal{GH}$-convergent subsequence.
\end{claim}
Assuming the claim for now, suppose $X\in\mathcal{M}$ is such that $\lim_{i\rightarrow\infty}d_\mathcal{GH}(X_i,X)=0$ (after possibly passing to a subsequence), then by \Cref{lm:ghconv=hconv}, there exist a Polish metric space $Z$ and isometric embeddings $\varphi:X\hookrightarrow Z$ and $\varphi_i:X_i\hookrightarrow Z$ for $i=0,\ldots$ such that $\lim_{i\rightarrow\infty}d_\mathcal{H}^Z(\varphi_i(X_i),\varphi(X))=0$. By stability of the Hausdorff extensor $\mathcal{H}$ (cf. \Cref{thm:H-equal}),
$$\lim_{i\rightarrow\infty}d_\mathcal{H}^{\mathcal{H}(Z)}\big((\varphi_i)_*(\mathcal{H}(X_i)),\varphi_*(\mathcal{H}(X))\big)=\lim_{i\rightarrow\infty}d_\mathcal{H}^Z(\varphi_i(X_i),\varphi(X))=0.$$
For any $\varepsilon>0$, there exists $K(\varepsilon)>0$ such that $\gamma_i$ is $(\rho+\varepsilon)$-Lipschitz for $i\geq K(\varepsilon)$, since $\gamma_i$ is $d_\mathcal{GH}(\gamma_i(0),\gamma_i(1))$-Lipschitz and $d_\mathcal{GH}(\gamma_i(0),\gamma_i(1))\rightarrow\rho$ as $i\rightarrow\infty$. For each $i=0,\ldots$, since $\gamma_i$ is $X_i$-Hausdorff-realizable, we can view $\gamma_i:[0,1]\rightarrow\mathcal{M}$ as a Hausdorff geodesic $\gamma_i:[0,1]\rightarrow \mathcal{H}(X_i)$. Moreover, by \Cref{thm:hyper-complete}, $\mathcal{H}(Z)$ is complete. Then, by the generalized Arzel\`a-Ascoli theorem (\Cref{thm:general-AA}), the sequence $\{\gamma_i:[0,1]\rightarrow \mathcal{H}(X_i)\}_{i=K(\varepsilon)}^\infty$ uniformly converges to a $(\rho+\varepsilon)$-Lipschitz curve $\tilde{\gamma}:[0,1]\rightarrow \mathcal{H}(X)$, where we identify $\mathcal{H}(X_i)$ with $\varphi_i(\mathcal{H}(X_i))$ and $\mathcal{H}(X)$ with $\varphi(\mathcal{H}(X))$. Obviously, for $0<\varepsilon'<\varepsilon$, the subsequence $\{\gamma_i\}_{i=K(\varepsilon')}^\infty$ uniformly converges to the same curve $\tilde{\gamma}$ and thus $\tilde{\gamma}$ is $(\rho+\varepsilon')$-Lipschitz. Since $\varepsilon'$ is arbitrary, $\tilde{\gamma}$ is $\rho$-Lipschitz. By uniform convergence, we have that for each $t\in[0,1]$, $\lim_{i\rightarrow\infty}d_\mathcal{H}^{Z}(\tilde{\gamma}(t),\gamma_i(t))=0$. We know that $\gamma(t)$ is the Gromov-Hausdorff limit of $\{\gamma_i(t)\}_{i=0}^\infty$, thus $\tilde{\gamma}(t)\cong\gamma(t)$ for all $t\in[0,1]$. Since $\tilde{\gamma}$ is $\rho$-Lipschitz, we have that for each $s,t\in[0,1]$
$$d_\mathcal{H}^X(\tilde{\gamma}(s),\tilde{\gamma}(t))\leq |s-t|\rho=d_\mathcal{GH}(\gamma(s),\gamma(t))\leq d_\mathcal{H}^X(\tilde{\gamma}(s),\tilde{\gamma}(t)).$$
Therefore, $d_\mathcal{H}^X(\tilde{\gamma}(s),\tilde{\gamma}(t))=d_\mathcal{GH}(\gamma(s),\gamma(t)) $ for $s,t\in[0,1]$ and thus $\gamma\in\Gamma_\mathcal{H}$. The structure of the argument above is also captured in \Cref{fig:proof-thm1}.
Now, we finish by proving \Cref{clm:key-in-main}.
\begin{proof}[Proof of \Cref{clm:key-in-main}]
Since $\gamma_i\in\Gamma_\mathcal{H}$, there exists $Y_i\in\mathcal{M}$ such that $\gamma_i$ is $Y_i$-Hausdorff-realizable. Then, let $X_i\coloneqq\mathcal{G}_{Y_i}=\cup_{t\in[0,1]}\gamma_i(t)$ as in \Cref{lm:union-geo-closed}. Here $\gamma_i(t)$ also denotes its isometric copy inside $Y_i$, so that one can view $\gamma_i(t)$ as an element of $\mathcal{H}(Y_i)$. It is obvious that $\gamma_i$ is also $X_i$-Hausdorff-realizable. Now, we prove that $\{{X_i}\}_{i=0}^\infty$ has a convergent subsequence via Gromov's pre-compactness theorem (cf. \Cref{thm:pre-compact}). Let $\rho_i\coloneqq d_\mathcal{GH}(\gamma_i(0),\gamma_i(1))$ for $i=0,1,\ldots$.
\begin{enumerate}
\item Fix $i\in\mathbb{N}$ and $t\in[0,1]$. Then,
$$d_\mathcal{H}^{Y_i}(\gamma_i(t),\gamma_i(0))=d_\mathcal{GH}(\gamma_i(t),\gamma_i(0))=t\rho_i\leq\rho_i.$$
Therefore, for any $x_t\in\gamma_i(t)$, there exists $x_0\in\gamma_i(0)$ such that $d_{Y_i}(x_t,x_0)\leq\rho_i$. This implies that $\gamma_i(t)\subseteq(\gamma_i(0))^{\rho_i}\subseteq Y_i$. Since $t$ is arbitrary, we have that $X_i=\mathcal{G}_{Y_i}=\cup_{t\in[0,1]}\gamma_i(t)\subseteq \left(\gamma_i(0)\right)^{\rho_i}$. Therefore, $\mathrm{diam}\left(X_i\right)\leq 2\rho_i+\mathrm{diam}\left(\gamma_i\left(0\right)\right)$ for any $i=0,\ldots$. Since $\{\mathrm{diam}(\gamma_i(0))\}_{i=0}^\infty$ approaches $\mathrm{diam}(\gamma(0))$ and $\{\rho_i\}_{i=0}^\infty$ approaches $\rho$ as $i\rightarrow\infty$, there exists $\delta>0$ such that $\rho_i\leq \delta$ and $\mathrm{diam}(\gamma_i(0))\leq\delta$ for all $i=0,\ldots$. Therefore, $\{X_i\}_{i=0}^\infty$ is uniformly bounded by $3\delta$.
\item For any $\varepsilon>0$, pick $0=t_0<t_1<\ldots<t_N=1$ such that $t_{n+1}-t_n<\frac{\varepsilon}{2\delta}$ for $n=0,\ldots,N-1$. Let $S_n\coloneqq\{s_n(k):\,k=0,\ldots,k_n\}$ be an $\frac{\varepsilon}{4}$-net of $\gamma(t_n)$ for $n=0,\ldots,N$. Let $M>0$ be a positive integer such that $d_\infty(\gamma_i,\gamma)\leq \frac{\varepsilon}{8}$ for all $i>M$. For $n\in\{0,\ldots,N\}$ and $i>M$, let $R_n^i\in\mathcal{R}^\mathrm{opt}\left(\gamma_i(t_n),\gamma(t_n)\right)$ be an optimal correspondence. Then,
\[\mathrm{dis}(R_n^i)=2d_\mathcal{GH}(\gamma_i(t_n),\gamma(t_n))\leq 2d_\infty(\gamma_i,\gamma)\leq \frac{\varepsilon}{4}.\]
For each $s_n(k)\in S_n$, choose $s_n^i(k)\in\gamma_i(t_n)$ such that $(s_n^i(k),s_n(k))\in R_n^i$. Then, we have that $S_n^i\coloneqq\{s_n^i(k)\}_{k=0}^{k_n}$ is an $\frac{\varepsilon}{2}$-net of $\gamma_i(t_n)$. Indeed, for any $x_n^i\in \gamma_i(t_n)$, there exists $x_n\in\gamma(t_n)$ such that $(x_n^i,x_n)\in R_n^i$. Let $s_n(k)\in S_n$ be such that $d_{\gamma(t_n)}(x_n,s_n(k))\leq\frac{\varepsilon}{4}$. Then,
$$d_{X_i}\left( x_n^i, s_n^i(k)\right)=d_{\gamma_i(t_n)}\left( x_n^i, s_n^i(k)\right)\leq \mathrm{dis}(R_n^i)+d_{\gamma(t_n)}(x_n,s_n(k))\leq \frac{\varepsilon}{4}+\frac{\varepsilon}{4}= \frac{\varepsilon}{2}. $$
Furthermore, we prove that $\cup_{n=0}^N S_n^i$ is an $\varepsilon$-net of $X_i$. For each $t\in[0,1]$, suppose $t\in[t_n,t_{n+1}]$ for some $n\in\{0,\ldots,N-1\}$. For any $x_t^i\in\gamma_i(t)$, since
\[d_\mathcal{H}^{X_i}(\gamma_i(t),\gamma_i(t_n))\leq |t-t_n|\rho_i\leq \frac{\varepsilon}{2\delta}\cdot \delta=\frac{\varepsilon}{2},\]
there exists $x_n^i\in\gamma_i(t_n)$ such that $d_{X_i}\left( x_n^i,x_t^i\right)\leq \frac{\varepsilon}{2}$. Then, there exists $s_n^i(k)\in S_n^i$ such that $d_{X_i}\left( x_n^i,s_n^i(k)\right)\leq \frac{\varepsilon}{2}$ and thus
$$d_{X_i}\left( x_t^i,s_n^i(k)\right)\leq d_{X_i}\left( x_t^i,x_n^i\right)+d_{X_i}\left( x_n^i,s_n^i(k)\right)\leq \varepsilon.$$
Now, note that $\left|\cup_{n=0}^NS_n^i\right|\leq \sum_{n=0}^N(k_n+1)$ for each $i>M$. Let
\[Q\left(\varepsilon\right)\coloneqq\max\left(\max\{\mathrm{cov}_\varepsilon\left(X_i\right):\,i=0,\ldots,M\},\sum_{n=0}^N(k_n+1)\right),\]
then we have $\mathrm{cov}_\varepsilon\left(X_i\right)\leq Q\left(\varepsilon\right)$ for all $i=0,\ldots$.
\end{enumerate}
Therefore, $\{X_i\}_{i=0}^\infty\subseteq \mathcal{K}\left(Q,3\delta\right)$ (cf. \Cref{def:CND}). By Gromov's pre-compactness theorem (cf. \Cref{thm:pre-compact}), $\{X_i\}_{i=0}^\infty$ has a convergent subsequence.
\end{proof}
\end{proof}
\begin{figure}[htb]
\centering \includegraphics[width=0.8\textwidth]{figure/ProofTHm1.eps}
\caption{\textbf{Illustration of the proof of \Cref{thm:main-h-realizable}.} In this figure, we identify $X$ with $\varphi(X)$ and $\mathcal{H}(X)$ with $\varphi_*(\mathcal{H}(X))$ and similarly for $X_i$ and $\mathcal{H}(X_i)$. The figure illustrates our main strategy for proving \Cref{thm:main-h-realizable} as follows: we transform the $d_\mathcal{H}^Z$ convergent sequence $\{X_i\}_{i=0}^\infty$ to a $d_\mathcal{H}^{\mathcal{H}(Z)}$ convergent sequence $\{\mathcal{H}(X_i)\}_{i=0}^\infty$; then we use the generalized Arzel\`a-Ascoli theorem to establish a limit $\tilde{\gamma}:[0,1]\rightarrow\mathcal{H}(X)$ for the Lipschitz curves $\{\gamma_i:[0,1]\rightarrow\mathcal{H}(X_i)\}_{i=0}^\infty$; finally, we show that $\tilde{\gamma}$ coincides with $\gamma$.} \label{fig:proof-thm1}
\end{figure}
\subsection{Wasserstein-realizable geodesics}\label{sec:W-reall-geo}
We first specify the definition of Wasserstein-realizable geodesics as follows.
\begin{definition}[Wasserstein-realizable geodesic]\label{def:w-real-geo}
For $p\in[1,\infty)$, an $\ell^p$-Gromov-Wasserstein geodesic $\gamma:[0,1]\rightarrow\left(\mathcal{M}^w,\dgws{p}\right)$, where $\gamma(t)\coloneqq(X_t,d_t,\mu_t)$ for $t\in[0,1]$, is called $\ell^p$-\emph{Wasserstein-realizable} (or simply Wasserstein-realizable), if there exist $X\in\mathcal{M}$ and, for each $t\in[0,1]$, an isometric embedding $\varphi_t:X_t\hookrightarrow X$ such that
$$\dW{p}^X\left((\varphi_s)_\#\mu_s,(\varphi_t)_\#\mu_t\right)=\dgws{p}\left(\gamma\left(s\right),\gamma\left(t\right)\right),\quad\forall s,t\in[0,1].$$
In this case, we say that $\gamma$ is \emph{$X$-Wasserstein-realizable}.
\end{definition}
\begin{remark}[Wasserstein-realizable geodesics are Wasserstein geodesics]
{Suppose that an $\ell^p$-Gromov-Wasserstein geodesic $\gamma:[0,1]\rightarrow(\mathcal{M}^w,\dgws{p})$ is $X$-Wasserstein-realizable via the following family of isometric embeddings
\[\{\varphi_t:\gamma(t)\hookrightarrow X\}_{t\in[0,1]}.\]
Denote $\gamma(t)=(X_t,d_t,\mu_t)$ for each $t\in[0,1]$. Then, obviously $t\mapsto(\varphi_t)_\#\mu_t$ for $t\in[0,1]$ is a geodesic in the Wasserstein hyperspace $\mathcal{W}_p(X)$ of $X$. This is the converse of \Cref{lm:wgeo-to-dgwgeo}. In words, \emph{a Wasserstein-realizable Gromov-Wasserstein geodesic is a Wasserstein geodesic.}}
\end{remark}
\begin{example}[Trivial Gromov-Wasserstein geodesics are Wasserstein-realizable]
Let $\gamma:[0,1]\rightarrow\mathcal{M}^w$ be a ``trivial'' Gromov-Wasserstein geodesic, i.e., there exists $\mathcal{X}=(X,d_X,\mu_X)\in\mathcal{M}^w$ such that $\gamma(t)\cong_w\mathcal{X}$ for all $t\in[0,1]$. Then, it is obvious that $\gamma$ is $X$-Wasserstein-realizable.
\end{example}
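The following example, which we verify directly (it is our own illustration rather than a result quoted from the literature), exhibits a nontrivial Wasserstein-realizable Gromov-Wasserstein geodesic on an interval.
\begin{example}[A Wasserstein-realizable geodesic on an interval]
Let $X\coloneqq[0,2]$ with the Euclidean metric and, for $t\in[0,1]$, let $\mu_t\coloneqq\frac{1}{2}\delta_{1-t}+\frac{1}{2}\delta_{1+t}$ and $\gamma(t)\coloneqq\left(\mathrm{supp}(\mu_t),|\cdot|,\mu_t\right)\in\mathcal{M}^w$. The monotone coupling, which matches $1-s$ with $1-t$ and $1+s$ with $1+t$, is optimal, so $\dW{p}^X(\mu_s,\mu_t)=|s-t|$ for all $s,t\in[0,1]$; that is, $t\mapsto\mu_t$ is an $\ell^p$-Wasserstein geodesic in $\mathcal{W}_p(X)$ from $\delta_1$ to $\frac{1}{2}\delta_0+\frac{1}{2}\delta_2$. Moreover, for any isometric embeddings of $\gamma(0)$ and $\gamma(1)$ into a metric space $Z$ sending $1\mapsto z$, $0\mapsto a$ and $2\mapsto b$, the triangle inequality $d_Z(z,a)+d_Z(z,b)\geq d_Z(a,b)=2$ together with the convexity of $r\mapsto r^p$ yields $\dW{p}^Z\left(\delta_z,\frac{1}{2}\delta_a+\frac{1}{2}\delta_b\right)\geq 1$, whence $\dgws{p}(\gamma(0),\gamma(1))=1$. Combining this with $\dgws{p}(\gamma(s),\gamma(t))\leq\dW{p}^X(\mu_s,\mu_t)=|s-t|$ and the triangle inequality for $\dgws{p}$, we obtain $\dgws{p}(\gamma(s),\gamma(t))=|s-t|$ for all $s,t\in[0,1]$. Hence, $\gamma$ is an $\ell^p$-Gromov-Wasserstein geodesic which is $X$-Wasserstein-realizable by construction: it deforms a one-point space into a two-point space of diameter $2$ carrying the uniform measure.
\end{example}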
Now, we recall some notation. Let $\Gamma^p$ be the collection of all $\ell^p$-Gromov-Wasserstein geodesics. Let $d_{\infty,p}$ be the uniform metric on $\Gamma^p$, i.e., for any $\gamma_1,\gamma_2\in\Gamma^p$
$$d_{\infty,p}\left(\gamma_1,\gamma_2\right)\coloneqq\sup_{t\in[0,1]}\dgws{p}\left(\gamma_1\left(t\right),\gamma_2\left(t\right)\right).$$
Let $\Gamma_\mathcal{W}^p$ denote the subset of $\Gamma^p$ consisting of all Wasserstein-realizable geodesics in $\mathcal{M}^w$.
\begin{restatable}{proposition}{propwrealdense}\label{prop:main-w-real-dense}
For $p\in[1,\infty)$, $\Gamma_\mathcal{W}^p$ is a dense subset of $\Gamma^p$.
\end{restatable}
\begin{proof}
Fix any $\ell^p$-Gromov-Wasserstein geodesic $\gamma:[0,1]\rightarrow\mathcal{M}^w$. Let $\rho\coloneqq\dgws{p}\left(\gamma\left(0\right),\gamma\left(1\right)\right)$; we may assume that $\rho>0$, since otherwise $\gamma$ is trivial and hence already Wasserstein-realizable. For any $\varepsilon>0$, let $0=t_0<t_1<\ldots<t_n=1$ be a sequence such that $t_{i+1}-t_i<\frac{\varepsilon}{2\rho}$ for $i=0,\ldots,n-1$. Then, by \Cref{coro:finite-seq-w-dgw}, there exist $X\in\mathcal{M}$ and isometric embeddings $\varphi_i:X_{t_i}\hookrightarrow X$ such that $\dW{p}^X\left((\varphi_i)_\#\mu_{t_i},\left(\varphi_j\right)_\#\mu_{t_j}\right)=|t_i-t_j|\rho.$ Let $Z\coloneqq \mathcal{W}_1\left(X\right)$ and for each $i=0,\ldots,n$ still denote by $\varphi_i$ the composition of $\varphi_i:X_{t_i}\hookrightarrow X$ and the canonical embedding $X\hookrightarrow\mathcal{W}_1(X)=Z$. Then, we still have $\dW{p}^Z\left((\varphi_i)_\#\mu_{t_i},\left(\varphi_j\right)_\#\mu_{t_j}\right)=|t_i-t_j|\rho.$ Since $Z$ is compact and geodesic (cf. \Cref{thm:compact-w} and \Cref{thm:W-geodesic}), $\mathcal{W}_{p}(Z)$ is geodesic (cf. \Cref{thm:wp-geo}). Hence, for each $i=0,\ldots,n-1$ there {exists an} $\ell^p$-Wasserstein geodesic $\gamma_i:[0,1]\rightarrow \mathcal{W}_p\left(Z\right)$ such that $\gamma_i\left(0\right)=(\varphi_i)_\#\mu_{t_i}$ and $\gamma_i\left(1\right)=(\varphi_{i+1})_\#\mu_{t_{i+1}}$. Then, $\gamma_i(1)=\gamma_{i+1}(0)$ for $i=0,\ldots,n-1$ and
\begin{align*}
\dW p^Z(\gamma_0(0),\gamma_{n-1}(1))&=\dW p^Z\left((\varphi_0)_\#\mu_{t_0},(\varphi_n)_\#\mu_{t_n}\right)\\
&=\sum_{i=0}^{n-1}\dW p^Z((\varphi_{i})_\#\mu_{t_{i}},(\varphi_{i+1})_\#\mu_{t_{i+1}})\\
&=\sum_{i=0}^{n-1}\dW p^Z(\gamma_i(0),\gamma_{i}(1)).
\end{align*}
Therefore, we can concatenate all the $\gamma_i$s via \Cref{prop:geo-concatenate} to obtain a new $\ell^p$-Wasserstein geodesic $\tilde{\gamma}:[0,1]\rightarrow \mathcal{W}_p\left(Z\right)$ such that $\tilde{\gamma}(t_i)=(\varphi_{i})_\#\mu_{t_{i}}$ for each $i=0,\ldots,n$. Then, by \Cref{lm:wgeo-to-dgwgeo}, $\tilde{\gamma}$ is a Gromov-Wasserstein geodesic and thus $\tilde{\gamma}\in\Gamma_\mathcal{W}^p$. Now, for any $t\in[0,1]$, suppose $t\in[t_i,t_{i+1}]$ for some $i\in\{0,\ldots,n-1\}$. Then,
\begin{align*}
\dgws{p}\left(\gamma\left(t\right),\tilde{\gamma}\left(t\right)\right)&\leq\dgws{p}\left(\gamma\left(t\right),{\gamma}\left(t_i\right)\right)+\dgws{p}\left(\gamma\left(t_i\right),\tilde{\gamma}\left(t_i\right)\right)+\dgws{p}\left(\tilde{\gamma}\left(t_i\right),\tilde{\gamma}\left(t\right)\right) \\
&=|t-t_i|\,\rho+0+|t-t_i|\,\rho\leq 2\cdot \frac{\varepsilon}{2\rho}\cdot\rho =\varepsilon.
\end{align*}
So, $d_{\infty,p}(\gamma,\tilde{\gamma})\leq\varepsilon$. Therefore, we conclude that $\Gamma_\mathcal{W}^p$ is dense in $\Gamma^p$.
\end{proof}
\paragraph{Hausdorff-boundedness.} Now, we introduce the Hausdorff-boundedness condition mentioned in the introduction with the purpose of identifying a certain family of Wasserstein-realizable geodesics.
{First recall from \Cref{sec:real function} that for any function $f:I\rightarrow J$, where $I$ and $J$ are intervals in $\overline{\mathbb{R}}\coloneqq[0,\infty]$ containing 0, we say that $f$ is \emph{proper} if both $f(0)=0$ and $f$ is continuous at $0$ (cf. \Cref{def:proper-function}).}
\begin{definition}[Hausdorff-bounded families]\label{def:hausdorff-bdd}
Given $p\in[1,\infty)$ and a family $\mathcal{F}$ of metric measure spaces, we say that $\mathcal{F}$ is ($\ell^p$-)\emph{Hausdorff-bounded}, if there exists an \emph{increasing and proper} function $f:[0,\infty)\rightarrow[0,\infty)$ such that for any $\mathcal{X}=(X,d_X,\mu_X),\mathcal{Y}=(Y,d_Y,\mu_Y)\in\mathcal{F}$, any metric space $Z$ and any isometric embeddings $\varphi_X:X\hookrightarrow Z$ and $\varphi_Y:Y\hookrightarrow Z$,
$$d_\mathcal{H}^Z(\varphi_X(X),\varphi_Y(Y))\leq f\left(\dW{p}^Z((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y)\right). $$
After specifying such an $f$, we say that $\mathcal{F}$ is \emph{$f$-Hausdorff-bounded}.
\end{definition}
{The remark below provides a first glimpse into the motivation for the definition of Hausdorff-bounded families.}
\begin{remark}\label{rmk:f-h-bdd to gh bound}
Given an $f$-Hausdorff-bounded family $\mathcal{F}$, for any $\mathcal{X},\mathcal{Y}\in \mathcal{F}$, we have that
$$d_\mathcal{GH}(X,Y)\leq f\left(\dgws{p}(\mathcal{X},\mathcal{Y})\right).$$
\end{remark}
{The Hausdorff-boundedness condition is not automatically satisfied: for example, the whole class $\mathcal{M}^w$ is not $f$-Hausdorff-bounded for any increasing and proper function $f$ (see also \Cref{ex:non-h-b-gw}).}
{The Hausdorff-boundedness property is not easy to verify directly. We therefore seek conditions which imply it. To this end, we introduce the notion of \emph{$h$-boundedness} for metric measure spaces which will turn out to imply Hausdorff-boundedness (cf. \Cref{prop:ex-f-bdd-family}).}
\noindent{\textbf{Note:} given a metric space $X$ and $\varepsilon\geq 0$, we will henceforth use the symbol $B_\varepsilon^X(x)$ to denote the \emph{closed ball} $B^X_\varepsilon(x)\coloneqq\{x'\in X:\,d_X(x,x')\leq \varepsilon\}$ centered at $x$ with radius $\varepsilon$. We abbreviate $B_\varepsilon^X(x)$ to $B_\varepsilon(x)$ whenever the underlying space $X$ is clear from the context.}
\begin{definition}[$h$-bounded metric measure spaces]\label{def:h-bdd-mms}
Let $h:[0,\infty)\rightarrow[0,1]$ be a \emph{strictly increasing and proper} function. For any given $\mathcal{X}=(X,d_X,\mu_X)\in\mathcal{M}^w$, we say $\mathcal{X}$ is \emph{$h$-bounded}, if for any $x\in X$ and $\varepsilon\geq 0$, $\mu_X\left( B^X_\varepsilon(x)\right)\geq h(\varepsilon)$.
\end{definition}
\begin{remark}[Influence of diameter on $h$-boundedness]
Since $\mu_X(B_0(x))\geq0=h(0)$ and $\mu_X(B_{D}(x))=1\geq h(D)$ for any $D\geq\mathrm{diam}(X)$ always hold, in the above definition one can restrict $\varepsilon$ to the interval $(0,\mathrm{diam}(X)]$.
\end{remark}
{Below we show that the doubling condition essentially implies $h$-boundedness.}
\begin{example}[Examples of $h$-bounded metric measure spaces]\label{ex:f-bdd-mms}
In this example, we present some common types of $h$-bounded metric measure spaces together with explicit constructions of $h$. Fix a metric measure space $\mathcal{X}=(X,d_X,\mu_X)$ with $\mathrm{diam}(X)\leq D$.
\begin{itemize}
\item We say $\mathcal{X}$ is \emph{$C$-doubling} for a constant $C>1$ if for any $x\in X$ and $\varepsilon\geq 0$, we have $$\mu_X(B_{2\varepsilon}(x))\leq C\cdot \mu_X(B_\varepsilon(x)).$$
Then, for any $x\in X$ and $0\leq \varepsilon\leq D$,
$$\mu_X(B_\varepsilon(x))\geq C^{-1}\mu_X(B_{2\varepsilon}(x))\geq\ldots\geq C^{-\log_2\left(\frac{D}{\varepsilon}\right)-1}\mu_X\left(B_{2^{\log_2\left(\frac{D}{\varepsilon}\right)+1}\varepsilon}(x)\right)= C^{-\log_2\left(\frac{D}{\varepsilon}\right)-1}.$$
{The function $C^{-\log_2\left(\frac{D}{\cdot}\right)-1}:[0,D]\rightarrow[0,C^{-1}]$ is strictly increasing and proper. Since $C>1$, $C^{-\log_2\left(\frac{D}{\cdot}\right)-1}$ can be extended to a strictly increasing and proper function $h_\mathcal{X}:[0,\infty)\rightarrow[0,1]$. Then, $\mathcal{X}$ is $h_\mathcal{X}$-bounded.}
\item Suppose $X$ is a finite set and let $\delta_\mathcal{X}\coloneqq\min\{\mu_X(x):\,x\in X\}>0$. Let $h_\mathcal{X}:[0,\infty)\rightarrow[0,1]$ be any strictly increasing and proper function such that $h_\mathcal{X}(\varepsilon)\leq \delta_\mathcal{X}$ for all $\varepsilon\in[0,D]$. Then, $\mathcal{X}$ is $h_\mathcal{X}$-bounded.
\end{itemize}
\end{example}
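As a sanity check of \Cref{def:h-bdd-mms}, we include the following direct computation (our own illustration) for the unit interval.
\begin{example}[The unit interval is $h$-bounded]
Let $\mathcal{X}=([0,1],|\cdot|,\mathrm{Leb})$. For any $x\in[0,1]$ and $\varepsilon\in[0,1]$, the closed ball $B_\varepsilon(x)$ contains a subinterval of $[0,1]$ of length at least $\varepsilon$ (one-sided when $x$ is within distance $\varepsilon$ of an endpoint), so $\mu_X(B_\varepsilon(x))\geq\min(\varepsilon,1)$ for all $\varepsilon\geq 0$. Since $h(\varepsilon)\coloneqq 1-e^{-\varepsilon}$ is strictly increasing, proper and satisfies $1-e^{-\varepsilon}\leq\min(\varepsilon,1)$ for all $\varepsilon\geq0$, the space $\mathcal{X}$ is $h$-bounded. Alternatively, one checks that $\mathcal{X}$ is $4$-doubling, so the first construction in \Cref{ex:f-bdd-mms} applies as well.
\end{example}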
Given a compact metric measure space $\mathcal{X}$, if we define $h_\mathcal{X}^\mathrm{inf}:[0,\infty)\rightarrow[0,1]$ by $\varepsilon\mapsto \inf_{x\in X}\mu_X(B_\varepsilon(x))$ for each $\varepsilon\in[0,\infty)$, then obviously we have for any $x\in X$ and $\varepsilon\geq 0$ that $\mu_X(B_\varepsilon(x))\geq h_\mathcal{X}^\mathrm{inf}(\varepsilon)$. We of course cannot conclude directly that $\mathcal{X}$ is $h_\mathcal{X}^\mathrm{inf}$-bounded since $h_\mathcal{X}^\mathrm{inf}$ is not necessarily strictly increasing and proper. However, it turns out that a slight modification of the construction of $h_\mathcal{X}^\mathrm{inf}$ gives rise to the following result:
\begin{lemma}\label{lm:compact-surjective}
For any $\mathcal{X}\in\mathcal{M}^w$, there exists a strictly increasing and proper function $h_\mathcal{X}:[0,\infty)\rightarrow[0,1]$ such that $\mathcal{X}$ is $h_\mathcal{X}$-bounded.
\end{lemma}
\begin{proof}
Without loss of generality, we assume that $\mathrm{diam}(X)=1$. For each $n\in\mathbb{N}$, let $X_n$ be a finite $\frac{1}{2n}$-net of $X$ such that $X_1\subseteq X_2\subseteq\ldots.$ Let $\eta_n\coloneqq\inf_{x_n\in X_n}\mu_X\left(B_\frac{1}{2n}(x_n)\right)$. Since $X_n$ is finite, the infimum is attained and hence $0<\eta_n\leq 1$ for each $n\in\mathbb{N}$. Then, we choose any strictly decreasing positive sequence $1\geq\zeta_1>\zeta_2>\ldots$ such that $\lim_{n\rightarrow\infty}\zeta_n=0$ and $\zeta_n\leq \eta_n$. Such a sequence obviously exists: for instance, $\zeta_n\coloneqq\frac{1}{n}\min(\eta_1,\ldots,\eta_n)$ works.
Define a function $f:\left\{\frac{1}{n}:n=1,2,\ldots\right\}\rightarrow\mathbb{R}$ by mapping $\frac{1}{n}$ to $\zeta_{n+1}$ for $n=1,\ldots$. We extend $f$ to a new function $g:[0,1]\rightarrow[0,\zeta_2]$ by linearly interpolating $f$ inside the intervals $\left[\frac{1}{n+1},\frac{1}{n}\right]$ for $n\in\mathbb{N}$ and by letting $g(0)\coloneqq 0$. Now, let $\hat g:[1,\infty)\rightarrow[\zeta_2,1]$ be any strictly increasing function. Then, we extend $g$ to $h_\mathcal{X}:[0,\infty)\rightarrow[0,1]$ as follows:
$$h_\mathcal{X}(\varepsilon) = \begin{cases} g(\varepsilon),&\varepsilon\in[0,1]\\
\hat g(\varepsilon),&\varepsilon\in(1,\infty)
\end{cases}.$$
Then, it is easy to check that $h_\mathcal{X}$ is strictly increasing and proper. Moreover,
$$h_\mathcal{X}\left(\varepsilon\right)\leq \zeta_{n+1}\quad\forall n\in\mathbb{N},\forall \varepsilon\in\left(\frac{1}{n+1},\frac{1}{n}\right].$$
Now, for any $x\in X$, there exists $x_n\in X_n$ such that $d_X(x,x_n)\leq \frac{1}{2n}$. Then, $B_\frac{1}{2n}(x_n)\subseteq B_\frac{1}{n}(x)$. Thus, $\mu_X\left(B_\frac{1}{n}(x)\right)\geq \mu_X\left(B_\frac{1}{2n}(x_n)\right)\geq \zeta_n$. Since for any $\varepsilon\in(0,1]$, there exists $n\in\mathbb{N}$ such that $\frac{1}{n+1}< \varepsilon\leq \frac{1}{n}$, we obtain
$$\mu_X\left(B_\varepsilon(x)\right)\geq \mu_X\left(B_\frac{1}{n+1}(x)\right)\geq \zeta_{n+1}\geq h_\mathcal{X}\left(\varepsilon\right).$$
Therefore, $\mathcal{X}$ is $h_\mathcal{X}$-bounded.
\end{proof}
We say that a family $\mathcal{F}\subseteq\mathcal{M}^w$ of compact metric measure spaces is \emph{uniformly $h$-bounded} for some strictly increasing and proper $h:[0,\infty)\rightarrow[0,1]$ if every $\mathcal{X}\in\mathcal{F}$ is $h$-bounded.
Since each $\mathcal{X}\in\mathcal{M}^w$ is $h_\mathcal{X}$-bounded for some strictly increasing and proper $h_\mathcal{X}$, the one-element family $\mathcal{F}\coloneqq\{\mathcal{X}\}$ is obviously uniformly $h_\mathcal{X}$-bounded. Moreover, any finite family $\mathcal{F}$ is uniformly $h$-bounded where $h\coloneqq\min_{\mathcal{X}\in \mathcal{F}} h_\mathcal{X}$. However, for an infinite family $\mathcal{F}$, it may not be true that one can find a uniform strictly increasing and proper $h$ for $\mathcal{F}$; see the example below:
\begin{example}[An example of non-uniformly $h$-bounded family]
For $n\in\mathbb{N}$, denote by $\Delta_n$ the $n$-point space with interpoint distance 1. Endow $\Delta_n$ with the uniform probability measure (denoted by $\mu_n$) and denote the corresponding metric measure space by $\tilde{\Delta}_n=(\Delta_n,d_n,\mu_n)$. Let $\mathcal{F}=\{\tilde{\Delta}_n:\,n\in\mathbb N\}$. Then, there is no strictly increasing and proper function $h$ such that $\mathcal{F}$ is uniformly $h$-bounded. Indeed, suppose to the contrary that $\mathcal{F}$ is uniformly $h$-bounded for some strictly increasing and proper function $h$. Then, for each $\tilde{\Delta}_n$, we pick $x_n\in \Delta_n$ and take $\varepsilon=\frac{1}{2}$. Since $\tilde{\Delta}_n$ is $h$-bounded, we have that
$$\mu_n\left( B_\frac{1}{2}(x_n)\right)=\mu_n(\{x_n\})=\frac{1}{n}\geq h\left(\frac{1}{2}\right)>0.$$
Then, this implies that $\frac{1}{n}\geq h\left(\frac{1}{2}\right)>0$ holds for all $n\in\mathbb N$, which is impossible since $\frac{1}{n}\rightarrow 0$ as $n\rightarrow\infty$.
\end{example}
The following result reveals a connection between uniform $h$-boundedness and Hausdorff-boundedness.
\begin{proposition}\label{prop:ex-f-bdd-family}
{Fix $p\in[1,\infty)$. Let $h:[0,\infty)\rightarrow[0,1]$ be a strictly increasing and proper function and let $\mathcal{F}$ be a family of $h$-bounded metric measure spaces. Then, $\mathcal{F}$ is $\tilde{h}^{-1}$-Hausdorff-bounded, where $\tilde{h}:[0,\infty)\rightarrow[0,\infty)$ is defined by $t\mapsto \frac{t}{2}\cdot h^\frac{1}{p}\left(\frac{t}{2}\right)$ for each $t\in[0,\infty)$.}
\end{proposition}
\begin{proof}
Let $\mathcal{X},\mathcal{Y}\in \mathcal{F}$. Suppose that $Z\in\mathcal{M}$ and that there exist isometric embeddings $X\hookrightarrow Z$ and $Y\hookrightarrow Z$. For notational simplicity, we still denote by $\mu_X$ and $\mu_Y$ their respective pushforwards under the respective isometric embeddings. Let $\delta \coloneqq \dW p^Z(\mu_X,\mu_Y)$ and $\eta\coloneqq d_\mathcal{H}^{Z}(X,Y)$. Assume that $\eta>0$ since the case $\eta=0$ is trivial. Then, by compactness of $X$ and $Y$, there exist $x_0\in X$ and $y_0\in Y$ such that $d_Z(x_0,y_0)=d_\mathcal{H}^Z(X,Y)=\eta.$ Without loss of generality, we assume that
$$d_Z(x_0,y_0)=d_Z(x_0,Y)\coloneqq\inf\{d_Z(x_0,y):\,y\in Y\}. $$
Then, consider the closed ball $B_{\frac{\eta}{2}}^X(x_0)$ in $X$, and let $x\in B_{\frac{\eta}{2}}^X(x_0)$. By the triangle inequality, for any $y\in Y$ we have that
$$d_Z(x,y)\geq d_Z(x_0,y) - d_Z(x_0, x)\geq d_Z(x_0,y_0)-d_X(x_0,x)\geq \eta-\frac{\eta}{2}=\frac{\eta}{2}. $$
Let $\mu\in\mathcal{C}^\mathrm{opt}_p(\mu_X,\mu_Y)$ be an optimal coupling. Then, we have that
\begin{align*}
\delta &= \dW p^Z(\mu_X,\mu_Y)\\
&=\left(\int_{X\times Y}\left( d_Z(x,y)\right)^pd\mu(x,y)\right)^\frac{1}{p}\\
&\geq \left(\int_{B_{\frac{\eta}{2}}^X(x_0)\times Y}\left( d_Z(x,y)\right)^pd\mu(x,y)\right)^\frac{1}{p}\\
&\geq \left(\int_{B_{\frac{\eta}{2}}^X(x_0)\times Y}\left( \frac{\eta}{2}\right)^pd\mu(x,y)\right)^\frac{1}{p}\\
&=\frac{\eta}{2}\cdot\left(\mu\left( B_{\frac{\eta}{2}}^X(x_0)\times Y\right)\rc^\frac{1}{p}\\
&=\frac{\eta}{2}\cdot\left(\mu_X\left( B_{\frac{\eta}{2}}^X(x_0)\right)\rc^\frac{1}{p}\\
&\geq \frac{\eta}{2}\cdot h^\frac{1}{p}\left(\frac{\eta}{2}\right)\\
&=\tilde{h}(\eta).
\end{align*}
Obviously, the function $\tilde{h}:[0,\infty)\rightarrow[0,\infty)$ defined by $t\mapsto \frac{t}{2}\cdot h^\frac{1}{p}\left(\frac{t}{2}\right)$ is strictly increasing and proper. By {item 4} of \Cref{prop:increasing-prop}, we have that
$$d_\mathcal{H}^Z(X,Y)=\eta\leq \tilde{h}^{-1}(\delta)=\tilde{h}^{-1}\left(\dW p^Z(\mu_X,\mu_Y)\right). $$
By {items 1 and 5} of \Cref{prop:increasing-prop} we have that $\tilde{h}^{-1}$ is increasing and proper. Moreover, it is easy to see that $\lim_{t\rightarrow\infty}\tilde{h}(t)=\infty$. This implies that $\tilde{h}^{-1}$ is a finite function $\tilde{h}^{-1}:[0,\infty)\rightarrow[0,\infty)$ (cf. {item 2} of \Cref{prop:increasing-prop}).
Thus, $\mathcal{F}$ is $\tilde{h}^{-1}$-Hausdorff-bounded.
\end{proof}
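To illustrate the quantitative content of \Cref{prop:ex-f-bdd-family}, suppose that every $\mathcal{X}\in\mathcal{F}$ is $C$-doubling with $\mathrm{diam}(X)\leq D$ for common constants $C>1$ and $D>0$, so that, by \Cref{ex:f-bdd-mms}, $\mathcal{F}$ is uniformly $h$-bounded with $h(\varepsilon)=C^{-\log_2\left(\frac{D}{\varepsilon}\right)-1}=\frac{1}{C}\left(\frac{\varepsilon}{D}\right)^{\log_2 C}$ for $\varepsilon\in[0,D]$. In this case, the inequality $\dW p^Z(\mu_X,\mu_Y)\geq\tilde{h}\left(d_\mathcal{H}^Z(X,Y)\right)$ unwinds, whenever $d_\mathcal{H}^Z(X,Y)\leq 2D$, to the H\"older-type estimate
$$d_\mathcal{H}^Z(X,Y)\leq 2\,C^{\frac{1}{p+\log_2 C}}\,D^{\frac{\log_2 C}{p+\log_2 C}}\left(\dW p^Z(\mu_X,\mu_Y)\right)^{\frac{p}{p+\log_2 C}}.$$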
For an $\ell^p$-Gromov-Wasserstein geodesic $\gamma:[0,1]\rightarrow\mathcal{M}^w$, we say that $\gamma$ is Hausdorff-bounded if the family $\{\gamma(t)\}_{t\in[0,1]}$ is Hausdorff-bounded. The family of Hausdorff-bounded Gromov-Wasserstein geodesics is rich, and we now present two examples.
\begin{proposition}[Geodesics consisting of finite spaces]\label{prop:f-bdd-finite}
Fix $p\in[1,\infty)$. Let $\gamma$ be an $\ell^p$-Gromov-Wasserstein geodesic. Assume that there exist a positive integer $N$, a constant $C>0$ and a constant $D>0$ such that for each $t\in[0,1]$,
\begin{enumerate}
\item the cardinality of $\gamma(t)$ is bounded above by $N$;
\item for any $x\in \gamma(t)$, $\mu_t(\{x\})\geq C$;
\item $\mathrm{diam}(\gamma(t))\leq D$.
\end{enumerate}
Then, $\gamma$ is Hausdorff-bounded.
\end{proposition}
\begin{proof}
By the second item of \Cref{ex:f-bdd-mms}, since each $\gamma(t)$ is a finite space with $\min_{x\in\gamma(t)}\mu_t(\{x\})\geq C$ and $\mathrm{diam}(\gamma(t))\leq D$, any strictly increasing and proper function $h:[0,\infty)\rightarrow[0,1]$ with $h(\varepsilon)\leq C$ for all $\varepsilon\in[0,D]$ is such that $\gamma(t)$ is $h$-bounded for every $t\in[0,1]$ simultaneously. Then, by \Cref{prop:ex-f-bdd-family}, $\{\gamma(t)\}_{t\in[0,1]}$ is Hausdorff-bounded.
\end{proof}
For any two given compact metric measure spaces, Sturm constructed in \cite{sturm2020email} an $\ell^p$-Gromov-Wasserstein geodesic connecting them, which we call a \emph{straight-line $\dgws p$ geodesic}:
\begin{theorem}[Straight-line $\dgws p$ geodesic \cite{sturm2020email}]\label{thm:straight-line-gw-geo}
Fix $p\in[1,\infty)$ and let $\mathcal{X},\mathcal{Y}\in\mathcal{M}^w$. Let $Z\in\mathcal{M}$ be such that there exist isometric embeddings $X\hookrightarrow Z$ and $Y\hookrightarrow Z$ such that
$$\dgws{p}\left(\mathcal{X},\mathcal{Y}\right)= \dW{p}^Z\left(\mu_X,\mu_Y\right), $$
whose existence is guaranteed by \Cref{lm:dgw_w-realizable}.
Let $\mu\in\mathcal{C}_p^\mathrm{opt}(\mu_X,\mu_Y)$ be any optimal coupling between $\mu_X$ and $\mu_Y$ with respect to $\dW{p}^Z$. Then, the curve $\gamma_\mu:[0,1]\rightarrow\mathcal{M}^w$ defined as follows is an $\ell^p$-Gromov-Wasserstein geodesic:
$$\gamma_{\mu}(t)\coloneqq\begin{cases}\mathcal{X},& t=0\\
\left(S,d_{t},\mu\right), & t\in(0,1)\\
\mathcal{Y}, &t=1
\end{cases}$$
where $S\coloneqq\mathrm{supp}(\mu)\subseteq X\times Y $ and $d_{t}\left(\left(x,y\right),\left(x',y'\right)\right)\coloneqq\left(1-t\right)\,d_X\left(x,x'\right)+t\,d_Y\left(y,y'\right)$ for any $(x,y),(x',y')\in S$. We call $\gamma_\mu$ a \emph{straight-line $\dgws p$ geodesic}\footnote{Our definition is slightly different from the one given in \cite{sturm2020email}: under our notation, for each $t\in[0,1]$, \cite{sturm2020email} defines a metric measure space $\gamma_\mu'(t)\coloneqq(Z\times Z,d_t',\mu)$ where $d_t'$ is defined by $d_t'((z_1,z_2),(z_1',z_2'))\coloneqq(1-t)d_Z(z_1,z_1')+t\,d_Z(z_2,z_2')$ for any $z_i,z_i'\in Z$ where $i=1,2$. Note that our geodesic $\gamma_\mu(t)$ in \Cref{thm:straight-line-gw-geo} can then be obtained by simply restricting $\gamma_\mu'(t)$ to the support of $\mu$.}.
\end{theorem}
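To illustrate the construction, take $p=1$, let $\mathcal{X}$ be the one-point space $\{x\}$, and let $\mathcal{Y}=(\{y_1,y_2\},d_Y,\mu_Y)$ be the two-point space with unit distance and uniform probability measure. The choice $Z\coloneqq Y$, together with the embeddings $x\mapsto y_1$ and $\mathrm{Id}_Y$, realizes $\dgws 1(\mathcal{X},\mathcal{Y})=\dW 1^Y\left(\delta_{y_1},\mu_Y\right)=\frac{1}{2}$ (cf. \Cref{ex:non-h-b-gw} below). The only coupling between $\mu_X$ and $\mu_Y$ is $\mu=\mu_X\otimes\mu_Y$, so that $S=\{(x,y_1),(x,y_2)\}$ and
$$d_t\left((x,y_1),(x,y_2)\right)=(1-t)\,d_X(x,x)+t\,d_Y(y_1,y_2)=t.$$
Hence, for $t\in(0,1)$, the straight-line $\dgws 1$ geodesic $\gamma_\mu(t)$ is a two-point space of diameter $t$ with uniform measure: it interpolates between $\mathcal{X}$ and $\mathcal{Y}$ by linearly stretching the distance while keeping the measure fixed.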
Now, we use \Cref{prop:ex-f-bdd-family} to show the following:
\begin{proposition}\label{prop:strline-gw-h-bdd}
Given $p\in[1,\infty)$, any straight-line $\dgws p$ geodesic is Hausdorff-bounded.
\end{proposition}
\begin{proof}
Let $\mathcal{X},\mathcal{Y}\in\mathcal{M}^w$ and let $Z\in\mathcal{M}$ be the ambient space required in \Cref{thm:straight-line-gw-geo}. Let $\mu\in\mathcal{C}_p^\mathrm{opt}(\mu_X,\mu_Y)$ be an optimal coupling for $\dW p^Z$. By \Cref{lm:compact-surjective}, there exist $h_\mathcal{X}$ and $h_\mathcal{Y}$ such that $\mathcal{X}$ is $h_\mathcal{X}$-bounded and $\mathcal{Y}$ is $h_\mathcal{Y}$-bounded. Now, consider $\mathcal{S}\coloneqq(S,d_S\coloneqq\max(d_X,d_Y),\mu)$, where $S\coloneqq\mathrm{supp}(\mu)$. $\mathcal{S}$ is a compact metric measure space and $\mu$ is fully supported. By \Cref{lm:compact-surjective} again, there exists $h_\mathcal{S}$ such that $\mathcal{S}$ is $h_\mathcal{S}$-bounded.
Now, pick any $z_0=(x_0,y_0)\in S=\mathrm{supp}(\mu)$. Then, for any $t\in(0,1)$ and for any $\varepsilon>0$, we have that
\begin{align*}
B_\varepsilon^{\gamma_\mu(t)}(z_0)&\coloneqq\{(x,y)\in S:\,d_t((x,y),(x_0,y_0))\leq \varepsilon\}\\
&\supseteq \left( B_\varepsilon^X(x_0)\times B_\varepsilon^Y(y_0)\right)\cap S\\
&=B_\varepsilon^{(S,d_S)}(z_0)
\end{align*}
Therefore,
$$\mu\left( B_\varepsilon^{\gamma_\mu(t)}(z_0)\right)\geq \mu\left( B_\varepsilon^{(S,d_S)}(z_0)\right) \geq h_\mathcal{S}(\varepsilon).$$
Let $h\coloneqq\min(h_\mathcal{X},h_\mathcal{Y},h_\mathcal{S})$, which is still strictly increasing and proper. Then, $\gamma_\mu(t)$ is $h$-bounded for any $t\in[0,1]$ and, by \Cref{prop:ex-f-bdd-family}, $\gamma_\mu$ is Hausdorff-bounded.
\end{proof}
\begin{remark}[Deviant and branching GW geodesics]\label{rmk:deviant and branching GW}
Analogously to the case of Gromov-Hausdorff geodesics (cf. \cite[Section 1.1]{chowdhury2018explicit}), for each $p\in[1,\infty)$ there exist \emph{deviant} geodesics in $\Gamma_\mathcal{W}^p$: that is, geodesics which are not straight-line $\dgws p$ geodesics. Furthermore, there also exist \emph{branching} geodesics. We provide such constructions in \Cref{app:d-b-geodesic} where we also show that these exotic geodesics are Hausdorff-bounded. In analogy with the case of $(\mathcal{M},d_\mathcal{GH})$,
\begin{enumerate}
\item the existence of branching geodesics shows that $\left(\mathcal{M}^w,\dgws p\right)$ is not an Alexandrov space with curvature bounded below \cite[Chapter 10]{burago2001course};
\item the existence of deviant (non-unique) geodesics shows that $\left(\mathcal{M}^w,\dgws p\right)$ is also not a $\mathrm{CAT}(\kappa)$ space with curvature bounded above by some $\kappa\in\mathbb R$ \cite[Chapter 2.1]{bridson2013metric}.
\end{enumerate}
\end{remark}
\paragraph{Proof of \Cref{thm:bdd-geo-w-real}.} We now deal with the proof of \Cref{thm:bdd-geo-w-real}. {The proof consists of two parts: first, we carefully approximate a given Hausdorff-bounded geodesic $\gamma$ by a convergent sequence of Wasserstein-realizable geodesics; then, this carefully chosen convergent sequence allows us to utilize the Hausdorff-boundedness condition on $\gamma$ so that we can follow a strategy analogous to the one used for proving \Cref{thm:main-h-realizable} and show that this convergent sequence has a Wasserstein-realizable limit, which must agree with $\gamma$.}
\thmbddgeo*
\begin{proof}
Let $\gamma:[0,1]\rightarrow\mathcal{M}^w$ be a Hausdorff-bounded $\ell^p$-Gromov-Wasserstein geodesic. For each $t\in[0,1]$, we write $\gamma(t)=(X_t,d_{t},\mu_t)$. Assume that $\rho\coloneqq\dgws{p}\left(\gamma\left(0\right),\gamma\left(1\right)\right)>0$. Let $T^n=\{t_i^n\coloneqq i\cdot 2^{-n}:i=0,\ldots,2^n\}$ for $n=0,1,\ldots$. Then, by \Cref{coro:finite-seq-w-dgw}, there exist $Z^n\in\mathcal{M}$ and isometric embeddings $\varphi_i^n:X_{t_i^n}\hookrightarrow Z^n$ for $i=0,\ldots,2^n$ such that $\dW{p}^{Z^n}\left(\left(\varphi_i^n\right)_\#\mu_{t_i^n},\left(\varphi_j^n\right)_\#\mu_{t_j^n}\right)=|t_i^n-t_j^n|\rho.$
For $p=1$, by \Cref{lm:W-geodesic}, we know that $\gamma_i^n:[0,1]\rightarrow \mathcal{W}_1(Z^n)$ defined by
\[t\mapsto (1-t)\left(\varphi_i^n\right)_\#\mu_{t_i^n}+t\left(\varphi_{i+1}^n\right)_\#\mu_{t_{i+1}^n}\]
is an $\ell^1$-Wasserstein geodesic for each $n=0,\ldots$ and each $i=0,\ldots,2^n-1$. Then,
\begin{align*}
\dW 1^{Z^n}\left(\gamma_0^n(0),\gamma_{2^{n}-1}^n(1)\right)&=\dW 1^{Z^n}\left((\varphi_0^n)_\#\mu_{t_0^n},\left(\varphi_{2^{n}-1}^n\right)_\#\mu_{t_{2^n}^n}\right)\\
&=\sum_{i=0}^{2^{n}-1}\dW 1^{Z^n}\left((\varphi_{i}^n)_\#\mu_{t_{i}^n},(\varphi_{i+1}^n)_\#\mu_{t_{i+1}^n}\right)\\
&=\sum_{i=0}^{2^n-1}\dW 1^{Z^n}(\gamma_i^n(0),\gamma_{i}^n(1)).
\end{align*}
Therefore, we can concatenate all the $\gamma_i^n$s for $i=0,\ldots,2^n-1$ via \Cref{prop:geo-concatenate} to obtain an $\ell^1$-Wasserstein geodesic ${\gamma}^n:[0,1]\rightarrow\mathcal{W}_1(Z^n)$ such that $\gamma^n(t_i^n)=(\varphi_i^n)_\#\mu_{t_i^n}$ for $i=0,\ldots,2^n$. Since $\dW 1^{Z^n}(\gamma^n(0),\gamma^n(1))=\rho=\dgws 1(\gamma(0),\gamma(1))$, by \Cref{lm:wgeo-to-dgwgeo} we have that $\gamma^n$ is actually an $\ell^1$-Gromov-Wasserstein geodesic. We follow the notation from \Cref{lm:wgeo-to-dgwgeo} and denote by $\tilde{\gamma}^n:[0,1]\rightarrow\mathcal{M}^w$ the $\ell^1$-Gromov-Wasserstein geodesic corresponding to $\gamma^n$. Then, it is easy to check that $d_{\infty,1}\left(\gamma,\tilde{\gamma}^n\right)\leq 2\cdot\frac{1}{2^n}\cdot\rho=2^{1-n}\rho$ via an argument similar to the one used in the proof of \Cref{prop:main-w-real-dense}. For each $n=0,\ldots$, each $i=0,\ldots,2^n-1$ and each $t\in[0,1]$, we have $\mathrm{supp}(\gamma_i^n(t))\subseteq \varphi_i^n\left( X_{t_i^n}\right)\cup \varphi_{i+1}^n\left( X_{t_{i+1}^n}\right)$. Therefore, $\cup_{t\in[0,1]}\mathrm{supp}({\gamma}^n(t))=\cup_{i=0}^{2^n}\varphi_i^n\left( X_{t_i^n}\right)=:Y^n, $ and thus $\gamma^n$ is actually a geodesic in $\mathcal{W}_1(Y^n)\subseteq \mathcal{W}_1(Z^n)$.
For $p>1$, since $W^n\coloneqq\mathcal{W}_1(Y^n)$ is geodesic (cf. \Cref{thm:W-geodesic}), we have that $\mathcal{W}_p(W^n)$ is geodesic (cf. \Cref{thm:wp-geo}). We still denote by $\varphi_i^n$ the composition of $\varphi_i^n:X_{t_i^n}\rightarrow Z^n(\text{or }Y^n)$ and the canonical embedding $Y^n\hookrightarrow W^n$. Then, there exists a geodesic $\gamma_i^n$ connecting $(\varphi_i^n)_\#\mu_{t_i^n}$ and $(\varphi_{i+1}^n)_\#\mu_{t_{i+1}^n}$. Similarly to the case when $p=1$, we concatenate the $\gamma_i^n$s to obtain an $\ell^p$-Wasserstein geodesic ${\gamma}^n:[0,1]\rightarrow\mathcal{W}_p(W^n)$ and thus an $\ell^p$-Gromov-Wasserstein geodesic $\tilde{\gamma}^n$ such that $d_{\infty,p}(\gamma,\tilde{\gamma}^n)\leq 2\cdot\frac{1}{2^n}\cdot\rho=2^{1-n}\rho$. Therefore, for each $t\in[0,1]$, $\gamma(t)$ is the $\ell^p$-Gromov-Wasserstein limit of $\{\tilde{\gamma}^n(t)\}_{n=0}^\infty$.
\begin{claim}\label{clm:key-in-main-w}
There exists a $d_\mathcal{GH}$-convergent subsequence of $\{Y^n\}_{n=0}^\infty$.
\end{claim}
Assume the claim for now and consider first the case when $p=1$. Suppose $X\in\mathcal{M}$ is such that $\lim_{n\rightarrow\infty}d_\mathcal{GH}(Y^n,X)=0$ (after possibly passing to a subsequence), then by \Cref{lm:ghconv=hconv}, there exist a Polish metric space $Z$ and isometric embeddings $\varphi:X\hookrightarrow Z$ and $\varphi^n:Y^n\hookrightarrow Z$ for $n=0,\ldots$ such that $\lim_{n\rightarrow\infty}d_\mathcal{H}^Z(\varphi^n(Y^n),\varphi(X))=0$. By \Cref{thm:W-equal},
\[\lim_{n\rightarrow\infty}d_\mathcal{H}^{\mathcal{W}_1(Z)}\big((\varphi^n)_\#(\mathcal{W}_1(Y^n)),\varphi_\#(\mathcal{W}_1(X))\big)=0.\]
For each $n=0,\ldots$, we have that ${\gamma}^n:[0,1]\rightarrow\mathcal{W}_1(Y^n)$ is $\rho$-Lipschitz. Then, since $(\varphi^n)_\#$ is an isometric embedding, $(\varphi^n)_\#\circ\gamma^n:[0,1]\rightarrow (\varphi^n)_\#(\mathcal{W}_1(Y^n))$ is also $\rho$-Lipschitz. Moreover, by \Cref{thm:complete-w}, $\mathcal{W}_1(Z)$ is complete. Then, by the generalized Arzel\`a-Ascoli theorem (\Cref{thm:general-AA}), the sequence
\[\big\{(\varphi^n)_\#\circ\gamma^n:[0,1]\rightarrow (\varphi^n)_\#(\mathcal{W}_1(Y^n))\big\}_{n=0}^\infty\]
converges uniformly to a $\rho$-Lipschitz curve $\hat{\gamma}:[0,1]\rightarrow \varphi_\#(\mathcal{W}_1(X))$ in the space $\mathcal{W}_1(Z)$.
By uniform convergence, for each $t\in[0,1]$ we have that
$$\lim_{n\rightarrow\infty}\dW{1}^{Z}(\hat{\gamma}(t),(\varphi^n)_\#\circ\gamma^n(t))=0.$$
Let $\tilde{X}_t\coloneqq\mathrm{supp}(\hat{\gamma}(t))$ and let $\tilde{\gamma}(t)\coloneqq\left(\tilde{X}_t,d_Z|_{\tilde{X}_t\times\tilde{X}_t},\hat{\gamma}(t)\right)$. Then, we have that
$$0\leq \lim_{n\rightarrow\infty}\dgws 1\left(\tilde{\gamma}(t),\tilde{\gamma}^n(t)\right)\leq\lim_{n\rightarrow\infty}\dW{1}^{Z}(\hat{\gamma}(t),(\varphi^n)_\#\circ\gamma^n(t))=0. $$
We know that $\gamma(t)$ is the Gromov-Wasserstein limit of $\{\tilde{\gamma}^n(t)\}_{n=0}^\infty$; thus, $\tilde{\gamma}(t)\cong_w\gamma(t)$ for $t\in[0,1]$. Then, since $\tilde{\gamma}$ is $X$-Wasserstein-realizable, we have that $\gamma\in\Gamma_\mathcal{W}^1$.
When $p>1$, by stability of $\mathcal{W}_1$ (cf. \Cref{thm:W-1-lip}), $\{W^n=\mathcal{W}_1(Y^n)\}_{n=0}^\infty$ also has a convergent subsequence. Then, via an argument similar to the one used in the case of $p=1$, one concludes that $\gamma\in\Gamma_\mathcal{W}^p$.
Now, we finish by proving \Cref{clm:key-in-main-w}:
\begin{proof}[Proof of \Cref{clm:key-in-main-w}]
We assume that $\gamma$ is $f$-Hausdorff-bounded for an increasing and proper function $f:[0,\infty)\rightarrow[0,\infty)$; in particular, $f(0)=0$ and $f$ is continuous at $0$. We prove the claim by suitably applying Gromov's pre-compactness theorem (cf. \Cref{thm:pre-compact}).
\begin{enumerate}
\item For any $t\in(0,1]$, by \Cref{rmk:f-h-bdd to gh bound} we have that $d_\mathcal{GH}(X_0,X_t)\leq f\left(\dgws{1}(\gamma(0),\gamma(t))\right)\leq f(\rho)$. Since $d_\mathcal{GH}(X_0,X_t)\geq \frac{1}{2}|\mathrm{diam}(X_0)-\mathrm{diam}(X_t)|$ (cf. \cite[Theorem 3.4]{memoli2012some}), we have that
\[\mathrm{diam}(X_t)\leq 2f(\rho)+\mathrm{diam}(X_0).\]
Now, fix $n\in\mathbb{N}$ and $x_0\in \varphi_0^n\left( X_{t_0^n}\right)\subseteq Y^n$. Then, for any $t_i^n\in T^n$, there exist $x_0'\in \varphi_0^n\left( X_{t_0^n}\right)$ and $x_i'\in \varphi_i^n\left( X_{t_i^n}\right)$ such that $d_{Y^n}(x_0',x_i')=d_{Y^n}\left(\varphi_0^n\left( X_{t_0^n}\right),\varphi_i^n\left( X_{t_i^n}\right)\rc$ by compactness. Obviously, we have that
$$d_{Y^n}(x_0',x_i')\leq \dW{1}^{Y^n}\left((\varphi_0^n)_\#\mu_{t_0^n},(\varphi_i^n)_\#\mu_{t_i^n}\right)=\left|t_0^n-t_i^n\right|\rho\leq \rho.$$
Now, for any point $x_i\in \varphi_i^n\left( X_{t_i^n}\right)$, we have that
\begin{align*}
d_{Y^n}(x_0,x_i)&\leq d_{Y^n}(x_0,x_0')+d_{Y^n}(x_0',x_i')+d_{Y^n}(x_i',x_i)\\
&\leq \mathrm{diam}\left(\varphi_0^n\left( X_{t_0^n}\right)\rc+\rho+\mathrm{diam}\left(\varphi_i^n\left( X_{t_i^n}\right)\rc\\
&=\mathrm{diam}\left( X_{t_0^n}\right)+\rho+\mathrm{diam}\left( X_{t_i^n}\right)\leq \rho+2f(\rho)+2\mathrm{diam}(X_0).
\end{align*}
The inequality holds for any $i=0,\ldots,2^n$. So $\mathrm{diam}(Y^n)\leq 2(\rho+2f(\rho)+2\mathrm{diam}(X_0))$ for all $n\in\mathbb{N}$.
\item For any $\varepsilon>0$, there exists $M\in\mathbb{N}$ such that $t_{j+1}^M-t_j^M<f^{-1}(\frac{\varepsilon}{2})\cdot\rho^{-1}$. Here $f^{-1}(\frac{\varepsilon}{2})>0$ follows from {item 5} of \Cref{prop:increasing-prop}. Let $S^M_j$ be an $\frac{\varepsilon}{2}$-net of $\gamma\left(t_j^M\right)$ for all $j=0,\ldots,2^M$. For any $n>M$, $\{t_j^M\}_{j=0}^{2^M}$ is a subset of $\{t_i^n\}_{i=0}^{2^n}$. Indeed, $t_j^M=t^n_{j\cdot 2^{n-M}}$ for $j=0,\ldots,2^M$. For any $t_i^n$, there exists $j$ such that $t_j^M\leq t_i^n\leq t^M_{j+1}$. We know by construction of $Y^n$ that
$$\dW{1}^{Y^n}\left(\left(\varphi_{j\cdot 2^{n-M}}^n\right)_\#\mu_{t_j^M},\left(\varphi_{i}^n\right)_\#\mu_{t_i^n}\right)= |t_i^n-t_j^M|\rho\leq |t^M_{j+1}-t_j^M|\rho< f^{-1}\left(\frac{\varepsilon}{2}\right).$$
Therefore,
$$d_\mathcal{H}^{Y^n}\left(\varphi_{j\cdot 2^{n-M}}^n\left( X_{t_j^M}\right),\varphi_i^n\left( X_{t_i^n}\right)\right)\leq f\left(\dW{1}^{Y^n}\left(\left(\varphi_{j\cdot 2^{n-M}}^n\right)_\#\mu_{t_j^M},\left(\varphi_{i}^n\right)_\#\mu_{t_i^n}\right)\right)\leq\frac{\varepsilon}{2},$$
where we used {item 3} of \Cref{prop:increasing-prop} in the second inequality.
Therefore, for each point $x_i^n\in \varphi_i^n\left( X_{t_i^n}\right)$, there exists a point $x_j^M\in \varphi_{j\cdot 2^{n-M}}^n\left( X_{t_j^M}\right)$ such that $d_{Y^n}\left(x_i^n,x_j^M\right)\leq\frac{\varepsilon}{2}$. Note that $\varphi_{j\cdot 2^{n-M}}^n\left( S_j^M\right)$ is an $\frac{\varepsilon}{2}$-net of $\varphi_{j\cdot 2^{n-M}}^n\left( X_{t_j^M}\right)$.
Then, there exists a point $\tilde{x}_j^M\in \varphi_{j\cdot 2^{n-M}}^n\left( S_j^M\right)$ such that $d_{Y^n}\left(x^M_j,\tilde{x}_j^M\right)\leq \frac{\varepsilon}{2}$. So, $d_{Y^n}\left(x^n_i,\tilde{x}_j^M\right)\leq \varepsilon$. Therefore, we have that $Y^n\subseteq\cup_{j=0}^{2^M} \left(\varphi_{j\cdot 2^{n-M}}^n\left( S_j^M\right)\right)^\varepsilon$. Let
$$Q\left(\varepsilon\right)\coloneqq\max\left(\max\{\mathrm{cov}_\varepsilon\left(Y^n\right):\,n=0,\ldots,M\},\sum_{j=0}^{2^M}\left|S_j^M\right|\right),$$
then we have that $\mathrm{cov}_\varepsilon\left(Y^n\right)\leq Q\left(\varepsilon\right)$ for all $n=0,\ldots$.
\end{enumerate}
Therefore, $\{Y^n\}_{n=0}^\infty\subseteq \mathcal{K}\big(Q,2(\rho+2f(\rho)+2\mathrm{diam}(X_0))\big)$ (cf. \Cref{def:CND}). By Gromov's pre-compactness theorem (cf. \Cref{thm:pre-compact}), $\{Y^n\}_{n=0}^\infty$ has a convergent subsequence.
\end{proof}
\end{proof}
There exist examples of non-Hausdorff-bounded Gromov-Wasserstein geodesics.
\begin{example}[An example of non-Hausdorff-bounded Gromov-Wasserstein geodesic]\label{ex:non-h-b-gw}
Let $\mathcal{X}=(X,d_X,\mu_X)$ be the one-point metric measure space and $\mathcal{Y}=(Y,d_Y,\mu_Y)$ be a two-point metric measure space with unit distance and uniform probability measure. Assume that $X=\{x\}$ and $Y=\{y_1,y_2\}$. Then, under the map $\varphi_X:X\rightarrow Y$ defined by $x\mapsto y_1$ and the identity map $\varphi_Y\coloneqq \mathrm{Id}:Y\rightarrow Y$, it is easy to check that
$$\dgws 1(\mathcal{X},\mathcal{Y})=\dW 1^Y\left((\varphi_X)_\#\mu_X,(\varphi_Y)_\#\mu_Y\right). $$
Define $\gamma:[0,1]\rightarrow\mathcal{W}_1(Y)$ by mapping each $t\in[0,1]$ to $(1-t)\,(\varphi_X)_\#\mu_X+t\,(\varphi_Y)_\#\mu_Y$. It is easy to describe $\gamma(t)$ explicitly as follows: for each $t\in[0,1]$, $\gamma(t)(\{y_1\})=1-\frac{t}{2}$ and $\gamma(t)(\{y_2\})=\frac{t}{2}$. By \Cref{lm:W-geodesic}, $\gamma$ is an $\ell^1$-Wasserstein geodesic. Thus, $\gamma$ corresponds to an $\ell^1$-Wasserstein-realizable Gromov-Wasserstein geodesic $\tilde{\gamma}$ connecting $\mathcal{X}$ and $\mathcal{Y}$ (cf. \Cref{lm:wgeo-to-dgwgeo}).
Note that $d_\mathcal{GH}(X,\tilde{\gamma}(t))=d_\mathcal{GH}(X,Y)=\frac{1}{2}$ for all $t\in(0,1]$. However, $\dgws 1(\mathcal{X},\tilde{\gamma}(t))\leq\dW 1^Y\left((\varphi_X)_\#\mu_X,\gamma(t)\right)=\frac{t}{2}$, so $\lim_{t\rightarrow0}\dgws 1(\mathcal{X},\tilde{\gamma}(t))=0$, which precludes the existence of any increasing and proper function $f:[0,\infty)\rightarrow[0,\infty)$ such that
$$\frac{1}{2}=d_\mathcal{GH}(X,\tilde{\gamma}(t))\leq f\left(\dgws 1(\mathcal{X},\tilde{\gamma}(t))\right). $$
Therefore, the geodesic $\tilde{\gamma}$ is not Hausdorff-bounded.
\end{example}
Note that the example constructed above is highly dependent on the special linear interpolation geodesic corresponding to $\dW 1$ (cf. \Cref{lm:W-geodesic}). We are not aware of any example of an $\ell^p$-Gromov-Wasserstein geodesic for $p>1$ which is not Hausdorff-bounded.
\begin{conjecture}\label{conj:gw}
For every $p\in(1,\infty)$, any $\ell^p$-Gromov-Wasserstein geodesic is Hausdorff-bounded. Also, for every $p\in[1,\infty)$, any $\ell^p$-Gromov-Wasserstein geodesic is Wasserstein-realizable.
\end{conjecture}
\section{Dynamic geodesics}\label{sec:dyn-geo}
In this section, we first carefully study the properties of the Hausdorff displacement interpolation and prove \Cref{thm:main-dyn-hausdorff}, then extend our results to study dynamic Gromov-Hausdorff geodesics and prove \Cref{thm:main-dyn-gh}.
\subsection{Displacement interpolation}
\paragraph{Geodesics in $\mathcal{W}_p\left(X\right)$ for $p\in\left(1,\infty\right)$.} Given a metric space $X$, it is known that if $X$ is a geodesic space, then $\mathcal{W}_p\left(X\right)$ is a geodesic space (cf. \Cref{thm:wp-geo}). Geodesics in $\mathcal{W}_p\left(X\right)$ are also called \emph{displacement interpolations}, due to a refined characterization of geodesics in $\mathcal{W}_p\left(X\right)$ which we now explain.
Let $C\left([0,1],X\right)$ denote the set of all continuous curves $\gamma:[0,1]\rightarrow X$ with the uniform metric $d_\infty^X\left(\gamma_1,\gamma_2\right)\coloneqq\sup_{t\in[0,1]}d_X\left(\gamma_1\left(t\right),\gamma_2\left(t\right)\right)$. Let $\Gamma([0,1],X)$ denote the subset of $C([0,1],X)$ consisting of all geodesics in $X$.
For $t\in[0,1]$, let $e_t:C\left([0,1],X\right)\rightarrow X$ be the \emph{evaluation map} taking $\gamma\in C\left([0,1],X\right)$ to $\gamma\left(t\right)\in X$.
\begin{definition}[Dynamic (optimal) coupling]\label{def:dyn-coup}
Let $X$ be a metric space and let $\alpha,\beta\in\mathcal{P}(X)$. We call $\Pi\in\mathcal{P}\left(C\left([0,1],X\right)\right)$ (the space of probability measures on $C([0,1],X)$) a \emph{dynamic coupling} between $\alpha$ and $\beta$, if $({e_0})_\#\Pi=\alpha$ and $({e_1})_\#\Pi=\beta$, where $(e_t)_\#$ represents the pushforward map under $e_t$. We call $\Pi$ an $\ell^p$-\emph{dynamic optimal coupling} between $\alpha$ and $\beta$, if $\mathrm{supp}\left(\Pi\right)\subseteq\Gamma\left([0,1],X\right)$ and $\left(e_0,e_1\right)_\#\Pi\in\mathcal{C}^\mathrm{opt}_p(\alpha,\beta)$ is an optimal transference plan with respect to $d_{\mathcal{W},p}$ for a given $p\in\left(1,\infty\right)$.
\end{definition}
Based on the notion of dynamic optimal coupling, a probability measure on the space of geodesics, there is the following characterization of geodesics in $\ell^p$-Wasserstein hyperspaces. For background materials and proofs, interested readers are referred to \cite[Section 3.2]{ambrosio2013user} or \cite[Section 7]{villani2008optimal}.
\begin{theorem}[Displacement interpolation]\label{thm:dis-int}
Let $X$ be a Polish geodesic space and let $p\in\left(1,\infty\right)$. Given $\alpha,\beta\in\mathcal{P}_p\left(X\right)$ and a continuous curve $\gamma:[0,1]\rightarrow\mathcal{W}_p\left(X\right)$ with $\gamma(0)=\alpha$ and $\gamma(1)=\beta$, the following properties are equivalent:
\begin{enumerate}
\item $\gamma$ is a geodesic in $\mathcal{W}_p\left(X\right)$;
\item there exists an $\ell^p$-dynamic optimal coupling $\Pi$ between $\alpha$ and $\beta$ such that $(e_t)_\#\Pi=\gamma(t)$ for each $t\in[0,1]$.
\end{enumerate}
\end{theorem}
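As a simple illustration, let $X=\mathbb{R}$, $\alpha=\delta_0$ and $\beta=\frac{1}{2}\left(\delta_{-1}+\delta_1\right)$, and fix any $p\in(1,\infty)$. Define $\gamma_\pm\in\Gamma([0,1],\mathbb{R})$ by $\gamma_\pm(t)\coloneqq\pm t$ and set $\Pi\coloneqq\frac{1}{2}\left(\delta_{\gamma_-}+\delta_{\gamma_+}\right)$. Then, $\left(e_0,e_1\right)_\#\Pi=\frac{1}{2}\left(\delta_{(0,-1)}+\delta_{(0,1)}\right)$ is the unique (hence optimal) coupling between $\alpha$ and $\beta$, so $\Pi$ is an $\ell^p$-dynamic optimal coupling, and $t\mapsto(e_t)_\#\Pi=\frac{1}{2}\left(\delta_{-t}+\delta_t\right)$ is the corresponding geodesic in $\mathcal{W}_p(\mathbb{R})$ from $\alpha$ to $\beta$.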
The notion of dynamic optimal coupling is the main inspiration for our definitions of Hausdorff displacement interpolation (cf. \Cref{def:haus-dis-int}) and dynamic optimal correspondences (cf. \Cref{def:dyn-cor}), whereas \Cref{thm:dis-int} serves as the motivation for one of our main results \Cref{thm:main-dyn-hausdorff}.
\subsubsection{Hausdorff displacement interpolation}\label{sec:hausdorff displacement interpolation}
Given a compact metric space $X$, the following is a direct consequence of the definition of the uniform metric $d_\infty^X$ on $C([0,1],X)$.
\begin{lemma}
For any $t\in[0,1]$, the evaluation $e_t:C\left([0,1],X\right)\rightarrow X$ taking $\gamma$ to $\gamma\left(t\right)$ is a continuous map.
\end{lemma}
For any closed subsets $A,B\subseteq X$ with $\rho\coloneqq d_\mathcal{H}^X\left(A,B\right)>0$, recall the definition of $\mathfrak{L}\left(A,B\right)$:
\begin{align*}
\mathfrak{L}\left(A,B\right)&\coloneqq\left\{\gamma:[0,1]\rightarrow X:\,\gamma(0)\in A,\,\gamma(1)\in B\text{ and }\forall s,t\in[0,1],\,d_X\left(\gamma(s),\gamma(t)\right)\leq|s-t|\,\rho\right\}\\
&=\left\{\gamma\in C([0,1],X):\,\gamma(0)\in A,\,\gamma(1)\in B\text{ and }\gamma\text{ is }\rho\text{-Lipschitz}\right\}.
\end{align*}
We have the following basic property of $\mathfrak{L}\left(A,B\right)$.
\begin{proposition}
$\mathfrak{L}\left(A,B\right)$ is a compact subset of $C\left([0,1],X\right)$.
\end{proposition}
\begin{proof}
Let $\{\gamma_i:[0,1]\rightarrow X\}_{i=0}^\infty$ be a sequence in $\mathfrak{L}\left(A,B\right)$. By definition of $\mathfrak{L}\left(A,B\right)$, each $\gamma_i$ is $\rho$-Lipschitz. Since $X$ is compact, by the Arzel\`a-Ascoli theorem (\Cref{thm:AA}), there exists a subsequence of the sequence $\{\gamma_i\}_{i=0}^\infty$ uniformly converging to a $\rho$-Lipschitz curve $\gamma:[0,1]\rightarrow X$. Since $A$ is compact, $\gamma(0)=\lim_{i\rightarrow\infty}\gamma_i(0)\in A$. Similarly, $\gamma(1)\in B$ and as a result, $\gamma\in\mathfrak{L}\left(A,B\right)$. This implies sequential compactness and thus compactness of $\mathfrak{L}\left(A,B\right)$.
\end{proof}
\begin{remark}\label{rmk:et-closed-map}
In particular, $e_t:\mathfrak{L}\left(A,B\right)\rightarrow X$ is a closed map, i.e., for any closed subset $\mathfrak{D}\subseteq\mathfrak{L}\left(A,B\right)$, the image $e_t(\mathfrak{D})$ is closed in $X$. Indeed, $\mathfrak{D}$ is then compact since $\mathfrak{L}\left(A,B\right)$ is compact. Therefore, $e_t(\mathfrak{D})$ is compact and thus closed since $e_t$ is continuous.
\end{remark}
\begin{definition}[Hausdorff displacement interpolation]\label{def:haus-dis-int}
We call a closed subset $\mathfrak{D}\subseteq\mathfrak{L}\left(A,B\right)$ a \emph{Hausdorff displacement interpolation} between $A$ and $B$ if $e_0\left(\mathfrak{D}\right)=A$ and $e_1\left(\mathfrak{D}\right)=B$.
\end{definition}
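For instance, let $X=[0,1]$, $A=\{0\}$ and $B=\{0,1\}$, so that $\rho=d_\mathcal{H}^X(A,B)=1$. The set $\mathfrak{D}\coloneqq\{\zeta_0,\zeta_1\}$, where $\zeta_0\equiv 0$ and $\zeta_1(t)\coloneqq t$, consists of two $1$-Lipschitz curves and is a closed subset of $\mathfrak{L}(A,B)$ with $e_0(\mathfrak{D})=\{0\}=A$ and $e_1(\mathfrak{D})=\{0,1\}=B$; hence, $\mathfrak{D}$ is a Hausdorff displacement interpolation between $A$ and $B$. Note that $e_t(\mathfrak{D})=\{0,t\}$ for each $t\in[0,1]$, and that $t\mapsto\{0,t\}$ is a Hausdorff geodesic from $A$ to $B$ in $\mathcal{H}([0,1])$.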
\paragraph{Proof of \Cref{thm:main-dyn-hausdorff}.}Recall that one of our main results is the following:
\thmdynhaus*
Note that, in item 2 of the theorem, the fact that $\gamma(t)\in\mathcal{H}(X)$ follows from \Cref{rmk:et-closed-map}. The proof of the implication $1\Rightarrow 2$ is based on the following observations.
\begin{lemma}\label{lm:geo-char-hausdorff}
Let $X\in\mathcal{M}$ and let $\gamma:[0,1]\rightarrow \mathcal{H}\left(X\right)$ be a Hausdorff geodesic. Let $\rho\coloneqq d_\mathcal{H}^X\left(\gamma\left(0\right),\gamma\left(1\right)\right)$ and assume that $\rho>0$. Then, for any $t_*\in[0,1]$ and any $x_*\in\gamma\left(t_*\right)$, there exists a $\rho$-Lipschitz curve $\zeta:[0,1]\rightarrow X$ such that $\zeta\left(t\right)\in\gamma\left(t\right)$ for $t\in[0,1]$ and $\zeta\left(t_*\right)=x_{*}$.
\end{lemma}
The proof of \Cref{lm:geo-char-hausdorff} is technical and we postpone it to the end of this section. One immediate consequence of the lemma is the following result.
\begin{coro}\label{coro:1to2-thm-3}
Let $X\in\mathcal{M}$ and let $\gamma:[0,1]\rightarrow \mathcal{H}\left(X\right)$ be a Hausdorff geodesic. Let $\rho\coloneqq d_\mathcal{H}^X\left(\gamma\left(0\right),\gamma\left(1\right)\right)$ and assume that $\rho>0$. Let
$$\mathfrak{D}\coloneqq\left\{\zeta\in C([0,1],X):\,\forall t\in[0,1],\,\zeta(t)\in\gamma\left(t\right),\,\text{and }\zeta\text{ is }\rho\text{-Lipschitz}\right\}.$$
Then, $\mathfrak{D}$ is a nonempty closed subset of $\mathfrak{L}\left(\gamma\left(0\right),\gamma\left(1\right)\right)$ such that $e_t(\mathfrak{D})=\gamma(t)$ for each $t\in[0,1]$.
\end{coro}
\begin{proof}
By \Cref{lm:geo-char-hausdorff}, $\mathfrak{D}$ is obviously nonempty and $e_t(\mathfrak{D})=\gamma(t)$ for each $t\in[0,1]$. Now, it remains to prove the closedness of $\mathfrak{D}$. Let $\{\zeta_i\}_{i=1}^\infty$ be a sequence in $\mathfrak{D}$ converging to some $\zeta\in C([0,1],X)$ with respect to the metric $d_\infty^X$. Then, by \Cref{thm:AA}, $\zeta$ is $\rho$-Lipschitz. Moreover, for each $t\in[0,1]$, since $\zeta_i(t)\in\gamma(t)$ for all $i\geq 1$ and $\lim_{i\rightarrow \infty}\zeta_i(t)=\zeta(t)$, by the closedness of $\gamma(t)$ we have that $\zeta(t)\in\gamma(t)$. Therefore, $\zeta\in\mathfrak{D}$ and thus $\mathfrak{D}$ is closed.
\end{proof}
This corollary establishes the direction $1\Rightarrow 2$ of \Cref{thm:main-dyn-hausdorff}. We prove the opposite direction as follows:
\begin{proof}[{Proof of $2\Rightarrow 1$ in \Cref{thm:main-dyn-hausdorff}}]
Suppose that there exists a closed subset $\mathfrak{D}\subseteq\mathfrak{L}\left(\gamma\left(0\right),\gamma\left(1\right)\right) $ such that $\gamma\left(t\right)=e_t\left(\mathfrak{D}\right)$ for all $t\in[0,1]$. Fix $t,t'\in[0,1]$ and choose an arbitrary $x_t\in e_t\left(\mathfrak{D}\right)$. Let $\zeta\in\mathfrak{D}$ be such that $\zeta(t)=x_t$, and let $x_{t'}\coloneqq \zeta({t'})$. Then, $d_X\left(x_t,x_{t'}\right)\leq |t-t'|\rho$, where $\rho\coloneqq d_\mathcal{H}^X\left(\gamma\left(0\right),\gamma\left(1\right)\right)$ is the Lipschitz constant of every curve in $\mathfrak{D}$. So $\gamma\left(t\right)\subseteq\left(\gamma\left(t'\right)\right)^{|t-t'|\rho}$. Similarly, $\gamma\left(t'\right)\subseteq\left(\gamma\left(t\right)\right)^{|t-t'|\rho}$ and thus
$$d_\mathcal{H}^X\left(\gamma\left(t\right),\gamma\left(t'\right)\right)\leq |t-t'|\cdot\rho= |t-t'|\cdot d_\mathcal{H}^X\left(\gamma\left(0\right),\gamma\left(1\right)\right).$$
Hence, $\gamma$ is a Hausdorff geodesic.
\end{proof}
\paragraph{Interpretations and consequences of \Cref{thm:main-dyn-hausdorff}.}The following remark discusses a difference between the dynamic optimal coupling and the Hausdorff displacement interpolation.
\begin{remark}[A difference between dynamic optimal coupling and Hausdorff displacement interpolation]\label{rmk:couterex-haus-geo}
Given a compact metric space $X$ and $\alpha,\beta\in\mathcal{P}\left(X\right)$, suppose $A=\mathrm{supp}\left(\alpha\right)$ and $B=\mathrm{supp}\left(\beta\right)$. Note that any dynamic optimal coupling $\Pi$ between $\alpha$ and $\beta$ is in fact supported on $\Gamma\left(A,B\right)$, the set of geodesics $\gamma$ with $\gamma\left(0\right)\in A$ and $\gamma\left(1\right)\in B$ (cf. \cite[Corollary 7.22]{villani2008optimal}). It is tempting to ask whether $\mathfrak{L}\left(A,B\right)$ can be replaced by $\mathfrak{L}\left(A,B\right)\cap \Gamma\left(A,B\right)$ in \Cref{thm:main-dyn-hausdorff}. This is not necessarily true. Indeed, consider the following case where $X\subseteq\mathbb{R}^2$ is a large enough compact disk around the origin. Let $A$ be the square $[-3,-1]\times [-1,1]$ and let $B$ be the square $[1,3]\times [-1,1]$. Then, $d_\mathcal{H}^X\left(A,B\right)=4$. Now, let $\gamma:[0,1]\rightarrow \mathcal{H}\left(X\right)$ be the Hausdorff geodesic constructed in \Cref{thm:haus-geo-cons}, i.e., $\gamma(t)=A^{td_\mathcal{H}^X(A,B)}\cap B^{(1-t)d_\mathcal{H}^X(A,B)}$ for each $t\in[0,1]$. Then, $\gamma\left(\frac{1}{2}\right)=A^2\cap B^2$. It is easy to see that $\left(0,2\right)\in \gamma\left(\frac{1}{2}\right)$ but there is no geodesic passing through $\left(0,2\right)$ starting in $A$ and ending in $B$. See \Cref{fig:haus-geo} for an illustration.
\end{remark}
\begin{figure}[htb]
\centering \includegraphics[width=0.6\textwidth]{figure/geo-ill.eps}
\caption{\textbf{Illustration of the example in \Cref{rmk:couterex-haus-geo}.} There is no line segment (geodesic) passing through $\left(0,2\right)$ starting in $A$ and ending in $B$. \textbf{(color figure)}} \label{fig:haus-geo}
\end{figure}
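The arithmetic in \Cref{rmk:couterex-haus-geo} is easy to check numerically. The following Python sketch (the grid discretization and the box-distance helper are our own illustrative choices, not part of the paper) confirms that $d_\mathcal{H}^X(A,B)=4$ and that $(0,2)$ lies within distance $2$ of both squares.

```python
import math

# Numeric sanity check of the counterexample (the grid discretization is an
# illustrative choice): A = [-3,-1] x [-1,1] and B = [1,3] x [-1,1].
def dist_to_box(p, xlo, xhi, ylo, yhi):
    # Euclidean distance from point p to an axis-aligned box.
    dx = max(xlo - p[0], 0.0, p[0] - xhi)
    dy = max(ylo - p[1], 0.0, p[1] - yhi)
    return math.hypot(dx, dy)

grid = [i / 20 - 1.0 for i in range(41)]         # 41 samples of [-1, 1]
A = [(-2.0 + u, v) for u in grid for v in grid]  # sample of [-3,-1] x [-1,1]
B = [(2.0 + u, v) for u in grid for v in grid]   # sample of [ 1, 3] x [-1,1]

rho = max(max(dist_to_box(a, 1, 3, -1, 1) for a in A),
          max(dist_to_box(b, -3, -1, -1, 1) for b in B))
print(rho)  # 4.0, i.e., d_H(A, B) = 4

# (0, 2) lies in gamma(1/2) = A^2 cap B^2 ...
p = (0.0, 2.0)
print(dist_to_box(p, -3, -1, -1, 1) <= 2.0)  # True
print(dist_to_box(p, 1, 3, -1, 1) <= 2.0)    # True
# ... yet every segment from a point of A to a point of B keeps its
# y-coordinate inside [-1, 1], so no such geodesic can pass through (0, 2).
```

The last observation is the point of the remark: the midpoint set of the Hausdorff geodesic contains points that no geodesic of $X$ from $A$ to $B$ visits.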
\Cref{thm:main-dyn-hausdorff} is a powerful tool which allows us to easily reprove the geodesic property of the Hausdorff hyperspace of a geodesic space (cf. \Cref{thm:hgeo}). In fact, we apply \Cref{thm:main-dyn-hausdorff} to prove a stronger result:
\begin{theorem}\label{thm:hyper-geod-iff}
Suppose $X$ is a compact metric space. Then, $\mathcal{H}\left(X\right)$ is geodesic \emph{if and only if} $X$ is geodesic.
\end{theorem}
\begin{proof}
We first assume that $X$ is geodesic. Let $A,B\in\mathcal{H}(X)$ be two distinct subsets. Then, we have the following observation:
\begin{claim}\label{clm:nonempty L}
$\mathfrak{L}(A,B)\neq\emptyset$.
\end{claim}
\begin{proof}[Proof of \Cref{clm:nonempty L}]
Let $\rho\coloneqq d_\mathcal{H}^X(A,B)$. Since $X$ is compact and $A,B$ are closed, there exist $a\in A$ and $b\in B$ such that $d_X(a,b)=\rho$. Then, there exists a geodesic $\gamma:[0,1]\rightarrow X$ such that $\gamma(0)=a\in A$, $\gamma(1)=b\in B$ and $\gamma$ is $\rho$-Lipschitz. This implies that $\gamma\in\mathfrak{L}(A,B)$ and thus $\mathfrak{L}(A,B)\neq\emptyset$.
\end{proof}
Then by \Cref{thm:main-dyn-hausdorff}, any nonempty closed subset of $\mathfrak{L}(A,B)$ (e.g., $\mathfrak{L}(A,B)$ itself) gives rise to a Hausdorff geodesic connecting $A$ and $B$ in $\mathcal{H}(X)$. Therefore, $\mathcal{H}(X)$ is geodesic.
Now, we assume that $\mathcal{H}(X)$ is geodesic. Then, for any two distinct points $x,y\in X$, there exists a Hausdorff geodesic connecting $\{x\}$ and $\{y\}$. By \Cref{thm:main-dyn-hausdorff}, $\mathfrak{L}(\{x\},\{y\})\neq \emptyset$. Note that $d_X(x,y)=d_\mathcal{H}^X(\{x\},\{y\})=:\rho$. Then, any $\rho$-Lipschitz curve in $\mathfrak{L}(\{x\},\{y\})$ is automatically a geodesic connecting $x$ and $y$ which implies that $X$ is geodesic.
\end{proof}
Let $X$ be a compact geodesic metric space. Then, for any two distinct sets $A,B\in\mathcal{H}(X)$, we know by \Cref{clm:nonempty L} that $\mathfrak{L}(A,B)\neq\emptyset$. Then by letting $\mathfrak{D}\coloneqq\mathfrak{L}(A,B)$ and applying \Cref{thm:main-dyn-hausdorff}, we have that the curve $\gamma_{A,B}^X:[0,1]\rightarrow\mathcal{H}(X)$ defined by $t\mapsto e_t(\mathfrak{L}(A,B))$ for all $t\in[0,1]$ is a Hausdorff geodesic. In fact, $\gamma_{A,B}^X$ coincides with the curve constructed in \Cref{thm:haus-geo-cons}, which provides an alternative proof of \Cref{thm:haus-geo-cons}:
\begin{proposition}\label{prop:hausdorff geod from interpolation}
Let $X$ be a compact geodesic metric space. For any two distinct sets $A,B\in\mathcal{H}(X)$, we define $\gamma_{A,B}^X:[0,1]\rightarrow\mathcal{H}(X)$ by $t\mapsto e_t(\mathfrak{L}(A,B))$ for all $t\in[0,1]$. Then, $\gamma_{A,B}^X$ is a Hausdorff geodesic. Moreover, if we let $\rho\coloneqq d_\mathcal{H}^X(A,B)$, then for any $t\in[0,1]$, we have that
\[ e_t(\mathfrak{L}(A,B))=A^{t\rho}\cap B^{(1-t)\rho}.\]
\end{proposition}
\begin{proof}
That $\gamma_{A,B}^X$ is a Hausdorff geodesic has already been proved above.
Since $\gamma_{A,B}^X$ is a Hausdorff geodesic, we have that
\[d_\mathcal{H}^X\left( A,\gamma_{A,B}^X(t)\right)=t\rho\,\text{ and }\,d_\mathcal{H}^X\left( \gamma_{A,B}^X(t),B\right)=(1-t)\rho.\]
Then, $\gamma_{A,B}^X(t)\subseteq A^{t\rho}$ and $\gamma_{A,B}^X(t)\subseteq B^{(1-t)\rho}$. Therefore,
\begin{equation}\label{eq:et inside cap}
e_t(\mathfrak{L}(A,B))=\gamma_{A,B}^X(t)\subseteq A^{t\rho}\cap B^{(1-t)\rho}.
\end{equation}
For the other direction, we first observe that \Cref{eq:et inside cap} immediately implies that $A^{t\rho}\cap B^{(1-t)\rho}\neq\emptyset$ and thus $A^{t\rho}\cap B^{(1-t)\rho}\in\mathcal{H}(X)$. Since $\mathcal{H}(X)$ is geodesic, there exist geodesics $\gamma_A,\gamma_B:[0,1]\rightarrow\mathcal{H}(X)$ such that $\gamma_A(0)=A$, $\gamma_A(1)=\gamma_B(0)=A^{t\rho}\cap B^{(1-t)\rho}$ and $\gamma_B(1)=B$. Note that $A^{t\rho}\cap B^{(1-t)\rho}\subseteq A^{t\rho}$ and $A\subseteq \left( \gamma_{A,B}^X(t)\right)^{t\rho}\subseteq\left( A^{t\rho}\cap B^{(1-t)\rho}\right)^{t\rho},$ where $A\subseteq \left( \gamma_{A,B}^X(t)\right)^{t\rho}$ follows from the fact that $d_\mathcal{H}^X\left( A, \gamma_{A,B}^X(t)\right)=t\rho$. This implies that $d_\mathcal{H}^X\left( A, A^{t\rho}\cap B^{(1-t)\rho}\right)\leq t\rho.$ Similarly, $d_\mathcal{H}^X\left( B, A^{t\rho}\cap B^{(1-t)\rho}\right)\leq (1-t)\rho$. By the triangle inequality,
\[\rho=d_\mathcal{H}^X(A,B)\leq d_\mathcal{H}^X\left( A, A^{t\rho}\cap B^{(1-t)\rho}\right)+d_\mathcal{H}^X\left( B, A^{t\rho}\cap B^{(1-t)\rho}\right)\leq t\rho+(1-t)\rho=\rho.\]
Therefore, all equalities must hold.
Then by \Cref{prop:geo-concatenate}, we concatenate $\gamma_A$ and $\gamma_B$ to obtain a geodesic $\gamma:[0,1]\rightarrow\mathcal{H}(X)$ such that $\gamma(0)=A,\gamma(1)=B$ and $\gamma(t)=A^{t\rho}\cap B^{(1-t)\rho}$. By \Cref{thm:main-dyn-hausdorff}, there exists a closed subset $\mathfrak{D}\subseteq\mathfrak{L}(A,B)$ such that ${\gamma}(t)=e_t(\mathfrak{D})$. Then,
\[A^{t\rho}\cap B^{(1-t)\rho}=\gamma(t)=e_t(\mathfrak{D})\subseteq e_t(\mathfrak{L}(A,B)).\]
This concludes the proof.
\end{proof}
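\Cref{prop:hausdorff geod from interpolation} can be illustrated on a one-dimensional example. The Python sketch below (the space $X=[0,10]$, the sets $A=\{0\}$, $B=\{4,8\}$, and the grid resolution are hypothetical choices of ours) checks numerically that $t\mapsto A^{t\rho}\cap B^{(1-t)\rho}$ satisfies the Hausdorff geodesic property.

```python
import numpy as np

# Hypothetical 1-D illustration: X = [0, 10], A = {0}, B = {4, 8} (choices
# ours), checking that t -> A^{t*rho} cap B^{(1-t)*rho} is a Hausdorff geodesic.
X = np.linspace(0.0, 10.0, 1001)
A = np.array([0.0])
B = np.array([4.0, 8.0])

def dist_to(S, x):
    # distance from each entry of x to the finite set S
    return np.min(np.abs(x[:, None] - S[None, :]), axis=1)

def hausdorff(P, Q):
    # Hausdorff distance between two finite point sets on the line
    d = np.abs(P[:, None] - Q[None, :])
    return max(d.min(axis=1).max(), d.min(axis=0).max())

rho = hausdorff(A, B)  # max(sup_a d(a,B), sup_b d(b,A)) = 8

def gamma(t):
    # grid approximation of A^{t*rho} cap B^{(1-t)*rho}
    return X[(dist_to(A, X) <= t * rho + 1e-9)
             & (dist_to(B, X) <= (1 - t) * rho + 1e-9)]

# the geodesic property d_H(gamma(s), gamma(t)) = |s - t| * rho:
for s, t in [(0.0, 0.5), (0.25, 0.75), (0.0, 1.0)]:
    assert abs(hausdorff(gamma(s), gamma(t)) - abs(s - t) * rho) < 0.02
print(rho)  # 8.0
```

Here $\gamma(1/2)=[0,4]$: the midpoint set fattens around $A$ while staying within $\rho/2$ of $B$, exactly as in the counterexample of \Cref{rmk:couterex-haus-geo}.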
The special geodesic $\gamma_{A,B}^X$ constructed above turns out to be instrumental for proving the following interesting property of Gromov-Hausdorff geodesics:
\begin{theorem}[Existence of infinitely many Gromov-Hausdorff geodesics]\label{thm:exist infinite geod}
For any $X,Y\in\mathcal{M}$ such that $X\not\cong Y$, there exist \emph{infinitely} many distinct Gromov-Hausdorff geodesics connecting $X$ and $Y$.
\end{theorem}
In \cite{chowdhury2018explicit} it is shown that for any $n\in\mathbb{N}$ there exist infinitely many Gromov-Hausdorff geodesics between the one-point space $\Delta_1$ and the $n$-point space $\Delta_n$. Thus, \Cref{thm:exist infinite geod} generalizes this result to the case of two arbitrary compact metric spaces.
\begin{proof}[Proof of \Cref{thm:exist infinite geod}]
By \Cref{lm:dgh_hausdorff-realizable}, there exist $Z_0\in\mathcal{M}$ and isometric embeddings $\varphi_X^{(0)}:X\hookrightarrow Z_0$ and $\varphi_Y^{(0)}:Y\hookrightarrow Z_0$ such that
\[d_\mathcal{H}^{Z_0}\left(\varphi_X^{(0)}(X),\varphi_Y^{(0)}(Y)\right)=d_\mathcal{GH}(X,Y).\]
Without loss of generality, we assume that $Z_0$ is geodesic (otherwise we replace $Z_0$ with its extension $\mathcal{W}_1\left(Z_0\right)$, which is geodesic by \Cref{thm:W-geodesic}). Consider the following chain of isometric embeddings:
\begin{equation}\label{eq:chain of z embeddings}
Z_0\hookrightarrow Z_1\hookrightarrow Z_2\hookrightarrow\cdots \hookrightarrow Z_n \hookrightarrow \cdots
\end{equation}
where for each $n\geq 1$, $Z_n\coloneqq\mathcal{W}_1(Z_{n-1})$ and the map $Z_{n-1}\hookrightarrow Z_n=\mathcal{W}_1(Z_{n-1})$ is the canonical embedding sending $z_{n-1}\in Z_{n-1}$ to the Dirac delta measure $\delta_{z_{n-1}}\in \mathcal{P}(Z_{n-1})$. Let $\varphi_X^{(n)}:X\hookrightarrow Z_n$ denote the composition of the map $\varphi_X^{(0)}:X\hookrightarrow Z_0$ with the following composition of canonical isometric embeddings:
\[Z_0\hookrightarrow Z_1\hookrightarrow Z_2\hookrightarrow\cdots \hookrightarrow Z_n.\]
We similarly define $\varphi_Y^{(n)}:Y\hookrightarrow Z_n$. Then, by \Cref{lm:dH under embedding} we have that
\begin{equation}\label{eq:dh=dgh n}
d_\mathcal{H}^{Z_n}\left(\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)\right)=d_\mathcal{H}^{Z_0}\left(\varphi_X^{(0)}(X),\varphi_Y^{(0)}(Y)\right)=d_\mathcal{GH}(X,Y),\,\,\,\forall n\in\mathbb{N}.
\end{equation}
Following \Cref{eq:chain of z embeddings}, we have the following chain of isometric embeddings:
\begin{equation}\label{eq:chain of l embedding}
\mathfrak{L}\left(\varphi_X^{(0)}(X),\varphi_Y^{(0)}(Y)\right)\hookrightarrow\mathfrak{L}\left(\varphi_X^{(1)}(X),\varphi_Y^{(1)}(Y)\right)\hookrightarrow\cdots
\end{equation}
By \Cref{prop:hausdorff geod from interpolation} and \Cref{lm:hgeo-to-dghgeo}, for each $n\in\mathbb{N}$, the curve $\gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_n}:[0,1]\rightarrow \mathcal{H}(Z_n)$ defined by $$\gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_n}(t)\coloneqq e_t\left( \mathfrak{L}\left(\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)\right)\right)$$ for $t\in[0,1]$ is a Gromov-Hausdorff geodesic connecting $\varphi_X^{(n)}(X)\cong X$ and $\varphi_Y^{(n)}(Y)\cong Y$. Now, to conclude the proof, we show that for each $m\neq n\in\mathbb{N}$, $\gamma_{\varphi_X^{(m)}(X),\varphi_Y^{(m)}(Y)}^{Z_m}\neq\gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_{n}}$.
Note that for any $n\in\mathbb{N}$ and any $t\in(0,1)$, we have the isometric embedding (induced from \Cref{eq:chain of l embedding}):
\[\Psi_t^n:\gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_n}(t)\hookrightarrow\gamma_{\varphi_X^{(n+1)}(X),\varphi_Y^{(n+1)}(Y)}^{Z_{n+1}}(t).\]
In fact, $\Psi_t^n$ is the restriction of the canonical embedding $Z_n\hookrightarrow Z_{n+1}$ and sends each $z_n\in \gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_n}(t)$ to $\delta_{z_n}\in \gamma_{\varphi_X^{(n+1)}(X),\varphi_Y^{(n+1)}(Y)}^{Z_{n+1}}(t)$. Let $\rho\coloneqq d_\mathcal{GH}(X,Y)$. Then, for any $x\in \varphi_X^{(n)}(X)$ there exists $y\in \varphi_Y^{(n)}(Y)$ such that $d_{Z_n}(x,y)\leq \rho$ (cf. \Cref{eq:dh=dgh n}). By \Cref{lm:W-geodesic}, the curve $s\mapsto (1-s)\delta_x+s\,\delta_y$ is a geodesic in the space $Z_{n+1}=\mathcal{W}_1(Z_n)$ and therefore it is a $\rho$-Lipschitz curve in $\mathfrak{L}\left(\varphi_X^{(n+1)}(X),\varphi_Y^{(n+1)}(Y)\right)$. Then, it is easy to see that for the given $t\in(0,1)$
\[(1-t)\delta_x+t\delta_y\,\,\in \,\,\gamma_{\varphi_X^{(n+1)}(X),\varphi_Y^{(n+1)}(Y)}^{Z_{n+1}}(t)\backslash \Psi_t^n\left( \gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_n}(t)\right).\]
Therefore, $\Psi_t^n$ is not surjective. By \cite[Theorem 1.6.14]{burago2001course}, \[\gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_n}(t)\not\cong\gamma_{\varphi_X^{(n+1)}(X),\varphi_Y^{(n+1)}(Y)}^{Z_{n+1}}(t)\]
since both spaces are compact. Therefore
\[\gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_n}\neq\gamma_{\varphi_X^{(n+1)}(X),\varphi_Y^{(n+1)}(Y)}^{Z_{n+1}}.\]
Now, for $m<n$ and $t\in(0,1)$, we have an embedding $\Psi_t^{m,n}:\gamma_{\varphi_X^{(m)}(X),\varphi_Y^{(m)}(Y)}^{Z_m}(t)\hookrightarrow\gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_{n}}(t)$ defined as the following composition of maps:
\[\Psi_t^{m,n}:\gamma_{\varphi_X^{(m)}(X),\varphi_Y^{(m)}(Y)}^{Z_m}(t)\xhookrightarrow[]{\Psi_t^m}\gamma_{\varphi_X^{(m+1)}(X),\varphi_Y^{(m+1)}(Y)}^{Z_{m+1}}(t)\xhookrightarrow[]{\Psi_t^{m+1}}\cdots\xhookrightarrow[]{\Psi_t^{n-1}} \gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_{n}}(t).\]
Then, it follows from the above that $\Psi_t^{m,n}$ is not surjective. Hence
\[\gamma_{\varphi_X^{(m)}(X),\varphi_Y^{(m)}(Y)}^{Z_m}(t)\neq\gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_{n}}(t),\]
and thus
\[\gamma_{\varphi_X^{(m)}(X),\varphi_Y^{(m)}(Y)}^{Z_m}\neq\gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_{n}}\quad \forall m\neq n\in \mathbb{N}.\]
Therefore, $\left\{ \gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_{n}}\right\}_{n\in\mathbb{N}}$ is an infinite family of \emph{distinct} Gromov-Hausdorff geodesics connecting $X$ and $Y$.
We summarize the construction of $\left\{ \gamma_{\varphi_X^{(n)}(X),\varphi_Y^{(n)}(Y)}^{Z_{n}}\right\}_{n\in\mathbb{N}}$ above via the following diagram:
$$
\begin{tikzcd}
\centering
[0,1]\arrow{rr}{=}\arrow[hookrightarrow,"\gamma_{\varphi_X^{(0)}(X),\varphi_Y^{(0)}(Y)}^{Z_{0}}"]{dd} && {[0,1]} \arrow[rr,"="]\arrow[hookrightarrow,"\gamma_{\varphi_X^{(1)}(X),\varphi_Y^{(1)}(Y)}^{Z_{1}}"]{dd} && {[0,1]} \arrow[hookrightarrow,"\gamma_{\varphi_X^{(2)}(X),\varphi_Y^{(2)}(Y)}^{Z_{2}}"]{dd}\arrow{rr}{=}&& {\cdots}\arrow[hookrightarrow]{dd} \\
&& \,\,\,\,\,\,\,\,\,\, & \\
\mathcal{H}(Z_0)\arrow[hookrightarrow]{rr} && \mathcal{H}(Z_1)\arrow[hookrightarrow]{rr} && \mathcal{H}(Z_2)\arrow[hookrightarrow]{rr}&& {\cdots}\\
&& \,\,\,\,\,\,\,\,\,\, & \\
Z_0\arrow{rr}{\mathcal{W}_1}\arrow[hookrightarrow,"\mathcal{H}"]{uu} && {Z_1} \arrow[rr,"\mathcal{W}_1"]\arrow[hookrightarrow,"\mathcal{H}"]{uu} && {Z_2} \arrow[hookrightarrow,"\mathcal{H}"]{uu}\arrow{rr}{\mathcal{W}_1}&& {\cdots}\arrow[hookrightarrow,"\mathcal{H}"]{uu}
\end{tikzcd}
$$
\end{proof}
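The key computational step in the proof above is that mixtures of Dirac measures trace Wasserstein-1 geodesics: $s\mapsto(1-s)\delta_x+s\,\delta_y$ moves at constant speed $d(x,y)$ in $\mathcal{W}_1$. A quick numeric check on the real line (the two-point setup and the CDF-based $W_1$ routine are our own illustrative choices) is sketched below.

```python
# Illustrative check (our own 1-D setup) that s -> (1-s)*delta_x + s*delta_y
# is a Wasserstein-1 geodesic: W_1(mu_s, mu_t) = |s - t| * d(x, y).
def cdf(points, weights, g):
    # CDF of a discrete measure at g.
    return sum(w for p, w in zip(points, weights) if p <= g)

def w1(points, wa, wb, lo, hi, n=6000):
    # 1-D Wasserstein-1 distance via W_1 = integral |F_a - F_b| (midpoint rule).
    dx = (hi - lo) / n
    return sum(abs(cdf(points, wa, lo + (i + 0.5) * dx)
                   - cdf(points, wb, lo + (i + 0.5) * dx)) * dx
               for i in range(n))

x, y = 1.0, 5.0
for s, t in [(0.0, 1.0), (0.2, 0.7), (0.5, 0.5)]:
    d = w1([x, y], [1 - s, s], [1 - t, t], 0.0, 6.0)
    assert abs(d - abs(s - t) * (y - x)) < 1e-2
print("W1 geodesic property verified")
```

In the proof this linearity is what produces, at each level $n$, a point of $\gamma^{Z_{n+1}}(t)$ that is not a Dirac measure and hence not in the image of $\Psi_t^n$.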
We end this section by proving \Cref{lm:geo-char-hausdorff}:
\begin{proof}[Proof of \Cref{lm:geo-char-hausdorff}]
Without loss of generality, we assume that $0<t_*<1$. For $k=0,\ldots$ and $i=0,\ldots, 2^{k+1}$, let
$$t_i^k\coloneqq\begin{cases}\frac{i}{2^k}\cdot t_*,& 0\leq i\leq 2^k\\
t_*+\left(\frac{i}{2^k}-1\right)(1-t_*),& 2^k+1\leq i\leq 2^{k+1}\end{cases}. $$
Let $T^k=\{t_i^k\}_{i=0}^{2^{k+1}}$. Then, $t_*=t_{2^k}^k\in T^k$, $T^k\subseteq T^{k+1}$ and $T\coloneqq\cup_{k=0}^\infty T^k$ is dense in $[0,1]$. Now, for any given $k\in\mathbb{N}$, let $x_{2^k}^k\coloneqq x_{*}$. Then, there exist $x_{{2^k+1}}^k\in\gamma\left(t^k_{2^k+1}\right)$ and $x_{{2^k-1}}^k\in\gamma\left(t^k_{2^k-1}\right)$ such that
\[d_X\left(x_{{2^k+1}}^k,x_{2^k}^k\right)\leq |{t^k_{2^k+1}}-t_*|\,\rho\text{ and }d_X\left(x_{{2^k-1}}^k,x_{2^k}^k\right)\leq \left|{t^k_{2^k-1}}-t_*\right|\,\rho\]
since
\[d_\mathcal{H}^X\left(\gamma\left(t^k_{2^k+1}\right),\gamma\left(t_*\right)\right)\leq \left|{t^k_{2^k+1}}-t_*\right|\,\rho\text{ and }d_\mathcal{H}^X\left(\gamma\left(t^k_{2^k-1}\right),\gamma\left(t_*\right)\right)\leq \left|{t^k_{2^k-1}}-t_*\right|\,\rho.\]
Similarly, there exist $x_{{2^k+2}}^k\in\gamma\left(t^k_{2^k+2}\right)$ and $x_{{2^k-2}}^k\in\gamma\left(t^k_{2^k-2}\right)$ such that
\[d_X\left(x_{{2^k+2}}^k,x_{2^k+1}^k\right)\leq \left|{t^k_{2^k+2}}-t^k_{2^k+1}\right|\,\rho\text{ and }d_X\left(x_{{2^k-2}}^k,x_{2^k-1}^k\right)\leq \left|{t^k_{2^k-2}}-t^k_{2^k-1}\right|\,\rho.\]
Then, in a similar fashion, we inductively construct a sequence of points $\{x_i^k\}_{i=0}^{2^{k+1}}$ such that $x_i^k\in\gamma\left(t_i^k\right)$ and $d_X\left(x_i^k,x_{i+1}^k\right)\leq \left| t_i^k-t^k_{i+1}\right|\,\rho$. In particular, by the triangle inequality, we have $d_X\left(x_i^k,x_j^k\right)\leq \left| t_i^k-t_j^k\right|\,\rho$ for all $i,j=0,\ldots,2^{k+1}$.
Let $Z\coloneqq \mathcal{W}_1\left(X\right)$. Since $X$ is compact, $Z$ is geodesic by \Cref{thm:W-geodesic}. We identify $X$ with its image under the canonical embedding $X\hookrightarrow\mathcal{W}_1(X)=Z$, which then is a closed subset of $Z$. For each $k\in\mathbb{N}$, we interpolate between points $x_i^k$ and $x_{i+1}^k$ by a geodesic in $Z$ for all $i=0,\ldots,2^{k+1}-1$ and concatenate all such geodesics via \Cref{prop:geo-concatenate} to obtain a $\sum_{i=0}^{2^{k+1}-1}d_X\left( x_i^k,x_{i+1}^k\right)$-Lipschitz curve $\zeta_k:[0,1]\rightarrow Z$. In particular, $\zeta_k$ satisfies that $\zeta_k\left(t_i^k\right)=x_i^k$ for all $i=0,\ldots,2^{k+1}$. Moreover,
\[\sum_{i=0}^{2^{k+1}-1}d_X\left( x_i^k,x_{i+1}^k\right)\leq\sum_{i=0}^{2^{k+1}-1}\left| t_i^k-t_{i+1}^k\right|\,\rho=\rho. \]
Then, $\zeta_k$ is $\rho$-Lipschitz. By the Arzel\`a-Ascoli theorem (\Cref{thm:AA}), $\{\zeta_k\}_{k=0}^\infty$ has a subsequence, still denoted by $\{\zeta_k\}_{k=0}^\infty$, uniformly converging to a $\rho$-Lipschitz curve $\zeta:[0,1]\rightarrow Z$.
Since $\zeta_k(t_*)=x_*$ for all $k$, we have that $\zeta(t_*)=x_*$. For each $t_i^k$, since it belongs to $T^m$ for all $m\geq k$, we have $\zeta\left(t_i^k\right)=\lim_{m\rightarrow\infty}\zeta_m\left(t_i^k\right)\in \gamma\left(t_i^k\right)$ by the closedness of $\gamma\left(t_i^k\right)$; in particular, $\zeta(t)\in\gamma(t)\subseteq X$ for all $t\in T$. Now, it remains to prove that $\zeta\left(t\right)\in\gamma\left(t\right)$ for each $t\in[0,1]\backslash T$. Let $\{t_{n_k}\}_{k=0}^\infty$ be a sequence in $T=\cup_k T^k$ converging to $t\in[0,1]\backslash T$. Assume on the contrary that $\zeta(t)\not\in\gamma\left(t\right)$ and let $\delta\coloneqq d_X\left(\zeta(t),\gamma\left(t\right)\right)>0$. For $k$ large enough, we have $d_X\left(\zeta(t_{n_k}),\zeta(t)\right)<\frac{\delta}{2}$, which implies that $d_X\left(\zeta(t_{n_k}),\gamma\left(t\right)\right)>\frac{\delta}{2}$. Since $\zeta(t_{n_k})\in\gamma(t_{n_k})$, this contradicts the fact that $\lim_{k\rightarrow\infty}d_\mathcal{H}^X\left(\gamma\left(t_{n_k}\right),\gamma\left(t\right)\right)=0$. This concludes the proof.
\end{proof}
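The nested dyadic grids $T^k$ used in the proof can be checked mechanically. The Python sketch below (with the hypothetical dyadic choice $t_*=0.25$, so that all floating-point values are exact) confirms that $T^k\subseteq T^{k+1}$ and that $0$, $t_*$ and $1$ belong to every $T^k$.

```python
# Check of the dyadic time grid T^k from the proof above; t_* = 0.25 is a
# hypothetical, dyadic choice so that all floating-point values are exact.
t_star = 0.25

def grid(k):
    n = 2 ** k
    left = {i / n * t_star for i in range(n + 1)}                     # [0, t_*]
    right = {t_star + (i / n - 1) * (1 - t_star)
             for i in range(n, 2 * n + 1)}                            # [t_*, 1]
    return left | right

for k in range(6):
    Tk, Tk1 = grid(k), grid(k + 1)
    assert Tk <= Tk1                    # T^k is nested in T^{k+1}
    assert {0.0, t_star, 1.0} <= Tk     # endpoints and t_* belong to T^k
```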
\subsection{Dynamic Gromov-Hausdorff geodesics.}
In this section we extend the results on Hausdorff geodesics from the previous section to the case of Gromov-Hausdorff geodesics.
\begin{definition}[Dynamic correspondence]\label{def:dyn-cor}
For a continuous curve $\gamma:[0,1]\rightarrow\mathcal{M}$, we call $\mathfrak{R}\subseteq\Pi_{t\in[0,1]}\gamma\left(t\right)$ a \emph{dynamic correspondence} for $\gamma$ if for any $s,t\in[0,1]$, the image of $\mathfrak{R}$ under the evaluation $e_{st}:\Pi_{t\in[0,1]}\gamma\left(t\right)\rightarrow\gamma\left(s\right)\times\gamma\left(t\right)$ taking $(x_t)_{t\in[0,1]}$ to $(x_s,x_t)$ is a correspondence between $\gamma\left(s\right)$ and $\gamma\left(t\right)$. We call $\mathfrak{R}$ a \emph{dynamic optimal correspondence} if each $e_{st}\left(\mathfrak{R}\right)$ is an optimal correspondence between $\gamma\left(s\right)$ and $\gamma\left(t\right)$.
\end{definition}
In either case, we say that $\gamma$ \emph{admits} a dynamic (optimal) correspondence.
\begin{remark}[Comparison between (dynamic) coupling and (dynamic) correspondence]\label{rmk:relation-dyn-coup-corr}
The following observation from optimal transport inspires our definition of dynamic correspondence. Given a compact metric space $X$ and $\alpha,\beta\in\mathcal{P}\left(X\right)$, note that any coupling $\mu\in\mathcal{C}\left(\alpha,\beta\right)$ satisfies $\mathrm{supp}\left(\mu\right)\in\mathcal{R}\left(\mathrm{supp}\left(\alpha\right),\mathrm{supp}\left(\beta\right)\right)$ (cf. \cite[Lemma 2.2]{memoli2011gromov}), i.e., the support of a coupling is a correspondence between supports of measures. Now, for a dynamic coupling $\Pi$ between $\alpha$ and $\beta$, we know $\left(e_s,e_t\right)_\#\Pi$ is a coupling between ${(e_s)}_\#\Pi$ and ${(e_t)}_\#\Pi$ for any $s,t\in[0,1]$. So
$$\mathrm{supp}\left(\left(e_s,e_t\right)_\#\Pi\right)\in\mathcal{R}\big(\mathrm{supp}\left(({e_s})_\#\Pi\right),\mathrm{supp}\left(({e_t})_\#\Pi\right)\big).$$
\end{remark}
\begin{definition}[Dynamic geodesic]\label{def:dyn-geo}
A Gromov-Hausdorff geodesic $\gamma:[0,1]\rightarrow\mathcal{M}$ is \emph{dynamic}, if it admits a dynamic optimal correspondence $\mathfrak{R}$.
\end{definition}
It is easy to check that straight-line $d_\mathcal{GH}$ geodesics are dynamic.
\begin{proposition}
Any straight-line $d_\mathcal{GH}$ geodesic is dynamic.
\end{proposition}
\begin{proof}
We adopt the notation from \Cref{thm:str-line-geo}. We first define
$$\Tilde{\mathfrak{R}}:=\left\{\left(\left(x,y\right)\right)_{t\in[0,1]}:\,\left(x,y\right)\in R\right\}\subseteq\Pi_{t\in[0,1]} R.$$
Consider the quotient map $q:\Pi_{[0,1]}R\rightarrow\Pi_{[0,1]}R_t$ taking $\left(\left(x,y\right)\right)_{t\in[0,1]}$ to the tuple $\left(z_t\right)_{t\in[0,1]}$ such that $z_0=x$, $z_1=y$ and $z_t=\left(x,y\right)$ for any $t\in\left(0,1\right)$. Then, $\mathfrak{R}\coloneqq q\left(\Tilde{\mathfrak{R}}\right)$ is a dynamic optimal correspondence for $\gamma_R$.
\end{proof}
Now, we proceed to proving \Cref{thm:main-dyn-gh}. The proof combines \Cref{thm:main-h-realizable} with \Cref{thm:main-dyn-hausdorff} in a direct way: we transform a Gromov-Hausdorff geodesic into a Hausdorff geodesic, and its Hausdorff displacement interpolation then generates a dynamic optimal correspondence.
\thmdyngh*
\begin{proof}
Let $\gamma:[0,1]\rightarrow\mathcal{M}$ be a Gromov-Hausdorff geodesic and let $\rho\coloneqq d_\mathcal{GH}\left(\gamma\left(0\right),\gamma\left(1\right)\right)$. Without loss of generality, assume $\rho>0$. By \Cref{thm:main-h-realizable}, there exist $Z\in\mathcal{M}$ and isometric embeddings $\varphi_t:\gamma\left(t\right)\hookrightarrow Z$ for $t\in[0,1]$ such that $d_\mathcal{H}^Z\big(\varphi_s\left(\gamma\left(s\right)\right),\varphi_t\left(\gamma\left(t\right)\right)\big)=d_\mathcal{GH}\left(\gamma\left(s\right),\gamma\left(t\right)\right)$ for $s,t\in[0,1]$.
Since $t\mapsto\varphi_t(\gamma(t))$ is a Hausdorff geodesic in $Z$, by \Cref{thm:main-dyn-hausdorff} there exists a Hausdorff displacement interpolation $\mathfrak{D}\subseteq\mathfrak{L}\big(\varphi_0(\gamma(0)),\varphi_1(\gamma(1))\big)$ such that $e_t(\mathfrak{D})=\varphi_t(\gamma(t))$ for each $t\in[0,1]$. Now, let
$$\mathfrak{R}\coloneqq\left\{\left(x_t\right)_{t\in[0,1]}\in\Pi_{t\in[0,1]}\gamma(t):\,\exists\zeta\in\mathfrak{D}\text{ such that }\varphi_t(x_t)=\zeta(t),\,\forall t\in[0,1]\right\}. $$
It is obvious that $\mathfrak{R}\neq\emptyset$ and $e_t(\mathfrak{R})=\gamma(t)$ for any $t\in[0,1]$. Writing $p_s,p_t$ for the coordinate projections of $\gamma(s)\times\gamma(t)$, we have $e_s=p_s\circ e_{st}$ and $e_t=p_t\circ e_{st}$ for $s,t\in[0,1]$, so $p_s(e_{st}(\mathfrak{R}))=\gamma(s)$ and $p_t(e_{st}(\mathfrak{R}))=\gamma(t)$; hence $e_{st}(\mathfrak{R})$ is a correspondence between $\gamma(s) $ and $\gamma(t)$. Therefore, $\mathfrak{R}$ is a dynamic correspondence for $\gamma$.
Now, we show that $\mathfrak{R}$ is optimal. In fact, for any $s,t\in[0,1]$,
\begin{align*}
e_{st}(\mathfrak{R})&=\left\{\left(x_s,x_t\right)\in\gamma\left(s\right)\times \gamma\left(t\right):\,\exists\zeta\in\mathfrak{D}\text{ such that }\varphi_s(x_s)=\zeta(s)\text{ and }\varphi_t(x_t)=\zeta(t)\right\}\\
&\subseteq\{\left(x_s,x_t\right)\in\gamma\left(s\right)\times \gamma\left(t\right):\,d_Z\left(\varphi_s(x_s),\varphi_t(x_t)\right)\leq |s-t|\,\rho\}=:{R}_{st}.
\end{align*}
For any $\left(x_s,x_t\right),\left(x_s',x_t'\right)\in{R}_{st}$, by identifying $x_t$ with $\varphi_t(x_t)\in Z$, we have
\begin{align*}
|d_Z\left(x_s,x_s'\right)-d_Z\left(x_t,x_t'\right)|&\leq |d_Z\left(x_s,x_s'\right)-d_Z\left(x_s',x_t\right)|+|d_Z\left(x_s',x_t\right)-d_Z\left(x_t,x_t'\right)|\\
&\leq d_Z\left(x_s,x_t\right)+d_Z\left(x_s',x_t'\right)\\
&\leq2|s-t|\,\rho\\
&=2d_\mathcal{GH}\left(\gamma\left(s\right),\gamma\left(t\right)\right).
\end{align*}
Therefore, ${R}_{st}$ is an optimal correspondence between $\gamma\left(s\right)$ and $\gamma\left(t\right)$; since $e_{st}(\mathfrak{R})\subseteq{R}_{st}$ is itself a correspondence, it is optimal as well. Therefore, $\mathfrak{R}$ is a dynamic optimal correspondence for $\gamma$.
\end{proof}
\section{Discussion}\label{sec:discussion}
Gromov-Hausdorff and Gromov-Wasserstein geodesics have been studied in \cite{ivanov2016gromov,chowdhury2018explicit,ivanov2019hausdorff,sturm2006geometry,sturm2012space}. However, those papers did not address the \emph{characterization} of such geodesics. In this paper, we have proved not only that Gromov-Hausdorff geodesics are actually Hausdorff geodesics but also that, in an analogous sense, a large collection of Gromov-Wasserstein geodesics are Wasserstein geodesics. We further drew structural connections between Hausdorff geodesics and Wasserstein geodesics and studied the dynamic characterization of Gromov-Hausdorff geodesics.
\subsection*{Some open problems}
Besides \Cref{conj:gw} about Wasserstein-realizable Gromov-Wasserstein geodesics, there are other related unsolved problems which we summarize next.
\paragraph{Geodesic hull.} \Cref{lm:union-geo-closed} states that for any $X$-Hausdorff-realizable Gromov-Hausdorff geodesic $\gamma$, the union $\mathcal{G}_X\coloneqq\cup_{t\in[0,1]}\gamma\left(t\right)\subseteq X$ also Hausdorff-realizes $\gamma$. Intuitively speaking, such a $\mathcal{G}_X$ is a ``minimal'' ambient space containing $\gamma$ without any redundant points. It is tempting to call $\mathcal{G}_X$ ``the geodesic hull'' of $\gamma$. In order to make sense of this nomenclature, we need to understand the relation between $\mathcal{G}_X$ and $\mathcal{G}_Y$ whenever $\gamma$ is also $Y$-Hausdorff-realizable for some $Y\in\mathcal{M}$ which is not isometric to $X$. Is it true that $\mathcal{G}_X\cong\mathcal{G}_Y$ for any such $Y$? Even if this is not the case, it still remains interesting to unravel commonalities between elements of the family $$\mathcal{M}(\gamma):=\{\mathcal{G}_X:\, \gamma \,\mbox{is $X$-Hausdorff-realizable}\}$$ where each $X$ Hausdorff-realizes the given Gromov-Hausdorff geodesic $\gamma$. For example: what is the Gromov-Hausdorff diameter of $\mathcal{M}(\gamma)$?
\paragraph{Beyond geodesics.}
One of the main insights behind our characterization of Gromov-Hausdorff geodesics is the observation in \Cref{coro:finite-seq-h-dgh} that any finite collection of compact metric spaces along a given Gromov-Hausdorff geodesic is embeddable into a common ambient space in a way such that pairwise Hausdorff distances agree with the corresponding Gromov-Hausdorff distances.
A natural question is whether such an ambient space still exists for an arbitrary finite collection of compact metric spaces that do not necessarily reside along the trace of a Gromov-Hausdorff geodesic. The following conjecture pertains to the simplest unsolved case: the one involving only three spaces.
\begin{conjecture}\label{conj:3-haus}
Given three compact metric spaces $X_1,X_2,X_3\in\mathcal{M}$, there exist $Z\in\mathcal{M}$ and isometric embeddings $\varphi_i:X_i\hookrightarrow Z$ such that $d_\mathcal{H}^Z\left(\varphi_i\left(X_i\right),\varphi_j\left(X_j\right)\right)=d_\mathcal{GH}\left(X_i,X_j\right)$ for all $i,j\in\{1,2,3\}$.
\end{conjecture}
A similar question can be posed in relation to dynamic optimal correspondences:
\begin{conjecture}\label{conj:3-cor}
Given three compact metric spaces $X_1,X_2,X_3\in\mathcal{M}$, there exists $R\subseteq X_1\times X_2\times X_3$ such that $e_{ij}\left(R\right)$ is an \emph{optimal} correspondence between $X_i$ and $X_j$ for all $i,j\in\{1,2,3\}$.
\end{conjecture}
It also seems interesting to elucidate the relationship between these two conjectures: note that when dealing with only two spaces, in the proof of \Cref{lm:dgh_hausdorff-realizable} we used the existence of an optimal correspondence to establish the existence of an ambient space which realizes the Gromov-Hausdorff distance between the two given spaces; it is then plausible that \Cref{conj:3-cor} could imply \Cref{conj:3-haus}.
\section*{Acknowledgements}
This work was partially supported by the NSF through grants DMS-1723003, CCF-1740761, and CCF-1526513. {We thank Professor Karl-Theodor Sturm for sharing \cite{sturm2020email} with us.} We also thank Qingsong Wang for pointing out the simple counterexample (\Cref{ex:counter}) to \Cref{claim:false claim}.
\paragraph{Declarations of interest:} None.
\section{Introduction}
\label{sec-intro}
With emerging interest in cyber-physical systems \cite{Humayed2017cyber} like networked control systems, robotics and autonomous vehicles, stabilization and safety are two fundamental objectives, which require dynamic systems to achieve the stability (or tracking/synchronization) objective on the one hand and to satisfy safety constraints on the other hand. In particular, the safety objective takes priority. For safety-critical systems, it is imperative to keep them within the safe set while controlling them. Consequently, the design of the stabilizing controller must comply with state (or input) constraints to ensure safety. Similar to control Lyapunov functions (CLFs) proposed for the stabilization objective \cite{Sontag1989universal, Jankovic2001control, Pepe2017control}, safety constraints can be specified in terms of set invariance and verified via control barrier functions (CBFs) \cite{Ames2016control, Romdlony2016stabilization}. CLFs and CBFs have been applied to deal with different objectives for diverse dynamical systems \cite{Ogren2001control, Panagou2015distributed, Lindemann2018control}, and have further been combined to study the stabilization and safety objectives simultaneously; the combination can be either implicit \cite{Ngo2005integrator} or explicit \cite{Ames2016control, Jankovic2018robust, Romdlony2016stabilization} via different techniques.
In the fields of engineering, biology and physics, time delays are frequently encountered due to information acquisition and computation for control decisions and executions \cite{Sipahi2011stability}, and may induce many undesired phenomena like oscillation, instability and performance deterioration \cite{Gao2019stability}. Accordingly, both stability and stabilization of time-delay systems have been studied extensively over the past decades \cite{Fridman2014tutorial}. Since time delays cause a violation of the monotonic decrease conditions of classic Lyapunov functions, two ways to extend the classic Lyapunov-based method are \cite{Ren2019krasovskii}: (i) the Krasovskii approach, based on Lyapunov-Krasovskii functionals which are positive definite and whose derivatives are negative definite along system solutions; (ii) the Razumikhin approach, based on Lyapunov-Razumikhin functions which are positive definite and whose derivatives are negative definite under the Razumikhin condition \cite{Teel1998connections}. These two approaches have been applied successfully in stability analysis and controller design for time-delay systems \cite{Sipahi2011stability, Ren2018vector, Gao2019stability}. On the other hand, in the aforementioned areas involving time delays, numerous dynamic systems are safety-critical, which creates the need to address the safety objective for time-delay systems \cite{Prajna2005methods, Jankovic2018control}. However, most existing results focus on stability, stabilization and robustness rather than on safety, which motivates us to investigate how to guarantee the safety objective of time-delay systems.
In this letter, we focus on nonlinear systems with state delays and follow the Razumikhin approach to investigate the stabilization and safety problems. First, based on the notion of steepest descent feedback \cite{Pepe2014stabilization}, we propose a novel control Lyapunov-Razumikhin function (CLRF) to overcome the verification of the Razumikhin condition and to facilitate the controller design for time-delay systems. With the proposed CLRF, the classic small control property (SCP) is extended to the time-delay case. Therefore, based on the Razumikhin-type CLF and SCP, a closed-form controller is designed to guarantee the stabilization objective. Second, following a similar mechanism and based on the unsafe set (e.g., obstacles and forbidden states) \cite{Prajna2005methods}, the control barrier-Razumikhin function (CBRF) is proposed for time-delay systems for the first time, and the safety controller is further derived explicitly in closed form. Finally, to achieve the stabilization and safety objectives simultaneously, the proposed CLRF and CBRF are merged to combine both stabilization and safety control design, which results in a novel control design method via the development of the Razumikhin-type control Lyapunov-barrier function (CLBF). In particular, we show how to construct the Razumikhin-type CLBF via the proposed CLRF and CBRF. In conclusion, our main contributions are two-fold: (i) by proposing the CLRF and CBRF, both stabilizing and safety controllers are derived explicitly, which extends Sontag's formula \cite{Sontag1989universal} and the existing results \cite{Ames2016control, Jankovic2018robust} to the time-delay case; (ii) the proposed CLRF and CBRF are merged together such that the stabilizing control and the safety control can be combined in the sense that the stabilization and safety objectives are ensured simultaneously for time-delay systems.
The remainder of this paper is organized as follows. Preliminaries are presented in Section \ref{sec-nonconsys}. All Razumikhin-type control functions are proposed in Section \ref{sec-Razumikhintype}. A simulation is presented in Section \ref{sec-examples}, followed by conclusions and future research in Section \ref{sec-conclusion}. All proofs are located in the Appendix.
\section{Preliminaries}
\label{sec-nonconsys}
Let $\mathbb{R}:=(-\infty, +\infty); \mathbb{R}^{+}:=[0, +\infty); \mathbb{N}:=\{0, 1, \ldots\}$ and $\mathbb{N}^{+}:=\{1, 2, \ldots\}$. $\|x\|$ denotes the Euclidean norm of the vector $x\in\mathbb{R}^{n}$, and $(a,b):=(a^{\top}, b^{\top})^{\top}$ for $a, b\in\mathbb{R}^{n}$. Given a set $\mathbb{C}\subset\mathbb{R}^{n}$, $\partial\mathbb{C}$ is the boundary of $\mathbb{C}$; $\textrm{Int}(\mathbb{C})$ is the interior of $\mathbb{C}$; $\overbar{\mathbb{C}}$ is the closure of $\mathbb{C}$. Given $\delta>0$ and $\mathbf{x}\in\mathbb{R}^{n}$, an open ball centered at $\mathbf{x}$ with radius $\delta$ is denoted by $\mathbf{B}(\mathbf{x}, \delta):=\{x\in\mathbb{R}^{n}: \|x-\mathbf{x}\|<\delta\}$; $\mathbf{B}(\delta):=\mathbf{B}(0, \delta)$. $\mathcal{C}([a, b], \mathbb{R}^{n})$ denotes the class of piecewise continuous functions mapping $[a, b]$ to $\mathbb{R}^{n}$; $\mathcal{C}(\mathbb{R}^{n}, \mathbb{R}^{p})$ denotes the class of continuously differentiable functions mapping $\mathbb{R}^{n}$ to $\mathbb{R}^{p}$. $\textmd{Id}$ is the identity function, and $\alpha\circ\beta(v):=\alpha(\beta(v))$ for any $\alpha, \beta\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R}^{n})$. A function $\alpha: \mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ is of class $\mathcal{K}$ if it is continuous, $\alpha(0)=0$, and strictly increasing; it is of class $\mathcal{K}_{\infty}$ if it is of class $\mathcal{K}$ and unbounded. A function $\beta: \mathbb{R}^{+}\times\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ is of class $\mathcal{KL}$ if $\beta(s, t)$ is of class $\mathcal{K}$ in $s$ for each fixed $t\geq0$, and, for each fixed $s\geq0$, $\beta(s, t)$ is decreasing in $t$ with $\beta(s, t)\rightarrow0$ as $t\rightarrow\infty$. A function $V: \mathbb{R}^{n}\rightarrow\mathbb{R}$ is called \textit{proper} if the sublevel set $\{x\in\mathbb{R}^{n}: V(x)\leq c\}$ is compact for all $c\in\mathbb{R}$, or equivalently, $V$ is radially unbounded.
\subsection{Time-Delay Control Systems}
\label{subsec-nonsystem}
In this letter, we consider nonlinear time-delay control systems with the following dynamics:
\begin{align}
\label{eqn-1}
\begin{aligned}
\dot{x}(t)&=f(x_{t})+g(x_{t})u, &\quad& t>0, \\
x(t)&=\xi(t), &\quad& t\in[-\Delta, 0],
\end{aligned}
\end{align}
where $x\in\mathbb{R}^{n}$ is the system state, $x_{t}\in\mathcal{C}([-\Delta, 0], \mathbb{R}^{n})$ defined by $x_{t}(\theta):=x(t+\theta)$ for $\theta\in[-\Delta, 0]$ is the time-delay state, and $\Delta>0$ is the upper bound of the time delays. The initial state is $\xi\in\mathcal{C}([-\Delta, 0], \mathbb{X}_{0})$ with $\mathbb{X}_{0}\subset\mathbb{R}^{n}$ and $\|\xi\|_{\text{d}}:=\sup_{\theta\in[-\Delta, 0]}\|\xi(\theta)\|$ being bounded. The control input $u$ takes values in the set $\mathbb{U}\subset\mathbb{R}^{m}$, and the input function is not specified explicitly since it may depend on the current state and/or the time-delay trajectory. Assume that the functionals $f: \mathcal{C}([-\Delta, 0], \mathbb{R}^{n})\rightarrow\mathbb{R}^{n}$ and $g: \mathcal{C}([-\Delta, 0], \mathbb{R}^{n})\rightarrow\mathbb{R}^{n\times m}$ are continuous and locally Lipschitz, which guarantees the existence of a unique solution to the system \eqref{eqn-1}; see \cite[Section 2]{Hale1993introduction}. Also, let $f(0)=0$ and $g(0)=0$, so that $x(t)\equiv0$ for all $t>0$ is a trivial solution of the system \eqref{eqn-1}. To consider the stabilization problem of the system \eqref{eqn-1}, we assume that the origin is included in the initial set $\mathbb{X}_{0}$.
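As a purely illustrative aside (not part of the letter's development), the role of the history segment $x_{t}$ can be made concrete with a small numerical sketch: a forward-Euler scheme for a delay differential equation keeps a buffer of past samples spanning one delay window. The scalar test system below is an assumption chosen only for illustration.

```python
import numpy as np

def euler_delay(f, xi, delta, dt, t_end):
    """Approximate x'(t) = f(x(t), x(t - delta)) by forward Euler.

    xi    : callable giving the initial history on [-delta, 0]
    delta : the (constant) time delay
    Returns the time grid and the scalar trajectory, history included.
    """
    n_hist = int(round(delta / dt))            # samples spanning one delay window
    n_steps = int(round(t_end / dt))
    # pre-fill the buffer with the initial history xi(theta), theta in [-delta, 0]
    x = [xi(-delta + k * dt) for k in range(n_hist + 1)]
    for _ in range(n_steps):
        x_now, x_lag = x[-1], x[-1 - n_hist]   # x(t) and x(t - delta)
        x.append(x_now + dt * f(x_now, x_lag))
    t = np.arange(-n_hist, n_steps + 1) * dt
    return t, np.array(x)

# illustrative stable test system: x'(t) = -x(t) + 0.25 x(t - 1), constant history 1
t, x = euler_delay(lambda xn, xl: -xn + 0.25 * xl, lambda th: 1.0, 1.0, 0.01, 10.0)
```

Since the delayed gain $0.25$ is dominated by the instantaneous decay rate, the trajectory converges despite the delay; the same buffer mechanics reproduce the delay-induced instability mentioned above when the delayed gain dominates.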
\begin{definition}[\cite{Sastry2013nonlinear}]
\label{def-1}
Given the control input $u\in\mathbb{U}$, the system \eqref{eqn-1} is \textit{globally asymptotically stable (GAS)} if there exists $\beta\in\mathcal{KL}$ such that $\|x(t)\|\leq\beta(\|\xi\|_{\text{d}}, t)$ for all $t\geq0$ and all bounded $\xi\in\mathcal{C}([-\Delta, 0], \mathbb{R}^{n})$; and the system \eqref{eqn-1} is \textit{semi-globally asymptotically stable (semi-GAS)} if there exists $\beta\in\mathcal{KL}$ such that $\|x(t)\|\leq\beta(\|\xi\|_{\text{d}}, t)$ for all $t\geq0$ and all $\xi\in\mathcal{C}([-\Delta, 0], \mathbb{X}_{0})$.
\end{definition}
From Definition \ref{def-1}, the \textit{stabilization control} is to design a feedback controller such that the closed-loop system is GAS. To study the system safety, some notations are defined below. For the system \eqref{eqn-1}, its unsafe set is denoted as an open set $\mathbb{D}\subset\mathbb{R}^{n}$. The system \eqref{eqn-1} is \textit{safe}, if $x(t)\notin\overbar{\mathbb{D}}$ for all $t\geq-\Delta$. Hence, $\mathbb{X}_{0}\cap\mathbb{D}=\varnothing$ is assumed such that $\xi(\theta)\notin\overbar{\mathbb{D}}$ for all $\theta\in[-\Delta, 0]$. The \textit{safety control} is to design a feedback controller to ensure the safety of the closed-loop system.
Since the safety and stabilization objectives of time-delay systems cannot be achieved via classic CLFs and CBFs, our goal is to implement the Razumikhin approach to propose novel types of CLFs and CBFs for the system \eqref{eqn-1}.
\section{Main Results}
\label{sec-Razumikhintype}
In this section, we follow the Razumikhin approach to propose control Lyapunov and barrier functions for time-delay systems. To this end, we first propose a novel control Lyapunov-Razumikhin function for the stabilization objective, then a novel control barrier-Razumikhin function for the safety objective, and finally combine the proposed Razumikhin-type control functions to study the stabilization and safety objectives simultaneously.
\subsection{Control Lyapunov-Razumikhin Functions}
\label{subsec-clrf}
We start with recalling the following control Lyapunov-Razumikhin function from \cite{Jankovic2001control, Pepe2017control}.
\begin{definition}
\label{def-2}
For the system \eqref{eqn-1}, a function $V_{\textrm{c}}\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R}^{+})$ is called a \textit{control Lyapunov-Razumikhin function (CLRF-I)}, if
\begin{enumerate}[(i)]
\item there exist $\alpha_{1}, \alpha_{2}\in\mathcal{K}_{\infty}$ such that $\alpha_{1}(\|x\|)\leq V_{\textrm{c}}(x)\leq\alpha_{2}(\|x\|)$ for all $x\in\mathbb{R}^{n}$;
\item there exist $\gamma_{\textrm{c}}, \rho_{\textrm{c}}\in\mathcal{K}$ with $\rho_{\textrm{c}}(v)>v$ for all $v>0$ such that for all $\phi\in\mathcal{C}([-\Delta, 0], \mathbb{R}^{n})$ with $\phi(0)=x$, if $\rho_{\textrm{c}}(V_{\textrm{c}}(x))\geq\|V_{\textrm{c}}(\phi)\|_{\text{d}}$, then $\inf_{u\in\mathbb{U}}\{L_{f}V_{\textrm{c}}(\phi)+L_{g}V_{\textrm{c}}(\phi)u\}\leq-\gamma_{\textrm{c}}(V_{\textrm{c}}(x))$,
\end{enumerate}
where $\|V_{\textrm{c}}(\phi)\|_{\text{d}}:=\sup_{\theta\in[-\Delta, 0]}V_{\textrm{c}}(\phi(\theta))$, $L_{f}V_{\textrm{c}}(\phi):=\frac{\partial V_{\textrm{c}}(x)}{\partial x}f(\phi)$ and $L_{g}V_{\textrm{c}}(\phi):=\frac{\partial V_{\textrm{c}}(x)}{\partial x}g(\phi)$.
\end{definition}
In Definition \ref{def-2}, the effects of time delays are shown via the Razumikhin condition: $\rho_{\textrm{c}}(V_{\textrm{c}}(x))\geq\|V_{\textrm{c}}(\phi)\|_{\text{d}}$, which can be written equivalently as $\rho_{\textrm{c}}(V_{\textrm{c}}(x))\geq V_{\textrm{c}}(\phi(\theta))$ for all $\theta\in[-\Delta, 0]$; see \cite{Pepe2017control, Ren2018vector}. In particular, if $L_{g}V_{\textrm{c}}(\phi)\equiv0$, then Definition \ref{def-2} is reduced to the one in \cite{Jankovic2001control}.
In the delay-free case \cite{Sontag1989universal}, the existence of a continuous controller is verified via the small control property (SCP). However, in the time-delay case, the potential violation of the Razumikhin condition introduces additional difficulties into the controller design, and thus the SCP is not directly available. On the other hand, the existing constructions of the stabilizing controller are based on optimization theory \cite{Sepulchre2012constructive} and the trajectory-based approach \cite{Jankovic2001control}, and a closed-form continuous controller cannot be expressed easily and explicitly. In the following, to avoid the verification of the Razumikhin condition and to establish a continuous controller explicitly, we propose an alternative CLRF, which is based on the steepest descent feedback controller \cite{Clarke2010discontinuous, Pepe2017control}.
\begin{definition}
\label{def-3}
For the system \eqref{eqn-1}, a function $V\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R}^{+})$ is called a \textit{control Lyapunov-Razumikhin function (CLRF-II)}, if item (i) in Definition \ref{def-2} holds, and there exist $\gamma_{\textsf{v}}, \eta_{\textsf{v}}, \mu_{\textsf{v}}\in\mathbb{R}^{+}$ such that $\gamma_{\textsf{v}}>\eta_{\textsf{v}}$, and for any nonzero $\phi\in\mathcal{C}([-\Delta, 0], \mathbb{R}^{n})$ with $\phi(0)=x$,
\begin{align}
\label{eqn-2}
&\inf_{u\in\mathbb{U}}\{L_{f}V(\phi)+L_{g}V(\phi)u\}<-\gamma_{\textsf{v}}V(x)+\eta_{\textsf{v}}\|e^{\mu_{\textsf{v}}\theta}V(\phi)\|_{\text{d}},
\end{align}
where $\|e^{\mu_{\textsf{v}}\theta}V(\phi)\|_{\text{d}}:=\sup_{\theta\in[-\Delta, 0]}e^{\mu_{\textsf{v}}\theta}V(\phi(\theta))$.
\end{definition}
Different from Definition \ref{def-2}, the Razumikhin condition is not needed in \eqref{eqn-2}, which will be applied to facilitate the controller design afterwards. The parameter $\mu_{\textsf{v}}\in\mathbb{R}^{+}$ is introduced to increase the flexibility of \eqref{eqn-2}, and it can simply be set to 0. The following lemma shows that a CLRF-II is a CLRF-I under some reasonable conditions.
\begin{lemma}
\label{thm-1}
Consider the system \eqref{eqn-1} with a CLRF-II $V\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R}^{+})$. If there exists $\rho\in\mathcal{K}$ with $\rho(v(0))\geq\|v\|_{\text{d}}$ for all $v\in\mathcal{C}([-\Delta, 0], \mathbb{R}^{n})$ such that $\gamma_{\textsf{v}}-\eta_{\textsf{v}}\circ\rho\in\mathcal{K}$, then $V$ is a CLRF-I with $\gamma_{\textrm{c}}=\gamma_{\textsf{v}}-\eta_{\textsf{v}}\circ\rho$ and $\rho_{\textrm{c}}=\rho$.
\end{lemma}
Lemma \ref{thm-1} is similar to Remark 4 in \cite{Pepe2017control}, and the proof is omitted here. We emphasize that the converse is not necessarily valid. The reason is that, when the Razumikhin condition is not satisfied, the evolution of a CLRF-I is unknown from Definition \ref{def-2}, whereas the evolution of a CLRF-II is bounded via \eqref{eqn-2}. With the CLRF-II, the SCP is extended to the time-delay case, as presented below.
\begin{definition}
\label{def-4}
Consider the system \eqref{eqn-1} admitting the CLRF-II $V\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R}^{+})$, the system \eqref{eqn-1} is said to satisfy the \textit{Razumikhin-type small control property (R-SCP)}, if for arbitrary $\varepsilon>0$, there exists $\delta>0$ such that for any nonzero $\phi\in\mathcal{C}([-\Delta, 0], \mathbf{B}(\delta))$ with $\phi(0)=x$, there exists $u\in\mathbf{B}(\varepsilon)$ such that $L_{f}V(\phi)+L_{g}V(\phi)u<-\gamma_{\textsf{v}}V(x)+\eta_{\textsf{v}}\|e^{\mu_{\textsf{v}}\theta}V(\phi)\|_{\text{d}}$.
\end{definition}
Following the Razumikhin approach, Definition \ref{def-4} extends the SCP to the time-delay case. Note that the R-SCP is required for all nonzero $\phi\in\mathcal{C}([-\Delta, 0], \mathbf{B}(\delta))$ due to time delays. With the CLRF-II and the R-SCP, a closed-form controller is derived explicitly in the next theorem such that the stabilization objective is achieved for time-delay systems. This theorem extends Sontag's formula in \cite{Sontag1989universal} to the time-delay case, and the proof is presented in Appendix \ref{asubsec-pf2}.
\begin{theorem}
\label{thm-2}
If the time-delay system \eqref{eqn-1} admits a CLRF-II $V\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R}^{+})$ and satisfies the R-SCP, then the controller $u(\phi):=\kappa(\lambda, \mathfrak{a}_{\textsf{v}}(\phi), (L_{g}V(\phi))^{\top})$ defined as
\begin{equation}
\label{eqn-3}
\kappa(\lambda, p, q):=\left\{\begin{aligned}&\frac{p+\sqrt{p^{2}+\lambda\|q\|^{4}}}{-\|q\|^{2}}q, &\text{ if }& q\neq 0, \\
&0, &\text{ if }& q=0, \end{aligned}\right.
\end{equation}
with $\lambda>0$ and $\mathfrak{a}_{\textsf{v}}(\phi):=L_{f}V(\phi)+\gamma_{\textsf{v}}V(x)-\eta_{\textsf{v}}\|e^{\mu_{\textsf{v}}\theta}V(\phi)\|_{\text{d}}$, is continuous at the origin and ensures the GAS of the closed-loop system.
\end{theorem}
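As a numerical illustration (an informal sketch, not the authors' code), the universal formula \eqref{eqn-3} is straightforward to evaluate, and its defining decrease property can be checked directly: for $q\neq0$, $p+q^{\top}\kappa(\lambda, p, q)=-\sqrt{p^{2}+\lambda\|q\|^{4}}<0$.

```python
import numpy as np

def kappa(lam, p, q):
    """Sontag-type universal formula (3).

    Returns 0 when q = 0, and otherwise
    -(p + sqrt(p^2 + lam * ||q||^4)) / ||q||^2 * q.
    """
    q = np.atleast_1d(np.asarray(q, dtype=float))
    nq2 = float(q @ q)                      # ||q||^2
    if nq2 == 0.0:
        return np.zeros_like(q)
    return -(p + np.sqrt(p * p + lam * nq2 * nq2)) / nq2 * q

# with q != 0, the closed-loop term p + q^T u equals -sqrt(p^2 + lam ||q||^4) < 0
lam, p, q = 2.0, 1.5, np.array([1.0, -2.0])
u = kappa(lam, p, q)
```

In the setting of the theorem, $p$ plays the role of $\mathfrak{a}_{\textsf{v}}(\phi)$ and $q$ that of $(L_{g}V(\phi))^{\top}$, so the negative closed-loop term above is what drives the decrease of $V$ along solutions.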
\subsection{Razumikhin-type Control Barrier Function}
\label{subsec-cbrf}
A commonly-used approach to investigate the safety specification is based on CBFs, which are generally defined via the safe set \cite{Ames2016control}. However, for physical systems like robotic systems, the workspace and unsafe set are known \textit{a priori}. Hence, an intuitive way to define the CBF is based on the unsafe set \cite{Wieland2007constructive}. To be specific, given the unsafe set $\mathbb{D}\subset\mathbb{R}^{n}$, a function $B\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R})$ is associated such that
\begin{align}
\label{eqn-4}
\mathbb{D}&\subseteq\{x\in\mathbb{R}^{n}: B(x)>0\}.
\end{align}
Moreover, with the function $B\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R})$, the Razumikhin-type CBF is proposed below for time-delay systems.
\begin{definition}
\label{def-5}
Consider the system \eqref{eqn-1} with the unsafe set $\mathbb{D}\subset\mathbb{R}^{n}$, a function $B\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R})$ is called a \textit{control barrier-Razumikhin function (CBRF)}, if there exist $\mu_{\textsf{b}}\geq0$ and $\gamma_{\textsf{b}}>\eta_{\textsf{b}}\geq0$ such that $\mathbb{S}_{\textsf{b}}:=\{x\in\mathbb{R}^{n}: B(x)\leq0\}\neq\varnothing$, and for any nonzero $\phi\in\mathcal{C}([-\Delta, 0], \mathbb{R}^{n}\setminus\mathbb{D})$ with $\phi(0)=x$,
\begin{align}
\label{eqn-5}
&\inf_{u\in\mathbb{U}}\{L_{f}B(\phi)+L_{g}B(\phi)u\}<-\gamma_{\textsf{b}}B(x)+\eta_{\textsf{b}}\|e^{\mu_{\textsf{b}}\theta}B(\phi)\|_{\text{d}}.
\end{align}
\end{definition}
In Definition \ref{def-5}, the non-emptiness of the set $\mathbb{S}_{\textsf{b}}$ guarantees the non-emptiness of the safe set and hence the safety of the initial state, and the condition \eqref{eqn-5} is motivated by \eqref{eqn-2}. Exploiting the similarity between \eqref{eqn-5} and \eqref{eqn-2}, the following theorem presents the closed-form safety controller.
\begin{theorem}
\label{thm-3}
Given the unsafe set $\mathbb{D}\subset\mathbb{R}^{n}$, if the system \eqref{eqn-1} admits a CBRF $B\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R})$ and satisfies the R-SCP, then the controller $u(\phi):=\kappa(\lambda, \mathfrak{a}_{\textsf{b}}(\phi), (L_{g}B(\phi))^{\top})$ with the function $\kappa$ defined in \eqref{eqn-3} and $\mathfrak{a}_{\textsf{b}}(\phi):=L_{f}B(\phi)+\gamma_{\textsf{b}}B(x)-\eta_{\textsf{b}}\|e^{\mu_{\textsf{b}}\theta}B(\phi)\|_{\text{d}}$, is continuous at the origin and ensures the safety of the closed-loop system with the initial condition $\xi\in\mathcal{C}([-\Delta, 0], \mathbb{S}_{\textsf{b}})$.
\end{theorem}
The proof of Theorem \ref{thm-3} is presented in Appendix \ref{asubsec-pf3}. Note that from \eqref{eqn-4}, $B(x)>0$ may hold for some $x\in\mathbb{R}^{n}\setminus\mathbb{D}$. In this case, if the boundaries of $\mathbb{D}$ and $\mathbb{S}_{\textsf{b}}$ do not intersect such that $\mathbb{S}_{\textsf{b}}$ is entered first, that is, $\overline{\mathbb{R}^{n}\setminus(\mathbb{D}\cup\mathbb{S}_{\textsf{b}})}\cap\overbar{\mathbb{D}}=\varnothing$, then the initial condition is allowed to be in $\mathcal{C}([-\Delta, 0], \mathbb{R}^{n}\setminus\mathbb{D})$. In particular, $\overline{\mathbb{R}^{n}\setminus(\mathbb{D}\cup\mathbb{S}_{\textsf{b}})}\cap\overbar{\mathbb{D}}=\varnothing$ implies $\partial(\mathbb{D}\cup\mathbb{S}_{\textsf{b}})\cap\partial\mathbb{D}=\varnothing$, and thus $(\mathbb{D}+\mathbf{B}(\varepsilon))\cap\partial\mathbb{D}\subset\mathbb{S}_{\textsf{b}}$ for arbitrarily small $\varepsilon>0$.
\subsection{Razumikhin-type Control Lyapunov-Barrier Function}
\label{subsec-combineA}
To incorporate the stabilization and safety objectives simultaneously, optimization techniques have been extensively applied \cite{Jankovic2018robust, Ames2016control} to merge the CLF and CBF. However, it is not easy to solve time-delay optimization problems to derive closed-form analytical solutions \cite{Wu2019new}. To avoid this issue, and motivated by the existing results \cite{Romdlony2016stabilization}, a novel Razumikhin-type control function is proposed below to study the stabilization and safety objectives simultaneously.
\begin{definition}
\label{def-6}
Consider the system \eqref{eqn-1} with the unsafe set $\mathbb{D}\subset\mathbb{R}^{n}$, a proper and lower-bounded function $W\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R})$ is called a \textit{control Lyapunov-barrier-Razumikhin function (CLBRF)}, if
\begin{enumerate}[(i)]
\item $W: \mathbb{R}^{n}\rightarrow\mathbb{R}$ is positive on the set $\mathbb{D}$;
\item for any nonzero $\phi\in\mathcal{C}([-\Delta, 0], \mathbb{R}^{n}\setminus\mathbb{D})$, there exist $\gamma_{\textsf{w}}>\eta_{\textsf{w}}\geq0$ and $\mu_{\textsf{w}}\geq0$ such that $\inf_{u\in\mathbb{U}}\{L_{f}W(\phi)+L_{g}W(\phi)u\}<-\gamma_{\textsf{w}}W(x)+\eta_{\textsf{w}}\|e^{\mu_{\textsf{w}}\theta}W(\phi)\|_{\text{d}}$;
\item $\mathbb{S}_{\textsf{w}}=\{x\in\mathbb{R}^{n}: W(x)\leq0\}\neq\varnothing$;
\item $\overline{\mathbb{R}^{n}\setminus(\mathbb{D}\cup\mathbb{S}_{\textsf{w}})}\cap\overbar{\mathbb{D}}=\varnothing$.
\end{enumerate}
\end{definition}
With the proposed CLBRF and R-SCP, the next theorem shows the controller design to ensure the safety and semi-GAS of the system \eqref{eqn-1} simultaneously, and the proof is given in Appendix \ref{asubsec-pf4}.
\begin{theorem}
\label{thm-4}
Given the unsafe set $\mathbb{D}\subset\mathbb{R}^{n}$, if the system \eqref{eqn-1} admits a CLBRF $W\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R})$ and satisfies the R-SCP, then the controller $u(\phi)=\kappa(\lambda, \mathfrak{a}_{\textsf{w}}(\phi), (L_{g}W(\phi))^{\top})$ with the function $\kappa$ defined in \eqref{eqn-3}, $\lambda>0$ and $\mathfrak{a}_{\textsf{w}}(\phi):=L_{f}W(\phi)+\gamma_{\textsf{w}}W(x)-\eta_{\textsf{w}}\|e^{\mu_{\textsf{w}}\theta}W(\phi)\|_{\text{d}}$, is continuous at the origin and ensures both semi-GAS and safety of the closed-loop system with the initial condition $\xi\in\mathcal{C}([-\Delta, 0], \mathbb{R}^{n}\setminus\mathbb{D})$.
\end{theorem}
Theorem \ref{thm-4} implies the simultaneous satisfaction of both safety and stabilization objectives. Next, we show how to construct the CLBRF via the proposed CLRF-II and CBRF.
\begin{theorem}
\label{thm-5}
Given the unsafe set $\mathbb{D}\subset\mathbb{R}^{n}$, assume that the system \eqref{eqn-1} admits a CLRF-II $V\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R}^{+})$ and a CBRF $B\in\mathcal{C}(\mathbb{R}^{n}, \mathbb{R})$. If
\begin{enumerate}[(i)]
\item there exist $\mathbb{X}\subset\mathbb{R}^{n}\setminus\{0\}$ and a continuous function $\varphi: \mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ such that $\mathbb{D}\subset\mathbb{X}$ and $B(x)\leq-\varphi(\|x\|)$ for all $x\notin\mathbb{X}$;
\item there exists $\psi\in\mathbb{R}^{+}$ such that $\alpha_{2}(\|x\|)<\psi\varphi(\|x\|)$ for all $x\in\partial\mathbb{X}$, where $\alpha_{2}$ is in Definition \ref{def-2};
\item $\min\{\gamma_{\textsf{v}}, \gamma_{\textsf{b}}\}>\max\{\eta_{\textsf{v}}, \eta_{\textsf{b}}\}$,
\end{enumerate}
then $W(x)=V(x)+\psi B(x)$ is a CLBRF for the system \eqref{eqn-1} with the initial condition satisfying $\xi\in\mathcal{C}([-\Delta, 0], \mathbb{R}^{n}\setminus\mathbf{D})$ with $\mathbf{D}:=\{x\in\mathbb{X}: W(x)>0\}$.
\end{theorem}
The proof of Theorem \ref{thm-5} is presented in Appendix \ref{asubsec-pf5}. With the CLRF-II and CBRF, item (iii) can be verified easily. Given item (i), the existence of $\psi\in\mathbb{R}^{+}$ in item (ii) can be verified. Item (i) itself can be satisfied via construction and computation. Specifically, let $\mathbb{X}:=\mathbb{D}+\mathbf{B}(\varepsilon)$ with arbitrarily small $\varepsilon>0$; for all $x\in\partial\mathbb{X}$, $B(x)$ can be computed and upper bounded by some function $-\varphi(\|x\|)$. Then, $B(x)$ can be set to $-\varphi(\|x\|)$ for all $x\notin\mathbb{X}$, which implies the satisfaction of item (i). Theorem \ref{thm-5} extends the results in \cite{Romdlony2016stabilization} to the time-delay case. In contrast to the exponentially stabilizing CLF and the constant function $\varphi$ in \cite{Romdlony2016stabilization}, all conditions here are allowed to depend on the system state.
\section{Numerical Example}
\label{sec-examples}
Consider the following mechanical system from \cite{Romdlony2016stabilization}
\begin{align}
\label{eqn-6}
\dot{x}_{1}=x_{2}, \quad \dot{x}_{2}=-h(x_{t})-x_{1}+u,
\end{align}
where $x=(x_{1}, x_{2})\in\mathbb{R}^{2}$ with $x_{1}$ being the displacement and $x_{2}$ being the velocity, and $u\in\mathbb{R}$ is the control input. In \eqref{eqn-6}, $h(x_{t})=(0.8+2e^{-100|x_{2}(t-\tau)|})\tanh(10x_{2}(t-\tau))+x_{2}(t-\tau)$ is the delayed friction model describing the damping, where $\tau\in[0, 0.3]$ is the time delay. Therefore, the system \eqref{eqn-6} can be written in the form of \eqref{eqn-1} with $f(x_{t})=(x_{2}, -h(x_{t})-x_{1})$ and $g(x_{t})=(0, 1)$.
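For concreteness, the delayed friction term and the vector field of \eqref{eqn-6} can be coded directly; the sketch below is an illustration only (the argument names and the zero-input default are our assumptions).

```python
import numpy as np

def h(x2_lag):
    """Delayed friction term of (6), evaluated at x2(t - tau)."""
    return (0.8 + 2.0 * np.exp(-100.0 * abs(x2_lag))) * np.tanh(10.0 * x2_lag) + x2_lag

def f6(x, x_lag, u=0.0):
    """Right-hand side of (6): x1' = x2, x2' = -h(x2(t - tau)) - x1 + u."""
    return np.array([x[1], -h(x_lag[1]) - x[0] + u])
```

Note that $h(0)=0$, so the origin is an equilibrium of the unforced system, consistent with the standing assumption $f(0)=0$ made for \eqref{eqn-1}.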
To guarantee the stabilization objective, we define the Lyapunov candidate as $V(x):=x_{1}^{2}+x_{1} x_{2}+x_{2}^{2}$. By detailed computation, item (i) in Definition \ref{def-2} holds with $\alpha_{1}(v)=0.5v^{2}$ and $\alpha_{2}(v)=1.5v^{2}$ for all $v\geq0$. Hence, $V(x)$ is a CLRF-II if the condition \eqref{eqn-2} holds. For the safety objective, we take the unsafe set to be $\mathbb{D}:=\{x\in\mathfrak{X}: \mathcal{H}(x)<4\}$ with $\mathcal{H}(x):=(1-(x_{1}+2)^{2})^{-1}+(1-(x_{2}-1)^{2})^{-1}$. Then, the corresponding barrier function is defined as
\begin{align}
B(x)=\left\{\begin{aligned}
&(e^{-\mathcal{H}(x)}-e^{-4})\|x\|^{2}, && \forall x\in\mathfrak{X}, \\
&-e^{-4}\|x\|^{2}, && \text{elsewhere},
\end{aligned}\right.
\end{align}
where $\mathfrak{X}:=(-3, -1)\times(0, 2)$. Clearly, $B(x)>0$ holds for all $x\in\mathbb{D}$, and $\mathbb{S}_{\textsf{b}}:=\{x\in\mathbb{R}^{2}: B(x)\leq0\}\neq\varnothing$. Then $B(x)$ is a CBRF if the condition \eqref{eqn-5} holds.
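The piecewise definition of $B$ can be checked pointwise; the illustrative sketch below verifies the sign pattern ($B>0$ on the unsafe set, $B\leq0$ on safe points). Note also that the two branches match continuously on $\partial\mathfrak{X}$, since $\mathcal{H}(x)\to+\infty$ and hence $e^{-\mathcal{H}(x)}\to0$ as $x$ approaches $\partial\mathfrak{X}$ from inside.

```python
import numpy as np

def H(x):
    """H(x) = 1/(1-(x1+2)^2) + 1/(1-(x2-1)^2), finite on frak X = (-3,-1) x (0,2)."""
    return 1.0 / (1.0 - (x[0] + 2.0) ** 2) + 1.0 / (1.0 - (x[1] - 1.0) ** 2)

def B(x):
    """Piecewise barrier candidate from the example."""
    in_X = -3.0 < x[0] < -1.0 and 0.0 < x[1] < 2.0
    n2 = x[0] ** 2 + x[1] ** 2                # ||x||^2
    return (np.exp(-H(x)) - np.exp(-4.0)) * n2 if in_X else -np.exp(-4.0) * n2
```

For instance, at the centre $(-2, 1)$ of $\mathfrak{X}$ we have $\mathcal{H}=2<4$, so $B>0$ there, while points outside $\mathfrak{X}$, or inside $\mathfrak{X}$ with $\mathcal{H}>4$, give $B\leq0$.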
Based on $V(x)$ and $B(x)$, we next construct the CLBRF $W(x):=V(x)+\psi B(x)$ to investigate the stabilization and safety objectives simultaneously. First, item (i) in Theorem \ref{thm-5} holds with $\varphi(v):=e^{-4}v^{2}$ for all $v\geq0$. From item (ii) in Theorem \ref{thm-5}, $\psi>81.8972$. Item (iii) can be satisfied based on the construction of the CLRF-II $V(x)$ and the CBRF $B(x)$. Therefore, $W(x)=V(x)+\psi B(x)$ is a CLBRF for the system \eqref{eqn-6}, and it will be used to design a controller of the form \eqref{eqn-3} to guarantee the stabilization and safety objectives simultaneously. Given $\gamma_{\textsf{w}}=2.5$, $\eta_{\textsf{w}}=2$ and the gain $\lambda=2$, Fig.~\ref{fig-1} shows the state trajectories of the closed-loop system starting from different initial conditions. From Fig.~\ref{fig-1}, all trajectories converge to zero while avoiding the unsafe set $\mathbb{D}$, which verifies the effectiveness of the proposed approach.
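The numerical bound on $\psi$ comes from the requirement $\alpha_{2}(\|x\|)<\psi\varphi(\|x\|)$: with $\alpha_{2}(v)=1.5v^{2}$ and $\varphi(v)=e^{-4}v^{2}$, the factors $v^{2}$ cancel and the condition reduces to $\psi>1.5e^{4}\approx81.8972$, as the one-line check below confirms (an illustrative computation only).

```python
import numpy as np

# alpha_2(v) = 1.5 v^2 and phi(v) = e^{-4} v^2, so
# alpha_2(||x||) < psi * phi(||x||) reduces to psi > 1.5 * e^4
psi_min = 1.5 * np.exp(4.0)
```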
\begin{figure}
\centering
\begin{picture}(60, 95)
\put(-60, -8){\resizebox{60mm}{35mm}{\includegraphics[width=2.5in]{Fig4-C1}}}
\end{picture}
\caption{The simulation of the closed-loop system based on the proposed CLBRF. The dark grey region is the unsafe set $\mathbb{D}$, and the solid curves are the state trajectories starting from different constant initial conditions.}
\label{fig-1}
\end{figure}
\section{Conclusion}
\label{sec-conclusion}
This paper provided a novel framework for the control design of safety-critical systems with time delays. Based on the Razumikhin approach, the Razumikhin-type control Lyapunov and barrier functions were proposed to investigate the stabilization and safety control problems. To achieve the safety and stabilization objectives simultaneously, the proposed Razumikhin-type control functions were combined such that the stabilizing and safety controllers can be merged. Future work will be devoted to decentralized safety control for multi-agent systems with time delays.
\section{Introduction}
We shall use the graphical language for symmetric monoidal categories, see \cite{selinger2010survey} for a quick overview. Let $\cat C$ be a symmetric monoidal category; the monoidal unit is denoted by $\mathbf{I}$, the class of objects by $\cat C_0$, and the class of morphisms by $\cat C_1$. The defining laws for a \memph{cocommutative monoidal comonoid} $A$ in $\cat C$ may be depicted as follows:
\begin{equation}\label{mon:law}
\begin{tikzpicture}[xscale = .65, yscale = .75, baseline=(current bounding box.center)]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (139) at (-1.625, 5.45) {};
\node [style=none] (141) at (-1.625, 5.325) {};
\node [style=none] (144) at (-2.1, 6.125) {};
\node [style=none] (146) at (-2.1, 5.825) {};
\node [style=none] (246) at (-0.925, 4.8) {};
\node [style=none] (247) at (-0.925, 4.425) {};
\node [style=none] (249) at (-0.2, 6.1) {};
\node [style=none] (250) at (-0.2, 5.325) {};
\node [style=none] (251) at (-1.15, 6.125) {};
\node [style=none] (252) at (-1.15, 5.825) {};
\node [style=none] (281) at (1.15, 5.175) {$=$};
\node [style=none] (298) at (3.85, 5.45) {};
\node [style=none] (299) at (3.85, 5.325) {};
\node [style=none] (300) at (4.325, 6.125) {};
\node [style=none] (301) at (4.325, 5.825) {};
\node [style=none] (302) at (3.15, 4.8) {};
\node [style=none] (303) at (3.15, 4.425) {};
\node [style=none] (304) at (2.425, 6.1) {};
\node [style=none] (305) at (2.425, 5.325) {};
\node [style=none] (306) at (3.375, 6.125) {};
\node [style=none] (307) at (3.375, 5.825) {};
\node [style=none] (314) at (9.6, 5.175) {$=$};
\node [style=none] (315) at (11.875, 5.175) {$=$};
\node [style=none] (316) at (10.75, 6.1) {};
\node [style=none] (317) at (10.75, 4.425) {};
\node [style=none] (318) at (13.05, 6) {$\bullet$};
\node [style=none] (319) at (13.05, 5.575) {};
\node [style=none] (320) at (13.75, 5.05) {};
\node [style=none] (321) at (13.75, 4.425) {};
\node [style=none] (322) at (14.475, 6.1) {};
\node [style=none] (323) at (14.475, 5.575) {};
\node [style=none] (308) at (15.95, 6.25) {};
\node [style=none] (309) at (17.375, 5.2) {};
\node [style=none] (310) at (16.675, 4.65) {};
\node [style=none] (311) at (16.675, 4.275) {};
\node [style=none] (312) at (17.375, 6.25) {};
\node [style=none] (313) at (15.95, 5.2) {};
\node [style=none] (324) at (8.4, 6) {$\bullet$};
\node [style=none] (325) at (8.4, 5.575) {};
\node [style=none] (326) at (7.7, 5.05) {};
\node [style=none] (327) at (7.7, 4.425) {};
\node [style=none] (328) at (6.975, 6.1) {};
\node [style=none] (329) at (6.975, 5.575) {};
\node [style=none] (330) at (20.125, 6.1) {};
\node [style=none] (331) at (20.125, 5.575) {};
\node [style=none] (332) at (20.825, 5.05) {};
\node [style=none] (333) at (20.825, 4.425) {};
\node [style=none] (334) at (21.55, 6.1) {};
\node [style=none] (335) at (21.55, 5.575) {};
\node [style=none] (336) at (18.8, 5.175) {$=$};
\node [style=none, label={[align=center]center: coassociativity\\$(\delta_A \otimes 1_A) \circ \delta_A = (1_A \otimes \delta_A) \circ \delta_A$}] (337) at (1.25, 3.175) {};
\node [style=none, label={[align=center]center:counitality\\$(1_A \otimes \epsilon_A) \circ \delta_A = 1_A = (\epsilon_A \otimes 1_A) \circ \delta_A$}] (338) at (10.95, 3.175) {};
\node [style=none, label={[align=center]center:cocommutativity\\$\sigma_{AA} \circ \delta_A = \delta_A$}] (339) at (18.8, 3.175) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire] (141.center) to (139.center);
\draw [style=wire] (146.center) to (144.center);
\draw [style=wire] (247.center) to (246.center);
\draw [style=wire] (250.center) to (249.center);
\draw [style=wire] (252.center) to (251.center);
\draw [style=wire, bend right=90, looseness=1.25] (141.center) to (250.center);
\draw [style=wire, bend right=90, looseness=1.25] (146.center) to (252.center);
\draw [style=wire] (299.center) to (298.center);
\draw [style=wire] (301.center) to (300.center);
\draw [style=wire] (303.center) to (302.center);
\draw [style=wire] (305.center) to (304.center);
\draw [style=wire] (307.center) to (306.center);
\draw [style=wire, bend left=90, looseness=1.25] (299.center) to (305.center);
\draw [style=wire, bend left=90, looseness=1.25] (301.center) to (307.center);
\draw [style=wire] (311.center) to (310.center);
\draw [style=wire] (317.center) to (316.center);
\draw [style=wire] (319.center) to (318.center);
\draw [style=wire] (321.center) to (320.center);
\draw [style=wire] (323.center) to (322.center);
\draw [style=wire, bend right=90, looseness=1.25] (319.center) to (323.center);
\draw [style=wire, in=-90, out=90] (309.center) to (308.center);
\draw [style=wire] (311.center) to (310.center);
\draw [style=wire, in=-90, out=90, looseness=0.75] (313.center) to (312.center);
\draw [style=wire, bend left=90, looseness=1.25] (309.center) to (313.center);
\draw [style=wire] (325.center) to (324.center);
\draw [style=wire] (327.center) to (326.center);
\draw [style=wire] (329.center) to (328.center);
\draw [style=wire, bend left=90, looseness=1.25] (325.center) to (329.center);
\draw [style=wire] (331.center) to (330.center);
\draw [style=wire] (333.center) to (332.center);
\draw [style=wire] (335.center) to (334.center);
\draw [style=wire, bend right=90, looseness=1.25] (331.center) to (335.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{equation}
Our convention is to draw a string diagram in the lower-left to upper-right direction. Therefore, above, the \memph{comultiplication} $\delta_A : A \longrightarrow A \otimes A$ is depicted as an upward fork
$\begin{tikzpicture}[xscale = .3, yscale = .3, baseline={([yshift=5pt]current bounding box.south)}]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (324) at (8.4, 6.1) {};
\node [style=none] (325) at (8.4, 5.575) {};
\node [style=none] (326) at (7.7, 5.05) {};
\node [style=none] (327) at (7.7, 4.425) {};
\node [style=none] (328) at (6.975, 6.1) {};
\node [style=none] (329) at (6.975, 5.575) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire] (325.center) to (324.center);
\draw [style=wire] (327.center) to (326.center);
\draw [style=wire] (329.center) to (328.center);
\draw [style=wire, bend left=90, looseness=1.25] (325.center) to (329.center);
\end{pgfonlayer}
\end{tikzpicture}$
and the \memph{counit} $\epsilon_A : A \longrightarrow \mathbf{I}$ an upward dead-end
$\begin{tikzpicture}[xscale = .3, yscale = .3, baseline={([yshift=5pt]current bounding box.south)}]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (326) at (7.7, 5.8) {$\bullet$};
\node [style=none] (327) at (7.7, 4.425) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire] (327.center) to (326.center);
\end{pgfonlayer}
\end{tikzpicture}$.
Suppose that for each object $A \in \cat C$ there are distinguished arrows $\delta_A : A \longrightarrow A \otimes A$, called the \memph{duplicate} on $A$, and $\epsilon_A : A \longrightarrow \mathbf{I}$, called the \memph{discard} on $A$, that form a cocommutative monoidal comonoid as depicted in (\ref{mon:law}) (the qualifier ``distinguished'' is needed because there may be other such entities). Moreover, they respect the monoidal product:
\begin{equation}\label{fins:frob}
\begin{tikzpicture}[xscale = .55, yscale = .55, baseline=(current bounding box.center)]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (42) at (7.725, -0.75) {$=$};
\node [style=none] (108) at (4.275, 0.4) {};
\node [style=none] (110) at (6.275, 0.4) {};
\node [style=none] (111) at (5.275, -1.825) {};
\node [style=none] (112) at (4.275, -0.35) {};
\node [style=none] (113) at (6.275, -0.325) {};
\node [style=none] (114) at (5.275, -1.075) {};
\node [style=none] (117) at (9.225, 0.65) {};
\node [style=none] (118) at (12.125, 0.65) {};
\node [style=none] (119) at (10.05, -1.825) {};
\node [style=none] (120) at (9.225, -0.625) {};
\node [style=none] (121) at (10.85, -0.6) {};
\node [style=none] (122) at (10.05, -1.225) {};
\node [style=none] (124) at (10.825, 0.65) {};
\node [style=none] (125) at (13.825, 0.65) {};
\node [style=none] (126) at (12.95, -1.825) {};
\node [style=none] (127) at (12.125, -0.625) {};
\node [style=none] (128) at (13.825, -0.6) {};
\node [style=none] (129) at (12.95, -1.25) {};
\node [style=none] (131) at (19.65, -0.75) {$=$};
\node [style=none] (134) at (18.2, -1.525) {};
\node [style=none] (137) at (18.2, -0.05) {$\bullet$};
\node [style=none] (138) at (20.25, -3) {$\epsilon_{A \otimes B} = \epsilon_{A} \otimes \epsilon_{B}$};
\node [style=none] (145) at (21.025, -1.525) {};
\node [style=none] (146) at (21.025, -0.05) {$\bullet$};
\node [style=none] (148) at (22.275, -1.525) {};
\node [style=none] (149) at (22.275, -0.05) {$\bullet$};
\node [style=none] (151) at (9.25, -3) {$\delta_{A \otimes B} = (1_A \otimes \sigma_{BA} \otimes 1_B) \circ (\delta_{A} \otimes \delta_B)$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire, bend right=90, looseness=1.25] (112.center) to (113.center);
\draw [style=wire] (108.center) to (112.center);
\draw [style=wire] (110.center) to (113.center);
\draw [style=wire] (111.center) to (114.center);
\draw [style=wire, bend right=90, looseness=1.25] (120.center) to (121.center);
\draw [style=wire] (117.center) to (120.center);
\draw [style=wire, in=90, out=-90, looseness=1.25] (118.center) to (121.center);
\draw [style=wire] (119.center) to (122.center);
\draw [style=wire, bend right=90, looseness=1.25] (127.center) to (128.center);
\draw [style=wire, in=90, out=-90, looseness=1.25] (124.center) to (127.center);
\draw [style=wire] (125.center) to (128.center);
\draw [style=wire] (126.center) to (129.center);
\draw [style=wire] (134.center) to (137.center);
\draw [style=wire] (145.center) to (146.center);
\draw [style=wire] (148.center) to (149.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{equation}
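For concreteness, the comonoid equations above can be checked in the most degenerate example, the cartesian Markov category. The following Python sketch is ours and purely illustrative: objects are Python value types, the monoidal product is tupling (strictified to flat tuples), the duplicate is pairing, and the discard is the constant map to the empty tuple.

```python
# A minimal sketch (assumption: the cartesian Markov category of Python
# values, with tupling as the monoidal product, strictified to flat tuples).

def delta(x):       # duplicate: A -> A (x) A
    return (x, x)

def eps(x):         # discard: A -> I, the empty tuple
    return ()

def swap(p):        # symmetry: A (x) B -> B (x) A
    return (p[1], p[0])

a, b = "sample", 7

# coassociativity: (delta (x) 1_A) . delta = (1_A (x) delta) . delta
lhs = (*delta(delta(a)[0]), delta(a)[1])
rhs = (delta(a)[0], *delta(delta(a)[1]))
assert lhs == rhs == (a, a, a)

# counitality: discarding either copy of the fork returns the input
assert delta(a)[0] == a == delta(a)[1] and eps(a) == ()

# cocommutativity: swap . delta = delta
assert swap(delta(a)) == delta(a)

# compatibility with the product:
# delta_{A(x)B} = (1_A (x) swap (x) 1_B) . (delta_A (x) delta_B)
da, db = delta(a), delta(b)
mid = (da[0], *swap((da[1], db[0])), db[1])
assert mid == (a, b, a, b) == (*(a, b), *(a, b))
```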
Following \cite{fritz2020synthetic}, we call $\cat C$ a \memph{Markov category} if the discards are natural, that is, they are the component morphisms of the natural transformation between the identity functor $1_{\cat C}$ and the endofunctor that sends everything to $\mathbf{I}$:
\begin{equation}\label{discar:nat}
\begin{tikzpicture}[xscale = .55, yscale = .55, baseline=(current bounding box.center)]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (200) at (19.4, 1.75) {$=$};
\node [style=none] (202) at (17.45, 1.525) {};
\node [style=none] (205) at (17.45, 0.6) {};
\node [style=small box] (207) at (17.45, 1.875) {$f$};
\node [style=none] (208) at (17.45, 3.225) {$\bullet$};
\node [style=none] (209) at (17.45, 2.3) {};
\node [style=none] (210) at (21.15, 2.725) {$\bullet$};
\node [style=none] (211) at (21.15, 1.1) {};
\node [style=none] (212) at (19.4, -0.75) {$\epsilon_B \circ f = \epsilon_A$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire] (202.center) to (205.center);
\draw [style=wire] (208.center) to (209.center);
\draw [style=wire] (210.center) to (211.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{equation}
However, it is unreasonable to also require duplicates to be natural, see the opening discussion in \cite[\S~10]{fritz2020synthetic}.
We may forgo this discussion of the naturality of discards if the monoidal unit $\mathbf{I}$ is actually terminal, since in that case the desired property holds automatically; conversely, the naturality of discards, together with the other conditions stipulated above, implies that $\mathbf{I}$ is terminal, and hence $\epsilon_\mathbf{I} = 1_\mathbf{I}$. Also note that $\delta_\mathbf{I} = 1_\mathbf{I}$ in any strict Markov category. See \cite[Remark~2.3]{fritz2020synthetic}.
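The asymmetry between discards and duplicates can be seen in a toy probabilistic example. The sketch below is our own illustration, not a construction from \cite{fritz2020synthetic}: a kernel is encoded as a dict of finite distributions, and a fair coin separates ``copy the single outcome'' from ``sample twice independently''.

```python
# A hedged toy sketch (encoding ours): for a stochastic map f, discards are
# natural but duplicates are not. A kernel f : A -> B is a dict mapping each
# input a to a finite distribution {b: probability}.

f = {"a": {"x": 0.5, "y": 0.5}}   # a fair coin, as a kernel A -> B

def then_discard(kernel):
    """eps_B . f : the total mass sent to the unit, per input."""
    return {a: sum(dist.values()) for a, dist in kernel.items()}

def duplicate_then_ff(kernel):
    """(f (x) f) . delta_A : duplicate the input, then run f twice."""
    return {a: {(b1, b2): p1 * p2
                for b1, p1 in dist.items()
                for b2, p2 in dist.items()}
            for a, dist in kernel.items()}

def f_then_duplicate(kernel):
    """delta_B . f : run f once, then duplicate the single outcome."""
    return {a: {(b, b): p for b, p in dist.items()}
            for a, dist in kernel.items()}

# Naturality of discard: eps_B . f = eps_A (all mass ends up at the unit).
assert then_discard(f) == {"a": 1.0}
# Failure of naturality for duplicate: the two sides disagree on a coin flip.
assert duplicate_then_ff(f) != f_then_duplicate(f)
```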
Accordingly, a \memph{Markov functor} is a strong symmetric monoidal functor $F : \cat C \longrightarrow \cat D$ between two Markov categories that preserves the distinguished comonoids by way of matching duplicates and discards, that is, for all objects $A \in \cat C$, the diagrams
\begin{equation*}
\bfig
\Atriangle(0,0)/->`->`->/<400,400>[FA`FA \otimes FA`F(A \otimes A); \delta_{FA}`F\delta_A`\cong]
\Atriangle(1500,0)/->`->`->/<250,400>[FA`\mathbf{I}`F \mathbf{I}; \epsilon_{FA}`F\epsilon_A`\cong]
\efig
\end{equation*}
commute, where the horizontal arrows are the structure isomorphisms in question; see \cite[Definition~10.14]{fritz2020synthetic}. Of course the discards are matched automatically, since $\mathbf{I}$ is terminal. Denote by \cat{MarCat} the category of Markov categories and Markov functors. In this paper, for simplicity, \cat{MarCat} is treated as a $2$-category only implicitly, when we occasionally speak of \memph{comonoid equivalence}, that is, equivalence in \cat{MarCat}.
To formulate the notion of a free Markov category, as in \cite{selinger2010survey}, we first need to fix a suitable class of signatures (called ``tensor schemes'' in \cite{joyal1991geometry}), as follows.
A \memph{graph monoid} is a directed graph $G = (V(G), A(G), s, t)$ whose set of vertices comes with the additional structure of a monoid. A \memph{graph monoid homomorphism} is a graph homomorphism that is also a homomorphism between the monoids in question.
For any set $S$, let $W(S)$ denote the free monoid of words over (the alphabet) $S$. It is a graph monoid, with the trivial directed graph structure (no arrows). More generally, a \memph{free graph monoid} is a triple $(G, S, g)$, where $G$ is a graph monoid and $g : S \longrightarrow V(G)$ is an injection, such that the monoid $V(G)$ is isomorphic to the free monoid $W(S)$ over $g$. We usually just say that $G$ is a free graph monoid and leave $S$, $g$ implicit in notation when they are not needed in the discussion. It is also harmless to treat $S$ as if it were a subset of $V(G)$.
A \memph{free $\textup{DD}$-graph monoid} is a free graph monoid $(G, S, g)$ such that, for each $a \in S$, there are distinguished arrows $a \longrightarrow aa$, called the \memph{duplicate} on $a$, and $a \longrightarrow \emptyset$, called the \memph{discard} on $a$.
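As an illustration only (the encoding and all names below are our own), a free $\textup{DD}$-graph monoid over a finite alphabet can be represented with words as vertices:

```python
# A hedged sketch (encoding ours): a free DD-graph monoid over an alphabet S.
# Vertices are the words over S (tuples); for each letter a there are
# distinguished duplicate (a -> aa) and discard (a -> empty word) arrows.

class FreeDDGraphMonoid:
    def __init__(self, alphabet, extra_arrows=()):
        self.alphabet = set(alphabet)
        # an arrow is a triple (name, source_word, target_word)
        self.arrows = {("dup_" + a, (a,), (a, a)) for a in alphabet}
        self.arrows |= {("del_" + a, (a,), ()) for a in alphabet}
        self.arrows |= set(extra_arrows)

    def mul(self, u, v):
        """The monoid operation on vertices: concatenation of words."""
        return tuple(u) + tuple(v)

# a hypothetical generator f : A -> B on top of the DD structure
G = FreeDDGraphMonoid({"A", "B"}, extra_arrows={("f", ("A",), ("B",))})
assert G.mul(("A",), ("B", "A")) == ("A", "B", "A")
assert ("dup_A", ("A",), ("A", "A")) in G.arrows
assert ("del_B", ("B",), ()) in G.arrows
```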
A Markov category $\cat C$ is \memph{free} over a free $\textup{DD}$-graph monoid $G$ if there is a graph monoid homomorphism $i : G \longrightarrow \cat C$ matching duplicates and discards such that, for any Markov category $\cat D$, any graph monoid homomorphism $f : G \longrightarrow \cat D$ matching duplicates and discards factors through $i$, that is, there is a Markov functor $F : \cat C \longrightarrow \cat D$ such that $f = F \circ i$, and $F$ is unique up to a unique monoidal natural isomorphism. The objects and morphisms in $i(G)$ are referred to as \memph{generators}.
Since Markov categories are symmetric monoidal categories, the same graphical language can be used, but with the additional syntax depicted in (\ref{mon:law}), (\ref{fins:frob}), and (\ref{discar:nat}). This does not give the free Markov category over a free $\textup{DD}$-graph monoid, though, at least not for string diagrams up to isomorphism. We need a coarser equivalence relation to accommodate the equations in (\ref{mon:law}), (\ref{fins:frob}), and (\ref{discar:nat}).
\begin{rem}
The surgery method described in this paper works rather well in some applications where free Markov categories play an essential role, see \cite{yinzhang2021}. For other monoidal categories with more complex conditions, it is not that the method does not work, it is just that the equivalence relation so constructed may be hard to parse visually, and consequently working with a graphical language does not seem to offer advantages over a symbolic one, which in a sense defeats the purpose of graphical languages.
For the construction of free Markov categories, there is a more sophisticated combinatorial approach, see \cite{FriLiang2022}.
\end{rem}
\section{Preliminaries on Joyal-Street style diagrams}
The discussion below assumes familiarity with the first two chapters of \cite{joyal1991geometry}. We begin by recalling some definitions therein.
A \memph{generalized topological graph} $\Gamma = (\Gamma, \Gamma_0)$ consists of a Hausdorff space $\Gamma$ and a finite subset $\Gamma_0 \subseteq \Gamma$ such that the complement $\Gamma \smallsetminus \Gamma_0$ is a one-dimensional manifold without boundary and with finitely many connected components. It is \memph{acyclic} if it has no circuits, that is, there is no embedding from the unit circle in $\mathds{R}^2$ into $\Gamma$; in particular, each of the connected components of $\Gamma \smallsetminus \Gamma_0$ is homeomorphic to the open unit interval $(0, 1) \subseteq \mathds{R}$. We shall only consider acyclic graphs and hence may omit the qualifier for brevity.
An element of $\Gamma_0$ is called a \memph{node}. A connected component of $\Gamma \smallsetminus \Gamma_0$ is called an \memph{edge}. The set of edges of $\Gamma$ is denoted by $\Gamma_1$, and the set $\overline \Gamma \smallsetminus \Gamma$ by $\partial \Gamma$, where $\overline \Gamma$ is the topological closure (compactification) of $\Gamma$. The points in $\partial \Gamma$ are referred to as the \memph{outer nodes} of $\Gamma$ (nodes in $\Gamma_0$ are also called \memph{inner nodes} for clarity). Note that $\partial \Gamma$ and $\Gamma_0$ together form the boundary of $\Gamma$.
The compactification $\hat e$ of an edge $e$ is homeomorphic to the closed unit interval $[0, 1] \subseteq \mathds{R}$. An edge $e$ is called \memph{pinned} if the inclusion $e \subseteq \Gamma$ can be extended to an embedding $\hat e \longrightarrow \Gamma$, \memph{half-loose} if it can only be extended to $\hat e$ minus one endpoint, and \memph{loose} if neither of the previous two cases holds.
An \memph{orientation} of an edge $e$ is just a total ordering on the pair of endpoints of $\hat e$, where the \memph{source} $e(0)$ is the image of the first element under the embedding $\hat e \longrightarrow \overline \Gamma$ and the
\memph{target} $e(1)$ that of the last element. We say that $\Gamma$ is \memph{progressive} if it carries a choice of orientation for
each of its edges. In that case, the \memph{input} $\inn(x)$ of a node $x \in \Gamma_0$ is the set of the oriented edges with target $x$ and the \memph{output} $\out(x)$ the set of those with source $x$. An inner node $x \in \Gamma_0$ is \memph{initial} if $\inn(x) = \emptyset$ and \memph{terminal} if $\out(x) = \emptyset$. If, in addition, $\Gamma$ is equipped with a choice of total orderings on $\inn(x)$ and $\out(x)$ for each $x \in \Gamma_0$ then it is \memph{polarized}. We shall only consider polarized graphs and hence may omit the qualifier for brevity.
The \memph{domain} $\dom(\Gamma)$ of $\Gamma$ consists of the edges whose sources are outer nodes and the \memph{codomain} $\cod(\Gamma)$ those whose targets are outer nodes. The edges in $\dom(\Gamma)$, $\cod(\Gamma)$ are often identified with the corresponding outer nodes for convenience. We say that $\Gamma$ is \memph{anchored} if both $\dom(\Gamma)$ and $\cod (\Gamma)$ are equipped with a total ordering.
\begin{defn}
Let $\cat C$ be a symmetric strict monoidal category. A \memph{valuation} $v : \Gamma \longrightarrow \cat C$ is a pair of functions
\[
v_1 : \Gamma_0 \longrightarrow \cat C_1, \quad v_0 : \Gamma_1 \longrightarrow \cat C_0
\]
such that, for every node $x \in \Gamma_0$ with $\inn(x) = (a_1, \ldots, a_n)$ and $\out(x) = (b_1, \ldots, b_m)$, $v_1(x)$ is of the form $\bigotimes_i v_0(a_i) \longrightarrow \bigotimes_i v_0(b_i)$; here if $n = 0$ or $m = 0$ then the monoidal product in question is just the monoidal unit $\mathbf{I}$ of $\cat C$. The pair $(\Gamma, v)$ is called a \memph{diagram} in $\cat C$, and is denoted simply by $\Gamma$ when the context is clear. If $\Gamma$ is anchored then the domain $\dom(\Gamma, v)$ of $(\Gamma, v)$ is the object $\bigotimes_{e_i \in \dom(\Gamma)} v_0(e_i)$, where the monoidal product is taken with respect to the ordering on $\dom(\Gamma)$, and similarly for the codomain $\cod(\Gamma, v)$ of $(\Gamma, v)$.
An \memph{isomorphism of diagrams} $\Gamma \longrightarrow \Omega$ is an isomorphism of the graphs that is compatible with the valuations in the obvious way. If the graphs are anchored then the isomorphism is required to preserve the anchoring too.
A diagram in a free graph monoid $(G, S, g)$ is formulated in the same way, with the valuation $v : \Gamma \longrightarrow (G, S, g)$ mapping the edges and nodes into the alphabet $S$ and the set $A(G)$ of arrows, respectively.
\end{defn}
\begin{rem}\label{free:sys:mo:string}
Isomorphisms of graphs are homeomorphisms of one-dimensional topological spaces and, insofar as diagrams in symmetric monoidal categories are concerned, they can be equivalently replaced by ambient isotopies of graphs embedded in $\mathds{R}^4$, but not in lower dimensions. The reason is simply that, in the graphical language of symmetric monoidal categories, we represent a symmetry as a braid, which is then forced by the axioms to be trivial, and indeed all braids in $\mathds{R}^4$ are trivial. For instance, the crossing in the third diagram of (\ref{mon:law}) is a projection in $\mathds{R}^2$ of the symmetry braid in $\mathds{R}^4$, and there is no need to depict which of the two strands lies over the other, precisely because all the cases represent the same braid, that is, the trivial one. Thus, the intersection point therein is not a node of the graph; it is merely the (intentional) overlap of the shadows in $\mathds{R}^2$ of the two edges, which do not intersect in $\mathds{R}^4$ at all.
\end{rem}
\begin{thm}[{\cite[Theorem~2.3]{joyal1991geometry}}]\label{free:sym}
The isomorphism classes of anchored diagrams $(\Gamma, v)$ in the free graph monoid $G$, denoted by $[\Gamma, v]$, form the free symmetric monoidal category $\cat F_s(G)$ over $G$.
\end{thm}
From here on we assume that $\Gamma$ is anchored.
Recall that $\Gamma_0$ may be ordered as follows: $x < y$ if and only if there is a directed path $p$ in $\Gamma$ with $p(0) = x$ and $p(1) = y$; the partially ordered set thus formed is denoted by $\dot\Gamma$.
An initial segment of $\dot \Gamma$ is referred to as a \memph{level} of $\Gamma$, with $\emptyset$ the smallest one and $\dot \Gamma$ the largest one.
An edge $e$ is said to be \memph{cut} by a level $L$ when $e(0) \in L \cup \dom(\Gamma)$ and $e(1) \in (\dot \Gamma \smallsetminus L) \cup \cod(\Gamma)$.
Let $\cut(L)$ denote the set of edges cut by $L$; we have $\cut(\emptyset) = \dom(\Gamma)$ and $\cut(\dot \Gamma) = \cod(\Gamma)$. For a pair of levels $L \subseteq M$, the \memph{layer} $\Gamma[L, M]$ in $\Gamma$ is the subgraph of $\Gamma$ such that
\begin{itemize}
\item its inner nodes are those in $M \smallsetminus L$,
\item its pinned edges are those $e \in \Gamma_1$ with $e(0), e(1) \in M \smallsetminus L$,
\item its loose and half-loose edges are those in $\cut(L) \cup \cut(M)$.
\end{itemize}
So $\cut(L) = \dom(\Gamma[L, M])$ and $\cut(M) = \cod(\Gamma[L, M])$, and the loose edges are exactly those in $\cut(L) \cap \cut(M)$, in particular, $\dom(\Gamma) = \dom(\Gamma[\emptyset, L])$ and $\cod(\Gamma) = \cod(\Gamma[L, \dot \Gamma])$.
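The definitions of levels and cuts are effectively algorithmic. The following Python sketch (our own encoding: pinned edges as pairs of nodes, outer endpoints as \texttt{None}) computes them on a three-edge chain:

```python
# A hedged algorithmic sketch (representation ours): edges of an acyclic
# diagram as id -> (source, target), with None marking an outer endpoint.
# A level is an initial segment of the node poset; cut(L) collects the edges
# crossing from L (or the domain) into the rest of the diagram.

edges = {
    "e1": (None, "x"),   # domain edge into node x
    "e2": ("x", "y"),    # pinned edge x -> y
    "e3": ("y", None),   # codomain edge out of y
}

def is_level(L, edges):
    """L is an initial segment: no pinned edge enters L from outside it."""
    return all(s in L
               for s, t in edges.values()
               if t in L and s is not None)

def cut(L, edges):
    """Edges whose source lies in L (or the domain) and target outside L."""
    return {e for e, (s, t) in edges.items()
            if (s in L or s is None) and t not in L}

assert is_level(set(), edges) and is_level({"x"}, edges)
assert not is_level({"y"}, edges)           # y's predecessor x is missing
assert cut(set(), edges) == {"e1"}          # cut(empty) = dom(Gamma)
assert cut({"x"}, edges) == {"e2"}
assert cut({"x", "y"}, edges) == {"e3"}     # cut(full) = cod(Gamma)
```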
Any subgraph $\Gamma' \subseteq \Gamma$ inherits the orientations of the edges and the total orderings of $\inn_{\Gamma'}(x) \subseteq \inn_{\Gamma}(x)$, $\out_{\Gamma'}(x) \subseteq \out_{\Gamma}(x)$ for each $x \in \Gamma'_0$, and hence is polarized. On the other hand, there is not a natural way to order $\dom(\Gamma') \smallsetminus \dom(\Gamma)$ or $\cod(\Gamma') \smallsetminus \cod(\Gamma)$, so $\Gamma'$ is not naturally anchored unless these two sets are empty.
\begin{rem}\label{order:raley:com}
Since a layer $\Gamma[L, M]$ is not naturally anchored, it cannot be used as is to produce a morphism in $\cat F_s(G)$. If we choose total orderings on the domain and codomain then of course $\Gamma[L, M]$ may be turned into an anchored graph, although different choices in general bring about nonisomorphic results. In some situations, what orderings we choose is immaterial, as long as they are compatible. For instance, if $\Gamma_1$, $\Gamma_2$, and $\Gamma_3$ are only equipped with partial orderings on the domains and codomains then by writing
\[
[\Gamma, v] = [\Gamma_3, v_3] \circ [\Gamma_2, v_2] \circ [\Gamma_1, v_1]
\]
we mean that the partial orderings may be extended to compatible total orderings so as to make the equality hold. In particular, this shall be what is meant when we write
\[
[\Gamma, v] = [\Gamma[M, \dot \Gamma], v_3] \circ [\Gamma[L, M], v_2] \circ [\Gamma[\emptyset, L], v_1],
\]
where $v_1$, $v_2$, and $v_3$ are the induced valuations.
\end{rem}
\section{Surgeries on diagrams}
We continue to work with an anchored graph $\Gamma$.
\begin{defn}\label{def:full}
We say that a subgraph $\Gamma' \subseteq \Gamma$ is \memph{normal} if
\begin{itemize}
\item no directed path in $\Gamma$ contains two distinct edges in $\Gamma'$ if one of them is loose in $\Gamma'$,
\item the degree of every $x \in \Gamma'_0$ in $\Gamma'$ is the same as in $\Gamma$, in other words, all the edges at $x$ in $\Gamma$ are included in $\Gamma'$,
\item every directed path $p$ in $\Gamma$ with $p(0), p(1) \in \Gamma'_0$ belongs to $\Gamma'$.
\end{itemize}
\end{defn}
For a subgraph $\Gamma' \subseteq \Gamma$, let $\Gamma'^{\flat}$ denote the subgraph obtained from $\Gamma'$ by deleting its loose edges.
Observe that if two subgraphs $\Gamma'$, $\Gamma''$ have the same set of nodes and both satisfy the second condition above then $\Gamma'^{\flat}$, $\Gamma''^{\flat}$ must be equal. Also, in a normal subgraph $\Gamma'$ of $\Gamma$, if $x \in \Gamma'_0$ is initial in $\Gamma'$ then it is initial in $\Gamma$, similarly if it is terminal.
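The three conditions of Definition~\ref{def:full} can likewise be tested mechanically. The sketch below is our own encoding and checks, on a small chain, that the middle node alone (with its two edges) is normal, while the subgraph that skips over it is not:

```python
# A hedged algorithmic sketch (encoding ours): testing the three normality
# conditions on a chain-shaped diagram. Edges are id -> (source, target),
# with None marking an outer endpoint; a subgraph is (nodes, edge_ids).

def successors(n, edges):
    return {t for s, t in edges.values() if s == n and t is not None}

def reaches(a, b, edges):
    """Is there a directed path of nodes from a to b (a == b counts)?"""
    seen, stack = set(), [a]
    while stack:
        n = stack.pop()
        if n == b:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(successors(n, edges))
    return False

def is_normal(nodes, eids, edges):
    def between(s, t):   # lies on some path between two subgraph nodes
        return any(reaches(x, s, edges) for x in nodes) and \
               any(reaches(t, y, edges) for y in nodes)
    loose = {e for e in eids
             if edges[e][0] not in nodes and edges[e][1] not in nodes}
    # (1) no directed path through a loose edge and another subgraph edge
    for e in loose:
        for f in eids - {e}:
            te, sf = edges[e][1], edges[f][0]
            tf, se = edges[f][1], edges[e][0]
            if (te is not None and sf is not None and reaches(te, sf, edges)) \
               or (tf is not None and se is not None and reaches(tf, se, edges)):
                return False
    # (2) every ambient edge at a subgraph node is included
    if any((s in nodes or t in nodes) and e not in eids
           for e, (s, t) in edges.items()):
        return False
    # (3) paths between subgraph nodes stay inside: no missing edge or node
    for e, (s, t) in edges.items():
        if e not in eids and s is not None and t is not None and between(s, t):
            return False
    all_nodes = {n for st in edges.values() for n in st if n is not None}
    return not any(between(n, n) for n in all_nodes - nodes)

edges = {"d": (None, "x"), "e1": ("x", "y"),
         "e2": ("y", "z"), "c": ("z", None)}
assert is_normal({"y"}, {"e1", "e2"}, edges)          # a normal subgraph
assert not is_normal({"x", "z"}, set(edges), edges)   # y is skipped over
```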
\begin{lem}\label{full:lay}
A subgraph $\Gamma' \subseteq \Gamma$ is normal if and only if there is a layer $\Gamma[L, M]$ in $\Gamma$ such that $\Gamma' \subseteq \Gamma[L, M]$ and $\Gamma[L, M]^\flat = \Gamma'^{\flat}$.
\end{lem}
\begin{proof}
For the ``if'' direction, it is enough to show that $\Gamma[L, M]$ is normal. First, let $p$ be a directed path in $\Gamma$. Suppose for contradiction that $p$ contains two distinct edges $e$, $e'$ in $\Gamma[L, M]$, where $e$ is loose in $\Gamma[L, M]$. We have, in $\dot \Gamma$, either $e(1) \leq e'(0)$ or $e'(1) \leq e(0)$. For the first case, since $e$ is cut by $M$, we have $e(1) \in \dot \Gamma \smallsetminus M$ and hence $e'(0) \in \dot \Gamma \smallsetminus M$, which entails that $e'$ cannot be in $\Gamma[L, M]$, contradiction. Similarly, for the second case, since $e$ is cut by $L$ as well, we have $e(0) \in L$ and hence $e'(1) \in L$, so $e'$ cannot be in $\Gamma[L, M]$, contradiction again. Next, if $p(0), p(1) \in \Gamma[L, M]_0 = M \smallsetminus L$ then clearly $p$ belongs to $\Gamma[L, M]$. Lastly, for any $x \in \Gamma[L, M]_0$, if $e \in \inn_\Gamma(x)$ is cut by $L$ then it belongs to $\Gamma[L, M]$ by definition, otherwise $e(0) \in M \smallsetminus L$ and hence $e$ still belongs to $\Gamma[L, M]$ by definition, similarly for $e \in \out_\Gamma(x)$. So $\Gamma[L, M]$ is normal.
For the ``only if'' direction, let $\Gamma' \subseteq \Gamma$ be normal. Let $L$ be the set of the nodes $x \in \dot \Gamma$ such that either $x \notin \Gamma'_0$ and $x < y$ for some $y \in \Gamma'_0$ or $x \leq e(0) \in \dot \Gamma$ for some loose edge $e$ in $\Gamma'$. If the first case holds and $z < x$ then $z \notin \Gamma'_0$, for otherwise $z < x < y$ and hence $x \in \Gamma'_0$ too by the third condition in Definition~\ref{def:full}, contradiction. So $L$ is a level. Actually, if $x \leq e(0) \in \dot \Gamma$ for some loose edge $e$ in $\Gamma'$ then $x \notin \Gamma'_0$, for otherwise $x < e(0)$ and there would be a directed path in $\Gamma$ that contains $e$ and an edge $e'$ at $x$, and since $e' \in \Gamma'$ by the second condition in Definition~\ref{def:full}, this contradicts the first condition therein. So $L \cap \Gamma'_0 = \emptyset$. Now, let $M$ be the set of the nodes $x \in \dot \Gamma$ such that $x \leq y$ for some $y \in \Gamma'_0$ or $x \leq e(0) \in \dot \Gamma$ for some loose edge $e$ in $\Gamma'$. Clearly $M$ is a level and contains $L$. We have $M \smallsetminus L = \Gamma'_0$.
Since $\Gamma[L, M]^{\flat}$ is normal too, we have $\Gamma[L, M]^{\flat} = \Gamma'^{\flat}$. For any loose edge $e$ in $\Gamma'$, if $e(1) \in M$ then there would be a directed path in $\Gamma$ that contains two distinct edges in $\Gamma'$, one of which is $e$, contradicting the first condition in Definition~\ref{def:full}, so $e$ is cut by both $L$ and $M$, in other words, $e$ is a loose edge in $\Gamma[L, M]$. So $\Gamma' \subseteq \Gamma[L, M]$.
\end{proof}
Denote by $\Gamma'^{\sharp}$ the layer $\Gamma[L, M]$ constructed from $\Gamma'$ in the latter half of the proof above.
So normal subgraphs are almost the same as layers. The point, of course, is that they are more intuitive to work with.
Suppose that $\Gamma' \subseteq \Gamma$ is a normal subgraph. Let $\phi : \Omega \longrightarrow \Gamma'$ be an isomorphism of graphs. Let $\Lambda$ be a graph and
\[
\alpha : \dom(\Lambda) \longrightarrow \dom(\Omega), \quad \beta: \cod(\Lambda) \longrightarrow \cod(\Omega)
\]
bijections. We can graft $\Lambda$ onto $\Gamma$ by identifying the edges in $\dom(\Lambda)$, $\cod(\Lambda)$ with the corresponding edges in $\Gamma'$ --- for this to work, the loose edges in $\Gamma'$ need to be duplicated, with one copy each in $\dom(\Gamma')$ and $\cod(\Gamma')$ --- and then cutting out the rest of $\Gamma'$ from $\Gamma$. We call this operation the \memph{$(\Omega / \Lambda, \phi, \alpha, \beta)$-surgery} on $\Gamma$ and write $\Gamma^\Omega_\Lambda(\phi, \alpha, \beta)$ for the resulting graph, where $\phi$, $\alpha$, and $\beta$ shall be dropped from the notation when the context is clear; the triple $(\Omega / \Lambda, \alpha, \beta)$ is referred to as the \memph{template} of the surgery.
Clearly there are induced bijections
\[
\alpha^\sharp : \dom(\Gamma^\Omega_\Lambda) \longrightarrow \dom(\Gamma), \quad \beta^\sharp: \cod(\Gamma^\Omega_\Lambda) \longrightarrow \cod(\Gamma)
\]
and hence $\Gamma^\Omega_\Lambda$ is indeed naturally anchored; in particular, if $\Lambda$ is isomorphic to $\Omega$ (as polarized graphs) then $\Gamma^\Omega_\Lambda$ is isomorphic to $\Gamma$ (as anchored polarized graphs). If the template is empty, that is, if $\Lambda = \Omega = \emptyset$, then $\Gamma^\Omega_\Lambda = \Gamma$.
Let $G$ be a free graph monoid. Let $v : \Gamma \longrightarrow G$, $w : \Lambda \longrightarrow G$, and $u : \Omega \longrightarrow G$ be valuations. Let $\phi : (\Omega, u) \longrightarrow (\Gamma', v')$ be an isomorphism of diagrams, where $v'$ is the restriction of $v$ to $\Gamma'$. Suppose that $u$, $w$ are compatible with respect to $\alpha$ and $\beta$, that is, $w(e) = (u \circ \alpha)(e)$ for $e \in \dom(\Lambda)$, and similarly for $e \in \cod(\Lambda)$. Since $\Gamma' \subseteq \Gamma$ is normal, there is an induced valuation $v_w : \Gamma^\Omega_\Lambda \longrightarrow G$ fusing $v$, $w$ together such that $v_w(e) = (v \circ \alpha^\sharp)( e)$ for $e \in \dom(\Gamma^\Omega_\Lambda)$, and similarly for $e \in \cod(\Gamma^\Omega_\Lambda)$. So \[
\dom(\Gamma^\Omega_\Lambda, v_w) = \dom(\Gamma, v) \quad \text{and} \quad \cod(\Gamma^\Omega_\Lambda, v_w) = \cod(\Gamma, v).
\]
The transition from the diagram $(\Gamma, v)$ to the diagram $(\Gamma^\Omega_\Lambda, v_w)$ is called the \memph{$((\Omega, u) / (\Lambda, w), \phi, \alpha, \beta)$-surgery} on $(\Gamma, v)$ with the \memph{template} $((\Omega, u) / (\Lambda, w), \alpha, \beta)$; again, we usually only show $\Omega / \Lambda$ in notation.
Observe that a surgery can be reversed (by another surgery) in the sense that the resulting diagram is isomorphic to the one we started with.
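As a toy illustration of a surgery (our own encoding, with the counitality equation as the template $\Omega / \Lambda$), the following sketch cuts out a duplicate node followed by a discard on one copy and grafts in a bare wire:

```python
# A hedged toy sketch (ours): one Markov surgery on an edge-dict encoding,
# rewriting the pattern "duplicate, then discard one copy" into a bare wire,
# i.e. applying the counitality equation as a surgery template.

diagram = {
    "in": (None, "dup"),    # wire into the duplicate node
    "w1": ("dup", "del"),   # first copy, fed to a discard node
    "w2": ("dup", None),    # second copy, exits the diagram
}
labels = {"dup": "duplicate", "del": "discard"}

def counitality_surgery(edges, labels, dup, dele):
    """Cut out the nodes {dup, dele} and graft in a single wire."""
    incoming = next(e for e, (s, t) in edges.items() if t == dup)
    surviving = next(e for e, (s, t) in edges.items()
                     if s == dup and t != dele)
    out = {e: st for e, st in edges.items()
           if dup not in st and dele not in st}
    # fuse the incoming wire with the surviving copy into one edge
    out["grafted"] = (edges[incoming][0], edges[surviving][1])
    new_labels = {n: l for n, l in labels.items() if n not in (dup, dele)}
    return out, new_labels

new_edges, new_labels = counitality_surgery(diagram, labels, "dup", "del")
assert new_edges == {"grafted": (None, None)}   # a single bare wire remains
assert new_labels == {}
```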
\begin{lem}\label{sur:decom}
Let $\Omega$, $\Lambda$ be as above. An anchored diagram $(\Xi, t)$ is the result of a surgery on $(\Gamma, v)$ with a template $\Omega / \Lambda$ if and only if there are
\begin{itemize}
\item a normal subgraph $\Gamma' \subseteq \Gamma$ isomorphic to $\Omega$,
\item total orderings that anchor $\Lambda$ and $\Gamma'$,
\item a valuation $w: \Lambda \longrightarrow G$ with $\dom(\Lambda, w) = \dom(\Gamma', v')$ and $\cod(\Lambda, w) = \cod(\Gamma', v')$,
\item morphisms $g$, $h$ in $\cat F_s(G)$
\end{itemize}
such that
\[
h \circ [\Gamma'^{\sharp}, v'^{\sharp}] \circ g = [\Gamma, v] \quad \text{and} \quad h \circ [\Lambda^{\sharp}, w^{\sharp}] \circ g = [\Xi, t],
\]
where $v'^{\sharp}$ is the restriction of $v$ to $\Gamma'^{\sharp}$, $\Lambda^{\sharp} = \Lambda \uplus (\Gamma'^{\sharp} \smallsetminus \Gamma')$, and $w^{\sharp} = w \uplus (v \upharpoonright (\Gamma'^{\sharp} \smallsetminus \Gamma'))$.
\end{lem}
\begin{proof}
The ``if'' direction is clear. For the ``only if'' direction, let $\Gamma' \subseteq \Gamma$ be the normal subgraph in question. By Lemma~\ref{full:lay}, $\Gamma' \subseteq \Gamma'^{\sharp} = \Gamma[L, M]$ for some levels $L$, $M$. Then
\begin{gather*}
[\Gamma, v] = [\Gamma[M, \dot \Gamma], v_M] \circ [\Gamma'^{\sharp}, v'^{\sharp}] \circ [\Gamma[\emptyset, L], v_L],\\
[\Xi, t] = [\Gamma[M, \dot \Gamma], v_M] \circ [\Lambda^{\sharp}, w^{\sharp}] \circ [\Gamma[\emptyset, L], v_L],
\end{gather*}
where $v_M$, $v_L$ are the induced valuations; here we have used the convention outlined in Remark~\ref{order:raley:com}. The lemma follows.
\end{proof}
We shall often denote a diagram simply by $\Gamma$ when the valuation may be left implicit. By the same token, the same symbol may denote both the diagram and the underlying graph.
\begin{defn}
Let $T$ be a \memph{symmetric} set of templates that contains the empty one; this just means that if $((\Omega, u) / (\Lambda, w), \alpha, \beta)$ is in $T$ then $( (\Lambda, w) / (\Omega, u), \alpha^{-1}, \beta^{-1})$ is also in $T$. A surgery with a template in $T$ is referred to as a \memph{$T$-surgery}. Two anchored diagrams $\Gamma$, $\Upsilon$ in $G$ are \memph{$T$-equivalent}, denoted by $\Gamma \leftrightsquigarrow_T \Upsilon$, if there is a sequence of anchored diagrams
\[
(\Gamma = \Gamma_1, \ldots, \Gamma_n = \Upsilon)
\]
such that, for each $i < n$, there is a $T$-surgery on $\Gamma_i$ whose result is isomorphic to $\Gamma_{i+1}$.
\end{defn}
If $[\Gamma] = [\Upsilon]$, $[\Gamma'] = [\Upsilon']$, and $\Gamma \leftrightsquigarrow_T \Gamma'$ then clearly $\Upsilon \leftrightsquigarrow_T \Upsilon'$. So we may treat $T$-equivalence as a relation on the set of objects of $\cat F_s(G)$. It is also easy to see that if $[\Gamma] \leftrightsquigarrow_T [\Gamma']$ and $[\Upsilon] \leftrightsquigarrow_T [\Upsilon']$ then
\begin{itemize}
\item $[\Upsilon] \otimes [\Gamma] \leftrightsquigarrow_T [\Upsilon'] \otimes [\Gamma']$,
\item $[\Upsilon] \circ [\Gamma] \leftrightsquigarrow_T [\Upsilon'] \circ [\Gamma']$ if the compositions are defined.
\end{itemize}
So $T$-equivalence is indeed a monoidal congruence relation on $\cat F_s(G)$; we denote it by $\bm S_T$. It follows that the quotient $\cat F_s(G) / \bm S_T$ is a symmetric strict monoidal category.
Let $\bm E_{T}$ be the monoidal congruence relation on $\cat F_s(G)$ generated by the pairs $([\Omega, u], [\Lambda, w])$, where $((\Omega, u) / (\Lambda, w), \alpha, \beta)$ runs through the templates in $T$ with any chosen orderings on the domains and codomains of $\Omega$, $\Lambda$ that are compatible with $\alpha$, $\beta$.
\begin{cor}\label{graph:small}
$\bm S_T = \bm E_{T}$.
\end{cor}
\begin{proof}
That $\bm S_T \subseteq \bm E_{T}$ is an easy consequence of Lemma~\ref{sur:decom}. That $\bm S_T \supseteq \bm E_{T}$ follows from a routine induction on how $\bm E_{T}$ is generated.
\end{proof}
\section{Constructing free Markov categories}
Suppose that $(G, S, g)$ is a free $\textup{DD}$-graph monoid.
The five equalities in (\ref{mon:law}) and (\ref{discar:nat}) give rise to the \memph{set of Markov templates} and the corresponding \memph{Markov surgeries}. We describe in detail what these templates are. For coassociativity,
\begin{equation}\label{temp:coass}
\begin{tikzpicture}[xscale = .65, yscale = .6, baseline=(current bounding box.center)]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (139) at (-1.625, 5.45) {};
\node [style=none] (141) at (-1.625, 5.325) {};
\node [style=none, label={above : $H$}] (144) at (-2.1, 6.125) {};
\node [style=none] (146) at (-2.1, 5.825) {};
\node [style=none] (246) at (-0.925, 4.8) {};
\node [style=none, label={below : $T$}] (247) at (-0.925, 4.425) {};
\node [style=none, label={above : $A$}] (249) at (-0.2, 6.1) {};
\node [style=none] (250) at (-0.2, 5.325) {};
\node [style=none, label={above : $C$}] (251) at (-1.15, 6.125) {};
\node [style=none] (252) at (-1.15, 5.825) {};
\node [style=none] (281) at (3.4, 5.175) {$=$};
\node [style=none] (298) at (6.1, 5.45) {};
\node [style=none] (299) at (6.1, 5.325) {};
\node [style=none, label={above : $A$}] (300) at (6.575, 6.125) {};
\node [style=none] (301) at (6.575, 5.825) {};
\node [style=none] (302) at (5.4, 4.8) {};
\node [style=none, label={below : $T$}] (303) at (5.4, 4.425) {};
\node [style=none, label={above : $H$}] (304) at (4.675, 6.1) {};
\node [style=none] (305) at (4.675, 5.325) {};
\node [style=none, label={above : $C$}] (306) at (5.625, 6.125) {};
\node [style=none] (307) at (5.625, 5.825) {};
\node [style=none] (338) at (-3.425, 5.175) {$=$};
\node [style=none] (339) at (-4.675, 5.175) {$\Lambda_a$};
\node [style=none] (340) at (2.075, 5.175) {$\Omega_a$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire] (141.center) to (139.center);
\draw [style=wire] (146.center) to (144.center);
\draw [style=wire] (247.center) to (246.center);
\draw [style=wire] (250.center) to (249.center);
\draw [style=wire] (252.center) to (251.center);
\draw [style=wire, bend right=90, looseness=1.25] (141.center) to (250.center);
\draw [style=wire, bend right=90, looseness=1.25] (146.center) to (252.center);
\draw [style=wire] (299.center) to (298.center);
\draw [style=wire] (301.center) to (300.center);
\draw [style=wire] (303.center) to (302.center);
\draw [style=wire] (305.center) to (304.center);
\draw [style=wire] (307.center) to (306.center);
\draw [style=wire, bend left=90, looseness=1.25] (299.center) to (305.center);
\draw [style=wire, bend left=90, looseness=1.25] (301.center) to (307.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{equation}
where the subscript ``$a$'' indicates the element in the alphabet $S$ as well as the corresponding duplicate in question, the ordering on the output of each node is given from left to right, and the uppercase letters indicate what the bijections $\alpha$, $\beta$ are (we use letters instead of numerals to avoid unintended orderings). For left counitality,
\begin{equation}\label{temp:counit}
\begin{tikzpicture}[xscale = .65, yscale = .4, baseline=(current bounding box.center)]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (314) at (8.7, 5.675) {$=$};
\node [style=none, label={above : $H$}] (316) at (4.5, 6.6) {};
\node [style=none, label={below : $C$}] (317) at (4.5, 4.925) {};
\node [style=none] (324) at (11.525, 6.5) {$\bullet$};
\node [style=none] (325) at (11.525, 5.575) {};
\node [style=none] (326) at (10.825, 5.05) {};
\node [style=none, label={below : $C$}] (327) at (10.825, 4.425) {};
\node [style=none, label={above : $H$}] (328) at (10.1, 6.6) {};
\node [style=none] (329) at (10.1, 5.575) {};
\node [style=none] (330) at (7.6, 5.675) {$\Omega_a$};
\node [style=none] (331) at (2.15, 5.675) {$\Lambda_a$};
\node [style=none] (332) at (3.15, 5.675) {$=$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire] (317.center) to (316.center);
\draw [style=wire] (325.center) to (324.center);
\draw [style=wire] (327.center) to (326.center);
\draw [style=wire] (329.center) to (328.center);
\draw [style=wire, bend left=90, looseness=1.25] (325.center) to (329.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{equation}
where the dot indicates that the discard $\epsilon_a$ is assigned to the terminal node; similarly for right counitality. For cocommutativity,
\begin{equation}\label{temp:cocom}
\begin{tikzpicture}[xscale = .65, yscale = .4, baseline=(current bounding box.center)]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (314) at (9.2, 5.675) {$=$};
\node [style=none, label={above : $H$}] (324) at (12.025, 6.625) {};
\node [style=none] (325) at (12.025, 5.575) {};
\node [style=none] (326) at (11.325, 5.05) {};
\node [style=none, label={below : $A$}] (327) at (11.325, 4.425) {};
\node [style=none, label={above : $C$}] (328) at (10.6, 6.6) {};
\node [style=none] (329) at (10.6, 5.575) {};
\node [style=none] (330) at (8.1, 5.675) {$\Omega_a$};
\node [style=none] (331) at (2.15, 5.675) {$\Lambda_a$};
\node [style=none] (332) at (3.15, 5.675) {$=$};
\node [style=none, label={above : $C$}] (333) at (5.95, 6.625) {};
\node [style=none] (334) at (5.95, 5.575) {};
\node [style=none] (335) at (5.25, 5.05) {};
\node [style=none, label={below : $A$}] (336) at (5.25, 4.425) {};
\node [style=none, label={above : $H$}] (337) at (4.525, 6.6) {};
\node [style=none] (338) at (4.525, 5.575) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire] (325.center) to (324.center);
\draw [style=wire] (327.center) to (326.center);
\draw [style=wire] (329.center) to (328.center);
\draw [style=wire, bend left=90, looseness=1.25] (325.center) to (329.center);
\draw [style=wire] (334.center) to (333.center);
\draw [style=wire] (336.center) to (335.center);
\draw [style=wire] (338.center) to (337.center);
\draw [style=wire, bend left=90, looseness=1.25] (334.center) to (338.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{equation}
This may seem different from how cocommutativity is depicted in (\ref{mon:law}). As we have explained in Remark~\ref{free:sys:mo:string}, the crossing is an artifact of representing graphs on a plane and may be dispensed with if we spell out the orderings, which is all that really matters. Finally, for naturality of discard,
\begin{equation}\label{temp:discar}
\begin{tikzpicture}[xscale = .55, yscale = .5, baseline=(current bounding box.center)]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (314) at (7.525, 5.675) {$=$};
\node [style=none, label={below : $1$}] (324) at (2.15, 5) {};
\node [style=none] (325) at (2.15, 6.55) {$\bullet$};
\node [style=none] (328) at (8.9, 6.075) {};
\node [style=none] (329) at (8.9, 7.1) {$\bullet$};
\node [style=none] (330) at (6.425, 5.675) {$\Omega_f$};
\node [style=none] (331) at (-0.325, 5.675) {$\Lambda_f$};
\node [style=none] (332) at (0.675, 5.675) {$=$};
\node [style=none, label={below : $1$}] (337) at (8.9, 4.325) {};
\node [style=none] (338) at (8.9, 5.35) {};
\node [style=wide small box] (339) at (9.625, 5.75) {$f$};
\node [style=none] (340) at (10.35, 6.075) {};
\node [style=none] (341) at (10.35, 7.1) {$\bullet$};
\node [style=none, label={below : $k$}] (342) at (10.35, 4.325) {};
\node [style=none] (343) at (10.35, 5.35) {};
\node [style=none] (344) at (9.625, 6.75) {$\cdots$};
\node [style=none] (345) at (9.625, 4.7) {$\cdots$};
\node [style=none, label={below : $k$}] (346) at (3.65, 5) {};
\node [style=none] (347) at (3.65, 6.55) {$\bullet$};
\node [style=none] (348) at (2.9, 5.75) {$\cdots$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire] (325.center) to (324.center);
\draw [style=wire] (329.center) to (328.center);
\draw [style=wire] (338.center) to (337.center);
\draw [style=wire] (341.center) to (340.center);
\draw [style=wire] (343.center) to (342.center);
\draw [style=wire] (347.center) to (346.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{equation}
where the subscript ``$f$'' indicates the arrow in question and the numerals indicate the ordering on the input of $f$ as well as what the bijection $\alpha$ is (there is no need to depict $\beta$ as the graphs have the empty codomain). We also include the versions that switch the roles of $\Lambda$, $\Omega$ so that the set becomes symmetric. Denote the resulting monoidal congruence relation on $\cat F_s(G)$ by $\bm S_M$.
Let $a = a_1 \ldots a_n$ be an object in $\cat F_s(G)$, where $a_i \in S$. Let $\Gamma$ be the diagram that joins those of $\bigotimes_i \delta_{a_i}$, $(\bigotimes_i 1_{a_i}) \otimes (\bigotimes_i 1_{a_i})$ together by connecting the codomain of the former with the domain of the latter. Of course $\Gamma$ is not unique, as it depends on the order in which the strings are connected. On the other hand, in light of (\ref{temp:cocom}), all such $\Gamma$ represent the same morphism in the quotient category $\cat F_s(G) / \bm S_M$, which is denoted by $\delta_a$. Also set $\epsilon_a = \bigotimes_i \epsilon_{a_i}$ in $\cat F_s(G) / \bm S_M$. Then the equalities (\ref{mon:law}), (\ref{fins:frob}), and (\ref{discar:nat}) hold in $\cat F_s(G) / \bm S_M$ by construction; in other words, $\cat F_s(G) / \bm S_M$ is a Markov category.
Let $i_s : G \longrightarrow \cat F_s(G)$ be the obvious graph monoid homomorphism (see the proofs of \cite[Theorems~1.2, 2.3]{joyal1991geometry} for details); note that either one of the diagrams in (\ref{temp:cocom}) may be designated as the duplicate on $a$ in $\cat F_s(G)$, but the choice is manifestly unnatural, so we do not treat $i_s$ as a graph monoid homomorphism matching duplicates and discards. On the other hand,
\[
i_m = (- / \bm S_M ) \circ i_s: G \longrightarrow \cat F_s(G) / \bm S_M
\]
is a graph monoid homomorphism matching duplicates and discards.
Now, let $\cat D$ be a Markov category and $f : G \longrightarrow \cat D$ a graph monoid homomorphism matching duplicates and discards. By Theorem~\ref{free:sym}, there is a strong symmetric monoidal functor
\[
F_s : \cat F_s(G) \longrightarrow \cat D \quad \text{with} \quad f = F_s \circ i_s.
\]
Since $\cat D$ is a Markov category, by Corollary~\ref{graph:small}, we must have $F_s([\Upsilon]) = F_s([\Gamma])$ if $[\Upsilon] / \bm S_M = [\Gamma] / \bm S_M$. This yields a Markov functor $F_m : \cat F_s(G) / \bm S_M \longrightarrow \cat D$ with $f = F_m \circ i_m$. In summary, we have a commutative diagram
\begin{equation}\label{free:mark:dia}
\bfig
\Vtrianglepair(0,0)/<-`->`<-`<-`<-/<700,400>[\cat F_s(G) / \bm S_M`\cat F_s(G)`\cat D`G; - / \bm S_M`F_s`i_m`i_s`f]
\morphism(0,400)|a|/{@{->}@/^3em/}/<1400,0>[\cat F_s(G) / \bm S_M`\cat D;F_m]
\efig
\end{equation}
Since $F_s$ is unique up to a unique monoidal natural isomorphism, so is $F_m$. In conclusion, we have shown:
\begin{thm}\label{free:markov:quot}
The quotient category $\cat F_s(G) / \bm S_M$ is the free Markov category over $G$.
\end{thm}
More informatively, this means that the graphical language of symmetric monoidal categories over $G$, with the additional syntax depicted in (\ref{mon:law}), (\ref{fins:frob}), and (\ref{discar:nat}), up to Markov surgeries (instead of isomorphisms as in free symmetric monoidal categories and many other cases discussed in \cite{selinger2010survey}) forms the free Markov category over $G$, which is strict. Also, as pointed out after the proof of \cite[Theorem~1.2]{joyal1991geometry}, there are a unique symmetric strict monoidal functor $F_s$, and hence a unique such $F_m$, that make (\ref{free:mark:dia}) commute.
\begin{rem}\label{graph:lan:mar}
In terms of coherence, freeness may be reformulated as follows. A well-formed
equation between morphisms in the symbolic language of symmetric monoidal categories
follows from the axioms of Markov categories if and only if it holds, up to Markov surgeries, in the graphical language of symmetric monoidal categories.
\end{rem}
A node $x$ of an anchored diagram $(\Gamma, v)$ in $G$ is \memph{quasi-terminal} if either it is terminal or every maximal directed path $p$ in $\Gamma$ with $p(0) = x$ ends at a terminal node or, in case $v(x)$ is a duplicate, this is so for all such paths through one of its prongs. Denote the set of quasi-terminal nodes of $\Gamma$ by $\Delta_\Gamma$ and its complement by $\tilde \Delta_\Gamma = \Gamma_0 \smallsetminus \Delta_\Gamma$.
A \memph{splitter path} in $\Gamma$ is a concatenation of two directed paths joined at the starting nodes. Denote by $P_\Gamma$ the set of directed paths that end in $\cod(\Gamma)$ and by $S_\Gamma$ the set of splitter paths between edges in $\cod(\Gamma)$.
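The quasi-terminal nodes admit a simple reachability characterization: in a finite acyclic diagram, every maximal directed path from $x$ ends at a terminal node exactly when every sink reachable from $x$ is terminal. The following Python sketch illustrates this check under a hypothetical encoding of a diagram as a successor map plus a set of terminal nodes; the duplicate-prong clause of the definition is omitted for readability.

```python
from functools import lru_cache

def quasi_terminal_nodes(nodes, children, terminal):
    """Sketch of the set Delta_Gamma.

    `children` maps a node to its successors, `terminal` is the set of
    terminal nodes.  The duplicate-prong clause is omitted here.
    """

    @lru_cache(maxsize=None)
    def all_paths_end_terminal(x):
        kids = children.get(x, ())
        if not kids:
            return x in terminal  # a maximal path ends here
        return all(all_paths_end_terminal(y) for y in kids)

    return {x for x in nodes if x in terminal or all_paths_end_terminal(x)}
```

On this encoding, Markov minimality of $\Gamma$ amounts to the computed set containing no non-terminal nodes.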
For the next two lemmas, suppose that $(\Gamma, v)$, $(\Upsilon, w)$ are anchored diagrams belonging to the same $\bm S_M$-congruence class.
\begin{lem}\label{decor:mat}
There are
\begin{itemize}
\item a bijection $\pi : \tilde \Delta_\Gamma \longrightarrow \tilde \Delta_{\Upsilon}$ compatible with the valuations, that is, $v(x) = w(\pi(x))$ for all $x \in \tilde \Delta_\Gamma$;
\item a bijection $\dot \pi : P_\Gamma \longrightarrow P_{\Upsilon}$ compatible with $\pi$, that is, $\pi$ restricts to a bijection between the nodes in $\tilde \Delta_\Gamma$ belonging to $p \in P_\Gamma$ and those in $\tilde \Delta_\Upsilon$ belonging to $\dot \pi(p)$,
\item a bijection $\ddot \pi : S_\Gamma \longrightarrow S_{\Upsilon}$ compatible with $\pi$.
\end{itemize}
\end{lem}
\begin{proof}
All these are immediate by an induction on the least number of Markov surgeries required to get from $\Gamma$ to $\Upsilon$ and inspection of the Markov templates.
\end{proof}
Call the diagram $\Gamma$ \memph{Markov minimal} if every node in $\Delta_\Gamma$ is terminal; in that case, surgeries with the templates (\ref{temp:counit}), (\ref{temp:discar}) can no longer be applied in the $\Omega$-to-$\Lambda$ direction. It follows that there is a Markov minimal anchored diagram in every $\bm S_M$-congruence class.
We say that $\Upsilon$ is \memph{Markov congruent} to $\Gamma$ if they become isomorphic upon surgeries with the templates (\ref{temp:coass}), (\ref{temp:cocom}).
\begin{lem}\label{mar:min:com}
If $\Gamma$, $\Upsilon$ are Markov minimal then they are Markov congruent.
\end{lem}
\begin{proof}
We show this by an induction on the least number $n$ of Markov surgeries needed to beget $\Upsilon$ from $\Gamma$. The base case $n = 1$ is clear. For the inductive step, let $\sigma_1, \ldots, \sigma_n$ be a sequence of Markov surgeries that begets $\Upsilon$ from $\Gamma$ and $\Gamma_i$ the result of $\sigma_i$. Let $k_i$ be the size of $\Delta_{\Gamma_i}$. We may assume $k_i > 0$ for all $i < n$, for otherwise the claim would follow from the inductive hypothesis. Let $l < n-1$ be the least number such that $k_{l}, \ldots, k_n$ are strictly decreasing. So, for every $i > l$, $\sigma_i$ is a surgery with the template (\ref{temp:counit}) or (\ref{temp:discar}) applied in the $\Omega$-to-$\Lambda$ direction. Now, if $k_{l-1} = k_l$ then $\sigma_l$ is a surgery with the template (\ref{temp:coass}) or (\ref{temp:cocom}), in which case we can delete $\sigma_l$ and modify the surgeries $\sigma_i$, $i > l$, accordingly so as to obtain a shorter sequence that begets $\Upsilon$ from $\Gamma$, contradicting the choice of $n$. Similarly, if $k_{l-1} < k_l$ then $\sigma_l$ is a surgery with the template (\ref{temp:counit}) or (\ref{temp:discar}) applied in the $\Lambda$-to-$\Omega$ direction, in which case we can delete $\sigma_l$ together with $\sigma_{l'}$ for some $l < l' \leq n$ and obtain a contradiction. So $k_i = 0$ for all $i$. The lemma follows.
\end{proof}
\section{Effects in a Markov category}
Generalizing the construction of a causal conditional in \cite[\S~4]{Fong:thesis}, we shall define a class of morphisms in an arbitrary Markov category $\cat M$ that will be of central interest.
We first assume that $\cat M$ is \memph{straight}; this just means that $\cat M$ is strict and the monoid $(\cat M_0, \otimes, \mathbf{I})$ has no idempotents or elements of finite order other than $\mathbf{I}$; these two conditions are met in many natural situations where the results of this paper are intended for application. Note that the second condition holds if and only if $w^n \neq w^m$ for all $w \neq \mathbf{I}$ and all $n \neq m \geq 0$; here $w^n$ is a shorthand for the monoidal product of $n$ copies of $w$ itself; this includes the empty product $w^0 = \mathbf{I}$.
\begin{defn}
Let $W = ( w_1, \ldots, w_n )$ be a sequence of objects and $w = \bigotimes_i w_i$. A morphism $w \longrightarrow v$ in $\cat M$ is called a \memph{multiplier} on $(W, v)$ if it is generated from the duplicates, discards, symmetries, and identities on $w_i$, $1 \leq i \leq n$.
\end{defn}
\begin{rem}\label{mul:pow}
The straightness of $\cat M$ and (\ref{mon:law}) or, more intuitively, the coherence of the graphical language for Markov categories that has been established above, guarantee that if $n = 1$ then there is a unique multiplier on $(w, w^m)$, which is denoted by $\iota_{w \rightarrow w^m}$. In that case, the multiplier may be depicted as a diagram that contains exactly one terminal node when $m = 0$, no node when $m = 1$, and $i$ nodes for $1 \leq i \leq m -1$ when $m > 1$, where $i$ is chosen to be conducive to the situation at hand.
\end{rem}
\begin{rem}\label{uni:pro}
Suppose that $(\cat M_0, \otimes, \mathbf{I})$ is indeed a free monoid.
If every $w_i$ is in the alphabet and $w_i \neq w_j$ for all $i \neq j$ then there is again a unique multiplier $\iota_{w \rightarrow v}$ on $(W, v)$. For instance, if $w = w_1 w_2 w_3$ and $v = w_1^2 w_2 w_1 w_2$ then $\iota_{w \rightarrow v}$ may be depicted as the Markov minimal diagram
$\begin{tikzpicture}[xscale = .3, yscale = .3, baseline={([yshift=8pt]current bounding box.south)}]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (108) at (4.275, 0.15) {};
\node [style=none] (110) at (7.325, 0.15) {};
\node [style=none] (111) at (5.275, -1.575) {};
\node [style=none] (112) at (4.275, -0.1) {};
\node [style=none] (113) at (6.275, -0.65) {};
\node [style=none] (114) at (5.275, -0.95) {};
\node [style=none] (117) at (6.175, 0.15) {};
\node [style=none] (118) at (8.8, 0.15) {};
\node [style=none] (119) at (7.975, -1.575) {};
\node [style=none] (120) at (7.125, -0.6) {};
\node [style=none] (121) at (8.775, -0.35) {};
\node [style=none] (122) at (7.975, -1.075) {};
\node [style=none] (124) at (10.2, -1.55) {};
\node [style=none] (150) at (10.2, 0.025) {$\bullet$};
\node [style=none] (151) at (5.275, 0.15) {};
\node [style=none] (152) at (4.4, -1.5) {$w_1$};
\node [style=none] (153) at (7, -1.55) {$w_2$};
\node [style=none] (154) at (9.2, -1.5) {$w_3$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire, in=-135, out=-90] (112.center) to (113.center);
\draw [style=wire] (108.center) to (112.center);
\draw [style=wire, in=45, out=-90, looseness=1.25] (110.center) to (113.center);
\draw [style=wire, in=270, out=90] (111.center) to (114.center);
\draw [style=wire, in=-90, out=-60, looseness=1.25] (120.center) to (121.center);
\draw [style=wire, in=120, out=-90] (117.center) to (120.center);
\draw [style=wire, in=90, out=-90, looseness=1.25] (118.center) to (121.center);
\draw [style=wire] (119.center) to (122.center);
\draw [style=wire] (124.center) to (150.center);
\draw [style=wire] (114.center) to (151.center);
\end{pgfonlayer}
\end{tikzpicture}$, where how the duplicates in the trident are arranged, how the edges at the nodes are ordered, how the copies of the same object in the codomain are ordered, and so on, can all be left unspecified without any ambiguity.
If $w_i = w_j$ for some $i \neq j$ then multipliers may not be unique. In fact, this is so even if $w_i \neq w_j$ for all $i \neq j$, because there may be ``algebraic relations'' among $w_i$, $1 \leq i \leq n$, for example, $w_1w_2 = w_3$, unless every $w_i$ is in the alphabet, which then is the situation just discussed.
\end{rem}
A \memph{loop} in a directed graph $G = (V(G), A(G), s, t)$ is an arrow in $A(G)$ whose source and target coincide. Note that $G$ could still be acyclic even if it has loops, because a cycle in $G$ does not contain loops by definition. We shall work with directed graphs $G$ such that there is exactly one loop at each vertex $v$ in $G$, which is referred to as the \memph{identity loop} on $v$ and is denoted by $1_v$, for reasons that will become clear.
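Since every vertex now carries an identity loop, acyclicity must be tested after discarding self-arrows. A minimal Python sketch of this test, under a hypothetical encoding of a directed graph as a vertex list plus source-target pairs:

```python
def is_acyclic_modulo_loops(vertices, arrows):
    """True iff the graph has no cycles; cycles, by definition, contain
    no loops, so self-arrows (the identity loops) are ignored."""
    children = {v: [] for v in vertices}
    for u, v in arrows:
        if u != v:  # drop identity loops
            children[u].append(v)

    WHITE, GRAY, BLACK = 0, 1, 2
    state = {v: WHITE for v in vertices}

    def dfs(u):
        state[u] = GRAY
        for v in children[u]:
            if state[v] == GRAY:
                return False  # back edge: a genuine cycle
            if state[v] == WHITE and not dfs(v):
                return False
        state[u] = BLACK
        return True

    return all(state[v] != WHITE or dfs(v) for v in vertices)
```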
The category \cat{FinDAG} has finite directed acyclic graphs with identity loops as objects and graph homomorphisms between them as morphisms.
We have a more flexible notion of morphisms in \cat{FinDAG}, because arrows can be collapsed due to the presence of identity loops. For instance, under the standard definition, there can be no morphisms from any directed graph with a nonempty set of arrows to the directed graph with exactly one vertex and no arrows; on the other hand, the one-vertex graph is the terminal object in \cat{FinDAG} as defined above, that is, every object in \cat{FinDAG} comes with exactly one morphism to it.
\begin{defn}\label{gen:condi}
Let $W = (w_1, \ldots, w_n)$ be a sequence of objects with $w_i \neq \mathbf{I}$ for all $i$. For each $S \subseteq \{1, \ldots, n\}$, let $w_S = {\bigotimes_{i \in S}} w_i$, where $w_\emptyset = \mathbf{I}$. For each $i$, let $\kappa_{i} : w_{S_i} \longrightarrow w_i$ be a morphism with $i \notin S_i$; let $K$ be the set of these morphisms $\kappa_{i}$. We associate a directed graph $G$ with the pair $(W, K)$: the set $V(G)$ of vertices is $\{1, \ldots, n\}$ and there is an arrow $j \rightarrow i$ if and only if $j \in S_i$. Of course $G$ may or may not be cyclic. After adding identity loops, we do assume $G \in \cat{FinDAG}$. Among other things, this implies $S_i = \emptyset$ for some $i$; in that case, $\kappa_i$ shall be depicted as $\begin{tikzpicture}[xscale = .3, yscale = .3, baseline={([yshift=6pt]current bounding box.south)}]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (124) at (9.825, -1.55) {$\bullet$};
\node [style=none] (150) at (9.825, 0.025) {};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire] (124.center) to (150.center);
\end{pgfonlayer}
\end{tikzpicture}$.
Let $S, T \subseteq V(G)$. Let $G_{S \rightarrow T}$ be the subgraph of $G$ that consists of all the vertices in $S \cup T$ and all the directed paths that end in $T$ but do not \memph{travel toward} $S$, that is, do not pass through or end in $S$ (starting in $S$ is allowed). Note that, for every $i \in V(G_{S \rightarrow T})$, if $i \notin S$ then its parents in $G$ are all in $G_{S \rightarrow T}$ as well and if $i \in S$ then it has no parents in $G_{S \rightarrow T}$.
Construct a diagram in $\cat M$ as follows. For each $i \in V(G_{S \rightarrow T})$, let $\bar w_i$ be the monoidal product of as many copies of $w_i$ as the number of children of $i$ in $G_{S \rightarrow T}$. Let $\Gamma_i$ be a diagram of
\[
\begin{dcases*}
\iota_{w_i \rightarrow \bar w_i w_i} & if $i \in S \cap T$,\\
\iota_{w_i \rightarrow \bar w_i} & if $i \in S \smallsetminus T$,\\
\iota_{w_i \rightarrow \bar w_i w_i} \circ \kappa_i & if $i \in T \smallsetminus S$,\\
\iota_{w_i \rightarrow \bar w_i} \circ \kappa_i & if $i \notin S \cup T$,
\end{dcases*}
\]
where in the last two cases the diagram for $\kappa_i$ is the obvious one; note the extra copy of $w_i$ in the codomain of $\iota_{w_i \rightarrow \bar w_i w_i}$. Recall from Remark~\ref{mul:pow} that the multipliers used here are unique and there is no need to specify orderings for their codomains in the diagrams. For each $j \in V(G_{S \rightarrow T})$, let $o_j$ be the number of edges in the codomain of $\Gamma_j$ and $p_j$ the number of all the edges with the value $w_j$ in the domains of all the other diagrams $\Gamma_i$. Observe that $o_j = p_j + 1$ if $j \in T$ and $o_j = p_j$ in all other cases. So we may identify the corresponding edges and fuse these \memph{components} $\Gamma_i$ together into a single diagram, denoted by $\Gamma_{[w_T \| w_S]_{(W, K)}}$, with the domain $w_S$ and the codomain $w_T$.
The diagram thus obtained is Markov minimal. It may not be unique up to isomorphisms, but is unique up to Markov congruence. So it represents a unique morphism, which is referred to as the \memph{$(W, K)$-effect} of $w_S$ on $w_T$ and is denoted by $[w_T \| w_S]_{(W, K)} : w_S \longrightarrow w_T$, or simply $[w_T]_{(W, K)}$ when $S = \emptyset$.
\end{defn}
We shall write $[w_T \| w_S]$ simply as $[T \| S]$ when the context is clear. The subscript $(W, K)$ may be dropped from the notation too.
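For illustration, the vertex set of $G_{S \rightarrow T}$ can be computed by a backward search from $T$ that refuses to continue past $S$: a vertex lies on a qualifying path precisely when it reaches $T$ along vertices that, strictly after itself, avoid $S$. The Python sketch below uses a hypothetical encoding (vertices plus source-target pairs) and computes the vertex set only.

```python
def effect_subgraph_vertices(vertices, arrows, S, T):
    """Sketch of V(G_{S->T}): S and T together with every vertex on a
    directed path ending in T that neither passes through nor ends in S
    (starting in S is allowed).  Identity loops are ignored."""
    preds = {v: set() for v in vertices}
    for u, v in arrows:
        if u != v:
            preds[v].add(u)

    # Backward search from T; only non-S vertices may be traversed,
    # since a qualifying path avoids S everywhere after its start.
    reaches = {t for t in T if t not in S}
    frontier = set(reaches)
    while frontier:
        nxt = set()
        for x in frontier:
            for u in preds[x]:
                if u not in S and u not in reaches:
                    reaches.add(u)
                    nxt.add(u)
        frontier = nxt
    return set(S) | set(T) | reaches
```

On this encoding one can check directly the observation made in the definition: every non-$S$ vertex of the result has all of its parents in the result as well.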
\begin{lem}\label{graph:split}
For every $i \in V(G_{S \rightarrow T}) \smallsetminus S$ there are a level $L$ of $\Gamma_{[T \| S]}$ and a subset $T_i$ of $V(G_{S \rightarrow T})$ containing $i$ such that
\begin{itemize}
\item $V(G_{S \rightarrow T_i})$ does not contain any child of $i$ in $G_{S \rightarrow T}$,
\item $\Gamma_{[T \| S]}[\emptyset, L] = \Gamma_{[{T_i} \| S]}$ and $\Gamma_{[T \| S]}[L, \dot \Gamma_{[T \| S]}] = \Gamma_{[{T} \| {T_i}]}$.
\end{itemize}
\end{lem}
\begin{proof}
To begin with, note that the component $\Gamma_i \subseteq \Gamma_{[T \| S]}$ is a normal subgraph. Let $L$ be the level of $\Gamma_{[T \| S]}$ as constructed in the latter half of the proof of Lemma~\ref{full:lay}; more precisely, $L$ consists of those nodes $x \in \dot \Gamma_{[T \| S]}$ such that $x \leq y_i \in (\Gamma_i)_0$, where $y_i$ is the least node in $\dot \Gamma_i$, which exists if $(\Gamma_i)_0 \neq \emptyset$, in particular, if $i \notin S$. Let $T_i \subseteq V(G_{S \rightarrow T})$ such that $j \in T_i$ if and only if there is an edge in $\cut(L)$ with the value $w_j$. So $i \in T_i$, and no child $k$ of $i$ in $G_{S \rightarrow T}$ can be an ancestor of any $j \in T_i$ in $G$, in other words, $k \notin V(G_{S \rightarrow T_i})$. Let $S_i \subseteq S$ such that $j \in S_i$ if and only if there is an edge in $\cut(\emptyset) \cap \cut(L)$, that is, a loose edge of $\Gamma_{[T \| S]}[\emptyset, L]$, with the value $w_j$. So every vertex in $(T_i \cup S) \smallsetminus S_i$ is an ancestor of $i$ in $G_{S \rightarrow T}$ (vertices are ancestors of themselves).
Consider any $j \in V(G_{S \rightarrow T_i})$. Assume $j \notin S_i$. If $j \notin T_i \cup S$ then it must be an ancestor of some $k \in T_i \smallsetminus S_i$ in $G_{S \rightarrow T_i}$. So, at any rate, $j$ is an ancestor of $i$ in $G_{S \rightarrow T}$. It follows that if $j \notin T_i$ then the component $\Gamma_j$ of $\Gamma_{[T \| S]}$ is entirely contained in $\Gamma_{[T \| S]}[\emptyset, L]$ and, more importantly, we have $e(1) \in L$ for every $e \in (\Gamma_j)_1$ and hence $\Gamma_j$ may be computed with respect to $G_{S \rightarrow T_i}$ too, that is, it may be regarded as a component of $\Gamma_{[{T_i} \| S]}$. If $j \in T_i$ then we may choose a suitable diagram of the multiplier in question (recall the last sentence of Remark~\ref{mul:pow}) and thereby assume that there is exactly one edge in $\cut(L)$ with the value $w_j$. In that situation, the portion of the component $\Gamma_j$ of $\Gamma_{[T \| S]}$ contained in $\Gamma_{[T \| S]}[\emptyset, L]$ is again the component $\Gamma_j$ of $\Gamma_{[{T_i} \| S]}$.
Now assume $j \in S_i$. Then, by the discussion above, any child of $j$ in $G_{S \rightarrow T_i}$ would be an ancestor of $i$ in $G_{S \rightarrow T}$, which contradicts the definition of $S_i$. So the component $\Gamma_j$ of $\Gamma_{[T_i \| S]}$ depicts the identity of $w_j$, which of course is the same as the loose edge in $\cut(\emptyset) \cap \cut(L)$.
Finally, if $w_k$ is the value of an edge of $\Gamma_{[T \| S]}[\emptyset, L]$ then $k$ must belong to $V(G_{S \rightarrow T_i})$. So we have shown $\Gamma_{[T \| S]}[\emptyset, L] = \Gamma_{[{T_i} \| S]}$.
By the choice of $\Gamma_j$ for each $j \in T_i \smallsetminus S_i$ made above, we may identify $\cut(L)$ with $T_i$. Then a similar analysis shows that $\Gamma_{[T \| S]}[L, \dot \Gamma_{[T \| S]}] = \Gamma_{[{T} \| {T_i}]}$.
\end{proof}
\begin{prop}\label{mar:fun:eff}
Let $\cat N$ be another straight Markov category and $F : \cat M \longrightarrow \cat N$ a strict Markov functor. Then
\[
F([T \| S]_{(W, K)}) = [T \| S]_{(F(W), F(K))}.
\]
\end{prop}
\begin{proof}
The directed acyclic graph associated with $(F(W), F(K))$ is also $G$ and hence we may take the diagrams $\Gamma_{[T \| S]_{(W, K)}}$, $\Gamma_{[T \| S]_{(F(W), F(K))}}$ to have the same underlying graph with different valuations in different categories.
The \memph{height} of a directed acyclic graph is the maximum number of edges in a directed path. We proceed by induction on the height $h$ of $G_{S \rightarrow T}$.
The case $h = 0$ is rather trivial. The case $h = 1$ cannot be reduced to the case $h = 0$, but it is straightforward to check since $\Gamma_{[T \| S]}$ is of the simple form
\begin{equation}\label{height:1}
\begin{tikzpicture}[xscale = .55, yscale = .55, baseline={(current bounding box.center)}]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (114) at (4.15, -1.725) {};
\node [style=none] (119) at (7.75, -1.15) {};
\node [style=none] (122) at (7.75, -0.925) {};
\node [style=none] (124) at (5.475, -1.725) {};
\node [style=none] (150) at (5.475, 0.025) {$\bullet$};
\node [style=none] (151) at (4.15, 0.125) {};
\node [style=small box] (152) at (8.3, -0.575) {$\kappa_{T^*}$};
\node [style=none] (155) at (8.3, -0.35) {};
\node [style=none] (156) at (8.3, 0.5) {};
\node [style=none] (157) at (6.625, -1.15) {};
\node [style=none] (158) at (6.625, 0.175) {};
\node [style=none] (160) at (7.2, -1.925) {};
\node [style=none] (161) at (7.2, -1.475) {};
\node [style=none] (162) at (8.3, -1.925) {};
\node [style=none] (163) at (8.3, -0.925) {};
\node [style=none] (164) at (8.85, -1.15) {};
\node [style=none] (165) at (8.85, -0.925) {};
\node [style=none] (166) at (9.975, -1.15) {};
\node [style=none] (167) at (9.975, 0.175) {};
\node [style=none] (168) at (9.4, -1.875) {$\bullet$};
\node [style=none] (169) at (9.4, -1.475) {};
\node [style=none] (170) at (11.175, 0.025) {};
\node [style=none] (171) at (11.175, -1.875) {$\bullet$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire] (119.center) to (122.center);
\draw [style=wire] (124.center) to (150.center);
\draw [style=wire] (114.center) to (151.center);
\draw [style=wire] (155.center) to (156.center);
\draw [style=wire] (157.center) to (158.center);
\draw [style=wire, in=-90, out=-90] (157.center) to (119.center);
\draw [style=wire] (160.center) to (161.center);
\draw [style=wire] (162.center) to (163.center);
\draw [style=wire] (164.center) to (165.center);
\draw [style=wire] (166.center) to (167.center);
\draw [style=wire, in=-90, out=-90] (166.center) to (164.center);
\draw [style=wire] (168.center) to (169.center);
\draw [style=wire] (170.center) to (171.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{equation}
where $T^*$ is the set of those vertices $i \in T \smallsetminus S$ with $\dom(\kappa_i) \neq \emptyset$ and $\kappa_{T^*} = \bigotimes_{i \in T^*} \kappa_i$; note that not all the multipliers are displayed herein.
For the inductive step $h > 1$, let $i$, $j$, and $k$ be three distinct vertices in $G_{S \rightarrow T}$ such that each is a child of the next. Let $T_j$ be as given by Lemma~\ref{graph:split}. Although $i$ does not belong to $G_{S \rightarrow T_j}$, the height of $G_{S \rightarrow T_j}$ may still be $h$. Anyway, if the heights of $G_{S \rightarrow T_j}$, $G_{T_j \rightarrow T}$ are indeed both less than $h$ then, by the inductive hypothesis,
\[
F([{T_j} \| S]_{(W, K)}) = [T_j \| S]_{(F(W), F(K))}, \quad F([{T} \| T_j]_{(W, K)}) = [T \| T_j]_{(F(W), F(K))}
\]
and hence, by Lemma~\ref{graph:split},
\[
\begin{split}
F([T \| S]_{(W, K)}) & = F([{T} \| T_j]_{(W, K)} \circ [T_j \| S]_{(W, K)})\\
& = [T \| T_j]_{(F(W), F(K))} \circ [T_j \| S]_{(F(W), F(K))} \\
& = [T \| S]_{(F(W), F(K))},
\end{split}
\]
where the last equality holds because the construction of $T_j$ only depends on $G_{S \rightarrow T}$, not the diagram. If either of the heights is equal to $h$ then we can break up the graph again in the same way, and eventually the height will have to drop.
\end{proof}
More generally, we shall consider \memph{twists} of $[T \| S]$, that is, a morphism of the form
\[
[\tau(T) \| \sigma(S)] = \iota_{w_T \rightarrow w_{\tau(T)}} \circ [T \| S] \circ \iota_{w_{\sigma(S)} \rightarrow w_S},
\]
where $\sigma$, $\tau$ are permutations of $S$, $T$ and $\iota_{w_T \rightarrow w_{\tau(T)}}$, $\iota_{w_{\sigma(S)} \rightarrow w_S}$ are the unique multipliers generated from symmetries. Such a twist is determined by the pair of permutations.
\begin{rem}
The construction in Definition~\ref{gen:condi} can still be carried out without assuming straightness, albeit not canonically, because we may have to choose monoidal products if $\cat M$ is not strict and multipliers if $w^n = w^m$ could hold for $w \neq \mathbf{I}$ and $n \neq m > 0$.
\end{rem}
All is not lost for an arbitrary Markov category $\cat M$. Following the discussion in \cite[\S~1.1]{joyal1991geometry}, we may first strictify $\cat M$ into $\str(\cat M)$ as follows (this is a special case of a construction in \cite[\S~2]{maclane1985coherence}). The objects of $\str(\cat M)$ are the words of the free monoid $W(\cat M_0)$. The \memph{evaluation} map $\eval : W(\cat M_0) \longrightarrow \cat M_0$ is given by induction on the length of the word:
\[
\eval(\emptyset) = \mathbf{I}, \quad \eval(a) = a \text{ for } a \in \cat M_0, \quad \eval(a_1 \ldots a_{n+1}) = \eval(a_1 \ldots a_{n}) \otimes a_{n+1}.
\]
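As a toy illustration (our own sketch, not part of the paper's formalism), the left-nested bracketing chosen by $\eval$ can be made explicit in Python, with strings standing in for the objects of $\cat M$ and a tagged tuple standing in for $\otimes$:

```python
from functools import reduce

UNIT = 'I'  # stand-in for the monoidal unit

def tensor(x, y):
    # a tagged pair, so the bracketing of the product stays visible
    return ('otimes', x, y)

def eval_word(word):
    # empty word evaluates to the unit; otherwise fold left-nested,
    # mirroring eval(a1 ... a_{n+1}) = eval(a1 ... a_n) (x) a_{n+1}
    if not word:
        return UNIT
    return reduce(tensor, word)

assert eval_word([]) == 'I'
assert eval_word(['a']) == 'a'
assert eval_word(['a', 'b', 'c']) == ('otimes', ('otimes', 'a', 'b'), 'c')
```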
The arrows $w \longrightarrow w'$ in $\str(\cat M)$ are the arrows $\eval(w) \longrightarrow \eval(w')$ in $\cat M$. The monoidal product $\bar \otimes$ in $\str(\cat M)$ is given by concatenation ${v} \botimes {w} = vw$ and
\begin{equation*}
\bfig
\square(0,0)/->`->`<-`->/<1000, 400>[\eval(vw)`\eval(v'w')`\eval(v) \otimes \eval(w)`\eval(v') \otimes \eval(w'); f \botimes g`\cong`\cong`f \otimes g]
\efig
\end{equation*}
where the vertical arrows are the unique coherence isomorphisms. There are obvious candidates for symmetries, duplicates, and discards in $\str(\cat M)$ and it follows from the MacLane coherence theorem that $\eval : \str(\cat M) \longrightarrow \cat M$ is indeed a Markov functor. Moreover, $\eval$ is full, faithful, and surjective on isomorphism classes, and hence is a comonoid equivalence (this is rather clear if we apply \cite[Proposition~10.16]{fritz2020synthetic}); its quasi-inverse is the obvious full embedding $\alp : \cat M \longrightarrow \str(\cat M)$, indeed, $\alp$ is a right inverse of $\eval$, since $\eval \circ \alp$ is the identity.
Now we may simply define an \memph{effect} in $\cat M$ over a sequence of objects $(a_1,
\ldots, a_n)$ to be a morphism of the form $\eval([T \| S]_{(W, K)}) : \eval(w_S) \longrightarrow \eval(w_T)$, where $\eval(w_i) = a_i$ for all $i$.
\section{Functoriality over ordered directed acyclic graphs}
In any directed acyclic graph $G$, the directed paths induce a partial ordering on $V(G)$. We say that $G$ is \memph{ordered} if $V(G)$ comes equipped with a total ordering that extends this partial ordering. The category \cat{FinODAG} has ordered directed acyclic graphs with identity loops as objects and order-preserving graph homomorphisms between them as morphisms. Let $G = (V, A, s, t, \sigma)$ be an object in \cat{FinODAG}, where the total ordering is expressed by the bijection $\sigma: V \longrightarrow \{1, \ldots, n\}$; we shall also write $V = (v_1, \ldots, v_n)$.
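The compatibility between the total ordering and the partial ordering induced by directed paths can be sketched as a small check (a toy Python illustration of our own; the names are not from the text):

```python
def is_ordered(vertices, arrows):
    """Check that the listed order of `vertices` extends the partial
    ordering induced by the directed arrows (identity loops ignored)."""
    pos = {v: i for i, v in enumerate(vertices)}
    return all(pos[s] < pos[t] for s, t in arrows if s != t)

# a three-vertex DAG: x -> y, x -> z, plus an identity loop on y
arrows = {('x', 'y'), ('x', 'z'), ('y', 'y')}
assert is_ordered(['x', 'y', 'z'], arrows)      # one admissible total order
assert is_ordered(['x', 'z', 'y'], arrows)      # another admissible one
assert not is_ordered(['y', 'x', 'z'], arrows)  # violates x -> y
```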
\begin{ter}\label{graph:bas}
Elements of the free monoid $W(V)$ are also referred to as \memph{variables} and those of length $1$, that is, the vertices in $V$, \memph{atomic variables}. If no atomic variable occurs more than once in a variable $v$ then $v$ is \memph{singular}; in particular, $\emptyset$ is singular.
Concatenation of two variables $v$, $w$ is also written as $v \otimes w$. As before, for each $S \subseteq \{1, \ldots, n\}$ let $v_S = \bigotimes_{i \in S} v_i$, where $v_\emptyset = \emptyset$. Write $v_{S'} \subseteq v_S$, or $v_{S'} \in v_S$ if $S'$ is a singleton, and say that $v_{S'}$ is a \memph{sub-variable} of $v_S$ if $S' \subseteq S$. Write $v_{S} \cap v_{S'} = v_{S \cap S'}$, $v_S \smallsetminus v_{S'} = v_{S \smallsetminus S'}$, and so on.
\end{ter}
\begin{defn}\label{cau:gen}
The free $\textup{DD}$-graph monoid \memph{$W(G)$} has $W(V)$ as its set of vertices and, for each atomic variable $v \in V$, the following arrows added:
\[
\emptyset \toleft^{\epsilon_v} v \to^{\delta_v} vv, \quad \pa(v) \to^{\kappa_v} v,
\]
where $\pa(v)$ is the singular variable that contains exactly the parents of $v$, and is more accurately denoted by $\pa_G(v)$ if necessary. We refer to $\kappa_v$ as a \memph{causal mechanism}. If $\pa(v) = \emptyset$ then this adds an arrow $\emptyset \longrightarrow v$, which is called an \memph{exogenous} causal mechanism.
\end{defn}
Denote by $\cau(G)$ the free Markov category over $W(G)$. For convenience, set $\kappa_\emptyset = 1_\emptyset$ in $\cau(G)$.
We single out a special case of Definition~\ref{gen:condi}, which is the main object of our study below.
\begin{defn}\label{cau:condi}
Let $v$, $w$ be singular variables in $\cau(G)$ and $K$ the set of causal mechanisms. The $(V, K)$-effect of $v$ on $w$ in $\cau(G)$ is referred to as the \memph{causal effect} of $v$ on $w$ and is denoted simply by $[w \| v]$, or by $[w]$ when $v = \emptyset$, which is called the \memph{prior} on $w$.
\end{defn}
This notion was first introduced in \cite[\S~4]{Fong:thesis}, where it is called a causal conditional. We call it a causal effect because it is closely related to the eponymous notion in the literature on causality.
Let $\phi : H \longrightarrow G$ be a morphism in $\cat{FinODAG}$. We construct a graph monoid homomorphism
\[
f_\phi : W(G) \longrightarrow \cau(H)
\]
as follows. For any variable $v \in W(G)$, denote by $\phi^{-1}(v)$ the variable $\bigotimes_{w \in v} \bigotimes_{u \in \phi^{-1}(w)} u$ in $\cau(H)$, where ``$w \in v$'' enumerates the occurrences of atomic variables in $v$, following their order in $v$, and ``$u \in \phi^{-1}(w)$'' enumerates the elements of the set $\phi^{-1}(w)$, following their order in $V_H$. Clearly $\phi^{-1}(v)$ is singular if and only if $v$ is; in that case, if $v = v_S$ for some $S$ then the order in which the monoidal product $\phi^{-1}(v)$ is taken is the one induced by $V_H$, because morphisms in $\cat{FinODAG}$ preserve order. Set $f_\phi(v) = \phi^{-1}(v)$. If $v$ is atomic then set
\begin{equation}\label{sub:fun:gen}
\begin{gathered}
f_\phi(1_v) = 1_{\phi^{-1}(v)}, \quad f_\phi(\epsilon_v ) = \epsilon_{\phi^{-1}(v)}, \quad f_\phi(\delta_v ) = \delta_{\phi^{-1}(v)}, \\
f_\phi(\kappa_v) = [\phi^{-1}(v) \| \phi^{-1}(\pa(v))]_{(V_H, K_H)},
\end{gathered}
\end{equation}
where $1_v$ stands for the identity loop on $v$ and $K_H = \set{\kappa_{x} : x \in V_H}$.
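The action of $f_\phi$ on variables admits a direct toy implementation (a Python sketch of our own, with vertices modeled as strings and variables as tuples of occurrences; each preimage is enumerated in the total order of $V_H$, as in the definition):

```python
def preimage_variable(v, phi, V_H):
    # v: tuple of atomic variables of G, in order of occurrence (may repeat)
    # phi: dict mapping each vertex of H to its image in G
    # V_H: list of the vertices of H in their total order
    out = []
    for w in v:  # enumerate the occurrences in v, following their order
        out.extend(u for u in V_H if phi[u] == w)  # preimage, in V_H order
    return tuple(out)

phi = {'u1': 'w', 'u2': 'w', 'u3': 'z'}
V_H = ['u1', 'u2', 'u3']
assert preimage_variable(('w', 'z'), phi, V_H) == ('u1', 'u2', 'u3')
assert preimage_variable(('z', 'w'), phi, V_H) == ('u3', 'u1', 'u2')
```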
Note that $H_{\phi^{-1}(\pa(v)) \rightarrow \phi^{-1}(v)}$ does not contain any vertex that is not already in $\phi^{-1}(\pa(v))$ or $\phi^{-1}(v)$, but this does not mean that it is of height at most $1$.
\begin{defn}[Refinement]\label{phi:inter}
Since $\cau(G)$ is free, the graph monoid homomorphism $f_\phi$ induces a Markov functor, called the \memph{$\phi$-refinement}:
\[
\phi^* : \cau(G) \longrightarrow \cau(H).
\]
\end{defn}
In principle, $\phi^*$ is unique up to a unique monoidal natural isomorphism. Here it is simply unique because it must be strict (see the remark after Theorem~\ref{free:markov:quot}).
Heuristically, we think of $\phi$ as a substitution operation because the vertices of the graphs are thought of as the atomic variables in the corresponding ``syntactic categories'' for causal inference.
\begin{exam}\label{trea:two}
Let $v \in \cau(G)$ be a singular variable and $G_{\bar v}$ be the subgraph of $G$ with all the incoming arrows at $v$ removed. Let $i_v : G_{\bar v} \longrightarrow G$ be the obvious graph embedding. We refer to the $i_v$-refinement
\[
i_{v}^* : \cau(G) \longrightarrow \cau(G_{\bar v})
\]
as the \memph{$i_v$-intervention}. So, the only nontrivial assignment in (\ref{sub:fun:gen}) for the construction of $i_{v}^*$ is, for each $u \in v$,
\begin{equation}\label{inter:exam}
\begin{tikzpicture}[xscale = .5, yscale = .55, baseline=(current bounding box.center)]
\begin{pgfonlayer}{nodelayer}
\node [style=none] (210) at (1.9, 4.3) {};
\node [style=none] (211) at (1.9, 3.55) {};
\node [style=wide small box] (225) at (1.9, 3.025) {$\kappa_u$};
\node [style=none] (226) at (1.175, 2.575) {};
\node [style=none] (227) at (1.175, 1.775) {};
\node [style=none] (228) at (2.625, 2.575) {};
\node [style=none] (229) at (2.625, 1.775) {};
\node [style=none] (286) at (4.625, 2.925) {$\longmapsto$};
\node [style=none] (295) at (1.97, 2) {$\cdots$};
\node [style=none] (296) at (7.2, 4.3) {};
\node [style=none] (297) at (7.2, 3.55) {$\bullet$};
\node [style=none] (299) at (6.475, 2.575) {$\bullet$};
\node [style=none] (300) at (6.475, 1.775) {};
\node [style=none] (301) at (7.925, 2.575) {$\bullet$};
\node [style=none] (302) at (7.925, 1.775) {};
\node [style=none] (304) at (7.27, 2) {$\cdots$};
\node [style=none] (305) at (4.625, 0.425) {$i_{v}^*(\kappa_u) = \bar \kappa_u \otimes \epsilon_{\pa(u)}$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=wire] (210.center) to (211.center);
\draw [style=wire] (226.center) to (227.center);
\draw [style=wire] (228.center) to (229.center);
\draw [style=wire] (210.center) to (211.center);
\draw [style=wire] (226.center) to (227.center);
\draw [style=wire] (228.center) to (229.center);
\draw [style=wire] (296.center) to (297.center);
\draw [style=wire] (299.center) to (300.center);
\draw [style=wire] (301.center) to (302.center);
\draw [style=wire] (296.center) to (297.center);
\draw [style=wire] (299.center) to (300.center);
\draw [style=wire] (301.center) to (302.center);
\end{pgfonlayer}
\end{tikzpicture}
\end{equation}
where $\kappa_u$, $\bar \kappa_u$ are the causal mechanisms on $u$ in $\cau(G)$, $\cau(G_{\bar v})$, respectively.
In the definition of $\cau(G)$, we do not have an exogenous causal mechanism $\emptyset \longrightarrow w$ for every $w \in V_G$ unless $w$ has no parents in $G$. Of course, such generators, indeed, arbitrary generators, can be introduced to suit the task at hand. In that case, there is no need to go through this subgraph $G_{\bar v}$ anymore, since (\ref{inter:exam}) can already be defined within $\cau(G)$ itself so that $i_{v}^*$ becomes an endofunctor $\cau(G) \longrightarrow \cau(G)$. This is the approach adopted in \cite{jacobs2019causal}, where a specific interpretation is intended for these exogenous causal mechanisms, namely uniform probability distribution. Casting $i_{v}^*$ as an endofunctor is merely for convenience and elegance, though, not a technical necessity.
\end{exam}
\begin{prop}\label{cond:preserve}
Let $v$, $w$ be singular variables in $\cau(G)$. Then $\phi^*([w \| v]) = [\phi^*(w) \| \phi^*(v)]$.
\end{prop}
This assertion is a bit terse and may seem to be just a consequence of Proposition~\ref{mar:fun:eff}. But the situation becomes clearer if we do not abbreviate the notation:
\begin{equation}\label{comp:two}
\bfig \morphism(0,0)/=/<1450,0>[[\phi^*(w) \| \phi^*(v)]_{( \phi^*(V_G), \phi^*(K_G))}`\phi^*([w \| v]_{(V_G, K_G)}); \textup{Proposition~\ref{mar:fun:eff}}]
\morphism(1450,0)/=/<1330,0>[\phi^*([w \| v]_{(V_G, K_G)})`[\phi^*(w) \| \phi^*(v)]_{(V_H, K_H)}; \textup{Proposition~\ref{cond:preserve}}]
\efig
\end{equation}
where $K_G = \set{\kappa_{u} : u \in V_G}$.
\begin{proof}
We shall indeed show that the first and last effects in (\ref{comp:two}) are equal. To that end, let $\Gamma_{\phi^*G}$ be the diagram constructed from the diagram $\Gamma_{[\phi^*(w) \| \phi^*(v)]_{( \phi^*(V_G), \phi^*(K_G))}}$ by depicting each relevant $\phi_*(\kappa_y)$ as $\Gamma_{[\phi^{-1}(y) \| \phi^{-1}(\pa(y))]_{(V_H, K_H)}}$ and then rewriting the multipliers accordingly. After surgeries, we assume that $\Gamma_{\phi^*G}$ is Markov minimal. Abbreviate $\Gamma_{[\phi^*(w) \| \phi^*(v)]_{(V_H, K_H)}}$ as $\Gamma_{H}$; note that $\Gamma_{H}$ is already Markov minimal. Therefore, it is enough to show that $\Gamma_{\phi^*G}$ is Markov congruent to $\Gamma_{H}$.
If $p$ is a directed path in $H_{\phi^*(v) \rightarrow \phi^*(w)}$ that ends at some vertex $x \in V_H$ then $\phi(p)$ is a directed path in $G_{v \rightarrow w}$ that ends at $\phi(x) \in V_G$ and does not travel toward $v$, unless $p$ is completely collapsed by $\phi$, in which case $\phi(p)$ is the identity loop on $\phi(x)$. So $\phi(H_{\phi^*(v) \rightarrow \phi^*(w)})$ is a subgraph of $G_{v \rightarrow w}$. This implies that $\Gamma_{\phi^*G}$ has a unique node with the value $\kappa_x$ for every vertex $x \in V(H_{\phi^*(v) \rightarrow \phi^*(w)}) \smallsetminus \phi^*(v)$. Since $\Gamma_{\phi^*G}$ is Markov minimal, we see that if every multiplier involved is depicted as a diagram with at most one node (recall the last sentence of Remark~\ref{mul:pow}) then there is a bijection between the maximal directed paths in $\Gamma_{\phi^*G}$ and those in $\Gamma_{H}$ such that each matching pair thread through edges and nodes with the same values. It follows that $\Gamma_{\phi^*G}$ is Markov congruent to $\Gamma_{H}$.
\end{proof}
\begin{thm}[Functoriality of refinement]\label{int:func}
The construction of $\phi^*$ is functorial, that is, there is a contravariant functor
\[
\intv : \cat{FinODAG} \longrightarrow \cat{MarCat}
\]
sending each object $G \in \cat{FinODAG}$ to $\cau(G)$ and each morphism $\phi \in \cat{FinODAG}$ to $\phi^*$.
\end{thm}
\begin{proof}
We need to check that, for all morphisms $G'' \to^\psi G' \to^\phi G$ in $\cat{FinODAG}$, $\psi^* \circ \phi^* = (\phi \circ \psi)^*$ as symmetric strict monoidal functors. For objects, this follows from the fact that morphisms in $\cat{FinODAG}$ preserve order. For morphisms, since the categories are free, it is enough to check that, for all generators $g \in \cau(G)$, $\psi^*(\phi^*(g)) = (\phi \circ \psi)^*(g)$. This is clear if $g$ is $1_v$, $\epsilon_v$, or $\delta_v$. The case $g = \kappa_v$ follows from Proposition~\ref{cond:preserve}.
\end{proof}
\section{Introduction}
Spinning particles have been studied since the work of Frenkel
\cite{Fren}. For a review of results before 1968, we cite the book
\cite{Corben}; later studies can be found in \cite{Fryd}. For more
recent research, we refer to \cite{LSS, Silenko} and references
therein. The concept of a spinning particle admits generalizations to
low-dimensional space-times \cite{GKL1}, \cite{AL} and to higher
dimensions \cite{LSS6d, LSShd1, LSShd1/2}. For constant curvature
spaces, we cite \cite{KuLSS1}. It is believed that spinning particle
models provide a realistic quasi-classical description of the motion
of real elementary particles with spin and of localized twisted wave
packets \cite{KI}. For applications of the spinning particle concept
in high-energy and accelerator physics and in astrophysics, we mention
\cite{App1, App2, App3, App4, App5, App6, App7, App8, App9, App10}.
The class of spinning particle models whose quantization corresponds
to an irreducible representation of the Poincare group is of
particular interest; examples are given above. The
Kirillov-Kostant-Souriau method \cite{Kirillov, Kostant, Sour} tells
us that the quantization of a model leads to an irreducible
representation if its classical limit is a dynamical system on a
co-orbit of the corresponding group. In this setting, the state of the
irreducible particle is determined by the values of the momentum $p$
and total angular momentum $J$, subjected to the mass-shell and
spin-shell conditions. All the gauge-invariant observables are
functions of $p$ and $J$. The action functional is given by the
symplectic form on the co-orbit.
The majority of irreducible spinning particle models share one common
feature: the generalized coordinates include, besides the particle
position, a position in an internal space. The configuration space of
the model is given by the fiber bundle
$\mathbb{R}^{1,d-1}\times\mathcal{S}$, with the internal space
$\mathcal{S}$ being the typical fiber. The Lagrangian of the model is
a function of Lorentz-invariant combinations of the generalized
coordinates and their derivatives, typically up to first order. The
structure of the Lagrangian is selected by the requirement of
irreducibility of the Poincare group representation. The equations of
motion follow from the least action principle, and they inevitably
involve the internal coordinates. In all these models, the
relationship between the representation and the classical dynamics is
hidden in the structure of the Lagrangian. A universal dynamical
principle selecting the trajectories of a spinning particle has been
unknown for a long time.
In the current article, we consider spinning particle motion using the
recently proposed world sheet concept \cite{LK}. In that paper, it has
been shown that the classical trajectories of an irreducible spinning
particle inevitably lie on a cylindrical (hyper)surface in Minkowski
space, which was termed the world sheet of the spinning particle. The
shape of the world sheet is determined by the representation. The
world sheet position in space-time is determined by the values of the
momentum and total angular momentum, subjected to the mass-shell and
spin-shell conditions. The dynamical equations for the particle's
classical trajectories follow from the fact that all the world lines
lying on one and the same world sheet are connected by gauge
transformations. The resulting equations of motion are purely
geometrical relations on the particle world line, and they do not
involve extra variables.
The geometry of the world sheets of massive spinning particles has
been described in the original paper \cite{LK}. The world surfaces
have been shown to be toroidal cylinders of dimension $[(d+1)/2]$ with
timelike axis (the square brackets denote the integer part of the
enclosed number). In $d=3,4$, the world surfaces are circular
cylinders with timelike axis. The value of the momentum determines the
direction of the symmetry axis, and the value of the total angular
momentum defines the position of the cylinder in Minkowski space. The
spinning particle trajectories are general curves on the circular
cylinders. The geometrical equations of motion in $d=3$ have been
derived in \cite{LK}; in $d=4$, the same problem has been solved in
\cite{Ret1}. An alternative derivation of the equations of motion for
cylindrical curves has been given in \cite{Ret2}. In all these cases,
the differential equations involve invariants of the trajectory up to
fourth order in derivatives in a very complicated combination, and
they are known only in implicit form. The geometrical equations of
motion are non-Lagrangian, but they admit an equivalent variational
formulation with extra fields; this formulation reproduces the
previously known model \cite{GKL1}. The world sheet concept admits the
inclusion of interactions with an external electromagnetic field
\cite{KR1}. At the interaction level, the spinning particle paths
still lie on a two-dimensional hypersurface, but its shape depends on
the configuration of the external field.
In the articles \cite{NerRam1, NerRam2, NerMan}, it has been noted
that helices with a lightlike tangent vector and a timelike symmetry
axis are admissible trajectories for a massive particle. In the
framework of the world sheet formalism, these curves represent
cylindrical paths with a lightlike tangent vector. The geometrical
equations of motion for lightlike helical paths in $d=3,4$ were
derived in \cite{LK}, \cite{Ret1}. It has been shown that the
lightlike helices represent a special class of cylindrical world lines
with a reduced gauge symmetry. In so doing, each helix forms a gauge
equivalence class of its own. This suggests that the helices can be
considered as one-dimensional spinning particle world sheets. The
construction of \cite{NerRam1, NerRam2, NerMan} seems to be unique to
space-time dimensions $d=3,4$, and it applies only to massive
particles. We do not know geometrical models of massive particles with
lightlike trajectories in higher dimensions. As for massless
particles, their positions are known to lie on hyperplanes
\cite{Duval1, Duval2}, but the general fact has not been proven to
date.
In the present article, we apply the world sheet concept to the study
of the dynamics of massless spinning particles with continuous
helicity in $d=3$ Minkowski space. The world sheets are parabolic
cylinders with lightlike symmetry axis. The focal distance of the
parabolic cylinder is determined by the helicity. The world sheet
position is determined by the values of the momentum and total angular
momentum, and the geometrical meaning of $p$ and $J$ is explicitly
identified. Assuming that all the particle trajectories lying on one
and the same cylinder are connected by gauge transformations, we
derive the equations of motion for the curves with timelike tangent
vector on the world sheet. We show that the particle paths obey a
single differential equation involving invariants of the trajectory up
to fourth order. The equation of motion is non-Lagrangian, but it
admits an equivalent variational formulation with extra dynamical
variables, which has been previously known \cite{GKL1}. The lightlike
trajectories are identified with the paths of zero curvature; these
paths do not correspond to the trajectories of an irreducible
particle. The world sheets of the particle with zero mass and helicity
are shown to be hyperplanes; hence, the trajectories of such particles
are proven to be planar curves.
The article is organized as follows. In Section 2, we describe the
geometry of the world sheet of a massless particle with continuous
helicity in three-dimensional Minkowski space. In Section 3, we derive
the higher-derivative equations for a general cylindrical path on the
world sheet; both timelike and lightlike trajectories are considered.
In Section 4, we construct the Hamiltonian formulation for the model
and discuss the correspondence with the previously known model of such
a particle. In Section 5, we discuss the dynamics of massless
particles. The conclusion summarizes the results.
\section{Irreducibility conditions and world sheet}
We consider a spinning particle that travels in $3d$ Minkowski space.
The particle position is denoted by $x^{\,\mu}$, $\mu=0,1,2$; the
momentum is $p$, and the total angular momentum is $J$. We assume that
the quantization of the model corresponds to an irreducible
representation of the Poincare group with continuous helicity.
Irreducibility means that the momentum $p$ and total angular momentum
$J$ meet the mass-shell and spin-shell conditions,
\begin{equation}\label{ms-shell}
\phantom{\frac12\bigg(}(p,p)=0\,,\qquad (p,J)=\sigma\,.\phantom{\frac12\bigg(}
\end{equation}
Here, the round brackets denote the scalar product with respect to the
Minkowski metric. We use the mostly positive signature
\mbox{$\eta_{\,\mu\nu}=\text{diag}(-1,1,1)$} throughout the paper. The
constant parameter $\sigma$ is the helicity; the case $\sigma=0$
corresponds to the massless particle. The values of the momentum $p$
and total angular momentum $J$, subjected to conditions
(\ref{ms-shell}), determine the state of the spinning particle. The
space of all classical spinning particle states is associated with the
co-orbit (\ref{ms-shell}) of the Poincare group.
The vector of spin angular momentum $M$ is determined by the rule
\begin{equation}\label{Mdef}
\phantom{\frac12\bigg(}M=J-[x,p]\,,\phantom{\frac12\bigg(}
\end{equation}
where the square brackets denote the cross product in $3d$
space-time. We use the convention
\begin{equation}\label{vectprod}
\phantom{\frac12\bigg(}[u,v]=\epsilon_{\mu\nu\rho}u{}^\mu v{}^\nu dx{}^\rho\,,\qquad\epsilon_{012}=1\,,\phantom{\frac12\bigg(}
\end{equation}
where $\epsilon_{\mu\nu\rho}$ is the $3d$ Levi-Civita symbol, and $u$,
$v$ are test vectors. In accordance with our definition, the
representation for the double cross product of three test vectors $u$,
$v$, $w$ reads
\begin{equation}\label{}
\phantom{\frac12\bigg(}[u,[v,w]]=w(u,v)-v(u,w)\,.\phantom{\frac12\bigg(}
\end{equation}
We note that the last formula is sensitive to the particular choice
of the signature of the metric. The article \cite{LK} tells us that
the spin angular momentum vector $M$ must be normalized in every
irreducible spinning particle theory,
\begin{equation}\label{L-shell}
(M,M)=\varrho\,.
\end{equation}
The value of $\varrho$ distinguishes representations with one and
the same value of helicity $\sigma$ (\ref{ms-shell}).
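The double cross product identity, including its sign in the mostly positive signature, is easy to confirm numerically. The following sketch (our own check, not part of the original text) implements the conventions $(u,v)=\eta_{\mu\nu}u^\mu v^\nu$ and $[u,v]_\rho=\epsilon_{\mu\nu\rho}u^\mu v^\nu$, with the index raised by $\eta$:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0])  # mostly positive Minkowski metric

# Levi-Civita symbol with eps_{012} = 1
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

def dot(u, v):
    # scalar product (u, v) = eta_{mu nu} u^mu v^nu
    return u @ eta @ v

def cross(u, v):
    # covariant components eps_{mu nu rho} u^mu v^nu, index raised with eta
    return eta @ np.einsum('mnr,m,n->r', eps, u, v)

rng = np.random.default_rng(0)
u, v, w = rng.normal(size=(3, 3))

# the double cross product identity in this signature
lhs = cross(u, cross(v, w))
rhs = w * dot(u, v) - v * dot(u, w)
assert np.allclose(lhs, rhs)
```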
Conditions (\ref{ms-shell}) and (\ref{L-shell}) have immediate
consequences. Combining (\ref{L-shell}) with the definition
(\ref{Mdef}) of the spin angular momentum, we see that the set of
particle positions in the state with prescribed values of the momentum
$p$ and total angular momentum $J$ forms a hypersurface in Minkowski
space,
\begin{equation}\label{cylinder}
\phantom{\frac12}(p,x)^2+2(v,x)+a=0\,.\phantom{\frac12}
\end{equation}
Here, $p$ is the particle momentum, and quantities $v$, $a$ are
determined by the total angular momentum $J$ by the following rule:
\begin{equation}\label{v-a}
\phantom{\frac12}v=[J,p]\,,\qquad a=(J,J)-\varrho\,.\phantom{\frac12}
\end{equation}
As a consequence of (\ref{ms-shell}) and definition (\ref{v-a}), the
momentum $p$ is lightlike, and the vector $v$ is normalized and
orthogonal to $p$,
\begin{equation}\label{v-p-cond}
\phantom{\frac12}(v,p)=(p,p)=0\,,\qquad (v,v)-\sigma^2=0\,.\phantom{\frac12}
\end{equation}
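Relations (\ref{v-p-cond}), together with the equivalence between the spin-shell condition (\ref{L-shell}) and the world sheet equation (\ref{cylinder}), can be verified numerically. The following sketch (our own illustration; the sample state is arbitrary) uses the conventions introduced above:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0])
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

def dot(u, v):
    return u @ eta @ v

def cross(u, v):
    return eta @ np.einsum('mnr,m,n->r', eps, u, v)

rng = np.random.default_rng(1)
p = np.array([1.0, np.cos(0.4), np.sin(0.4)])  # lightlike momentum
J = rng.normal(size=3)                          # generic total angular momentum
varrho = 2.5                                    # spin-shell constant, arbitrary here

sigma = dot(p, J)       # helicity, from (p, J) = sigma
v = cross(J, p)         # v = [J, p]
a = dot(J, J) - varrho  # a = (J, J) - varrho

# conditions (v, p) = (p, p) = 0 and (v, v) = sigma^2
assert abs(dot(p, p)) < 1e-12
assert abs(dot(v, p)) < 1e-12
assert abs(dot(v, v) - sigma**2) < 1e-12

# the spin-shell condition (M, M) = varrho with M = J - [x, p]
# reproduces the world sheet equation at any position x
x = rng.normal(size=3)
M = J - cross(x, p)
assert abs((dot(M, M) - varrho) - (dot(p, x)**2 + 2*dot(v, x) + a)) < 1e-10
```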
The hypersurface defined by equation (\ref{cylinder}) is termed the
world sheet of the spinning particle. By construction, it includes all
the particle positions compatible with the momentum $p$ and total
angular momentum $J$.
The hypersurface defined by equation (\ref{cylinder}) is a parabolic
cylinder with lightlike axis. The lightlike vector $p$ determines the
direction of the symmetry axis. The spacelike vector $v$ determines
the direction of the asymptotes of the parabolas that form the
orthogonal sections of the cylinder. The focal distance of the
parabolic cylinder is determined by the helicity: it equals $\sigma$.
The quantity $a$ determines the distance between the vertex of the
parabolic cylinder and the origin. The cylinder parameters determine
the total angular momentum $J$ by the rule
\begin{equation}\label{J-a}
J=\frac{a+\varrho}{2\sigma}v+\sigma e\,.
\end{equation}
The relation involves an auxiliary lightlike vector $e$, which is
orthogonal to $v$ and has a normalized scalar product with $p$,
\begin{equation}\label{e-p-v}
\phantom{\frac12}(e,e)=(e,v)=0\,,\qquad (e,p)=1\,.\phantom{\frac12}
\end{equation}
Equation (\ref{cylinder}) shows that the classical positions of the
particle with continuous helicity lie on a parabolic hypercylinder
with lightlike axis in Minkowski space. The position of the
hypersurface is determined by the momentum and total angular momentum.
Formula (\ref{v-a}) expresses the cylinder parameters $p$, $v$, $a$ in
terms of $p$ and $J$; the correspondence between the quantities $p$,
$v$, $a$ and $p$, $J$ is a bijection. Formula (\ref{J-a}) determines
the momentum and total angular momentum of the particle in terms of
the cylinder parameters. This result means that the co-orbit of the
particle with continuous helicity can be parameterized by the set of
parabolic cylinders with lightlike axis, these being the world sheets.
The relationship between the world sheets and the co-orbit points is
purely geometrical: the world sheet is the hypersurface that includes
all the possible particle positions with given values of $p$ and $J$.
The converse is also true because the space-time position of the world
sheet determines the state of the particle.
The world sheet concept determines a dynamical principle that governs
the particle motion. Equation (\ref{cylinder}) represents the single
restriction imposed on the particle positions that follows from the
irreducibility condition of the Poincare group representation. This
means that all the points of the world sheet represent possible
particle positions, while the world paths must be general curves on
the world sheet. Assuming that all the curves lying on one and the
same world sheet are connected by gauge transformations, we associate
the classical paths of the continuous helicity particle with general
cylindrical lines on parabolic cylinders with lightlike axis. This
reduces the task of describing the particle paths by differential
equations to the problem of classification of cylindrical curves. We
elaborate on this problem in the next section.
\section{World paths on world sheets}
\subsection{Problem setting}
In the present section, we consider the problem of classification of
curves on parabolic cylinders with lightlike axis in Minkowski space.
We address three questions: (i) to derive a (system of) differential
equations describing the curves on the parabolic cylinder; (ii) to
express the parameters of the world sheet (and, hence, the particle
state) in terms of the derivatives of the world path; (iii) to
identify the gauge symmetries of the model. The problem is a
particular case of the more general task of describing the class of
curves lying on a given set of surfaces. The solution to questions
(i) and (ii) is well known in the differential geometry of curves; we
cite the textbook \cite{DiffGeom} for details. The solution to
question (iii) was given in \cite{LK} for the first time, even though
the problem is quite simple in itself.
The description of curves on a circular cylinder in $3d$ Euclidean
space was first studied in \cite{1966}. The problem has recently been
reconsidered in \cite{C1}, \cite{C2}, where it has been shown that the
cylindrical curves are described by a single scalar equation of fourth
order. In $3d$ Minkowski space, the cylindrical curves have been
studied in \cite{LK}; in $4d$ Minkowski space, the curves on $2d$
circular cylinders were classified in \cite{Ret1}. The articles
mentioned above use different approaches: the work \cite{C1} uses the
concept of constant separation curves, while the article \cite{C2}
assumes that the cylinder is determined by an algebraic equation. The
latter approach is best suited for the classification of spinning
particle trajectories because the world sheet is determined by the
algebraic equation (\ref{cylinder}).
In our classification of paths on parabolic cylinders, we mostly
follow the article \cite{C1}; some additional comments are given in
\cite{LK}. As the complete solution to the problem involves many
technical details, we first explain the general method; the
computational details are given in the next subsections. The world
sheet of the continuous helicity spinning particle is determined by
equation (\ref{cylinder}). The curve $x(\tau)$ lies on the cylinder
(\ref{cylinder}) if the equation of the hypersurface is satisfied for
all values of the parameter $\tau$. This implies an infinite set of
differential consequences,
\begin{equation}\label{dc}
\frac{d^k}{d\tau^k}\Big((p,x)^2+2(v,x)+a\Big|_{x=x(\tau)}\Big)=0, \qquad k=0,1,2,\ldots.
\end{equation}
These relations represent an overcomplete system of equations that
connects the derivatives of the trajectory and the cylinder
parameters. The relevant information is contained in the differential
consequences of orders $k=0,\ldots,4$: five equations for the four
independent components of $p$, $v$, $a$ subjected to
(\ref{v-p-cond}). Solving these equations with respect to $p$, $v$,
$a$, we express the parameters of the cylinder in terms of the
parameters of the trajectory; this solves problem (ii). The
consistency condition for the system (\ref{dc}) is a differential
equation satisfied by the cylindrical paths. This equation represents
a solution to problem (i); by construction, it involves the
derivatives of the world path up to fourth order. The higher-order
differential consequences (\ref{dc}) must follow from the lower-order
ones, so they are not independent. The gauge symmetries of the model
are generated by shifts along the cylindrical surface, so the gauge
generators are associated with the basis vectors of the tangent space
to the cylinder. This solves problem (iii).
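The scheme can be illustrated symbolically in a convenient frame. Assuming, purely for illustration, $p=(1,1,0)$ and $v=(0,0,\sigma)$, which satisfy (\ref{v-p-cond}), the world sheet equation (\ref{cylinder}) can be solved for $x^2$, and then every differential consequence (\ref{dc}) vanishes identically along any curve on the cylinder:

```python
import sympy as sp

tau, a = sp.symbols('tau a', real=True)
sigma = sp.symbols('sigma', positive=True)
f = sp.Function('f')  # arbitrary profile of the curve along the cylinder

# frame assumed for illustration: p = (1, 1, 0), v = (0, 0, sigma)
# in eta = diag(-1, 1, 1); then (p,p) = (v,p) = 0 and (v,v) = sigma**2
x0, x1 = tau, f(tau)
x2 = -((f(tau) - tau)**2 + a) / (2*sigma)  # solves the world sheet equation

px = -x0 + x1    # (p, x) in the mostly positive signature
vx = sigma * x2  # (v, x)
sheet = px**2 + 2*vx + a

assert sp.simplify(sheet) == 0
for k in range(1, 5):
    # differential consequences of orders k = 1, ..., 4 vanish identically
    assert sp.simplify(sp.diff(sheet, tau, k)) == 0
```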
The procedure described above has several subtleties. First, the world
lines representing the classical trajectories of spinning particles
must be causal. The causality condition $\dot{x}^0>0$ is imposed to
prevent the existence of closed world loops, which are considered
unphysical. Throughout the article, only causal curves are considered.
Second, the tangent vector to a curve in Minkowski space can be
timelike, lightlike, or spacelike. Our analysis shows that cylindrical
curves of different types are not connected by gauge transformations,
so the classification problems for spacelike, timelike, and lightlike
cylindrical curves represent different tasks. In the present work, we
consider the timelike and lightlike paths because they automatically
meet the causality condition. The spacelike curves can be included in
the general scheme in a similar way, but causality may be an issue.
Finally, the cylinders can intersect. An intersection line belongs to
several cylinders in the set (\ref{cylinder}), so equations (\ref{dc}) do
not have a unique solution with respect to the parameters $p$, $v$, $a$.
Such paths must be excluded because they do not determine a particle
state in an unambiguous way. For the set of parabolic cylinders, the
intersection is either a straight line with lightlike tangent vector or a
non-causal curve. The straight line appears if cylinders with one and the
same axis direction intersect; such a line belongs to an infinite number
of world sheets with the same direction of symmetry axis. The non-causal
curve appears if two cylinders with different directions of symmetry axis
intersect. All the mentioned paths are excluded.
In the paper \cite{LK}, the curves that lie on a unique representative in
the class of world sheets were termed typical. The curves that belong to
multiple world sheets were termed atypical.
The classification of timelike and lightlike lines on parabolic
cylinders presented in subsections 3.1 and 3.2 considers only
typical curves. The atypical curves (including lightlike straight
lines) are systematically ignored below.
\subsection{Timelike world lines on the parabolic cylinder}
Now we can proceed with explicit derivation of the equations of
cylindrical path. The differential consequences of (\ref{cylinder})
up to fourth order have the form
\begin{equation}\label{3}\begin{array}{c}\displaystyle
(\dot{x},n)=0\,,\quad(\ddot{x},n)+(\dot{x},p)^2=0\,,\quad(\dddot{x},n)+3(\dot{x},p)(\ddot{x},p)=0\,,\\[3mm]
(\ddddot{x},n)+4(\dot{x},p)(\dddot{x},p)+3(\ddot{x},p)^2=0\,.
\end{array}\end{equation}
Here, we use a notation,
\begin{equation}\label{pnapva}\begin{array}{c}\displaystyle
n=(x,p)p+v\,.
\end{array}\end{equation}
The new vector $n$ is subject to the conditions
\begin{equation}\label{p-n-cond}\begin{array}{c}\displaystyle
(p,n)=0\,,\qquad (n,n)=\sigma^2\,.
\end{array}\end{equation}
As we can see from the first equation, the vector $n$ defines the normal
to the tangent space of the world sheet at the point with coordinate $x$.
Since $n$ is spacelike, the tangent space has Lorentzian signature at
each point of the cylinder.
Let us turn to the description of cylindrical curves. Assume that
$x(\tau)$ is a timelike world line parameterized by the natural
parameter. The velocity vector is normalized,
\begin{equation}\label{timecond}
\phantom{\frac12}(\dot{x},\dot{x})=-1\,.\phantom{\frac12}
\end{equation}
Throughout the section, the dot denotes the derivative with respect to
the natural parameter $\tau$. The Frenet-Serret moving frame associated
with the timelike curve $x(\tau)$ reads
\begin{equation}\label{Frenet-frame}\begin{array}{c}\displaystyle
e_{0}=\dot{x},\qquad e_{1}=\frac{\ddot{x}}{\sqrt{(\ddot{x},\ddot{x})}}\,,\qquad
e_{2}=\frac{[\dot{x},\ddot{x}]}{\sqrt{(\ddot{x},\ddot{x})}}\,.
\end{array}\end{equation}
The basis vectors $e_a,a=0,1,2$ of the Frenet-Serret frame are
normalized and orthogonal to each other,
\begin{equation}\label{6}\begin{array}{c}\displaystyle
\phantom{\frac12}-(e_{0},e_{0})=(e_{1},e_{1})=(e_{2},e_{2})=1,\quad
(e_{0},e_{1})=(e_{0},e_{2})=(e_{1},e_{2})=0\,.\phantom{\frac12}
\end{array}\end{equation}
The vector $e_0$ is timelike, and the vectors $e_a,a=1,2$ are
spacelike. Condition (\ref{timecond}) and basis (\ref{Frenet-frame})
are well-defined for each timelike curve, which is not a straight
line. This does not restrict generality because no rectilinear paths
with timelike tangent vector lie on the parabolic cylinder with the
lightlike axis.
The Frenet-Serret formulas for the timelike curve $x(\tau)$ read
\begin{equation}\label{Frenet-formulas}
\dot{e}_0=\varkappa_1e_1\,,\qquad
\dot{e}_1=\varkappa_1e_0+\varkappa_2e_2\,,\qquad
\dot{e}_2=-\varkappa_2e_1\,.
\end{equation}
The curvature $\varkappa_1$ and torsion $\varkappa_2$ of the curve
are determined by the rule
\begin{equation}\label{7}\begin{array}{c}\displaystyle
\varkappa_{1}=\sqrt{(\ddot{x},\ddot{x})}\,,\qquad
\varkappa_{2}=\frac{(\dot{x},\ddot{x},\dddot{x})}{\sqrt{(\ddot{x},\ddot{x})}}\,.
\end{array}\end{equation}
By construction, the curvature $\varkappa_1$ is a positive number, and
the torsion $\varkappa_2$ is a real quantity. With account of the
Frenet-Serret formulas (\ref{Frenet-formulas}), the derivatives of the
particle position can be expressed as linear combinations of the basis
vectors (\ref{Frenet-frame}) with coefficients depending on the curvature
and torsion of the path and their derivatives,
\begin{equation}\label{8}\begin{array}{c}\displaystyle
\dot{x}=e_0\,,\qquad \ddot{x}=\varkappa_1e_1\,,\qquad
\dddot{x}=\varkappa_1{}^2e_0+\dot{\varkappa}_1e_1+\varkappa_1\varkappa_2e_2\,,\\[5mm]\displaystyle
\ddddot{x}=3\dot{\varkappa}_1\varkappa_1e_0+(\ddot\varkappa_1+\varkappa_1{}^3-\varkappa_1\varkappa{}_2{}^2)e_1+
(2\dot{\varkappa}_1\varkappa_2+\varkappa_1\dot{\varkappa}_2)e_2\,.
\end{array}\end{equation}
The representation for $\dddot{x}$ involves the derivative of
curvature $\dot{\varkappa}_1$. The representation for $\ddddot{x}$
involves the second derivative of curvature $\ddot{\varkappa}_1$,
and first derivative of torsion $\dot{\varkappa}_2$.
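These decompositions can be checked mechanically. A minimal sketch (assuming the sympy library; the helper name \texttt{d\_frame} is ours) represents a vector by its coefficients in the frame $e_0$, $e_1$, $e_2$ and differentiates using the Frenet-Serret formulas:

```python
import sympy as sp

tau = sp.symbols('tau')
k1 = sp.Function('k1')(tau)  # curvature varkappa_1
k2 = sp.Function('k2')(tau)  # torsion varkappa_2

def d_frame(c):
    """Derivative of a vector with frame coefficients c = [c0, c1, c2],
    using e0' = k1 e1, e1' = k1 e0 + k2 e2, e2' = -k2 e1."""
    c0, c1, c2 = c
    return [sp.diff(c0, tau) + k1*c1,
            sp.diff(c1, tau) + k1*c0 - k2*c2,
            sp.diff(c2, tau) + k2*c1]

x1 = [sp.Integer(1), sp.Integer(0), sp.Integer(0)]  # xdot = e0
x2 = d_frame(x1)   # [0, k1, 0]
x3 = d_frame(x2)   # [k1**2, k1', k1*k2]
x4 = d_frame(x3)   # [3 k1' k1, k1'' + k1**3 - k1 k2**2, 2 k1' k2 + k1 k2']
```

The coefficient of $e_2$ in the fourth derivative comes out as $2\dot{\varkappa}_1\varkappa_2+\varkappa_1\dot{\varkappa}_2$, in agreement with the combination $D$ in (\ref{12}).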
The unknown vectors $p$, $n$ are determined by the conditions
(\ref{3}) and (\ref{p-n-cond}). We seek the solution to
(\ref{p-n-cond}) in the following form:
\begin{equation}\label{10}\begin{array}{c}\displaystyle
p=\gamma\sqrt{\text{sign}(\sigma)\sigma\varkappa_1}(e_0-\beta
e_1+\alpha e_2)\,,\qquad n=\text{sign}(\sigma)\sigma(\alpha
e_1+\beta e_2)\,,
\end{array}\end{equation}
where $\alpha,\beta,\gamma$ are new dimensionless unknowns. The
quantities $\alpha,\beta$ are subject to the condition
\begin{equation}\label{}
\alpha^2+\beta^2=1\,.
\end{equation}
The quantity $\gamma$ is positive because $p^0>0$. On substituting
representation (\ref{10}) into (\ref{3}), we arrive at the following
system of algebraic equations for $\alpha$, $\beta$ and $\gamma$:
\begin{equation}\label{11}\begin{array}{c}\displaystyle
\alpha^2+\beta^2-1=0\,,\qquad \alpha+\gamma^2=0\,,\qquad
B\alpha+C\beta+3\gamma^2\beta=0\,,\\[5mm]\displaystyle
E\alpha+D\beta+4\gamma^2(B\beta-C\alpha)+\gamma^2(7-3\alpha^2)=0\,.
\end{array}\end{equation}
Here, the notation is used,
\begin{equation}\label{12}\begin{array}{c}\displaystyle
B=\varkappa_1{}^{-2}\dot{\varkappa_1}\,,\qquad
C=\varkappa_1{}^{-1}\varkappa_2\,,\\[5mm]\displaystyle
D=\varkappa_1{}^{-3}(2\dot{\varkappa_1}\varkappa_2+\varkappa_1\dot{\varkappa_2})\,,\qquad
E=\varkappa_1{}^{-3}(\ddot{\varkappa_1}+\varkappa_1^3-\varkappa_1\varkappa_2^2)\,.
\end{array}\end{equation}
Conditions (\ref{11}), (\ref{12}) determine the unknowns $\alpha$,
$\beta$, $\gamma$ of decomposition (\ref{10}) in terms of derivatives of
the trajectory. The system (\ref{11}) is overcomplete because three
unknowns are subject to four equations. The quantities $\gamma$, $\beta$
are easily expressed from the second and third equations of the system
(\ref{11}),
\begin{equation}\label{gammabeta}
\gamma=\sqrt{-\alpha}\,,\qquad
\beta=\frac{B\alpha}{3\alpha-C}\,.
\end{equation}
Substituting this solution into the first and fourth relations
(\ref{11}), we get two polynomial constraints for the remaining unknown
$\alpha$,
\begin{equation}\label{P1}
P_1(\alpha)=9\alpha^3+9C\alpha^2-(4B^2+4C^2-3E+21)\alpha+BD-EC+7C=0\,.
\end{equation}
\begin{equation}\label{P2}
P_2(\alpha)=9\alpha^4-6C\alpha^3+(B^2+C^2-9)\alpha^2+6C\alpha-C^2=0\,.
\end{equation}
Relation (\ref{P2}) determines the unknown $\alpha$ in terms of
derivatives of the trajectory up to third order. An explicit
representation for $\alpha$ can be found by applying the standard
solution of the cubic equation (\ref{P1}), e.g. the Cardano formula. Only
the negative root is relevant because of condition (\ref{gammabeta}). We
do not provide this solution because the system (\ref{P1}), (\ref{P2})
admits a simpler representation without radicals, given in the next
paragraph. Relation (\ref{P1}) is another restriction on the unknown
$\alpha$. Since both conditions (\ref{P1}), (\ref{P2}) are consequences
of the cylinder equation (\ref{cylinder}), they have to be satisfied
simultaneously. So, $\alpha$ is a common root of the polynomials
$P_1(\alpha)$ and $P_2(\alpha)$. Two different polynomials have a common
root if and only if their resultant with respect to the variable $\alpha$
vanishes,
\begin{equation}\label{Res1}
\text{Res}_\alpha(P_1(\alpha),P_2(\alpha))=0.
\end{equation}
This is the consistency condition for the system (\ref{P1}), (\ref{P2}).
By construction, the resultant is a polynomial in the coefficients of
(\ref{P1}), (\ref{P2}), which are functions of the curvature and torsion
of the world line and their derivatives. The resultant is thus a
differential equation satisfied by the cylindrical curves. It involves
derivatives of the path up to fourth order.
Let us now find an explicit solution for $\alpha$ and a representation
for the resultant (\ref{Res1}). Introduce special notation for
combinations of derivatives of curvature and torsion,
\begin{equation}\label{FGH}\begin{array}{c}\displaystyle
F=-(4B^2+4C^2-3E+21)/9,\qquad G=(BD-EC+7C)/9,\\[5mm]\displaystyle
H=B^2+C^2-9.
\end{array}\end{equation}
Here, $F$, $G$, $H$ can be considered as alternative combinations
absorbing the invariants of the trajectory and their derivatives. In
terms of the quantities $F$, $G$, $H$, equations (\ref{P1}), (\ref{P2})
take their simplest form:
\begin{equation}\label{P1-FGH}
P_1(\alpha)=\alpha^3+C\alpha^2+F\alpha+G=0\,;
\end{equation}
\begin{equation}\label{P2-FGH}
P_2(\alpha)=9\alpha^4-6C\alpha^3+H\alpha^2+6C\alpha-C^2=0\,.
\end{equation}
The consistency condition in terms of the resultant of polynomials
(\ref{P1-FGH}), (\ref{P2-FGH}) reads
\begin{align}
\notag &\text{Res}_\alpha(P_1(\alpha),P_2(\alpha))=\\[5mm]
\notag
&=81F^2G^2H-18FG^2H^2-15C^4F^2H+18C^2F^3H-C^2F^2H^2+486CFG^3-\\[5mm]
\notag &-324CG^3H+270C^5FG-162C^3F^2G+1458C^2FG^2+18C^3GH+\\[5mm]
\notag &+189C^2G^2H+4C^4FH-1134C^3FG+15C^2G^2H^2+2C^3GH^2+30C^5GH-\\[5mm]
\notag &-486CF^3G+G^2H^3-120C^3FGH+90C^2FG^2H-6CFGH^2+108CF^2GH-\\[5mm]
\notag &-216C^3G+972C^2G^2-36C^4F+504C^5G+540C^4G^2-216C^4F^2+\\[5mm]
\notag &+36C^6F+540C^3G^3-81C^2F^4-90C^4F^3-1458CG^3+\\[5mm]
&+C^6H-7C^6+729G^4+15C^8=0.\label{Res2}
\end{align}
The common root of (\ref{P1-FGH}), (\ref{P2-FGH}) can be determined by
the Euclidean algorithm. Assuming that this root is simple (i.e. the
curve lies on a single cylinder), we obtain
\begin{equation}\label{alpha-sol}
\alpha=-\frac{U}{V}\,,
\end{equation}
where
\begin{align}\label{alpha-sol-U}
\notag & U=225FC^2-6CGH+15C^2FH-216GFG-90C^3G+90C^2F^2-75C^4-\\[5mm]
&-5C^2H+81G^2-108CG+36C^2+81F^3+FH^2-18F^2H\,;
\end{align}
\begin{align}\label{alpha-sol-V}
\notag & V=C^3H+GH^2-18FGH+81F^2G-135CG^2-6C^3+15C^2GH+15C^5-\\[5mm]
&-24C^3F+90FC^2G+99C^2G\,.
\end{align}
Relation (\ref{Res2}) determines the equation of motion for
cylindrical curves. Formulas (\ref{alpha-sol}), (\ref{alpha-sol-U}),
(\ref{alpha-sol-V}) determine a solution to equations (\ref{P1}),
(\ref{P2}) with respect to unknown $\alpha$.
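The Euclidean-algorithm extraction of the common root can be illustrated on a numerical instance (a sympy sketch; the coefficients are our choices, with $C=3$, $F=5$ and $G$, $H$ adjusted so that $\alpha_0=-1/2$ is a simple common root):

```python
import sympy as sp

al = sp.symbols('alpha')
# C = 3, F = 5; G = 15/8 and H = 243/4 make alpha0 = -1/2 a root of both.
P1 = al**3 + 3*al**2 + 5*al + sp.Rational(15, 8)
P2 = 9*al**4 - 18*al**3 + sp.Rational(243, 4)*al**2 + 18*al - 9

# The gcd produced by the Euclidean remainder sequence is linear when
# the common root is simple; its root is the sought alpha.
g = sp.gcd(P1, P2)
alpha_common = sp.solve(g, al)[0]   # the common root alpha0
```

This is the numeric counterpart of the symbolic formula $\alpha=-U/V$: the last nonvanishing remainder of the Euclidean sequence is linear in $\alpha$, and its root is the common root.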
In terms of the auxiliary quantity $\alpha$ (\ref{alpha-sol}), the
solution for the momentum $p$ and the vector $n$ reads
\begin{equation}\label{p-sol}
p=\sqrt{-\text{sign}(\sigma)\sigma\varkappa_1\alpha}\bigg(\dot{x}-\frac{B\alpha}{\varkappa_1(3\alpha-C)}
\ddot{x}
+\frac{\alpha}{\varkappa_1}[\dot{x},\ddot{x}]\bigg)\,.
\end{equation}
\begin{equation}\label{n-sol}
n=\frac{\sigma}{\varkappa_1}\bigg(\alpha\ddot{x}+
\frac{B\alpha}{3\alpha-C}
[\dot{x},\ddot{x}]\bigg)\,.
\end{equation}
Relations (\ref{pnapva}), (\ref{cylinder}) determine the cylinder
parameters $a$, $v$,
\begin{equation}\label{v-timelike}
v=\sigma\alpha
\bigg[\bigg((x,\dot{x})\varkappa_1+(x,\dot{x},\ddot{x})\varkappa_1\alpha-
(x,\ddot{x})B\alpha\bigg)\dot{x}+
\end{equation}
\begin{equation}\notag
+\frac{1}{\varkappa_1}\bigg(1-\frac{B\alpha}{C-3\alpha}\bigg((x,\dot{x})\varkappa_1+(x,\dot{x},\ddot{x})\alpha\bigg)-\frac{(x,\ddot{x})B^2\alpha^2}{C-3\alpha}\bigg)\ddot{x}-
\end{equation}
\begin{equation}\notag
-\frac{1}{\varkappa_1}\bigg((x,\dot{x})\varkappa_1\alpha+(x,\dot{x},\ddot{x})\alpha^2+
\frac{B}{C-3\alpha}\bigg(1-(x,\ddot{x})\alpha^2\bigg)\bigg)[\dot{x},\ddot{x}]\bigg]\,.
\end{equation}
\begin{equation}\notag
a=
\frac{\sigma\alpha}{\varkappa_1}\bigg[\bigg((x,\dot{x})\varkappa_1+(x,\dot{x},\ddot{x})\alpha+
\frac{(x,\ddot{x})B\alpha}{C-3\alpha}\bigg)^2+
\end{equation}
\begin{equation}\label{a-sol}
+\frac{(x,\dot{x},\ddot{x})B}{C-3\alpha}-2(x,\ddot{x})\bigg]\,.
\end{equation}
The solution for the total angular momentum $J$ (\ref{J-a}) reads
\begin{equation}\label{J-sol}
J=\bigg[\frac{(x,p)^2\varkappa_1\alpha-\sigma}{2\sqrt{-\text{sign}(\sigma)\sigma\varkappa_1\alpha}}+\frac{a+\varrho}{2}\sqrt{-\frac{\alpha\varkappa_1}{\sigma}}\bigg]\dot{x}+
\end{equation}
\begin{equation}\notag
+\frac{\alpha}{\varkappa_1}\bigg[\bigg((x,p)+
\frac{B}{C-3\alpha}\frac{(x,p)^2\varkappa_1\alpha+\sigma}{2\sqrt{-\text{sign}(\sigma)\sigma\varkappa_1\alpha}}\bigg)+\frac{B}{(C-3\alpha)}\frac{a+\varrho}{2}\sqrt{-\frac{\alpha\varkappa_1}{\sigma}}\bigg]\ddot{x}+
\end{equation}
\begin{equation}\notag
+\frac{\alpha}{\varkappa_1}\bigg[\bigg(\frac{(x,p)^2\varkappa_1\alpha+\sigma}{2\sqrt{-\text{sign}(\sigma)\sigma\varkappa_1\alpha}}-\frac{B(x,p)}{C-3\alpha}\bigg)+\sqrt{-\frac{\alpha\varkappa_1}{\sigma}}\bigg][\dot{x},\ddot{x}]\,.
\end{equation}
The solution uses auxiliary vector $e$ (\ref{e-p-v}),
\begin{equation}\label{e-sol-timelike}
e=\frac{(x,p)^2\varkappa_1\alpha-\sigma}{2\sqrt{-\text{sign}(\sigma)\sigma^3\varkappa_1\alpha}}\,\dot{x}+\frac{\alpha}{\varkappa_1}\bigg(\frac{(x,p)}{\sigma}+
\frac{B}{C-3\alpha}\frac{(x,p)^2\varkappa_1\alpha+\sigma}{2\sqrt{-\text{sign}(\sigma)\sigma^3\varkappa_1\alpha}}\bigg)\,\ddot{x}+
\end{equation}
\begin{equation}\notag
+\frac{\alpha}{\varkappa_1}\bigg(\frac{(x,p)^2\varkappa_1\alpha+\sigma}{2\sqrt{-\text{sign}(\sigma)\sigma^3\varkappa_1\alpha}}-\frac{B}{C-3\alpha}\frac{(x,p)}{\sigma}\bigg)\,[\dot{x},\ddot{x}]\,.
\end{equation}
Relations (\ref{p-sol}), (\ref{J-sol}) determine the particle
momentum $p$ and particle total angular momentum $J$ (hence, the
cylinder parameters $v$, $a$) in terms of derivatives of classical
trajectory. The description of the particle state is purely
geometrical because no internal variables are involved in
(\ref{p-sol}), (\ref{J-sol}).
Equation (\ref{Res1}) has two obvious gauge symmetries:
reparametrization and translations along the cylinder axis,
\begin{equation}\label{gt-timelike}
\delta_\xi x=\dot{x}\xi\,,\qquad \delta_\eta x= p \eta.
\end{equation}
(in the latter case, the vector $p$ is considered as a function of the
derivatives of the trajectory (\ref{p-sol})). The gauge transformations
are independent for general timelike lines because the vectors $\dot{x}$,
$p$ are nonzero and non-collinear. The vector $p$ is nonzero for a
spinning particle with nonzero helicity (see conditions
(\ref{ms-shell})). The velocity vector $\dot{x}$ is nonzero because the
spinning particle trajectory is timelike and causal. The vectors
$\dot{x}$ and $p$ are not collinear because $p$ is lightlike and
$\dot{x}$ is timelike. The tangent space to the cylinder
(\ref{cylinder}) is two-dimensional, so the gauge symmetry is
sufficient to connect each pair of timelike curves that lie on one
and the same world sheet. This result confirms the relationship
between the general timelike cylindrical curves and spinning
particle trajectories.
Now, we can summarize the results of the subsection. We have identified
the timelike spinning particle trajectories with curves on parabolic
cylinders. We have shown that these curves are solutions to the
fourth-order differential equation (\ref{Res1}). Equations (\ref{p-sol}),
(\ref{J-sol}) determine the particle momentum and total angular momentum
in terms of derivatives of the trajectory. The representation for $p$,
$J$ does not involve internal variables, so the particle state
description is purely geometrical. Equations (\ref{gt-timelike})
determine the gauge transformations for spinning particle trajectories.
In section 4 we show that the geometrical equation of motion follows from
the previously known model. This ensures that this model describes a
spinning particle with continuous helicity at the quantum level.
\subsection{Isotropic paths on the world sheet}\addcontentsline{toc}{subsection}{Isotropic paths on the world sheet}
Let $x(\tau)$ be a lightlike curve, so the length of the tangent vector
is equal to zero,
\begin{equation}\label{isocond}\begin{array}{c}\displaystyle
(\dot{x},\dot{x})=0\,.
\end{array}\end{equation}
The natural parameter on the curve (the so-called pseudo-arc length) is
determined by the condition
\begin{equation}\label{}\begin{array}{c}\displaystyle
(\ddot{x},\ddot{x})=1\,.
\end{array}\end{equation}
The natural parameter $\tau$ is well defined on an arbitrary
isotropic curve which is not a straight line. This does not restrict
generality because all the rectilinear paths on the parabolic
cylinder with lightlike axis are cylinder elements, being atypical
curves. The atypical curves are excluded from our consideration.
The Frenet-Serret moving frame associated with the isotropic curve
$x(\tau)$ reads
\begin{equation}\label{Frenet-frame-iso}\begin{array}{c}\displaystyle
e_{0}=\dot{x},\qquad e_{1}=\ddot{x}\,,\qquad
e_{2}=-\dddot{x}-\frac{1}{2}(\dddot{x},\dddot{x})\dot{x}\,.
\end{array}\end{equation}
The basis $e_a,a=0,1,2$ includes two lightlike vectors $e_0$ and
$e_2$ with normalized scalar product, and normalized timelike vector
$e_1$,
\begin{equation}\label{6-iso}\begin{array}{c}\displaystyle
\phantom{\frac12}(e_{0},e_{2})=(e_{1},e_{1})=1,\quad
(e_{0},e_{0})=(e_{2},e_{2})=(e_{0},e_{1})=(e_{1},e_{2})=0\,.\phantom{\frac12}
\end{array}\end{equation}
The basis (\ref{Frenet-frame-iso}) is well-defined for each
lightlike curve, which is not a straight line.
The Frenet-Serret formulas for the lightlike curve $x(\tau)$ read
\begin{equation}\label{Frenet-formulas-iso}
\dot{e}_0=e_1\,,\qquad
\dot{e}_1=\varkappa e_0-e_2\,,\qquad
\dot{e}_2=-\varkappa e_1\,.
\end{equation}
The lightlike curve is characterized by a single invariant
$\varkappa$, which includes third derivatives of the path,
\begin{equation}\label{7-iso}\begin{array}{c}\displaystyle
\varkappa=-\frac{1}{2}(\dddot{x},\dddot{x})\,.
\end{array}\end{equation}
The quantity $\varkappa$ can be interpreted as a special analog of
curvature, even though its geometrical meaning is slightly different. As
we will see below, the condition $\varkappa=0$ selects the lightlike
curves on the parabolic cylinder. The representation of the derivatives
of the trajectory in terms of the curvature $\varkappa$ reads
\begin{equation}\label{8-iso}\begin{array}{c}\displaystyle
\dot{x}=e_0\,,\qquad \ddot{x}=e_1\,,\qquad
\dddot{x}=\varkappa e_0-e_2\,,\\[5mm]\displaystyle
\ddddot{x}=\dot\varkappa e_0+2\varkappa e_1\,.
\end{array}\end{equation}
As usual, this representation involves the invariants of the trajectory
up to fourth order, $\varkappa$ and $\dot\varkappa$.
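As in the timelike case, the decomposition can be verified mechanically. A short sympy sketch (the helper name \texttt{d\_frame} is ours) differentiates frame coefficients using the null-frame Frenet-Serret formulas:

```python
import sympy as sp

tau = sp.symbols('tau')
k = sp.Function('k')(tau)   # the invariant varkappa

def d_frame(c):
    """Derivative of a vector with frame coefficients c = [c0, c1, c2],
    using e0' = e1, e1' = k e0 - e2, e2' = -k e1."""
    c0, c1, c2 = c
    return [sp.diff(c0, tau) + k*c1,
            sp.diff(c1, tau) + c0 - k*c2,
            sp.diff(c2, tau) - c1]

x1 = [sp.Integer(1), sp.Integer(0), sp.Integer(0)]  # xdot = e0
x2 = d_frame(x1)   # [0, 1, 0]  = e1
x3 = d_frame(x2)   # [k, 0, -1] = k e0 - e2
x4 = d_frame(x3)   # [k', 2k, 0]
```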
We seek the unknown vectors $p$, $n$, determined by the conditions
(\ref{3}), (\ref{p-n-cond}), in the following form:
\begin{equation}\label{p-n-iso1}\begin{array}{c}\displaystyle
p=\gamma\sqrt{\text{sign}(\sigma)\sigma}\bigg(\frac{1}{2}\beta^2e_0+\beta
e_1-e_2\bigg)\,,\qquad n=\pm\sigma(\beta e_0+e_1)+\alpha p\,,
\end{array}\end{equation}
where $\alpha,\beta,\gamma$ are new unknowns. The quantity $\alpha$
has the dimension of angular momentum. The quantity $\beta$ has the
dimension of inverse square root of length. The quantity $\gamma$ is
dimensionless. Moreover, we assume $\gamma>0$ in order to meet the
condition $p^0>0$. The sign $\pm$ determines the relative orientation of
the vector $p$ with respect to the Frenet-Serret frame
(\ref{Frenet-frame-iso}).
On substituting representation (\ref{p-n-iso1}) into (\ref{3}), we
arrive at the following system of algebraic equations for $\alpha$,
$\beta$ and $\gamma$:
\begin{equation}\label{p-n-iso}\begin{array}{c}\displaystyle
\alpha\gamma=0\,,\qquad
\pm1+\gamma^2+\frac{\alpha\beta\gamma}{\sqrt{\sigma}}=0\,,\qquad
\beta\bigg(\pm1-3\gamma^2-\frac{\alpha\beta\gamma}{2\sqrt{\sigma}}\bigg)+\varkappa\frac{\alpha\gamma}{\sqrt{\sigma}}=0\,,\\[5mm]\displaystyle
2\varkappa\bigg(\pm1+2\gamma^2+\frac{\alpha\beta\gamma}{\sqrt{\sigma}}\bigg)+5\gamma^2\beta^2-\dot{\varkappa}\frac{\alpha\gamma}{\sqrt{\sigma}}=0\,.
\end{array}\end{equation}
The solution to these equations eventually reads
\begin{equation}\label{}
\alpha=\beta=0\,,\qquad\gamma=1\,.
\end{equation}
For the vectors $p$ and $n$, we find
\begin{equation}\label{p-dddot}
p=\sqrt{\text{sign}(\sigma)\sigma}\,\dddot{x}\,,\qquad n=-\sigma \ddot{x}\,.
\end{equation}
The cylinder parameters $v$, $a$ read
\begin{equation}\label{pva}
\phantom{\frac12}v=-\sigma\big(\ddot{x}+(x,\dddot{x})\dddot{x}\big)\,,\qquad
a=\sigma\big(2(x,\ddot{x})+(x,\dddot{x}){}^2\big)\,.\phantom{\frac12}
\end{equation}
The representation for the total angular momentum $J$ reads
\begin{equation}\label{J-dddot}\begin{array}{c}\displaystyle
J=-\sqrt{\text{sign}(\sigma)\sigma}\bigg[\dot{x}+(x,\dddot{x})\ddot{x}-\bigg((x,\ddot{x})+\frac{\varrho}{2\sigma}\bigg)\dddot{x}\bigg]\,.
\end{array}\end{equation}
The solution uses explicit representation for the auxiliary vector
$e$ (\ref{e-p-v})
\begin{equation}\label{}\begin{array}{c}\displaystyle
e=-\frac{1}{\sqrt{\text{sign}(\sigma)\sigma}}\bigg(\dot{x}+(x,\dddot{x})\ddot{x}+\frac{(x,\dddot{x})^2}{2}\dddot{x}\bigg)\,.
\end{array}\end{equation}
The system (\ref{p-n-iso}) has a consistency condition,
\begin{equation}\label{isoeom1}\begin{array}{c}\displaystyle
\phantom{\frac12}\varkappa=0\,.\phantom{\frac12}
\end{array}\end{equation}
In terms of the differential geometry of curves, this equation means that
the lightlike paths on the parabolic cylinder with lightlike axis
(\ref{cylinder}) are curves of zero curvature. The complete set of
equations for the cylindrical paths includes the zero curvature condition
(\ref{isoeom1}) and the lightlike condition for the particle velocity
(\ref{isocond}),
\begin{equation}\label{EoM-iso}
(\dddot{x},\dddot{x})=0\,,\qquad (\dot{x},\dot{x})=0\,.
\end{equation}
The first equation in this system is of third order in derivatives, and
the second one is of first order. Equations (\ref{EoM-iso}) have a single
gauge symmetry, reparametrization,
\begin{equation}\label{gt-iso}
\delta_\xi x=\dot{x}\xi\,,
\end{equation}
where $\xi=\xi(\tau)$ is an arbitrary function of proper time.
The identification between the lightlike curves on parabolic cylinders
and the trajectories of spinning particles suggests that all the
parameters of the trajectory are determined by the momentum and total
angular momentum. As we will see, this is not true. The general solution
to equations (\ref{EoM-iso}) reads
\begin{equation}\label{xabcd1}\begin{array}{c}\displaystyle
x(\tau)=\frac{1}{6}a\tau^3+\frac{1}{2}b\tau^2+c\tau+d\,.
\end{array}\end{equation}
The quantity $\tau$ is a natural parameter on the lightlike curve. The
Cauchy data are constant vectors $a$, $b$, $c$, $d$ subject to the
conditions
\begin{equation}\label{xabcd2}\begin{array}{c}\displaystyle
\phantom{\frac12}(a,a)=(c,c)=(a,b)=(b,c)=(a,d)=0\phantom{\frac12}\,,\\[5mm]\phantom{\frac12}\phantom{\frac12}\displaystyle -(a,c)=(b,b)=1\,.\phantom{\frac12}
\end{array}\end{equation}
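A concrete numerical instance of such Cauchy data is easy to write down. The sketch below (plain numpy; the metric convention $\eta=\mathrm{diag}(-1,1,1)$ and the particular vectors are our choices, and we additionally impose $(d,c)=0$) verifies the constraints and checks that the resulting cubic curve is lightlike with lightlike third derivative:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0])          # Minkowski metric, signature (-,+,+)
dot = lambda u, v: u @ eta @ v           # scalar product (u, v)

# Cauchy data: (a,a)=(c,c)=(a,b)=(b,c)=(a,d)=0, -(a,c)=(b,b)=1, (d,c)=0
a = np.array([1.0, 1.0, 0.0])            # lightlike
b = np.array([0.0, 0.0, 1.0])            # unit spacelike
c = np.array([0.5, -0.5, 0.0])           # lightlike, (a,c) = -1
d = np.array([0.0, 0.0, 2.0])            # shift along the symmetry axis

x = lambda t: a*t**3/6 + b*t**2/2 + c*t + d
xdot = lambda t: a*t**2/2 + b*t + c      # tangent vector; dddot x = a

for t in (-1.0, 0.0, 2.5):
    assert abs(dot(xdot(t), xdot(t))) < 1e-12   # (xdot, xdot) = 0 for all t
assert abs(dot(a, a)) < 1e-12                    # (dddot x, dddot x) = 0
```

The lightlike property of the velocity is exact here: $(\dot{x},\dot{x})=\tau^2\big((a,c)+(b,b)\big)=0$ by the constraints.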
We also assume that $(d,c)=0$. If this is not true, we make a
reparametrization $\tau\mapsto\tau+\gamma$ with appropriate
$\gamma$. The curve (\ref{xabcd1}) lies on the world sheet
(\ref{cylinder}) if
\begin{equation}\label{path-iso}\begin{array}{c}\displaystyle
a=\frac{1}{\sqrt{\text{sign}(\sigma)\sigma}}p, \qquad b=\frac{1}{\sigma}[p,J],\qquad c=-\sqrt{\text{sign}(\sigma)\sigma}\tau
e\,,\\[7mm]\displaystyle (d,b)=\frac{1}{\sigma}((J,J)-\varrho).
\end{array}\end{equation}
The particle state determines the parameters of the trajectory if these
equations can be solved with respect to $a$, $b$, $c$, $d$. This is not
possible because the solution for $d$ has an ambiguity,
\begin{equation}\label{}
d=\frac{1}{\sigma^2}((J,J)-\varrho)[p,J]+\lambda
e,\qquad \lambda \in \mathbb{R}.
\end{equation}
The quantity $\lambda$ controls parallel shifts of the path along the
symmetry axis of the cylinder. It is an additional datum, independent of
$p$ and $J$. Thus, the position of a general lightlike cylindrical curve
is determined by five parameters. The theory (\ref{EoM-iso}) cannot be a
dynamical system on the continuous helicity co-orbit (\ref{ms-shell}),
which has two physical degrees of freedom (four physical polarisations).
The results of the subsection demonstrate that the lightlike world lines
are not admissible classical trajectories of a relativistic spinning
particle with continuous helicity. In particular, no geometrical model of
a continuous helicity spinning particle can be constructed with lightlike
trajectories. The problem of lightlike curves has no analogue in the
massive case, where geometrical models of relativistic particles with
lightlike lines have been known for a long time \cite{NerRam1},
\cite{NerRam2}, \cite{NerMan}. Our no-go result for the continuous
helicity particle suggests that the presence of lightlike trajectories is
a feature of massive models.
\section{Hamilton's formalism}\addcontentsline{toc}{section}{Hamilton's formalism}
In the article \cite{GKL1}, the equations of motion of irreducible
spinning particles have been derived from an action functional involving
extra variables, being internal space coordinates. The model \cite{GKL1}
is relevant in the context of the current research. The paper considers
the massive particle, but the action functional admits a smooth
continuous helicity limit $m\to0$, $ms\to\sigma$. In this section, we
demonstrate that the equations of motion for cylindrical curves follow
from the least action principle of the work \cite{GKL1}. For reasons of
simplicity, we consider the case of a spacelike spin vector. The
accessory parameter $\varrho$ is determined by the rule
\begin{equation}\label{MM}
(M,M)=\alpha^2\,,
\end{equation}
which corresponds to the identification $\varrho=\alpha^2$. The case of a
lightlike spin vector can be considered in a similar way. We leave the
details to the reader.
The equation of the world sheet can be equivalently rewritten in the
following vector form:
\begin{equation}\label{x-sol}
x=(x,e)p+(x,p)e-\frac{(x,p){}^2+a}{2\sigma^2}v\,.
\end{equation}
The vector equation carries the same information about the particle
position as the scalar relation because it has only one independent
component. Scalar multiplication of the left- and right-hand sides of
equation (\ref{x-sol}) by $p$ or $e$ leads to an identity. The only
nontrivial consequence of the relation appears after multiplication of
both sides by $v$, and it reproduces (\ref{cylinder}).
Relation (\ref{x-sol}) can be considered as the solution to the
world sheet equation in the parametric form. In this setting, the
functions $(x,p)$, $(x,e)$ serve as local coordinates on the
cylinder. As the spinning particle travels along a path on the world
sheet, the quantities $(x,p)$, $(x,e)$ are arbitrary functions of proper
time, restricted by the causality condition. In what follows, we assume
that the trajectories are causal.
Relation (\ref{x-sol}) determines the dynamics of spinning particle.
Differentiating by the proper time, we obtain
\begin{equation}\label{dot-x}
\dot{x}=(\dot{x},e) p+(\dot{x},p)\bigg(e-\frac{(x,p)}{\sigma^2}v\bigg)\,,
\end{equation}
where $(\dot{x},p)$, $(\dot{x},e)$ are arbitrary functions. The
quantities $(\dot{x},p)$, $(\dot{x},e)$ play the role of velocities of
the generalized coordinates $(x,e)$, $(x,p)$ on the world sheet. Equation
(\ref{dot-x}) should be complemented by the conservation laws for the
momentum and total angular momentum, and the constraints on the vectors
$p$, $v$, $e$,
\begin{equation}\label{dp-v-e}
\dot{p}=\dot{v}=\dot{e}=0\,;
\end{equation}
\begin{equation}\label{p-v-e-const}
(p,p)=(e,e)=(e,v)=0,\quad (e,p)-1=0\,,\quad
(v,v)-\sigma^2=0\,.
\end{equation}
The system (\ref{dot-x}), (\ref{dp-v-e}), (\ref{p-v-e-const}) determines
the class of cylindrical curves for obvious reasons. Equations
(\ref{dp-v-e}), (\ref{p-v-e-const}) tell us that the vectors $p$, $v$,
$e$ are integrals of motion subject to the constraints
(\ref{p-v-e-const}). After that, the integration of the differential
equation (\ref{dot-x}) gives (\ref{x-sol}). The quantity $a$ appears as
the constant of integration. In so doing, the spinning particle travels
along a cylindrical path if and only if equations (\ref{dot-x}),
(\ref{dp-v-e}), (\ref{p-v-e-const}) are satisfied.
The system (\ref{dot-x}), (\ref{dp-v-e}), (\ref{p-v-e-const}) does not
follow from a least action principle for the dynamical variables $x$,
$p$, $v$, $e$, $(\dot{x},p)$, $(\dot{x},e)$ because 14 quantities are
subject to 18 evolutionary equations and constraints. To construct a
variational principle, we solve the constraints (\ref{p-v-e-const}) using
the lightlike vector $\xi$ with normalized $0$-component,
\begin{equation}\label{}
\xi=(1,\sin\varphi,\cos\varphi)\,.
\end{equation}
The new dynamical variable $\varphi$ can be considered as angular
variable in the internal space, being a circle. This corresponds to
the configuration space of spinning particle
$\mathbb{R}^{1,2}\times\mathbb{S}^1$. By definition, we put
\begin{equation}\label{v-sol}
v=\sigma\frac{[\xi,p]}{(\xi,p)}-(\alpha+(x,p))p\,;
\end{equation}
\begin{equation}\label{e-sol}
e=\frac{\xi}{(\xi,p)}+\frac{(x,p)+\alpha}{\sigma(\xi,p)}[\xi,p]-\frac{((x,p)+\alpha)^2}{2\sigma^2}p\,,
\end{equation}
where $p$ is the momentum, being a lightlike vector. Relations
(\ref{v-sol}), (\ref{e-sol}) automatically meet all the constraints
(\ref{p-v-e-const}) involving $v$ and $e$. The only remaining constraint
is the mass shell condition for the particle momentum
\begin{equation}\label{}
(p,p)=0\,.
\end{equation}
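That the ansatz indeed solves the constraints can be checked numerically. The following sketch (plain numpy; the signature $(-,+,+)$ and the cross-product convention $[u,v]^\mu=\eta^{\mu\sigma}\epsilon_{\sigma\nu\rho}u^\nu v^\rho$ with $\epsilon_{012}=1$ are our conventions, and the sample values of $\sigma$, $\alpha$, $\varphi$, $p$, $x$ are arbitrary) evaluates $v$ and $e$ from the formulas above and tests the constraints (\ref{p-v-e-const}):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0])                 # signature (-,+,+)
dot = lambda u, v: u @ eta @ v

def cross(u, v):
    # [u,v]^mu = eta^{mu s} eps_{s n r} u^n v^r, with eps_{012} = +1
    low = np.array([u[1]*v[2] - u[2]*v[1],
                    u[2]*v[0] - u[0]*v[2],
                    u[0]*v[1] - u[1]*v[0]])
    return eta @ low                             # raise the index

sigma, alpha, phi = 2.0, 0.5, 0.7               # sample parameters
xi = np.array([1.0, np.sin(phi), np.cos(phi)])  # lightlike internal vector
p = 1.3*np.array([1.0, np.cos(0.3), np.sin(0.3)])  # lightlike, p0 > 0
x = np.array([0.2, -0.4, 1.1])                  # sample position

xp = dot(x, p)
v = sigma*cross(xi, p)/dot(xi, p) - (alpha + xp)*p
e = (xi/dot(xi, p)
     + (xp + alpha)/(sigma*dot(xi, p))*cross(xi, p)
     - (xp + alpha)**2/(2*sigma**2)*p)

for val, target in [(dot(p, p), 0), (dot(e, e), 0), (dot(e, v), 0),
                    (dot(e, p), 1), (dot(v, v), sigma**2)]:
    assert abs(val - target) < 1e-9
```

The check relies on the identity $([u,v],[u,v])=(u,v)^2-(u,u)(v,v)$ valid in this convention; with other sign conventions for $\epsilon$ or the metric, some intermediate signs flip.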
In terms of dynamical variables $x$, $p$, $\xi$ equations
(\ref{dot-x}), (\ref{dp-v-e}), (\ref{p-v-e-const}) take the
following form
\begin{equation}\label{d-xi-x}
\dot{x}=\bigg((\dot{x},e)+\frac{(x,p)^2-\alpha^2}{2\sigma^2}(\dot{x},p)\bigg)p+(\dot{x},p)\bigg(\frac{\xi}{(\xi,p)}+\frac{\alpha}{\sigma}\frac{[\xi,p]}{(\xi,p)}\bigg)\,;
\end{equation}
\begin{equation}\label{d-xi-v}
\frac{d}{d\tau}\bigg(\sigma\frac{[\xi,p]}{(\xi,p)}-(\alpha+(x,p))p\bigg)=0\,;
\end{equation}
\begin{equation}\label{d-xi-p}
\phantom{\frac12\bigg(}\dot{p}=0\,,\qquad (p,p)=0\,.\phantom{\frac12\bigg(}
\end{equation}
(We do not write out the consequences of the condition $\dot{e}=0$
because this quantity is completely determined by $p$ and $v$.)
Relations (\ref{d-xi-x}), (\ref{d-xi-v}), (\ref{d-xi-p}) have a clear
physical meaning. Equation (\ref{d-xi-x}) determines the evolution of the
particle position. Equation (\ref{d-xi-p}) tells us that the vector $p$
is conserved and lightlike. Equation (\ref{d-xi-v}) expresses a single
independent relation because the vector $v$ has a single independent
component. It has a consequence,
\begin{equation}\label{m-d-xi}
(\dot{x},p)=\frac{\sigma\dot\varphi}{(\xi,p)}\,.
\end{equation}
On substituting this expression for $(\dot{x},p)$ into
(\ref{d-xi-x}), we obtain the system of two vector equations
(\ref{d-xi-x}), (\ref{d-xi-p}) for the dynamical variables $x$, $p$,
$\varphi$.
Relations (\ref{d-xi-x}), (\ref{d-xi-p}), (\ref{m-d-xi}) follow from
the least action principle for the functional
\begin{equation}\label{S-Ham}
S=\int\bigg\{ (p,\dot{x})+\frac{\sigma}{(\xi,p)}\dot{\varphi}+
\alpha\frac{(\partial_\varphi\xi,p)}{(\xi,p)}\dot{\varphi}-\frac{\lambda}{2}
(p,p)\bigg\}d\tau\,.
\end{equation}
The dynamical variables are the particle position $x$, momentum $p$,
angular variable $\varphi$, and Lagrange multiplier $\lambda$.
Relations (\ref{d-xi-x}), (\ref{d-xi-p}), (\ref{m-d-xi}) appear as
the variational derivatives with respect to the dynamical variables
$x$, $p$, $\lambda$. Taking the Lagrange derivative with respect to
$p$, we obtain equations (\ref{d-xi-x}), (\ref{m-d-xi}),
\begin{equation}\label{}
\dot{x}=\bigg((\dot{x},e)+\frac{(x,p)^2-\alpha^2}{2\sigma^2}\frac{\sigma\dot{\varphi}}{(\xi,p)}\bigg)p
+\frac{\sigma\dot{\varphi}}{(\xi,p)}\bigg(\frac{\xi}{(\xi,p)}+\frac{\alpha}{\sigma}\frac{[\xi,p]}{(\xi,p)}\bigg)\,.
\end{equation}
Taking the Lagrange derivative with respect to $x$ and $\lambda$, we
get (\ref{d-xi-p}). The variation of action (\ref{S-Ham}) with
respect to $\varphi$ does not lead to a new independent dynamical
equation because of the gauge identity
\begin{equation}\label{}
(\xi,p)\frac{\delta
S}{\delta\varphi}+\frac{\sigma\dot{\varphi}}{(\xi,p)}\frac{\delta
S}{\delta x}=0\,.
\end{equation}
This proves the variational principle for the cylindrical curves.
It remains to verify that the quantization of the classical model
(\ref{S-Ham}) corresponds to the irreducible representation with
helicity $\sigma$ and accessory parameter $\alpha^2$. This fact is
non-trivial because continuous helicity particles follow one and the
same paths irrespective of the values of the representation
parameters. The total angular momentum vector reads
\begin{equation}\label{}
J=[x,p]+\bigg(\frac{\sigma}{(\xi,p)}+\frac{\alpha(\partial_\varphi
\xi,p)}{(\xi,p)}\bigg)\xi-\alpha\partial_\varphi\xi\,.
\end{equation}
One can see that the vector $J$ meets the spin shell condition
\begin{equation}\label{}
(p,J)=\bigg(p,\bigg(\frac{\sigma}{(\xi,p)}+\frac{\alpha(\partial_\varphi
\xi,p)}{(\xi,p)}\bigg)\xi-\alpha\partial_\varphi\xi\bigg)\equiv\sigma.
\end{equation}
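Indeed, the term $[x,p]$ drops out because $(p,[x,p])=0$, and expanding the remaining bracket term by term gives
\[
(p,J)=\bigg(\frac{\sigma}{(\xi,p)}+\frac{\alpha(\partial_\varphi\xi,p)}{(\xi,p)}\bigg)(p,\xi)-\alpha(p,\partial_\varphi\xi)
=\sigma+\alpha(\partial_\varphi\xi,p)-\alpha(\partial_\varphi\xi,p)=\sigma\,,
\]
where the symmetry of the inner product, $(p,\xi)=(\xi,p)$, has been used.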
The spin angular momentum reads
\begin{equation}\label{}
M=\bigg(\frac{\sigma}{(\xi,p)}+\frac{\alpha(\partial_\varphi
\xi,p)}{(\xi,p)}\bigg)\xi-\alpha\partial_\varphi\xi\,.
\end{equation}
It is easy to see that condition (\ref{MM}) holds. This result
ensures that the geometrical equations of motion for cylindrical
lines admit an equivalent variational formulation with auxiliary
variables. In its turn, the variational model can be quantized in a
way that corresponds to the continuous helicity representation of the
Poincar\'e group.
\section{Massless particle}
The massless co-orbit is determined by the relations
(\ref{ms-shell}) with $\sigma=0$. The momentum $p$ and total angular
momentum $J$ of the massless particle are subject to the following
mass-shell and spin-shell conditions,
\begin{equation}\label{ml-ms-shell}
\phantom{\frac12}(p,p)=0\,,\qquad (p,J)=0\,.\phantom{\frac12}
\end{equation}
These relations are inconsistent for timelike $J$, so we assume that
the norm of $J$ is nonnegative throughout the section. The spin
vector $M$ is determined by the rule (\ref{Mdef}). Similarly to $J$,
the vector $M$ is lightlike or spacelike. The accessory parameter
$\varrho=\alpha^2$ is determined by the relation
\begin{equation}
\phantom{\frac12}(M,M)=\alpha^2\,.\phantom{\frac12}
\end{equation}
Now, we can discuss the structure of the spinning particle world
sheet. Equations (\ref{ml-ms-shell}) have the consequence
\begin{equation}\label{p-v-sim}
\phantom{\frac12}[J,p]=(J,J)^{\frac12}p\,.\phantom{\frac12}
\end{equation}
This means that the quantity $v$ (\ref{v-a}) and the momentum $p$ are
collinear, while the norm of $J$ determines the aspect ratio. Taking
(\ref{p-v-sim}) into account, the equation of the world sheet of the
spinning particle eventually reads
\begin{equation}\label{ml-ws}
(J-[x,p])^2-\alpha^2=((x,p)+(J,J)^{\frac12}+\alpha)((x,p)+(J,J)^{\frac12}-\alpha)=0\,.
\end{equation}
The formula determines a pair of parallel hyperplanes with normal
$p$. The quantity $\alpha$ controls the distance between the
hyperplanes. The world sheet is path-connected if $\alpha=0$. In the
latter case, we have a single hyperplane,
\begin{equation}\label{ml-ws-1}
(x,p)+(J,J)^{\frac12}=0\,.
\end{equation}
Here, the vector $p$ serves as the normal, while the norm of $J$
determines the distance between the hyperplane (\ref{ml-ws-1}) and
the origin.
Equations (\ref{ml-ws}) and (\ref{ml-ws-1}) tell us that the
positions of the massless spinning particle are localized on
hyperplanes, whose position in Minkowski space is defined by the
values of the momentum and total angular momentum. This fact has been
observed earlier in the chiral fermion model \cite{Duval1},
\cite{Duval2}. Our result shows that planar motion has no alternative
for the massless particle, irrespective of the specifics of the
model. In particular, the torsion of the spinning particle path must
vanish in all instances,
\begin{equation}
\phantom{\frac12}(\dot{x},\ddot{x}, \dddot{x})=0\,.\phantom{\frac12}
\end{equation}
Unfortunately, this equation contains only partial information about
the model dynamics. The number of independent parameters labelling a
particular hypersurface in the set (\ref{ml-ws}), (\ref{ml-ws-1}) is
less than the co-orbit dimension. In the case of two hyperplanes
(\ref{ml-ws}), the position of the world sheet is determined by three
parameters: the lightlike vector $p$ and the norm of the total
angular momentum $(J,J)^{\frac12}$. In the case of a single
hyperplane (\ref{ml-ws-1}), only the ratio $p/(J,J)^{\frac12}$ is
relevant, so it involves only two independent initial data. The
dimension of the co-orbit (\ref{ml-ms-shell}) equals four in all
instances. The extra dynamical degrees of freedom have no geometrical
description in terms of the world sheet formalism, and they require
the introduction of internal variables.
This result means that the world sheet concept cannot be used to
construct a geometric model of the massless spinning particle. On the
other hand, the spinning particle trajectories must be planar curves
in every irreducible spinning particle theory. As mentioned above,
this condition is satisfied for the previously known models.
\section{Conclusion}
In the current article, the recently proposed idea of characterising
the classical spinning particle dynamics by the world sheet, rather
than the world line \cite{LK}, has been applied to the problem of
describing the dynamics of an irreducible spinning particle with
continuous helicity. It has been shown that the admissible classical
positions of the particle lie on a parabolic hypercylinder in
Minkowski space, irrespective of the specifics of the model. The
position of the hypercylinder is determined by the values of the
momentum and total angular momentum. The focal distance is determined
by the helicity. The classical trajectories of the spinning particle
are given by causal cylindrical lines. Assuming that all the
trajectories belonging to the same cylinder are connected by a gauge
transformation, we have derived an ordinary differential equation
describing the general cylindrical lines with a timelike tangent
vector. These equations of motion are purely geometrical, and they
involve invariants of the classical path including derivatives up to
the fourth order. The momentum and total angular momentum are
expressed as functions of the trajectory. To the best of our
knowledge, geometrical equations of motion not involving any extra
variables have been previously unknown for the continuous helicity
particle.
We have paid particular attention to the class of lightlike
cylindrical lines. It has been shown that a cylindrical lightlike
trajectory is either a straight line (representing a cylinder
element) or a curve with zero (lightlike) curvature. Unlike the
massive case, no lightlike curves can serve as physically acceptable
trajectories of spinning particles. The straight lines lie on an
infinite number of parabolic cylinders with one and the same
direction of axis, so these trajectories do not determine the state
of the particle unambiguously. As for the zero-curvature paths, their
position in spacetime involves, besides the momentum and total
angular momentum, extra initial data. Having an extra degree of
freedom, the theory of lightlike cylindrical curves cannot be
considered a dynamical system on the continuous helicity co-orbit. As
spinning particles are dynamical systems on the co-orbit, this theory
cannot describe the motion of a spinning particle.
We have proven that the geometric equations of motion for
cylindrical curves can be derived from the least action principle.
This is important from several viewpoints. First, the concept of an
irreducible spinning particle suggests that the model can be
quantized, and that its quantization corresponds to an irreducible
representation of the Poincar\'e group. The variational principle
provides a way to construct the quantum theory. Second, the
variational principle shows that our results are consistent with
previous studies. We explicitly demonstrate that the differential
equations for cylindrical curves follow from the action functional
of the work \cite{GKL1}, and vice versa. The quantization of this
model does correspond to the continuous helicity representation. In
all the cases, the variational principle involves extra variables
having the sense of coordinates in an internal space.
The study of the spinning particle world sheet concept can be
continued in several directions. One interesting issue is the
geometry of spinning particle trajectories in external
electromagnetic and gravitational fields. The article \cite{KR1}
tells us that the world sheet of a massive particle in an
electromagnetic field is a cylindrical hypersurface whose radius is
fixed by the representation. The world sheets of continuous helicity
particles may have a much more interesting geometry because the
sections of a parabolic cylinder are not compact. The couplings
between a particle travelling a cylindrical path and the external
field are expected to be non-local, with the non-locality controlled
by the helicity. Expanding these equations in the helicity, we will
obtain approximate equations describing cylindrical trajectories of
continuous helicity spinning particles. The leading orders of these
equations will serve as analogs of the Frenkel \cite{Fren} and
Mathisson--Papapetrou \cite{Mathisson}, \cite{Papapetrou} models.
\section*{Acknowledgments}
The authors thank A.A. Sharapov for valuable discussions of this
work. The work was supported by RFBR (project number 20-32-70023)
and Foundation for the Advancement of Theoretical Physics and
Mathematics ``BASIS''.
\section{Introduction}
Anticipating human actions is critical for real-world applications in autonomous driving, video surveillance, human-computer interaction, \emph{etc}.
According to the prediction horizons, the anticipation task is mainly investigated in two tracks: next-action anticipation~\cite{Vondrick16,Mahmud17, Qi17, Damen18, Farha18, Ke19, Fadime20} and dense anticipation~\cite{Farha18, Ke19, Fadime20}.
Next-action anticipation predicts the upcoming action $\tau$ seconds in advance, where $\tau$ is taken to be 1 second in many recent works.
Dense anticipation predicts multiple actions into the future and their durations for long horizons of up to several minutes or an entire video.
Our paper focuses on the more challenging dense anticipation task where all existing methods~\cite{Farha18, Fadime20, Ke19} are fully supervised.
Annotating videos for the fully supervised version of this task can be tedious, as it requires labelling the full set of actions in the subsequent sequence as well as their start and end times. In real-world videos, sequences are more likely to be labelled or tagged only at specific events. These tags are incomplete and instantaneous, \emph{i.e.} not present at every action and without duration information.
This motivates us to develop a weakly supervised dense anticipation framework that learns from video sequences with an incomplete set of action and duration labels.
Specifically, we aim to learn from a small set of fully-labelled data and predominantly from
weak labels in which the video segment is annotated only with the first action class of the anticipated sequence (see Fig.~\ref{fig:01_WS-VDA}).
This can greatly reduce the labelling effort as now we only need to provide the class label of a single action instead of all frames in the sequence.
In practice, this type of weak label is akin to the \emph{time-stamp annotations} used in weakly-supervised temporal action segmentation, in which an arbitrary frame from each action segment is labelled~\cite{Li21,Moltisanti19,Ma20}.
When annotating timestamps, annotators quickly go through a video and press a button when an action is occurring. This is $\sim$6x faster than marking the exact start and end frames of action segments~\cite{Ma20} and still provides strong cues to learn effective models for action segmentation.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{images/Figure1.eps}
\caption{
Dense anticipation with full supervision vs. weak supervision. The fully supervised label contains all the actions in the future video sequence as well as their durations. In this work, we consider a weak label in which only the first action label, without any duration information, is available. Our proposed framework is both semi- and weakly-supervised. We use a small set of fully-labelled videos, while the remainder are weakly-labelled.}
\label{fig:01_WS-VDA}
\end{figure}
In our case, our weak label can be viewed as an incomplete version of the full label, since it has only one (the first) of the full set of action labels and no duration labels. Since the action labels and action durations are all treated as separate terms in the loss of conventional anticipation methods~\cite{Mahmud17,Farha18,Fadime20}, a naive route to learning would be to simply omit any missing labels from the loss.
This option, while simple, does not fully leverage the data of the weakly-labelled set. We opt instead to learn an auxiliary model to generate pseudo-labels for the missing labels. The use of pseudo-labelling has become popular in unsupervised and semi-supervised learning~\cite{Helmstetter18, Yang21, Meng19, Yu20} and has been successful for tasks like image classification~\cite{Ge19, Wu17, Wangc21, Fang20} and segmentation~\cite{Dong19, Yao21, Chang20}. Inspired by these works, we propose a framework for learning a primary and conditional module for (semi-) weakly-supervised dense action anticipation.
The conditional module is learned on a small fully-labelled training set to generate pseudo-labels for a larger weakly-labelled training set. The pseudo-labelled weak data is then applied to learn the primary anticipation module which will be used during inference.
Directly learning on the outputs of an auxiliary model is often not better than learning on the limited set of provided labels as it does not add new knowledge into the system. The phenomenon is referred to as confirmation bias~\cite{Arazo20}; extending previous solutions such as label smoothing~\cite{Zhang17} or label sampling and augmentation~\cite{Zhu18, Berthelot19, Iscen19} is non-trivial for sequence data. As such, we introduce an adaptive refinement method which learns refined sequence labels based on the predictions of the primary and conditional module.
In our experimentation, we have observed that the accuracy of dense anticipation is highly sensitive to having the correct duration prediction, especially in the earlier anticipated actions\footnote{Consider a ground truth sequence of AABBCCDD where each letter is the action of a frame; a prediction of AAAABBCCDD would score a mean-over-classes of only 0.25 since all B, C and D frames are misaligned.}. We are therefore motivated to ensure that the anticipated durations are correct. To that end, we introduce an additional duration attention module applicable to recursive dense anticipation methods~\cite{Farha18,Fadime20}. We compute an attention score between the observed video context and the hidden representation at each prediction step to explicitly emphasize the correlations, which greatly improves the duration accuracy.
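The sensitivity described in the footnote can be reproduced with a small sketch (a simplified, hypothetical scorer written for illustration; the benchmarks' official evaluation code may differ in details):

```python
def mean_over_classes(gt, pred):
    """Frame-wise accuracy averaged over the classes present in gt
    (a simplified version of the dense-anticipation metric)."""
    pred = pred[:len(gt)]               # score over the ground-truth horizon
    classes = sorted(set(gt))
    per_class = []
    for c in classes:
        frames = [i for i, g in enumerate(gt) if g == c]
        hits = sum(1 for i in frames if i < len(pred) and pred[i] == c)
        per_class.append(hits / len(frames))
    return sum(per_class) / len(classes)

gt   = "AABBCCDD"                       # ground-truth frame labels
pred = "AAAABBCCDD"                     # duration of A over-predicted by 2 frames
print(mean_over_classes(gt, pred))      # -> 0.25
```

Over-predicting a single early duration by two frames misaligns every subsequent segment, so only class A scores correctly.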
The contributions of this paper are summarized as follows:
1. We explore a novel and practical weakly-supervised dense anticipation task and propose an adaptive refinement method to make the most of weakly-labelled videos while using only a small number of fully-labelled videos.
2. We propose an attention scheme for predicting the duration of the anticipated actions which better accounts for the action correlations.
3. Our semi-supervised framework is flexible and applicable to a variety of dense anticipation backbones. The duration attention scheme serves as a plug-and-play module to improve the performance of recursive anticipation methods. Evaluation on standard benchmarks shows that our weakly supervised learning scheme can compete with state-of-the-art fully supervised approaches.
\section{Related Work}
Action recognition is the hallmark task of video understanding.
In standard action recognition settings, short, trimmed video clips are classified with action labels.
In contrast, action anticipation is applied to longer, untrimmed video sequences and aims to predict future actions \emph{before} they occur.
The task in next action anticipation is to predict the upcoming action $\tau$ seconds before it occurs.
Various architectures have been proposed, ranging from recurrent neural networks (RNNs)~\cite{Pirri19,Furnari20,Zhang20,Canuto20} and convolutional networks combined with RNNs~\cite{Mahmud17} to transformers~\cite{Wang21}.
The main focus of these works is to extract relevant information from the observations to predict the label of the action starting in $\tau$ seconds, with $\tau$ varying from zero~\cite{lan2014hierarchical} to tens of seconds~\cite{koppula2015anticipating}.
Other models leverage external cues such as hand movements to help with the anticipation task~\cite{Liu20eccv, Dessalene21}.
Dense action anticipation predicts \emph{all} subsequent actions and their durations for longer horizons of the unobserved sequence.
Recursive methods~\cite{Farha18, Fadime20} use an encoder to extract visual features from the observed sequence and use an RNN as a decoder to predict future actions and their durations sequentially. As recursive predictions may accumulate and propagate errors, Ke~\emph{et al.}~\cite{Ke19} anticipate actions directly for specific future times in a single shot.
When it comes to duration anticipation, all previous methods are relatively simple in that they apply a linear layer on top of the features of observed or predicted actions. Only past action features are used, without taking action correlations into account. Intuitively, actions that correlate more strongly with the current action should have a greater influence on its duration. Consequently, our method improves on previous works by introducing an attention mechanism for duration anticipation.
To date, all methods for dense anticipation~\cite{Farha18,Fadime20,Ke19} follow a fully supervised setting and require extensive annotations for learning.
Driven by the laborious demand of fully labelled data in computer vision, some researchers focus on weakly- or semi-supervised learning to reduce annotation workload~\cite{Ahn19, Liu20, Lee21, Chen20}.
Previously,~\cite{Ng20} applied a weakly-supervised model to forecasting future action sequences, where only action sequences, rather than frame-wise labels, are provided as coarse labels. They combine an attention scheme with a GRU to recurrently predict action labels with more focus on related observed actions, which is similar to our duration attention.
Our work is similar in spirit to the teacher-student model~\cite{Tarvainen17, Laine16} which also uses an auxiliary model to support training. However, we do not explicitly enforce label consistency between the two models and instead use a third refinement module to directly improve the pseudo-labels.
Pseudo-labels are widely used in weak supervision~\cite{ChangY20, WangJ20, Zhang21}. Most often they are only propagated for unlabelled or semi-labelled data. We also generate pseudo-labels for the fully-labelled data; by minimizing the distance between the ground truth and the pseudo-labels improved by our refinement module, we make the model adaptively refine the accuracy of the pseudo-labels.
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\linewidth]{images/Figure2.eps}
\caption{Method overview. (a) The conditional module is trained on the small set of fully-labelled data to generate pseudo-labels. Once trained, it remains fixed, and does not contribute gradients in the following steps. (b) The primary module is trained on the small set of fully-labelled data and the large set of weakly-labelled data with the first future action label as the incomplete label. The weak labels are augmented into full pseudo-labels by refining the outputs of the conditional module.}
\label{fig:02_overview}
\end{figure*}
\section{Method}
Our proposed framework is both semi- and weakly-supervised. It is trained on a small set of fully labelled videos and a large set of weakly labelled videos with only the first action in the anticipated sequence. The model is comprised of three components: a primary module used during inference (Sec.~\ref{sec:primary}), a conditional module for generating pseudo-labels (Sec.~\ref{sec:conditional}), and a sequence refinement module (Sec.~\ref{sec:refinement}) to refine the estimated pseudo-labels.
We treat the primary and conditional modules as black-box encoder-decoders, where the observed video is encoded into features, while the decoder generates the anticipated action labels and durations.
As the proposed framework is general, we can use any previously proposed dense anticipation model~\cite{Farha18,Fadime20,Ke19} as a backbone. The training procedure can be broken down into two stages. The conditional module is trained initially on the fully-labelled set $\mathcal{F}$ so that it can be used to generate pseudo-labels for the weakly-labelled data. The combined set of the fully-labelled data and the pseudo-labelled weak data $\mathcal{W}$ is then used to train the primary module.
Directly using the pseudo-labels may result in confirmation bias, as these labels are generated from a model which is learned only on the small set of fully-labelled data. Therefore we refine the pseudo-labels with a sequence refinement module which is learned simultaneously with the primary module. Fig.~\ref{fig:02_overview} illustrates an overview.
\subsection{Preliminaries}\label{sec:prelim}
For a given video, $\textbf{x}=\{x_{1}, \dots, x_{t}, \dots, x_{T}\}$ denotes the set of $T$ observed frames. Dense anticipation aims to predict the future $M$ action labels $\textbf{c}=\{c_{1}, \dots, $ $c_{m}, \dots, c_{M}\}$ and associated durations $\textbf{d}=\{d_{1}, \dots, d_{m}, \dots, d_{M}\}$ for frames $T+1$ onwards until the end of the video sequence. Note that $t$ is a per-frame index in the video, while $m$ is a per-action index. A fully supervised setting is then associated with a set of data $\mathcal{F}=\left\{\left(\textbf{x}, \textbf{c}, \textbf{d}\right)\right\}$. We also denote the action labels and duration jointly by $\mathbf{y} = \{y_1, \dots y_m \dots y_M\}$, where $y_m = (c_m, d_m)$, and distinguish the ground truth and the corresponding predictions as ${y}_m$ and $\hat{y}_m$ respectively.
Under a weakly-supervised setting, we assume we are given the set
$\mathcal{W}=\left\{\left(\textbf{x},\textbf{c}'\right)\right\}$, \emph{i.e.} observed videos of $T$ observed frames $\textbf{x}=\{x_{1}, \dots, x_{t}, \dots, x_{T}\}$, along with the weak label $\textbf{c}'=c_1$, \emph{i.e.} the action label of frame $x_{T+1}$. There are no assumptions on $T$: if the observed sequence ends in the middle of an action, $c_1$ will be the current action label; if $T$ is exactly the last frame of an action, then $c_1$ will be the label of the next action.
This translates to the dense anticipation protocol of previous works~\cite{Fadime20,Farha18,Ke19} in which the first $X\%$ of a video is observed and predictions are made on the following $Y\%$, from $X$ to $X+Y$. Therefore, we use the action label of the first frame of the remaining $Y\%$ as the weak label.
We formulate dense anticipation as a mixed classification and regression task to anticipate the action labels and durations respectively. Without any assumption on the backbone anticipation method, we will refer to the primary module as a function $f(\mathbf{x})$ and the conditional module as a function $f_{\text{cond}}(\mathbf{x}, \mathbf{c}')$. The conditional module is trained based on the loss in Eq.~\ref{eq:condloss}. Then, $\mathcal{F}$ and $\mathcal{W}$ are used to train the primary module with the conditional module fixed, as elaborated in Sec.~\ref{sec:primary}. The main issue is how to refine the pseudo-labels.
\subsection{Conditional Module}~\label{sec:conditional}
The conditional module $\tilde{\mathbf{y}} = f_{\text{cond}}(\textbf{x},\textbf{c}')$ is an auxiliary component trained for generating pseudo labels $\tilde{\mathbf{y}}$ for the weak set $\mathcal{W}$. To do so, it is trained in the standard way using $\mathcal{F}$ with the following loss function
\begin{equation}
L_{\text{cond}} = \frac{1}{|\mathcal{F}|}\sum_{\mathcal{F}}\sum_{m=1}^M\left( -c_m\log(\hat{c}^{\text{cond}}_m) + (d_m-\hat{d}^{\text{cond}}_m)^2\right),
\label{eq:condloss}
\end{equation}
where the first term is a cross-entropy loss for the anticipated action label $\hat{c}^{\text{cond}}_m$, while the second term is an MSE loss for the predicted action duration $\hat{d}^{\text{cond}}_m$. After training, the conditional module remains fixed. To generate pseudo-labels, we simply apply $f_{\text{cond}}$. However, to make full use of the weak label, we replace the estimated $\hat{c}_1$ with the weak label $\mathbf{c}' = c_1$, \emph{i.e.}
$\tilde{\mathbf{y}}= \{(c_1, \hat{d}^{\text{cond}}_1), (\hat{c}^{\text{cond}}_2,\hat{d}^{\text{cond}}_2), \dots, (\hat{c}^{\text{cond}}_M,\hat{d}^{\text{cond}}_M)\}$.
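Schematically, the pseudo-label generation with the weak-label substitution can be sketched as follows (the lambda stands in for a trained conditional module; all names are illustrative):

```python
def make_pseudo_labels(f_cond, x, weak_first_action):
    """Run the (frozen) conditional module on video x, then overwrite the
    first predicted action class with the known weak label c_1; the
    predicted duration d_1 is kept as-is."""
    actions, durations = f_cond(x)      # two lists of length M
    actions = list(actions)
    actions[0] = weak_first_action
    return list(zip(actions, durations))

# toy stand-in for a trained conditional module (illustrative labels)
f_cond = lambda x: (["pour", "stir", "serve"], [0.2, 0.5, 0.3])
print(make_pseudo_labels(f_cond, x=None, weak_first_action="crack_egg"))
# -> [('crack_egg', 0.2), ('stir', 0.5), ('serve', 0.3)]
```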
\subsection{Primary model}~\label{sec:primary}
The primary module $\hat{\mathbf{y}} = f(\textbf{x})$ predicts the future action and duration sequence $\hat{\mathbf{y}}$ given video $\mathbf{x}$ and is the module used for inference. During training, the objective is to minimize a loss based on the labelled ground truth $\mathcal{F}$ and the refined pseudo-labels of $\mathcal{W}$, \emph{i.e.}
\begin{equation}
\begin{split}
L_{\text{prim}} = & \frac{1}{|\mathcal{F}|}\sum_{\mathcal{F}}\sum_{m=1}^M \left(-c_m\log(\hat{c}_m) + (d_m-\hat{d}_m)^2\right) + \frac{1}{|\mathcal{W}|}\sum_{\mathcal{W}}\left(-c_1\log(\hat{c}_1)\right)\\
+&\frac{1}{|\mathcal{W}|}\sum_{\mathcal{W}}\left(\sum_{m=2}^M \left(-\tilde{c}_m'\log(\hat{c}_m)\right) + \sum_{m=1}^M(\tilde{d}_m'-\hat{d}_m)^2\right),
\label{eq:primloss}
\end{split}
\end{equation}
where $\hat{y}_m = (\hat{c}_m, \hat{d}_m)$ is the predicted label from the primary module while $\tilde{y}_m' = (\tilde{c}_m', \tilde{d}_m')$ is the refined pseudo-labels (see Sec.~\ref{sec:refinement}) on $\mathcal{W}$. The first two terms represent the loss based on ground truth labels on $\mathcal{F}$ and $\mathcal{W}$; we term this $L_{\text{label}}$.
The third term in the loss is based on pseudo-labels on $\mathcal{W}$ and we term this $L_{\text{pseudo-label}}$.
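A minimal per-video sketch of the supervised part of this loss (plain Python, one cross-entropy plus squared-error term per anticipated action; illustrative only, as the real implementation batches over $\mathcal{F}$ and $\mathcal{W}$):

```python
import math

def cross_entropy(label_idx, probs):
    """Cross-entropy between a one-hot ground-truth label and predicted probabilities."""
    return -math.log(probs[label_idx])

def anticipation_loss(gt_actions, gt_durations, pred_probs, pred_durations):
    """Per-video supervised loss: classification cross-entropy plus a
    squared error on each anticipated duration."""
    loss = 0.0
    for c, d, p, d_hat in zip(gt_actions, gt_durations, pred_probs, pred_durations):
        loss += cross_entropy(c, p) + (d - d_hat) ** 2
    return loss

# toy example: two future actions over 3 classes, durations as video fractions
probs = [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]]
loss = anticipation_loss([0, 1], [0.4, 0.6], probs, [0.5, 0.5])
print(round(loss, 4))  # -> 0.5998
```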
\subsection{Sequence Refinement}\label{sec:refinement}
Directly using the pseudo-labels from the conditional module to train the primary module does not allow us to fully benefit from $\mathcal{W}$, since the conditional module is trained only on $\mathcal{F}$. As $\mathcal{F}$ is quite small (5-15\% of the training set in our case), there is also the risk of confirmation bias~\cite{Arazo20}.
To mitigate this possibility, we learn a refinement module to refine the pseudo-labels from the conditional module.
For a video $\mathbf{x}$, the refinement module can be expressed as a function $F$ applied to the predicted labels from the primary module and the estimated pseudo-labels from the conditional module, \emph{i.e.}
\begin{equation}
\tilde{\mathbf{y}}' = F(\hat{\mathbf{y}}, \tilde{\mathbf{y}}) = F \left(f(\textbf{x}), f_{\text{cond}}(\textbf{x},\textbf{c}')\right).
\end{equation}
\noindent We propose two refinement schemes as different options for $F$ which we outline below.\\
\noindent \textbf{Linear Refinement.}
As a naive baseline, we first propose to use a weighted geometric mean of the primary and conditional module outputs, where we consider $\hat{c}$ as a probability estimate over the classes. To that end, the refined label can be defined as
\begin{equation}
\tilde{\mathbf{y}}' = f(\textbf{x})^{\frac{1}{\alpha+1}} \cdot f_{\text{cond}}(\textbf{x}, \textbf{c}')^{\frac{\alpha}{\alpha+1}},
\label{eq:klmin}
\end{equation}
where $\alpha$ is a hyperparameter determining the weighting of each component.
Note that $\tilde{\mathbf{y}}'$ is actually the optimal solution when minimizing a linear weighting of the KL divergences between (1) the refined pseudo-label $\tilde{\mathbf{y}}'$ and the estimate of the primary module $\hat{\mathbf{y}}$, and (2) $\tilde{\mathbf{y}}'$ and the estimate of the conditional module $\tilde{\mathbf{y}}$.
Intuitively, the refined output is the ``closest'' sequence to both modules' predictions.
From Eq.~\ref{eq:klmin}, it can be observed that when $\alpha=\infty$, $\tilde{\mathbf{y}}'=f_{\text{cond}}(\textbf{x}, \textbf{c}')$ while $\alpha=0$ gives $\tilde{\mathbf{y}}'=f(\textbf{x})$. These two extreme cases correspond to the refinement directly using the conditional or primary module outputs as the refined sequence respectively. We define a schedule for $\alpha$ to decrease from a large to a small value. This is based on the rationale that at the outset of training, the primary module is not so accurate and will need to rely on the conditional module, but as training progresses a smaller $\alpha$ is more suitable.\\
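A small numeric sketch of the linear refinement (class probabilities only, with an explicit renormalization so the result remains a distribution; the duration part would be blended analogously) makes the two limits of $\alpha$ concrete:

```python
def linear_refine(p_prim, p_cond, alpha):
    """Weighted geometric mean of two class-probability vectors,
    renormalized so the refined label is again a distribution."""
    w = [a ** (1.0 / (alpha + 1.0)) * b ** (alpha / (alpha + 1.0))
         for a, b in zip(p_prim, p_cond)]
    s = sum(w)
    return [v / s for v in w]

p_prim = [0.7, 0.2, 0.1]    # primary module estimate
p_cond = [0.3, 0.6, 0.1]    # conditional module estimate
print(linear_refine(p_prim, p_cond, alpha=0.0))    # recovers p_prim
print(linear_refine(p_prim, p_cond, alpha=1e9))    # approaches p_cond
```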
\noindent \textbf{Adaptive Refinement.}
Instead of a manually set schedule for $\alpha$, we can also directly learn a refined output. Ideally, we would like for the refined outputs $\tilde{\mathbf{y}}'$ to be more accurate than the outputs of both $f(\textbf{x})$ and $f_{\text{cond}}(\textbf{x}, \textbf{c}')$. We can do this by leveraging the ground truth labels of $\mathcal{F}$ and adding a loss on the refined output $\tilde{\mathbf{y}}'$:
\begin{equation}
\begin{split}
L_{\text{adap}} = L_{\text{prim}} +
\frac{1}{|\mathcal{F}|}\sum_{\mathcal{F}}\sum_{m=1}^M\bigg(-c_m\log(\tilde{c}'_m) + (d_m-\tilde{d}'_m)^2\bigg).
\end{split}
\label{eq:autoloss}
\end{equation}
\noindent The adaptive refinement is realized via a linear layer that takes predicted and pseudo sequences and outputs a refined one.
One key change made when learning the adaptive refinement, as opposed to the linear refinement, is that the conditional module is trained on only a portion of $\mathcal{F}$ (we opt for half for simplicity). We purposely limit the training of the conditional module to prevent the refinement module from fully relying on its output.
Then, $\mathcal{F}$ is used to train the primary and refinement modules simultaneously. The objective function contains two parts: the loss between the primary module's output $\hat{y}$ and the ground truth (\emph{i.e.} the first term in Eq.~\ref{eq:primloss}) and the loss between the refined output $\tilde{y}'$ and the ground truth (\emph{i.e.} the second term in Eq.~\ref{eq:autoloss}).
Lastly, $\mathcal{F}$ as well as $\mathcal{W}$ are used to train the primary and refinement modules concurrently based on the loss in Eq.~\ref{eq:autoloss}.
We refer readers to Supplementary Section 8 to get a better idea of the training process.
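The refinement layer itself is just a learned linear map over the concatenated predicted and pseudo sequences. A minimal sketch (the per-step feature layout of $C$ class scores plus one duration, and the weight shapes, are our illustrative assumptions; the paper specifies only a single linear layer):

```python
import numpy as np

rng = np.random.default_rng(0)
M, C = 4, 8  # illustrative sizes: M anticipated steps, C action classes

# Hypothetical learned weights: the input is the concatenation of the
# predicted and pseudo per-step features (class scores + duration each).
W = rng.normal(scale=0.1, size=(2 * (C + 1), C + 1))
b = np.zeros(C + 1)

def refine_adaptive(pred_seq, pseudo_seq):
    """Linear refinement over concatenated sequences:
    (M, C+1) and (M, C+1) in, refined (M, C+1) out."""
    x = np.concatenate([pred_seq, pseudo_seq], axis=-1)  # (M, 2*(C+1))
    return x @ W + b
```
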
\subsection{Duration Attention}\label{sec:attn}
We introduce attention for duration estimation; this is applicable only to recursive dense anticipation methods~\cite{Farha18,Fadime20}. At the decoder, the action label and duration for action $m$ are classified and regressed directly from the hidden state $H_m$. We propose adding an attention score between the hidden state and the input video to improve the duration estimate. Specifically, given the video encoding $\mathcal{I}$, the attention-weighted sum of the encoding is defined as:
\begin{equation}\label{eq:attn}
\mathbf{attn}(H_{m}',\mathcal{I}) = \text{softmax}(\frac{H_{m}'\mathcal{I}^{\intercal}}{\sqrt{d_I}})\mathcal{I}, \qquad \text{where} \qquad H_{m}' = WH_{m}+b
\end{equation}
\noindent where $W \in \mathbb{R}^{d_I\times d_h}$ and $b \in \mathbb{R}^{d_I}$ are learned parameters, $\mathcal{I}^{\intercal}$ is the transpose of $\mathcal{I}$, $d_h$ and $d_I$ are the dimensionality of $H_m$ and $\mathcal{I}$ respectively. The attention-based duration $\hat{d}_m$ is estimated as a linear transformation of the previous hidden state $H_{m-1}$ and the weighted encoding:
\begin{equation}\label{eq:duration}
\hat{d}_{m} = [\mathbf{attn}(H_{m}',\mathcal{I}), H_{m-1}]\boldsymbol{\beta}+\boldsymbol{\epsilon}
\end{equation}
\noindent where $\boldsymbol{\beta}$, $\boldsymbol{\epsilon}$ are learned parameters and $[\,\cdot\,]$ denotes concatenation.\\
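Eqs.~\ref{eq:attn} and \ref{eq:duration} amount to a single scaled dot-product attention step followed by a linear regression; a NumPy sketch (shapes follow the text: $W \in \mathbb{R}^{d_I\times d_h}$, and $\mathcal{I}$ has one row per frame):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def duration_attention(H_m, H_prev, I, W, b, beta, eps):
    """attn(H'_m, I) = softmax(H'_m I^T / sqrt(d_I)) I with H'_m = W H_m + b,
    then d_m = [attn, H_{m-1}] beta + eps."""
    d_I = I.shape[1]
    H_proj = W @ H_m + b                            # (d_I,)
    scores = softmax(H_proj @ I.T / np.sqrt(d_I))   # (T,) over frames
    context = scores @ I                            # (d_I,) weighted encoding
    feat = np.concatenate([context, H_prev])        # (d_I + d_h,)
    return feat @ beta + eps, scores
```
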
\noindent \textbf{Duration Attention Regularizer.} To further minimize the prediction differences between the primary and conditional module, we encourage the attention score between the two modules to be similar. To that end, we add to the objective functions Eq.~\ref{eq:primloss} and Eq.~\ref{eq:autoloss} an $l_2$-norm between the attention scores of the conditional and primary modules, \emph{i.e.}
\begin{equation}~\label{eq:duration_regularizer}
\begin{split}
L_{\text{prim}}' =
L_{\text{prim}}
+ \sum_{m=1}^M\|\mathbf{attn}_m^{\text{prim}}-\mathbf{attn}_m^{\text{cond}}\|_2^2
\end{split}
\end{equation}
where $\mathbf{attn}_m^{\text{prim}}$ and $\mathbf{attn}_m^{\text{cond}}$ represent the attention scores of step $m$ in the primary and conditional modules respectively. The same regularizer is also added to
Eq.~\ref{eq:autoloss} to yield $L_{\text{adap}}'$.
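The regularizer is simply the summed squared difference of the per-step attention score vectors; a minimal sketch (function name is ours):

```python
import numpy as np

def attn_regularizer(attn_prim, attn_cond):
    """Sum over steps m of ||attn_m^prim - attn_m^cond||_2^2,
    for (M, T) attention-score arrays from the two modules."""
    diff = np.asarray(attn_prim) - np.asarray(attn_cond)
    return float((diff ** 2).sum())
```
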
\section{Experiments}
\subsection{Datasets, Evaluation \& Implementation Details}
We evaluate our method on the two benchmark datasets used in dense anticipation: Breakfast Actions~\cite{Kuehne14} and 50Salads~\cite{Stein13}. Both datasets record realistic cooking activities, with each video featuring a sequence of continuous actions in making either a breakfast item or a salad\footnote{Dataset details are in the Supplementary Section 1.}. From the designated training splits of each dataset, we partition 15\% and 20\% of the training data into the fully labelled set $\mathcal{F}$ for Breakfast and 50Salads respectively\footnote{We use a slightly higher percentage for 50Salads due to the small dataset size.}. The remaining 85\% / 80\% of training sequences are assigned to $\mathcal{W}$ and have only a weak label, \emph{i.e.} the single action label $c_1$ (see Sec.~\ref{sec:prelim}). Following the conventions of~\cite{Farha18,Fadime20,Ke19}, we observe 20\% or 30\% of the video and anticipate the subsequent 20\% or 50\% of the video sequence (with additional results on 10\% and 30\% in the Supplementary Section 2). In line with previous works, we evaluate our anticipation results with mean over classes (MoC)~\cite{Farha18}.
As input features, we use the 64-dimension Fisher vectors computed on top of improved dense trajectories~\cite{IDT} as provided by~\cite{Farha18} on a per-frame basis.
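For reference, MoC as defined in~\cite{Farha18} computes the frame-wise accuracy separately for each ground-truth class and then averages over classes; a minimal sketch under that reading (function name is ours):

```python
import numpy as np

def mean_over_classes(pred, gt):
    """Frame-wise accuracy per ground-truth class, averaged over the
    classes present in gt."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    per_class = [np.mean(pred[gt == c] == c) for c in np.unique(gt)]
    return float(np.mean(per_class))
```
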
Currently, as all dense anticipation methods are fully supervised, there are no direct comparisons to competing state-of-the-art methods. However, as our framework is general, we experiment with 3 different anticipation methods as backbones in a series of self-comparisons. We test using (1) a naive RNN where both encoder and decoder are one-layer LSTMs with 512 hidden dimensions, (2) the one-shot method of Ke~\emph{et al.}~\cite{Ke19}, and (3) the recursive method of Sener~\emph{et al.}~\cite{Fadime20}.
Our result for Ke \emph{et al.} is our re-implementation as they do not provide source code; our fully-supervised re-implementation yields similar values as their reported results.
All hyperparameters follow the original settings in their papers.
For the linear refinement method, $\alpha$ begins from 30 and decreases to 0.5 with a decay rate of 0.95 per epoch.
The batch size is 2 for 50Salads and 16 for Breakfast.
Using linear refinement, the model converges at about 20 epochs for the first step and 25 epochs for the second step. With adaptive refinement, it converges at about 15 epochs for the first step and 20 epochs for the second and third steps.
\begin{table*}[t!]
\caption{MoC of different models. Results reported in Baseline (1) for Ke~\cite{Ke19} and Sener~\cite{Fadime20} are taken directly from their published results. All other results are averaged over the officially provided training/test splits, with each training split further divided at random into fully- and weakly-labelled sets according to the percentages given above.}
\begin{center}
\small
\begin{tabular}{|p{1.3cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|}
\hline
\quad & \multicolumn{4}{|c|}{Breakfast} &\multicolumn{4}{|c|}{50Salads} \\
\hline
Obs. & \multicolumn{2}{|c|}{20\%} & \multicolumn{2}{|c|}{30\%} & \multicolumn{2}{|c|}{20\%} & \multicolumn{2}{|c|}{30\%} \\
\hline
Pred. & 20\% & 50\% & 20\% & 50\% & 20\% & 50\% & 20\% & 50\% \\
\hline
\rowcolor{green!20}
\multicolumn{9}{|l|}{Baseline 1: $f(\mathbf{x})$, fully-supervised on entire training set (theoretical upper bound)}\\
\hline
\rowcolor{green!20}
RNN & 6.53 & 5.30 & 8.52 & 5.37 & 9.71 & 7.82 & 12.64 & 8.54\\
\rowcolor{green!20}
Ke~\cite{Ke19} & 11.92 & 7.03 & 12.26 & 8.18 & 11.53 & 9.50 & 15.92 & 9.89\\
\rowcolor{green!20}
Sener~\cite{Fadime20} & 13.10 & 11.10 & 17.00 & 15.10 & 19.90 & 15.10 & 22.50 & 11.20\\
\hline
\rowcolor{red!20}
\multicolumn{9}{|l|}{Baseline 2: $f(\mathbf{x})$, supervised on full label set $\mathcal{F}$ (theoretical lower bound)} \\
\hline
\rowcolor{red!20}
RNN & 3.92 & 2.35 & 5.48 & 4.26 & 8.08 & 5.45 & 8.13 & 6.70\\
\rowcolor{red!20}
Ke~\cite{Ke19} & 6.81 & 5.39 & 7.32 & 5.88 & 8.36 & 4.51 & 11.19 & 8.23\\
\rowcolor{red!20}
Sener~\cite{Fadime20} & 6.19 & 4.90 & 7.30 & 5.92 & 8.67 & 7.01 & 12.73 & 8.00\\
\hline
\rowcolor{cyan!20}
\multicolumn{9}{|l|}{Baseline 3: $f(\mathbf{x})$, supervised on full label set $\mathcal{F}$ + weak set $\mathcal{W}$ with $L_{\text{label}}$} \\
\hline
\rowcolor{cyan!20}
RNN & 6.01 & 4.29 & 7.56 & 5.93 & 9.33 & 6.96 & 11.45 & 8.54\\
\rowcolor{cyan!20}
Ke~\cite{Ke19} & 8.89 & 5.71 & 10.05 & 7.59 & 9.25 & 6.11 & 13.17 & 9.80\\
\rowcolor{cyan!20}
Sener~\cite{Fadime20} & 7.64 & 5.54 & 8.05 & 6.77 & 9.97 & 7.89 & 13.30 & 9.61\\
\hline
\rowcolor{blue!20}
\multicolumn{9}{|l|}{Our model with adaptive refinement but without duration attention.} \\
\hline
\rowcolor{blue!20}
RNN & 7.85 & 7.96 & 8.33 & 8.21 & 10.48 & 7.40 & 13.04 & 10.05\\
\rowcolor{blue!20}
Ke~\cite{Ke19} & 9.74 & 6.24 & 11.02 & 9.24 & 11.84 & 9.27 & 13.88 & 12.81\\
\rowcolor{blue!20}
Sener~\cite{Fadime20} & 8.98 & 7.71 & 9.71 & 7.31 & 12.62 & 9.44 & 13.94 & 10.73\\
\hline
\multicolumn{9}{|l|}{Our full model with adaptive refinement and duration attention.} \\
\hline
RNN & 9.12 & 8.33 & 10.17 & 8.90 & 12.11 & 9.57 & 14.37 & 10.91\\
Sener~\cite{Fadime20} & 9.74 & 8.56 & 11.63 & 8.99 & 12.41 & 9.67 & 14.94 & 12.14\\
\hline
\end{tabular}
\end{center}
\label{tb:result}
\end{table*}
\subsection{Supervised Baselines}
We first examine the impact of the amount of training data in the fully supervised case (see Table~\ref{tb:result}). We design three baselines and in each case train a stand-alone primary module. Baseline (1) is fully supervised on the entire training set -- this signifies the upper bound that our weakly-supervised method can achieve.
Baseline (2) is supervised on only the labelled set $\mathcal{F}$. This baseline indicates the accuracy of the conditional module before the weak label is applied to replace $\hat{c}_1$, and acts as a lower bound. Baseline (3) is supervised on the given labels of $\mathcal{F}$ and $\mathcal{W}$, \emph{i.e.} applying the first two terms ($L_{\text{label}}$) of Eq.~\ref{eq:primloss}. This baseline tells us what can be learned from the full set of provided labels.
Full supervision with the entire training set, \emph{i.e.} Baseline (1), achieves the best results, with the model of Sener~\emph{et al.}~\cite{Fadime20} performing best. However, performance drops with fewer labels, \emph{i.e.} in Baselines (2) and (3), where the one-shot method of Ke~\emph{et al.}~\cite{Ke19} is slightly stronger than~\cite{Fadime20}. The gains from adding the labels of the weak set $\mathcal{W}$, \emph{i.e.} from Baseline (2) to (3), demonstrate that having even a single $c_1$ label helps to improve MoC by 1--2\%.
\subsection{Impact of Adding Pseudo-Labels and Duration Attention}
If we add pseudo-labels to train the primary module, \emph{i.e.} by applying the full loss given in Eq.~\ref{eq:primloss} (see Table~\ref{tb:result}, fourth section) and using adaptive refinement, we observe that we gain in performance across the board when compared to Baseline (3), even though it uses the same amount of provided ground truth labels.
The most impressive gains come from the RNN encoder-decoder model: with only the pseudo-labels from the weak set, it surpasses the original fully supervised baseline. Using the one-shot method of Ke~\emph{et al.}~\cite{Ke19}, we surpass the supervised baseline when anticipating 50\% of the sequence after observing 30\%, on both Breakfast and 50Salads.
On the model of Sener~\emph{et al.}~\cite{Fadime20}, however, we are not able to surpass the fully supervised baseline, though the gap closes progressively. Our full model (Table~\ref{tb:result}, fifth section), which incorporates the duration attention, sees additional gains in most settings.
There is also a visual explanation in Supplementary Section 4 which intuitively illustrates different correlations between different observed actions and current predicted action.
Note that we do not apply the duration attention to the model of Ke~\cite{Ke19} since it is not recursive.
All three backbones improve over Baseline (3) when adding adaptive refinement and duration attention. Given the difficulty of the dense anticipation task, however, the overall performance is still very low, especially for the simple RNN and the model of Ke~\emph{et al.}~\cite{Ke19}. This is likely why adding our framework can outperform the fully supervised case: as these models are rather simple, we speculate that they cannot fully leverage all the ground truth labels from the entire training dataset (Table~\ref{tb:result}, Baseline (1)). Training with our framework (Table~\ref{tb:result}, last two sections) may result in even higher accuracies because our refined pseudo-labels, while less accurate than ground truth, model a simpler distribution.
\subsection{Future Horizon of Anticipated Actions}
We analyze in Table~\ref{tb:action_acc} the anticipated actions over time by computing the accuracy for the first future action (weak label) versus the next three actions (no label).
The trends for the two settings are very different; Baseline 3 without the conditional module has a sharp drop-off from the second action. This is unsurprising since most videos have only a weak label of the first future action. Incorporating our conditional module with the refined pseudo-labels improves the first action's accuracy and decreases the drop-off of subsequent actions.
Refer to Supplementary Section 7 for a visualization of the anticipated action sequence.
\begin{table}[H]
\caption{Accuracy of the predicted actions at different time steps.}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\quad & First & Second & Third & Fourth\\
\hline
Baseline 3
& 16.17 & 6.49 & 3.22 & 1.67\\
Our full model & 18.75 & 14.33 & 9.09 & 5.49\\
\hline
\end{tabular}
\label{tb:action_acc}
\end{table}
\subsection{Ablation Study}
In the following experiments we use Sener's~\cite{Fadime20} method as the backbone, an observation of 30\% and anticipation of 10\%. Table~\ref{tb:corr} verifies that refining the pseudo-labels is more effective than training with them directly.
Furthermore, the learned adaptive refinement outperforms the linear scheme, improving MoC by roughly 4--5\% on both datasets.
In addition to Fisher-vector IDT features, we also experiment with ground truth labels and the stronger I3D features~\cite{Carreira17} as inputs; results are shown in Table~\ref{tb:feat}.
To use ground truth labels as input, we simply use a one-hot vector.
Ground-truth input gives much higher accuracy, indicating that there is still a sizeable gap in recognition performance; the same gap was also observed in~\cite{Fadime20}. In line with previous results that use both features, I3D achieves higher MoC than Fisher vectors. We observe, however, that using I3D features requires longer training,
\emph{i.e.} 20 epochs in step 1 and 2, 25 epochs in step 3
(we refer readers to Section 3.4 in the main paper and Section 8 in the Supplementary for a detailed training procedure),
likely due to the larger dimensionality of I3D compared to Fisher vectors.
\begin{table}[H]
\quad
\parbox{0.45\linewidth}{
\caption{MoC on different refinements.}
\centering
\begin{tabular}{|c|c|c|}
\hline
\quad & Breakfast & 50Salads \\
\hline
No refinement & 6.28 & 10.31\\
Linear & 7.79 & 12.17 \\
Adaptive & 12.78 & 16.24 \\
\hline
\end{tabular}
\label{tb:corr}
}
\quad
\parbox{0.45\linewidth}{
\caption{MoC on different video features.}
\centering
\begin{tabular}{|c|c|c|}
\hline
\quad & Breakfast & 50Salads \\
\hline
Ground truth & 61.30 & 35.40 \\
Fisher vector & 12.78 & 16.24 \\
I3D & 15.65 & 21.30 \\
\hline
\end{tabular}
\label{tb:feat}
}
\end{table}
\section{Conclusion}
In this paper, we investigate a novel dense anticipation task, emphasizing pseudo labels to promote anticipation accuracy using weakly-labelled videos. To predict accurate action/duration sequences, we propose a sequence refinement method that generates pseudo sequences conditioned on the next-step action and adaptively refines the pseudo sequences to guide prediction.
We also introduce duration attention which takes action correlations into account to boost duration anticipation.
The proposed method performs comparably to, and in some cases better than, fully supervised methods while requiring far less annotation effort.
In future work we will extend the evaluation to more datasets.
\noindent\textbf{Acknowledgements }This research is supported by the National Research Foundation, Singapore under its NRF Fellowship for AI (NRF-NRFFAI1-2019-0001).
\bibliographystyle{unsrt}
\section{Dataset Details}
\noindent \textbf{Breakfast Actions} contains 1712 videos which are performed by 52 different individuals in 18 different kitchens.
The videos are unscripted and uncontrolled with natural lighting, view points and environments.
\textbf{50Salads} is a food-preparation dataset capturing 25 people, each preparing 2 mixed salads.
Both datasets have standardized train-test splits which we follow. We further split the training set into fully- and weakly-labelled sets, with specific proportions and other details in Table~\ref{tb:info}.
\begin{table}[H]
\caption{Basic information of the datasets.}
\begin{center}
\small
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Dataset & fps & \tabincell{c}{Video duration\\median, mean$\pm$std} & Classes & Total & Train & Full & Weak & Test \\
\hline
Breakfast & 15 & 91s, 140s$\pm$122 & 48 & 1712 & 1460& 15\% & 85\% & 252\\
50Salads & 30 & 389s, 370s$\pm$106 & 19 & 50 & 40 & 20\% & 80\% & 10\\
\hline
\end{tabular}
\end{center}
\label{tb:info}
\end{table}
\section{Complete Results}
\begin{table*}
\caption{MoC of different methods on Breakfast. Better viewed in colour.}
\begin{center}
\small
\begin{tabular}{|p{1.3cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|}
\hline
Observed & \multicolumn{4}{|c|}{20\%} & \multicolumn{4}{|c|}{30\%} \\
\hline
Predicted & 10\% & 20\% & 30\% & 50\% & 10\% & 20\% & 30\% & 50\% \\
\hline
\rowcolor{green!20}
\multicolumn{9}{|l|}{Baseline 1: $f(\mathbf{x})$, fully-supervised on entire training set (theoretical upper bound)} \\
\hline
\rowcolor{green!20}
RNN & 8.39 & 6.53 & 5.93 & 5.30 & 9.19 & 8.52 & 7.92 & 5.37 \\
\rowcolor{green!20}
Ke & 13.04 & 11.92 & 7.76 & 7.03 & 14.24 & 12.26 & 11.60 & 8.18 \\
\rowcolor{green!20}
Sener & 15.60 & 13.10 & 12.10 & 11.10 & 19.50 & 17.00 & 15.60 & 15.10 \\
\hline
\rowcolor{red!20}
\multicolumn{9}{|l|}{Baseline 2: $f(\mathbf{x})$, supervised on full label set $\mathcal{F}$ (theoretical lower bound)} \\
\hline
\rowcolor{red!20}
RNN & 5.48 & 3.92 & 3.45 & 2.35 & 5.98 & 5.48 & 5.23 & 4.26 \\
\rowcolor{red!20}
Ke & 7.18 & 6.81 & 5.32 & 5.39 & 9.83 & 7.32 & 6.33 & 5.88 \\
\rowcolor{red!20}
Sener & 7.47 & 6.19 & 5.18 & 4.90 & 7.93 & 7.30 & 5.47 & 5.92 \\
\hline
\rowcolor{cyan!20}
\multicolumn{9}{|l|}{Baseline 3: $f(\mathbf{x})$, supervised on full label set $\mathcal{F}$ + weak set $\mathcal{W}$ with $L_{\text{label}}$} \\
\hline
\rowcolor{cyan!20}
RNN & 7.29 & 6.01 & 5.16 & 4.29 & 8.34 & 7.56 & 6.62 & 5.93 \\
\rowcolor{cyan!20}
Ke & 9.76 & 8.89 & 6.51 & 5.71 & 11.71 & 10.05 & 8.52 & 7.59 \\
\rowcolor{cyan!20}
Sener & 8.09 & 7.64 & 6.37 & 5.54 & 9.38 & 8.05 & 7.45 & 6.77 \\
\hline
\rowcolor{blue!20}
\multicolumn{9}{|l|}{Our model with adaptive refinement but without duration attention.} \\
\hline
\rowcolor{blue!20}
RNN & 9.87 & 7.85 & 6.89 & 7.96 & 10.90 & 8.33 & 8.31 & 8.21 \\
\rowcolor{blue!20}
Ke & 11.82 & 9.74 & 7.32 & 6.24 & 13.75 & 11.02 & 10.06 & 9.24 \\
\rowcolor{blue!20}
Sener & 9.03 & 8.98 & 7.64 & 7.71 & 10.11 & 9.71 & 8.11 & 7.31 \\
\hline
\multicolumn{9}{|l|}{Our full model with adaptive refinement and duration attention} \\
\hline
RNN & 9.93 & 9.12 & 8.70 & 8.33 & 12.55 & 10.17 & 9.54 & 8.90 \\
Sener & 10.09 & 9.74 & 7.99 & 8.56 & 12.78 & 11.63 & 10.73 & 8.99 \\
\hline
\end{tabular}
\end{center}
\label{tb:breakfast}
\end{table*}
\begin{table*}
\caption{MoC of different methods on 50Salads. Better viewed in colour.}
\begin{center}
\small
\begin{tabular}{|p{1.3cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|p{0.7cm}|}
\hline
Observed & \multicolumn{4}{|c|}{20\%} & \multicolumn{4}{|c|}{30\%} \\
\hline
Predicted & 10\% & 20\% & 30\% & 50\% & 10\% & 20\% & 30\% & 50\% \\
\hline
\rowcolor{green!20}
\multicolumn{9}{|l|}{Baseline 1: $f(\mathbf{x})$, fully-supervised on entire training set (theoretical upper bound)}\\
\hline
\rowcolor{green!20}
RNN & 11.49 & 9.71 & 9.60 & 7.82 & 12.97 & 12.64 & 11.83 & 8.54 \\
\rowcolor{green!20}
Ke & 12.29 & 11.53 & 10.97 & 9.50 & 16.34 & 15.92 & 11.56 & 9.89 \\
\rowcolor{green!20}
Sener & 25.50 & 19.90 & 18.20 & 15.10 & 30.60 & 22.50 & 19.10 & 11.20 \\
\hline
\rowcolor{red!20}
\multicolumn{9}{|l|}{Baseline 2: $f(\mathbf{x})$, supervised on full label set $\mathcal{F}$ (theoretical lower bound)} \\
\hline
\rowcolor{red!20}
RNN & 9.81 & 8.08 & 6.59 & 5.45 & 10.65 & 8.13 & 7.52 & 6.70 \\
\rowcolor{red!20}
Ke & 9.16 & 8.36 & 7.65 & 4.51 & 12.69 & 11.19 & 8.31 & 8.23 \\
\rowcolor{red!20}
Sener & 11.36 & 8.67 & 7.30 & 7.01 & 13.16 & 12.73 & 10.95 & 8.00 \\
\hline
\rowcolor{cyan!20}
\multicolumn{9}{|l|}{Baseline 3: $f(\mathbf{x})$, supervised on full label set $\mathcal{F}$ + weak set $\mathcal{W}$ with $L_{\text{label}}$} \\
\hline
\rowcolor{cyan!20}
RNN & 10.60 & 9.33 & 8.31 & 6.96 & 13.25 & 11.45& 10.55 & 8.54 \\
\rowcolor{cyan!20}
Ke & 11.87 & 9.25 & 8.83 & 6.11 & 14.97 & 13.17 & 10.74 & 9.80 \\
\rowcolor{cyan!20}
Sener & 12.91 & 9.97 & 8.86 & 7.89 & 14.63 & 13.30 & 11.19 & 9.61 \\
\hline
\rowcolor{blue!20}
\multicolumn{9}{|l|}{Our model with adaptive refinement but without duration attention.} \\
\hline
\rowcolor{blue!20}
RNN & 12.72 & 10.48 & 9.84 & 7.40 & 14.52 & 13.04 & 12.72& 10.05 \\
\rowcolor{blue!20}
Ke & 15.00 & 11.84 & 10.96 & 9.27 & 15.66 & 13.88 & 12.89 & 12.81 \\
\rowcolor{blue!20}
Sener & 13.07 & 12.62 & 10.01 & 9.44 & 15.25 & 13.94 & 11.44 & 10.73 \\
\hline
\multicolumn{9}{|l|}{Our full model with adaptive refinement and duration attention} \\
\hline
RNN & 14.53 & 12.11 & 10.06 & 9.57 & 15.09 & 14.37 & 13.25 & 10.91 \\
Sener & 16.80 & 12.41 & 10.12 & 9.67 & 16.24 & 14.94 & 13.53 & 12.14 \\
\hline
\end{tabular}
\end{center}
\label{tb:salad}
\end{table*}
We provide a complete set of anticipations (10\%, 20\%, 30\% and 50\%) in Tables~\ref{tb:breakfast} and~\ref{tb:salad} for Breakfast and 50Salads respectively.
Findings are consistent with the 20\% and 50\% results in the main paper. Baseline 1 is the fully supervised version; the MoC of Baseline 2 drops because a large proportion of videos is omitted from the training set. Baseline 3 improves over Baseline 2 when the weak labels are added back to help training. The boosts in the last two sections indicate the advantages of pseudo-labels and duration attention, respectively.
\section{Variance in One Split}
To assess the effect of the random full/weak split and the variance within a split, we run each split 10 times and plot the means and standard deviations. Here we use an observation of 30\% and a prediction of 20\%, with Sener's method as the backbone. As shown in Figure~\ref{fig:barchart}, the standard deviations on 50Salads are higher than those on Breakfast; the reason may be that 50Salads has far fewer videos, making results less stable.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{images/variance.eps}
\end{center}
\caption{Bar charts of means and standard deviations in each split.}
\label{fig:barchart}
\end{figure}
\section{Visualization of Attention Scheme}
We use a heat map (Figure~\ref{fig:heatmap}) to further illustrate the advantage of our duration attention scheme. Taking a video from 50Salads as an example, we track the attention scores between the current predicted action and the observed actions. In the heat map, the correlations between ``cut cucumber'' and ``peel cucumber'' and between ``place tomato into bowl'' and ``cut tomato'' are the highest, indicating that more relevant observed actions have more influence on the current action's duration.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{images/heatmap.eps}
\end{center}
\caption{Heat map of attention score between current predicted action and observed actions. x-axis is observed actions and y-axis is predicted actions. Deeper colour indicates higher attention score.}
\label{fig:heatmap}
\end{figure}
\section{Full-Weak Split}
We vary the proportion of fully-labelled data in the training set in Table~\ref{tb:split} and observe that by increasing the proportion of fully-labelled data (the total amount of data is fixed), the performance gets progressively closer to the fully supervised model.
For the RNN and Ke's model, we are able to exceed the performance of the fully supervised model, though this is largely due to their poor baseline performance even with 100\% of the training data fully supervised. It is likely that these models, being simpler and having fewer parameters, require a smaller proportion of fully-labelled data.
For a larger model like that of Sener, 25\% / 30\% of the data is not sufficient to match the fully-supervised performance.
We omit experiments for RNN with split 25\% on Breakfast and split 30\% on 50Salads because MoC with smaller splits already exceeds fully-supervised results.
\begin{table}[H]
\caption{MoC on different full vs. weak data splits. Percentages indicate the proportion of fully-labelled data in the training set. RNN and Sener use our full model in the weakly-supervised setting, \emph{i.e.} with duration attention, while Ke, as a one-shot method, does not have duration attention. $^*$100\% indicates the original fully-supervised model (also without duration attention).}
\begin{center}
\small
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\quad & \multicolumn{4}{c|}{Breakfast} & \multicolumn{4}{c|}{50Salads} \\
\hline
\quad & 5\% & 15\% & 25\% & 100\%$^*$ & 10\% & 20\% & 30\% & 100\%$^*$ \\
\hline
RNN & 11.03 & 12.55 & \myslbox & 9.19 & 8.84 & 15.09 & \myslbox & 12.97 \\
\hline
Ke & 12.40 & 13.75 & 17.45 & 14.24 & 11.73 & 15.66 & 20.00 & 16.34 \\
\hline
Sener & 11.90 & 12.78 & 17.22 & 19.50 & 14.63 & 16.24 & 17.37 & 30.60 \\
\hline
\end{tabular}
\end{center}
\label{tb:split}
\end{table}
\section{Memory Complexity Analysis}
A simple comparison of the memory complexity (expressed as the number of parameters) of the three baseline models (in the fully-supervised setting) and our full model with adaptive refinement is shown in Table~\ref{tb:complexity}. Not surprisingly, the more complicated models have more parameters. The parameter count of our full model is approximately twice that of the corresponding backbone, in accordance with the intuition that the primary and conditional modules are similar and both based on the backbone. We omit a time complexity analysis because it is not comparable between fully- and weakly-supervised models.
\begin{table}[H]
\caption{Memory complexity analysis.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\quad & RNN & Ke & Sener & Ours with RNN & Ours with Sener's\\
\hline
50Salads & 3579944 & 8937316 & 36997429 & 7159894 & 75374068 \\
\hline
Breakfast & 3624546 & 9845839 & 40270080 & 7249098 & 81590790 \\
\hline
\end{tabular}
\end{center}
\label{tb:complexity}
\end{table}
\section{Visualized Result}
Figure~\ref{fig:visual} shows an example of anticipating 50\% of the sequence after observing 30\%, where each colour indicates an action.
We can see that the action sequence is correct, but there are some errors in the predicted duration.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.6\linewidth]{images/visualization.eps}
\end{center}
\caption{Visualized result for dense anticipation on Breakfast.}
\label{fig:visual}
\end{figure}
\section{Pseudo Codes}
Below is pseudocode for the linear and adaptive refinement, respectively.
\begin{algorithm}
\caption{Linear Refinement}
\begin{algorithmic}
\Require \text{initial model }$Prim, Cond$; $\mathcal{W}, \mathcal{F}$; \text{Epoch} $N_1, N_2$; $\alpha$, \text{decay parameter} $d$
\State $\text{Step 1:}$
\For {$n = 1\text{ to }N_1$}
\State $pseudo\_label \leftarrow Cond(\mathcal{F})$
\State $L \leftarrow Loss(ground\_truth, pseudo\_label)$
\State \text{Update} $Cond$ \text{by minimizing} $L$
\EndFor
\State \text{Fix} $Cond$
\State $\text{Step 2:}$
\For {$n = 1\text{ to }N_2$}
\State $predicted\_label \leftarrow Prim(\mathcal{F})$
\State $L_1 \leftarrow Loss(ground\_truth, predicted\_label)$
\State $predicted\_label \leftarrow Prim(\mathcal{W})$
\State $pseudo\_label \leftarrow Cond(\mathcal{W})$
\State $refined\_label \leftarrow predicted\_label^{\frac{1}{\alpha+1}}*pseudo\_label^{\frac{\alpha}{\alpha+1}}$
\State $L_2 \leftarrow Loss(refined\_label, predicted\_label)$
\State \text{Update} $Prim$ \text{by minimizing} $L_1+L_2$
\State $\alpha \leftarrow d*\alpha$
\EndFor
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t!]
\caption{Adaptive Refinement}
\begin{algorithmic}
\Require \text{initial model }$Prim, Cond, Refine$; $\mathcal{W}, \mathcal{F}$; \text{Epoch} $N_1, N_2, N_3$
\State $\text{Step 1:}$
\For {$n = 1\text{ to }N_1$}
\State $pseudo\_label \leftarrow Cond(\mathcal{F})$
\State $L \leftarrow Loss(ground\_truth, pseudo\_label)$
\State \text{Update} $Cond$ \text{by minimizing} $L$
\EndFor
\State \text{Fix} $Cond$
\State $\text{Step 2:}$
\For {$n = 1\text{ to }N_2$}
\State $predicted\_label \leftarrow Prim(\mathcal{F})$
\State $pseudo\_label \leftarrow Cond(\mathcal{F})$
\State $refined\_label \leftarrow Refine(predicted\_label, pseudo\_label)$
\State $L_1 \leftarrow Loss(ground\_truth, predicted\_label)$
\State $L_2 \leftarrow Loss(refined\_label, predicted\_label)$
\State \text{Update} $Prim, Refine$ \text{by minimizing} $L_1+L_2$
\EndFor
\State $\text{Step 3:}$
\For {$n = 1\text{ to }N_3$}
\State $predicted\_label \leftarrow Prim(\mathcal{F})$
\State $pseudo\_label \leftarrow Cond(\mathcal{F})$
\State $refined\_label \leftarrow Refine(predicted\_label, pseudo\_label)$
\State $L_1 \leftarrow Loss(ground\_truth, predicted\_label)$
\State $L_2 \leftarrow Loss(refined\_label, predicted\_label)$
\State $predicted\_label \leftarrow Prim(\mathcal{W})$
\State $pseudo\_label \leftarrow Cond(\mathcal{W})$
\State $refined\_label \leftarrow Refine(predicted\_label, pseudo\_label)$
\State $L_3 \leftarrow Loss(refined\_label, predicted\_label)$
\State \text{Update} $Prim, Refine$ \text{by minimizing} $L_1+L_2+L_3$
\EndFor
\end{algorithmic}
\end{algorithm}
\end{document}
\section{Introduction}
The origin and nature of fast radio bursts (FRBs) have remained
enigmatic since the first FRB discovery by \citet{lorimer07}. The $
17$ or so distinct FRB sources that have been reported so far are
bright ($\sim 0.1$--$1~{\rm Jy}$) and brief ($\sim 1~{\rm ms}$) pulses
of $\sim 1~{\rm GHz}$ radio emission
\citep{lorimer07,keane12,thornton13,spitler14,burke-spolaor14,petroff15a,ravi15,champion16,masui15,keane16}.
The pulse arrival times of FRBs show a $\nu^{-2}$ frequency dependence
indicative of a passage through a cold plasma, with the so-called
dispersion measure (DM) measuring the line-of-sight column density of
free electrons. FRBs are selected to have large measured DMs of $\sim
300-1600$~pc~cm$^{-3}$, in excess of the values expected from models
of the interstellar electron distribution in the Milky Way galaxy, and
have therefore been inferred to originate from extragalactic sources
at cosmological distances. The cosmological distance has been
confirmed in the case of the sole repeating FRB~121102, which has been
localised to a dwarf galaxy at redshift $z=0.19$ \citep{chatterjee,tendulkar,marcote}.
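The $\nu^{-2}$ delay can be made concrete: for a cold plasma, the arrival-time delay relative to infinite frequency is $\Delta t \approx 4.15~{\rm ms}\,({\rm DM}/{\rm pc~cm^{-3}})\,(\nu/{\rm GHz})^{-2}$. A short numerical sketch (the dispersion constant is the standard approximate value):

```python
def dispersion_delay_ms(dm, nu_ghz):
    """Cold-plasma dispersion delay (ms) relative to infinite frequency,
    for DM in pc cm^-3 and observing frequency in GHz."""
    K_DM = 4.149  # ms GHz^2 (pc cm^-3)^-1, standard dispersion constant
    return K_DM * dm * nu_ghz ** -2

# e.g. a DM = 1000 pc cm^-3 burst is delayed by ~4.1 s at 1 GHz,
# and by four times more at 1 GHz than at 2 GHz.
```
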
The range of excess (above Galactic) DMs of the known FRBs corresponds
to a range of co-moving distances of 0.9--4.4~Gpc, with a median of
2.4~Gpc (corresponding to a redshift of $z=0.64$), under the
assumption that most of the excess DM is contributed by the
intergalactic medium and using the standard cosmological parameters
\citep{planck}. The latest estimate of the all-sky rate of FRBs with
flux $>0.3$~Jy is $1.1^{+3.8}_{-1.0}\times 10^4~{\rm day}^{-1}$ at
95\% confidence \citep{scholz16}. This estimate is based on the single
detection of FRB~121102, and ignoring the fact that it has been
detected repeatedly over a period of 4 years. We note that some other
recent rate estimates have smaller uncertainties, but find, to varying
degrees of significance, a dependence of rate on Galactic latitude
(see, e.g., \cite{vanderwiel16}, and references therein). By adopting
the \citet{scholz16} rate and its uncertainty, we encompass this
uncertainty in the latitude dependence as well.
If FRBs occur in
Milky-Way-like galaxies, we can divide the observed cosmological rate
by the number of galaxies within the FRB survey volume, to find the
expected FRB rate within a single such galaxy. We note that the
recently localised FRB~121102
comes from an extreme-emission-line dwarf galaxy \citep{tendulkar},
quite unlike the Milky Way. However, this host galaxy is not necessarily
representative of all FRB hosts.
The product
of the comoving number density of $L_*$ galaxies, $\sim
10^{-2}~{\rm Mpc^{-3}}$ \citep{prada}, and the cosmological volume out
to the median distance of known FRBs, $57$~comoving Gpc$^3$, implies
that an FRB should occur in our galaxy once per $140^{+1400}_{-110}$
years \citep{lingam17}. A 0.3~Jy FRB from a comoving distance of $2.4$~Gpc
(a luminosity distance of $3.9$~Gpc), placed at a typical Galactic
distance of $\sim 10$~kpc, would have an observed 1~GHz flux density
of $f_\nu\approx 3\times 10^{10}$~Jy or, equivalently, $3\times
10^{-16}~{\rm W~m}^{-2} {\rm Hz}^{-1}$. FRB~121102 has been bursting
repeatedly for at least 4 years. If most FRBs persist for decades or
even centuries, a Galactic FRB could be active now. A powerful
local FRB may have already been detected in the far side-lobes of radio
telescope beams, but mistakenly ascribed to artificial interference.
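The two order-of-magnitude estimates above (the per-galaxy FRB rate and the flux of a cosmological FRB rescaled to a Galactic distance) can be reproduced with a short sketch. The function names are ours and the inputs are the rounded values quoted in the text, so this is illustrative only:

```python
def per_galaxy_interval_yr(all_sky_rate_per_day=1.1e4,
                           n_galaxy_per_mpc3=1e-2,
                           volume_gpc3=57.0):
    """Mean interval (years) between FRBs in a single L* galaxy:
    all-sky rate divided by the number of galaxies in the survey volume."""
    n_galaxies = n_galaxy_per_mpc3 * volume_gpc3 * 1e9  # Mpc^-3 times Gpc^3
    rate_per_galaxy_per_day = all_sky_rate_per_day / n_galaxies
    return 1.0 / (rate_per_galaxy_per_day * 365.25)

def rescaled_flux_jy(f_jy=0.3, d_lum_pc=3.9e9, d_gal_pc=1e4):
    """Inverse-square rescaling of an observed flux density from a
    luminosity distance d_lum_pc to a Galactic distance d_gal_pc."""
    return f_jy * (d_lum_pc / d_gal_pc) ** 2
```

With the defaults, the first function returns roughly 140~yr and the second a few times $10^{10}$~Jy, matching the estimates in the text.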
Indeed, the radio flux density level of the received GHz-band signals from
commercial radio stations, cellular communications and wireless
networks is within a few orders of magnitude of the expected flux
level from a Galactic FRB. For example, a typical desktop Wi-Fi
transmitter operating at 2.4~GHz under the 802.11b standard has a
radiated power of 100~mW over an 82~MHz bandpass with an outdoor range
of $\sim 100$~m, corresponding to a detected flux density of
$f_\nu=1\times 10^{-14}~ {\rm W~m}^{-2} {\rm Hz}^{-1}$. Each of the
transmitter's individual channels has a bandpass of 22~MHz, and
therefore a time resolution of $\Delta t \sim 2\times 10^{-8}$~s. By binning an
incoming signal into millisecond ($\Delta t=10^{-3}$~s) time bins
a Wi-Fi receiver would improve its sensitivity in
proportion to $\sqrt{\Delta t}$, i.e. by a factor of $\sim
200$, to a level of $f_\nu\sim 5\times 10^{-17}~ {\rm W~m}^{-2} {\rm
Hz}^{-1}$ ($5\times 10^9$~Jy, i.e. 5~GJy). This is a factor 6
fainter than the typical Galactic FRB flux discussed above,
and means that such a Galactic FRB, and even fainter and
perhaps-more-frequent FRBs, would be detectable by existing
communication devices. In the subsequent sections, we outline how an
array of numerous low-cost radio receivers can be used to detect and
localise giant Galactic FRBs.
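The Wi-Fi sensitivity estimate above can be checked with a one-line calculation (a sketch under the stated assumptions, not a description of any actual receiver firmware):

```python
import math

def binned_sensitivity(f_raw=1e-14, t_native=2e-8, t_bin=1e-3):
    """Flux-density threshold (W m^-2 Hz^-1) after averaging native
    time samples of length t_native into bins of length t_bin;
    sensitivity improves in proportion to sqrt(t_bin / t_native)."""
    return f_raw / math.sqrt(t_bin / t_native)
```

For the 22~MHz channel ($t_{\rm native}\sim 2\times 10^{-8}$~s) binned to 1~ms, this gives $\sim 4.5\times 10^{-17}~{\rm W~m^{-2}~Hz^{-1}}$, i.e. the $\sim 5$~GJy level quoted in the text.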
\section{A global array of cellular receivers for Galactic FRB detection}
We consider below three related technical approaches to the
assembly of an array of low-cost radio receivers, suitable for the
detection of Galactic FRBs. The choice of the most practical approach
will depend on several issues that need to be resolved, such as the
ability to access and manipulate raw radio signals picked up by the
antennas, the flux from FRBs at sub-GHz frequencies, the level of
terrestrial noise at different locations, and the ability to filter
out that foreground noise.
\subsection{A cell phone communications channel approach}
There are currently an estimated 7 billion active cellular phone
accounts on our planet (similar to the number of people), operating in
several frequency bands, from 0.8 to 2.4~GHz. Each of these phones is,
as argued above, a radio receiver that is in principle sensitive to a
Galactic FRB signal. Furthermore, every smartphone is a programmable
computer capable of analyzing the signal, of timing it to a precision
of $\Delta t \sim 10^{-7}$~s with its global-positioning system (GPS)
module, of storing this information, and of diffusing it through the
internet.
We propose therefore to build a Citizens-Science project in which
participants voluntarily download onto their phones an application
that runs in the background some or all of the time, monitoring the
phone's antenna input for candidate broad-band millisecond-timescale
pulses that appear similar to an FRB. The application would record
candidate FRB pulses (most of which originate from artificial and
natural noise sources) and would periodically upload the candidate
pulse information (pulse profile, GPS-based arrival time), along with
information about the phone (GPS-based location, operating frequency)
to a central processing website. The central website will continuously
correlate the incoming information from all participants, to identify
the signature of a real, globe-encompassing, FRB.
Because of the received signal's integration into ms time bins
(required to improve the sensitivity to FRB levels, see above), every
phone's actual arrival time accuracy will be no better than
$10^{-3}$~s. However, improved time precision can be recovered by
averaging the reported arrival times recorded by many participating
phones at a similar location. For example, averaging the ms-precision
reports from 10,000 phones within a city of radius 3~km (light travel
time $<10^{-5}$~s), would improve the precision by a factor of 100, to
$\sim 10^{-5}$~s. Triangulation of the GPS-timed pulse arrival times
from different Earth locations would then give the FRB sky position to
an accuracy of order $\sim c\Delta t/2R_\oplus\sim 1$~arcmin. If time
binning of the FRB signal, and subsequent loss of the native
$10^{-7}$~s GPS timing precision could be avoided (a possibility
considered in some of the other technical frameworks that we propose
below), then naturally the localisation precision can be improved down
to
the sub-arcsecond level.
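The averaging and triangulation estimates above amount to the following short calculation (function names are ours; the constants are standard values for the Earth's radius and the speed of light):

```python
import math

R_EARTH_M = 6.371e6   # Earth radius (m)
C_M_S = 2.998e8       # speed of light (m/s)

def averaged_timing_s(t_single=1e-3, n_phones=10_000):
    """Timing precision after averaging n independent reports,
    each with single-report precision t_single; improves as sqrt(n)."""
    return t_single / math.sqrt(n_phones)

def localisation_arcmin(dt_s):
    """Angular localisation precision ~ c*dt / (2 R_earth), in arcminutes."""
    return (C_M_S * dt_s / (2 * R_EARTH_M)) * (180.0 / math.pi) * 60.0
```

Averaging $10^4$ ms-precision reports yields $10^{-5}$~s, and a $10^{-5}$~s baseline timing error across the Earth's diameter corresponds to $\sim 1$~arcmin, as stated in the text.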
Because of the $\nu^{-2}$ arrival-time dependence of a radio pulse
propagating through the Galactic plasma, phones operating at diverse
frequencies (multiple networks and phone models) will receive the
signal at a time delay,
\begin{equation}
\delta t= 0.144\times\left({{\rm DM}\over 200~{\rm
pc~cm^{-3}}}\right)\left({\nu\over 2.4~{\rm GHz}}\right)^{-2} ~{\rm s}.
\end{equation}
Over, e.g., a 22~MHz cellphone channel bandwidth at 2.4~GHz, a typical
Galactic DM of $200~{\rm pc~cm^{-3}}$ \citep{Rane_16} will spread the
FRB arrival time over just 2.6~ms, comparable to typical FRB pulse widths.
The channel bandwidth
therefore will not result in any significant smearing of
the pulse over time, which could have reduced the
detection sensitivity and timing precision.
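The delay formula above and the intra-channel smearing estimate can be sketched directly (the function names and the derivative-based smearing approximation are ours):

```python
def dm_delay_s(dm=200.0, nu_ghz=2.4):
    """Dispersion delay of the equation above:
    0.144 s at DM = 200 pc cm^-3 and nu = 2.4 GHz, scaling as nu^-2."""
    return 0.144 * (dm / 200.0) * (nu_ghz / 2.4) ** -2

def channel_smearing_s(dm=200.0, nu_ghz=2.4, bw_ghz=0.022):
    """Arrival-time spread across a narrow channel of width bw_ghz,
    from |d(delay)/d(nu)| * bw = 2 * delay * bw / nu."""
    return 2.0 * dm_delay_s(dm, nu_ghz) * bw_ghz / nu_ghz
```

For a 22~MHz channel at 2.4~GHz and DM $=200~{\rm pc~cm^{-3}}$ this gives 2.6~ms, the value quoted in the text.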
By comparing the arrival times of different frequencies at the same
locations, the central website will be able to solve for the FRB's DM
that, when compared to a Galactic DM model \citep{cordes02,yao16}, will
indicate the FRB source distance within the Galaxy. Alternatively, the
application software itself could attempt to de-disperse all candidate
incoming signals across the full frequency range available to each
receiver. With efficient new algorithms, real-time de-dispersion of FRB signals is now feasible
on small computers \citep{Zackay_14,Zackay_17}, and so is likely
possible on smartphones as well. In such a scenario, the
identification of a $\nu^{-2}$
frequency sweep would be a real-time test of incoming signals, performed at the
level of each individual receiver.
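Solving for the DM from arrival times at two frequencies, as the central website would do, is a simple inversion of the $\nu^{-2}$ law. This sketch uses the standard dispersion constant ($\approx 4.15\times 10^{-3}$~s~GHz$^2$~cm$^3$~pc$^{-1}$, consistent with the delay equation above); the function name is ours:

```python
K_DISP = 4.1488e-3  # s GHz^2 cm^3 pc^-1, standard dispersion constant

def dm_from_arrival_times(dt_s, nu1_ghz, nu2_ghz):
    """Invert the dispersion law: the arrival-time difference between
    two frequencies is dt = K_DISP * DM * (nu1^-2 - nu2^-2)."""
    return dt_s / (K_DISP * (nu1_ghz ** -2 - nu2_ghz ** -2))
```

For example, a pulse with DM $=200~{\rm pc~cm^{-3}}$ arrives $\sim 1.2$~s later at 0.8~GHz than at 2.4~GHz, and feeding that difference back in recovers the DM.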
One clear advantage of the above operation plan is that it is
essentially cost free---all of the necessary hardware (the world's
cell phones) is already in place, and one needs only to carry out the plan's
organizational steps in order to make it work for the scientific
program. Potential problems with this proposed mode are, first, that
cell phones may be hardwired at the basic electronics level to
demodulate and digitize incoming communications signals, and therefore
the raw broad-band radio signal containing the FRB may be inaccessible
to software. Furthermore, mobile phone communications are encoded so
as to allow many users to share the frequency band, and this encoding
permits the detection of communication signals at sub-noise levels
(as opposed to the un-encoded FRB signal).
The sought-after millisecond-timescale FRB
signal will need to be disentangled from
the foreground noise of cellular and other
communications emissions, as well as from natural radio noise from
atmospheric processes and from the sun. Although the feasibility of
this requires further study, the prospects look promising based on
a number of past attempts. \citet{katz2003}
review a handful of experiments, and describe their own experiment,
which is similar to our proposal.
The basic concept consisted of a number of wide-angle, geographically distributed
radio receivers that searched for short radio bursts, separating
astronomical signals from noise by requiring coincident detections.
\citet{katz2003} used three 611~MHz receivers in the eastern US, sensitive to
$\gtrsim 3\times 10^4$~Jy bursts (a factor of $2\times 10^5$ fainter than considered here)
on timescales $\gtrsim 125$~ms (50 times longer than here).
The recorded, GPS-time-stamped, bursts were periodically uploaded to a
central processing station, exactly as in our proposed plan.
Over 18 months of continuous operation, \citet{katz2003}
detected a burst roughly
every 10~s, but 99.9\% of these signals could be rejected as local
interference based on their non-coincidence between the three receivers.
The remaining $\sim 4000$ coincident signals could all be traced to
solar radio bursts, by comparison to reports from a solar radio observatory.
No other astronomical sources were detected by \citet{katz2003} nor by
previous experiments. Interestingly, \citet{katz2003} succeeded in
using their GPS signal, with its $10^{-7}$~s accuracy, to time-stamp
their detected bursts to the accuracy of their
20~$\mu$s-long individual time samples, and they note
that, in principle, they could have used a multiple-time-sample
averaging period
shorter than 0.125~s (at the expense of sensitivity).
This would have allowed them to triangulate
their source localisations, just as we propose to do.
\subsection{A cell phone FM radio channel approach}
Most or all cell phones have built-in FM-band radio receivers
operating at around $\nu\sim 100$~MHz and enabling direct (i.e. not
through the internet or the service provider) reception of radio
broadcasts. Interestingly, this hardware is de-activated by phone
manufacturers in $\sim {2\over 3}$ of all phones, in the
service providers' interest of having the customers download and pay
for the radio broadcasts, rather than receiving them for free.
Nevertheless, about ${1\over 3}$ of all phones (still a sizeable
number when considering the global number) do have the direct FM
reception option activated. The raw, non-demodulated radio signal from
this channel is more likely to be accessible to the application
software in its search for an FRB signal than in the preceding
approach using the $\sim 1$~GHz cellular communication channel. A shortcoming of this
option, however, is the yet-unknown properties of FRBs at $\sim
100$~MHz frequencies. Current upper limits from FRB searches at
145~MHz \citep{karastergiou} and 139-170~MHz \citep{tingay}, limit the
FRB spectral slope to $>+0.1$.
As with the $\sim 1$~GHz cellular-communications option,
discussed above, here too integration over time could
make a typical Galactic FRB
detectable at 100~MHz,
even for more positive slopes as high as $+1$, such that
the FRB would have $\gtrsim 5$~GJy at 100~MHz. The foreground noise
question in this option is similar (though in a different frequency
band) to that in the previous, cellular-communications, option.
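Extrapolating the FRB flux to the FM band for an assumed power-law spectrum is a one-line estimate (our parameterization, with $f_\nu \propto \nu^{\alpha}$ and an assumed reference of 30~GJy at 1~GHz):

```python
def flux_at_frequency_gjy(f_ref_gjy=30.0, nu_ref_ghz=1.0,
                          nu_ghz=0.1, slope=1.0):
    """Power-law extrapolation f_nu ∝ nu^slope of a reference
    flux density f_ref_gjy measured at nu_ref_ghz."""
    return f_ref_gjy * (nu_ghz / nu_ref_ghz) ** slope
```

For a slope of $+1$ this gives a few GJy at 100~MHz, consistent at the order-of-magnitude level with the $\gtrsim 5$~GJy figure quoted above; a flat spectrum (slope 0) leaves the full 30~GJy.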
\subsection{A software-defined radio approach}
A software-defined radio (SDR) is a radio system where components such
as filters, amplifiers, demodulators, etc., that are typically
implemented in hardware, are implemented instead in software on a
personal computer. SDR devices are widely available for $\sim$\$10~US
a piece, and they are popular with radio amateurs.
They are often the size of a memory stick and, like one, plug into a
USB port. An SDR device includes an antenna that can detect the
full raw ambient radio emissions over some frequency range and can
input them with minimal processing into a computer, where the signals
can be software-processed at will. Our third approach is therefore to
deploy a large number (depending on the available budget) of such SDR
devices, to be plugged into participating personal computers around
the globe, or base the network on devices already in use by
participating radio amateurs. As with the phone option, the participants will download
and install software that will continuously monitor the input from the
SDR. As before, the computers will upload the information on candidate
Milky Way FRBs to a central data-processing website.
A disadvantage of this approach is the need to actually buy and send
the SDR hardware to the selected participating individuals of the
network (unless one takes the existing-amateur-SDR approach).
The advantages involve having an accessible FRB signal,
uniformly processed and fully analysable at will (including spectral
information from every station). Every SDR could be supplemented with
a simple exterior antenna or antenna booster (wireless reception
boosters are also widely and inexpensively available for cell phones
and laptops) that would considerably enhance its sensitivity, lowering
or fully avoiding the need for time integration, and
hence for the sacrifice of timing precision, or simply probing
for fainter and more frequent bursts (see below). Furthermore, the ability
to choose the station sites at will in a well-spaced global network,
specifically in ``radio-quiet'' locations with minimal artificial and
natural radio interference, may prove to be the most important benefit.
\section{Lower-flux, more-common Galactic FRBs}
A major practical problem with the schemes described above is the long
and uncertain timescale---decades to many centuries---expected for
the detection of a single, Galactic $3\times 10^{10}$~Jy FRB, unless
typical FRBs persistently repeat for decades or centuries (which is
a real possibility, given the case of FRB~121102). If FRBs typically
do not repeat, then even for
the more optimistic end of the rate estimate, broadcasting standards,
phone models and other technical factors, may change over a decade,
not to mention the limited patience of the participants and the
experiment managers. A resolution of this concern, however, could be
based on the fact that FRBs must have a distribution of
luminosities. Indeed, if the known FRBs are at the cosmological
distances indicated by their excess DMs, then they are clearly not
``standard candles''. A reasonable expectation is then that FRB
numbers increase at decreasing luminosities. If so, lower-luminosity
FRBs should be detected more frequently by the global cellular
network.
Let us assume that we can parameterize the FRB number per unit
luminosity with a Schechter form,
\begin{equation}
{dN \over d(\log L_\nu)} \propto L_\nu^{-\alpha+1}e^{-L_\nu/L_{F\star}},
\end{equation}
where $L_{F\star}$ corresponds to the characteristic specific luminosity
of an FRB source (namely, the one that yields an observed flux density
of $\sim 0.3~$Jy at a luminosity distance of $\sim 4$~Gpc). One way
to calibrate $\alpha$ is by speculating that the Galactic population
of rotating radio transients
(RRATs), which have some properties in common with
FRBs, constitute the low-luminosity counterparts of
FRBs. The rate of RRATs over the entire sky at a flux
of $\sim 0.3$~Jy is $\sim 10^6~{\rm
day}^{-1}$, based on the estimated number of sources in the Galaxy,
$\sim 10^5$, and their individual repetition rates,
$\sim 10$~day$^{-1}$~\citep{mclaughlin06}.
The RRAT rate is thus $\sim
10^{11}$ times the Galactic FRB rate (of once per 300~yr, i.e.
$10^{-5}$~day$^{-1}$). The RRAT flux
of $\sim 0.3$~Jy, in turn, corresponds to $\sim 10^{-11}$ the typical flux of
a Galactic FRB. If these two populations of
transient radio sources are related, then $\alpha\approx 2$.
Interestingly, this value
corresponds to an equal luminosity contribution from transients per
logarithmic interval of luminosity.
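The calibration of $\alpha$ from the RRAT/FRB comparison above follows from the Schechter form: with $dN/d(\log L_\nu) \propto L_\nu^{1-\alpha}$ well below $L_{F\star}$, the ratio of rates at two luminosities fixes the slope. A sketch (our function name, using the ratios quoted in the text):

```python
import math

def schechter_slope(rate_ratio=1e11, lum_ratio=1e-11):
    """Slope alpha of dN/dlogL ∝ L^(1-alpha) implied by a rate ratio
    between two source populations at a given luminosity ratio:
    rate_ratio = lum_ratio^(1-alpha)."""
    return 1.0 - math.log10(rate_ratio) / math.log10(lum_ratio)
```

The RRAT rate being $10^{11}$ times the Galactic FRB rate at $10^{-11}$ times the luminosity gives $\alpha = 2$, as stated.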
The flux distribution from a Galactic
FRB population having a particular luminosity
will be $(dN/d(\log f_\nu))\vert_{L_\nu}
\propto f_\nu^{-3/2}$ for a spherically distributed population
(e.g. in the Galactic halo), or $\propto f_\nu^{-1}$
for a planar distribution (e.g. the Galactic disk)---
coincidentally matching the power-law scaling at
low fluxes in the luminosity function for $\alpha=2$.
At a 5~GJy flux level, still detectable by our proposed arrays,
one might then expect to find Galactic FRBs 6 times more frequently than
at 30~GJy, i.e. once per 5 to 250 years. Increasing the sensitivity
by one or two orders of magnitude, e.g. by adding simple antennas
in the SDR option, would potentially allow for the detection of
Galactic FRBs on a yearly to weekly basis, and for the direct
determination of their luminosity function.
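The scaling of the waiting time with flux threshold used above can be written compactly (our parameterization; index 1 for a planar disk population, $3/2$ for a spherical halo, matching the flux distributions discussed in the text):

```python
def interval_at_flux_yr(interval_ref_yr, f_ref_gjy=30.0,
                        f_gjy=5.0, index=1.0):
    """Mean waiting time at a lower flux threshold f_gjy, assuming a
    cumulative source count N(>f) ∝ f^(-index), normalized to the
    waiting time interval_ref_yr at the reference flux f_ref_gjy."""
    return interval_ref_yr * (f_gjy / f_ref_gjy) ** index
```

Starting from the once-per-30-to-1500-years range at 30~GJy, lowering the threshold to 5~GJy with index 1 gives once per 5 to 250 years, as quoted.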
\section{Conclusions}
The first, and so-far only, FRB that has been localised, FRB~121102,
is at a cosmological distance, it has been repeating for at least 4
years, and its host galaxy is a low-metallicity dwarf.
We have argued that if most FRBs are cosmological, but their hosts
are not necessarily dwarf galaxies like the host of FRB~121102,
then their all-sky rate implies that the Milky Way hosts an FRB
every 30 to 1500 years. If, furthermore, many FRBs repeat like FRB~121102, and for long enough, then
the occurrence frequency could be higher, and a local FRB may even be
active now. A typical Galactic FRB will be a
millisecond broad-band radio pulse with 1~GHz flux density of $\sim
3\times 10^{10}$~Jy, not much different from the radio flux levels and
frequencies detectable by cellular communication devices (cell phones,
WiFi, GPS). If the Milky Way has a currently active and repeating FRB source,
then some Local Group galaxies would have them too, at MJy
flux-density levels, which
could be detected by monitoring nearby galaxies with dedicated
small radio telescopes.
An argument against frequent Galactic FRBs could be that FRBs require
some kind of exotic and energetic physical event, such as a
super-flare from a magnetar, and that irradiation of the Earth
by such an event once per
century or millennium would be accompanied by clear signatures, or
perhaps even by mass extinctions. However, this argument relies on a
still-speculative connection between the radio emission of FRBs and their
emissions in other bands. Observationally, an upper limit of $10^8$~Jy
has been set on any FRB-like radio flux accompanying the giant 2004 December
$\gamma$-ray burst from the magnetar SGR~1806-20, and no $\gamma$-ray
counterparts have been detected for any FRB \citep{tendulkar16}.
Our proposed search for Galactic FRBs using a global array of low-cost
(possibly already existing) radio receivers would enable triangulation
of the GPS-timed pulse arrival times from different Earth locations,
localising the FRB sky position to arcminute or even arcsecond
precision. Pulse arrival times from devices operating at diverse
frequencies, or from de-dispersion calculations on the devices
themselves,
will yield the DM that, when compared to a Galactic DM
model, will indicate the FRB source distance within the Galaxy.
Fainter FRBs could potentially be detected on a yearly or even weekly
basis, enabling a direct measurement of the FRB luminosity function.
\section*{Acknowledgments}
We thank C. Carlsson, A. Fialkov, J. Guillochon, Z. Manchester,
E. Ofek, M. Reid, B. Zackay, and the anonymous referee, for
useful advice and comments. This work was supported in
part by Grant 1829/12 of the I-CORE program of the PBC and the Israel
Science Foundation (D.M.) and by a grant from the Breakthrough Prize
Foundation (A.L.). A.L. acknowledges support from the Sackler
Professorship by Special Appointment at Tel Aviv University.
\section{Introduction}
E. Stein \cite{St} proved in 1976 that for any $n\ge 3$,
if a set $A\subset\mathbb{R}^n$ contains a sphere centered at each point of
a set $C\subset\mathbb{R}^n$ of positive Lebesgue measure, then $A$
also has positive Lebesgue measure.
It was shown by Mitsis \cite{Mi} that the same holds if
we only assume that
$C$ is a Borel subset of $\mathbb{R}^n$ of Hausdorff dimension greater than 1.
The analogous results are also true in the case $n=2$; this was proved
independently by Bourgain \cite{Bo} and Marstrand \cite{Mar} for
circles centered at the points of an
arbitrary set
$C\subset\mathbb{R}^2$
of positive Lebesgue measure,
and by Wolff \cite{Wo} for $C\subset\mathbb{R}^2$
of Hausdorff dimension greater than $1$.
In fact, Bourgain proved a stronger result, which extends
to other curves with non-zero curvature.
Inspired by these results, the authors in \cite{KNS} studied what happens if the circles are replaced by axis-parallel squares. They
constructed a closed set $A$ of Hausdorff dimension $1$ that contains the boundary of an axis-parallel square centered at each point in $\mathbb{R}^2$
(see \cite[Theorem 1.1]{KNS}).
Thornton studied in \cite{Th} the higher dimensional versions: the problem when $0\le k < n$ and $A\subset\mathbb{R}^n$ contains
the $k$-skeleton of an $n$-dimensional axis-parallel
cube centered at every point of a compact set of given
dimension $d$ for some fixed $d\in[0,n]$. (Recall that the \emph{$k$-skeleton} of a polytope is the union of its $k$-dimensional faces.) He found the smallest possible dimension of
such a compact $A$ in the cases when
we consider box dimension and packing dimension. He conjectured that
the smallest possible Hausdorff dimension of $A$ is $\max(d-1,k)$,
which would be the generalization of \cite[Theorem 1.4]{KNS},
which addresses the case $n = 2, k = 0$.
In this paper we prove Thornton's conjecture
not only for cubes but for general polytopes in $\mathbb{R}^n$. It turns out that whether $0$ is contained in one of the $k$-dimensional affine subspaces defined by the $k$-skeleton of the polytope plays an important role (see Theorem~\ref{skeleton}). This is even more true if, instead of just scaling, we also allow rotations.
In this case,
we ask what the minimal Hausdorff dimension of a set is that contains a scaled and rotated copy of the $k$-skeleton of a given polytope centered at each point of $C$. Obviously, it must have dimension at least $k$ if $C$ is nonempty.
It turns out that this is sharp: we show that there is a Borel set of dimension $k$ that contains a scaled and rotated copy of the $k$-skeleton of a polytope centered at each point of $\mathbb{R}^n$, \emph{provided that 0 is not in any of the $k$-dimensional affine subspaces defined by the $k$-skeleton}. On the other hand, if 0 belongs to one of these affine subspaces, then the problem becomes much harder (see Remark~\ref{Kakeya}).
\medskip
As mentioned above at the end of the second paragraph, a (very) special case of Theorem~\ref{skeleton}, namely, when $n=2$ and $S$ consists of the 4 vertices of a square centered at the origin, was already proved in \cite{KNS}.
Our proof of Theorem~\ref{skeleton} is much simpler than
the proof in \cite{KNS}. In fact, in all our results mentioned above, we will show that, in the sense of Baire category, the minimal dimension is attained by residually many sets. As often happens, it is much easier to show that some properties hold for residually many sets than to construct a specific set for which they hold.
In our case, after proving residuality for $k$-dimensional affine subspaces, we automatically obtain residuality for countable unions of $k$-dimensional subsets of $k$-dimensional affine subspaces, hence $k$-skeletons.
If we allow rotations but do not allow scaling, the question becomes: what is the minimal Hausdorff
dimension of a set that contains a rotated copy of the $k$-skeleton of a given polytope centered at each point of $C$? We do not know the answer to this question for a general compact set $C$. However, as the following simple example shows, it is no longer true that a typical construction has minimal dimension.
Let $C \subset \mathbb{R}^2$ denote the unit circle centered at $0$, and let the ``polytope'' be a single point of $C$. Then $\{0\}$ is a set of dimension 0 that contains, centered at each point of $C$, a rotated copy of our ``polytope''. (That is, it contains a point at distance 1 from each point of $C$.) On the other hand, it is easy to show that, if $A$ contains a \emph{nonzero} point at distance 1 from each point of $C$, then $A$ has dimension at least 1. In particular, a ``typical'' $A$ has dimension 1 and not 0. The same example also shows that the minimal dimension can be different depending on whether the ``polytope'' consists of one point or two points.
However, we will show that a typical construction does have minimal dimension, provided that $C$ has full dimension, i.e., $\dim C=n$ for $C\subset\mathbb{R}^n$. In this case, the minimal (as well as typical) dimension of a set $A$ that contains a rotated copy of the $k$-skeleton of a polytope centered at each point of $C$ is $k+1$.
Somewhat surprisingly, we obtain that the smallest possible dimension (and also the typical dimension) is still $k+1$ if we want the
$k$-skeleton of a rotated copy of the polytope of \emph{every size} centered at every point.
\medskip
Let us state our results more precisely.
Throughout this paper, by a \emph{scaled copy} of a fixed set $S\subset\mathbb{R}^n$ we mean a
set of the form $x+rS=\{x+rs\ :\ s\in S \}$, where $x\in\mathbb{R}^n$ and $r>0$. We say that $x+rS$ is a scaled copy of $S$ \emph{centered at $x$}.
(That is, the center of $S$ is assumed to be the origin.) Similarly, a
\emph{rotated copy} of $S$ centered at $x \in \mathbb{R}^n$ is $x+T(S) =\{x+T(s)\ :\ s\in S \}$, where $T\in SO(n)$. Combining these two, we define a
\emph{scaled and rotated copy} of $S$ centered at $x\in\mathbb{R}^n$ by $x+rT(S)=\{x+rT(s)\ :\ s\in S \}$, where $r>0$ and $T\in SO(n)$.
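The definitions above can be made concrete with a small sketch in $\mathbb{R}^2$ (illustrative only; the function names are ours, and a point set stands in for $S$):

```python
import math

def rotation_2d(theta):
    """A rotation T in SO(2), as a 2x2 matrix stored as nested lists."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def scaled_rotated_copy(S, x, r, T):
    """The scaled and rotated copy x + r*T(S) = {x + r*T(s) : s in S}
    of a finite point set S in R^2, centered at x."""
    out = []
    for s in S:
        ts = (T[0][0] * s[0] + T[0][1] * s[1],
              T[1][0] * s[0] + T[1][1] * s[1])
        out.append((x[0] + r * ts[0], x[1] + r * ts[1]))
    return out
```

For instance, the copy of $S=\{(1,0)\}$ centered at the origin with $r=2$ and a quarter-turn rotation is the single point $(0,2)$.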
In this paper we will consider only Hausdorff dimension, and we will denote by $\dim E$ the Hausdorff dimension of a set $E$. We list here the special cases of our results when the polytope is a cube and the set of centers is $\mathbb{R}^n$. (The first statement was already proved in \cite{Th}.)
\begin{cor}\label{c:dim}
For any integers $0\le k<n$, the minimal dimension of a Borel set $A\subset\mathbb{R}^n$
that contains the $k$-skeleton of
\begin{enumerate}
\item a scaled copy of a cube centered at every point of $\mathbb{R}^n$ is $n-1$;
\item a scaled and rotated copy of a cube centered at every point of $\mathbb{R}^n$ is $k$;
\item a rotated copy of a cube centered at every point of $\mathbb{R}^n$ is $k+1$;
\item a rotated cube of every size centered at every point of $\mathbb{R}^n$ is $k+1$.
\end{enumerate}
In fact, the same results hold if the $k$-skeleton of a cube is replaced
by any $S\subset\mathbb{R}^n$ with $\dim S=k$ that can be covered by a countable union
of $k$-dimensional affine subspaces that do not contain $0$.
\end{cor}
\medskip
For $k = n-1$ it is natural to ask if, in addition to dimension $k + 1 = n$, we can also guarantee positive Lebesgue measure in the settings (3) and (4). As we will see, we cannot guarantee positive measure. We show that there are residually many \emph{Nikodym sets}, i.e., sets of measure zero which contain a punctured hyperplane through every point. %
The existence of Nikodym sets in $\mathbb{R}^n$ for every $n\ge 2$ was proved by Falconer \cite{Fa86}.
We also obtain residually many sets of measure zero which contain a hyperplane at every positive distance from every point.
Combining these two results, we get the following.
\begin{cor}\label{c:zeromeasure}
Let $S\subset\mathbb{R}^n$ $(n\ge 2)$ be a set that can be covered by countably many
hyperplanes and suppose that $0\not\in S$. Then there exists a set of
Lebesgue measure zero that contains a scaled and rotated copy of $S$
of every scale centered at every point of $\mathbb{R}^n$.
\end{cor}
Note that here we need only the assumption $0\not\in S$
(which clearly cannot be dropped), while in Corollary~\ref{c:dim} we needed
the stronger assumption that the covering affine subspaces do not contain $0$. Also, Corollary~\ref{c:zeromeasure} is clearly false for $n = 1$.
\medskip
One can ask what happens for those sets $S$ to which neither the classical results nor our results can be applied. %
One of the simplest such cases is when, say, $n=1$ and $S=C-1/2$, where $C$ is the classical triadic Cantor set in the interval $[0,1]$. We do not know how small a set $A$ can be if it contains a scaled
copy of $S$ centered at each $x\in\mathbb{R}$. Does it always have positive Lebesgue measure, or Hausdorff dimension at least $1$?
In \cite{LP} \L{}aba and Pramanik construct random Cantor sets for which such a set must have positive Lebesgue measure, and by the result of M\'ath\'e \cite{Mt}, there exist Cantor sets for which such a set $A$ can have zero measure. Hochman \cite{Ho} and Bourgain \cite{Bo2} prove that for any porous Cantor set $C$ with $\dim C > 0$, such a set $A$ must have Hausdorff dimension strictly larger than $\dim C$ and at least $1/2$.
\medskip
Finally, we remark that T. W. K\"orner \cite{Ko} observed in 2003 that small Kakeya-type sets can be constructed using a Baire category argument.
He proved that if we consider the Hausdorff metric on the space of all
compact sets that contain line segments
in every possible direction between two fixed parallel line segments,
then in this space, residually many sets have zero Lebesgue measure.
As we will see, in our results we obtain
residually many sets in a different type of metric space:
we consider Hausdorff metric in a ``code space''.
\section{Scaled copies}%
In this section we consider only scaled (not rotated) copies of $S$. We will prove the following theorem:
\begin{theorem}\label{skeleton}
Let $S$ be the $k$-skeleton of an arbitrary polytope in $\mathbb{R}^n$
for some $0\le k<n$, and let $d\in[0,n]$ be arbitrary.
\begin{itemize}
\item[(i)] Suppose that
$0$ is not contained in any of the $k$-dimensional affine subspaces
defined by $S$.
Then the smallest possible dimension of a compact set $A$ that contains
a scaled copy of $S$
centered at each point of some $d$-dimensional compact set $C$ is $\max(d-1,k)$.
\item[(ii)] Suppose that
$0$ is contained in at least one of the $k$-dimensional affine subspaces
defined by $S$.
Then the smallest possible dimension
of a compact set $A$ that contains
a scaled copy of $S$
centered at each point of some $d$-dimensional compact set $C$ is $\max(d,k)$.
\end{itemize}
\end{theorem}
Thornton's conjecture mentioned in the introduction is clearly a special case of part (i) of this theorem.
\medskip
In fact, our main goal is to study a slightly different problem, from
which we can deduce the results above.
Our aim is to find for a given ``skeleton'' $S$
and for a given
nonempty compact set of centers $C$ (instead of a given $S$ and a given $\dim C$) the smallest possible value of $\dim A$, where $A$
contains a scaled copy of $S$ centered at each point of $C$.
We will study the case when $S$ is the $k$-skeleton of a polytope, or more generally, the case when $S$ is a countable union $S=\bigcup S_i$, where each $S_i$ is contained in an affine subspace $V_i$.
We will assume that $C$ is compact and nonempty. Our aim is to show that, in the sense of Baire category, a typical set $A$ that contains a scaled copy of $S$ centered at each point of $C$ has minimal dimension.
Let us make this more precise.
Fix a nonempty compact set $C\subset\mathbb{R}^n$ and a non-degenerate closed interval $I\subset (0,\infty)$.
In what follows, we view $C \times I$ as a parametrization of the space of certain scaled copies of a given set $S \subset \mathbb{R}^n$; in particular, $(x, r) \in C \times I$ corresponds to the copy centered at $x$ and scaled by $r$.
Let ${\mathcal{K}}$ denote the space of all compact sets $K\subset C\times I$ that have full projection onto $C$. (That is, for each $x\in C$ there is an $r\in I$ with $(x,r)\in K$.) We equip ${\mathcal{K}}$ with the Hausdorff metric. Clearly, ${\mathcal{K}}$ is a closed subset of the space of all compact subsets of $C\times I$, and hence it is a complete metric space. In particular, the Baire category theorem holds for ${\mathcal{K}}$, so we can speak about a typical $K\in{\mathcal{K}}$ in the Baire category sense: a property $P$ holds for a typical $K\in{\mathcal{K}}$
if $\{K\in{\mathcal{K}} : P \textrm{ holds for } K\}$ is residual in ${\mathcal{K}}$,
or equivalently,
if there exists a dense $G_\delta$
set ${\mathcal{G}}\subset{\mathcal{K}}$ such that the property holds for every $K\in{\mathcal{G}}$.
Let $A$ be an arbitrary set that contains a scaled copy of $S\subset\mathbb{R}^n$
centered at each point of $C$.
First we show an easy lower estimate on $\dim A$, which in some important cases will
turn out to be sharp.
Let $C'$ denote the orthogonal projection of $C$ onto $W:=\Span\{S\}^\perp$.
(As usual, we denote by $\Span\{S\}$ the linear span of $S$,
so it always contains the origin.)
For every point $x'\in C'$ there exists an $x\in C$
such that the projection of $x$ onto $W$ is $x'$,
and there exists an $r>0$
such that $x+rS\subset A$ and hence
$x+rS\subset (x'+\Span\{S\}) \cap A$.
Since for any $x'\in C'\subset W = \Span\{S\}^\perp$
the set $(x'+\Span\{S\}) \cap A$ contains a
scaled copy of $S$, we obtain by the general Fubini-type inequality
(see e.g. \cite{Fa85} or \cite{Fa90})
\begin{equation}
\label{triviineq}
\dim A \ge \dim C' + \dim S.
\end{equation}
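For instance, take $n=2$, $C=[0,1]^2$ and $S=\{0\}\times[-1,1]$. Then $\Span\{S\}$ is the $y$-axis, $W$ is the $x$-axis and $C'=[0,1]\times\{0\}$, so \eqref{triviineq} gives
$$\dim A \ge \dim C' + \dim S = 1+1 = 2$$
for every set $A$ that contains a scaled copy of $S$ centered at each point of $C$.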
Now let $K\in{\mathcal{K}}$ and $S\subset\mathbb{R}^n$ and consider
\begin{equation}\label{defAKS}
A=A_{K,S}:=\bigcup_{(x,r)\in K}x+rS.
\end{equation}
Note that $A_{K,S}$ contains a scaled copy of $S$ centered at each point of $C$,
so by the previous paragraph,
\begin{equation}
\label{triviAKS}
\dim A_{K,S} \ge \dim C' + \dim S.
\end{equation}
The following lemma shows that for a typical $K\in{\mathcal{K}}$ we have equality in
\eqref{triviAKS} if $S$ is an affine subspace.
\begin{lemma}\label{main}
Let $V$ be an affine subspace of $\mathbb{R}^n$, let $\emptyset\neq C\subset\mathbb{R}^n$ be compact,
and let $C'$ denote the projection of $C$ onto $\Span\{V\}^\perp$.
Then for a typical $K\in{\mathcal{K}}$, and for $A_{K,V}$ defined by \eqref{defAKS},
$$\dim A_{K,V}=\dim C'+\dim V.$$
\end{lemma}
We postpone the proof of this lemma and first study some of its corollaries.
Suppose that $S$ is a countable union $S=\bigcup S_i$, where each $S_i$ is a
subset of an affine subspace $V_i$. %
Let $C_i'$ denote the orthogonal projection of $C$ onto $W_i:=\Span\{V_i\}^\perp$. Since a countable intersection of
residual sets is residual, and since the Hausdorff dimension of a countable union of sets is the supremum of the Hausdorff dimensions of the individual sets, it follows that for a typical $K\in{\mathcal{K}}$,
$$\dim A_{K,S}=
\dim\left(\bigcup_i A_{K,S_i}\right)\le \sup_i (\dim C_i'+\dim V_i).$$
On the other hand, if $A$ contains a scaled copy of $S=\bigcup_i S_i$ centered at each $x\in C$, then applying \eqref{triviineq} to each $S_i$, we get
$\dim A \ge \dim C'_i+\dim S_i$ for each $i$ and thus
$\dim A\ge \sup_i (\dim C_i'+\dim S_i)$.
Therefore, we obtain the following theorem:
\begin{theorem}\label{thm} Let $C$ be an arbitrary nonempty compact subset
in $\mathbb{R}^n$, and let $S=\bigcup_{i=1}^\infty S_i$, where each $S_i$ is a
subset of an affine subspace $V_i$. %
Let $C_i'$ denote the orthogonal projection of $C$ onto $\Span\{V_i\}^\perp$. Then:
\begin{itemize}
\item[(i)] For every set $A$ that contains a scaled copy of $S$ centered at each point of $C$,
$$\dim A\ge \sup_i (\dim C_i'+\dim S_i).$$
\item[(ii)] For a typical $K\in{\mathcal{K}}$, the set
$A=A_{K,S}$ defined by \eqref{defAKS}
contains a scaled copy of $S$ centered at each point of $C$ and
$$\dim A \le \sup_i (\dim C_i'+\dim V_i).$$
Furthermore, if $S$ is compact then so is $A$.
\end{itemize}
\end{theorem}
Let $W_i=\Span\{V_i\}^\perp$. Note that if $0\not\in V_i$ then $\dim W_i=n-\dim V_i-1$.
Therefore if $\dim C=n$, $k<n$, $\dim S=k$, and for every $i$ we
have $0\not\in V_i$ and $\dim V_i=k$, then
$\sup_i \dim S_i=k$ and $\dim C'_i=n-k-1$ for every $i$, so
Theorem~\ref{thm} gives that the minimal possible value of $\dim A$ is $n-1$, which proves
the general version of (1) of Corollary~\ref{c:dim}.
\medskip
So far we have studied the problem of finding the minimal Hausdorff dimension of a set $A$ that contains a scaled copy of a given set $S$ centered at each point of a given set $C$.
Now we turn to the problem when, instead of $S$ and $C$, we are only given $S$ and $d=\dim C$.
We suppose that $\dim S_i = \dim V_i$ for each $i$, so the lower and
upper estimates in (i) and (ii) agree.
Since clearly
$\dim C_i' \ge \max(0, \dim C - \codim W_i)$,
where $\codim W_i$ denotes the co-dimension of the linear space $W_i$,
Theorem~\ref{thm}(i) gives
$$
\dim A \ge \sup_i (\max(0, d - \codim W_i) + \dim S_i).
$$
In order to show that this estimate is sharp when $\dim S_i=\dim V_i$, by
Theorem~\ref{thm}(ii), it is enough to find a compact set $C\subset\mathbb{R}^n$ for which
$\dim C_i' = \max(0, \dim C - \codim W_i)$ holds for each $i$.
This can be done by the following claim, which we will prove later.
\begin{claim}\label{falconer}
For each $i\in\mathbb{N}$, let $W_i$ be a linear subspace of $\mathbb{R}^n$ of co-dimension
$l_i\in\{0,1,\ldots,n\}$.
Then for every $d\in[0,n]$ there exists a $d$-dimensional compact set $C\subset\mathbb{R}^n$ whose projection onto $W_i$ has dimension $\max(0,d-l_i)$ for each $i$.
\end{claim}
Therefore Theorem~\ref{thm} and Claim~\ref{falconer} give the following.
\begin{cor}\label{corgen}
Suppose that $S=\bigcup_{i=1}^\infty S_i$, where each $S_i$ is a
subset of an affine subspace $V_i$ with $\dim S_i=\dim V_i$.
For each $i$, let $W_i=\Span\{V_i\}^\perp$.
Let $d\in [0,n]$ be arbitrary.
Then the smallest possible dimension of a set $A$ that contains a scaled copy of $S$ centered at each point of some $d$-dimensional set $C$ is
$\sup_i (\max(0, d - \codim W_i) + \dim S_i)$.
\end{cor}
Now we claim that
Theorem~\ref{skeleton} is a special case of Corollary~\ref{corgen}.
Indeed, if $S$ is a $k$-skeleton of a polytope,
then for each $i$ we have $\dim S_i = \dim V_i =k$, and
$W_i$ has co-dimension either $k+1$ if
$0\not\in V_i$, or $k$ if $0\in V_i$.
Thus $\max(0, d - \codim W_i) + \dim S_i$ is either $\max(k,d-1)$ if $0\not\in V_i$, or $\max(k,d)$ if $0\in V_i$.
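For a concrete instance, let $S$ be the boundary (the $1$-skeleton) of a square in $\mathbb{R}^2$ centered at the origin, so $n=2$ and $k=1$. None of the four edge lines $V_i$ contains $0$, so $\codim W_i = 2$ for each $i$, and the minimal dimension in Corollary~\ref{corgen} is
$$\sup_i \bigl(\max(0,d-2)+1\bigr) = \max(1, d-1);$$
in particular, for $d=2$ there is a set of Hausdorff dimension $1$ that contains a scaled copy of the boundary of the square centered at each point of some $2$-dimensional compact set. If instead the square is positioned so that one of its edge lines passes through $0$, the corresponding term becomes $\max(1,d)$, and for $d=2$ the minimum jumps to $2$.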
It remains to prove Claim~\ref{falconer} and Lemma~\ref{main}.
The following simple proof is based on an argument that
was communicated to us by
K.~J.~Falconer.
\begin{proof}[Proof of Claim~\ref{falconer}]
We can clearly suppose that $d>0$ and $l_i\in\{1,\ldots,n-1\}$.
For $0<s\le n$, Falconer \cite{Fa94}
introduced ${\mathcal{G}}^s_n$ as the class of those $G_\delta$
subsets $F\subset\mathbb{R}^n$ for which $\bigcap_{i=1}^\infty f_i(F)$
has Hausdorff dimension at least $s$
for all sequences of similarity transformations
$\{f_i\}_{i=1}^\infty$.
Among other results, Falconer proved
that ${\mathcal{G}}^s_n$ is closed under countable intersection, and
if $F_1\in{\mathcal{G}}^s_n$ and $F_2\in{\mathcal{G}}^t_m$ then $F_1\times F_2 \in {\mathcal{G}}^{s+t}_{n+m}$.
Examples of sets in ${\mathcal{G}}^s_n$ with Hausdorff dimension exactly $s$ are also
given in \cite{Fa94} for every $0<s\le n$.
For $l<d$, let $E_l\in{\mathcal{G}}^{d-l}_{n-l}$ with $\dim E_l=d-l$, and
for $l\ge d$ let $E_l$ be a dense $G_\delta$ subset of $\mathbb{R}^{n-l}$ with
$\dim E_l=0$.
Let $F_l= E_l \times \mathbb{R}^l\subset\mathbb{R}^{n-l}\times\mathbb{R}^l$.
Clearly, the projection of $F_l$ onto $\mathbb{R}^{n-l}$ has Hausdorff dimension
$\max(0,d-l)$.
Now we show that $F_l\in{\mathcal{G}}^d_n$. This follows from the product rule
mentioned above if $l< d$. In the case $l\ge d$,
we need to prove that $\dim(\bigcap_{i=1}^\infty f_i(E_l\times \mathbb{R}^l))\ge d$
for any sequence of similarity transformations $\{f_i\}_{i=1}^\infty$.
Let $V$ be an $(n-l)$-dimensional subspace of $\mathbb{R}^n$ which is generic in
the sense that it intersects all the countably many $l$-dimensional affine
subspaces $f_i(\{0\}\times \mathbb{R}^l)$ in a single point.
Then for each translate $V+x$ of $V$, the set
$f_i(E_l\times\mathbb{R}^l)\cap (V+x)$ is similar to the dense $G_\delta$ set $E_l$,
hence by the Baire category theorem $(\bigcap_{i=1}^\infty f_i(E_l\times \mathbb{R}^l))\cap (V+x)$ is
nonempty for each $x$, which implies that indeed
$\dim(\bigcap_{i=1}^\infty f_i(E_l\times \mathbb{R}^l))\ge l \ge d$.
For each $i$, let $H_i$ be a rotated copy of $F_{l_i}$ with projection
of Hausdorff dimension $\max(0,d-l_i)$ onto $W_i$.
Since each $H_i$ is of class ${\mathcal{G}}^{d}_n$, the intersection
$D:= \bigcap_{i=1}^\infty H_i$ is also of class ${\mathcal{G}}^{d}_n$. In particular, its Hausdorff dimension is at least $d$. It is also clear that the projection of $D$ onto each $W_i$ has Hausdorff
dimension at most $\max(0,d-l_i)$.
Now $D$ has all the required properties except that
it might have Hausdorff dimension larger than $d$, and it is not compact
but $G_\delta$.
If $\dim D>d$, then let $C$ be a compact subset of $D$ with Hausdorff
dimension $d$.
Then for each $i$, the projection of $C$ onto $W_i$ has Hausdorff dimension at most $\max(0,d-l_i)$,
but it cannot be smaller since $W_i$ has co-dimension $l_i$.
If $\dim D=d$ then let $D_j$ be compact subsets of $D$ with $\dim D_j\to d$
and let $C$ be a disjoint union of shrunken converging copies of $D_j$ and
their limit point.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{main}]
By \eqref{triviAKS}, it is enough to show that
$\dim A_{K,V}\le \dim C'+\dim V$ holds for a typical $K\in{\mathcal{K}}$.
Write $V=v+V_0$, where $k=\dim V$, $V_0$ is a $k$-dimensional linear subspace, $v\in\mathbb{R}^n$,
and $v\perp V_0$.
Without loss of generality we can assume that $v=0$ or $|v|=1$.
Let $x'$ denote the projection of a point $x$ onto $\Span\{V\}^\perp$,
and let $\proj x\in\mathbb{R}$ denote the projection of $x$ onto $\mathbb{R} v$.
(Clearly, if $v=0$, then $\proj x=0$.)
Let ${\mathcal{K}}^n$ denote the space of all nonempty compact subsets of $\mathbb{R}^n$, equipped with the Hausdorff metric. Then
$$A=A_{K,V}=\bigcup_{(x,r)\in K}x'+(\proj x+r)v+V_0,$$ so
$$\dim A=\dim V_0+\dim\left(\bigcup_{(x,r)\in K}x'+(\proj x+r)v\right)=k+\dim F(K),$$
where $F:\,{\mathcal{K}}\to{\mathcal{K}}^n$ is defined by $$F(K)=\bigcup_{(x,r)\in K}x'+(\proj x+r)v.$$ It is easy to see that $F$ is continuous.
Since for every open set $G\subset\mathbb{R}^n$ and for every compact set $K\subset G$ we have
$\dist(\mathbb{R}^n\setminus G, K)>0$, it follows that for any open set $G\subset\mathbb{R}^n$, $\{K\in{\mathcal{K}}^n: K\subset G\}$ is an open subset of ${\mathcal{K}}^n$.
Consequently, for any $s, \delta, \varepsilon>0$, the set of those compact sets $K\in{\mathcal{K}}^n$
that have an open cover $\bigcup G_i$ where
$\sum_i (\diam G_i)^s<\varepsilon$ and $\diam G_i<\delta$ for each $i$ is an open
subset of ${\mathcal{K}}^n$. Therefore
for any $s>0$, $\{K\in {\mathcal{K}}^n: \dim K \le s\}$ is a $G_\delta$ subset of ${\mathcal{K}}^n$.
Since $F$ is continuous, $\{K\in {\mathcal{K}} : \dim F(K) \le s\}$ is a $G_\delta$ subset of ${\mathcal{K}}$.
\medskip
We finish the proof by showing that $\{K\in {\mathcal{K}} : \dim F(K) \le \dim C'\}$ is dense. To obtain this, for every compact set $L\in{\mathcal{K}}$ we construct another compact set $K\in {\mathcal{K}}$ arbitrarily close to $L$, such that
$\{\proj x+r:(x,r)\in K\}$ is finite and so $F(K)$ is covered by a finite union of copies of $C'$.
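Indeed, once such a $K$ is constructed, its compactness gives that $\{\proj x+r:(x,r)\in K\}$ is a finite subset $\{t_1,\ldots,t_m\}$ of $\varepsilon\mathbb{Z}$, and then
$$F(K)\subset\bigcup_{j=1}^m \,(C'+t_j v),$$
so $\dim F(K)\le\dim C'$.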
For a given $L\in{\mathcal{K}}$, such a $K\in{\mathcal{K}}$ can be constructed by choosing a sufficiently small $\varepsilon>0$
and letting
$$K:=\{(x,r):\,\exists r'\text{ s.t. } (x,r')\in L,\,\proj x+r\in\varepsilon\mathbb{Z},\,|r-r'|\le\varepsilon\}.\mbox{\qedhere}$$
\end{proof}
\section{Scaled and rotated copies}%
In this section, we study the problem when we are allowed to \emph{scale and rotate} copies of $S$. That is, now our aim is to
find for a given set $S\subset\mathbb{R}^n$ and a nonempty compact set of centers $C\subset\mathbb{R}^n$
the minimal possible value of $\dim A$, where $A$ contains a scaled and
rotated copy of $S$ centered at each point of $C$.
(That is, for every $x\in C$, there exist $r>0$ and $T\in SO(n)$ such that
$x+rT(S)\subset A$.)
For a fixed nonempty compact set $C\subset\mathbb{R}^n$ and a closed interval $I\subset (0,\infty)$,
let ${\mathcal{K}}'$ denote the space of all compact sets $K\subset C\times I \times SO(n)$
that have full projection onto $C$.
We fix a metric on $SO(n)$ that induces the natural topology
and equip ${\mathcal{K}}'$ with the Hausdorff metric.
Then ${\mathcal{K}}'$ is also a complete metric space, so again we can talk about
typical $K\in{\mathcal{K}}'$ in the Baire category sense.
Now for $K\in{\mathcal{K}}'$ and $S\subset\mathbb{R}^n$, we let
\begin{equation}
\label{A'}
A'_{K,S}: = \bigcup_{(x,r,T)\in K} x+rT(S).
\end{equation}
Note that $A'_{K,S}$ contains a scaled and rotated copy of $S$
centered at each point of $C$.
Again, first we consider the case when $S$ is an affine subspace, but
we now exclude the case when $S$ contains $0$.
\begin{lemma}
\label{l:rotated}
Let $V$ be an
affine subspace of $\mathbb{R}^n$ such that $0\not\in V$ and
let $C\subset\mathbb{R}^n$ be an arbitrary nonempty compact set.
Then for a typical $K\in{\mathcal{K}}'$, and for $A'_{K,V}$ defined by \eqref{A'},
$$
\dim A'_{K,V}=\dim V.
$$
\end{lemma}
\begin{proof}
Clearly it is enough to show that $\dim A'_{K,V}\le\dim V$
holds for a typical $K\in{\mathcal{K}}'$.
For any $N\in\mathbb{N}$, we define $F_N':{\mathcal{K}}'\to{\mathcal{K}}^n$ by
$F_N'(K)=A'_{K,V}\cap [-N,N]^n$. It is easy to see that $F'_N$ is continuous.
Then exactly the same argument as in the proof of Lemma~\ref{main}
gives that $\{K\in{\mathcal{K}}' : \dim F_N'(K)\le s\}$ is a $G_\delta$
subset of ${\mathcal{K}}'$, which implies that
$\{K\in{\mathcal{K}}' : \dim A'_{K,V}\le s\}$ is also $G_\delta$.
So it remains to prove that $\{K\in{\mathcal{K}}' : \dim A'_{K,V}\le \dim V\}$ is dense.
Fix $\varepsilon>0$.
Then, by compactness and since $0\not\in V$,
there exists an $N=N(\varepsilon)\in\mathbb{N}$ and $(\dim V)$-dimensional affine
subspaces $V_1,\ldots,V_{N}$ such that
for any $(x,r,T)\in C\times I \times SO(n)$
there exists $(r',T')\in I \times SO(n)$
within $\varepsilon$ distance of $(r,T)$ such that $x+r'T'(V)=V_i$ for some $i\le N$.
Thus, given any compact set $L\in{\mathcal{K}}'$ and $\varepsilon>0$,
we can take
$$
K=\{(x,r',T')\ :\
x+r' T'(V)\in\{V_1,\ldots,V_{N}\}\ \} \cap L_\varepsilon,
$$
where
$$
L_\varepsilon=\{(x,r',T')\ :\ \exists(r, T)\text{ s.t. } (x,r,T)\in L,\ \dist((r',T'),(r,T))\le\varepsilon\}.
$$
It follows that $K \in {\mathcal{K}}'$ and the (Hausdorff) distance between $K$ and $L$ is at most $\varepsilon$. Furthermore, $\dim A'_{K,V}=\dim V$, since
$A'_{K,V}$ can be covered by
finitely many $(\dim V)$-dimensional affine spaces.
\end{proof}
By taking a countable intersection of residual sets we obtain the following corollary of Lemma~\ref{l:rotated},
which clearly implies the general form of (2) of Corollary~\ref{c:dim}.
\begin{theorem}
Let $C$ be an arbitrary nonempty compact subset in $\mathbb{R}^n$, $k<n$ and let $S\subset\mathbb{R}^n$ be a $k$-Hausdorff-dimensional set that can be covered by a countable union of $k$-dimensional affine subspaces that do not contain $0$.
Then for a typical $K\in{\mathcal{K}}'$, the set $A'_{K,S}$
contains a scaled and rotated copy of $S$ centered at every point of $C$, and
$\dim A'_{K,S}=\dim S$. %
\end{theorem}
\begin{remark}
\label{Kakeya}
If $0\in V$ and $V$ is $k$-dimensional then
a scaled and rotated copy of $V$
centered at $x$ is a $k$-dimensional affine subspace that contains $x$.
Therefore a set $A$ that contains a scaled and rotated copy
of $V$ centered at every point
of $C$ is a set that contains a $k$-dimensional affine subspace
through every point of $C$.
The Lebesgue measure of such an $A$ is clearly bounded below by the Lebesgue measure of $C$. By generalizing the planar result of Davies \cite{Da} to higher dimensions, Falconer \cite{Fa86} proved that there is such an $A$ that attains this lower bound. In Section~\ref{s:Alan} we show that the Lebesgue measure of a typical such $A$ is in fact this minimum.
On the other hand,
to find the minimal dimension of such an $A$ is closely related to
the Kakeya problem, especially in the special case $k=1$, and for some
nontrivial $C$ this problem is as hard as the Kakeya problem.
\end{remark}
\section{Rotated copies: dimension}
\label{s:rotated}
Now we study what happens if we allow rotation
but do not allow scaling.
As we mentioned in the introduction, it is \emph{not} true that for a general nonempty compact set of centers $C$, a typical construction has minimal dimension. However, we will show that this is true provided that $C$ has full dimension.
The following lower estimate can be found in \cite{HKM}:
\begin{fact}\label{t:atleastkplus1}
Let $0 \leq k < n$ be integers, and let $S \subset \mathbb{R}^n$ be a $k$-Hausdorff-dimensional set that can be covered by a countable union
of $k$-dimensional affine subspaces that do not contain $0$.
Let $\emptyset\neq C \subset \mathbb{R}^n$ and $A \subset \mathbb{R}^n$ be such that
for every $x \in C$, there exists a rotated copy of $S$ centered at $x$ contained in $A$.
Then $\dim A \geq \max\{ k, k+ \dim C - (n-1) \}$.
In particular, if $\dim C=n$ then $\dim A \ge k+1$.
\end{fact}
\begin{remark}
If instead of fixing $C$, we fix only the dimension $d$ of $C$, and $S$ can be covered by one $k$-dimensional affine subspace $V$, then
the following simple examples show that the estimate in Fact~\ref{t:atleastkplus1} is sharp.
Without loss of generality we can assume that $V$ is at unit distance from $0$.
For $d \leq n-1$, we can take $A = \mathbb{R}^k \times \{0\} \subset \mathbb{R}^{n}$ and take $C$ to be a $d$-dimensional subset of $\mathbb{R}^k \times S^{n-k-1}$,
where $S^m$ denotes the unit sphere in $\mathbb{R}^{m+1}$ centered at $0$.
For $d = n-1 + s$, where $s \in [0, 1]$,
let $E \subset \mathbb{R}^{n-k}$ be an $s$-dimensional subset of a line
and let $F \subset \mathbb{R}^{n-k}$ be the union of the copies of $S^{n-k-1}$
centered at the points of $E$. It is easy to show that $\dim F = n-k-1+s$.
Let $C = \mathbb{R}^k \times F$ and $A = \mathbb{R}^k \times E$.
In both cases $A$ contains a rotated copy of $S$ centered at every point of $C$,
$\dim C=d$ and $\dim A=\max\{k, k+\dim C - (n-1)\}$.
If $S$ can be covered by two distinct $k$-dimensional affine subspaces but
cannot be covered by one, then this question becomes much more difficult.
Consider, for example, the case
when $S$ consists of two points, both at distance $1$ from $0$, so now
$A$ contains two distinct points at distance $1$ from every point of a $1$-dimensional set $C \subset \mathbb{R}^2$. The discussion in the introduction implies that if we take $C = S^1$, then $\dim A \geq 1$. We do not know if there exists a set $C$ with $\dim C = 1$ for which there is such a set $A$ with $\dim A < 1$.
\end{remark}
Our goal is to show that for every fixed $C$ with $\dim C = n$, the estimate $\dim A \geq k + 1$ in Fact~\ref{t:atleastkplus1} is always sharp. Moreover, we construct sets of Hausdorff dimension $k+1$ that contain
the $k$-skeleton of an
$n$-dimensional rotated polytope of \emph{every} size centered at \emph{every} point.
More precisely, we want to construct a set $A$ that contains
a rotated copy of every positive size of a given set $S\subset\mathbb{R}^n$ centered at
every point of a given nonempty compact set $C$.
(That is, for every $x\in C$ and $r>0$ there exists $T\in SO(n)$ such that
$x+rT(S)\subset A$.) Instead of every $x \in C$ and $r>0$, we will guarantee this only for every $(x,r)$
from each fixed nonempty compact set $J\subset \mathbb{R}^n \times (0,\infty)$. By taking
countable unions, we get the desired construction for every $(x, r) \in \mathbb{R}^n \times (0, \infty)$.
For a fixed nonempty compact set $J\subset \mathbb{R}^n \times (0,\infty)$, let ${\mathcal{K}}''$ denote the space of all compact sets
$K\subset J \times SO(n)$ that have full projection onto $J$.
Again, by taking a metric on $SO(n)$ that induces the natural topology
and equipping ${\mathcal{K}}''$ with the Hausdorff metric,
${\mathcal{K}}''$ is also a complete metric space, so again we can talk about
typical $K\in{\mathcal{K}}''$ in the Baire category sense.
Now for any $K\in{\mathcal{K}}''$ and $S\subset\mathbb{R}^n$, the set
\begin{equation}
\label{A''}
A''_{K,S}: = \bigcup_{(x,r,T)\in K} x+rT(S)
\end{equation}
contains a rotated copy of $S$
of scale $r$ centered at $x$ for every $(x,r) \in J$.
Note that taking $J=C \times \{1\}$ gives us the special case when only
rotation is used.
Again, we start with the case when $S$ is a $k$-dimensional
($0\le k<n$) affine subspace of $\mathbb{R}^n$
that does not contain the origin.
Note that if $d=\dist(S,0)$ then $x+rT(S)$
is at distance $rd$ from $x$.
This motivates the following easy
deterministic $(k+1)$-dimensional construction.
\begin{prop}
For any integers $0\le k<n$ there exists a Borel set $B\subset\mathbb{R}^n$ of Hausdorff
dimension $k+1$ that contains a $k$-dimensional affine subspace at every
positive distance from every point of $\mathbb{R}^n$.
\end{prop}
\begin{proof}
Let $W_1, W_2,\ldots$ be a countable collection of $(k+1)$-dimensional
affine subspaces of $\mathbb{R}^n$ such that $B:=\bigcup_i W_i$ is dense.
Then $B$ is clearly a Borel set $B\subset\mathbb{R}^n$ of Hausdorff dimension $k+1$,
so all we need to show is that for any fixed $x\in\mathbb{R}^n$ and
$r>0$ the set $B$ contains a $k$-dimensional affine subspace at distance $r$
from $x$. Choose $i$ such that $W_i$ intersects the interior of the ball
$B(x,r)$. Then the intersection of $W_i$ and the sphere $S(x,r)$ is a sphere
in the $(k+1)$-dimensional affine space $W_i$, and any $k$-dimensional
affine subspace of $W_i\subset B$ that is tangent to this sphere
is at distance $r$ from $x$.
\end{proof}
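To illustrate the proof in the first nontrivial case $n=3$, $k=1$: here $B$ is a dense countable union of $2$-dimensional planes $W_i$. Given $x\in\mathbb{R}^3$ and $r>0$, choose a plane $W_i$ that meets the open ball $B(x,r)$; then $W_i\cap S(x,r)$ is a circle in $W_i$ of radius $\rho=\sqrt{r^2-\dist(x,W_i)^2}$, and for every line $\ell\subset W_i$ tangent to this circle,
$$\dist(x,\ell)^2=\dist(x,W_i)^2+\rho^2=r^2,$$
so $\ell\subset B$ is at distance exactly $r$ from $x$.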
The proof of the following lemma is based on the same
idea as in the construction above.
\begin{lemma}
\label{l:rotatedeverysize}
Let $0\le k<n$ be integers and $V$ be a $k$-dimensional
affine subspace of $\mathbb{R}^n$ such that $0\not\in V$.
Let $J\subset \mathbb{R}^n\times(0,\infty)$ be an arbitrary nonempty compact set.
Then for a typical $K\in{\mathcal{K}}''$, and for $A''_{K,V}$ defined by \eqref{A''},
$$
\dim A''_{K,V}\le k +1.
$$
\end{lemma}
\begin{proof}
Without loss of generality we can assume that $V$ is at distance $1$ from the origin.
Let $A(n,k+1)$ be the space of all $(k+1)$-dimensional affine
subspaces of $\mathbb{R}^n$, equipped with a natural metric (for example the metric
defined in \cite[3.16]{Ma}), and
let $W_1,W_2,\ldots$ be a countable dense set
in $A(n,k+1)$. Let $B=\bigcup_i W_i$.
Exactly the same argument as in the proof of Lemma~\ref{l:rotated}
gives that
$\{K\in{\mathcal{K}}'' : \dim A''_{K,V}\le s\}$ is $G_\delta$ for any $s$, so
again it remains to prove that $\{K\in{\mathcal{K}}'' : \dim A''_{K,V}\le k +1\}$ is
dense in ${\mathcal{K}}''$.
Since $\dim B=k+1$, it is enough to show that
$\{K\in{\mathcal{K}}'' : A''_{K,V}\subset B\}$ is dense in ${\mathcal{K}}''$.
First we show that for any $(x,r,T)\in J \times SO(n)$
and $\varepsilon>0$, there exist $i\in\mathbb{N}$ and $T'\in SO(n)$ such that
$\dist(T,T')<\varepsilon$ and $x+rT'(V)\subset W_i$.
We will also see from the proof that for the given $\varepsilon>0$ and the above chosen $i$,
there exists a neighborhood of $(x,r,T)$ such that
for any $(x^*,r^*,T^*)$ from that neighborhood,
there exists ${T^*}'\in SO(n)$ such that
$\dist(T^*,{T^*}')<\varepsilon$ and $x^*+r^*{T^*}'(V)\subset W_i$.
Hence, by the compactness of $J \times SO(n)$,
for a given $\varepsilon>0$, there exists an $N$ such that we can choose an
$i\le N$ for every $(x,r,T)\in J \times SO(n)$.
So fix $(x,r,T)\in J \times SO(n)$ and $\varepsilon>0$.
Let $W$ be a $(k+1)$-dimensional affine subspace of $\mathbb{R}^n$ that contains $V$
such that $0<\dist(W,0)<\dist(V,0)=1$. Let $v$ be the point
of $V$ closest to the origin, and let $V_0=x+rT(V)$, $v_0=x+rT(v)$ and $W_0=x+rT(W)$.
Then $S_0:=W_0\cap S(x,r)$ is a sphere
in $W_0$, and $V_0$ is tangent to $S_0$ at the point $v_0$.
If $W_i$ is sufficiently close to $W_0$, then we can pick a point $v'_0\in S'_0:=W_i\cap S(x,r)$ close to
$v_0$, and a $k$-dimensional affine subspace $V'_0\subset W_i$ close to
$V_0$ that is tangent to $S'_0$ at $v'_0$.
Then $V'_0$ is at distance $r$ from $x$ and it is as close to $V_0=x+rT(V)$
as we wish, so $V_0'=x+rT'(V)$ for some $T'\in SO(n)$
and $T'$ can be chosen arbitrarily close to $T$,
which completes the proof of the claim of the previous paragraph.
Thus, for a given $L\in{\mathcal{K}}''$ and $\varepsilon>0$, if we let
$$
K=\{(x,r,T')\ : \exists i\le N,\exists\, T\text{ s.t. }
(x,r,T)\in L,\ \dist(T,T')\le \varepsilon,\
x+rT'(V)\subset W_i\},
$$
then $K \in {\mathcal{K}}''$ and the Hausdorff distance between $K$ and $L$ is at most $\varepsilon$.
Furthermore,
$A''_{K,V}\subset \bigcup_{i=1}^{N}W_i \subset B$,
which completes the proof.
\end{proof}
The same statements hold if, instead of $S=V$, we consider any subset $S\subset V$. By taking a countable intersection of residual sets we obtain the following.
\begin{theorem}\label{t:generalkplus1}
Let $0\le k<n$ be integers and
let $S=\bigcup_{i=1}^\infty S_i$, where each $S_i$ is a
subset of a $k$-dimensional affine subspace $V_i$ with
$0\not\in V_i$.
Let $J\subset \mathbb{R}^n \times (0,\infty)$ be an arbitrary nonempty compact set.
Recall that ${\mathcal{K}}''$ denotes the space of all compact sets
$K\subset J \times SO(n)$ that have full projection onto $J$.
Then
for a typical $K\in{\mathcal{K}}''$, the set $A''_{K,S}$ defined by \eqref{A''}
is a closed set with $\dim A''_{K,S}\le k+1$, and
for every $(x,r)\in J$, there exists a $T\in SO(n)$ such that
$x+rT(S)\subset A''_{K,S}$.
\end{theorem}
We can see from Fact \ref{t:atleastkplus1} that the estimate $k+1$ above is sharp, provided that $\dim S=k$
and $J\supset C\times\{r\}$ for some $r>0$ and $C\subset\mathbb{R}^n$ with $\dim C=n$.
This gives the general version of (3) and (4) of Corollary~\ref{c:dim}.
\begin{remark}
In Theorem~\ref{t:generalkplus1} we obtain a rotated and scaled copy of $S$
for every $(x, r) \in J$ inside a set of Hausdorff
dimension $k+1$. We claim that using a similar
argument as in \cite[Remark 1.6]{JJKM} we can also move $S$ \emph{continuously}
inside a set of Hausdorff dimension $k+1$ so that during this motion we get
$S$ in every required position.
Indeed, let $K$ be a fixed (typical) element
of ${\mathcal{K}}''$ guaranteed by Theorem~\ref{t:generalkplus1}
such that $\dim A''_{K,S}\le k+1$.
Since $K$ is a nonempty compact subset of the
metric space $\conv(J)\times SO(n)$,
where $\conv$ denotes the convex hull,
there exists a continuous function
$g:C_{1/3}\to \conv(J)\times SO(n)$
on the classical Cantor set $C_{1/3}$
such that $g(C_{1/3})=K$.
All we need to do is to extend this map
continuously to $[0,1]$ such that
$\dim A''_{g([0,1]),S} \le k+1$.
For each complementary interval $(a,b)$ of the Cantor set, we define $g$ on $(a,b)$ in such a way that $g$ is smooth on $[a,b]$ and that the diameter of $g([a,b])$ is at most a constant multiple of the distance between $g(a)$ and $g(b)$. This gives the desired extension since the union of
the sets of the form $x+rT(S)$ ($(x,r,T)\in g((a,b))$) will be a countable union of smooth
$(k+1)$-dimensional manifolds, so
$\dim A''_{g((a,b)),S} \le k+1$.
Note that if $J=C \times \{1\}$ then we get only congruent copies.
So in particular, for any $k<n$, the $k$-skeleton
of a unit cube can be continuously moved by rigid motions in $\mathbb{R}^n$
within a set of Hausdorff dimension $k+1$ in such
a way that the center of the cube goes through every point of $C$,
or by joining such motions, through every point of $\mathbb{R}^n$.
\end{remark}
\section{Rotated copies: measure}
\label{s:Alan}
In this section, we study what happens when we place a rotated punctured hyperplane through every point. We show that typical arrangements of this kind have Lebesgue measure zero and are hence Nikodym sets. Using similar methods, we also show that typical arrangements of placing a rotated hyperplane at every positive distance from every point have measure zero. We use $| \cdot |$ to denote the Lebesgue measure.
Let $e_1 = (1,0,\ldots,0) \in \mathbb{R}^n$ and $H = \{(y_1, \ldots, y_n) \in \mathbb{R}^n : y_1 = 0\}$. By a \emph{rotated hyperplane at distance $r \in [0, \infty)$ from $x \in \mathbb{R}^n$}, we mean a set of the form $x + rT(e_1) + T(H)$ for some $T \in SO(n)$. Note that we now allow $r$ to be $0$, and that $x + rT(e_1) + T(H)$ differs from $x + rT(e_1+H)$ when $r = 0$.
Fix a nonempty compact set $J \subset \mathbb{R}^n \times [0, \infty)$. As in Section \ref{s:rotated}, we let ${\mathcal{K}}''$ denote the space of compact sets $K \subset J \times SO(n)$ that have full projection onto $J$.
In this section we prove the following result.
\begin{theorem}
\label{theorem:typical-davies-nikodym}
For a typical $K \in {\mathcal{K}}''$, the set
\[
\bigcup_{(x, r, T)\in K} (x + rT(e_1) + T(H)) \setminus \{x\}
\]
has measure zero.
\end{theorem}
Note that if $r = 0$, then $(x + rT(e_1) + T(H)) \setminus \{x\}$ is $x + T(H\setminus\{0\})$, so we are placing a rotated copy of the \emph{punctured hyperplane} $H\setminus\{0\}$ through $x$. Thus, if we consider the case $J = C \times \{0\}$ for some compact set $C \subset \mathbb{R}^n$, we see that typical arrangements give rise to Nikodym sets. We also obtain our claim in Remark~\ref{Kakeya} that if we place an un-punctured hyperplane through every point in $C$, the typical arrangement of this kind has Lebesgue measure equal to $|C|$.
By taking countable unions of sets of the form in Theorem~\ref{theorem:typical-davies-nikodym}, we obtain the following:
\begin{cor}
There is a set of measure zero in $\mathbb{R}^n$ which contains a hyperplane at every positive distance from every point as well as a punctured hyperplane through every point.
\end{cor}
\subsection{Translating cones}
\label{subsec:cones}
In this section we introduce the main geometric construction for proving Theorem~\ref{theorem:typical-davies-nikodym}.
This construction is done in $\mathbb{R}^2$ and we will later see how to apply it to the $n$-dimensional problem.
Our geometric arguments are similar to those used to construct Kakeya needle sets of arbitrarily small measure, see e.g. \cite{Be}.
For $-\frac{\pi}{2} < \phi_1 < \phi_2 < \frac{\pi}{2}$, we define
\begin{align*}
D(\phi_1, \phi_2)
&=
\{ (r \sin\theta, r \cos\theta) : r \in \mathbb{R}, \theta \in [\phi_1, \phi_2] \}.
\end{align*}
In other words, $D(\phi_1, \phi_2) \subset \mathbb{R}^2$ denotes the double cone bounded by the lines through the origin of signed angles $\phi_1, \phi_2$ with respect to the $y$-axis. (Note in particular that our sign convention measures the angles in the clockwise direction.)
Our geometric construction begins by partitioning $D = D(\phi_1, \phi_2)$ into finitely many double cones $\{ D_i\}$. Next, we translate each $D_i$ downwards to a new vertex $v_i \in D_i \cap \{y_2 < 0\}$ to obtain $\widetilde D_i := v_i + D_i$. Our goal is to choose the $\{D_i\}$ and the $\{v_i\}$ so that the resulting double cones $\{ \widetilde D_i\}$ satisfy the following three properties.
First, the $\{\widetilde D_i\}$ should have considerable overlap (and hence small measure) in a strip below the horizontal axis $\{y_2=0\}$.
Second, we would like our construction to preserve certain distances to lines. To be more precise, first let
\[
D^\perp(\phi_1, \phi_2)
=
\{ (r \sin\theta, r \cos\theta) : r \geq 0, \theta \in [\phi_2 - \tfrac{\pi}{2}, \phi_1 + \tfrac{\pi}{2}] \}
.
\]
Our second desired property is that for any point $p \in D^\perp(\phi_1, \phi_2)$ and any line $\ell \subset D$, there is a line in some $\widetilde D_i$ which has the same distance to $p$ as $\ell$ does.
For a non-horizontal line $\ell\subset \mathbb{R}^2$ and $p \in \mathbb{R}^2$, we define $d(p, \ell)$ to be the signed distance from $p$ to $\ell$. The sign is positive if $p$ is on the left of $\ell$, and negative if $p$ is on the right. In our construction, we will always consider only lines whose direction belongs to the original cone $D(\phi_1,\phi_2)$. In particular, they are never horizontal so the signed distance is defined. The essential property of $D^\perp(\phi_1, \phi_2)$ is that for any $p \in D^\perp(\phi_1, \phi_2)$, the map $\ell \mapsto d(p, \ell)$ is an increasing function as $\ell$ rotates from one boundary line $\ell_1$ of $D$ to the other boundary line $\ell_2$. Hence,
\[
\{d(p, \ell) : \ell \subset D\}
=
[d(p, \ell_1), d(p, \ell_2)].
\]
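To see why this monotonicity holds, write $p = R(\sin\psi, \cos\psi)$ with $R \geq 0$, and let $\ell_\theta$ denote the line through the origin with direction $(\sin\theta, \cos\theta)$ for $\theta \in [\phi_1, \phi_2]$. Then
\[
d(p, \ell_\theta) = R(\sin\theta\cos\psi - \cos\theta\sin\psi) = R\sin(\theta - \psi),
\qquad
\frac{\partial}{\partial\theta}\, d(p, \ell_\theta) = R\cos(\theta - \psi),
\]
and the derivative is nonnegative for every $\theta \in [\phi_1, \phi_2]$ precisely when $\psi \in [\phi_2 - \tfrac{\pi}{2}, \phi_1 + \tfrac{\pi}{2}]$, that is, precisely when $p \in D^\perp(\phi_1, \phi_2)$.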
Before stating the third and final property, we observe that since $v_i \in D_i \cap \{y_2 < 0\}$ for all $i$, we have $D \cap \{y_2 \geq 0\} \subset (\bigcup_{i} \widetilde D_i) \cap \{y_2 \geq 0\}$. The third desired property is that the reverse containment holds if we thicken $D$ slightly. That is, $(\bigcup_{i} \widetilde D_i) \cap \{y_2 \geq 0\}$ should be contained in a small neighborhood of $D \cap \{y_2 \geq 0\}$.
The following lemma asserts that it is indeed possible to partition $D$ and translate the pieces to achieve the three desired properties above. %
\begin{lemma}%
\label{lemma:iterated-partition}
Let $-\frac{\pi}{2} < \phi_1 < \phi_2 < \frac{\pi}{2}$, $D = D(\phi_1, \phi_2)$, $R > 0$, and $\varepsilon > 0$. Then we can choose the partition $D = \bigcup D_i$ and the translates $\widetilde D_i = v_i + D_i$ so that
\begin{enumerate}
\item\label{condition-small-iter}
$|(\bigcup_i \widetilde D_i) \cap \{ -R \leq y_2 \leq 0 \}| < \varepsilon$.
\item\label{condition-distance-iter}
If $p \in D^\perp$ and $\ell_0 \subset D$ is a line, then there is a line $\widetilde \ell$ in some $\widetilde D_i$ such that $d(p, \widetilde \ell) = d(p, \ell_0)$.
\item\label{condition-nbhd-iter}
$(\bigcup_i \widetilde D_i) \cap \{y_2 \geq 0\}$ is contained in the $\varepsilon$-neighborhood of $D \cap \{ y_2 \geq 0\}$.
\end{enumerate}
\end{lemma}
To prove this lemma, we first need a more elementary construction, in which we translate each $D_i$ downwards by only a small amount $\delta$.
\begin{lemma}%
\label{lemma:basic-partition}
Let $-\frac{\pi}{2} < \phi_1 < \phi_2 < \frac{\pi}{2}$, $D = D(\phi_1, \phi_2)$, $\delta > 0$, and $\varepsilon > 0$. Then we can choose the partition $D = \bigcup D_i$ and the translates $\widetilde D_i = v_i + D_i$ so that
\begin{enumerate}
\item\label{condition-small-basic}
$|(\bigcup_i \widetilde D_i) \cap \{ -\delta \leq y_2 \leq 0 \}| \leq c \delta^2$, where $c = |D \cap \{0 \leq y_2 \leq 1\}|$.
\item\label{condition-distance-basic}
If $p \in D^\perp$ and $\ell_0 \subset D$ is a line, then there is a line $\widetilde \ell$ in some $\widetilde D_i$ such that $d(p, \widetilde \ell) = d(p, \ell_0)$.
\item\label{condition-vertex-basic}
For each $i$, $v_i \in \{ y_2 = -\delta \}$.
\item\label{condition-contain-basic}
For each $i$, $D^\perp \subset \widetilde D_i^\perp$.
\item\label{condition-nbhd-basic}
$(\bigcup_i \widetilde D_i) \cap \{y_2 \geq 0\}$ is contained in the $\varepsilon$-neighborhood of $D \cap \{ y_2 \geq 0\}$.
\end{enumerate}
(If $D_i = D(\psi_1, \psi_2)$, then $D_i^\perp := D^\perp(\psi_1, \psi_2)$ and $\widetilde D_i^\perp := v_i + D_i^\perp$.)
\end{lemma}
\begin{proof}
We claim that for any partition $D = \bigcup_i D_i$, if we choose any $v_i \in \{y_2 = -\delta\} \cap D_i \cap (-D_i^\perp)$, then we have \eqref{condition-small-basic}, \eqref{condition-distance-basic}, \eqref{condition-vertex-basic}, and \eqref{condition-contain-basic}. Indeed, \eqref{condition-vertex-basic} is immediate. Since $-v_i \in D_i^\perp$, we have $D^\perp \subset D_i^\perp \subset \widetilde D_i^\perp$, so \eqref{condition-contain-basic} holds. And \eqref{condition-vertex-basic} implies \eqref{condition-small-basic} since
$
|(\bigcup_i \widetilde D_i) \cap \{-\delta \leq y_2 \leq 0\}|
\leq
\sum_i |\widetilde D_i \cap \{-\delta \leq y_2 \leq 0\}|
=
\sum_i |D_i \cap \{0\leq y_2 \leq \delta\}|
=
c\delta^2.$
To show \eqref{condition-distance-basic} holds, let $p \in D^\perp$ and $\ell_0 \subset D$. Then $\ell_0$ is in some $D_i$. Let $\ell_1, \ell_2$ be the two boundary lines of $D_i$ with $d(p, \ell_1) < d(p, \ell_2)$. Recall that $\ell \subset D_i$ if and only if $v_i + \ell \subset \widetilde D_i$. Since $p \in D_i^\perp$ and $p \in \widetilde D_i^\perp$, we have
$\{d(p, \ell) : \ell \subset D_i\}
=
[d(p, \ell_1), d(p, \ell_2)]$
and
$\{d(p, \ell) : \ell \subset \widetilde D_i\}
=
[d(p, v_i+\ell_1), d(p, v_i+\ell_2)]$.
Since $-v_i \in D_i \cap \{y_2 \geq 0\}$, we have
$[d(p, \ell_1), d(p, \ell_2)] \subset [d(p, v_i+\ell_1), d(p, v_i + \ell_2)]$. Thus,
\[
d(p, \ell_0)
\in
\{d(p, \ell) : \ell \subset D_i\}
\subset
\{d(p, \ell) : \ell \subset \widetilde D_i\}
,
\]
so there is some $\widetilde \ell \subset \widetilde D_i$ such that $d(p, \widetilde \ell) = d(p, \ell_0)$, which completes the proof of \eqref{condition-distance-basic} and hence our claim. Finally, by making the partition $\bigcup_i D_i$ sufficiently fine and choosing $v_i$ as above, we can ensure that \eqref{condition-nbhd-basic} holds.
\end{proof}
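The measure bound \eqref{condition-small-basic} is easy to check numerically. The following Python sketch (a sanity check, not part of the proof) fixes the symmetric cone $D(-\phi,\phi)$, partitions it into equal subcones, and slides each subcone down its own axis to $\{y_2 = -\delta\}$, which is one valid choice of $v_i$; it then estimates the measure of the union in the slab by Monte Carlo and compares it against $c\delta^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

phi, delta, m = 0.5, 0.2, 8            # half-angle of D(-phi, phi), depth delta, number of subcones
edges = np.linspace(-phi, phi, m + 1)  # angular partition of D into subcones D_i
mids = 0.5 * (edges[:-1] + edges[1:])
vx = -delta * np.tan(mids)             # vertex of the i-th translated subcone: (vx[i], -delta)

def in_union(x, y):
    """Membership of the points (x, y) in the union of the translated double cones."""
    hit = np.zeros_like(x, dtype=bool)
    for i in range(m):
        # coordinates relative to the i-th vertex; in the slab only the upper
        # nappe of each double cone appears, so wy >= 0 and no folding is needed
        wx, wy = x - vx[i], y + delta
        theta = np.arctan2(wx, wy)     # clockwise angle from the +y axis
        hit |= (edges[i] <= theta) & (theta <= edges[i + 1])
    return hit

# Monte Carlo estimate of |union ∩ {-delta <= y <= 0}| over the slab [-1, 1] x [-delta, 0]
n = 200_000
xs = rng.uniform(-1.0, 1.0, n)
ys = rng.uniform(-delta, 0.0, n)
area = in_union(xs, ys).mean() * (2.0 * delta)   # the sampling slab has area 2 * delta

c = np.tan(phi)                        # c = |D ∩ {0 <= y_2 <= 1}| for D(-phi, phi)
```

With these parameters the estimate comes out well below $c\delta^2$, reflecting the overlap of the translated cones near the origin (every $\widetilde D_i$ contains the origin, since $-v_i \in D_i$).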
\begin{proof}[Proof of Lemma~\ref{lemma:iterated-partition}]
We fix a large $N$ and repeatedly apply Lemma~\ref{lemma:basic-partition} with $\delta = R/N$ until the vertex of each double cone lies in $\{ y_2 = -R\}$. That is, we apply Lemma~\ref{lemma:basic-partition} once on $D$ to get $E_1$, a union of double cones with vertices in $\{ y_2 = -\delta\}$ and such that $|E_1 \cap \{-\delta \leq y_2 \leq 0 \}| < c' \delta^2$, where $c' = 2|D \cap \{0 \leq y_2 \leq 1 \}|$. Next, we apply Lemma~\ref{lemma:basic-partition} to every double cone in $E_1$ to get $E_2$, a union of double cones with vertices in $\{ y_2 = -2\delta\}$ and such that $|E_2 \cap \{-2\delta \leq y_2 \leq -\delta \}| < c' \delta^2$. By Lemma~\ref{lemma:basic-partition}\eqref{condition-nbhd-basic}, we can also ensure that $|E_2 \cap \{-\delta \leq y_2 \leq 0 \}| < c' \delta^2$.
We continue in this way to obtain $E_1, \ldots, E_N$, such that $|E_k \cap \{-j\delta \leq y_2 \leq -(j-1)\delta \}| < c' \delta^2$ for $1 \leq j \leq k \leq N$. Because of Lemma~\ref{lemma:basic-partition}\eqref{condition-nbhd-basic}, we can also ensure that $E_k \cap \{y_2 \geq 0\}$ is in the $\varepsilon$-neighborhood of $D \cap \{y_2 \geq 0\}$ for each $k$. Ultimately, we have $|E_N \cap \{-R \leq y_2 \leq 0 \}| \leq N c' \delta^2 = c'R^2/N$. By choosing $N$ sufficiently large, we can make this quantity as small as we wish. Writing $E_N$ as $\bigcup_i \widetilde D_i$, we obtain \eqref{condition-small-iter} and \eqref{condition-nbhd-iter}. Furthermore, every time we translate downwards by $\delta$, Lemma~\ref{lemma:basic-partition}\eqref{condition-contain-basic} allows us to apply Lemma~\ref{lemma:basic-partition}\eqref{condition-distance-basic}. Thus, \eqref{condition-distance-iter} holds.
\end{proof}
\subsection{Proof of Theorem~\ref{theorem:typical-davies-nikodym}}
First we apply our main geometric construction from the previous section to prove the following lemma.
\begin{lemma}
\label{lemma:nbhd-SOn}
Let $n\ge 2$, $(x_0,r_0,T_0) \in \mathbb{R}^n \times [0, \infty) \times SO(n)$, let $B \subset \mathbb{R}^n$ be a closed ball, and suppose that $x_0 + r_0T_0(e_1) \not\in B$.
Let $\eta > 0$ be arbitrary. Then there is a (relatively) open neighborhood $\widetilde U$ of $(x_0,r_0)$ in $\mathbb{R}^n \times [0, \infty)$ such that for each $\varepsilon > 0$, there is a set $\widetilde D \subset \mathbb{R}^n$ such that:
\begin{enumerate}
\item For all $(x,r) \in \widetilde U$, there is an affine hyperplane $V \subset \widetilde D$ of distance $r$ from $x$ and such that the angle between $V$ and $T_0(H)$ is at most $\eta$.
\item $|B \cap \widetilde D| < \varepsilon$.
\end{enumerate}
\end{lemma}
\begin{proof}
First we show the lemma for $n=2$. Since $x_0 + r_0T_0(e_1) \not\in B$, without loss of generality, we may assume that $T_0 \in SO(2)$ is the identity, that $x_0+r_0e_1 \in \{y_1 = 0, y_2 > 0\}$, and that $B$ does not intersect $\{y_1 = 0, y_2 \ge 0\}$. We can also assume that $B$ lies in $\{y_2 \geq - 2\diam B\}$. %
It follows that $x_0$ lies in the upper half-plane $\{y_2 > 0\}$.
Using the notation from Section \ref{subsec:cones}, let $D = D(-\phi, \phi)$ be a double cone,
where $\phi \in (0,\eta)$ is small enough so that $x_0 \in D^\perp$ and $B \cap D \cap \{y_2 \geq 0\} = \emptyset$.
The boundary of $D$ is made up of two lines, $\ell_1, \ell_2$, with $d(x_0, \ell_1) < r_0 < d(x_0, \ell_2)$. Let $\rho > 0$ be sufficiently small so that $d(x_0, \ell_1) < r_0 - \rho$ and $r_0 + \rho < d(x_0, \ell_2)$.
Let $\widetilde U$ be a (relatively) open neighborhood of $(x_0, r_0)$ contained in
\[
\{ y \in D^\perp : d(y, \ell_1) < r_0 - \rho \text{ and } r_0 + \rho < d(y, \ell_2)\} \times (r_0-\rho, r_0+\rho)
.\]
Then for any $(x, r) \in \widetilde U$, there is a line $\ell \subset D$ of signed distance $r$ from $x$. Given $\varepsilon > 0$, we apply Lemma~\ref{lemma:iterated-partition} to get $\widetilde D := \bigcup_i \widetilde D_i$ with $|\widetilde D \cap \{ -2\diam B \leq y_2 \leq 0 \}| < \varepsilon$ and $B \cap \widetilde D \cap \{y_2 \geq 0\} = \emptyset$. It follows that $|B \cap \widetilde D| < \varepsilon$. By Lemma~\ref{lemma:iterated-partition}\eqref{condition-distance-iter},
for every $(x, r) \in \widetilde U$, there is some line $\ell \subset \widetilde D$ of distance $r$ from $x$. Every line $\ell \subset \widetilde D$ is a translate of some line in $D$, so the angle between $\ell$ and $H$ is at most $\phi < \eta$. This completes the proof in dimension $n=2$.
For an arbitrary $n\ge 2$, we can assume without loss of generality that $T_0$ is the identity, that $x_0$ (and hence also $x_0+r_0 e_1$) is contained in the two-dimensional plane $\mathbb{R}^2\subset\mathbb{R}^n$ defined by the first two coordinate axes, and that the same assumptions hold as in the first paragraph of our proof. Then, if we project the ball $B$ into $\mathbb{R}^2$, take the sets $\widetilde U,\widetilde D\subset \mathbb{R}^2$ constructed above, and multiply them by $\mathbb{R}^{n-2}$, the resulting sets satisfy the requirements of the statement of Lemma~\ref{lemma:nbhd-SOn} with $\varepsilon$ replaced by $\varepsilon\diam(B)^{n-2}$.
\end{proof}
\begin{lemma}
\label{lemma:nbhd-of-x-r}
Let $(x_0,r_0,T_0) \in \mathbb{R}^n \times [0, \infty) \times SO(n)$, and let $B \subset \mathbb{R}^n$ be a closed ball. %
Let $G$ be a (relatively) open neighborhood of $(x_0,r_0,T_0)$ in $\mathbb{R}^n \times [0, \infty) \times SO(n)$. Then there is an open neighborhood $U \subset \mathbb{R}^n \times [0, \infty)$ of $(x_0,r_0)$ such that for each $\varepsilon > 0$, there is a compact set $K \subset G$ with full projection onto $U$ and such that
\begin{equation}
\label{eq:small-intersection-B}
B \cap \bigcup_{\substack{(x, r, T) \in K\\ x+ rT(e_1) \not\in 2B}}(x+rT(e_1) + T(H))
\end{equation}
has measure less than $\varepsilon$. (Here $2B$ denotes the closed ball with the same center as $B$ and with twice the radius.)
\end{lemma}
\begin{proof}
Without loss of generality, we may assume $G = G_1 \times G_2$, where $G_1$ and $G_2$ are open sets in $\mathbb{R}^n \times [0, \infty)$ and $SO(n)$, respectively.
If $x_0 + r_0T_0(e_1) \in B$, then we can choose $K \subset G$ to contain a neighborhood of $(x_0,r_0,T_0)$ and such that $x+rT(e_1) \in 2B$ for all $(x, r, T) \in K$. Then the set \eqref{eq:small-intersection-B} is empty, so the lemma holds trivially.
Now suppose $x_0 + r_0T_0(e_1) \not\in B$. We can apply the previous lemma with $\eta$ sufficiently small (depending on $G_2$) to get a set $\widetilde U$. We take $U$ to be an open neighborhood of $(x_0, r_0)$ inside $\widetilde U$ and compactly contained in $G_1$. Then for each $\varepsilon > 0$, the previous lemma gives a set $\widetilde D$. We take $K$ to be the closure of
\[
\{(x, r, T) \in U \times SO(n) : x + rT(e_1) + T(H) \subset \widetilde D
\},
\]
and by the properties of $\widetilde D$ given by the previous lemma, this $K$ has the desired properties.
\end{proof}
For $B \subset \mathbb{R}^n$ a closed ball, let $\mathcal{A}(B)$ be the set of all $K \in {\mathcal{K}}''$ such that
\[
B \cap \bigcup_{\substack{(x, r, T) \in K\\ x + rT(e_1) \not\in 2B}}(x+rT(e_1) + T(H))
\]
has measure zero.
\begin{lemma}
\label{lemma:2-diam-Q-dense}
$\mathcal{A}(B)$ is residual in ${\mathcal{K}}''$.
\end{lemma}
\begin{proof}
For $\varepsilon > 0$, let $\mathcal{A}(B, \varepsilon)$ be the set of those $K \in {\mathcal{K}}''$ for which there is an $\eta > 0$ such that
\[
B \cap \bigcup_{\substack{(x, r, T) \in K^\eta\\ x + rT(e_1) \not\in 2B}}(x+rT(e_1) + T(H))
\]
has measure less than $\varepsilon$, where $K^\eta$ denotes the open $\eta$-neighborhood of $K$. Since $\mathcal{A}(B) = \bigcap_{m=1}^\infty \mathcal{A}(B, \frac{1}{m})$, it is enough to show that $\mathcal{A}(B, \varepsilon)$ is open and dense in ${\mathcal{K}}''$ for each $\varepsilon > 0$.
Fix $\varepsilon > 0$. $\mathcal{A}(B, \varepsilon)$ is clearly open in ${\mathcal{K}}''$. To show that it is dense, let $L \in {\mathcal{K}}''$ be arbitrary. Our aim is to find a $K \in \mathcal{A}(B, \varepsilon)$ arbitrarily close to $L$. For each $(x, r, T) \in L$, we take a neighborhood $G_{(x, r, T)}$ of $(x,r,T)$, which we choose sufficiently small (to be specified later). Then we apply Lemma~\ref{lemma:nbhd-of-x-r} to $(x, r, T)$ to get a neighborhood $U_{(x, r, T)} \subset \mathbb{R}^n \times [0, \infty)$ of $(x, r)$. By compactness, there is a finite collection $\{(x_i, r_i, T_i)\} \subset L$ such that $\{U_{(x_i, r_i, T_i)}\}$ covers $J$.
Choose $\varepsilon_i$ so that $\sum_i \varepsilon_i < \varepsilon$. We apply Lemma~\ref{lemma:nbhd-of-x-r} to each $U_{(x_i, r_i, T_i)}$ with $\varepsilon_i$ in place of $\varepsilon$ to get a compact $K_i \subset G_{(x_i, r_i, T_i)}$ with full projection onto $U_{(x_i, r_i, T_i)}$. Let $\widetilde K_i = K_i \cap (J \times SO(n))$. Let $K$ be the union of $\bigcup_i \widetilde K_i$ together with a finite $\delta$-net of $L$. Then $K \in \mathcal{A}(B, \varepsilon)$. By choosing $\delta$ and all the $G_{(x, r, T)}$ sufficiently small, we can make $K$ and $L$ arbitrarily close to each other in the Hausdorff metric.
\end{proof}
Now we are ready to prove Theorem~\ref{theorem:typical-davies-nikodym}. It follows easily from Lemma~\ref{lemma:2-diam-Q-dense} that, for a typical $K \in {\mathcal{K}}''$,
\begin{equation}
\label{eq:union-punctured}
\bigcup_{(x, r, T) \in K}(x+rT(e_1) + T(H\setminus\{0\}))
\end{equation}
has measure zero. Indeed, let $\{B_i\}$ be a countable collection of balls such that every point in $\mathbb{R}^n$ is covered by a ball of arbitrarily small diameter, and suppose that $K\in\bigcap_i\mathcal A(B_i)$.
For every $(x,r,T)\in K$ and for every $y\in H\setminus\{0\}$ there is a $B_i$ which contains $x + rT(e_1) + T(y)$ and has diameter less than $|y|/2$. Then $x + rT(e_1) \not\in 2B_i$, so $x + rT(e_1) + T(y)$ belongs to the null set $$B_i \cap \bigcup_{\substack{(x, r, T) \in K\\ x + rT(e_1) \not\in 2B_i}}(x+rT(e_1) + T(H)).$$
To complete the proof of Theorem~\ref{theorem:typical-davies-nikodym}, we need to show that we can remove the puncture from $H \setminus \{0\}$ when the distance $r$ is nonzero. By adapting the argument in the proof of Lemma~\ref{l:rotatedeverysize}, we can show that for any $r_0 > 0$, for a typical $K \in {\mathcal{K}}''$, the set
\begin{equation}
\label{eq:union-gtr-r0}
\bigcup_{\substack{(x, r, T) \in K\\ r\geq r_0}}x+rT(e_1)
\end{equation} has dimension at most $1$, hence measure zero. By taking a countable intersection of $r_0$ tending to $0$, we see that for a typical $K \in {\mathcal{K}}''$, the set
$$\bigcup_{\substack{(x, r, T) \in K\\ r\neq 0}}x+rT(e_1)$$ has measure zero. This completes the proof of Theorem~\ref{theorem:typical-davies-nikodym}.
\begin{remark}
The argument in the proof of Lemma~\ref{l:rotatedeverysize} cannot be applied directly to show that the set \eqref{eq:union-gtr-r0} has dimension at most $1$ for a typical $K \in {\mathcal{K}}''$. There is a slight complication due to the fact that for $r_0 > 0$, the function ${\mathcal{K}}'' \to {\mathcal{K}}^n$ defined by $K \mapsto \bigcup_{(x, r, T) \in K, r \geq r_0} x + rT(e_1)$ is not necessarily continuous. However, the technical modifications required are straightforward, so we leave this to the reader.
\end{remark}
\section*{Introduction}
Coherent light tuned on, or near, atomic resonances has allowed for the discovery and investigation of a wealth of nonlinear optical effects~\cite{Boyd2003, Gordon1965, Callen1967, Durbin1981, Boshier1982, LeBerre1984, Santamato1984, Deng2005, Nascimento2006}. In addition to cross-action effects in which the presence of one beam alters the behavior of another, many self-action effects have been thoroughly investigated, including self-phase modulation~\cite{Durbin1981}, self-focusing~\cite{Shen1975, Marburger1975, Boshier1982} and self-defocusing~\cite{Gordon1965, Callen1967, Yu1998}, continuous-wave (CW) on-resonance enhancement~\cite{LeBerre1984}, and self-induced transparency~\cite{McCall1967, McCall1969}. While nonlinear index-induced focusing and defocusing of light have been studied in both cold atom systems~\cite{Labeyrie2003, Labeyrie2007, Labeyrie2011} and hot atomic vapor~\cite{Grischkowsky1970, Bjorkholm1974, Zhang2015, Zhang2017}, these effects have only been studied individually, and to our knowledge no experiment to date has demonstrated a tunable interplay between nonlinear focusing and defocusing at a fixed detuning from an atomic resonance, with an input beam that is both converging and diverging through the nonlinear medium. Self-action effects strongly modify the spatial structure of an optical beam, and may result in output mode profiles that exhibit interesting optical properties such as diffraction-free and self-healing behavior~\cite{Durnin1987, Gori1987, MacDonald1996}. The ability of optical modes to reconstruct themselves after encountering an obstruction in their path has been investigated in both the classical~\cite{Bouchal1998, Litvin2009, Chu2012, Wen2015} and quantum regimes~\cite{McLaren2014}, the latter of which has resulted in a demonstration of more robust propagation and recovery of quantum entanglement.
The first experimental investigation into self-healing Bessel-Gauss (BG) beams was carried out by Durnin and co-authors~\cite{Durnin1987}. In their work, significant (annular) aperturing of the incident optical mode produced a BG beam with reduced diffractive spreading compared to that of a Gaussian beam. More recent methods involve either spatial light modulators~\cite{Davis1993, Vaity2015} or axicon (conical) lenses~\cite{McLeod1954, Bouchal1998}. In some cases, spatial light modulators can be damaged at relatively low intensities~\cite{Beck2010, HamamatsuSLMTechInfo}, thus limiting the intensity of the generated non-Gaussian light. Further, while some progress has been made combining numerous optical elements with axicon lenses to produce tunable output self-healing modes, the cone angle of an axicon lens is inherently fixed. In contrast, hot atomic vapors provide an all-optical method of generating non-Gaussian, BG-like modes~\cite{Nascimento2006}. With this method, two recent motivating works demonstrated some tunability in the shape of the output~\cite{Zhang2015, Zhang2017}. While self-focusing and defocusing were the nonlinear effects considered in the theoretical discussions therein, a description of the spatial-mode structure of these modes, as well as their ability to self-reconstruct after encountering an obstruction, still remains to be investigated.
Here we experimentally demonstrate tunable generation of partially self-reconstructing non-Gaussian beams. To accomplish this, we focus strong, Gaussian spatial-mode, nearly resonant laser light into a nonlinear medium consisting of hot alkali atoms. A complex interplay between various self-action effects in the medium determines the overall output mode shape, and accordingly enables some tunability of the generated non-Gaussian beam by varying the input optical power or the temperature of the atomic medium. The ability to tune the mode conversion process in this manner should allow for optimization of the self-reconstruction, which we demonstrate for a non-Gaussian beam encountering an obstruction at its center. These findings suggest that tunable, non-Gaussian light generated via atomic vapors may be useful for experiments in quantum information, as well as demonstrating new approaches to optical communication and imaging.
\begin{figure}[ht!]
\centering
\includegraphics*[width=\textwidth]{FIG1-eps-converted-to.pdf}
\caption{\textbf{Schematic of the experimental setup for the generation of tunable, self-reconstructing optical modes}. The mode images (A, B, and C) were obtained with an input power of $P = 300$ mW and a cell temperature of $T = 125$ $^\circ$C. Ti:Sapph: Titanium-sapphire CW laser. BS: Beamsplitter. HWP: Half-wave plate. PBS: Polarizing beam splitter. $^{85}$Rb: Rubidium-85 vapor cell. CCD: Camera. A: Reference image before the obstruction. B: Obstructed mode directly after the obstruction. C: Reconstructed mode several meters after the obstruction.}
\label{fig:setup}
\end{figure}
\section*{Results}
\subsection*{Experimental layout}
The experimental layout is shown in Fig.~\ref{fig:setup}. Light from a CW titanium-sapphire laser is tuned close to the D1 line of rubidium and coupled into a single-mode, polarization-maintaining fiber in order to produce a Gaussian spatial mode. The beam is focused to a spot size of 220\,$\mu$m at the center of a 2.5\,cm long rubidium-85 cell, whose temperature is controlled by a resistive heating element and monitored by a thermocouple. A small portion of the light is sent to a polarization spectroscopy setup for laser locking (see Methods), and the remaining optical power is varied by the combination of a half-wave plate and polarizing beam splitter prior to the rubidium cell. At a distance of 190\,mm after the cell, the resulting output modes are imaged with a CCD camera (2048 $\times$ 1088 pixels with a 5.5\,$\mu$m pixel pitch) to investigate the spatial mode profiles as a function of input power and cell temperature. In the reconstruction experiments, the output beam is instead re-collimated with a 500\,mm focal length lens placed one focal length from the center of the rubidium cell. Additionally, due to the large size of the optical modes, a 60\,mm focal length lens is placed approximately 30\,mm in front of the camera. Because the atomic vapor acts as a nonlinear lens in the conversion process, the defocusing strength of the output mode varies depending on the choice of detuning, input power, and cell temperature. While this has the benefit of producing the desired tunability of the mode shapes, for consistency throughout all experiments the 500\,mm lens is kept fixed at one focal length from the focus of the unconverted Gaussian beam. A removable circular obstruction 3\,mm in diameter is placed in the center of the beam's path, and both the unobstructed and reconstructed images are recorded at various distances from the obstruction.
\subsection*{Tunable non-Gaussian beam generation}
A typical output spatial mode is shown at position A in Fig.~\ref{fig:setup}. These truncated BG-like modes are generated when the laser is detuned to the red side of the rubidium D1 line, over a frequency range on the order of one Doppler-broadened atomic linewidth. In the medium, the Kerr nonlinearity results in a spatially-varying refractive index~\cite{Boyd2003}, which in turn leads to a spatially-varying phase shift on the incident beam. We suspect that the output mode profiles arise from a complex interplay between this nonlinear phase shift and self-induced transparency, which gives rise to a spatially-varying soft aperture effect that is dependent on the local intensity of the input mode. Interference in the far-field then underlies the conversion from an input Gaussian beam to the non-Gaussian output mode.
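This qualitative picture can be illustrated with a thin-medium Fourier-optics toy model (our sketch, not the analysis used here; the peak nonlinear phase and grid parameters below are illustrative placeholders, not fitted values): a Gaussian field acquires a phase proportional to its local intensity, and the Fourier transform of the exit field then develops ring structure.

```python
import numpy as np

# Thin-medium sketch: a Gaussian beam picks up an intensity-dependent Kerr
# phase, and the far field (Fourier transform of the exit field) shows rings.
N, L, w0 = 512, 4e-3, 220e-6           # grid points, window size (m), waist (m)
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x)
E_in = np.exp(-(X**2 + Y**2) / w0**2)  # Gaussian field envelope

phi_max = 12.0                         # illustrative peak nonlinear phase (rad)
E_out = E_in * np.exp(1j * phi_max * np.abs(E_in)**2)

far = np.abs(np.fft.fftshift(np.fft.fft2(E_out)))**2
prof = far[N // 2, N // 2:]            # radial cut through the far-field pattern
prof /= prof.max()

# ring structure: the radial cut is non-monotonic, i.e., it has local minima
d = np.diff(prof[:60])
n_minima = int(np.sum((d[:-1] < 0) & (d[1:] > 0)))
```

Without the nonlinear phase the far-field cut decays monotonically; with it, the interference of the phase-modulated field produces the off-axis rings counted by `n_minima`.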
In Fig.~\ref{fig:slices} (a-b), we show two series of normalized intensity cross-sections obtained from single-shot images for various levels of input power and cell temperature. Smoothed two-dimensional intensity projections of the measurements are displayed below. We find that the conversion into the non-Gaussian pattern is enhanced for increasing levels of both optical power and temperature: rings with appreciable signal-to-noise ratios are generally observed at optical powers above $P \sim 200$ mW and temperatures above $T \sim 125$ $^\circ$C. These results agree with previous work~\cite{Zhang2015}, where the intensity-dependent phase shift and temperature-dependent atomic density of the nonlinear medium were considered.
We use numerical fitting to extract the radial positions and intensities (relative to the central peak) of the first ring in the output patterns obtained at various powers and temperatures. We observe that the atomic vapor acts as a nonlinear lens, as shown in Fig.~\ref{fig:slices} (c-d). With increasing optical power, the atomic vapor focuses the converted mode and reduces the size of the ring in the far field. The relative intensity of the first ring increases even more dramatically with increasing power, reaching values greater than unity for input powers above 300 mW. The maximum value of $P = 350$ mW used throughout the measurements was chosen based on the maximum allowed coupling power for a single-mode fiber. Conversely, we see that increasing the cell temperature leads to beam defocusing. The complex interplay of nonlinear effects is exemplified by the fact that increasing the cell temperature also increases the atomic density of the nonlinear medium: mode conversion is enhanced at higher temperatures, as evidenced by a larger field intensity in the first ring. In our experiments, mode conversion is optimal at a temperature of about $T \sim 125$ $^\circ$C, as shown in Fig.~\ref{fig:slices} (d). Above this temperature, the medium becomes less transparent and linear absorption limits the conversion.
\begin{figure}[t!]
\centering
\includegraphics*[width=\textwidth]{FIG2-eps-converted-to.pdf}
\caption{ \textbf{Tunability of non-Gaussian modes obtained from atomic vapor by varying power and temperature.} \textbf{(a)} Image cross-sections as a function of incident optical power, keeping the cell temperature fixed at $T = 125$ $^\circ$C. \textbf{(b)} Cross-sections as a function of cell temperature, with a constant input power of $P = 275$ mW. \textbf{(c)} Radial positions (red) and intensities of the first ring relative to the central peak (blue) versus input optical power. $T = 125$ $^\circ$C. \textbf{(d)} Radial positions (red) and relative intensities of the first ring (blue) with varying cell temperature. $P = 250$ mW. The shaded regions indicate uncertainties based on one standard deviation.}
\label{fig:slices}
\end{figure}
Given that the atomic vapor creates a nonlinear lensing effect, the focusing (or defocusing) strength of the output mode depends on the position of the vapor cell relative to the focus of the input beam. In each experiment, care was taken to position the cell at the center of the input beam's focus. This was found to reliably give results such as those shown in Figs.~\ref{fig:slices} (a-b). For the measurements in Figs.~\ref{fig:slices} (c-d), however, we found that the position of the cell could be adjusted to optimize the intensity of the ring. In this case, the position was set to optimize the ring intensity, which reached a maximum at an input power of about $P \sim 325$ mW. Accordingly, the results in Figs.~\ref{fig:slices} (c-d) differ from those in Figs.~\ref{fig:slices} (a-b). Nonetheless, it was found to be qualitatively true that increasing the power and temperature would result in nonlinear focusing and defocusing of the mode, respectively.
Lastly, as expected, we have found that the mode shapes can be tuned by varying the laser frequency. These results are shown in the supplementary information. For this reason, the laser frequency was locked to a fixed detuning from the resonance (see Methods), to prevent fluctuations and/or drift in the laser frequency from altering the shape of the optical modes.
\begin{figure}[ht!]
\centering
\includegraphics*[width=\textwidth]{FIG3-eps-converted-to.pdf}
\caption{\textbf{Self-induced transparency in atomic vapor}. \textbf{(a)} Measured optical transmission through rubidium-85 vapor as a function of input optical power with a fixed cell temperature of $T = 125$ $^\circ$C and with the laser detuned 275 MHz to the red side of the D1 line. The inset shows the corresponding optical transmission when varying cell temperature and keeping the input power fixed at $P = 300$ mW. \textbf{(b)} Optical transmission as a function of input optical power for $T = 125$ $^\circ$C and a laser detuning of $-80$ MHz. The shaded regions indicate uncertainties based on standard deviations.}
\label{fig:transparency}
\end{figure}
These observations most likely stem from a combination of numerous competing self-action effects. The collection of these effects, however, can be thought of as a complex aperture that is tuned by the interactions between light and matter. Among these interactions, we find that one of the dominant effects is a self-induced transparency of the medium~\cite{McCall1967, McCall1969, McClelland1986}, which effectively behaves as a soft aperture for the input beam~\cite{LeBerre1984}. In Fig.~\ref{fig:transparency} (a), we show the measured transmission (i.e., the ratio of the output optical power to the input power) through the atomic vapor while the cell temperature is kept fixed at $T = 125$ $^\circ$C. At low ($\sim$ mW) power levels, optical losses resulting from atomic absorption strongly attenuate the beam. As the power is increased, self-induced transparency sets in and the transmission increases rapidly, reaching 50$\%$ at an input power of $P \sim 50$ mW. For higher powers, the rate of increase slows and the transmission approaches unity. A corresponding temperature measurement is shown in the inset of Fig.~\ref{fig:transparency} (a), for an input power of $P = 300$\,mW. Here, the transmission falls off predictably at high temperatures. We confirmed the transparency effect for frequency detunings of $-275$ MHz (Fig.~\ref{fig:transparency} (a)) and $-80$ MHz (Fig.~\ref{fig:transparency} (b)), the predominant frequencies used throughout this work. The effect is typified by the fact that, at a given temperature, the transmission can be boosted simply by increasing the intensity of the optical beam.
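The power dependence in Fig.~\ref{fig:transparency} (a) is qualitatively what a saturable absorber gives. A minimal Beer's-law sketch (ours, not the authors' model; the optical depth and saturation power are illustrative placeholders, not values fitted to the data) integrates $dP/dz = -\alpha_0 P/(1 + P/P_{\mathrm{sat}})$ through a cell of unit length:

```python
import numpy as np

def transmission(P_in, od=6.0, P_sat=10.0, steps=2000):
    """Transmission of a saturable absorber with small-signal optical depth `od`,
    integrating dP/dz = -od * P / (1 + P / P_sat) over a cell of unit length."""
    P, dz = P_in, 1.0 / steps
    for _ in range(steps):
        P -= od * P / (1.0 + P / P_sat) * dz
    return P / P_in

# transmission rises monotonically with input power: strong absorption at low
# power, much higher transmission once the medium saturates
T = [transmission(p) for p in (1.0, 50.0, 300.0)]   # powers in mW (illustrative)
```

The same model reproduces the temperature trend if the optical depth `od` is made to grow with the atomic density, i.e., with cell temperature.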
\subsection*{Reconstruction of non-Gaussian beams following an obstruction}
The ability of an optical beam to regenerate itself following a disturbance is, in itself, a particularly interesting property given the diffractive nature of light. Here we show that our non-Gaussian pattern can reconstruct itself after encountering a circular obstruction placed at the center of the optical mode. In Fig.~\ref{fig:reconstruction} (a), we show a series of normalized images which zoom in on the obstruction area and illustrate the reconstruction of the central portion of the mode as it travels along the propagation direction. Images of the unobstructed and obstructed modes are shown in the top and bottom of Fig.~\ref{fig:reconstruction} (a), respectively. At $z = 0$ m, the (removable) circular block attenuates approximately 20$\%$ of the light, and the remaining light increasingly regenerates itself for $z > 0$ m. (A series of full images is shown in the Supplementary information.) We quantify the reconstruction by calculating the two-dimensional correlation between the images of the (unobstructed) reference at $z = 0$ m and the propagating obstructed beam (see Supplementary information). The result is shown in Fig.~\ref{fig:reconstruction} (b), where we have carried out two calculations: one taken over the entire area of the image, and a second using only the obstructed area. Within the obstruction area, mode regeneration increases with $z$, reaching a maximum correlation of $\sim$\,94\% at $z \sim 7.5$ m. If the calculation is taken over the entire image, maximum correlation is observed at $z \sim 8$ m, but as a whole, the correlation decreases with $z$. Given that the modes resemble truncated (after the first- and second-order rings) BG modes, reconstruction, albeit imperfect, after a finite propagation distance is expected.
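A standard choice for such an image-similarity metric is the zero-mean two-dimensional (Pearson) correlation coefficient; a minimal version (our sketch of one plausible implementation, assuming equally sized frames) is:

```python
import numpy as np

def corr2(a, b):
    """Zero-mean 2D (Pearson) correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Demo on a synthetic image: identical images correlate perfectly, and an
# intensity-inverted copy anti-correlates.
ref = np.random.default_rng(1).random((64, 64))
c_same = corr2(ref, ref)        # ≈ 1.0
c_inv = corr2(ref, 1.0 - ref)   # ≈ -1.0
```

Restricting both frames to the obstruction region before calling `corr2` gives the obstructed-area quantity, while passing the full frames gives the whole-image quantity.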
\begin{figure}[ht!]
\centering
\includegraphics*[width=\textwidth]{FIG4-eps-converted-to.pdf}
\caption{\textbf{Self-reconstruction of a non-Gaussian mode generated with atomic vapor.} \textbf{(a)} Images of unobstructed (top) and obstructed (bottom) modes at various positions along the propagation coordinate. The size of each image is 1.5 mm $\times$ 1.5 mm, and the images correspond to the region over which the obstructed area correlation is calculated. \textbf{(b)} Calculated two-dimensional correlation between the (unobstructed) reference at $z = 0$ m and the obstructed modes at various positions for the full image (blue) and the obstruction area only (red). In the latter case, a parabolic fit reveals an optimal correlation at $z \sim 7.5$ m. The shaded regions indicate uncertainties based on one standard deviation. \textbf{(c)} Full width half maxima of central peaks normalized to their widths at $z = 0$ m, for a BG-like mode with a focal position $z_{\textrm{focus}} = 5.5$ m and a Gaussian mode with $z_{\textrm{focus}} = 1.1$ m. In both cases, approximately 20\% of the light is blocked. The inset shows cross-sections of the reference and obstructed BG-like modes at the optimal reconstruction distance of $z = 7.5$ m.}
\label{fig:reconstruction}
\end{figure}
Next, we compare the reconstruction of our BG-like mode with that of a Gaussian mode. Studying the propagation of the unobstructed modes in Fig.~\ref{fig:reconstruction} (a), one sees that the center of the mode reaches a focus around $z_{\textrm{focus}} \sim 5.5$ m. Therefore, we consider the reconstruction of a focused Gaussian mode which has also been attenuated by approximately 20\% via a central obstruction. We employ Gaussian fitting to extract the full width half maxima (FWHM) of the central peaks from the reconstructing modes in both cases. In Fig.~\ref{fig:reconstruction} (c), we show the FWHM from single shot images normalized to their widths at $z = 0$ m, as a function of the normalized distance $z / z_{\textrm{focus}}$. In both cases, mode regeneration is associated with an increase in the FWHM, which slows after the focus and eventually increases again due to diffraction. Importantly, the normalized FWHM of the non-Gaussian mode reaches unity prior to the Gaussian, by more than a factor of two in normalized distance. The point of maximum image correlation for the non-Gaussian case ($z \sim 7.5$ m) corresponds to the point $z / z_{\textrm{focus}} = 1.4$ in Fig.~\ref{fig:reconstruction} (c). At this point, the normalized FWHM is $\sim$ 0.8 for the non-Gaussian mode (see the inset of Fig.~\ref{fig:reconstruction} (c)), whereas it is only $\sim$ 0.7 for the Gaussian mode.
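While the paper extracts the FWHM by Gaussian fitting, the quantity itself can be illustrated with a simpler half-maximum-crossing estimate (a stand-in sketch; the function name and sampled profile are ours):

```python
import math

def fwhm(xs, ys):
    """Estimate the full width at half maximum of a single-peaked profile
    by linearly interpolating the two half-maximum crossings."""
    half = max(ys) / 2.0
    # rising crossing (first sample at or above half maximum)
    i = next(k for k in range(len(ys)) if ys[k] >= half)
    x_left = xs[i - 1] + (xs[i] - xs[i - 1]) * (half - ys[i - 1]) / (ys[i] - ys[i - 1])
    # falling crossing (last sample at or above half maximum)
    j = next(k for k in range(len(ys) - 1, -1, -1) if ys[k] >= half)
    x_right = xs[j] + (xs[j + 1] - xs[j]) * (ys[j] - half) / (ys[j] - ys[j + 1])
    return x_right - x_left

# Sanity check: for a Gaussian, FWHM = 2*sqrt(2 ln 2)*sigma ~ 2.3548*sigma
sigma = 0.5
xs = [i * 0.001 - 3.0 for i in range(6001)]
ys = [math.exp(-x ** 2 / (2 * sigma ** 2)) for x in xs]
print(abs(fwhm(xs, ys) - 2 * math.sqrt(2 * math.log(2)) * sigma) < 1e-3)  # -> True
```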
\section*{Discussion}
In this manuscript we have experimentally demonstrated that atomic vapor may be used to generate tunable self-reconstructing optical beams. While the complete dynamics of near-resonant light interacting with multi-level atomic systems is reasonably complex, these results suggest that nonlinear phase shifts and self-induced transparency are dominant effects, producing an effective soft aperturing of the incident optical mode, which leads to the conversion into a truncated BG mode. Although this truncation is expected to worsen the mode's self-healing ability when compared to an ideal BG mode, we nonetheless observe image correlations of up to 94\% in the central portion of the recovered mode profile and up to 61\% across the entire image. The flexible approach taken here is advantageously robust to high levels of optical power, and thus allows one to control the output mode profile by varying the optical power, as well as the temperature of the atomic vapor and the laser frequency. In this manner, the reconstruction of the mode after interacting with an obstruction can be optimized, which may see applications in future experiments involving the recovery of information in the spatial profile of the modes. We expect that such optimization could further improve the correlations reported here.
\section*{Methods}
\label{sec:methods}
Locking to the atomic resonances of rubidium is achieved with a polarization spectroscopy setup as described in Refs.~\cite{Wieman1976, Pearman2002, Ratnapala2004}. Images are recorded using a beam profiler (Edmund Optics 89-308) with the laser locked at a fixed detuning $\Delta / 2 \pi = -80$ MHz from the red side of the D1 line of rubidium-85. In Fig.~\ref{fig:transparency}, data is also shown for a second detuning of -275 MHz. In each measurement, a series of ten images acquired over a span of approximately 1 s are recorded, and the data and error bars in Figs.~\ref{fig:slices} and ~\ref{fig:reconstruction} represent mean values and standard deviations, respectively, based on these ten images. Every image is acquired using an integration time of 10 ms, and all of the actual images shown in this manuscript are single shots.
In the self-healing experiments, the obstruction is made by coating the surface of a circular object ($\sim$ 3 mm in diameter) with acrylic paint, and then contacting the object with a 170 $\mu$m thick, 22 mm $\times$ 22 mm precision microscope cover slip (Thorlabs CG15CH). The obstruction is placed on a flip-mount so that both the reconstructing and the unobstructed propagating modes may be imaged. In comparing the reconstruction with that of a Gaussian beam, the obstruction is made using the same approach, but with the diameter chosen so that the object blocks $\sim$ 20\% of the light, as in the non-Gaussian mode case. The FWHM in Fig.~\ref{fig:reconstruction} are found via Gaussian fitting, and the focal positions $z_{\textrm{focus}}$ are taken to be the positions where the unobstructed FWHM of the central peaks are minimal.
\section{Introduction}\label{sec:intro}
With the explosion of the Internet of Things (IoT) world, many devices already on the market have undergone a transformation, becoming more functional and cost-effective.
One of these is undoubtedly the IP camera, a video surveillance camera that uses the network to transmit audio and video signals. Unfortunately, IP cameras, like many other IoT devices, are the target of countless malicious cyber attacks that often aim to violate the confidentiality of transmitted data.
Today, IP cameras are widely used to build CCTV systems, as confirmed by a 2019 estimate of around 770 million IP cameras deployed in this field~\cite{survcamera}.
In addition, IP cameras are increasingly being used in people's homes. Thanks to their low cost, they are used, for example, to monitor a house while its occupants are away, to keep an eye on pets, and as baby monitors.
In view of the main uses, it is clear that the nature of the data transmitted by these devices is decidedly intimate and private.
Therefore, it would seem reasonable to assume that a series of countermeasures are implemented to protect the confidentiality of the transmissions.
However, there are no general guidelines for the realisation of such products and often the precautions taken are very weak or even completely absent.
The security of the IoT world has been at the centre of the global debate for a few years now. In fact, in October 2016, the Mirai malware was able to build a botnet, compromising thousands of devices, including many IP cameras~\cite{mirai}. In 2017, the list of known vulnerabilities within the CCTV Calculator~\cite{cctvcalc} was published, involving IP cameras from many vendors, including Samsung, D-link, Panasonic, Cisco, Hikvision, FOSCAM, Y-Cam, TP-Link, AirLive, Sony, Arecont Vision, Linksys, Canon and many more.
The topic of security and IP cameras is an open one, so in this paper we decided to carry out a vulnerability assessment and penetration testing process on one of today's IP cameras in particular, the TP-Link Tapo C200~\cite{c200}.
\subsection{Contributions}
This article explores whether and how IP cameras can be exploited today using freeware, i.e. non-commercial tools. We address this question in a scenario where there is an insider attacker who connects to the same network as the IP camera with her attacking laptop via Ethernet or Wi-Fi connectivity.
The results are that, relying on simple tools such as nmap, Ettercap, Wireshark and Python programming, the attacker can seriously compromise the service and undermine user privacy.
More precisely, we describe all our results obtained through the reverse engineering process performed on the Tapo camera; thanks to this process we were able to perform our vulnerability assessment.
The vulnerability assessment exercise allowed us to gather information and evaluate a ceremony covering everything related to the proprietary services and third-party video streams. The work continued with penetration testing, which aims to exploit the vulnerabilities identified during the assessment of the Tapo camera. Specifically, we found that the Tapo camera suffers from: a Denial of Service, which renders the camera unavailable; and video eavesdropping, which affects user privacy. In addition, we demonstrate a third attack, called \lq\lq Motion Oracle", which allows an attacker on the same network as the Tapo C200 to detect traffic related to the camera's motion detection service and use it to obtain information about the victim by analysing the frequency of the generated traffic.
Finally, we prototype effective countermeasures, at least against video eavesdropping, by taking advantage of a cheap Raspberry Pi~\cite{raspb} device. This can implement encryption so that packets are only transferred in encrypted form over the network and therefore cannot be understood by a man in the middle (MITM).
The Raspberry Pi then transmits the encrypted packets from the IP Camera, ensuring user security, in a completely transparent manner.
\subsection{Testbed}
Our testbed consists of some useful devices for VAPT experiments.
The Tapo C200 is an entry-level IP camera designed for home use. Released in 2020 with a very low cost (around 30€), the Tapo C200 has several technical features, thanks to which it boasts several functionalities, such as: compatibility with the main smart home systems (e.g. Alexa~\cite{alexa} and Google Assistant~\cite{GoogleAssistant}); night vision, resolution up to Full HD; motion detection; acoustic alarm; data storage; voice control; use with third-party software; webcam mode.
To use the Tapo C200, a TP-Link account is required which allows access to the various cloud services offered.
Access to the TP-Link account is via the Tapo app, which is available free of charge on Google Play~\cite{tapoGPlay} and APP store~\cite{tapoApple} for devices running Android 4.4+ and IOS 9+ respectively. The application has a simple and intuitive user interface, from which it is possible to use and manage the devices of the Tapo family, including the Tapo C200. The range of services offered for the Tapo C200 includes: Remote access and control of the Tapo C200; Sharing the Tapo C200 with other accounts; synchronisation of settings; integrating the Tapo C200 with smart home systems; and receiving motion notifications.
Figure~\ref{fig:testbed} shows the testbed on which the experiments were conducted and how its elements are connected to each other; note that all connections are made over a Wi-Fi network.
In a nutshell, the testbed consists of:
\begin{enumerate}[(a)]
\item A Wi-Fi switch to which the various devices were connected
\item The Tapo C200 IP Camera
\item A smartphone on which SSL Packet Capture and the Tapo application were installed
\item A Linux machine on which Ettercap and Wireshark tools have been installed
\item A Linux machine on which the third-party players iSpy and VLC have been installed
\end{enumerate}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.35]{Images/origninal_testbed.png}
\caption{Original testbed}
\label{fig:testbed}
\end{figure}
By monitoring the environment with the various instruments, the Tapo C200 was used as a normal user, with the aim of extracting useful information for operational analysis.
\subsection{Related Work}\label{sec:RW}
As early as 2015, it was shown that IP cameras can encompass multiple research areas, including multimedia security, network security and cloud security, and end-user privacy.
In fact, Tekeoglu and Tosun looked into the security of cloud-based wireless IP cameras~\cite{cloudbasedIPcamerajpeg}. They studied the traffic generated by a wireless IP camera that is easy to set up for the average home user, and proved that an attacker able to sniff the IP camera's network traffic can reconstruct the JPEG images from the data stream. By their own account, their system has some limitations: for example, their script reconstructed only 253 JPEG images from about 20 hours of video.
A few years later, in 2017, Liranzo and Hayajneh presented an analysis of the main privacy and security issues affecting thousands of IP camera consumers globally, and proposed a series of recommendations to help protect consumers' IoT devices in the intimacy of their homes~\cite{2017issueIPcamera}.
It is clear that modern IP cameras collect a large amount of data, which is then transferred to cloud systems.
As a result, IP cameras process data to such an extent that it becomes important to assess the security of these cameras also from the point of view of privacy for users.
\subsection{Paper Structure}
This manuscript continues to describe all background information useful for understanding the content of the paper (\S\ref{sec:background}). The core of the paper begins with the reverse engineering process (\S\ref{sec:reverse}) which continues with the vulnerability assessment (\S\ref{sec:VA}) and the penetration testing (\S\ref{sec:PT}) performed on the Tapo C200 IP camera. Finally, the paper describes a countermeasure (\S\ref{sec:countermeasures})
and concludes with some broader evaluations of the results (\S\ref{sec:conclusions}).
\section{Background}~\label{sec:background}
In this section we provide all the background information necessary for understanding all the steps performed in the VAPT work on the Tapo C200.
The IP camera also offers the functionality to integrate the device with third-party infrastructures and systems. Such systems require the use of dedicated protocols such as the Real Time Streaming Protocol (RTSP)~\cite{rtsp}.
RTSP is a network protocol used by multimedia streaming systems; its purpose is to orchestrate the exchange of media, such as audio and video, across the network. RTSP complements the RTP~\cite{rtp} and RTCP~\cite{rtcp} protocols by adding the directives necessary for streaming control, such as:
\begin{itemize}
\item Describe: is used by the client to obtain information about the desired resource
\item Setup: specifies how a single media stream is to be transported
\item Play/Pause: starts/stops multimedia playback
\item Record: used to store a stream
\end{itemize}
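As a purely illustrative example, a minimal DESCRIBE exchange with a camera's stream endpoint follows the pattern below (the address, CSeq and body length are placeholders, not values captured from the Tapo C200):

```
DESCRIBE rtsp://192.168.0.10:554/stream/1 RTSP/1.0
CSeq: 2
Accept: application/sdp

RTSP/1.0 200 OK
CSeq: 2
Content-Type: application/sdp
Content-Length: ...
```

The SDP body returned by DESCRIBE is what tells the client which media streams are available before it issues SETUP and PLAY.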
ONVIF (Open Network Video Interface Forum) is an organisation whose aim is to promote compatibility between video surveillance equipment so that systems made by different companies are interoperable.
ONVIF provides an interface for communication with the different equipment through the use of various protocols. ONVIF provides different configuration profiles to make the best use of the different technical features.
Profiles G, Q, S and T are dedicated to video systems; however, the Tapo C200 is only compatible with Profile S~\cite{comp}.
Profile S is one of the most basic profiles and is designed for IP-based video systems. The Profile S features compatible with the Tapo C200 are user authentication, NTP support and H264~\cite{onvifs} audio and video streaming using RTSP.
In order to take advantage of the multimedia streaming services, it was necessary to use the open source, multi-platform media player VLC~\cite{vlc} and the iSpy DVR~\cite{ispy}.
In order to scan the services exposed by the device, we used the software Nmap~\cite{nmap}, which is able to perform port scanning operations, and the software Nessus~\cite{nessus}, which allows one to perform a risk analysis on network devices, indicating the criticality level of vulnerabilities according to the CVSS standard.
CVSS is an industry standard designed to assess the severity of computer system security vulnerabilities~\cite{cvss}. The assessment consists of a score on a scale from 0 to 10, calculated on the basis of a formula~\cite{scoreCvssdoc} that takes into account Attack Vector (AV), Attack Complexity (AC), Privileges Required (PR), User Interaction (UI), Scope (S), Confidentiality (C), Integrity (I) and Availability (A). The higher the score obtained based on the values assigned to these metrics, the more serious the vulnerability.
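The base-score arithmetic of CVSS v3.1 (scope unchanged) can be sketched as follows. The example vector at the end, with adjacent attack vector, low complexity, no privileges or interaction, and high availability impact only, is our own illustrative choice; it happens to evaluate to 6.5 (\emph{Medium}):

```python
import math

# CVSS v3.1 metric weights (scope unchanged)
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for an unchanged-scope vulnerability."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # CVSS "Roundup": smallest one-decimal value >= the raw score
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Example vector AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
print(base_score("A", "L", "N", "N", "N", "N", "H"))  # -> 6.5
```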
Finally, the Tapo C200 was analysed using the tools Ettercap~\cite{ettercap} to intercept traffic to and from the device, SSL packet capture~\cite{sslpacket} to intercept and read traffic to and from the Tapo application and Wireshark~\cite{wireshark} to understand network traffic and use the H264 extractor extension to convert network traffic into playable video.
\section{Reverse engineering}~\label{sec:reverse}
Our work starts from the reverse engineering process, which allowed us to deduce the following: when the application and the camera are on different networks, a classic client-server architecture is used, in which the Tapo server distributes information such as the device configuration.
Conversely, when both are on the same network, no external servers are used to control the camera and access the video stream.
It is important to note that the flow for control and editing differs substantially from that for video streaming.
The diagram in Figure \ref{ctrltapo} describes the sequence of operations that allow the user to access the video stream through the Tapo application. After choosing one of the initialised devices, the Tapo application logs into the Tapo C200, obtaining the token \emph{stok} needed to perform the control and modification operations.
The \emph{stok} is a token generated during authentication by the Tapo C200 which, after authenticating the user with a username and password, assigns this token to the Tapo application. The purpose of the token is to create a session without having to re-authenticate the application every time the camera settings are changed. The various requests generated by the application carry the \emph{stok} obtained in the last authentication phase. In this way the Tapo C200 only accepts requests from authenticated users.
Following the user's request to access the video stream, the Tapo application and the Tapo C200 coordinate; in particular, the Tapo C200 sends the application a \textit{nonce} needed for the construction of the symmetric key.
At this point both parties calculate the initialisation vector and AES key in order to encrypt the video stream using AES in CBC mode.
Once the key has been calculated, the Tapo application authenticates itself to the Tapo C200 by providing a \lq\lq response" tag. Finally, the Tapo C200, after verifying the validity of the response, starts the multimedia streaming session by encrypting it with the agreed key.
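The exact derivation used by the camera is proprietary; the following purely illustrative sketch conveys only the general idea that both endpoints can derive the same AES key and IV from the shared nonce and account secret. The hash construction, labels and inputs are our assumptions, not Tapo's actual scheme:

```python
import hashlib

def derive_key_iv(nonce: bytes, secret: bytes):
    """Illustrative only: both endpoints hash the shared nonce together with
    the account secret to obtain a 16-byte AES-128 key and a 16-byte CBC IV.
    This is NOT the real Tapo derivation, just the concept."""
    key = hashlib.sha256(b"key" + nonce + secret).digest()[:16]
    iv = hashlib.sha256(b"iv" + nonce + secret).digest()[:16]
    return key, iv

# Camera and app, holding the same nonce and secret, agree on key and IV:
camera_side = derive_key_iv(b"nonce123", b"secret")
app_side = derive_key_iv(b"nonce123", b"secret")
print(camera_side == app_side)  # -> True
```

Because the nonce changes per session, an eavesdropper who misses the credential material cannot reproduce the key even after observing the nonce exchange.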
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{Images/utilizzo_strandard_tapo.png}
\begin{flushleft}
\fcolorbox{black}{black}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication in plaintext.\newline
\fcolorbox{black}{blue}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication over TLS channel.\newline
\fcolorbox{black}{orange}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Stream video Encrypted with AES key
\end{flushleft}
\caption{Sequence diagram for using the IP Camera with the proprietary Tapo application}
\label{ctrltapo}
\end{figure}
From the diagram in Figure~\ref{diagterz} we can notice how the integration of the Tapo C200 with third-party software is handled. Using the Tapo application, we first need to access the device in question and change its settings so that we can create a new user for third-party software.
After that, we can use the Tapo C200 with the previously mentioned software iSpy and VLC, which log in and start the unencrypted media streaming session with the camera.
In order to be able to use third-party applications, it was necessary to correctly configure the already initialised Tapo C200 by creating a special user on the camera using the Tapo application. Therefore, since this was a change to the Tapo C200 settings, the request generated by the Tapo application was accompanied by the \emph{stok}.
To access the resource, the Linux machine with VLC and iSpy installed was used, i.e. the machine (e) of the testbed. Using Ettercap and Wireshark installed on machine (d) instead, a MiTM attack was carried out in order to analyse the traffic.
From machine (e) it was possible to access the video stream using both players.
\begin{itemize}
\item On VLC, select the \lq\lq network resources" section of the player and enter:\\
\url{rtsp://username:password@<tapoc200address>/stream/1}
\item On iSpy, configure the ONVIF S profile to URI:\\ \url{http://username:password@<tapoc200address>/onvif/device\_service}
\end{itemize}
Since ONVIF's Profile S does not add any functionality to the video stream, no differences emerged from the analysis of the traffic generated for the video stream of the two players. In particular, the packets analysed showed that, after accessing through the specific URI for ONVIF devices, iSpy uses RTSP in exactly the way used by VLC.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.32]{Images/Utilizzo_standard.png}
\begin{flushleft}
\fcolorbox{black}{black}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication in plaintext.\newline
\fcolorbox{black}{blue}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication over TLS channel
\end{flushleft}
\caption{Sequence diagram for using the IP Camera with third party software}
\label{diagterz}
\end{figure}
\section{Vulnerability Assessment}~\label{sec:VA}
The vulnerability assessment activity carried out is divided into two parts, one concerning proprietary services and the other concerning third party video streams.
\subsection{Proprietary services analysis}
When the Tapo application and the Tapo C200 are not on the same network, security, in terms of the confidentiality of the video stream, is entrusted to the channel itself over which the exchange takes place. Using SSL/TLS on port 443, all data exchanged between the Tapo C200, the TP-Link server and the Tapo application is properly encrypted.
In addition, the Tapo application and the Tapo C200 use SSL pinning~\cite{pinning}, meaning that if information needs to be shared with the TP-Link server, no SSL/TLS session is established towards any entity other than a TP-Link server whose SSL/TLS certificate they already have by default.
However, despite the use of SSL/TLS, it was found that the traffic is easily detected, particularly with regard to movement notifications.
The motion detection function follows the steps:
\begin{enumerate}
\item The Tapo C200 detects movement and sends notification to the TP-Link server;
\item The server receives the notification and sends an alert to all devices connected to the account associated with the camera;
\item The application receives the message and generates a notification for the user.
\end{enumerate}
Our study found that the size of the generated messages was always the same: 523 bytes. By using the Tapo C200 as an oracle, the attacker may be able to discern movement notifications based on the size of the messages, without interfering with the cryptographic scheme, and deduce, for example, when the victim is at home and when no one is. In addition, by filtering and blocking only those types of messages, the attacker could undermine the availability of the notification service, deceiving the victim into believing that there is no movement in the area covered by the camera.
\subsection{Third party video stream analysis}
The use of third-party players for multimedia streaming presents a high level of criticality, both as regards the ONVIF service on port 2020 and RTSP on port 554.
Since ONVIF Profile S is limited to providing an interface for the use of RTSP, neither of the approaches seen for video streaming has any mechanism to guarantee the confidentiality of the multimedia stream. In fact, after configuring the Tapo C200 to use these systems, whenever a request is made, the video stream is sent in clear text, merely H264-encoded.
This is critical especially with regard to the confidentiality of the data, as it could allow the attacker to intercept the stream and decode it, thus obtaining the original, reproducible video stream.
\section{Penetration Testing}~\label{sec:PT}
\subsection{Camera DoS} \label{sec:camerados}
A Denial of Service (DoS) is a malfunction of a device due to a computer attack. Generally, such attacks aim to undermine the availability of a service by overloading the system and making an arbitrarily large number of requests, sometimes in a distributed manner (Distributed Denial of Services or DDoS). In the case of the Tapo C200, a DoS attack was relatively simple. In fact, an intensive scan by Nessus caused the device to crash, as can be seen in Figure~\ref{fig:crashed_camera}, causing it to stop and then reboot. This demonstrates how an attacker on the same network as the device can seriously compromise the availability of the service.
Figure~\ref{fig:DoSScore} shows the CVSS score of this vulnerability,
which has been classified as Medium, with a value of 6.5.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{Images/crashed_camera.jpg}
\caption{Crash of the Tapo C200 camera}
\label{fig:crashed_camera}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Images/Dosscore.png}
\caption{DoS score}
\label{fig:DoSScore}
\end{figure}
\subsection{Video Eavesdropping} \label{sec:privacyattack}
To demonstrate how an attacker could obtain the multimedia stream transmitted using third-party software, an experiment was conducted using the machine on which the multimedia players were installed, that is the machine (e), and starting a multimedia session with the Tapo C200. At the same time, machine (d) was used again in MiTM attack mode to intercept the traffic exchanged between the player and the Tapo C200.
By analysing the intercepted traffic with Wireshark, it was possible to use the H264 extractor tool (Figure~\ref{extractor}), which correctly interpreted and reconstructed the entire intercepted multimedia streaming session. The output of the tool is a playable video of the intercepted session.
Finally, Figure~\ref{fig:leakscore} shows the CVSS score of this vulnerability, which was classified as Medium, with a value of 6.5.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{Images/pacchetti264.png}
\caption{H264 packets}
\label{extractor}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Images/leakscore.png}
\caption{Video eavesdropping score}
\label{fig:leakscore}
\end{figure}
\subsection{Motion Oracle} \label{sec:motionoracle}
To demonstrate how an attacker could use the Tapo C200 as an oracle, a special experiment was conducted.
Throughout the night, the Tapo C200 was used to record a public street. Once the traffic was recorded, outgoing packets from the Tapo C200 were filtered based on:
\begin{itemize}
\item the protocol used, by applying the SSL/TLS display filter;
\item the frame length, by applying the filter \textit{frame.len==523}.
\end{itemize}
Once the packets were obtained, we constructed the graph shown in Figure~\ref{graftraf}, which reports the number of 523-byte SSL/TLS packets captured in ten-minute intervals. It can be seen that, starting at 23:00, the number of packets decreases until about 3:00. From 3:30 onwards, the number of packets increases, reaching a maximum around 7:00. The trend of the curve is consistent with expectations: the flow of cars decreases during the night and increases in the morning.
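The per-interval counts plotted in Figure~\ref{graftraf} amount to binning the timestamps of the matching packets into ten-minute windows. A minimal sketch follows; the packet records here are hypothetical \texttt{(timestamp, length)} pairs, e.g. as exported from a Wireshark capture:

```python
from collections import Counter

BIN = 600  # ten-minute bins, in seconds

def motion_histogram(packets, size=523):
    """Count candidate motion-notification packets per ten-minute interval.
    `packets` is an iterable of (timestamp_seconds, frame_length) pairs."""
    counts = Counter()
    for ts, length in packets:
        if length == size:  # keep only 523-byte notification candidates
            counts[int(ts // BIN) * BIN] += 1
    return dict(counts)

# Three matching packets: two in the first bin, one in the second;
# the 1500-byte frame is ignored.
packets = [(10.0, 523), (450.2, 523), (700.5, 523), (30.1, 1500)]
print(motion_histogram(packets))  # -> {0: 2, 600: 1}
```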
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Images/grafico_movimenti.jpg}
\caption{Graph of transit packet traffic on 7-8 June 2021}
\label{graftraf}
\end{figure}
By comparing the graph with the video recorded on the Tapo C200's SD card, it was found that there was a complete match between captured packets and recorded movements. Observing the recorded video, it was found that all the actual movements are shown in the graph. For example, a frame from about 00:05 is shown in Figure~\ref{immovi}, where a car can be seen in transit.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Images/movimento_00_05.png}
\caption{Movement recorded at 00:05}
\label{immovi}
\end{figure}
This demonstrates how an attacker on the same network as the Tapo C200 can precisely detect traffic related to the Tapo C200's motion detection service, traffic that could be used to deduce information about the victim by monitoring the frequency of these packets.
Furthermore, this experiment demonstrates how the attacker can trivially filter out such packets and deceive the victim.
Figure~\ref{scoremo} shows the CVSS score of this vulnerability, which was classified as \emph{Medium}, with a value of 5.4.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Images/motionscore.png}
\caption{Motion Oracle score}
\label{scoremo}
\end{figure}
\section{Countermeasures}\label{sec:countermeasures}
Our work proposes a set of countermeasures developed to mitigate the video eavesdropping vulnerability. The Raspberry Pi 4 Model B was used for this purpose.
The idea behind the proposed solution to guarantee the confidentiality of the video stream in the third-party scenario is to use the Raspberry Pi 4 Model B as an access point for the Tapo C200, modifying the traffic in transit by applying encryption.
\subsection{Raspberry Pi into the testbed}
The Raspberry Pi 4 Model B has a network card with a wired and a wireless interface. Thanks to this feature it was possible to connect the Raspberry Pi 4 Model B to the testbed modem while exposing a Wi-Fi network to which the Tapo C200 could be connected, as shown in Figure \ref{testbed2}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.35]{Images/pi_third_testbed.png}
\caption{Raspberry Pi in the network infrastructure}
\label{testbed2}
\end{figure}
In order to build the network infrastructure described above, the Raspberry Pi 4 Model B had to be configured to act as an access point for the Tapo C200 on behalf of the modem~\cite{settingrasp}. Specifically, it was necessary to install the hostapd~\cite{hostpad} tool.
This module, available for Linux machines, allows normal network interfaces to be transformed into Access Points. In addition, it was necessary to modify the dhcpd configuration of the Raspberry Pi 4 Model B.
\subsection{Traffic encryption}
To encrypt network traffic, we configured IPtables on the Raspberry Pi 4 Model B; the policy adopted is shown in Listing~\ref{code:IPtablesEnc}:
\begin{lstlisting}[caption={Configuration of IPtables for encryption}, label={code:IPtablesEnc}]
iptables -A FORWARD -m physdev \
    --physdev-is-in wlan0 -p udp \
    -s TAPOC200_ADDRESS_IPV4 \
    -j NFQUEUE --queue-num 1
iptables -A FORWARD -m physdev \
    --physdev-is-in wlan0 -f -j DROP
\end{lstlisting}
In this way it was possible to intercept packets in transit (FORWARD) from the wireless interface (-m physdev --physdev-is-in wlan0), transported over UDP (-p udp), coming from the Tapo C200 (-s TAPOC200\_ADDRESS\_IPV4), and send them to queue 1.
Thanks to the second rule, any fragments of the packets in transit are dropped, so as to avoid forwarding partial packets.
At this point, we built a script to encrypt the packets queued by the firewall. The script retrieves each packet from the queue, extracts its payload, encrypts it and sends it on to the recipient's address; the script code is shown in Listing~\ref{code:encryptscript}.
\begin{lstlisting}[caption={Encryption script}, label={code:encryptscript}]
def encrypt(packet):
    cipher_suite = Fernet(key)
    encoded_text = cipher_suite.encrypt(
        packet.get_payload()[28:])
    pkt = IP(packet.get_payload())
    UDP_IP = pkt[IP].dst
    UDP_PORT = pkt[UDP].dport
    MESSAGE = encoded_text
    sock = socket.socket(socket.AF_INET,
                         socket.SOCK_DGRAM)
    sock.sendto(MESSAGE, (UDP_IP, UDP_PORT))
    packet.drop()
\end{lstlisting}
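The slice \texttt{get\_payload()[28:]} strips the 20-byte IPv4 header plus the 8-byte UDP header, so that only the media payload is encrypted. This offset can be sanity-checked with a minimal standard-library sketch; the packet below is our own illustration, assuming IPv4 without options:

```python
import struct

UDP_HEADER_LEN = 8  # fixed UDP header size

def udp_payload(raw):
    # The IHL field (low nibble of the first byte) gives the
    # IPv4 header length in 32-bit words
    ihl = (raw[0] & 0x0F) * 4
    return raw[ihl + UDP_HEADER_LEN:]

# Build a fake IPv4 (20-byte header, IHL=5) + UDP (8-byte header) packet
ip_header = struct.pack('!BBHHHBBH4s4s',
                        0x45, 0, 20 + 8 + 5, 0, 0, 64, 17, 0,
                        bytes([10, 0, 0, 2]), bytes([10, 0, 0, 1]))
udp_header = struct.pack('!HHHH', 554, 5000, 8 + 5, 0)
packet = ip_header + udp_header + b'media'

# For a 20-byte IPv4 header this reduces to the fixed offset 28
assert udp_payload(packet) == packet[28:] == b'media'
```

For packets carrying IP options the header would be longer than 20 bytes and the fixed offset 28 would no longer hold, which is why the sketch reads the IHL field explicitly.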
\subsection{Traffic decryption}
A configuration of IPtables was also developed for the client running the third-party software; the policy adopted is shown in Listing~\ref{code:IPtablesDec}:
\begin{lstlisting}[caption={Configuration of IPtables for decryption}, label={code:IPtablesDec}]
iptables -A INPUT -p udp
-s RASPBERRY_ADDRESS_IPV4
-j NFQUEUE --queue num 2
\end{lstlisting}
Through this solution it is possible to intercept incoming packets (INPUT) transported over UDP (-p udp) and coming from the Raspberry Pi 4 Model B (-s RASPBERRY\_ADDRESS\_IPV4), and to send them to queue 2.
The script for decrypting the packets queued by the firewall is shown in Listing~\ref{code:decryptscript}.
More specifically, the script fetches each packet from the queue, extracts its payload and decrypts it before accepting it into the system.
\begin{lstlisting}[caption={Decryption script}, label={code:decryptscript}]
def decrypt(packet):
    cipher_suite = Fernet(key)
    decoded_text = cipher_suite.decrypt(
        packet.get_payload()[28:])
    pkt = IP(packet.get_payload())
    UDP_IP = pkt[IP].dst
    UDP_PORT = pkt[UDP].dport
    MESSAGE = decoded_text
    sock = socket.socket(socket.AF_INET,
                         socket.SOCK_DGRAM)
    sock.sendto(MESSAGE, (UDP_IP, UDP_PORT))
    packet.drop()
\end{lstlisting}
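The two scripts rely on Fernet's symmetric round-trip property: with a shared key, decrypting an encrypted payload returns the original bytes, and a tampered token is rejected. Since Fernet comes from the external \texttt{cryptography} package, this property can be illustrated with a standard-library stand-in (a toy XOR keystream plus an HMAC tag; this is NOT cryptographically secure and is for illustration only):

```python
import hashlib, hmac, os

def _keystream(key, nonce, length):
    # SHA-256 in counter mode -- purely illustrative
    out = b''
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:length]

def toy_encrypt(key, payload):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(payload, _keystream(key, nonce, len(payload))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity tag
    return nonce + ct + tag

def toy_decrypt(key, token):
    nonce, ct, tag = token[:16], token[16:-32], token[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError('tampered packet')
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

key = os.urandom(32)
payload = b'RTP media payload'  # what get_payload()[28:] would return
assert toy_decrypt(key, toy_encrypt(key, payload)) == payload
```

In the deployed solution this role is played by Fernet, which additionally handles key derivation, timestamps and token formatting.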
\subsection{Proposed solution in the third-party scenario}
Although the configuration of the countermeasures just described modifies the initial testbed, it has no impact on the actions the user must take to initialise and use the Tapo C200.
A sequence diagram of how the Tapo C200 works with third-party software is shown in Figure~\ref{utipi}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{Images/Utilizzo_Pi.png}
\begin{flushleft}
\fcolorbox{black}{black}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication in plaintext\newline
\fcolorbox{black}{blue}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication over the TLS channel\newline
\fcolorbox{black}{orange}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Video stream encrypted with symmetric AES key
\end{flushleft}
\caption{Sequence diagram of the use of the IP Camera with Raspberry Pi in the network infrastructure}
\label{utipi}
\end{figure}
It can be seen that in the use phase the presence of the Raspberry Pi 4 Model B is totally transparent to the user. The routing of the video stream is managed by the devices themselves, so the user can use the service exactly as in the initial scenario. Moreover, thanks to the proposed countermeasure, we can observe that the video stream exchanged on the home network between the camera and the third party application is encrypted.
To demonstrate the effectiveness of the countermeasure, the experiment was carried out again. This time the intercepted traffic, shown in Figure~\ref{trafficcif}, was unintelligible, and the execution of the H264 extractor tool produced no results.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Images/traffico_cifrato.png}
\caption{Encrypted network traffic}
\label{trafficcif}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
IP cameras such as the Tapo C200 are useful, widespread and definitely within everyone's reach. With the experiments conducted in this manuscript, we have analysed the security of these devices also from a privacy point of view.
In order to fully understand what Tapo's vulnerabilities might be, a substantial amount of reverse engineering work had to be done using various technologies.
This allowed us to produce a detailed description of the ceremony that the Tapo C200 performs during its operation, covering: the initialisation of the device, its use through TP-Link's proprietary systems, and its integration with third-party technologies.
Subsequently, the vulnerability assessment was carried out, from which it was found that although video streaming through the use of the Tapo application enjoys certain guarantees, the streaming service through the use of third-party applications is devoid of encryption mechanisms and therefore vulnerable.
Furthermore, it was highlighted that without intervening on the cryptographic systems used, one of the features offered by the camera, namely motion detection, can be a vector to profile and deceive the victim. It has been demonstrated how an attacker could exploit these vulnerabilities through penetration testing. The trial highlighted the Tapo C200's poor resilience to DoS attacks.
Therefore, it was highlighted that by applying a filter on the packets leaving the Tapo C200 on the basis of the protocol and the size, the motion detection service behaves like an oracle for the attacker.
By exploiting the mechanisms in question, the attacker could deduce the victim's habits and surgically inhibit the service.
Furthermore, it was highlighted that thanks to the use of the H264 extractor tool, the attacker is able to intercept a video stream and convert it back into a playable video. The average of the Base scores of these vulnerabilities is 6.14, so the risk level associated with the Tapo C200 is Medium.
Finally, with regard to the critical issues encountered with the use of third-party technologies, a countermeasure was proposed using the Raspberry Pi 4 Model B. This countermeasure uses the Raspberry as an extension of the Tapo C200, capable of applying encryption to transmissions and guaranteeing confidentiality in a completely transparent way to the user.
Our experiments taught us that, although the security measures in place in the Tapo C200 are in line with the state of the art, they have not yet been tailored to all possible use scenarios, a finding that we plan to validate in the future on other common IoT devices.
\bibliographystyle{abbrv}
\section{Introduction}\label{sec:intro}
With the explosion of the Internet of Things (IoT) world, many devices already on the market have undergone a transformation, becoming more functional and cost-effective.
One of these is undoubtedly the IP camera, a video surveillance camera that uses the network to transmit audio and video signals. Unfortunately, IP cameras, like many other IoT devices, are the target of countless malicious cyber attacks that often aim to violate the confidentiality of transmitted data.
Today, IP cameras are used to build CCTV systems, as confirmed by a 2019 estimate of around 770 million IP cameras in use in this field~\cite{survcamera}.
In addition, IP cameras are increasingly being used in people's homes. Thanks to their low cost, they are used, for example, to monitor a house when people are away, to watch over pets, and as baby monitors.
In view of the main uses, it is clear that the nature of the data transmitted by these devices is decidedly intimate and private.
Therefore, it would seem reasonable to assume that a series of countermeasures are implemented to protect the confidentiality of the transmissions.
However, there are no general guidelines for the realisation of such products and often the precautions taken are very weak or even completely absent.
The security of the IoT world has been at the centre of the global debate for a few years now. In fact, in October 2016, the Mirai malware was able to build a botnet, compromising thousands of devices, including many IP cameras~\cite{mirai}. In 2017, the list of known vulnerabilities within the CCTV Calculator~\cite{cctvcalc} was published, involving IP cameras from many vendors, including Samsung, D-link, Panasonic, Cisco, Hikvision, FOSCAM, Y-Cam, TP-Link, AirLive, Sony, Arecont Vision, Linksys, Canon and many more.
The topic of security and IP cameras is an open one, so in this paper we decided to carry out a vulnerability assessment and penetration testing process on one of today's IP cameras in particular, the TP-Link Tapo C200~\cite{c200}.
\subsection{Contributions}
This article explores whether and how IP cameras can be exploited today using freeware, i.e. non-commercial tools. We address this question in a scenario where there is an insider attacker who connects to the same network as the IP camera with her attacking laptop via Ethernet or Wi-Fi connectivity.
The results are that, relying on simple tools such as nmap, Ettercap, Wireshark and Python programming, the attacker can seriously compromise the service and undermine user privacy.
More precisely, we describe all our results obtained through the reverse engineering process performed on the Tapo camera; thanks to this process we were able to perform our vulnerability assessment.
The vulnerability assessment exercise allowed us to gather information and evaluate the ceremony for everything related to proprietary services and third-party video streams. The work continued with penetration testing, which aims to exploit the vulnerabilities identified during the assessment. Specifically, we found that the Tapo C200 suffers from: a Denial of Service, which renders the camera unavailable; video eavesdropping, which affects user privacy; and a third attack we call \lq\lq Motion Oracle", which allows an attacker on the same network as the Tapo C200 to detect traffic related to the camera's motion detection service and use it to obtain information about the victim by analysing the frequency of the generated traffic.
Finally, we prototype effective countermeasures, at least against video eavesdropping, by taking advantage of a cheap Raspberry Pi~\cite{raspb} device. This can implement encryption so that packets are only transferred in encrypted form over the network and therefore cannot be understood by a man in the middle (MITM).
The Raspberry Pi then transmits the encrypted packets from the IP Camera, ensuring user security, in a completely transparent manner.
\subsection{Testbed}
Our testbed consists of the devices used for our VAPT (Vulnerability Assessment and Penetration Testing) experiments.
The Tapo C200 is an entry-level IP camera designed for home use. Released in 2020 with a very low cost (around 30€), the Tapo C200 has several technical features, thanks to which it boasts several functionalities, such as: compatibility with the main smart home systems (e.g. Alexa~\cite{alexa} and Google Assistant~\cite{GoogleAssistant}); night vision, resolution up to Full HD; motion detection; acoustic alarm; data storage; voice control; use with third-party software; webcam mode.
To use the Tapo C200, a TP-Link account is required which allows access to the various cloud services offered.
Access to the TP-Link account is via the Tapo app, which is available free of charge on Google Play~\cite{tapoGPlay} and APP store~\cite{tapoApple} for devices running Android 4.4+ and IOS 9+ respectively. The application has a simple and intuitive user interface, from which it is possible to use and manage the devices of the Tapo family, including the Tapo C200. The range of services offered for the Tapo C200 includes: Remote access and control of the Tapo C200; Sharing the Tapo C200 with other accounts; synchronisation of settings; integrating the Tapo C200 with smart home systems; and receiving motion notifications.
Figure~\ref{fig:testbed} shows the testbed on which the experiments were conducted and how its elements are connected to each other; note that all connections are made over a Wi-Fi network.
In a nutshell, the testbed consists of:
\begin{enumerate}[(a)]
\item A Wi-Fi switch to which the various devices were connected
\item The Tapo C200 IP Camera
\item A smartphone on which SSL Packet Capture and the Tapo application were installed
\item A Linux machine on which Ettercap and Wireshark tools have been installed
\item A Linux machine on which the third-party players iSpy and VLC have been installed
\end{enumerate}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.35]{Images/origninal_testbed.png}
\caption{Original testbed}
\label{fig:testbed}
\end{figure}
By monitoring the environment with the various instruments, the Tapo C200 was used as a normal user, with the aim of extracting useful information for operational analysis.
\subsection{Related Work}\label{sec:RW}
As early as 2015, it was shown that IP camera security spans multiple research areas, including multimedia security, network security, cloud security and end-user privacy.
In fact, Tekeoglu and Tosun looked into the security of cloud-based wireless IP cameras~\cite{cloudbasedIPcamerajpeg}. They studied the traffic generated by a wireless IP camera that is easy to set up for the average home user, and proved that an attacker who sniffs the IP camera's network traffic is able to reconstruct JPEG images from the data stream. By their own account, their system has some limitations: for example, their script reconstructed only 253 JPEG images from about 20 hours of video.
A few years later, in 2017, Liranzo and Hayajneh presented an analysis of the main privacy and security issues affecting thousands of IP camera consumers globally, and proposed a series of recommendations to help protect consumers' IoT devices in the intimacy of their homes~\cite{2017issueIPcamera}.
It is clear that modern IP cameras collect a large amount of data, which is then transferred to cloud systems.
As a result, IP cameras process data to such an extent that it becomes important to assess the security of these cameras also from the point of view of privacy for users.
\subsection{Paper Structure}
This manuscript continues to describe all background information useful for understanding the content of the paper (\S\ref{sec:background}). The core of the paper begins with the reverse engineering process (\S\ref{sec:reverse}) which continues with the vulnerability assessment (\S\ref{sec:VA}) and the penetration testing (\S\ref{sec:PT}) performed on the Tapo C200 IP camera. Finally, the paper describes a countermeasure (\S\ref{sec:countermeasures})
and concludes with some broader evaluations of the results (\S\ref{sec:conclusions}).
\section{Background}\label{sec:background}
In this section we provide all the background information necessary for understanding all the steps performed in the VAPT work on the Tapo C200.
The IP camera can also be integrated with third-party infrastructures and systems. Such systems require the use of dedicated protocols such as the Real Time Streaming Protocol (RTSP)~\cite{rtsp}.
RTSP is a network protocol used by multimedia streaming systems; its purpose is to orchestrate the exchange of media, such as audio and video, across the network. RTSP complements the RTP~\cite{rtp} and RTCP~\cite{rtcp} protocols by adding the directives necessary for streaming control, such as:
\begin{itemize}
\item Describe: is used by the client to obtain information about the desired resource
\item Setup: specifies how a single media stream is to be transported
\item Play/Pause: starts/stops multimedia playback
\item Record: used to store a stream
\end{itemize}
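These directives are plain-text requests in an HTTP-like format. The sketch below builds a DESCRIBE request for illustration; the URL, port and CSeq value are placeholders of our own, not values taken from our captures:

```python
def rtsp_request(method, url, cseq, headers=None):
    # An RTSP/1.0 request: request line, CSeq header,
    # optional extra headers, then a blank line
    lines = ["%s %s RTSP/1.0" % (method, url), "CSeq: %d" % cseq]
    for name, value in (headers or {}).items():
        lines.append("%s: %s" % (name, value))
    return "\r\n".join(lines) + "\r\n\r\n"

describe = rtsp_request("DESCRIBE", "rtsp://192.168.0.10:554/stream/1", 2,
                        {"Accept": "application/sdp"})
```

A DESCRIBE such as this one is answered with an SDP body listing the available media streams, after which SETUP and PLAY start the actual transport over RTP.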
ONVIF (Open Network Video Interface Forum) is an organisation whose aim is to promote compatibility between video surveillance equipment so that systems made by different companies are interoperable.
ONVIF provides an interface for communication with the different equipment through the use of various protocols. ONVIF provides different configuration profiles to make the best use of the different technical features.
Profiles G, Q, S and T are dedicated to video systems; however, the Tapo C200 is only compatible with Profile S~\cite{comp}.
Profile S is one of the most basic profiles and is designed for IP-based video systems. The Profile S features supported by the Tapo C200 are user authentication, NTP support and H264~\cite{onvifs} audio and video streaming over RTSP.
In order to take advantage of the multimedia streaming services, it was necessary to use the open source, multi-platform media player VLC~\cite{vlc} and the iSpy DVR~\cite{ispy}.
In order to scan the services exposed by the device, we used Nmap~\cite{nmap}, which performs port scanning, and Nessus~\cite{nessus}, which performs a risk analysis on network devices, indicating the criticality level of vulnerabilities according to the CVSS standard.
CVSS is an industry standard designed to assess the severity of computer system security vulnerabilities~\cite{cvss}. The assessment consists of a score on a scale from 0 to 10, calculated by a formula~\cite{scoreCvssdoc} that takes into account Attack Vector (AV), Attack Complexity (AC), Privileges Required (PR), User Interaction (UI), Scope (S), Confidentiality (C), Integrity (I) and Availability (A). The higher the score obtained from the values assigned to these metrics, the more serious the vulnerability.
Finally, the Tapo C200 was analysed using the tools Ettercap~\cite{ettercap} to intercept traffic to and from the device, SSL packet capture~\cite{sslpacket} to intercept and read traffic to and from the Tapo application and Wireshark~\cite{wireshark} to understand network traffic and use the H264 extractor extension to convert network traffic into playable video.
\section{Reverse engineering}\label{sec:reverse}
Our work starts from the reverse engineering process, which allowed us to deduce that when the application and the camera are on different networks, a classic client-server architecture is used, in which the Tapo server distributes information such as the device configuration.
Conversely, when both are on the same network, no external servers are used to control the camera and access the video stream.
It is important to note that the flow for control and editing differs substantially from that for video streaming.
The diagram in Figure \ref{ctrltapo} describes the sequence of operations that allow the user to access the video stream through the Tapo application. After choosing one of the initialised devices, the Tapo application logs into the Tapo C200, obtaining the token \emph{stok} needed to perform the control and modification operations.
The \emph{stok} is a token generated during authentication by the Tapo C200 which, after authenticating the user with a username and password, assigns this token to the Tapo application. The purpose of the token is to create a session without having to re-authenticate the application every time the camera settings are changed. The various requests generated by the application carry the \emph{stok} obtained in the last authentication phase. In this way the Tapo C200 only accepts requests from authenticated users.
Following the user's request to access the video stream, the Tapo application and the Tapo C200 coordinate: in particular, the Tapo C200 sends the application a \textit{nonce} needed for the construction of the symmetric key.
At this point both parties calculate the initialisation vector and AES key in order to encrypt the video stream using AES in CBC mode.
Once the key has been calculated, the Tapo application authenticates itself to the Tapo C200 by providing a \lq\lq response'' tag. Finally, the Tapo C200, after verifying the validity of the response, starts the multimedia streaming session, encrypting it with the agreed key.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{Images/utilizzo_strandard_tapo.png}
\begin{flushleft}
\fcolorbox{black}{black}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication in plaintext.\newline
\fcolorbox{black}{blue}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication over TLS channel.\newline
\fcolorbox{black}{orange}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Video stream encrypted with symmetric AES key
\end{flushleft}
\caption{Sequence diagram for using the IP Camera with the proprietary Tapo application}
\label{ctrltapo}
\end{figure}
From the diagram in Figure~\ref{diagterz} we can notice how the integration of the Tapo C200 with third-party software is handled. Using the Tapo application, we first need to access the device in question and change its settings so that we can create a new user for third-party software.
After that, we can use the Tapo C200 with the previously mentioned software iSpy and VLC, which log in and start the free-to-air media streaming session with the camera.
In order to be able to use third-party applications, it was necessary to correctly configure the already initialised Tapo C200 by creating a special user on the camera using the Tapo application. Therefore, since this was a change to the Tapo C200 settings, the request generated by the Tapo application was accompanied by the \emph{stok}.
To access the resource, the Linux machine with VLC and iSpy installed was used, i.e. the machine (e) of the testbed. Using Ettercap and Wireshark installed on machine (d) instead, a MiTM attack was carried out in order to analyse the traffic.
From machine (e) it was possible to access the video stream using both players.
\begin{itemize}
\item On VLC, select the \lq\lq network resources" section of the player and enter:\\
\url{rtsp://username:password@<tapoc200address>/stream/1}
\item On iSpy, configure the ONVIF S profile to URI:\\ \url{http://username:password@<tapoc200address>/onvif/device\_service}
\end{itemize}
Since ONVIF's Profile S does not add any functionality to the video stream, no differences emerged from the analysis of the traffic generated for the video stream of the two players. In particular, the packets analysed showed that, after accessing through the specific URI for ONVIF devices, iSpy uses RTSP in exactly the way used by VLC.
In summary, the Tapo C200 can then be used with the previously mentioned software iSpy and VLC, which log in and start the unencrypted multimedia streaming session with the camera.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.32]{Images/Utilizzo_standard.png}
\begin{flushleft}
\fcolorbox{black}{black}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication in plaintext.\newline
\fcolorbox{black}{blue}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication over TLS channel
\end{flushleft}
\caption{Sequence diagram for using the IP Camera with third party software}
\label{diagterz}
\end{figure}
\section{Vulnerability Assessment}\label{sec:VA}
The vulnerability assessment activity carried out is divided into two parts, one concerning proprietary services and the other concerning third party video streams.
\subsection{Proprietary services analysis}
When the Tapo application and the Tapo C200 are not on the same network, the confidentiality of the video stream is entrusted to the channel on which it travels. Using SSL/TLS on port 443, all data exchanged between the Tapo C200, the TP-Link server and the Tapo application is properly encrypted.
In addition, the Tapo application and the Tapo C200 use SSL pinning~\cite{pinning}, meaning that if information needs to be shared with the TP-Link server, no SSL/TLS session is established towards any entity other than a TP-Link server whose SSL/TLS certificate they already have by default.
However, despite the use of SSL/TLS, it was found that the traffic is easily recognisable, particularly with regard to movement notifications.
The motion detection function follows these steps:
\begin{enumerate}
\item The Tapo C200 detects movement and sends notification to the TP-Link server;
\item The server receives the notification and sends an alert to all devices connected to the account associated with the camera;
\item The application receives the message and generates a notification for the user.
\end{enumerate}
Our study found that the size of the messages generated was always the same, 523 bytes. By using the Tapo C200 as an oracle, the attacker may be able to discern movement notifications based on the size of the messages, without intervening in the cryptographic scheme and deducing, for example, when the victim is home and when no one is home. In addition, by filtering and blocking only those types of messages, the attacker could undermine the availability of the notification service by deceiving the victim that there is no movement in the area covered by the camera.
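The oracle described above can be sketched over captured packet metadata; the tuple format and the sample capture below are our own illustration, with 523 bytes as the constant notification size observed in our study:

```python
NOTIFICATION_LEN = 523  # constant size of motion-notification messages

def motion_events(capture):
    """Keep timestamps of TLS records whose length matches a notification.

    capture: iterable of (timestamp_seconds, protocol, frame_length) tuples.
    """
    return [ts for ts, proto, length in capture
            if proto == "TLS" and length == NOTIFICATION_LEN]

sample = [(0.0, "TLS", 523), (0.4, "UDP", 1400),
          (7.2, "TLS", 310), (9.1, "TLS", 523)]
assert motion_events(sample) == [0.0, 9.1]
```

Note that the attacker never decrypts anything: the filter operates purely on protocol and frame size, which is precisely why SSL/TLS does not prevent the inference.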
\subsection{Third party video stream analysis}
The use of third-party players for multimedia streaming presents a high level of criticality, both for the ONVIF service on port 2020 and for RTSP on port 554.
Since ONVIF Profile S is limited to providing an interface for the use of RTSP, neither approach to video streaming provides any mechanism to guarantee the confidentiality of the multimedia stream. In fact, after the Tapo C200 has been configured to use these systems, whenever a request is made the video stream is sent in clear text using the H264 encoding.
This approach is critical, especially with regard to the confidentiality of the data as it could allow the attacker to intercept the stream and decode it, thus obtaining the original and reproducible video stream.
\section{Penetration Testing}\label{sec:PT}
\subsection{Camera DoS} \label{sec:camerados}
A Denial of Service (DoS) is a malfunction of a device due to a computer attack. Generally, such attacks aim to undermine the availability of a service by overloading the system and making an arbitrarily large number of requests, sometimes in a distributed manner (Distributed Denial of Services or DDoS). In the case of the Tapo C200, a DoS attack was relatively simple. In fact, an intensive scan by Nessus caused the device to crash, as can be seen in Figure~\ref{fig:crashed_camera}, causing it to stop and then reboot. This demonstrates how an attacker on the same network as the device can seriously compromise the availability of the service.
Figure~\ref{fig:DoSScore} shows the CVSS score of this vulnerability,
which has been classified as Medium, with a value of 6.5.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{Images/crashed_camera.jpg}
\caption{Crash of the Tapo C200 camera}
\label{fig:crashed_camera}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Images/Dosscore.png}
\caption{DoS score}
\label{fig:DoSScore}
\end{figure}
\subsection{Video Eavesdropping} \label{sec:privacyattack}
To demonstrate how an attacker could obtain the multimedia stream transmitted using third-party software, an experiment was conducted using the machine on which the multimedia players were installed, that is the machine (e), and starting a multimedia session with the Tapo C200. At the same time, machine (d) was used again in MiTM attack mode to intercept the traffic exchanged between the player and the Tapo C200.
By analysing the intercepted traffic with Wireshark, it was possible to use the H264 extractor tool (Figure~\ref{extractor}), which correctly interpreted and reconstructed the entire intercepted multimedia streaming session. The output of the tool is a playable video of the intercepted session.
Finally, Figure~\ref{fig:leakscore} shows the CVSS score of this vulnerability, which was classified as Medium, with a value of 6.5.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{Images/pacchetti264.png}
\caption{H264 packets}
\label{extractor}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Images/leakscore.png}
\caption{Video eavesdropping score}
\label{fig:leakscore}
\end{figure}
\subsection{Motion Oracle} \label{sec:motionoracle}
To demonstrate how an attacker could use the Tapo C200 as an oracle, a special experiment was conducted.
Throughout the night, the Tapo C200 was used to record a public street. Once the traffic was recorded, outgoing packets from the Tapo C200 were filtered on the basis of:
\begin{itemize}
\item the protocol used, by applying the SSL/TLS filter;
\item the frame length, by applying the display filter \textit{frame.len==523}.
\end{itemize}
Once the packets were obtained, we constructed the graph shown in Figure~\ref{graftraf}, which reports the number of 523-byte SSL/TLS packets captured in ten-minute intervals. It can be seen that, starting at 23:00, the number of packets decreases until about 3:00; from 3:30 it increases again, reaching a maximum around 7:00. The trend of the curve is consistent with expectations: the flow of cars decreases during the night and increases in the morning.
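The per-interval counts behind the graph can be computed by bucketing the timestamps of the filtered packets into ten-minute windows; the timestamps below are illustrative values of our own:

```python
from collections import Counter

def per_interval_counts(timestamps, bucket_seconds=600):
    # 600 s = one ten-minute bucket; bucket index = floor(t / 600)
    return Counter(int(t // bucket_seconds) for t in timestamps)

# e.g. three matching packets in the first ten minutes, one in the second
counts = per_interval_counts([12.0, 130.5, 599.9, 610.0])
assert counts[0] == 3 and counts[1] == 1
```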
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Images/grafico_movimenti.jpg}
\caption{Graph of transit packet traffic on 7-8 June 2021}
\label{graftraf}
\end{figure}
By comparing the graph with the video recorded on the Tapo C200's SD card, we found a complete match between captured packets and recorded movements: every actual movement appears in the graph. For example, a frame from about 00:05 is shown in Figure~\ref{immovi}, where a car can be seen in transit.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Images/movimento_00_05.png}
\caption{Movement recorded at 00:05}
\label{immovi}
\end{figure}
This demonstrates how an attacker on the same network as the Tapo C200 can precisely detect traffic related to the Tapo C200's motion detection service, traffic that could be used to deduce information about the victim by monitoring the frequency of these packets.
Furthermore, this experiment demonstrates how the attacker can trivially filter out such packets and deceive the victim.
Figure~\ref{scoremo} shows the CVSS score of this vulnerability, which was classified as \emph{Medium}, with a value of 5.4.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Images/motionscore.png}
\caption{Motion Oracle score}
\label{scoremo}
\end{figure}
\section{Countermeasures}\label{sec:countermeasures}
Our work proposes a set of countermeasures developed to mitigate the vulnerability of video eavesdropping. The Raspberry Pi 4 Model B was used for this purpose.
The idea behind the proposed solution to guarantee the confidentiality of the video stream in the third party scenario is to use the Raspberry Pi 4 Model B as an access point of the Tapo C200 in order to modify the traffic in transit by applying encryption.
\subsection{Raspberry Pi into the testbed}
The Raspberry Pi 4 Model B has a network card with a wired and a wireless interface. Thanks to this feature it was possible to connect the Raspberry Pi 4 Model B to the testbed modem while exposing a Wi-Fi network to which the Tapo C200 could be connected, as shown in Figure \ref{testbed2}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.35]{Images/pi_third_testbed.png}
\caption{Raspberry Pi in the network infrastructure}
\label{testbed2}
\end{figure}
In order to build the network infrastructure described above, the Raspberry Pi 4 Model B had to be configured to act as an access point for the Tapo C200 on behalf of the modem~\cite{settingrasp}. Specifically, it was necessary to install the hostapd~\cite{hostpad} tool.
This module, available for Linux machines, allows normal network interfaces to be transformed into Access Points. In addition, it was necessary to modify the dhcpd configuration of the Raspberry Pi 4 Model B.
\subsection{Traffic encryption}
To encrypt network traffic, we configured IPtables on the Raspberry Pi 4 Model B; the policy adopted is shown in Listing~\ref{code:IPtablesEnc}:
\begin{lstlisting}[caption={Configuration of IPtables for encryption}, label={code:IPtablesEnc}]
iptables -A FORWARD -m physdev
 --physdev-is-in wlan0 -p udp
 -s TAPOC200_ADDRESS_IPV4
 -j NFQUEUE --queue-num 1
iptables -A FORWARD -m physdev
 --physdev-is-in wlan0 -f -j DROP
\end{lstlisting}
In this way it was possible to intercept packets in transit (FORWARD) arriving from the wireless interface (-m physdev --physdev-is-in wlan0), transported over UDP (-p udp) and coming from the Tapo C200 (-s TAPOC200\_ADDRESS\_IPV4), and send them to queue 1.
The second rule drops any fragments of the packets in transit (-f), so that no partial packets are forwarded.
At this point, we built a script to encrypt the packets queued by the firewall. The script retrieves packets from the queue, extracts the payload, encrypts it, and sends it back to the recipient's address; the full script is shown in Listing~\ref{code:encryptscript}.
\begin{lstlisting}[caption={Encryption script}, label={code:encryptscript}]
from cryptography.fernet import Fernet
from scapy.all import IP, UDP
import socket

def encrypt(packet):
    # key is the symmetric key shared with the client
    cipher_suite = Fernet(key)
    # encrypt the payload after the 28-byte IP+UDP header
    encoded_text = cipher_suite.encrypt(
        packet.get_payload()[28:])
    pkt = IP(packet.get_payload())
    UDP_IP = pkt[IP].dst
    UDP_PORT = pkt[UDP].dport
    sock = socket.socket(socket.AF_INET,
                         socket.SOCK_DGRAM)
    sock.sendto(encoded_text, (UDP_IP, UDP_PORT))
    # drop the original plaintext packet
    packet.drop()
\end{lstlisting}
\subsection{Traffic decryption}
A configuration of IPtables was also developed for the client running the third-party software; the policy adopted is shown in Listing~\ref{code:IPtablesDec}:
\begin{lstlisting}[caption={Configuration of IPtables for decryption}, label={code:IPtablesDec}]
iptables -A INPUT -p udp
 -s RASPBERRY_ADDRESS_IPV4
 -j NFQUEUE --queue-num 2
\end{lstlisting}
Through this solution it is possible to intercept incoming packets (INPUT) transported with UDP (-p udp) coming from the Raspberry Pi 4 Model B (-s RASPBERRY\_ADDRESS\_IPV4) and send them on queue 2.
The script for decrypting the packets queued by the firewall is shown in Listing~\ref{code:decryptscript}.
More specifically, the script takes care of fetching packets from the queue, extracting the payload and decrypting it before accepting it into the system.
\begin{lstlisting}[caption={Decryption script}, label={code:decryptscript}]
from cryptography.fernet import Fernet
from scapy.all import IP, UDP
import socket

def decrypt(packet):
    # key is the symmetric key shared with the Raspberry Pi
    cipher_suite = Fernet(key)
    # decrypt the payload after the 28-byte IP+UDP header
    decoded_text = cipher_suite.decrypt(
        packet.get_payload()[28:])
    pkt = IP(packet.get_payload())
    UDP_IP = pkt[IP].dst
    UDP_PORT = pkt[UDP].dport
    sock = socket.socket(socket.AF_INET,
                         socket.SOCK_DGRAM)
    sock.sendto(decoded_text, (UDP_IP, UDP_PORT))
    # drop the original encrypted packet
    packet.drop()
\end{lstlisting}
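The two scripts are inverses of each other: whatever the encryption callback produces, the decryption callback must recover byte-for-byte. This symmetry can be checked in isolation with a minimal sketch using the \texttt{cryptography} package (the payload bytes below are a made-up stand-in for the bytes after the 28-byte IP+UDP header of a captured video packet):

```python
from cryptography.fernet import Fernet

# Hypothetical payload standing in for a captured video-stream fragment.
payload = b"\x00\x01video-frame-bytes"

# Both endpoints must be provisioned with the same symmetric key.
key = Fernet.generate_key()
cipher_suite = Fernet(key)

# What the Raspberry Pi side does before re-sending the packet.
token = cipher_suite.encrypt(payload)

# What the client side does before accepting the packet.
recovered = cipher_suite.decrypt(token)

assert recovered == payload
```

Because Fernet is symmetric, the same pre-shared key must be available on both the Raspberry Pi and the client before the stream starts; how that key is distributed is outside the scope of the scripts above.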
\subsection{Proposed solution in the third-party scenario}
Although the countermeasure configuration described above modifies the initial testbed, it has no impact on the actions the user must take to initialise and use the Tapo C200.
A sequence diagram of how the Tapo C200 works with third-party software is shown in Figure~\ref{utipi}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{Images/Utilizzo_Pi.png}
\begin{flushleft}
\fcolorbox{black}{black}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication in plaintext\newline
\fcolorbox{black}{blue}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Communication over the TLS channel\newline
\fcolorbox{black}{orange}{\rule{0pt}{4pt}\rule{4pt}{0pt}}\quad Video stream encrypted with symmetrical AES key
\end{flushleft}
\caption{Sequence diagram of the use of the IP Camera with Raspberry Pi in the network infrastructure}
\label{utipi}
\end{figure}
It can be seen that in the use phase the presence of the Raspberry Pi 4 Model B is totally transparent to the user. The routing of the video stream is managed by the devices themselves, so the user can use the service exactly as in the initial scenario. Moreover, thanks to the proposed countermeasure, we can observe that the video stream exchanged on the home network between the camera and the third party application is encrypted.
In order to demonstrate the effectiveness of the countermeasure, the experiment was carried out again: the intercepted traffic, shown in Figure~\ref{trafficcif}, was unintelligible, and the execution of the H264 extractor tool produced no results.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Images/traffico_cifrato.png}
\caption{Encrypted network traffic}
\label{trafficcif}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
IP cameras such as the Tapo C200 are useful, widespread, and within everyone's reach. With the experiments conducted in this manuscript, we have analysed the security of these devices, including from a privacy point of view.
Fully understanding the Tapo C200's possible vulnerabilities required extensive reverse engineering across several technologies.
This allowed us to give a detailed description of the ceremony that the Tapo C200 performs during its operation, covering the initialisation of the device, its use through the proprietary systems of TP-Link, and its integration with third-party technologies.
Subsequently, a vulnerability assessment was carried out, which found that although video streaming through the Tapo application enjoys certain guarantees, the streaming service offered to third-party applications is devoid of encryption mechanisms and therefore vulnerable.
Furthermore, we showed that, without attacking the cryptographic systems in use, one of the features offered by the camera, namely motion detection, can be a vector to profile and deceive the victim. We demonstrated through penetration testing how an attacker could exploit these vulnerabilities, and the trials highlighted the Tapo C200's poor resilience to DoS attacks.
In particular, by applying a filter to the packets leaving the Tapo C200 on the basis of protocol and size, the motion detection service behaves like an oracle for the attacker.
By exploiting these mechanisms, the attacker could deduce the victim's habits and surgically inhibit the service.
Furthermore, thanks to the H264 extractor tool, the attacker is able to intercept a video stream and convert it back into a playable video. The average of the Base scores of these vulnerabilities is 6.14, so the risk level associated with the Tapo C200 is Medium.
Finally, with regard to the critical issues encountered with the use of third-party technologies, a countermeasure was proposed using the Raspberry Pi 4 Model B. This countermeasure uses the Raspberry as an extension of the Tapo C200, capable of applying encryption to transmissions and guaranteeing confidentiality in a completely transparent way to the user.
Our experiments taught us that, although the security measures in place in the Tapo C200 are in line with the state of the art, they have not yet been tailored to all possible usage scenarios, a finding that we plan to validate on other common IoT devices in future work.
\bibliographystyle{abbrv}
\section{Introduction}\label{sec1}
Object detection is a basic and challenging task in computer vision. Detection algorithms are required to locate each instance of interest in an image and output bounding boxes with category labels. Currently, object detection has shifted from conventional methods to deep learning-based algorithms, such as YOLO\cite{redmon2016you}, SSD\cite{liu2016ssd}, and Fast RCNN\cite{girshick2015fast}. These algorithms have achieved significant success in the detection of horizontal objects. However, if an object inclines significantly, as shown in Figure \ref{fig1} (a), the bounding box detected by these horizontal detectors contains a lot of background, which degrades the description of the object. Moreover, when multiple objects of the same category are very close together, a horizontal bounding box will include parts of several objects, which may be misjudged as the same object in later processing stages of the model, leading to misdetection. To overcome these problems, existing studies\cite{jiang2017r2cnn,ma2018arbitrary,liao2017textboxes,liao2018textboxes++,zhou2020arbitrary,2021Oriented} have modified mainstream horizontal object detectors to detect oriented objects, either by additionally predicting an angle or by directly regressing the positions of the four corner points. Although better results have been achieved, some issues remain, caused either by the underlying horizontal detector or by defects in the model design. The detection performance of an anchor-based detector\cite{ma2018arbitrary,liao2017textboxes,xu2020gliding} depends on the size, aspect ratio, and quantity of the anchors. Further, because the scale and aspect ratio of the anchors remain fixed after initialisation, even an elaborate design has difficulty handling candidates with large shape changes, which are typical of oriented objects. 
Moreover, a predefined positioning framework can hinder the generalisation ability of the detector. For example, the YOLOv3\cite{redmon2018yolov3} anchor clustering, being tuned to particular data, cannot be applied to objects with very different sizes or aspect ratios. To achieve a higher average precision, the anchor-based detector RRPN\cite{ma2018arbitrary} was established. Because of the angle differences, its anchors must be placed densely over the input image, adding many parameters to the model. These anchors also increase the number of negative samples, which leads to an imbalance between positive and negative training samples. Other methods\cite{zhou2020arbitrary,2021Oriented,zhou2017east} that directly predict the angle reach a high mAP but still suffer from inaccurate angle prediction. More specifically, even when the difference between the predicted and real angle values is small, the resulting slanted border, as shown in Figure \ref{fig1} (a), can deviate from the object, reducing the accuracy of the final model.
In this study, we developed a single-stage rotating object detector via two points with a solar corona heatmap (ROTP). Traditional detectors rely on anchors and regress the angle, object size, or corner coordinates. In contrast to these two modes of the most popular oriented detectors, the proposed ROTP adopts two key points to detect targets. As shown in Figure \ref{fig1} (b), by locating the two key points of the vertex and the centre, and predicting the size of the target, a simple mapping function can be used to generate a slanted rectangular border for the oriented object. To achieve this, the ROTP outputs five feature maps: heatmaps of the centre point and vertex, used to locate the target centre and vertex via keypoint detection, and three maps that regress the long and short sides of a target, the offsets of the key points, and the direction of the vertex relative to the central point.
Our contributions in this paper can be summarized as follows:
\begin{enumerate}
\item We propose a single-stage rotating object detector via two points with a solar corona heatmap (ROTP). ROTP uses the predicted vertex and centre points to describe the oriented object, which avoids the regression error caused by direct prediction of the angle. It also replaces angle prediction with prediction of the relative direction, thus improving the accuracy to describe the orientation of objects.
\item We designed a solar corona heatmap (SCH) based on the spatial position relationship to predict the centre point more accurately for slender objects. This method considers the large aspect ratio of remote sensing objects, which has a more robust performance than Gaussian heatmaps.
\item Our method obtains preferable precision with more semantic design and fewer hyperparametric designs compared with other classical-oriented object detection methods based on deep learning.
\end{enumerate}
The rest of this paper is organised as follows: Section 2 reviews related work in the field of horizontal object detection and oriented object detection. Section 3 presents the overall framework of the proposed ROTP model in detail, as well as the function design. Implementation details, comparison experiments, and ablation studies are presented in Section 4, then conclusions are drawn in Section 5.
\begin{figure}[H]
\includegraphics[scale=0.6]{intro1.pdf}
\centering
\caption{(a) Deviation that inevitably occurs when only the angle is predicted in rotating target detection. (b) Addition of predicted top position, which enhances accuracy of bounding boxes.
}
\label{fig1}
\end{figure}
\section{Related Work}\label{sec2}
Many excellent oriented object detectors are based on horizontal object detectors. In this section, we first introduce horizontal object detection methods and then introduce oriented object detection methods.
\noindent\textbf{Horizontal object detection.}
Object detection aims to detect each object in natural scene images with a rectangular border. Deep learning, which has a broad range of applications \cite{DBLPXing,9565320,IJCAI20Fanzhen}, has already been successfully applied to object detection. Current classical deep learning-based methods can be categorised as single-stage and two-stage methods. The single-stage method predicts all objects directly, whereas the two-stage method refines preliminary predictions; the two-stage method is therefore more accurate but slower. These methods can be further divided into anchor-based and anchor-free methods. Specifically, the anchor-based two-stage models represented by RCNN\cite{2014}, Fast RCNN\cite{girshick2015fast} and Faster RCNN\cite{2016Faster} set multiscale anchor boxes in advance, which can be understood as a series of candidate regions of various scales and sizes; they first generate ROIs and then use ROI features to predict object categories. SSD\cite{liu2016ssd}, YOLOv2\cite{redmon2017yolo9000} and its variants\cite{redmon2018yolov3} are representative single-stage anchor-based methods: there is no candidate ROI stage, and the classification and regression results are produced in a single pass. Specifically, SSD\cite{liu2016ssd} combines the advantages of two-stage and single-stage methods to achieve a balance between speed and precision. Subsequently, RetinaNet\cite{lin2017focal} and DSSD\cite{2017DSSD} fused multiscale features in the model to further improve detection accuracy. In addition, anchor-free methods are becoming popular, including FoveaBox\cite{2019FoveaBox}, RepPoints\cite{2019RepPoints} and FCOS\cite{tian2019fcos} based on per-pixel detection, while CornerNet\cite{law2018cornernet} and CenterNet\cite{zhou2019objects} introduced heatmaps into object detection to map objects to key points. 
These detection methods simplify the network structure by removing the anchors and improving the detection speed, and provide a new research direction to detect objects. Nevertheless, the abovementioned object detection methods only generate position information in the horizontal or vertical directions, which limits their universality. For example, in scene text and other rotating object images, the aspect ratio of the instance is relatively large, the arrangement is dense, and the direction is arbitrary, which requires more accurate position information to describe these objects. Therefore, oriented object detection has gradually become a popular research direction.
\noindent\textbf{Oriented object detection.}
Owing to the huge scale changes and arbitrary directions, detecting oriented targets is challenging. Extensive research has been devoted to this task. Many oriented object detectors have been proposed based on horizontal object detectors. RRPN\cite{ma2018arbitrary} and $ R^2CNN $\cite{jiang2017r2cnn} are the classical methods. Based on Fast RCNN\cite{girshick2015fast}, $ R^2CNN $ \cite{jiang2017r2cnn} raises two pooling sizes and an output branch to predict the corner positions. RRPN\cite{ma2018arbitrary} achieved better prediction results by adding rotating anchors at different angles. Based on
SSD\cite{liu2016ssd}, textboxes\cite{liao2017textboxes} and textboxes++\cite{liao2018textboxes++} have been proposed. According to the characteristics of slender text lines, a rectangular convolution kernel was proposed, and the instance is regressed through its vertices. RRD\cite{2018Rotation} predicts a rotating object based on the invariance property of rotation features, improving the regression accuracy for long text. EAST\cite{zhou2017east} uses a U-shaped network\cite{2015U} and simultaneously predicts the border with its angle and the four corner positions of the instance. The above methods are applicable to the field of scene text detection, whereas aerial remote sensing target detection is more difficult: compared with text lines, remote sensing targets span many categories, with complex backgrounds, multiple scales, and a large number of dense small targets. Many robust oriented target detectors have emerged based on horizontal detectors. For example, ROI-Transformer\cite{ding2018learning} extracts rotation-invariant features on the ROI, which increases the accuracy of the subsequent classification and regression. ICN\cite{azimi2018towards} combines image and feature pyramid modules and achieves good results on remote sensing datasets. Gliding Vertex\cite{xu2020gliding} predicts the deviation of the four corner points from the horizontal border to obtain a more accurate oriented bounding box. P-RSDet\cite{zhou2020arbitrary} and BBVector\cite{2021Oriented} introduced the anchor-free heatmap detection method into oriented target detection to achieve fast and accurate detection.
The above anchor-free rotation detectors still use the heatmap mapping designed for horizontal targets and do not account for the fact that Gaussian heatmaps are prone to position-perception deviation on high-aspect-ratio targets.
\begin{figure}[H]
\includegraphics[scale=0.6]{arch.pdf}
\centering
\caption{ROTP architecture. The ROTP is divided into three parts: backbone, neck, and head. When an image of size $W \times H \times 3$ is input into our model, it outputs five feature maps, among which two are heatmaps for predicting the centre points and vertices, and the other three regress the targets' size, coordinate bias, and direction, respectively. Finally, all key points are matched to obtain the final output.
}
\label{fig2}
\end{figure}
\section{Proposed Method}\label{sec3}
Herein, we first elaborate on the instance representation of our method and then describe the SCH generation. Finally, we present details of the proposed ROTP and its loss function.
\begin{figure}[H]
\includegraphics[scale=0.6]{relate.pdf}
\centering
\caption{Current work only considers the angle of the boundary box relative to the image coordinate system, as shown in (a) and (b), but does not consider the real orientation of the object. The red border in (c) represents the groundtruth, and the purple border is the predicted. The predicted border covers the whole target, but the IOU between it and the groundtruth seems large. As shown in (d), a coordinate system was established for each target. The centre point of the target is coordinate point zero, and the angle represents the included angle of the vector between the vertex and the centre point relative to the positive direction of the X-axis.
}
\label{fig3}
\end{figure}
\subsection{Instance Representation}\label{subsec1}
Oriented objects can generally be expressed by two patterns, which can be used interchangeably. The first pattern is $ (x_1, y_1) $, $ (x_2, y_2) $, $ (x_3, y_3) $, and $ (x_4, y_4) $, the four corners in clockwise order; the second is $ (x, y, w, h, \alpha) $, where $ (x, y) $ represents the centre point, $ w $ and $ h $ are the width and height of the target, and $ \alpha $ represents the rotation angle of the target. In this study, oriented objects are modelled with two key points: the centre point $ (x_c, y_c) = (\sum_{i=1}^{4}{x_i}/4, \sum_{i=1}^{4}{y_i}/4) $ and the vertex $ (x_t, y_t) = (\sum_{i=1}^{2}{x_i}/2, \sum_{i=1}^{2}{y_i}/2) $.
As introduced in the previous section, we use the vertex, centre point, and length and width to represent a target. First, $ (x_c^i, y_c^i) $, the coordinate of the centre point, is extracted from the heatmap, where $ i $ is the index of the target. Thereafter, $ (x_t^i, y_t^i) $, the position of the vertex, is extracted. Because dataset labels can be inaccurate, the vertex label is occasionally marked on the background area; the vertex coordinates used for training are therefore scaled to 0.9 times their distance along the line to the centre point. To combine the centre point with the corresponding vertex, we added a prediction head that predicts the direction of the vertex relative to the centre point, as shown in Figure \ref{fig3} (a) and (b). Previous studies only considered the angle of the target bounding box in the image coordinate system and ignored the real orientation of the targets; although this approach decreases the predicted angle range, the angle loss may increase suddenly during calculation \cite{2019Learning}. In this paper, we propose a relative direction that is not based on the image coordinate system but rather on a well-designed coordinate system centred on the object. As shown in Figure \ref{fig3} (d), the relative direction is the included angle between the vector from the centre to the vertex and the positive X-axis. The relative direction is calculated using the following formula:
\begin{small}
\begin{equation}
\theta_i=\left\{\begin{matrix}
\frac{180}{\pi}\arccos\frac{{x_t}^i-{x_c}^i}{\sqrt{({x_t}^i-{x_c}^i)^2 + ({y_t}^i-{y_c}^i)^2}}&{x_t}^i-{x_c}^i\geq 0,\ {y_t}^i-{y_c}^i\geq 0 \\
\frac{180}{\pi}\arccos\frac{{x_t}^i-{x_c}^i}{\sqrt{({x_t}^i-{x_c}^i)^2 + ({y_t}^i-{y_c}^i)^2}}&{x_t}^i-{x_c}^i < 0,\ {y_t}^i-{y_c}^i\geq 0 \\
360 - \frac{180}{\pi}\arccos\frac{{x_t}^i-{x_c}^i}{\sqrt{({x_t}^i-{x_c}^i)^2 + ({y_t}^i-{y_c}^i)^2}}&{x_t}^i-{x_c}^i\leq 0,\ {y_t}^i-{y_c}^i < 0 \\
360 - \frac{180}{\pi}\arccos\frac{{x_t}^i-{x_c}^i}{\sqrt{({x_t}^i-{x_c}^i)^2 + ({y_t}^i-{y_c}^i)^2}}&{x_t}^i-{x_c}^i > 0,\ {y_t}^i-{y_c}^i < 0 \\
\end{matrix}\right.
\end{equation}
\end{small}
The relative direction $ \theta_i $ ranges from 0 to 360. We first establish the vector from the centre point $ (x_c,y_c) $ to the vertex $ (x_t,y_t) $ and calculate the cosine of the angle $ \alpha $ between this vector and the positive X-axis according to Eq.~\ref{eqcosa}. We then obtain the radian through the inverse trigonometric function. Finally, according to the position of the vertex relative to the central point, the vertex is mapped to one of the four quadrants with the central point as the origin, and the relative direction is determined. The advantage of this method is that the angle is treated as a quantity on the same scale as the target's other regression outputs, so it can be regressed directly without additional processing.
\begin{equation}
\cos\alpha=\frac{{x_t}-{x_c}}{\sqrt{({x_t}-{x_c})^2 + ({y_t}-{y_c})^2}}
\label{eqcosa}
\end{equation}
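Concretely, the four branches collapse into two cases depending on the sign of $y_t-y_c$: $\arccos$ already covers both upper quadrants, and the lower quadrants are its reflection. A minimal sketch in Python (the function name is ours; we follow the mathematical convention of Figure \ref{fig3} (d), with the Y-axis pointing up):

```python
import math

def relative_direction(xc, yc, xt, yt):
    """Angle (degrees, in [0, 360)) of the centre-to-vertex vector,
    measured from the positive X-axis."""
    dx, dy = xt - xc, yt - yc
    # cos(alpha) = dx / |v|, as in the cosine equation above
    cos_a = dx / math.hypot(dx, dy)
    angle = math.degrees(math.acos(cos_a))  # in [0, 180]
    # Vertices below the X-axis fall in the lower two quadrants.
    return angle if dy >= 0 else 360.0 - angle
```

Note that in raw image coordinates the Y-axis points down, so a real implementation would flip the sign of $dy$ first.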
In the training stage, the direction of the targets is optimised with a smooth $ L_1 $ loss as follows:
\begin{equation}
L_{D} = \frac{1}{N}\sum_{i=1}^NSmooth_{L_1}(\theta_i - \acute{\theta_i})
\end{equation}
where $ N $ is the number of whole peak elements, $ \theta_i $ refers to the direction of the instance, $ \acute{\theta_i} $ denotes the predicted direction, and $ i $ denotes the index over all objects in a batch. The smooth $ L_1 $ loss is defined as follows:
\begin{equation}
f(x) = \left\{\begin{matrix}
0.5x^2&|x|<1 \\
|x| - 0.5 & otherwise\\
\end{matrix}\right.
\end{equation}
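For reference, the smooth $ L_1 $ penalty above can be written as a one-line helper (the function name is ours): quadratic near zero, linear beyond $|x| = 1$.

```python
def smooth_l1(x):
    """Smooth L1 penalty: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5
```

The two branches meet at $|x| = 1$ with value 0.5 and matching slope, which is what keeps gradients bounded for large errors.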
\begin{figure}[H]
\includegraphics[scale=0.6]{hm.pdf}
\centering
\caption{Large quantities of edge feature information are lost in the Gaussian heatmap when describing a rotating target with a large aspect ratio, as shown in (b) and (d). When the object is very narrow, the model can only learn a small amount of information available near the centre point. Our heatmap method, as shown in (a) and (c), enables the model to learn information at the edges of objects while learning information around the centre point. Panels (e) and (f) show the heatmap with colours.}
\label{fig4}
\end{figure}
\subsection{Solar Corona Heatmap}\label{subsec2}
Existing studies on keypoint detection have mostly used the Gaussian kernel function $ Y_{hm}=e^{-\frac{(x-x_p)^2+(y-y_p)^2}{2\sigma^2}} $ to map objects into a heatmap $ Y\in {[0,1]}^{\frac{W}{d}\times\frac{H}{d}\times C} $, where $ (x_p,y_p) $ represents the coordinates of the key point, $ (x,y) $ is an element on the heatmap, and $ \sigma $ is the radius of the Gaussian kernel. When mapping a remote-sensing object into a Gaussian heatmap, as shown in Figure \ref{fig4} (b) and (d), the heatmap area only accounts for a small part of the ground-truth bounding box when the target is extremely slender (for example, large trucks, bridges, and certain ships). With a large aspect ratio, the radius of the heatmap is small, and the resulting area is small. For long, block-like objects without obvious texture changes, the detector may mislocate the peak at the head or tail during inference, which is detrimental to oriented object detection. Such defects also appear in horizontal heatmap detectors; thus, the detector should be helped to learn the peak value of the heatmap. Centernet\cite{zhou2019objects} performs many data enhancement operations, such as cropping, flipping, and translating, to preserve the accuracy of the detector. For remote sensing images, where objects are small and dense, such data enhancement operations lower the accuracy of the detector. Therefore, this study proposes a spatially aware heatmap to address this problem, as shown in Figure \ref{fig4} (a). As shown in Figure \ref{fig4} (c), a heatmap radius based on half of the longer side can cover the whole object, but in scenes with dense objects it pushes the peaks of surrounding objects into low-confidence areas. Our solar corona heatmap is therefore based on both the short and the long side: it retains the weights around the centre point while still perceiving the head and tail areas. 
The formula used is as follows:
\begin{equation}
H_c = \frac{1}{2}(e^{-\frac{(x_p-x_c)^2+(y_p-y_c)^2}{\mu \times h}}+e^{-\frac{(x_p-x_c)^2+(y_p-y_c)^2}{\mu \times w}})\ (x_p,y_p)\in T
\end{equation}
\begin{equation}
H_t = e^{-\frac{(x_p-x_t)^2+(y_p-y_t)^2}{\mu \times h}}
\end{equation}
where $ H_c $ is the heatmap of the centre point, $ H_t $ is the heatmap of the vertex point, $ w $ is the side length closest in value to twice the distance from the vertex to the centre point, and $ h $ is the other side length. $ T $ represents the entire point group of an instance. In this study, we used $ \mu = 0.125 $ in all experiments. In the training stage, only the peak points are positive; all other points, including those in the Gaussian bulge, are negative. If the model learns only one point, the loss function does not usually converge, because the number of positive samples is insufficient compared with the number of background samples. To deal with this issue, Eq.~\ref{eqhml} is used: the closer a point is to the peak, the higher its confidence, and points beyond a certain range have lower confidence; thus, the gap between the peak point and other points can be measured effectively. A variant focal loss is used to compare the ground truth and the prediction.
\begin{equation}
L_{hm}=-\frac{1}{N_{pos}}\sum_{ps}\left\{\begin{matrix}
(1-\rho_{ps})^\alpha\log(\rho_{ps})&\acute{\rho_{ps}} = 1 \\
(1-\acute{\rho_{ps}})^\beta(\rho_{ps})^\alpha\log(1 - \rho_{ps})&\acute{\rho_{ps}} \neq 1 \\
\end{matrix}\right.
\label{eqhml}
\end{equation}
where $ N_{pos} $ represents the number of pixels covered by the heatmap, $ \rho_{ps} $ represents the predicted confidence, $ \acute{\rho_{ps}} $ represents the ground truth, $ \alpha $ and $ \beta $ are adjustable parameters that are used to ensure that the model can better learn the pixels with different confidence levels; their values are set to 2 and 4, respectively, in this study.
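The $ H_c $ formula above can be evaluated pointwise with a small helper, a sketch under the paper's setting $ \mu = 0.125 $ (the function name is ours). It averages two Gaussian-like falloffs, one scaled by the short side and one by the long side, so the response stays high along the object's long axis:

```python
import math

def sch_value(xp, yp, xc, yc, w, h, mu=0.125):
    """Solar corona heatmap value at (xp, yp) for centre (xc, yc):
    the mean of a short-side- and a long-side-scaled exponential."""
    d2 = (xp - xc) ** 2 + (yp - yc) ** 2
    return 0.5 * (math.exp(-d2 / (mu * h)) + math.exp(-d2 / (mu * w)))
```

At the centre itself ($x_p=x_c$, $y_p=y_c$) both exponentials equal 1, so the peak value is exactly 1, as required for the positive sample.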
\subsection{Offset}\label{subsec3}
In the inference stage, the positions of the maximum-confidence points in the centre heatmaps are extracted as the central positions of the objects, and the positions of the maximum-confidence points in the vertex heatmaps are extracted as the vertices. However, the output heatmaps are four times smaller than the input images. Because heatmap coordinates can only be integers, a floating-point deviation is lost in the mapping process, which affects the positioning of slender oriented objects. To compensate for this loss, we predict the offset map $ O\in {R}^{4\times\frac{W}{d}\times\frac{H}{d}\times C} $. Given the peak points $ \widetilde{c}=(\widetilde{c_x},\widetilde{c_y}) $ and $ \widetilde{t}=(\widetilde{t_x},\widetilde{t_y}) $ in each input image, the floating-point deviation $ O $ is calculated using the following formula:
\begin{equation}
O = [ \frac{\widetilde{c_x}}{d} - \lfloor\frac{\widetilde{c_x}}{d}\rfloor,\frac{\widetilde{c_y}}{d} - \lfloor\frac{\widetilde{c_y}}{d}\rfloor,\frac{\widetilde{t_x}}{d} - \lfloor\frac{\widetilde{t_x}}{d}\rfloor,\frac{\widetilde{t_y}}{d} - \lfloor\frac{\widetilde{t_y}}{d}\rfloor ]
\end{equation}
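The offset is simply the per-coordinate fractional part left over after dividing by the stride $ d $; a minimal sketch (the function name is ours):

```python
def float_offsets(cx, cy, tx, ty, d=4):
    """Sub-pixel offsets lost when mapping image coordinates of the
    centre (cx, cy) and vertex (tx, ty) to the d-times-smaller
    heatmap grid, following the formula for O with d = 4."""
    return [c / d - c // d for c in (cx, cy, tx, ty)]
```

Adding these predicted fractions back to the integer heatmap peaks recovers the original sub-pixel positions.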
In this study, targets of the same category share one vertex float offset and one centre float offset. In the training stage, the offset is optimised with a smooth $ L_1 $ loss as follows.
\begin{equation}
L_{offsets} = \frac{1}{N}\sum_{i=1}^NSmooth_{L_1}(O_i - \acute{O_i})
\end{equation}
\subsection{Framework}\label{subsec4}
Figure \ref{fig2} illustrates the overall pipeline of the proposed ROTP. The network architecture is divided into three parts: the backbone, neck, and output heads. The input image is $ I\in {R}^{H \times W \times 3} $, where $ W $ and $ H $ are its width and height, respectively. The input image is first sent to the backbone for convolution to obtain feature maps; these feature maps are then sent to the neck. In the present study, ResNet101\cite{he2016deep} was selected as the ROTP encoder-decoder. The neck is a modified FPN structure: the FPN extracts feature maps with different receptive fields, and to let the detector handle large and small targets simultaneously, the feature maps are combined with up-sampling before being sent to the heads. ROTP outputs five feature maps of size $ F\in {R}^{\frac{W}{d}\times\frac{H}{d}\times S} $, where $ S $ is the depth of the corresponding output and $ d $ is the output stride, set to four in this study. Of the five output maps, one is the vertex heatmap $ (T\in {R}^{\frac{W}{d}\times\frac{H}{d}\times cls}) $ and another $ (C\in {R}^{\frac{W}{d}\times\frac{H}{d}\times cls}) $ is the centre-point heatmap; the other three are the length and width of the target $ (Reg\in {R}^{\frac{W}{d}\times\frac{H}{d}\times 2}) $, the offsets of the peaks $ (O\in {R}^{\frac{W}{d}\times\frac{H}{d}\times 4}) $, and the direction of the vertex relative to the centre point $ (D\in {R}^{\frac{W}{d}\times\frac{H}{d}\times 1}) $.
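As a bookkeeping aid, the shapes of the five output maps follow directly from the input size and the stride $ d $; a sketch (map names follow the paper's notation, the helper itself is ours):

```python
def output_shapes(W, H, cls, d=4):
    """Spatial shapes of ROTP's five output maps for a W x H input,
    with output stride d and cls object categories."""
    w, h = W // d, H // d
    return {
        "T":   (w, h, cls),  # vertex heatmap
        "C":   (w, h, cls),  # centre-point heatmap
        "Reg": (w, h, 2),    # long and short side lengths
        "O":   (w, h, 4),    # sub-pixel offsets of the peaks
        "D":   (w, h, 1),    # direction of vertex relative to centre
    }
```

For a $512 \times 512$ input with 15 categories, each head therefore predicts on a $128 \times 128$ grid.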
\subsection{Loss Function}\label{subsec5}
In the training stage, the regression loss $ L_{Reg} $ for the width and height of the objects is also optimised with a smooth $ L_1 $ loss. The loss function of the proposed method is defined as follows:
\begin{equation}
L=\lambda_0 L_{ht}+\lambda_1 L_{hc}+\lambda_2 L_{Reg}+\lambda_3 L_{offsets}+\lambda_4 L_{D}
\end{equation}
$ \lambda_0 $, $ \lambda_1 $, $ \lambda_2 $, $ \lambda_3 $, and $ \lambda_4 $ are the hyperparameters that balance the loss terms; $ \lambda_3 $ is set to 0.1, and the rest are set to 1.0. All losses except for $ L_{ht} $ and $ L_{hc} $ are calculated only where $ \rho^\prime $ is 1.0.
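The weighted combination above is a plain sum; a minimal sketch with the stated weights (the function name is ours):

```python
def total_loss(l_ht, l_hc, l_reg, l_offsets, l_d,
               lambdas=(1.0, 1.0, 1.0, 0.1, 1.0)):
    """Weighted sum of the five ROTP losses; lambda_3 = 0.1 down-weights
    the offset term, and all other weights are 1.0 (values from the paper)."""
    l0, l1, l2, l3, l4 = lambdas
    return l0 * l_ht + l1 * l_hc + l2 * l_reg + l3 * l_offsets + l4 * l_d
```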
\section{Experiment}\label{sec4}
\subsection{Datasets}\label{subsec1}
The HRSC2016, UCAS-AOD, and DOTA datasets were used in the present study. The annotations of HRSC2016 include vertex and centre point positions, and we used this information directly for training. In the UCAS-AOD dataset, however, some instances were not annotated with the four points running clockwise from the top-left corner of the bounding box, which misled our method. We therefore re-annotated these targets with respect to the long and short sides of the original objects, marking the vertex first and then the centre point. We used the DOTA dataset to verify the effectiveness of our method across various categories.
\noindent\textbf{HRSC2016}\cite{liu2016ship}
This dataset is a remote sensing ship detection dataset containing 1061 images annotated with rotated bounding boxes, ranging in size from $ 300 \times 300 $ to $ 1500 \times 900 $. The standard mAP evaluation protocol was used for HRSC2016.
\noindent\textbf{UCAS-AOD}\cite{zhu2015orientation}
This dataset comprises 1000 aircraft images with 7482 instances and 510 car images with 7114 instances. The resolution of UCAS-AOD images is approximately $ 1280 \times 700 $; the dataset provides both horizontal and slanted label boxes. We used only the slanted label boxes for training.
\noindent\textbf{DOTA}\cite{xia2018dota}
This dataset contains 2806 aerial images in 15 categories. The smallest image is $ 800 \times 800 $ and the largest is $ 4000 \times 4000 $. The training, validation, and test sets are divided in a $ 3:1:2 $ ratio. We cropped the images into $ 1024 \times 1024 $ patches with the gap set to 200 pixels as the step size.
\subsection{Implementation Details}\label{subsec2}
In the training stage, ROTP used ResNet101\cite{he2016deep} as the backbone; the input resolution was set to $ 800 \times 800 $, giving output feature maps of size $ 200\times200 $. The image sizes of the training data differed. To ensure that targets were not deformed during image scaling, we scaled each image without changing its aspect ratio and zero-filled the rest. For the UCAS-AOD dataset, $ 70\% $ of the data were shuffled into the training set and $ 30\% $ into the test set. For the DOTA dataset, direct scaling would cause object loss owing to the high image resolution; we therefore cropped the DOTA images to $ 1024\times1024 $ with the gap set to 200. For all the data, we used random rotation, random flipping, pixel transformation, and other processing methods to augment the data. Adam\cite{kingma2014adam} served as the optimiser for our network with a warmup policy, a batch size of 4, and training until loss convergence. In the testing stage, the test images were scaled in the same way as the training images. The top 200 key points were selected, and the confidence threshold was set to 0.25. For the other datasets, we calculated AP with the default IoU threshold of PASCAL VOC\cite{everingham2010pascal}, that is, 0.5; this accuracy measure is widely used in other data mining and machine learning tasks \cite{IJCAT2012,6729567}. All experiments were run with PyTorch on an Nvidia RTX 2080Ti GPU.
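The aspect-ratio-preserving scaling with zero filling can be sketched as follows; the nearest-neighbour resize via index sampling is our dependency-free stand-in for a real resize (e.g. cv2.resize), and the function name is illustrative.

```python
import numpy as np

def letterbox(img, size=800):
    """Scale an HxWxC image so its longer side equals `size`, keeping
    the aspect ratio, then zero-pad the remainder of the square canvas."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbour resize by index sampling (sketch only)
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    out = np.zeros((size, size, img.shape[2]), dtype=img.dtype)
    out[:nh, :nw] = resized  # zero filling for the rest
    return out, scale
```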
\begin{figure}[H]%
\centering
\includegraphics[width=0.8\textwidth]{result.pdf}
\caption{Visualization of the detection results of DOTA, UCAS-AOD and HRSC2016.
}
\label{fig6}
\end{figure}
\subsection{Comparisons with State-of-the-Art Methods}\label{subsec3}
\begin{table}[h]
\begin{center}
\caption{Comparisons on DOTA between ROTP and some classical oriented detection methods.}\label{tab1}%
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}lllllllllllllllll@{}}
\toprule
Method & Pl & Bd &Br &Gft &Sv &Lv &Sh &Tc &Bc &St &Sbf &Ra &Ha &Sp &He &mAP \\
\midrule
R2CNN\cite{jiang2017r2cnn} &80.94 &65.67 &35.34 &67.44 &59.92 &50.91 &55.81 &90.67 &66.92 &72.39 &55.06 &52.23 &55.14 &53.35 &48.22 &60.67 \\
ICN\cite{azimi2018towards} &81.40 &74.30 &47.70 &70.30 &64.90 &67.80 &70.00 &90.80 &79.10 &78.20 &53.60 &62.90 &67.00 &64.20 &50.20 &68.20 \\
RRPN\cite{ma2018arbitrary} &88.52 &71.20 &31.66 &59.30 &51.85 &56.19 &57.25 &90.81 &72.84 &67.38 &56.69 &52.84 &53.08 &51.94 &53.58 &61.01 \\
R-DFPN\cite{yang2018automatic} &80.92 &65.82 &33.77 &58.94 &55.77& 50.94 &54.78 &90.33 &66.34 &68.66 &48.73 &51.76 &55.10 &51.32 &35.88 &57.94 \\
RoI-Transformer\cite{ding2018learning} &\textbf{88.64} &78.52 &43.44 &\textbf{75.92} &68.81 &73.68 &\textbf{83.59} &90.74 &77.27 &81.46 &\textbf{58.39} &53.54 &62.83 &58.93 &47.67 &69.56 \\
P-RSDet\cite{zhou2020arbitrary} &88.58 &77.84 &\textbf{50.44} &69.29 &71.10 &\textbf{75.79} &78.66 &\textbf{90.88} &80.10 &81.71 &57.92 &\textbf{63.03} &66.30 &\textbf{69.77} &\textbf{63.13} &\textbf{72.30} \\
ROTP &81.17 &\textbf{84.34} &47.33 &62.65 &\textbf{71.42} &71.04 &78.09 &89.93 &\textbf{80.34} &\textbf{84.78} &44.74 &60.81 &\textbf{66.43} &69.14 &62.15 &70.29 \\
\botrule
\end{tabular}}
\end{center}
\end{table}
Table \ref{tab1} compares the performance of ROTP with some well-known one-stage and two-stage oriented object detection methods on DOTA. The accuracy of our method for Bd, Sv, Bc, St, and Ha was $ 5.82\% $, $ 0.32\% $, $ 0.24\% $, $ 3.07\% $, and $ 0.13\% $ higher, respectively, than that of the second-best method. The total mAP was $ 2.01\% $ lower than that of P-RSDet, which is also based on an anchor-free heatmap method. This is because the AP of our model was very low on the Br and Sbf categories, lowering the overall accuracy. In the other categories, our accuracy is competitive with that of the other methods.
\begin{table}[h]
\begin{center}
\begin{minipage}{174pt}
\caption{Mean average precision (\%): comparison between ROTP and other classical oriented object detectors on UCAS-AOD. When calculating mAP, we set the IoU threshold to 0.5 and the confidence threshold to 0.25.}\label{tab2}%
\begin{tabular}{@{}llll@{}}
\toprule
Method & Plane & Car &mAP \\
\midrule
RRPN\cite{ma2018arbitrary} & 88.04 & 74.36 & 81.2\\
$ R^2CNN $\cite{jiang2017r2cnn} &89.76 &78.89 &84.32 \\
R-DFPN\cite{yang2018automatic} &88.91 &81.27 &85.09 \\
X-LineNet\cite{wei2020x} &91.3 &- &- \\
P-RSDet\cite{zhou2020arbitrary} &92.69 &\textbf{87.38} &90.03 \\
ROTP &\textbf{95.42} &84.76 &\textbf{90.09} \\
\botrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
Table \ref{tab2} compares the performance of ROTP with other classical oriented object detectors on the UCAS-AOD dataset. ROTP demonstrated a certain degree of improvement over the other methods in mAP. P-RSDet, which also uses a single-stage heatmap method to detect rotated targets, achieved a $ 90.03\% $ mAP; ROTP was $ 2.73\% $ higher than P-RSDet in the aircraft category but $ 2.62\% $ lower in the car category. We assume this is because P-RSDet crops images to $ 512\times512 $, while we scaled images at equal proportions; our scaling makes small targets even smaller, reducing the detection accuracy for the car category.
\begin{table}[h]
\begin{center}
\caption{Comparisons on HRSC2016 with oriented bounding boxes. All works use the VOC07 method to calculate mAP (\%).}\label{tab3}%
\begin{tabular}{@{}lllllll@{}}
\toprule
Method & $ R^2CNN $\cite{jiang2017r2cnn} & RC1\&RC2\cite{inproceedings} & Axis Learning\cite{xiao2020axis} & RRPN\cite{ma2018arbitrary} &TOSO\cite{feng2020toso} &ROTP \\
\midrule
mAP & 73.03 & 75.7 & 78.15 & 79.08 & 79.29 & \textbf{80.6} \\
\botrule
\end{tabular}
\end{center}
\end{table}
Table \ref{tab3} compares the performance of ROTP on HRSC2016 with other mainstream deep-learning-based methods. R2CNN adds multi-scale RoI pooling and inclined-box detection to Fast R-CNN, achieving an AP of 73.03. RRPN introduced several tilted anchors and reached 79.08 AP, while our model reached 80.6 AP, a certain improvement over the above algorithms. However, our method also has shortcomings: it requires many data augmentation strategies to ensure a sufficient number of positive samples.
\subsection{Ablation Studies}\label{subsec3}
We examine the availability of the proposed method from three perspectives: the separate use of angles or angles with key points, different encoder-decoders, and different heatmap generation methods.
\begin{table}[h]
\begin{center}
\begin{minipage}{174pt}
\caption{Comparison of different encoder-decoders.}\label{tab4}%
\begin{tabular}{@{}llll@{}}
\toprule
Encoder-Decoder & Plane & Car &mAP \\
\midrule
Unet\cite{2015U} & 93.69 &86.15 &89.92\\
104-Hourglass\cite{2016Stacked} & 96.34 &88.56 &92.45 \\
ROTP &95.42 &84.76 &90.09 \\
\botrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
\noindent\textbf{Different encoders and decoders:}
In ROTP, we used the improved ResNet101 with FPN as the encoder-decoder, as shown in Table \ref{tab4}. To test the impact of different encoder-decoders on our model, we replaced the previous heads architecture with the network architectures of Unet and 104-Hourglass and conducted experiments on the UCAS-AOD dataset. As the table shows, the AP of our codec is $ 1.73\% $ higher than that of Unet in the aircraft category, but Unet is $ 1.39\% $ higher than our codec in the car category. This result is caused by the network structure. To keep ROTP robust when detecting targets at different scales, the network combines different feature maps from the FPN; this works well on the airplane images. In the car images, however, the size variation of car targets is small, and they can be regarded as small targets. After resizing, a car becomes smaller still, and a feature map with a large receptive field loses information about small targets, which introduces interference into the combined feature map. Unet's convolution processing does not suffer from this shortcoming. In addition, using Hourglass as the backbone improves the mAP by $ 2.36\% $ over that of ResNet101. These experiments show that the proposed improved network structure is effective.
\begin{figure}[H]
\includegraphics[scale=0.6]{ablation.pdf}
\centering
\caption{Comparison of keypoint matching. (a) Direction predicted without matching the corresponding vertices; (b) direction predicted with vertex matching.}
\label{fig7}
\end{figure}
\begin{table}[h]
\begin{center}
\begin{minipage}{174pt}
\caption{Comparison between angles and keypoints with angles}\label{tab5}%
\begin{tabular}{@{}llll@{}}
\toprule
KeyPoints Match & Plane & Car &mAP \\
\midrule
Angles & 93.69 &86.15 &89.92\\
Angles and Keypoints &95.42 &84.76 &90.09\\
\botrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
\noindent\textbf{KeyPoints Match:}
In this study, we added vertex prediction to the direction prediction. To demonstrate the benefit of the proposed method, experiments were carried out on UCAS-AOD. As shown in Table \ref{tab5}, when the keypoint matching method was adopted, the mAP was $ 0.17\% $ higher than that of the angle-only prediction method. As shown in Figure \ref{fig7}, the direction prediction alone was not accurate for some objects, leading to prediction failures. Therefore, the keypoint matching method is effective for ROTP.
\begin{table}[h]
\begin{center}
\begin{minipage}{174pt}
\caption{Comparison between Gauss Heatmap and Ours}\label{tab6}%
\begin{tabular}{@{}llll@{}}
\toprule
Heatmap Generation & Plane & Car &mAP \\
\midrule
Gauss Heatmap &92.26 &79.62 &85.94\\
Ours &95.42 &84.76 &90.09\\
\botrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
\noindent\textbf{Difference in Heatmaps:}
A comparison of the Solar Corona and Gaussian heatmaps is shown in Table \ref{tab6}. In this comparison experiment, oriented target detection was carried out on the UCAS-AOD dataset with the different heatmap generation schemes. Owing to the high aspect ratios and differing sizes of oriented targets, Gaussian heatmaps yield a low AP on small targets, as shown in Figure \ref{fig4}. It may also be that originally small targets become smaller when the input images are resized. Using the Solar Corona heatmap improves the mAP by $ 4.15\% $ over the Gaussian heatmap, demonstrating the effectiveness of our heatmap generation method.
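For reference, the Gaussian baseline in this comparison is the standard CenterNet-style heatmap peak, sketched below; the Solar Corona variant replaces this kernel, and its exact form is defined earlier in the paper, not here.

```python
import numpy as np

def gaussian_heatmap(shape, centre, sigma):
    """Standard isotropic Gaussian heatmap peak (the baseline scheme):
    value 1.0 at the centre, decaying with squared distance."""
    h, w = shape
    y, x = np.ogrid[:h, :w]
    cy, cx = centre
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
```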
\section{Conclusion}\label{sec5}
In the present study, an instance generation method is proposed for rotated target representation, together with solar corona heatmap (SCH) generation for multi-oriented object perception. ROTP, a single-stage rotated object detector via two points with a solar corona heatmap, was proposed for rotated object detection. By detecting the vertex and centre point simultaneously, ROTP avoids the position offset caused by directly predicting the angle, and the model achieves good results without anchors or complex anchor pre-design. This enables ROTP to quickly detect densely arranged objects of different sizes, such as cars, boats, and small planes. Experimental results on multiple datasets indicate that modelling rotation detectors with vertices and centre points is effective. However, this study also has shortcomings. If the central area of a target is covered, ROTP cannot capture the real centre position of the instance, resulting in detection failure. In addition, our attempt to use ROTP to detect scene text images\cite{2015ICDAR2015,karatzas2013icdar} did not yield satisfactory results: the real centre point of a text line cannot be captured with a single centre heatmap. In fact, detecting individual characters is the correct way to use a single-stage heatmap for scene text. In future work, researchers are encouraged to annotate the vertices of rotated targets or other useful key points, and we hope to extend this work to 3D detection and tracking.
\section{Introduction}
Rapoport-Zink spaces (short RZ-spaces) are moduli spaces of $p$-divisible groups endowed with additional structure.
In \cite{RZ96}, Rapoport and Zink study two major classes of RZ-spaces, called (EL) type and (PEL) type.
The abbreviations (EL) and (PEL) indicate, in analogy to the case of Shimura varieties, whether the extra structure comes in form of \textbf{E}ndomorphisms and \textbf{L}evel structure or in form of \textbf{P}olarizations, \textbf{E}ndomorphisms and \textbf{L}evel structure.
\cite{RZ96} develops a theory of these spaces, including important theorems about the existence of local models and non-archimedean uniformization of Shimura varieties, for the (EL) type and for the (PEL) type whenever $p \neq 2$.
The blanket assumption $p \neq 2$ made by Rapoport and Zink in the (PEL) case is by no means cosmetic in nature, but originates in various serious difficulties that arise for $p=2$.
However, we recall that one can still use their definition in that case to obtain ``naive'' moduli spaces that still satisfy basic properties like being representable by a formal scheme.
In this paper, we construct the $2$-adic Rapoport-Zink space $\mathcal{N}_E$ corresponding to the group of unitary similitudes of size $2$ relative to any (wildly) ramified quadratic extension $E|F$, where $F|\mathbb{Q}_2$ is a finite extension.
It is given as the closed formal subscheme of the corresponding naive RZ-space $\mathcal{N}_E^{\mathrm{naive}}$ described by the so-called ``straightening condition'', which is defined below.
The main result of this paper is a natural isomorphism $\eta: \mathcal{M}_{Dr} \isoarrow \mathcal{N}_E$, where $\mathcal{M}_{Dr}$ is Deligne's formal model of the Drinfeld upper halfplane (\emph{cf.}\ \cite{BC91}).
This result is in analogy with \cite{KR14}, where Kudla and Rapoport construct a corresponding isomorphism for $p \neq 2$ and also for $p=2$ when $E|F$ is an unramified extension.
The formal scheme $\mathcal{M}_{Dr}$ solves a certain moduli problem of $p$-divisible groups and, in this way, it carries the structure of an RZ-space of (EL) type.
In particular, $\mathcal{M}_{Dr}$ is defined even for $p=2$.
As in loc.\ cit., there are natural group actions by $\SL_2(F)$ and the split $\SU_2(F)$ on the spaces $\mathcal{M}_{Dr}$ and $\mathcal{N}_E$, respectively.
The isomorphism $\eta$ is hence a geometric realization of the exceptional isomorphism of these groups.
As a consequence, one cannot expect a similar result in higher dimensions.
Of course, the existence of ``good'' RZ-spaces is still expected, but a general definition will probably need a different approach.
The study of residue characteristic $2$ is interesting and important for the following reasons:
First of all, from the general philosophy of RZ-spaces and, more generally, of local Shimura varieties \cite{RV14}, it follows that there should be a uniform approach for all primes $p$.
In this sense, the present paper is in the same spirit as the recent constructions of RZ-spaces of Hodge type of W. Kim \cite{Kim}, Howard and Pappas \cite{HP} and Bültel and Pappas \cite{BP}.
Second, Rapoport-Zink spaces have been used to determine the arithmetic intersection numbers of special cycles on Shimura varieties \cite{KRY}; in this kind of problem, it is necessary to deal with all places, even those of residue characteristic $2$.
Finally, studying the cases of residue characteristic $2$ also throws light on the cases previously known.
In the specific case at hand, the methods we develop in the present paper also give a simplification of the proof for $p\neq 2$ of Kudla and Rapoport \cite{KR14}, see Remark \ref{POL_p>2} \eqref{POL_p>2eq}.
\smallskip
We will now explain the results of this paper in greater detail.
Let $F$ be a finite extension of $\mathbb{Q}_2$ and $E|F$ a ramified quadratic extension.
Following \cite{Jac62}, we consider the following dichotomy for this extension (see section \ref{LA}):
\begin{itemize}
\item[(R-P)] There is a uniformizer $\pi_0 \in F$, such that $E = F[\Pi]$ with $\Pi^2 + \pi_0 = 0$.
Then the rings of integers $O_F$ of $F$ and $O_E$ of $E$ satisfy $O_E = O_F[\Pi]$.
\item[(R-U)] $E|F$ is given by an Eisenstein equation of the form $\Pi^2 - t\Pi + \pi_0 = 0$.
Here, $\pi_0$ is again a uniformizer in $F$ and $t \in O_F$ satisfies $\pi_0|t|2$. We still have $O_E = O_F[\Pi]$.
Note that in this case $E|F$ is generated by a square root of the unit $1-4\pi_0/t^2$ in $F$.
\end{itemize}
An example for an extension of type (R-P) is $\mathbb{Q}_2(\sqrt{-2})| \mathbb{Q}_2$, whereas $\mathbb{Q}_2(\sqrt{-1})| \mathbb{Q}_2$ is of type (R-U).
Note that for $p>2$, any ramified quadratic extension over $\mathbb{Q}_p$ is of the form (R-P).
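As a quick sanity check (ours, not part of the source), the two examples satisfy the stated Eisenstein equations:

```latex
% (R-P): E = \mathbb{Q}_2(\sqrt{-2}), with \Pi = \sqrt{-2} and \pi_0 = 2:
\[ \Pi^2 + 2 = 0. \]
% (R-U): E = \mathbb{Q}_2(\sqrt{-1}), with \Pi = 1 + i, \pi_0 = 2, t = 2:
\[ \Pi^2 = 2i = 2\Pi - 2, \quad\text{i.e.}\quad \Pi^2 - 2\Pi + 2 = 0, \]
% and indeed 1 - 4\pi_0/t^2 = 1 - 2 = -1, so E|F is generated by
% a square root of -1, as the dichotomy predicts.
```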
Our results in the cases (R-P) and (R-U) are similar, but different.
We first describe the results in the case (R-P).
Let $E|F$ be of type (R-P).
\smallskip
We first define a naive moduli problem $\mathcal{N}_E^{\mathrm{naive}}$, that merely copies the definition from $p \neq 2$ (\emph{cf.}\ \cite{KR14}).
Let $\breve{F}$ be the completion of the maximal unramified extension of $F$ and $\breve{O}_F$ its ring of integers.
Then $\mathcal{N}_E^{\mathrm{naive}}$ is a set-valued functor on $\Nilp$, the category of $\breve{O}_F$-schemes where $\pi_0$ is locally nilpotent.
For $S \in \Nilp$, the set $\mathcal{N}_E^{\mathrm{naive}}(S)$ is the set of equivalence classes of tuples $(X,\iota,\lambda,\varrho)$.
Here, $X/S$ is a formal $O_F$-module of height $4$ and dimension $2$, equipped with an action $\iota: O_E \to \End(X)$.
This action satisfies the Kottwitz condition of signature $(1,1)$, \emph{i.e.}, for any $\alpha \in O_E$, the characteristic polynomial of $\iota(\alpha)$ on $\Lie X$ is given by
\begin{equation*}
\charp(\Lie X, T \mid \iota(\alpha)) = (T - \alpha)(T - \overbar{\alpha}).
\end{equation*}
Here, $\alpha \mapsto \overbar{\alpha}$ denotes the Galois conjugation of $E|F$.
The right hand side of this equation is a polynomial with coefficients in $\mathcal{O}_S$ via the structure map $O_F \inj \breve{O}_F \to \mathcal{O}_S$.
The third entry $\lambda$ is a principal polarization $\lambda: X \to X^{\vee}$ such that the induced Rosati involution satisfies $\iota(\alpha)^{\ast} = \iota(\overbar{\alpha})$ for all $\alpha \in O_E$. (Here, $X^{\vee}$ is the dual of $X$ as formal $O_F$-module.)
Finally, $\varrho$ is a quasi-isogeny of height $0$ (and compatible with all previous data) to a fixed framing object $(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$ over $\overbar{k} = \breve{O}_F / \pi_0$.
This framing object is unique up to isogeny under the condition that
\begin{equation*}
\{ \varphi \in \End^0(\mathbb{X}, \iota_{\mathbb{X}}) \mid \varphi^{\ast}(\lambda_{\mathbb{X}}) = \lambda_{\mathbb{X}} \} \simeq \U(C,h),
\end{equation*}
for a split $E|F$-hermitian vector space $(C,h)$ of dimension $2$, see Lemma \ref{RP_frnaive}.
Recall that this is exactly the definition used in loc.\ cit.\ for the ramified case with $p > 2$.
There, $\mathcal{N}_E = \mathcal{N}_E^{\mathrm{naive}}$ and we have a natural isomorphism
\begin{equation*}
\eta: \mathcal{M}_{Dr} \isoarrow \mathcal{N}_E,
\end{equation*}
where $\mathcal{M}_{Dr}$ is the Drinfeld moduli problem mentioned above.
However, for $p=2$, it turns out that the definition of $\mathcal{N}_E^{\mathrm{naive}}$ is not the ``correct'' one in the sense that it is not isomorphic to the Drinfeld moduli problem.
Hence this naive definition of the moduli space is not in line with the results from \cite{KR14} and the general philosophy of (conjectural) local Shimura varieties (see \cite{RV14}).
In order to remedy this, we will describe a new condition on $\mathcal{N}_E^{\mathrm{naive}}$, which we call the \emph{straightening condition}, and show that this cuts out a closed formal subscheme $\mathcal{N}_E \subseteq \mathcal{N}_E^{\mathrm{naive}}$ that is naturally isomorphic to $\mathcal{M}_{Dr}$.
Interestingly, the straightening condition is not trivial on the rigid-analytic generic fiber of $\mathcal{N}_E^{\mathrm{naive}}$ (as originally assumed by the author), but it cuts out an (admissible) open and closed subspace, see Remark \ref{RP_genfiber}.
We would like to explicate the defect of the naive moduli space.
For this, let us recall the definition of $\mathcal{M}_{Dr}$.
It is a functor on $\Nilp$, mapping a scheme $S$ to the set $\mathcal{M}_{Dr}(S)$ of equivalence classes of tuples $(X,\iota_B,\varrho)$.
Again, $X/S$ is a formal $O_F$-module of height $4$ and dimension $2$.
Let $B$ be the quaternion division algebra over $F$ and $O_B$ its ring of integers.
Then $\iota_B$ is an action of $O_B$ on $X$, satisfying the \emph{special} condition of Drinfeld (see \cite{BC91} or section \ref{RP3} below).
The last entry $\varrho$ is an $O_B$-linear quasi-isogeny of height $0$ to a fixed framing object $(\mathbb{X}, \iota_{\mathbb{X},B})$ over $\overbar{k}$.
This framing object is unique up to isogeny (\emph{cf.}\ \cite[II.\ Prop.\ 5.2]{BC91}).
Fix an embedding $O_E \inj O_B$ and consider the involution $b \mapsto b^{\ast} = \Pi b' \Pi^{-1}$ on $B$, where $b \mapsto b'$ is the standard involution.
By Drinfeld (see Proposition \ref{RP_Dr} below), there exists a principal polarization $\lambda_{\mathbb{X}}$ on the framing object $(\mathbb{X}, \iota_{\mathbb{X},B})$ of $\mathcal{M}_{Dr}$, such that the induced Rosati involution satisfies $\iota_{\mathbb{X},B}(b)^{\ast} = \iota_{\mathbb{X},B}(b^{\ast})$ for all $b \in O_B$.
This polarization is unique up to a scalar in $O_F^{\times}$.
Furthermore, for any $(X,\iota_B,\varrho) \in \mathcal{M}_{Dr}(S)$, the pullback $\lambda = \varrho^{\ast}(\lambda_{\mathbb{X}})$ is a principal polarization on $X$.
We now set
\begin{equation*}
\eta(X,\iota_B,\varrho) = (X,\iota_B|_{O_E},\lambda,\varrho).
\end{equation*}
By Lemma \ref{RP_clemb}, this defines a closed embedding $\eta: \mathcal{M}_{Dr} \inj \mathcal{N}_E^{\mathrm{naive}}$.
But $\eta$ is far from being an isomorphism, as the following proposition shows:
\begin{prop}
The induced map $\eta(\overbar{k}): \mathcal{M}_{Dr}(\overbar{k}) \to \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ is not surjective.
\end{prop}
Let us sketch the proof here.
Using Dieudonn\'e theory, we can write $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ naturally as a union
\begin{equation*}
\mathcal{N}_E^{\mathrm{naive}}(\overbar{k}) = \smashoperator[r]{\bigcup_{\Lambda \subseteq C}} \mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k}),
\end{equation*}
where the union runs over all $O_E$-lattices $\Lambda$ in the hermitian vector space $(C,h)$ that are $\Pi^{-1}$-modular, \emph{i.e.}, the dual $\Lambda^{\sharp}$ of $\Lambda$ with respect to $h$ is given by $\Lambda = \Pi^{-1} \Lambda^{\sharp}$ (see Lemma \ref{RP_IS}).
By Jacobowitz (\cite{Jac62}), there exist different types (\emph{i.e.}, $\U(C,h)$-orbits) of such lattices $\Lambda \subseteq C$ that are parametrized by their norm ideal $\Nm (\Lambda) = \langle \{ h(x,x) | x \in \Lambda \} \rangle \subseteq F$.
In the case at hand, $\Nm(\Lambda)$ can be any ideal with $2 O_F \subseteq \Nm(\Lambda) \subseteq O_F$.
It is easily checked (see Chapter \ref{LA}) that the norm ideal of $\Lambda$ is minimal, that is $\Nm (\Lambda) = 2 O_F$, if and only if $\Lambda$ admits a basis consisting of isotropic vectors, and hence we call these lattices \emph{hyperbolic}.
Now, the image under $\eta$ of $\mathcal{M}_{Dr}(\overbar{k})$ is the union of all lines $\mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k})$ where $\Lambda \subseteq C$ is hyperbolic.
This is a consequence of Remark \ref{RP_rmkNE} and Theorem \ref{RP_thm} below.
On the framing object $(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$ of $\mathcal{N}_E^{\mathrm{naive}}$, there exists a principal polarization $\widetilde{\lambda}_{\mathbb{X}}$ such that the induced Rosati involution is the identity on $O_E$.
This polarization is unique up to a scalar in $O_E^{\times}$ (see Thm.\ \ref{POL_thm} \eqref{POL_thm1}).
On $C$, the polarization $\widetilde{\lambda}_{\mathbb{X}}$ induces an $E$-linear alternating form $b$, such that $\det b$ and $\det h$ differ only by a unit (for a fixed basis of $C$).
After possibly rescaling $b$ by a unit in $O_E^{\times}$, a $\Pi^{-1}$-modular lattice $\Lambda \subseteq C$ is hyperbolic if and only if $b(x,y) + h(x,y) \in 2 O_F$ for all $x,y \in \Lambda$.
This enables us to describe the ``hyperbolic'' points of $\mathcal{N}_E^{\mathrm{naive}}$ (\emph{i.e.}, those that lie on a projective line corresponding to a hyperbolic lattice $\Lambda \subseteq C$) in terms of polarizations.
We now formulate the closed condition that characterizes $\mathcal{N}_E$ as a closed formal subscheme of $\mathcal{N}_E^{\mathrm{naive}}$.
For a suitable choice of $(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$ and $\widetilde{\lambda}_{\mathbb{X}}$, we may assume that $\frac{1}{2}(\lambda_{\mathbb{X}} + \widetilde{\lambda}_{\mathbb{X}})$ is a polarization on $\mathbb{X}$.
The following definition is a reformulation of Definition \ref{RP_strdef}.
\begin{defn}
Let $S \in \Nilp$.
An object $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$ satisfies the \emph{straightening} condition, if $\lambda_1 = \frac{1}{2} (\lambda + \widetilde{\lambda})$ is a polarization on $X$.
Here, $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$.
\end{defn}
We remark that $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$ is a polarization on $X$.
This is a consequence of Theorem \ref{POL_thm}, which states the existence of certain polarizations on points of a larger moduli space $\mathcal{M}_E$ containing $\mathcal{N}_E^{\mathrm{naive}}$, see below.
For $S \in \Nilp$, let $\mathcal{N}_E(S) \subseteq \mathcal{N}_E^{\mathrm{naive}}(S)$ be the subset of all tuples $(X,\iota,\lambda,\varrho)$ that satisfy the straightening condition.
By \cite[Prop.\ 2.9]{RZ96}, this defines a closed formal subscheme $\mathcal{N}_E \subseteq \mathcal{N}_E^{\mathrm{naive}}$.
An application of Drinfeld's Proposition (Proposition \ref{RP_Dr}, see also \cite{BC91}) shows that the image of $\mathcal{M}_{Dr}$ under $\eta$ lies in $\mathcal{N}_E$.
The main theorem in the (R-P) case can now be stated as follows, see Theorem \ref{RP_thm}.
\begin{thm}
$\eta: \mathcal{M}_{Dr} \to \mathcal{N}_E$ is an isomorphism of formal schemes.
\end{thm}
This concludes our discussion of the (R-P) case.
From now on, we assume that $E|F$ is of type (R-U).
In the case (R-U), we have to make some adaptations to $\mathcal{N}_E^{\mathrm{naive}}$.
For $S \in \Nilp$, let $\mathcal{N}_E^{\mathrm{naive}}(S)$ be the set of equivalence classes of tuples $(X,\iota,\lambda,\varrho)$ with $(X,\iota)$ as in the \mbox{(R-P)} case.
But now, the polarization $\lambda: X \to X^{\vee}$ is supposed to have kernel $\ker \lambda = X[\Pi]$ (in contrast to the (R-P) case, where $\lambda$ is a principal polarization).
As before, the Rosati involution of $\lambda$ induces the conjugation on $O_E$.
There exists a framing object $(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$ over $\Spec \overbar{k}$ for $\mathcal{N}_E^{\mathrm{naive}}$, which is unique up to isogeny under the condition that
\begin{equation*}
\{ \varphi \in \End^0(\mathbb{X}, \iota_{\mathbb{X}}) \mid \varphi^{\ast}(\lambda_{\mathbb{X}}) = \lambda_{\mathbb{X}} \} \simeq \U(C,h),
\end{equation*}
where $(C,h)$ is a split $E|F$-hermitian vector space of dimension $2$ (see Proposition \ref{RU_frnaive}).
Finally, $\varrho$ is a quasi-isogeny of height $0$ from $X$ to $\mathbb{X}$, respecting all structure.
Fix an embedding $E \inj B$.
Using some subtle choices of elements in $B$ (these are described in Lemma \ref{LA_quat} \eqref{LA_quatRU}) and by Drinfeld's Proposition, we can construct a polarization $\lambda$ as above for any $(X,\iota_B,\varrho) \in \mathcal{M}_{Dr}(S)$.
This induces a closed embedding
\begin{equation*}
\eta: \mathcal{M}_{Dr} \to \mathcal{N}_E^{\mathrm{naive}}, (X,\iota_B,\varrho) \mapsto (X,\iota_B|_{O_E},\lambda,\varrho).
\end{equation*}
We can write $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ as a union of projective lines,
\begin{equation*}
\mathcal{N}_E^{\mathrm{naive}}(\overbar{k}) = \smashoperator[r]{\bigcup_{\Lambda \subseteq C}} \mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k}),
\end{equation*}
where the union now runs over all selfdual $O_E$-lattices $\Lambda \subseteq (C,h)$ with $\Nm (\Lambda) \subseteq \pi_0 O_F$.
As in the (R-P) case, these lattices $\Lambda \subseteq C$ are classified up to isomorphism by their norm ideal $\Nm(\Lambda)$.
Since $\Lambda$ is selfdual with respect to $h$, the norm ideal can be any ideal satisfying $t O_F \subseteq \Nm(\Lambda) \subseteq O_F$.
We call $\Lambda$ \emph{hyperbolic} when the norm ideal is minimal, \emph{i.e.}, $\Nm (\Lambda) = t O_F$.
Equivalently, the lattice $\Lambda$ has a basis consisting of isotropic vectors.
Recall that here $t$ is the element showing up in the Eisenstein equation for the (R-U) extension $E|F$ and that $\pi_0|t|2$.
Hence there exists at least one type of selfdual lattice $\Lambda \subseteq C$ with $\Nm (\Lambda) \subseteq \pi_0 O_F$.
In the case (R-U), it may happen that $|t| = |\pi_0|$, in which case all lattices $\Lambda$ in the description of $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ are hyperbolic.
The image of $\mathcal{M}_{Dr}(\overbar{k})$ under $\eta$ in $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ is the union of all projective lines corresponding to hyperbolic lattices.
Unless $|t| = |\pi_0|$, it follows that $\eta$ is not surjective on $\overbar{k}$-points and thus $\eta$ cannot be an isomorphism.
For the case $|t| = |\pi_0|$, we will show that $\eta$ is an isomorphism on reduced loci $(\mathcal{M}_{Dr})_{\red} \isoarrow (\mathcal{N}_E^{\mathrm{naive}})_{\red}$ (see Remark \ref{RU_rmkNE}), but $\eta$ is not an isomorphism of formal schemes.
This follows from the non-flatness of the deformation ring for certain points of $\mathcal{N}_E^{\mathrm{naive}}$, see section \ref{LM_naive}.
On the framing object $(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$ of $\mathcal{N}_E^{\mathrm{naive}}$, there exists a polarization $\widetilde{\lambda}_{\mathbb{X}}$ such that $\ker \widetilde{\lambda}_{\mathbb{X}} = \mathbb{X}[\Pi]$ and such that the Rosati involution induces the identity on $O_E$.
After a suitable choice of $(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$ and $\widetilde{\lambda}_{\mathbb{X}}$, we may assume that $\frac{1}{t} (\lambda_{\mathbb{X}} + \widetilde{\lambda}_{\mathbb{X}})$ is a polarization on $\mathbb{X}$.
The straightening condition for the (R-U) case is given as follows (see Definition \ref{RU_strdef}).
\begin{defn}
Let $S \in \Nilp$.
An object $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$ satisfies the \emph{straightening} condition if $\lambda_1 = \frac{1}{t}(\lambda + \widetilde{\lambda})$ is a polarization on $X$.
Here, $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$.
\end{defn}
Note that $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$ is a polarization on $X$ by Theorem \ref{POL_thm}.
The straightening condition defines a closed formal subscheme $\mathcal{N}_E \subseteq \mathcal{N}_E^{\mathrm{naive}}$ that contains the image of $\mathcal{M}_{Dr}$ under $\eta$.
The main theorem in the (R-U) case can now be stated as follows, compare Theorem \ref{RU_thm}.
\begin{thm}
$\eta: \mathcal{M}_{Dr} \to \mathcal{N}_E$ is an isomorphism of formal schemes.
\end{thm}
When formulating the straightening condition in the (R-U) and the (R-P) case, we mentioned that $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$ is a polarization for any $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$.
This fact is a corollary of Theorem \ref{POL_thm}, which states the existence of this polarization in the following more general setting.
Let $p$ be any prime, $F|\mathbb{Q}_p$ a finite extension, and $E|F$ an arbitrary quadratic extension.
We consider the following moduli space $\mathcal{M}_E$ of (EL) type.
For $S \in \Nilp$, the set $\mathcal{M}_E(S)$ consists of equivalence classes of tuples $(X,\iota_E,\varrho)$, where $X$ is a formal $O_F$-module of height $4$ and dimension $2$ and $\iota_E$ is an $O_E$-action on $X$ satisfying the Kottwitz condition of signature $(1,1)$ as above.
The entry $\varrho$ is an $O_E$-linear quasi-isogeny of height $0$ to a supersingular framing object $(\mathbb{X}, \iota_{\mathbb{X},E})$.
The points of $\mathcal{M}_E$ are equipped with polarizations in the following natural way, see Theorem \ref{POL_thm}.
\begin{thm} \label{IN_POL}
\begin{enumerate}
\item \label{IN_POL1} There exists a principal polarization $\widetilde{\lambda}_{\mathbb{X}}$ on $(\mathbb{X},\iota_{\mathbb{X},E})$ such that the Rosati involution induces the identity on $O_E$, \emph{i.e.}, $\iota(\alpha)^{\ast} = \iota(\alpha)$ for all $\alpha \in O_E$.
This polarization is unique up to a scalar in $O_E^{\times}$.
\item Fix $\widetilde{\lambda}_{\mathbb{X}}$ as in part \eqref{IN_POL1}.
For any $S \in \Nilp$ and $(X,\iota_E,\varrho) \in \mathcal{M}_E(S)$, there exists a unique principal polarization $\widetilde{\lambda}$ on $X$ such that the Rosati involution induces the identity on $O_E$ and such that $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$.
\end{enumerate}
\end{thm}
If $p = 2$ and $E|F$ is ramified of (R-P) or (R-U) type, then there is a canonical closed embedding $\mathcal{N}_E \inj \mathcal{M}_E$ that forgets about the polarization $\lambda$.
In this way, it follows that $\widetilde{\lambda}$ is a polarization for any $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$.
The statement of Theorem \ref{IN_POL} can also be expressed in terms of an isomorphism of moduli spaces $\mathcal{M}_{E,\mathrm{pol}} \isoarrow \mathcal{M}_E$.
Here $\mathcal{M}_{E,\mathrm{pol}}$ is a moduli space of (PEL) type, defined by mapping $S \in \Nilp$ to the set of tuples $(X,\iota,\widetilde{\lambda},\varrho)$ where $(X,\iota,\varrho) \in \mathcal{M}_E(S)$ and $\widetilde{\lambda}$ is a polarization as in the theorem.
\smallskip
We now briefly describe the contents of the subsequent sections of this paper.
In section \ref{LA}, we recall some facts about the quadratic extensions of $F$, the quaternion algebra $B|F$ and hermitian forms.
In the next two sections, sections \ref{RP} and \ref{RU}, we define the moduli spaces $\mathcal{N}_E^{\mathrm{naive}}$, introduce the straightening condition describing $\mathcal{N}_E \subseteq \mathcal{N}_E^{\mathrm{naive}}$ and prove our main theorem in both the cases (R-P) and (R-U).
Although the techniques are quite similar in both cases, we decided to treat these cases separately, since the results differ in important details.
Finally, in Section \ref{POL} we prove Theorem \ref{IN_POL} on the existence of the polarizations $\widetilde{\lambda}$.
\smallskip
\paragraph{\bf{Acknowledgements}}
First of all, I am very grateful to my advisor M.\ Rapoport for suggesting this topic and for his constant support and helpful discussions.
I thank the members of our Arbeitsgruppe in Bonn for numerous discussions and I also would like to thank the audience of my two AG talks for many helpful questions and comments.
Furthermore, I owe thanks to A.\ Genestier for many useful remarks and for pointing out a mistake in an earlier version of this paper.
I would also like to thank the referee for helpful comments.
This work is the author's PhD thesis at the University of Bonn, which was supported by the SFB/TR45 `Periods, Moduli Spaces and Arithmetic of Algebraic Varieties' of the DFG (German Research Foundation).
Parts of this paper were written during the fall semester program `New Geometric Methods in Number Theory and Automorphic Forms' at the MSRI in Berkeley.
\section{Preliminaries on quaternion algebras and hermitian forms}
\label{LA}
Let $F|\mathbb{Q}_2$ be a finite extension.
In this section we will recall some facts about the quadratic extensions of $F$, the quaternion division algebra $B|F$ and certain hermitian forms.
For more information on quaternion algebras, see for example the book by Vigneras \cite{Vig80}.
A systematic classification of hermitian forms over local fields has been done by Jacobowitz in \cite{Jac62}.
Let $E|F$ be a quadratic field extension and denote by $O_F$ resp.\ $O_E$ the rings of integers.
There are three mutually exclusive possibilities for $E|F$:
\begin{itemize} \label{LA_quadext}
\item $E|F$ is unramified.
Then $E = F[\delta]$ for $\delta$ a square root of a unit in $F$.
We can choose $\delta$ such that $\delta^2 = 1 + 4u$ for some $u \in O_F^{\times}$.
In this case, $O_E = O_F[\frac{1+\delta}{2}]$.
The element $\gamma = \frac{1+\delta}{2}$ satisfies the Eisenstein equation $\gamma^2 - \gamma - u = 0$.
In the following we will write $F^{(2)}$ instead of $E$ and $O_F^{(2)}$ instead of $O_E$ when talking about the unramified extension of $F$.
\item $E|F$ is ramified and $E$ is generated by the square root of a uniformizer in $F$.
That is, $E = F[\Pi]$ and $\Pi$ is given by the Eisenstein equation $\Pi^2 + \pi_0 = 0$ for a uniformizing element $\pi_0 \in O_F$.
We also have $O_E = O_F[\Pi]$.
Following Jacobowitz, we will say $E|F$ is of type (R-P) (which stands for ``ramified-prime'').
\item Finally, $E|F$ can be given by an Eisenstein equation of the form $\Pi^2 - t\Pi + \pi_0 = 0$ for a uniformizer $\pi_0$ and $t \in O_F$ such that $\pi_0|t|2$.
Then $E|F$ is ramified and $O_E = O_F[\Pi]$.
Here, $E$ is generated by the square root of a unit in $F$.
Indeed, for $\vartheta = 1 - 2\Pi/t$ we have $\vartheta^2 = 1 - 4\pi_0/t^2 \in O_F^{\times}$.
Thus $E|F$ is said to be of type (R-U) (for ``ramified-unit'').
\end{itemize}
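For convenience, we spell out the computation behind the last identity; it uses only the Eisenstein equation $\Pi^2 = t\Pi - \pi_0$:
\begin{equation*}
\vartheta^2 = \Bigl(1 - \frac{2\Pi}{t}\Bigr)^2 = 1 - \frac{4\Pi}{t} + \frac{4(t\Pi - \pi_0)}{t^2} = 1 - \frac{4\pi_0}{t^2}.
\end{equation*}
Since $t|2$, we have $4\pi_0/t^2 \in \pi_0 O_F$, so $\vartheta^2$ is a unit congruent to $1$ modulo $\pi_0$.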
We will use this notation throughout the paper.
\begin{rmk}
The isomorphism classes of quadratic extensions of $F$ correspond to the non-trivial equivalence classes of $F^{\times} / (F^{\times})^2$.
We have $F^{\times} / (F^{\times})^2 \simeq \operatorname{H}^1(G_F, \mathbb{Z}/2\mathbb{Z})$ for the absolute Galois group $G_F$ of $F$ and $\dim \operatorname{H}^1(G_F, \mathbb{Z}/2\mathbb{Z}) = 2 + d$, where $d = [F: \mathbb{Q}_2]$ is the degree of $F$ over $\mathbb{Q}_2$ (see, for example, \cite[Cor.\ 7.3.9]{NSW00}).
A representative of an equivalence class in $F^{\times} / F^{\times 2}$ can be chosen to be either a prime or a unit, and exactly half of the classes are represented by prime elements, the others being represented by units.
It follows that there are, up to isomorphism, $2^{1 + d}$ different extensions $E|F$ of type (R-P) and $2^{1+d} -2$ extensions of type (R-U).
(We have to exclude the trivial element $1 \in F^{\times} / F^{\times 2}$ and one unit element corresponding to the unramified extension.)
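For instance, for $F = \mathbb{Q}_2$ (so $d = 1$), a set of representatives for $\mathbb{Q}_2^{\times} / \mathbb{Q}_2^{\times 2}$ is
\begin{equation*}
\{1,\, 5,\, -1,\, -5,\, 2,\, 10,\, -2,\, -10\}.
\end{equation*}
The class of $5$ yields the unramified extension, the classes of $-1$ and $-5$ yield the $2^{1+d} - 2 = 2$ extensions of type (R-U), and the four prime classes yield the $2^{1+d} = 4$ extensions of type (R-P).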
\end{rmk}
\begin{lem} \label{LA_diff}
The inverse different of $E|F$ is given by $\mathfrak{D}_{E|F}^{-1} = \frac{1}{2\Pi}O_E$ in the case \emph{(R-P)} and by $\mathfrak{D}_{E|F}^{-1} = \frac{1}{t}O_E$ in the case \emph{(R-U)}.
\end{lem}
\begin{proof}
The inverse different is defined as
\begin{equation*}
\mathfrak{D}_{E|F}^{-1} = \{ \alpha \in E \mid \Tr_{E|F}(\alpha O_E) \subseteq O_F \}.
\end{equation*}
It is enough to check the condition on the trace for the elements $1$ and $\Pi \in O_E$.
If we write $\alpha = \alpha_1 + \Pi \alpha_2$ with $\alpha_1,\alpha_2 \in F$, we get
\begin{align*}
\Tr_{E|F}(\alpha \cdot 1) & = \alpha + \overbar{\alpha} = 2\alpha_1 + \alpha_2(\Pi + \overbar{\Pi}), \\
\Tr_{E|F}(\alpha \cdot \Pi) & = \alpha \Pi + \overbar{\alpha \Pi} = \alpha_1 (\Pi + \overbar{\Pi}) + \alpha_2(\Pi^2 + \overbar{\Pi}^2).
\end{align*}
In the case (R-P) we have $\Pi + \overbar{\Pi} = 0$ and $\Pi^2 + \overbar{\Pi}^2 = -2\pi_0$, while in the case (R-U), $\Pi + \overbar{\Pi} = t$ and $\Pi^2 + \overbar{\Pi}^2 = t^2 - 2\pi_0$.
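For instance, in the case (R-P) the two trace conditions amount to $2\alpha_1 \in O_F$ and $2\pi_0 \alpha_2 \in O_F$, so that
\begin{equation*}
\mathfrak{D}_{E|F}^{-1} = \frac{1}{2}\, O_F + \frac{\Pi}{2\pi_0}\, O_F = \frac{1}{2\Pi}\bigl(\Pi\, O_F + O_F\bigr) = \frac{1}{2\Pi}\, O_E,
\end{equation*}
using $\Pi^2 = -\pi_0$.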
It is now easy to deduce that the inverse different is of the claimed form.
\end{proof}
Over $F$, there exists up to isomorphism exactly one quaternion division algebra $B$, with unique maximal order $O_B$.
For every quadratic extension $E|F$, there exists an embedding $E \inj B$ and this induces an embedding $O_E \inj O_B$.
If $E|F$ is ramified, a basis for $O_E$ as $O_F$-module is given by $(1, \Pi)$.
We would like to extend this to an $O_F$-basis of $O_B$.
\begin{lem} \label{LA_quat}
\begin{enumerate}
\item \label{LA_quatRP} If $E|F$ is of type \emph{(R-P)}, there exists an embedding $F^{(2)} \inj B$ such that $\delta \Pi = - \Pi \delta$.
An $O_F$-basis of $O_B$ is then given by $(1,\gamma,\Pi,\gamma \cdot \Pi)$, where $\gamma = \frac{1+\delta}{2}$.
\item \label{LA_quatRU} If $E|F$ is of type \emph{(R-U)}, there exists an embedding $E_1 \inj B$, where $E_1|F$ is of type \emph{({R-P})} with uniformizer $\Pi_1$ such that $\vartheta \Pi_1 = - \Pi_1 \vartheta$.
The tuple $(1,\vartheta,\Pi_1,\vartheta \Pi_1)$ is an $F$-basis of $B$.
Furthermore, there is also an embedding $\widetilde{E} \inj B$ with $\widetilde{E}|F$ of type \emph{(R-U)} with elements $\widetilde{\Pi}$ and $\widetilde{\vartheta}$ as above, such that $\vartheta \widetilde{\vartheta} = - \widetilde{\vartheta} \vartheta$ and $\widetilde{\vartheta}^2 = 1 + (t^2/\pi_0) \cdot u$ for some unit $u \in F$.
In terms of this embedding, an $O_F$-basis of $O_B$ is given by $(1,\Pi,\widetilde{\Pi},\Pi \cdot \widetilde{\Pi} / \pi_0)$.
Also,
\begin{equation} \label{LA_quatRUunr}
\frac{\Pi \cdot \widetilde{\Pi}}{\pi_0} = \gamma
\end{equation}
for some embedding $F^{(2)} \inj B$ of the unramified extension and $\gamma^2 - \gamma - u = 0$.
Hence, $O_B = O_F[\Pi, \gamma]$ as $O_F$-algebra.
\end{enumerate}
\end{lem}
\begin{proof}
\eqref{LA_quatRP} This is \cite[II.\ Cor.\ 1.7]{Vig80}.
\eqref{LA_quatRU} By \cite[I.\ Cor.\ 2.4]{Vig80}, it suffices to find a uniformizer $\Pi_1^2 \in F^{\times} \setminus \Nm_{E|F}(E^{\times})$ in order to prove the first part.
But $\Nm_{E|F}(E^{\times}) \subseteq F^{\times}$ is a subgroup of index $2$ and $F^{\times 2} \subseteq \Nm_{E|F}(E^{\times})$.
On the other hand, the residue classes of uniformizing elements in $F^{\times} / F^{\times 2}$ generate the whole group.
Thus they cannot all be contained in $\Nm_{E|F}(E^{\times})$.
For the second part, choose a unit $\delta \in F^{(2)}$ with $\delta^2 = 1 + 4 u \in F^{\times} \setminus F^{\times2}$ for some $u \in O_F^{\times}$ and set $\gamma = \frac{1+\delta}{2}$.
Let $\widetilde{E}|F$ be of type (R-U), generated by $\widetilde{\vartheta}$ with $\widetilde{\vartheta}^2 = 1 + (t^2/\pi_0) \cdot u$.
We have to show that $\widetilde{\vartheta}^2$ is not contained in $\Nm_{E|F}(E^{\times})$.
Assume it is a norm, so $\widetilde{\vartheta}^2 = \Nm_{E|F}(b)$ for a unit $b \in E^{\times}$.
Then $b$ is of the form $b = 1 + x \cdot (t/ \Pi)$ for some $x \in O_E$.
Indeed, let $\ell$ be the $\Pi$-adic valuation of $b-1$, \emph{i.e.}, $b = 1 + x \cdot \Pi^{\ell}$ and $x \in O_E^{\times}$.
We have
\begin{equation} \label{LA_Nmeq}
1 + (t^2/ \pi_0) \cdot u = \Nm_{E|F}(b) = 1 + \Tr_{E|F}(x \Pi^{\ell}) + \Nm_{E|F}(x \Pi^{\ell}).
\end{equation}
Let $v$ be the $\pi_0$-adic valuation on $F$.
Then $v(\Nm_{E|F}(x \Pi^{\ell})) = \ell$ and $v(\Tr_{E|F}(x \Pi^{\ell})) \geq v(t) + \lfloor \frac{\ell}{2} \rfloor$, by Lemma \ref{LA_diff}.
On the left hand side, we have $v((t^2/ \pi_0) \cdot u) = 2v(t) -1$.
Comparing the valuations on both sides of \eqref{LA_Nmeq}, the assumption $\ell < 2v(t)-1$ now quickly leads to a contradiction.
Hence $\ell \geq 2v(t)-1$ and $b = 1 + x \cdot (t/ \Pi)$ for some $x \in O_E$.
Again,
\begin{equation*}
1 + (t^2/ \pi_0) \cdot u = \Nm_{E|F}(b) = 1 + \Tr_{E|F}(xt/ \Pi) + \Nm_{E|F}(xt/ \Pi).
\end{equation*}
An easy calculation shows that the residue $\overbar{x} \in k = O_E / \Pi = O_F / \pi_0$ of $x$ satisfies $u = \overbar{x} + \overbar{x}^2$ in $k$.
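In more detail: writing $x = x_1 + \Pi x_2$ with $x_1, x_2 \in O_F$ and using $\Pi + \overbar{\Pi} = t$ and $\Pi \overbar{\Pi} = \pi_0$, one finds
\begin{equation*}
\Tr_{E|F}(xt/\Pi) = \frac{t^2}{\pi_0}\, x_1 + 2t x_2, \qquad \Nm_{E|F}(xt/\Pi) = \frac{t^2}{\pi_0}\, \Nm_{E|F}(x).
\end{equation*}
Multiplying the norm equation by $\pi_0/t^2$ gives
\begin{equation*}
u = x_1 + \frac{2\pi_0}{t}\, x_2 + \Nm_{E|F}(x),
\end{equation*}
and since $t|2$, the middle term vanishes modulo $\pi_0$; as $x_1 \equiv \overbar{x}$ and $\Nm_{E|F}(x) \equiv \overbar{x}^2$ modulo $\Pi$, this yields $u = \overbar{x} + \overbar{x}^2$ in $k$.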
But this equation has no solution in $k$, since a solution of $\gamma^2 - \gamma - u = 0$ generates the unramified quadratic extension of $F$.
It follows that $\widetilde{\vartheta}^2$ cannot be a norm.
Using again \cite[I.\ Cor.\ 2.4]{Vig80}, we find an embedding $\widetilde{E} \inj B$ such that $\vartheta \widetilde{\vartheta} = - \widetilde{\vartheta} \vartheta$.
We have $\Pi = t(1 + \vartheta) / 2$ and $\widetilde{\Pi} = \pi_0(1 + \widetilde{\vartheta}) / t$, thus
\begin{equation*}
\frac{\Pi \cdot \widetilde{\Pi}}{\pi_0} = \frac{(1 + \vartheta) \cdot (1 + \widetilde{\vartheta})}{2} = \frac{1 + \vartheta + \widetilde{\vartheta} + \vartheta \cdot \widetilde{\vartheta}}{2},
\end{equation*}
and
\begin{align*}
(\vartheta + \widetilde{\vartheta} + \vartheta \cdot \widetilde{\vartheta})^2 & = \vartheta^2 + \widetilde{\vartheta}^2 - \vartheta^2 \cdot \widetilde{\vartheta}^2 \\
& = (1 - 4\pi_0/t^2) + (1 + t^2u/\pi_0) - (1 - 4\pi_0/t^2) (1 + t^2u/\pi_0) \\
& = 1 + 4u.
\end{align*}
Hence $\gamma \mapsto \frac{\Pi \cdot \widetilde{\Pi}}{\pi_0}$ induces an embedding $F^{(2)} \inj B$.
It remains to prove that the tuple $\beta = (1,\Pi,\widetilde{\Pi},\Pi \cdot \widetilde{\Pi} / \pi_0)$ is a basis of $O_B$ as $O_F$-module.
By \cite[I.\ Cor.\ 4.8]{Vig80}, it suffices to check that the discriminant
\begin{equation*}
\disc(\beta) = \det(\Trd(\beta_i \beta_j)) \cdot O_F
\end{equation*}
is equal to $\disc(O_B)$.
An easy calculation shows $\det(\Trd(\beta_i \beta_j)) \cdot O_F = \pi_0 O_F$ and then the assertion follows from \cite[V, II.\ Cor.\ 1.7]{Vig80}.
\end{proof}
For the remainder of this section, we will consider lattices $\Lambda$ in a $2$-dimensional $E$-vector space $C$ with a split $E|F$-hermitian\footnote{Here and in the following, sesquilinear forms will be linear from the left and semi-linear from the right.} form $h$.
Recall from \cite{Jac62} that, up to isomorphism, there are $2$ different $E|F$-hermitian vector spaces $(C,h)$ of fixed dimension $n$, parametrized by the discriminant $\disc(C,h) \in F^{\times} / \Nm_{E|F}(E^{\times})$.
A hermitian space $(C,h)$ is called \emph{split} whenever $\disc(C,h) = 1$.
In our case, where $(C,h)$ is split of dimension $2$, we can find a basis $(e_1,e_2)$ of $C$ with $h(e_i,e_i) = 0$ and $h(e_1,e_2) = 1$.
Denote by $\Lambda^{\sharp}$ the dual of a lattice $\Lambda \subseteq C$ with respect to $h$.
The lattice $\Lambda$ is called $\Pi^i$-\emph{modular} if $\Lambda = \Pi^i \Lambda^{\sharp}$ (resp.\ \emph{unimodular} or \emph{selfdual} when $i=0$).
In contrast to the $p$-adic case with $p > 2$, there exists more than one type of $\Pi^i$-modular lattices in our case (\emph{cf.}\ \cite{Jac62}):
\begin{prop} \label{LA_latt}
Define the norm ideal $\Nm(\Lambda)$ of $\Lambda$ by
\begin{equation*}
\Nm(\Lambda) = \langle \{ h(x,x) \mid x \in \Lambda \} \rangle \subseteq F.
\end{equation*}
Any $\Pi^i$-modular lattice $\Lambda \subseteq C$ is determined up to the action of $\U(C,h)$ by the ideal $\Nm(\Lambda) = \pi_0^{\ell} O_F \subseteq F$.
For $i= 0$ or $1$, the exponent $\ell$ can be any integer such that
\begin{align*}
|2| & \leq |\pi_0|^{\ell} \leq |1| \text{ (for } E|F \text{ (R-P), unimodular } \Lambda), \\
|2\pi_0| & \leq |\pi_0|^{\ell} \leq |\pi_0| \text{ (for } E|F \text{ (R-P), } \Pi \text{-modular } \Lambda), \\
|t| & \leq |\pi_0|^{\ell} \leq |1| \text{ (for } E|F \text{ (R-U), unimodular } \Lambda), \\
|t| & \leq |\pi_0|^{\ell} \leq |\pi_0| \text{ (for } E|F \text{ (R-U), } \Pi \text{-modular } \Lambda),
\end{align*}
where $|\cdot|$ is the (normalized) absolute value on $F$.
Two $\Pi^i$-modular lattices $\Lambda$ and $\Lambda'$ are isomorphic if and only if $\Nm(\Lambda) = \Nm(\Lambda')$.
\qed
\end{prop}
For any other $i$, the possible values of $\ell$ for a given $\Pi^i$-modular lattice $\Lambda$ are easily obtained by shifting.
In fact, we can choose an integer $j$ such that $\Pi^j \Lambda$ is either unimodular or $\Pi$-modular.
Then $\Nm (\Lambda) = \pi_0^{-j} \Nm (\Pi^j \Lambda)$ and we can apply the proposition above.
Since $(C,h)$ is split, any $\Pi^i$-modular lattice $\Lambda$ contains an \emph{isotropic} vector $v$ (\emph{i.e.}, with $h(v,v) = 0$).
After rescaling with a suitable power of $\Pi$, we can extend $v$ to a basis of $\Lambda$.
Hence there always exists a basis $(e_1,e_2)$ of $\Lambda$ such that $h$ is represented by a matrix of the form
\begin{equation} \label{LA_lattmatrix}
H_{\Lambda} = \begin{pmatrix}
x & \overbar{\Pi}^i \\
\Pi^i & \\
\end{pmatrix}, \quad x \in F.
\end{equation}
If $x = 0$ in this representation, then $\Nm(\Lambda) = \pi_0^{\ell} O_F$ is as small as possible; in other words, $|\pi_0|^{\ell}$ is minimal.
On the other hand, whenever $|\pi_0|^{\ell}$ takes the minimal absolute value for a given $\Pi^i$-modular lattice $\Lambda$, there exists a basis $(e_1,e_2)$ of $\Lambda$ such that $h$ is represented by $H_{\Lambda}$ with $x = 0$.
Indeed, this follows because the ideal $\Nm(\Lambda)$ already determines $\Lambda$ up to isomorphism.
In this case (when $x = 0$), we call $\Lambda$ a \emph{hyperbolic} lattice.
By the arguments above, a $\Pi^i$-modular lattice is thus hyperbolic if and only if its norm is minimal.
In all other cases, where $\Lambda$ is $\Pi^i$-modular but not hyperbolic, we have $\Nm(\Lambda) = x O_F$.
For further reference, we explicitly write down the norm of a hyperbolic lattice for the cases that we need later.
For other values of $i$, the norm can easily be deduced from this by shifting (see also \cite[Table 9.1]{Jac62}).
\begin{lem} \label{LA_hyp}
A $\Pi^i$-modular lattice $\Lambda$ is hyperbolic if and only if
\begin{align*}
\Nm(\Lambda) & = 2 O_F, & & \text{for } E|F \text{ (R-P), } i = 0 \text{ or } -1, \\
\Nm(\Lambda) & = t O_F, & & \text{for } E|F \text{ (R-U), } i = 0 \text{ or } 1.
\end{align*}
The norm ideal of $\Lambda$ is minimal among all norm ideals for $\Pi^i$-modular lattices in $C$.
\qed
\end{lem}
In the following, we will only consider the cases $i = 0$ or $-1$ for $E|F$ (R-P) and the cases $i = 0$ or $1$ for $E|F$ (R-U), since these are the cases we will need later.
We want to study the following question:
\begin{qu} \label{LA_lattqu}
Assume $E|F$ is (R-P).
Fix a $\Pi^{-1}$-modular lattice $\Lambda_{-1} \subseteq C$ (not necessarily hyperbolic).
How many unimodular lattices $\Lambda_0 \subseteq \Lambda_{-1}$ are there and what norms $\Nm(\Lambda_0)$ can appear?
Dually, for a fixed unimodular lattice $\Lambda_0 \subseteq C$, how many $\Pi^{-1}$-modular lattices $\Lambda_{-1}$ with $\Lambda_0 \subseteq \Lambda_{-1}$ exist and what are their norms?
Same question for $E|F$ (R-U) and unimodular resp.\ $\Pi$-modular lattices.
\end{qu}
Of course, such an inclusion is always of index $1$.
The inclusions $\Lambda_0 \subseteq \Lambda_{-1}$ of index $1$ correspond to lines in $\Lambda_{-1} / \Pi \Lambda_{-1}$.
Denote by $q$ the number of elements in the common residue field of $O_F$ and $O_E$.
Then there exist at most $q+1$ such unimodular lattices $\Lambda_0$ for a given $\Lambda_{-1}$.
The same bound holds in the dual case, \emph{i.e.}, there are at most $q+1$ $\Pi^{-1}$-modular lattices containing a given unimodular lattice $\Lambda_0$.
Propositions \ref{LA_lattRP} and \ref{LA_lattRU} below provide an exhaustive answer to Question \ref{LA_lattqu}.
Since the proofs consist of a lengthy but simple case-by-case analysis, we will leave it to the interested reader.
\begin{prop} \label{LA_lattRP} Let $E|F$ be of type \emph{(R-P)}.
\begin{enumerate}
\item Let $\Lambda_{-1} \subseteq C$ be a $\Pi^{-1}$-modular hyperbolic lattice.
There are $q+1$ hyperbolic unimodular lattices contained in $\Lambda_{-1}$.
\item Let $\Lambda_{-1} \subseteq C$ be a $\Pi^{-1}$-modular non-hyperbolic lattice.
Let $\Nm(\Lambda_{-1}) = \pi_0^{\ell}O_F$.
Then $\Lambda_{-1}$ contains one unimodular lattice $\Lambda_0$ with $\Nm(\Lambda_0) = \pi_0^{\ell+1}O_F$ and $q$ unimodular lattices of norm $\pi_0^{\ell}O_F$.
\item Let $\Lambda_0 \subseteq C$ be a unimodular hyperbolic lattice.
There are two hyperbolic $\Pi^{-1}$-modular lattices $\Lambda_{-1}\supseteq \Lambda_0$ and $q-1$ non-hyperbolic $\Pi^{-1}$-modular lattices $\Lambda_{-1} \supseteq \Lambda_0$ with $\Nm(\Lambda_{-1}) = 2/\pi_0 O_F$.
\item \label{LA_lattRPmin} Let $\Lambda_0 \subseteq C$ be unimodular non-hyperbolic.
Let $\Nm(\Lambda_0) = \pi_0^{\ell}O_F$.
There exists one $\Pi^{-1}$-modular lattice $\Lambda_{-1} \supseteq \Lambda_0$ with $\Nm(\Lambda_{-1}) = \pi_0^{\ell}O_F$ and, unless $\ell = 0$, there are $q$ non-hyperbolic $\Pi^{-1}$-modular lattices $\Lambda_{-1} \supseteq \Lambda_0$ with $\Nm(\Lambda_{-1}) = \pi_0^{\ell-1}O_F$.
\end{enumerate}
\end{prop}
Note that the total number of unimodular resp.\ $\Pi^{-1}$-modular lattices found for $\Lambda = \Lambda_{-1}$ resp.\ $\Lambda_0$ is $q+1$, except in the case of Proposition \ref{LA_lattRP} \eqref{LA_lattRPmin} when $\ell = 0$.
In that particular case, there is just one $\Pi^{-1}$-modular lattice containing $\Lambda_0$.
The same phenomenon also appears in the case (R-U), see part \eqref{LA_lattRUmin} of the following proposition.
\begin{prop} \label{LA_lattRU} Let $E|F$ be of type \emph{(R-U)}.
\begin{enumerate}
\item Let $\Lambda_0 \subseteq C$ be a unimodular hyperbolic lattice.
There are $q+1$ hyperbolic $\Pi$-modular lattices $\Lambda_1 \subseteq \Lambda_0$.
\item \label{LA_lattRUmin} Let $\Lambda_0 \subseteq C$ be unimodular non-hyperbolic with $\Nm(\Lambda_0) = \pi_0^{\ell}O_F$.
There is one $\Pi$-modular lattice $\Lambda_1 \subseteq \Lambda_0$ with norm ideal $\Nm(\Lambda_1) = \pi_0^{\ell+1}O_F$ and if $\ell \neq 0$, there are also $q$ non-hyperbolic $\Pi$-modular lattices $\Lambda_1 \subseteq \Lambda_0$ with $\Nm(\Lambda_1) = \pi_0^{\ell}O_F$.
\item \label{LA_lattRUh} Let $\Lambda_1 \subseteq C$ be a $\Pi$-modular hyperbolic lattice.
There are two unimodular hyperbolic lattices containing $\Lambda_1$ and $q-1$ unimodular lattices $\Lambda_0$ with $\Lambda_1 \subseteq \Lambda_0$ and $\Nm(\Lambda_0) = t/ \pi_0 O_F$.
\item \label{LA_lattRUnh} Let $\Lambda_1 \subseteq C$ be a $\Pi$-modular non-hyperbolic lattice and let $\Nm(\Lambda_1) = \pi_0^{\ell}O_F$.
The lattice $\Lambda_1$ is contained in $q$ unimodular lattices of norm $\pi_0^{\ell-1}O_F$ and in one unimodular lattice $\Lambda_0$ with $\Nm(\Lambda_0) = \pi_0^{\ell}O_F$.
\end{enumerate}
\end{prop}
If $E|F$ is a quadratic extension of type (R-U) such that $|t| = |\pi_0|$, there exist only hyperbolic $\Pi$-modular lattices in $C$ and hence case \eqref{LA_lattRUnh} of Proposition \ref{LA_lattRU} does not appear.
\section{The moduli problem in the case (R-P)}
\label{RP}
Throughout this section, $E|F$ is a quadratic extension of type (R-P), \emph{i.e.}, there exist uniformizing elements $\pi_0 \in F$ and $\Pi \in E$ such that $\Pi^2 + \pi_0 = 0$.
Then $O_E = O_F[\Pi]$ for the rings of integers $O_F$ and $O_E$ of $F$ and $E$, respectively.
Let $k$ be the common residue field with $q$ elements, $\overbar{k}$ an algebraic closure, and $\breve{F}$ the completion of the maximal unramified extension of $F$, with ring of integers $\breve{O}_F = W_{O_F}(\overbar{k})$.
Let $\sigma$ be the lift of the Frobenius in $\Gal(\overbar{k}|k)$ to $\Gal(\breve{O}_F|O_F)$.
\subsection{The definition of the naive moduli problem $\mathcal{N}_E^{\mathrm{naive}}$}
We first construct a functor $\mathcal{N}_E^{\mathrm{naive}}$ on $\Nilp$, the category of $\breve{O}_F$-schemes $S$ such that $\pi_0 \mathcal{O}_S$ is locally nilpotent.
We consider tuples $(X, \iota, \lambda)$, where
\begin{itemize}
\item $X$ is a formal $O_F$-module over $S$ of dimension $2$ and height $4$.
\item $\iota: O_E \to \End(X)$ is an action of $O_E$ satisfying the \emph{Kottwitz condition}:
The characteristic polynomial of $\iota(\alpha)$ on $\Lie X$ for any $\alpha \in O_E$ is
\begin{equation*}
\charp(\Lie X, T \mid \iota(\alpha)) = (T - \alpha)(T - \overbar{\alpha}).
\end{equation*}
Here $\alpha \mapsto \overbar{\alpha}$ is the non-trivial Galois automorphism and the right hand side is a polynomial with coefficients in $\mathcal{O}_S$ via the composition $O_F[T] \inj \breve{O}_F[T] \to \mathcal{O}_S[T]$.
\item $\lambda: X \to X^{\vee}$ is a principal polarization on $X$ such that the Rosati involution satisfies $\iota(\alpha)^{\ast} = \iota(\overbar{\alpha})$ for $\alpha \in O_E$.
\end{itemize}
\begin{defn} \label{RP_isonaive}
A \emph{quasi-isogeny} (resp.\ an \emph{isomorphism}) $\varphi: (X,\iota,\lambda) \to (X', \iota', \lambda')$ of two such tuples $(X,\iota,\lambda)$ and $(X', \iota', \lambda')$ over $S$ is an $O_E$-linear quasi-isogeny of height $0$ (resp.\ an $O_E$-linear isomorphism) $\varphi: X \to X'$ such that $\lambda = \varphi^{\ast}(\lambda')$.
Denote the group of quasi-isogenies $\varphi: (X,\iota,\lambda) \to (X,\iota,\lambda)$ by $\QIsog(X,\iota,\lambda)$.
\end{defn}
For $S = \Spec \overbar{k}$ we have the following proposition:
\begin{prop} \label{RP_frnaive}
Up to isogeny, there exists precisely one tuple $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ over $\Spec \overbar{k}$ such that the group $\QIsog(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ contains $\SU(C,h)$ as a closed subgroup.
Here $\SU(C,h)$ is the special unitary group for a $2$-dimensional $E$-vector space $C$ with split $E|F$-hermitian form $h$.
\end{prop}
\begin{rmk} \label{RP_rmknaive}
If $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ is as in the proposition, we always have $\QIsog(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}}) \cong \U(C,h)$.
This follows directly from the proof and gives a more natural way to describe the framing object.
However, we will need the slightly stronger statement of the Proposition later, in Lemma \ref{RP_clemb}.
\end{rmk}
\begin{proof}[Proof of Proposition \ref{RP_frnaive}]
We first show uniqueness.
Let $(X,\iota,\lambda) / \Spec \overbar{k}$ be such a tuple.
Its (relative) rational Dieudonn\'e module $N_X$ is a $4$-dimensional vector space over $\breve{F}$ with an action of $E$ and an alternating form $\langle \,,\rangle$ such that for all $x,y \in N_X$,
\begin{equation} \label{RP_alt}
\langle x, \Pi y \rangle = - \langle \Pi x, y \rangle.
\end{equation}
The space $N_X$ has the structure of a $2$-dimensional vector space over $\breve{E} = E \otimes_F \breve{F}$ and we can define an $\breve{E}|\breve{F}$-hermitian form on it via
\begin{equation} \label{RP_herm}
h(x,y) = \langle \Pi x, y \rangle + \Pi \langle x, y \rangle.
\end{equation}
The alternating form can be recovered from $h$ by
\begin{equation} \label{RP_altherm}
\langle x, y \rangle = \Tr_{\breve{E}|\breve{F}} \left(\frac{1}{2 \Pi} \cdot h(x,y) \right).
\end{equation}
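Note that the right-hand side of \eqref{RP_altherm} is indeed compatible with \eqref{RP_alt}: since $h$ is semi-linear in the second variable and $\overbar{\Pi} = -\Pi$,
\begin{equation*}
\langle x, \Pi y \rangle = \Tr_{\breve{E}|\breve{F}} \left(\frac{\overbar{\Pi}}{2 \Pi} \cdot h(x,y) \right) = -\Tr_{\breve{E}|\breve{F}} \left(\frac{1}{2} \cdot h(x,y) \right) = -\langle \Pi x, y \rangle.
\end{equation*}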
Furthermore we have on $N_X$ a $\sigma$-linear operator ${\bf F}$, the Frobenius, and a $\sigma^{-1}$-linear operator ${\bf V}$, the Verschiebung, that satisfy ${\bf V}{\bf F} = {\bf F}{\bf V} = \pi_0$.
Recall that $\sigma$ is the lift of the Frobenius on $\breve{O}_F$.
Since $\langle \,, \rangle$ comes from a polarization, we have
\begin{align*}
\langle {\bf F}x, y \rangle & = \langle x, {\bf V}y \rangle ^{\sigma}, \\
\intertext{and}
h({\bf F}x, y) & = h(x,{\bf V}y)^{\sigma},
\end{align*}
for all $x,y \in N_X$.
Let us consider the $\sigma$-linear operator $\tau = \Pi {\bf V}^{-1}$.
Its slopes are all zero, since $N_X$ is isotypical of slope $\frac{1}{2}$.
(This follows from the condition on $\QIsog(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$.)
We set $C = N_X^{\tau}$.
This is a $2$-dimensional vector space over $E$ and $N_X = C \otimes_E \breve{E}$.
Now $h$ induces an $E|F$-hermitian form on $C$ since
\begin{equation*}
h(\tau x, \tau y) = h(-{\bf F} \Pi^{-1} x, \Pi {\bf V}^{-1}y) = - h(\Pi^{-1} x, \Pi y)^{\sigma} = h(x,y)^{\sigma}.
\end{equation*}
A priori, there are up to isomorphism two possibilities for $(C,h)$, either $h$ is split on $C$ or non-split.
But automorphisms of $(C,h)$ correspond to elements of $\QIsog(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$.
The unitary groups of $(C,h)$ for $h$ split and $h$ non-split are not isomorphic and they cannot contain each other as a closed subgroup.
Hence the condition on $\QIsog(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ implies that $h$ is split.
Assume now we have two different objects $(X,\iota,\lambda)$ and $(X',\iota',\lambda')$ as in the proposition.
These give us isomorphic vector spaces $(C,h)$ and $(C',h')$ and an isomorphism between these extends to an isomorphism between $N_X$ and $N_{X'}$ (respecting all rational structure) which corresponds to a quasi-isogeny between $(X,\iota,\lambda)$ and $(X',\iota',\lambda')$.
The existence of $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ now follows from the fact that a $2$-dimensional $E$-vector space $(C,h)$ with split $E|F$-hermitian form contains a unimodular lattice $\Lambda$.
Indeed, this gives us a lattice $M = \Lambda \otimes_{O_E} \breve{O}_E \subseteq C \otimes_E \breve{E}$.
We extend $h$ to $N = C \otimes_E \breve{E}$ and define the $\breve{F}$-linear alternating form $\langle \,, \rangle$ as in \eqref{RP_altherm}.
Now $M$ is unimodular with respect to $\langle \,, \rangle$, because $\frac{1}{2 \Pi} \breve{O}_E$ is the inverse different of $\breve{E}|\breve{F}$ (see Lemma \ref{LA_diff}).
We choose the operators ${\bf F}$ and ${\bf V}$ on $M$ such that ${\bf F}{\bf V} = {\bf V}{\bf F} = \pi_0$ and $\Lambda = M^{\tau}$ for $\tau = \Pi {\bf V}^{-1}$.
This makes $M$ a (relative) Dieudonn\'e module and we define $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ as the corresponding formal $O_F$-module.
\end{proof}
We fix such a framing object $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ over $\Spec \overbar{k}$.
\begin{defn} \label{RP_defnaive}
For arbitrary $S \in \Nilp$, let $\overbar{S} = S \times_{\Spf \breve{O}_F} \Spec \overbar{k}$.
Define $\mathcal{N}_E^{\mathrm{naive}}(S)$ as the set of equivalence classes of tuples $(X,\iota,\lambda,\varrho)$ over $S$, where $(X,\iota,\lambda)$ is as above and
\begin{equation*}
\varrho: X \times_S \overbar{S} \to \mathbb{X} \times_{\Spec \overbar{k}} \overbar{S}
\end{equation*}
is a quasi-isogeny between the tuple $(X,\iota,\lambda)$ and the framing object $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ (after base change to $\overbar{S}$).
Two objects $(X,\iota,\lambda,\varrho)$ and $(X',\iota',\lambda',\varrho')$ are equivalent if and only if there exists an isomorphism $\varphi: (X,\iota,\lambda) \to (X',\iota',\lambda')$ such that $\varrho = \varrho' \circ (\varphi \times_S \overbar{S})$.
\end{defn}
\begin{rmk} \label{RP_loftnaive}
\begin{enumerate}
\item \label{RP_loftnaive1} The morphism $\varrho$ is a quasi-isogeny in the sense of Definition \ref{RP_isonaive}, \emph{i.e.}, we have $\lambda = \varrho^{\ast}(\lambda_{\mathbb{X}})$.
Similarly, we have $\lambda = \varphi^{\ast}(\lambda')$ for the isomorphism $\varphi$.
We obtain an equivalent definition of $\mathcal{N}_E^{\mathrm{naive}}$ if we replace strict equality by the condition that, locally on $S$, $\lambda$ and $\varrho^{\ast}(\lambda_{\mathbb{X}})$ (resp.\ $\varphi^{\ast}(\lambda')$) only differ by a scalar in $O_F^{\times}$.
This variant is used in the definition of RZ-spaces of (PEL) type for $p > 2$ in \cite{RZ96}.
In this paper we will use the version with strict equality, since it simplifies the formulation of the straightening condition, see Definition \ref{RP_strdef} below.
\item \label{RP_loftnaive2} $\mathcal{N}_E^{\mathrm{naive}}$ is pro-representable by a formal scheme, formally locally of finite type over $\Spf \breve{O}_F$.
This follows from \cite[Thm.\ 3.25]{RZ96}.
\end{enumerate}
\end{rmk}
As a next step, we use Dieudonn\'e theory in order to get a better understanding of the special fiber of $\mathcal{N}_E^{\mathrm{naive}}$.
Let $N = N_{\mathbb{X}}$ be the rational Dieudonn\'e module of the base point $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ of $\mathcal{N}_E^{\mathrm{naive}}$.
This is a $4$-dimensional vector space over $\breve{F}$, equipped with an $E$-action, an alternating form $\langle \,, \rangle$ and two operators ${\bf V}$ and ${\bf F}$.
As in the proof of Proposition \ref{RP_frnaive}, the form $\langle \,, \rangle$ satisfies condition \eqref{RP_alt}:
\begin{equation}
\langle x, \Pi y \rangle = - \langle \Pi x, y \rangle.
\end{equation}
A point $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ corresponds to an $\breve{O}_F$-lattice $M_X \subseteq N$.
It is stable under the actions of the operators ${\bf V}$ and ${\bf F}$ and of the ring $O_E$.
Furthermore $M_X$ is unimodular under $\langle \,, \rangle$, \emph{i.e.}, $M_X = M_X^{\vee}$, where
\begin{equation*}
M_X^{\vee} = \{ x \in N \mid \langle x, y \rangle \in \breve{O}_F \text{ for all } y \in M_X \}.
\end{equation*}
We can regard $N$ as a $2$-dimensional vector space over $\breve{E}$ with the $\breve{E}|\breve{F}$-hermitian form $h$ defined by
\begin{equation}
h(x,y) = \langle \Pi x, y \rangle + \Pi \langle x, y \rangle.
\end{equation}
Let $\breve{O}_E = O_E \otimes_{O_F} \breve{O}_F$.
Then $M_X \subseteq N$ is an $\breve{O}_E$-lattice and we have
\begin{equation*}
M_X = M_X^{\vee} = M_X^{\sharp},
\end{equation*}
where $M_X^{\sharp}$ is the dual lattice of $M_X$ with respect to $h$.
The latter equality follows from the formula
\begin{equation}
\langle x, y \rangle = \Tr_{\breve{E}|\breve{F}} \left(\frac{1}{2 \Pi} \cdot h(x,y) \right)
\end{equation}
and the fact that the inverse different of $E|F$ is $\mathfrak{D}_{E|F}^{-1} = \frac{1}{2\Pi}O_E$ (see Lemma \ref{LA_diff}).
We can thus write the set $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ as
\begin{equation} \label{RP_geompts}
\mathcal{N}_E^{\mathrm{naive}}(\overbar{k}) = \{ \breve{O}_E \text{-lattices } M \subseteq N_{\mathbb{X}} \mid M^{\sharp} = M, \pi_0 M \subseteq {\bf V}M \subseteq M \}.
\end{equation}
Let $\tau = \Pi {\bf V}^{-1}$.
This is a $\sigma$-linear operator on $N$ with all slopes zero.
The elements invariant under $\tau$ form a $2$-dimensional $E$-vector space $C = N^{\tau}$.
The hermitian form $h$ is invariant under $\tau$, hence it induces a split hermitian form on $C$ which we denote again by $h$.
With the same proof as in \cite[Lemma 3.2]{KR14}, we have:
\begin{lem} \label{RP_latt}
Let $M \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$.
Then:
\begin{enumerate}
\item $M + \tau(M)$ is $\tau$-stable.
\item Either $M$ is $\tau$-stable and $\Lambda_0 = M^{\tau} \subseteq C$ is unimodular $(\Lambda_0^{\sharp} = \Lambda_0)$ or $M$ is not $\tau$-stable and then $\Lambda_{-1} = (M + \tau(M))^{\tau} \subseteq C$ is $\Pi^{-1}$-modular $(\Lambda_{-1}^{\sharp} = \Pi \Lambda_{-1})$.
\end{enumerate}
\end{lem}
Under the identification $N = C \otimes_E \breve{E}$, we get $M = \Lambda_0 \otimes_{O_E} \breve{O}_E$ for any $\tau$-stable Dieudonn\'e lattice $M$, with $\Lambda_0 = M^{\tau}$.
If $M$ is not $\tau$-stable, we have $M + \tau M = \Lambda_{-1} \otimes_{O_E} \breve{O}_E$ and $M \subseteq \Lambda_{-1} \otimes_{O_E} \breve{O}_E$ is a sublattice of index $1$.
The next lemma is the analogue of \cite[Lemma 3.3]{KR14}.
\begin{lem} \label{RP_IS}
\begin{enumerate}
\item \label{RP_IS1} Fix a $\Pi^{-1}$-modular lattice $\Lambda_{-1} \subseteq C$.
There is an injective map
\begin{equation*}
i_{\Lambda_{-1}}: \mathbb{P}(\Lambda_{-1} / \Pi \Lambda_{-1})(\overbar{k}) \inj \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})
\end{equation*}
mapping a line $\ell \subseteq (\Lambda_{-1} / \Pi \Lambda_{-1}) \otimes \overbar{k}$ to its preimage in $\Lambda_{-1} \otimes \breve{O}_E$.
Identify $\mathbb{P}(\Lambda_{-1} / \Pi \Lambda_{-1})(\overbar{k})$ with its image in $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$.
Then $\mathbb{P}(\Lambda_{-1} / \Pi \Lambda_{-1})(k) \subseteq \mathbb{P}(\Lambda_{-1} / \Pi \Lambda_{-1})(\overbar{k})$ is the set of $\tau$-invariant Dieudonn\'e lattices $M \subseteq \Lambda_{-1} \otimes \breve{O}_E$.
\item The set $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ is a union
\begin{equation} \label{RP_ISunion}
\mathcal{N}_E^{\mathrm{naive}}(\overbar{k}) = \smashoperator[r]{\bigcup_{\Lambda_{-1} \subseteq C}} \mathbb{P}(\Lambda_{-1} / \Pi \Lambda_{-1})(\overbar{k}),
\end{equation}
ranging over all $\Pi^{-1}$-modular lattices $\Lambda_{-1} \subseteq C$.
The projective lines corresponding to the lattices $\Lambda_{-1}$ and $\Lambda_{-1}'$ intersect in $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ if and only if $\Lambda_0 = \Lambda_{-1} \cap \Lambda_{-1}'$ is unimodular.
In this case, their intersection consists of the point $M = \Lambda_0 \otimes \breve{O}_E \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$.
\end{enumerate}
\end{lem}
\begin{proof}
We only have to prove that the map $i_{\Lambda_{-1}}$ is well-defined.
Denote by $M$ the preimage of $\ell \subseteq (\Lambda_{-1} / \Pi \Lambda_{-1}) \otimes \overbar{k}$ in $\Lambda_{-1} \otimes \breve{O}_E$.
We need to show that $M$ is an element in $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ under the identification of \eqref{RP_geompts}.
It is clearly a sublattice of index $1$ in $\Lambda_{-1} \otimes \breve{O}_E$, stable under the actions of ${\bf F}$, ${\bf V}$ and $O_E$.
Let $e_1 \in \Lambda_{-1} \otimes \breve{O}_E$ such that $e_1 \otimes \overbar{k}$ generates $\ell$.
We can extend this to a basis $(e_1,e_2)$ of $\Lambda_{-1}$ and with respect to this basis, $h$ is represented by a matrix of the form
\begin{equation*}
\begin{pmatrix}
x & -\Pi^{-1} \\
\Pi^{-1} & y \\
\end{pmatrix},
\end{equation*}
with $x, y \in \Pi^{-1} \breve{O}_E \cap \breve{O}_F = \breve{O}_F$.
The lattice $M \subseteq \Lambda_{-1} \otimes \breve{O}_E$ is generated by $e_1$ and $\Pi e_2$.
With respect to this new basis, $h$ is now given by the matrix
\begin{equation*}
\begin{pmatrix*}
x & 1 \\
1 & \pi_0 y \\
\end{pmatrix*}.
\end{equation*}
Since all entries of the matrix are integral, we have $M \subseteq M^{\sharp}$.
But this already implies $M^{\sharp} = M$, because they both have index $1$ in $\Lambda_{-1} \otimes \breve{O}_E$.
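The index count can be spelled out as follows (a sketch): dualizing is inclusion-reversing and preserves indices, and $(\Lambda_{-1} \otimes \breve{O}_E)^{\sharp} = \Pi \Lambda_{-1} \otimes \breve{O}_E$ since $\Lambda_{-1}$ is $\Pi^{-1}$-modular. Hence $[M^{\sharp} : \Pi \Lambda_{-1} \otimes \breve{O}_E] = [\Lambda_{-1} \otimes \breve{O}_E : M] = 1$ and
\begin{equation*}
[\Lambda_{-1} \otimes \breve{O}_E : M^{\sharp}] = [\Lambda_{-1} \otimes \breve{O}_E : \Pi \Lambda_{-1} \otimes \breve{O}_E] - [M^{\sharp} : \Pi \Lambda_{-1} \otimes \breve{O}_E] = 2 - 1 = 1,
\end{equation*}
so $M \subseteq M^{\sharp}$ are both of index $1$ in $\Lambda_{-1} \otimes \breve{O}_E$ and therefore coincide.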
Thus $M \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ and $i_{\Lambda_{-1}}$ is well-defined.
\end{proof}
\begin{rmk} \label{RP_rmklatt}
\begin{enumerate}
\item \label{RP_rmklatt1} Recall from Proposition \ref{LA_latt} that the isomorphism type of a $\Pi^i$-modular lattice $\Lambda \subseteq C$ only depends on its norm ideal $\Nm(\Lambda) = \langle \{ h(x,x) | x \in \Lambda \} \rangle = \pi_0^{\ell} O_F \subseteq F$.
In the case that $\Lambda = \Lambda_0$ is unimodular or $\Lambda = \Lambda_{-1}$ is $\Pi^{-1}$-modular, $\ell$ can be any integer such that $|1| \geq |\pi_0|^{\ell} \geq |2|$.
In particular, there are always at least two possible values for $\ell$.
Recall from Lemma \ref{LA_hyp}, that $\Lambda$ is \emph{hyperbolic} if and only if $\Nm(\Lambda) = 2 O_F$.
\item The intersection behaviour of the projective lines in $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ can be deduced from Proposition \ref{LA_lattRP}.
In particular, for a given unimodular lattice $\Lambda_0 \subseteq C$ with $\Nm(\Lambda_0) \subseteq \pi_0 O_F$, there are $q+1$ lines intersecting in $M = \Lambda_0 \otimes \breve{O}_E$.
If $\Nm(\Lambda_0) = O_F$, the lattice $M = \Lambda_0 \otimes \breve{O}_E$ is only contained in one projective line.
On the other hand, a projective line $\mathbb{P}(\Lambda_{-1} / \Pi \Lambda_{-1})(\overbar{k}) \subseteq \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ contains $q+1$ points corresponding to unimodular lattices in $C$.
By Lemma \ref{RP_IS} \eqref{RP_IS1}, these are exactly the $k$-rational points of $\mathbb{P}(\Lambda_{-1} / \Pi \Lambda_{-1})$.
\item \label{RP_rmklatt3} If we restrict the union at the right hand side of \eqref{RP_ISunion} to hyperbolic $\Pi^{-1}$-modular lattices $\Lambda_{-1} \subseteq C$ (\emph{i.e.}, $\Nm(\Lambda_{-1}) = 2 O_F$, see Lemma \ref{LA_hyp}), we obtain a canonical subset $\mathcal{N}_E(\overbar{k}) \subseteq \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ and there is a description of $\mathcal{N}_E$ as a pro-representable functor on $\Nilp$ (see below).
We will see later (Theorem \ref{RP_thm}) that $\mathcal{N}_E$ is isomorphic to the Drinfeld moduli space $\mathcal{M}_{Dr}$, described in \cite[I.3]{BC91}.
In particular, the underlying topological space of $\mathcal{N}_E$ is connected.
(The induced topology on the projective lines is the Zariski topology, see Proposition \ref{RP_NEnaiveRL}.)
Moreover, each projective line in $\mathcal{N}_E(\overbar{k})$ has $q+1$ intersection points and there are $2$ projective lines intersecting in each such point (see also Proposition \ref{LA_lattRP}). \\
We fix such an intersection point $P \in \mathcal{N}_E(\overbar{k})$.
Now going back to $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$, there are $q-1$ additional lines going through $P \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ that correspond to non-hyperbolic lattices in $C$ (see Proposition \ref{LA_lattRP}).
Each of these additional lines contains $P$ as its only ``hyperbolic'' intersection point; all other intersection points on such a line, and the line itself, correspond to unimodular resp.\ $\Pi^{-1}$-modular lattices $\Lambda \subseteq C$ of norm $\Nm (\Lambda) = (2 / \pi_0)O_F$ (whereas all hyperbolic lattices occurring have the norm ideal $2 O_F$, see Lemma \ref{LA_hyp}).
Assume $\mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k}) \subseteq \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ is such a line and let $P' \in \mathbb{P}(\Lambda / \Pi \Lambda) (\overbar{k})$ be an intersection point, where $P \neq P'$.
There are again $q$ more lines going through $P'$ (always $q+1$ in total) that correspond to lattices with norm ideal $\Nm (\Lambda) = (2 / \pi_0^2)O_F$, and these lines again have more intersection points and so on.
This goes on until we reach lines $\mathbb{P}(\Lambda' / \Pi \Lambda')(\overbar{k})$ with $\Nm (\Lambda') = O_F$.
Each of these lines contains $q$ points that correspond to unimodular lattices $\Lambda_0 \subseteq C$ with $\Nm(\Lambda_0) = O_F$.
Such a lattice is only contained in one $\Pi^{-1}$-modular lattice (see part \ref{LA_lattRPmin} of Proposition \ref{LA_lattRP}).
Hence, these points are only contained in one projective line, namely $\mathbb{P}(\Lambda' / \Pi \Lambda')(\overbar{k})$. \\
In other words, each intersection point $P \in \mathcal{N}_E(\overbar{k})$ has a ``tail'', consisting of finitely many projective lines, which is the connected component of $P$ in $(\mathcal{N}_E^{\mathrm{naive}} (\overbar{k}) \setminus \mathcal{N}_E(\overbar{k})) \cup \{ P \}$.
Figure \ref{RP_NEnaivejpg} shows a drawing of $(\mathcal{N}_E^{\mathrm{naive}})_{\red}$ for the cases $F = \mathbb{Q}_2$ (on the left hand side) and $F|\mathbb{Q}_2$ a ramified quadratic extension (on the right hand side).
The ``tails'' are indicated by dashed lines.
\end{enumerate}
\end{rmk}
\begin{figure}[hbt]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width = \textwidth]{R-P_NEnaive.jpg}
\caption{$e = 1$, $f = 1$.}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width = \textwidth]{R-P_NEnaive_1.jpg}
\caption{$e= 2$, $f = 1$.}
\end{subfigure}
\caption{
The reduced locus of $\mathcal{N}_E^{\mathrm{naive}}$ for $E|F$ of type (R-P) where $F|\mathbb{Q}_2$ has ramification index $e$ and inertia degree $f$.
Solid lines are given by subschemes $\mathcal{N}_{E, \Lambda}$ for hyperbolic lattices $\Lambda$.
}
\label{RP_NEnaivejpg}
\end{figure}
Fix a $\Pi^{-1}$-modular lattice $\Lambda = \Lambda_{-1} \subseteq C$.
Let $X_{\Lambda}^{+}$ be the formal $O_F$-module over $\Spec \overbar{k}$ associated to the Dieudonn\'e lattice $M = \Lambda \otimes \breve{O}_E \subseteq N$.
It comes with a canonical quasi-isogeny
\begin{equation*}
\varrho_{\Lambda}^{+} : \mathbb{X} \to X_{\Lambda}^{+}
\end{equation*}
of $F$-height $1$.
We define a subfunctor $\mathcal{N}_{E, \Lambda} \subseteq \mathcal{N}_E^{\mathrm{naive}}$ by mapping $S \in \Nilp$ to
\begin{equation} \label{RP_NEL}
\mathcal{N}_{E, \Lambda}(S) = \{ (X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S) \mid (\varrho_{\Lambda}^{+} \times S) \circ \varrho \text{ is an isogeny} \}.
\end{equation}
Note that the condition of \eqref{RP_NEL} is closed, \emph{cf.}\ \cite[Prop.\ 2.9]{RZ96}.
Hence $\mathcal{N}_{E, \Lambda}$ is representable by a closed formal subscheme of $\mathcal{N}_E^{\mathrm{naive}}$.
On geometric points, we have a bijection
\begin{equation} \label{RP_NEL=P1geom}
\mathcal{N}_{E, \Lambda}(\overbar{k}) \isoarrow \mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k}),
\end{equation}
as a consequence of Lemma \ref{RP_IS} \eqref{RP_IS1}.
\begin{prop} \label{RP_NEnaiveRL}
The reduced locus of $\mathcal{N}_E^{\mathrm{naive}}$ is given by
\begin{equation*}
(\mathcal{N}_E^{\mathrm{naive}})_{\red} = \smashoperator[r]{\bigcup_{\Lambda \subseteq C}} \mathcal{N}_{E, \Lambda},
\end{equation*}
where $\Lambda$ runs over all $\Pi^{-1}$-modular lattices in $C$.
For each $\Lambda$, there is an isomorphism of reduced schemes
\begin{equation*}
\mathcal{N}_{E, \Lambda} \isoarrow \mathbb{P}(\Lambda / \Pi \Lambda),
\end{equation*}
inducing the map \eqref{RP_NEL=P1geom} on $\overbar{k}$-valued points.
\end{prop}
\begin{proof}
The embedding
\begin{equation} \label{RP_NEnaivered}
\smashoperator[r]{\bigcup_{\Lambda \subseteq C}} (\mathcal{N}_{E, \Lambda})_{\red} \inj (\mathcal{N}_E^{\mathrm{naive}})_{\red}
\end{equation}
is closed, because each embedding $\mathcal{N}_{E, \Lambda} \subseteq \mathcal{N}_E^{\mathrm{naive}}$ is closed and, locally on $(\mathcal{N}_E^{\mathrm{naive}})_{\red}$, the left-hand side is a finite union of the $(\mathcal{N}_{E, \Lambda})_{\red}$.
It follows already that \eqref{RP_NEnaivered} is an isomorphism, since it is a bijection on $\overbar{k}$-valued points (see the equations \eqref{RP_ISunion} and \eqref{RP_NEL=P1geom}) and $(\mathcal{N}_E^{\mathrm{naive}})_{\red}$ is reduced by definition and locally of finite type over $\Spec \overbar{k}$ by Remark \ref{RP_loftnaive} \eqref{RP_loftnaive2}.
For the second part of the proposition, we follow the proof presented in \cite[4.2]{KR14}.
Fix a $\Pi^{-1}$-modular lattice $\Lambda \subseteq C$ and let $M = \Lambda \otimes \breve{O}_E \subseteq N$, as above.
Now $X_{\Lambda}^{+}$ is the formal $O_F$-module associated to $M$, but we also get a formal $O_F$-module $X_{\Lambda}^{-}$ associated to the dual $M^{\sharp} = \Pi M$ of $M$.
This comes with a natural isogeny
\begin{equation*}
\nat_{\Lambda}: X_{\Lambda}^{-} \to X_{\Lambda}^{+}
\end{equation*}
and a quasi-isogeny $\varrho_{\Lambda}^{-} : X_{\Lambda}^{-} \to \mathbb{X}$ of $F$-height $1$.
For $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$ where $S \in \Nilp$, we consider the composition
\begin{equation*}
\varrho_{\Lambda, X}^{-} = \varrho^{-1} \circ (\varrho_{\Lambda}^{-} \times S): (X_{\Lambda}^{-} \times S) \to X.
\end{equation*}
By \cite[Lemma 4.2]{KR14}, this composition is an isogeny if and only if $(\varrho_{\Lambda}^{+} \times S) \circ \varrho$ is an isogeny, or, in other words, if and only if $(X,\iota,\lambda,\varrho) \in \mathcal{N}_{E, \Lambda}(S)$.
Let $\mathbb{D}_{X_{\Lambda}^{-}}(S)$ be the (relative) Grothendieck-Messing crystal of $X_{\Lambda}^{-}$ evaluated at $S$ (\emph{cf.}\ \cite[Def.\ 3.24]{ACZ} or \cite[5.2]{Ahs11}).
This is a locally free $\mathcal{O}_S$-module of rank $4$, isomorphic to $(\Lambda / \pi_0 \Lambda) \otimes_{O_F} \mathcal{O}_S$.
The kernel of $\mathbb{D}(\nat_{\Lambda})(S)$ is given by $(\Lambda / \Pi \Lambda) \otimes_{O_F} \mathcal{O}_S$, locally a direct summand of rank $2$ of $\mathbb{D}_{X_{\Lambda}^{-}}(S)$.
For any $(X,\iota,\lambda,\varrho) \in \mathcal{N}_{E, \Lambda}(S)$, the kernel of $\varrho_{\Lambda, X}^{-}$ is contained in $\ker (\nat_{\Lambda})$.
It follows from \cite[Cor.\ 4.7]{VW11} (see also \cite[Prop.\ 4.6]{KR14}) that $\ker \mathbb{D}(\varrho_{\Lambda, X}^{-})(S)$ is locally a direct summand of rank $1$ of $(\Lambda / \Pi \Lambda) \otimes_{O_F} \mathcal{O}_S$.
This induces a map
\begin{equation*}
\mathcal{N}_{E, \Lambda}(S) \to \mathbb{P}(\Lambda / \Pi \Lambda)(S),
\end{equation*}
functorial in $S$, and the arguments of \cite[4.7]{VW11} show that it is an isomorphism.
(One easily checks that their results indeed carry over to the relative setting over $O_F$.)
\end{proof}
\subsection{Construction of the closed formal subscheme $\mathcal{N}_E \subseteq \mathcal{N}_E^{\mathrm{naive}}$}
\label{RP2}
We now use a result from section \ref{POL}.
By Theorem \ref{POL_thm} and Remark \ref{POL_rmk} \eqref{POL_rmk2}, there exists a principal polarization $\widetilde{\lambda}_{\mathbb{X}}: \mathbb{X} \to \mathbb{X}^{\vee}$ on $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$, unique up to a scalar in $O_E^{\times}$, such that the induced Rosati involution is the identity on $O_E$.
Furthermore, for any $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$, the pullback $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$ is a principal polarization on $X$.
The next proposition is crucial for the construction of $\mathcal{N}_E$.
Recall the notion of a \emph{hyperbolic} lattice from Proposition \ref{LA_latt} and the subsequent discussion.
\begin{prop} \label{RP_strprop}
It is possible to choose $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ and $\widetilde{\lambda}_{\mathbb{X}}$ such that
\begin{equation*}
\lambda_{\mathbb{X},1} = \frac{1}{2} (\lambda_{\mathbb{X}} + \widetilde{\lambda}_{\mathbb{X}}) \in \Hom(\mathbb{X},\mathbb{X}^{\vee}).
\end{equation*}
Fix such a choice and let $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$.
Then, $\frac{1}{2} (\lambda + \widetilde{\lambda}) \in \Hom(X,X^{\vee})$ if and only if $(X,\iota,\lambda,\varrho) \in \mathcal{N}_{E, \Lambda}(\overbar{k})$ for some hyperbolic lattice $\Lambda \subseteq C$.
\end{prop}
\begin{proof}
The polarization $\widetilde{\lambda}_{\mathbb{X}}$ on $\mathbb{X}$ induces an alternating form $(\,,)$ on the rational Dieudonn\'e module $N = M_{\mathbb{X}} \otimes_{\breve{O}_F} \breve{F}$.
For all $x,y \in N$, the form $(\,,)$ satisfies the equations
\begin{align*}
({\bf F}x, y) &= (x, {\bf V}y)^{\sigma}, \\
(\Pi x, y) &= (x, \Pi y).
\end{align*}
It induces an $\breve{E}$-alternating form $b$ on $N$ via
\begin{equation*}
b(x,y) = \delta((\Pi x, y) + \Pi (x,y)),
\end{equation*}
where $\delta \in \breve{O}_F$ is a unit generating the unramified quadratic extension of $F$, chosen such that $\delta^{\sigma} = -\delta$ and $\frac{1+\delta}{2} \in \breve{O}_F$, see page \pageref{LA_quadext}.
On the other hand, we can describe $(\,,)$ in terms of $b$,
\begin{equation} \label{RP_altalt}
(x,y) = \Tr_{\breve{E}|\breve{F}} \left( \frac{1}{2\Pi\delta} \cdot b(x,y) \right).
\end{equation}
The form $b$ is invariant under $\tau = \Pi {\bf V}^{-1}$, since
\begin{equation*}
b(\tau x, \tau y) = b(-{\bf F} \Pi^{-1} x, \Pi {\bf V}^{-1} y) = b(\Pi^{-1} x, \Pi y)^{\sigma} = b(x,y)^{\sigma}.
\end{equation*}
Hence $b$ defines an $E$-linear alternating form on $C = N^{\tau}$, which we again denote by $b$.
Denote by $\langle \,, \rangle$ the alternating form on $M_{\mathbb{X}}$ induced by the polarization $\lambda_{\mathbb{X}}$ and let $h$ be the corresponding hermitian form, see \eqref{RP_herm}.
On $N_{\mathbb{X}}$, we define the alternating form $\langle \,, \rangle_1$ by
\begin{equation*}
\langle x, y \rangle_1 = \frac{1}{2} (\langle x, y \rangle + (x,y)).
\end{equation*}
This form is integral on $M_{\mathbb{X}}$ if and only if $\lambda_{\mathbb{X},1} = \frac{1}{2} (\lambda_{\mathbb{X}} + \widetilde{\lambda}_{\mathbb{X}})$ is a polarization on $\mathbb{X}$.
We choose $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ such that it corresponds to a unimodular hyperbolic lattice $\Lambda_0 \subseteq (C,h)$ under the identifications of \eqref{RP_geompts} and Lemma \ref{RP_latt}.
There exists a basis $(e_1,e_2)$ of $\Lambda_0$ such that
\begin{equation} \label{RP_matrices}
h \, \weq \begin{pmatrix}
& 1 \\
1 & \\
\end{pmatrix}, \quad b \, \weq \begin{pmatrix}
& u \\
-u & \\
\end{pmatrix},
\end{equation}
for some $u \in E^{\times}$.
Since $\widetilde{\lambda}_{\mathbb{X}}$ is principal, the alternating form $b$ is perfect on $\Lambda_0$, thus $u \in O_E^{\times}$.
After rescaling $\widetilde{\lambda}_{\mathbb{X}}$, we may assume that $u = 1$.
We now have
\begin{equation*}
\frac{1}{2} (h(x,y) + b(x,y)) \in O_E,
\end{equation*}
for all $x,y \in \Lambda_0$.
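This integrality can be seen explicitly (a sketch; we take $h$ to be $\sigma$-linear in the first variable, the argument for the other convention being symmetric). Writing $x = a_1 e_1 + a_2 e_2$ and $y = c_1 e_1 + c_2 e_2$ with $a_i, c_i \in O_E$, the matrices \eqref{RP_matrices} with $u = 1$ yield
\begin{align*}
h(x,y) + b(x,y) &= (a_1^{\sigma} c_2 + a_2^{\sigma} c_1) + (a_1 c_2 - a_2 c_1) \\
&= (a_1 + a_1^{\sigma})\, c_2 + (a_2^{\sigma} - a_2)\, c_1 \in 2 O_E,
\end{align*}
since for $a = s + t\Pi \in O_E$ we have $a + a^{\sigma} = 2s \in 2 O_F$ and $a^{\sigma} - a = -2t\Pi \in 2\Pi O_F$.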
Thus $\frac{1}{2} (h + b)$ is integral on $M_{\mathbb{X}} = \Lambda_0 \otimes_{O_E} \breve{O}_E$.
This implies that
\begin{align*}
\langle x, y \rangle_1 & = \frac{1}{2} (\langle x, y \rangle + (x,y)) = \frac{1}{2} \Tr_{\breve{E}|\breve{F}} \left(\frac{1}{2 \Pi} \cdot h(x,y) + \frac{1}{2\Pi \delta} \cdot b(x,y) \right) \\
& = \Tr_{\breve{E}|\breve{F}} \left(\frac{1}{4 \Pi} (h(x,y) + b(x,y)) \right) + \Tr_{\breve{E}|\breve{F}} \left(\frac{1-\delta}{4\Pi \delta} \cdot b(x,y) \right) \in \breve{O}_F,
\end{align*}
for all $x,y \in M_{\mathbb{X}}$.
Indeed, in the definition of $b$, the unit $\delta$ has been chosen such that $\frac{1+\delta}{2} \in \breve{O}_F$, so the second summand is in $\breve{O}_F$.
The first summand is integral, since $\frac{1}{2} (h + b)$ is integral.
It follows that $\lambda_{\mathbb{X},1} = \frac{1}{2} (\lambda_{\mathbb{X}} + \widetilde{\lambda}_{\mathbb{X}})$ is a polarization on $\mathbb{X}$.
Let $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ and assume that $\lambda_1 = \frac{1}{2} (\lambda + \widetilde{\lambda}) = \varrho^{\ast}(\lambda_{\mathbb{X},1})$ is a polarization on $X$.
Then $\langle \,, \rangle_1$ is integral on the Dieudonn\'e module $M \subseteq N$ of $X$.
By the above calculation, this is equivalent to $\frac{1}{2}(h + b)$ being integral on $M$.
In particular, this implies that
\begin{equation*}
h(x,x) = h(x,x) + b(x,x) \in 2 \breve{O}_F,
\end{equation*}
for all $x \in M$, since $b$ is alternating and thus $b(x,x) = 0$.
Let $\Lambda = (M +\tau(M))^{\tau}$.
Then $h(x,x) \in 2O_F$ for all $x \in \Lambda$, hence $\Nm (\Lambda) \subseteq 2O_F$.
By Lemma \ref{LA_hyp} and the bound on norm ideals, we have $\Nm (\Lambda) = 2O_F$ and $\Lambda$ is a hyperbolic lattice.
It follows that $(X,\iota,\lambda,\varrho) \in \mathcal{N}_{E, \Lambda'}(\overbar{k})$ for some hyperbolic $\Pi^{-1}$-modular lattice $\Lambda' \subseteq C$.
Indeed, if $M^{\tau} \subsetneq \Lambda$ then $\Lambda$ is $\Pi^{-1}$-modular and $\Lambda' = \Lambda$. If $M^{\tau} = \Lambda$ then it is contained in some $\Pi^{-1}$-modular hyperbolic lattice $\Lambda'$ by Proposition \ref{LA_lattRP}.
Conversely, assume that $(X,\iota,\lambda,\varrho) \in \mathcal{N}_{E, \Lambda}(\overbar{k})$ for some hyperbolic lattice $\Lambda \subseteq C$.
It suffices to show that $\frac{1}{2} (h + b)$ is integral on $\Lambda$.
Indeed, it follows that $\frac{1}{2} (h + b)$ is integral on the Dieudonn\'e module $M$.
Thus $\langle \,, \rangle_1$ is integral on $M$ and this is equivalent to $\lambda_1 = \frac{1}{2} (\lambda + \widetilde{\lambda}) \in \Hom(X,X^{\vee})$.
Let $\Lambda' \subseteq C$ be the $\Pi^{-1}$-modular lattice generated by $e_1$ and $\Pi^{-1} e_2$, where $(e_1,e_2)$ is the basis of the lattice $\Lambda_0$ corresponding to the framing object $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$.
By \eqref{RP_matrices}, $h$ and $b$ have the following form with respect to the basis $(e_1, \Pi^{-1} e_2)$,
\begin{equation*}
h \, \weq \begin{pmatrix}
& -\Pi^{-1} \\
\Pi^{-1} & \\
\end{pmatrix}, \quad b \, \weq \begin{pmatrix}
& \Pi^{-1} \\
-\Pi^{-1} & \\
\end{pmatrix}.
\end{equation*}
In particular, $\Lambda'$ is hyperbolic and $\frac{1}{2} (h + b)$ is integral on $\Lambda'$.
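Both claims can also be verified directly (a sketch; $h$ is taken $\sigma$-linear in the first variable). For $x = a_1 e_1 + a_2 \Pi^{-1} e_2$ and $y = c_1 e_1 + c_2 \Pi^{-1} e_2$ with $a_i, c_i \in O_E$, the displayed matrices give
\begin{align*}
h(x,x) &= \Pi^{-1} \left( a_2^{\sigma} a_1 - (a_2^{\sigma} a_1)^{\sigma} \right) \in 2 O_F, \\
h(x,y) + b(x,y) &= \Pi^{-1} \left( (a_2^{\sigma} - a_2)\, c_1 + (a_1 - a_1^{\sigma})\, c_2 \right) \in \Pi^{-1} \cdot 2\Pi O_E = 2 O_E,
\end{align*}
using $a^{\sigma} - a \in 2\Pi O_F$ for $a \in O_E$. Moreover, the value $h(x,x) = 2$ is attained for $a_1 = \Pi$, $a_2 = 1$, so $\Nm(\Lambda') = 2 O_F$ and $\Lambda'$ is hyperbolic by Lemma \ref{LA_hyp}.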
By Proposition \ref{LA_latt}, there exists an automorphism $g \in \SU(C,h)$ mapping $\Lambda$ onto $\Lambda'$.
Since $\det g = 1$, the alternating form $b$ is invariant under $g$.
It follows that $\frac{1}{2} (h + b)$ is also integral on $\Lambda$.
\end{proof}
From now on, we assume $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ and $\widetilde{\lambda}_{\mathbb{X}}$ chosen in a way such that
\begin{equation*}
\lambda_{\mathbb{X},1} = \frac{1}{2} (\lambda_{\mathbb{X}} + \widetilde{\lambda}_{\mathbb{X}}) \in \Hom(\mathbb{X},\mathbb{X}^{\vee}).
\end{equation*}
Note that this determines the polarization $\widetilde{\lambda}_{\mathbb{X}}$ up to a scalar in $1 + 2O_E$.
If we replace $\widetilde{\lambda}_{\mathbb{X}}$ by $\widetilde{\lambda}_{\mathbb{X}}' = \widetilde{\lambda}_{\mathbb{X}} \circ \iota_{\mathbb{X}}(1 + 2u)$ for some $u \in O_E$, then $\lambda_{\mathbb{X},1}' = \lambda_{\mathbb{X},1} + \widetilde{\lambda}_{\mathbb{X}} \circ \iota_{\mathbb{X}}(u)$.
We can now formulate the straightening condition.
\begin{defn} \label{RP_strdef}
Let $S \in \Nilp$.
An object $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$ satisfies the \emph{straightening condition} if
\begin{equation} \label{RP_strcond}
\lambda_1 \in \Hom(X,X^{\vee}),
\end{equation}
where $\lambda_1 = \frac{1}{2} (\lambda + \widetilde{\lambda}) = \varrho^{\ast}(\lambda_{\mathbb{X},1})$.
\end{defn}
This definition is clearly independent of the choice of the polarization $\widetilde{\lambda}_{\mathbb{X}}$.
We define $\mathcal{N}_E$ as the functor that maps $S \in \Nilp$ to the set of all tuples $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$ that satisfy the straightening condition.
By \cite[Prop.\ 2.9]{RZ96}, $\mathcal{N}_E$ is representable by a closed formal subscheme of $\mathcal{N}_E^{\mathrm{naive}}$.
\begin{rmk} \label{RP_rmkNE}
The reduced locus of $\mathcal{N}_E$ can be written as
\begin{equation*}
(\mathcal{N}_E)_{\red} = \smashoperator[r]{\bigcup_{\Lambda \subseteq C}} \mathcal{N}_{E, \Lambda} \simeq \smashoperator[r]{\bigcup_{\Lambda \subseteq C}} \mathbb{P}(\Lambda / \Pi \Lambda),
\end{equation*}
where we take the unions over all \emph{hyperbolic} $\Pi^{-1}$-modular lattices $\Lambda \subseteq C$.
By Proposition \ref{LA_lattRP} and Lemma \ref{RP_IS}, each projective line contains $q+1$ points corresponding to unimodular lattices and there are two lines intersecting in each such point.
Recall from Remark \ref{RP_rmklatt} \eqref{RP_rmklatt1} that there exist non-hyperbolic $\Pi^{-1}$-modular lattices $\Lambda \subseteq C$, thus we have $\mathcal{N}_E(\overbar{k}) \neq \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$, and in particular $(\mathcal{N}_E)_{\red} \neq (\mathcal{N}_E^{\mathrm{naive}})_{\red}$.
\end{rmk}
\begin{rmk} \label{RP_genfiber}
As has been pointed out to the author by A.\ Genestier, the straightening condition is not trivial on the rigid-analytic generic fiber of $\mathcal{N}_E^{\mathrm{naive}}$.
However, we can show that it is open and closed.
Since a proper study of the generic fiber would go beyond the scope of this paper, we restrict ourselves to indications rather than complete proofs.
Let $C$ be an algebraically closed extension of $F$ and $\mathcal{O}_C$ its ring of integers.
Take a point $x = (X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(\mathcal{O}_C)$ and consider its $2$-adic Tate module $T_2(x)$.
It is a free $O_E$-module of rank $2$ and $\lambda$ endows $T_2(x)$ with a perfect (non-split) hermitian form $h$.
If $x \in \mathcal{N}_E(\mathcal{O}_C)$, then the straightening condition implies that $(T_2(x),h)$ is a lattice with minimal norm\footnote{Calling this lattice ``hyperbolic'' doesn't make much sense here since it is anisotropic.}
$\Nm(T_2(x))$ in the vector space $V_2(x) = T_2(x) \otimes_{O_E} E$ (see Proposition \ref{LA_latt} and \cite{Jac62}).
But $V_2(x)$ also contains selfdual lattices with non-minimal norm ideal.
Let $\Lambda \subseteq V_2(x)$ be such a lattice with $\Nm(\Lambda) \neq \Nm(T_2(x))$.
Let $\Lambda'$ be the intersection of $T_2(x)$ and $\Lambda$ in $V_2(x)$.
The inclusions $\Lambda' \inj \Lambda$ and $\Lambda' \inj T_2(x)$ define canonically a formal $O_F$-module $Y$ with $T_2(Y) = \Lambda'$ and a quasi-isogeny $\varphi: X \to Y$.
Equipped with all the induced data, $Y$ becomes a point in $\mathcal{N}_E^{\mathrm{naive}}(\mathcal{O}_C)$ that does not satisfy the straightening condition.
To see that the straightening condition is open and closed on the generic fiber, consider the universal formal $O_F$-module $\mathcal{X} = (\mathcal{X},\iota_{\mathcal{X}},\lambda_{\mathcal{X}})$ over $\mathcal{N}_E^{\mathrm{naive}}$ and let $T_2(\mathcal{X})$ be its Tate module.
Then $T_2(\mathcal{X})$ is a locally constant sheaf over $\mathcal{N}_E^{\mathrm{naive,rig}}$ with respect to the \'etale topology.
The polarization $\lambda_{\mathcal{X}}$ defines a hermitian form $h$ on $T_2(\mathcal{X})$.
Since $T_2(\mathcal{X})$ is a locally constant sheaf, the norm ideal $\Nm(T_2(\mathcal{X}))$ with respect to $h$ (see Proposition \ref{LA_latt}) is locally constant as well.
Hence the locus where $\Nm(T_2(\mathcal{X}))$ is minimal is open and closed in $\mathcal{N}_E^{\mathrm{naive,rig}}$.
But this is exactly $\mathcal{N}_E^{\mathrm{rig}} \subseteq \mathcal{N}_E^{\mathrm{naive,rig}}$.
\end{rmk}
\subsection{The isomorphism to the Drinfeld moduli problem}
\label{RP3}
We now recall the Drinfeld moduli problem $\mathcal{M}_{Dr}$ on $\Nilp$.
Let $B$ be the quaternion division algebra over $F$ and $O_B$ its ring of integers.
Let $S \in \Nilp$.
Then $\mathcal{M}_{Dr}(S)$ is the set of equivalence classes of objects $(X,\iota_B,\varrho)$ where
\begin{itemize}
\item $X$ is a formal $O_F$-module over $S$ of dimension $2$ and height $4$,
\item $\iota_B: O_B \to \End(X)$ is an action of $O_B$ on $X$ satisfying the \emph{special} condition, \emph{i.e.}, $\Lie X$ is, locally on $S$, a free $\mathcal{O}_S \otimes_{O_F} O_F^{(2)}$-module of rank 1, where $O_F^{(2)} \subseteq O_B$ is any embedding of the unramified quadratic extension of $O_F$ into $O_B$ (\emph{cf.}\ \cite{BC91}),
\item $\varrho: X \times_S \overbar{S} \to \mathbb{X} \times_{\Spec \overbar{k}} \overbar{S}$ is an $O_B$-linear quasi-isogeny of height $0$ to a fixed framing object $(\mathbb{X},\iota_{\mathbb{X}}) \in \mathcal{M}_{Dr}(\overbar{k})$.
\end{itemize}
Such a framing object exists and is unique up to isogeny.
By a proposition of Drinfeld, \emph{cf.}\ \cite[p.\ 138]{BC91}, there always exist polarizations on these objects, as follows:
\begin{prop}[Drinfeld] \label{RP_Dr}
Let $\Pi \in O_B$ be a uniformizer with $\Pi^2 \in O_F$ and let $b \mapsto b'$ be the standard involution of $B$.
Then $b \mapsto b^{\ast} = \Pi b' \Pi^{-1}$ is another involution on $B$.
\begin{enumerate}
\item \label{RP_Dr1} There exists a principal polarization $\lambda_{\mathbb{X}}: \mathbb{X} \to \mathbb{X}^{\vee}$ on $\mathbb{X}$ with associated Rosati involution $b \mapsto b^{\ast}$.
It is unique up to a scalar in $O_F^{\times}$.
\item \label{RP_Dr2} Let $\lambda_{\mathbb{X}}$ be as in \eqref{RP_Dr1}.
For $(X,\iota_B,\varrho) \in \mathcal{M}_{Dr}(S)$, there exists a unique principal polarization
\begin{equation*}
\lambda: X \to X^{\vee}
\end{equation*}
with Rosati involution $b \mapsto b^{\ast}$ such that $\varrho^{\ast}(\lambda_{\mathbb{X}}) = \lambda$ on $\overbar{S}$.
\end{enumerate}
\end{prop}
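Note that $b \mapsto b^{\ast}$ is indeed an involution: since $\Pi^2 \in O_F$, the reduced trace of $\Pi$ vanishes, so $\Pi' = -\Pi$, and hence
\begin{equation*}
(b^{\ast})^{\ast} = \Pi \, (\Pi b' \Pi^{-1})' \, \Pi^{-1} = \Pi \, \bigl( (-\Pi^{-1}) \, b \, (-\Pi) \bigr) \, \Pi^{-1} = b,
\end{equation*}
using $(xy)' = y'x'$ and $b'' = b$.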
We now relate $\mathcal{M}_{Dr}$ and $\mathcal{N}_E$.
For this, we fix an embedding $E \inj B$.
Any choice of a uniformizer $\Pi \in O_E$ with $\Pi^2 \in O_F$ induces the same involution $b \mapsto b^{\ast} = \Pi b' \Pi^{-1}$ on $B$.
For the framing object $(\mathbb{X},\iota_{\mathbb{X}})$ of $\mathcal{M}_{Dr}$, let $\lambda_{\mathbb{X}}$ be a polarization associated to this involution by Proposition \ref{RP_Dr} \eqref{RP_Dr1}.
Denote by $\iota_{\mathbb{X},E}$ the restriction of $\iota_{\mathbb{X}}$ to $O_E \subseteq O_B$.
For any object $(X,\iota_B,\varrho) \in \mathcal{M}_{Dr}(S)$, let $\lambda$ be the polarization with Rosati involution $b \mapsto b^{\ast}$ that satisfies $\varrho^{\ast}(\lambda_{\mathbb{X}}) = \lambda$, see Proposition \ref{RP_Dr} \eqref{RP_Dr2}.
Let $\iota_E$ be the restriction of $\iota_B$ to $O_E$.
\begin{lem} \label{RP_clemb}
$(\mathbb{X},\iota_{\mathbb{X},E},\lambda_{\mathbb{X}})$ is a framing object for $\mathcal{N}_E^{\mathrm{naive}}$.
Furthermore, the map
\begin{equation*}
(X,\iota_B,\varrho) \mapsto (X,\iota_E,\lambda,\varrho)
\end{equation*}
induces a closed immersion of formal schemes
\begin{equation*}
\eta: \mathcal{M}_{Dr} \inj \mathcal{N}_E^{\mathrm{naive}}.
\end{equation*}
\end{lem}
\begin{proof}
There are two things to check: that $\QIsog(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ contains $\SU(C,h)$ as a closed subgroup
and that $\iota_E$ satisfies the Kottwitz condition.
Indeed, once these two assertions hold, we can take $(\mathbb{X},\iota_{\mathbb{X},E},\lambda_{\mathbb{X}})$ as a framing object for $\mathcal{N}_E^{\mathrm{naive}}$ and the morphism $\eta$ is well-defined.
For any $S \in \Nilp$, the map $\eta(S)$ is injective, because $(X,\iota_B,\varrho)$ and $(X',\iota_B',\varrho') \in \mathcal{M}_{Dr}(S)$ map to the same point in $\mathcal{N}_E^{\mathrm{naive}}(S)$ under $\eta$ if and only if the quasi-isogeny $(\varrho')^{-1} \circ \varrho$ on $\overbar{S}$ lifts to an isomorphism on $S$, \emph{i.e.}, if and only if $(X,\iota_B,\varrho)$ and $(X',\iota_B',\varrho')$ define the same point in $\mathcal{M}_{Dr}(S)$.
The functor
\begin{equation*}
F: S \mapsto \{ (X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S) \mid \iota \text{ extends to an } O_B \text{-action} \}
\end{equation*}
is pro-representable by a closed formal subscheme of $\mathcal{N}_E^{\mathrm{naive}}$ by \cite[Prop.\ 2.9]{RZ96}.
Now, the formal subscheme $\eta(\mathcal{M}_{Dr}) \subseteq F$ is given by the special condition.
But the special condition is open and closed (see \cite[p.\ 7]{RZ14}), thus $\eta$ is a closed embedding.
It remains to show the two assertions from the beginning of this proof.
We first check the condition on $\QIsog(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$.
Let $G_{(\mathbb{X},\iota_{\mathbb{X}})}$ be the group of $O_B$-linear quasi-isogenies $\varphi: (\mathbb{X},\iota_{\mathbb{X}}) \to (\mathbb{X},\iota_{\mathbb{X}})$ of height $0$ such that the induced homomorphism of Dieudonn\'e modules has determinant $1$.
Then we have (non-canonical) isomorphisms $G_{(\mathbb{X},\iota_{\mathbb{X}})} \simeq \SL_{2,F}$ and $\SL_{2,F} \simeq \SU(C,h)$, since $h$ is split.
The uniqueness of the polarization $\lambda_{\mathbb{X}}$ (up to a scalar in $O_F^{\times}$) implies that $G_{(\mathbb{X},\iota_{\mathbb{X}})} \subseteq \QIsog(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$.
This is a closed embedding of linear algebraic groups over $F$, since a quasi-isogeny $\varphi \in \QIsog(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ lies in $G_{(\mathbb{X},\iota_{\mathbb{X}})}$ if and only if it is $O_B$-linear and has determinant $1$, and these are closed conditions on $\QIsog(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$.
Finally, the special condition implies the Kottwitz condition for any element $b \in O_B$ (see \cite[Prop.\ 5.8]{RZ14}), \emph{i.e.}, the characteristic polynomial for the action of $\iota(b)$ on $\Lie X$ is
\begin{equation*}
\charp(\Lie X, T \mid \iota(b)) = (T - b)(T - b'),
\end{equation*}
where the right hand side is a polynomial in $\mathcal{O}_S[T]$ via the structure homomorphism $O_F \inj \breve{O}_F \to \mathcal{O}_S$.
From this, the second assertion follows.
\end{proof}
Let $O_F^{(2)} \subseteq O_B$ be an embedding such that conjugation with $\Pi$ induces the non-trivial Galois action on $O_F^{(2)}$, as in Lemma \ref{LA_quat} \eqref{LA_quatRP}.
Fix a generator $\gamma = \frac{1+\delta}{2}$ of $O_F^{(2)}$ with $\delta^2 \in O_F^{\times}$.
On $(\mathbb{X},\iota_{\mathbb{X}})$, the principal polarization $\widetilde{\lambda}_{\mathbb{X}}$ given by
\begin{equation*}
\widetilde{\lambda}_{\mathbb{X}} = \lambda_{\mathbb{X}} \circ \iota_{\mathbb{X}}(\delta)
\end{equation*}
has a Rosati involution that induces the identity on $O_E$.
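Indeed, the Rosati involution of $\widetilde{\lambda}_{\mathbb{X}}$ is $b \mapsto \iota_{\mathbb{X}}(\delta)^{-1} b^{\ast} \iota_{\mathbb{X}}(\delta)$, and it suffices to check this on the generator $\Pi$ of $O_E$ over $O_F$:
\begin{equation*}
\delta^{-1} (\Pi \Pi' \Pi^{-1}) \delta = \delta^{-1} (-\Pi) \delta = \Pi,
\end{equation*}
using $\Pi' = -\Pi$ and $\Pi \delta \Pi^{-1} = \sigma(\delta) = -\delta$.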
For any $(X,\iota_B,\varrho) \in \mathcal{M}_{Dr}(S)$, we set $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}}) = \lambda \circ \iota_B(\delta)$.
The tuple $(X,\iota_E,\lambda,\varrho) = \eta(X,\iota_B,\varrho)$ satisfies the straightening condition \eqref{RP_strcond}, since
\begin{equation*}
\lambda_1 = \frac{1}{2} (\lambda + \widetilde{\lambda}) = \lambda \circ \iota_B(\gamma) \in \Hom(X,X^{\vee}).
\end{equation*}
In particular, the tuple $(\mathbb{X},\iota_{\mathbb{X},E},\lambda_{\mathbb{X}})$ is a framing object of $\mathcal{N}_E$ and $\eta$ induces a natural transformation
\begin{equation} \label{RP_MDrtoNE}
\eta: \mathcal{M}_{Dr} \inj \mathcal{N}_E.
\end{equation}
Note that this map does not depend on the above choices, as $\mathcal{N}_E$ is a closed formal subscheme of $\mathcal{N}_E^{\mathrm{naive}}$.
\begin{thm} \label{RP_thm}
$\eta: \mathcal{M}_{Dr} \to \mathcal{N}_E$ is an isomorphism of formal schemes.
\end{thm}
We will first prove this on $\overbar{k}$-valued points:
\begin{lem} \label{RP_thmgeom}
$\eta$ induces a bijection $\eta(\overbar{k}): \mathcal{M}_{Dr}(\overbar{k}) \to \mathcal{N}_E(\overbar{k})$.
\end{lem}
\begin{proof}
We can identify the $\overbar{k}$-valued points of $\mathcal{M}_{Dr}$ with a subset $\mathcal{M}_{Dr}(\overbar{k}) \subseteq \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$.
The rational Dieudonn\'e-module $N$ of $\mathbb{X}$ is equipped with an action of $B$.
Fix an embedding $F^{(2)} \inj B$ as in Lemma \ref{LA_quat} \eqref{LA_quatRP}.
This induces a $\mathbb{Z}/ 2$-grading $N = N_0 \oplus N_1$ of $N$, where
\begin{align*}
N_0 & = \{ x \in N \mid \iota(a)x = ax \text{ for all } a \in F^{(2)} \}, \\
N_1 & = \{ x \in N \mid \iota(a)x = \sigma(a)x \text{ for all } a \in F^{(2)} \},
\end{align*}
for a fixed embedding $F^{(2)} \inj \breve{F}$.
The operators ${\bf V}$ and ${\bf F}$ have degree $1$ with respect to this decomposition.
Recall that $\lambda$ has Rosati involution $b \mapsto \Pi b' \Pi^{-1}$ on $O_B$ which restricts to the identity on $O_F^{(2)}$.
The subspaces $N_0$ and $N_1$ are therefore orthogonal with respect to $\langle \,, \rangle$.
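Indeed, the form $\langle \,, \rangle$ is $\breve{F}$-bilinear, so for $x \in N_0$, $y \in N_1$ and any $a \in F^{(2)}$ we get
\begin{equation*}
a \cdot \langle x, y \rangle = \langle \iota(a) x, y \rangle = \langle x, \iota(a) y \rangle = \sigma(a) \cdot \langle x, y \rangle,
\end{equation*}
and choosing $a$ with $a \neq \sigma(a)$ forces $\langle x, y \rangle = 0$.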
Under the identification \eqref{RP_geompts}, a lattice $M \in \mathcal{M}_{Dr}(\overbar{k})$ respects this decomposition, \emph{i.e.}, $M = M_0 \oplus M_1$ with $M_i = M \cap N_i$.
Furthermore it satisfies the special condition:
\begin{equation*}
\dim M_0 / {\bf V}M_1 = \dim M_1 / {\bf V}M_0 = 1.
\end{equation*}
We already know that $\mathcal{M}_{Dr}(\overbar{k}) \subseteq \mathcal{N}_E(\overbar{k})$, so let us assume $M \in \mathcal{N}_E(\overbar{k})$.
We want to show that $M \in \mathcal{M}_{Dr}(\overbar{k})$, \emph{i.e.}, that the lattice $M$ is stable under the action of $O_B$ on $N$ and satisfies the special condition.
It is stable under the $O_B$-action if and only if $M = M_0 \oplus M_1$ for $M_i = M \cap N_i$.
Let $y \in M$ and $y = y_0 + y_1$ with $y_i \in N_i$.
For any $x \in M$, we have
\begin{equation} \label{RP_thmeq1}
\langle x,y \rangle = \langle x,y_0 \rangle + \langle x, y_1 \rangle \in \breve{O}_F.
\end{equation}
We can assume that $\lambda_{\mathbb{X},1} = \lambda_{\mathbb{X}} \circ \iota_B(\gamma)$ with $\gamma \in O_F^{(2)}$ under our fixed embedding $F^{(2)} \inj B$.
Recall that $\gamma^{\sigma} = 1 - \gamma$ from page \pageref{LA_quadext}.
Let $\langle \,, \rangle_1$ be the alternating form on $M$ induced by $\lambda_{\mathbb{X},1}$.
Then,
\begin{equation} \label{RP_thmeq2}
\langle x, y \rangle_1 = \gamma \cdot \langle x, y_0 \rangle + (1-\gamma) \cdot \langle x, y_1 \rangle \in \breve{O}_F.
\end{equation}
From the equations \eqref{RP_thmeq1} and \eqref{RP_thmeq2}, it follows that $\langle x,y_0 \rangle$ and $\langle x, y_1 \rangle$ lie in $\breve{O}_F$.
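Explicitly, subtracting $\gamma$ times \eqref{RP_thmeq1} from \eqref{RP_thmeq2} gives $(1 - 2\gamma) \cdot \langle x, y_1 \rangle \in \breve{O}_F$. Here $1 - 2\gamma = \gamma^{\sigma} - \gamma$ is a unit, since
\begin{equation*}
(\gamma^{\sigma} - \gamma)^2 = 1 - 4\gamma\gamma^{\sigma} \equiv 1 \mod 4,
\end{equation*}
so $\langle x, y_1 \rangle \in \breve{O}_F$, and then $\langle x, y_0 \rangle \in \breve{O}_F$ by \eqref{RP_thmeq1}.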
Since $x \in M$ was arbitrary and $M = M^{\vee}$, this gives $y_0, y_1 \in M$.
Hence $M$ respects the decomposition of $N$ and is stable under the action of $O_B$.
It remains to show that $M$ satisfies the special condition:
The alternating form $\langle \,, \rangle$ is perfect on $M$, thus the restrictions to $M_0$ and $M_1$ are perfect as well.
If $M$ is not special, we have $M_i = {\bf V}M_{i+1}$ for some $i \in \{0, 1\}$.
But then, $\langle \,, \rangle$ cannot be perfect on $M_i$.
In fact, for any $x,y \in M_{i+1}$,
\begin{equation*}
\langle {\bf V}x, {\bf V}y \rangle^{\sigma} = \langle {\bf F}{\bf V} x, y \rangle = \pi_0 \cdot \langle x, y \rangle \in \pi_0 \breve{O}_F.
\end{equation*}
Thus $M$ is indeed special, \emph{i.e.}, $M \in \mathcal{M}_{Dr}(\overbar{k})$, and this finishes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{RP_thm}]
We already know that $\eta$ is a closed embedding
\begin{equation*}
\eta: \mathcal{M}_{Dr} \inj \mathcal{N}_E.
\end{equation*}
Let $(\mathbb{X},\iota_{\mathbb{X}})$ be the framing object of $\mathcal{M}_{Dr}$ and choose an embedding $O_F^{(2)} \subseteq O_B$ and a generator $\gamma$ of $O_F^{(2)}$ as in Lemma \ref{LA_quat} \eqref{LA_quatRP}.
We take $(\mathbb{X},\iota_{\mathbb{X},E},\lambda_{\mathbb{X}})$ as a framing object for $\mathcal{N}_E$ and set $\widetilde{\lambda}_{\mathbb{X}} = \lambda_{\mathbb{X}} \circ \iota_{\mathbb{X}}(\delta)$.
Let $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E(S)$ and $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$.
We have
\begin{equation*}
\varrho^{-1} \circ \iota_{\mathbb{X}} (\gamma) \circ \varrho = \varrho^{-1} \circ \lambda_{\mathbb{X}}^{-1} \circ \lambda_{\mathbb{X},1} \circ \varrho = \lambda^{-1} \circ \lambda_1 \in \End(X),
\end{equation*}
where $\lambda_{\mathbb{X},1} = \frac{1}{2} (\lambda_{\mathbb{X}} + \widetilde{\lambda}_{\mathbb{X}})$ and $\lambda_1 = \frac{1}{2} (\lambda + \widetilde{\lambda})$.
Since $O_B = O_F[\Pi, \gamma]$, this induces an $O_B$-action $\iota_B$ on $X$ and makes $\varrho$ an $O_B$-linear quasi-isogeny.
We have to check that $(X,\iota_B,\varrho)$ satisfies the special condition.
Since the special condition is open and closed (see \cite[p.\ 7]{RZ14}), the image of $\eta$ is open and closed in $\mathcal{N}_E$, \emph{i.e.}, $\eta$ is an open and closed embedding.
Furthermore, $\eta(\overbar{k})$ is bijective and the reduced loci $(\mathcal{M}_{Dr})_{\red}$ and $(\mathcal{N}_E)_{\red}$ are locally of finite type over $\Spec \overbar{k}$.
Hence $\eta$ induces an isomorphism on the reduced subschemes.
But any open and closed embedding of formal schemes that is an isomorphism on the reduced subschemes is already an isomorphism.
\end{proof}
\section{The moduli problem in the case (R-U)}
\label{RU}
Let $E|F$ be a quadratic extension of type (R-U), generated by a uniformizer $\Pi$ satisfying an Eisenstein equation of the form $\Pi^2 - t\Pi + \pi_0 = 0$, where $t \in O_F$ and $\pi_0 \mid t \mid 2$.
Let $O_F$ and $O_E$ be the rings of integers of $F$ and $E$. We have $O_E = O_F[\Pi]$.
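Since $\Pi$ and $\overbar{\Pi}$ are the two roots of $T^2 - tT + \pi_0$, we record the identities
\begin{equation*}
\overbar{\Pi} = t - \Pi, \qquad \Pi + \overbar{\Pi} = t, \qquad \Pi \overbar{\Pi} = \pi_0,
\end{equation*}
which will be used repeatedly below.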
As in the case (R-P), let $k$ be the common residue field, $\overbar{k}$ an algebraic closure, $\breve{F}$ the completion of the maximal unramified extension with ring of integers $\breve{O}_F = W_{O_F}(\overbar{k})$ and $\sigma$ the lift of the Frobenius in $\Gal(\overbar{k}|k)$ to $\Gal(\breve{O}_F|O_F)$.
\subsection{The naive moduli problem} \label{RU1}
Let $S \in \Nilp$.
Consider tuples $(X,\iota,\lambda)$, where
\begin{itemize}
\item $X$ is a formal $O_F$-module over $S$ of dimension $2$ and height $4$.
\item $\iota: O_E \to \End(X)$ is an action of $O_E$ on $X$ satisfying the \emph{Kottwitz condition}:
For every $\alpha \in O_E$, the characteristic polynomial of $\iota(\alpha)$ on $\Lie X$ is given by
\begin{equation*}
\charp(\Lie X, T \mid \iota(\alpha)) = (T - \alpha)(T - \overbar{\alpha}).
\end{equation*}
Here $\alpha \mapsto \overbar{\alpha}$ is the Galois conjugation of $E|F$ and the right hand side is a polynomial in $\mathcal{O}_S[T]$ via the structure morphism $O_F \inj \breve{O}_F \to \mathcal{O}_S$.
\item $\lambda: X \to X^{\vee}$ is a polarization on $X$ with kernel $\ker \lambda = X[\Pi]$, where $X[\Pi]$ is the kernel of $\iota(\Pi)$.
Further we demand that the Rosati involution of $\lambda$ satisfies $\iota(\alpha)^{\ast} = \iota(\overbar{\alpha})$ for all $\alpha \in O_E$.
\end{itemize}
We define quasi-isogenies $\varphi: (X,\iota,\lambda) \to (X',\iota',\lambda')$ and the group $\QIsog(X,\iota,\lambda)$ as in Definition \ref{RP_isonaive}.
\begin{prop} \label{RU_frnaive}
Up to isogeny, there exists exactly one such tuple $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ over $S = \Spec \overbar{k}$ under the condition that the group $\QIsog(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$
contains a closed subgroup isomorphic to $\SU(C,h)$ for a $2$-dimensional $E$-vector space $C$ with split $E|F$-hermitian form $h$.
\end{prop}
\begin{rmk} \label{RU_rmknaive}
As in the case (R-P), we have $\QIsog(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}}) \cong \U(C,h)$ for $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ as in the Proposition.
\end{rmk}
\begin{proof}[Proof of Proposition \ref{RU_frnaive}]
We first show uniqueness of the object.
Let $(X,\iota,\lambda) / \Spec \overbar{k}$ be a tuple as in the proposition and consider its rational Dieudonn\'e-module $N_X$.
This is a $4$-dimensional vector space over $\breve{F}$ equipped with an action of $E$ and an alternating form $\langle \,,\rangle$ such that
\begin{equation}
\langle x, \Pi y \rangle = \langle \overbar{\Pi} x, y \rangle
\end{equation}
for all $x,y \in N_X$.
Let $\breve{E} = \breve{F} \otimes_F E$.
We can see $N_X$ as $2$-dimensional vector space over $\breve{E}$ with a hermitian form $h$ given by
\begin{equation} \label{RU_herm}
h(x,y) = \langle \Pi x, y \rangle - \overbar{\Pi} \langle x,y \rangle.
\end{equation}
Let ${\bf F}$ and ${\bf V}$ be the $\sigma$-linear Frobenius and the $\sigma^{-1}$-linear Verschiebung on $N_X$.
We have ${\bf F}{\bf V} = {\bf V}{\bf F} = \pi_0$ and, since $\langle \,, \rangle$ comes from a polarization,
\begin{equation*}
\langle {\bf F}x, y \rangle = \langle x, {\bf V}y \rangle^{\sigma}.
\end{equation*}
Consider the $\sigma$-linear operator $\tau = \Pi {\bf V}^{-1} = {\bf F} \overbar{\Pi}^{-1}$.
The hermitian form $h$ is invariant under $\tau$:
\begin{equation*}
h(\tau x, \tau y) = h({\bf F} \overbar{\Pi}^{-1} x, \Pi {\bf V}^{-1} y) = h({\bf F} x, {\bf V}^{-1} y) = h(x,y)^{\sigma}.
\end{equation*}
From the condition on $\QIsog(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$ it follows that $N_X$ is isotypical of slope $\frac{1}{2}$ and thus the slopes of $\tau$ are all zero.
Let $C = N_X^{\tau}$.
This is a $2$-dimensional vector space over $E$ with $N_X = C \otimes_E \breve{E}$ and $h$ induces an $E|F$-hermitian form on $C$.
A priori, there are two possibilities for $(C,h)$, either $h$ is split or non-split.
The group $\U(C,h)$ of automorphisms is isomorphic to $\QIsog(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$.
But the unitary groups for $h$ split and $h$ non-split are not isomorphic, and neither contains the other as a closed subgroup.
Thus the condition on $\QIsog(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$ implies that $h$ is split.
Assume we are given two different objects $(X,\iota,\lambda)$ and $(X',\iota',\lambda')$ as in the proposition.
Then there is an isomorphism between the spaces $(C,h)$ and $(C',h')$ extending to an isomorphism of $N_X$ and $N_{X'}$ respecting all structure.
This corresponds to a quasi-isogeny $\varphi: (X,\iota,\lambda) \to (X',\iota',\lambda')$.
Now we prove the existence of $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$.
We start with a $\Pi$-modular lattice $\Lambda$ in a $2$-dimensional vector space $(C,h)$ over $E$ with split hermitian form.
Then $M = \Lambda \otimes_{O_E} \breve{O}_E$ is an $\breve{O}_E$-lattice in $N = C \otimes_{E} \breve{E}$.
The $\sigma$-linear operator $\tau = 1 \otimes \sigma$ on $N$ has slopes all $0$.
We can extend $h$ to $N$ such that
\begin{equation*}
h(\tau x, \tau y) = h(x,y)^{\sigma},
\end{equation*}
for all $x,y \in N$.
The operators ${\bf F}$ and ${\bf V}$ are given by the equations $\tau = \Pi {\bf V}^{-1} = {\bf F} \overbar{\Pi}^{-1}$.
Finally, the alternating form $\langle \,, \rangle$ is defined via
\begin{equation*}
\langle x,y \rangle = \Tr_{\breve{E}|\breve{F}} \left(\frac{1}{t\vartheta} \cdot h(x,y) \right),
\end{equation*}
for $x,y \in N$.
The lattice $M \subseteq N$ is the Dieudonn\'e module of the object $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$.
We leave it to the reader to check that this is indeed an object as considered above.
\end{proof}
We fix such an object $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ over $\Spec \overbar{k}$ from the proposition.
We define the functor $\mathcal{N}_E^{\mathrm{naive}}$ on $\Nilp$ as in Definition \ref{RP_defnaive}.
\begin{rmk}
$\mathcal{N}_E^{\mathrm{naive}}$ is pro-representable by a formal scheme, formally locally of finite type over $\Spf \breve{O}_F$, \emph{cf.}\ \cite[Thm.\ 3.25]{RZ96}.
\end{rmk}
We now study the $\overbar{k}$-valued points of the space $\mathcal{N}_E^{\mathrm{naive}}$.
Let $N = N_{\mathbb{X}}$ be the rational Dieudonn\'e-module of $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$.
This is a $4$-dimensional vector space over $\breve{F}$, equipped with an action of $E$, with two operators ${\bf F}$ and ${\bf V}$ and an alternating form $\langle \,, \rangle$.
Let $(X, \iota, \lambda, \varrho) \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$.
This corresponds to an $\breve{O}_F$-lattice $M = M_X \subseteq N$ which is stable under the actions of ${\bf F}$, ${\bf V}$ and $O_E$.
The condition on the kernel of $\lambda$ implies that $M = \Pi M^{\vee}$ for
\begin{equation*}
M^{\vee} = \{ x \in N \mid \langle x,y \rangle \in \breve{O}_F \text{ for all } y \in M \}.
\end{equation*}
The alternating form $\langle \,, \rangle$ induces an $\breve{E}|\breve{F}$-hermitian form $h$ on $N$, seen as $2$-dimensional vector space over $\breve{E}$ (see equation \eqref{RU_herm}):
\begin{equation*}
h(x,y) = \langle \Pi x, y \rangle - \overbar{\Pi} \langle x,y \rangle.
\end{equation*}
We can recover the form $\langle \,, \rangle$ from $h$ via
\begin{equation} \label{RU_altherm}
\langle x, y \rangle = \Tr_{\breve{E}|\breve{F}} \left( \frac{1}{t\vartheta} \cdot h(x,y) \right).
\end{equation}
Since the inverse different of $E|F$ is $\mathfrak{D}_{E|F}^{-1} = \frac{1}{t}O_E$ (see Lemma \ref{LA_diff}), this implies that $M$ is $\Pi$-modular with respect to $h$, as an $\breve{O}_E$-lattice in $N$.
We denote the dual of $M$ with respect to $h$ by $M^{\sharp}$.
There is a natural bijection
\begin{equation} \label{RU_geompts}
\mathcal{N}_E^{\mathrm{naive}} (\overbar{k}) = \{ \breve{O}_E \text{-lattices } M \subseteq N \mid M = \Pi M^{\sharp}, \pi_0 M \subseteq {\bf V}M \subseteq M \}.
\end{equation}
Recall that $\tau = \Pi {\bf V}^{-1}$ is a $\sigma$-linear operator on $N$ with slopes all $0$.
Further $C = N^{\tau}$ is a $2$-dimensional $E$-vector space with hermitian form $h$.
\begin{lem} \label{RU_latt}
Let $M \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$.
Then:
\begin{enumerate}
\item $M + \tau(M)$ is $\tau$-stable.
\item Either $M$ is $\tau$-stable and $\Lambda_1 = M^{\tau} \subseteq C$ is $\Pi$-modular with respect to $h$, or $M$ is not $\tau$-stable and then $\Lambda_0 = (M + \tau(M))^{\tau} \subseteq C$ is unimodular.
\end{enumerate}
\end{lem}
The proof is the same as that of \cite[Lemma 3.2]{KR14}.
We identify $N$ with $C \otimes_{E} \breve{E}$.
For any $\tau$-stable lattice $M \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$, we have $M = \Lambda_1 \otimes_{O_E} \breve{O}_E$.
If $M \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ is not $\tau$-stable, there is an inclusion $M \subseteq \Lambda_0 \otimes_{O_E} \breve{O}_E$ of index $1$.
Recall from Proposition \ref{LA_latt} that the isomorphism class of a $\Pi$-modular or unimodular lattice $\Lambda \subseteq C$ is determined by the norm ideal
\begin{equation*}
\Nm(\Lambda) = \langle \{ h(x,x) \mid x \in \Lambda \} \rangle.
\end{equation*}
There are always at least two types of unimodular lattices.
However, not all of them appear in the description of $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$.
\begin{lem} \label{RU_IS}
\begin{enumerate}
\item \label{RU_IS1} Let $\Lambda \subseteq C$ be a unimodular lattice with $\Nm (\Lambda) \subseteq \pi_0 O_F$.
There is an injection
\begin{equation*}
i_{\Lambda}: \mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k}) \inj \mathcal{N}_E^{\mathrm{naive}}(\overbar{k}),
\end{equation*}
that maps a line $\ell \subseteq \Lambda / \Pi \Lambda \otimes_k \overbar{k}$ to its inverse image under the canonical projection
\begin{equation*}
\Lambda \otimes_{O_E} \breve{O}_E \to \Lambda / \Pi \Lambda \otimes_k \overbar{k}.
\end{equation*}
The $k$-valued points $\mathbb{P}(\Lambda / \Pi \Lambda)(k) \subseteq \mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k})$ are mapped to $\tau$-invariant Dieudonn\'e modules $M \subseteq \Lambda \otimes_{O_E} \breve{O}_E$ under this embedding.
\item Identify $\mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k})$ with its image under $i_{\Lambda}$.
The set $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ can be written as
\begin{equation*}
\mathcal{N}_E^{\mathrm{naive}}(\overbar{k}) = \smashoperator[r]{\bigcup_{\Lambda \subseteq C}} \mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k}),
\end{equation*}
where the union is taken over all lattices $\Lambda \subseteq C$ with $\Nm(\Lambda) \subseteq \pi_0 O_F$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $\Lambda \subseteq C$ be a unimodular lattice.
For any line $\ell \in \mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k})$, denote its preimage in $\Lambda \otimes \breve{O}_E$ by $M$.
The inclusion $M \subseteq \Lambda \otimes \breve{O}_E$ has index $1$ and $M$ is an $\breve{O}_E$-lattice with $\Pi(\Lambda \otimes \breve{O}_E) \subseteq M$.
Furthermore $\Lambda \otimes \breve{O}_E$ is $\tau$-invariant by construction, hence $\Pi(\Lambda \otimes \breve{O}_E) = {\bf V}(\Lambda \otimes \breve{O}_E) = {\bf F}(\Lambda \otimes \breve{O}_E)$.
It follows that $M$ is stable under the actions of ${\bf F}$ and ${\bf V}$.
Thus $M \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ if and only if $M = \Pi M^{\sharp}$.
The hermitian form $h$ induces a symmetric form $s$ on $\Lambda / \Pi \Lambda$.
Now $M$ is $\Pi$-modular if and only if it is the preimage of an isotropic line $\ell \subseteq \Lambda / \Pi \Lambda \otimes \overbar{k}$.
Note that $s$ is also anti-symmetric since we are in characteristic $2$.
We first consider the case $\Nm (\Lambda) \subseteq \pi_0 O_F$.
We can find a basis of $\Lambda$ such that $h$ has the form
\begin{equation*}
H_{\Lambda} = \begin{pmatrix}
x & 1 \\
1 & \\
\end{pmatrix}, \quad x \in \pi_0 O_F,
\end{equation*}
see \eqref{LA_lattmatrix}.
It follows that the induced form $s$ is in fact alternating (because $x \equiv 0 \mod \pi_0$).
Hence any line in $\Lambda / \Pi \Lambda \otimes \overbar{k}$ is isotropic.
This implies that $i_{\Lambda}$ is well-defined, proving part \eqref{RU_IS1} of the Lemma.
Now assume that $\Nm (\Lambda) = O_F$.
There is a basis $(e_1,e_2)$ of $\Lambda$ such that $h$ is represented by
\begin{equation*}
H_{\Lambda} = \begin{pmatrix}
1 & 1 \\
1 & \\
\end{pmatrix}.
\end{equation*}
The induced form $s$ is given by the same matrix and $\ell = \overbar{k} \cdot e_2$ is the only isotropic line in $\Lambda / \Pi \Lambda$.
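Indeed, writing $v = a e_1 + b e_2$, the cross terms vanish in characteristic $2$, so
\begin{equation*}
s(v,v) = a^2 \, s(e_1,e_1) + 2ab \, s(e_1,e_2) + b^2 \, s(e_2,e_2) = a^2,
\end{equation*}
which is zero if and only if $a = 0$, \emph{i.e.}, on the line $\overbar{k} \cdot e_2$.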
Since $\ell$ is already defined over $k$, the corresponding lattice $M \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ is of the form $M = \Lambda_1 \otimes \breve{O}_E$ for a $\Pi$-modular lattice $\Lambda_1 \subseteq \Lambda$.
But, by Proposition \ref{LA_lattRU}, any $\Pi$-modular lattice in $C$ is contained in a unimodular lattice $\Lambda'$ with $\Nm (\Lambda') \subseteq \pi_0 O_F$.
It follows that we can write $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ as a union
\begin{equation*}
\mathcal{N}_E^{\mathrm{naive}}(\overbar{k}) = \smashoperator[r]{\bigcup_{\Lambda \subseteq C}} \mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k}),
\end{equation*}
where the union is taken over all unimodular lattices $\Lambda \subseteq C$ with $\Nm(\Lambda) \subseteq \pi_0 O_F$.
This shows the second part of the Lemma.
\end{proof}
\begin{rmk} \label{RU_ISrmk}
We can use Proposition \ref{LA_lattRU} to describe the intersection behaviour of the projective lines in $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$.
A $\tau$-invariant point $M \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ corresponds to the $\Pi$-modular lattice $\Lambda_1 = M^{\tau} \subseteq C$.
If $\Nm(\Lambda_1) \subseteq \pi_0^2 O_F$, there are $q+1$ lines going through $M$.
If $\Nm(\Lambda_1) = \pi_0 O_F$, the point $M$ is contained in one or two lines, depending on whether $\Lambda_1$ is hyperbolic or not, see parts \eqref{LA_lattRUh} and \eqref{LA_lattRUnh} of Proposition \ref{LA_lattRU}.
The former case (\emph{i.e.}, $\Lambda_1$ is hyperbolic) appears if and only if $\pi_0 O_F = \Nm(\Lambda_1) = tO_F$ (see Lemma \ref{LA_hyp}).
This happens only for a specific type of (R-U) extension $E|F$, see page \pageref{LA_quadext}.
We refer to Remark \ref{RU_rmkred}, Remark \ref{RU_rmkNE} and Section \ref{LM_naive} for a further discussion of this special case.\\
On the other hand, each projective line in $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ contains $q+1$ $\tau$-invariant points.
Such a $\tau$-invariant point $M$ is an intersection point of two or more projective lines if and only if $|t| = |\pi_0|$ or $\Lambda_1 = M^{\tau} \subseteq C$ has a norm ideal satisfying $\Nm(\Lambda_1) \subseteq \pi_0^2 O_F$.
\end{rmk}
\begin{figure}[hbt]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width = \textwidth]{R-U_NEnaive.jpg}
\caption{$e = 2$, $f = 1$, $v(t) = 2$.}
\label{RU_NEnaivejpg1}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width = \textwidth]{R-U_NEnaive_1.jpg}
\caption{$e = 2$, $f = 1$, $v(t) = 1$.}
\label{RU_NEnaivejpg2}
\end{subfigure}
\caption{
The reduced locus of $\mathcal{N}_E^{\mathrm{naive}}$ for an (R-U) extension $E|F$ where $e$ and $f$ are the ramification index and the inertia degree of $F|\mathbb{Q}_2$ and $v(t)$ is the $\pi_0$-adic valuation of $t$.
We always have $1 \leq v(t) \leq e$.
The solid lines lie in $\mathcal{N}_E \subseteq \mathcal{N}_E^{\mathrm{naive}}$.
}
\label{RU_NEnaivejpg}
\end{figure}
Let $\Lambda \subseteq C$ as in Lemma \ref{RU_IS}.
We denote by $X_{\Lambda}^{+}$ the formal $O_F$-module corresponding to the Dieudonn\'e module $M = \Lambda \otimes \breve{O}_E$.
There is a canonical quasi-isogeny
\begin{equation*}
\varrho_{\Lambda}^{+}: \mathbb{X} \to X_{\Lambda}^{+}
\end{equation*}
of $F$-height $1$.
For $S \in \Nilp$, we define
\begin{equation*}
\mathcal{N}_{E, \Lambda}(S) = \{ (X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S) \mid (\varrho_{\Lambda}^{+} \times S) \circ \varrho \text{ is an isogeny} \}.
\end{equation*}
By \cite[Prop.\ 2.9]{RZ96}, the functor $\mathcal{N}_{E, \Lambda}$ is representable by a closed formal subscheme of $\mathcal{N}_E^{\mathrm{naive}}$.
On geometric points, we have
\begin{equation} \label{RU_NEL=P1geom}
\mathcal{N}_{E, \Lambda}(\overbar{k}) \isoarrow \mathbb{P}(\Lambda / \Pi \Lambda)(\overbar{k}),
\end{equation}
as follows from Lemma \ref{RU_IS} \eqref{RU_IS1}.
\begin{prop} \label{RU_NEnaiveRL}
The reduced locus of $\mathcal{N}_E^{\mathrm{naive}}$ is a union
\begin{equation*}
(\mathcal{N}_E^{\mathrm{naive}})_{\red} = \smashoperator[r]{\bigcup_{\Lambda \subseteq C}} \mathcal{N}_{E, \Lambda}
\end{equation*}
where $\Lambda$ runs over all unimodular lattices in $C$ with $\Nm(\Lambda) \subseteq \pi_0 O_F$.
For each $\Lambda$, there exists an isomorphism
\begin{equation*}
\mathcal{N}_{E, \Lambda} \isoarrow \mathbb{P}(\Lambda / \Pi \Lambda),
\end{equation*}
inducing the bijection \eqref{RU_NEL=P1geom} on $\overbar{k}$-valued points.
\end{prop}
The proof is analogous to that of Proposition \ref{RP_NEnaiveRL}.
\begin{rmk} \label{RU_rmkred}
Similar to Remark \ref{RP_rmklatt} \eqref{RP_rmklatt3}, we let $(\mathcal{N}_E)_{\red} \subseteq (\mathcal{N}_E^{\mathrm{naive}})_{\red}$ be the union of all projective lines $\mathcal{N}_{E, \Lambda}$ corresponding to \emph{hyperbolic} unimodular lattices $\Lambda \subseteq C$.
Later, we will define $\mathcal{N}_E$ as a functor on $\Nilp$ and show that $\mathcal{N}_E \simeq \mathcal{M}_{Dr}$, where $\mathcal{M}_{Dr}$ is the Drinfeld moduli problem (see Theorem \ref{RU_thm}, a description of the formal scheme $\mathcal{M}_{Dr}$ can be found in \cite[I.3]{BC91}).
In particular, $(\mathcal{N}_E)_{\red}$ is connected, each projective line in $(\mathcal{N}_E)_{\red}$ contains $q+1$ intersection points, and exactly two lines meet in each such point. \\
It might happen that $(\mathcal{N}_E)_{\red} = (\mathcal{N}_E^{\mathrm{naive}})_{\red}$ (see, for example, Figure \ref{RU_NEnaivejpg2}), if there are no non-hyperbolic unimodular lattices $\Lambda \subseteq C$ with $\Nm (\Lambda) \subseteq \pi_0 O_F$.
In fact, this is the case if and only if $|t| = |\pi_0|$, see Proposition \ref{LA_latt} and Lemma \ref{LA_hyp}.
(Note however that we still have $\mathcal{N}_E \neq \mathcal{N}_E^{\mathrm{naive}}$, see Remark \ref{RU_rmkNE} and Section \ref{LM_naive}.)
Assume $|t| \neq |\pi_0|$ and let $P \in \mathcal{N}_E(\overbar{k})$ be an intersection point.
Then, as in the case where $E|F$ is of type (R-P) (compare Remark \ref{RP_rmklatt} \eqref{RP_rmklatt3}), the connected component of $P$ in $((\mathcal{N}_E^{\mathrm{naive}})_{\red} \setminus (\mathcal{N}_E)_{\red}) \cup \{ P \}$ consists of a finite union of projective lines (corresponding to non-hyperbolic lattices, by definition of $(\mathcal{N}_E)_{\red}$).
In Figure \ref{RU_NEnaivejpg1}, these components are indicated by dashed lines (they consist of just one projective line in that case).
\end{rmk}
\subsection{The straightening condition}
\label{RU2}
As in the case (R-P), see section \ref{RP2}, we use the results of section \ref{POL} to define the straightening condition on $\mathcal{N}_E^{\mathrm{naive}}$.
By Theorem \ref{POL_thm} and Remark \ref{POL_rmk} \eqref{POL_rmk2}, there exists a principal polarization $\widetilde{\lambda}^0_{\mathbb{X}}$ on the framing object $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ such that the Rosati involution is the identity on $O_E$.
We set $\widetilde{\lambda}_{\mathbb{X}} = \widetilde{\lambda}^0_{\mathbb{X}} \circ \iota_{\mathbb{X}}(\Pi)$, which is again a polarization on $\mathbb{X}$ with the Rosati involution inducing the identity on $O_E$, but with kernel $\ker \widetilde{\lambda}_{\mathbb{X}} = \mathbb{X}[\Pi]$.
This polarization is unique up to a scalar in $O_E^{\times}$, \emph{i.e.}, any two polarizations $\widetilde{\lambda}_{\mathbb{X}}$ and $\widetilde{\lambda}_{\mathbb{X}}'$ with these properties satisfy
\begin{equation*}
\widetilde{\lambda}_{\mathbb{X}}' = \widetilde{\lambda}_{\mathbb{X}} \circ \iota(\alpha),
\end{equation*}
for some $\alpha \in O_E^{\times}$.
For any $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$,
\begin{equation*}
\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}}) = \varrho^{\ast}(\widetilde{\lambda}^0_{\mathbb{X}}) \circ \iota(\Pi)
\end{equation*}
is a polarization on $X$ with kernel $\ker \widetilde{\lambda} = X[\Pi]$, see Theorem \ref{POL_thm} \eqref{POL_thm2}.
Recall that a unimodular or $\Pi$-modular lattice $\Lambda \subseteq C$ is called \emph{hyperbolic} if there exists a basis $(e_1,e_2)$ of $\Lambda$ such that, with respect to this basis, $h$ has the form
\begin{equation*}
\begin{pmatrix}
& \Pi^i \\
\overbar{\Pi}^i & \\
\end{pmatrix},
\end{equation*}
for $i = 0$ resp.\ $1$.
By Lemma \ref{LA_hyp}, this is the case if and only if $\Nm (\Lambda) = tO_F$.
\begin{prop} \label{RU_strprop}
For a suitable choice of $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ and $\widetilde{\lambda}_{\mathbb{X}}$, the quasi-polarization
\begin{equation*}
\lambda_{\mathbb{X},1} = \frac{1}{t} (\lambda_{\mathbb{X}} + \widetilde{\lambda}_{\mathbb{X}})
\end{equation*}
is a polarization on $\mathbb{X}$.
Let $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ and $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$.
Then $\lambda_1 = \frac{1}{t} (\lambda + \widetilde{\lambda})$ is a polarization if and only if $(X,\iota,\lambda,\varrho) \in \mathcal{N}_{E, \Lambda}(\overbar{k})$ for a hyperbolic unimodular lattice $\Lambda \subseteq C$.
\end{prop}
\begin{proof}
On the rational Dieudonn\'e module $N = M_{\mathbb{X}} \otimes_{\breve{O}_F} \breve{F}$, denote by $\langle \,, \rangle$, $(\,,)$ and $\langle \,, \rangle_1$ the alternating forms induced by $\lambda_{\mathbb{X}}$, $\widetilde{\lambda}_{\mathbb{X}}$ and $\lambda_{\mathbb{X},1}$, respectively.
The form $\langle \,, \rangle_1$ is integral on $M_{\mathbb{X}}$ if and only if $\lambda_{\mathbb{X},1}$ is a polarization on $\mathbb{X}$.
We have
\begin{align*}
({\bf F}x, y) & = (x,{\bf V}y)^{\sigma}, \\
(\Pi x ,y) & = (x, \Pi y), \\
\langle x, y \rangle_1 & = \frac{1}{t} (\langle x,y \rangle + (x,y)),
\end{align*}
for all $x,y \in N$.
The form $(\,,)$ induces an $\breve{E}$-bilinear alternating form $b$ on $N$ by the formula
\begin{equation} \label{RU_altb}
b(x,y) = c((\Pi x,y) - \overbar{\Pi} (x,y)).
\end{equation}
Here, $c$ is a unit in $\breve{O}_E$ such that $c \cdot \sigma(c)^{-1}= \overbar{\Pi}\Pi^{-1}$.
Since $\frac{\overbar{\Pi}}{\Pi} = \frac{t - \Pi}{\Pi} = -1 + t\Pi^{-1}$ and $2 \in t\breve{O}_E$ (because $\Tr_{E|F}(t^{-1}) = 2/t$ lies in $O_F$, see Lemma \ref{LA_diff}), we have $\frac{\overbar{\Pi}}{\Pi} \in 1 + \frac{t}{\Pi}\breve{O}_E$, so we can even choose $c \in 1 + t\Pi^{-1}\breve{O}_E$.
The dual of $M$ with respect to this form is again $M^{\sharp} = \Pi^{-1} M$, since
\begin{equation*}
(x,y) = \Tr_{\breve{E}|\breve{F}} \left(\frac{1}{t\vartheta c} \cdot b(x,y) \right),
\end{equation*}
and the inverse different of $E|F$ is given by $\mathfrak{D}_{E|F}^{-1} = t^{-1}O_E$, see Lemma \ref{LA_diff}.
Now $b$ is invariant under the $\sigma$-linear operator $\tau = \Pi {\bf V}^{-1} = {\bf F} \overbar{\Pi}^{-1}$, because
\begin{equation*}
b(\tau x, \tau y) = b({\bf F} \overbar{\Pi}^{-1} x, \Pi {\bf V}^{-1} y) = \frac{c}{\sigma(c)} \cdot b(\overbar{\Pi}^{-1} x, \Pi y)^{\sigma} = b(x,y)^{\sigma}.
\end{equation*}
Hence $b$ defines an $E$-linear alternating form on $C$.
We choose the framing object $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ such that $M_{\mathbb{X}}$ is $\tau$-invariant (see Lemma \ref{RU_latt}) and such that $\Lambda_1 = M_{\mathbb{X}}^{\tau}$ is hyperbolic.
We can find a basis $(e_1,e_2)$ of $\Lambda_1$ such that
\begin{equation*}
h \, \weq \begin{pmatrix}
& \Pi \\
\overbar{\Pi} & \\
\end{pmatrix}, \quad b \, \weq \begin{pmatrix}
& u \\
-u & \\
\end{pmatrix},
\end{equation*}
for some $u \in E^{\times}$.
Since $\widetilde{\lambda}_{\mathbb{X}}$ has the same kernel as $\lambda_{\mathbb{X}}$, we have $u = \overbar{\Pi} u'$ for some unit $u' \in O_E^{\times}$.
We can choose $\widetilde{\lambda}_{\mathbb{X}}$ such that $u' = 1$ and $u = \overbar{\Pi}$.
Now $\frac{1}{t} (h(x,y) + b(x,y))$ is integral for all $x,y \in \Lambda_1$.
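Concretely, with respect to the basis $(e_1,e_2)$ and using $\Pi + \overbar{\Pi} = t$, the matrices above add up to
\begin{equation*}
h + b \, \weq \begin{pmatrix}
& \Pi \\
\overbar{\Pi} & \\
\end{pmatrix} + \begin{pmatrix}
& \overbar{\Pi} \\
-\overbar{\Pi} & \\
\end{pmatrix} = \begin{pmatrix}
& t \\
& \\
\end{pmatrix},
\end{equation*}
so that $\frac{1}{t}(h + b)$ has entries in $O_E$ with respect to this basis.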
Hence $\frac{1}{t} (h(x,y) + b(x,y))$ is also integral for all $x,y \in M_{\mathbb{X}}$.
For all $x,y \in M_{\mathbb{X}}$, we have
\begin{align*}
\langle x,y \rangle_1 & = \frac{1}{t} (\langle x,y \rangle + (x,y)) = \frac{1}{t} \Tr_{\breve{E}|\breve{F}} \left(\frac{1}{t\vartheta} \cdot h(x,y) + \frac{1}{t\vartheta c} \cdot b(x,y) \right) \\
& = \Tr_{\breve{E}|\breve{F}} \left(\frac{1}{t^{2}\vartheta} \cdot (h(x,y) + b(x,y)) \right) + \Tr_{\breve{E}|\breve{F}} \left(\frac{1-c}{t^2\vartheta c} \cdot b(x,y) \right).
\end{align*}
The first summand is integral since $\frac{1}{t} (h(x,y) + b(x,y))$ is integral.
The second summand is integral since $1-c$ is divisible by $t\Pi^{-1}$ and $b(x,y)$ lies in $\Pi \breve{O}_E$.
Hence $\langle \,, \rangle_1$ is integral on $M_{\mathbb{X}}$ and this implies that $\lambda_{\mathbb{X},1}$ is a polarization on $\mathbb{X}$.
Now let $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ and denote by $M \subseteq N$ its Dieudonn\'e module.
Assume that $\lambda_1 = t^{-1}(\lambda + \widetilde{\lambda})$ is a polarization on $X$.
Then $\langle \,, \rangle_1$ is integral on $M$.
But this is equivalent to $t^{-1}(h(x,y) + b(x,y))$ being integral for all $x,y \in M$.
For $x = y$, since $b$ is alternating, we have
\begin{equation*}
h(x,x) = h(x,x) + b(x,x) \in t\breve{O}_F.
\end{equation*}
Let $\Lambda \subseteq C$ be the unimodular or $\Pi$-modular lattice given by $\Lambda = M^{\tau}$ resp.\ $\Lambda = (M + \tau(M))^{\tau}$, see Lemma \ref{RU_latt}.
Then $h(x,x) \in tO_F$ for all $x \in \Lambda$.
Thus $\Nm(\Lambda) \subseteq tO_F$ and, by minimality, this implies that $\Nm(\Lambda) = tO_F$ and $\Lambda$ is hyperbolic (see Lemma \ref{LA_hyp}).
Hence, in either case, the point corresponding to $(X,\iota,\lambda,\varrho)$ lies in $\mathcal{N}_{E, \Lambda'}$ for a hyperbolic lattice $\Lambda'$.
Conversely, assume that $(X,\iota,\lambda,\varrho) \in \mathcal{N}_{E, \Lambda}(\overbar{k})$ for some hyperbolic lattice $\Lambda \subseteq C$.
We want to show that $\lambda_1$ is a polarization on $X$.
This follows if $\langle \,, \rangle_1$ is integral on $M$, or equivalently, if $t^{-1} (h(x,y) + b(x,y))$ is integral on $M$.
For this, it is enough to show that $t^{-1} (h(x,y) + b(x,y))$ is integral on $\Lambda$.
Let $\Lambda' \subseteq C$ be the unimodular lattice generated by $\overbar{\Pi}^{-1} e_1$ and $e_2$, where $(e_1,e_2)$ is the basis of the $\Pi$-modular lattice $\Lambda_1 = M_{\mathbb{X}}^{\tau}$ chosen above.
With respect to the basis $(\overbar{\Pi}^{-1} e_1, e_2)$, we have
\begin{equation*}
h \, \weq \begin{pmatrix}
& 1 \\
1 & \\
\end{pmatrix}, \quad b \, \weq \begin{pmatrix}
& 1 \\
-1 & \\
\end{pmatrix}.
\end{equation*}
In particular, $\Lambda'$ is a hyperbolic lattice and $t^{-1}(h + b)$ is integral on $\Lambda'$.
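The integrality can be seen explicitly: with respect to the basis $(\overbar{\Pi}^{-1} e_1, e_2)$, the two matrices add up to
\begin{equation*}
h + b \, \weq \begin{pmatrix}
& 2 \\
& \\
\end{pmatrix},
\end{equation*}
and $\frac{2}{t} = \Tr_{E|F}(t^{-1})$ lies in $O_F$, since the inverse different $\mathfrak{D}_{E|F}^{-1} = t^{-1}O_E$ is mapped into $O_F$ by the trace (Lemma \ref{LA_diff}).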
By Proposition \ref{LA_latt}, there exists an element $g \in \SU(C,h)$ with $g\Lambda = \Lambda'$.
Since $\det g = 1$, the alternating form $b$ is invariant under $g$.
Thus $t^{-1} (h + b)$ is also integral on $\Lambda$.
\end{proof}
From now on, we assume that $(\mathbb{X},\iota_{\mathbb{X}},\lambda_{\mathbb{X}})$ and $\widetilde{\lambda}_{\mathbb{X}}$ are chosen in a way such that
\begin{equation*}
\lambda_{\mathbb{X},1} = \frac{1}{t} (\lambda_{\mathbb{X}} + \widetilde{\lambda}_{\mathbb{X}}) \in \Hom(\mathbb{X},\mathbb{X}^{\vee}).
\end{equation*}
\begin{defn} \label{RU_strdef}
A tuple $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$ satisfies the \emph{straightening condition} if
\begin{equation} \label{RU_strcond}
\lambda_1 = \frac{1}{t} (\lambda + \widetilde{\lambda}) \in \Hom(X,X^{\vee}).
\end{equation}
\end{defn}
This condition is independent of the choice of $\widetilde{\lambda}_{\mathbb{X}}$.
In fact, we can only change $\widetilde{\lambda}_{\mathbb{X}}$ by a scalar of the form $1 + t\Pi^{-1}u$, $u \in O_E$.
But if $\widetilde{\lambda}'_{\mathbb{X}} = \widetilde{\lambda}_{\mathbb{X}} \circ \iota(1 + t\Pi^{-1}u)$, then $\lambda'_{\mathbb{X},1} = \lambda_{\mathbb{X},1} + \widetilde{\lambda}_{\mathbb{X}} \circ \iota(\Pi^{-1}u) = \lambda_{\mathbb{X},1} + \widetilde{\lambda}^0_{\mathbb{X}} \circ \iota(u)$ and $\lambda'_1 = \lambda_1 + \varrho^{\ast}(\widetilde{\lambda}^0_{\mathbb{X}}) \circ \iota(u)$.
Clearly, $\lambda'_1$ is a polarization if and only if $\lambda_1$ is one.
For $S \in \Nilp$, let $\mathcal{N}_E(S)$ be the set of all tuples $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$ that satisfy the straightening condition.
By \cite[Prop.\ 2.9]{RZ96}, the functor $\mathcal{N}_E$ is representable by a closed formal subscheme of $\mathcal{N}_E^{\mathrm{naive}}$.
\begin{rmk} \label{RU_rmkNE}
The reduced locus of $\mathcal{N}_E$ is given by
\begin{equation*}
(\mathcal{N}_E)_{\red} = \smashoperator[r]{\bigcup_{\Lambda \subseteq C}} \mathcal{N}_{E, \Lambda} \simeq \smashoperator[r]{\bigcup_{\Lambda \subseteq C}} \mathbb{P}(\Lambda / \Pi \Lambda),
\end{equation*}
where the union goes over all \emph{hyperbolic} unimodular lattices $\Lambda \subseteq C$.
Note that, depending on the form of the (R-U) extension $E|F$, it may happen that all unimodular lattices are hyperbolic (when $|t| = |\pi_0|$) and in that case, we have $(\mathcal{N}_E)_{\red} = (\mathcal{N}_E^{\mathrm{naive}})_{\red}$.
However, the equality does not extend to an isomorphism between $\mathcal{N}_E$ and $\mathcal{N}_E^{\mathrm{naive}}$.
This will be discussed in section \ref{LM_naive}.
\end{rmk}
\subsection{The main theorem for the case (R-U)}
As in the case (R-P), we want to establish a connection to the Drinfeld moduli problem.
To this end, fix an embedding of $E$ into the quaternion division algebra $B$.
Let $(\mathbb{X}, \iota_{\mathbb{X}})$ be the framing object of the Drinfeld problem.
We want to construct a polarization $\lambda_{\mathbb{X}}$ on $\mathbb{X}$ with $\ker \lambda_{\mathbb{X}} = \mathbb{X}[\Pi]$ and Rosati involution given by $b \mapsto \vartheta b' \vartheta^{-1}$ on $B$.
Here $b \mapsto b'$ denotes the standard involution on $B$.
By Lemma \ref{LA_quat} \eqref{LA_quatRU}, there exists an embedding $E_1 \inj B$ of a ramified quadratic extension $E_1|F$ of type (R-P), such that $\Pi_1 \vartheta = - \vartheta \Pi_1$ for a prime element $\Pi_1 \in E_1$.
From Proposition \ref{RP_Dr} \eqref{RP_Dr1} we get a principal polarization $\lambda_{\mathbb{X}}^0$ on $\mathbb{X}$ with associated Rosati involution $b \mapsto \Pi_1 b' \Pi_1^{-1}$.
For fixed choices of $E_1$ and $\Pi_1$, this polarization is unique up to a scalar in $O_F^{\times}$.
We define
\begin{equation*}
\lambda_{\mathbb{X}} = \lambda_{\mathbb{X}}^0 \circ \iota_{\mathbb{X}} (\Pi_1 \vartheta).
\end{equation*}
Since $\lambda_{\mathbb{X}}^0$ is a principal polarization and $\Pi_1 \vartheta$ and $\Pi$ have the same valuation in $O_B$, we have $\ker \lambda_{\mathbb{X}} = \mathbb{X}[\Pi]$. The Rosati involution of $\lambda_{\mathbb{X}}$ is $b \mapsto \vartheta b' \vartheta^{-1}$.
On the other hand, any polarization on $\mathbb{X}$ satisfying these two conditions can be constructed in this way (using the same choices for $E_1$ and $\Pi_1$).
Hence:
\begin{lem} \label{RU_pol}
\begin{enumerate}
\item \label{RU_pol1} There exists a polarization $\lambda_{\mathbb{X}}: \mathbb{X} \to \mathbb{X}^{\vee}$, unique up to a scalar in $O_F^{\times}$, with $\ker \lambda_{\mathbb{X}} = \mathbb{X}[\Pi]$ and associated Rosati involution $b \mapsto \vartheta b' \vartheta^{-1}$.
\item \label{RU_pol2} Fix $\lambda_{\mathbb{X}}$ as in \eqref{RU_pol1} and let $(X,\iota_B,\varrho) \in \mathcal{M}_{Dr}(S)$.
There exists a unique polarization $\lambda$ on $X$ with $\ker \lambda = X[\Pi]$ and Rosati involution $b \mapsto \vartheta b' \vartheta^{-1}$ such that $\varrho^{\ast}(\lambda_{\mathbb{X}}) = \lambda$ on $\overbar{S} = S \times_{\Spf \breve{O}_F} \overbar{k}$.
\end{enumerate}
\end{lem}
Note also that the involution $b \mapsto \vartheta b' \vartheta^{-1}$ does not depend on the choice of $\vartheta \in E$.
We write $\iota_{\mathbb{X},E}$ for the restriction of $\iota_{\mathbb{X}}$ to $E \subseteq B$ and, in the same manner, we write $\iota_E$ for the restriction of $\iota_B$ to $E$ for any $(X,\iota_B,\varrho) \in \mathcal{M}_{Dr}(S)$.
Fix a polarization $\lambda_{\mathbb{X}}$ of $\mathbb{X}$ as in Lemma \ref{RU_pol} \eqref{RU_pol1}.
Accordingly for a tuple $(X,\iota_B,\varrho) \in \mathcal{M}_{Dr}(S)$, let $\lambda$ be the polarization given by Lemma \ref{RU_pol} \eqref{RU_pol2}.
\begin{lem} \label{RU_Dr}
The tuple $(\mathbb{X},\iota_{\mathbb{X},E}, \lambda_{\mathbb{X}})$ is a framing object of $\mathcal{N}_E^{\mathrm{naive}}$.
Moreover, the map
\begin{equation*}
(X,\iota_B,\varrho) \mapsto (X,\iota_E,\lambda,\varrho)
\end{equation*}
induces a closed embedding of formal schemes
\begin{equation*}
\eta: \mathcal{M}_{Dr} \inj \mathcal{N}_E^{\mathrm{naive}}.
\end{equation*}
\end{lem}
\begin{proof}
We follow the same argument as in the proof of Lemma \ref{RP_clemb}.
Again it is enough to check that $\QIsog(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$ contains $\SU(C,h)$ as a closed subgroup and that $\iota_E$ satisfies the Kottwitz condition.
By \cite[Prop.\ 5.8]{RZ14}, the special condition on $\iota_B$ implies the Kottwitz condition for $\iota_E$.
It remains to show that $\SU(C,h) \subseteq \QIsog(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$.
But the group $G_{(\mathbb{X}, \iota_{\mathbb{X}})}$ of automorphisms of determinant $1$ of $(\mathbb{X}, \iota_{\mathbb{X}})$ is isomorphic to $\SL_{2,F}$ and $G_{(\mathbb{X}, \iota_{\mathbb{X}})} \subseteq \QIsog(\mathbb{X}, \iota_{\mathbb{X}}, \lambda_{\mathbb{X}})$ is a Zariski-closed subgroup by the same argument as in Lemma \ref{RP_clemb}.
Hence the statement follows from the exceptional isomorphism $\SL_{2,F} \simeq \SU(C,h)$.
\end{proof}
As a next step, we want to show that this already induces a closed embedding
\begin{equation}
\eta: \mathcal{M}_{Dr} \inj \mathcal{N}_E.
\end{equation}
Let $\widetilde{E} \inj B$ be an embedding of a ramified quadratic extension $\widetilde{E}|F$ of type (R-U) as in Lemma \ref{LA_quat} \eqref{LA_quatRU}.
On the framing object $(\mathbb{X},\iota_{\mathbb{X}})$ of $\mathcal{M}_{Dr}$, we define a polarization $\widetilde{\lambda}_{\mathbb{X}}$ via
\begin{equation*}
\widetilde{\lambda}_{\mathbb{X}} = \lambda_{\mathbb{X}} \circ \iota_{\mathbb{X}} (\widetilde{\vartheta}),
\end{equation*}
where $\widetilde{\vartheta}$ is a unit in $\widetilde{E}$ satisfying $\widetilde{\vartheta}^2 = 1 + (t^2/ \pi_0)\cdot u$, see Lemma \ref{LA_quat} \eqref{LA_quatRU}.
The Rosati involution of $\widetilde{\lambda}_{\mathbb{X}}$ induces the identity on $O_E$ and we have
\begin{align*}
\lambda_{\mathbb{X},1} & = \frac{1}{t} (\lambda_{\mathbb{X}} + \widetilde{\lambda}_{\mathbb{X}}) = \frac{1}{t} \cdot \lambda_{\mathbb{X}} \circ \iota_B (1 + \widetilde{\vartheta}) = \lambda_{\mathbb{X}} \circ \iota_B (\widetilde{\Pi} / \pi_0) \\
& = \lambda_{\mathbb{X}} \circ \iota_B(\Pi^{-1} \gamma) \in \Hom({\mathbb{X}},{\mathbb{X}}^{\vee}),
\end{align*}
using the notation of Lemma \ref{LA_quat} \eqref{LA_quatRU}.
For $(X,\iota_B,\varrho) \in \mathcal{M}_{Dr}(S)$, we set $\widetilde{\lambda} = \lambda \circ \iota_B(\widetilde{\vartheta})$.
By the same calculation, we have $\lambda_1 = \frac{1}{t} (\lambda + \widetilde{\lambda}) \in \Hom(X,X^{\vee})$.
Thus the tuple $(X,\iota_E,\lambda,\varrho) = \eta(X,\iota_B,\varrho)$ satisfies the straightening condition.
Hence we get a closed embedding of formal schemes $\eta: \mathcal{M}_{Dr} \to \mathcal{N}_E$ which is independent of the choice of $\widetilde{E}$.
\begin{thm} \label{RU_thm}
$\eta: \mathcal{M}_{Dr} \to \mathcal{N}_E$ is an isomorphism of formal schemes.
\end{thm}
We first check this for $\overbar{k}$-valued points:
\begin{lem} \label{RU_thmgeom}
$\eta$ induces a bijection $\eta(\overbar{k}): \mathcal{M}_{Dr}(\overbar{k}) \to \mathcal{N}_E(\overbar{k})$.
\end{lem}
\begin{proof}
We only have to show surjectivity; for this, we use the Dieudonn\'e-theoretic description of $\mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$, see \eqref{RU_geompts}.
The rational Dieudonn\'e module $N = N_{\mathbb{X}}$ of $\mathbb{X}$ now additionally carries an action of $B$.
The embedding $F^{(2)} \inj B$ given by
\begin{equation}
\gamma \mapsto \frac{\Pi \cdot \widetilde{\Pi}}{\pi_0},
\end{equation}
(see Lemma \ref{LA_quat} \eqref{LA_quatRU}) induces a $\mathbb{Z} / 2$-grading $N = N_0 \oplus N_1$.
Here,
\begin{align*}
N_0 & = \{ x \in N \mid \iota(a) x = a x \text{ for all } a \in F^{(2)} \}, \\
N_1 & = \{ x \in N \mid \iota(a) x = \sigma(a) x \text{ for all } a \in F^{(2)} \},
\end{align*}
for a fixed embedding $F^{(2)} \inj \breve{F}$.
The operators ${\bf F}$ and ${\bf V}$ have degree $1$ with respect to this grading.
The principal polarization
\begin{equation*}
\lambda_{\mathbb{X},1} = \frac{1}{t} (\lambda_{\mathbb{X}} + \widetilde{\lambda}_{\mathbb{X}}) = \lambda_{\mathbb{X}} \circ \iota_{\mathbb{X}}(\Pi^{-1} \gamma)
\end{equation*}
induces an alternating form $\langle \,, \rangle_1$ on $N$ that satisfies
\begin{equation*}
\langle x,y \rangle_1 = \langle x, \iota(\Pi^{-1} \gamma) \cdot y \rangle,
\end{equation*}
for all $x,y \in N$.
Let $M \in \mathcal{N}_E(\overbar{k}) \subseteq \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ be an $\breve{O}_F$-lattice in $N$.
We claim that $M \in \mathcal{M}_{Dr}(\overbar{k})$.
For this, it is necessary that $M$ is stable under the action of $O_F^{(2)}$ (since $O_B = O_F[\Pi, \gamma] = O_F^{(2)}[\Pi]$, see Lemma \ref{LA_quat} \eqref{LA_quatRU}) or equivalently, that $M$ respects the grading of $N$, \emph{i.e.}, $M = M_0 \oplus M_1$ for $M_i = M \cap N_i$.
Furthermore $M$ has to satisfy the \emph{special} condition:
\begin{equation*}
\dim M_0 / {\bf V}M_1 = \dim M_1 / {\bf V}M_0 = 1.
\end{equation*}
We first show that $M = M_0 \oplus M_1$.
Let $y = y_0 + y_1 \in M$ with $y_i \in N_i$.
Since $M = \Pi M^{\vee}$, we have
\begin{equation*}
\langle x, \iota(\Pi)^{-1} y \rangle = \langle x, \iota(\Pi)^{-1} y_0 \rangle + \langle x, \iota(\Pi)^{-1} y_1 \rangle \in \breve{O}_F,
\end{equation*}
for all $x \in M$.
Together with
\begin{align*}
\langle x, y \rangle_1 & = \langle x, y_0 \rangle_1 + \langle x, y_1 \rangle_1 = \langle x, \iota( \widetilde{\Pi} / \pi_0) y_0 \rangle + \langle x, \iota( \widetilde{\Pi} / \pi_0) y_1 \rangle \\
& = \gamma \cdot \langle x, \iota (\Pi^{-1}) y_0 \rangle + (1-\gamma) \cdot \langle x, \iota (\Pi^{-1}) y_1 \rangle \in \breve{O}_F,
\end{align*}
this implies that $\langle x, \iota (\Pi^{-1}) y_0 \rangle$ and $\langle x, \iota (\Pi^{-1}) y_1 \rangle$ lie in $\breve{O}_F$ for all $x \in M$.
Hence, $y_0, y_1 \in M$ and this means that $M$ respects the grading.
It follows that $M$ is stable under the action of $O_B$.
In order to show that $M$ is special, note that
\begin{equation*}
\langle {\bf V}x, {\bf V}y \rangle_1^{\sigma} = \langle {\bf F}{\bf V}x, y \rangle_1 = \pi_0 \cdot \langle x,y \rangle_1 \in \pi_0 \breve{O}_F,
\end{equation*}
for all $x,y \in M$.
The form $\langle \,, \rangle_1$ comes from a principal polarization, so it induces a perfect form on $M$.
Now it is enough to show that also the restrictions of $\langle \,, \rangle_1$ to $M_0$ and $M_1$ are perfect.
Indeed, if $M$ were not special, we would have $M_i = {\bf V}M_{i+1}$ for some $i$, contradicting the fact that $\langle \,, \rangle_1$ is perfect on $M_i$.
We prove that $\langle \,, \rangle_1$ is perfect on $M_i$ by showing $\langle M_0, M_1 \rangle_1 \subseteq \pi_0 \breve{O}_F$.
Let $x \in M_0$ and $y \in M_1$.
Then,
\begin{align*}
\langle x, y \rangle_1 & = (1-\gamma) \cdot \langle x, \iota(\Pi)^{-1} y \rangle, \\
\langle x, y \rangle_1 & = - \langle y, x \rangle_1 = - \gamma \cdot \langle y, \iota(\Pi)^{-1} x \rangle = \gamma \cdot \langle x, \iota(\overbar{\Pi})^{-1} y \rangle.
\end{align*}
We take the difference of these two equations.
From $\Pi \equiv \overbar{\Pi} \mod \pi_0$, it follows that $\langle x, \iota(\Pi)^{-1} y \rangle \equiv 0 \mod \pi_0$ and thus also $\langle x, y \rangle_1 \equiv 0 \mod \pi_0$.
The form $\langle \,, \rangle_1$ is hence perfect on $M_0$ and $M_1$ and the special condition follows.
This finishes the proof of Lemma \ref{RU_thmgeom}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{RU_thm}]
Let $(\mathbb{X}, \iota_{\mathbb{X}})$ be a framing object for $\mathcal{M}_{Dr}$ and let further
\begin{equation*}
\eta(\mathbb{X}, \iota_{\mathbb{X}}) = (\mathbb{X}, \iota_{\mathbb{X},E}, \lambda_{\mathbb{X}})
\end{equation*}
be the corresponding framing object for $\mathcal{N}_E$.
We fix an embedding $F^{(2)} \inj B$ as in Lemma \ref{LA_quat} \eqref{LA_quatRU}.
For $S \in \Nilp$, let $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E(S)$ and $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$.
We have
\begin{align*}
\varrho^{-1} \circ \iota_{\mathbb{X}} (\gamma) \circ \varrho & = \varrho^{-1} \circ \iota_{\mathbb{X}} (\Pi) \circ \lambda_{\mathbb{X}}^{-1} \circ \lambda_{\mathbb{X},1} \circ \varrho \\
& = \iota(\Pi) \circ \lambda^{-1} \circ \lambda_1 \in \End(X),
\end{align*}
for $\lambda_1 = t^{-1} (\lambda + \widetilde{\lambda})$, since $\ker \lambda = X[\Pi]$.
But $O_B = O_F[\Pi, \gamma ]$ (see Lemma \ref{LA_quat} \eqref{LA_quatRU}), so this already induces an $O_B$-action $\iota_B$ on $X$.
It remains to show that $(X,\iota_B,\varrho)$ satisfies the \emph{special} condition (see the discussion before Proposition \ref{RP_Dr} for a definition).
The special condition is open and closed (see \cite[p.\ 7]{RZ14}) and $\eta$ is bijective on $\overbar{k}$-points.
Hence $\eta$ induces an isomorphism on reduced subschemes
\begin{equation*}
(\eta)_{\red}: (\mathcal{M}_{Dr})_{\red} \isoarrow (\mathcal{N}_E)_{\red},
\end{equation*}
because $(\mathcal{M}_{Dr})_{\red}$ and $(\mathcal{N}_E)_{\red}$ are locally of finite type over $\Spec \overbar{k}$.
It follows that $\eta: \mathcal{M}_{Dr} \to \mathcal{N}_E$ is an isomorphism.
\end{proof}
\subsection{Deformation theory of intersection points} \label{LM_naive}
In this section, we will study the deformation rings of certain geometric points in $\mathcal{N}_E^{\mathrm{naive}}$ with the goal of proving that $\mathcal{N}_E \subseteq \mathcal{N}_E^{\mathrm{naive}}$ is a strict inclusion even in the case $|t| = |\pi_0|$.
In contrast to the non-$2$-adic case, we are not able to use the theory of local models (see \cite{PRS13} for a survey), since there is in general no normal form for the lattices $\Lambda \subseteq C$, see Proposition \ref{LA_latt} and \cite[Thm.\ 3.16]{RZ96}.\footnote{It is possible to define a local model for the non-naive spaces $\mathcal{N}_E$ (also in the case (R-P)) and to establish a local model diagram as in \cite[3.27]{RZ96}. The local model is then isomorphic to the local model of the Drinfeld moduli problem. This will be part of a future paper of the author.}
Thus we will take the more direct approach of studying the deformations of a fixed point $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ and using the theory of Grothendieck-Messing (\cite{Me72}).
Let $\Lambda \subseteq C$ be a $\Pi$-modular hyperbolic lattice.
By Lemma \ref{RU_IS}, there is a unique point $x = (X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(\overbar{k})$ with a $\tau$-stable Dieudonn\'e module $M \subseteq C \otimes_E \breve{E}$ and $M^{\tau} = \Lambda$.
Since $\Lambda$ is hyperbolic, $x$ satisfies the straightening condition, \emph{i.e.}, $x \in \mathcal{N}_E(\overbar{k})$.
(In Figure \ref{RU_NEnaivejpg}, $x$ would lie on the intersection of two solid lines.)
Let $\widehat{\mathcal{O}}_{\mathcal{N}_E^{\mathrm{naive}},x}$ be the formal completion of the local ring at $x$.
It represents the following deformation functor $\Defo_x$.
For an artinian $\breve{O}_F$-algebra $R$ with residue field $\overbar{k}$, we have
\begin{equation*}
\Defo_x(R) = \{ (Y,\iota_Y,\lambda_Y) / R \mid Y_{\overbar{k}} \cong X \},
\end{equation*}
where $(Y,\iota_Y,\lambda_Y)$ satisfies the usual conditions (see section \ref{RU1}) and the isomorphism $Y_{\overbar{k}} \cong X$ is actually an isomorphism of tuples $(Y_{\overbar{k}},\iota_Y,\lambda_Y) \cong (X,\iota,\lambda)$ as in Definition \ref{RP_isonaive}.
Now assume the quotient map $R \to \overbar{k}$ is an $O_F$-pd-thickening (cf.\ \cite{Ahs11}).
For example, this is the case when $\mathfrak{m}^2 = 0$ for the maximal ideal $\mathfrak{m}$ of $R$.
Then, by Grothendieck-Messing theory (see \cite{Me72} and \cite{Ahs11}), we get an explicit description of $\Defo_x(R)$ in terms of liftings of the Hodge filtration:
The (relative) Dieudonn\'e crystal $\mathbb{D}_X(R)$ of $X$ evaluated at $R$ is naturally isomorphic to the free $R$-module $\Lambda \otimes_{O_F} R$ and this isomorphism is equivariant under the action of $O_E$ induced by $\iota$ and respects the perfect form $\Phi = \langle \,, \rangle \circ (1,\Pi^{-1})$ induced by $\lambda \circ \iota(\Pi^{-1})$.
The Hodge filtration of $X$ is given by $\mathcal{F}_X = {\bf V} \cdot \mathbb{D}_X(\overbar{k}) \cong \Pi \cdot (\Lambda \otimes_{O_F} \overbar{k}) \subseteq \Lambda \otimes_{O_F} \overbar{k}$.
A point $Y \in \Defo_x(R)$ now corresponds, via Grothendieck-Messing, to a direct summand $\mathcal{F}_Y \subseteq \Lambda \otimes_{O_F} R$ of rank $2$ lifting $\mathcal{F}_X$, stable under the $O_E$-action and totally isotropic with respect to $\Phi$.
Furthermore, it has to satisfy the Kottwitz condition (see section \ref{RU1}): For the action of $\alpha \in O_E$ on $\Lie Y = (\Lambda \otimes_{O_F} R) / \mathcal{F}_Y$, we have
\begin{equation*}
\charp(\Lie Y, T \mid \iota(\alpha)) = (T - \alpha)(T - \overbar{\alpha}).
\end{equation*}
Let us now fix an $O_E$-basis $(e_1, e_2)$ of $\Lambda$ and let us write everything in terms of the $O_F$-basis $(e_1,e_2,\Pi e_1, \Pi e_2)$.
Since $\Lambda$ is hyperbolic, we can fix $(e_1,e_2)$ such that $h$ is represented by the matrix
\begin{equation*}
h \weq \left( \begin{array}{cc}
& \Pi \\
\overbar{\Pi} & \\
\end{array} \right),
\end{equation*}
and then
\begin{equation*}
\Phi = \Tr_{E|F} \frac{1}{t\vartheta} h(\cdot, \Pi^{-1} \cdot) \weq
\left( \begin{array}{cc|cc}
& t/\pi_0 & & 1 \\
& & -1 & \\ \hline
& -1 + t^2/\pi_0 & & t \\
1 & & & \\
\end{array} \right).
\end{equation*}
An $R$-basis $(v_1,v_2)$ of $\mathcal{F}_Y$ can now be chosen such that
\begin{equation*}
(v_1 v_2) = \begin{pmatrix}
y_{11} & y_{12} \\
y_{21} & y_{22} \\
1 & \\
& 1 \\
\end{pmatrix},
\end{equation*}
with $y_{ij} \in R$.
As an easy calculation shows, the conditions on $\mathcal{F}_Y$ above are now equivalent to the following conditions on the $y_{ij}$:
\begin{align*}
y_{11} + y_{22} & = t, \\
y_{11} y_{22} - y_{12}y_{21} &= \pi_0,\\
t(\frac{ty_{22}}{\pi_0} + 2) = y_{11}(\frac{ty_{22}}{\pi_0} + 2) &= y_{21}(\frac{ty_{22}}{\pi_0} + 2) = y_{12} (\frac{ty_{22}}{\pi_0} + 2) = 0.
\end{align*}
Let $T$ be the closed subscheme of $\Spec O_F[y_{11},y_{12},y_{21},y_{22}]$ given by these equations.
Let $T_y$ be the formal completion of the localization at the ideal generated by the $y_{ij}$ and $\pi_0$.
Then we have $\Defo_x(R) \cong T_y(R)$ for any $O_F$-pd-thickening $R \to \overbar{k}$.
In particular, the first infinitesimal neighborhoods of $\Defo_x$ and $T_y$ coincide.
The first infinitesimal neighborhood of $T_y$ is given by $\Spec O_F[y_{ij}]/((y_{ij})^2,y_{11}+y_{22}-t,\pi_0)$, hence $T_y$ has Krull dimension $3$, and so does $\Defo_x$. However, $\mathcal{M}_{Dr}$ is regular of dimension $2$, cf.\ \cite{BC91}.
Thus,
\begin{prop}
$\mathcal{N}_E^{\mathrm{naive}} \neq \mathcal{M}_{Dr}$, even when $|t| = |\pi_0|$.
\end{prop}
Indeed, $\dim \widehat{\mathcal{O}}_{\mathcal{N}_E^{\mathrm{naive}},x} = \dim \Defo_x = 3 > 2 = \dim \widehat{\mathcal{O}}_{\mathcal{N}_E,x}$.
\section{A theorem on the existence of polarizations}
\label{POL}
In this section, we prove the existence of the polarization $\widetilde{\lambda}$ for any $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E^{\mathrm{naive}}(S)$, as claimed in sections \ref{RP2} and \ref{RU2} for the cases (R-P) and (R-U).
In fact, we will show more generally that $\widetilde{\lambda}$ exists even for the points of a larger moduli space $\mathcal{M}_E$ where we forget about the polarization $\lambda$.
We start with the definition of the moduli space $\mathcal{M}_E$.
Let $F|\mathbb{Q}_p$ be a finite extension (not necessarily $p=2$) and let $E|F$ be a quadratic extension (not necessarily ramified).
We denote by $O_F$ and $O_E$ the respective rings of integers, by $k$ the residue field of $O_F$, and by $\overbar{k}$ an algebraic closure of $k$.
Furthermore, $\breve{F}$ is the completion of the maximal unramified extension of $F$ and $\breve{O}_F$ its ring of integers.
Let $B$ be the quaternion division algebra over $F$ and $O_B$ the ring of integers.
If $E|F$ is unramified, we fix a common uniformizer $\pi_0 \in O_F \subseteq O_E$.
If $E|F$ is ramified and $p>2$, we choose a uniformizer $\Pi \in O_E$ such that $\pi_0 = \Pi^2 \in O_F$.
If $E|F$ is ramified and $p=2$, we use the notations of section \ref{LA} for the cases (R-P) and (R-U).
For $S \in \Nilp$, let $\mathcal{M}_E(S)$ be the set of isomorphism classes of tuples $(X,\iota_E,\varrho)$ over $S$.
Here, $X$ is a formal $O_F$-module of dimension $2$ and height $4$ and $\iota_E$ is an action of $O_E$ on $X$ satisfying the Kottwitz condition for the signature $(1,1)$, \emph{i.e.}, the characteristic polynomial for the action of $\iota_E(\alpha)$ on $\Lie(X)$ is
\begin{equation} \label{POL_Kottwitz}
\charp(\Lie X, T \mid \iota(\alpha)) = (T - \alpha)(T - \overbar{\alpha}),
\end{equation}
for any $\alpha \in O_E$, compare the definition of $\mathcal{N}_E^{\mathrm{naive}}$ in the sections \ref{RP} and \ref{RU}.
The last entry $\varrho$ is an $O_E$-linear quasi-isogeny
\begin{equation*}
\varrho: X \times_S \overbar{S} \to \mathbb{X} \times_{\Spec \overbar{k}} \overbar{S},
\end{equation*}
of height $0$ to the framing object $(\mathbb{X},\iota_{\mathbb{X},E})$ defined over $\Spec \overbar{k}$.
The framing object for $\mathcal{M}_E$ is the Drinfeld framing object $(\mathbb{X},\iota_{\mathbb{X},B})$ where we restrict the $O_B$-action to $O_E$ for an arbitrary embedding $O_E \inj O_B$.
The special condition on $(\mathbb{X},\iota_{\mathbb{X},B})$ implies the Kottwitz condition for any $\alpha \in O_E$ by \cite[Prop.\ 5.8]{RZ14}.
\begin{rmk} \label{POL_rmk}
\begin{enumerate}
\item Up to isogeny, there is more than one pair $(X,\iota_E)$ over $\Spec \overbar{k}$ satisfying the conditions above.
Indeed, let $N_X$ be the rational Dieudonn\'e module of $(X,\iota_E)$.
This is a $4$-dimensional $\breve{F}$-vector space with an action of $O_E$.
The Frobenius ${\bf F}$ on $N_X$ commutes with the action of $O_E$.
For a suitable choice of basis of $N_X$, the operator ${\bf F}$ takes one of the following two forms,
\begin{equation*}
{\bf F} = \begin{pmatrix}
& & 1 & \\
& & & 1 \\
\pi_0 & & & \\
& \pi_0 & & \\
\end{pmatrix} \! \sigma \; \text{ or } \; {\bf F} = \begin{pmatrix}
\pi_0 & & & \\
& \pi_0 & & \\
& & 1 & \\
& & & 1 \\
\end{pmatrix} \! \sigma.
\end{equation*}
This follows from the classification of isocrystals, see for example \cite[p.\ 3]{RZ96}.
In the left case, ${\bf F}$ is isoclinic of slope $1/2$ (the supersingular case), and in the right case, the slopes are $0$ and $1$.
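As a quick consistency check (in the normalization $v(\pi_0)=1$): the matrix in the left case has entries in $O_F$ and is hence fixed by $\sigma$, so squaring gives
\begin{equation*}
{\bf F}^2 = \pi_0 \cdot \mathrm{id} \cdot \sigma^2,
\end{equation*}
and all slopes equal $1/2$; in the right case ${\bf F}$ is block diagonal with blocks $\pi_0\,\mathrm{id}\,\sigma$ and $\mathrm{id}\,\sigma$, giving the slopes $1$ and $0$.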
Our choice of the framing object above assures that we are in the supersingular case, since the framing object for the Drinfeld moduli problem can be written as a product of two formal $O_F$-modules of dimension $1$ and height $2$ (\emph{cf.}\ \cite[p.\ 136-137]{BC91}).
\item \label{POL_rmk2} Let $p=2$ and $E|F$ ramified of type (R-P) or (R-U).
We can identify the framing objects $(\mathbb{X},\iota_{\mathbb{X},E})$ for $\mathcal{N}_E^{\mathrm{naive}}$, $\mathcal{M}_{Dr}$ and $\mathcal{M}_E$ by Lemma \ref{RP_Dr} and Lemma \ref{RU_Dr}.
In this way, we obtain a forgetful morphism $\mathcal{N}_E^{\mathrm{naive}} \to \mathcal{M}_E$.
This is a closed embedding, since the existence of a polarization $\lambda$ for $(X,\iota_E,\varrho) \in \mathcal{M}_E(S)$ is a closed condition by \cite[Prop.\ 2.9]{RZ96}.
\end{enumerate}
\end{rmk}
By \cite[Thm.\ 3.25]{RZ96}, $\mathcal{M}_E$ is pro-representable by a formal scheme over $\Spf \breve{O}_F$.
We will prove the following theorem in this section.
\begin{thm} \label{POL_thm}
\begin{enumerate}
\item \label{POL_thm1} There exists a principal polarization $\widetilde{\lambda}_{\mathbb{X}}$ on $(\mathbb{X},\iota_{\mathbb{X},E})$ such that the Rosati involution induces the identity on $O_E$, \emph{i.e.}, $\iota(\alpha)^{\ast} = \iota(\alpha)$ for all $\alpha \in O_E$.
This polarization is unique up to a scalar in $O_E^{\times}$, that is, for any two polarizations $\widetilde{\lambda}_{\mathbb{X}}$ and $\widetilde{\lambda}_{\mathbb{X}}'$ of this form, there exists an element $\alpha \in O_E^{\times}$ such that $\widetilde{\lambda}_{\mathbb{X}}' = \widetilde{\lambda}_{\mathbb{X}} \circ \iota_{\mathbb{X},E}(\alpha)$.
\item \label{POL_thm2} Fix $\widetilde{\lambda}_{\mathbb{X}}$ as in part \eqref{POL_thm1}.
For any $S \in \Nilp$ and $(X,\iota_E,\varrho) \in \mathcal{M}_E(S)$, there exists a unique principal polarization $\widetilde{\lambda}$ on $X$ such that the Rosati involution induces the identity on $O_E$ and such that $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$.
\end{enumerate}
\end{thm}
\begin{rmk} \label{POL_p>2}
\begin{enumerate}
\item We will see later that this theorem describes a natural isomorphism between $\mathcal{M}_E$ and another space $\mathcal{M}_{E,\mathrm{pol}}$ which solves the moduli problem for tuples $(X,\iota_E,\widetilde{\lambda},\varrho)$ where $\widetilde{\lambda}$ is a principal polarization with Rosati involution the identity on $O_E$.
This is an RZ-space for the symplectic group $\GSp_2(E)$ and thus the theorem gives us another geometric realization of an exceptional isomorphism of reductive groups, in this case $\GSp_2(E) \cong \GL_2(E)$.
Since there is no such isomorphism in higher dimensions, the theorem does not generalize to these cases and a different approach is needed to formulate the straightening condition.
\item \label{POL_p>2eq} With Theorem \ref{POL_thm} established, one can give a simpler proof of the isomorphism $\mathcal{N}_E \isoarrow \mathcal{M}_{Dr}$ for the cases where $E|F$ is unramified or $E|F$ is ramified and $p >2$, which is the main theorem of \cite{KR14}.
Indeed, the main part of the proof in loc.\ cit.\ consists of the Propositions 2.1 and 3.1, which claim the existence of a certain principal polarization $\lambda_X^0$ for any point $(X,\iota,\lambda,\varrho) \in \mathcal{N}_E(S)$.
But there is a canonical closed embedding $\mathcal{N}_E \inj \mathcal{M}_E$ and under this embedding, $\lambda_X^0$ is just the polarization $\widetilde{\lambda}$ of Theorem \ref{POL_thm}, for a suitable choice of $\widetilde{\lambda}_{\mathbb{X}}$ on the framing object.
More explicitly, using the notation on page $2$ of loc.\ cit., we take $\widetilde{\lambda}_{\mathbb{X}} = \lambda_{\mathbb{X}} \circ \iota_{\mathbb{X}}^{-1}(\Pi) = \lambda_{\mathbb{X}}^{0} \circ \iota_{\mathbb{X}}(-\delta)$ in the unramified case and $\widetilde{\lambda}_{\mathbb{X}} = \lambda_{\mathbb{X}} \circ \iota_{\mathbb{X}}(\zeta^{-1})$ in the ramified case.
\end{enumerate}
\end{rmk}
We will split the proof of this theorem into several lemmata.
As a first step, we use Dieudonn\'e theory to prove the statement for all geometric points.
\begin{lem} \label{POL_geompts}
Part \eqref{POL_thm1} of the theorem holds.
Furthermore, for a fixed polarization $\widetilde{\lambda}_{\mathbb{X}}$ on $(\mathbb{X},\iota_{\mathbb{X},E})$ and for any $(X,\iota_E,\varrho) \in \mathcal{M}_E(\overbar{k})$, the pullback $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$ is a polarization on $X$.
\end{lem}
\begin{proof}
This follows almost immediately from the theory of affine Deligne-Lusztig varieties (see, for example, \cite{CV}) since we are comparing the geometric points of RZ-spaces for the isomorphic groups $\GL_2(E)$ and $\GSp_2(E)$.
It is also possible to check this via a more direct computation using Dieudonn\'e theory, as we will indicate briefly.
Proceeding very similarly to Proposition \ref{RP_frnaive} or Proposition \ref{RU_frnaive} (cf.\ \cite{KR14} in the unramified case), we can associate to $\mathbb{X}$ a lattice $\Lambda$ in the $2$-dimensional $E$-vector space $C$ (the Frobenius invariant points of the (rational) Dieudonn\'e module).
The choice of a principal polarization on $\mathbb{X}$ with trivial Rosati involution now corresponds exactly to the choice of a perfect alternating form on $\Lambda$.
It immediately follows that such a polarization exists and that it is unique up to a scalar in $O_E^{\times}$.
For the second part, let $X \in \mathcal{M}_E(\overbar{k})$ and $M \subseteq C \otimes_E \breve{E}$ be its Dieudonn\'e module.
Since $\varrho$ has height $0$, we have
\begin{equation*}
[M : M \cap (\Lambda \otimes_E \breve{E})] = [(\Lambda \otimes_E \breve{E}) : M \cap (\Lambda \otimes_E \breve{E})],
\end{equation*}
and one easily checks that a perfect alternating form $b$ on $\Lambda$ is also perfect on $M$.
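To spell out this last claim (a sketch in our own notation, suppressing the $\sigma$-semilinearity of the Frobenius, which plays no role for the index count): choose a basis $(e_i)$ of the lattice $\Lambda \otimes_E \breve{E}$ and write $M = g \cdot (\Lambda \otimes_E \breve{E})$ for some $g \in \GL(C \otimes_E \breve{E})$; the displayed index equality says precisely that $v(\det g) = 0$. The Gram matrix of $b$ on $M$ in the basis $(g e_i)$ is then
\begin{equation*}
\bigl( b(g e_i, g e_j) \bigr)_{i,j} = g^{t} \, \bigl( b(e_i, e_j) \bigr)_{i,j} \, g,
\end{equation*}
whose determinant $\det(g)^2 \det\bigl(b(e_i,e_j)\bigr)$ is again a unit, so $b$ remains perfect on $M$.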
\end{proof}
In the following, we fix a polarization $\widetilde{\lambda}_{\mathbb{X}}$ on $(\mathbb{X},\iota_{\mathbb{X},E})$ as in Theorem \ref{POL_thm} \eqref{POL_thm1}.
Let $(X,\iota_E,\varrho) \in \mathcal{M}_E(S)$ for $S \in \Nilp$ and consider the pullback $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$.
In general, this is only a quasi-polarization.
It suffices to show that $\widetilde{\lambda}$ is a polarization on $X$.
Indeed, since $\varrho$ is $O_E$-linear and of height $0$, this is then automatically a principal polarization on $X$ such that the Rosati involution is the identity on $O_E$.
Define a subfunctor $\mathcal{M}_{E,\mathrm{pol}} \subseteq \mathcal{M}_E$ by
\begin{equation*}
\mathcal{M}_{E,\mathrm{pol}}(S) = \{ (X,\iota_E,\varrho) \in \mathcal{M}_E(S) \mid \widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}}) \text{ is a polarization on } X \}.
\end{equation*}
This is a closed formal subscheme by \cite[Prop.\ 2.9]{RZ96}.
Moreover, Lemma \ref{POL_geompts} shows that $\mathcal{M}_{E,\mathrm{pol}}(\overbar{k}) = \mathcal{M}_E(\overbar{k})$.
\begin{rmk}
Equivalently, we can describe $\mathcal{M}_{E,\mathrm{pol}}$ as follows.
For $S \in \Nilp$, we define $\mathcal{M}_{E,\mathrm{pol}}(S)$ to be the set of equivalence classes of tuples $(X,\iota_E,\widetilde{\lambda},\varrho)$ where
\begin{itemize}
\item $X$ is a formal $O_F$-module over $S$ of height $4$ and dimension $2$,
\item $\iota_E$ is an action of $O_E$ on $X$ that satisfies the Kottwitz condition in \eqref{POL_Kottwitz} and
\item $\widetilde{\lambda}$ is a principal polarization on $X$ such that the Rosati involution induces the identity on $O_E$.
\item Furthermore, we fix a framing object $(\mathbb{X},\iota_{\mathbb{X},E},\widetilde{\lambda}_{\mathbb{X}})$ over $\Spec \overbar{k}$, where $(\mathbb{X},\iota_{\mathbb{X},E})$ is the framing object for $\mathcal{M}_E$ and $\widetilde{\lambda}_{\mathbb{X}}$ is a polarization as in Theorem \ref{POL_thm} \eqref{POL_thm1}.
Then $\varrho$ is an $O_E$-linear quasi-isogeny
\begin{equation*}
\varrho: X \times_S \overbar{S} \to \mathbb{X} \times_{\Spec \overbar{k}} \overbar{S},
\end{equation*}
of height $0$ such that, locally on $\overbar{S}$, the (quasi-)polarizations $\varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$ and $\widetilde{\lambda}$ on $X$ only differ by a scalar in $O_E^{\times}$, \emph{i.e.}, there exists an element $\alpha \in O_E^{\times}$ such that $\varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}}) = \widetilde{\lambda} \circ \iota_E(\alpha)$.
Two tuples $(X,\iota_E,\widetilde{\lambda},\varrho)$ and $(X',\iota_E',\widetilde{\lambda}',\varrho')$ are equivalent if there exists an $O_E$-linear isomorphism $\varphi: X \isoarrow X'$ such that $\varphi^{\ast}(\widetilde{\lambda}')$ and $\widetilde{\lambda}$ only differ by a scalar in $O_E^{\times}$.
\end{itemize}
In this way, we have given a definition of $\mathcal{M}_{E,\mathrm{pol}}$ by introducing extra data on the points of the moduli space $\mathcal{M}_E$, instead of extra conditions.
It is now clear that $\mathcal{M}_{E,\mathrm{pol}}$ describes a moduli problem for $p$-divisible groups of (PEL) type.
It is easily checked that the two descriptions of $\mathcal{M}_{E,\mathrm{pol}}$ give rise to the same moduli space.
\end{rmk}
Theorem \ref{POL_thm} now holds if and only if $\mathcal{M}_{E,\mathrm{pol}} = \mathcal{M}_E$.
This equality is a consequence of the following statement.
\begin{lem} \label{POL_rings}
For any point $x = (X,\iota_E,\varrho) \in \mathcal{M}_{E,\mathrm{pol}}(\overbar{k})$, the embedding $\mathcal{M}_{E,\mathrm{pol}} \inj \mathcal{M}_E$ induces an isomorphism of completed local rings $\widehat{\mathcal{O}}_{\mathcal{M}_{E,\mathrm{pol}}, x} \cong \widehat{\mathcal{O}}_{\mathcal{M}_E,x}$.
\end{lem}
For the proof of this Lemma, we use the theory of local models, \emph{cf.}\ \cite[Chap.\ 3]{RZ96}.
We postpone the proof of this lemma to the end of this section and we first introduce the local models $\mathrm{M}_E^{\mathrm{loc}}$ and $\mathrm{M}_{E,\mathrm{pol}}^{\mathrm{loc}}$ for $\mathcal{M}_E$ and $\mathcal{M}_{E,\mathrm{pol}}$.
Let $C$ be a $4$-dimensional $F$-vector space with an action of $E$ and let $\Lambda \subseteq C$ be an $O_F$-lattice that is stable under the action of $O_E$.
Furthermore, let $(\,,)$ be an $F$-bilinear alternating form on $C$ with
\begin{equation} \label{POL_altloc}
(\alpha x,y) = (x,\alpha y),
\end{equation}
for all $\alpha \in E$ and $x,y\in C$ and such that $\Lambda$ is unimodular with respect to $(\,,)$.
It is easily checked that $(\,,)$ is unique up to an isomorphism of $C$ that commutes with the $E$-action and that maps $\Lambda$ to itself.
For an $O_F$-algebra $R$, let $\mathrm{M}_E^{\mathrm{loc}}(R)$ be the set of all direct summands $\mathcal{F} \subseteq \Lambda \otimes_{O_F} R$ of rank $2$ that are $O_E$-linear and satisfy the \emph{Kottwitz condition}.
That means that, for all $\alpha \in O_E$, the action of $\alpha$ on the quotient $(\Lambda \otimes_{O_F} R) / \mathcal{F}$ has the characteristic polynomial
\begin{equation*}
\charp((\Lambda \otimes_{O_F} R) / \mathcal{F}, T \mid \alpha) = (T - \alpha)(T - \overbar{\alpha}).
\end{equation*}
The subset $\mathrm{M}_{E,\mathrm{pol}}^{\mathrm{loc}}(R) \subseteq \mathrm{M}_E^{\mathrm{loc}}(R)$ consists of all direct summands $\mathcal{F} \in \mathrm{M}_E^{\mathrm{loc}}(R)$ that are in addition totally isotropic with respect to $(\,,)$ on $\Lambda \otimes_{O_F} R$.
The functor $\mathrm{M}_E^{\mathrm{loc}}$ is representable by a closed subscheme of $\Gr(2,\Lambda)_{O_F}$, the Grassmannian of rank $2$ direct summands of $\Lambda$, and $\mathrm{M}_{E,\mathrm{pol}}^{\mathrm{loc}}$ is representable by a closed subscheme of $\mathrm{M}_E^{\mathrm{loc}}$.
In particular, both $\mathrm{M}_E^{\mathrm{loc}}$ and $\mathrm{M}_{E,\mathrm{pol}}^{\mathrm{loc}}$ are projective schemes over $\Spec O_F$.
These local models have already been studied by Deligne and Pappas. In particular, we have:
\begin{prop}[\cite{DP}] \label{POL_loc}
$\mathrm{M}_{E,\mathrm{pol}}^{\mathrm{loc}} = \mathrm{M}_E^{\mathrm{loc}}$.
In other words, for an $O_F$-algebra $R$, any direct summand $\mathcal{F} \in \mathrm{M}_E^{\mathrm{loc}}(R)$ is totally isotropic with respect to $(\,,)$.
\end{prop}
The moduli spaces $\mathcal{M}_E$ and $\mathcal{M}_{E,\mathrm{pol}}$ are related to the local models $\mathrm{M}_E^{\mathrm{loc}}$ and $\mathrm{M}_{E,\mathrm{pol}}^{\mathrm{loc}}$ via local model diagrams, \emph{cf.}\ \cite[Chap.\ 3]{RZ96}.
Let $\mathcal{M}_E^{\mathrm{large}}$ be the functor that maps a scheme $S \in \Nilp$ to the set of isomorphism classes of tuples $(X,\iota_E,\varrho; \gamma)$.
Here,
\begin{equation*}
(X,\iota_E,\varrho) \in \mathcal{M}_E(S),
\end{equation*}
and $\gamma$ is an $O_E$-linear isomorphism
\begin{equation*}
\gamma: \mathbb{D}_X(S) \isoarrow \Lambda \otimes_{O_F} \mathcal{O}_S.
\end{equation*}
On the left hand side, $\mathbb{D}_X(S)$ denotes the (relative) Grothendieck-Messing crystal of $X$ evaluated at $S$, \emph{cf.}\ \cite[5.2]{Ahs11}.
Let $\widehat{\mathrm{M}}_E^{\mathrm{loc}}$ be the $\pi_0$-adic completion of $\mathrm{M}_E^{\mathrm{loc}} \otimes_{O_F} \breve{O}_F$.
Then there is a local model diagram:
\begin{equation*}
\xymatrix{
& \mathcal{M}_E^{\mathrm{large}} \ar[ld]_f \ar[rd]^g & \\
\mathcal{M}_E & & \widehat{\mathrm{M}}_E^{\mathrm{loc}}
}
\end{equation*}
The morphism $f$ on the left hand side is the projection $(X,\iota_E,\varrho; \gamma) \mapsto (X,\iota_E,\varrho)$.
The morphism $g$ on the right hand side maps $(X,\iota_E,\varrho; \gamma) \in \mathcal{M}_E^{\mathrm{large}}(S)$ to
\begin{equation*}
\mathcal{F} = \ker (\Lambda \otimes_{O_F} \mathcal{O}_S \xrightarrow{\gamma^{-1}} \mathbb{D}_X(S) \to \Lie X) \subseteq \Lambda \otimes_{O_F} \mathcal{O}_S.
\end{equation*}
By \cite[Thm.\ 3.11]{RZ96}, the morphism $f$ is smooth and surjective.
The morphism $g$ is formally smooth by Grothendieck-Messing theory, see \cite[V.1.6]{Me72}, resp.\ \cite[Chap.\ 5.2]{Ahs11} for the relative setting (\emph{i.e.}, when $O_F \neq \mathbb{Z}_p$).
We also have a local model diagram for the space $\mathcal{M}_{E,\mathrm{pol}}$.
We define $\mathcal{M}_{E,\mathrm{pol}}^{\mathrm{large}}$ as the fiber product $\mathcal{M}_{E,\mathrm{pol}}^{\mathrm{large}} = \mathcal{M}_{E,\mathrm{pol}} \times_{\mathcal{M}_E} \mathcal{M}_E^{\mathrm{large}}$.
Then $\mathcal{M}_{E,\mathrm{pol}}^{\mathrm{large}}$ is a closed formal subscheme of $\mathcal{M}_E^{\mathrm{large}}$ with the following moduli description.
A point $(X,\iota_E,\varrho; \gamma) \in \mathcal{M}_E^{\mathrm{large}}(S)$ lies in $\mathcal{M}_{E,\mathrm{pol}}^{\mathrm{large}}(S)$ if $\widetilde{\lambda} = \varrho^{\ast}(\widetilde{\lambda}_{\mathbb{X}})$ is a principal polarization on $X$.
In that case, $\widetilde{\lambda}$ induces an alternating form $(\,,)^X$ on $\mathbb{D}_X(S)$ which, under the isomorphism $\gamma$, agrees with the form $(\,,)$ on $\Lambda \otimes_{O_F} \mathcal{O}_S$ up to a unit in $O_E \otimes_{O_F} \mathcal{O}_S$.
The local model diagram for $\mathcal{M}_{E,\mathrm{pol}}$ now looks as follows.
\begin{equation} \label{POL_LMD}
\begin{aligned}
\xymatrix{
& \mathcal{M}_{E,\mathrm{pol}}^{\mathrm{large}} \ar[ld]_{f_{\mathrm{pol}}} \ar[rd]^{g_{\mathrm{pol}}} & \\
\mathcal{M}_{E,\mathrm{pol}} & & \widehat{\mathrm{M}}_{E,\mathrm{pol}}^{\mathrm{loc}}
}
\end{aligned}
\end{equation}
Here, $\widehat{\mathrm{M}}_{E,\mathrm{pol}}^{\mathrm{loc}}$ is the $\pi_0$-adic completion of $\mathrm{M}_{E,\mathrm{pol}}^{\mathrm{loc}} \otimes_{O_F} \breve{O}_F$ and $f_{\mathrm{pol}}$ and $g_{\mathrm{pol}}$ are the restrictions of the morphisms $f$ and $g$ above.
Again, $g_{\mathrm{pol}}$ is formally smooth by Grothendieck-Messing theory and $f_{\mathrm{pol}}$ is smooth and surjective by construction.
We can now finish the proof of Lemma \ref{POL_rings}.
\begin{proof}[Proof of Lemma \ref{POL_rings}]
We have the following commutative diagram.
\begin{equation} \label{POL_commdiag}
\begin{aligned}
\xymatrix{
\mathcal{M}_{E,\mathrm{pol}} \, & \mathcal{M}_{E,\mathrm{pol}}^{\mathrm{large}} \ar[l]_{f_{\mathrm{pol}}} \ar[r]^{g_{\mathrm{pol}}} & \; \widehat{\mathrm{M}}_{E,\mathrm{pol}}^{\mathrm{loc}} \ar@{=}[d] \\
\mathcal{M}_E & \mathcal{M}_E^{\mathrm{large}} \ar[l]_f \ar[r]^g & \widehat{\mathrm{M}}_E^{\mathrm{loc}}
\ar@{_{(}->}"1,1"*+\frm{};"2,1"*+\frm{}
\ar@{_{(}->}"1,2"*+\frm{};"2,2"*+\frm{}
}
\end{aligned}
\end{equation}
The equality on the right hand side follows from Proposition \ref{POL_loc}.
The other vertical arrows are closed embeddings.
Let $x \in \mathcal{M}_{E,\mathrm{pol}}(\overbar{k})$.
By \cite[Prop.\ 3.33]{RZ96}, there exists an \'etale neighbourhood $U$ of $x$ in $\mathcal{M}_E$ and a section $s: U \to \mathcal{M}_E^{\mathrm{large}}$ such that $g \circ s$ is formally \'etale.
Set $U_{\mathrm{pol}} = U \times_{\mathcal{M}_E} \mathcal{M}_{E,\mathrm{pol}}$ and let $s_{\mathrm{pol}}$ be the base change of $s$ to $U_{\mathrm{pol}}$.
Then the composition $g_{\mathrm{pol}} \circ s_{\mathrm{pol}}$ is also formally \'etale.
These formally \'etale maps induce isomorphisms of completed local rings $\widehat{\mathcal{O}}_{\mathcal{M}_E,x} \isoarrow \widehat{\mathcal{O}}_{\widehat{\mathrm{M}}_E^{\mathrm{loc}},x'}$ and $\widehat{\mathcal{O}}_{\mathcal{M}_{E,\mathrm{pol}},x} \isoarrow \widehat{\mathcal{O}}_{\widehat{\mathrm{M}}_{E,\mathrm{pol}}^{\mathrm{loc}},x'}$, where $x' = g(s(x))$.
By Proposition \ref{POL_loc}, we have $\widehat{\mathcal{O}}_{\widehat{\mathrm{M}}_E^{\mathrm{loc}},x'} = \widehat{\mathcal{O}}_{\widehat{\mathrm{M}}_{E,\mathrm{pol}}^{\mathrm{loc}},x'}$ and since this identification commutes with $g \circ s$ (resp.\ $g_{\mathrm{pol}} \circ s_{\mathrm{pol}}$), we get the desired isomorphism $\widehat{\mathcal{O}}_{\mathcal{M}_{E,\mathrm{pol}},x} \cong \widehat{\mathcal{O}}_{\mathcal{M}_E,x}$.
\end{proof}
\section{Introduction}
In this article we will connect two distinct results that have been achieved in the context of gauge/gravity duality.
The first result, which is motivated by the Penrose limit in the AdS$_5\times$S$^5$ geometry\cite{Berenstein:2002jq},
is the natural language for the computation of anomalous dimensions of single trace operators in the planar limit provided
by integrable spin chains (see \cite{Beisert:2010jr} for a thorough review).
For the spin chain models we study, using only the symmetries of the system, one can determine the exact large $N$
anomalous dimensions and the two magnon scattering matrix.
Using integrability one can go further and determine the complete scattering matrix of spin chain
magnons\cite{Beisert:2005tm,Beisert:2006qh}.
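To fix ideas, recall the form this exact, symmetry-determined data takes (quoted from the cited literature; conventions for the coupling vary slightly between references): with 't Hooft coupling $\lambda$, a single magnon of momentum $p$ has energy
\begin{equation*}
E(p) = \sqrt{1 + \frac{\lambda}{\pi^2} \sin^2\frac{p}{2}}\,,
\end{equation*}
a dispersion relation fixed by the centrally extended symmetry algebra alone \cite{Beisert:2005tm}.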
The second result which we will use is the set of powerful methods exploiting group representation theory, which allow
one to study correlators of operators whose classical dimension is of order $N$.
In this case, the large $N$ limit is not captured by summing the planar diagrams.
Our results allow a rather complete understanding of the anomalous dimensions of gauge theory operators that are dual to
giant graviton branes with open strings suspended between them.
These results generalize the analysis of \cite{Hofman:2007xp} to systems that include
non-maximal giant gravitons and dual giant gravitons.
The boundary magnons of an open string attached to a maximal giant graviton are fixed in place: they cannot hop between
sites of the open string.
In the case of non-maximal giant gravitons and dual giant gravitons there are non-trivial interactions between the open string
and the brane, allowing the boundary magnons to move away from the string endpoints.
The operators we focus on are built mainly out of one complex $U(N)$ adjoint scalar $Z$, and a much smaller number $M$
of impurities given by a second complex scalar field $Y$, which are the ``magnons'' that hop on the lattice of the $Z$s.
The action of the dilatation operator on these operators matches the Hamiltonian of a spin chain model comprising a set of defects
that scatter from each other.
The spin chain models enjoy an $SU(2|2)^2$ symmetry.
The symmetries of the system determine the energies of the impurities, as well as the two
impurity scattering matrix\cite{Beisert:2005tm,Beisert:2006qh}.
The $SU(2|2)$ algebra includes two sets of bosonic generators ($R^a{}_b$ and $L^\alpha{}_\beta$) that each generate
an $SU(2)$ group.
The action of the generators is summarized in the relations
\bea
[R^a{}_b,T^c]=\delta^c_b T^a -{1\over 2}\delta^a_b T^c\, ,
\qquad
[L^\alpha {}_\beta,T^\gamma]=\delta^\gamma_\beta T^\alpha -{1\over 2}\delta^\alpha_\beta T^\gamma
\eea
where $T$ is any tensor transforming as advertised by its index.
The algebra also includes two sets of super charges $Q^\alpha_a$ and $S^b_\beta$.
These close the algebra
\bea
\{ Q^\alpha_a,S^b_\beta\} = \delta^b_a L^\alpha_\beta +\delta^\alpha_\beta R^b_a +\delta^b_a\delta^\alpha_\beta C\,,
\eea
where $C$ is a central charge, and
\bea
\{ Q^\alpha_a,Q_b^\beta\} = 0\,,
\qquad
\{ S_\alpha^a,S^b_\beta\} = 0.
\eea
We will realize this algebra on states that include magnons.
When the magnons are well separated, each magnon transforms in a definite representation of $su(2|2)$ and the full state
transforms in the tensor product of these individual representations.
Acting on the $i$th magnon we can have a centrally extended representation\cite{Beisert:2005tm,Beisert:2006qh}
\bea
\{ Q^\alpha_a,S^b_\beta\} = \delta^b_a L^\alpha_\beta +\delta^\alpha_\beta R^b_a +\delta^b_a\delta^\alpha_\beta C_i\,,
\eea
\bea
\{ Q^\alpha_a,Q_b^\beta\} =\epsilon^{\alpha\beta}\epsilon_{ab}{k_i\over 2}\,,
\qquad
\{ S_\alpha^a,S^b_\beta\} =\epsilon_{\alpha\beta}\epsilon^{ab}{k_i^*\over 2}\label{centcharge}\,.
\eea
The total multimagnon state must be in a representation for which the total central charges $\sum_i k_i$ and $\sum_i k_i^*$ vanish.
Thus the multimagnon state transforms under the representation with
\bea
C=\sum_i C_i\,,\qquad \sum_i k_i=0=\sum_i k_i^*\,.
\eea
A key ingredient in making use of the $su(2|2)$ symmetry is the determination of the central charges $k_i$, $k_i^*$ and hence
of the representations of the individual magnons.
There is a natural geometric description of the system, first obtained by an inspired argument in\cite{Berenstein:2005jq} and
later put on a firm footing in \cite{Hofman:2006xt}, which gives an elegant and simple description of these central charges.
The spin chain model that is relevant for planar anomalous dimensions is dual to the two-dimensional worldsheet theory of the string
moving in the dual AdS$_5\times$S$^5$ geometry.
This string is a small deformation of a ${1\over 2}$ BPS state.
A convenient description of the ${1\over 2}$-BPS sector (first anticipated in \cite{Berenstein:2004kk})
is in terms of the LLM coordinates introduced in \cite{Lin:2004nb}, which are
specifically constructed to describe ${1\over 2}$-BPS states built mainly out of $Z$s.
In the LLM coordinates, there is a preferred LLM plane on which states that are built mainly from $Z$s orbit with a radius
$r=1$ (in convenient units).
Consider a closed string state dual to a single trace gauge theory operator built mainly from $Z$s, but also containing a few
magnons $M$.
The closed string solution looks like a polygon with vertices on the unit circle.
The sides of the polygon are the magnons.
The specific advantage of these coordinates is that they make the analysis of the symmetries particularly simple and
allow a perfect match to the $SU(2|2)^2$ superalgebra of the gauge theory described above.
Matching the gauge theory and gravity descriptions in this way implies a transparent geometrical understanding of the $k_i$
and $k_i^*$, as we now explain.
The commutator of two supersymmetries in the dual gravity theory contains NS-$B_2$ gauge field transformations.
As a consequence of this gauge transformation, strings stretched in the LLM plane acquire a phase which is the origin of the
central charges $k_i$ and $k_i^*$.
It follows that we can immediately read off the central charges for any particular magnon from the sketch of the closed
string worldsheet on the LLM plane: the straight line segment corresponds to a complex number which is the central
charge\cite{Hofman:2006xt}.
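Concretely, if the $i$th magnon is sketched as the segment running from $z_i$ to $z_{i+1}$ on the LLM plane (our labelling; for a closed string the endpoints lie on the unit circle), then up to a convention-dependent overall constant
\begin{equation*}
k_i \,\propto\, z_{i+1} - z_i\,,
\end{equation*}
and the condition $\sum_i k_i = 0$ is simply the statement that the polygon closes.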
The gauge theory operators that correspond to closed strings have a bare dimension that grows, at most, as $\sqrt{N}$.
We are interested in operators whose bare dimension grows as $N$ when the large $N$ limit is taken.
These operators include systems of giant graviton branes.
The key difference as far as the sketch of the state on the LLM plane is concerned, is that the giant gravitons can
orbit on circles of radius $r<1$ while dual giant gravitons orbit on circles of radius $r>1$.
The magnons populating open strings which are attached to the giant gravitons can be divided into boundary magnons
(which sit closest to the ends of the open string) and bulk magnons.
The boundary magnons will stretch from a giant graviton located at $r\ne 1$ to the unit circle, while bulk magnons stretch
between points on the unit circle.
We will also consider the case below that the entire open string is given by a single magnon, in which case it will stretch
between two points with $r\ne 1$.
The computation of correlators of the corresponding operators in the field theory is highly non-trivial.
Indeed, as a consequence of the fact that we now have order $N$ fields in our operators, the number of ribbon graphs
that can be drawn is huge.
These enormous combinatoric factors easily overpower the usual ${1\over N^2}$ suppression of non-planar diagrams so
that both planar and non-planar diagrams must be summed to capture even the leading large $N$
limit of the correlator\cite{Balasubramanian:2001nh}.
This problem can be overcome by employing group representation theory techniques.
The article \cite{Corley:2001zk} showed that it is possible to compute the correlation functions of operators built from
any number of $Z$s exactly, by using the Schur polynomials as a basis for the local operators of the theory.
In \cite{Corley:2002mj} these results were elegantly explained by pointing out that the organization of operators
in terms of Schur polynomials is an organization in terms of projection operators.
Completeness and orthogonality of the basis follows from the completeness and orthogonality of the underlying projectors.
With these insights\cite{Corley:2001zk,Corley:2002mj}, many new directions opened up.
A basis for the local operators which organizes the theory using the quantum numbers of the global symmetries was given in
\cite{Brown:2007xh,Brown:2008ij}.
Another basis, employing projectors related to the Brauer algebra was put forward in \cite{Kimura:2007wy} and developed in a
number of interesting
works\cite{Kimura:2008ac,Kimura:2009jf,Kimura:2009ur,Kimura:2010tx,Kimura:2011df,Kimura:2012hp,Kimura:2013fqa}.
For the systems we are interested in, the most convenient basis to use is provided by the restricted Schur polynomials.
Inspired by the Gauss Law which will arise in the world volume description of the giant graviton branes, the authors of
\cite{Balasubramanian:2004nb} suggested operators in the gauge theory that are dual to excited giant graviton brane states.
This inspired idea was pursued both in the case that the open strings are described by an open string
word\cite{de Mello Koch:2007uu,de Mello Koch:2007uv,Bekker:2007ea} and in the case of minimal open strings, with each
open string represented by a single magnon\cite{Bhattacharyya:2008rb,Bhattacharyya:2008xy}.
The operators introduced in \cite{de Mello Koch:2007uu,Bhattacharyya:2008rb} are the restricted Schur polynomials.
Further, significant progress was made in understanding the spectrum of anomalous dimensions of these operators in the
studies\cite{de Mello Koch:2007uv,Bekker:2007ea,Koch:2010gp,DeComarmond:2010ie,Carlson:2011hy,Koch:2011hb,deMelloKoch:2012ck,deMelloKoch:2011ci}.
Extensions which consider orthogonal and symplectic gauge groups and other new ideas, have also been
achieved\cite{Caputa:2013hr,Caputa:2013vla,Diaz:2013gja,Kemp:2014apa,Kemp:2014eca,Diaz:2014ixa}.
In this paper we will connect the string theory description and the gauge theory description of the operators corresponding
to systems of excited giant graviton branes.
Our study gives a concrete description of the central charges $k_i$ and some of the consequences of the $su(2|2)$ symmetry.
We will see that the restricted Schur polynomials provide a natural description of the quantum brane states.
For the open strings we find a description in terms of open spin chains with boundaries and we explain precisely what the
boundary interactions are.
The double coset ansatz of the gauge theory, which solves the problem of minimal open strings consisting entirely
of a single magnon, also has an immediate and natural interpretation in the same framework.
There are closely related results which employ a different approach to the questions considered in this article.
A collective coordinate approach to study giant gravitons with their excitations has been pursued in
\cite{Berenstein:2013md,Berenstein:2013eya,Berenstein:2014pma,Berenstein:2014isa,Berenstein:2014zxa}.
This technique employs a complex collective coordinate for the giant graviton state,
which has a geometric interpretation in terms of the fermion droplet (LLM) description of half
BPS states\cite{Berenstein:2004kk,Lin:2004nb}.
The motivation for this collective coordinate starts from the observation that within semiclassical gravity, we think of the
D-branes as being localized in the dual spacetime geometry.
It might seem, however, that since the operators we write down in the field theory have a precise ${\cal R}$-charge
and a fixed energy, they are dual to a delocalized state.
Indeed, since gauge/gravity duality is a quantum equivalence it is subject to the uncertainty principle of quantum mechanics.
The ${\cal R}$-charge of an operator is the angular momentum of the dual states in the gravity theory, so that by the
uncertainty principle, the dual giant graviton-branes must be fully delocalized in the conjugate angle in the geometry.
The collective coordinate parametrizes coherent states, which do not have a definite ${\cal R}$-charge and so may permit
a geometric interpretation of the position of the D-brane as the value of the collective coordinate.
With the correct choice for the coherent states, mixing between different states of a definite ${\cal R}$-charge
would be taken into account and so when diagonalizing the dilatation operator (for example) the mixing between
states with different choices of the values of the collective coordinate might be suppressed.
This computation would be, potentially, much simpler than a direct computation utilizing operators with a definite
${\cal R}$-charge.
Of course, by diagonalizing the dilatation operator for operators dual to giant graviton brane plus open string states,
one would expect to recover the collective coordinates, but this may only be possible after a complicated mixing problem
in degenerate perturbation theory is solved.
Some of the details that have emerged from our study do not support this semiclassical reasoning.
Specifically, we find that the brane states are given by restricted Schur polynomials and these do not receive any
corrections when the perturbation theory problem is solved, so that there does not seem to be any need to
solve a mixing problem which constructs localized states from delocalized ones.
Our large $N$ eigenstates do have a definite ${\cal R}$-charge.
The nontrivial perturbation theory problem involves mixing between operators corresponding to the same giant
graviton branes, but with different open string words attached.
Thus, it is an open string state mixing problem, solved with a discrete Fourier transform, as it was for the closed string.
However, there is general agreement between the approaches:
the Fourier transform solves a collective coordinate problem which diagonalizes momentum, rather than position.
For an interesting recent study of anomalous dimensions, at finite $N$, using a very different approach,
see \cite{Kimura:2015bna}.
This article is organized as follows: In section 2 we recall the relevant facts about the restricted Schur polynomials.
The action of the dilatation operator on these restricted Schur polynomials is studied in section 3 and the eigenstates
of the dilatation operator are constructed in section 4.
Section 5 provides the dual string theory interpretation of these
eigenstates and perfect agreement between the energies of the string theory states and the corresponding eigenvalues
of the dilatation operator is demonstrated.
In sections 6 and 7 we consider the problem of magnon scattering, both in the bulk and off the boundary magnons.
We have checked that the magnon scattering matrix we compute is consistent with scattering results obtained in
the weak coupling limit of the theory.
One important conclusion is that the spin chain is not integrable.
In section 8 we review the double coset ansatz and describe the dual string theory interpretation of these results.
Our conclusions and some discussion are given in section 9. The Appendices collect some technical details.
\section{Giants with open strings attached}
In this section we will review the gauge theory description of the operators dual to giant graviton branes with open string
excitations.
In this description, each open string is described by a word with order $\sqrt{N}$ letters.
Most of the letters are the $Z$ field.
There are however $M\sim O(1)$ impurities which are the magnons of the spin chain.
For simplicity we will usually take all of the impurities to be a second complex matrix $Y$.
This idea was first applied in \cite{Balasubramanian:2002sa} to reproduce the spectrum of small fluctuations of giant
gravitons \cite{Das:2000st}.
The description was then further developed in
\cite{Aharony:2002nd,Berenstein:2003ah,Berenstein:2006qk,Berenstein:2005vf,Berenstein:2005fa}.
The articles \cite{Berenstein:2006qk,Berenstein:2005vf,Berenstein:2005fa} in particular developed this description to the
point where interesting dynamical questions\footnote{For example, one could consider the force exerted by the string on
the giant.} could be asked and answered.
The open string words are then inserted into a sea of $Z$s which make up the giant graviton brane(s).
Concretely, the operators we consider are
\bea
&&O(R,R_1^{k},R_2^{k};\{n_i\}_1,\{n_i\}_2,\cdots,\{n_i\}_k)\cr
&&={1\over n!}\sum_{\sigma\in S_{n+k}}\chi_{R,R_1^{k},R_2^{k}} (\sigma)
Z^{i_1}_{i_{\sigma(1)}}\cdots Z^{i_n}_{i_{\sigma(n)}} (W_k)^{i_{n+1}}_{i_{\sigma(n+1)}}
\cdots (W_2)^{i_{n+k-1}}_{i_{\sigma(n+k-1)}}(W_1)^{i_{n+k}}_{i_{\sigma(n+k)}}
\label{Ops}
\eea
where the open string words are
\bea
(W_I)^i_j =(YZ^{n_1}YZ^{n_2-n_1}Y\cdots YZ^{n_{M_I}-n_{M_I-1}}Y)^i_j\, .
\eea
We have used the notation $\{n_i\}_I$ in (\ref{Ops}) to describe the integers $\{n_1,n_2,\cdots,n_{M_I}\}$ which appear in the
$I$th open string word.
This is a lattice notation, which lists the number of $Z$s appearing to the left of each of the $Y$s, starting from the second $Y$:
the $Z$s form a lattice and the $n_i$ give a position in this lattice.
This notation is particularly convenient when we discuss the action of the dilatation operator.
We will also find an occupation notation useful.
The occupation notation lists the number of $Z$s between consecutive $Y$s, and is indicated by placing the $n_i$ in brackets.
Thus, for example $O(R,R_1^1,R_2^1,\{n_1,n_2,n_3\})=O(R,R_1^1,R_2^1,\{(n_1),(n_2-n_1),(n_3-n_2)\})$.
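The translation between the two notations is a matter of taking differences and partial sums. As a purely illustrative sketch (the function names are ours, not from the text), assuming the lattice notation lists the number of $Z$s to the left of each $Y$ starting from the second $Y$:

```python
def lattice_to_occupation(n):
    """Lattice notation {n_1,...,n_M} -> occupation notation:
    the number of Z's between consecutive Y's is the difference of
    successive lattice positions (the first entry is n_1 itself)."""
    return [n[0]] + [n[i] - n[i - 1] for i in range(1, len(n))]

def occupation_to_lattice(occ):
    """Inverse map: lattice positions are partial sums of occupations."""
    out, total = [], 0
    for c in occ:
        total += c
        out.append(total)
    return out
```

For the example in the text, `lattice_to_occupation([n1, n2, n3])` returns the occupations `[n1, n2 - n1, n3 - n2]`.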
$R$ is a Young diagram with $n+k$ boxes.
A bound state of $p_s$ giant gravitons and $p_a$ dual giant gravitons is described by a Young diagram $R$ with
$p_a$ rows, each containing order $N$ boxes and $p_s$ columns, each containing order $N$ boxes.
$\chi_{R,R_1^{k},R_2^{k}}(\sigma)$ is a restricted character \cite{de Mello Koch:2007uu} given by
\bea
\chi_{R,R_1^{k},R_2^{k}}(\sigma)={\rm Tr}_{R_1^{k},R_2^{k}}\left( \Gamma_R(\sigma )\right)
\eea
$R^{k}$ is a Young diagram with $n$ boxes, that is, it is a representation of $S_n$.
The irreducible representation $R$ of $S_{n+k}$ is reducible if we restrict to the $S_n$ subgroup.
$R^k$ is one of the representations that arise upon restricting.
In general, any such representation will be subduced more than once.
Above we have used the subscripts 1 and 2 to indicate this.
We have in mind a Gelfand-Tsetlin like labeling to provide a systematic way to describe the possible $R^k$ we might consider.
In this labeling, we use the transformation of the representation under the chain of subgroups
$S_{n+k}\supset S_{n+k-1}\supset S_{n+k-2}\supset\cdots\supset S_n$.
This is achieved by labeling boxes in $R$.
Dropping the boxes with labels $\le i$, we obtain the representation of $S_{n+k-i}$ to which $R^k$ belongs.
We have to spell out how this chain of subgroups is embedded in $S_{n+k}$.
Think of $S_{q}$ as the group which permutes objects labeled $1,2,3,\cdots,q$.
Here we have $q=n+k$ and the objects we have in mind are the $Z$ fields or the open string words.
We associate an integer to an object by looking at the upper indices in (\ref{Ops}); as an example, the open
string described by $W_2$ is object number $n+k-1$.
To go from $S_{n+k-i}$ to $S_{n+k-i-1}$, we keep only the permutations that fix $n+k-i$.
We can put the states in $R_1^k$ and $R_2^k$ into a 1-to-1 correspondence.
The trace ${\rm Tr}_{R_1^{k},R_2^{k}}$ sums the column index over $R_1^k$ and the row index over $R_2^k$.
If we associate the row and column indices with the endpoints of the open string, we can associate the endpoints
of the open string $I$ with the box labeled $I$ in $R_1^k$ and $R_2^k$.
The numbers appearing in the boxes of $R_1^k$ literally tell us where the $k$ open strings start and the numbers in
$R_2^k$ where the $k$ open strings end.
See Figure \ref{fig:labeling} for an example of this labeling.
Each $Y$ in an open string word is a magnon.
We will take the number of magnons $M_I=O(1)$ $\forall I$.
The $Z^{i_j}_{i_{\sigma(j)}}$ with $1\le j\le n$ belong to the system of giants and the $Z$'s appearing
in $W_I$ belong to the $I$th open string.
It is clear that $n\sim O(N)$.
\begin{figure}
\begin{center}
\includegraphics[height=5cm,width=9cm]{labeling}
\caption{A cartoon illustrating the $R,R_1^{k},R_2^{k}$ labeling for an example with $k=4$ open strings
and 3 giant gravitons. The shape of the strings stretching between the giants is not realistic - only
the locations of the end points of the open strings are accurate.
The giant gravitons are orbiting on the circles shown; the radius shown for each orbit is accurate.
They wrap an $S^3$ which is transverse to the plane on which they orbit. The smaller the
radius of the giant's orbit, the larger the $S^3$ it wraps. The size of the $S^3$ that the giant wraps is given by
its momentum, which is equal to the number of boxes in the column which corresponds to the giant.
The numbers appearing in the boxes of $R_1^4$ tell us
where the open strings start and the numbers appearing in the boxes of $R_2^4$ where they end.}
\label{fig:labeling}
\end{center}
\end{figure}
Each giant graviton is associated with a long column and each dual giant graviton with a long row in the Young
diagrams labeling the restricted Schur polynomial.
Our notation for the Young diagrams is to list row lengths.
Thus a Young diagram that has two columns, one of length $n_1$ and the second of length $n_2$ with
$n_2<n_1$ is denoted $(2^{n_2},1^{n_1-n_2})$, while a Young diagram with two rows, one of length $n_1$ and one of
length $n_2$ ($n_1>n_2$) is denoted $(n_1,n_2)$.
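The row-length notation can be generated mechanically from a description of the diagram by its column lengths; a minimal illustrative helper (ours, not from the text):

```python
def rows_from_columns(col_lengths):
    """Given the column lengths of a Young diagram, return the list of
    row lengths: row i (counting from 0) contains one box for every
    column of length greater than i."""
    col = sorted(col_lengths, reverse=True)
    n_rows = col[0] if col else 0
    return [sum(1 for c in col if c > i) for i in range(n_rows)]
```

For two columns of lengths $n_1=5$ and $n_2=3$ this yields the row lengths $(2,2,2,1,1)$, i.e. $(2^{n_2},1^{n_1-n_2})$ as in the text.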
We want to use the results of \cite{de Mello Koch:2007uu,de Mello Koch:2007uv,Bekker:2007ea} to study correlation
functions of these operators.
The correlators are obtained by summing all contractions between the $Z$s belonging to the giants, and by grouping the
open string words in pairs and summing only the planar diagrams between the fields in each pair of the open string words.
To justify the planar approximation for the open string words we take $n_i\ge 0$ and $\sum_i n_i\le O(\sqrt{N})$ for each word.
For a nice careful discussion of related issues, see \cite{Garner:2014kna}.
We can put these operators into correspondence with normalized states
\bea
O(R,R_1^{k},R_2^{k};\{n_i\}_1,\{n_i\}_2,\cdots,\{n_i\}_k)\leftrightarrow
|R,R_1^{k},R_2^{k};\{n_i\}_1,\{n_i\}_2,\cdots,\{n_i\}_k\rangle
\label{positionstates}
\eea
by using the usual state-operator correspondence available for any conformal field theory.
In what follows we will mainly use the state language.
\section{Action of the Dilatation Operator}
The one loop dilatation operator, in the $SU(2)$ sector, is \cite{Minahan:2002ve}
\bea
D=-{g_{YM}^2\over 8\pi^2}{\rm Tr}\left(\left[ Y,Z\right]\left[{d\over dY},{d\over dZ}\right]\right)
\eea
Our goal in this section is to review the action of this dilatation operator on the restricted Schur polynomials, which was constructed
in general in \cite{de Mello Koch:2007uv,Bekker:2007ea}.
When we act with $D$ on $O(R,R_1^{k},R_2^{k};\{n_i\}_1,\{n_i\}_2,\cdots,\{n_i\}_k)$ the derivative with respect to $Y$ will
act on a $Y$ belonging to a specific open string word.
Thus, in the large $N$ limit we can decompose the action of $D$ into a sum of terms, with each individual term
being the action on a specific open string.
If we act on a magnon belonging to the bulk of the open string word, then the only contribution comes from acting with the
derivative with respect to $Z$ on a field that is immediately adjacent to the magnon.
We act only on the adjacent $Z$ fields because to capture the large $N$ limit we should use the planar approximation
for the open string word contractions.
To illustrate the action on a bulk magnon, consider the operator corresponding to a single giant graviton with a single
open string attached.
The giant has momentum $n$ so that $R$ is a single column with $n+1$ boxes: $R=1^{n+1}$.
Further, $R_1^1=R_2^1=1^n$.
The open string has three magnons and hence we can describe the corresponding state as
$|1^{n+1},1^n,1^n;\{n_1,n_2\}\rangle$.
The action on the bulk magnon at large $N$ is
\bea
D_{\rm bulk\,\,magnon}|1^{n+1},1^n,1^n;\{(n_1),(n_2)\}\rangle
={g_{YM}^2 N\over 8\pi^2}\Bigg[2|1^{n+1},1^n,1^n;\{(n_1),(n_2)\}\rangle\cr
-|1^{n+1},1^n,1^n;\{(n_1-1),(n_2+1)\}\rangle
-|1^{n+1},1^n,1^n;\{(n_1+1),(n_2-1)\}\rangle\Bigg]
\eea
If we act on a magnon which occupies either the first or last position of the open string word, we realize one of the four
possibilities listed below.
\begin{itemize}
\item[1.] The derivative with respect to $Z$ acts on the $Z$ adjacent to the $Y$, belonging to the open string and the
coefficient of the product of derivatives with respect to $Y$ and $Z$ replaces these fields in the same order.
None of the labels of the state change.
This term has a coefficient of 1 \cite{de Mello Koch:2007uv,Bekker:2007ea}.
\item[2.] The derivative with respect to $Z$ acts on the $Z$ adjacent to the $Y$, belonging to the open string word and the
coefficient of the product of derivatives with respect to $Y$ and $Z$ replaces these fields in the opposite order.
In this case, a $Z$ has moved out of the open string word and into its own slot in the restricted Schur polynomial - a hop
off interaction in the terminology of \cite{de Mello Koch:2007uv}.
In the process the Young diagram labeling the excited giant graviton grows by a single box.
If the string is attached to a giant graviton, the column the endpoint of the relevant open string belongs to inherits the extra box.
If the string is attached to a dual giant graviton, the row the endpoint of the relevant open string belongs to inherits the extra box.
The coefficient of this term is given by minus one times the square root of the factor associated with the open string box
divided by $N$ \cite{de Mello Koch:2007uv,Bekker:2007ea}.
We remind the reader that a box in row $i$ and column $j$ is assigned the factor $N-i+j$.
\item[3.] The derivative with respect to $Z$ acts on a $Z$ belonging to the giant and the
coefficient of the product of derivatives with respect to $Y$ and $Z$ replaces these fields in the opposite order.
In this case, a $Z$ has moved from its own slot in the restricted Schur polynomial and onto the open string word - a hop
on interaction in the terminology of \cite{de Mello Koch:2007uv}.
In the process the Young diagram labeling the giant graviton shrinks by a single box.
The details of which column/row shrinks is exactly parallel to the discussion in point 2 above.
The coefficient of this term is given by minus one times the square root of the factor associated with the open string box
divided by $N$ \cite{de Mello Koch:2007uv,Bekker:2007ea}.
\item[4.] The derivative with respect to $Z$ acts on a $Z$ belonging to the giant and the
coefficient of the product of derivatives with respect to $Y$ and $Z$ replaces these fields in the same order.
This is a kissing interaction in the terminology of \cite{de Mello Koch:2007uv}.
None of the labels of the state change.
The coefficient of this term is given by the factor associated with the open string box divided by
$N$ \cite{de Mello Koch:2007uv,Bekker:2007ea}.
\end{itemize}
For the example we are considering, the dilatation operator has the following large $N$ action on the magnons closest
to the string endpoints
\bea
D_{\rm first\,\,magnon}|1^{n+1},1^n,1^n;\{(n_1),(n_2)\}\rangle
={g_{YM}^2 N\over 8\pi^2}\Big[\left(1+1-{n\over N}\right)|1^{n+1},1^n,1^n;\{(n_1),(n_2)\}\rangle\cr
-\sqrt{1-{n\over N}}\left(|1^{n+2},1^{n+1},1^{n+1};\{(n_1-1),(n_2)\}\rangle
+|1^{n},1^{n-1},1^{n-1};\{(n_1+1),(n_2)\}\rangle\right)\Big]\cr
\eea
\eject
\noindent
and
\bea
D_{\rm last\,\,magnon}|1^{n+1},1^n,1^n;\{(n_1),(n_2)\}\rangle
={g_{YM}^2 N\over 8\pi^2}\Big[\left(1+1-{n\over N}\right)|1^{n+1},1^n,1^n;\{(n_1),(n_2)\}\rangle\cr
-\sqrt{1-{n\over N}}\left(|1^{n+2},1^{n+1},1^{n+1};\{(n_1),(n_2-1)\}\rangle
+|1^{n},1^{n-1},1^{n-1};\{(n_1),(n_2+1)\}\rangle\right)\Big]\cr
\eea
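The bulk and boundary interactions quoted above can be assembled into a short routine. The sketch below is ours, not from the paper: the state encoding `(n, n1, n2)` for $|1^{n+1},1^n,1^n;\{(n_1),(n_2)\}\rangle$, the function name, and the non-negativity guards on occupation numbers are our assumptions. Coefficients are in units of $g_{YM}^2 N/8\pi^2$:

```python
from collections import defaultdict

def dilatation_action(n, n1, n2, N):
    """One-loop action of D (in units of g_YM^2 N / 8 pi^2) on the
    three-magnon state |1^{n+1},1^n,1^n;{(n1),(n2)}>, following the
    bulk- and boundary-magnon formulas quoted in the text.
    Returns a dict mapping kets (n, n1, n2) to coefficients."""
    out = defaultdict(float)
    r = (1.0 - n / N) ** 0.5  # square root of the boundary box factor / N
    # bulk magnon: diagonal term plus two hopping terms
    out[(n, n1, n2)] += 2.0
    if n1 > 0:
        out[(n, n1 - 1, n2 + 1)] -= 1.0
    if n2 > 0:
        out[(n, n1 + 1, n2 - 1)] -= 1.0
    # first boundary magnon: kissing + hop-off (giant grows) + hop-on (giant shrinks)
    out[(n, n1, n2)] += 2.0 - n / N
    if n1 > 0:
        out[(n + 1, n1 - 1, n2)] -= r
    out[(n - 1, n1 + 1, n2)] -= r
    # last boundary magnon, acting on the occupation n2
    out[(n, n1, n2)] += 2.0 - n / N
    if n2 > 0:
        out[(n + 1, n1, n2 - 1)] -= r
    out[(n - 1, n1, n2 + 1)] -= r
    return dict(out)
```

For a maximal giant ($n=N$) the factor $r$ vanishes and the boundary magnons cannot hop, reproducing the Dirichlet boundary condition discussed below.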
There are a few points worth noting:

The complete action of the dilatation operator can be read from the Young diagram labels of the operator.
The factors of the boxes in the Young diagram for the endpoints of a given open string determine the action of
the dilatation operator on that open string.

When the labels $R_1^k\ne R_2^k$, the string end points are on different giant gravitons and the two endpoints are
associated with different boxes in the Young diagram so that the action of the dilatation operator on the two boundary
magnons is distinct.
To determine these endpoint interactions we must go beyond the planar approximation.

Notice that for a maximal giant graviton we have $n=N$.
In this case, most of the boundary magnon terms in the Hamiltonian vanish and the boundary magnons are locked in
place at the string endpoints.
The giant graviton brane is simply supplying a Dirichlet boundary condition for the open string.

For non-maximal giants, all of the boundary magnon terms are non-zero and, for example, $Z$ fields that belong
to the open string can wander into slots describing the giant.
Alternatively, since the split between open string and brane is probably not very sharp, we might think that the magnons
can wander from the string endpoints into the bulk of the open string.
The coefficient of these hopping terms is modified by the presence of the giant graviton, so that the boundary
magnons do not behave in the same way as the bulk magnons do.
As a final example, consider a dual giant graviton which carries momentum $n$.
In this case, $R$ is a single row of $n+1$ boxes and we have
\bea
D_{\rm first\,\,magnon}|n+1,n,n;\{(n_1),(n_2)\}\rangle
={g_{YM}^2 N\over 8\pi^2}\Big[\left(1+1+{n\over N}\right)|n+1,n,n;\{(n_1),(n_2)\}\rangle\cr
-\sqrt{1+{n\over N}}\left(|n+2,n+1,n+1;\{(n_1-1),(n_2)\}\rangle
+|n,n-1,n-1;\{(n_1+1),(n_2)\}\rangle\right)\Big]\cr
\eea
In Appendix \ref{TwoLoop} we discuss the action of the dilatation operator at two loops.
\section{Large $N$ Diagonalization: Asymptotic States}
We are now ready to construct eigenstates of the dilatation operator.
We will not construct exact large $N$ eigenstates.
Rather, we focus on states for which all magnons are well separated.
From these states we can still obtain the anomalous dimensions.
In section \ref{toexact} we will describe how one might use these asymptotic states to construct exact eigenstates,
following \cite{Beisert:2005tm,Beisert:2006qh}.
In the absence of integrability however, this can not be carried to completion and our states are best thought of as
very good approximate eigenstates.
The $Z$s in the open string word define a lattice on which the $Y$s hop.
Our construction entails taking a Fourier transform on this lattice.
The boundary interactions allow $Z$s to move onto and out of the lattice, so the lattice size is not fixed.
It is not clear what the Fourier transform is, if the size of the lattice varies.
The goal of this section is to deal with these complications.
With each application of the one-loop dilatation operator, a single $Z$ can enter or leave the open string word.
At $\gamma$ loops at most $\gamma$ $Z$s can enter or leave.
At any finite loop order $\gamma$ the change in length $\Delta L\le\gamma$ of the lattice is finite
while the total length $L$ of the lattice is of order $\sqrt{N}$.
Thus, at large $N$ the ratio ${\Delta L\over L}\to 0$ and we can treat the lattice length as fixed.
This observation is most easily used by first introducing ``simple states'' that have a definite number of $Z$s in the
lattice associated with each open string.
This is accomplished by relaxing the identification of the open string word with the lattice.
The dilatation operator's action now allows magnons to move off the open string, mixing simple states with states that are not simple.
However, by modifying these simple states we can build states that are closed under the action of the dilatation operator.
Our simple states are defined by taking a ``Fourier transform'' of the states (\ref{positionstates}).
The simplest system to consider is that of a single giant, with a single string attached, excited by only
two magnons (i.e. only boundary magnons - no bulk magnons).
The string word is composed using $J$ $Z$ fields and the complete operator using $J+n$ $Z$s.
Introduce the phases
\bea
q_a=e^{i2\pi k_a \over J}\label{choiceofphases}
\eea
with $k_a=0,1,...,J-1$.
As a consequence of the fact that the lattice is a discrete structure, momenta are quantized with the momentum spacing
set by the inverse of the total lattice size.
This explains the choice of phases in (\ref{choiceofphases}).
The simple states we consider are thus given by
\bea
|q_1,q_2\rangle &=&\sum_{m_1=0}^{J-1}\sum_{m_2=0}^{m_1}q_1^{m_1}q_2^{m_2}
|1^{n+m_1-m_2+1},1^{n+m_1-m_2},1^{n+m_1-m_2};\{ J-m_1+m_2\}\rangle\cr
&+&\sum_{m_2=0}^{J-1}\sum_{m_1=0}^{m_2}q_1^{m_1}q_2^{m_2}
|1^{n+J+m_1-m_2+1},1^{n+J+m_1-m_2},1^{n+J+m_1-m_2};\{ m_2-m_1\}\rangle
\label{simplestates}
\eea
This Fourier transform is a transform on the lattice describing the open string worldsheet.
The two magnons sit at positions $m_1$ and $m_2$ on this lattice.
If $m_2>m_1$, there are $m_2-m_1$ $Z$s between the magnons.
If $m_1>m_2$, there are $J+m_2-m_1$ $Z$s between the magnons.
The $Z$s before the first magnon of the string and after the last magnon of the string, are mixed up with the $Z$s
of the giant - they do not sit on the open string word.
All of the terms in (\ref{simplestates}) are states with different positions for the two magnons,
but each is a giant that contains precisely $n$ $Z$s with an open string attached, and the open string contains
precisely $J$ $Z$s.
We can't distinguish where the string begins and where the giant ends: the open string and giant morph smoothly into each other.
This is in contrast to the case of a maximal giant graviton, where the magnons mark the endpoints of the open string\footnote{For
the maximal giant graviton, the boundary magnons are not able to hop and so sit forever at the end of the open string.
For a non-maximal giant graviton the boundary magnons can hop.
Even if they are initially placed at the string endpoint, they will soon explore the bulk of the string.}.
If this interpretation is consistent, we must recover the expected inner product on the lattice, and we do:
Consider a giant with momentum $n$.
An open string with a lattice of $J$ sites is attached to the giant.
The string is excited by $M$ magnons, at positions $n_1,\cdots,n_{M-1}$ and $n_M$, with $n_{j+1} > n_j$.
The corresponding normalized states, denoted by $|n;J;n_1,n_2,\cdots,n_M\rangle$ will obey\footnote{As a consequence
of the fact that it is not possible to distinguish where the open string begins and where the giant ends, there is no delta
function setting the positions of the first magnons to be equal to each other - we have put this constraint in by hand
in (\ref{frstdlta}).}
\bea
\langle n;J;n_1,m_2,\cdots,m_M|n;J;n_1,n_2,\cdots,n_M\rangle =
\delta_{m_2 n_2}\cdots \delta_{m_M n_M} \quad n_{j+1}>n_j,\; m_{j+1}>m_j\, .\label{frstdlta}
\eea
This is the statement that, up to the ambiguity of where the open string starts, the magnons
must occupy the same sites for a non-zero overlap.
It is clear that ($G(x)\equiv 1^{x+1},1^x,1^x$ and again, $ n_{j+1}>n_j, m_{j+1}>m_j$)
\bea
&&\langle G(n+J+m_1-m_2);\{ m_2,\cdots,m_M\}|G(n+J+n_1-n_2);\{ n_2,\cdots,n_M\}\rangle
=\delta_{m_2 n_2}\cdots \delta_{m_M n_M} \nonumber
\eea
reproducing the lattice inner product.
The simple states are an orthogonal set of states.
To check this, compute the coefficient $c_a$ of the state $|1^{n+a+1},1^{n+a},1^{n+a};\{J-a\}\rangle$.
Looking at the two terms in (\ref{simplestates}) we find the following two contributions
\bea
c_a&=&\sum_{m_1=a}^{J-1}q_1^{m_1}q_2^{m_1-a}+\sum_{m_1=0}^{a-1}q_1^{m_1}q_2^{m_1-a}\cr
\cr
&=&\left\{
\begin{matrix}
Jq_2^{-a} &{\rm if}\quad k_1+k_2=0\cr
0 &{\rm if}\quad k_1+k_2\ne 0\cr
\end{matrix}\right.
\eea
Thus, $q_1=q_2^{-1}$ to get a non-zero result.
We will see that this zero lattice momentum constraint maps into the constraint that the $su(2|2)$ central
charges of the complete magnon state must vanish.
Our simple states are then given by setting $q_2=q_1^{-1}$ and are labeled by a single parameter $q_1$;
denote the simple states using a subscript $s$ as $|q_1\rangle_s$.
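The geometric-sum argument for $c_a$ is easy to check numerically. The sketch below (function name ours) evaluates $c_a=\sum_{m_1=0}^{J-1}q_1^{m_1}q_2^{m_1-a}$ with $q_i=e^{2\pi i k_i/J}$, which should equal $Jq_2^{-a}$ when $k_1+k_2=0$ mod $J$ and vanish otherwise:

```python
import cmath

def c_a(J, k1, k2, a):
    """Coefficient of the ket |1^{n+a+1},1^{n+a},1^{n+a};{J-a}> in the
    simple state: a geometric sum over the magnon position m1."""
    q1 = cmath.exp(2j * cmath.pi * k1 / J)
    q2 = cmath.exp(2j * cmath.pi * k2 / J)
    return sum(q1 ** m1 * q2 ** (m1 - a) for m1 in range(J))
```

This makes the zero lattice momentum constraint $q_1=q_2^{-1}$ explicit: only when the two momenta cancel does the state survive.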
The asymptotic large $N$ eigenstates are a small modification of these simple states.
When we apply the dilatation operator to the simple states nothing prevents the boundary magnons from
``hopping past the endpoints of the open string'', so the simple states are not closed under the action
of the dilatation operator.
We need to relax the sharp cut off on the magnon movement, by allowing the sums that appear in
(\ref{simplestates}) above to be unrestricted.
We accomplish this by introducing a ``cut off'' function, shown in Figure \ref{fig:cutoff}.
In terms of this cut off function $f(\cdot)$ our eigenstates are
\bea
&&|\psi(q_1)\rangle =
\sum_{m_2=0}^{n+J}\sum_{m_1=0}^{m_2}f(m_2) q_1^{m_1-m_2}
|1^{n+J+m_1-m_2+1},1^{n+J+m_1-m_2},1^{n+J+m_1-m_2};\{ m_2-m_1\}\rangle\cr
&+&\sum_{m_2=0}^{n}\sum_{m_1=0}^{J+m_2} f(m_1)f(J-m_1+m_2)q_1^{m_1-m_2}
|1^{n+m_1-m_2+1},1^{n+m_1-m_2},1^{n+m_1-m_2};\{ J-m_1+m_2\}\rangle\cr
&&
\eea
\begin{figure}
\begin{center}
\includegraphics[height=4cm,width=7cm]{functionf}
\caption{The cutoff function used in constructing large $N$ eigenstates}
\label{fig:cutoff}
\end{center}
\end{figure}
The dilatation operator can not arrange that the number of $Z$s between two magnons becomes negative.
Thus, any bounds on sums in the definition of our simple states enforcing this are respected.
On the other hand, the dilatation operator allows boundary magnons to hop arbitrarily far beyond the open string endpoint.
Bounds in the sums for simple states enforcing this are not respected.
We replace these bounds, previously enforced as the upper limit of a sum, by bounds enforced by the cut off function.
From Figure \ref{fig:cutoff} we see that the cut off function is defined using a parameter $\delta J$.
We require that ${\delta J\over J}\to 0$ as $N\to\infty$, so that at large $N$ the difference between these eigenstates
and the simple states $|q_1\rangle_s$ vanishes, as demonstrated in Appendix \ref{nodiff}.
We also want to ensure that
\bea
f(i)=f(i+1)+\epsilon\qquad \forall i
\label{conditionforf}
\eea
with $\epsilon\to 0$ as $N\to\infty$.
(\ref{conditionforf}) is needed to ensure that we do indeed obtain an eigenstate.
It is straightforward to choose a function $f(x)$ with the required properties.
We could for example choose $\delta J$ to be of order $N^{{1\over 4}}$.
Our large $N$ answers are not sensitive to the details of the cut off function $f(x)$.
When $1/N$ corrections to the eigenstates are computed $f(x)$ may be more constrained and we may need to
reconsider the precise form of the cut off function and how we implement the bounds.
It is now straightforward to verify that, at large $N$, we have
\bea
D|\psi(q_1)\rangle &=&2\times {g_{YM}^2 N\over 8\pi^2}
\left(1+\left[1-{n\over N}\right]-\sqrt{1-{n\over N}}(q_1+q_1^{-1})\right)|\psi(q_1)\rangle\cr
&=&2g^2 \left(1+\left[1-{n\over N}\right]-\sqrt{1-{n\over N}}(q_1+q_1^{-1})\right)|\psi(q_1)\rangle
\eea
The analysis for the dual giant graviton of momentum $n$ leads to
\bea
D|\psi(q_1)\rangle &=&2\times {g_{YM}^2 N\over 8\pi^2}
\left(1+\left[1+{n\over N}\right]-\sqrt{1+{n\over N}}(q_1+q_1^{-1})\right)|\psi(q_1)\rangle\cr
&=&2g^2 \left(1+\left[1+{n\over N}\right]-\sqrt{1+{n\over N}}(q_1+q_1^{-1})\right)|\psi(q_1)\rangle
\label{AdSEnergy}
\eea
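The boundary-magnon dispersion can be checked numerically. In the sketch below (function names and the parameter `alpha` are ours) we encode the per-magnon energy $g^2\big(1+a-\sqrt{a}\,(q+q^{-1})\big)$ with $q=e^{ip}$ and $a=1\mp n/N$ for a sphere/AdS giant; at $a=1$ ($n=0$) it reduces to the bulk magnon energy, and at $a=0$ (maximal giant) it becomes independent of $p$, matching the frozen boundary magnons discussed earlier:

```python
import math

def boundary_magnon_energy(p, alpha, g2=1.0):
    """Boundary magnon: E = g^2 (1 + a - sqrt(a)(q + 1/q)) with q = e^{ip},
    where alpha = a = 1 -/+ n/N for a sphere/AdS giant graviton."""
    return g2 * (1.0 + alpha - math.sqrt(alpha) * 2.0 * math.cos(p))

def bulk_magnon_energy(p, g2=1.0):
    """Bulk magnon: E = g^2 (2 - q - 1/q) = 4 g^2 sin^2(p/2)."""
    return g2 * (2.0 - 2.0 * math.cos(p))
```

Note that $1+a-2\sqrt{a}\cos p=(1-\sqrt{a})^2+2\sqrt{a}(1-\cos p)\ge 0$, so the energy is non-negative for all momenta.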
The generalization to include more magnons is straightforward.
We will consider increasingly complicated examples and for each simply quote the final results.
The discussion is most easily carried out using the occupation notation.
For example, the simple states corresponding to three magnons are
\bea
|q_1,q_2,q_3\rangle&=&\sum_{n_3=0}^{J-1}\sum_{n_2=0}^{n_3}\sum_{n_1=0}^{n_2}q_1^{n_1}q_2^{n_2}q_3^{n_3}
|G(n+J+n_1-n_3);\{(n_2-n_1),(n_3-n_2)\}\rangle\cr
&+&\sum_{n_1=0}^{J-1}\sum_{n_3=0}^{n_1}\sum_{n_2=0}^{n_3}q_1^{n_1}q_2^{n_2}q_3^{n_3}
|G(n+n_1-n_3);\{(J+n_2-n_1),(n_3-n_2)\}\rangle\cr
&+&\sum_{n_2=0}^{J-1}\sum_{n_1=0}^{n_2}\sum_{n_3=0}^{n_1}q_1^{n_1}q_2^{n_2}q_3^{n_3}
|G(n+n_1-n_3);\{(n_2-n_1),(J+n_3-n_2)\}\rangle\cr
&&\eea
where we have again lumped together the Young diagram labels $G(x)=R,R_1^1,R_2^1=1^{x+1},1^x,1^x$.
The coefficient of the ket $|G(n+J-a-b);\{(a),(b)\}\rangle$ is given by the sum
\bea
\sum_{n_1=0}^{J-1}(q_1 q_2 q_3)^{n_1}q_2^a q_3^{a+b}
\eea
which vanishes if $k_1+k_2+k_3\ne 0$.
Consequently we can set $q_3=q_1^{-1}q_2^{-1}$.
Including the cut off function, our energy eigenstates are given by
\bea
&&|\psi(q_1,q_2)\rangle=\sum_{n_3=0}^{\infty}\sum_{n_2=0}^{n_3}\sum_{n_1=0}^{n_2}q_1^{n_1-n_3}q_2^{n_2-n_3}
f(n_3) |G(n+J+n_1-n_3);\{(n_2-n_1),(n_3-n_2)\}\rangle\cr
&+&\sum_{n_3=0}^{\infty}\sum_{n_2=0}^{n_3}\sum_{n_1=0}^{J+n_2}q_1^{n_1-n_3}q_2^{n_2-n_3}
f(n_1)f(J+n_3-n_1)
|G(n+n_1-n_3);\{(J+n_2-n_1),(n_3-n_2)\}\rangle\cr
&+&\sum_{n_3=0}^{\infty}\sum_{n_2=0}^{J+n_3}\sum_{n_1=0}^{n_2}q_1^{n_1-n_3}q_2^{n_2-n_3}
f(n_2)f(J+n_3-n_1)
|G(n+n_1-n_3);\{(n_2-n_1),(J+n_3-n_2)\}\rangle\nonumber
\eea
It is a simple matter to see that
\bea
D|\psi (q_1,q_2)\rangle=(E_1+E_2+E_3)|\psi (q_1,q_2)\rangle
\eea
where
\bea
E_1&=&g^2 \left(1+\left[1-{n\over N}\right]-\sqrt{1-{n\over N}}(q_1+q_1^{-1})\right)\cr
E_2&=& g^2 \left( 2-q_2-q_2^{-1}\right) \cr
E_3&=& g^2 \left( 1+\left[1-{n\over N}\right]-\sqrt{1-{n\over N}}(q_3+q_3^{-1})\right)
\label{gtresult}
\eea
Now consider the extension to states containing many magnons:
For an $M$ magnon state, consider all $M$ cyclic orderings of the ``magnon positions''
\bea
&& n_1\le n_2\le n_3\le \cdots\le n_{M-2}\le n_{M-1}\le n_M\le J-1\cr
&& n_M\le n_1\le n_2\le n_3\le \cdots\le n_{M-2}\le n_{M-1}\le J-1\cr
&& n_{M-1}\le n_M\le n_1\le n_2\le n_3\le \cdots\le n_{M-2}\le J-1\cr
&&\vdots\qquad\qquad\vdots\qquad\qquad\vdots\cr
&& n_2\le n_3\le \cdots\le n_{M-2}\le n_{M-1}\le n_M\le n_1\le J-1
\label{orderings}
\eea
Construct the differences $\{n_2-n_1,n_3-n_2,n_4-n_3,\cdots,n_M-n_{M-1},n_1-n_M\}$.
Every difference except for one is positive.
Add $J$ to the difference that is negative, i.e. the resulting differences are
$\{\Delta_2,\Delta_3,\Delta_4,\cdots,\Delta_M,\Delta_1\}$ with
\bea
\begin{matrix}
\cr
\Delta _{i}\cr
\cr
\end{matrix}
=\left\{
\begin{matrix}
n_i-n_{i-1} &{\rm if\quad} n_i\ge n_{i-1}\cr
\cr
J+n_i-n_{i-1} &{\rm if\quad} n_i\le n_{i-1}\cr
\end{matrix}
\right.
\eea
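The construction of the differences $\Delta_i$ amounts to a cyclic difference with the single wrap-around term shifted by $J$; a minimal sketch (function name ours):

```python
def cyclic_differences(positions, J):
    """Given magnon positions n_1 <= ... <= n_M on a lattice of J sites,
    return [Delta_2, ..., Delta_M, Delta_1]: consecutive differences,
    with J added to the wrap-around difference n_1 - n_M.
    By construction the Deltas sum to J."""
    M = len(positions)
    deltas = [positions[i] - positions[i - 1] for i in range(1, M)]
    deltas.append(J + positions[0] - positions[-1])
    return deltas
```

The sum of the $\Delta_i$ always equals $J$, since the cyclic differences of the $n_i$ sum to zero and exactly one term picks up the shift by $J$.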
For each ordering in (\ref{orderings}) we have a term in the simple state.
This term is obtained by summing over all values
of $\{ n_1,n_2,\cdots,n_M\}$ consistent with the ordering considered, of the following summand
\bea
q_1^{n_1}q_2^{n_2}\cdots q_M^{n_M}
|1^{n+\Delta_1+1},1^{n+\Delta_1},1^{n+\Delta_1};\{(\Delta_2),(\Delta_3),\cdots,(\Delta_M)\}\rangle
\eea
Repeating the argument we outlined above, this term vanishes unless $q_M^{-1}=q_1 q_2\cdots q_{M-1}$ so
that the summand can be replaced by
\bea
q_1^{n_1-n_M}q_2^{n_2-n_M}\cdots q_{M-1}^{n_{M-1}-n_M}
|1^{n+\Delta_1+1},1^{n+\Delta_1},1^{n+\Delta_1};\{(\Delta_2),(\Delta_3),\cdots,(\Delta_M)\}\rangle
\eea
Finally, consider the extension to many string states and an arbitrary system of giant graviton branes.
Each open string word is constructed as explained above.
We add extra columns (one for each giant graviton) and rows (one for each dual giant graviton) to $R$.
The labels $R^k_1$ and $R^k_2$ specify how the open strings are connected to the giant and dual giant gravitons.
When describing twisted string states, the strings describe a closed loop, ``punctuated by'' the giant gravitons
on which they end.
As an example, consider a two giant graviton state, with a pair of strings stretching between the giant gravitons.
The two strings carry a total momentum of $J$.
Notice that we are using the two strings to define a single lattice of $J$ sites.
One might have thought that the two strings would each define an independent lattice.
To understand why we use the two strings to define a single lattice, recall that we are identifying the zero lattice momentum
constraint with the constraint that the $su(2|2)$ central charges of the complete magnon state must vanish.
There is a single $su(2|2)$ constraint on the two string state, not one constraint for each string.
We interpret this as implying there is a single zero lattice momentum constraint for the two strings, and hence there
is a single lattice for the two strings.
This provides a straightforward way to satisfy the $su(2|2)$ central charge constraints.
The first giant graviton has a momentum of $b_0$ and the second a momentum of $b_1$.
The first string is excited by $M$ magnons with locations $\{n_1,n_2,\cdots,n_{M-1},n_M\}$ and the second by
$\tilde{M}$ magnons with locations $\{\tilde{n}_1,\tilde{n}_2,\cdots,\tilde{n}_{\tilde{M}-1},\tilde{n}_{\tilde{M}}\}$
where we have switched to the lattice notation.
We need to consider the $M+\tilde{M}$ orderings of the $\{ n_i\}$ and $\{\tilde{n}_i\}$.
Given a specific pair of orderings, we can again form the differences
\bea
\Delta_{1}&=&\left\{
\begin{matrix}
n_1-\tilde{n}_{M} &{\rm if\quad} n_1\ge \tilde{n}_{M} &\cr
J+n_1-\tilde{n}_{M} &{\rm if\quad} n_1\le \tilde{n}_{M} &\cr
\end{matrix}
\right.
\cr
\begin{matrix}
\cr
\Delta _{i}\cr
\cr
\end{matrix}
&=&\left\{
\begin{matrix}
n_i-n_{i-1} &{\rm if\quad} n_i\ge n_{i-1} &\cr
& &i=2,3,\cdots,M\cr
J+n_i-n_{i-1} &{\rm if\quad} n_i\le n_{i-1} &\cr
\end{matrix}
\right.\cr
\cr
\Delta_{M+1}&=&\left\{
\begin{matrix}
\tilde{n}_{1}-n_M &{\rm if\quad} n_M\le \tilde{n}_{1} &\cr
J+\tilde{n}_{1}-n_M &{\rm if\quad} n_M\ge \tilde{n}_{1} &\cr
\end{matrix}
\right.\cr
\cr
\begin{matrix}
\cr
\Delta _{M+i}\cr
\cr
\end{matrix}
&=&\left\{
\begin{matrix}
\tilde{n}_i-\tilde{n}_{i-1} &{\rm if\quad} \tilde{n}_i\ge \tilde{n}_{i-1} &\cr
& &i=2,3,\cdots,\tilde{M}\cr
J+\tilde{n}_i-\tilde{n}_{i-1} &{\rm if\quad} \tilde{n}_i\le \tilde{n}_{i-1} &\cr
\end{matrix}
\right.
\eea
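Since the $\Delta_i$ are cyclic gaps on a single lattice of $J$ sites, they telescope around the circle: for any ordering their sum is a multiple of $J$. A minimal Python sketch illustrates this (the helper `cyclic_deltas` is ours, not from the text; we use `(a - b) % J`, which differs from the convention above only when two positions coincide):

```python
# Sketch (illustrative only): cyclic gaps between magnon positions on a single
# lattice of J sites, following the Delta_i definitions in the text.

def cyclic_deltas(J, n, tn):
    """n: positions n_1..n_M on the first string, tn: positions on the second.
    Delta_1 = n_1 - tn_Mt (mod J), then consecutive gaps around the circle."""
    seq = [tn[-1]] + list(n) + list(tn)
    return [(seq[i] - seq[i - 1]) % J for i in range(1, len(seq))]

J = 20
n = [3, 7, 11]      # M = 3 magnons on the first string
tn = [13, 16, 19]   # Mt = 3 magnons on the second string
deltas = cyclic_deltas(J, n, tn)
print(deltas)             # [4, 4, 4, 2, 3, 3]
print(sum(deltas) % J)    # gaps telescope: the sum is a multiple of J
```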
For each ordering we again have a term in the simple state, obtained by summing over all values
of $\{ n_1,n_2,\cdots,n_M,\tilde{n}_1,\tilde{n}_2,\cdots,\tilde{n}_{\tilde{M}}\}$ consistent with the ordering
considered, of the following summand
\bea
q_1^{n_1}\cdots q_M^{n_M}\tilde q_1^{\tilde n_1}\cdots \tilde q_{\tilde M}^{\tilde n_{\tilde M}}
|G(\Delta_1,\Delta_{M+1});\{(\Delta_2),(\Delta_3),\cdots,(\Delta_M)\},
\{(\Delta_{M+2}),(\Delta_{M+3}),\cdots,(\Delta_{M+\tilde M})\}\rangle\cr
\eea
where
\bea
G(x,y)\equiv
{\tiny \yng(2,2,2,2,2,2,2,1,1,1,1,1),
\young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{\,}{2},{\,},{\,},{\,},{\,},{1}),
\young({\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{\,}{\,},{\,}{1},{\,},{\,},{\,},{\,},{2})}
\eea
In the first Young diagram above there are $b_1+y+1$ rows with 2 boxes in each row and
$b_0+x-b_1-y-1$ rows with 1 box in each row.
Repeating the argument we outlined above, this term vanishes unless
$\tilde q_{\tilde M}^{-1}=q_1 \cdots q_{M}\tilde q_1\cdots \tilde q_{\tilde M-1}$ so
that the summand can be replaced by
\bea
q_1^{n_1-\tilde n_{\tilde M}}q_2^{n_2-\tilde n_{\tilde M}}\cdots \tilde q_{\tilde M-1}^{\tilde n_{\tilde M-1}-\tilde n_{\tilde M}}
|G(\Delta_1,\Delta_{M+1});\{(\Delta_2),(\Delta_3),\cdots,(\Delta_M)\},
\{(\Delta_{M+2}),(\Delta_{M+3}),\cdots,(\Delta_{M+\tilde M})\}\rangle\cr
\eea
This completes our discussion of the large $N$ asymptotic eigenstates.
We will now consider the dual string theory description of these states.
\section{String Theory Description}
The string theory description of the gauge theory operators is most easily
developed using the limit introduced by Hofman and Maldacena\cite{Hofman:2006xt}, in which
the spectrum on both sides of the correspondence simplifies.
The limit considers operators of large ${\cal R}$-charge $J$ and scaling dimension $\Delta$,
holding $\Delta -J$ and the 't Hooft coupling $\lambda$ fixed.
Both sides of the correspondence enjoy an $SU(2|2)\times SU(2|2)$ supersymmetry with
novel central extensions as realized by Beisert in \cite{Beisert:2005tm,Beisert:2006qh}.
Once the central charges of the spin-chain/worldsheet excitations have been determined,
their spectrum and the constraints on their two-body scattering follow.
A powerful conclusion argued for in \cite{Hofman:2006xt} using the physical picture developed in \cite{Berenstein:2005jq}
is that there is a natural geometric interpretation for these central charges in the classical string theory.
This geometric interpretation also proved useful in the analysis of maximal giant gravitons in \cite{Hofman:2007xp}.
In this section we will argue that it is also applicable to the case of non-maximal giant and dual giant gravitons.
Giant gravitons carry a dipole moment under the RR five form flux $F_5$.
When they move through the spacetime, the Lorentz-force-like coupling to $F_5$ causes them to expand in directions
transverse to the direction in which they move\cite{Myers:1999ps}.
The giant graviton orbits on a circle inside the $S^5$ and wraps an $S^3$ transverse to this circle but also contained in the S$^5$.
Using the complex coordinates $x=x^5+ix^6$, $y=x^3+ix^4$ and $z=x^1+ix^2$ the $S^5$ is described by
\bea
|z|^2+|x|^2+|y|^2 = 1
\label{sphereeqn}
\eea
in units with the radius of the $S^5$ equal to 1.
The giant is orbiting in the $1-2$ plane on the circle $|z|=r$.
The size to which the giant expands is determined by balancing the force causing it to expand, due to the coupling to
the $F_5$ flux, against the D3-brane tension, which causes it to shrink.
Since the coupling to the $F_5$ flux depends on the giant's velocity, the size of the giant graviton is determined by its angular
momentum $n$ as \cite{McGreevy:2000cw,Hashimoto:2000zp,Grisaru:2000zn}
\bea
|x|^2+|y|^2 = {n\over N}
\eea
Using (\ref{sphereeqn}) we see that the giant graviton orbits on a circle of radius\cite{McGreevy:2000cw}
\bea
r=\sqrt{1-{n\over N}}<1
\eea
Consider now the worldsheet geometry for an open string attached to a giant graviton.
Following \cite{Hofman:2006xt}, we will describe this worldsheet solution using LLM coordinates\cite{Lin:2004nb}.
The worldsheet for this solution, in these coordinates, is shown in Figure \ref{fig:giant}.
The figure shows an open string with 6 magnons.
Each magnon corresponds to a directed line segment in the figure.
The first and last magnons connect to the giant which is orbiting on the smaller circle shown.
Between the magnons we have a collection of $O(\sqrt{N})$ $Z$s.
These are pushed by a centrifugal force to the circle $|z|=1$ giving the string worldsheet the shape shown in the figure.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=4.5cm,width=4.5cm]{giantpic1}
\caption{The giant is orbiting on the smaller circle shown. Each red segment is a magnon.
The arrows in the figure simply indicate the orientation of the central charge $k_i$ of the $i$th magnon.}
\label{fig:giant}
\end{center}
\end{figure}
In the limit that the magnons are well separated, each magnon transforms in a definite $SU(2|2)^2$ representation.
The open string itself transforms as the tensor product of the individual magnon representations.
The representation of each individual magnon is specified by giving the values of the central charges $k_i,k_i^*$ appearing
in (\ref{centcharge}).
Regarding the plane shown in Figure \ref{fig:giant} as the complex plane, $k$ is given by the complex number determined by the
vector describing the directed segment corresponding to the magnon.
In particular, the magnitude of $k$ is given by the length of the line corresponding to the magnon.
The energy of the magnon, which transforms in a short representation, is determined by supersymmetry to
be\cite{Beisert:2005tm,Beisert:2006qh}
\bea
E=\sqrt{1+2\lambda |k|^2}=1+\lambda |k|^2-{1\over 2}\lambda^2|k|^4+...
\label{stringspect}
\eea
\begin{figure}[ht]
\begin{center}
\includegraphics[height=4.5cm,width=4.5cm]{bulkmagnon}
\caption{A bulk magnon subtending an angle $\theta$ has a length of $2\sin {\theta\over 2}$.}
\label{fig:bulkmagnon}
\end{center}
\end{figure}
For a magnon which subtends an angle $\theta$ we find\cite{Hofman:2006xt}
\bea
E=1+4\lambda \sin^2 {\theta\over 2}+O(\lambda^2)=1+\lambda (2-e^{i\theta}-e^{-i\theta})+O(\lambda^2)
\eea
This is in perfect agreement with the field theory answer (\ref{gtresult}) if we set $\lambda = g^2$ and
\bea
q=e^{i{2\pi k\over J}}=e^{i\theta}\qquad\Rightarrow\qquad \theta = {2\pi k\over J}
\eea
Thus the angle subtended by the magnon is equal to its momentum, which is the well-known result
obtained in \cite{Hofman:2006xt}.
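As a quick numerical sanity check (an illustrative sketch, not part of the derivation), one can confirm that a chord of the unit circle subtending an angle $\theta$ has length $2\sin(\theta/2)$, and that expanding $E=\sqrt{1+2\lambda|k|^2}$ to first order in $\lambda$ reproduces $1+\lambda(2-2\cos\theta)$:

```python
import cmath
import math

# Sketch (illustrative only): chord length on the unit circle and the
# weak-coupling expansion of E = sqrt(1 + 2*lam*|k|**2), |k| = 2*sin(theta/2).

theta, lam = 0.9, 1e-4                        # sample angle, small 't Hooft coupling
k = cmath.exp(1j * theta) - 1.0               # chord of the unit circle as a complex number
assert abs(abs(k) - 2 * math.sin(theta / 2)) < 1e-12

E = math.sqrt(1 + 2 * lam * abs(k) ** 2)
E_lead = 1 + lam * (2 - 2 * math.cos(theta))  # = 1 + 4*lam*sin(theta/2)**2
assert abs(E - E_lead) < 10 * lam ** 2        # agreement up to O(lam^2)
```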
Consider now the boundary magnon, as shown in Figure \ref{fig:boundarymagnon}.
The circle on which the giant orbits has a radius given by
\bea
r=\sqrt{1-{n\over N}}
\eea
The large circle has a radius of 1 in the units we are using.
Thus, the length of the boundary magnon is given by the length of the diagonal of the isosceles trapezium shown
in Figure \ref{fig:boundarymagnon}.
Consequently
\bea
E&=&1+\lambda ((1-r)^2 +4r \sin^2 {\theta\over 2})+O(\lambda^2)\cr
&=&1+\lambda \left(1+r^2-r(e^{i\theta}+e^{-i\theta})\right)+O(\lambda^2)
\eea
\begin{figure}[ht]
\begin{center}
\includegraphics[height=4cm,width=10cm]{boundarymagnon}
\caption{A boundary magnon subtending an angle $\theta$ has a length of
$\sqrt{(1-r)^2+4r\sin^2 {\theta\over 2}}$.}
\label{fig:boundarymagnon}
\end{center}
\end{figure}
This is again in complete agreement with (\ref{gtresult}) after we set $\theta = {2\pi k\over J}$
and recall that $r=\sqrt{1-{n\over N}}$.
This is a convincing check of the boundary terms in the dilatation operator and of our large $N$ asymptotic eigenstates.
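The trapezium diagonal quoted in Figure \ref{fig:boundarymagnon} is simply the law of cosines for a triangle with sides $1$ and $r$ enclosing the angle $\theta$. The short check below (illustrative only; the helper name is ours) verifies the identity, including the limits $r=1$ (bulk chord) and $r=0$ (maximal giant); since $(r-1)^2=(1-r)^2$, the same identity covers the dual giant case:

```python
import math

# Sketch (illustrative only): the segment joining a point on the circle of
# radius r to a point on the unit circle, with opening angle theta, has squared
# length 1 + r**2 - 2*r*cos(theta) by the law of cosines, which equals the
# trapezium form (1-r)**2 + 4*r*sin(theta/2)**2 used in the text.

def boundary_len_sq(r, theta):
    return (1 - r) ** 2 + 4 * r * math.sin(theta / 2) ** 2

for r in (0.0, 0.4, 1.0, 1.3):           # r < 1: giant, r > 1: dual giant
    for theta in (0.0, 0.7, 2.1):
        law_of_cosines = 1 + r ** 2 - 2 * r * math.cos(theta)
        assert abs(boundary_len_sq(r, theta) - law_of_cosines) < 1e-12

# limits: r=1 reproduces the bulk chord 2*sin(theta/2); r=0 gives length 1
assert abs(boundary_len_sq(1.0, 0.7) - (2 * math.sin(0.35)) ** 2) < 1e-12
assert boundary_len_sq(0.0, 0.7) == 1.0
```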
In the description of maximal giant gravitons, the boundary magnon always stretches from the center
of the disk to a point on the circumference of the circle $|z|=1$.
Consequently, for the maximal giant the boundary magnon subtends an angle of zero and never carries non-zero momentum.
For submaximal giants we see that the boundary magnons do in general carry non-zero momentum.
This is completely expected: in the case of a maximal giant graviton, the boundary magnons are locked in the first
and last position of the open string lattice.
As we move away from the maximal giant graviton, the coefficients of the boundary terms, which allow the boundary
magnons to hop on the lattice, increase from zero, allowing the boundary magnons to move and hence to carry
non-zero momentum.
In Appendix \ref{TwoLoop} we check that the two loop answer in the field theory agrees with the $O(\lambda^2)$
term of (\ref{stringspect}).
Notice that the vector sum of the directed line segments vanishes.
This is nothing but the statement that our operator vanishes unless $q_M^{-1}=q_1 q_2\cdots q_{M-1}$.
This condition ensures that although each magnon transforms in a representation of $su(2|2)^2$ with non-zero
central charges, the complete state enjoys an $su(2|2)^2$ symmetry that has no central extension.
It is for this reason that the central charges must sum to zero and hence that the vector sum of the red segments must vanish.
This is achieved in an interesting way for certain multi-string states: each open string can transform under an $su(2|2)^2$ that has
a non-zero central charge and it is only for the full state of all open strings plus giants that the central charge vanishes.
An example of this for a two string state is given in Figure \ref{twostate}.
\begin{figure}[hb]
\begin{center}
\includegraphics[height=4.5cm,width=4.5cm]{twostate}
\caption{A state of two strings attached to two giant gravitons.
Both giants are submaximal and so are moving on circles with a radius $|z|<1$.
One of the strings has only two boundary magnons.
The second string has two boundary magnons and three bulk magnons.
Notice that each open string has a non-vanishing central charge.
It is only for the full state that the central charge vanishes.
See \cite{Berenstein:2014zxa} for closely related observations.}
\label{twostate}
\end{center}
\end{figure}
To conclude this section, we will consider an example involving a dual giant graviton.
In this case, the dual giant graviton orbits on a circle of radius\cite{Hashimoto:2000zp,Grisaru:2000zn}
\bea
r=\sqrt{1+{n\over N}}>1
\eea
The length of the line segment corresponding to the boundary magnon is again given by the length of the diagonal of an
isosceles trapezium, as shown in Figure \ref{fig:dualboundarymagnon}. Consequently
\bea
E&=&1+\lambda ((r-1)^2 +4r \sin^2 {\theta\over 2})+O(\lambda^2)\cr
&=&1+\lambda \left(1+r^2-r(e^{i\theta}+e^{-i\theta})\right)+O(\lambda^2)
\eea
which is in perfect agreement with (\ref{AdSEnergy}) after we set $\theta={2\pi k\over J}$ and $r=\sqrt{1+{n\over N}}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=4cm,width=10cm]{adsboundarymagnon}
\caption{A boundary magnon subtending an angle $\theta$ has a length of
$\sqrt{(r-1)^2+4r\sin^2 {\theta\over 2}}$.}
\label{fig:dualboundarymagnon}
\end{center}
\end{figure}
\section{From asymptotic states to exact eigenstates}\label{toexact}
The states we have written down above are asymptotic states in the sense that we have implicitly assumed that
all of the magnons are well separated.
In this case the excitations can be treated individually and the symmetry algebra acts as a tensor product representation.
However, the magnons can come close together and even swap positions.
When they swap positions, we get different asymptotic states that must be combined to obtain the exact eigenstate.
The asymptotic states must be combined in a way that is compatible with the algebra, as explained in \cite{Beisert:2005tm}.
This requirement ultimately implies a unique way to complete the asymptotic states to obtain the exact eigenstate.
When two bulk magnons swap positions, the corresponding asymptotic states are combined using the two particle $S$-matrix.
The relevant two particle $S$-matrix has been determined in \cite{Beisert:2005tm,Beisert:2006qh}.
It is also possible for a bulk magnon to reflect/scatter off a boundary magnon.
For maximal giant gravitons\cite{Hofman:2007xp}, the reflection from the boundary preserves the fact that the boundary
magnon has zero momentum and it reverses the sign of the momentum of the bulk magnon.
In this section we would like to investigate the scattering of a bulk magnon off a boundary magnon for a non-maximal
giant graviton.
We must require that the total central charge $k$ of the state vanishes.
Thus, after the scattering the directed line segments must still sum to zero.
Further the central charge $C$ of the state must remain unchanged.
Taken together, these conditions uniquely fix the momentum of both bulk and boundary magnon after the
scattering.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=4cm,width=9.5cm]{reflection}
\caption{A bulk magnon scatters with a boundary magnon.
In the process the direction of the momentum of the bulk magnon is reversed.}
\label{fig:reflection}
\end{center}
\end{figure}
In Figure \ref{fig:reflection} the process of scattering a bulk magnon off the boundary magnon is shown.
After the scattering, the magnons whose momenta have changed correspond to line segments that have changed;
these are shown in green.
In this case the giant graviton is close enough to a maximal giant that the momentum of the boundary magnon
is reversed, so this is a reflection-like scattering.
Before and after the scattering the line segments line up to form a closed circuit, so that the central charge $k$ of the
state before and after scattering is zero.
To analyze the constraint arising from fixing the central charge $C$, we parametrize the problem as shown in
Figure \ref{fig:fixC}.
There is a single parameter $\theta$ which is fixed by requiring
\bea
&&\sqrt{1+8\lambda \sin^2{\varphi_2\over 2}}
+\sqrt{1+2\lambda\left(\left[1-r\right]^2+4r\sin^2{\varphi_1\over 2}\right)} \cr
&&=\sqrt{1+8\lambda \sin^2{\theta\over 2}}
+\sqrt{1+2\lambda\left(\left[1-r\right]^2+4r
\sin^2\left({\varphi_1+\varphi_2+\theta\over 2}\right)\right)}\cr
&&\label{magconstraints}
\eea
which is the condition that the state has the correct central charge $C$. In the above formula we have
\bea
r=\sqrt{1-{b_0\over N}}\, .
\eea
The equation (\ref{magconstraints}) has two solutions.
One of them, $\theta =-\varphi_2$, describes the state before the scattering,
so we need to choose the solution for which $\theta\ne -\varphi_2$.
Notice that for $b_0=N$ this condition implies that $\theta =\varphi_2$, which is indeed the correct answer\cite{Hofman:2007xp}.
In this case, the bulk magnon reflects off the boundary with the direction of its momentum reversed but its
magnitude unchanged.
The momentum of the boundary magnon remains zero.
When $b_0=0$ the momenta of the two magnons are exchanged, which is again the correct answer \cite{Beisert:2005tm,Beisert:2006qh}.
When $0 < b_0 < N$ we find that the solution to (\ref{magconstraints}) for the momentum of the bulk magnon interpolates
between reflection-like scattering (the momentum of the magnon is reversed) and magnon-like scattering (the
momenta of the two magnons are exchanged).
In general, however, the magnitudes of the momenta of the bulk and boundary magnons are not
preserved by the scattering: the scattering is inelastic.
Finally, the scattering of a bulk magnon from a boundary magnon attached to a dual giant graviton is always
magnon-like, i.e. neither of the momenta changes direction.
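The interpolation described above can be seen numerically. The sketch below (illustrative only; the helper names, grid size, and tolerances are ours) imposes conservation of the total energy $\sum_i\sqrt{1+2\lambda l_i^2}$ with the geometric magnon lengths of this section (bulk chord $2\sin(x/2)$, boundary diagonal $\sqrt{(1-r)^2+4r\sin^2(x/2)}$) and scans for the scattering solution $\theta\ne -\varphi_2$:

```python
import math

# Sketch (illustrative only): solve the energy-conservation constraint for the
# angle theta after a bulk magnon scatters off a boundary magnon, using
# E = sqrt(1 + 2*lam*l**2) with the geometric lengths of this section.

def E_bulk(x, lam):
    return math.sqrt(1 + 8 * lam * math.sin(x / 2) ** 2)

def E_bnd(x, r, lam):
    return math.sqrt(1 + 2 * lam * ((1 - r) ** 2 + 4 * r * math.sin(x / 2) ** 2))

def scattering_angle(phi1, phi2, r, lam=0.1):
    """Roots theta != -phi2 of E_bulk(theta) + E_bnd(phi1+phi2+theta) = const."""
    target = E_bulk(phi2, lam) + E_bnd(phi1, r, lam)
    f = lambda t: E_bulk(t, lam) + E_bnd(phi1 + phi2 + t, r, lam) - target
    ts = [-math.pi + i * 2 * math.pi / 4000 for i in range(4001)]
    roots = []
    for a, b in zip(ts, ts[1:]):
        if f(a) * f(b) <= 0:
            for _ in range(80):            # bisection refinement
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return [t for t in roots if abs(t + phi2) > 1e-3]   # drop the trivial root

# b0 -> N (r -> 0): reflection-like, theta -> +phi2
# b0 -> 0 (r -> 1): magnon-like, momenta exchanged, theta -> -phi1
print(scattering_angle(0.8, 0.5, r=0.0))   # ~ [0.5]
print(scattering_angle(0.8, 0.5, r=1.0))   # ~ [-0.8]
```

Intermediate values of $r$ give a root that moves continuously between these two limits, with the magnitudes of the individual momenta no longer preserved.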
\begin{figure}[h]
\begin{center}
\includegraphics[height=4.5cm,width=4.5cm]{fixC}
\caption{A bulk magnon scatters with a boundary magnon.
In the process the direction of the momentum of the bulk magnon is reversed.
Before the scattering the boundary magnon subtends an angle $\varphi_1$ and the bulk magnon subtends an angle $\varphi_2$.
After the scattering the boundary magnon subtends an angle $\varphi_1+\varphi_2+\theta$ and the bulk magnon subtends an
angle $-\theta$.}
\label{fig:fixC}
\end{center}
\end{figure}
The fact that the scattering between boundary and bulk magnons is not elastic has far-reaching consequences.
First, the system will not be integrable.
In the case of purely elastic scattering for all magnon scatterings, the number of asymptotic states that must be combined to
construct the exact energy eigenstate is roughly $(M-1)!$ for $M$ magnons.
This is the number of ways of arranging the magnons (distinguished by their momentum) up to cyclicity.
There are $M$ magnon momenta appearing and these momenta are the same for all the asymptotic states.
The exact eigenstates can then be constructed using a coordinate space Bethe ansatz.
For the case of inelastic scattering, the momenta appearing depend on the specific asymptotic state one considers and there
are many more than $(M-1)!$ asymptotic states that must be combined to construct the exact eigenstate.
In this case constructing the exact eigenstates from the asymptotic states appears to be a formidable problem.
\section{$S$-matrix and boundary reflection matrix}\label{sctref}
We have a good understanding of the symmetries of the theory and the representations under which the states transform.
Following Beisert \cite{Beisert:2005tm,Beisert:2006qh}, this is all that is needed to obtain the magnon scattering matrix.
In this section we will carry out this analysis.
Each magnon transforms under a centrally extended representation of the $SU(2|2)$ algebra
\bea
\{ Q^\alpha_a,Q^\beta_b\}=\epsilon^{\alpha\beta}\epsilon_{ab}{k_i\over 2}\,,
\qquad
\{S^a_\alpha,S^b_\beta\}=\epsilon^{ab}\epsilon_{\alpha\beta}{k^*_i\over 2}\,,
\eea
\bea
\{ S^a_\alpha,Q^\beta_b\}=\delta^a_b L^\beta_\alpha+\delta^\beta_\alpha R^a_b+\delta^a_b\delta^\beta_\alpha C_i\,.
\eea
There are also the usual commutators for the bosonic $su(2)$ generators.
There are three central charges $k_i,k_i^*,C_i$ for each $SU(2|2)$ factor.
Following \cite{Hofman:2007xp} we set the central charges of the two copies to be equal.
It is useful to review how the bosonic part of the $SU(2|2)^2$ symmetry acts in the gauge theory.
${\cal N}=4$ super Yang-Mills theory has 6 hermitian adjoint scalars $\phi^i$ that transform as a vector of $SO(6)$.
We have combined them into the complex fields as follows
\bea
&& X=\phi^1+i\phi^2\,,\qquad \bar{X}=\phi^1-i\phi^2\,,\cr
&& Y=\phi^3+i\phi^4\,,\qquad \bar{Y}=\phi^3-i\phi^4\,,\cr
&& Z=\phi^5+i\phi^6\,,\qquad \bar{Z}=\phi^5-i\phi^6\,.
\eea
The bosonic subgroup of $SU(2|2)^2$ is $SU(2)\times SU(2)=SO(4)$, which rotates $\phi^1,\phi^2,\phi^3,\phi^4$ as a vector.
In terms of complex fields, $Y,X$ and $\bar{Y},\bar{X}$ transform under different $SU(2|2)$ groups.
$Z,\bar Z$ do not transform.
To specify the representation that each magnon transforms in, following \cite{Beisert:2005tm,Beisert:2006qh} we specify
parameters $a_k,b_k,c_k,d_k$ for each magnon, where
\bea
Q^\alpha_a|\phi^b\rangle =a_k\delta_a^b|\psi^\alpha\rangle\,,
\qquad
Q^\alpha_a|\psi^\beta\rangle =b_k \epsilon^{\alpha\beta}\epsilon_{ab}|\phi^b\rangle\,,
\eea
\bea
S^a_\alpha |\phi^b\rangle = c_k \epsilon_{\alpha\beta}\epsilon^{ab}|\psi^\beta\rangle\,,
\qquad
S^a_\alpha|\psi^\beta\rangle =d_k \delta^\beta_\alpha |\phi^a\rangle\,,
\eea
for the $k$th magnon.
We are using the non-local notation of \cite{Beisert:2006qh}.
Using the representation introduced above
\bea
Q^{1}_1 Q^2_2 |\phi^2\rangle =a_k Q^1_1|\psi^2\rangle =
b_k a_k\epsilon^{12}\epsilon_{12}|\phi^2\rangle\,,
\qquad
Q^2_2 Q^{1}_1 |\phi^2\rangle = 0\,,
\eea
so that $k_k =2\, a_k\, b_k$.
An identical argument using the $S^a_\alpha$ supercharges gives $k^*_k =2 \, c_k\, d_k$.
Consider next a state with a total of $K$ magnons.
If we are to obtain a representation without central extension, we must require that the central charges vanish
\bea
{k\over 2}=\sum_{k=1}^K {k_k\over 2}=\sum_{k=1}^K a_k b_k =0\,,\cr
{k^*\over 2}=\sum_{k=1}^K {k_k^*\over 2}=\sum_{k=1}^Kc_k d_k =0\,.\label{vanishcentral}
\eea
To obtain a formula for the central charge $C$ consider
\bea
Q^\alpha_a S^b_\beta|\phi^c\rangle
= c_k Q^\alpha_a\epsilon^{bc}\epsilon_{\beta\gamma}| \psi^\gamma\rangle
= c_k b_k\epsilon^{bc}\epsilon_{\beta\gamma}\epsilon^{\alpha\gamma}\epsilon_{ad}|\phi^d\rangle\,.
\eea
Now set $a=b$ and $\alpha=\beta$ and sum over both indices to obtain
\bea
Q^\alpha_a S^a_\alpha|\phi^c\rangle = 2b_kc_k|\phi^c\rangle\,.
\eea
Very similar manipulations show that
\bea
S^a_\alpha Q^\alpha_a |\phi^c\rangle = 2a_k d_k|\phi^c\rangle
\eea
so that we learn the value of the central charge $C_k$
\bea
\{Q^\alpha_a ,S^a_\alpha\}|\phi^c\rangle =4C_k|\phi^c\rangle =2(a_kd_k+b_kc_k)|\phi^c\rangle\,,
\qquad\Rightarrow\qquad C_k={1\over 2}(a_kd_k+b_kc_k)\,.
\eea
Using
\bea
\{ S^1_2,Q^1_1\}= L^1_2\qquad L^1_2|\psi^2\rangle =|\psi^1\rangle
\eea
we easily find
\bea
\{ S^1_2,Q^1_1\}|\psi^2\rangle=(a_k d_k-b_kc_k)|\psi^1\rangle\qquad\Rightarrow\qquad
a_kd_k-b_kc_k=1\,.
\eea
This is also the condition to get an atypical representation of $su(2|2)$ \cite{Beisert:2006qh}.
Following \cite{Beisert:2005tm}, a useful parametrization for the parameters of the representation is given by
\bea
a_k =\sqrt{g}\eta_k\,,\qquad b_k={\sqrt{g}\over\eta_k} f_k \left(1-{x^+_k\over x^-_k}\right)\,,
\eea
\bea
c_k={\sqrt{g} i\eta_k\over f_kx^+_k}\,,\qquad d_k={\sqrt{g}x^+_k\over i\eta_k}\left(1-{x^-_k\over x^+_k}\right)\,.
\eea
The parameters $x^\pm_k$ are set by the momentum $p_k$ of the magnon
\bea
e^{i{2 \pi p_k\over J}}={x^+_k\over x^-_k}\,.\label{xisp}
\eea
The parameter $f_k$ is a pure phase, given by the product $\prod_j e^{ip_j}$, where $j$ runs over all magnons to
the left of the magnon considered.
To ensure unitarity $|\eta_k|^2=i(x_k^--x_k^+)$.
The condition $a_kd_k-b_kc_k=1$ to get an atypical representation implies that
\bea
x_k^+ +{1\over x_k^+}-x_k^--{1\over x_k^-}={i\over g}\,.\label{atyp}
\eea
This equation will be very useful in verifying some of the S-matrix formulas given below.
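This equivalence is easy to confirm numerically. The sketch below (illustrative only; sample values and tolerances are ours) solves (\ref{atyp}) for $x^+_k$ given $x^-_k$ and checks the representation constraints; the phases $\eta_k$, $f_k$ drop out of the combinations $a_kd_k\pm b_kc_k$, so they are set to 1:

```python
import cmath

# Sketch (illustrative only): check that the parametrization with x^+ solving
#   x^+ + 1/x^+ - x^- - 1/x^- = i/g
# obeys a*d - b*c = 1 (shortening condition) and (2C)^2 = 1 + 4*a*b*c*d,
# with C = (a*d + b*c)/2.

g = 0.9
xm = 1.3 + 0.4j                           # a sample x^-
s = xm + 1 / xm + 1j / g                  # so x^+ + 1/x^+ = s
xp = (s + cmath.sqrt(s * s - 4)) / 2      # one root of x**2 - s*x + 1 = 0

eta = f = 1.0                             # phases cancel in a*d - b*c and a*d + b*c
a = g ** 0.5 * eta
b = g ** 0.5 / eta * f * (1 - xp / xm)
c = g ** 0.5 * 1j * eta / (f * xp)
d = g ** 0.5 * xp / (1j * eta) * (1 - xm / xp)

assert abs(a * d - b * c - 1) < 1e-12     # atypical (short) representation
C = (a * d + b * c) / 2
assert abs((2 * C) ** 2 - (1 + 4 * a * b * c * d)) < 1e-12
```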
A useful parametrization for the parameters specifying the representation for a boundary magnon is given by
\bea
a_k=\sqrt{g}\eta_k\,,\qquad b_k={\sqrt{g}\over\eta_k} f_k \left(1-r{x_k^+\over x_k^-}\right)\,,
\eea
\bea
c_k={\sqrt{g} i\eta_k\over f_k x_k^+}\,,\qquad d_k={\sqrt{g}x_k^+\over i\eta_k}\left(1-r{x_k^-\over x_k^+}\right)\,,
\eea
where $r=\sqrt{1-{n\over N}}$ is the radius of the path on which the giant graviton of momentum $n$ orbits\footnote{For an open
string attached to a dual giant graviton, we would have $r=\sqrt{1+{n\over N}}$ where $n$ is the momentum of the dual giant
graviton.} and the parameters $x_k^\pm$ are again set by the momentum carried by the boundary magnon according to
(\ref{xisp}).
For the boundary magnon, $f_k$ is again a phase as described above and now $|\eta_k|^2=i(r x_k^--x_k^+)$.
For a maximal giant graviton $r=0$ and the boundary magnon carries no momentum and $|\eta_k|^2=-ix_k^+$.
For the boundary magnon, the condition $a_kd_k-b_kc_k=1$ to get an atypical representation implies that
\bea
x_k^+ +{1\over x_k^+}-rx_k^--{r\over x_k^-}={i\over g}\label{secatyp}
\eea
This equation will again be useful below.
Equation (\ref{secatyp}) interpolates between (\ref{atyp}) for $r=1$, which is the correct condition for a bulk magnon,
and the condition obtained for $r=0$
\bea
x_k^+ +{1\over x_k^+}={i\over g}
\eea
which was used in \cite{Hofman:2007xp} for the boundary magnon attached to a maximal giant graviton.
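The boundary version of this check works for any $r\in[0,1]$. The sketch below (illustrative only; the helper name and sample values are ours) imposes (\ref{secatyp}) and verifies $a_kd_k-b_kc_k=1$ for the boundary parametrization, interpolating between the bulk condition at $r=1$ and the maximal giant condition at $r=0$:

```python
import cmath

# Sketch (illustrative only): for the boundary-magnon parametrization,
# a*d - b*c = 1 is equivalent to  x^+ + 1/x^+ - r*x^- - r/x^- = i/g,
# which reduces to the bulk condition at r=1 and to x^+ + 1/x^+ = i/g at r=0.

def check_boundary_rep(r, g=0.9, xm=1.3 + 0.4j):
    s = r * (xm + 1 / xm) + 1j / g        # so x^+ + 1/x^+ = s solves (secatyp)
    xp = (s + cmath.sqrt(s * s - 4)) / 2
    eta = f = 1.0                         # phases cancel in a*d - b*c
    a = g ** 0.5 * eta
    b = g ** 0.5 / eta * f * (1 - r * xp / xm)
    c = g ** 0.5 * 1j * eta / (f * xp)
    d = g ** 0.5 * xp / (1j * eta) * (1 - r * xm / xp)
    return abs(a * d - b * c - 1)

for r in (0.0, 0.35, 0.8, 1.0):           # r = sqrt(1 - n/N) in [0, 1]
    assert check_boundary_rep(r) < 1e-12
```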
Following \cite{Beisert:2005tm,Beisert:2006qh} one can check that the above parametrization obeys (\ref{vanishcentral}).
Finally,
\bea
a_k b_k c_k d_k &=&g^2 (e^{-ip_k}-1)(e^{ip_k}-1)
=4g^2 \sin^2 {p_k\over 2}\cr
&=&{1\over 4}\Big[ (a_k d_k + b_k c_k)^2-(a_k d_k-b_kc_k)^2\Big]
={1\over 4}\Big[ (2 C_k)^2-1\Big]
\eea
so that
\bea
C_k=\pm\sqrt{{1\over 4}+4g^2 \sin^2 {p_k\over 2}}
\eea
The components of an energy eigenstate in different asymptotic regions are related by the bulk-bulk and boundary-bulk
magnon scattering matrices $S$ and $R$.
$S$ and $R$ must commute with the $su(2|2)$ group.
The labels of the representations of individual magnons can change under the scattering but they must do so in a way that
preserves the central charges of the total state.
In the picture of the energy eigenstates provided by the LLM plane, the central charges are given by the directed line
segments (which are vectors and hence can also be viewed as complex numbers), one for each magnon.
The fact that these line segments close into polygons is the statement that the central charges $k$ and $k^*$ of our
total state vanishes.
The sum of the lengths squared of these line segments determines the central charge $C$.
Under scattering, these segments can rearrange themselves as long as the sum
$\sum_i\sqrt{1+2\lambda l^2_i}$, with $l_i$ the length of segment $i$, is preserved and as long as they still form a closed polygon.
Consider now the scattering of two bulk magnons, magnon $k$ and magnon $k+1$.
The quantum numbers of the two incoming magnons and those of the outgoing magnons (denoted with a prime)
are as follows
\bea
a_k =\sqrt{g}\eta_k\qquad &&a_k'=a_k\cr
b_k ={\sqrt{g}\over\eta_k}f_k\left(1-{x_k^+\over x_k^-}\right)\qquad &&
b_k'={x^+_{k+1}\over x_{k+1}^-}b_k\cr
c_k ={\sqrt{g}i\eta_k\over f_k x^+_k}\qquad &&
c_k'={x^-_{k+1}\over x^+_{k+1}}c_k\cr
d_k ={\sqrt{g}x_k^+\over i\eta_k}\left(1-{x_k^-\over x_k^+}\right)\qquad &&
d_k'=d_k
\eea
\bea
a_{k+1} =\sqrt{g}\eta_{k+1}\qquad &&a_{k+1}'=a_{k+1}\cr
b_{k+1} ={x_k^+\over x_k^-}{\sqrt{g}\over\eta_{k+1}}f_{k}\left(1-{x_{k+1}^+\over x_{k+1}^-}\right)\qquad &&
b_{k+1}'={\sqrt{g}\over\eta_{k+1}}f_{k}\left(1-{x_{k+1}^+\over x_{k+1}^-}\right)\cr
c_{k+1} ={x_k^-\over x_k^+}{\sqrt{g}i\eta_{k+1}\over f_{k} x^+_{k+1}}\qquad &&
c_{k+1}'={\sqrt{g}i\eta_{k+1}\over f_{k} x^+_{k+1}}\cr
d_{k+1} ={\sqrt{g}x_{k+1}^+\over i\eta_{k+1}}\left(1-{x_{k+1}^-\over x_{k+1}^+}\right)\qquad &&
d_{k+1}'=d_{k+1}\,.
\eea
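A quick numerical check (illustrative only; the sample $x^\pm$ values are ours) confirms that, although the individual labels change, the total central charge $k=2(a_kb_k+a_{k+1}b_{k+1})$ is the same before and after the scattering:

```python
# Sketch (illustrative only): under bulk-bulk scattering the representation
# labels change, but the sum of central charges k_i = 2*a_i*b_i is preserved.

g, f = 0.9, 1.0
x1p, x1m = 1.4 + 0.5j, 1.1 - 0.3j         # sample x^+/x^- for magnon k
x2p, x2m = 0.8 + 0.9j, 1.6 + 0.2j         # sample x^+/x^- for magnon k+1
eta1 = eta2 = 1.0                         # phases drop out of the sums a_i*b_i

a1 = g ** 0.5 * eta1                      # a_k is unchanged by scattering
a2 = g ** 0.5 * eta2
b1 = g ** 0.5 / eta1 * f * (1 - x1p / x1m)                 # incoming labels
b2 = (x1p / x1m) * g ** 0.5 / eta2 * f * (1 - x2p / x2m)
b1p = (x2p / x2m) * b1                                     # outgoing labels
b2p = g ** 0.5 / eta2 * f * (1 - x2p / x2m)

k_before = 2 * (a1 * b1 + a2 * b2)
k_after = 2 * (a1 * b1p + a2 * b2p)
assert abs(k_before - k_after) < 1e-12    # total central charge is preserved
```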
We will also study the scattering of a bulk magnon with a boundary magnon.
Denoting the quantum numbers of the boundary magnon with a subscript $b$ and the quantum numbers of the bulk magnon
without a subscript, the quantum numbers of the magnons before and after the reflection are as follows
\bea
a =\sqrt{g}\eta\qquad &&a'=\sqrt{g}\eta'\cr
b ={\sqrt{g}\over\eta}f\left(1-{x^+\over x^-}\right)\qquad &&
b'={\sqrt{g}\over\eta'}f\left(1-{x^{+\prime}\over x^{-\prime}}\right)\cr
c ={\sqrt{g}i\eta\over f x^+}\qquad &&
c'={\sqrt{g}i\eta'\over f x^{+\prime}}\cr
d ={\sqrt{g}x^+\over i\eta}\left(1-{x^-\over x^+}\right)\qquad &&
d'={\sqrt{g}x^{+\prime}\over i\eta'}\left(1-{x^{-\prime}\over x^{+\prime}}\right)
\eea
\bea
a_b =\sqrt{g}\eta_b\qquad &&a_b'=\sqrt{g}\eta_b'\cr
b_b ={x^+\over x^-}{\sqrt{g}\over\eta_b}f\left(1-r{x_b^+\over x_b^-}\right)\qquad &&
b_b'={\sqrt{g}\over\eta_b'}f{x^{+\prime}\over x^{-\prime}}
\left(1-r{x^{+\prime}_b\over x^{-\prime}_b}\right)\cr
c_b ={x^-\over x^+}{\sqrt{g}i\eta_b\over f x^+_b}\qquad &&
c_b'={x^{-\prime}\over x^{+\prime}}{\sqrt{g}i\eta_b'\over f x^{+\prime}_b}\cr
d_b ={\sqrt{g}x_b^+\over i\eta_b}\left(1-{x_b^-\over x_b^+}\right)\qquad &&
d_b'={\sqrt{g}x_b^{+\prime}\over i\eta_b'}\left(1-{x_b^{-\prime}\over x_b^{+\prime}}\right)
\eea
where ${x^{+\prime}\over x^{-\prime}}=e^{-i\theta}$,
${x_b^{+\prime}\over x_b^{-\prime}}={x^+ x_b^+x^{-\prime}\over x^- x_b^-x^{+\prime}}$
and we solve (\ref{magconstraints}) for $\theta$.
Implementing the consequences of invariance under $SU(2|2)^2$ is exactly parallel to the analysis of
\cite{Beisert:2005tm,Beisert:2006qh,Hofman:2007xp}.
For completeness we will review the $S$-matrix describing the scattering of two bulk magnons.
Since the $S$-matrix has to commute with the bosonic $su(2)$ generators, Schur's Lemma implies that it must be proportional
to the identity in each irreducible representation of $su(2)$.
This immediately implies that
\bea
S_{12}|\phi_1^a\phi_2^b\rangle = A_{12}|\phi_{2'}^{\{ a}\phi_{1'}^{b\}}\rangle +
B_{12}|\phi_{2'}^{[ a}\phi_{1'}^{b]}\rangle + {1\over 2}C_{12}\epsilon^{ab}\epsilon_{\alpha\beta}
|\psi_{2'}^\alpha\psi_{1'}^\beta \rangle
\label{frstSmat}
\eea
\bea
S_{12}|\psi_1^\alpha\psi_2^\beta\rangle =D_{12}|\psi_{2'}^{\{ \alpha}\psi_{1'}^{\beta \}}\rangle +
E_{12}|\psi_{2'}^{[ \alpha}\psi_{1'}^{\beta]}\rangle + {1\over 2}F_{12}\epsilon_{ab}\epsilon^{\alpha\beta}
|\phi_{2'}^a\phi_{1'}^b \rangle
\label{scndSmat}
\eea
\bea
S_{12}|\phi_1^a\psi_2^\beta\rangle=G_{12}|\psi_{2'}^\beta\phi_{1'}^a\rangle+H_{12}|\phi_{2'}^a\psi_{1'}^\beta\rangle\cr
S_{12}|\psi_1^\alpha\phi_2^b\rangle=K_{12}|\psi_{2'}^\alpha\phi_{1'}^b\rangle+L_{12}|\phi_{2'}^b\psi_{1'}^\alpha\rangle
\eea
Next, demanding the $S$-matrix commutes with the supercharges implies\cite{Beisert:2005tm,Beisert:2006qh}
\bea
A_{12}&=&S^0_{12}{x_2^+-x_1^-\over x_2^--x_1^+}\cr
B_{12}&=&S^0_{12}{x_2^+-x_1^-\over x_2^--x_1^+}
\left(1-2{1-{1\over x_2^-x_1^+}\over 1-{1\over x_2^- x_1^-}}{x_2^+-x_1^+\over x_2^+-x_1^-}\right)\cr
C_{12}&=&S^0_{12}{2g^2\eta_1\eta_2\over f x_1^+x_2^+}{1\over 1-{1\over x_1^+x_2^+}}
{x_2^--x_1^-\over x_2^--x_1^+}\cr
D_{12}&=&-S^0_{12}\cr
E_{12}&=&-S^0_{12}\left(1-2{1-{1\over x_2^+x_1^-}\over 1-{1\over x_2^-x_1^-}}{x_2^+-x_1^+\over x_2^--x_1^+}
\right)\cr
F_{12}&=&-S^0_{12}{2f(x_1^+-x_1^-)(x_2^+-x_2^-)\over \eta_1\eta_2x_1^- x_2^-}
{1\over 1-{1\over x_1^-x_2^-}}{x_2^+-x_1^+\over x_2^--x_1^+}\cr
G_{12}&=&S^0_{12}{x_2^+-x_1^+\over x_2^--x_1^+}\qquad
H_{12}=S^0_{12}{\eta_1\over\eta_2}{x_2^+-x_2^-\over x_2^--x_1^+}\cr
K_{12}&=&S^0_{12}{\eta_2\over\eta_1}{x_1^+-x_1^-\over x_2^--x_1^+}\qquad
L_{12}=S^0_{12}{x_2^--x_1^-\over x_2^--x_1^+}
\eea
Thus, the $S$-matrix is determined up to an overall phase.
Here we have simply chosen $D_{12}=-S^0_{12}$ which specifies the overall phase.
This overall phase is constrained by crossing symmetry\cite{Janik:2006dc}.
When considering the equations for the reflection/scattering matrix describing the reflection/scattering of a bulk magnon
from a boundary magnon, we need to pay attention to the fact that the central charges of the representation are no
longer swapped between the two magnons.
Rather, the central charges after the reflection are determined by solving (\ref{magconstraints}).
Denote the momenta (equivalently, the central charges) of the boundary and bulk magnons by $p_B$ and $p_b$ before
the reflection, and by $k_B$ and $k_b$ after the reflection.
Denote the reflection/scattering matrix by ${\cal R}$.
Invariance of the reflection/scattering matrix under the bosonic generators implies that
\bea
{\cal R}|\phi_{p_B}^a\phi_{p_b}^b\rangle = A^R_{12}|\phi_{k_B}^{\{ a}\phi_{k_b}^{b\}}\rangle +
B^R_{12}|\phi_{k_B}^{[ a}\phi_{k_b}^{b]}\rangle + {1\over 2}C^R_{12}\epsilon^{ab}\epsilon_{\alpha\beta}
| \psi_{k_B}^\alpha\psi_{k_b}^\beta\rangle
\label{frstRmat}
\eea
\bea
{\cal R}|\psi_{p_B}^\alpha\psi_{p_b}^\beta\rangle =D^R_{12}|\psi_{k_B}^{\{ \alpha}\psi_{k_b}^{\beta \}}\rangle +
E^R_{12}|\psi_{k_B}^{[ \alpha}\psi_{k_b}^{\beta]}\rangle + {1\over 2}F^R_{12}\epsilon_{ab}\epsilon^{\alpha\beta}
|\phi_{k_B}^a\phi_{k_b}^b\rangle
\label{scndRmat}
\eea
\bea
{\cal R}|\phi_{p_B}^a\psi_{p_b}^\beta\rangle=G^R_{12}|\psi_{k_B}^\beta\phi_{k_b}^a\rangle
+H^R_{12}|\phi_{k_B}^a\psi_{k_b}^\beta\rangle\cr\cr
{\cal R}|\psi_{p_B}^\alpha\phi_{p_b}^b\rangle=K^R_{12}|\psi_{k_B}^\alpha\phi_{k_b}^b\rangle
+L^R_{12}|\phi_{k_B}^b\psi_{k_b}^\alpha\rangle
\eea
The analysis now proceeds as above.
The result is
\bea
A^R_{12}&=&
\frac{\eta_1 \eta_2 x_1^{\prime +}x_1^+ (x_1^--x_2^+)\left((x_2^+-rx_2^-)(rx_2^{\prime +}-x_2^{\prime -})x_2^+
+(x_2^--rx_2^+)(x_2^{\prime +}-rx_2^{\prime -})x_2^{\prime +}\right)}{\eta_1' \eta_2' x_2^{\prime +}x_2^+
(x_1^--x_1^+)(x_1^+-x_1^{\prime +})(x_1^+(rx_2^+-x_2^-)+x_2^-(rx_2^--x_2^+))}\cr
B^R_{12}&=&A^R_{12}\left[1+
\frac{2x_2^{\prime -}(x_1^{\prime -}-x_1^{\prime +})}
{x_1^{\prime +}(x_1^--x_2^+)(x_1^{\prime -}x_2^{\prime -}-rx_1^{\prime +}x_2^{\prime +})}
\frac{B_1}{B_2}\right]\cr
B_1&=&
x_2^- x_1^{\prime +} \Big[(x_1^- - x_1^+) (2 x_1^- - x_1^{\prime -}) (x_2^+ x_1^{\prime +}- x_1^+ x_2^+) -
x_1^{\prime +}x_1^- (x_2^+ - r x_2^-) (x_1^- - x_2^+)\Big]
\frac{r x_2^{\prime +} - x_2^{\prime -}}{r x_2^{\prime -} - x_2^{\prime +}}\cr
&+&\Big[x_1^+ x_1^{\prime +}(x_1^- - x_2^+) (x_2^- - r x_2^+) +
(x_1^- - x_1^+) x_2^- x_2^+ (x_1^{\prime +} - x_1^+)\Big]x_1^{\prime -} x_2^{\prime -} \cr
B_2&=&
(rx_2^--x_2^+)\Big[
x_1^+x_2^{\prime -}x_1^{\prime -} \frac{rx_2^+-x_2^-}{rx_2^--x_2^+}
-x_1^{\prime +}x_1^- x_2^-\frac{rx_2^{\prime +}-x_2^{\prime -}}{rx_2^{\prime -}-x_2^{\prime +}} \Big] \cr
C^R_{12}&=&S^0_{12}
{2 \eta_2 \eta_1 C_1\over f x_2^+ (x_1^+-x_1^{\prime +})
(x_1^+ (r x_2^+-x_2^-)+x_2^-(r x_2^--x_2^+)) (x_1^{\prime -} x_2^{\prime -}-r x_1^{\prime +} x_2^{\prime +})}\cr
C_1&=&x_1^{\prime +} {x_1^--x_2^+\over x_1^--x_1^+}
\Big(x_1^{\prime +} x_1^- x_2^-(x_2^+-r x_2^-)(r x_2^{\prime +}-x_2^{\prime -})
+x_1^+ x_1^{\prime -} x_2^{\prime -} (x_2^--r x_2^+)(x_2^{\prime +}-r x_2^{\prime -})\Big)\cr
&+&x_2^- x_2^+ (x_1^+-x_1^{\prime +})\Big (x_1^- (r x_1^{\prime +} x_2^{\prime +}+x_1^{\prime -}
x_2^{\prime -}-2 x_1^{\prime +} x_2^{\prime -})+x_1^{\prime -} x_2^{\prime -} (r x_2^{\prime -}-x_1^{\prime -}
+x_1^{\prime +}-x_2^{\prime +})\Big)\cr
D^R_{12}&=&-S^0_{12}\cr
E^R_{12}&=&-S_{12}^0\left[
1-2x_1^+x_2^{\prime -}
\frac{\frac{x_1^{\prime -}}{x_1^-}(x_1^{\prime -}-x_1^{\prime +}+x_2^{\prime +}-rx_2^{\prime -})
-(x_1^{\prime -}-x_1^{\prime +})-\frac{x_1^{\prime +}x_2^-}{x_1^+ x_2^{\prime -}}
\frac{x_2^+-r x_2^-}{x_2^--rx_2^+}(x_2^{\prime -}-rx_2^{\prime +})}
{\big[x_1^+ +x_2^- \frac{x_2^+-r x_2^-}{x_2^--rx_2^+}\big]
\big[ r x_1^{+\prime} x_2^{+\prime}-x_1^{-\prime} x_2^{-\prime}\big]}
\right]\cr
F^R_{12}&=&S^0_{12}
\frac{2 x_1^+ x_1^{\prime +} f (x_1^{-\prime}-x_1^{+\prime})(x_2^{\prime -}-rx_2^{\prime +})(x_2^--rx_2^+)}
{\eta_1' \eta_2' x_1^- x_1^{-\prime}
\big[x_1^+ (x_2^--r x_2^+)+x_2^- (x_2^+-r x_2^-)\big]
\big[x_1^{-\prime} x_2^{-\prime}-r x_1^{+\prime} x_2^{+\prime}\big]}\cr
&&\times \Bigg[x_1^- -x_1^{\prime -}
+\frac{r x_2^{-}- x_2^{+}}{x_2^--rx_2^+}\frac{x_2^- x_1^-}{x_1^+}
+\frac{x_2^{\prime +}-rx_2^{\prime -}}{x_2^{\prime -}-rx_2^{\prime +}}
\frac{x_1^{\prime -}x_2^{\prime -}}{x_1^{\prime +}}\Bigg]\cr
G^R_{12}&=&S^0_{12}
\frac{\eta_1 x_1^{+}\Big[x_2^+ (rx_2^--x_2^+)(rx_2^{\prime +}-x_2^{\prime -})+x_2^{\prime +}(rx_2^+-x_2^-)
(x_2^{\prime +}-rx_2^{\prime -})\Big]}
{\eta_2' x_2^{\prime +} (x_1^--x_1^+)\Big[x_1^+ (x_2^--r x_2^+)+x_2^- (x_2^+-r x_2^-)\Big]}\cr
H^R_{12}&=&S^0_{12}
\frac{\eta_1 (x_1^{-\prime}-x_1^{+\prime}) \Big[x_1^- x_2^- (r x_2^--x_2^+)+x_1^+ x_1^{-\prime}
(r x_2^+-x_2^-)\Big]}{\eta_1' x_1^{-\prime} (x_1^--x_1^+)\Big[x_1^+ (x_2^--r x_2^+)+x_2^- (x_2^+-r x_2^-)\Big]}\cr
K^R_{12}&=&S^0_{12}
\frac{\eta_2 x_2^- \Big[x_1^- x_1^{+\prime} (r x_2^{+\prime}-x_2^{-\prime})
+x_1^{-\prime} x_2^{-\prime} (r x_2^{-\prime}-x_2^{+\prime})\Big]}
{\eta_2' x_1^{-\prime} x_2^{-\prime} \Big[x_1^+ (x_2^--r x_2^+)+x_2^- (x_2^+-r x_2^-)\Big]}\cr
L^R_{12}&=&S^0_{12}
\frac{\eta_2 x_2^- (x_1^--x_1^{-\prime}) (x_1^{-\prime}-x_1^{+\prime})}
{\eta_1' x_1^{-\prime} \Big[x_1^+ (x_2^--r x_2^+)+x_2^-(x_2^+-r x_2^-)\Big]}\label{Smattrix}
\eea
where
\bea
{x_1^+\over x_1^-}=e^{ip_b}\qquad {x_2^+\over x_2^-}=e^{ip_B}\,,
\eea
\bea
{x_1^{\prime +}\over x_1^{\prime -}}=e^{ik_b}\qquad {x_2^{\prime +}\over x_2^{\prime -}}=e^{ik_B}\,.
\eea
It is simple to verify that this $R$ matrix is unitary for any value of $r$ and any momenta, and further that it reproduces
the bulk $S$ matrix for $r=1$ and the reflection matrix for scattering from a maximal giant graviton for $r=0$.
In performing this check we compared to the expressions in \cite{Correa:2009dm}.
To provide a further check of these expressions, we have considered the case that the boundary and the bulk magnons
have momenta that sum to $\pi$, as shown in Figure \ref{fig:easyscatter}.
In this situation it is very simple to compute the final momenta of the two magnons: they are simply minus the initial
momenta.
\begin{figure}[h]
\begin{center}
\includegraphics[height=4.5cm,width=4.5cm]{easyscatter}
\caption{A bulk magnon scatters with a boundary magnon.
The sum of the momenta of the two magnons is $\pi$.
Here we only show two of the magnons; we indicate them in red before the scattering and in green after the scattering.
In the process the direction of the momentum of both magnons is reversed.}
\label{fig:easyscatter}
\end{center}
\end{figure}
In Appendix \ref{BA} we have computed the value of ${1\over 2}\left(1+{B^R_{12}\over A^R_{12}}\right)$ at
one loop.
We find this agrees perfectly with the answer obtained from (\ref{Smattrix}).
To perform this check, one needs to express $x^{\pm}$ in terms of $p$ by solving $x^+=x^- e^{ip}$ and
(\ref{secatyp}) for the boundary magnon or (\ref{atyp}) for the bulk magnon.
Doing this we find
\bea
x^-=e^{-i{p\over 2}}\left( {1\over 2 g \sin{p\over 2}}+2g \sin{p\over 2}\right)+O(g^2),
\eea
for a bulk magnon and
\bea
x^-= -{i\over g(r-e^{ip})}+ige^{-i p}(r-e^{ip}){re^{ip}-1\over r+e^{ip}}+O(g^2)
\eea
for a boundary magnon.
Inserting these expansions into (\ref{Smattrix}) and keeping only the leading order (which is $g^0$) at small $g$,
we reproduce (\ref{CCh}) for any allowed value of $r$.
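As a quick consistency sketch (assuming, as is standard, that the bulk magnon constraint (\ref{atyp}) reads $x^+ + 1/x^+ - x^- - 1/x^- = i/g$; this is our reading, not spelled out here), one can check numerically that the bulk expansion above satisfies both $x^+ = x^- e^{ip}$ and the constraint, up to the stated $O(g^2)$ truncation error:

```python
import cmath
import math

def x_minus_bulk(p, g):
    # leading-order solution quoted in the text
    s = math.sin(p / 2)
    return cmath.exp(-1j * p / 2) * (1 / (2 * g * s) + 2 * g * s)

p, g = 0.7, 1e-3
xm = x_minus_bulk(p, g)
xp = xm * cmath.exp(1j * p)          # enforces x^+ = x^- e^{ip} exactly

# residual of the assumed magnon constraint; it should vanish up to the
# truncation error of the small-g expansion
residual = (xp + 1 / xp) - (xm + 1 / xm) - 1j / g
assert abs(residual) < 100 * g**3
```

The residual is of order $g^3$, confirming the truncated expansion to the order claimed.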
It is a simple matter to verify that the boundary Yang-Baxter equation is not satisfied by this reflection matrix, indicating
that the system is not integrable.
This conclusion follows immediately upon verifying that changing the order in which the bulk magnons scatter with the boundary
magnon leads to final states in which the magnons have different momenta.
Consequently, the integrability is lost precisely because the scattering of the boundary and bulk magnons, for boundary
magnons attached to a non-maximal giant graviton, is inelastic.
\section{Links to the Double Coset Ansatz and Open Spring Theory}
There is an interesting limiting case that we can consider, obtained by taking each open string word to simply be a single $Y$,
i.e. each open string is a single magnon.
In this case one must use the correlators computed in \cite{Bhattacharyya:2008rb,Bhattacharyya:2008xy} as opposed
to the correlators computed in \cite{de Mello Koch:2007uu}.
The case with distinguishable open strings is much simpler: when the correlators are computed, only contractions
between corresponding open strings contribute. When the open strings are identical, it is possible to contract any two of them.
In this case one must consider operators that treat these ``open strings'' symmetrically, leading to the operators constructed
in \cite{Bhattacharyya:2008rb}.
In a specific limit, the action of the dilatation operator factors into an action on the $Z$s and an action on the
$Y$s \cite{Carlson:2011hy,Koch:2011hb}.
The action on the $Y$s can be diagonalized by Fourier transforming to a double coset which describes how the
magnons are attached to the giant gravitons\cite{Koch:2011hb,deMelloKoch:2012ck}.
For an operator labeled by a Young diagram $R$ with $p$ long rows or columns, the action on the $Z$s then reduces to the
motion of $p$ particles along the real line with their coordinates given by the lengths of the Young diagram $R$,
interacting through quadratic pair-wise interaction potentials \cite{deMelloKoch:2011ci}.
For interesting related work see \cite{Lin:2014yaa}.
Our goal in this section is to explain the string theory interpretation of these results.
The conclusion of \cite{Koch:2011hb,deMelloKoch:2012ck} is that eigenstates of the dilatation operator given by operators
corresponding to Young diagrams $R$ that have $p$ long rows or columns can be labeled by a graph with $p$ vertices and
directed edges.
The number of directed edges matches the number of magnons $Y$ used to construct the operator.
These graphs have a natural interpretation in terms of the Gauss Law expected from the worldvolume theory of the giant graviton
branes\cite{Balasubramanian:2004nb}.
Since the giant graviton has a compact world volume, the Gauss Law implies the total charge on the giant's world volume vanishes.
Each string end point is charged, so this is a constraint on the possible open string configurations: the number of strings emanating
from the giant must equal the number of strings terminating on the giant.
Thus, the graphs labeling the operators are simply enumerating the states consistent with the Gauss Law.
To stress this connection we use the language ``Gauss graphs'' for the labels: we refer to the vertices of the graph as branes,
since each one is a giant graviton brane, and we identify the directed edges as strings, since each is a magnon.
The action of the dilatation operator is nicely summarized by the Gauss graph labeling the operator.
Count the number $n_{ij}$ of strings (of either orientation) stretching between branes $i$ and $j$ in the Gauss graph.
The action of the dilatation operator on the Gauss graph operator is then given by
\bea\label{actiondil}
DO_{R,r}(\sigma) = -{g_{YM}^2\over 8\pi^2} \sum_{i<j}n_{ij}(\sigma)
\Delta_{ij} O_{R,r}(\sigma)\,.
\eea
The operator $\Delta_{ij}$ is defined in Appendix \ref{OscJust}.
For a proof of this, see \cite{Koch:2011hb,deMelloKoch:2012ck}.
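The Gauss Law constraint and the counting of $n_{ij}$ are simple enough to illustrate concretely. The sketch below uses a hypothetical three-brane Gauss graph (the edge list is our own illustrative choice) and checks that the number of strings emanating from each brane equals the number terminating on it, then counts strings of either orientation:

```python
from collections import Counter

# a hypothetical Gauss graph on 3 branes: directed edges (from_brane, to_brane)
edges = [(1, 2), (2, 1), (2, 3), (3, 2), (3, 1), (1, 3)]

# Gauss Law: on each compact worldvolume the total charge vanishes, so the
# number of strings emanating from a brane equals the number terminating on it
out_deg = Counter(src for src, dst in edges)
in_deg = Counter(dst for src, dst in edges)
assert all(out_deg[v] == in_deg[v] for v in set(out_deg) | set(in_deg))

# n_ij: strings of either orientation stretching between branes i and j,
# the coefficient entering the dilatation operator action
n = Counter(frozenset(e) for e in edges if e[0] != e[1])
assert n[frozenset({1, 2})] == 2
```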
To obtain anomalous dimensions one needs to solve an eigenproblem on the $R,r$ labels, which has been accomplished
in \cite{deMelloKoch:2011ci} in complete generality.
For three open strings stretched between three giant gravitons we have to solve the following eigenvalue problem
\bea
&&{g_{YM}^2\over 8\pi^2}\Big[(2N-c_1-c_2+3)O(c_1,c_2,c_3)-
\sqrt{(N-c_1+1)(N-c_2+1)}O(c_1+1,c_2-1,c_3)\cr
&&- \sqrt{(N-c_1)(N-c_2+2)}O(c_1-1,c_2+1,c_3)\Big]\cr
&&+{g_{YM}^2\over 8\pi^2}
\Big[(2N-c_2-c_3+5)O(c_1,c_2,c_3)
-\sqrt{(N-c_2+1)(N-c_3+3)}O(c_1,c_2-1,c_3+1)\cr
&&-\sqrt{(N-c_2+2)(N-c_3+2)}O(c_1,c_2+1,c_3-1)\Big]\cr
&&+{g_{YM}^2\over 8\pi^2}\Big[
(2N-c_1-c_3+4)O(c_1,c_2,c_3)
-\sqrt{(N-c_3+2)(N-c_1+1)}O(c_1+1,c_2,c_3-1)\cr
&&-\sqrt{(N-c_3+3)(N-c_1)}O(c_1-1,c_2,c_3+1)\Big]\cr
&&=\gamma O(c_1,c_2,c_3)
\label{DCspect}
\eea
where $c_1$, $c_2$ and $c_3$ are the column lengths, which equal the momenta of the three giant gravitons, and $\gamma$
is the anomalous dimension.
At large $N$, approximating for example $O(c_1,c_2,c_3)=O(c_1+1,c_2,c_3-1)$ which amounts to ignoring
back reaction on the giant gravitons, we have
\bea
&&{g_{YM}^2 N\over 8\pi^2}\Big[\sqrt{1-{c_1\over N}}-\sqrt{1-{c_2\over N}}\Big]^2 O(c_1,c_2,c_3)
+{g_{YM}^2 N\over 8\pi^2}\Big[\sqrt{1-{c_2\over N}}-\sqrt{1-{c_3\over N}}\Big]^2 O(c_1,c_2,c_3)\cr
&&+{g_{YM}^2 N\over 8\pi^2}\Big[
\sqrt{1-{c_3\over N}}-\sqrt{1-{c_1\over N}}\Big]^2 O(c_1,c_2,c_3)
=\gamma O(c_1,c_2,c_3)\,.\label{sss}
\eea
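The reduction from (\ref{DCspect}) to this form can be checked numerically: for a slowly varying wavefunction, so that $O(c_1,c_2,c_3)$ may be treated as constant, the coefficient in the first bracket of (\ref{DCspect}) reduces to the first term above at large $N$. A minimal numerical sketch of this (our own check, with sample momenta):

```python
import math

N = 10**8
c1, c2 = int(0.3 * N), int(0.6 * N)    # sample giant graviton momenta of order N

# coefficient of the first bracket in (DCspect), acting on a constant O(c1,c2,c3)
bracket = (2 * N - c1 - c2 + 3
           - math.sqrt((N - c1 + 1) * (N - c2 + 1))
           - math.sqrt((N - c1) * (N - c2 + 2)))

# the claimed large N limit
limit = N * (math.sqrt(1 - c1 / N) - math.sqrt(1 - c2 / N))**2

assert abs(bracket - limit) / limit < 1e-3
```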
The Gauss graph associated with this operator has a string stretching between the brane of momentum $c_1$ and the brane
of momentum $c_3$, a string stretching between the brane of momentum $c_1$ and the brane of momentum $c_2$, and a string
stretching between the brane of momentum $c_2$ and the brane of momentum $c_3$.
On the string theory side, since our magnons don't carry any momentum, we have three giants moving in the plane with
magnons stretched radially between them.
Identifying the central charges, we find they are radial vectors with length equal to the distance between the giants.
With these central charges we can write down the energy
\bea
E=\sqrt{1+2\lambda (r_1-r_2)^2}+\sqrt{1+2\lambda (r_1-r_3)^2}+\sqrt{1+2\lambda (r_3-r_2)^2}\,.
\label{StringforDC}
\eea
Using the usual translation between the momentum of the giant graviton and the radius of the circle it moves on
\bea
r_i=\sqrt{1-{c_i\over N}}\qquad i=1,2,3
\eea
we find that the order $\lambda$ term in the expansion of (\ref{StringforDC}) precisely matches the gauge theory result
(\ref{sss}).
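This matching is easy to confirm numerically: the $O(\lambda)$ coefficient of (\ref{StringforDC}) equals $\sum_{i<j}(r_i-r_j)^2$, which reproduces (\ref{sss}) once $\lambda$ is identified with $g_{YM}^2 N/(8\pi^2)$, an identification we read off from the matching itself. A finite-difference sketch with sample radii:

```python
import math

r = [0.9, 0.6, 0.3]                  # sample radii r_i = sqrt(1 - c_i/N)
pairs = [(0, 1), (0, 2), (2, 1)]

def E(lam):
    # the string energy (StringforDC)
    return sum(math.sqrt(1 + 2 * lam * (r[i] - r[j])**2) for i, j in pairs)

lam = 1e-6
coeff = (E(lam) - E(0)) / lam        # numerical O(lambda) coefficient
gauge = sum((r[i] - r[j])**2 for i, j in pairs)
assert abs(coeff - gauge) < 1e-4
```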
If we don't ignore back reaction on the giant graviton, we find that (\ref{DCspect}) leads to a harmonic oscillator eigenvalue
problem.
In this case, we are keeping track of the $Z$s slipping past a magnon, from one giant onto the next.
In this way, one of the giants will grow and one will shrink thereby changing the radius of their orbits and hence the length
of the magnon stretched between them.
In this process we would expect the energy to vary continuously, which is exactly what we see at large $N$.
A specific harmonic oscillator state (see \cite{deMelloKoch:2011ci} for details) corresponds to two giant
gravitons executing a periodic motion.
In one period, the giants first come towards each other and then move away from
each other again.
Exciting these oscillators to any finite level, we find an energy that is of order the 't Hooft coupling divided by $N$.
These very small energies translate into motions with a huge period.
There is an important point worth noting.
The harmonic oscillator problem that arises from (\ref{DCspect}) is obtained by expanding (\ref{DCspect}) assuming
that $c_1-c_2$ is order $\sqrt{N}$ and $c_1,c_2$ are of order $N$.
The oscillator Hamiltonian then arises as a consequence of (and depends sensitively on) the order $1$ shifts in the
coefficients of the terms in (\ref{DCspect}).
Thus, to really trust the oscillator Hamiltonian, we must be sure that (\ref{DCspect}) is accurate enough that
the order $1$ term obtained by expanding it is reliable.
This is indeed the case, as we discuss in Appendix \ref{OscJust}.
\section{Conclusions}
In this study we have used descriptions of the action of the dilatation operator, derived using an approach that
relies heavily on group representation theory techniques, to study the anomalous dimensions of operators whose bare
dimension grows as $N$ as the large $N$ limit is taken.
For these operators, even just to capture the leading large $N$ limit, we are forced to sum much more than just the
planar diagrams and this is precisely what the representation theoretic approach manages to do.
We have demonstrated an exact agreement with results coming from the dual gravity description, which is convincing
evidence in support of this approach.
It gives definite correct results in a systematic large $N$ expansion, demonstrating that the representation theoretic methods
provide a useful language and calculational framework with which to tackle the kinds of large $N$ but non-planar limits
we have studied in this article.
Of course, we have mainly investigated the leading large $N$ limit and the computation of ${1\over N}$ corrections
is an interesting problem that we hope to return to in the future.
The progress that was made in understanding the planar limit of ${\cal N}=4$ super Yang-Mills theory is impressive
(see \cite{Beisert:2010jr} for a comprehensive review).
Of course, much of the progress is thanks to integrability.
There are however results that do not rely on integrability, only on the symmetries of the theory.
In our study we clearly have a genuine extension of methods (giant magnons, the $SU(2|2)$ scattering matrix) that worked
in the planar limit, into the large $N$ but non-planar setting.
Further, even though integrability does not persist, it is present when the radius $r$ of the circle on which the graviton
moves is $r=0$ (maximal giant graviton) or $r=1$ (point-like giant graviton).
If we perturb about these two values of $r$, we are departing from integrability in a controlled way and hence we might
still be able to exploit integrability.
For more general values of $r$, we have managed to find asymptotic eigenstates in which the magnons are well separated
and we expect these to be very good approximate eigenstates.
Indeed, anomalous dimensions computed using these asymptotic eigenstates exactly agree with the dual string theory
energies.
Without the power of integrability it does not seem to be easy to patch together asymptotic states to obtain
exact eigenstates.
We have a clearer understanding of the non-planar integrability discovered in
\cite{Koch:2010gp,DeComarmond:2010ie,Carlson:2011hy,Koch:2011hb,deMelloKoch:2012ck,deMelloKoch:2011ci}.
The magnons in these systems remain separated and hence free, so they are actually non-interacting.
One of the giants would need to lose all of its momentum before any two magnons would scatter.
It is satisfying that the gauge theory methods based on group representation theory are powerful enough
to detect this integrability directly in the field theory.
The results we have found here give the all loops prediction for the anomalous dimensions of these operators.
In the limit when we consider a very large number of fields there would seem to be many more circumstances
in which one could construct operators that are ultimately dual to free systems.
This is an interesting avenue that deserves careful study, since these simple free systems may provide convenient
starting points, to which interactions may be added systematically.
A possible instability associated to open strings attached to giants has been pointed out in \cite{Berenstein:2006qk}.
In this case it seems that the spectrum of the spin chain becomes continuous, the ground state is no longer BPS and
supersymmetry is broken.
The transition that removes the BPS state is simply that the gap from the ground state to the continuum closes.
Of course, the spectrum of energies is discrete but this is only evident at subleading orders in $1/N$ when one
accounts for the back reaction of the giant graviton-branes.
The question of whether these BPS states with given quantum numbers exist has been linked in \cite{Berenstein:2014zxa}
to a walls-of-stability type description \cite{Denef:2000nb}.
It would be interesting to see if these issues can be understood using the methods of this article.
\noindent
{\it Acknowledgements:}
We would like to thank David Berenstein, Sanjaye Ramgoolam, Joao Rodrigues and Costas Zoubos
for useful discussions and/or correspondence.
This work is based upon research supported by the South African Research Chairs
Initiative of the Department of Science and Technology and National Research Foundation.
Any opinion, findings and conclusions or recommendations expressed in this material
are those of the authors and therefore the NRF and DST do not accept any liability
with regard thereto.
\begin{appendix}
\section{Two Loop Computation of Boundary Magnon Energy}\label{TwoLoop}
The dilatation operator, in the su$(2)$ sector, can be expanded as\cite{Minahan:2002ve}
\bea
D=\sum_{k=0}^\infty \left( {g_{YM}^2\over 16\pi^2}\right)^k D_{2k}=
\sum_{k=0}^\infty g^{2k} D_{2k}\, ,
\eea
where the tree level, one loop and two loop contributions are
\bea
D_0 = \Tr\left(Z {\partial\over \partial Z}\right)+\Tr\left(Y {\partial\over \partial Y}\right)\, ,
\eea
\bea
D_2 = -2 : \Tr \left( \left[ Z,Y\right]\left[{\partial\over \partial Z},{\partial\over \partial Y}\right]\right) :\, ,
\eea
\bea
D_4 =D_4^{(a)}+D_4^{(b)}+D_4^{(c)}\,,
\eea
\bea
D_4^{(a)}= -2 :\Tr \left(\left[\left[Y,Z\right],{\partial\over \partial Z}\right]
\left[\left[{\partial\over \partial Y},{\partial\over \partial Z}\right],Z\right]\right):\cr
D_4^{(b)}=-2 :\Tr \left(\left[\left[Y,Z\right],{\partial\over\partial Y}\right]
\left[\left[{\partial\over\partial Y},{\partial\over\partial Z}\right],Y\right]\right):\cr
D_4^{(c)}=-2 :\Tr \left(\left[\left[Y,Z\right],T^a \right]
\left[\left[{\partial\over\partial Y},{\partial\over\partial Z}\right],T^a\right]\right):\, .
\eea
The boundary magnon energy we computed above came from $D_2$.
By computing the contribution from $D_4$ we can compare to the second term in the expansion of the string energies.
Since we are using the planar approximation when contracting fields in the open string words, in the limit of well separated magnons,
the action of $D_4$ can again be written as a sum of terms, one for each magnon.
Thus, if we compute the action of $D_4$ on a state $|1^{n+1},1^n,1^n;\{n_1,n_2\}\rangle$ with a single string and a
single bulk magnon, it is a trivial step to obtain the action of $D_4$ on the most general state.
A convenient way to summarize the result is to quote the action of $D_4$ on a state for which the magnons have momenta
$q_1,q_2,q_3$.
Of course, we will have to choose the $q_i$ so that the total central charge vanishes as explained in the article above.
Thus we could replace $q_3\to (q_1 q_2)^{-1}$ in the formulas below.
We will write the answer for a general giant graviton system with strings attached.
For the boundary terms, each boundary magnon corresponds to an end point of the string and each end point is
associated with a specific box in the Young diagram.
Denote the factor of the box corresponding to the first magnon by $c_F$ and the factor of the box associated to the
last magnon by $c_L$.
A straightforward but somewhat lengthy computation, using the methods developed in \cite{de Mello Koch:2007uv,Bekker:2007ea}
gives
\bea
&& (D_4)_{\rm first\,\, magnon}|\psi (q_1,q_2,q_3)\rangle =\cr
&&\,\,\,\,\,\qquad -{g^4\over 2}\left[\left( 1+{c_{F}\over N}\right)^2
-2 (1+{c_{F}\over N})\sqrt{c_{F}\over N}(q_1+q_1^{-1})
+ {c_{F}\over N}(q_1^2+2+q_1^{-2})\right]|\psi (q_1,q_2,q_3)\rangle\cr
&&=-{g^4\over 2}\left[ 1+{c_F\over N}-\sqrt{c_F\over N}(q_1+q_1^{-1})\right]^2
|\psi(q_1,q_2,q_3)\rangle\cr
&&=-{1\over 2}\left[ g^2\left( 1+{c_F\over N}-\sqrt{c_F\over N}(q_1+q_1^{-1})\right)\right]^2
|\psi(q_1,q_2,q_3)\rangle
\eea
in perfect agreement with (\ref{stringspect}).
The term $D^{(b)}_4$ does not make a contribution to the action on distant magnons, since we sum only the planar
open string word contractions.
The remaining terms $D^{(a)}_4,D^{(c)}_4$ both make a contribution to the action on distant magnons.
For completeness note that
\bea
(D_4)_{\rm bulk\,\, magnon}|\psi(q_1,q_2,q_3)\rangle = -{1\over 2}\left[ 2g^2\left( 2-(q_2+q_2^{-1})\right)\right]^2
|\psi(q_1,q_2,q_3)\rangle\,.
\eea
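For the bulk magnon this is the expansion of the familiar dispersion relation: with $x = 2g^2(2 - q_2 - q_2^{-1}) = 8g^2\sin^2(p/2)$ the one loop eigenvalue, the tree plus one plus two loop result $1 + x - x^2/2$ matches $\sqrt{1+2x} = \sqrt{1+16g^2\sin^2(p/2)}$ through order $g^4$. A numerical sketch (assuming the standard bulk magnon dispersion, which the text does not restate here):

```python
import math

p, g = 1.1, 0.01
x = 2 * g**2 * (2 - 2 * math.cos(p))   # = 8 g^2 sin^2(p/2), since q + 1/q = 2 cos p
exact = math.sqrt(1 + 2 * x)           # assumed dispersion sqrt(1 + 16 g^2 sin^2(p/2))
two_loop = 1 + x - 0.5 * x**2          # tree + one loop + two loop contributions
assert abs(exact - two_loop) < 2 * x**3
```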
\section{The difference between simple states and eigenstates vanishes at large $N$}\label{nodiff}
In this section we want to quantify the claim made in section 4 that the difference between our simple states and our
exact eigenstates vanishes in the large $N$ limit.
We will do this by computing the difference between the simple states and eigenstates and observing this
difference has a norm that goes to zero in the large $N$ limit.
For simplicity, we will consider a two magnon state.
The generalization to many magnon states is straightforward.
Our simple states have the form
\bea
|q\rangle &=&{\cal N}\Big(\sum_{m_1=0}^{J-1}\sum_{m_2=0}^{m_1}q^{m_1-m_2}
|1^{n+m_1-m_2+1},1^{n+m_1-m_2},1^{n+m_1-m_2};\{ J-m_1+m_2\}\rangle\cr
&+&\sum_{m_2=0}^{J-1}\sum_{m_1=0}^{m_2}q^{m_1-m_2}
|1^{n+J+m_1-m_2+1},1^{n+J+m_1-m_2},1^{n+J+m_1-m_2};\{ m_2-m_1\}\rangle
\Big)\,.
\eea
Requiring that $\langle q|q\rangle =1$ we find
\bea
{\cal N}={1\over J\sqrt{J+1}}\,.\label{NrmStts}
\eea
With this normalization we find that the simple states are orthogonal
\bea
\langle q_a |q_b\rangle =\delta_{k_a k_b}+O\left({1\over J}\right)\qquad {\rm where}\qquad q_a=e^{i{2\pi k_a\over J}},
\quad q_b=e^{i{2\pi k_b\over J}}\,.
\eea
This is perfectly consistent with the fact that in the planar limit the lattice states, given by
$|1^{n+m_1-m_2+1},1^{n+m_1-m_2},1^{n+m_1-m_2};\{ J-m_1+m_2\}\rangle$ are orthogonal and our simple states
are simply a Fourier transform of these.
Our eigenstates have the form (we will see shortly that the normalization in the equation below is the same
as the normalization in (\ref{NrmStts}))
\bea
&&|\psi(q)\rangle ={\cal N}\Big(
\sum_{m_2=0}^{\infty}\sum_{m_1=0}^{m_2}f(m_2) q^{m_1-m_2}
|1^{n+J+m_1-m_2+1},1^{n+J+m_1-m_2},1^{n+J+m_1-m_2};\{ m_2-m_1\}\rangle\cr
&+&\sum_{m_2=0}^{\infty}\sum_{m_1=0}^{J+m_2} f(m_1)f(J-m_1+m_2)q^{m_1-m_2}
|1^{n+m_1-m_2+1},1^{n+m_1-m_2},1^{n+m_1-m_2};\{ J-m_1+m_2\}\rangle\Big)\cr
\cr
&&\equiv |q\rangle + |\delta q\rangle
\eea
where
\bea
&&|\delta q\rangle ={\cal N}\Big(
\sum_{m_2=J}^{n+J+1}\sum_{m_1=0}^{m_2}f(m_2) q^{m_1-m_2}
|1^{n+J+m_1-m_2+1},1^{n+J+m_1-m_2},1^{n+J+m_1-m_2};\{ m_2-m_1\}\rangle\cr
&+&\sum_{m_1=J}^{J+m_2}\sum_{m_2=0}^{n+m_1} f(J-m_1+m_2)f(m_1)q^{m_1-m_2}
|1^{n+m_1-m_2+1},1^{n+m_1-m_2},1^{n+m_1-m_2};\{ J-m_1+m_2\}\rangle\Big)\cr
&+&\sum_{m_1=0}^{J-1}\sum_{m_2=m_1+1}^{n+m_1} f(J-m_1+m_2)q^{m_1-m_2}
|1^{n+m_1-m_2+1},1^{n+m_1-m_2},1^{n+m_1-m_2};\{ J-m_1+m_2\}\rangle\Big)\cr
&=&{\cal N}\Big(
\sum_{m_2=J}^{J+\delta J}\sum_{m_1=0}^{m_2}f(m_2) q^{m_1-m_2}
|1^{n+J+m_1-m_2+1},1^{n+J+m_1-m_2},1^{n+J+m_1-m_2};\{ m_2-m_1\}\rangle\cr
&+&\sum_{m_1=J}^{l_-}\sum_{m_2=0}^{J+\delta J} f(J-m_1+m_2)f(m_1)q^{m_1-m_2}
|1^{n+m_1-m_2+1},1^{n+m_1-m_2},1^{n+m_1-m_2};\{ J-m_1+m_2\}\rangle\Big)\cr
&+&\sum_{m_1=0}^{J-1}\sum_{m_2=m_1+1}^{m_1+\delta J} f(J-m_1+m_2)q^{m_1-m_2}
|1^{n+m_1-m_2+1},1^{n+m_1-m_2},1^{n+m_1-m_2};\{ J-m_1+m_2\}\rangle\Big)\nonumber
\eea
and $l_-$ is the smaller of $J+m_2$ and $J+\delta J$.
It is rather simple to see that $|\delta q\rangle$ is given by a sum of $O(J)$ terms and that each term
has a coefficient of order $\delta J$.
Consequently, up to an overall constant factor $c_{\delta q}$ which is independent of $J$, we can bound
the norm of $|\delta q\rangle$ as
\bea
\langle\delta q|\delta q\rangle \le c_{\delta q} J(\delta J)^2{\cal N}^2=c_{\delta q}{(\delta J)^2\over J (J+1)}
\eea
which goes to zero in the large $J$ limit, proving our assertion that the difference between the simple states and the
large $N$ eigenstates vanishes in the large $N$ limit.
\section{Review of Dilatation Operator Action}\label{OscJust}
The studies \cite{Koch:2010gp,DeComarmond:2010ie} have computed the dilatation operator action without invoking
the distant corners approximation.
The only approximation made in these studies is that correlators of operators with $p$ long rows/columns with operators
that have $p$ long rows/columns plus some short rows/columns vanish in the large $N$ limit.
These results are useful since they provide data against which the distant corners approximation could be compared.
Further, we have demonstrated that the action of the dilatation operator reduces to a set of decoupled harmonic oscillators
in \cite{Carlson:2011hy,Koch:2011hb,deMelloKoch:2012ck,deMelloKoch:2011ci}.
However, to obtain this result we needed to expand one of the factors in the dilatation operator to subleading order.
The agreement of the resulting spectrum\footnote{One can also compare the states that have a definite scaling dimension.
The states obtained in the distant corners approximation are in perfect agreement with the states obtained in
\cite{Koch:2010gp,DeComarmond:2010ie} by a numerical diagonalization of the dilatation operator.} is strong evidence
that the distant corners approximation is valid.
It is worth discussing these details and explaining why we do indeed obtain the correct large $N$ limit.
This point is not made explicitly in \cite{Carlson:2011hy,Koch:2011hb,deMelloKoch:2012ck,deMelloKoch:2011ci}.
In terms of operators belonging to the $SU(2)$ sector and normalized to have a unit two point function, the action of the
one loop dilatation operator
$$
DO_{R,(r,s)}(Z,Y)=\sum_{T,(t,u)} N_{R,(r,s);T,(t,u)}O_{T,(t,u)}(Z,Y)
$$
is given by
{\small
$$
N_{R,(r,s);T,(t,u)}= - g_{YM}^2\sum_{R'}{c_{RR'} d_T n m\over d_{R'} d_t d_u (n+m)}
\sqrt{f_T \, {\rm hooks}_T\, {\rm hooks}_r \, {\rm hooks}_s \over f_R \, {\rm hooks}_R\, {\rm hooks}_t\, {\rm hooks}_u}\times
$$
$$
\times\Tr\Big(\Big[ \Gamma_R((n,n+1)),P_{R\to (r,s)}\Big]I_{R'\, T'}\Big[\Gamma_T((n,n+1)),P_{T\to (t,u)}\Big]I_{T'\, R'}\Big) \, .
$$
}
The above formula is exact.
After using the distant corners approximation to simplify the trace and prefactor, this becomes
\bea
D O_{R,(r,s)\mu_1\mu_2}=-g_{YM}^2\sum_{u\nu_1\nu_2}\sum_{i<j}\delta_{\vec{m},\vec{n}}M^{(ij)}_{s\mu_1\mu_2 ; u\nu_1\nu_2}\Delta_{ij}
O_{R,(r,u)\nu_1\nu_2}\,.
\label{factoredD}
\eea
Notice that we have a factorized action: the $\Delta_{ij}$ (explained below) acts only on the Young diagrams $R,r$ and
\bea
M^{(ij)}_{s\mu_1\mu_2 ; u\nu_1\nu_2}&&={m\over\sqrt{d_s d_u}}\left(
\langle \vec{m},s,\mu_2\, ;\, a|E^{(1)}_{ii}|\vec{m},u,\nu_2\, ;\, b\rangle
\langle \vec{m},u,\nu_1\, ;\, b|E^{(1)}_{jj}|\vec{m},s,\mu_1\, ;\, a\rangle\right.\cr
&&\left. +
\langle \vec{m},s,\mu_2\, ;\, a|E^{(1)}_{jj}|\vec{m},u,\nu_2\, ;\, b\rangle
\langle \vec{m},u,\nu_1\, ;\, b|E^{(1)}_{ii}|\vec{m},s,\mu_1\, ;\, a\rangle\right)
\eea
where $a$ and $b$ are summed, acts only on the $s,\mu_1,\mu_2$ labels of the restricted Schur polynomial.
$a$ labels states in the irreducible representation $s$ and $b$ labels states in the irreducible representation $u$.
To spell out the action of the operator $\Delta_{ij}$ it is useful to split it into three terms
\bea
\Delta_{ij}=\Delta_{ij}^{+}+\Delta_{ij}^{0}+\Delta_{ij}^{-}\,.
\eea
Denote the row lengths of $r$ by $r_i$ and the row lengths of $R$ by $R_i$.
Introduce the Young diagram $r_{ij}^+$ obtained from $r$ by removing a box from row $j$ and adding it to row $i$.
Similarly $r_{ij}^-$ is obtained by removing a box from row $i$ and adding it to row $j$.
In terms of these Young diagrams we have
\bea
\Delta_{ij}^{0}O_{R,(r,s)\mu_1\mu_2} = -(2N+R_i+R_j-i-j)O_{R,(r,s)\mu_1\mu_2}\,,
\label{0term}
\eea
\bea
\Delta_{ij}^{+}O_{R,(r,s)\mu_1\mu_2} = \sqrt{(N+R_i-i)(N+R_j-j+1)}O_{R^+_{ij},(r^+_{ij},s)\mu_1\mu_2}\,,
\label{pterm}
\eea
\bea
\Delta_{ij}^{-}O_{R,(r,s)\mu_1\mu_2} = \sqrt{(N+R_i-i+1)(N+R_j-j)}O_{R^-_{ij},(r^-_{ij},s)\mu_1\mu_2}\,.
\label{mterm}
\eea
As a matrix $\Delta_{ij}$ has matrix elements
\bea
&&\Delta^{R,r ; T,t}_{ij} =
\sqrt{(N+R_i-i)(N+R_j-j+1)}\delta_{T,R^+_{ij}}\delta_{t,r^+_{ij}}\cr
&&+\sqrt{(N+R_i-i+1)(N+R_j-j)}\delta_{T,R^-_{ij}}\delta_{t,r^-_{ij}}
-(2N+R_i+R_j-i-j)\delta_{T,R}\delta_{t,r}\,.\cr
&&\label{ExtD}
\eea
In terms of these matrix elements we can write (\ref{factoredD}) as
\bea
D O_{R,(r,s)\mu_1\mu_2}=-g_{YM}^2\sum_{T,(t,u)\nu_1\nu_2}\sum_{i<j}\delta_{\vec{m},\vec{n}}
M^{(ij)}_{s\mu_1\mu_2 ; u\nu_1\nu_2}\,
\Delta^{R,r ; T,t}_{ij} O_{T,(t,u)\nu_1\nu_2}\,.
\label{factoredDsecond}
\eea
Although the distant corners approximation has been used to extract the large $N$ value of
$M^{(ij)}_{s\mu_1\mu_2 ; u\nu_1\nu_2}$, the action of $\Delta^{R,r ; T,t}_{ij}$ is computed exactly.
In particular, the coefficients appearing in (\ref{ExtD}) are simply the factors associated with the boxes that are added or
removed by $\Delta^{R,r ; T,t}_{ij}$, and hence in developing a systematic large $N$ expansion for $\Delta^{R,r ; T,t}_{ij}$
we can trust the shifts of numbers of order $N$ by numbers of order 1.
The limit in which the dilatation operator reduces to sets of decoupled oscillators corresponds to the limit in which the difference
between the row (or column) lengths of Young diagram $R$ are fixed to be $O(\sqrt{N})$ while the row lengths themselves
are order $N$.
The continuum variables are then
\bea
x_i={R_{i+1}-R_i\over\sqrt{R_1}}\qquad i=1,2,\cdots,p-1
\eea
when $R$ has $p$ rows (or columns) and the shortest row (or column) is $R_1$.
In this case, the leading and subleading (order $N$ and order $\sqrt{N}$) contribution to $\Delta_{ij}O_{R,(r,s)\mu_1\mu_2}$
vanish, leaving a contribution of order $1$.
This contribution is sensitive to the exact form of the coefficients appearing in (\ref{ExtD}), and it is with these shifts
that we reproduce the numerical results of \cite{Koch:2010gp,DeComarmond:2010ie}.
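The cancellation of the order $N$ and order $\sqrt{N}$ pieces claimed above is easy to exhibit numerically. Using the exact coefficients of (\ref{ExtD}), with row lengths of order $N$ differing by order $\sqrt{N}$ (the numbers below are our own illustrative sample):

```python
import math

N = 10**6
i, j = 1, 2
Ri, Rj = 2 * N + int(math.sqrt(N)), 2 * N   # rows of order N, differing by order sqrt(N)

a, b = N + Ri - i, N + Rj - j               # box factors entering (ExtD)
coeff_sum = (math.sqrt(a * (b + 1)) + math.sqrt((a + 1) * b)
             - (2 * N + Ri + Rj - i - j))

# each term above is of order N, yet the order N and order sqrt(N)
# pieces cancel, leaving an order 1 remainder
assert abs(coeff_sum) < 10
```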
\section{One Loop Computation of Bulk/Boundary Magnon Scattering}\label{BA}
In this appendix we will compute the scattering of a bulk and boundary magnon, to one loop, using the asymptotic Bethe ansatz.
See \cite{Staudacher:2004tk} where studies of this type were first suggested and \cite{Freyhult:2005ws} for related systems.
We can introduce a wave function $\psi (l_1,l_2,\cdots)$ as follows
\bea
O=\sum_{l_1,l_2,\cdots}\psi (l_1,l_2,\cdots)O(R,R_1^{k},R_2^{k};\{l_1,l_2,\cdots\})\,.
\eea
We assume that the boundary magnon (at $l_1$) and the next magnon along the open string (at $l_2$) are very well separated
from the remaining magnons.
These magnons are both assumed to be $Y$ impurities.
To obtain the scattering we want, we only need to focus on these two magnons.
The time independent Schr\"odinger equation following from our one loop dilatation operator is
\bea
E\psi (l_1,l_2)=\left(3+{c\over N}\right)\psi(l_1,l_2)-\sqrt{c\over N}\left(\psi(l_1-1,l_2)+\psi(l_1+1,l_2)\right)\cr
-(\psi(l_1,l_2-1)+\psi(l_1,l_2+1))\label{separated}
\eea
where $c$ is the factor of the box to which the endpoint associated with the magnon at $l_1$ belongs.
The equation (\ref{separated}) is valid whenever the two magnons are not adjacent in the open string word, i.e. when
$l_2>l_1+1$\footnote{Notice that we are associating a lattice site to every field in the spin chain and not just to the $Z$s.}.
In the situation that the magnons are adjacent, we find
\bea
E\psi (l_1,l_1+1)=\left(1+{c\over N}\right)\psi(l_1,l_1+1)-\sqrt{c\over N}\psi(l_1-1,l_1+1)
-\psi(l_1,l_1+2)\,.\label{adjacent}
\eea
We make the following Bethe ansatz for the wave function
\bea
\psi(l_1,l_2)=e^{ip_1 l_1+ip_2 l_2}+R_{12}\, e^{ip_1'l_1+ip_2'l_2}\,.
\eea
It is straightforward to see that this ansatz obeys (\ref{separated}) as long as
\bea
E=3+{c\over N}-\sqrt{c\over N}(e^{ip_1}+e^{-ip_1})-(e^{ip_2}+e^{-ip_2})\label{evalue}
\eea
and
\bea
\sqrt{c\over N}(e^{ip_1}+e^{-ip_1})+e^{ip_2}+e^{-ip_2}=
\sqrt{c\over N}(e^{ip_1'}+e^{-ip_1'})+e^{ip_2'}+e^{-ip_2'}\,.\label{relatemom}
\eea
Note that (\ref{evalue}) is indeed the correct one loop anomalous dimension and (\ref{relatemom})
can be obtained by equating the $O(\lambda)$ terms on both sides of (\ref{magconstraints}), as it should be.
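As a quick numerical sanity check (a sketch, not part of the derivation; the value taken for $\sqrt{c/N}$ and the momenta are arbitrary), one can substitute a single plane-wave component of the ansatz into (\ref{separated}) and confirm that it is an eigenfunction with eigenvalue (\ref{evalue}):

```python
import cmath

alpha = 0.6                # stands for sqrt(c/N); arbitrary test value
p1, p2 = 0.4, 1.1          # magnon momenta; arbitrary

def psi(l1, l2):
    """Single plane-wave component of the Bethe ansatz."""
    return cmath.exp(1j*(p1*l1 + p2*l2))

# one-loop eigenvalue, Eq. (evalue), with c/N = alpha**2
E = 3 + alpha**2 - alpha*(cmath.exp(1j*p1) + cmath.exp(-1j*p1)) \
    - (cmath.exp(1j*p2) + cmath.exp(-1j*p2))

l1, l2 = 5, 9              # well separated, l2 > l1 + 1
lhs = E*psi(l1, l2)
rhs = ((3 + alpha**2)*psi(l1, l2)
       - alpha*(psi(l1 - 1, l2) + psi(l1 + 1, l2))
       - (psi(l1, l2 - 1) + psi(l1, l2 + 1)))
print(abs(lhs - rhs))      # vanishes: the plane wave solves (separated)
```

The reflected wave obeys the same relation with the primed momenta, by virtue of the constraint (\ref{relatemom}).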
From (\ref{adjacent}) we can solve for the reflection coefficient $R_{12}$.
The result is
\bea
R_{12}=-{2e^{ip_2}-\sqrt{c\over N}e^{ip_1+ip_2}-1\over 2e^{ip_2'}-\sqrt{c\over N}e^{ip_1'+ip_2'}-1}\,.
\eea
Two simple checks of this result are
\begin{itemize}
\item[1.] We see that $R_{12}R_{21}=1$.
\item[2.] If we set $c=N$ we recover the S-matrix of \cite{Staudacher:2004tk}.
\end{itemize}
We will now move beyond the $su(2)$ sector by considering a state with a single $Y$ impurity and a single $X$
impurity. The operator with a $Y$ impurity at $l_1$ and an $X$ impurity at $l_2$ is denoted
$O(R,R_1^{k},R_2^{k};\{l_1,l_2,\cdots\})_{YX}$ and the operator with an $X$ impurity at $l_1$ and a $Y$ impurity at
$l_2$ is denoted $O(R,R_1^{k},R_2^{k};\{l_1,l_2,\cdots\})_{XY}$.
We now introduce a pair of wave functions as follows
\bea
O=\sum_{l_1,l_2,\cdots}\left[\psi_{YX} (l_1,l_2,\cdots)O(R,R_1^{k},R_2^{k};\{l_1,l_2,\cdots\})_{YX}\right.\cr
\left.+\psi_{XY} (l_1,l_2,\cdots)O(R,R_1^{k},R_2^{k};\{l_1,l_2,\cdots\})_{XY}\right]\,.
\eea
From the one loop dilatation operator we find the time independent Schr\"odinger equation (\ref{separated}) for
each wave function, when the impurities are not adjacent.
When the impurities are adjacent, we find the following two time independent Schr\"odinger equations
\bea
E\psi_{YX}(l_1,l_1+1)=\left(2+{c\over N}\right)\psi_{YX}(l_1,l_1+1)-\sqrt{c\over N}\psi_{YX}(l_1-1,l_1+1)\cr
-\psi_{XY}(l_1,l_1+1)-\psi_{YX}(l_1,l_1+2)\label{sep1}
\eea
\bea
E\psi_{XY}(l_1,l_1+1)=\left(2+{c\over N}\right)\psi_{XY}(l_1,l_1+1)-\sqrt{c\over N}\psi_{XY}(l_1-1,l_1+1)\cr
-\psi_{YX}(l_1,l_1+1)-\psi_{XY}(l_1,l_1+2)\label{sep2}
\eea
Making the following Bethe ansatz for the wave functions
\bea
\psi_{YX}(l_1,l_2)&=&e^{ip_1 l_1+ip_2 l_2}+Ae^{ip_1'l_1+ip_2'l_2}\cr
\psi_{XY}(l_1,l_2)&=&Be^{ip_1'l_1+ip_2'l_2}
\eea
we find that the two equations of the form (\ref{separated}) imply that both $\psi_{XY}(l_1,l_2)$ and $\psi_{YX}(l_1,l_2)$
have the same energy, which is given in (\ref{evalue}).
The equations (\ref{sep1}) and (\ref{sep2}) imply that
\bea
A={e^{ip_2'}+e^{ip_2}-1-\sqrt{c\over N}e^{ip_1'+ip_2'}\over 1+\sqrt{c\over N}e^{ip_1'+ip_2'}-2e^{ip_2'}}\,,\cr
B={e^{ip_2}-e^{ip_2'}\over 1+\sqrt{c\over N}e^{ip_1'+ip_2'}-2e^{ip_2'}}\,.
\eea
It is straightforward but a bit tedious to check that $|A|^2+|B|^2=1$, which is a consequence of unitarity.
To perform this check it is necessary to use the conservation of momentum $p_1+p_2=p_1'+p_2'$, as well as the
constraint (\ref{relatemom}).
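The unitarity check can also be carried out numerically. The sketch below (with arbitrarily chosen $\sqrt{c/N}$ and incoming momenta) solves the constraint (\ref{relatemom}) together with momentum conservation $p_1+p_2=p_1'+p_2'$ by bisection, and then evaluates $|A|^2+|B|^2$:

```python
import cmath, math

alpha = 0.7                     # sqrt(c/N); arbitrary test value
p1, p2 = 0.3, 1.1               # incoming momenta; arbitrary
P = p1 + p2                     # conserved total momentum

def f(x):                       # Eq. (relatemom)/2 with p1' + p2' = P
    return alpha*math.cos(x) + math.cos(P - x) \
         - (alpha*math.cos(p1) + math.cos(p2))

# bisect for the nontrivial root p1' != p1 of the momentum constraint
lo, hi = p1 + 0.5, p1 + math.pi
for _ in range(100):
    mid = 0.5*(lo + hi)
    if f(lo)*f(mid) > 0: lo = mid
    else: hi = mid
p1p = 0.5*(lo + hi); p2p = P - p1p

u, v = cmath.exp(1j*p2), cmath.exp(1j*p2p)
S = 1 + alpha*cmath.exp(1j*P)
A = (v + u - S)/(S - 2*v)       # numerator e^{ip2'}+e^{ip2}-1-sqrt(c/N)e^{iP}
B = (u - v)/(S - 2*v)
print(abs(A)**2 + abs(B)**2)    # = 1 by unitarity
```

The bisection bracket assumes the nontrivial root lies in $(p_1+0.5,\,p_1+\pi)$, which holds for this choice of parameters.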
We now finally obtain
\bea
{A\over R_{12}}={e^{ip_2'}+e^{ip_2}-1-\sqrt{c\over N}e^{ip_1'+ip_2'}\over 2e^{ip_2}-\sqrt{c\over N}e^{ip_1+ip_2}-1}\,.
\label{CCh}
\eea
This should be equal to
\bea
{1\over 2}\left( 1+{B_{12}\over A_{12}}\right)
\eea
where $A_{12}$ and $B_{12}$ are the S-matrix elements computed in section \ref{sctref}, describing the scattering
between a bulk and a boundary magnon.
This allows us to perform a non-trivial check of the S-matrix elements we computed.
\section{No Integrability}
The (boundary) Yang-Baxter equation makes use of the boundary magnon ($B$) and two bulk magnons ($1$ and $2$).
For our purposes, it is enough to track only scattering between bulk and boundary magnons.
The Yang-Baxter equation requires equality between the scattering\footnote{There are some bulk magnon scatterings
that we are ignoring as they don't affect our argument.} which takes $B+1\to B'+1'$ and then
$B'+2\to \tilde B'+\tilde 2$ and the scattering which takes $B+2\to B'+2'$ and then $B'+1\to \tilde B'+\tilde 1$.
For the first scattering, given the initial momenta $p_1,p_2,p_B$, we need to solve
\bea
\sqrt{1+8\lambda\sin^2 {p_1\over 2}}+
\sqrt{1+8\lambda ((1+r)^2+4r\sin^2{p_B\over 2})}\cr
=
\sqrt{1+8\lambda\sin^2 {k_1\over 2}}+
\sqrt{1+8\lambda ((1+r)^2+4r\sin^2{q\over 2})}
\eea
\bea
\sqrt{1+8\lambda\sin^2 {p_2\over 2}}+
\sqrt{1+8\lambda ((1+r)^2+4r\sin^2{q\over 2})}\cr
=
\sqrt{1+8\lambda\sin^2 {k_2\over 2}}+
\sqrt{1+8\lambda ((1+r)^2+4r\sin^2{k_B\over 2})}
\eea
for the final momenta $k_1,k_2,k_B$.
For the second scattering we need to solve
\bea
\sqrt{1+8\lambda\sin^2 {p_2\over 2}}+
\sqrt{1+8\lambda ((1+r)^2+4r\sin^2{p_B\over 2})}\cr
=
\sqrt{1+8\lambda\sin^2 {l_2\over 2}}+
\sqrt{1+8\lambda ((1+r)^2+4r\sin^2{s\over 2})}
\eea
\bea
\sqrt{1+8\lambda\sin^2 {p_1\over 2}}+
\sqrt{1+8\lambda ((1+r)^2+4r\sin^2{s\over 2})}\cr
=
\sqrt{1+8\lambda\sin^2 {l_1\over 2}}+
\sqrt{1+8\lambda ((1+r)^2+4r\sin^2{l_B\over 2})}
\eea
for the final momenta $l_1,l_2,l_B$.
It is simple to check that, in general, $k_1\ne l_1$, $k_2\ne l_2$ and $k_B\ne l_B$, so the two scatterings can't
possibly be equal.
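This failure of the boundary Yang-Baxter equation is easy to exhibit numerically. The sketch below uses arbitrarily chosen couplings and momenta, and assumes that each two-body scattering conserves both the energy and the total momentum, as in the Bethe ansatz of Appendix~\ref{BA}; it composes the two collisions in both orders and compares the outgoing boundary momenta:

```python
import math

lam, r = 0.25, 0.5            # coupling and boundary parameter; arbitrary

def e_bulk(p):                # bulk magnon dispersion
    return math.sqrt(1 + 8*lam*math.sin(p/2)**2)

def e_bnd(p):                 # boundary magnon dispersion
    return math.sqrt(1 + 8*lam*((1 + r)**2 + 4*r*math.sin(p/2)**2))

def collide(p_bulk, p_bnd):
    """Nontrivial solution of energy + total-momentum conservation."""
    K = p_bulk + p_bnd
    E = e_bulk(p_bulk) + e_bnd(p_bnd)
    g = lambda x: e_bulk(x) + e_bnd(K - x) - E
    xs = [-math.pi + 2*math.pi*i/4000 for i in range(4001)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        if g(a)*g(b) < 0:
            for _ in range(80):            # bisection
                m = 0.5*(a + b)
                if g(a)*g(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5*(a + b))
    k = [x for x in roots if abs(x - p_bulk) > 1e-3][0]  # drop trivial root
    return k, K - k           # outgoing bulk and boundary momenta

p1, p2, pB = 0.3, 1.2, 0.6    # generic initial momenta
k1, q = collide(p1, pB); k2, kB = collide(p2, q)   # boundary meets 1 then 2
l2, s = collide(p2, pB); l1, lB = collide(p1, s)   # boundary meets 2 then 1
print(kB, lB)                 # the two orderings disagree
```

For these generic values the two orderings produce clearly different final boundary momenta, consistent with the absence of integrability.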
\end{appendix}
\section{Introduction}
One of the biggest challenges to classical simulation of quantum systems is the infamous fermion sign problem of quantum Monte Carlo (QMC) simulations. It appears when the weights of configurations in a QMC simulation can become negative and therefore cannot be directly interpreted as probabilities~\cite{LohJr:1990up}. In the presence of a sign problem, the simulation effort typically grows exponentially with system size and inverse temperature.
While the sign problem is nondeterministic polynomial (NP) hard~\cite{Troyer:2005hv}, implying that there is little hope of finding a \emph{generic} solution, this does not exclude \emph{ad hoc} solutions to the sign problem for specific models. For example, one can sometimes exploit symmetries to design appropriate sign-problem-free QMC algorithms for a restricted class of models~\cite{PhysRevLett.83.3116}. However, it is unclear how broad these classes are and it is in general hard to foresee whether a given physical model would have a sign problem in \emph{any} QMC simulation. The situation is not dissimilar to the study of many intriguing problems in the NP complexity class, where a seemingly infeasible problem might surprisingly turn out to have a polynomial-time solution~\cite{Hayes:2008cm}.
A fruitful approach in pursuing such specific solutions is to design Hamiltonians that capture the right low-energy physics and allow sign-problem-free QMC simulations at the same time, called ``designer'' Hamiltonians~\cite{Kaul:2013ika}. This naturally calls for design principles. For bosonic and quantum spin systems a valuable guiding principle is the Marshall sign rule~\cite{Marshall:1955tw, Kaul:2015hg}, which ensures a nonnegative weight for all configurations. The design of sign-problem-free fermionic Hamiltonians is harder. The methods of choice for fermionic QMC simulations are the determinantal QMC approaches, including the traditional discrete-time~\cite{Blankenbecler:1981vj} and newer continuous-time approaches~\cite{1999PhRvL..82.4155R, Rubtsov:2005iw, Gull:2008cm, Iazzi:2014vv, Wang:2015tf}. Both approaches map the original interacting system to free fermions with an imaginary-time-dependent Hamiltonian. The partition function is then written as a weighted sum of matrix determinants after tracing out the fermions~\cite{Blankenbecler:1981vj,1999PhRvL..82.4155R, Iazzi:2014vv}:
\begin{equation}
Z = \sum_{\mathcal{C}}f_{\mathcal{C}} \, \det\left[ I + \mathcal{T}e^{-\int_{0}^{\beta}d\tau H_{\mathcal{C}}(\tau)} \right],
\label{eq:general}
\end{equation}
where $f_{\mathcal{C}}$ is a c-number and $H_{\mathcal{C}}(\tau)$ is an imaginary-time dependent single-particle Hamiltonian matrix (whose matrix elements denote hopping amplitudes and onsite energies on a lattice), both depending on the Monte Carlo configuration $\mathcal{C}$. $\mathcal{T}$ denotes the time-ordering and $I$ is the identity matrix.
The appearance of the matrix determinant complicates the analysis of the sign problem because it is often not straightforward to see the sign of the Monte Carlo weight of a given configuration~\cite{Batrouni:1993tn, Gubernatis:1994ta}, and the sign of the determinant is related \cite{Iazzi:2014uo} to the Aharonov-Anandan phase~\cite{PhysRevLett.58.1593} of the imaginary-time evolution.
The situation is further complicated by the fact that even for a given physical model the choice of the effective Hamiltonian $H_{\mathcal{C}}$ is not unique (it depends on details of the QMC algorithm such as whether and how to perform an auxiliary field decomposition) and the specific choice may affect the appearance of the sign problem~\cite{Batrouni:1990ug,Batrouni:1993tn,Chen:1992wo}.
One successful guiding principle for fermionic simulations that has been discovered in the context of nuclear physics~\cite{PhysRevC.48.1518, Anonymous:JutiR0Zk}, lattice QCD~\cite{Hands:2000kq} and condensed matter physics~\cite{Wu:2005im} relies on the time-reversal-symmetry (TRS) of the effective Hamiltonian $H_{\mathcal{C}}$. TRS ensures a nonnegative matrix determinant in \Eq{eq:general} because the eigenvalues of the matrix necessarily appear in Kramers pairs.
A typical example of this kind is the attractive Hubbard model at balanced filling of two spin species, where after decomposition of the interaction term the Monte Carlo weight even factorizes into a product of two identical matrix determinants. Additional conditions such as half filling and bipartiteness of the lattice lead to a solution of the sign problem for the repulsive Hubbard model.
See Refs.~\cite{Wu:2005im, Assaad:2008hx} for a thorough discussion and Refs.~\cite{Hohenadler:2011kk, Berg:2012ie, Wang:2014gx} for several recent applications of the TRS principle.
Unfortunately, besides the quite intuitive TRS principle~\cite{Anonymous:JutiR0Zk, Hands:2000kq, Wu:2005im}, a broad criterion for the sign of the matrix determinant is still lacking. Recent progress on solving the sign problem in a class of fermionic models using the continuous-time quantum Monte Carlo approach~\cite{Huffman:2014fj} and the Majorana representation~\cite{Li:2015jf} provides hints about such a guiding principle. For example, one could search for real-antisymmetric matrices with nonnegative determinant~\cite{Huffman:2014fj, Wang:2014iba}, or try to split the fermionic operator into Majorana fermions for a potential cancellation of the sign~\cite{Li:2015jf}. However, compared to the TRS principle~\cite{Anonymous:JutiR0Zk, Hands:2000kq, Wu:2005im}, neither approach is transparent enough to serve as a guiding principle. Moreover, because of the different appearances of the two solutions~\cite{Huffman:2014fj, Li:2015jf}, it is unclear how they are connected and whether there is a deeper underlying reason for them.
In this Letter, we present a guiding principle that not only unifies the two recent solutions to the sign problem~\cite{Huffman:2014fj,Li:2015jf}, but also suggests a general strategy that enables us to discover solutions to the sign problem for a broader class of fermionic models. The guiding principle exploits the symmetry of the \emph{effective Hamiltonian} $H_{\mathcal{C}}$ and consequently the Lie group structure of the \emph{evolution matrix} $\mathcal{T}e^{-\int_{0}^{\beta} d\tau H_{\mathcal{C}}}$. In particular, the \emph{split orthogonal group} $O(n,n)$ is formed by
all $2n\times 2n$ real matrices that preserve the metric $\eta = {\rm diag}(\underbrace{1,\ldots,1}_{n}, \underbrace{-1,\ldots,-1}_{n})$
\begin{equation}
M^{T}\eta M = \eta.
\label{eq:group}
\end{equation}
Similar to the Lorentz group $O(3,1)$, a familiar example in relativistic physics, the $O(n,n)$ group consists of \emph{four} connected components. More explicitly, writing the matrix $M$ in the form $M=\left(\begin{array}{cc}M_{11} & M_{12} \\M_{21} & M_{22}\end{array}\right) $ with $n\times n$ sub-blocks, one has $|\det(M_{11})|\ge 1 $ and $|\det(M_{22})|\ge 1 $~\cite{nishikawa1983exponential}. The four components of $O(n,n)$ can be classified by the signs of $\det(M_{11})$ and $\det(M_{22})$, denoted as $O^{\pm\pm}(n,n)$. Different components can only be connected by \emph{improper} rotations that change the sign of the determinant of the sub-block $M_{11}$ or $M_{22}$. Only the $O^{++}(n,n)$ component forms a subgroup because it contains the identity element.
\begin{theorem*}
If $M$ belongs to the split orthogonal group $O(n,n)$, then the following statements hold~\cite{SM,MO,Taoblog}
\begin{subequations}
\begin{numcases}
{\det\left(I+M\right)}
\ge 0, & \text{if $M\in O^{++}(n,n)$}, \label{eq:theorem++}\\
\le 0, & \text{if $M\in O^{--}(n,n)$}, \label{eq:theorem--}\\
= 0, & \text{otherwise}. \label{eq:theorem+-}
\end{numcases}
\label{eq:theorem}
\end{subequations}
\end{theorem*}
This rather strong statement about the definite sign of the matrix determinant, whether positive or negative, is invaluable for determinantal QMC simulations. Furthermore, we have the following
\begin{corollary*}
Given an arbitrary number of real matrices
$A_{i}$ that satisfy $\eta A_{i}\eta =-A_{i}^{T}$, we have~\cite{SM,MO}
\begin{equation}\det\left(I + \prod_{i}e^{A_{i}}\right)\ge 0.
\label{eq:corollary}
\end{equation}
\end{corollary*}
The proof follows immediately by noticing that $A_{i}$ lies in the Lie algebra of the group $O(n,n)$~\cite{orthogonalgroup}. Each factor of the matrix product $\prod_{i}e^{A_{i}}$ is an exponential from the Lie algebra to the Lie group $O(n,n)$ in the identity component, thus \Eq{eq:corollary} is a consequence of \Eq{eq:theorem++}. Note that the form of the matrix determinant of \Eq{eq:corollary} resembles the weight that appears in the determinantal QMC calculations \Eq{eq:general}~\cite{Blankenbecler:1981vj,1999PhRvL..82.4155R, Iazzi:2014vv}.
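The Corollary can be illustrated directly in a numerical sketch (the block size, the random Lie-algebra elements and their scaling are all arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
n = 3
eta = np.diag([1.0]*n + [-1.0]*n)

def lie_element():
    """Random real A with eta A eta = -A^T: antisymmetric diagonal
    blocks C, D and an arbitrary off-diagonal block B."""
    C = rng.standard_normal((n, n)); C = C - C.T
    D = rng.standard_normal((n, n)); D = D - D.T
    B = rng.standard_normal((n, n))
    return 0.4*np.block([[C, B], [B.T, D]])

M = np.eye(2*n)
for _ in range(5):                       # product of group exponentials
    A = lie_element()
    assert np.allclose(eta @ A @ eta, -A.T)
    M = M @ expm(A)

assert np.allclose(M.T @ eta @ M, eta)   # M lies in O(n,n) ...
print(np.linalg.det(np.eye(2*n) + M))    # ... and det(I + M) >= 0
```

Any number of factors and any matrices satisfying the Lie-algebra condition give the same nonnegative result.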
Before moving on, we comment on the general relevance of \Eq{eq:theorem} and \Eq{eq:corollary} to physical problems. On a bipartite lattice, the parities of the sublattices naturally provide the metric $\eta$ appearing in \Eq{eq:group}. To further reveal its physical meaning, we write an element in the Lie algebra $A_{i} = \left(\begin{array}{cc} C_{i} & B_{i} \\B^{T}_{i} & D_{i}\end{array}\right)$ explicitly. In the special case of $C_{i}=D_{i}=0$, $A_{i}$ can be recognized as a bipartite single-particle Hamiltonian and the condition on $A_{i}$ has appeared in Eq.~(4) of Ref.~\cite{Huffman:2014fj}. The Corollary (\ref{eq:corollary}) states that the partition function of such a bipartite imaginary-time-dependent noninteracting system is nonnegative~\footnote{This fact was used implicitly in the CT-QMC calculation of the R\'enyi entanglement entropy in~\cite{Wang:2014ir}.}. Moreover, in general the matrix $A_{i}$ does not need to be symmetric. The condition on $A_{i}$ only requires $C_{i}^{T}= -C_{i}$ and $D_{i}^{T} = -D_{i}$, and thus provides more flexibility in designing QMC approaches.
To see how the above rigorous mathematical statements apply to determinantal QMC simulations of physical systems, we consider first the spinless $t-V$ model on a bipartite lattice,
\begin{equation}
\hat{H}= { \sum_{i,j} \hat{c}^{\dagger}_{{i}} K_{ij} \hat{c}_{{j}}} + \sum_{\braket{i,j}} { \left[V\left( \hat{n}_{i}\hat{n}_{j} - \frac{\hat{n}_{i}+\hat{n}_{j}}{2}\right) -\Gamma \right] }.
\label{eq:tVmodel}
\end{equation}
Here $\hat{c}^\dagger_{i}$ and $\hat{c}_{i}$ are fermion creation and annihilation operators and $\hat{n}_{i}=\hat{c}^\dagger_{i}\hat{c}_{i}$ is the occupation number operator on site $i$. There are $2n$ lattice sites, which split into two sublattices $\mathcal{A}$ and $\mathcal{B}$. In accordance with the metric $\eta$, we sort the sites by placing all sites in $\mathcal{A}$ before those in $\mathcal{B}$. The bipartite hopping matrix $K$ has zeros on the diagonal and is real and symmetric, and therefore fulfills the requirement $\eta K \eta = -K^{T}$ of the Corollary~\cite{Huffman:2014fj}. The second term is a repulsive interaction between nearest neighbors $\braket{i,j}$ (belonging to different sublattices) and we introduced a constant shift $\Gamma$, which will play a crucial role in later discussions.
We employ the continuous-time quantum Monte Carlo (CT-QMC) framework~\cite{1999PhRvL..82.4155R, Rubtsov:2005iw, Gull:2008cm, Iazzi:2014vv} in the following analysis. This approach is free from time-discretization errors, is as efficient as the discrete-time counterpart~\cite{Blankenbecler:1981vj, Iazzi:2014vv}, and is more flexible and powerful~\cite{Wang:2015tf, Wang:2015twa}. Furthermore, the discrete-time algorithms can be derived as a restricted version of CT-QMC on an equidistant grid of imaginary times~\cite{Mikelsons:2009eka}, so our results apply to them as well. We rewrite the Hamiltonian \Eq{eq:tVmodel} as $\hat{H} = \hat{H}_{0}+\sum_{\braket{i,j} }\hat{v}_{ij}$ and perform an expansion in the interaction term~\cite{1999PhRvL..82.4155R}
\begin{widetext}
\begin{eqnarray}
Z = {\rm Tr} \left(e^{-\beta \hat{H}}\right)
= \sum_{k=0}^{\infty} \sum_{\braket{i_{1},j_{1}}} \ldots \sum_{\braket{i_{k},j_{k}}} \int_{0}^{\beta}d \tau_{1} \ldots \int^{\beta}_{\tau_{k-1}} d\tau_{k}\, {\rm Tr}\left[e^{-(\beta-\tau_{k})\hat{H}_{0} } \left(-\hat{v}_{i_{k}j_{k}}\right) \ldots \left(-\hat{v}_{i_{1}j_{1}}\right) e^{-\tau_{1}\hat{H}_{0}} \right]. \label{eq:expansion}
\end{eqnarray}
\end{widetext}
At this point there are multiple ways to proceed, which result in distinct CT-QMC algorithms, differing in efficiency and the use of auxiliary fields (see \cite{Wang:2015tf} for an overview). In particular, we choose the following auxiliary field decomposition of the interaction term to reveal the connections of various solutions to the sign problem~\cite{Huffman:2014fj, Li:2015jf},
\begin{equation}
- \hat{v}_{ij} = \frac{\Gamma}{2} \sum_{\sigma =\pm} \exp \left[\sigma \lambda \left(\hat{c}_i^{\dagger} \hat{c}_j+ \hat{c}_j^{\dagger} \hat{c}_i \right)\right],
\label{eq:auxfield+}
\end{equation}
where $\lambda = {\rm acosh} \left(1 + \frac{V}{2 \Gamma}\right) $ is a real number for \emph{repulsive} interaction $V$ and \emph{positive} shift $\Gamma$. The decomposition \Eq{eq:auxfield+} is valid because the operator $\hat{o} = (\hat{c}_i^{\dagger} \hat{c}_j + \hat{c}_j^{\dagger} \hat{c}_i)$ satisfies $ \hat{o}^2 = \hat{o}^4 = \hat{n}_i + \hat{n}_j - 2 \hat{n}_i \hat{n}_j $ when $i \neq j$. Compared to the conventional decompositions routinely employed in the determinantal QMC simulations~\cite{1983PhRvB..28.4059H, 1999PhRvL..82.4155R}, the auxiliary field in~\Eq{eq:auxfield+} couples to fermion hoppings instead of the density operators~\cite{Scalapino:1984wz, Gubernatis:1985wo}. This is one of the key ingredients to avoiding the sign problem. In retrospect, this choice can be motivated by the Corollary (\ref{eq:corollary}).
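The identity (\ref{eq:auxfield+}) can be verified directly on the four-dimensional Fock space of the two sites $i,j$; in the sketch below (with arbitrary values of $V$ and $\Gamma$) both sides are written as $4\times 4$ matrices in the occupation basis $\{\ket{00},\ket{10},\ket{01},\ket{11}\}$:

```python
import numpy as np
from scipy.linalg import expm

V, Gamma = 1.3, 0.8                   # repulsive V, positive shift; arbitrary
lam = np.arccosh(1 + V/(2*Gamma))

# occupation basis |00>, |10>, |01>, |11> on the two sites i, j
ni = np.diag([0., 1., 0., 1.])
nj = np.diag([0., 0., 1., 1.])
o = np.zeros((4, 4))
o[1, 2] = o[2, 1] = 1.                # c_i^dag c_j + c_j^dag c_i

lhs = -(V*(ni @ nj - (ni + nj)/2) - Gamma*np.eye(4))
rhs = (Gamma/2)*(expm(lam*o) + expm(-lam*o))
print(np.allclose(lhs, rhs))          # True: the decomposition is exact
```

The off-diagonal (hopping) pieces cancel in the sum over $\sigma$, leaving exactly $-\hat{v}_{ij}$.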
Plugging \Eq{eq:auxfield+} into~\Eq{eq:expansion}, the square bracket becomes a product of exponentials of fermion bilinear operators. The trace therefore acquires an appealing physical meaning: it is the partition function of an imaginary-time dependent noninteracting system, which evolves alternatively under the free part of the original Hamiltonian $\hat{H}_{0}$ and hopping with an amplitude $\sigma\lambda$ between the sites $i,j$ that belong to different sublattices. Tracing out these free fermions, one obtains
\begin{widetext}
\begin{equation}
Z = \sum_{k=0}^{\infty} \left(\frac{\Gamma}{2}\right)^{k}\sum_{\braket{i_{1},j_{1}}} \ldots \sum_{\braket{i_{k},j_{k}}} \sum_{\sigma_{1}=\pm} \ldots \sum_{\sigma_{k}=\pm} \int_{0}^{\beta}d \tau_{1} \ldots \int^{\beta}_{\tau_{k-1}} d\tau_{k} \, \det\left[I +e^{-(\beta-\tau_{k})K}e^{\Lambda^{\sigma_{k}}_{i_{k}j_{k}}}\ldots e^{{\Lambda^{\sigma_{1}}_{i_{1}j_{1}}} } e^{-\tau_{1} K}\right],
\label{eq:weight}
\end{equation}
\end{widetext}
where the matrix $(\Lambda^{\sigma}_{ij})_{lm} = \sigma \lambda (\delta_{li}\delta_{mj} + \delta_{lj}\delta_{mi})$ according to the exponential factor of \Eq{eq:auxfield+}.
Equation~(\ref{eq:weight}) is in the general form of \Eq{eq:general} and the matrix determinant has the form of~\Eq{eq:corollary}. The interaction vertex $e^{\Lambda^{\sigma}_{ij}}$ performs a hyperbolic rotation $\left(\begin{array}{cc}\cosh\lambda & \sigma \sinh \lambda \\ \sigma \sinh\lambda & \cosh\lambda \end{array}\right)$ in the relevant $2\times 2$ block involving the sites $i,j$. Importantly, both the original hopping matrix $K$ and the auxiliary Hamiltonian matrix $\Lambda^{\sigma}_{ij}$ satisfy the condition of the Corollary~(\ref{eq:corollary}). The weight (\ref{eq:weight}) is therefore nonnegative and there is no sign problem. The Monte Carlo method can be used to sample the summations over the interaction bonds and the auxiliary fields as well as the integrations over the imaginary times on equal footing, see Refs. \cite{Iazzi:2014vv, Wang:2015tf} for details about efficient Monte Carlo simulation of \Eq{eq:weight}.
Using the auxiliary field to decouple the interaction vertex is not the only way to formulate a sign-problem-free QMC approach for the model (\ref{eq:tVmodel}). The statements (\ref{eq:theorem--}) and (\ref{eq:theorem+-}) of the Theorem apply to other components of the $O(n,n)$ group and connect the above solution to the solutions based on the continuous-time interaction expansion method (CT-INT)~\cite{Huffman:2014fj, Wang:2014iba} and the related but more efficient LCT-INT method~\cite{Iazzi:2014vv,Wang:2015tf}. These methods correspond to the special choice of the shift $\Gamma = -V/4$~\cite{Werner:2010gu}, which results in a purely imaginary coupling strength $\lambda = i \pi$ in \Eq{eq:auxfield+}. The vertex matrix $e^{\Lambda^{\sigma}_{ij}}$ thus has the form $\left(\begin{array}{cc} -1 & 0 \\ 0 &-1 \end{array}\right)$ in the relevant $2\times 2$ block independent of the auxiliary field, which is equivalent to rewriting the interaction term as $\hat{v}_{ij} = \frac{V}{4}e^{i\pi(\hat{n}_{i}+\hat{n}_{j})}$ in the LCT-INT approach~\cite{Iazzi:2014vv,Wang:2015tf}. The vertex matrix maps the evolution matrix in \Eq{eq:weight} back and forth between the $O^{++}(n,n)$ and the $O^{--}(n,n)$ components, because the sites $i,j$ belong to different sublattices and the vertex matrix flips the signs of both $\det(M_{11})$ and $\det(M_{22})$. The matrix determinant in \Eq{eq:weight} is thus non-positive for an odd number of vertices according to \Eq{eq:theorem--}. However, the negative value $\Gamma=-V/4$ cancels this sign due to the prefactor $\left(\frac{\Gamma}{2}\right)^{k}$ in the weight.
Hence Theorem~(\ref{eq:theorem}) ensures the absence of a sign problem~\footnote{In fact, the QMC simulation is sign-problem-free for any shift $\Gamma \in [-V/4, 0)$, where $\lambda =i\pi + {\rm acosh} \left(|1 + \frac{V}{2 \Gamma}|\right)$ and the matrix product in \Eq{eq:weight} lives in either $O^{++}(n,n)$ or $O^{--}(n,n)$ component.} in the auxiliary-field-free (L)CT-INT simulations~\cite{Huffman:2014fj, Wang:2014iba, Wang:2015tf}.
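The sign structure just described can be made concrete in a minimal numerical sketch (the four-site chain, the chosen bond, and the imaginary times are all arbitrary): a single vertex at $\Gamma=-V/4$ places the evolution matrix in the $O^{--}(n,n)$ component, so $\det(I+M)\le 0$, and the negative prefactor $\Gamma/2$ restores a nonnegative weight:

```python
import numpy as np
from scipy.linalg import expm

n = 2                                    # sites per sublattice
eta = np.diag([1.0]*n + [-1.0]*n)
# open chain 1-2-3-4 with the sites reordered as (A1, A3 | B2, B4)
B = np.array([[1.0, 0.0], [1.0, 1.0]])
K = np.block([[np.zeros((n, n)), B], [B.T, np.zeros((n, n))]])

# LCT-INT vertex on the bond (A1, B2): sign flip on one site per sublattice
vertex = np.diag([-1.0, 1.0, -1.0, 1.0])

tau1, tau2 = 0.4, 0.7                    # arbitrary imaginary times
M = expm(-tau2*K) @ vertex @ expm(-tau1*K)
assert np.allclose(M.T @ eta @ M, eta)   # M is in O(n,n), component O--
print(np.linalg.det(np.eye(2*n) + M))    # <= 0 for an odd vertex number
```

An even number of such vertices returns $M$ to $O^{++}(n,n)$ and the determinant to a nonnegative value.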
To take full advantage of the Corollary~(\ref{eq:corollary}), one can further consider long-range interactions in the model~\Eq{eq:tVmodel}, e.g., \emph{attractive} interaction between sites belonging to the \emph{same} sublattice~\cite{Huffman:2014fj, Li:2015jf}. We decouple the interactions as
\begin{equation}
- \hat{v}_{ij} = \frac{\Gamma}{2} \sum_{\sigma =\pm} \exp\left[\sigma \lambda \left(\hat{c}_i^{\dagger} \hat{c}_j-\hat{c}_j^{\dagger} \hat{c}_i\right)\right].
\label{eq:auxfield-}
\end{equation}
The coupling strength $\lambda = {\rm acos}\left(1 + \frac{V}{2 \Gamma}\right)$ is real for \emph{attractive} interactions and any \emph{positive} shift $\Gamma\ge|V|/4$. The effective single-particle Hamiltonian in the exponential of \Eq{eq:auxfield-} is \emph{antisymmetric} and connects sites in the \emph{same} sublattice, and thus fulfills the requirement of the Corollary~(\ref{eq:corollary}). There is no sign problem either~\footnote{Moreover, by expanding chemical potential terms on equal footing with the interaction vertices, one can show there is no sign problem in the model \Eq{eq:tVmodel} with a staggered chemical potential~\cite{Huffman:2014fj} using the Theorem (\ref{eq:theorem}). We thank Shailesh Chandrasekharan and Emilie Huffman for pointing this out to us.}.
Alternatively, in~\Eq{eq:auxfield+} and \Eq{eq:auxfield-} one can split a fermion into two Majorana operators~\cite{Li:2015jf} and identify two complex-conjugate factors in the Monte Carlo weight~\footnote{We found the convention used in \cite{Wei:2014wwb} is more straightforward in achieving this goal.}. It is however clear that the unconventional decoupling in the hopping channels in Eqs.~(\ref{eq:auxfield+},\ref{eq:auxfield-}) to respect the Corollary is the underlying reason of a nonnegative matrix determinant. In light of \Eq{eq:corollary}, rewriting the fermions using Majorana operators is unnecessary in the Monte Carlo simulations. Nevertheless, the Majorana representation~\cite{Li:2015jf} is an ingenious way to prove the Corollary.
We have shown that Theorem~(\ref{eq:theorem}) unifies the recent solutions of the sign problem~\cite{Huffman:2014fj, Li:2015jf} as different choices of the constant shift $\Gamma$. The Corollary~(\ref{eq:corollary}) is particularly instructive as it suggests that, in order to avoid the sign problem, one just needs to decompose the original interacting model into free effective Hamiltonians that satisfy the condition of \Eq{eq:corollary}. The mechanism of solving the sign problem using \Eq{eq:theorem} and \Eq{eq:corollary} goes beyond the previous understanding based on the TRS principle~\cite{Anonymous:JutiR0Zk, Hands:2000kq, Wu:2005im}. This can easily be seen from the fact that the real eigenvalues of the matrix $I+M$ are not necessarily doubly degenerate as required by the Kramers theorem \footnote{Another way to see that it differs from the TRS principle is that the identity matrix in the Theorem~(\ref{eq:theorem}) and Corollary~(\ref{eq:corollary}) is crucial for the (in)equalities to hold, while this is certainly not the case for the considerations based on time-reversal symmetry~\cite{Anonymous:JutiR0Zk, Hands:2000kq, Wu:2005im}.}.
As a further application~\cite{SM} we consider the following \emph{two-flavor} Hubbard model on a bipartite lattice
\begin{eqnarray}
\hat{H} & = &\sum_{\alpha=\{\uparrow,\downarrow\}} \sum_{ i,j} \hat{c}_{i\alpha}^{\dagger}K^{\alpha}_{ij}\hat{c}_{j\alpha} + \sum_{i} \hat{v}_{i}, \nonumber \\
\hat{v}_{i} & =& U\left( \hat{n}_{i\uparrow}\hat{n}_{i\downarrow} - \frac{\hat{n}_{i\uparrow}+\hat{n}_{i\downarrow}}{2}\right) -\Gamma,
\label{eq:Hubbard}
\end{eqnarray}
where the real hopping matrix $K^{\alpha}$ connects the \emph{same} flavor $\alpha$ on \emph{different} sublattices.
The model~\Eq{eq:Hubbard} covers a variety of interesting physical systems that were previously inaccessible for determinantal QMC simulations. For example, the choice $K^{\downarrow}=r K^{\uparrow}$ with a ratio $0< r < 1$ realizes the asymmetric Hubbard model, which was implemented recently in a one-dimensional optical lattice with a tunable ratio $r$~\cite{Jotzu:2015tq}. On the other hand, one can also choose spatially anisotropic hopping amplitudes for each flavor, thereby realizing Hubbard models with mismatched Fermi surfaces~\cite{Gukelberger:2014db}.
All these cases break the SU(2) spin symmetry as well as the time-reversal symmetry, and are therefore not guaranteed to be sign-problem-free according to the conventional TRS principle~\cite{Anonymous:JutiR0Zk, Hands:2000kq, Wu:2005im}. However, one can now solve the sign problem using the insights provided by the Corollary (\ref{eq:corollary}). We first consider the $U>0$ case for simplicity. Guided by the new understanding, we decouple the interaction term \Eq{eq:Hubbard} similarly to \Eq{eq:auxfield+} and obtain an auxiliary field coupled to the local spin-flip $(\hat{c}^{\dagger}_{i\uparrow} \hat{c}_{i\downarrow} + \hat{c}_{i\downarrow}^{\dagger} \hat{c}_{i\uparrow} )$,
which connects \emph{different} flavors on the \emph{same} site. Thus, for the spin-orbital ordering ($\mathcal{A}\uparrow$, $\mathcal{B}\downarrow$; $\mathcal{B}\uparrow$, $\mathcal{A}\downarrow$), it is easy to see that the effective Hamiltonians are bipartite and symmetric, and therefore satisfy the condition of the Corollary. This shows that an auxiliary field coupled to the $x$-component of the spin operator is sign-problem-free for the model~(\ref{eq:Hubbard})~\footnote{Such a decomposition was previously discussed in~\cite{Chen:1992wo}. It is also related to the anomalous decoupling used in~\cite{Batrouni:1990ug} up to a particle-hole transformation.}. The attractive case can be studied without a sign problem by performing a particle-hole transformation on the model. Alternatively, one can perform the decomposition according to \Eq{eq:auxfield-} for attractive interactions, and thus obtain a sign-problem-free simulation with the auxiliary field coupled to the $y$-component of the spin operator. Moreover, there is no sign problem even when we explicitly add spin-flip terms in the Hamiltonian as long as the hopping matrix satisfies the condition of~\Eq{eq:corollary}. This covers a large class of compass Hubbard models~\cite{Nussinov:2015dp}, which are relevant to multiorbital and ultracold-atom systems~\cite{Zhao:2008ev, Wu:2008ea, Budich:2015vc}.
Using the special choice of $\Gamma = -U/4$, the above solution reduces to the (L)CT-INT formulation and the determinant of the two flavors factorizes into two parts
in the absence of the single-particle spin-flip terms.
Even though the two determinants are not necessarily equal due to the broken TRS, Theorem~(\ref{eq:theorem}) ensures that they have the \emph{same sign} because the evolution matrices of the two flavors lie in the same component of $O(n,n)$. In contrast to the case of spinless fermions, the vertex matrix of $\hat{v}_{i} = \frac{U}{4}e^{i\pi(\hat{n}_{i\uparrow}+\hat{n}_{i\downarrow})}$ can bring the evolution matrix into all four components of the $O(n,n)$ group since each vertex matrix changes the sign of either $\det(M_{11})$ or $\det(M_{22})$ of both flavors. The Monte Carlo weights of odd expansion orders vanish because of \Eq{eq:theorem+-}. Although the matrix size in the LCT-INT simulation is only half that of the previously discussed auxiliary field approach, the use of two-vertex insertion/removal updates~\cite{Rubtsov:2005iw} in the Monte Carlo simulation leads to more complicated updates and measurement procedures~\cite{Kozik:2013ji, YHLiu:2015}. The auxiliary field approach may thus be advantageous.
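The vanishing of the odd-order weights can again be checked in a minimal sketch (the four-site hopping matrix and imaginary times are hypothetical choices): a single on-site flip moves one flavor's evolution matrix into a mixed component of $O(n,n)$, where the determinant vanishes identically by (\ref{eq:theorem+-}):

```python
import numpy as np
from scipy.linalg import expm

n = 2
eta = np.diag([1.0]*n + [-1.0]*n)
B = np.array([[1.0, 0.0], [1.0, 1.0]])
K = np.block([[np.zeros((n, n)), B], [B.T, np.zeros((n, n))]])

flip = np.diag([-1.0, 1.0, 1.0, 1.0])    # e^{i pi n_i} on a single A site
M = expm(-0.7*K) @ flip @ expm(-0.5*K)   # one vertex: odd expansion order
assert np.allclose(M.T @ eta @ M, eta)   # M is in the mixed component O-+
print(np.linalg.det(np.eye(2*n) + M))    # vanishes identically
```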
These solutions to the sign problem can also be applied to projector QMC methods~\cite{Sugiyama:1986vt, White:1989wh, Wang:2015tf} which sample the ground state wave-function overlap $\braket{\Psi_{T}|e^{-\Theta \hat{H}} |\Psi_{T}}$ instead of the partition function. One can choose the trial-wave function $\ket{\Psi_{T}}$ as the ground state of a single-particle trial Hamiltonian that fulfills the condition of Corollary (\ref{eq:corollary}) to avoid the sign problem.
All the sign-problem-free models solved by \Eq{eq:theorem} and \Eq{eq:corollary} in this Letter are at half-filling on bipartite lattices with particle number conservation~\footnote{Models with explicit pairing terms naturally fit the Majorana QMC formalism~\cite{Li:2015jf}.}. It will be interesting to see whether one can even go beyond this constraint. Conversely, we emphasize that the requirements of \Eq{eq:theorem} and \Eq{eq:corollary} are by no means the \emph{necessary} conditions for a sign-problem-free QMC simulation. There should be more ``de-sign'' principles of this kind for fermionic Hamiltonians and quantum Monte Carlo methods. Our work suggests it is fruitful to exploit the inherent Lie group and Lie algebra structure in the Monte Carlo weight to search for such ``de-sign'' principles. Incidentally, both the split orthogonal group and the TRS ``de-sign'' principle seem to be related to the ten-fold way classification of random matrices~\footnote{They solve models in the chiral orthogonal (BDI) and symplectic (AII) symmetry classes respectively.}. It would be interesting to generalize them to other symmetry classes~\cite{zirnbauer1996riemannian, PhysRevB.55.1142, heinzner2005symmetry} and draw connections to recent topological classification of gapped free-fermion systems~\cite{PhysRevB.78.195125, kitaev2009periodic, 1367-2630-12-6-065010}.
Furthermore, the findings reported in this paper apply as well to fermions coupled to quantum spins or $\mathbb{Z}_2$ gauge fields. Theorem (\ref{eq:theorem}) ensures a matrix determinant with a definite sign after integrating out the fermions, as long as the split orthogonal group structure is respected. This makes it possible to design new sign-free models relevant to lattice gauge theories~\cite{SM}.
\paragraph{Acknowledgements} We thank Ethan Brown, Peter Br\"ocker, Zi-Xiang Li, Simon Trebst and Hong Yao for useful discussions. We also thank all who contributed to the discussions of the MathOverflow question \href{http://mathoverflow.net/questions/204460/how-to-prove-this-determinant-is-positive}{``How to prove this determinant is positive?''} which led to the proof of the Theorem and the Corollary. The work at ETH was supported by ERC Advanced Grant SIMCOFE, the Swiss National Science Foundation, and the National Center of Competence in Research Quantum Science and Technology QSIT. Gergely Harcos was supported by OTKA grants K 101855 and K 104183 and ERC Advanced Grant 321104.
\bibliographystyle{apsrev4-1}
\section{Introduction}
Ultra-cool dwarfs (UCDs: late M dwarfs and brown dwarfs with effective temperatures below 2,700~K, \citealt{Martin1999, Kirkpatrick2005}) make up a significant fraction of all stellar objects in the Galaxy. Counting dwarfs of spectral type M7 and later, including L, T, and Y dwarfs, the census of stars and brown dwarfs by \citet{kirkpatrick_2012} suggests that UCDs account for about 18 per cent of the stellar and substellar objects within 8~pc of the Sun. UCDs are excellent targets in the search for temperate transiting planets because of their small masses and radii. Owing to their low temperatures, the habitable zones of UCDs lie 30--100 times closer to the host star than the Sun's habitable zone. Temperate planets therefore have correspondingly short orbital periods of one to a few days, which increases the likelihood of observing transits \citep{gillon_trappist_2013}.
Since the radius of a mature UCD is about ten times smaller than that of the Sun \citep{Chabrier1997, Dieterich2014}, the transit depth of an Earth-sized planet is of the order of 1 per cent, which is within the detection range of small ground-based telescopes. Additionally, planets orbiting UCDs are optimal targets for the characterisation of the chemical composition of their atmospheres via transmission spectroscopy \citep{Kalt2009, dewit_2013}. However, despite their high frequency, the statistics of the planetary population of late M dwarfs and brown dwarfs are poorly understood \citep{delrez_speculoos_2018}. This is mainly due to the low intrinsic brightness of UCDs, which reduces the sample of stars within reach of radial velocity and transit searches. Additionally, UCDs can display a high degree of stellar activity, which complicates planet detection with both the radial velocity and the transit technique. Knowledge of planet occurrence rates comes mainly from the \textit{Kepler} mission \citep[e.g. ][]{dressing_2015}. However, UCDs were not the primary targets of the \textit{Kepler} mission. Furthermore, some ground-based surveys, such as MEarth \citep{Berta_2012}, which focuses on M dwarfs, were designed for planets with radii above 2$R_{\oplus}$ at short orbital periods; such planets turned out to be rare \citep{Berta_2013}.
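As a back-of-the-envelope check of the quoted depth, the transit depth scales as $(R_{\rm p}/R_\star)^2$; assuming a representative mature-UCD radius of $0.1\,R_\odot$ (an illustrative value, not a measurement from this survey):

```python
# Rough transit-depth estimate: depth ~ (R_planet / R_star)^2.
R_EARTH_KM = 6371.0          # mean Earth radius
R_SUN_KM = 695700.0          # nominal solar radius
r_star_km = 0.1 * R_SUN_KM   # assumed representative mature-UCD radius

depth = (R_EARTH_KM / r_star_km) ** 2
print(f"transit depth ~ {100 * depth:.2f} per cent")  # ~0.84 per cent
```

This confirms the order-of-magnitude statement: an Earth-sized planet transiting a typical UCD produces a depth of roughly 1 per cent.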
Radial velocity and transit surveys indicate that tightly packed systems of low-mass planets are common around solar-type stars \citep{howard_2010} and red dwarfs \citep{kirkpatrick_2012,dressing_2013,Hardegree_2019}. UCDs could host a comparable planetary population, as simulations by \citet{ormel_2017} suggest. In the present analysis, we test whether this hypothesis is compatible with the results of the TRAPPIST Ultra-Cool Dwarf Transit Survey (TRAPPIST-UCDTS, \citealt{gillon_trappist_2013}), which discovered three temperate Earth-sized planets orbiting the late M dwarf TRAPPIST-1, located 12.1 pc from the Sun \citep{gillon_trappist_2016}. Further transit observations and analysis of transit timing variations have revealed that the system hosts seven rocky, Earth-sized planets on almost perfectly edge-on and circular orbits \citep{gillon_trappist_2017,luger_2017}. No additional planets have been found around TRAPPIST-1 \citep{Ducrot_2020} or any of the other stars in this data set.
The data analysed in this paper were acquired with TRAPPIST-South (TRAnsiting Planets and PlanetesImals Small Telescope-South) \citep{gillon_trappist_2011, jehin_2011}, a robotic 60~cm f/8 Ritchey-Chr\'{e}tien telescope located at the La Silla Observatory in Chile. TRAPPIST-UCDTS has been monitoring late M dwarfs and brown dwarfs since 2011 as a prototype survey for the more ambitious SPECULOOS survey \citep{burdanov_2018,delrez_speculoos_2018, gillon2018}.
Here, we reanalyse the data of 40 targets observed between 2011 and 2017 by TRAPPIST-South starting from the raw images. The pipeline created for this analysis was developed alongside the SPECULOOS Southern Observatory (SSO) pipeline \citep{Murray_2020}.
The paper is structured as follows. In Section \ref{sampledescription}, we describe the sample used for this analysis. In Section \ref{Methods}, we describe the light curve detrending, removal of data points affected by non-planetary signals and the modelling of stellar variability and activity. In Section \ref{pipeline}, the light curve of TRAPPIST-1, and thus the performance of the pipeline for the only planetary system found in the data set, is discussed. In Section \ref{flareenergies}, we estimate the energy of the flare events that we find in the light curves. The transit injection tests characterising the statistics of the survey are outlined in Section \ref{injectiontests}, while the results of the injection tests are described in Sections \ref{expnr} and \ref{expnr2}. Finally, we summarise our results, compare them to other surveys, and discuss the significance for multi-planetary systems around UCDs in Section \ref{conclusion}.
\section{TRAPPIST-UCDTS: instrumentation and methodology} \label{sampledescription}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{obsfreq.pdf}
\caption{Histogram of the added exposure time of all targets. The target with 175 hours total exposure time is TRAPPIST-1, which was observed more extensively due to the detection of a planetary system around it.}
\label{fig:obsfreq}
\end{figure}
TRAPPIST-UCDTS was conducted mostly in the I+z$'$ filter due to the redness of the targets (cf. figure 6 in \citealt{delrez_speculoos_2018}). The response function of the TRAPPIST instrument is depicted in Fig. \ref{fig:response} (M. Gillon -- priv. comm.). The field of view is equal to 22\arcmin~$\times$~22\arcmin with a resolution of 0.65 arcsec/pixel. More information can be found in \cite{gillon_trappist_2011}.\footnote{More information about the equipment is also available here: \url{https://www.trappist.uliege.be/cms/c_5288613/en/trappist-equipment}.}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{response.pdf}
\caption{Response function of the TRAPPIST instrument. The total response is computed by multiplying the different components (the mirror coating reflectance has to be taken into account twice). The limiting contributor on the lower end is the filter transmission while the upper end is determined by the mean CCD quantum efficiency.}
\label{fig:response}
\end{figure}
As TRAPPIST-UCDTS is a targeted survey, the exposure time is optimised for each target to maximise the signal-to-noise ratio and duty cycle and is typically between 30 and 60 seconds. Most parts of the light curves allow the detection of transits caused by Earth-sized planets. The UCDs have typically been observed for 50 hours, as visible in Fig. \ref{fig:obsfreq}. The quoted value is the total exposure time, not the total observation time dedicated to the target. A description of the survey and the first 20 light curves (5 M6, 6 M7, 4 M8, and 5 M9) is provided in \cite{gillon_trappist_2013}. The first 40 light curves observed between 2011 and 2017 are characterised further in \cite{delrez_speculoos_2018} and \citet{burdanov_2018}.
The stars, and associated parameters based on Section \ref{radiiandmasses}, are listed in Table \ref{startable}.
Some of the target stars have been further observed within the TRAPPIST-UCDTS or SPECULOOS programs without identifying additional transiting planets.
\begin{table*}
\caption{The coordinates are in J2015.5. The apparent $K_S$ magnitudes were retrieved from 2MASS \citep{cutri2003} and the $G-G_{RP}$ colour index from Gaia DR2 \citep{gaia1,gaia2}. Radii and masses are estimated as described in Section \ref{radiiandmasses}. The dagger symbol indicates that the absolute $K_S$ magnitude was above 10, and hence we set the mass estimate to the approximate mass of an M9 star. The number of observation nights is shown below \textit{Nights}, while \textit{DP} indicates the number of measurements before bad-weather removal. \textit{Exp} is the mean exposure time and \textit{Span} the number of days between the first and the last observation. In the column below \textit{RMS}, we list the median nightly RMS of the differential light curves before division by the GP mean (i.e. as in Fig. \ref{star1} panel 5).}
\label{startable}
\begin{tabular}{lrrrrrlrrrrr}
\hline
Star id & RA & DEC & $K_S$ & $G-G_{RP}$ & Radius & Mass & Nights & DP & Exp & Span & RMS \\
& {[}deg{]} & {[}deg{]} & & & {[}$R_\odot${]} & {[}$M_\odot${]}& & & {[}s{]} & {[}d{]} & {[}\%{]}\\
\hline
2MASS J03111547+0106307 & 47.8150 & 1.1085 & 9.7550 & 1.3880 & 0.166 & 0.130 & 6 & 3056 & 28.8 & 387 & 0.49 \\
2MASS J03341218-4953322 & 53.5667 & -49.8902 & 10.3920 & 1.6013 & 0.108 & 0.075 $\dagger$ & 21 & 5953 & 51.8 & 1448 & 0.49 \\
2MASS J07235966-8015179 & 110.9878 & -80.2518 & 10.4400 & 1.4577 & 0.128 & 0.104 & 18 & 4294 & 51.2 & 1151 & 0.38 \\
2MASS J08023786-2002254 & 120.6600 & -20.0428 & 9.5750 & 1.2125 & 0.287 & 0.273 & 5 & 4450 & 20.3 & 1400 & 0.31 \\
2MASS J11592743-5247188 & 179.8564 & -52.7891 & 10.3220 & 1.6090 & 0.108 & 0.075 $\dagger$ & 29 & 8878 & 50 & 38 & 0.71 \\
2MASS J15072779-2000431 & 226.8663 & -20.0123 & 10.6610 & 1.5500 & 0.157 & 0.124 & 5 & 1172 & 50 & 9 & 0.54 \\
2MASS J15345704-1418486 & 233.7331 & -14.3151 & 10.3050 & 1.6171 & 0.110 & 0.075 $\dagger$ & 14 & 4994 & 65 & 760 & 0.39 \\
2MASS J20392378-2926335 & 309.8508 & -29.4461 & 10.3670 & 1.4521 & 0.140 & 0.112 & 12 & 5368 & 55 & 756 & 0.35 \\
2MASS J21342228-4316102 & 323.5938 & -43.2730 & 9.6850 & 1.3983 & 0.170 & 0.134 & 4 & 2495 & 25.9 & 449 & 0.65 \\
2MASS J22135048-6342100 & 333.4614 & -63.7020 & 9.9380 & 1.4442 & 0.134 & 0.108 & 11 & 5533 & 41.2 & 738 & 0.74 \\
APMPM J2330-4737 & 352.5638 & -47.6167 & 10.2790 & 1.5189 & 0.120 & 0.096 & 9 & 3749 & 40.1 & 741 & 0.46 \\
APMPM J2331-2750 & 352.8410 & -27.8272 & 10.6510 & 1.5543 & 0.112 & 0.079 & 4 & 1727 & 24.4 & 1543 & 0.74 \\
DENIS J051737.7-334903 & 79.4094 & -33.8190 & 10.8320 & 1.5912 & 0.118 & 0.093 & 20 & 5416 & 50 & 34 & 0.65 \\
DENIS J1048.0-3956 & 162.0541 & -39.9395 & 8.4470 & 1.5876 & 0.108 & 0.075 $\dagger$ & 8 & 7505 & 18.7 & 734 & 0.49 \\
GJ 283 B & 115.0859 & -17.4151 & 9.2910 & 1.4724 & 0.124 & 0.101 & 9 & 7630 & 17.7 & 745 & 0.35 \\
GJ 644 C & 253.8934 & -8.3984 & 8.8160 & 1.5169 & 0.117 & 0.091 & 13 & 12513 & 17.2 & 1130 & 0.41 \\
LEHPM 2- 783 & 304.9574 & -58.2801 & 9.7150 & 1.4957 & 0.164 & 0.129 & 17 & 13311 & 20.7 & 745 & 0.51 \\
LHS 1979 & 121.4061 & -9.5465 & 9.4430 & 1.3129 & 0.182 & 0.144 & 8 & 7973 & 10.8 & 1822 & 0.47 \\
LHS 5303 & 238.1869 & -26.3891 & 9.3150 & 1.4505 & 0.136 & 0.109 & 9 & 6866 & 24.6 & 649 & 0.35 \\
LP 593-68 & 57.7502 & -0.8812 & 10.2320 & 1.5153 & 0.127 & 0.103 & 8 & 2933 & 50.7 & 1096 & 0.45 \\
LP 655-48 & 70.0984 & -5.5017 & 9.5450 & 1.5314 & 0.121 & 0.097 & 4 & 1726 & 45 & 376 & 0.67 \\
LP 666-9 & 133.3984 & -3.4931 & 9.9420 & 1.5754 & 0.109 & 0.075 $\dagger$ & 19 & 6698 & 50.6 & 1465 & 0.63 \\
LP 698-2 & 323.1245 & -5.2012 & 10.3790 & 1.4241 & 0.151 & 0.120 & 5 & 1365 & 55 & 427 & 0.31 \\
LP 760-3 & 337.2250 & -13.4266 & 9.8430 & 1.4771 & 0.119 & 0.094 & 7 & 3418 & 17.7 & 800 & 0.47 \\
LP 775-31 & 68.8180 & -16.1145 & 9.3520 & 1.5455 & 0.134 & 0.108 & 6 & 3065 & 29.2 & 372 & 0.42 \\
LP 787-32 & 140.6950 & -15.7911 & 10.0510 & 1.3439 & 0.166 & 0.130 & 12 & 7877 & 32.4 & 1464 & 0.59 \\
LP 789-23 & 151.6317 & -16.8898 & 10.9920 & 1.5166 & 0.124 & 0.101 & 13 & 1713 & 50 & 18 & 0.60 \\
LP 851-346 & 178.9268 & -22.4171 & 9.8810 & 1.5426 & 0.118 & 0.093 & 10 & 8652 & 27 & 1077 & 0.54 \\
LP 911-56 & 206.6901 & -31.8231 & 10.0380 & 1.4809 & 0.129 & 0.104 & 11 & 5938 & 48.6 & 753 & 0.33 \\
LP 914-54 & 224.1570 & -28.1671 & 8.9280 & 1.5337 & 0.118 & 0.093 & 18 & 15823 & 15.4 & 1108 & 0.48 \\
LP 938-71 & 15.7207 & -37.6277 & 10.0690 & 1.5672 & 0.116 & 0.089 & 7 & 2031 & 43.6 & 488 & 0.45 \\
LP 944-20 & 54.8985 & -35.4276 & 9.5480 & 1.5922 & 0.108 & 0.075 $\dagger$ & 10 & 3925 & 50.1 & 349 & 0.47 \\
LP 993-98 & 40.5279 & -41.4100 & 10.5500 & 1.3524 & 0.168 & 0.132 & 6 & 2935 & 40 & 7 & 0.36 \\
LP 888-18 & 52.8763 & -30.7125 & 10.2640 & 1.5688 & 0.116 & 0.089 & 8 & 3821 & 38.8 & 739 & 0.43 \\
SCR J1546-5534 & 236.6718 & -55.5809 & 9.1120 & 1.5682 & 0.124 & 0.100 & 33 & 14557 & 30 & 103 & 0.35 \\
SIPS J1309-2330 & 197.3411 & -23.5116 & 10.6690 & 1.5578 & 0.116 & 0.089 & 13 & 4706 & 50 & 446 & 0.62 \\
TRAPPIST-1 & 346.6264 & -5.0435 & 10.2960 & 1.5484 & 0.115 & 0.087 & 64 & 12357 & 55 & 935 & 0.48 \\
UCAC4 379-100760 & 271.4345 & -14.3784 & 8.8610 & 1.3889 & 0.145 & 0.116 & 7 & 7067 & 15 & 405 & 0.49 \\
V* DY Psc & 6.1023 & -1.9716 & 10.5390 & 1.6024 & 0.111 & 0.075 $\dagger$ & 5 & 1136 & 50 & 10 & 0.78 \\
VB 10 & 289.2277 & 5.1632 & 8.7650 & 1.1037 & 0.113 & 0.083 & 8 & 9505 & 15.3 & 381 & 0.46 \\
\hline
\end{tabular}
\end{table*}
\section{Methods} \label{Methods}
During the survey, the data were inspected for transits by eye on a day-by-day basis. We perform an automated global analysis for this study, which ensures consistency in how the data are treated and which enables us to estimate the probability of finding planets around UCDs using a clearly defined detection criterion.
The individual steps in the automated global analysis are detailed in the following subsections.
\subsection{Image reduction}
To calibrate the science images, we employ the standard offset, bias, and dark subtraction followed by flat-field division.
Thereafter, we perform aperture photometry using the CASUTools software \citep{casutools}. For each target, 20 suitable images are stacked with CASUTools \texttt{imstack} to generate a high-quality image of the field and extract the coordinates ($\text{RA}_i$, $\text{DEC}_i$) of the stars using CASUTools \texttt{imcore}, CASUTools \texttt{wcsfit} and \texttt{astrometry.net} \citep{astrometrynet}.
The flux of the stars in all images of the respective field is then computed with \texttt{imcorelist} by adding the pixel values within apertures at the coordinates $\text{RA}_i$ and $\text{DEC}_i$, and subtracting the background flux. To estimate the latter, \texttt{imcore} divides the image into a grid of scale size 64 pixels and computes the iteratively k-sigma clipped median on the grid sections. It then lightly filters these values and computes the background via bilinear interpolation. We account for proper motion by recomputing the coordinates of the stars in a target field for each night based on Gaia DR2 \citep{gaia1,gaia2} proper motion data. This is necessary since the target stars are nearby and the time span between the first and the last measurement can be extensive, as visible in Table \ref{startable} or Fig. \ref{star2}. The aperture radius is set to 7.07 pixels (4.6 arcsec) for all stars and images. \cite{Damasso_2010} suggest an aperture radius between 2 and 3 times the mean FWHM of the field stars, which indicates that an aperture radius between 6 and 9 pixels is optimal for most of our science images. To test this further, we computed the RMS of the differential target light curves for different aperture radii. We did not remove bad-weather data, but we applied an iterative global 4-sigma clipping to remove outliers. Based on this analysis, we concluded that 7.07 pixels is the most suitable aperture radius, yielding the lowest mean RMS. We chose a fixed aperture over a dynamic one to avoid adding structure to the light curves, which could result from a suboptimal background estimate that a dynamic aperture would propagate to a varying extent.
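The aperture-summation step can be illustrated with a minimal pure-Python sketch. This is a simplified stand-in for CASUTools \texttt{imcorelist}, not its implementation: it assumes a single precomputed per-pixel background value and applies no partial-pixel weighting at the aperture edge, both of which the real software handles more carefully.

```python
def aperture_flux(image, x0, y0, r, background):
    """Sum the pixel values inside a circular aperture of radius r centred
    on (x0, y0) and subtract a per-pixel background estimate.
    `image` is a list of rows; a pixel contributes fully if its centre
    falls inside the aperture (no partial-pixel weighting)."""
    flux = 0.0
    npix = 0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            if (x - x0) ** 2 + (y - y0) ** 2 <= r ** 2:
                flux += val - background
                npix += 1
    return flux, npix
```

For a uniform 5$\times$5 image of value 10 with background 2 and radius 1.5, the aperture covers 9 pixel centres and returns a background-subtracted flux of 72.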
\subsection{Differential photometry}
To correct for photometric effects of non-astrophysical origin, we divide the raw target light curve by the weighted average (hereafter called the Artificial Light Curve, ALC) of the median-normalised light curves of selected comparison stars in the same frame as the target star.
First, we define a set of potential reference stars in the same field as the target star. This set consists of all the stars with more than 100 captured electrons per second in their aperture. We also remove bright stars which lead to saturated pixels. We then arrange the median-normalised light curves in a matrix \textit{M} of dimension ($n$,$m$) with $n$ the number of potential reference stars and $m$ the number of science images of the respective field. The matrix form enables us to compare the flux measurements in the same science image (the first axis) or in time (the second axis).
\begin{enumerate}[leftmargin=.5cm]
\item
The initial step consists of 3-sigma clipping along the first axis of \textit{M}.
The fluxes of the normalised light curves in the same science image are similarly affected by the airmass, since they are median-normalised along the second axis in \textit{M}. Therefore, we can compare them to each other to remove outliers and to find stars that behave very differently from the median star in the field. The standard deviation of the normalised flux values in a science image is dominated by the fainter stars. Consequently, this step represents a coarse clipping on the brighter stars. These bright stars receive a high weight in step (iii), which motivates the sigma clipping here, since a star can display a high degree of variability that cannot be removed by sigma clipping along the second axis of matrix \textit{M}.
\item
A reference star's light curve is discarded if more than 20 per cent of its values are removed in step (i).
\item
We set the statistical weight $w^j$ of each remaining potential reference star $j$ to its median brightness before normalisation:
\begin{equation} \label{eq:weightinitial}
w^j_{\text{initial}}=\text{Md}_i(N^j_i),
\end{equation}
where $N^j_i$ is the photon count for star $j$ in science image $i$ in a fixed aperture and $\text{Md}_i(N^j_i)$ is the median over all photon counts of star $j$.
By setting the statistical weights $w^j$ to the initial weights $w^j_{\text{initial}}$ as above, we compute the ALC as the weighted mean of the normalised light curves:
\begin{equation} \label{eq:alc}
\text{ALC}_i=\sum\limits_{j=1}^{n_{\text{ref}}} \frac{N^j_i w^j}{\text{Md}_i(N^j_i)} {\left(\sum\limits_{j=1}^{n_{\text{ref}}} w^j\right)}^{-1},
\end{equation}
where $\text{ALC}_i$ is the ALC value for the $i^{\text{th}}$ science image.
We tested a more sophisticated noise model including read-out, dark, and scintillation noise as initial weights. This approach led to the same final weights.
\item
By dividing each light curve in \textit{M} by the ALC, we get a first estimate of the reference stars' differential light curves. We set the weight of each reference star to the inverse square of the RMS of its differential light curve:
\begin{equation} \label{eq:weights}
w^j={\text{RMS}_i\left( \frac{N^j_i}{\text{Md}_i(N^j_i)} \frac{1}{\text{ALC}_i} \right)}^{-2}
\end{equation}
and recompute the ALC as in Eq. \ref{eq:alc}.
\item
We repeat step (iv) until the weights converge to within 0.00001, which is typically achieved after 20 iterations.
Reusing the variability estimate to refine the weights in the ALC calculation is conceptually similar to the detrending approach in \citet{broeg_2005}.
\item
Some stars can have a higher RMS than similarly bright stars, either due to continuous brightness variations caused by, for example, pulsation or binarity, or because some parts of the light curve are noisy as a result of bad pixels, an imperfect flat-field correction, or flares. Other stars are perfectly quiet throughout the light curve. Because the weights in the ALC calculation are fixed in time (cf. Eq. \ref{eq:alc}), a star with non-stationary noise that is not present in the other light curves is overweighted during its noisy parts and underweighted during its quiet parts. Thus, these variable stars can induce spurious features in the ALC, but they can easily be identified and removed. For this purpose, we modelled the RMS of the normalised light curve of a calm star with median photon count $N^j$ by fitting the following function with three fitting parameters ($a$, $b$, $c$) to the RMS of the stars with the lowest RMS in a given flux bin:
\begin{eqnarray}
RMS(N^j) \ &\propto & \ \frac{\sqrt{\sigma_{\text{photon}}^{2} + \sigma_{\text{const}}^{2} + \sigma_{\text{scintillation}}^{2}}}{N^j}\nonumber \\ &\propto & \ a \sqrt{\frac{1}{N^j} + \frac{b}{(N^j)^{2}} + c}.
\label{lowerrorfit}
\end{eqnarray}
Poisson noise is proportional to $\sqrt{N^j}$, while scintillation noise is proportional to ${N^j}$ \citep{young1967}. In addition, there is a constant error term to account for read-out and dark current noise.
Stars with an RMS 20 per cent above the minimal RMS of a calm star, calculated with Eq. (\ref{lowerrorfit}), are identified as variable stars. We exclude these stars from the reference star set, as shown in Fig. \ref{excludenoisystars}. The theoretical noise estimate, computed following eq. 3 of \cite{Damasso_2010}, is included in Fig. \ref{excludenoisystars} for comparison with our fit of the RMS of the entire light curves, including unfavourable nights.
\begin{figure}
\includegraphics[width=\columnwidth]{rmscut82.pdf}
\caption{Each dot displays the RMS of a potential reference star's entire light curve in the field of one target star. We used Eq. (\ref{lowerrorfit}) as a fitting function to estimate a lower limit for the RMS achievable in this survey as a function of the photon count per second (thick solid black line). If the RMS of the normalised differential light curve of a reference star exceeds the dashed line, we assume that this star is variable and exclude it from the Artificial Light Curve calculation. Stars with count rates of less than 100 electrons per second have already been excluded from this figure. The RMS does not follow the RMS model for stars with low electron counts since we sigma-clip the flux values measured in the same science image. The RMS values lie above the theoretical noise estimate (thin solid black line; \citealt{Damasso_2010}) because we compute the RMS of the entire light curves, including measurements impacted by unfavourable weather.}
\label{excludenoisystars}
\end{figure}
\item
Using the estimate of the ALC from step (v), we can 4-sigma clip along the reference stars' differential light curves to remove any outliers which have not been flagged in step (i). If more than 20 per cent of a reference star's light curve is flagged, its weight is set to zero.
\item
We repeat steps (v) and (vi).
\item
The previous steps optimise the ALC with respect to a typical star in the given target field, neglecting the fact that noise can be locally correlated and that light curves are colour-dependent. To improve the correction of the target star, we increase the statistical weight of reference stars near the target star as a function of angular distance to the target star. More specifically, we multiply the statistical weight of the reference stars by a factor $w_\text{d}$ which depends on the angular distance of the respective star to the target star. The distance weighting is computed for each target star field individually, and it remains the same over the full light curve. Since the distance weighting function should be flat near the target star and decay further out, we use the following functional form for $w_\text{d}$:
\begin{equation}
w_\text{d} = \frac{1}{1+\left(a \frac{d_r}{\text{max}_r(d_r)}\right)^2}
\label{distanceweighting}
\end{equation}
where $a$ is a fitting parameter, $d_r$ is the angular distance between target and reference star, and $\text{max}_r(d_r)$ is the angular distance to the star furthest from the target star.
Simultaneously with parameter $a$ in equation (\ref{distanceweighting}), we set a lower threshold for the colour since the target stars are very red compared to the average field star. This cut-off reduces the impact of colour-dependent atmospheric effects on the differential light curves. For this purpose, we calculate the RMS of the final light curves for a grid of potential values of parameter $a$ and colour thresholds based on the Gaia DR2 $G-G_{RP}$ colour index. We proceed with the combination of the two parameters which minimises the RMS of the respective light curve.
The typical cutoff value for $G-G_{RP}$ is around 0.5, while the parameter $a$ in the distance weighting scheme is more variable but typically around 2.4.
\end{enumerate}
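Steps (iii)--(v) above can be condensed into a short iterative sketch (pure Python; equal-length light curves without missing data are assumed, and the weights are normalised to sum to one so that the convergence tolerance is meaningful -- a convention not spelled out in the text):

```python
def compute_alc(norm_flux, weights):
    """Weighted mean of the median-normalised reference light curves:
    one ALC value per science image (cf. the ALC equation in the text)."""
    wsum = sum(weights)
    n_img = len(norm_flux[0])
    return [sum(w * star[i] for w, star in zip(weights, norm_flux)) / wsum
            for i in range(n_img)]

def iterate_weights(norm_flux, weights, tol=1e-5, max_iter=100):
    """Refine each star's weight to the inverse square of the RMS of its
    differential light curve until the normalised weights converge."""
    total = sum(weights)
    weights = [w / total for w in weights]
    for _ in range(max_iter):
        alc = compute_alc(norm_flux, weights)
        raw = []
        for star in norm_flux:
            # differential light curve of this reference star
            diff = [f / a for f, a in zip(star, alc)]
            mean = sum(diff) / len(diff)
            rms = (sum((d - mean) ** 2 for d in diff) / len(diff)) ** 0.5
            raw.append(max(rms, 1e-12) ** -2)   # inverse-variance weight
        total = sum(raw)
        new = [w / total for w in raw]
        if all(abs(a - b) < tol for a, b in zip(weights, new)):
            return new, alc
        weights = new
    return weights, compute_alc(norm_flux, weights)
```

Feeding in one noisy and two quiet synthetic reference stars, the noisy star ends up with the lowest weight, as intended.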
\subsection{Removal of bad-weather data} Clouds or hazes can lead to spatial inhomogeneity in the transparency of the atmosphere, which hampers the effectiveness of differential photometry. Additionally, these and similar effects decrease the number of photons that reach the detector, thus increasing photon noise. We tackle this problem by removing all flux measurements where the running mean of the ALC is below 0.5 or where the running RMS exceeds 10 per cent. For the running RMS, we compute the RMS of typically 15 flux values closest to the point in time where we intend to estimate the scatter in the light curve. The same procedure is applied for the running mean. Fig. \ref{rmsrms} illustrates that the running RMS of the differential target light curve is increased if the running RMS of the ALC is high. It also shows that the minimal RMS of the target light curve increases as a function of the ALC RMS, which is consistent with the findings of \cite{Berta_2012}.
\begin{figure}
\includegraphics[width=\columnwidth]{dp2.pdf}
\caption{Comparison between the running RMS of the differential target light curve (not GP-detrended but corrected for flares and cosmic rays) and the ALC. The RMS of the target light curve is low if the conditions are favourable. Adverse conditions such as clouds or hazes, identified by the high RMS in the Artificial Light Curve (ALC), can cause some scatter in the target light curve. We remove any flux value where the RMS of the ALC is above 10 per cent. \textsc{cubehelix} colour scheme \citep{Green_2011}.}
\label{rmsrms}
\end{figure}
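The bad-weather cut can be sketched as follows (window of 15 points and thresholds as in the text; at the light-curve edges the window is simply truncated, one plausible choice among several):

```python
def running_stats(values, window=15):
    """Running mean and RMS over roughly `window` points around each index."""
    half = window // 2
    means, rmss = [], []
    for i in range(len(values)):
        lo = max(0, i - half)
        seg = values[lo:lo + window]   # truncated near the edges
        m = sum(seg) / len(seg)
        means.append(m)
        rmss.append((sum((v - m) ** 2 for v in seg) / len(seg)) ** 0.5)
    return means, rmss

def good_weather_mask(alc, mean_floor=0.5, rms_ceiling=0.10, window=15):
    """True where the ALC running mean and RMS pass the bad-weather cuts."""
    means, rmss = running_stats(alc, window)
    return [m >= mean_floor and r <= rms_ceiling for m, r in zip(means, rmss)]
```

A synthetic ALC that drops to 0.3 in its middle section is masked out there while the clear-sky sections survive.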
\subsection{Removal of flares}
Flares are characterised by an abrupt rise followed by a slower decrease in brightness.
They can pose significant problems for all of the following light curve optimisation steps, and they tend to confound the transit search. Therefore, it is beneficial to find and remove data points affected by flares. Various approaches have been chosen in other surveys to address this problem. \cite{Berta_2012}, for instance, performed a grid search on their data, modelling a flare as a fast rise in flux followed by an exponential decay. They removed a night's light curve if they found a flare with an amplitude above 4 sigma. Since stars were often observed during one full night in TRAPPIST-UCDTS, we cannot discard a whole night if there was a flare, because it would mean losing a significant amount of data. The MEarth telescopes observe a star roughly every 20 minutes, while TRAPPIST-UCDTS focuses on one target at a time with a cadence of the order of a minute. We therefore devised a criterion to identify flares and remove the affected data points from the light curves.
Using \textit{Kepler} data, \cite{Davenport_2014} generated a median flare template from 885 flares observed on the M4 star GJ 1243 to analyse the different phases of a flare. Their template suggests a flux rise, characterised by a fourth-order polynomial, followed by an initial impulsive exponential decay which smoothly transitions into a more gradual exponential decay phase. Like other authors \citep[e.g.][]{Walkowicz_2011,Loyd_2014}, we approximated a flare's shape as a single exponential decay. This simple model is sufficient for our purposes since we iteratively fit and remove flares and because a gradual flux decay within the typical flux range is not expected to confound the transit search code.
Flare candidates in the differential target light curves are identified by evaluating criterion~(\ref{eq:flares}) with condition (\ref{eq:condition}) for all flux values:
\begin{equation}
\: \frac{2 f_{j}-f_{j-2}-f_{j+3}}{\sigma_j} \:\cdot \: \frac{\left|f_{j} - f_{j-2}\right|-\left| f_{j+1} - f_{j}\right|}{\sigma_j}>12
\label{eq:flares}
\end{equation}
\begin{equation}
2 f_{j}-f_{j-2}-f_{j+3}>0,
\label{eq:condition}
\end{equation}
where $f_j$ is the $j^{\text{th}}$ flux measurement of the respective star and $\sigma_j$ is the RMS of the 60 data points closest to $j$.
Criterion~(\ref{eq:flares}) can be understood as follows.
For any data point, we compare the potential flux peak $f_j$ to adjacent flux measurements to find flares by exploiting the fact that there is a peak in the light curve and that the flux rises more rapidly than it decays. If there is a flare in the light curve and it peaks at $f_j$, then $f_{j}-f_{j-2}$ captures the flux rise and is positive. In this case, $f_j-f_{j+3}$ is a measure for how quickly the flux decays after the flare peak and is positive as well. The first component of criterion~(\ref{eq:flares}) favours strong signals consisting of a very sharp rise in flux followed by a fast decay since in this case both $f_{j}-f_{j-2}$ and $f_j-f_{j+3}$ are positive and add up.
In the second part of criterion~(\ref{eq:flares}) we exploit the asymmetry of a flare's shape, since $f_{j}-f_{j-2}$ is expected to be much greater than $f_j-f_{j+1}$. Here, $f_j-f_{j+1}$ was chosen over $f_j-f_{j+3}$ since it captures the asymmetry better. To avoid false signals from noisy data, we divide both parts of criterion~(\ref{eq:flares}) by the sigma-clipped standard deviation of the flux differences\footnote{For each data point $j$, we compute the difference between two adjacent flux measurements for the 60 data points closest to $j$. Towards the beginning or end of a night's observation period, the number of included data points gradually shrinks to 30. A flare in the data leads to a high standard deviation of the flux differences, which decreases the strength of the detection signal. Therefore, we clip the flux differences iteratively with $\sigma_{\text{upper}}$ equal to 6 and $\sigma_{\text{lower}}$ equal to 10. A lower sigma leads to a higher number of detections but also to more false positives, which are expected to be filtered out by requiring the flare amplitude to exceed a certain threshold.}. The cut-off value was set to 12 by inspecting the value of criterion~(\ref{eq:flares}) for the smaller flares that we still intend to detect and remove. Assuming the shape of the flare to consist of an abrupt rise followed by an exponential decay, setting the cadence to one minute, the characteristic decay time to 5 minutes, and the standard deviation to 0.5 per cent, we find that the value of criterion~(\ref{eq:flares}) exceeds 12 if the ratio of the flare amplitude to the standard deviation is about 3. Preliminary tests with SPECULOOS data suggest that a lower cut-off might be adequate for other data sets (C.A. Murray -- priv. comm.).
Condition~(\ref{eq:condition}) ensures that signals which resemble inverted flares are not removed. We always use $f_{j}-f_{j-2}$ instead of the more intuitive $f_{j}-f_{j-1}$ because the typical exposure time (40 seconds) is relatively long, and thus a flare can begin just shortly before the shutter is closed. This results in a less steep rise in flux in the light curve that does not accurately represent the actual flux evolution of the star. Thus, we avoid the potential distortion of the true flare signal by comparing the potential peak to the penultimate data point before it. Also, it is possible that the flare indeed rises over the duration of two data points, in which case $f_{j}-f_{j-2}$ leads to a stronger signal.
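One plausible numerical reading of this criterion can be sketched as follows; the exact combination of the two parts is defined by Eq.~(\ref{eq:flares}), so the function below, which simply adds the noise-normalised parts, and all variable names are illustrative assumptions of ours:

```python
import numpy as np

# Illustrative reading of the flare criterion: both parts are normalised by the
# local noise level and combined (the exact form is given by Eq. (flares)).
def flare_score(flux, j, local_std):
    rise_and_decay = (flux[j] - flux[j - 2]) + (flux[j] - flux[j + 3])
    asymmetry = (flux[j] - flux[j - 2]) - (flux[j] - flux[j + 1])
    return (rise_and_decay + asymmetry) / local_std

# Synthetic light curve: flat baseline plus one abrupt flare at index 50
# with an exponential decay (characteristic time of 5 data points).
flux = np.ones(100)
idx = np.arange(50, 100)
flux[50:] += 0.015 * np.exp(-(idx - 50) / 5.0)

score_flare = flare_score(flux, 50, local_std=0.005)   # large at the peak
score_quiet = flare_score(flux, 20, local_std=0.005)   # zero on the flat baseline
```

On the flat baseline the score is zero, while at the flare peak both parts are positive and add up, which is the behaviour the criterion exploits.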
To avoid removing flare-like signals within the typical range of the flux values, we only fit and remove signals if the peak is at least three times the sigma-clipped running RMS above the running median of the data.
The identified flares are removed by fitting an exponential decay to the data points and removing the data starting from the data point just before the flare peak to four times the characteristic decay time after the flare peak, as shown in Fig. \ref{fig:flare1}. We repeat this procedure until no additional flares are found.
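The decay fit and the removal window can be sketched on noise-free toy data; here the baseline is assumed known and the exponential is fitted as a straight line in log space, which differs from a direct non-linear exponential fit but illustrates the same removal rule:

```python
import numpy as np

t = np.arange(200, dtype=float)              # one data point per time unit
flux = np.ones(200)
peak, amp, tau_true = 80, 0.02, 5.0
flux[peak:] += amp * np.exp(-(t[peak:] - t[peak]) / tau_true)

# Fit the exponential decay: with the baseline assumed known, the logarithm of
# the flare excess is linear in time, so a degree-1 polyfit recovers -1/tau.
excess = flux[peak:peak + 30] - 1.0
slope, _ = np.polyfit(t[peak:peak + 30] - t[peak], np.log(excess), 1)
tau = -1.0 / slope

# Remove from the data point just before the peak to 4*tau after the peak.
keep = (t < t[peak] - 1) | (t > t[peak] + 4.0 * tau)
flux_clean, t_clean = flux[keep], t[keep]
```

After the cut, only a small residual tail (below $2\,e^{-4}$ of the amplitude) remains in the light curve.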
\begin{figure}
\centering
\includegraphics[width=.9\columnwidth]{flare1.pdf}
\qquad
\includegraphics[width=.9\columnwidth]{flare2.pdf}
\caption{Two flares in the light curve of LEHPM 2-783 with an exposure time of 21 seconds. In the bottom panel, the flux rises over two data points to its maximum, which justifies using $f_{j}-f_{j-2}$ in the flare detection criterion. We remove all flux values from the data point just before the flux peak to the dashed vertical line (in orange).}
\label{fig:flare1}
\end{figure}
\subsection{Removal of measurements affected by cosmic rays} The electron count within an aperture is significantly increased if a cosmic ray hits a pixel. To circumvent this problem, we discard all images for which the flux of the target star is four times the running RMS above the running mean of its light curve.
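This cut can be sketched with a simple centred-window running mean and RMS (the window size here is our choice, not a value from the text):

```python
import numpy as np

def running_mean_rms(x, half_window=30):
    """Running mean and RMS over a centred window; the window shrinks at the edges."""
    mean = np.empty(x.size)
    rms = np.empty(x.size)
    for i in range(x.size):
        w = x[max(0, i - half_window):min(x.size, i + half_window + 1)]
        mean[i] = w.mean()
        rms[i] = w.std()
    return mean, rms

flux = np.ones(500)
flux[250] = 1.5                               # simulated cosmic-ray hit
mean, rms = running_mean_rms(flux)
keep = flux <= mean + 4.0 * rms               # discard 4-RMS outliers
```

Only the affected exposure is flagged; quiet data points stay within the 4-RMS band even when the outlier inflates their local statistics.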
\subsection{Light curve detrending from pixel position} The flat-field calibration corrects for pixel sensitivity variations. We refine this calibration and correct residual differential pixel sensitivity variations and potential gaps between pixels, which can affect the light curve if the target moves on the CCD. This effect can be modelled by fitting a 2D second-order polynomial in the pixel position of the target to the differential target light curve, separately before and after the meridian flip of the telescope due to the German equatorial mount. Dividing the target light curve by the fitted polynomial for each observation night before and after the meridian flip removes most of the flux dependence on the pixel position. The standard deviation of the star's CCD position during an observation night, before or after the telescope flip, is typically about 1 pixel. A very similar approach was taken by \cite{Berta_2012}, who corrected for correlations between light curve trends and external parameters such as the pixel position in their instrumental systematics model. They also allowed different baselines before and after the meridian flip, which is conceptually the same as our approach. Similar to \cite{Berta_2012}, we do not include a correction for a potential correlation between the FWHM of the target star and its light curve because we find the Pearson correlation coefficient to be usually between $-0.1$ and $0.1$, indicating no photometrically relevant correlation.
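The 2D second-order polynomial fit reduces to a linear least-squares solve; a minimal sketch on a synthetic sensitivity pattern (in practice this is done per night and per side of the meridian flip):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 400)                 # pixel offsets (~1 pixel scatter)
y = rng.normal(0.0, 1.0, 400)
flux = 1.0 + 0.010 * x - 0.005 * y + 0.002 * x * y   # toy sensitivity pattern

# Design matrix for c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2, solved by
# least squares; dividing by the model removes the pixel-position dependence.
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)
detrended = flux / (A @ coeffs)
```

Because the toy pattern lies in the span of the polynomial basis, the detrended flux is flat to machine precision; real data would of course retain photon noise.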
\subsection{Light curve detrending using Gaussian Process model}\label{gpmodel} We employ Gaussian Process \citep[GP;][]{gp_2006} regression to model the intrinsic stellar brightness variations, as well as residual instrumental and atmospheric variations not fully accounted for in our initial detrending. The model is intended to capture the general trend over the typical variation timescale, but not short abrupt signals.
We model long-term variations due to effects such as precipitable water vapour \citep{Bailer-Jones_2003, Berta_2012, Murray_2020}, varying target FWHM, and intrinsic flux variations due to starspots or the inhomogeneous cloud coverage of UCDs. Short-term fluctuations in brightness, which can also result from precipitable water vapour changes \citep{Murray_2020}, should remain in the corrected light curves, however.
GP modelling requires an informed choice of the kernel covariance function $k$, the mean function $\mu$, and an estimate of the uncertainty of each observation $\sigma_y$. The kernel is used to compute the covariance between each pair of measurements (Eq. \ref{eq:kernel} in the present analysis), which results in an $n \: \times \: n$ covariance matrix for $n$ data points. To account for noise in the data, the squared uncertainties of the data points are added on the diagonal.
Combined with the mean function, the covariance matrices then provide a Gaussian distribution at each point of interest.
We use the GP package Celerite \citep{celerite}, which is computationally fast but restricts us in the choice of the kernel. In general, the computational cost of GPs scales cubically with the number of data points. Celerite, however, exhibits linear scaling, which is advantageous for the analysis of a large data set such as ours.
Several kernels were tested, of which the stochastically driven damped simple harmonic oscillator (SHO) kernel with the quality factor set to $1/\sqrt{2}$ and the mean function $\mu$ set to the median flux yielded the best results in terms of flexibility and applicability to all light curves. This kernel treads an intermediate path between smooth periodic and rough stochastic variations. With this quality factor, the SHO kernel is equal to:
\begin{equation}
k(\tau) = S_{0} \: \omega_{0} \: e^{-\frac{1}{\sqrt{2}} \omega_{0} \tau} \cos\left( \frac{ \omega_{0} \tau }{\sqrt{2}} - \frac{\pi}{4} \right),
\label{eq:kernel}
\end{equation}
where $\tau$ is the time difference between two measurements. The free hyperparameters $S_{0}$ and $\omega_{0}$ are computed by maximising the likelihood of the data given the model using the L-BFGS-B non-linear optimisation routine \citep{Byrd_1995,Zhu_1997}. L-BFGS-B is based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm \citep{fletcher1987} and allows solving large non-linear optimisation problems with bounds using the gradient projection method. We include the entire light curve of each target star in this procedure to capture the global behaviour of the light curve and lower the impact of transits on the GP hyperparameters. The mean of the Gaussian distribution predicted by the GP at each time point then serves as a model for the flux originating from the star.
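A direct numpy evaluation of this kernel and the resulting covariance matrix reads as follows (celerite evaluates the same model in linear time rather than building the full matrix; the hyperparameter values here are arbitrary):

```python
import numpy as np

def sho_kernel(tau, S0, omega0):
    """SHO covariance for quality factor Q = 1/sqrt(2); tau is the time lag."""
    x = omega0 * np.abs(tau) / np.sqrt(2.0)
    return S0 * omega0 * np.exp(-x) * np.cos(x - np.pi / 4.0)

t = np.linspace(0.0, 10.0, 50)
yerr = 0.01 * np.ones_like(t)                 # per-point uncertainty sigma_y

# n x n covariance matrix with the squared uncertainties added on the diagonal.
K = sho_kernel(t[:, None] - t[None, :], S0=1.0, omega0=2.0) + np.diag(yerr**2)
```

At zero lag the kernel evaluates to $S_0 \omega_0/\sqrt{2}$, and the noise term keeps the matrix strictly positive definite, as required for the GP likelihood.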
The uncertainty $\sigma_y$ that we need to assign to each data point influences how closely the GP fits the data. We aim to model the stellar light curves, but there could be additional planetary transits affecting the measured brightness of their host star. Thus, we set the uncertainty of a flux measurement to the running RMS and add an additional component to it to reflect this source of uncertainty regarding the true brightness of the star alone, as further explained in Section \ref{prepost}. In this analysis, we added 7 ppt in quadrature to the running RMS ($\sigma_y^2 = \mathrm{RMS}_{\text{running}}^2 + 0.007^2$), which works well in the case of TRAPPIST-1, for example, as shown in Fig. \ref{gpfit}.
The running RMS increases at a transit, which additionally lowers the risk of overfitting the light curve and thus hiding transits. To maximise the increase of the uncertainty estimate near a transit, we must make sure that a typical transit fits into the range of data points included in the running RMS. For computational efficiency, we set the running RMS of each data point to the RMS of a fixed number of data points $n_{\text{dp}}$ nearest to the respective data point. Since a transit of a habitable planet orbiting a UCD is expected to take about one hour, we choose $n_{\text{dp}}$ for each star and day such that the median time difference between the last and the first included data point is equal to one hour. Transits are not expected to be ubiquitous in the light curve, and therefore we do not expect them to significantly impact the final GP hyperparameters and consequently the estimate of the typical variation time scale.
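This uncertainty estimate can be sketched as follows, assuming a one-minute cadence and the 7 ppt additive term from the text (the simple centred window stands in for the nearest-$n_{\text{dp}}$ selection):

```python
import numpy as np

def flux_uncertainty(flux, times, span_days=1.0 / 24.0, sigma_add=0.007):
    """Running RMS over the n_dp points spanning roughly one hour,
    with sigma_add added in quadrature."""
    cadence = np.median(np.diff(times))
    n_dp = max(2, int(round(span_days / cadence)))
    half = n_dp // 2
    sigma_y = np.empty(flux.size)
    for i in range(flux.size):
        w = flux[max(0, i - half):min(flux.size, i + half + 1)]
        sigma_y[i] = np.sqrt(w.std()**2 + sigma_add**2)
    return sigma_y

times = np.arange(0.0, 0.2, 1.0 / 1440.0)     # one-minute cadence, in days
flux = np.ones(times.size)                     # perfectly flat toy light curve
sigma_y = flux_uncertainty(flux, times)
```

For a flat light curve the running RMS vanishes and the floor of 7 ppt remains, so the GP is never forced to interpolate the data exactly.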
The FWHM of the kernel function is generally between 1 and 9 hours, while the logarithm of the amplitude hyperparameter varies between $-18$ and $-12$.
The most important products of the different steps described in this section are displayed for two stars in Figs. \ref{star1} and \ref{star2}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{gpvars.pdf}
\caption{Gaussian Process fit of the normalised differential light curve with a transit of TRAPPIST-1b vs. Heliocentric Julian Date (HJD) applying different measurement uncertainty estimates (blue, violet, red, and orange solid lines). The black dots show the light curve values binned in 7-minute bins. Using the running RMS as the measurement uncertainty leads to a GP mean which closely follows the light curve trends. Quadratically adding a constant additional error term to the running RMS leads to a GP fit that still captures the overall variability of the star itself but less of the planetary transits.}
\label{gpfit}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{paplot11.pdf}
\caption{Top to bottom: 1) The light curve of the target star LP 993-98 in relative flux vs. Heliocentric Julian Date. Observation nights are separated by a black vertical line. Noisy Artificial Light Curve (ALC) parts are displayed in red.
2) The ALC is the weighted mean of the light curves of the stars surrounding the target star. Noisy ALC parts are displayed in red.
3) Target light curve as in 1) but on a different scale.
4) Differential light curve decorrelated from pixel position. Noisy ALC data points have been removed. The flares in the light curve (in blue) as well as data points affected by cosmic rays (in violet) will be removed in the following, resulting in the light curve in panel 5 (LC5).
5) Gaussian Process mean prediction for LC5, which is very flat in this case.
6) LC5 divided by the GP mean prediction.}
\label{star1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{paplot23.pdf}
\caption{Top to bottom: 1) The light curve of the target star LP 593-68. Observation nights are separated by a black vertical line. Noisy Artificial Light Curve (ALC) parts are displayed in red.
2) The ALC is the weighted mean of the light curves of the stars surrounding the target star. Noisy ALC parts are displayed in red.
3) Target light curve as in 1) but on a different scale.
4) Differential light curve decorrelated from pixel position. Noisy ALC data points have been removed. The flares in the light curve (in blue) as well as data points affected by cosmic rays (in violet) will be removed in the following, resulting in the light curve in panel 5 (LC5).
5) Gaussian Process mean prediction for LC5.
6) LC5 divided by the GP mean prediction.}
\label{star2}
\end{figure*}
\subsection{Transit search} To find transits in the GP-detrended differential light curves, we use the Box-fitting Least Squares algorithm (BLS) by \citet{BLS2002}.
BLS folds the light curve on a given number of periods, bins the folded light curves, and fits a box-shaped transit to them. It then provides us with the transit parameters that best fit the analysed light curve, namely the period, transit depth, signal residue, and the bin numbers in the folded light curve where the transit starts and ends. From this, we can reconstruct the position of the BLS-transits in the light curves, and visually compare these potential transits to the neighbouring light curve regions and the ALC to assess the BLS output.
We tested 10,000 orbital periods ranging from 0.8 to 10 days, but no additional planets except those of TRAPPIST-1 were found.
\section{Pipeline output for TRAPPIST-1} \label{pipeline}
We assessed the performance of our pipeline using the data collected on TRAPPIST-1 since it is the only star in the data set known to host transiting planets. Fig. \ref{fig:foldedb} shows the TRAPPIST-1 light curve folded with the BLS peak period and the respective box-shaped transit. It shows that we correctly identify all transits of TRAPPIST-1b (in red) using the completely automated pipeline. We then removed the transits of TRAPPIST-1b from the respective light curve and reran the GP-detrending and BLS. This revealed three out of four transits of TRAPPIST-1c present in the light curve since BLS found an alias of the true period to be the most likely orbital period. Fig. \ref{fig:foldedc} displays the TRAPPIST-1 light curve folded with the new BLS peak period. All but one of the TRAPPIST-1c transits (in red) are located within the region where BLS predicts a transit.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{foldedb3.pdf}
\caption{Box Least Squares (BLS) transit for the GP-corrected differential light curve of TRAPPIST-1. BLS correctly identifies all transits of TRAPPIST-1b (in red) and the transit depth. The displayed light curve is folded with the BLS peak period which is equal to the true orbital period of TRAPPIST-1b (1.511 days, \citealt{delrez_trappist_2017}).}
\label{fig:foldedb}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{foldedc3.pdf}
\caption{For this figure, we removed the data points affected by the transits of TRAPPIST-1b, recomputed the Gaussian Process fit, and let the Box Least Squares algorithm (BLS) look for transits in the resulting detrended light curve. The light curve is folded with the BLS peak period (3.391 days), which is equal to $\frac{7}{5}$ times the period of TRAPPIST-1c (2.422 days, \citealt{delrez_trappist_2017}). The transits of TRAPPIST-1c are displayed in red. The pipeline together with BLS thus detects TRAPPIST-1c but misses one transit near phase 0.93.}
\label{fig:foldedc}
\end{figure}
TRAPPIST-1c was transiting on the second observation night while TRAPPIST-1g did so on the third. The pipeline identifies these two transits as originating from the same planet if the raw data of the first three observation nights is fed in. Thus, it would have predicted the planetary system very early on.
To get an idea of the accuracy of the BLS output parameters, we compare our results to the TRAPPIST system parameters as derived from Spitzer data by \cite{gillon_trappist_2017}.
The transit depths for planets b and c (0.576~per~cent and 0.586~per~cent, respectively) are lower than the Spitzer estimates (0.7266$\pm$0.0088~per~cent, 0.687$\pm$0.010~per~cent), while the transit durations for planets b and c (27$\pm$4~min, 44$\pm$12~min) are near the Spitzer values (36.40$\pm$0.17~min, 42.37$\pm$0.22~min).
This is mainly due to the box-shaped transit, which is not an adequate representation of a transit for a transit depth or duration analysis but is sufficient for an efficient transit search. Given this effect, the impact of the GP fitting on the transit depth is expected to be minor for the TRAPPIST-1 light curve. The retrieved periods are both accurate up to less than 2 minutes if we account for the aliasing factor in the period of planet c.
We removed the transits of planets b and c in the TRAPPIST light curve, leaving gaps at the respective positions, and included the resultant light curve in the following analysis.
\section{Flare energies} \label{flareenergies}
The population of early and mid-type M dwarfs hosts a comparatively high fraction of flaring stars. In a study by \cite{guenther_flares_2019} based on two months of TESS mission data \citep{Ricker_2014}, about 10 per cent of early M dwarfs and about 30 per cent of mid M dwarfs showed observable flares. The flare statistics for late M dwarfs and brown dwarfs are less well understood. We detected a considerable number of flares in our UCD data set and estimated the flare energies as part of this global analysis.
For this, we followed the approach outlined in \citet{shibayama_2013}, which is based on the assumption that the spectrum of white-light flares is consistent with blackbody emission $B_{\lambda}$ at a constant effective temperature.
We set the effective temperature of the flare $T_{\text{flare}}$ to 9,000~K \citep[as in][]{kretzschmar_2011,guenther_flares_2019}.
Based on the normalised light curve evolution, we know the excess flux $C^{\prime}$ originating from the flare relative to the stellar flux, i.e. the ratio of the observed luminosity of the flare to the observed luminosity of the star. Assuming the effective temperature $T_{\text{eff}}$ of the UCDs to be 2,700~K and the effective temperature of the flare to be constant, we computed the area $A_{\text{flare}}$ of the flare taking account of the response function of the TRAPPIST instrument $R_{\lambda}$ and using the stellar radii $R_{\ast}$ in Table \ref{startable}:
\begin{equation}
A_{\text{flare}}(t) = C^{\prime}(t) \pi R_{\ast}^2 \frac{\int R_{\lambda} B_{\lambda}(T_{\text{eff}}) d\lambda}{\int R_{\lambda} B_{\lambda}(T_{\text{flare}}) d\lambda}.
\label{eq:flarearea}
\end{equation}
The response function $R_{\lambda}$ is equal to the CCD window transmission multiplied by the quantum efficiency, the reflectance of the mirrors (twice) and the transmission function of the I+z$'$ filter.
The estimate of the flare area then serves to compute the bolometric flare luminosity for each flux measurement by the Stefan-Boltzmann law, whereas the total flare energy is computed by integrating over the duration of the flare:
\begin{equation}
E_{\text{flare}} = \int_{\text{flare}} \sigma_{\text{SB}} T_{\text{flare}}^4 A_{\text{flare}}(t) dt,
\label{eq:flareenergy}
\end{equation}
where $\sigma_{\text{SB}}$ is the Stefan-Boltzmann constant.
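Equations (\ref{eq:flarearea}) and (\ref{eq:flareenergy}) can be evaluated numerically as below; the top-hat response over roughly the I+z$'$ band, the stellar radius, and the toy flare profile are all assumptions of ours (the paper uses the measured instrument response and the observed $C^{\prime}(t)$):

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23       # SI constants
sigma_SB = 5.670e-8
R_sun = 6.957e8

def planck(lam, T):
    """Planck spectral radiance B_lambda in SI units."""
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

def trapezoid(yv, xv):
    """Simple trapezoidal integration."""
    return float(np.sum(0.5 * (yv[1:] + yv[:-1]) * np.diff(xv)))

# Assumed top-hat response over roughly the I+z' band; the paper instead uses
# the measured response (CCD window, QE, mirror reflectance twice, filter).
lam = np.linspace(700e-9, 1100e-9, 400)
resp = np.ones_like(lam)

T_eff, T_flare = 2700.0, 9000.0
R_star = 0.12 * R_sun                          # assumed TRAPPIST-1-like radius

ratio = trapezoid(resp * planck(lam, T_eff), lam) / \
        trapezoid(resp * planck(lam, T_flare), lam)

# Toy flare profile: 1 per cent amplitude, 5-minute exponential decay.
t = np.linspace(0.0, 1800.0, 600)              # seconds
C_prime = 0.01 * np.exp(-t / 300.0)

A_flare = C_prime * np.pi * R_star**2 * ratio          # Eq. (flarearea)
L_bol = sigma_SB * T_flare**4 * A_flare                # bolometric luminosity, W
E_flare_erg = trapezoid(L_bol, t) * 1e7                # J -> erg
```

With these assumed inputs the energy lands in the $10^{30}$--$10^{31}$~erg range, consistent with the order of magnitude of the values in Table \ref{restable}.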
\citet{shibayama_2013} estimated the uncertainty in the flare energy at $\pm$60 per cent. Since we used ground-based measurements while \citet{shibayama_2013} used \textit{Kepler} data, we expect the uncertainty of our estimates to be higher due to the uncertainty in our ALC solution. We neglected the transmission of the atmosphere in Eq. \ref{eq:flarearea} because it affects the result at a level below 0.1 per cent of the flare energy, as computed using ESO's SkyCalc Sky Model Calculator based on \citet{Noll2012} and \citet{Jones2013}.
The results are displayed in Table \ref{restable}. The range of the flare energy estimates is similar to but lower than the values for M dwarfs found by \citet{guenther_flares_2019}. For most targets there is an effective temperature estimate derived by the SPECULOOS team as described in the appendix of \cite{Gillon_2020}. Most of these effective temperatures lie near 2,700~K, the value we used to compute the flare energy estimates. Computing the flare energies with the effective temperature of the stars set to 2,000~K leads to results that are one order of magnitude lower than those in Table \ref{restable}.
\begin{table}
\caption{This table lists the flare energy estimates based on the approach described in \citet{shibayama_2013}. We detected a considerable number of flares but no superflares with energies above $10^{34}$~erg.}
\label{restable}
\begin{tabular}{lll}
\hline
Star id & $\log_{10}$(flare energy [erg]) & $\text{T}_{\text{eff}}$ \\
& & {[}K{]} \\
\hline
2MASS J03111547+0106307 & 31.4 & -- \\
2MASS J03341218-4953322 & & -- \\
2MASS J07235966-8015179 & & 2827 \\
2MASS J08023786-2002254 & & -- \\
2MASS J11592743-5247188 & 30.3, 31.0 & 2355 \\
2MASS J15072779-2000431 & & -- \\
2MASS J15345704-1418486 & & 2502 \\
2MASS J20392378-2926335 & & 2928 \\
2MASS J21342228-4316102 & & -- \\
2MASS J22135048-6342100 & 30.3, 30.7, 30.7, 30.8 & 2889 \\
APMPM J2330-4737 & & 2738 \\
APMPM J2331-2750 & & 2573 \\
DENIS J051737.7-334903 & & 2656 \\
DENIS J1048.0-3956 & & 2360 \\
GJ 283 B & & 2803 \\
GJ 644 C & 30.1, 30.4, 30.9 & 2674 \\
LEHPM 2-783 & 30.9, 31.5, 31.5, 31.5, 31.9, & -- \\
& 32.8 & \\
LHS 1979 & & -- \\
LHS 5303 & 30.4, 30.6, 30.8 & 2903 \\
LP 593-68 & 31.0 & 2819 \\
LP 655-48 & 32.1 & 2714 \\
LP 666-9 & 30.6, 30.9, 31.0, 31.2, 32.7 & 2400 \\
LP 698-2 & & -- \\
LP 760-3 & 30.4 & 2716 \\
LP 775-31 & & 2863 \\
LP 787-32 & & -- \\
LP 789-23 & & 2778 \\
LP 851-346 & 30.1, 30.5 & 2687 \\
LP 911-56 & 30.6 & 2824 \\
LP 914-54 & 31.8 & 2701 \\
LP 938-71 & 30.5 & 2648 \\
LP 944-20 & & 2313 \\
LP 993-98 & 31.1 & -- \\
LP 888-18 & & 2642 \\
SCR J1546-5534 & 30.5, 30.5, 30.8, 31.0 & 2758 \\
SIPS J1309-2330 & 30.9 & 2647 \\
TRAPPIST-1 & 31.0, 31.1 & 2629 \\
UCAC4 379-100760 & 30.4, 30.5 & 2976 \\
V* DY Psc & & -- \\
VB 10 & & 2578\\
\hline
\end{tabular}
\end{table}
\section{Transit injection tests} \label{injectiontests}
To assess the sensitivity of the survey, we calculated the probability of detecting planets orbiting their stars with a given orbital period and radius. This task involves estimating the probability that the planetary orbit allows transits ($\mathbb{P}_{\text{geometry}}$) as well as the probability that we identify a planet in such an orbit ($\mathbb{P}_{\text{identify}}$).
The former can be computed from the stellar and tested planetary parameters alone. The latter is estimated by simulating the effect of such planets on the light curves and testing whether we can recover the transits. The planet occurrence rate for UCDs is not well known and could vary within the group. Given the size of our sample, we treat the UCDs as a homogeneous group in this analysis.
\subsection{Planetary and stellar parameters} \label{radiiandmasses}
We tested the ability of the pipeline to detect planets for different orbital parameters and planetary radii by generating 10,000 sets of orbital period ($P$), phase, inclination ($i$), and planetary radius ($R_p$) for each star. Since there are 40 targets in UCDTS, we evaluated the light curves resulting from 400,000 different configurations in total. In this analysis, we restricted the orbits to circular ones. The period, radius, and phase were randomly drawn from a uniform distribution, whereas for the inclinations $i$ we can exploit the fact that $\cos(i)$ is uniformly distributed for randomly chosen orbits. We did not test orbits with inclinations which lead to impact parameters greater than 1 (i.e. inclination angles smaller than $\frac{\pi}{2} - \arctan(\frac{R_{\ast}}{a})$, with $R_{\ast}$ being the stellar radius and $a$ the semi-major axis) since these transits are unlikely to be recovered by BLS and, for an observer, they are difficult to distinguish from stellar or atmospheric effects. We account for the orbits with impact parameter above 1 in Section \ref{section_detprobs} with $\mathbb{P}_{\text{geometry}}$.
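The parameter draws can be sketched as below; the stellar mass and radius are assumed, roughly UCD-like values, and the planet-radius range is illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
M_star = 0.08                                  # solar masses (assumed)
R_star_au = 0.12 * 0.00465                     # stellar radius in au (assumed)

P = rng.uniform(0.8, 10.0, n)                  # orbital period in days
phase = rng.uniform(0.0, 1.0, n)
R_p = rng.uniform(0.5, 2.0, n)                 # planet radius in Earth radii

# Kepler's third law in solar units: a[au]^3 = M[M_sun] * P[yr]^2.
a = (M_star * (P / 365.25)**2) ** (1.0 / 3.0)

# cos(i) is uniform for random orientations; restrict to inclinations above
# pi/2 - arctan(R_star / a), which keeps the impact parameter b below 1.
i_min = np.pi / 2.0 - np.arctan(R_star_au / a)
cos_i = rng.uniform(0.0, np.cos(i_min))
b = a * cos_i / R_star_au
```

Since $\cos(i_{\min}) = (R_{\ast}/a)/\sqrt{1+(R_{\ast}/a)^2}$, every drawn orbit satisfies $b < 1$ by construction.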
The \citet{mandel_2002} transit model in PyTransit \citep{parviainen_2015} was used to compute model transit light curves.
In addition to the randomly drawn orbital parameters, PyTransit needs the stellar masses, radii, and limb darkening coefficients.
The limb-darkening coefficients were set to [0.65, 0.28], which are those of TRAPPIST-1 for the I+z$'$ filter in \citet{gillon_trappist_2016} inferred from \citet{Claret_2011}. The extent of limb-darkening varies by stellar host, but all stars are similar to TRAPPIST-1 in spectral type. We found the results to depend only marginally on the limb-darkening coefficients.
The stellar mass was derived as in \citet{benedict_2016} using the apparent 2MASS $K_S$-band magnitude and the Gaia DR2 parallax.
The absolute $K_S$ magnitudes of seven targets are above 10, where the stellar mass estimate of \citet{benedict_2016} is not valid. For these stars the mass estimation polynomial leads to a very low mass estimate. To avoid using non-physical stellar masses, we set a minimum of 0.075~$M_{\odot}$ to the stellar mass estimate. This mass was chosen because it is the approximate mass of an M9 star \citep{reid_2005}. Some of the targets could be brown dwarfs for which we overestimate the mass by setting a minimum of 0.075~$M_{\odot}$. Consequently, this leads to an underestimated probability of finding the planet in a transiting orbit. However, setting a lower limit does not change the results significantly and thus, the results overall do not depend critically on the exact value of the lower limit.
We neglect reddening since all targets are within 30~pc and therefore within the Local Bubble where reddening is negligible \citep{holmberg_2007}.
Next, we needed to estimate the stellar radius to infer the transit depth of a planet with a given size. The relationship between the $K_S$-band magnitude and the radius for M dwarfs described in \citet{mann_2015} provides the desired estimate for absolute $K_S$ magnitudes below 9.8. Since the radius--magnitude relationship still provides estimates in the expected radius range even if the $K_S$-band magnitude is slightly above 9.8, we used the Mann radius estimate for all target stars. The uncertainties of the Gaia parallax and the $K_S$ magnitude measurements are low, and \citet{mann_2015} report a fractional residual of less than 5 per cent for their fitting polynomial. However, we applied their polynomial to stars at the upper end of the absolute magnitudes considered, where the fit is constrained by only a few stars, so the estimates suffer from low-number statistics.
As the last input parameter of PyTransit, the semi-major axis scaled by the stellar radius was derived using Kepler's third law.
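This last step is a direct application of Kepler's third law; for example, with approximately TRAPPIST-1b-like input values (illustrative numbers, not the exact catalogue values):

```python
import numpy as np

G = 6.674e-11                                  # SI gravitational constant
M_sun, R_sun = 1.989e30, 6.957e8
day = 86400.0

def scaled_semimajor_axis(P_days, M_star_msun, R_star_rsun):
    """a / R_star from Kepler's third law: a^3 = G * M * P^2 / (4 pi^2)."""
    a = (G * M_star_msun * M_sun * (P_days * day)**2
         / (4.0 * np.pi**2)) ** (1.0 / 3.0)
    return a / (R_star_rsun * R_sun)

# Roughly TRAPPIST-1b-like: P = 1.511 d, M = 0.08 M_sun, R = 0.117 R_sun.
a_over_rstar = scaled_semimajor_axis(1.511, 0.08, 0.117)   # roughly 20
```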
Light curves with simulated planets were then computed by multiplying the real final light curves by the model light curves from PyTransit.
Running the BLS algorithm on the product then provided us with a guess for the location, depth, and duration of the injected transits. Next, we need to define when a BLS result is to be considered a correct recovery.
\subsection{Identification criterion} \label{identificationcriterion}
We know at which time point in the light curve we injected transits and where BLS detected a transit-like signal. Thus, we compared the time of the injected transits in the light curve to the BLS-transit region to assess whether we correctly identify the injected planet. Whenever BLS recovered at least half of the transit duration at half of the transit depth (the area shaded in orange in Fig.~\ref{detcrit}), the respective transit is labelled as detected. We count an injected planet as recovered if at least two injected transits were detected by the mentioned criterion.

Other authors such as \citet{giacobbe_2012} or \citet{petigura_2013} used a criterion based on the recovered period. We found this period-based criterion to work well for calm light curves. In all other cases it is, however, too restrictive since BLS seeks to include any light curve feature resembling a transit. A transit-like signal in the original light curve can therefore cause the BLS period to differ significantly from the injected period and its aliases even if BLS found all or most injected transits. We assume that the BLS algorithm is used as an initial tool to find potential planetary transits. As done in TRAPPIST-UCDTS, the potential transits can subsequently be vetted by analysing, for instance, the stellar FWHM, the ALC, or the light curves of stars of the same or a similar spectral type. In other studies \citep[e.g.][]{Berta_2013, He_2017}, a planet was counted as recovered if a single transit with detection significance above a certain threshold was found. \cite{Berta_2012,Berta_2013} additionally propagated the uncertainties associated with their detrending pipeline into the detection significance, which is favourable for MEarth, a multi-telescope survey targeting a high number of stars. TRAPPIST-UCDTS consists of a significantly lower number of stars but operates at a higher cadence than MEarth, which enables rigorous vetting of the transit candidates.
Furthermore, the low number of stars allows us to keep track of the impact of the detrending procedure. If we can correctly identify two transits originating from the same planet, we can schedule follow-up observations to observe a third transit. For these reasons, the two-transit detection criterion was preferred.
Applying our identification criterion to each BLS result, we counted the number of correctly recovered planets in a given radius and period range to assess the probability ($\mathbb{P}_{\text{identify}}$) to identify such planets. Additionally, we calculated the results for $\mathbb{P}_{\text{geometry}}$ in the same radius and period range and combined the two probability estimates to get the detection probability.
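The identification criterion reduces to a simple interval-overlap test; the sketch below uses our own variable names, with times in arbitrary units:

```python
def transit_detected(inj_start, inj_end, bls_start, bls_end):
    """True if the BLS transit window overlaps at least half of the injected
    transit's half-depth interval [inj_start, inj_end]."""
    overlap = max(0.0, min(inj_end, bls_end) - max(inj_start, bls_start))
    return overlap >= 0.5 * (inj_end - inj_start)

def planet_recovered(detections):
    """A planet counts as recovered if at least two transits were detected."""
    return sum(detections) >= 2
```

A BLS window shifted by noise still counts as long as it covers half of the half-depth interval, which is exactly the tolerance illustrated in Fig.~\ref{detcrit}.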
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{detcrit1b.pdf}
\caption{Illustration of the detection criterion. A synthetic light curve was multiplied by a transit light curve generated with PyTransit to display a potential light curve with an injected transit. The area shaded in orange in the dashed line box indicates where the injected transit dips below half of its maximal transit depth. The transits suggested by the Box Least Squares algorithm can be shifted due to noise or other transits in the light curve. We regard a transit as detected if the BLS transit (shaded in violet in the solid line box) overlaps at least half of the transit at half depth (shaded in orange in the dashed line box).}
\label{detcrit}
\end{figure}
\subsection{Detection probability} \label{section_detprobs}
The probability of detecting a planet in a transit survey depends on the probability of the existence of such a planet, the probability of the planet orbiting its host star such that it periodically transits its star as seen from Earth, and the probability that the observer identifies the signal in the light curve.
The probability of finding a planet orbiting star $j$ is therefore equal to:
\begin{equation}
\mathbb{P}_{\text{detect}_j}=\mathbb{P}_{\text{planet}} \cdot \mathbb{P}_{\text{geometry}_j} \cdot \mathbb{P}_{\text{identify}_j} \:
\label{pdetgen}
\end{equation}
with
\begin{enumerate}[leftmargin=*,labelindent=0pt,label=$\cdot$]
\item $\mathbb{P}_{\text{planet}}$: Probability that the tested planet orbits the respective star. This is set to 1 except where stated otherwise.\\
\item $\mathbb{P}_{\text{geometry}_j}(R_{\ast_j},M_{\ast_j},P)$: Probability that the inclination of the planetary orbit lies within the accepted inclination angle range (impact parameter smaller than 1). Using Kepler's third law, we get the semi-major axis $a$ from the orbital period and the stellar mass. The geometric probability for an orbit with impact parameter below 1 is equal to $\frac{R_{\ast}}{a}$.\\
\item $\mathbb{P}_{\text{identify}_j}(R_{\ast_j},M_{\ast_j},R_p,P)$: Probability of identifying the planetary transits (cf. Section \ref{identificationcriterion}) if the planet orbits within the accepted inclination angle range.
\end{enumerate}
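Putting the three factors of Eq.~(\ref{pdetgen}) together (the stellar radius, semi-major axis, and $\mathbb{P}_{\text{identify}}$ values below are illustrative; in practice $\mathbb{P}_{\text{identify}}$ comes from the injection tests):

```python
import numpy as np

def detection_probability(R_star_au, a_au, p_identify, p_planet=1.0):
    """P_detect = P_planet * P_geometry * P_identify, with the geometric
    transit probability (impact parameter below 1) equal to R_star / a."""
    return p_planet * (R_star_au / a_au) * p_identify

R_star_au = 0.12 * 0.00465                     # assumed UCD radius in au
a_au = 0.011                                   # roughly a 1.5-day orbit
p_detect = detection_probability(R_star_au, a_au, p_identify=0.6)
```

For these values the geometric factor alone is about 5 per cent, which illustrates why even perfectly identified short-period planets are detected around only a small fraction of stars.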
\subsection{Effect of GP-detrending on identification probability} \label{prepost}
The identification probability mainly depends on how we detrend the light curves before or while searching for transits. In the following, we discuss why we detrend the light curves using a GP before injecting transits and looking for them with BLS (detrend-inject-recovery test).
This approach is intended to come as close as possible to the actual planet detection probability of a survey in which light curves can be analysed in detail and more sophisticated techniques can be applied. The approach taken might appear optimistic. However, a more optimistic detrending approach will lead to a more conservative lower limit on the occurrence rate of planets like TRAPPIST-1b later on. Furthermore, we investigated the effect of the detrending on our occurrence rate estimates to assess the impact on our conclusions.
Injecting transits first, GP-detrending after that and subsequently looking for transits (inject-detrend-recovery test) is not optimal since it necessarily decreases the transit depth. Inspecting a light curve by eye as done in TRAPPIST-UCDTS would likely yield a higher detection probability, and hence inject-detrend-recovery testing underestimates the actual probability of spotting transits. Additionally, detrending after every transit injection is computationally expensive. Detrending and searching for transits simultaneously would be optimal but is even more computationally expensive and therefore not suited for our statistical analysis.
One necessary component for computing the GP hyperparameters is the uncertainty estimate of the flux measurements. By setting this to the running RMS of the light curve and adding an additional term $\sigma_{\text{add}}$ to it in quadrature, we can adjust how closely we fit the data using the GP, as visible in Figs. \ref{gpfit} and \ref{fig:aderr}.
The GP mean for flat light curves is also flat. This means that the detrend-inject-recovery tests yield a very similar detection probability as a simultaneous approach irrespective of $\sigma_{\text{add}}$. Therefore, we do not need to optimise $\sigma_{\text{add}}$ for these light curves. Instead, we perform inject-detrend-recovery tests on a set of 16 variable light curves with planets similar to TRAPPIST-1b and determine the $\sigma_{\text{add}}$ that maximises the detection probability. This gives us an optimal $\sigma_{\text{add}}$ for each star. Using a GP with $\sigma_{\text{add}}$ set to the mean of these 16 optimised $\sigma_{\text{add}}$, which is equal to 7 ppt, ensures that we mainly remove the general trend and not small transit-like features. We then apply this GP trained with inject-detrend-recovery tests for our detrend-inject-recovery tests. By this procedure, we seek to emulate a simultaneous search.
The impact of the GP-detrending depends on the depth of the transits, since the identification probability decays very quickly with decreasing transit depth. Therefore, the detection probability of small planets is more prone to being affected by GP-detrending. For a typical variable star, Fig. \ref{fig:aderr} compares the identification probability for the two test cases when we inject planets similar to TRAPPIST-1b. Table \ref{comptab} compares the transit detection efficiency over a larger parameter space for two stars and shows that the detection probabilities of the two cases overall converge as we quadratically add an additional error term to the running RMS.
In Table \ref{comptab}, we can see again that for the detrend-inject-recovery test case, the identification probability decreases with increasing $\sigma_{\text{add}}$ since a lower $\sigma_{\text{add}}$ leads to a flatter light curve. Transits injected into these flatter light curves are easier to detect. In the inject-detrend-recovery test case, the identification probability increases with increasing $\sigma_{\text{add}}$ because we remove less of the transit depth (cf. Fig. \ref{gpfit}).
\begin{figure}
\includegraphics{determine_aderr.pdf}
\caption{Identification probabilities as a function of the quadratically added uncertainty term $\sigma_{\text{add}}$ for a photometrically variable target with an injected planet with period between 1.4 and 1.8 days and radius between 1 and 1.3 $R_{\oplus}$. Detrending before injecting the transits increases the identification probability estimate. The effect is more pronounced as $\sigma_{\text{add}}$ decreases since a GP fit with a lower $\sigma_{\text{add}}$ produces flatter light curves. Detrending after injecting transits (blue dashed line) can remove a part of the transit but quadratically adding an additional error term $\sigma_{\text{add}}$ mitigates this problem. If we add a very high $\sigma_{\text{add}}$, the GP mean prediction is flat, which is equivalent to not detrending. As visible in this figure, the GP-detrending increases the identification probability in both cases.}
\label{fig:aderr}
\end{figure}
In Section \ref{expnr}, we derive a lower limit on the occurrence rate of planets similar to TRAPPIST-1b. As outlined above, this value depends on the detrending. Therefore, we compute it for different detrending scenarios.
\subsection{Expected number of planets in the data set}
\label{expnr1}
The probability of finding $n$ planets in the entire sample is described by a Poisson binomial distribution. The mean of this distribution is equal to the number of planets which we expect to find in the data set, defined as
\begin{equation}
\mathbb{E}_{\text{detected planets}}(R_p,P) = \sum_{j=1}^{\text{nr of target stars}} \mathbb{P}_{\text{detect}_j}(R_p,P),
\label{eq:expectationvalue}
\end{equation}
in which $R_p$ is the radius of the planet, P is its orbital period, and $\mathbb{P}_{\text{detect}_j}$ is computed as described in Eq. \ref{pdetgen}. In Fig. \ref{fig:expmap1} we illustrate $\mathbb{E}_{\text{detected planets}}(R_p,P)$ with the occurrence rate of the tested planets set to 1.
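The moments of the Poisson binomial distribution quoted in Fig. \ref{fig:expmap1} follow directly from the per-star detection probabilities. A minimal sketch (the probabilities passed in the example are hypothetical placeholders, not survey values):

```python
def poisson_binomial_moments(p_detect):
    """Mean, standard deviation, and skewness of the Poisson binomial
    distribution of the number of detections, given independent
    per-star detection probabilities p_j."""
    mean = sum(p_detect)
    var = sum(p * (1.0 - p) for p in p_detect)
    skew = sum(p * (1.0 - p) * (1.0 - 2.0 * p) for p in p_detect) / var**1.5
    return mean, var**0.5, skew


# Hypothetical per-star detection probabilities
mean, std, skew = poisson_binomial_moments([0.3, 0.05, 0.1, 0.02])
```

For small means the distribution becomes strongly right-skewed, which is why the skewness is reported alongside the standard deviation in the figure.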
\begin{figure}
\includegraphics[width=\columnwidth]{fi2c.pdf}
\caption{The mean of the Poisson binomial distribution (bold in the centre of each box) is equal to the number of planets we expect to find in the TRAPPIST-South survey (cf. equation \ref{eq:expectationvalue}) assuming an occurrence rate of 1 in each box. The standard deviation of the distribution is shown in the top right corner of each box, while the bottom right number is equal to the skewness of the distribution.
The mean clearly decreases as a function of period, which is caused by the lower number of transits and the lower geometric probability. The expectation value for the number of planets in the survey saturates above 2 $R_{\oplus}$. Furthermore, we do not expect to detect many planets smaller than Earth. The skewness of the distribution increases as the mean decreases, putting the value for the standard deviation in context.}
\label{fig:expmap1}
\end{figure}
Additionally, we can calculate the probability of not finding any planet for a given star and the probability of not detecting a single planet of a given type in the combined data set (Fig. \ref{fig:expmap2}) using
\begin{equation}
\mathbb{P}_{\text{no-detection}}(R_p,P) = \prod_{j=1}^{\text{nr of target stars}} \left( 1-\mathbb{P}_{\text{detect}_j}(R_p,P) \right).
\label{eq:nodetectionprobability}
\end{equation}
As visible in Fig. \ref{fig:expmap2}, an Earth-sized planet with an orbital period above 3 days is unlikely to be detected even if these planets are assumed to be frequent. Furthermore, we can see that the detection of at least one close-in planet in the data set is likely if there is one around every single star with randomly drawn orbital parameters (such as the inclination).
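Equation \ref{eq:nodetectionprobability} generalises naturally to an assumed occurrence rate below 1 by scaling each per-star probability. A short sketch of this computation (the per-star probabilities fed to it would come from the injection-recovery tests; any values used below are placeholders):

```python
def no_detection_probability(p_detect, occurrence_rate=1.0):
    """Probability of detecting no planet of the tested type in the
    whole survey, if a fraction `occurrence_rate` of the target stars
    hosts one; occurrence_rate = 1 reproduces the equation above."""
    prob = 1.0
    for p in p_detect:
        prob *= 1.0 - occurrence_rate * p
    return prob
```

Evaluating this with `occurrence_rate=0.1` yields the assumed-10-per-cent case discussed in the next section.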
\begin{table}
\caption{Averaged identification probability for two target stars (the same stars as in Figs. \ref{star1} and \ref{star2}) with the period and radius drawn from a uniform distribution between 0.8 and 4 days and between 0.7 and 4 $R_{\oplus}$ respectively. We tested how the quadratically added uncertainty term $\sigma_{\text{add}}$ influences the identification probability as we apply the GP-detrending before we inject transits (detrend-inject-recovery test) or after we have injected transits into the light curve (inject-detrend-recovery test). The results for the two versions differ significantly if we set the uncertainty to the running RMS, but the values converge as we quadratically add a higher additional error term. This indicates that the light curves are not significantly overfitted overall in the detrend-inject-recovery case analysed here.}
\label{comptab}
\begin{tabular}{ |p{3cm}|p{2cm}|p{2cm}| }
\hline
\multicolumn{3}{|c|}{Identification probability for LP 993-98} \\
\hline
$\sigma_{\text{add}}$& Detrend-inject & Inject-detrend \\
\hline
0 ppt & 19.3\% & 14.8\% \\
5 ppt & 17.9\% & 15.8\% \\
7 ppt & 16.7\% & 15.4\% \\
10 ppt & 16.0\% & 16.0\%\\
\hline
\end{tabular}
\begin{tabular}{ |p{3cm}|p{2cm}|p{2cm}| }
\hline
\multicolumn{3}{|c|}{Identification probability for LP 593-68} \\
\hline
$\sigma_{\text{add}}$& Detrend-inject & Inject-detrend \\
\hline
0 ppt & 25.3\% & 21.5\% \\
5 ppt & 24.2\% & 22.1\% \\
7 ppt & 24.0\% & 22.3\% \\
10 ppt & 23.6\% & 22.3\%\\
\hline
\end{tabular}
\end{table}
\section{Injection results}
\label{expnr}
\begin{figure}
\includegraphics[width=\columnwidth]{fi1d.pdf}
\caption{Probability of no single planet detection assuming that all stars host a planet of the tested type in a randomly chosen orbit (cf. equation \ref{eq:nodetectionprobability}). This probability increases sharply for planets with radii below 1 $R_{\oplus}$. Due to the geometric probability decreasing as a function of the period and the lower number of transits in the data, the total no-detection probability also increases for longer orbital periods.}
\label{fig:expmap2}
\end{figure}
In Fig. \ref{fig:expmap3} the probability of not detecting a single planet, assuming that 10 per cent of all stars host a planet of the tested type in a randomly chosen orbit, is displayed. This analysis indicates that the probability of finding at least one planet similar to TRAPPIST-1b (in radius and orbital period) in the entire data set is equal to 5 per cent if the occurrence rate for such planets is equal to 10 per cent. In other words, in this case the probability of not detecting a planet like TRAPPIST-1b is equal to 95 per cent, and the TRAPPIST team would have been very lucky to find a planet as they did. Since TRAPPIST-1b was found in this survey, we can conclude that the occurrence rate is likely to be above 10 per cent.
In Fig. \ref{fig:probatleastone}, we display the probability of finding at least one planet in the data set as a function of the occurrence rate for three different detrending scenarios. The results quoted here refer to the case where we detrend before injecting planets and quadratically add 7~ppt to the initial uncertainty estimate except where stated otherwise.
In addition to the result that the occurrence rate is likely to be above 10 per cent if the likelihood of finding at least one planet is expected to be greater than 5 per cent, we see that an occurrence rate of only 2 per cent would lead to a likelihood of 99 per cent of not detecting a planet like TRAPPIST-1b in such a survey.
\begin{figure}
\includegraphics[width=\columnwidth]{fj6_marchd.pdf}
\caption{Probability of no planet detection assuming that 10 per cent of all stars have a planet of the tested type. The black diamond shows the period and radius of TRAPPIST-1b.}
\label{fig:expmap3}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{occrate.pdf}
\caption{Probability of finding at least one planet similar to TRAPPIST-1b as a function of the occurrence rate resulting from Eq. (\ref{eq:minprob}) for three detrending scenarios. The optimistic scenario (orange) describes the detection probabilities if we detrend the light curves without removing any part of the transits. In the intermediate scenario (violet-red), we still correct for variability without removing transits but we compromise on how well the stellar variability is removed. In the pessimistic scenario (blue), we used the probabilities to identify planets in light curves which have been GP-detrended after the transit injection.
In the intermediate scenario, if the occurrence rate is below 10 per cent (14 per cent in the pessimistic scenario, 7 per cent in the optimistic scenario), the probability of not finding any planet similar to TRAPPIST-1b in period and radius orbiting an ultra-cool dwarf exceeds 95 per cent, which means that the probability of finding at least one planet is below 5 per cent.}
\label{fig:probatleastone}
\end{figure}
\subsection{Dependence on detrending}
As a first estimate of $\mathbb{P}_{\text{planet}_\text{lim}}$, the lower limit on the occurrence rate of planets like TRAPPIST-1b, we derive the occurrence rate for which the probability of no planet detection in the data set is equal to 95 per cent:
\begin{equation} \label{eq:minprob}
0.95 = \prod_{j=1}^{\text{nr of target stars}} \left( 1- \mathbb{P}_{\text{planet}_{\text{lim}}}\cdot \mathbb{P}_{\text{geometry}_j} \cdot \mathbb{P}_{\text{identify}_j} \right).
\end{equation}
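Equation (\ref{eq:minprob}) has no closed-form solution for $\mathbb{P}_{\text{planet}_\text{lim}}$, but since the left-hand side decreases monotonically in the occurrence rate it can be inverted numerically. Below is our own illustrative bisection solver (not necessarily the solver used in the analysis); it assumes a solution exists in $[0, 1]$:

```python
def occurrence_rate_lower_limit(p_detect, target=0.95):
    """Solve prod_j (1 - r * p_j) = target for the occurrence rate r,
    where p_j = P_geometry_j * P_identify_j for target star j."""
    def no_detection(r):
        prob = 1.0
        for p in p_detect:
            prob *= 1.0 - r * p
        return prob

    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if no_detection(mid) > target:
            lo = mid   # no-detection still too likely: raise r
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a single star with unit detection probability the result reduces to $1 - 0.95 = 0.05$, which is a convenient sanity check.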
We estimate that the geometric probability is accurate to within 10 per cent, in which case the identification probability dominates the result.
If we overfit the light curve in the GP-fitting or any of the previous steps, we produce very flat light curves, within which injected transits are easy to identify compared to transits injected in more variable light curves. Thus, overfitting before the planet injection leads to a higher identification probability $\mathbb{P}_{\text{identify}}$.
As visible in equation (\ref{eq:minprob}), the lower limit on the occurrence rate $ \mathbb{P}_{\text{planet}_{\text{lim}}}$ decreases as $\mathbb{P}_{\text{identify}}$ increases.
Consequently, choosing a more optimistic detrending leads to a more conservative estimate of $\mathbb{P}_{\text{planet}_\text{lim}}$.
Ideally, we would search for transits while simultaneously fitting stellar variability.
Finding an efficient algorithm that performs light curve detrending simultaneously with the transit search, and is suited for statistical analyses, is beyond the scope of this paper. To test how different non-simultaneous detrending scenarios affect the lower limit, we computed it for three different cases.
Using a GP to detrend without simultaneously fitting for transits decreases the detection probability since the GP will try to fit the injected transits. Thus, dividing by the GP mean prediction reduces the depth of the injected transits. We might have found transiting planets by visual inspection that are hidden after GP-detrending. Thus, GP-detrending after the planet injection and before the transit search constitutes the pessimistic scenario. To avoid overfitting the light curves and thus improve the identification probability after detrending, we added an additional component to the uncertainty of each data point in the GP computation. More specifically, we set the uncertainty to $\sigma_y= \left(RMS_{\text{running}}^2 + \sigma_{\text{add}}^2 \right)^{\frac{1}{2}}$ and set $\sigma_{\text{add}}$ to 7~ppt as explained in Section \ref{prepost}.
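The inflated per-point uncertainty is a simple quadrature sum. A one-line sketch, with both quantities in the same units (ppt) and the 7~ppt default taken from the text:

```python
def inflated_uncertainty(running_rms, sigma_add=7.0):
    """sigma_y = sqrt(RMS_running^2 + sigma_add^2); with sigma_add set
    to 7 ppt this is the uncertainty fed to the GP fit in the
    pessimistic scenario."""
    return (running_rms**2 + sigma_add**2) ** 0.5
```

A larger `sigma_add` lets the GP mean relax towards a flat curve, which is what controls how aggressively variability (and potentially transit signal) is removed.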
For the intermediate scenario, we injected transits after GP detrending. This is equivalent to assuming that we can detrend the light curves without removing transits. However, we compromise on how closely the GP fits the data and thus how well it removes stellar and instrumental variability by using the same uncertainty estimate as in the pessimistic scenario.
For the optimistic scenario, the light curves were detrended before transit injection as well. The additional error component $\sigma_{\text{add}}$, however, was set to zero. Consequently, we detrended without removing transits or compromising on the GP fit. This case constitutes the optimistic scenario as simultaneous transit and variability fitting using a GP with an SHO kernel is unlikely to yield a higher identification probability. Therefore, this approach leads to the lowest estimate of $\mathbb{P}_{\text{planet}_\text{lim}}$.
As depicted in Fig. \ref{fig:probatleastone}, the lower limit estimate on the occurrence rate of planets similar to TRAPPIST-1b orbiting UCDs does not change drastically but stays within a range of 7 to 14 per cent depending on whether we detrend before or after injecting planets.
\section{Likelihood analysis}
\label{expnr2}
Alternatively, we can calculate the likelihood of the occurrence rate $r$. $\mathbb{P}(D \: | \: r)$ is the probability of finding a data set $D$ with exactly one planet similar to TRAPPIST-1b around TRAPPIST-1 and no planets around the other surveyed stars, given an occurrence rate $r$ within the parameter range 0.9 to 3 days and 1 to 4~$R_{\oplus}$. This complements the probability of finding a planet around at least one of the surveyed stars as described in Section \ref{expnr}. We outline the derivation below, while a detailed explanation can be found in Appendix~\ref{sec:occurrence}.
First, we marginalised the detection probabilities over the analysed radius and period range (0.9 to 3 days, 1 to 4 $R_{\oplus}$) because the planetary and orbital parameters of a potential undiscovered planet are unknown. For this, it is necessary to choose a prior reflecting our knowledge about where in the parameter space planets are likely to be found. Since this is unknown for UCDs, we tested two different priors. As our first prior, we chose a uniform prior. By using this prior for the marginalisation of the detection probabilities, equal weight is given to all potential planets within the tested parameter range. This prior may not be optimal given that for low-mass stars e.g. hot Neptunes are more rare than their warm counterparts \citep[e.g.][]{dressing_2015,Hirano_2018}. As the second prior we chose the normalised occurrence rates of M dwarfs as computed by \cite{dressing_2015} using \textit{Kepler} data. The following steps were carried out separately for the two sets of marginalised detection probabilities resulting from the two priors. The different results are shown in Figs. \ref{fig:pdr} and \ref{fig:pdrkepler}.
We drop the dependence on stellar radii and masses and denote the marginalised detection probability for star $j$ as $F_j$.
The probability of finding a planet around star $j$ is then $r \cdot F_j$, where $r$ is the expected number of planets per star within the mentioned radius and period range.
Summing over all possible combinations of single-planet systems and weighting them with their corresponding probabilities, we finally get:
\begin{equation}
\mathbb{P}(D \: | \: r) =
r f(\Theta_{T1b}) \cdot \prod_{j \: \in \: \text{S}}
\left( 1-r F_j \right),
\end{equation}
where $f(\Theta_{T1b})$ is the probability of finding TRAPPIST-1b around its host star and $S$ is the set of all target stars except TRAPPIST-1.
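This likelihood is straightforward to evaluate on a grid of occurrence rates. A hedged sketch with illustrative inputs (the numbers below are placeholders, not the survey's marginalised probabilities):

```python
def survey_likelihood(r, f_t1b, marginalised_F):
    """P(D | r): exactly one detection (TRAPPIST-1b, with detection
    probability f_t1b around its host star) and no detection around
    any other target star, whose prior-marginalised detection
    probabilities are the F_j."""
    like = r * f_t1b
    for F in marginalised_F:
        like *= 1.0 - r * F
    return like


# Locate the maximum-likelihood occurrence rate on a grid
# (illustrative placeholder probabilities)
grid = [i / 1000.0 for i in range(1001)]
r_best = max(grid, key=lambda r: survey_likelihood(r, 0.3, [0.2] * 10))
```

The peak shifts to lower $r$ as more stars are surveyed without further detections, which is the behaviour shown in Figs. \ref{fig:pdr} and \ref{fig:pdrkepler}.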
In fact, the detection probabilities for the injected planets also depend on the probability that a human confirms a correct BLS planet detection. This varies with the stellar and planetary parameters and the individual observer, but a characterisation of this function is beyond the scope of this paper. As explained in Section \ref{identificationcriterion}, we assume that the flexible design of the survey and the definition of the identification criterion allow us to set the confirmation probability of a correct BLS detection to $1$.
In Figs. \ref{fig:pdr} and \ref{fig:pdrkepler}, we show $\mathbb{P}(D \: | \: r)$ of TRAPPIST-UCDTS (dashed red line). Furthermore, we averaged over the detection probabilities to further estimate the shape of $\mathbb{P}(D \: | \: r)$ for a higher number of surveyed stars. Since the \textit{Kepler}-informed prior puts a low weight on some regions of the parameter space where the detection probability is high, the marginalised detection probabilities are lower compared to the results derived using a uniform prior. Therefore, the peak of $\mathbb{P}(D \: | \: r)$ is at a higher $r$ in Fig. \ref{fig:pdrkepler} than in Fig. \ref{fig:pdr} for the same number of stars.
The low number of target stars restricts us from drawing strong conclusions on the occurrence rate of short-period planets hosted by UCDs. A larger sample would be more indicative, as illustrated in Fig. \ref{fig:pdr}.
From a Bayesian perspective \citep[e.g.][]{Hall_2018}, if one chooses a prior distribution on $r$, then the likelihood $\mathbb{P}(D|r)$ may be inverted to a posterior distribution $\mathbb{P}(r|D)$. If the prior is uniform $\mathbb{P}(r)=1$ then $\mathbb{P}(r|D)\propto \mathbb{P}(D|r)$, and the curves in Fig. \ref{fig:pdr} may be interpreted as Bayesian degrees of belief in the unknown parameter $r$.
\begin{figure}
\includegraphics{pdruniform.pdf}
\caption{Probability of the survey result as a function of the occurrence rate $r$ for different numbers of surveyed stars divided by the maximum of the respective probability curve. A uniform prior was used to marginalise over the unknown planetary parameters. The most likely occurrence rate for TRAPPIST-UCDTS is currently equal to 1 but would shrink to 0.3 if we observed 200 target stars without detecting another planetary system.}
\label{fig:pdr}
\end{figure}
\begin{figure}
\includegraphics{pdrkepler.pdf}
\caption{Probability of the survey result as a function of the occurrence rate $r$ except that the normalised Kepler occurrence rates for M dwarfs were used as the prior. This prior essentially excludes planets with orbits below 2 days and radii above 2 $R_{\oplus}$.}
\label{fig:pdrkepler}
\end{figure}
\section{Conclusions} \label{conclusion}
\subsection{Survey results}
We performed aperture photometry, generated differential light curves and cleaned these from non-planetary signals (cf. Section \ref{Methods}) and also from slow stellar and residual instrumental variability (cf. Sections \ref{gpmodel}, \ref{prepost}). Through injection-recovery tests we could then infer, inter alia, the number of planets we expect to find in such a survey (cf. Section \ref{expnr}). The results rely on our estimates of the stellar parameters in Table \ref{startable} and the applied identification criterion in Section \ref{identificationcriterion}.
We expect to find 0.52 planets similar to TRAPPIST-1b in radius and period and 0.2 planets similar to TRAPPIST-1c if all stars analysed in this study are orbited by a planet of the tested type. Therefore, it is not surprising that only one system was detected.
Our pipeline would have found planets b and c automatically, which validates the approach. An indication of the existence of planet c would have been noticed first because it transits during the second observation night and its transit is clearly visible and easily identified by the pipeline.
In accordance with previous results \citep[e.g.][]{dressing_2015, Demory_2016}, we find that hot mini-Neptunes are likely to be rare around cool stars since the probability of missing these planets is low (cf. Fig. \ref{fig:expmap2}) and we found none in the data set. Figs. \ref{fig:expmap1} and \ref{fig:expmap2} further indicate that we are sensitive to Earth-sized planets with short periods of the order of up to three days only.
\citet{He_2017} performed a planet injection analysis on brown dwarf light curves acquired by the \textit{Spitzer Space Telescope}. Their study led to an expectation value for the number of discovered planets of (0.6, 0.81, 0.83, 0.85, 0.89) within the radius bins (1$\pm$0.25, 1.5$\pm$0.25, 2$\pm$0.25, 2.5$\pm$0.25, 3$\pm$0.25) $R_{\oplus}$ for an orbital period between 1.28 and 3 days. We expect to find (0.24, 0.48, 0.59, 0.62, 0.64) planets in the same parameter range. They observed 44 brown dwarfs with a median observation time of 20.9 hours. Thus, they observed their targets for a shorter amount of time. However, they count one well-observed planetary transit as a detection, while we require the detection of at least two transits. Furthermore, their observations are almost continuous while ours can be quite spread out in time, which can adversely affect the detection probability. The expectation value in \citet{He_2017} flattens out for radii above 1.5~$R_{\oplus}$, while this is the case for radii above 2 $R_{\oplus}$ in our analysis. This is likely because the \textit{Spitzer} light curves do not contain short-term variations due to, for example, precipitable water vapour.
Injection-recovery tests by \citet{Demory_2016} on 189 late M dwarfs observed in \textit{K2}'s Campaigns 1--6 yielded a recovery efficiency of 10 per cent for planets similar to TRAPPIST-1b. \textit{K2} monitored each campaign field quasi-continuously for approximately 80 days, which is much longer than our typical summed exposure time of 50 hours. Nevertheless, we obtained a detection probability of 30 per cent in the period and radius bin of TRAPPIST-1b. Possible reasons for this difference are the fact that late M dwarfs are faint in the \textit{Kepler} passband and that the \textit{Kepler} cadence of 30 minutes can smear out the transits. Furthermore, \citet{Demory_2016} used a criterion based on the BLS period, which is likely to yield a lower detection probability for active stars compared to our detection criterion defined in Section \ref{identificationcriterion}.
\subsection{Multiplanetary systems}
Planetary systems like TRAPPIST-1, consisting of multiple, similarly sized planets in edge-on, coplanar orbits, produce comparably deep transits with the characteristic, sharp transit shape. A human analysing a light curve of such a planetary system would connect the transits produced by different planets and further observe this target star. The same is true for the BLS algorithm: in the case of TRAPPIST-1, it first connects the transits of TRAPPIST-1c and TRAPPIST-1g before finding the transits of TRAPPIST-1b, which are very clear compared to the first two transits. Thus, multiplanetary systems similar to TRAPPIST-1 are ideal targets for surveys such as TRAPPIST-UCDTS. Generally, multiplanetary coplanar systems lead to a higher identification probability.
If all surveyed stars are orbited by two planets with planetary radii between 1 and 1.3 $R_{\oplus}$ at randomly drawn inclinations, one of them with a period between 1 and 1.4 days and the other with a period between 1.8 and 2.3 days, we estimate the probability of detecting at least one of the planets to be 69 per cent (cf. Fig. \ref{fig:expmap2}). If coplanarity is assumed, however, the innermost planet is the dominant contributor to the detection probability, which is thus above 57 per cent for the mentioned case.
We must conclude in both cases that a close-in planetary system, consisting of one or more planets, is relatively likely to be discovered in a survey like TRAPPIST-UCDTS, if these systems are common around UCDs.
Therefore, we expect to find some multi-planet systems in surveys such as SPECULOOS \citep{burdanov_2018,delrez_speculoos_2018, gillon2018, jehin_2018}, consisting of multiple telescopes fully dedicated to the search for exoplanets orbiting UCDs and achieving a higher photometric precision than TRAPPIST-South, if compact coplanar planetary systems are frequent. The latter is supported by evidence that compact multiple systems are relatively common around mid-type M dwarfs. Considering planets with periods of less than 10 days, \citet{Muirhead_2015} found that $21^{+7}_{-5}$ per cent of all mid-type M dwarfs host compact multiple systems, while \citet{Hardegree_2019} estimated an occurrence rate of $44^{+45}_{-33}$ per cent. We accompany these results with an estimate of the lower limit on the occurrence rate, between 7 and 14 per cent, for planets with a radius in [1,1.3] $R_\oplus$ and a period in [1.4,1.8] days hosted by UCDs.
\subsection{Transit detection challenges}
With few exceptions \citep{dalal_2019,huber_2013}, measurements of the obliquities of stars hosting multiple planets \citep[e.g.][]{hirano_2020,sanchis_2012,hirano_2012,albrecht_2013,chaplin_2013} suggest that the axis of rotation of the host star is likely to be close to parallel to the orbital axis of its planets. Stars with close-in transiting planets are therefore unlikely to exhibit a quiet light curve since the starspot and the inhomogeneous cloud coverage expected on the stellar surface of late-type M dwarfs and brown dwarfs \citep{metchev_2015,goldman_2005} will constantly change as seen from Earth due to stellar rotation. This is critical given the fast rotation of many brown dwarfs and very-low-mass stars of the order of one day \citep[e.g.][]{Irwin_2011,scholz_2016}. Additionally, precipitable water vapour can cause variability in the light curves \citep{Bailer-Jones_2003, Murray_2020}, further complicating the confirmation of a transit candidate.
Therefore, it is likely that some transits, especially grazing transits, have not been identified by eye if the time scale and amplitude of the brightness variations during the respective night are comparable to those of planetary transits. This indicates that a rigorous computational search for periodic transit signals with simultaneous variability correction is necessary.
\section*{Acknowledgements}
We thank the anonymous referee for their thorough review which helped elevate the quality of this work. FL gratefully acknowledges a scholarship from the Fondation Zd\u{e}nek et Michaela Bakala. The research leading to these results has received funding from the European Research Council under the FP/2007-2013 ERC Grant Agreement n$^{\circ}$ 336480 and from the ARC grant for Concerted Research Actions, financed by the Wallonia-Brussels Federation. TRAPPIST is funded by the Belgian Fund for Scientific Research (Fonds National de la Recherche Scientifique, FNRS) under the grant FRFC 2.5.594.09.F, with the participation of the Swiss National Science Foundation (SNF). MG and EJ are F.R.S.-FNRS Senior Research Associates. AM acknowledges support from the senior Kavli Institute Fellowships. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
\section*{Data availability}
Data will be available via VizieR at CDS.
\bibliographystyle{mnras}
\section{Introduction}
In recent years, the field of object detection has witnessed dramatic progress in the domain of both images~\cite{girshick2014rcnn,ren2015faster,girshick15fastrcnn,he2017maskrcnn} and videos~\cite{zhu17fgfa,feichtenhofer2017detect,DBLP:conf/eccv/BertasiusTS18,xiao-eccv2018,gberta_2020_CVPR}. To a large extent these advances have been driven by the introduction of increasingly bigger labeled datasets~\cite{502, Kuznetsova_2020,Yang2019vis,ILSVRCarxiv14}, which have enabled the training of progressively more complex and deeper models. However, even the largest datasets in this field~\cite{502, Kuznetsova_2020,Yang2019vis} still define objects at a very coarse level, with label spaces expressed in terms of nouns, e.g., tomato, fish, plant or flower. Such noun-centric ontologies fail to represent the dramatic variations in appearance caused by changes in object ``states'' for many of these classes. For example, a tomato may be fresh or fried, large or small, red or green. It may also frequently appear in conjunction with other objects such as a knife, a cutting board, or a cucumber. Furthermore, it may undergo drastic appearance changes caused by human actions (e.g., slicing, chopping, or squeezing).
The goal of this work is to design a model that can recognize a rich variety of contextual object cues and states in the visual world (see Figure~\ref{main_fig} for a few illustrative examples). A detection system capable of doing so is useful not only for understanding the object functional properties, but also for inferring present or future human-object interactions. Unfortunately, due to the long-tailed, open-ended distribution of the data, implementing a fully-supervised approach to this problem is challenging as it requires collecting large amounts of annotations capturing the visual appearance of objects in all their different forms. Instead of relying on manually-labeled data, we propose a novel paradigm that learns \textbf{C}ontextualized \textbf{OB}ject \textbf{E}mbeddings (COBE) from instructional videos with automatically-transcribed narrations~\cite{miech19howto100m}. The underlying assumption is that, due to the tutorial properties of instructional video, the accompanying narration often provides a rich and detailed description of an object in the scene in terms of not only its name (noun) but also its appearance (adjective or verb), and contextual objects (other nouns) that tend to occur with it. We refer collectively to these ancillary narrated properties as the ``contextualized object embedding'' in reference to the contextualized word representation~\cite{keskarCTRL2019} which we use to automatically annotate objects in a video.
Specifically, we propose to train a visual detector to map each object instance appearing in a video frame into its contextualized word representation obtained from the contextual narration. Because the word representation is contextualized, the word embedding associated with the object will capture not only the category of the object but also the words that were used to describe it, e.g., its size, color, and function, other objects in the vicinity, and/or actions being applied to it. Consider for example the frame illustrated in Figure~\ref{arch_fig}. Its accompanying narration is ``... break an egg into a bowl...'' The contextualized word embedding of the egg represented in the frame will be a vector that encodes primarily the word ``egg'' but also the contextual action of ``breaking'' and the contextual object ``bowl.'' By training our model to predict the contextualized embedding of an object, we force it to recognize fine-grained contextual information beyond the coarse categorical labels. In addition to the ability of representing object concepts at a finer grain, such a representation is also advantageous because it leverages the compositionality and the semantic structure of the language, which enable generalization of states and contexts across categories.
We train COBE on the instructional videos of HowTo100M dataset~\cite{miech19howto100m}, and then test it on the evaluation sets of HowTo100M, EPIC-Kitchens~\cite{Damen2018EPICKITCHENS}, and YouCook2~\cite{ZhLoCoBMVC18} datasets. Our experiments show that on all three of these datasets, COBE outperforms the state-of-the-art Faster R-CNN detector. Furthermore, we demonstrate that despite a substantial semantic and appearance gap between HowTo100M and EPIC-Kitchens datasets, COBE successfully generalizes to this setting where it outperforms several highly competitive baselines trained on large-scale manually-labeled data. Lastly, our additional experiments in the context of zero-shot and few-shot learning indicate that COBE can generalize concepts even to novel classes, not seen during training, or from few examples.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{./paper_figures/speech2object/states2_v5_EK.pdf}
\end{center}
\vspace{-0.3cm}
\caption{Many objects in the real world undergo state changes that dramatically alter their appearance, such as the onions illustrated in this figure. In this work, we aim to train a detector that recognizes the appearances, states, and contexts of objects using narrated instructional video. \vspace{-0.5cm}}
\label{main_fig}
\end{figure}
\section{Related Work}
\textbf{Object Detection in Images.} Modern object detectors~\cite{girshick2014rcnn,ren2015faster,girshick15fastrcnn,he2017maskrcnn,SPP,lin2017focal,guptaECCV14,44872,dai16rfcn,DBLP:journals/corr/RedmonDGF15,DBLP:journals/corr/RedmonF16} are predominantly built on deep CNNs~\cite{NIPS2012_4824,Simonyan14c,He2016DeepRL}. Most of these systems are typically trained on datasets where the label space is discretized, the number of object categories is fixed a priori, and where thousands of bounding box annotations are available for each object class. In contrast, our goal is to design a detection system that works effectively on datasets that exhibit long-tailed and open-ended distribution, and that generalizes well to the settings of few-shot and zero-shot learning. We accomplish this by designing a detection model that learns to map each visual object instance into a language embedding of that object and its contextual narration.
\textbf{Object States and Contexts.} In the past, several methods have tackled the problem of object attribute classification~\cite{krishna-eccv2016,10.1109/CVPR.2014.211,DBLP:conf/cvpr/HuangEEY15,nagarajan2018attrop}, which could be viewed as a form of object state recognition. However, unlike our approach, these methods assume a predefined set of attributes, and the existence of manually labeled datasets to learn to predict these attributes. Other methods~\cite{alayrac16objectstates,10.1109/CVPR.2013.333} aim to model object appearance changes during human-object interactions. However, due to their reliance on manual annotations, they are trained on small-scale datasets that contain few object categories and states.
Finally, we note the approach in~\cite{StatesAndTransformations}, which discovers object states and transformations from manually-annotated Web images. In comparison, our model does not require manual annotations of object states and contexts, but instead relies on automatically-transcribed narrations from instructional videos. Furthermore, compared to the method in~\cite{StatesAndTransformations}, our COBE operates in a continuous label space instead of in a discretized space. This allows COBE to be effective in few-shot and zero-shot learning. Lastly, our model is designed to produce an output for every detected object instance in a video frame, instead of producing a single output for the whole image~\cite{StatesAndTransformations}.
\textbf{Vision, Language, and Speech.} Many prior methods for jointly modeling vision and language learn to project semantically similar visual and language inputs to the same embedding~\cite{miech19howto100m,NIPS2013_5204,10.1145/3240508.3240712, 8953675, 6c5b3a91378f4dbb9e12ab8a41e739d3, conf/cvpr/KleinLSW15, 10.1145/3206025.3206064, journals/corr/PanMYLR15, 10.5555/2886521.2886647, journals/corr/WangLL15}. These methods compute a single feature vector for the whole image or video, whereas our goal is to learn a representation for object instances detected in a given video frame. This allows us to capture fine-grained state, appearance and context cues relating to object instances.
Closer to our goal, the method of Harwath et al.~\cite{conf/eccv/HarwathRSCTG18} leverages spoken audio captions to learn to localize the relevant portions of the image, and the work of Zhou et al.~\cite{ZhLoCoBMVC18} uses textual video descriptions for visual object grounding. In contrast, our system: learns from transcribed speech of real-world Web videos rather than from verbal or textual descriptions provided ad-hoc by annotators; it goes beyond mere localization by predicting the semantics of contextual narration; and, finally, it leverages the semantic structure of a modern contextualized word model.
\section{Technical Approach}
Our goal is to design a system that predicts a contextualized object embedding for every detected object instance in a video frame. To train this model, we need ground-truth bounding boxes for all object instances appearing in frames of HowTo100M, our training dataset. Unfortunately such annotations are not available. Similarly to~\cite{yalniz2019billionscale, journals/corr/abs-1911-04252}, we address this issue by leveraging a pretrained model to transfer annotations from a labeled dataset to our unlabeled dataset. Afterwards, we train COBE using the obtained ``pseudo'' bounding box annotations, and the narrated speech accompanying each HowTo100M video frame. The next subsections provide the details of the approach.
\subsection{Augmenting HowTo100M with Bounding Boxes}
\label{subsec_augm}
Inspired by recent semi-supervised learning methods~\cite{yalniz2019billionscale, journals/corr/abs-1911-04252}, we use a simple automatic framework to augment the original HowTo100M dataset with pseudo ground-truth bounding boxes using a pretrained detector. As our objective is to model object appearances in settings involving human-object interactions, we choose to leverage an object detector pretrained on a set of manually-labeled bounding boxes of EPIC-Kitchens~\cite{Damen2018EPICKITCHENS}, which is one of the most extensive datasets in this area and is also part of the benchmark we chose for our quantitative evaluations. The detector is trained to recognize the $295$ object categories of EPIC-Kitchens. We apply it to frames of HowTo100M sampled at 1 FPS, and then consider all detections exceeding a certain probability threshold as candidates for pseudo ground-truth data. We also check whether the narration associated with the given video frame includes a token corresponding to one of the $295$ coarse object categories. If it does, we accept that particular detection as a pseudo ground-truth annotation; otherwise we discard it.
This procedure allows us to automatically annotate about $559K$ HowTo100M video frames with a total of about $1.1M$ bounding boxes spanning $154$ object categories. The remaining $141$ EPIC-Kitchens classes are discarded due to yielding too few detections in HowTo100M. As in the original HowTo100M dataset, each of these frames has an associated transcribed narration. Throughout the rest of the paper, we refer to this subset of HowTo100M frames as HowTo100M\_BB.
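The pseudo-labeling rule described above (keep a detection only if it is confident and its class name is mentioned in the frame's narration) can be sketched as follows. The function and variable names here are illustrative, not part of our pipeline, and the score threshold is a tunable hyperparameter:

```python
def filter_pseudo_labels(detections, narration_tokens, score_thresh=0.5):
    """Keep a detection as pseudo ground truth only if its confidence
    passes the threshold AND its class name appears in the narration.
    `detections` is a list of (class_name, score, box) tuples."""
    tokens = set(narration_tokens)
    return [(cls, score, box)
            for cls, score, box in detections
            if score >= score_thresh and cls in tokens]

# Toy example: only the confident, narration-matched "pan" survives.
frame_dets = [("pan", 0.92, (10, 20, 200, 180)),
              ("knife", 0.30, (5, 5, 40, 90)),      # score too low
              ("bowl", 0.85, (120, 60, 300, 240))]  # not mentioned
narration = "now put the mushrooms in the pan".split()
print(filter_pseudo_labels(frame_dets, narration))
```

Applying this rule at 1 FPS over HowTo100M yields the HowTo100M\_BB statistics reported above.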
\subsection{Learning Contextualized Object Embeddings}
\label{model_sec}
We now present COBE, our approach for learning contextualized object embeddings from instructional videos of HowTo100M. COBE is comprised of three high-level components: 1) a pretrained contextualized language model which maps the automatically-transcribed narration associated with a given frame to a semantic feature space, 2) a visual detection model (based on a Faster R-CNN architecture) which predicts a contextualized object embedding for every detected object instance, and 3) a contrastive loss function which forces the predicted object embedding to be similar to the contextualized word embedding. Below, we describe each of these components in more detail.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\linewidth]{./paper_figures/arch/arch3.pdf}
\end{center}
\vspace{-0.1cm}
\caption{An illustration of the procedure used to train COBE. Given an instructional video frame with an automatically-transcribed narration, we process the textual narration with a contextualized language model and feed the video frame to a visual detection model corresponding to a modified Faster R-CNN. We use a contrastive loss to guide the object embedding predicted by the visual model to be similar to the contextualized word embedding obtained from text.
\vspace{-0.5cm}}
\label{arch_fig}
\end{figure}
\textbf{Contextual Language Model.} Contextualized word embedding models such as BERT~\cite{keskarCTRL2019,devlin-etal-2019-bert, Lan2020ALBERT, liu2019roberta, NIPS2019_8812, conneau2019unsupervised} have enabled remarkable progress in many language problems. Due to their particular design, such models learn an effective token representation that incorporates context from every other word in the sentence. Conceptually, this is similar to what we are trying to do, except that we want to incorporate relevant visual context for each object instance. Since every video frame in our dataset is accompanied by an automatically-transcribed speech, we want to exploit contextual language models for learning contextualized object embeddings.
Formally, let us consider a video frame $I$ and its accompanying instructional narration in the form of a sequence $x = (x_1, \hdots, x_n)$ where $x_j$ represents a distinct word in the narration and $n$ is the length of the narration segment considered. We feed the textual narration $x$ into a pretrained language model $g$, which outputs a matrix $g(x_1, \hdots, x_n) \in \mathbb{R}^{n \times d}$ where each row corresponds to a contextualized word representation for each word token. For brevity, we denote $g_j$ as the $j$-{th} row of this matrix. Note that due to the use of the self-attention blocks~\cite{NIPS2017_7181} in these language models, each vector $g_j$ provides a representation of word $x_j$ contextualized by the entire sentence $(x_1, \hdots, x_n)$, thus providing a rich semantic representation of each word {\em and} the contextual narration where it appears.
In order to train our visual detection model, we use only narration words $x_j$ that match the categorical label of one of the EPIC-Kitchen object classes in the frame. The contextualized word embeddings $g_j$ corresponding to these words are used to supervise our model.
We note that because of the design of our augmentation procedure (see the steps outlined in Section~\ref{subsec_augm}), every frame in HowTo100M\_BB contains at least one object instance with its category word represented in the associated narration, thus yielding at least one training vector $g_j$.
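This selection of supervising embeddings can be sketched as follows (a simplified illustration with hypothetical names and toy 2-d vectors; in practice the matching operates on tokenized narration and the coarse class vocabulary):

```python
def select_supervision(tokens, token_embeddings, frame_classes):
    """Collect the contextualized embeddings g_j of narration words x_j
    that match the category label of an object present in the frame."""
    targets = {}
    for tok, emb in zip(tokens, token_embeddings):
        if tok in frame_classes:
            targets.setdefault(tok, []).append(emb)
    return targets

# Toy embeddings for the narration "break an egg into a bowl".
tokens = ["break", "an", "egg", "into", "a", "bowl"]
embs = [[0.1, 0.9], [0.0, 0.0], [0.8, 0.3],
        [0.0, 0.1], [0.0, 0.0], [0.7, 0.6]]
print(select_supervision(tokens, embs, {"egg", "bowl"}))
# only the "egg" and "bowl" rows are kept as training targets
```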
\textbf{Visual Detection Model.} Our visual detection model is based on the popular Faster R-CNN detector~\cite{ren2015faster} implemented using a ResNeXt-101~\cite{xie2016groups} backbone with a feature pyramid network (FPN)~\cite{lin2016fpn}. The main difference between our detector and the original Faster R-CNN model is that we replace the classification branch~\cite{ren2015faster} with our proposed contextualized object embedding (COBE) branch (see Figure~\ref{arch_fig}). We note that the original classification branch in Faster R-CNN is used to predict a classification score $s_i \in \mathbb{R}^{C}$ for each Region of Interest (RoI) where $C$ is the number of coarse object categories. In contrast, our proposed COBE branch is trained to predict a contextualized word embedding $f_i \in \mathbb{R}^{d}$ matching the embedding computed from the corresponding narration. Our COBE branch is advantageous because it allows us to capture not only the coarse categorical label of the object but also the contextual language information contained in the narration, e.g., the object size, color, shape as well as co-occurring objects and actions.
\textbf{Loss Function.} We use the noise contrastive estimation criterion~\cite{pmlr-v9-gutmann10a,DBLP:journals/corr/abs-1807-03748} to train our visual detection model. Consider an object embedding $f_i \in \mathbb{R}^{d}$ predicted by our visual model for the $i^{th}$ foreground RoI. Let $g_i^+$ be the {\em matching} contextual word embedding computed from the narration, i.e., such that the word associated to $g_i^+$
matches the class label of the $i$-th RoI. Now let us assume that we also sample a set of $m$ negative word embeddings that are {\em not} associated with the category label of object embedding $f_i$. We pack these negative embeddings into a matrix $H_i\in \mathbb{R}^{m \times d}$. A contrastive loss is then used to measure whether $f_i$ is similar to its positive word embedding $g^+_i$, and dissimilar to the negative word embeddings $H_{ik}$. With similarity measured as a dot product, we can express our NCE loss for the foreground and background RoIs as:
\vspace{-0.5cm}
\begin{align}
\mathcal{L}_{fg} = \sum_{i \sim \mathcal{FG}} - \log \frac{e^{(f_i \cdot g^+_i)}}{e^{(f_i \cdot g^+_i)} +\sum_{k=1}^m e^{(f_i \cdot H_{ik})}} \hspace{0.5cm} & \mathcal{L}_{bg} = \sum_{i \sim \mathcal{BG}} - \log \frac{1}{1+\sum_{k=1}^m e^{(f_i \cdot H_{ik})}}
\end{align}
\vspace{-0.2cm}
where in the left equation the outer summation is over all foreground RoIs (denoted as $\mathcal{FG}$), and in the right equation it is over the entire set of background RoIs (denoted as $\mathcal{BG}$). We note that using different losses for the background and foreground RoIs is necessary because we do not want to associate background RoIs to any of the contextual narration embeddings. The final NCE loss function is expressed as: $\mathcal{L} = (\mathcal{L}_{fg} + \mathcal{L}_{bg})/B$, where $B$ is the total number of RoIs.
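For concreteness, the two loss terms can be written out in plain Python (a toy, unbatched reference sketch with dot-product similarity; this is not our training code, which operates on GPU tensors):

```python
import math

def nce_loss(fg_rois, bg_rois):
    """fg_rois: list of (f_i, g_pos, negatives) for foreground RoIs;
    bg_rois: list of (f_i, negatives) for background RoIs.
    Vectors are plain lists; similarity is a dot product."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    l_fg = 0.0
    for f, g_pos, negs in fg_rois:
        pos = math.exp(dot(f, g_pos))
        neg = sum(math.exp(dot(f, h)) for h in negs)
        l_fg += -math.log(pos / (pos + neg))  # L_fg term
    l_bg = 0.0
    for f, negs in bg_rois:
        neg = sum(math.exp(dot(f, h)) for h in negs)
        l_bg += -math.log(1.0 / (1.0 + neg))  # L_bg term
    return (l_fg + l_bg) / (len(fg_rois) + len(bg_rois))
```

A foreground RoI whose embedding aligns with its positive and is orthogonal to its negatives drives $\mathcal{L}_{fg}$ toward zero, while background RoIs are simply pushed away from all narration embeddings.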
\section{Implementation Details}
We train our model for $10$ epochs with an initial learning rate of $0.001$, a linear warmup of $500$ steps and a momentum of $0.9$. The hyperparameters of RPN and FPN are the same as in~\cite{he2017maskrcnn}. We use a multi-scale training approach implemented by resizing the shorter side of the frame randomly between $400$ and $800$ pixels. Our model is trained in a distributed setting using $64$ GPUs, each GPU holding a single frame. We initialize our model with a Faster R-CNN pretrained on COCO for object detection. During training, we use $m=2048$ randomly-sampled negative contextualized embeddings. As our contextual language model, we use a pretrained Conditional Language Transformer Model (CTRL)~\cite{keskarCTRL2019}, which we found to be slightly more effective than the other state-of-the-art language models~\cite{devlin-etal-2019-bert, Lan2020ALBERT, liu2019roberta, NIPS2019_8812, conneau2019unsupervised}. We freeze the weights of this language model, remove its last layer and add to it a shallow $5$-layer MLP which we train jointly with our visual detection model. The contextualized embedding vectors have dimensionality $1280$. During inference, we run the bounding box prediction branch on $1000$ proposals, apply non-maximum suppression, and use boxes with a score higher than $0.001$ as our final output.
\section{Experimental Results}
\textbf{Evaluation Task.} We want to evaluate the effectiveness of our detection model according to two aspects: 1) its accuracy in localizing objects in video frames, and 2) its ability to represent object contextual information. To do this, we propose a new task, called contextualized object detection. This task requires predicting a $(noun, context)$ tuple, and a bounding box associated with $noun$. Here, $noun$ represents the coarse category of an object, whereas $context$ is any other word providing contextual information about that object. A special instance of this task is the detection of co-occurring objects, which requires predicting a $(noun, noun)$ tuple, and a bounding box associated with the first $noun$. For example, predicting $(pan, mushrooms)$ means that a model needs to spatially localize a pan with mushrooms near it (or possibly in it). Another instance of the contextualized object detection task is the object-action detection task, which requires predicting a $(noun, verb)$ tuple, and a bounding box associated with the $noun$. In this case, the predicted noun indicates a coarse object category, while the verb denotes an action that has been applied on that object. For instance, predicting the tuple $(onion, peel)$ means that a model should spatially localize an onion being peeled. In general, $context$ can take any of the following forms: $noun$, $verb$, $adjective$, or $adverb$.
\textbf{Using COBE to Predict Discrete Labels.} To evaluate our model on the contextualized object detection task, we need to be able to predict probabilities over the discrete space of $(noun, context)$ tuples defined for a given dataset. Given a discrete set of tuples, we first feed all such tuples through the same contextual word embedding model used to train our detector. This allows us to generate a matrix $Z \in \mathbb{R}^{T \times d}$, which contains contextualized word embeddings for all of the $T$ possible tuples. Afterwards, to make a prediction in the discrete tuple space, we feed a given video frame through our visual detection model, and obtain a continuous contextualized object embedding $f_i \in \mathbb{R}^{d}$ for each object instance $i$. To compute probabilities $p_i \in \mathbb{R}^{T}$ over the discrete tuple space, we perform a matrix-vector product and apply a softmax operator: $p_i = softmax(Z f_i)$.
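This scoring step reduces to a matrix-vector product followed by a softmax; a minimal sketch (illustrative names; the rows of $Z$ are tuple embeddings, $f$ the predicted object embedding):

```python
import math

def tuple_probabilities(Z, f):
    """p = softmax(Z f): score a predicted object embedding f against
    the T contextualized tuple embeddings stacked in the rows of Z."""
    logits = [sum(z * x for z, x in zip(row, f)) for row in Z]
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Two toy tuple embeddings; f is closest to the first row.
Z = [[1.0, 0.0], [0.0, 1.0]]
p = tuple_probabilities(Z, [2.0, 0.0])
print(p)  # p[0] > p[1], and the probabilities sum to 1
```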
\textbf{Evaluation Datasets.} Our evaluation dataset needs to satisfy three key requirements: 1) it should involve a variety of human-object interactions, 2) it should be annotated with bounding box labels, and 3) it should contain textual descriptions of the objects beyond mere object categories. The dataset that best fits all of these requirements is the EPIC-Kitchens dataset~\cite{Damen2018EPICKITCHENS} (denoted from now on as EK for brevity). It contains 1) $(noun, verb)$ action tuple labels, 2) bounding box annotations of {\em active} objects, and 3) a list of $nouns$ associated with an action (e.g., ['pan', 'mushrooms'] for ``put mushrooms in the pan''). We construct the ground truth by finding frames where the object category of a bounding box matches the $noun$ of either 1) an action tuple or 2) one of the $nouns$ from a $noun$ list. Furthermore, to expand our evaluation beyond EK, we manually label a collection of frames from HowTo100M and YouCook2\_BB~\cite{ZhLoCoBMVC18} with a set of $(noun, context)$ tuple labels. This leads to the following evaluation sets: 1) $9K$ frames with $171$ unique $(noun, context)$ tuples in HowTo100M\_BB\_test, 2) $25.4K$ frames with $178$ unique $(noun, context)$ tuples in EK, and 3) $2K$ frames with $166$ unique $(noun, context)$ tuples in YouCook2\_BB.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.97\linewidth]{./paper_figures/accuracy_plots/acc_vs_gpu_hours_v4.pdf}
\end{center}
\vspace{-0.1cm}
\caption{Contextualized object detection results on the evaluation sets of HowTo100M\_BB\_test, EPIC-Kitchens, and YouCook2\_BB. We compare COBE with the Tuple Faster R-CNN baseline, which is adapted to produce detections of the form $(noun, context)$ as the task requires. Both methods use the same network architecture (except for the contextualized object embedding branch), and are trained on the same HowTo100M\_BB dataset. Whereas COBE uses contextual word embeddings as its supervisory signal, Tuple Faster R-CNN uses the discrete tuple labels. We study the mAP performance of each method as a function of training iterations. Based on these results, we observe that COBE outperforms Tuple Faster R-CNN on all three datasets.\vspace{-0.4cm}}
\label{acc_plots_fig}
\end{figure}
\textbf{Evaluation Metric.} The performance is evaluated using the widely adopted COCO~\cite{502} metric of mean average precision, which is computed using an Intersection over Union (IoU) threshold of $0.5$. In the subsections below, we present our quantitative results.
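For reference, the IoU underlying this threshold is computed between axis-aligned boxes $(x_1, y_1, x_2, y_2)$ as follows (a standard formulation, independent of our model):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap 1, union 7 -> 1/7
```

A detection is counted as correct only when its IoU with a ground-truth box of the same tuple exceeds $0.5$.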
\subsection{Comparing COBE to a Discrete Detector}
\label{detector_comparison_sec}
To the best of our knowledge, there are no prior published methods that we can directly include in our comparisons for the contextualized object detection task. We note that our task differs from the action classification task in the original EK benchmark because the latter 1) {\em does not} require spatial localization of object instances, and 2) it focuses exclusively on $(noun, verb)$ classification.
Thus, as our primary baseline, we consider a method that we name ``Tuple Faster R-CNN.'' Given a video frame as its input, it produces detections of the form $(noun, context)$ just as our task requires. We train this model on the same HowTo100M\_BB dataset as our method but instead of using contextual word embeddings as its supervisory signal, it uses the discrete tuple labels. We also note that Tuple Faster R-CNN uses the same network architecture as our COBE, except for the contextualized object embedding branch, which is instead replaced by the standard classification branch.
The performance of both methods is evaluated on the evaluation sets of 1) HowTo100M\_BB\_test, 2) EK, and 3) YouCook2\_BB. We present these results in Figure~\ref{acc_plots_fig}, where we plot the detection accuracy as a function of training iterations. Our results suggest that compared to the Tuple Faster R-CNN, COBE is more effective, and it requires fewer training iterations to achieve strong performance. Furthermore, the fact that COBE outperforms Tuple Faster R-CNN on all three datasets highlights the wide applicability of our learned representation.
\subsection{Comparison to Models Trained on Large-Scale Manually-Labeled Datasets}
\label{generalization_sec}
In this section, we compare our strategy of training on narrated frames with traditional training on large-scale manually-labeled datasets. Specifically, we include baselines that {\em combine} a standard Faster R-CNN detector (trained on HowTo100M\_BB) with $noun$ or $verb$ classifiers pretrained on large-scale labeled data. To implement highly competitive baselines, we use several state-of-the-art models trained on large-scale manually-labeled datasets: a 2D ResNet-101 trained on ImageNet~\cite{He2016DeepRL}, a 3D ResNet-101 trained on the Kinetics dataset~\cite{DBLP:conf/cvpr/CarreiraZ17}, and an R(2+1)D ResNet-152 trained on the Instagram-65M (IG65M) dataset~\cite{DBLP:conf/cvpr/GhadiyaramTM19} as our image-level or video-level classifiers. We adapt these baselines to the task of contextualized object detection on the EK dataset by generating all possible tuple combinations between the frame-level detections of Faster R-CNN, and the top-5 predictions of the $noun/verb$ classifier. For example, if Faster R-CNN detects $tomato$ and $bowl$ in a given video frame, while the pretrained classifier predicts $(knife, cucumber, cutting board, spoon, fork)$ as its top-5 predictions, this would result in $10$ distinct tuple predictions: $(tomato, knife)$, $(tomato, cucumber)$, ... , $(bowl, spoon)$, $(bowl, fork)$.
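The tuple-generation step of these baselines is a simple cross product, sketched below with the example from the text (illustrative function name):

```python
from itertools import product

def baseline_tuples(detected_nouns, top5_predictions):
    """Form all (noun, context) tuples by crossing the frame-level
    detections with the classifier's top-5 predictions."""
    return list(product(detected_nouns, top5_predictions))

pairs = baseline_tuples(
    ["tomato", "bowl"],
    ["knife", "cucumber", "cutting board", "spoon", "fork"])
print(len(pairs))  # 2 detections x 5 predictions = 10 tuples
```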
We stress that the Faster R-CNN detector used in all of the baselines is trained on the same HowTo100M\_BB dataset as COBE, and it also uses the same network architecture as COBE (except for the contextualized object embedding branch). It should also be noted that we do not finetune or pretrain on EK any of the evaluated models (including ours). Having said this, we do adopt a detector trained on EK to pseudo-label the frames of HowTo100M (see Section~\ref{subsec_augm}), which might ease the transfer to EK. However, this applies to all of our baselines, not just COBE.
We also note that it may be argued that while COBE has been trained on an open-ended continuous distribution, the baselines (except for Tuple Faster R-CNN) have the disadvantage of being tested on some EK tuples that do not exist in the discrete label space of their training set. For example, the Kinetics classifier was trained on $400$ action categories, many of which are not nouns. Thus, to make the comparison with these baselines more fair, we use the following evaluation protocol. For each of our introduced baselines, we construct a subset of EK, where the ground truth $(noun, noun)$ or $(noun, verb)$ tuples include only the categories that the baseline was trained on. This leads to three different evaluation subsets, which we refer to as EK\_I (for the ImageNet baseline), EK\_K (for the Kinetics and IG65M baselines, which share the same Kinetics label space), and EK\_H (including all tuples that appear in both EK and HowTo100M\_BB).
\begin{table}[t]
\caption{Results on the detection of $(noun, noun)$, and $(noun, verb)$ tuples. The evaluation is performed on different subsets of the EPIC-Kitchens dataset using mAP with an IoU threshold of $0.5$. Our COBE outperforms by large margins several competitive baselines in both settings.}
\label{results1_table}
\setlength{\tabcolsep}{2.4pt}
\scriptsize
\begin{center}
\begin{tabular}{ c c c c c c c}
\cline{1-7}
\multicolumn{1}{ c }{Method} & \multicolumn{1}{ c }{Training Data} & \multicolumn{1}{c }{Eval. Set} & \# of Tuples & \# of Frames & \multicolumn{1}{ c }{Noun-Noun} & \multicolumn{1}{ c }{Noun-Verb}\\
\cline{1-7}
\multicolumn{1}{ c }{Faster R-CNN + S3D~\cite{miech19endtoend}} & HowTo100M\_BB, HowTo100M & EK\_H & 178 & 25.4K & 9.0 & 14.1\\
\multicolumn{1}{ c }{Tuple Faster R-CNN} & HowTo100M\_BB & EK\_H & 178 & 25.4K & 11.5 & 14.0\\
\multicolumn{1}{ c }{\bf COBE} & HowTo100M\_BB & EK\_H & 178 & 25.4K & \bf 16.9 & \bf 24.7 \\ \hline
\multicolumn{1}{ c }{Faster R-CNN + 2D ResNet-101~\cite{He2016DeepRL}} & HowTo100M\_BB, ImageNet & EK\_I & 23 & 3K & 13.2 & -\\
\multicolumn{1}{ c }{\bf COBE} & HowTo100M\_BB & EK\_I & 23 & 3K & \bf 26.0 & -\\ \hline
\multicolumn{1}{ c }{Faster R-CNN + R(2+1)D ResNet-152~\cite{DBLP:conf/cvpr/GhadiyaramTM19}} & HowTo100M\_BB, IG65M & EK\_K & 43 & 8.6K & 8.4 & 15.0 \\
\multicolumn{1}{ c }{Faster R-CNN + 3D ResNet-101~\cite{DBLP:conf/cvpr/CarreiraZ17}} & HowTo100M\_BB, Kinetics & EK\_K & 43 & 8.6K & 7.7 & 19.4\\
\multicolumn{1}{ c }{\bf COBE} & HowTo100M\_BB & EK\_K & 43 & 8.6K & \bf 32.4 & \bf 30.1\\ \hline
\vspace{-0.4cm}
\end{tabular}
\end{center}
\footnotesize
\end{table}
We present our quantitative results in Table~\ref{results1_table}. Based on these results, we observe that COBE outperforms by a large margin all of the baselines. This suggests that our proposed framework of learning from narrated instructional videos is beneficial compared to the popular approach of training on large manually-labeled datasets such as ImageNet, Kinetics or IG65M. We also include in this comparison the recent S3D model~\cite{miech19endtoend} trained via joint video-text embedding on the HowTo100M dataset. This is a relevant baseline since, similarly to our approach, it leverages the ``free'' supervision of narration to train the visual model. However, unlike our approach, which trains the visual model as a contextualized object detector, it learns a holistic video embedding unable to localize objects. As shown in Table~\ref{results1_table}, this S3D baseline~\cite{miech19endtoend} combined with the Faster R-CNN detector achieves much lower accuracy than COBE. This indicates that holistic video models such as~\cite{miech19endtoend} cannot be easily adapted to this task, and that a specialized contextualized object detector such as COBE might be better suited for solving this problem.
\subsection{Zero-Shot and Few-Shot Learning}
\label{zero_shot_sec}
We also consider the previous task in the scenarios of zero-shot and few-shot learning. We use $30$ EK $(noun, noun)$ categories and $26$ EK $(noun, verb)$ categories that have not been seen during training. We refer to this subset of EPIC-Kitchens as EK\_Z to indicate that it is used for {\em zero}-shot learning experiments. Furthermore, in order to investigate the {\em few}-shot learning capabilities of our model, we randomly select $5$ examples for each unseen tuple category (from the data that does not belong to the EK\_Z evaluation set), and then finetune our model on these examples.
Most of our baselines from the previous subsection are unfortunately not applicable in this setting because here we only consider tuple categories that have not been previously seen by any of the classifiers. Thus, in this subsection we can only compare our model in the few-shot setting with a Tuple Faster R-CNN that was trained on HowTo100M\_BB and finetuned on the few examples of EK (this baseline is inapplicable in the zero-shot setting). From Table~\ref{results2_table}, we observe that in the zero-shot setting, COBE produces reasonable results even though it has never been trained on these specific tuple categories. In the few-shot setting, COBE outperforms Tuple Faster R-CNN, demonstrating a superior ability to learn new categories from few examples.
\begin{table}[t]
\caption{We evaluate few-shot and zero-shot detection performance on a subset of EPIC-Kitchens (EK) containing $30$ $(noun, noun)$, and $26$ $(noun, verb)$ tuple-categories that were not included in the training set. For few-shot learning experiments, we finetune both methods on the EK data with $5$ samples per tuple-category. Despite the challenging setting, our method performs well on both tasks.}
\label{results2_table}
\setlength{\tabcolsep}{5pt}
\scriptsize
\begin{center}
\begin{tabular}{c c c c c c c c c}
\cline{6-9}
& & & & & \multicolumn{2}{ c }{Noun-Noun} & \multicolumn{2}{ c }{Noun-Verb} \\
\cline{1-9}
\multicolumn{1}{ c }{Method} & \multicolumn{1}{ c }{Training Data} & Eval. Set & \# of Tuples & \# of Frames & \multicolumn{1}{ c }{Zero-Shot} & \multicolumn{1}{ c }{Few-Shot} & \multicolumn{1}{c }{Zero-Shot} & \multicolumn{1}{c }{Few-Shot}\\
\cline{1-9}
\multicolumn{1}{ c }{Tuple Faster R-CNN} & HowTo100M\_BB & EK\_Z & 56 & 15.6K & - & 0.3 & - & 0.1 \\
\multicolumn{1}{ c }{\bf COBE} & HowTo100M\_BB & EK\_Z & 56 & 15.6K & \bf 10.3 & \bf 25.5 & \bf 8.6 & \bf 29.5\\ \hline
\vspace{-0.7cm}
\end{tabular}
\end{center}
\footnotesize
\end{table}
\begin{figure}[b]
\begin{center}
\includegraphics[width=1\linewidth]{./paper_figures/object2speech/results2_v2.pdf}
\end{center}
\vspace{-0.1cm}
\caption{Examples of object-to-text retrieval on EPIC-Kitchens. Given a visual object query (denoted with a green bounding box), we retrieve the $({\color{blue} object}, {\color{red} context})$ pairs that are most similar to our COBE visual features in the space of the contextualized language model. The retrieved text examples show that our visual representation captures rich and diverse contextual details around the object.}
\label{o2s_fig}
\end{figure}
\subsection{Object Detection Results}
\label{obj_det_sec}
We also conduct object detection experiments on the $124K$ frames of EK ($180$ object categories) by comparing COBE to a Faster R-CNN traditionally trained
for object detection. Both methods share the same architecture (except for the contextualized object branch). Also, note that both methods are trained on HowTo100M\_BB, and not on EK. We evaluate the performance of each method using mAP with an IoU threshold of $0.5$. We report that COBE achieves $\textbf{15.4}$ mAP whereas Faster R-CNN yields $\textbf{14.0}$ mAP in detection accuracy. We also note that pre-training COBE on HowTo100M\_BB and then finetuning it on EK produces a mAP of $\textbf{22.6}$, whereas COBE trained only on EK yields a mAP of $\textbf{12.7}$. This highlights the benefit of pretraining on HowTo100M\_BB.
\subsection{Qualitative Results}
\label{qual_results_sec}
\textbf{Object-To-Text Retrieval.} Since COBE produces outputs in the same space as the contextualized word embedding, we can use it for various object-to-text retrieval applications. For example, given a predicted object embedding, we can compute its similarity to a set of contextualized word embeddings of the form $({\color{blue} object}, {\color{red} context})$ where ${\color{blue} object}$ represents a coarse object-level category from EK and ${\color{red} context}$ is any other word providing contextual details. In Figure~\ref{o2s_fig}, we visualize some of our results, where the green box depicts a detected object, which is used as a query for our text-retrieval task. Our results indicate that the text retrieved by our model effectively captures diverse contextual details for each of the detected objects. Furthermore, we would like to point out that these results are obtained on the EK dataset, without training COBE on it. Despite the semantic and appearance difference between HowTo100M and EK, COBE successfully generalizes to this setting.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{./paper_figures/speech2object/results_EK_v5.pdf}
\end{center}
\vspace{-0.2cm}
\caption{Qualitative illustrations of text-to-object retrieval (cropped for better visibility). Given a textual query of the form $({\color{blue} object}, {\color{red} context})$, the method retrieves the most similar COBE object instances in the space defined by the contextualized language. Note that each column shows different {\color{red} contexts} for a fixed {\color{blue} object}. \vspace{-0.4cm}}
\label{s2o_fig}
\end{figure}
\textbf{Text-To-Object Retrieval.} We can also reverse the previous object-to-text retrieval task, and instead use text queries to retrieve object instances. In Figure~\ref{s2o_fig}, we visualize the top retrievals for several $({\color{blue} object}, {\color{red} context})$ queries. Each column in the figure depicts the ${\color{blue} object}$ part of the tuple, while different rows illustrate different ${\color{red} context}$ words in the tuple. Our results suggest that COBE captures a rich variety of fine-grained contextual cues, including the states of the objects, their functional properties, color, and shape, as well as the actions applied to them.
\begin{figure}[b]
\vspace{-0.2cm}
\begin{center}
\includegraphics[width=1\linewidth]{./paper_figures/object_analogies/results_EK_v5.pdf}
\end{center}
\caption{Visual object analogies: just like prior work on language representation learning~\cite{NIPS2013_5021}, we show that we can leverage our learned contextualized object embeddings to combine different visual concepts via simple vector arithmetic. The last column shows object instances retrieved based on visual queries defined by arithmetic in the COBE space.}
\label{analogies_fig}
\end{figure}
\textbf{Visual Object Analogies.} Prior work on language representation learning~\cite{NIPS2013_5021} has shown that it is possible to perform semantic analogies among different words by adding/subtracting their vector embeddings. Since our model is trained to predict a contextualized word embedding for each object instance, we consider the same arithmetic operations with COBE in order to leverage the compositional properties of language models but in the visual domain. In Figure~\ref{analogies_fig}, we visualize a few examples of visual object analogies. We build COBE queries by adding the difference between two COBE vectors to a third one, and then perform retrieval of objects in EK. Our examples demonstrate that we can meaningfully combine visual concepts via subtraction and addition in the learned contextual object embedding space. For instance, the example in the first row of Figure~\ref{analogies_fig} shows that adding the difference between ``cutting dough'' and ``dough'' to a ``fish'' image in COBE space yields the retrieval of ``cutting fish.'' Similarly, the example in the second row shows that adding the difference between ``pouring milk into a glass'' and ``pouring water into a glass'' to ``a coffee cup'' in COBE space results in a retrieved ``pouring milk into a coffee cup'' object. These examples suggest that our system retains some compositional properties of the language model that was used to train it.
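To make the analogy arithmetic concrete, the following is a minimal pure-Python sketch of query construction and cosine-similarity retrieval. The hand-built embeddings are hypothetical stand-ins for COBE outputs (constructed to be exactly compositional; real learned embeddings are only approximately so), and all names are ours:

```python
import math

# Hypothetical stand-ins for COBE embeddings: each vector is the sum of an
# "object" direction and a "context" direction, so analogies compose exactly.
DIRS = {"dough": [1, 0, 0, 0], "fish": [0, 1, 0, 0],
        "cutting": [0, 0, 1, 0], "pouring": [0, 0, 0, 1]}

def embed(*concepts):
    return [sum(DIRS[c][k] for c in concepts) for k in range(4)]

# A small retrieval bank of labeled instances.
BANK = {name: embed(*name.split()) for name in
        ["dough", "fish", "cutting dough", "cutting fish", "pouring fish"]}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def retrieve(query):
    # Nearest neighbour in the bank under cosine similarity.
    return max(BANK, key=lambda name: cosine(BANK[name], query))

# "cutting dough" - "dough" + "fish"  ->  retrieves "cutting fish".
query = [cd - d + f for cd, d, f in
         zip(BANK["cutting dough"], BANK["dough"], BANK["fish"])]
print(retrieve(query))  # -> cutting fish
```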
\section{Ablation Studies}
\textbf{Contextual Language Model.} To investigate the effectiveness of different language models, we evaluate COBE on the manually annotated test split of HowTo100M\_BB using the following state-of-the-art language models: RoBERTa~\cite{liu2019roberta}, T5~\cite{2019t5}, ELECTRA~\cite{DBLP:conf/iclr/ClarkLLM20}, Transformer XL~\cite{dai-etal-2019-transformer}, BERT~\cite{devlin-etal-2019-bert}, XLNet~\cite{NIPS2019_8812}, ALBERT~\cite{Lan2020ALBERT}, Word2Vec~\cite{NIPS2013_5021}, XLM~\cite{conneau2019unsupervised}, and CTRL~\cite{keskarCTRL2019}. As before, the performance of each model variant is evaluated according to the standard mAP detection metric. We present these results in Table~\ref{results4_table}. These results indicate that the choice of contextual language model in our system can be quite important, as the results in Table~\ref{results4_table} range from $9.4$ mAP to $18.4$ mAP, which is a substantial gap. We select the Conditional Transformer Language Model (CTRL)~\cite{keskarCTRL2019} as it exhibits the best performance on our validation set.
\textbf{Negatives per Positive.} Prior work~\cite{miech19howto100m, DBLP:journals/corr/abs-1807-03748} has demonstrated that using a large number of negative samples per positive sample is important for approaches that leverage the NCE loss~\cite{pmlr-v9-gutmann10a}. Here, we validate this finding in our setting and present these results in the last three rows of Table~\ref{results4_table}. Specifically, we experiment with negative-to-positive sample ratios of $128$, $512$, and $2048$. We observe that using a large number of negative samples per positive sample yields considerably better performance. We have not experimented with even larger numbers of negatives as this slows down training, but we will attempt to do so in future work.
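As a minimal sketch of the objective behind this ablation (the `nce_loss` helper and the scores below are illustrative, not our training code), the NCE/InfoNCE-style loss contrasts one positive similarity score against $k$ negatives in a softmax:

```python
import math

def nce_loss(pos_score, neg_scores):
    # InfoNCE-style contrastive loss: the positive score competes against
    # len(neg_scores) negatives inside a softmax normalizer.
    denom = math.exp(pos_score) + sum(math.exp(s) for s in neg_scores)
    return -math.log(math.exp(pos_score) / denom)

# With the positive score held fixed, enlarging the negative pool makes the
# normalizer harder to dominate, i.e. yields a larger loss (and gradient)
# per positive sample.
few  = nce_loss(2.0, [0.0] * 128)
many = nce_loss(2.0, [0.0] * 2048)
assert many > few
```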
\begin{table}[t]
\caption{Here, we study the effectiveness of different language models used to train our visual detection model. The ablation studies are conducted on the test set of the HowTo100M\_BB dataset. We evaluate the performance of each baseline using the same mean average precision metric as before. Based on these results, we note that the Conditional Transformer Language Model (CTRL)~\cite{keskarCTRL2019} achieves the best accuracy, so we adopt it as our contextual language model. Furthermore, we also investigate how the negative-to-positive sample ratio during training affects our model's performance. As expected, a larger number of negatives per positive sample leads to better results.}
\label{results4_table}
\scriptsize
\begin{center}
\begin{tabular}{c c c c c}
\hline
Language Model & Training Data & Negatives per Positive & Eval. Set & mAP\\
\hline
RoBERTa~\cite{liu2019roberta} & HowTo100M\_BB & 2048 & HowTo100M\_BB\_test & 9.4\\
T5~\cite{2019t5} & HowTo100M\_BB & 2048 & HowTo100M\_BB\_test & 14.1\\
ELECTRA~\cite{DBLP:conf/iclr/ClarkLLM20} & HowTo100M\_BB & 2048 & HowTo100M\_BB\_test & 14.5\\
Transformer XL~\cite{dai-etal-2019-transformer} & HowTo100M\_BB & 2048 & HowTo100M\_BB\_test & 15.3\\
BERT~\cite{devlin-etal-2019-bert} & HowTo100M\_BB & 2048 & HowTo100M\_BB\_test & 16.2\\
XLNet~\cite{NIPS2019_8812} & HowTo100M\_BB & 2048 & HowTo100M\_BB\_test & 16.3\\
Word2Vec~\cite{NIPS2013_5021} & HowTo100M\_BB & 2048 & HowTo100M\_BB\_test & 16.7\\
ALBERT~\cite{Lan2020ALBERT} & HowTo100M\_BB & 2048 & HowTo100M\_BB\_test & 16.9\\
XLM~\cite{conneau2019unsupervised} & HowTo100M\_BB & 2048 & HowTo100M\_BB\_test & 18.0\\ \hline
CTRL~\cite{keskarCTRL2019} & HowTo100M\_BB & 128 & HowTo100M\_BB\_test & 13.6\\
CTRL~\cite{keskarCTRL2019} & HowTo100M\_BB & 512 & HowTo100M\_BB\_test & 15.5\\
CTRL~\cite{keskarCTRL2019} & HowTo100M\_BB & 2048 & HowTo100M\_BB\_test & \bf 18.4\\
\hline
\vspace{-0.7cm}
\end{tabular}
\end{center}
\end{table}
\vspace{-0.2cm}
\section{Discussion}
\vspace{-0.1cm}
In this work, we introduced COBE, a new framework for learning contextualized object embeddings. Unlike prior work in this area, our approach does not rely on manual labeling but instead leverages automatically-transcribed narrations from instructional videos. Our experiments demonstrate that COBE learns to capture a rich variety of contextual object cues, and that it is highly effective in zero-shot and few-shot learning scenarios.
COBE leverages a pretrained object detector to generate pseudo annotations on videos. While this removes the need for manually labeling frames, it limits the approach to predict contexts for only the predefined set of object classes recognized by the detector. Extending the method to learn contextualized object detection from instructional videos without the use of pretrained models is challenging, particularly because captions are noisy. However, solving this problem would allow us to take a big step towards a self-supervised learnable detection framework. We intend to tackle this problem in our future work.
\section*{Broader Impact}
This work considers the learning of object embeddings from narrated video. In terms of positive impact, our system is particularly relevant for language-based applications on unlabeled video (e.g., text-based retrieval) and may facilitate a tighter integration of vision and NLP methods in the future.
As for the potential negative effects, we note that COBE learns to capture various aspects of human actions. Thus, because we are using videos from an uncurated Web dataset, it is possible that COBE might learn contextualized representations that are biased towards particular characteristics. Furthermore, we note that any algorithm that is relevant to action recognition--and to retrieving videos based on arbitrary language queries--may potentially be used for surveillance purposes.
\section*{Funding Transparency Statement}
We did not receive any third-party funding in direct support of this work.
\section*{Acknowledgements}
We would like to thank Fabio Petroni, and Rohit Girdhar for helpful discussions. Additionally, we thank Bruno Korbar for help with the experiments.
\small
\bibliographystyle{unsrt}
\section{Introduction}
Whether building attribute predictors or taking measurements over a fixed space, combining multiple lists of predictions or measurements is a common final step. When these predictions or measurements are generated from identical processes, differing from each other only as a consequence of intrinsic noise, there will likely be principled and case-specific ways to combine these values across lists. This familiar scenario corresponds to independent, identically distributed measurements of a quantity of interest.
However, it is not uncommon to handle lists that are fundamentally different from one another. In such a scenario, each list corresponds to a different proxy, i.e. a value that is monotonically increasing with a common target quantity but otherwise following a functional relationship that we do not understand. When the problem of combining such lists is supervised, this corresponds to any situation in which we may employ boosting. When the problem is unsupervised, however, we can no longer extract information from the spacing between different scores within a list, and so the only meaningful information provided by each list is a ranking of elements. Given this, order statistics would seem to play a natural role.
\section{Definition}
We begin by defining the joint CDF value for an $n$-dimensional order statistic. Given a sorted (ascending) list $R = [r_1, r_2, \cdots, r_n]$, and fixing $s_0$ to be $0$, the joint CDF value of $R$ is defined to be $V(R) = V_1(R)$, where
\begin{equation}
\label{eq:1}
V_i(R) = \begin{cases}
\bigints_{s_{i-1}}^{r_i}{ds_i} & \text{if $i=n$} \\
\bigints_{s_{i-1}}^{r_i}{V_{i+1}(R) ds_i} & \text{if $0 < i < n$}
\end{cases}
\end{equation}
or, more intuitively,
\begin{equation}
V(R) = \int_{0}^{r_1}\int_{s_1}^{r_2}\cdots \int_{s_{n-1}}^{r_n}{ds_n ds_{n-1}\cdots ds_1} \label{eq:2}
\end{equation}
In addition, we would like to clarify terminology: throughout this paper, ranked lists of elements should be assumed sorted in descending order of score, with ``larger rank'' referring to smaller-scoring elements appearing later in the list.
\section{Previous Usage}
We now describe how the joint CDF value was used in prior work \cite{aerts,stuart} to combine multiple lists of scores. Let us assume we begin with several ranked lists of elements. We choose as our null hypothesis that each ranked list is generated by randomly permuting a set of elements.
Define the \textbf{rank ratio} of an element in a list to be the rank of the element divided by the length of the list. For a particular element $e$, then, let $R_e$ be the sorted (ascending) list of rank ratios, computed over all the lists in which $e$ is present. \href{https://en.wikipedia.org/wiki/Majorization}{Majorization} induces a partial ordering over the space of such sorted lists as follows: for two such sorted lists $S = [s_i]$ and $R = [r_i]$ of rank ratios, we define $S \leq R$ to mean that $S$ is majorized by $R$, i.e.\ that $s_i \leq r_i\text{ }\forall\text{ }i$. This aligns well with our intuition: if all the rank ratios for an element $e$ are smaller than the corresponding rank ratios for an element $f$, then we can unambiguously state that $e$ should come before $f$ in any merged ranking.
Using this partial ordering, we have that the p-value for a list $R$ under the chosen null hypothesis would be
\begin{equation}
n! \int_{S \leq R}{p(S) dS} = n! \int_{0}^{r_1}p(s_1)\int_{s_1}^{r_2}p(s_2 | s_1)\cdots \int_{s_{n-1}}^{r_n}{p(s_n | s_{n-1}) ds_n ds_{n-1}\cdots ds_1} \label{eq:3}
\end{equation}
where these probabilities are taken under the null hypothesis. It can be shown inductively that the expression in \eqref{eq:3} is equal to $Q(R) = n! V(R)$ if and only if $s_i \sim U(s_{i-1}, 1)\text{ }\forall \text{ }i$.
\begin{theorem}
Under the null hypothesis, there exists an $i$ such that $s_i \not\sim U(s_{i-1}, 1)$.
\end{theorem}
\begin{proof}
\begin{equation}
P(s_1 \leq x) = 1 - P(s_1 > x) = 1 - \prod_{i}{P(s_i > x)} = 1 - (1-x)^n \label{eq:4}
\end{equation}
It follows that
\begin{equation}
p(s_1 = x) = \frac{d}{dx}P(s_1 \leq x) = n (1-x)^{n-1} \label{eq:5}
\end{equation}
which is not a uniform distribution, providing the desired counterexample.
\end{proof}
It follows that $Q(R)$ cannot be used directly as a p-value against the stated null hypothesis. Presumably unaware of this result, one previous approach \cite{stuart} used $Q(R_e)$ as a p-value for the corresponding element $e$, and produced a combined list by sorting (in ascending order) all elements $e$ according to these p-values. A second group \cite{aerts} demonstrated through numerical experiments that $Q(R)$, unlike a valid p-value, did not follow a uniform distribution under the null hypothesis. This group \cite{aerts} went on to measure the empirical distribution of $Q(R)$ under the null hypothesis and fitted it approximately to a $\beta$ distribution for $n \leq 5$ and a $\gamma$ distribution for larger $n$, ultimately using these fitted distributions to convert the joint CDF values to p-values.
\section{Proposed Usage}
While using the same input (several ranked lists of elements) and assuming the same null hypothesis, we impose one additional constraint: that every element present in at least one list is present in all lists. This constraint can be guaranteed as follows: for each list, add all missing elements to the end of the ranking. If $k$ elements are added in this manner to a list of original size $n$, then each of the added elements is assigned a rank of $n + \frac{k+1}{2}$, i.e. the average rank of all such elements had they been added in an arbitrary order.
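This padding step can be sketched as follows (a minimal sketch assuming each input list is a ranking of element identifiers, with rank equal to position plus one; the helper name is ours):

```python
def pad_rankings(lists):
    """Extend each ranked list with its missing elements, assigning every
    added element the average of the ranks it could have occupied:
    n + (k + 1) / 2 for a list of original length n gaining k elements."""
    universe = set().union(*[set(lst) for lst in lists])
    padded = []
    for lst in lists:
        n = len(lst)
        missing = universe - set(lst)
        k = len(missing)
        ranks = {e: i + 1 for i, e in enumerate(lst)}  # rank = position + 1
        for e in missing:
            ranks[e] = n + (k + 1) / 2
        padded.append(ranks)
    return padded

ranks_a, ranks_b = pad_rankings([["a", "b"], ["b", "c", "a"]])
# "c" is missing from the first list (n = 2, k = 1), so it receives
# rank 2 + (1 + 1) / 2 = 3.
```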
After this is done, all ranked lists will be identical in size and in the set of elements they contain. Now, for a given element, let $f_i$ be the fraction of lists in which the element appears at rank $i$. It follows that $\sum_{i}{f_i} = 1$. Furthermore, let $g_i = g_{i-1} + f_i$ be the fraction of lists in which the element appears at rank no greater than $i$, with $g_1 = f_1$, and let $r_i = 1 - g_{n-i}$. In line with the intuition underlying the partial ordering we defined in Section 3, we define here a new partial ordering $e \leq h$ if and only if $g_{e,i} \geq g_{h,i}$ $\forall$ $i \iff r_{e, i} = g_{e, n-i} \leq g_{h, n-i} = r_{h, i}$ $\forall$ $i$, where $g_{e, i}$ and $r_{e,i}$ are the $g_i$ and $r_i$, respectively, for element $e$.
Given this, let us redefine $R_e$ to be the list $[r_{e,i}] = [r_i]$ for $1 \leq i \leq n-1$. Using this definition of $r_i$, we have
\begin{equation}
r_i = 1 - g_{n-i} = 1 - (g_{n-i+1} - f_{n-i+1}) = r_{i-1} + f_{n-i+1} \text{ }\forall \text{ }i > 0 \label{eq:6}
\end{equation}
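The quantities $f_i$, $g_i$, and $r_i$ above are straightforward to compute from an element's per-list ranks once all lists have been padded to a common length $n$. The sketch below (integer ranks assumed for simplicity; the helper name is ours) also verifies numerically that $r_i$ equals the sum of the top $i$ rank fractions $f_{n-i+1}, \ldots, f_n$:

```python
def rank_profile(ranks, n):
    """Given an element's (integer) rank in each of several lists of common
    length n, return (f, g, r): f[i] is the fraction of lists in which the
    element appears at rank i, g[i] the fraction at rank <= i, and
    r[i] = 1 - g[n - i] for i = 1..n-1 (index 0 is unused)."""
    m = len(ranks)
    f = [0.0] * (n + 1)
    for rk in ranks:
        f[rk] += 1.0 / m
    g = [0.0] * (n + 1)
    for i in range(1, n + 1):
        g[i] = g[i - 1] + f[i]
    r = [0.0] * n
    for i in range(1, n):
        r[i] = 1.0 - g[n - i]
    return f, g, r

# An element appearing at ranks 1, 2, 2, 3 across four lists of length 3:
f, g, r = rank_profile([1, 2, 2, 3], n=3)
assert g[3] == 1.0  # the element appears in every list
# r_i equals the summed fractions of the i largest ranks.
for i in range(1, 3):
    assert abs(r[i] - sum(f[k] for k in range(3 - i + 1, 4))) < 1e-12
```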
\begin{theorem}
$r_{i+1} \sim U(r_i, 1)$ $\forall$ $i \geq 0$ under the null hypothesis.
\end{theorem}
\begin{proof}
The recursion in \eqref{eq:6} can be used to show inductively that
\begin{equation}
r_i = \sum_{k=n-i+1}^{n}{f_k} \label{eq:7}
\end{equation}
Given that, let us consider the distribution of $f_{n-i}$ conditioned on $\{f_k, k > n-i\}$ under the null hypothesis. Since each list is assumed to be randomly permuted, each element $e$ is equally likely to occupy any position in a particular list. Thus, conditioned only on $\{f_k, k > n-i\}$, the probability $p_{n-i}$ of $e$ appearing in position $n-i$ in a particular list is given by the probability that $e$ has not already appeared further down in that list, i.e.\ $1 - r_i$, multiplied by a uniform probability density over the remaining positions. Since $f_{n-i}$ is, by definition, the expectation of this probability $p_{n-i}$ over many lists, it follows that $f_{n-i} \sim U(0, 1-r_i) \iff r_{i+1} = r_i + f_{n-i} \sim U(r_i, 1)$, as desired.
\end{proof}
Thanks to this result, it follows that for $R_e$ \textit{as we have defined it in this section}, $V(R_e)$ can be used directly as a p-value.
\section{Previous Methods of Computation}
The first approach \cite{stuart} discussed earlier attempted to compute $V(R)$ using
\begin{equation}
V(R) = \frac{1}{n} \sum_{i = 1}^{n}{(r_i - r_{i-1})V(R_{-i})} \label{eq:8}
\end{equation}
where $R_{-i}$ is defined as $R$ with $r_i$ removed. A straightforward application of dynamic programming to the recursion in \eqref{eq:8} gives a runtime of $O(n!)$ for computing $V(R)$ using this method.
The second approach \cite{aerts} we referenced was able to derive another recursion over an intermediate function $T_k$:
\begin{equation}
T_k(R) = \sum_{i=1}^{k}{(-1)^{i-1} \frac{T_{k-i}(R)}{i!} (r_{n-k+1})^i} \label{eq:9}
\end{equation}
where $V(R) = T_n(R)$. A straightforward application of dynamic programming to the recursion in \eqref{eq:9} gives a runtime of $O(n^2)$ for computing $V(R)$ using this method.
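The recursion \eqref{eq:9} is straightforward to implement. The sketch below (with $T_0 = 1$; the function name is ours) checks the result against the closed form of the iterated integral \eqref{eq:2} for $n = 3$, namely $V = r_1 r_2 r_3 - r_3 r_1^2/2 - r_2^2 r_1/2 + r_1^3/6$:

```python
from math import factorial

def joint_cdf_value(r):
    """Compute V(R) for a sorted (ascending) list of rank ratios r using the
    O(n^2) recursion of eq. (9), with T_0 = 1 and V(R) = T_n(R)."""
    n = len(r)
    T = [1.0] + [0.0] * n
    for k in range(1, n + 1):
        x = r[n - k]  # r_{n-k+1} in the paper's 1-indexed notation
        T[k] = sum((-1) ** (i - 1) * T[k - i] / factorial(i) * x ** i
                   for i in range(1, k + 1))
    return T[n]

# Check against the closed form of the iterated integral (2) for n = 3.
r1, r2, r3 = 0.2, 0.5, 0.9
exact = r1 * r2 * r3 - r3 * r1 ** 2 / 2 - r2 ** 2 * r1 / 2 + r1 ** 3 / 6
assert abs(joint_cdf_value([r1, r2, r3]) - exact) < 1e-12
```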
Both \eqref{eq:8} and \eqref{eq:9} can be derived via manipulations of \eqref{eq:2}, but we have not included these derivations here.
\section{Improved Method of Computation}
For the purposes of this section, let $R = [r_i]$ have length $n$ and let $R_k = [r_i \text{ }\forall\text{ }i \leq k]$ be the subsequence consisting of the first $k$ elements of $R$. We seek to compute $V(R)$ as given by the expression in \eqref{eq:2}. We note that this is simply the $n$-dimensional volume of space that lies on or below the curve defined by $R$. To compute this, we begin by rearranging our integrals, such that the outermost integral is the integral in $s_n$ and the innermost integral is the integral in $s_1$. By construction, we know that $s_i \geq 0$, $s_i \leq r_i$, $s_i \leq s_{i+1}$, and $s_i \leq 1$. Given this, we can write $V(R) = Z_n(R)$ where
\begin{equation}
\label{eq:10}
Z_i(R) = \begin{cases}
\bigint_{0}^{\min{(s_2, r_1)}}{ds_1} & \text{if $i=1$} \\
\bigint_{0}^{\min{(s_{i+1}, r_i)}}{Z_{i-1}(R) ds_i} & \text{if $1 < i \leq n$}
\end{cases}
\end{equation}
with $s_{n+1} = 1$ set for consistency. Less formally, this can be written as
\begin{equation}
V(R) = \int_{0}^{r_n}\int_{0}^{\min{(r_{n-1},s_n)}}\cdots \int_{0}^{\min{(r_{1},s_2)}}{ds_1 ds_2\cdots ds_n} \label{eq:11}
\end{equation}
Substituting $R_k$ for $R$ into \eqref{eq:10} and rearranging, we have that
\begin{equation}
\label{eq:12}
V(R_k) = \int_{0}^{r_k}{Z_{k-1}(R_k) ds_k} = \int_{0}^{r_k}{V(R_{k-2} + [\min{(r_{k-1},s_k)}])ds_k}
\end{equation}
where list addition refers to concatenation. This can be further rewritten as
\begin{equation}
\label{eq:13}
V(R_k) = \int_{0}^{r_{k-1}}{V(R_{k-2} + [s_k])ds_k} + \int_{r_{k-1}}^{r_k}{V(R_{k-1})ds_k}
\end{equation}
Letting $V_k(x) = V(R_{k-1} + [x])$, we can substitute and simplify further to get
\begin{equation}
\label{eq:14}
V_k(r_k) = \left(\int_{0}^{r_{k-1}}{V_{k-1}(s_k)ds_k}\right) +(r_k - r_{k-1})V_{k-1}(r_{k-1})
\end{equation}
We note that
\begin{equation}
\label{eq:15}
\frac{\partial}{\partial x}V_k(x) = V_{k-1}(r_{k-1})
\end{equation}
i.e.\ $C = \left(V_k(x) - x V_{k-1}(r_{k-1})\right)$ is constant with respect to $x$. Consequently, integrating both sides of \eqref{eq:14} with respect to $r_k$ gives
\begin{equation}
\label{eq:16}
\int_{0}^{r_k}{V_k(x) dx} = \int_{0}^{r_k}{\left(x V_{k-1}(r_{k-1}) + C\right) dx} = \frac{{(r_k)}^2}{2} V_{k-1}(r_{k-1}) + r_k \left(V_k(r_k) - r_k V_{k-1} (r_{k-1})\right)
\end{equation}
which can be further simplified to
\begin{equation}
\label{eq:17}
\int_{0}^{r_k}{V_k(x) dx} = r_k V_k(r_k) - \frac{{(r_k)}^2}{2} V_{k-1}(r_{k-1})
\end{equation}
Substituting $k-1$ for $k$ in \eqref{eq:17}, substituting the resulting expression into \eqref{eq:14}, and simplifying gives
\begin{equation}
\label{eq:18}
V_k(r_k) = r_k V_{k-1}(r_{k-1}) - \frac{{(r_{k-1})}^2}{2} V_{k-2}(r_{k-2})
\end{equation}
Since we ultimately wish to find $V_n(r_n)$, \eqref{eq:18} provides a convenient recursive formula for computing $V_n(r_n)$ in $O(n)$ time. This makes the statistic proposed in Section 4 feasible to compute, since the number of elements being sorted is usually much greater than the number $n$ of lists being aggregated. To use this recursion, we note that the explicit base cases are
\begin{equation}
\label{eq:19}
V_1(r_1) = r_1
\end{equation}
\begin{equation}
\label{eq:20}
V_2(r_2) = r_2 r_1 - \frac{(r_1)^2}{2}
\end{equation}
and the implicit base cases are
\begin{equation}
\label{eq:21}
V_0 = V([]) = 1
\end{equation}
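For reference, the recursion \eqref{eq:18} together with the base cases \eqref{eq:19}--\eqref{eq:21} can be transcribed directly. The sketch below implements the recursion exactly as stated in this section (the function name is ours); the checks exercise the explicit base cases \eqref{eq:19} and \eqref{eq:20}:

```python
def v_recursive(r):
    """O(n) evaluation of the recursion in eq. (18),
        V_k(r_k) = r_k V_{k-1}(r_{k-1}) - (r_{k-1}^2 / 2) V_{k-2}(r_{k-2}),
    with base cases V_0 = 1 and V_1(r_1) = r_1, as stated in this section."""
    v_prev2, v_prev1 = 1.0, r[0]  # V_0 and V_1(r_1)
    for k in range(2, len(r) + 1):
        v_prev2, v_prev1 = (v_prev1,
                            r[k - 1] * v_prev1 - r[k - 2] ** 2 / 2 * v_prev2)
    return v_prev1

# Base cases (19) and (20):
assert v_recursive([0.4]) == 0.4
assert abs(v_recursive([0.4, 0.7]) - (0.7 * 0.4 - 0.4 ** 2 / 2)) < 1e-12
```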
\section{Implementation}
An implementation of the proposed usage and the improved method of computation can be found on GitHub, under arvindthiagarajan/multimodal-statistics.
\bibliographystyle{plain}
\section{A Newton/gradient coordinate descent optimizer for classification}
Denote by $\mathcal{D} = \left\{(\mathbf{x}_i,\mathbf{y}_i)\right\}_{i=1}^{N_\text{data}}$ data/label pairs, and consider the following class of deep learning architectures:
\begin{equation}\label{eq:architecture}
\mathcal{L}(\mathbf{W},\xi,\mathcal{D}) = \sum_{(\mathbf{x}_i,\mathbf{y}_i) \in \mathcal{D}} \mathcal{L}_{\text{CE}}(\cdot;\mathbf{y}_i) \circ \mathcal{F}_{\text{SM}} \circ \mathcal{F}_{\text{LL}} (\cdot; \mathbf{W}) \circ \mathcal{F}_{\text{HL}} (\mathbf{x}_i; \xi),
\end{equation}
where $\mathcal{L}_{\text{CE}},\mathcal{F}_{\text{SM}},\mathcal{F}_{\text{LL}}$ and $\mathcal{F}_{\text{HL}}$ denote a cross-entropy loss, softmax layer, linear layer, and hidden layer, respectively. We denote linear layer weights by $\mathbf{W}$, and consider a general class of hidden layers (e.g. dense networks, convolutional networks, etc.), denoting associated weights and biases by the parameter $\xi$. The final three layers are expressed as
\begin{equation}\label{eq:final_layers}
\mathcal{L}_{\text{CE}}(\mathbf{x};\mathbf{y}) = -{\sum_{i=1}^{N_{\text{classes}}} {y}_i \log {x}_i}; \quad
\mathcal{F}^i_{\text{SM}}(\mathbf{x}) = \frac{\exp (-x_i)}{\sum_{j=1}^{N_{\text{classes}}}\exp(-x_j)};\quad
\mathcal{F}^i_{\text{LL}}(\mathbf{x}) = \left({\mathbf{W}} \mathbf{x}\right)_i
\end{equation}
and map $\mathcal{F}_{\text{HL}}:\mathbb{R}^{N_{\text{in}}}\rightarrow \mathbb{R}^{N_{\text{basis}}}$; $\mathcal{F}_{\text{LL}}:\mathbb{R}^{N_{\text{basis}}}\rightarrow \mathbb{R}^{N_{\text{classes}}}$; $\mathcal{F}_{\text{SM}}:\mathbb{R}^{N_{\text{classes}}}\rightarrow \mathbb{R}^{N_{\text{classes}}}$; and $\mathcal{L}_{\text{CE}}:\mathbb{R}^{N_{\text{classes}}}\rightarrow \mathbb{R}$. Here, $N_{\text{basis}}$ is the dimension of the output of the hidden layer; this notation is explained in the next paragraph. The standard classification problem is to solve
\begin{equation}\label{eq:fullLoss}
\left(\mathbf{W}^*,\xi^*\right) = \underset{\mathbf{W},\xi}{\text{argmin}}\, \mathcal{L}(\mathbf{W},\xi,\mathcal{D}).
\end{equation}
The recent work by~\citet{cyr2019robust} performed a similar partition of weights into linear layer weights $\mathbf{W}$ and hidden layer weights $\xi$ for regression problems. Two important observations were made using
this decomposition. First, the output of the hidden layers can be treated as an adaptive basis with the learned weights $\mathbf{W}$ corresponding to the coefficients producing the prediction. Second, holding $\xi$ fixed leads to a linear least squares problem for the basis coefficients $\mathbf{W}$ that can be solved for a global minimum. This work builds on these two observations for classification problems.
The output of the hidden layers $\mathcal{F}_{\text{HL}}$ defines a basis
\begin{equation}
\Phi_\alpha(\cdot,\xi): \mathbb{R}^{N_{\text{in}}} \rightarrow \mathbb{R} \mbox{ for } \alpha = 1 \ldots N_{\text{basis}}
\label{eq:adapt-basis}
\end{equation}
where $\Phi_\alpha(x,\xi)$ is row $\alpha$ of $\mathcal{F}_{\text{HL}}(x,\xi)$.
Thus the inputs to the softmax classification layer are $N_{\text{classes}}$ functions, each defined using the adaptive basis $\Phi_\alpha$ and a single row of the weight matrix $\mathbf{W}$.
The crux of this approach to classification is the observation that
for all $\xi$, the function
\begin{equation}\label{eq:S_def}
\mathcal{S}(\mathbf{W}, \mathcal{D}) = \mathcal{L}(\mathbf{W},\xi, \mathcal{D})
\end{equation}
is convex with respect to $\mathbf{W}$,
and therefore the global minimizer
\begin{equation}\label{eq:WLoss}
\mathbf{W}^* = \underset{\mathbf{W}}{\text{argmin}}\, \mathcal{S}(\mathbf{W}, \mathcal{D})
\end{equation}
may be obtained via Newton iteration with line search.
In Sec. \ref{sec:algo}, we introduce a coordinate-descent optimizer that alternates between a globally optimal solution of \eqref{eq:WLoss} and a gradient descent step minimizing \eqref{eq:fullLoss}. Combining this with the interpretation of the hidden layer as providing a data-driven adaptive basis, this ensures that during training the parameters evolve along a manifold providing optimal fit of the adaptive basis to data \citep{cyr2019robust}. We summarize this perspective and relation to previous work in Sec. \ref{sec:literature}, and in Sec. \ref{sec:results} we investigate how this approach differs from stochastic gradient descent (GD), both in accuracy and in the qualitative properties of the hidden layer basis.
\section{Convexity analysis and invertibility of the Hessian} \label{sec:convexity}
In what follows, we use basic properties of convex functions \citep{boyd2004convex} and H\"older's inequality \citep{folland1999real} to prove that $\mathcal{S}$ in \eqref{eq:S_def} is convex. Recall that convexity is preserved under affine transformations. We first note that $\mathcal{F}_{\text{LL}}(\mathbf{W};\mathcal{D},\xi)$ is linear in $\mathbf{W}$.
By \eqref{eq:architecture}, it remains only to show that $\mathcal{L}_{\text{CE}} \circ \mathcal{F}_{\text{SM}}$ is convex. We write, for any data vector $\mathbf{y}$,
\begin{align}
\mathcal{L}_{\text{CE}} \circ \mathcal{F}_{\text{SM}} (\mathbf{x}; \mathbf{y}) &= - \sum_{i=1}^{N_{\text{classes}}} y_i
\log\left(\frac{\exp(-x_i)}{\sum_{j=1}^{N_\text{classes}} \exp(- x_j)}\right)\\
&= \sum_{i=1}^{N_{\text{classes}}} y_i
x_i + \log \left( \sum_{i=1}^{N_{\text{classes}}} \exp \left(-x_i \right)\right),
\end{align}
where we have used that $\sum_{i=1}^{N_{\text{classes}}} y_i = 1$ for one-hot label vectors. The first term above is affine and thus convex. We prove the convexity of the second term
${f(\mathbf{x}) := \log \left( \sum_{i=1}^{N_{\text{classes}}} \exp \left(-x_i \right)\right)}$ by writing
\begin{equation}
f(\theta \mathbf{x} + (1-\theta)\mathbf{y}) = \log \left( \sum_{i=1}^{N_{\text{classes}}}
\left( \exp(-x_i) \right)^\theta
\left( \exp(-y_i) \right)^{1-\theta}
\right).
\end{equation}
Applying H\"older's inequality with $1/p = \theta$ and $1/q = 1 - \theta$, noting that $1/p + 1/q = 1$, we obtain
\begin{align}
f(\theta \mathbf{x} + (1-\theta)\mathbf{y}) &\leq
\log \left(
\left( \sum_{i=1}^{N_{\text{classes}}}\exp(-x_i) \right)^\theta
\left( \sum_{i=1}^{N_{\text{classes}}}\exp(-y_i) \right)^{1-\theta}
\right)\\
&= \theta f(\mathbf{x}) + (1-\theta)f(\mathbf{y}).
\end{align}
Thus $f$, and therefore $\mathcal{L}_{\text{CE}} \circ \mathcal{F}_{\text{SM}}$ and $\mathcal{S}$, are convex. As a consequence, the Hessian $H$ of $\mathcal{S}$ with respect to $\mathbf{W}$ is a symmetric positive semi-definite matrix, allowing application of a convex optimizer in the following section to realize a global minimum.
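This convexity claim is also easy to sanity-check numerically. The sketch below (illustrative only; helper names are ours) evaluates $\mathcal{L}_{\text{CE}} \circ \mathcal{F}_{\text{SM}}$ as defined in \eqref{eq:final_layers} and verifies the midpoint inequality at a few fixed points:

```python
import math

def ce_softmax(x, y):
    # L_CE(F_SM(x); y) with the sign convention of eq. (2):
    # softmax components exp(-x_i) / sum_j exp(-x_j).
    z = [math.exp(-xi) for xi in x]
    s = sum(z)
    return -sum(yi * math.log(zi / s) for yi, zi in zip(y, z))

# Midpoint convexity check with a one-hot label at a few fixed points.
y = [1.0, 0.0, 0.0]
pts = [([0.3, -1.2, 2.0], [-0.5, 0.8, 0.1]),
       ([2.0, 2.0, -3.0], [0.0, 1.0, 1.0])]
for a, b in pts:
    mid = [(ai + bi) / 2 for ai, bi in zip(a, b)]
    assert ce_softmax(mid, y) <= 0.5 * (ce_softmax(a, y)
                                        + ce_softmax(b, y)) + 1e-12
```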
\section{Algorithm} \label{sec:algo}
Application of the traditional Newton method to the problem \eqref{eq:fullLoss} would require solution of a dense matrix problem of size equal to the total number of parameters in the network. In contrast, we alternate between applying Newton's method to solve only for $\mathbf{W}$ in \eqref{eq:WLoss} and a single step of a gradient-based optimizer for the remaining parameters $\xi$; the Newton step therefore scales with the number of weights ($N_{\text{basis}}\times N_{\text{classes}}$) in the linear layer. Since $\mathcal{S}$ is convex,
Newton's method with appropriate backtracking or trust region may be expected to achieve a global minimizer. We pursue a simple backtracking approach, taking the step direction and size from standard Newton and repeatedly reducing the step size until the Armijo condition is satisfied, ensuring an adequate reduction of the loss \citep{armijo1966minimization,dennis1996numerical}. For the gradient descent step we apply Adam \citep{kingma2014adam}, although one may apply any gradient-based optimizer; we denote such an update to the hidden layers for fixed $\mathbf{W}$ by the function $\text{GD}(\xi,\mathcal{B},\mathbf{W})$. To handle large data sets, stochastic gradient descent (GD) updates parameters using gradients computed over disjoint subsets $\mathcal{B} \subset \mathcal{D}$ \citep{bottou2010large}.
To expose the same parallelism, we apply our coordinate descent update over the same batches by solving \eqref{eq:WLoss} restricted to $\mathcal{B}$. Note that this implies an optimal choice of $\mathbf{W}$ over $\mathcal{B}$ only. We summarize the approach in Alg. \ref{thealg}.
\RestyleAlgo{plainruled}
\begin{algorithm}[t]
\KwData{batch $\mathcal{B}\subset \mathcal{D},\xi_{\text{old}},\mathbf{W}_{\text{old}},\alpha,\rho$}
\KwResult{$\xi_{\text{new}}, \mathbf{W}_{\text{new}}$}
\For{$j \in \{1,...,\texttt{\textup{newton\_steps}}\}$}{
Compute gradient $G = \nabla_{\mathbf{W}} S(\mathbf{W}_{\text{old}},\mathcal{B}) $ and Hessian $H = \nabla_{\mathbf{W}} \nabla_{\mathbf{W}} S(\mathbf{W}_{\text{old}},\mathcal{B})$ \;
Solve $H\bm{s} = -G$\;
$\mathbf{W}^{\dagger} \gets \mathbf{W}_{\text{old}} + \bm{s}$\;
$\lambda \gets 1$\;
\While{$S(\mathbf{W}^{\dagger},\mathcal{B}) > S(\mathbf{W}_{\textup{old}},\mathcal{B}) + \alpha \lambda G \cdot \bm{s}$}{
$\lambda \gets \lambda \rho$;\\
$\mathbf{W}^{\dagger} \gets \mathbf{W}_{\text{old}} + \lambda \bm{s}$\;
}
}
$\mathbf{W}_{\text{new}} \gets \mathbf{W}^\dagger$\;
${\xi}_{\text{new}} \gets \mathrm{GD}(\xi_{\text{old}},\mathcal{B},\mathbf{W}_{\text{new}})$\;
\caption{Application of coordinate descent algorithm for classification to a single batch $\mathcal{B}\subset \mathcal{D}$. For the purposes of this work, we use $\rho = 0.5$ and $\alpha = 10^{-4}$.}
\label{thealg}
\end{algorithm}
While $H$ and $G$ may be computed analytically from \eqref{eq:final_layers}, we used automatic differentiation for ease of implementation.
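The Newton/Armijo inner loop of Alg.~\ref{thealg} can be illustrated in isolation in the scalar case. The sketch below is a toy 1-D version (a single convex loss, not the full matrix Newton solve over $\mathbf{W}$), using the same $\rho = 0.5$ and $\alpha = 10^{-4}$ as Alg.~\ref{thealg}; the test problem and helper names are ours:

```python
import math

def newton_armijo(f, df, d2f, w, alpha=1e-4, rho=0.5, steps=20):
    # Newton direction with Armijo backtracking, mirroring the inner
    # loop of Alg. 1 in one dimension.
    for _ in range(steps):
        g, h = df(w), d2f(w)
        s = -g / h  # Newton step: the scalar analogue of solving H s = -G
        lam = 1.0
        while f(w + lam * s) > f(w) + alpha * lam * g * s:
            lam *= rho  # backtrack until the Armijo condition holds
        w = w + lam * s
    return w

# Convex 1-D test problem: f(w) = e^w + e^{-w}, minimized at w = 0.
f   = lambda w: math.exp(w) + math.exp(-w)
df  = lambda w: math.exp(w) - math.exp(-w)
d2f = lambda w: math.exp(w) + math.exp(-w)
w_star = newton_armijo(f, df, d2f, w=3.0)
assert abs(w_star) < 1e-8 and abs(f(w_star) - 2.0) < 1e-12
```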
The system $H\bm{s} = -G$ can be solved using either a dense or an iterative method. Having proven convexity of $\mathcal{S}$ in \eqref{eq:WLoss}, and thus positive semi-definiteness of the Hessian, we may apply a conjugate gradient method. We observed that solving to a relatively tight residual resulted in overfitting during training, while running a fixed number $N_{cg}$ of iterations improved validation accuracy. Thus, we treat $N_{cg}$ as a hyperparameter in our studies below. We also experimented with dense solvers; due to rank deficiency we considered a pseudo-inverse of the form $H^\dagger = (H + \epsilon I)^{-1}$, where taking a finite $\epsilon>0$ provided similar accuracy gains. We speculate that these approaches may be implicitly regularizing the training. For brevity we only present results using the iterative approach; the resulting accuracy was comparable to the dense solver. In the following section we typically use only a handful of Newton and CG iterations, so the additional cost is relatively small.
We later provide convergence studies comparing our technique to GD using the Adam optimizer and identical batching. We note that a lack of optimized software prevents a straightforward comparison of the performance of our approach vs. standard GD; while optimized GPU implementations are already available for GD, it is an open question how to most efficiently parallelize the current approach. For this reason we compare performance in terms of iterations, deferring wall-clock benchmarking to a future work when a fair comparison is possible.
\section{Relation to previous works} \label{sec:literature}
We extend the work of \citet{cyr2019robust}, which used an adaptive basis perspective to motivate a block coordinate descent approach utilizing a linear least squares solver.
The training strategy they develop is known as variable projection and has previously been used to train small networks~\citep{mcloone1998hybrid,pereyra2006variable}.
In addition to the work in \citet{cyr2019robust}, the perspective of neural networks producing an adaptive basis has been considered by several approximation theorists to study the accuracy of deep networks \citep{yarotsky2017error,opschoor2019deep, daubechies2019nonlinear}. The combination of the adaptive basis perspective with block coordinate descent optimization demonstrated dramatic increases in accuracy and performance in \citet{cyr2019robust}, but was limited to an $\ell_2$ loss. None of the previous works have considered the generalization of this approach to training deep neural networks with the cross-entropy loss typically used in classification, as we develop here.
\citet{bottou2018optimization} provides a mathematical summary on the breadth of work on numerical optimizers used in machine learning.
Several recent works have sought different means to incorporate second-order optimizers to accelerate training and avoid issues with selecting hyperparameters and training schedules \citep{osawa2019large,osawa2020scalable,botev2017practical,martens2010deep}. Some pursue a quasi-Newton approach, defining approximate Hessians, or apply factorization to reduce the effective bandwidth of the Hessian \citep{botev2017practical,xu2019newton}. Our work pursues a (block) coordinate descent strategy, partitioning degrees of freedom into sub-problems amenable to more sophisticated optimization \citep{nesterov2012efficiency,wright2015coordinate,blondel2013block}. Many works have successfully employed such schemes in ML contexts (e.g. \citep{blondel2013block,fu1998penalized,shevade2003simple,clarkson2012sublinear}), but they typically rely on stochastic partitioning of variables rather than the partition of the weights of deep neural networks into hidden layer variables and their complement pursued here. The strategy of extracting convex approximations to nonlinear loss functions is classical \citep{bubeck2014convex}, and some works have attempted to minimize general loss functions by minimizing surrogate $\ell_2$ problems \citep{barratt2020least}.
\section{Results} \label{sec:results}
We study the performance and properties of the NGD algorithm as compared to the standard stochastic gradient descent (GD) on several benchmark problems with various architectures. We start by applying dense network architectures to classification in the peaks problem. This allows us to plot and compare the qualitative properties of the basis functions $\Phi_\alpha(\cdot,\xi)$
encoded in the hidden layer \eqref{eq:adapt-basis} when trained with the two methods. We then compare the performance of NGD and GD for the standard image classification benchmarks CIFAR-10, MNIST, and Fashion MNIST using both dense and convolutional (ConvNet) architectures. Throughout this section, we compare performance in terms of iterations of Alg. \ref{thealg} for NGD and iterations of stochastic gradient descent, each of which achieves a single update of the parameters $(\mathbf{W}, \xi)$ in the respective algorithm based on a batch $\mathcal{B}$; this is the number of epochs multiplied by the number of batches.
\subsection{Peaks problem} \label{sec:peaks}
The peaks benchmark is a synthetic dataset for understanding the qualitative performance of classification algorithms \citep{haber2017stable}. Here, a scattered point cloud in the two-dimensional unit square $[0,1]^2$ is partitioned into disjoint sets. The classification problem is to determine which of these sets a given 2D point belongs to.
The two-dimensional nature allows visualization of how NGD and GD classify data. In particular, plots of both how the nonlinear basis encoded by the hidden layer maps onto classification space and how the linear layer combines the basis functions to assign a probability map over the input space are readily obtained. We train a depth 4 dense network of the form \eqref{eq:architecture} with $N_{\text{in}} = 2$, three hidden layers of width 12 contracting to a final hidden layer of width $N_{\text{basis}} = 6$, with $\tanh$ activation and batch normalization, and $N_{\text{classes}} = 5$ classes.
As specified by \citet{haber2017stable}, $5000$ training points are sampled from $[0,1]^2$. The upper-left most image in Figure~\ref{fig:peaks2} shows the sampled data points with their observed classes. For the peaks benchmark we use a single batch containing all training points, i.e. $\mathcal{B} = \mathcal{D}$. The NGD algorithm uses $5$ Newton iterations per training step with $3$ CG iterations approximating the linear solve. The learning rate for Adam for both NGD and GD is $10^{-4}$.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{Images/levelset_train_accur-crop.pdf}
\includegraphics[width=0.49\textwidth]{Images/levelset_test_accur-crop.pdf}
\caption{The training (\textit{left}) and validation (\textit{right}) accuracy for the peaks problem for both gradient descent (GD) and the Newton/gradient descent (NGD) algorithm. The solid lines represent the mean of 16 independent runs, and the shaded areas represent the mean $\pm$ one standard deviation.}
\label{fig:peaks1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{Images/Figure_1_yet_another_run_cropped.pdf}
\includegraphics[width=0.4\textwidth]{Images/Figure_2_yet_another_run_adjusted_cropped.pdf}
\hspace{0.05\textwidth}
\includegraphics[width=0.4\textwidth]{Images/Figure_4_yet_another_run_adjusted_cropped.pdf}
\includegraphics[width=0.4\textwidth]{Images/Figure_3_yet_another_run_adjusted_cropped.pdf}
\hspace{0.05\textwidth}
\includegraphics[width=0.4\textwidth]{Images/Figure_5_yet_another_run_adjusted_cropped.pdf}
\caption{Results for peaks benchmarks, with comparison between NGD and GD on an identical architecture. In this example, GD obtained a training accuracy of 99.3\% and validation accuracy of 96.2\%, while NGD obtained a training accuracy of 99.6\% and validation accuracy of 98.0\%. \textbf{Top:} Training data (\textit{left}), classification by GD (\textit{center}), and classification by NGD (\textit{right}). GD misclassifies large portions of the input space.
\textbf{Middle:} The linear and softmax layers combine basis functions to assign classification probabilities to each class. The sharp basis learned by GD leads to artifacts and attribution of probability far from the sets (\textit{left}), while the diffuse basis of NGD provides a sharp characterization of class boundaries (\textit{right}).
\textbf{Bottom:} Adaptive basis encoded by the hidden layer, as learned by GD (\textit{left}) and NGD (\textit{right}). For GD the basis is sharp, and individual basis functions conform to classification boundaries, while NGD learns a more regular basis. }
\label{fig:peaks2}
\end{figure}
Figure \ref{fig:peaks1} demonstrates that for an identical architecture, NGD provides a rapid increase in both training and validation accuracy over GD after a few iterations. For a large number of iterations both approaches achieve comparable training accuracy, although NGD generalizes better to the validation set.
The improvement in validation accuracy is borne out in Figure \ref{fig:peaks2}, which compares representative instances of training using GD and NGD.
While a single instance is shown, the character of these results is consistent across other neural networks trained for the peaks problem in the same way.
The top row illustrates the predicted classes $\text{argmax } [\mathcal{F}_{\text{SM}}(\mathbf{x})] \in \{0,1,2,3,4\}$ for $\mathbf{x} \in [0,1]^2$ and the training data, demonstrating that
the NGD-trained network predicts the class $i=2$ of lowest training point density more accurately than the GD-trained network.
The remaining sets of images visualize both the classification probability map $[\mathcal{F}_{\text{SM}}(\mathbf{x})]_i$ for $i \in \{0,1,2,3,4\}$ (middle row) and the six basis functions $\Phi_\alpha(\mathbf{x}, \xi)$ (bottom row) learned by each optimizer.
The difference in the learned bases is striking.
GD learns a basis that is nearly discontinuous, in which the support of each basis function appears fit to the class boundaries. On the other hand, NGD learns a far smoother basis that can be combined to give sharper class boundary predictions.
This manifests in the resulting probability map assigned to each class; linear combinations of the rougher GD basis result in imprinting and assignment of probability far from the relevant class.
This serves as an explanation of the improved validation accuracy of NGD as compared to GD despite similar final training accuracy.
The NGD algorithm separates refinement of the basis from the determination of the coefficients. This provides an effective regularization of the final basis, leading to improved generalization.
\subsection{Image recognition benchmarks}
We consider in this section a collection of image classification benchmarks: MNIST \citep{deng2012mnist,grother1995nist}, fashion MNIST \citep{xiao2017fashion}, and CIFAR-10 \citep{krizhevsky2009learning}. We focus primarily on CIFAR-10 due to its increased difficulty;
it is well-known that one may obtain near-perfect accuracy in the MNIST benchmark without sophisticated choice of architecture.
For all problems, we consider a simple dense network architecture to highlight the role of the optimizer, and for CIFAR-10 we also utilize convolutional architectures (ConvNet). This highlights that our approach applies to general hidden layer architectures. Our objective is to demonstrate improvements in accuracy due to optimization with all other aspects held equal. For CIFAR-10, for example, the state-of-the-art requires application of techniques such as data-augmentation and complicated architectures to realize good accuracy; for simplicity we do not consider such complications to allow a simple comparison. The code for this study is provided at \url{github.com/rgp62/}.
For all results reported in this section, we first optimize the hyperparameters listed in Table~\ref{tab:parameters} by maximizing the validation accuracy over the training run. We perform this search using the Gaussian process optimization tool in the scikit-optimize package with default options \citep{scikitopt}. This process is performed for both GD and NGD to allow a fair comparison. The ranges for the search are shown in Table~\ref{tab:parameters} with the optimal hyperparameters for each dataset examined in this study. For all problems we partition data into training, validation and test sets to ensure hyperparameter optimization is not overfitting. For MNIST and fashion MNIST we consider a $50K/10K/10K$ partition, while for CIFAR-10 we consider a $40K/10K/10K$ partition. All training is performed with a batch size of $1000$ over $100$ epochs. For all results the test accuracy falls within the first standard deviation error bars included in Figures \ref{fig:three_sets} and \ref{fig:convnet}.
Figure~\ref{fig:three_sets} shows the training and validation accuracies using the optimal hyperparameters for a dense architecture with two hidden layers of width 128 and 10 and ReLU activation functions. We find for all datasets, NGD more quickly reaches a maximum validation accuracy compared to GD, while both optimizers achieve a similar final validation accuracy. For the more difficult CIFAR-10 benchmark, NGD attains the maximum validation accuracy of GD in roughly one-quarter of the number of iterations.
In Figure~\ref{fig:convnet}, we use the CIFAR-10 dataset to compare the dense architecture to the following ConvNet architecture,
\begin{equation*}
\begin{aligned}
&\underset{\textrm{8 channels, 3x3 kernel}}{\textrm{Convolution}} \rightarrow \underset{\textrm{2x2 window}}{\textrm{Max Pooling}} \rightarrow \underset{\textrm{16 channels, 3x3 kernel}}{\textrm{Convolution}} \\
& \rightarrow \underset{\textrm{2x2 window}}{\textrm{Max Pooling}} \rightarrow \underset{\textrm{16 channels, 3x3 kernel}}{\textrm{Convolution}} \rightarrow \underset{\textrm{width 64}}{\textrm{Dense}} \rightarrow \underset{\textrm{width 10}}{\textrm{Dense}}
\end{aligned}
\end{equation*}
where the convolution and dense layers use the ReLU activation function. Again, NGD attains the maximum validation accuracy of GD in one-quarter the number of iterations, and also leads to an improvement of 1.76\% in final test accuracy. This illustrates that NGD accelerates training and can improve accuracy for a variety of architectures.
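The spatial dimensions implied by this stack can be traced with a small helper. Padding and stride are not specified above, so the sketch below assumes 'valid' convolutions with stride 1 and non-overlapping pooling (our assumptions), under which a $32\times 32$ CIFAR-10 image yields a $4\times 4\times 16$ tensor entering the dense layers.

```python
def conv_out(n, k):
    """Spatial size after a 'valid' k x k convolution with stride 1 (assumed)."""
    return n - k + 1

def pool_out(n, w):
    """Spatial size after non-overlapping w x w max pooling."""
    return n // w

def convnet_shapes(n=32):
    """Trace (layer, spatial size, channels) through the ConvNet stack above."""
    shapes = [("input", n, 3)]
    n = conv_out(n, 3); shapes.append(("conv8", n, 8))
    n = pool_out(n, 2); shapes.append(("pool", n, 8))
    n = conv_out(n, 3); shapes.append(("conv16", n, 16))
    n = pool_out(n, 2); shapes.append(("pool", n, 16))
    n = conv_out(n, 3); shapes.append(("conv16", n, 16))
    return shapes
```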
\begin{table}
\begin{center}
\begin{tabular}{ c|c|c|c|c|c }
Hyperparameter & range & MNIST & Fashion & CIFAR-10 & CIFAR-10 \\
& & & MNIST & & ConvNet \\
\hline
& & & & \\
Learning rate & $[10^{-8},10^{-2}]$ & $10^{-2.81}$ & $10^{-3.33}$ & $10^{-3.57}$ & $10^{-2.66}$ \\
& & $10^{-2.26}$ & $10^{-2.30}$ & $10^{-2.50}$ & $10^{-2.30}$ \\
\hline
Adam decay & $[0.5,1]$ & $0.537$ & $0.756$ & $0.629$ & $0.755$ \\
parameter 1 & & $0.630$ & $0.657$ & $0.891$ & $0.657$ \\
\hline
Adam decay & $[0.5,1]$ & $0.830$ & $0.808$ & $0.782$ & $0.858$ \\
parameter 2 & & $0.616$ & $0.976$ & $0.808$ & $0.976$ \\
\hline
CG iterations & $[1,10]$ & $3$ & $1$ & $2$ & $2$ \\
\hline
Newton iterations & $[1,10]$ & $6$ & $5$ & $4$ & $7$ \\
\end{tabular}
\end{center}
\caption{Hyperparameters varied in study (\textit{first column}), ranges considered (\textit{second column}), and optimal values found for MNIST (\textit{third column}), Fashion MNIST (\textit{fourth column}), CIFAR-10 (\textit{fifth column}), and CIFAR-10 with the ConvNet architecture (\textit{last column}). For the learning rate and the Adam decay parameters, the optimal values for NGD followed by GD are shown. The optimal CG and Newton iterations are only applicable to NGD.}
\label{tab:parameters}
\end{table}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Images/three_sets_cropped.pdf}
\caption{Training accuracy (\textit{top row}) and validation accuracy (\textit{bottom row}) for CIFAR-10, Fashion MNIST, and MNIST datasets. Mean and standard deviation over 10 training runs are shown.}
\label{fig:three_sets}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=5in]{Images/cifar_convnet_cropped.pdf}
\caption{Training accuracy (\textit{left}) and validation accuracy (\textit{right}) for ConvNet architectures. Mean and standard deviation over 10 training runs are shown.}
\label{fig:convnet}
\end{figure}
\section{Conclusions}
The NGD method, motivated by the adaptive basis interpretation of deep neural networks, is a block coordinate descent method for classification problems. This method separates the weights of the linear layer from the weights of the preceding nonlinear layers. NGD uses this decomposition to exploit the convexity of the cross-entropy loss with respect to the linear layer variables. It utilizes a Newton method to approximate the global minimum for a given batch of data before performing a step of gradient descent for the remaining variables. As such, it is a hybrid first/second order optimizer which extracts significant performance from a second-order substep that only scales with the number of weights in the linear layer, making it an appealing and feasible application of second-order methods for training very deep neural networks. Applying this optimizer to dense and convolutional networks, we have demonstrated acceleration with respect to the number of epochs in the validation loss for the peaks, MNIST, Fashion MNIST, and CIFAR-10 benchmarks, with improvements in accuracy for the peaks benchmark and for the CIFAR-10 benchmark using a convolutional network.
Examining the basis functions encoded in the hidden layer of the network in the peaks benchmarks revealed significant qualitative difference between NGD and stochastic gradient descent in the exploration of parameter space corresponding to the hidden layer variables. This, and the role of the tolerance in the Newton step as an implicit regularizer, merit further study.
The difference in the regularity of the learned basis and probability classes suggests that one may obtain a qualitatively different model by varying only the optimization scheme used. We hypothesize that this more regular basis may have desirable robustness properties which may affect the resulting model's sensitivity. This could have applications toward training networks to be more robust against adversarial attacks.
\begin{ack}
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract {DE-NA0003525}. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. SAND Number: {SAND2020-6022 J}.
The work of R. Patel, N. Trask, and M. Gulian are supported by the U.S. Department of Energy, Office of Advanced Scientific Computing Research under the Collaboratory on Mathematics and Physics-Informed Learning Machines for Multiscale and Multiphysics Problems (PhILMs) project. E. C. Cyr is supported by the Department of Energy early career program. M. Gulian is supported by the John von Neumann fellowship at Sandia National Laboratories.
\end{ack}
\bibliographystyle{abbrvnat}
\section{Introduction }
The Galton-Watson branching process (GWP) is a famous classical model for population growth.
Although this process is well investigated, it seems worthwhile to discuss in more depth and to refine some
well-known facts from the classical theory of the GWP. In the first half of the paper,
Sections 2 and 3, we develop discrete-time analogues of theorems from the paper of the author {\cite{Imomov14a}}.
We exploit these results in the subsequent sections to discuss properties of the so-called
Q-process, the GWP with an infinitely long-living trajectory.
Let the random function $Z_n $ denote the successive population size of the GWP at the
moment $n \in {\mathbb{N}}_0 $, where ${\mathbb{N}}_0 = \{ 0\} \cup {\mathbb{N}}$ and
${\mathbb{N}} = \left\{ 1,2,\ldots \right\}$. The state sequence
$\left\{ {Z_n , n \in{\mathbb{N}}_0 } \right\}$ can be expressed in the form
$$
Z_{n + 1} = \xi _{n1} + \xi _{n2} + \cdots + \xi_{nZ_n },
$$
where $\xi _{nk} $, $n,k \in {\mathbb{N}}_0 $, are independent and identically distributed random variables with common offspring
law $p_k : = \mathbb{P}\left\{{\xi _{11} = k} \right\}$; the variable $\xi _{nk}$ is interpreted as the number of descendants
of the $k$-th individual in the $n$-th generation. Owing to our assumptions,
$\left\{ {Z_n , n \in {\mathbb{N}}_0 }\right\}$ is a homogeneous Markov chain with state
space ${\cal S} \subset {\mathbb{N}}_0 $ and transition functions
\begin{equation}
P_{ij} : = \mathbb{P}\bigl\{ {Z_{n + 1} = j\bigm|{Z_n = i} }\bigr\}
= \sum\limits_{k_1 + \, \cdots \, + k_i = j} {p_{k_1 }\cdot p_{k_2 } \, \cdots \,p_{k_i } },
\end{equation}
for any $i,j \in {\cal S}$, where $p_j = P_{1j}$ and $\sum\nolimits_{j \in {\cal S}} {p_j }= 1$.
Conversely, any chain satisfying property (1.1) represents a GWP with the evolution
law $\left\{ {p_k ,k \in {\cal S}} \right\}$. Thus, our GWP is completely defined by specifying
the distribution $\left\{ {p_k }\right\}$; see {\cite[pp.1--2]{ANey}}, {\cite[p.19]{Jagers75}}.
From now on we assume that $p_k \ne 1$, $p_0 > 0$ and $p_0 + p_1 < 1$.
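As an illustration, the recursion defining $Z_{n+1}$ above can be simulated directly by drawing offspring counts; the sketch below is a minimal illustration (the function name and the offspring law passed in are our own assumptions, not part of the paper) returning a trajectory $Z_0,\ldots,Z_n$.

```python
import random

def simulate_gwp(p, n, z0=1, rng=random):
    """Simulate n generations of a Galton-Watson process with offspring
    law p = [p_0, p_1, ...] (summing to 1), starting from z0 individuals.
    Returns the list of successive population sizes Z_0, ..., Z_n."""
    sizes = [z0]
    for _ in range(n):
        z = sizes[-1]
        # Z_{n+1} is a sum of Z_n iid offspring counts xi ~ {p_k};
        # when Z_n = 0 the empty sum keeps the state absorbed at 0
        sizes.append(sum(rng.choices(range(len(p)), weights=p)[0]
                         for _ in range(z)))
    return sizes
```

Note that once a trajectory hits the absorbing state $0$ it stays there, matching the extinction behaviour discussed below.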
The probability generating function (GF) and its iterates are an important analytical tool
in studying the properties of the GWP. Let
$$
F(s) = \sum\limits_{k \in {\cal S}} {p_k s^k }, \quad \parbox{2.4cm}{\textit{for} {} $0 \le s < 1$.}
$$
Clearly, $A: = \mathbb{E}\xi_{11} = F'(s \uparrow 1) $ denotes the mean per
capita number of offspring, provided the series $\sum\nolimits_{k \in {\cal S}} {kp_k } $ is finite.
Owing to the homogeneous Markovian nature, the transition functions
$$
P_{ij} (n): =\mathbb{P}_i \bigl\{ {Z_n = j} \bigr\}
= \mathbb{P}\bigl\{ {Z_{n + r} = j\bigm| {Z_r = i} }\bigr\},
\quad \parbox{2.8cm}{ \textit{for any} {} $r \in {\mathbb{N}}_0$}
$$
satisfy the Chapman-Kolmogorov equation
$$
P_{ij} (n + 1) = \sum\limits_{k \in {\cal S}} {P_{ik} (n)P_{kj} },
\quad \parbox{2.2cm}{\textit{for} {} $i,j \in {\cal S}$.}
$$
Hence
\begin{equation}
\mathbb{E}_i s^{Z_n } : = \sum\limits_{j \in {\cal S}} {P_{ij} (n)s^j } = \bigl[ {F_n (s)} \bigr]^i ,
\end{equation}
where the GF $F_n (s) = \mathbb{E}_1 s^{Z_n }$ is the $n$-fold functional iterate
of $F(s)$; see {\cite[pp.16--17]{Harris66}}.
Throughout this paper we write $\mathbb{E}$ and $\mathbb{P}$ instead
of $\mathbb{E}_1 $ and $\mathbb{P}_1 $ respectively.
It follows from (1.2) that $\mathbb{E}Z_n = A^n $. The GWP is classified as subcritical,
critical or supercritical according as $A < 1$, $A = 1$ or $A > 1$.
The state $\left\{ {Z_n = 0} \right\}$ is an absorbing state for any GWP.
The limit $q = \lim _{n \to \infty } P_{10} (n)$ is the probability that the process starting
from one individual is eventually lost; it is called the extinction probability of the GWP.
It is the least non-negative root of $F(q) = q \le 1$, and $q = 1$ if the process
is non-supercritical. Moreover, the convergence $ \mathop {\lim }\nolimits_{n \to \infty } F_n (s) = q$
holds uniformly for $0 \le s \le r < 1$. An assertion describing the rate of decrease of the function
$R_n (s): = q - F_n (s)$ is, owing to its importance, called the Basic Lemma (in fact this name is usually
used in the critical situation).
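Since $q$ is the least non-negative root of $F(s)=s$ and $F_n(0)\to q$, the extinction probability can be approximated by fixed-point iteration of the offspring GF. The sketch below is our own illustration (the function name, tolerance, and example offspring laws are assumptions):

```python
def extinction_probability(p, tol=1e-12, max_iter=10000):
    """Approximate q = lim F_n(0), the least non-negative root of
    F(s) = s, by fixed-point iteration of the offspring GF
    F(s) = sum_k p_k s^k, starting from s = 0."""
    F = lambda s: sum(pk * s**k for k, pk in enumerate(p))
    s = 0.0
    for _ in range(max_iter):
        s_new = F(s)
        if abs(s_new - s) < tol:
            break
        s = s_new
    return s_new
```

For example, for the supercritical law $p = (1/4, 1/4, 1/2)$ one has $F(s) = s$ with roots $1/2$ and $1$, so $q = 1/2$, while any non-supercritical law yields $q = 1$.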
In Section 2, following the intentions of the papers {\cite{Imomov12}} and {\cite{Imomov14a}}, we
prove an assertion on the asymptotics of the function $R'_n (s)$, a Differential Analogue of the
Basic Lemma. This simple assertion (and its corollaries, Theorems 1 and 2) will
lay the basis of our reasoning in Section 3.
We begin Section 3 by recalling Lemma 3 proved in {\cite[p.15]{ANey}}.
Up to Theorem 6 we study the ergodic properties of the transition functions $\left\{ {P_{ij} (n)} \right\}$,
carrying out a comparative analysis of known results. We discuss the role
of $\mu _j = \lim _{n \to \infty } {P_{1j} (n)}/{P_{11} (n)}$ as invariant measures and
seek an analytical form of the GF ${\cal M}(s) = \sum\nolimits_{j \in {\cal S}} {\mu _j s^j }$;
we also discuss the ${\cal R}$-classification of the GWP. Further, consider the random variable ${\cal H}$ denoting
the extinction time of the GWP, that is, ${\cal H} = \min \left\{ {n:Z_n = 0} \right\}$.
The asymptotics of $\mathbb{P} \left\{{{\cal H}= n} \right\}$ has been studied in {\cite{KNS}} and {\cite{Slack}}.
The event $\left\{ {n < {\cal H} < \infty } \right\}$ means that $Z_n \ne 0$ at
the moment $n$ and $Z_{n + k} = 0$ for some $k \in {\mathbb{N}}$. By the extinction theorem,
$\mathbb{P}_i \left\{ {{\cal H} < \infty }\right\} = q^i $. Therefore in the non-supercritical case
$\mathbb{P}_i\left\{ {n < {\cal H} < \infty } \right\}
\equiv \mathbb{P}_i \left\{{{\cal H} > n} \right\} \to 0$. Hence $Z_n \to 0$ with probability one,
so in these cases the process eventually dies out. We also consider the following conditional distribution:
$$
\mathbb{P}_i^{{\cal H}(n)}\{ * \}: = \mathbb{P}_i
\bigl\{{ * \bigm| {n < {\cal H} < \infty }}\bigr\}.
$$
The classical limit theorems state that if $q > 0$ then, under certain moment assumptions, the
limit $\widetilde P_{ij} (n): = \mathbb{P}_i^{{\cal H}(n)} \bigl\{ {Z_n = j} \bigr\}$ always
exists; see {\cite[p.16]{ANey}}. In particular, Seneta {\cite{Seneta69}} proved that if $A \ne 1$ then the set
$\left\{ {\nu _j : = \lim _{n \to \infty } \widetilde P_{1j} (n)} \right\}$
represents a probability distribution and the limiting GF
${\cal V}(s) = \sum\nolimits_{j \in {\cal S}} {\nu _j s^j }$ satisfies the Schroeder equation
\begin{equation}
1 - {\cal V}\left( {{F(qs)} \over q } \right) = \beta \cdot \bigl[ {1 - {\cal V}(s)} \bigr],
\end{equation}
where $\beta = F'(q)$. Equation (1.3) expresses an invariance property of the numbers
$\left\{{\nu _j }\right\}$ with respect to the transition functions
$\left\{ {\widetilde P_{1j} (n)} \right\}$, and the set $\left\{ {\nu _j } \right\}$ is
called an ${\cal R}$-invariant measure with parameter ${\cal R} = \beta ^{ - 1}$; see {\cite{Pakes99}}.
In the critical case, the Yaglom theorem states that the conditional distribution
of ${2Z_n}/{F''(1)n}$, given that $\left\{ {{\cal H} > n} \right\}$, converges to the
standard exponential law.
At the end of the Section we investigate an ergodic property of the probabilities $\widetilde P_{ij} (n)$ and
refine the above-mentioned result of Seneta, obtaining an explicit form of ${\cal V}(s)$.
A more interesting phenomenon arises if we observe the limit of $\mathbb{P}_i^{{\cal H}(n + k)} \{ * \} $
as $k \to \infty $ with $n \in {\mathbb{N}}$ fixed.
In Section 4 we observe the conditional limit $\lim _{k \to \infty }
\mathbb{P}_i^{{\cal H}(n + k)} \bigl\{ {Z_n = j}\bigr\}$, which represents an honest probability
measure $\textbf{\textsf{Q}} = \bigl\{ {{\cal Q}_{ij} (n)} \bigr\}$ and defines a homogeneous Markov chain called
the Q-process. Let $W_n $ be the state of the Q-process at the moment $n \in {\mathbb{N}}$.
Then $W_0 \mathop = \limits^d Z_0 $ and $\mathbb{P}_i \bigl\{ {W_n = j} \bigr\} = {\cal Q}_{ij} (n)$.
The Q-process was first considered by Lamperti and Ney {\cite{LNey68}}; see also {\cite[pp.56--60]{ANey}}.
Some of its properties were discussed by Pakes {\cite{Pakes99}}, {\cite{Pakes71}}, and in
{\cite{Imomov14b}}, {\cite{Imomov02}}.
A considerable part of the paper of Klebaner, R\"{o}sler and Sagitov {\cite{KRS07}} is devoted to a discussion of
this process from the viewpoint of a branching transformation called the Lamperti-Ney transformation.
A continuous-time analogue of the Q-process was considered by the author in {\cite{Imomov12}}.
Section 5 is devoted to the classification properties of the Markov chain
$\bigl\{ {W_n, n \in {\mathbb{N}}}\bigr\}$. Unlike the GWP, the Q-process is classified into two
types depending on the value of the positive parameter $\beta $: it is positive-recurrent if $\beta <1$
and transient if $\beta = 1$. The set $\bigl\{ {\upsilon _j : = \lim _{n \to \infty }
{{\cal Q}_{ij} (n)}/{{\cal Q}_{i1} (n)}} \bigr\}$ is an invariant measure for the Q-process.
The section studies the properties of this invariant measure.
Sections 6 and 7 are devoted to examining the structure and the long-time behaviour of the total
state $ S_n = \sum\nolimits_{k = 0}^{n -1} {W_k }$ of the Q-process up to time $n$. First we
consider the joint distribution of the cumulative process $\bigl\{ {W_n ,S_n } \bigr\}$. As a result
of the calculation we will see that in the case $\beta < 1$ the variables $W_n $ and $S_n $ are
asymptotically independent. In the case $\beta = 1$ we state that, under certain conditions, the
normalized cumulative process $\bigl( {{W_n }/{\mathbb{E}W_n }};\;{{S_n }/{\mathbb{E}S_n }} \bigr)$
weakly converges to a two-dimensional random vector
having a finite distribution. Comparing with the results of earlier research, we note that in the case $\beta = 1$
the properties of $S_n $ differ essentially from the properties of the total progeny of the simple GWP.
In this connection we refer the reader to {\cite{Dwass69}}, {\cite{KNagaev94}} and {\cite{Kennedy75}},
in which interpretations and properties of the total progeny of the GWP in various contexts were investigated.
In the case $\beta < 1$, in accordance with the asymptotic independence of $W_n$ and $S_n $,
we seek a limiting law for $S_n $ separately. Thus in Section 7 we state and prove an analogue of the Law
of Large Numbers and a Central Limit Theorem for $S_n $.
\medskip
\section{Basic Lemma and its Differential analogue}
In this section we study an asymptotic property of the function $R_n (s): = q - F_n (s)$ and
its derivative. In the critical situation an explicit asymptotic expansion of this function is
known from the classical literature; it is given in formula (2.10) below.
Let $A \ne 1$. First we consider $s \in [0;\,q)$. The mean value theorem gives
\begin{equation}
R_{n + 1} (s) = F'\bigl( {\xi _n (s)} \bigr)R_n (s),
\end{equation}
where $\xi _n (s) = q - \theta R_n (s)$, $0 < \theta < 1$.
We see that $\xi _n (s) < q$. Since the GF and its derivatives are monotonically
non-decreasing, consecutive application of (2.1) yields $R_n (s) < q\beta ^n $.
Combining this with the fact that $\beta < 1$, we
obtain the following inequalities:
\begin{equation}
F^{(k)} \bigl(q(1 - \beta ^n )\bigr) < F^{(k)} \bigl(\xi _n (s)\bigr) < F^{(k)}(q),
\quad \parbox{2.3cm}{\textit{for} {} $k = 1,\,2$.}
\end{equation}
In (2.2) the superscript denotes the derivative of the corresponding order. Combining
representation (2.1) with inequalities (2.2), we obtain the relations
\begin{equation}
{{R_{n + 1} (s)} \over \beta } < R_n (s) < {{R_{n + 1} (s)}
\over {F'\bigl(q(1 - \beta ^n )\bigr)}} \raise 1.5pt\hbox{.}
\end{equation}
In turn, by the Taylor formula and the iteration of $F(s)$ we have the expansion
\begin{equation}
R_{n + 1} (s) = \beta R_n (s) - {{F''\bigl(\xi _n (s)\bigr)} \over 2}R_n^2(s),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$,}
\end{equation}
where, here and throughout this section, $\xi _n (s)$ is such that relations (2.2) are satisfied.
Assertions (2.2)--(2.4) yield:
\begin{equation}
{{F''\bigl(q(1 - \beta ^n )\bigr)} \over {2\beta }} < {\beta
\over {R_{n +1} (s)}} - {1 \over {R_n (s)}} < {{F''(q)}
\over {2F'\bigl(q(1 - \beta ^n )\bigr)}} \raise 1.5pt\hbox{.}
\end{equation}
Repeated application of (2.5) leads us to the following:
$$
{1 \over {2\beta }}\sum\limits_{k = 0}^{n- 1} {F''\bigl(q(1 - \beta ^k)\bigr)\beta ^k }
< {{\beta ^n } \over {R_n (s)}} - {1 \over {q - s}}
< {{F''(q)} \over 2}\sum\limits_{k = 0}^{n - 1} {{{\beta ^k }
\over {F'\bigl(q(1 - \beta ^k )\bigr)}}} \raise 1.5pt\hbox{.}
$$
Letting $n \to \infty $ here, we obtain the estimate
\begin{equation}
{{\Delta _1 } \over 2} \le \mathop {\lim }\limits_{n \to \infty }
\left[ {{{\beta ^n } \over {R_n (s)}} - {1 \over {q - s}}}
\right]\le {{\Delta _2 } \over 2} \raise 1.5pt\hbox{,}
\end{equation}
where
$$
\Delta _1 : = \sum\limits_{k \in {\mathbb{N}}_0 }
{{{F''\bigl(q(1 - \beta ^k )\bigr)} \over \beta }\beta ^k }
\qquad \mbox{\textit{and}} \qquad
\Delta _2 : = \sum\limits_{k \in {\mathbb{N}}_0 }
{{{F''(q)} \over {F'\bigl(q(1 - \beta ^k )\bigr)}}\beta ^k }.
$$
We see that the last two series converge. Designating
$$
{1 \over {A_1 (s)}}: = {1 \over {q - s}} + {{\Delta _1 } \over 2}
\qquad \mbox{\textit{and}} \qquad
{1 \over {A_2 (s)}}: = {1 \over {q - s}} + {{\Delta _2} \over 2} \raise 1.5pt\hbox{,}
$$
we rewrite relation (2.6) as follows:
\begin{equation}
{1 \over {A_1 (s)}} \le \mathop {\lim }\limits_{n \to \infty }{{\beta ^n }
\over {R_n (s)}} \le {1 \over {A_2 (s)}} \raise 1.5pt\hbox{.}
\end{equation}
Clearly,
$$
{1 \over {A_2 (s)}} - {1 \over {A_1 (s)}} = {{\Delta _2 - \Delta_1 } \over 2} < \infty.
$$
Hence there is a positive $\delta =\delta (s)$ such that $\Delta _1 \le \delta \le \Delta _2 $ and
the limit in (2.7) equals
\begin{equation}
{1 \over {{\cal A}(s)}} = {1 \over {q - s}} + {\delta \over 2} \raise 1.5pt\hbox{.}
\end{equation}
Carrying out similar reasoning for $s \in [q;\,1)$, we conclude that the limit
$\lim _{n \to \infty } {\beta ^n / R_n (s)} = {\cal A}(s)$ exists for all $s \in [0;1)$.
So we can formulate the following Basic Lemma.
\begin{lemma}
The following assertions are true for all $s \in [0;1)$:
\begin{enumerate}
\item [\textbf{\textsc{(i)}}] if $A \ne 1$ and $F''(q) < \infty $, then
\begin{equation}
R_n (s) = {\cal A}(s) \cdot \beta ^n \left( {1 + o(1)}\right)
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$,}
\end{equation}
where the function ${\cal A}(s)$ is defined in (2.8);
\item [\textbf{\textsc{(ii)}}] {(\textbf{see {\cite[p.19]{ANey}}})} if $A = 1$ and $2B: = F''(1) < \infty $, then
\begin{equation}
R_n (s) = \,{{1 - s} \over {\,(1 - s)Bn + 1}}\left( {1 + o(1)}\right),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$.}
\end{equation}
\end{enumerate}
\end{lemma}
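Both asymptotics in the Basic Lemma are easy to observe numerically by iterating a concrete offspring GF, writing $R_n(s) = q - F_n(s)$. The sketch below is a purely illustrative check (the quadratic offspring laws, evaluation points and tolerances are our own choices and do not come from the text): in the non-critical case $R_{n+1}(s)/R_n(s) \to \beta$, and in the critical case $nR_n(0) \to 1/B$.

```python
# Numerical sketch of the Basic Lemma, with R_n(s) = q - F_n(s).
# The offspring laws below are arbitrary illustrative choices.

def iterate(F, s, n):
    """Return F_n(s), the n-th functional iterate of F at the point s."""
    for _ in range(n):
        s = F(s)
    return s

# Non-critical (subcritical) case: F(s) = 0.5 + 0.2 s + 0.3 s^2,
# mean A = 0.8 < 1, so q = 1 and beta = F'(q) = 0.8.
F_sub = lambda s: 0.5 + 0.2 * s + 0.3 * s * s
q, beta = 1.0, 0.8

# Part (i): R_n(s) ~ A(s) * beta^n, hence R_{n+1}(s)/R_n(s) -> beta.
s0 = 0.3
ratio = (q - iterate(F_sub, s0, 41)) / (q - iterate(F_sub, s0, 40))

# Critical case: F(s) = 0.3 + 0.4 s + 0.3 s^2, mean 1, B = F''(1)/2 = 0.3.
# Part (ii) at s = 0: R_n(0) ~ 1/(Bn), i.e. n * R_n(0) -> 1/B.
F_crit = lambda s: 0.3 + 0.4 * s + 0.3 * s * s
B, n = 0.3, 100_000
scaled = n * (1.0 - iterate(F_crit, 0.0, n))   # approaches 1/B = 10/3
```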
The following lemma is a discrete-time analogue of Lemma 2 from {\cite{Imomov14a}}.
\begin{lemma}
The following assertions hold for all $s \in [0;1)$:
\begin{enumerate}
\item [\textbf{\textsc{(i)}}] if $A\ne 1$ and $F''(q) < \infty $, then
\begin{equation}
R'_n (s) = - {\cal K}(s) \cdot \beta ^n \left( {1 + o(1)} \right),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$,}
\end{equation}
where $ {\cal K}(s) = \exp \left\{ { - \delta \cdot {\cal A}(s)}\right\}$ and
$\delta = \delta (s) \in [\Delta _1 ;\,\Delta _2 ]$;
\item [\textbf{\textsc{(ii)}}] if $A = 1$ and $2B: = F''(1) < \infty $, then
\begin{equation}
R'_n (s) = \,{{\hbar (s)B} \over {\,s - F(s)}}\,R_n^2 (s)\,\left({1 + o(1)} \right),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$,}
\end{equation}
where $ F'(s) \le \hbar (s) \le 1$ and $R_n (s)$ has the expression (2.10).
\end{enumerate}
\end{lemma}
\begin{proof}
For the first part of the lemma, we have the equality
\begin{equation}
{{R'_{n + 1} (s)} \over {R'_n (s)}} = \beta - F''\bigl(\xi _n (s)\bigr)R_n(s).
\end{equation}
First let $s \in [0;\,q)$. Since the function $R_n (s)$ is monotonically decreasing in $s$,
its derivative $R'_n (s) < 0$ and hence ${R'_{n + 1} (s) / R'_n (s)} > 0$.
Therefore, taking logarithms and then summing over $n$,
we transform equality (2.13) into
\begin{equation}
\ln \left[ { -{{R'_n (s)} \over {\beta ^n }}} \right]
= \sum\limits_{k = 0}^{n - 1} {\ln \left[ {1 - {{F''\bigl(\xi _k (s)\bigr)}
\over \beta }R_k (s)} \right]}
= :\sum\limits_{k = 0}^{n - 1}{\ln L_k (s)},
\end{equation}
where
$$
L_n (s) = 1 - {{F''\bigl(\xi _n(s)\bigr)} \over \beta }R_n (s).
$$
Using elementary inequalities
$$
{{b - a} \over b} < \ln {b \over a} < {{b - a} \over a} \raise 1.5pt\hbox{,}
\quad \parbox{3cm}{\textit{where} {} $0 < b < a$,}
$$
applied to $L_k (s)$ (their applicability here is easily checked), we write
\begin{equation}
{{L_k (s) - 1} \over {L_k (s)}} < \ln L_k (s) < L_k (s) - 1.
\end{equation}
In accordance with (2.2)
\begin{equation}
- {{F''(q)} \over \beta }R_k (s) < L_k (s) - 1
< - {{F''\bigl(q(1 -\beta ^k )\bigr)} \over \beta }R_k (s) < 0.
\end{equation}
On the other hand, since $R_n (s) < q\cdot \beta ^n $,
we have $F_n (s)> q\cdot \bigl(1 - \beta ^n \bigr)$ and hence
\begin{equation}
\beta L_k (s) = F'(F_k (s)) > F'\bigl(q(1 - \beta ^k )\bigr).
\end{equation}
Combining relations (2.15)--(2.17) yields
$$
- {{F''(q)} \over {F'\bigl(q(1 - \beta ^k )\bigr)}}R_k (s) < \ln L_k (s)
< - {{F''\bigl(q(1 - \beta ^k )\bigr)} \over \beta }R_k (s).
$$
Using this relation in (2.14) we obtain
$$
\sum\limits_{k = 0}^{n - 1} {{{F''\bigl(q(1 - \beta ^k )\bigr)}
\over \beta}R_k (s)} < \ln \left[ { - {{\beta ^n }
\over {R'_n (s)}}}\right] < \sum\limits_{k = 0}^{n - 1} {{{F''(q)}
\over {F'\bigl(q(1 -\beta ^k )\bigr)}}} R_k (s).
$$
Hence, in our notation,
\begin{equation}
A_2 (s) \cdot \Delta _1 \le \mathop {\lim }\limits_{n \to \infty}
\ln \left[ { - {{\beta ^n } \over {R'_n (s)}}} \right] \le A_1(s) \cdot \Delta _2.
\end{equation}
Since $\Delta _1 \le \delta \le \Delta _2 $, it follows from (2.7)--(2.9) that
\begin{equation}
A_2 (s) \le \mathop {\lim }\limits_{n \to \infty } {{R_n (s)}\over {\beta ^n }} = {\cal A}(s) \le A_1 (s).
\end{equation}
Combining estimates (2.18) and (2.19), we conclude that
\begin{equation}
\Delta _1 \le \mathop {\lim }\limits_{n \to \infty } {{\ln \left[{ - {\displaystyle{\beta ^n }
\over \displaystyle{R'_n (s)}}} \right]} \over {{\cal A}(s)}}\le \Delta _2.
\end{equation}
The function ${\beta ^n / R'_n (s)}$ is continuous and monotone in $s$ for each $n \in {\mathbb{N}}_0 $.
Inequalities (2.20) imply that the functions $\ln \bigl[ { -{\beta ^n / R'_n (s)}} \bigr]$
converge uniformly for $0 \le s \le z < q$ as $n \to \infty $. From here we obtain (2.11)
for $0 \le s < q$. Similar reasoning shows that convergence (2.11) also holds
for $s \in [q;\,1)$, and hence for all $s$ with $0 \le s < 1$.
We now prove formula (2.12). The Taylor expansion and the iteration of $F(s)$ produce
\begin{equation}
F_n (F(s)) - F_n (s) = \,BR_n^2 (s)\,\left( {1 + o(1)}\right),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$.}
\end{equation}
Applying the mean value theorem to the left-hand side of (2.21), we obtain
\begin{equation}
F'_n \left( {c(s)} \right) = \,{B \over {F(s) - s}}\,R_n^2(s)\,\left( {1 + o(1)} \right),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$,}
\end{equation}
where $ s < c(s) < F(s)$. Using the monotonicity of the derivative of any GF,
the functional iteration of $F(s)$ entails
$$
F'_n (s) < F'_n (c(s)) < {{F'_{n + 1} (s)} \over {F'(s)}} \raise 1.5pt\hbox{.}
$$
From here, using the iteration again, we have
\begin{equation}
{{F'(s)} \over {F'\bigl(F_n (s)\bigr)}}F'_n \bigl(c(s)\bigr)
< F'_n (s) < F'_n \bigl(c(s)\bigr).
\end{equation}
It follows from relations (2.22), (2.23) and the fact that $ F_n (s)\uparrow 1$ that
$$
F'(s) \le \mathop {\lim }\limits_{n \to \infty }
{{\bigl( {F(s) - s} \bigr)F'_n (s)} \over {BR_n^2 (s)}} \le 1.
$$
Denoting by $\hbar (s)$ the middle part of the last inequalities leads us to representation (2.12).
Lemma 2 is proved.
\end{proof}
\begin{remark}
The function ${\cal A}(s)$ plays the same role as the analogous function in the Basic Lemma for the
continuous-time Markov branching process established in {\cite{Imomov14a}}; see also {\cite{Imomov12}}.
Indeed, one can check that under the conditions of Lemma 1,
$0 < {\cal A}(0) < \infty $, ${\cal A}(q) = 0$, ${\cal A}'(q) = - 1$, and that it
asymptotically satisfies the Schroeder equation:
$$
{\cal A}\bigl( {F_n (qs)} \bigr) = \beta ^n \cdot {\cal A}(qs)\bigl( {1 + o(1)} \bigr),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$,}
$$
for all $0 \le s < 1$.
\end{remark}
Now, owing to Lemma 2, we can calculate the probability of return to the initial state $Z_0 = 1$ at time $n$.
Since $ F'_n (0) = P_{11} (n)$, putting $s = 0$ in (2.11) and (2.12)
directly yields the following two local limit theorems.
\begin{theorem}
Let $A \ne 1$ and $F''(q) < \infty $. Then
\begin{equation}
\beta ^{ - n} P_{11} (n) = {\cal K}(0)\left( {1 + o(1)}\right),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$,}
\end{equation}
where the function ${\cal K}(s)$ is defined in (2.11).
\end{theorem}
\begin{theorem}
If $A = 1$ and the second moment $F''(1) =: 2B$ is finite, then
\begin{equation}
n^2 P_{11} (n) = {{\widehat p_1 } \over {p_0 B}}\left({1 + o(1)} \right),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$,}
\end{equation}
where $p_1 \le\widehat p_1 \le 1$.
\end{theorem}
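Both local limit theorems can be observed numerically through the identity $P_{11}(n) = F'_n(0) = \prod_{k=0}^{n-1} F'\bigl(F_k(0)\bigr)$, which follows from the chain rule. The sketch below uses arbitrary illustrative offspring laws (not from the text); in the critical case it checks only that $n^2 P_{11}(n)$ stabilizes to a positive constant, since the value of the constant involves the undetermined $\widehat p_1$.

```python
# Numerical sketch of Theorems 1 and 2 (offspring laws chosen for illustration).
# By the chain rule, P_11(n) = F_n'(0) = prod_{k=0}^{n-1} F'(F_k(0)).

def p11(F, dF, n):
    """Return P_11(n) = F_n'(0)."""
    s, prod = 0.0, 1.0
    for _ in range(n):
        prod *= dF(s)
        s = F(s)
    return prod

# Non-critical case: F(s) = 0.5 + 0.2 s + 0.3 s^2 (A = 0.8, q = 1, beta = 0.8).
# Theorem 1: beta^{-n} P_11(n) converges, hence P_11(n+1)/P_11(n) -> beta.
F1 = lambda s: 0.5 + 0.2 * s + 0.3 * s * s
dF1 = lambda s: 0.2 + 0.6 * s
ratio = p11(F1, dF1, 51) / p11(F1, dF1, 50)

# Critical case: F(s) = 0.3 + 0.4 s + 0.3 s^2 (A = 1, B = 0.3).
# Theorem 2: n^2 P_11(n) converges to a positive constant.
F2 = lambda s: 0.3 + 0.4 * s + 0.3 * s * s
dF2 = lambda s: 0.4 + 0.6 * s
c1 = 50_000 ** 2 * p11(F2, dF2, 50_000)
c2 = 100_000 ** 2 * p11(F2, dF2, 100_000)   # nearly equal to c1
```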
\medskip
\section{Ergodic Behavior of Transition Functions $\left\{ {P_{ij} (n)}\right\}$ and Invariant Measures}
We devote this section to the ergodicity properties of the transition functions $\left\{ {P_{ij} (n)} \right\}$.
Here we will make essential use of Lemma 2 in combination with the following ratio limit
property (RLP) {\cite{ANey}}.
\begin{lemma}[\textbf{see {\cite[p.15]{ANey}}}]
If $p_1 \ne 0$, then for all $i,j \in {\cal S}$ the RLP holds:
\begin{equation}
{{P_{ij} (n)} \over {P_{11} (n)}} \longrightarrow iq^{i - 1} \mu _j,
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$,}
\end{equation}
where $\mu _j = \lim _{n \to \infty } {{P_{1j} (n)}\mathord{\left/ {\vphantom
{{P_{1j} (n)} {P_{11} (n)}}} \right. \kern-\nulldelimiterspace} {P_{11} (n)}} < \infty$.
\end{lemma}
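The RLP can be watched numerically by tracking the coefficients of $F_n(s)$ and $F_n(s)^i$ as truncated power series: truncation leaves the low-order coefficients exact, since composition with a GF only pushes degrees upward. The following sketch uses an arbitrary critical offspring law with $q = 1$ (our illustrative choice, not from the text), for which the RLP predicts $P_{21}(n)/P_{11}(n) \to 2q\,\mu_1 = 2$.

```python
# Numerical sketch of the ratio limit property (3.1) for the illustrative
# critical offspring law F(s) = 0.3 + 0.4 s + 0.3 s^2 (q = 1).

D = 32  # truncation degree

def mul(a, b):
    """Product of two coefficient lists, truncated at degree D."""
    c = [0.0] * (D + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= D:
                    c[i + j] += ai * bj
    return c

def compose_F(P):
    """Coefficients of F(P(s)) = 0.3 + 0.4 P(s) + 0.3 P(s)^2, truncated."""
    P2 = mul(P, P)
    return [0.3 * (i == 0) + 0.4 * P[i] + 0.3 * P2[i] for i in range(D + 1)]

def Fn_coeffs(n):
    """Coefficients of F_n(s); P_{1j}(n) is the coefficient of s^j."""
    P = [0.0, 1.0] + [0.0] * (D - 1)   # F_0(s) = s
    for _ in range(n):
        P = compose_F(P)
    return P

C250 = Fn_coeffs(250)
C500 = Fn_coeffs(500)

# P_{2j}(n) is the coefficient of s^j in F_n(s)^2; for i = 2, j = 1 the RLP
# predicts P_{21}(n)/P_{11}(n) -> 2 mu_1 = 2 (recall mu_1 = 1 by definition).
rlp_ratio = mul(C500, C500)[1] / C500[1]

# mu_2 = lim P_{12}(n)/P_{11}(n): the two estimates should nearly agree.
m2_a = C250[2] / C250[1]
m2_b = C500[2] / C500[1]
```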
Denoting
$$
{\cal M}_n^{(i)} (s) = \sum\limits_{j \in {\cal S}} {{{P_{ij} (n)}\over {P_{11} (n)}}s^j },
$$
we see that a GF analogue of assertion (3.1) is
\begin{equation}
{\cal M}_n^{(i)}(s) \sim iq^{i - 1} {\cal M}_n (s) \longrightarrow iq^{i - 1}{\cal M}(s),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$,}
\end{equation}
where ${\cal M}_n (s) = {\cal M}_n^{(1)} (s)$ and
${\cal M}(s) = \sum\nolimits_{j \in {\cal S}} {\mu _j s^j } $.
The properties of the numbers $\left\{ {\mu _j } \right\}$ are of interest for our purposes.
Since they are non-negative, the limiting GF ${\cal M}(s)$
is monotonically non-decreasing in $s$. Moreover, according to assertion (3.2), in studying the behavior
of ${P_{ij} (n) / P_{11} (n)}$ it suffices to consider the function ${\cal M}_n (s)$.
It has been proved in {\cite[pp.12--14]{ANey}} that the sequence $\left\{ {\mu _j } \right\}$
satisfies the equation
\begin{equation}
\beta \mu _j = \sum\limits_{k \in {\cal S}} {\mu _k P_{kj}},
\quad \parbox{2.4cm}{ \textit{for all} {} $j \in {\cal S}$,}
\end{equation}
where $P_{ij} = \mathbb{P}_i\left\{ {Z_1 = j} \right\}$.
Moreover, the GF ${\cal M}(s)$ satisfies the functional equation
\begin{equation}
{\cal M}\bigl( {F(s)} \bigr) = \beta {\cal M}(s) + {\cal M}(p_0),
\end{equation}
provided $s$ and $p_0 $ are in the region of convergence of ${\cal M}(s)$.
The following theorem describes the main properties of this function.
\begin{theorem}
Let $p_1 \ne 0$. Then ${\cal M}(s)$ converges for $0 \le s < 1$. Furthermore
\begin{enumerate}
\item [\textbf{\textsc{(i)}}] if $A \ne 1$ and $F''(q) < \infty $, then
\begin{equation}
{\cal M}(s) = {{{\cal A}(0) - {\cal A}(s)} \over {{\cal K}(0)}} \raise 1.5pt\hbox{,}
\end{equation}
where ${\cal A}(s)$ and ${\cal K}(s)$ are the functions from (2.9) and (2.11), respectively;
\item [\textbf{\textsc{(ii)}}] if $A = 1$ and $2B: = F''(1) < \infty $,
then ${\cal M}_n (s) = {\cal M}(s) + r_n (s)$, where
\begin{equation}
{\cal M}(s) = {{p_0 } \over {\widehat p_1 B}} \cdot {s \over {1 -s}} \raise 1.5pt\hbox{,}
\end{equation}
and $p_1 \le \widehat p_1 \le 1$, ${r_n (s)} = \mathcal{O}\left( {{1 \mathord{\left/
{\vphantom {1 n}} \right. \kern-\nulldelimiterspace} n}} \right)$ as $n \to \infty $.
\end{enumerate}
\end{theorem}
\begin{proof}
The convergence of the GF ${\cal M}(s)$ was proved in {\cite[p.13]{ANey}}.
In our notation we write
\begin{equation}
{\cal M}_n (s) = {{F_n (s) - F_n (0)} \over {F'_n (0)}}
= \left({1 - {{R_n (s)} \over {R_n (0)}}} \right) \cdot {{R_n (0)} \over{P_{11} (n)}} \raise 1.5pt\hbox{.}
\end{equation}
In case $A \ne 1$ it follows from (2.9) that
$$
{{R_n (s)} \over {R_n (0)}} \longrightarrow {{{\cal A}(s)} \over {{\cal A}(0)}} \raise 1.5pt\hbox{,}
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$,}
$$
and, in view of (2.24),
\begin{equation}
{{R_n (0)} \over {P_{11} (n)}} \longrightarrow {{{\cal A}(0)} \over {{\cal K}(0)}} \raise 1.5pt\hbox{.}
\end{equation}
Combining (3.7) and (3.8) we obtain ${\cal M}(s)$ in form of (3.5).
We now pass to the case $A = 1$. By virtue of (2.10),
\begin{equation}
1 - {{R_n (s)} \over {R_n (0)}} \sim \,{s \over {\,(1 - s)Bn +1}} \raise 1.5pt\hbox{,}
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$.}
\end{equation}
In turn, according to (2.25),
\begin{equation}
{{R_n (0)} \over {P_{11} (n)}} \sim {{\,p_0 } \over {\widehat p_1}}\,n,
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$.}
\end{equation}
Combining relations (3.7), (3.9) and (3.10), we obtain
$$
{\cal M}_n (s) \sim {{\,p_0 } \over {\widehat p_1 }}{{sn} \over{\,(1 - s)Bn + 1}}\raise 1.5pt\hbox{,}
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$.}
$$
Letting $n \to \infty$ here, we find the limiting GF in the form (3.6).
The proof is complete.
\end{proof}
\begin{remark}
The theorem above is a strengthened form of Theorem 2 from {\cite[p.13]{ANey}}, in the
sense that in our case we obtain the explicit analytical form of the limiting GF ${\cal M}(s)$.
\end{remark}
The following assertions follow from the theorem proved above.
\begin{corollary}
Let $p_1 \ne 0$. Then
\begin{enumerate}
\item [\textbf{\textsc{(i)}}] if $A \ne 1$ and $F''(q) < \infty $, then
\begin{equation}
{\cal M}(q) = \sum\limits_{j \in {\cal S}} {\mu _j q^j }
={{{\cal A}(0)} \over {{\cal K}(0)}} < \infty ;
\end{equation}
\item [\textbf{\textsc{(ii)}}] if $A = 1$ and $2B:= F''(1) < \infty $, then
\begin{equation}
\sum\limits_{j = 1}^n {\mu _j } \sim {{p_0 } \over {\widehat p_1 B}}\,n,
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$.}
\end{equation}
\end{enumerate}
\end{corollary}
\begin{proof}
The relation (3.11) follows from (3.5).
In case $A = 1$ as shown in (3.6)
$$
{\cal M}(s) \sim {{p_0 } \over {\widehat p_1 B}} \cdot {1 \over {1- s}} \raise 1.5pt\hbox{,}
\quad \parbox{1.6cm}{\textit{as} {} $s \uparrow 1$.}
$$
According to the Hardy--Littlewood Tauberian theorem, the last relation entails (3.12).
\end{proof}
Now from Lemma 3 and Theorems 1 and 2 we obtain a complete account of the asymptotic
behavior of the transition functions $P_{ij} (n)$. The following theorems hold.
\begin{theorem}
Let $p_1 \ne 0$. If $A \ne 1$ and $F''(q) < \infty $, then
$$
\beta ^{ - n} P_{ij} (n) = {{{\cal A}(0)} \over {{\cal M}(q)}}iq^{i - 1}
\mu _j \left( {1 + o(1)} \right), \quad \parbox{2cm}{\textit{as} {} $n \to \infty$.}
$$
\end{theorem}
\begin{theorem}
Let $p_1 \ne 0$. If for the critical GWP the second moment $F''(1) =: 2B$ is finite, then the transition
functions admit the following asymptotic representation:
$$
n^2 P_{ij} (n) = {{\widehat p_1 } \over {p_0 B}}i\mu _j \left( {1+ o(1)} \right),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$.}
$$
\end{theorem}
Further, we discuss the role of the set $\left\{ {\mu _j }\right\}$ as an invariant measure for the
transition probabilities $\left\{ {P_{ij} (n)} \right\}$. An invariant (or stationary) measure of
the GWP is a set of nonnegative numbers $\left\{ {\mu _j^ *} \right\}$ satisfying the equation
\begin{equation}
\mu _j^ * = \sum\limits_{k \in {\cal S}} {\mu _k^ * P_{kj} }.
\end{equation}
If $\sum\nolimits_{j \in {\cal S}} {\mu _j^ * } < \infty $ (or, without loss of generality,
$\sum\nolimits_{j \in {\cal S}} {\mu _j^ * } = 1$), then it is called an invariant distribution.
Since $P_{00} (n) = 1$, according to (3.13) $\mu _0^ * = 0$ for any invariant measure
$\left\{ {\mu _j^ * } \right\}$. If $P_{10} (n) = 0$, then condition (3.13) becomes
$\mu _j^ * = \sum\nolimits_{k = 1}^j {\mu _k^ * P_{kj} (n)} $.
If $P_{10} (n) > 0$, then $P_{i0} (n) > 0$ and hence $\mu _j^ * > 0$.
By virtue of Theorem 4, in the non-critical situation the transition functions
$P_{ij} (n)$ decrease exponentially to zero as $n \to \infty$.
Following the classification of continuous-time Markov processes,
we characterize this decrease by a ``decay parameter''
$$
{\cal R} = - \mathop {\lim }\limits_{n \to \infty }{{\ln P_{ii}(n)} \over n} \raise 1.5pt\hbox{.}
$$
We classify the non-critical Markov chain $\left\{ {Z_n ,\,\,n \in {\mathbb{N}}_0 } \right\}$
as ${\cal R}$-transient if
$$
\sum\limits_{n \in {\mathbb{N}}} {e^{{\cal R}n}P_{ii} (n)} < \infty
$$
and ${\cal R}$-recurrent otherwise. The chain is called ${\cal R}$-positive if
$\lim _{n \to \infty } e^{{\cal R}n} P_{ii} (n) > 0$, and ${\cal R}$-null if this limit
equals zero.
Now assertion (3.11) and Theorem 4 yield the following statement.
\begin{theorem}
Let $p_1 \ne 0$. If $A \ne 1$ and $F''(q) < \infty $, then ${\cal R} = \left| {\ln \beta } \right|$ and
the chain $\left\{ {Z_n } \right\}$ is ${\cal R}$-positive. The set of numbers $\left\{ {\mu _j } \right\}$
determined by GF (3.5) is the unique (up to multiplicative constant) ${\cal R}$-invariant measure for GWP.
\end{theorem}
In the critical situation the set $\left\{ {\mu _j } \right\}$ directly plays the role of an invariant measure
for the GWP. Indeed, in this case $\beta = 1$ and, according to (3.3), the following invariant equation holds:
$$
\mu _j = \sum\limits_{k \in {\cal S}} {\mu _k P_{kj} },
\quad \parbox{2.4cm}{\textit{for all} {} $j \in {\cal S}$,}
$$
and, owing to (3.12), $\sum\nolimits_{j \in {\cal S}} {\mu _j } = \infty $.
\begin{remark}
As Theorems 4 and 5 show, the hitting probabilities of the GWP to any state over a long time interval
depend on the initial state. That is, the ergodic property does not hold for
$\left\{ {Z_n , n \in {\mathbb{N}}_0 } \right\}$.
\end{remark}
Our further reasoning concerns the previously introduced variable
$$
{\cal H}: = \min \bigl\{ {n \in {\mathbb{N}}:\;Z_n =0} \bigr\},
$$
which denotes the extinction time of the GWP. As before, let
$$
\mathbb{P}_i^{{\cal H}(n)} \{* \}:
= \mathbb{P}_i \bigl\{{* \bigm|{n < {\cal H}< \infty }} \bigr\}.
$$
Consider the probabilities $\widetilde P_{ij} (n)=
\mathbb{P}_i^{{\cal H}(n)} \bigl\{ {Z_n = j} \bigr\}$ and denote
$$
{\cal V}_n^{(i)} (s) = \sum\limits_{j \in {\cal S}} {\widetilde P_{ij} (n)s^j }
$$
the corresponding GF. As noted in the introduction, if $q > 0$,
then the limit $\nu _j : =\lim _{n \to \infty } \widetilde P_{1j} (n)$ always exists.
In the case $A \ne 1$ the set $\left\{ {\nu _j } \right\}$ is a probability distribution,
and the limiting GF ${\cal V}(s) = \sum\nolimits_{j \in {\cal S}} {\nu _j s^j } $ satisfies
Schroeder's equation (1.3) for $0 \le s \le 1$. If $A = 1$, however, then $\nu _j \equiv 0$;
see {\cite{Seneta69}} and {\cite[p.16]{ANey}}. In the following two theorems we study
the limit of $\widetilde P_{ij} (n)$ as $n \to \infty $ for any $i,j \in {\cal S}$.
Unlike the aforementioned results of Seneta, we obtain explicit expressions for the corresponding GF.
\begin{theorem}
Let $p_1 \ne 0$. If $A \ne 1$ and $F''(q) < \infty $, then
$$
\mathop {\lim }\limits_{n \to \infty } \widetilde P_{ij} (n) = \nu_j,
\quad \parbox{2.4cm}{\textit{for all} {} $j \in {\cal S}$,}
$$
and the corresponding GF ${\cal V}(s) =\sum\nolimits_{j \in {\cal S}} {\nu _j s^j } $ has the form
\begin{equation}
{\cal V}(s) = 1 - {{{\cal A}(qs)} \over {{\cal A}(0)}} \raise 1.5pt\hbox{,}
\end{equation}
where the function ${\cal A}(s)$ is defined in (2.8).
\end{theorem}
\begin{proof}
We write
\begin{equation}
\widetilde P_{ij} (n) = {{\mathbb{P}_i \bigl\{ {Z_n = j,\; n <{\cal H} < \infty } \bigr\}}
\over {\mathbb{P}_i \bigl\{ {n <{\cal H} < \infty } \bigr\}}} \raise 1.5pt\hbox{.}
\end{equation}
In turn
$$
\mathbb{P}_i \bigl\{ {Z_n = j,\; n < {\cal H} < \infty } \bigr\}
= \mathbb{P}\bigl\{ {n < {\cal H} < \infty \bigm| {Z_n = j} }\bigr\} \cdot P_{ij} (n).
$$
Since the extinction probability of $j$ particles equals
$q^j $, from the last form we obtain
\begin{equation}
\mathbb{P}_i \bigl\{ {Z_n = j,\; n < {\cal H} < \infty }\bigr\} = q^j \cdot P_{ij} (n).
\end{equation}
Relation (3.16) then implies
\begin{equation}
\mathbb{P}_i \bigl\{ {n < {\cal H} < \infty } \bigr\} = \sum\limits_{j\in {\cal S}}
{\mathbb{P}_i \bigl\{ {Z_n = j,\; n < {\cal H} < \infty }\bigr\}}
= \sum\limits_{j \in {\cal S}} {P_{ij} (n)q^j }.
\end{equation}
Now it follows from (3.15)--(3.17) and Lemma 3 that
$$
\widetilde P_{ij} (n) = {\displaystyle{{{P_{ij} (n)} \over {P_{11} (n)}} \cdot q^j } \over
{\displaystyle \sum\nolimits_{k \in {\cal S}} {{{P_{ik} (n)} \over {P_{11} (n)}}q^k } }}\; \buildrel {}
\over \longrightarrow \;{{\mu _j \cdot q^j} \over {\sum\nolimits_{k \in{\cal S}}{\mu_k q^k }}}
= {{\mu _j q^j } \over {{\cal M}(q)}} = :\nu _j,
$$
as $n \to \infty $. It can be verified that the limit distribution $\left\{ {\nu _j } \right\}$ defines the
GF ${\cal V}(s) = {{\cal M}(qs) / {\cal M}(q)}$. Applying equality (3.5) here, we arrive at (3.14).
\end{proof}
\begin{remark}
The mean of the distribution $\widetilde P_{ij} (n)$ satisfies
$$
\sum\limits_{j \in {\cal S}} {j\widetilde P_{ij} (n)} \longrightarrow {q\over {{\cal A}(0)}} \raise 1.5pt\hbox{,}
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$}
$$
and the limit distribution $ \left\{ {\nu _j } \right\}$ has the finite
mean ${\cal V}'(s)\big|_{s \uparrow 1} = {q / {\cal A}(0)}$.
\end{remark}
Consider now the case $A = 1$. In this case
$\mathbb{P}\left\{ {{\cal H} < \infty } \right\} = 1$, and therefore
\begin{eqnarray}
{\cal V}_n^{(i)} (s) \nonumber
& = & \sum\limits_{j \in {\cal S}} {\mathbb{P}_i \bigl\{ {Z_n = j\bigm| {{\cal H} > n}} \bigr\}s^j } \\
& = & \sum\limits_{j \in {\cal S}} {{{P_{ij} (n)} \over {\mathbb{P}_i \bigl\{ {Z_n > 0} \bigr\}}}s^j }
= 1 - {{1 - F_n^i (s)} \over {1 - F_n^i (0)}} \raise 1.5pt\hbox{.} \nonumber
\end{eqnarray}
We see that $ 1 - F_n^i (s) \sim iR_n (s)$ as $n \to \infty $. Hence, in view of (3.7),
\begin{equation}
{\cal V}_n^{(i)} (s) \sim 1 - {{R_n (s)} \over {R_n (0)}}
={{P_{11} (n)} \over {R_n (0)}} \cdot {\cal M}_n(s),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$.}
\end{equation}
Combining expansions (2.10), (2.25), (3.6) and (3.18), we state the following theorem.
\begin{theorem}
Let $A = 1$. If $2B: = F''(1) < \infty $, then
$$
n{\cal V}_n^{(i)} (s) = {1 \over B} \cdot {s \over {1 - s}} + \rho_n (s),
$$
where $ {\rho _n (s)} = \mathcal{O}\left( {{1 \mathord{\left/
{\vphantom {1 n}} \right. \kern-\nulldelimiterspace} n}} \right)$ as $n \to \infty $.
\end{theorem}
\begin{remark}
It is a curious fact that in the last theorem we managed to get rid of the undefined variable
$\widehat p_1 \in [p_1 ;1]$.
\end{remark}
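The $n^{-1}$ decay rate asserted in Theorem 8 can be checked numerically for $i = 1$, where ${\cal V}_n^{(1)}(s) = 1 - \bigl(1 - F_n(s)\bigr)/\bigl(1 - F_n(0)\bigr)$. The sketch below (an arbitrary illustrative critical offspring law, not from the text) verifies that $n\,{\cal V}_n^{(1)}(s)$ stabilizes to a positive constant as $n$ grows.

```python
# Numerical sketch of the decay rate in Theorem 8 (illustrative offspring law:
# F(s) = 0.3 + 0.4 s + 0.3 s^2, critical, with B = F''(1)/2 = 0.3).
# For i = 1: V_n(s) = 1 - (1 - F_n(s)) / (1 - F_n(0)).

def iterate(F, s, n):
    """Return F_n(s)."""
    for _ in range(n):
        s = F(s)
    return s

F = lambda s: 0.3 + 0.4 * s + 0.3 * s * s
s = 0.5

def scaled_V(n):
    """n * V_n(s), which by Theorem 8 approaches a positive constant."""
    return n * (1.0 - (1.0 - iterate(F, s, n)) / (1.0 - iterate(F, 0.0, n)))

v1 = scaled_V(50_000)
v2 = scaled_V(100_000)   # nearly equal to v1 if n * V_n(s) has a limit
```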
Now define the stochastic process $\widetilde Z_n $ with the transition matrix
$\left\{ {\widetilde P_{ij} (n)} \right\}$. It is easy to see that $\widetilde Z_n $ is
a discrete-time Markov chain. According to the last theorems, the properties of its trajectories become
independent of the initial state as the number of generations grows.
In the non-critical case, according to Theorem 7, for the process $\widetilde Z_n $ there
is a unique (up to a multiplicative constant) set of nonnegative numbers $\left\{ {\nu _j } \right\}$ which
are not all zero and $\sum\nolimits_{j \in {\cal S}} {\nu _j } = 1$.
Moreover, since $ {\cal M}(qs) = {\cal M}(q) \cdot {\cal V}(s)$, using
formula (3.4) we can establish the following invariant equation:
$$
\beta \cdot {\cal V}(s) = {\cal V}\left( {\widehat F(s)} \right) - {\cal V}\left( {\widehat F(0)} \right),
$$
where ${\cal V}(s) =\sum\nolimits_{j \in {\cal S}} {\nu _j s^j } $ and
$\widehat F(s) ={{F(qs)} \mathord{\left/ {\vphantom {{F(qs)} q}} \right. \kern-\nulldelimiterspace} q}$.
So we have the following
\begin{theorem}
Let $A \ne 1$ and $F''(q) < \infty $. Then
$$
P_{ij} (n) = \widetilde P_{ij} (n) \cdot \sum\limits_{k\in {\cal S}} {P_{ik} (n)q^{k - j} },
$$
where the transition functions $\widetilde P_{ij} (n)$ possess the ergodic property, and their limits
$\nu _j = \lim _{n \to \infty } \widetilde P_{ij} (n)$ form an $\left| {\ln \beta } \right|$-invariant
distribution for the Markov chain $\left\{ {\widetilde Z_n } \right\}$.
\end{theorem}
In the critical situation we have the following assertion, which follows directly from
Theorem 8 by taking into account the continuity theorem for GFs.
\begin{theorem}
If in critical GWP $2B: = F''(1) < \infty $, then
$$
n\widetilde P_{ij} (n) = {1 \over B} + \mathcal{O}\left( {{\,1 \over \,n}}\right),
\quad \parbox{2cm}{\textit{as} {} $n \to \infty$.}
$$
\end{theorem}
\medskip
\section{Limiting interpretation of $\mathbb{P}_i^{{\cal H}(n + k)}\{* \}$}
In this section, excluding the cases $p_1 = 0$ and $q = 0$, we study the
distribution $\mathbb{P}_i^{{\cal H}(n + k)} \{ Z_n = j\}$. It was already
noticed by Harris {\cite{Harris51}} that its limit as $k\to \infty $
always exists for any fixed $n \in {\mathbb{N}}$. By means of
relations (3.15)--(3.17) it was obtained in {\cite[pp.56--60]{ANey}} that
$$
\mathop {\lim }\limits_{k \to \infty } \mathbb{P}_i^{{\cal H}(n + k)} \bigl\{Z_n = j\bigr\}
= {{jq^{j - i} } \over {i\beta ^n }}P_{ij} (n) = :{\cal Q}_{ij} (n).
$$
Since $F'_n (q) = \left[ {F'(q)} \right]^n= \beta ^n $, by (1.2) we have
$$
\sum\limits_{j \in {\cal S}} {{{jq^{j - i} } \over {i\beta ^n}}P_{ij} (n)} = {1 \over {iq^{i - 1}
\beta ^n }}\left[{\sum\nolimits_{j \in {\cal S}} {P_{ij} (n)s^j } } \right]^\prime_{s = q} = 1.
$$
Thus we have an honest probability measure $\textbf{\textsf{Q}}
=\left\{ {{\cal Q}_{ij} (n)} \right\}$. The stochastic process
$\left\{ {W_n , n \in {\mathbb{N}}_0 } \right\}$ defined by this measure is called the Q-process.
By definition
$$
\textbf{\textsf{Q}} = \left\{{\lim _{k \to \infty} \mathbb{P}_i
\bigl\{{* \bigm| {n + k < {\cal H} < \infty }} \bigr\}}\right\}
= \bigl\{ {\mathbb{P}_i \bigl\{{* \bigm|{{\cal H}
= \infty}}\bigr\}} \bigr\},
$$
so that the Q-process can be considered as a GWP with a non-degenerate trajectory in the remote
future, that is, conditioned on the event $\left\{ {{\cal H} = \infty } \right\}$.
Harris {\cite{Harris51}} established that if $A = 1$ and
$2B: = F''(1) < \infty $, then the distribution of ${Z_n / (Bn)}$ conditioned
on $\left\{ {{\cal H} = \infty } \right\}$ has a
limiting Erlang law. Thus the Q-process $\left\{ {W_n , n \in {\mathbb{N}}_0 } \right\}$ is
a homogeneous Markov chain with initial state $W_0 \mathop = \limits^d Z_0 $ and a
general state space, which will henceforth be denoted by ${\cal E} \subset {\mathbb{N}}$.
The variable $W_n $ denotes the state of this chain at time $n$, with the transition matrix
\begin{equation}
{\cal Q}_{ij} (n) = \mathbb{P}_i \bigl\{ {W_{n + k} = j} \bigr\}
= {{jq^{j - i} } \over {i\beta ^n }}P_{ij} (n),
\quad \parbox{2.7cm}{\textit{for all} {} $i, j \in {\cal E}$,}
\end{equation}
and for any $n,k \in {\mathbb{N}}$.
Introduce the GF
$$
Y_n^{(i)} (s): = \sum\limits_{j \in {\cal E}} {{\cal Q}_{ij}(n)s^j }.
$$
From (1.2) and (4.1) we have
\begin{eqnarray}
Y_n^{(i)} (s) \nonumber
& = & \sum\limits_{j \in {\cal E}} {{{jq^{j - i} } \over {i\beta ^n }}P_{ij} (n)s^j } \\
& = & {{q^{1 - i} s} \over {i\beta ^n }}\sum\limits_{j \in {\cal E}} {P_{ij} (n)(qs)^{j - 1} }
= {{qs} \over {i\beta ^n }}{\partial \over {\partial x}}\left[ {\left( {{{F_n (x)}
\over q}} \right)^i } \right]_{x = qs}. \nonumber
\end{eqnarray}
Therefore
\begin{equation}
Y_n^{(i)} (s) = \left[ {{{F_n (qs)} \over q}} \right]^{i - 1} Y_n(s),
\end{equation}
where GF $Y_n (s): = Y_n^{(1)} (s) = \mathbb{E}\left[{s^{W_n }
\left| {W_0 = 1} \right.} \right]$ has the form of
\begin{equation}
Y_n (s) = s{{F'_n (qs)} \over {\beta ^n }} \raise 1.5pt\hbox{,}
\quad \parbox{2.4cm}{\textit{for all} {} $n \in {\mathbb{N}}$.}
\end{equation}
Since $F_n (s) \to q$, owing to (4.2) and (4.3) we have
${{\cal Q}_{ij} (n) / {\cal Q}_{1j} (n)} \to 1$ as the number
of generations grows to infinity.
Using (4.2) and the iteration of $F(s)$ produces the following functional relation:
\begin{equation}
Y_{n + 1}^{(i)} (s) = {{Y(s)} \over {\widehat F(s)}}Y_n^{(i)}\left( {\widehat F(s)} \right),
\end{equation}
where $\widehat F(s) = {{F(qs)}\mathord{\left/{\vphantom {{F(qs)}q}}\right.
\kern-\nulldelimiterspace} q}$ and $Y(s): = Y_1 (s)$.
We see that the Q-process is completely defined by the GF
$$
Y(s) = s{{F'(qs)} \over \beta }
$$
and its evolution is governed by the positive parameter $\beta $. In fact, if the first
moment $\alpha : = Y'(1)$ is finite, then differentiating (4.3) at $s = 1$ gives
$$
\mathbb{E}_i W_n = \left( {i - 1} \right)\beta ^n + \mathbb{E}W_n
$$
and
\begin{equation}
\mathbb{E}W_n =\left\{\begin{array}{l} 1 + \gamma \left( {1 - \beta ^n } \right) \, \hfill,
\qquad \parbox{2.1cm}{\textit{when} {} $\beta < 1 $,} \\
\\
\left( {\alpha - 1} \right)n + 1 \hfill, \qquad \parbox{2.1cm}{\textit{when} {} $\beta = 1 $,} \\
\end{array} \right.
\end{equation}
where $\gamma := {\left( {\alpha - 1} \right) / \left( {1 - \beta } \right)}$ and
$\alpha = 1 + {{\widehat F^{''} (1)} / \beta } > 1$.
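For $\beta < 1$, formula (4.5) can be confirmed numerically via the chain-rule recursions $F'_{n+1}(s) = F'(F_n(s))F'_n(s)$ and $F''_{n+1}(s) = F''(F_n(s))F'_n(s)^2 + F'(F_n(s))F''_n(s)$ evaluated at $s = 1$. The sketch below uses an arbitrary illustrative subcritical offspring law with $q = 1$ (so that $\widehat F = F$), in which case $\mathbb{E}W_n = Y'_n(1) = 1 + F''_n(1)/\beta^n$.

```python
# Numerical check of (4.5) for beta < 1, with the illustrative subcritical law
# F(s) = 0.5 + 0.2 s + 0.3 s^2: q = 1, beta = F'(1) = 0.8, F''(1) = 0.6,
# alpha = 1 + F''(1)/beta and gamma = (alpha - 1)/(1 - beta).

beta, d2 = 0.8, 0.6                  # F'(1), F''(1); here F_n(1) = 1 for all n
alpha = 1.0 + d2 / beta
gamma = (alpha - 1.0) / (1.0 - beta)

n = 20
f1, f2 = 1.0, 0.0                    # F_0'(1) = 1, F_0''(1) = 0
for _ in range(n):
    # chain-rule recursions for F_{k+1}'(1) and F_{k+1}''(1)
    f1, f2 = beta * f1, d2 * f1 * f1 + beta * f2

EW_n = 1.0 + f2 / beta ** n          # E W_n from the GF Y_n(s) = s F_n'(s)/beta^n
formula = 1.0 + gamma * (1.0 - beta ** n)   # right-hand side of (4.5)
```

The two values agree to machine precision, since (4.5) is an exact identity in this case rather than only an asymptotic one.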
\medskip
\section{Classification and ergodic behavior of states of Q-processes}
The formula (4.5) shows that if $\beta < 1$, then
$$
\mathbb{E}_i W_n \longrightarrow 1 + \gamma , \quad \parbox{2cm}{\textit{as} {} $n \to \infty$}
$$
and, provided that $\beta = 1$
$$
\mathbb{E}_i W_n \sim \left( {\alpha - 1} \right)n, \quad \parbox{2cm}{\textit{as} {} $ n \to \infty $.}
$$
The Q-process has the following properties:
\begin{enumerate}
\item [\textbf{\textsc{(i)}}] if $\beta < 1$, then it is positive-recurrent;
\item [\textbf{\textsc{(ii)}}] if $\beta = 1$, then it is transient.
\end{enumerate}
In the transient case $W_n \to \infty $ with probability $1$; see {\cite[p.59]{ANey}}.
Let us first consider the positive-recurrent case. In this case, according to (2.11), (4.2), (4.3), the
limit $\pi (s): = \lim _{n\to \infty } Y_n^{(i)} (s)$ exists provided that $\alpha < \infty $.
Then, owing to (4.4), the GF $\pi (s) =\sum\nolimits_{j \in {\cal E}} {\pi _j s^j } $
satisfies the invariant equation $\pi (s) \cdot {{F(qs)} / q} = Y(s) \cdot \pi \bigl( {{F(qs)} / q} \bigr)$.
Iterated application of this equation reduces it to
\begin{equation}
\pi (s) = {{Y_n (s)} \over{\widehat{F_n }(s)}}\pi \left( {\widehat{F_n }(s)} \right),
\end{equation}
where $ \widehat{F_n }(s) = {{F_n (qs)} / q}$. The transition function analogue of (5.1) has the form
$\pi _j = \sum\nolimits_{i \in {\cal E}} {\pi _i {\cal Q}_{ij}(n)}$. Letting $n \to \infty $ in (5.1),
it follows that $\pi \left( {\widehat{F_n }(s)} \right) \sim \widehat{F_n }(s)$, which
in turn entails $\sum\nolimits_{j \in {\cal E}} {\pi _j } = 1$ since $\widehat{F_n }(s) \to 1$.
So in this case the set $\left\{ {\pi _j , j \in {\cal E}}\right\}$ is an invariant distribution.
Differentiating (5.1) and taking into account (4.5), we easily compute that
\begin{equation}
\pi '(1) = \sum\nolimits_{j \in {\cal E}} {j\pi _j } = 1 + \gamma,
\end{equation}
where, as before, $\gamma : = {\left( {\alpha - 1} \right) / \left( {1 - \beta } \right)}$.
Furthermore, we note that owing to (2.11) and (4.2)
$$
\pi (s) = s\exp \bigl\{ { - \delta (qs) \cdot {\cal A}(qs)}\bigr\},
$$
where the function ${\cal A}(s)$ is as in (2.8). Since $\pi (1) = 1$ and
${\cal A}(qs) = \mathcal{O}\left( {1 - s} \right)$ as $s \uparrow 1$, it is necessary that
$$
\delta (qs) = \mathcal{O}\left( {(1 - s)^{ - \sigma } } \right)
$$
with $\sigma < 1$. On the other hand, the validity of equality (5.2) is equivalent to
$$
\left. {{{\partial \bigl[ {\delta (qs) \cdot {\cal A}(qs)}\bigr]}
\over {\partial s}}} \right|_{s \uparrow 1} = - \gamma.
$$
Recalling the form of the function ${\cal A}(s)$, the last condition becomes
\begin{equation}
\mathop {\lim }\limits_{s \uparrow 1} \left\{ {\delta'(qs)\left[ {q (1 - s)
- {{\delta (qs)} \over 2}q^2 (1 - s)^2 }\right] - q\delta (qs)} \right\} = - \gamma.
\end{equation}
For the function $ \delta = \delta (s)$ only the case $\sigma = 0$ is possible,
for the following simple reason. Every function of the form $(1 - s)^{ - \sigma }$
with $0 < \sigma < 1$ increases monotonically to infinity as $s \uparrow 1$,
which contradicts the boundedness of the function $\delta = \delta (s)$.
The case $ \sigma < 0$ cannot occur in (5.3), since then the limit on the left-hand side is
equal to zero while $\gamma \ne 0$. In the remaining case $ \sigma = 0$ the limit
is constant and, in view of (5.3),
$$
\delta = {\gamma \over q} \raise 1.5pt\hbox{.}
$$
We have proved the following theorem.
\begin{theorem}
If $\beta < 1$ and $\alpha : = Y'(1) < \infty $, then for $0 \le s < 1$
\begin{equation}
\mathop {\lim }\limits_{n \to \infty } Y_n^{(i)} (s) = \pi (s),
\end{equation}
where $\pi (s)$ is a probability GF of the form
$$
\pi (s) = s\exp \left\{ { - {{\gamma (1 - s)} \over {1 + {\displaystyle \gamma
\over \displaystyle 2}(1 - s)}}} \right\}.
$$
\end{theorem}
The coefficients $ \left\{ {\pi _j , j \in {\cal E}} \right\}$ in the power series expansion
of $\pi (s) = \sum\nolimits_{j \in {\cal E}} {\pi _j s^j}$ form an invariant distribution for the Q-process.
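The explicit form of $\pi(s)$ admits a quick numerical sanity check: one should have $\pi(1) = 1$ and, in accordance with (5.2), $\pi'(1) = 1 + \gamma$. A minimal sketch in Python (the value $\gamma = 4$ and the finite-difference step are our illustrative choices, not from the text):

```python
import math

gamma = 4.0  # illustrative value of the parameter gamma (assumption)

def pi(s):
    """Limit GF of Theorem 11: pi(s) = s*exp{-gamma(1-s) / (1 + (gamma/2)(1-s))}."""
    return s * math.exp(-gamma * (1 - s) / (1 + 0.5 * gamma * (1 - s)))

# pi is a probability GF, normalised at s = 1
assert abs(pi(1.0) - 1.0) < 1e-12

# mean of the invariant distribution: pi'(1) = 1 + gamma, cf. (5.2)
h = 1e-6
assert abs((pi(1.0) - pi(1.0 - h)) / h - (1 + gamma)) < 1e-3
```

The same check passes for any $\gamma > 0$; note also that $\lim_{s \downarrow 0}\pi(s)/s = e^{-2\gamma/(2+\gamma)}$, in agreement with (5.7).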
In the transient case the following theorem holds.
\begin{theorem}
If $\beta = 1$ and $\alpha : = Y'(1) < \infty $, then for all $0 \le s < 1$
\begin{equation}
n^2 Y_n^{(i)} (s) = \mu (s)\left( {1 + r_n (s)} \right),
\quad \parbox{2cm}{\textit{as} {} $ n \to \infty $,}
\end{equation}
where $ {r_n (s)} = o(1)$ for $0 \le s < 1$, and the GF
$\mu (s) = \sum\nolimits_{j \in {\cal E}} {\mu _j s^j }$ has the form
$$
\mu (s) = {{2s\hbar (s)}
\over {(\alpha - 1)\bigl( {F(s) - s}\bigr)}} \raise 1.5pt\hbox{,}
$$
with $ Y(s) \le s\hbar (s) \le s$. The nonnegative numbers
$\left\{ {\mu _j, j \in {\cal E}} \right\}$ satisfy the invariance equation
\begin{equation}
\mu _j = \sum\nolimits_{i \in {\cal E}} {\mu _i {\cal Q}_{ij}(n)}.
\end{equation}
Moreover $\sum\nolimits_{j \in {\cal E}} {\mu_j } = \infty $.
\end{theorem}
\begin{proof}
The convergence (5.5) follows immediately by combining (2.12), (4.2) and (4.3).
Taking the limit in (4.4) leads to the equation $\mu (s)F_n (s) = Y_n (s)\mu \left( {F_n (s)} \right)$,
which is equivalent to (5.6) in the context of transition probabilities. On the other hand, it
follows from (5.5) that $\mu \left( {F_n (s)} \right) \sim n^2 F_n (s)$ as $n \to \infty $.
Hence $\sum\nolimits_{j \in {\cal E}} {\mu _j } = \infty $.
\end{proof}
As $\lim _{s \downarrow 0} \left[ {Y_n^{(i)} (s)/s} \right] = {\cal Q}_{i1} (n)$,
the following two corollaries follow from (5.4) and (5.5).
\begin{corollary}
If $\beta < 1$ and $\alpha : = Y'(1) < \infty $, then
\begin{equation}
{\cal Q}_{i1} (n) = e^{ - {{2\gamma } \mathord{\left/ {\vphantom {{2\gamma }
{(2 + \gamma )}}} \right. \kern-\nulldelimiterspace} {(2 + \gamma )}}} \left( {1 + o(1)} \right),
\quad \parbox{2cm}{\textit{as} {} $ n \to \infty $.}
\end{equation}
\end{corollary}
\begin{corollary}
If $\beta = 1$ and $\alpha : = Y'(1) < \infty $, then
\begin{equation}
n^2 {\cal Q}_{i1} (n) = {{2\widetilde{\cal Q}_1 } \over{(\alpha - 1)p_0 }}\left( {1 + o(1)} \right),
\quad \parbox{2cm}{\textit{as} {} $ n \to \infty $,}
\end{equation}
where ${\cal Q}_{11}(1) \le \widetilde{\cal Q}_1 \le 1$.
\end{corollary}
\begin{theorem}
Let $\beta = 1$ and $\alpha : = Y'(1) < \infty $. Then
\begin{equation}
\mathop {\lim }\limits_{n \to \infty } {1 \over {n^2 }}\left[ {\mu_1 + \mu _2
+ \cdots + \mu _n } \right] = {2 \over {\left({\alpha - 1} \right)^2 }} \raise 1.5pt\hbox{.}
\end{equation}
\end{theorem}
\begin{proof}
By the Taylor formula, $F(s) - s \sim B(1 - s)^2 $ as $s \uparrow 1$.
Therefore, since $ \lim _{s \uparrow 1} \hbar (s) = 1$, for the GF $\mu (s)$ we have
\begin{equation}
\mu (s) \sim {4 \over {(\alpha - 1)^2 }}{1 \over {\left( {1 - s}\right)^2 }} \raise 1.5pt\hbox{,}
\quad \parbox{1.6cm}{\textit{as} {} $ s \uparrow 1 $.}
\end{equation}
According to the Hardy--Littlewood Tauberian theorem, each of the relations (5.9) and (5.10) entails the other.
\end{proof}
Another invariant measure for the Q-process is given by the numbers
\begin{equation}
\upsilon _j := \mathop {\lim }\limits_{n \to \infty } {{{\cal Q}_{ij} (n)}
\over {{\cal Q}_{i1} (n)}} \raise 1.5pt\hbox{,}
\end{equation}
which do not depend on $i \in {\cal E}$. Indeed, in the same way as in the GWP case (see Lemma 3),
it is easy to see that this limit exists. Owing to the Chapman--Kolmogorov equation
$$
{{{\cal Q}_{ij} (n + 1)} \over {{\cal Q}_{i1} (n + 1)}}{{{\cal Q}_{i1} (n + 1)} \over {{\cal Q}_{i1} (n)}}
= \sum\limits_{k \in {\cal E}} {{{{\cal Q}_{ik} (n)} \over {{\cal Q}_{i1} (n)}}{\cal Q}_{kj} (1)}.
$$
The last equality and (5.11), together with the fact that
${\cal Q}_{i1} (n + 1)/{\cal Q}_{i1} (n) \to 1$, give the invariance relation
\begin{equation}
\upsilon _j = \sum\nolimits_{i \in {\cal E}} {\upsilon _i {\cal Q}_{ij} (1)}.
\end{equation}
In the GF context, the equality (5.12) is equivalent to the Schr\"oder-type functional equation
$$
{\cal U}\left( {\widehat F(s)} \right) = {{\widehat F(s)} \over {Y(s)}}{\cal U}(s),
$$
where $ \widehat F(s) = F(qs)/q$ and
$$
{\cal U}(s) = \sum\nolimits_{j \in {\cal E}} {\upsilon _j s^j }
$$
with $\upsilon _1 = 1$.
Note that under the conditions of Theorem 11
$$
{\cal U}(s) = \pi (s)e^{{{2\gamma } \mathord{\left/ {\vphantom {{2\gamma }
{(2 + \gamma )}}} \right. \kern-\nulldelimiterspace} {(2 + \gamma )}}}.
$$
Hence, in view of (5.11), we can generalize the statement (5.7):
$$
{\cal Q}_{ij} (n) \longrightarrow \pi _j = \upsilon _j e^{ - {{2\gamma }\mathord{\left/
{\vphantom {{2\gamma }{(2 + \gamma )}}}\right. \kern-\nulldelimiterspace}{(2 + \gamma )}}},
\quad \parbox{2cm}{\textit{as} {} $ n \to \infty $,}
$$
for all $i,j \in {\cal E}$.
Similarly, for $\beta = 1$ one finds that
$$
n^2 {\cal Q}_{ij} (n) \longrightarrow \mu _j = \upsilon _j {{2\widetilde{\cal Q}_1 } \over
{(\alpha - 1)p_0}} \raise 1.5pt\hbox{,} \quad \parbox{2cm}{\textit{as} {} $ n \to \infty $,}
$$
where $\widetilde{\cal Q}_1 $ is defined in (5.8).
Provided that $Y''(1) < \infty $, the rate of convergence in Theorem 12 can be estimated.
It is proved in {\cite{NMuh}} that if $C: = F'''(1) < \infty $, then
\begin{equation}
R_n (s) = {1 \over {b_n (s)}} + \Delta \cdot {{\ln b_n (s) + K(s)}
\over {\bigl( {b_n (s)} \bigr)^2 }}\bigl( {1 + o(1)} \bigr),
\end{equation}
as $n \to \infty $, where
$$
b_n (s) = {{F''(1)} \over 2}n + {1\over {1 - s}}
\qquad \mbox{\textit{and}} \qquad
\Delta = {C \over {3F''(1)}} - {{F''(1)} \over 2} \raise 1.5pt\hbox{,}
$$
and $K(s)$ is a bounded function depending on the form of $F(s)$.
Since the finiteness of $C$ is equivalent to the condition $Y''(1) < \infty $,
combining the relations (2.12), (4.2), (4.3) and (5.13) we obtain
the following theorem for the case $\beta = 1$.
\begin{theorem}
If, in addition to the conditions of Theorem 12, we suppose that $Y''(1)< \infty $, then
the error term in the asymptotic formula (5.5) admits the following estimate:
$$
r_n (s) = \widetilde\Delta \cdot {{\ln b_n (s)} \over {b_n(s)}}\left( {1 + o(1)}
\right), \quad \parbox{2cm}{\textit{as} {} $ n \to \infty $,}
$$
where $\widetilde\Delta $ is a constant depending on the moment $Y''(1)$ and
$$
b_n (s) = {{(\alpha - 1)n} \over 2} + {1 \over {1 - s}} \raise 1.5pt\hbox{.}
$$
\end{theorem}
\begin{corollary}
Under the conditions of Theorem 14 the following representation holds:
$$
n^2 {\cal Q}_{ij} (n) = \mu _j \left( {1 + {\Delta \over {\alpha - 1}} \cdot {{\ln n}
\over n}\left( {1 + o(1)} \right)}\right), \quad \parbox{2cm}{\textit{as} {} $ n \to \infty $.}
$$
\end{corollary}
\medskip
\section{ Joint distribution law of Q-process and its total state}
Consider the Q-process $\left\{ {W_n , n \in {\mathbb{N}}_0 }\right\}$ with
structural parameter $\beta = F'(q)$. Define the random variable
$$
S_n = W_0 + W_1 + \, \cdots \, + W_{n - 1},
$$
the total state of the Q-process up to time $n$. Let
$$
J_n (s;x) = \sum\limits_{j \in {\cal E}} {\sum\limits_{l \in {\mathbb{N}}}
{\mathbb{P}\bigl\{ {W_n = j, S_n = l} \bigr\}s^j x^l } }
$$
be the joint GF of $W_n $ and $S_n $ on the set
$$
\mathbb{K} = \left\{ {(s;x) \in {\mathbb{R}}^2: \; |s| \le 1,\; |x|
\le 1, \; \sqrt {(s - 1)^2 + (x - 1)^2 } \ge r > 0} \right\}.
$$
\begin{lemma}
For all $(s;x) \in \mathbb{K}$ and any $n \in {\mathbb{N}}$ a recursive equation
\begin{equation}
J_{n + 1} (s;x) = {{Y(s)} \over {\widehat F(s)}}J_n \left({x\widehat F(s);x} \right)
\end{equation}
holds, where $Y(s) = sF'(qs)/\beta$ and $\widehat F(s) = F(qs)/q$.
\end{lemma}
\begin{proof}
Consider the cumulative process $\bigl\{ {W_n , S_n }\bigr\}$, which is evidently a
bivariate Markov chain with transition functions
$$
\mathbb{P}\bigl\{{W_{n + 1} = j,\,S_{n + 1} = l\bigm| {W_n = i,\,S_n = k}} \bigr\}
= \mathbb{P}_i \bigl\{ {W_1 = j,\,S_1 = l} \bigr\}\delta _{l, i + k},
$$
where $\delta _{ij} $ is the Kronecker delta. Hence we have
\begin{eqnarray}
\mathbb{E}_i \Bigl[ {s^{W_{n + 1} } x^{S_{n + 1} } \bigm| {S_n = k}} \Bigr] \nonumber
& = & \sum\limits_{j \in {\cal E}} {\sum\limits_{l \in {\mathbb{N}}} {\mathbb{P}_i \bigl\{ {W_1
= j,\, S_1 = l} \bigr\}\delta _{l,i + k} s^j x^l } } \\
& = & \sum\limits_{j \in {\cal E}} {\mathbb{P}_i \bigl\{ {W_1 = j} \bigr\}s^j x^{i + k}}
= Y^{(i)} (s) \cdot x^{i + k}. \nonumber
\end{eqnarray}
Using this result and the law of total probability, we find that
\begin{eqnarray}
J_{n + 1} (s;x) \nonumber
& = & \mathbb{E}\Bigr[ {\mathbb{E}\bigl[ {s^{W_{n + 1}} x^{S_{n + 1}} \bigm| {W_n , S_n }} \bigr]} \Bigr]
= \mathbb{E}\left[ {Y^{(W_n )} (s) \cdot x^{W_n + S_n } } \right] \\ \nonumber
& = & \mathbb{E}\left[ {\left({\widehat F(s)} \right)^{W_n -1} \cdot Y(s) \cdot x^{W_n + S_n }} \right] \\ \nonumber
& = & {{Y(s)}\over {\widehat F(s)}}\cdot \mathbb{E}\left[{\left({x\widehat F(s)}\right)^{W_n} \cdot x^{S_n }} \right].
\end{eqnarray}
Formula (4.2) is used in the last step. The last equation reduces to (6.1).
\end{proof}
Now by means of relation (6.1) we can obtain an explicit expression for the
GF $ J_n (s;x)$. Indeed, applying it successively, taking into account (4.4), and
after some transformations, we obtain
\begin{equation}
J_n (s;x) = s\prod\limits_{k = 0}^{n - 1} {\left[ {{{x\widehat F^\prime
\left( {H_k (s;x)} \right)} \over \beta }} \right]}
= {s \over {\beta ^n }}{{\partial H_n (s;x)} \over {\partial s}} \raise 1.2pt\hbox{,}
\end{equation}
where the sequence of functions $\left\{ {H_k (s;x)}\right\}$ is defined
for $(s;x) \in \mathbb{K}$ by the following recurrence relation:
\begin{eqnarray}
H_0 (s;x) & = & s, \nonumber\\
H_{n + 1}(s;x) & = & x\widehat F\bigl( {H_n (s;x)} \bigr).
\end{eqnarray}
Since
$$
\left. {{{\partial J_n (s;x)} \over {\partial x}}} \right|_{(s;x)= (1;1)} = \mathbb{E}S_n,
$$
then, provided that $\alpha : = Y'(1) < \infty $, it follows from (6.2) and (6.3) that
\begin{equation}
\mathbb{E}S_n =\left\{\begin{array}{l} (1 + \gamma )n - \gamma {\displaystyle{1 - \beta ^n }
\over \displaystyle{1 - \beta }} \hfill \raise 1pt\hbox{,}
\qquad \parbox{2.2cm}{\textit{when} {} $\beta < 1 $,} \\
\\
{\displaystyle{\alpha - 1} \over \displaystyle 2}n(n - 1) + n \hfill,
\qquad \parbox{2.2cm}{\textit{when} {} $\beta = 1 $,} \\
\end{array} \right.
\end{equation}
where, as before, $\gamma : = \left( {\alpha - 1} \right)/\left({1 - \beta} \right)$.
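The second line of (6.4) can be verified numerically from the recurrence (6.3) and the representation (6.2): for $\beta = 1$ one has $\mathbb{E}S_n = \partial^2 H_n/\partial s\,\partial x$ at $(s;x) = (1;1)$. The sketch below is our illustration (not from the text); it takes the critical binary-splitting law $F(s) = (1+s^2)/2$, for which $q = 1$, $\beta = 1$, $\alpha = Y'(1) = 2$ and hence $\mathbb{E}S_n = n(n+1)/2$, and evaluates the mixed derivative exactly with truncated bivariate dual numbers:

```python
class Dual2:
    """Truncated bivariate dual number a + b*ds + c*dx + d*ds*dx."""
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d
    def __add__(self, o):
        return Dual2(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)
    def __mul__(self, o):
        # keep terms up to ds*dx; (ds)^2 and (dx)^2 are truncated away
        return Dual2(self.a * o.a,
                     self.a * o.b + self.b * o.a,
                     self.a * o.c + self.c * o.a,
                     self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)

def F_hat(t):
    # critical binary splitting F(s) = (1 + s^2)/2; here q = 1, so F_hat = F
    half = Dual2(0.5)
    return half + half * t * t

def ES(n):
    """E S_n as the ds*dx coefficient of H_n(1+ds; 1+dx), cf. (6.2)-(6.3)."""
    x = Dual2(1.0, 0.0, 1.0)   # x = 1 + dx
    H = Dual2(1.0, 1.0)        # H_0 = s = 1 + ds
    for _ in range(n):
        H = x * F_hat(H)       # H_{k+1} = x * F_hat(H_k)
    return H.d

for n in (1, 2, 5, 10):
    assert abs(ES(n) - n * (n + 1) / 2) < 1e-9
```

For another critical offspring law the expected value changes only through $\alpha$, cf. (6.4).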
\begin{remark}
It is known from classical theory that if the evolution law of the simple GWP
$\left\{ {\widehat{Z_n }, n \in {\mathbb{N}}_0 } \right\}$ is generated by the GF
$\widehat F(s) = F(qs)/q$,
then the joint GF of the distribution of $\left\{ {\widehat{Z_n }, V_n} \right\}$,
where $V_n = \sum\nolimits_{k = 0}^{n - 1}{\widehat{Z_k }} $ is the total number of individuals
participating up to time $n$, satisfies the recurrence relation (6.3); see e.g., {\cite[p.126]{Kolchin}}.
Thus $H_n (s;x)$, $(s;x) \in \mathbb{K}$, is a two-dimensional GF for
all $n \in {\mathbb{N}}$ and has all the properties of $\mathbb{E}\left[ {s^{\widehat Z_n } x^{V_n }} \right]$.
\end{remark}
By virtue of what was said in Remark 6, in studying the function $H_k(s;x)$ we will
use the properties of the GF $\mathbb{E}\left[ {s^{\widehat Z_n } x^{V_n } } \right]$.
Since $\widehat F^\prime (1) = \beta \le 1$, the process
$\left\{ {\widehat{Z_n }, n \in {\mathbb{N}}_0 } \right\}$ is a mortal GWP.
Hence there is an integer-valued random variable $V = \lim _{n \to \infty } V_n $, the total number
of individuals participating in the process over the whole of its evolution.
Hence there is a limit
$$
h(x): = \mathbb{E}x^V = \lim _{n \to \infty } \mathbb{E}x^{V_n } = \lim _{n \to \infty } H_n (1;x)
$$
and according to (6.3) it satisfies the equation
\begin{equation}
h(x) = x\widehat F\bigl( {h(x)} \bigr).
\end{equation}
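Equation (6.5) is the classical fixed-point equation for the GF of the total progeny of a GWP, and for a concrete offspring law $h(x)$ is easily computed by iterating $h \mapsto x\widehat F(h)$. The sketch below is our illustration (subcritical geometric law with $p = 0.6$, so $q = 1$ and $\beta = 2/3$, not from the text); it also checks the classical mean total progeny $h'(1) = 1/(1-\beta)$ by a finite difference:

```python
p = 0.6                      # illustrative geometric offspring law (assumption)
beta = (1 - p) / p           # offspring mean F'(1) = 2/3; subcritical, so q = 1

def F_hat(s):
    # geometric offspring GF; since q = 1, F_hat coincides with F
    return p / (1 - (1 - p) * s)

def h(x, tol=1e-14):
    """Total-progeny GF: the fixed point h(x) = x * F_hat(h(x)) of (6.5)."""
    t = 0.0
    while True:
        t_new = x * F_hat(t)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# V is a.s. finite: h(1) = 1
assert abs(h(1.0) - 1.0) < 1e-10

# mean total progeny: h'(1) = 1/(1 - beta) = 3 for this example
eps = 1e-6
assert abs((h(1.0) - h(1.0 - eps)) / eps - 1 / (1 - beta)) < 1e-3
```

The iteration converges monotonically from $t = 0$ for $0 \le x \le 1$, at geometric rate roughly $\beta$ near $x = 1$.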
Provided that the second moment $Y''(1)$ is finite, the following asymptotics for
the variances can be found from (6.2) by differentiation:
\begin{equation}
\textsf{Var} W_n \sim \left\{ \begin{array}{l} \mathcal{O}(1) \hfill ,
\qquad \parbox{2.2cm}{\textit{when} {} $\beta < 1 $,} \\ \nonumber
\\
{\displaystyle{\left( {\alpha - 1} \right)^2} \over \displaystyle 2}n^2 \, \hfill,
\qquad \parbox{2.2cm}{\textit{when} {} $\beta = 1 $,} \\
\end{array} \right.
\end{equation}
and
\begin{equation}
\textsf{Var} S_n \sim \left\{ \begin{array}{l} \mathcal{O}(n) \hfill ,
\qquad \parbox{2.2cm}{\textit{when} {} $\beta < 1 $,} \\ \nonumber
\\
{\displaystyle{\left( {\alpha - 1} \right)^2} \over \displaystyle 12}n^4 \, \hfill,
\qquad \parbox{2.2cm}{\textit{when} {} $\beta = 1 $,} \\
\end{array} \right.
\end{equation}
as $n \to \infty $. In turn, it is a matter of straightforward computation to verify that
\begin{equation}
\textsf{cov} \bigl( {W_n , S_n } \bigr) \sim \left\{ \begin{array}{l} \mathcal{O}(1) \hfill ,
\qquad \parbox{2.2cm}{\textit{when} {} $\beta < 1 $,} \\ \nonumber
\\
{\displaystyle{\left( {\alpha - 1} \right)^2} \over \displaystyle 6}n^3 \, \hfill,
\qquad \parbox{2.2cm}{\textit{when} {} $\beta = 1 $.} \\
\end{array} \right.
\end{equation}
Hence letting $\rho _n $ denote the correlation coefficient of $W_n $ and $S_n $, we have
\begin{equation}
\mathop {\lim }\limits_{n \to \infty } \rho _n = \left\{ \begin{array}{l} 0 \hfill ,
\qquad \parbox{2.2cm}{\textit{when} {} $\beta < 1 $,} \\ \nonumber
\\
{\displaystyle{\sqrt 6} \over \displaystyle 3} \, \hfill \hfill \raise 1pt\hbox{,}
\qquad \parbox{2.2cm}{\textit{when} {} $\beta = 1 $.} \\
\end{array} \right.
\end{equation}
The last statement indicates that in the case $\beta < 1$ the variables $W_n $ and $S_n $
are asymptotically uncorrelated. Conversely, for the case $\beta = 1$ the
following ``joint theorem'' holds, which was proved in the paper {\cite{Imomov14b}}.
\begin{theorem}
Let $\beta = 1$ and $\alpha = Y'(1) < \infty $. Then the two-dimensional process
$$
\left( {{{W_n } \over {{\mathbb{E}}W_n}};\, {{S_n } \over {{\mathbb{E}}S_n }}}\right)
$$
weakly converges to the two-dimensional random vector
$\left( {\textbf{\textsf{w}};\textbf{\textsf{s}}}\right)$ having the Laplace transform
$$
{\mathbb{E}}\left[ {e^{ - \lambda \textbf{\textsf{w}} - \theta \textbf{\textsf{s}}}} \right]
= \left[{{\rm{ch}}\sqrt \theta + {\lambda \over 2}{{{\rm{sh}}\sqrt \theta }
\over {\sqrt \theta }}} \right]^{ - 2} , \;\; \lambda , \theta \in {\mathbb{R}}_+ ,
$$
where ${\rm{ch}}\, x = \bigl( {e^x + e^{ - x} } \bigr)/2$ and
$ {\rm{sh}}\, x = \bigl( {e^x - e^{ - x} } \bigr)/2$.
\end{theorem}
Setting $\lambda = 0$ in Theorem 15
produces the following limit theorem for $S_n $.
\begin{corollary}
Let $\beta = 1$ and $\alpha = Y'(1) < \infty $. Then for $0 < u < \infty $
$$
\mathop {\lim }\limits_{n \to \infty } {\mathbb{P}}\left\{ {{{S_n }
\over {{\mathbb{E}}S_n }} \le u} \right\} = F(u),
$$
where the limit function $F(u)$ has the Laplace transform
$$
\int_0^{ + \infty } {e^{ - \theta u} dF(u)}
= {\rm{sech}}^2 \sqrt \theta \,,\;\; \theta \in {\mathbb{R}}_ + .
$$
\end{corollary}
Letting $\theta = 0$ in Theorem 15, we obtain the following assertion, which
was proved in the monograph {\cite[pp.59--60]{ANey}} by applying Helly's theorem.
\begin{corollary}
Let $\beta = 1$ and $\alpha = Y'(1) < \infty $. Then for $0 < u < \infty $
\begin{equation}
\mathop {\lim }\limits_{n \to \infty } {\mathbb{P}}\left\{ {{{W_n }
\over {{\mathbb{E}}W_n }} \le u} \right\} = 1 - e^{ - 2u} - 2ue^{-2u}.
\end{equation}
\end{corollary}
Indeed, denoting $\psi _n (\lambda ) = \Psi _n (\lambda ;0)$ we have
$$
\psi _n (\lambda ) \longrightarrow {1 \over {\left[ {1 + {\displaystyle \lambda
\over \displaystyle 2}}\right]^2 }} \raise 1pt\hbox{,}
\quad \parbox{2cm}{\textit{as} {} $ n \to \infty $.}
$$
Here we have used that $\lim _{\theta \downarrow 0} {{\rm{sh}}\sqrt \theta }/{\sqrt \theta } = 1$.
The resulting Laplace transform corresponds to the distribution on the right-hand side
of (6.6), which arises as the convolution of two exponential laws with identical density.
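The convolution just mentioned can be checked directly: the limit law in (6.6) has density $4ue^{-2u}$ (the sum of two independent rate-$2$ exponentials), whose Laplace transform is exactly $(1+\lambda/2)^{-2}$. A minimal numerical cross-check (Simpson's rule on a truncated range; the truncation point and step count are our choices):

```python
import math

def density(u):
    """Density of the limit law in (6.6): d/du [1 - e^{-2u} - 2u e^{-2u}] = 4u e^{-2u}."""
    return 4.0 * u * math.exp(-2.0 * u)

def laplace(lam, upper=40.0, steps=200_000):
    """Laplace transform of the density, integrated numerically by Simpson's rule."""
    f = lambda u: density(u) * math.exp(-lam * u)
    h = upper / steps                       # steps must be even for Simpson's rule
    total = f(0.0) + f(upper)
    for k in range(1, steps):
        total += (4.0 if k % 2 else 2.0) * f(k * h)
    return total * h / 3.0

for lam in (0.0, 0.5, 1.0, 2.0):
    assert abs(laplace(lam) - 1.0 / (1.0 + lam / 2.0) ** 2) < 1e-8
```

At $\lambda = 0$ the integral returns $1$, confirming that $4ue^{-2u}$ is a probability density.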
\medskip
\section{ Asymptotic properties of $S_n $ in case of $\beta < 1$}
In this section we investigate the asymptotic properties of the distribution of
$S_n $ in the case $\beta < 1$. Consider the
GF $T_n (x): = \mathbb{E}x^{S_n } = J_n (1;x)$. Owing to (6.2) it has the form
\begin{equation}
T_n (x) = \prod\limits_{k = 0}^{n - 1} {u_k (x)},
\end{equation}
where
$$
u_n (x) = {{x\widehat F^\prime \left( {h_n (x)} \right)} \over \beta } \raise 1pt\hbox{,}
$$
and $ \widehat F(s) = F(qs)/q$,
$h_n (x) = \mathbb{E}x^{V_n } $, $V_n = \sum \nolimits_{k = 0}^{n - 1} {\widehat{Z_k }} $.
In accordance with (6.3), $h_{n + 1} (x) = x\widehat F\bigl( {h_n(x)} \bigr)$.
Denoting
$$
R_n (x): = h(x) - h_n (x), \;n \in {\mathbb{N}}_0,
$$
for $x \in \mathbb{K}$ we have
\begin{eqnarray}
R_n (x) \nonumber
& = & x\left[ {\widehat F\left( {h(x)} \right) - \widehat F\left( {h_{n - 1} (x)} \right)} \right] \\
& = & x\mathbb{E}\bigl[ {h(x) - h_{n - 1} (x)} \bigr]^{\widehat Z_n } \le \beta R_{n - 1} (x), \nonumber
\end{eqnarray}
since $\left| {h(x)} \right| \le 1$ and $\left| {h_n (x)} \right| \le 1$. Therefore
$$
\bigl| {R_n (x)} \bigr| \le \beta ^{n - k} \bigl| {R_k (x)}\bigr|,
$$
for each $n \in {\mathbb{N}}$ and $k = 0,1, \, \ldots \,, n$.
Consecutive application of the last inequality gives
\begin{equation}
R_n (x) = \mathcal{O}\left( {\beta ^n } \right) \longrightarrow 0,
\end{equation}
as $n \to \infty $, uniformly in $x \in \mathbb{K}$. In what follows, wherever the
function $R_n (x)$ is used, we work on the set $\mathbb{K}$, on
which this function is certainly nonzero.
By Taylor expansion, taking into account (7.2) and (6.5), we have
\begin{equation}
R_{n + 1} (x) = x\widehat F^\prime \bigl( {h(x)} \bigr)R_n (x)
- x{{\widehat F^{\prime \prime} \bigl( {h(x)} \bigr)
+ \eta _n (x)}\over 2}R_n^2 (x),
\end{equation}
where $\left| {\eta _n (x)} \right| \to 0$ as $n \to \infty$ uniformly
with respect to $x \in \mathbb{K}$. Since $R_n (x) \to 0$, formula (7.3) implies
$$
R_n (x) = {{R_{n + 1}(x)} \over {x\widehat F^\prime \bigl({h(x)} \bigr)}}
\bigl( {1 + o(1)} \bigr).
$$
Owing to the last equality, we transform formula (7.3) into
$$
R_{n + 1} (x) = x\widehat F^\prime \bigl( {h(x)} \bigr)R_n (x)
- \left[ {{{\widehat F^{ \prime \prime } \bigl( {h(x)} \bigr)}
\over {2\widehat F^\prime \bigl( {h(x)} \bigr)}} + \varepsilon_n (x)}
\right]R_n (x)R_{n + 1} (x)
$$
and hence
\begin{equation}
{{u(x)} \over {R_{n + 1} (x)}} = {1 \over {R_n (x)}} + v(x) + \varepsilon _n (x),
\end{equation}
where
$$
u(x) = x\widehat F^\prime \bigl( {h(x)} \bigr)
\qquad \mbox{\textit{and}} \qquad
v(x) = {{\widehat F^ {\prime \prime } \bigl( {h(x)} \bigr)}
\over {2\widehat F^\prime \bigl( {h(x)} \bigr)}} \raise 1pt\hbox{,}
$$
and $ \left| {\varepsilon _n (x)} \right| \le \varepsilon _n \to 0$
as $n \to \infty $ for all $x \in \mathbb{K}$. Repeated use of (7.4)
leads to the following representation for $R_n (x)$:
\begin{equation}
{{u^n (x)} \over {R_n (x)}} = {1 \over {h(x) - 1}} +{{v(x)\cdot \bigl[ {1 - u^n (x)} \bigr]}
\over {1 - u(x)}} + \sum\limits_{k = 1}^n {\varepsilon _k (x)u^k (x)}.
\end{equation}
Note that formula (7.5) was written out in the monograph {\cite[p.130]{Kolchin}} for the critical case.
Expansions of the functions $h(x)$ and $u(x)$ in a neighborhood
of $x = 1$ will be useful for our further purposes.
\begin{lemma}
Let $\beta < 1$. If $b: = \widehat F^ {\prime \prime }(1) < \infty$,
then for $h(x) = \mathbb{E}x^V $ the following relation holds:
\begin{equation}
1 - h(x) \sim {1 \over {1 - \beta }}\,(1 - x)
- {{2\beta (1 -\beta ) + b} \over {2(1 - \beta )^3 }}\,(1 - x)^2,
\end{equation}
as $x \uparrow 1$.
\end{lemma}
\begin{proof}
We write down the Taylor expansion as $x \uparrow 1$:
\begin{equation}
h(x) = 1 + h'(1)\bigl(x - 1\bigr)
+ {{h''(1)} \over 2}\bigl(x - 1\bigr)^2 + o\bigl((x - 1)^2\bigr).
\end{equation}
In turn by direct differentiation from (6.5) we have
$$
h'(x) = {{\widehat F\bigl( {h(x)} \bigr)} \over {1 - u(x)}} \raise 1pt\hbox{,}
$$
and
$$
h''(x) = {{2\widehat F^\prime \bigl( {h(x)} \bigr)h'(x) + x\widehat F^{\prime \prime}
\bigl( {h(x)} \bigr)\bigl[ {h'(x)}\bigr]^2 } \over {1 - u(x)}} \raise 1pt\hbox{.}
$$
Letting $x \uparrow 1$ in the last equalities entails $h'(1) = 1/(1 - \beta )$ and
$$
h''(1) ={ {2\beta (1 - \beta ) + b} \over {(1 - \beta )^3 }}
$$
which together with (7.7) proves (7.6).
\end{proof}
We recall that the existence of the second moment $b: = \widehat F^{\prime \prime}(1)$ is
equivalent to the existence of $\alpha = Y'(1)$, and $\gamma = b/\bigl(\beta (1 - \beta )\bigr)$.
We use this in the following assertion.
\begin{lemma}
Let $\beta < 1$. If $b: = \widehat F^ {\prime \prime }(1) < \infty$,
then as $x \uparrow 1$ the following relation holds:
\begin{equation}
u(x) \sim \beta x\left[ {1 - \gamma \,(1 - x)} \right]
+ {{2\beta (1 - \beta ) + b} \over {(1 - \beta)^3 }}bx\,(1 - x)^2.
\end{equation}
\end{lemma}
\begin{proof} The relation (7.8) follows from Taylor power series expansion of
function $\widehat F^\prime \left( {h(x)} \right)$, taking into account therein Lemma 5.
\end{proof}
The following Lemma 7 is a direct consequence of relation (7.6),
and Lemma 8 follows from (7.8) and Lemma 7.
Therein we use the fact that $b = \beta(\alpha - 1)$.
\begin{lemma}
Let $\beta < 1$ and $\alpha < \infty $. Then as $\theta \to 0$
\begin{equation}
h\left({e^\theta } \right) - 1 \sim {1 \over {1 - \beta }}\theta
+ {{\beta (2 + \gamma )} \over {(1 - \beta )^2 }}\,\theta ^2.
\end{equation}
\end{lemma}
\begin{lemma}
If $\beta < 1$ and $\alpha < \infty $, then as $\theta \to 0$
\begin{equation}
u\left( {e^\theta } \right) \sim \beta \left[ {1 + (1 + \gamma)\theta } \right]
+ \beta \gamma {{1 + \beta (1 + \gamma )} \over {1 - \beta }}\,\theta ^2.
\end{equation}
\end{lemma}
The following assertion follows from (7.5), (7.9) and (7.10).
\begin{lemma}
Let $\beta < 1$ and $\alpha < \infty $. Then the following relation holds:
\begin{equation}
{{R_n \left( {e^\theta } \right)} \over {u^n \left( {e^\theta }\right)}}
\sim {1 \over {1 - \beta }}\theta + {{\beta (2 + \gamma)} \over {(1 - \beta )^2 }}\,\theta ^2,
\end{equation}
as $\theta \to 0$ and for each fixed $n \in {\mathbb{N}}$.
\end{lemma}
We shall also require the following lemma.
\begin{lemma}
Let $\beta < 1$ and $\alpha < \infty $. Then the following relation holds:
\begin{equation}
\ln \prod\limits_{k = 0}^{n - 1} {u_k \left( {e^\theta } \right)} \sim
- \left( {1 - {{u\left( {e^\theta } \right)} \over \beta }} \right)n
- {{\beta \gamma (2 + \gamma )} \over {1 - \beta}}\,\theta ^3
\sum\limits_{k = 0}^{n - 1} {u^k \left( {e^\theta }\right)},
\end{equation}
as $\theta \to 0$ and for each fixed $n \in {\mathbb{N}}$.
\end{lemma}
\begin{proof}
Using the inequality $\ln (1 - y) \ge - y - y^2/\left( {1 - y} \right)$,
which holds for $0 \le y < 1$, we have
\begin{eqnarray}
\ln \prod\limits_{k = 0}^{n - 1} {u_k \left( {e^\theta } \right)} \nonumber
& = & \sum\limits_{k = 0}^{n - 1} {\ln \left\{ {1 - \left[ {1 - u_k \left( {e^\theta } \right)} \right]} \right\}} \\
& = & \sum\limits_{k = 0}^{n - 1} {\left[ { u_k \left( {e^\theta } \right)} - 1 \right]} + \rho _n^{(1)} (\theta )
= :I_n (\theta ) + \rho _n^{(1)} (\theta ),
\end{eqnarray}
where
\begin{equation}
I_n (\theta ) = - \sum\limits_{k = 0}^{n - 1} {\left[ {1 - u_k\left( {e^\theta } \right)} \right]},
\end{equation}
and
$$
0 \ge \rho _n^{(1)} (\theta ) \ge - \sum\limits_{k = 0}^{n - 1} {{{\left[ {1 - u_k \left( {e^\theta }
\right)} \right]^2 } \over{u_k \left( {e^\theta } \right)}}} \raise 1pt\hbox{.}
$$
It is easy to verify that the functional sequence $\left\{{h_k (x)} \right\}$ is
monotone in $k$. Then, according to the properties of GFs, the function $u_k \left({e^\theta } \right)$ is
also monotone in $k$ for each fixed $\theta \in {\mathbb{R}}$. Hence,
\begin{equation}
0 \ge \rho _n^{(1)} (\theta ) \ge {{1 - u_0 \left( {e^\theta }\right)}
\over {u_0 \left( {e^\theta } \right)}}I_n (\theta ).
\end{equation}
One can also verify that $1 -u_0 \left( {e^\theta } \right) \to 0$ as $\theta \to 0$.
Then, in accordance with (7.15), the second term in (7.13) satisfies $\rho _n^{(1)} (\theta ) \to 0$,
provided that $I_n (\theta )$ has a finite limit as $\theta \to 0$.
Further, by Taylor expansion we have
$$
\widehat F^\prime (t) = \widehat F^\prime (t_0 )
- \widehat F^{\prime \prime} (t_0 )(t_0 - t) + (t_0 - t)g(t_0 ;t),
$$
where $g(t_0 ;t) = (t_0 - t)\widehat F^{\prime \prime \prime}(\tau)/2$
and $\tau$ lies between $t_0$ and $t$. Using this expansion we write
$$
u_k (x) = {{u(x)} \over \beta } - {{x\widehat F^{\prime \prime}
\bigl( {h(x)} \bigr)} \over \beta }R_k (x) + R_k (x)g_k (x),
$$
where $g_k (x) = xR_k (x)\widehat F^{\prime \prime \prime}(\tau)/(2\beta)$
and $\tau$ lies between $h_k (x)$ and $h(x)$.
Therefore
\begin{equation}
u_k \left( {e^\theta } \right) = {{u\left( {e^\theta } \right)}\over \beta }
- {{e^\theta \widehat F^{\prime \prime} \left({h\left( {e^\theta } \right)}
\right)} \over \beta }R_k \left({e^\theta } \right) + R_k \left( {e^\theta }
\right)g_k \left({e^\theta } \right).
\end{equation}
It follows from (7.14) and (7.16) that
\begin{equation}
I_n (\theta ) = - \left[ {1 - {{u\left( {e^\theta } \right)}\over \beta }}
\right]n - {{e^\theta \widehat F^{\prime \prime}\left( {h\left( {e^\theta }
\right)} \right)} \over \beta}\sum\limits_{k = 0}^{n - 1} {R_k \left( {e^\theta } \right)}
+ \rho _n^{(2)} (\theta ),
\end{equation}
where
$$
0 \le \rho _n^{(2)} (\theta ) \le R_0\left( {e^\theta }\right)\sum\limits_{k = 0}^{n - 1}
{g_k \left({e^\theta } \right)}.
$$
In the last estimate we used the previously established inequality $\left|{R_n (x)} \right| \le \beta ^n
\left| {R_0 (x)} \right|$. Owing to the relation (7.9), $R_0 \left( {e^\theta } \right)
= \mathcal{O}(\theta)$ as $\theta \to 0$. In turn, according to (7.2), $g_k \left( {e^\theta } \right)
=\mathcal{O}\left( {\beta ^k } \right) \to 0$ as $k \to \infty $ for all $\theta \in {\mathbb{R}}$.
Hence,
$$
R_0 \left( {e^\theta } \right)\sum\limits_{k = 0}^{n - 1} {g_k\left( {e^\theta }
\right)} = \mathcal{O}(\theta ) \longrightarrow 0, \quad \parbox{2cm}{\textit{as} {} $ \theta \to 0 $.}
$$
It follows that the error term in (7.17) satisfies
\begin{equation}
\rho _n^{(2)} (\theta ) \longrightarrow 0, \quad \parbox{2cm}{\textit{as} {} $ \theta \to 0 $.}
\end{equation}
Combining (7.11), (7.17) and (7.18), after some computation and
taking into account the continuity of $\widehat F^{\prime \prime }(s)$, we obtain (7.12).
The lemma is proved.
\end{proof}
With the help of the lemmas established above, we now state and prove
analogues of the Law of Large Numbers and the Central Limit Theorem for $S_n $.
\begin{theorem}
Let $\beta < 1$ and $\alpha < \infty $. Then
\begin{equation}
\mathop {\lim }\limits_{n \to \infty } \mathbb{P}\left\{ {{{S_n } \over n}< u} \right\}
= \left\{ \begin{array}{l} 0 \hfill ,
\qquad \parbox{2.3cm}{\textit{if} {} $ u < 1 + \gamma $,} \\ \nonumber
\\
1 \hfill , \qquad \parbox{2.3cm}{\textit{if} {} $ u \ge 1 + \gamma $,} \\
\end{array} \right.
\end{equation}
where $\gamma = (\alpha - 1)/(1 - \beta )$.
\end{theorem}
\begin{proof}
Denoting $\psi _n (\theta )$ be the Laplace transform of distribution
of ${{S_n }\mathord{\left/ {\vphantom {{S_n } n}} \right. \kern-\nulldelimiterspace} n}$ it follows
from formula (7.1) that $\psi _n (\theta ) = T_n \left( {\theta _n } \right)$, where
$\theta _n = \exp \left\{ { - {\theta \mathord{\left/ {\vphantom {\theta n}} \right.
\kern-\nulldelimiterspace} n}} \right\}$. The theorem statement is equivalent to that for
any fixed $\theta \in {\mathbb{R}}_ + $
\begin{equation}
\psi _n (\theta ) \longrightarrow e^{ - \theta (1 + \gamma )},
\quad \parbox{2cm}{\textit{as} {} $ n \to\infty $.}
\end{equation}
From Lemma 10 it follows that
\begin{equation}
\ln \psi _n (\theta ) \sim - \left( {1 - {{u\left( {\theta _n }\right)} \over \beta }}
\right)n + {{\beta \gamma (2 + \gamma )}\over {1 - \beta }}\,{{\theta ^3 } \over {n^3 }}
\sum\limits_{k =0}^{n - 1} {u^k \left( {\theta _n } \right)},
\end{equation}
as $n \to \infty $. Owing to (7.10), the first summand becomes
\begin{equation}
\left( {1 - {{u\left( {\theta _n } \right)} \over \beta }}\right)n \sim (1 + \gamma )\theta
- \gamma {{1 + \beta (1 +\gamma )} \over {1 - \beta }}\,{{\, \theta ^2 } \over n}\raise 1pt\hbox{.}
\end{equation}
The second summand, as is easy to see, is of order
$\mathcal{O}\left( 1/n^3 \right)$.
Therefore (7.19) follows from (7.20) and (7.21).
The theorem is proved.
\end{proof}
We note that, in view of the relation (7.21), the rate of convergence
of $ S_n/n \longrightarrow (1 + \gamma )$ as $n \to \infty $ can be estimated.
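The convergence (7.19) can also be observed numerically straight from the product representation (7.1), since the factors $u_k(x)$ require only the recursion $h_{k+1}(x) = x\widehat F(h_k(x))$. The sketch below is our illustration with the subcritical geometric law $F(s) = p/(1-(1-p)s)$, $p = 0.6$ (so $q = 1$, $\beta = 2/3$ and $\gamma = 4$; these choices are not from the text); it compares $\psi_n(\theta) = T_n(e^{-\theta/n})$ with the limit $e^{-\theta(1+\gamma)}$:

```python
import math

p = 0.6                                   # illustrative subcritical geometric law (assumption)
beta = (1 - p) / p                        # F'(1) = 2/3, so q = 1 and F_hat = F
gamma = (2 * (1 - p) / p) / (1 - beta)    # (alpha - 1)/(1 - beta) = 4 here

def F_hat(s):  return p / (1 - (1 - p) * s)
def dF_hat(s): return p * (1 - p) / (1 - (1 - p) * s) ** 2

def psi(theta, n):
    """psi_n(theta) = T_n(e^{-theta/n}) via the product formula (7.1)."""
    x = math.exp(-theta / n)
    h, prod = 1.0, 1.0                    # h_0(x) = 1, cf. (6.3)
    for _ in range(n):
        prod *= x * dF_hat(h) / beta      # factor u_k(x)
        h = x * F_hat(h)                  # h_{k+1}(x) = x * F_hat(h_k(x))
    return prod

# sanity: S_1 = W_0 = 1, so psi_1(theta) = e^{-theta} exactly
assert abs(psi(0.3, 1) - math.exp(-0.3)) < 1e-12

# convergence (7.19): psi_n(theta) -> e^{-theta(1+gamma)}
theta = 0.5
limit = math.exp(-theta * (1 + gamma))
assert abs(psi(theta, 10_000) - limit) / limit < 0.01
```

The observed relative deviation decays like $\mathcal{O}(1/n)$, in agreement with (7.21).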
\begin{theorem}
Let $\beta < 1$, $\alpha < \infty $, and $\gamma = {{(\alpha -1)} \mathord{\left/
{\vphantom {{(\alpha - 1)} {(1 - \beta )}}} \right. \kern-\nulldelimiterspace} {(1 - \beta )}}$.
Then
$$
\mathbb{P}\left\{ \frac{S_n - \mathbb{E}S_n}{\sqrt{2\Psi n}} < x \right\}
\longrightarrow \Phi (x), \quad \mbox{as } n \to \infty,
$$
where the constant
$$
\Psi = \gamma\, \frac{1 + \beta (1 + \gamma )}{1 - \beta}
$$
and $\Phi (x)$ is the standard normal distribution function.
\end{theorem}
\begin{proof}
This time let $\varphi_n (\theta )$ be the characteristic function
of the distribution of $\bigl( S_n - \mathbb{E}S_n \bigr)/\sqrt{2\Psi n}$:
$$
\varphi_n \left( \theta \right) := \mathbb{E}\left[ \exp \frac{i\theta
\left( S_n - \mathbb{E}S_n \right)}{\sqrt{2\Psi n}} \right].
$$
According to (6.4) we have
\begin{equation}
\ln \varphi_n (\theta ) \sim - (1 + \gamma )\frac{i\theta n}{\sqrt{2\Psi n}}
+ \ln T_n (\theta_n), \quad \mbox{as } n \to \infty,
\end{equation}
where $\theta_n = \exp\left\{ i\theta/\sqrt{2\Psi n} \right\}$.
Combining (7.1) and Lemma 10 yields
\begin{equation}
\ln T_n (\theta_n) \sim - \left( 1 - \frac{u(\theta_n)}{\beta} \right) n
+ \frac{\beta \gamma (2 + \gamma )}{1 - \beta}\,\frac{i\theta^3}{(2\Psi n)^{3/2}}
\sum\limits_{k = 0}^{n - 1} u^k (\theta_n).
\end{equation}
In turn from (7.10) we have
\begin{equation}
1 - \frac{u(\theta_n)}{\beta} \sim - (1 + \gamma )\frac{i\theta}{\sqrt{2\Psi n}}
- \frac{\theta^2}{2n}.
\end{equation}
Substituting relations (7.23) and (7.24) into (7.22) yields
$$
\ln \varphi_n (\theta ) = - \frac{\theta^2}{2}
+ \mathcal{O}\left( \frac{\theta^3}{n^{3/2}} \right),
\quad \mbox{as } n \to \infty.
$$
Hence we conclude that
$$
\varphi_n (\theta ) \longrightarrow \exp\left\{ - \frac{\theta^2}{2} \right\},
\quad \mbox{as } n \to \infty,
$$
and the theorem statement follows from the continuity theorem for characteristic functions.
\end{proof}
\medskip
\section{Introduction}
Integer sequences obtained by polynomial iteration, i.e., sequences that satisfy a recursion of the form
$$x_{n+1} = P(x_n),$$
occur in several areas of mathematics. Many interesting examples can be found in Finch's book \cite{Finch:2003} on mathematical constants, Chapter 6.10.
\medskip
Let us give two concrete examples: the first is the sequence given by $x_0 = 0$ and $x_{n+1} = x_n^2 + 1$ for $n \geq 0$, which is entry A003095 in the On-Line Encyclopedia of Integer Sequences (OEIS, \cite{OEIS}). Among other things, $x_n$ is the number of binary trees whose height (greatest distance from the root to a leaf) is less than $n$. This sequence grows very rapidly: there exists a constant $\beta \approx 1.2259024435$ (the digits are A076949 in the OEIS) such that $x_n = \lfloor \beta^{2^n} \rfloor$. However, this formula is not an efficient way to compute the elements of the sequence, since one needs the whole sequence to evaluate the constant $\beta$ numerically: it can be expressed as
$$\beta = \prod_{n=1}^{\infty} \big(1 + x_n^{-2} \big)^{2^{-n-1}},$$
but no formula that does not involve the sequence elements is known. Another well-known example is Sylvester's sequence (A000058 in the OEIS), which is given by $y_0 = 2$ and $y_{n+1} = y_n^2-y_n+1$. It arises in the context of Egyptian fractions, $y_n$ being the smallest positive integer for each $n$ such that
$$\frac{1}{y_0} + \frac{1}{y_1} + \frac{1}{y_2} + \cdots + \frac{1}{y_n} < 1.$$
Again, there is a pseudo-explicit formula for the sequence: for a constant $\gamma \approx 1.5979102180$, we have $y_n = \lfloor \gamma^{2^n} + \frac12 \rfloor$. However, $\gamma$ can again only be expressed in terms of the elements of the sequence. This is also the reason why little is known about the constants $\beta$ and $\gamma$ in these two examples and generally growth constants associated with similar sequences that satisfy a polynomial recursion.
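As an illustration (a sketch of ours, not part of the original text), the two recursions can be iterated directly, and the stated floor formulas then recover $\beta$ and $\gamma$ numerically from a single late term:

```python
# Sanity check: iterate the two recursions from the text and estimate the
# growth constants via beta ~ x_n^(1/2^n) and gamma ~ y_n^(1/2^n); the
# truncation at n = 8 is our own choice and already gives ~6 digits.

def iterate(x0, step, n):
    """Return [x_0, ..., x_n] for the recursion x_{k+1} = step(x_k)."""
    seq = [x0]
    for _ in range(n):
        seq.append(step(seq[-1]))
    return seq

xs = iterate(0, lambda x: x * x + 1, 8)      # OEIS A003095
ys = iterate(2, lambda y: y * y - y + 1, 8)  # Sylvester, OEIS A000058

beta = xs[8] ** (1 / 2 ** 8)   # error is O(2^-8 / x_8), negligible here
gamma = ys[8] ** (1 / 2 ** 8)
print(xs[:5], ys[:5])  # [0, 1, 2, 5, 26] [2, 3, 7, 43, 1807]
print(round(beta, 6), round(gamma, 6))
```

The rapid doubling of the exponent is exactly why a handful of terms suffices for the constants, while the constants themselves cannot be defined without the sequence.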
\medskip
In this short note, we will prove that the constants $\beta$ and $\gamma$ in these examples are---perhaps unsurprisingly---irrational, as are all growth constants associated with similar sequences that follow a polynomial recursion. The precise statement reads as follows:
\begin{theorem}\label{thm:main_recseq}
Suppose that an integer sequence satisfies a recursion of the form $x_{n+1} = P(x_n)$ for some polynomial $P$ of degree $d > 1$ with rational coefficients. Assume further that $x_n \to \infty$ as $n \to \infty$. Set
$$\alpha = \lim_{n \to \infty} \sqrt[d^n]{x_n}.$$
Then $\alpha$ is a real number greater than $1$ that is either irrational or an integer.
\end{theorem}
It is natural to conjecture that the constants $\beta$ and $\gamma$ in our first two examples are not only irrational, but even transcendental. This is not always true for polynomial recurrences in general, though: for example, consider the sequence given by $z_1 = 3$ and $z_{n+1} = z_n^2 - 2$. It is not difficult to prove that
$$z_n = L_{2^n} = \Big( \frac{1 + \sqrt{5}}{2} \Big)^{2^n} + \Big( \frac{1 - \sqrt{5}}{2} \Big)^{2^n}$$
for all $n \geq 1$, where $L_{n}$ denotes the $n$-th Lucas number. Thus the constant $\alpha$ in Theorem~\ref{thm:main_recseq} would be the golden ratio in this example.
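This identity is easy to check numerically (an illustrative sketch of ours; `lucas` below uses the standard initial values $L_0 = 2$, $L_1 = 1$):

```python
def lucas(n):
    """n-th Lucas number, computed by the usual two-term iteration."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# z_1 = 3, z_{n+1} = z_n^2 - 2 reproduces L_{2^n}:
zs = [3]
for _ in range(4):
    zs.append(zs[-1] ** 2 - 2)

print(zs)                                    # [3, 7, 47, 2207, 4870847]
print([lucas(2 ** n) for n in range(1, 6)])  # the same list
```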
\medskip
In the following section, we briefly review the classical method to determine the asymptotic behaviour of polynomially recurrent sequences. Theorem~\ref{thm:main_recseq} will follow as a consequence of a somewhat stronger result, Theorem~\ref{th:irr_crit}. This theorem and its proof, which makes use of the Subspace Theorem, will be given in Section~\ref{sec:subspace}.
\section{Asymptotic formulas for polynomially recurrent sequences}\label{sec:asymp}
There is a classical technique for the analysis of polynomial recursions. A treatment of the two examples given in the introduction
can already be found in the 1973 paper of Aho and Sloane \cite{AhoSloane:1973} (along with many other examples). See also Chapter 2.2.3 of the book by Greene and Knuth~\cite{GreeneKnuth:1990} for a discussion of the method.
Let the polynomial $P$ in the recursion $x_{n+1} = P(x_n)$ be given by
$$P(x) = c_d x^d + c_{d-1} x^{d-1} + \cdots + c_0.$$
Note that
$$P(x) = c_d \Big( x + \frac{c_{d-1}}{d c_d} \Big)^d + O(x^{d-2}).$$
So if we perform the substitution $y_n = c_d^{1/(d-1)} (x_n + \frac{c_{d-1}}{d c_d})$, the recursion becomes
\begin{align*}
y_{n+1} &= c_d^{1/(d-1)} \Big(P(x_n) + \frac{c_{d-1}}{d c_d} \Big)\\
&= c_d^{d/(d-1)} \Big( x_n + \frac{c_{d-1}}{d c_d} \Big)^d + O(x_n^{d-2}) \\
&= y_n^d + O(y_n^{d-2}).
\end{align*}
Let us assume that $x_n \to \infty$, thus also $y_n \to \infty$. It is easy to see that $x_n$ and $y_n$ are increasing from some point onwards in this case. We can also assume, without loss of generality, that $y_n > 0$ for all $n$: if not, we simply choose a later starting point. Taking the logarithm, we obtain
$$\log y_{n+1} = d \log y_n + O(y_n^{-2})$$
or equivalently
\begin{equation}\label{eq:O-est}
\log \Big( \frac{y_{n+1}}{y_n^d} \Big) = O(y_n^{-2}).
\end{equation}
Next express $\log y_n$ as follows:
\begin{align*}
\log y_n &= d \log y_{n-1} + \log \Big( \frac{y_n}{y_{n-1}^d} \Big) =
d^2 \log y_{n-2} + d \log \Big( \frac{y_{n-1}}{y_{n-2}^d} \Big) + \log \Big( \frac{y_n}{y_{n-1}^d} \Big) \\
&= \cdots = d^n \log y_0 + \sum_{k=0}^{n-1} d^{n-k-1} \log \Big( \frac{y_{k+1}}{y_k^d} \Big).
\end{align*}
Extending to an infinite sum (which converges since $\log (y_{k+1}/y_k^d)$ is bounded) yields
$$\log y_n = d^n \Big( \log y_0 + \sum_{k=0}^{\infty} d^{-k-1} \log \Big( \frac{y_{k+1}}{y_k^d} \Big) \Big) - \sum_{k=n}^{\infty} d^{n-k-1} \log \Big( \frac{y_{k+1}}{y_k^d} \Big).$$
Set
$$\log \alpha = \log y_0 + \sum_{k=0}^{\infty} d^{-k-1} \log \Big( \frac{y_{k+1}}{y_k^d} \Big),$$
so that
$$\log y_n = d^n \log \alpha - \sum_{k=n}^{\infty} d^{n-k-1} \log \Big( \frac{y_{k+1}}{y_k^d} \Big).$$
In view of~\eqref{eq:O-est} and the fact that $y_n \leq y_{n+1} \leq \cdots$ for sufficiently large $n$, this gives
$$\log y_n = d^n \log \alpha + O(y_n^{-2}),$$
and thus finally
$$y_n = \alpha^{d^n} + O\big(\alpha^{-d^n}\big).$$
This means that
$$x_n = c_d^{-1/(d-1)} \alpha^{d^n} - \frac{c_{d-1}}{d c_d} + O\big(\alpha^{-d^n}\big).$$
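The derivation above can be turned into a short numerical sketch (ours, not from the paper). For Sylvester's sequence from the introduction we have $d = 2$, $c_d = 1$, $c_{d-1} = -1$, hence $A = c_d^{-1/(d-1)} = 1$ and $B = -c_{d-1}/(d c_d) = 1/2$; truncating the series for $\log \alpha$ after ten terms (an arbitrary choice) already recovers the constant $\gamma$:

```python
import math

d = 2
xs = [2]
for _ in range(10):
    xs.append(xs[-1] ** 2 - xs[-1] + 1)  # Sylvester's sequence

# shifted sequence y_n = c_d^{1/(d-1)} (x_n + c_{d-1}/(d c_d)) = x_n - 1/2
ys = [x - 0.5 for x in xs]

# log(alpha) = log(y_0) + sum_{k>=0} d^{-k-1} log(y_{k+1} / y_k^d), truncated
log_alpha = math.log(ys[0]) + sum(
    d ** (-k - 1) * math.log(ys[k + 1] / ys[k] ** d) for k in range(10)
)
alpha = math.exp(log_alpha)
print(round(alpha, 6))  # the constant gamma from the introduction

# x_n = A alpha^(d^n) + B + O(alpha^(-d^n)), i.e. x_n = floor(alpha^(2^n) + 1/2):
print([math.floor(alpha ** d ** n + 0.5) for n in range(5)])
```

The remainder of the truncated series is dominated by $d^{-k} \log(y_{k+1}/y_k^d) = O(d^{-k} y_k^{-2})$, which is astronomically small after a few terms.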
\section{Application of the Subspace Theorem}\label{sec:subspace}
We now combine the asymptotic formula from the previous section with an application of the Subspace Theorem to prove our main result on polynomial recursions. In fact, we first state and prove a somewhat stronger result that implies Theorem~\ref{thm:main_recseq}.
\begin{theorem}\label{th:irr_crit}
Assume that the sequence $G_n$ attains an integral value infinitely often, and that it satisfies an asymptotic formula of the form
$$G_n=A \alpha^n+B+O(\alpha^{-\epsilon n}),$$
where $\alpha > 1$, $A$ and $B$ are algebraic numbers with $A \neq 0$, $\epsilon > 0$, and the constant implied by the $O$-term does not depend on $n$. Then the number $\alpha$ is either irrational or an integer.
\end{theorem}
In order to prove the irrationality of $\alpha$ we make use of the following version of the Subspace Theorem, which is most suitable for our purposes (cf. \cite[Chapter V, Theorem 1D]{Schmidt:1991}).
\begin{theorem}[Subspace Theorem]
Let $K$ be an algebraic number field and let
$S \subset M(K)=\{\text{canonical absolute values of}\; K\}$ be a finite set of absolute
values which contains all of the Archimedean ones. For each $\nu \in S$
let $L_{\nu,1}, \ldots, L_{\nu,N}$ be $N$ linearly independent linear
forms in $N$ variables with coefficients in $K$. Then for given
$\delta>0$, the solutions of the inequality
\[\prod_{\nu \in S} \prod_{i=1}^N |L_{\nu ,i}(\mathbf x) |_{\nu}^{n_\nu} <
\overline{|\mathbf x|}^{-\delta}\]
with $\mathbf x = (x_1,x_2,\ldots,x_N) \in \mathfrak a_K^N$ ($\mathfrak a_K$ being the maximal order of $K$) and $\mathbf x \neq \mathbf 0$, where
$|\cdot|_{\nu}$ denotes the valuation corresponding to $\nu$, $n_{\nu}$ is the
local degree, and
\[\overline{|\mathbf x|}= \max_{1 \leq i \leq N \atop 1 \leq j
\leq \deg K}|x_i^{(j)}|,\]
the maximum being taken over all conjugates $x_i^{(j)}$ of all entries $x_i$ of $\mathbf x$, lie in finitely many proper subspaces of $K^N$.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{th:irr_crit}]
Let us assume, contrary to the statement of Theorem \ref{th:irr_crit}, that $\alpha=p/q$ is rational, where $p$ and $q$ are coprime positive integers, $p > q$, and $q \neq 1$. Moreover, assume that their prime factorizations are
$$p=p_1^{n_1} \dots p_k^{n_k} \quad \text{and} \quad q=q_1^{m_1} \dots q_{\ell}^{m_\ell}.$$
We choose $K$ in the Subspace Theorem to be the normal closure of $\mathbb{Q}(A,B)$ and write $D=[K:\mathbb{Q}]$. We fix one embedding of $K$ into $\mathbb{C}$ so that we can assume that $K\subseteq \mathbb{C}$. Moreover, let us write $A$ and $B$ as $A=\beta_1/Q$ and $B=\beta_2/Q$, where $\beta_1$ and $\beta_2$ lie in the maximal order $\mathfrak a_K$ of $K$ and $Q$ is a positive integer such that the ideals $(\beta_1,\beta_2)$ and $(Q)$ are coprime.
If $n$ is an index such that $G_n$ is an integer we deduce that there exists an algebraic integer $a$ which may depend on $n$ such that
\begin{equation}\label{eq:G_integral}
G_n=\frac{\beta_1 p^n+\beta_2 q^n+a}{Q q^n}=A \alpha^n+B+\frac{a}{Q q^n}=A\alpha^n+B+O(\alpha^{-\epsilon n}).
\end{equation}
Since we are assuming that $G_n$ is a rational integer, we can write the algebraic integer $a$ in the form $a=X-\beta_1 p^n-\beta_2q^n$, with $X\in \mathbb{Z}$.
Moreover, we know that
\begin{equation}\label{eq:a_small}
|a|<CQ\left(\frac{q^{1+\epsilon}}{p^{\epsilon}}\right)^n,
\end{equation}
where $C$ is the constant implied by the $O$-term.
Assume that $K$ has signature $(r,s)$. We choose
$$S=\{\infty_1,\dots,\infty_{r+s},\mathfrak p_{1,1},\dots,\mathfrak p_{k,t_k},\mathfrak q_{1,1},\dots,\mathfrak q_{\ell,u_{\ell}}\},$$
where the valuations $\mathfrak p_{i,1},\dots,\mathfrak p_{i,t_i}$ are all valuations lying above $p_i$ for $1\leq i \leq k$ and the valuations $\mathfrak q_{j,1},\dots,\mathfrak q_{j,u_j}$ are all valuations lying above $q_j$ for $1\leq j \leq \ell$. Moreover, let
$$\Gal(K/\mathbb{Q})=\{\sigma_1,\dots,\sigma_r,\sigma_{r+1},\bar\sigma_{r+1},\dots,\sigma_{r+s},\bar\sigma_{r+s}\},$$
so that the valuation $\infty_i$ is given by $|x|_{\infty_i}=|\sigma_i^{-1} x|$, where $|\cdot|$ is the usual absolute value of $\mathbb{C}$. Finally, denote the conjugates of $\beta_1$ and $\beta_2$ by $\beta_j^{(i)} = \sigma_i(\beta_j)$. We have the formula
$$|x_1\beta_1^{(i)}+x_2\beta_2^{(i)}+x_3|_{\infty_i}=|x_1\beta_1+x_2\beta_2+x_3|$$
for arbitrary rational numbers $x_1,x_2,x_3$.
Next, we construct suitable linear forms to apply the Subspace Theorem. Let us write
$x_1=p^n$, $x_2=q^n$ and $x_3=a$, thus $N=3$. We choose our linear forms as $L_{\nu,1}(\mathbf x)=x_1, L_{\nu,2}(\mathbf x)=x_2$ for all $\nu\in S$ and $L_{\nu,3}(\mathbf x)=x_3$ if $\nu$ lies above one of the primes $p_1,\dots,p_k$. We choose $L_{\nu,3}(\mathbf x)=\beta_1 x_1+\beta_2 x_2+x_3$ if $\nu$ lies above one of the primes $q_1,\dots,q_\ell$. Finally, if $\nu=\infty_i$ then we put $L_{\infty_i,3}(\mathbf x)=(\beta_1-\beta_1^{(i)}) x_1+(\beta_2-\beta_2^{(i)}) x_2+x_3$.
Using the product formula (cf. \cite[pages 99--100]{Lang:ANT}) and trivial estimates we obtain
\begin{align*}
\prod_{\nu\in S} |L_{\nu,1}(\mathbf x)|_\nu^{n_\nu}&=1,&
\prod_{\nu\in S} |L_{\nu,2}(\mathbf x)|_\nu^{n_\nu}&=1,\\
\prod_{\nu|q_j \atop 1\leq j\leq \ell} |L_{\nu,3}(\mathbf x)|_\nu^{n_\nu}&\leq q^{-Dn},&
\prod_{\nu|p_i \atop 1\leq i\leq k} |L_{\nu,3}(\mathbf x)|_\nu^{n_\nu}&\leq 1.
\end{align*}
Thus we are left to compute the quantities $|L_{\infty_i,3}(\mathbf x)|_{\infty_i}$. We obtain
\begin{align*}
|L_{\infty_i,3}(\mathbf x)|_{\infty_i} &=|(\beta_1-\beta_1^{(i)}) x_1+(\beta_2-\beta_2^{(i)}) x_2+x_3 |_{\infty_i}\\
&= |\beta_1 p^n+\beta_2 q^n+a-\beta_1^{(i)}p^n-\beta_2^{(i)}q^n|_{\infty_i}\\
&=|X-\beta_1^{(i)}p^n-\beta_2^{(i)}q^n|_{\infty_i}\\
&=|X-\beta_1 p^n-\beta_2 q^n|=|a|.
\end{align*}
Combining all inequalities, we have
\begin{equation}\label{eq:prod_bound}
\prod_{\nu \in S} \prod_{i=1}^3 |L_{\nu ,i}(\mathbf x) |_{\nu}^{n_\nu}\leq q^{-Dn}|a|^D< (CQ)^D\left(\frac{q}{p}\right)^{\epsilon Dn}.
\end{equation}
Now choose $\delta>0$ small enough so that
$$\left(\frac{q}{p}\right)^{\epsilon D}<p^{-\delta}.$$
In view of~\eqref{eq:a_small}, the inequality $|a|_\nu \leq p^n$ holds for all valuations $\nu$ lying above $\infty$ for sufficiently large $n$, so that $\overline{|\mathbf x|} = |x_1| = p^n$. Hence we obtain
$$(CQ)^D\left(\frac{q}{p}\right)^{\epsilon Dn}<(p^n)^{-\delta}=\overline{|\mathbf x|}^{-\delta}$$
for sufficiently large $n$. In view of \eqref{eq:prod_bound} we have shown that
\begin{equation}\label{eq:ST}
\prod_{\nu \in S} \prod_{i=1}^3 |L_{\nu ,i}(\mathbf x) |_{\nu}^{n_\nu} <
\overline{|\mathbf x|}^{-\delta}.
\end{equation}
By the Subspace Theorem all solutions $x_1,x_2$ and $x_3$ to \eqref{eq:ST} lie in finitely many subspaces of $K^3$. Since by assumption there are infinitely many solutions, there exists one subspace $T\subseteq K^3$ which contains infinitely many solutions. Let $T$ be defined by
$t_1x_1+t_2x_2+t_3x_3=0$, with fixed algebraic integers $t_1,t_2,t_3\in \mathfrak a_K$. Then there must be infinitely many integers $n$ such that $t_1p^n+t_2q^n+t_3a=0$, which contradicts \eqref{eq:a_small} and the assumption that $p > q > 1$. Thus we conclude that $\alpha$ cannot be rational, unless $q=1$, in which case $\alpha$ is an integer.
\end{proof}
Now the proof of Theorem~\ref{thm:main_recseq} is straightforward.
\begin{proof}[Proof of Theorem~\ref{thm:main_recseq}]
As derived in Section~\ref{sec:asymp}, if an integer sequence satisfies a recursion of the form $x_{n+1} = P(x_n)$ for some polynomial $P$ of degree $d > 1$ with rational coefficients and $x_n \to \infty$ as $n \to \infty$, then an asymptotic formula
of the form
$$x_n = A \alpha^{d^n} + B + O(\alpha^{-d^n})$$
holds. If $\alpha$ is rational, but not an integer, then we have an immediate contradiction with Theorem~\ref{th:irr_crit}.
\end{proof}
\section{Introduction}
\label{sec:introduction}
Multivariate circular observations arise commonly in all those fields where a quantity of interest is measured as a direction or when instruments such as compasses, protractors, weather vanes, sextants or theodolites are used \citep{Mardia1972}.
Circular (or directional) data can be seen as points on the unit circle and represented by angles, provided that an initial direction and orientation of the circle have been chosen.
These data might be successfully modeled by using appropriate wrapped distributions such as the Wrapped Normal on the unit circle. The reader is pointed to \citet{MardiaJupp2000}, \citet{JammalamadakaSenGupta2001} and \citet{Batschelet1981} for modeling and inferential issues on circular data.
When data come in a multivariate setting, we might extend the univariate wrapped distribution by using a component-wise wrapping of multivariate distributions.
The multivariate Wrapped Normal ($WN_p$) is obtained by component-wise wrapping of a $p$-variate Normal distribution ($N_p$) on a $p-$dimensional torus \citep{JohnsonWehrly1978, Baba1981}. Wrapping can be explained as the geometric translation of a distribution with support on $\mathbb{R}$ to a space defined on a \textit{circular} object, e.g., a unit circle \citep{MardiaJupp2000}.
Let $\vect{X} \sim N_p(\vect{\mu}, \Sigma)$, where $\vect{\mu}$ is the mean vector and $\Sigma$ is the variance-covariance matrix. Then, the distribution of $\vect{Y} = \vect{X} \ \textrm{mod} \ 2\pi$ is denoted as $WN_p(\vect{\mu},\Sigma)$ with distribution function
\begin{equation*}
F(\vect{y})= \sum_{\vect{j} \in \mathbb{Z}^p} [\Phi_p(\vect{y} + 2 \pi \vect{j}; \Omega)- \Phi_p(2 \pi \vect{j}; \Omega) ] \ ,
\end{equation*}
and density function
\begin{equation*}
f(\vect{y})= \sum_{\vect{j} \in \mathbb{Z}^p} \phi_p(\vect{y} + 2 \pi \vect{j}; \Omega) \ ,
\end{equation*}
with $\vect{y} \in(0,2\pi]^p$, $\vect{j}\in\mathbb{Z}^p$, $\Omega=(\vect{\mu},\Sigma)$,
where $\Phi_p(\cdot)$ and $\phi_p(\cdot)$ are the distribution and density function of $\vect{X}$, respectively, and the modulus operator \textrm{mod} is applied component-wise. An appealing property of the wrapped Normal distribution is its closure with respect to convolution \citep{Baba1981, JammalamadakaSenGupta2001}.
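For illustration (a sketch of ours, not from the paper), in the univariate case $p = 1$ the density can be evaluated by truncating the sum over wrapping coefficients; the truncation bound $J = 5$ is an assumption of ours, ample for the moderate $\sigma$ used here:

```python
import math

def dwrapnorm(y, mu, sigma, J=5):
    """Wrapped normal density f(y) = sum_j phi(y + 2*pi*j; mu, sigma),
    with the sum over j truncated to {-J, ..., J}."""
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    total = 0.0
    for j in range(-J, J + 1):
        z = (y + 2.0 * math.pi * j - mu) / sigma
        total += c * math.exp(-0.5 * z * z)
    return total

# The wrapped density still integrates to one over (0, 2*pi]
# (midpoint Riemann sum; the grid size is arbitrary):
n = 20000
mass = sum(
    dwrapnorm(2 * math.pi * (k + 0.5) / n, mu=1.0, sigma=0.8) for k in range(n)
) * 2 * math.pi / n
print(round(mass, 6))  # ~ 1.0
```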
Likelihood based inference about the parameters of the $WN_p(\vect{\mu},\Sigma)$ distribution can run into numerical and computational hindrances, since the log-likelihood function
\begin{equation*} \label{equ:loglik}
\ell(\Omega) = \sum_{i=1}^n \log \left[ \sum_{\vect{j} \in \mathbb{Z}^p} \phi_p(\vect{y}_i + 2 \pi \vect{j}; \Omega) \right] \,
\end{equation*}
involves the evaluation of an infinite series.
\citet{Agostinelli2007} proposed an Iterative Reweighted Maximum Likelihood Estimating Equations algorithm in the univariate setting, which is available in the \texttt{R} package \texttt{circular} \citep{AgostinelliLund2017}.
Algorithms based on the Expectation-Maximization (EM) method have been used by \citet{Fisher1994} for parameter estimation in autoregressive models of Wrapped Normal distributions, and by \citet{Coles1998}, \citet{Ravindran2011} and \citet{Ferrari2009} in a Bayesian framework, according to a data augmentation approach, to estimate the missing unobserved wrapping coefficients.
An innovative estimation strategy based on EM and Classification EM algorithms has been discussed in \citet{nodehi2018estimation}. In order to perform maximum likelihood estimation, the wrapping coefficients are treated as latent variables.
Let $\vect{y}_1, \ldots, \vect{y}_n$ be an i.i.d. sample from a multivariate Wrapped Normal distribution $\vect{Y} \sim WN_p(\vect{\mu}, \Sigma)$ on the $p$-torus with mean vector $\vect{\mu}$ and variance-covariance matrix $\Sigma$. We can think of $\vect{y}_i = \vect{x}_i \mod 2\pi$ where $\vect{x}_i$ is a sample from $\vect{X}_i \sim N_p(\vect{\mu}, \Sigma)$.
The EM algorithm works with the complete log-likelihood function given by
\begin{equation} \label{equ:completeloglik}
\ell_C(\Omega) = \sum_{i=1}^n \log\left[ \sum_{\vect{j} \in \mathbb{Z}^p} v_{i\vect{j}} \phi(\vect{y}_i + 2 \pi \vect{j}; \Omega)\right] \ ,
\end{equation}
where the wrapping coefficients $\vect{j}$ are missing unobserved variables and
$v_{i\vect{j}}$ is an indicator of the $i$th unit having the vector $\vect{j}$ as its wrapping coefficients. The EM algorithm iterates between an Expectation (E) step and a Maximization (M) step. In the E-step, the conditional expectation of (\ref{equ:completeloglik}) is obtained by estimating $v_{i\vect{j}}$ with the posterior probability that $\vect{y}_i$ has $\vect{j}$ as wrapping coefficients, based on the current parameter values, i.e.
\begin{equation*}
v_{i\vect{j}} = \frac{\phi(\vect{y}_i + 2 \pi \vect{j}; \Omega)}{\sum_{\vect{h} \in \mathbb{Z}^p} \phi(\vect{y}_i + 2 \pi \vect{h}; \Omega)} \ , \qquad \vect{j} \in \mathbb{Z}^p, \quad i=1,\ldots,n \ .
\end{equation*}
In the M-step, the conditional expectation of (\ref{equ:completeloglik}) is maximized with respect to $\Omega$. The reader is pointed to \citet{nodehi2018estimation} for computational details about such maximization problem and updating formulas for $\Omega$.
An alternative estimation strategy is based on the CEM-type algorithm. The substantial difference is that the E-step is followed by a C-step (where C stands for classification), in which $v_{i\vect{j}}$ is estimated
as either 0 or 1, so that each observation $\vect{y}_i$ is associated with the most likely wrapping coefficients $\vect{j}_i = \arg\max_{\vect{h} \in \mathbb{Z}^p} v_{i\vect{h}}$.
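The E- and C-steps can be sketched for $p = 1$ as follows (an illustrative implementation of ours; the truncation bound $J$ and the parameter values are arbitrary choices):

```python
import math

def phi(x, mu, sigma):
    """Univariate normal density."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def e_and_c_step(y, mu, sigma, J=5):
    """E-step: posterior v_j over wrapping coefficients j in {-J,...,J};
    C-step: the most likely coefficient j_hat = argmax_j v_j."""
    js = range(-J, J + 1)
    num = {j: phi(y + 2.0 * math.pi * j, mu, sigma) for j in js}
    total = sum(num.values())
    v = {j: num[j] / total for j in js}
    j_hat = max(v, key=v.get)
    return v, j_hat

# The unwrapped point 0.2 + 2*pi ~ 6.48 is closest to mu = 6.5, so j_hat = 1:
v, j_hat = e_and_c_step(0.2, mu=6.5, sigma=0.5)
print(j_hat)
```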
When the sample data are contaminated by outliers, it is well known that maximum likelihood estimation, even when carried out through the EM or CEM algorithm, is likely to lead to unreliable results \citep{farcomeni2016robust}. Hence, there is the need for a suitable robust procedure providing protection against such unexpected anomalous values.
An attractive solution is to modify the likelihood equations in the M-step by introducing a set of weights aimed at bounding the effect of outliers. Here, it is suggested to evaluate weights according to the weighted likelihood methodology \citep{markatou1998}. Weighted likelihood is an appealing robust technique for estimation and testing \citep{agostinelli2001test}. The methodology leads to a robust fit and gives the chance to detect possible substructures in the data.
Furthermore, the weighted likelihood methodology works in a very satisfactory fashion when combined with the EM and CEM algorithms, as in the case of mixture models \citep{greco2018weighted, greco2020weighted}.
The remainder of the paper is organized as follows. Section \ref{back} gives brief but necessary preliminaries on weighted likelihood. The weighted CEM algorithm for robust fitting of the multivariate Wrapped Normal model on data on a $p-$dimensional torus is discussed in
Section \ref{sec:estimation}.
Section \ref{sec:simulations} reports the results of some numerical studies, whereas a real data example is discussed in Section \ref{sec:examples}. Concluding remarks end the paper.
\section{Preliminaries on weighted likelihood}
\label{back}
Let $\vect{y}_1, \cdots, \vect{y}_n$ be a random sample of size $n$ drawn from a r.v. $\vect{Y}$ with distribution function $F$ and probability (density) function $f$. Let $\mathcal{M} = \{ M(\vect{y}; \vect{\theta}), \vect{\theta} \in \Theta \subseteq \mathbb{R}^d, d \geq 1, \vect{y} \in \mathcal{Y} \}$ be the assumed parametric model, with corresponding density $m(\vect{y};\vect{\theta})$, and
$\hat F_n$ the empirical distribution function. Assume that the support of $F$ is the same as that of $M$ and independent of $\vect{\theta}$. A measure of the agreement between the {\it true} and assumed model is provided by the Pearson residual function $\delta(\vect{y})$, with $\delta(\vect{y})\in [-1,+\infty)$, \citep{lindsay1994, markatou1998}, defined as
\begin{equation}
\label{pearson}
\delta(\vect{y}) = \delta(\vect{y}; \vect{\theta}, F) = \frac{f(\vect{y})}{m(\vect{y}; \vect{\theta})} - 1 \ .
\end{equation}
The finite sample counterpart of (\ref{pearson}) can be obtained as
\begin{equation}
\label{residualfs}
\delta_n(\vect{y}) = \delta(\vect{y}; \vect{\theta}, \hat F_n) = \frac{\hat f_n(\vect{y})}{m(\vect{y}; \vect{\theta})} - 1 \ ,
\end{equation}
where $\hat f_n(\vect{y})$ is a consistent estimate of the true density $f(\vect{y})$.
In discrete families of distributions, $\hat{f}_n(\vect{y})$ can be based on the observed relative frequencies \citep{lindsay1994}, whereas in continuous models one could consider a nonparametric density estimate based on the kernel function $k(\vect{y};\vect{t},h)$, that is
\begin{equation}
\label{eqn:parametric-density}
\hat f_n(\vect{y})=\int_\mathcal{Y}k(\vect{y};\vect{t},h)d\hat F_n(\vect{t}) \ .
\end{equation}
Moreover, in the continuous case, the model density in (\ref{residualfs}) can be replaced by a smoothed
model density, obtained by using the same kernel involved in non-parametric density estimation \citep{basu1994minimum, markatou1998}, that is
\begin{equation*}
\hat m(\vect{y}; \vect{\theta})=\int_\mathcal{Y}k(\vect{y};\vect{t},h)m(\vect{t};\vect{\theta}) \ d\vect{t} \
\end{equation*}
leading to
\begin{equation}
\label{residualfs2}
\delta_n(\vect{y}) = \delta(\vect{y}; \vect{\theta}, \hat F_n) = \frac{\hat f_n(\vect{y})}{\hat m(\vect{y}; \vect{\theta})} - 1 \ .
\end{equation}
By smoothing the model, the Pearson residuals in (\ref{residualfs2}) converge to zero with probability one for every $\vect{y}$ under the assumed model and it is not required that the kernel bandwidth $h$ goes to zero as the sample size $n$ increases.
Large values of the Pearson residual function correspond to regions of the support $\mathcal{Y}$ where the model fits the data poorly, meaning that the observation is unlikely to occur under the assumed model.
The reader is pointed to \cite{basu1994minimum}, \cite{markatou1998}, \cite{agostinelli2019weighted} and references therein for more details.
Observations leading to large Pearson residuals in (\ref{residualfs2}) are supposed to be down-weighted. Then, a weight in the interval $[0,1]$ is attached to each data point, computed according to the following
weight function
\begin{equation}
\label{weight}
w(\delta(\vect{y})) = \frac{\left[A(\delta(\vect{y})) + 1\right]^+}{\delta(\vect{y}) + 1} \ ,
\end{equation}
where $w(\delta)\in[0,1]$, $[\cdot]^+$ denotes the positive part and
$A(\delta)$ is the Residual Adjustment Function (RAF, \cite{basu1994minimum}).
The weights $w(\delta_n(\vect{y}))$ are meant to be small for those data points that are in disagreement with the assumed model. In fact,
the RAF serves to bound the effect of large Pearson residuals on the fitting procedure.
By using a RAF such that $|A(\delta)| \le |\delta|$, outliers are expected to be properly downweighted.
The weight function (\ref{weight}) can be based on the families of RAF stemming from the
Symmetric Chi-squared divergence \citep{markatou1998}, the Generalized Kullback-Leibler divergence \citep{park+basu+2003}
\begin{equation}
\label{eq:raf-gkl}
A_{gkl}(\delta, \tau)=\frac{\log (\tau\delta+1)}{\tau}, \ 0\leq \tau \leq 1;
\end{equation}
or the
Power Divergence Measure \citep{cressie+1984, cressie+1988}
\begin{equation*}
A_{pdm}(\delta, \tau) = \left\{
\begin{array}{lc}
\tau \left( (\delta + 1)^{1/\tau} - 1 \right) & \tau < \infty \\
\log(\delta + 1) & \tau \rightarrow \infty \ .
\end{array}
\right .
\end{equation*}
The resulting weight function is unimodal and declines smoothly to zero as $\delta(\vect{y})\rightarrow -1$ or $\delta(\vect{y})\rightarrow\infty$.
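A minimal sketch of this weight construction, using the GKL RAF in (\ref{eq:raf-gkl}); the choice $\tau = 0.5$ and the test values are ours, and the final clipping to $[0,1]$ is a numerical safeguard (it is implied analytically by $|A(\delta)| \le |\delta|$):

```python
import math

def raf_gkl(delta, tau=0.5):
    """Generalized Kullback-Leibler RAF: A(delta) = log(tau*delta + 1)/tau."""
    return math.log(tau * delta + 1.0) / tau

def weight(delta, tau=0.5):
    """w(delta) = [A(delta) + 1]^+ / (delta + 1), clipped to [0, 1]."""
    num = max(raf_gkl(delta, tau) + 1.0, 0.0)
    return 0.0 if num == 0.0 else min(num / (delta + 1.0), 1.0)

print(weight(0.0))   # 1.0: perfect agreement with the model keeps full weight
print(weight(20.0))  # well below 1: a large Pearson residual is downweighted
print(weight(-0.9))  # 0.0: delta near -1 also receives zero weight
```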
Then, robust estimation can be based on
a Weighted Likelihood Estimating Equation (WLEE), defined as
\begin{equation}
\label{equ+wlee}
\sum_{i=1}^n w(\delta_n(\vect{y}_i); \vect{\theta}, \hat{F}_n) s(\vect{y}_i; \vect{\theta}) = 0 \ ,
\end{equation}
where $s(\vect{y}_i;\vect{\theta})$ is the individual contribution to the score function. Therefore,
weighted likelihood estimation can be thought of as a root-solving problem. Finding the solution of (\ref{equ+wlee}) requires an iterative weighting algorithm.
The corresponding weighted likelihood estimator $\hat{\vect{\theta}}^w$ (WLE) is consistent, asymptotically normal and fully efficient at the assumed model, under some general regularity conditions pertaining to the model, the kernel and the weight function \citep{markatou1998, agostinelli2001test, agostinelli2019weighted}. Its robustness properties have been established in \citet{lindsay1994} in connection with minimum disparity problems. It is worth remarking that, under very standard conditions, one can build a simple WLEE matching a minimum disparity objective function, hence inheriting its robustness properties.
In finite samples, the robustness/efficiency trade-off of weighted likelihood estimation can be tuned by varying the smoothing parameter $h$ in equation (\ref{eqn:parametric-density}). Large values of $h$ lead to Pearson residuals all close to zero and weights all close to one and, hence, large efficiency, since $\hat f_n(\vect{y})$ is stochastically close to the postulated model. On the other hand, small values of $h$ make $\hat f_n(\vect{y})$ more sensitive to the occurrence of outliers and the Pearson residuals become large for those data points that are in disagreement with the model. On the contrary, the shape of the kernel function $k(\vect{y};\vect{t},h)$ has a very limited effect.
As far as testing and the construction of confidence regions are concerned, weighted likelihood counterparts of the classical likelihood ratio test, and of its asymptotically equivalent Wald and score versions, can be established. Note that all of them share the standard asymptotic distribution at the model, according to the results stated in \citet{agostinelli2001test}, that is
$$
\Lambda(\vect{\theta})=2\sum_{i=1}^nw_i\left[\ell(\hat{\vect{\theta}}^w; \vect{y}_i)-\ell(\vect{\theta}; \vect{y}_i)\right]\stackrel{p}{\rightarrow}\chi^2_p \ ,
$$
with $w_i= w(\delta_n(\vect{y}_i); \hat{\vect{\theta}}^w, \hat{F}_n) $.
Profile tests can be obtained as well.
\section{A weighted CEM algorithm}
\label{sec:estimation}
As previously stated in the Introduction, \citet{nodehi2018estimation} provided effective iterative algorithms to fit a multivariate Wrapped normal distribution on the $p-$dimensional torus.
Here, robust estimation is achieved by a suitable modification of their CEM algorithm, consisting in a weighting step
before performing the M-step, in which data-dependent weights are evaluated according to (\ref{weight}) yielding a WLEE (\ref{equ+wlee}) to be solved in the M-step.
The construction of Pearson residuals in (\ref{residualfs2}) involves a multivariate Wrapped Normal kernel with covariance matrix $h \Lambda$. Since the family of multivariate Wrapped Normal is closed under convolution, then the smoothed model density is still Wrapped Normal with covariance matrix $\Sigma+h\Lambda$. Here, we set $\Lambda = \Sigma$ so that $h$ can be a constant independent of the variance-covariance structure of the data.
The weighted CEM algorithm is structured as follows:
\begin{itemize}
\item[0] {\bf Initialization}.
Starting values can be obtained by maximum likelihood estimation evaluated over a randomly chosen subset. The subsample size should be as small as possible, in order to increase the probability of obtaining an outlier-free initial subset, but large enough to guarantee estimation of the unknown parameters.
A starting solution for $\mu$ can be obtained by
the circular means,
whereas the diagonal entries of $\Sigma$ can be initialized as $-2\log(\hat{\rho}_r)$, where $\hat{\rho}_r$ is the sample mean resultant length, and the off-diagonal elements as
$\rho_c(\vect{y}_r, \vect{y}_s) \sigma_{rr}^{(0)} \sigma_{ss}^{(0)}$ ($r \neq s$), where $\rho_c(\vect{y}_r, \vect{y}_s)$ is the circular correlation coefficient, $r, s=1,2,\ldots,p$; see \citet[][p.~176, equation 8.2.2]{JammalamadakaSenGupta2001}.
To avoid dependence on the initial values, a simple and common strategy is to run the algorithm from a number of starting points, using the bootstrap root-search approach of \citet{markatou1998}. A criterion to choose among different solutions is illustrated in Section \ref{sec:examples}.
\item[1] {\bf E-step}. Based on the current parameter values, evaluate the posterior probabilities $$v_{i\vect{j}} = \frac{\phi(\vect{y}_i + 2 \pi \vect{j}; \Omega)}{\sum_{\vect{h} \in \mathbb{Z}^p} \phi(\vect{y}_i + 2 \pi \vect{h}; \Omega)} \ , \qquad \vect{j} \in \mathbb{Z}^p, \quad i=1,\ldots,n \ .$$
\item[2] {\bf C-step}. Set $\vect{j}_i = \arg\max_{\vect{h} \in \mathbb{Z}^p} v_{i\vect{h}}$, then put $v_{i\vect{j}}=1$ for $\vect{j}=\vect{j}_i$ and $v_{i\vect{j}}=0$ otherwise.
\item[3] {\bf W-step} (weighting step). Based on the current parameter values, compute the Pearson residuals according to (\ref{residualfs2}), based on a multivariate Wrapped Normal kernel with bandwidth $h$, and evaluate the weights as
\begin{equation*}
w_i=w(\delta_n(\vect{y}_i), \Omega, \hat F_n).
\end{equation*}
\item[4] {\bf M-step}. Update the parameters by maximizing the classification log-likelihood conditionally on $\hat{\vect{j}}_i$ $(i = 1,\ldots,n)$, which amounts to computing a weighted mean and a weighted variance-covariance matrix with weights $w_i$:
\begin{align*}
\hat{\vect{\mu}} &= \frac{\sum_{i=1}^n w_i \hat{\vect{x}}_i}{\sum_{i=1}^n w_i}, \\
\hat{\Sigma} &= \frac{\sum_{i=1}^n w_i (\hat{\vect{x}}_i - \hat{\vect{\mu}})(\hat{\vect{x}}_i - \hat{\vect{\mu}})^\top}{\sum_{i=1}^n w_i}.
\end{align*}
Note that, at each iteration, the classification algorithm also provides an estimate of the original unobserved sample, obtained as $\hat{\vect{x}}_i = \vect{y}_i + 2 \pi \hat{\vect{j}}_i$, $i = 1, \ldots, n$.
\end{itemize}
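A compact numerical sketch of the four steps above is given below, with $\mathbb{Z}^p$ truncated to $\{-J,\ldots,J\}^p$ as in the actual implementation. Note that the W-step here uses a simple Mahalanobis-distance downweighting rule as a stand-in for the Pearson-residual RAF weights of the paper, so the function is illustrative rather than a faithful reimplementation.

```python
import itertools
import numpy as np

def wrapped_cem(y, n_iter=50, J=3, tol=1e-6):
    """Illustrative sketch of the weighted CEM for a multivariate
    Wrapped Normal. Z^p is truncated to {-J, ..., J}^p, and the W-step
    uses a Mahalanobis downweighting rule as a placeholder for the
    Pearson-residual RAF weights used in the paper."""
    n, p = y.shape
    shifts = 2 * np.pi * np.array(
        list(itertools.product(range(-J, J + 1), repeat=p)))
    mu = np.arctan2(np.sin(y).mean(0), np.cos(y).mean(0))  # circular means
    Sigma = np.eye(p)
    w = np.ones(n)
    for _ in range(n_iter):
        # E-step + C-step: pick the most likely wrapping j_i for each y_i
        x_cand = y[:, None, :] + shifts[None, :, :]        # (n, m, p)
        P = np.linalg.inv(Sigma)
        d = x_cand - mu
        maha = np.einsum('nmp,pq,nmq->nm', d, P, d)
        x = x_cand[np.arange(n), maha.argmin(axis=1)]      # unwrapped sample
        # W-step (placeholder): downweight large Mahalanobis distances
        m2 = np.einsum('np,pq,nq->n', x - mu, P, x - mu)
        w = np.where(m2 <= 2.0 * p, 1.0, 2.0 * p / m2)
        # M-step: weighted mean and weighted covariance
        mu_new = (w[:, None] * x).sum(0) / w.sum()
        r = x - mu_new
        Sigma_new = (w[:, None, None] * (r[:, :, None] * r[:, None, :])).sum(0) / w.sum()
        done = max(np.abs(mu_new - mu).max(),
                   np.abs(Sigma_new - Sigma).max()) < tol
        mu, Sigma = mu_new, Sigma_new
        if done:
            break
    return np.mod(mu, 2 * np.pi), Sigma, w
```

On clean wrapped-normal data with moderate concentration, the location estimate recovers the true circular means; the placeholder weights make the covariance estimate slightly conservative in the tails.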
\section{Numerical studies}
\label{sec:simulations}
The finite sample behavior of the proposed weighted CEM has been investigated through numerical studies, each based on 500 Monte Carlo trials. Data have been drawn from a $WN_p(\vect{\mu},\Sigma)$ distribution. We set $\vect{\mu}=\vect{0}$, whereas, in order to account for the lack of affine equivariance of the Wrapped Normal model \citep{nodehi2018estimation}, we consider different covariance structures $\Sigma$ as in \citet{AgostinelliEtAll2015}.
In particular, for a fixed condition number $CN = 20$, we construct a random correlation matrix $R$, which is then converted into the covariance matrix $\Sigma = D^{1/2} R D^{1/2}$, with $D=\textrm{diag}(\sigma\vect{1}_p)$, where $\sigma$ is a chosen constant and $\vect{1}_p$ is a $p$-dimensional vector of ones.
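One simple recipe for such a covariance matrix, assumed here only for illustration (the paper follows the construction in \citet{AgostinelliEtAll2015}), draws a random orthogonal basis, fixes the eigenvalue spread, and normalizes to unit diagonal before applying $D^{1/2}$; note that the normalization to a correlation matrix can alter the condition number slightly.

```python
import numpy as np

def random_cov(p, cond=20.0, sigma=np.pi / 4, seed=0):
    """Random covariance Sigma = D^{1/2} R D^{1/2}, D = diag(sigma * 1_p),
    where R is a random correlation matrix built from a fixed eigenvalue
    spread (target condition number `cond`)."""
    rng = np.random.default_rng(seed)
    lam = np.linspace(1.0, cond, p)                 # eigenvalue spread
    Q, _ = np.linalg.qr(rng.normal(size=(p, p)))    # random orthogonal basis
    A = Q @ np.diag(lam) @ Q.T
    d = np.sqrt(np.diag(A))
    R = A / np.outer(d, d)                          # unit-diagonal (correlation)
    D_half = np.sqrt(sigma) * np.eye(p)
    return D_half @ R @ D_half
```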
Outliers have been obtained by shifting a proportion $\epsilon$ of randomly chosen data points by an amount $k_\epsilon$ in the direction of the smallest eigenvalue of $\Sigma$.
We consider sample sizes $n=50,100,500$, dimensions $p=2,5$, contamination level $\epsilon=0, 5\%, 10\%, 20\%$,
contamination size $k_\epsilon=\pi/4, \pi/2, \pi$ and $\sigma=\pi/8, \pi/4, \pi/2$.
For each combination of the simulation parameters, we compare the performance of the CEM and weighted CEM algorithms. The weights used in the W-step are computed using the Generalized Kullback-Leibler RAF in equation (\ref{eq:raf-gkl}) with $\tau = 0.1$. According to the strategy described in \cite{agostinelli2001test}, the bandwidth $h$ has been selected by setting $\Lambda = \Sigma$, so that $h$ is a constant independent of the scale of the model.
In particular, $h$ is chosen so that an outlying observation located at least three standard deviations away from the mean, in a componentwise fashion, is attached a weight not larger than 0.12 when the contamination rate is fixed at $20\%$. The algorithm has been initialized according to the root-search approach described in \cite{markatou1998}, based on $15$ subsamples of size $10$.
The weighted CEM is assumed to have reached convergence when at the $(k+1)$th iteration
$$
\max \left(\sqrt{2(1-\cos(\hat{\vect{\mu}}^{(k)}-\hat{\vect{\mu}}^{(k+1)}))}, \max|\hat{\Sigma}^{(k)}-\hat{\Sigma}^{(k+1)} | \right) <10^{-6}.
$$
The algorithm has been implemented so that $\mathbb{Z}^p$ is replaced by the Cartesian product $\times_{s=1}^p \vect{\mathcal{J}}$ where $\vect{\mathcal{J}} = (-J, -J+1, \ldots, 0, \ldots, J-1, J)$ for some $J$ providing a good approximation. Here we set $J=3$.
The algorithm has been implemented in \texttt{R} \citep{cran}; the code is available from the authors upon request.
Fitting accuracy has been evaluated according to
\begin{itemize}
\item[(i)] the average angle separation
\begin{equation*}
\operatorname{AS}(\hat{\vect{\mu}}) = \frac{1}{p} \sum_{i=1}^p (1 - \cos(\hat{\vect{\mu}}_i - \vect{\mu}_{i})) \ ,
\end{equation*}
which ranges in $[0, 2]$, for the mean vector;
\item[(ii)] the divergence
\begin{equation*}
\Delta(\hat{\Sigma}) = \operatorname{trace}(\hat{\Sigma} \Sigma^{-1}) - \log(| \hat{\Sigma} \Sigma^{-1} |) - p \ ,
\end{equation*}
for the variance-covariance matrix.
\end{itemize}
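Both accuracy measures translate directly into code; a minimal sketch:

```python
import numpy as np

def angle_separation(mu_hat, mu):
    """Average angle separation AS(mu_hat), which ranges in [0, 2]."""
    return np.mean(1.0 - np.cos(mu_hat - mu))

def sigma_divergence(Sigma_hat, Sigma):
    """Delta(Sigma_hat) = trace(M) - log|M| - p, with M = Sigma_hat Sigma^{-1};
    zero when Sigma_hat equals Sigma."""
    M = Sigma_hat @ np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - M.shape[0]
```

For instance, the angle separation attains its maximum of 2 for antipodal mean vectors, and the divergence of $2I$ against $I$ in dimension $p=2$ equals $4 - \log 4 - 2$.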
Here, we only report the results stemming from the challenging situation with $n=100$ and $p=5$.
Figure \ref{fig:one} displays the average angle separation, whereas Figure \ref{fig:two} gives the divergence measuring the accuracy in estimating the variance-covariance matrix, for the weighted CEM (in green) and the CEM (in red). The weighted CEM exhibits a fairly satisfactory fitting accuracy both under the assumed model (i.e., when the sample at hand is not corrupted by the occurrence of outliers) and under contamination. The robust method outperforms the CEM, especially in the estimation of the variance-covariance components. The algorithm results in biased estimates of both the mean vector and the variance-covariance matrix only for the large contamination rate $\epsilon=20\%$ combined with a small contamination size and a large $\sigma$. Actually, in this configuration outliers are not well separated from the bulk of genuine observations. A similar behavior has been observed for the other sample sizes.
Complete results are made available in the Supplementary Material.
\begin{figure}
\begin{center}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-100-1-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-100-1-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-100-1-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-100-2-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-100-2-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-100-2-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-100-3-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-100-3-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-100-3-3} \\
\end{center}
\caption{Distribution of average angle separation for $n=100$ and $p=5$ using weighted CEM (in green) and the CEM (in red). The contamination rate $\epsilon$ is given on the horizontal axis. Increasing contamination size $k_\epsilon$ from left to right, increasing $\sigma$ from top to bottom.}
\label{fig:one}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-100-1-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-100-1-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-100-1-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-100-2-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-100-2-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-100-2-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-100-3-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-100-3-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-100-3-3} \\
\end{center}
\caption{Distribution of the divergence measure for $n=100$ and $p=5$ using the weighted CEM (in green) and the CEM (in red). The contamination rate $\epsilon$ is given on the horizontal axis. Increasing contamination size $k_\epsilon$ from left to right, increasing $\sigma$ from top to bottom.}
\label{fig:two}
\end{figure}
\section{Real data example: Protein data}
\label{sec:examples}
The data under consideration \citep{Najibi2017} contain bivariate information about $63$ protein domains that were randomly selected from three remote Protein classes in the Structural Classification of Proteins (SCOP).
In the following,
we consider the data set corresponding to the 39th protein domain.
A bivariate Wrapped Normal has been fitted to the data at hand by using the weighted CEM algorithm, based on a Generalized Kullback-Leibler RAF with $\tau=0.25$ and $J=6$.
The tasks of bandwidth selection and initialization have been resolved according to the same strategy described above in Section~\ref{sec:simulations}.
The inspection of the data suggests the presence of at least a couple of clusters that make the data non-homogeneous.
Figure \ref{fig:three} displays the data on a flat torus together with fitted means and $95\%$ confidence regions corresponding to three different roots of the WLEE (that are illustrated by different colors): one root gives location estimate $\vect{\mu}_1=(1.85, 2.34)$ and a positive correlation $\rho_1=0.79$; the second root gives location estimate $\vect{\mu}_2=(1.85, 5.86)$ and a negative correlation $\rho_2=-0.80$; the third root gives location estimate $\vect{\mu}_3=(1.61, 0.88)$ and correlation $\rho_3=-0.46$.
The first and second roots are very close to maximum likelihood estimates obtained in different directions when unwrapping the data: this is evident from the shift in the second coordinate of the mean vector and the change in the sign of the correlation. In both cases the weights are larger than 0.5 for all but a few observations, corresponding to the most extreme data points, as displayed in the first two panels of Figure \ref{fig:four}. In neither case does the bulk of the data correspond to a homogeneous sub-group. On the contrary, the third root is able to detect a homogeneous substructure in the sample, corresponding to the densest region in the data configuration.
Almost half of the data points are attached a weight close to zero, as shown in the third panel of Figure \ref{fig:four}.
These findings confirm the ability of the weighted likelihood methodology to exploit such uneven weight patterns as a diagnostic of hidden substructures in the data.
In order to select one of the three roots we have found, we consider the strategy discussed in \cite{agostinelli2006}, that is, we select the root leading to the lowest fitted probability
\begin{equation*}
\mathrm{Prob}_{\hat\Omega}\left(\delta_n(\vect{y}; \hat{\Omega}, \hat F_n)<-0.95\right) \ .
\end{equation*}
This probability has been estimated by drawing 5000 samples from the fitted bivariate Wrapped Normal distribution for each of the three roots. The criterion correctly leads to choosing the third root, for which an almost null probability is obtained, whereas the fitted probabilities for the first and second roots are 0.204 and 0.280, respectively.
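The selection rule can be sketched as a small Monte Carlo routine; here \texttt{delta\_fn} (the Pearson residuals under the fitted root) and \texttt{sample\_fn} (a sampler from the fitted distribution) are placeholders assumed to be supplied by the surrounding code.

```python
import numpy as np

def root_selection_prob(delta_fn, sample_fn, n_mc=5000, thresh=-0.95, seed=0):
    """Monte Carlo estimate of Prob(delta(y) < thresh) under the fitted
    model; among candidate roots, the one with the smallest estimated
    probability is selected."""
    rng = np.random.default_rng(seed)
    y = sample_fn(rng, n_mc)                 # draw from the fitted model
    return float(np.mean(delta_fn(y) < thresh))
```

As a sanity check, with standard normal "residuals" the routine returns roughly $\Phi(-0.95)\approx 0.17$.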
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.9\textwidth,height=0.5\textheight]{ellipse}
\end{center}
\caption{Protein data. Fitted means ($+$) and $95\%$ confidence regions corresponding to three different roots from weighted CEM ($J=6$).}
\label{fig:three}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.9\textwidth, height=0.5\textheight]{weightsL6}
\end{center}
\caption{Protein data. Weights corresponding to three different roots from weighted CEM.}
\label{fig:four}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this paper an effective strategy for robust estimation of a multivariate Wrapped Normal distribution on the $p$-dimensional torus has been presented. The method inherits the good computational properties of the CEM algorithm developed in \cite{nodehi2018estimation}, jointly with the robustness stemming from the use of Pearson residuals and the weighted likelihood methodology. In this respect, the opportunity to work with a family of distributions that is closed under convolution is particularly appealing, since it allows one to parallel the procedure that would be used on the real line with the multivariate normal distribution.
The proposed weighted CEM works satisfactorily, at least in small to moderate dimensions, both on synthetic and real data. It is worth stressing that
the method can be easily extended to other multivariate wrapped models.
\section*{Additional Numerical results}
\label{sm:sec:numericalstudy}
Figures \ref{fig:sm:1}, \ref{fig:sm:3} and \ref{fig:sm:5} show the angle separation, whereas Figures \ref{fig:sm:2}, \ref{fig:sm:4} and \ref{fig:sm:6} display the accuracy in estimating the variance-covariance components, for $p=2$ and $n = 50, 100, 500$, respectively.
\begin{figure}
\begin{center}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-50-1-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-50-1-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-50-1-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-50-2-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-50-2-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-50-2-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-50-3-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-50-3-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-50-3-3} \\
\end{center}
\caption{Distribution of angle separation for $n=50$ and $p=2$ using weighted CEM (in green) and the CEM (in red). The contamination rate $\epsilon$ is given on the horizontal axis. Increasing contamination size $k_\epsilon$ from left to right, increasing $\sigma$ from top to bottom.}
\label{fig:sm:1}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-50-1-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-50-1-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-50-1-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-50-2-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-50-2-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-50-2-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-50-3-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-50-3-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-50-3-3} \\
\end{center}
\caption{Distribution of the divergence measure for $n=50$ and $p=2$ using the weighted CEM (in green) and the CEM (in red). The contamination rate $\epsilon$ is given on the horizontal axis. Increasing contamination size $k_\epsilon$ from left to right, increasing $\sigma$ from top to bottom.}
\label{fig:sm:2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-100-1-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-100-1-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-100-1-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-100-2-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-100-2-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-100-2-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-100-3-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-100-3-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-100-3-3} \\
\end{center}
\caption{Distribution of angle separation for $n=100$ and $p=2$ using weighted CEM (in green) and the CEM (in red). The contamination rate $\epsilon$ is given on the horizontal axis. Increasing contamination size $k_\epsilon$ from left to right, increasing $\sigma$ from top to bottom.}
\label{fig:sm:3}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-100-1-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-100-1-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-100-1-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-100-2-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-100-2-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-100-2-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-100-3-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-100-3-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-100-3-3} \\
\end{center}
\caption{Distribution of the divergence measure for $n=100$ and $p=2$ using the weighted CEM (in green) and the CEM (in red). The contamination rate $\epsilon$ is given on the horizontal axis. Increasing contamination size $k_\epsilon$ from left to right, increasing $\sigma$ from top to bottom.}
\label{fig:sm:4}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-500-1-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-500-1-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-500-1-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-500-2-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-500-2-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-500-2-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-500-3-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-500-3-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-2-500-3-3} \\
\end{center}
\caption{Distribution of angle separation for $n=500$ and $p=2$ using weighted CEM (in green) and the CEM (in red). The contamination rate $\epsilon$ is given on the horizontal axis. Increasing contamination size $k_\epsilon$ from left to right, increasing $\sigma$ from top to bottom.}
\label{fig:sm:5}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-500-1-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-500-1-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-500-1-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-500-2-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-500-2-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-500-2-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-500-3-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-500-3-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-2-500-3-3} \\
\end{center}
\caption{Distribution of the divergence measure for $n=500$ and $p=2$ using the weighted CEM (in green) and the CEM (in red). The contamination rate $\epsilon$ is given on the horizontal axis. Increasing contamination size $k_\epsilon$ from left to right, increasing $\sigma$ from top to bottom.}
\label{fig:sm:6}
\end{figure}
Figure \ref{fig:sm:7} shows the angle separation, whereas Figure \ref{fig:sm:8} displays the accuracy in estimating the variance-covariance components, for $p=5$ and $n = 500$.
\begin{figure}
\begin{center}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-500-1-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-500-1-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-500-1-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-500-2-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-500-2-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-500-2-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-500-3-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-500-3-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{cosmu-5-500-3-3} \\
\end{center}
\caption{Distribution of angle separation for $n=500$ and $p=5$ using weighted CEM (in green) and the CEM (in red). The contamination rate $\epsilon$ is given on the horizontal axis. Increasing contamination size $k_\epsilon$ from left to right, increasing $\sigma$ from top to bottom.}
\label{fig:sm:7}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-500-1-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-500-1-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-500-1-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-500-2-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-500-2-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-500-2-3} \\
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-500-3-1}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-500-3-2}
\includegraphics[height=0.3\textheight, width=0.3\textwidth]{div1-5-500-3-3} \\
\end{center}
\caption{Distribution of the divergence measure for $n=500$ and $p=5$ using the weighted CEM (in green) and the CEM (in red). The contamination rate $\epsilon$ is given on the horizontal axis. Increasing contamination size $k_\epsilon$ from left to right, increasing $\sigma$ from top to bottom.}
\label{fig:sm:8}
\end{figure}
\section{Introduction}
Isolation, commonly using OS processes, is a cornerstone abstraction
for security. It allows us to isolate and limit software compromises
to one {\em fault domain} within an application and is the basis for
implementing the design principle of privilege separation. In the last
few years, user-level enclaves have become available in commodity CPUs
that support TEEs. A prime example of enclaved TEEs is Intel
SGX~\cite{sgxexplained}. Conceptually, enclaves are in sharp contrast
to processes in that they do not trust the privileged OS, promising a
drastic reduction in the TCB of a fault domain. This is why the design
of enclaved TEEs is of fundamental importance to security.
One of the big challenges with using today's enclaves is {\em
performance}. For example, many prior efforts have reported $1$--$2$
orders of magnitude slowdowns when supporting common
applications on SGX~\cite{ratel,graphene-sgx, scone, haven,occlum}.
This raises the question whether one can design enclaved TEEs which
have substantially better performance.
As a step towards this goal, we point towards one of the key
abstractions provided by enclaved TEEs---their memory model. The memory
model used in several existing TEEs
~\cite{armtrustzone,amd-sev,ferraiuolo2017komodo,
sgx,mckeen2016sgx,costan2016sanctum}, including SGX,
which originates from a long line of prior
works~\cite{bastion,overshadow}, follows what we call the {\em spatial
isolation model}. In this model, the virtual memory of the enclave is
statically divided into two types: {\em public} and {\em private}
memory
regions. These types are fixed throughout the region's lifetime. The
spatial isolation model is simple but rigid, as its
underlying principle breaks compatibility with the most basic of data
processing patterns, where data needs to be privately computed on before
being made public or shared externally. In the spatial model,
traditional applications will need
to create multiple data copies when sharing across enclave
boundaries, and additionally encrypt data, if they desire security
from an untrusted OS. Consequently, to support abstractions like
shared memory, pipes, fast synchronization, IPC, file I/O, and others
on spatially-isolated memory, data has to be copied between public and
private memory regions frequently. This results in very high
overheads, a phenomenon reported in many frameworks
trying to re-enable compatibility on TEEs that use the spatial
model~\cite{ratel,occlum,graphene-sgx,scone,haven,dayeol2020keystone}.
In this work, we revisit the spatial isolation memory model adopted by
modern TEEs. We propose a new memory model called \textsc{Elasticlave}\xspace which
allows enclaves to share memory across enclaves and with the OS, with
more flexible permissions than in spatial isolation.
While allowing flexibility, \textsc{Elasticlave}\xspace does not make any simplistic
security assumptions or degrade its security guarantees over the
spatial isolation model. We view enclaves as a fundamental
abstraction for partitioning applications in this work, and therefore,
we assume that enclaves do {\em not} trust each other and can become
compromised during their lifetime.
The \textsc{Elasticlave}\xspace design directly eliminates the need for expensive data
copy operations, which are necessary in the spatial isolation model to
ensure security. The end result is that \textsc{Elasticlave}\xspace offers
$10\times$--$100\times$ better performance than spatially-isolated
TEEs with the same level of application-desired security.
The main challenge in designing \textsc{Elasticlave}\xspace is providing sufficient
flexibility in defining security over shared memory regions, while
{\em minimizing complexity}. Relaxing the spatial isolation model such
that it allows enclaves to privately share memory between them,
without trusting the OS as an intermediary, requires careful
consideration.
In particular, we want to allow enclaves to share a memory region and
be able to alter their permissions on the region over time, thereby
eliminating the need to create private copies. The permission
specification mechanism should be flexible enough to allow non-faulty
(uncompromised) enclaves to enforce any desirable sequence of
permission changes on the region which the application demands. At the
same time, we do {\em not} want the compromised OS or any other
enclaves that may have become compromised during runtime to be able to
escalate their privileges arbitrarily, beyond what was initially
agreed upon.
For instance, simply providing the equivalent of the traditional
shared memory and IPC interfaces (e.g., POSIX) can leave several
avenues of attacks unaddressed. The untrusted OS or
compromised enclaves may modify/read shared memory out of turn, create
TOCTOU attacks, intentionally create race conditions, re-delegate
permissions, and so on.
Thus, the \textsc{Elasticlave}\xspace interface is designed with abstractions that relax
the spatial model {\em minimally}. Further, a simple interface design
makes it easy to analyze the final security, and simultaneously, keeps
the implementation impact small.
We implement our design on an open-source, RTL-level \mbox{RISC-V}\xspace
$800$~MHz{}
processor~\cite{rocket}.
We evaluate performance and chip area impact of \textsc{Elasticlave}\xspace using a
cycle-level simulator~\cite{DBLP:conf/isca/KarandikarMKBAL18} on
several synthetic as well as real-world benchmarks. We observe that \textsc{Elasticlave}\xspace enables performance improvements of {\em $1$--$2$ orders of magnitude} over the spatial isolation model implemented in the same
processor configuration. We also show that \textsc{Elasticlave}\xspace has a modest cost
on implementation complexity. First, we show that the additional TCB
is less than $7,000$ LoC.
Second, our benchmarking highlights that the performance overhead is
affected primarily by the number of enclave-to-enclave context
switches, i.e., it is independent of the size of shared data in a
region. Further, the increased hardware register pressure due to
\textsc{Elasticlave}\xspace does {\em not} increase the critical path of the synthesized
core for all tested configurations.
Third, the hardware area footprint scales well with the maximum number
of shared regions \textsc{Elasticlave}\xspace is configured to support. Specifically,
our estimated hardware area increase is below $1\%$ of our baseline
\mbox{RISC-V}\xspace processor, for every $8$ additional shared memory regions
\textsc{Elasticlave}\xspace TEE is configured to support.
\paragraph{Contributions.}
The paper proposes a new memory model for enclaved TEEs called
\textsc{Elasticlave}\xspace. We show that its design can result in significantly better
performance than the spatial isolation model. We offer a prototype
implementation on a \mbox{RISC-V}\xspace processor, with a modest hardware area impact.
\section{Problem}
\label{sec:problem}
A TEE provides the abstraction of enclaves to isolate components of an application, which run with user-level privileges.
The TEE implementation (privileged hardware) is trusted and assumed to be bug-free.\footnote{TEEs are typically implemented in hardware and firmware. Our TEE implementation uses \mbox{RISC-V}\xspace hardware features along with a privileged software monitor, executing in the firmware-equivalent privileged software layer.}
We want to design an efficient memory model for TEEs that support enclaves.
In our setup, a security-sensitive application is partitioned into multiple potentially
compromised (or faulty) components. Each component runs in a
separate enclave created by the TEE, which serves as a basic isolation primitive.
Enclaves are assumed to be {\em mutually-distrusting}, since they can
be compromised by the adversary during their execution, e.g., due to
software exploits. This assumption is of fundamental importance, as it
captures the essence of why the application is partitioned to begin
with. The memory model adopted by Intel SGX serves as a tangible
baseline to explain how a rigid security models can induce prohibitive
performance costs.
\begin{figure*}[!ht]
\centering
\subfloat[Producer-consumer\label{fig:problems-prod-cons}]{
\raisebox{-0.5\height}{\includegraphics[scale=0.33]{prod-cons-compact.pdf}}
}
\hspace*{0.05\linewidth}
\subfloat[Proxy\label{fig:problems-proxy}]{
\raisebox{-0.5\height}{\includegraphics[scale=0.35]{proxy-compact.pdf}}
}
\hspace*{0.05\linewidth}
\subfloat[Client-server\label{fig:problems-client-server}]{
\raisebox{-0.5\height}{\includegraphics[scale=0.35]{server-client-compact.pdf}}
}
\caption{Spatial ShMem\xspace baseline cost.
Gray color indicates public memory;
double-line border indicates the trusted coordinator.}
\label{fig:problems}
\end{figure*}
\subsection{Baseline: The Spatial Isolation Model}
\label{sec:baseline-model}
Most enclaves available on commodity processors use a memory model
which we call the \emph{spatial isolation}
model~\cite{sgx,sgx2014progref, bastion, xom, aegis, mi6}, including Intel SGX, which follows many prior
proposals~\cite{bastion,overshadow}. In this model, each enclave
comprises two different types of non-overlapping virtual memory regions:
\begin{enumerate}
\item \emph{Private memory:}
exclusive to the enclave itself and inaccessible to all other
enclaves running on the system.
\item \emph{Public memory:}
fully accessible to the enclave and the untrusted OS, which may then
share it with other enclaves.
\end{enumerate}
The spatial model embodies the principle of dividing trust in an ``all
or none'' manner~\cite{ratel}. For each enclave, every other enclave
is fully trusted to access the public memory, whereas the private
memory is accessible only to the enclave itself.
This principle is in sharp contrast to any form of memory sharing,
which is extensively used when an enclave wants to exchange data with
the outside world, including with other enclaves. Memory sharing is
key to efficiency in I/O operations, inter-process communication,
multi-threading, memory mapping, signal-handling, and other standard
abstractions. Although shared memory cannot be implemented directly
in the spatial isolation model, it can be {\em simulated}
with message-passing abstractions instead. To discuss the limitations
of the spatial isolation concretely, we present a baseline for
implementing shared memory functionality in the spatial model next.
We refer to this baseline as the {\em spatial ShMem\xspace baseline.} Note
that this baseline is frequently utilized in many prior frameworks
that offer compatibility with Intel SGX~\cite{ratel, scone, graphene-sgx}.
\paragraph{The Spatial ShMem\xspace Baseline.}
This baseline simulates a shared memory abstraction between two
spatially isolated enclaves.
Observe that the two enclaves can keep a private copy of the shared
data. However, as the enclaves do not trust each other they cannot
access each other's local copy. Therefore, the shared data must
either reside in public memory, which may be mapped in the address
space of both the enclaves, or they must use message-passing (e.g., via
RPC) which itself must use the public memory. Any data or exchanged
messages using public memory are exposed to the untrusted OS.
Therefore, the spatial ShMem\xspace baseline requires a cryptographically secure
channel to be implemented on top of the public memory. Specifically,
the two enclaves encrypt data and messages before writing them to
public memory and decrypt them after reading them. We call this
a {\em secure public memory}. We can assume that the cryptographic
keys are pre-exchanged or pre-established securely by the enclaves.
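The encrypt-before-write, decrypt-after-read discipline of the secure public memory can be sketched as follows. This is an illustrative Python model only: the hash-derived keystream, the encrypt-then-MAC layout, and the key handling are our assumptions for exposition, not a prescribed (or production-grade) construction.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes by hashing key || nonce || counter.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, msg: bytes) -> bytes:
    # Encrypt-then-MAC before the message ever touches public memory.
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(msg, keystream(key, nonce, len(msg))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unseal(key: bytes, blob: bytes) -> bytes:
    # Verify integrity first; only then decrypt what was read back.
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expect = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("public-memory blob was tampered with")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)                     # pre-established between the enclaves
public_memory = seal(key, b"shared record")   # producer writes sealed data
assert unseal(key, public_memory) == b"shared record"  # consumer reads it back
```

Any modification of the sealed blob by the untrusted OS is caught by the MAC check in `unseal`, which models the integrity guarantee the secure channel must provide.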
A secure public memory is not sufficient to implement a shared memory
abstraction in the spatial ShMem\xspace baseline. Concurrently executing
enclaves may access data simultaneously and such accesses may require serialization in order to
maintain typical application consistency guarantees.
Notice that reads and writes to the secure public channel involve
encryption and decryption sub-steps, whose atomicity is not
guaranteed by the TEE. Standard synchronization primitives
such as semaphores, counters, and futexes---which often rely on
OS-provided atomicity---are not trustworthy in the enclave threat model
we consider. Therefore, one simple way to serialize access is to use a
trusted mediator or coordinator enclave.
In the spatial ShMem\xspace baseline, we designate a third enclave as a
{\em trusted coordinator}. For achieving memory consistency, accesses
to shared memory are simulated with message-passing, i.e., read/writes
to shared memory are simulated with remote procedure calls to the
trusted coordinator, which implements the ``shared'' memory by
keeping its content in its private memory. For example, to implement a
shared counter, the trusted coordinator enclave keeps the counter in
its private memory, and the counter-party enclave can send messages to
the trusted coordinator for state updates.
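The shared-counter example reduces to a few lines of Python; the class and method names below are hypothetical, and the RPC round-trip is collapsed into a plain method call for brevity.

```python
class CoordinatorEnclave:
    """Trusted coordinator: keeps the 'shared' state in private memory
    and serializes all accesses, since every access is one RPC."""
    def __init__(self):
        self._counter = 0          # lives only in the coordinator's private memory

    def rpc(self, op: str) -> int:
        # Each shared-memory read/write becomes a message round-trip.
        if op == "incr":
            self._counter += 1
        return self._counter       # any op returns the current state

coord = CoordinatorEnclave()
for _ in range(3):
    coord.rpc("incr")              # counter-party enclave A
coord.rpc("incr")                  # counter-party enclave B
assert coord.rpc("read") == 4
```

The serialization comes for free here because the coordinator processes one RPC at a time; the price, as the surrounding text notes, is that every access pays a full message round-trip plus the secure-channel costs.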
We have assumed in our baseline that the coordinator is not faulty or
compromised. Attacks on the trusted coordinator can subvert the
semantic correctness of the shared memory abstraction. One can
consider augmenting this baseline to tolerate faulty coordinators
(e.g., using BFT-based mechanisms). But these mechanisms would only {\em increase} the performance costs and latencies,
reducing the overall throughput.
\subsection{Illustrative Performance Costs}
\label{sec:pattern-costs}
The spatial ShMem\xspace baseline is significantly more expensive to
implement than the original shared memory abstraction in a non-enclave
(native) setting. We refer readers to Section~\ref{sec:eval} for the
raw performance costs of the spatial ShMem\xspace baseline over the native.
The overheads can be $1$-$2$ orders of magnitude higher. This is
primarily because of the encryption-decryption steps and additional memory
copies that are inherent in the implementation of secure channel and
trusted coordinator. Several recent works have reported these costs
over hundreds of programs on the Intel SGX platform~\cite{occlum,
graphene-sgx, scone, ratel}. For instance, Occlum reports overheads of up
to $14\times$ compared to native Linux execution~\cite{occlum}. We
present $3$ representative textbook patterns of data sharing that ubiquitously arise
in real-world applications and illustrate how spatial isolation incurs
such significant cost.
\paragraph{Pattern 1: Producer-Consumer.}
In this pattern, the producer enclave writes a stream of
data objects to shared memory for a consumer enclave to read and
process. Several applications use this for signaling completion of
sub-steps of a larger task, such as in MapReduce~\cite{vc3, m2r}. For
supporting this pattern with the spatial ShMem\xspace baseline, the producer
has to copy its output data to public memory first and then the
consumer enclave copies it to private memory. In summary, at least $2$
additional copies of the original shared data are created. Further,
the data is encrypted and decrypted once leading to $2$ compute
operations per byte and $1$ private copy in the trusted coordinator.
Figure~\ref{fig:problems-prod-cons} depicts the steps.
\paragraph{Pattern 2: Proxy.}
Many applications serve as an intermediate proxy between a producer and
a consumer. For example, consider packet monitoring/filtering
applications like \texttt{Bro}, \texttt{snort}, or \texttt{bpf}, which
modify the data shared between two end-point
applications. Proxy designs can be implemented using two instances of
the producer-consumer pattern, where the proxy acts as the consumer
for the first instance and producer for the second. However, in
practice, proxies often optimize by performing {\em in-place} changes
to the shared data rather than maintaining separate queues with the
end points~\cite{vif, sgx-box}. Such in-place memory processing is not
compatible with the spatial memory model. Applications which
originally use this pattern must incur additional memory copies. The
data stream must be placed in public memory, so that it can be passed to
the proxy enclave that acts as a trusted coordinator. But at the same
time, the proxy cannot operate on public memory in-place, or else it
would risk modifications by other enclaves. Therefore, there are at
least $2$ memory copies of the $2$ original shared data contents,
totaling $4$ copies when supporting this pattern with the spatial
ShMem\xspace baseline, as shown in Figure~\ref{fig:problems-proxy}. Further,
the data is encrypted and decrypted twice leading to $4$ compute
operations per byte.
\paragraph{Pattern 3: Client-Server.}
Two enclaves, referred to as a client and a server, read and write shared
data to each other in this pattern. Each enclave reads the data
written by the other, performs private computation on it, and writes
the computed result back to the shared memory. As explained, the
shared memory abstraction cannot directly be implemented with data
residing in a shared region of public memory since the OS and other
enclaves on the system are not trusted. For supporting such sharing
patterns, there will be at least $4$ data copies---one in server
private memory, one in client private memory, and two for passing data
between them via public memory. Further, the data is encrypted and
decrypted twice leading to $4$ compute operations per byte
(Figure~\ref{fig:problems-client-server}).
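The per-pattern costs above can be tallied with a small cost model. The function and dictionary layout below are ours; the values reproduce the spatial-isolation columns of Table~\ref{tab:overheads_summary}, assuming a region of $L$ memory words with each word accessed once.

```python
def baseline_cost(pattern: str, L: int) -> dict:
    # Number of encrypt/decrypt round-trips through secure public memory:
    # producer-consumer and client-server need one, the proxy needs two.
    rounds = {"producer-consumer": 1, "proxy": 2, "client-server": 1}[pattern]
    return {
        "enc":    rounds * L,       # encryption operations
        "dec":    rounds * L,       # decryption operations
        "copies": 3 * rounds * L,   # data copies (public + both private sides)
    }

assert baseline_cost("producer-consumer", 10) == {"enc": 10, "dec": 10, "copies": 30}
assert baseline_cost("proxy", 10) == {"enc": 20, "dec": 20, "copies": 60}
```

Every entry is linear in $L$, which is exactly the asymptotic gap against the constant instruction counts in the \textsc{Elasticlave}\xspace column of the table.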
\begin{table}[!t]
\centering
\resizebox{0.4\textwidth}{!}{
\begin{tabular}{@{}lllllc@{}}
\toprule
\multicolumn{2}{c}{\multirow{2}{*}{\textbf{Pattern}}} & \multicolumn{3}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Spatial\end{tabular}}} & \textbf{\textsc{Elasticlave}\xspace} \\ \cmidrule(l){3-6}
\multicolumn{2}{c}{} & \textbf{Enc} & \textbf{Dec} & \textbf{Cpy} & \textbf{Instructions} \\ \midrule
1 & Producer-Consumer & $L$ & $L$ & $3 \cdot L$ & $2$ \\
2 & Proxy & $2 \cdot L$ & $2 \cdot L$ & $6 \cdot L$ & $4$ \\
3 & Client-Server & $L$ & $L$ & $3 \cdot L$ & $2$ \\ \bottomrule
\end{tabular}
}
\caption{Data sharing overheads of spatial isolation vs.
\textsc{Elasticlave}\xspace. $L$: data size (memory words) in the shared region.
}
\label{tab:overheads_summary}
\end{table}
\paragraph{Summary.} The spatial ShMem\xspace baseline requires
multiple data copies (see Figure~\ref{fig:problems}) to avoid attacks from the OS. Table~\ref{tab:overheads_summary} summarizes the encrypt/decrypt and copy operations incurred in our presented data patterns, assuming a region of $L$ memory words is shared and each word
is accessed once.
\subsection{Problem Formulation}
\label{sec:desired-properties}
Spatial isolation forces a rigid memory model. The
permissions of a memory region cannot change over time. The authority
which controls the permissions is fixed, i.e., the OS for public
memory and an enclave for private memory, regardless of the trust
model desired by the application.
We ask the following research question: {\em Does there exist a
minimal relaxation of the spatial model, which retains its
security guarantees, while eliminating its performance bottlenecks?}
\paragraph{Security Model.}
We assume that the OS is arbitrarily malicious and untrusted. The target application is partitioned into enclaves, which share one or more regions of memory. Each enclave has a {\em pre-agreed set of permissions}, which the application desires for its legitimate functionality. This set
does not change, and in a sense, is the maximum set of permissions an
enclave needs for a region at any point in the region's lifetime.
Any subset of enclaves can become compromised during the execution. We
refer to compromised enclaves as {\em faulty} which can behave arbitrarily.
Beyond better performance, there are $2$ security properties we desire from our TEE. First, the TEE interface does not allow faulty (and non-faulty) enclaves to escalate their privileges beyond the pre-agreed set. The second property, loosely speaking, ensures that faulty enclaves cannot obtain access permissions to the shared region outside of the sequence that non-faulty enclaves desire to enforce. We detail these $2$ properties in Section~\ref{sec:security_discussion}.
We additionally desire two soft goals.
\paragraph{Goal 1: Flexibility vs. Security.}
We aim to design a TEE memory model that offers security
comparable or better than our proposed baseline. A naive design, which
allows unfettered flexibility to control a region's permissions, can
expose enclaves to a larger attack surface than the baseline. Enclaves
may maliciously compete to become an arbiter of permissions for a
region. It may be difficult to enforce a single consistent global view
of the permissions that each enclave has to a shared region, if
permissions can change dynamically. This in turn may create TOCTOU
bugs, since enclaves may end up making trust decisions based on a
stale view of another enclave's current permissions.
Therefore, our goal is to strike a balance between flexibility
and security.
\paragraph{Goal 2: Minimizing Implementation Complexity.}
Enabling a new memory model may incur significant
implementation complexity. A complex memory model could introduce
expensive security metadata in hardware, increase the number of
signals, and introduce a large number of instructions. These can
directly lead to performance bottlenecks in the hardware
implementation, or have an unacceptable cost in chip area or power
consumption. Our goal is thus to keep the memory model {\em simple}
and minimize implementation complexity.
\paragraph{Scope.}
The TEE implementation is assumed to be bug-free. We aim to provide integrity,
confidentiality, and access control for shared memory data. We do not
aim to provide availability; hence, denial-of-service (DoS) attacks on
shared memory are not in scope, since the OS may legitimately kill an
enclave or revoke access to memory. Further, our focus is on defining
a memory interface---micro-architectural implementation flaws and
side-channels are out of our scope. Lastly, if the TEE wishes to
safeguard the physical RAM or bus interfaces, it may require
additional defenses (e.g., memory encryption), which
are orthogonal and not considered here.
\begin{figure*}[!tb]
\centering
\subfloat[Producer-consumer (two-way
isolated) \label{fig:prod-cons-soln2}]{
\raisebox{-0.5\height}{\includegraphics[scale=0.325]{prod-cons-soln-1}}
}
\hfill
\subfloat[Proxy\label{fig:proxy-soln}]{
\raisebox{-0.5\height}{\includegraphics[scale=0.325]{proxy-soln-1}}
}
\hfill
\subfloat[Client-server\label{fig:client-server-soln}]{
\raisebox{-0.5\height}{\includegraphics[scale=0.325]{client-server-soln-1}}
}
\caption{Data sharing patterns with \textsc{Elasticlave}\xspace.
}
\label{fig:data-patt-solutions}
\end{figure*}
\section{The \textsc{Elasticlave}\xspace Memory Interface}
\label{sec:memory-interface}
\textsc{Elasticlave}\xspace is a relaxation of the spatial isolation model, i.e.,
it allows enclaves to share memory regions more
flexibly.
\subsection{Overview}
\label{sec:overview}
\textsc{Elasticlave}\xspace
highlights the importance of $3$ key first-class abstractions that allow interacting enclaves to:
(a)~have individual {\em asymmetric
permission views} of shared memory regions, i.e., every enclave can
have its own local view of its memory permissions;
(b)~{\em dynamically} change these permissions as long as
they do not exceed a pre-established maximum; and
(c)~obtain {\em exclusive} access rights over shared memory regions,
and transfer it atomically in a controlled manner to other enclaves.
As a quick point of note, we show that the above three abstractions
are sufficient to drastically reduce the performance costs highlighted
in Section~\ref{sec:pattern-costs}. In particular,
Table~\ref{tab:overheads_summary} shows that in \textsc{Elasticlave}\xspace, the number
of instructions is a small constant and
the number of data copies reduces to $1$ in all cases.
In contrast, the spatial
ShMem\xspace baseline requires operations linear in the size $L$ of the
shared data accessed. We
explain why this reduction is achieved in
Section~\ref{sec:patterns-solution}; a brief glance at Figure~\ref{fig:data-patt-solutions} shows how the $3$ patterns can be implemented with a single shared memory copy, if the abstractions (a)-(c) are available. Notice how enclaves require different permission limits in
their views, which sometimes need to be exclusive, and how permissions change over time. For security, it is necessary that accesses made by enclaves are serialized in the particular (shown) order.
Our recommendation for the $3$ specific \textsc{Elasticlave}\xspace abstractions is
intentional. Our guiding principle is simplicity and security---one could easily relax the spatial memory model further, but this comes at the peril of subtle security issues and increased implementation complexity. We discuss these considerations in Section~\ref{sec:security_discussion} after our design details.
\begin{figure*}[!ht]
\centering
\raisebox{-0.5\height}{\begin{minipage}[b]{0.68\linewidth}
{ \footnotesize
\begin{tabularx}{\linewidth}{XXX}
\toprule
\textbf{Instruction} &
\textbf{Permitted Caller} & \textbf{Semantics}\\
\midrule
\texttt{uid =
create(size)} & owner of \texttt{uid} & create a region \\
\texttt{err = map(vaddr, uid)} & accessor of \texttt{uid} & map
VA range to a region \\
\texttt{err = unmap(vaddr, uid)} & accessor of \texttt{uid} & remove
region mapping \\
\texttt{err = share(uid, eid, P)} & owner of \texttt{uid} &
share region with an
enclave \\
\texttt{err = change(uid, P)} & accessor of
\texttt{uid} & adjust the actual access permission to a memory
region \\
\texttt{err = destroy(uid)} & owner of \texttt{uid} & destroy a memory
region\\
\texttt{err = transfer(uid, eid)} & current lock holder & transfer lock to another
accessor \\
\bottomrule
\end{tabularx}
}
\captionof{table}{Summary of security instructions in \textsc{Elasticlave}\xspace.}
\label{tab:instructions}
\end{minipage}}
\hfill
\raisebox{-0.5\height}
{\begin{minipage}[b]{0.3\linewidth}
\centering
\includegraphics[width=0.65\linewidth]{lattice.pdf}
\captionof{figure}{Lattice for the permission hierarchy, or $\leq$ relation for permissions.}
\label{fig:permission_lattice}
\end{minipage}}
\end{figure*}
\subsection{\textsc{Elasticlave}\xspace Abstractions}
\textsc{Elasticlave}\xspace relaxes spatial isolation by allowing enclaves to define and
change permissions of regions shared externally. Our design works at
the granularity of {\textsc{Elasticlave}\xspace} {\em memory regions}. These
regions are an abstraction defined by our model; each region maps to a
contiguous range of virtual memory addresses in the enclave. From the view of each enclave, an \textsc{Elasticlave}\xspace memory region has $4$ permission bits: the standard {\em read}, {\em
write}, and {\em execute} bits, and a protection {\em lock} bit.
For each memory region, we have two types of enclaves.
The first are {\em owners}, who have the sole privilege to create,
destroy, and initiate sharing of regions. The second kind of enclaves
are {\em accessors}. Owners can share regions and grant permissions to
accessors only for the regions they own. An enclave can be both an
owner and an accessor of a region.
\textsc{Elasticlave}\xspace gives $3$ first-class abstractions: {\em
asymmetry}, {\em dynamicity}, and {\em exclusivity} in an enclave's permission
views.
\paragraph{Asymmetric Permission Views.}
In several data patterns shown in Section~\ref{sec:pattern-costs},
one can see that different enclaves require different permissions on
the shared memory. For example, one enclave has read access whereas
others have write-only access. The spatial model is a ``one size fits
all'' approach that does not allow enclaves to set up asymmetric
permissions for a public memory region securely---the OS can always
undo any such enforcement that enclaves might specify via normal
permission bits in hardware.
In \textsc{Elasticlave}\xspace, different enclaves are allowed to specify their own set
of permissions (or {\em views}) over the same shared region, which are
enforced by the TEE. This directly translates to avoiding the costs of
creating data copies into separate regions, where each region has a
different permission.
For example, in Pattern 1 the producer has read-write permissions and
the consumer has read-only permissions for the shared queue.
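A minimal sketch of asymmetric views, with hypothetical enclave and region names: the TEE conceptually keeps one permission set per (enclave, region) pair and checks every access against the accessing enclave's own view.

```python
# One TEE-enforced permission view per (enclave, region) pair; a single
# shared region therefore needs no per-permission data copies.
views = {
    ("producer", "queue"): {"r", "w"},   # producer: read-write
    ("consumer", "queue"): {"r"},        # consumer: read-only
}

def check(enclave: str, region: str, access: str) -> bool:
    # An access is allowed only if it appears in that enclave's local view.
    return access in views.get((enclave, region), set())

assert check("producer", "queue", "w")        # producer may write
assert not check("consumer", "queue", "w")    # consumer may not
```

The OS holds no entry in this map at all, which is what the spatial model's public memory cannot express: there, the OS can always rewrite the ordinary hardware permission bits.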
\paragraph{Dynamic Permissions.}
In \textsc{Elasticlave}\xspace, enclaves can change their permissions over time,
without seeking consent from or notifying other enclaves. In the
spatial isolation model, if enclaves need different permissions over
time on the same shared data, separate data copies are needed.
\textsc{Elasticlave}\xspace eliminates the need for such copies.
For example, in Pattern 2, when the source enclave generates data it
has read-write permissions, while the proxy enclave has no
permissions. After that, the source drops all its permissions, and
proxy enclave gains read-write permissions to process the data. This
way, the source and proxy enclaves do not interfere with each other's
operations on the shared region.
While enabling dynamic permissions, \textsc{Elasticlave}\xspace does not allow enclaves
to arbitrarily escalate their permissions over time. In \textsc{Elasticlave}\xspace,
only the owner can share a memory region with other accessors during the
lifetime of the memory region. When the owner shares a memory region, it sets
the {\em static maximum} permissions it wishes to allow for the
specified accessor at any time. This static maximum cannot be changed
once set by the owner for a specified enclave. Accessors can escalate
or reduce their privileges dynamically. But if the accessor tries to
exceed the static maximum at any given point in time, \textsc{Elasticlave}\xspace
delivers a general protection fault to the accessor enclave.
\paragraph{Exclusive Locks.}
\textsc{Elasticlave}\xspace incorporates a special bit for each memory region called the
{\em lock} bit. This bit serves as a synchronization mechanism between
enclaves, which may not trust each other. \textsc{Elasticlave}\xspace ensures that at
any instance of time only one accessor has this bit set, thereby
enforcing that it has exclusive access to a region. When this bit
is on, only that accessor is able to access the region---all other accessors
have their permissions temporarily disabled. When the lock is acquired
and released by one enclave, all accessors and the owner of that
region are informed through a hardware exception/signal. Lock holders
can release it generically without specifying the next holder, or can
atomically transfer the lock to another accessor through the {$\tt{transfer}$}\xspace
instruction. Atomic transfers are useful for flexible but
controlled transfer of exclusive access over regions. For example, in
Pattern 2, the source holds the lock bit for exclusive access to the
region for writing its request. Thus, no one can tamper with the
packet while the source writes to it.
Then, the source transfers the lock directly
to the proxy. Proxy exclusively accesses the region to update the
packet and then transfers the lock to the destination. Only then can the destination read
the updated packet.
\subsection{Design Details}
\label{sec:design}
\textsc{Elasticlave}\xspace is a memory interface specification consisting of $7$
instructions, summarized in Table~\ref{tab:instructions}, which
operate on \textsc{Elasticlave}\xspace memory regions. Each \textsc{Elasticlave}\xspace region is
addressable with a {\em universal} identifier that uniquely identifies
it in the global namespace. Universal identifiers can be mapped to
different virtual addresses in different enclaves, and at the same
time, are mapped to physical addresses by an \textsc{Elasticlave}\xspace implementation.
The \textsc{Elasticlave}\xspace interface semantics are formalized as pre- and post-conditions
in Appendix~\ref{sec:appx}, which any secure implementation of this interface should satisfy.
Next, we explain the
\textsc{Elasticlave}\xspace design by walking through the typical lifecycle of a region.
\paragraph{Owner View.}
Each memory region $r$ has a unique owner enclave throughout its
lifetime.
An enclave $p$ can create a new memory region $r$ with the {$\tt{create}$}\xspace instruction, which takes the memory region size
and returns a universal id ({$\tt{uid}$}\xspace). The enclave $p$ is the owner of
the new memory region $r$.
The owner permissions are initialized to an owner-specified safe
maximum. These permissions
are bound to a
memory region. The owner, just like any accessor, can bind the memory
region to its virtual address space by using {$\tt{map}$}\xspace and {$\tt{unmap}$}\xspace
instructions. The {$\tt{map}$}\xspace instruction takes a {$\tt{uid}$}\xspace for the region
and the virtual address range to map to it. A memory region can be
mapped at multiple virtual addresses in an enclave, but the static
permissions bound to the region at the time of creation apply to all
instances mapped in the virtual address space.
The owner can then share the memory with any other enclave using the
{$\tt{share}$}\xspace instruction, which specifies the {$\tt{uid}$}\xspace of the memory
region, the enclave id of the other accessor, and the static maximum
permissions allowed for that accessor.
Every accessor, including the owner, can dynamically change the
permissions of a memory region as long as the permissions are no more
permissive than the static maximum for the
enclave. For the owner, the static maximum is the full set of
permissions, and for other accessors, it is determined by the {$\tt{share}$}\xspace
instruction granting the access. The changes to such permissions are
local to each accessor, i.e., permission changes are not globally
effected for all accessors; rather they apply to each enclave
independently. The lattice shown in
Figure~\ref{fig:permission_lattice} defines the permission hierarchy.
Finally, the owner can destroy the memory region at any point in time
by invoking the {$\tt{destroy}$}\xspace instruction. \textsc{Elasticlave}\xspace sends all accessors a
signal when the owner destroys a memory region. Destroying a region
ends the lifetime in all enclaves. The OS can invoke the {$\tt{destroy}$}\xspace
instruction on an enclave to reclaim the memory region or to protect
itself from denial-of-service via the enclave.
\paragraph{Accessor's View.}
The accessor binds a memory region in its virtual address space using
the {$\tt{map}$}\xspace instruction; the same way as owners do. The initial
permissions of the memory region are set to static maximum allowed by
the owner (specified by the owner in the {$\tt{share}$}\xspace instruction). The
accessor can further restrict its permissions at any time using the
{$\tt{change}$}\xspace instruction, as long as the resulting permissions do not
exceed this static maximum. Such changes, as mentioned previously,
remain locally visible to the accessor enclave.
\paragraph{Permission Checks.}
The \textsc{Elasticlave}\xspace TEE implementation enforces the permissions defined by
enclaves in their local views. A permission bit is set to $1$ if the
corresponding memory access (read, write, or execute) is allowed, and
set to $0$ otherwise. For memory accesses, the security checks
can be summarized by two categories:
(1) availability check of the requested resources (e.g., memory
regions and enclaves), which ensures that instructions will not be
performed on non-existing resources; and (2) permission checks of the
caller, which ensures that the caller has enough privilege to perform
the requested instruction. Table~\ref{tab:instructions} defines the permitted caller for each instruction. For example, {$\tt{share}$}\xspace and {$\tt{destroy}$}\xspace
instructions can only be performed by the owner of the region.
The {$\tt{change}$}\xspace instruction is the mechanism for dynamically updating
permissions of an \textsc{Elasticlave}\xspace region. \textsc{Elasticlave}\xspace requires that the permissions newly requested by an enclave ($perm$) fall within the limits of its static maximum permissions ($max$).
Specifically, \textsc{Elasticlave}\xspace checks that $perm \leq max$, where
the $\leq$ relation is defined by the lattice shown in Figure~\ref{fig:permission_lattice}. The lock bit
can only be held (set to $1$) in the local view of a single
enclave at any instance of time. When it is set for one enclave,
that enclave's local permission bits are enforced, and
all other enclaves have no access to the region.
When the lock is set to $0$ in the local views of all enclaves, the permissions of each enclave are as specified in its local view.
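The $perm \leq max$ check can be sketched by modeling a permission set as a subset of $\{r, w, x\}$, so that the lattice's $\leq$ relation becomes set inclusion; the lock bit is governed by the separate acquire/release semantics discussed next. Function names here are ours, for illustration.

```python
def leq(perm: set, maximum: set) -> bool:
    # The lattice ordering on {r, w, x} permission sets is subset inclusion.
    return perm <= maximum

def change(requested: set, static_max: set) -> set:
    # Core check of the change instruction: deny any request that exceeds
    # the static maximum fixed at share() time.
    if not leq(requested, static_max):
        raise PermissionError("general protection fault: exceeds static maximum")
    return requested                      # becomes the enclave's new local view

static_max = {"r", "w"}                   # set by the owner via share()
view = change({"r"}, static_max)          # dropping write: allowed
assert view == {"r"}
try:
    change({"r", "x"}, static_max)        # escalating beyond max: denied
    assert False
except PermissionError:
    pass
```

Note that `change({"r", "w"}, static_max)` also succeeds: re-escalating back up to, but never past, the static maximum is permitted, matching the non-strict $\leq$ in the check above.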
\paragraph{Lock Acquire \& Release.}
Accessors can attempt to ``acquire'' or ``release'' the lock by using
the {$\tt{change}$}\xspace instruction. It returns the accessor's modified
permissions, including the lock bit that indicates whether the acquire
/ release was successful. \textsc{Elasticlave}\xspace ensures that at any instance of
time, only a single enclave is holding the lock. If any other enclave
accesses the region or tries to issue a {$\tt{change}$}\xspace instruction on that
region's permissions, these requests will be denied.
A lock holder can use the {$\tt{change}$}\xspace instruction to release locks;
however, there are situations where the holder wishes to explicitly
specify whom it intends to be the next holder of the lock. \textsc{Elasticlave}\xspace allows
the lock holder to invoke a {$\tt{transfer}$}\xspace instruction which specifies the
enclave id of the next desired accessor. The next holder must have the
memory region mapped in its address space for the transfer to be
successful.
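The single-holder lock and its atomic transfer can be modeled as follows; this is a Python sketch with hypothetical enclave ids, whereas the real mechanism is enforced by the TEE hardware.

```python
class Region:
    """Minimal model of the lock bit: at most one holder at any time;
    transfer hands the lock over atomically, with no unlocked window."""
    def __init__(self, accessors):
        self.accessors = set(accessors)   # enclaves with the region mapped
        self.holder = None                # no enclave holds the lock yet

    def acquire(self, eid: str) -> bool:  # modeled via the change instruction
        if self.holder not in (None, eid):
            return False                  # someone else holds the lock: denied
        self.holder = eid
        return True

    def release(self, eid: str) -> None:  # generic release, no next holder named
        if self.holder == eid:
            self.holder = None

    def transfer(self, eid: str, next_eid: str) -> bool:
        # Atomic hand-off: succeeds only if eid holds the lock and the
        # next holder has the region mapped.
        if self.holder != eid or next_eid not in self.accessors:
            return False
        self.holder = next_eid
        return True

r = Region(accessors={"source", "proxy", "dest"})
assert r.acquire("source")                # source writes the packet exclusively
assert not r.acquire("proxy")             # denied while source holds the lock
assert r.transfer("source", "proxy")      # proxy becomes holder atomically
assert r.holder == "proxy"
```

The proxy pattern from Section~\ref{sec:pattern-costs} is just this chain continued: `transfer("proxy", "dest")` hands exclusivity to the destination without any intermediate state in which a faulty enclave could slip in.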
\paragraph{\textsc{Elasticlave}\xspace Exceptions \& Signals.}
\textsc{Elasticlave}\xspace issues exceptions whenever memory operations violating any
permission checks are made by an enclave. \textsc{Elasticlave}\xspace notifies enclaves
about events that affect the shared memory region via asynchronous
signals.
Signals are issued under two scenarios.
First, when the owner destroys a memory region $r$, permissions
granted to other enclaves are invalidated since the memory region
no longer exists. To prevent them from continuing without
knowing that the memory region can no longer be accessed, the security
enforcement layer will send signals to notify all accessors who had an
active mapping (i.e., mapped but not yet unmapped) for the destroyed
memory region.
The second scenario for signals is to notify changes to the lock bit.
Each time an accessor successfully acquires or releases the lock
(i.e., using {$\tt{change}$}\xspace or {$\tt{transfer}$}\xspace instructions), a signal is issued to
the owner. The owner can mask such signals if it wishes to, or it can
actively monitor the lock transfers if it desires. When a transfer
succeeds, the new accessor is notified via a signal.
Lastly, we point out that \textsc{Elasticlave}\xspace explicitly does not introduce
additional interface elements, for example, to allow enclaves to
signal to each other about their intent to acquire locks, or to prevent starvation. Section~\ref{sec:security_discussion} discusses these considerations to avoid interface complexity.
\subsection{Performance Benefits}
\label{sec:patterns-solution}
\textsc{Elasticlave}\xspace relaxes the spatial isolation model by introducing
flexibility in specifying permissions over shared regions. We now
revisit the example patterns discussed in
Section~\ref{sec:pattern-costs} to show how these patterns can be
implemented with significantly lower costs (summarized in
Table~\ref{tab:overheads_summary}) with \textsc{Elasticlave}\xspace.
\paragraph{Revisiting Pattern 1: Producer-Consumer.}
Application writers can partition the producer and consumer into two
enclaves that share data efficiently with \textsc{Elasticlave}\xspace. We can consider
two scenarios of faulty enclaves. The first allows one-way protection,
where the producer safeguards itself from a faulty consumer. The
second offers mutual protection where faults in either enclave do not
spill over to the other.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.8\linewidth]{prod-cons-soln1}
\caption{One-way isolated producer-consumer pattern with \textsc{Elasticlave}\xspace.
The producer writes to a memory region (R1) shared with consumer; consumer
is only allowed to read.
}
\label{fig:prod-cons-soln1}
\end{figure}
In the one-way isolation scenario, the producer can create a memory
region and share it with the consumer with the maximum permission set
to $\tt{r{}-{}-{}-}$. The producer and the consumer can then keep
their permissions to $\tt{r{}w{}-{}-}$ and $\tt{r{}-{}-{}-}$
respectively, which allow the producer to read and write data and the
consumer to only read data in the memory region
(Figure~\ref{fig:prod-cons-soln1}). The producer can directly write
its data to the shared memory region, and the consumer can directly
read from it without needing to move the data back and forth between
the shared memory region and their private memory. The producer can
ensure that the consumer, even if compromised, cannot maliciously race
to modify the data while it is being updated in a critical section by
the producer. The whole process does not involve any extra data copies
or a cryptographically secure public memory, and only introduces fixed
costs of setting up and destroying the memory regions.
Two-way isolation is desired when both producer and consumer wish to
modify shared data in-place, while assuming that the other is faulty.
As a simple example, counters in shared queue data structures often
require atomic updates. In general, the producer and consumer may want
to securely multiplex their access to any amount of shared data (e.g.,
via shared files or pipes) for performing in-place updates. \textsc{Elasticlave}\xspace
makes this possible without creating any additional memory copies or
establishing secure channels. The shared memory region can be created
by (say) the producer and shared with the consumer with a static
maximum permission of $\mathtt{r{}w{}-{}l}$ as shown in
Figure~\ref{fig:prod-cons-soln2}. When either of them wishes to acquire
exclusive access temporarily, it can use the {$\tt{change}$}\xspace instruction,
setting the lock bit from $0$ to $1$. Therefore, the only overhead incurred is
that of executing the {$\tt{change}$}\xspace instruction itself, in
sharp contrast to the $2$ copies of the entire shared data required in
the spatial isolation model.
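The lock-bit mechanics can be sketched as follows; `acquire_lock`/`release_lock` model the {$\tt{change}$}\xspace instruction toggling the lock bit, and all names are illustrative simplifications rather than the real ISA:

```python
# Sketch of two-way isolation via the lock ("l") permission bit.
# acquire_lock/release_lock model the change instruction; names are
# hypothetical, not the actual Elasticlave interface.

class LockedRegion:
    def __init__(self, owner, sharers):
        self.holders = {owner, *sharers}  # enclaves with rw-l in their max
        self.lock_holder = None           # who currently has exclusivity

    def acquire_lock(self, enclave):
        """change: set lock bit 0 -> 1; fails if already held."""
        assert enclave in self.holders, "not a valid sharer"
        if self.lock_holder is not None:
            return False                  # lock bit already 1
        self.lock_holder = enclave
        return True

    def release_lock(self, enclave):
        """change: set lock bit 1 -> 0; only the holder may do so."""
        assert self.lock_holder == enclave, "not the lock holder"
        self.lock_holder = None

    def update_in_place(self, enclave, counter):
        # e.g., an atomic counter bump in a shared queue header
        assert self.lock_holder == enclave, "must hold the lock to write"
        return counter + 1

q = LockedRegion(owner="producer", sharers={"consumer"})
assert q.acquire_lock("producer")
assert not q.acquire_lock("consumer")   # consumer must wait its turn
counter = q.update_in_place("producer", 0)
q.release_lock("producer")
assert counter == 1
```

While the producer holds the lock, the consumer's attempt to acquire it fails, so in-place updates proceed without a second data copy.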
\paragraph{Revisiting Pattern 2: Proxy.}
The proxy example can be seen as a straightforward sequential
stitching of two producer-consumer instances. The shared data would
first be written by the producer, then the proxy atomically reads or
updates it, and then the consumer reads it. All three entities
hold the lock bit in this order to prevent any faulty enclave from
accessing the shared memory when unintended.
The \textsc{Elasticlave}\xspace {$\tt{transfer}$}\xspace instruction eliminates windows of attack when
passing the lock from one enclave to another.
Specifically, it allows the source to atomically transfer the lock to
proxy, who then atomically transfers it to the consumer. In this way,
the proxy workflow can be implemented without any extra copy of the
shared data as shown in Figure~\ref{fig:proxy-soln}.
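The hand-over can be sketched as below; `transfer` models the atomic {$\tt{transfer}$}\xspace instruction, and the enclave names are hypothetical. The key property is that the lock never passes through an unheld state, so a racing faulty enclave has no window to acquire it:

```python
# Sketch of atomic lock hand-over in the proxy pattern.
# transfer models the Elasticlave transfer instruction; names are
# illustrative, not the real interface.

class ProxyRegion:
    def __init__(self, first_holder):
        self.lock_holder = first_holder

    def transfer(self, src, dst):
        assert self.lock_holder == src, "only the holder may transfer"
        self.lock_holder = dst   # atomic: never passes through "unheld"

    def acquire(self, enclave):
        if self.lock_holder is None:
            self.lock_holder = enclave
            return True
        return False

r = ProxyRegion(first_holder="source")
r.transfer("source", "proxy")          # source finished writing
assert not r.acquire("faulty_source")  # no unlocked window to race into
r.transfer("proxy", "consumer")        # proxy finished its in-place update
assert r.lock_holder == "consumer"
```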
\paragraph{Revisiting Pattern 3: Client-Server.}
The client-server workflow can similarly be executed by keeping a
single copy of the shared data, as shown in
Figure~\ref{fig:client-server-soln}, which reduces the number of data
copies from $6$ in the case of spatial isolation to $1$.
\paragraph{Compatibility with Spatial Isolation.}
It is easy to see that \textsc{Elasticlave}\xspace is strictly more expressive than
the spatial isolation model, and hence retains complete compatibility with
designs that work in that model. Setting up the
equivalent of the public memory is simple---the owner creates the
region and shares it with $\mathtt{r{}w{}x{}-}$ for all. Private memory
is simply never shared after creation by the owner.
\paragraph{Privilege De-escalation Built-in.}
In \textsc{Elasticlave}\xspace, enclaves can self-reduce their privileges below the
allowed maximum, without raising any signals to other enclaves. This
design choice enables compatibility with several other low-level
defenses which enclaves may wish to deploy for their own internal
safety---for example, making shared objects non-executable, or
write-protecting shared security metadata.
\subsection{Security \& Simplicity}
\label{sec:security_discussion}
We begin by observing that it is straightforward to implement the
\textsc{Elasticlave}\xspace interface with an (inefficient) trusted enforcement layer
using spatially isolated memory\footnote{The
enforcement layer would be implemented by a trusted enclave, which
would keep the shared memory content and the permission matrix in its
{\em private} memory. Each invocation of an \textsc{Elasticlave}\xspace instruction
would translate to an RPC call to the enforcement enclave, which could
simply emulate the stated checks in Table~\ref{tab:instructions} and
Appendix~\ref{sec:appx} as
checks on its matrix.}. It follows that any memory permission
configurations which may be deemed ``attacks'' on \textsc{Elasticlave}\xspace
would also be admissible in a faithful emulation on the spatial
isolation model. In this sense, \textsc{Elasticlave}\xspace does {\em not degrade
security} over the spatial isolation model. The primary highlight of
\textsc{Elasticlave}\xspace is the performance gains it enables without degrading security.
We point out two desirable high-level security properties that
immediately follow from the \textsc{Elasticlave}\xspace interface
(Table~\ref{tab:instructions}). Application writers can rely on these
properties without employing any extra security mechanisms.
\paragraph{Property 1: Bounded Escalation.}
If an owner does not explicitly authorize an enclave $p$ to access a
region $r$ with a given permission, $p$ cannot obtain that
access.
This property follows from three design points:
(a)~Only the owner can change the set of enclaves that can access a
region. Non-owner enclaves cannot grant access permissions to other
enclaves since there is no permission re-delegation instruction in the
interface.
(b)~Each valid enclave that can access a region has its permissions
bounded by an owner-specified static maximum.
(c)~For each access or instruction, the accessor enclave and its
permission are checked for legitimacy by every instruction in the
interface (see Table~\ref{tab:instructions}).
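The three design points can be captured in a small sketch; method names are illustrative, not the actual instruction interface. Point~(a) is the absence of any re-delegation path, (b) the static maximum, and (c) the per-access check:

```python
# Sketch of the three checks behind bounded escalation.
# Names are hypothetical, not the real Elasticlave interface.

class AccessControl:
    def __init__(self, owner):
        self.owner = owner
        self.max_perm = {}     # (a) only the owner may extend this map

    def share(self, caller, enclave, perm):
        if caller != self.owner:                     # (a) no re-delegation
            raise PermissionError("only the owner can share")
        self.max_perm[enclave] = set(perm)

    def check_access(self, enclave, perm):
        granted = self.max_perm.get(enclave, set())  # (b) static maximum
        if not set(perm) <= granted:                 # (c) per-access check
            raise PermissionError("access exceeds authorized permission")

acl = AccessControl(owner="A")
acl.share("A", "B", "r")
acl.check_access("B", "r")               # authorized access succeeds
for bad in [("B", "w"), ("C", "r")]:     # unauthorized accesses fail
    try:
        acl.check_access(*bad)
        assert False, "should have been rejected"
    except PermissionError:
        pass
try:
    acl.share("B", "C", "r")             # B cannot re-delegate
    assert False, "should have been rejected"
except PermissionError:
    pass
```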
\paragraph{Property 2: Enforceable Serialization of Non-faulty Enclaves.}
If the application has a pre-determined sequence in which accesses of
non-faulty enclaves should be serialized, then \textsc{Elasticlave}\xspace can guarantee
that accesses obey that sequence or will be aborted. Specifically, let
us consider any desired sequence of memory accesses on an \textsc{Elasticlave}\xspace
region $a_1,a_2,\dots,a_n$ and assume that all enclaves performing
these accesses are uncompromised.
Then, using \textsc{Elasticlave}\xspace, the application writer can guarantee that its
sequence of accesses will follow the desired sequence, even in the
presence of other faulty enclaves, or can be aborted safely.
The property can be enforced by composing two \textsc{Elasticlave}\xspace abstractions:
(a)~For each access $a_i$ by an enclave $e(a_i)$ in the pre-determined
sequence, the accessor can first acquire the lock to be sure that no
other accesses interfere.
(b)~When the accessor changes, say at access $a_j$, the
current enclave $e(a_j)$ can safely hand-over the lock to the next
accessor $e(a_{j+1})$ by using the {$\tt{transfer}$}\xspace instruction. Faulty
enclaves cannot acquire the lock and access the region at any
intermediate point in this access chain. For example, in
Pattern 2 (proxy), once the proxy enclave modifies
the data in-place, simply releasing the lock is not {\em safe}. A
faulty source enclave can acquire the lock before the destination does
and tamper with the data. However, the proxy can eliminate such an
attack from a racing adversary using the {$\tt{transfer}$}\xspace instruction.
\paragraph{Simplicity.}
Several additional design decisions make \textsc{Elasticlave}\xspace simple to reason
about. We discuss two of these: forcing applications to use {\em
local-only views} in making trust decisions and {\em minimizing
interface complexity}.
Each enclave is asked to make security decisions based only on its own
{\em local} view of its current and static maximum permissions. This
is an intentional design choice in \textsc{Elasticlave}\xspace to maintain simplicity. One
could expose the state of the complete access control (permission)
matrix of all enclaves to each other, enabling more flexible or
dynamic trust decisions between enclaves.
However, this would also add complexity for application writers.
All enclaves would need to be aware of any changes to the global
access permissions, and be careful to avoid any potential TOCTOU
attacks when making any trust decisions based on it.
To simplify making trust decisions, the only interface in \textsc{Elasticlave}\xspace
where an enclave assumes a global restriction on the shared memory is
the lock bit. When using this interface, the lock holder can assume
that it is the only holder of the lock globally.
\textsc{Elasticlave}\xspace admits a simpler TEE implementation. The interface avoids
keeping any metadata that changes based on shared memory state (or
content). The TEE hardware can keep all security-relevant metadata in
access control tables, for instance. Since enclaves do not have
visibility into the global view of permissions of all other enclaves,
the TEE does not need to set up additional mechanisms to notify
enclaves on changes to this table (e.g., via signals). Further,
\textsc{Elasticlave}\xspace does {\em not} provide full transactional memory
semantics, i.e., it does not provide atomic commits or memory
rollbacks, which come with their own complexity~\cite{rote}.
Similarly, consider starvation: A malicious or buggy
enclave may not release the lock. A more complex interface than
\textsc{Elasticlave}\xspace would either require the TEE to arbitrate directly or allow
the owner to do so, say by forcing the memory to be released within a time
bound.
However, such a solution would come with considerable interface
complexity. It would open up subtle attack avenues on the lock holder.
For instance, the enclave could lose control of the shared memory when
its contents are not yet safe to be exposed.
Instead, \textsc{Elasticlave}\xspace simply allows owners to be notified when enclaves
issue requests to acquire locks via the {$\tt{change}$}\xspace instruction. Enclaves
can implement any reasonable policy for starvation---for example, to
tear down the memory securely if a lock holder is unresponsive or
repeatedly acquiring locks to the region.
\section{Implementation on \mbox{RISC-V}\xspace}
\label{sec:implementation}
We build a prototype implementation of \textsc{Elasticlave}\xspace on an open-source
RocketChip quad-core SoC~\cite{riscvpriv, rocket}. We utilize $2$
building blocks from the \mbox{RISC-V}\xspace architecture, namely its physical
memory protection (PMP) feature and the programmable machine-mode
($\tt{m-mode}$). Note that \textsc{Elasticlave}\xspace does not make any
changes to the hardware.
We use Keystone---an open-source framework for
instantiating new TEE designs such as
\textsc{Elasticlave}\xspace~\cite{dayeol2020keystone}. Keystone provides a Linux driver
and a convenient SDK to create, start, resume, and terminate enclaves
using the aforementioned features. It supports the
$\tt{gcc}$ compiler and has a C/C++ library for
building enclave applications. Keystone originally uses the spatial
isolation model, which we replace with \textsc{Elasticlave}\xspace.
\paragraph{\mbox{RISC-V}\xspace PMP and $\tt{m-mode}$.}
The physical memory protection (PMP) feature of \mbox{RISC-V}\xspace allows software
in machine-mode (the highest privilege level in \mbox{RISC-V}\xspace) to restrict
physical memory accesses of software at lower privilege levels
(supervisor- and user-modes). Machine-mode software achieves this by
configuring PMP entries, which are a set of registers in each CPU
core. Each PMP register holds an entry specifying one contiguous
physical address range and its corresponding access permissions.
Interested readers can refer to the \mbox{RISC-V}\xspace standard PMP
specifications~\cite{riscvpriv}.
The \textsc{Elasticlave}\xspace TEE implementation runs as $\tt{m-mode}$ software. All the
meta-data about the memory regions, owners, static maximums, and the
current view of the permission matrix is stored here.
The $\tt{m-mode}$ is the only privilege-level that can modify PMP entries.
Thus, the OS ($\tt{s-mode}$) and the enclave ($\tt{u-mode}$) cannot read
or update any PMP entries or meta-data.
When the enclave invokes any \textsc{Elasticlave}\xspace instruction, the execution traps
and the hardware redirects it to the $\tt{m-mode}$. This control-flow
cannot be changed by $\tt{s-mode}$ or $\tt{u-mode}$ software.
After entering $\tt{m-mode}$, \textsc{Elasticlave}\xspace first checks whether
the caller of the instruction is permitted to make the call. If the caller is permitted to invoke the instruction, \textsc{Elasticlave}\xspace performs the meta-data updates and, if necessary, the PMP updates.
\textsc{Elasticlave}\xspace keeps two mappings in its implementation:
(a)~virtual address ranges of each enclave and the corresponding
\textsc{Elasticlave}\xspace region universal identifier (uid); and (b)~the effective physical address range to which each uid is mapped.
Thus, when an enclave tries to access a virtual address, \textsc{Elasticlave}\xspace
performs a two-level translation: from virtual address to a uid and
subsequently to the physical address.
The {$\tt{map}$}\xspace and {$\tt{unmap}$}\xspace instructions only require updating the
first mapping, as they update virtual to uid mappings only. The
{$\tt{change}$}\xspace, {$\tt{share}$}\xspace, and {$\tt{transfer}$}\xspace only update the second
mapping because they only change permission bits without affecting
virtual memory to uid bindings. The {$\tt{create}$}\xspace and {$\tt{destroy}$}\xspace
instructions update both mappings. For enforcing access checks, the
\textsc{Elasticlave}\xspace TEE additionally maintains a permissions matrix of the
current and the static maximum permissions of each enclave.
Permissions are associated with uids, not with virtual addresses.
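The two-level translation can be sketched as two plain lookup tables; the addresses, enclave names, and uid values below are invented for illustration and do not correspond to the real layout:

```python
# Sketch of the two mappings kept by the Elasticlave m-mode software:
# (a) per-enclave virtual ranges -> uid, (b) uid -> physical range.
# All values are illustrative placeholders.

va_to_uid = {   # (a) updated by map/unmap, create/destroy
    ("enclave1", 0x10000): 7,   # enclave1 maps uid 7 at VA 0x10000
}
uid_to_pa = {   # (b) updated by create/destroy; permissions are
    7: 0x80200000,              #     tracked separately, keyed by uid
}

def translate(enclave, va, page=0x1000):
    """Two-level translation: VA -> uid -> PA (page-granular sketch)."""
    base = va & ~(page - 1)
    uid = va_to_uid[(enclave, base)]
    return uid_to_pa[uid] + (va - base)

assert translate("enclave1", 0x10040) == 0x80200040
```

Because permissions key off uids rather than virtual addresses, remapping a region ({$\tt{map}$}\xspace/{$\tt{unmap}$}\xspace) touches only table (a), while permission changes touch only the uid-indexed state.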
For enforcement, the TEE translates uid permissions in the permission
matrix to physical memory access limits via PMP entries. Whenever the
permission matrix is updated by an enclave, the permission updates
must be reflected into the access limits in PMP entries. Further, one
PMP entry is reserved by \textsc{Elasticlave}\xspace to protect its internal mappings
and security data structures.
The \mbox{RISC-V}\xspace specification limits the number of PMP registers to $16$.
Since each region is protected by one PMP entry, this fixes the
maximum number of regions allowable across all enclaves simultaneously.
This limit is not due to the \textsc{Elasticlave}\xspace design: one can increase the
number of PMP entries in hardware, and \textsc{Elasticlave}\xspace itself only increases
the number of needed PMP entries by $1$.
When context-switching from one enclave to another, apart from
the standard register-save restore, Keystone modifies PMP entries to
disallow access to enclave private memory---this is because it uses a
spatial isolation design.
We modify this behavior to allow continued access to shared regions
even when the owner is not executing for \textsc{Elasticlave}\xspace.
\section{Evaluation}
\label{sec:eval}
We aim to answer the following research questions:
\begin{itemize}
\item How does the performance of \textsc{Elasticlave}\xspace compare with
the spatial ShMem\xspace baseline on
\mbox{RISC-V}\xspace?
\item What is the impact of \textsc{Elasticlave}\xspace on privileged software
trusted code base (TCB) and hardware complexity?
\end{itemize}
We implement the spatial ShMem\xspace baseline and \textsc{Elasticlave}\xspace on the same hardware
core, in order to isolate the difference between the
spatial design and our \textsc{Elasticlave}\xspace design.
Production-grade TEEs, such as Intel SGX, often have additional
mechanisms (e.g., hardware-level memory encryption, limited size of private physical memory,
etc.) which are orthogonal to the performance gains due to our
proposed memory model. Our implementation and evaluation exclude these
overheads.
\paragraph{Benchmarks.}
We experiment with $2$ types of benchmarks, using both our \textsc{Elasticlave}\xspace
implementation and the described spatial ShMem\xspace baseline (Section~\ref{sec:baseline-model}) on the same \mbox{RISC-V}\xspace core:
(a) simple synthetic programs we constructed that
implement the $3$ data patterns with varying number of regions and
size of data. We also construct synthetic thread synchronization
workloads with controllable contention for locks.
(b)
standard pre-existing real-world benchmarks, which include I/O
intensive workloads (IOZone~\cite{iozone}), parallel computation
(SPLASH-2~\cite{splash,splash-summary}), and CPU-intensive benchmarks (machine
learning inference with Torch~\cite{privado,
torch}). We manually modify these benchmarks to add \textsc{Elasticlave}\xspace
instructions, since we do not presently have a compiler for \textsc{Elasticlave}\xspace.
\paragraph{Experimental Setup.}
We use a cycle-accurate, FPGA-accelerated
(FireSim~\cite{DBLP:conf/isca/KarandikarMKBAL18}) simulation of
RocketChip~\cite{rocket}. Each system consists of $4$ RV64GC
cores, 16KB instruction and data caches, $16$ PMP
entries per core (unless stated otherwise), and a
shared 4MB L2 cache. Area numbers were computed using a commercial
22nm process with Synopsys Design Compiler version L-2016.03-SP5-2
targeting $800$~MHz{}. Other than varying the number of PMP entries, we
do not make any changes to RocketChip.
\subsection{Performance of \textsc{Elasticlave}\xspace}
\label{sec:temporal_vs_spatial}
To evaluate the performance of \textsc{Elasticlave}\xspace vs. spatial isolation, we
first used synthetic benchmarks that cover common types of data
sharing behaviors in applications, including the data sharing patterns
introduced in Section~\ref{sec:baseline-model}.
\begin{figure*}[!tb] \centering
\subfloat[Producer-consumer]{
\includegraphics[width=0.27\linewidth]{icall-consumer.pdf}
}
\subfloat[Proxy]{
\includegraphics[width=0.27\linewidth]{icall-proxy.pdf}
}
\subfloat[Client-server]{
\includegraphics[width=0.27\linewidth]{icall-server.pdf}
}
\caption{Performance of the $3$ data-sharing patterns.}
\label{fig:icall_results} \end{figure*}
\begin{figure*}[!tb] \centering
\subfloat[\textsc{Elasticlave}\xspace-full]{
\includegraphics[width=0.27\linewidth]{icall-breakdown-temporal.pdf}
}
\subfloat[\textsc{Elasticlave}\xspace-nolock\label{fig:breakdown_nolock}]{
\includegraphics[width=0.27\linewidth]{icall-breakdown-nolock.pdf}
}
\subfloat[\stname{spatial}\xspace]{
\includegraphics[width=0.27\linewidth]{icall-breakdown-spatial.pdf}
}
\caption{Performance breakdown for the proxy pattern.}
\label{fig:icall_breakdown}
\end{figure*}
\paragraph{Synthetic Benchmark: Data-Sharing Patterns.}
We construct synthetic benchmarks for
the $3$ patterns in Section~\ref{sec:baseline-model} and
measure data sharing overhead (excluding any actual data processing). We set up $2$ (for producer-consumer and client-server)
or $3$ (for the proxy pattern) enclaves and compare: (a)~full
\textsc{Elasticlave}\xspace support as described in Section~\ref{sec:design}
(\textsc{Elasticlave}\xspace-full); (b)~\textsc{Elasticlave}\xspace without the lock permission bit design
(\textsc{Elasticlave}\xspace-nolock); and (c)~spatial isolation which transfers data
through secure public memory. Figure~\ref{fig:icall_results} shows the
performance for $3$ patterns. Figure~\ref{fig:icall_breakdown} shows the
breakdown for the proxy pattern.
\emph{Observations:}
The results exhibit a substantial performance improvement of \stname{\textsc{Elasticlave}\xspace}\xspace-full
over \stname{spatial}\xspace, which increases with an increase in the record size.
When the record size is $512$ bytes, \stname{\textsc{Elasticlave}\xspace}\xspace-full provides over
$60\times$ speedup compared with \stname{spatial}\xspace; when the record size
increases to $64$KB the speedup also increases and reaches
$600\times$. In \stname{\textsc{Elasticlave}\xspace}\xspace-full, although invoking security
instructions is a large contributor to the overhead, doing so allows the
application to eliminate copying and communication through secure public
memory. As a result, the total overhead of \stname{\textsc{Elasticlave}\xspace}\xspace-full does not
increase with the size of the transferred data, unlike \stname{spatial}\xspace.
Note that \stname{\textsc{Elasticlave}\xspace}\xspace-full corresponds to a two-way isolation paradigm
highlighted in Section~\ref{sec:pattern-costs}.
\stname{\textsc{Elasticlave}\xspace}\xspace-nolock, the design of \textsc{Elasticlave}\xspace with the lock permission
bit removed, is more costly than \stname{\textsc{Elasticlave}\xspace}\xspace-full, with
overhead that increases with the data size.
Figure~\ref{fig:icall_breakdown} indicates that this is because
\stname{\textsc{Elasticlave}\xspace}\xspace-nolock does not completely eliminate data copying.
\paragraph{Synthetic Benchmark: Thread Synchronization.}
We implement a common workload for spinlocks between threads, each
of which runs in a separate enclave, and neither of which trusts the OS.
For \textsc{Elasticlave}\xspace, we further distinguish simple spinlocks
(\textsc{Elasticlave}\xspace-spinlock) and {$\tt{futex}$}\xspace{}es (\textsc{Elasticlave}\xspace-futex). For spinlocks,
we keep the lock state in a shared region with no access to the OS.
For futexes, the untrusted OS has read-only access to the lock states,
which allows enclaves to sleep while waiting for locks and be woken up
by the OS when locks are released. This form of sharing corresponds to
the one-way isolation described in Section~\ref{sec:pattern-costs},
where the OS has read-only permissions.
For \stname{spatial}\xspace, we implement a dedicated trusted
coordinator enclave to manage the lock states, with
enclaves communicating with it through secure public memory for lock
acquisition and release.
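The contrast between the two lock designs can be sketched as follows. This is a hypothetical simplification, not the benchmark code: the lock word lives in a shared region, and for the futex variant the untrusted OS additionally has read-only access so it can wake sleepers.

```python
# Minimal sketch contrasting the two Elasticlave lock designs.
# Hypothetical simplification of the benchmark, not its actual code.

class SpinLock:
    """Lock word invisible to the OS; waiters busy-wait (burn cycles)."""
    def __init__(self):
        self.word = 0
    def try_acquire(self):
        if self.word == 0:
            self.word = 1
            return True
        return False            # caller spins and retries
    def release(self):
        self.word = 0

class Futex(SpinLock):
    """OS can read the lock word, so waiters sleep instead of spinning."""
    def __init__(self):
        super().__init__()
        self.sleepers = []
    def wait(self, enclave):
        self.sleepers.append(enclave)   # enclave yields the CPU
    def release(self):
        super().release()
        # the untrusted OS observes word == 0 and wakes one sleeper
        if self.sleepers:
            self.sleepers.pop(0)

f = Futex()
assert f.try_acquire()
f.wait("enclave2")          # enclave2 sleeps instead of spinning
f.release()                 # OS wakes enclave2
assert f.try_acquire()      # enclave2 can now take the lock
```

Sleeping rather than spinning is what yields the CPU-time advantage of \textsc{Elasticlave}\xspace-futex, while wall-clock latency is unchanged.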
\emph{Observations:}
We report that \textsc{Elasticlave}\xspace-futex and \textsc{Elasticlave}\xspace-spinlock
achieve much higher performance compared with \stname{spatial}\xspace (Figure~\ref{fig:lock-basic}), especially
when the contention is low (the lock is acquired and released often).
For higher contention where the time spent waiting for
the lock overshadows the overhead of acquiring and releasing the lock,
the $3$ settings have comparable performance.
In addition,
\textsc{Elasticlave}\xspace-futex achieves up to $1.5\times$ CPU-time performance
improvement over \textsc{Elasticlave}\xspace-spinlock despite having no advantage in
terms of real-time performance (wall-clock latency).
Figure~\ref{fig:lock-futex} shows the performance of
\stname{\textsc{Elasticlave}\xspace}\xspace-futex vs \stname{\textsc{Elasticlave}\xspace}\xspace-spinlock.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.75\linewidth]{lock-basic.pdf}
\caption{Synthetic Thread Synchronization Performance. Cycles
(normalized by the contention).}
\label{fig:lock-basic}
\end{figure}
\begin{figure}[!tb]
\centering
\includegraphics[width=0.7\linewidth]{lock-colour.pdf}
\caption{
\stname{\textsc{Elasticlave}\xspace}\xspace-futex vs \stname{\textsc{Elasticlave}\xspace}\xspace-spinlock.}
\label{fig:lock-futex}
\end{figure}
\paragraph{Real-World Benchmark 1: File I/O.}
We run the IOZone benchmark~\cite{iozone}; it makes frequent file I/O
calls from the enclave into the untrusted host process. Here, for \stname{spatial}\xspace, the communication
does not need to be protected with secure public memory. Figures~\ref{subfig:iozone-writer-native} and~\ref{subfig:iozone-reader-native}
show write and read bandwidth.
\begin{figure*}[!t] \centering
\begin{minipage}[b]{0.5\linewidth}
\subfloat[IOZone Writer \label{subfig:iozone-writer-native}]{
\includegraphics[width=.45\linewidth]{writer_native.pdf}
}
\subfloat[IOZone Reader \label{subfig:iozone-reader-native}]{
\includegraphics[width=.45\linewidth]{reader_native.pdf}
}
\captionof{figure}{IOZone bandwidth for $8$MB and $512$MB files.}
\label{fig:iozone}
\end{minipage}
\hfill
\begin{minipage}[b]{0.45\linewidth}
{
\includegraphics[width=0.95\linewidth]{splash2-with-native.pdf}
}
\captionof{figure}{SPLASH-2 wall-clock time, measured in cycles.}
\label{fig:splash2}
\end{minipage}
\end{figure*}
\emph{Observations:}
Even without secure public memory
communication in \stname{spatial}\xspace,
\textsc{Elasticlave}\xspace achieves a higher bandwidth than \stname{spatial}\xspace, when the record
size grows above a threshold ($16$KB). The bandwidth increase reaches
as high as $40\%$ for the writer workload and around $50\%$ for the
reader workload when the record size is sufficiently large.
\paragraph{Real-World Benchmark 2: Parallel Computation.}
We run $7$ SPLASH-2 workloads in a two-enclave setting. We adapt the
workloads to multi-enclave implementations by collecting the
data shared across threads in a memory region shared
across the enclaves. For \stname{spatial}\xspace, load/store instructions that
operate on the memory region are trapped and emulated with RPCs by the
enclave runtime.
Figure~\ref{fig:splash2} shows the number of cycles needed to execute the parallel workloads in two
enclaves (excluding initialization).
We were not able to run
$\tt{libsodium}$ inside the enclave runtime, so we did not use
encryption-decryption when copying data to and from secure
public memory for \stname{spatial}\xspace in this experiment.
The actual overhead of a secure implementation would therefore be
higher than reported here. Thus, even if the
processor had support for cryptographic accelerators (e.g., AES-NI)
that might speed up \stname{spatial}\xspace, the \textsc{Elasticlave}\xspace speedups due to
zero-copy sharing alone remain significant and out-perform \stname{spatial}\xspace.
\emph{Observations:}
On all the workloads measured, \stname{\textsc{Elasticlave}\xspace}\xspace is $2$-$3$ orders of
magnitude faster than \stname{spatial}\xspace.
\paragraph{Real-World Benchmark 3: ML Inference.}
We run $4$ machine learning models for image
classification~\cite{privado} to measure \textsc{Elasticlave}\xspace performance on applications with minimal
data sharing needs (Figure~\ref{fig:privado}).
Each of the inference models runs with a single enclave thread
and one shared region between
the enclave and the OS to load input images.
\begin{figure}[!tb]
\centering
\includegraphics[width=.75\linewidth]{privado.pdf}
\caption{Cycles spent running each ML model.}
\label{fig:privado}
\end{figure}
\emph{Observations:}
The $3$ settings have similar performance. Thus, \textsc{Elasticlave}\xspace does not
slow-down CPU-intensive applications that do not share data
extensively.
\begin{framed}
Compared to \stname{spatial}\xspace, \textsc{Elasticlave}\xspace improves I/O-intensive
workload performance by up to
$600\times$ (data size $>64$KB) and demonstrates $50$\% higher
bandwidth. For shared-memory benchmarks, it gains
up to a $1000\times$ speedup.
\end{framed}
\paragraph{Comparison to Native.}
\textsc{Elasticlave}\xspace
is up to $90\%$ as performant as \stname{native}\xspace (traditional Linux
processes with no enclave isolation) for a range of our benchmarks (Figures~\ref{fig:splash2} and~\ref{fig:privado}).
\textsc{Elasticlave}\xspace performs comparably to \stname{native}\xspace for frequent sharing of
large-size data (Figures~\ref{subfig:iozone-writer-native} and~\ref{subfig:iozone-reader-native}).
\subsection{Impact on Implementation Complexity}
\label{sec:overhead}
We report on the \textsc{Elasticlave}\xspace TCB, hardware chip area, context switch cost, and critical path latency.
\paragraph{TCB.}
\textsc{Elasticlave}\xspace does not incur any change to the hardware. It only
requires additional PMP entries, one entry per shared region.
The \textsc{Elasticlave}\xspace TCB is $6814$ \loc (Table~\ref{table:tcb}): $3085$ \loc
implement the \textsc{Elasticlave}\xspace interface in $\tt{m-mode}$, and $3729$ \loc use the interface in an enclave.
\begin{table}[!tb]
\centering
\resizebox{0.4\textwidth}{!}{
\begin{tabular}{lrr}
\toprule
\textbf{Function} & \textbf{\textsc{Elasticlave}\xspace} & \textbf{Enclave} \\
& \textbf{Privileged TCB} & \textbf{Runtime} \\
\midrule
{$\tt{uid}$}\xspace management & 1070 & 0 \\
Permission matrix enforcement & 574 & 0 \\
\textsc{Elasticlave}\xspace instruction interface & 219 & 82 \\
Argument marshaling & 0 & 88 \\
Wrappers for \textsc{Elasticlave}\xspace interface & 0 & 1407 \\
Miscellaneous & 960 & 1869 \\
\midrule
Total & 3085 & 3729 \\
\bottomrule
\end{tabular}
}
\caption{Breakdown in LoC of \textsc{Elasticlave}\xspace TCB \& Enclave runtime libraries (not in \textsc{Elasticlave}\xspace TCB).}
\label{table:tcb}
\end{table}
\paragraph{Context Switches.}
Context switching between enclaves and the OS
incurs PMP changes.
Thus, the overhead may change with the
number of PMP-protected memory regions. To empirically measure this,
we record the percentage of cycles spent on
context switches in either direction for a workload that
never explicitly switches out to the OS (therefore all context
switches from the enclave to the OS are due to interrupts).
The percentage overhead increases
linearly with the number of memory regions but is negligibly
small: $0.1\%$ for $1$ memory region and $0.15\%$ for $4$ memory
regions.
\newcommand{$1$ GHz}{$1$ GHz}
\paragraph{Hardware Critical Path Delay.}
To determine the
critical path of the hardware design
and examine if the PMP entries are on this path,
we push the design to a target frequency of
$1$ GHz{}\footnote{We set the frequency higher than
$800$~MHz{} (the frequency of our successful synthesis) to
push the optimization limit of the design so we can
identify its bottleneck.}. We measure
the global critical path latency, which is of the whole core, and the
critical path through the PMP registers.
With this, we compute the slack, which is the desired delay minus the actual delay---a larger slack corresponds to a smaller actual delay relative to the desired delay. We find that the slack through the PMP is significantly better than that of the global critical path. With $16$ PMP entries, the slack through the PMP is $-44.1$ picoseconds compared to $-190.1$ picoseconds for the global critical path.
In other words, the PMP would allow for a higher clock speed, but the rest of the design prevents it.
Thus, the number of PMP entries is {\em not the bottleneck of the
timing of the hardware design}.
We verified that PMPs are not on the critical path for the $8$- and $32$-entry configurations as well (details elided due to space). As a direct result, the number of PMP entries does not create a performance bottleneck for any instruction (e.g., load/store, PMP read/write) in our tests.
\paragraph{Area.}
The only impact of \textsc{Elasticlave}\xspace on \mbox{RISC-V}\xspace hardware is the increased
PMP pressure (i.e., one PMP entry per shared region), which
increases chip area requirements. We synthesize RocketChip with
different numbers of PMP registers and collect the area statistics.
The range we explore goes beyond the limit of $16$ in the standard
\mbox{RISC-V}\xspace ISA specification. Figure~\ref{fig:area-breakdown}
exhibits the increase in the total area with increasing numbers of PMP
entries. The increase is not significant: starting from $0$ PMP
entries, every $8$ additional PMP entries only incur a $1\%$
increase in the total area.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.6\linewidth]{area-breakdown.pdf}
\caption{RocketChip area vs. number of PMP entries.}
\label{fig:area-breakdown}
\end{figure}
\begin{framed}
\textsc{Elasticlave}\xspace does not significantly increase software TCB size
(\textasciitilde{}$6800$ LOC), critical path delay, or
hardware area pressure
(\textasciitilde{}$1\%$ per $8$ PMP entries), which shows that
the design scales well with the number of regions.
\end{framed}
\section{Related Work}
\label{sec:related}
Isolation abstractions are of long-standing importance to security.
There has been extensive work on partitioning security-critical
applications with software isolation abstractions: namespace
isolation (e.g., containers), software-based techniques (e.g.,
SFI~\cite{sfi}, native-client~\cite{nacl}), language-based isolation
(java-capabilities~\cite{mettler2010joe},
web-sandboxing~\cite{jit-sandboxing}), OS-based
sandboxing~\cite{tx-box}, and using hypervisors (e.g., virtual
machines~\cite{xen, sp3}). Our work is on hardware support for
isolation, namely using enclave TEEs.
\textsc{Elasticlave}\xspace draws attention to a single point in the design space of
memory models that TEEs support, specifically the model's
impact on memory sharing. The predominant model used today is
that of spatial isolation, which is used in Intel SGX, as well as
others (e.g., TrustZone~\cite{armtrustzone}, AMD SEV~\cite{amd-sev,
amd-sev-es}). \textsc{Elasticlave}\xspace explains the conceptual drawbacks of this
model and offers a relaxation that enables better performance. Intel
SGX v2 follows the spatial isolation design, with the exception that
permissions and sizes of private regions can be changed
dynamically~\cite{mckeen2016sgx, xing2016sgx}. As a result, the
``all-or-none'' trust division between enclaves remains the same as
in v1.
TEEs, including SGX, have received extensive security scrutiny. Recent
works have shown the challenges of leakage via side-channels. New TEEs
have focused on providing better micro-architectural resistance to
side-channels~\cite{mi6}. Designs providing stronger confidentiality
due to oblivious execution techniques have been
proposed~\cite{maas2013phantom}. Keystone is a framework for
experimenting with new enclave designs, which uses the spatial
isolation model~\cite{dayeol2020keystone}.
Several TEE designs have been proposed prior to Intel SGX which show
the promise and feasibility of enclave
TEEs~\cite{bastion,shinde2015podarch, secureme,
secureblue, xom, aegis}.
After the availability of SGX in commercial
CPUs~\cite{sgx2014progref}, several works have shed light on the
security and compatibility improvements possible over the original SGX
design~\cite{costan2016sanctum,dayeol2020keystone,ferraiuolo2017komodo,mckeen2016sgx,xing2016sgx,park2020nestedenclave,mi6,sanctuary}.
They allow for better security,
additional classes of applications, better memory allocation,
and hierarchical security protection.
Nevertheless, they largely adhere to the {\em spatial isolation
model} and have not explicitly challenged its assumptions.
TEE-based designs for memory sharing and TCB reduction are similar in
spirit to mechanisms used in hypervisors and microkernels---for
example, page sharing via EPT tables~\cite{cloudvisor,
chaos} and IOMMU implementations for memory-mapped devices such as GPUs
or NICs~\cite{damn}.
The key difference is in the trust model: Hypervisors~\cite{xen} and
microkernels~\cite{liedtke1995microkernel} are entrusted to make
security decisions on behalf of VMs, whereas in enclaved
TEEs, the privileged software is untrusted and enclaves self-arbitrate
security decisions. Further, microkernels and monolithic kernels
operate in system mode (e.g., S-mode in \mbox{RISC-V}\xspace), which is in the
TCB, and they are larger compared to (say) the \textsc{Elasticlave}\xspace TCB.
Emerging proposals such as Intel TDX~\cite{tdx}, Intel MKTME~\cite{mktme},
Intel MPK~\cite{mpk}, and Donky~\cite{donky} enable hardware-enforced domain protection.
However, they protect entire virtual machines or
groups of memory pages (in contrast to enclaves in Intel
SGX).
Notably, they extend fast hardware support to protect the physical memory
of a trust domain (e.g., from a physical adversary) but
adhere to the spatial model. They can benefit from the \textsc{Elasticlave}\xspace memory model.
\section{Conclusion}
We present \textsc{Elasticlave}\xspace, a new TEE memory model that allows enclaves to
selectively and temporarily share memory with other enclaves and the
OS.
We demonstrate that \textsc{Elasticlave}\xspace eliminates expensive data copy operations
while maintaining the same level of application-desired security.
Our \textsc{Elasticlave}\xspace prototype on a RISC-V FPGA core offers $1$--$2$ orders of
magnitude performance improvement over existing models.
\section*{Acknowledgments}
We thank
Aashish Kolluri,
Burin Amornpaisannon,
Dawn Song,
Dayeol Lee,
Jialin Li,
Li-Shiuan Peh,
Roland Yap,
Shiqi Shen,
Yaswanth Tavva,
Yun Chen, and
Zhenkai Liang
for their feedback on improving earlier drafts of the paper.
The authors acknowledge the support from the Crystal Center and
Singapore National Research Foundation (``SOCure'' grant
NRF2018NCR-NCR002-0001 www.green-ic.org/socure).
This material is in part based upon work supported by the National
Science Foundation under Grant No. DARPA N66001-15-C-4066 and Center
for Long-Term Cybersecurity. Any opinions, findings, and conclusions
or recommendations expressed in this material are those of the
authors and do not necessarily reflect the views of the National
Science Foundation.
\bibliographystyle{plain}
\section{Introduction}
Consider a sequence $(M_i)_i$ of $m$-dimensional manifolds in a subset $\Omega\subset \mathbb R^{m+1}$ with mean curvature bounded by some $h<\infty$ and such that the boundaries have uniformly bounded measure in compact sets:
\[
\limsup_{i\to\infty} \mathcal H^{m-1}(\partial M_i \cap K) < \infty, \qquad \forall K \, \text{compact}.
\]
Let $Z$ be the set of points at which the areas of the $M_i$ blow up:
\[
Z := \{ x\in \Omega: \text{ $\limsup_i \mathcal H^m(M_i\cap B_r(x)) = \infty$ for every $r>0$} \},
\]
i.e. $Z$ is the smallest closed subset of $\Omega$ such that the areas of the $M_i$
are uniformly bounded as $i\to\infty$ on compact subsets of $\Omega\setminus Z$.
In the recent paper \cite{White2016}, White finds natural conditions implying that $Z$ is empty. These results are useful since, if $Z$ is empty, then
the areas of the $M_i$ are uniformly bounded on all compact subsets of $\Omega$. It follows that, up to subsequences, the $M_i$ converge in the sense of varifolds to a limit varifold
of locally bounded first variation.
The main point of \cite{White2016} is to show that the set $Z$ belongs to the class of $(m,h)$-sets. The notion of $(m,h)$-set is a generalization of the concept of an $m$-dimensional, properly embedded submanifold without boundary and with mean curvature bounded
by $h$.\footnote{In particular, in \cite{White2016}, it is shown that if $M$ is a smooth, properly embedded, $m$-dimensional submanifold without boundary,
then $M$ is an $(m,h)$-set if and only if its mean curvature is bounded by $h$.} In particular these sets satisfy a maximum principle which often allows one to show that they are empty.
The aim of this paper is to extend the aforementioned results of \cite{White2016} to co-dimension one manifolds (or, more generally, to co-dimension one varifolds) which are stationary with respect to a parametric integrand \(F\).
Referring to Section \ref{sec:preliminaries} below for more details and definitions, we simply recall here that a parametric integrand is a map \(F: \Omega \times \mathbb R^{m+1}\to \mathbb R^+\) which is one-homogeneous, even and {\em convex} in the second variable. For a smooth \(m\)-dimensional manifold \(M\subset \mathbb R^{m+1}\) with normal \(\nu_{M}\) we define, for every open set \(\Omega\subset \mathbb R^{m+1}\),
\[
\FF(M, \Omega)= \int_{M\cap \Omega} F(x,\nu_{M}) d\mathcal H^{m}.
\]
A smooth manifold is then said to be $F$-stationary in \(\Omega\) (resp. \(F\)-stable) if
$$
\frac{d}{dt}\FF\big ( \varphi_t(M),\Omega\big)\Big|_{t=0}=0 \qquad\Biggl(\text{resp. } \quad \frac{d^2}{dt^2}\FF\big ( \varphi_t(M),\Omega\big)\Big|_{t=0}\ge0\Biggr)
$$
for every one-parameter family of diffeomorphisms \(\varphi_t(x)=x+tg(x)\) (for \(t\) small enough) generated by a vector field \(g\in C_c^1(\Omega,\mathbb R^{m+1})\).
In this setting our main result reads as follows, see Theorem \ref{thm:area-blowupset} for the more general statement and Definition \ref{def:mh} for the definition of \((m,h)\)-sets with respect to a given integrand \(F\):
\begin{Theorem}\label{thm:mainintro}
Let $(M_i)_i$ be a sequence of \(F\)-stable \(m\)-dimensional manifolds with \(F\)-mean curvature bounded by some $h>0$ and such that, for every compact set $K\subset\Omega$,
\begin{equation*}
\limsup_{i}\mathcal H^{m-1}(\partial M_i\cap K)<+\infty.
\end{equation*}
Then the area blow-up set
\[
Z:= \{ x \in \overline \Omega \colon \limsup_{i \to \infty} \mathcal H^m(M_i\cap B_r(x))=+\infty \text{ for every $r>0$ } \}
\]
is an $(m,h)$-set in $\Omega$ with respect to $F$.
\end{Theorem}
Besides its intrinsic interest, our main motivation for Theorem \ref{thm:mainintro} is that, in contrast to the case of the area functional, for manifolds which are stationary with respect to a parametric integrand no monotonicity formula is available \cite{Allard74}. In particular, a local area bound of the form
\begin{equation}\label{estimate}
\mathcal H^{m}(M\cap B_r(x))\le C(M,m) r^m
\end{equation}
is not known to hold. This prevents, a priori, the possibility of establishing the convergence of the rescaled surfaces \(M_{x,r}=(M-x)/r\) in order to study the local behavior of a stationary surface. Note that, for (isotropic) minimal surfaces, \eqref{estimate} is a trivial consequence of the monotonicity formula.
Using Theorem \ref{thm:mainintro}, we can prove boundary curvature estimates for two-dimensional \(F\)-stable surfaces, see also Theorem \ref{thm:curvature estimates at the boundary} for a more general statement:
\begin{Theorem}\label{thm:boundaryintro} Let \(\Omega\subset \mathbb R^3\) be uniformly convex, \(F\) be a uniformly elliptic integrand and let $\Gamma\subset \Omega$ be a $C^{2,\alpha}$ embedded curve. Let $M$ be an $F$-stable, $C^2$ \(2\)-dimensional embedded surface in $\Omega$ such that $\partial M = \Gamma$. Then there exist a constant $C>0$ and a radius $r_1>0$ depending only on $F, \Omega, \Gamma$ such that
\begin{equation*}
\sup_{\substack{p\in \Omega \\ \mathop{\mathrm{dist}}(p, \Gamma)< r_1}} r_1 |A_M(p)| \le C,
\end{equation*}
where \(A_M\) is the second fundamental form of \(M\). Furthermore, the constants are uniform as long as \(\Gamma\), \(\Omega\) and \(F\) vary in compact subsets of, respectively, embedded \(C^{2,\alpha}\) curves, uniformly convex domains and uniformly convex \(C^2\) integrands.
\end{Theorem}
Let us conclude this introduction with a few remarks on the proofs of the main results. To prove Theorem \ref{thm:area-blowupset}, we follow the proof of White in \cite{White2016}: we aim to show that if the blow-up set is not an \((m,h)\)-set, then one can provide a vector field yielding a negative first variation. This vector field is what in \cite{SolomonWhite} is called an \(F\)-decreasing vector field, and its construction seems to be possible only in co-dimension one, which is the reason for our restriction to this setting. The proof of the boundary curvature estimates will easily follow from~\cite{White1987}, once we show that the mass density ratios
\[
\frac{\mathcal H^{2}(M\cap B_r(x))}{r^2}
\]
are bounded. In the interior we can rely on the extended monotonicity formula for \(2\)-dimensional varifolds with curvature in \(L^2\) (note that by stability one easily proves that locally \(|A|\in L^2\)). At the boundary we perform a rescaling argument and we use our assumption on \(\Omega\) to show that the area blow-up set of the sequence of rescaled surfaces must be contained in a wedge. Since Theorem \ref{thm:area-blowupset} implies that this is a \((2,0)\)-set, a simple maximum principle argument shows that it is empty, yielding the desired bound.
\subsection*{Organization of the paper} The paper is organized as follows: in Section \ref{sec:preliminaries} we recall some preliminary results and definitions and we compute the explicit formula for the first variation of a smooth manifold. In Section \ref{sec:mh} we give the definition of \((m,h)\)-sets, we show some of their properties and we prove Theorem \ref{thm:area-blowupset}, from which Theorem \ref{thm:mainintro} readily follows. In Section \ref{sec:bound} we prove Theorem \ref{thm:curvature estimates at the boundary}, which implies Theorem \ref{thm:boundaryintro}.
\subsection*{Acknowledgements}
The work of G.D.P. is supported by the INDAM-grant ``Geometric Variational Problems''.
\section{Notation and preliminaries}\label{sec:preliminaries}
We work on an open set \(\Omega\subset \mathbb R^{m+1}\) and we set $B_r(x)=\{y\in\mathbb R^{m+1}:|x-y|<r\}$, \(B_r=B_r(0)\) and $B:=B_{1}(0)$. We will denote \(m\)-dimensional balls by \(B^m_r(x)\) and we set \(B_r^m=B_r^m(0)\) and \(B^m=B_1^m\). We also let \(\mathbb S^{m}\) be the unit sphere in \(\mathbb R^{m+1}\).
For a matrix \(A\in \mathbb R^{m+1}\otimes\mathbb R^{m+1}\), \(A^*\) denotes its transpose. Given \(A,B\in \mathbb R^{m+1}\otimes\mathbb R^{m+1}\), we define \( A:B=\trace A^* B=\sum_{ij} A_{ij} B_{ij}\), so that \(|A|^2= A:A\).
\subsection*{Varifolds}
We denote by \(\mathcal M_+(\Omega)\) (respectively \(\mathcal M(\Omega,\mathbb R^n)\), \(n\ge1\)) the set of positive (resp. \(\mathbb R^n\)-valued) Radon measures on \(\Omega\). Given a Radon measure \(\mu\), we denote by \(\mathrm{spt} \mu\) its support. For a Borel set \(E\), \(\mu\mathop{\llcorner} E\) is the restriction of \(\mu\) to \(E\), i.e. the measure defined by \([\mu\mathop{\llcorner} E](A)=\mu(E\cap A)\). For an \(\mathbb R^n\)-valued Radon measure \(\mu\in \mathcal M(\Omega,\mathbb R^n)\), we denote by \(|\mu|\in \mathcal M_+(\Omega)\) its total variation and we recall that, for all open sets \(U\),
\[
|\mu|(U)=\sup\Bigg\{ \int\big \langle \varphi(x) ,d\mu(x)\big\rangle\,:\quad \varphi\in C_c^\infty(U,\mathbb R^n),\quad \|\varphi\|_\infty \le 1 \Bigg\}.
\]
If \(\eta :\mathbb R^{m+1}\to \mathbb R^{m+1}\) is a Borel map and \(\mu\) is a Radon measure, we let \(\eta_\# \mu=\mu\circ \eta^{-1}\) be the push-forward of \(\mu\) through \(\eta\).
An $m$-varifold on \(\Omega\) is a positive Radon measure $V$ on $\Omega\times \mathbb S^{m}$ which is even in the \(\mathbb S^{m}\) variable, i.e. such that
$$
V(A\times S)=V(A\times (-S))\qquad\mbox{for all $A\subset \Omega$, $S\subset \mathbb S^{m}$.}
$$
We will denote with \(\mathbb V_m(\Omega)\) the set of all \(m\)-varifolds on \(\Omega\).
Given a diffeomorphism $\psi \in C^1(\Omega,\mathbb R^{m+1})$, we define the push-forward of $V\in\mathbb V_m(\Omega)$ with respect to $\psi$ as the varifold $\psi^\#V\in \mathbb V_m(\psi(\Omega))$ such that
\begin{multline*}
\int_{G(\psi(\Omega))}\Phi(x,\nu)d(\psi^\#V)(x,\nu)\\
=\int_{G(\Omega)}\Phi\left(\psi(x),\frac{((d_x\psi(x))^{-1})^*(\nu)}{|((d_x\psi(x))^{-1})^*(\nu)|}\right)J\psi(x,\nu^\perp) dV(x,\nu),
\end{multline*}
for every $\Phi\in C^0_c(G(\psi(\Omega)))$. Here $d_x\psi(x)$ is the differential mapping of $\psi$ at $x$ and
\[
J\psi(x,\nu^\perp):=\sqrt{\det\Big(\big(d_x\psi\big|_{\nu^\perp}\big)^*\circ d_x\psi\big|_{\nu^\perp}\Big)}
\]
denotes the $m$-Jacobian determinant of the differential $d_x\psi(x)$ restricted to the $m$-plane $\nu^\perp$,
see \cite[Chapter 8]{Simon}.
\subsection*{Integrands}
The anisotropic (elliptic) integrands that we consider are \(C^2\) positive functions
$$F:\Omega\times (\mathbb R^{m+1}\setminus\{0\})\to \mathbb R^+$$
which are even, one-homogeneous and convex in the second variable, i.e.
\begin{equation*}\label{e:identification}
F(x, \lambda \nu)=|\lambda| F(x, \nu)
\end{equation*}
and
\[
F(x,\nu_1+\nu_2)\le F(x,\nu_1)+F(x,\nu_2).
\]
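Note that, for a one-homogeneous map, the above subadditivity is in fact equivalent to convexity in the second variable: if \(F(x,\cdot)\) is convex, then by homogeneity
\[
F(x,\nu_1+\nu_2)=2\,F\Big(x,\tfrac{\nu_1+\nu_2}{2}\Big)\le F(x,\nu_1)+F(x,\nu_2),
\]
while, conversely, subadditivity and homogeneity give \(F(x,t\nu_1+(1-t)\nu_2)\le t\,F(x,\nu_1)+(1-t)F(x,\nu_2)\) for every \(t\in[0,1]\).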
We will denote by $D_1F(x,\nu)$ and $D_2F(x,\nu)$ the differentials of $F$ in the first and in the second variable, respectively. Denoting by $\{e^x_i\}_{i=1}^{m+1}$ the Euclidean basis in $\mathbb R_{x}^{m+1}$ and by $\{e^\nu_i\}_{i=1}^{m+1}$ the Euclidean basis in $\mathbb R_{\nu}^{m+1}$, we set
\begin{equation}\label{e:fj}
\begin{aligned}
&F_{i}(x,\nu):=\langle D_2F(x,\nu), e^\nu_i \rangle, &&( \partial_i F_j)(x,\nu)=D_{12}F(x,\nu):e^x_i \otimes e^\nu_j
\\
&\qquad \text{ and } &&F_{ij}(x,\nu):= D^2_2F(x,\nu):e^\nu_i \otimes e^\nu_j .
\end{aligned}
\end{equation}
Note that by one-homogeneity:
\begin{equation}\label{eulercod1}
\langle D_2 F(x, \nu),\nu\rangle = F(x,\nu)\qquad\mbox{for all \(\nu\in\mathbb R^{m+1}\setminus \{0\}\).}
\end{equation}
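Indeed, \eqref{eulercod1} follows by differentiating the identity \(F(x,\lambda\nu)=\lambda F(x,\nu)\), valid for \(\lambda>0\), with respect to \(\lambda\) and evaluating at \(\lambda=1\). Differentiating the same identity in \(\nu\) shows instead that \(D_2F(x,\cdot)\) is zero-homogeneous, i.e.
\[
D_2F(x,\lambda\nu)=D_2F(x,\nu)\qquad\text{for all }\lambda>0.
\]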
An integrand \(F\) is said to be uniformly elliptic on a set \(\Omega\) if there exists a constant \(\lambda>0\) such that
\[
\langle D_2^2F(x,\nu)\eta,\eta\rangle\ge \lambda|\eta|^2\qquad\text{for all \(x\in \overline{\Omega}\), \(\nu \in \mathbb{S}^m\), \(\eta\perp \nu\)}.
\]
Given \(x\in \Omega\), we will denote by \(F_x\) the ``frozen'' integrand
\begin{equation*}\label{frozen}
F_x:\mathbb S^m \to (0,+\infty), \qquad F_x(\nu):= F(x,\nu).
\end{equation*}
We define the {\em anisotropic energy} of \(V\in \mathbb V_m(\Omega)\) as
\begin{equation*}\label{eq:eV}
\FF(V,\Omega) := \int_{G(\Omega)} F(x,\nu)\, dV(x,\nu).
\end{equation*}
For a vector field \(g\in C_c^1(\Omega,\mathbb R^{m+1})\), we consider the family of functions \(\varphi_t(x)=x+tg(x)\), and we note that they are diffeomorphisms of \(\Omega\) into itself for \(t\) small enough. The {\em anisotropic first variation} is defined as
\[
\delta_F V(g):=\frac{d}{dt}\FF\big ( \varphi_t^{\#}V,\Omega\big)\Big|_{t=0}.
\]
It can be easily shown, see \cite[Appendix A]{DePhilippisDeRosaGhiraldin}, that
\begin{equation}\label{eq:firstvariation}
\delta_F V(g)
=\int_{G(\Omega)} \Big[\langle D_1F(x,\nu),g(x)\rangle+ B_F(x,\nu):Dg(x) \Big] dV(x,\nu),
\end{equation}
where the matrix \(B_F(x,\nu)\in \mathbb R^{m+1}\otimes\mathbb R^{m+1}\) is uniquely defined by
\begin{equation}\label{eq:range of B}
B_F(x,\nu):=F(x,\nu)\mathrm{Id}-\nu \otimes D_2 F(x,\nu),
\end{equation}
see for instance~\cite[Section 3]{Allard84BOOK} or~\cite[Lemma A.4]{dephilippismaggi2}. We will often omit in the sequel the dependence on $F$ of the matrix $B_F(x, \nu)$. Moreover, let us note the following useful fact:
\begin{equation}\label{e:rangeB}
B(x,\nu)\nu=0\qquad\text{or equivalently}\qquad \range B^*(x,\nu)=\nu^\perp.
\end{equation}
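Both facts in \eqref{e:rangeB} are immediate consequences of the Euler relation \eqref{eulercod1}: indeed
\[
B(x,\nu)\nu=F(x,\nu)\nu-\nu\,\langle D_2F(x,\nu),\nu\rangle=F(x,\nu)\nu-F(x,\nu)\nu=0,
\]
and, for every \(w\in\mathbb R^{m+1}\), \(\langle \nu, B^*(x,\nu)w\rangle=F(x,\nu)\langle\nu,w\rangle-\langle D_2F(x,\nu),\nu\rangle\langle\nu,w\rangle=0\), so that the range of \(B^*(x,\nu)\) is contained in \(\nu^\perp\).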
We say that a varifold \(V\in \mathbb V_m(\Omega)\) has locally bounded anisotropic first variation if $\delta_F V$ is a Radon measure on $\Omega$, i.e. if
$$|\delta_F V(g)|\leq C(K)\|g\|_{\infty}, \quad \text{ for all $g \in C^1_c(\Omega,\mathbb R^{m+1})$ with spt$(g)\subset K \subset\subset \Omega$}.$$
Notice that, by the Riesz representation theorem, we can write
$$\delta_F V(g) = - \int_\Omega \langle w, g \rangle d \|\delta_F V\|, \quad \text{ for all } g \in C^1_c(\Omega, \mathbb R^{m+1})$$
where $\|\delta_F V\|$ is the total variation of $\delta_F V$ and $w$ is $\|\delta_F V\|$-measurable with $|w|= 1$ $\|\delta_F V\|$-a.e. in $\Omega$.
In this case, by the Radon--Nikodym theorem, we can decompose $\|\delta_FV\|$ into its absolutely continuous and singular parts with respect to the measure $\|V\|$:
\begin{equation}\label{rappr}
\delta_F V(g)= - \int_\Omega\langle \overline{H_F}, g \rangle \,d\|V\|(x)+\int_\Omega \langle w, g \rangle \,d\sigma, \quad \text{ for all } g \in C^1_c(\Omega, \mathbb R^{m+1}).
\end{equation}
Notice that by the disintegration theorem for measures, see for instance~\cite[Theorem 2.28]{AmbrosioFuscoPallara00}, we can write
\[
V(dx,d\nu)= \|V\|(dx)\otimes \mu_x(d\nu),
\]
where \(\mu_x\in \mathcal P(\mathbb S^m)\) is a (measurable) family of even probability measures.
We define for $\|V\|$-a.e. $x \in \Omega$
$$
H_F(x):= \frac{\overline{H_F(x)}}{\int_{\mathbb S^m}F(x, \nu)d \mu_x(\nu)}.
$$
We will say that a varifold \(V\in \mathbb V_m(\Omega)\) has mean curvature $H_F(x)$ in $L^1(\norm{V}, \mathbb R^{m+1})$ if it has locally bounded anisotropic first variation and in the representation \eqref{rappr}, we have $\sigma=0$. In this case one can easily check that
\begin{equation}\label{eq:F-mean curvature 1}
\delta_FV(g) = -\int_{G(\Omega)} \langle H_F, g \rangle \;F(x,\nu)\,dV(x,\nu) \text{ for all } g\in C^1_c(\Omega, \mathbb R^{m+1}).
\end{equation}
Furthermore we will say that $H_F(x)$ is bounded by $h \in \mathbb R$ if
$$\norm{H_F}_{F,x} := F(x,H_F(x))\le h.$$
In particular we say that a varifold \(V\in \mathbb V_m(\Omega)\) has anisotropic mean curvature bounded by $h(x) \in L^1(\norm{V},\mathbb R^+)$ if
\begin{equation}\label{eq:F-mean curvature 2}
\delta_FV(g) \le \int_{G(\Omega)} h(x) \norm{g}_{F^*,x} F(x,\nu)\,dV(x,\nu) \text{ for all } g \in C^1_c(\Omega, \mathbb R^{m+1}),
\end{equation}
where
\[
\norm{w}_{F^*,x}= F^*(x, w)=\sup_{v: \, F(x,v)\le 1} \langle v,w\rangle.
\]
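For instance, for the area integrand \(F(x,\nu)=|\nu|\) one has \(\norm{w}_{F^*,x}=\sup_{|v|\le 1}\langle v,w\rangle=|w|\), so that \eqref{eq:F-mean curvature 2} reduces to the classical notion of a varifold with (isotropic) mean curvature bounded by \(h\).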
\begin{Remark}
Since all norms are equivalent on finite dimensional spaces, the above definition coincides with the classical one. However the above formulation has the advantage of being coordinate independent, namely if \(\Phi: \mathbb R^{m+1}\to \mathbb R^{m+1}\) is a diffeomorphism and \(V\) has \(F\)-mean curvature bounded by \(h\) then \(\Phi^{\#} V\) has \(\Phi^{\#}F\)-mean curvature still bounded by \(h\) where \(\Phi^{\#}F\) is the integrand defined by
\[
\Phi^{\#}F(x,\nu)=F\left(\Phi^{-1}(x),(d_x\Phi(\Phi^{-1}(x)))^{*}(\nu)\right) \abs{\det(d_x\Phi^{-1}(x))}
\]
and it satisfies
\[
\Phi^{\#}F(\Phi^{\#} V,\Phi(\Omega))=\FF(V,\Omega).
\]
In particular, the anisotropic mean curvature $H_{\Phi^\# F}$ of the varifold $\Phi^\# V$ is $(d\Phi^*)^{-1}H_F$, where $H_F$ is the anisotropic mean curvature of the varifold $V$.
\end{Remark}
We conclude this section by computing the first variation formula for the varifold induced by a manifold with boundary and by providing an explicit formula for its \(F\)-mean curvature.
\begin{Proposition}
Let \(M\subset \mathbb R^{m+1}\) be an oriented $C^2$ $m$-manifold with boundary, and let
\[
V_M:=\mathcal H^m \mathop{\llcorner} M \otimes \left ( \frac 12\delta_{\nu_x}+\frac 12 \delta_{-\nu_x}\right ),
\]
where \(\nu_x\) is the normal to \(M\) at \(x\). Then
\begin{equation}\label{eq:boundaryvariation}
\begin{split}
\delta_FV_M(g) = \int_{\partial M} \langle B(x,\nu_x)\eta(x),g(x) \rangle d \mathcal H^{m-1} -\int_{M} \langle H_F(x,M), g(x) \rangle \;F(x,\nu_x)d\mathcal H^m,
\end{split}
\end{equation}
for all $g\in C^1_c(\Omega, \mathbb R^{m+1})$. Here $\eta(x)$ denotes the conormal of $\partial M$ at $x$, \(H_F(x,M)\) is parallel to \(\nu_x\) and satisfies
\begin{equation}\label{eq:H_M}
-F(x,\nu_x)H_{F}(x,M)= \Bigl(D_2^2F(x,\nu):A+\sum_{i} (\partial_i F_i)(x,\nu)\Bigr)\nu_x.
\end{equation}
Here $A$ is the second fundamental form\footnote{Note that by this sign convention the second fundamental form is positive definite for a convex set with respect to the \emph{outer} normal.} of $M$ defined by
\[
A(\tau_1,\tau_2)= \langle \tau_1 , D_{\tau_2} \nu\rangle \qquad\text{for $\tau_1,\tau_2 \in T_xM$},
\]
and we are adopting the convention in \eqref{e:fj}.
\end{Proposition}
Note that \eqref{eq:H_M} gives
\[
\norm{H_F}_{F,x}= \abs{ \Bigl(D_2^2F(x,\nu):A+\sum_{i} (\partial_i F_i)(x,\nu)\Bigr)}.
\]
Moreover, by \eqref{eq:H_M} and the homogeneity of \(F\), if \(M=\{f=0\}\) locally around \(x\) for a \(C^2\) function \(f\) with \(D f(x)\ne 0\), then
\begin{equation}\label{calcolo}
\begin{split}
-F&(x,Df(x))\Big\langle H_F(x, M), \frac{Df(x)}{|Df(x)|} \Big\rangle \\
&= \trace\left(D^2_2F\left(x, \frac{Df(x)}{|Df(x)|}\right) D^2f(x)\right) + \sum_{i}(\partial_{i} F_i)\left(x, \frac{Df(x)}{|Df(x)|}\right)|Df(x)|.
\end{split}
\end{equation}
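As a consistency check, consider the area integrand \(F(x,\nu)=|\nu|\): then \(D_2^2F(x,\nu)=\mathrm{Id}-\nu\otimes\nu\) for \(\nu\in\mathbb S^m\) and \(\partial_iF_i=0\), so that, since \(A\) is tangential, \eqref{eq:H_M} reduces to
\[
-H_F(x,M)=(\trace A)\,\nu_x,
\]
i.e. to the classical mean curvature vector of \(M\), while \eqref{calcolo} reduces to the usual level-set expression
\[
-|Df|\,\Big\langle H, \frac{Df}{|Df|}\Big\rangle=\trace\Big(\Big(\mathrm{Id}-\frac{Df\otimes Df}{|Df|^2}\Big)D^2f\Big).
\]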
\begin{proof}
Recall that for a vector field \(X\)
\[
\Div X=\Div_M X+\langle D_\nu X,\nu\rangle,
\]
where for any orthonormal basis $\tau_j$ of $T_xM=\nu^\perp$ one has
\[
\Div_M X=\sum_{i} \langle D_{\tau_i}X, \tau_i\rangle.
\]
Hence, if \(e_i\) is the standard orthonormal basis of \(\mathbb R^{m+1}\) and we adopt the Einstein summation convention,
\begin{equation}\label{e:Bg}
\begin{split}
B:Dg&=\Div (B^* g)-\langle \Div B,g\rangle
\\
&=\Div_{M}(B^* g)- \Div_M (B^*e_i)g^i+\langle D_\nu(B^* g),\nu\rangle-\langle D_\nu(B^* e_i),\nu\rangle g^i
\\
&=\Div_{M}(B^* g)-\Div_M (B^*e_i)g^i
\end{split}
\end{equation}
where \(B\) is evaluated at $(x, \nu_x)$ and in the last equality we used that $\langle \nu, B^* D_\nu g\rangle =0$ due to \eqref{eq:range of B}. Note that \(B^*g\) is tangent to \(M\) (again by \eqref{eq:range of B}), hence by the divergence theorem
\[
\begin{split}
\delta_FV(g)
&=\int_{M} B:Dg+\langle D_1F(x,\nu), g\rangle
\\
&= \int_{\partial M} \langle B(x,\nu) \eta, g \rangle - \int_{M} \left(\Div_M(B^*e_i) -\langle D_1F(x,\nu),e_i\rangle \right) g^i.
\end{split}
\]
Hence, if we set
\begin{equation}\label{e:HH}
F(x,\nu) H_F(x,M)=\Big(\Div_M(B^*e_i)- \langle D_1F(x,\nu),e_i\rangle \Big)e_i,
\end{equation}
the proof will be concluded, provided \(H_F(x,M)\) satisfies \eqref{eq:H_M}. This follows by direct computations since
\begin{equation}\label{e:freddo1}
\begin{split}
\Div_M(B^*e_i) &= \langle \tau_j, e_i\rangle \left( F_k D_{\tau_j} \nu^k +\langle D_1F,\tau_j\rangle\right) \\
& \quad - \langle \tau_j, D_{\tau_j}(D_2F)\rangle \langle \nu, e_i\rangle - \langle \tau_k, D_2F\rangle \langle D_{\tau_k}\nu, e_i\rangle \\
&=\langle \tau_j, e_i\rangle \langle D_1F,\tau_j\rangle - \langle \tau_j, D_{\tau_j}(D_2F)\rangle \langle \nu, e_i\rangle,
\end{split}
\end{equation}
where we used that $F_kD_{\tau_j}\nu^k = \langle \tau_h, D_2F\rangle \langle \tau_h, D_{\tau_j}\nu\rangle$ since $\langle \nu, D_{\tau_j}\nu\rangle =0$ and so
\begin{align*}
\langle \tau_j, e_i\rangle F_k D_{\tau_j} \nu^k - &\langle \tau_k, D_2F\rangle \langle D_{\tau_k}\nu, e_i\rangle = \langle \tau_k , D_2 F\rangle \left( \langle \tau_j, e_i\rangle \langle \tau_k, D_{\tau_j}\nu\rangle - \langle D_{\tau_k}\nu, e_i \rangle \right)
\\
&=\langle \tau_k , D_2 F\rangle \langle \tau_j, e_i\rangle \left(\langle \tau_k, D_{\tau_j}\nu\rangle - \langle \tau_j, D_{\tau_k}\nu\rangle\right) =0.
\end{align*}
Now we note that
\begin{equation}\label{e:freddo2}
\begin{split}
\langle \tau_j, e_i\rangle \langle D_1F,\tau_j\rangle&= \langle D_1F,e_i\rangle- \langle \nu, e_i\rangle \langle D_1F,\nu\rangle
\\
&= \langle D_1F,e_i\rangle- \langle \nu, e_i\rangle D_{12}F:\nu \otimes \nu,
\end{split}
\end{equation}
where in the last equality we have used the one-homogeneity of \(D_1F\). Furthermore
\begin{equation}\label{e:freddo3}
\begin{split}
\langle \tau_j, D_{\tau_j}(D_2F)\rangle &=D_{12} F:\tau_j\otimes \tau_j+ D_2^2F(\tau_j, D_{\tau_j}\nu)
\\
&= D_{12} F:\tau_j\otimes \tau_j+ D_2^2F(\tau_j, \tau_\ell)A_{\ell j},
\end{split}
\end{equation}
where \(A_{\ell j}=\langle D_{\tau_j}\nu,\tau_{\ell}\rangle\) is the second fundamental form of \(M\). Combining \eqref{e:freddo1}, \eqref{e:freddo2} and \eqref{e:freddo3}, we get \eqref{eq:H_M} since
\begin{align*}
\Div_M&(B^*(x,\nu)e_i)-\langle D_1F,e_i\rangle \\
&= - \langle \nu, e_i\rangle\Big( D_{12} F:\nu\otimes \nu+ \sum_{j}D_{12} F:\tau_j\otimes \tau_j+D_2^2F(\tau_j, \tau_\ell)A_{\ell j}\Big)
\\
&= - \langle \nu, e_i\rangle\left(\partial_j F_j+ \trace (D_2^2F\, A)\right),
\end{align*}
where in the last equality we have used that, by \eqref{e:fj},
\[
\partial_jF_j=\sum_{j}D_{12} F:e_j\otimes e_j=D_{12} F:\nu\otimes \nu+ \sum_{j}D_{12} F:\tau_j\otimes \tau_j\,.
\]
\end{proof}
\begin{Remark}\label{rmk:fdecrasing}
Let us record here the following consequence of the above computations: if \(X=D_2F(x, a(x)\nu_x)\) on \(M\) with $a\in C^1(M, \mathbb R_+)$, then \(B^* X=0\) and thus, by \eqref{e:Bg}, \eqref{e:HH} we get
\begin{equation}\label{eq:good vectorfield identity -1}
-\langle H_F(x,M), X\rangle \, F(x, \nu_x) = B(x, \nu_x):DX + \langle D_1F(x,\nu_x), X\rangle.
\end{equation}
\(X\) is what is called an \(F\)-decreasing vector field in \cite[Proposition 1]{SolomonWhite} and it will play a crucial role in the proof of our main theorem.
\end{Remark}
\section{$(m,h)$-sets} \label{sec:mh}
In this section, following \cite{White2016}, we define \((m,h)\)-sets and we prove that the area blow-up set of a sequence of varifolds with bounded curvature is an \((m,h)\)-set. Roughly speaking, an \((m,h)\)-set is a set which cannot be touched by manifolds with \(F\langle H_F, \nu \rangle\) greater than \(h\), i.e. they satisfy \(\norm{H_F} \le h\) in the viscosity sense. This can be phrased in several ways, as the following proposition shows.
\begin{Proposition}\label{prop:equivalence}
Given a closed set $Z\subset \mathbb R^{m+1}$, the following three statements are equivalent.
\begin{itemize}
\item[(i)] If $f: \Omega \to \mathbb R$ is a $C^2$-function and if $f|_Z$ has a local maximum at $p$, then
\begin{equation*}
\inf_{v \in \mathbb S^m} F_{ij}(p,v) D_{ij}f(p) + (\partial_{i} F_i)\left(p, \frac{D f(p)}{|D f(p)|}\right)|D f(p)| \le h \,\abs{Df(p)},
\end{equation*}
where the second term in the left hand side is intended to be zero when $D f(p) = 0$.
\item[(ii)] If $f: \Omega \to \mathbb R$ is a $C^2$-function and if $f|_Z$ has a local maximum at $p$ and $D f(p) \neq 0$, then
\begin{equation*}
F_{ij}\left(p,\frac{D f(p)}{\abs{D f(p)}}\right) D_{ij}f(p)+ (\partial_{i} F_i)\left(p,\frac{D f(p)}{|D f(p)|}\right)|D f(p)| \le h\, \abs{Df(p)}.
\end{equation*}
\item[(iii)] If $N$ is a relatively closed domain in $\Omega$ with smooth boundary such that $Z\subset N$ and $p \in Z \cap \partial N$, then the $F$-mean curvature $H_F(p)$ of $\partial N$ satisfies
\[ F(p, \nu_{\textnormal{int.}}(p)) \langle H_F(p), \nu_{\textnormal{int.}}(p) \rangle \le h,\]
\end{itemize}
where \(\nu_{\textnormal{int.}}\) is the \emph{interior} normal to \(N\).
\end{Proposition}
We can now give the following definition:
\begin{Definition}\label{def:mh}
Given an elliptic integrand \(F\) and an open set \(\Omega\subset \mathbb R^{m+1}\), we say that a relatively closed set $Z\subset \Omega$ is an $(m,h)$-set with respect to \(F\) if it satisfies one of the three equivalent conditions of Proposition \ref{prop:equivalence}.
\end{Definition}
Let us prove Proposition \ref{prop:equivalence}.
\begin{proof}[Proof of Proposition \ref{prop:equivalence}]
\emph{(ii) $\Rightarrow$ (iii):} This is an easy consequence of \eqref{calcolo} and of the elementary Lemmas \ref{le1} and \ref{le2} below. Note that $\nu_{\text{int.}}(p)= - \frac{Df(p)}{\abs{Df(p)}}$ if $p \in \partial N$ and $N$ coincides locally with $\{ f \le f(p)\}$.
\emph{(i) $\Rightarrow$ (ii):} Suppose $Z$ fails to have property (ii); we will show that then property (i) cannot be satisfied by $Z$ either. Following the argument in \cite[Lemma 2.4]{White2016}, we can construct a function $f \in C^\infty(\Omega, \mathbb R)$ such that $f|_Z$ attains its maximum at a unique point $p \in Z$, i.e.
$$f(x)<f(p)\qquad \forall x \in Z,$$
$D f(p)\neq 0$, the super-level set $\{ x: f(x)\ge a \}$ is compact for every $a \in \mathbb R$ and
\begin{equation}\label{1}
F_{ij}\left(p,\frac{D f(p)}{\abs{D f(p)}}\right) D_{ij}f(p)+ (\partial_{i} F_i)\left(p,\frac{D f(p)}{|D f(p)|}\right)|D f(p)| > h \abs{Df}(p)
\end{equation}
Up to translation, rotation and multiplication of $f$ by $\abs{D f(p)}^{-1}$, we can assume without loss of generality that $p=0$ and $D f(p)= e_{m+1}$.
It is easy to verify that there exists an open neighborhood $U \ni p$ such that $\Sigma_0:=\{ x \colon f(x)=f(p)\}\cap U$ is a smooth sub-manifold of $\Omega$. Moreover, since $\Sigma_0$ is a level set of $f$, we know that
\begin{equation}\label{normale}
\nu_{\Sigma_0}(p)=D f(p)= e_{m+1},
\end{equation}
where $\nu_{\Sigma_0}(p)$ denotes the unit normal to $\Sigma_0$ at the point $p$.
If we denote with $d(x)$ the signed distance function from $\Sigma_0$
$$d(x):=\text{sign}(f(x)- f(p))\text{dist}(x,\Sigma_0),$$
since $\Sigma_0\cap U$ is smooth, there exists $r>0$ small enough such that $d$ is a smooth function on $B_r(p)$. Moreover $B_r(p)$ is contained in the $r$-neighborhood of $\Sigma_0$, since $p \in \Sigma_0$.
Thanks to \eqref{normale}, we also deduce that
\begin{equation}\label{normale1}
Dd(p)= e_{m+1}.
\end{equation}
We observe that
$$B_r(p) \cap \{ d(x)>0\} \cap Z = \emptyset,$$
otherwise $f(p)$ would not be the maximum of $f|_Z$. We deduce that for every $\lambda >0$ the function
$$g_\lambda(x):= (e^{\lambda d(x)} -1 )$$
satisfies $g_\lambda(x)\le 0$ for every $x \in Z\cap B_r(p)$.
Fix a non-negative cut-off function $\varphi \in C^\infty_c(B_r(p))$ with $\varphi(x)=1$ on $B_{\frac{r}{2}}(p)$ and consider for every $\lambda>0$ the function
\[ f_\lambda(x):= f(x) + \varphi(x) \lambda^{-\frac32} g_\lambda(x).\]
By the above considerations $f_\lambda$ restricted to $Z$ attains its maximum in $p$ and by direct calculations we have that for every $x \in B_{\frac{r}{2}}(p)$
\begin{align*}
D_if_\lambda(x) &= D_i f(x) + \lambda^{-\frac12} D_i d(x) e^{\lambda d(x)}\\
D_{ij} f_\lambda(x) &= D_{ij} f(x) + \lambda^{-\frac12} D_{ij} d(x) e^{\lambda d(x)} + \lambda^{\frac12} D_id(x) D_jd(x)e^{\lambda d(x)}.
\end{align*}
Evaluating the previous derivatives in $p$ and implementing \eqref{normale1}, we get
$$
D f_\lambda(p) = e_{m+1}+\lambda^{-\frac12} D d(p) e^{\lambda d(p)}=(1+ \lambda^{-\frac12}) e_{m+1}$$
and
\begin{equation*}
\begin{split}
D_{ij}f_{\lambda}(p) = D_{ij}f(p) + \lambda^{-\frac12} D_{ij} d(p) + \lambda^{\frac12} (e_{m+1} \otimes e_{m+1})_{ij}.
\end{split}
\end{equation*}
By homogeneity of $F$, we have $F_{m+1, m+1}(p, e_{m+1}) = 0 $, and combining the previous equation with \eqref{1}, we deduce that there exists $\lambda_0>0$ such that for all $\lambda > \lambda_0$
\begin{equation*}
F_{ij}\left(p, e_{m+1}\right) D_{ij}f_\lambda(p) > h \abs{Df_\lambda}(p) - (\partial_{i} F_i)(p, D f_\lambda(p))\abs{Df_\lambda}(p).
\end{equation*}
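To make the role of the penalization explicit, contracting the identity for $D_{ij}f_\lambda(p)$ with $F_{ij}(p, e_{m+1})$ and using $F_{m+1, m+1}(p, e_{m+1})=0$ gives
\[
F_{ij}(p, e_{m+1})\, D_{ij}f_\lambda(p) = F_{ij}(p, e_{m+1})\, D_{ij}f(p) + \lambda^{-\frac12}\, F_{ij}(p, e_{m+1})\, D_{ij} d(p),
\]
so the left hand side of the inequality above converges to $F_{ij}(p, e_{m+1})\, D_{ij}f(p)$ as $\lambda \to \infty$, while, by the $0$-homogeneity of $F_i$ in the second variable, its right hand side converges to $h - (\partial_{i} F_i)(p, e_{m+1})$, since $\abs{Df_\lambda}(p) = 1+\lambda^{-\frac12} \to 1$.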
We conclude that $f_\lambda$ fails condition (i) for $\lambda$ chosen sufficiently large, since
\[ \lim_{\lambda \to \infty} \inf_{v \in \mathbb S^m} F_{ij}(p,v) D_{ij}f_\lambda(p) = F_{ij}(p, e_{m+1}) D_{ij}f(p).\]
Indeed,
for every $v \neq e_{m+1}$, the strict convexity of $F$ implies that $F_{m+1, m+1}(p, v) >0 $, and we can compute
\begin{equation*}
\begin{split}
\lim_{\lambda \to \infty} F_{ij}(p,v) D_{ij}f_\lambda(p)&= F_{ij}(p, v) D_{ij}f(p)\\
& \quad +\lim_{\lambda \to \infty} \left(\lambda^{-\frac12} F_{ij}(p, v) D_{ij} d(p) + \lambda^{\frac12} F_{m+1, m+1}(p, v)\right) = + \infty.
\end{split}
\end{equation*}
\emph{ (ii) $\Rightarrow$ (i):} Suppose that $Z$ fails to have property (i); we will show that this implies that $Z$ does not satisfy property (ii). As in the previous step, we can make use of the argument of \cite[Lemma 2.4]{White2016} and assume without loss of generality that $f \in C^\infty(\Omega, \mathbb R)$, that $f|_Z$ attains its maximum at a unique point $p \in Z$ (i.e. $f(x)<f(p)$ for every $x \in Z\setminus\{p\}$), that the super-level set $\{ x: f(x)\ge a \}$ is compact for every $a \in \mathbb R$, that there exist $r>0$ and $\delta>0$ small enough such that $f(x) < f(p)-\delta $ for all $x \not \in B_r(p)$, and that
\begin{equation*}
\inf_{v \in \mathbb S^m} F_{ij}(p,v) D_{ij}f(p) > h \abs{Df}(p) - (\partial_{i} F_i)\left(p, \frac{D f(p)}{|D f(p)|}\right)|D f(p)|,
\end{equation*}
where the right hand side is intended to be zero when $D f(p) = 0$.
If $\abs{D f}(p) \neq 0$, $Z$ fails to have property (ii) since trivially
\[\inf_{v \in \mathbb S^m} F_{ij}(p, v) D_{ij}f(p)\le F_{ij}\left(p, \frac{D f(p)}{\abs{D f(p)}}\right) D_{ij}f(p).\]
Hence, we are reduced to considering the case $Df(p)=0$, i.e. the case in which there exist $v_0\in \mathbb S^m$ and $\sigma>0$ such that
\begin{equation}\label{eq:caseDF=0}
F_{ij}(p, v_0) D_{ij}f(p)=\inf_{v \in \mathbb S^m} F_{ij}(p,v) D_{ij}f(p) \ge \sigma >0.\end{equation}
We deal with this case by a relaxation argument. Up to translating $Z$ by $p$ and replacing $f$ with $f-f(p)$, we may assume without loss of generality that $p=0$ and $f(0)=0$. Fix $M>0$ with $M \ge \sup\{\abs{f(x)}+ \abs{Df(x)}: x \in B_{2r}(0)\}$. Furthermore, for $\lambda >0$ we define the smooth auxiliary function
\[ g_{\lambda}(x,y):= f(y) - \lambda \abs{x-y}^4. \]
Observe that, by the stated properties of $f$, for every $x \in Z$ and every $y \notin B_r(0)$ we have $g_{\lambda}(x,y)\leq f(y)<-\delta < 0=g_\lambda(0,0)$. Moreover, if $\abs{x-y}^4 > \frac{M}{\lambda}$ and $y \in B_r(0)$, then $g_{\lambda}(x,y) <0$. Hence for each $\lambda >0$
\[ m_{\lambda}:= \sup\{ g_\lambda(x,y) \colon x \in Z , y \in \Omega \} \]
is attained at a pair $(x_\lambda, y_\lambda) \in Z \times B_r(0)$ with $\abs{x_\lambda - y_\lambda}^4 \le \frac{M}{\lambda}$. \\
We moreover observe that $x_\lambda \to 0$ as $\lambda \to +\infty$. Indeed, for every $x\in Z \cap B_{r}(0)\setminus \{0\}$ and every $y \in B_{(\frac{M}{\lambda})^\frac14}(x)$, since $f(x)<0$ we get that for sufficiently large $\lambda$
\begin{equation*}
\begin{split}
g_\lambda(x,y) &\le f(y) = f(x) + (f(y)-f(x))\le f(x) + \sup_{z\in B_{2r}(0)}|Df|(z) \left(\frac{M}{\lambda}\right)^\frac14 \\
&\le f(x) + M \left(\frac{M}{\lambda}\right)^\frac14 < 0=g_\lambda(0,0),
\end{split}
\end{equation*}
which implies that, for $\lambda$ big enough, the maximum point $x_\lambda$ stays away from $x$.\\
Since $y_\lambda \in B_{(\frac{M}{\lambda})^\frac14}(x_\lambda)$, as $\lambda \to +\infty$ we get $x_\lambda - y_\lambda\to 0$ and consequently also $y_\lambda\to 0$.\\
For each couple $(x_\lambda, y_\lambda)$ we distinguish two cases:\\
\emph{First case: $x_\lambda = y_\lambda$.} Since $y \mapsto g_\lambda(x_\lambda, y)$ admits a global maximum at $y_\lambda$, we have $D_yg_\lambda(x_\lambda, y_\lambda)= Df(y_\lambda) =0$ and $D_{y}^2g_\lambda(x_\lambda, y_\lambda) = D^2f(y_\lambda) \le 0$. By convexity of $F$, the matrix $\left(F_{ij}(y, v)\right)_{ij}$ is positive semidefinite for every $(y,v)\in \Omega \times \mathbb S^m$, hence
\[ F_{ij}(x_\lambda, v ) D_{ij}f(x_\lambda)=F_{ij}(y_\lambda, v ) D_{ij}f(y_\lambda) \le 0 \qquad \text{ for every } v \in \mathbb S^{m}. \]
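For the last inequality we use the elementary fact that the Frobenius product of a positive semidefinite symmetric matrix with a negative semidefinite symmetric matrix is non-positive:
\[
A:B = \mathrm{tr}(AB)=\mathrm{tr}\big(A^{\frac12} B A^{\frac12}\big) \le 0 \qquad \text{whenever } A\ge 0,\ B \le 0,
\]
applied here with $A = \left(F_{ij}(y_\lambda,v)\right)_{ij}$ and $B = D^2f(y_\lambda)$.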
Passing this inequality to the limit as $\lambda \to +\infty$, we get
$$F_{ij}(p, v ) D_{ij}f(p) \le 0 \qquad \text{ for every } v \in \mathbb S^{m}, $$
which contradicts \eqref{eq:caseDF=0}.\\
\emph{Second case: $x_\lambda \neq y_\lambda$.} As before, $y \mapsto g_\lambda(x_\lambda, y)$ admits a global maximum at $y_\lambda$, hence
\[ 0=D_yg_\lambda(x_\lambda, y_\lambda)= Df(y_\lambda) - 4\lambda \abs{y_\lambda - x_\lambda}^2 (y_\lambda - x_\lambda), \]
which gives in particular $Df(y_\lambda) \neq 0$. Furthermore
$$\lim_{\lambda \to +\infty}\abs{Df(y_\lambda)} = 0,$$
since $Df(0)=0$ and $y_\lambda \to 0$.
Now consider the new function
$$f_\lambda(x):=f(x+(y_\lambda - x_\lambda)).$$
The function $f_\lambda|_Z$ attains its maximum over $Z$ at $x_\lambda$, because for every $x \in Z$
\begin{equation*}
\begin{split}
f_\lambda(x)- \lambda \abs{y_\lambda - x_\lambda}^4&=f(x+(y_\lambda - x_\lambda)) - \lambda \abs{ x + (y_\lambda - x_\lambda)- x}^4\\
&=g_\lambda(x,x+(y_\lambda - x_\lambda)) \leq g_\lambda(x_\lambda,y_\lambda)\\
&= f(y_\lambda)- \lambda \abs{y_\lambda - x_\lambda}^4=f_\lambda(x_\lambda)- \lambda \abs{y_\lambda - x_\lambda}^4.
\end{split}
\end{equation*}
Thanks to \eqref{eq:caseDF=0}, for $\lambda$ sufficiently large, we deduce that
\begin{align*}
\inf_{v \in \mathbb S^m} F_{ij}(x_\lambda, v) D_{ij}f_\lambda(x_\lambda) = \inf_{v \in \mathbb S^m} F_{ij}(x_\lambda, v) D_{ij}f(y_\lambda) &> \frac{\sigma}{2} \\
h \abs{Df_\lambda}(x_\lambda) - (\partial_{i} F_i)\left(x_\lambda, \frac{D f_\lambda(x_\lambda)}{\abs{Df_\lambda(x_\lambda)}}\right)\abs{Df_\lambda}(x_\lambda)&< \frac{\sigma}{2},
\end{align*}
where we used that $\abs{Df_\lambda}(x_\lambda)=\abs{Df}(y_\lambda) \to 0$.
We conclude that $Z$ fails to have property (ii).
\end{proof}
\begin{Lemma}\label{le1}
Given $f\in C^\infty(\Omega)$ and $p \in \Omega$ such that $f(p)=0$ and $Df(p)\neq 0$, then there exists $N\subset \Omega$ relatively closed with smooth boundary and $U\ni p$ open such that
$$\{f\le 0 \} \subset N \quad \mbox{ and } \quad U \cap \{f\le 0 \} = U \cap N.$$
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{le1}] If $0$ is a regular value of $f$, we can simply choose $N=\{ f \le 0 \}$. Otherwise we fix $r>0$ such that $\abs{Df(x)-Df(p)} \le \frac12 \abs{Df(p)}$ for all $x \in B_r(p)$.
We deduce that
\begin{equation}\label{conto in mezzo}
\abs{Df(x)}\ge \abs{Df(p)} -\abs{Df(x)-Df(p)} \ge \frac12 \abs{Df(p)} \qquad \forall x \in B_r(p).
\end{equation}
Let $\phi \in C^\infty_c(B_r(p))$ with $0\le \phi \le 1$, $\phi=1$ on $B_{\frac{r}{2}}(p)$ and $\abs{D\phi}< \frac{4}{r}$. By Sard's theorem there is a regular value $c$ of $f$ with $0<\frac{4}{r}c < \frac{\abs{Df(p)}}{4}$.\\
We set
\[ \tilde{f}(x):= f(x) - (1-\phi(x))\,c. \]
By the choice of $c$ and $\phi$ and thanks to \eqref{conto in mezzo}, we compute
$$\abs{D\tilde{f}(x)} \ge \abs{D{f}(x)}-c\abs{D\phi(x)}> \frac{\abs{Df(p)}}{2}-\frac{\abs{Df(p)}}{4}= \frac{\abs{Df(p)}}{4} \qquad \forall x \in B_r(p).$$
Hence $0$ is a regular value of $\tilde{f}|_{B_r(p)}$; since $\tilde{f}=f-c$ outside $B_r(p)$ and $c$ is a regular value of $f$, we infer that $0$ is a regular value of $\tilde{f}$ on the whole of $\Omega$. Moreover $\tilde f \le f$, so $\{f\le 0\} \subset \{\tilde f \le 0\}$, and $\tilde{f}=f$ on $U:=B_{\frac{r}{2}}(p)$, so $U \cap \{f\le 0 \} = U\cap \{\tilde{f}\le 0 \}$. We conclude that the relatively closed set $N:= \{ \tilde{f} \le 0 \}$ has the claimed properties. \end{proof}
\begin{Lemma}\label{le2}
Let $N\subset \Omega$ be relatively closed with smooth boundary and let $p \in \partial N \cap \Omega$. Then there exist $f \in C^\infty(\Omega)$ and an open set $U\ni p$ such that
$$N \subset \{f\le 0 \}\quad \mbox{ and } \quad U \cap \{f\le 0 \} = U \cap N.$$
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{le2}]
Fix a smooth proper function $u: \Omega \to \mathbb R$ with $u < 0 $ on $N$, and let $d$ denote the signed distance function from $\partial N$:
$$d(x):=\begin{cases}-\text{dist}(x,\partial N) & \text{if } x \in N\\
\text{dist}(x,\partial N) & \text{if } x \notin N\end{cases}.$$
Given $r>0$, as before we fix a non-negative function $\phi \in C^\infty_c(B_r(p))$ with $\phi=1$ on $U:=B_{\frac{r}{2}}(p)$. It is now straightforward to check that, choosing $r$ small enough, the function
\[ f(x):= \phi(x) d(x) + (1-\phi(x)) u(x) \]
has the claimed properties.
\end{proof}
\begin{Remark}\label{Rem:1} In Proposition \ref{prop:equivalence} above, we may replace $(ii)$ with the following equivalent condition:
\begin{itemize}
\item[(ii)'] If $P$ is a paraboloid $P(x):= a_0 + \langle a_1 , x-p \rangle + \frac12 (x-p)^t A (x-p) $ for some $a_0 \in \mathbb R, a_1 \in \mathbb S^{m} $ and $A \in \mathbb R^{(m+1)\times (m+1)}$ and if $P|_Z$ has a local maximum at $p$, then
\begin{equation}\label{paraboloid}
F_{ij}\left(p,a_1\right) A_{ij} + (\partial_{i} F_i)(p,a_1)\le h.
\end{equation}
\end{itemize}
Indeed, the fact that (ii) implies (ii)' is immediate. For the converse, let $f$ be as in (ii) and let $p$ be a local maximum point of $f|_Z$. Consider for any $\varepsilon>0$ the paraboloid
\[ P_{\varepsilon}(x):= \left \langle \frac{D f(p)}{\abs{D f(p)}} , x - p \right \rangle + \frac{1}{2\abs{D f(p)}} D^2f(p)( (x-p)\otimes (x-p)) - \frac{\varepsilon}{2} \abs{x-p}^2. \]
Since $f \in C^2$, for every $\varepsilon>0$ there exists $r_\varepsilon>0$ such that \[\sup_{x \in B_{r_\varepsilon}(p)} \frac{\abs{ \frac{f(x)-f(p)}{\abs{Df(p)}}-P_{\varepsilon}(x) - \frac{\varepsilon}{2}\abs{x-p}^2}}{\abs{x-p}^2} \le \frac{\varepsilon}{4}\,,\]
so that $P_\varepsilon(x) \le \frac{f(x)-f(p)}{\abs{Df(p)}} - \frac{\varepsilon}{4}\abs{x-p}^2$ on $B_{r_\varepsilon}(p)$. In particular $P_{\varepsilon}|_Z$ attains a local maximum at $p$.
Moreover we compute
$$DP_{\varepsilon}(p)= \frac{D f(p)}{\abs{D f(p)}} \qquad \text{ and } \qquad D^2P_{\varepsilon}(p) = \frac{D^2f(p)}{\abs{D f(p)}} - \varepsilon \mathbf{1}.$$
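Spelled out, condition \eqref{paraboloid} applied to $P_\varepsilon$, whose coefficients at $p$ are $a_1 = DP_\varepsilon(p)$ and $A = D^2P_\varepsilon(p)$, reads
\[
F_{ij}\left(p, \frac{Df(p)}{\abs{Df(p)}}\right)\left( \frac{D_{ij}f(p)}{\abs{Df(p)}} - \varepsilon\, \delta_{ij}\right) + (\partial_{i} F_i)\left(p, \frac{Df(p)}{\abs{Df(p)}}\right) \le h.
\]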
Multiplying by $\abs{Df(p)}$ and letting $\varepsilon \to 0$ in \eqref{paraboloid}, we deduce the inequality in (ii) for $f$ at $p$.
\end{Remark}
The following is our main theorem. The proof is based on (the proof of) the maximum principle of Solomon and White for varifolds which are stationary with respect to an anisotropic integrand, see \cite{SolomonWhite}.
\begin{Theorem}\label{thm:area-blowupset}
Let $\Omega \subset \mathbb R^{m+1}$ be open. Consider a sequence of varifolds $(V_k)_k\subset \mathbb{V}_m(\Omega)$ and $h>0$ such that for every $K \subset \subset \Omega$ it holds
\begin{equation}\label{eq:curvature bounded by h}
\limsup_{k \to \infty} \sup \left\{ \delta_F V_k(X) - h \int \norm{X}_{F^*,x} \, F(x, \nu) \, dV_k(x,\nu) \colon \abs{X} \le \mathbf{1}_{K} \right\} < \infty.
\end{equation}
Then the area-blow up set
\[ Z:= \{ x \in \overline \Omega \colon \limsup_{k \to \infty} \|V_k\|(B_r(x))=+\infty \text{ for every $r>0$ } \}\]
is an $(m,h)$-set in $\Omega$ with respect to $F$.
\end{Theorem}
\begin{proof}
We first observe that $Z$ is a closed set. Indeed, given a sequence $\{x_n\}_{n \in \mathbb N}\subset Z$ such that $x_n \to x \in \overline \Omega$, for every $r>0$ there exists $n$ large enough such that $B_{r/2}(x_n)\subset B_r(x)$. We deduce that
$$\limsup_{k \to \infty} \|V_k\|(B_r(x)) \geq \limsup_{k \to \infty} \|V_k\|(B_{r/2}(x_n))=+\infty,$$
which implies that $x \in Z$ and consequently that $Z$ is closed.
Assume now that $Z$ is not an $(m,h)$-set. Then, by Proposition \ref{prop:equivalence}, there is a smooth function $f:\Omega \to \mathbb R$ and a point $p \in \Omega\cap Z$ such that $f|_Z$ has a unique local maximum at $p$, $D f(p) \neq 0$ and (ii) fails. After a translation by $p$, a rotation, and replacing $f$ with $\abs{Df(p)}^{-1}(f-f(p))$, we may assume that $p=0$, $f(p)=0$ and $D f(p)= -e_{m+1}$. The contradiction then reads
\begin{equation}\label{stimaforte}
F_{ij}\left(p,\frac{Df(p)}{\abs{Df(p)}}\right) D_{ij}f(p)+ (\partial_{i} F_i)\left(p,\frac{Df(p)}{\abs{Df(p)}}\right)\abs{Df(p)}> h \abs{Df(p)}.
\end{equation}
Let us define the vector field
\begin{equation}\label{eq:good vectorfield}
X(x)=X^i(x) e_i = F_i(x,Df(x)) e_i.
\end{equation}
Firstly note that $\langle X(x), Df(x)\rangle = F(x,Df(x))$, hence $X$ points ``outward'' with respect to the sub-level sets $ \{ f \le t \}$. Furthermore
\begin{equation}\label{eq:length of the good vectorfield}
\norm{X}_{F^*,x} = 1.
\end{equation}
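Recalling that $\norm{X}_{F^*,x}$ denotes the dual norm $\sup\{\langle X, w\rangle \colon F(x,w)\le 1\}$, the identity \eqref{eq:length of the good vectorfield} can be checked as follows: by convexity and $1$-homogeneity of $F(x,\cdot)$,
\[
\langle D_2F(x, Df(x)), w \rangle \le F(x,w) \qquad \text{for every } w,
\]
with equality for $w = Df(x)/F(x,Df(x))$ thanks to Euler's identity $\langle D_2F(x,Df(x)), Df(x)\rangle = F(x, Df(x))$; taking the supremum over $\{w \colon F(x,w)\le 1\}$ gives $\norm{X}_{F^*,x}=1$.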
Moreover, by \eqref{eq:good vectorfield identity -1}
\begin{equation}\label{eq:good vectorfield identity 1}
-\langle H_F(x), X\rangle \, F(x, Df(x)) = B(x, Df(x)):DX + \langle D_1F(x,Df), X\rangle \end{equation}
where $H_F(x)$ is the $F$-mean curvature of a level set $\{ f= t\}$.
Now we want to show how this vector field can be used to derive the contradiction to \eqref{eq:curvature bounded by h}.
First fix a radius $r>0$ and $\delta>0$ such that
\begin{multline}\label{eq:stimaforte}
F_{ij}\left(x,\frac{Df(x)}{\abs{Df(x)}}\right) D_{ij}f(x)+ (\partial_{i} F_i)\left(x,\frac{Df(x)}{\abs{Df(x)}}\right)\abs{Df(x)}\ge (h+\delta) \abs{Df(x)}
\\
\text{ for all } x \in B_{2r}(0)
\end{multline}
and \begin{equation}\label{eq:bounds on Df}
\frac{1}{2} \le \abs{Df(x)} \le 2 \text{ for all } x \in B_{2r}(0).
\end{equation}
By \eqref{eq:good vectorfield}, we compute $\langle X, \frac{Df(x)}{\abs{Df(x)}} \rangle = F(x, \frac{Df(x)}{\abs{Df(x)}})$, which combined with \eqref{eq:stimaforte}, gives the following estimate on $B_{2r}(0)$
\begin{equation}\label{eq:stimaforte2}
F_{ij}\left(x,\frac{Df(x)}{\abs{Df(x)}}\right) D_{ij}f(x)+ (\partial_{i} F_i)\left(x,\frac{Df(x)}{\abs{Df(x)}}\right)\abs{Df(x)}\ge (h+\delta) F(x, Df(x)).
\end{equation}
By assumption we have $Z \subset \{f \le 0\}$ and $Z \cap \{f=0\}=\{0\}$, hence there exists $\eta_1>0$ such that $f(x)<-\eta_1$ for all $x \in Z \setminus B_r(0)$.
Now we fix a non-negative cut off function $\varphi(x)$ supported in $B_{2r}(0)$ with $\varphi(x) = 1$ on $B_{r}(0)$. For $0<\eta_2<\eta_1$ to be chosen later, we define the function
\[ \eta(t):=\begin{cases} 0 &\text{ if } t \leq -\eta_2 \\ \eta_2+t &\text{ if } -\eta_2 \leq t \end{cases}. \]
Now we consider the vector field
\begin{equation}\label{eq:good vector field 2}
Y(x) = -\varphi(x) \eta(f(x)) X.
\end{equation}
Then we have
\[ -DY = \varphi \eta \circ f DX + \varphi \eta'\circ f X \otimes Df + \eta\circ f X \otimes D\varphi. \]
Hence we have
\begin{align*}
- \delta_FV_k(Y)=& \int \varphi \eta\circ f \left( B(x,\nu):DX + \langle D_1 F(x,\nu), X\rangle \right) \\& + \varphi \eta'\circ f \left( B(x,\nu): X \otimes Df \right) + \eta \circ f \left(B(x,\nu) : X \otimes D\varphi \right)\, dV_k(x,\nu)\\
=& \int I + II + III \, dV_k(x,\nu).
\end{align*}
We analyze the three terms separately.
Note that $\abs{III} \le C\, \mathbf{1}_{(B_{2r}\setminus B_r) \cap \{ f \ge -\eta_1\}}$. Since, by the choice of $r$ and $\eta_1$, we have $Z \cap (B_{2r}\setminus B_r) \cap \{ f \ge -\eta_1\} = \emptyset$, it follows that
\[ \abs{ \int III \, dV_k(x,\nu) } \le O(1) \text{ for all } k. \]
Concerning $II$, by the uniform convexity of $F$ there is a constant $c_F>0$ such that
\begin{align*}
B(x,\nu):X \otimes Df(x) &= F(x,\nu) F(x,Df) - \langle D_2F(x,\nu), Df(x)\rangle \langle D_2F(x,Df(x)), \nu \rangle\\ &\ge c_F|Df(x)| \;{\mathop{\mathrm{dist}}}_{\mathbb{RP}^{m}}\Big(\frac{Df(x)}{|Df(x)|}, \nu\Big)^2\, F(x,\nu)
\\
&= c_F|Df(x)| \;d(x,\nu)^2 \, F(x,\nu),
\end{align*}
where, for \(v,w\in \mathbb S^m\), we set
\begin{equation}\label{e:prdist}
{\mathop{\mathrm{dist}}}_{\mathbb{RP}^{m}}(v,w):=\min\{|v+w|, |v-w|\},
\end{equation}
and we introduced the function
\begin{equation}\label{e:prdist2}
d(x,\nu):= \;{\mathop{\mathrm{dist}}}_{\mathbb{RP}^{m}}\Big(\frac{Df(x)}{|Df(x)|}, \nu\Big).
\end{equation}
Taking \eqref{eq:bounds on Df} into account, we conclude
\[ \int II\, dV_k(x,\nu) \ge \frac12 c_F \int \varphi \eta\circ f\, d(x,\nu)^2 \, F(x,\nu)\, dV_k(x, \nu).\]
It remains to estimate $I$. By \eqref{eq:stimaforte}, \eqref{eq:good vectorfield identity 1} and the \(C^2\) regularity of \(F\), there exists a constant $C_F \ge 0$ such that
\begin{align*}
\abs{Df(x)}\Bigl( B(x,\nu)&:DX(x) + \langle D_1F(x,\nu), X\rangle \Bigr)
\\
&\ge B(x,Df(x)):DX(x) + \langle D_1F(x,Df(x)), X\rangle \\
&\quad- C_F |Df(x)|\;{\mathop{\mathrm{dist}}}_{\mathbb{RP}^{m}}\left(\frac{Df(x)}{\abs{Df(x)}}, \nu \right)
\\
&\ge \abs{Df(x)} (h+ \delta) F(x, \nu) - C_F |Df(x)| \;d(x,\nu)\, F(x,\nu).
\end{align*}
Taking additionally into account that $\{ \eta \ge \eta_2 \} \cap B_{2r} \cap Z = \emptyset$ and \eqref{eq:bounds on Df}, we conclude
\begin{align*} \int I \, dV_k(x,\nu) \ge& (h+\delta) \int \varphi \eta\circ f \, F(x,\nu) \, dV_k(x,\nu)\\& - 2 C_F \int_{\{\eta<\eta_2\}} \varphi \eta \circ f \, d(x,\nu)\, F(x,\nu) \, dV_k(x,\nu)- O(1). \end{align*}
Combining all the estimates for $I$--$III$ we have
\begin{align*}
&\int I + II + III \, dV_k(x,\nu) - h\int \varphi \eta \circ f \, F(x,\nu)\, dV_k(x,\nu) \\
&\ge \int_{\{\eta < \eta_2\}} \varphi \left( \delta \,\eta\circ f - 2C_F \eta\circ f d(x,\nu) + \frac12 c_F \,\eta'\circ f d(x,\nu)^2\right) \, dV_k(x,\nu)- O(1).
\end{align*}
Observe that $0\le \eta\circ f \le 2\eta_2$ on the set $\{ f< \eta_2\}$ and $\eta' =1$ on the set $\{ \eta > 0 \}$. Let us consider the polynomial
\[
p(\mu, t):= \frac{\delta}{2}\, \mu \, - 2C_F\, \mu t + \frac12 c_F \, t^2,
\]
so that on $\{\eta \circ f>0\}$ the integrand in the last estimate equals $\varphi \left( \frac{\delta}{2}\,\eta\circ f + p(\eta\circ f, d(x,\nu))\right)$.
For a fixed $\mu\ge 0$ the minimum of $p(\mu,\cdot)$ is attained at $t_{\text{min}}= \frac{2 C_F \mu}{ c_F}$ and takes the value
\[
p(\mu, t_{\text{min}}) = \frac{\delta}{2}\, \mu - \frac{2 C_F^2 \,\mu^2}{c_F}.
\]
Hence if $\mu \le 2\eta_2$ with $0<\eta_2 \le \frac{\delta c_F}{8 C_F^2}$, the polynomial $p(\mu,t)$ is non-negative, i.e. for such a choice of $\eta_2$ we have
\begin{align*}
&\int I + II + III \, dV_k(x,\nu) - h\int \varphi \eta \circ f \, F(x,\nu)\, dV_k(x,\nu) \\
&\ge \int_{\{\eta < \eta_2\}} \varphi \frac{\delta}{2} \,\eta\circ f \, dV_k(x,\nu) + \int_{\{\eta < \eta_2\}} \varphi\, p(\eta\circ f, d(x,\nu))\, dV_k(x,\nu) - O(1) \\
&\ge \int_{\{\eta < \eta_2\}} \varphi \frac{\delta}{2} \,\eta\circ f \, d\norm{V_k}(x) - O(1).
\end{align*}
Since $B_{\frac{r}{2}} \cap \{ \eta \circ f < \eta_2 \}$
is an open neighbourhood of $0$ and $0 \in Z$, we conclude that
\[ \lim_{k \to \infty} \int_{\{\eta < \eta_2\}} \varphi \frac{\delta}{2} \,\eta\circ f \, d\norm{V_k}(x) = +\infty, \]
contradicting the assumption \eqref{eq:curvature bounded by h} and proving the theorem.
\end{proof}
\subsection{Consequences of Theorem \ref{thm:area-blowupset}}
By repeating the arguments of \cite{White2016}, we can now derive several properties of area blow-up sets (and, more generally, of \((m,h)\)-sets).
\begin{Proposition}\label{prop:closedness}
Let $\Omega\subset \mathbb R^{m+1}$ be open, let $(F_k)_k$ be a sequence of anisotropic integrands, and let $(Z_k)_k$ be a sequence of $(m, h_k)$-subsets of $\Omega$ with respect to the integrands $F_k$. Suppose that $F_k$ converges uniformly on compact subsets of $\Omega$ to some integrand $F$, that $Z_k$ converges in Hausdorff distance to a closed set $Z$, and that $h_k \to h$. Then $Z$ is an $(m,h)$-subset of $\Omega$ with respect to the integrand $F$.
\end{Proposition}
\begin{proof} We will prove that the condition (ii)' in Remark \ref{Rem:1} holds. Let
$$P(x)=a_0 + \langle a_1, x \rangle + \frac12 x^t A x \qquad \text{for some } a_0 \in \mathbb R, a_1 \in \mathbb S^{m} \text{ and } A \in \mathbb R^{(m+1)\times (m+1)}$$
be a paraboloid attaining its maximum on $Z$ at $p\in \Omega$. Let $r >0$ be such that $B_{r}(p)\subset \subset \Omega$. For any $\varepsilon >0$ and $k$ sufficiently large, the map
$$P_\varepsilon(x):=P(x)-\varepsilon \frac{\abs{x-p}^2}2$$
attains a strict local maximum on $Z_k\cap B_{r}(p)$ at some point $p_k \in Z_k\cap B_{r}(p)$, with $p_k \to p$.
Since the $Z_k$ are $(m, h_k)$-subsets of $\Omega$, we can apply the characterization (ii)' in Remark \ref{Rem:1} to $P_\varepsilon$ to deduce that
\[ F_{ij}\left(p_k,a_1\right) (A_{ij}-\varepsilon \delta_{ij}) \le h_k - (\partial_{i} F_i)(p_k,a_1) +C |p_k-p|.\]
Passing to the limit as $k\to \infty$ and \(\varepsilon \to 0\), we obtain
\[ F_{ij}\left(p,a_1\right) A_{ij} \le h - (\partial_{i} F_i)(p,a_1).
\]
\end{proof}
\begin{Corollary}\label{cor:Z-blow-up}
Let $\Omega\subset \mathbb R^{m+1}$ be open and $Z\subset \Omega$ be an $(m,h)$-set with respect to the anisotropic integrand $F$. Consider a sequence $r_k \searrow 0$ and a point $p \in \Omega \cap Z$ such that
\[ Z_k:= \frac{ Z- p}{r_k} \to Z_\infty \qquad \mbox{in Hausdorff distance}. \]
Then $Z_\infty$ is an $(m,0)$-set of $\mathbb R^{m+1}$ with respect to the frozen integrand $F_p(\nu):=F(p, \nu)$.
\end{Corollary}
\begin{proof}
It is straightforward to check that for every $r>0$ and $q \in \Omega$
\[ \frac{Z-q}{r} \]
is an $(m, rh)$-set with respect to the integrand
\[ F_{q,r}(x,\nu):= F(q+rx, \nu). \]
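To verify this, let $g$ be smooth and suppose that $g$ restricted to $\frac{Z-q}{r}$ attains a local maximum at a point $p'$; setting $f(x):= g\left(\frac{x-q}{r}\right)$, the function $f|_Z$ has a local maximum at $p:=q+rp'$, and
\[
Df(p)= r^{-1} Dg(p'), \qquad D^2f(p)= r^{-2} D^2g(p'), \qquad \partial_i (F_{q,r})_i(x,\nu) = r\, (\partial_{i} F_i)(q+rx, \nu).
\]
Plugging these identities into condition (ii) for $Z$ and multiplying by $r^2$ yields condition (ii) for $\frac{Z-q}{r}$ with $h$ replaced by $rh$.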
By Proposition \ref{prop:closedness}, $Z_\infty$ is an $(m,0)$-subset of $\mathbb R^{m+1}$ with respect to the integrand
\[ F_p(\nu)= \lim_{k \to \infty} F_{p, r_k}(x, \nu). \]
\end{proof}
A further consequence of Theorem \ref{thm:area-blowupset} is a constancy property, compare with \cite[Section 4]{White2016}:
\begin{Proposition}
Let $\Omega \subset \mathbb R^{m+1}$ be open and $Z$ be an $(m,h)$-subset of $\Omega$ with respect to an anisotropic integrand $F$. Suppose $Z$ is a subset of a connected, $m$-dimensional, properly embedded $C^1$-submanifold $M$ of $\Omega$. Then
\[ \text{either} \quad Z =\emptyset \quad \text{ or } \quad Z = M. \]
\end{Proposition}
\begin{proof}
If $Z =\emptyset$ there is nothing to prove. Assume that $Z\neq \emptyset$ and suppose by contradiction that $Z\neq M$. Since $Z$ is closed, there exists $B_r(q) \subset \Omega \setminus Z$ with $q \in M$ and $p \in Z \cap \overline{B_r(q)}$. For a sequence of positive numbers $\lambda_k \searrow 0 $ consider
\[ Z_k := \frac{ Z - p}{\lambda_k} \quad \text{ and } \quad M_k:= \frac{ M - p}{\lambda_k}. \]
Due to the regularity of $M$, we have that $M_k \setminus B_{\frac{r}{\lambda_k}}(\frac{ q - p}{\lambda_k})$ converges in Hausdorff distance to a half plane $H$ of $T_pM$. Hence, passing to a subsequence, $Z_k \to Z_\infty$ in Hausdorff distance, with $Z_\infty \subset H$ and $0 \in Z_\infty$.
After a rotation $O$, we may assume that $H=\{ x \in \mathbb R^{m+1} \colon x_{m+1} =0, x_1\ge 0 \}$. By Corollary \ref{cor:Z-blow-up}, $Z_\infty$ is an $(m,0)$-subset of $\mathbb R^{m+1}$ with respect to the frozen integrand $\hat{F}(\nu) := F(p,O\nu)$.
Now consider the function
$$f(x):= - x_1 + x_1^2 + x_{m+1}^2.$$
Observe that $f \le 0 = f(0)$ on $H\cap B_1$, hence $f|_{Z_\infty}$ attains a local maximum at $0$. Since $Df(0)=-e_1$, this contradicts the characterization (ii) of Proposition \ref{prop:equivalence}, because
\[ D^2\hat{F}(-e_1)\, (e_1 \otimes e_1 + e_{m+1} \otimes e_{m+1} ) >0. \]
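Here we used that, by the $1$-homogeneity of $\hat F$, $D^2\hat F(\nu)\,\nu = 0$ for every $\nu \in \mathbb S^m$, so the term $e_1 \otimes e_1$ gives no contribution, while the strict convexity of $\hat F$ in the directions orthogonal to $e_1$ gives $D^2\hat F(\pm e_1)\,(e_{m+1}\otimes e_{m+1}) > 0$.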
\end{proof}
For the sake of completeness we also prove the anisotropic counterpart of the ``classical'' constancy theorem for varifolds. The reader may compare it with \cite[Theorem 8.4.1]{Simon} for the proof in the isotropic setting.
\begin{Proposition}\label{prop:constancy classical} Let $V\in \mathbb V_m(\Omega)$ be stationary with respect to an anisotropic integrand $F$ and suppose that $\mathrm{spt}(V) \subset M$, where $M$ is a connected $m$-dimensional $C^2$ submanifold of $\Omega$. Then $V= \theta_0 \, \mathcal{H}^m\mathop{\llcorner} M \otimes \left(\frac 12\delta_{\nu_x} + \frac 12 \delta_{-\nu_x}\right)$ for some constant $\theta_0\ge 0$, where $\nu_x$ denotes a unit normal to $M$ at $x$.
\end{Proposition}
\begin{proof}
The strategy of the proof is similar to the one for the area functional, compare \cite[Theorem 8.4.1]{Simon}. To simplify the presentation, we divide the proof into two steps:
\begin{itemize}
\item[Step 1)] if $M$ is a plane, i.e. $M = \{ x_{m+1} = 0 \}$, and $\Omega = B_{2r}(0)$, then the conclusion of the proposition holds on $B_r(0)$.
\item[Step 2)] we reduce the general case to the case in Step 1.
\end{itemize}
\emph{Proof of Step 1:} We will write $x=(y, z)\in \mathbb R^m \times \mathbb R$ for the coordinates in $\mathbb R^{m+1}$ i.e. $M = \{ z=0\}$. Consider the vectorfield
\[ X(x):= \varphi(y) \eta(z) f(z) D_2 F(x, e_{m+1}) \]
where $\varphi \in C^1_c(B_r^m(0))$, $f, \eta \in C^1(\mathbb R)$ satisfying $f(0)=0, f'(0) \neq 0$ and $\eta$ non-negative with $\eta(z)=0$ for $\abs{z} > r$ and $\eta(z)=1$ for $\abs{z} < \frac{r}{2}$. \\
Since $\mathrm{spt}(V) \subset M$, $f =0 $ on $M$, $\eta=1$ on $M$ and $\eta'=0$ on $M$, the first variation formula (see \cite[section 5]{DePhilippisDeRosaGhiraldin}) reduces to
\[ 0 = \delta_FV(X) = \int B_F(x, \nu) : (\varphi(y) f'(0) D_2 F(x, e_{m+1}) \otimes e_{m+1}) \, dV(x, \nu). \]
Since $f'(0) \neq 0$, the previous equation implies that
\[B_F(x, \nu) : D_2 F(x, e_{m+1}) \otimes e_{m+1}
=0, \qquad \text{for $V$-a.e. }(x,\nu),\] which, by strict convexity of $F$, is only possible when $\nu = \pm e_{m+1}$ for all $x \in B_r(0)\cap \mathrm{spt}(V)$. This shows that the tangent space of $V$ agrees with the tangent space of $M$, that is
$$V=\|V\|\otimes \left(\frac 12\delta_{e_{m+1}} + \frac 12 \delta_{-e_{m+1}}\right).$$
Furthermore, we consider the vectorfield
\[ X(x):= \varphi(y)\eta (z)\, e_i, \qquad \text{ for every } 1\le i \le m. \]
Since $\eta = 1 $ on $M$ and $B_F(x, \nu)$ is even in the second variable, the first variation formula reads
\begin{align*}
0 = \delta_F V(X) &= \int B_F(x, e_{m+1}) : (e_i \otimes D \varphi) + \partial_i F(x,e_{m+1}) \varphi \, d\|V\|(x) \\
&= \int F(x, e_{m+1}) \partial_i\varphi + \partial_i F(x,e_{m+1}) \varphi \, d\norm{V} (x)\\&= \int \partial_i (F(x,e_{m+1}) \varphi) \, d\norm{V}(x).
\end{align*}
Hence $\norm{V}$ is constant on $M\cap B_r(0)$. This concludes the proof of step 1. \\
\emph{Proof of Step 2:} Fix any $p \in M\cap \mathrm{spt}(V)$ and $0< r< \mathop{\mathrm{dist}}(p, \partial \Omega)$ such that the following holds: there is a $C^2$ diffeomorphism $\Phi: B_{2r}(p) \to B_{2r}(0)\subset \mathbb R^{m+1}$ with $\Phi(M\cap B_{2r}(p)) = \{ x_{m+1} = 0 \}\cap B_{2r}(0)$. We replace $V$, $M$ and $F$ in $\Omega$ respectively with $V':=\Phi^\# V$, $M':=\Phi(M)$ and $F':={\Phi^{-1}}^\# F$ in $B_{2r}(0)$. By construction $V', M', F'$ are all as in Step 1. Hence we deduce that in $B_r(0)$ $$V' = \theta_0 \, \mathcal{H}^m\mathop{\llcorner} M'\otimes \left(\frac 12\delta_{\nu_x} + \frac 12 \delta_{-\nu_x}\right), \quad \text{where $\nu_x$ is the unit normal to $M'$ at $x$}.$$ But this implies that $V = \theta_0 \, \mathcal{H}^m\mathop{\llcorner} M\otimes \left(\frac 12\delta_{\nu_x} + \frac 12 \delta_{-\nu_x}\right)$ in $B_r(p)$; since $p$ was arbitrary and $M$ is connected, the proposition follows.
\end{proof}
\section{Boundary curvature estimates}\label{sec:bound}
In this section we prove the following theorem, which easily implies Theorem \ref{thm:boundaryintro}. Recall that a set \(\Omega\) is strictly \(F\)-convex in \(B_{R}\) if
\[
H_F(x,\partial \Omega)\ge c>0\qquad\text{for all \(x\in \partial\Omega\cap B_R\)}.
\]
It easily follows from \eqref{calcolo} that a uniformly convex set is strictly \(F\)-convex in sufficiently small balls.
\begin{Theorem}\label{thm:curvature estimates at the boundary} Let $\Omega \subset \mathbb R^{3}$ be such that $\partial \Omega \cap \overline{B_{2R}}$ is $C^{3}$ and \(\Omega\) is strictly \(F\)-convex in \(B_{2R}\). Let $\Gamma$ be a $C^{2,\alpha}$ embedded curve in $\partial \Omega \cap B_{2R}$ with $\partial \Gamma \cap B_{2R} = \emptyset$. Furthermore, let $M$ be an $F$-stable, $C^2$ regular surface in $\Omega$ such that $\partial M \cap B_R = \Gamma$. Then there exist a constant $C>0$ and a radius $r_1>0$, depending only on $F, \Omega, \Gamma$, such that
\begin{equation*}
\sup_{\substack{p \in B_{\frac{R}{2}} \cap \Omega \\ \mathop{\mathrm{dist}}(p, \Gamma)< r_1}} r_1 |A(p)| \le C.
\end{equation*}
Moreover the constants \(C\) and \(r_1\) are uniform as long as \(\Omega\), \(\Gamma \) and \(F\) vary in compact classes\footnote{ For a family of curves \(\Gamma_{\alpha}\) this also amounts to requiring that all the considered curves be ``uniformly'' embedded:
\[
\inf_{\alpha} \inf_{\substack{x\ne y\\ \,x,y\in \Gamma_\alpha}}\frac{ \mathop{\mathrm{dist}}_{\Gamma}(x,y)}{|x-y|}>0.
\]
}.
\end{Theorem}
We start with the following simple lemma.
\begin{Lemma}\label{lem:nonemptyblowup}
Let $\{\mu_j\}_{j\in \mathbb N}\subset \mathcal M_+(\mathbb R^{m+1})$ be a sequence of Radon measures such that
$$\lim_{j\to \infty} \mu_j(\overline{B}_1)= + \infty.$$
Then the ``area-blow up set''
\[ Z:= \{x \in \mathbb R^{m+1} \colon \limsup_{j \to \infty} \mu_j(B_r(x)) = + \infty \text{ for every } r >0 \} \]
satisfies $Z \cap \overline{B}_1 \neq \emptyset $. \end{Lemma}
\begin{proof} Up to replacing the measures $\mu_j$ with their restrictions $\mu_j\mathop{\llcorner}\overline{B}_1$, we can assume that $\mathrm{spt}(\mu_j) \subset \overline{B}_1$. We claim that there exists a sequence of cubes $\{C_i\}_{i\in \mathbb N}$ with side length $l_i$ such that
\begin{itemize}
\item[(i)] $C_{i+1} \subset C_i$ for all $i \in \mathbb N$;
\item[(ii)] $l_i=2^{1-i}$;
\item[(iii)] $\limsup_{j \to \infty} \mu_j(C_i)=+\infty$ for all $i \in \mathbb N$.
\end{itemize}
We will prove this claim by induction on $i$. The cube $C_0$ exists: it is enough to consider a cube containing $\overline{B}_1$, for instance $\overline{B}_1 \subset [-1,1]^{m+1}=:C_0$, so that we have
\[\limsup_{j \to \infty} \mu_j(C_0) = + \infty. \]
\emph{Proof of the Inductive step:}
Let $\mathcal{C}$ be the collection of the dyadic cubes obtained by dividing $C_i$ into $2^{m+1}$ sub-cubes of half side length, and suppose by contradiction that
$$\limsup_{j\to \infty} \mu_j(C')< \infty \qquad \forall C'\in \mathcal{C}.$$
Since there are only $2^{m+1}$ of these cubes, there exists $j_0 \in \mathbb N$ and $K>0$ such that
\[ \mu_j(C') \le K \qquad \forall j \ge j_0, \quad \forall C' \in \mathcal{C}. \]
But this contradicts the assumption, since
\[ \mu_j(C_i) \le \sum_{C' \in \mathcal{C}} \mu_j(C') \le 2^{m+1} K \qquad \forall j \ge j_0.\]
We can consequently find a cube $C_{i+1} \subset C_i$ satisfying properties (i), (ii) and (iii). \\
As a consequence we obtain a decreasing sequence of dyadic closed cubes $\{C_i\}_{i=0}^\infty$ with nonempty intersection, i.e. there exists $x \in \bigcap_{i=0}^\infty C_i$. \\
Since for every $r>0$ there exists $i \in \mathbb N$ such that $C_i \subset B_r(x)$, we have
$$\limsup_{j \to \infty} \mu_j(B_r(x))=+\infty.$$
This implies that $x$ is in the area blow-up set. Finally, since $x$ must belong to the support of infinitely many $\mu_j$, we have $x \in \overline{B}_1$, which concludes the proof of the lemma.\end{proof}
The next proposition ensures a local bound on the mass ratio: if it failed, the varifolds associated with
\[
M_{x,r}=\frac{M-x}{r}
\]
would have unbounded masses. If \(Z\) is the area blow-up set of this sequence, we can exploit our \(F\)-convexity assumption together with the Hopf lemma to show that \(Z\) is contained in a wedge, which contradicts the fact that it is an \((m,h)\)-set.
\begin{Proposition}\label{prop:no-boundary blow up}
Let $\Omega \subset \mathbb R^{m+1}$ be such that $\partial \Omega \cap \overline{B_{2R}}$ is $C^{3}$ and \(\partial\Omega\) is strictly $F$-convex in \(B_{2R}\). Let $\Gamma$ be a $C^{2,\alpha}$ embedded $(m-1)$-submanifold in $\partial \Omega \cap B_{2R}$ with $\partial \Gamma \cap B_{2R} = \emptyset$. Furthermore, let $M$ be a $C^2$ stationary (i.e. $\hat{\delta}_F M=0$) manifold in $\Omega$ such that $\partial M \cap B_R = \Gamma$. Then there exist a constant $C$ and a radius $r_0>0$, depending only on $F, \Omega, \Gamma$, such that
\begin{equation}\label{eq:blowup} \sup_{\substack{q \in \Gamma \cap B_{R}\\ r< r_0}} \frac{ \mathcal H^m(M\cap B_r(q))}{w_m r^m} \le C < \infty. \end{equation}
\end{Proposition}
\begin{proof} We split the proof in two steps:
\emph{Step 1:} Proposition \ref{prop:no-boundary blow up} holds under the following additional Assumption \ref{ass.smallball}:
\begin{Assumption}\label{ass.smallball} There exists $0<\delta<\frac{1}{4}$ such that:
\begin{enumerate}
\item $\Omega \cap B_2 = \{ x_{m+1} \ge \Phi(x_1, \dotsc, x_m) \} \cap B_2$ for some $\Phi \in C^{2,\alpha}(\mathbb R^m, \mathbb R)$. Furthermore we have
$$\Phi(0)=0, \quad D \Phi(0)=0 \quad \text{(i.e. $T_0\partial \Omega = e_{m+1}^\perp$) and } \quad\norm{\Phi}_{C^{2,\alpha}}<\delta;$$
\item Let $\mathcal{F}(x,y,p)$ denote the non-parametric function associated to $F$
\[ \mathcal{F}(x,y,p):= F\left ((x,y), p_1e_1+\dots+p_me_m - e_{m+1}\right) \]
and $L$ be the Euler-Lagrange operator for $\mathcal{F}$. Then, for every $U \subset B^m_2$ with smooth boundary, $f \in C^{0,\alpha}$ and $g \in C^{2,\alpha}$ with
$$\norm{f}_{C^{0,\alpha}}< \delta, \quad \norm{g}_{C^{2,\alpha}} < \delta,$$
the boundary value problem
\[ \left\{\begin{aligned}
L u &= f &&\text{ in } U\\
u &= g &&\text{ on } \partial U
\end{aligned}\right.\]
has a unique solution $u \in C^{2,\alpha}(U,\mathbb R)$ such that
\[
\norm{u}_{C^{2,\alpha}(U,\mathbb R)} \le C (\norm{f}_{C^{0,\alpha}} + \norm{g}_{C^{2,\alpha}} );
\]
\item for all $x \in \partial \Omega\cap B_2$ we have
\[
0< h_{min} < L(\Phi) < h_{max} < \delta.
\]
Note that
\[
F(x,\nu(x)) H_F(x) = \frac{L(\Phi)(x)}{\langle \nu(x), e_{m+1}\rangle} \nu(x)
\]
for all $x \in \partial \Omega \cap B_2$, where $\nu(x)$ is the normal of $\partial \Omega$ at the point $x$.
\item $\Gamma \subset \partial \Omega$ is $C^{2,\alpha}$.
\end{enumerate}
\end{Assumption}
\emph{Step 2:} There exists a radius $0<R_0\le R$ such that for every $p \in \partial \Omega$ the rescaled domain $\frac{\Omega-p}{R_0}$ and the rescaled manifold $\frac{M-p}{R_0}$ satisfy the conditions of Assumption \ref{ass.smallball}.\\
By a classical covering argument, one can show that Step 1 and Step 2 together imply Proposition \ref{prop:no-boundary blow up}.
\emph{Proof of Step 1:} Assume that conclusion \eqref{eq:blowup} does not hold in $B_1$; then there exists a sequence $M_k, r_k, p_k$ satisfying
\begin{enumerate}
\item $\Gamma_k :=\partial M_k \subset \partial \Omega$ with uniformly bounded $C^{2,\alpha}$-norm;
\item $p_k \in \Gamma_k \cap B_1$, $0<r_k< \frac{1}{k}$ and
\begin{equation}\label{piublowupdicosisimuore}
\frac{\mathcal H^m(M_k\cap B_{r_k}(p_k))}{r_k^m} > k.
\end{equation}
\end{enumerate}
We denote with $\gamma_k$ the projection of $\Gamma_k$ onto the plane $\{ x_{m+1} = 0 \}$, i.e.
$$\Gamma_k = \mathbf{G}_{\Phi}(\gamma_k),$$
where $\mathbf{G}_\Phi(x):=(x, \Phi(x))$ is the graph map of $\Phi$. Up to subsequences, and performing if necessary a rotation of $B_2$, we may assume that
\begin{enumerate}\setcounter{enumi}{2}
\item there exists $x_0 \in \overline{B}_1$ such that $p_k=(x_k,\Phi(x_k)) \to p_0=(x_0,\Phi(x_0))$;
\item $\hat{\nu}_k(x_k) \to e_{m}$, where $\hat{\nu}_k(x)$ denotes the normal of $\gamma_k$ in the plane $\{x_{m+1} =0 \}$ at the point $x \in \gamma_k$.
\end{enumerate}
To set up the contradiction we need the following additional construction:\\
Consider $r_0>0$ small enough so that for all $k \in \mathbb N$ we have
$$r_0 < \min\left\{\frac{1}{\norm{ \mathbf{A}_{\gamma_k} }_{\infty}},\frac12\right\},$$
where $\mathbf{A}_{\gamma_k}$ denotes the second fundamental form of $\gamma_k$. This is possible since we assumed that the $C^{2,\alpha}$-norm of $\Gamma_k$ is uniformly bounded. \\
For every $k \in \mathbb N$ we define the pair of balls
\[ B_k^{\pm} := B^m_{r_0}(x_k \pm r_0\hat{\nu}_k(x_k)) \subset \{x_{m+1} =0 \}. \]
By the choice of $r_0$, we have ensured that $\overline{B_k^\pm} \cap \gamma_k = \{x_k\}$.
For each $0\le s \le \delta$, using Assumption \ref{ass.smallball} (2), let $u^\pm_{k,s} \in C^{2,\alpha}(B^\pm_k)$ be the unique solution to the boundary value problem
\[ \left\{\begin{aligned}
L u^\pm_{k,s} &= s &&\text{ in } B_k^\pm \\
u_{k,s}^\pm &= \Phi &&\text{ on } \partial B_k^\pm \, .
\end{aligned}\right.\]
Observe that, by the classical Hopf maximum principle, if $s> h_{max}$ we have $u^\pm_{k,s} < \Phi$, and if $s< h_{min}$ then $u^\pm_{k,s} > \Phi$. We claim that the graphs of $u^\pm_{k,s}$ never touch $M_k$ in the interior of the cylinders $B^\pm_k \times \mathbb R$ for $s \ge \frac12 h_{min}$. Indeed, for $s> h_{max}$ this is obvious since $M_k \subset \Omega$. Suppose there is a first value $\frac12 h_{min} < s\le h_{max}$ at which, say, the graph of $u^+_{k,s}$ touches $M_k$ at a point $q=(y,u^+_{k,s}(y))$. Then $T_qM_k = (- D u^+_{k,s}(y), 1)^\perp$ and $M_k$ is locally the graph over the plane $\{x_{m+1} = 0 \}$ around $y$ of a map $f_k$. Since $M_k$ is stationary we have $L(f_k)=0$, but this contradicts the strong maximum principle.
For $k \in \mathbb N$, define
\[
u_k^\pm:=u^\pm_{k,\frac12 h_{min}}.
\]
By the Hopf boundary point lemma we can compare $\Phi$ with $u_k^\pm$ at $x_k$, obtaining the existence of $c_H>0$ depending only on $F$ and $\partial \Omega$ such that
\begin{equation}\label{eq:Hopf} \min\left\{ \frac{\partial u_k^+(x_k)}{\partial \hat{\nu}_k(x_k)} - \frac{\partial \Phi(x_k)}{\partial \hat{\nu}_k(x_k)}, -\frac{\partial u_k^-(x_k)}{\partial \hat{\nu}_k(x_k)} + \frac{\partial \Phi(x_k)}{\partial \hat{\nu}_k(x_k)} \right\} > c_H. \end{equation}
Furthermore by (2) in Assumption \ref{ass.smallball}, $\norm{u^\pm_k}_{C^{2,\alpha}}$ is uniformly bounded on $B_k^\pm$.\\
Now we consider the blow-up sequence
\begin{itemize}
\item $M_k':= \frac{M_k - p_k}{r_k}$ in $\Omega'_k:= \frac{\Omega_k - p_k}{r_k}$;
\item $\Gamma_k' = \partial M_k'= \frac{\partial M_k - p_k}{r_k}$ projecting to $\gamma'_k = \frac{\gamma_k - x_k}{r_k}$ in $\{x_{m+1}=0\}$;
\item $d_k^\pm(y)= \frac{u^\pm_k(x_k + r_k y) - \Phi(x_k+r_k y)}{r_k}$ on $\frac{1}{r_k}(B^\pm_k - x_k)$.
\end{itemize}
Observe that, by the regularity assumption on $\Omega$ and $\Gamma_k$ and the estimates on $u_k^\pm$, we have (up to a subsequence)
\begin{itemize}
\item[(i)] $\partial \Omega'_k \to T_{p_0}\partial \Omega$, i.e. $\Omega'_k \to \{ x_{m+1} \ge \langle D\Phi(x_0), x \rangle\}$;
\item[(ii)] $\gamma_k' \to \{ x_m =0 \}$;
\item[(iii)] $d^\pm_k(y) \to a^\pm y_m$ for $y \in \mathbb R^m \cap \{ \pm y_m \ge 0 \}$ with $a^+, -a^->c_H$
\end{itemize}
Indeed (ii) follows by property (4). Point (iii) is a consequence of the fact that $\frac{1}{r_k}(B^\pm_k - x_k) \to \mathbb R^m \cap \{ \pm y_m \ge 0 \}$ and that, by construction, we have $d^\pm_k=0$ on $\partial \frac{1}{r_k}(B^\pm_k - x_k)$. The last part of (iii) is a consequence of \eqref{eq:Hopf}.
By \eqref{piublowupdicosisimuore} and the definition of $M'_k$, we observe that the sequence of Radon measures $\mu_k:= \mathcal{H}^{m}\mathop{\llcorner} M'_k$ satisfies the assumptions of Lemma \ref{lem:nonemptyblowup}; hence $Z\cap \overline{B}_1 \neq \emptyset$, where \(Z\) is the area blow up set for \(M'_k\).
Since each $M_k$ is a stationary manifold (i.e. $\hat{\delta}_F M_k=0$), by \eqref{eq:boundaryvariation} we can estimate, for every vector field $X$ with $\abs{X} \le \mathbf{1}_{B_R}$,
\[| \delta_F M_k'(X)| \leq \int_{\Gamma'_k} |X| \le \mathcal{H}^{m-1}(\Gamma'_k\cap B_R). \]
Applying Theorem \ref{thm:area-blowupset}, we get that $Z$ is an $(m,0)$-set in $\mathbb R^{m+1}$ for the frozen integrand $F_{p_0}:\nu \mapsto F(p_0, \nu)$.
Moreover, combining (i) and (iii), we know that
\begin{equation}\label{eq:subset}
Z \subset \{ (x,x_{m+1}) \colon x_{m+1} \ge \langle D\Phi(x_0), x \rangle + c_H \abs{x_m} \}.
\end{equation}
We will show that this contradicts the fact that $Z$ satisfies the characterization (ii) in Proposition \ref{prop:equivalence} for being an $(m,0)$-set, for an appropriate choice of a function $f$. We can assume $c_H \le \frac{1}{4}$ (up to replacing $c_H$ with $\min(c_H, \frac14)$). We set
$$T:=4 \frac{ 1 +\abs{D \Phi(x_0)}}{c_H}$$ and consider $\varepsilon >0$ to be chosen later. We define the function
\[ f(x,x_{m+1}):= -x_{m+1} + \langle D\Phi(x_0), x \rangle + \frac{c_H}{2T} \left( x^2_m- \varepsilon x^2 \right).\]
On $\{ (x,x_{m+1}) \colon x_{m+1} \ge \langle D\Phi(x_0), x \rangle + c_H \abs{x_m} \}\cap \{ x_m = T \} $ we have
\begin{align}\label{bordo striscia}
f(x,x_{m+1}) &= -x_{m+1} +\langle D\Phi(x_0), x \rangle + c_H \abs{x_m} + c_H \left( \frac{x_m^2}{2T} - \abs{x_m} - \frac{\varepsilon x^2}{2T} \right) \nonumber \\
&\leq 0 + c_H \left( \frac{x_m^2}{2T} - \abs{x_m}\right) = - c_H\frac{T}{2} \le -2 (1 + \abs{D \Phi(x_0)}).
\end{align}
But for every $x \in \overline{B}_1$ and choosing $\varepsilon$ sufficiently small, we have
\[ f(x) > - \frac{3}{2} (1 +\abs{D \Phi(x_0)}). \]
Combining the previous inequality with \eqref{eq:subset} and \eqref{bordo striscia}, we deduce that $f|_Z$ attains a local maximum at some point $p=(\hat{x},\hat{x}_{m+1})$ with $\abs{\hat{x}_m} < T$. Now we claim that this contradicts (ii) in Proposition \ref{prop:equivalence} for sufficiently small $\varepsilon>0$.
Indeed we can compute
\begin{align*}
D f(x, x_{m+1}) &= - e_{m+1} + D \Phi(x_0) + \frac{c_H}{T}( x_m e_m - \varepsilon x ),\\
D^2f(x,x_{m+1}) &= \frac{c_H}{T}( e_m \otimes e_m - \varepsilon \mathrm{Id}_{\mathbb R^{(m+1)\times(m+1)}} );
\end{align*}
where $\mathrm{Id}_{\mathbb R^{(m+1)\times(m+1)}}$ is the $(m+1)$-dimensional identity matrix. Observe that there exists $\Lambda >0$ such that $\trace(D^2F_{p_0}(\nu))< \Lambda$ for all $\nu \in \mathbb S^{m}$. Furthermore for every $0<\eta <1$ there exists some $\lambda>0$ such that
\[ D^2F_{p_0}(\nu): e_m \otimes e_m >\lambda \qquad \text{ for all } \nu \in \mathbb S^m \text{ verifying } \abs{\langle \nu , e_m \rangle }< 1-\eta.\]
For every $x$ such that $\abs{x_m}\le T$, by Assumption \ref{ass.smallball} (1), we can compute
\[ \abs{\langle D f(x, x_{m+1}) , e_m\rangle} = \abs{\partial_m\Phi(x_0) + \frac{c_H}{T}(1-\varepsilon) x_m} \leq \abs{\partial_m\Phi(x_0)} + \frac{c_H}{T} \abs{x_m} \le \frac{1}{4}+c_H \leq \frac{1}{2}. \]
Since $\abs{D f(x,x_{m+1})}\ge \langle Df(x,x_{m+1}),-e_{m+1}\rangle \ge 1$, we deduce that for every $x$ with $\abs{x_m}\le T$
$$\abs{ \left \langle \frac{D f(x)}{\abs{D f(x)}} , e_m \right \rangle } \le \frac12.$$
If we choose $\varepsilon$ sufficiently small we compute in the local maximum point $p=(\hat{x},\hat{x}_{m+1})$
\[ D^2F_{p_0}\left( \frac{D f(p)}{\abs{D f(p)}}\right):D^2 f(p) \ge \frac{c_H}{T} \left( \lambda - \varepsilon \Lambda \right) >0. \]
This contradicts Proposition \ref{prop:equivalence} (ii). \\
\emph{Proof of Step 2:} The existence of $R_0$ as in the statement of Step 2 is a consequence of the implicit function theorem, as in~\cite{White1987}; we report the argument here for the sake of completeness. Fix $q\in \partial \Omega$ and let $\nu_q \in \mathbb S^m$ be the inner normal of $\partial \Omega$ at $q$. Furthermore we fix an orthonormal basis $t_1, \dotsc, t_m$ spanning $T_q \partial \Omega = \nu_q^\perp \sim \mathbb R^m$, i.e. $\mathbb R^{m+1} = T_q\partial \Omega \times \operatorname{span} \nu_q$. We will write $(x,x_{m+1})$ for points in $T_q\partial \Omega \times \operatorname{span} \nu_q$.
We consider the family of non-parametric functionals
\[ \mathcal{F}_r(x, u(x), D u(x)):= F\left(q + r (x, u(x)), (-D u(x), 1) \right) . \]
These are the non-parametric functionals associated to the image of the parametrized surfaces $x \mapsto q+ rx + r u(x) \nu_q$.
Let $L_r$ be the Euler-Lagrange operator of $\mathcal{F}_r$. By strict convexity of $F$, planes are the unique minimizers for the frozen integrand $\nu \in \mathbb S^m \mapsto F(q, \nu )$. With respect to $\mathcal{F}_r$ this implies that the constant functions $u$ are the unique minimizers of $\mathcal{F}_0$, and in particular $L_0u=0$ for every constant function $u$. The convexity of $F$ translates into the ellipticity of the linearization of $L_r$ around the constant $u_0=0$. Hence the implicit function theorem implies the existence of $\delta_q, R_q>0$ such that, for every pair of scalar functions $f,g$ with $\norm{f}_{C^{0,\alpha}}<\delta_q$, $\norm{g}_{C^{2,\alpha}} < \delta_q$, $U\subset B_2$ and $r\le R_q$, the boundary value problem
\[ \left\{\begin{aligned}
L_r u &= f &&\text{ in } U \\
u &= g &&\text{ on } \partial U
\end{aligned}\right.\]
has a unique solution $u \in C^{2,\alpha}(U,\mathbb R)$ satisfying
\[ \norm{u}_{C^{2,\alpha}(U,\mathbb R)} \le C (\norm{f}_{C^{0,\alpha}} + \norm{g}_{C^{2,\alpha}} ).\]
The size of $R_q, \delta_q$ only depends on the $C^{2,\alpha}$ norm of $F$. Hence by compactness there exist $R_1, \delta_1>0$ such that $ \delta_q >\delta_1$ and $R_q >R_1$ for all $q \in \partial \Omega \cap B_{2R}$. \\
Let $H_F(q)$ as before denote the anisotropic mean curvature of $\partial \Omega$ with respect to the inner normal $\nu(q)$. Fix $0<R_0\le R_1$ such that
\[ \max_{q \in \overline{B_R}\cap \partial \Omega } H_F(q)< \frac{\delta_1}{R_0}. \]
Now it is straightforward to check that $R_0$ has the desired properties.
\end{proof}
We now show how to ``globalize'' the above boundary estimate. We recall that for an \(F\)-stable surface the following inequality holds:
\begin{equation}\label{eq:allard inequality}
\int_M \phi^2 \abs{A}^2 \, d \mathcal H^2 \le c_1 \int_{M} \abs{D \phi}^2 + c_2 \phi^2 \, d \mathcal H^2,
\end{equation}
for some constants $c_1(n,F)$, $c_2(n,F)>0$ and for all $\phi \in C^1_c(M)$, see \cite[Lemma 2.1]{Allard1983} or~\cite[Lemma A.5]{dephilippismaggi2}.
\begin{Lemma}\label{lem:general_area_bound_in_dimension2}
Let $\Omega \subset \mathbb R^{3}$ and let $\Gamma$ be a $C^{2,\alpha}$ embedded curve in $\partial \Omega \cap B_{2R}$ with $\partial \Gamma \cap B_{2R} = \emptyset$. Furthermore let $M$ be a two-dimensional $F$-stable, $C^2$ regular surface in $\Omega$ such that $\partial M \cap B_R = \Gamma$, satisfying for some $0<C_0< \infty$ and $r_0 \le 1$
\begin{equation}\label{eq:assumed_area_bound} \sup_{\substack{q \in \Gamma \cap B_{R}\\ r< r_0}} \frac{ \mathcal H^2(M\cap B_r(q))}{\pi r^2} \le C_0. \end{equation}
Then there exists a constant $C>0$ depending only on $F$ such that
\begin{equation}\label{eq:general_area_bound} \sup_{\substack{B_r(p) \subset B_{R-r_0}\\ r< \frac{r_0}{3}; \mathop{\mathrm{dist}}(p, \Gamma) < \frac{r_0}{3}}} \frac{ \mathcal H^2(M\cap B_r(p))}{\pi r^2} \le C C_0. \end{equation}
\end{Lemma}
\begin{proof}
This lemma is a direct consequence of \eqref{eq:allard inequality} and of the extended monotonicity formula of L.~Simon (see \cite{Simon1993}).
Indeed, for every $p \in B_{R-r_0} \cap \Omega$ we fix $q \in \Gamma$ with
\[ d:=\abs{q-p}=\mathop{\mathrm{dist}}(p, \Gamma)< \frac{r_0}{3}.\]
Hence $q \in B_R\cap \Gamma$. If $\frac{d}{2}< r < \frac{r_0}{3}$, then $B_r(p) \subset B_{3r}(q)$ and we easily estimate
\[ \frac{ \mathcal H^2(M\cap B_r(p))}{\pi r^2} \le \frac{9 \mathcal H^2(M\cap B_{3r}(q))}{\pi {(3r)}^2} \overset{\eqref{eq:assumed_area_bound}}{\le} 9 C_0. \]
If $r< \frac{d}{2}$ we argue as follows: Fix a non-negative even function $\eta \in C^\infty(\mathbb R)$ with $\eta(t)=1$ for every $|t|\leq \frac12$, $\eta(t)=0$ for $|t|\ge 1$ and $|\eta'(t)|\leq 3$ for every $t \in \mathbb R$. We choose $\phi(x):=\eta( \frac{\abs{x-p}}{d})$ in \eqref{eq:allard inequality} and, denoting with $H$ the isotropic mean curvature of $M$, we obtain
\begin{equation}\label{stima utile}
\begin{split}
\frac12 \int_{B_{\frac{d}{2}}(p)\cap M} \abs{H}^2 \, d \mathcal H^2 &\le \int_{M} \phi^2 \abs{A}^2 \, d \mathcal H^2\\
&\le c_1 \int_{M} \frac{1}{d^2}\abs{\eta'\Bigl(\frac{\abs{x-p}}{d}\Bigr)}^2 + c_2 \eta\left (\frac{\abs{x-p}}{d}\right)^2 \, d \mathcal H^2\\
&\le c_3 \frac{ \mathcal H^2(M \cap B_d(p))}{\pi d^2},
\end{split}
\end{equation}
where in the last inequality we used that $d<r_0\leq 1$.
Now we may use the extended monotonicity formula of L. Simon \cite[formula (1.3)]{Simon1993} to conclude that for any $r \le \frac{d}{2}$ we have for some universal constant $c>0$ (independent of $F, M, \Gamma$ and all our particular choices)
\begin{equation}\label{evviva}
\frac{ \mathcal H^2(M\cap B_r(p))}{\pi r^2} \le c \left( \frac{ \mathcal H^2(M\cap B_{\frac{d}{2}}(p))}{\pi d^2} + \int_{B_{\frac{d}{2}}(p)\cap M} \abs{H}^2 \, d \mathcal H^2 \right).
\end{equation}
Plugging \eqref{stima utile} in \eqref{evviva}, we conclude the lemma:
\[ \frac{ \mathcal H^2(M\cap B_r(p))}{\pi r^2} \le c (1 + 2c_3) \frac{ \mathcal H^2(M \cap B_d(p))}{\pi d^2} \le 4c(1+2c_3) \frac{ \mathcal H^2(M \cap B_{2d}(q))}{\pi (2d)^2} \le C \,C_0,\]
where $C$ depends just on $F$.
\end{proof}
Now we can finally combine the obtained results with the curvature estimate in \cite{White1991} to prove Theorem \ref{thm:curvature estimates at the boundary}.
\begin{proof}[Proof of Theorem \ref{thm:curvature estimates at the boundary}]
We choose $r_1 = \frac{r_0}{6}$ where $r_0$ is the radius in Proposition \ref{prop:no-boundary blow up}. Hence we may combine Proposition \ref{prop:no-boundary blow up} with Lemma \ref{lem:general_area_bound_in_dimension2} to deduce that for some constant $C$ depending only on $F, \Omega, \Gamma$
\[ \sup_{\substack{B_r(p) \subset B_{\frac{R}{2}}\\ r< 2r_1; \mathop{\mathrm{dist}}(p, \Gamma) < 2r_1}} \frac{ \mathcal H^2(M\cap B_r(p))}{\pi r^2} \le C. \]
In particular this implies that for each $q \in B_{\frac{R}{2}} \cap \Gamma$ we have
\[ \sup_{\substack{B_r(p) \subset B_{2r_1}(q)}} \frac{ \mathcal H^2(M\cap B_r(p))}{\pi r^2} \le C .\]
Hence the triple $\Omega, M, B_{2r_1}(q)$ satisfies the assumptions of \cite[Theorem 5.2]{White1991} and we deduce that all principal curvatures of $M \cap B_{r_1}(q)$ are bounded by a constant depending only on $F, \Omega, \Gamma$.
\end{proof}
\bibliographystyle{siam}
\section{Introduction}
For a directed graph $G=(V,E)$, with $n = |V|, m=|E|$, the Strongly-Connected Components (SCCs) of $G$ are the sets of the unique partition of the vertex set $V$ into sets $V_1, V_2, \dots, V_k$ such that for any two vertices $u \in V_i, v \in V_j$, there exists a directed cycle in $G$ containing $u$ and $v$ if and only if $i=j$. In the Single-Source Reachability (SSR) problem, we are given a distinguished source $r \in V$ and are asked to find all vertices in $V$ that can be reached from $r$. The SSR problem can be reduced to finding the SCCs by inserting edges from each vertex in $V$ to the distinguished source $r$.
Finding SCCs of a static graph in $O(m+n)$ time has been well-known since 1972\cite{tarjan1972depth}; the algorithm is commonly taught in undergraduate courses and also appears in CLRS\cite{cormen2009introduction}.
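To make the static baseline and the SSR-to-SCC reduction concrete, the following sketch (our own illustration, not part of the paper; the function names `sccs` and `reachable_from` are hypothetical) implements Kosaraju's linear-time SCC algorithm and the reduction that inserts an edge $(v,r)$ for every vertex $v$:

```python
from collections import defaultdict

def sccs(n, edges):
    """Kosaraju's algorithm: two DFS passes, O(n + m).
    Returns comp[v] = identifier of v's SCC."""
    out, inn = defaultdict(list), defaultdict(list)
    for u, v in edges:
        out[u].append(v)
        inn[v].append(u)
    # First pass: record vertices in order of DFS finishing time.
    seen, order = [False] * n, []
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(out[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(out[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    # Second pass: DFS on the reversed graph in reverse finishing order.
    comp, c = [-1] * n, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            u = stack.pop()
            for v in inn[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1
    return comp

def reachable_from(n, edges, r):
    """SSR via SCC: after adding (v, r) for every v, the SCC of r
    is exactly the set of vertices reachable from r."""
    comp = sccs(n, edges + [(v, r) for v in range(n)])
    return {v for v in range(n) if comp[v] == comp[r]}
```

The reduction is sound because the added edges all point into $r$: any cycle through $r$ and $v$ must contain a genuine path $r \leadsto v$.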
In this paper we focus on maintaining SCCs in a dynamic graph. The most general setting is the fully dynamic one, where edges are both inserted into and deleted from the graph. While many connectivity problems for undirected graphs have been solved quite efficiently in fully-dynamic graphs \cite{holm2001poly, wulff2013faster, thorup2000near, thorup2007fully, huang2017fully, nanongkai2017dynamic}, the directed versions of these problems have proven to be much harder to approach.
In fact, Abboud and Vassilevska Williams\cite{abboud2014popular} showed that any algorithm that can maintain whether there are more than 2 SCCs in a fully-dynamic graph with update time $O(m^{1-\epsilon})$ and query time $O(m^{1-\epsilon})$, for any constant $\epsilon > 0$, would imply a major breakthrough for SETH. The same paper also shows that $O(m^{1-\epsilon})$ update time and $O(n^{1-\epsilon})$ query time for maintaining the number of vertices reachable from a fixed source would imply a breakthrough for combinatorial Matrix Multiplication.
For this reason, research on dynamic SCC and dynamic single-source reachability has focused on the partially dynamic setting (decremental or incremental). In this paper we study the \emph{decremental} setting, where the original graph only undergoes edge deletions (no insertions). We note that both lower bounds above extend to decremental algorithms with \emph{worst-case} update time $O(m^{1-\epsilon})$, so all existing results focus on the amortized update time.
The first algorithm to maintain SSR faster than recomputation from scratch achieved total update time $O(mn)$\cite{shiloach1981line}. The same update time for maintaining SCCs was achieved by a randomized algorithm of Roditty and Zwick\cite{roditty2008improved}. Their algorithm also establishes that any algorithm for maintaining SSR can be turned into a randomized algorithm to maintain the SCCs, incurring only an additional constant multiplicative factor in the running time. Later, Łącki\cite{lkacki2013improved} presented a simple deterministic algorithm that matches the $O(mn)$ total update time and also maintains the transitive closure.
For several decades, it was not known how to get beyond total update time $O(mn)$, until a recent breakthrough by Henzinger, Krinninger and Nanongkai\cite{henzinger2014sublinear, henzinger2015improved} reduced the total update time to expected time $O(\min(m^{7/6}n^{2/3}, m^{3/4}n^{5/4+o(1)}, m^{2/3}n^{4/3+o(1)} + m^{3/7}n^{12/7+o(1)})) = O(mn^{0.9+o(1)})$. Even more recently, Chechik et al.\cite{chechik2016decremental} showed that a clever combination of the algorithms of Roditty and Zwick, and of Łącki can be used to improve the expected total update time to $\tilde{O}(m \sqrt{n})$. We point out that all of these recent results rely on randomization, and in fact no deterministic algorithm for maintaining SCCs or SSR beyond the $O(mn)$ bound is known for general graphs. For planar graphs, Italiano et al.\cite{italiano2017decremental} presented a deterministic algorithm with total update time $\tilde{O}(n)$.
Finally, in this paper, we present the first algorithm for general graphs that maintains SCCs in $\tilde{O}(m)$ expected total update time with constant query time, that is, the first near-optimal algorithm for the problem. We summarize our result in the following theorem.
\begin{theorem}
\label{thm:SCCmain}
Given a graph $G=(V,E)$ with $m$ edges and $n$ vertices, we can maintain a data structure that supports the operations:
\begin{itemize}
\item $\textsc{Delete}(u,v)$: Deletes the edge $(u,v)$ from the graph $G$,
\item $\textsc{Query}(u,v)$: Returns whether $u$ and $v$ are in the same SCC in $G$,
\end{itemize}
in total expected update time $O(m \log^4 n)$ and with worst-case constant query time. The same time bounds apply to answering, for a fixed source vertex $s \in V$, queries on whether a vertex $v \in V$ can be reached from $s$. The bound holds against an oblivious adaptive adversary.
\end{theorem}
Our algorithm makes the standard assumption of an oblivious adversary which does not have access to the coin flips made by the algorithm. But our algorithm does NOT require the assumption of a non-adaptive adversary, which is ignorant of answers to queries as well: the reason is simply that SCC and SSR information is unique, so the answers to queries do not reveal any information about the algorithm. One key exception is that for SSR, if the algorithm is expected to return a witness path, then it does require the assumption of a non-adaptive adversary.
A standard reduction described in Appendix \ref{sec:fullyReach} also implies a simple algorithm for maintaining reachability from some set $S \subseteq V$ to $V$ in a fully-dynamic graph with vertex set $V$, that is, a data structure that answers queries, for any $s \in S, v \in V$, on whether $s$ can reach $v$. The amortized expected update time is $\tilde{O}(|S|m/t)$ and the query time is $O(t)$, for every $t \in [1, |S|]$. We allow vertex updates, i.e. insertions or deletions of vertices with incident edges, which are more general than edge updates. This generalizes a well-known trade-off result for All-Pairs Reachability\cite{roditty2016fully, lkacki2013improved} with $\tilde{O}(nm/t)$ amortized update time and query time $O(t)$ for every $t \in [1, n]$.
Finally, we point out that maintaining SCCs and SSR is related to the more difficult (approximate) shortest-path problems. In fact, the algorithms \cite{shiloach1981line, henzinger1995fully, henzinger2014sublinear,henzinger2015improved} can also maintain (approximate) shortest-paths in decremental directed graphs. For undirected graphs, the decremental Single-Source Approximate Shortest-Path problem was recently solved to near-optimality\cite{henzinger2014decremental}, and deterministic algorithms\cite{bernstein2016deterministic, bernstein2017deterministic} have been developed that go beyond the $O(mn)$ barrier. We hope that our result inspires new algorithms to tackle the directed versions of these problems.
\section{Preliminaries}
\label{sec:prelim}
In this paper, we let a graph $H = (V,E)$ refer to a directed multi-graph where we allow multiple edges between two endpoints and self-loops but say that a cycle contains at least two distinct vertices. We refer to the vertex set of $H$ by $V(H)$ and the edge set by $E(H)$. We denote the input graph by $G$, let $V= V(G)$ and $E = E(G)$ and define $n = |V|$ and $m = |E|$. If the context is clear, we simply write sets $X$ instead of their cardinality $|X|$ in calculations to avoid cluttering.
We define a subgraph of $H$ to be a graph $H'$ with $V(H') = V(H)$ and $E(H') \subseteq E(H)$. Observe that this deviates from the standard definition of subgraphs since we require the vertex set to be equivalent. We write $H \setminus E'$ as a shorthand for the graph $(V(H), E(H) \setminus E')$ and $H \cup E'$ as a shorthand for $(V(H), E(H) \cup E')$. For any $S \subseteq V(H)$, we define $E^H_{out}(S)$ to be the set $(S \times V(H)) \cap E(H)$, i.e. the set of all edges in $H$ that emanate from a vertex in $S$; we analogously define $E^H_{in}(S)$ and $E^H(S) = E^H_{in}(S) \cup E^H_{out}(S)$. If the context is clear, we drop the superscript and simply write $E_{in}(S), E_{out}(S), E(S)$.
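The edge-set notation can be written out directly from the definition; the following one-liner-style sketch (illustrative only, with a hypothetical helper name) computes $E_{out}(S)$, $E_{in}(S)$ and $E(S)$ for an edge list:

```python
def edge_sets(edges, S):
    """Return (E_out(S), E_in(S), E(S)): the edges leaving a vertex of S,
    the edges entering a vertex of S, and their union."""
    E_out = {(u, v) for (u, v) in edges if u in S}
    E_in = {(u, v) for (u, v) in edges if v in S}
    return E_out, E_in, E_out | E_in
```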
For any graph $H$, and any two vertices $u,v \in V(H)$, we denote by $\mathbf{dist}_H(u,v)$ the distance from $u$ to $v$ in $H$. We also define the notion of $S$-distances for any $S \subseteq V(H)$: for any pair of vertices $u,v \in V(H)$, the $S$-distance $\mathbf{dist}_H(u,v, S)$ denotes the minimum, over all paths from $u$ to $v$, of the number of vertices in $S \setminus \{v\}$ encountered on the path. Alternatively, the $S$-distance corresponds to $\mathbf{dist}_{H'}(u,v)$, where $H'$ is the graph with the edges $E_{out}(S)$ of weight $1$ and the edges $E \setminus E_{out}(S)$ of weight $0$. It therefore follows that for any $u,v \in V(H)$, $\mathbf{dist}_H(u,v) = \mathbf{dist}_H(u,v,V)$.
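The weighted reformulation gives a direct way to compute an $S$-distance: run a 0-1 BFS in which every edge leaving a vertex of $S$ costs 1 and all other edges cost 0. The sketch below (our own illustration; the paper never computes $S$-distances this way explicitly) uses the standard deque trick:

```python
from collections import deque

def s_distance(adj, S, u, v):
    """0-1 BFS computing dist_H(u, v, S): edges leaving a vertex of S
    have weight 1, all other edges weight 0.  `adj` maps each vertex
    to its list of out-neighbours."""
    INF = float('inf')
    dist = {x: INF for x in adj}
    dist[u] = 0
    dq = deque([u])
    while dq:
        x = dq.popleft()
        w = 1 if x in S else 0          # weight of every edge leaving x
        for y in adj[x]:
            if dist[x] + w < dist[y]:
                dist[y] = dist[x] + w
                # 0-weight edges go to the front, 1-weight to the back.
                (dq.appendleft if w == 0 else dq.append)(y)
    return dist[v]
```

With $S = V$ this reduces to ordinary BFS distance, matching $\mathbf{dist}_H(u,v) = \mathbf{dist}_H(u,v,V)$.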
We define the diameter of a graph $H$ by $\mathbf{diam}(H) = \max_{u,v \in V} \mathbf{dist}_H(u,v)$ and the $S$-diameter by $\mathbf{diam}(H, S) = \max_{u,v \in V(H)} \mathbf{dist}_H(u,v,S)$. Therefore, $\mathbf{diam}(H) = \mathbf{diam}(H, V)$. For convenience, we often omit the subscript on relations if the context is clear and write $\mathbf{dist}(u,v,S)$.
We denote that a vertex $u$ \textit{reaches} $v$ in $H$ by $u \leadsto_H v$, and if $u \leadsto_H v$ and $v \leadsto_H u$, we simply write $u \rightleftarrows_H v$ and say $u$ and $v$ are \textit{strongly-connected}. We also use $\leadsto$ and $\rightleftarrows$ without the subscript if the underlying graph $H$ is clear from the context. We say that $H$ is strongly-connected if for any $u,v \in V(H)$, $u \rightleftarrows v$. We call the maximal subgraphs of $H$ that are strongly-connected, the strongly-connected components (SCCs). We denote by $\textsc{Condensation}(H)$ the \textit{condensation} of $H$, that is the graph where all vertices in the same SCC in $H$ are contracted. To distinguish them, we normally refer to the vertices in $\textsc{Condensation}(H)$ as \textit{nodes}. Each node in $\textsc{Condensation}(H)$ corresponds to a vertex set in $H$. The node set of a condensation $\textsc{Condensation}(H)$ forms a partition of $V(H)$. For convenience we define the function $\textsc{Flatten}(X)$ for a family of sets $X$ with $\textsc{Flatten}(X) = \bigcup_{x \in X} x$. This is useful when discussing condensations. Observe further that $\textsc{Condensation}(H)$ can be a multi-graph and might also contain self-loops. If we have an edge set $E'$ with all endpoints in $H$, we let $\textsc{Condensation}(H) \cup E'$ be the multi-graph obtained by mapping the endpoints of each edge in $E'$ to their corresponding SCC node in $\textsc{Condensation}(H)$ and adding the resulting edges to $\textsc{Condensation}(H)$.
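As an illustration (a sketch with hypothetical names; the SCC labelling `scc_of` is assumed given, e.g. by any linear-time SCC routine), the condensation as a multi-graph and the $\textsc{Flatten}$ operation can be materialised as follows:

```python
from collections import defaultdict

def condensation(vertices, edges, scc_of):
    """Build Condensation(H) ∪ E' as a multigraph whose nodes are
    frozensets of vertices.  `scc_of` maps each vertex to an SCC
    identifier; edges inside an SCC become self-loops and are kept,
    since the result is a multi-graph."""
    members = defaultdict(set)
    for v in vertices:
        members[scc_of[v]].add(v)
    node = {c: frozenset(vs) for c, vs in members.items()}
    nodes = set(node.values())            # a partition of V(H)
    multi_edges = [(node[scc_of[u]], node[scc_of[v]]) for u, v in edges]
    return nodes, multi_edges

def flatten(X):
    """Flatten(X) = union of the sets in X."""
    return set().union(*X)
```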
Finally, for two partitions $P$ and $P'$ of a set $U$, we say that partition $P$ is a \textit{melding} for a partition $P'$ if for every set $X \in P'$, there exists a set $Y \in P$ with $X \subseteq Y$. We also observe that \textit{melding} is transitive, thus if $P$ is a melding for $P'$ and $P'$ a melding for $P''$ then $P$ is a melding for $P''$.
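The melding relation and its transitivity are easy to state in code (an illustrative sketch, with partitions given as collections of frozensets):

```python
def is_melding(P, P_prime):
    """P is a melding for P' iff every set of P' is contained in
    some set of P."""
    return all(any(X <= Y for Y in P) for X in P_prime)

# Transitivity: if P0 melds P1 and P1 melds P2, then P0 melds P2.
P2 = [frozenset({1}), frozenset({2}), frozenset({3})]
P1 = [frozenset({1, 2}), frozenset({3})]
P0 = [frozenset({1, 2, 3})]
assert is_melding(P1, P2) and is_melding(P0, P1) and is_melding(P0, P2)
```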
\section{Overview}
\label{subsec:overview}
We now introduce the graph hierarchy maintained by our algorithm, followed by a high-level overview of our algorithm.
\paragraph{High-level overview of the hierarchy.} Our hierarchy has levels $0$ to $\lfloor \lg n \rfloor + 1$ and we associate with each level $i$ a subset $E_i$ of the edges $E$.
The sets $E_i$ form a partition of $E$; we define the edges that go into each $E_i$ later in the overview but point out that we maintain $E_{\lfloor \lg n \rfloor + 1} = \emptyset$. We define a graph hierarchy $\hat{G} = \{\hat{G}_0, \hat{G}_1, \dots ,\hat{G}_{\lfloor \lg n \rfloor + 1}\}$ such that each graph $\hat{G}_i$ is defined as
\[
\hat{G}_i = \textsc{Condensation}((V, \bigcup_{j < i} E_j)) \cup E_{i}
\]
That is, each $\hat{G}_i$ is the condensation of a subgraph of $G$ with some additional edges. As mentioned in the preliminary section, we refer to the elements of the set $\hat{V}_i = V(\hat{G}_i)$ as \textit{nodes} to distinguish them from \textit{vertices} in $V$. We use capital letters to denote nodes and small letters to denote vertices. We let $X_i^v$ denote the node in $\hat{V}_i$ with $v \in X_i^v$. Observe that each node $X$ corresponds to a subset of vertices in $V$ and that for any $i$, $\hat{V}_i$ can in fact be seen as a partition of $V$. For $\hat{G}_0 = \textsc{Condensation}((V, \emptyset)) \cup E_{0}$, the set $\hat{V}_0$ is a partition of singletons, i.e. $\hat{V}_0 = \{ \{v\} | v \in V\}$, and $X_0^v = \{ v\}$ for each $v \in V$.
Observe that because the sets $E_i$ form a partition of $E$ and $E_{\lfloor \lg n \rfloor + 1} = \emptyset$, the top graph $\hat{G}_{\lfloor \lg n \rfloor + 1}$ is simply defined as
\[
\hat{G}_{\lfloor \lg n \rfloor + 1} = \textsc{Condensation}((V, \bigcup_{j < \lfloor \lg n \rfloor + 1} E_j)) \cup E_{\lfloor \lg n \rfloor + 1} = \textsc{Condensation}((V,E)).
\]
Therefore, if we can maintain $\hat{G}_{\lfloor \lg n \rfloor + 1} $ efficiently, we can answer queries on whether two vertices $u,v \in V$ are in the same SCC in $G$ by checking if $X_{\lfloor \lg n \rfloor + 1}^u$ is equal to $X_{\lfloor \lg n \rfloor + 1}^v$.
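To make the hierarchy concrete, the following sketch computes the partitions $\hat{V}_0, \hat{V}_1, \dots$ bottom-up from given edge levels $E_0, \dots, E_k$ (a naive static illustration with our own function names; the paper of course maintains these partitions decrementally, not by recomputation):

```python
def hierarchy_partitions(n, levels):
    """Given edge sets E_0, ..., E_k (`levels`), return the partitions
    hat-V_0, ..., hat-V_{k+1}: hat-V_0 is the singleton partition and
    hat-V_{i+1} groups vertices strongly connected by E_0 ∪ ... ∪ E_i.
    Uses a naive reachability-based SCC computation, for clarity only."""
    def scc_partition(edges):
        reach = [{v} for v in range(n)]
        changed = True
        while changed:                  # naive transitive closure
            changed = False
            for u, v in edges:
                new = reach[v] - reach[u]
                if new:
                    reach[u] |= new
                    changed = True
        return {frozenset(w for w in range(n)
                          if u in reach[w] and w in reach[u])
                for u in range(n)}
    parts = [{frozenset({v}) for v in range(n)}]
    acc = []
    for E_i in levels:
        acc += list(E_i)
        parts.append(scc_partition(acc))
    return parts
```

Each returned partition is a melding of the one before it, mirroring the fact that SCCs only grow as edges are added up the hierarchy.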
Let us offer some intuition for the hierarchy. The graph $\hat{G}_0$ contains all the vertices of $G$, and all the edges of $E_0 \subseteq E$. By definition of $\textsc{Condensation}(\cdot)$, the nodes of $\hat{G}_1$ precisely correspond to the SCCs of $\hat{G}_0$. $\hat{G}_1$ also includes the edges $E_0$ (though some of them are contracted into self-loops in $\textsc{Condensation}((V, E_0))$), as well as
the additional edges in $E_1$. These additional edges might lead to $\hat{G}_1$ having larger SCCs than those of $\hat{G}_0$; each SCC in $\hat{G}_1$ then corresponds to a node in $\hat{G}_2$. More generally, the nodes of $\hat{G}_{i+1}$ are the SCCs of $\hat{G}_{i}$.
As we move up the hierarchy, we add more and more edges to the graph, so the SCCs get larger and larger. Thus, each set $\hat{V}_i$ is a \textit{melding} for any $\hat{V}_j$ for $j \leq i$; that is, for each node $Y \in \hat{V}_j$ there exists a set $X \in \hat{V}_i$ such that $Y \subseteq X$. We sometimes say we \textit{meld} nodes $Y, Y' \in \hat{V}_j$ to $X \in \hat{V}_{i}$ if $Y, Y' \subseteq X$ and $j < i$. Additionally, we observe that for any SCC $Y \subseteq \hat{V}_i$ in $\hat{G}_i$, we meld the nodes in SCC $Y$ to a node $X \in \hat{V}_{i+1}$, and $X$ consists exactly of the vertices contained in the nodes of $Y$. More formally, $X = \textsc{Flatten}(Y)$.
To maintain the SCCs in each graph $\hat{G}_i$, our algorithm employs a bottom-up approach. At level $i+1$ we want to maintain SCCs in the graph with all the edges in $\bigcup_{j \leq i+1} E_j$, but instead of doing so from scratch, we use the SCCs maintained at level $\hat{G}_i$ as a starting point. The SCCs in $\hat{G}_i$ are precisely the SCCs in the graph with edge set $\bigcup_{j \leq i} E_j$; so to maintain the SCCs at level $i+1$, we only need to consider how the sliver of edges in $E_{i+1}$ cause the SCCs in $\hat{G}_{i}$ to be melded into larger SCCs (which then become the nodes of $\hat{G}_{i+2}$).
If the adversary deletes an edge in $E_i$, all the graphs $\hat{G}_{i-1}$ and below remain unchanged, as do the nodes of $\hat{G}_{i}$. But the deletion might split apart an SCC in $\hat{G}_i$, which will in turn cause a node of $\hat{G}_{i+1}$ to split into multiple nodes. This split might then cause an SCC of $\hat{G}_{i+1}$ to split, which will propagate further up the hierarchy.
In addition to edge deletions caused by the adversary, our algorithm will sometimes move edges from $E_i$ to $E_{i+1}$. Because the algorithm only moves edges \emph{up} the hierarchy, each graph $\hat{G}_i$ is only losing edges, so the update sequence remains decremental from the perspective of each $\hat{G}_i$. We now give an overview of how our algorithm maintains the hierarchy efficiently.
\paragraph{ES-trees.} A fundamental data structure that our algorithm employs is the ES-tree \cite{shiloach1981line, henzinger1995fully}, which, for a directed unweighted graph $G=(V,E)$ undergoing edge deletions and a distinguished source $r \in V$, maintains the distance $\mathbf{dist}_G(r, v)$ for each $v \in V$. In fact, the ES-tree maintains a shortest-path tree rooted at $r$; we subsequently refer to this tree as the ES out-tree. We call the ES in-tree rooted at $r$ the shortest-path tree maintained by running the ES-tree data structure on the graph $G$ with reversed edge set, i.e. the edge set where each edge $(u,v) \in E$ appears in the form $(v,u)$. We can maintain each in-tree and out-tree decrementally to depth $\delta > 0$ in time $O(|E| \cdot \delta)$; that is, we can maintain the distances $\mathbf{dist}_G(r, v)$ and $\mathbf{dist}_G(v, r)$ exactly until one of them exceeds $\delta$.
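For reference, here is a minimal Python sketch of the level-based bookkeeping behind a decremental ES out-tree. This is our simplification: it maintains only the distance labels up to depth $\delta$, not the explicit tree; the key point is that levels never decrease, which is what caps the total work at $O(m\delta)$.

```python
from collections import deque

class ESTree:
    """Decremental single-source distances (the ES out-tree levels) up to
    depth delta.  Levels only ever increase; a vertex whose level would
    exceed delta is marked unreachable (infinity)."""

    INF = float('inf')

    def __init__(self, n, edges, root, delta):
        self.root, self.delta = root, delta
        self.out = [set() for _ in range(n)]
        self.inn = [set() for _ in range(n)]
        for u, v in edges:
            self.out[u].add(v)
            self.inn[v].add(u)
        self.level = [self.INF] * n
        self.level[root] = 0
        q = deque([root])                 # plain BFS, truncated at depth delta
        while q:
            u = q.popleft()
            if self.level[u] == delta:
                continue
            for v in self.out[u]:
                if self.level[v] > self.level[u] + 1:
                    self.level[v] = self.level[u] + 1
                    q.append(v)

    def dist(self, v):
        return self.level[v]

    def delete(self, u, v):
        """Delete edge (u, v) and restore the invariant that level[x] is
        1 + the minimum level over in-neighbours, capped at delta."""
        self.out[u].discard(v)
        self.inn[v].discard(u)
        q = deque([v])
        while q:
            x = q.popleft()
            if x == self.root or self.level[x] == self.INF:
                continue
            best = min((self.level[w] for w in self.inn[x]), default=self.INF)
            new = best + 1 if best + 1 <= self.delta else self.INF
            if new > self.level[x]:       # levels are monotone: only raise
                self.level[x] = new
                q.extend(self.out[x])
```

A vertex on a cycle that loses its last path from the root is raised repeatedly until its level exceeds $\delta$, which is exactly the behaviour the $O(m\delta)$ charging argument accounts for.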
\paragraph{Maintaining SCCs with ES-trees.} Consider again the graph $\hat{G}_0$ and let $X \subseteq \hat{V}_0$ be some SCC in $\hat{G}_0$ that we want to maintain. Let some node $X'$ in $X$ be chosen as the \textit{center} node of the SCC (in the case of $\hat{G}_0$, the node $X'$ is just a single-vertex set $\{ v \}$). We then maintain an ES in-tree and an ES out-tree from $X'$ that span the nodes in $X$ in the induced graph $\hat{G}_0[X]$. We must maintain the trees up to distance $\mathbf{diam}(\hat{G}_0[X])$, so the total update time is $O(|E(\hat{G}_0[X])| \cdot \mathbf{diam}(\hat{G}_0[X]))$.
Now, consider an edge deletion in $\hat{G}_0$ after which the ES in-tree or ES out-tree at $X'$ is no longer a spanning tree. We then detect that the SCC $X$ has to be split into at least two SCCs $X_1, X_2, \dots, X_k$ that are node-disjoint with $X = \bigcup_i X_i$. In each new SCC $X_i$ we choose a new center and initialize a new ES in-tree and ES out-tree.
\paragraph{Exploiting small diameter.} The above scheme is clearly quite efficient if $\mathbf{diam}(\hat{G}_0)$ is very small. Our goal is therefore to choose the edge set $E_0$ in such a way that $\hat{G}_0$ contains only SCCs of small diameter. To this end we turn to some insights from \cite{chechik2016decremental} and extract information from the ES in-trees and out-trees to maintain small diameter. Their scheme fixes some $\delta > 0$, and if a set of nodes $Y \subseteq X$ for some SCC $X$ ends up at distance $\Omega(\delta)$ from/to $\textsc{Center}(X)$ due to an edge deletion in $\hat{G}_0$, it finds a node separator $S$ of size $O(\min\{|Y|, |X \setminus Y|\}\log n / \delta)$; removing $S$ from $\hat{G}_0$ causes $Y$ and $X \setminus Y$ to no longer be in the same SCC. We use this technique and remove the edges incident to the node separator $S$ from $E_0$, and therefore from $\hat{G}_0$. One subtle observation we want to stress at this point is that each node in the separator set also appears as a single-vertex node in the graph $\hat{G}_1$; this is because each separator node $\{ s\}$ for some $s \in V$ is not \textit{melded} with any other node in $\hat{V}_0$, as it has no edges in $\hat{G}_0$ to or from any other node.
For a carefully chosen $\delta = \Theta(\log^2 n)$, we can maintain $\hat{G}_0$ such that at most half the nodes in $\hat{V}_0$ become separator nodes at any point of the algorithm. This follows since each separator set is small in comparison to the smaller side of the cut, and since each node in $\hat{V}_0$ can be on the smaller side of a cut only $O(\log n)$ times.
\paragraph{Reusing ES-trees.} Let us now refine our approach to maintaining the ES in-trees and out-trees and introduce a crucial ingredient devised by Roditty and Zwick \cite{roditty2008improved}. Instead of picking an arbitrary center node $X'$ from an SCC $X$ with $X' \in X$, we pick a vertex $r \in \textsc{Flatten}(X) \subseteq V$ uniformly at random and run our ES in-tree and out-tree $\mathcal{E}_{r}$ from the node $X_0^r$ on the graph $\hat{G}_0$. For each SCC $X$ we denote the randomly chosen root $r$ by $\textsc{Center}(X)$. In order to improve the running time, we \textit{reuse} ES-trees: when the SCC $X$ is split into SCCs $X_1, X_2, \dots, X_k$, where we assume w.l.o.g. that $r \in \textsc{Flatten}(X_1)$, we remove the nodes in $X_2, \dots, X_k$ from $\mathcal{E}_{r}$ and set $\textsc{Center}(X_1) = r$. Thus, we only need to initialize a new ES-tree for each of the SCCs $X_2, \dots, X_k$. Using this technique, we can show that each node is expected to participate in $O(\log n)$ ES-trees over the entire course of the algorithm: if an SCC $X$ breaks into SCCs $X_1, X_2, \dots, X_k$, then either every SCC $X_i$ has at most half the size of $X$, or with probability at least $1/2$ the part $X_1$ that keeps the ES-tree contains at least half the vertices, i.e. the random root is contained in the largest part of the graph. Since the ES-trees work on induced graphs with disjoint node sets, we conclude that the total update time for all ES-trees is $O(m \log n \cdot \mathbf{diam}(\hat{G}_0))$.
We point out that using the ES in-trees and out-trees to detect node separators as described above complicates the analysis of the technique by Roditty and Zwick \cite{roditty2008improved}, but a clever proof presented in \cite{chechik2016decremental} shows that the technique can still be applied. In this paper, we present a proof that is slightly simpler and can even deal with some additional complications.
\paragraph{A contrast to the algorithm of Chechik et al.\ \cite{chechik2016decremental}.}
Other than our hierarchy, the overview we have given so far largely comes from the algorithm of Chechik et al.\ \cite{chechik2016decremental}. However, their algorithm does not use a hierarchy of graphs. Instead, they show that for any graph $G$, one can find (and maintain) a node separator $S$ of size $\tilde{O}(n/\delta)$ such that all SCCs in $G \setminus S$ have diameter at most $\delta$. They can then use ES-trees with random sources to maintain the SCCs in $G \setminus S$ in total update time $\tilde{O}(m\delta)$. This leaves them with the task of computing how the vertices in $S$ might meld some of the SCCs in $G \setminus S$. They are able to do this in total update time $\tilde{O}(m|S|) = \tilde{O}(mn/\delta)$ by using an entirely different technique of \cite{lkacki2013improved}. Setting $\delta = \tilde{O}(\sqrt{n})$, they achieve the optimal trade-off between the two techniques: total update time $\tilde{O}(m\sqrt{n})$ in expectation.
We achieve our $\tilde{O}(m)$ total update time by entirely avoiding the technique of \cite{lkacki2013improved} for separately handling a small set of separator nodes, and instead using the graph hierarchy described above, where at each level we set $\delta$ to be polylog rather than $\tilde{O}(\sqrt{n})$.
We note that while our starting point is the same as \cite{chechik2016decremental}, using a hierarchy of separators forces us to take a different perspective on the function of a separator set. The reason is that it is simply not possible to ensure that at each level of the hierarchy, all SCCs have small diameter. To overcome this, we instead aim for separator sets that decompose the graph into SCCs that are small with respect to a different notion of distance. The rest of the overview briefly sketches this new perspective, while sweeping many additional technical challenges under the rug.
\paragraph{Refining the hierarchy.} So far, we only discussed how to maintain $\hat{G}_0$ efficiently by deleting many edges from $E_0$ and hence ensuring that SCCs in $\hat{G}_0$ have small diameter. To discuss our bottom-up approach, let us define our graphs $\hat{G}_i$ more precisely.
We maintain a separator hierarchy $\mathcal{S} = \{S_0, S_1, \dots, S_{\lfloor \lg{n} \rfloor + 2}\}$ where $\hat{V}_0 = S_0 \supseteq S_1 \supseteq \dots \supseteq S_{\lfloor \lg{n} \rfloor + 1} = S_{\lfloor \lg{n} \rfloor + 2} = \emptyset$, with $|S_i| \leq n/2^i$ for all $i \in [0, \lfloor \lg{n} \rfloor + 2]$ (for technical reasons we need $S_{\lfloor \lg{n} \rfloor + 2}$ in order to define $\hat{G}_{\lfloor \lg{n} \rfloor + 1}$; see below). Each set $S_i$ is a set of single-vertex nodes -- i.e. nodes of the form $\{ v \}$ -- and grows monotonically over time.
We can now define each edge set more precisely: $E_i = E(\textsc{Flatten}(S_{i} \setminus S_{i+1}))$. To avoid clutter, we abuse notation slightly, henceforth referring to $\textsc{Flatten}(X)$ simply as $X$ if $X$ is a set of singleton sets and the context is clear. We therefore obtain
\[
\hat{G}_i = \textsc{Condensation}((V, \bigcup_{j < i} E_j)) \cup E_{i} = \textsc{Condensation}(G \setminus E(S_{i})) \cup E(S_{i} \setminus S_{i+1}).
\]
In particular, note that $\hat{G}_i$ contains all the edges of $G$ except those in $E(S_{i+1})$; as we move up to level $\hat{G}_{i+1}$, we add the edges incident to $S_{i+1} \setminus S_{i+2}$. Note that if $s \in S_{i} \setminus S_{i+1}$, and our algorithm then adds $s$ to $S_{i+1}$, this will remove all edges incident to $s$ from $E_{i}$ and add them to $E_{i+1}$. Thus the fact that the sets $S_i$ used by the algorithm are monotonically increasing implies the desired property that edges only move up the hierarchy (remember that we add more vertices to $S_i$ due to new separators found on level $i-1$).
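The edge bookkeeping can be illustrated with the following sketch. It assumes one consistent tie-breaking rule that is implicit above: each edge is assigned to the highest level $i$ whose set $S_i$ it touches, so that $\bigcup_{j \leq i} E_j$ is exactly $E \setminus E(S_{i+1})$, and moving a vertex from $S_i$ to $S_{i+1}$ moves its incident edges up one level.

```python
def edge_levels(edges, S):
    """Assign each edge (u, v) the highest level i such that u or v lies in
    S[i].  S is the hierarchy S[0] = V >= S[1] >= ... >= S[-1] = empty set;
    since S[0] contains every vertex, every edge receives a level."""
    def level(u, v):
        return max(i for i, Si in enumerate(S) if u in Si or v in Si)
    return {(u, v): level(u, v) for (u, v) in edges}
```

With this rule, the edges on levels $0, \dots, i$ are precisely the edges not incident to $S_{i+1}$, matching the description of $\hat{G}_i$ above.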
At a high level, the idea of the hierarchy is as follows. Focusing on a level $i$, when the ``distances'' in some SCC of $\hat{G}_i$ get too large (for a notion of distance defined below), the algorithm will move a carefully chosen set of separator nodes $s_1, s_2, \dots$ from $S_i$ to $S_{i+1}$. By definition of our hierarchy, this removes the edges incident to these nodes from $\hat{G}_i$, thus causing the SCCs of $\hat{G}_i$ to decompose into smaller SCCs with more manageable ``distances''. We note that our algorithm always maintains the invariant that nodes added to $S_{i+1}$ were previously in $S_i$, which, by the definition of our hierarchy, ensures that at all times the separator nodes in $S_{i+1}$ are single-vertex nodes in $\hat{V}_{i+1}$; this is because the nodes of $\hat{V}_{i+1}$ are the SCCs of $\hat{G}_i$, and $\hat{G}_i$ contains no edges incident to $S_{i+1}$.
\paragraph{Exploiting $S$-distances.} For our algorithm, classic ES-trees are only useful for maintaining SCCs in $\hat{G}_0$; to handle levels $i > 0$ we develop a new generalization of ES-trees that uses a different notion of distance. This enables us to detect when SCCs split in the graphs $\hat{G}_i$ and to find separator nodes in $\hat{G}_i$, as discussed above, more efficiently.
Our generalized ES-tree (GES-tree) can be seen as a combination of the classic ES-tree \cite{shiloach1981line} and a data structure by Italiano \cite{italiano1988finding} that maintains reachability from a distinguished source in a directed acyclic graph (DAG), and which can be implemented in total update time $O(m)$.
Let $S$ be some feedback vertex set in a graph $G = (V,E)$; that is, every cycle in $G$ contains a vertex in $S$. Then our GES-tree can maintain
$S$-distances and a corresponding shortest-path tree up to $S$-distance $\delta > 0$ from a distinguished source $X_i^r$ for some $r \in V$ in the graph $G$. (See Section \ref{sec:prelim} for the definition of $S$-distances.) This data structure can be implemented to take $O(m\delta)$ total update time.
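For intuition, here is a static Python sketch of $S$-distances, using the 0/1-weight view that also appears in the proof sketches below: an edge leaving a vertex of $S$ costs $1$, every other edge costs $0$. This only illustrates the notion (the formal definition is in the preliminaries); it is not the dynamic data structure.

```python
from collections import deque

def s_distance(n, edges, S, r):
    """Static S-distances from r: an edge (u, v) costs 1 if u is in S and 0
    otherwise, computed with a 0/1-BFS (a deque instead of a priority queue)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append((v, 1 if u in S else 0))
    dist = [float('inf')] * n
    dist[r] = 0
    dq = deque([r])
    while dq:
        u = dq.popleft()
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                # weight-0 edges go to the front, weight-1 edges to the back
                (dq.appendleft if w == 0 else dq.append)(v)
    return dist
```

Note that with $S = V$ every edge costs $1$ and we recover the ordinary distance, consistent with $\mathbf{dist}_G(r, v) = \mathbf{dist}_G(r, v, V)$.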
\paragraph{Maintaining the SCCs in $\hat{G}_i$.} Let us focus on maintaining the SCCs in $\hat{G}_i = \textsc{Condensation}(G \setminus E(S_{i})) \cup E(S_{i} \setminus S_{i+1})$. Since the condensation of any graph forms a DAG, every cycle in $\hat{G}_i$ contains at least one edge from the set $E(S_{i} \setminus S_{i+1})$. Since $E(S_{i} \setminus S_{i+1})$ is a set of edges incident to $S_i$, the set $S_i$ forms a feedback node set of $\hat{G}_i$. Now consider the scheme described in the paragraphs above, but instead of running an ES in-tree and out-tree from each center $\textsc{Center}(X)$ of an SCC $X$, we run a GES in-tree and out-tree on $\hat{G}_i[X]$ that maintain the $S_i$-distances to depth $\delta$. Using this GES, whenever a set $Y \subseteq X$ of nodes has $S_i$-distance $\Omega(\delta)$, we show that we can find a separator $S$ of size $O(\min\{|S_i \cap Y|, |S_i \cap (X \setminus Y)|\}\log n / \delta)$ that only consists of nodes in $\{\{s\} | s \in S_i\}$; we then add the elements of $S$ to the set $S_{i+1}$, and we also remove the nodes in $Y$ from the GES-tree, analogously to our discussion of regular ES-trees above. Note that adding $S$ to $S_{i+1}$ removes the edges $E(S)$ from $\hat{G}_i$; since we chose $S$ to be a separator, this causes $Y$ and $X \setminus Y$ to no longer be part of the same SCC in $\hat{G}_i$. Thus, to maintain the hierarchy, we must then split nodes in $\hat{G}_{i+1}$ into multiple nodes corresponding to the new SCCs in $\hat{G}_i$: $X \setminus (Y \cup S)$, $Y \setminus S$, and every single-vertex set in $S$ ($Y \setminus S$ might not form an SCC, but in that case we decompose it further after handling the node split). This might cause some self-loops in $\hat{G}_{i+1}$ to become edges between the newly inserted nodes resulting from the split, and it needs to be handled carefully to embed the new nodes in the GES-trees maintained on the SCC of $\hat{G}_{i+1}$ that $X$ is part of.
Observe that this does not result in edge insertions but only remaps the endpoints of edges. Further observe that splitting nodes can only increase $S_{i+1}$-distances, since while the nodes were still contracted their distances from the center coincided. Since $S_{i+1}$-distances might increase, the update might trigger further changes in the graph $\hat{G}_{i+1}$.
Thus, overall, we ensure that all SCCs in $\hat{G}_i$ have $S_i$-diameter at most $O(\delta)$, and can hence be efficiently maintained by GES-trees. In particular, we show that whenever an SCC exceeds $S_i$-diameter $\delta$, we can, by moving a carefully chosen set of nodes from $S_i$ to $S_{i+1}$, remove a corresponding set of edges in $\hat{G}_i$, which breaks the large-$S_i$-diameter SCC into SCCs of smaller $S_i$-diameter.
\paragraph{Bounding the total update time.} Finally, let us sketch how to obtain the total expected running time $O(m \log^4 n)$. We already discussed how by using random sources in GES-trees (analogously to the same strategy for ES-trees), we ensure that each node is expected to be in $O(\log n)$ GES-trees maintained to depth $\delta = O(\log^2 n)$.
Each such GES-tree is maintained in total update time $O(m\delta) = O(m\log^2n)$, so we have $O(m \log^3 n)$ total expected update time for each level, and since we have $O(\log n)$ levels, we obtain total expected update time $O(m\log^4 n)$. We point out that we have not included the time to compute the separators in this analysis; indeed, computing separators efficiently is one of the major challenges in building our hierarchy. Since implementing these subprocedures efficiently is rather technical and cumbersome, we omit their description from the overview and refer to Section \ref{subsec:separators} for a detailed discussion.
\section{Generalized ES-trees}
\label{subsec:EStree}
Even and Shiloach \cite{shiloach1981line} devised a data structure, commonly referred to as the ES-tree, that, given a vertex $r \in V$ in a graph $G=(V,E)$ undergoing edge deletions, maintains the shortest-path tree from $r$ to depth $\delta$ in total update time $O(m \delta)$, such that the distance $\mathbf{dist}_G(r,v)$ of any vertex $v \in V$ can be obtained in constant time. Henzinger and King \cite{henzinger1995fully} later observed that the ES-tree can be adapted to maintain the shortest-path tree in directed graphs.
For our algorithm, we devise a new version of the ES-tree that maintains the shortest-path tree with regard to $S$-distances. We show that if $S$ is a \textit{feedback vertex set} for $G$, that is, a set such that every cycle in $G$ contains at least one vertex in $S$, then the data structure requires only $O(m\delta)$ total update time. Our fundamental idea is to combine classic ES-trees with techniques for maintaining single-source reachability in DAGs, which can be implemented in time linear in the number of edges \cite{italiano1988finding}. Since $\mathbf{dist}_G(r, v) = \mathbf{dist}_G(r, v, V)$ and $V$ is a trivial feedback vertex set, our data structure generalizes the classic ES-tree. Since the empty set is a feedback vertex set for DAGs, our data structure also matches the time complexity of Italiano's data structure. We define the interface formally below.
\begin{definition}
\label{def:GES}
Let $G=(V,E)$ be a graph and $S$ a feedback vertex set for $G$, $r \in V$ and $\delta > 0$. We define a generalized ES-tree $\mathcal{E}_r$ (GES) to be a data structure that supports the following operations:
\begin{itemize}
\item $\textsc{InitGES}(r, G, S, \delta)$: Sets the parameters for our data structure. We initialize the data structure and return the GES.
\item $\textsc{Distance}(r, v)$: $\forall v \in V$, if $\mathbf{dist}_G(r, v, S) \leq \delta$, $\mathcal{E}_r$ reports $\mathbf{dist}_G(r, v, S)$, otherwise $\infty$.
\item $\textsc{Distance}(v, r)$: $\forall v \in V$, if $\mathbf{dist}_G(v, r, S) \leq \delta$, $\mathcal{E}_r$ reports $\mathbf{dist}_G(v, r, S)$, otherwise $\infty$.
\item $\textsc{Delete}(u,v)$: Sets $E \gets E \setminus \{(u,v)\}$.
\item $\textsc{Delete}(V')$: For $V' \subseteq V$, sets $V \gets V \setminus V'$, i.e. removes the vertices in $V'$ and all incident edges from the graph $G$.
\item $\textsc{GetUnreachableVertex}()$: Returns a vertex $v \in V$ with $\max\{ \mathbf{dist}_G(r, v, S), \mathbf{dist}_G(v, r, S)\} > \delta$, or $\bot$ if no such vertex exists.
\end{itemize}
\end{definition}
\begin{lemma}
\label{lma:SimpleGES}
The GES $\mathcal{E}_r$ as described in Definition \ref{def:GES} can be implemented with total initialization and update time $O(m \delta)$ and requires worst-case time $O(1)$ for each operation $\textsc{Distance}(\cdot)$ and $\textsc{GetUnreachableVertex}()$.
\end{lemma}
We defer the full proof to Appendix \ref{sec:proofsimpleges} but sketch the proof idea here.
\begin{proof}[Proof sketch]
Consider a classic ES-tree where each edge $(u,v)$ in $E_{out}(S)$ has weight $w(u,v) = 1$ and all other edges have weight $0$. The classic ES-tree analysis maintains for each vertex $v \in V$ the distance level $l(v)$ that expresses the current distance from $r$ to $v$. We also have a shortest-path tree $T$, where the path in the tree from $r$ to $v$ is of weight $l(v)$. Since $T$ is a shortest-path tree, we also have that for every edge $(u,v) \in E$, $l(v) \leq l(u) + w(u,v)$. Now, consider the deletion of an edge $(u,v)$ from $G$ that was in $T$. To certify that the level $l(v)$ does not have to be increased, we scan the incoming edges at $v$ and try to find an edge $(u', v) \in E$ such that $l(v) = l(u') + w(u', v)$. On finding this edge, we add $(u',v)$ to $T$. The problem is that if we allow $0$-weight cycles, the edge $(u',v)$ that we use to reconnect $v$ might come from a $u'$ that was a descendant of $v$ in $T$. This would break the algorithm, as it disconnects $v$ from $r$ in $T$. But we show that this bad case cannot occur: since $S$ is a feedback vertex set, at least one vertex on any cycle must be in $S$, and the outgoing edges of this vertex have weight $1$, so no $0$-weight cycle exists. The rest of the analysis closely follows the classic ES-tree analysis.
\end{proof}
To ease the description of our SCC algorithm, we tweak our GES implementation to work on the multi-graphs $\hat{G}_i$. We still root the GES at a vertex $r \in V$, but maintain the tree in $\hat{G}_i$ at $X_i^r$. The additional operations and their running times are described in the following lemma, whose proof is straightforward and therefore deferred to Appendix \ref{sec:proofAugmentedGES}. Note that we now deal with nodes rather than vertices, which makes the definition of $S$-distances ambiguous (consider, for example, a node containing two vertices in $S$). For this reason, we require $S \subseteq \{\{v\} | v \in V\} \cap \hat{V}$ in the lemma below, i.e. that the nodes containing vertices in $S$ are single-vertex nodes. As discussed in the paragraph ``Refining the hierarchy'' in the overview, our hierarchy ensures that every separator node $X \in S_i$ is always a single-vertex node in $\hat{G}_i$, so this constraint can be satisfied by our hierarchy.
\begin{lemma}
\label{lma:AugmentedGES}
Say we are given a partition $\hat{V}$ of a universe $V$ and the graph $\hat{G}=(\hat{V}, E)$, a feedback node set $S \subseteq \{\{v\} | v \in V\} \cap \hat{V}$, a distinguished vertex $r \in V$, and a positive integer $\delta$. Then, we can run a GES $\mathcal{E}_{r}$ as in Definition \ref{def:GES} on $\hat{G}$ in time $O(m\delta + \sum_{X \in \hat{V}} |E(X)| \log |X|)$, supporting the additional operations:
\begin{itemize}
\item $\textsc{SplitNode}(X)$: the input is a set of vertices $X$ contained in a node $Y \in \hat{V}$, such that either $E \cap (X \times (Y \setminus X))$ or $E \cap ((Y \setminus X) \times X)$ is empty, which implies $X \not\rightleftarrows_{\hat{G} \setminus S} Y \setminus X$. We remove the node $Y$ from $\hat{V}$ and add the nodes $X$ and $Y \setminus X$ to $\hat{V}$.
\item $\textsc{Augment}(S')$: This procedure adds the nodes in $S'$ to the feedback vertex set $S$. Formally,
the input is a set of single-vertex sets $S' \subseteq \{\{v\} | v \in V\} \cap \hat{V}$. $\textsc{Augment}(S')$ then adds every $s \in S'$ to $S$.
\end{itemize}
\end{lemma}
We point out that we enforce the properties on the set $X$ in the operation $\textsc{SplitNode}(X)$ in order to ensure that the set $S$ remains a feedback node set at all times.
\section{Initializing \texorpdfstring{the graph hierarchy $\hat{\mathcal{G}}$}{the graph hierarchy}}
\label{subsec:Preprocessing}
We assume henceforth that the graph $G$ is initially strongly-connected. If the graph is not strongly-connected, we can run Tarjan's algorithm \cite{tarjan1972depth} in $O(m+n)$ time to find the SCCs of $G$ and run our algorithm on each SCC separately.
\begin{algorithm}
\caption{$\textsc{Preprocessing}(G, \delta)$}
\label{alg:preprocessing}
\KwIn{A strongly-connected graph $G=(V,E)$ and a parameter $\delta > 0$.}
\KwOut{A hierarchy of sets $\mathcal{S} = \{ S_0, S_1, \dots, S_{\lfloor \lg{n} \rfloor + 2}\}$ and graphs $\hat{\mathcal{G}} = \{\hat{G}_0, \hat{G}_1, \dots, \hat{G}_{\lfloor \lg{n} \rfloor + 1}\}$ as described in Section \ref{subsec:overview}. Further, each SCC $X$ in $\hat{G}_i$ for $i \leq \lfloor \lg n \rfloor + 1$ has a center $\textsc{Center}(X)$ such that for any $y \in X$, $\mathbf{dist}_{\hat{G}_{i}}(\textsc{Center}(X),y, S_{i}) \leq \delta/2$ and $\mathbf{dist}_{\hat{G}_{i}}(y,\textsc{Center}(X), S_{i}) \leq \delta/2$. }
\BlankLine
$ S_0 \gets V$\;
$ \hat{V}_0 \gets \{\{v\} | v \in V\}$\;
$ \hat{G}_0 \gets (\hat{V}_0, E)$\label{lne:G0Init}\;
\For{ $i = 0 $ \KwTo $ \lfloor \lg{n} \rfloor$}{
\tcc{Find a separator $S_{Sep}$ such that no two vertices in the same SCC of $\hat{G}_{i} \setminus E(S_{Sep})$ have $S_i$-distance more than $\delta/2$. $P$ is the collection of these SCCs.}
$ (S_{Sep}, P) \gets \textsc{Split}(\hat{G}_i, S_{i}, \delta/2)$ \;
$ S_{i+1} \gets S_{Sep} $\;
$\textsc{InitNewPartition}(P, i, \delta)$\;
\tcc{Initialize the graph $\hat{G}_{i+1}$}
$ \hat{V}_{i+1} \gets P$\;
$ \hat{G}_{i+1} \gets (\hat{V}_{i+1}, E)$\label{lne:GIInit}\;
}
$S_{\lfloor \lg{n} \rfloor + 2} \gets \emptyset$\;
\end{algorithm}
Our procedure to initialize our data structure is presented in pseudo-code in algorithm \ref{alg:preprocessing}. We first initialize the level $0$ where $\hat{G}_0$ is simply $G$ with the vertex set $V$ mapped to the set of singletons of elements in $V$.
Let us now focus on iteration $i$. Observe that the graph $\hat{G}_i$ initially has all edges in $E$ (by the initialization of each $\hat{G}_i$ in line \ref{lne:G0Init} or line \ref{lne:GIInit}). Our goal is then to ensure that all SCCs in $\hat{G}_i$ have small $S_i$-diameter, at the cost of removing some of the edges from $\hat{G}_i$. Invoking the procedure $\textsc{Split}(\hat{G}_i, S_i, \delta/2)$ provides us with a set of separator nodes $S_{Sep}$ whose removal from $\hat{G}_i$ ensures that the $S_i$-diameter of all remaining SCCs is at most $\delta$. The set $P$ is the collection of all these SCCs, i.e. the collection of the SCCs in $\hat{G}_i \setminus E(S_{Sep})$.
Lemma \ref{lma:split} below describes in detail the properties guaranteed by the procedure $\textsc{Split}(\hat{G}_i, S_i, \delta/2)$. In particular, besides the properties ensuring small $S_i$-diameter in the graph $\hat{G}_i \setminus E(S_{Sep})$ (properties \ref{prop:1} and \ref{prop:2}), the procedure also gives an upper bound on the number of separator vertices (property \ref{prop:3}). Setting $\delta = 64 \lg^2 n$ clearly implies that $|S_{Sep}| \leq |S_i|/2$ and ensures running time $O(m \log^3 n)$.
\begin{lemma}
\label{lma:split}
$\textsc{Split}(\hat{G}_i, S_i, \delta/2)$ returns a tuple $(S_{Sep}, P)$ where $P$ is a partition of the node set $\hat{V}_i$ such that
\begin{enumerate}
\item for $X \in P$ and nodes $u,v \in X$, we have $\mathbf{dist}_{\hat{G}_i \setminus E(S_{Sep})}(u,v,S_i) \leq \delta/2$, and \label{prop:1}
\item for distinct $X, Y \in P$ with nodes $u \in X$ and $v \in Y$, $u \not\rightleftarrows_{\hat{G}_i \setminus E(S_{Sep})} v$, and \label{prop:2}
\item
$|S_{Sep}| \leq \frac{32 \lg^2 n}{\delta} |S_i|$. \label{prop:3}
\end{enumerate}
The algorithm runs in time $O\left(\delta m \lg n\right)$.
\end{lemma}
We then set $S_{i+1} = S_{Sep}$, which implicitly removes the edges $E(S_{i+1})$ from the graph $\hat{G}_i$. We then invoke the procedure $\textsc{InitNewPartition}(P, i, \delta)$, presented in algorithm \ref{alg:newPart}. For each $X \in P$ that corresponds to an SCC in $\hat{G}_i$, the procedure initializes a GES-tree on the induced graph $\hat{G}_i[X]$, rooted at a vertex $r \in \textsc{Flatten}(X)$ chosen uniformly at random. Observe that we do not explicitly keep track of the edge set $E_i$ but remove further edges implicitly by only maintaining the induced subgraphs of $\hat{G}_i$ that form SCCs. A small detail we want to point out is that each separator node $X \in S_{Sep}$ also forms its own single-node set in the partition $P$.
\begin{algorithm}
\caption{$\textsc{InitNewPartition}(P, i, \delta)$}
\label{alg:newPart}
\KwIn{A partition $P$ of a subset of the nodes of $\hat{G}_i$, the level $i$ in the hierarchy, and the depth parameter $\delta$.}
\KwResult{Initializes a new ES-tree for each set in the partition on the induced subgraph $\hat{G}_i$.}
\BlankLine
\ForEach{$ X \in P $}{
Let $r$ be a vertex picked from $\textsc{Flatten}(X)$ uniformly at random. \;
$\textsc{Center}(X) \gets r$\;
\tcc{Init a generalized ES-tree from $\textsc{Center}(X)$ to depth $\delta$.}
$\mathcal{E}_r^i \gets \textsc{InitGES}(\textsc{Center}(X), \hat{G}_i[X], S_i, \delta)$\;
}
\end{algorithm}
Returning to algorithm \ref{alg:preprocessing}, we are left with initializing the graph $\hat{G}_{i+1}$. To do so, we simply set $\hat{V}_{i+1}$ to $P$ and again use all edges in $E$. Finally, we initialize $S_{\lfloor \lg n \rfloor + 2}$ to the empty set; it remains unchanged throughout the entire course of the algorithm.
Let us briefly sketch the analysis of the algorithm, which is carried out more carefully in subsequent sections. Using again $\delta = 64 \lg^2 n$ and Lemma \ref{lma:split}, we ensure that $|S_{i+1}| \leq |S_i|/2$, and thus $|S_i| \leq n/2^i$ for all levels $i$. Executing the $\textsc{Split}(\cdot)$ procedure $\lfloor \lg n \rfloor + 1$ times incurs running time $O(m \log^4 n)$, and initializing the GES-trees takes at most $O(m \delta)$ time on each level, incurring running time $O(m \log^3 n)$.
\section{Finding Separators}
\label{subsec:separators}
Before we describe how to update the data structure after an edge deletion, we explain how to find good separators, since this is crucial for our update procedure. We then show how to obtain an efficient implementation of the procedure $\textsc{Split}(\cdot)$, the core procedure of the initialization.
Indeed, the separator properties that we want to show are essentially reflected in the properties of lemma \ref{lma:split}. For simplicity, we describe the separator procedures on simple graphs instead of our graphs $\hat{G}_i$; it is easy to translate these procedures to our multi-graphs
$\hat{G}_i$ because the separator procedures are not dynamic; they are only ever invoked on a fixed graph, and so we do not have to worry about node splitting and the like.
To gain some intuition for the technical statement of our separator properties in lemma \ref{lma:sep}, suppose we are given a graph $G=(V,E)$, a subset $S$ of the vertices $V$, a vertex $r \in V$, and a depth $d$. Our goal is to find a separator $S_{Sep} \subseteq S$ such that every vertex in the graph $G \setminus S_{Sep}$ either is at $S$-distance at most $d$ from $r$ \emph{or} cannot be reached from $r$, i.e. is separated from $r$.
We henceforth let $V_{Sep} \subseteq V$ denote the set of vertices that are still reachable from $r$ in $G \setminus S_{Sep}$ (in particular, no vertex of $S_{Sep}$ is contained in $V_{Sep}$, and $r \in V_{Sep}$). A natural side condition for separators is then to require the set $S_{Sep}$ to be small in comparison to the smaller side of the cut, i.e. small in comparison to $\min\{|V_{Sep}|, |V \setminus (V_{Sep} \cup S_{Sep})|\}$.
Since we are concerned with $S$-distances, we aim for a more general guarantee: we want the set $S_{Sep}$ to be small in comparison to the number of $S$ vertices on any side of the cut, i.e. small in comparison to $\min\{|V_{Sep} \cap S|, |(V \setminus (V_{Sep} \cup S_{Sep})) \cap S|\}$. This is expressed in property \ref{prop:balanceS} of the lemma.
\begin{lemma}[Balanced Separator]
\label{lma:sep}
There exists a procedure $\textsc{OutSeparator}(r, G, S, d)$ (analogously $\textsc{InSeparator}(r, G, S, d)$) where $G=(V,E)$ is a graph, $r \in V$ a root vertex, $S \subseteq V$ and $d$ a positive integer. The procedure computes a tuple $(S_{Sep}, V_{Sep})$ such that
\begin{enumerate}
\item $S_{Sep} \subseteq S$, $V_{Sep} \subseteq V$, $S_{Sep} \cap V_{Sep} = \emptyset$, $r \in V_{Sep}$,
\item \label{step:separator-distance} $\forall v \in V_{Sep} \cup S_{Sep}$, we have $\mathbf{dist}_G(r,v,S) \leq d$ (analogously $\mathbf{dist}_G(v,r,S) \leq d$ for $\textsc{InSeparator}(r, G, S, d)$),
\item \[
|S_{Sep}| \leq \frac{\min\{|V_{Sep}\cap S|, | (V \setminus (S_{Sep} \cup V_{Sep})) \cap S|\} 2\log{n}}{d},\] \label{prop:balanceS}
and
\item for any $x \in V_{Sep}$ and $y \in V \setminus (S_{Sep} \cup V_{Sep})$, we have $x \not\leadsto_{G \setminus E(S_{Sep})} y$ (analogously $y \not\leadsto_{G \setminus E(S_{Sep})} x$ for $\textsc{InSeparator}(r, G, S, d)$).
\end{enumerate}
The running time of both $\textsc{OutSeparator}(\cdot)$ and $\textsc{InSeparator}(\cdot)$ can be bounded by $O(|E(V_{Sep})|)$.
\end{lemma}
Again, we defer the proof of Lemma \ref{lma:sep} to Appendix \ref{sec:ProofLemmaSep}, but sketch the main proof idea.
\begin{proof}
To implement the procedure $\textsc{OutSeparator}(r, G, S, d)$, we start by computing a BFS from $r$. Here, we again assign edges in $E_{out}(S)$ weight $1$ and all other edges weight $0$, and say a layer consists of all vertices that are at the same distance from $r$. To find the first layer, we can use the graph $G \setminus E_{out}(S)$ and run a normal BFS from $r$; the vertices reached form the first layer $L_0$. We can then add, for each edge $(u,v) \in E_{out}(S)$ with $u \in L_0$, the vertex $v$ to $L_1$ if it is not already in $L_0$. We can then contract all vertices visited so far into a single vertex $r'$ and repeat the procedure described for the initial root $r$. It is straightforward to see that the vertices of a layer that are also in $S$ form a separator of the graph. To obtain a separator that is small in comparison to $|V_{Sep} \cap S|$, we add each of the layers $0$ to $d/2$ one after another to our set $V_{Sep}$, and output the index $i$ of the first layer that grows the set of $S$-vertices in $V_{Sep}$ by a factor of less than $(1+\frac{2 \log n}{d})$. We then set $S_{Sep}$ to be the vertices in $S$ that are in layer $i$. If the separator is not small in comparison to $| (V \setminus (S_{Sep} \cup V_{Sep})) \cap S|$, we grow more layers and output the first index of a layer such that the separator is small in comparison to $| (V \setminus (S_{Sep} \cup V_{Sep})) \cap S|$. Such a layer must exist and is also small in comparison to $|V_{Sep} \cap S|$. Because we find our separator vertices $S_{Sep}$ using a BFS from $r$, a useful property of our separator is that all the vertices in $S_{Sep}$ and $V_{Sep}$ are within bounded distance from $r$.
Finally, we can ensure that the running time of the procedure is linear in the size of the set $E(V_{Sep})$, since these are the edges that were explored by the BFS from root $r$.
\end{proof}
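The layer-growing idea can be illustrated with a minimal Python sketch. This is a simplification under assumed conventions (not the routine analyzed above): the graph is an adjacency dict, $V_{Sep}$ is taken to be the full layers below the separator layer, and the contraction of visited layers is omitted.

```python
import math
from collections import deque

def out_separator(graph, r, S, d):
    """Illustrative sketch of OutSeparator: graph maps each vertex to its
    out-neighbours, S is a set of vertices, and an edge (u, v) costs 1
    iff u is in S, so computed distances are S-distances."""
    # 0/1-BFS from r: weight-0 edges go to the front of the deque
    dist = {r: 0}
    dq = deque([r])
    while dq:
        u = dq.popleft()
        for v in graph.get(u, ()):
            w = 1 if u in S else 0
            if v not in dist or dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                (dq.append if w else dq.appendleft)(v)
    # layer i = vertices at S-distance exactly i from r
    layers = {}
    for v, dv in dist.items():
        layers.setdefault(dv, set()).add(v)
    threshold = 1 + 2 * math.log(max(len(graph), 2)) / d
    ball_S, v_sep = set(), set()
    for i in range(d // 2 + 1):
        layer = layers.get(i, set())
        new_S = layer & S
        # first layer whose S-vertices grow the S-ball by a factor < threshold
        if ball_S and len(ball_S | new_S) < threshold * len(ball_S):
            return new_S, v_sep          # (S_sep, V_sep)
        ball_S |= new_S
        v_sep |= layer
    return set(), v_sep                  # ball kept growing: no cut found
```

On a directed path with every vertex in $S$, the ball gains one $S$-vertex per layer, so already the second layer is returned as a separator.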
Let us now discuss the procedure $\textsc{Split}(G,S,d)$ that we already encountered in section \ref{subsec:Preprocessing} and whose pseudo-code is given in algorithm \ref{alg:split}. Recall that the procedure computes a tuple $(S_{Split}, P)$ such that the graph $G \setminus E(S_{Split})$ contains no SCC with $S$-diameter larger than $d$, and where $P$ is the collection of all SCCs in the graph $G \setminus E(S_{Split})$.
\begin{algorithm}
\caption{$\textsc{Split}(G, S, d)$}
\label{alg:split}
\KwIn{A graph $G=(V,E)$, a set $S \subseteq V$ and a positive integer $d$.}
\KwOut{Returns a tuple $(S_{Split}, P)$, where $S_{Split} \subseteq S$ is a separator such that no two vertices in the same SCC in $G \setminus E(S_{Split})$ have $S$-distance greater than $d$. $P$ is the collection of these SCCs.}
\BlankLine
$S_{Split} \gets \emptyset; P \gets \emptyset; G' \gets G;$\;
\While{$G' \neq \emptyset$}{
Pick an arbitrary vertex $r$ in $G'$.\;
Run in parallel $\textsc{OutSeparator}(r, G', S, d/16)$ and $\textsc{InSeparator}(r, G', S, d/16)$ and let $(S_{Sep}, V_{Sep})$ be the tuple returned by the first subprocedure that finishes.\label{lne:sepTwoWay}\;
\lIf(\label{lne:sepTwoWayIf}){$|V_{Sep}| \leq \frac{2}{3}|V|$}{
$(S'_{Sep}, V'_{Sep}) \gets (S_{Sep}, V_{Sep})$
}\Else(\label{lne:sepTwoWayElse}){
Run the separator procedure that was aborted in line \ref{lne:sepTwoWay} until it finishes and let the tuple returned by this procedure be $(S'_{Sep}, V'_{Sep})$.
}
\If(\label{line:split-if-case}){$|V'_{Sep}| \leq \frac{2}{3}|V|$} {
$(S_{Small}, P_{Small}) \gets \textsc{Split}(G'[V'_{Sep}], V'_{Sep} \cap S, d)$\label{lne:splitRecurseIf} \;
$S_{Split} \gets S_{Split} \cup S_{Small} \cup S'_{Sep}$\label{lne:addtoS1}\;
$P \gets P \cup P_{Small} \cup \{ \{s\} | s \in S'_{Sep}\}$\label{lne:addToPSep}\;
$G' \gets G'[V \setminus (V'_{Sep} \cup S'_{Sep})]$
}\Else(\label{line:split-else-case}){
\tcc{Init a generalized ES-tree from $r$ to depth $d/2$.}
$\mathcal{E}_r \gets \textsc{InitGES}(r, G', S, d/2)$\;
\tcc{Find a good separator for every vertex that is far from $r$.}
\While(\label{line:split-else-while}) { $(v \gets \mathcal{E}_r.\textsc{GetUnreachableVertex}()) \neq \bot$} {
\If{$\mathcal{E}_r.\textsc{Distance}(r,v) > d/2$}{
$(S''_{Sep}, V''_{Sep}) \gets \textsc{InSeparator}(v,G', S, d/4)$\;
}\Else(\tcp*[h]{If $\mathcal{E}_r.\textsc{Distance}(v,r) > d/2$}){
$(S''_{Sep}, V''_{Sep}) \gets \textsc{OutSeparator}(v,G', S, d/4)$\;
}
$\mathcal{E}_r.\textsc{Delete}(S''_{Sep} \cup V''_{Sep})$\;
$(S'''_{Sep}, P''') \gets \textsc{Split}( G[V''_{Sep}] , V''_{Sep} \cap S , d)$\label{lne:splitRecurse}\;
$S_{Split} \gets S_{Split} \cup S''_{Sep} \cup S'''_{Sep}$\label{lne:addtoS2}\;
$P \gets P \cup P''' \cup \{ \{s\} | s \in S''_{Sep}\}$\label{lne:addToP1}\;
}
$P \gets P \cup \{\mathcal{E}_r.\textsc{GetAllVertices}()\}$\label{lne:addToP2}\;
$G' \gets \emptyset$\;
}
}
\Return $(S_{Split}, P)$\;
\end{algorithm}
Let us sketch the implementation of the procedure $\textsc{Split}(G,S,d)$. We first pick an arbitrary vertex $r$ and invoke the procedures $\textsc{OutSeparator}(r, G', S, d/16)$ and $\textsc{InSeparator}(r, G', S, d/16)$ in parallel; that is, the operations of the two procedures are interleaved during the execution. If one of these subprocedures returns and presents a separator tuple $(S_{Sep}, V_{Sep})$, the other procedure is aborted. If $|V_{Sep}| \leq \frac{2}{3}|V|$, then we conclude that the separator procedure only visited a small part of the graph. Therefore, we use the separator subsequently, but denote the tuple henceforth by $(S'_{Sep},V'_{Sep})$. Otherwise, we decide the separator is not useful for our purposes. We therefore return to the subprocedure we previously aborted and continue its execution. We then continue with the returned tuple $(S'_{Sep}, V'_{Sep})$.
From there on, there are two possible scenarios. The first scenario is that the subprocedure producing $(S'_{Sep}, V'_{Sep})$ has visited a rather small fraction of the vertices in $V$ (line \ref{line:split-if-case}); in this case, we have pruned away a small vertex set $V'_{Sep}$ while only spending time proportional to the smaller side of the cut, so we can simply recurse on $V'_{Sep}$. We also have to continue pruning away vertices from the original set $V$, until we have either removed all vertices from $G$ by finding these separators and recursing, or until we enter the else-case (line \ref{line:split-else-case}).
The else-case in line \ref{line:split-else-case} is the second possible scenario: note that in this case we must have entered the else-case in line \ref{lne:sepTwoWayElse} and had both the $\textsc{InSeparator}(\cdot)$ \emph{and} the $\textsc{OutSeparator}(\cdot)$ explore the large side of the cut. Thus we cannot afford to simply recurse on the smaller side of the cut $V \setminus V'_{Sep}$, as we have already spent time $|V'_{Sep}| > |V \setminus V'_{Sep}|$. Thus, for this case we use a different approach. We observe that because we entered the else-case in line \ref{lne:sepTwoWayElse} and since we entered the else-case in line \ref{line:split-else-case}, we must have had that $|V_{Sep}| > \frac{2}{3}|V|$ \textit{and} that $|V'_{Sep}| > \frac{2}{3}|V|$. We will show that in this case, the root $r$ must have small $S$-distance to and from at least $\frac{1}{3}|V|$ vertices. We then show that this allows us to efficiently prune away at most $\frac{2}{3}|V|$ vertices from $V$ at large $S$-distance to or from $r$. We recursively invoke $\textsc{Split}(\cdot)$ on the induced subgraphs of the vertex sets that we pruned away.
We analyze the procedure in detail in multiple steps, and summarize the result in lemma \ref{lma:splitFull}, which is the main result of this section. Let us first prove that if the algorithm enters the else-case in line \ref{line:split-else-case}, then we add an SCC of size at least $\frac{1}{3}|V|$ to $P$.
\begin{claim}
\label{clm:largeSCCifEStree}
If the algorithm enters line \ref{line:split-else-case} then the vertex set returned by $\mathcal{E}_r.\textsc{GetAllVertices}()$ in line \ref{lne:addToP2} is of size at least $\frac{1}{3}|V|$.
\end{claim}
\begin{proof}
Observe first that since we did not enter the if-case in line \ref{line:split-if-case}, we have $|V_{Sep}| > \frac{2}{3}|V|$ and $|V'_{Sep}| > \frac{2}{3}|V|$ (since we also cannot have entered the if-case in line \ref{lne:sepTwoWayIf}).
Since we could not find a sufficiently good separator in either direction, we certified that the $S$-out-ball from $r$, defined as
\[
B_{out}(r) = \{ v \in V | \mathbf{dist}_G(r , v, S) \leq d/16\}
\]
has size greater than $\frac{2}{3}|V|$, and that similarly, the $S$-in-ball $B_{in}(r)$ of $r$ has size greater than $\frac{2}{3}|V|$. This implies that
\[
|B_{out}(r) \cap B_{in}(r)| > \frac{1}{3}|V|.
\]
Further, every vertex on a shortest path between $r$ and a vertex $v \in B_{out}(r) \cap B_{in}(r)$ has a shortest path from and to $r$ of length at most $d/16$. Thus the $S$-distance between any pair of vertices in $B_{out}(r) \cap B_{in}(r)$ is at most $d/8$. Now, let $SP$ be the set of all vertices that are on a shortest path w.r.t. $S$-distance between two vertices in $B_{out}(r) \cap B_{in}(r)$. Clearly, $B_{out}(r) \cap B_{in}(r) \subseteq SP$, so $|SP| > |V|/3$. It is also easy to see that $G[SP]$ has $S$-diameter at most $d/4$.
At this point, the algorithm repeatedly finds a vertex $v$ that is far from $r$ and finds a separator from $v$. We will now show that the part of the cut containing $v$ is always disjoint from $SP$; since $|SP| > |V|/3$, this implies that at least $|V|/3$ vertices remain in $\mathcal{E}_r$.
Consider some vertex $v$ chosen in line \ref{line:split-else-while}. Let us say that we now run $\textsc{InSeparator}(v,G',S,d/4)$; the case where we run $\textsc{OutSeparator}(v,G',S,d/4)$ is analogous. Now, by property \ref{step:separator-distance} in lemma \ref{lma:sep}, every $w \in S_{Sep} \cup V_{Sep}$ has $\mathbf{dist}(w,v,S) \leq d/4$. Thus, since we only run the \textsc{InSeparator} if we have $\mathbf{dist}(r,v,S) > d/2$, we must have $\mathbf{dist}(r,w,S) > d/4$ for every such $w$. Since every vertex of $SP$ has $S$-distance at most $d/16 + d/8 < d/4$ from $r$, the pruned side $S_{Sep} \cup V_{Sep}$ is disjoint from $SP$.
\end{proof}
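The counting step $|B_{out}(r) \cap B_{in}(r)| > \frac{1}{3}|V|$ is plain inclusion-exclusion; a toy Python check on a directed cycle illustrates it (a simplification where $S = V$, so $S$-distance coincides with ordinary distance):

```python
def ball(adj, r, d):
    """Vertices within (unweighted) distance d of r; with S = V this is
    exactly the S-out-ball B_out(r) from the proof."""
    seen, frontier = {r}, {r}
    for _ in range(d):
        frontier = {v for u in frontier for v in adj.get(u, ())} - seen
        seen |= frontier
    return seen

n = 9
cycle = {i: [(i + 1) % n] for i in range(n)}    # directed n-cycle
rcycle = {i: [(i - 1) % n] for i in range(n)}   # all edges reversed
b_out = ball(cycle, 0, 6)
b_in = ball(rcycle, 0, 6)   # the in-ball is the out-ball of the reverse graph
assert len(b_out) > 2 * n / 3 and len(b_in) > 2 * n / 3
# inclusion-exclusion: |B_out ∩ B_in| >= |B_out| + |B_in| - |V| > |V|/3
assert len(b_out & b_in) > n / 3
```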
We point out that claim \ref{clm:largeSCCifEStree} implies that $\textsc{Split}(\cdot)$ only recurses on disjoint subgraphs containing at most a $2/3$ fraction of the vertices of the given graph. To see this, observe that we either recurse in line \ref{lne:splitRecurseIf} on $G'[V'_{Sep}]$ after we explicitly checked whether $|V'_{Sep}| \leq \frac{2}{3}|V|$ in the if-condition, or we recurse in line \ref{lne:splitRecurse} on the subgraph pruned from the set of vertices that $\mathcal{E}_r$ was initialized on. But since by claim \ref{clm:largeSCCifEStree} the remaining vertex set in $\mathcal{E}_r$ is of size at least $|V|/3$, the subgraphs pruned away can contain at most $\frac{2}{3}|V|$ vertices.
We can use this observation to establish correctness of the $\textsc{Split}(\cdot)$ procedure.
\begin{claim}
\label{clm:SplitCorrectness}
$\textsc{Split}(G, S, d)$ returns a tuple $(S_{Split}, P)$ where $P$ is a partition of the vertex set $V$ such that
\begin{enumerate}
\item for $X \in P$, and vertices $u,v \in X$ we have $\mathbf{dist}_{G \setminus E(S_{Split})}(u,v,S) \leq d$, and
\item for distinct $X, Y \in P$, with vertices $u \in X$ and $v \in Y$, $u \not\rightleftarrows_{G \setminus E(S_{Split})} v$, and
\item
$|S_{Split}| \leq \frac{32 \log n}{d} \sum_{X \in P} \lg (n - |X \cap S|) |X \cap S|$. \label{prop:splitcorrect3}
\end{enumerate}
\end{claim}
\begin{proof}
Let us start with the first two properties, which we prove by induction on $|V|$; the base case $|V|=1$ is easily checked. For the inductive step, observe that each SCC $X$ in the final collection $P$ was added to $P$ in line \ref{lne:addToPSep}, \ref{lne:addToP1} or \ref{lne:addToP2}. We distinguish three cases:
\begin{enumerate}
\item a vertex $s$ was added as a singleton set after appearing in a separator $S_{Sep}$; but then $\{s\}$ is strongly-connected, and $s$ cannot reach any other vertex in $G \setminus E(S_{Split})$ since it has no out-going edges, or
\item an SCC $X$ was added as part of a collection $P'''$ in line \ref{lne:addToP1}. But then the collection $P'''$ satisfies the properties in $G[V''_{Sep}]$ by the induction hypothesis, and since $V''_{Sep}$ was a cut side and $S''_{Sep}$ was added to $S_{Split}$, there cannot be a path to \emph{and} from any vertex in $G \setminus E(S_{Split})$, or
\item we added the non-trivial SCC $X$ to $P$ after constructing a GES-tree from some vertex $r \in X$ and after pruning each vertex at $S$-distance to/from $r$ larger than $d/2$ (see the while-loop in line \ref{line:split-else-while}). But then each vertex that remains in $X$ can reach $r$ within $S$-distance $d/2$ and is reached from $r$ within $S$-distance $d/2$, implying that any two vertices $u,v \in X$ have a path from $u$ to $v$ of $S$-distance at most $d$.
\end{enumerate}
Finally, let us upper bound the number of vertices in $S_{Split}$. We use a classic charging argument: each time we add a separator $S_{Sep}$ to $S_{Split}$ with sides $V_{Sep}$ and $V \setminus (V_{Sep} \cup S_{Sep})$, at least one of these sides contains at most half the $S$-vertices in $V \cap S$. Let $X$ be the smaller side of the cut (in terms of $S$-vertices); then by property \ref{prop:balanceS} from lemma \ref{lma:sep}, we can charge each $S$-vertex in $X$ for $\frac{32 \log{n}}{d}$ separator vertices (since we invoke $\textsc{OutSeparator}(\cdot)$ and $\textsc{InSeparator}(\cdot)$ with parameter at least $d/16$).
Observe that once we have determined a separator $S_{Sep}$ that is about to be added to $S_{Split}$ in line \ref{lne:addtoS1} or \ref{lne:addtoS2}, we only recurse on the induced subgraph $G'[V_{Sep}]$ and let the graph in the next iteration be $G'[V \setminus (V_{Sep} \cup S_{Sep})]$.
Let $X$ be an SCC in the final collection $P$. Then each vertex $v \in X$ can have been charged at most $\lg (n - |X \cap S|)$ times. The claim follows.
\end{proof}
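The charging argument rests on the fact that a vertex can land on the smaller side of a cut only $\lg n$ times, since each such event at least halves the $S$-vertex count of its component. A toy trace of the worst case:

```python
import math

# A vertex is charged only when it lands on the side of a cut holding at
# most half of its component's S-vertices, so each charge at least halves
# that count; hence no vertex is charged more than lg(n) times.
n = 1024
size, charges = n, 0
while size > 1:
    size //= 2          # worst case: the smaller side has exactly half
    charges += 1
assert charges == int(math.log2(n))
```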
It remains to bound the running time. Before we bound the overall running time, let us prove the following claim on the running time of invoking the separator procedures in parallel.
\begin{claim}
\label{clm:twowaysep}
We spend $O(E(V'_{Sep} \cup S'_{Sep}))$ time to find a separator in line \ref{lne:sepTwoWayIf} or \ref{lne:sepTwoWayElse}.
\end{claim}
\begin{proof}
Observe that we run the subprocedures $\textsc{OutSeparator}(r, G', S, d/16)$ and $\textsc{InSeparator}(r, G', S, d/16)$ in line \ref{lne:sepTwoWay} in parallel. That is, we interleave their machine operations, computing one operation from $\textsc{OutSeparator}(r, G', S, d/16)$ and then one operation from $\textsc{InSeparator}(r, G', S, d/16)$ in turn. Let us assume that $\textsc{OutSeparator}(r, G', S, d/16)$ is the first subprocedure that finishes and returns the tuple $(S_{Sep}, V_{Sep})$. Then, by lemma \ref{lma:sep}, the subprocedure used $O(E(V_{Sep} \cup S_{Sep}))$ time. Since the subprocedure $\textsc{InSeparator}(r, G', S, d/16)$ ran at most one more operation than $\textsc{OutSeparator}(r, G', S, d/16)$, it also used $O(E(V_{Sep} \cup S_{Sep}))$ operations. A symmetric argument establishes the same bound if $\textsc{InSeparator}(r, G', S, d/16)$ finishes first. The overhead induced by running two procedures in parallel can be made constant.
Since assignments take constant time, the claim follows immediately from this discussion if the if-case in line \ref{lne:sepTwoWayIf} is entered. Otherwise, we compute a new separator tuple by continuing the execution of the formerly aborted separator subprocedure. But by the same argument as above, this subprocedure's running time now dominates the running time of the subprocedure that finished first in line \ref{lne:sepTwoWay}. The time to compute $(S'_{Sep}, V'_{Sep})$ is thus again upper bounded by $O(E(V'_{Sep}))$ by lemma \ref{lma:sep}, as required.
\end{proof}
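The interleaving used in the claim can be mimicked in Python with generators, where each `yield` stands for one machine operation. This is an illustrative stand-in only; `counter` is a toy workload, not one of the separator procedures.

```python
def run_interleaved(gen_a, gen_b):
    """Alternate one step of each generator and return (label, result) of
    whichever finishes first; the loser is simply abandoned, having run
    at most one step more than the winner."""
    while True:
        for label, gen in (('a', gen_a), ('b', gen_b)):
            step = next(gen)
            if step is not None:    # convention: yield None while working
                return label, step

def counter(limit):
    """Toy workload: 'limit' units of work, then a result."""
    for _ in range(limit):
        yield None
    yield limit

assert run_interleaved(counter(3), counter(10)) == ('a', 3)
```

The abandoned generator has executed at most one more step than the winner, which is exactly why the parallel invocation is charged only the winner's running time up to a constant factor.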
Finally, we have established enough claims to prove lemma \ref{lma:splitFull}.
\begin{lemma}[Strengthening of lemma \ref{lma:split}]
\label{lma:splitFull}
$\textsc{Split}(G, S, d)$ returns a tuple $(S_{Split}, P)$ where $P$ is a partition of the vertex set $V$ such that
\begin{enumerate}
\item for $X \in P$, and vertices $u,v \in X$ we have $\mathbf{dist}_{G \setminus E(S_{Split})}(u,v,S) \leq d$, and
\item for distinct $X, Y \in P$, with vertices $u \in X$ and $v \in Y$, $u \not\rightleftarrows_{G \setminus E(S_{Split})} v$, and
\item
$|S_{Split}| \leq \frac{32 \log n}{d} \sum_{X \in P} \lg (n - |X \cap S|) |X \cap S|$
\end{enumerate}
The algorithm runs in time $O\left(d \sum_{X \in P} (1 + \lg( n - |X|)) E(X) \right)$.
\end{lemma}
\begin{proof}
Since correctness was established in claim \ref{clm:SplitCorrectness}, it only remains to bound the running time of the procedure. Let us first bound the running time without the recursive calls to the procedure $\textsc{Split}(G, S,d)$. To see that we only spend $O(|E(G)| d)$ time in $\textsc{Split}(G, S,d)$ excluding recursive calls, observe first that we can find each separator tuple $(S'_{Sep}, V'_{Sep})$ in time $O(E(V'_{Sep}))$ by claim \ref{clm:twowaysep}. We then either recurse on $G'[V'_{Sep}]$ and remove the vertices $V'_{Sep} \cup S'_{Sep}$ with their incident edges from $G'$, or we enter the else-case (line \ref{line:split-else-case}). Clearly, if our algorithm never visits the else-case, we only spend time $O(|E(G)|)$ excluding the recursive calls, since we immediately remove the edge set that we found in the separator from the graph.
We further observe that the running time for the GES-tree can be bounded by $O(|E(G)| d)$. The time to compute the separators that prune vertices away from the GES-tree is again at most $O(|E(G)|)$ in total, by lemma \ref{lma:sep} and the observation that we remove edges from the graph $G$ after they were scanned by one such separator procedure.
We already discussed that claim \ref{clm:largeSCCifEStree} implies that we only recurse on disjoint subgraphs with at most $\frac{2}{3}|V|$ vertices. We obtain that each vertex in a final SCC $X$ in $P$ participated in at most $O(\log( n - |X|))$ levels of recursion, and so did its incident edges. We can therefore bound the total running time by $O\left(d \sum_{X \in P} (1 + \log( n - |X|)) E(X) \right)$.
\end{proof}
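The recursion-depth bound used in the proof can be sanity-checked numerically: a graph that shrinks to at most a $2/3$ fraction per recursion level bottoms out after $\log_{3/2} n = O(\log n)$ levels.

```python
import math

# Each recursive call receives at most a 2/3 fraction of the vertices, so
# the recursion depth is at most log_{3/2}(n) = O(log n).
n, depth = 10**6, 0
while n > 1:
    n = int(n * 2 / 3)   # worst case: recurse on a full 2/3 fraction
    depth += 1
assert depth <= math.log(10**6, 1.5) + 1
```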
\section{Handling deletions}
\label{subsec:delete}
Let us now consider how to process the deletion of an edge $(u,v)$, which we describe in pseudo-code in algorithm \ref{alg:delete}. We fix our data structure in a bottom-up fashion, where we first remove the edge $(u,v)$ from any induced subgraph $\hat{G}_i[X]$ that contains it, via the GES $\mathcal{E}_{\textsc{Center}(X)}$.
\begin{algorithm}
\caption{$\textsc{Delete}(u,v)$}
\label{alg:delete}
\KwIn{An edge $(u,v) \in E$.}
\KwResult{Updates the data structure such that queries for the graph $G \setminus \{ (u,v)\}$ can be answered in constant time.}
\BlankLine
\For{ $i = 0 $ \KwTo $ \lfloor \log{n} \rfloor$}{
\If{there exists an $X \in \hat{V}_{i+1}$ with $u,v \in X$}{
$\mathcal{E}_{\textsc{Center}(X)}.\textsc{Delete}(u,v)$\;
}
\While{there exists an $X \in \hat{V}_{i+1}$ with $\mathcal{E}_{\textsc{Center}(X)}.\textsc{GetUnreachable}() \neq \bot$}{
$X' \gets \mathcal{E}_{\textsc{Center}(X)}.\textsc{GetUnreachable}()$\;
\tcc{Find a separator from $X'$ depending on whether $X'$ is far to reach from $r$ or the other way around.}
\If{$\mathcal{E}_{\textsc{Center}(X)}.\textsc{Distance}(\textsc{Center}(X),X') > \delta$}{
$(S_{Sep}, V_{Sep}) \gets \textsc{InSeparator}(X', \hat{G}_i[X], X \cap S_i, \delta/2)$ \label{lne:DelSepIn}
}
\Else(\tcp*[h]{$\mathcal{E}_{\textsc{Center}(X)}.\textsc{Distance}(X', \textsc{Center}(X)) > \delta$}){
$(S_{Sep} , V_{Sep}) \gets \textsc{OutSeparator}(X' , \hat{G}_i[X] , X \cap S_i , \delta/2)$\label{lne:DelSepOut}
}
\tcc{If the separator is chosen such that $V_{Sep}$ is small, we have a good separator, therefore we remove $V_{Sep}$ from $\mathcal{E}_r$ and maintain the SCCs in $\hat{G}_{i}[V_{Sep}]$ separately. Otherwise, we delete the entire GES $\mathcal{E}_{\textsc{Center}(X)}$ and partition the graph with a good separator.}
\If{$|\textsc{Flatten}(V_{Sep})| \leq \frac{2}{3}|\textsc{Flatten}(X)|$}{
$\mathcal{E}_{\textsc{Center}(X)}.\textsc{Delete}(V_{Sep}\cup S_{Sep})$\;
$(S'_{Sep}, P') \gets \textsc{Split}(\hat{G}_i[V_{Sep}], V_{Sep} \cap S_i, \delta/2)$\label{lne:DelSplit1}\;
$S''_{Sep} \gets S_{Sep} \cup S'_{Sep}$\;
$P'' \gets P' \cup \{ \{s\} | s \in S_{Sep}\}$\;
}
\Else{
$\mathcal{E}_{\textsc{Center}(X)}.\textsc{Delete}()$\label{lne:ESdelete}\;
$(S''_{Sep}, P'') \gets \textsc{Split}(\hat{G}_i[X], X \cap S_i, \delta/2)$\label{lne:DelSplit2}\;
}
\tcc{After finding the new partitions, we init them, execute the vertex splits on the next level and add the separator vertices.}
$\textsc{InitNewPartition}(P'', i, \delta)$\;
\ForEach{$Y \in P''$}{
$\mathcal{E}_{\textsc{Center}(X)}.\textsc{SplitNode}(Y)$\;
}
$\mathcal{E}_{\textsc{Center}(X)}.\textsc{Augment}(S''_{Sep})$\label{lne:augmentInDelete}\;
$S_{i+1} \gets S_{i+1} \cup S''_{Sep}$\;
}
}
\end{algorithm}
Then, we check if any GES $\mathcal{E}_{\textsc{Center}(X)}$ on a subgraph $\hat{G}_i[X]$ contains a node that became unreachable due to the edge deletion or the fixing procedure on a level below. Whilst there is such a GES $\mathcal{E}_{\textsc{Center}(X)}$, we first extract such an unreachable node $X'$ and find a separator $S_{Sep}$ from $X'$ in line \ref{lne:DelSepIn} or \ref{lne:DelSepOut}. We now consider two cases based on the size of the set $\textsc{Flatten}(V_{Sep})$. Whilst focusing on the size of $\textsc{Flatten}(V_{Sep})$ instead of the size of $V_{Sep}$ seems like a minor detail, it is essential to consider the underlying vertex set instead of the node set, since the node set can be further split by node-split updates from lower levels.
Now, let us consider the first case, when the set $V_{Sep}$ separated by $S_{Sep}$ is small (with regard to $\textsc{Flatten}(V_{Sep})$); in this case, we simply prune $V_{Sep}$ from our tree by adding $S_{Sep}$ to $S_{i+1}$, and then invoke $\textsc{Split}(\hat{G}_i[V_{Sep}], V_{Sep} \cap S_i, \delta/2)$ to get a collection of subgraphs $P'$ where each subgraph $Y \in P'$ has every pair of nodes $A, B \in Y$ at $S_{i}$-distance at most $\delta/2$. (We can afford to invoke $\textsc{Split}$ on the vertex set $V_{Sep}$ because we can afford to recurse on the smaller side of a cut.)
The second case is when $V_{Sep}$ is large compared to the number of vertices in the node set of the GES-tree. In this case we do not add $S_{Sep}$ to $S_{i+1}$. Instead, we declare the GES-tree $\mathcal{E}_{\textsc{Center}(X)}$ invalid and delete the entire tree. We then partition the set $X$ that we are working with by invoking the $\textsc{Split}$ procedure on all of $X$. (Intuitively, this step is expensive, but we will show that whenever it occurs, there is a constant probability that the graph has decomposed into smaller SCCs, and we have thus made progress.)
Finally, we use the new partition and construct on each induced subgraph a new GES-tree at a randomly chosen center. This is done by the procedure $\textsc{InitNewPartition}(P'', i, \delta)$ that was presented in subsection \ref{subsec:Preprocessing}. We then apply the updates to the graph $\hat{G}_{i+1}$ using the GES-tree operations defined in lemma \ref{lma:AugmentedGES}. Note that we include the separator vertices as singleton sets in the partition, and therefore invoke $\mathcal{E}_X.\textsc{SplitNode}(\cdot)$ on each singleton before invoking $\mathcal{E}_X.\textsc{Augment}(S''_{Sep})$, which ensures that the assumption from lemma \ref{lma:AugmentedGES} is satisfied. As in the last section, let us prove the following two lemmas whose proofs will further justify some of the details of the algorithm.
We start by showing that because we root the GES-tree for SCC $X$ at a \emph{random} root $r$, if the GES-tree ends up being deleted in line \ref{lne:ESdelete} of algorithm \ref{alg:delete}, then with constant probability $X$ has decomposed into smaller SCCs, and so progress has been made.
\begin{lemma}[c.f. also \cite{chechik2016decremental}, Lemma 13]
\label{lma:EStreeprob}
Consider a GES $\mathcal{E}_r = \mathcal{E}_{\textsc{Center}(X)}$ that was initialized on the induced graph of some node set $X_{Init}$, with $X \subseteq X_{Init}$, and that is deleted in line \ref{lne:ESdelete} of algorithm \ref{alg:delete}. Then with probability at least $\frac{2}{3}$, the partition $P''$ computed in line \ref{lne:DelSplit2} satisfies that each $X' \in P''$ has $|\textsc{Flatten}(X')| \leq \frac{2}{3}|\textsc{Flatten}(X_{Init})|$.
\end{lemma}
\begin{proof}
Let $i$ be the level of our hierarchy on which $\mathcal{E}_{r}$ was initialized, i.e. $\mathcal{E}_{r}$ was initialized on graph $\hat{G}_i[X_{Init}]$, and went up to depth $\delta$ with respect to $S_i$-distances (see Algorithm \ref{alg:newPart}).
Let $u_1, u_2, ..$ be the sequence of updates since the GES-tree $\mathcal{E}_r$ was initialized that were either adversarial edge deletions, nodes added to $S_i$ or node splits in the graph $\hat{G}_i[X_{Init}]$. Observe that this sequence is independent of how we choose our random root $r$, since they occur at a lower level, and so do not take any GES-trees at level $i$ into account. Recall, also, that the adversary cannot learn anything about $r$ from our answers to queries because the SCCs of the graph are objective, and so do not reveal any information about our algorithm. We refer to the remaining updates on $\hat{G}_i[X_{Init}]$ as \textit{separator} updates, which are the updates adding nodes to $S_{i+1}$ and removing edges incident to $S_{i+1}$ or between nodes that due to such edge deletions are no longer strongly-connected. We point out that the separator updates are heavily dependent on how we chose our random source. The update sequence that the GES-tree undergoes up to its deletion in line \ref{lne:ESdelete} is a mixture of the former updates that are independent of our chosen root $r$ and the separator updates.
Let $G^j$ be the graph $\hat{G}_i[X_{Init}]$ after the update sequence $u_1, u_2, ..., u_j$ is applied. Let $X_{max}^j$ be the component of $S_i$-diameter at most $\delta/2$ that maximizes the cardinality of $\textsc{Flatten}(X_{max}^j)$ in $G^j$. We choose $X_{max}^j$ in this way because we want to establish an upper bound on the largest SCC of $S_i$-diameter at most $\delta/2$ in $G^j$. We then show that if a randomly chosen source deletes a GES-tree (see line \ref{lne:ESdelete}) after $j$ updates, then there is a good probability that $X_{max}^j$ is small. Then, by the guarantees of lemma \ref{lma:split}, the $\textsc{Split}(\cdot)$ procedure in line \ref{lne:DelSplit2} partitions the vertices into SCCs $X'$ of $S_i$-diameter at most $\delta/2$, which all have small $|\textsc{Flatten}(X')|$ because $X_{max}^j$ is small.
More precisely, let $G^j_r$ be the graph obtained by applying \emph{all} updates up to update $u_j$ to $\hat{G}_i[X_{Init}]$; here we include the updates $u_1, ..., u_j$, as well as all separator updates up to the time when $u_j$ takes place. (Observe that $G^j$ is independent of the choice of $r$, but $G^j_r$ is not.) Let $X_{max, r}^j$ be the component of $S_i$-diameter at most $\delta/2$ that maximizes the cardinality of $\textsc{Flatten}(X^j_{max, r})$ in this graph $G^j_r$. It is straightforward to see that since $S_i$-distances can only increase due to separator updates, we have $|\textsc{Flatten}(X_{max, r}^j)| \leq |\textsc{Flatten}(X_{max}^j)|$ for any $r$. Further, $|\textsc{Flatten}(X_{max, r}^j)|$ upper bounds the size of any component $X' \in P''$, i.e. $|\textsc{Flatten}(X')| \leq |\textsc{Flatten}(X_{max, r}^j)|$, if the tree $\mathcal{E}_r$ is deleted in line \ref{lne:ESdelete} while handling update $u_j$; the same bound holds if $\mathcal{E}_r$ is deleted after update $u_j$, because the cardinality of $\textsc{Flatten}(X_{max, r}^j)$ monotonically decreases in $j$, i.e. $|\textsc{Flatten}(X_{max, r}^{j})| \leq |\textsc{Flatten}(X_{max, r}^{j-1})|$, since updates can only increase $S_i$-distances.
Now, let $k$ be the index, such that
\[
|\textsc{Flatten}(X_{max}^k)| \leq \frac{2}{3}|\textsc{Flatten}(X_{Init})| < |\textsc{Flatten}(X_{max}^{k-1})|.
\]
i.e. $k$ is chosen such that after the updates $u_1, u_2,..., u_{k}$ were applied to $\hat{G}_i[X_{Init}]$, there exists no SCC $X$ in $G^k$ of $S_i$-diameter at most $\delta/2$ with $|\textsc{Flatten}(X)| > \frac{2}{3}|\textsc{Flatten}(X_{Init})|$.
In the remainder of the proof, we establish the following claim: if we chose some vertex $r \in \textsc{Flatten}(X^{k-1}_{max})$, then the GES-tree would not have been deleted before update $u_k$ took place. Before we prove this claim, let us point out that it implies the lemma: observe that by the independence of how we choose $r$ from the update sequence $u_1, u_2, \dots$, we have that
\[
Pr[r \in X_{max}^{k-1} \mid u_1, u_2, \dots] = Pr[r \in X_{max}^{k-1}] = \frac{|\textsc{Flatten}(X_{max}^{k-1})|}{|\textsc{Flatten}(X_{Init})|} > \frac{2}{3}
\]
where the last equality follows from the fact that we choose the root uniformly at random among the vertices in $\textsc{Flatten}(X_{Init})$. Thus, with probability at least $\frac{2}{3}$, we chose a root whose GES-tree is deleted during or after the update $u_k$, and therefore the invoked procedure $\textsc{Split}(\cdot)$ ensures that every SCC $X' \in P''$ satisfies $|\textsc{Flatten}(X')| \leq |\textsc{Flatten}(X_{max}^k)| \leq \frac{2}{3}|\textsc{Flatten}(X_{Init})|$, as required.
Now, let us prove the final claim. We want to show that if $r \in X^{k-1}_{max}$, then the GES-tree would not have been deleted before update $u_k$. To do so, we need to show that even if we include the separator updates, the SCC containing $r$ continues to have size at least $\frac{2}{3}|\textsc{Flatten}(X_{Init})|$ before update $u_k$. In particular, we argue that before update $u_k$, none of the separator updates decrease the size of $X^{k-1}_{max}$. The reason is that the InSeparator computed in line \ref{lne:DelSepIn} of algorithm \ref{alg:delete} is always run from a node $X'$ whose $S_i$-distance from $r$ is more than $\delta$. (The argument for an OutSeparator in line \ref{lne:DelSepOut} is analogous.)
Now, the InSeparator from $X'$ is computed up to $S_i$-distance $\delta/2$, so by property \ref{step:separator-distance} of lemma \ref{lma:sep}, all nodes pruned away from the component have $S_i$-distance at most $\delta/2$ to $X'$; this implies that these nodes have $S_i$-distance more than $\delta/2$ from $r$, and so cannot be in $X^{k-1}_{max}$, because $X^{k-1}_{max}$ was defined to have $S_i$-diameter at most $\delta/2$. Thus none of the separator updates affect $X^{k-1}_{max}$ before update $u_k$, which concludes the proof of the lemma.
\end{proof}
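The key probabilistic step, that a uniformly random root lands in a fixed component holding more than a $2/3$ fraction of the vertices with probability greater than $2/3$, can be sanity-checked empirically. This is an illustrative simulation only, not part of the algorithm.

```python
import random

# If a fixed component holds more than 2/3 of the vertices, a uniformly
# random root lands inside it with probability greater than 2/3.
random.seed(0)
universe = list(range(300))
big = set(range(201))               # 201/300 = 0.67 of the vertices
trials = 10_000
hits = sum(random.choice(universe) in big for _ in range(trials))
assert hits / trials > 0.6          # empirical frequency near 0.67
```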
Next, let us analyze the size of the sets $S_i$. We analyze $S_i$ using the inequality below in order to ease the proof of the lemma. We point out that the term $\lg(n -|X \cap S_i|)$ approaches $\lg n$ as the SCC $X$ splits further into smaller pieces. Our bound can therefore be stated more simply; see corollary \ref{cor:SisSmall}.
\begin{lemma}
\label{lma:setS}
During the entire course of deletions our algorithm maintains
\begin{align}
|S_0| &= n & \\
|S_{i+1}| &\leq \frac{32 \log n}{\delta} \sum_{X \in \hat{V}_{i}} \lg (n - |X \cap S_{i}|) |X \cap S_{i}| & \text{for }i \geq 0
\end{align}
\end{lemma}
\begin{proof}
We prove the claim by induction on $i$. It is easy to see that $S_0$ has cardinality $n$, since we initialize it to the node set in algorithm \ref{alg:preprocessing}, and each set $S_i$ only grows over time.
Let us therefore focus on $i > 0$. Let us first assume that the separator nodes were added by the procedure $\textsc{OutSeparator}(\cdot)$ (analogously $\textsc{InSeparator}(\cdot)$). Since the procedure is invoked on an induced subgraph $\hat{G}_i[X]$ that was formerly strongly-connected, we have that either $V_{Sep}$ or $X \setminus (V_{Sep} \cup S_{Sep})$ (or both) contains at most half the $S_i$-nodes originally in $X$. Let $Y$ be such a side. Since adding $S_{Sep}$ to $S_{i+1}$ separates the two sides, the RHS of the inequality is increased by at least $\frac{32 \log n}{\delta} |Y \cap S_i|$, since $\lg( n - |Y \cap S_i|) |Y \cap S_i| - \lg( n - |X \cap S_i|) |Y \cap S_i| \geq |Y \cap S_i|$. Since we increase the LHS by at most $\frac{4 \log n}{\delta} |Y \cap S_i|$ by the guarantees in lemma \ref{lma:sep}, the inequality still holds.
Otherwise, the separator nodes were added by procedure $\textsc{Split}(\cdot)$. In this case we can straightforwardly apply Lemma \ref{lma:splitFull}, which immediately implies that the inequality still holds.
Finally, the hierarchy might augment the set $S_i$ in line \ref{lne:augmentInDelete}, but we observe that $f(s) = s \lg(n - s)$ is increasing in $s$ for $s \leq \frac{1}{2} n$, as can be verified by taking the derivative. Thus adding nodes to the set $S_i$ can only increase the RHS, whilst the LHS remains unchanged.
\end{proof}
\begin{corollary}
\label{cor:SisSmall}
During the entire course of deletions, we have $|S_{i+1}| \leq \frac{16 \lg^2 n}{\delta} |S_{i}|$, for any $i \geq 0$.
\end{corollary}
\section{Putting it all together}
\label{sec:alltogether}
By Corollary \ref{cor:SisSmall}, using $\delta = 64 \lg^2 n$, we enforce that each $|S_i| \leq n/2^i$, so $\hat{G}_{\lfloor \lg{n} \rfloor + 1}$ is indeed the condensation of $G$. Thus, we can answer queries asking whether $u$ and $v$ are in the same SCC of $G$ simply by checking whether they are represented by the same node in $\hat{G}_{\lfloor \lg{n} \rfloor + 1}$, which can be done in constant time.
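The constant-time query can be realised with a simple representative table. The following sketch is ours, not the paper's actual data structure; it assumes the (omitted) update logic reports every split of a top-level node:

```python
# Illustrative sketch (not the paper's implementation): constant-time
# same-SCC queries against the top level of the hierarchy.

class SCCQueryIndex:
    def __init__(self, n):
        # Before any deletions, assume the whole graph is one SCC,
        # represented by the single top-level node 0.
        self.rep = [0] * n

    def split_off(self, vertices, new_rep):
        # Called when `vertices` split off from their SCC into a new
        # top-level node; relabelling only the smaller side keeps the
        # total relabelling cost low.
        for v in vertices:
            self.rep[v] = new_rep

    def same_scc(self, u, v):
        # O(1): u and v share an SCC iff they share a top-level node.
        return self.rep[u] == self.rep[v]
```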
We now upper bound the running time of our algorithm by $O(m \log^5 n)$ and then refine the analysis slightly to obtain the claimed running time of $O(m \log^4 n)$.
By Lemma \ref{lma:EStreeprob}, we have with probability $\frac{2}{3}$ that every time a node leaves a GES, its induced subgraph contains at most a fraction $\frac{2}{3}$ of the underlying vertices of the initial graph. Thus, in expectation, each vertex in $V$ participates in $O(\log n)$ GES-trees on each level. Each time, it contributes to the GES-tree's running time its degree times the depth of the GES-tree, which we fixed to be $\delta$. Thus, by Lemma \ref{lma:AugmentedGES}, we have expected time $O(\sum_{v \in V} \mathbf{deg}(v) \delta \log n) = O(m \log^3 n)$ to maintain all the GES-trees on a single level. There are $O(\log n)$ levels in the hierarchy, so the total expected running time is bounded by $O(m \log^4 n)$.
By Lemma \ref{lma:splitFull}, the running time of an invocation of $\textsc{Split}(G[X], S, \delta/2)$ can be bounded by $O(E(X) \delta \log n) = O(E(X) \log^3 n)$. Again by Lemma \ref{lma:EStreeprob}, with constant probability each vertex is contained $O(\log n)$ times in an SCC on which the $\textsc{Split}(\cdot)$ procedure is invoked in algorithm \ref{alg:delete}. We therefore conclude that the total expected running time per level is $O(m \log^4 n)$, and the overall total is $O(m \log^5 n)$.
Finally, we can bound the total running time incurred by all invocations of $\textsc{InSeparator}(\cdot)$ and $\textsc{OutSeparator}(\cdot)$ outside of $\textsc{Split}(\cdot)$ by the same argument and obtain total running time $O(m \log^2 n)$ since each invocation takes time $O(E(G))$ on a graph $G$.
This completes the running time analysis, establishing the total expected running time $O(m \log^5 n)$. We point out that the bottleneck of our algorithm is the invocations of $\textsc{Split}(\cdot)$. We can reduce the total expected cost of these invocations to $O(m \log^4 n)$ by using the more involved upper bound on the running time of $\textsc{Split}(\cdot)$ of
\[
O\left(\delta E(X) + \delta \sum_{X \in P} \log( n - |\textsc{Flatten}(X)|) E(X) \right)
\]
where we use $|\textsc{Flatten}(X)|$ instead of $|X|$ to capture node splits. Then, we can bound the costs incurred by the first part of the bound by $O(m \log^4 n)$; for the second part we also get a bound $O(m \log^4 n)$ by using a telescoping sum argument on the size of the graph.
This concludes our proof of Theorem \ref{thm:SCCmain}.
\section{Conclusion}
In this article, we presented the first algorithm that maintains strongly-connected components (SCCs) or single-source reachability (SSR) in decremental graphs in almost-linear expected total update time $\tilde{O}(m)$. Previously, the fastest algorithm for maintaining SCCs or SSR achieved expected total update time $\tilde{O}(m \sqrt{n})$. Three main open questions arise in the context of our new algorithm:
\begin{itemize}
\item Can the complexity of the Single-Source Shortest-Path problem in decremental directed graphs be improved beyond a total expected update time of $O(mn^{0.9 + o(1)})$ \cite{henzinger2014sublinear, henzinger2015improved}, and can it even match the time complexity achieved by our algorithm?
\item Does there exist a \textit{deterministic} algorithm to maintain SCCs/SSR in a decremental graph beyond the $O(mn)$ total update time complexity?
\item And finally, is there an algorithm that solves All-Pairs Reachability in fully-dynamic graphs with update time $\tilde{O}(m)$ and constant query time? Such an algorithm is already known for dense graphs \cite{demetrescu2004new} but is still unknown for graphs of sparsity $O(n^{2-\epsilon})$.
\end{itemize}
\paragraph{Acknowledgements}
The second author of the paper would like to thank Jacob Evald for some helpful comments on organization and correctness.
\pagebreak
\printbibliography[heading=bibintoc]
\pagebreak
\section{Introduction}
Eclipsing binaries are widely recognised for their importance in different astrophysical areas. In variability surveys, they are usually the most abundant type of variable star, particularly at low Galactic latitude. Although the specific proportions depend on each survey's depth, cadence and area probed (particularly Galactic latitude), in the All Sky Automated Survey \citep[ASAS,][]{Pojmanski} and the Catalina Real-Time Transient Survey \citep[CRTS,][]{Drake} they amount to 25\% and 58\% of all variable stars, respectively.
Eclipsing contact binaries, or W~UMa-type variables (EWs), in particular, follow a Period-Luminosity-Colour relation \citep{Rucinskipcolor,Rucinski1997}, which implies they can serve as standard candles for distance measurements \citep{Rucinski1996}. The latest estimates give an error of $\pm0.25$ mag in distance modulus for these stars, which corresponds to a distance uncertainty of $\sim12\%$ \citep{Rucinski2004}.
Although significantly fainter ($M_V>2$) than other, more traditional, standard candles such as RR Lyraes ($M_V\sim+0.6$) and Cepheids ($M_V<-2$), EWs are numerous and ubiquitous as they trace populations of any age and metallicity.
The advent of deep all-sky multi-epoch surveys such as Gaia,
the Panoramic Survey Telescope and Rapid Response System (PanSTARRS),
the Zwicky Transient Facility (ZTF) or the Large Synoptic Survey Telescope (LSST) \citep{Gaia2017b,Kaiser2010,Ivezic2010,Smith2014}, together with the first 3D extinction maps \citep{Sale2014,Green2015}, will open up the opportunity to use EWs as tracers of Galactic structure for the first time. For example, Gaia will be capable of observing an $M_V=3$ EW star up to $\sim30$ kpc without extinction, or up to $\sim15$ kpc with $A_V=1$, effectively probing a considerable volume of the Galactic Disc and inner Halo.
Numerous works have shown many potential uses for standard candles as probes of Galactic structure, e.g. to discover and trace stellar overdensities, clouds and tidal streams \citep[e.g.][]{Vivas2006,Sesar2010p,Baker2015}, trace radial and vertical metallicity gradients \citep{Luck2006} and to measure the density profiles of different Galactic components \citep[e.g.][]{Brown2008,Vivas2006,Sesar,Cohen2017}. However, for any tracer catalogue to be useful for studies of the Galactic density profiles, a thorough understanding of its completeness is required, which needs to be estimated through simulations. In RR Lyrae surveys, light curve simulations have been done extensively to model the survey completeness.
\citet{Vivas2004,Miceli2008}; \citet[][hereafter \citetalias{Mateu}]{Mateu}; \citet{Sesar2017b}, for example, produce synthetic light curve catalogues for a population of mock RR Lyrae stars, which are then run through the period-finding and RR Lyrae identification algorithms used to characterise the survey's completeness by looking at the fraction of recovered stars as a function of different parameters, such as magnitude, amplitude and number of observations. This completeness in the identification of RR Lyrae stars can range from $\sim60\%$ in the SEKBO and Catalina surveys \citep{Keller2008,Drake2013,Torrealba2015} to as high as $>90\%$ in LONEOS, SDSS Stripe 82, PS1 or QUEST \citep[][\citetalias{Mateu}]{Sesar2010p,Sesar2017b,Vivas2004}. Estimates such as these, however, have not been provided to date for eclipsing binary surveys.
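The recovered-fraction computation these works rely on is simple to express. The sketch below is a generic illustration of ours, not any survey's actual pipeline; the injected mock population and its recovery probabilities are made up for the example:

```python
import numpy as np

def completeness_by_bin(mag_injected, recovered, edges):
    # Completeness per magnitude bin: recovered mock stars / injected.
    total, _ = np.histogram(mag_injected, bins=edges)
    found, _ = np.histogram(mag_injected[recovered], bins=edges)
    return np.where(total > 0, found / np.maximum(total, 1), np.nan)

# Toy mock catalogue: recovery probability declines towards faint magnitudes.
rng = np.random.default_rng(0)
mags = rng.uniform(14.0, 19.0, 5000)
recovered = rng.random(5000) < np.clip(1.0 - 0.15 * (mags - 14.0), 0.0, 1.0)
comp = completeness_by_bin(mags, recovered, np.arange(14.0, 19.5, 1.0))
```

The same recipe applies to any other parameter (amplitude, number of observations) by swapping the binned quantity.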
The fact that eclipsing binaries are the most common type of variable star also means they are a very common contaminant in surveys for other types of variables. EWs are frequent contaminants of RRc and Delta Scuti surveys \citep[e.g.][\citetalias{Mateu}]{Vivas2004,Kinman2010}. Their light curve period ranges are similar (from a few hours to $\lesssim1.5$~d), and when time sampling is irregular and observations sparse, the light curve shapes can be difficult to tell apart and period aliasing can be a source of confusion.
Although currently many codes exist to simulate eclipsing binary light curves, such as NightFall \citep{Wichmann2011}, Wilson-Devinney \citep[WD,][]{Wilson} and EBOP \citep[Eclipsing Binary Orbit Program,][]{Etzel}, these are oriented towards simulating light curves for individual binary systems in great detail. However, as a population, eclipsing binary light curves are difficult to model because stars in almost any two evolutionary stages can be part of a binary system and the proportions of the different types of stars paired have to be modelled consistently with the initial mass function and star formation history of the population, and the effects of mass transfer on the stellar evolution of each component \citep[see e.g.][]{Hernandez}. \citet{Prsa2011} approached this problem to estimate the eclipsing binary yield of LSST by simulating a library of eclipsing binary light curves with physical and orbital parameters drawn at random from a set of given distributions, which were then fed into PHOEBE \citep{Prsa2005}, a code based on WD.
Our goal in this work was, therefore, to develop a tool to simulate light curves for populations of eclipsing binaries. {\textsc ELLISA}~(Eclipsing binary Light curve LIbrary SimulAtor), is based on stellar population synthesis models \citep{Hernandez} to produce binary systems with physical characteristics and in numbers consistent with the parent stellar population, and reproducing the observational characteristics of a given survey: time sampling, typical photometric errors in each filter, bright and faint magnitude limits, and so on (Section~\ref{Sec:ellisa}).
The light curve catalogues simulated with {\textsc ELLISA}~ will allow characterizing the completeness and possible biases of any eclipsing binary search, serve as benchmarks for the optimization of eclipsing binary observing strategies and search algorithms, and provide estimates of the expected levels of contamination of searches for other types of variable stars. Finally, we use the QUEST low latitude catalogue of variable stars \citepalias{Mateu} as a case study, and implement {\textsc ELLISA}~to guide the search and characterise the completeness of the resulting eclipsing binary catalogue. The {\textsc ELLISA}~code is publicly available at a GitHub repository\footnote{\url{https://github.com/umbramortem/ELLISA}}.
This paper is organized as follows: Section \ref{Sec:ellisa} describes the {\textsc ELLISA}~code, which is used to generate a mock catalogue to guide the search for eclipsing binaries in the QUEST catalogue of variable stars described in Section \ref{binary_search}. Using the ELLISA mock catalogue, the completeness of the catalogue obtained is characterised in Section \ref{s:compsec} as a function of amplitude, magnitude and spatial distribution, for each eclipsing binary type. Section \ref{sum_and_con} contains summary and conclusions.
\section{{\textsc ELLISA}: Eclipsing binary Light curve LIbrary SimulAtor}\label{Sec:ellisa}
The goal of the {\textsc ELLISA}~simulator is to generate a library of synthetic multi-filter light curves for a population of eclipsing binaries, reproducing the time sampling and photometric errors representative of the survey the user wants to simulate. This makes it possible to generate a synthetic library that mimics the way we would observe a population of eclipsing binaries with an arbitrary instrument and time sampling. {\textsc ELLISA}~is made publicly available as a {\texttt Python} stand-alone code and library at a GitHub repository\footnotemark[\value{footnote}].
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{flujogramasimulador.pdf}
\caption{{\textsc ELLISA}~flowchart. Rectangles, ellipses/circles and diamonds indicate user-supplied input, subroutines and outputs respectively.}
\label{flujogramasimulador}
\end{center}
\end{figure*}
The code's structure is shown in Figure \ref{flujogramasimulador}, where user inputs are shown with rectangles, subroutines used by {\textsc ELLISA}~are shown in ellipses/circles and the outputs produced are shown with diamonds. The overall process is as follows. The user supplies {\textsc ELLISA}~the stellar population type, filter set and number of binary systems to simulate. These data are used by a routine that works as a wrapper for the binary stellar population synthesis code HB13 by \cite{Hernandez}, which simulates a synthetic population of binary systems at zero-age of a simple stellar population (SSP), calculates its stellar evolution according to the selected initial mass function,
and returns the physical parameters, magnitudes and colours of the individual stars in the binary population.
From these parameters the type of eclipsing binary (EW,EB and EA) is identified and, for each filter, light curve parameters --eclipse amplitudes, colours, maximum light magnitudes-- are computed using the widely known code PHOEBE \citep{Phoebe}. Then the observation process is simulated by sampling the light curve in the specific epochs given in the input time sampling, and the photometric errors and observation limits are simulated based on the (user-supplied) survey characteristics. Finally, the simulator returns the time series containing the magnitudes, photometric errors and observation dates for each binary system, in each filter selected by the user.
In what follows we describe in detail each of these steps.
\subsection{Simulation of Physical Parameters of Binaries using HB13}\label{Sec:parfisicosbassic}
In this part, {\textsc ELLISA}~uses the HB13~code from \citet{Hernandez} to generate the physical and orbital parameters --masses, separations, effective temperatures, etc.-- of the binary population, consistently with the stellar evolution of the population being simulated.
HB13~is a code developed by \citet{Hernandez} that generates and follows the stellar evolution of an SSP, consisting of isolated stars and binary systems. This is implemented as a \texttt{Fortran} code, bundled and run inside {\textsc ELLISA}~through a \texttt{Python} wrapper subroutine. Here we summarise the procedures followed by HB13, and we refer the reader to \citet{Hernandez} for further details.
\paragraph*{Stellar population selection}
The wrapper subroutine allows simulating predefined simple and composite stellar populations. In its current version, {\textsc ELLISA}~allows the user to choose one of the following options, tailored to resemble the main Galactic components: Halo type, Bulge type, Thick disc type, Thin disc type.
The ages and metallicities used for the preset populations were chosen following \citet[their Table 1]{Robin}. The Halo, Bulge and Thick Disc populations are simulated using SSPs, and the extended star formation history of the Thin Disc is simulated using 7 SSPs with ages between 100 Myr and 10 Gyr and metallicities between $-0.12$ dex and $0.01$ dex, given by \citet{Robin}. These SSPs are combined in equal proportions (by number) for each age bin, so as to approximately reproduce the star formation history of the Thin Disc with the local density reported by \citet{Robin} for each age bin.
\paragraph*{Initial Physical and Orbital Parameters}
The physical and orbital parameters are simulated for the binary population at zero-age, as follows:
\begin{itemize}
\item Stellar masses ($M_1$) are randomly drawn from a \citet{Chabrier} Initial Mass Function ranging from $0.1M_\odot$ to $100M_\odot$, parametrized as a log-normal distribution with characteristic mass $m_c=0.08M_\odot$ and variance $\sigma^2=\langle(\log m - \langle\log m\rangle)^2\rangle=0.47$.
\item For each potential primary star, a binary probability is drawn at random as a function of the spectral type, based on the compilation by \citet{Lada2006} summarized in Table \ref{t:lada}. This reproduces the larger binary probability observed for more massive stars. It is important to stress that these correspond to binary fractions at zero age, which change as the stellar population evolves and the stellar evolution under the effects of mass transfer is taken into account. For example, the fraction of F-G primary stars in binaries changes from 13.7\% at zero-age to 8.6\% at 10~Gyr.
\item For the stars randomly assigned to binary systems in the previous step, secondary masses are computed from a mass ratio $q$ drawn at random from a uniform distribution ranging from 0 to 1. \citet{Reggiani2011} and \citet{Reggiani2013} find that the distribution of mass ratios is consistent with being uniform ($dN/dq\propto q^{0.25\pm0.29}$) in the Galactic field, as well as in many star-forming regions. They also find the mass ratio distribution to be independent of the binary separation.
\item Orbital periods are drawn at random from the \citet{Duquennoy} log-normal distribution with mean log-period $\overline{\log P(d)} = 4.4$ and standard deviation $\sigma_{\log P} =2.3$. Orbital semi-major axes are computed from Kepler's Third Law $GMa^{-3}= 4\pi^2P^{-2}$.
\item Initial orbital eccentricities are randomly drawn from a uniform distribution ranging from 0 to 1.
\end{itemize}
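The sampling steps above can be sketched as follows. This is a simplified illustration in Python, not the HB13 \texttt{Fortran} code; in particular, truncating the IMF by clipping and omitting the spectral-type-dependent binary fraction are simplifications of ours:

```python
import numpy as np

rng = np.random.default_rng(42)
GM_SUN = 2.959e-4  # Kepler's third law constant, AU^3 / day^2 per M_sun

def draw_zero_age_binaries(n):
    # Primary masses from a Chabrier-like log-normal IMF with
    # m_c = 0.08 M_sun and variance sigma^2 = 0.47, clipped (for
    # simplicity, rather than redrawn) to [0.1, 100] M_sun.
    m1 = np.clip(10**rng.normal(np.log10(0.08), np.sqrt(0.47), n), 0.1, 100.0)
    # Mass ratio q uniform in [0, 1); secondary mass m2 = q * m1.
    q = rng.uniform(0.0, 1.0, n)
    m2 = q * m1
    # Orbital periods (days) from the Duquennoy & Mayor log-normal.
    period = 10**rng.normal(4.4, 2.3, n)
    # Semi-major axis (AU) from Kepler's third law:
    # a^3 = G (M1 + M2) P^2 / (4 pi^2).
    a = (GM_SUN * (m1 + m2) * period**2 / (4 * np.pi**2))**(1.0 / 3.0)
    # Eccentricities uniform in [0, 1).
    ecc = rng.uniform(0.0, 1.0, n)
    return m1, m2, period, a, ecc
```

As a quick sanity check of the constant, a one-solar-mass system with $P = 365.25$ d yields $a \simeq 1$ AU.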
\paragraph*{Binary Evolution}
Having generated the orbital and physical parameters, the next step is to follow the stellar evolution of each binary system. HB13~uses the \citet{Hurley2002} evolutionary tracks to follow the evolution of each binary, taking into account the degree of mass transfer in each evolutionary stage, based on the stellar masses and radii and the orbital parameters of the system. The evolutionary stages followed go from the zero-age main sequence to the remnant stages (black hole, neutron star or white dwarf) for stars with initial masses ranging from 0.1 to 100 $M_\odot$ and metallicities from Z=0.0004 to Z=0.01. In addition, to cover the evolution of He white dwarf stars formed through collisions, which \citet{Hurley2002} consider destroyed systems, \citet{Hernandez} use the model proposed by \citet{Han2002}, which assumes Extreme Horizontal Branch (EHB) stars are formed through this channel. The isochrones used in the model are built from the evolutionary tracks described in \citet{Hernandez}.
The \citeauthor{Hurley2002} evolutionary tracks code allows computing the evolution of a binary system at any age, returning the temperature and luminosity for each star in the binary. Absolute magnitudes are computed using the BaSeL 3.1 \citep{Westera2002} spectral library. Currently the photometric systems available are Johnson-Cousins (UBVRIJHK), SDSS (\emph{ugriz}) and HST (F814W, F775W, F625W, F606W, F555W, F445W, F435W, F410W, F330W, F250W, F220W).
\begin{table}
\caption{Binary fraction as function of spectral type}
\begin{center}
\begin{tabular}{ccc}
\hline
Spectral type & Binary fraction & Reference\\
\hline
O & 0.72 & Mason et al. 1998\\
O-B & 0.65 & Preibisch et al. 1999\\
B-A & 0.62 $\pm$ 0.2 & Patience et al. 2002\\
G-K & 0.58$\pm$0.1 & Duquennoy \& Mayor 1991\\
M & 0.49$\pm$0.09 & Fischer \& Marcy 1992\\
Late-type M & 0.26$\pm$0.1 & Basri \& Reiners 2006\\
\hline
\label{t:lada}
\end{tabular}
\end{center}
\end{table}
\subsection{Calculation of light curve parameters using PHOEBE}\label{s:phoe_sec}
Having generated the physical and orbital parameters for the binaries, {\textsc ELLISA}~goes on to determine the light curve parameters (primary and secondary eclipse amplitudes, maximum light magnitude) corresponding to each system of the simulated populations. The first step is to determine the light curve template to be used.
\paragraph*{Light curves}
To generate the light curves we use PHOEBE (PHysics Of Eclipsing BinariEs) \citep{Phoebe}, a modelling package for eclipsing binaries based on the well-known Wilson-Devinney code \citep[WD,][]{Wilson}. We used the Legacy version of PHOEBE\footnote{\url{http://phoebe-project.org/1.0}}, with {\sc Python} implementation capabilities. We make use of the LC PHOEBE feature which employs the LC WD's program to generate the light curve based on the binary's physical and orbital parameters.
For this work we use the PHOEBE functionality for contact and detached systems. Therefore, two types of models are used to simulate the shapes of the eclipsing binary light curves: one for detached or Algol-type systems, which hereinafter will be referred to as EA systems, and another for semi-detached ($\beta$~Lyrae) or contact (W~UMa) systems, which in what follows will be referred to as EB+EW systems \citep[following the notation of the GCVS, the General Catalogue of Variable Stars;][]{Samus}. The first step is to figure out whether the system is completely detached or not. For this, we follow \citet{Eggletonbook} and for each binary we compute the critical period $P_{\rm crit}$, i.e. the shortest period a system with a given mass ratio can have before mass transfer sets in through Roche-lobe overflow. According to \citet{Eggletonbook} (their Eq. 3.10) the critical period is given by
\begin{equation}
P_{\rm crit} \sim 0.35 \sqrt{\frac{R^3}{M_1}} \left( \frac{2}{1+q} \right) ^{0.2} ,
\end{equation}
\noindent where $M_1$ and $R$ denote the mass and radius of the primary and $q$ the system's mass ratio.\\
Through this criterion, we separate the systems generated by HB13~into two groups. The first group is constituted by the EA systems, whose separation is large enough to prevent mass transfer: a binary is of EA type if it has a period $P>P_{\rm crit}$. The second, complementary group contains those systems with one or both stars transferring mass to their companion, which correspond in this case to EB or EW light curves.
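The classification step can be sketched as follows (our illustration; the solar-unit radius in the example call is an arbitrary stand-in for the value HB13 provides, and the detached condition follows the physical meaning of $P_{\rm crit}$):

```python
import numpy as np

def p_crit_days(m1, r1, q):
    # Eggleton's approximation (his Eq. 3.10): shortest period, in days,
    # that a system can have before the primary (mass m1, radius r1, in
    # solar units) overflows its Roche lobe; q is the mass ratio.
    return 0.35 * np.sqrt(r1**3 / m1) * (2.0 / (1.0 + q))**0.2

def light_curve_type(period_days, m1, r1, q):
    # Detached systems (P > P_crit) undergo no mass transfer: EA curves.
    # Otherwise the system is semi-detached or in contact: EB or EW.
    return "EA" if period_days > p_crit_days(m1, r1, q) else "EB+EW"
```

For a Sun-like primary with $q=0.8$, $P_{\rm crit}\approx0.36$ d, so a 5.84 d binary is classified EA while a 0.3 d binary is EB+EW.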
In order to shape the light curve on its entirety, given its type, the following orbital parameters are passed to PHOEBE to perform the orbit calculation:
\begin{itemize}
\item{Initial inclination angle $i$ of the orbital plane for each system, randomly drawn from a $\cos (i)$, $i \in [0,\pi]$ distribution \citep[see e.g.][]{Arenou}. The range of inclination angles in which a given binary will be detected as eclipsing is given by equation \ref{incleq} below; this holds for EA, EW and EB systems. For shorter semi-major axes the range of eclipsing inclination values is larger, as illustrated in Figure \ref{inclinacionfig}.}
\begin{equation}
a \cos (i) < R_1+R_2.
\label{incleq}
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{inclinacionorbitaesquema.pdf}
\caption{Schematic representation of an eclipsing binary, showing the relation between the orbital inclination and the radii of the components.}
\label{inclinacionfig}
\end{center}
\end{figure}
\item{Eccentricity, period and semi-major axis distributions, given by HB13.}
\item{The synchronicity parameter, defined as the ratio between the rotational and orbital angular velocities, follows equation \ref{sync} when differential rotation is neglected:
\begin{equation}
F=\sqrt{\frac{1+\epsilon}{(1-\epsilon)^3}}.
\label{sync}
\end{equation}}
\noindent where $\epsilon$ is the orbital eccentricity.
\item{Argument of the periastron, randomly drawn from a uniform distribution in the range $[0,2\pi]$.}
\end{itemize}
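The geometric draws listed above can be sketched as follows (our illustration; we interpret the quoted inclination distribution as isotropic orbit orientations, i.e. uniform in $\cos i$):

```python
import numpy as np

rng = np.random.default_rng(7)

def draw_inclinations(n):
    # Isotropic orbit orientations: cos(i) uniform, i in [0, pi].
    return np.arccos(rng.uniform(-1.0, 1.0, n))

def is_eclipsing(a, incl, r1, r2):
    # Eclipse condition of the text: |a * cos(i)| < R1 + R2
    # (semi-major axis and radii in the same units).
    return np.abs(a * np.cos(incl)) < (r1 + r2)

def synchronicity(ecc):
    # F = sqrt((1 + e) / (1 - e)^3), neglecting differential rotation.
    return np.sqrt((1.0 + ecc) / (1.0 - ecc)**3)
```

As the text notes, for a fixed pair of radii a shorter semi-major axis makes the eclipse condition pass for a wider range of inclinations.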
Once the orbit is added to the simulated system, the stellar parameters are also passed to PHOEBE in order to obtain the geometric light curve parameters.
Limb darkening effects are included in the computation of every system; the coefficients are given as a function of the selected passband in the case of the WD feature, or can be dynamically computed from predefined tables (see the PHOEBE scientific reference for further details).
The light curves' physical quantities are given in units of flux, computed from the given passband luminosities in the available passband systems (Stromgren, Johnson, Gaia, Tycho, Kepler, Hipparcos, LSST, Cousins).
\paragraph*{Apparent magnitudes}
At this point we transform the absolute magnitudes into apparent ones by adding a distance modulus. For this, the user selects a reference band in which the final distribution of mean apparent magnitudes will be uniform, limited at the bright and faint ends by the user-supplied saturation and limiting magnitudes, respectively. The apparent magnitude in the reference band is drawn at random from this uniform distribution and the corresponding distance modulus is computed from the simulated absolute magnitude. This distance modulus is used to transform the magnitudes in the remaining bands from absolute to apparent. Finally, observations outside the (user-supplied) saturation and limiting magnitudes of each band are discarded.
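This step can be sketched as follows (a minimal illustration; the band names, magnitudes and limits in the example are placeholders):

```python
import numpy as np

rng = np.random.default_rng(11)

def to_apparent(abs_mags, ref_band, m_sat, m_lim):
    # abs_mags: dict of band -> absolute magnitude for one binary.
    # Draw the reference-band apparent magnitude uniformly between the
    # saturation and limiting magnitudes of the survey...
    m_ref = rng.uniform(m_sat, m_lim)
    # ...and shift every band by the implied distance modulus, which
    # preserves all colours.
    mu = m_ref - abs_mags[ref_band]
    return {band: m + mu for band, m in abs_mags.items()}

apparent = to_apparent({"V": 3.0, "R": 2.7, "I": 2.4}, "V", 13.5, 19.5)
```

Because a single distance modulus shifts all bands, the simulated colours (e.g. $V-R$) are unchanged by this step.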
\subsection{Time sampling and photometric errors}
The next step is to simulate the observation process, reproducing the time sampling, photometric errors and bright and faint limits of the survey for all the filters.
\paragraph*{Time sampling}
The time sampling is generated by {\textsc ELLISA}~based on user-supplied files containing the heliocentric Julian dates (HJDs) of the observations in each filter.
The (error-free) magnitude $m_i^{\rm F0}$ is obtained by evaluating the light curve template for each filter $\rm F$ at the phase $\phi_i^{\rm F}$, corresponding to the $t_i^{\rm F}$ observation epoch, given by the following expression
\begin{equation}
\phi_i^{\rm F}=\frac{t_i^{\rm F}}{P} - \mathrm{int} \left(\frac{t_i^{\rm F}}{P}\right) + \phi_{\rm off}
\end{equation}
\noindent where $t_i^{\rm F}$ is the HJD of the $i$-th observation in the $\rm F$ filter, $P$ is the light curve period and $\phi_{\rm off}$ is the phase shift. The phase shift $\phi_{\rm off}$ is randomly drawn from a uniform distribution in the range $\phi_{\rm off}\in[0,1)$. The phase-offset is filter-independent and it is taken as the phase in which the primary eclipse occurs.
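The phase computation amounts to taking the fractional part of $t/P$ and adding the offset; a minimal sketch (we wrap the result back into $[0,1)$, which we assume is the intended convention):

```python
def observation_phase(t_hjd, period, phase_offset):
    # Fractional part of t/P plus the filter-independent phase offset,
    # wrapped into [0, 1). Assumes positive HJDs, so int() == floor().
    frac = t_hjd / period - int(t_hjd / period)
    return (frac + phase_offset) % 1.0
```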
\paragraph*{Photometric errors}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{errvsmagQUEST}
\caption{Error versus magnitude curves for the QUEST catalogue in the VRI Johnson-Cousins filters; the bars represent the dispersion $\delta$ of the typical photometric error $\sigma$ at each magnitude.}
\label{errvsmag}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=2\columnwidth]{curves_phoebe}
\caption{Simulated QUEST VRI light curves: EW system (upper panels), EB system (middle panels) and EA system (bottom panels), with very dense (left panels) and very sparse (right panels) time sampling.}
\label{curves}
\end{center}
\end{figure*}
The photometric errors are simulated as a function of magnitude, based on user-supplied error versus magnitude curves characteristic of the survey that is being simulated. As an example, Figure \ref{errvsmag} shows error versus magnitude curves for the QUEST survey in the VRI filters. These curves represent the mean standard deviation $\sigma$ (in magnitudes) of non-variable stars per magnitude bin \citep[][\citetalias{Mateu}]{Vivas2004}. The error bars indicate the actual dispersion of the multiple measurements of non-variable stars at each magnitude bin, which we call $\delta$ and which represents the dispersion of the photometric error $\sigma$ at that magnitude. The curves in Figure \ref{errvsmag} were computed for the QUEST low-latitude catalogue of variable stars \citepalias{Mateu}.
The error-convolved magnitude is thus computed as $m_i=m_i^{\rm F0} + \Delta m_i^{\rm F}$, where $\Delta m_i^{\rm F}$ is a random number drawn from a Gaussian distribution $G(0,\sigma)$ with zero mean and standard deviation $\sigma$; the value of $\sigma$ is, in turn, drawn from a Gaussian centred on the mean error curve at that magnitude, with standard deviation $\delta$. Finally, error-convolved magnitudes outside the user-supplied bright and faint limits of the survey are discarded.
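The two-level draw can be sketched as follows (our reading of the procedure; the constant error and dispersion curves in the example are placeholders for the survey's measured curves):

```python
import numpy as np

rng = np.random.default_rng(5)

def perturb_magnitude(m_true, sigma_curve, delta_curve):
    # sigma_curve(m): mean photometric error at magnitude m.
    # delta_curve(m): dispersion of that error (the bars in the figure).
    # Draw this observation's error level sigma, then the offset itself.
    sigma = max(rng.normal(sigma_curve(m_true), delta_curve(m_true)), 0.0)
    return m_true + rng.normal(0.0, sigma)

# Placeholder curves: 0.05 mag typical error with 0.01 mag dispersion.
obs = [perturb_magnitude(16.0, lambda m: 0.05, lambda m: 0.01)
       for _ in range(2000)]
```

Clamping $\sigma$ at zero guards against the (rare) negative draws of the outer Gaussian.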
Figure \ref{curves} shows examples of synthetic light curves produced by {\textsc ELLISA}~for eclipsing binaries of the three types in the VRI filters: EW (upper panels) with periods of 0.4745 d (left) and 0.53 d (right), EB (middle panels) with periods of 0.6062 d (left) and 0.77 d (right), and EA (bottom panels) with periods of 3.2293 d (left) and 5.84 d (right). For each eclipsing binary type, two examples are shown to illustrate the very dense ($>$90 observations, left panels) and very sparse ($\sim$35 observations, right panels) time samplings characteristic of different areas of the QUEST \citetalias{Mateu} survey.
The ELLISA code takes approximately 40 minutes to produce a light curve library for 1,000 eclipsing binaries, running on an Intel Xeon(R) E5-2620 v3 CPU at 2.9 GHz with 24 threads.
\subsection{Comparison of the ELLISA period distribution with Kepler}
Here we compare the period distribution obtained with ELLISA to that of the Catalogue of Eclipsing Binaries \citep{Keplerebcat} from the Kepler mission. The Kepler catalogue was produced from the very large number of observations of the Kepler mission, acquired over a long baseline, so it is a very homogeneous and complete survey of eclipsing binaries and is not expected to be significantly affected by observational biases, making it a good reference for comparison.
Figure \ref{kepler_ellisa_periods} shows the period distribution
for an ELLISA simulation of eclipsing binaries in a Thin Disc population, compared to the Kepler distribution. The initial distribution assumed as an ingredient of HB13 in the ELLISA simulation is also shown (dotted) and corresponds to the period distribution of the full binary population at zero-age (see Section~\ref{Sec:parfisicosbassic}). By contrast, the distribution shown for ELLISA corresponds only to the binaries that, at the selected age of the population, turn out to be eclipsing (based on the criterion described in Section \ref{s:phoe_sec}). There are some clear differences between the ELLISA and Kepler period distributions, although the overall shape is similar. At the short-period end, there is a sharp drop-off at $\sim0.1$~d in the Kepler distribution, which happens at a slightly shorter period, $\sim0.02$--$0.03$~d, in ELLISA. At the long-period end, the drop-off is much shallower in Kepler, while the ELLISA distribution falls off rapidly at periods larger than $\sim 10$~d. This more notable difference is probably due to the criterion used to decide whether a binary is eclipsing or not. Some of these differences might also stem from the choice of initial period distribution made in HB13. In the future we plan to explore different initial assumptions for some of the zero-age orbital parameters and possibly introduce distributions that consider joint dependencies \citep[e.g.][]{MoeDS17}. For the time being, since these modifications are beyond the scope of this work, we caution the reader that the use of ELLISA is best restricted to simulating eclipsing binaries with periods $\lesssim 15$~d.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{period_kepler_ellisa_bassic.pdf}
\caption{Period distribution resulting for a realization of a Thin Disc population of ELLISA eclipsing binaries (solid line), compared to the Kepler eclipsing binaries catalogue \citep{Keplerebcat} (dashed line). The initial period distribution assumed by HB13 (dotted line) for all binaries at zero-age is also shown for reference. All distributions shown are normalised to unit area.}
\label{kepler_ellisa_periods}
\end{center}
\end{figure}
\section{The search for eclipsing binaries in the QUEST low-latitude catalogue of variable stars}\label{binary_search}
In what follows we identify candidate variables in the QUEST low-latitude survey and, over this sample of candidates, conduct a period search that uses all available filters simultaneously and flags possible spurious periods.
To characterise the completeness of the variable star identification, we use {\textsc ELLISA}~to generate a mock catalogue that mimics the time sampling and photometric errors of the QUEST survey. We then search for candidate variables with the same technique and determine the fraction of variables recovered as a function of apparent magnitude and amplitude.
\subsection{The QUEST low-latitude survey}\label{s:quest_survey}
In this work we make use of the QUEST variability survey at low Galactic latitude from \citetalias{Mateu}. This catalogue spans a total area of 476 deg$^2$ with 6,513,705 point sources and consists of multi-epoch observations in the VRI filters, obtained entirely with the J\"urgen Stock Schmidt telescope and the QUEST-I camera \citep{Baltay2002} at the National Astronomical Observatory in Llano del Hato, M\'erida, Venezuela. The observations cover a low Galactic latitude zone in the range $-25^\circ \la b \la 30^\circ$, approximately in the direction of the Galactic anti-centre, $190^\circ \leqslant l \leqslant 230^\circ$.
The time sampling of the survey is quite inhomogeneous, as it comprises observations from many different projects with different scientific goals, and spans observations obtained between November~1998 and June~2006. Each star was typically observed once in any given night, with some exceptions in areas surveyed more than once (e.g. around McNeil's nebula in Orion). The typical number of epochs per star is $\sim30$ in V and R and $\sim25$ in I, but these can range from $\sim10$ up to $\sim115$--$120$ depending on the area of the sky (see Figure~5 in \citetalias{Mateu} and Figure \ref{distespsint} in Section~\ref{s:mockcat}).
The saturation magnitudes $M_{sat}$ are 14.0 mag in V and R and 13.5 mag in I; the limiting magnitudes $M_{lim}$ are 19.7 mag in V and R and 18.8 mag in I; and the completeness magnitudes $M_{com}$ are 18.5 mag in V and R and 18.0 mag in I. For full details of the survey we refer the reader to \citetalias{Mateu}.
\subsection{The {\textsc ELLISA}-QUEST Mock Light Curve Catalogue}\label{s:mockcat}
Here we describe the catalogue of synthetic light curves produced with {\textsc ELLISA}~to mimic QUEST, which is later used to fine-tune the variable star selection thresholds and period search parameters used in the search for eclipsing binaries.
First, we randomly select $>300,000$ stars (5\%) out of the \citetalias{Mateu} QUEST catalogue, in order to use the time sampling of each one as an input for {\textsc ELLISA}. In this way the simulation has a time sampling in the VRI bands that is representative of the whole survey and reproduces the varying cadence and number of observations across the survey area. For the QUEST \citetalias{Mateu} survey this is a particularly important point, since the time sampling and number of observations per filter are very inhomogeneous across the survey.
The mock light curve catalogue was simulated within the saturation and completeness limits quoted in the previous section for the QUEST survey. Error versus magnitude curves, computed by \citetalias{Mateu} by subdividing the whole catalogue into stripes of 1~h in right ascension by $0\fdg55$ in declination, were used as a function of location in the survey footprint; this allows us to take into account the effect of the changing number of observations per object per filter on the photometric error in different survey regions. Figure~\ref{distespsint} shows a map, in equatorial coordinates, of the number of observations per filter.
The {\textsc ELLISA}-QUEST Mock Light Curve Catalogue was produced using the simulator described in Section~\ref{Sec:ellisa},
assuming a Thin-Disc-like population. This is a reasonable assumption as the \citetalias{Mateu} QUEST survey is concentrated at low galactic latitudes ($|b|<30^\circ$) where the population is expected to be dominated by the Galactic Thin Disc. In any case, at such old ages ($\gtrsim$8--9 Gyr), the overall binary properties change very little with age. So, including a Thick-Disc-like population would be equivalent to giving a slightly larger weight to the old Thin Disc population and should not produce very significant changes.
The synthetic light curve catalogue produced contains a total of 307,935 binaries, of which 206,313 are EA systems and 101,622 are EB+EW systems. The period and V band amplitude distributions are shown in the upper and lower panels of Figure \ref{dist_per_amp}, respectively.
As expected, EA systems dominate the distribution at large periods. It is interesting to note that, in the amplitude distribution, EB+EW systems dominate at large values ($>1$~mag). This is due to the presence of EW systems composed of pairs of massive Main Sequence stars with large radii ($>\mathrm{few}$~$R_\odot$), common in the young populations present in our Disc-like mock population. These binaries are largely absent in old Thick-Disc-like or Halo-like populations (except, e.g., for cases that produce Blue Straggler stars).
The mock catalogue with VRI synthetic light curves is publicly available at the GitHub repository\footnote{\url{https://github.com/umbramortem/QUEST_EBs}}.
In what follows we will use this synthetic or mock light curve catalogue to guide parameters used in the search for eclipsing binaries and to characterise the completeness of the samples at various stages of the identification process.
\begin{figure*}
\begin{center}
\includegraphics[width=2.1\columnwidth] {distespacialbibsintetica.pdf}
\caption{Spatial distribution of the number of observations for filters V (left), R (middle), I (right) of the mock catalogue, in equatorial coordinates. The colour scale in each panel indicates the mean number of observations for each filter for each star.}
\label{distespsint}
\end{center}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth] {amp_per_hist.pdf}
\caption{Period (upper panel) and V amplitude (lower panel) distribution of full sample (solid black line), EA systems (dashed blue line) and EB+EW systems (dotted red line) of the mock catalogue.}
\label{dist_per_amp}
\end{center}
\end{figure}
\subsection{Variable star identification}\label{s:variables}
The first step in the search for eclipsing binaries is to identify stars exhibiting photometric variability. To identify variable stars we used the \citetalias{Mateu} extension of the variability indices defined by \cite{Stetson}. These indices use the fact that the variability in two different photometric passbands must be positively correlated for most known types of variable stars. In the \citetalias{Mateu} catalogue, the three possible Stetson $L$ indices $L_{\rm VR}$, $L_{\rm VI}$ and $L_{\rm RI}$ are reported for each star.
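As an illustration, a minimal two-band version of the Stetson $J$ index can be sketched as follows (the $L$ index used in practice additionally weights $J$ by a kurtosis factor; the input arrays here are hypothetical):

```python
import numpy as np

def stetson_J(m1, e1, m2, e2):
    """Two-band Stetson J variability index: deviations that are
    positively correlated in both bands yield J >> 0, while pure
    noise yields J near 0 (Stetson 1996). m1, m2 are paired
    magnitudes in two filters; e1, e2 their photometric errors."""
    m1, e1, m2, e2 = map(np.asarray, (m1, e1, m2, e2))
    n = len(m1)
    scale = np.sqrt(n / (n - 1.0))
    d1 = scale * (m1 - m1.mean()) / e1   # normalised residuals, band 1
    d2 = scale * (m2 - m2.mean()) / e2   # normalised residuals, band 2
    P = d1 * d2                          # product of paired deviations
    return np.sum(np.sign(P) * np.sqrt(np.abs(P))) / n
```

A correlated sinusoidal signal in both bands gives a large positive $J$, whereas uncorrelated noise gives $J$ close to zero, which is the behaviour the selection threshold exploits.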
Due to the inhomogeneous coverage of the survey, there are areas with very few observations in one or more filters, or even missing information in one filter altogether. Therefore, as in \citetalias{Mateu}, we selected as variable stars those that meet the following criteria:
\begin{itemize}
\item{$L_{\rm VR} \geq 1$ or $L_{\rm VI} \geq 1$ or $L_{\rm RI} \geq 1$}
\item{Observed amplitude $\geq 0.2$ mag in at least two filters}
\item{$N_{obs} \geq 10$ in at least two filters}
\end{itemize}
From the whole QUEST sample of 6,513,705 stars, a total of 410,715 stars were selected as candidate variables using these criteria. We also applied this methodology to the ELLISA-QUEST mock catalogue, where 60,145 objects were selected as variable stars.
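The selection cut above can be sketched as follows (the dictionary keys and input values are illustrative, not the actual catalogue format):

```python
import numpy as np

def is_candidate_variable(L, amps, nobs):
    """Candidate-variable cut described above. L: dict of Stetson
    indices {'VR':..,'VI':..,'RI':..} (NaN when a pair is missing);
    amps, nobs: dicts of observed amplitude (mag) and epoch counts
    per filter. All three criteria must be met."""
    stetson_ok = any(l >= 1.0 for l in L.values() if not np.isnan(l))
    amp_ok = sum(a >= 0.2 for a in amps.values()) >= 2   # >=0.2 mag in >=2 filters
    nobs_ok = sum(n >= 10 for n in nobs.values()) >= 2   # >=10 epochs in >=2 filters
    return stetson_ok and amp_ok and nobs_ok
```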
\subsection{The period search}
The search for eclipsing variables was conducted using the \cite{Lafler} method, developed to identify the periods of variable star light curves. In this work we used the extension presented in \citetalias{Mateu}, which allows the use of the multi-filter data in the survey. This method chooses as the best period the one that makes the period-folded light curve look smoothest. For the purposes of this work we proceed following \citetalias{Mateu}, but widened the period search range, considering that the RR Lyrae stars studied by \citetalias{Mateu} span a shorter period range than eclipsing binaries. The period range chosen was 0.04~d to 15~d, searched with an initial resolution of $10^{-4}$~d, refining the best 5 periods with a resolution of $10^{-7}$~d, and ending with the selection of the 3 best periods.
In addition to the period calculation, we computed the parameter $\Lambda$, a statistical significance parameter that
describes how smooth the light curve is for a given trial period, compared to the median over all trial periods, in standard deviation units ($N\sigma$). \citetalias{Mateu} used a conservative threshold of $\Lambda=4$ to search for RR Lyrae stars, i.e. stars for which a trial period was found to smooth the light curve at $4\sigma$ statistical significance. After visually inspecting a significant fraction of the sample, we observed that light curves with values of $\Lambda$ close to 4 were extremely noisy. We therefore adopted a higher threshold and performed the visual inspection only for curves with $\Lambda \geqslant 5$, which resulted in a sample of 10,020 initial candidates.
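For illustration, the string-length idea behind the \cite{Lafler} statistic can be sketched for a single filter as follows; the actual implementation extends this to the multi-filter case and adds the $\Lambda$ significance, and the data below are hypothetical:

```python
import numpy as np

def lafler_kinman_theta(t, m, period):
    """Lafler & Kinman (1965) statistic: sum of squared magnitude
    jumps between phase-adjacent points, normalised by the variance.
    A smooth phased curve (correct period) minimises theta."""
    phase = (np.asarray(t) / period) % 1.0
    ms = np.asarray(m)[np.argsort(phase)]
    num = np.sum((np.roll(ms, -1) - ms) ** 2)   # includes wrap-around pair
    den = np.sum((ms - ms.mean()) ** 2)
    return num / den

def best_period(t, m, periods):
    """Grid search: return the trial period with the smallest theta."""
    thetas = [lafler_kinman_theta(t, m, p) for p in periods]
    return periods[int(np.argmin(thetas))]
```

With irregular sampling over a long baseline, the statistic picks out the true period sharply, which is why the refinement step with $10^{-7}$~d resolution around the best trial periods is worthwhile.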
\subsubsection{Period aliases}\label{s:aliases}
\cite{Vivas2004} and \citetalias{Mateu} have shown that in the QUEST survey the period aliasing is frequent due to periodicities in the time sampling. Hence, we used the mock catalogue of eclipsing binaries to identify the most common period aliases affecting our survey.
According to \cite{Lafler}, the spurious periods $\Pi$ follow the relation
\begin{equation}
\frac{1}{\Pi}=\frac{k}{P}+\frac{1}{p}
\end{equation}
\noindent with $P$ the true period, $k$ a rational number that produces the harmonics of $P$, and $p$ the external period present in the time sampling. \citetalias{Mateu} analysed the most frequent period aliases affecting the recovery of RR Lyrae stars in their survey. They found the most frequent one to be the one-day alias; less frequent were the 1/3-day and 1/4-day aliases for RRab stars and, for the RRc type, the first harmonic of the true period and the 1/2-day alias. In this work we have repeated that analysis and summarise our results in Table~\ref{tablaalias}. In addition to the common period aliases found in \citetalias{Mateu}, we observe the presence of half-period harmonic aliases with $k=\frac{1}{2}$, as well as another very important locus represented by the 1/20-day (1.5~h) alias. This period alias is interesting, as it is probably due to the repeated observations obtained as part of the CIDA variability survey of Orion around McNeil's nebula \citep{Briceno2004}, which had precisely a 1.5~h cadence.
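The alias relation above can be evaluated directly; the following sketch (with an illustrative, hypothetical helper) enumerates the spurious periods expected for a given true period:

```python
def alias_periods(P, ks=(0.5, 1, 2), ps=(-1, 1)):
    """Spurious periods Pi from 1/Pi = k/P + 1/p, for harmonics k and
    sampling periodicities p (days). Pure harmonics (no 1/p term) give
    Pi = P/k; the k=1 'harmonic' is the true period itself."""
    out = {}
    for k in ks:
        out[f'k={k} harmonic'] = P / k
        for p in ps:
            f = k / P + 1.0 / p
            if f > 0:                       # negative frequency: no alias
                out[f'k={k}, p={p:+g}d alias'] = 1.0 / f
    return out
```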
\begin{table}
\caption{Frequency of occurrence of the different aliases or spurious periods present in the mock catalogue of eclipsing binaries}
\begin{center}
\begin{tabular}{cccc}
\hline
Type & $k$ & $p$ (d) & Frequency(\%)\\
\hline
Correctly identified & 1 & & 14.3\\
Harmonics & $\frac{1}{2}$ & & 2.79\\
& 2 & & 16.4\\
& $\frac{1}{4}$ & & 0.57\\
& 4 & & 0.53\\
& $\frac{1}{5}$ & & 0.4\\
& 5 & & 0.07\\
& $\frac{1}{20}$ & & 0.37\\
& 20 & & 0.001\\
1-day alias & 1 & $\pm$1 & 0.4\\
& 2 & $\pm$1 & 1.4\\
$\frac{1}{2}$-day alias & 1 & $\pm\frac{1}{2}$ & 0.1\\
2-day alias & 1 & $\pm$2 & 0.4\\
$\frac{1}{3}$-day alias & 1 & $\pm\frac{1}{3}$ & 0.12\\
3-day alias & 1 & $\pm$3 & 0.3\\
$\frac{1}{4}$-day alias & 1 & $\pm\frac{1}{4}$ & 0.1\\
4-day alias & 1 & $\pm$4 & 0.17\\
$\frac{1}{5}$-day alias & 1 & $\pm\frac{1}{5}$ & 0.03\\
5-day alias & 1 & $\pm$5 & 0.12\\
$\frac{1}{20}$-day alias & 1 & $\pm\frac{1}{20}$ & 0.01\\
20-day alias & 1 & $\pm$20 & 0.07\\
\hline
\label{tablaalias}
\end{tabular}
\end{center}
\end{table}
The most common aliases identified in this analysis were used in the visual inspection of the light curves during the search for eclipsing binaries in the real catalogue. Alias periods are reported wherever disambiguation was not possible.
\subsection{The QUEST catalogue of Eclipsing binaries}
The final identification of eclipsing binaries was made by visual inspection of the full sample of 10,020 initial candidates for which a statistically significant period ($\Lambda\geqslant5$) was found. The light curves were period-folded with the three best periods found, and the five most common period aliases identified in Section~\ref{s:aliases} were also checked for the best period selected.
Finally, we obtained 1,125 eclipsing binaries consisting of 179 EA, 60 EB and 886 EW systems corresponding to 16\%, 5\% and 79\% of the catalogue respectively.
The main light curve parameters are summarised in Table \ref{tab:lc_parameters} and VRI light curves are shown in Appendix \ref{a:EB_lightcurves}. The VRI time series data are available at a public GitHub repository\footnote{\url{https://github.com/umbramortem/QUEST_EBs}}.
\begin{table*}
\caption{Basic information and light curve parameters of the eclipsing binaries identified. (This table is published in its entirety as Supporting Information with the electronic version of the article. A portion is shown here for guidance regarding its form and content).}
\label{tab:lc_parameters}
\begin{footnotesize}
\tabcolsep=0.16cm
\begin{tabular}{cccccccccccc}
\hline
ID & RA & DEC & ($N_V$,$N_R$,$N_I$) & Period & AmpV & AmpR & AmpI & Type & V & R & I\\
& ($^{\circ}$) & ($^{\circ}$) & & (d) & (mag) & (mag) & (mag) & & (mag) & (mag) & (mag) \\
\hline
40430964 & 73.3967 & 1.21155 & (23,24,26) & 0.36275 & 0.48 & 0.50 & 0.45 & EW & 16.52 $\pm$ 0.02 & 16.04 $\pm$ 0.01 & 15.66 $\pm$ 0.01 \\
40440270 & 73.5137 & 1.10068 & (26,25,26) & 0.62019 & 0.41 & 0.46 & 0.41 & EB & 14.85 $\pm$ 0.01 & 14.69 $\pm$ 0.00 & 14.52 $\pm$ 0.01 \\
40451647 & 73.6484 & 1.19106 & (19,25,25) & 0.26241 & 0.62 & 0.62 & 0.64 & EW & 17.52 $\pm$ 0.04 & 17.12 $\pm$ 0.01 & 16.76 $\pm$ 0.02 \\
40507509 & 74.3014 & 1.23649 & (24,25,27) & 0.43003 & 0.21 & 0.26 & 0.28 & EW & 16.56 $\pm$ 0.02 & 16.24 $\pm$ 0.01 & 15.95 $\pm$ 0.01 \\
\hline
\end{tabular}
\end{footnotesize}
\raggedright The Table columns are: the identification number ID, right ascension RA and declination DEC (J2000), the number of observations in the VRI bands, the light curve period, amplitude of variation in each band, type of eclipsing binary and mean magnitudes in each band with the corresponding photometric error.
\end{table*}
\subsubsection{Cross-match with public catalogues of variable stars}
We cross-matched our 1,125 eclipsing binary candidates with the ASAS-3 catalogue \citep{Pojmanski}, the GCVS \citep{Samus} and the Catalina Real-Time Transient Survey (CRTS) \citep{Drake2014}, with a tolerance of 7 arcsec. We found no coincidences with ASAS-3, and 384 and 12 matches with CRTS and GCVS, respectively.
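A brute-force positional cross-match of the kind used here can be sketched as follows (coordinates in degrees; a production match would use a spatial index for speed):

```python
import numpy as np

def crossmatch(ra1, dec1, ra2, dec2, tol_arcsec=7.0):
    """Nearest-neighbour cross-match on the sphere. Returns (i, j)
    pairs where source i of catalogue 1 has its nearest catalogue-2
    neighbour j within tol_arcsec."""
    ra1, dec1, ra2, dec2 = [np.radians(np.atleast_1d(x))
                            for x in (ra1, dec1, ra2, dec2)]
    pairs = []
    for i in range(len(ra1)):
        # haversine angular separation to every catalogue-2 source
        dra, ddec = ra2 - ra1[i], dec2 - dec1[i]
        a = (np.sin(ddec / 2) ** 2
             + np.cos(dec1[i]) * np.cos(dec2) * np.sin(dra / 2) ** 2)
        sep = 2 * np.arcsin(np.sqrt(a))
        j = int(np.argmin(sep))
        if np.degrees(sep[j]) * 3600.0 <= tol_arcsec:
            pairs.append((i, j))
    return pairs
```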
Out of the 384 matches with CRTS, shown in Table \ref{tab:table_cross2}, 298 stars are classified as EW, 1 as EB and 25 as EA in both catalogues. Of the 60 remaining matches, 31 stars classified as EW in the QUEST catalogue have different classifications in the CRTS catalogue: 2 EB, 3 EA, 9 RS~CVn, 1 RRab, 14 RRc, 1 RRd and 1 Hump (erratic light curve); 19 stars classified as EB in the QUEST catalogue are reported in CRTS as EA (1) and EW (18); and 10 stars classified as EA in the QUEST catalogue are classified in CRTS as 1 EA with unknown period, 1 EB and 8 EW. It should be noticed that 12 of the CRTS periods resulting from the cross-match are found to be inexact based on the light curve inspection, as indicated by \cite{Drake2014}. We found 180 stars with the same period we obtained in this work, within a tolerance of 5\%. These, as well as stars identified with a period alias, are indicated in Table \ref{tab:table_cross2}. Looking at the most common period aliases for the matched stars, we found 9 stars recovered at half the period, 3 at twice the period and 1 at three times the period. We also found 20 aliases of 1-day, 5 of 1/3-day, 11 of 1/2-day and 4 of 1/4-day.
Table~\ref{tab:table_cross1} summarises the matches with GCVS. Out of the 12 matches, 4 stars do not have a computed period in GCVS and are classified there as one rapid irregular in a nebula (NN Ori), one rapid irregular (V1891 Ori), one red rapid irregular (V2678 Ori) and one UV Ceti (V0678 Ori). These stars were all classified in our catalogue as EW.
Out of the 8 remaining matches, 6 were catalogued in GCVS as RRc stars, of which 1 is suspected to be an alias of half the period (V0473 Hya), 2 aliases of 1-day (V1867 Ori and V1845 Ori) and 3 aliases of 1/2-day (V0497 Hya, V0485 Hya and V0944 Mon). These stars are kept in our final catalogue classified as EW type, since upon visual inspection the curves look smooth and the amplitudes of the eclipses are similar in the different filters, as expected for EW binaries. Finally, the two remaining stars were classified in GCVS as a Delta Scuti with a suspected 1/3-day alias (V2742 Ori) and as an EW (V2769 Ori). We retain their classification as EW since the amplitudes in the three filters are very similar, although we caution that star 51082306 is near the faint limit of our survey ($V=17.6$ at maximum light) and its light curve is noisy.
\begin{table*}
\caption{QUEST matches with the GCVS catalogue.}
\label{tab:table_cross1}
\begin{tabular}{ccccccccc}
\hline
$\rm{ID}_{QUEST}$ & $P_{\rm QUEST}$ & AmpI & $\rm Type_{QUEST}$ & $\rm ID_{GCVS}$ & $P_{\rm GCVS}$ & AmpI & $\rm Type_{GCVS}$ & Comments\\
& (d) & (mag) & & & (d) & (mag) & & (periods)\\
\hline
50727050 & 0.303348 & 0.442 & EW & V1867 Ori & 0.217985 & 0.5 & RRC: & 1-day alias\\
51082306 & 0.387926 & 0.424 & EW & V2742 Ori & 0.162376 & 0.30000114 & DSCT: & 1/3-day alias\\
80683482 & 1.291147 & 0.468 & EW & V0497 Hya & 0.391934 & 0.6000004 & RRC & 1/2-day alias\\
51556152 & 0.214918 & 0.614 & EW & V2769 Ori & 0.27872 & $\cdots$ & EW & $\cdots$\\
50130978 & 0.337271 & 0.408 & EW & V1845 Ori & 0.254789 & 0.40000057 & RRC: & 1-day alias\\
80312834 & 0.690166 & 0.406 & EW & V0473 Hya & 0.345091 & 0.5 & RRC & k=2 harmonic\\
80010865 & 0.418573 & 0.25 & EW & V0944 Mon & 0.209273 & 0.29999924 & RRC: & 1/2-day alias\\
80494973 & 0.480576 & 0.408 & EW & V0485 Hya & 0.240283 & 0.3000002 & RRC & 1/2-day alias\\
50917935 & 0.474314 & 0.304 & EW & V1891 Ori & $\cdots$ & 0.42000008 & IS & $\cdots$\\
51000045 & 1.66799 & 0.327 & EW & V2678 Ori & $\cdots$ & 0.10999966 & INSB & $\cdots$\\
50765586 & 4.648128 & 0.365 & EW & V0678 Ori & $\cdots$ & 2.0 & UVN & $\cdots$\\
50969201 & 10.696515 & 0.331 & EW & NN Ori & $\cdots$ & 2.500001 & INS & $\cdots$\\
\hline
\end{tabular}
\end{table*}
Our catalogue also overlaps partially with the \cite{Eyken} eclipsing binary catalogue from the Palomar Transient Factory (PTF) Orion Project. Out of their total of 82 binaries, we find 22 matching systems: 16 classified as Close systems (C) in PTF, corresponding to EW+EB systems in our classification; 5 classified as Detached (D), corresponding to our EA systems; and 1 system classified as EW in this work but as EA by \cite{Eyken}. We found 9 period matches, one 1-day alias and 1 alias of half the period.\\
We also cross-matched our catalogue with the T~Tauri star catalogue of \cite{Karim2016}, since it is based on the QUEST low-galactic-latitude catalogue, with additional observations made by the YETI project for some objects. We found 18 matches, all classified as EW in our catalogue, and classified as 2 Classical T~Tauri stars (CTTS) and 16 weak-lined T~Tauri stars (WTTS) by \cite{Karim2016}. Data for these stars in the \cite{Karim2016} catalogue are shown in Table \ref{tab:table_cross3}, where the identified period aliases are also reported. These matching objects could be potential T~Tauri stars in binary systems. However, a more detailed analysis of the light curves would be warranted to confirm this, since most are recovered here at twice the period, meaning that only one of the two variability causes is probably real. It is important to note, however, that the low fraction of matching objects in both catalogues, 8\% of \citeauthor{Karim2016}'s and $<$2\% of ours, indicates that a low degree of contamination or confusion is expected in either catalogue.
\begin{table*}
\caption{QUEST matches with the CRTS catalogue. (This table is published in its entirety as Supporting Information with the electronic version of the article. A portion is shown here for guidance regarding its form and content)}
\label{tab:table_cross2}
\begin{tabular}{ccccccccc}
\hline
$\rm{ID}_{QUEST}$ & $P_{\rm QUEST}$ & AmpV & $\rm Type_{QUEST}$ & $\rm ID_{CRTS}$ & $P_{\rm CRTS}$ & AmpV & $\rm Type_{CRTS}$ & Comments\\
& (d) & (mag) & & & (d) & (mag) & & (periods)\\
\hline
40047541 & 0.263274 & 0.307 & EW & J041329.6-031402 & 0.26327 & 0.21 & EW & matches\\
40067341 & 0.310505 & 0.329 & EA & J041810.3-024627 & $\cdots$ & 0.07 & EA$_{UP}$ & $\cdots$\\
40083051 & 0.352228 & 0.328 & EW & J042232.8-015739 & 0.352226 & 0.2 & EW & matches\\
40141914 & 0.409572 & 0.523 & EW & J043216.2-015942 & 0.29033 & 0.55 & EW & 1-day alias\\
40152097 & 0.267945 & 0.666 & EW & J043338.3-015918 & 0.23621 & 0.38 & EW & \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{QUEST matches with the PTF catalogue \protect\citep{Eyken}}
\label{tab:table_cross4}
\begin{tabular}{ccccccc}
\hline
$\rm{ID}_{QUEST}$ & $P_{\rm QUEST}$ & $\rm Type_{QUEST}$ & $\rm ID_{PTF}$ & $P_{\rm PTF}$ & $\rm Type_{PTF}$ & Comments\\
& (d) & & & (d) & (periods)\\
\hline
50557720 & 0.313152 & EW & 6-3648 & 0.27066 & C & $\cdots$\\
50566490 & 0.540496 & EW & 6-5196 & 0.425248 & C & $\cdots$\\
50571345 & 0.328092 & EW & 6-4262 & 0.328103 & C & matches\\
50747963 & 0.413978 & EW & 9-3480 & 0.342832 & C & $\cdots$\\
50775468 & 0.368418 & EW & 9-6659 & 0.310999 & C & $\cdots$\\
50812538 & 0.086737 & EA & 10-3009 & 2.234 & D & $\cdots$\\
50847483 & 0.468221 & EW & 10-2406 & 0.468237 & C & matches\\
50876465 & 0.327801 & EW & 11-3313 & 0.281535 & C & $\cdots$\\
50879736 & 0.227755 & EW & 11-3778 & 0.227754 & C & matches\\
50620504 & 0.504927 & EW & 7-750 & 0.402915 & C & $\cdots$\\
50707857 & 0.327595 & EW & 8-1414 & 0.281382 & C & $\cdots$\\
50863250 & 0.372698 & EW & 11-1051 & 0.372703 & C & matches\\
50517261 & 0.852987 & EA & 0-8036 & 0.597359 & D & $\cdots$\\
50544270 & 0.26412 & EW & 0-8177 & 0.304414 & C & $\cdots$\\
50563048 & 0.273228 & EW & 0-5197 & 0.273236 & C & matches\\
50820059 & 0.331189 & EW & 4-8796 & 0.397136 & C & $\cdots$\\
50830997 & 0.258346 & EW & 4-7558 & 0.258346 & C & matches\\
50842277 & 0.264612 & EW & 4-9573 & 0.264623 & C & matches\\
50545498 & 3.138798 & EA & 6-7525 & 1.569462 & D & 1-day alias\\
50569288 & 1.351627 & EA & 6-9087 & 1.351641 & D & matches\\
50599532 & 2.071471 & EW & 7-7604 & 2.071092 & D & matches\\
50602623 & 1.91986 & EA & 1-2659 & 0.960161 & D & k=2 harmonic\\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{QUEST matches with the \protect\cite{Karim2016} TTS catalogue.}
\label{tab:table_cross3}
\begin{tabular}{ccccccc}
\hline
$\rm ID_{QUEST}$ & $P_{\rm QUEST}$ & $\rm Type_{QUEST}$ & $\rm ID_{KARIM}$ & $P_{\rm KARIM}$ & $\rm Type_{KARIM}$ & Comments (periods)\\
& (d) & & & (d) & \\
\hline
40538170 & 1.793697 & EW & 259 & 0.9 & WTTS & k=2 harmonic\\
50324694 & 2.318376 & EW & 330 & 0.54 & WTTS & 1/2-day alias\\
50493050 & 2.248062 & EW & 431 & 0.11 & WTTS & k=20 harmonic\\
50607212 & 22.161642 & EW & 607 & 11.19 & WTTS & k=2 harmonic\\
50611629 & 2.480004 & EW & 614 & 1.24 & WTTS & k=2 harmonic\\
50627432 & 14.369345 & EW & 200 & 7.18 & WTTS & k=2 harmonic\\
50643673 & 0.795061 & EW & 694 & 0.4 & WTTS & k=2 harmonic\\
50650756 & 0.97408 & EW & 28 & 0.49 & WTTS & 1-day alias\\
50674665 & 9.984332 & EW & 218 & 5.06 & WTTS & k=2 harmonic\\
50737943 & 3.084968 & EW & 968 & 2.82 & WTTS & \\
50765586 & 4.648128 & EW & 1059 & 2.32 & WTTS & k=2 harmonic\\
50765935 & 1.26327 & EW & 1056 & 0.63 & WTTS & k=2 harmonic\\
50936079 & 1.279304 & EW & 129 & 0.64 & CTTS & k=2 harmonic\\
50969201 & 10.696515 & EW & 1593 & 1.23 & WTTS & 1-day alias\\
51033757 & 1.054704 & EW & 154 & 0.53 & WTTS & 1-day alias\\
51052067 & 14.143456 & EW & 1803 & 7.07 & WTTS & k=2 harmonic\\
51053661 & 9.496182 & EW & 1810 & 4.75 & WTTS & k=2 harmonic\\
51293974 & 7.453762 & EW & 1951 & 3.73 & CTTS & k=2 harmonic\\
\hline
\end{tabular}
\end{table*}
\section{Completeness of the QUEST catalogue of eclipsing binaries}\label {s:compsec}
In order to analyse the completeness of the catalogue of eclipsing binaries produced, we used the procedure described in the previous sections to select variable stars and conduct the period search over the {\textsc ELLISA}-QUEST mock catalogue of eclipsing binary light curves.
We considered as `recovered' all synthetic variables for which the accumulated phase shift $\alpha$ due to the period misidentification is less than 20\%, $|\alpha|<0.2$, where $\alpha$ is given by
\begin{equation}
\alpha = \frac{(P_{recovered} - P_{true})}{P_{true}} \frac{\Delta T}{P_{true}}
\label{alphashift}
\end{equation}
\noindent where $\Delta T$ is the average duration of the time series, weighted by the number of observations in each filter.
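The recovery criterion above translates directly into code; a minimal sketch:

```python
def is_period_recovered(P_rec, P_true, delta_T, tol=0.2):
    """Recovery criterion: the phase shift accumulated over the time
    span delta_T due to the period error must stay below `tol` cycles.
    All quantities in days."""
    alpha = (P_rec - P_true) / P_true * (delta_T / P_true)
    return abs(alpha) < tol
```

Note that the tolerated period error shrinks quadratically with period and linearly with baseline: for $P_{true}=0.5$~d and $\Delta T=100$~d, only $|\Delta P| \lesssim 5\times10^{-4}$~d passes, which is why this criterion is much more stringent than a simple fractional period tolerance.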
Figure \ref{compvsmag_amp} shows the resulting completeness as a function of the apparent V, R and I band magnitudes. The full sample of synthetic stars with recovered periods corresponds to the black circles. This is sub-divided into three sub-samples according to VRI band amplitudes: $0.2 \leqslant \rm{Amp} \leqslant 0.5$ (yellow triangles), $0.5 \leqslant \rm{Amp} \leqslant 1$ (red diamonds) and $\rm{Amp} \geqslant 1$ (blue squares). Note that, in this and the following figures, the curves represent the completeness or recovery fraction within each sub-sample, so the sum of curves for the different sub-samples \emph{is not equal} to the curve for the combined sample.
For the three ranges selected, the plot shows that the completeness decreases monotonically for fainter objects. Also, the recovery is systematically larger for stars in the intermediate-amplitude range, which is to be expected: lower-amplitude stars are more difficult to identify as variables and their periods are more difficult to recover, due to the noisier data. Higher-amplitude stars, on the other hand, are mostly EA binaries with very abruptly varying light curves, for which the \cite{Lafler} method is not optimal, and which are often misidentified as non-variables given the difficulty of sampling the eclipses well. Also note that in the two upper panels the mean completeness for the full sample is lower than for the three sub-samples. This is caused by stars with data missing in one filter, or with an observed amplitude $<0.2$~mag, which are included in the search; the criteria defined in Section~\ref{s:variables} only require two of the three amplitudes to be larger than 0.2~mag.
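The per-bin completeness shown in these figures amounts to the fraction of injected mock binaries flagged as recovered in each magnitude bin; a minimal sketch of this bookkeeping (array names are illustrative):

```python
import numpy as np

def completeness_vs_mag(mag_injected, recovered_mask, edges):
    """Completeness per magnitude bin: fraction of injected mock
    binaries recovered, NaN where a bin is empty."""
    mag_injected = np.asarray(mag_injected)
    recovered_mask = np.asarray(recovered_mask)
    comp = np.full(len(edges) - 1, np.nan)
    idx = np.digitize(mag_injected, edges) - 1   # bin index per star
    for b in range(len(edges) - 1):
        in_bin = idx == b
        if in_bin.any():
            comp[b] = recovered_mask[in_bin].mean()
    return comp
```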
\begin{figure}
\begin{center}
\includegraphics[width=0.95\columnwidth]{completitudvsmagnitud_amp.pdf}
\caption{Completeness as a function of the V (top), R (centre) and I (bottom) magnitudes for the complete sample (black), stars with amplitudes between 0.2 and 0.5 magnitudes (orange), stars with amplitudes between 0.5 and 1 magnitudes (red) and stars with amplitudes larger than 1 magnitude (blue).}
\label{compvsmag_amp}
\end{center}
\end{figure}
Figure~\ref{comp_period_type} shows the completeness as a function of period for each eclipsing binary type, for the full period range (top panel) and zooming in on periods shorter than 1~d (bottom panel). EA binaries are shown with (blue) triangles, EB+EW binaries with (red) diamonds and the combined sample with (black) circles. The top panel shows that, taken as a whole, the completeness is on average fairly low, just below $20\%$ for all eclipsing types combined. Over this full period range, extending up to periods of $10$~d, the EAs' average completeness of $20\%$ is similar to that of the EB+EWs, which ranges between $\sim5\%$ and $35\%$. However, the trend is different in the bottom panel, focused on periods shorter than 1~d. For these short periods the EW+EB binaries are optimally recovered, with completeness averaging $\sim30\%$ for periods larger than $\sim0.3$~d, while EAs are recovered less than $5\%$ of the time on average. This confirms our expectation, based on results from previous RR Lyrae surveys
\citep[][\citetalias{Mateu}]{Vivas2004}, that EW+EBs can be effectively identified with QUEST. The average completeness found for the identification of EB+EWs might seem low in comparison to the QUEST results for RR Lyrae stars of type $c$, but this is due to the much more stringent recovery criterion used here in comparison to \citet{Vivas2004} and \citetalias{Mateu}.
Figure~\ref{comp_period_ampV} illustrates the dependence of the average completeness on period, independently of binary type, for the full sample with $\mathrm{AmpV}\geqslant0.2$~mag (black circles) and in three amplitude ranges: small (yellow triangles), intermediate (red diamonds) and large (blue squares). These plots are also representative of the behaviour for the R and I amplitudes, which are not shown for the sake of simplicity. The figure shows that the dependence on amplitude is not as strong as that on type, shown in Figure~\ref{comp_period_type}. Binaries in the intermediate amplitude range are slightly better recovered on average, but this is simply a consequence of the correlation of amplitude with period and type, i.e. at a given period the strong dependence on type shown in Figure~\ref{comp_period_type} dominates the average completeness trend more than the amplitude does. This very strong correlation with eclipsing binary type has to do with the light curve shape and the survey's time sampling: short-period ($<$1~d) EW+EBs are easier to recover because of their smooth, continuous light curves, as opposed to the EAs' sharper eclipses, which are harder to sample. The main factor that affects completeness is therefore the time sampling, with amplitude playing a secondary role.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{compl_periods.pdf}
\caption{Completeness of the identification of eclipsing binaries in two period ranges: for the full period range explored (\emph{upper panel}) and short periods $<1$~d (\emph{bottom panel}). In both panels the average completeness is shown for the full sample (black circles) and by type: EA (blue triangles) and EB+EW (red diamonds).}
\label{comp_period_type}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{comp_period_amp.pdf}
\caption{Completeness as a function of period for the full period range explored (\emph{upper panel}) and short periods $<1$~d (\emph{bottom panel}), in three ranges of the V-band amplitude. The completeness for the full sample is shown with black circles, and for stars in three amplitude ranges: $0.2 \leqslant \mathrm{AmpV(mag)}\leqslant 0.5$ (yellow triangles), $0.5 \leqslant \mathrm{AmpV(mag)}\leqslant 1.0$ (red diamonds) and $\mathrm{AmpV(mag)}\geqslant 1.0$ (blue squares). The behaviour shown by these plots is also representative of that for the R and I filters (not shown).}
\label{comp_period_ampV}
\end{center}
\end{figure}
Figure \ref{distespsim} shows the completeness as a function of position across the QUEST low latitude survey. The two panels show similar overall variations, mainly driven by the number of available observations as a function of RA-DEC. In the outermost parts of the survey the completeness is lower than that observed for zones with $-5^\circ \leqslant DEC\leqslant +5^\circ$. The latter zones, like that at $-2^\circ \leqslant DEC \leqslant 0^\circ$ and $60^\circ \leqslant RA \leqslant 80^\circ$, have the lowest number of observations in the survey and also lack observations in R and I (see Figure~\ref{distespsint}). On the other hand, for the zone between $-4^\circ \leqslant DEC \leqslant -2^\circ$ and $RA \geqslant 70^\circ$ the completeness is higher than in the previous zones; here the number of observations in the different filters is larger than in the previous region. Finally, the highest completeness belongs to the region between $1.5^\circ \leqslant DEC \leqslant 2^\circ$ and $70^\circ \leqslant RA \leqslant 90^\circ$, which has the largest number of observations in the three filters (typically $>100$ observations per filter). We conclude that the completeness varies as a function of the number of observations available in each survey region, just as expected: the fewer observations we have, the worse sampled the light curves will be, making period recovery more difficult.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{ra_dec_tipos.pdf}
\caption{Spatial distribution of the average completeness in the recovery of eclipsing binaries (\emph{left:} EA, \emph{right:} EB+EW), in equatorial coordinates. Areas shown in black with null completeness correspond to survey areas with fewer than 10 epochs per filter on average (see Figure~\ref{distespsint}).}
\label{distespsim}
\end{center}
\end{figure*}
\section{Summary and Conclusions}\label{sum_and_con}
In this work we introduced {\textsc ELLISA}, a simulator that generates a synthetic library of multi-filter light curves for a population of eclipsing binaries, from user-supplied time sampling and photometric errors. Mock eclipsing binary light curve catalogues produced with {\textsc ELLISA}~can be used to test and optimise eclipsing binary searches in upcoming multi-epoch surveys, e.g. \emph{Gaia}, PanSTARRS-1 or LSST, as well as to design observing strategies for future surveys, with a realistic light curve ensemble.
They can also be used to simulate the contamination due to eclipsing binaries in searches for other types of variables, such as RR Lyrae or SX Phoenicis/$\delta$~Scuti stars. {\textsc ELLISA}~is implemented in \textsc{Python} and is publicly available in a GitHub repository\footnote{\url{https://github.com/umbramortem/ELLISA}}.
We have conducted an eclipsing binary search on the QUEST low latitude survey, and used {\textsc ELLISA}~to guide and optimise search parameters for this particular survey, and to estimate the completeness of the resulting eclipsing binary catalogue as a function of binary type, apparent magnitude, period and amplitude. Our main results and conclusions are the following:
\begin{itemize}
\item We have identified 1,125 eclipsing binaries, of which 707 are new discoveries, consisting of 139 EA (detached or Algol-type), 40 EB (semi-detached or $\beta$~Lyrae-type) and 528 EW (contact or W~UMa-type), corresponding respectively to 20\%, 5\% and 75\% of the new discoveries.
\item The coordinates and light curve parameters of the eclipsing binaries identified are summarised in Table~\ref{tab:lc_parameters}. The time series VRI data for the QUEST low latitude eclipsing binary catalogue is made publicly available at a GitHub repository\footnote{\url{https://github.com/umbramortem/QUEST_EBs}}.
\item EB+EW binaries are identified in QUEST with an average $\sim30\%$ completeness in the period range 0.25--1~d.
\item EAs are identified at an average $15\%$ completeness in the period range $2$--10~d.
\item Time sampling density is the primary factor affecting which types of eclipsing binary can be recovered and with what average completeness, while amplitude plays a secondary role.
\item This constitutes one of the few catalogues of eclipsing binaries reported to date with a complete characterisation of its selection function, provided as the completeness (recovery fraction) as a function of period, amplitude and apparent magnitude.
\end{itemize}
Being standard candles, EWs can be used to trace Galactic substructure and identify possible new over-densities, streams or dwarf satellites, or to infer the density profiles of the Halo and Disc, provided the selection function is known, as has been done in numerous previous studies with other standard candles such as RR Lyrae or Red Clump stars \citep{Bovy2016, Mateu2018b,Miceli2008}. The EW catalogue presented here, particularly the high-completeness short-period sample (0.25--1~d), can therefore be used in any of these applications, given the provided characterisation of the catalogue's selection function obtained with the ELLISA mock catalogues.
\paragraph*{Acknowledgements} The authors are glad to thank the referee, Andrej Pr\v{s}a, for his valuable comments and recommendations, which helped improve this work. B-CO would like to thank Kyle Conroy for his support with technical details in the implementation of PHOEBE.
\bibliographystyle{mnras}
\section{Introduction}\label{sec:intro}
The proliferation of networked data has motivated the extension of traditional methods of signal processing to signals defined on graphs. An important operational tool of traditional signal processing is the Fourier transform, which allows for the analysis of signals via their spectral decomposition. The extension of this approach to graphs has been used successfully to analyze data on networks, such as sensor networks, or networks of interactions of chemicals in a cell or of financial transactions.
A graph signal is any complex-valued function on the vertices of the graph, which may represent pollution levels measured by a sensor network, or neural activity levels in regions of the brain (see \cite{2018:Ortega:GSPOverview} and the references therein). Important properties of signals, such as diffusion of a signal, noise reduction or the concept of smoothness, can be modeled by means of a matrix representing the graph structure.
Such a matrix is referred to as the graph \emph{shift operator}, and is typically taken to be the graph adjacency matrix or the graph Laplacian. A graph Fourier transform can then be defined as the projection of the signal onto an eigenbasis of the graph shift operator.
See \cite{2018:Ortega:GSPOverview,SandryhailaMoura13} for an overview of recent developments in graph signal processing.
Taking this approach, we can see that the processing of the graph signal rigidly depends on the underlying network.
Any change in the network requires a re-computation of the eigenbasis of the graph shift operator, which is expensive. Further, in many cases networks evolve over time, resulting in a sequence of graphs that are structurally similar. These issues heighten the need for an instance-independent framework for handling processing of signals over large graphs.
In \cite{lovaszszegedy2006}, \emph{graphons} were introduced as limit objects of converging sequences of dense graphs.
Graphons retain the large-scale structure of the families of graphs they represent, and are hence useful representatives of these families.
The use of graphons to guide graph signal processing was first proposed in \cite{MorencyLeus17}, and a graphon Fourier transform was proposed in \cite{ruiz2}.
A graphon is commonly defined as a symmetric, measurable function $w: [0,1]^2 \rightarrow [0,1]$.
More generally, however, one can define a graphon on any standard probability space (\cite[Corollary 3.3]{Borg-Chayes-Lovasz-2010}).
For practical reasons, and especially to deal with Cayley graphons in Section \ref{sec:cayley}, we define graphons and graphon signals on general standard probability spaces as follows.
\begin{definition}\label{def:graphon-signal}
Let $(X,\mu)$ be a standard probability space, and $L^2(X)$ be the associated space of square-integrable functions.
A function $w:X\times X\to [0,1]$ is called a graphon represented on $X$ if $w$ is measurable and symmetric (i.e.~$w(x,y)=w(y,x)$ almost everywhere).
A graphon signal on $w$ is a pair $(w,f)$, where $f:X\rightarrow \mathbb C$ belongs to $L^2(X)$.
\end{definition}
Morency and Leus in \cite{MorencyLeus17} investigate asymptotic behavior of signal processing for sequences of graph signals converging to a graphon signal.
They use the limit theory of dense graph sequences (\cite{lovaszszegedy2006}) to define the notion of converging graph signals.
Their original definition is for graphons on $[0,1]$, but naturally extends to the general form of graphons as follows:
Let $w$ be a graphon on a standard probability space $(X,\mu)$.
A sequence of graph signals $\{(G_n, f_n)\}$ is said to converge to the graphon signal $(w , f)$ if there exists a labeling for each $G_n$ such that
$\|w_{G_n}-w\|_\Box\rightarrow 0$ and $\|f^X_n-f\|_2\rightarrow 0$,
where $f^X_n$ is the natural representation of the graph signal $f_n$ as a simple function in $L^2(X)$;
see Section \ref{sec:generalize} for the details of this definition, and Section \ref{subsec:graphon} for the definition of cut norm $\|\cdot\|_\Box$.
Ruiz, Chamon and Ribeiro in \cite{ruiz2}, and more extensively in \cite{RuizChamonRibeiro21}, give a convergence result for the graphon Fourier transform. The main result of \cite{RuizChamonRibeiro21} is that for a sequence of graph signals $\{(G_n,f_n)\}$ that converge to a graphon signal $(w,f)$, the graph Fourier transform of $(G_n,f_n)$ converges to the graphon Fourier transform of $(w,f)$ if the following conditions on $w$ and $f$ are satisfied:
\begin{itemize}
\item[(i)] the graphon signal $f$ is $c$-bandlimited,
\item[(ii)] $w$ is a non-derogatory graphon (i.e.~the integral operator associated with $w$ does not have repeated nonzero eigenvalues.)
\end{itemize}
In Section \ref{sec:generalize}, we generalize this result significantly. Namely, we drop both conditions which were imposed on $w$ and $f$, allowing for a convergence theorem for a much larger class of graphons (see Theorem~\ref{thm:convergence} for a precise statement of our result).
Even taking the density of non-derogatory graphons in the space of all graphons into account (\cite[Proposition 1]{RuizChamonRibeiro21}), our result genuinely extends the above-mentioned theorem of Ruiz \emph{et al}. Indeed, a linear operation (e.g.~graph Fourier transform in this context) may be continuous on a dense subset (e.g.~the collection of non-derogatory graphons), but fail to be continuous on the whole space (e.g.~all graphons). We note that many important examples of graphons, including many Cayley graphons on non-Abelian groups, have multiple non-zero eigenvalues and thus do not meet the criteria for convergence as obtained by Ruiz \emph{et al}.
To achieve the continuity of the graph Fourier transform on the entire space of graphons, we need to {refine the definition of graph/graphon Fourier transform}.
The theory of signal processing on graphs was originally inspired by Fourier analysis on ${\mathbb Z}_N$, or more generally, harmonic analysis on Abelian groups. Any shortcomings of this approach may be attributed to the fact that graphs lack the structural symmetries of ${\mathbb Z}_N$.
Inspired by Fourier analysis of non-Abelian groups, we replace the concept of ``Fourier coefficients'' by projections onto eigenspaces of the shift operator. This point of view enables us to deal with eigenvalues with multiplicity higher than 1. We then express continuity of the graph Fourier transform in terms of convergence of such projections in suitable norms. We note that every nonzero eigenvalue of a graphon $w$ has finite multiplicity, and the associated projection onto its eigenspace is a finite-rank operator. So our result in Theorem~\ref{thm:convergence} is of the form of convergence of matrices (of finite but increasing size) to finite-rank operators.
An important instance where Theorem~\ref{thm:convergence} may be employed is the class of \emph{Cayley graphons}, which we discuss in Section~\ref{sec:cayley}.
Cayley graphons, first introduced in \cite{cayley-graphon}, are defined on a (finite or compact) group $\mathbb G$ according to a connection function defined over $\mathbb G$. The group symmetries make Cayley graphons great models for many networks.
Cayley graphons form a natural extension of Cayley graphs.
Note that graphs sampled from a Cayley graphon are not, in general, themselves Cayley graphs. Instead, they can be seen as ``fuzzy versions'' of Cayley graphs, which preserve the symmetries of the group on a large scale, but are locally random.
In Section~\ref{sec:cayley}, we will show that the group structure of a Cayley graphon may be used to develop a specific framework for signal processing on the graphon, which can then be used to provide an instance-independent framework for graph signal processing.
As a critical step towards developing a framework for signal processing on Cayley graphons, we study the spectral decomposition of Cayley graphons in Section~\ref{sec:cayley}. Using the representation theory of the underlying group, we first show how to derive eigenvalues and eigenvectors of Cayley graphons. {Namely, in Theorem~\ref{thm:eigenvector}, we show that the eigenvalues/eigenvectors of a Cayley graphon can be derived from the eigenvalues/eigenvectors of the irreducible representations of the underlying group $\mathbb G$ applied to the function on $\mathbb G$ that defines the graphon.}
Proposition~\ref{prop:basis} then provides us with a basis for $L^2({\mathbb G})$ which diagonalizes the Cayley graphon (or more precisely, its associated integral operator). The proposed basis is obtained as a combination of coefficient functions of irreducible representations of ${\mathbb G}$. These are important functions associated with irreducible representations of a locally compact group, and play a central role in the harmonic analysis of non-Abelian groups.
Proposition~\ref{prop:basis} can be applied to Cayley graphs; in that case, it offers an improvement over many earlier results on calculating the eigenvalues and eigenvectors of Cayley graphs, as such results often work under the extra assumption that the generating set of the Cayley graph is closed under conjugation.
The bases as defined in Proposition~\ref{prop:basis} can be used as a framework for signal analysis of graphs whose structure conforms with the Cayley graphon.
\section{Notations and background}
\subsection{Signal processing on graphs}\label{subsec:graphs}
Let $G$ be a graph on the vertex set $V=\{v_1,\ldots,v_N\}$. A graph signal on $G$ is a function $f:V\to \mathbb C$, which can also be identified with a column vector $\left(f(v_1), f(v_2), \cdots, f(v_N)\right)^\top\in \mathbb C^N$, where $\top$ denotes the transpose of a vector.
Given a graph $G$ on $N$ nodes, a graph Fourier transform can be defined as the representation of signals on an orthonormal basis for $\mathbb C^N$ consisting of eigenvectors of the graph shift operator (i.e.~the adjacency matrix or the graph Laplacian). In the present work, we focus our attention on signal processing using the adjacency matrix as the shift operator.
For the rest of this article, let $A$ denote the adjacency matrix of a given graph $G$ on $N$ vertices, and fix an orthonormal basis of eigenvectors $\{\phi_i\}_{i=1}^{N}$ associated with eigenvalues $\lambda_1\leq \lambda_2\leq \ldots\leq \lambda_{N}$ of $A$. The \emph{graph Fourier transform} of a graph signal $f:V\to \mathbb C$ is defined as the expansion of $f$ in terms of the orthonormal basis $\{\phi_i\}_{i=1}^{N}$. That is,
\begin{equation}\label{GFT}
\widehat{f}(\phi_i) = \langle f, \phi_i\rangle_{\mathbb C^N} = \sum_{n=1}^N f(v_n)\overline{\phi_i(v_n)},
\end{equation}%
and in this setting the \emph{inverse graph Fourier transform} is given by
\begin{equation}\label{GIFT}
f(v_n) = \sum_{i=1}^{N} \widehat f (\phi_i) \phi_i(v_n).
\end{equation}
The notation $\widehat{f}(\lambda_i)$ is also used for $\widehat{f}(\phi_i)$. To avoid possible confusion when $A$ has repeated eigenvalues, we mainly use the notation in \eqref{GFT}.
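The transform pair \eqref{GFT}--\eqref{GIFT} can be checked numerically on a small example. The sketch below (the 4-cycle graph and the signal values are hypothetical choices, not taken from the text) obtains the orthonormal eigenbasis $\{\phi_i\}$ from the adjacency matrix and verifies that the inverse transform recovers the signal:

```python
import numpy as np

# Hypothetical example: the 4-cycle graph, with the adjacency matrix
# as the graph shift operator.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Orthonormal eigenbasis of the symmetric matrix A; eigh returns
# eigenvalues in ascending order, matching lambda_1 <= ... <= lambda_N.
eigvals, Phi = np.linalg.eigh(A)   # columns of Phi are the phi_i

f = np.array([1.0, 2.0, 3.0, 4.0])  # an arbitrary graph signal

# Graph Fourier transform, eq. (GFT): f_hat(phi_i) = <f, phi_i>.
f_hat = Phi.conj().T @ f

# Inverse graph Fourier transform, eq. (GIFT), recovers the signal.
f_rec = Phi @ f_hat
assert np.allclose(f_rec, f)
```

Since the basis is orthonormal, the transform also preserves the $\ell^2$-norm of the signal, which serves as a quick sanity check on any implementation.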
The above definition of the graph Fourier transform generalizes the classical \emph{discrete Fourier transform}.
Please refer to ~\cite{2013:Sandryhaila:DS,2014:Sandryhaila:BD,SNFOV} for a detailed background on the graph Fourier transform, and ~\cite{2018:Ortega:GSPOverview} for a general overview of graph signal processing.
\subsection{Fourier analysis on compact (not necessarily Abelian) groups}
In Section \ref{sec:cayley}, we develop a framework for signal processing on Cayley graphons, which relies on the Fourier analysis of the underlying group.
In this section, we give the necessary background on Fourier analysis {of compact groups.}
Let $\mathbb G$ be a compact (not necessarily Abelian) group equipped with its Haar measure. Let $L^2(\mathbb G)$ denote the set of measurable complex-valued functions on $\mathbb G$, identified up to sets of measure 0, that are square-integrable. The Banach space $L^2(\mathbb G)$ forms a Hilbert space when equipped with the inner product $\langle f,g\rangle=\int_{\mathbb G} f\overline{g}$. Let $L^1(\mathbb G)$ denote the set of measurable complex-valued functions on $\mathbb G$, identified up to sets of measure 0, that are integrable. For $f,g\in L^1(\mathbb G)$, we define their convolution product to be
\begin{equation}\label{def:convolution}(f*g)(x)=\int_{\mathbb G} f(y)g(y^{-1}x)\, dy.
\end{equation}
This convolution product turns $L^1(\mathbb G)$ into a Banach algebra. The Hilbert space $L^2(\mathbb G)$ and the Banach algebra $L^1(\mathbb G)$ play a central role in harmonic analysis of (locally) compact groups.
Given a Hilbert space $\mathcal H$, a map $\pi:\mathbb G\to \mathcal U(\mathcal H)$ is called a \emph{unitary representation} of $\mathbb G$ if $\pi$ is a group homomorphism
into the group of unitary operators on $\mathcal H$, denoted by $\mathcal U(\mathcal H)$. If $\mathcal H={\mathbb C}^n$, we say $\pi$ is an $n$-dimensional representation. Two representations $\pi:\mathbb G\to \mathcal U(\mathcal H_\pi)$ and $\rho:\mathbb G\to\mathcal U(\mathcal H_\rho)$ are said to be \emph{unitarily equivalent} if there exists a unitary map $U:\mathcal H_\pi\to\mathcal H_\rho$ such that
\[ U \pi(g) U^* = \rho(g), \quad \forall g\in \mathbb G.\]%
A representation $\pi$ is called \emph{irreducible} if it does not admit any non-trivial closed $\pi$-invariant subspaces.
The collection of (equivalence classes of) all irreducible representations of $\mathbb G$ is denoted by $\widehat{\mathbb G}$. In the special case where $\mathbb G$ is Abelian, every irreducible representation is 1-dimensional and the set $\widehat{\mathbb G}$, together with point-wise multiplication, forms a group called the \emph{dual group} of $\mathbb G$. For compact groups, every irreducible representation is finite dimensional. However, a compact group $\mathbb G$ may have representations of different dimensions. Therefore, in the case of a non-Abelian compact group, the collection $\widehat{\mathbb G}$ does not form a group.
\subsubsection{Fourier and inverse Fourier transforms for compact groups.} \label{subsec:FT-nonAbelian}
Let $f\in{L^1(\mathbb G)}$. The Fourier transform of $f$, denoted by ${\mathcal F}f$, is defined as a matrix-valued function on $\widehat{\mathbb G}$. At each $\pi\in\widehat{\mathbb G}$, the Fourier transform ${\mathcal F}f(\pi)$ is
a matrix of dimensions $d_\pi\times d_\pi$ with entries in $\mathbb C$, defined as
\begin{equation}\label{F-noncommutative}
{\mathcal F}f(\pi):=\pi(f)=\int_{\mathbb G} f(y)\pi(y)\, dy.
\end{equation}
Here the integral is taken with respect to the Haar measure of $\mathbb G$, and is interpreted entry-wise (i.e.~in the weak sense).
The inverse Fourier transform is given by
\begin{equation}\label{inverse-F-noncommutative}
f(x)=\sum_{\pi\in {\widehat{\mathbb G}}}d_\pi{\rm Tr}(\pi(x)^*\pi(f)),
\end{equation}
where the equality holds in $L^2(\mathbb G)$.
We should note that the Fourier and inverse Fourier transforms given in \eqref{F-noncommutative} and \eqref{inverse-F-noncommutative} are defined slightly differently in \cite{1995:Folland:HarmonicAnalysis}. The map $f\mapsto \widecheck{f}$, with $\widecheck{f}(x)=f(x^{-1})$, converts our definition of Fourier transform to the one in \cite{1995:Folland:HarmonicAnalysis}.
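For a finite Abelian group such as ${\mathbb Z}_N$, every irreducible representation is a 1-dimensional character $\pi_k(y)=e^{2\pi i ky/N}$, the Haar probability measure assigns mass $1/N$ to each element, and \eqref{F-noncommutative}--\eqref{inverse-F-noncommutative} reduce to the classical discrete Fourier transform. A minimal numerical check of the inversion formula under these assumptions:

```python
import numpy as np

N = 8
f = np.random.default_rng(0).standard_normal(N) + 0j  # a signal on Z_N

# Irreducible representations of Z_N are the 1-dimensional characters
# pi_k(y) = exp(2*pi*i*k*y/N); Haar probability measure = 1/N per point.
def F(f, k):
    # Eq. (F-noncommutative) specialised to Z_N: pi_k(f) = (1/N) sum f(y) pi_k(y)
    y = np.arange(N)
    return np.mean(f * np.exp(2j * np.pi * k * y / N))

def F_inv(coeffs, x):
    # Eq. (inverse-F-noncommutative): d_pi = 1 and the trace is trivial,
    # so f(x) = sum_k pi_k(x)^* pi_k(f).
    k = np.arange(N)
    return np.sum(np.exp(-2j * np.pi * k * x / N) * coeffs)

coeffs = np.array([F(f, k) for k in range(N)])
f_rec = np.array([F_inv(coeffs, x) for x in range(N)])
assert np.allclose(f_rec, f)
```

With the same normalisations, the Parseval equation of Proposition~\ref{prop:FT-cpt-group}(i) reads $\frac{1}{N}\sum_y |f(y)|^2 = \sum_k |\pi_k(f)|^2$, which can be verified on the arrays above.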
As demonstrated in the following proposition, the (matrix-valued) Fourier transform allows us to encode the non-commutative nature of non-Abelian groups.
\begin{proposition}{(Properties of Fourier transform on compact groups)}\label{prop:FT-cpt-group}
Let $\mathbb G$ be a compact group. For the Fourier and inverse Fourier transforms defined above we have:
\begin{itemize}
\item[(i)] (Parseval equation) For every $f\in L^2(\mathbb G)$, we have
$$\|f\|_2^2=\sum_{\pi\in\widehat{\mathbb G}}d_\pi{\rm Tr}[\pi(f)^*\pi(f)].$$
\item[(ii)] For $f,g\in L^1(\mathbb G)$ and $\pi\in\widehat{\mathbb G}$, $\pi(f*g)=\pi(f)\pi(g).$ Here $f*g$ denotes convolution as defined in \eqref{def:convolution}.
\item[(iii)] For $f\in L^1(\mathbb G)$ and $\pi\in\widehat{\mathbb G}$, $\pi(f^*)=\pi(f)^*,$ where $f^*(x)=\overline{f(x^{-1})}$. Here, $\pi(f)^*$ denotes the usual adjoint of the matrix $\pi(f)$.
\end{itemize}
\end{proposition}
Let $\mathbb G$ be a compact (not necessarily Abelian) group, and $\pi$ be an irreducible representation of $\mathbb G$ on the Hilbert space $\mathbb C^{d_\pi}$.
Let $\{e_k\}_{k=1}^{d_\pi}$ be the standard basis for $\mathbb C^{d_\pi}$, and define the \emph{coefficient function} of $\pi$ associated with $e_i, e_j$ to be the complex-valued function on $\mathbb G$ defined as
\begin{equation}\label{eq:coefficient}
\pi_{i,j}(\cdot):= \langle\pi(\cdot)e_i, e_j\rangle_{\mathbb C^{d_\pi}}.
\end{equation}
\begin{proposition}{(Schur's orthogonality relations \cite[Theorem 5.8]{1995:Folland:HarmonicAnalysis})}\label{prop:Schur-cpt-group}
For every $n$, let $\rm U_n(\mathbb C)$ denote the set of all unitary matrices of size $n$.
Let $\pi:\mathbb G\to \rm U_n(\mathbb C)$ and $\rho:\mathbb G\to \rm U_m(\mathbb C)$ be inequivalent, irreducible unitary representations. Then
\begin{itemize}
\item[(i)] $\langle\pi_{i,j}, \rho_{r,s}\rangle_{L^2(\mathbb G)}=0$ for all $1\leq i,j\leq d_\pi$ and $1\leq r,s\leq d_\rho$,
\item[(ii)] $\langle\pi_{i,j}, \pi_{r,s}\rangle_{L^2(\mathbb G)} = \frac{1}{d_\pi}\delta_{i,r} \delta_{j,s}$,
\end{itemize}
where $\delta_{i,j}$ is the Kronecker delta function.
Consequently, the collection $\big\{\sqrt{d_\pi}\pi_{i,j}\big\}_{\pi\in \widehat{\mathbb G},\, i,j=1,\cdots,d_\pi}$ forms an orthonormal (Hilbert) basis for $L^2(\mathbb G)$.
\end{proposition}
See \cite{1995:Folland:HarmonicAnalysis, Terras:1999:FourierAnalysisOnFiniteGroups} for full details of Fourier analysis of non-Abelian groups.
\subsection{Graph limit theory}\label{subsec:graphon}
Let ${\mathcal W}_0$ denote the set of all {graphons} on $[0,1]^2$, that is, the set of all measurable functions $w: [0,1]^2 \to [0,1]$ that are symmetric,
i.e.~$w(x, y) = w(y, x)$ for every point $(x, y)$ in $[0, 1]^2$.
Let ${\mathcal W}$ denote the (real) linear span of ${\mathcal W}_0$. Every graph can be identified with a $0/1$-valued graphon as follows.
\begin{definition}\label{def:wG}
Let $G$ be a graph on $n$ vertices labeled $ \{ 1,2,\dots ,n\}$.
The $0/1$-valued graphon $w_G$ associated with $G$ is defined as follows: split $[0, 1]$ into $n$ equal-sized intervals $\{I_i\}_{i=1}^n$.
For every $i, j\in \{1,\ldots, n\}$, $w_G$ attains 1 on $I_i \times I_j$ precisely when vertices with labels $i$ and $j$ are adjacent. Note that $w_G$ depends on the labeling of the vertices of $G$, that is, relabeling the vertices of $G$ results in a different graphon.
\end{definition}
The topology described by convergent (dense) graph sequences can be formalized by endowing ${\mathcal W}$ with the cut-norm, introduced in \cite{cut-norm}. For $w \in {\mathcal W}$, the cut-norm is defined as:
\begin{equation}
\label{cutnorm}
\| w \|_{\Box}= {\rm sup}_{S,T \subset [0,1]}\left|\int_{S \times T} w(x,y)\, dxdy \right|,
\end{equation}
%
where the supremum is taken over all measurable subsets $S,T$ of $[0,1]$. To develop an unlabeled graph limit theory, the cut-distance between $u,w\in{\mathcal W}$ is defined as follows.
\begin{equation}
\label{cutdistance}
\delta_{\Box}(u,w)= \inf_{\sigma\in \Phi}\|u^\sigma-w\|_\Box,
\end{equation}
where $\Phi$ is the space of all measure-preserving bijections on $[0,1]$, and $w^\sigma(x,y)=w(\sigma(x),\sigma(y))$. This definition ensures that $\delta_\Box(w,u)=0$ when the graphons $w$ and $u$ are associated with the same graph $G$ given two different vertex labelings. In general, two graphons $u$ and $w$ are said to be \emph{$\delta_\Box$-equivalent} (or equivalent, for short), if $\delta_\Box(u,w)=0$.
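For a step function in ${\mathcal W}$ that is constant on an $n\times n$ grid of equal-measure blocks, the integral in \eqref{cutnorm} is bilinear in the fractions of each block covered by $S$ and $T$, so its absolute value is maximised at 0/1 choices; the cut-norm can therefore be computed exactly by brute force over subsets of blocks (feasible only for small $n$). A sketch, with hypothetical example matrices:

```python
import numpy as np
from itertools import product

def cut_norm_step(W):
    # Exact cut-norm of a step function constant on an n x n grid of
    # equal-measure blocks with values W[i, j]; each block of [0,1]^2
    # has measure 1/n^2.  Brute force over all block subsets S, T.
    n = W.shape[0]
    best = 0.0
    for S in product([0, 1], repeat=n):
        for T in product([0, 1], repeat=n):
            val = abs(np.asarray(S) @ W @ np.asarray(T)) / n**2
            best = max(best, val)
    return best

# Example: the difference of two 2-step graphons (an element of W).
U = np.array([[0.5, 0.2], [0.2, 0.5]])
V = np.array([[0.4, 0.3], [0.3, 0.4]])
d = cut_norm_step(U - V)
# U - V = [[0.1, -0.1], [-0.1, 0.1]]; the full square cancels to 0,
# and the best choice is a single block: |0.1| * (1/4) = 0.025.
assert np.isclose(d, 0.025)
```

This brute-force check is only illustrative; computing the cut-norm of a general step function with many blocks is computationally hard, which is why approximation schemes are used in practice.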
It is known that a graph sequence $\{ G_n\}$ converges in the sense of Lov\'{a}sz-Szegedy
whenever the corresponding sequence of graphons $\{w_{G_n}\}$ is $\delta_\square$-Cauchy. The limit object for such a convergent sequence can be represented as a graphon in ${\mathcal W}_0$ (not necessarily corresponding to a graph). That the graph sequence $\{G_n\}$ is convergent
to a limit object $w \in {\mathcal W}_0$ is equivalent to $\delta_\Box(w_{G_n},w)\rightarrow 0$ as $n$ tends to infinity. This, in turn, is equivalent to the
existence of suitable labelings for the vertices of each of the graphs $G_n$ for which we have
\begin{equation}
\label{cutdist}
\lVert w_{G_n} - w \rVert_{\square} = \underset{S,T \subset [0,1]} {\rm sup} \Bigl \lvert \int_{S \times T} ( w_{G_n} - w ) \Bigr \rvert \rightarrow 0.
\end{equation}
See \cite[Theorem 2.3]{BCLSV2011} for the above convergence results.
A graphon $w$ can be interpreted as a probability distribution on random graphs, sampled via the \emph{$w$-random graph} process ${\mathcal G}(n,w)$.
The concept of {$w$-random graphs} was introduced in \cite{lovaszszegedy2006}, as a tool for generating examples of convergent graph sequences.
For a graphon $w$, we define the random process ${\mathcal G}(n,w)$ as follows. Given the vertex set with labels $\{ 1,2,\dots ,n\}$, edges are formed according to $w$ in two steps. First, each vertex $i$ is assigned a value $x_i$ drawn uniformly at random from $[0,1]$. Next, for each pair of vertices with labels $i<j$ independently, an edge $\{ i,j\}$ is added with probability $w(x_i,x_j)$.
It is known that the sequence $\{{\mathcal G}(n,w)\}_n$ almost surely forms a convergent graph sequence, for which the limit object is the graphon $w$
(see \cite{lovaszszegedy2006}).
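The two-step sampling procedure for ${\mathcal G}(n,w)$ described above can be sketched as follows (the graphon $w(x,y)=xy$ is a hypothetical example chosen so the expected edge density is easy to compute):

```python
import numpy as np

def sample_w_random_graph(n, w, rng=None):
    # Step 1: each vertex i gets x_i ~ Uniform[0, 1].
    rng = np.random.default_rng(rng)
    x = rng.uniform(size=n)
    # Step 2: each pair i < j is joined independently with prob. w(x_i, x_j).
    P = w(x[:, None], x[None, :])           # pairwise connection probabilities
    U = rng.uniform(size=(n, n))
    A = np.triu((U < P).astype(int), k=1)   # independent coin flips for i < j
    return A + A.T                          # symmetric adjacency matrix

# Example graphon w(x, y) = x*y: the edge density of G(n, w) should
# concentrate near the integral of w over [0,1]^2, i.e. 1/4.
A = sample_w_random_graph(2000, lambda x, y: x * y, rng=1)
n = A.shape[0]
density = A.sum() / (n * (n - 1))
assert abs(density - 0.25) < 0.03
```

The concentration of the edge density around $\int_{[0,1]^2} w$ is one elementary manifestation of the convergence ${\mathcal G}(n,w)\to w$.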
For a comprehensive account of dense graph limit theory, we refer the reader to \cite{lovasz-book}.
\subsection{Cayley graphons}\label{subsec:general-graphon}
Graphons can be represented on any standard probability space $(X,\mu)$ rather than the usual choice $[0,1]$.
As given in Definition \ref{def:graphon-signal}, a function $w:X\times X\to [0,1]$ is called a graphon represented on $X$ if $w$ is measurable and symmetric. The concepts of cut-norm, cut-distance and $w$-random graphs, for a graphon $w$ on $X$, are defined analogously to the corresponding concepts for ${\mathcal W}_0$. Representing graphs and graphons on a particular probability space $X$ can offer insights on their geometric structure, which may be lost otherwise.
Cayley graphons, as defined in Definition \ref{def:cayley-graphon}, provide natural examples of this phenomenon. In most cases, it is beneficial to represent a Cayley graphon associated with a (compact) group $\mathbb G$ on the probability space provided by $\mathbb G$.
\begin{definition}\label{def:cayley-graphon}
Let $\mathbb G$ be a second countable compact group. Then $\mathbb G$ equipped with its Haar measure forms a standard probability space.
Let $\gamma:\mathbb G\rightarrow [0,1]$ be a measurable function such that $\gamma(x)=\gamma(x^{-1})$.
Then the graphon $w:\mathbb G\times \mathbb G\to [0,1]$ defined as $w(x,y)=\gamma(xy^{-1})$ is called the Cayley graphon defined by $\gamma$ on the group $\mathbb G$, and the function $\gamma$ is called a Cayley function.
\end{definition}
When $\mathbb G$ is a second countable, infinite, compact group, the probability space provided by $\mathbb G$, together with its Haar measure, is standard and atom-free.
There is an easy correspondence between the representation of a graphon on an atom-free standard probability space and on $[0,1]$. Let $(X,\mu)$ be an atom-free standard probability space.
It is well-known that $X$ is isomorphic (mod 0) to the uniform probability space $[0,1]$. Let $\sigma_X$ be a fixed isomorphism (mod 0) between $[0,1]$ and $X$, that is, there are measure-zero sets $A_1\subseteq [0,1]$ and $A_2\subseteq X$ and an invertible map $\sigma_X:[0,1]\setminus A_1\to X\setminus A_2$ such that both $\sigma_X$ and its inverse are measurable and measure-preserving. Now if $w:X\times X\rightarrow [0,1]$ is a graphon on $(X,\mu)$, then
$w_0$, defined as follows, is a representation of $w$ on $[0,1]$:
\[w_0:[0,1]^2\rightarrow [0,1], \quad w_0(x,y)=\left\{\begin{array}{cc}
w(\sigma_X(x),\sigma_X(y)) & \mbox{ if } x,y\in [0,1]\setminus A_1, \\
0 & \mbox{otherwise}.
\end{array}\right. \]
Note that the value of a graphon on a null set does not affect its graph-limit-theoretic behavior, as graphons which are almost everywhere equal belong to the same $\delta_\Box$-equivalence class.
So, the random graphs ${\mathcal G}(n,w_0)$ and ${\mathcal G}(n,w)$ are equivalent.
On the other hand, a (labeled) graph $G$ on the vertex set $V(G)=\{1,\ldots, n\}$ can be identified with a 0/1-valued graphon $w_{G,X}$ on $X$ as well. %
Namely,
\begin{equation}\label{eq:W-G,X}
w_{G,X}(x,y)=\left\{\begin{array}{ll}
w_G(\sigma_X^{-1}(x),\sigma_X^{-1}(y)) & \text{for }x,y\in X\setminus A_2,\\
0 & \text{on the remaining null set,}
\end{array}\right.
\end{equation}
where $w_G\in {\mathcal W}_0$ is the graphon associated with $G$ {as in Definition~\ref{def:wG}}.
Now suppose $\mathbb G$ is a finite group of size $N$, and $w_{\mathbb G,\gamma}:\mathbb G\times \mathbb G\rightarrow [0,1]$ is a Cayley graphon on $\mathbb G$ defined by the function $\gamma:\mathbb G\rightarrow [0,1]$. The graphon $w_{\mathbb G,\gamma}$ is clearly a step function, and can be represented as a step graphon $w_0$ on $[0,1]$ as follows. Split $[0, 1]$ into $N$ equal-sized intervals $\{I_s\}_{s\in\mathbb G}$, labeled by the elements of $\mathbb G$. This partition defines the map $\sigma_\mathbb G: [0,1]\rightarrow \mathbb G$ as $x\mapsto s$ if and only if $x\in I_s$. The function $\sigma_\mathbb G$ is a measure-preserving (not invertible) map, which allows the representation of
$w_{\mathbb G,\gamma}$ on $[0,1]$, defined as below:
$$w_0(x,y)=w_{\mathbb G,\gamma}(\sigma_\mathbb G(x),\sigma_\mathbb G(y)),$$
that is, for every $s,t\in \mathbb G$, the graphon $w_0$ attains the value $\gamma (st^{-1})$ on $I_s \times I_t$.
We can sample from a Cayley graphon on a finite group via the $w$-random graph ${\mathcal G}(n,w_0)$ or, equivalently, via the $w$-random graph ${\mathcal G}(n,w_{\mathbb G, \gamma})$.
To form $G\sim {\mathcal G}(n,w_{\mathbb G, \gamma})$, assign to each vertex with label $i\in\{1,2,\dots ,n\}$ a group element $x_i\in \mathbb G$ selected uniformly at random. Next, each pair of vertices with labels $i<j$ are linked independently with probability $\gamma(x_ix_j^{-1})$. The assignment of the group elements to the vertices
of $G$ can be viewed as a natural partition of the vertex set of $G$ into $|\mathbb G|$ subsets.
The connection probability between two vertices is then completely determined by the subsets they belong to.
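The sampling procedure for a Cayley graphon on a finite group can be sketched as follows. Purely for illustration we take $\mathbb G={\mathbb Z}_m$ (written additively, so $x_ix_j^{-1}$ becomes $x_i-x_j \bmod m$) with a hypothetical Cayley function $\gamma$; any finite group given by its multiplication table would work the same way:

```python
import numpy as np

# Hypothetical Cayley function on Z_6; gamma must satisfy
# gamma(x) = gamma(x^{-1}), i.e. gamma[k] == gamma[-k mod m].
m = 6
gamma = np.array([0.9, 0.5, 0.1, 0.05, 0.1, 0.5])
assert all(np.isclose(gamma[k], gamma[-k % m]) for k in range(m))

def sample_cayley_graphon(n, gamma, rng=None):
    rng = np.random.default_rng(rng)
    # Step 1: assign each vertex a group element uniformly at random.
    x = rng.integers(m, size=n)
    # Step 2: link i < j independently with probability
    # gamma(x_i * x_j^{-1}) = gamma[(x_i - x_j) mod m].
    P = gamma[(x[:, None] - x[None, :]) % m]
    A = np.triu((rng.uniform(size=(n, n)) < P).astype(int), k=1)
    return A + A.T, x

A, x = sample_cayley_graphon(500, gamma, rng=0)
# The group-element assignment x partitions the vertices into |G| parts;
# the connection probability depends only on the pair of parts.
assert A.shape == (500, 500) and np.array_equal(A, A.T)
```

Graphs produced this way are the "fuzzy" Cayley graphs discussed above: the block structure induced by the partition reflects the group, while individual edges are random.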
\begin{remark}\label{rem:every-graphon-in-W0}
The previous paragraphs consider the cases where the probability space either is atom-free or consists entirely of atoms. In general, any graphon $w:X\times X\rightarrow [0,1]$ represented on a standard probability space $(X,\mu)$ has a $\delta_\Box$-equivalent representation $w_0$ in ${\mathcal W}_0$. Indeed, every standard probability space is isomorphic mod 0 to a disjoint union of a closed interval (equipped with the Lebesgue measure) and a countable set of atoms. A combination of the above arguments for finite and infinite compact groups can be easily adjusted to produce a (not necessarily injective) measure-preserving map $\sigma_X:[0,1]\to X$.
The representation $w_0\in {\mathcal W}_0$ is then defined as $w_0(x,y)=w(\sigma_X(x),\sigma_X(y))$.
\end{remark}
\subsection{Spectral decomposition of graphons}\label{sec:SpectralDecompGraphons}
For the rest of this article, we consider graphons in their general form, i.e.~represented on a standard probability space $(X,\mu)$.
We may assume that $X$ is infinite; as seen in the previous section, graphons derived from finite graphs or groups can be represented as graphons on the infinite probability space $[0,1]$.
Every graphon $w:X\times X\to [0,1]$ can act as the kernel of an integral operator on the Hilbert space $L^2(X)$ as follows:
$$T_w: L^2(X)\to L^2(X), \quad T_w(\xi)(x)=\int_X w(x,y)\xi(y)\, d\mu(y), \mbox{ for } \xi\in L^2(X), x\in X.$$
By removing a null set if necessary, we can assume \emph{wlog} that $X$ is chosen so that $L^2(X)$ is separable. In particular, if $X$ is a locally compact group, we assume it is second countable. This assumption guarantees the existence of a finite or countable orthonormal basis for $L^2(X)$.
Since $w\in L^2(X\times X)$ is real-valued and symmetric, the Hilbert-Schmidt operator $T_w$ is self-adjoint. In addition, the operator norm of $T_w$ is at most $\|w\|_\infty$, which is bounded by 1. Thus, $T_w$ has a countable spectrum lying in the interval $[-1,1]$, for which 0 is the only possible accumulation point. We label the nonzero eigenvalues of $T_w$ as follows:
\begin{eqnarray}\label{eq:ordering}
1\geq \lambda_1(w)\geq \lambda_2(w)\geq \ldots\geq 0\geq \ldots \geq \lambda_{-2}(w)\geq \lambda_{-1}(w)\geq -1
\end{eqnarray}
Note that the set of positive eigenvalues of $T_w$ may be finite. In that case, we pad the sequence with 0's at the end, so we can view it as an infinite sequence. We proceed similarly for the set of negative eigenvalues. This arrangement is important for our discussion of the convergence of spectra for converging sequences of dense graphs (e.g.~in Theorem~\ref{thm:spectrum}).
Using spectral theory for compact operators, we see that $L^2(X)$ admits an orthonormal basis containing eigenvectors of $T_w$; this results in a spectral decomposition for $T_w$. More precisely, let $I_w\subseteq {\mathbb Z}^*$ be the indices in \eqref{eq:ordering} enumerating nonzero eigenvalues of $T_w$, and let $\{\phi_i\}_{i\in I_w}$ be an orthonormal collection of associated eigenvectors. Then we have
\begin{equation}\label{eq:spectral-decom}
T_w=\sum_{i\in I_w}\lambda_i(w) \, \phi_i\otimes \phi_i,
\end{equation}
where $\phi_i\otimes \phi_i$ denotes the rank-one projection on $L^2(X)$ defined as $(\phi_i\otimes\phi_i)(\xi)=\langle\xi,\phi_i\rangle\phi_i$ for every $\xi\in L^2(X)$. Note that the infinite sum in the above spectral decomposition should be interpreted as operator-norm convergence in the space of bounded operators on $L^2(X)$. Since $T_w$ is a Hilbert-Schmidt operator, the spectral decomposition sum converges in the Hilbert-Schmidt norm as well. In particular, given the spectral decomposition (\ref{eq:spectral-decom}), we have
\begin{equation}\label{eq:spectral-l2}
w=\sum_{i\in {I}_w}\lambda_i(w) \, \phi_i\otimes \phi_i,
\end{equation}
where $\phi_i\otimes \phi_i\in L^2(X\times X)$ is defined to be $(\phi_i\otimes \phi_i)(x,y)=\phi_i(x)\phi_i(y)$, and the convergence of the infinite sum is interpreted as convergence in $L^2(X\times X)$.
Note that by slight abuse of notation, we use $\phi_i\otimes \phi_i$ to denote a rank-one projection on $L^2(X)$ or its associated kernel in $L^2(X\times X)$.
For background material on compact operators and their spectral theory, we refer to \cite[Chapters 13--14]{Bollobas-book}.
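As a concrete finite-rank illustration (a numerical sketch with an assumed block matrix $W$, not taken from the text): for a step graphon with $m$ equal blocks and block values $W$, the operator $T_w$ acts on step functions as the matrix $W/m$, and the decomposition \eqref{eq:spectral-l2} can be verified directly.

```python
import numpy as np

# A step graphon on [0,1] with m equal blocks: w equals W[k, l] on I_k x I_l.
# On step functions, T_w acts as the matrix W / m.
m = 4
W = np.array([[0.9, 0.4, 0.1, 0.4],
              [0.4, 0.9, 0.4, 0.1],
              [0.1, 0.4, 0.9, 0.4],
              [0.4, 0.1, 0.4, 0.9]])

evals, V = np.linalg.eigh(W / m)    # spectrum of T_w, contained in [-1, 1]
phi = np.sqrt(m) * V                # block values of unit-norm eigenfunctions

# reconstruct w from its spectral decomposition: w = sum_i lambda_i phi_i x phi_i
W_rec = (phi * evals) @ phi.T
assert np.allclose(W_rec, W)
assert np.abs(evals).max() <= 1.0
```

Here the columns of `phi` are orthonormal in $L^2[0,1]$ (the factor $\sqrt{m}$ compensates for the block measure $1/m$), so the finite sum recovers $w$ exactly.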
We finish the discussion about spectra by quoting two theorems on convergence of spectra.
\begin{theorem}{\cite[Theorem 11.54]{lovasz-book}}\label{thm:spectrum}
Let $\{w_n\}_{n\in\mathbb N}$ be a sequence of graphons converging to a graphon $w$ in $\delta_\Box$-distance. Then for every fixed $i\in\mathbb Z^*$, we have %
$$\lambda_i(w_n)\rightarrow \lambda_i(w)\ \mbox{ as } \ n\rightarrow \infty,$$
{where the eigenvalues of each graphon are indexed as in \eqref{eq:ordering}.}
\end{theorem}
A more careful analysis of convergence of spectra was given in \cite{Szegedy-spectra} where the convergence of eigenspaces was shown.
For a graphon $w$ with spectral decomposition as in \eqref{eq:spectral-decom} and a positive number $\alpha>0$, define
\begin{equation}\label{eq:cut-off}
[w]_\alpha=\sum_{\{i\in {\mathbb Z}^*:\ |\lambda_i(w)|>\alpha\}}\lambda_i(w) \, \phi_i\otimes \phi_i.
\end{equation}
\begin{theorem}{\cite[Proposition 1.1]{Szegedy-spectra}}\label{thm:szegedy-spectra}
Let $\{w_n\}_{n\in\mathbb N}$ be a sequence of graphons. Then the following two statements are equivalent:
\begin{itemize}
\item[(i)] The sequence $\{w_n\}_{n\in\mathbb N}$ converges to $w$ in cut-norm.
\item[(ii)] There is a decreasing positive real sequence $\{\alpha_n\}_{n\in\mathbb N}$ converging to 0 such that for every $j\in\mathbb N$, we have
$\|[{w_n}]_{\alpha_j}-[w]_{\alpha_j}\|_{L^2(X\times X)}\rightarrow 0$.
\end{itemize}
Furthermore, in the second statement the cut-norm limit $w$ of $\{w_n\}_{n\in\mathbb N}$ can be computed as
$$w=\lim_{j\rightarrow \infty}\left(\lim_{i\rightarrow \infty}[w_i]_{\alpha_j}\right)$$
converging in $L^2(X\times X)$.
\end{theorem}
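The cut-off \eqref{eq:cut-off} simply thresholds the spectrum. For a step graphon with equal blocks and assumed block values $W$ (a numerical sketch, not taken from the text), it can be computed as follows.

```python
import numpy as np

def cutoff(W, alpha):
    """Compute [w]_alpha for the step graphon with equal blocks and block
    values W: keep the spectral components of T_w = W/m with |lambda| > alpha."""
    m = W.shape[0]
    evals, V = np.linalg.eigh(W / m)
    keep = np.abs(evals) > alpha
    phi = np.sqrt(m) * V                      # unit-norm step eigenfunctions
    return (phi[:, keep] * evals[keep]) @ phi[:, keep].T

W = np.array([[0.9, 0.4, 0.1, 0.4],
              [0.4, 0.9, 0.4, 0.1],
              [0.1, 0.4, 0.9, 0.4],
              [0.4, 0.1, 0.4, 0.9]])
# T_w has spectrum {0.45, 0.2, 0.2, 0.05}: alpha = 0.1 keeps a rank-3 part,
# and alpha -> 0 recovers w itself
assert np.linalg.matrix_rank(cutoff(W, 0.1)) == 3
assert np.allclose(cutoff(W, 0.0), W)
```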
\section{Graphon signal processing}\label{sec:generalize}
Graphons can be viewed as limit objects of graph sequences. Consequently, it is natural to develop the idea of a Fourier transform on graphons in such a way that the graph Fourier transform, along a converging graph sequence, converges (in some appropriate topology) to the graphon Fourier transform of the limit object. Such an approach was first proposed by Ruiz, Chamon and Ribeiro in \cite{ruiz2}, and expanded upon in \cite{RuizChamonRibeiro21}. They define a graphon Fourier transform based on the spectral decomposition of the graphon, and give a convergence result restricted to the class of so-called non-derogatory graphons. In this section, we show that the restriction can only be removed with a broader definition of the graphon Fourier transform, which is independent of the choice of basis for each eigenspace. Using this new definition, we establish a more general convergence result stated in Theorem \ref{thm:convergence}.
The graphon Fourier transform as defined in \cite{RuizChamonRibeiro21} is evidently motivated by Fourier analysis on ${\mathbb R}$.
Namely, it is derived from an orthonormal basis of $L^2(X)$, say ${\mathcal B}$, consisting of eigenvectors of $T_w$.
The {graphon Fourier transform (WFT)} of a graphon signal $(w,f)$ is then defined via expansion of $f$ with respect to the basis ${\mathcal B}$, that is,
\begin{equation*}\label{WFT}
\widehat{f}(\phi)=\langle f,\phi\rangle=\int_X f(x)\overline{\phi(x)}\,d\mu(x) \ \mbox{ for } \phi\in {\mathcal B} \mbox{ (as defined in \cite{RuizChamonRibeiro21})}.
\end{equation*}
The inverse graphon Fourier transform (iWFT) of $\widehat{f}$ is given by
\begin{equation*}\label{iWFT}
{\rm iWFT}(\widehat{f})=\sum_{\phi\in {\mathcal B}}\widehat{f}(\phi)\phi \ \mbox{ (as defined in \cite{RuizChamonRibeiro21})}.
\end{equation*}
Since ${\mathcal B}$ is an orthonormal basis of $L^2(X)$, we have ${\rm iWFT}(\widehat{f})=f$, with the equality interpreted in $L^2(X)$.
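In the finite-rank case, this basis-expansion WFT and its inverse can be checked numerically. The following sketch uses an assumed step graphon with block values $W$ and a step-function signal, for which the eigenfunctions of $T_w$ span the relevant space.

```python
import numpy as np

m = 4
W = np.array([[0.9, 0.4, 0.1, 0.4],
              [0.4, 0.9, 0.4, 0.1],
              [0.1, 0.4, 0.9, 0.4],
              [0.4, 0.1, 0.4, 0.9]])
evals, V = np.linalg.eigh(W / m)
phi = np.sqrt(m) * V                 # orthonormal eigenfunctions of T_w (block values)

f = np.array([1.0, -2.0, 0.5, 3.0])  # a step-function signal on the graphon
fhat = phi.T @ f / m                 # WFT: <f, phi> in L^2[0,1]
f_rec = phi @ fhat                   # iWFT: sum over phi of fhat(phi) * phi
assert np.allclose(f_rec, f)         # iWFT(WFT(f)) = f
```

The Fourier coefficients also satisfy Parseval's identity, $\sum_\phi |\widehat f(\phi)|^2=\|f\|_{L^2}^2$, since ${\mathcal B}$ is orthonormal.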
We now describe precisely what we mean by convergence of a sequence of graph/graphon signals to a graphon signal, using the framework discussed in Section \ref{subsec:general-graphon}.
Let $G$ be a graph on $n$ vertices labeled $\{1,2,\dots,n\}$, and let $w:X\times X\rightarrow [0,1]$ be a graphon represented on an infinite standard probability space $X$.
With the given labeling for the vertex set of $G$, a signal on $G$ is just a function $f:\{ 1,2,\dots,n\}\rightarrow \mathbb C$.
With respect to this labeling, every graph signal can also be viewed as a step function in $L^2[0,1]$ by identifying each vertex $v$ labeled $i$ with the interval $I_i=[\frac{i-1}{n},\frac{i}{n})$.
By Remark \ref{rem:every-graphon-in-W0}, the graphon $w$ can be transformed to a graphon $w_0 \in {\mathcal W}_0$ using a measure-preserving map $\sigma_X:[0,1]\to X$.
Applying the same measure-preserving map to the step function $f\in L^2[0,1]$ allows us to transform $f$ to a signal $f^X\in L^2(X)$ on the graphon $w$; namely, we define
\begin{equation}
\label{eqn:fX}
f^X(s)=\sqrt{n}f(k) \quad \forall s\in \sigma_X(I_k),\, 1\leq k\leq n.
\end{equation}
Note that the scaling factor of $\sqrt{n}$ in \eqref{eqn:fX} ensures that the map $f\mapsto f^X$ is an isometry from $\mathbb C^n$ to $L^2(X)$:
if $f,g:\{ 1,2,\dots ,n\}\to \mathbb C$ are signals on a graph $G$ with $n$ vertices, then
\begin{equation}
\label{eq:square-root-factor}
\langle f^X,g^X\rangle_{L^2(X)} = \langle f,g\rangle_{\mathbb C^n}.
\end{equation}
Consequently, applying the normalized graph shift operator $\frac{1}{n}A$ to $f$ in $\mathbb C^n$ yields the same result as applying the corresponding graphon shift operator to the function $f^X$ in $L^2(X)$. Namely, let $G$ be a graph on $n$ vertices labeled as above. Let $A$ be the adjacency matrix of $G$, and $f$ be a signal on $G$ viewed as a vector $f\in \mathbb C^n$. Fix an infinite standard probability space $X$, with the measure-preserving map $\sigma_X:[0,1]\to X$. Let $w_{G,X}:X\times X \rightarrow [0,1]$ be the graphon associated with $G$ represented on $X$ (as defined in \eqref{eq:W-G,X}). Then
\begin{equation}\label{eq:Tdiscrete-vs-cts}
T_{w_{G,X}}f^X (x)= \int_{X} w_{G,X}(x,y)f^X(y)\, d\mu(y)=\sum_{j=1}^n \frac{1}{\sqrt{n}}A_{k,j}f_j = \left(\frac{1}{n}Af\right)^X(x), \mbox{ for } x\in \sigma_X(I_k).
\end{equation}
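The isometry \eqref{eq:square-root-factor} is easy to verify numerically; the sketch below (with arbitrary assumed signals, and $X=[0,1]$ with $\sigma_X$ the identity) tabulates the lifts on a fine grid.

```python
import numpy as np

n = 5
f = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
g = np.array([0.0, 1.0, 4.0, -2.0, 1.0])

# tabulate the lifts f^X, g^X on a grid with r cells per interval I_k;
# f^X takes the constant value sqrt(n) * f(k) on I_k
r = 200
fX = np.repeat(np.sqrt(n) * f, r)
gX = np.repeat(np.sqrt(n) * g, r)

# <f^X, g^X> in L^2[0,1]: the Riemann sum is exact for step functions
inner = fX @ gX / (n * r)
assert np.isclose(inner, f @ g)      # the lift is an isometry
```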
To discuss convergence of graphon signals, we need to clearly distinguish between convergence in cut-norm and cut-distance.
\begin{definition}\label{def:graph_signal_convergence}
We say a sequence $\{(w_n,f_n)\}_{n\in\mathbb N}$ of graphon signals on a standard probability space $(X,\mu)$ converges \emph{in norm} to a graphon signal $(w,f)$
if $$\|w_n-w\|_\Box\rightarrow 0,
\ \mbox{ and } \
\|{f}_n-f\|_2\rightarrow 0.$$
Fix an infinite standard probability space $(X,\mu)$, together with a measure-preserving map $\sigma_X:[0,1]\to X$.
A sequence $ \{(G_n, f_n)\}_{n\in \mathbb N}$ of graph signals converges to a graphon signal $(w,f)$ represented on $(X,\mu)$
if there exist labelings of each of the graphs $G_n$ so that the suitably labeled graphon signal sequence $\{(w_{G_n, X},f^X_n)\}_{n\in\mathbb N}$ converges in norm.
\end{definition}
Ruiz \emph{et al.}~prove that the GFT of a converging sequence of graph signals converges to the WFT of the limiting graphon signal. Their result is limited to graphons and signals with certain properties. We restate the convergence result here; see \cite[Theorem 1]{RuizChamonRibeiro21} for the original statement.
\begin{theorem*}{\rm \cite[Theorem 1]{RuizChamonRibeiro21}}
Let $\{(G_n , f_n )\}$ be a sequence of graph signals converging to the graphon signal $(w,f)$.
Assume all graphs are labeled to ensure convergence as in Definition \ref{def:graph_signal_convergence}.
Suppose the following conditions hold:
\begin{itemize}
\item[(i)] The signal $(w,f)$ is $c$-bandlimited for some $c>0$. That is, $\widehat{f}(\chi)=0$ whenever $\chi$ is a $\lambda$-eigenvector of $T_w$ with $|\lambda|<c$.
\item[(ii)] The graphon $w$ is non-derogatory, i.e.~every nonzero eigenvalue of $T_w$ has multiplicity 1.
\end{itemize}
Let $\{\phi_j\}$ and $\{\phi_j^{n}\}$ denote normalized eigenvectors associated with nonzero eigenvalues of $w$ and $G_n$ respectively, ordered as in \eqref{eq:ordering}.
Then,
we have that
$\{{\rm GFT}( G_n, f_n)\} $ converges to ${\rm WFT}(w,f)$ in the sense that for every index $j$, we have
$$\frac{1}{\sqrt{|G_n|}}\widehat{f_n}(\phi_j^{n}) \rightarrow \widehat{f}(\phi_j) \mbox{ as } n\rightarrow \infty.$$
\end{theorem*}
The condition that convergence only holds for non-derogatory graphons is highly restrictive. While non-derogatory graphons form a dense subset in the space of all graphons, the above theorem does not imply continuity of signal processing on the whole space. Moreover, the restriction excludes classes of graphons that have proven useful in practice, such as those arising from \emph{stochastic block models} \cite{Abbe}. Another such example is the class of Cayley graphons, which provide a versatile tool to model graphs whose link structure is informed by an underlying group and its topology. As it turns out, Cayley graphons tend to have many nonzero eigenvalues of multiplicity higher than 1.
To extend the above convergence result to all graphons, we need to modify the definition of the graphon Fourier transform.
Instead of defining the WFT as the projection of a signal on each element of the eigenbasis of the graphon, we think of the projection onto each eigenspace of $w$.
The two definitions coincide when $w$ is non-derogatory. However, the latter approach enables us to handle eigenvalues of higher multiplicity, as it provides us with a definition which is independent of the particular choice of eigenbasis.
To precisely state the new definition of graphon Fourier transform, and to analyze convergence of the graphon Fourier transform along a converging graphon sequence in this context, we need the following notation:
\begin{notation}\label{notation:notation-thm1}
Let $w$ be a (not necessarily non-derogatory) graphon on an infinite standard probability space $(X,\mu)$.
Let $w=\sum_{i\in I_w}\lambda_i(w) \, \phi_i\otimes \phi_i$ be the spectral decomposition of $w$ as in \eqref{eq:spectral-l2}, where $\{\lambda_i(w)\}_{i\in {I_w}}$ are the nonzero eigenvalues of the associated integral operator $T_w$, and $\{\phi_i\}_{i\in{I_w}}$ is an orthonormal set of eigenvectors of $T_w$ associated with the nonzero eigenvalues. Thus, $I_w\subseteq {\mathbb Z}^*$ is the set of indices $j$ such that $\lambda_j(w)\neq 0$. Let $\{\mu_j(w)\}_{j\in \mathbb Z^*}$ be the sequence of \emph{distinct} nonzero eigenvalues of $T_w$. The sequence is padded with zeros if the number of positive or negative eigenvalues is finite. We always order eigenvalues as in \eqref{eq:ordering}.
For each $\mu_j(w)$, let
$$I_{\mu_j}=\{i\in I_w: \ \lambda_i(w)=\mu_j(w)\}.$$
By definition of $I_w$, if $\mu_j= 0$ then $I_{\mu_j}=\emptyset$.
For a subset $I\subseteq {I_w}$, define the operator $P^w_{I}: L^2(X)\to L^2(X)$ as
$$P^w_{I}=\sum_{i\in I}\phi_i\otimes \phi_i.$$
Clearly this is an orthogonal projection.
We set $P^w_{I}$ to be the zero operator when $I=\emptyset$.
For each $\mu_j(w)\neq 0$, the operator $P^w_{I_{\mu_j}}$ is the orthogonal projection onto the $\mu_j(w)$-eigenspace of $T_w$, and is of finite rank.
Finally, define $P^w_0$ to be the orthogonal projection onto the null space of $T_w$. Contrary to the previous projections, $P^w_0$ is not
necessarily of finite rank.
\end{notation}
With this notation, we can now formalize the new definition of the graphon Fourier transform as follows.
\begin{definition}[Graphon Fourier Transform]\label{def:NC-Fourier}
Let $w:X\times X\rightarrow [0,1]$ be a graphon, and $\Sigma$ denote the set of distinct eigenvalues of $T_w$. The graphon Fourier transform of a graphon signal $(w,f)$ is a vector-valued function $\widehat{f}$ on $\Sigma$ defined as
\begin{equation}\label{eqn:FT_Proj}
\widehat{f}(\mu_j)= P^w_{I_{\mu_j}}(f)\mbox{ for every nonzero } \mu_j, \text{ and } \widehat{f}(0) = P_0^w(f),
\end{equation}
where the notation is as given in Notation \ref{notation:notation-thm1}.
The \emph{inverse Fourier transform} can then be expressed as an infinite sum in $L^2(X)$:
\begin{equation}\label{eqn:iWFT}
f= \sum_{j\in\mathbb Z^*}P^w_{I_{\mu_j}}(f) +P_0^w(f) =\sum_{j\in \mathbb Z^*} \widehat{f}(\mu_j)+\widehat{f}(0).
\end{equation}
\end{definition}
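For a finite-rank graphon, the vector-valued transform of Definition \ref{def:NC-Fourier} amounts to grouping eigenvectors by eigenvalue. The sketch below (with an assumed circulant step graphon whose operator has an eigenvalue of multiplicity 2) computes the projections $\widehat f(\mu_j)$ and checks the inversion formula \eqref{eqn:iWFT}.

```python
import numpy as np

m = 4
W = np.array([[0.9, 0.4, 0.1, 0.4],     # circulant: T_w = W/4 has spectrum
              [0.4, 0.9, 0.4, 0.1],     # {0.45, 0.2, 0.2, 0.05}, so the
              [0.1, 0.4, 0.9, 0.4],     # eigenvalue 0.2 has multiplicity 2
              [0.4, 0.1, 0.4, 0.9]])
evals, V = np.linalg.eigh(W / m)
phi = np.sqrt(m) * V                    # orthonormal step eigenfunctions of T_w

def wft(f, tol=1e-8):
    """Graphon Fourier transform of a step signal f: one projection per
    distinct eigenvalue mu of T_w (the values are vectors, not scalars)."""
    mus, hat = [], []
    for mu in np.unique(np.round(evals, 8)):
        idx = np.abs(evals - mu) < tol
        hat.append(phi[:, idx] @ (phi[:, idx].T @ f) / m)   # P_{I_mu}(f)
        mus.append(mu)
    return mus, hat

f = np.array([1.0, -1.0, 2.0, 0.0])
mus, hat = wft(f)
assert len(mus) == 3                    # three distinct eigenvalues
assert np.allclose(sum(hat), f)         # inversion: f = sum_j hat f(mu_j)
```

Note that the two eigenvectors for the eigenvalue $0.2$ contribute a single vector-valued coefficient, independent of the choice of basis for that eigenspace.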
The Fourier transform defined above exhibits principal features expected from a graphon Fourier transform. Most importantly, this new definition allows appropriate convergence behavior of the Fourier transform, when applied to any convergent sequence of graphs (with no restriction on the limiting graphon) as we will see in Theorem~\ref{thm:convergence}.
Such general convergence results can only be achieved by paying a price: each graphon Fourier transform `coefficient' is defined as a vector--often lying in an infinite dimensional space--rather than a numerical value. We do not view this fact as a drawback of our approach/definition; indeed, this vector-valued definition is in line with the harmonic analytic definition of the Fourier transform when the ambient group is non-Abelian.
\begin{remark}{(Conventions for graph and graphon Fourier transforms.)} \label{remark:GFT-WFT}
Throughout this paper, graph signals are considered as vectors in $\mathbb C^n$, and the graph Fourier transform is as defined in \eqref{GFT}.
The graphon Fourier transform is, however, the projection as defined in Definition \ref{def:NC-Fourier}.
\end{remark}
For a graph $G$ and an associated graphon $w_G$, the graph Fourier transform on $G$ and the graphon Fourier transform on $w_G$ are closely related, as demonstrated in the following lemma.
\begin{lemma}\label{lem:graph-graphon-spec}
Let $X$ be an infinite standard probability space with a measure-preserving map $\sigma_X:[0,1]\to X$, $G$ be a graph on $n$ vertices, and $f\in \mathbb C^n$ be a signal on $G$.
Let $w_{G,X}$ be the graphon associated with $G$ and represented on $X$ (as defined in \eqref{eq:W-G,X}), and consider the graphon signal $(w_{G,X}, f^X)$
associated with the graph signal $(G,f)$ as defined in \eqref{eqn:fX}. Then we have:
\begin{itemize}
\item[(i)] Every eigenvector of a nonzero eigenvalue $\lambda$ of $T_{w_{G,X}}$ is of the form $\phi^X$ for some $\phi\in \mathbb C^n$.
\item[(ii)] Let $\lambda\neq 0$. Then $\lambda$ is an eigenvalue of $\frac{1}{n}A_G$ of multiplicity $m$ with $\lambda$-eigenbasis $\{\phi_1,\ldots,\phi_m \}\subseteq \mathbb C^n$ \emph{iff}
$\lambda$ is an eigenvalue of $T_{w_{G,X}}$ of multiplicity $m$ with $\lambda$-eigenbasis $\{\phi_1^X,\ldots,\phi_m^X \}\subseteq L^2(X)$.
Moreover,
$$\widehat{f^X}(\lambda)=\sum_{i=1}^m\widehat{f}(\phi_i)\phi_i^X.$$
\end{itemize}
\end{lemma}
\begin{proof}
Let $I_1,\ldots, I_{n}$ denote the partition of $[0,1]$ into $n$ equal-sized intervals.
Let $h\in L^2(X)$, and $1\leq i\leq n$. For $x\in \sigma_X(I_i)$, we have
\begin{equation*}
T_{w_{G,X}} h (x)=\int_X w_{G,X}(x,y) h(y)\, d\mu(y)= \sum_{j=1}^n\int_{\sigma_X(I_j)} w_{G,X}(x,y) h(y)\, d\mu(y)= \sum_{j=1}^n A_G(i,j)\int_{\sigma_X(I_j)} h(y)\, d\mu(y).
\end{equation*}
So, the function $T_{w_{G,X}} h$ is constant on each set $\sigma_X(I_i)$, when $1\leq i \leq n.$
Consequently, every eigenvector of $T_{w_{G,X}}$ associated with a nonzero eigenvalue must attain constant values on the same subsets. This finishes the proof of (i).
The correspondence between nonzero eigenvalues/eigenvectors of $A_G$ and $T_{w_{G,X}}$ follows from (i) together with \eqref{eq:Tdiscrete-vs-cts}.
Finally, note that $\widehat{f}(\phi_i)=\langle f, \phi_i\rangle_{\mathbb C^n}=\langle f^X, \phi_i^X\rangle_{L^2(X)}$. This finishes the proof of (ii).
\end{proof}
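The correspondence in Lemma \ref{lem:graph-graphon-spec} can also be observed numerically. The sketch below assumes the usual `pixel' representation, in which the graphon of $G$ takes the value $A_{ij}$ on blocks of measure $1/n$; discretizing that graphon on a finer grid reproduces the spectrum of the normalized adjacency matrix, padded with zeros.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 6, 30                        # n vertices; r grid cells per vertex block
A = np.triu(rng.integers(0, 2, (n, n)), 1)
A = A + A.T                         # a random simple graph on n vertices

# discretize the pixel graphon w_G on an (n*r) x (n*r) grid; each grid cell
# has measure 1/(n*r), so the discretized integral operator is K/(n*r)
K = np.kron(A, np.ones((r, r)))
spec_T = np.linalg.eigvalsh(K / (n * r))
spec_A = np.linalg.eigvalsh(A / n)

# the nonzero spectrum of the discretized T_{w_G} coincides with that of A/n;
# the remaining n*r - n eigenvalues of the discretization are 0
top = np.sort(np.abs(spec_T))[::-1][:n]
assert np.allclose(top, np.sort(np.abs(spec_A))[::-1])
```

The eigenvectors of the discretized operator are, correspondingly, step functions constant on the vertex blocks, as in part (i) of the lemma.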
Theorem \ref{thm:convergence} below is stated in the most general form, dealing with sequences of graphon signals that converge in norm. In Corollary \ref{cor:GraphConvergence}, we will apply the theorem to converging graph signals.
\begin{theorem}\label{thm:convergence}
Let $w:X\times X\rightarrow [0,1]$ be a graphon with terminology as in Notation \ref{notation:notation-thm1}.
Let $\{(w_n , f_n)\}$ be a sequence of graphon signals, all represented on $X$, converging in norm to a graphon signal $(w, f)$.
Then, for every nonzero $\mu_j$,
$P^{w_n}_{I_{\mu_j}}\rightarrow P^{w}_{I_{\mu_j}}$ in Hilbert-Schmidt norm. In particular, we have
\begin{equation}\label{eq:converg}
P^{w_n}_{I_{\mu_j}}(f_n) \rightarrow P^w_{I_{\mu_j}}(f) \mbox{ in } L^2(X) \mbox{ as } n\rightarrow \infty.
\end{equation}
Moreover, if $P^w_0(f)=\mathbf{0}$ (the zero function) then we have that
\begin{equation}\label{eq:converg2}
\sum_{j\in \mathbb Z^*} \| P^{w_n}_{I_{\mu_j}}(f_n)- P^{w}_{I_{\mu_j}}(f)\|^2_2
\rightarrow 0 \mbox{ as } n\rightarrow \infty.
\end{equation}
\end{theorem}
\begin{remark}{(Remarks about the proof.)}
As defined earlier, $T_w$ is the integral operator associated with the (Hilbert-Schmidt) kernel $w\in L^2(X\times X)$. It is well-known that the Hilbert-Schmidt norm of $T_w$, denoted by $\|T_w\|_2$, and the $L^2$-norm of $w$ are equal.
In the following proof, we use $\phi\otimes \phi$ to denote both the rank-one operator on $L^2(X)$ defined as $f\mapsto \langle f, \phi\rangle \phi$ and the kernel of that operator, which is the function in $L^2(X\times X)$ given by $(\phi\otimes \phi)(x,y)=\phi(x)\phi(y)$. Notation such as $\|\phi\otimes\phi\|_2$ can be interpreted either as the norm of a function in $L^2(X\times X)$ or as the Hilbert-Schmidt norm of the associated integral operator.
\end{remark}
\begin{proof}
We apply Notation \ref{notation:notation-thm1} to all graphons in the sequence $\{w_n\}$. That is, $\{ \lambda_i(w_n)\}_{i\in I_{w_n}}$ is the sequence of (repeated) eigenvalues of $w_n$ ordered as in \eqref{eq:ordering}, and $\{\phi^n_i\}$ is the sequence of associated eigenvectors.
Thus
$$
w_n=\sum_{i\in I_{w_n}}\lambda_i(w_n)\phi^n_i\otimes \phi^n_i
$$
is the spectral decomposition of $w_n$.
Next, fix a decreasing sequence of positive numbers $\{\alpha_i\}_{i\in\mathbb N}$ converging to 0, and assume that the sequences $\{\alpha_i\}_{i\in\mathbb N}$ and $\{|\mu_j|\}_{j\in\mathbb Z^*,\mu_j\neq 0}$ are interlacing sequences with no common terms.
Then, $\{\alpha_i\}_{i\in\mathbb N}$ satisfies the condition of Theorem~\ref{thm:szegedy-spectra} (ii), so
$\|[{w_n}]_{\alpha_i}-[w]_{\alpha_i}\|_{L^2(X\times X)}\to 0$ for each $i\in{\mathbb N}$.
Fix $j\in \mathbb Z^*$, and suppose $\mu_j(w)>0$;
the case where $\mu_j(w)$ is negative can be done in an identical manner.
We choose $r_j=\alpha_i$ and $s_j=\alpha_{i+1}$ so that $(s_j,r_j)\cap {\{ |\mu_i(w)|:i\in \mathbb Z^*\}}=\{\mu_j(w)\}$; then we have
\begin{equation}\label{eq:Pdef}
[w]_{s_j}-[w]_{r_j}=\sum_{i\in I_{\mu_j}}\mu_j(w)\phi_i\otimes \phi_i-\sum_{i\in I_{-\mu_j}}\mu_j(w)\phi_i\otimes \phi_i.
\end{equation}
Let $P^+_j$ (resp.~$P^-_j$) denote the orthogonal projection onto the $\mu_j(w)$-eigenspace (resp.~the eigenspace associated with $-\mu_j(w)$).
Then we can rewrite (\ref{eq:Pdef}) as
\begin{equation}\label{eq:Pdef2}
[w]_{s_j}-[w]_{r_j}=\mu_j(w)(P^+_j-P^-_j).
\end{equation}
We now invoke Theorem \ref{thm:szegedy-spectra} to obtain that
\begin{eqnarray}
\|([{w_n}]_{s_j}-[{w_n}]_{r_j})-([w]_{s_j}-[w]_{r_j})\|_2&=&\|([{w_n}]_{s_j}-[{w}]_{s_j})+([w]_{r_j}-[w_n]_{r_j})\|_2\nonumber\\
&\leq&\|[{w_n}]_{s_j}-[{w}]_{s_j}\|_2+\|[w]_{r_j}-[w_n]_{r_j}\|_2\label{eq:conv}\\
&\to& 0 \quad \mbox{ as } n\rightarrow \infty.\nonumber
\end{eqnarray}
For each $n\in \mathbb N$, define the finite-rank projections
\begin{equation*}
P_{n,j}^+:=\sum_{i\in I_{\mu_j(w)}} \phi_i^n\otimes \phi_i^n\ \mbox{ and } \ P_{n,j}^-:=\sum_{i\in I_{-\mu_j(w)}} \phi_i^n\otimes \phi_i^n.
\end{equation*}
From Theorem \ref{thm:spectrum}, $\lim_{n}\lambda_i(w_n)=\mu_j(w)$ if $i\in I_{\mu_j}$, and $\lim_{n}\lambda_i(w_n)=-\mu_j(w)$ if $i\in I_{-\mu_j}$. For all other $i$, the value $\lim_{n}\lambda_i(w_n)$ does not fall in $(s_j,r_j)$. Thus, for large values of $n$,
$\lambda_i(w_n)\in (s_j,r_j)$ \emph{iff} $i\in I_{\mu_j}$. Moreover, as $n$ approaches infinity, $\lambda_i(w_n)\to \mu_j(w)$ for $i\in I_{\mu_j}$. A similar statement holds for $I_{-\mu_j}$.
Therefore, we have
\begin{equation}\label{eq-conv2}
\|\big([{w_n}]_{s_j}-[{w_n}]_{r_j}\big)-\mu_j(w)\left(P_{n,j}^+-P_{n,j}^-\right)\|_2\rightarrow 0 \mbox{ as } n\to \infty.
\end{equation}
Putting \eqref{eq:Pdef2}, \eqref{eq:conv}, and \eqref{eq-conv2} together, and using the fact that $\mu_j(w)\neq 0$, we get that
$$\left\|\left(P^+_j-P^-_j\right)-\left(P_{n,j}^+- P_{n,j}^-\right)\right\|_2\rightarrow 0 \mbox{ as } n\to \infty.$$
Let ${\mathcal B}(L^2(X))$ denote the space of bounded linear operators on $L^2(X)$ equipped with operator norm.
Since Hilbert-Schmidt norm dominates the operator norm, we have
\begin{equation}\label{eq:easy1}
P_{n,j}^+- P_{n,j}^-\to P^+_j-P^-_j \ \mbox{ in } {\mathcal B}(L^2(X)).
\end{equation}
So, $(P_{n,j}^+- P_{n,j}^-)^2\to (P^+_j-P^-_j)^2$ as well in ${\mathcal B}(L^2(X))$. Note that $P^+_jP^-_j=P^-_jP^+_j=0$ and $P_{n,j}^+P_{n,j}^-=0=P_{n,j}^-P_{n,j}^+$, since they are orthogonal projections and the images of each pair are orthogonal subspaces of $L^2(X)$.
Applying this, together with the fact that every projection is an idempotent, we obtain
\begin{equation}\label{eq:easy2}
P_{n,j}^++ P_{n,j}^-\to P^+_j+P^-_j \ \mbox{ in } {\mathcal B}(L^2(X)).
\end{equation}
Adding and subtracting \eqref{eq:easy1} and \eqref{eq:easy2} imply that $P_{n,j}^+\to P^+_j$ and $P_{n,j}^-\to P^-_j$ in ${\mathcal B}(L^2(X))$ as $n\rightarrow \infty$.
Moreover, note that the operators $P_{n,j}^+$, $P^+_j, P_{n,j}^-$ and $P^-_j$ are Hilbert-Schmidt operators, and $\|P_{n,j}^+\|_2, \|P^+_j\|_2, \|P_{n,j}^-\|_2, \|P^-_j\|_2\leq \max\{|I_{\mu_j(w)}|, | I_{-\mu_j(w)}|\}$. Using this uniform bound, we can now prove convergence in the Hilbert-Schmidt norm as follows:
\begin{eqnarray*}
\|P_{n,j}^+-P^+_j\|_2&=&\|(P_{n,j}^+-P^+_j)(P_{n,j}^++P^+_j)+P^+_j(P_{n,j}^+ -P^+_j)+(P^+_j-P_{n,j}^+)P^+_j\|_2\\
&\leq&\|P_{n,j}^++P^+_j\|_2\|P_{n,j}^+-P^+_j\|_{{\mathcal B}(L^2(X))}+\|P^+_j\|_2\|P_{n,j}^+-P^+_j\|_{{\mathcal B}(L^2(X))}\\
&\qquad\qquad +&
\|P^+_j\|_2\|P^+_j-P_{n,j}^+\|_{{\mathcal B}(L^2(X))},
\end{eqnarray*}
which converges to 0 as $n$ tends to infinity. Here, we have used the fact that Hilbert-Schmidt operators form an ideal in ${\mathcal B}(L^2(X))$.
Namely, if $T\in {\mathcal B}({\mathcal H})$ and $S$ is a Hilbert–Schmidt operator on the Hilbert space ${\mathcal H}$ then $\|TS\|_{2}\leq \|T\|_{{\mathcal B}({\mathcal H})}\|S\|_{2}$ and $\|ST\|_{2}\leq \|T\|_{{\mathcal B}({\mathcal H})}\|S\|_{2}$.
This completes the first part of the theorem.
To prove the second part, fix a vector $f\in L^2(X)$, and assume that $P^w_0(f)=\mathbf{0}$. We will show that
\begin{equation}\label{eqn:wft}
\sum_{j\in \mathbb Z^*} \| P^{w_n}_{I_{\mu_j}}(f)- P^{w}_{I_{\mu_j}}(f)\|^2_2
\rightarrow 0
\text{ as } n\rightarrow \infty.
\end{equation}
This suffices to prove the claim of the theorem since $\sum_{j\in \mathbb Z^*} \| P^{w_n}_{I_{\mu_j}}(f-f_n)\|^2_2\leq \| f-f_n\|^2_2$, and $f_n\rightarrow f$ in $L^2(X)$ as $n\rightarrow \infty$, by definition.
\emph{Wlog} assume that $f\neq \mathbf{0}$, as \eqref{eqn:wft} trivially holds if $f=\mathbf{0}$.
To simplify notation, let $P_{n,j}=P^{w_n}_{I_{\mu_j}}$ and $P_j=P^{w}_{I_{\mu_j}}$.
Recall that $\{P_j\}_j$ (resp.~$\{P_{n,j}\}_j$ for each $n\in {\mathbb N}$) is a collection of pairwise orthogonal projections.
To prove (\ref{eqn:wft}), let $\epsilon>0$ be given.
The collection $\{\phi_i\}_{i\in I_w}$, together with any orthonormal basis of the null space, forms an orthonormal basis for $L^2(X)$. Thus, using the fact that $P^w_0(f)=0$, we can decompose $f$ into orthogonal components
$f=\sum_{j\in\mathbb Z^*}P_{j}(f)$.
(Recall that $P_j$ is defined to be the zero operator if $\mu_j=0$, and thus $I_{\mu_j}=\emptyset.$)
Consequently, we have
\[\|f\|_2^2=\sum_{j\in\mathbb Z^*}\|P_j(f)\|_2^2.\]
Since the above sum is bounded, there exists a finite set $S\subset \mathbb Z^*$ such that
\begin{equation}\label{eq:cut-f}
\left\|f-\sum_{j\in S}P_j(f)\right\|_2<\frac{\epsilon}{4}.
\end{equation}
Let $h:=\sum_{j\in S}P_j(f)$, and note that $\| f-h\|_2<\epsilon/4$.
From the first part of the theorem, we have that, for each $j\in S$,
$
\left\| P_{n,j}(h)- P_{j}(h)\right\|^2_2\rightarrow 0,
$
as $n\rightarrow \infty$.
Given that $S$ is finite, there must exist $N\in \mathbb N$ so that for all $n\geq N$,
\begin{equation*}
\sqrt{\sum_{j\in S} \| P_{n,j}(h)-P_j(h) \|_2^2} <\min\left\{\frac{\epsilon}{4},\frac{\epsilon^2}{32\|f\|_2}\right\}.
\end{equation*}
To show (\ref{eqn:wft}), we use the triangle inequality in the $\ell^2$-direct sum $\bigoplus_{j\in\mathbb Z^*}L^2(X)$.
Observe that
\begin{eqnarray}
\sqrt{\sum_{j\in \mathbb Z^*}\| P_{n,j} (f)-P_j (f)\|^2_2} &\leq & \sqrt{\sum_{j\in\mathbb Z^*} \| P_{n,j}(f-h)\|^2_2} + \sqrt{\sum_{j\in\mathbb Z^*} \| P_{j}(f-h)\|^2_2} \notag\\
&& \quad\quad + \sqrt{\sum_{j\in\mathbb Z^*} \| P_{n,j}(h)-P_j(h)\|^2_2}\notag \\
&\leq &2\| f-h\|_2 + \sqrt{\sum_{j\in S} \| P_{n,j}(h)-P_j(h)\|^2_2} \notag\\
&&\quad \quad + \sqrt{\sum_{j\in \mathbb Z^*\setminus S} \| P_{n,j}(h)\|^2_2},\notag
\end{eqnarray}
where the last step follows since, for any $j\in \mathbb Z^*\setminus S$, $P_j(h)=\mathbf{0}$.
It remains to show that $\sqrt{\sum_{j\in \mathbb Z^*\setminus S} \| P_{n,j}(h)\|^2_2}<\epsilon/4$.
For $n\geq N$, using the triangle inequality, we have
\[
\sqrt{\sum_{j\in S} \| P_{n,j}(h)\|^2_2} \,\geq \, \sqrt{\sum_{j\in S} \| P_{j}(h) \|^2_2} - \sqrt{\sum_{j\in S} \| P_{n,j}(h)-P_j(h)\|^2_2} \,\geq \, \| h\|_2-\frac{\epsilon^2}{32\|f\|_2}.
\]
Since $\sqrt{\sum_{j\in \mathbb Z^*} \| P_{n,j}(h)\|^2_2} \leq \| h\|_2$, the above inequality implies that
\begin{eqnarray}\label{eq1-estimate}
&&\sqrt{\sum_{j\in \mathbb Z^*} \| P_{n,j}(h)\|^2_2}
- \sqrt{\sum_{j\in \mathbb Z^*} \| P_{n,j}(h) \|^2_2 -\sum_{j\in \mathbb Z^*\setminus S} \| P_{n,j}(h)\|^2_2}
\notag\\
&&\leq \| h\|_2-\sqrt{\sum_{j\in S} \| P_{n,j}(h)\|^2_2} \,\leq \, \frac{\epsilon^2}{32\|f\|_2}.
\end{eqnarray}
On the other hand,
\begin{equation}\label{eq2-estimate}
\sqrt{\sum_{j\in \mathbb Z^*} \| P_{n,j}(h)\|^2_2}
+ \sqrt{\sum_{j\in \mathbb Z^*} \| P_{n,j}(h) \|^2_2 -\sum_{j\in \mathbb Z^*\setminus S} \| P_{n,j}(h)\|^2_2} \,\leq 2\| h\|_2\,\leq \, 2\|f\|_2.
\end{equation}
Multiplying \eqref{eq1-estimate} and \eqref{eq2-estimate} together finishes the proof, as we get
$\sum_{j\in \mathbb Z^*\setminus S} \| P_{n,j}(h)\|^2_2 \,\leq \, \frac{\epsilon^2}{16}.$
\end{proof}
As a direct corollary of Theorem \ref{thm:convergence}, we now have the desired result that the graph Fourier transform converges to the graphon Fourier transform, when applied to a converging sequence of graph signals.
As mentioned in Remark~\ref{remark:GFT-WFT}, in the following corollary, graph signals are considered as vectors in $\mathbb C^n$, and the graph Fourier transform is as defined in \eqref{GFT}. The limiting graphon transform is the projection as defined in Definition \ref{def:NC-Fourier}.
\begin{corollary}\label{cor:GraphConvergence}
Fix a graphon $w:X\times X\rightarrow [0,1]$ and a graphon signal $f\in L^2(X)$, and consider the sequence $\{(G_n , f_n)\}$
of graph signals converging to the graphon signal $(w, f)$.
Suppose that the graphs $G_n$ and the graph signals $f_n$ are labeled so that $(w_{G_n,X},f_n^X)$ converges in norm to $(w,f)$.
Then graph Fourier transforms $\widehat{f}_n$ converge to the graphon Fourier transform $\widehat{f}$ in the following sense:
$$
\text{For each nonzero eigenvalue } \mu_j \text{ of } T_w,\ \sum_{i\in I_{\mu_j}} \widehat{f_n}(\phi_i^n) ({\phi^n_i})^X \rightarrow \widehat{f}(\mu_j)\mbox{ as } n\rightarrow \infty,
$$
where for each $n$, the normalized adjacency matrix $\frac{1}{|G_n|}A_{G_n}$ of $G_n$ has eigenvalues $\{\lambda^n_i\}$, ordered as in (\ref{eq:ordering}), with corresponding eigenvectors $\phi^n_i$.
Moreover, if $P_0^w(f)={\mathbf 0}$, then
$$
\sum_{j\in \mathbb Z^*}\Big\|\sum_{i\in I_{\mu_j}} \widehat{f_n}(\phi_i^n) ({\phi^n_i})^X - \widehat{f}(\mu_j)\Big\|_2^2\rightarrow 0\mbox{ as } n\rightarrow \infty.
$$
\end{corollary}
\begin{proof}
By Lemma~\ref{lem:graph-graphon-spec}, the sequence $\{\lambda^n_i\}$ gives nonzero eigenvalues of the graphon $w_{G_n,X}$ listed as in (\ref{eq:ordering}),
with corresponding eigenvectors $(\phi^n_i)^X$. \emph{Wlog} assume $G_n$ has $n$ vertices.
So, if $\mu_j\neq 0$, we have
\begin{eqnarray*}
P^{w_{G_n,X}}_{I_{\mu_j}}(f_n^X)=\sum_{i\in I_{\mu_j}}\langle f_n^X, (\phi^n_i)^X\rangle_{L^2(X)}(\phi^n_i)^X
=\sum_{i\in I_{\mu_j}}\langle f_n, \phi^n_i\rangle_{{\mathbb C}^{n}}(\phi^n_i)^X=\sum_{i\in I_{\mu_j}}\widehat{f_n}(\phi^n_i)(\phi^n_i)^X.
\end{eqnarray*}
Now, applying Theorem~\ref{thm:convergence} to the converging graph signal sequence $(w_{G_n,X},f_n^X)\to (w,f)$ finishes the proof.
\end{proof}
For non-derogatory graphons, this corollary strengthens the previously known convergence result from \cite{RuizChamonRibeiro21}. Namely, suppose $\mu_j$ has multiplicity 1, so $I_{\mu_j}$ consists of a single index, which we again denote by $j$. Then $P^w_{I_{\mu_j}}(f) =\langle f, \phi_j\rangle\phi_j$, where $\phi_j$ is the corresponding unit $\mu_j$-eigenvector of $T_w$. The corollary then states that
$$
\widehat{f_n}(\phi_j^n) {(\phi^n_j)^X} \rightarrow \langle f, \phi_j\rangle\ \phi_j \mbox{ as } n\rightarrow \infty.
$$
Since the functions $(\phi^n_j)^X$ and $\phi_j$ are elements of $L^2(X)$ with unit norm, this implies that
$$
\widehat{f_n}(\phi_j^n)\rightarrow \langle f, \phi_j\rangle \mbox{ as } n\rightarrow \infty.
$$
In addition, if $f$ is $c$-bandlimited for some $c>0$, then $P_0^w(f)={\mathbf 0}$, and we get
$$
\sum_{j\in\mathbb Z^*}|\widehat{f_n}(\phi_j^n)- \langle f, \phi_j\rangle |^2\rightarrow 0\mbox{ as } n\rightarrow \infty.
$$
Note that the scaling factor in the theorem of~\cite{RuizChamonRibeiro21} does not appear here. This is due to the fact that we have incorporated a
scaling factor of $\sqrt{n}$ in \eqref{eqn:fX}.
\subsection{Interpretation of Theorem \ref{thm:convergence} and its applications}
The graphon Fourier transform as introduced in Definition~\ref{def:NC-Fourier} is a vector-valued transform, which provides a decomposition for any given signal into projections of the signal onto each eigenspace of $T_w$.
This definition differs from the previously known approach, in which the graphon Fourier transform was modeled after classical (Abelian) harmonic analysis and the Fourier coefficients were simply defined as real/complex numbers.
The necessity for Definition~\ref{def:NC-Fourier} becomes apparent when one deals with graphons which possess eigenvalues of higher multiplicities.
In such cases, convergence only occurs at the level of eigenspaces.
Suppose a graphon has an eigenvalue $\lambda$ with multiplicity $k$. Due to random fluctuations, samples from the graphon will likely have $k$ distinct eigenvalues close to $\lambda$. Our result indicates that the space spanned by the eigenvectors of those $k$ eigenvalues will be increasingly similar to the eigenspace of the graphon corresponding to $\lambda$, {as the size of the sampled graph increases}. However, there is no guarantee that the individual eigenvectors of the samples converge.
We therefore argue that if several eigenvalues of the graph sequence converge to a single (repeated) eigenvalue of the limit graphon, then the corresponding eigenvectors should be considered in their totality, and not individually.
A special case occurs when $T_w$ is of finite rank. Here, the set $I_w$ of nonzero (repeated) eigenvalues is finite. We first note that, in this case, the second convergence result (\ref{eq:converg2}) follows directly from (\ref{eq:converg}), and thus the condition $P^w_0(f)=\mathbf{0}$ is not necessary. Second, we observe that, for large $n$, a sample graph $G\sim \mathcal{G}(n,w)$ drawn from a finite rank graphon $w$ will likely have more non-zero eigenvalues than $w$. Namely, since edges are chosen independently at random, the rank of the adjacency matrix of a $w$-random graph, and thus its number of non-zero eigenvalues, will likely grow to infinity as the size increases.
As a simple example, let $\{G_n\}$ be a sequence of $w$-random graphs of increasing size, sampled from a constant graphon $w\equiv p$. We know from Theorem \ref{thm:szegedy-spectra} that the sequence $\{\lambda_1^n\}$, consisting of the largest eigenvalue of the adjacency matrix of each graph $G_n$, will converge to $\lambda_1=p$, while all smaller eigenvalues will converge to zero. By Theorem \ref{thm:convergence}, the eigenvectors of $G_n$ corresponding to the eigenvalues $\lambda^n_i$ with $i>1$ will converge to the kernel of $T_w$, and thus will not play a role in the spectral decomposition of $T_w$.
A similar situation occurs for any finite rank graphon. That is, for any index $j$ outside $I_w$, the sequence of eigenvalues
$\{\lambda_j^n\}_n$ converges to 0, and the associated sequence of eigenvectors converges to the kernel of $T_w$.
Our results suggest that such eigenvectors should be considered as sampling noise. Thus, an efficient analysis of the graph Fourier transform should only focus on eigenvalues with indices in $I_w$.
We suggest an approach for a unified Fourier analysis applicable to all graphs sampled from a given graphon $w:X\times X\to [0,1]$.
Namely, we can propose as a graph Fourier transform, the projection onto eigenspaces of $T_w$.
Our results show that, for large graphs, this Fourier transform will be similar to the GFT derived from the spectral decomposition of the adjacency matrix of the graph itself.
\begin{example}[Watts-Strogatz model]\label{exp:Watts-Strogatz}
Consider the graphon $w:[0,1]^2\rightarrow [0,1]$ defined as follows. For all $x,y\in [0,1]$, let
$$
w(x,y)=\left\{ \begin{array}{ll}
1-p &\mbox{if }|x-y|\leq d\mbox{ or }|x-y|\geq 1-d,\\
p & \mbox{otherwise},
\end{array}\right.
$$
where $p,d\in (0,\frac{1}{2})$ are parameters of the model.
The graphon $w$ is a Cayley graphon on the 1-dimensional torus (see Example \ref{exp:torus} for details).
Random graphs drawn from $w$ have a natural circular layout: each vertex can be identified with a point $e^{2\pi i x}$ on the unit circle. Then each vertex is connected with probability $1-p$ to vertices that are close (in angular distance), and with probability $p$ to any other vertex. When $p$ is small, this graphon corresponds closely to the Watts-Strogatz model first proposed in \cite{WattsStrogatz98}, which is widely used to model so-called ``small-world'' networks.
A straightforward calculation shows that the eigenvalues of $T_w$ are
$$\left\{\frac{(1-2p)\sin(2\pi kd)}{\pi k}: k\in \mathbb Z^*\right\}\cup \{p+2d-4pd\}.$$
Taking $d=p=0.1$ and using notation as in \ref{notation:notation-thm1}, the first three eigenvalues are
$$\lambda_1=0.6p+0.2, \lambda_2=\lambda_3=\left(\frac{1-2p}{\pi}\right)\sin(0.2\pi).$$
Then $\mu_2=\lambda_2=\lambda_3$, $I_{\mu_2}=\{ 2,3\}$, and the eigenspace corresponding to $\mu_2$ has dimension 2.
Let $\{ G_n\}$ be a sequence of $w$-random graphs $G_n\sim {\mathcal{G}}(n,w)$. Our convergence result tells us that for large $n$, the adjacency matrix of $G_n$, interpreted as a graphon, will have second and third largest positive eigenvalues $\lambda_2^n$ and $\lambda_3^n$ close to $\mu_2$. However, due to stochastic variation it is unlikely that $\lambda_2^n=\lambda_3^n$. Corollary \ref{cor:GraphConvergence} tells us that the space spanned by the $\lambda_2^n$- and $\lambda_3^n$-eigenvectors converges to the eigenspace corresponding to $\mu_2$ (in the sense of the convergence of the associated orthogonal projections). It does not follow, {and is likely not true}, that the sequences $\{\widehat{f}(\lambda^n_2)\}$ and $\{\widehat{f}(\lambda^n_3)\}$ each converge.
We can then conclude that the graph Fourier coefficients $\widehat{f}(\lambda^n_2)$ and $\widehat{f}(\lambda^n_3)$ have little significance individually, but should be considered jointly.
\end{example}
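This behaviour is easy to observe numerically. The following sketch (an illustration only; the sample size $n$, seed, and the choice $d=p=0.1$ are arbitrary) samples a $w$-random graph from the Watts-Strogatz graphon and compares the top eigenvalues of its adjacency matrix, rescaled by $1/n$ to match the graphon normalization, with the analytic eigenvalues of $T_w$. Note that eigenvalues are invariant under vertex relabeling, so no sorting of the latent points is needed here.

```python
import numpy as np

# Parameters of the Watts-Strogatz graphon (d = p = 0.1, as in the example)
p, d = 0.1, 0.1
rng = np.random.default_rng(0)

def w(x, y):
    """Graphon value: 1-p when the circular distance is at most d, else p."""
    dist = np.abs(x - y)
    return np.where(np.minimum(dist, 1.0 - dist) <= d, 1.0 - p, p)

# Sample G ~ G(n, w): latent points x_i, edges drawn independently
n = 1500
xs = rng.uniform(size=n)
P = w(xs[:, None], xs[None, :])
A = np.triu((rng.uniform(size=(n, n)) < P).astype(float), 1)
A = A + A.T                                  # symmetric adjacency, no self-loops

# Eigenvalues of the step-function graphon w_{G,X} are those of A / n
evals = np.linalg.eigvalsh(A / n)[::-1]

# Analytic eigenvalues of T_w
mu1 = p + 2*d - 4*p*d                        # simple top eigenvalue
mu2 = (1 - 2*p) * np.sin(2*np.pi*d) / np.pi  # eigenvalue of multiplicity 2
print(evals[:3], mu1, mu2)
```

For moderate $n$ one already sees $\lambda_1^n\approx\mu_1$ and a near-degenerate pair $\lambda_2^n\approx\lambda_3^n\approx\mu_2$, with fluctuations of order $n^{-1/2}$.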
\subsection{Application: Filter Design}
In graph signal processing, the GFT guides the design of graph filters. Diffusion of a graph signal reflects the structure of the graph. Therefore, the graph shift operator $S$ is often taken to be the adjacency matrix $A$. A \emph{polynomial graph filter} $H$ on a graph with $n$ vertices is any polynomial in $A$ (see for example \cite{2018:Ortega:GSPOverview}):
$$
H=\sum_{k=0}^m h_k A^k.
$$
Let $h$ be the polynomial $h(x)=\sum_{k=0}^m h_k x^k$. It follows directly from the definition of GFT and the spectral decomposition of the adjacency matrix that, for each eigenvalue $\lambda_i$ of $A$ with associated eigenvector $\phi_i$:
\begin{equation}\label{eq:graphfilter}
\widehat{Hf}(\phi_i)=h(\lambda_i)\widehat{f}(\phi_i).
\end{equation}
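Identity \eqref{eq:graphfilter} is straightforward to verify numerically. In the minimal sketch below, a random symmetric matrix stands in for an adjacency matrix, and a degree-two polynomial plays the role of the filter; all numerical choices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.uniform(size=(n, n))
A = (A + A.T) / 2                 # symmetric stand-in for an adjacency matrix
lam, Phi = np.linalg.eigh(A)      # columns of Phi: orthonormal eigenvectors

h = np.array([0.5, -1.0, 0.25])   # filter taps: h(x) = 0.5 - x + 0.25 x^2
H = h[0] * np.eye(n) + h[1] * A + h[2] * (A @ A)

f = rng.normal(size=n)
gft = lambda v: Phi.T @ v         # GFT: f-hat(phi_i) = <f, phi_i>

# Filtering in the vertex domain equals multiplication by h(lambda_i)
# in the GFT domain, as in Equation (eq:graphfilter)
assert np.allclose(gft(H @ f), np.polyval(h[::-1], lam) * gft(f))
```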
As proposed in \cite{MorencyLeus21}, this approach can be extended to graphons as follows. The shift operator of a graphon $w:X\times X\rightarrow [0,1]$ is the associated operator $T_w$, and a graphon filter is likewise defined as a polynomial in $T_w$:
$$
H=\sum_{k=0}^m h_kT_w^k.
$$
Using the spectral decomposition of $T_w$ and adopting the notation from \ref{notation:notation-thm1}, we have that, for each $f\in L^2(X)$,
$$
Hf=h_0f+ \sum_{k=1}^m h_k \sum_{j:\mu_j\neq 0} \mu_j^k P^w_{I_{\mu_j}}(f).
$$
As before, let $h$ be the polynomial $h(x)=\sum_{k=0}^m h_kx^k$. Using our extended definition of the graphon Fourier transform as given in Definition \ref{def:NC-Fourier}, we then have that, for all $\mu_j\neq 0$,
\begin{equation}\label{eq:graphonfilter}
\widehat{Hf}(\mu_j)=P^w_{I_{\mu_j}}(Hf)=h(\mu_j)P_{I_{\mu_j}}^wf=h(\mu_j) \widehat{f}(\mu_j).
\end{equation}
Our convergence results then immediately imply the convergence of the filter response as stated below.
\begin{corollary}
Let $\{(G_n,f_n)\}$ be a sequence of graph signals converging to a graphon signal $(w,f)$, and assume that the graphs $G_n$ and the graph signals $f_n$ are labeled so that $(w_{G_n,X},f_n^X)$ converges in norm to $(w,f)$.
Given a polynomial $h(x)=\sum_{k=0}^m h_kx^k$,
for each $n$, let $H_{n}=\sum_{k=0}^m h_k A_n^k$, where $A_n$ is the adjacency matrix of $G_n$.
Then for each nonzero eigenvalue $\mu_j$ of $T_w$,
$$
\sum_{i\in I_{\mu_j}} \widehat{H_{n}f_n}(\phi_i^n) ({\phi^n_i})^X \rightarrow h(\mu_j)\widehat{f}(\mu_j)\mbox{ as } n\rightarrow \infty,
$$
where for each $n$, the adjacency matrix of $G_n$ has eigenvalues $\{\lambda^n_i\}$, labeled as in (\ref{eq:ordering}), with corresponding eigenvectors $\phi^n_i$.
\end{corollary}
\begin{proof}
The first statement follows directly from Corollary~\ref{cor:GraphConvergence}, Equations \eqref{eq:graphfilter} and \eqref{eq:graphonfilter}, and the fact that
$\lim_{n\to \infty}\lambda_i^n=\mu_j$ for each $i\in I_{\mu_j}$.
\end{proof}
This corollary gives strong evidence that, for large graphs sampled from a graphon $w$, one should design graph filters with respect to the limiting graphon, rather than the graph itself. Also, when evaluating the effect of a filter on the GFT, one should consider the Fourier coefficients of eigenvalues with indices in $I_{\mu_j}$ as a whole.
\section{Signal processing on Cayley graphons}\label{sec:cayley}
The instance-independent approach presented in this article is particularly favorable in the special case that the limit graphon is a Cayley graphon.
In this case, we have the well-established and rich theory of group representations at our disposal, which we employ to obtain suitable graphon Fourier bases. Fourier analysis informed by representation theory of the Cayley graphon can lead to decomposition of signals into `meaningful' components; for an instance of this phenomenon, see \cite{permutahedron}.
Cayley graphons reflect the symmetries of the underlying group, and can be used to model real-life networks. For example, the Watts-Strogatz model from Example~\ref{exp:Watts-Strogatz} is a Cayley graphon. In this section, we show how the representations of the underlying group naturally yield the spectral decomposition of the associated Cayley graphon, and can be used to define a universal GFT for samples from the graphon.
We fix the following notations throughout this section: let $w$ be a Cayley graphon on a compact group $\mathbb G$ defined by a Cayley function $\gamma:\mathbb G\rightarrow [0,1]$ (see Section \ref{subsec:general-graphon} and Definition \ref{def:cayley-graphon} for precise descriptions).
Applying Fourier analysis of non-Abelian groups as discussed in Subsection \ref{subsec:FT-nonAbelian}, we obtain properties of the eigenvalues/eigenvectors of $T_w$, which we list in Theorem \ref{thm:eigenvector}. In the next lemma, we will see that the action of $T_w$ on a signal $f$ can be expressed in terms of a convolution operator, which is computationally preferable when working with representations.
\begin{lemma}\label{lem:Tconv}
Let $f\in L^2(\mathbb G)$. For almost every $x\in \mathbb G$, we have $T_w(f)(x)=(\widecheck{f}*\widecheck{\gamma})(x^{-1})$, where the ``check operation'' $f\mapsto \widecheck{f}$ on $L^1(\mathbb G)$ is defined as $\widecheck{f}(x)=f(x^{-1})$. Consequently, we have
$$\widecheck{T_w(f)}(x)={(\widecheck{f}*\widecheck{\gamma})}(x).$$
\end{lemma}
\begin{proof}
For almost every $x\in \mathbb G$, we have $T_w(f)(x)=\int_{\mathbb G} w(x,y)f(y)\, dy=\int_{\mathbb G} \gamma(xy^{-1})f(y)\, dy$. So, applying the change of variable $y\mapsto y^{-1}$, we have
\begin{equation*}
T_w(f)(x)=\int_{\mathbb G} \widecheck{f}(y^{-1})\widecheck{\gamma}(yx^{-1})\, dy=\int_{\mathbb G} \widecheck{f}(y)\widecheck{\gamma}(y^{-1}x^{-1})\, dy=( \widecheck{f}*\widecheck{\gamma})(x^{-1}).
\end{equation*}
\end{proof}
As we will see in Theorem~\ref{thm:eigenvector}, the spectral analysis of the matrices $\pi(\gamma)$ plays a central role in the spectral decomposition of $T_w$.
\begin{lemma}\label{lem:pi(f)sa}
Let $\gamma$ be a Cayley function on a group $\mathbb G$, i.e.~$\gamma(x)=\gamma(x^{-1})$ for all $x\in \mathbb G$.
Then for every unitary representation $\pi:\mathbb G\to {\mathcal U}({\mathcal H}_\pi)$, the operator $\pi(\gamma)$ is self-adjoint. In particular, $\pi(\gamma)$ is diagonalizable, and its spectrum lies in ${\mathbb R}$.
\end{lemma}
\begin{proof}
Recall that $\pi(\gamma)\in {\mathcal B}({\mathcal H}_\pi)$ is defined as $\int_{\mathbb G} \gamma(x)\pi(x)\, dx$, where the integration is with respect to the Haar measure of $\mathbb G$. This integral should be interpreted weakly, that is,
$$\left\langle\left(\int_{\mathbb G} \gamma(x)\pi(x) \, dx\right)\xi,\eta\right\rangle= \int_{\mathbb G} \gamma(x)\langle\pi(x)\xi,\eta\rangle \, dx,$$
for each $\xi,\eta$ in the Hilbert space of $\pi$. For an arbitrary pair $\xi,\eta\in{\mathcal H}_\pi$, we have
\begin{eqnarray*}
\left\langle\left(\int_{\mathbb G} \gamma(x)\pi(x) \, dx\right)^*\xi,\eta\right\rangle&=&\overline{\left\langle\left(\int_{\mathbb G} \gamma(x)\pi(x) \, dx\right)\eta,\xi\right\rangle}
=\overline{\int_{\mathbb G} \gamma(x)\langle\pi(x)\eta,\xi\rangle \, dx}\\
&=&\int_{\mathbb G} \overline{\gamma(x)}\,\overline{\langle\pi(x)\eta,\xi\rangle} \, dx
=\int_{\mathbb G} \gamma(x)\langle\pi(x^{-1})\xi,\eta\rangle \, dx\\
&=&\int_{\mathbb G} \gamma(x^{-1})\langle\pi(x)\xi,\eta\rangle \, dx=\langle\pi(\gamma)\xi,\eta\rangle,
\end{eqnarray*}
where we used the change of variable $x\mapsto x^{-1}$, and the fact that $\gamma(x)=\gamma(x^{-1})$.
\end{proof}
\begin{theorem}\label{thm:eigenvector}
Let $w:\mathbb G\times \mathbb G\to [0,1]$ be the Cayley graphon defined by a Cayley function $\gamma:\mathbb G\to [0,1]$ on a compact group $\mathbb G$.
\begin{itemize}
\item[(i)] The set of eigenvalues of $T_w$ is given as $\bigcup_{\pi\in\widehat{\mathbb G}}\left\{\mbox{eigenvalues of }\ \pi(\gamma)\right\}$.
\item[(ii)] For every nonzero eigenvalue $\lambda$ of $T_w$, there are finitely many $\pi\in \widehat{\mathbb G}$ such that $\lambda\in{\rm Spec}(\pi(\gamma))$. We denote this finite set by $\widehat{\mathbb G}_{\lambda,\gamma}$.
\item[(iii)] Let $0\neq \lambda$ be an eigenvalue of $T_w$. Then $\lambda$-eigenvectors $\phi\in L^2(\mathbb G)$ can be characterized as
$$\phi(x)=\sum_{\pi\in \widehat{\mathbb G}_{\lambda,\gamma}} d_\pi \overline{{\rm Tr}[A_\pi\pi(x)^*]},$$
where $A_\pi$ is a matrix with the property that every one of its columns is either zero, or a $\lambda$-eigenvector for $\pi(\gamma)$.
(Note that at least one of the $A_\pi$'s must be nonzero.)
\item[(iv)] The multiplicity of every nonzero eigenvalue $\lambda$ of $T_w$ is given by
$\sum_{\pi\in \widehat{\mathbb G}_{\lambda,\gamma}} d_\pi m_{\lambda,\pi},$
where $m_{\lambda,\pi}$ is the multiplicity of the eigenvalue $\lambda$ for $\pi(\gamma).$
\end{itemize}
\end{theorem}
\begin{proof}
To prove (i), suppose $0\neq \phi\in L^2(\mathbb G)$ is a $\lambda$-eigenvector of $T_w$, i.e.~$T_w(\phi)=\lambda \phi$ in $L^2(\mathbb G)$.
By Lemma~\ref{lem:Tconv}, and the fact that $\gamma$ is a Cayley function
(i.e.~$\widecheck{\gamma}=\gamma$), this identity can be written as $\widecheck{\phi}*{\gamma}=\lambda\widecheck{\phi}$.
Consequently, for every $\pi\in\widehat{\mathbb G}$, $\pi(\widecheck{\phi}*\gamma)=\pi(\widecheck{\phi})\pi(\gamma)=\lambda\pi(\widecheck{\phi})$. So by injectivity of the Fourier transform, $\phi$ is a $\lambda$-eigenvector of $T_w$ precisely when for every $\pi\in\widehat{\mathbb G}$, we have
\begin{equation}\label{eq1-pf-cayley}
\pi(\widecheck{\phi})(\pi(\gamma)-\lambda I_{d_\pi})=0,
\end{equation}
where $I_{d_\pi}$ is the identity matrix of dimension $d_\pi$.
Taking the matrix adjoint of both sides of Equation \eqref{eq1-pf-cayley}, this equation can be written as
\begin{equation}\label{eq:conv-fouroer}
(\pi(\gamma)-\lambda I_{d_\pi})\pi(\overline{\phi})=0 \ \mbox{ for every } \pi\in\widehat{\mathbb G}.
\end{equation}
Thus, we have:
\begin{itemize}
\item[(a)] If $\lambda$ is not an eigenvalue of $\pi(\gamma)$, then $\pi(\gamma)-\lambda I_{d_\pi}$ is invertible. So, $\pi(\overline{\phi})=0$.
\item[(b)] If $\lambda$ is an eigenvalue of $\pi(\gamma)$, then every nonzero column of the matrix $\pi(\overline{\phi})$ must be a $\lambda$-eigenvector of $\pi(\gamma)$.
\end{itemize}
As a result, if $\lambda$ is not an eigenvalue of $\pi(\gamma)$ for any $\pi\in\widehat{\mathbb G}$, then $\phi=0$, contradicting our assumption. So $\lambda$ is an eigenvalue of $T_w$ with associated eigenvector $\phi$ if and only if it is an eigenvalue of $\pi(\gamma)$ for some $\pi\in \widehat{\mathbb G}$. This finishes the proof of (i).
To prove (ii), we apply the Parseval identity for $\gamma$ as follows:
\begin{eqnarray*}
\|\gamma\|_2^2=\sum_{\pi\in\widehat{\mathbb G}} d_\pi {\rm Tr}[\pi(\gamma)\pi(\gamma)^*]=\sum_{\pi\in\widehat{\mathbb G}} d_\pi \left(\sum_{\lambda\in{\rm Spec}(\pi(\gamma))}\lambda^2\right).
\end{eqnarray*}
Since the above sum is finite, for every given $\lambda\neq 0$, there are only finitely many $\pi$ with $\lambda \in{\rm Spec}(\pi(\gamma))$.
To prove (iii), assume $\lambda$ is a nonzero eigenvalue of $T_w$, and recall that
$$\widehat{\mathbb G}_{\lambda,\gamma}=\left\{\pi\in \widehat{\mathbb G}:\ \lambda\in {\rm Spec}(\pi(\gamma))\right\}.$$
From (a) and (b), $0\neq \phi\in L^2(\mathbb G)$ is a $\lambda$-eigenvector of $T_w$ if and only if
\begin{itemize}
\item[(a$'$)] $\pi(\overline{\phi})=0$ for all $\pi\in \widehat{\mathbb G}\setminus\widehat{\mathbb G}_{\lambda,\gamma}$.
\item[(b$'$)] If $\pi\in\widehat{\mathbb G}_{\lambda,\gamma}$, every nonzero column of the matrix $\pi(\overline{\phi})$ must be a $\lambda$-eigenvector of $\pi(\gamma)$.
\end{itemize}
Using the inverse group Fourier transform (Equation~(\ref{inverse-F-noncommutative})), we get $\overline{\phi}(x)=\sum_{\pi\in \widehat{\mathbb G}_{\lambda,\gamma}} d_\pi {\rm Tr}[\pi(\overline{\phi})\pi(x)^*]$.
Since this is a finite sum, there are no convergence issues to be considered here. Letting $A_\pi=\pi(\overline{\phi})$ finishes the proof of (iii).
To prove (iv), fix a nonzero eigenvalue $\lambda$ and a $\lambda$-eigenvector $\phi$ of $T_w$. By part (iii) of this theorem,
$${\phi}(x)=\sum_{\pi\in \widehat{\mathbb G}_{\lambda,\gamma}} d_\pi \overline{{\rm Tr}[A_\pi\pi(x)^*]},$$
where every nonzero column of $A_\pi$ is a $\lambda$-eigenvector for $\pi(\gamma)$.
For every $\pi\in\widehat{\mathbb G}_{\lambda,\gamma}$, let ${\mathcal E}_{\lambda, \pi(\gamma)}$ denote a fixed basis for the $\lambda$-eigenspace of $\pi(\gamma)$. Recall that $m_{\lambda,\pi}=|{\mathcal E}_{\lambda, \pi(\gamma)}|$.
It then follows immediately, from the above expression, that $\phi$ can be written as a linear combination of functions of the form $x\mapsto \overline{{\rm Tr}[A^{\pi}_{X, i}\ \pi(x)^*]}$, where $A^{\pi}_{X,i}$ denotes the matrix of size $d_\pi$ whose $i$'th column is $X\in {\mathcal E}_{\lambda,\pi(\gamma)}$ and whose every other column is zero.
Applying a simple counting argument, we obtain the upper bound $\sum_{\pi\in \widehat{\mathbb G}_{\lambda,\gamma}} d_\pi m_{\lambda,\pi}$ for the multiplicity of the eigenvalue $\lambda$ of $T_w$.
To finish the proof, we will obtain the same number of independent $\lambda$-eigenvectors for $T_w$.
From the definition of coefficient functions (Subsection \ref{subsec:FT-nonAbelian}), we observe that the $(i,j)$th entry of $\pi(x)^*$ equals $\overline{\pi_{i,j}(x)}$.
For $Z\in {\mathcal E}_{\lambda, \pi(\gamma)}$ represented as $Z=[z_j]_{j=1}^{d_\pi}$,
we have
\begin{equation*}
\overline{{\rm Tr}[A^{\pi}_{Z,i}\pi(x)^*]}=\sum_{j=1}^{d_\pi} \overline{z_j}{\pi_{i,j}(x)} \in {\rm Span}\{\pi_{i,j}: \ j=1,\ldots, d_\pi\}.
\end{equation*}
The above equation, together with Schur's orthogonality relations (Proposition~\ref{prop:Schur-cpt-group}), implies that
\begin{itemize}
\item[(i)] If $i\neq j$, then the functions $\overline{{\rm Tr}[A^{\pi}_{Z,i}\pi(x)^*]}$ and $\overline{{\rm Tr}[A^{\pi}_{Z,j}\pi(x)^*]}$ are orthogonal nonzero functions in $L^2(\mathbb G)$.
\item[(ii)] If $\pi,\sigma\in \widehat{\mathbb G}_{\lambda,\gamma}$ are distinct (inequivalent) representations, then for every $X\in {\mathcal E}_{\lambda, \pi(\gamma)}$ and $Y\in {\mathcal E}_{\lambda, \sigma(\gamma)}$, and every $1\leq i\leq d_{\pi}$ and $1\leq j\leq d_{\sigma}$, we have that $\overline{{\rm Tr}[A^{\pi}_{X,i}\pi(x)^*]}$ and $\overline{{\rm Tr}[A^{\sigma}_{Y,j}\sigma(x)^*]}$ are orthogonal nonzero functions in $L^2(\mathbb G)$.
\item[(iii)] If $Y,Z\in {\mathcal E}_{\lambda, \pi(\gamma)}$ are distinct, then $\overline{{\rm Tr}[A^{\pi}_{Y,i}\pi(x)^*]}$ and
$\overline{{\rm Tr}[A^{\pi}_{Z,i}\pi(x)^*]}$ are orthogonal nonzero functions in $L^2(\mathbb G)$.
\end{itemize}
Parts (i) and (ii) follow directly from the statement of Schur's orthogonality relations. To prove (iii), take distinct (orthogonal) elements $Y=[y_k]_{k=1}^{d_\pi}$ and
$Z=[z_j]_{j=1}^{d_\pi}$ of ${\mathcal E}_{\lambda, \pi(\gamma)}$. Using orthogonality relations between $\pi_{i,j}$ and $\pi_{i,k}$, we get:
\begin{eqnarray*}
&&\langle\overline{{\rm Tr}[A^{\pi}_{Z,i}\pi(\cdot)^*]}, \overline{{\rm Tr}[A^{\pi}_{Y,i}\pi(\cdot)^*]}\rangle_{L^2(\mathbb G)}
=
\langle\sum_{j=1}^{d_\pi} \overline{z_j}{\pi_{i,j}}, \sum_{k=1}^{d_\pi} \overline{y_k}{\pi_{i,k}}\rangle_{L^2(\mathbb G)}\\
&=&
\ \ \sum_{j=1}^{d_\pi}\sum_{k=1}^{d_\pi} \overline{z_j}y_k\langle{\pi_{i,j}}, {\pi_{i,k}}\rangle_{L^2(\mathbb G)}
=
\frac{1}{d_\pi}\sum_{j=1}^{d_\pi}\overline{z_j}y_j=0.
\end{eqnarray*}
Thus, the set of functions $\left\{\overline{{\rm Tr}[A^{\pi}_{X,i}\pi(x)^*]}: \pi\in\widehat{\mathbb G}_{\lambda,\gamma},\ X\in {\mathcal E}_{\lambda, \pi(\gamma)},\ 1\leq i\leq d_\pi\right\}$ forms a basis for the $\lambda$-eigenspace of $T_w$; this finishes the proof.
\end{proof}
Theorem \ref{thm:eigenvector} reduces the problem of finding a spectral decomposition for $T_w$ to finding spectral decompositions of $\pi(\gamma)$ for each $\pi\in\widehat{\mathbb G}$. This application of representation theory leads to significant simplification of the problem. Indeed, $T_w$ is an operator on the infinite-dimensional space $L^2(X)$, and obtaining a spectral decomposition for $T_w$ is a nontrivial task; whereas each $\pi(\gamma)$ is a finite-dimensional matrix.
A special case arises when $\mathbb G$ is Abelian. In this case, every representation $\chi\in\widehat{\mathbb G}$ is 1-dimensional. The following corollary uses Theorem \ref{thm:eigenvector} to describe eigenvalues and eigenvectors of $T_w$, when $w$ is the Cayley graphon of a compact Abelian group. In the statement below, $\mathcal F$ is used to refer to the group Fourier transform as defined in \eqref{F-noncommutative}.
\begin{corollary}\label{cor:Abelian}
Let $w:\mathbb G\times \mathbb G\to [0,1]$ be the Cayley graphon of a compact Abelian group $\mathbb G$ defined by a Cayley function $\gamma:\mathbb G\to [0,1]$.
\begin{itemize}
\item[(i)] For every $\lambda\in \mathbb R$, define ${\mathcal U}_\lambda:=\{\chi\in \widehat{\mathbb G}:\ ({\mathcal F}{\gamma})(\chi)=\lambda\}$. The set of eigenvalues of $T_w$ can be described as $\{ ({\mathcal F}{\gamma})(\chi) :\ \chi\in \widehat{\mathbb G}\}=\{\lambda\in \mathbb R:\ {\mathcal U}_\lambda\neq \emptyset\}$.
\item[(ii)] Any nonzero $\phi\in L^2(\mathbb G)$ such that ${\mathcal F}{\overline{\phi}}$ is supported on ${\mathcal U}_\lambda$ is a $\lambda$-eigenvector of $T_w$.
\end{itemize}
\end{corollary}
The above corollary follows directly from Theorem~\ref{thm:eigenvector}. We give a more direct argument for the Abelian case in the following example.
\begin{example}[Graphons on the 1-dimensional torus]
\label{exp:torus}
Consider the Abelian compact group $\mathbb T=\{e^{2\pi i x}: x\in[0,1)\}$, with multiplication as the group product. Let $\gamma:\mathbb T\to [0,1]$ be a
Cayley function, i.e.~
$\gamma(x)=\gamma(x^{-1})$ for all $x\in \mathbb T$.
Using the identification of $\mathbb T$ and $[0,1)$, the Lebesgue measure on $[0,1)$ is transferred to the Haar measure on $\mathbb T$. Let $w:\mathbb T\times \mathbb T\rightarrow [0,1]$ be the Cayley graphon defined by $\gamma$. The integral operator associated with $w$ is defined as
$$T_w:L^2(\mathbb T)\to L^2(\mathbb T), \ (T_wf)(x)=\int_{\mathbb T}\gamma(xy^{-1}) f(y)\, dy=(f*\gamma)(x),$$
where the last equality holds as $\mathbb T$ is Abelian. To find eigenvalues/eigenvectors of $T_w$, we use classical Fourier analysis on $\mathbb T$, noting that $\widehat{\mathbb T}\simeq\mathbb Z$. In this example, we write $\widehat{f}(n)$ to denote the $n$'th Fourier coefficient of $f$.
Suppose $f\neq 0$ is a $\lambda$-eigenvector of $T_w$. Then, we have the following equivalent relations:
\begin{eqnarray*}
T_wf=\lambda f \mbox{ in } L^2(\mathbb T) &\Leftrightarrow & f*\gamma=\lambda f \mbox{ in } L^2(\mathbb T)\\
&\Leftrightarrow & \mbox{ for every } n\in \mathbb Z,\ \widehat{f}(n)\widehat{\gamma}(n)=\lambda \widehat{f}(n) \\
&\Leftrightarrow & \mbox{ for every } \ n\in\mathbb Z, \widehat{f}(n)=0 \mbox{ whenever } \widehat{\gamma}(n)\neq \lambda.
\end{eqnarray*}
Let ${\mathcal U}_\lambda=\{n\in\mathbb Z: \ \widehat{\gamma}(n)=\lambda\}$. If ${\mathcal U}_\lambda=\emptyset$, then we must have $\widehat{f}\equiv 0$, and consequently $f=0$; this is a contradiction with the choice of $f$ as a $\lambda$-eigenvector. On the other hand, if ${\mathcal U}_\lambda\neq \emptyset$, then any nonzero function $f$ whose Fourier series is supported on ${\mathcal U}_\lambda$ is a $\lambda$-eigenvector for $T_w$.
We note that in this particular example, we can replace ``${\mathcal F}{\overline{f}}$ is supported on ${\mathcal U}_\lambda$'' with the phrase
``${\mathcal F}{{f}}$ is supported on ${\mathcal U}_\lambda$'' in the statement of Corollary \ref{cor:Abelian} (ii). This is due to the fact that
(i) $\widehat{\overline{f}}(n)=\overline{\widehat{f}(-n)}$ for every $n\in\mathbb Z$, and (ii) ${\mathcal U}_\lambda$ is closed under negation as $\gamma$ is real-valued.
\end{example}
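The correspondence between the eigenvalues of $T_w$ and the Fourier coefficients of $\gamma$ can also be checked numerically by discretizing the torus: on a uniform grid the kernel becomes a circulant matrix, whose eigenvalues are given by the discrete Fourier transform of its first row. The sketch below (an illustration; the grid size $m$ is an arbitrary choice) does this for the Watts-Strogatz Cayley function of Example~\ref{exp:Watts-Strogatz}.

```python
import numpy as np

p, d, m = 0.1, 0.1, 4000
x = np.arange(m) / m
# Discretized Cayley function on the torus; note gamma(x) = gamma(-x)
gamma = np.where(np.minimum(x, 1.0 - x) <= d, 1.0 - p, p)

# The discretized T_w is the circulant matrix C[i,j] = gamma((i-j)/m) / m,
# so its eigenvalues are (1/m) * DFT(gamma): Riemann sums of gamma-hat(k)
op_evals = np.sort(np.real(np.fft.fft(gamma)) / m)[::-1]

mu0 = p + 2*d - 4*p*d                               # gamma-hat(0)
k = np.arange(1, 4)
mu_k = (1 - 2*p) * np.sin(2*np.pi*k*d) / (np.pi*k)  # gamma-hat(+-k), each twice
print(op_evals[:5], mu0, mu_k)
```

Since $\gamma$ is real and even, each nonzero frequency $k$ contributes a pair of equal eigenvalues (for $k$ and $-k$), matching the multiplicity-2 eigenspaces observed in Example~\ref{exp:Watts-Strogatz}.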
Next, we show how to obtain an eigenbasis for the integral operator of a Cayley graphon, using its harmonic analysis.
Harmonic analysis of non-Abelian compact groups is mainly focused on the study of the group representations and their associated function spaces.
An important (not irreducible) unitary representation of a group $\mathbb G$ is the \emph{left regular representation}, defined as $L:\mathbb G\to \mathcal U(L^2(\mathbb G))$, $(L(g) f)(h)=f(g^{-1}h)$, for $f\in L^2(\mathbb G)$ and $g,h\in \mathbb G$.
The integral operator of a Cayley graphon can be expressed in terms of the left regular representation of the underlying group.
\begin{remark}\label{remark:leftregular}
Let $w$ be a graphon on a group $\mathbb G$ defined by a Cayley function $\gamma$. There is a direct relation between the integral operator $T_w$ and the left regular representation $L$. Namely,
for $f\in L^2(\mathbb G)$ and almost every $x\in \mathbb G$, we have
\begin{eqnarray*}
T_w(f)(x)=\int_{\mathbb G} w(x,y)f(y)\, dy
=\int_{\mathbb G} \gamma(xy^{-1})f(y)\, dy
&=&\int_{\mathbb G} \gamma(y)f(y^{-1}x) \, dy\\
&=&\int_{\mathbb G} \gamma(y)(L(y)f)(x) \, dy.
\end{eqnarray*}
So, $T_w(f)=L(\gamma)f$.
\end{remark}
To develop signal processing on Cayley graphons, we use the Peter-Weyl basis of $L^2(\mathbb G)$.
The Peter-Weyl theorem (\cite[Theorem 5.12]{1995:Folland:HarmonicAnalysis}) asserts that the left regular representation of $\mathbb G$ is unitarily equivalent to $\bigoplus_{\pi\in \widehat {\mathbb G}} d_\pi \pi$, where $d_\pi$ denotes the dimension of $\pi$. That is, every irreducible representation of $\mathbb G$ appears in the decomposition of $L$ with multiplicity equal to the dimension of the representation.
The orthogonal decomposition presented in the Peter-Weyl theorem and the precise orthogonality relations amongst the irreducible pieces play a central role in the proof of the following proposition.
\begin{proposition}[Eigenbasis for Cayley graphons]\label{prop:basis}
Let $\mathbb G$ be a second countable compact group, and consider the Cayley graphon $w:\mathbb G\times \mathbb G\to[0,1]$ obtained from the Cayley function $\gamma:\mathbb G\to [0,1]$. For each $\pi$, let ${\mathcal E}_{\overline{\pi(\gamma)}}$ denote a fixed eigenbasis for $\overline{\pi(\gamma)}$, where the matrix $\overline{\pi(\gamma)}$ is obtained from $\pi(\gamma)$ by taking complex conjugation entry-wise.
Then
$$\bigcup_{\pi\in\widehat{\mathbb G}}\bigcup_{i=1}^{d_\pi}\left\{\sum_{j=1}^{d_\pi}z_j\pi_{i,j}: \
\left[
\begin{array}{c}
z_1 \\
\vdots \\
z_{d_\pi}
\end{array}
\right]
\in {\mathcal E}_{\overline{\pi(\gamma)}} \right\}$$
is an (orthogonal) eigenbasis for $T_w$.
\end{proposition}
\begin{proof}
Let $\pi\in\widehat{\mathbb G}$. First, note that for the coefficient function $\pi_{i,j}\in L^2(\mathbb G)$, we have
\begin{eqnarray}
T_w(\pi_{i,j})(y)&=&\int_{\mathbb G} w(x,y)\pi_{i,j}(x)\, dx
=\int_{\mathbb G} \gamma(xy^{-1})\langle \pi(x)e_i,e_j\rangle\, dx\nonumber\\
&=&\int_{\mathbb G} \gamma(x)\langle \pi(xy)e_i,e_j\rangle\, dx
=\langle \pi(\gamma)(\pi(y)e_i),e_j\rangle=\langle \pi(y)e_i,\pi(\gamma)e_j\rangle, \label{eq:1-last-prop}
\end{eqnarray}
where in the last equality we used the fact that $\pi(\gamma)$ is self-adjoint.
Now suppose $\pi(\gamma)=[\alpha_{i,j}]$. So for every $j$ we have the linear expansion $\pi(\gamma)e_j=\sum_{k=1}^{d_\pi}\alpha_{k,j}e_k$.
Moreover, the equation $\pi^*(\gamma)=\pi(\gamma)$ implies that $\overline{\alpha_{i,j}}=\alpha_{j,i}$. Now Equation \eqref{eq:1-last-prop}, together with the linear expansion of $\pi(\gamma)e_j$ given above,
implies that
\begin{equation*}
T_w(\pi_{i,j})(y)=\sum_{k=1}^{d_\pi}\overline{\alpha_{k,j}}\langle \pi(y)e_i,e_k\rangle
=(\sum_{k=1}^{d_\pi}\overline{\alpha_{k,j}}\pi_{i,k})(y)
=(\sum_{k=1}^{d_\pi}\alpha_{j,k}\pi_{i,k})(y).
\end{equation*}
So for every $\pi\in \widehat{\mathbb G}$ and $1\leq i\leq d_\pi$, the set ${\mathcal S}_{\pi, i}:={\rm span}\{\pi_{i,j}: 1\leq j\leq d_\pi\}$ is an invariant subspace for $T_w$. On the other hand, by the Peter-Weyl theorem, we have the Hilbert space decomposition $L^2(\mathbb G)\simeq \ell^2\text{-}\oplus_{\pi\in\widehat{\mathbb G}}\oplus_{i=1}^{d_\pi} {\mathcal S}_{\pi, i}$. Thus $T_w$ is block diagonal with respect to this decomposition.
We now proceed to diagonalize each block.
For every $\pi\in\widehat{\mathbb G}$, we know by Lemma~\ref{lem:pi(f)sa} that $\pi(\gamma)$ is a self-adjoint matrix. So the same is true for the entry-wise complex conjugate matrix $\overline{\pi(\gamma)}$, and it can be diagonalized using its eigenbasis.
Let $\lambda$ be an eigenvalue of $\overline{\pi(\gamma)}$, and
suppose the nonzero vector $Z=[z_j]$ is a $\lambda$-eigenvector, i.e.~$\overline{\pi(\gamma)}Z=\lambda Z$.
Then, for $1\leq i\leq d_\pi$ we have,
\begin{eqnarray*}
T_w(\sum_{j=1}^{d_\pi}z_j\pi_{i,j})&=&\sum_{j=1}^{d_\pi}z_j\sum_{k=1}^{d_\pi}\alpha_{j,k}\pi_{i,k}
=\sum_{j=1}^{d_\pi}\left(\sum_{s=1}^{d_\pi} \alpha_{s,j}z_s\right)\pi_{i,j}\\
&=&\sum_{j=1}^{d_\pi}\left(\sum_{s=1}^{d_\pi} \overline{\alpha_{j,s}}z_s\right)\pi_{i,j}=\lambda\sum_{j=1}^{d_\pi} z_j \pi_{i,j},
\end{eqnarray*}
which proves that $\sum_{j=1}^{d_\pi}z_j\pi_{i,j}$ is a $\lambda$-eigenvector of $T_w$.
\end{proof}
As seen in Remark \ref{remark:leftregular}, for any $\phi\in L^2(\mathbb G)$, $T_w(\phi)=L(\gamma)\phi$.
This can be used to give a more abstract proof of the previous proposition. We have avoided such abstract proofs in this paper, as the details of the unitary equivalences and the precise change of basis are important for graph signal processing applications. The following example demonstrates how the proposition can be used in such a setting.
\begin{example}[Ranking graphon]\label{exp:S3}
Consider the group of permutations on 3 elements:
$${\mathbb S}_3=\left\{g_1={\rm id},\ g_2=(12),\ g_3=(23),\ g_4=(13), \ g_5=(123),\ g_6=(132)\right\}.$$
The irreducible representations of ${\mathbb S}_3$ can be listed as follows:
\begin{itemize}
\item[(i)] the trivial representation $\iota:{\mathbb S}_3\to \mathbb C$, defined as $\iota(g)=1$ for all $g\in {\mathbb S}_3$;
\item[(ii)] the alternating representation $\tau:{\mathbb S}_3\to \mathbb C$, assigning to a permutation $g$ the sign of the permutation;
\item[(iii)] the standard representation $\pi:{\mathbb S}_3\to \mathcal U(\mathbb C^2)$, defined as
$$\pi({\rm id})=\begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}, \
\pi((12))=\begin{bmatrix} -\frac{1}{2} & \frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2} & \frac{1}{2}\end{bmatrix},\
\pi((23))=\begin{bmatrix} 1 & 0\\ 0 & -1\end{bmatrix},\
\pi((13))=\begin{bmatrix} -\frac{1}{2} & -\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2} & \frac{1}{2}\end{bmatrix},$$
%
$$\pi((123))=\begin{bmatrix} -\frac{1}{2} & -\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2} & -\frac{1}{2}\end{bmatrix},\
\pi((132))=\begin{bmatrix} -\frac{1}{2} & \frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2} & -\frac{1}{2}\end{bmatrix}.$$
\end{itemize}
As usual, we represent a complex-valued function on ${\mathbb S}_3$ by a vector in $\mathbb C^6$. Clearly, the (unique) coefficient function of every 1-dimensional representation is simply the representation itself. Equipping $\mathbb C^2$ with the standard basis
$\left\{\begin{bmatrix} 1\\ 0\end{bmatrix}, \begin{bmatrix} 0 \\ 1\end{bmatrix}\right\}$, the coefficient functions associated with $\pi$ are given as follows
$$
\pi_{1,1}=\begin{bmatrix} 1\\ -\frac{1}{2}\\ 1\\ -\frac{1}{2}\\ -\frac{1}{2}\\ -\frac{1}{2}\end{bmatrix}, \
\pi_{2,1}=\begin{bmatrix} 0\\ \frac{\sqrt{3}}{2}\\ 0\\ -\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}\end{bmatrix}, \
\pi_{1,2}=\begin{bmatrix} 0\\ \frac{\sqrt{3}}{2}\\ 0\\ -\frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}\end{bmatrix}, \
\pi_{2,2}=\begin{bmatrix} 1\\ \frac{1}{2}\\ -1\\ \frac{1}{2}\\ -\frac{1}{2}\\ -\frac{1}{2}\end{bmatrix}.
$$
Consider the Cayley graphon $w:{\mathbb S}_3\times {\mathbb S}_3\to [0,1]$ defined by the Cayley function $\gamma:{\mathbb S}_3\to {\mathbb R}$, $\gamma=p\delta_{(12)}+q\delta_{(23)}$, where $0<q<p\leq 1$ and $\delta_g$ denotes the Dirac delta function. Clearly, we have
$$\iota(\gamma)=\frac{p+q}{6},\ \tau(\gamma)=\frac{-p-q}{6}, \ \pi(\gamma)=\frac{1}{6}\begin{bmatrix} -\frac{p}{2}+q & \frac{\sqrt{3}p}{2}\\ \frac{\sqrt{3}p}{2} & \frac{p}{2}-q\end{bmatrix}.$$
(Here, we have normalized the counting measure on ${\mathbb S}_3$ to obtain a probability space.)
The eigenvalues of $\pi(\gamma)$ are $\frac{\pm 1}{6}\sqrt{p^2+q^2-pq}$. From an easy calculation, we see
$$\begin{bmatrix} -\frac{p-2q-2\sqrt{p^2-pq+q^2}}{\sqrt{3}p}\\1\end{bmatrix} \mbox{ and } \begin{bmatrix} -\frac{p-2q+2\sqrt{p^2-pq+q^2}}{\sqrt{3}p}\\1\end{bmatrix}$$ are eigenvectors of $\pi(\gamma)$ associated with the positive and negative eigenvalues respectively.
Appealing to Theorem \ref{thm:eigenvector}, we conclude that the eigenvalues of $T_w$ are
$$\frac{p+q}{6}\ (\mbox{mult.~1}),\ \frac{-p-q}{6}\ (\mbox{mult.~1}),\ \frac{1}{6}\sqrt{p^2+q^2-pq}\ (\mbox{mult.~2}), \frac{-1}{6}\sqrt{p^2+q^2-pq}\ (\mbox{mult.~2}).$$
Next, using Proposition~\ref{prop:basis}, we have the following set of eigenvectors for $T_w$, listed to correspond to the above set of eigenvalues. Note that in this case, we have $\pi(\gamma)=\overline{\pi(\gamma)}$, so the condition of Proposition~\ref{prop:basis} is satisfied.
Let $s=-\frac{p-2q-2\sqrt{p^2-pq+q^2}}{\sqrt{3}p}$ and $r=-\frac{p-2q+2\sqrt{p^2-pq+q^2}}{\sqrt{3}p}$.
$$
\iota=\begin{bmatrix} 1\\ 1\\ 1\\ 1\\ 1\\ 1 \end{bmatrix},\
\tau=\begin{bmatrix} 1\\ -1\\ -1\\ -1\\ 1\\ 1 \end{bmatrix}, \
s\pi_{1,1}+\pi_{1,2}= \begin{bmatrix} s\\ -\frac{s}{2}+\frac{\sqrt{3}}{2}\\ s\\ -\frac{s}{2}-\frac{\sqrt{3}}{2}\\ -\frac{s}{2}+\frac{\sqrt{3}}{2}\\ -\frac{s}{2}-\frac{\sqrt{3}}{2}\end{bmatrix}, \
s\pi_{2,1}+\pi_{2,2}= \begin{bmatrix} 1\\ \frac{\sqrt{3}s}{2}+\frac{1}{2}\\ -1\\ -\frac{\sqrt{3}s}{2}+\frac{1}{2}\\ -\frac{\sqrt{3}s}{2}-\frac{1}{2}\\ \frac{\sqrt{3}s}{2}-\frac{1}{2}\end{bmatrix},
$$
$$
r\pi_{1,1}+\pi_{1,2}= \begin{bmatrix} r\\ -\frac{r}{2}+\frac{\sqrt{3}}{2}\\ r\\ -\frac{r}{2}-\frac{\sqrt{3}}{2}\\ -\frac{r}{2}+\frac{\sqrt{3}}{2}\\ -\frac{r}{2}-\frac{\sqrt{3}}{2}\end{bmatrix}, \
r\pi_{2,1}+\pi_{2,2}= \begin{bmatrix} 1\\ \frac{\sqrt{3}r}{2}+\frac{1}{2}\\ -1\\ -\frac{\sqrt{3}r}{2}+\frac{1}{2}\\ -\frac{\sqrt{3}r}{2}-\frac{1}{2}\\ \frac{\sqrt{3}r}{2}-\frac{1}{2}\end{bmatrix}.
$$
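The spectrum computed above can be checked numerically. The following sketch (assuming, for illustration, the hypothetical values $p=0.9$, $q=0.4$, which satisfy $0<q<p\leq 1$) builds the $6\times 6$ kernel matrix of $T_w$ on ${\mathbb S}_3$, with entries $w(x,y)=\gamma(xy^{-1})$ under the normalized counting measure, and compares its eigenvalues with the closed-form expressions.

```python
import numpy as np

# S_3 elements as image tuples on {0,1,2}, in the order g_1,...,g_6 of the text
elems = [(0, 1, 2), (1, 0, 2), (0, 2, 1), (2, 1, 0), (1, 2, 0), (2, 0, 1)]

def compose(g, h):
    # (g o h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def inverse(g):
    inv = [0, 0, 0]
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

p, q = 0.9, 0.4                            # hypothetical values with 0 < q < p <= 1
gamma = {(1, 0, 2): p, (0, 2, 1): q}       # gamma = p*delta_{(12)} + q*delta_{(23)}

# Kernel of T_w: w(x,y) = gamma(x y^{-1}), counting measure normalized by 1/6
A = np.array([[gamma.get(compose(x, inverse(y)), 0.0) / 6 for y in elems]
              for x in elems])

lam = np.sqrt(p**2 - p*q + q**2) / 6
expected = np.sort([(p + q) / 6, -(p + q) / 6, lam, lam, -lam, -lam])
evals = np.sort(np.linalg.eigvalsh(A))     # A is symmetric: gamma is supported on involutions
print(np.allclose(evals, expected))
```

As predicted, $\pm(p+q)/6$ each appear with multiplicity 1 and $\pm\frac{1}{6}\sqrt{p^2-pq+q^2}$ each with multiplicity 2.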
We can convert this Cayley graphon into a graphon $w:[0,1]^2\rightarrow [0,1]$ using the method described in Section \ref{subsec:general-graphon}. This allows us to sample graphs of arbitrary size from the graphon $w$. Any such graph will have a natural partition of its vertices into six subsets, each corresponding to an element of ${\mathbb S}_3$. For example, vertices could represent political bloggers, and the labels from ${\mathbb S}_3$ represent the priority orderings the bloggers assign to a list of three election topics. The graphs formed according to the graphon $w$ then have link probability $p$ and $q$, respectively, between groups whose lists differ only by a transposition of the first and second items, or of the second and third items, on their lists.
\end{example}
Finally, we consider the special case of Cayley graphons where the Cayley function is constant on conjugacy classes. We refer to such graphons as \emph{quasi-Abelian} Cayley graphons; this terminology is an extension of a similar concept for Cayley graphs (\cite{Rockmore}). In this case,
Proposition \ref{prop:basis} takes a greatly simplified form. Namely, the eigenbasis for $T_w$ derived from the irreducible representations of the group consists simply of all coefficient functions $\pi_{i,j}$. This result is a generalization of an analogous theorem for Cayley graphs; see \cite[Theorem 1.1]{Rockmore} or \cite[Theorem III.1]{sampta} for a proof. We state the result in the following corollary.
\begin{corollary}\label{cor:constant-on-conjugacy}
Consider a compact group $\mathbb G$ together with a Cayley function $\gamma:\mathbb G\to [0,1]$ that is a class function, i.e.~ $\gamma$ is constant on conjugacy classes of $\mathbb G$ (or equivalently $\gamma(xy)=\gamma(yx)$ for all $x,y\in\mathbb G$). Let $w$ be the Cayley graphon associated with $\mathbb G$ and $\gamma$.
Then, for every $\pi\in \widehat{\mathbb G}$ and $1\leq i,j\leq d_\pi$,
\[ T_w(\pi_{i,j}) = \lambda_\pi \pi_{i,j},\]
where $\lambda_\pi = \frac{1}{d_\pi}{\rm Tr}(\pi(\gamma))$.
\end{corollary}
\begin{proof}
It is known that the set of characters $\left\{\chi_\pi:=\sum_{i=1}^{d_{\pi}}\pi_{i,i}:\ \pi\in \widehat{\mathbb G}\right\}$ of a group $\mathbb G$ forms an orthonormal basis for the subspace of class functions in $L^2(\mathbb G)$ (see e.g.~\cite[Proposition 5.23]{1995:Folland:HarmonicAnalysis}).
Since $\gamma$ is a class function, we have
\begin{equation}\label{eq:expansion1}
\gamma=\sum_{\pi\in\widehat{\mathbb G}}\langle \gamma, \chi_\pi\rangle_{{L^2(\mathbb G)}}\chi_\pi
=\sum_{\pi\in\widehat{\mathbb G}}\sum_{i=1}^{d_\pi}\langle \gamma, \chi_\pi\rangle_{{L^2(\mathbb G)}}\pi_{i,i}.
\end{equation}
Let $\pi\in\widehat{\mathbb G}$ be arbitrary. Using Schur's orthogonality relations, Equation \eqref{eq:expansion1} implies that
$\langle \gamma, \pi_{i,j}\rangle_{{L^2(\mathbb G)}}=0$ if $i\neq j$, and
$\langle \gamma, \pi_{i,i}\rangle_{{L^2(\mathbb G)}}=\frac{1}{d_\pi}\langle \gamma, \chi_\pi\rangle_{{L^2(\mathbb G)}}$.
This allows us to compute the entries of the matrix $\pi(\gamma)$. Namely, since $\gamma$ is real-valued, we have
\begin{eqnarray*}
\langle \pi(\gamma)e_i,e_j\rangle = \int_{\mathbb G} \overline{\gamma(x)}\langle \pi(x)e_i,e_j\rangle \, dx= \overline{\langle\gamma, \pi_{i,j}\rangle}_{L^2(\mathbb G)}
=\left\{\begin{array}{cc}
0 & i\neq j \\
\frac{1}{d_\pi}\langle\chi_\pi, \gamma\rangle_{{L^2(\mathbb G)}} & i=j
\end{array}\right..
\end{eqnarray*}
In other words, $\pi(\gamma)e_i= \frac{1}{d_\pi}\langle\chi_\pi, \gamma\rangle_{{L^2(\mathbb G)}} e_i$ for every $1\leq i\leq d_{\pi}$.
Conjugating both sides of the previous equation, we conclude that the standard basis $\{e_i\}_{i=1}^{d_{\pi}}$ is an orthonormal eigenbasis of $\overline{\pi(\gamma)}$ associated with the eigenvalue
$\frac{1}{d_\pi}\langle \gamma, \chi_\pi\rangle_{{L^2(\mathbb G)}}$. So by Proposition \ref{prop:basis}, $\cup_{\pi\in\widehat{\mathbb G}}\{\pi_{i,j}: 1\leq i,j\leq d_\pi\}$ forms an orthogonal eigenbasis for $T_w$ associated with (repeated) eigenvalues $\frac{1}{d_\pi}\langle \gamma, \chi_\pi\rangle_{{L^2(\mathbb G)}}$. Finally, observe that
\begin{equation*}
\langle \gamma, \chi_\pi\rangle_{{L^2(\mathbb G)}}=\sum_{i=1}^{d_\pi}\int_{\mathbb G}\gamma(x)\overline{\pi_{i,i}(x)}\, dx =\sum_{i=1}^{d_\pi}\int_{\mathbb G}\gamma(x)\langle\pi(x)e_{i},e_i\rangle \, dx
=\sum_{i=1}^{d_\pi}\langle\pi(\gamma)e_i,e_i\rangle={\rm Tr}(\pi(\gamma)),
\end{equation*}
which finishes the proof.
\end{proof}
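The corollary can likewise be illustrated numerically on ${\mathbb S}_3$. Taking the hypothetical class function $\gamma=p(\delta_{(12)}+\delta_{(23)}+\delta_{(13)})$ with $p=0.6$ (constant on the conjugacy class of transpositions), the predicted eigenvalues are $\lambda_\iota=p/2$, $\lambda_\tau=-p/2$, and $\lambda_\pi=\frac{1}{2}{\rm Tr}(\pi(\gamma))=0$ with multiplicity $d_\pi^2=4$, since the three transposition matrices of the standard representation sum to zero.

```python
import numpy as np

elems = [(0, 1, 2), (1, 0, 2), (0, 2, 1), (2, 1, 0), (1, 2, 0), (2, 0, 1)]

def compose(g, h):
    return tuple(g[h[i]] for i in range(3))

def inverse(g):
    inv = [0, 0, 0]
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

p = 0.6                                           # hypothetical value in (0, 1]
transpositions = [(1, 0, 2), (0, 2, 1), (2, 1, 0)]
gamma = {t: p for t in transpositions}            # class function: constant on transpositions

# Kernel of T_w with normalized counting measure
A = np.array([[gamma.get(compose(x, inverse(y)), 0.0) / 6 for y in elems]
              for x in elems])
evals = np.sort(np.linalg.eigvalsh(A))

# Corollary: lambda_iota = p/2 (mult 1), lambda_tau = -p/2 (mult 1), lambda_pi = 0 (mult 4)
print(np.allclose(evals, np.sort([p / 2, -p / 2, 0, 0, 0, 0])))
```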
\begin{example}[SO(3)]
Consider the (non-Abelian) group SO(3) of all rotations of the unit ball around an axis through the origin. Thus, each element of SO(3) can be characterized by a unit vector indicating the axis, and a rotation angle. It is well-known that two elements of SO(3) are conjugate if and only if they have the same rotation angle. Thus, if we let the Cayley function $\gamma$ be any function that depends only on the rotation angle, then $\gamma$ satisfies the conditions of Corollary \ref{cor:constant-on-conjugacy}. The corollary now tells us that the coefficient functions $\pi_{i,j}$ provide an eigenbasis for the graphon, which can be used to define a Fourier transform for graphs sampled from the graphon.
A natural Cayley
graphon results if we let $\gamma$ be a sharply declining function of the rotation angle. In that case, two rotations $\sigma$ and $\tau$ in SO(3) have high link probability if $\sigma\tau^{-1}$ has a very small angle. This can be interpreted as $\sigma$ and $\tau$ having a similar effect on the unit ball.
\end{example}
\section{Acknowledgements}
The first author was supported by NSF grant DMS-1902301, while this work was being completed. The second author was supported by NSERC.
The first two authors initiated this project while on a Research in Teams visit at the Banff International Research Center. They are
grateful to BIRS for the financial support and hospitality.
\bibliographystyle{plain}
\section{RF-EHSN with 5G technology }
\label{sec:5G}
In this section, we study the potential of applying 5G technology to RF-EHSNs for animal health monitoring. According to measurements reported in \cite{pinuela2013ambient}, the density of ambient RF signals in urban and semi-urban environments averages between $0.18$ and $84$\,nW/cm$^2$ across the DTV, GSM 900/1800, 3G, and WiFi frequencies, which is too weak to replenish an EHN in a short period of time.
The 5G mobile technology provides an opportunity to solve this thin-energy problem. By operating in the millimeter wave (mmWave) band, the size of a transceiver can be significantly reduced, which makes a dense deployment of femtocells with massive antenna arrays possible~\cite{buzzi2016survey}. By utilizing femtocells as APs in an RF-EHSN, a highly directional beam can be steered towards the desired EHN, which increases the directive gain and enables quick energy charging.
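A back-of-envelope Friis-equation estimate gives a feel for this gain. All values below (28\,GHz carrier, 1\,W transmit power, 24\,dBi array gain, 10\,m range) are illustrative assumptions, not figures from this article:

```python
import math

# Friis free-space estimate of power received from a beamforming femtocell.
# All parameter values are assumptions chosen for illustration.
c = 3e8
f = 28e9                    # assumed mmWave carrier frequency (Hz)
wavelength = c / f          # ~1.07 cm
P_t = 1.0                   # assumed transmit power (W)
G_t = 10 ** (24 / 10)       # assumed 24 dBi beamforming gain of the array
G_r = 1.0                   # 0 dBi EHN antenna
d = 10.0                    # assumed distance (m)

P_r = P_t * G_t * G_r * (wavelength / (4 * math.pi * d)) ** 2
print(P_r)                  # on the order of a microwatt
```

Even with the severe free-space loss at mmWave frequencies, the directive gain yields microwatt-level received power at 10\,m, orders of magnitude above the ambient densities quoted earlier.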
The challenges of directional charging are tight time synchronization across the antennas of a femtocell and accurate channel state information (CSI) between the EHN and the femtocell. These requirements can be met with the assistance of positioning technologies in 5G mobile networks, which can provide the locations and energy status of active femtocells to EHNs. With such information, an EHN can receive RF signals from the most appropriate femtocell at the right time to optimize the charging efficiency.
Compared with passive energy harvesting from ambient RF signals, an RF-EHSN with 5G technology allows EHNs to request energy from femtocells on demand. The advantage of proactive energy requests is improved power-management efficiency. Specifically, the majority of existing transmission-scheduling schemes for RF-EHSNs work in an offline scenario, assuming that the amount of energy harvested by an EHN at a given time is known in advance~\cite{ulukus2015energy}. Such an assumption may not hold in the ambient RF environment, where both the amount and the arrival time of energy are random. By contrast, actively requesting energy from femtocells guarantees that the energy received by an EHN is controllable and predictable to a certain extent, thereby making offline strategies more realistic in real applications.
When supplying energy to RF-EHSNs with 5G technology, the sensitivity of mmWave signals to blockage needs to be taken into account. RF signals diffract poorly around obstacles significantly larger than the wavelength. For an EHN implanted inside a moving animal, the mmWave link will therefore be intermittent. Maintaining a reliable connection between an RF-EHSN and the femtocells of a 5G network is thus a challenge for both energy charging and data transmission in animal healthcare applications.
\section{Network Architecture}
\label{sec:arch}
Due to the size and energy constraints of wearable EHNs, their communication range is short (usually less than one foot)~\cite{kurup2012body}. We propose a 3-tier network architecture to deliver the sensing data collected from animals over a large area to a distant health center, as illustrated in Fig.\,\ref{fig:scenario}.
\begin{figure}[htb]
\centerline{\includegraphics[width=8.8cm]{figures/scenario}}
\caption{The 3-tier architecture of RF-EHSNs for health monitoring of animals in a zoo.}\label{fig:scenario}
\end{figure}
In the Tier 1 RF-EHSN, EHNs carrying biosensors implanted inside an animal's body form an ad-hoc body area network (BAN). Each BAN is associated with a sink node, a special EHN embedded in a necklet, earring, or leg band of the animal. Being exposed in vitro, the sink node can have a relatively larger size and better channel quality for mid-range (hundreds of feet) data communications.
After gathering information from their associated BANs, sink nodes forward the data to an access point (AP), forming a local area healthcare network (LAHN), the Tier 2 RF-EHSN. For animals like birds that live in a hutch or an aviary enclosed by an iron net, a Faraday cage is formed that blocks electromagnetic waves. In this circumstance, an AP needs to be placed inside the cage to act as a dedicated energy source powering the EHNs; the corresponding network is a closed-access LAHN. If APs are deployed in an open area, such as a wild animal zoo or a dairy farm, an open-access LAHN is established, where EHNs can harvest energy from both the ambient RF environment and the APs.
Eventually, a wide area healthcare network (WAHN), the Tier 3 RF-EHSN, is formed by connecting the APs of the LAHNs to the Ethernet. Operators in a health center can process the information sent from the APs to monitor the health status of animals in real time. This 3-tier RF-EHSN enables timely diagnosis to prevent diseases like bird flu and mad cow disease, thereby saving animal lives in dairy farms, poultry farms, and wildlife parks.
\section{Conclusions}
\label{sec:Con}
In this article, we give a tutorial on RF-EHSNs for the healthcare of pets, livestock, and wildlife. A three-tier architecture is proposed to create an online health monitoring network. Several unique features, such as the interaction between data transmission and energy harvesting, the balance between data flows and energy flows, and the dynamics of the network topology, are introduced. How these features challenge the design of an RF-EHSN at different layers is discussed to shed light on resilient RF-EHSN design. We believe the RF-EHSN is a promising technology for the health monitoring and disease control of animals.
\section{Introduction}
\label{sec:introduction}
Harvesting energy from the radio environment has been considered a promising technique to power next-generation wireless sensor networks~\cite{shaikh2016energy}. Not only is RF energy harvesting self-sustaining and pollution-free, but it also enables perpetual monitoring in a wide spectrum of applications (e.g., healthcare, emergency management, and fitness). Recently, considerable efforts have been made to provide smart health services for human beings using radio frequency energy harvesting sensor networks (RF-EHSNs)~\cite{hu2017wireless}; pets, livestock, and wildlife, however, have received limited benefit from this favorable technique.
Studies reveal that animals usually hide their illness and pretend to be well even when they are sick. This masking comes from animals' instinct for survival. An obvious example would be the case of a predator who targets the weakest member of a group during a hunt. This masking feature makes it difficult for veterinarians to save animals' lives once clear illness symptoms appear. Therefore, it is important to frequently check animals' health status and behaviors for a timely treatment.
However, it is highly inefficient to rely on manpower to examine the health of animals, especially when their number is large. For instance, Noble Foods Ltd is the largest free-range egg supplier in the U.K. It owns around 100 farms with an average flock size of 5,000-6,000. How to monitor the health of so many hens for disease control is a critical problem. Another example is the Owens Aviary in the San Diego Zoo, home to around 200 tropical birds. It is almost impossible to monitor the health status of each bird in such a large aviary.
The wearable RF-EHSN provides a cost-effective and affordable solution to monitor the health of pets and animals. Current biosensors developed for animals are able to conduct physiological measurement (e.g., body temperature, heartbeat, and respiratory rate), behavior tracking, and pathogen detection~\cite{neethirajan2017recent}, and can provide credible information about the health condition of the carrier. By ``capturing'' renewable RF energy from the surrounding environment, an energy harvesting node (EHN) can perform sensing and communication for a long period of time without battery replacement, which is an attractive advantage when providing health services for a large number of animals over a wide area.
Current research on RF-EHSN-based smart healthcare, such as body area networks (BANs), wearable Internet of Things (IoT), and implantable biomedical devices, mainly focuses on human beings~\cite{salayma2017wireless, sun2016edgeiot}. Little is known about the performance of RF-EHSNs applied to the health monitoring of animals, and no attempt has been made in this research direction. To fill the gap, we investigate this new and exciting area, which can have a significant impact on disease prevention and behavioral surveillance for animals in farms and zoos.
This article gives a tutorial on the healthcare of animals with RF-EHSN technology. A multi-tier architecture is proposed to provide long-range communications between biosensors and a health center. Afterward, the opportunity of applying fifth-generation (5G) mobile technology to replenish EHNs for high-speed communications is discussed. Compared with conventional wireless sensor networks, the RF-EHSN has some unique features. For instance, the data packet from an EHN not only carries information for communication but also contains energy that can charge neighboring EHNs; in addition to data transmissions, EHNs also need to reserve time and channels for the energy harvesting process; and the relative positions of implanted biosensors change repeatedly with the body movement of an animal. How these features challenge the design of an RF-EHSN at the physical, link, and network layers is analyzed carefully. Finally, some potential solutions that exploit the unique features of RF-EHSNs to improve the quality of health services for animals are introduced.
\section{Challenges on Link Layer}
\label{sec:MAC}
In addition to coordinating channel access among multiple EHNs, the link layer of an RF-EHSN also needs to schedule the energy harvesting process. When an EHN sends data, neighboring EHNs are able to harvest energy from the overheard signal. As a consequence, the data flow is entangled with the energy flow, which is a unique feature of RF-EHSNs. In this section, we analyze how this feature challenges the implementation of energy and medium access control (EMAC) at the link layer of RF-EHSNs.
\subsection{Efficient Information and Power Transfer}
\label{subsec:Eff}
In a healthcare network, the data generation rates of different biosensors can differ. For example, a sensor monitoring the heartbeat of an animal has a much higher sampling rate than one detecting the presence of pathogens. Due to the heterogeneous traffic loads, EHNs in a network may consume energy at distinct rates. As a result, a node with heavy traffic may not capture sufficient energy from ambient RF signals for extensive data transmission. By contrast, an idle node may harvest superfluous energy that gradually overflows its energy storage, which is wasteful. How to reallocate energy among EHNs is a critical issue.
To balance energy among EHNs, an available solution is multi-hop energy transfer. More specifically, given the short distances among nodes, an EHN can harvest a substantial amount of energy from its neighbors~\cite{mishra2015smart}. For instance, the typical stocking density of broilers in Europe is between 11 and 25 birds per square meter. Accordingly, the spacing among EHNs carried by neighboring chickens is less than 1 foot. If an EHN transmits data at 0.1\,mW, a neighbor 25\,cm away can harvest 12.7\,nW/cm$^2$, which is comparable to the energy density of ambient RF signals in a semi-urban area~\cite{pinuela2013ambient}.
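The 12.7\,nW/cm$^2$ figure follows from spreading the transmit power over a sphere, as the following sanity check shows (assuming isotropic radiation and lossless free space):

```python
import math

# Free-space power density of a 0.1 mW transmission at 25 cm,
# assuming an isotropic radiator (no antenna gain, no losses).
P_t = 0.1e-3                              # transmit power in W
d = 25.0                                  # distance in cm
S = P_t / (4 * math.pi * d ** 2) * 1e9    # power density in nW/cm^2
print(round(S, 1))  # 12.7
```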
Through multi-hop energy transfer, an EMAC can arrange for an EHN with high residual energy but light traffic to charge its neighbors during data transmission. However, fast information delivery and efficient energy transfer may not be achievable at the same time. It has been shown in \cite{zhang2013mimo} that there exists a trade-off between the maximal data rate and the optimal energy transfer. Furthermore, unlike conventional MAC protocols, which treat signals reaching unintended receivers as interference degrading communication reliability, co-channel signals in RF-EHSNs are beneficial from the viewpoint of energy harvesting. Therefore, EMAC protocols in RF-EHSNs not only need to avoid collisions among data packets but must also seek effective channel and energy allocations among EHNs for efficient information delivery and energy transfer.
\subsection{Balance between Energy Cycle and Data Cycle}
\label{subsec:Bal}
Due to the size constraint, an EHN implanted inside an animal is commonly equipped with a single antenna that switches between energy harvesting (i.e., the energy cycle) and data communication (i.e., the data cycle). If an EHN is scheduled to receive energy too frequently, it will be less likely to send its data in time, which is unacceptable in applications requiring real-time health monitoring. In addition, continuous energy harvesting with little consumption leads to high residual energy and low charging efficiency, as depicted in Fig.\,\ref{fig:Nonlinear}. However, if an EHN sends packets freely without considering its residual energy, it may fail to harvest enough energy, resulting in low throughput. Therefore, EHNs need to balance the cycles for data transmission and energy reception.
Scheduling the energy cycle and the data cycle for an EHN is not trivial; many factors, such as the length of the data queue, the residual energy, the power density in the environment, and future data arrivals, need to be considered comprehensively. Taking a long data queue as an instance, the EMAC tends to initiate a data cycle when the residual energy is sufficient, thus mitigating the queuing delay. If the residual energy is low and the data queue is short, accumulating energy in the current period for future data transmission is clearly the wise choice.
In most cases, however, the scenario is not as simple as the above two examples. Take an EHN with a long data queue as an example. If the current RF environment has a temporarily high energy density, switching to the energy cycle may harvest more energy, but at the cost of higher latency for data communications. By contrast, continuing the data cycle forgoes the opportunity to receive a substantial amount of energy but reduces the queuing delay. Balancing energy harvesting against data transmission requires careful calculation. Moreover, the values of residual energy, data queue length, and energy density in a real application cannot simply be represented as binary (high or low) but change continuously with time, which makes the coordination between the data cycle and the energy cycle more complicated.
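One way to operationalize such a trade-off is a simple scoring rule over the continuous quantities involved. The rule below is a hypothetical illustration only: the functional forms and thresholds are assumptions, and a deployed EMAC would tune them against the actual traffic and RF statistics.

```python
# Toy decision rule for choosing the next cycle of a single-antenna EHN.
# All weights and functional forms are illustrative assumptions.
def next_cycle(residual, capacity, queue_len, ambient_density, avg_density):
    """Return 'energy' or 'data' for the upcoming slot."""
    urgency = queue_len / (queue_len + 1)        # grows with the data backlog
    room = 1 - residual / capacity               # proxy for charging efficiency
    opportunity = ambient_density / avg_density  # temporally rich RF environment
    # harvest when storage is low and the RF environment is unusually rich;
    # transmit when the queued data dominates
    return 'energy' if room * opportunity > urgency else 'data'

# nearly empty storage in a rich RF environment -> harvest
print(next_cycle(residual=0.2, capacity=1.0, queue_len=1,
                 ambient_density=3.0, avg_density=1.0))  # energy
# nearly full storage with a long backlog -> transmit
print(next_cycle(residual=0.9, capacity=1.0, queue_len=10,
                 ambient_density=1.0, avg_density=1.0))  # data
```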
In addition, the network traffic load also affects the scheduling of data communication and energy harvesting. In an RF-EHSN with light traffic, assume the optimal ratio of energy cycles to data cycles is $8\!:\!2$. When the network traffic becomes heavy, an EHN may fail to access the channel in its data cycle so as to prevent potential interference with its neighborhood. Consequently, a data cycle arranged for information transfer is forced into an energy cycle for energy harvesting, which may cause superfluous energy harvesting and degraded throughput. To tackle this problem, the EMAC needs to adjust the duty cycle of each EHN adaptively according to the network traffic, thereby maintaining an optimal duty cycle in real time.
\subsection{Optimal Energy Request}
\label{subsec:Opt}
In some applications, like the one illustrated in Fig.\,\ref{fig:scenario}, it is promising to deploy a dedicated energy source (e.g., an AP in a closed-access LAHN or a femtocell in a 5G network) if the ambient RF signals are too weak. EHNs carried by animals are then allowed to request energy from the associated facilities on demand, and the energy replenishment can be scheduled based on the energy consumption of each EHN. How an EHN should request energy from the dedicated energy source intelligently, minimizing the overall energy consumption while guaranteeing successful data transmission for efficient healthcare, is an interesting problem.
Intuitively, an EHN pays a cost for the transmission of each energy request, producing considerable overhead if a node requests too frequently. By contrast, requesting a large amount of energy each time reduces the overhead but leads to energy inefficiency due to the nonlinear charging characteristic of EHNs, as discussed in Section\,\ref{sec:PHY}. An optimal strategy is necessary for EHNs to request an appropriate amount of energy at the right time. To solve this problem, a path-oriented method is a feasible solution~\cite{luo2017optimal}.
\begin{figure}[htb]
\centerline{\includegraphics[width=8.3cm]{figures/ShPath}}
\caption{The shortest path based method for the optimal energy request, where the vertex at coordinates $(i, j)$ is denoted by $V_{i, j}$.}\label{fig:ShPath}
\end{figure}
\begin{figure*}[htb]
\centerline{\includegraphics[width=14cm]{figures/movement}}
\caption{Topology changes of RF-EHSN resulting from animal movement, where the green dots represent biosensors and the red ones are the sink nodes.}\label{fig:movement}
\end{figure*}
In the path-oriented method, an EHN first calculates the least transmission power it requires for timely delivery of the data collected from biosensors. An energy tunnel is then formed, as shown in Fig.\,\ref{fig:ShPath}. The lower bound of the tunnel is the least required energy accumulated over time, and the upper bound is parallel to, but $E_m$ above, the lower bound, where $E_m$ is the maximum energy that can be stored in an EHN. The accumulation of harvested energy (red arrow lines in Fig.\,\ref{fig:ShPath}) is confined to these two bounds; otherwise the harvested energy will either be insufficient for timely data transmission (below the lower bound) or overflow the energy storage (above the upper bound).
To find the optimal way of requesting energy, the energy tunnel in Fig.\,\ref{fig:ShPath} is divided into multiple grids, forming a graph with a set of ``vertices'' and ``edges''. A vertical edge indicates an energy replenishment, which generates an associated charging cost at the energy source. The cost consists of two parts: a constant overhead and a nonlinear charging cost. The latter is determined by both the residual energy of the EHN, which is the vertical distance to the lower bound, and the amount of energy requested, which is the length of the vertical edge. A horizontal edge means no energy charging and generates no cost at the energy source. The optimal energy-requesting strategy is thus converted into finding the route between the start point and the destination with the minimum sum-cost along the path. In Fig.\,\ref{fig:ShPath}, the path is not allowed to move backward or downward, since the cumulative harvested energy increases monotonically with time.
Consequently, shortest-path methods like Dijkstra's algorithm can be applied to schedule the optimal energy request. However, such a solution only works in scenarios where the destination is specified initially. These include regular health examinations, in which the lower bound of the energy tunnel is predictable since biosensors collect data from the animal periodically and the EHN transmits the data in a prescheduled pattern. In an event-driven application, however, such as detecting viruses and pathogens, the occurrence of an event is random, so an EHN cannot arrange its data transmissions in advance. In this case, the energy-requesting strategy needs to run in an online mode, which is still an open issue.
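A small sketch of the offline case may help. The instance below is entirely hypothetical: the tunnel bounds, the overhead $C_0$, and the nonlinear charge-cost model are illustrative assumptions, not the formulation of \cite{luo2017optimal}. Dijkstra's algorithm runs over grid vertices $(t, e)$, where horizontal edges (advancing a slot without charging) are free and a vertical edge of length $k$ (requesting $k$ units) incurs the overhead plus a residual-dependent cost.

```python
import heapq

# Toy instance of the path-oriented energy-request scheme. All values and the
# charge-cost model are illustrative assumptions.
T = 6                            # number of time slots
lower = [0, 1, 1, 2, 4, 5, 6]    # least required cumulative energy per slot
E_m = 3                          # storage capacity (height of the tunnel)
C0 = 0.5                         # constant per-request overhead

def charge_cost(residual, amount):
    # assumed nonlinear cost: each extra unit is costlier at higher residual
    return sum(0.2 * (residual + k) for k in range(amount))

start, goal = (0, lower[0]), (T, lower[T])
dist = {start: 0.0}
pq = [(0.0, start)]
best = None
while pq:
    d, (t, e) = heapq.heappop(pq)
    if d > dist.get((t, e), float('inf')):
        continue                               # stale queue entry
    if (t, e) == goal:
        best = d
        break
    moves = []
    # horizontal edge: advance one slot without charging (must stay in tunnel)
    if t < T and e >= lower[t + 1]:
        moves.append(((t + 1, e), 0.0))
    # vertical edges: request k units in the current slot (capped by the tunnel)
    for k in range(1, lower[t] + E_m - e + 1):
        moves.append(((t, e + k), C0 + charge_cost(e - lower[t], k)))
    for v, c in moves:
        if d + c < dist.get(v, float('inf')):
            dist[v] = d + c
            heapq.heappush(pq, (d + c, v))
print(best)  # minimum total charging cost for this toy instance
```

For this instance the optimum is three just-in-time requests of two units each, made whenever the residual drops to zero; requesting less often forces charging at high residual, and more often multiplies the overhead $C_0$.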
\section{Challenges at physical layer}
\label{sec:PHY}
Although energy harvesting can provide perpetual energy, harvested energy must be utilized intelligently to achieve optimal networking performance and to boost energy utilization efficiency. In this section, we identify the challenges at the physical layer of RF-EHSNs and propose a new feedback-based energy harvesting model.
\begin{figure}[htb]
\centerline{\includegraphics[width=8.3cm]{figures/Mod}}
\caption{Energy harvesting models before and after taking the nonlinear charge characteristic of an EHN into account: (a) The conventional one, and (b) the new feedback-based model.}\label{fig:Mod}
\end{figure}
Fig.~\!\ref{fig:Mod}\,(a) shows the conventional energy harvesting model, where the amount of harvested energy is modeled as an independent random process. Although the dependency of energy harvesting efficiency on the concurrent charge state (i.e., the residual energy) of batteries is a well-known fact~\cite{boshkovska2015practical, biason2016effects}, it is not taken into account in the power management of EHNs. It has been assumed that EHNs harvest an equivalent amount of energy as long as the arriving energy is the same, regardless of the residual energy on the EHNs~\cite{ulukus2015energy}.
Unfortunately, this assumption may not hold in the real world. Due to the nonlinear charging characteristic of batteries, the harvested energy is also significantly affected by the residual energy of an EHN, as illustrated in Fig.~\!\ref{fig:Mod}\,(b). To evaluate this nonlinear charging feature, an indoor experiment was conducted using a Powercast P2110 development kit and a Micro850 programmable logic controller (PLC). In the test, an EHN was scheduled to receive RF energy radiated from a dedicated energy source, a TX91501 transmitter, placed 6 feet away. A supercapacitor with a series resistance was selected as the battery. Since the TX91501 is non-programmable, the on/off time of the transmitter was manipulated via the Micro850 controller to control the charging time, $T$.
\begin{figure}[htb]
\centerline{\includegraphics[width=7.7cm]{figures/Nonlinear}}
\caption{Nonlinear energy harvesting with respect to the residual energy of a commercial EHN.}\label{fig:Nonlinear}
\end{figure}
Fig.\,\ref{fig:Nonlinear} shows the experimental results on how the harvested energy, $E_h$, changes nonlinearly with respect to the residual energy, $E_r$, for different charging times. Specifically, as $E_r$ varies, $E_h$ is not a constant but a concave, non-monotonic function of $E_r$.
Therefore, it is crucial to retain an appropriate amount of residual energy to maximize the energy harvesting efficiency. Since the residual energy is determined by data transmission, the power management on an EHN in turn influences the harvested energy. To capture this important interplay between energy harvesting and data transmission, a new feedback-based EH model is proposed, as illustrated in Fig.\,\ref{fig:Mod}\,(b), in which an energy feedback loop from data transmission to harvested energy is established.
The feedback loop in Fig.\,\ref{fig:Mod}\,(b) poses a challenge to the design of optimal power management strategies. Existing work on power management for EHNs seeks an energy consumption curve within a fixed feasible energy tunnel such that a pre-defined objective is optimized (e.g., maximizing throughput). In the new energy harvesting model, however, the feasible energy tunnel is not fixed: its bounds depend on the residual energy of the EHN. This means that the energy tunnel, which determines the optimal power management strategy, is in turn affected by the power management itself. In other words, an EHN cannot estimate how much energy it can harvest from an energy packet before scheduling its transmissions; we refer to this as the \emph{causality of energy harvest}. Power management under this feedback loop effect must therefore be designed carefully.
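The feedback loop described above can be illustrated with a toy simulation: an EHN alternately transmits and harvests, and the per-slot harvest depends on the residual energy through a concave, non-monotonic curve. The curve and all parameter values below are hypothetical illustrations, not the measured P2110 characteristic:

```python
# Toy simulation of the feedback-based energy harvesting model.
# The harvesting curve harvested_energy(e_res) is an illustrative
# concave, non-monotonic function -- NOT the measured P2110 curve.

def harvested_energy(e_res, e_max=10.0, peak=4.0):
    """Per-slot harvest as a concave, non-monotonic function of residual energy."""
    if e_res >= e_max:
        return 0.0
    return max(0.0, 1.0 - ((e_res - peak) / e_max) ** 2)

def simulate(tx_energy, slots=50, e0=5.0, e_max=10.0):
    """Alternate transmit/harvest slots; the harvest feeds back on residual energy."""
    e, total = e0, 0.0
    for _ in range(slots):
        e = max(0.0, e - tx_energy)                  # data transmission
        gain = min(e_max - e, harvested_energy(e))   # feedback: harvest depends on e
        e += gain
        total += gain
    return e, total

# A schedule that keeps the battery near the peak of the harvesting
# curve accumulates more energy than an overly conservative one.
for tx in (0.2, 1.0):
    final, total = simulate(tx)
    print(f"tx={tx}: final residual {final:.1f}, total harvested {total:.1f}")
```

In this toy model the aggressive schedule (larger per-slot transmission) harvests more in total, since the conservative one saturates the battery where the harvesting curve is inefficient, which is exactly the interplay the feedback-based model is meant to capture.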
\section{Challenges at the Network Layer}
\label{sec:Routing}
Due to the severe energy constraint on EHNs, a well-designed multi-hop routing protocol plays an even more important role in RF-EHSNs than in traditional sensor networks for reliable and efficient data delivery. In this section, we identify the unique features, challenges, and potential solutions of routing design in RF-EHSNs for animal healthcare.
\subsection{Dynamic Changes of Network Topology}
\label{subsec:Eff}
RF-EHSNs and mobile ad hoc networks share the feature of a highly dynamic topology. However, the topology change in an RF-EHSN is caused not only by the mobility of the animals carrying biosensors, but also by the body movement of an individual animal and the frequent state switching of EHNs.
\textbf{Body Movement:}
Fig.\,\ref{fig:movement} illustrates how, in an ad-hoc BAN, the optimal routing path from biosensor $A$ to sink node $S$ varies as an animal runs. As shown in snapshot (a) for a lion, the information at $A$ needs to go through nodes $B$, $C$, and $D$ to reach the destination, $S$. However, after a short period of time, the network topology changes with the body movement of the lion, as shown in snapshot (c). At this moment, node $A$ can reach the destination through $D$ directly, which is two hops shorter than in snapshot (a).
An inherent feature of the dynamic topology revealed in Fig.\,\ref{fig:movement} is its \emph{repetition}. In particular, unlike nodes in a mobile network, which may move randomly, the relative positions of biosensors in an ad-hoc BAN change repeatedly depending on the animal's behavior (e.g., walking, running, and sleeping). For each behavior, the changes of the network topology follow a corresponding pattern and are predictable to a certain degree. A routing protocol can therefore discover the optimal paths from biosensors to the destination quickly if the current gesture of the animal is known to the network. Gesture recognition using three-dimensional accelerometers and inertial sensors has been well studied in recent years~\cite{chen2017survey}.
In addition to the repetition, another critical feature of the topology change in Fig.\,\ref{fig:movement} is its species-dependent discrepancy. Intuitively, different animals may have completely different postures for moving and sleeping, as can be observed from the running sequences of a lion and a gorilla in the figure. To minimize the end-to-end delay, the deployment of biosensors and the sink node should take the appearance and motion characteristics of the animal into account, and the optimal path from an EHN to the sink node can be pre-arranged once the species and the current gesture of the animal are determined through gesture recognition.
\textbf{Frequent State Switching:} Unlike traditional wireless sensor nodes, which consume energy monotonically without battery replenishment, the energy level of an EHN can even rise after a data transmission and sensing process due to newly harvested energy. Consequently, an EHN does not ``die'' permanently but comes back to life after a short break, acting as a perpetual wireless device.
Due to the low density of renewable RF energy, a small EHN usually consumes energy much faster than it can harvest it~\cite{pinuela2013ambient}. In other words, an EHN may fail quickly and lose connections with its neighbors after continuous data transmissions, and then be resurrected after harvesting energy.
The switching of nodes between live and dead states causes frequent and unpredictable changes in the network topology, which poses grand challenges for routing design. Although a tremendous number of routing protocols and topology management methods have been developed for wireless sensor networks~\cite{younis2014topology, pantazis2013energy}, they may not work efficiently in RF-EHSNs due to the \emph{intermittent connections} among nodes. How to sustain reliable network connectivity for successive packet delivery is a crucial problem for routing design in RF-EHSNs. Unlike in a network carried by a moving animal, the topology change caused by intermittent connections can even happen in a static network, e.g., an ad-hoc BAN carried by resting or sleeping wildlife, which challenges the reliability of routing protocols.
\subsection{Integration of Energy Flow and Data Flow }
\label{subsec:Int}
In an RF-EHSN, EHNs can harvest energy from data transmitted by neighboring nodes, and the amount of harvested energy varies depending on the selected routes. This adds a new dimension to routing protocol design by considering the interplay between the energy flow and the data flow. For instance, a desired route might be selected from the perspective of the total energy harvested by neighboring EHNs along the route.
\begin{figure}[htb]
\centerline{\includegraphics[width=8.8cm]{figures/EngFlow}}
\caption{Integration of energy flow and data flow for the route selection.}\label{fig:EngFlow}
\end{figure}
Fig.\,\ref{fig:EngFlow} illustrates an example of how the energy flow affects route selection, where two sources, $S_1$ and $S_2$, plan to send data to a common destination, $D$. Assume $S_1$ generates and transmits data before $S_2$. From Fig.\,\ref{fig:EngFlow}\,(a), it can be observed that if $S_1$ selects Path 1 as its route, then $S_2$ must use Path 2 since node $N$ is unavailable. However, if $S_1$ chooses Path 3, as shown in Fig.\,\ref{fig:EngFlow}\,(b), node $N$ gets charged by the data transmission from node $M$, and thus $S_2$ can use Path 4 for information delivery. In Fig.\,\ref{fig:EngFlow}\,(b), the sum of the hop counts from $S_1$ and $S_2$ to $D$ is $13$, four hops fewer than in Fig.\,\ref{fig:EngFlow}\,(a). This example shows that the energy flow in an RF-EHSN should be integrated with the data flow in routing design.
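The effect of an energy-depleted relay on hop counts can be sketched with a breadth-first search over a toy topology. The graph below is a hypothetical five-node example, not the exact 13- vs.\ 17-hop network of Fig.\,\ref{fig:EngFlow}:

```python
from collections import deque

# Hop count via BFS, optionally skipping energy-depleted nodes.
def hops(adj, src, dst, disabled=frozenset()):
    """Shortest hop count from src to dst, treating 'disabled' nodes as dead."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in adj[node]:
            if nxt not in seen and nxt not in disabled:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # unreachable

# Hypothetical topology: a short path through relay N, and a long detour.
adj = {
    "S2": ["N", "A"], "N": ["S2", "D"],
    "A": ["S2", "B"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["N", "C"],
}
print(hops(adj, "S2", "D"))                  # N charged by S1's traffic: 2 hops
print(hops(adj, "S2", "D", disabled={"N"}))  # N depleted: forced onto 4-hop detour
```

A routing protocol that accounts for the energy flow would recognize that routing $S_1$'s traffic past $N$ effectively enables the shorter path for $S_2$.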
\section{Introduction}
Recurrent novae with frequent eruptions are new and exciting objects at the interface between the parameter spaces of novae and type Ia supernovae (SNe\,Ia). Novae are periodic thermonuclear eruptions on the surfaces of white dwarfs (WDs) in mass-transfer binaries \citep[see][for comprehensive reviews on nova physics]{2008clno.book.....B, Jos16,2016PASP..128e1001S}. In SNe\,Ia, a carbon-oxygen (CO) WD approaches the \citet{1931ApJ....74...81C} mass limit to be destroyed in a thermonuclear explosion. Theoretical models show that a CO WD can indeed grow from a low initial mass through many nova cycles to eventually become a SN\,Ia \citep[e.g.,][]{2005ApJ...623..398Y,2014ASPC..490..287N, 2016ApJ...819..168H}.
Only for massive WDs with high accretion rates do the periods of the nova cycles become shorter than $\sim100$\,yr \citep{1985ApJ...291..136S,2005ApJ...623..398Y,2008NewAR..52..386H,2014ApJ...793..136K} --- the (current) empirical limit to observe a nova erupting more than once. These are called recurrent novae (RNe) and have been observed in the Galaxy and its closest neighbors \citep[see, for example,][]{1991ApJ...370..193S,2010ApJS..187..275S,2015ApJS..216...34S,2016ApJ...818..145B}. The extreme physics necessary to power the high eruption frequency of the RNe with the shortest periods makes them the most promising (single-degenerate) SN\,Ia progenitor candidates known today \citep{2015ApJ...808...52K}.
Among the ten RNe in the Galaxy, U\,Scorpii has the shortest period with inter-eruption durations as short as eight years \citep{2010ApJS..187..275S}. Another nova with rapid eruptions has recently been found in the Large Magellanic Cloud \citep[LMCN\,1968-12a with 5~yr;][]{2016ATel.8578....1M,2016ATel.8587....1D,KuinPaper}. However, it is the nearby Andromeda galaxy ({M\,31}) which hosts six RNe with eruption periods of less than 10\,yr. Due to its proximity and relatively high stellar mass (within the Local Group), {M\,31} has been a target of optical nova surveys for a century. Starting with the first discovery by \citet{1917PASP...29..210R}, exactly 100\,yr ago, and the first monitoring survey by \citet{1929ApJ....69..103H}, the community has gradually built a rich database of more than 1000 nova candidates in {M\,31} \citep[see][and their on-line database\footnote{\url{http://www.mpe.mpg.de/~m31novae/opt/m31/index.php}}]{2007A&A...465..375P,2010AN....331..187P}. Crucially, the low foreground extinction toward {M\,31} \citep[\hbox{$N_{\rm H}$}~ = 0.7\hcm{21},][]{1992ApJS...79...77S} favours X-ray monitoring surveys for novae \citep{2007A&A...465..375P,2010A&A...523A..89H,2011A&A...533A..52H,2014A&A...563A...2H}.
The unparalleled {M\,31} nova sample contains 18 known RNe \citep{2015ApJS..216...34S,2015ATel.7116....1H,2017ATel10001....1S}. Among them there are five RNe with recurrence periods between four and nine years. Those objects are: M31N\,1990-10a \citep[9\,yr period;][]{2016ATel.9276....1H,2016ATel.9280....1H,2016ATel.9281....1E,2016ATel.9383....1F}, M31N\,2007-11f \citep[9\,yr period;][]{2017ATel10001....1S,2017ATel.9942....1F}, M31N\,1984-07a \citep[8\,yr period;][]{2012ATel.4364....1H,2015ApJS..216...34S}, M31N\,1963-09c \citep[5\,yr period;][]{1973A&AS....9..347R,2014A&A...563A...2H,2015ATel.8234....1W,2015ATel.8242....1W,2015ATel.8235....1H,2015ATel.8290....1H}, and M31N\,1997-11k \citep[4\,yr period;][]{2009ATel.2286....1H,2015ApJS..216...34S}.
The indisputable champion of all RNe, however, is {M31N\,2008-12a}. Since its discovery in 2008 \citep[by][]{2008Nis}, this remarkable nova has been seen in eruption every single year \citep[][hereafter \citetalias{2016ApJ...833..149D}, see Table~\ref{eruption_history}]{2016ApJ...833..149D}. Beginning in 2013, our group has been studying the eruptions of {M31N\,2008-12a~} with detailed multi-wavelength observations. For the 2013 eruption we found a fast optical evolution \citep[hereafter \citetalias{2014A&A...563L...9D}]{2014A&A...563L...9D} and a supersoft X-ray source \citep[SSS;][]{2008ASPC..401..139K} phase of only two weeks (\citealt[hereafter \citetalias{2014A&A...563L...8H}]{2014A&A...563L...8H}, also see \citealt{2014ApJ...786...61T}). The SSS stage, powered by nuclear burning within the hydrogen-rich envelope remaining on the WD after the eruption, typically lasts years to decades in regular novae \citep{2011ApJS..197...31S,2014A&A...563A...2H,2015JHEAp...7..117O}. The SSS phase of the 2014 eruption was similarly short \citep[hereafter \citetalias{2015A&A...580A..46H}]{2015A&A...580A..46H} and we collected high-cadence, multi-color optical photometry \citep[hereafter \citetalias{2015A&A...580A..45D}]{2015A&A...580A..45D}. In \citet[hereafter \citetalias{2015A&A...582L...8H}]{2015A&A...582L...8H} we predicted the date of the 2015 eruption with an accuracy of better than a month and followed it with a large multi-wavelength fleet of telescopes (\citetalias{2016ApJ...833..149D}).
\begin{table*}
\caption{All Known Eruption Dates of {M31N\,2008-12a}.\label{eruption_history}}
\begin{center}
\begin{tabular}{lllll}
\hline\hline
Eruption date\tablenotemark{a} & SSS-on date\tablenotemark{b} & Days since & Detection wavelength & References\\
(UT) & (UT) & last eruption\tablenotemark{c} & (observatory) & \\
\hline
(1992 Jan 28) & 1992 Feb 03 & \nodata & X-ray ({\it ROSAT}) & 1, 2 \\
(1993 Jan 03) & 1993 Jan 09 & 341 & X-ray ({\it ROSAT}) & 1, 2 \\
(2001 Aug 27) & 2001 Sep 02 & \nodata & X-ray ({\it Chandra}) & 2, 3 \\
2008 Dec 25 & \nodata & \nodata & Visible (Miyaki-Argenteus) & 4 \\
2009 Dec 02 & \nodata & 342 & Visible (PTF) & 5 \\
2010 Nov 19 & \nodata & 352 & Visible (Miyaki-Argenteus) & 2 \\
2011 Oct 22.5 & \nodata & 337.5 & Visible (ISON-NM) & 5--8 \\
2012 Oct 18.7 & $<2012$ Nov 06.45 & 362.2 & Visible (Miyaki-Argenteus) & 8--11 \\
2013 Nov $26.95\pm0.25$ & $\le2013$ Dec 03.03 & 403.5 & Visible (iPTF); UV/X-ray ({\it Swift}) & 5, 8, 11--14 \\
2014 Oct $02.69\pm0.21$ & 2014 Oct $08.6\pm0.5$ & $309.8\pm0.7$ & Visible (LT); UV/X-ray ({\it Swift}) & 8, 15 \\
2015 Aug $28.28\pm0.12$ & 2015 Sep $02.9\pm0.7$ & $329.6\pm0.3$ & Visible (LCO); UV/X-ray ({\it Swift}) & 14, 16--18\\
2016 Dec $12.32\pm0.17$ & 2016 Dec $17.2\pm1.1$ & $471.7\pm0.2$ & Visible (Itagaki); UV/X-ray ({\it Swift}) & 19--23\\
\hline
\end{tabular}
\end{center}
\catcode`\&=12
\tablecomments{This is an updated version of Table~1 as it was published by \protect \citet{2014ApJ...786...61T}, \protect \citet{2015A&A...580A..45D}, \protect \citet{2015A&A...582L...8H}, and \protect\citet{2016ApJ...833..149D}. Here we add the 2016 eruption information.}
\tablenotetext{a}{Derived eruption time in the optical bands. The values in parentheses were estimated from the archival X-ray detections \protect \citep[cf.][]{2015A&A...582L...8H}.}
\tablenotetext{b}{Emergence of the SSS counterpart. There are sufficient {\it ROSAT~} data to estimate the SSS turn-on time accurately. The {\it Chandra~} detection comprises only one data point, on September 8th, which we assume to be the midpoint of a typical 12-day SSS light curve. Due to the very short SSS phase the associated uncertainties are small ($\pm6$\,d).}
\tablenotetext{c}{The gaps between eruption dates are only given for eruptions observed in consecutive years.}
\tablerefs{(1)~\citet{1995ApJ...445L.125W}, (2)~\citet{2015A&A...582L...8H}, (3)~\citet{2004ApJ...609..735W}, (4)~\citet{2008Nis}, (5)~\citet{2014ApJ...786...61T}, (6)~\citet{2011Kor}, (7)~\citet{2011ATel.3725....1B}, (8)~\citet{2015A&A...580A..45D}, (9)~\citet{2012Nis}, (10)~\citet{2012ATel.4503....1S}, (11)~\citet{2014A&A...563L...8H}, (12)~\citet{2013ATel.5607....1T}, (13)~\citet{2014A&A...563L...9D}, (14)~\citet{2016ApJ...833..149D}, (15)~\citet{2015A&A...580A..46H}, (16)~\citet{2015ATel.7964....1D}, (17)~\citet{2015ATel.7965....1D}, (18)~\citet{2015ATel.7984....1H}, (19)~this paper, (20)~\citet{2016Ita}, (21)~\citet{2016ATel.9848....1I}, (22)~\citet{2016ATel.9853....1D}, (23)~\citet{2016ATel.9872....1H}, (24)~\citet{2017ATel11116....1B}, (25)~\citet{2018ATel11121....1H}, (26)~\citet{2018ATel11130....1H}, (27)~\citet{2018ATel11149....1D}.}
\end{table*}
The overall picture of {M31N\,2008-12a~} that had been emerging through the recent campaigns indicated very regular properties (see \citetalias{2016ApJ...833..149D}\ for a detailed description): Successive eruptions occurred every year with a predictable observed period of almost one year ($347\pm10$\,d). The optical light curve rose within about a day to a maximum below 18th mag (faint for an {M\,31} nova) and then immediately declined rapidly by 2 mag in about 2\,d throughout the UV/optical bands. The SSS counterpart brightened at around day 6 after eruption and disappeared again into obscurity around day 19 ($t_{\mbox{\small{on}}} = 5.6\pm0.7$\,d and $t_{\mbox{\small{off}}} = 18.6\pm0.7$\,d in 2015). Even the time evolution of the SSS effective temperatures in 2013--2015, albeit derived from low-count {\it Swift~} spectra, closely resembled each other.
Far UV spectroscopy of the 2015 eruption uncovered no evidence for neon in the ejecta \citep[hereafter \citetalias{2017ApJ...847...35D}]{2017ApJ...847...35D}. Therefore, these observations could not constrain the composition of the WD, since an ONe core might be shielded by a layer of He that grows with each eruption and H-burning episode. Modeling of the accretion disk, based on late-time and quiescent {\it Hubble Space Telescope (HST)} photometry, indicated that the accretion disk survives the eruptions, and that the quiescent accretion rate was both extremely variable and remarkably high $\sim10^{-6}\,M_\odot\,\mathrm{yr}^{-1}$ \citep[hereafter \citetalias{2017ApJ...849...96D}]{2017ApJ...849...96D}. Theoretical simulations found the eruption properties to be consistent with a $1.38\,M_\sun$ WD accreting at a rate of $1.6 \times 10^{-7}\,M_\sun$\,yr$^{-1}$ \citep{2015ApJ...808...52K,2016ApJ...830...40K,2017ApJ...838..153K}. \citetalias{2017ApJ...849...96D}\ also produced the first constraints on the mass donor: a possibly irradiated red-clump star with $L_\mathrm{donor}=103^{+12}_{-11}\,L_\odot$, $R_\mathrm{donor}=14.14^{+0.46}_{-0.47}\,R_\odot$, and $T_\mathrm{eff, donor}=4890\pm110$\,K. Finally, \citetalias{2017ApJ...849...96D}\ utilized these updated system parameters to refine the time remaining for the WD to grow to the Chandrasekhar mass to $<20$\,kyr.
By all accounts, {M31N\,2008-12a~} appeared to have become remarkably predictable even for an RN \citep[see also][for a recent review]{2017ASPC..509..515D}. Then everything changed. The 2016 eruption, predicted for mid September, did not occur until December 12th \citep{2016ATel.9848....1I}; leading to a frankly suspenseful monitoring campaign. Once detected, the optical light curve was observed to peak at a significantly brighter level than previously seen \citep{2016ATel.9857....1E,2016ATel.9861....1B}, before settling into the familiar rapid decline. When the SSS duly appeared around day 6 \citep{2016ATel.9872....1H} we believed the surprises were over. We were wrong \citep{2016ATel.9907....1H}. This paper studies the unexpected behavior of the 2016 eruption of {M31N\,2008-12a~} and discusses its impact on past and future observations.
\section{Observations and data analysis of the 2016 Eruption}\label{sec:observations}
In this section, we describe the multi-wavelength set of telescopes used in studying the 2016 eruption together with the corresponding analysis procedures. All errors are quoted to $1\sigma$ and all upper limits to $3\sigma$, unless specifically stated otherwise. The majority of the statistical analysis was carried out within the \texttt{R} software environment \citep{R_manual}. Throughout, all photometry through Johnson--Cousins filters, and the {\it HST}, {\it XMM-Newton}, and {\it Swift} flight filters, is computed in the Vega system; all photometry through Sloan filters is quoted in AB magnitudes. We assume an eruption date of 2016-12-12.32 UT, discussed in detail in Sects.~\ref{sec:time} and \ref{sec:disc_date}.
\subsection{Visible Photometry}\label{sec:optical_photometry}
Like the 2014 and 2015 eruptions before it (\citetalias{2015A&A...580A..45D}, \citetalias{2016ApJ...833..149D}), the 2016 eruption of {M31N\,2008-12a}\ was observed by a large number of ground-based telescopes operating in the visible regime. Unfortunately, due to poor weather conditions at many of the planned facilities, observations of the 2016 eruption are much sparser than in recent years.
A major achievement for the 2016 eruption campaign was the addition of extensive observations from the American Association of Variable Star Observers (AAVSO\footnote{\url{https://www.aavso.org}}), along with the continued support of the Variable Star Observers League in Japan (VSOLJ\footnote{\url{http://vsolj.cetus-net.org}}; see Section~\ref{sec:time} and Appendix~\ref{app:optical_photometry}). Observations were also obtained from the Mount Laguna Observatory (MLO) 1.0\,m telescope in California, the Ond\v{r}ejov Observatory 0.65\,m telescope in the Czech Republic, the Danish 1.54\,m telescope at La Silla in Chile, the fully-robotic 2\,m Liverpool Telescope \citep[LT;][]{2004SPIE.5489..679S} in La Palma, the 2.54\,m Isaac Newton Telescope (INT) at La Palma, the Palomar 48$^{\prime\prime}$ telescope in California, the 0.6\,m and 1\,m telescopes operated by members of the Embry Riddle Aeronautical University (ERAU) in Florida, the $2\times8.4$\,m (11.8\,m eq.) Large Binocular Telescope (LBT) on Mount Graham, Arizona, the 2\,m Himalayan Chandra Telescope (HCT) located at Indian Astronomical Observatory (IAO), Hanle, India, and the 2.4\,m {\it Hubble Space Telescope}.
\subsubsection{\textit{Hubble Space Telescope} Photometry}
The 2016 eruption, and pre-eruption interval, of {M31N\,2008-12a}\ were observed serendipitously by {\it HST} as part of Program ID:\,14651. The aim of this program was to observe the proposed ``Super-Remnant'' surrounding {M31N\,2008-12a}\ \citep[see \citetalias{2015A&A...580A..45D}\ and][]{2017arXiv171204872D}. Five pairs of orbits were tasked to obtain narrow band F657N (H$\alpha$+[N\,{\sc ii}]) and F645N (continuum) observations using Wide Field Camera 3 (WFC3) in the UVIS mode. Each orbit utilized a three-point dither to enable removal of detector defects. A `post-flash' of 12 electrons was included to minimize charge transfer efficiency (CTE) losses.
The WFC3/UVIS observations were reduced using the STScI {\tt calwf3} pipeline \citep[v3.4;][]{2012wfci.book.....D}, which includes CTE correction. Photometry of {M31N\,2008-12a}\ was subsequently performed using DOLPHOT \citep[v2.0\footnote{\url{http://americano.dolphinsim.com/dolphot}};][]{2000PASP..112.1383D} employing the standard WFC3/UVIS parameters as quoted in the accompanying manual. The resultant photometry is reported in Table~\ref{hst_photometry}; a full description of these {\it HST} data and their analysis will be reported in a follow-up paper.
\begin{table*}
\caption{{\it Hubble Space Telescope} Photometry of the 2016 Eruption of {M31N\,2008-12a}.\label{hst_photometry}}
\begin{center}
\begin{tabular}{llllllll}
\hline
Date & $\Delta t$\tablenotemark{\dag} & \multicolumn{2}{c}{MJD 57,000+} & Exposure & Filter & S/N\tablenotemark{\ddag} & Photometry \\
(UT) & (days) & Start & End & time (s) \\
\hline
2016-12-08.014 & \toe{730.014} & 729.971 & 730.058 & $3\times898$ & F657N & \phn19.7 & $23.143\pm0.055$ \\
2016-12-09.312 & \toe{731.312} & 731.295 & 731.329 & $3\times898$ & F657N & \phn14.5 & $23.500\pm0.075$ \\
2016-12-10.305 & \toe{732.305} & 732.288 & 732.322 & $3\times898$ & F657N & \phn16.8 & $23.421\pm0.065$ \\
2016-12-11.060 & \toe{733.060} & 733.016 & 733.104 & $3\times898$ & F657N & \phn17.8 & $23.327\pm0.061$ \\
2016-12-17.081 & \toe{739.081} & 739.043 & 739.118 & $3\times898$ & F657N & 165.3 & $19.348\pm0.007$\tablenotemark{a} \\
\hline
2016-12-08.140 & \toe{730.140} & 730.102 & 730.179 & $3\times935$ & F645N & \phn13.4 & $23.591\pm0.081$ \\
2016-12-09.378 & \toe{731.378} & 731.360 & 731.396 & $3\times935$ & F645N & \phn11.3 & $23.806\pm0.096$ \\
2016-12-10.371 & \toe{732.371} & 732.353 & 732.389 & $3\times935$ & F645N & \phn12.5 & $23.589\pm0.087$ \\
2016-12-11.186 & \toe{733.186} & 733.148 & 733.225 & $3\times935$ & F645N & \phn15.5 & $23.413\pm0.070$ \\
2016-12-17.159 & \toe{739.159} & 739.120 & 739.197 & $3\times935$ & F645N & \phn85.0 & $20.488\pm0.013$\tablenotemark{a}\\
\hline
\end{tabular}
\end{center}
\tablenotetext{\dag}{The time since eruption assumes an eruption date of 2016 December 12.32\,UT.}
\tablenotetext{\ddag}{Signal-to-noise ratio.}
\tablerefs{(a)~\citet{2016ATel.9874....1D}.}
\end{table*}
\subsubsection{Ground-Based Photometry}
Data from each contributing telescope were reduced following the standard procedures for those facilities, full details for those previously employed in observations of {M31N\,2008-12a}\ are presented in the Appendix of \citetalias{2016ApJ...833..149D}. For all the new facilities successfully taking data in this campaign we provide detailed information in Appendix~\ref{app:optical_photometry}. Photometry was also carried out in a similar manner to that reported in \citetalias{2016ApJ...833..149D}, using the identified secondary standards as presented in \citetalias{2016ApJ...833..149D}\ (see their Table\,10).
Preliminary photometry from several instruments was first published by the following authors as the optical light curve was evolving: \citet{2016ATel.9848....1I}, \citet{2016ATel.9857....1E}, \citet{2016ATel.9861....1B}, \citet{2016ATel.9864....1S}, \citet{2016ATel.9874....1D}, \citet{2016ATel.9881....1K}, \citet{2016ATel.9883....1H}, \citet{2016ATel.9885....1T}, \citet{2016ATel.9891....1N}, \citet{2016ATel.9906....1D}, and \citet{2016ATel.9910....1D}. All photometry from the 2016 eruption of {M31N\,2008-12a}\ is provided in Table~\ref{optical_photometry_table}.
\subsection{Visible Spectroscopy}\label{optical_spectroscopy}
The spectroscopic confirmation of the 2016 eruption of {M31N\,2008-12a}\ was announced by \citet{2016ATel.9852....1D}, with additional spectroscopic follow-up reported in \citet{2016ATel.9865....1P}. A summary of all optical spectra of the 2016 eruption of {M31N\,2008-12a}\ is shown in Table~{\ref{tab:spec}}; all the spectra are reproduced in Figure~\ref{specall}.
We obtained several spectra of the 2016 eruption with SPRAT \citep{2014SPIE.9147E..8HP}, the low-resolution, high-throughput spectrograph on the LT. SPRAT covers the wavelength range of $4000-8000$\,\AA\ and uses a $1^{\prime\prime}\!\!.8$ slit, giving a resolution of $\sim$18\,\AA. We obtained our spectra using the blue-optimized mode. The data were reduced using a combination of the LT SPRAT reduction pipeline and standard routines in IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.} \citep{1993ASPC...52..173T}. The spectra were calibrated using previous observations of the standard star G191-B2B against data from \citet{1990AJ.....99.1621O} obtained via ESO. Conditions on La Palma were poor while the nova was accessible to SPRAT during the 2016 eruption, so the absolute flux levels may be unreliable.
We obtained an early spectrum of the nova, 0.54\,days after eruption, using the Andaluc\'\i a Faint Object Spectrograph and Camera (ALFOSC) on the 2.5\,m Nordic Optical Telescope (NOT) at the Roque de los Muchachos Observatory on La Palma. Grism \#7 and a slit width of $1^{\prime\prime}\!\!.3$ yielded a spectral resolution of 8.5\,\AA\ at the centre of the useful wavelength range $4000-7070$\,\AA\ ($R \sim 650$). The 1500\,s spectrum was imaged on the $2048 \times 2048$ pixel CCD \#14 with binning $2 \times 2$. We performed the observation under poor seeing conditions ($\sim 2^{\prime\prime}\!\!.5$). We reduced the raw images using standard IRAF procedures, and then performed an optimal extraction of the target spectrum with {\sc starlink}/{\sc pamela} \citep{1989PASP..101.1032M}. The pixel-to-wavelength solution was computed by comparison with 25 emission lines of the spectrum of a HeNe arc lamp. We used a 4th-order polynomial that provided residuals with an rms more than 10 times smaller than the spectral dispersion.
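The pixel-to-wavelength step described above amounts to a low-order polynomial fit over identified arc lines, validated by its rms residual. A minimal sketch follows; the line positions and dispersion relation are synthetic stand-ins, not the actual HeNe line list:

```python
import numpy as np

# Pixel-to-wavelength calibration sketch: fit a 4th-order polynomial to
# (pixel, wavelength) pairs of identified arc-lamp lines and check that
# the rms residual is far below the dispersion. Synthetic values only.
rng = np.random.default_rng(0)
pix = np.linspace(50, 2000, 25)                    # 25 identified arc lines
true_wav = 4000.0 + 1.5 * pix + 1e-5 * pix**2      # fake dispersion relation
wav = true_wav + rng.normal(0.0, 0.05, pix.size)   # line-centroiding noise

x = pix / 1000.0                                   # rescale for numerical conditioning
coeffs = np.polyfit(x, wav, deg=4)
rms = np.sqrt(np.mean((wav - np.polyval(coeffs, x)) ** 2))
print(f"rms residual: {rms:.3f} A")
```

With a dispersion of order 1.5\,\AA\,pixel$^{-1}$, an rms of a few hundredths of an \AA ngstr\"om comfortably satisfies the "more than 10 times smaller than the dispersion" criterion quoted above.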
In addition, 1.87 days after eruption, we obtained a spectrum of {M31N\,2008-12a}\ using the blue channel of the 10\,m Hobby-Eberly Telescope's (HET) new integral-field Low Resolution Spectrograph \citep[LRS2-B;][]{2014SPIE.9147E..0AC,2016SPIE.9908E..4CC}. This dual-beam instrument uses 280 fibers and a lenslet array to produce spectra with a resolution of $R \sim 1910$ between the wavelengths 3700 and 4700\,\AA, and $R \sim 1140$ between 4600 and 7000\,\AA\ over a $12^{\prime\prime} \times 6^{\prime\prime}$ region of sky. The seeing for our observations was relatively poor ($1\farcs 8$), and the total exposure time was 30 minutes, split into 3 ten-minute exposures.
Reduction of the LRS2-B data was accomplished using Panacea\footnote{\url{https://github.com/grzeimann/Panacea}}, a general-purpose IFU reduction package built for HET. After performing the initial CCD reductions (overscan removal and bias subtraction), we derived the wavelength solution, trace model, and spatial profile of each fiber using data from twilight sky exposures taken at the beginning of the night. From these models, we extracted each fiber's spectrum and rectified the wavelength to a common grid. Finally, at each wavelength in the grid, we fit a second-order polynomial to M31's background starlight and subtracted it from the Gaussian-shaped point source assumed for the nova.
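The per-wavelength background subtraction can be sketched as follows: fibers well away from the nova constrain a 2nd-order polynomial background, which is then removed from the spatial profile, leaving the point source. All positions and flux values below are synthetic illustrations, not LRS2-B data:

```python
import numpy as np

# Sketch of per-wavelength background subtraction: mask fibers near the
# point source, fit a 2nd-order polynomial to the remaining (background)
# fibers, and subtract it from the spatial profile. Synthetic values only.
x = np.linspace(-6.0, 6.0, 121)                        # fiber positions (arcsec)
background = 2.0 + 0.1 * x + 0.05 * x**2               # smooth galaxy light
source = 5.0 * np.exp(-0.5 * ((x - 0.3) / 1.0) ** 2)   # Gaussian point source
rng = np.random.default_rng(1)
data = background + source + rng.normal(0.0, 0.02, x.size)

mask = np.abs(x - 0.3) > 3.0                   # fibers well away from the nova
coeffs = np.polyfit(x[mask], data[mask], deg=2)
residual = data - np.polyval(coeffs, x)        # background-subtracted profile
peak = residual[np.argmin(np.abs(x - 0.3))]
print(f"background-subtracted peak: {peak:.2f}")
```

In a real IFU reduction this fit is repeated at every wavelength of the rectified grid, so that wavelength-dependent structure in the galaxy background is removed before the nova spectrum is extracted.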
Two epochs of spectra were obtained using the Himalayan Faint Object Spectrograph and Camera (HFOSC) mounted on the 2\,m Himalayan Chandra Telescope (HCT) located at Indian Astronomical Observatory (IAO), Hanle, India. HFOSC is equipped with a 2k$\times$4k E2V CCD with a pixel size of $15\times15$\,$\mu$m. Spectra were obtained in the wavelength range $3800-8000$\,\AA\ on 2016 December 13.61 and 14.55\,UT. The spectroscopic data were bias-subtracted, flat-field corrected, and extracted using the optimal extraction method. An FeAr arc lamp spectrum was used for wavelength calibration. The spectrophotometric standard star Feige\,34 was used to obtain the instrumental response for flux calibration.
Three spectra were obtained with the 3.5\,m Astrophysical Research Consortium (ARC) telescope at the Apache Point Observatory (APO), during the first half of the night on 2016 December 12, 13, and 17 (UT December 13, 14, and 18). We observed with the Dual Imaging Spectrograph (DIS):\ a medium dispersion long slit spectrograph with separate collimators for the red and blue parts of the spectrum and two 2048$\times$1028 E2V CCD cameras, with the transition wavelength around 5350\,\AA. For the blue branch, a 400 lines mm$^{-1}$ grating was used, while the red branch was equipped with a 300 lines mm$^{-1}$ grating. The nominal dispersions were 1.83 and 2.31\,\AA\,pixel$^{-1}$, respectively, with central wavelengths at 4500 and 7500\,\AA. The wavelength regions actually used were 3500--5400\,\AA\ and 5300--9900\,\AA\ for blue and red, respectively. A $1^{\prime\prime}\!\!.5$ slit was employed. Exposure times were 2700\,s. At least three exposures were obtained per night. Each on-target series of exposures was followed by a comparison lamp exposure (HeNeAr) for wavelength calibration. A spectrum of a spectrophotometric flux standard (BD+28\,4211) was also acquired during each night, along with bias and flat field calibration exposures. The spectra were reduced using Python scripts to perform standard flat field and bias corrections to the 2-D spectral images. Extraction traces and sky regions were then defined interactively on the standard star and object spectral images. Wavelength calibration was determined using lines identified on the extracted HeNeAr spectra. We then determined the solution by fitting a 3rd-order polynomial to these measured wavelengths. Flux calibration was determined by measuring the ratio of the observed standard-star fluxes to the known fluxes as a function of wavelength.
We performed these calibrations independently for the red and blue spectra, so that the clear agreement in the overlapping regions of the wavelength ranges confirms that our calibration and reduction procedure was successful.
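The dispersion-solution step described above can be sketched as follows. This is an illustrative Python example only: the pixel centroids and reference wavelengths are invented placeholders standing in for the actual HeNeAr line identifications.

```python
import numpy as np

# Hypothetical arc-line identifications: pixel centroids measured on the
# extracted HeNeAr spectrum and their reference wavelengths (Angstrom).
# These values are placeholders, not the real line list.
pixels = np.array([150.0, 420.0, 760.0, 1100.0, 1510.0, 1900.0])
waves = np.array([4071.1, 4564.8, 5196.9, 5840.5, 6632.0, 7400.5])

# Fit the third-order polynomial dispersion solution described in the text.
coeffs = np.polyfit(pixels, waves, deg=3)
dispersion = np.poly1d(coeffs)

# Assign a wavelength to every detector column of a 2048-pixel axis.
wavelength_axis = dispersion(np.arange(2048))

# Fit residuals at the line positions gauge the calibration quality.
residuals = waves - dispersion(pixels)
```

In practice the residuals of such a fit are inspected, and discrepant line identifications are rejected, before the solution is applied to the object spectra.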
\begin{table}
\caption{Summary of the Optical Spectra of the 2016 Eruption of {M31N\,2008-12a}.\label{tab:spec}}
\begin{center}
\begin{tabular}{llll}
\hline
Date (UT)&$\Delta t$ &Instrument & Exposure\\
2016 Dec & (days) & \& Telescope & time (s)\\
\hline
12.86 &0.54$\pm$0.01 &ALFOSC/NOT & $1\times1500$\\
12.93 &0.61$\pm$0.06 &SPRAT/LT & $6\times\phn900$ \\
13.14 &0.82$\pm$0.11 &DIS/ARC &\\
13.61 &1.29$\pm$0.02 &HFOSC/HCT & $1\times3600$\\
13.98 &1.66$\pm$0.07 &SPRAT/LT & $6\times\phn900$ \\
14.12 &1.80$\pm$0.08 &DIS/ARC &\\
14.19 &1.87$\pm$0.02 &LRS2-B/HET &$3\times\phn600$\\
14.55 &2.23$\pm$0.02 &HFOSC/HCT & $1\times2700$\\
14.90 &2.58$\pm$0.05 &SPRAT/LT & $6\times\phn900$ \\
15.91 &3.59$\pm$0.02 &SPRAT/LT & $3\times\phn900$ \\
16.85 &4.53$\pm$0.02 &SPRAT/LT & $3\times\phn900$ \\
18.15 &5.83$\pm$0.05 &DIS/ARC &\\
\hline
\end{tabular}
\end{center}
\tablecomments{The time since eruption assumes an eruption date of 2016 December 12.32\,UT. The error bars do not include the systematic uncertainty in this eruption date, but rather represent the total exposure time or the time span between combined exposures of a given epoch.}
\end{table}
\subsection{X-ray and UV observations}\label{swift_data}
A Neil Gehrels {\it Swift~} Observatory \citep{2004ApJ...611.1005G} target of opportunity (ToO) request was submitted immediately after confirming the eruption and the satellite began observing the nova on 2016-12-12.65 UT \citep[cf.][]{2016ATel.9853....1H}, only four hours after the optical discovery. All {\it Swift~} observations are summarized in Table~\ref{tab:swift}. The {\it Swift~} target ID of {M31N\,2008-12a~}\ is always 32613. Because of the low-Earth orbit of the satellite, a {\it Swift~} observation is normally split into several snapshots, which we list separately in Table~\ref{tab:swift_split}.
\begin{table*}
\caption{{\it Swift~} Observations of {M31N\,2008-12a}\ for the 2016 Eruption.}
\label{tab:swift}
\begin{center}
\begin{tabular}{rrrrrrr}\hline\hline \noalign{\smallskip}
ObsID & Exp$^a$ & Date$^b$ & MJD$^b$ & $\Delta t^c$ & uvw2$^d$ & XRT Rate$^e$ \\
& (ks) & (UT) & (d) & (d) & (mag) & (\power{-2}\,ct\,s$^{-1}$)\\ \hline \noalign{\smallskip}
00032613183 & 3.97 & 2016-12-12.65 & 57734.65 & 0.33 & $16.7\pm0.1$ & $<0.3$ \\
00032613184 & 4.13 & 2016-12-13.19 & 57735.19 & 0.87 & $17.3\pm0.1$ & $<0.2$ \\
00032613185 & 3.70 & 2016-12-14.25 & 57736.26 & 1.94 & $17.9\pm0.1$ & $<0.3$ \\
00032613186 & 3.23 & 2016-12-15.65 & 57737.65 & 3.33 & $18.6\pm0.1$ & $<0.4$ \\
00032613188 & 1.10 & 2016-12-16.38 & 57738.38 & 4.06 & $18.7\pm0.1$ & $<0.7$ \\
00032613189 & 3.86 & 2016-12-18.10 & 57740.10 & 5.78 & $19.3\pm0.1$ & $0.6\pm0.1$ \\
00032613190 & 4.03 & 2016-12-19.49 & 57741.50 & 7.18 & $20.0\pm0.2$ & $0.4\pm0.1$ \\
00032613191 & 2.02 & 2016-12-20.88 & 57742.89 & 8.57 & $20.6\pm0.3$ & $1.9\pm0.3$ \\
00032613192 & 3.95 & 2016-12-21.49 & 57743.49 & 9.17 & $20.9\pm0.3$ & $1.5\pm0.2$ \\
00032613193 & 2.53 & 2016-12-22.68 & 57744.69 & 10.37 & $20.4\pm0.2$ & $1.7\pm0.3$ \\
00032613194 & 2.95 & 2016-12-23.67 & 57745.68 & 11.36 & $20.8\pm0.3$ & $1.4\pm0.2$ \\
00032613195 & 2.90 & 2016-12-24.00 & 57746.01 & 11.69 & $20.5\pm0.2$ & $0.7\pm0.2$ \\
00032613196 & 2.73 & 2016-12-25.00 & 57747.01 & 12.69 & $>21.1$ & $0.6\pm0.2$ \\
00032613197 & 2.71 & 2016-12-26.20 & 57748.20 & 13.88 & $>21.1$ & $0.3\pm0.2$ \\
00032613198 & 2.84 & 2016-12-27.72 & 57749.73 & 15.41 & $>21.1$ & $<0.5$ \\
00032613199 & 3.23 & 2016-12-28.19 & 57750.19 & 15.87 & $>21.2$ & $<0.4$ \\
00032613200 & 2.65 & 2016-12-29.45 & 57751.46 & 17.14 & $>21.1$ & $<0.5$ \\
00032613201 & 3.05 & 2016-12-30.05 & 57752.05 & 17.73 & $>20.9$ & $<0.4$ \\
00032613202 & 2.88 & 2016-12-31.58 & 57753.58 & 19.26 & $>21.1$ & $<0.3$ \\\hline
\end{tabular}
\end{center}
\noindent
\tablenotetext{a}{Exposure time includes dead-time corrections.}
\tablenotetext{b}{Observation start date.}
\tablenotetext{c}{Time in days after the eruption date on 2016-12-12.32 UT (MJD 57734.32)}
\tablenotetext{d}{The {\it Swift~} UVOT uvw2 filter has a central wavelength of 1930\,\AA\ with a FWHM of about 660\,\AA.}
\tablenotetext{e}{Count rates are measured in the 0.3--1.5 keV range.}
\end{table*}
\begin{table*}
\caption{Stacked {\it Swift~} UVOT Observations and Photometry as Plotted in Figure~\ref{fig:uvot_lc}.\label{tab:uvot_merge}}
\begin{center}
\begin{tabular}{lllllll}
\hline
{ObsIDs$^a$} & {Exp$^b$} & {Date$^c$} & {MJD$^c$} & {$\Delta t^c$} & {Length$^d$} & {uvw2} \\
& {(ks)} & {(UT)} & {(d)} & {(d)} & {(d)} & {(mag)} \\
\hline
00032613196/198 & 8.3 & 2016-12-26.37 & 57748.37 & 14.05 & 2.72 & $21.7\pm0.4$ \\
00032613199/200 & 5.9 & 2016-12-28.83 & 57750.83 & 16.51 & 1.27 & $<21.5$\\
\hline
\end{tabular}
\end{center}
\tablenotetext{a}{Start/End observation for each stack (cf.\ Table~\ref{tab:swift})}
\tablenotetext{b}{Summed up exposure.}
\tablenotetext{c}{Dates refer to the midpoint of the stack; $\Delta t$ is the time between the eruption date (MJD 57734.32; cf.\ Section~\ref{sec:time}) and the stack midpoint.}
\tablenotetext{d}{Time in days from the first observation of the stack to the last one.}
\end{table*}
In addition, we triggered a 100\,ks {\it XMM-Newton~} \citep{2001A&A...365L...1J} ToO that was originally aimed at obtaining a high-resolution X-ray spectrum of the SSS variability phase. Due to the inconvenient eruption date, 14 days before the {\it XMM-Newton~} window opened, and the surprisingly fast light curve evolution, discussed in detail below, only low resolution spectra and light curves could be obtained. The {\it XMM-Newton~} object ID is 078400. The ToO was split into two observations which are summarized in Table~\ref{tab:xmm}. Since 2008, no eruption of {M31N\,2008-12a~}\ had occurred within one of the relatively narrow {\it XMM-Newton~} visibility windows from late December to mid February and July to mid August (cf.\ Table~\ref{eruption_history}).
The {\it Swift~} UV/optical telescope \citep[UVOT,][]{2005SSRv..120...95R} magnitudes were obtained via the HEASoft (v6.18) tool \texttt{uvotsource}, based on aperture photometry of carefully selected source and background regions. We stacked individual images using \texttt{uvotimsum}. In contrast to previous years, our 2016 coverage exclusively used the uvw2 filter, which has a central wavelength of 1930\,\AA. The photometric calibration assumes the UVOT photometric (Vega) system \citep{2008MNRAS.383..627P,2011AIPC.1358..373B} and has not been corrected for extinction.
In the case of the {\it Swift~} X-ray telescope \citep[XRT;][]{2005SSRv..120..165B} data we used the on-line software\footnote{\url{http://www.swift.ac.uk/user\_objects}} of \citet{2009MNRAS.397.1177E} to extract count rates and upper limits for each observation and snapshot, respectively. Following the recommendation for SSSs, we extracted only grade-zero events. The on-line software uses the Bayesian formalism of \citet{1991ApJ...374..344K} to estimate upper limits for low numbers of counts. All XRT observations were taken in the photon counting (PC) mode.
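For low numbers of counts, the Bayesian formalism of \citet{1991ApJ...374..344K} derives an upper limit from the posterior of the source counts given the observed total counts and the expected background. The following is a minimal numerical sketch in Python; the on-line software's actual implementation additionally handles exposure, vignetting, and PSF corrections, which are omitted here.

```python
import numpy as np

def kraft_upper_limit(n_obs, bkg, cl=0.9973, s_max=50.0, n_grid=20001):
    """Bayesian upper limit on Poisson source counts in the spirit of
    Kraft, Burrows & Nousek (1991): the posterior for source counts s,
    given n_obs total counts and a known background expectation bkg > 0,
    is proportional to exp(-(s + bkg)) * (s + bkg)**n_obs for s >= 0,
    renormalized over that half-line."""
    s = np.linspace(0.0, s_max, n_grid)
    # Unnormalized log-posterior (the n! factor cancels on normalization).
    log_post = -(s + bkg) + n_obs * np.log(s + bkg)
    post = np.exp(log_post - log_post.max())
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    # Smallest s whose cumulative posterior reaches the confidence level.
    return s[np.searchsorted(cdf, cl)]

# e.g. zero observed counts with an expected background of 0.5 counts
ul = kraft_upper_limit(0, 0.5)
```

For zero observed counts the background term drops out of the posterior shape, and the 99.73\% upper limit tends to $-\ln(0.0027)\approx5.9$ counts.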
The {\it XMM-Newton~} X-ray data were obtained with the thin filter for the pn and MOS detectors of the European Photon Imaging Camera \citep[EPIC;][]{2001A&A...365L..18S,2001A&A...365L..27T}. They were processed with XMM-SAS (v15.0.0) starting from the observation data files (ODF) and using the most recent current calibration files (CCF). We used \texttt{evselect} to extract spectral counts and light curves from source and background regions that were defined by eye on the event files from the individual detectors. We filtered the event list by extracting a background light curve in the 0.2--0.7\,keV range (optimized after extracting the first spectra, see Section~\ref{sec:xmm_spec}) and removing the episodes of flaring activity.
In addition, we obtained UV data using the {\it XMM-Newton~} optical/UV monitor telescope \citep[OM;][]{2001A&A...365L..36M}. All OM exposures were taken with the uvw1 filter, which has a slightly different but comparable throughput as the {\it Swift~} UVOT filter of the same name \citep[cf.][]{2005SSRv..120...95R}. The central wavelength of the OM uvw1 filter is 2910\,\AA\ with a width of 830\,\AA\ \citep[cf.\ UVOT uvw1:\ central wavelength 2600\,\AA, width 693\,\AA; see][]{2008MNRAS.383..627P}. We estimated the magnitude of {M31N\,2008-12a~}\ in both observations via carefully selected source and background regions, which were based on the {\it Swift~} UVOT apertures. Our estimates include (small) coincidence corrections and a PSF curve-of-growth correction. The latter became necessary because the size of the source region needed to be restricted to avoid contamination by neighboring sources. The count rate and uncertainties were converted to magnitudes using the CCF zero points.
As in previous papers on this object (\citetalias{2014A&A...563L...8H}, \citetalias{2015A&A...580A..46H}, \citetalias{2016ApJ...833..149D}), the X-ray spectral fitting was performed in \texttt{XSPEC} \citep[v12.8.2;][]{1996ASPC..101...17A} using the T\"ubingen-Boulder ISM absorption model (\texttt{TBabs} in \texttt{XSPEC}) and the photoelectric absorption cross-sections from \citet{1992ApJ...400..699B}. We assumed the ISM abundances from \citet{2000ApJ...542..914W} and applied Poisson likelihood ratio statistics \citep{1979ApJ...228..939C}.
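The shape of the absorbed-blackbody model used in these fits can be sketched in Python. This is a rough illustration only: the actual analysis uses \texttt{TBabs} with the \citet{1992ApJ...400..699B} cross-sections inside \texttt{XSPEC}, whereas the $E^{-3}$ cross-section below is a crude placeholder.

```python
import numpy as np

def absorbed_blackbody(energy_kev, kT_kev, nh_1e21):
    """Toy absorbed-blackbody photon spectrum (arbitrary normalization).
    The photoelectric cross-section here is a rough E^-3 power-law
    placeholder; the real fits use TBabs with the Balucinska-Church
    cross-sections inside XSPEC."""
    # Blackbody photon spectrum: N(E) ~ E^2 / (exp(E/kT) - 1)
    bb = energy_kev**2 / np.expm1(energy_kev / kT_kev)
    # Placeholder ISM absorption: sigma ~ 2e-22 cm^2 (E/1 keV)^-3 per H atom
    sigma = 2e-22 * energy_kev**-3
    tau = nh_1e21 * 1e21 * sigma
    return bb * np.exp(-tau)

E = np.linspace(0.2, 1.5, 200)            # keV, the SSS-relevant band
model = absorbed_blackbody(E, 0.12, 0.7)  # kT ~ 120 eV, N_H = 0.7e21 cm^-2
```

Even this toy version shows the key behavior: the absorption column suppresses the soft end of the spectrum most strongly, which is why the fitted temperature is sensitive to the assumed $N_{\rm H}$.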
\begin{table*}
\caption{{\it XMM-Newton~} Observations of {M31N\,2008-12a}\ in 2016.}
\label{tab:xmm}
\begin{center}
\begin{tabular}{lrrrrrrr}\hline\hline \noalign{\smallskip}
ObsID & Exp$^a$ & GTI$^b$ & MJD$^c$ & $\Delta t^d$ & uvw1$^e$ & EPIC Rate & Equivalent XRT Rate$^f$\\
 & (ks) & (ks) & (d) & (d) & (mag) & (\power{-2}\,ct\,s$^{-1}$) & (\power{-4}\,ct\,s$^{-1}$)\\ \hline \noalign{\smallskip}
0784000101 & 33.5 & 16.1 & 57748.533 & 14.21 & $21.6^{+0.3}_{-0.2}$ & $1.9\pm0.2$ & $7.3\pm0.6$ \\
0784000201 & 63.0 & 40.0 & 57750.117 & 15.80 & $21.6\pm0.2$ & $1.0\pm0.1$ & $3.3\pm0.2$ \\
\hline
\end{tabular}
\end{center}
\noindent
\tablenotetext{a}{Dead-time corrected exposure time for {\it XMM-Newton~} EPIC pn prior to GTI filtering for high background.}
\tablenotetext{b}{Exposure time for {\it XMM-Newton~} EPIC pn after GTI filtering for high background.}
\tablenotetext{c}{Start date of the observation.}
\tablenotetext{d}{Time in days after the eruption of nova {M31N\,2008-12a~} in the optical on 2016-12-12.32 UT \citep[MJD = 57734.32; see][]{2016ATel.9848....1I}}
\tablenotetext{e}{The OM filter was uvw1 (central wavelength 2910\,\AA\ with a width of 830\,\AA.)}
\tablenotetext{f}{Theoretical {\it Swift~} XRT count rate (0.3--10.0 keV) extrapolated based on the 0.2--1.0\,keV EPIC pn count rates, in the previous column, and assuming the best-fit blackbody spectrum and foreground absorption.}
\end{table*}
\section{Panchromatic eruption light curve (visible to soft X-ray)}\label{sec:vis_lc}
\subsection{Detection and time of the eruption}\label{sec:time}
With a nova that evolves as rapidly as {M31N\,2008-12a}, early detection of each eruption is crucial. Following the successful eruption detection campaigns for the 2014 and 2015 outbursts, in 2016 we grew our large, multi-facility monitoring campaign into a global collaboration. The professional telescopes at the LT, Las Cumbres Observatory \citep[LCO;][the 2\,m at Haleakala, Hawai'i, and the 1\,m at McDonald, Texas]{2013PASP..125.1031B}, and Ond\v{r}ejov Observatory were joined by a network of highly motivated and experienced amateur observers in Canada, China, Finland, Germany, Italy, Japan, the United Kingdom, and the United States. A large part of their effort was coordinated through the AAVSO and the VSOLJ (see Appendix~\ref{app:optical_photometry} for details). The persistence of the amateur observers in our team, during six suspenseful months of monitoring, allowed us to discover the eruption at an earlier stage than in previous years.
The 2016 eruption of {M31N\,2008-12a}\ was first detected on 2016 December 12.4874 (UT) by the 0.5\,m f/6 telescope at the Itagaki Astronomical Observatory in Japan at an unfiltered magnitude of 18.2 \citep{2016Ita}. The previous non-detection took place at the LCO 1\,m (McDonald) just 0.337\,days earlier, providing an upper limit of $r'>19.1$. A deeper upper limit of $u'>22.2$ was provided by the LT and its automated real-time alert system \citep[see][]{2007ApJ...661L..45D} 0.584\,days pre-detection. The 2016 eruption was spectroscopically confirmed almost simultaneously by the NOT and LT, 0.37 and 0.39\,days post-detection, respectively \citep{2016ATel.9852....1D}.
All subsequent analysis assumes that the 2016 eruption of nova {M31N\,2008-12a~}\ ($\Delta t=0$) occurred on 2016-12-12.32 UT ($\mathrm{MJD}= 57734.32$). This date is defined as the midpoint between the last upper limit (2016-12-12.15 UT; LCO) and the discovery observation (2016-12-12.49 UT; Itagaki observatory), as first reported by \citet{2016ATel.9848....1I}. The corresponding uncertainty on the eruption date is $\pm0.17$\,d. The corresponding dates of the 2013, 2014, and 2015 eruptions, to which we will compare our new results, are listed in Table~\ref{eruption_history}.
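The adopted eruption date and its uncertainty follow directly from this midpoint construction:

```python
# Midpoint between the last upper limit and the discovery observation,
# expressed in MJD, reproduces the adopted eruption date and uncertainty.
last_upper_limit = 57734.15   # 2016-12-12.15 UT (LCO non-detection)
discovery = 57734.49          # 2016-12-12.49 UT (Itagaki detection)

eruption_mjd = 0.5 * (last_upper_limit + discovery)   # MJD 57734.32
uncertainty = 0.5 * (discovery - last_upper_limit)    # +/- 0.17 d
```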
\subsection{Pre-eruption evolution?}
The {\it HST} photometry serendipitously obtained over the five-day pre-eruption period is shown in Figure~\ref{hst_pre}. The H$\alpha$ photometry is shown by the black points and the narrow-band continuum by the red. Clear variability is seen during this pre-eruption phase. As this variability appears in both H$\alpha$ and the continuum, it is possible that it is continuum driven. The system has a clear H$\alpha$ excess immediately before eruption, but the H$\alpha$ excess appears to diminish as the continuum rises. Following the discussion presented in \citetalias{2017ApJ...849...96D}, it is possible that such H$\alpha$ emission arises from the {M31N\,2008-12a}\ accretion disk, which may be generating a significant disk wind.
\begin{figure}
\includegraphics[width=\columnwidth]{fig01.pdf}
\caption{{\it Hubble Space Telescope} WFC3/UVIS narrow-band photometry of {M31N\,2008-12a}\ over the five days before the onset of the 2016 eruption. Red points: F645N ``continuum'' photometry; black points: F657N ``H$\alpha$+[N\,{\sc ii}]'' photometry. The absolute magnitude assumes a distance to M\,31 of 770\,kpc \citep{1990ApJ...365..186F} and reddening toward {M31N\,2008-12a}\ of $E_{B-V}=0.1$ (\citetalias{2017ApJ...847...35D}). \label{hst_pre}}
\end{figure}
The continuum flux during this period is broadly consistent with the quiescent luminosity of the system (see \citetalias{2017ApJ...849...96D}). Therefore, it is unclear whether this behavior is a genuine pre-eruption phenomenon, or related to variability at quiescence with a characteristic time scale of up to a few days, with possible causes being accretion disk flickering, or even orbital modulation. Through constraining the mass donor, \citetalias{2017ApJ...849...96D}\ indicated that the orbital period of the {M31N\,2008-12a}\ binary should be $\gtrsim5$\,days. Such variation, as shown in Figure~\ref{hst_pre}, would not be inconsistent with that constraint.
\subsection{Visible and ultraviolet light curve}
\label{sec:vis_lc_vis}
Following the 2015 eruption, \citetalias{2016ApJ...833..149D}\ noted that the 2013, 2014, and 2015 eruption light curves were remarkably similar, spanning from the $I$-band to the near-UV (redder pass-bands only have data from 2015); see the red data points in Figure~\ref{optical_lc}. Based on those observations, \citetalias{2016ApJ...833..149D}\ defined four phases of the light curve: {\it the final rise (Day 0--1)}, a regime sparsely populated with data due to the rapid increase to maximum light; {\it the initial decline (Day 1--4)}, where an exponential decline in flux (linear in magnitude) is observed from the NUV to the near-infrared (see, in particular, the red data points in Figure~\ref{optical_zoom}); {\it the plateau (Day 4--8)}, a relatively flat, but jittery, region of the light curve which is time coincident with the SSS onset; and {\it the final decline (Day $>8$)}, where a power-law (in flux) decline may be present.
While the combined 2013--2015 light curve defined these four phases, the individual light curves from each of those eruptions were also consistent with those patterns (see Figures~\ref{optical_lc} and \ref{optical_zoom}). A time-resolved SED of the well-covered 2015 eruption was presented by \citetalias{2016ApJ...833..149D}. Unfortunately, due to severe weather constraints, our 2016 campaign did not obtain sufficient simultaneous multi-filter data to compare the SED evolution. However, we find that the 2015 and 2016 light curves are largely consistent (Figure~\ref{optical_lc}), except for the surprising features we present in the following text.
\begin{figure*}
\includegraphics[width=\textwidth]{fig02.pdf}
\caption{Visible photometry of the past four eruptions of {M31N\,2008-12a}. Black points show the 2016 data (see Table~\ref{optical_photometry_table}). The red points indicate combined data from the 2013--2015 eruptions (\citetalias{2014A&A...563L...9D}, \citetalias{2015A&A...580A..45D}, \citetalias{2016ApJ...833..149D}, and \citetalias{2014ApJ...786...61T}). We show the SSS turn-on/off times of the \textit{2015} eruption as vertical gray lines, with their uncertainties marked by the shaded areas. For the 2013--2015 light curves combined, the inclined gray lines indicate an exponential decay in luminosity during the range of $1\leq\Delta t\leq4$ days (\citetalias{2016ApJ...833..149D}).\label{optical_lc}}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{fig03.pdf}
\caption{As Figure~\ref{optical_lc}, but focusing on $0\leq t\leq4$\,days. The $i'$-band data are excluded as there were no discrepancies between the very limited 2016 $i'$ dataset and the extensive 2013--2015 dataset; the $g'$-band data are excluded as no pre-2016 data exist.\label{optical_zoom}}
\end{figure*}
First, we look at the initial decline phase of the 2016 eruption. We examine this region of the light curve first because, in previous eruptions, it has shown the simplest evolution -- a linear decline -- which was used by \citetalias{2016ApJ...833..149D}\ to tie together the epochs of the 2013, 2014, and 2015 eruptions. Due to poor conditions at many of the planned sites the data here are admittedly sparse, but they are generally consistent with the linear behavior seen in the past three eruptions. There may, however, be evidence for a deviation, approximately one magnitude upward, toward the end of this phase in the $u'$- and $r'$-band data at $t\gtrsim3.6$\,days post-eruption.
However, the largest deviation from the 2013--2015 behavior occurs during the final rise phase, between $0\leq t\leq1$\,days. There appears to be a short-lived, `cuspy' feature in the light curves seen through all filters (except the $B$-band, where there was limited coverage) and in the unfiltered observations (see Figures~\ref{optical_lc}, \ref{optical_zoom}, and \ref{fastphot}, which progressively focus on the `cusp'). The variation between the peak luminosity of the 2013--2015 eruptions and the 2016 eruption is shown in Table~\ref{max_deviation}; in all useful bands the deviation was significant. The average (across all bands) increase in maximum magnitude was 0.64\,mag, making the 2016 eruption almost twice as luminous at peak as the 2013--2015 eruptions. Notably, this over-luminous peak occurred much earlier than the 2013--2015 peaks. The mean time of peak in 2013--2015 was $t\simeq1.0$\,days (across the $u'$, $B$, $R$, $r'$, and $I$ filters), whereas the bright cusp in 2016 occurred at $t\simeq0.65$\,days.
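The statement that a 0.64\,mag increase corresponds to almost twice the luminosity follows from the Pogson relation:

```python
# Pogson relation: a magnitude difference delta_m corresponds to a flux
# (luminosity) ratio of 10**(delta_m / 2.5).
delta_m = 0.64                       # mean increase in maximum magnitude
flux_ratio = 10 ** (delta_m / 2.5)   # ~1.8, i.e. almost twice as luminous
```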
\begin{figure*}
\includegraphics[width=\columnwidth]{fig04a.pdf}\hfill
\includegraphics[width=\columnwidth]{fig04b.pdf}
\caption{Broad-band and unfiltered photometry of the {M31N\,2008-12a}\ `cusp'. In both sub-plots, the blue points show the combined $r'$-band photometry from the 2013, 2014, and 2015 eruptions, with the solid line showing the template 2013--2015 $r'$-band light curve and its associated uncertainties (see \citetalias{2016ApJ...833..149D}). {\bf Left:} Broad-band photometry of the `cusp' of the 2016 eruption of {M31N\,2008-12a}. Red points show $r'$-band, magenta points $g'$-band, and black points $V$-band data.\label{fastphot} {\bf Right:} A comparison between the {\it unfiltered} photometry of the 2010 (red) and 2016 (black) eruptions of {M31N\,2008-12a}; the black stars indicate photometry of the 2016 eruption with no computed uncertainties.\label{2010comp}}
\end{figure*}
\begin{table}
\caption{Comparison Between the Maximum Observed Magnitudes from the 2013--2015 and 2016 Eruptions of {M31N\,2008-12a}.\label{max_deviation}}
\begin{center}
\begin{tabular}{llll}
\hline
Filter & \multicolumn{2}{c}{$m_\mathrm{max}$ (mag)} & `$\Delta m_\mathrm{max}$'\\
& 2013--2015\tablenotemark{a} & 2016\tablenotemark{b} & (mag)\\
\hline
$u'$ & $18.35\pm0.03$ & $17.85\pm0.04$ & $0.50\pm0.05$\\
$B$\tablenotemark{c} & $18.67\pm0.02$ & $18.50\pm0.10$ & $0.17\pm0.10$\\
$V$ & $18.55\pm0.01$ & $17.6$ & 1.0\\
$R$ & $18.38\pm0.02$ & $17.76\pm0.05$ & $0.62\pm0.05$ \\
$r'$ & $18.45\pm0.01$ & $17.98\pm0.04$ & $0.47\pm0.04$ \\
$I$ & $18.31\pm0.03$ & $17.68\pm0.08$ & $0.63\pm0.09$\\
\hline
\end{tabular}
\end{center}
\tablenotetext{a}{As calculated by \citetalias{2016ApJ...833..149D}, based on a fit to the combined 2013--2015 light curves.}
\tablenotetext{b}{The most luminous observation of the 2016 eruption, those without error bars are estimated maxima from multiple observations and observers.}
\tablenotetext{c}{The $B$-band coverage during the 2016 peak was limited.}
\end{table}
The INT and ERAU obtained a series of fast photometric observations of the 2016 eruption through $g'$, $i'$ (ERAU only), and $r'$-band filters during the final rise phase. Figure~\ref{fastphot} (left) compares this photometry with the 2013--2015 $r'$-band eruption photometry. This figure clearly illustrates the short-lived, bright, optical `cusp', but also its highly variable nature over a short time-scale, with variations of up to 0.4\,mag occurring over just 90 minutes. The $(g'-r')$ color during this period is consistent with the cusp light curve being achromatic. We derive $(g'-r')_0=0.15\pm0.03$ for the cusp period, which is roughly consistent with the {M31N\,2008-12a}\ color during the peak of the 2013--2015 eruptions (\citetalias{2016ApJ...833..149D}).
The 2013--2015 eruptions exhibited a very smooth light curve evolution from, essentially, $t=0$ until $t\simeq4$\,days (see, in particular, the red $r'$-band light curve in Figure~\ref{optical_zoom}). As well as never having been seen before, the bright cusp appears to break this smooth evolution. The 2016 eruption does not just appear more luminous than the observations of 2013--2015; there is evidence of a fundamental change, possibly in the emission mechanism, in the obscuration, or within the lines.
There are sparse data covering both the plateau and final decline phases. The $R$-band data from 2016 cover the entire plateau phase and are broadly consistent with the slow, jittery decline seen during this phase in the 2013--2015 eruptions. The $u'$ and $r'$-band data show a departure from the linear early decline around day 3.6; this could indicate an early entry into the plateau, i.e.\ different behavior in 2016, or simply that the variation seen during the plateau always begins slightly earlier than the assumed 4\,day phase transition.
In essence, the 2016 light curves of {M31N\,2008-12a}\ show a never before seen (but see Section~\ref{similar}), short-lived, bright cusp at all wavelengths during the final rise phase. There is no further strong evidence of any deviation from previous eruptions -- however, we again note the sparsity of the late-time data. Possible explanations for the early bright light curve cusp are discussed in Sections~\ref{sec:disc_peak} and \ref{cusp}, and Section~\ref{sec:disc_arch} re-examines earlier eruptions for possible indications of similar features.
\subsection{{\it Swift~} and {\it XMM-Newton~} ultraviolet light curve}\label{sec:uvot_lc}
During the 2015 eruption we obtained a detailed {\it Swift~} UVOT light curve through the uvw1 filter (\citetalias{2016ApJ...833..149D}). For the 2016 eruption our aim was to measure the uvw2 filter magnitudes instead, to accumulate additional information on the broad-band SED evolution. With a central wavelength of 1930\,\AA, the uvw2 band is the ``bluest'' UVOT filter (the uvw1 central wavelength is 2600\,\AA). Therefore, the uvw1 range is more affected by spectral lines, for instance the prominent Mg\,{\sc ii} (2800\,\AA) resonance doublet, than the uvw2 range (see \citetalias{2017ApJ...847...35D}\ for details). Due to the peculiar properties of the 2016 eruption, a direct comparison between both light curves is now more complex than initially expected.
In Figure~\ref{fig:uvot_lc} we show the 2016 uvw2 light curve compared to the 2015 uvw1 (plus a few uvm2) measurements (\citetalias{2016ApJ...833..149D}) as well as a few uvw2 magnitudes from the 2014 eruption (\citetalias{2015A&A...580A..46H}, \citetalias{2015A&A...580A..45D}). The 2016 values are based on individual {\it Swift~} snapshots (see Table~\ref{tab:swift_split}) except for the last two data points where we used stacked images (see Table~\ref{tab:uvot_merge}). Similarly to the uvw1 light curve in 2015, the uvw2 brightness initially declined linearly with a $t_2 = 2.8 \pm 0.2$\,d. This is comparable to the 2015 uvw1 value of $t_2 = 2.6\pm0.2$\,d.
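The decline time $t_2$, the time for the source to fade by two magnitudes, follows from the slope of a linear fit to the early decline. Below is a sketch with invented magnitudes chosen to mimic the 2016 uvw2 decline rate; these are not the measured values.

```python
import numpy as np

# Hypothetical early-decline photometry (days since eruption, mag);
# illustrative values only, not the actual uvw2 measurements.
t = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
mag = np.array([17.0, 17.35, 17.7, 18.1, 18.45, 18.8])

# Linear decline in magnitude: fit the rate, then t2 = 2 mag / rate.
rate, zero_point = np.polyfit(t, mag, deg=1)   # mag per day
t2 = 2.0 / rate                                # ~2.8 days
```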
From day three onward, the decline slowed and became less monotonic. Viewed on its own, the UV light curve from this point onward would be consistent with a power-law decline (in flux) with an index of $-1.5\pm0.2$. However, in light of the well-covered 2015 eruption, the 2016 light curve would also be consistent with the presence of three plateaus between (approximately) days 3--5, 6--8, and 9--12, connected by relatively sharp drops of about 1\,mag. Around day 12, when the X-ray flux started to drop (cf.\ Figure~\ref{fig:xrt_xmm_lc}), there might even have been a brief rebrightening in the UV before it declined rapidly. The UV source had disappeared by day 16, which is noticeably earlier than in 2015 (in the uvw1 filter). \citetalias{2017ApJ...849...96D}\ presented evidence that the UV--optical flux is dominated by the surviving accretion disk from at least day 13 onward. Therefore, a lower UV luminosity at this stage would imply a lower disk mass accretion rate. It is noteworthy that, during the times where the 2014 and 2016 uvw2 measurements overlap, they appear to be consistent.
The {\it XMM-Newton~} OM uvw1 magnitudes are given in Table~\ref{tab:xmm} and included in Figure~\ref{fig:uvot_lc}. The two OM measurements appear to be consistently fainter than the {\it Swift~} UVOT uvw1 data at similar times during the 2015 eruption (cf.\ \citetalias{2016ApJ...833..149D}). However, the uncertainties are large and the filter response curves (and instruments) are not perfectly identical. Therefore, we do not consider this apparent difference to have any physical importance. In addition, there is a hint of variability in the uvw1 flux during the first {\it XMM-Newton~} observation. Of the seven individual OM exposures, the first five can be combined to give uvw1 = $21.3^{+0.3}_{-0.2}$\,mag, whereas the last two give a $2\sigma$ upper limit of uvw1 $> 21.5$\,mag. The potential drop in UV flux corresponds to the drop in X-ray flux after the peak in Figure~\ref{fig:epic_lc}. Here, too, the significance of this fluctuation is low and we only mention it for completeness, in case similar effects are observed in future eruptions.
\begin{figure}
\includegraphics[width=\columnwidth]{fig05.pdf}
\caption{{\it Swift~} UVOT uvw2 light curve for the 2016 eruption of {M31N\,2008-12a~}\ (red) compared to (i) the detailed uvw1 coverage of the 2015 eruption (black; \citetalias{2016ApJ...833..149D}), (ii) a few uvm2 measurements around the 2015 peak (gray), (iii) the uvw2 magnitudes from the 2014 eruption (blue; \citetalias{2015A&A...580A..45D}, \citetalias{2015A&A...580A..46H}), and (iv) the 2016 {\it XMM-Newton~} OM uvw1 magnitudes (cf.\ Table~\ref{tab:xmm}). The last two red data points were derived by stacking multiple images (see Table~\ref{tab:uvot_merge}). For better readability, upper limits from individual observations are only plotted until day 12 (cf.\ Tables~\ref{tab:swift} and \ref{tab:swift_split}). Uncertainties are the combined $1\sigma$ systematic and statistical values. Open triangles mark $3\sigma$ upper limits. Day zero is MJD = 57734.32 (see Section~\ref{sec:time}). The dark gray vertical lines indicate the SSS time scales (dashed) and their corresponding uncertainties (dotted) according to Section~\ref{sec:xrt_lc}.}
\label{fig:uvot_lc}
\end{figure}
\subsection{{\it Swift~} XRT light curve}
\label{sec:xrt_lc}
\smallskip
X-ray emission from {M31N\,2008-12a}\ was first detected at a level of $0.6\pm0.1$\,\cts{-2} on 2016-12-18.101 UT, 5.8 days after the eruption \citep[see Table~\ref{tab:swift} and also][]{2016ATel.9872....1H}. Nothing was detected in the previous observation on 2016-12-16.38 UT (day 4.1) with an upper limit of $<0.7$ \cts{-2}. Although these numbers are comparable, there is a clear increase of counts at the nova position from the pre-detection observation (zero counts in 1.1 ks) to the detection (more than 30 counts in 3.9\,ks). Therefore, we conclude that the SSS phase had started by day 5.8.
For a conservative estimate of the SSS turn-on time (and its accuracy) we use the midpoint between days 4.1 and 5.8 as $t_{\mbox{\small{on}}} = 4.9\pm1.1$\,d, which includes the uncertainty of the eruption date. This is consistent with the 2013--2015 X-ray light curves (see Figure~\ref{fig:xrt_xmm_lc}) for which we estimated turn-on times of $6\pm1$\,d (2013), $5.9\pm0.5$\,d (2014), and $5.6\pm0.7$\,d (2015) using the same method (see \citetalias{2014A&A...563L...8H}, \citetalias{2015A&A...580A..46H}, \citetalias{2016ApJ...833..149D}). There is no evidence that the emergence of the SSS emission occurred at a different time than in the previous three eruptions.
The duration of the SSS phase, however, was significantly shorter than previously observed \citep[see Figure~\ref{fig:xrt_xmm_lc} and][]{2016ATel.9907....1H}. The last significant detection of X-ray emission in the XRT monitoring was on day 13.9 (Table~\ref{tab:swift}). However, the subsequent 2.9\,ks observation on day 15.4 still shows about 4 counts at the nova position, which amount to a $2\sigma$ detection (Table~\ref{tab:swift} gives the $3\sigma$ upper limit). Nothing is visible on day 15.9. Again being conservative, we estimate the SSS turn-off time as $t_{\mbox{\small{off}}} = 14.9\pm1.2$\,d (including the uncertainty of the eruption date), which is the midpoint between observations 197 and 201 (see Table~\ref{tab:swift}).
In comparison, the SSS turn-off in previous eruptions happened on days $19\pm1$ (2013), $18.4\pm0.5$ (2014), and $18.6\pm0.7$ (2015); all significantly longer than in 2016. The upper limits in Figure~\ref{fig:xrt_xmm_lc} and Table~\ref{tab:swift} demonstrate that we would have detected each of the 2013, 2014, or 2015 light curves during the 2016 monitoring observations, which had similar exposure times (cf.\ \citetalias{2014A&A...563L...8H}, \citetalias{2015A&A...580A..46H}, and \citetalias{2016ApJ...833..149D}). Therefore, the short duration of the 2016 SSS phase is real and not caused by an observational bias.
The full X-ray light curve, shown in Figure~\ref{fig:xrt_xmm_lc}a, is consistent with a shorter SSS phase which had already started to decline before day 12, instead of around day 16 as during the last three years. In a consistent way, the blackbody parametrization in Figure~\ref{fig:xrt_xmm_lc}b shows a significantly cooler effective temperature ($kT = 86\pm6$~eV) than in 2013--2015 ($kT\sim115\pm10$~eV) during days 10--14 (cf.\ \citetalias{2016ApJ...833..149D}). As previously, for this plot we fitted the XRT spectra in groups with similar effective temperature.
In contrast to our previous studies of {M31N\,2008-12a}, here our blackbody parameterizations assume a fixed absorption of \hbox{$N_{\rm H}$}~ = $0.7$ \hcm{21} throughout. (The X-ray analysis in \citetalias{2016ApJ...833..149D}\ had explored multiple \hbox{$N_{\rm H}$}~ values.) This value corresponds to the Galactic foreground and is based on {\it HST} extinction measurements during the 2015 eruption, which consistently indicate no significant additional absorption toward the binary system, e.g.\ from the {M\,31} disk (\citetalias{2017ApJ...847...35D}; also see \citetalias{2016ApJ...833..149D}). These {\it HST} spectra were taken about three days before the 2015 SSS phase onset, making it unlikely that the extinction varied significantly during the SSS phase. The new $N_\mathrm{H}$, also applied to the 2013--2015 data in Figure~\ref{fig:xrt_xmm_lc}, primarily affects the absolute blackbody temperature, which now reaches almost 140\,eV, but not the relative evolution of the four eruptions.
Figure~\ref{fig:xrt_xmm_lc}a also suggests that the SSS phase in 2016 was somewhat less luminous than in previous eruptions. The early SSS phase of this nova has shown significant flux variability; nevertheless, a lower average luminosity is consistent with the XRT light curve binned per {\it Swift~} snapshot, as shown in Figure~\ref{fig:xrt_split}. A lower XRT count rate would also be consistent with the lower effective temperature suggested in Figure\,\ref{fig:xrt_xmm_lc}b. Note that this refers to the observed characteristics of the SSS, not to the theoretically possible maximum photospheric temperature had the hydrogen burning not been extinguished early.
\begin{figure}
\includegraphics[width=\columnwidth]{fig06.pdf}
\caption{{\it Swift~} XRT (black) and {\it XMM-Newton~} EPIC pn (blue) (a) count rates (0.3--1.5\,keV) and (b) effective blackbody temperatures of {M31N\,2008-12a~}\ during the 2016 eruption compared to the XRT data of the 2013--15 eruptions (gray). {\it Panel a:} Triangles indicate upper limits (only shown for the 2016 data). {\it Panel b:} Sets of observations with similar spectra have been fitted simultaneously assuming a fixed \hbox{$N_{\rm H}$}~ = $0.7$ \hcm{21}. The error bars in time represent either the duration of a single observation or the time covering the sets of observations (for panel b and for the last 2016 XRT upper limit in panel a). The deviation of the 2016 eruption from the evolution of past events is clearly visible.}
\label{fig:xrt_xmm_lc}
\end{figure}
We show the XRT light curve binned per {\it Swift~} snapshot in Figure~\ref{fig:xrt_split}. As found in previous eruptions (\citetalias{2014A&A...563L...8H}, \citetalias{2015A&A...580A..46H}, \citetalias{2016ApJ...833..149D}), the early SSS flux is clearly variable. However, here the variability level had already dropped by day $\sim11$ instead of after day 13 as in previous years. After day 11, the scatter (rms) decreased by a factor of two, which is significant at the 95\% confidence level (F-test, $p = 0.03$). This change in behavior can be seen more clearly in the detrended {\it Swift~} XRT count rate light curve in Figure~\ref{fig:xrt_split}b. The faster evolution is consistent with the overall shortening of the SSS duration.
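To illustrate the variance-ratio (F-) test used here: an rms lower by a factor of two corresponds to an F statistic of about four. The sketch below applies this to two synthetic detrended segments; the sample sizes and noise levels are invented, so only the mechanics are illustrated (the quoted $p = 0.03$ comes from the real light curve).

```python
import random

# Synthetic illustration of the variance-drop F-test. Segment lengths and
# count-rate scatter are invented placeholders, not the observed data.
random.seed(42)
early = [random.gauss(0.0, 0.02) for _ in range(200)]  # detrended rates, day < 11
late = [random.gauss(0.0, 0.01) for _ in range(200)]   # detrended rates, day > 11

def sample_var(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

# Compare F against the F-distribution with (n1-1, n2-1) d.o.f. for a p-value.
F = sample_var(early) / sample_var(late)  # expect F near (0.02/0.01)^2 = 4
```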
\begin{figure}
\includegraphics[width=\columnwidth]{fig07.pdf}
\caption{\textit{Panel a}: The short-term SSS light curve of {M31N\,2008-12a~}\ derived from all XRT snapshots. The 2016 eruption data are shown in black, in contrast to the gray 2013--2015 light curves. In contrast to the logarithmic count rate scale of Figure~\ref{fig:xrt_xmm_lc}, here we use a linear axis. The overlaid green (2016), red (2015), blue (2014), and orange (2013) curves show smoothing fits using local regression. The 2016 light curve is clearly shorter and appears to be less luminous than in 2013--2015. \textit{Panel b}: Detrended light curves after removing the smoothed trend. The 2016 light curve (black) suggests a drop in variability after day 11, whereas for the 2013--2015 light curves (gray) this drop happened around day 13.}
\label{fig:xrt_split}
\end{figure}
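The local-regression smoothing used for the overlaid trend curves can be sketched with a minimal locally-weighted linear fit (tri-cube weights, single pass, no robustness iterations). The actual curves were produced with a standard LOWESS implementation, so this is only a schematic stand-in.

```python
# Minimal LOWESS-style smoother: for each point, fit a weighted straight
# line to its k nearest neighbours using tri-cube weights.

def lowess(x, y, frac=0.5):
    n = len(x)
    k = max(2, int(frac * n))
    smoothed = []
    for xi in x:
        dists = sorted(abs(xj - xi) for xj in x)
        h = dists[k - 1] or 1e-12  # local bandwidth
        w = [(1 - min(1.0, abs(xj - xi) / h) ** 3) ** 3 for xj in x]
        # weighted least-squares line through the window
        sw = sum(w)
        swx = sum(wi * xj for wi, xj in zip(w, x))
        swy = sum(wi * yj for wi, yj in zip(w, y))
        swxx = sum(wi * xj * xj for wi, xj in zip(w, x))
        swxy = sum(wi * xj * yj for wi, xj, yj in zip(w, x, y))
        denom = sw * swxx - swx * swx
        if abs(denom) < 1e-12:
            smoothed.append(swy / sw)
            continue
        b = (sw * swxy - swx * swy) / denom
        a = (swy - b * swx) / sw
        smoothed.append(a + b * xi)
    return smoothed

# Demonstration on a declining line with an alternating wiggle
xs = [i / 10 for i in range(30)]
ys = [2.0 - 0.5 * x + 0.05 * ((-1) ** i) for i, x in enumerate(xs)]
fit = lowess(xs, ys)
```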
\subsection{{\it XMM-Newton~} EPIC light curves}\label{sec:xmm_lc}
The {\it XMM-Newton~} light curves from both pointings show clear variability on time scales of a few thousand seconds (Fig.\,\ref{fig:epic_lc}). This is an unexpected finding, since the variability in the {\it Swift~} XRT light curve appeared to have ceased after day 11 (in general agreement with the 2013--15 light curves, where this drop in variability occurred slightly later). Instead, we find that the late X-ray light curves around days 14--16 (corresponding to days 18--20 for the ``normal'' 2013--15 evolution) are still variable by factors of $\sim5$. The variability is consistent between the EPIC pn and MOS light curves (plotted without scaling in Figure~\ref{fig:epic_lc}).
Even with the lower XRT count rates during the late SSS phase, we would still be able to detect large variations similar to the high-amplitude spike and the sudden drop seen in the first and second EPIC light curve, respectively.
\begin{figure*}
\includegraphics[width=\columnwidth]{fig08a.pdf}\hfill
\includegraphics[width=\columnwidth]{fig08b.pdf}
\caption{{\it XMM-Newton~} EPIC light curves for observations 0784000101 (day 14.21; left) and 0784000201 (day 15.80; right) with 2\,ks binning. The EPIC pn (black), MOS1 (red), and MOS2 (blue) count rates and corresponding uncertainties are color-coded. The solid lines with the same colours are smoothed fits via locally-weighted polynomial regression (LOWESS).}
\label{fig:epic_lc}
\end{figure*}
\section{Panchromatic eruption spectroscopy}
\label{sec:spec}
\subsection{Optical spectra}
\label{sec:vis_spec}
The LT eruption spectra of 2016 are broadly similar to those of the 2015 (and prior) eruptions (see \citetalias{2016ApJ...833..149D}), with the hydrogen Balmer series being the strongest emission lines (Fig.\,\ref{fig:optspec}). He\,{\sc i} lines are detected at 4471, 5876, 6678, and 7065\,\AA, along with He\,{\sc ii} (4686\,\AA) blended with N\,{\sc iii} (4638\,\AA). The broad N\,{\sc ii} (3) multiplet around 5680\,\AA\ is also weakly detected. These emission lines are all typically associated with the He/N spectroscopic class of novae \citep{1992AJ....104..725W}. The five LT spectra are shown in Figure~\ref{fig:optspec} (bottom) and cover a similar time frame to those obtained during the 2015 eruption. These spectra are also displayed along with all of the other 2016 spectra at the end of this work in Figure~\ref{specall}.
\begin{figure*}
\includegraphics[width=\textwidth]{fig09.pdf}
\caption{{\bf Top:} NOT ALFOSC spectrum of {M31N\,2008-12a}, taken 0.54\,days after the 2016 eruption, one of the earliest spectra taken of any of the {M31N\,2008-12a}\ eruptions. The gray dashed lines represent a velocity of $-6250$\,km\,s$^{-1}$ with respect to H$\beta$, He\,{\sc i} 5876\,\AA, and H$\alpha$. Narrow absorption can be seen at this velocity accompanying the H$\alpha$ and H$\beta$ emission lines, and there is evidence for a similar absorption feature associated with He\,{\sc i} 5876\,\AA.\label{fig:not} {\bf Bottom:} LT spectra of the 2016 eruption, taken between 0.61 and 4.52\,days after eruption.\label{fig:optspec}}
\end{figure*}
The first 2016 spectrum, taken with NOT/ALFOSC 0.54\,days after eruption, shows P\,Cygni absorption profiles on the H$\alpha$ and H$\beta$ lines. We measure the velocity of the minima of these absorption lines to be at $-6320\pm160$ and $-6140\pm200$\,km\,s$^{-1}$ for H$\alpha$ and H$\beta$, respectively. This spectrum can be seen in Figure~\ref{fig:not} (top), which also shows evidence of a possible weak P\,Cygni absorption accompanying the He\,{\sc i} (5876\,\AA) line. The first LT spectrum, taken 0.61\,days after eruption, also shows evidence of a P\,Cygni absorption profile on H$\alpha$ (and possibly H$\beta$) at $\sim-6000$\,km\,s$^{-1}$.
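Such a velocity is obtained by locating the flux minimum of the absorption trough blueward of the line and converting the wavelength offset into a Doppler velocity. The sketch below does this for a synthetic P\,Cygni-like profile with a trough planted at $-6320$\,km\,s$^{-1}$; the real measurement, of course, operates on the observed spectrum.

```python
import math

# Locate a P Cygni absorption minimum and convert it to a velocity.
# The Gaussian emission/absorption profile below is synthetic.
C_KMS = 2.998e5
H_ALPHA = 6562.8  # rest wavelength, Angstrom

def pcygni_velocity(wavelengths, fluxes, rest_wl):
    """Velocity (km/s, negative = blueshift) of the flux minimum
    blueward of the line's rest wavelength."""
    blue = [(w, f) for w, f in zip(wavelengths, fluxes) if w < rest_wl]
    w_min, _ = min(blue, key=lambda p: p[1])
    return C_KMS * (w_min - rest_wl) / rest_wl

# Synthetic spectrum: emission at rest + absorption trough at -6320 km/s
wl = [6350 + 0.5 * i for i in range(600)]
trough = H_ALPHA * (1 - 6320 / C_KMS)
flux = [1.0
        + 2.0 * math.exp(-0.5 * ((w - H_ALPHA) / 15) ** 2)
        - 0.5 * math.exp(-0.5 * ((w - trough) / 10) ** 2)
        for w in wl]
v = pcygni_velocity(wl, flux, H_ALPHA)  # recovers roughly -6320 km/s
```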
This is the first time absorption lines have been detected in the optical spectra of {M31N\,2008-12a}. We note that the HST FUV spectra of the 2015 eruption revealed strong, and possibly saturated, P\,Cygni absorptions still present on the resonance lines of N\,{\sc v}, Si\,{\sc iv}, and C\,{\sc iv} at $t=3.3$\,days, with terminal velocities in the range 6500--9400\,km\,s$^{-1}$; however, the NUV spectra taken $\sim1.5$\,days later showed only emission lines (\citetalias{2017ApJ...847...35D}).
The HET spectrum taken 1.87\,d after eruption can be seen in Figure~\ref{fig:spec3}, showing that the central emission profiles of the Balmer lines and He\,{\sc i} are broadly consistent. Note that the emission around +5000\,km\,s$^{-1}$ from the H$\alpha$ rest velocity probably contains a significant contribution from He\,{\sc i} (6678\,\AA). By this time the P\,Cygni profiles appear to have dissipated.
\begin{figure*}
\begin{center}
\includegraphics[width=0.48\textwidth]{fig10a}\hfill
\includegraphics[width=0.48\textwidth]{fig10b}\\
\includegraphics[width=0.48\textwidth]{fig10c}\hfill
\includegraphics[width=0.48\textwidth]{fig10d}
\end{center}
\caption{\textbf{Top left}: HET spectrum at day 1.87, showing the similar line structures of H$\alpha$, H$\beta$, and He\,{\sc i} (5876\,\AA). {\textbf{Top right}}: LT spectra comparing the high-velocity material at day 2.84 of the 2015 eruption to day 2.58 of the 2016 eruption. These are normalized to the lower velocity component peak. \textbf{Bottom left}: FWHM velocity evolution of the H$\alpha$ profile during the 2016 eruption (black), compared to previous eruptions (red). The gray dashed line is a power law with an index of $-1/3$ ($\chi^2/\mathrm{dof}=3.7$; Phase II of shocked remnant development) and the solid black line is the best-fit power law with an index of $-0.26\pm0.04$ ($\chi^2/\mathrm{dof}= 3.6$). \textbf{Bottom right:} comparison between the H$\alpha$ line profile 0.54\,days after the 2016 eruption (black) and the N\,{\sc v} (1240\,\AA) profile 3.32\,days after the 2015 eruption (gray; see \citetalias{2017ApJ...847...35D}). Note that the N\,{\sc v} profile has been shifted 500\,km\,s$^{-1}$ blueward with respect to H$\alpha$.\label{fig:spec3}}
\end{figure*}
Figure~\ref{fig:optspec} clearly shows the existence of high velocity material around the central H$\alpha$ line at day 2.58 of the 2016 eruption. This can be seen in more detail, compared to the 2015 eruption, in Figure~\ref{fig:spec3}. Note that, as stated above, the redshifted part of the (2016) profile could be affected by He\,{\sc i} (6678\,\AA), although the weakness of the (isolated) He\,{\sc i} line at 7065\,\AA\ (see Figure~\ref{fig:optspec}) suggests this cannot explain all of the excess flux on this side of the profile. Also note that the extremes of the profile indicate a similar velocity (HWZI $\sim$ 6500 to 7000\,km\,s$^{-1}$).
The 4.91\,day spectrum of the 2015 eruption shows H$\alpha$ and H$\beta$ emission. By comparison, the 2016 4.52-day spectrum also shows a clear emission line from He\,{\sc ii} (4686\,\AA), consistent with the Bowen blend being dominated by He\,{\sc ii} at this stage of the eruption. However, we note that this is unlikely to mark a significant difference between 2015 and 2016, as these late spectra typically have very low signal-to-noise ratios. The ARC spectra are shown in Figure~\ref{fig:apo}. The last of these spectra, taken 5.83\,d after eruption, shows strong He\,{\sc ii} (4686\,\AA) emission. The S/N of the spectrum is relatively low, but the He\,{\sc ii} emission appears narrower than the H$\alpha$ line at the same epoch, as seen in Figure~\ref{fig:apovel}. At this stage of the eruption we calculate the FWHM of He~{\sc ii} (4686\,\AA) to be $930\pm150$\,km\,s$^{-1}$, compared to $2210\pm250$\,km\,s$^{-1}$ for H$\alpha$. The ARC spectra have a resolution of $R\sim1000$, so these two FWHM measurements are not greatly affected by instrumental broadening. Narrow He\,{\sc ii} emission has been observed in a number of other novae. It is seen in the Galactic RN U\,Sco from the time the SSS becomes visible \citep{2012A&A...544A.149M}. Those authors used the changes in the narrow lines with respect to the orbital motion (U\,Sco is an eclipsing system; \citealp{1990ApJ...355L..39S}) to argue that such emission arises from a reforming accretion disk. In the case of the 2016 eruption of {M31N\,2008-12a}, we clearly observed the SSS at 5.8\,d, meaning this final ARC spectrum was taken during the SSS phase. This is consistent with the suggestion that, in {M31N\,2008-12a}, the accretion disk survives the eruption largely intact (\citetalias{2017ApJ...849...96D}). In this scenario, the optically thick ejecta prevent us from seeing evidence of the disk in our early spectra.
We note, however, that \citet{2014A&A...564A..76M} argued that in the case of KT\,Eri there could be two sources of such narrow He\,{\sc ii} emission: initially slower moving material in the ejecta, before quickly becoming dominated by emission from the binary itself (as in U\,Sco) as the SSS enters the plateau phase.
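The statement that instrumental broadening is negligible can be checked by subtracting, in quadrature, a Gaussian instrumental width of FWHM $\sim c/R$. The exact resolution element depends on the grating setup, so this is an order-of-magnitude sketch.

```python
import math

# Quadrature removal of instrumental broadening, assuming a Gaussian
# instrumental profile with FWHM ~ c/R for R ~ 1000 (an approximation).
C_KMS = 2.998e5

def deconvolve_fwhm(v_obs, resolution):
    v_inst = C_KMS / resolution  # ~300 km/s for R = 1000
    return math.sqrt(v_obs ** 2 - v_inst ** 2)

v_heii = deconvolve_fwhm(930.0, 1000)     # ~880 km/s, a ~5% correction
v_halpha = deconvolve_fwhm(2210.0, 1000)  # ~2190 km/s, a ~1% correction
```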
\begin{figure*}
\includegraphics[width=0.96\textwidth]{fig11.pdf}
\caption{ARC spectra of the 2016 eruption of {M31N\,2008-12a}\ taken 0.82 and 1.80\,d post-eruption (top) and 5.83\,d post-eruption (bottom). The bottom panel shows a smaller wavelength range than the top panel, and here the gray line represents the errors for the $t=5.83$\,d spectrum.\label{fig:apo}}
\end{figure*}
\begin{figure}
\includegraphics[width=0.96\columnwidth]{fig12.pdf}
\caption{Comparison of H$\alpha$ and He\,{\sc ii} 4686\,\AA\ emission lines in the $t=5.83$\,d ARC spectrum.\label{fig:apovel}}
\end{figure}
\citetalias{2017ApJ...849...96D}\ presented a low S/N, post-SSS spectrum taken 18.8 days after the 2014 eruption of {M31N\,2008-12a}. This spectrum was consistent with that expected from an accretion disk, and H$\beta$ was seen in emission. However, no evidence of the He\,{\sc ii} (4686\,\AA) line was seen in that spectrum. It is possible that the strong He\,{\sc ii} line seen in the ARC spectrum arose from the disk but that the transition was excited by the on-going SSS at that time.
As with previous eruptions, the emission line profiles of individual lines showed significant evolution during the 2016 eruption. The FWHM of the main H$\alpha$ emission line (excluding the very high velocity material) narrows from $4540\pm300$\,km\,s$^{-1}$ on day 0.54 to $2210\pm250$\,km\,s$^{-1}$ on day 5.83. The velocity evolution of the 2016 eruption is compared to that of previous eruptions in Figure~\ref{fig:spec3}, and is largely consistent. The H$\alpha$ FWHM measurements of all 2016 eruption spectra are given in Table~\ref{tab:vel}.
\begin{table}
\caption{FWHM Velocity Measurements of the H$\alpha$ Profile During the 2016 Eruption. \label{tab:vel}}
\begin{center}
\begin{tabular}{lll}
\hline
$\Delta t$ (days) &H$\alpha$ FWHM (km\,s$^{-1}$) &Instrument\\
\hline
0.54$\pm$0.01 &4540$\pm$300 &ALFOSC\\
0.61$\pm$0.06 &3880$\pm$220 &SPRAT\\
0.82$\pm$0.11 &3260$\pm$130 &DIS\\
1.29$\pm$0.02 &3010$\pm$90 &HFOSC\\
1.66$\pm$0.07 &3070$\pm$120 &SPRAT\\
1.80$\pm$0.08 &2910$\pm$80 &DIS\\
1.87$\pm$0.02 &2690$\pm$60 &LRS2-B\\
2.23$\pm$0.02 &2560$\pm$90 &HFOSC\\
2.58$\pm$0.05 &2820$\pm$170 &SPRAT\\
3.59$\pm$0.02 &2790$\pm$350 &SPRAT\\
4.53$\pm$0.02 &2850$\pm$540 &SPRAT\\
5.83$\pm$0.05 &2210$\pm$250 &DIS\\
\hline
\end{tabular}
\end{center}
\end{table}
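As a cross-check on the velocity evolution, the 2016 values in Table~\ref{tab:vel} can be fitted with a power law in log-log space. The simple unweighted fit below yields an index near $-0.22$; the quoted $-0.26\pm0.04$ comes from a fit that also includes earlier-eruption data and the measurement uncertainties, so a modest difference is expected.

```python
import math

# Unweighted log-log least-squares fit of v ~ t^alpha to the H-alpha
# FWHM values of the 2016 eruption (Table tab:vel).
t = [0.54, 0.61, 0.82, 1.29, 1.66, 1.80, 1.87, 2.23, 2.58, 3.59, 4.53, 5.83]
v = [4540, 3880, 3260, 3010, 3070, 2910, 2690, 2560, 2820, 2790, 2850, 2210]

x = [math.log(ti) for ti in t]
y = [math.log(vi) for vi in v]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
alpha = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))  # power-law index
```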
\subsection{The {\it XMM-Newton~} EPIC spectra and their connection to the {\it Swift~} XRT data}
\label{sec:xmm_spec}
The {\it XMM-Newton~} EPIC spectra for the two observations listed in Table~\ref{tab:xmm} were fitted with an absorbed blackbody model. The three detectors were modeled simultaneously, with only the normalizations free to vary independently. In Table~\ref{tab:xmm_spec} we summarize the best-fit parameters and also include a simultaneous fit of all EPIC spectra. The binned spectra, with a minimum of 10 counts per bin, are plotted in Figure~\ref{fig:xmm_spec} together with the model curves. The binning is used solely for visualization here; the spectra were fitted with one-count bins and Poisson fitting statistics \citep{1979ApJ...228..939C}, with the $\chi^2$ values used only as test statistics.
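For illustration, a minimal version of such a Poisson-statistic evaluation is sketched below: an absorbed-blackbody photon spectrum with a deliberately crude $\sigma \propto E^{-3}$ toy absorption cross-section (the real fits use tabulated ISM opacities) and the Cash (1979) statistic in its XSPEC-like form. The temperature and column follow the day-14.21 fit in Table~\ref{tab:xmm_spec}; the normalization and cross-section scale are invented.

```python
import math

# Absorbed blackbody photon spectrum with a toy ISM cross-section,
# evaluated per energy bin (energies and kT in keV).
def bb_photons(E, kT, norm):
    """Blackbody photon spectrum ~ E^2 / (exp(E/kT) - 1)."""
    return norm * E * E / math.expm1(E / kT)

def absorbed(E, kT, norm, nH):
    sigma = 2.0e-22 * E ** -3  # toy cross-section per H atom, cm^2 (assumption)
    return bb_photons(E, kT, norm) * math.exp(-nH * sigma)

def cash(observed, model):
    """Cash statistic C = 2*sum(m - n + n*ln(n/m)); the log term is
    dropped for zero-count bins. C >= 0, with C = 0 for m = n."""
    c = 0.0
    for n, m in zip(observed, model):
        c += 2.0 * (m - n + (n * math.log(n / m) if n > 0 else 0.0))
    return c

energies = [0.3 + 0.02 * i for i in range(60)]  # 0.3-1.5 keV band
model = [absorbed(E, kT=0.058, norm=1e7, nH=2.2e21) for E in energies]
observed = [round(m) for m in model]  # fake "data": model rounded to counts
c = cash(observed, model)
```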
\begin{table*}
\caption{Spectral Fits for {\it XMM-Newton~} Data.}
\label{tab:xmm_spec}
\begin{center}
\begin{tabular}{lrrrrrrr}\hline\hline \noalign{\smallskip}
ObsID & $\Delta t^a$ & \hbox{$N_{\rm H}$}~ & kT & red.\ $\chi^2$ & d.o.f. & kT$_{0.7}^b$ & red.\ $\chi^{2\,\,b}$ \\
& (d) & (\ohcm{21}) & (eV) & & & (eV) & \\ \hline \noalign{\smallskip}
0784000101 & 14.21 & $2.2^{+0.6}_{-0.7}$ & $58^{+8}_{-5}$ & 1.29 & 149 & $77^{+4}_{-3}$ & 1.44 \\
0784000201 & 15.80 & $2.7^{+0.6}_{-0.5}$ & $45\pm5$ & 1.06 & 140 & $68^{+4}_{-3}$ & 1.35 \\
Both combined & 15.01 & $2.2\pm0.4$ & $53^{+5}_{-3}$ & 1.22 & 291 & $73^{+3}_{-2}$ & 1.42 \\
\hline
\end{tabular}
\end{center}
\noindent
\tablenotetext{a}{Time in days after the nova eruption (cf.\ Table~\ref{tab:xmm})}
\tablenotetext{b}{The blackbody temperature (and the reduced $\chi^2$ of the fit) when assuming a fixed \hbox{$N_{\rm H}$}~ = $0.7$ \hcm{21} for comparison with the {\it Swift~} XRT temperature evolution (see Fig.\,\ref{fig:xrt_xmm_lc}).}
\end{table*}
\begin{figure}
\includegraphics[width=\columnwidth]{fig13.pdf}
\caption{{\it XMM-Newton~} EPIC spectra of {M31N\,2008-12a~} for the two pointings and the three individual (colour-coded) detectors (cf.\ Table~\ref{tab:xmm}). The blackbody fits are shown as solid lines. In the bottom panel the dashed purple line shows the scaled EPIC pn fit from the upper panel, indicating a tentative drop in temperature from $kT = 58^{+8}_{-5}$~eV on day 14.21 to $kT = 45\pm5$~eV on day 15.80. See Table~\ref{tab:xmm_spec} for details on the spectral fits.}
\label{fig:xmm_spec}
\end{figure}
In Table~\ref{tab:xmm_spec} and Figure~\ref{fig:xmm_spec} we immediately see that the two spectra are (a) very similar and (b) contain relatively few spectral counts, leading to a low spectral resolution. The latter point is mainly due to the unexpectedly low flux at the time of the observations, but is also exacerbated by the strong background flaring (cf.\ Table~\ref{tab:xmm}).
In Table~\ref{tab:xmm_spec} we also list a second set of blackbody temperature values (kT$_{0.7}$) under the assumption of a fixed \hbox{$N_{\rm H}$}~ = $0.7$ \hcm{21}. The purpose of this is to compare these temperatures to the {\it Swift~} XRT models, which share the same assumption (cf.\ Section~\ref{sec:xrt_lc}). In both sets of temperatures in Table~\ref{tab:xmm_spec} there is a slight trend toward higher temperatures in the first observation (day 14.21) compared to the second one (day 15.80). While the binned spectra in Figure~\ref{fig:xmm_spec} give a similar impression, which would be consistent with a gradually cooling WD, it needs to be emphasized that this gradient is of low significance, because the two (sets of) temperatures are consistent within their $2$--$3\sigma$ uncertainties. In fact, the combined fit in Table~\ref{tab:xmm_spec} has reduced $\chi^2$ statistics and parameter uncertainties similar to (the latter even slightly lower than) those of the individual fits.
In Figure~\ref{fig:xrt_xmm_lc} the {\it XMM-Newton~} data points are added to the {\it Swift~} light curve and temperature evolution. For the conversion from pn to XRT count rate we used the HEASarc WebPIMMS tool \citep[based on PIMMS v4.8d,][]{1993Legac...3...21M} under the assumption of the best-fit blackbody parameters in the third and fourth column of Table~\ref{tab:xmm_spec}.
While the equivalent count rates as well as the temperatures are consistent with the XRT trend of a fading and cooling source, there appear to be systematic differences between the XRT and pn rates. This could simply be due to systematic calibration uncertainties between the EPIC pn and the XRT \citep{2017AJ....153....2M}. Another reason might be the ongoing flux variability (see Section~\ref{sec:xmm_lc}). However, it is also possible that deficiencies in the spectral model are preventing a closer agreement between the two instruments. We refrain from attempting to align the pn and XRT count rates because currently there are too many free parameters (e.g., the potential absorption or emission features discussed in \citetalias{2016ApJ...833..149D}) and insufficient constraints on them. We hope that a future {\it XMM-Newton~} observation will be able to catch this enigmatic source in a brighter state to shine more (collected) light on its true spectral properties.
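A PIMMS-style conversion weights the assumed source spectrum by each instrument's effective area, so the expected rate ratio is $\int A_{\rm XRT} f\,dE \,/\, \int A_{\rm pn} f\,dE$. The sketch below uses invented Gaussian-shaped effective-area curves purely to illustrate the mechanics; the real conversion relies on the calibrated response files.

```python
import math

# Toy PIMMS-style count-rate conversion for an assumed absorbed-blackbody-
# like spectrum. The effective-area curves are invented placeholders, NOT
# the real XRT/EPIC pn responses; energies in keV.

def spectrum(E, kT=0.053):
    """Photon spectrum shape (arbitrary normalization)."""
    return E * E / math.expm1(E / kT) * math.exp(-0.44 * E ** -3)

def area_pn(E):   # invented EPIC pn effective-area shape, cm^2
    return 1200.0 * math.exp(-((E - 0.7) / 0.5) ** 2)

def area_xrt(E):  # invented XRT effective-area shape, cm^2
    return 110.0 * math.exp(-((E - 0.8) / 0.5) ** 2)

def rate(area, E_lo=0.3, E_hi=1.5, n=200):
    """Midpoint-rule integral of area(E) * spectrum(E) over the band."""
    dE = (E_hi - E_lo) / n
    return sum(area(E_lo + (i + 0.5) * dE) * spectrum(E_lo + (i + 0.5) * dE)
               for i in range(n)) * dE

conversion = rate(area_xrt) / rate(area_pn)  # XRT counts per pn count
```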
\section{Discussion}\label{sec:discussion}
\subsection{The relative light curve evolution and the exact eruption date}\label{sec:disc_date}
\smallskip
The precision of the eruption dates for previous outbursts was improved by aligning their light curves, specifically the early, quasi-linear decline (\citetalias{2016ApJ...833..149D}). For the 2016 eruption, a priori we cannot be certain that this decline phase would be expected to align with previous years because the bright optical peak (Figure~\ref{fastphot} left) constitutes an obvious deviation from the established pattern. However, in Figure~\ref{optical_lc} we find that after the peak feature, most filters appear to decline in the same way as during the previous years. Therefore, we conclude that our estimated eruption date of $\mathrm{MJD}= 57734.32 \pm 0.17$ (2016-12-12.32 UT) is precise to within the uncertainties -- and this brings about a natural alignment of the light curves.
\subsection{The peculiarities of the 2016 eruption and their description by theoretical models}
\label{sec:disc_pecul}
From the combined optical and X-ray light curves in Figures~\ref{optical_lc} and \ref{fig:xrt_xmm_lc} it can be seen that in 2016 (i) the optical peak may have been brighter and (ii) the SSS phase was intrinsically shorter than in the previous three eruptions (but began at the same time after eruption). In addition, the gap between the 2015 and 2016 eruptions was longer than usual. Below we study these discrepancies in detail and describe them with updated theoretical model calculations. The following discussion ignores the impact of a possible half-year recurrence (cf.~\citetalias{2015A&A...582L...8H}), the potential dates of which are currently not well constrained (except for the first half of 2016; Henze et al.\ 2018, in prep.).
The critical advantage of studying a statistically significant number of eruptions from the same nova system is that we can reasonably assume parameters like (accretion and eruption) geometry, metallicity of the accreted material, as well as WD mass, spin, and composition to remain (sufficiently) constant. Therefore, {M31N\,2008-12a~} plays a unique role in understanding the variations in nova eruption parameters.
\subsubsection{A brighter peak after a longer gap?}\label{sec:disc_peak}
This section aims to understand the surprising increase in the optical peak luminosity (the `cusp') by relating it to the delayed eruption date through the theoretical models of \citet{2006ApJS..167...59H, 2014ApJ...793..136K, 2017ApJ...838..153K}. While the specifics of our arguments are derived from this particular set of models, we note that all current nova light curve simulations agree on the general line of reasoning \citep[e.g.][]{2005ApJ...623..398Y, 2013ApJ...777..136W}. We also note that \citetalias{2017ApJ...849...96D}\ found an elevated mass accretion rate compared to that employed by \citet{2014ApJ...793..136K, 2017ApJ...838..153K}, but again the general trends discussed below do not depend on the absolute value of the assumed mass accretion rate.
The gap between the 2015 and 2016 eruptions was 472\,d. This is 162\,d longer than the 310\,d between the 2013 and 2014 eruptions (see Table~\ref{eruption_history} and Figure~\ref{fig:rec_time}) and about 35\% longer than the median gap (347\,d) between the successive eruptions from 2008 to 2015. The well-observed 2015 eruption was very similar to the eruptions in 2013 and 2014 (\citetalias{2016ApJ...833..149D}) and did not show any indications that would have hinted at a delay in the 2016 eruption (also see \citetalias{2017ApJ...849...96D}). This section compares the peculiar 2016 eruption specifically to the 2014 outburst, because we know that the latter was preceded and followed by a ``regular'' eruption (see Figures\,\ref{optical_lc} and \ref{fig:xrt_xmm_lc}, and \citetalias{2016ApJ...833..149D}). In general, we know that the peak brightness of a nova is higher for a more massive envelope if free-free emission dominates the SED \citep{2006ApJS..167...59H}.
\begin{figure}
\includegraphics[width=\columnwidth]{fig14.pdf}
\caption{Eruption dates (in days of the year) vs.\ the year from 2008 onward. Individual uncertainties are smaller than the symbols. The best linear model for the \textit{2008--2015 eruptions} is shown in red with the 95\% uncertainties plotted in gray (cf.\ \citetalias{2016ApJ...833..149D}). The 2016 eruption date clearly deviates from this trend.}
\label{fig:rec_time}
\end{figure}
We consider two specific cases:\ (1) the mean mass accretion-rate onto the WD ($\dot{M}_\mathrm{acc}$) was constant, but hydrogen ignition occurred within a certain range around the theoretically expected time and, as a result, the elapsed inter-eruption time was longer in 2016 due to stochastic variance. Alternatively, (2) the mean mass accretion-rate leading up to the 2016 eruption was lower than typical and, as a result, the elapsed time was longer.
(1) If the mean accretion rates prior to the 2014 and 2016 eruptions were the same, then the mass accreted by the WD in 2016 was $\Delta t_{\rm rec}\times \dot M_{\rm acc} = 162\,{\rm days} \times 1.6 \times 10^{-7}\,M_\sun$\,yr$^{-1} = 0.71\times10^{-7}\,M_\sun$ larger than in 2014. Here we used the mass accretion rate of the $1.38\,M_\sun$ model proposed for {M31N\,2008-12a~} by \citet{2017ApJ...838..153K}. The authors obtained the relation between a wind mass-loss rate and the photospheric temperature (see their Figure 12). The wind mass-loss rate is larger for a lower-temperature envelope, which corresponds to a more extended and more massive envelope.
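The extra accreted mass follows directly from the longer gap and the assumed accretion rate:

```python
# Extra accreted mass implied by the 162-day longer inter-eruption gap at
# the 1.6e-7 M_sun/yr accretion rate of the Kato et al. (2017) model.
M_DOT_ACC = 1.6e-7           # M_sun per year
extra_days = 472 - 310       # 2015->2016 gap minus 2013->2014 gap
delta_M = extra_days / 365.25 * M_DOT_ACC  # ~0.71e-7 M_sun
```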
In Figure 12 of \citet{2017ApJ...838..153K}, the rightmost point on the red line corresponds to the peak luminosity of the 2014 eruption. If at this point the envelope mass is higher by $0.71\times 10^{-7}\,M_\sun$, then the wind mass-loss rate should increase by $\Delta \log \dot M_{\rm wind} \sim 0.08$.
For the free-free emission of novae the optical/IR luminosity is proportional to the square of the wind mass-loss rate \citep[see e.g.][]{2006ApJS..167...59H}. Thus, the peak magnitude of the optical/IR free-free emission is $2.5 \times (\Delta \log \dot M_{\rm wind}) \times 2 = 2.5 \times 0.08 \times 2 = 0.4$~mag brighter, which is roughly consistent with the increase in the peak magnitudes observed in 2016 in the $V$ and $u'$ bands (Figure~\ref{optical_lc}).
However, the time from the optical maximum to $t_{\rm on}$ of the SSS phase should become longer by
\begin{multline*}
\Delta t = \frac{\Delta M_{\rm env}} { \Delta \dot M_{\rm wind} + \dot M_{\rm wind}}\\
= \frac{M_{\rm env}}{\dot M_{\rm wind}} \frac{\Delta M_{\rm env}/M_{\rm env}}{\Delta \dot M_{\rm wind}/ \dot M_{\rm wind} + 1} \sim \frac{6 \times 0.35}{0.2+1}=1.75 \text{ days},
\end{multline*}
\noindent where $M_{\rm env}$ is the hydrogen-rich envelope mass. This is not consistent with the $t_{\rm on}\sim6$ days in the 2016 (and 2013--2015) eruptions.
In general, all models agree that a higher-mass envelope would lead to a stronger, brighter eruption with a larger ejected mass \citep[e.g.][]{1998MNRAS.296..502S,2005ApJ...623..398Y, 2006ApJS..167...59H, 2013ApJ...777..136W}.
(2) For the other case of a lower mean accretion rate, we have estimated the ignition mass of the hydrogen-rich envelope, based on the calculations of \citet{2016ApJ...830...40K, 2017ApJ...838..153K}, to be larger by 9\% for the 1.35 times longer recurrence period ($0.91\times 1.35=1.23$ yr). This corresponds to $\Delta \log \dot M_{\rm wind} \sim 0.02$, so the peak magnitude of the free-free emission is $2.5 \times (\Delta \log \dot M_{\rm wind}) \times 2 = 2.5\times 0.02\times 2 = 0.1$ mag brighter, but the time from the optical maximum to $t_{\rm on}$ of the SSS phase is longer by only
\begin{multline*}
\Delta t = \frac{\Delta M_{\rm env}} { \Delta \dot M_{\rm wind} + \dot M_{\rm wind}}\\
= \frac{M_{\rm env}}{\dot M_{\rm wind}} \frac{\Delta M_{\rm env}/M_{\rm env}}{\Delta \dot M_{\rm wind}/ \dot M_{\rm wind} + 1} \sim \frac{6 \times 0.09}{0.05+1}=0.5 \text{ days}.
\end{multline*}
\noindent The 2016 outburst reached its peak brightness about 0.5 days sooner than the 2013, 2014, and 2015 eruptions (see Figure\,\ref{fastphot} left). These two features, the $\sim 0.1$ mag brighter and 0.5 days earlier peak, are roughly consistent with the 2016 eruption, except for the $\sim 1$ mag brighter cusp (Figure\,\ref{fastphot} left).
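The arithmetic of the two scenarios can be collected in one place; the scalings (free-free flux $\propto \dot M_{\rm wind}^2$, hence $\Delta m = 2.5 \times 2 \times \Delta\log\dot M_{\rm wind}$, and the wind-phase lengthening formula above) follow the text, with $M_{\rm env}/\dot M_{\rm wind} \sim 6$\,days taken from the model.

```python
# Observable consequences of the two scenarios, following the scalings
# in the text (free-free flux ~ Mdot_wind^2; tau = M_env/Mdot_wind ~ 6 d).

def delta_mag(dlog_mdot_wind):
    """Peak brightening in mag for a change in log wind mass-loss rate."""
    return 2.5 * 2 * dlog_mdot_wind

def delta_t(dM_over_M, dMdot_over_Mdot, tau_days=6.0):
    """Lengthening of the optical-maximum-to-SSS-turn-on interval, days."""
    return tau_days * dM_over_M / (dMdot_over_Mdot + 1.0)

# Scenario 1: same accretion rate, extra 0.71e-7 M_sun accreted
dm1, dt1 = delta_mag(0.08), delta_t(0.35, 0.2)   # ~0.4 mag, ~1.75 d
# Scenario 2: lower accretion rate, ignition mass larger by 9%
dm2, dt2 = delta_mag(0.02), delta_t(0.09, 0.05)  # ~0.1 mag, ~0.5 d
```

Scenario 1 predicts a turn-on delay of $\sim1.75$\,d that is not observed, whereas scenario 2's modest changes match the data apart from the cusp.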
Observationally, we have shown that the expansion velocities of the 2016 eruption were comparable to previous outbursts (Section~\ref{sec:vis_spec}). Together with the comparable SSS turn-on time scale (Section~\ref{sec:xrt_lc}) this strongly suggests that a similar amount of material was ejected. Therefore, scenario (2) would be preferred here.
It should be emphasized that neither scenario addresses the short-lived, cuspy nature of the peak in contrast to the relatively similar light curves before or after it occurred. The models of \citet{2017ApJ...838..153K} and their earlier studies would predict a smooth light curve with a brighter peak and different rise and decline rates.
Ultimately, scenario (2) would also require an explanation of what caused the accretion rate to decrease. The late decline photometry of the 2015 eruption indicated that the accretion disk survived that eruption (\citetalias{2017ApJ...849...96D}); however, we have no data from 2013 or 2014 with which to compare the ends of those eruptions. The similarities of the 2013--2015 eruptions would imply that there was nothing untoward about the 2015 eruption that affected the disk in a different manner to the previous eruptions. Therefore, the `blame' probably lies with the donor.
The mass transfer rate in cataclysmic variable stars is known to be variable on time scales from minutes to years \citep[e.g.,][and references therein]{1995cvs..book.....W}. The shortest period variations (so-called ``flickering''), with typical amplitudes of tenths of a magnitude, are believed to be caused by propagating fluctuations in the local mass accretion rate within the accretion disk \citep{2014MNRAS.438.1233S}. The longer time scale variations that may be relevant to {M31N\,2008-12a~} can cause much larger variations in luminosity. In some cases, as in the VY\,Sculptoris stars, the mass transfer from the secondary star can cease altogether for an extended period of time \citep[e.g.,][]{1981ApJ...251..611R,1985ApJ...290..707S}. The VY\,Scl phenomenon is believed to be caused by disruptions in the mass transfer rate due to star spots on the secondary star drifting underneath the L1 point \citep[e.g.,][]{1994ApJ...427..956L,1998ApJ...499..348K, 2004AJ....128.1279H}. A similar mechanism might be acting in {M31N\,2008-12a}, resulting in mass transfer rate variations sufficient to cause the observed small-scale variability in the recurrence time and potentially even larger ``outliers'' as in 2016.
\subsubsection{A shorter SSS phase}
\label{sec:disc_sss}
In this section we aim to explain the significantly shorter duration of the 2016 SSS phase in comparison with previous eruptions and with the help of the theoretical X-ray light curve models of \citet{2017ApJ...838..153K}.
While a high initial accreted mass at the time of ignition leads to a brighter optical peak (as discussed in the previous section), it does not change the duration of the SSS phase, assuming that the WD envelope settles down to a thermal equilibrium when any wind phase stops. For the same WD mass, a larger accreted mass results in a higher wind mass-loss rate but does not affect the evolution after the maximum photospheric radius has been reached \citep[e.g.,][]{2006ApJS..167...59H}. The shorter SSS duration and thus the shorter duration of the total outburst compared to previous years (Figure~\ref{fig:xrt_xmm_lc}) therefore needs an additional explanation.
\citet{2017ApJ...838..153K} presented a $1.38~M_\sun$ WD model with a mean mass accretion-rate of $1.6\times 10^{-7}\,M_\sun$\,yr$^{-1}$ for {M31N\,2008-12a}. They assumed that the mass-accretion resumes immediately after the wind stops, i.e., at the beginning of the SSS phase. The accretion supplies fresh H-rich matter to the WD and substantially lengthens the SSS lifetime, ``re-feeding'' the SSS, because the mass-accretion rate is the same order as the proposed steady hydrogen shell-burning rate of $\sim 5 \times 10^{-7}\,M_\sun$\,yr$^{-1}$. If the accretion does not resume during the SSS phase, or only with a reduced rate, then the SSS duration becomes shorter. This effect is model-independent.
To give a specific example, we calculate the SSS light curves and photospheric temperature evolution for various post-eruption mass-accretion rates and plot them in Figure~\ref{fig:sss_model}. These are not fits to the data but models that serve to illustrate the observable effect of a gradually diminished post-eruption re-feeding. The thick solid black lines denote the case of no post-eruption accretion (during the SSS phase). The thin solid black lines represent the case in which the mass accretion resumes post-eruption with $1.6\times 10^{-7}\,M_\sun$\,yr$^{-1}$, just after the optically thick winds stop. The dashed orange, solid red, and dotted red lines correspond to mass-accretion rates of 0.3, 0.65, and 1.5 times the original rate of $1.6\times 10^{-7}\,M_\sun$\,yr$^{-1}$, respectively.
It is clearly shown that a higher post-eruption mass-accretion rate produces a longer SSS phase. Figure~\ref{fig:sss_model}a shows the X-ray count rates in the 2014 (blue crosses) and 2016 (open black circles) eruptions. The ordinate of the X-ray count rate is vertically shifted to match the theoretical X-ray light curves (cf.\ Figure~\ref{fig:xrt_xmm_lc}). The model X-ray flux drops earlier for a lower mass-accretion rate, which could (as a trend) explain the shorter duration of the 2016 SSS phase.
Figure~\ref{fig:sss_model}b shows the evolution of the blackbody temperature obtained from the {\it Swift~} spectra with the neutral hydrogen column density of \hbox{$N_{\rm H}$}~ = $0.7$ \hcm{21} (cf.\ Figure~\ref{fig:xrt_xmm_lc} and Section~\ref{sec:xrt_lc}). The lines show the photospheric temperature of our models. The model temperature decreases earlier for a lower mass-accretion rate. This trend is also consistent with the difference between the 2014 and 2016 eruptions.
Thus, the more rapid evolution of the SSS phase in the 2016 eruption can be partly understood if mass accretion does not resume soon after the wind stops (zero accretion, thick black line in Figure~\ref{fig:sss_model}). Note that the observed change in SSS duration is clearly larger in magnitude than in the models (Figure~\ref{fig:sss_model}). This could indicate deficiencies in the current models and/or that additional effects contributed to the shortening of the 2016 SSS phase. One factor that has an impact on the SSS duration is the chemical composition of the envelope \citep[e.g.,][]{2005A&A...439.1061S}. However, it would be difficult to explain why the abundances of the accreted material would suddenly change from one eruption to the next. In any case, simply by comparing the observed parameters of the 2016 eruption to previous outbursts, our observations make a strong case for a discontinued re-feeding of the SSS. The models are consistent with the general trend but need to be improved to be able to simulate the magnitude of the effect.
\citetalias{2017ApJ...849...96D}\ presented evidence that the accretion disk survives eruptions of {M31N\,2008-12a}, the 2015 eruption specifically. In Section~\ref{sec:disc_peak} we found that the accretion rate prior to the 2016 eruption might have been lower. If this lower accretion rate was caused by a lower mass-transfer rate from the companion, which is a reasonable possibility, then this would lead to a less massive disk (which was potentially less luminous; see Henze et al.\ 2018, in prep.). Thus, even if the eruption itself was not stronger than in previous years, as evidenced by the consistent ejection velocities (Section~\ref{sec:vis_spec}) and SSS turn-on time scale (Section~\ref{sec:xrt_lc}), it could still lead to a greater disruption of such a less massive disk. A part of the inner disk mass may be lost, which could prevent or hinder the reestablishment of mass accretion while the SSS is still active.
This scenario can consistently explain the trends toward a brighter optical peak and a shorter SSS phase for the delayed 2016 eruption. Understanding the \textit{quantitative} magnitude of these changes, and fitting the theoretical light curves more accurately to the observed fluxes, requires additional models that can be tested in future eruptions of {M31N\,2008-12a}. In addition, we strongly encourage the community to contribute alternative interpretations and models that could help us to understand the peculiar 2016 outburst properties.
\begin{figure}
\includegraphics[width=\columnwidth]{fig15.pdf}
\caption{Comparison of the theoretical light curve models with the observational data of the 2016 (open black circles, cf.\ Tables~\ref{tab:swift} and \ref{tab:xmm}) and 2014 (blue crosses; cf.\ \citetalias{2015A&A...580A..46H}) eruptions. The 2014 temperatures were re-analyzed assuming the updated \hbox{$N_{\rm H}$}~ = $0.7$ \hcm{21} (cf.\ Section~\ref{sec:xrt_lc}). The theoretical model light curves are based on a $1.38\,M_\sun$ WD with a mass accretion rate of $1.6\times 10^{-7}\,M_\sun$\,yr$^{-1}$ \citep{2017ApJ...838..153K}. The five theoretical curves correspond to the cases of no accretion (thick black lines), and factors of 0.3 (dashed orange lines), 0.65 (solid red lines), 1.0 (thin solid black lines), and 1.5 (dotted red lines) times the original mass accretion rate of $1.6\times 10^{-7}\,M_\sun$\,yr$^{-1}$. (a) Theoretical model X-ray light curves (0.3--1.0\,keV). (b) Theoretical model photospheric blackbody temperature. There is a clear trend towards a shorter SSS phase for weaker accretion. Improved models are needed to fit the observations with higher accuracy.}
\label{fig:sss_model}
\end{figure}
\subsubsection{Similar features in archival data?}\label{similar}
\label{sec:disc_arch}
Intriguingly, there is tentative evidence that the characteristic features of the 2016 eruption, namely the bright optical peak and the short SSS phase, might have been present in previous eruptions. Here we discuss briefly the corresponding observational data.
Recall that in X-rays there were two serendipitous detections with ROSAT \citep{1982AdSpR...2..241T} in early 1992 and 1993 (see Table~\ref{eruption_history}). \citet{1995ApJ...445L.125W} studied the resulting light curves and spectra in detail. Their Figure~2 shows that in both years the ROSAT coverage captured the beginning of the SSS phase. By chance, the time-axis zero points in these plots are shifted by almost exactly one day with respect to the eruption date as inferred from the rise of the SSS flux; this means that, for example, their day 5 corresponds to day 4 after eruption.
While the 1992 X-ray light curve stops around day eight, the 1993 coverage extends towards day 13 \citep{1995ApJ...445L.125W}. Both light curves show the early SSS variability expected from {M31N\,2008-12a~} (cf.\ Figure~\ref{fig:xrt_split}), but in 1993 the last two data points, near days 12 and 13, have lower count rates than expected from a ``regular'', 2015-type eruption (cf.\ Figure~\ref{fig:xrt_xmm_lc}). At this stage of the eruption, we would expect the light curve variations to become significantly lower (see also \citetalias{2016ApJ...833..149D}).
Of course, these are only two data points. However, the corresponding count rate uncertainties are relatively small and at face value these points are more consistent with the 2016-style early X-ray decline than with the 2015 SSS phase which was still bright at this stage (Figure~\ref{fig:xrt_xmm_lc}). Thus, it is possible that the 1993 eruption had a similarly short SSS phase as the 2016 eruption. The $\sim341$\,d between the 1992 and 1993 eruptions (Table~\ref{eruption_history}), however, are well consistent with the 2008--2015 median of 347\,d and suggest no significant delay.
The short-lived, bright, optical cuspy peak seen from the $I$-band to the UV (see Figures\,\ref{optical_lc}, \ref{optical_zoom}, and \ref{fastphot} left) during the 2016 eruption may also have been seen in 2010. The 2010 eruption of {M31N\,2008-12a}\ was not discovered in real time, but was instead recovered from archival observations (\citetalias{2015A&A...582L...8H}). It was only detected in two observations taken just 50\,minutes apart, but it appeared up to 0.6\,mag brighter than the 2013 and 2014 eruptions (and subsequently 2015). As the 2010 observations were unfiltered, \citetalias{2015A&A...582L...8H}\ noted that their uncertainties were possibly dominated by calibration systematics; the relative change in brightness, however, is significant. The 2010 photometry is compared with the 2016 photometry in Figure~\ref{2010comp} (right), where the epoch of the 2010 data was arbitrarily marked as $t=0.7$\,d. It is clear from this comparison that the bright peak seen in 2016 is not inconsistent with the 2010 data, but also that, other than the cusp itself, the 2016 light curve is similar to those of the 2013--15 eruptions. Indeed, these unfiltered data have much less of a gap around the $t=1$\,d peak (as seen in 2013--15) than the filtered data do (see Figures~\ref{optical_lc} and \ref{optical_zoom}).
However, despite this tentative evidence of a previous `cusp', the 2010 eruption fits the original recurrence period model very well. In fact, it was the eruption that confirmed that original model. So the 2010 eruption appears to have behaved `normally' -- but we do note the extreme sparsity of data from 2010. So we must question whether the two deviations from the norm in 2016, the bright cuspy peak, and the X-ray behavior are causally related.
Additionally, we must ask whether the short-lived bright cuspy peak is normal behavior. Figure~\ref{fastphot} (left) demonstrates this conundrum well. As noted in Section~\ref{sec:disc_date}, the epoch of the 2016 eruption was identified simply by the availability of pre-/post-eruption data; $t=0$ was not tuned (as in 2013--2015) to minimize light curve deviations or based on any other factors. The final-rise light curve data from 2013--2015 are sparse; indeed, much more data were collected during this phase in 2016 than in 2013--2015 combined, including the two-color fast-photometry run from the INT. In fact, improving the final-rise data coverage was a specified pre-eruption goal for 2016. Figure~\ref{fastphot} (left) indicates that, had such a short-lived bright peak occurred in any of 2013, 2014, or 2015, our light curve coverage of those eruptions would most likely not have detected it, assuming the eruption times of the 2013--2016 eruptions have been correctly accounted for. It is also worth noting that the final rise of the 2016 eruption was poorly covered in the $B$-band (as in all filters in previous years), and no sign of this cuspy behavior is seen in that band. The UV data may shed more light, but we note the unfortunate inconsistency of filters.
In conclusion, we currently don't have enough final rise data to securely determine whether the 2016 cuspy peak is unusual. However, the planned combination of rapid follow-up and high cadence observations of future eruptions are specifically designed to explore the early time evolution of the eruptions.
\subsection{What caused the cusp?}\label{cusp}
Irrespective of any causal connection between the late 2016 eruption and the newly observed bright cusp, the smooth light curve models cannot explain the nature of this new feature. As the cusp `breaks' both the previously smooth presentation of the observed light curves and the inherently smooth nature of the model light curves, it must be due to an additional, unconsidered parameter of the system. Here we briefly discuss a number of possible causes, in no particular order.
The cusp could in principle be explained as the shock-breakout associated with the initial thermonuclear runaway, but with evidence of a slower light curve evolution preceding the cusp (see Figure~\ref{fastphot} left) the timescales would appear incompatible.
An additional consideration would be the interaction between the ejecta and the donor. Under the assumption of a Roche lobe-filling donor, \citetalias{2017ApJ...849...96D}\ proposed a range of WD--donor orbital separations of $25-44\,R_\odot$; those authors also indicated that much larger separations were viable if accretion occurred from the wind of the donor. Assuming Roche lobe overflow and typical ejecta velocities at the epoch of the cusp of $\sim4000$\,km\,s$^{-1}$ (see the bottom right plot of Figure~\ref{fig:spec3}), one would expect an ejecta--donor interaction to occur 0.02--0.06\,days post-eruption (here we have also accounted for the radius of the donor, $R\simeq14\,R_\odot$; \citetalias{2017ApJ...849...96D}). With the cusp seemingly occurring 0.65\,days post-eruption, the orbital separation would need to be $\sim330\,R_\odot$ ($\sim1.6$\,au). From this we would infer an orbital period in the range $350-490$\,days (i.e., $\gtrsim P_\mathrm{rec}$), depending on the donor mass, and mass transfer would by necessity occur through wind accretion. We note that the eruption time uncertainty ($\pm0.17$\,d) has little effect on the previous discussion. \citetalias{2016ApJ...833..149D}, \citetalias{2017ApJ...847...35D}, and \citetalias{2017ApJ...849...96D}\ all argued that the system inclination must be low; despite this, it is still possible that the observation of such an ejecta--donor interaction may depend upon the orbital phase (with respect to the observer) at the time of eruption.
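The timing argument above can be checked with a short numerical sketch. This is not from the paper: the physical constants, the straight-line travel-time estimate, and the use of Kepler's third law with an assumed $1.0\,M_\odot$ donor mass are our own illustrative assumptions.

```python
import math

R_SUN_M = 6.957e8          # solar radius [m]
AU_M = 1.495978707e11      # astronomical unit [m]
DAY_S = 86400.0

def interaction_delay_days(sep_rsun, v_ejecta_kms, r_donor_rsun=14.0):
    """Travel time for freely expanding ejecta to reach the donor surface."""
    d = (sep_rsun - r_donor_rsun) * R_SUN_M
    return d / (v_ejecta_kms * 1e3) / DAY_S

def orbital_period_days(sep_rsun, m_wd=1.38, m_donor=1.0):
    """Kepler's third law in solar units: P[yr] = sqrt(a[au]^3 / M_tot[Msun])."""
    a_au = sep_rsun * R_SUN_M / AU_M
    return math.sqrt(a_au**3 / (m_wd + m_donor)) * 365.25

# Roche-lobe separations of 25--44 Rsun give a very early interaction:
print(interaction_delay_days(44.0, 4000.0))     # ~0.06 d
# A cusp at ~0.65 d instead requires a much wider orbit:
print(interaction_delay_days(330.0, 4000.0))    # ~0.64 d
print(orbital_period_days(330.0, m_donor=1.0))  # ~450 d
```

For a hypothetical donor mass between roughly 0.6 and 2.6\,$M_\odot$ this separation indeed maps onto periods spanning the quoted $350-490$\,d range.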
As a final discussion point, we note that \citetalias{2016ApJ...833..149D}\ and \citetalias{2017ApJ...847...35D}\ both presented evidence of highly asymmetric ejecta; proposing an equatorial component almost in the plane of the sky, and a freely expanding higher-velocity -- possibly collimated -- polar outflow directed close to the line-of-sight. We also note that the velocity difference between these components may be a factor of three or higher. If we treat these components as effectively independent ejecta, we would therefore expect their associated light curves to evolve at different rates, with the polar component showing the more rapid evolution. Therefore, we must ask whether the `normal' (2013--2015) light curve is that of the `bulk' equatorial ejecta, and whether the `cusp' is the first photometric evidence of the faster evolving polar ejecta. We note that such proposals have also been put forward to explain multi-peak light curves from other phenomena, for example, kilonovae \citep[see][and the references therein]{2017ApJ...851L..21V}.
\subsection{Predicting the date of the next eruption(s)}
\label{sec:disc_next}
A consequence of the delayed 2016 eruption is that the dates of the next few eruptions are much more difficult to predict than previously thought. Figure~\ref{fig:rec_time} demonstrates how much this surprising delay disrupted the apparently stable trend toward eruptions occurring successively earlier in the year (and Section~\ref{sec:disc_pecul} discusses the possible reasons).
Currently, detailed examinations of the statistical properties of the recurrence period distribution are hampered by the relatively small number of nine eruptions, and hence eight gaps, since 2008 (cf.\ Table~\ref{eruption_history}). {M31N\,2008-12a~} is the only known nova for which we will overcome this limitation in the near future. For now, we cannot reject the hypothesis that the gaps follow a Gaussian distribution, with a Lilliefors (Kolmogorov-Smirnov) test p-value of $\sim 0.11$, even with the long delay between 2015 and 2016. The distribution mean (median) is 363\,d (347\,d), with a standard deviation of 52\,d. Thus, the 472\,d gap before the 2016 eruption could indicate a genuine outlier, a skewed distribution, or simply an extreme variation from the mean. It is too early to tell.
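The Lilliefors test used above can be sketched with a standard-library-only Monte Carlo implementation. The gap values below are hypothetical placeholders, chosen only to mimic the quoted median and mean and the single long 472\,d gap; they are not the measured inter-eruption gaps from Table~\ref{eruption_history}, so the resulting statistic and p-value will not reproduce the published $p\sim0.11$.

```python
import math
import random
import statistics

def normal_cdf(x, mu, sigma):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def lilliefors_statistic(sample):
    """KS distance between the empirical CDF and a normal CDF whose
    parameters are estimated from the same sample (the Lilliefors setup)."""
    n = len(sample)
    mu = statistics.mean(sample)
    sigma = statistics.stdev(sample)
    d = 0.0
    for i, x in enumerate(sorted(sample)):
        f = normal_cdf(x, mu, sigma)
        d = max(d, f - i / n, (i + 1) / n - f)
    return d

def lilliefors_pvalue(sample, n_sim=2000, seed=1):
    """Monte Carlo p-value: fraction of same-size Gaussian samples whose
    Lilliefors statistic is at least as large as the observed one."""
    rng = random.Random(seed)
    d_obs = lilliefors_statistic(sample)
    n = len(sample)
    exceed = sum(
        1 for _ in range(n_sim)
        if lilliefors_statistic([rng.gauss(0.0, 1.0) for _ in range(n)]) >= d_obs
    )
    return exceed / n_sim

# Hypothetical inter-eruption gaps [d] for illustration only.
gaps = [342, 347, 351, 338, 347, 348, 358, 472]
p = lilliefors_pvalue(gaps)
```

Because the parameters of the null distribution are estimated from the sample itself, the standard KS critical values do not apply; the Monte Carlo loop (which standardizes each simulated Gaussian sample implicitly inside `lilliefors_statistic`) accounts for this.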
In addition, all these gaps of roughly 1~yr length would be affected by the presence of an underlying 6-month period which could dampen the more extreme swings. Of course, the original prediction of a half-year period by \citetalias{2015A&A...582L...8H}\ was partly based on the apparently stable trend toward earlier eruptions since 2008. Comparing this recent trend to the dates of historical X-ray detections in 1992, 1993, and 2001 (\citetalias{2014A&A...563L...8H}), \citetalias{2015A&A...582L...8H}\ found that the most parsimonious explanation for the observed discrepancies between the two regimes would be a 6-month shift. However, the putative 6-month eruption still remains to be found (Henze et al.\ 2018, in prep.). At present, a single eruption deviating from this pattern does not present sufficient evidence to discard the 6-month scenario. The next (few) eruption date(s) will be crucial in evaluating the recurrence period statistics.
While this manuscript was with the referee, the next eruption was discovered on 2017 Dec 31 \citep{2017ATel11116....1B}. The $\sim384$\,d gap between the 2016 and 2017 eruptions is consistent with the pre-2016 eruption pattern. A comprehensive multi-wavelength analysis of the new eruption will be presented in a subsequent work.
\section{Summary \& Conclusions}\label{sec:conclusions}
\begin{enumerate}
\item The 2016 eruption occurred on December 12.32 UT, 472 days after the 2015 eruption. Thus, it appeared to interrupt the general trend of eruptions since 2008 occurring slightly earlier in the year (with $t_\mathrm{rec} = 347\pm10$\,d).
\item The 2016 eruption light curve exhibited a short-lived `cuspy' peak between $0.7\leq t \leq 0.9$\,days post-eruption, around 0.5 magnitudes brighter than the smooth peak at $t\simeq1$\,d observed in previous eruptions. This aside, the optical and UV light curves developed in a very similar manner to the 2013/2014/2015 eruptions.
\item The cuspy peak occurs during a previously unsampled portion of the light curve. Therefore we cannot rule out this being a `normal' feature that has previously been missed. There is tentative evidence of a similar occurrence during the 2010 eruption.
\item The first 2016 outburst spectrum, taken 0.54\,d after the eruption, was one of the earliest spectra taken of any {M31N\,2008-12a~} eruption. From this we identified P\,Cygni profiles in the optical spectrum of {M31N\,2008-12a~} for the first time, indicating an expansion velocity of $\sim6200$\,km\,s$^{-1}$. In addition, a late spectrum taken 5.83\,d after eruption revealed narrow He~{\sc ii} emission, possibly arising from the surviving accretion disk. There is however no evidence that the spectroscopic evolution of the 2016 eruption deviated significantly from the behavior in previous years.
\item The {\it Swift~} XRT light curve deviated significantly from the previous behavior. The flux started to decline around day 11, which is several days earlier than expected. Consistently, the evolution of the effective temperature was similar to the 2013--2015 eruptions until day 11 but decreased significantly earlier afterwards. A 100\,ks {\it XMM-Newton~} ToO observation, split into two pointings, characterized the decaying SSS flux and temperature, finding both consistent with the XRT data, and discovered surprisingly strong variability at a stage that had previously suggested only marginal variation.
\item The trends in recurrence period, optical peak brightness, and SSS duration can be consistently described by early theoretical model calculations. When we assume a lower accretion rate we find that this (i) increases the time between eruptions, (ii) leads to a less massive disk, the disruption of which delays the onset of mass accretion and shortens the SSS phase, and (iii) increases the ignition mass and thereby the peak magnitude. This scenario will need to be explored in more detail in the future. We also strongly encourage alternative models and interpretations.
\end{enumerate}
\acknowledgements
\textit{We are deeply indebted to the late Swift PI Neil Gehrels for his long-term support of our project and for giving our community the game-changing Swift observatory. This paper is dedicated to his memory.}
We thank the anonymous referee for their constructive comments that helped to improve the paper.
We are, as always, grateful to the {\it Swift~} Team for making the ToO observations possible, in particular the duty scientists as well as the science planners. This research made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.
Based on observations obtained with {\it XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA.
Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program \#14651. Support for program \#14651 was provided by NASA through a grant from STScI.
The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University (LJMU) in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'{i}sica de Canarias with financial support from STFC.
This work makes use of observations from the LCO network.
Based (in part) on data collected with the Danish 1.54-m telescope at the ESO La Silla Observatory.
The data presented here were obtained in part with ALFOSC, which is provided by the Instituto de Astrofisica de Andalucia (IAA) under a joint agreement with the University of Copenhagen and NOTSA.
The Hobby-Eberly Telescope (HET) is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universit\"at M\"unchen, and Georg-August-Universit\"at G\"ottingen. The HET is named in honor of its principal benefactors, William P.\ Hobby and Robert E.\ Eberly.
The LBT is an international collaboration among institutions in the United States, Italy and Germany. LBT Corporation partners are: The University of Arizona on behalf of the Arizona Board of Regents; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, The Leibniz Institute for Astrophysics Potsdam, and Heidelberg University; The Ohio State University, and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and University of Virginia.
The Pirka telescope is operated by Graduate School of Science, Hokkaido University, and it also participates in the Optical \& Near-Infrared Astronomy Inter-University Cooperation Program, supported by the MEXT of Japan.
We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research.
The Institute for Gravitation and the Cosmos is supported by the Eberly College of Science and the Office of the Senior Vice President for Research at the Pennsylvania State University.
We wish to thank G.\ Mansir for sharing her observing time at the 1.54\,m Danish telescope on December 15th. We acknowledge G.\ Zeimann who reduced the HET spectra. We thank T.\ Johnson for assisting in observations at MLO. We wish to acknowledge Luc\'\i a Su\'arez-Andr\'es (ING) for obtaining the INT observations on a Director's Discretionary Time night, generously awarded by Marc Balcells (ING) and Cecilia Fari\~na (INT) to our collaboration.
M.\ Henze acknowledges the support of the Spanish Ministry of Economy and Competitiveness (MINECO) under the grant FDPI-2013-16933, the support of the Generalitat de Catalunya/CERCA programme, and the hospitality of the Liverpool John Moores University during collaboration visits.
S.~C.\ Williams acknowledges a visiting research fellowship at Liverpool John Moores University.
M.\ Kato and I.\ Hachisu acknowledge support in part by Grants-in-Aid for Scientific Research (15K05026, 16K05289) of the Japan Society for the Promotion of Science.
G.\ Anupama and M.\ Pavana thank the HCT observers who spared part of their time for the observations.
K.\ Chinetti acknowledges support by the GROWTH project funded by the National Science Foundation under Grant No.\ 1545949.
P.\ Godon wishes to thank William (Bill) P.\ Blair for his kind hospitality in the Rowland Department of Physics \& Astronomy at the Johns Hopkins University.
M.\ Hernanz acknowledges MINECO support under the grant ESP2015\_66134\_R as well as the support of the Generalitat de Catalunya/CERCA program.
K.\ Hornoch, H.\ Ku\v{c}\'akov\'a and J.\ Vra\v{s}til were supported by the project RVO:67985815.
R.\ Hounsell acknowledges support from the HST grant 14651.
E.\ Paunzen acknowledges support by the Ministry of Education of the Czech Republic (grant LG15010).
V.\ Ribeiro acknowledges financial support from Funda\c{c}\~{a}o para a Ci\^encia e a Tecnologia (FCT) in the form of an exploratory project of reference IF/00498/2015, from Center for Research \& Development in Mathematics and Applications (CIDMA) strategic project UID/MAT/04106/2013 and supported by Enabling Green E-science for the Square Kilometer Array Research Infrastructure (ENGAGESKA), POCI-01-0145-FEDER-022217, funded by Programa Operacional Competitividade e Internacionaliza\c{c}\~ao (COMPETE 2020) and FCT, Portugal.
P. Rodr\'\i guez-Gil acknowledges support by a Ram\'on y Cajal fellowship (RYC2010--05762). The use of Tom Marsh's {\sc pamela} package is gratefully acknowledged.
K.~L.\ Page and J.~P.\ Osborne acknowledge the support of the UK Space Agency.
T. Oswalt acknowledges support from U.S. NSF grant AST-1358787.
S.\ Starrfield acknowledges partial support to ASU from NASA and HST grants.
This research has made use of ``Aladin sky atlas" developed at CDS, Strasbourg Observatory, France.
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
PyRAF is a product of the Space Telescope Science Institute, which is operated by AURA for NASA.
We wish to thank the Observatorio Astrofisico de Javalambre Data Processing and Archiving Unit (UPAD) for reducing and calibrating the JAST/T80 data.
\facilities{AAVSO, ARC, Danish 1.54m Telescope, FTN, HCT, HET, HST (WFC3), ING:Newton, LBT, LCO, Liverpool:2m, Mayall, MLO:1m, NOT, OAO:0.5m, OO:0.65, PO:1.2m, ERAU:0.6m, ERAU:1m, Swift, XMM}
\software{AIP4WIN, Aladin \citep[v9;][]{2000A&AS..143...33B,2014ASPC..485..277B}, APAS \citep{2007ApJ...661L..45D}, APHOT \citep{1994ExA.....5..375P}, {\tt calwf3} \citep[v3.4;][]{2012wfci.book.....D}, DOLPHOT \citep[v2.0;][]{2000PASP..112.1383D}, FotoDif (v3.95), HEASOFT (v6.16), IRAF \citep[v2.16 and v2.16.1,][]{1993ASPC...52..173T}, MaxIm DL (v5.23), Mira Pro x64 (v8.011 and v8.012), Panacea, PGPLOT (v5.2), PyRAF, R \citep{R_manual}, SAOImage DS9, Starlink \citep[v2016A,][]{1982QJRAS..23..485D}, Source Extractor \citep[v2.8.6;][]{1996A&AS..117..393B}, SWarp \citep[v2.19.1][]{2002ASPC..281..228B}, XIMAGE (v4.5.1), XMM-SAS (v15.0.0), XSELECT (v2.4c), XSPEC \citep[v12.8.2;][]{1996ASPC..101...17A}}
\bibliographystyle{aasjournal}
\section{Introduction}
Whenever we compute an asteroid's orbit, it comes with an uncertainty
region due to the limited accuracy of the available observations. In
other words, orbits are only known in a statistical sense and the
accuracy of the related probabilistic interpretation relies heavily on
the observation accuracy and error modeling. Therefore, it is
important to apply an appropriate statistical treatment to the
observations used to compute the orbit.
The vast majority of asteroid astrometry is given by optical
observations, i.e., each observation provides two angular
measurements, typically right ascension (RA) and declination (DEC) in
the equatorial reference frame J2000, describing the position of an
asteroid on the celestial sphere at a specified time. Such
measurements are obtained with respect to nearby reference stars,
whose positions are provided by a reference star catalog. In general,
the more accurate the star catalog, the more accurate the observation.
Despite the common assumption that observation errors have zero mean,
\citet{carpino03} show that asteroid astrometry is significantly
biased and suggest the reason is
the presence of systematic errors in the star catalogs used to reduce
the astrometry.
\citet{cbm10} computed star catalog systematic errors for USNO-A1.0
\citep{usnoa1}, USNO-A2.0 \citep{usnoa2}, USNO-B1.0 \citep{usnob1},
UCAC2 \citep{ucac2}, and Tycho-2 \citep{tycho2} by comparing each of
these catalogs to 2MASS \citep{2mass}. Despite the lack of proper
motions, 2MASS was chosen as the reference catalog because of its very
accurate star positions at epoch J2000.0 and high spatial
density. \citet{cbm10} showed that correcting asteroid astrometry
using their computed biases leads to significantly lower systematic
errors and statistically better ephemeris predictions.
Pan-STARRS PS1 \citep{ps1} is one of the most accurate asteroid
surveys with an astrometric quality of the order of 0.1''. Although this
survey uses 2MASS as its reference catalog for the astrometric reduction,
\citet{milani12} found that Pan-STARRS PS1 data have surprisingly
high biases on the order of 0.05--0.1'' with a strong regional
dependence. \citet{tholen_2mass} show that the lack of proper motion
in 2MASS is likely to be the cause of the Pan-STARRS PS1 astrometry
systematic errors and signatures. Moreover, they suggest that PPMXL
\citep{ppmxl} be used as reference catalog because of its spatial
density, accuracy comparable to that of 2MASS, and availability of
proper motion information.
Since the lack of proper motion can be significant for high quality
observations, in this paper we describe how to correct asteroid
observations for both position and proper motion errors. Moreover, we
perform this analysis for a more comprehensive list of star catalogs
than that considered in \citet{cbm10}.
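The correction described here can be sketched schematically. The function below is our own illustration, not the paper's algorithm: it assumes the per-catalog bias is supplied in arcseconds on the sky (the RA component already multiplied by $\cos\delta$), a J2000.0 reference epoch, and a linear proper-motion term evaluated at the observation epoch.

```python
import math

def debias_observation(ra_deg, dec_deg, epoch_yr,
                       bias_ra_arcsec, bias_dec_arcsec,
                       pm_ra_arcsec_yr, pm_dec_arcsec_yr,
                       ref_epoch_yr=2000.0):
    """Remove a star-catalog systematic error (position bias plus a
    proper-motion term) from one optical observation.

    Biases and proper motions are assumed to be given in arcsec (per
    year) on the sky, i.e. the RA components include the cos(dec)
    factor; these conventions are illustrative assumptions."""
    dt = epoch_yr - ref_epoch_yr
    dra = bias_ra_arcsec + pm_ra_arcsec_yr * dt    # total RA*cos(dec) bias
    ddec = bias_dec_arcsec + pm_dec_arcsec_yr * dt
    ra_corr = ra_deg - dra / 3600.0 / math.cos(math.radians(dec_deg))
    dec_corr = dec_deg - ddec / 3600.0
    return ra_corr, dec_corr

# Example: a 0.36" RA*cos(dec) bias at dec = 0 shifts RA by 1e-4 deg.
ra_c, dec_c = debias_observation(10.0, 0.0, 2010.0, 0.36, 0.0, 0.0, 0.0)
```

The essential point, motivating the analysis below, is the $dt$ term: for catalogs without proper motions the effective bias grows linearly with the time elapsed since the catalog's reference epoch.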
\section{Asteroid astrometry}
As of January 2014 more than 600,000 asteroids have been designated,
$\sim$60\% of which are numbered. The number of asteroid optical
observations is already larger than 100,000,000 and increases every
day. Observers submit their observations to the Minor Planet Center
(MPC)\footnote{http://www.minorplanetcenter.net/} and usually provide
information on the catalog used to perform the astrometric
reduction. The MPC in turn makes the catalog information publicly
available by using an alphabetical
flag\footnote{http://www.minorplanetcenter.net/iau/info/CatalogueCodes.html}.
Table~\ref{t:catalogs} shows the MPC flag, the number of stars, and
the number of asteroid observations for different catalogs. We only
consider the catalogs for which the number of asteroid observations
reported to the MPC with the corresponding catalog flag was larger
than 40,000 as of January 2014. We also included the GSC-1.2
\citep{gsc1.2}
catalog to complete the GSC-1 catalog series. The most used catalog is
USNO-A2.0, with more than 40,000,000 asteroid observations. 2MASS,
which was used as the reference catalog by \citet{cbm10}, is the
fourth most used catalog and the related astrometry is dominated by
Pan-STARRS PS1 observations (more than 75\% of the
sample). Observations reported with code `z' were reduced with one of
the GSC catalogs, but we do not know which one.
\begin{table}\small
\begin{center}
\begin{tabular}{lccccc}
\hline
Catalog & MPC & Number & \multicolumn{2}{c}{Asteroid observations} & Reference\\
& flag & of stars & Count & \% & \\
\hline
USNO-A2.0 & c & 526,280,881 & 40,408,360 & 38.47 & \citet{usnoa2}\\
UCAC-2 & r & 48,330,571 & 29,793,925 & 28.37 & \citet{ucac2}\\
USNO-B1.0 & o & 1,045,175,762 & 12,834,999 & 12.22 & \citet{usnob1}\\
2MASS & L & 470,992,970 & 8,136,250 & 7.75 & \citet{2mass}\\
UCAC-4 & q & 113,780,093 & 2,629,456 & 2.50 &\citet{ucac4}\\
UCAC-3 & u & 100,766,420 & 2,228,325 & 2.12 &\citet{ucac3}\\
USNO-A1.0 & a & 488,006,860 & 2,193,938 & 2.08 &\citet{usnoa1}\\
USNO-SA2.0 & d & 55,368,239 & 1,698,129 & 1.62 &\citet{usnoa2}\\
GSC-1.1 & i & 18,836,912 & 614,617 & 0.59 &\citet{gsc1.1}\\
UCAC-1 & e & 27,425,433 & 501,774 & 0.48 &\citet{ucac1}\\
SDSS-DR7 & N & 357,175,411 & 479,914 & 0.46 &\citet{sloan7}\\
GSC-ACT & m & 18,836,912 & 404,473 & 0.39 &\citet{gsc_act}\\
CMC-14 & w & 95,858,475 & 361,928 & 0.34 &\citet{cmc14}\\
Tycho-2 & g & 2,430,468 & 355,813 & 0.34 &\citet{tycho2}\\
USNO-SA1.0 & b & 54,787,624 & 337,561 & 0.32 &\citet{usnoa1}\\
GSC (unspecified) & z & N/A & 288,156 & 0.27 &N/A\\
ACT & l & 988,758 & 117,638 & 0.11 &\citet{act}\\
PPMXL & t & 910,468,688 & 88,328 & 0.08 &\citet{ppmxl}\\
NOMAD & v & 1,117,612,732 & 58,266 & 0.06 &\citet{nomad}\\
PPM & p & 378,910& 41,468 & 0.04 &\citet{ppm}\\
GSC-1.2 & j & 18,841,548 & 16,975 & 0.02 & \citet{gsc1.2}\\
\hline
\end{tabular}
\end{center}
\caption{Star catalogs and MPC flags. The numbers
  of asteroid observations for each catalog account for all the astrometry
  available up to January 7, 2014.}
\label{t:catalogs}
\end{table}
\section{Star catalog position and proper motion corrections}
To correct asteroid optical astrometry for star catalog systematic
errors, we need to select a reference for comparison with the other
catalogs. Such a selection is far from easy. Hipparcos
\citep{hipparcos} and Tycho-2 are space-based, so they are not subject
to differential refraction corrections as ground-based observations
are, possibly making them the best available catalogs. However, a
reference catalog should be both dense and accurate and neither
Tycho-2 nor Hipparcos are dense enough. As shown in
Table~\ref{t:catalogs}, the catalogs with the largest number of stars
include USNO-A1.0 \citep{usnoa1}, USNO-A2.0 \citep{usnoa2}, USNO-B1.0
\citep{usnob1}, 2MASS \citep{2mass}, PPMXL \citep{ppmxl}, and NOMAD
\citep{nomad}. \citet{cbm10} proved that the USNO catalogs are
affected by systematic errors in position as large as 1--2''. NOMAD is
a simple merge of the Hipparcos, Tycho-2, UCAC-2, and USNO-B1.0 catalogs and is
therefore still affected by the biases present in
USNO-B1.0. \citet{tholen_2mass} show that 2MASS is not the appropriate
choice because of the lack of proper motion. PPMXL \citep{ppmxl} is
also a merge of 2MASS and USNO-B1.0, but it includes proper motions
and a critical reprocessing of star positions from 2MASS and
USNO-B1.0. Therefore, PPMXL seems a sensible choice for a reference
catalog. However, tests similar to the one presented in
Sec.~\ref{s:pred_test} were not satisfactory, as we found that
correcting with respect to PPMXL rather than 2MASS \citep[as
in][]{cbm10} can provide less accurate predictions. As described by
\citet{ppmxl}, more than 50\% of PPMXL stars are based on USNO-B1.0
and are not accurate enough for our purposes. To fix this problem, we
selected as a reference catalog the subset of PPMXL corresponding to
over 400 million stars derived from 2MASS. This reference benefits
from the accuracy of 2MASS star positions and yet accounts for proper
motions.
As in \citet{cbm10}, to compare the different star catalogs to our
reference catalog we divided the celestial sphere into 49,152
equal-area tiles ($\sim$0.8 deg$^2$) using the JPL HEALPix package
\citep{healpix}. For all the catalogs analyzed, we took star
positions at epoch J2000.0. To identify stars in common within a given
tile we used a spatial correlation of 2''. Whenever more than one
identification with the same star is possible, we need to be careful
and avoid spurious identifications. If $d_i, i=1,N$ are the distances
between the considered star and the matches in the reference catalog, as
a safety measure we selected the identification $j$ only if $d_j < 0.2
d_i$ for all $i=1, N$ with $i \ne j$. If none of the identifications met this
condition, we rejected all the identifications to avoid including
spurious matches in our analysis.
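The match-disambiguation rule just described can be sketched as follows; this is an illustrative Python snippet, not the production pipeline code:

```python
import numpy as np

def select_match(distances, factor=0.2):
    """Return the index of the unambiguously closest match, or None.

    A candidate j is accepted only if d_j < factor * d_i for every
    other candidate i; otherwise all identifications are rejected
    as potentially spurious, as described in the text.
    """
    d = np.asarray(distances, dtype=float)
    if d.size == 0:
        return None
    if d.size == 1:
        return 0
    j = int(np.argmin(d))
    others = np.delete(d, j)
    if np.all(d[j] < factor * others):
        return j
    return None
```

For instance, a 0.3'' match accompanied by a 2'' match is kept, while two matches at 0.5'' and 0.6'' are both discarded.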
We also made sure that stars in the reference catalog were not paired
to more than one star. For each tile we computed the average
correction in position and proper motion for both right ascension and
declination. Because of the biases present in some catalogs, the 2''
spatial correlation may not be enough to find matching
stars. Therefore, we applied the procedure iteratively, i.e., we
corrected the stars in the catalog to be debiased by subtracting the
systematic error for the corresponding tile found at the previous
iteration.
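As a sanity check on the tiling, 49,152 tiles is exactly a HEALPix grid with $N_{\rm side} = 64$ ($12 N_{\rm side}^2$ pixels), each covering about 0.84 deg$^2$; a quick illustrative computation (plain Python, not the JPL HEALPix package):

```python
import math

NSIDE = 64                                            # HEALPix resolution parameter
N_TILES = 12 * NSIDE ** 2                             # = 49,152 equal-area tiles
FULL_SKY_DEG2 = 4 * math.pi * (180.0 / math.pi) ** 2  # ~41,253 deg^2 on the sphere
TILE_AREA_DEG2 = FULL_SKY_DEG2 / N_TILES              # ~0.84 deg^2 per tile
```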
At the end of the process, for each given tile and catalog we have a
correction in RA and DEC at epoch J2000.0,
$(\Delta \text{RA}_{2000}, \Delta \text{DEC}_{2000})$, and proper motion
corrections $(\Delta \mu_{\text{RA}}, \Delta \mu_{\text{DEC}})$. These numbers
can be used to correct asteroid astrometric observations by subtracting the
following quantities:
\begin{eqnarray*}
\Delta\text{RA} & = & \Delta \text{RA}_{2000} + \Delta \mu_{\text{RA}} (t - 2000.0),\\
\Delta\text{DEC} & = & \Delta \text{DEC}_{2000} + \Delta \mu_{\text{DEC}} (t - 2000.0),
\end{eqnarray*}
where $t$ is the observation epoch, and $\Delta \text{RA}_{2000}$,
$\Delta \text{DEC}_{2000}$, $\Delta \mu_{\text{RA}}$, and $\Delta
\mu_{\text{DEC}}$ are the position and proper motion corrections for
the tile containing the astrometric observation. Note that
$\Delta{\text{RA}}$, $\Delta{\text{RA}}_{2000}$, and $\Delta
\mu_{\text{RA}}$ account for the spherical metric factor
$\cos\text{DEC}$. Of course, the successful application of these
corrections relies on the accuracy of the catalog information provided
in the MPC observation database. The star position and
proper motion correction table is publicly available at
ftp://ssd.jpl.nasa.gov/pub/ssd/debias/debias\_2014.tgz. The main
differences with respect to the \citet{cbm10} debiasing scheme are:
\begin{itemize}
\item our reference catalog is not 2MASS but the subset of PPMXL
based on 2MASS astrometry;
\item the present debiasing scheme accounts for both position and
proper motion errors, while \citet{cbm10} only considered position
errors;
\item we compute corrections for a more comprehensive list of star
catalogs.
\end{itemize}
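Applying the tile corrections to an individual observation reduces to the two linear formulas above; a minimal sketch, assuming the coordinates and corrections share the same angular units and that the RA quantities already include the $\cos\text{DEC}$ metric factor:

```python
def debias_observation(ra, dec, epoch, tile_correction):
    """Subtract the tile's position and proper motion corrections.

    tile_correction = (dRA2000, dDEC2000, dmuRA, dmuDEC); the epoch is
    in decimal years and the proper motion corrections are per year.
    """
    dra2000, ddec2000, dmu_ra, dmu_dec = tile_correction
    dt = epoch - 2000.0
    # Propagate the J2000.0 position correction with the proper motion
    # correction, then subtract from the observed coordinates.
    return (ra - (dra2000 + dmu_ra * dt),
            dec - (ddec2000 + dmu_dec * dt))
```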
For each analyzed catalog, Table~\ref{t:cat_rms} reports the average
size of the corrections in terms of RMS, e.g.
\begin{equation}
\overline{\Delta {\text{RA}}}_{2000} = \sqrt{\frac{1}{n_{\text{tiles}}}\sum_{i=1}^{n_{\text{tiles}}}
\left(\Delta {\text{RA}}_{2000}\right)_i^2}.
\end{equation}
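In code, this averaging is a one-line RMS over the per-tile corrections; an illustrative helper:

```python
import math

def tile_rms(corrections):
    """RMS of per-tile corrections, as in the averaging formula above."""
    return math.sqrt(sum(c * c for c in corrections) / len(corrections))
```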
\begin{table}
\begin{center}
\begin{tabular}{l|cc|ccc|c}
\hline
Catalog & $\overline{\Delta{\text{RA}}}_{2000}$ &
$\overline{\Delta{\text{DEC}}}_{2000}$ & PM & $\overline{\Delta\mu}_{\text{RA}}$ &
$\overline{\Delta\mu}_{\text{DEC}}$ & Sky\\
& [arcsec] & [arcsec] & inc. & [mas/yr] & [mas/yr] & coverage\\
\hline
Tycho-2 & 0.02 & 0.02 & Yes & 0.7 & 0.7 & 100\%\\
ACT & 0.02 & 0.02 & Yes & 1.5 & 1.4 & 100\%\\
2MASS & 0.03 & 0.02 & No & 5.8 & 6.4 & 100\%\\
USNO-A1.0 & 0.45 & 0.37 & No & 5.1 & 5.7 & 100\%\\
USNO-SA1.0 & 0.45 & 0.37 & No & 5.0 & 5.5 & 100\%\\
USNO-A2.0 & 0.21 & 0.24 & No & 5.1 & 5.7 & 100\%\\
USNO-SA2.0 & 0.21 & 0.24 & No & 5.0 & 5.6 & 100\%\\
USNO-B1.0 & 0.12 & 0.17 & Yes & 4.4 & 4.9 & 100\%\\
UCAC-1 & 0.03 & 0.03 & Yes & 5.8 & 7.4 & 39\%\\
UCAC-2 & 0.01 & 0.01 & Yes & 2.5 & 2.2 & 88\%\\
UCAC-3 & 0.02 & 0.02 & Yes & 5.0 & 4.6 & 100\%\\
UCAC-4 & 0.02 & 0.02 & Yes & 2.2 & 2.5 & 100\%\\
GSC-1.1 & 0.47 & 0.38 & No & 6.8 & 6.6 & 100\%\\
GSC-1.2 & 0.20 & 0.18 & No & 6.7 & 6.6 & 100\%\\
GSC-ACT & 0.15 & 0.13 & No & 6.7 & 6.6 & 100\%\\
NOMAD & 0.10 & 0.15 & Yes & 3.7 & 4.3 & 100\%\\
PPM & 0.23 & 0.24 & Yes & 4.1 & 4.2 & 100\%\\
CMC-14 & 0.03 & 0.04 & No & 6.3 & 7.0 & 62\%\\
SDSS-DR7 & 0.05 & 0.07 & Yes & 2.6 & 3.2 & 31\%\\
\hline
\end{tabular}
\end{center}
\caption{For each analyzed catalog columns are: average corrections in
position (right ascension and declination), information on whether
or not the catalog includes proper motions, average corrections in
proper motion (right ascension and declination), and the fraction of
the sky covered by the catalog.}
\label{t:cat_rms}
\end{table}
Figures~\ref{f:2mass}--\ref{f:sdss7} depict sky maps of the position
and proper motion corrections for the analyzed catalogs, which we
discuss in more detail in the following subsections. Note that the
color scale is not the same for all catalogs to reveal the regional
structures of the position and proper motion corrections.
As shown in Table~\ref{t:counts}, the right ascension and declination
corrections are not available for 1.65\% of the reported astrometry.
In most of these cases ($>$ 1,000,000 observations) we cannot apply
corrections because there is no catalog information. Moreover, almost
290,000 observations were reported as reduced using a GSC catalog,
without specifying which specific GSC catalog was used. Finally, about
100,000 observations were reduced with catalogs not included in our
analysis.
\begin{sidewaystable}\small
\begin{center}
\begin{tabular}{lc|c|cc|c|c}
\hline
Type & MPC flag & Unknown catalog & \multicolumn{2}{c|}{Known Catalog}
& Total & Dates\\
& & & Corrected & Not corrected & & \\
\hline
CCD & C & 530,374 & 99,938,430 & 279,375 & 100,748,179 & 1986--2014\\
Corrected CCD & c & 440 & 1,214,291 & 1,778 & 1,216,509 & 1991--2013\\
Former B1950 & A & 554,220 & 10,036 & 82,519 & 646,775 & 1802--1999\\
Photographic & ' ' & 207,812 & 118,743 & 27,258 & 353,813 & 1898--2012\\
Meridian or transit circle & T & 25,357 & 1,611 & 0 & 26,968 & 1984--2005\\
Micrometer & M & 0 & 13,008 & 0 & 13,008 & 1845--1954\\
Suppressed & X & 6,538 & 4,289 & 2,183 & 13,010 & 1891--2010\\
Suppressed & x & 5 & 3,763 & 0 & 3,768 & 1996--2010 \\
Hipparcos & H & 5,494 & 0 & 0 & 5,494 & 1989--1993\\
Occultation & E & 1,005 & 871 & 68 & 1,944 & 1961--2013\\
Satellite & S & 4,331 & 1,996,679 & 1,335 & 2,002,345 & 1994--2013\\
Roving observer & V & 102 & 92 & 0 & 194 & 2000--2012\\
Normal place & N & 0 & 37 & 0 & 37 & 1906--1923\\
Mini-normal place & n & 0 & 273 & 0 & 273 & 2009--2013\\
Encoder & e & 1 & 14 & 1 & 16 & 1993--1995\\
\hline
Total & & 1,335,679 & 103,302,137 & 394,517 & 105,032,333\\
& & 1.27\% & 98.35\% & 0.38\% & \\
\hline
\end{tabular}
\end{center}
\caption{For each optical observation type, number of observations
with unknown catalog, with known catalog and computed correction
tables, and with known catalog and no corrections available. As
suggested by the MPC, X and x-type observations should not be used
in the orbital fits. Observations are as of Jan 7, 2014.}
\label{t:counts}
\end{sidewaystable}
\subsection{PPMXL and 2MASS}
Since our reference catalog is a subset of PPMXL, the comparison
yields no corrections for PPMXL. Still, it is worth pointing out that
astrometry reduced with PPMXL can suffer from the lower accuracy of
USNO-B1.0 based stars.
Due to our choice for the reference catalog, we expect small
differences in the 2MASS star positions. Indeed, the position
differences are of the order of 0.01'', well within
the stated 2MASS accuracy of $\sim 0.07''$ \citep{2mass}. Though
these corrections are small, the top panels of Fig.~\ref{f:2mass} show
some regional dependence, which may be due to the lack of proper
motion for the time interval in which star positions were integrated.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{bra2mass.pdf}
\includegraphics[width=0.6\textwidth]{bdec2mass.pdf}}
\centerline{\includegraphics[width=0.6\textwidth]{pmra2mass.pdf}
\includegraphics[width=0.6\textwidth]{pmdec2mass.pdf}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for 2MASS. Bottom: proper motion corrections
in right ascension (left) and declination (right) for
2MASS.}\label{f:2mass}
\end{figure}
The bottom panels of Fig.~\ref{f:2mass} show the proper motion
corrections to be applied to 2MASS. Since 2MASS does not have proper
motions, these two panels give the proper motion distribution for
stars that 2MASS and PPMXL have in common. There is an evident
regional dependence and it is clear that the lack of proper motion may
cause significant position errors if the observation epoch is not
close to J2000.0. Thus, we apply both position and proper motion
corrections to observations reduced with 2MASS.
\subsection{Tycho-2 and ACT}
Tycho-2 (Fig.~\ref{f:tycho2}) and ACT (Fig.~\ref{f:act}) are catalogs
with a relatively low number of stars. Both positions and proper
motions are close to those of our reference catalog. There is no clear
signature and the differences could simply be noise. Therefore, we
decided to apply no corrections to the Tycho-2 and ACT based
astrometry. Good agreement between Tycho-2, which is space based, and
our reference catalog gives us some additional confidence that our
reference catalog has good positions and proper motions, at least for
the stars in common.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{bratycho2}\includegraphics[width=0.6\textwidth]{bdectycho2}}
\centerline{\includegraphics[width=0.6\textwidth]{pmratycho2}\includegraphics[width=0.6\textwidth]{pmdectycho2}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for Tycho-2. Bottom: proper motion
corrections in right ascension (left) and declination (right) for
Tycho-2.}\label{f:tycho2}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{braact}\includegraphics[width=0.6\textwidth]{bdecact}}
\centerline{\includegraphics[width=0.6\textwidth]{pmraact}\includegraphics[width=0.6\textwidth]{pmdecact}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for ACT. Bottom: proper motion corrections
in right ascension (left) and declination (right) for
ACT.}\label{f:act}
\end{figure}
\subsection{USNO catalogs}
\citet{cbm10} showed that the USNO catalogs
(Fig.~\ref{f:usno_a1} -- Fig.~\ref{f:usno_b1}) present significant
position biases. Moreover, the USNO-A catalogs do not account for
proper motions. Although USNO-B1.0 does have proper motions for some
of its stars, the proper motion differences with our reference are of
the same order as for the USNO-A catalogs, indicating that USNO-B1.0
proper motions are generally not accurate enough. We therefore correct
all the astrometry based on the USNO catalogs for both position and
proper motion errors.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{brausno_a1}\includegraphics[width=0.6\textwidth]{bdecusno_a1}}
\centerline{\includegraphics[width=0.6\textwidth]{pmrausno_a1}\includegraphics[width=0.6\textwidth]{pmdecusno_a1}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for USNO-A1.0. Bottom: proper motion
corrections in right ascension (left) and declination (right) for
USNO-A1.0.}\label{f:usno_a1}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{brausno_sa1}\includegraphics[width=0.6\textwidth]{bdecusno_sa1}}
\centerline{\includegraphics[width=0.6\textwidth]{pmrausno_sa1}\includegraphics[width=0.6\textwidth]{pmdecusno_sa1}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for USNO-SA1.0. Bottom: proper motion corrections
in right ascension (left) and declination (right) for USNO-SA1.0.}\label{f:usno_sa1}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{brausno_a2}\includegraphics[width=0.6\textwidth]{bdecusno_a2}}
\centerline{\includegraphics[width=0.6\textwidth]{pmrausno_a2}\includegraphics[width=0.6\textwidth]{pmdecusno_a2}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for USNO-A2.0. Bottom: proper motion corrections
in right ascension (left) and declination (right) for USNO-A2.0.}\label{f:usno_a2}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{brausno_sa2}\includegraphics[width=0.6\textwidth]{bdecusno_sa2}}
\centerline{\includegraphics[width=0.6\textwidth]{pmrausno_sa2}\includegraphics[width=0.6\textwidth]{pmdecusno_sa2}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for USNO-SA2.0. Bottom: proper motion corrections
in right ascension (left) and declination (right) for USNO-SA2.0.}\label{f:usno_sa2}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{brausno_b1}\includegraphics[width=0.6\textwidth]{bdecusno_b1}}
\centerline{\includegraphics[width=0.6\textwidth]{pmrausno_b1}\includegraphics[width=0.6\textwidth]{pmdecusno_b1}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for USNO-B1.0. Bottom: proper motion corrections
in right ascension (left) and declination (right) for USNO-B1.0.}\label{f:usno_b1}
\end{figure}
\subsection{UCAC catalogs}
The UCAC catalogs (Fig.~\ref{f:ucac1} -- Fig.~\ref{f:ucac4}) provide
extremely good star positions, very close to those of our
reference. However, proper motions look problematic: UCAC-1 and UCAC-3
have significant corrections of the order of 5 mas/yr, while UCAC-2
and UCAC-4 seem to have better proper motions. \citet{ppmxl} report
problems in UCAC-3 proper motions, in particular for declinations
greater than $-20^\circ$.
Despite being the final product of the UCAC series, UCAC-4 has some
regional dependence of the proper motions suggesting that there are
still unresolved issues with proper motions. Moreover, the comparison
between UCAC-4 and Tycho-2 provides average proper motion corrections
of $\sim$ 2 mas/yr, while the comparison between the subset of PPMXL
that we are using as reference and Tycho-2 gives average proper motion
differences $< 1$ mas/yr. Due to the high quality of Tycho-2, these
differences further suggest that UCAC-4 proper motions have
correctable errors.
These indications suggest that proper motions in the UCAC catalogs should be
corrected, while the corrections in positions are small. For consistency,
we correct both positions and proper motions.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{braucac1}\includegraphics[width=0.6\textwidth]{bdecucac1}}
\centerline{\includegraphics[width=0.6\textwidth]{pmraucac1}\includegraphics[width=0.6\textwidth]{pmdecucac1}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for UCAC-1. Bottom: proper motion corrections
in right ascension (left) and declination (right) for UCAC-1.}\label{f:ucac1}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{braucac2}\includegraphics[width=0.6\textwidth]{bdecucac2}}
\centerline{\includegraphics[width=0.6\textwidth]{pmraucac2}\includegraphics[width=0.6\textwidth]{pmdecucac2}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for UCAC-2. Bottom: proper motion corrections
in right ascension (left) and declination (right) for UCAC-2.}\label{f:ucac2}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{braucac3}\includegraphics[width=0.6\textwidth]{bdecucac3}}
\centerline{\includegraphics[width=0.6\textwidth]{pmraucac3}\includegraphics[width=0.6\textwidth]{pmdecucac3}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for UCAC-3. Bottom: proper motion corrections
in right ascension (left) and declination (right) for UCAC-3.}\label{f:ucac3}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{braucac4}\includegraphics[width=0.6\textwidth]{bdecucac4}}
\centerline{\includegraphics[width=0.6\textwidth]{pmraucac4}\includegraphics[width=0.6\textwidth]{pmdecucac4}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for UCAC-4. Bottom: proper motion corrections
in right ascension (left) and declination (right) for UCAC-4.}\label{f:ucac4}
\end{figure}
\subsection{GSC catalogs}
The GSC-1 (Fig.~\ref{f:gsc_1.1} and Fig.~\ref{f:gsc_1.2}) and GSC-ACT
(Fig.~\ref{f:gsc_act}) catalogs are significantly biased and have no
proper motion information. There is no doubt that the astrometry
reduced with these catalogs should be corrected.
As shown in Table~\ref{t:catalogs} there are almost 300,000 asteroid
observations submitted with the flag code `z'. These observations were
reduced with one of the GSC catalogs but we do not know which one. It
may be either one of the GSC-1 catalogs or one of the GSC-2 catalogs
\citep{gsc2.2, gsc2.3}. As a consequence, we cannot correct those
observations. It would be very useful if observers could provide the
MPC with the information on which specific GSC catalog they used to
reduce the astrometry.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{bragsc_11}\includegraphics[width=0.6\textwidth]{bdecgsc_11}}
\centerline{\includegraphics[width=0.6\textwidth]{pmragsc_11}\includegraphics[width=0.6\textwidth]{pmdecgsc_11}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for GSC-1.1. Bottom: proper motion corrections
in right ascension (left) and declination (right) for GSC-1.1.}\label{f:gsc_1.1}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{bragsc_12}\includegraphics[width=0.6\textwidth]{bdecgsc_12}}
\centerline{\includegraphics[width=0.6\textwidth]{pmragsc_12}\includegraphics[width=0.6\textwidth]{pmdecgsc_12}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for GSC-1.2. Bottom: proper motion corrections
in right ascension (left) and declination (right) for GSC-1.2.}\label{f:gsc_1.2}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{bragsc_act}\includegraphics[width=0.6\textwidth]{bdecgsc_act}}
\centerline{\includegraphics[width=0.6\textwidth]{pmragsc_act}\includegraphics[width=0.6\textwidth]{pmdecgsc_act}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for GSC-ACT. Bottom: proper motion corrections
in right ascension (left) and declination (right) for GSC-ACT.}\label{f:gsc_act}
\end{figure}
\subsection{Other catalogs}
For NOMAD (Fig.~\ref{f:nomad}) both position and proper motion
corrections are very similar to those of USNO-B1.0. This is not a
surprise as NOMAD is a merge of a few catalogs, and USNO-B1.0 is the
one with the largest number of stars. Thus, we correct NOMAD for both
positions and proper motions.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{branomad}\includegraphics[width=0.6\textwidth]{bdecnomad}}
\centerline{\includegraphics[width=0.6\textwidth]{pmranomad}\includegraphics[width=0.6\textwidth]{pmdecnomad}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for NOMAD. Bottom: proper motion corrections
in right ascension (left) and declination (right) for NOMAD.}\label{f:nomad}
\end{figure}
PPM (Fig.~\ref{f:ppm}) shows significant errors in both positions and
proper motions. There is no doubt that observations reduced with this
catalog should be debiased.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{brappm}\includegraphics[width=0.6\textwidth]{bdecppm}}
\centerline{\includegraphics[width=0.6\textwidth]{pmrappm}\includegraphics[width=0.6\textwidth]{pmdecppm}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for PPM. Bottom: proper motion corrections
in right ascension (left) and declination (right) for PPM.}\label{f:ppm}
\end{figure}
Despite the missing proper motions, CMC-14 (Fig.~\ref{f:cmc14})
provides good star positions. However, the position corrections show a
regional dependence correlated with proper motion features. We therefore
correct all the CMC-14 based astrometry.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{bracmc14}\includegraphics[width=0.6\textwidth]{bdeccmc14}}
\centerline{\includegraphics[width=0.6\textwidth]{pmracmc14}\includegraphics[width=0.6\textwidth]{pmdeccmc14}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for CMC-14. Bottom: proper motion corrections
in right ascension (left) and declination (right) for CMC-14.}\label{f:cmc14}
\end{figure}
SDSS-DR7 (Fig.~\ref{f:sdss7}) does not seem to be an ideal catalog for
astrometric reduction. As we can see from Fig.~\ref{f:sdss7}, this
catalog does not have uniform coverage of the sky. Moreover, the
position and proper motion errors are significant. We therefore
correct all the SDSS-DR7 based astrometry. It is worth noticing that
all but 10 of the observations reduced using SDSS-DR7 were obtained by
the Palomar Transient Factory survey \citep{ptf}.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{brasdss7}\includegraphics[width=0.6\textwidth]{bdecsdss7}}
\centerline{\includegraphics[width=0.6\textwidth]{pmrasdss7}\includegraphics[width=0.6\textwidth]{pmdecsdss7}}
\caption{Top: J2000.0 position corrections in right ascension (left)
and declination (right) for SDSS-DR7. Bottom: proper motion corrections
in right ascension (left) and declination (right) for SDSS-DR7.}\label{f:sdss7}
\end{figure}
\section{Improvement of residual statistics and ephemeris predictions}
\subsection{Tests with Apophis, Bennu, and Golevka}
We tested the astrometric corrections described in this paper on three
asteroids with the best constrained trajectories: (99942) Apophis,
(101955) Bennu, and (6489) Golevka.
\citet{tholen_apo} reported over 430 high quality ground-based optical
observations for Apophis. We analyzed the behavior of the postfit
residuals, i.e., against the best fitting orbital solution, for the
\citet{tholen_apo} observations by using the \citet{cbm10} debiasing
scheme and the one presented in this paper. Figure~\ref{f:apo_res}
shows a scatter plot of the postfit residuals in RA and DEC with the two
different schemes. In both cases, the orbital solution is computed by
only using the \citet{tholen_apo} astrometry, Magdalena Ridge and
Pan-STARRS PS1 observations, and radar astrometry \citep[for details
see][]{farnocchia_apophis}. With the \citet{cbm10} scheme, the
\citet{tholen_apo} observations show mean RA/DEC postfit residuals of
$(0.033", 0.021")$. The adoption of the new scheme reduces the RA/DEC
mean postfit residuals to $(0.006", -0.005")$. The clear improvement is mostly
due to the proper motion corrections of 2MASS based astrometry, which
dominates the \citet{tholen_apo} dataset.
\begin{figure}
\centerline{\includegraphics[width=0.8\textwidth]{apo_res}}
\caption{Postfit residuals of the Apophis astrometry from
\citet{tholen_apo} against the orbital solutions computed by using
the \citet{cbm10} debiasing scheme (circles) and the one presented
in this paper (pluses). Mean postfit residuals (white crosses) and
covariance ellipses at the 3$\sigma$ level are shown for the
respective datasets.}\label{f:apo_res}
\end{figure}
Near-Earth asteroids Golevka \citep{golevka}, Bennu \citep{bennu}, and
Apophis \citep{farnocchia_apophis} have exceptionally well constrained
orbits thanks to the availability of three radar
apparitions. Table~\ref{t:chi2} shows the normalized $\chi^2$, i.e.,
the weighted sum of the squared postfit residuals, of the
orbital fit for the \citet{cbm10} debiasing scheme and the one
presented here. For the computation of normalized $\chi^2$ we used
the \citet{cbm10} data weights as well as some manual weights as
described in \citet{bennu} and \citet{farnocchia_apophis}. In both
cases $\chi^2$ improves with the new debiasing scheme, especially for
Golevka. Since the nominal trajectory is already well constrained by
the radar measurements, $\chi^2$ measures how well the optical
observations fit the trajectory. Therefore, the improvement in
$\chi^2$ further suggests that the new debiasing scheme is more
accurate.
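The normalized $\chi^2$ used here is simply the weighted sum of squared postfit residuals; a minimal sketch with hypothetical residuals and weights (not the actual fit data):

```python
def normalized_chi2(residuals, sigmas):
    """Weighted sum of squared postfit residuals: sum((r_i / sigma_i)^2)."""
    return sum((r / s) ** 2 for r, s in zip(residuals, sigmas))
```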
\begin{table}
\begin{center}
\begin{tabular}{l|ccc}
\hline
Object & $\chi^2$ \citet{cbm10} & $\chi^2$ this paper & $\Delta \chi^2$\\
\hline
(99942) Apophis & 231 & 201 & 30\\
(101955) Bennu & 227 & 224 & 3\\
(6489) Golevka & 1024 & 1005 & 19\\
\hline
\end{tabular}
\end{center}
\caption{Normalized $\chi^2$ of the orbital fit for asteroids (99942)
Apophis, (101955)
Bennu, and (6489) Golevka. We show the results for
both the \citet{cbm10}
debiasing scheme and the one presented in this paper.}
\label{t:chi2}
\end{table}
\subsection{Test with Pan-STARRS PS1 data}
\citet{milani12} found unexpected biases in Pan-STARRS PS1 data and
\citet{tholen_2mass} show clear correlations between the detected
biases and the lack of proper motion in 2MASS, which is the reference
catalog for Pan-STARRS PS1 astrometry. Since the debiasing scheme
presented in this paper corrects for proper motions, the size of the
detected biases should decrease significantly.
Table~\ref{t:ps1_res} shows the mean and standard deviation of
Pan-STARRS PS1 residuals in both RA and DEC. There is a modest
improvement in RA and a more significant improvement in DEC.
\begin{table}
\begin{center}
\begin{tabular}{l|cc}
\hline
& \multicolumn{2}{c}{Residuals}\\
& $\text{RA} \cos(\text{DEC})$ & DEC\\
\hline
\citet{cbm10}& 0.05'' $\pm $ 0.13'' & 0.06'' $\pm $ 0.12'' \\
This paper & 0.04'' $\pm$ 0.11'' & -0.01'' $\pm$ 0.11''\\
\hline
\end{tabular}
\end{center}
\caption{Mean and standard deviation of Pan-STARRS PS1 residuals for
the \citet{cbm10} debiasing scheme and that of this paper.}
\label{t:ps1_res}
\end{table}
Figure~\ref{f:ps1} is a sky map of Pan-STARRS PS1 mean residuals in
the sky and helps to better appreciate the improvement due to the new
debiasing scheme. We only considered those tiles in the sky with at
least 100 observations. Top panels correspond to the \citet{cbm10}
debiasing scheme. We can clearly see the correlation between the detected
biases and the star proper motions (e.g., see bottom panels of
Fig.~\ref{f:2mass}). The bottom panels show the mean residuals using
the new debiasing scheme: the improvement is evident and, in
particular, the regional structure of the systematic error
distribution vanishes.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{bra_ps1_cbm10}\includegraphics[width=0.6\textwidth]{bdec_ps1_cbm10}}
\centerline{\includegraphics[width=0.6\textwidth]{bra_ps1}\includegraphics[width=0.6\textwidth]{bdec_ps1}}
\caption{Mean residuals for Pan-STARRS PS1 observations in right
  ascension (left) and declination (right). Top panels are for the
  \citet{cbm10} debiasing scheme, bottom panels are for the debiasing
  scheme described in this paper.}\label{f:ps1}
\end{figure}
\subsection{Prediction test}
\label{s:pred_test}
To validate the new debiasing scheme the most important test is
prediction: the orbits computed with the new scheme have to provide
better predictions. We performed a test similar to that described by
\citet[][Sec.~6]{cbm10}. We took the same 222 asteroids, but
considered their last nine apparitions. For each object we selected
different subsets of the observational arc, fit an orbit, propagated
it to the central epoch of the 5th apparition, computed the
3-dimensional Cartesian position, and compared it to the solution
obtained from the full observational dataset, which is considered the
truth. The comparison was done consistently, i.e., if the prediction
was computed with the \citet{cbm10} scheme, then the truth was
computed with the \citet{cbm10} scheme, and similarly for the scheme
presented in this paper.
For each subset of the 9 apparitions, the new debiasing scheme
performed better than that from \citet{cbm10}. As an example,
Fig.~\ref{f:pred_test} shows the cumulative distributions of the
prediction error for predictions made by using different subsets of
the 9 considered apparitions. We can see how the cumulative prediction
error distributions obtained with the new scheme are better than those
obtained with the \citet{cbm10} scheme.
\begin{figure}
\centerline{\includegraphics[width=0.9\textwidth]{pred_err}}
\caption{Cumulative distribution of prediction errors as a function of
the formal prediction $\sigma$ for the debiasing schemes presented
here (solid line) and the \citet{cbm10} one (dashed line). The
titles indicate what apparitions were used to compute the
prediction, e.g., App. 1--3 means that apparition 1, 2, and 3 were
used.}\label{f:pred_test}
\end{figure}
\section{Data weights and correlations}
The computation of an orbit is the result of a least square procedure
\citep[Chap.~5]{orbdet}. It is important that individual observations
are assigned weights that reflect the expected accuracy $\sigma$,
i.e., $w = 1/\sigma^2$. Tables~\ref{t:pho_dt}--\ref{t:spe_cat} list
the weights we have been using in the last few years. The $\sigma$
values for CCD observations of Tables~\ref{t:gen_cat} and
\ref{t:spe_cat} are largely from \citet{cbm10}, with some ad hoc
additions based on our experience. The precedence rule is the
following: Table~\ref{t:spe_cat} has priority over
Table~\ref{t:gen_cat}, which in turn has priority over
Tables~\ref{t:pho_dt} and \ref{t:gen_typ}. Note that \citet{cbm10}
adjust the CCD weights by applying their so-called ``safety factor''
of 2 to the reported $\sigma$ values to provide a more realistic
ephemeris uncertainty, i.e., with a prediction error distribution
closer to a theoretical normal distribution.
\begin{table}
\begin{center}
\begin{tabular}{lcc}
\hline
Date & $\sigma_{RA}$ & $\sigma_{DEC}$\\
\hline
$< 1890$ & 3.0'' & 3.0''\\
1890--1950 & 2.0'' & 2.0''\\
$> 1950$ & 1.5'' & 1.5''\\
\hline
\end{tabular}
\end{center}
\caption{Weighting rules by date for photographic, A and N-type
observations (see Table~\ref{t:counts}).}
\label{t:pho_dt}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{lcc|lcc}
\hline
Type & $\sigma_{RA}$ & $\sigma_{DEC}$ & Type & $\sigma_{RA}$ & $\sigma_{DEC}$\\
\hline
C, c, n, V, S & 1.0'' & 1.0'' & M & 3'' & 3'' \\
H & 0.4'' & 0.4'' & T & 0.5'' & 0.5''\\
E & 0.2'' & 0.2'' & e & 0.75'' & 0.75''\\
\hline
\end{tabular}
\end{center}
\caption{General weighting rules by type (see Table~\ref{t:counts}).}
\label{t:gen_typ}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{lcc|lcc}
\hline
Catalog & $\sigma_{RA}$ & $\sigma_{DEC}$ & Catalog & $\sigma_{RA}$ & $\sigma_{DEC}$\\
\hline
c, d & 0.51'' & 0.40'' & m & 0.56'' & 0.57''\\
e, q, r, u & 0.33'' & 0.30'' & w & 0.44'' & 0.36''\\
o, s & 0.50'' & 0.41'' & f, g & 0.73'' & 0.64''\\
a, b & 0.59'' & 0.51'' & L, t & 0.25'' & 0.25''\\
h, i, j, z & 0.45'' & 0.44'' & & &\\
\hline
\end{tabular}
\end{center}
\caption{Specific weighting rules by star catalog for observations
with MPC type flag
C, c, n, or V (see Table~\ref{t:counts}).}
\label{t:gen_cat}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{lccc|lccc}
\hline
Station & Catalog & $\sigma_{RA}$ & $\sigma_{DEC}$ & Station & Catalog & $\sigma_{RA}$ & $\sigma_{DEC}$\\
\hline
704 & c, d & 0.62'' & 0.60'' & 608 & c, d & 0.63'' & 0.77''\\
644 & c, d & 0.24'' & 0.28'' & 644 & o, s & 0.18'' & 0.17''\\
703 & c, d & 0.62'' & 0.57'' & 703 & e, r & 0.49'' & 0.46''\\
699 & c, d & 0.47'' & 0.39'' & 699 & o, s & 0.42'' & 0.41''\\
691 & c, d & 0.32'' & 0.34'' & 691 & o, s & 0.25'' & 0.28''\\
G96 & e, r & 0.25'' & 0.21'' & E12 & e, r & 0.41'' & 0.43''\\
F51 & L & 0.15'' & 0.15'' & H01 & t, L & 0.15'' & 0.15''\\
568 & t & 0.13'' & 0.13'' & 568 & L & 0.15'' & 0.15''\\
568 & o, s & 0.25'' & 0.25'' & 673 & All & 0.30'' & 0.30''\\
683 & e, r & 0.61'' & 0.78'' & 645 & e & 0.15'' & 0.15''\\
689 & g & 0.26'' & 0.32'' & 250 & All & 1.30'' & 1.30''\\
C51 & All & 1.00'' & 1.00'' & & & & \\
\hline
\end{tabular}
\end{center}
\caption{Station specific weighting rules by star catalog for
observations with MPC type flag C,
c, n, V, or S (see Table~\ref{t:counts}).}
\label{t:spe_cat}
\end{table}
\citet{carpino03}, \citet{cbm10}, and \citet{baer11} show that
asteroid astrometric errors can be correlated, especially for
same-station observations closely spaced in time. This is no
surprise, since unresolved systematic errors, such as timing errors,
naturally induce correlations. To mitigate the
effect of unresolved systematic errors and correlations we relax the
weights, especially when there are many observations from the same
station on the same night (which we call a batch of observations). Our
strategy is to apply a scale factor $\sqrt{N}$ to each weight, where
$N$ is the number of observations contained in a single-station
batch. We consider as a batch a sequence of observations from the same
station with a time gap smaller than 8 hours between two consecutive
observations. For CCD observations, which have a typical batch of 3--5
observations, this scale factor is close to the safety factor of 2
suggested by \citet{cbm10}. The $\sqrt{N}$ factor better handles
batches with a large number of observations and avoids down-weighting
batches with only a few observations. Moreover,
this scale factor is applied to all types of observations thus
mitigating possible correlations for, e.g., old photographic
observations.
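A minimal sketch of this batch-and-relax rule (our illustrative reconstruction, not the authors' production code; the function and variable names are ours, and the 8-hour default mirrors the description above):

```python
import numpy as np

def batch_scaled_sigmas(times_h, sigmas, gap_h=8.0):
    """Relax per-observation sigmas of a single station by sqrt(N), where
    N is the batch size: consecutive observations separated by less than
    gap_h hours belong to the same batch, so w = 1/sigma^2 shrinks by N."""
    times_h = np.asarray(times_h, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    order = np.argsort(times_h)
    # a new batch starts wherever the time gap reaches gap_h
    gaps = np.diff(times_h[order])
    batch_id = np.concatenate(([0], np.cumsum(gaps >= gap_h)))
    scaled = sigmas.copy()
    for b in np.unique(batch_id):
        idx = order[batch_id == b]
        scaled[idx] = sigmas[idx] * np.sqrt(len(idx))
    return scaled

# four CCD positions in one night plus one isolated observation;
# the night-of-four gets sqrt(4) = 2, close to the cbm10 safety factor
sig = batch_scaled_sigmas([0.0, 0.5, 1.0, 1.5, 30.0], [0.3] * 5)
```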
To validate the new weighting scheme we performed a test similar to
that of Sec.~\ref{s:pred_test}. In this case we used two different
weighting schemes: the one with a safety factor of 2 and the one that
scales by $\sqrt{N}$. Figure~\ref{f:predw_test} shows the prediction
error cumulative distributions for both schemes. The two weighting
schemes give very similar results, and both appear to be coarse, as
the prediction errors are larger than theoretically expected. A deeper
analysis of the data weights is beyond the scope of this paper, but we
plan to address this issue in the future.
\begin{figure}
\centerline{\includegraphics[width=0.9\textwidth]{weights_test}}
\caption{Cumulative distribution of prediction errors as a function of
the formal prediction $\sigma$ for different weighting
schemes. Solid line is with the $\sqrt N$ factor, dashed line is
with the safety factor of 2, and dash-dotted line is a normal cumulative
distribution. The titles indicate what apparitions were used to
compute the prediction, e.g., App. 1--3 means that apparition 1, 2, and 3 were
used.}\label{f:predw_test}
\end{figure}
\section{Discussion}
Developing a reliable statistical error model for asteroid astrometric
observations is a complicated task. In this paper we make a
significant contribution by computing star position and proper motion
corrections.
The selection of a reference star catalog was not obvious. We selected
the subset of PPMXL corresponding to 2MASS based stars, thereby
inheriting the good accuracy of 2MASS stars and adding proper motion
information. We decided not to use the whole PPMXL catalog because
more than 50\% of its star positions are derived from USNO-B1.0 and
are not as accurate as desirable. The USNO-B1.0 based stars in PPMXL
can also affect asteroid observations reduced with PPMXL. These
observations cannot be corrected unless we select a reference catalog
independent from PPMXL. A possible solution would be that observers
select 2MASS based stars from PPMXL. However, this approach would
result in an inhomogeneous dataset of PPMXL based astrometry, unless
the corresponding observations are flagged with a separate MPC catalog
code.
\citet{teixeira} question the reliability of proper motions in some of
the main astrometric catalogs, including PPMXL. Although we
acknowledge that this problem has to be fixed, our goal is to improve
the current treatment of asteroid astrometry. The tests discussed in
this paper show that including our position and proper motion
corrections provides better predictions and better orbital fit
statistics. As soon as the Gaia star catalog \citep{GAIA} is
available, we will have a much more reliable reference catalog to
refine our debiasing scheme.
Finally, we presented a data weighting scheme that updates and
generalizes the one suggested by \citet{cbm10}. In particular, to
properly mitigate the possible effect of correlated observation
errors, we now account for the number of observations in a single
batch and scale the data weights accordingly. Since this scheme is
still quite coarse, future work will include a detailed statistical
analysis of the observation errors to produce a more accurate data
weighting scheme.
\section*{Acknowledgments}
We are grateful to D.~G. Monet and M. Micheli for several discussions
that helped in improving the paper. We also thank B.~J. Gray, \v Z.
Ivezi\'c, G. Landais, O. Maliy, B.~J. McLean, P.~A. Ries, S. Roeser,
S. Urban, P.~R. Weissman, and G.~V. Williams for providing us with
some of the catalogs and other useful information.
This research made use of the VizieR catalogue access tool, CDS,
Strasbourg, France \citep{vizier}.
Part of this research was conducted at the Jet Propulsion Laboratory,
California Institute of Technology, under a contract with NASA.
Copyright 2014 California Institute of Technology.
\section{Introduction} \label{sec:intro}
A major aim of stochastic portfolio theory (see \cite{F02} and \cite{FKSurvey} for an introduction) is to uncover relative arbitrage opportunities under minimal and realistic assumptions on the behavior of equity markets. Consider an equity market with $n$ stocks. The market weight $\mu_i(t)$ of stock $i$ at time $t$ is the market capitalization of stock $i$ divided by the total capitalization of the market. The vector $\mu(t) = (\mu_1(t), ..., \mu_n(t))$ of market weights takes values in the open unit simplex $\Delta^{(n)}$ in ${\Bbb R}^n$ defined by
\[
\Delta^{(n)} = \left\{p = (p_1, ..., p_n): p_i > 0, \sum_{i = 1}^n p_i = 1 \right\}.
\]
For each $t$, the portfolio manager chooses a portfolio vector in $\overline{\Delta^{(n)}}$, where $\overline{\Delta^{(n)}}$ is the closure of $\Delta^{(n)}$. Its components represent the proportions of the current capital invested in each of the stocks. We assume that the portfolio is self-financing and all-long, so short selling is prohibited. The {\it market portfolio} is the portfolio whose portfolio weight at time $t$ is $\mu(t)$. It is a buy-and-hold portfolio since no trading is required after its inception. In general trading is required to maintain the target portfolio weights. A {\it relative arbitrage} with respect to the market portfolio over the horizon $[0, t_0]$ is a portfolio which is guaranteed to outperform the market portfolio at time $t_0$.
We say that the market is {\it diverse} if $\max_{1 \leq i \leq n} \mu_i(t) \leq 1 - \delta$ for some $\delta > 0$ and for all $t$, or more generally if $\mu(t) \in K$ for all $t$ where $K$ is an appropriate subset of $\Delta^{(n)}$. The market is {\it sufficiently volatile} if the cumulated volatility of the market weight grows to infinity in a suitable sense. Assuming the market is diverse and sufficiently volatile, it is possible to construct relative arbitrages with respect to the market portfolio over a finite (but possibly long) horizon; see for example \cite{FKSurvey}, \cite{PW13} and the references therein. In fact, it is possible to construct relative arbitrages whose portfolio weights are deterministic functions of the current market weights. In particular, forecasts of expected returns and the covariance matrix are not required. These portfolios, first introduced in \cite{F99}, are said to be {\it functionally generated}. This is in accordance with the observation by many academics and practitioners (see for example \cite{FGH98}, \cite{DGU09} and \cite{PPA12}) that simple portfolio rules such as the equal and diversity weighted portfolios often beat the market over long periods. Intuitively, these portfolios work by capturing market volatility while controlling the maximum drawdown relative to the market portfolio (the main ideas will be reviewed in Section \ref{sec:Fernholz}). In \cite{PW14} we proved the converse: a relative arbitrage portfolio (more precisely a {\it pseudo-arbitrage}, see below) depending deterministically on the current market weights must be functionally generated. We emphasize that a relative arbitrage portfolio is supposed to perform well {\it for all} possible realizations of the market weight process satisfying diversity and sufficient volatility. This observation is utilized in \cite{PW14} to allow a geometric, pathwise approach without assuming any stochastic model for the market weight process.
\medskip
There are two important questions that are not fully addressed by the existing theory. First, what happens if the market portfolio is replaced by another benchmark? In \cite{S12} the concept of functionally generated portfolio and the key `master equation' (see Lemma \ref{lem:FernholzDecomp} below) are extended to arbitrary benchmark portfolios. However, little is known about the existence of relative arbitrage under general conditions such as diversity and sufficient volatility. For example, can we beat the equal-weighted portfolio by a functionally generated portfolio in a diverse and sufficiently volatile market, in the same way a functionally generated portfolio beats the market portfolio? More generally, does there exist an infinite hierarchy of relative arbitrages? Is there a `maximal portfolio' which cannot be beaten if only diversity and sufficient volatility are assumed?
Second, is there a sound and applicable optimization theory for relative arbitrages and functionally generated portfolios? Such a theory is clearly of great interest and this problem was raised already in Fernholz's monograph \cite[Problems 3.1.7-8]{F02}. To the best of our knowledge limited progress has been made to optimization of functionally generated portfolios. See \cite{PW13} for an attempt in the two asset case and \cite{PW14} for an approach using optimal transport. On the theoretical side, if the market model is given it is sometimes possible to characterize the highest return relative to the market or a given trading strategy that can be achieved using nonanticipative investment rules over a given time horizon. See \cite{FK10} for the case of Markovian markets, \cite{FK11} for a more general setting which allows uncertainty regarding the drift and diffusion coefficients, and \cite{R11} which expresses optimal relative arbitrages with respect to Markovian trading strategies as delta hedges. For optimization of functionally generated portfolios, a major difficulty is that the class of functionally generated portfolios is a function space and the optimization has to be nonparametric. Ideally, given historical data or a stochastic model of the market weight process, we want to pick an optimal functionally generated portfolio subject to appropriate constraints.
\medskip
The present paper attempts to give answers to both questions. In this paper we interpret relative arbitrage by what we call {\it pseudo-arbitrage} in \cite{PW13}. This is a model-free concept and the precise definition will be stated in Section \ref{sec:prelim}. We only consider portfolios which are deterministic functions of the current market weight, so a portfolio is represented by a map $\pi: \Delta^{(n)} \rightarrow \overline{\Delta^{(n)}}$. This means that the portfolio manager always chooses $\pi(p)$ when the current market weight is $\mu(t) = p \in \Delta^{(n)}$, regardless of previous price movements. Following \cite{PW14}, in this paper time is discrete and the market is represented by a deterministic sequence $\{\mu(t)\}_{t = 0}^{\infty}$ with state space $\Delta^{(n)}$. No underlying probability space is required.
Regarding the hierarchy of relative arbitrages, we first define a partial order among portfolios. If $\pi$ is a portfolio, we let $V_{\pi}(t)$ be the ratio of the growth of $\$1$ invested in the portfolio to that of $\$1$ invested in the market portfolio, and call it the {\it relative value process}. Let $\pi, \tau: \Delta^{(n)} \rightarrow \overline{\Delta^{(n)}}$ be portfolios. We say that $\tau$ {\it dominates $\pi$ on compacts} (written $\tau \succeq \pi$) if for any compact set $K \subset \Delta^{(n)}$, there exists a constant $\varepsilon = \varepsilon(\pi, \tau, K) > 0$ such that $V_{\tau}(t) / V_{\pi}(t) \geq \varepsilon$ for all $t$ and for all sequences of market weight $\{\mu(t)\}_{t = 0}^{\infty}$ taking values in $K$. That is, the maximum drawdown of $\tau$ relative to $\pi$ is uniformly bounded regardless of the market movement in that region. Since the compact set $K$ is arbitrary, this is a global property and defines a partial order among portfolios. If ${\mathcal S}$ is a family of portfolios, we say that a portfolio $\pi \in {\mathcal S}$ is {\it maximal} in ${\mathcal S}$ if there is no portfolio, other than $\pi$ itself, which dominates $\pi$ on compacts, i.e., $\tau \in {\mathcal S}$ and $\tau \succeq \pi$ implies $\tau = \pi$. In Section \ref{subsec:pseudo} we will relate this partial order with pseudo-arbitrage. Here we note that if $\tau$ is a relative or pseudo-arbitrage with respect to $\pi$ in all diverse and sufficiently volatile markets, it is necessarily the case that $\tau$ dominates $\pi$ on compacts.
Let $\pi: \Delta^{(n)} \rightarrow \overline{\Delta^{(n)}}$ be a portfolio and $\Phi$ be a positive concave function on $\Delta^{(n)}$. We say that $\pi$ is {\it functionally generated} with generating function $\Phi$ if for all $p \in \Delta^{(n)}$, the vector of coordinatewise ratios $\pi(p)/p$ defines a supergradient of the concave function $\log \Phi$ at $p$ (see Definition \ref{def:fgp} below for the rigorous definition). If $\Phi$ is $C^2$ (twice continuously differentiable), then $\pi$ is necessarily given by
\begin{equation} \label{eqn:fgweight}
\pi_i(p) = p_i \left(1 + D_{e(i) - p} \log \Phi(p) \right), \quad i = 1, ..., n, \quad p \in \Delta^{(n)}.
\end{equation}
Here $D_{e(i) - p}$ is the directional derivative in the direction $e(i) - p$, where $e(i)$ is the vertex of $\overline{\Delta^{(n)}}$ in the $i$-th direction. For example, the market portfolio is generated by the constant function $\Phi(p) \equiv 1$. We say that $\Phi$ is a {\it measure of diversity} if it is $C^2$ and symmetric (invariant under permutations of the coordinates). Let $\overline{e} = \left(\frac{1}{n}, ..., \frac{1}{n}\right)$ be the barycenter of $\Delta^{(n)}$. For portfolios that are continuously differentiable, the following theorem gives a sufficient condition for a portfolio to be maximal.
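To make \eqref{eqn:fgweight} concrete, here is a small numerical sketch (ours, not part of the paper): with the entropy function $\Phi(p) = -\sum_i p_i \log p_i$, the recipe recovers the well-known entropy-weighted portfolio $\pi_i(p) = -p_i\log p_i/\Phi(p)$.

```python
import numpy as np

def fg_weights(log_phi, p, h=1e-6):
    """Weights of the portfolio generated by a smooth positive concave Phi,
    pi_i(p) = p_i * (1 + D_{e(i)-p} log Phi(p)), via central differences.
    The directions e(i) - p are tangent to the simplex, so p +/- h*d
    stays on it."""
    n = len(p)
    pi = np.empty(n)
    for i in range(n):
        d = -p.copy()
        d[i] += 1.0                      # direction e(i) - p
        deriv = (log_phi(p + h * d) - log_phi(p - h * d)) / (2.0 * h)
        pi[i] = p[i] * (1.0 + deriv)
    return pi

entropy = lambda p: -np.sum(p * np.log(p))   # a measure of diversity
p = np.array([0.5, 0.3, 0.2])
pi = fg_weights(lambda q: np.log(entropy(q)), p)
```

Since $\sum_i p_i (e(i) - p) = 0$, the weights automatically sum to one.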
\begin{theorem} \label{thm:main}
Let $\pi$ be a portfolio generated by a measure of diversity $\Phi$. If
\begin{equation} \label{eqn:integralcondition}
\int_0^1 \frac{1}{\Phi(te(1) + (1 - t)\overline{e})^2} \mathrm{d}t = \infty,
\end{equation}
then $\pi$ is maximal in the class of portfolios $\tau: \Delta^{(n)} \rightarrow \overline{\Delta^{(n)}}$ that are continuously differentiable.
\end{theorem}
This sufficient condition is satisfied by the equal and entropy weighted portfolios (see Table \ref{tab:benchmark} in Section \ref{sec:benchmark} for the definitions) among many others. For the market portfolio the generating function is constant and so the integral in \eqref{eqn:integralcondition} converges. In Section \ref{sec:benchmark} we will show that if $\pi$ is functionally generated and $\tau$ dominates $\pi$ on compacts, then $\tau$ must be functionally generated. Thus we may rephrase Theorem \ref{thm:main} by saying that if \eqref{eqn:integralcondition} holds then $\pi$ is maximal in the family of functionally generated portfolios with $C^2$ generating functions. A consequence of Theorem \ref{thm:main} is the following.
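As a brief sanity check (our sketch, not in the original), condition \eqref{eqn:integralcondition} indeed holds for the entropy function $\Phi(p) = -\sum_i p_i\log p_i$:

```latex
% Along p(t) = t e(1) + (1-t)\overline{e} we have
% p_1(t) = 1 - \tfrac{n-1}{n}(1-t) and p_j(t) = \tfrac{1-t}{n} for j \ge 2, so
\Phi\big(p(t)\big) = \frac{n-1}{n}\,(1-t)\left[1 + \log\frac{n}{1-t}\right]
  + O\big((1-t)^2\big), \qquad t \to 1^-,
% hence 1/\Phi^2 \sim C\,(1-t)^{-2}\log^{-2}\tfrac{1}{1-t} near t = 1,
% whose integral over [0, 1) diverges, giving \eqref{eqn:integralcondition}.
```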
\begin{corollary} \label{cor:main}
Under the setting of Theorem \ref{thm:main}, suppose $\tau$ is a $C^1$ portfolio not equal to $\pi$. Then there is a compact set $K \subset \Delta^{(n)}$ and a market weight sequence $\{\mu(t)\}_{t \geq 0}$ taking values in $K$, such that the portfolio value of $\tau$ relative to $\pi$ tends to zero as $t$ tends to infinity.
\end{corollary}
One can interpret Corollary \ref{cor:main} by saying that if $\pi$ is maximal and $\tau \neq \pi$, it is possible to find a diverse and sufficiently volatile market in which $\pi$ beats $\tau$ in the long run. In this sense, for a portfolio $\pi$ satisfying \eqref{eqn:integralcondition}, it is impossible to find a (deterministic) portfolio which is a relative arbitrage with respect to $\pi$ in all diverse and sufficiently volatile markets. Theorem \ref{thm:main} will be proved by comparing the relative concavities of portfolio generating functions.
\medskip
Regarding optimization of functionally generated portfolios, we formulate a {\it shape-constrained optimization problem} in the spirit of maximum likelihood estimation of a log-concave density. For the statistical theory we refer the reader to \cite{DR09}, \cite{CSS10}, \cite{CS10}, \cite{KM10} and \cite{SW10}. Following \cite{PW14}, we associate to each functionally generated portfolio an {\it L-divergence functional} $T\left(\cdot \mid \cdot\right)$ defined on $\Delta^{(n)} \times \Delta^{(n)}$ (see Definition \ref{def:discreteenergy}). Intuitively, $T\left(q \mid p\right)$ measures the potential profit from volatility captured when the market weight jumps from $p$ to $q$ in $\Delta^{(n)}$. Let ${\Bbb P}$ be an intensity measure over the jumps $(p, q)$ which can be defined in terms of data or a given model (examples will be given in Section \ref{sec:optimization}). We maximize
\[
\int T\left( q \mid p \right) \mathrm{d}{\Bbb P}
\]
over all functionally generated portfolios with or without constraints. This optimization problem is shape-constrained because the generating function of a functionally generated portfolio is concave. We prove that the optimization problem is well-posed and is in a suitable sense {\it consistent} when interpreted as a statistical estimation problem. In this paper we implement this optimization for the case of two assets (analogous to univariate density estimation) and a general algorithm will be the topic of future research. We illustrate a typical application in portfolio management with a case study.
\medskip
The paper is organized as follows. In Section \ref{sec:prelim} we set up the notation and recall the definitions of pseudo-arbitrage and functionally generated portfolio. In Section \ref{sec:benchmark} we extend the framework of \cite{PW14} to benchmark portfolios that are functionally generated. Using a relative concavity lemma given in \cite{CDO07}, we prove Theorem \ref{thm:main} and Corollary \ref{cor:main} in Section \ref{sec:concavity}. Optimization of functionally generated portfolios is studied in Section \ref{sec:optimization} and an empirical case study is presented in Section \ref{sec:empirical}. Several proofs of a more technical nature are gathered in Appendix \ref{sec:appendix}.
\section{Pseudo-arbitrage and functionally generated portfolio} \label{sec:prelim}
\subsection{Portfolio and pseudo-arbitrage} \label{subsec:pseudo}
We work under the discrete time, deterministic set-up of \cite{PW14} which we briefly recall here. Let $n \geq 2$ be the number of stocks or assets in the market. We endow the open unit simplex $\Delta^{(n)}$ with the Euclidean metric. The open ball in $\Delta^{(n)}$ centered at $p$ with radius $\delta$ is denoted by $B(p, \delta)$. A tangent vector of $\Delta^{(n)}$ is a vector $v = (v_1, ..., v_n) \in {\Bbb R}^n$ satisfying $\sum_{i = 1}^n v_i = 0$. We denote the vector space of tangent vectors of $\Delta^{(n)}$ by $T\Delta^{(n)}$. For $i = 1, ..., n$, we let $e(i) = (0, ..., 0, 1, 0, ..., 0)$ be the vertex of $\Delta^{(n)}$ in the $i$-th direction. If $a$ and $b$ are vectors in ${\Bbb R}^n$, we let $\langle a, b \rangle$ be the Euclidean inner product. The Euclidean norm is denoted by $\|\cdot\|$. If $b$ has nonzero entries, $a / b$ is the vector of the componentwise ratios $a_i / b_i$.
Throughout this paper time is discrete ($t = 0, 1, 2, ...$). Extensions to continuous time will be discussed briefly in Section \ref{sec:continuoustime}. Let $X_i(t) > 0$ be the market capitalization of stock $i$ at time $t$. The total capitalization of the market is then $X_1(t) + \cdots + X_n(t)$. The market weight of stock $i$ is defined by
\[
\mu_i(t) = \frac{X_i(t)}{X_1(t) + \cdots + X_n(t)}, \quad i = 1, ..., n.
\]
The vector $\mu(t) = (\mu_1(t), ..., \mu_n(t))$ takes values in $\Delta^{(n)}$ and represents the relative sizes of the firms. As the stock prices move the market weights fluctuate accordingly.
As in \cite{PW14}, the stock market is modeled as a deterministic sequence $\{\mu(t)\}_{t \geq 0}$ taking values in $\Delta^{(n)}$, so an underlying probability space is not required. Our approach is analogous to that of universal prediction (see for example \cite{CL06}) where it is not assumed that the data is generated by a stochastic model. Only structural properties such as diversity and sufficient volatility will be imposed on the sequences.
We consider a small investor in this market who cares about the value of his or her portfolio relative to that of the entire market. We restrict ourselves to portfolios which are deterministic functions of the current market weights. Short sales are not allowed and we assume there is no transaction cost.
\begin{definition}[Portfolio and relative value process]
A portfolio is a Borel measurable map $\pi: \Delta^{(n)} \rightarrow \overline{\Delta^{(n)}}$. The market portfolio $\mu$ is the identity map $p \mapsto p$ and we do not distinguish it from the market weight process $\{\mu(t)\}$. Given a portfolio $\pi$, its relative value process $\{V_{\pi}(t)\}_{t \geq 0}$ is defined by $V_{\pi}(0) = 1$ and
\begin{equation} \label{eqn:relativevalue}
\frac{V_{\pi}(t+1)}{V_{\pi}(t)} = 1 + \left\langle \frac{\pi(\mu(t))}{\mu(t)}, \mu(t + 1) - \mu(t) \right\rangle, \quad t \geq 0.
\end{equation}
The weight ratio of the portfolio at $p \in \Delta^{(n)}$ is the vector $\frac{\pi(p)}{p} = \left(\frac{\pi_1(p)}{p_1}, ..., \frac{\pi_n(p)}{p_n}\right)$.
\end{definition}
The relative value $V_{\pi}(t)$ can be interpreted as the ratio of the growth of $\$1$ invested in the portfolio to that of $\$1$ invested in the market portfolio. If $V_{\pi}(t_1) > V_{\pi}(t_0)$, the portfolio outperforms the market portfolio over the (discrete) time interval $[t_0, t_1]$. As mentioned in \cite{PW14}, it is helpful to think of the weight ratio $p \mapsto \frac{\pi(p)}{p}$ as a vector field on $\Delta^{(n)}$. From \eqref{eqn:relativevalue}, the portfolio outperforms the market over $[t, t + 1]$ if the inner product between the displacement $\mu(t + 1) - \mu(t)$ of the market weight and the weight ratio is positive. This means on average the portfolio puts more weight on the assets which perform well relative to the rest of the market.
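A tiny numerical sketch of \eqref{eqn:relativevalue} (our illustration, not from the paper): the market portfolio has $V_\mu \equiv 1$, while the equal-weighted portfolio gains $\tfrac{1}{24}$ on a round trip of the market weight, a toy instance of volatility capture.

```python
import numpy as np

def relative_value(pi, mu_path):
    """Relative value process: V(0) = 1 and
    V(t+1)/V(t) = 1 + <pi(mu(t))/mu(t), mu(t+1) - mu(t)>."""
    V = [1.0]
    for t in range(len(mu_path) - 1):
        p, q = mu_path[t], mu_path[t + 1]
        V.append(V[-1] * (1.0 + np.dot(pi(p) / p, q - p)))
    return np.array(V)

market = lambda p: p                           # weight ratio is (1, ..., 1)
equal = lambda p: np.full_like(p, 1.0 / len(p))

# a round trip of the market weight in the two-asset simplex
path = [np.array([0.5, 0.5]), np.array([0.6, 0.4]), np.array([0.5, 0.5])]
V_mkt = relative_value(market, path)   # identically 1
V_eq = relative_value(equal, path)     # ends above 1: volatility captured
```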
\medskip
In the first part of the paper we will study the hierarchy of portfolios defined by the relation `domination on compacts'.
\begin{definition}[Domination on compacts] \label{def:pseudoarbitrage}
Let $\pi$ and $\tau$ be portfolios. We say that $\tau$ dominates $\pi$ on compacts (written $\tau \succeq \pi$) if for any compact subset $K$ of $\Delta^{(n)}$, there exists a constant $C = C(\pi, \tau, K) \geq 0$ such that for any path $\{\mu(t)\}_{t \geq 0} \subset K$, we have
\begin{equation} \label{eqn:lowerbound}
\log\frac{V_{\tau}(t)}{V_{\pi}(t)} \geq -C, \quad t \geq 0.
\end{equation}
\end{definition}
Thus, if $\tau \succeq \pi$, the value of $\pi$ cannot grow at a rate faster than that of $\tau$ under the diversity condition $\mu(t) \in K$, for any compact subset $K$. The relation $\tau \succeq \pi$ defines a partial order among the class of portfolio maps. We include the logarithm in \eqref{eqn:lowerbound} as this formulation is more convenient when we discuss functionally generated portfolios. This definition is closely related to that of pseudo-arbitrage introduced in \cite{PW14}. The definition given below is extended slightly to allow for an arbitrary benchmark portfolio.
\begin{definition}[Pseudo-arbitrage] \label{def:pseudoarbitrage2}
Let $\pi$ and $\tau$ be portfolios, and $K$ be a subset of $\Delta^{(n)}$, not necessarily compact. We say that $\tau$ is a pseudo-arbitrage with respect to $\pi$ on $K$ if the following properties hold:
\begin{enumerate}
\item[(i)] There exists a constant $C = C(\pi, \tau, K) \geq 0$ such that \eqref{eqn:lowerbound} holds for any sequence $\{\mu(t)\}_{t \geq 0} \subset K$.
\item[(ii)] There exists a sequence $\{\mu(t)\}_{t \geq 0} \subset K$ along which $\lim_{t \rightarrow \infty} \log \frac{V_{\tau}(t)}{V_{\pi}(t)} = \infty$.
\end{enumerate}
\end{definition}
We refer the reader to \cite{PW14} for more discussion of the definition. Here we note that the requirement $\{\mu(t)\}_{t \geq 0} \subset K$ in (i) is a diversity condition which is portfolio-specific, and (ii) refers to the presence of sufficient volatility. The following is an easy consequence of the definitions.
\begin{lemma} \label{lem:easy}
Let $\pi$ and $\tau$ be portfolios. Suppose $\tau$ is a pseudo-arbitrage relative to $\pi$ on $K_j$ for all $j$, where $\{K_j\}$ is a compact exhaustion of $\Delta^{(n)}$. Then $\tau$ dominates $\pi$ on compacts.
\end{lemma}
\begin{definition}[Maximal portfolio]
Let ${\mathcal S}$ be a family of portfolios and $\pi \in {\mathcal S}$. We say that $\pi$ is maximal in ${\mathcal S}$ if there is no portfolio in ${\mathcal S}$, other than $\pi$ itself, which dominates $\pi$ on compacts.
\end{definition}
Note that a maximal portfolio may not exist and may not be unique in the given class. In Section \ref{sec:concavity} we will study the maximal portfolios where ${\mathcal S}$ is the class of portfolios with $C^2$ generating functions. By Lemma \ref{lem:easy}, if $\pi$ is maximal there is no portfolio which is a pseudo-arbitrage with respect to $\pi$ on all sufficiently large compact subsets of $\Delta^{(n)}$. In this sense a maximal portfolio is one which is impossible to beat assuming only diversity and sufficient volatility.
\begin{remark}
The relation `domination on compacts' refers to global properties of portfolios. Even if $\pi$ is maximal, for a {\it fixed} subset $K \subset \Delta^{(n)}$ it may be possible to find a portfolio $\tau$ (depending on $K$) which beats $\pi$ in the long run whenever $\{\mu(t)\} \subset K$. For example, when $n = 2$, it can be shown that the entropy-weighted portfolio beats the equal-weighted portfolio in the long run if $\{\mu(t)\}$ is sufficiently volatile and stays in a certain neighborhood of $\left(\frac{1}{2}, \frac{1}{2}\right)$. This, however, requires that $K$ is known in advance. Maximality of $\pi$ requires that there is no {\it single} $\tau$ which beats $\pi$ on {\it all} compact sets $K \subset \Delta^{(n)}$.
\end{remark}
\subsection{Functionally generated portfolio} \label{sec:fgp}
Functionally generated portfolios were first introduced in a general form in \cite{F99}. We will follow the intrinsic treatment in \cite[Section 2]{PW14} which emphasizes the relationship with convex analysis. Throughout the paper we will rely heavily on results from convex analysis; a standard reference is \cite{R70}.
\begin{definition} [Functionally generated portfolios] \label{def:fgp} {\ }
Let $\pi$ be a portfolio and $\Phi: \Delta^{(n)} \rightarrow (0, \infty)$ be a concave function. We say that $\pi$ is generated by $\Phi$ if the inequality
\begin{equation} \label{eqn:superdiff}
1 + \left\langle \frac{\pi(p)}{p}, q - p \right\rangle \geq \frac{\Phi(q)}{\Phi(p)}
\end{equation}
holds for all $p, q \in \Delta^{(n)}$. We call $\Phi$ the generating function of $\pi$. We denote by ${\mathcal{FG}}$ the collection of all functionally generated portfolios $(\pi, \Phi)$ where $\pi$ is generated by the concave function $\Phi$.
\end{definition}
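As a numerical illustration (not part of the paper), the defining inequality \eqref{eqn:superdiff} can be checked directly. The Python sketch below does this for the entropy-weighted portfolio, generated by the Shannon entropy $\Phi(p) = -\sum_j p_j \log p_j$; the sample size and tolerance are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    # Shannon entropy, the generating function of the entropy-weighted portfolio
    return -np.sum(p * np.log(p))

def entropy_weights(p):
    # pi_i(p) = -p_i log p_i / Phi(p)
    return -p * np.log(p) / entropy(p)

def lhs_rhs(p, q):
    # the two sides of the defining inequality (eqn:superdiff)
    pi = entropy_weights(p)
    lhs = 1.0 + np.dot(pi / p, q - p)
    rhs = entropy(q) / entropy(p)
    return lhs, rhs

ok = True
for _ in range(10000):
    p = rng.dirichlet(np.ones(4))
    q = rng.dirichlet(np.ones(4))
    lhs, rhs = lhs_rhs(p, q)
    ok = ok and lhs >= rhs - 1e-12
print(ok)  # True: the defining inequality holds at every sampled pair
```

For this particular portfolio a short computation shows the gap $\mathrm{lhs} - \mathrm{rhs}$ equals $\big(\sum_i q_i \log(q_i/p_i)\big)/\Phi(p)$, a relative entropy divided by $\Phi(p)$, which is one way to see why the inequality holds.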
It is known (see \cite[Proposition 5]{PW14}) that the generating function is unique up to a positive multiplicative constant, so the use of `the' in the above definition is justified (up to the constant). On the other hand, by Lemma \ref{lem:superdiff}(ii) below a non-smooth concave function $\Phi$ generates multiple portfolios but they differ only on the set where $\Phi$ is not differentiable (i.e., the superdifferential $\partial \log \Phi(p)$ has more than one element), and this set has Lebesgue measure zero (relative to $\Delta^{(n)}$) by \cite[Theorem 25.5]{R70}. Note that here the generating function is concave by definition, while in \cite{F02} non-concave generating functions are allowed. See Theorem \ref{thm:PW14} and Proposition \ref{prop:MCMfgp} below for a justification of our definition.
Let $\Phi$ be a concave function on $\Delta^{(n)}$ and $p \in \Delta^{(n)}$. The {\it superdifferential} of $\Phi$ at $p$ is the set $\partial \Phi(p)$ defined by
\begin{equation} \label{eqn:superdiffdef}
\partial \Phi(p) = \{\xi \in T\Delta^{(n)}: \Phi(p) + \langle \xi, q - p \rangle \geq \Phi(q) \ \forall q \in \Delta^{(n)}\}.
\end{equation}
If $\Phi$ is concave and positive, it can be shown that $\log \Phi$ is also a concave function, and
\begin{equation} \label{eqn:supdiffequal}
\partial \log \Phi(p) = \frac{1}{\Phi(p)} \partial \Phi(p) = \left\{\frac{1}{\Phi(p)} \xi: \xi \in \partial \Phi(p)\right\}.
\end{equation}
\begin{lemma}\cite[Proposition 6]{PW14} \label{lem:superdiff} Let $\Phi$ be a positive concave function on $\Delta^{(n)}$.
\begin{enumerate}
\item[(i)] Let $\pi$ be a portfolio generated by $\Phi$. Then for $p \in \Delta^{(n)}$, the tangent vector $v = (v_1, ..., v_n)$ defined by
\begin{equation} \label{eqn:definev}
v_i = \frac{\pi_i(p)}{p_i} - \frac{1}{n} \sum_{j = 1}^n \frac{\pi_j(p)}{p_j}, \quad i = 1, ..., n,
\end{equation}
belongs to $\partial \log \Phi(p)$.
\item[(ii)] Conversely, if $v \in \partial \log\Phi(p)$, then the vector $\pi = (\pi_1, ..., \pi_n)$ defined by
\begin{equation} \label{eqn:definepi}
\frac{\pi_i}{p_i} = v_i + 1 - \sum_{j = 1}^n p_jv_j, \quad i = 1, ..., n,
\end{equation}
is an element of $\overline{\Delta^{(n)}}$. In particular, any measurable selection of $\partial \log \Phi$ (a Borel measurable map $\xi: \Delta^{(n)} \rightarrow T\Delta^{(n)}$ such that $\xi(p) \in \partial \log \Phi(p)$ for all $p \in \Delta^{(n)}$) defines via \eqref{eqn:definepi} a portfolio generated by $\Phi$. (By \cite[Theorem 14.56]{RW98}, there is always a measurable selection of $\partial \log \Phi$.)
\end{enumerate}
Moreover, the operations $\pi \mapsto v$ and $v \mapsto \pi$ defined by \eqref{eqn:definev} and \eqref{eqn:definepi} are inverses of each other.
\end{lemma}
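The maps \eqref{eqn:definev} and \eqref{eqn:definepi} of Lemma \ref{lem:superdiff} are easy to implement and to round-trip numerically. The sketch below (an illustration, not from the paper) uses the entropy-weighted portfolio as input.

```python
import numpy as np

def pi_to_v(pi, p):
    # eqn (definev): v_i = pi_i/p_i - (1/n) sum_j pi_j/p_j
    r = pi / p
    return r - r.mean()

def v_to_pi(v, p):
    # eqn (definepi): pi_i = p_i (v_i + 1 - sum_j p_j v_j)
    return p * (v + 1.0 - np.dot(p, v))

p = np.array([0.5, 0.3, 0.2])
pi = -p * np.log(p) / np.sum(-p * np.log(p))  # entropy-weighted portfolio

v = pi_to_v(pi, p)
pi_back = v_to_pi(v, p)
print(np.allclose(pi, pi_back))  # True: the two maps are inverses
print(abs(v.sum()) < 1e-12)      # True: v has zero coordinate sum (tangent space)
```

Note that $v \mapsto \pi$ automatically produces weights summing to one, since $\sum_i p_i(v_i + 1 - \langle p, v\rangle) = \langle p, v\rangle + 1 - \langle p, v\rangle = 1$.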
From \eqref{eqn:definepi}, it can be seen that Fernholz's definition (see \cite[Theorem 3.1.5]{F02}) is consistent with ours. If $\pi$ is generated by $\Phi$, the weight ratio vector field $\frac{\pi}{p}$ is {\it conservative} on $\Delta^{(n)}$ and its potential function is given by the logarithm of the generating function $\Phi$. Here is a precise statement and the details can be found in the proof of \cite[Theorem 8]{PW14}. Let $\pi$ be a portfolio. If $\gamma: [0, 1] \rightarrow \Delta^{(n)}$ is a piecewise linear path in $\Delta^{(n)}$, we let
\begin{equation} \label{eqn:lineintegral}
I_{\pi}(\gamma) := \int_{\gamma} \frac{\pi}{p} \mathrm{d}p \equiv \int_0^1 \sum_{i = 1}^n \frac{\pi_i(\gamma(t))}{p_i(\gamma(t))}\gamma'_i(t)\mathrm{d}t
\end{equation}
be the line integral of the weight ratio along $\gamma$. If $\pi$ is functionally generated, the weight ratio $\frac{\pi}{p}$ is conservative in the sense that this line integral is zero whenever $\gamma$ is closed, i.e., $\gamma(0) = \gamma(1)$. Moreover, for any $p, q \in \Delta^{(n)}$ we have
\begin{equation} \label{eqn:lineintegral2}
\log \Phi(q) - \log \Phi(p) = I_{\pi}(\gamma),
\end{equation}
where $\gamma$ is any piecewise linear path from $p$ to $q$. In classical terminology, $\log \Phi$ is then the potential function of the weight ratio vector field. Fernholz's decomposition (see Lemma \ref{lem:FernholzDecomp} below) shows that the log relative value $\log V_{\pi}(t)$ can be decomposed as the sum of the increment of $\log \Phi(\mu(t))$ and a non-decreasing process related to market volatility.
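The potential-function identity \eqref{eqn:lineintegral2} can be verified numerically by discretizing the line integral. The sketch below (an illustration with arbitrarily chosen endpoints, not from the paper) uses a midpoint rule for the entropy-weighted portfolio.

```python
import numpy as np

def phi(p):
    # Shannon entropy, the generating function
    return -np.sum(p * np.log(p), axis=-1)

def weight_ratio(g):
    # pi_i(g)/g_i for the entropy-weighted portfolio
    w = -g * np.log(g)
    return w / w.sum(axis=-1, keepdims=True) / g

def line_integral(p, q, m=4000):
    # midpoint rule for I_pi(gamma) along the straight segment from p to q
    t = (np.arange(m)[:, None] + 0.5) / m
    g = p + t * (q - p)                 # m points on the segment
    return np.dot(weight_ratio(g), q - p).mean()

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.2, 0.5, 0.3])
I = line_integral(p, q)
# the weight ratio field is conservative with potential log Phi
print(abs(I - (np.log(phi(q)) - np.log(phi(p)))) < 1e-6)  # True
```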
The concavity of the generating function will be measured in terms of the L-divergence introduced in \cite{PW14}.
\begin{definition}[L-divergence] \label{def:discreteenergy}
Let $\pi$ be a portfolio generated by a concave function $\Phi: \Delta^{(n)} \rightarrow (0, \infty)$. The L-divergence functional of the pair $(\pi, \Phi)$ is the function $T: \Delta^{(n)} \times \Delta^{(n)} \rightarrow [0, \infty)$ defined by
\begin{equation} \label{eqn:discreteenergy}
T\left(q \mid p \right) = \log \left(1 + \left\langle \frac{\pi(p)}{p}, q - p \right\rangle \right) - \log \frac{\Phi(q)}{\Phi(p)}, \quad p, q \in \Delta^{(n)}.
\end{equation}
\end{definition}
Using \eqref{eqn:superdiff}, it can be shown that $T\left(q \mid p \right) \geq 0$ and $T\left(q \mid p \right) = 0$ only if $\Phi$ is affine on the line segment containing $p$ and $q$. $T\left(\cdot \mid \cdot\right)$ is a logarithmic version (hence the `L') of the {\it Bregman divergence} used in information geometry (see \cite{AC10}) and should be thought of as a measure of the concavity of $\Phi$.
With these definitions, the main results of \cite{PW14} can be summarized as follows.
\begin{theorem}[Pseudo-arbitrages relative to the market portfolio] \cite[Theorem 1, Theorem 2]{PW14} \label{thm:PW14}
A portfolio $\pi$ is a pseudo-arbitrage relative to the market portfolio $\mu$ on a convex subset $K \subset \Delta^{(n)}$ if and only if $\pi$ is generated by a concave function $\Phi: \Delta^{(n)} \rightarrow (0, \infty)$ which is bounded below on $K$ and $T \left( \cdot \mid \cdot \right)$ is not identically zero on $K \times K$. Moreover, these portfolios correspond to solutions of an optimal transport problem.
\end{theorem}
In Section \ref{sec:concavity} we will focus on functionally generated portfolios with $C^2$ generating functions.
\begin{definition} \label{def:fg} \label{defn:C2fgp} {\ }
\begin{enumerate}
\item[(i)] We denote by ${\mathcal{FG}}^2$ the collection of functionally generated portfolios whose generating functions are $C^2$ and concave. An element of ${\mathcal{FG}}^2$ is denoted by either $\pi$, $\Phi$ or $(\pi, \Phi)$ where $\pi$ is generated by $\Phi$. In this case $\pi$ is necessarily given by \eqref{eqn:fgweight}.
\item[(ii)] A positive $C^2$ concave function $\Phi$ on $\Delta^{(n)}$ is called a measure of diversity if it is symmetric, i.e.,
\[
\Phi(p_1, ..., p_n) = \Phi(p_{\sigma(1)}, ..., p_{\sigma(n)})
\]
for all $p \in \Delta^{(n)}$ and any permutation $\sigma$ of $\{1, ..., n\}$.
\end{enumerate}
\end{definition}
The measure of diversity was introduced by Fernholz in \cite[Section 4]{F99}. Some examples are given in Table \ref{tab:benchmark} and more can be found in \cite[Section 3.4]{F02}. A measure of diversity gives a numerical measure of the concentration of the capital distribution $\mu(t) = \left(\mu_1(t), ..., \mu_n(t)\right)$ and also generates a portfolio.
\section{Benchmarking a functionally generated portfolio} \label{sec:benchmark}
Fix a portfolio $\pi$ generated by a concave function $\Phi: \Delta^{(n)} \rightarrow (0, \infty)$ and call it the {\it benchmark portfolio}. Some examples we have in mind are given in Table \ref{tab:benchmark}. All of these portfolios are generated by measures of diversity.
\begin{table}
\caption{Examples of functionally generated portfolios}
\label{tab:benchmark}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
Name & Portfolio weights & Generating function \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Market & $\pi_i(p) = p_i$ & $\Phi(p) = 1$ \\
Diversity-weighted ($0 < r < 1$) & $\pi_i(p) = \frac{p_i^r}{\sum_{j = 1}^n p_j^r}$ & $\Phi(p) = \left( \sum_{j = 1}^n p_j^r \right)^{\frac{1}{r}}$ \\
Equal-weighted & $\pi_i(p) = \frac{1}{n}$ & $\Phi(p) = \left(p_1 p_2 \cdots p_n \right)^{\frac{1}{n}}$ \\
Entropy-weighted & $\pi_i(p) = \frac{-p_i \log p_i}{\sum_{j = 1}^n -p_j \log p_j}$ & $\Phi(p) = \sum_{j = 1}^n -p_j \log p_j$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
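The weights in Table \ref{tab:benchmark} can be recovered from their generating functions using Lemma \ref{lem:superdiff}: take the gradient of $\log \Phi$, project it onto the tangent space, and apply \eqref{eqn:definepi}. The sketch below (an illustration, not from the paper) extends each generating function off the simplex by its defining formula and computes gradients by central finite differences.

```python
import numpy as np

def num_grad(f, p, h=1e-6):
    # central-difference gradient of f in ambient coordinates
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def generated_weights(log_phi, p):
    # portfolio generated by Phi: project grad(log Phi) onto the tangent
    # space (eqn definev) and map to weights via eqn (definepi)
    grad = num_grad(log_phi, p)
    v = grad - grad.mean()
    return p * (v + 1.0 - np.dot(p, v))

p = np.array([0.5, 0.3, 0.2])
r = 0.5

table = {
    "market":    (lambda s: 0.0,                            p),
    "diversity": (lambda s: np.log(np.sum(s**r)) / r,       p**r / np.sum(p**r)),
    "equal":     (lambda s: np.mean(np.log(s)),             np.full(3, 1/3)),
    "entropy":   (lambda s: np.log(-np.sum(s * np.log(s))),
                  -p * np.log(p) / (-np.sum(p * np.log(p)))),
}
for name, (log_phi, expected) in table.items():
    assert np.allclose(generated_weights(log_phi, p), expected, atol=1e-6), name
print("all four table portfolios recovered")
```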
As mentioned in the introduction, it can be proved that many functionally generated portfolios (including the three nontrivial examples above) outperform the market over sufficiently long periods under the assumptions of diversity and sufficient volatility. As these hypotheses appear to hold empirically, such outperformance is indeed observed over long periods. See \cite[Chapter 6]{F02} for several case studies using data of the US stock market. Since these portfolios contain no proprietary modeling, behave reasonably well and are easily replicable, they also serve as alternative benchmarks as discussed in practitioner papers such as \cite{FGH98} and \cite{HCKL11}. It is natural to ask whether we can construct relative or pseudo-arbitrages with respect to these portfolios.
\subsection{Fernholz's decomposition} \label{sec:Fernholz}
The relative value process of a functionally generated portfolio satisfies an elegant decomposition formula. It is a direct consequence of \eqref{eqn:discreteenergy} and \eqref{eqn:relativevalue} and can be motivated by the vector field interpretation discussed in Section \ref{sec:fgp}.
\begin{lemma}[Fernholz's decomposition] \label{lem:FernholzDecomp} \cite[Theorem 3.1]{F99} \cite[Lemma 7]{PW14}
If $\pi$ is generated by a concave function $\Phi$, the relative value process $V_{\pi}$ has the decomposition
\begin{equation} \label{eqn:FernholzDecomp}
\log V_{\pi}(t) = \log \frac{\Phi(\mu(t))}{\Phi(\mu(0))} + A(t),
\end{equation}
where $A(t) = \sum_{k = 0}^{t-1} T\left(\mu(k+1) \mid \mu(k)\right)$ is non-decreasing. We call $A(t)$ the drift process of the portfolio.
\end{lemma}
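The decomposition \eqref{eqn:FernholzDecomp} is an algebraic identity and can be checked path by path. The sketch below (an illustration, not from the paper) assumes the standard one-period relative value recursion $V_{\pi}(t+1)/V_{\pi}(t) = 1 + \langle \pi(\mu(t))/\mu(t),\, \mu(t+1) - \mu(t) \rangle$, uses the entropy-weighted portfolio, and takes a random market weight path.

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(p):   # Shannon entropy
    return -np.sum(p * np.log(p))

def pi(p):    # entropy-weighted portfolio
    return -p * np.log(p) / phi(p)

def T(q, p):  # L-divergence of the entropy-weighted portfolio
    return np.log(1.0 + np.dot(pi(p) / p, q - p)) - np.log(phi(q) / phi(p))

# a random (volatile) market weight path of 50 steps
mu = [rng.dirichlet(np.ones(3)) for _ in range(50)]

# relative value from the one-period recursion
logV = np.cumsum([np.log(1.0 + np.dot(pi(mu[k]) / mu[k], mu[k+1] - mu[k]))
                  for k in range(49)])

# Fernholz decomposition: log V(t) = log Phi(mu(t))/Phi(mu(0)) + A(t)
A = np.cumsum([T(mu[k+1], mu[k]) for k in range(49)])
decomp = np.array([np.log(phi(mu[t+1]) / phi(mu[0])) for t in range(49)]) + A

print(np.allclose(logV, decomp))      # True: the identity holds path by path
print(np.all(np.diff(A) >= -1e-12))   # True: the drift process is non-decreasing
```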
\begin{figure}
\includegraphics[scale=1]{fernholz.pdf}
\caption{Hypothetical performance of a functionally generated portfolio. If the market weight $\mu(t)$ stays within a subset $K \subset \Delta^{(n)}$, the relative value process will stay within the dashed curves which are vertical translations of the drift process $A(t)$. The width of the `sausage' is given by the oscillation of $\log \Phi$ on $K$ defined by ${\mathrm{osc}}_K(\log \Phi) = \sup_{p, q \in K} |\log \Phi(q) - \log \Phi(p)|$.}
\label{fig:FernholzDecomp}
\end{figure}
The key idea of the decomposition is that over any period $[t_0, t_1]$ where $\log \Phi(\mu(t_1))$ and $\log \Phi(\mu(t_0))$ are approximately equal, the portfolio will outperform the market by an amount equal to $A(t_1) - A(t_0)$, see Figure \ref{fig:FernholzDecomp} for an illustration. For this reason, the drift process $A(t)$ can be thought of as the cumulative amount of market volatility captured by the portfolio. The condition of sufficient volatility requires that $A(t)$ grows unbounded as $t \rightarrow \infty$. Empirical studies (see for example \cite[Figure 11.2]{FKSurvey}) show that $A$ increases at a roughly linear rate depending on the portfolio and market volatility. Thus, as long as the fluctuation of $\log \Phi(\mu(t))$ remains bounded, the drift process will dominate in the long run and the portfolio will outperform the market. The assumption on diversity is imposed to bound $\log \Phi(\mu(t))$. For (say) the entropy-weighted portfolio, $\log \Phi(\mu(t))$ is bounded as long as $\max_{1 \leq i \leq n} \mu_i(t) \leq 1 - \delta$ for some $\delta > 0$, so we can take $K$ in Definition \ref{def:pseudoarbitrage2} and Theorem \ref{thm:PW14} to be the set $\{p \in \Delta^{(n)}: \max_{1 \leq i \leq n} p_i \leq 1 - \delta\}$ (this is the definition of diversity stated in \cite{F99} and \cite{FKSurvey}). For other portfolios such as the equal-weighted portfolio, this condition is not enough and we require that $\mu(t)$ stays within a compact subset of $\Delta^{(n)}$. Thus the set $K$ is portfolio-specific. Fernholz's decomposition is implemented in the \verb"R" package \verb"RelValAnalysis" (available on \verb"CRAN") written by the author.
\subsection{Domination on compacts}
In \cite{PW14} pseudo-arbitrages with respect to the market portfolio are characterized in terms of a property called {\it multiplicative cyclical monotonicity} (MCM). It is a variant of cyclical monotonicity in convex analysis (see \cite[Section 24]{R70}) and is equivalent to $c$-cyclical monotonicity in optimal transport for a special cost function. Intuitively, this property requires that the portfolio outperforms the market portfolio whenever the market weight goes through a cycle. It is natural to extend the definition as follows.
\begin{definition}[Relative multiplicative cyclical monotonicity - RMCM]
Let $\pi$ and $\tau$ be portfolios. We say that $\tau$ satisfies multiplicative cyclical monotonicity relative to $\pi$ if over any discrete cycle
\[
\mu(0), \mu(1), ..., \mu(m), \mu(m+1) = \mu(0)
\]
in $\Delta^{(n)}$, we have
\begin{equation} \label{eqn:rmcm}
V_{\tau}(m + 1) \geq V_{\pi}(m + 1).
\end{equation}
\end{definition}
In \cite{PW14} we proved that functionally generated portfolios are characterized by the MCM property relative to the market portfolio.
\begin{proposition} \cite[Proposition 4]{PW14} \label{prop:MCMfgp}
A portfolio satisfies MCM relative to the market portfolio if and only if it is generated by a positive concave function.
\end{proposition}
For an arbitrary functionally generated benchmark portfolio, we can generalize Proposition \ref{prop:MCMfgp} as follows. This result provides equivalent formulations of the partial order $\succeq$ that are easier to work with. The proof is analogous to those of Proposition 4 and Theorem 1 of \cite{PW14}.
\begin{theorem} \label{prop:MCM}
Let $\pi$ be a portfolio generated by a concave function $\Phi: \Delta^{(n)} \rightarrow (0, \infty)$, and let $\tau$ be a portfolio. The following statements are equivalent.
\begin{enumerate}
\item[(i)] $\tau$ dominates $\pi$ on compacts, i.e., $\tau \succeq \pi$.
\item[(ii)] $\tau$ satisfies MCM relative to $\pi$.
\item[(iii)] $\tau$ is generated by a concave function $\Psi$, and the L-divergence $T_{\tau}\left(\cdot\mid \cdot\right)$ of $(\tau, \Psi)$ dominates $T_{\pi}\left(\cdot\mid \cdot\right)$ of $(\pi, \Phi)$ in the sense that
\begin{equation} \label{eq:divergenceineq}
T_{\tau}\left(q\mid p\right) \geq T_{\pi}\left(q\mid p\right)
\end{equation}
for all $p, q \in \Delta^{(n)}$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) $\Rightarrow$ (ii): Suppose $\tau$ dominates $\pi$ on compacts. If $\tau$ does not satisfy MCM relative to $\pi$, we can find a discrete cycle $\{\mu(t)\}_{t = 0}^{m + 1}$ such that $\eta := V_{\tau}(m + 1) / V_{\pi}(m + 1) < 1$. Consider the market weight sequence which goes over this cycle again and again, i.e., $\mu(t) = \mu(t + (m+1))$ for all $t$. Then
\[
\frac{V_{\tau}(k(m + 1))}{V_{\pi}(k(m + 1))} = \eta^k
\]
for all $k \geq 0$ and the ratio tends to $0$ as $k \rightarrow \infty$. This contradicts the hypothesis $\tau \succeq \pi$. Thus if $\tau$ dominates $\pi$ on compacts then $\tau$ satisfies MCM relative to $\pi$.
\medskip
(ii) $\Rightarrow$ (iii): Suppose $\tau$ satisfies MCM relative to $\pi$. Since $V_{\mu}(\cdot) \equiv 1$ and $\pi$ satisfies MCM relative to the market portfolio (by Proposition \ref{prop:MCMfgp}), $\tau$ satisfies MCM relative to the market portfolio as well. By Proposition \ref{prop:MCMfgp} again $\tau$ has a generating function $\Psi$. To prove \eqref{eq:divergenceineq}, let $p, q \in \Delta^{(n)}$ with $p \neq q$. Let $\{q = \mu(1), ..., \mu(m), \mu(m+1) = p\}$ be a partition of the line segment $[q, p]$. Then if $\mu(0) = p$, $\{\mu(k)\}_{k = 0}^{m+1}$ is a cycle which starts at $p$, jumps to $q$ and then returns to $p$ along the partition. Then the RMCM inequality \eqref{eqn:rmcm} implies
\begin{equation} \label{eq:MCM}
\begin{split}
& \left( 1 + \left\langle \frac{\tau(p)}{p}, q - p \right\rangle \right) \prod_{k = 1}^m \left( 1 + \left\langle \frac{\tau(\mu(k))}{\mu(k)}, \mu(k+1) - \mu(k) \right\rangle \right) \\
& \geq \left( 1 + \left\langle \frac{\pi(p)}{p}, q - p \right\rangle \right) \prod_{k = 1}^m \left( 1 + \left\langle \frac{\pi(\mu(k))}{\mu(k)}, \mu(k+1) - \mu(k) \right\rangle \right).
\end{split}
\end{equation}
Taking log on both sides, we have
\begin{equation*}
\begin{split}
& \log\left( 1 + \left\langle \frac{\tau(p)}{p}, q - p \right\rangle \right) + \sum_{k = 1}^m \log \left( 1 + \left\langle \frac{\tau(\mu(k))}{\mu(k)}, \mu(k+1) - \mu(k) \right\rangle \right) \\
& \geq \log \left( 1 + \left\langle \frac{\pi(p)}{p}, q - p \right\rangle \right) + \sum_{k = 1}^m \log \left( 1 + \left\langle \frac{\pi(\mu(k))}{\mu(k)}, \mu(k+1) - \mu(k) \right\rangle \right).
\end{split}
\end{equation*}
By the fundamental theorem of calculus for concave functions and Taylor approximation, we can choose a sequence of partitions with mesh size going to zero, along which
\begin{equation*}
\begin{split}
\sum_{k = 1}^m \log \left( 1 + \left\langle \frac{\pi(\mu(k))}{\mu(k)}, \mu(k+1) - \mu(k) \right\rangle \right) &\rightarrow \int_{\gamma} \frac{\pi}{\mu} \mathrm{d}\mu = \log \frac{\Phi(p)}{\Phi(q)},\\
\sum_{k = 1}^m \log \left( 1 + \left\langle \frac{\tau(\mu(k))}{\mu(k)}, \mu(k+1) - \mu(k) \right\rangle \right)&\rightarrow \int_{\gamma} \frac{\tau}{\mu} \mathrm{d}\mu = \log \frac{\Psi(p)}{\Psi(q)},
\end{split}
\end{equation*}
where $\gamma$ is the line segment from $q$ to $p$. Taking the corresponding limit in \eqref{eq:MCM}, we obtain the desired inequality \eqref{eq:divergenceineq}.
\medskip
(iii) $\Rightarrow$ (i): Let $\{\mu(t)\}_{t \geq 0}$ be any market weight sequence. By Lemma \ref{lem:FernholzDecomp} we can write
\[
\log \frac{V_{\tau}(t)}{V_{\pi}(t)} = \log \frac{\Psi(\mu(t)) / \Psi(\mu(0))}{\Phi(\mu(t)) / \Phi(\mu(0))} + \left(A_{\tau}(t) - A_{\pi}(t)\right),
\]
where $A_{\tau}$ and $A_{\pi}$ are the drift processes of $\tau$ and $\pi$ respectively. By (iii), $A_{\tau}(t) - A_{\pi}(t)$ is non-decreasing in $t$. Since $\log \frac{\Psi(\mu(t)) / \Psi(\mu(0))}{\Phi(\mu(t)) / \Phi(\mu(0))}$ is bounded as long as $\mu(t)$ stays within a compact subset of $\Delta^{(n)}$, $\tau$ dominates $\pi$ on compacts.
\qed\end{proof}
Theorem \ref{prop:MCM} reduces the study of the partial order $\tau \succeq \pi$ to comparing the relative concavities of generating functions, where concavity is measured by the $L$-divergence. In this paper we focus on generating functions that are twice continuously differentiable. Then the infinitesimal version of \eqref{eq:divergenceineq} leads to second order differential inequalities.
\begin{definition}[Drift quadratic form] \label{def:driftform}
Let $(\pi, \Phi) \in {\mathcal{FG}}^2$. Its drift quadratic form, denoted by both $H_{\pi}$ and $H_{\Phi}$, is defined by
\[
H_{\pi}(p)(v, v) := \frac{-1}{2\Phi(p)} {\mathrm{Hess}} \nobreak\hspace{.16667em plus .08333em} \Phi(p)(v, v), \quad p \in \Delta^{(n)}, v \in T\Delta^{(n)}.
\]
Here ${\mathrm{Hess}} \nobreak\hspace{.16667em plus .08333em} \Phi$ is the Hessian of $\Phi$ regarded as a quadratic form. By definition, it is given by
\begin{equation} \label{eqn:Hessian}
{\mathrm{Hess}} \nobreak\hspace{.16667em plus .08333em} \Phi(p)(v, v) = \left.\frac{\mathrm{d}^2}{\mathrm{d}t^2} \Phi(p + tv) \right|_{t = 0}.
\end{equation}
\end{definition}
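The Taylor approximation \eqref{eqn:taylor} below says that $T_{\pi}(p + tv \mid p) \approx t^2 H_{\pi}(p)(v, v)$ for small $t$. This can be seen numerically; the sketch (an illustration, not from the paper, with arbitrarily chosen $p$, $v$ and step sizes) uses the geometric mean, whose Hessian along $v$ is computed by a second central difference.

```python
import numpy as np

def phi(p):   # geometric mean, generating the equal-weighted portfolio
    return np.prod(p) ** (1.0 / len(p))

def pi(p):    # equal-weighted portfolio
    return np.full(len(p), 1.0 / len(p))

def T(q, p):  # L-divergence (eqn discreteenergy)
    return np.log(1.0 + np.dot(pi(p) / p, q - p)) - np.log(phi(q) / phi(p))

def H(p, v, h=1e-4):
    # drift quadratic form -Hess Phi(p)(v,v) / (2 Phi(p)),
    # with the Hessian along v from a second central difference
    hess_vv = (phi(p + h*v) - 2*phi(p) + phi(p - h*v)) / h**2
    return -hess_vv / (2.0 * phi(p))

p = np.array([0.5, 0.3, 0.2])
v = np.array([1.0, -0.5, -0.5])  # tangent vector: coordinates sum to zero
t = 1e-3
# Taylor: T(p + t v | p) = t^2 H(p)(v, v) + o(t^2)
print(abs(T(p + t*v, p) / t**2 - H(p, v)) < 1e-2)  # True
```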
\begin{lemma} \label{lem:drift}
Let $(\pi, \Phi), (\tau, \Psi) \in {\mathcal{FG}}^2$, and let $T_{\pi}$ and $T_{\tau}$ be their corresponding L-divergences. If $\tau \succeq \pi$ and therefore $T_{\tau}\left(q \mid p\right) \geq T_{\pi}\left(q \mid p\right)$ for all $p, q \in \Delta^{(n)}$, then $H_{\tau} \geq H_{\pi}$ in the sense that
\begin{equation} \label{eqn:driftineq}
H_{\tau}(p)(v, v) \geq H_{\pi}(p)(v, v)
\end{equation}
for all $p \in \Delta^{(n)}$ and $v \in T\Delta^{(n)}$.
\end{lemma}
\begin{proof}
The lemma follows immediately from the Taylor approximation
\begin{equation} \label{eqn:taylor}
T_{\pi} \left(p + tv \mid p \right) = \frac{-1}{2\Phi(p)}{\mathrm{Hess}} \nobreak\hspace{.16667em plus .08333em} \Phi(p)(tv, tv) + o\left(t^2\right),
\end{equation}
where $p \in \Delta^{(n)}$, $v$ is a tangent vector, and $t \in {\Bbb R}$ is small.
\qed\end{proof}
As a consequence of Lemma \ref{lem:drift}, in order to show that a portfolio $\pi \in {\mathcal{FG}}^2$ is maximal in ${\mathcal{FG}}^2$, it is enough to show that its drift quadratic form $H_{\pi}$ is not dominated (in the sense of \eqref{eqn:driftineq}) by that of some other portfolio. This is the approach we use in Section \ref{sec:concavity} to prove Theorem \ref{thm:main}. Simple examples show, however, that $H_{\tau} \geq H_{\pi}$ does not imply $T_{\tau} \geq T_{\pi}$.
\begin{example}[Diversity-weighted portfolio]
For $0 < r < 1$, the diversity-weighted portfolio $\pi$ introduced at the beginning of this section is generated by the function
\[
\Phi(p) = \left(\sum_{j = 1}^n p_j^r\right)^{\frac{1}{r}}.
\]
It is easy to show that $\Phi$ is bounded below by $1$. Let $\tau$ be the portfolio generated by $\Psi := \Phi - 1$. Then it can be shown that $\tau \succeq \pi$. To see this, write the L-divergence \eqref{eqn:discreteenergy} in the form
\begin{equation} \label{eqn:divergence2}
T_{\pi}\left(q \mid p\right) = \log \frac{\Phi(p) + D_{q - p}\Phi(p)}{\Phi(q)}, \quad p, q \in \Delta^{(n)}.
\end{equation}
Then
\[
T_{\tau}\left(q \mid p\right) = \log \frac{\left(\Phi(p) - 1\right) + D_{q - p}\Phi(p)}{\Phi(q) - 1} \geq T_{\pi}\left(q \mid p\right).
\]
From \eqref{eqn:divergence2}, we can show that for a portfolio $(\pi, \Phi)$ to be maximal in ${\mathcal{FG}}^2$, it is necessary that the continuous extension of $\Phi$ to the closure $\overline{\Delta^{(n)}}$ (which exists by \cite[Theorem 10.3]{R70}) vanishes at all the vertices $e(1)$, ..., $e(n)$ (because otherwise we can subtract an affine function from $\Phi$ and make $T$ larger). However this condition is not sufficient for $\pi$ to be maximal in ${\mathcal{FG}}^2$.
\end{example}
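The domination $T_{\tau} \geq T_{\pi}$ in the example above can also be observed numerically. The sketch below (an illustration, not from the paper) evaluates both L-divergences through the form \eqref{eqn:divergence2}, computing the directional derivative $D_{q-p}\Phi(p)$ by central differences; the sample size and tolerance are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
r = 0.5

def phi(p):   # diversity-weighted generating function, 0 < r < 1
    return np.sum(p**r) ** (1.0 / r)

def ddir(f, p, w, h=1e-6):
    # directional derivative D_w f(p) by central differences
    return (f(p + h*w) - f(p - h*w)) / (2*h)

def T(f, q, p):
    # L-divergence in the form (eqn divergence2): log[(f(p)+D_{q-p}f(p))/f(q)]
    return np.log((f(p) + ddir(f, p, q - p)) / f(q))

psi = lambda p: phi(p) - 1.0   # Phi >= 1 on the simplex, so Psi > 0 and concave

ok = True
for _ in range(2000):
    p, q = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))
    ok = ok and T(psi, q, p) >= T(phi, q, p) - 1e-8
print(ok)  # True: the portfolio generated by Phi - 1 dominates
```

The inequality reduces to $\frac{A-1}{B-1} \geq \frac{A}{B}$ with $A = \Phi(p) + D_{q-p}\Phi(p)$ and $B = \Phi(q)$, which holds because $A \geq B$ by concavity of $\Phi$.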
\section{Relative concavity and maximal portfolios} \label{sec:concavity}
\subsection{Two asset case}
In this section we study the maximal portfolios in ${\mathcal{FG}}^2$ and prove Theorem \ref{thm:main}. To illustrate the ideas involved we first give a proof of the maximality of the equal-weighted portfolio for $n = 2$. This result is the starting point of this paper.
\begin{proposition} \label{prop:equalweight}
For $n = 2$, the equal-weighted portfolio $\pi \equiv \left(\frac{1}{2}, \frac{1}{2}\right)$ generated by the geometric mean $\Phi(p) = \sqrt{p_1p_2}$ is maximal in ${\mathcal{FG}}^2$.
\end{proposition}
\begin{proof}
Let $(\tau, \Psi) \in {\mathcal{FG}}^2$ be a portfolio which dominates $(\pi, \Phi)$ on compacts. Define $u(x) = \Phi(x, 1 - x) = \sqrt{x(1 - x)}$ and let $v(x) = \Psi(x, 1 - x)$, $x \in (0, 1)$. Then $u$ and $v$ are positive $C^2$ concave functions on $(0, 1)$. By Theorem \ref{prop:MCM} and Lemma \ref{lem:drift}, the drift quadratic form of $\tau$ dominates that of $\pi$. Using \eqref{eqn:Hessian}, we have the differential inequality
\begin{equation} \label{eqn:n=2domination}
\frac{-v''(x)}{v(x)} \geq \frac{-u''(x)}{u(x)} = \frac{1}{4\left(x(1 - x)\right)^2}, \quad x \in (0, 1).
\end{equation}
We claim that $v$ also generates the equal-weighted portfolio, and so $\tau = \pi$.
We will use a transformation which amounts to a change of num\'{e}raire using $y = \log \frac{x}{1 - x}$. See the binary tree model in \cite[Section 4]{PW13} for the motivation of this transformation and related results. Define a function $\tau_1: (0, 1) \rightarrow [0, 1]$ by
\begin{equation} \label{eqn:n=2weight}
\tau_1(x) = x + x(1 - x) \frac{v'(x)}{v(x)} = x \left[1 + (1 - x) (\log v)'(x)\right].
\end{equation}
By \eqref{eqn:fgweight}, this is the portfolio weight of stock $1$ generated by $v$ and $\tau_1$ takes value in $[0, 1]$. Let $y = \log \frac{x}{1 - x}$, so $x = \frac{e^y}{1 + e^y}$. Define $q: {\Bbb R} \rightarrow [0, 1]$ by
\[
q(y) = \tau_1(x) = \frac{e^y}{1 + e^y} + \frac{e^y}{(1 + e^y)^2} \frac{v'(x)}{v(x)}, \quad x = \frac{e^y}{1 + e^y}, \quad y \in {\Bbb R}.
\]
For the equal-weighted portfolio the corresponding portfolio weight function is identically $\frac{1}{2}$. It follows from a straightforward computation that
\[
q(y)(1 - q(y)) - q'(y) = \frac{-e^{2y}}{(1 + e^y)^4} \frac{v''(x)}{v(x)}.
\]
Now \eqref{eqn:n=2domination} can be rewritten in the form
\begin{equation} \label{eqn:transformeddrift}
q(y)(1 - q(y)) - q'(y) \geq \frac{1}{4}, \quad y \in {\Bbb R}.
\end{equation}
The proof is then completed by the following elementary result.
\qed\end{proof}
\begin{lemma} \label{lem:diffeqn}
Suppose $q: {\Bbb R} \rightarrow [0, 1]$ is differentiable and $q(1 - q) - q' \geq 1/4$ on ${\Bbb R}$. Then $q \equiv 1/2$.
\end{lemma}
\begin{proof}
Since $0 \leq q(y) \leq 1$, we have
\[
q' \leq q(1 - q) - \frac{1}{4} \leq \frac{1}{4} - \frac{1}{4} = 0,
\]
so $q$ is non-increasing. If $q(y_0) = q_0 < \frac{1}{2}$ for some $y_0$, then on $y \in [y_0, \infty)$, $q$ must satisfy the differential inequality
\[
q'(y) \leq q_0(1 - q_0) - \frac{1}{4} < 0,
\]
which contradicts the fact that $q(y) \geq 0$. Similarly, if $q(y_0) = q_0 > \frac{1}{2}$ for some $y_0$, the same inequality is satisfied on $(-\infty, y_0]$, again a contradiction. Thus we get $q(y) \equiv \frac{1}{2}$ for all $y \in {\Bbb R}$.
\qed\end{proof}
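The change-of-variable identity used in the proof of Proposition \ref{prop:equalweight}, namely $q(1-q) - q' = \frac{-e^{2y}}{(1+e^y)^4}\frac{v''(x)}{v(x)}$, can be verified numerically for any smooth $v$. The sketch below (an illustration, not from the paper) uses the arbitrarily chosen concave function $v(x) = x(1-x)$ and differentiates $q$ in $y$ by central differences.

```python
import numpy as np

def x_of(y):
    # inverse of the change of variable y = log(x/(1-x))
    return np.exp(y) / (1.0 + np.exp(y))

v   = lambda x: x * (1.0 - x)   # a sample positive concave function on (0,1)
dv  = lambda x: 1.0 - 2.0 * x
d2v = lambda x: -2.0

def q(y):
    # portfolio weight of stock 1 generated by v (eqn n=2weight), in y-coordinates
    x = x_of(y)
    return x + x * (1.0 - x) * dv(x) / v(x)

ok = True
for y in np.linspace(-3, 3, 61):
    x = x_of(y)
    qprime = (q(y + 1e-6) - q(y - 1e-6)) / 2e-6     # q'(y) by central difference
    lhs = q(y) * (1 - q(y)) - qprime
    rhs = -np.exp(2*y) / (1 + np.exp(y))**4 * d2v(x) / v(x)
    ok = ok and abs(lhs - rhs) < 1e-6
print(ok)  # True: q(1-q) - q' = -e^{2y}/(1+e^y)^4 * v''/v along the grid
```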
The main idea of the proof of Proposition \ref{prop:equalweight} is that for a portfolio to dominate the equal-weighted portfolio $\pi$ on compacts, it must be more aggressive than $\pi$ {\it everywhere} on the simplex. This means buying more and more of the underperforming stock at a rate fast enough to satisfy \eqref{eqn:transformeddrift}, but this is impossible to sustain up to the boundary of the simplex. While there is a multi-dimensional analogue of the differential inequality \eqref{eqn:transformeddrift} (see \cite[Theorem 9]{PW14}), we are unable to extend this proof to the multi-asset case since the market and portfolio weights can move in many directions. Instead, we will work with portfolio generating functions and use the simple but powerful tools of convex analysis.
\subsection{Main result} \label{subsec:relative}
Before we give the proof of Theorem \ref{thm:main} we note that the integral condition \eqref{eqn:integralcondition} is sufficient to capture many important examples. The proof is an exercise in elementary calculus and is left to the reader.
\begin{lemma}
The following portfolios satisfy \eqref{eqn:integralcondition}.
\begin{enumerate}
\item[(i)] The equal-weighted portfolio $\pi \equiv \left(\frac{1}{n}, ..., \frac{1}{n}\right)$ generated by the geometric mean $\Phi(p) = \left(p_1 \cdots p_n\right)^{\frac{1}{n}}$.
\item[(ii)] The entropy-weighted portfolio $\pi_i = -(p_i \log p_i) / \Phi(p)$ generated by the Shannon entropy $\Phi(p) = -\sum_{j = 1}^n p_j \log p_j$.
\end{enumerate}
\end{lemma}
The main ingredient of the proof of Theorem \ref{thm:main} is the following ingenious observation taken from \cite{CDO07} and \cite[Lemma 2]{CDOS09} (it is called the relative convexity lemma in these references). It can be proved by direct differentiation.
\begin{lemma}[Relative concavity lemma] \cite{CDO07} \label{lem:relativeconcavity}
Let $-\infty < a < b \leq \infty$ and $c, C: [a, b) \rightarrow {\Bbb R}$ be continuous. Suppose $u, v: [a, b) \rightarrow (0, \infty)$ are $C^2$ and satisfy the differential equations
\begin{equation*}
\begin{split}
u''(x) + c(x)u(x) &= 0, \quad x \in [a, b), \\
v''(x) + C(x)v(x) &= 0, \quad x \in [a, b).
\end{split}
\end{equation*}
Define $F: [a, b) \rightarrow [0, \infty)$ by
\[
F(x) = \int_a^x \frac{1}{u(t)^2} \mathrm{d}t, \quad x \in [a, b).
\]
Let $G$ be the inverse of $F$ defined on $[0, \ell)$, where $\ell = \lim_{x \uparrow b} F(x)$. Then the function
\[
w(y) := \frac{v(G(y))}{u(G(y))}
\]
defined on $[0, \ell)$ satisfies the differential equation
\[
w''(y) = -(C(x) - c(x))u(x)^4w(y), \quad 0 \leq y < \ell, \quad x = G(y).
\]
In particular, if $C(x) \geq c(x)$ on $[a, b)$, then $w$ is concave on $[0, \ell)$.
\end{lemma}
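The relative concavity lemma can be tested on an explicit pair of equations. The sketch below (an illustration, not from the references) takes $u(x) = e^x$ (so $c \equiv -1$) and $v(x) = \cos x$ on $[0, \pi/2)$ (so $C \equiv 1 \geq c$), for which $F$ and $G$ have closed forms, and checks the formula for $w''$ by a second central difference.

```python
import numpy as np

# u'' + c u = 0 with u = e^x (c = -1);  v'' + C v = 0 with v = cos x (C = 1 >= c)
u = lambda x: np.exp(x)
v = lambda x: np.cos(x)
c, C = -1.0, 1.0

# F(x) = int_0^x u(t)^{-2} dt = (1 - e^{-2x})/2, with inverse G on [0, l)
F = lambda x: (1.0 - np.exp(-2.0*x)) / 2.0
G = lambda y: -0.5 * np.log(1.0 - 2.0*y)

w = lambda y: v(G(y)) / u(G(y))

ok = True
h = 1e-5
for y in np.linspace(0.05, 0.4, 30):         # inside [0, l), l = (1 - e^{-pi})/2
    x = G(y)
    w2 = (w(y + h) - 2*w(y) + w(y - h)) / h**2   # second central difference
    pred = -(C - c) * u(x)**4 * w(y)             # the lemma's formula for w''
    ok = ok and abs(w2 - pred) < 1e-3 and w2 <= 1e-6
print(ok)  # True: w'' matches -(C - c) u^4 w, and w is concave since C >= c
```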
We also need some convex analytic properties of functionally generated portfolios.
\begin{lemma} \label{lem:FGconvex}
Let $\pi^{(1)}, \pi^{(2)} \in {\mathcal{FG}}$ be generated by $\Phi^{(1)}$ and $\Phi^{(2)}$ respectively, and $\lambda \in [0, 1]$. Then the portfolio given by the weighted average
\[
\pi := \lambda \pi^{(1)} + (1 - \lambda) \pi^{(2)}
\]
belongs to ${\mathcal{FG}}$. Indeed, $\pi$ is generated by the geometric mean
\[
\Phi := \left( \Phi^{(1)} \right)^{\lambda} \left( \Phi^{(2)} \right)^{1 - \lambda}
\]
of the two generating functions.
\end{lemma}
\begin{proof}
For $C^2$ generating functions this result is stated in \cite[Page 50]{F02}. The same is true in the general case where the generating functions are not necessarily smooth. To prove this, we need to check that $\pi = \lambda \pi^{(1)} + (1 - \lambda) \pi^{(2)}$ satisfies the defining inequality \eqref{eqn:superdiff}. This is an easy consequence of the AM-GM inequality and the proof is omitted. \qed
\end{proof}
\begin{lemma} \label{lem:Driftconcave}
The L-divergence and the drift quadratic form are concave in the portfolio weights in the following sense. Let $(\pi^{(1)}, \Phi^{(1)}), (\pi^{(2)}, \Phi^{(2)}) \in {\mathcal{FG}}$. For $\lambda \in [0, 1]$, let $\pi = \lambda \pi^{(1)} + (1 - \lambda) \pi^{(2)}$ and let $\Phi = \left(\Phi^{(1)}\right)^{\lambda} \left( \Phi^{(2)} \right)^{1 - \lambda}$ be the generating function of $\pi$. Let $T$, $T^{(1)}$ and $T^{(2)}$ be the L-divergences of $(\pi, \Phi)$, $(\pi^{(1)}, \Phi^{(1)})$ and $(\pi^{(2)}, \Phi^{(2)})$ respectively. Then
\begin{equation} \label{eqn:concavediscrete}
T\left(q \mid p \right) \geq \lambda T^{(1)}\left(q \mid p \right) + (1 - \lambda) T^{(2)}\left(q \mid p \right), \quad p, q \in \Delta^{(n)}.
\end{equation}
If $\Phi^{(1)}$ and $\Phi^{(2)}$ are $C^2$, then $H_{\pi} \geq \lambda H_{\pi^{(1)}} + (1 - \lambda) H_{\pi^{(2)}}$ in the sense that
\begin{equation} \label{eqn:concavecont}
H_{\pi}(p)(v, v) \geq \lambda H_{\pi^{(1)}}(p)(v, v) + (1 - \lambda) H_{\pi^{(2)}}(p)(v, v)
\end{equation}
for all $p \in \Delta^{(n)}$ and $v \in T\Delta^{(n)}$.
\end{lemma}
\begin{proof}
To prove \eqref{eqn:concavediscrete} we write the L-divergence $T\left(q \mid p \right)$ of a functionally generated portfolio $(\pi, \Phi)$ in the form
\[
T\left(q \mid p \right) = \log \left(1 + \left\langle \frac{\pi(p)}{p}, q - p \right\rangle\right) - I_{\pi}(\gamma),
\]
where $I_{\pi}(\gamma) = \int_{\gamma} \frac{\pi}{p} \mathrm{d}p$ is the line integral of the weight ratio along the line segment from $p$ to $q$ (see \eqref{eqn:lineintegral}). Since the line integral is linear in $\pi$ and the logarithm is concave, we see that $T\left(q \mid p \right)$ is concave in $\pi$. The statement for the drift quadratic form follows from the Taylor approximation \eqref{eqn:taylor}.
\qed\end{proof}
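The concavity inequality \eqref{eqn:concavediscrete} can be sampled numerically. The sketch below (an illustration, not from the paper) blends the equal-weighted and entropy-weighted portfolios of Table \ref{tab:benchmark} with an arbitrarily chosen $\lambda$; by Lemma \ref{lem:FGconvex} the blend is generated by the geometric mean of the two generating functions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam = 3, 0.4

phi1 = lambda p: np.prod(p) ** (1.0/n)           # geometric mean
pi1  = lambda p: np.full(n, 1.0/n)               # equal-weighted
phi2 = lambda p: -np.sum(p * np.log(p))          # Shannon entropy
pi2  = lambda p: -p * np.log(p) / phi2(p)        # entropy-weighted

phi = lambda p: phi1(p)**lam * phi2(p)**(1-lam)  # generates the blend
pi  = lambda p: lam * pi1(p) + (1-lam) * pi2(p)

def T(pi_f, phi_f, q, p):
    # L-divergence of a functionally generated pair
    return np.log(1 + np.dot(pi_f(p)/p, q - p)) - np.log(phi_f(q)/phi_f(p))

ok = True
for _ in range(2000):
    p, q = rng.dirichlet(np.ones(n)), rng.dirichlet(np.ones(n))
    lhs = T(pi, phi, q, p)
    rhs = lam * T(pi1, phi1, q, p) + (1-lam) * T(pi2, phi2, q, p)
    ok = ok and lhs >= rhs - 1e-12
print(ok)  # True: the L-divergence is concave in the portfolio weights
```

Since $\log \Phi = \lambda \log \Phi^{(1)} + (1-\lambda)\log \Phi^{(2)}$, the generating-function terms cancel exactly and the inequality reduces to concavity of the logarithm applied to the averaged weight ratios.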
We are now ready to prove Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
Let $\tau: \Delta^{(n)} \rightarrow \overline{\Delta^{(n)}}$ be a $C^1$ portfolio which dominates $\pi$ on compacts. We want to prove that $\tau = \pi$. By Theorem \ref{prop:MCM}, $\tau$ is generated by a concave function $\Psi: \Delta^{(n)} \rightarrow (0, \infty)$. Since $\tau$ is $C^1$, by \cite[Proposition 5(iii)]{PW14} $\Psi$ is $C^2$, so $\tau \in {\mathcal{FG}}^2$. Thus we may rephrase Theorem \ref{thm:main} by saying that $\pi$ is maximal in ${\mathcal{FG}}^2$.
Let $\Psi$ be a generating function of $\tau$. By scaling, we may assume that $\Psi(\overline{e}) = \Phi(\overline{e})$. We will prove that $\Psi$ equals $\Phi$ identically, so $\Psi$ generates $\pi$ and $\tau = \pi$. We divide the proof into the following steps.
\bigskip
\noindent
{\it Step 1 (Symmetrization).} Let $S_n$ be the set of permutations of $\{1, ..., n\}$. For $\sigma \in S_n$, define $\Psi_{\sigma}$ by relabelling the coordinates, i.e.,
\[
\Psi_{\sigma}(p) = \Psi(p_{\sigma(1)}, ..., p_{\sigma(n)}).
\]
Since $\tau \succeq \pi$, by Lemma \ref{lem:drift} (and relabeling the coordinates) we have $H_{\Psi_{\sigma}} \geq H_{\Phi_{\sigma}}$ for all $\sigma \in S_n$. But $\Phi$ is a measure of diversity, so $\Phi_{\sigma} = \Phi$ by symmetry and we have $H_{\Psi_{\sigma}} \geq H_{\Phi}$ for all $\sigma \in S_n$. Let
\[
\widetilde{\Psi} = \prod_{\sigma \in S_n} \left(\Psi_{\sigma}\right)^{\frac{1}{n!}}
\]
be the symmetrization of $\Psi$. By Lemma \ref{lem:FGconvex}, $\widetilde{\Psi}$ generates the symmetrized portfolio
\[
\widetilde{\tau}(p) = \frac{1}{n!} \sum_{\sigma \in S_n} \tau(p_{\sigma(1)}, ..., p_{\sigma(n)}), \quad p \in \Delta^{(n)}.
\]
By Lemma \ref{lem:Driftconcave}, we have
\begin{equation} \label{eqn:symmetrizedineq}
H_{\widetilde{\Psi}} \geq \frac{1}{n!} \sum_{\sigma \in S_n} H_{\Psi_{\sigma}} \geq H_{\Phi}.
\end{equation}
Thus $H_{\widetilde{\Psi}} \geq H_{\Phi}$. Clearly $\widetilde{\Psi}$ is a measure of diversity and by symmetry it achieves its maximum at $\overline{e}$.
\bigskip
\noindent
{\it Step 2 ($\widetilde{\Psi} \leq \Phi$).} We claim that $\widetilde{\Psi} \leq \Phi$ on $\Delta^{(n)}$. Let $p \in \Delta^{(n)}$ and consider the one-dimensional concave functions
\begin{equation} \label{eqn:uandv}
\begin{split}
u(t) &= \Phi((1 - t)\overline{e} + tp) \\
v(t) &= \widetilde{\Psi}((1 - t)\overline{e} + tp)
\end{split}
\end{equation}
defined on $[0, 1]$. We have $u(0) = v(0)$ and $u'(0) = v'(0) = 0$ since both $\Phi$ and $\widetilde{\Psi}$ achieve their maxima at $\overline{e}$. Since $H_{\widetilde{\Psi}} \geq H_{\Phi}$, we have
\[
\frac{-v''(t)}{v(t)} \geq \frac{-u''(t)}{u(t)}, \quad t \in [0, 1].
\]
By the relative concavity lemma (Lemma \ref{lem:relativeconcavity}),
\begin{equation} \label{eqn:w}
w(y) = \frac{v(G(y))}{u(G(y))}
\end{equation}
is a positive concave function on $[0, \ell]$, where $\ell = \int_0^1 \frac{1}{u(t)^2} \mathrm{d}t$, with $w(0) = 1$ and $w'(0) = 0$ (by the quotient rule). Note that $\ell < \infty$ as $\Phi$ is continuous and positive on the line segment $[\overline{e}, p] \subset \Delta^{(n)}$. Also, it is straightforward to see that in this case the relative concavity lemma can be applied to $[0, \ell]$ instead of $[0, \ell)$. This implies that $w$ is non-increasing and so $w(\ell) = \widetilde{\Psi}(p) / \Phi(p) \leq 1$.
\bigskip
\noindent
{\it Step 3 ($\widetilde{\Psi} \equiv \Phi$).} Let $Z = \{p \in \Delta^{(n)}: \widetilde{\Psi}(p) = \Phi(p)\}$ and we claim that $Z = \Delta^{(n)}$. Here we follow an idea in the proof of \cite[Theorem 3]{CDOS09}. Define $u$ and $v$ on $[0, 1)$ by \eqref{eqn:uandv} with $p$ replaced by $e(1)$. Then the function $w$ defined as in \eqref{eqn:w} is positive and concave on $[0, \infty)$ since the integral in \eqref{eqn:integralcondition} (which defines $\ell = \int_0^1 \frac{1}{u(t)^2} \mathrm{d}t$) diverges. Again $w$ satisfies $w(0) = 1$ and $w'(0) = 0$. But since $w$ is defined on an infinite interval, if $w'(y) < 0$ for some $y$, then $w$ must hit zero as $w'$ is non-increasing by concavity. This contradicts the positivity of $w$, and so $w$ is identically one on $[0, \infty)$. It follows that $\widetilde{\Psi} = \Phi$ on the line segment $[\overline{e}, e(1))$. By symmetry, $Z$ contains the segments $[\overline{e}, e(i))$ for all $i$.
\medskip
Next we show that the set $Z$ is convex. Let $p, q \in Z$. Again we consider the pair of functions
\begin{equation} \label{eqn:uandvpq}
\begin{split}
u(t) &= \Phi((1 - t)p + tq) \\
v(t) &= \widetilde{\Psi}((1 - t)p + tq)
\end{split}
\end{equation}
on $[0, 1]$. Let $\widetilde{w}(t) = \frac{v(t)}{u(t)}$, $t \in [0, 1]$. By the relative concavity lemma again, we know that $\widetilde{w}$ is concave after a reparameterization. But $\widetilde{w}(t) \leq 1$ by Step 2 and $\widetilde{w}$ equals one at the endpoints $0$ and $1$. By concavity, $\widetilde{w}$ is identically one on $[0, 1]$. Hence if $Z$ contains $p$ and $q$, it also contains the line segment $[p, q]$. Now $Z$ is a convex set containing $[\overline{e}, e(i))$ for all $i$. It is easy to see that $Z$ is then the simplex $\Delta^{(n)}$. Hence $\widetilde{\Psi}$ equals $\Phi$ identically.
\bigskip
\noindent
{\it Step 4 (Desymmetrization).} We have shown that $\widetilde{\Psi} \equiv \Phi$, and so $H_{\widetilde{\Psi}} = H_{\Phi}$. By \eqref{eqn:symmetrizedineq}, we have
\[
H_{\Phi} = H_{\widetilde{\Psi}} \geq \frac{1}{n!} \sum_{\sigma \in S_n} H_{\Psi_{\sigma}} \geq H_{\Phi}.
\]
Since $H_{\Psi_{\sigma}} \geq H_{\Phi}$ for each $\sigma \in S_n$, we have $H_{\Psi_{\sigma}} = H_{\Phi}$ for all $\sigma$. In particular, taking $\sigma$ to be the identity, we have $H_{\Psi} = H_{\Phi}$. It remains to show that $\Psi$ equals $\Phi$ identically (recall that we assume $\Psi(\overline{e}) = \Phi(\overline{e})$).
Fix $i \in \{1, ..., n\}$ and consider
\begin{equation*}
\begin{split}
u(t) &= \Phi((1 - t)\overline{e} + te(i)) \\
v(t) &= \Psi((1 - t)\overline{e} + te(i))
\end{split}
\end{equation*}
for $t \in [0, 1)$. By the argument in Step 3, if $\left(\frac{v}{u}\right)'(0) \leq 0$, the integral condition \eqref{eqn:integralcondition} implies that $v / u$ is identically one. So $\left(\frac{v}{u}\right)'(0) \leq 0$ implies $\left(\frac{v}{u}\right)'(0) = 0$. For $\sigma \in S_n$ let
\[
v_{\sigma}(t) = \Psi((1 - t)\overline{e} + te(\sigma(i))).
\]
Since $\widetilde{\Psi} = \Phi$, we have
\[
\prod_{\sigma \in S_n} \left(\frac{v_{\sigma}(t)}{u(t)}\right)^{\frac{1}{n!}} = 1.
\]
Taking logarithms on both sides and differentiating, we see that the average of the derivatives $\left(\frac{v_{\sigma}}{u}\right)'(0)$ over $\sigma \in S_n$ is $0$ (recall that $\Phi$ is symmetric). Since all these derivatives are non-negative by the above argument, they are in fact all $0$, and so $\Psi = \Phi$ on $[\overline{e}, e(i))$ for all $i$.
Since the vectors $e(i) - \overline{e}$ span the plane parallel to $\Delta^{(n)}$, the graphs of $\Psi$ and $\Phi$ have the same tangent plane at $\overline{e}$. Since $\Phi$ achieves its maximum at $\overline{e}$, we see that $\Psi$ achieves its maximum at $\overline{e}$ as well. Now we may apply the argument in Steps 2 and 3 to conclude that $\Psi$ equals $\Phi$ identically on $\Delta^{(n)}$. Thus $\tau = \pi$ and we have proved that $\pi$ is maximal in ${\mathcal{FG}}^2$.
\qed\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:main}]
Let $\tau$ be a $C^1$ portfolio not equal to $\pi$. By the maximality of $\pi$, it is not the case that $\tau \succeq \pi$. By Theorem \ref{prop:MCM}, $\tau$ does not satisfy MCM relative to $\pi$. Thus, there is a cycle $\{\mu(t)\}_{t = 0}^{m+1}$ (with $\mu(0) = \mu(m + 1)$) over which
\begin{equation} \label{eqn:badcycle}
\frac{V_{\tau}(m+1)}{V_{\pi}(m+1)} < 1.
\end{equation}
Consider, as in the proof of Theorem \ref{prop:MCM}, the market weight sequence which goes through this cycle again and again. Clearly $\{\mu(t)\}_{t \geq 0}$ takes values in a finite set $K$ which is compact. From \eqref{eqn:badcycle}, it is clear that $V_{\tau}(t) / V_{\pi}(t) \rightarrow 0$ as $t \rightarrow \infty$.
\qed\end{proof}
\subsection{Extension to continuous time} \label{sec:continuoustime}
We discuss briefly how Theorem \ref{thm:main} can be generalized to continuous time. In continuous time, we let the market weight process $\{\mu(t)\}_{t \geq 0}$ be a continuous semimartingale with state space $\Delta^{(n)}$. The market weight process of a portfolio $\pi$ satisfies the stochastic differential equation
\[
\frac{\mathrm{d}V_{\pi}(t)}{V_{\pi}(t)} = \sum_{i = 1}^n \pi_i(\mu(t)) \frac{\mathrm{d}\mu_i(t)}{\mu_i(t)}.
\]
Let $(\pi, \Phi), (\tau, \Psi) \in {\mathcal{FG}}^2$. Then we have the decomposition
\[
\log \frac{V_{\tau}(t)}{V_{\pi}(t)} = \log \frac{\Psi(\mu(t)) / \Psi(\mu(0))}{\Phi(\mu(t)) / \Phi(\mu(0))} + A(t),
\]
where the drift process takes the form $A(t) = A_{\tau}(t) - A_{\pi}(t)$,
\begin{equation} \label{eqn:qvariation}
A_{\tau}(t) = \int_0^t H_{\tau}(\mu(s))(\mathrm{d}\mu(s), \mathrm{d}\mu(s)),
\end{equation}
and the analogous definition holds for $A_{\pi}$. See \cite[Theorem 3.1.5]{F02}. In \eqref{eqn:qvariation} we use the intrinsic notation of \cite{EM89} for the quadratic variation of $\{\mu(t)\}$ with respect to the non-negative definite form $H_{\tau}$. It can be shown that $A(t)$ is non-decreasing almost surely for all continuous semimartingales $\{\mu(t)\}$ if and only if $A_{\tau} \geq A_{\pi}$. We may define the relation $\tau \succeq \pi$ (domination on compacts) in the same way as in Definition \ref{def:pseudoarbitrage}, except that we require for any continuous semimartingale $\{\mu(t)\}$ with values in $K$, \eqref{eqn:lowerbound} holds for all $t \geq 0$ almost surely. Using the results established, one can show in continuous time that $\pi$ is maximal in ${\mathcal{FG}}^2$ if it is generated by a measure of diversity satisfying \eqref{eqn:integralcondition}.
Moreover, in continuous time, \cite[Theorem 3]{CDOS09} shows that the integral condition \eqref{eqn:integralcondition} is also necessary for $(\pi, \Phi)$ to be maximal in ${\mathcal{FG}}^2$ when $n = 2$. Let $u(x) = \Phi(x, 1 - x)$. The idea is that if the integral converges, we can solve the initial value problem
\[
v''(x) + \left(\frac{-u''(x)}{u(x)} + s(x)\right)v(x) = 0, \quad x \in (0, 1),
\]
\[
v\left(\frac{1}{2}\right) = u\left(\frac{1}{2}\right), \quad v'\left(\frac{1}{2}\right) = u'\left(\frac{1}{2}\right) = 0,
\]
for some appropriately chosen function $s(x)$ such that $s(x) \geq 0$, $s(x) \not\equiv 0$ and $s$ is symmetric about $\frac{1}{2}$. Sturm's comparison theorem implies that the solution $v(x)$ is positive (and concave) on $(0, 1)$. Let $\Psi(p) = v(p_1)$ and let $\tau$ be the portfolio generated by $\Psi$. Then the corresponding portfolio $\tau$ is not equal to $\pi$ and dominates $\pi$ on compacts, so $\pi$ is not maximal in ${\mathcal{FG}}^2$.
\begin{problem}
Characterize the maximal portfolios of ${\mathcal{FG}}$.
\end{problem}
\section{Optimization of functionally generated portfolios} \label{sec:optimization}
\subsection{A shape-constrained optimization problem} \label{sec:optim2}
Consider the relative value process of a functionally generated portfolio. If we have a model for the market weight process $\{\mu(t)\}_{t \geq 0}$, a natural optimization problem is to maximize the expected growth rate of the drift process over some horizon. To this end, suppose we are given an {\it intensity measure} ${\Bbb P}$ of the increments $(\mu(t), \mu(t + 1))$ modeled as a Borel probability measure on $\Delta^{(n)} \times \Delta^{(n)}$. We assume that ${\Bbb P}$ is either discrete (with countably many masses) or absolutely continuous with respect to the measure $\nu := m \otimes m$ on $\Delta^{(n)} \times \Delta^{(n)}$, where $m$ is the surface measure of $\Delta^{(n)}$ in ${\Bbb R}^n$ (which should be thought of as the Lebesgue measure on $\Delta^{(n)}$). We will abbreviate this by simply saying ${\Bbb P}$ is absolutely continuous. For technical reasons, we assume that ${\Bbb P}$ is supported on $K \times K$ for some compact subset $K$ of $\Delta^{(n)}$.
Given the intensity measure ${\Bbb P}$, we consider the optimization problem
\begin{equation} \label{eqn:newoptim1}
\max_{(\pi, \Phi) \in {\mathcal{FG}}} \int T\left(q \mid p\right) \mathrm{d} {\Bbb P}.
\end{equation}
First we give some examples of the intensity measure.
\begin{example} \label{exm:markov}
Suppose $\{(\mu(t - 1), \mu(t))\}$ is an ergodic Markov chain on $K \times K$. We can take ${\Bbb P}$ to be the stationary distribution of $(\mu(t - 1), \mu(t))$. It is easy to see that an optimal portfolio in \eqref{eqn:newoptim1} maximizes the asymptotic growth rate $\lim_{t \rightarrow \infty} \frac{1}{t} \log V_{\pi}(t)$ of the relative value (the term $\frac{1}{t} \log \frac{\Phi(\mu(t))}{\Phi(\mu(0))}$ vanishes as $t \rightarrow \infty$). This portfolio can be regarded as a {\it growth optimal portfolio} (relative to the market portfolio) among the functionally generated portfolios.
\end{example}
\begin{example} \label{exm:green}
We model $\{\mu(t)\}_{t \geq 0}$ as a stochastic process. Let $K$ be a compact subset of $\Delta^{(n)}$ containing $\mu(0)$. Let $\tau$ be the first exit time of $K$, i.e.,
\[
\tau = \inf\{t \geq 0: \mu(t) \notin K\}.
\]
Consider the measure ${\Bbb G}$ on $K \times K$ defined by
\[
{\Bbb G}(A) := {\Bbb E} \left[\sum_{t = 1}^{\tau - 1} 1_{\{(\mu(t-1), \mu(t)) \in A\}} \right], \quad A \subset K \times K \text{ measurable}.
\]
If the process $\{(\mu(t-1), \mu(t))\}$ is Markovian, ${\Bbb G}$ is the {\it Green kernel} of the process killed at time $\tau$. Suppose ${\Bbb G}(K \times K) = {\Bbb E} (\tau - 1) < \infty$, i.e., the exit time has finite expectation. Then
\begin{equation*}
{\Bbb P}(\cdot) := \frac{1}{{\Bbb G}(K \times K)} {\Bbb G}(\cdot)
\end{equation*}
is a probability measure on $K \times K$. This intensity measure will be used in the empirical example in Section \ref{sec:empirical}.
\end{example}
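A Monte Carlo approximation of this normalized Green measure can be sketched as follows. The dynamics below (a Gaussian random walk in log-odds coordinates for a two-asset market) is a hypothetical stand-in chosen for illustration, not a model from this paper.

```python
import math, random

random.seed(1)

def in_K(p1, lo=0.1, hi=0.3):
    return lo <= p1 <= hi

def step(p1, vol=0.05):
    # hypothetical dynamics (not from the paper): Gaussian step in log-odds
    x = math.log(p1 / (1 - p1)) + random.gauss(0.0, vol)
    return 1.0 / (1.0 + math.exp(-x))

# accumulate transitions (mu(t-1), mu(t)) observed strictly before the exit time
pairs = []
for _ in range(200):          # independent paths, all started inside K
    prev = 0.18
    while True:
        cur = step(prev)
        if not in_K(cur):     # path killed at the first exit from K
            break
        pairs.append((prev, cur))
        prev = cur

# normalizing the empirical Green measure gives the intensity measure P
weights = [1.0 / len(pairs)] * len(pairs)
assert abs(sum(weights) - 1.0) < 1e-9
```

Each path contributes its transitions up to (but not including) the exit step, mirroring the sum $\sum_{t=1}^{\tau - 1}$ in the definition of ${\Bbb G}$.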
Note that Example \ref{exm:markov} deals with infinite horizon while Example \ref{exm:green} is concerned with a finite (but random) horizon. The optimization problem \eqref{eqn:newoptim1} is shape-constrained because the generating function is concave by definition. We will first study some theoretical properties of this abstract (unconstrained) optimization problem, and then focus on a discrete special case where numerical solutions are possible and further constraints are imposed. In contrast to classical portfolio selection theory where the portfolio weights are optimized period by period, in \eqref{eqn:newoptim1} we optimize the portfolio weights over a region simultaneously.
Throughout the development it is helpful to keep in mind the analogy between \eqref{eqn:newoptim1} and the maximum likelihood estimation of a log-concave density. In that context, we are given a random sample $X_1, ..., X_N$ from a log-concave density $f_0$ on ${\Bbb R}^d$ (i.e., $\log f_0$ is concave). The log-concave maximum likelihood estimate (MLE) $\widehat{f}$ is the solution to
\begin{equation} \label{eqn:MLE}
\max_f \sum_{j = 1}^N \log f(X_j),
\end{equation}
where $f$ ranges over all log-concave densities on ${\Bbb R}^d$. It can be shown that the MLE exists almost surely (when $N \geq d + 1$ and the support of $f_0$ has full dimension) and is unique; see \cite{CSS10} for precise statements of these results. We remark that \eqref{eqn:newoptim1} is more complicated than \eqref{eqn:MLE} because the portfolio weights correspond to selections of the superdifferential $\partial \log \Phi$, whereas \eqref{eqn:MLE} involves only the values of the density.
\subsection{Theoretical properties}
It is easy to check that \eqref{eqn:newoptim1} is a convex optimization problem since the L-divergence is concave in the portfolio weights (Lemma \ref{lem:Driftconcave}). First we show that \eqref{eqn:newoptim1} has an optimal solution and study in what sense the solution is unique.
\medskip
Given an intensity measure ${\Bbb P}$, it can be decomposed in the form
\begin{equation} \label{eqn:conditional}
{\Bbb P}(\mathrm{d}p\mathrm{d}q) = {\Bbb P}_1(\mathrm{d}p) {\Bbb P}_2(\mathrm{d}q | p),
\end{equation}
where ${\Bbb P}_1$ is the first marginal of ${\Bbb P}$ and ${\Bbb P}_2$ is the conditional distribution of the second variable given $p$. We will need a technical condition for ${\Bbb P}$ which allows jumps in all directions.
\begin{definition}[Support condition]
Let ${\Bbb P}$ be an absolutely continuous probability measure on $\Delta^{(n)} \times \Delta^{(n)}$ with the decomposition \eqref{eqn:conditional}. Write
\[
{\Bbb P}_1(\mathrm{d}p) = f(p) m(\mathrm{d}p),
\]
where $f(\cdot)$ is the density of ${\Bbb P}_1$ with respect to $m$. We say that ${\Bbb P}$ satisfies the support condition if for $m$-almost all $p$ for which $f(p) > 0$, for all $v \in T\Delta^{(n)}$, there exists $\lambda > 0$ such that $p + \lambda v$ belongs to the support of ${\Bbb P}_2(\cdot | p)$.
\end{definition}
We have the following result which is analogous to \cite[Theorem 1]{CSS10}.
\begin{theorem} \label{thm:optim}
Consider the optimization problem \eqref{eqn:newoptim1} where ${\Bbb P}$ is a discrete or absolutely continuous Borel probability measure on $\Delta^{(n)} \times \Delta^{(n)}$ supported on $K \times K$ with $K \subset \Delta^{(n)}$ compact.
\begin{enumerate}
\item[(i)] The problem has an optimal solution.
\item[(ii)] If $\pi^{(1)}$ and $\pi^{(2)}$ are optimal solutions, then
\begin{equation} \label{eqn:uniqueness}
\left\langle \frac{\pi^{(1)}(p)}{p}, q - p \right\rangle = \left\langle\frac{\pi^{(2)}(p)}{p}, q - p \right\rangle
\end{equation}
for ${\Bbb P}$-almost all $(p, q)$. In particular, if ${\Bbb P}(\mathrm{d}p\mathrm{d}q) = {\Bbb P}_1(\mathrm{d}p) {\Bbb P}_2(\mathrm{d}q | p)$ is absolutely continuous with ${\Bbb P}_1(\mathrm{d}p) = f(p) m(\mathrm{d}p)$ and satisfies the support condition, then $\pi^{(1)} = \pi^{(2)}$ $m$-almost everywhere on $\{p: f(p) > 0\}$.
\end{enumerate}
\end{theorem}
The proofs of Theorem \ref{thm:optim} and Theorem \ref{thm:optim2} below are given in Appendix \ref{sec:appendix}.
\medskip
Let ${\Bbb P}$ be an intensity measure. Suppose $\{{\Bbb P}_N\}_{N \geq 1}$ is a sequence of probability measures converging weakly to ${\Bbb P}$. By definition, this means that
\begin{equation*}
\lim_{N \rightarrow \infty} \int f \mathrm{d}{\Bbb P}_N = \int f \mathrm{d}{\Bbb P}
\end{equation*}
for all bounded continuous functions $f$ on $\Delta^{(n)} \times \Delta^{(n)}$. For example, one may sample i.i.d.~observations $\{(p(j), q(j))\}_{j = 1}^N$ from ${\Bbb P}$ and take ${\Bbb P}_N$ to be the empirical measure $\frac{1}{N} \sum_{j = 1}^N \delta_{(p(j), q(j))}$, where $\delta_{(p(j), q(j))}$ is the point mass at $(p(j), q(j))$. From the perspective of statistical inference, the optimal portfolio $(\widehat{\pi}^{(N)}, \widehat{\Phi}^{(N)})$ for ${\Bbb P}_N$ can be regarded as a point estimate of the optimal portfolio $(\pi, \Phi)$ for ${\Bbb P}$. The following result states that the estimator is consistent. See \cite[Theorem 4]{CS10} for an analogous statement in the context of log-concave density estimation.
\begin{theorem} \label{thm:optim2}
Let $(\pi, \Phi)$ be the optimal portfolio in problem \eqref{eqn:newoptim1} for ${\Bbb P}$, where ${\Bbb P}(\mathrm{d}p\mathrm{d}q) = {\Bbb P}_1(\mathrm{d}p) {\Bbb P}_2(\mathrm{d}q | p)$ is absolutely continuous with ${\Bbb P}_1(\mathrm{d}p) = f(p) m(\mathrm{d}p)$, supported on $K \times K$ with $K \subset \Delta^{(n)}$ compact, and satisfies the support condition. Let $\{{\Bbb P}_N\}$ be a sequence of discrete or absolutely continuous probability measures on $K \times K$ such that ${\Bbb P}_N \rightarrow {\Bbb P}$ weakly, and suppose $(\widehat{\pi}^{(N)}, \widehat{\Phi}^{(N)})$ is optimal for the measure ${\Bbb P}_N$, $N \geq 1$. Then $\widehat{\pi}^{(N)} \rightarrow \pi$ $m$-almost everywhere on $\{p: f(p) > 0\}$.
\end{theorem}
\subsection{Finite dimensional reduction}
Without further constraints, the optimal portfolio weights of \eqref{eqn:newoptim1} may be highly irregular. Now we restrict to the special case where
\begin{equation} \label{eqn:discreteP}
{\Bbb P} = \frac{1}{N} \sum_{j = 1}^N \delta_{(p(j), q(j))}
\end{equation}
is a discrete measure and $(p(j), q(j)) \in \Delta^{(n)} \times \Delta^{(n)}$ for $j = 1, ..., N$. This presents no great loss of generality because in practice the market weights have finite precision and we can choose the pairs $(p(j), q(j))$ to take values in a grid approximating $\Delta^{(n)} \times \Delta^{(n)}$. Moreover, from Theorem \ref{thm:optim2} we expect that when $N$ is large the optimal solution approximates that of the continuous counterpart. Consider the modified optimization problem
\begin{equation} \label{eqn:newoptim2}
\begin{aligned}
& \underset{(\pi, \Phi) \in {\mathcal{FG}}}{\text{maximize}}
& & \int T\left(q \mid p\right)\mathrm{d} {\Bbb P} \\
& \text{subject to}
& & (\pi(p(1)), ..., \pi(p(N))) \in C, \\
\end{aligned}
\end{equation}
where $C$ is a given closed convex subset of $\overline{\Delta^{(n)}}^N$. Some examples of $C$ are given in Table \ref{tab:constraints}, where each constraint is a cylinder set of the form $\{\pi(p(j)) \in C_j\}$ with $C_j$ a closed convex set of $\overline{\Delta^{(n)}}$. `Global' constraints on the weights can be imposed, see Section \ref{sec:empirical} for an example. It can be verified easily that the proof of Theorem \ref{thm:optim} goes through without changes with these constraints, so \eqref{eqn:newoptim2} has an optimal solution. Moreover, if $\pi^{(1)}$ and $\pi^{(2)}$ are optimal solutions, then
\[
\left\langle \frac{\pi^{(1)}(p(j))}{p(j)}, q(j) - p(j) \right\rangle = \left\langle\frac{\pi^{(2)}(p(j))}{p(j)}, q(j) - p(j) \right\rangle, \quad j = 1, ..., N.
\]
\begin{table}
\caption{Examples of additional constraints imposed for $p \in \{p(1), ..., p(N)\}$. The parameters may be given functions of $p$.}
\label{tab:constraints}
\begin{tabular}{ll}
\hline\noalign{\smallskip}
Constraint & Interpretation \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$a_i \leq \pi_i(p) \leq b_i$ & Box constraints on portfolio weights \\
$m_i \leq \frac{\pi_i(p)}{p_i} \leq M_i$ & Box constraints on weight ratios \\
$(\pi(p) - p)' \Sigma (\pi(p) - p) < \varepsilon$ & Constraint on tracking error given a covariance matrix \\ \noalign{\smallskip}\hline
\end{tabular}
\end{table}
For maximum likelihood estimation of a log-concave density, it is shown in \cite{CSS10} that the logarithm of the MLE $\widehat{f}$ is {\it polyhedral}, i.e., $\log \widehat{f}$ is the pointwise minimum of several affine functions (see \cite[Section 19]{R70}). In particular, there exists a triangulation of the data points over which $\log \widehat{f}$ is piecewise affine. We show that an analogous statement holds for \eqref{eqn:newoptim2}. Let $D = \{p(j), q(j): j = 1, ..., N\}$ be the set of data points.
\begin{theorem} \label{thm:optim3}
Let $(\pi, \Phi)$ be an optimal portfolio for the problem \eqref{eqn:newoptim2} where ${\Bbb P} = \frac{1}{N} \sum_{j = 1}^N \delta_{(p(j), q(j))}$. Let $\overline{\Phi}: \Delta^{(n)} \rightarrow (0, \infty)$ be the smallest positive concave function on $\Delta^{(n)}$ such that $\overline{\Phi}(p) \geq {\Phi}(p)$ for all $p \in D$. Then $\overline{\Phi}$ is a polyhedral positive concave function on $\Delta^{(n)}$ satisfying $\overline{\Phi} \leq \Phi$ and $\overline{\Phi}(p) = \Phi(p)$ for all $p \in D$. Moreover, $\overline{\Phi}$ generates a portfolio $\overline{\pi}$ such that $\overline{\pi}(p(j)) = \pi(p(j))$ for all $j$. In particular, $(\overline{\pi}, \overline{\Phi})$ is also optimal for the problem \eqref{eqn:newoptim2}.
\end{theorem}
\begin{proof}
It is a standard result in convex analysis that $\overline{\Phi}$ so defined is {\it finitely generated} (see \cite[Section 19]{R70}). By \cite[Corollary 19.1.2]{R70}, $\overline{\Phi}$ is a polyhedral concave function. By definition of $\overline{\Phi}$ and concavity of $\Phi$, we have $\overline{\Phi}(p) = {\Phi}(p)$ for all $p \in D$ and $\overline{\Phi} \leq \Phi$. This implies that $\partial \log \Phi(p(j)) \subset \partial \log \overline{\Phi}(p(j))$ for all $j$. By Lemma \ref{lem:superdiff}(ii), $\overline{\Phi}$ generates a portfolio $\overline{\pi}$ which agrees with $\pi$ on $\{p(1) ,..., p(N)\}$. It follows that (using obvious notations)
\[
\overline{T}\left(q(j) \mid p(j)\right) = T\left(q(j) \mid p(j) \right)
\]
for all $j$, and hence $(\overline{\pi}, \overline{\Phi})$ is optimal for \eqref{eqn:newoptim2}.
\qed\end{proof}
Theorem \ref{thm:optim3} reduces \eqref{eqn:newoptim2} to a finite-dimensional problem. In the next section we present an elementary implementation for the case $n = 2$ (analogous to univariate density estimation) and illustrate its application in portfolio management with a case study.
\section{Empirical examples} \label{sec:empirical}
\subsection{A case study}
\begin{figure}
\includegraphics[scale=0.5]{data.pdf}
\vspace{-20pt}
\caption{The figure on the left shows the growth of $\$1$ for each asset, and the one on the right shows the time series of the market weight $\mu_1(t)$ of US. The vertical dotted line divides the data set into the training and testing periods.}
\label{fig:data}
\end{figure}
In global portfolio management, an important topic is the determination of the aggregate portfolio weights for countries. In this example we consider two countries: US and China. We represent them by the S\&P US BMI index (asset 1) and the S\&P China BMI index (asset 2) respectively. The `market' consists of these two assets. We collect monthly data from January 2001 to June 2014 from Bloomberg. The benchmark portfolio is taken to be the buy-and-hold portfolio starting with weights $(0.5, 0.5)$ in January 2001. Here the initial market weights $(0.5, 0.5)$ are chosen arbitrarily. The data from January 2001 to December 2010 will be used as the training data to optimize the portfolio, which will be backtested in the subsequent period. The market weights in January 2011 are $(0.1819, 0.8181)$. The data is plotted in Figure \ref{fig:data}.
Let $K \subset \Delta^{(2)}$ be the compact set defined by
\begin{equation} \label{eqn:set}
K = \{p = (p_1, p_2) \in \Delta^{(2)}: 0.1 \leq p_1 \leq 0.3\}.
\end{equation}
Our objective here is to optimize a functionally generated portfolio to be held as long as the market weights stay within $K$. If the market weight of US approaches either boundary point (regarded as a regime change), a new portfolio will be chosen, so $0.1$ and $0.3$ can be thought of as the {\it trigger points}.
\subsection{The intensity measure and constraints} \label{subsec:example}
\begin{figure}
\includegraphics[scale=0.4]{density.pdf}
\vspace{-15pt}
\caption{Density estimate of ${\Bbb P}_N$ on $K \times K$ in terms of the market weight of US.}
\label{fig:density}
\end{figure}
Suppose $t = 0$ corresponds to January 2011. We model $\{\mu(t)\}_{t \geq 0}$ as a discrete-time stochastic process (time is monthly) where $\mu(0)$ is constant. Let ${\Bbb P}$ be the measure in Example \ref{exm:green} where $\tau$ is the first exit time of $K$ given in \eqref{eqn:set}.
If a stochastic model is given, we may approximate ${\Bbb P}$ by simulating paths of $\{\mu(t)\}$ killed upon exiting $K$. The resulting empirical measure
\[
{\Bbb P}_N = \frac{1}{N} \sum_{j = 1}^N \delta_{(p(j), q(j))}
\]
is then taken as the intensity measure of the optimization problem \eqref{eqn:newoptim2}.
Since our main concern is the implementation of the optimization problem \eqref{eqn:newoptim2}, sophisticated modeling of $\{\mu(t)\}$ will not be attempted and we will use a simple method to simulate paths of $\{\mu(t)\}$. Namely, starting at $\mu(0) = (0.1819, 0.8181)$, we simulate paths of $\{\mu(t)\}_{t = 0}^{\tau - 1}$ by {\it bootstrapping} the past returns of the two assets and computing the corresponding market weight series. In view of the possible recovery of US, before the simulation we recentered the past returns so that they both have mean zero over the training period. (Essentially, only the difference in returns matters for the evolution of the market weights.) We simulated 50 such paths and obtained $N = 3115$ pairs $(p(j), q(j))$ in $K \times K$. A density estimate of ${\Bbb P}_N$ (in terms of the market weight of US) is plotted in Figure \ref{fig:density}. To reduce the number of variables, the market weights are rounded to 3 decimal places, so the market weights of US take values in the set $D = \{0.100, 0.101, ..., 0.299, 0.300\}$.
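The bootstrap construction can be sketched as follows. The Gaussian draws below are synthetic stand-ins for the recentered monthly BMI returns (which are not reproduced here), so the resulting sample is illustrative only.

```python
import math, random

random.seed(0)

# synthetic stand-ins for the monthly returns of the two assets over the
# training period (the actual BMI data are not reproduced here)
r_us = [random.gauss(0.0, 0.05) for _ in range(120)]
r_cn = [random.gauss(0.0, 0.08) for _ in range(120)]
# recenter both return series to mean zero, as in the text
mu_us = sum(r_us) / len(r_us);  r_us = [r - mu_us for r in r_us]
mu_cn = sum(r_cn) / len(r_cn);  r_cn = [r - mu_cn for r in r_cn]

def simulate_path(p1=0.1819, lo=0.1, hi=0.3):
    """Bootstrap past returns and evolve the market weight until it exits K."""
    pairs = []
    while True:
        i = random.randrange(len(r_us))   # resample a past month
        # only the return difference moves the market weight
        g1, g2 = math.exp(r_us[i]), math.exp(r_cn[i])
        w = p1 * g1 / (p1 * g1 + (1 - p1) * g2)
        if not (lo <= w <= hi):
            return pairs                  # path killed on exiting K
        pairs.append((round(p1, 3), round(w, 3)))  # round to the 3-decimal grid
        p1 = w

sample = [pair for _ in range(50) for pair in simulate_path()]
assert all(0.1 <= p <= 0.3 and 0.1 <= q <= 0.3 for p, q in sample)
```

The collected pairs, weighted equally, form the empirical intensity measure ${\Bbb P}_N$ used in the optimization.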
Next we specify the constraints for $\{\pi(p_1) := \pi(p_1, 1 - p_1): p_1 \in D\}$. (This notation should cause no confusion since the market weight of China is determined by that of US.) First, we require that $\pi_1(p_1)$ is non-decreasing in $p_1$, i.e.,
\[
\pi_1(0.100) \leq \pi_1(0.101) \leq \cdots \leq \pi_1(0.300).
\]
This imposes a shape constraint on the portfolio weights which guarantees that the portfolio weights always move in the direction of market movement. To control the concentration of the portfolio we require also that the weight ratio of US satisfies $0.5 \leq \frac{\pi_1(p_1)}{p_1} \leq 2$ for $p_1 \in D$ (since there are only two assets, this implies a weight ratio bound for China). These constraints determine the convex set $C$ in the optimization problem \eqref{eqn:newoptim2} we are about to solve.
\subsection{Optimization procedure}
\begin{figure}
\includegraphics[scale=0.50]{result.pdf}
\vspace{-20pt}
\caption{The portfolio weight and the generating function of the optimized portfolio.}
\label{fig:result}
\end{figure}
By Theorem \ref{thm:optim3}, it suffices to optimize over generating functions that are piecewise linear over the data points. First we introduce some simplifying notations. Write the set of grid points as $D = \{x_1 < x_2 < \cdots < x_m\}$ and let $x_0 = 0$, $x_{m+1} = 1$ be the endpoints of the interval. Let the decision variables be
\[
z_j := \pi(x_j, 1 - x_j), \quad j = 1, ..., m,
\]
\[
\varphi_j := \Phi(x_j, 1 - x_j), \quad j = 0, ..., m + 1.
\]
By scaling, we may assume $\varphi_1 = 1$. The constraints on $\{\varphi_j\}$ are
\begin{equation} \label{eqn:nonnegative}
\varphi_j \geq 0, \quad j = 0, ..., m+1, \quad \varphi_1 = 1, \quad \text{(non-negativity)}
\end{equation}
\begin{equation} \label{eqn:concavity}
s_0 \geq s_1 \geq \cdots \geq s_m, \quad s_j := \frac{\varphi_{j+1} - \varphi_j}{x_{j+1} - x_j}. \quad \text{(concavity)}
\end{equation}
We require that $\pi$ is generated by $\Phi$. By \eqref{eqn:n=2weight} and Lemma \ref{lem:superdiff}, it can be seen that $z_j$ satisfies the inequality
\begin{equation} \label{eqn:generated}
x_j + x_j(1 - x_j) \frac{s_j}{\varphi_j} \leq z_j \leq x_j + x_j(1 - x_j) \frac{s_{j-1}}{\varphi_j}, \quad j = 1, ..., m. \quad \text{($(\pi, \Phi) \in {\mathcal{FG}}$)}
\end{equation}
We require that $z_j$ is non-decreasing in $j$:
\begin{equation} \label{eqn:monotone}
z_1 \leq z_2 \leq \cdots \leq z_m. \quad \text{(monotonicity)}
\end{equation}
Finally, we require that the weight ratios are bounded between $0.5$ and $2$:
\begin{equation} \label{eqn:weightratio}
0.5 \leq \frac{z_j}{x_j} \leq 2, \quad j = 1, ..., m. \quad \text{(weight ratios)}
\end{equation}
With the constraints \eqref{eqn:nonnegative}-\eqref{eqn:weightratio} we maximize
\[
\int T\left(q \mid p\right) \mathrm{d}{\Bbb P}_N = \frac{1}{N} \sum_{j = 1}^N T\left( q(j) \mid p(j) \right)
\]
over $\{z_j\}$ and $\{\varphi_j\}$. This is a standard non-linear, but smooth, constrained optimization problem (convexity is lost because $\Phi$ is now piecewise linear). We implement this optimization problem using the \verb"fmincon" function in \verb"MATLAB". The optimal portfolio weights together with the generating function are plotted in Figure \ref{fig:result}. It turns out that the optimal portfolio is close to constant-weighted (with weights $(0.2331, 0.7669)$). Note that the constraint on the weight ratio limits the deviation of $\pi_1(p_1)$ from the market weight $p_1$. If the weight ratio constraint was not imposed (while the monotonicity constraint was kept), the optimal portfolio would be the equal-weighted portfolio $\pi \equiv (0.5, 0.5)$, and the reason can be seen from the proof of Lemma \ref{lem:diffeqn}.
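As a sanity check on the constraint system \eqref{eqn:nonnegative}--\eqref{eqn:weightratio}, the sketch below builds the candidate $(z, \varphi)$ corresponding to the constant-weighted portfolio with weight $c = 0.2$ (a hypothetical choice, not the optimum reported above), whose generating function is $\Phi(p) = p_1^{c} p_2^{1-c}$, and verifies feasibility on the 3-decimal grid.

```python
import math

c = 0.2                                        # hypothetical constant weight
grid = [0.1 + 0.001 * j for j in range(201)]   # x_1 < ... < x_m (3-decimal grid)
phi = [x**c * (1 - x)**(1 - c) for x in grid]  # Phi(x, 1 - x) on the grid
z = [c] * len(grid)                            # pi_1 is constant for this Phi

# secant slopes s_j between consecutive grid points
s = [(phi[j + 1] - phi[j]) / (grid[j + 1] - grid[j]) for j in range(len(grid) - 1)]

# (concavity) slopes are non-increasing
assert all(s[j] >= s[j + 1] - 1e-12 for j in range(len(s) - 1))

# ((pi, Phi) in FG) the generated-portfolio bracket at interior grid points
for j in range(1, len(grid) - 1):
    x = grid[j]
    lo = x + x * (1 - x) * s[j] / phi[j]
    hi = x + x * (1 - x) * s[j - 1] / phi[j]
    assert lo - 1e-9 <= z[j] <= hi + 1e-9

# (monotonicity) and (weight ratios)
assert all(z[j] <= z[j + 1] for j in range(len(z) - 1))
assert all(0.5 - 1e-12 <= z[j] / grid[j] <= 2 + 1e-12 for j in range(len(grid)))
```

Note that at $x = 0.1$ the ratio $z/x = 2$ hits the upper weight-ratio bound exactly, so $c = 0.2$ is the largest constant weight that is feasible on this grid.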
\subsection{Backtesting the portfolio}
\begin{figure}
\includegraphics[scale=0.50]{backtest.pdf}
\vspace{-20pt}
\caption{Fernholz's decomposition of the optimized portfolio over the testing period. The log relative value is $\log V_{\pi}(t)$. The generating function term is $\log \Phi(\mu(t)) - \log \Phi(\mu(0))$, and the drift process is $A(t)$.}
\label{fig:backtest}
\end{figure}
Finally, we compute the performance of the optimized portfolio over the testing period January 2011 to June 2014. The result (plotted using the function \verb"FernholzDecomp" of the \verb"RelValAnalysis" package) is shown in Figure \ref{fig:backtest}. Over the testing period, the portfolio beats the market by nearly 2\% in log scale and its performance has been steady. From the decomposition, about half of the outperformance is attributed to the increase of the generating function (note that the market weight of US becomes closer to $0.2331$ where the generating function attains its maximum), and the rest comes from the drift process. That the optimal portfolio is close to constant-weighted may not be very interesting, but this is a consequence of the data and our choice of constraints and is by no means obvious. Our optimization framework allows many other possibilities especially when there are multiple assets. Other useful constraints and efficient algorithms are natural subjects of further research.
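The decomposition plotted in Figure \ref{fig:backtest} holds pathwise for any functionally generated portfolio. The sketch below verifies it on a toy two-asset path for the equal-weighted portfolio generated by the geometric mean $\Phi(p) = \sqrt{p_1 p_2}$ (our choice for illustration; the paper applies it to the optimized portfolio via the \verb"RelValAnalysis" package).

```python
import math

def phi(p):                    # generating function: geometric mean
    return math.sqrt(p[0] * p[1])

def pi(p):                     # portfolio generated by phi: equal-weighted
    return (0.5, 0.5)

def T(q, p):                   # L-divergence (drift increment)
    growth = sum(w * qi / pi_ for w, pi_, qi in zip(pi(p), p, q))
    return math.log(growth) - math.log(phi(q) / phi(p))

# toy market-weight path (illustrative, not the BMI data)
path = [(0.18, 0.82), (0.20, 0.80), (0.17, 0.83), (0.22, 0.78)]

log_V = 0.0                    # log relative value, accumulated step by step
drift = 0.0                    # drift process A(t)
for p, q in zip(path, path[1:]):
    log_V += math.log(sum(w * qi / pi_ for w, pi_, qi in zip(pi(p), p, q)))
    drift += T(q, p)

gen_term = math.log(phi(path[-1]) / phi(path[0]))
assert abs(log_V - (gen_term + drift)) < 1e-12   # Fernholz's decomposition
```

The identity $\log V_{\pi}(t) = \log \frac{\Phi(\mu(t))}{\Phi(\mu(0))} + A(t)$ holds exactly because the generating-function terms telescope out of the accumulated L-divergences.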
\section*{Materials and Methods}
Equations of motion (equations~(\ref{eq:projection_r}) and (\ref{eq:projection_n})) were integrated numerically. Instead of choosing a curvilinear
parametrization of the sphere, we kept the equations in vector form and imposed the constraints after each step. Each time step has two stages:
\emph{i}) an unconstrained move and \emph{ii}) a projection onto the constraint. First, the particle is moved according to equation~(\ref{eq:projection_r}) without
any constraints. Its position is then projected back onto the sphere, and its velocity and orientation are projected
onto the tangent plane at the new position. Similarly, torques are projected onto the surface normal at $\mathbf{r}_{i}$ and, finally, $\mathbf{n}_{i}$
is rotated by a random angle around the same normal. As long as the time step is sufficiently small, all projections are unique and
should not affect the dynamics.
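The projection stage can be sketched as follows. This is a minimal Python illustration with our own function names; the velocity is projected onto the tangent plane in exactly the same way as the orientation.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, c): return [c * x for x in a]

def constrain_to_sphere(r, n, R):
    """Stage (ii) of a time step: radially project the position r back onto
    the sphere of radius R, then project the orientation n (and, identically,
    the velocity) onto the tangent plane at the new position and renormalize."""
    r_norm = math.sqrt(dot(r, r))
    r_new = scale(r, R / r_norm)          # radial projection of the position
    nu = scale(r_new, 1.0 / R)            # outward unit normal at r_new
    n_tan = [a - b for a, b in zip(n, scale(nu, dot(n, nu)))]
    t_norm = math.sqrt(dot(n_tan, n_tan))
    return r_new, scale(n_tan, 1.0 / t_norm)
```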
The packing fraction, $\phi=N\pi\sigma^{2}/(4\pi R^{2})$, is defined as the ratio of the area occupied by all particles to the total area of the sphere (we count double overlaps twice). All simulations were performed with $N\approx3\times10^{3}$
particles at packing fraction $\phi=1$, resulting in $R\approx28.2\sigma$. For comparison, we performed a series of simulations in the plane with
the same $N$ and $\phi$ by imposing periodic boundary conditions onto a square simulation box of size $L=100\sigma$. In all cases, the equations
of motion were integrated for a total of $1.1\times10^{4}\tau$ with time step $\delta t=10^{-3}\tau$. Initially, particles were placed
at random on the sphere. In order to make the configuration reasonably uniform and avoid large forces leading to large displacements, initial
overlaps were removed by using a simple energy relaxation scheme (with $v_0=0$) for $10^{3}\tau$.
Subsequently, activity and noise were introduced and the equations were integrated for an additional $10^{4}\tau$ using a standard Euler-Maruyama
method. Configurations were recorded every $5\tau$. Typical runs took approximately 5 hours on a single core of an Intel Xeon E2600 series processor.
The system spontaneously breaks spherical symmetry and there is no reason to expect that the axis connecting the poles will be aligned with any
of the coordinate axes in $\mathbb{R}^3$. Therefore, in order to produce the angular profiles in Fig.~\ref{fig:profiles}, for each snapshot we first
determined the direction of the total angular velocity and then performed a global rotation around the origin that aligned it with the
$z$-axis in $\mathbb{R}^3$.
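This alignment step can be implemented with Rodrigues' rotation formula; the sketch below (our naming) rotates a vector by the rotation about the origin that maps the direction of the total angular velocity onto the $z$-axis.

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def rotate_to_z(v, omega):
    """Rotate v by the global rotation that maps the direction of omega onto
    the z-axis (Rodrigues' rotation formula)."""
    w = math.sqrt(sum(x * x for x in omega))
    k = [x / w for x in omega]            # unit vector along omega
    axis = cross(k, [0.0, 0.0, 1.0])      # rotation axis, |axis| = sin(angle)
    s = math.sqrt(sum(x * x for x in axis))
    c = k[2]                              # cos(angle) = k . z
    if s < 1e-12:                         # omega already (anti-)parallel to z
        return list(v) if c > 0 else [v[0], -v[1], -v[2]]
    axis = [x / s for x in axis]
    av = cross(axis, v)
    ad = sum(p * q for p, q in zip(axis, v))
    return [v[i] * c + av[i] * s + axis[i] * ad * (1.0 - c) for i in range(3)]
```

Applying `rotate_to_z` to every particle position and orientation in a snapshot realizes the global rotation described above.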
In order to analyse the single-slice model, we suppose that the chain consists of $N_p$ particles pole-to-pole. We chose $N_p$ such that $p\sigma^2\approx 0.5 k$ in the absence of activity, consistent with the low velocity and flat value of the pressure (see SI). For overlapping particles, the force an adjacent particle $j$ exerts on particle
$i$ in the chain is given by $\mathbf{F}_{ij}=-k\hat{\mathbf{r}}_{ij}(2\sigma-|\mathbf{r}_j-\mathbf{r}_i|)$, where $k$ is the (linearised) stiffness of the potential and $\sigma$ is the particle radius.
If we introduce curvilinear coordinates along the chain and expand around $\theta_i$ in small values of $\delta \theta =\theta_j - \theta_i$, we can approximate
$\mathbf{r}_j-\mathbf{r}_i=-R(\theta_j-\theta_i)\hat{\mathbf{e}}_{\theta}$. To first order, interparticle forces are along $\hat{\mathbf{e}}_{\theta}$, and the forces
acting on particle $i$ from its neighbours $i-1$ and $i+1$ are $F_{i,i-1}=k(2\sigma-R(\theta_i -\theta_{i-1}))$ and $F_{i,i+1}=-k(2\sigma-R(\theta_{i+1}-\theta_i))$. Finally, we can
write the set of equations of motion along the chain:
\begin{align}
& v_0 \sin \alpha_1 = -\mu k \left(2\sigma-R(\theta_2-\theta_1)\right) \nonumber \\
& v_0\sin \alpha_i = -\mu k R(\theta_i -\theta_{i-1}) +\mu kR(\theta_{i+1}-\theta_i) \nonumber \\
& v_0\sin \alpha_{N_p} = \mu k\left(2\sigma-R(\theta_{N_p} -\theta_{N_p-1})\right).
\label{eq:band_balance}
\end{align}
We solve these equations using two approaches. First, we treat equations~(\ref{eq:band_balance}) as Euler-Lagrange equations of an energy functional containing only potential energy terms, which we then minimize using
the standard L-BFGS-B quasi-Newton method, which supports bound constraints. Formally, even though our physical system conserves neither energy nor momentum, if we assume $\alpha = s \theta$, the active force components in equation~(\ref{eq:band_balance})
derive from an effective potential $V^i_{\text{act}}=v_0 \cos(s \theta_i)$ which can be added to the interparticle repulsive term $V^i_{\text{rep}}=\frac{k R}{2} \sum_{j \in \mathcal{N}} (\theta_j - \theta_i)^2$.
Then setting the gradients of $V^i = V^i_{\text{act}}+V^i_{\text{rep}}$ to zero is equivalent to equations~(\ref{eq:band_balance}).
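As a dependency-free illustration, the force balance in equations~(\ref{eq:band_balance}) can also be relaxed directly with Gauss-Seidel sweeps rather than the L-BFGS-B minimization used here; the sketch below assumes $\alpha_i = s\theta_i$, and all names and parameter values are illustrative only. With $v_0 = 0$ the chain relaxes to the uniform angular spacing $2\sigma/R$.

```python
import math

def relax_chain(n_p, v0=0.5, s=1.0, mu=1.0, k=1.0, R=28.2, sigma=1.0,
                sweeps=5000):
    """Gauss-Seidel relaxation of the force balance along the chain with
    alpha_i = s * theta_i.  Interior update: theta_i = (theta_{i-1} +
    theta_{i+1})/2 - (v0 / (2 mu k R)) sin(s theta_i); boundary updates fix
    the end spacings via the first and last equations."""
    d = 2.0 * sigma / R               # equilibrium angular spacing
    c = v0 / (mu * k * R)
    theta = [i * 1.5 * d for i in range(n_p)]   # deliberately stretched start
    for _ in range(sweeps):
        theta[0] = theta[1] - d - c * math.sin(s * theta[0])
        for i in range(1, n_p - 1):
            theta[i] = 0.5 * (theta[i - 1] + theta[i + 1]) \
                       - 0.5 * c * math.sin(s * theta[i])
        theta[-1] = theta[-2] + d - c * math.sin(s * theta[-1])
    return theta
```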
The second approach is based on the analytical continuum limit. It is less straightforward, but more insightful, and is discussed in detail in the SI.
\acknowledgments{{\bf Acknowledgments.} We thank M.C. Marchetti for introducing us to active matter, and for illuminating discussions and critical reading of the manuscript. We also thank F.~Ginelli for useful discussions. Part of this work was performed at the Kavli Institute for Theoretical Physics and was supported in part by the National Science Foundation under Grant No.~NSF PHY11-25915.}
\section{Introduction}
Graph-structured data is ubiquitous across application domains ranging from chemo- and bioinformatics to image and social network analysis.
To develop successful machine learning models in these domains, we need techniques that can exploit the rich information inherent in graph structure, as well as the feature information contained within a graph's nodes and edges. In recent years, numerous approaches have been proposed for machine learning on graphs---most notably, approaches based on graph kernels \cite{Vis+2010} or, alternatively, using graph neural network algorithms \cite{Ham+2017a}.
Kernel approaches typically fix a set of features in advance---e.g., indicator features over subgraph structures or features of local node neighborhoods.
For example, one of the most successful kernel approaches, the \new{Weisfeiler-Lehman subtree kernel}~\cite{She+2011}, which is based on the $1$-dimensional Weisfeiler-Leman graph isomorphism heuristic \cite[pp.\,79\,ff.]{Gro2017}, generates node features through an iterative relabeling, or \emph{coloring}, scheme:
First, all nodes are assigned a common initial color; the algorithm then iteratively recolors a node by aggregating over the multiset of colors in its neighborhood, and the final feature representation of a graph is the histogram of the resulting node colors.
By iteratively aggregating over local node neighborhoods in this way, the WL subtree kernel is able to effectively summarize the neighborhood substructures present in a graph.
However, while powerful, the WL subtree kernel---like other kernel methods---is limited because this feature construction scheme is fixed (i.e., it does not adapt to the given data distribution). Moreover, this approach---like the majority of kernel methods---focuses only on the graph structure and cannot interpret continuous node and edge labels, such as real-valued vectors which play an important role in applications such as bio- and chemoinformatics.
Graph neural networks (GNNs) have emerged as a machine learning framework addressing the above challenges.
Standard GNNs can be viewed as a neural version of the $1$-WL algorithm, where colors are replaced by continuous feature vectors and neural networks are used to aggregate over node neighborhoods \cite{Ham+2017,Kip+2017}.
In effect, the GNN framework can be viewed as implementing a continuous form of graph-based ``message passing'', where local neighborhood information is aggregated and passed on to the neighbors~\cite{Gil+2017}. By deploying a trainable neural network to aggregate information in local node neighborhoods, GNNs can be trained in an end-to-end fashion together with the parameters of the classification or regression algorithm, possibly allowing for greater adaptability and better generalization
compared to the kernel counterpart of the classical $1$-WL algorithm.
Up to now, the evaluation and analysis of GNNs has been largely empirical, showing promising results compared to kernel approaches, see, e.g.,~\cite{Yin+2018}. However, it remains unclear how GNNs are actually encoding graph structure information into their vector representations, and whether there are theoretical advantages of GNNs compared to kernel based approaches.
\xhdr{Present Work}
We offer a theoretical exploration of the relationship between GNNs and kernels that are based on the $1$-WL algorithm.
We show that GNNs cannot be more powerful than the $1$-WL in terms of distinguishing non-isomorphic (sub-)graphs, e.g., the local subgraph structures around each node.
This result holds for a broad class of GNN architectures and all possible choices of parameters for them.
On the positive side, we show that given the right parameter initialization GNNs have the same expressiveness as the $1$-WL algorithm, completing the equivalence.
Since the power of the $1$-WL has been completely characterized, see, e.g.,~\cite{Arv+2015,kiefer2015graphs}, we can transfer these results to the case of GNNs, showing that both approaches have the same shortcomings.
Going further, we leverage these theoretical relationships to propose a generalization of GNNs, called $k$-GNNs: neural architectures based on the $k$-dimensional WL algorithm ($k$-WL) that are strictly more powerful than GNNs.
The key insight in these higher-dimensional variants is that they perform message passing directly between subgraph structures, rather than individual nodes.
This higher-order form of message passing can capture structural information that is not visible at the node-level.
Graph kernels based on the $k$-WL have been proposed in the past \cite{Mor+2017}.
However, a key advantage of implementing higher-order message passing in GNNs---which we demonstrate here---is that we can design hierarchical variants of $k$-GNNs, which combine graph representations learned at different granularities in an end-to-end trainable framework.
Concretely, in the presented hierarchical approach the initial messages in a $k$-GNN are based on the output of a lower-dimensional $k'$-GNN (with $k' < k$), which allows the model to effectively capture graph structures of varying granularity. Many real-world graphs exhibit a hierarchical structure---e.g., in a social network we must model both the ego-networks around individual nodes, as well as the coarse-grained relationships between entire communities, see, e.g.,~\cite{New2003}---and our experimental results demonstrate that these hierarchical $k$-GNNs are able to consistently outperform traditional GNNs on a variety of graph classification and regression tasks. Across twelve graph regression tasks from the QM9 benchmark, we find that our hierarchical model reduces the mean absolute error by 54.45\% on average. For graph classification, we find that our hierarchical models lead to slight performance gains.
\xhdr{Key Contributions}
Our key contributions are summarized as follows:
\begin{enumerate}
\item We show that GNNs are not more powerful than the $1$-WL in terms of distinguishing non-isomorphic (sub-)graphs. Moreover, we show that, assuming a suitable parameter initialization, GNNs have the same power as the $1$-WL.
\item We propose $k$-GNNs, which are strictly more powerful than GNNs. Moreover, we propose a hierarchical version of $k$-GNNs, so-called $1$-$k$-GNNs, which are able to work with the fine- and coarse-grained structures of a given graph, and relationships between those.
\item Our theoretical findings are backed up by an experimental study, showing that higher-order graph properties are important for successful graph classification and regression.
\end{enumerate}
\section{Related Work}
Our study builds upon a wealth of work at the intersection of supervised learning on graphs, kernel methods, and graph neural networks.
Historically, kernel methods---which implicitly or explicitly map graphs to elements of a Hilbert space---have been the dominant approach for supervised learning on graphs.
Important early work in this area includes random-walk based kernels \cite{Gae+2003,Kas+2003} and kernels based on shortest paths \cite{Borgwardt2005}.
More recently, developments in graph kernels have emphasized scalability, focusing on techniques that bypass expensive Gram matrix computations by using explicit feature maps.
Prominent examples of this trend include kernels based on graphlet counting~\cite{She+2009}, and, most notably, the Weisfeiler-Lehman subtree kernel~\cite{She+2011} as well as its higher-order variants~\cite{Mor+2017}.
Graphlet and Weisfeiler-Leman kernels have been successfully employed within frameworks for smoothed and deep graph kernels~\cite{Yan+2015,Yan+2015a}. Recent works focus on assignment-based approaches~\cite{Kri+2016,Nik+2017,Joh+2015}, spectral approaches~\cite{Kon+2016}, and graph decomposition approaches~\cite{Nik+2018}.
Graph kernels were dominant in graph classification for several years, leading to new state-of-the-art results on many classification tasks.
However, they are limited by the fact that they cannot effectively adapt their feature representations to a given data distribution, since they generally rely on a fixed set of features. More recently, a number of approaches to graph classification based upon neural networks have been proposed.
Most of the neural approaches fit into the graph neural network framework proposed by~\cite{Gil+2017}. Notable instances of this model include \new{Neural Fingerprints}~\cite{Duv+2015}, \emph{Gated Graph Neural Networks}~\cite{Li+2016}, \emph{GraphSAGE}~\cite{Ham+2017}, \emph{SplineCNN}~\cite{Fey+2018}, and the spectral approaches proposed in~\cite{Bru+2014,Def+2015,Kip+2017}---all of which descend from early work in~\cite{Mer+2005} and~\cite{Sca+2009}.
Recent extensions and improvements to the GNN framework include approaches to incorporate different local structures around subgraphs \cite{Xu+2018} and novel techniques for pooling node representations in order to perform graph classification \cite{Zha+2018,Yin+2018}.
GNNs have achieved state-of-the-art performance on several graph classification benchmarks in recent years, see, e.g.,~\cite{Yin+2018}---as well as applications such as protein-protein interaction prediction~\cite{Fou+2017}, recommender systems~\cite{Yin+2018a}, and the analysis of quantum interactions in molecules~\cite{Sch+2017}.
A survey of recent advancements in GNN techniques can be found in \cite{Ham+2017a}.
Up to this point (and despite their empirical success) there has been very little theoretical work on GNNs---with the notable exceptions of Li et al.'s \cite{Li+2018a} work connecting GNNs to a special form of Laplacian smoothing and Lei et al.'s \cite{Lei+2017} work showing that the feature maps generated by GNNs lie in the same Hilbert space as some popular graph kernels. Moreover, Scarselli et al.\ \cite{Sca+2009a} investigate the approximation capabilities of GNNs.
\section{Preliminaries}\label{prelim}
We start by fixing notation, and then outline the Weisfeiler-Leman algorithm and the standard graph neural network framework.
\subsection{Notation and Background}
A \new{graph} $G$ is a pair $(V,E)$ with a finite set of \new{nodes} $V$ and a set of \new{edges} $E \subseteq \{ \{u,v\} \subseteq V \mid u \neq v \}$. We denote the set of nodes and the set of edges of $G$ by $V(G)$ and $E(G)$, respectively. For ease of notation we denote the edge $\{u,v\}$ in $E(G)$ by $(u,v)$ or $(v,u)$.
Moreover, $N(v)$ denotes the \new{neighborhood} of $v$ in $V(G)$, i.e., $N(v) = \{ u \in V(G) \mid (v, u) \in E(G) \}$. We say that two graphs $G$ and $H$ are \new{isomorphic} if there exists an edge preserving bijection $\varphi: V(G) \to V(H)$, i.e., $(u,v)$ is in $E(G)$ if and only if $(\varphi(u),\varphi(v))$ is in $E(H)$. We write $G \simeq H$ and call the equivalence classes induced by $\simeq$ \new{isomorphism types}. Let $S \subseteq V(G)$ then $G[S] = (S,E_S)$ is the \new{subgraph induced} by $S$ with $E_S = \{ (u,v) \in E(G) \mid u,v \in S \}$. A \new{node coloring} is a function $V(G) \to \Sigma$ with arbitrary codomain $\Sigma$. Then a \new{node colored} or \new{labeled graph} $(G,l)$ is a graph $G$ endowed with a node coloring $l \colon V(G) \to \Sigma$. We say that $l(v)$ is a \new{label} or \new{color} of $v\in V(G)$. We say that a node coloring $c$ \new{refines} a node coloring $d$, written $c \sqsubseteq d$, if $c(v) = c(w)$ implies $d(v) = d(w)$ for every $v,w$ in $V(G)$.
Two colorings are \new{equivalent} if $c \sqsubseteq d$ and $d \sqsubseteq c$, and we write $c \equiv d$.
A \new{color class} $Q\subseteq V(G)$ of a node coloring $c$ is a maximal set of nodes with $c(v)=c(w)$ for every $v,w$ in $Q$. Moreover, let $[1\!:\!n] = \{ 1, \dotsc, n \} \subset \mathbb{N}$ for $n > 1$. For a set $S$ and $k \geq 2$, let $[S]^k = \{ U \subseteq S \mid |U| = k \}$ denote the set of \new{$k$-sets}, i.e., all subsets of cardinality $k$, and let $\{\!\!\{ \dots \}\!\!\}$ denote a multiset.
\subsection{Weisfeiler-Leman Algorithm}
We now describe the \textsc{$1$-WL} algorithm for labeled graphs. Let $(G,l)$ be a labeled graph. In each iteration, $t \geq 0$, the $1$-WL computes a node coloring $c^{(t)}_l \colon V(G) \to \Sigma$,
which depends on the coloring from the previous iteration.
In iteration $0$, we set $c^{(0)}_l = l$. Now in iteration $t>0$, we set
\begin{equation}\label{eq:wlColoring}
c_{l}^{(t)}(v) \!=\! \textsc{hash}\Big(\!\big(c_l^{(t-1)}(v),\{\!\!\{ c_l^{(t-1)}(u)\!\mid\!u \in\!N(v) \!\}\!\!\} \big)\! \Big)
\end{equation}
where $\textsc{hash}$ bijectively maps the above pair to a unique value in $\Sigma$, which has not been used in previous iterations. To test two graphs $G$ and $H$ for isomorphism, we run the above algorithm in ``parallel'' on both graphs. Now if the two graphs have a different number of nodes colored $\sigma$ in $\Sigma$, the \textsc{$1$-WL} concludes that the graphs are not isomorphic. Moreover, if the number of colors between two iterations does not change, i.e., the cardinalities of the images of $c_l^{(t-1)}$ and $c_l^{(t)}$ are equal, the algorithm terminates. Termination is guaranteed after at most $\max \{ |V(G)|,|V(H)| \}$ iterations. It is easy to see that the algorithm is not able to distinguish all non-isomorphic graphs, e.g., see~\cite{Cai+1992}. Nonetheless, it is a powerful heuristic, which can successfully test isomorphism for a broad class of graphs~\cite{Bab+1979}.
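A compact sketch of this refinement follows (our naming). The injective $\textsc{hash}$ is realized by compacting the signatures to integer ids in every iteration, and graphs that are to be compared are passed together, with disjoint node names, so that the relabeling is shared.

```python
def wl_colors(adj, labels, iters=3):
    """1-WL colour refinement.  adj maps each node to its set of neighbours,
    labels gives the initial colouring; returns the colouring after `iters`
    iterations.  The per-iteration palette plays the role of the injective
    hash function in eq. (1)."""
    colors = dict(labels)
    for _ in range(iters):
        palette, new = {}, {}
        for v in adj:
            sig = (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            new[v] = palette.setdefault(sig, len(palette))
        colors = new
    return colors

def cycle(n, tag):
    """Adjacency of an n-cycle with nodes (tag, 0), ..., (tag, n-1)."""
    return {(tag, i): {(tag, (i - 1) % n), (tag, (i + 1) % n)}
            for i in range(n)}
```

Running this on a triangle together with a path on three nodes separates the two colour histograms after one iteration, whereas a $6$-cycle and two disjoint triangles receive identical histograms, in line with the remark above that the $1$-WL cannot distinguish all non-isomorphic graphs.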
The $k$-dimensional Weisfeiler-Leman algorithm ($k$-WL), for $k \geq 2$, is a generalization of the $1$-WL which colors tuples from $V(G)^k$ instead of nodes. That is, the algorithm computes a coloring $c^{(t)}_{k,l} \colon V(G)^k \to \Sigma$. In order to describe the algorithm, we define the $j$-th neighborhood
\begin{equation}\label{gnei}
N_j(s) \!=\! \{ ( s_1, \dotsc, s_{j-1}, r, s_{j+1}, \dotsc, s_k) \mid r \in V(G) \}
\end{equation}
of a $k$-tuple $s = (s_1, \dotsc, s_k )$ in $V(G)^k$. That is, the $j$-th neighborhood $N_j(s)$ of $s$ is obtained by replacing the $j$-th component of $s$ by every node of $V(G)$. In iteration $0$, the algorithm labels each $k$-tuple with its \new{atomic type}, i.e., two $k$-tuples $s$ and $s'$ in $V(G)^k$ get the same color if the map $s_i \mapsto s'_i$ induces a (labeled) isomorphism between the subgraphs induced by the nodes of $s$ and $s'$, respectively. For iteration $t > 0$, we define
\begin{equation}\label{wl-prim}
C^{(t)}_j(s) = \textsc{hash}\big(\{\!\!\{ c^{(t-1)}_{k,l}(s') \mid s' \in N_j(s)\}\!\!\}\big),
\end{equation}
and set
\begin{equation}\label{labelk}
c_{k,l}^{(t)}(s)\!=\! \textsc{hash}\Big(\!\big(c_{k,l}^{(t-1)}(s), \big( C^{(t)}_1(s), \dots, C^{(t)}_k(s) \big)\big)\!\Big)\,.
\end{equation}
Hence, two tuples $s$ and $s'$ with $c_{k,l}^{(t-1)}(s) = c_{k,l}^{(t-1)}(s')$ get different colors in iteration $t$ if there exists $j$ in $[1\!:\!k]$ such that the number of $j$-neighbors of $s$ and $s'$, respectively, colored with a certain color is different.
The algorithm then proceeds analogously to the \textsc{$1$-WL}.
By increasing $k$, the algorithm gets more powerful in terms of distinguishing non-isomorphic graphs, i.e., for each $k\geq 2$, there are non-isomorphic graphs which can be distinguished by the ($k+1$)-WL but not by the $k$-WL~\cite{Cai+1992}.
We note here that the above variant is not equal to the \emph{folklore} variant of $k$-WL described in~\cite{Cai+1992}, which differs slightly in its update rule.
However, it holds that the $k$-WL using~\cref{labelk} is as powerful as the folklore $(k\!-\!1)$-WL \cite{GroheO15}.
\xhdr{WL Kernels}
After running the WL algorithm, the concatenation of the histogram of colors in each iteration can be used as a feature vector in a kernel computation.
Specifically, in the histogram for every color $\sigma$ in $\Sigma$ there is an entry containing the number of nodes or $k$-tuples that are colored with $\sigma$.
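A sketch of the resulting kernel computation under the same conventions (our naming; the two graphs are tagged internally so the injective relabeling is shared between them):

```python
from collections import Counter

def wl_subtree_kernel(adj_g, adj_h, labels_g, labels_h, iters=2):
    """WL subtree kernel: run 1-WL jointly on both graphs, record the colour
    histogram of each graph in every iteration (keys are (iteration, colour)
    pairs, so histograms of different iterations never mix), and return the
    dot product of the concatenated histograms."""
    adj = {('g', v): {('g', u) for u in nb} for v, nb in adj_g.items()}
    adj.update({('h', v): {('h', u) for u in nb} for v, nb in adj_h.items()})
    colors = {('g', v): ('init', c) for v, c in labels_g.items()}
    colors.update({('h', v): ('init', c) for v, c in labels_h.items()})
    hist_g, hist_h = Counter(), Counter()
    for it in range(iters + 1):
        for v, c in colors.items():
            (hist_g if v[0] == 'g' else hist_h)[(it, c)] += 1
        palette, new = {}, {}
        for v in adj:
            sig = (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            new[v] = palette.setdefault(sig, len(palette))
        colors = new
    return sum(hist_g[key] * hist_h[key] for key in hist_g)
```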
\subsection{Graph Neural Networks}
Let $(G,l)$ be a labeled graph with an initial node coloring $f^{(0)}: V(G)\rightarrow \mathbb{R}^{1\times d}$ that is \emph{consistent} with $l$.
This means that each node $v$ is annotated with a feature $f^{(0)}(v)$ in $\ensuremath{\mathbb{R}}^{1\times d}$ such that $f^{(0)}(u) = f^{(0)}(v)$ if and only if $l(u) = l(v)$.
Alternatively, $f^{(0)}(v)$ can be an arbitrary real-valued feature vector associated with $v$.
Examples include continuous atomic properties in chemoinformatic applications where nodes correspond to atoms, or vector representations of text in social network applications.
A GNN model consists of a stack of neural network layers, where each layer aggregates local neighborhood information, i.e., features of neighbors, around each node and then passes this aggregated information on to the next layer.
A basic GNN model can be implemented as follows~\cite{Ham+2017a}.
In each layer $t > 0$, we compute a new feature
\begin{equation}\label{eq:basicgnn}
f^{(t)}(v) = \sigma \Big( f^{(t-1)}(v) \cdot W^{(t)}_1 +\, \sum_{\mathclap{w \in N(v)}}\,\, f^{(t-1)}(w) \cdot W_2^{(t)} \Big)
\end{equation}
in $\ensuremath{\mathbb{R}}^{1 \times e}$ for $v$, where
$W_1^{(t)}$ and $W_2^{(t)}$ are parameter matrices from $\ensuremath{\mathbb{R}}^{d \times e}$
, and $\sigma$ denotes a component-wise non-linear function, e.g., a sigmoid or a ReLU.\footnote{For clarity of presentation we omit biases.}
Following~\cite{Gil+2017}, one may also replace the sum defined over the neighborhood in the above equation by a permutation-invariant, differentiable function, and one may substitute the outer sum, e.g., by a column-wise vector concatenation or LSTM-style update step.
Thus, in full generality a new feature $f^{(t)}(v)$ is computed as
\begin{equation}\label{eq:gnngeneral}
f^{W_1}_{\text{merge}}\Big(f^{(t-1)}(v) ,f^{W_2}_{\text{aggr}}\big(\{\!\!\{ f^{(t-1)}(w) \mid w \in N(v)\}\!\!\} \big)\!\Big),
\end{equation}
where $f^{W_2}_{\text{aggr}}$ aggregates over the multiset of neighborhood features and $f^{W_1}_{\text{merge}}$ merges the node's representation from step $(t-1)$ with the aggregated neighborhood features.
Both $f^{W_1}_{\text{merge}}$ and $f^{W_2}_{\text{aggr}}$ may be arbitrary differentiable, permutation-invariant functions (e.g., neural networks), and, by analogy to Equation \ref{eq:basicgnn}, we denote their parameters as $W_1$ and $W_2$, respectively.
In the rest of this paper, we refer to neural architectures implementing~\cref{eq:gnngeneral} as \emph{$1$-dimensional GNN architectures} ($1$-GNNs).
A vector representation $f_{\text{GNN}}$ over the whole graph can be computed by summing over the vector representations computed for all nodes, i.e.,
\begin{equation*}
f_{\text{GNN}}(G) = \sum_{v \in V(G)} f^{(T)}(v),
\end{equation*}
where $T > 0$ denotes the last layer. More refined approaches use differential pooling operators based on sorting~\cite{Zha+2018} and soft assignments~\cite{Yin+2018}.
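As a dependency-free sketch (names are ours; biases omitted as above), one layer of~\cref{eq:basicgnn} with a ReLU non-linearity and the sum readout can be written as:

```python
def vecmat(x, W):
    """Row vector times matrix: x in R^{1xd}, W given as a d x e nested list."""
    return [sum(x[i] * W[i][j] for i in range(len(x)))
            for j in range(len(W[0]))]

def gnn_layer(adj, feats, W1, W2):
    """One basic GNN layer: f'(v) = relu(f(v) W1 + sum_{w in N(v)} f(w) W2)."""
    out = {}
    for v, nbrs in adj.items():
        h = vecmat(feats[v], W1)
        for w in nbrs:
            h = [a + b for a, b in zip(h, vecmat(feats[w], W2))]
        out[v] = [max(0.0, a) for a in h]    # component-wise ReLU
    return out

def readout(feats):
    """Graph-level representation: sum the node features component-wise."""
    dims = len(next(iter(feats.values())))
    return [sum(f[i] for f in feats.values()) for i in range(dims)]
```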
In order to adapt the parameters $W_1$ and $W_2$ of~\cref{eq:basicgnn,eq:gnngeneral}, to a given data distribution, they are optimized in an end-to-end fashion (usually via stochastic gradient descent) together with the parameters of a neural network used for classification or regression.
\section{Relationship Between 1-WL and 1-GNNs}
In the following we explore the relationship between the $1$-WL and $1$-GNNs.
Let $(G,l)$ be a labeled graph, and let $\mathbf{W}^{(t)} = \big(W^{(t')}_1, W^{(t')}_2 \big)_{t'\leq t}$ denote the GNN parameters given by \cref{eq:basicgnn} or~\cref{eq:gnngeneral} up to iteration $t$.
We encode the initial labels $l(v)$ by vectors $f^{(0)}(v)\in\mathbb{R}^{1\times d}$, e.g., using a $1$-hot encoding.
Our first theoretical result shows that the $1$-GNN architectures do not have more power in terms of distinguishing between non-isomorphic (sub-)graphs than the $1$-WL algorithm.
More formally, let $f^{W_1}_{\text{merge}}$ and $f^{W_2}_{\text{aggr}}$ be any two functions chosen in \eqref{eq:gnngeneral}.
For every encoding of the labels $l(v)$ as vectors $f^{(0)}(v)$, and for every choice of $\mathbf{W}^{(t)}$, we have that the coloring $c^{(t)}_l$ of $1$-WL always refines the coloring $f^{(t)}$ induced by a $1$-GNN parameterized by $\mathbf{W}^{(t)}$.
\begin{theorem}\label{thm:refine}
Let $(G, l)$ be a labeled graph. Then for all $t\ge 0$ and for all choices of initial colorings $f^{(0)}$ consistent with $l$, and weights $\mathbf{W}^{(t)}$,
\begin{equation*}
c^{(t)}_l \sqsubseteq f^{(t)}\,.
\end{equation*}
\end{theorem}
Our second result states that there exists a sequence of parameter matrices $\mathbf{W}^{(t)}$ such that $1$-GNNs have exactly the same power in terms of distinguishing non-isomorphic \mbox{(sub-)}graphs as the $1$-WL algorithm.
This even holds for the simple architecture~\eqref{eq:basicgnn}, provided we choose the encoding of the initial labeling $l$ in such a way that different labels are encoded by linearly independent vectors.
\begin{theorem}\label{equal}
Let $(G, l)$ be a labeled graph. Then for all \mbox{$t\geq 0$} there exists a sequence of weights $\mathbf{W}^{(t)}$, and a $1$-GNN architecture such that
\begin{equation*}
c^{(t)}_l \equiv f^{(t)}\,.
\end{equation*}
\end{theorem}
Hence, in the light of the above results, $1$-GNNs may be viewed as an extension of the $1$-WL which in principle have the same power but are more flexible in their ability to adapt to the learning task at hand and are able to handle continuous node features.
\subsection{Shortcomings of Both Approaches}
The power of $1$-WL has been completely characterized, see, e.g.,~\cite{Arv+2015}.
Hence, by using~\cref{thm:refine,equal}, this characterization is also applicable to $1$-GNNs.
On the other hand, $1$-GNNs have the same shortcomings as the $1$-WL.
For example, both methods will assign the same color to every node in a graph consisting of a triangle and a $4$-cycle, although the vertices of the triangle and those of the $4$-cycle are structurally different.
Moreover, they are not capable of capturing simple graph theoretic properties, e.g., triangle counts, which are an important measure in social network analysis~\cite{Mil+2002,New2003}.
\section{$\boldsymbol{k}$-dimensional Graph Neural Networks}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.65\linewidth}
\centering
\includegraphics[width=0.7\textwidth]{overview.pdf}
\caption{Hierarchical 1-2-3-GNN network architecture}\label{fig:architecture}
\end{subfigure}
\hspace{-.5cm}
\begin{subfigure}[b]{0.28\linewidth}
\centering
\includegraphics[width=0.7\textwidth]{pool.pdf}
\caption{Pooling from $2$- to $3$-GNN.}\label{fig:pooling}
\end{subfigure}
\caption{Illustration of the proposed hierarchical variant of the $k$-GNN layer. For each subgraph $S$ on $k$ nodes a feature $f$ is learned, which is initialized with the learned features of all $(k-1)$-element subgraphs of $S$. Hence, a hierarchical representation of the input graph is learned.}\label{fig:overview}
\end{figure*}
In the following, we propose a generalization of $1$-GNNs, so-called $k$-GNNs, which are based on the $k$-WL. Due to scalability and limited GPU memory, we consider a set-based version of the $k$-WL. For a given $k$, we consider all $k$-element subsets $[V(G)]^k$ over $V(G)$. Let $s = \{ s_1, \dotsc, s_k \}$ be a $k$-set in $[V(G)]^k$, then we define the \emph{neighborhood} of $s$ as
\begin{equation*}\label{eq:localNeighborhood}
N(s) = \{ t\in [V(G)]^k\mid |s\cap t|=k-1\}\,.
\end{equation*}
The \emph{local neighborhood}\, $N_L(s)$ consists of all $t\in N(s)$ such that $(v,w)\in E(G)$ for the unique $v\in s\setminus t$ and the unique $w\in t\setminus s$. The \emph{global neighborhood} $N_G(s)$ then is defined as $N(s) \setminus N_{\text{L}}(s)$.\footnote{Note that the definition of the local neighborhood is different from the one defined in~\cite{Mor+2017}, which is a superset of our definition.
Our computations therefore involve sparser graphs.}
The set based $k$-WL works analogously to the $k$-WL, i.e., it computes a coloring $c_{\text{s},k,l}^{(t)} : [V(G)]^k \to \Sigma$ as in \cref{eq:wlColoring} based on the above neighborhood.
Initially, $c_{\text{s},k,l}^{(0)}$ colors each element $s$ in $[V(G)]^k$ with the isomorphism type of $G[s]$.
Let $(G,l)$ be a labeled graph.
In each $k$-GNN layer $t \geq 0$, we compute a feature vector $f^{(t)}_{k}(s)$ for each $k$-set $s$ in $[V(G)]^k$.
For $t=0$, we set $f^{(0)}_{k}(s)$ to $f^{\text{iso}}(s)$, a one-hot encoding of the isomorphism type of $G[s]$ labeled by $l$. In each layer $t > 0$, we compute new features by
\begin{align*}
f^{(t)}_{k}(s) = \sigma & \Big( f^{(t-1)}_{k}(s) \cdot W_1^{(t)} +\, \sum_{\mathclap{u \in N_L (s) \cup N_G (s)}} \, f^{(t-1)}_{k}(u) \cdot W_2^{(t)}\Big)\,.
\end{align*}
Moreover, one could split the sum into two sums ranging over $N_L (s)$ and $N_G (s)$ respectively, using distinct parameter matrices to enable the model to learn the importance of local and global neighborhoods.
To scale $k$-GNNs to larger datasets and to prevent overfitting, we propose \emph{local} $k$-GNNs, where we omit the global neighborhood of $s$, i.e.,
\begin{equation*}
f^{(t)}_{k,\text{L}}(s) = \sigma \Big( f^{(t-1)}_{k,\text{L}}(s) \cdot W^{(t)}_1 + \,\sum_{\mathclap{u \in N_{L}(s)}} \, f^{(t-1)}_{k,\text{L}}(u) \cdot W^{(t)}_2 \Big)\,.
\end{equation*}
The running time of one evaluation of the above layer depends on $|V|$, $k$, and the sparsity of the graph (the cost of each iteration can be bounded by the number of subsets of size $k$ times the maximum degree). Note that we can scale our method to larger datasets by using the sampling strategies introduced in, e.g.,~\cite{Mor+2017,Ham+2017}. We can now lift the results of the previous section to the $k$-dimensional case.
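The neighborhoods of a $k$-set can be computed directly from their definitions; a small sketch (our naming, with edges given as a set of frozensets):

```python
def neighborhoods(edges, nodes, s):
    """Return (N(s), N_L(s), N_G(s)) for a k-set s.  A neighbour t replaces
    one element v of s by a node w outside s; t is local iff (v, w) is an
    edge, and global otherwise."""
    s = frozenset(s)
    n_all = {s - {v} | {w} for v in s for w in nodes - s}
    n_local = {s - {v} | {w} for v in s for w in nodes - s
               if frozenset((v, w)) in edges}
    return n_all, n_local, n_all - n_local
```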
\begin{proposition}\label{pro:refines}
Let $(G, l)$ be a labeled graph and let $k\geq 2$. Then for all $t \ge 0$, for all choices of initial colorings $f_k^{(0)}$ consistent with $l$ and for all weights $\mathbf{W}^{(t)}$,
\begin{equation*}
c^{(t)}_{\text{s},k,l} \sqsubseteq f^{(t)}_{k}\,.
\end{equation*}
\end{proposition}
Again the second result states that there exists a suitable initialization of the parameter matrices $\mathbf{W}^{(t)}$ such that $k$-GNNs have exactly the same power in terms of distinguishing non-isomorphic (sub-)graphs as the set-based $k$-WL.
\begin{proposition}\label{pro:equality}
Let $(G, l)$ be a labeled graph and let $k\geq 2$. Then for all $t \geq 0$ there exists a sequence of weights $\mathbf{W}^{(t)}$, and a $k$-GNN architecture such that
\begin{equation*}
c^{(t)}_{\text{s},k,l} \equiv f^{(t)}_{k}\,.
\end{equation*}
\end{proposition}
\subsection{Hierarchical Variant}
One key benefit of the end-to-end trainable $k$-GNN frame\-work---compared to the discrete $k$-WL algorithm---is that we can hierarchically combine representations learned at different granularities.
Concretely, rather than simply using one-hot indicator vectors as initial feature inputs in a $k$-GNN, we propose a \emph{hierarchical} variant of $k$-GNN that uses the features learned by a $(k-1)$-dimensional GNN, in addition to the (labeled) isomorphism type, as the initial features, i.e.,
\begin{equation*}
f^{(0)}_{k}(s) = \sigma \Big(\Big[f^{\text{iso}}(s), \sum_{u \subset s} f^{\,(T_{k-1})}_{k-1}(u) \Big] \cdot W_{k-1} \Big),
\end{equation*}
for some $T_{k-1} > 0$, where $W_{k-1}$ is a matrix of appropriate size, and square brackets denote matrix concatenation.
Hence, the features are recursively learned from dimensions $1$ to $k$ in an end-to-end fashion.
This hierarchical model also satisfies \cref{pro:refines,pro:equality}, so its representational capacity is theoretically equivalent to a standard $k$-GNN (in terms of its relationship to $k$-WL).
Nonetheless, hierarchy is a natural inductive bias for graph modeling, since many real-world graphs incorporate hierarchical structure, so we expect this hierarchical formulation to offer empirical utility.
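The hierarchical initialization can be sketched as follows (a stdlib-only illustration, not the reference implementation: $\sigma$ is taken to be a ReLU, and $u$ ranges over the $(k-1)$-element subsets of $s$).

```python
from itertools import combinations

def hierarchical_init(s, f_iso, f_prev, W, k):
    """Sketch of f_k^(0)(s) = sigma([f_iso(s), sum_u f_prev(u)] . W),
    where u ranges over the (k-1)-element subsets of the k-set s."""
    agg = None
    for u in combinations(sorted(s), k - 1):
        fu = f_prev[u]  # features learned by the (k-1)-dimensional GNN
        agg = list(fu) if agg is None else [a + b for a, b in zip(agg, fu)]
    x = list(f_iso[s]) + agg  # concatenation [f_iso(s), sum]
    out = [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(len(W[0]))]
    return [max(0.0, v) for v in out]  # sigma assumed to be a ReLU
```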
\begin{table*}[t]
\caption{Classification accuracies in percent on various graph benchmark datasets.}
\label{fig:classification_results}
\renewcommand{\arraystretch}{0.90}
\centering
\begin{tabular}{@{}clccccccc@{}}
\toprule
& \multirow{3}{*}{\vspace*{8pt}\textbf{Method}} & \multicolumn{7}{c}{\textbf{Dataset}} \\
\cmidrule{3-9}
& & {\textsc{Pro}} & {\textsc{IMDB-Bin}} & \!{\textsc{IMDB-Mul}} & \!{\textsc{PTC-FM}} & \!{\textsc{NCI1}} & \!{\textsc{Mutag}} & \!{\textsc{PTC-MR}} \\
\cmidrule{2-9}
\multirow{6}{*}{\rotatebox{90}{\hspace*{-6pt}Kernel}}
& \textsc{Graphlet} & 72.9 & 59.4 & 40.8 & 58.3 & 72.1 & 87.7 & 54.7 \\
& \textsc{Shortest-path} & \textbf{76.4} & 59.2 & 40.5 & 62.1 & 74.5 & 81.7 & 58.9 \\
& \textsc{$1$-WL} & 73.8 & 72.5 & \textbf{51.5} & 62.9 & 83.1 & 78.3 & 61.3 \\
& \textsc{$2$-WL} & 75.2 & 72.6 & 50.6 & \textbf{64.7} & 77.0 & 77.0 & 61.9 \\
& \textsc{$3$-WL} & 74.7 & 73.5 & 49.7 & 61.5 & 83.1 & 83.2 & 62.5 \\
& \textsc{WL-OA} & 75.3 & 73.1 & 50.4 & 62.7 & \textbf{86.1} & 84.5 & \textbf{63.6}
\\
\cmidrule{2-9}
\multirow{7}{*}{\rotatebox{90}{GNN}}
& \textsc{DCNN} & 61.3 & 49.1 & 33.5 & --- & 62.6 & 67.0 & 56.6 \\
& \textsc{PatchySan} & 75.9 & 71.0 & 45.2 & --- & 78.6 & \textbf{92.6} & 60.0 \\
& \textsc{DGCNN} & 75.5 & 70.0 & 47.8 & --- & 74.4 & 85.8 & 58.6 \\
\cmidrule{2-9}
& \textsc{$1$-Gnn No Tuning} & 70.7 & 69.4 & 47.3 & 59.0 & 58.6 & 82.7 & 51.2 \\
& \textsc{$1$-Gnn} & 72.2 & 71.2 & 47.7 & 59.3 & 74.3 & 82.2 & 59.0 \\
& \textsc{$1$-$2$-$3$-Gnn No Tuning} & 75.9 & 70.3 & 48.8 & 60.0 & 67.4 & 84.4 & 59.3 \\
& \textsc{$1$-$2$-$3$-Gnn} & 75.5 & \textbf{74.2} & 49.5 & 62.8 & 76.2 & 86.1 & 60.9 \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[t]
\caption{%
Mean absolute errors on the \textsc{Qm9} dataset. The far-right column shows the improvement of the best $k$-GNN model in comparison to the $1$-GNN baseline.
}%
\label{fig:qm9_results}
\renewcommand{\arraystretch}{1.0}
\centering
\begin{tabular}{lccccccc}
\toprule
\multirow{3}{*}{\vspace*{8pt}\textbf{Target}} & \multicolumn{7}{c}{\textbf{Method}} \\
\cmidrule{2-8}
& \!\!\!\textsc{Dtnn}~\cite{Wu+2018}\!\! & \!\!\textsc{Mpnn}~\cite{Wu+2018}\!\!\! & \textsc{$1$-Gnn} & \!\textsc{$1$-$2$-Gnn} & \textsc{$1$-$3$-Gnn} & \textsc{$1$-$2$-$3$-Gnn}\! & Gain \\
\midrule
$\mu$ & $\mathbf{0.244}$ & 0.358 & 0.493 & 0.493 & $\mathbf{0.473}$ & 0.476 & 4.0\% \\
$\alpha$ & 0.95 & 0.89 & 0.78 & $\mathbf{0.27}$ & 0.46 & $\mathbf{0.27}$ & 65.3\% \\
$\varepsilon_{\text{HOMO}}$ & 0.00388 & 0.00541 & $\mathbf{0.00321}$ & 0.00331 & 0.00328 & 0.00337 & -- \\
$\varepsilon_{\text{LUMO}}$ & 0.00512 & 0.00623 & 0.00355 & $\mathbf{0.00350}$ & 0.00354 & 0.00351 & 1.4\% \\
$\Delta\varepsilon$ & 0.0112 & 0.0066 & 0.0049 & 0.0047 & $\mathbf{0.0046}$ & 0.0048 & 6.1\% \\
$\langle R^2 \rangle$ & $\mathbf{17.0}$ & 28.5 & 34.1 & 21.5 & 25.8 & 22.9 & 37.0\% \\
\textsc{ZPVE} & 0.00172 & 0.00216 & 0.00124 & $\mathbf{0.00018}$ & 0.00064 & 0.00019 & 85.5\% \\
$U_0$ & 2.43 & 2.05 & 2.32 & $\mathbf{0.0357}$ & 0.6855 & 0.0427 & 98.5\% \\
$U$ & 2.43 & 2.00 & 2.08 & $\mathbf{0.107}$ & 0.686 & 0.111 & 94.9\% \\
$H$ & 2.43 & 2.02 & 2.23 & 0.070 & 0.794 & $\mathbf{0.0419}$ & 98.1\% \\
$G$ & 2.43 & 2.02 & 1.94 & 0.140 & 0.587 & $\mathbf{0.0469}$ & 97.6\% \\
$C_{\text{v}}$ & 0.27 & 0.42 & 0.27 & 0.0989 & 0.158 & $\mathbf{0.0944}$ & 65.0\% \\
\bottomrule
\end{tabular}
\end{table*}
\section{Experimental Study}
In the following, we want to investigate potential benefits of GNNs over graph kernels as well as the benefits of our proposed $k$-GNN architectures over $1$-GNN architectures. More precisely, we address the following questions:
\begin{description}
\item[Q1] How do the (hierarchical) $k$-GNNs perform in comparison to state-of-the-art graph kernels?
\item[Q2] How do the (hierarchical) $k$-GNNs perform in comparison to the $1$-GNN in graph classification and regression tasks?
\item[Q3]
How much (if any) improvement is provided by optimizing the parameters of the GNN aggregation function, compared to just using random GNN parameters while optimizing the parameters of the downstream classification/regression algorithm?
\end{description}
\subsection{Datasets }
To compare our $k$-GNN architectures to kernel approaches we use well-established benchmark datasets from the graph kernel literature~\cite{KKMMN2016}. The nodes of the graphs in these datasets are either annotated with (discrete) labels or unlabeled.
To demonstrate that our architectures scale to larger datasets and offer benefits on real-world applications, we conduct experiments on the \textsc{Qm9} dataset~\cite{Ram+2014,Rud+2012,Wu+2018}, which consists of 133\,385 small molecules. The aim here is to perform regression on twelve targets representing energetic, electronic, geometric, and thermodynamic properties, which were computed using density functional theory. We used the dataset provided at \url{http://moleculenet.ai}; cf. \cref{fig:qm9_details} in the Appendix for details.
\subsection{Baselines}
We use the following kernel and GNN methods as baselines for our experiments.
\xhdr{Kernel Baselines} We use the Graphlet kernel~\cite{She+2009}, the shortest-path kernel~\cite{Borgwardt2005}, the Weisfeiler-Lehman subtree kernel (\textsc{WL})~\cite{She+2011}, the Weisfeiler-Lehman Optimal Assignment kernel (\textsc{WL-OA})~\cite{Kri+2016}, and the global-local $k$-WL~\cite{Mor+2017} with $k$ in $\{2,3\}$ as kernel baselines.
For each kernel, we computed the normalized Gram matrix.
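The normalization mentioned here is the standard cosine normalization in the kernel's feature space; a minimal stdlib-only sketch (standard kernel-methods practice, not the authors' exact code):

```python
import math

def normalize_gram(K):
    """Normalized Gram matrix: K'[i][j] = K[i][j] / sqrt(K[i][i] * K[j][j]),
    i.e. cosine normalization in the kernel's feature space."""
    n = len(K)
    d = [math.sqrt(K[i][i]) for i in range(n)]
    return [[K[i][j] / (d[i] * d[j]) for j in range(n)] for i in range(n)]
```

After normalization every graph has unit self-similarity, which makes the SVM's $C$ parameter comparable across kernels.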
We used the $C$-SVM implementation of LIBSVM~\cite{Cha+2011} to compute the classification accuracies using 10-fold cross validation.
The parameter $C$ was selected from $\{10^{-3}, 10^{-2}, \dotsc, 10^{2},$ $10^{3}\}$ by 10-fold cross validation on the training folds.
\xhdr{Neural Baselines} To compare GNNs to kernels we used the basic $1$-GNN layer of~\cref{eq:basicgnn}, DCNN~\cite{Wang2018}, PatchySan~\cite{Nie+2016}, and DGCNN~\cite{Zha+2018}. For the \textsc{Qm9} dataset we used a $1$-GNN layer similar to~\cite{Gil+2017}, where we replaced the inner sum of~\cref{eq:basicgnn} with a 2-layer MLP in order to incorporate edge features (bond type and distance information). Moreover, we compare against the numbers provided in~\cite{Wu+2018}.
\subsection{Model Configuration}
We always used three layers for $1$-GNN, and two layers for (local) $2$-GNN and $3$-GNN, all with a hidden-dimension size of $64$.
For the hierarchical variant we used architectures that use features computed by $1$-GNN as initial features for the $2$-GNN ($1$-$2$-GNN) and $3$-GNN ($1$-$3$-GNN), respectively.
Moreover, we component-wise concatenated the computed features of the $1$-$2$-GNN and the $1$-$3$-GNN ($1$-$2$-$3$-GNN).
For the final classification and regression steps, we used a three-layer MLP, optimized with binary cross-entropy and mean squared error, respectively.
For classification we used a dropout layer with $p=0.5$ after the first layer of the MLP.
We applied global average pooling to generate a vector representation of the graph from the computed node features for each $k$.
The resulting vectors are concatenated column-wise before feeding them into the MLP.
Moreover, we used the Adam optimizer with an initial learning rate of $10^{-2}$ and applied an adaptive learning rate decay based on validation results to a minimum of $10^{-5}$. We trained the classification networks for $100$ epochs and the regression networks for $200$ epochs.
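The adaptive decay described here follows the usual reduce-on-plateau pattern. The sketch below is a stdlib-only illustration; only the initial rate ($10^{-2}$) and the floor ($10^{-5}$) come from the text, while the reduction factor and patience are arbitrary assumptions.

```python
def adaptive_lr(val_losses, lr0=1e-2, factor=0.5, patience=5, min_lr=1e-5):
    """Toy reduce-on-plateau schedule: halve the learning rate whenever the
    validation loss has not improved for `patience` epochs, never going
    below `min_lr`. Returns the learning rate used at each epoch."""
    lr, best, wait, history = lr0, float("inf"), 0, []
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                lr, wait = max(lr * factor, min_lr), 0
        history.append(lr)
    return history
```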
\subsection{Experimental Protocol}
For the smaller datasets, which we use for comparison against the kernel methods, we performed a 10-fold cross validation where we randomly sampled 10\% of each training fold to act as a validation set.
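For concreteness, one split of this protocol could be generated as follows (a stdlib-only sketch of the described procedure; the shuffling and seeding details are our own assumptions):

```python
import random

def cv_splits(n, n_folds=10, val_frac=0.1, seed=0):
    """Yield (train, val, test) index lists for n examples: each fold serves
    once as the test set, and `val_frac` of the remaining training indices
    is randomly held out as a validation set."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    folds = [idx[i::n_folds] for i in range(n_folds)]
    for f in range(n_folds):
        test = folds[f]
        rest = [i for g, fold in enumerate(folds) if g != f for i in fold]
        rng.shuffle(rest)
        n_val = max(1, int(round(val_frac * len(rest))))
        yield rest[n_val:], rest[:n_val], test
```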
For the \textsc{Qm9} dataset, we follow the dataset splits described in~\cite{Wu+2018}.
We randomly sampled 10\% of the examples for validation, another 10\% for testing, and used the remaining for training. We used the same initial node features as described in~\cite{Gil+2017}. Moreover, in order to illustrate the benefits of our hierarchical $k$-GNN architecture, we did not use a complete graph, where edges are annotated with pairwise distances, as input.
Instead, we only used pairwise Euclidean distances for connected nodes, computed from the provided node coordinates. The code was built upon the work of~\cite{Fey+2018} and is provided at~\url{https://github.com/chrsmrrs/k-gnn}.
\subsection{Results and Discussion}
In the following we answer questions \textbf{Q1} to \textbf{Q3}. \cref{fig:classification_results} shows the results for the comparison with the kernel methods on the graph classification benchmark datasets. Here, the hierarchical $k$-GNN is on par with the kernels despite the small dataset sizes (answering question \textbf{Q1}).
We also find that the 1-2-3-GNN significantly outperforms the 1-GNN on all seven datasets (answering \textbf{Q2}), with the 1-GNN being the overall weakest method across all tasks.\footnote{Note that in very recent work, GNNs have shown superior results over kernels when using advanced pooling techniques~\cite{Yin+2018}. Note that our layers can be combined with these pooling layers. However, we opted to use standard global pooling in order to compare a typical GNN implementation with standard off-the-shelf kernels.}
We can further see that optimizing the parameters of the aggregation function only leads to slight performance gains on two out of three datasets, and that no optimization even achieves better results on the \textsc{Proteins} benchmark dataset (answering \textbf{Q3}).
We attribute this effect to the one-hot encoded node labels, which allow the GNN to gather enough information from the neighborhood of a node even when the aggregation is not learned.
\cref{fig:qm9_results} shows the results for the \textsc{Qm9} dataset. On eleven out of twelve targets all of our hierarchical variants beat the $1$-GNN baseline, providing further evidence for \textbf{Q2}. For example, on the target $H$ we achieve a large improvement of 98.1\% in MAE compared to the baseline. Moreover, on ten out of twelve datasets, the hierarchical $k$-GNNs beat the baselines from~\cite{Wu+2018}.
However, the additional structural information extracted by the $k$-GNN layers does not serve all tasks equally, leading to huge differences in gains across the targets.
It should be noted that our $k$-GNN models have more parameters than the $1$-GNN model, since we stack two additional GNN layers for each $k$. However, extending the $1$-GNN model by additional layers to match the number of parameters of the $k$-GNN did not lead to better results in any experiment.
\section{Conclusion}
We presented a theoretical investigation of GNNs, showing that a wide class of GNN architectures cannot be stronger than the $1$-WL. On the positive side, we showed that, in principle, GNNs possess the same power in terms of distinguishing between non-isomorphic (sub-)graphs, while having the added benefit of adapting to the given data distribution. Based on this insight, we proposed $k$-GNNs, which are a generalization of GNNs based on the $k$-WL. This new model is strictly stronger than GNNs in terms of distinguishing non-isomorphic (sub-)graphs and is capable of distinguishing more graph properties. Moreover, we devised a hierarchical variant of $k$-GNNs, which can exploit the hierarchical organization of most real-world graphs. Our experimental study shows that $k$-GNNs consistently outperform $1$-GNNs and beat state-of-the-art neural architectures on large-scale molecule learning tasks. Future work includes designing task-specific $k$-GNNs, e.g., devising $k$-GNN layers that exploit expert knowledge in bio- and chemoinformatics settings.
\section*{Acknowledgments}
This work is supported by the German research council (DFG) within the Research Training Group 2236 \emph{UnRAVeL} and the Collaborative Research Center
SFB 876, \emph{Providing Information by Resource-Constrained
Analysis}, projects A6 and B2.
\fontsize{9.5pt}{10.5pt} \selectfont
\bibliographystyle{aaai}
\section{Introduction}
\label{s_int}
During large eruptive flares, a system of flare loops evolves from the impulsive phase to the often long-lasting
gradual phase \citep{sjm1992}. This is the result of a gradual magnetic reconnection in the corona when the energy is transported
downwards along the reconnected loops and the plasma from strongly heated low atmospheric layers is evaporated.
Due to this process the loops are filled with hot 10$^6$--10$^7$~K plasma, which subsequently cools down. The density distribution of such loops is an open question, but vital to physical models.
Such hot flare loops are now routinely observed e.g. by SDO/AIA \citep{lem12} in selected coronal passbands, while cooler loops cool below transition-region temperatures and finally become visible in chromospheric lines, such as H$\alpha$ \citep[e.g.][]{jing16} or \ion{Mg}{2} \citep[e.g.][]{mik17,lac17}.
These cool
flare loops, often misleadingly called 'post-flare' loops \citep{sve07}, exhibit large downflows, which
is a consequence of the catastrophic cooling. In the
meantime new hot loops form higher in the corona due to gradual reconnection \citep[see][]{sve92}.
This classical scenario corresponds to the so-called CSHKP model \citep{carmichael1964,sturrock1966,hirayama1974,kopppneuman1976} which is widely accepted.
Although it basically is a 2D model, its generalization to 3D retains similar physics \citep{jan15}. However, the whole process strongly
depends on the efficiency of the reconnection which is, for each event, gradually decreasing with time.
The amount of evaporated plasma directly depends on the amount of energy transported from loop tops down to the transition region and chromosphere. The loop density is thus a crucial parameter needed to understand the temporal evolution of flares, and namely their gradual phases.
At the beginning the cooling process may be dominated by conduction, while later on the radiative cooling takes over, which is proportional to density squared (or emission measure). In hot loops, the electron density or emission
measure can be diagnosed using various coronal lines, while cool loops with downflowing blobs pose a
more difficult problem. Being detected in cool chromospheric lines, their spectral diagnostics require
complex non-LTE radiative transfer performed for moving structures illuminated by the surrounding
atmosphere. Downward motions cause the so-called Doppler brightening (e.g. in the H$\alpha$ line) or
Doppler dimming \citep[for \ion{Mg}{2} see][]{mik17} which must be properly taken into account in order
to accurately derive the electron densities. For static loops (e.g. loop tops), \citet{hk87}
derived electron densities of the order of 10$^{12}$ cm$^{-3}$ for H$\alpha$ loops visible in
absorption against the solar disk, while at higher densities the loops may appear in emission.
Recent observations using the SDO/HMI instrument revealed flare loops above the limb, surprisingly well detectable in the visible continuum. First detections were reported by \citet{oliv14} and \citet{shil14} after X-class flares. These white-light (WL) loops reached heights of more than 10$^4$ km and the authors suggested that their brightness is due to the Thomson scattering of the incident photospheric radiation on loop electrons, with a possible thermal component (free-bound and free-free). They also used HMI's linear polarization to derive the electron density from the Thomson-scattering component.
On the disk, HMI was used to detect ribbons of many WL flares \citep[e.g.][]{kuh16}, assuming that the outermost HMI channel detects the visible continuum and not the \ion{Fe}{1} line emission during the flare; this was shown to be the case, even though the absolute value of the enhancement may be misrepresented with this method \citep{sva18}.
However, in the off-limb structures the photospheric \ion{Fe}{1} line is not seen and we seem to detect only the visible continuum around that wavelength. We wish to clearly distinguish between observations of chromospheric footpoints of otherwise hot flare loops \citep[e.g.,][]{kru15, hei17} and the full WL loops of \citet{oliv14}, \citet{shil14}, and this paper. Here we analyze a very bright loop system that was detected during the gradual phase of the X8.2 limb flare on September 10, 2017. After calibrating the HMI images, we derive plausible ranges of the electron densities for this event, considering quantitatively all relevant emission mechanisms.
The paper is organized as follows. In Section~\ref{s-obs} we present the SDO/HMI observations and data processing, Section~\ref{s-multi} discusses the multi-thermal nature of flare loops. Section~\ref{s-rad} details all considered emission mechanisms and
develops the new diagnostics technique for the electron density determination, while
Section~\ref{s-ne} presents the results of our diagnostics. Finally, Section~\ref{s-con} contains a discussion and conclusions.
\section{Observations and data processing of the loops}
\label{s-obs}
\begin{figure*}[tb]
\centering
\includegraphics[width=.8\textwidth]{20170910_paper_figgoes_exp.eps}
\caption{Left: The GOES X-ray 1--8 \AA\ flux (solid line) showing the X8.2 flare. The vertical dotted lines indicate the times of the panels on the right, which show the HMI continuum images of the evolving loop system. The off-limb intensity was enhanced for the display by dividing the regular HMI images by an exponential function and by setting the disk values to zero. A movie showing the full evolution is available online [20170910\_wl\_exp.mp4].}
\label{f-goes}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=.95\textwidth]{paper_loopint_cr_arcsec.eps}
\caption{Temporal evolution of SDO/HMI WL loops and their intensity. The left panel shows the HMI WL image with a cut through the flare loop (dotted red line). The
solid line in the right panel shows calibrated intensities with the pre-flare subtracted in CGS units along the marked red dotted line. The dashed line shows the pre-flare intensity divided by 10.}
\label{f-loopint}
\end{figure*}
Active region (AR) 12673 erupted near the west solar limb on September 10, 2017 with its maximum X-ray emission at 16:06 UT as a
strong X8.2 limb flare with well visible arcades of flare loops during its gradual phase. Its GOES X-ray plot and several snapshots of WL data from SDO/HMI are shown in Figure~\ref{f-goes}. We use the continuum channel of SDO/HMI (hmi.Ic\_45s), which is outside of the \ion{Fe}{1} line at 6173 \AA\ \citep{Scherrer2012}. The loop system was visible for more than one hour in the WL images of SDO/HMI.
We performed an absolute calibration of HMI intensities by taking their disk center value, which is about $5.63 \times 10^4$ counts and assigning it the continuum value from the atlas of \citet{neckel} at 6173 \AA, which is $0.315 \times 10^7$ erg s$^{-1}$ cm$^{-2}$ sr$^{-1}$ \AA$^{-1}$. We applied this conversion factor to the off-limb intensities. We also removed a large fraction of cosmic rays by checking if their intensity at a given time step exceeds three standard deviations of 134 time steps. If a pixel exceeded this threshold at 1 or 2 consecutive time steps, it was flagged as cosmic ray pixel and its value was substituted with the median value from the 2 previous and 2 posterior time steps. This rather conservative approach made sure that we did not filter long-lasting events (more than 2 time steps), but some cosmic rays remain as can be seen in the Figures.
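The temporal despiking step can be illustrated with a simplified, stdlib-only sketch operating on a single pixel's time series. The 3-sigma cut, the two-step run limit, and the median replacement over the 2 preceding and 2 following samples follow the description above; the exact HMI statistics (e.g. the 134-step window) are simplified here.

```python
import statistics

def despike(series, nsigma=3.0, max_run=2):
    """Replace short positive excursions above mean + nsigma * stdev by the
    median of the 2 samples before and 2 samples after the run; excursions
    lasting more than `max_run` consecutive steps are deliberately kept."""
    mean = statistics.fmean(series)
    sd = statistics.pstdev(series)
    flagged = [x > mean + nsigma * sd for x in series]
    out = list(series)
    i = 0
    while i < len(series):
        if flagged[i]:
            j = i
            while j < len(series) and flagged[j]:
                j += 1  # extend to the end of the flagged run
            if j - i <= max_run:
                neigh = series[max(0, i - 2):i] + series[j:j + 2]
                repl = statistics.median(neigh)
                for t in range(i, j):
                    out[t] = repl
            i = j
        else:
            i += 1
    return out
```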
The left panel of Figure~\ref{f-loopint} shows the temporal evolution of the flare loops during the gradual phase between 16:01 and 16:47 UT when the WL continuum emission enhancement is observed by HMI. In the right column we can see the variation
of the specific intensity of the WL continuum radiation along the
horizontal cut marked by a red dotted line through the flare loop after subtraction of pre-flare images. We later convert these intensities into electron densities and can thus also determine the maximum electron density $n_{\rm e}$ in the flare loops.
We further coaligned the SDO/HMI data with all EUV channels of the
SDO/AIA data. These channels cover the temperature range between 5
$\times$ 10$^{4}$ and 2 $\times$ 10$^{7}$~K by observing the transition region and corona \citep{lem12}. The spatial resolution of AIA is 1\farcs2, while for SDO/HMI it is 1\arcsec\ and the temporal resolution is 12 s for AIA images and 45 s for HMI observations.
\section{Multi-thermal loops seen by SDO/AIA}
\label{s-multi}
\begin{figure*}
\centering
\includegraphics[width=.95\textwidth]{20170910_maxcut_oplot_061.eps}
\caption{An example of flare loops of AR 12673 on the west limb on September 10, 2017 at 16:20:03.8 UT in SDO/HMI (upper left plot, in this case without a correction for cosmic rays) and various SDO/AIA passbands (labeled in their titles). In the HMI panel a pre-flare image from 15:35:48.8 UT was subtracted.
HMI loop contours (marked in blue) are overlaid on all images.
The emission along the red dotted line is shown for all channels in the bottom right panel. The absolute units are valid for SDO/HMI, all AIA wavelengths were scaled to have their maximum at the same level. It is visible that the loop height in HMI coincides with AIA 1600, while the AIA 1700 loop is slightly lower and the loop heights of all other AIA passbands peak higher.}
\label{f-aia}
\end{figure*}
Apart from the SDO/HMI WL evolution, we also examine the behavior of the flare loops in various SDO/AIA channels. The AIA images clearly show a multi-thermal nature of the observed loop arcade and we can deduce spatial correlations between loops in different AIA channels and those detected by HMI in the WL. In this section we provide a qualitative description of the multi-thermal behavior and discuss possible mechanisms responsible for the formation of AIA diagnostics.
Most AIA channels and a pre-flare subtracted HMI image are shown in Figure~\ref{f-aia}. In this HMI image, the cosmic rays were not removed on purpose to show their prevalence. We also plot the intensity variations along one cut through the loop system (red dotted line in Figure~\ref{f-aia}). The absolute intensity on the y-axis refers only to HMI, all AIA signals were scaled to have their maximum at an arbitrary value.
\subsection{AIA 193, 131 and 94}
The AIA 193 channel shows both hot loop emission in the \ion{Fe}{12} and \ion{Fe}{24} lines, as well as cool loops at chromospheric temperatures seen as dark absorbing features, also reported by \citet{son16}. In this case, the cool loops in front of the hot ones absorb the EUV radiation emitted by iron lines, and the absorption process corresponds to photoionization of the hydrogen and helium by background EUV photons. The dominant process is the photoionization of helium and since the cross-sections of \ion{He}{1} and \ion{He}{2} at 193 \AA\ are about the same, we do not need to consider
\ion{He}{1} and \ion{He}{2} separately \citep{anz05}. The \ion{He}{1} and \ion{He}{2} photoionization continua start at 504 \AA\ and 228 \AA, respectively.
We clearly see that the intensity depression in the 193
channel (yellow dash-dotted line in Fig.~\ref{f-aia}), downwards from roughly X=971\arcsec, correlates well with the position of the WL loops from HMI. The fact that hot 193 loops are located above those of WL is consistent with
the standard scenario of gradual reconnection in which the reconnected hot loops appear higher and higher, but at a given height they cool down and appear gradually at lower and lower temperatures \citep[for a brief overview of flare loop characteristics see e.g.][]{mik17}. However, the loop arcade studied in this paper seems to be aligned along the line of sight, and due to the projection of differently inclined loops we may partially see also the hot loops behind the cool ones, with their emission attenuated by photoionization in the cool loops. If the cool loops are optically thick in the \ion{He}{1}-\ion{He}{2} continua (plus some opacity from the \ion{H}{1} Lyman continuum), the hot background loops will not be visible in 193, but will definitely contribute to the HMI WL continuum emission, which is optically thin and thus covers the whole arcade along the line of sight (see below). The presence of hotter loops behind the cool ones seems to be indicated by a perfect co-alignment between the HMI WL loops, the dark 193 loops, and the loops seen in emission in the AIA 1600 and 1700 channels. The other two AIA channels, 131 and 94, exhibit a similar behavior to the 193 one, but the absorption decreases with decreasing wavelength and the emission is due to different iron ions \citep{lem12}.
\subsection{AIA 1600 and 1700}
We believe that the AIA 1600 channel loops are mainly due to \ion{C}{4} line emission at temperatures of the order of 10$^5$~K, meaning that cool (10$^4$~K) loops and hotter \ion{C}{4} loops are located at similar heights, because the time for \ion{C}{4} loops to cool down to cool-loop temperatures is very short, especially at the rather high electron densities we derive in this study; see the cartoon of gradual reconnection and cooling in \citet{sch96}.
The AIA 1700 channel contains mainly cooler lines such as \ion{C}{1} and \ion{He}{2} \citep{sim18}, which might explain why the emission peak is slightly shifted towards lower heights.
\subsection{AIA 304}
AIA 304 images show a more complex morphology in which a combination of bright and dark loops is visible. In our opinion, this is a mixture of cooler loops at the temperature of \ion{He}{2} 304 \AA\ line formation (around 5 $\times 10^4$ K), which are seen in emission above the limb; because of the large opacity in the \ion{He}{2} line, however, some forefront loops can obscure this emission, producing dark absorbing features. A striking feature is the extension of the 304 loops well above those at 1600 \AA. This is somewhat difficult to explain, because \ion{He}{2} 304 \AA\ loops cannot form at temperatures higher than the 1600 loops with \ion{C}{4} and thus, according to the above-mentioned reconnection scenario, they should not be located higher than the 1600 loops. However, the 304 AIA channel may be contaminated by the nearby \ion{Si}{11} line at 303.3 \AA, which can produce emission at altitudes higher than those of the 1600 channel, but still somewhat lower than the 193 \ion{Fe}{12} loops.
It is not the aim of this study to perform a quantitative analysis of emissions or absorptions in all these AIA channels and we provide this discussion just to relate the HMI WL loops to structures seen by SDO/AIA. However, in the future it would be important
to analyze these height variations in detail, for example to estimate densities from the EUV absorptions, or to deduce the (differential) emission measure in hot loops. Such parameters could be compared with our findings for HMI loops.
\section{Continuum radiation processes in flare loops}
\label{s-rad}
\begin{table} \small
\centering
\caption{Parameters of WL flare loops: time $t$, height above the solar disk, peak of the specific intensity of the WL continuum radiation, dilution factor $W(H, \nu)$, and diluted mean intensity of the incident radiation from the solar disk $J(\nu)$.
Here cgs represents the units erg~s$^{-1}$~cm$^{-2}$~sr$^{-1}$.
Note that the last image has two peaks and the values are shown for both of them: the last two rows for the left and right peak, respectively.}
\label{t-par}
\begin{tabular}{ccccc}
\hline
\hline
{\it t} & {\it H} & {\it I$_{\rm WL}$} & {\it W(H, $\nu$)}&{\it J($\nu$)} \\
(UT) & (km) & (cgs~\AA$^{-1}$) & & (10$^{-5}$~cgs~Hz$^{-1}$) \\
\hline
16:01:18.8 & 5500 & 6600 & 0.329 & 1.266 \\
16:11:03.8 & 12\,500 & 24\,000 & 0.313 & 1.204 \\
16:19:18.8 & 14\,500 & 20\,000 &0.308 & 1.188 \\
16:47:03.8 & 21\,500 & 7800 & 0.294 & 1.132 \\
16:47:03.8 & 25\,000 & 7700 & 0.287 & 1.105 \\
\hline
\end{tabular}
\end{table}
WL continuum emission in flare loops observed off the limb is mainly due to four different mechanisms:
i) Thomson scattering of the incident solar radiation on flare loop electrons,
ii) hydrogen Paschen recombination continuum (i.e., protons capturing free thermal electrons) with the
continuum head at 8204~\AA,
iii) hydrogen Brackett recombination continuum with the continuum head at 14584~\AA, and finally
iv) hydrogen free-free continuum emission due to energy losses of free thermal electrons in the electric field of protons. Here we
neglect higher hydrogen recombination continua and other continuum sources.
Below we present the explicit forms of emissivities for all these processes \citep[see also][]{hub15,hei17,hei18}.
\subsection{Optically-thin loops}
The specific intensity of optically-thin continuum radiation is generally written as
\begin{equation}
I_{\rm WL}(\nu)= \eta(\nu)~D_{\rm eff} \ ,
\label{e-ivl}
\end{equation}
where $D_{\rm eff}$ is the effective thickness, $\eta(\nu)$ is the emissivity and $\nu$ the frequency, in our case corresponding to the wavelength around the HMI \ion{Fe}{1} line at 6173~\AA.
The Thomson continuum emissivity is expressed as
\begin{equation}
\eta^{\rm{Th}}(\nu) = n_{\rm e}~\sigma_{\rm T}~J(\nu) \ ,
\label{e-th}
\end{equation}
where $\sigma_{\rm T} = 6.65 \times 10^{-25}$~cm$^2$ is the absorption cross-section for Thomson scattering
and $J(\nu)$ is the intensity of radiation emitted from the
solar disk center multiplied by a dilution factor $W(H, \nu)$ which takes into account center-to-limb continuum variation and depends on
the loop height $H$ and frequency. $W(H, \nu)$ and
$J(\nu)$ are shown in Table~\ref{t-par} and are computed according to \citet{jej09}.
The Paschen and Brackett continuum emissivity is written as
\begin{equation}
\eta^{i}(\nu) = n_{\rm e}~n_{\rm p}~F_i(\nu,T) \ ,
\label{e-pabr}
\end{equation}
where the principal quantum number is $i = 3$ for the Paschen (Pa) continuum and $i = 4$ for the Brackett (Br) continuum.
$n_{\rm e}$ and $n_{\rm p}$ are the electron and proton densities, respectively, and $T$ is the kinetic
temperature of the loop. The function $F_i(\nu,T)$ takes the form \citep{hei17}
\begin{eqnarray}
F_i(\nu,T) &=& 1.1658 \times 10^{14}~g_{\rm bf}(i, \nu)~T^{-3/2}~B_{\rm \nu}(T) \nonumber\\
&\times& e^{h\nu_i/kT}~(1 - e^{- h\nu/kT})~(i \nu)^{-3} \, .
\label{e-f}
\end{eqnarray}
Here $h$ and $k$ are Planck and Boltzmann constants, respectively, $g_{\rm bf}(i, \nu)$ is the bound-free Gaunt factor and
$B_{\nu}(T)$ is the Planck function. $\nu_i$ is the frequency at the respective continuum head.
The Paschen and Brackett bound-free Gaunt factors at 6173~\AA~are $g_{\rm bf}(3, \nu)$= 0.942 and $g_{\rm bf}(4, \nu)$= 0.998, respectively \citep{mih67}.
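For concreteness, the free-bound emissivity function of Equation~(\ref{e-f}) can be evaluated numerically. The sketch below (Python, CGS units) uses the Gaunt factors quoted above; the density and temperature values are illustrative inputs, not results from this paper.

```python
import math

# Physical constants in CGS units
h = 6.6261e-27    # Planck constant [erg s]
k = 1.3807e-16    # Boltzmann constant [erg / K]
c = 2.9979e10     # speed of light [cm / s]

def planck_nu(nu, T):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * h * nu ** 3 / c ** 2 / math.expm1(h * nu / (k * T))

def F_i(i, nu, T, g_bf):
    """Free-bound emissivity function F_i(nu, T) of Equation (e-f)."""
    nu_i = 3.288e15 / i ** 2     # frequency of the continuum head [Hz]
    return (1.1658e14 * g_bf * T ** -1.5 * planck_nu(nu, T)
            * math.exp(h * nu_i / (k * T))
            * (-math.expm1(-h * nu / (k * T)))
            * (i * nu) ** -3)

# Paschen (i=3) and Brackett (i=4) emissivities at 6173 A for an
# illustrative cool loop (densities and temperature are assumptions)
nu = c / 6173e-8                 # frequency of the HMI Fe I line [Hz]
n_e = n_p = 1e12                 # electron and proton densities [cm^-3]
T = 1e4                          # kinetic temperature [K]
eta_Pa = n_e * n_p * F_i(3, nu, T, g_bf=0.942)
eta_Br = n_e * n_p * F_i(4, nu, T, g_bf=0.998)
```

At $10^4$~K the Paschen term dominates the Brackett term, consistent with the relative weights $e^{h\nu_i/kT}/i^3$ in Equation~(\ref{e-f}).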
The hydrogen free-free continuum emissivity is simply related to the Paschen emissivity as \citep{hei17}
\begin{equation}
\eta^{\rm ff}(\nu) = 8.546 \times 10^{-5}~\frac{g_{\rm ff}(\nu,T)}{g_{\rm bf}(3, \nu)} T e^{-h\nu_3/kT} \times \eta^{\rm Pa}(\nu) \, .
\label{e-ff}
\end{equation}
Here $g_{\rm ff}(\nu,T)$ is the free-free Gaunt factor \citep[see][Table~1]{ber56}.
The total WL radiation intensity of an optically thin loop takes into account all four processes, i.e.
\begin{equation}
I_{\rm WL}(\nu) = I^{\rm Th}(\nu) + I^{\rm Pa}(\nu) + I^{\rm Br}(\nu) + I^{\rm ff}(\nu) \ .
\label{e-4pr}
\end{equation}
Equation~(\ref{e-4pr}) can be written using Equations~(\ref{e-ivl}) - (\ref{e-ff}) and assuming
pure hydrogen plasma with $n_{\rm e} = n_{\rm p}$ as
\begin{eqnarray}
I_{\rm WL}(\nu) &=& n_{\rm e} \sigma_{\rm T} J(\nu) D_{\rm eff} + n_{\rm e}^{2} F_3(\nu, T) D_{\rm eff} \nonumber\\
&\times& (1 + 8.546 \times 10^{-5} \frac{g_{\rm ff}(\nu,T)}{g_{\rm bf}(3, \nu)} T e^{-h\nu_3/kT}) \nonumber\\
&+& n_{\rm e}^{2} F_4(\nu, T) D_{\rm eff} \, .
\label{e-all}
\end{eqnarray}
We thus obtain a {\em quadratic equation} to be solved for $n_{\rm e}$ at a given frequency, temperature, and effective
thickness, and for a given (measured) intensity of the WL radiation.
Note that the second line in Equation~(\ref{e-all}) shows the relative importance of the Paschen and free-free continua.
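Solving Equation~(\ref{e-all}) for $n_{\rm e}$ amounts to taking the positive root of a quadratic. A minimal Python sketch; the linear (Thomson) and quadratic (thermal) coefficients below are hypothetical placeholders standing in for the full expressions above, chosen only so that the thermal term dominates, as it does in bright flare loops.

```python
import math

def electron_density(I_wl, a_lin, b_quad):
    """Positive root of b_quad * n_e**2 + a_lin * n_e = I_wl.

    a_lin  ~ sigma_T * J(nu) * D_eff               (Thomson term, linear in n_e)
    b_quad ~ (F_3 * (1 + ff_term) + F_4) * D_eff   (thermal terms, quadratic in n_e)
    """
    return (-a_lin + math.sqrt(a_lin ** 2 + 4.0 * b_quad * I_wl)) / (2.0 * b_quad)

# Hypothetical coefficient values (not taken from the paper):
a_lin, b_quad, I_wl = 1e-21, 1e-31, 1e-5
n_e = electron_density(I_wl, a_lin, b_quad)
```

With these illustrative coefficients the inferred density comes out near $10^{13}$~cm$^{-3}$, the order of magnitude reported below for the brightest loops.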
\begin{figure*} [tbh]
\centering
\includegraphics[width=\textwidth]{ivstne.eps}
\caption{Contribution of individual processes (Thomson, free-bound, free-free) to the flare loop WL emission as a
function of temperature at $H$= 10$^{4}$~km, $D_{\rm eff}$ = 1000~km and for $n_{\rm e}$ = 10$^{11}$~cm$^{-3}$
({\it left panel}), $n_{\rm e}$ = 10$^{12}$~cm$^{-3}$
({\it middle panel}), and $n_{\rm e}$ = 10$^{13}$~cm$^{-3}$ ({\it right panel}). The Thomson continuum dominates only for low electron densities. At high densities, the Paschen and Brackett continua dominate at lower temperatures, and the free-free emission at high temperatures. }
\label{f-modne}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.425\textwidth]{IvsT.eps}
\caption{Computed WL radiation intensity as a function of temperature for a selected range of electron densities at $D_{\rm eff}$ = 1000~km.
Note that all four processes (Thomson, Paschen, Brackett and free-free) are taken into account for the WL emission.}
\label{f-ica}
\end{figure}
\subsection{Relative contribution of the different emission mechanisms}
The computed WL emission for optically thin structures as a function of temperature is shown in Figure~\ref{f-modne} for
$n_{\rm e}$ equal to 10$^{11}$, 10$^{12}$, and 10$^{13}$~cm$^{-3}$ and a characteristic $D_{\rm eff}$ = 1000~km. At high electron densities, the Thomson continuum is completely negligible compared to the total WL emission. The Paschen and Brackett continua are dominant only at lower temperatures up to about 2.5 $\times$ 10$^4$~K.
At higher temperatures, the free-free continuum becomes dominant. Therefore, the flare loop WL emission
can arise from both cool and hot loop structures.
To show the dependence of the electron density diagnostics on temperature, we computed the WL emission from Equation~(\ref{e-all})
for a range of temperatures between 10$^4$ and 10$^6$~K and for five different electron densities (10$^{11}$, 5 $\times$ 10$^{11}$, 10$^{12}$, 5 $\times$ 10$^{12}$ and 10$^{13}$~cm$^{-3}$) at $D_{\rm eff}$ = 1000~km. Figure~\ref{f-ica} shows the flare loop WL radiation intensity at a given temperature and electron density, computed by adding together all processes. The figure shows that the temperature has only a minor influence on the electron density determination for a given intensity.
\section{Electron density in flare loops}
\label{s-ne}
\begin{figure*}
\centering
\includegraphics[width=.81\textwidth]{conhis.eps}
\vspace{0mm}
\caption{Contour plots of the electron density as a function of effective thickness and temperature, computed for the maximum intensity at a given time ({\it left panels}), together with the distribution of electron density ({\it right panels}), for our grid of 208 models constructed by sampling the parameter space shown in the left panels. }
\label{f-net}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=.48\textwidth]{hth.eps}
\caption{Distribution of the ratio of Thomson continuum radiation to the total WL radiation intensity at the chosen pixel, for selected time steps of the observations.}
\label{f-his}
\end{figure}
\begin{figure*}
\centering
\hspace{-0.5cm}
\includegraphics[width=1.02\textwidth]{density.eps}
\vspace{-8mm}
\caption{Temporal evolution of the electron density for four selected models computed from Equation~(\ref{e-all}), shown at four selected time steps. Units along the $x$ and $y$ axes are arcsec. The maximum density is reached at the middle of the loop top.
A movie corresponding to this figure is available as online material of the journal.}
\label{f-density}
\end{figure*}
\subsection{Exploring the parameter space of electron densities}
From calibrated HMI WL intensity data we estimate the maximum radiation intensity in the flare loop system and the corresponding height above the solar surface
(see Table~\ref{t-par}). Since we cannot obtain the loop temperature and effective thickness from SDO data, we computed the electron density by solving Equation~(\ref{e-all}) on a grid of temperatures between 6000 and 10$^6$~K, which covers cool as well as hot structures. For the effective thickness we take values between 200 and 20\,000~km. Altogether we have 208 models for a given time of the observations, which allow us to explore the parameter space of the expected electron density.
The electron density as a function of effective
thickness and temperature is presented in the left panels of Figure~\ref{f-net} as contour plots for different times of the observations. Note that for the fourth selected time step we only focus on the left peak, which has a higher intensity (both peaks give rather similar values of electron density). The electron density
increases with temperature and decreases with effective
thickness, and it is more sensitive to the effective thickness than to the temperature. For the maximal brightness at 16:11:03.8 UT and an assumed effective thickness of 1000~km, the electron density increases by a factor of two between cool (6000~K) and hot (10$^6$~K) loops.
Arcades of cool loops would have
$n_{\rm e}$ $\sim$ 7.3 $\times$ 10$^{12}$~cm$^{-3}$ while arcades of hot loops would have $n_{\rm e}$ $\sim$ 1.5 $\times$ 10$^{13}$~cm$^{-3}$.
This behavior follows directly from Figure~\ref{f-ica}.
Normally the system of flare loops is a mixture of hot and cool loops along the line of sight, thus the electron density is expected to lie between these two extreme values. We thus obtain a relatively high electron density of the order of 10$^{13}$~cm$^{-3}$ at $D_{\rm eff}$ = 1000~km.
At higher effective thicknesses and for the same brightness, the obtained electron density decreases. For example, at $D_{\rm eff}$ = 10\,000~km the electron density would lie roughly between
2.2 $\times$ 10$^{12}$~cm$^{-3}$ and
4.5 $\times$ 10$^{12}$~cm$^{-3}$ for the two extreme temperatures (neglecting Thomson scattering, the WL intensity is proportional to the emission measure $n_{\rm e}^2 D_{\rm eff}$).
The right panels of Figure~\ref{f-net} show histograms of electron density at different times of the observations. The range of electron densities shifts higher at higher $I_{\rm WL}$
and lies between 7.9 $\times$ 10$^{11}$ and 3.4 $\times$ 10$^{13}$~cm$^{-3}$ for our assumed parameter space and the maximum of the cut through the loop at our selected pixel. The weighted mean electron density is 3.8 $\times$ 10$^{12}$, 7.6 $\times$ 10$^{12}$, 7.0 $\times$ 10$^{12}$,
and 4.1 $\times$ 10$^{12}$~cm$^{-3}$ for the four histograms.
To check the quality of the inversion, we can compute the WL radiation intensity from Equation~(\ref{e-all}), where the input parameters are temperature and electron density shown in Figure~\ref{f-net}. The difference between computed and observed $I_{\rm WL}$ is below 0.6~\%.
For comparison we also computed the ratio of the Thomson contribution to the total WL radiation intensity. The results are shown as histograms for all four
different times of the observations in Figure~\ref{f-his}. The histograms clearly show that the Thomson contribution is relatively small: up to 15\% for low
WL radiation intensities and up to 5\% for higher ones.
\subsection{Converting observed intensity maps into density maps}
Figure~\ref{f-density} shows the temporal evolution of the electron density for the two representative temperatures
10$^4$ and 5 $\times$ 10$^5$~K and the two effective thicknesses 5000 and 20\,000~km, estimating the maximum radiation intensity at a height of 10\,000~km for four selected time steps. All other time steps can be found in the online movie. The center of the loop top has the highest density, which decreases towards the limb.
\section{Discussion and conclusions}
\label{s-con}
In this paper we reported on SDO/HMI off-limb observations of a large X8.2 class flare, one of the strongest flares detected during solar cycle 24. Well visible flare loops were seen
in the HMI pseudo-continuum channel during the gradual phase. Although similar WL loops have already been
analyzed for weaker flares \citep{oliv14,shil14}, this event is quite interesting due to its extraordinary brightness.
This is the first time that the HMI WL loop brightness has been analyzed using quantitative modeling that includes all relevant emission processes. We demonstrate that for this strong flare the HMI intensities are dominated by the hydrogen recombination continuum, i.e. the Paschen continuum at the HMI wavelength 6173~\AA, with a small contribution
from the tail of the Brackett continuum when assuming temperatures around 10$^4$~K. However, because we clearly see the multi-thermal character of the whole loop arcade in SDO/AIA imaging, we also consider the free-free emission, which plays a significant role at higher temperatures. Both the hydrogen recombination and the free-free emission are proportional to the loop emission measure, i.e. to the square of the mean electron
density. On the other hand, Thomson scattering of the photospheric light on the loop electrons is linearly proportional
to electron density and cannot explain the observed brightness. We show that the contribution of the Thomson scattering
to total continuum intensity is only a few percent at most in these very bright loops, but nevertheless we take it into
account when solving the quadratic Equation~(\ref{e-all}) for the electron density.
The densities we obtain are unusually high for
flare loops in the gradual phase, ranging between 10$^{12}$ and 10$^{13}$ cm$^{-3}$ and mainly depending on the estimate of the
line of sight extension of the loop arcade.
As shown by \citet{shil14} in the case of their weaker flare, the Thomson-scattered radiation is partially linearly polarized, and this was detected by HMI during their analyzed flare. In the case of strong flares, the ratio of linear polarization $Q/I$ will be small, because $Q$ increases only linearly with the electron density while $I$, dominated by thermal processes (recombination and free-free emission), scales
quadratically with $n_{\rm e}$. For our observations, one would therefore not expect significant linear polarization, at least not co-spatial to the loop top where the density is high.
As suggested in \citet{shil14}, the distinction between Thomson scattering and processes proportional to the emission measure can be used to advantage to efficiently disentangle $n_{\rm e}$ and $D_{\rm eff}$. An analogous analysis method was developed for solar prominences \citep{jej09}, which also represent cool off-limb structures (note that flare loops were previously classified as 'loop prominences' and are now often called 'coronal rain', see e.g. \citeauthor{scu16} \citeyear{scu16}). However, the electron densities of prominences are low and thus Thomson scattering completely dominates their WL emission, which can be detected
only during solar eclipses. In Figure~\ref{f-ica}, $n_{\rm e}$ = 10$^{11}$~cm$^{-3}$ corresponds to an upper limit of the electron density usually met in quiescent prominences, and the
continuum intensity is thus a few orders of magnitude lower than that detected in our studied flare loops. This also explains why typical solar prominences have never been seen by HMI: their predicted intensity is well below the detection limit.
In a future study we plan to analyze other HMI off-limb observations and derive the flare-loop electron densities,
which may improve our understanding of the significance of WL loops on other flaring stars,
in particular on those producing superflares as suggested by \citet{hei18}. The range of electron densities for a particular loop system can also be constrained by a detailed analysis of the linear-polarization signal from HMI. Moreover, for this X8.2 flare
spectral line data exist from various ground-based observations, and these, together with complex non-LTE modeling
of flare loops, can also provide the independent density diagnostics needed for a better understanding of evaporative processes and subsequent flare-loop cooling.
\begin{acknowledgements}
SJ acknowledges the financial support from the Slovenian Research Agency No. P1-0188. SJ and PH acknowledge the support from the Czech Funding Agency through the grant No. 16-18495S, and PH the partial
support from the grant No. 16-16861S. The funding from RVO-67985815 is also acknowledged.
\end{acknowledgements}
\bibliographystyle{aasjournal}
\section{Algorithm}
\label{sec:Algorithm}
The new algorithm is called {\tt RecurRank}\xspace (`recursive ranker').
The algorithm maintains a partition of the $K$ positions into intervals. Associated with each interval is an integer-valued `phase number'
and an ordered set of items, which has the same size as the interval for all but the last interval (containing position $K$).
Initially the partition only contains one interval that is associated with all the items and phase number $\ell = 1$.
At any point in time, {\tt RecurRank}\xspace works in parallel on all intervals.
Within an interval associated with phase number $\ell$, the algorithm balances exploration and exploitation while determining the relative attractiveness of the items to accuracy $\Delta_\ell = 2^{-\ell}$.
To do this, items are placed in the first position of the interval in proportion to an experimental design. The remaining
items are placed in order in the remaining positions. Once sufficient data is collected, the interval is divided into a collection of subintervals
and the algorithm is restarted on each subinterval with the phase number increased.
The natural implementation of the algorithm maintains a list of partitions and associated items. In each round it iterates over the partitions and makes assignments of the items within
each partition. The assignments are based on a round-robin idea using an experimental design, which means the algorithm needs to keep track of how often each item has been placed in
the first position. This is not a problem
from an implementation perspective, but stateful code is hard to interpret in pseudocode. We provide a recursive implementation that describes the assignments made within each
interval and the rules for creating a new partition.
A flow chart depicting the operation of the algorithm is given in \cref{fig:flow chart for algo}. The code is provided in the supplementary material.
\begin{algorithm}[thb!]
\begin{algorithmic}[1]
\STATE \label{alg:main:input} \textbf{Input: } Phase number $\ell$ and \\ $\mathcal{A} = (a_1,a_2,\ldots)$ and $\mathcal K = (k,k+1,\ldots,k+m-1)$
\STATE \label{alg:main:g-optimal design}
Find a $G$-optimal design $\pi = \operatorname{\textsc{Gopt}}(\mathcal{A})$
\STATE \label{alg:main:define T(a)}
Let $\Delta_\ell = 2^{-\ell}$ and
\begin{align}
T(a) = \ceil{\frac{d\,\pi(a)}{2\Delta_\ell^2}\log\left(\frac{|\mathcal{A}|}{\delta_\ell}\right)}
\label{eq:allocchoice}
\end{align}
This instance runs for $\sum_{a \in \mathcal{A}} T(a)$ rounds
\STATE
\label{alg:main:select}
Select each item $a\in \mathcal{A}$ exactly $T(a)$ times at position $k$
and place the available items of $\{a_1,\ldots,a_m\}$ sequentially in positions $\{k+1,\ldots,k+m-1\}$, receiving feedback (synchronized by a global clock).
\STATE
\label{alg:main:compute theta hat}
Let $\mathcal D = \{(\beta_1, \zeta_1), \ldots\}$ be the multiset of item/click pairs collected at position $k$
and compute
\begin{align}
\hat \theta &= V^\dagger S \quad \text{ with } \label{eq:lse} \\
& \qquad V = \sum_{(\beta, \zeta) \in \mathcal D} \beta \beta^{\top} \text{ and } S = \sum_{(\beta, \zeta) \in \mathcal D} \beta \zeta \nonumber
\end{align}
\STATE Let $a^{(1)},a^{(2)},\ldots,a^{(|\mathcal{A}|)}$ be an ordering of $\mathcal{A}$ such that
\begin{align*}
\epsilon_i = \ip{\hat \theta, a^{(i)} - a^{(i+1)}} \geq 0 \text{ for all } 1 \leq i < |\mathcal{A}|
\end{align*}
and set $\epsilon_{|\mathcal{A}|}=2\Delta_\ell$
\STATE Let $(u_1,\ldots,u_p) = (i \in [|\mathcal{A}|] : \epsilon_i \geq 2\Delta_\ell)$, set $u_0 = 0$, and for each $i \in [p]$ define
\begin{align*}
\mathcal{A}_i &= (a^{(u_{i-1}+1)},\ldots,a^{(u_{i})}) \\
\mathcal K_i &= (k+u_{i-1},\ldots,k+\min(m,u_i)-1)
\end{align*}
\STATE \label{alg:main:elim}
For each $i \in [p]$ such that $k+u_{i-1}\le k+m-1$ call {\tt RecurRank}\xspace$(\ell+1, \mathcal{A}_i, \mathcal K_i)$ on separate threads
\end{algorithmic}
\caption{{\tt RecurRank}\xspace}\label{alg:main}
\end{algorithm}
The pseudocode of the core subroutine on each interval is given in \cref{alg:main}. The subroutine accepts as input (1) the phase number $\ell$, (2) the positions of the interval $\mathcal K \subseteq [K]$
and (3) an ordered list
of items, $\mathcal{A}$. The phase number determines the length of the experiment and the target precision.
The ordering of the items in $\mathcal{A}$ is arbitrary in the initial partition (when $\ell = 1$). When $\ell > 1$ the ordering is determined by the empirical estimate of attractiveness in
the previous experiment, which is crucial for the analysis.
The whole algorithm is started by calling ${\tt RecurRank}\xspace(1,\mathcal L,(1,2,\dots,K))$ where the order of $\mathcal L$ is random.
The algorithm is always instantiated with parameters that satisfy $|\mathcal{A}| \geq |\mathcal K|=m$. Furthermore, $|\mathcal{A}| > |\mathcal K|$ is only possible when $K \in \mathcal K$.
\begin{figure*}[thb!]
\centering
\begin{tikzpicture}[scale=0.7,font=\small]
\draw (0,0) rectangle (1,-4);
\draw[dashed] (0,-0.5) -- (1,-0.5);
\node at (-0.2, -0.25) {$1$};
\node at (-0.2, -3.75) {$8$};
\node at (0.5, 0.25) {$\ell=1$};
\node at (1.5, 0.75) {$\mathcal{A}$};
\node at (1.5, 0.25) {$||$};
\node at (1.5, -0.25) {$\overbrace{a_1}$};
\node at (1.5, -1.5) {$\cdot$};
\node at (1.5, -2) {$\cdot$};
\node at (1.5, -2.5) {$\cdot$};
\node at (1.5, -3.75) {$a_8$};
\node at (1.5, -4.25) {$\cdot$};
\node at (1.5, -4.5) {$\cdot$};
\node at (1.5, -4.75) {$\cdot$};
\node at (1.5, -5.25) {$\underbrace{a_{50}}$};
\node[rounded rectangle,draw,minimum width=5.7cm,minimum height=2.2cm,dotted,rotate=90] (r1) at (0.9,-2.3) {};
\draw[-latex] (2.3,-2.3) -- (4.55,1.1);
\draw (5,1.5) rectangle (6,0);
\draw[dashed] (5,1) -- (6,1);
\node at (4.8, 1.25) {$1$};
\node at (4.8, 0.25) {$3$};
\node at (5.5, 1.75) {$\ell=2$};
\node at (6.5, 2.25) {$\mathcal{A}$};
\node at (6.5, 1.75) {$||$};
\node at (6.5, 1.25) {$\overbrace{a_1}$};
\node at (6.5, 0.75) {$\vdots$};
\node at (6.5, 0.25) {$\underbrace{a_3}$};
\node[rounded rectangle,draw,minimum width=2.8cm,minimum height=2.1cm,dotted,rotate=90] (r1) at (5.9,1.1) {};
\draw[-latex] (2.3,-2.3) -- (4.5,-4.5);
\draw (5,-3) rectangle (6,-5.5);
\draw[dashed] (5,-3.5) -- (6,-3.5);
\node at (5.5, -2.75) {$\ell=2$};
\node at (6.5, -2.25) {$\mathcal{A}$};
\node at (6.5, -2.75) {$||$};
\node at (4.8, -3.25) {$4$};
\node at (4.8, -5.25) {$8$};
\node at (6.5, -3.25) {$\overbrace{a_4}$};
\node at (6.5, -3.75) {$\cdot$};
\node at (6.5, -4.25) {$\cdot$};
\node at (6.5, -4.75) {$\cdot$};
\node at (6.5, -5.25) {$a_8$};
\node at (6.5, -5.75) {$\cdot$};
\node at (6.5, -6) {$\cdot$};
\node at (6.5, -6.25) {$\cdot$};
\node at (6.5, -6.75) {$\underbrace{a_{25}}$};
\node[rounded rectangle,draw,minimum width=4.5cm,minimum height=2.25cm,dotted,rotate=90] (r1) at (5.9,-4.5) {};
\draw[-latex] (7.2,1.1) -- (8.55,1.1);
\draw (9,1.5) rectangle (10,0);
\draw[dashed] (9,1) -- (10,1);
\node at (8.8, 1.25) {$1$};
\node at (8.8, 0.25) {$3$};
\node at (9.5, 1.75) {$\ell=3$};
\node at (10.5, 2.25) {$\mathcal{A}$};
\node at (10.5, 1.75) {$||$};
\node at (10.5, 1.25) {$\overbrace{a_1}$};
\node at (10.5, 0.75) {$\vdots$};
\node at (10.5, 0.25) {$\underbrace{a_3}$};
\node[rounded rectangle,draw,minimum width=2.8cm,minimum height=2.1cm,dotted,rotate=90] (r1) at (9.9,1.1) {};
\draw[-latex] (11.2,1.1) -- (15.5,1.1);
\node at (15.75, 1.1) {$\cdot$};
\node at (16, 1.1) {$\cdot$};
\node at (16.25, 1.1) {$\cdot$};
\draw[-latex] (7.25,-4.5) -- (11.55,-1.6);
\draw (12,-1.5) rectangle (13,-2.5);
\draw[dashed] (12,-2) -- (13,-2);
\node at (12.5, -1.25) {$\ell=3$};
\node at (13.5, -0.75) {$\mathcal{A}$};
\node at (13.5, -1.25) {$||$};
\node at (11.8, -1.75) {$4$};
\node at (11.8, -2.25) {$5$};
\node at (13.5, -1.75) {$\overbrace{a_4}$};
\node at (13.5, -2.25) {$\underbrace{a_5}$};
\node[rounded rectangle,draw,minimum width=2cm,minimum height=2.1cm,dotted,rotate=90] (r1) at (12.9,-1.6) {};
\draw[-latex] (14.2,-1.6) -- (15.5,-1.6);
\node at (15.75, -1.6) {$\cdot$};
\node at (16, -1.6) {$\cdot$};
\node at (16.25, -1.6) {$\cdot$};
\draw[-latex] (7.25,-4.5) -- (11.55,-5.5);
\draw (12,-4.5) rectangle (13,-6);
\draw[dashed] (12,-5) -- (13,-5);
\node at (12.5, -4.25) {$\ell=3$};
\node at (13.5, -3.75) {$\mathcal{A}$};
\node at (13.5, -4.25) {$||$};
\node at (11.8, -4.75) {$6$};
\node at (11.8, -5.75) {$8$};
\node at (13.5, -4.75) {$\overbrace{a_6}$};
\node at (13.5, -5.25) {$\vdots$};
\node at (13.5, -5.75) {$a_8$};
\node at (13.5, -6.25) {$\cdot$};
\node at (13.5, -6.5) {$\cdot$};
\node at (13.5, -6.75) {$\cdot$};
\node at (13.5, -7.25) {$\underbrace{a_{12}}$};
\node[rounded rectangle,draw,minimum width=3.7cm,minimum height=2.1cm,dotted,rotate=90] (r1) at (12.9,-5.5) {};
\draw[-latex] (14.2,-5.5) -- (15.5,-5.5);
\node at (15.75, -5.5) {$\cdot$};
\node at (16, -5.5) {$\cdot$};
\node at (16.25, -5.5) {$\cdot$};
\draw[-latex] (-1,-8.5) -- (17,-8.5);
\node at (16.75, -8.75) {$t$};
\draw[dashed] (1,-5.75) -- (1,-8.5);
\node at (1,-6) {\color{red} Instance 1};
\node at (1, -8.75) {$1$};
\draw[dashed] (5.9,-0.5) -- (5.9,-1.9);
\node at (5.9,-0.75) {\color{red} Instance 2};
\draw[dashed] (5.9,-7.25) -- (5.9,-8.5);
\node at (5.9,-7.5) {\color{red} Instance 3};
\node at (5.9, -8.75) {$t_1$};
\draw[dashed] (9.9,-0.5) -- (9.9,-8.5);
\node at (9.9,-0.75) {\color{red} Instance 4};
\node at (9.9, -8.75) {$t_2$};
\draw[dashed] (12.9,-2.85) -- (12.9,-3.4);
\node at (12.9,0) {\color{red} Instance 5};
\draw[dashed] (12.9,-7.75) -- (12.9,-8.5);
\node at (12.9,-8) {\color{red} Instance 6};
\node at (12.9, -8.75) {$t_3$};
\end{tikzpicture}
\caption{A flow chart demonstrating the operation of the algorithm. Each dotted circle represents a subinterval and runs an instance of \cref{alg:main}. The dashed line denotes the first position of each interval.}
\vspace{-0.3cm}
\label{fig:flow chart for algo}
\end{figure*}
The subroutine learns about the common unknown parameter vector by placing items in the first position of the interval
in proportion to a $G$-optimal design for the available items. The remaining items in $\mathcal{A}$ are placed in order into the remaining positions (Line 4).
This means that each item $a \in \mathcal{A}$ is placed exactly $T(a)$ times in the first position $k$ of the interval. The choice of $T(a)$ is based on the
phase number $\ell$ and the $G$-optimal design $\pi$ over $\mathcal{A}$ (Line 2). Note $T(a) = 0$ if $\pi(a)=0$. For example, if $\mathcal{A}=(a_1,\ldots,a_m)$ and $a_3$
is placed at the first position, then the remaining positions are filled as $a_1,a_2,a_4,a_5,\ldots,a_m$.
The subroutine runs for $\sum_{a\in \mathcal{A}} T(a)$ rounds.
The use of a $G$-optimal design ensures that the number of rounds required to estimate the value of each item to a fixed precision depends only logarithmically on the number of items.
A higher phase number means a longer experiment and a higher target precision.
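The allocation rule in \cref{eq:allocchoice} is straightforward to compute once a design $\pi$ is available. A Python sketch, assuming the design is stored as a dictionary of probabilities (the toy design below is illustrative, not a genuine $G$-optimal design):

```python
import math

def allocation(pi, d, phase, delta):
    """Number of placements T(a) at the first position, per the rule
    T(a) = ceil( d * pi(a) / (2 * Delta_l^2) * log(|A| / delta) )."""
    gap = 2.0 ** (-phase)          # target precision Delta_l = 2^{-l}
    n = len(pi)
    return {a: math.ceil(d * p / (2.0 * gap ** 2) * math.log(n / delta))
            for a, p in pi.items()}

# A toy design on four items in d = 2 dimensions (illustrative only):
pi = {'a1': 0.5, 'a2': 0.5, 'a3': 0.0, 'a4': 0.0}
T = allocation(pi, d=2, phase=1, delta=0.05)   # items with pi(a) = 0 get T(a) = 0
```

As noted above, items outside the support of $\pi$ receive $T(a) = 0$ and are never placed in the first position.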
Once all arms $a \in \mathcal{A}$ have been placed in the first position of the interval $T(a)$ times, {\tt RecurRank}\xspace estimates the attractiveness of the items in $\mathcal{A}$
using a least-squares estimator based on the data collected from the first position (Line 5).
The items are then ordered based on their estimated attractiveness. The subroutine then partitions the ordered items
when the difference between estimated attractiveness of consecutive items is sufficiently large (Line 7).
Finally the subroutine recursively calls {\tt RecurRank}\xspace on each partition for which there are positions available with an increased phase number with items sorted according to their empirical attractiveness (Line 8).
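The estimation and partitioning steps (Lines 5-7) can be sketched as follows, assuming numpy and toy feature vectors; the threshold `gap` plays the role of $2\Delta_\ell$, and the data and items are hypothetical.

```python
import numpy as np

def estimate_and_split(data, items, gap):
    """Least-squares attractiveness estimate and gap-based partitioning.

    data:  (feature_vector, click) pairs collected at the first position
    items: feature vectors of the items in A
    gap:   split threshold, playing the role of 2 * Delta_l
    """
    V = sum(np.outer(b, b) for b, _ in data)
    S = sum(b * z for b, z in data)
    theta = np.linalg.pinv(V) @ S                    # theta_hat = V^+ S

    order = sorted(items, key=lambda a: -float(a @ theta))
    scores = [float(a @ theta) for a in order]

    # start a new block whenever consecutive estimates differ by >= gap
    blocks, current = [], [order[0]]
    for prev, cur, item in zip(scores, scores[1:], order[1:]):
        if prev - cur >= gap:
            blocks.append(current)
            current = []
        current.append(item)
    blocks.append(current)
    return theta, blocks

# Toy data in d = 2: e1 is always clicked, e2 is clicked 1 time in 4
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
data = [(e1, 1.0)] * 4 + [(e2, 1.0)] + [(e2, 0.0)] * 3
items = [e1, np.array([0.5, 0.5]), e2]
theta, blocks = estimate_and_split(data, items, gap=0.3)
```

Each returned block then corresponds to one recursive call with an incremented phase number.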
\begin{remark}
Items are eliminated entirely if at the end of a subroutine a partition is formed for which there are no available positions.
For example, consider the first instantiation of {\tt RecurRank}\xspace with $\ell = 1$ and $\mathcal K = [K]$ and $\mathcal{A} = \mathcal L$.
Suppose the observed data is such that $p = 2$ and $u_1 \geq K$, then items $a^{(u_1+1)},a^{(u_1+2)},\ldots,a^{(u_2)}$ will be discarded because the starting position
of the second partition would be larger than $K$.
\end{remark}
\begin{remark}
The least-squares estimator $\hat \theta$ defined in \cref{eq:lse} actually
does not have expectation $\theta$, which means the algorithm is not really estimating attractiveness. Our assumptions ensure that the expectation of $\hat \theta$ is proportional to $\theta$, however,
which is sufficient for our analysis. This is the reason for only using the first position within an interval for estimation.
\end{remark}
\begin{remark}
The subroutine only uses data collected during its own run. Reusing data from earlier runs would introduce bias that may be hard to control.
\end{remark}
\vspace{-0.3cm}
In \cref{fig:flow chart for algo}, the algorithm starts with Instance 1 of phase number $\ell=1$, all items and all positions. At time $t_1$, Instance 1 splits
into two, each with an increased phase number $\ell=2$. Instance 2 contains $3$ items and $3$ positions and Instance 3 contains $5$ positions but $22$ items.
The remaining items have been eliminated.
At time $t_2$, Instance 2 finishes running without a split, so it calls Instance 4 with the same items and positions but increased phase
number $\ell=3$. From time $t_1$ to $t_2$, Instances 2 and 3 run in parallel and recommend lists together; from time $t_2$ to $t_3$, Instances 3 and 4
run in parallel and recommend lists together. At time $t_3$, Instance 3 finishes and splits into another two threads, both with increased phase number $\ell=3$.
Instance 5 contains exactly $2$ items and $2$ positions and Instance 6 contains $3$ positions but $7$ items. Note that the number of items involved keeps shrinking. Right after time $t_3$, Instances 4, 5 and 6 run in parallel and recommend lists together.
{\tt RecurRank}\xspace has two aspects that one might think would lead to an unjustifiable increase in regret:
\emph{(i)} each subroutine only uses data from the first position to estimate attractiveness,
and \emph{(ii)} data collected by one subroutine is not re-used subsequently.
The second of these is relatively minor. Like many elimination algorithms, the halving of the precision means
that at most a constant factor is lost by discarding the data.
The first issue is more delicate. On the one hand, it seems wasteful not to use all available data; on the other hand, the assumptions do not make it easy
to use data collected in later positions. In practice the harm may not be so great: intuitively, the cost of only using data from the
first position is greatest when the interval is large and the attractiveness varies greatly within the interval.
In this case, however, a split will happen relatively fast.
\paragraph{Running time}
The most expensive component is computing the $G$-optimal design.
This is a convex optimisation problem and has been studied extensively (see \citealt[\S7.5]{BV04} and \citealt{Tod16}).
It is not necessary to solve the optimisation problem exactly.
Suppose instead we find a distribution $\pi$ on $\mathcal{A}$ with support at most $D(D+1)/2$ and for which
$\max_{a \in \mathcal{A}} \norm{a}_{Q(\pi)^{\dagger}}^2 \leq D$.
Then our bounds continue to hold with $d$ replaced by $D$.
Such approximations are generally easy to find.
For example, $\pi$ may be chosen to be a uniform distribution
on a volumetric spanner of $\mathcal{A}$ of size $D$.
\ifsup See Appendix~\ref{app:vspan} for a summary on volumetric spanners. \else
See the supplementary material for a summary of volumetric spanners. \fi
\citet{H16volumetric}
provide a randomized algorithm that returns a volumetric spanner of size at most $O(d \log(d) \log(\abs{\mathcal{A}}))$
with an expected running time of $O(\abs{\mathcal{A}} d^2)$.
For the remaining parts of the algorithm, the least-squares estimation is at most $O(d^3)$.
The elimination and partitioning run in $O(\abs{\mathcal{A}}d)$.
Note these computations happen only once for each instantiation.
The update for each partition in each round is $O(d^2)$.
The total running time is $O(Ld^2\log(T)+Kd^2T)$.
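The design-quality condition above can be verified numerically for any candidate design: build $Q(\pi) = \sum_a \pi(a)\, a a^{\top}$ and check that $\max_{a \in \mathcal{A}} \norm{a}_{Q(\pi)^{\dagger}}^2 \leq D$. A numpy sketch with an illustrative item set:

```python
import numpy as np

def design_quality(items, pi):
    """Return max_a ||a||^2 in the Q(pi)-pseudoinverse norm."""
    Q = sum(p * np.outer(a, a) for a, p in zip(items, pi))
    Q_pinv = np.linalg.pinv(Q)
    return max(float(a @ Q_pinv @ a) for a in items)

# The uniform design on an orthonormal basis of R^2 achieves the
# Kiefer--Wolfowitz optimum g(pi) = d = 2.
items = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
pi = [0.5, 0.5]
quality = design_quality(items, pi)
```

Any distribution for which this quantity is at most $D$ can replace the exact $G$-optimal design, at the price of $d$ becoming $D$ in the bounds.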
\section{Regret Analysis}
\label{sec:regret analysis}
Our main theorem bounds the regret of \cref{alg:main}.
\begin{theorem}\label{thm:upper}
There exists a universal constant $C > 0$ such that
the regret of Algorithm~\ref{alg:main} with $\delta = 1/\sqrt{T}$ satisfies
\begin{align*}
R_T \le C K \sqrt{dT\log(LT)}\,.
\end{align*}
\end{theorem}
Let $I_\ell$ be the number of calls to {\tt RecurRank}\xspace{} with phase number $\ell$.
Hence each $i \in [I_\ell]$ corresponds to a call of {\tt RecurRank}\xspace{} with phase number $\ell$ and the arguments are denoted by
$\mathcal{A}_{\ell i}$ and $\mathcal K_{\ell i}$.
Abbreviate $K_{\ell i} = \min \mathcal K_{\ell i}$ for the first position of $\mathcal K_{\ell i}$, $M_{\ell i} = |\mathcal K_{\ell i}|$
for the number of positions and $\mathcal K_{\ell i}^+ = \mathcal K_{\ell i} \setminus \{K_{\ell i}\}$.
We also let $K_{\ell, I_\ell+1}=K+1$ and
assume that the calls $i \in [I_\ell]$ are ordered so that
\begin{align*}
1 = K_{\ell 1} < K_{\ell 2} < \cdots < K_{\ell I_\ell} \leq K < K+1 = K_{\ell, I_\ell+1}\,.
\end{align*}
The reader is reminded that $\chi^\ast_k = \chi(A^\ast, k)$ is the examination probability of the $k$th position under the optimal list.
Let $\chi_{\ell i} = \chi_{K_{\ell i}}^\ast$ be the shorthand for the optimal examination probability of the first position in call $(\ell, i)$.
We let $\hat \theta_{\ell i}$ be the least-squares estimator computed in \cref{eq:lse} in \cref{alg:main}.
The maximum phase number during the entire operation of the algorithm is $\ell_{\max}$.
\begin{definition}
Let $F$ be the failure event that there exists an $\ell \in [\ell_{\max}]$, $i \in [I_\ell]$ and $a \in \mathcal{A}_{\ell i}$ such that
\begin{align*}
\abs{\ip{\hat{\theta}_{\ell i}, a} - \chi_{\ell i} \ip{\theta_\ast, a}} \geq \Delta_\ell
\end{align*}
or there exists an $\ell \in [\ell_{\max}]$, $i \in [I_\ell]$ and $k \in \mathcal K_{\ell i}$ such that
$a_k^\ast \notin \mathcal{A}_{\ell i}$.
\end{definition}
The first lemma shows that the failure event occurs with low probability.
The proof follows the analysis in \citet[Chap. 22]{LS18book} and is summarised in
\ifsup
\cref{app:lem:failure}.
\else
the supplementary material.
\fi
\begin{lemma}\label{lem:failure}
$\Prob{F} \leq \delta$.
\end{lemma}
The proofs of the following lemmas are
\ifsup
provided in \cref{app:sec:proofs of technical lemmas}.
\else
also provided in the supplementary material.
\fi
\begin{lemma}
\label{lem:adjacent gap}
On the event $F^c$ it holds for any $\ell \in [\ell_{\max}]$, $i \in [I_\ell]$ and positions $k, k+1 \in \mathcal K_{\ell i}$ that
$\chi_{\ell i}
(\alpha(a_k^\ast) - \alpha(a_{k+1}^\ast)) \le 8\Delta_\ell$.
\end{lemma}
\begin{lemma}
\label{lem:suboptimal gap}
On the event $F^c$ it holds for any $\ell \in [\ell_{\max}]$ and $a \in \mathcal{A}_{\ell I_\ell}$ that
$\chi_{\ell I_\ell} (\alpha(a_K^\ast) - \alpha(a)) \le 8\Delta_\ell$.
\end{lemma}
\begin{lemma}\label{lem:first-subopt}
Suppose that in its $(\ell, i)$th call {\tt RecurRank}\xspace places item $a$
in position $k = K_{\ell i}$. Then, provided $F^c$ holds,
$\chi_{\ell i} \left(\alpha(a_k^*) - \alpha(a)\right) \leq 8 M_{\ell i} \Delta_\ell$.
\end{lemma}
\begin{lemma}
\label{lem:regret on lower positions}
Suppose that in its $(\ell, i)$th call {\tt RecurRank}\xspace places item $a$
in position $k \in \mathcal K_{\ell i}^+$. Then provided $F^c$ holds,
$\chi_{\ell i} \left(\alpha(a_k^\ast) - \alpha(a)\right) \leq 4\Delta_\ell$.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:upper}]
The first step is to decompose the regret using the failure event and the fact that the per-round regret is at most $K$:
\begin{align*}
R_T \leq \Prob{F} T K + \mathbb{E}\left[\sind{F^c}\! \sum_{t=1}^T \sum_{k=1}^K (v(A^*, k) - v(A_t, k))\right]\,.
\end{align*}
From now on we assume that $F^c$ holds and bound the term inside the expectation.
Given $\ell$ and $i \in [I_\ell]$ let $\mathcal T_{\ell i}$ be the set of rounds when algorithm $(\ell, i)$ is active.
Then
\begin{align}
\sum_{t=1}^T \sum_{k=1}^K (v(A^*, k) - v(A_t, k))
&= \sum_{\ell=1}^{\ell_{\max}} \sum_{i=1}^{I_\ell} R_{\ell i}\,,
\label{eq:decomp}
\end{align}
where $R_{\ell i}$ is the regret incurred during call $(\ell, i)$:
\begin{align*}
R_{\ell i} = \sum_{t \in \mathcal T_{\ell i}} \sum_{k \in \mathcal K_{\ell i}} (v(A^*, k) - v(A_t, k))\,.
\end{align*}
This quantity is further decomposed into the first position in $\mathcal K_{\ell i}$, which is used for exploration, and the
remaining positions:
\begin{align*}
R_{\ell i}^{(1)} &= \sum_{t \in \mathcal T_{\ell i}} (v(A^*, K_{\ell i}) - v(A_t, K_{\ell i}))\,. \\
R_{\ell i}^{(2)} &= \sum_{t \in \mathcal T_{\ell i}} \sum_{k \in \mathcal K_{\ell i}^+} (v(A^*, k) - v(A_t, k))\,.
\end{align*}
Each of these terms is bounded separately.
For the first term we have
\begin{align}
R_{\ell i}^{(1)}
&= \sum_{t \in \mathcal T_{\ell i}} (v(A^*, K_{\ell i}) - v(A_t, K_{\ell i})) \nonumber \\
&= \sum_{t \in \mathcal T_{\ell i}} \chi(A^*, K_{\ell i}) \alpha(a^*_{K_{\ell i}}) - \chi(A_t, K_{\ell i}) \alpha(A_t(K_{\ell i}))\nonumber \\
&= \sum_{t \in \mathcal T_{\ell i}} \chi_{\ell i} \left\{\alpha(a^*_{K_{\ell i}}) - \alpha(A_t(K_{\ell i}))\right\} \nonumber \\
&\leq 8 \sum_{t \in \mathcal T_{\ell i}} M_{\ell i} \Delta_\ell\,, \label{eq:relli1}
\end{align}
where the first equality is the definition of $R_{\ell i}^{(1)}$, the second is the definition of $v$.
The third equality holds because event $F^c$ ensures that
\begin{align*}
\{A_t(k) : k < K_{\ell i}\} = \{a^*_k : k < K_{\ell i}\}\,,
\end{align*}
which combined with \cref{ass:perm} shows that $\chi(A^*, K_{\ell i}) = \chi(A_t, K_{\ell i}) = \chi_{\ell i}$.
The inequality in \cref{eq:relli1} follows from \cref{lem:first-subopt}.
Moving on to the second term,
\begin{align}
R_{\ell i}^{(2)}
&= \sum_{t \in \mathcal T_{\ell i}} \sum_{k \in \mathcal K_{\ell i}^+} (v(A^*, k) - v(A_t, k)) \nonumber \\
&\le \sum_{t \in \mathcal T_{\ell i}} \sum_{k \in \mathcal K_{\ell i}^+} \chi_k^\ast (\alpha(a^*_k) - \alpha(A_t(k))) \nonumber \\
&\leq \sum_{t \in \mathcal T_{\ell i}} \sum_{k \in \mathcal K_{\ell i}^+} \chi_{\ell i} (\alpha(a^*_k) - \alpha(A_t(k))) \nonumber \\
&\leq 4\sum_{t \in \mathcal T_{\ell i}} \sum_{k \in \mathcal K_{\ell i}^+} \Delta_\ell \label{eq:relli2} \\
&\leq 4\sum_{t \in \mathcal T_{\ell i}} M_{\ell i} \Delta_\ell\,, \nonumber
\end{align}
where the second inequality follows from \cref{ass:min} and the third inequality follows from \cref{ass:decrease} on ranking $A^\ast$.
The inequality in \cref{eq:relli2} follows from \cref{lem:regret on lower positions}
and the one after it from the definition of $M_{\ell i} = |\mathcal K_{\ell i}|$.
Putting things together,
\begin{align*}
(\ref{eq:decomp})
&\le 12 \sum_{\ell=1}^{\ell_{\max}} \sum_{i=1}^{I_\ell} |\mathcal T_{\ell i}| M_{\ell i} \Delta_\ell
\le 12 K \sum_{\ell=1}^{\ell_{\max}} \max_{i\in [I_\ell]} |\mathcal T_{\ell i}| \Delta_\ell\,, \numberthis
\label{eq:sumtosplit}
\end{align*}
where we used that $\sum_{i=1}^{I_\ell} M_{\ell i} = K$.
To bound $|\mathcal T_{\ell i}|$ note that, on the one hand,
$|\mathcal T_{\ell i}|\le T$ (this will be useful when $\ell$ is large),
while on the other hand,
by the definition of the algorithm and the fact that the $G$-optimal design is supported
on at most $d(d+1)/2$ points we have
\begin{align*}
\MoveEqLeft
|\mathcal T_{\ell i}|
\leq \sum_{a \in \mathcal{A}_{\ell i}} \ceil{\frac{2 d \pi(a) \log(1/\delta_{\ell})}{\Delta_\ell^2}} \\
&\leq \frac{d(d+1)}{2} + \frac{2 d \log(1/\delta_{\ell})}{\Delta_{\ell}^2}\,.
\end{align*}
We now split the sum in \eqref{eq:sumtosplit} into two parts.
For $1\le \ell_0 \le \ell_{\max}$ to be chosen later,
\begin{align*}
\MoveEqLeft
\sum_{\ell=1}^{\ell_0} \max_{i\in [I_\ell]} |\mathcal T_{\ell i}| \Delta_\ell
\le
\frac{d(d+1)}{2}
+
4 d \log(1/\delta_{\ell_0}) 2^{\ell_0}\,,
\end{align*}
while
\begin{align*}
\sum_{\ell=\ell_0+1}^{\ell_{\max}} \max_{i\in [I_\ell]} |\mathcal T_{\ell i}| \Delta_\ell
\le
T \sum_{\ell=\ell_0+1}^{\ell_{\max}} \Delta_\ell \le T 2^{-\ell_0}\,,
\end{align*}
hence,
\begin{align*}
(\ref{eq:decomp})
\le 12 K \left\{ \frac{d(d+1)}{2} + 4 d \log(1/\delta_{\ell_0}) 2^{\ell_0} +T 2^{-\ell_0} \right\}\,.
\end{align*}
The result is completed by optimising $\ell_0$.
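For completeness, a sketch of this optimisation (ignoring that $\ell_0$ must be an integer): the two $\ell_0$-dependent terms are balanced by choosing $2^{\ell_0} \approx \sqrt{T/(4d\log(1/\delta_{\ell_0}))}$, so that
\begin{align*}
4 d \log(1/\delta_{\ell_0}) 2^{\ell_0} + T 2^{-\ell_0}
\approx 4\sqrt{d T \log(1/\delta_{\ell_0})}\,,
\end{align*}
which, with $\delta = 1/\sqrt{T}$ and $\delta_\ell$ polynomial in $L$ and $1/\delta$, is of order $\sqrt{dT\log(LT)}$, matching the claimed bound.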
\end{proof}
\section{Proof of Lemma~\ref{lem:failure}}\label{app:lem:failure}
In what follows, we add the index $(\ell,i)$ to any symbol used in the algorithm to indicate the value that it takes in the $(\ell,i)$ call. For example, $\mathcal D_{\ell i}$ denotes the data multiset collected in the $(\ell,i)$ call and $T_{\ell i}(a)$ denotes the value computed in \cref{eq:allocchoice}.
Fix $\ell\ge 1$
and let $F_\ell$ be the failure event that there exists an $i \in [I_\ell]$ and $a \in \mathcal{A}_{\ell i}$ such that
\begin{align*}
\abs{\ip{\hat{\theta}_{\ell i}, a} - \chi_{\ell i} \ip{\theta_\ast, a}} \geq \Delta_\ell\,.
\end{align*}
Let $E_\ell$ be the event that for any $i \in [I_\ell]$,
the examination probability on the first position of the call $(\ell, i)$ is $\chi_{\ell i}$.
For the argument that follows, let us assume that $E_\ell$ holds.
By our modelling assumptions (\cref{eq:click,eq:clickfactor,eq:clicklinear}),
for any $(\beta, \zeta) \in \mathcal D_{\ell i}$,
\begin{align*}
\zeta = \ip{\chi_{\ell i}\theta_\ast, \beta} + \eta_{(\beta, \zeta)}\,,
\end{align*}
where $\{\eta_{(\beta, \zeta)}\}_{(\beta, \zeta)}$ is a conditionally $1/2$-subgaussian sequence.
Define the Gram matrix $Q$ for any probability mass function $\pi:\mathcal{A} \to [0,1]$, $\sum_{a \in \mathcal{A}} \pi(a) = 1$, as $Q(\pi) = \sum_{a \in \mathcal{A}} \pi(a) aa^\top$.
By the Kiefer-Wolfowitz theorem \cite{KW60},
\begin{align*}
\max_{a \in \mathcal{A}_{\ell i}} \norm{a}_{Q(\pi_{\ell i})^{\dagger}}^2 = \mathrm{rank}(\mathcal{A}) \le d\,,
\end{align*}
where $Q^\dagger$ denotes the Moore-Penrose inverse of $Q$. Then, by \cref{eq:allocchoice},
\begin{align*}
V_{\ell i} &=\sum_{a \in \mathcal{A}_{\ell i}} T_{\ell i}(a) aa^\top
\succeq \frac{d}{2\Delta_\ell^2}\log\left(\frac{\abs{\mathcal{A}_{\ell i}} }{\delta_\ell}\right)Q(\pi_{\ell i})\,,
\end{align*}
where $P\succeq Q$ denotes that $P$ precedes $Q$ in the Loewner partial ordering of positive semi-definite (symmetric) matrices.
This implies that
\begin{align*}
\norm{a}_{V_{\ell i}^{\dagger}}^2
&\le \frac{2\Delta_\ell^2}{d} \frac{1}{\log\left(\frac{\abs{\mathcal{A}_{\ell i}} }{\delta_\ell}\right)} \norm{a}_{Q(\pi_{\ell i})^{\dagger}}^2 \\
&\le 2\Delta_\ell^2 \frac{1}{\log\left(\frac{\abs{\mathcal{A}_{\ell i}} }{\delta_\ell}\right)} \,.
\end{align*}
Rearranging shows that
\begin{align}
\Delta_\ell &\ge \sqrt{\frac{1}{2} \norm{a}_{V_{\ell i}^{\dagger}}^2 \log\left(\frac{\abs{\mathcal{A}_{\ell i}} }{\delta_\ell}\right)}\,. \label{eq:Delta ell}
\end{align}
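As a toy numeric check of the Kiefer--Wolfowitz step above (not part of the algorithm), consider the action set given by the standard basis of $\mathbb{R}^d$: here the uniform design is $G$-optimal and attains the value $d$. A minimal sketch:

```python
import numpy as np

d = 4
# Toy action set: the standard basis of R^d; the uniform design is G-optimal here.
actions = np.eye(d)
pi = np.full(d, 1.0 / d)

# Gram matrix Q(pi) = sum_a pi(a) a a^T  (= (1/d) I for this action set).
Q = sum(p * np.outer(a, a) for p, a in zip(pi, actions))
Q_pinv = np.linalg.pinv(Q)  # Moore-Penrose inverse

# Kiefer-Wolfowitz: the optimal design attains max_a ||a||^2_{Q(pi)^+} = d.
g_value = max(float(a @ Q_pinv @ a) for a in actions)
```

For a general finite action set the optimal design must be computed numerically, but the attained value is still at most $d$.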
Now note that
\begin{align*}
&\ip{\hat{\theta}_{\ell i} - \chi_{\ell i} \theta_\ast, a}\\
=& \ip{V_{\ell i}^{\dagger}\sum_{(\beta, \zeta) \in \mathcal D_{\ell i}} \beta \zeta - \chi_{\ell i} \theta_\ast, a}\\
=& \ip{V_{\ell i}^{\dagger}\sum_{(\beta, \zeta) \in \mathcal D_{\ell i}} \beta (\beta^{\top} \theta_\ast \chi_{\ell i} + \eta_{(\beta, \zeta)}) - \chi_{\ell i} \theta_\ast, a}\\
=&\chi_{\ell i} \ip{(V_{\ell i}^{\dagger} V_{\ell i}-I) \theta_\ast, a} + \ip{V_{\ell i}^{\dagger}\sum_{(\beta, \zeta) \in \mathcal D_{\ell i}}\beta \eta_{(\beta, \zeta)}, a}\\
=&\sum_{(\beta, \zeta) \in \mathcal D_{\ell i}} \ip{V_{\ell i}^{\dagger} \beta, a} \ \eta_{(\beta, \zeta)}\,. \numberthis
\label{eq:noise}
\end{align*}
The last equality follows from the fact that $I - V_{\ell i}^{\dagger} V_{\ell i}$ is the orthogonal projection onto the kernel of $V_{\ell i}$, which is the
orthogonal complement of the span of $\mathcal{A}_{\ell i}$, and thus maps each $a \in \mathcal{A}_{\ell i}$ to the zero vector.
Then, for any $a \in \mathcal{A}_{\ell i}$,
\begin{align*}
&\mathbb{P}\left(\abs{\ip{\hat{\theta}_{\ell i} - \chi_{\ell i} \theta_\ast, a}} \ge \Delta_\ell \right) \\
&\le \mathbb{P}\left(\abs{\ip{\hat{\theta}_{\ell i} - \chi_{\ell i} \theta_\ast, a}} \ge \sqrt{\frac{1}{2} \norm{a}_{V_{\ell i}^{\dagger}}^2 \log\left(\frac{\abs{\mathcal{A}_{\ell i}} }{\delta_\ell}\right)} \right)\\
&\le \frac{2\delta_\ell}{\abs{\mathcal{A}_{\ell i}}}\,.
\end{align*}
The first inequality is by \cref{eq:Delta ell}.
The second inequality is by \cref{eq:noise},
the concentration bound on conditional subgaussian sequences \citep[Lemma 5.2 and Theorem 5.1]{LS18book},
and $\sum_{(\beta, \zeta) \in \mathcal D_{\ell i}}\ip{V_{\ell i}^{\dagger}\beta, a}^2 = \norm{a}_{V_{\ell i}^{\dagger}}^2$.
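This concentration step can be spelled out as follows: writing $w_{(\beta,\zeta)} = \ip{V_{\ell i}^{\dagger}\beta, a}$, the bound for conditionally $1/2$-subgaussian sequences gives
\begin{align*}
\mathbb{P}\left(\Big|\sum_{(\beta, \zeta) \in \mathcal D_{\ell i}} w_{(\beta,\zeta)}\, \eta_{(\beta, \zeta)}\Big| \ge \Delta_\ell\right)
\le 2\exp\left(-\frac{2\Delta_\ell^2}{\norm{a}_{V_{\ell i}^{\dagger}}^2}\right)
\le \frac{2\delta_\ell}{\abs{\mathcal{A}_{\ell i}}}\,,
\end{align*}
where the last step substitutes \cref{eq:Delta ell}.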
Thus with probability at least $1 - 2\delta_\ell$,
\begin{align*}
\abs{\ip{\hat{\theta}_{\ell i} - \chi_{\ell i} \theta_\ast, a}} \le \Delta_\ell
\end{align*}
holds for any $a \in \mathcal{A}_{\ell i}$ and thus from $I_\ell\le K$, we get that
\begin{align}
\Prob{F_\ell \cap E_\ell} \le 2K\delta_\ell\,.
\label{eq:fllell}
\end{align}
Now we prove by induction on $\ell$ that on the complement of $F_{1:\ell-1}=F_1\cup \dots \cup F_{\ell-1}$ (with $F_{1:0}=\emptyset$)
the following hold true:
{\em (i)}
the examination probability on the first position of the call $(\ell, i)$ is $\chi_{\ell i}$ for any $i \in [I_\ell]$;
{\em (ii)}
$a_{K_{\ell I_\ell}}^\ast, \ldots, a_K^\ast$ are the $M_{\ell I_{\ell}}$ best items in $\mathcal{A}_{\ell I_{\ell}}$ and
that
{\em (iii)} for any $i,j\in [I_{\ell}]$, $i<j$, and $a\in \mathcal{A}_{\ell i}$, $a'\in \mathcal{A}_{\ell j}$, it holds that $\alpha(a)<\alpha(a')$
(note that {\em (ii)} and {\em (iii)} just mean that the algorithm does not make a mistake when it eliminates items or splits blocks).
The claim is obviously true for $\ell=1$.
In particular,
the examination probability on the first position of the call $(\ell=1, i=1)$ is $\chi_{1,1}$ by \cref{ass:perm}.
Now, let $\ell\ge 1$ and
suppose $F_{1:\ell}$ does not hold.
If $\ip{\hat{\theta}_{\ell i}, a} - \ip{\hat{\theta}_{\ell i}, a'} \ge 2\Delta_{\ell}$
for some $a, a' \in \mathcal{A}_{\ell i}$ and $i\in [I_\ell]$, then by {\em (i)} of the induction hypothesis,
\begin{align*}
\chi_{\ell i} \ip{\theta_\ast, a} &> \ip{\hat{\theta}_{\ell i}, a} - \Delta_{\ell} \\
&\ge \ip{\hat{\theta}_{\ell i}, a'} + \Delta_{\ell} > \chi_{\ell i} \ip{\theta_\ast, a'}\,,
\end{align*}
thus $\alpha(a) > \alpha(a')$.
If $a \in \mathcal{A}_{\ell I_{\ell}}$ is eliminated at the end of call $(\ell, I_\ell)$, there exist $m = M_{\ell I_{\ell}}$ different items $b_1, \ldots, b_m \in \mathcal{A}_{\ell I_{\ell}}$ such that $\ip{\hat{\theta}_{\ell i}, b_j} - \ip{\hat{\theta}_{\ell i}, a} \ge 2\Delta_{\ell}$ for all $j \in [m]$. Thus $\alpha(b_j) > \alpha(a)$ for all $j \in [m]$. Since, by induction, $a_{K_{\ell I_\ell}}^\ast, \ldots, a_K^\ast$ are the $m$ best items in $\mathcal{A}_{\ell I_{\ell}}$,
it follows that $\alpha(a) < \alpha(a_K^\ast)$. This shows that {\em (ii)} will still hold for $\mathcal{A}_{\ell+1, I_{\ell+1}}$.
If there is a split $\mathcal{A}_1, \ldots, \mathcal{A}_p$ and $\mathcal K_1, \ldots, \mathcal K_p$ on $\mathcal{A}_{\ell i}$ and $\mathcal K_{\ell i}$
by the algorithm, $\ip{\hat{\theta}_{\ell i}, a} - \ip{\hat{\theta}_{\ell i}, a'} \ge 2\Delta_{\ell}$ for any $a \in \mathcal{A}_j, a'\in \mathcal{A}_{j+1}, j \in [p-1]$.
Then $\alpha(a) > \alpha(a')$.
So better items are put at higher positions which, combined with the fact that {\em (iii)} holds at phase $\ell$, shows that
{\em (iii)} continues to hold for $\ell+1$.
Finally, it also follows that
$\chi_{\ell+1, i} = \chi_{K_{\ell+1,i}}^\ast$ is the examination probability of the first position for any call $(\ell+1, i)$ of phase $\ell+1$, showing that {\em (i)} also continues to hold for phase $\ell+1$.
From this argument it follows that $F_{1:\ell-1}^c \subset E_\ell$ holds for all $\ell\ge 1$.
Then,
\begin{align*}
F
& = (F_1 \cap F_{1:0}^c) \cup (F_2 \cap F_{1:1}^c) \cup (F_3 \cap F_{1:2}^c) \cup \dots\\
& \subset (F_1 \cap E_1) \cup (F_2 \cap E_2) \cup (F_3 \cap E_{3}) \cup \dots\,.
\end{align*}
Taking probabilities and using \eqref{eq:fllell}, we get
\begin{align*}
\Prob{F} \le \sum_{\ell\ge 1} \Prob{ F_{\ell} \cap E_\ell } \le \delta\,,
\end{align*}
finishing the proof.
\section{Volumetric Spanners}\label{app:vspan}
A volumetric spanner of compact set $\mathcal K \subset \mathbb{R}^d$ is a finite set $S = \{x_1,\ldots,x_n\} \subseteq \mathcal K$ such that
\begin{align*}
\mathcal K \subseteq \mathcal{E}(S) = \set{\sum_{i=1}^n \alpha_i x_i : \norm{\alpha}_2 \leq 1}\,.
\end{align*}
Let $\pi$ be a uniform distribution on $S$ and
\begin{align*}
Q = \sum_{i=1}^n \pi(x_i) x_i x_i^\top\,.
\end{align*}
If $S$ is a volumetric spanner of $\mathcal K$,
for any $x \in \mathcal K$ it holds that $\norm{x}_{Q^{\dagger}}^2 \leq n$.
To see this let $U \in \mathbb{R}^{d \times n}$ be the matrix with columns equal to the elements in $S$, which means that $Q = U U^\top/n$.
Since $x \in \mathcal K$ there exists an $\alpha \in \mathbb{R}^n$ with $\norm{\alpha}_2 \leq 1$ such that $x = U\alpha$.
Then
\begin{align*}
x^\top Q^{\dagger} x
&= n\alpha^\top U^\top (U U^\top)^{\dagger} U \alpha \\
&= n\alpha^\top U^{\dagger} U \alpha \\
&\leq n \norm{\alpha}_2^2 \\
&\leq n\,.
\end{align*}
Any compact set admits a volumetric spanner of size $n \leq 12d$, hence by \cref{eq:gopt}, a volumetric spanner is a ``$12$-approximation'' to the $G$-optimal design problem.
For a finite set $\mathcal K$ with $n$ points and for fixed $\epsilon>0$, a spanner of size $12(1+\epsilon)d$ can be computed in $O(n^{3.5} + d n^3 + n d^3)$ time, up to a factor logarithmic in $1/\epsilon$ \citep[Theorem 3]{H16volumetric}.
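The bound $\norm{x}_{Q^{\dagger}}^2 \leq n$ above can also be checked numerically. The sketch below is only a sanity check, not a spanner construction: it draws an arbitrary point set $S$ (columns of $U$) and samples points directly from the ellipsoid $\mathcal{E}(S)$, for which the bound holds regardless of whether $S$ spans any particular set:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 7
# Arbitrary point set S in R^d (columns of U); the bound ||x||^2_{Q^+} <= n
# holds for every x in the ellipsoid E(S) = {U a : ||a||_2 <= 1}.
U = rng.standard_normal((d, n))
Q = (U @ U.T) / n              # Q = sum_i pi(x_i) x_i x_i^T with pi uniform
Q_pinv = np.linalg.pinv(Q)     # Moore-Penrose inverse

# Sample points from E(S) and record the largest value of x^T Q^+ x.
max_norm = 0.0
for _ in range(200):
    a = rng.standard_normal(n)
    a /= max(1.0, float(np.linalg.norm(a)))   # project a into the unit ball
    x = U @ a
    max_norm = max(max_norm, float(x @ Q_pinv @ x))
```

The recorded maximum stays below $n$, in line with the projection argument above.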
\section{Proofs of Technical Lemmas}
\label{app:sec:proofs of technical lemmas}
\begin{proof}[Proof of \cref{lem:adjacent gap}]
Let $F^c$ hold.
Since $\Delta_1 = 1/2$, the result is trivial for $\ell = 1$.
Suppose $\ell > 1$, the lemma holds for all $\ell' < \ell$ and that there exists a pair $k, k+1 \in \mathcal K_{\ell i}$ satisfying
$\chi_{\ell i}
(\alpha(a_k^\ast) - \alpha(a_{k+1}^\ast)) > 8\Delta_\ell$.
Let $(\ell-1,j)$ be the parent of $(\ell, i)$, which satisfies $a_k^\ast, a_{k+1}^\ast \in \mathcal{A}_{\ell i} \subseteq \mathcal{A}_{\ell-1,j}$.
Since $K_{\ell - 1,j} \leq K_{\ell i}$ it follows from \cref{ass:decrease} and the definition of $F$ that $\chi_{\ell-1,j} \ge \chi_{\ell i}$ and hence
\begin{align*}
\chi_{\ell-1,j} \left(\alpha(a_k^\ast) - \alpha(a_{k+1}^\ast)\right) > 8\Delta_\ell = 4 \Delta_{\ell - 1}\,,
\end{align*}
where we used the definition of $\Delta_\ell = 2^{-\ell}$.
Given any $m, n \in \mathcal K_{\ell-1,j}$ with $m \le k < k+1 \le n$ we have
\begin{align*}
\ip{\hat{\theta}_{\ell-1, j}, a_m^\ast}
&\ge \chi_{\ell-1,j} \alpha(a_m^\ast) - \Delta_{\ell-1} \\
&\ge \chi_{\ell-1,j} \alpha(a_k^\ast) - \Delta_{\ell-1} \\
&> \chi_{\ell-1,j} \alpha(a_{k+1}^\ast) + 3\Delta_{\ell-1} \\
&\ge \chi_{\ell-1,j} \alpha(a_n^\ast) + 3\Delta_{\ell-1} \\
&\ge \ip{\hat{\theta}_{\ell-1, j}, a_n^\ast} + 2\Delta_{\ell-1}\,.
\end{align*}
The first and fifth inequalities hold because $F$ does not hold. The third inequality follows from the displayed inequality above.
Hence by the definition of the algorithm the items $a_k^\ast$ and $a_{k+1}^\ast$ will be split into different partitions by
the end of call $(\ell-1,j)$, which is a contradiction.
\end{proof}
\begin{proof}[Proof of \cref{lem:suboptimal gap}]
We use the same idea as the previous lemma.
Let $F^c$ hold.
The result is trivial for $\ell = 1$. Suppose $\ell > 1$, the lemma holds for $\ell' < \ell$ and there exists an $a \in \mathcal{A}_{\ell I_\ell}$ satisfying
$\chi_{\ell I_\ell} (\alpha(a_K^\ast) - \alpha(a)) > 8\Delta_\ell$.
By the definition of the algorithm and since $F$ does not hold, $a, a_K^\ast \in \mathcal{A}_{\ell-1,I_{\ell-1}}$ and hence
\begin{align*}
\chi_{\ell-1,I_{\ell-1}} \left(\alpha(a_K^\ast) - \alpha(a)\right) > 4\Delta_{\ell-1} \,.
\end{align*}
For any $m \in \mathcal K_{\ell-1,I_{\ell-1}}$ with $m \leq K$ it holds that
\begin{align*}
\ip{\hat{\theta}_{\ell-1,I_{\ell-1}}, a_m^\ast}
&\ge \chi_{\ell-1,I_{\ell-1}} \alpha(a_m^\ast) - \Delta_{\ell-1} \\
&\ge \chi_{\ell-1,I_{\ell-1}} \alpha(a_K^\ast) - \Delta_{\ell-1} \\
&> \chi_{\ell-1,I_{\ell-1}} \alpha(a) + 3\Delta_{\ell-1} \\
&\ge \ip{\hat{\theta}_{\ell-1,I_{\ell-1}}, a} + 2\Delta_{\ell-1} \,.
\end{align*}
Hence there exist at least $M_{\ell-1,I_{\ell-1}}$ items $b \in \mathcal{A}_{\ell-1,I_{\ell-1}}$ for which $\ip{\hat \theta_{\ell-1,I_{\ell-1}}, b - a} \geq 2\Delta_{\ell - 1}$.
But if this were true, then by the definition of the algorithm
(cf. line~7)
item $a$ would have been eliminated by the end of call $(\ell-1,I_{\ell-1})$, which is a contradiction.
\end{proof}
\begin{proof}[Proof of \cref{lem:first-subopt}]
Let $F^c$ hold.
Suppose that $i < I_\ell$ and abbreviate $m = M_{\ell i}$. Since $F$ does not hold it follows that $a \in \{a^*_k, \ldots,a^*_{k+m-1}\}$.
By \cref{lem:adjacent gap},
\begin{align*}
\chi_{\ell i} \left(\alpha(a_k^*) - \alpha(a)\right)
&\leq \chi_{\ell i} \left(\alpha(a_k^\ast) - \alpha(a_{k+m-1}^\ast)\right) \\
&= \sum_{j=0}^{m-2} \chi_{\ell i} \left(\alpha(a^\ast_{k+j}) - \alpha(a^\ast_{k+j+1})\right) \\
&\leq 8(m-1) \Delta_\ell\,.
\end{align*}
Now suppose that $i = I_\ell$. Then by \cref{lem:suboptimal gap} and the same argument as above,
\begin{align*}
&\chi_{\ell i} \left(\alpha(a_k^*) - \alpha(a)\right) \\
&\qquad = \chi_{\ell i} \left(\alpha(a_K^*) - \alpha(a)\right) + \chi_{\ell i} \left(\alpha(a_k^*) - \alpha(a_K^*)\right) \\
&\qquad \leq 8m\Delta_\ell\,.
\end{align*}
The claim follows by the definition of $m$.
\end{proof}
\begin{proof}[Proof of \cref{lem:regret on lower positions}]
The result is immediate for $\ell = 1$. From now on assume that $\ell > 1$ and let $(\ell-1,j)$ be the parent of $(\ell, i)$.
Since $F$ does not hold, $\{a^*_m : m \in \mathcal K_{\ell i}\} \subseteq \mathcal{A}_{\ell i}$.
It cannot be that $\ip{\hat \theta_{\ell-1,j}, a^*_m - a} > 0$ for all $m \in \mathcal K_{\ell i}$ with $m \leq k$, since this would
mean that there are $k-K_{\ell i}+1$ items that strictly precede item $a$ and hence item $a$ would not be put in position $k$ by the algorithm.
Hence there exists an $m \in \mathcal K_{\ell i}$ with $m \leq k$ such that $\ip{\hat \theta_{\ell-1,j}, a^*_m - a} \leq 0$ and
\begin{align*}
\chi_{\ell i} (\alpha(a_k^\ast) - \alpha(a))
&\leq \chi_{\ell i} (\alpha(a_m^\ast) - \alpha(a)) \\
&\le \chi_{\ell-1, j} (\alpha(a_m^\ast) - \alpha(a)) \\
&\leq \ip{\hat \theta_{\ell - 1,j}, a_m^\ast - a} + 2\Delta_{\ell - 1} \\
&\leq 2\Delta_{\ell - 1} = 4\Delta_\ell\,,
\end{align*}
which completes the proof.
\end{proof}
\begin{table}[thb!]
\centering
\begin{tabular}{l|r|r|r}
& {\tt RecurRank}\xspace{} & {\tt CascadeLinUCB}\xspace{} &{\tt TopRank}\xspace{}\\
\hline
CM &$0.53$ &$0.17$ &$106.10$\\
\hline
PBM &$68,943$ &$227,736$ &$745,177$\\
\hline
ML &$42,157$ &$180,256$ &$114,288$\\
\hline
\end{tabular}
\caption{The total regret under (a) CM, (b) PBM and (c) ML.
The numbers shown are averages over the $10$ random runs.}
\label{tab:reg}
\end{table}
\begin{table}[thb!]
\centering
\begin{tabular}{l|r|r|r}
Time (s) & \footnotesize {\tt RecurRank}\xspace & \footnotesize {\tt CascadeLinUCB}\xspace & \footnotesize {\tt TopRank}\xspace \\
\hline
CM &$51$ &$411$ &$176,772$\\
\hline
PBM &$310$ &$4,147$ &$367,509$\\
\hline
ML &$234$ &$916$ &$4,868$\\
\hline
\end{tabular}
\caption{The total running time of the compared algorithms in seconds (s). The results are averaged over $10$ random runs.}
\label{tab:time}
\end{table}
\section{Quantitative Results for Experiments}
\label{app:sec:quantity results for experiments}
The final regrets of the runs shown in \cref{fig:results} are given in \cref{tab:reg}, while the total running times (wall-clock time) are shown in \cref{tab:time}. The experiments were run on a Dell PowerEdge R920 with four Intel Xeon E7-4830 v2 CPUs (ten cores, 2.20GHz) and 512GB of memory.
\section{Discussion}\label{sec:discussion}
\paragraph{Assumptions}
Our assumptions are most closely related to the work by \citet{LKLS18ranking} and \citet{ZTG17}.
The latter work also assumes a factored model where the probability of clicking on an item factors into an examination probability and an attractiveness function.
None of these works makes use of features to model the attractiveness of items:
their models are a special case of ours where the item features are set to be orthogonal to each other (in particular, $d=L$).
Our assumptions on the examination probability function are weaker than those by \citet{ZTG17}.
Despite this, our regret upper bound is better by a factor of $K$ (when setting $d=L$) and the analysis is also simpler.
The paper by \citet{LKLS18ranking}
does not assume a factored model, but instead places assumptions directly on $v$.
They also assume a specific behaviour of the $v$ function under pairwise exchanges that is not required here.
Their assumptions are weaker in the sense that they do not assume the probability of clicking on position $k$ only depends on the identities of the items in positions $[k-1]$ and the attractiveness of the item in position $k$.
On the other hand, they do assume a specific behaviour of the $v$ function under pairwise exchanges that is not required by our analysis.
It is unclear which set of these assumptions is preferable.
\vspace{-0.3cm}
\paragraph{Lower bounds}
In the orthogonal case where $d = L$ the lower bound in \cite{LKLS18ranking} provides an example where the regret is
at least $\Omega(\sqrt{T K L})$. For $d \leq L$, the standard techniques for proving lower bounds for linear bandits can be used
to prove the regret is at least $\Omega(\sqrt{d TK})$, which except for logarithmic terms means our upper bound is suboptimal by a factor of
at most $\sqrt{K}$. We are not sure whether either the lower bound or the upper bound is tight.
\vspace{-0.3cm}
\paragraph{Open questions}
Only using data from the first position seems suboptimal, but is hard to avoid without making additional assumptions.
Nevertheless, we believe a small improvement should be possible here.
Another natural question is how to deal with the situation when the set of available items is changing. In practice this happens in many applications, either
because the features are changing or because new items are being added or removed.
Other interesting directions are to use weighted least-squares estimators to exploit the low variance when the examination probability and attractiveness are small.
Additionally one can use a generalised linear model instead of the linear model to model the attractiveness function, which may be analysed
using techniques developed by \citet{FCGS10} and \citet{JBNW17}.
Finally, it could be interesting to generalise to the setting where item vectors are sparse (see \citealt{APS12} and \citealt[Chap. 23]{LS18book}).
\section{Experiments}
\begin{figure*}[thb!]
\centering
\includegraphics[width=0.32\textwidth,height=3.5cm]{figs/CM.pdf}
\includegraphics[width=0.32\textwidth,height=3.5cm]{figs/PBM.pdf}
\includegraphics[width=0.32\textwidth,height=3.5cm]{figs/ML.pdf}
\caption{
The figures compare {\tt RecurRank}\xspace{} (red) with {\tt CascadeLinUCB}\xspace{} (black) and {\tt TopRank}\xspace{} (blue).
Subfigure (a) shows results for an environment that follows
the cascade click model (CM), while subfigure (b) does the same for the position-based click model (PBM).
On these figures, regret over time is shown (smaller is better).
In both models there are $L=10^4$ items and $K=10$ positions, and the feature space dimension is $d=5$.
Note the logarithmic scale of the $y$ axis on subfigure (a).
Subfigure (c) shows the regret over time on the MovieLens dataset with $L=10^3$, $d=5$, $K=10$.
All results are averaged over $10$ random runs. The error bars are standard errors.
}
\vspace{-0.1cm}
\label{fig:results}
\end{figure*}
We run experiments to compare {\tt RecurRank}\xspace{} with {\tt CascadeLinUCB}\xspace{} \cite{LWZC16,ZNK16} and {\tt TopRank}\xspace{} \cite{LKLS18ranking}.
\paragraph{Synthetic experiments}
We construct environments using the cascade click model (CM) and the position-based click model (PBM) with $L=10^4$
items in $d=5$ dimension to be displayed in $K=10$ positions.
We first randomly draw item vectors $\mathcal L$ and a weight vector $\theta_\ast$ in $d-1$ dimensions with each entry an independent standard Gaussian, then normalise, append one more dimension with constant value $1$, and divide by $\sqrt{2}$.
The transformation is as follows:
\begin{align}
x\mapsto \left(\frac{x}{\sqrt{2}\norm{x}},\ \frac{1}{\sqrt{2}}\right)\,. \label{eq:transform x}
\end{align}
This transformation of both the item vectors $x \in \mathcal L \subset \mathbb{R}^d$ and the weight vector $\theta_\ast$ guarantees that the attractiveness $\ip{\theta_\ast, x}$ of each item $x$ lies in $[0,1]$.
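A one-step verification (writing the transformed vectors with primes):
\begin{align*}
\ip{\theta_\ast', x'} = \frac{\ip{\theta_\ast, x}}{2\norm{\theta_\ast}\norm{x}} + \frac{1}{2} \in [0, 1]\,,
\end{align*}
since the first term lies in $[-1/2, 1/2]$ by the Cauchy--Schwarz inequality.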
The position bias for PBM is set to $\left(1,\frac{1}{2},\frac{1}{3},\ldots,\frac{1}{K}\right)$, which is often adopted in applications \cite{wang2018position}.
The evolution of the regret as a function of time is shown in \cref{fig:results}(a)(b). The regrets at the end and total running times are given in
\ifsup
\cref{app:sec:quantity results for experiments}.
\else
the supplementary material.
\fi
{\tt CascadeLinUCB}\xspace{} is best in CM but worst in PBM because of its modelling bias.
{\tt TopRank}\xspace{} takes much longer to converge than either
{\tt CascadeLinUCB}\xspace{} or {\tt RecurRank}\xspace since it neither exploits the specifics of the click model nor uses the linear structure.
\paragraph{MovieLens dataset} We use the
$20m$ MovieLens dataset \cite{harper2016movielens}
which contains $20$ million ratings for $2.7\times 10^4$ movies by $1.38 \times 10^5$ users. We extract $L=10^3$ movies with most ratings and $1.1 \times 10^3$ users who rate most and randomly split the user set to two parts, $U_1$ and $U_2$ with $\abs{U_1}=100$ and $\abs{U_2}=10^3$. We then use the rating matrix of users in $U_1$ to derive feature vectors with $d=5$ for all movies by singular-value decomposition (SVD). The resulting feature vectors $\mathcal L$ are also processed as \eqref{eq:transform x}. The true weight vector $\theta_\ast$ is computed by solving the linear system of $\mathcal L$ w.r.t. the rating matrix of $U_2$. The environment is the document-based click model (DBM) with $\mathcal L$ and $\theta_\ast$ and we set $K=10$. The performances are measured in regret, as shown in \cref{fig:results}(c). As can be seen, {\tt RecurRank}\xspace learns faster than the other two algorithms. Of these two algorithms, the performance of {\tt CascadeLinUCB}\xspace saturates: this is due to its incorrect bias.
\section{Introduction}
Let $\mathcal L$ be a large set of items to be ranked. For example, a database of movies, news articles or search results.
We consider a sequential version of the ranking problem where in each round
the learner chooses an ordered list of $K$ distinct items from $\mathcal L$ to show the user.
We assume the feedback comes in the form of clicks and the learner's objective is
to maximise the expected number of clicks over $T$ rounds.
Our focus is on the case where $\mathcal L$ is large (perhaps millions) and $K$ is relatively small (fifty or so).
There are two main challenges that arise in online ranking problems:
(a) The number of rankings grows exponentially in $K$, which makes learning one parameter for each ranking a fruitless endeavour.
Click models may be used to reduce the dimensionality of the learning problem, but balancing generality of the model with learnability is a serious challenge. The majority of previous
works on online learning to rank have used unstructured models, which are not well suited to our setting where $\mathcal L$ is large.
(b) Most click models depend on an unknown attractiveness function that endows the item set with an order. This yields a model with at least $|\mathcal L|$ parameters,
which is prohibitively large in the applications we have in mind.
The first challenge is tackled by adapting the flexible click models introduced in \cite{ZTG17,LKLS18ranking} to our setting.
For the second we follow previous works on bandits with large action sets by assuming the attractiveness function can be
written as a linear function of a relatively small number of features.
\paragraph{Contribution}
We make several contributions:
\begin{itemize}
\item A new model for
ranking problems with features is proposed that generalises previous work \cite{LWZC16,ZNK16,LLZ18} by relaxing the relatively restrictive assumptions
on the probability that a user clicks on an item. The new model is strictly more general than the models of previous works focusing on regret analysis for large item sets.
\item We introduce a novel polynomial-time algorithm called {\tt RecurRank}\xspace{}. The algorithm operates recursively over an increasingly fine set of partitions of $[K]$. Within each
partition the algorithm balances exploration and exploitation, subdividing the partition once it becomes sufficiently certain about the suboptimality
of a subset of items.
\item A regret analysis shows that the cumulative regret of {\tt RecurRank}\xspace{} is at most
$R_T = O(K \sqrt{d T \log(LT)})$, where $K$ is the number of positions, $L$ is the number of items
and $d$ is the dimension of the feature space. Even in the
non-feature case where $L = d$ this improves on the state-of-the-art by a factor of $\sqrt{K}$.
\end{itemize}
A comparison with the most closely related work is shown in Table \ref{table:full comparisons with related work}.
\begin{table*}
\caption{This table compares the settings and regret bounds of the most closely related works on online learning to rank. $T$ is the number of rounds, $K$ is the number of positions, $L$ is the number of items and $d$ is the feature space dimension. $\Delta$ is the minimal gap between the expected click rate of the best items and that of the suboptimal items.}
\label{table:full comparisons with related work}
\centering
\small
\renewcommand{\arraystretch}{1}
\begin{tabularx}{\textwidth}{@{}XlXp{3cm}}
\toprule
&Context &Click Model & Regret \\
\midrule
\citet{KSWA15}&- &Cascade Model (CM) &$\displaystyle \Theta\left(\frac{L}{\Delta}\log(T)\right)$\\
\midrule
\citet{LWZC16} \newline \citet{ZNK16} \newline \citet{li18clustering} & (Generalised) Linear Form &CM &$\displaystyle O\left(d\sqrt{TK}\log(T)\right)$ \\
\midrule
\citet{KKS16} &- &Dependent Click Model (DCM) &$\displaystyle \Theta\left(\frac{L}{\Delta}\log(T)\right)$\\
\midrule
\citet{LLZ18} & Generalised Linear Form &DCM &$\displaystyle O\left(dK\sqrt{TK}\log(T)\right)$\\
\midrule
\citet{LVC16} &- & Position-Based Model (PBM) with known position bias & $\displaystyle O\left(\frac{L}{\Delta}\log(T)\right)$\\
\midrule
\citet{ZTG17} &- &General Click Model &$\displaystyle O\left(\frac{K^3L}{\Delta}\log(T)\right)$\\
\midrule
\citet{LKLS18ranking} &- &General Click Model & $\displaystyle O\left(\frac{KL}{\Delta}\log(T)\right)$ \newline $\displaystyle O\left(\sqrt{K^3 L T \log(T)}\right)$ \newline $\displaystyle \Omega\left(\sqrt{KLT}\right)$ \\
\midrule
Ours &Linear Form &General Click Model &$\displaystyle O\left(K \sqrt{dT\log(LT)}\right)$\\
\bottomrule
\end{tabularx}
\vspace{-0.3cm}
\end{table*}
\paragraph{Related work}
Online learning to rank has seen an explosion of research in the last decade and there are multiple ways
of measuring the performance of an algorithm. One view, which we take in this article, is that the clicks themselves should be maximised.
An alternative is to assume an underlying relevance of all items in a ranking that is never directly observed, but can be inferred
in some way from the observed clicks. In all generality this latter setting falls into the partial monitoring framework \cite{Rus99}, but has
been studied in specific ranking settings \citep[and references therein]{Cha16}. See the article by \citet{HW11} for more discussion on
various objectives.
Maximising clicks directly is a more straightforward objective because clicks are an observed quantity.
Early work was empirically focused. For example, \citet{LC10} propose a modification of
LinUCB for contextual ranking and \citet{CH15} modify the optimistic algorithms
for linear bandits.
These algorithms do not come with theoretical guarantees, however. There has recently been significant effort towards
designing theoretically justified algorithms in settings of increasing complexity \cite{KSWA15,CMP15,ZNK16,KKS16,LVC16}.
These works assume the user's clicks follow a click model that connects properties of the shown ranking to the probability that a user clicks
on an item placed in a given position. For example, in the document-based model it is assumed that the probability that the user clicks on a shown
item only depends on the unknown attractiveness of that item and not its position in the ranking or the other items.
Other simple models include the position-based, cascade and dependent click models. For a survey of click models see \cite{CMR15}.
As usual, however, algorithms designed for specific models are brittle when the modelling assumptions are not met.
Recent work has started to relax the strong assumptions by making the observation that in all of the above click models the
probability of a user clicking on an item can be written as the product of the item's inherent attractiveness and the probability that the user
examines its position in the list. \citet{ZTG17}
use a click model where this decomposition is kept, but the assumption on how the examination probability of a position depends on the
list is significantly relaxed. This is relaxed still further by \citet{LKLS18ranking} who avoid the factorisation assumption by making assumptions directly
on the click probabilities, but the existence of an attractiveness function remains.
The models mentioned in the last paragraph do not make assumptions on the attractiveness function, which means the regret depends badly on the size of $\mathcal L$.
For certain simple click models, prior work has assumed that the attractiveness function is a linear function of an item's features, which makes the resulting algorithms suitable for large action sets.
This has been done for the cascade model \cite{LWZC16} and the dependent-click model \cite{LLZ18}.
While these works are welcome, the strong assumptions leave a lingering doubt that the models may not be a good fit for practical problems.
Of course, our work is closely related to stochastic linear bandits, first studied by \citet{AL99}
and refined by \citet{Aue02,AST11,VMKK14} and many others.
Ranking has also been examined in an adversarial framework by \citet{RKJ08}.
These settings are most similar to the stochastic position-based and document-based models, but with the additional robustness bought by the adversarial framework.
Another related setup is the rank-$1$ bandit problem in which the learner should choose just one of $L$ items to place in one of $K$ positions.
For example, the location of a billboard with the budget to place only one. These setups have a lot in common with the present one, but cannot be directly applied to ranking problems.
For more details see \cite{KKS17,KKS17b}.
Finally, we note that some authors do not assume an ordering of the item set provided by an attractiveness function.
The reader is referred to the work by \citet{SRG13} (which is a follow-up work to \citet{RKJ08}) where the learner's objective is to maximise the probability that a user clicks on
\emph{any} item, rather than rewarding multiple clicks. This model encourages diversity and provides an interesting alternative approach.
\section{Preliminaries}
\paragraph{Notation}
Let $[n] = \{1,2,\ldots,n\}$ denote the first $n$ natural numbers.
Given a set $X$ the indicator function is $\sind{X}$.
For vector $x \in \mathbb{R}^d$ and positive definite matrix $V \in \mathbb{R}^{d\times d}$ we let $\norm{x}_V^2 = x^\top V x$.
The Moore-Penrose pseudoinverse of a matrix $V$ is $V^\dagger$.
\vspace{-0.1cm}
\paragraph{Problem setup}
Let $\mathcal L \subset \mathbb{R}^d$ be a finite set of items, $L = |\mathcal L|$ and $K>0$ a natural number, denoting the number of positions.
A ranking is an injective function from $[K]$, the set of positions, to $\mathcal L$
and the set of all rankings is denoted by $\Sigma$.
We use uppercase letters like $A$ to denote rankings in $\Sigma$ and lowercase letters $a,b$ to denote items in $\mathcal L$.
The game proceeds over $T$ rounds. In each round $t\in [T]$ the learner chooses a ranking $A_t \in \Sigma$ and subsequently receives feedback in
the form of a vector $C_t \in \{0,1\}^K$ where $C_{tk} = 1$ if the user clicked on the $k$th position.
We assume that the conditional distribution of $C_t$ only depends on $A_t$, which means there exists an unknown function $v : \Sigma \times [K] \to [0,1]$ such that for all $A \in \Sigma$ and $k \in [K]$,
\begin{align}
\Prob{C_{tk} = 1 \mid A_t = A} = v(A, k)\,.
\label{eq:click}
\end{align}
\begin{remark}
We do not assume conditional independence of $(C_{tk})_{k=1}^K$.
\end{remark}
In all generality the function $v$ has $K|\Sigma|$ parameters, which is usually impractically large to learn in any reasonable time-frame.
A click model corresponds to making assumptions on $v$ that reduces the statistical complexity of the learning problem.
We assume a factored model:
\begin{align}
v(A, k) = \chi(A, k) \alpha(A(k))\,,
\label{eq:clickfactor}
\end{align}
where $\chi : \Sigma \times [K] \to [0,1]$ is called the examination probability and $\alpha : \mathcal L \to [0,1]$ is the attractiveness function.
We assume that attractiveness is linear in the action, which means there exists an unknown $\theta_\ast \in \mathbb{R}^d$ such that
\begin{align}
\alpha(a) = \ip{a, \theta_{\ast}} \quad \text{for all } a \in \mathcal L\,.
\label{eq:clicklinear}
\end{align}
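Under \eqref{eq:clicklinear}, the natural way to estimate $\theta_\ast$ from click feedback is regularised least squares, the usual ingredient of linear bandit algorithms. The sketch below is illustrative only (synthetic data, ridge parameter $\lambda = 1$, our own variable names); it is not claimed to be the exact estimator used by {\tt RecurRank}\xspace:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 5000
theta_star = rng.random(d)

# Items with nonnegative entries, rescaled so every attractiveness
# <x, theta_*> lies in [0, 1] (in the spirit of the paper's rescaling).
X = rng.random((n, d))
theta_star /= (X @ theta_star).max()
p = X @ theta_star                           # alpha(x) = <x, theta_*>
clicks = (rng.random(n) < p).astype(float)   # Bernoulli click feedback

# Ridge (regularised least-squares) estimate of theta_*.
lam = 1.0
V = X.T @ X + lam * np.eye(d)
theta_hat = np.linalg.solve(V, X.T @ clicks)
```

Since $\E[C \mid x] = \ip{x, \theta_\ast}$, the estimate concentrates around $\theta_\ast$ as the number of observations grows.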
Let $a_k^\ast$ be the $k$-th best item sorted in order of decreasing attractiveness. Then let $A^\ast = \left(a_1^\ast, \ldots, a_K^\ast\right)$.
In case of ties the choice of $A^\ast$ may not be unique. All of the results that follow hold for any choice.
The examination function satisfies three additional assumptions.
The first says the examination probability of position $k$ only depends on the identity of the first $k-1$ items and not their order:
\begin{assumption}\label{ass:perm}
$\chi(A, k) = \chi(A', k)$ for any $A, A' \in \Sigma$ with $A([k-1]) = A'([k-1])$.
\end{assumption}
The second assumption is that the examination probability on any ranking is monotone decreasing in $k$:
\begin{assumption}\label{ass:decrease}
$\chi(A, k+1) \leq \chi(A, k)$ for all $A \in \Sigma$ and $k \in [K-1]$.
\end{assumption}
The third assumption is that the examination probability on ranking $A^\ast$ is minimal:
\begin{assumption}\label{ass:min}
$\chi(A, k) \ge \chi(A^\ast, k) =: \chi_k^\ast$ for all $A \in \Sigma$ and $k \in [K]$.
\end{assumption}
All of these assumptions are satisfied by many standard click models, including the document-based, position-based and cascade models.
These assumptions are strictly weaker than those made by \citet{ZTG17}
and orthogonal to those of \citet{LKLS18ranking}, as we discuss in \cref{sec:discussion}.
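As a concrete sanity check, in the cascade model the examination probability is $\chi(A,k) = \prod_{j<k}(1 - \alpha(A(j)))$, which is invariant under permutations of the first $k-1$ items (Assumption~\ref{ass:perm}) and can be verified numerically to satisfy Assumptions~\ref{ass:decrease} and~\ref{ass:min} on a toy instance (the attractiveness values below are our own illustrative choices):

```python
import itertools
import numpy as np

alpha = {'a': 0.9, 'b': 0.5, 'c': 0.3, 'd': 0.1}   # toy attractiveness values
items, K = list(alpha), 3

def chi(A, k):
    # Cascade model: position k is examined iff none of the first k-1 shown
    # items was clicked. The product is order-invariant, so Assumption 1 holds.
    return np.prod([1.0 - alpha[a] for a in A[:k - 1]])

A_star = tuple(sorted(items, key=lambda a: -alpha[a])[:K])  # optimal list

for A in itertools.permutations(items, K):
    for k in range(1, K + 1):
        if k < K:   # Assumption 2: examination decreases with position
            assert chi(A, k + 1) <= chi(A, k) + 1e-12
        # Assumption 3: the optimal list A* minimises examination
        assert chi(A, k) >= chi(A_star, k) - 1e-12
```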
\paragraph{The learning objective}
We measure the performance of our algorithm in terms of the cumulative regret,
which is
\begin{align*}
R_T = T \sum_{k=1}^K v(A^\ast, k) - \E{\sum_{t=1}^T \sum_{k=1}^K v(A_t, k)} \,.
\end{align*}
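For intuition, the regret of a fixed non-learning policy can be computed directly in the document-based model, where $v(A,k) = \alpha(A(k))$ (i.e. $\chi \equiv 1$); the numbers below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = np.array([0.8, 0.6, 0.4, 0.2, 0.1])  # toy attractiveness, L = 5 items
K, T = 2, 1000

A_star = np.argsort(-alpha)[:K]              # optimal ranking
best_per_round = alpha[A_star].sum()

regret = 0.0
for _ in range(T):
    A_t = rng.choice(len(alpha), size=K, replace=False)  # uniform, non-learning
    # Document-based model: v(A, k) = alpha(A(k)), i.e. chi == 1 everywhere.
    regret += best_per_round - alpha[A_t].sum()
# For this non-learning policy the regret grows linearly in T.
```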
\begin{remark}
The regret is defined relative to $A^\ast$, but our
assumptions do not imply that
\begin{align}
A^\ast \in \argmax_{A \in \Sigma} \sum_{k=1}^K v(A, k)\,. \label{eq:maximizer}
\end{align}
The assumptions in all prior work in Table~\ref{table:full comparisons with related work} either directly or indirectly
ensure that \cref{eq:maximizer} holds. Our regret analysis does not rely on this, so we do not assume it. Note, however,
that the definition of regret is most meaningful when \cref{eq:maximizer} approximately holds.
\end{remark}
\paragraph{Experimental design}
Our algorithm makes use of an exploration `spanner'
that approximately minimises the covariance of the least-squares estimator.
Given an arbitrary finite set of vectors $X = \{x_1,\ldots,x_n\} \subset \mathbb{R}^d$ and distribution $\pi : X \to [0,1]$ let $Q(\pi) = \sum_{x \in X} \pi(x) xx^\top$.
By the Kiefer--Wolfowitz theorem \cite{KW60} there exists a $\pi$ called the $G$-optimal design such that
\begin{align}
\max_{x \in X} \norm{x}_{Q(\pi)^{\dagger}}^2 \leq d\,.
\label{eq:gopt}
\end{align}
As explained in Chap.~21 of \citet{LS18book}, $\pi$ may be chosen so that $|\{x : \pi(x) > 0\}| \leq d(d+1)/2$.
A $G$-optimal design $\pi$ for $X$ has the property that if each element $x\in X$ is observed
$n \pi(x)$ times for some sufficiently large $n$,
then the maximum uncertainty over the items of the resulting least-squares
estimate is (approximately) minimised.
Given a finite (multi-)set of vectors $X \subset \mathbb{R}^d$ we let $\operatorname{\textsc{Gopt}}(X)$ denote a $G$-optimal design distribution.
Methods from experimental design have been used for pure exploration in linear bandits \cite{SLM14,XHS17} and also finite-armed linear bandits \citep[Chap. 22]{LS18book} as
well as adversarial linear bandits \cite{BCK12}.
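For illustration, a $G$-optimal design can be approximated by a simple Frank--Wolfe (Fedorov--Wynn) iteration; the step size below is the standard one for $D$-optimal design, and by the Kiefer--Wolfowitz equivalence the iterates drive $\max_x \norm{x}^2_{Q(\pi)^\dagger}$ down towards $d$. This is a generic sketch, not the design routine used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 3, 40
X = rng.normal(size=(n, d))            # item features (rows)

pi = np.full(n, 1.0 / n)               # start from the uniform design
for _ in range(10000):
    Q = (X * pi[:, None]).T @ X        # Q(pi) = sum_x pi(x) x x^T
    g = np.einsum('ij,jk,ik->i', X, np.linalg.pinv(Q), X)  # ||x||^2_{Q(pi)^+}
    j = int(np.argmax(g))
    # Optimal Frank--Wolfe step toward concentrating mass on the worst item
    # (the standard step size for D-optimal design).
    step = (g[j] / d - 1.0) / (g[j] - 1.0)
    pi *= 1.0 - step
    pi[j] += step

Q = (X * pi[:, None]).T @ X
worst = float(np.einsum('ij,jk,ik->i', X, np.linalg.pinv(Q), X).max())
# Kiefer--Wolfowitz: at the optimum, max_x ||x||^2_{Q(pi)^+} equals d.
assert worst <= d + 0.05
```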
\section{Introduction}
In 1928 P. Jordan and E. Wigner introduced their celebrated
transformations \cite{1}, which express the Fermi operators $c^{\dag}_m$
and $c_m$ in terms of Pauli operators for a one-dimensional system.
Namely, the following relations hold (we use the notation generally
accepted in the contemporary literature on the subject):
\begin{equation}
c_m=\exp{\left( i\pi\sum^{m-1}_{j=1}\tau^+_j\tau^-_j\right)}\tau^-_m,\;\;\;
c_m^{\dag}=\exp{\left( i\pi\sum^{m-1}_{j=1}\tau^+_j\tau^-_j\right)}\tau^+_m,
\end{equation}
where $\tau^\pm_m$ are the Pauli operators \cite{12}, satisfying
anticommutation relations at a single node $m$ of the chain
\begin{equation}
\left\{\tau^+_m,\tau^-_m\right\}_+=1, \;\;\;\;\;
\left(\tau^+_m\right)^2=\left(\tau^-_m\right)^2=0 ,
\end{equation}
and commutation relations for different nodes of the chain
\begin{equation}
\left[\tau^\pm_m,\tau^\pm_{m'}\right]_-=0\hspace{1cm} (m\neq m').
\end{equation}
The Pauli operators $\tau^\pm_m$ can be expressed in terms of the Pauli
matrices $\tau^k_m$, $(k=x,y,z)$ \cite{18,Baxter} by the following
well known formulae:
\begin{eqnarray*}
\tau^{\pm}_m =\frac{1}{2}\left(\tau^z_m\pm i\tau^y_m\right), \;\;\;
\tau^x_m =-2\left(\tau^+_m\tau^-_m-\frac{1}{2} \right), \;\;\;
\tau^z_m =\tau^+_m+\tau^-_m .
\end{eqnarray*}
There exist also transformations that are inverse to (1.1):
\begin{equation}
\tau^-_m=\exp{\left( i\pi\sum^{m-1}_{j=1}c^{\dag}_jc_j\right)}c_m, \;\;\;
\tau_m^+=\exp{\left( i\pi\sum^{m-1}_{j=1}c^{\dag}_jc_j\right)}c^{\dag}_m
\end{equation}
and the following relation is satisfied: $\tau^+_m\tau^-_{m}=c^{\dag}_mc_m$.
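As an illustrative numeric check (not part of the original argument), the relations (1.1)--(1.3) can be verified directly on a small chain by representing the Pauli operators as Kronecker products, using the convention $\tau^{\pm}_m=\frac{1}{2}(\tau^z_m\pm i\tau^y_m)$ adopted above:

```python
import numpy as np

I2 = np.eye(2)
tz = np.array([[1, 0], [0, -1]], dtype=complex)
ty = np.array([[0, -1j], [1j, 0]])
tp, tm = (tz + 1j * ty) / 2, (tz - 1j * ty) / 2   # tau^+, tau^- as in the text

def site(op, m, M):
    """Embed a single-site operator at site m (0-indexed) of an M-site chain."""
    out = np.array([[1]], dtype=complex)
    for j in range(M):
        out = np.kron(out, op if j == m else I2)
    return out

M = 3
Tp = [site(tp, m, M) for m in range(M)]
Tm = [site(tm, m, M) for m in range(M)]

def c(m):
    # Jordan-Wigner string: exp(i*pi*n_j) = 1 - 2*n_j on each site j < m.
    string = np.eye(2 ** M, dtype=complex)
    for j in range(m):
        string = string @ (np.eye(2 ** M) - 2 * Tp[j] @ Tm[j])
    return string @ Tm[m]

C = [c(m) for m in range(M)]
for m in range(M):
    for n in range(M):
        acc = C[m] @ C[n].conj().T + C[n].conj().T @ C[m]
        assert np.allclose(acc, np.eye(2 ** M) if m == n else 0)
        assert np.allclose(C[m] @ C[n] + C[n] @ C[m], 0)  # {c_m, c_n} = 0
```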
The Jordan-Wigner (J-W) transformations (1.1), (1.4) are widely applied in
numerous domains of quantum as well as classical physics, especially in
quantum field theory, statistical mechanics of quantum and classical
systems, and in the physics of phase transitions and critical phenomena
\cite{12,2,4,5,6,7,9,10,11,Foda}. One of the most spectacular
examples of the application of Jordan-Wigner transformations is the outstanding
paper by Schultz, Mattis and Lieb \cite{11}, in which the authors introduced
a new approach to solving the two-dimensional Ising model. The key step of
the paper is the application of the J-W transformations (the transfer
matrix method was known earlier \cite{2}), followed by the reduction of the problem
to a problem of many fermions on a one-dimensional lattice, i.e. a
transformation to the fermion representation. The solution \cite{11} of the
Lenz-Ising-Onsager problem is, in our opinion, the most beautiful and
powerful of all solutions given before and after it. This seems to be
the proper place to mention a somewhat pessimistic note made by J. Ziman
\cite{14}, p.~209. The note concerns the various approaches to the solution
of the Lenz-Ising-Onsager problem
after switching on an external magnetic field of finite value. Ziman,
reporting the approach introduced in the paper \cite{11}, writes that
taking into account an external magnetic field extremely complicates the
operator representation of the transfer matrix in the second-quantization
representation and that {\it ``Exactly at this point the limitation of
Onsager's method is manifested''}.
We feel that there is some misunderstanding in this statement, because
Onsager's method does not employ the field-theoretical language of
creation and annihilation operators for fermions (or bosons), whereas the authors
of the paper \cite{11} do. Indeed, Onsager's method \cite{15}, \cite{16}
exhibits some limitations when applied to the
two-dimensional Ising model in an external magnetic field or to the
three-dimensional Ising model \cite{18}. The situation is completely
different for the approach of Schultz, Mattis
and Lieb \cite{11}, which uses, in all its beauty, the field-theoretical
language of second quantization. This method should no longer be treated as a
beautiful trick but as a powerful tool with great prospects for
generalizations. The first step towards such a generalization, leading to the
solutions of the Lenz-Ising-Onsager problems for $d=2$, $H\neq 0$ and for
$d=3$, $H\neq 0$, is (in addition to the transfer matrix method) the
introduction of the J-W transformations generalized to two-dimensional and
three-dimensional systems (see \cite{25,26}).
Further in this paper the transformations (1.1) and (1.4) will be called
one-dimensional Jordan-Wigner transformations. It is well known that
instead of transformations (1.1), (1.4) we can introduce transformations of
the form:
\begin{equation}
\label{6a} b_m=\exp\left(i\pi\sum^{\mbox{\eu m}}_{p=m+1}\tau^+_p\tau^-_p
\right)\tau_m^-,\hspace{1cm}b_m^{\dag}=\exp\left(i\pi\sum^{\mbox{\eu m}}_{p=m+1}
\tau^+_p\tau^-_p\right)\tau_m^+,
\end{equation}
and the transformations inverse to them. These transformations will be called
inversional. Operators $(b^{\dag}_m,b_m)$ are obviously the Fermi creation
and annihilation operators. Operators $(c^{\dag}_m, c_m)$ and $(b^{\dag}_m,
b_m)$ are connected by relations of the type:
\begin{equation}
\label{6b} c^{\dag}_m=-(-1)^{\hat{\mbox{\eu m}}}b^{\dag}_m,\hspace{.5cm}
c_m=(-1)^{\hat{\mbox{\eu m}}}b_m,\hspace{0.5cm} c^{\dag}_mc_m=b^{\dag}_mb_m,
\hspace{0.5cm}m=1,2,\ldots,\mbox{\eu m}
\end{equation}
where $\hat{\mbox{\eu m}}=\sum_m c^{\dag}_mc_m=\sum_mb^{\dag}_mb_m$ is the
operator of the total number of fermions, as follows from (1.1),
(1.4)--(\ref{6a}). It is easy to show that
\begin{equation}
\label{6c} [c^{\dag}_m,b^{\dag}_{m'}]_-=\ldots=[c_m,b_{m'}]_-=0,
\hspace*{.8cm} m\neq m',\hspace{1.5cm}[c^{\dag}_m,b_m]_-=-(-1)^{\hat{\mbox
{\eu m}}},
\end{equation}
i.e. these operators commute for different $m$.
Recently a few papers \cite{20,21,22,23} have appeared, investigating
generalizations of the J-W transformations for lattice systems to higher
dimensions. Fradkin \cite{20} and Y.R. Wang \cite{21} consider the
generalization to the two-dimensional case (2D), while Huerta and Zanelli
\cite{22} and S. Wang \cite{23} consider the three-dimensional case (3D). The
latter two authors also show that the appropriate generalizations to the 4D and
higher-dimensional cases are straightforward. Later in this paper we refer to
some results of the papers \cite{22,23}, especially of the paper of S.
Wang \cite{23}, so let us briefly recall its most important points.
In the paper \cite{23} solutions were given to the equations
(here we preserve the original notation of the author):
\begin{equation}
S^-({\bf x})=U({\bf x})c({\bf x}), \;\;\;
S^+({\bf x})=c^{\dag}({\bf x})U^+({\bf x}),\;\;\;
U({\bf x})=\exp{\left[i\pi\sum_{\bf z}w({\bf x,z})c^{\dag}({\bf z})c({\bf
z})\right]}
\end{equation}
where $S^{\pm}({\bf x})$ are Pauli operators, $c^{\dag}({\bf x}), c({\bf x})$ are
Fermi operators, and the function $w({\bf x, z})$ should satisfy the
following condition:
\begin{equation}
e^{i\pi w({\bf z,x})}=- e^{i\pi w({\bf x, z})},\hspace{1cm}{\bf x}\neq{\bf z}
\end{equation}
The solution of equation (1.9) has the following form:
for the 1D case, $w(x,z)=\Theta (x-z)$, where $\Theta (x)$ is the step
function (the unit-valued Heaviside function); for the 2D case,
\begin{equation}
w({\bf x, z})=\Theta (x_1-z_1)(1-\delta_{x_1z_1})+\Theta (x_2-z_2) \delta_
{x_1z_1}
\end{equation}
where $x_{1,2}$ and $z_{1,2}$ are the components of the vectors ${\bf x}$ and
${\bf z}$, respectively, in a chosen coordinate system $({\bf e}_1,{\bf e}_2)$,
and $\delta_{xz}$ is the one-dimensional lattice delta function. Finally, for
the 3D case S. Wang \cite{23} writes the solution:
\begin{eqnarray}
w({\bf x , z})&=&\Theta (x_1-z_1)(1-\delta_{x_1z_1})+\Theta
(x_2-z_2)\delta_{x_1z_1}(1-\delta_{x_2z_2})\nonumber\\ &+&\Theta
(x_3-z_3)\delta_{x_1z_1}\delta_{x_2z_2}.
\end{eqnarray}
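Condition (1.9) for the solutions (1.10) and (1.11) is easy to verify numerically; the sketch below (our own illustration) uses the asymmetric Heaviside function $\Theta^{+}$ with $\Theta^{+}(0)=0$, one of the admissible step functions discussed in the text:

```python
import itertools
import numpy as np

def theta(t):
    # Asymmetric unit-valued Heaviside function Theta^+ (an assumed choice;
    # the symmetric Theta_s requires special care, as noted in the text).
    return 1 if t > 0 else 0

def w2(x, z):
    # 2D solution (1.10)
    d1 = int(x[0] == z[0])
    return theta(x[0] - z[0]) * (1 - d1) + theta(x[1] - z[1]) * d1

def w3(x, z):
    # 3D solution (1.11)
    d1, d2 = int(x[0] == z[0]), int(x[1] == z[1])
    return (theta(x[0] - z[0]) * (1 - d1)
            + theta(x[1] - z[1]) * d1 * (1 - d2)
            + theta(x[2] - z[2]) * d1 * d2)

for w, dim in ((w2, 2), (w3, 3)):
    pts = list(itertools.product(range(3), repeat=dim))
    for x in pts:
        for z in pts:
            if x != z:
                # Condition (1.9): e^{i pi w(z,x)} = -e^{i pi w(x,z)}.
                assert np.isclose(np.exp(1j * np.pi * w(z, x)),
                                  -np.exp(1j * np.pi * w(x, z)))
```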
It is easy to see that the role of the step function $\Theta (x)$ could
be played by any of the three Heaviside functions: $\Theta_s (x)$, the
symmetric unit-valued function, or $\Theta^{\pm}(x)$, the asymmetric unit-valued
functions. The symmetric $\Theta_s (x)$ must be handled with special care.
We will not explore here the various topological aspects of $w({\bf x,z})$,
which were briefly discussed in the papers \cite{20,21,22} for the 2D case,
and in the paper \cite{23} for the 3D case, although from a different point
of view. More specifically, in the paper \cite{22} the generalized JW
transformations for the 3D case are interpreted as gauge transformations with
topological charge equal to 1. These transformations are more complicated
than the transformations (1.11) from the paper \cite{23}.
Here we consider generalized transformations of the JW type for lattice
systems from a different point of view. We will show that the solutions (1.10)
and (1.11) of equation (1.9) are not unique. More precisely, we will show
that in the 2D case, as well as in the 3D case (and also in higher
dimensions), it is possible to introduce two or more sets of Fermi
creation and annihilation operators. Moreover, there is a nontrivial
commutation algebra between the various sets of Fermi operators. This fact was
not noticed by the authors of the papers \cite{20,21,22,23} because of, as
it seems to us, insufficiently clear and simple notation. Below we
point out some possible applications of the generalized transformations of
the JW type, in the form postulated by us, to the analysis of lattice models
of statistical physics and to graph theory, in connection with the problem of
calculating generating functions for self-avoiding walks (Hamiltonian
cycles on a simple rectangular lattice, see \cite{26,27,28}).
This problem is still under investigation, as can be seen, e.g., from the
recent paper of Gujrati \cite{24} devoted to a geometric description of phase
transitions in terms of diagrams and their growth functions. Moreover, to
the author's knowledge, the multidimensional
transformations of the JW type have not previously been given in such a simple
and convenient form, nor have their properties been examined in the sense
discussed above (the algebra of commutation relations, etc.). In what follows
we adopt the notation accepted in the contemporary literature
\cite{12,Baxter,11,13,14} and we keep to the spirit and ideas of the
pioneering paper of Jordan and Wigner \cite{1}.
\section{The two-dimensional transformation of Jordan-Wigner type}
Let us introduce three sets of $2^{\mbox{\eu nm}}$-dimensional Pauli matrices
$\tau^{x,y,z}_{nm}$, which can be defined in the following way \cite{18},
with $n=1,2,\ldots,\mbox{\eu n}$ and $m=1,2,\ldots,\mbox{\eu m}$:
\begin{eqnarray}
\tau^x_{nm}=1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\ldots 1\hspace{-1.1mm}\mbox{I}\bigotimes\tau^x\bigotimes 1\hspace{-1.1mm}\mbox{I}\ldots
1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\hspace{2cm}(\mbox{{\eu nm} -- factors}) \nonumber\\
\tau^y_{nm}=1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\ldots 1\hspace{-1.1mm}\mbox{I}\bigotimes\tau^y\bigotimes 1\hspace{-1.1mm}\mbox{I}\ldots
1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\hspace{2cm}(\mbox{{\eu nm} -- factors}) \nonumber\\
\tau^z_{nm}=1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\ldots1\hspace{-1.1mm}\mbox{I}\bigotimes\tau^z\bigotimes 1\hspace{-1.1mm}\mbox{I}\ldots
1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\bigotimes 1\hspace{-1.1mm}\mbox{I}\hspace{2cm}(\mbox{{\eu nm} -- factors})\label{7}
\end{eqnarray}
where the standard Pauli matrices $\tau^{x,y,z}$ are situated in these tensor
products at the $nm$-th place (1\hspace{-1.2mm}I is the unit $2\times 2$ matrix).
Further, the doubly indexed Pauli operators $\tau^{\pm}_{nm}$ are defined
analogously, i.e. as
\begin{equation}
\tau^{\pm}_{nm}=\frac{1}{2}\left(\tau^z_{nm}\pm i\tau^y_{nm}\right),\;\;\;
\tau^x_{nm}=-2\left(\tau^+_{nm}\tau^-_{nm}-\frac{1}{2} \right),\;\;\;
\tau^z_{nm}=\tau^+_{nm}+\tau^-_{nm}\label{8}
\end{equation}
The Pauli operators satisfy anticommutation relations at the
same lattice node $(nm)$:
\begin{equation}
\left\{\tau^+_{nm},\tau^-_{nm}\right\}_+=1, \;\;\;\;
\left(\tau^+_{nm}\right)^2=\left(\tau^-_{nm}\right)^2=0 ,
\label{9}
\end{equation}
and commutation relations for different nodes:
\begin{equation}
\left[\tau^\pm_{nm},\tau^\pm_{n'm'}\right]_-=0\hspace{2cm}
(nm)\neq (n'm').
\label{10}
\end{equation}
In other words, the Pauli operators (\ref{8}--\ref{10}) behave as Fermi
operators at the same node and as Bose operators at different nodes. To
accomplish the transition to the Fermi representation for the whole
lattice, i.e. to the representation of Fermi creation and annihilation
operators ($c^{\dag}_{nm},c_{nm}$) for the whole lattice, we introduce, in
analogy to the one-dimensional J-W transformations (1.1), (1.4),
two-dimensional transformations of J-W type, enabling us to express the Fermi
operators of a two-dimensional system ($c^{\dag}_{nm},c_{nm}$) by the Pauli
operators ($\tau^{\pm}_{nm}$). It turns out that in the two-dimensional case
there exist two sets of such transformations (we do not include here the
inverse transformations of the type (\ref{6a})), which we write in the form:
\begin{eqnarray}
\alpha^{\dag}_{nm}&=&\exp\left( i\pi\sum^{n-1}_{k=1}\sum^{\mbox{\eu
m}}_{l=1}\tau^+_{kl} \tau^-_{kl}
+i\pi\sum^{m-1}_{l=1}\tau^+_{nl}\tau^-_{nl}\right)\tau^+_{nm}\nonumber,\\
\alpha_{nm}&=&\exp\left( i\pi\sum^{n-1}_{k=1}\sum^{\mbox{\eu
m}}_{l=1}\tau^+_{kl} \tau^-_{kl}
+i\pi\sum^{m-1}_{l=1}\tau^+_{nl}\tau^-_{nl}\right)\tau^-_{nm},
\label{11}\\
\beta^{\dag}_{nm}&=&\exp\left( i\pi\sum^{\mbox{\eu n}}_{k=1}\sum^{m-1}
_{l=1}\tau^+_{kl} \tau^-_{kl}
+i\pi\sum^{n-1}_{k=1}\tau^+_{km}\tau^-_{km}\right)\tau^+_{nm}\nonumber,\\
\beta_{nm}&=&\exp\left( i\pi\sum^{\mbox{\eu n}}_{k=1}\sum^{m-1}_{l=1}\tau^+_{kl} \tau^-_{kl}
+i\pi\sum^{n-1}_{k=1}\tau^+_{km}\tau^-_{km}\right)\tau^-_{nm}.
\label{12}
\end{eqnarray}
It is straightforward to check, using the relations (\ref{9}) and (\ref{10}),
that the operators
$(\alpha^{\dag}_{nm},\alpha_{nm})$ and $(\beta^{\dag}_{nm},\beta_{nm})$ are
indeed Fermi operators for the whole lattice, i.e. that they satisfy
anticommutation relations for all nodes:
\begin{eqnarray}
\{\alpha^{\dag}_{nm},\alpha_{nm}\}_+=1,\;\;\;\;
(\alpha^{\dag}_{nm})^2=(\alpha_{nm})^2=0,\\
\{\alpha^{\dag}_{nm},\alpha^{\dag}_{n'm'}\}_+=\{\alpha^{\dag}_{nm},\alpha_
{n'm'}\}_+=\{\alpha_{nm},\alpha_{n'm'}\}_+=0\hspace{1cm}(nm)\neq(n'm'),\nonumber
\end{eqnarray}
and analogously for $\beta$-operators. The inverse transformations to
(\ref{11}-\ref{12}) are:
\begin{eqnarray}
\tau^+_{nm}&=&\exp\left( i\pi\sum^{n-1}_{k=1}\sum^{\mbox{\eu
m}}_{l=1}\alpha^{\dag}_{kl} \alpha_{kl}
+i\pi\sum^{m-1}_{l=1}\alpha^{\dag}_{nl}\alpha_{nl}\right)\alpha^{\dag}_{nm}\nonumber,\\
\tau_{nm}^+&=&\exp\left( i\pi\sum^{\mbox{\eu n}}_{k=1}\sum^{
m-1}_{l=1}\beta^{\dag}_{kl} \beta_{kl}
+i\pi\sum^{n-1}_{k=1}\beta^{\dag}_{km}\beta_{km}\right)\beta^{\dag}_{nm},
\label{14}
\end{eqnarray}
and analogously for $\tau^-_{nm}$.
The check is easily performed once we take into
consideration the following relation:
\begin{equation}
\exp{\left(
i\pi\sum_{nm}\tau^+_{nm}\tau^-_{nm}\right)}=\prod_{nm}(1-2\tau^+_{nm}\tau^-_{nm})
=\prod_{nm}(\tau^x_{nm}).\label{15}
\end{equation}
It is also obvious that
\begin{equation}
\tau^+_{nm}\tau^-_{nm}=\alpha^{\dag}_{nm}\alpha_{nm}, \;\;\;
\tau^+_{nm}\tau^-_{nm}=\beta^{\dag}_{nm}\beta_{nm}, \;\;\;
\alpha^{\dag}_{nm}\alpha_{nm}=\beta^{\dag}_{nm}\beta_{nm}.\label{16}
\end{equation}
The last formula in (\ref{16}) expresses the condition of local equality of
occupation numbers for $\alpha$- and $\beta$-fermions in the same node.
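The anticommutation relations and the local equality of occupation numbers above can be verified numerically on a small $2\times 2$ lattice, again representing the Pauli operators as Kronecker products (an illustrative check with our own variable names, not part of the original argument):

```python
import numpy as np

I2 = np.eye(2)
tz = np.array([[1, 0], [0, -1]], dtype=complex)
ty = np.array([[0, -1j], [1j, 0]])
tp, tm = (tz + 1j * ty) / 2, (tz - 1j * ty) / 2

N = M = 2                                # a 2 x 2 lattice
S = N * M
idx = lambda n, m: n * M + m             # flatten node (n, m) row-wise

def site(op, s):
    out = np.array([[1]], dtype=complex)
    for j in range(S):
        out = np.kron(out, op if j == s else I2)
    return out

Tp = [site(tp, s) for s in range(S)]
Tm = [site(tm, s) for s in range(S)]
P = [np.eye(2 ** S) - 2 * Tp[s] @ Tm[s] for s in range(S)]  # exp(i*pi*n_s)

def alpha_op(n, m):
    # String over all full rows k < n, plus sites l < m of row n.
    string = np.eye(2 ** S, dtype=complex)
    for k in range(n):
        for l in range(M):
            string = string @ P[idx(k, l)]
    for l in range(m):
        string = string @ P[idx(n, l)]
    return string @ Tm[idx(n, m)]

def beta_op(n, m):
    # String over all full columns l < m, plus sites k < n of column m.
    string = np.eye(2 ** S, dtype=complex)
    for l in range(m):
        for k in range(N):
            string = string @ P[idx(k, l)]
    for k in range(n):
        string = string @ P[idx(k, m)]
    return string @ Tm[idx(n, m)]

A = {(n, m): alpha_op(n, m) for n in range(N) for m in range(M)}
B = {(n, m): beta_op(n, m) for n in range(N) for m in range(M)}

for u in A:
    for v in A:
        acc = A[u] @ A[v].conj().T + A[v].conj().T @ A[u]
        assert np.allclose(acc, np.eye(2 ** S) if u == v else 0)
    # Local equality of occupation numbers for alpha- and beta-fermions.
    assert np.allclose(A[u].conj().T @ A[u], B[u].conj().T @ B[u])
```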
It is a simple matter to see that in the discrete case the solution (1.10)
can be identified with the first pair of transformations (2.8), if we
introduce the following correspondence:
\begin{equation}
x_1\to n,\hspace{1cm} z_1\to k;\hspace{1cm} x_2\to m,\hspace{1cm}z_2\to l
\end{equation}
Then we have
$$ \sum_{\bf z}w({\bf x,z})\alpha^{\dag}({\bf z})\alpha ({\bf z}) \to
\sum^{n-1}_{k=1}\sum^{\mbox{\eu m}}_{l=1}\alpha^{\dag}_{kl}\alpha_{kl}+\sum^{m-1}_{l=1}
\alpha^{\dag}_{nl}\alpha_{nl}. $$
Analogously, for
\begin{equation}
x_1\to m,\hspace{1cm} z_1\to l;\hspace{1cm} x_2\to n,\hspace{1cm}z_2\to k ,
\end{equation}
we obtain the second pair of transformations (2.8). From (2.11-2.12) it
follows that the pair of transformations (2.8) can be written in the form
(in the notation from \cite{23}):
\begin{equation}
w({\bf x,z})=\Theta(x_i-z_i)(1-\delta_{x_iz_i})+\Theta(x_j-z_j)\delta_
{x_iz_i},\hspace{1cm}(i\neq j)=1,2.
\end{equation}
Two other inverse transformations, analogous to the transformations (1.5) in
the one-dimensional case, are left out of (2.13). Obviously, the
correspondences (2.11)--(2.12) correspond to the symmetric (permutation)
group $S_2$ and, therefore, the complete set of transformations for the
discrete 2D case also corresponds to the group $S_2$. This set of
transformations should be complemented by the group of inverse
transformations, analogous to the transformations (1.5) in the
one-dimensional case; these inverse transformations can always be written
out if necessary.
It follows from (\ref{11}--\ref{12}), (\ref{14}) and (\ref{16}) that
operators $(\alpha^{\dag}_{nm},\alpha_{nm})$ and $(\beta^{\dag}_{nm},\beta_
{nm})$ are connected by relations of the form:
\begin{eqnarray}
\alpha^{\dag}_{nm}=\exp(i\pi \phi_{nm})\beta^{\dag}_{nm}, \;\;\;\;
\alpha_{nm}=\exp(i\pi \phi_{nm})\beta_{nm}\nonumber\\
\phi_{nm}=\left[ \sum^{\mbox{\eu n}}_{k=n+1}\sum^{m-1}_{l=1}
+\sum^{n-1}_{k=1}\sum^{\mbox{\eu m}}_{l=m+1}\right]
\alpha^{\dag}_{kl}\alpha_{kl}=\left[\ldots\right]\beta^{\dag}_{kl}\beta_{kl}
\label{17}
\end{eqnarray}
It is obvious that the operators $\phi_{nm}$ commute with the operators
$(\alpha^{\dag}_{nm},\alpha_{nm})$ and $(\beta^{\dag}_{nm},\beta_{nm})$
at the same node:
\begin{equation}
[\phi_{nm},\alpha^{\dag}_{nm}]_-=\ldots =\ldots =[\phi_{nm},\beta_{nm}]_-=0,
\end{equation}
because the occupation numbers with the index $(nm)$ drop out of the
$\phi_{nm}$.
It is easy to see that in the one-dimensional case the transformations
introduced above become identical to the one-dimensional J-W
transformations (1.1), (1.4). We stress, however, that the inverse
transition does not exist, i.e. there is no transformation from the
one-dimensional J-W transformations (1.1), (1.4) to their two-dimensional
analogue (\ref{11}-\ref{12}). In other words, a ``derivation'' of the
two-dimensional transformations (\ref{11}-\ref{12}) from the one-dimensional
J-W transformations (1.1), (1.4) is not possible using, for example, the
lexicographic order \cite{19} for the doubly indexed variables
$\tau^\pm_{nm}$, or any other type of ordering: in that case one merely
recovers the well-known one-dimensional transformations, nothing more. Of
course, we make extensive use of the ideas of Jordan and Wigner in obtaining
the multidimensional analogue of the transformations.
Now, let us find the transposition relations for the operators
$(\alpha^{\dag}_{nm},\alpha_{nm})$ and $(\beta^{\dag}_{n'm'},\beta_{n'm'})$.
Firstly, using the relations:
\begin{equation}
\exp(i\pi\alpha^{\dag}_{nm}\alpha_{nm})=(1-2\alpha^{\dag}_{nm}\alpha_{nm})=
(-1)^{\alpha^{\dag}_{nm}\alpha_{nm}},
\end{equation}
it is easy to find the following transposition relations:
\begin{equation}
\{(-1)^{\alpha^{\dag}_{nm}\alpha_{nm}},\alpha_{nm}\}_+=
\{(-1)^{\alpha^{\dag}_{nm}\alpha_{nm}},\alpha^{\dag}_{nm}\}_+=
\{(-1)^{\alpha^{\dag}_{nm}\alpha_{nm}},\beta_{nm}\}_+=
\{(-1)^{\alpha^{\dag}_{nm}\alpha_{nm}},\beta^{\dag}_{nm}\}_+=0
\label{20}
\end{equation}
where the equality of occupation numbers for the $\alpha$ and $\beta$ fermions
(\ref{16}) has been used. A straightforward calculation gives the following
transposition relations:
\begin{eqnarray}
\{\alpha^{\dag}_{nm},\beta_{nm}\}_+\!\!&\!\!=\!\!&\!\!
\{\beta^{\dag}_{nm},\alpha_{nm}\}_+=(-1)^{\phi_{nm}}\label{21}\\
\left[ \alpha_{nm},\beta_{n'm'} \right] _{-} \!\!&\!\!=\!\!&\!\!
\ldots =
\left[ \alpha^{\dag}_{nm},\beta^{\dag}_{n'm'} \right] _{-}=0,\mbox{for}
\left(\begin{array}{c c}
n'\leq n-1,&m'\geq m+1\\
n'\geq n+1,&m'\leq m-1
\end{array}\right)\nonumber\\ & & \\
\{\alpha_{nm},\beta_{n'm'}\}_{+}\!\!&\!\!=\!\!&\!\!
\{\alpha^{\dag}_{nm},\beta^{\dag}_{n'm'}\}_{+}=0\label{23}
\end{eqnarray}
in all remaining cases. The operators $\phi_{nm}$ in (\ref{21}) are defined
by formula (\ref{17}), and we use the equality:
$$\exp (i\pi\phi_{nm})=(-1)^{\phi_{nm}}.$$
The transposition relations (\ref{21}-\ref{23}) are illustrated in
Fig.~1, where the distinguished operator $\alpha_{nm}$ for a fixed node
$(nm)$ commutes with the $\beta$-operators of the nodes $(n',m')$ marked by a
cross $(\times )$. For all other nodes the $\alpha$- and $\beta$-operators
anticommute. On the other hand, we can easily obtain the expression for the
commutator at the same node $(nm)$:
\begin{equation}
\alpha_{nm}\beta^{\dag}_{nm}-\beta^{\dag}_{nm}\alpha_{nm}=(-1)^{\beta^{\dag}_
{nm}\beta_{nm}}(-1)^{\phi_{nm}}
\end{equation}
In this way we obtain a rather specific structure of the transposition
relations for $\alpha$- and $\beta$-operators on the lattice, even though
this structure possesses a certain symmetry.
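The mixed transposition relations (\ref{21}-\ref{23}) can be checked directly on a small lattice. The following sketch is an added illustration (0-based indices; the row-major string plays the role of the $\alpha$-transformation and the column-major string that of the $\beta$-transformation) on a $2\times 2$ lattice:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
tm1 = (sz - 1j * sy) / 2          # single-site tau^-; here 1 - 2 tau^+ tau^- = tau^x

def site_op(op, p, nsites=4):
    """Embed a single-site operator at tensor slot p of a 4-site chain."""
    mats = [I2] * nsites
    mats[p] = op
    return reduce(np.kron, mats)

N, M = 2, 2
row_order = [(n, m) for n in range(N) for m in range(M)]   # alpha ordering
col_order = [(n, m) for m in range(M) for n in range(N)]   # beta ordering
pos = {nm: i for i, nm in enumerate(row_order)}            # fixed slot per node

def jw(nm, order):
    """tau^- at node nm, dressed with the JW string over preceding nodes."""
    op = site_op(tm1, pos[nm])
    for prev in order[:order.index(nm)]:
        op = site_op(sx, pos[prev]) @ op
    return op

alpha = {nm: jw(nm, row_order) for nm in row_order}
beta = {nm: jw(nm, col_order) for nm in row_order}

def commutes(A, B):
    return np.allclose(A @ B - B @ A, 0)

def anticommutes(A, B):
    return np.allclose(A @ B + B @ A, 0)

# (2.22): alpha_{nm} and beta_{n'm'} commute for n' < n, m' > m (and vice versa)
assert commutes(alpha[(1, 0)], beta[(0, 1)])
assert commutes(alpha[(0, 1)], beta[(1, 0)])
# (2.23): in the remaining different-node cases they anticommute
assert anticommutes(alpha[(0, 0)], beta[(0, 1)])
assert anticommutes(alpha[(0, 0)], beta[(1, 1)])
# (2.21) at the same node: {alpha^+, beta} = (-1)^{phi}; here phi = n at node (1,0)
ad = alpha[(0, 1)].conj().T
assert np.allclose(ad @ beta[(0, 1)] + beta[(0, 1)] @ ad,
                   site_op(sx, pos[(1, 0)]))
```

The last assertion uses $(-1)^{\hat n}=1-2\hat n=\tau^x$ at the node entering $\phi_{nm}$, so the anticommutator reproduces the phase operator of (\ref{21}).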
Here it is appropriate to make a small digression and compare the situation
described above with the one arising in the method of second quantization.
In the latter case, for a system consisting of different kinds of particles,
operators of second quantization are introduced. The operators assigned to
bosons and fermions commute. On the other hand, for operators assigned to
different fermions it is usually postulated without proof \cite{13} that, in
the framework of the non-relativistic theory, these operators can be
formally treated either as commuting or as anticommuting: under both
assumptions about the transposition relations the method of second
quantization gives the same results. In the relativistic theory, however,
where transmutations of particles are possible, the creation and
annihilation operators of different fermions must be treated as
anticommuting.
In our case we operate formally with ``quasiparticles'' of the
$\alpha$- and $\beta$-type which, treated separately, obey Fermi
statistics. In contrast, the transposition relations between the members
of these two sets of operators depend on the relative position of the
``quasiparticles'' in the nodes of the lattice. To the best of the
author's knowledge, such a situation has not occurred before in quantum physics.
The fact that in the two-dimensional case there are two sets of
transformations (neglecting the inverse transformations of the type
(\ref{6a}), discussed below) of the JW type (\ref{11}) and (\ref{12}) is, in
a way, justified if we consider the statistical mechanics of two- and
three-dimensional lattice models with nearest-neighbour interactions. Namely,
suppose that for one of these models (for example, the Ising model, or
another model describing two-level states of some system) we have managed to
express the Hamiltonian in terms of the doubly indexed Pauli operators
$\tau^{\pm}_{nm}$, with components of the Hamiltonian of the form:
\begin{equation}
\tau^+_{nm}\tau^+_{n+1,m},\hspace{1cm}
\tau^+_{nm}\tau^-_{n,m+1},\hspace{.5cm}etc. \label{25}
\end{equation}
Then it is easy to obtain for the first component of (\ref{25}), after
application of the transformations (\ref{12}), the expression:
\begin{equation}
\tau^+_{nm}\tau^+_{n+1,m}=\beta^{\dag}_{nm}(1-2\beta^{\dag}_{nm}\beta_{nm})
\beta^{\dag}_{n+1,m}=\beta^{\dag}_{nm}\beta^{\dag}_{n+1,m} ,
\end{equation}
and for the second component of (\ref{25}), after application of
transformations (\ref{11}), the expression:
\begin{equation}
\tau^+_{nm}\tau^-_{n,m+1}=\alpha^{\dag}_{nm}(1-2\alpha^{\dag}_{nm}\alpha_{nm})
\alpha_{n,m+1}=\alpha^{\dag}_{nm}\alpha_{n,m+1}.
\end{equation}
On the other hand, if we apply to the first component of (\ref{25}) the
transformations (\ref{11}), we obtain:
\begin{eqnarray}
\tau^+_{nm}\tau^+_{n+1,m}&=&(-1)^{\phi_{nm}}\alpha^{\dag}_{nm}(-1)^{\phi_
{n+1,m}}\alpha^{\dag}_{n+1,m}=(-1)^{\chi_{nm}}\alpha^{\dag}_{nm}\alpha^{\dag}
_{n+1,m},\nonumber\\
\chi_{nm}&=&\sum^{\mbox{\eu
m}}_{l=m+1}\alpha^{\dag}_{nl}\alpha_{nl}+\sum^{m-1}_{l=1}\alpha^{\dag}_{n+1,l}
\alpha_{n+1,l}
\end{eqnarray}
and if we apply to the second component of (\ref{25}) the transformations
(\ref{12}), we obtain:
\begin{eqnarray}
\tau^+_{nm}\tau^-_{n,m+1}&=&
(-1)^{\phi_{nm}}\beta^{\dag}_{nm}(-1)^{\phi_{n,m+1}}\beta_{n,m+1}=
(-1)^{\rho_{nm}}\beta^{\dag}_{nm}\beta_{n,m+1},\nonumber\\
\rho_{nm}&=&
\sum^{\mbox{\eu n}}_{k=n+1}\beta^{\dag}_{km}\beta_{km}+
\sum^{n-1}_{k=1}\beta^{\dag}_{k,m+1}\beta_{k,m+1}\label{29}
\end{eqnarray}
In this way the expressions (2.25) and (\ref{29}) contain the unwanted phase
factors $(-1)^{\chi_{nm}}$ and $(-1)^{\rho_{nm}}$.
It is precisely here that certain difficulties appear in attempts
to apply the JW transformations (2.8) to the solution of the 3D
Ising model along the lines of the paper \cite{11}. In some
cases, for example in the calculation of the ground-state energy,
these phase factors can be eliminated from the considerations, because:
\begin{equation}
(-1)^{\chi_{nm}}|0\rangle=1|0\rangle, \;\;\;\;\;
(-1)^{\rho_{nm}}|0\rangle=1|0\rangle
\end{equation}
where $|0\rangle$ is the vacuum state:
$\alpha_{nm}|0\rangle=\beta_{nm}|0\rangle=0$. As we see, application of the
transformations (\ref{12}) to the first index $(n)$ and of the transformations
(\ref{11}) to the second index $(m)$ does not produce these
phase factors. This opens the possibility of
diagonalization by passing to the momentum representation.
Unfortunately, other obstacles to the diagonalization occur,
which will be discussed elsewhere.
The arguments given above for the existence of at least two sets of
transformations (\ref{11}) and (\ref{12}) of the J-W type in the
two-dimensional case are, of course, not rigorous and are presented here only
as guiding devices. A possible future physical interpretation of these
results is independent of the mathematical fact that two nontrivial
transformations exist for two-dimensional systems. Introduction of inverse
transformations of the type (\ref{6a}) for the two-dimensional case leads to
no new transformations and does not change the symmetry of the
transposition relations. As will be seen below, in the three-dimensional case
the situation is much more complicated, and the arguments given above are, in
general, powerless.
In the author's papers \cite{25,26,27,28} a nontrivial example was given
of the application of the pair of JW transformations (2.8) to the problem of
deriving the generating function for Hamiltonian cycles on a simple
rectangular lattice with $N\times M$ nodes. One of the key points of those
papers is the simultaneous application of the pair of JW transformations
(2.8), and not of only one of them. We will investigate in detail the
application of the pair (2.8) to one of the possible solutions of the 2D
Ising-Onsager problem in an external field \cite{26}.
\section{The three-dimensional transformation of the J-W type}
In the three-dimensional case we introduce three sets of $2^{\mbox{\eu nmk}}$
-dimensional Pauli matrices $\tau^{x,y,z}_{nmk}$ ($n=1,2,\ldots,${\eu n};
$m=1,2,\ldots,${\eu m}; $k=1,2,\ldots,${\eu k}), which are defined analogously
to the two-dimensional case (\ref{7}). Further we introduce three-index
Pauli operators $\tau^{\pm}_{nmk}$ by formulae:
\begin{equation}
\tau^{\pm}_{nmk}=2^{-1}(\tau^z_{nmk}\pm i\tau^y_{nmk})
\end{equation}
which satisfy anticommutation transposition relations for the same lattice
node $(nmk)$:
\begin{equation}
\left\{\tau^+_{nmk},\tau^-_{nmk}\right\}_+=1,\;\;\;\;
\left(\tau^+_{nmk}\right)^2=\left(\tau^-_{nmk}\right)^2=0\label{32}
\end{equation}
and commutation relations for different lattice nodes:
\begin{equation}
\left[\tau^\pm_{nmk},\tau^\pm_{n'm'k'}\right]_-=0\hspace{2cm}
(nmk)\neq (n'm'k'). \label{33}
\end{equation}
It turns out that in the three-dimensional case there exist six (not counting
the inverse transformations of the type (\ref{6a})) sets of
transformations of the J-W type, which can be represented in the form:
\begin{eqnarray}
\tau^+_{nmk}\!\!&\!\!=\!\!&\!\!\exp \left[ i\pi\left(\sum^{\mbox{\eu n}}_{s=1}
\sum^{\mbox{\eu
m}}_{p=1}\sum^{k-1}_{q=1}\alpha^{\dag}_{spq}\alpha_{spq}+
\sum^{\mbox{\eu n}}_{s=1}\sum^{m-1}_{p=1}\alpha^{\dag}_{spk}\alpha_{spk}+
\sum^{n-1}_{s=1}\alpha^{\dag}_{smk}\alpha_{smk}\right)\right]\alpha^{\dag}_{nmk}
\nonumber\\\label{34}\\
\tau^+_{nmk}\!\!&\!\!=\!\!&\!\!\exp \left[ i\pi\left(\sum^{\mbox{\eu n}}_{s=1}
\sum^{\mbox{\eu
m}}_{p=1}\sum^{k-1}_{q=1}\beta^{\dag}_{spq}\beta_{spq}+
\sum^{ n-1}_{s=1}\sum^{\mbox{\eu m}}_{p=1}\beta^{\dag}_{spk}\beta_{spk}+
\sum^{m-1}_{p=1}\beta^{\dag}_{npk}\beta_{npk}\right)\right]\beta^{\dag}_{nmk}
\nonumber\\\label{35}\\
\tau^+_{nmk}\!\!&\!\!=\!\!&\!\!\exp \left[ i\pi\left(\sum^{\mbox{\eu n}}_{s=1}
\sum^{m-1}_{p=1}\sum^{\mbox{\eu k}}_{q=1}\gamma^{\dag}_{spq}\gamma_{spq}+
\sum^{\mbox{\eu n}}_{s=1}\sum^{k-1}_{q=1}\gamma^{\dag}_{smq}\gamma_{smq}+
\sum^{n-1}_{s=1}\gamma^{\dag}_{smk}\gamma_{smk}\right)\right]\gamma^{\dag}_{nmk}
\nonumber\\\label{36}\\
\tau^+_{nmk}\!\!&\!\!=\!\!&\!\!\exp \left[ i\pi\left(\sum^{\mbox{\eu n}}_{s=1}
\sum^{m-1}_{p=1}\sum^{\mbox{\eu k}}_{q=1}\eta^{\dag}_{spq}\eta_{spq}+
\sum^{n-1}_{s=1}\sum^{\mbox{\eu k}}_{q=1}\eta^{\dag}_{smq}\eta_{smq}+
\sum^{k-1}_{q=1}\eta^{\dag}_{nmq}\eta_{nmq}\right)\right]\eta^{\dag}_{nmk}
\nonumber\\\label{37}\\
\tau^+_{nmk}\!\!&\!\!=\!\!&\!\!\exp \left[ i\pi\left(\sum^{n-1}_{s=1}
\sum^{\mbox{\eu m}}_{p=1}\sum^{\mbox{\eu k}}_{q=1}\omega^{\dag}_{spq}\omega_{spq}+
\sum^{\mbox{\eu m}}_{p=1}\sum^{k-1}_{q=1}\omega^{\dag}_{npq}\omega_{npq}+
\sum^{m-1}_{p=1}\omega^{\dag}_{npk}\omega_{npk}\right)\right]\omega^{\dag}_{nmk}
\nonumber\\\label{38}\\
\tau^+_{nmk}\!\!&\!\!=\!\!&\!\!\exp \left[ i\pi\left(\sum^{n-1}_{s=1}
\sum^{\mbox{\eu m}}_{p=1}\sum^{\mbox{\eu k}}_{q=1}\theta^{\dag}_{spq}\theta_{spq}+
\sum^{m-1}_{p=1}\sum^{\mbox{\eu k}}_{q=1}\theta^{\dag}_{npq}\theta_{npq}+
\sum^{k-1}_{q=1}\theta^{\dag}_{nmq}\theta_{nmq}\right)\right]\theta^{\dag}_{nmk}
\nonumber\\\label{39}
\end{eqnarray}
and analogously for the operators $\tau^-_{nmk}$. For the sake of
completeness of the exposition, we have written here six transformations,
which enable us to express the Pauli operators $\tau^{\pm}_{nmk}$ by the
Fermi creation and annihilation operators $(\alpha^{\dag}_{nmk},\alpha_{nmk},
\ldots,\theta_{nmk})$. Applying the formulae (\ref{32}), (\ref{33}) and the
formulae of the type (\ref{20}), it is easy to show that the operators
$(\alpha^{\dag}_{nmk},\alpha_{nmk},\ldots ,\theta_{nmk})$ satisfy the
anticommutation transposition relations:
\begin{eqnarray}
\{\alpha^{\dag}_{nmk},\alpha_{nmk}\}_+= 1, \;\;\;\;
(\alpha^{\dag}_{nmk})^2=(\alpha_{nmk})^2=0\nonumber\\
\{\alpha^{\dag}_{nmk},\alpha^{\dag}_{n'm'k' }\}_+=\ldots =
\{\alpha_{nmk},\alpha_{n'm'k'}\}_+=0\hspace{.3cm}(nmk)\neq (n'm'k')
\end{eqnarray}
etc., as can be checked straightforwardly. There also exist inverse
transformations:
\begin{equation}
\alpha^{\dag}_{nmk}=\exp \left[ i\pi\left(\sum^{\mbox{\eu n}}_{s=1}
\sum^{\mbox{\eu m}}_{p=1}\sum^{k-1}_{q=1}\tau^{+}_{spq}\tau^-_{spq}+
\sum^{\mbox{\eu n}}_{s=1}\sum^{m-1}_{p=1}\tau^+_{spk}\tau^-_{spk}+
\sum^{n-1}_{s=1}\tau^+_{smk}\tau^-_{smk}\right)\right]\tau^{+}_{nmk}
\end{equation}
etc., from which we can obtain easily the equations:
\begin{eqnarray}
\tau^+_{nmk}\tau^-_{nmk}=\alpha^{\dag}_{nmk}\alpha_{nmk}=\beta^{\dag}_{nmk}
\beta_{nmk}=\gamma^{\dag}_{nmk}\gamma_{nmk}=\eta^{\dag}_{nmk}\eta_{nmk}=
\nonumber\\
=\omega^{\dag}_{nmk}\omega_{nmk}=\theta^{\dag}_{nmk}\theta_{nmk}\label{42}
\end{eqnarray}
using relations of the type (\ref{15}), written for the three-dimensional
case. The relations (\ref{42}) express the conditions of equality of local
occupation numbers for $\alpha$-, $\beta$-, $\gamma$-, $\eta$-, $\omega$- and
$\theta$-fermions at the same lattice node $(nmk)$. In analogy to the
two-dimensional case, the operators
$(\alpha^{\dag}_{nmk},\ldots ,\theta_{nmk})$ are connected by canonical
nonlinear transformations:
\begin{eqnarray}
\alpha^{\dag}_{nmk}&=&(-1)^{\phi_{nmk}}\beta^{\dag}_{nmk}\nonumber\\
\alpha_{nmk}&=&(-1)^{\phi_{nmk}}\beta_{nmk}\nonumber\\
\phi_{nmk}&=&\left[\sum^{\mbox{\eu n}}_{s=n+1}\sum^{m-1}_{p=1}+
\sum^{n-1}_{s=1}\sum^{\mbox{\eu m}}_{p=m+1}\right]\alpha^{\dag}_{spk}\alpha_{spk}
\label{43};\\
\alpha^{\dag}_{nmk}&=&(-1)^{\psi_{nmk}}\gamma^{\dag}_{nmk}\nonumber\\
\alpha_{nmk}&=&(-1)^{\psi_{nmk}}\gamma_{nmk}\nonumber\\
\psi_{nmk}&=&\sum^{\mbox{\eu n}}_{s=1}\left[\sum^{\mbox{\eu m}}_{p=m+1}
\sum^{k-1}_{q=1}+
\sum^{m-1}_{p=1}\sum^{\mbox{\eu k}}_{q=k+1}\right]\alpha^{\dag}_{spq}\alpha_{spq}
\label{44};\\
\beta^{\dag}_{nmk}&=&(-1)^{\chi_{nmk}}\gamma^{\dag}_{nmk}\nonumber\\
\beta_{nmk}&=&(-1)^{\chi_{nmk}}\gamma_{nmk}\nonumber\\
\chi_{nmk}&=&\sum^{\mbox{\eu n}}_{s=1}\left[\sum^{\mbox{\eu m}}_{p=m+1}
\sum^{k-1}_{q=1}+
\sum^{m-1}_{p=1}\sum^{\mbox{\eu k}}_{q=k}\right]\beta^{\dag}_{spq}\beta_{spq}
+\sum^{n-1}_{s=1}\sum^{\mbox{\eu
m}}_{p=1}\beta^{\dag}_{spk}\beta_{spk}\nonumber\nopagebreak\\\nopagebreak
& &+\sum^{m-1}_{p=1}\beta^{\dag}_{npk}\beta_{npk}+
\sum^{n-1}_{s=1}\beta^{\dag}_{smk}\beta_{smk};
\end{eqnarray}
and 12 further pairs of transformations, which can easily be written out if
necessary. The operators $\phi_{nmk}$, $\psi_{nmk}$, etc. obviously commute
with the operators $(\alpha^{\dag}_{nmk},\ldots ,\theta_{nmk})$ at the same
node, because the occupation-number operators with index $(nmk)$ do not
appear in $\phi_{nmk}$, $\psi_{nmk}$, etc. It is also rather easy
to prove that the transformations of the J-W type introduced
above by the formulae (\ref{34}-\ref{39}) for the three-dimensional case
reduce to the transformations (\ref{11}), (\ref{12}) in the two-dimensional case.
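As in the 2D case, the fermionic character of the string-dressed operators can be verified numerically. The following sketch is an added illustration using one full lexicographic ordering of a $2\times 2\times 2$ lattice (i.e. one of the six orderings underlying (\ref{34}-\ref{39})) and checks the canonical anticommutation relations (3.10):

```python
import numpy as np
from itertools import product
from functools import reduce

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
tm1 = (sz - 1j * sy) / 2                    # single-site tau^-

sites = list(product(range(2), repeat=3))   # 2x2x2 lattice, lexicographic order
pos = {s: i for i, s in enumerate(sites)}

def site_op(op, p, nsites=8):
    mats = [I2] * nsites
    mats[p] = op
    return reduce(np.kron, mats)

def alpha(s):
    """tau^- at node s, dressed with the string over all preceding nodes."""
    op = site_op(tm1, pos[s])
    for prev in sites[:pos[s]]:
        op = site_op(sx, pos[prev]) @ op    # tau^x = 1 - 2 tau^+ tau^-
    return op

a0, a5 = alpha(sites[0]), alpha(sites[5])
ad0 = a0.conj().T

assert np.allclose(ad0 @ a0 + a0 @ ad0, np.eye(256))  # {a^+, a} = 1 on a node
assert np.allclose(a0 @ a0, 0)                        # nilpotency
assert np.allclose(a0 @ a5 + a5 @ a0, 0)              # different nodes anticommute
assert np.allclose(ad0 @ a5 + a5 @ ad0, 0)
```

Any of the six orderings passes the same checks; only the mixed relations between the different fermion species depend on the choice.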
As in the 2D case, the correspondence between the transformations (3.4-3.9)
and the solution (12) can be established in the three-dimensional case on the
basis of the symmetric group $S_3$. Indeed, in the notation of the paper
\cite{23} the transformations (3.4-3.9) can be written in the form:
\begin{eqnarray}
w({\bf x,z})&=&\Theta(x_i-z_i)(1-\delta_{x_iz_i})+\Theta(x_j-z_j)\delta_{x_iz_i}
(1-\delta_{x_jz_j})\nonumber\\
&+&\Theta(x_k-z_k)\delta_{x_iz_i}\delta_{x_jz_j},\hspace{1cm}\nonumber\\
(i\neq j\neq k)&=&1,2,3.
\end{eqnarray}
It should be noticed that (3.15) omits the inverse transformations of
the type (1.5), generalized to the 3D case. Obviously, in the $d$-dimensional
case the complete number of transformations of the JW type equals $d!$
(neglecting the inverse transformations), and the correspondence can be
established using the symmetric group $S_d$.
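The $d!$ counting can be illustrated with a short enumeration. The sketch below is a schematic illustration (not part of the original argument): each JW-type transformation corresponds to one priority ordering of the axes in the lexicographic predecessor relation $w({\bf x},{\bf z})$ of (3.15), and on a $2\times2\times2$ lattice the $3!=6$ axis permutations indeed produce six distinct predecessor patterns:

```python
from itertools import permutations, product

def precedes(x, z, perm):
    """Does z precede x in the lexicographic order with axis priorities perm?"""
    for i in perm:
        if z[i] != x[i]:
            return z[i] < x[i]
    return False                      # z == x

sites = list(product(range(2), repeat=3))   # 2x2x2 lattice

# Collect, for each axis permutation, the full predecessor-set pattern
patterns = set()
for perm in permutations(range(3)):
    key = tuple(frozenset(z for z in sites if precedes(x, z, perm))
                for x in sites)
    patterns.add(key)

assert len(patterns) == 6   # d! = 3! distinct JW-type orderings, cf. (3.4)-(3.9)
```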
There are no principal obstacles against full analysis of transposition
relations for the operators $(\alpha^{\dag}_{nmk},\ldots ,\theta_{nmk})$
and, if necessary, we can write out all the relations we want. Here we
consider the transposition relations only for the operators
$(\alpha^{\dag}_{nmk},\beta^{\dag}_{n'm'k'},\gamma^{\dag}_{n''m''k''})$,
to get a feeling for the ``geometric structure'' of these relations in the
three-dimensional case. First of all let us observe that, according to (\ref{43}), the
transposition relations for $(\alpha^{\dag}_{nmk},\alpha_{nmk})$
and $(\beta^{\dag}_{n'm'k'},\beta_{n'm'k'})$ for $k=k'$ are of the form
(\ref{21}-\ref{23}), where it is sufficient to add the third index $k$ to all
the operators. In other words, the transposition relations in the plane
($nm/k=const$) for $\alpha$- and $\beta$-operators behave as in the
two-dimensional case. This could be expected, because for fixed $k$ we
actually deal with the two-dimensional transformations of the J-W type
(\ref{11}-\ref{12}). It is easy to see that for each of the three mutually
orthogonal planes there exists a pair of operators for which the
transposition relations are of the form (\ref{21}-\ref{23}). In accordance
with (\ref{34}-\ref{39}), for the plane $(mk/n=const)$ such a pair is the
pair of operators $\omega-\theta$, and for the plane $(nk/m=const)$ it is the
pair of operators $\gamma-\eta$. Further, according to (\ref{43}), for
$k\neq k'$ the $\alpha$- and $\beta$-operators anticommute for any $(nm)$, i.e.
\begin{equation}
\{\alpha^{\dag}_{nmk},\beta^{\dag}_{n'm'k'}\}_+=\ldots =
\{\alpha_{nmk},\beta_{n'm'k'}\}_+=0,\hspace{1cm} (k\neq k')
\label{46}
\end{equation}
It is obvious that also the pairs of operators $\omega - \theta$
anticommute for $(n\neq n')$ and the pair of operators $\gamma - \eta$
anticommutes for $(m\neq m')$. Now, from (\ref{44}) there follow
transposition relations for $\alpha$- and $\gamma$-operators of the form
analogous to (\ref{21}-\ref{23}) i.e.:
\begin{eqnarray}
[\alpha_{nmk},\gamma_{n'm'k'}]_-&=&\ldots =\ldots =0,\hspace{1cm} \mbox{for}\left(
\begin{array}{c c c}m'\leq m-1&,&k'\geq k+1\\
m'\geq m+1&,&k'\leq k-1
\end{array}\right),\nonumber\\
\{\alpha_{nmk},\gamma_{n'm'k'}\}_+&=&\ldots
=\{\alpha^{\dag}_{nmk},\gamma_{n'm'k'}\}_+=0,
\label{47}
\end{eqnarray}
in all other cases, with the only difference that the equations (\ref{47})
are satisfied for any $n$ and $n'$. Transposition relations for other pairs
of operators are treated analogously. In the general case the symmetry
characteristic of the two-dimensional case (see (\ref{21}-\ref{23}) and
Fig.~1) disappears.
Therefore, in the three-dimensional case there exist six
nontrivial transformations of the J-W type (\ref{34}-\ref{39}), and the
algebra of their transposition relations is much more complicated than the
analogous algebra in the two-dimensional case. Some examples of application
of the transformations (\ref{34}-\ref{39}) will be considered
elsewhere, where the three-dimensional Ising model with and without an
external magnetic field, as well as other models of statistical mechanics
and physics, is considered from a new point of view \cite{Baxter,25,26,27}.
We believe it is worthwhile to notice here one beautiful fact connected with
the generalized transformations of the JW type for the $d\geq 3$ case. Namely,
for lattice models with nearest-neighbour interactions (for example, for the
Ising model), the partition function of the system can be represented in
terms of the three-index Pauli operators $\tau^\pm_{nmk}$, which enter it as
bilinear products of the type:
\begin{equation}
\tau^+_{nmk}\tau^+_{n+1,mk},
\end{equation}
etc., see \cite{27,28}. One can then easily see that among the
transformations (3.4)-(3.9) there are two, (3.4) and (3.6), whose
application to (3.18) leads to the expressions
$\alpha^{\dag}_{nmk}\alpha^{\dag}_{n+1,mk}$ and $\gamma^{\dag}_{nmk}
\gamma^{\dag}_{n+1,mk}$, in which phase factors of the type
$(-1)^{\chi_{nmk}}$ (cf. (2.25)) are absent; the same applies to the
indices $m$ and $k$. In other words, for each degree of freedom of the Pauli
variables there exist two transformations of the JW type that are equivalent
in the sense of the absence of the phase factors. This is a sort of
degeneracy in each index. For the 3D case the number $N_d$ of generalized
transformations of the JW type equals $N_d=3!=6$, and the ``degree'' of
degeneracy in every index is $v=N_d/d=2$. In the general case
$v=N_d/d=d!/d$, and for $d=2$ we have $v=1$, as mentioned above
((2.22)-(2.26)).
\section{Conclusions}
We believe we have managed to show in this paper the advantages and
simplicity of the generalized transformations of the JW type introduced
above. Especially important is the fact that this formulation of the JW
transformations enabled us to find the entire sequence of sets of the
transformations, and to investigate the algebra of transposition relations
for various sets of Fermi operators. As far as the analysis of
topological aspects of the transformations and the consideration of their
continuous counterparts are concerned, however, the notation of the papers
\cite{20}-\cite{23} seems more convenient.
We have omitted in this paper the problem of correspondence
between the discrete generalized transformations of the JW type introduced
here and the analogous transformations given in the papers
\cite{20}-\cite{23}, which are their continuous counterparts. The reason is
that there are still many unclear points that require analysis in future papers.
Especially interesting, both from the physical point of view (in the
framework of possible applications) and from the mathematical one, would be
an examination of the connection between our transformations and the
transformations of the paper \cite{22}. Such an analysis is also missing in
the paper \cite{23}. This statement applies to the discrete case, but to the
continuous one as well, provided there exists a formal way of taking the
continuous limit for the lattice constant ${\bf a}\to 0$. For example,
already in the 2D case such a formal transition to the continuous limit could
result in singularities of the cut type along the lines $x=const$ and
$y=const$, where $\alpha (x,y)$ and $\beta (x,y)$ are the densities at a
fixed point $(x,y)$ in the chosen coordinate system
({\bf e}$_1$,{\bf e}$_2$) (with respect to the transposition relations
between $\alpha$- and $\beta$-operators (2.18)-(2.21)). Here also some other
problems appear, which will be explored in future papers.
Field theory, and its connections with various models of classical and
quantum statistical mechanics, has attracted and still attracts the
attention of physicists and mathematicians (see, for example, \cite{12},
\cite{Baxter}, \cite{Gaudin} and the literature cited therein). It is known
\cite{12}, \cite{Baxter}, \cite{Gaudin} that in some cases a deep connection
between the models of quantum field theory and the models of statistical
mechanics has been discovered.
We hope that, in the form given above, the proposed JW-type transformations
for the 2D and 3D cases can be a valuable tool in the analysis of
already known models of statistical mechanics and quantum field theory, and
that they will initiate the formulation of new problems in these
areas of theoretical physics. With the help of the generalized J-W
transformations a new approach to the Lenz-Ising-Onsager (LIO) problem has
been developed \cite{25,26,27,28,29}. For example, within this approach the
2D LIO problem in the limit of a vanishing magnetic field has been solved
\cite{26}:
\begin{eqnarray}
-\beta f_2(h\rightarrow 0)\sim\ln 2+2\ln(\cosh h/2)+ \nonumber \\
\frac{1}{2\pi^2}\int^{\pi}_0\int^{\pi}_0\ln[
\cosh{2K_1^*}\cosh{2K_2^*}-\sinh{2K_1^*}\cos q-\sinh{2K_2^*}\cos p]dq dp,
\end{eqnarray}
where the parameters
$(K_{1,2},h)$ are to be renormalised in the following way $(K_{1,2}\geq 0)$:
\begin{eqnarray}
\sinh2K^*_{1,2}=\beta_{1,2}[\sinh2K_{1,2}(1-\tanh^2(h/2))],\nonumber \\
\cosh(2K^*_{1,2})=\beta_{1,2}[\cosh2K_{1,2}+\tanh^2(h/2)\sinh2K_{1,2}],
\nonumber \\
\beta_{1,2}=[1+2\tanh^2(h/2)\sinh2K_{1,2}e^{2K_{1,2}}]^{-1/2}, \;\;\;
\tanh^2h^*_{1,2}=\tanh^2(h/2)\frac{\beta_{1,2}\exp(2K_{1,2})}{\cosh^2K^*
_{1,2}},
\end{eqnarray}
where $\alpha(h,x)=\tanh^2(h/2)(1+\cos x)/(\sin x)$ and
$K_{1,2}={\beta}J_{1,2}$, $h={\beta}H$, $\beta=1/k_{B}T$,
with $T$ denoting the temperature and $k_{B}$ the Boltzmann constant.
\section*{Acknowledgements}
I am grateful to Dr. H. Makaruk for her help in preparation of the final
form of this paper.
This paper was supported by the KBN grant $N^o$ {\bf 2 P03B 088 15}.
\section{Introduction}
Traditionally photoproduction and DIS are considered as
processes which are governed by different underlying physics.
Whereas most of the features of DIS processes can be described in terms
of perturbative QCD, photoproduction is dominated by non-perturbative
effects.
This point of view seems to be justified by
the $Q^2$ dependence of the $\gamma^\star p$ cross section, which exhibits a
clearly visible transition region between photoproduction and DIS at
about $Q^2 \sim 0.5\,{\rm GeV}^2$.
On the other hand, the steady transition from photoproduction to DIS
highlights the importance of obtaining a
description which smoothly links the non-perturbative and
perturbative domains, see for example
\cite{Levy96a,Capella94b,Schildknecht97a,Desgrolard98a,Landshoff98a,Povh98a}.
There exist now high precision deep inelastic lepton scattering
data \cite{Aid96b,Adloff97a,Breitweg97b,Adams96a,Arneodo89a}
covering both the low $Q^2$ and high $Q^2$
domains, as well as measurements of the photoproduction cross
section \cite{Derrick92a,Aid95b,Landolt87}.
In the present paper we discuss a simple QCD-motivated parametrisation
of the observed $Q^2$ dependence of the $\gamma^\star p$ cross
section, which is closely related to the average
transverse momentum of secondary particles produced in the photon
hemisphere.
In addition, the question of the
hard scale in deep-inelastic scattering is discussed within the
framework of $k_T$ factorization.
\section{Theoretical framework of $k_T$ factorization}
Let $\sigma_{\gamma^\star p}(s, Q^2)$ be the
total cross section for the process $\gamma^* p \rightarrow X$
where $Q^2$ is the virtuality of the photon and $\sqrt{s}$ is the
$\gamma^* p$ centre-of-mass energy.
For $s\gg Q^2$ the
$\gamma^* \rightarrow
q\bar{q}$ fluctuations occur over a much longer time
scale than the interaction of the
$q\bar{q}$ pair with the target proton. Therefore
the $\gamma^\star p$
cross section is well approximated by
the probability $|{\mathcal M}|^2$ of the $\gamma^* \rightarrow q\bar{q}$
transition multiplied by the
imaginary part of the forward amplitude describing the $q\bar{q}$-proton
interaction
\be
\Im m\left( A_{q\bar{q} + p}\right) \; = \; s \sigma_{q\bar{q} + p}\ ,
\label{eq:b12}
\ee
where $\sigma_{q \bar{q}+p}$ is the cross section for the
scattering of the $q\bar{q}$ system on the proton.
For transversely polarized photons the amplitude of the
$\gamma^* \rightarrow q\bar{q}$ transition reads
\begin{equation}
{\mathcal M}_T = \frac{\sqrt{z(1-z)}}{\bar{Q}^2 + k^2_T}
\
\bar{u}_{\lambda}(\gamma .
\epsilon_{\pm})u_{\lambda^\prime}
= \frac{(\epsilon_{\pm}.k_T)[(1-2z)\lambda \pm 1]
\delta_{\lambda,-\lambda^\prime}
+ \lambda m_q \delta_{\lambda\lambda^\prime}}
{\bar{Q}^2 + k_T^2},
\label{eq:a13}
\end{equation}
where the $q$ and $\bar{q}$ longitudinal momentum
fractions and
transverse momenta are $z,\; \vec{k}_t$ and $(1-z),\; -\vec{k}_t$
respectively.
We use the notation of Ref.~\cite{Levin97a}, which is based
on the earlier work of
Ref.~\cite{Mueller90a,Brodsky80a}. Namely $\bar{Q}^2$ and the photon
polarization vectors are
given by
\begin{eqnarray}
\label{eq:a14}
\bar{Q}^2 = z(1-z)Q^2+m^2_q \\
\epsilon_T = \epsilon_{\pm} = \frac{1}{\sqrt{2}}
(0,0,1,\pm i),
\label{eq:a15}
\end{eqnarray}
and where $\lambda, \lambda^{\prime} = \pm 1$ according
to whether
the $q, \bar{q}$ helicities are $\pm \frac{1}{2}$.
In terms of the quark momentum variables we thus obtain
\begin{equation}
\sigma_{\gamma_T^\star p}(s,Q^2) = \sum_q \alpha \frac{e^2_q}{2\pi}
\int dz\ dk^2_T \
\frac{[z^2 + (1-z)^2]k^2_T+m^2_q}{(\bar{Q}^2 + k^2_T)^2}
\
N_c \sigma_{q\bar{q}+p} (s, k^2_T)
\label{eq:a16}
\end{equation}
where the number of colours $N_c = 3$.
Eq.~(\ref{eq:a16}) can be rewritten as a dispersion relation in
$M^2$, with $M$ being the invariant mass of the $q\bar q$ pair.
With
\begin{equation}
M^2 = \frac{k^2_T + m^2_q}{z(1-z)}
\label{eq:a17}
\end{equation}
and a change of the integration variable from $dk_T^2$ to
$dM^2$ one gets\footnote{
Of course, in principle, there may be non-diagonal elements of the
amplitude
$A_{q\bar{q} + p}$ of (\ref{eq:b12}). However it is known, both from
experiment and from triple Regge theory, that such non-diagonal
transitions
are suppressed in the forward direction. In terms of the additive
quark model the suppression is the result of the orthogonality of the
initial and final wave functions for a non-diagonal transition.}
\be
\sigma_{\gamma_T^\star p}(s,Q^2) \; = \; \frac{\alpha}{2\pi} \sum_q
e_q^2 \int dz \frac{dM^2}{(Q^2 + M^2)^2} \: \left \{ M^2 \left [z^2 +
(1 - z)^2 \right ] + 2 m_q^2 \right \} \; N_c \sigma_{q\bar{q} + p}(s, k_T^2).
\label{eq:b17}
\ee
This can be compared with the corresponding expression of the
generalized vector dominance model
\cite{Sakurai72a,Sakurai72b,Donnachie78-b}
\begin{equation}
\sigma_{\gamma_T^*p}(s,Q^2) = \sum_q \int^{\infty}_0
\frac{dM^2}{(Q^2+M^2)^2}
\ \rho(M^2) \sigma_{q \bar{q}+p}(s,M^2),
\label{eq:a12}
\end{equation}
where the spectral function
$\rho$ represents the density of $q\bar{q}$ states.
A similar dispersion relation has been used, for example,
in \cite{Badelek92,Kwiecinski89a}
to describe the structure function $F_2$ over the full $Q^2$ range.
In comparison to (\ref{eq:a12}) we see that
(\ref{eq:b17}) is a two-dimensional
integral. To see the reason for this let us consider massless quarks.
Then $z = \frac{1}{2} (1 + \cos
\theta)$ where $\theta$ is the angle between the $q$ and
the $\gamma^*$ in the
$q\bar{q}$ rest frame. The $dz$ integration is implicit
in (\ref{eq:a12}) as the
integration over the quark angular distribution in the
spectral function $\rho$.
At first sight the $Q^2$ dependence of the cross section (\ref{eq:a12}) should
be $\sigma_{\gamma^* p}\propto 1/(Q^2+M_0^2)^2$.
This is true if one deals with only one vector meson or when the
dominant contribution in (\ref{eq:a12})
comes from a limited range of $M^2$.
On the other hand when all the possible values of $M^2$ are taken into
account the result is
\be
\sigma_{\gamma^* p}\propto \int\frac{dM^2}{(Q^2+M^2)^2}
=\frac 1{Q^2+M_0^2}\; .
\ee
Just such a behaviour is expected in our approach
(see Sect.~3 and Eq.~(\ref{eq:b20})).
To obtain a complete description of the $\gamma^\star p$ cross section a
detailed model for the $q\bar q$-proton interaction is needed. Such a
model is developed, for example, in \cite{Martin98a}. Furthermore,
longitudinally polarized photons have to be considered.
However for our phenomenological
discussion it is sufficient to investigate some general features of
(\ref{eq:a16}).
\section{Virtuality dependence}
The $Q^2$ dependence in Eq.~(\ref{eq:a16}) comes mainly from the two
quark propagators $1/(\bar{Q}^2 + k^2_T)^2 = 1/(z(1-z)Q^2+k^2_T)^2$
where in the r.h.s. of the last equality we neglect the small quark
mass ($m_q^2$) in the $\bar{Q}^2$ term. In order to demonstrate the
expected $Q^2$ behaviour of the cross section (\ref{eq:a16}) let us
first write the expression in the simplified form
\be
\sigma_{\gamma_T^\star p}(Q^2) \propto \int^{1/2}_0\frac{dz}{(zQ^2+k^2_T)^2}
\ee
and perform the $dz$ integration. It gives the result
\be
\sigma_{\gamma_T^\star p}(Q^2) \propto \frac 1{k^2_T(Q^2+2k^2_T)}\propto \frac
1{Q^2+2k^2_T}.
\ee
It can be checked by explicit numerical integration that the $z$
dependent part of the integral (\ref{eq:a16})
\be
J_\sigma=\int^1_0 dz\frac{[z^2+(1-z)^2]}{(z(1-z)Q^2+k^2_T)^2}
\ee
is well approximated by
\be
J_{\rm app.}=\frac{2}{k^2_T(Q^2+3k^2_T)}\ .
\ee
The ratio $r=J_{\rm app.}/J_\sigma$ tends
to 1 in the asymptotic limits $Q^2\to 0$ or $Q^2\to \infty$ and reaches a
minimum of about 0.96 at $Q^2\sim 8k^2_T$.
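This statement is easy to reproduce. The following sketch (plain Python, composite Simpson's rule; the grid size and the sample $Q^2$ values are arbitrary choices, not taken from the paper) evaluates $J_\sigma$ directly and compares it with $J_{\rm app.}$:

```python
import math

def j_sigma(Q2, kT2, n=20000):
    """Composite Simpson evaluation of
    J_sigma = int_0^1 dz [z^2 + (1-z)^2] / (z(1-z) Q^2 + kT^2)^2."""
    f = lambda z: (z * z + (1.0 - z) ** 2) / (z * (1.0 - z) * Q2 + kT2) ** 2
    h = 1.0 / n
    s = f(0.0) + f(1.0)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

def j_app(Q2, kT2):
    """The approximate form J_app = 2 / (kT^2 (Q^2 + 3 kT^2))."""
    return 2.0 / (kT2 * (Q2 + 3.0 * kT2))

kT2 = 1.0
# ratio of the approximation to the exact z-integral at several Q^2
ratios = {Q2: j_app(Q2, kT2) / j_sigma(Q2, kT2)
          for Q2 in (1e-3, 1.0, 8.0, 50.0)}
```

The ratio stays within a few percent of unity over the whole $Q^2$ range, with the largest deviation near $Q^2\sim 8k^2_T$.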
Using this approximation (\ref{eq:a16}) can be written as
\bea
\sigma_{\gamma_T^\star p}(s,Q^2) &=& N_c\alpha\sum_q
\frac{e_q^2}{2\pi}\int d\log(k_T^2) \frac{2}{Q^2+3 k_T^2}
k_T^2\sigma_{q\bar{q}+p}(s,k_T^2)
\nonumber\\
&\propto & N_c\alpha\sum_q
\frac{e_q^2}{2\pi} \frac{2}{Q^2+3 \overline{k_T^2}}
\overline{k_T^2} \sigma_{q\bar{q}+p}(s,\overline{k_T^2})\ ,
\label{eq:b19}
\eea
where $\overline{k_T^2}$ is the characteristic transverse momentum of
the quark.
In Eq.~(\ref{eq:b19})
the $Q^2$ dependence is almost factorised and is mainly given
just by the factor $1/(Q^2+3 \overline{k_T^2})$.
Now let us discuss the structure of the integral (\ref{eq:b19}) over $dk^2_T$.
Of course large distances, i.e. very small $k_T<1/r$ (where $r\sim R_N$ is
of the order of the nucleon radius $R_N$), are suppressed by confinement. At
very large $k_T\gg 1/r$, perturbative QCD (neglecting the
anomalous dimension) leads one to expect the cross section
$\sigma_{q\bar{q}+p}\propto 1/k^2_T$.
Thus in the ultraviolet region
(at $k^2_T>Q^2$) the integral (\ref{eq:b19}) is convergent.
The main contribution comes from the intermediate (more or less soft)
interval $k^2_T\sim 0.1\, -\, 0.4$ GeV$^2$.
Typically the cross section is large in the soft region,
where it falls off exponentially with $k_T$;
then at $k_T>1\, -\, 3$ GeV
(corresponding to small distances) it has a power-like tail.
Note that the predicted behaviour
\be
\sigma_{\gamma^* p}\propto \frac 1{Q^2+3\overline{k^2_T}}
\label{eq:c40}
\ee
does not depend
too much on the concrete form of the $q\bar{q}$ cross
section.
As an extreme example let us consider a simple ``soft'' approximation
\be
\sigma_{q\bar{q}+p}(s,k_T^2) = \sigma_0(s) \Theta(k_T^2-\mu^2)
\Theta(\overline{K_T^2}-k_T^2)
\ee
which corresponds to soft scattering where the quark-proton cross
section is saturated for $\mu^2 < k_T^2 < \overline{K_T^2}$ and vanishes
everywhere else.
Then the $\gamma^\star p$ cross section reads
\be
\sigma_{\gamma_T^\star p}(s,Q^2)
\propto
\sigma_0(s) \ln \left(\frac{Q^2+3 \overline{K_T^2}}{Q^2+3 \mu^2}\right)\
.
\label{eq:b21}
\ee
Despite the fact that (\ref{eq:b21}) now takes a logarithmic form,
for the numerical values of $\overline{K_T^2}$ discussed in the following
(\ref{eq:b21}) predicts almost the same $Q^2$ behaviour (\ref{eq:c40}) as
Eq.~(\ref{eq:b19}).
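The logarithmic closed form can be cross-checked by inserting the Theta-function ansatz into the $k_T^2$ integral of (\ref{eq:b19}) and integrating numerically. A short Python sketch (the values of $\mu^2$ and $\overline{K_T^2}$ below are illustrative assumptions, not fit results):

```python
import math

def sigma_theta_numeric(Q2, mu2, K2, sigma0=1.0, n=4000):
    """Midpoint-rule integration of the (b19) integrand with the 'soft'
    Theta-function cross section: d(log k^2) * 2/(Q^2+3k^2) * k^2 * sigma0
    over mu2 < k^2 < K2, i.e. int dk^2 2 sigma0/(Q^2 + 3 k^2)."""
    h = (K2 - mu2) / n
    return sum(h * 2.0 * sigma0 / (Q2 + 3.0 * (mu2 + (i + 0.5) * h))
               for i in range(n))

def sigma_theta_closed(Q2, mu2, K2, sigma0=1.0):
    """Closed form of Eq. (b21): (2 sigma0/3) ln[(Q^2+3 K2)/(Q^2+3 mu2)]."""
    return (2.0 * sigma0 / 3.0) * math.log((Q2 + 3.0 * K2) / (Q2 + 3.0 * mu2))

# illustrative soft-window boundaries in GeV^2 (assumptions)
mu2, K2 = 0.1, 0.5
```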
One also has to expect that the characteristic value
$\overline{k^2_T}$ should increase with energy.
For larger collision energies the evolution chain
(which produces finally the wee parton) becomes longer.
At each step of evolution a new parton is emitted
and the active parton gets some
extra transverse momentum. Therefore, as in the case of multiple small angle
scattering in a thick target, the final 'intrinsic' $k_T$ of the active
parton grows
with the number of interactions (the number of evolution steps).
In the framework of perturbative QCD this growth originates from the
summation of the
double logarithmic contributions of the type
$(\alpha_s\log{(k^2_T)}\log{(s)})^n$ and
is described in terms of the
anomalous dimensions. Due to the larger
value of the anomalous dimension $\gamma$
at higher energies one expects
a larger characteristic value $\overline{k_T^2}$.
Finally
we will neglect the weak logarithmic $Q^2$ dependence of
$\overline{k_T^2}$ in (\ref{eq:b19},\ref{eq:c40})
(which is, of course, not excluded completely)
and try to describe the data with the parametrisation
\be
\sigma_T(\gamma^*p) \propto \frac 1{Q^2+Q^2_0}
\label{eq:b20}
\ee
using $Q_0^2$ given by the characteristic value $\overline{k^2_T}$ of the
quark transverse momentum
\be
Q_0^2 \approx
3 \overline{k^2_T}\ .
\label{eq:c1}
\ee
The value of $Q^2_0$ becomes unimportant for large $Q^2$ so we
use the value of $\overline{k^2_T}$ as determined at small $Q^2$ (say,
in photoproduction at $Q^2=0$).
Since the integral (\ref{eq:b19}) over $k^2_T$ has a logarithmic structure
one cannot estimate the characteristic value $\overline{k_T^2}$ through
the plain average of $k^2_T$.
Multiplying the integrand by an extra power of
$k^2_T$ destroys the whole structure of the integral and crucially
enlarges the essential values of $k^2_T$. Therefore we estimate
$\overline{k_T^2}$ by averaging the logarithm of the squared transverse
momentum
\be
\overline{k^2_T} = \exp\left(
\langle\log(k^2_T)\rangle\right)\ .
\ee
Of course, one cannot measure directly $k^2_T$ of a quark.
So we have to relate the $k_T$ of the quark jet to the transverse
momenta $p_T$ of secondary hadrons in the photon fragmentation
region.
Contrary to large-$E_T$ jet fragmentation, here
(for not too large $k_T$) the value of $p_T$ is not so small in comparison
with $k_T$. If one assumes that in photoproduction both values ($k_T$ of
the quark and $p_T$ of secondary hadrons) are controlled by the typical
temperature $T$ then we may expect that our $\overline{k^2_T}$ is close
(or equal) to the average $p^2_T$
of secondary hadrons (in the photon
fragmentation region).
To better understand the relation between the values of $\overline{k^2_T}$ and
$\overline{p^2_T}$ we use a standard Monte Carlo program which is in agreement
with fixed target
and HERA photoproduction data, in particular with the measured transverse
momentum spectra of secondaries.
The corresponding predictions of the {\sc Phojet}
event generator \cite{Engel95d} are given in Tab.~\ref{tab:1} for two
energies at which inclusive transverse momentum distributions of
secondaries have been measured \cite{Apsimon89a,Abt94a,Derrick95i}.
Indeed for the photon fragmentation region
($x_F>0.2$) the $\overline{p^2_T}$ of secondary hadrons
in non-diffractive events
is close to the parton $\overline{k^2_T}$ and increases with energy.
A similar increase of the $\overline{p^2_T}$ of secondary hadrons with the
collision energy was observed experimentally in
deep inelastic scattering~\cite{Derrick96f,Ashman91a,Adams91a}.
\begin{table}[htb]
\caption{\label{tab:1}
Logarithmic average transverse momenta
of partons ($k_T$) and charged final state
hadrons ($p_T$) produced in non-diffractive $\gamma p$ collisions
in the photon fragmentation region ($x_F>0.2$) as predicted by
the {\sc Phojet} event generator \cite{Engel95d}.
In the last column the $Q_0^2$ values as obtained by a fit to
photoproduction and DIS data are given.
}
\renewcommand{\arraystretch}{1.3}
\begin{center}
\begin{tabular}{|c|c|c|c|}\hline
$\sqrt{s}$~~(GeV) & $\overline{k^2_T}$~~(GeV$^2$) &
$\overline{p_T^2}$~~(GeV$^2$) & $Q_0^2$~~(GeV$^2$)
\\ \hline
15 & 0.19 & 0.17 & $0.42\pm 0.01$
\\ \hline
200 & 0.5 & 0.35 & $1.04\pm 0.04$
\\ \hline
\end{tabular}
\end{center}
\end{table}
In Fig.~1 the parametrisation (\ref{eq:b20}) is compared to
photoproduction~\cite{Aid95b,Landolt87} and
DIS~\cite{Aid96b,Adloff97a,Breitweg97b}
data at two different energies ($\sqrt{s}= 200$~GeV and $\sqrt{s}=
15$~GeV). To obtain the total virtual photon--proton cross sections
we use $\sigma_{\gamma^*p}=(4\pi^2\alpha/Q^2)F_2(x,Q^2)$.
Where necessary, the measured values
of $F_2(x,Q^2)$ have been interpolated using a smooth function.
The overall normalization uncertainties have been neglected in the
fit of the data in the Fig.~1.
The measured total photon--proton cross sections are fitted
to the parametrisation (\ref{eq:b20}) with only two free parameters:
$Q_0^2$ and an overall normalization for each of the two sets of the
data. The results of the fit are also presented in Fig.~1 as lines.
We can conclude that (\ref{eq:b19},\ref{eq:b20}) reproduce all the main
features of
the data (including the photoproduction points) in a wide range of
energies and $Q^2$.
As expected, the parameter $Q^2_0$ is energy dependent. Its value is
$1.04\pm 0.04$~GeV$^2$ at $\sqrt{s}=200$~GeV while for $\sqrt{s}=15$~GeV we
needed $Q^2_0=0.42\pm 0.01$~GeV$^2$.
Relating the experimentally measured transverse momentum spectrum
to the parameter $\overline{k_T^2}$, the increase of $\overline{k_T^2}$
from the energies of fixed target experiments to the
HERA kinematic region agrees well with the rise of the $Q^2_0$ parameter
obtained above (see Eq.~(\ref{eq:c1})).
It is known that in the small-$x$ region the DGLAP evolution leads to a strong
scaling violation which reveals itself in a rather large positive value of the
anomalous dimension.
Therefore, at large energies the $Q^2$ behaviour of the
cross section $\sigma\propto 1/(Q^2)^{1-\gamma}$ becomes more flat.
In our parametrisation (\ref{eq:b20}) the same effect is hidden in the value
of $Q^2_0\propto k^2_T$. Due to a rather large anomalous dimension
and $\sigma_{q\bar{q}+p}(s,k^2_T)\propto 1/(k^2_T)^{1-\gamma}$,
the essential values of $k^2_T$ increase with energy faster than
$\log(s)$. Consequently, the $Q^2$ distribution becomes broader.
The expression (\ref{eq:b20}) also fits the data on nuclear
targets quite well.
In Fig.~2 the parametrisation (\ref{eq:b20}) is compared to the
available photoproduction~\cite{Landolt87} and
DIS~\cite{Adams96a,Arneodo89a} data at energy $\sqrt{s}= 10$~GeV for the nucleon
in deuteron, carbon and calcium nuclei.
The value of $Q^2_0$ increases
with the atomic number $A$ but the normalization factors are the same
within the errors.
In other words, not only the anomalous-dimension behaviour of the DIS cross
section but also the effect of shadowing is absorbed in the value of $Q^2_0$
(to be more precise, in the $A$-dependence of $Q^2_0$).
What is the origin of the $A$-dependence of $Q^2_0$?
{}From the point of view of the photon-quark interaction
the $k_T$ in Eq.~(\ref{eq:b19})
plays the role of the intrinsic transverse momentum of the quarks. So we have
to discuss the parton wave function of the nucleon/nuclear target.
Note that in coordinate space the longitudinal interval occupied by
the small-$x$ wee parton $z\sim 1/m_Nx$ ($m_N$ is the nucleon mass) increases
with $1/x$ and for $x< 0.1\, - \, 0.2$ the partons originated by
different nucleons start to overlap and interact with each other.
This parton-parton rescattering leads to the well-known shadowing effects.
However a quark cannot disappear completely since it carries conserved
quantum numbers (i.e.\
charge, isospin, etc.). The only possibility is to move the quark from the
densely populated phase space region to another one.
Consequently the rescattering mainly enlarges
the transverse momentum and pushes the quark out of the small $k_T$ region.
Therefore we have to expect
a larger value of
$Q_0^2=3\overline{k^2_T}$ for heavier nuclei. To estimate the size of
this effect let us consider soft rescattering. With a quark-nucleon cross
section of $\sigma_{qN}\simeq \frac 13\sigma_{NN}\simeq 10\, -\, 15$ mb
the mean number of quark interactions in $Ca$ will be $\nu_q\sim 0.7\, -\, 1$.
Each soft rescattering increases the value of $\overline{k^2_T}$ by
about $\Delta k^2\sim (0.3-0.4$~GeV$)^2$ since
$k^2_A \approx \overline{k^2_T}+\nu_q\cdot\Delta k^2$.
The parameter $Q^2_0$ for $Ca$ should be
about $3\nu_q\cdot\Delta k^2\sim 0.3\,-\,0.4$~GeV$^2$ larger
than for a free nucleon target. This is in good agreement with the fit
results given in Tab.~\ref{tab:2}.
\begin{table}[htb]
\caption{\label{tab:2}
Fit results for the parameter $Q_0^2$ for different energies and
targets. The data sets have been interpolated to the given energy.
}
\renewcommand{\arraystretch}{1.3}
\begin{center}
\begin{tabular}{|c|c|c|c|}\hline
$\sqrt{s}$~~(GeV) & target & $Q_0^2$~~(GeV$^2$) & data used
\\ \hline
200 & p & $1.04\pm 0.04$ & H1, ZEUS
\\ \hline
100 & p & $0.75\pm 0.04$ & H1, ZEUS
\\ \hline
15 & p & $0.42\pm 0.01$ & E665
\\ \hline
10 & d & $0.46\pm 0.02$ & E665
\\ \hline
10 & d & $0.44 \pm 0.03$ & EMC
\\ \hline
10 & C & $0.56 \pm 0.06$ & EMC
\\ \hline
10 & Ca & $0.76 \pm 0.08$ & EMC
\\ \hline
\end{tabular}
\end{center}
\end{table}
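The arithmetic behind this rescattering estimate is simple enough to write down explicitly. The Python sketch below just combines the numbers quoted above with the fit values of Tab.~\ref{tab:2} (using the deuteron as the free-nucleon reference; pairing the lower bounds of $\nu_q$ and $\Delta k^2$ together is a simplification):

```python
# Q_0^2 = 3 * kbar_T^2, and each of the nu_q soft rescatterings in Ca
# adds Delta k^2 ~ (0.3-0.4 GeV)^2 to the quark transverse momentum.
nu_q = (0.7, 1.0)               # mean number of quark interactions in Ca
dk2 = (0.3 ** 2, 0.4 ** 2)      # GeV^2 gained per rescattering
shift_lo = 3.0 * nu_q[0] * dk2[0]    # lower estimate of the Q_0^2 shift
shift_hi = 3.0 * nu_q[1] * dk2[1]    # upper estimate
# Fit values from Tab. 2 at sqrt(s) = 10 GeV (EMC): Ca vs deuteron
measured_shift = 0.76 - 0.44         # Q_0^2(Ca) - Q_0^2(d), GeV^2
```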
The same parton-parton rescattering may explain at least part of the growth of
$Q^2_0$ with energy: at large energies (small $x$)
the parton density increases, and even in the case of a single proton the
parton-parton interaction is no longer negligible.
In terms of the dispersion relation (\ref{eq:a12})
one can say that in dense matter (heavy nuclei or large $\sqrt{s}$) the
effective mass ($M^2$) of
$q\bar{q}$-pair increases. The point-like (large $M^2$) configurations
with a small cross section which penetrate a thin target
without any interaction are
absorbed by a dense target and give an essential contribution to the
cross section.
\section{Conclusive remarks}
At low $x$ photon--proton scattering can be understood as the fluctuation
of the virtual photon into a hadronic state and the subsequent scattering of
this state on the proton.
We have shown that cross section data on fixed target and HERA deep-inelastic
scattering support this interpretation.
On this basis, a simple
parametrisation of the $Q^2$ dependence of the $\gamma^\star p$ cross
section has been derived. The essential parameter $Q_0^2$ of this
parametrisation can be estimated from the measurement of secondaries
produced in the photon fragmentation region.
This data analysis also confirms the prediction of the $k_T$
factorization approximation that the hard scale of the process
is not the initial photon virtuality $Q^2$ but the parton $k_T$ of the
hadronic fluctuation. Of course, the essential values of $k^2_T$ are
correlated with $Q^2$ but neither directly equal nor proportional to $Q^2$.
Instead, the correlation between $k_T^2$ and $Q^2$ is rather broad.
Therefore, in order to fix the hard scale of
the deep-inelastic process it is better to use transverse energy
($E_T$) measurements in the photon fragmentation region rather than the
value of the colliding photon virtuality.
Fixing the hard scale by high $p_T$
secondary hadrons from the photon fragmentation region instead of the
photon virtuality $Q^2$, a faster transition
to hard scattering has been observed \cite{Aid97b}.
Finally, we may say that
the cross section fits presented in this work suggest that low-$x$ deep
inelastic scattering is
characterized by rather ``soft'' (corresponding to
$k^2_T\sim 0.15\, -\, 0.3$ GeV$^2$) quark-nucleon
($\sigma_{q\bar{q}+p}$) interactions.
\clearpage
\noindent {\large \bf Acknowledgements}
MGR thanks the INTAS (95-311) and the Russian Fund of
Fundamental Research (98-02-17629) for support. One of the authors
(RE) is supported by the U.S.\ Department
of Energy under grant DE-FG02-91ER40626.
AR gratefully acknowledges the University of Antwerpen for support.
\newpage
\section{Introduction}
The effective action is an important tool in quantum
electrodynamics (QED), and quantum field theory in
general. For example, for fermions in a static magnetic
background, the effective action yields (minus) the effective
energy of the fermions in that background; while for fermions
in an electric background, the effective
action is complex and the imaginary part gives (half) the
probability for fermion-antifermion pair production
\cite{schwinger,stone}.
The computation of rates for pair-production from the vacuum was
initiated by Schwinger \cite{schwinger} who studied the constant field
case and found that the rate is (exponentially) extremely
small. Brezin and Itzykson \cite{brezin} studied the more realistic
case for alternating fields $\vec{E}(t)=(E \sin(\omega_0 t),0,0)$ but
found negligible frequency dependence and still an unobservably low
rate for realistic electric fields. Narozhny\u{i} and Nikishov
\cite{naro} obtained an expression for both the spinor QED and scalar
QED effective action, as an integral over 3-momentum, for a time
dependent field $\vec{E}(t)=(E {\rm{sech}}^2\left({t\over
\tau}\right),0,0)$. Their approach was based on the well-known exact
solvability of the Dirac and Klein-Gordon equations for such a
background. This solvable case
has also featured in the strong-field analysis of Cornwall and
Tiktopoulos \cite{cornwall}, the group-theoretic semiclassical
approach of Balantekin {\it et al.} \cite{balantekin88,balantekin91}, the
proper-time method of Chodos \cite{chodos},
and the $S$-matrix work of Gavrilov and Gitman \cite{gavrilov}. Recent
experimental work involving the SLAC accelerator and intense lasers
has given renewed impetus to this subject, providing tantalizing hints
that the critical fields required for direct vacuum pair production
may be within reach \cite{burke,melissinos}.
In this paper we make several new contributions to this body of
work. First, using the resolvent approach we present an expression for
the exact effective action in the time-dependent background
$\vec{E}(t)=(E {\rm{sech}}^2 \left( {t\over \tau} \right),0,0)$ that is a
simple integral representation involving a single integral, rather
than as an expression that must still be traced over all 3-momenta, as
in \cite{naro,cornwall,gavrilov}. Second, we use this explicit
expression to make a direct comparison with independent results from
the derivative expansion approximation
\cite{hallthesis,shovkovy98}. Third, we show how the real and
imaginary parts of the effective action are related by dispersion
relations, connecting perturbative and nonperturbative
expressions. Finally, we show how the uniform semiclassical
approximation \cite{brezin,balantekin88} fits into the resolvent
approach, obtaining a simple semiclassical expression for the QED
effective action in a general time dependent, but spatially uniform
$E$ field. This expression is remarkably similar to Schwinger's
``proper-time'' expression for the constant field case.
When the background field has constant field strength
$F_{\mu\nu}$, it is possible to obtain an explicit expression for the
exact effective action as an integral representation \cite{schwinger}. The
physical interpretation of this expression depends upon the magnetic
or electric character of the background, and this is reflected in how
we expand the integral representation. In the case of a
constant magnetic background, a simple perturbative expansion in
powers of B yields
\begin{equation}
{\rm{S}}_{eff} = {B^2 T L^3 \over 2 \pi^2} \sum_{n=1}^\infty
{{\cal B}_{2n+2} \over (2n+2)(2n+1)(2n)} \left( 2B \over m^2
\right)^{2n},
\label{constantmag}
\end{equation}
where the ${\cal B}_n$ are the Bernoulli numbers \cite{gradshteyn},
and $T L^3$ is the space-time volume factor.
In the case of a constant electric field background, the
effective action is complex. The real part has a natural
perturbative expansion which is just (\ref{constantmag})
with $B\rightarrow iE$, while the imaginary part is a sum
over nonperturbative tunnelling amplitudes
\begin{eqnarray}
{\rm Re} ({\rm{S}}_{eff}) &=& -{E^2 T L^3 \over 2 \pi^2} \sum_{n=1}^\infty
{ (-1)^n {\cal B}_{2n+2} \over (2n+2)(2n+1)(2n)} \left( 2E \over
m^2 \right)^{2n} \label{re part constant E-field}\\
{\rm Im} ({\rm{S}}_{eff}) &=& {E^2TL^3 \over 8 \pi^3} \sum_{n=1}^\infty
{1 \over n^2} e^{ - {m^2 \pi n \over E}}.
\label{im part constant E-field}
\end{eqnarray}
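The nonperturbative sum in (\ref{im part constant E-field}) is easily evaluated numerically. The short Python sketch below (units $m=1$, with the prefactor $E^2TL^3/8\pi^3$ omitted; the sample field strengths are illustrative) shows how rapidly the series converges and how strongly the rate is suppressed for weak fields:

```python
import math

def pair_production_sum(E, nmax=200):
    """Dimensionless sum in Im(S_eff): sum_{n>=1} exp(-pi n m^2/E)/n^2,
    in units m = 1 (prefactor E^2 T L^3 / 8 pi^3 omitted)."""
    return sum(math.exp(-math.pi * n / E) / (n * n)
               for n in range(1, nmax + 1))

weak = pair_production_sum(0.1)   # E << m^2: exponentially small rate
crit = pair_production_sum(1.0)   # E ~ m^2: the n = 1 term dominates
```

Already at $E=m^2$ the $n=1$ term reproduces the full sum to about one percent, so the tunnelling interpretation of the individual terms is numerically very clean.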
There are two clear motivations for studying the effective
action in non-constant background fields. First, knowledge
of the effective action for more general gauge fields is
necessary for the ultimate quantization of the electromagnetic
field. Second, realistic electromagnetic background fields do
not have constant field strength, and so we would like to
understand effective energies and pair production rates in
more general backgrounds. However, it is, of course, not
possible to evaluate the exact QED effective action for
a completely arbitrary background. Thus we are led naturally
to approximate expansion techniques. A common approach,
known as the derivative expansion
\cite{aitchison,cangemi95a,gusynin96,hallthesis},
involves a formal perturbative expansion about Schwinger's
exactly solvable case of constant field strength.
Unfortunately, this type of perturbative expansion is
difficult to
perform beyond first order, and is hard to interpret
physically, even for magnetic-type backgrounds. This is
even more problematic for electric-type backgrounds,
for which we seek a {\it nonperturbative} expansion.
A complementary approach is to search for solvable examples
that are more realistic than the constant field case, although
still not completely general. Recent work \cite{dunne98}
has found an exact, explicit integral representation for
the 3+1 dimensional QED effective action in a static but spatially
inhomogeneous magnetic field of the form
\begin{equation}
\vec{B}(\vec{x}) = ( 0,0,B {\rm{sech}}^2 \left( {x \over \lambda} \right)).
\label{magtanh}
\end{equation}
For fermions in this background field, there are three relevant
scales: a magnetic field scale $B$, a width parameter $\lambda$
characterizing the spatial inhomogeneity, and the fermion mass
$m$. It is therefore possible to
expand the exact effective action in terms of two independent
dimensionless ratios of these scales, depending on the
question of interest. For example, since $\lambda=\infty$
corresponds to the uniform background case, in order to
compare with the derivative expansion we expand the exact
${\rm{S}}_{eff}$ as a series in $1 \over B\lambda^2$. It has been
verified that the first two terms in this
series agree precisely with independent derivative
expansion results (there are no independent field theoretic
calculations of higher order terms in the derivative
expansion with which to compare). Furthermore, these and analogous
results in 2+1 dimensions indicate that the derivative expansion is in
fact an asymptotic series expansion \cite{cangemi95b,dunne97}.
Formally, one could change this magnetic-type result to an
electric-type background by an appropriate analytic
continuation $B
\rightarrow iE$. However, it is not immediately clear
how to obtain a {\it nonperturbative} expression [for
example, something like (\ref{im part constant E-field})]
for the imaginary part of the effective action. For constant
background fields a simple dispersion relation provides this
connection between the magnetic and electric cases, but for
nonconstant fields the
dispersion relations are more complicated. Understanding this
connection, for non-constant backgrounds, is one of the main
motivations for this paper.
This paper is organized as follows. In Section II we review
briefly the constant field case, using Schwinger's proper
time method. In Section III we review the resolvent method,
which has been used to obtain exact integral representations
for the effective action in the special nonuniform magnetic
background (\ref{magtanh}). In Section IV we then use the
resolvent method to evaluate the exact effective action for
a time-dependent, but spatially uniform electric field
\begin{equation}
\vec{E}(\vec{x}) =( E {\rm{sech}}^2 \left( {t \over \tau} \right) ,0,0).
\label{electanh}
\end{equation}
In Section V we show how dispersion relations connect the
magnetic and electric cases (\ref{magtanh}) and (\ref{electanh}).
In Section VI we review the derivative expansion for electric fields
and in Section VII show its connection to the exact effective action
of Section IV. In Section VIII we use a uniform semi-classical
approximation to obtain a general (but semi-classical) expression for
the pair production probability in a time-dependent electric
background. The final Section is devoted to some concluding comments.
\section{Schwinger's Approach}
Integrating over the fermion fields gives the QED effective
action for fermions in a background electromagnetic field
\begin{eqnarray}
{\rm{S}}_{eff}[A] = - i \ln \det ( i{\not \!\! D} - m) = -{i \over 2} \:{\rm{tr}} \:\ln
({\not \!\! D}^2+m^2).
\label{schwingers trick}
\end{eqnarray}
Here, the covariant derivative is ${\not \!\! D}=\gamma^\mu
( \partial_\mu + iA_\mu)$ with the electric charge $e$ absorbed into
the gauge field $A$. In the calculations that follow we are implicitly
subtracting off the zero-field contribution ${\rm{S}}_{eff}[A=0]$.
In a classic paper \cite{schwinger}, Schwinger computed the effective
action for constant background fields. One expresses
the logarithm through an integral representation, the ``proper-time''
representation.
\begin{equation}
{\rm{S}}_{eff}=-{i\over 2} \:{\rm{tr}} \:\ln({\not \!\! D}^2+m^2)= {i\over 2}
\int_0^\infty{ds\over s} \:{\rm{tr}} \: e^{-s(\not{D}^2+m^2)}
\label{schwingers general}
\end{equation}
Clearly, to proceed, we need information concerning the spectrum
of the operator ${\not \!\! D}^2+m^2$.
For a constant magnetic background of strength B, we choose
$A_\mu=(0,0,0,By)$ and the Dirac representation of the gamma
matrices so that the operator becomes diagonal:
\begin{equation}
{\not \!\! D}^2+m^2= \left[\partial^2_0 -\partial_x^2-\partial_y^2
-(\partial_z + iBy)^2 +m^2 \right] {\bf 1} +
\left( \matrix { B&0&0&0 \cr 0&-B&0&0 \cr 0&0&B&0
\cr 0&0&0&-B} \right).
\end{equation}
The Dirac trace is trivial, and we are left with a harmonic
oscillator system with eigenvalues
\begin{equation}
m^2-k_0^2+k_x^2 +2B(n+{1\over 2}\pm {1 \over 2})
\label{harmonic osc}
\end{equation}
The remaining traces are straightforward, yielding the exact
effective action for a constant magnetic field \cite{schwinger}
\begin{equation}
{\rm{S}}_{eff}= {BTL^3 \over 8 \pi^2} \int_0^\infty {ds \over s^2}e^{-m^2s}
\left( \coth Bs - {1 \over Bs} - {Bs \over 3} \right)
\label{integral constant B}
\end{equation}
Here, the ${1 \over Bs}$ term is an explicit subtraction of
${\rm{S}}_{eff}[0]$, while the ${Bs \over 3}$ term corresponds
to a charge renormalization. A straightforward
expansion of (\ref{integral constant B}) yields the expansion
(\ref{constantmag}).
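The equivalence of (\ref{integral constant B}) and (\ref{constantmag}) can be checked numerically for weak fields. The Python sketch below works in units $m=1$; the field values and grid sizes are arbitrary choices, the Bernoulli numbers ${\cal B}_4,{\cal B}_6,{\cal B}_8$ are hardcoded, and a small-$s$ series for $\coth$ is used to avoid catastrophic cancellation near $s=0$:

```python
import math

def integrand(s, B, m=1.0):
    """e^{-m^2 s} (coth(Bs) - 1/(Bs) - Bs/3) / s^2, with the series
    coth x - 1/x - x/3 = -x^3/45 + 2x^5/945 - x^7/4725 + ... used
    at small x to avoid cancellation."""
    x = B * s
    if x < 1e-2:
        c = -x**3 / 45.0 + 2.0 * x**5 / 945.0 - x**7 / 4725.0
    else:
        c = 1.0 / math.tanh(x) - 1.0 / x - x / 3.0
    return math.exp(-m * m * s) * c / (s * s)

def proper_time_integral(B, smax=60.0, n=60000):
    """Midpoint-rule evaluation of the proper-time integral."""
    h = smax / n
    return sum(h * integrand((i + 0.5) * h, B) for i in range(n))

def bernoulli_series(B):
    """First three terms of the weak-field expansion, m = 1:
    integral = 4B * sum_n B_{2n+2} (2B)^{2n} / ((2n+2)(2n+1)(2n))."""
    bern = {4: -1.0 / 30.0, 6: 1.0 / 42.0, 8: -1.0 / 30.0}
    total = 0.0
    for n in (1, 2, 3):
        total += (bern[2 * n + 2] * (2.0 * B) ** (2 * n)
                  / ((2 * n + 2) * (2 * n + 1) * (2 * n)))
    return 4.0 * B * total
```

Since the series is asymptotic, the agreement is expected (and found) only for $B\ll m^2$, where the truncation error is far below the leading $-B^3/45$ term.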
In a constant electric background the calculation is similar.
Choosing $A_\mu=(0,Ex_0,0,0)$ and using the chiral representation for
the gamma matrices, we find the operator ${\not \!\! D}^2+m^2$ diagonalizes:
\begin{equation}
{\not \!\! D}^2+m^2=[ \partial_0^2 - (\partial_x+iEt)^2 -\partial_y^2-
\partial_z^2 +m^2] {\bf 1} + \left( \matrix {
iE&0&0&0 \cr 0&iE&0&0 \cr 0&0&-iE&0 \cr 0&0&0&-iE} \right).
\end{equation}
Once again, the Dirac trace is trivial, and we are left with a
harmonic oscillator with imaginary frequency. Thus
${\not \!\! D}^2+m^2$ has complex eigenvalues
\begin{equation}
m^2 +2iE(n+{1\over 2}\pm {1 \over 2})+k_y^2+k_z^2
\label{imaginary ho}
\end{equation}
The traces can be performed as before, yielding
\begin{equation}
{\rm{S}}_{eff}= {ETL^3 \over 8 \pi^2} \int_0^\infty {ds\over s^2} e^{-m^2s}
\left(\cot Es -{1 \over Es} +{Es \over 3} \right)
\label{integral constant E}
\end{equation}
where we have subtracted the same vacuum contribution and charge
renormalization terms.
Going from (\ref{integral constant B}) to (\ref{integral constant E}),
we note that the poles of the integrand have moved onto the contour of
integration. This is the trademark of background electric fields and
the ultimate source of the imaginary contribution. Regulating the
poles with the standard principal parts prescription \cite{schwinger},
we separate out the imaginary and real contributions to the effective
action.
\begin{equation}
{\rm{S}}_{eff}=i {E^2TL^3 \over 8\pi^3} \sum_{n=1}^\infty {1\over n^2}
e^{-{m^2 \pi n \over E}} + {ETL^3\over 8 \pi^2} {\cal P}
\int_0^\infty {ds \over s^2} e^{-m^2s} \left( \cot Es -
{1\over Es} +{Es\over 3} \right)
\end{equation}
As before, it is straightforward to expand the integral and arrive
at the expansion (\ref{re part constant E-field}), for the real part
of the effective action.
\section{Resolvent Method}
Now consider a class of more general backgrounds -- fields pointing
in a given direction and depending on only one space-time
coordinate. This is still far from the most general case;
nevertheless, this class is sufficiently broad to study the
effects of inhomogeneities, and yet simple enough to be
analytically tractable.
In the magnetic case we choose
\begin{equation}
\vec{A}= (0,0,a_B(y)) \hspace{1cm} \rightarrow \hspace{1cm}
\vec{B}=(a^\prime_B(y),0,0)
\label{B general}
\end{equation}
while in the electric case we choose
\begin{equation}
\vec{A}= (a_E(t),0,0) \hspace{1cm} \rightarrow \hspace{1cm}
\vec{E}=(a^\prime_E(t),0,0).
\label{E general}
\end{equation}
In the magnetic case there is no time dependence and $A_0=0$, so
we can perform the energy trace in (\ref{schwingers trick}).
After an integration by parts in $k_0$, this reduces the evaluation
of the effective action to a trace of a one-dimensional Green's
function, or resolvent
\begin{equation}
{\rm{S}}_{eff}=-iL \int {dk_0\over 2\pi} \sum_\pm \:{\rm{tr}} \:
{k_0^2 \over {\cal D}_\pm(k_x,y,k_z) -k_0^2}
\label{B resolvent}
\end{equation}
where the one-dimensional operator ${\cal D}_\pm$ is
\begin{equation}
{\cal D}_\pm=m^2+k_x^2 -\partial_y^2 + (k_z-a_B(y))^2 \pm
a^\prime_B(y).
\end{equation}
In the electric case there is no y dependence and $A_y=0$, so we
can perform the $k_y$ trace and obtain
\begin{equation}
{\rm{S}}_{eff}=iL \int {dk_y \over 2\pi} \sum_\pm \:{\rm{tr}} \:
{k_y^2 \over {\cal D}_\pm(t,k_x,k_z) + k_y^2}
\label{E resolvent}
\end{equation}
which involves the resolvent of the operator
\begin{equation}
{\cal D}_\pm= m^2 + \partial_0^2 + (k_x - a_E(t))^2
+ k_z^2 \pm ia_E^\prime(t).
\label{diff-e-q for electric}
\end{equation}
Thus, for both the magnetic and electric backgrounds in
(\ref{B general},\ref{E general}) the problem reduces to
tracing the diagonal resolvent (\ref{B resolvent},
\ref{E resolvent}) of a one-dimensional differential
operator. This makes clear the advantage of the resolvent
approach. For a typical background field we usually think
of computing the effective action by some sort of
summation over the spectrum of the appropriate Dirac operator.
This is easy for constant fields because the spectrum is
discrete [see (\ref{harmonic osc}) and (\ref{imaginary ho})].
But for non-constant fields the spectrum will typically
have both discrete and continuous parts, which makes a
direct summation extremely difficult. However, for
one-dimensional operators, we do not have to use this
eigenfunction expansion approach -- we can alternatively
express the resolvent as a product of two suitable independent
solutions, divided by their Wronskian. This provides a
simple and direct way to compute the effective action when
the background field has the form in (\ref{B general},
\ref{E general}).
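To make the Wronskian construction concrete, here is a minimal numerical sketch (not from the paper; it uses the free one-dimensional operator $-\partial_y^2+m^2$ as a hypothetical check case, for which the diagonal resolvent is exactly $1/(2m)$):

```python
import math

# Diagonal resolvent G(y,y) = phi1(y) phi2(y) / W[phi1, phi2] for the
# free 1-D operator -d^2/dy^2 + m^2 (a hypothetical check case).
m = 1.0

def phi1(y):
    # independent solution decaying as y -> +infinity
    return math.exp(-m * y)

def phi2(y):
    # independent solution decaying as y -> -infinity
    return math.exp(m * y)

def wronskian(f, g, y, h=1e-5):
    # W[f, g] = f g' - f' g, derivatives by central differences
    fp = (f(y + h) - f(y - h)) / (2 * h)
    gp = (g(y + h) - g(y - h)) / (2 * h)
    return f(y) * gp - fp * g(y)

def G_diag(y):
    return phi1(y) * phi2(y) / wronskian(phi1, phi2, y)

# The Wronskian is y-independent and G(y,y) = 1/(2m) for every y,
# matching the closed form (1/2m) e^{-m|y-y'|} at coincident points.
print(G_diag(0.0), G_diag(1.7))
```

For an inhomogeneous background one would instead integrate the two solutions numerically in from $y=\mp\infty$ and form the same ratio.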
This resolvent approach has been applied successfully to spatially
inhomogeneous magnetic backgrounds
\cite{cangemi95b,dunne97,dunne98,hallthesis}. It has also been used previously
by Chodos \cite{chodos} in an analysis of the possibility of spontaneous chiral
symmetry breaking for QED in time-varying background electric fields. In
this paper we present a detailed analysis of the resolvent approach to the
computation of the QED effective action in time-dependent electric backgrounds.
We
first check the resolvent approach by computing the effective action
for a constant electric field. The
constant electric field case follows the constant magnetic case
very closely. Choosing $a_E(t)=E t$, the eigenfunctions of the
operator (\ref{diff-e-q for electric}) are parabolic cylinder
functions. Taking independent solutions with the appropriate
behavior at $t=\pm \infty$, we obtain the Green's function
\begin{equation}
{\cal{G}}(t,t')=-{\Gamma[-\nu] \over \sqrt{4 \pi iE}} D_\nu
\left( \sqrt{ 2i \over E} (Et-k_x)\right) D_\nu \left(-
\sqrt{2i \over E} (Et'-k_x) \right)
\end{equation}
where we have defined $\nu={m^2 +k_y^2 +k_z^2 \over 2iE} \pm
{1\over2} + {1\over2}$. The trace of the diagonal Green's function
can be performed \cite{gradshteyn}, yielding psi (digamma) functions,
where $\psi(u)=\Gamma'(u)/\Gamma(u)$ is the logarithmic derivative
of the Gamma function \cite{gradshteyn}. Thus the effective action is
\begin{eqnarray}
{\rm{S}}_{eff} & = & -{i L^3 \over 4 \pi^3} \int_{0}^{ET} dk_x
\int_{-\infty}^{\infty} k_y^2 dk_y dk_z \sum_{\pm}
\int_{-\infty}^{\infty} dx_0 {\cal{G}} (x_0,x_0) \nonumber \\
& = & - {EL^3T \over 4 \pi^3} \int_{-\infty}^\infty k_y^2 dk_y dk_z
\sum_{\pm} \left( \psi({1\over2}-{\nu\over2}) +\psi(-{\nu\over2})
\right) \label{psi function} \\
& = & {EL^3T \over 8 \pi^2} \int_0^\infty {ds \over s^2} e^{-m^2 s}
\left( \cot Es - {1 \over Es} +{Es\over 3}\right).
\label{E integral 3}
\end{eqnarray}
The limits on the $k_x$ trace can be motivated by the classical
Lorentz interaction of the electron-positron pair after pair creation,
and can be checked by the requirement that the zero field part cancels
correctly. Note that the arguments of the psi functions appearing in
the effective action (\ref{psi function}) are complex. Thus we must
be careful to use the correct integral representation of the
$\psi$ function in the analysis. A convenient representation
for complex argument is given in Bateman \cite{bateman} as:
\begin{eqnarray}
\psi(z)&=& \log z -{1 \over 2z} -\int_0^{\infty e^{i\beta}} dt \left(
{1\over e^t-1} -{1\over t} + {1\over2} \right) e^{-zt} \\
& & -{\pi \over 2} < \beta < {\pi \over 2} \phantom{space}
-\left({\pi \over 2} + \beta \right) < {\rm{arg}} z <
\left( {\pi \over 2} + \beta \right) \nonumber
\end{eqnarray}
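For real argument (taking $\beta=0$) this representation is easy to verify numerically; the following sketch, with the hypothetical check value $\psi(2)=1-\gamma$, is mine and not from the paper:

```python
import math

EULER_GAMMA = 0.5772156649015329

def psi_integral(x, t_max=40.0, n=4000):
    # psi(x) from the real-axis (beta = 0) form of the representation:
    # psi(x) = ln x - 1/(2x) - int_0^inf (1/(e^t-1) - 1/t + 1/2) e^{-x t} dt,
    # valid for x > 0; the integral is done by composite Simpson rule.
    h = t_max / n
    def f(t):
        if t == 0.0:
            return 0.0          # integrand -> t/12 as t -> 0
        return (1.0 / math.expm1(t) - 1.0 / t + 0.5) * math.exp(-x * t)
    s = f(0.0) + f(t_max)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    integral = s * h / 3.0
    return math.log(x) - 1.0 / (2.0 * x) - integral

# psi(2) = 1 - gamma is a standard check value
print(psi_integral(2.0), 1.0 - EULER_GAMMA)
```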
The expression (\ref{E integral 3}) is the same as
(\ref{integral constant E}), and
the calculation proceeds exactly as before. But for the constant field case
the resolvent method is unnecessarily complicated. The advantages of
the resolvent method will become evident when applied to more complicated
background fields, as is done in the remainder of this paper.
\section{Exactly Solvable Case}
In this section we apply the resolvent method to a background
gauge field $A^\mu=(0,E \tau \tanh (\frac{t}{\tau}),0,0)$. This gauge
field corresponds to a single pulsed
electric field in the x-direction $E_x(t)=E {\rm{sech}}^2 (\frac{t}{\tau})$.
The electric field is spatially uniform but time-dependent; it
vanishes at $t=\pm\infty$, peaks at t=0, and has a temporal width
$\tau$ that is arbitrary. This field contains the constant field as a
special case when we take $\tau \rightarrow \infty$. The resolvent
expression (\ref{E resolvent}) for the effective action gives
\begin{equation}
{\rm{S}}_{eff} = i {L^3 \over 4\pi^3} \int d^3k \:{\rm{tr}} \:
{k_y^2 \over \partial_0^2+ (k_x -{E \tau} \tanh ({t \over \tau}
))^2 +k_y^2 + k_z^2 +m^2 \pm i E {\rm{sech}}^2 \left({t \over \tau}\right)}
\end{equation}
The $k_x$ momentum trace runs over $(-\infty,\infty)$ since we
consider an infinite interaction time.
To determine the effective action we need the resolvent, which is
constructed from solutions to the ordinary differential equation
\begin{equation}
\left(\partial_0^2 +m^2+k_y^2 +k_z^2 +\left( k_x -E \tau
\tanh\left({t \over \tau}\right) \right)^2 \pm iE {\rm{sech}}^2
\left({t\over \tau}\right) \right) \phi=0
\end{equation}
This can be converted, by the substitution $y=\frac{1}{2}
(1+\tanh\frac{t}{\tau})$, to a hypergeometric equation,
with independent solutions
\begin{eqnarray}
\phi_1 & = & y^{i\tau\alpha/2}(1-y)^{i\tau\beta/2} \mbox{}_2F_1\left( {i \tau\over 2}
\left( \alpha+\beta\pm 2 E\tau \right), 1+{i\tau\over 2}
\left(\alpha+\beta \mp 2E\tau\right) ; 1+ i \tau
\alpha;y \right) \nonumber \\
\phi_2 & = & y^{i\tau\alpha/2}(1-y)^{i\tau\beta/2} \mbox{}_2F_1\left( {i \tau\over 2}
\left( \alpha+\beta\pm 2E\tau \right), 1+{i\tau \over 2}
\left(\alpha+\beta \mp 2E\tau \right) ; 1+ i\tau
\beta;1-y \right)
\end{eqnarray}
where we have defined
\begin{eqnarray}
y & = &{1\over2}(1+\tanh\left({t\over \tau}\right)) \nonumber \\
\alpha & = & \left( m^2+k_y^2
+k_z^2 +\left(E\tau +k_x\right)^2 \right)^{1/2} \nonumber\\
\beta & = & \left( m^2 +k_y^2 +k_z^2 + \left( E\tau
-k_x \right)^2 \right)^{1/2}.
\label{a and b}
\end{eqnarray}
The boundary conditions correspond to a particle of energy
$\alpha$ traveling forward in time and a particle of energy $-\beta$
traveling backward in time.
The diagonal resolvent is ${\cal G}(t,t)={\phi_1(t)\phi_2(t)\over
W[\phi_1,\phi_2]}$, where $W[\phi_1,\phi_2]$ is the Wronskian.
The trace over time once again yields psi functions (just as
in the magnetic cases treated in \cite{cangemi95b,dunne97}):
\begin{eqnarray}
{\rm{S}}_{eff} & = & - {L^3 \tau \over 4\pi^3} \sum_\pm
\int_{-\infty}^\infty {k_y^2 d^3k
\over 4 } \left( {1\over\alpha} + {1\over\beta} \right)
\left( \psi(1+ {i\tau\over2} ( \alpha + \beta \mp 2E \tau ))
+ \psi ( {i\tau\over 2} (
\alpha + \beta \pm 2E \tau ) ) \right) \nonumber \\
& = & - {L^3 \over 4\pi^3} \sum_\pm
\int_{-\infty}^\infty {k_y^2 d^3k
\over 4 k_\perp} {\partial \Omega_{(\pm)} \over \partial k_\perp}
\left(\psi(1+{i \over 2} \Omega_\mp) + \psi({i\over2} \Omega_\pm)
\right) \nonumber \\
& = & {L^3 \over 4\pi^3} {1 \over2} \int d^3k \int_0^\infty
{ds \over s} \left( e^{-\Omega_+ s} + e^{-\Omega_- s} \right)
\left( \cot s -{1\over s} \right)
\label{full action for tanh}
\end{eqnarray}
where we have defined $\Omega_+=\tau\left( \alpha+\beta
+{2E\tau}\right)$ and $\Omega_-= \tau
\left(\alpha+\beta-{2E\tau} \right) $.
Equation (\ref{full action for tanh}) is the exact effective action
for this time-dependent background gauge field. Notice the close
similarity to Schwinger's expression (\ref{E integral 3}) for the
constant background electric field. It is straightforward to
check that taking $\tau \rightarrow \infty$ reduces
(\ref{full action for tanh}) to the constant field result
(\ref{E integral 3}).
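A quick numerical sketch of this limit (my own check, with arbitrary sample values of $E$, $k_x$ and $\mu^2=m^2+k_y^2+k_z^2$): the combination $\tau(\alpha+\beta-2E\tau)$ that controls the pair-creation exponents tends to $\mu^2/E$ as $\tau\rightarrow\infty$, which is how the constant-field exponent is recovered.

```python
import math

# tau*(alpha + beta - 2 E tau) -> mu^2/E as tau -> infinity
# (sample values; mu2 stands for m^2 + ky^2 + kz^2)
E, kx, mu2 = 1.0, 0.7, 1.69

def tail(tau):
    alpha = math.sqrt(mu2 + (E * tau + kx) ** 2)
    beta = math.sqrt(mu2 + (E * tau - kx) ** 2)
    return tau * (alpha + beta - 2 * E * tau)

print(tail(1e2), tail(1e4), mu2 / E)  # converges toward 1.69
```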
The effective action (\ref{full action for tanh}) has both real
and imaginary parts. As described before for the constant field
case, we regulate the integral using the principal part
prescription to obtain the imaginary part
\begin{eqnarray}
{\rm{Im}}({\rm{S}}_{eff})& = & {1\over2} {L^3 \over 4\pi^3} \int d^3k\;
\sum_{n=1}^\infty {1\over n} \left(e^{-n \pi \Omega_+} + e^{-n \pi
\Omega_-} \right) \nonumber \\
& = & -{1 \over 2} {L^3 \over 4\pi^3} \int d^3k \;\ln\left(
(1- e^{-\pi \Omega_+} ) (1-e^{- \pi \Omega_-} ) \right)
\label{non expanded im part of tanh eff action}
\end{eqnarray}
and real part of the exact effective action
\begin{eqnarray}
{\rm{Re}}({\rm{S}}_{eff}) & = & {1 \over 2} {i L^3 \over 4\pi^3} \int d^3k
\int_0^\infty {ds \over s} \left(e^{-i \Omega_+ s} + e^{- i
\Omega_- s} \right) (\coth s -{1\over s}) \nonumber \\
& = & {1 \over 6} {L^3 \over 4\pi^3} \int d^3k {1 \over \Omega_+}
+ {L^3 \over (2 \pi)^3} \sum_{n=1}^\infty { (-1)^n {\cal{B}}_{2n+2}
\over (2n+2)(2n+1) } \int d^3k \left( {2 \over \Omega_+} \right)^{2n+1}
\end{eqnarray}
where we have asymptotically expanded the integral over $s$ in inverse
powers of $\Omega_+$.
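The resummation of the $n$-sum in the imaginary part (\ref{non expanded im part of tanh eff action}) is just the elementary series $\sum_{n\ge 1}x^n/n=-\ln(1-x)$ with $x=e^{-\pi\Omega_\pm}$; a trivial numerical sanity check (sample exponent is mine):

```python
import math

# sum_{n>=1} x^n / n = -ln(1 - x), the identity used to resum the
# n-sum in the imaginary part; the sample exponent is arbitrary.
x = math.exp(-math.pi * 1.2)
series = sum(x ** n / n for n in range(1, 200))
print(series, -math.log(1.0 - x))
```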
The first term can be regulated and absorbed by renormalization. In
the second term the $k$ integrals can be done to yield the
integral representation
\begin{equation}
{\rm{Re}}({\rm{S}}_{eff}^{ren})=- {2L^3 \over 3 \pi^2 \tau^3}
\int_0^\infty {dt \over e^{2 \pi t} -1 } \left( { t- E\tau^2
\over v_- } \left( {m^2\tau^2} - v_-^2
\right)^{3/2} \sin^{-1}(\frac{v_-}{\tau m}) +
(E \rightarrow -E) \right)
\label{real part from computation}
\end{equation}
where we have defined $v_-= (t^2 -2t E \tau^2)^{1/2}$. This integral
may be expanded as
\begin{eqnarray}
{\rm{Re}} {\rm{S}}_{eff}^{ren}&=& - {L^3 \tau m^4 \over 8 \pi^{3/2}}
\sum_{j=0}^\infty {1 \over \Gamma(j+1)} \left({1 \over 2E\tau^2}
\right)^j \nonumber \\
&& \hspace{2.5cm} \times \sum_{k=1}^\infty
{\Gamma(2k+j)\Gamma(2k+j-2)\over \Gamma(2k+1)\Gamma(2k+j+{1\over2})}
(-1)^{k+j} {\cal{B}}_{2k+2j}
\left( {2E \over m^2} \right)^{2k+j}
\label{real full expansion of tanh action}
\end{eqnarray}
We now compare these results to previous analyses. The real part
of the effective action is exactly the same as the effective action
for the magnetic ${\rm{sech}}^2$ background case (see eqs.\ (10) and (18)
in \cite{dunne98}), with the replacements $B \rightarrow iE$ and
$\lambda T\rightarrow \tau L$. Thus our naive expectation, that this
simple analytic continuation connects the magnetic and electric
backgrounds, is borne out. But in an electric background we are more
interested in the imaginary part, which does not have this type of
perturbative expansion. Rather, it has the nonperturbative form
(\ref{non expanded im part of tanh eff action}). This explains how
it is possible to obtain both a perturbative and a nonperturbative
expression, for the real and imaginary parts respectively, from
the exact effective action (\ref{full action for tanh}).
Balantekin {\it et al.} \cite{balantekin91} have also computed
this imaginary part of the effective action for a ${\rm{sech}}^2$ electric
field. Our result
(\ref{non expanded im part of tanh eff action}) agrees with their
expression (see eqn (3.29) of \cite{balantekin91}), once theirs
is symmetrized in $E\to -E$, as it must be to satisfy Furry's
theorem. This difference is not important for the imaginary part, but
it is crucial for the consistency of the dispersion relations which
relate the real and imaginary parts, as we show in the next section.
\section{Dispersion Relations}
In the previous section we found an expression for the exact effective
action for a particular background electric field. This effective
action has both real and imaginary parts. Given the real or imaginary
part of the effective action there exist dispersion relations which
relate the two. Here, we exploit the cuts in the electron self-energy
function to analytically continue it to the entire complex plane.
We shall show that there exist simple dispersion relations between
the real and imaginary parts of the effective action, both at the
perturbative level and also at the level of the general expression
(\ref{full action for tanh}).
\subsection{Perturbative Dispersion Relations}
Expand the imaginary part
(\ref{non expanded im part of tanh eff action})
of the effective action in powers of $E^2$
\begin{eqnarray}
{\rm{Im}} ({\rm{S}}_{eff}) & = & {1\over2} {L^3 \over 4\pi^3 }
\sum_{n=1}^\infty{1\over n} \int d^3k\left( e^{-n \pi \Omega_+}
+ e^{-n \pi \Omega_-} \right) \\
& = & {L^3 \over 4\pi^3} \sum_{n=1}^\infty {1 \over n} \int
d^3k \left[
e^{- 2 \pi n \tau \sqrt{\mu^2+k_x^2} } + {E^2 \over 2}
e^{- 2 \pi n \tau \sqrt{\mu^2+k_x^2} } \left( 4n^2 \pi^2\tau^4
- {2n\pi\mu^3 \tau^3 \over (\mu^2+k_x^2)^{3/2}}
\right) \right. \nonumber \\
& \mbox{}& \phantom{need space he}
+ E^4 e^{-2 \pi n \tau \sqrt{\mu^2+
k_x^2}} \left( {2n^4\pi^4 \tau^8\over 3} - {2n^3\pi^3\mu^2 \tau^7
\over (\mu^2+k_x^2)^{3/2} } + {n^2\pi^2\mu^4 \tau^6 \over
2(\mu^2+k_x^2)^3} \right. \nonumber \\
& \mbox{}& \phantom{space needs go right here}
+ \left. \left. {\pi n\mu^2 \tau^5 \over 4
(\mu^2+k_x^2)^{5/2}} - {5\pi n \mu^2 k_x^2 \tau^5 \over 4
(\mu^2+k_x^2)^{7/2}} \right)+\dots \right]
\label{imaginary piece expansion in E^2}
\end{eqnarray}
where $\mu^2 = m^2+k_y^2+k_z^2$. Consider first the $E^2$ term. Doing the angular integrals we obtain
\begin{eqnarray}
[{\rm{Im}} {\rm{S}}_{eff}]_{E^2}& = & {L^3 \over 4\pi^3} E^2 \sum_{n=1}^\infty
{1 \over 2n} \int d^3k
e^{- 2 \pi n \tau \sqrt{m^2+k^2} } 2 \pi n \tau
\left(2 n \pi \tau - { \tau(m^2+k^2 \sin^2 \theta) \over
(m^2+k^2)^{3/2} } \right) \nonumber \\
& = & {L^3 \over 4\pi^3}
{4E^2 \pi^2 \tau^3\over 3 } \sum_{n=1}^\infty \int_0^\infty
dk e^{- 2 \pi n \tau \sqrt{m^2+k^2}} \left( 6 \pi nk^2\tau
- {3k^2m^2 + 2k^4 \over (m^2+k^2)^{3/2}} \right)
\end{eqnarray}
With the substitution $q=2 \sqrt{m^2+k^2}$ this becomes
\begin{eqnarray}
[{\rm{Im}} {\rm{S}}_{eff}]_{E^2}& = &{L^3 \over 4\pi^3}
{E^2 \pi^2 \tau^3 \over 3 } \sum_{n=1}^\infty
\int_{2m}^\infty dq e^{-n \pi q \tau} (q^2-4m^2)^{1/2}
\left( {3n \pi \tau} q - {2\over q^2}
(q^2+2m^2) \right)\nonumber\\
& = & {L^3 \over 4\pi^3}
{E^2 \pi^3 \tau^4 \over6} \int_{2m}^\infty dq
q^2 {\rm{csch}}^2 {\pi q \tau\over 2} \left(1-{4m^2\over q^2}
\right)^{1/2} \left( 1+{2m^2\over q^2} \right) \nonumber \\
\label{imaginary E^2 action}
& = & {L^3 \over 4\pi^3} 4E^2 \pi^4 \tau^4
\int_0^\infty dq q^2 {\rm{csch}}^2 {\pi q \tau \over 2 } {\rm{Im}}
\Pi(q^2)
\end{eqnarray}
This expression agrees with the result of Itzykson and Zuber
\cite{itzykson}, where $\Pi(q^2)$ is the one-loop self-energy.
They reduced the problem to a one-dimensional Lippmann-Schwinger
equation and expanded perturbatively to find the $E^2$ order term.
Along the real axis, there are cuts in the complex $q$ plane
along $(-\infty,-2m]$ and $[2m,\infty)$. To derive the
dispersion relations we will need to consider an integral as $q^2
\rightarrow \infty$. Since the self-energy does not go to
zero as $q^2 \rightarrow \infty$, we need to add a linear convergence factor. The
convergence factor gives a residue at the origin which will ultimately
be absorbed by renormalization. This results in a once-subtracted
dispersion relation as follows.
Apply Cauchy's integral theorem to a function $f(z)$ satisfying these
properties. Let the contour be from $(-\infty,\infty)$ along the real
axis and close with an arc of infinite radius in the upper half plane.
\begin{eqnarray}
{f(z) \over z} & = & {1 \over 2\pi i} \oint_{\cal{C}}
{ f(\xi) d\xi \over \xi (\xi -z)} \nonumber \\
& = & {f(0) \over 2 z} + {1 \over 2\pi i} {\rm{P}}
\int_{-\infty}^\infty {f(x^\prime) dx^\prime \over x^\prime
(x^\prime -z)}
\end{eqnarray}
Now let the point z go to the real axis $z \rightarrow
x+i\varepsilon$.
\begin{equation}
\label{once subtracted dispersion}
{f(x) \over x} = {f(0) \over x} + {1 \over \pi i} {\rm{P}}
\int_{-\infty}^\infty {f(x^\prime) dx^\prime \over x^\prime
(x^\prime-x)}
\end{equation}
Take the real and imaginary parts of
(\ref{once subtracted dispersion}) and assume that $f(z)$ satisfies
the Schwarz reflection principle $f(z^*)=f^*(z)$.
\begin{eqnarray}
{\rm{Re}} (f(x)-f(0))
&=& {x\over \pi} {\rm{P}} \int_{-\infty}^\infty {{\rm{Im}}
f(x^\prime) dx^\prime \over x^\prime (x^\prime -x) }
= {2 x^2 \over \pi} {\rm{P}} \int_0^\infty {{\rm{Im}} f(x^\prime)
dx^\prime \over x^\prime ({x^\prime}^2 -x^2) } \\
{\rm{Im}} (f(x) -f(0))
&=& -{x\over \pi} {\rm{P}} \int_{-\infty}^\infty {{\rm{Re}}
f(x^\prime) dx^\prime \over x^\prime (x^\prime -x) }
= -{2 x \over \pi} {\rm{P}} \int_0^\infty {{\rm{Re}} f(x^\prime)
dx^\prime \over {x^\prime}^2 -x^2 }
\end{eqnarray}
{}From the imaginary part of the electron self-energy in
({\ref{imaginary E^2 action}})
\begin{equation}
{\rm{Im}} \Pi(k) = {1\over 24 \pi} \left( 1-{4m^2 \over k^2}
\right)^{1/2} \left( 1 +{2m^2 \over k^2} \right) \Theta(k^2 -4m^2)
\end{equation}
we can obtain the real part.
\begin{eqnarray}
{\rm{Re}} (\Pi(k) -\Pi(0)) & =& {2k^2 \over \pi} {1 \over 24 \pi}
{\rm{P}} \int_{2m}^\infty {dk^\prime \over k^\prime ( {k^\prime}^2
-k^2) } \left(1-{4m^2 \over {k^\prime}^2} \right)^{1/2} \left( 1 +
{2m^2 \over {k^\prime}^2 } \right) \nonumber \\
& = & {1 \over 32 \pi^{3/2}} \sum_{j=1}^\infty {\Gamma(j+2) \over j
\Gamma(j+{5\over2}) } \left( {k\over 2m} \right)^{2j}
\end{eqnarray}
This is the kernel for the $E^2$ order real part of the effective
action.
\begin{eqnarray}
[{\rm{Re}} {\rm{S}}_{eff}]_{E^2} & = & {L^3 \over 4\pi^3} 4E^2 \pi^4 \tau^4
\int_0^\infty dq q^2 {\rm{csch}}^2 {\pi q \tau \over 2 }
{\rm{Re}} ( \Pi(q^2) -\Pi(0) ) \nonumber \\
& = & {L^3 \over 4\pi^3} {4 \pi^4 E^2 \tau^4 \over 32 \pi^{3/2}}
\sum_{j=1}^\infty {\Gamma(j+2) \over j\Gamma(j+{5\over2})} {1\over
(2m)^{2j}} \int_{-\infty}^\infty dq q^{2j+2}{\rm{csch}}^2
{q\pi\tau\over 2} \nonumber \\
& = & {E^2 L^3 \tau\over 4 \pi^{3/2} } \sum_{j=1}^\infty
{(-1)^j \Gamma(j+2) \over j\Gamma(j+{5\over2})} {\cal{B}}_{2j+2}
\left({1 \over m\tau} \right)^{2j}
\label{dispersion real E^2 expansion}
\end{eqnarray}
This agrees with the $k=1$ term of
(\ref{real full expansion of tanh action}), the real part of the full
effective action to order $E^2$.
A similar analysis can be done for the $E^4$ contribution. Doing the
angular integrals in the $E^4$ piece from
(\ref{imaginary piece expansion in E^2}) gives
\begin{eqnarray}
[{\rm{Im}}{\rm{S}}_{eff}]_{E^4} &=& {L^3 \over 4\pi^3} {4\pi^2 E^4 \tau^5\over 3}
\sum_{n=1}^\infty \int_0^\infty k^2 dk e^{-2\pi n \tau
\sqrt{m^2+k^2}} \left( 2n^3 \pi^3\tau^3 -
{2n^2\pi^2\tau^2(2k^2+3m^2) \over (m^2+k^2)^{3/2}} \right.
\nonumber \\
& \mbox{} & \phantom{some space here} + \left.
{ n\pi \tau(15m^4+20m^2k^2+8k^4)\over 10(m^2+k^2)^3 } +
{3 m^4 \over 4(m^2+k^2)^{7/2}} \right)
\end{eqnarray}
The substitution $q=2\sqrt{m^2+k^2}$ leads to
\begin{eqnarray}
[{\rm{Im}} {\rm{S}}_{eff}]_{E^4} &=& {L^3 \over 4\pi^3} {\pi^2 E^4 \tau^5 \over 3}
\sum_{n=1}^\infty \int_{2m}^\infty dq q (q^2-4m^2)^{1/2} e^{-n\pi q
\tau} \left( n^3 \pi^3\tau^3 -
{ 4n^2\pi^2\tau^2 (q^2+2m^2) \over q^3} \right. \nonumber \\
& \mbox{} & \phantom{some space here} \left. +
{8n\pi\tau(q^4+2m^2q^2+6m^4) \over 5 q^6} +
{48m^4\over q^7} \right)
\end{eqnarray}
Integrate by parts in the $1^{st},2^{nd}$ and $4^{th}$ terms and
collect terms:
\begin{eqnarray}
[{\rm{Im}}{\rm{S}}_{eff}]_{E^4} &=& -{L^3\over 4\pi^3} {8 \pi^3E^4m^4 \tau^6
\over 3} \int_0^\infty dq\, q^4\, {\rm{csch}}^2 {\pi q \tau\over 2}
\Theta(q^2-4m^2) \nonumber \\
& \mbox{}& \phantom{some big space here} \times
{1\over q^8} \left(1-{4m^2 \over q^2}\right)^{-3/2}
\left( 3 -{10m^2 \over q^2} \right)
\end{eqnarray}
The dispersion relation for the $E^4$ term is derived in the same way
except that no subtraction is needed.
\begin{eqnarray}
{\rm{Re}}f(x) & = & {2\over \pi} {\rm{P}} \int_0^\infty {x^\prime
dx^\prime \over {x^\prime}^2 -x^2} {\rm{Im}} f(x^\prime)
\nonumber \\
{\rm{Im}}f(x) & = & - {2x\over \pi} {\rm{P}} \int_0^\infty {dx^\prime
\over {x^\prime}^2 -x^2} {\rm{Re}}f(x^\prime)
\label{4th order dispersion relations}
\end{eqnarray}
With the dispersion relations (\ref{4th order dispersion relations})
we can immediately write down the complementary part of
the effective action at order $E^4$.
\begin{eqnarray}
[{\rm{Re}}{\rm{S}}_{eff}]_{E^4} & = & - {L^3 \over 4\pi^3}
{8 \pi^3E^4m^4 \tau^6\over 3}
\int_0^\infty dq\, q^4\, {\rm{csch}}^2 {\pi q\tau\over 2} {2\over\pi}
{\rm{P}} \int_{2m}^\infty {k dk \over k^2-q^2} \nonumber \\
& \mbox{} & \phantom{need some space here} \times
{1 \over k^8} \left( 1-{4m^2\over k^2} \right)^{-3/2}
\left(3-{10 m^2 \over k^2} \right) \nonumber \\
& = & - {2L^3E^4m^4 \tau^9\over \pi^{3/2}} \sum_{j=0}^\infty
{(-1)^j \over \Gamma(j+1)} {\Gamma(j+4)\Gamma(j+2)\over
\Gamma(5)\Gamma(j+{9\over2})} {\cal{B}}_{2j+4}
\left( 1 \over m \tau \right)^{2j+8}
\label{dispersion E^4 term}
\end{eqnarray}
Thus, the dispersion relations have enabled us to deduce the $E^4$
term of real part (\ref{real full expansion of tanh action}) of the
effective action, beginning with the $E^4$ term in the imaginary part.
Using dispersion relations we have shown how it is possible to go from
a tunneling-like expression to an asymptotic expansion at the first
two orders in $E^2$. Recall that the real part of the exact effective
action with a ${\rm{sech}}^2$ background electric field
(\ref{real full expansion of tanh action}) is an asymptotic
expansion in {\it two} dimensionless scales ${1\over \tau^2}$ and
$\left({E \over m^2}\right)^2$. Following steps similar to those taken
above, we can find similar dispersion relations for
the other expansion scale, ${1\over \tau^2}$. These relations have
been derived and are presented in \cite{hallthesis}.
\subsection{All-Orders Dispersion Relations}
The above approach could be continued to higher orders in $E^2$,
but the integrals become more difficult. Instead, we look for a
dispersion relation connecting the full exact expressions for
the real part (\ref{real part from computation}) and the imaginary
part (\ref{non expanded im part of tanh eff action}) of the
effective action. Begin with the imaginary part
(\ref{non expanded im part of tanh eff action}):
\begin{equation}
{\rm{Im}}{\rm{S}}_{eff}={L^3 \over 4\pi^3}
{1\over2} \sum_{n=1}^\infty {1\over n} \int d^3k \left(
e^{-n \pi \tau \left(- 2E\tau +\sqrt{ \mu^2 +
\left( E\tau +k_x\right)^2 } +\sqrt{ \mu^2+ \left(
E\tau -k_x \right)^2} \right)} +(E \rightarrow -E) \right)
\label{full non-perturbative im part}
\end{equation}
Make the following substitution to unravel the exponents
\begin{equation}
2t=2E\tau^2 + \tau \sqrt{m^2 + E^2\tau^2 +k^2+ 2 E \tau k\cos\theta}
+\tau \sqrt{ m^2 + E^2\tau^2 +k^2 -2 E\tau k \cos\theta}.
\label{expression for t}
\end{equation}
Solve (\ref{expression for t}) for $k$,
\begin{equation}
k= {1 \over \tau} \sqrt{ \left(t-E\tau^2\right)^2
\left(t^2 - m^2 \tau^2 -2t E \tau^2 \right)
\over t \left(t- 2E\tau^2 \right) - E^2 \tau^4 \sin^2 \theta }
\end{equation}
substitute into (\ref{full non-perturbative im part}), and do the
angular integration
\begin{eqnarray}
{\rm{Im}} {\rm{S}}_{eff} &=& {L^3 \over 4\pi^3} \sum_{n=1}^\infty
{\pi \over n} \int_{E\tau^2 + \sqrt{m^2\tau^2 + E^2\tau^4}}^\infty
dt\, e^{-2\pi nt} {d \over dt} \int_0^{\pi} d\theta
\sin\theta \left(k^3(E) +k^3(-E) \right) \nonumber \\
& = & -{L^3\over 4\pi^3}{4 \pi^2 m^4\tau \over 3} \int_0^\infty
{dt \over e^{2\pi t} -1} \left(\Theta(z_--1) z_-^3 {dz_- \over dt}
\left(1-{1\over z_-^2} \right)^{3/2} + (z_- \rightarrow z_+) \right)
\end{eqnarray}
where we have defined $z_-={1\over m \tau}(t^2 -2t E \tau^2)^{1/2}$.
A dispersion relation can be derived for the complex variable
$z_-$. We regard the factor $\left(1-{1\over z_-^2}\right)^{3/2}$
as the imaginary part of an analytic
function defined along the whole real axis. Care must be taken since
the function does not go to zero along the arc as $z_- \rightarrow
\infty$, so we must insert a
convergence factor. There is a dispersion relation giving the real
part in terms of the imaginary part of a function with these
characteristics.
\begin{equation}
{\rm{Re}}(f(z_-)-f(0)) = {2 z_-^2 \over \pi} {\rm{P}} \int_0^\infty
{{\rm{Im}} f(k) dk \over k (k^2-z_-^2) }
\label{dispersion for nonper}
\end{equation}
With (\ref{dispersion for nonper}) we can obtain the real part of the
effective action.
\begin{eqnarray}
{\rm{Re}}{\rm{S}}_{eff}^{ren} & = & -{L^3 m^4\tau\over 3\pi} \int_0^\infty {dt
\over e^{2\pi t} -1} \left( z_-^3 {dz_-\over dt} {2 z_-^2\over \pi}
{\rm{P}} \int_1^\infty { \left( 1-{1\over k^2} \right)^{3/2} dk
\over k(k^2-z_-^2) } + (z_- \rightarrow z_+) \right) \nonumber \\
& = & -{m^4 L^3 \tau\over 15 \pi^2} \int_0^\infty {dt \over e^{2
\pi t} -1} \left( z_-^4 {dz_-^2 \over dt}
\mbox{}_2F_1(1,1;{7\over2},z_-^2) + (z_- \rightarrow z_+) \right)
\label{has 2F1 in it}
\end{eqnarray}
In the last equation, we recognize the hypergeometric function
$\mbox{}_2F_1$, which has another representation in terms of
$\sin^{-1}$ \cite{gradshteyn}.
\begin{eqnarray}
{\rm{Re}}{\rm{S}}_{eff}^{ren} & = & - {m^4 L^3 \tau\over \pi^2}
\int_0^\infty {dt \over e^{2\pi t} -1} {2 \over m^2 \tau^2}
\left( \left( t-{E \tau^2}\right) \right. \nonumber \\
\phantom{some space here} & \times &
\left. \left( -{2\over3} + {8 v_-^2 \over 9 m^2 \tau^2}
+ {2m \tau\over 3 v_-} \left( 1-{ v_-^2 \over
\tau^2 m^2}\right)^{3/2} \sin^{-1} {v_-\over \tau m} \right)
+ (E \rightarrow -E) \right)
\end{eqnarray}
We drop terms independent of $E$ (since these cancel against the vacuum
subtraction) and get
\begin{equation}
{\rm{Re}}{\rm{S}}_{eff}^{ren} = -{2 L^3 \over 3 \pi^2\tau^3} \int_0^\infty
{dt \over e^{2 \pi t} -1} \left( { t-E\tau^2
\over v_-} \left( m^2\tau^2 -v_-^2 \right)^{3/2}
\sin^{-1} {v_- \over \tau m} + (E \rightarrow -E) \right)
\label{total real from dispersion}
\end{equation}
where $v_-=\left( t^2 -2tE \tau^2 \right)^{1/2}$ and
$v_+=(t^2 + 2tE \tau^2 )^{1/2}$.
This expression for the real renormalized effective action is
exactly the same as obtained by a direct computation
(\ref{real part from computation}). Thus the dispersion
relations enable us to compute the real part, given the imaginary
part. The reverse direction works similarly.
\section{Derivative Expansion in 3+1 Dimensional Electric Field}
Schwinger computed the effective action exactly for constant background
fields. For more realistic fields one must use a
perturbative expansion such as the derivative expansion. In the
derivative expansion the fields are assumed to vary very slowly. We
rewrite the trace in (\ref{schwingers general}) as a supersymmetric
quantum-mechanical path integral, expand the gauge field in a Taylor
series about the constant case, and interpret the successive
coefficients as successively increasing $n$-body interaction terms.
This has been done for 2+1 dimensional electric fields
\cite{hallthesis} and we may immediately generalize to 3+1 dimensions
by making the substitution $m^2 \rightarrow m^2+k_z^2$ and tracing
over the additional momentum \cite{dunne98}.
We obtain the zeroth and first orders of the derivative expansion for
a spatially homogeneous electric field in 3+1 dimensions
\begin{equation}
S= {i\over 2} \int d^4x \int_0^\infty {ds\over s}
{e^{-m^2 s}\over 4i (\pi s)^2} \left[ (Es \cot Es -1)
+(\partial_0 E)^2 \left({s^2\over 8 E^4} \right)
(Es \cot Es)''' \right].
\label{full E deriv exp}
\end{equation}
Regulating the $s$ integral as before with the principal parts
prescription, we easily separate the real and imaginary parts of the
zeroth order derivative expansion term
\begin{eqnarray}
{\rm Im}[{\rm{S}}_{eff}]_0 &=& \int d^4x {E^2 \over 8\pi^3}
\sum_{n=1}^\infty {1\over n^2} e^{-{m^2 \pi n \over E}}
\label{im zero order der exp} \\
{\rm Re}[{\rm{S}}_{eff}]_0 &=& \int d^4x {\cal P} \int_0^{ \infty}
{ds \over s} {e^{-m^2 s}\over 8 \pi^2 s^2} \left( Es \cot Es
-1 \right).
\label{re zero order der exp}
\end{eqnarray}
Performing an asymptotic expansion of the integral over $s$,
we obtain
\begin{equation}
{\rm Re}[{\rm{S}}_{eff}]_0= -\int d^4x {E^2 \over 2 \pi^2}
\sum_{n=1}^\infty { (-1)^n {\cal B}_{2n+2}
\over 2n(2n+1)(2n+2)} \left( 2 E \over m^2
\right)^{2n}
\label{re zero order der exp 2}
\end{equation}
where ${\cal B}_{\nu}$ is the $\nu^{th}$ Bernoulli number.
Equations (\ref{im zero order der exp}) and
(\ref{re zero order der exp 2}) are the same as the corresponding
equations for the constant field result (\ref{im part constant E-field})
and (\ref{re part constant E-field}), with the constant field $E$ replaced
by the time dependent field $E(t)$.
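The Bernoulli numbers ${\cal B}_{2n}$ appearing throughout these asymptotic expansions can be generated exactly from the standard recurrence $\sum_{j=0}^{n}\binom{n+1}{j}{\cal B}_j=0$; a small utility sketch (mine, not from the paper):

```python
from fractions import Fraction
from math import comb

def bernoulli(nmax):
    # B_0..B_nmax via sum_{j=0}^{n} C(n+1, j) B_j = 0 (B_1 = -1/2 convention)
    B = [Fraction(1)]
    for n in range(1, nmax + 1):
        s = sum(comb(n + 1, j) * B[j] for j in range(n))
        B.append(-s / (n + 1))
    return B

B = bernoulli(8)
print(B[2], B[4], B[6], B[8])  # 1/6 -1/30 1/42 -1/30
```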
Now consider the first derivative term in (\ref{full E deriv exp}).
Separating out the imaginary component is complicated by the fact that
the triple derivative introduces fourth order poles along the real
axis, while in the zeroth order term the poles are of first
order. The exact effective action, containing both imaginary and
real components, for the first order derivative term is
\begin{eqnarray}
[{\rm{S}}_{eff}]_1&=&{i \over 2} \int d^4x \int_0^\infty {ds \over s} {e^{-m^2 s}
\over 4i (\pi s)^2} (\partial_0 E)^2 {s^2 \over 8 E^4}
(s E \cot sE -1)''' \nonumber \\
&=& - {1 \over 64 \pi^2} \int d^4x { (\partial_0 E)^2 \over
E^4} \sum_{n=1}^\infty \int_0^\infty {ds \over s}
{e^{-m^2 s} \over E^4 \left(s-{n \pi \over E}\right)^4 }
{48 n^2 \pi^2 E^4 s (n^2 \pi^2 +s^2 E^2)
\over(Es+ \pi n)^4 }.
\end{eqnarray}
In this expression we clearly see the presence of the fourth order
poles along the real axis. Regulating using the principal parts
prescription, we obtain the imaginary part, which is just one-half
the sum of the residues
\begin{eqnarray}
{\rm Im}[{\rm{S}}_{eff}]_1 &=&
-{1\over 64 \pi^2} \int d^4x { (\partial_0 E)^2
\over E^4} \sum_{n=1}^\infty {\pi \over 3!} \left.\left( {e^{-m^2 s}
\over s} {48n^2 \pi^2 s(n^2 \pi^2+s^2E^2)\over (Es+\pi n)^4}
\right)''' \right|_{s \rightarrow {\pi n \over E}} \nonumber \\
&=& {1 \over 64 \pi} \int d^4x {(\partial_0 E)^2 \over E^4}
\sum_{n=1}^\infty { e^{-{n m^2 \pi \over E}} \over (\pi n)^3}
(6E^3+6E^2m^2n\pi+3Em^4n^2\pi^2+m^6n^3\pi^3).
\label{im first order der exp}
\end{eqnarray}
As before, asymptotically expanding the integral in powers of
${E\over m^2}$, we find
\begin{equation}
{\rm Re}[{\rm{S}}_{eff}]_1={m^6 \over 64 \pi^2}\int d^4x {(\partial_0 E)^2
\over E^4} \sum_{n=1}^\infty { (-1)^n {\cal B}_{2n+2}
\over 2n-1} \left( {2E \over m^2} \right)^{2n+2}.
\label{re first order der exp}
\end{equation}
Note that in the spirit of the derivative expansion approximation, $E$
means $E(t)$ in the expressions
(\ref{full E deriv exp}-\ref{re first order der exp}). In the next
section we will specialize to the ${\rm{sech}}^2$ electric field and compare
with the exact result (\ref{full action for tanh}) for the effective
action.
\section{Derivative Expansion in Exactly Solvable Case}
For the electric field
\begin{equation}
E_1(t)=E {\rm{sech}}^2 \left({t \over \tau}\right),
\label{another sech}
\end{equation}
the exact effective action is (\ref{full action for tanh}), with
explicit real and imaginary parts in
(\ref{real full expansion of tanh action}) and
(\ref{non expanded im part of tanh eff action}), respectively. In
order to compare with the derivative expansion results in
(\ref{im zero order der exp}), (\ref{re zero order der exp 2}),
(\ref{im first order der exp}) and (\ref{re first order der exp}), we
still need to perform the $t$ integrals in these expressions, with
$E(t)=E {\rm{sech}}^2 \left({t\over \tau}\right)$.
\subsection{Comparison of real part}
Insert the electric field (\ref{another sech}) into the real part of
the zero order derivative expansion effective action
(\ref{re zero order der exp}) and do the $t$ integral using the
formula (3.512.2) from Gradshteyn \cite{gradshteyn}
\begin{equation}
\int_0^\infty {\sinh^\mu x \over \cosh^\nu x}dx=
{ \Gamma({\mu+1\over2}) \Gamma({\nu -\mu \over2}) \over 2\Gamma({\nu +1
\over 2})}
\end{equation}
and we obtain
\begin{equation}
{\rm Re}[{\rm{S}}_{eff}]_0=-{\tau L^3 m^4 \over 8\pi^{3/2}} \sum_{n=1}^\infty
{\Gamma(2n-2) \Gamma(2n) \over \Gamma(2n+1) \Gamma(2n+{1\over2}) }
(-1)^n {\cal B}_{2n} \left( {2E \over m^2} \right)^{2n}.
\label{sech j=0}
\end{equation}
This is precisely the leading term, as an expansion in ${1\over
E\tau^2}$, of the exact effective action
(\ref{real full expansion of tanh action}). Similarly, for the real
part of the first order derivative term
(\ref{re first order der exp}) in the expansion of the
effective action, doing the $t$ integral yields
\begin{equation}
{\rm Re}[{\rm{S}}_{eff}]_1={L^3 m^2\over 8\pi^{3/2}\tau} \sum_{n=1}^\infty
{ \Gamma(2n+1) \Gamma(2n-1) \over \Gamma(2n+1) \Gamma(2n+{3\over2})}
(-1)^n {\cal B}_{2n+2} \left({2E\over m^2}\right)^{2n}.
\label{sech j=1}
\end{equation}
This is precisely the next-to-leading term in the expansion
(\ref{real full expansion of tanh action}) of the exact result.
This agreement is as expected: for the field (\ref{another sech}),
each order in the derivative expansion introduces an extra factor of
${1\over \tau^2}$. These results provide strong evidence that the
expansion (\ref{real full expansion of tanh action}) of the exact
result is an all-orders derivative expansion, as in the magnetic case
\cite{cangemi95b,dunne97}. However, as in the magnetic case, we note
that this is an asymptotic expansion.
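As a quick sanity check on the table integral (3.512.2) used above, one can verify it numerically for a few convergent parameter choices. The sketch below (illustrative Python, not part of the derivation; the function names are ours) substitutes $u=\tanh x$ to obtain a finite-range integral and compares it with the Gamma-function expression.

```python
from math import gamma

def lhs(mu, nu, steps=200_000):
    # After u = tanh(x), the integral over (0, infinity) of
    # sinh^mu(x)/cosh^nu(x) dx becomes the integral over (0, 1) of
    # u^mu * (1 - u^2)^((nu - mu)/2 - 1) du; evaluate it by the midpoint rule.
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += u**mu * (1.0 - u * u) ** ((nu - mu) / 2.0 - 1.0)
    return total * h

def rhs(mu, nu):
    # Gamma((mu+1)/2) Gamma((nu-mu)/2) / (2 Gamma((nu+1)/2))
    return gamma((mu + 1) / 2) * gamma((nu - mu) / 2) / (2 * gamma((nu + 1) / 2))

# spot checks with nu - mu >= 2, so the transformed integrand stays bounded
for mu, nu in [(0, 2), (1, 3), (2, 5)]:
    assert abs(lhs(mu, nu) - rhs(mu, nu)) < 1e-6
```

For instance, $(\mu,\nu)=(1,3)$ gives $\int_0^\infty \sinh x \cosh^{-3}x\, dx = 1/2$, matching $\Gamma(1)\Gamma(1)/2\Gamma(2)$.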
\subsection{Comparison of imaginary part}
For the imaginary piece we follow a different approach to make
the comparison. Inserting $E(t)=E\,{\rm{sech}}^2\left({t\over \tau}\right)$
into the zero-order and first-order expressions
(\ref{im zero order der exp}) and (\ref{im first order der exp})
leads to the probability integral, which cannot be computed
explicitly. Instead, we expand the imaginary part of the exact
effective action (\ref{non expanded im part of tanh eff action}) in
inverse powers of $\tau$, and transform the momentum integrals into a
form which can be compared directly with the derivative expansion
answers (\ref{im zero order der exp}) and
(\ref{im first order der exp}).
Recall the imaginary part
(\ref{non expanded im part of tanh eff action})
of the exact effective action
\begin{equation}
{\rm Im}({\rm{S}}_{eff})= {L^3 \over 4\pi^3} {1\over 2}\int d^3k
\sum_{n=1}^\infty {1\over n} \left( e^{-n\pi\Omega_+} +
e^{-n\pi\Omega_-} \right)
\end{equation}
where $\Omega_+$ and $\Omega_-$ are defined as
\begin{eqnarray}
\Omega_+=\tau(\alpha+\beta+2E\tau) \hspace{1cm}
\Omega_-=\tau(\alpha+\beta-2E\tau)
\end{eqnarray}
and $\alpha$ and $\beta$ are defined in (\ref{a and b}).
We can ignore the $\Omega_+$ term in the derivative expansion,
$\tau \rightarrow \infty$, since it
is suppressed by an exponential factor $e^{-4\pi n E\tau^2}$ relative to the
$\Omega_-$ piece. Make the transformation
\begin{equation}
2t=\tau\left(-2E\tau+\sqrt{\mu^2+(E\tau+k_x)^2}+
\sqrt{\mu^2+(E\tau-k_x)^2}\right)
\end{equation}
and solve for $k_x$
\begin{equation}
k_x={(t+\tau^2 E) \sqrt{t^2-\mu^2\tau^2+2t\tau^2E} \over t^{1/2} \tau
\sqrt{t+2\tau^2 E}}.
\end{equation}
The integral is now
\begin{equation}
{\rm Im}({\rm{S}}_{eff})={L^3 \over 2\pi^2} \sum_{n=1}^\infty \int dk_y dk_z
\int_{t_0}^\infty e^{-2\pi n t}\, k_x(t)\, dt
\end{equation}
where the lower limit on the integration is
$t_0=-\tau^2 E + \tau\sqrt{\mu^2+E^2 \tau^2}$. Make another
transformation to the coordinate $z$
\begin{equation}
z={1\over \mu\tau}\sqrt{t^2+2t\tau^2 E} \hspace{1cm}
{dt \over dz} ={\tau z \mu^2 \over \sqrt{\mu^2 z^2 +\tau^2 E^2}}
\end{equation}
and the integral becomes
\begin{equation}
{\rm Im}({\rm{S}}_{eff})= {L^3\tau\over 2\pi^2} \sum_{n=1}^\infty
\int dk_y dk_z \mu^2 \int_1^\infty dz \sqrt{z^2-1}
e^{-2\pi n\left(-\tau^2 E + \tau\sqrt{\mu^2 z^2 +\tau^2 E^2}
\right)}
\end{equation}
which can be expanded in inverse powers of $\tau$.
\begin{eqnarray}
{\rm Im}({\rm{S}}_{eff})&=&{L^3\tau\over 2\pi^2}\sum_{n=1}^\infty\int dk_y
dk_z \mu^2 \int_1^\infty dz \sqrt{z^2-1} e^{-{n\pi z^2 \mu^2 \over E}}
\left(1+{n\pi z^4 \mu^4 \over 4E^3\tau^2} +
\ldots \right)
\label{expansion of im part}
\end{eqnarray}
Complete the integral over $k_y$ and $k_z$ in the leading term
\begin{eqnarray}
{\rm Im}[{\rm{S}}_{eff}]_0&=&{L^3\tau E^2\over 4\pi^3}
\sum_{n=1}^\infty {1\over n^2} \int_1^\infty {dz\over z^4
\sqrt{z^2-1}}e^{-{n\pi m^2 \over E} z^2} \label{extract time dep} \\
&=&{L^3 \tau E^2 \over 8 \pi^3} \sum_{n=1}^\infty
{1\over n^2} e^{-{n\pi m^2 \over E}} \Psi({1\over2},-1,
{n\pi m^2 \over E}) \label{compare with gitman}
\end{eqnarray}
where $\Psi$ is the confluent hypergeometric function defined in
6.5(2) of Bateman \cite{bateman}. Gavrilov and Gitman \cite{gavrilov}
have found, by other methods, the zeroth order
term for this field configuration and obtain precisely
(\ref{compare with gitman}).
In order to compare with the zeroth order derivative expansion result
(\ref{im zero order der exp}) we substitute
$z=\cosh\left({t\over \tau}\right)$
in (\ref{extract time dep}) to obtain
\begin{equation}
{\rm Im}[{\rm{S}}_{eff}]_0= \int d^4x {E^2(t)\over 8\pi^3} \sum_{n=1}^\infty
{1\over n^2} e^{-{n\pi m^2 \over E(t)}}
\end{equation}
where $E(t)=E{\rm{sech}}^2\left({t\over \tau}\right)$. This is precisely
the imaginary part of the zeroth order term of the derivative
expansion (\ref{im zero order der exp}).
Similarly, perform the integrals over $k_y$ and $k_z$
in the next-to-leading order term in (\ref{expansion of im part})
\begin{eqnarray}
{\rm Im}[{\rm{S}}_{eff}]_1&=& {L^3\over 8\pi \tau E^2}
\sum_{n=1}^\infty {1\over \pi^3 n^3} \int_1^\infty dz
{\sqrt{z^2-1} \over z^4} e^{-{n\pi m^2 \over E}z^2}
\nonumber \\ && \hspace{1cm} \times
(6E^3+6 E^2m^2 n \pi z^2+3Em^4n^2 \pi^2z^4+m^6 n^3\pi^3z^6).
\end{eqnarray}
To compare with the first order derivative expansion result
(\ref{im first order der exp}), we make the same substitution
$z=\cosh \left({t\over\tau}\right)$ to obtain
\begin{eqnarray}
{\rm Im}[{\rm{S}}_{eff}]_1&=&{1\over64\pi}\int d^4x
{(\partial_0E)^2\over E^4} \sum_{n=1}^\infty
{1\over n^3\pi^3} e^{-{n\pi m^2 \over E}}
\nonumber \\ && \hspace{1cm} \times
(6E^3+6E^2 m^2 n\pi +3Em^4 n^2\pi^2 +m^6n^3\pi^3).
\end{eqnarray}
where $E$ now denotes $E(t)=E {\rm{sech}}^2 \left({t\over \tau}\right)$.
This is the same result we obtained for the first derivative term of
the imaginary part of the effective action
(\ref{im first order der exp}). As with the real part of the
effective action, successive terms in inverse powers of $\tau$ from
(\ref{expansion of im part}) correspond to increasing orders of the
derivative expansion.
\section{Exact Semi-Classical Action for More General Fields}
As discussed in Section III, the resolvent method is a useful
technique for evaluating the exact effective action when the Dirac
operator can be reduced to an effectively one-dimensional operator.
In this section we show how a generalized WKB expansion can then be
used to obtain an exact semi-classical
effective action for background electric fields with more general
time dependence than the $E(t)=E{\rm{sech}}^2\left({t\over \tau}\right)$
example considered in the previous two sections.
Assume the background gauge field has only one component in the
$x$ direction, $A_\mu=(0,a(t),0,0)$. According to (\ref{E resolvent}),
we seek the Green's functions
\begin{equation}
-\left( \hbar^2 \partial_0^2 +\mu^2 + \phi^2(t) \pm i
\hbar \phi^\prime(t) \right) \: {\cal{G}}^\pm_{k_\perp}(t,t^\prime)
= \delta(t-t^\prime)
\label{diff e q for general case}
\end{equation}
where $\mu^2=m^2 +k_y^2 + k_z^2$, and $\phi=a(t)-k_x$.
In the uniform semiclassical approximation \cite{balantekin88},
one begins by looking for solutions $\psi(t)=K(t)
U\left(S(t)\right)$.
The familiar WKB approximation of quantum mechanics consists of
the choice: $\psi(t)=K
e^{iS(t)}$. Instead, a uniform semiclassical approximation is
obtained by choosing $U$ to be a parabolic cylinder function.
Define $U$ to satisfy
\begin{equation}
- \hbar^2 {\partial^2 U \over \partial S^2} - (S^2 + i \eta \hbar)
U(S) = \Omega U(S)
\end{equation}
whose independent solutions are
\begin{equation}
D_\nu \left( \pm {1+i \over \sqrt{\hbar} } S(t) \right)
\end{equation}
where $\eta$ is the sign of $\phi^\prime$, and $\nu={1\over2} (\eta
-1-{i \over \hbar} \Omega)$. Now take
$K=(S^\prime)^{-1/2}$. Then the general differential equation
(\ref{diff e q for general case}) becomes a differential equation
relating $K$ and $S$.
\begin{equation}
\hbar^2 {1 \over K} {\partial^2K\over \partial t^2} - \left(
{\partial S \over \partial t} \right)^2 (\Omega + i \eta \hbar +
S^2) + (\mu^2 + \phi^2) \pm i \hbar \phi^\prime = 0.
\end{equation}
Expand $S(t) \approx S_0(t)+\hbar S_1(t)$ and
collect the zeroth order terms in $\hbar$.
\begin{equation}
\mu^2 + \phi^2(t)=(\Omega +S_0^2) \left( {\partial S_0 \over \partial
t} \right)^2
\label{zeroth order eq}
\end{equation}
The WKB expansion is a good approximation when the zeroth order term
dominates the first order term: $1 \gg | S_1/S_0 |$. At points $t^\prime$
where $S_0(t^\prime) \rightarrow 0$ the approximation fails
unless we require $S_1(t^\prime) \rightarrow 0$ as well. Then apply
L'H\^{o}pital's rule
\begin{equation}
1 \gg \left| { S_1 \over S_0 } \right| = \left| {S_1^\prime \over
S_0^\prime} \right| = \left| S_1^\prime \right| \left| \left(
{\Omega + S_0^2 \over \mu^2 + \phi^2 } \right)^{1/2} \right|
\end{equation}
and we see the generalized WKB will be an appropriate expansion if the
turning points of the numerator $S_0(t_0)=i \sqrt{\Omega}$ and
$S_0(t_0^*)= -i\sqrt{\Omega}$ are the same as those of the denominator
$\phi(t_0)=i \mu$ and $\phi(t_0^*)=-i\mu$. Using the turning points,
we can integrate (\ref{zeroth order eq}) and find the quantity
$\Omega$ \cite{balantekin88}:
\begin{equation}
\int_{t_0}^{t_0^*} dt \sqrt{\mu^2+\phi^2(t)} =
\int_{t_0}^{t_0^*} dt {dS_0\over dt} \sqrt{\Omega+S_0^2} =
\int_{i\sqrt{\Omega}}^{-i\sqrt{\Omega}} dS_0 \sqrt{\Omega+S_0^2}=
- {i \Omega \pi \over 2}.
\label{omeg}
\end{equation}
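The final equality above is elementary: along the imaginary axis, the substitution $S_0=iu$ turns the integrand into $\sqrt{\Omega-u^2}$ and the measure into $i\,du$, so the integral is $-i$ times the area of a semicircle of radius $\sqrt{\Omega}$, namely $-i\pi\Omega/2$. A short numerical check (illustrative Python; the function name is ours):

```python
from math import sqrt, pi

def s0_contour_integral(omega, steps=400_000):
    # integral of sqrt(Omega + S^2) dS from i*sqrt(Omega) down to -i*sqrt(Omega),
    # taken along the imaginary axis via S = i*u; this equals
    # -i * integral_{-sqrt(Om)}^{+sqrt(Om)} sqrt(Omega - u^2) du
    r = sqrt(omega)
    h = 2.0 * r / steps
    area = 0.0                       # midpoint rule for the semicircle area
    for i in range(steps):
        u = -r + (i + 0.5) * h
        area += sqrt(max(omega - u * u, 0.0)) * h
    return -1j * area

for om in (0.5, 1.0, 3.7):
    assert abs(s0_contour_integral(om) - (-1j * pi * om / 2)) < 1e-4
```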
Given the wavefunctions, we can express the Green's function as
\begin{equation}
{\cal{G}}_{k_\perp}^{\pm(\eta)}(t,t^\prime) = - {\Gamma(-\nu) \over 2
\sqrt{\pi}} e^{-{i
\pi \over 4}} {1 \over S^\prime} D_\nu \left( {1+i \over \sqrt{\hbar}}
S(t) \right) D_\nu \left( - {1+i \over \sqrt{\hbar}} S(t^\prime) \right)
\end{equation}
The resolvent approach then gives the effective action as
\begin{equation}
{\rm{S}}_{eff}= i { L^3 \over 4 \pi^3} \int k_y^2 d^3k {1\over 2} \sum_{\pm E}
2 \sum_{\pm(\eta)} \int_{-\infty}^{\infty} dx_0\; \mbox{}
{\cal{G}}_{k_\perp}^{\pm(\eta)}(t,t)
\end{equation}
where we explicitly summed signs of the electric field to satisfy
Furry's theorem.
Now make the semiclassical approximation by replacing $S(t)$ by
$S_0(t)$. The semiclassical Green's function is
\begin{equation}
{\cal{G}}_{k_\perp}^{\pm(\eta),sc} (t,t^\prime)=- { \Gamma(-\nu)
\over 4\sqrt{\pi} }
e^{- {i\pi \over4}} {1 \over k_\perp}
{\partial \Omega \over \partial k_\perp} S_0^\prime
D_\nu \left( {1+i\over \sqrt{\hbar}} S_0(t) \right)
D_\nu \left(-{1+i\over \sqrt{\hbar}} S_0(t^\prime) \right).
\end{equation}
where we have used the identity [see Eq.~(\ref{zeroth order eq})] that
$\frac{1}{S_0^\prime}=\frac{1}{2k_\perp} \frac{\partial \Omega}{\partial
k_\perp} S_0^\prime$.
The $t$ integral in the trace of the diagonal resolvent can be converted to an
integral over $S_0$, giving
\begin{eqnarray}
{\rm{S}}_{eff}^{sc}& = & - {L^3 \over (2\pi)^3} \int_{-\infty}^\infty k_y^2
d^3k \sum_{\pm E}
{1 \over 2 k_\perp} { \partial \Omega \over \partial
k_\perp} \left( \Psi({i\over2} \Omega)
+ \Psi( 1 + {i\over2} \Omega) \right) \nonumber \\
& = & {L^3 \over (2\pi)^3 } {1\over 2} \int d^3k \int_0^\infty
{ds \over s } \left( e^{-\Omega(E) s}+ e^{-\Omega(-E) s}\right)
(\cot s-{1\over s})
\label{main result}
\end{eqnarray}
This expression is the exact (but semiclassical) effective action
for an electric background field that is spatially uniform, but
has general time dependence $\phi^\prime (t)$. The function
$\Omega$ is given by (\ref{omeg}).
It is interesting to note how similar this general expression
is to Schwinger's exact expression (\ref{integral constant E})
for the constant background field case.
In the exactly solvable case studied in the previous two sections,
$\phi(t)=E\tau\; \tanh(\frac{t}{\tau})$. In this case the integral
(\ref{omeg}) for $\Omega$ can be done exactly and we arrive at the
exact expression (\ref{full action for tanh}) derived before with the
resolvent method. The fact that the uniform
semiclassical approximation is actually exact in this case is due
to the supersymmetry underlying the uniform semiclassical
approximation in this system. In general cases that are not exactly
solvable, the expression (\ref{main result}) still gives the
semiclassical answer. For example, a periodic background gauge field
\mbox{$A_\mu=(0,{E \over \omega_0} \cos(\omega_0 t),0,0)$} is not
an exactly solvable case. However, the expression (\ref{main result})
immediately gives the semiclassical result of Brezin and Itzykson
\cite{brezin} [see Eq.~(44) of their paper] for the imaginary
part of the effective action in an alternating electric field.
\section{Conclusions}
In conclusion, we have used the resolvent approach to compute the
exact QED effective action for the time dependent electric field
background $\vec{E}(t)=( E{\rm{sech}}^2 \left({t\over
\tau}\right),0,0)$. The result is a simple integral representation
involving a single integral, just as in Schwinger's proper-time result
for the constant electric field case. We then used this exact result
to investigate the dispersion relations relating the real and
imaginary parts of the effective action. This explains the connection
between the nonperturbative form of the imaginary part, and the
perturbative form of the real part. It is this perturbative real part
that should be compared with results for magnetic backgrounds. In
addition, we made an asymptotic expansion of the exact answer in
powers of ${1\over E\tau^2}$, and showed that the first two terms
agree with (independent) results from the derivative
expansion. Finally, we showed how the uniform semiclassical approach
of Balantekin {\it et al.} is incorporated into the resolvent
approach, yielding a simple semiclassical expression that encodes both
the real and imaginary parts of the effective action.
The challenge now is to use these results for the effective action to
obtain realistic estimates of pair-production rates in electric fields
with practically attainable strength and time dependence.
\vspace{1cm}
\noindent{\bf Acknowledgments}
This work has been supported by the Department of Energy grant
No.~DE-FG02-92ER40716.00, and the University of Connecticut Research
Foundation. We also thank Carl Bender and Alain Comtet for helpful
comments and suggestions.
\section{Setting the Stage}\label{introsec}\noindent
We've all been given a problem in a calculus class remarkably similar
to the following one:
\story{Farmer Ted is building a chicken coop. He decides he can spare
190 square feet of his land for the coop, which will be built in the
shape of a rectangle. Being a practical man, Farmer Ted wants to spend
as little as possible on the chicken wire for the fence. What dimensions
should he make the chicken coop?}
\noindent By solving a simple optimization problem, we learn that
Farmer Ted should make his chicken coop a square with side lengths
$\sqrt{190}$ feet. And that, according to the solution manual, is
that.
But the calculus books don't tell the rest of the story:
\story{So Farmer Ted went over to Builders Square and told the
salesman, ``I'd like $4\sqrt{190}$ feet of chicken wire, please.'' The
salesman, however, replied that he could sell one foot or two feet or
a hundred feet of chicken wire, but what the heck was $4\sqrt{190}$
feet of chicken wire? Farmer Ted was taken aback, explaining heatedly
that his family had been buying as little chicken wire as possible for
generations, and he really wanted $4\sqrt{190}$ feet of chicken wire
measured off for him immediately! But the salesman, fearing more
irrational behavior from Farmer Ted, told him, ``I don't want to hear
about your roots. We do business in a natural way here, and if you
don't like it you can leave the whole store.'' Well, Farmer Ted didn't
feel that this treatment was commensurate with his request, but he
left Builders Square to rethink his coop from square one.}
At first, Farmer Ted thought his best bet would be to make a
$10'\tms19'$ chicken coop, necessitating the purchase of 58 feet of
chicken wire---certainly this was better than 86 feet of chicken wire
for a $5'\tms38'$ coop, say. But then he realized that he could be
more cost-effective by not using all of the 190 square feet of land he
had reserved for the coop. For instance, he could construct an
$11'\tms17'$ coop (187 square feet) with only 56 feet of chicken
wire; this would give him about 3.34 square feet of coop space per
foot of chicken wire purchased, as opposed to only 3.28 square feet per
chicken-wire-foot for the $10'\tms19'$ coop. Naturally, the
parsimonious farmer wondered: could he do even better?
\section{Posing the Problem}\label{problemsec}\noindent
Jon Grantham posed the following problem at the 1998 SouthEast
Regional Meeting On Numbers in Greensboro, North Carolina: given a
positive integer $N$, find the dimensions of the rectangle with
integer side lengths and area at most $N$ whose area-to-perimeter
ratio is largest among all such rectangles. In the story above, Farmer
Ted is trying to solve this problem for $N=190$.
Let's introduce some notation so we can formulate Grantham's problem
more precisely. For a positive integer $n$, let $s(n)$ denote the
least possible \sp\ (length plus width) of a rectangle with integer
side lengths and area $n$. (Since the area-to-\sp\ ratio of a
rectangle is always twice the area-to-perimeter ratio, it doesn't
really change the problem if we consider \sp s\ instead of perimeters;
this will eliminate annoying factors of 2 in many of our formulas.) In
other (and fewer) words,
\begin{equation*}
s(n) = \min_{cd=n}(c+d) = \min_{d\mid n}(d+n/d),
\end{equation*}
where $d\mid n$ means that $d$ divides $n$.
Let $F(n)=n/s(n)$ denote the area-to-semi\-perimeter ratio in which we
are interested. We want to investigate the integers $n$ such that
$F(n)$ is large, and so we define the set ${\cal A}$ of ``record-breakers''
for the function $F$ as follows:
\begin{equation}
{\cal A} = \{n\in{\mathbb N}\colon F(k)\le F(n) \hbox{ for all }k\le n\}. \label{cdef}
\end{equation}
(Well, the ``record-tiers'' are also included in ${\cal A}$.) Then it is
clear after a moment's thought that to solve Grantham's problem for a
given number $N$, we simply need to find the largest element of ${\cal A}$
not exceeding $N$.
By computing all possible factorizations of the numbers up to 200 by
brute force, we can make a list of the first 59 elements of ${\cal A}$:
\story{${\cal A}=\{$1, 2, 3, 4, 6, 8, 9, 12, 15, 16, 18, 20, 24, 25, 28, 30,
35, 36, 40, 42, 48, 49, 54, 56, 60, 63, 64, 70, 72, 77, 80, 81, 88,
90, 96, 99, 100, 108, 110, 117, 120, 121, 130, 132, 140, 143, 144,
150, 154, 156, 165, 168, 169, 176, 180, 182, 192, 195, 196, \dots\}}
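This brute-force computation is easy to reproduce. The following sketch (a Python illustration; the function names are ours) computes $s(n)$ and $F(n)=n/s(n)$ straight from the definitions and collects the record-breakers, keeping the record-tiers as well.

```python
def s(n):
    # least semiperimeter c + d over all factorizations n = c * d
    best, d = None, 1
    while d * d <= n:
        if n % d == 0:
            best = d + n // d   # the divisor pair closest to sqrt(n) comes last
        d += 1
    return best

def almost_squares(limit):
    record, result = 0.0, []
    for n in range(1, limit + 1):
        f = n / s(n)            # F(n) = area / semiperimeter
        if f >= record:         # >= keeps the record-tiers too
            record = f
            result.append(n)
    return result

print(almost_squares(200)[:16])
# -> [1, 2, 3, 4, 6, 8, 9, 12, 15, 16, 18, 20, 24, 25, 28, 30]
```

Note that the `>=` comparison matters: $F(18)=18/9=2$ merely ties the record set by $F(16)$, yet 18 belongs to ${\cal A}$.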
\noindent If we write, in place of the elements $n\in{\cal A}$, the
dimensions of the rectangles with area $n$ and least \sp, we
obtain
\story{${\cal A}=\{$1\tms1, 1\tms2, 1\tms3, 2\tms2, 2\tms3, 2\tms4, 3\tms3,
3\tms4, 3\tms5, 4\tms4, 3\tms6, 4\tms5, 4\tms6, 5\tms5, 4\tms7,
5\tms6, 5\tms7, 6\tms6, 5\tms8, 6\tms7, 6\tms8, 7\tms7, 6\tms9,
7\tms8, 6\tms10, 7\tms9, 8\tms8, 7\tms10, 8\tms9, 7\tms11, 8\tms10,
9\tms9, 8\tms11, 9\tms10, 8\tms12, 9\tms11, 10\tms10, 9\tms12,
10\tms11, 9\tms13, 10\tms12, 11\tms11, 10\tms13, 11\tms12, 10\tms14,
11\tms13, 12\tms12, 10\tms15, 11\tms14, 12\tms13, 11\tms15, 12\tms14,
13\tms13, 11\tms16, 12\tms15, 13\tms14, 12\tms16, 13\tms15, 14\tms14,
\dots\},}
\noindent a list that exhibits a tantalizing promise of pattern! The
interested reader is invited to try to determine the precise pattern
of the set ${\cal A}$, before reading on to the next section, in which the
secret will be revealed. One thing we immediately notice, though, is
that the dimensions of each of these rectangles are almost (or
exactly) equal. For this reason, we will call the elements of ${\cal A}$
{\it \asq s}. This supports our intuition about what the answers to
Grantham's problem should be, since after all, Farmer Ted would build
his rectangles with precisely equal sides if he weren't hampered by
the integral policies of (the ironically-named) Builders Square.
From the list of the first 59 \asq s, we find that 182 is the largest
al\-most\discretionary{-}{}{-}square\ not exceeding 190. Therefore, Farmer Ted should build a chicken
coop with area 182 square feet; and indeed, a $13'\tms14'$ coop would
give him about 3.37 square feet of coop space per foot of chicken wire
purchased, which is more cost-effective than the options he thought of
back in Section~\ref{introsec}. But what about next time, when Farmer
Ted wants to build a supercoop on the 8,675,309 square feet of land he
has to spare, or even more? Eventually, computations will need to
give way to a better understanding of ${\cal A}$.
Our specific goals in this paper will be to answer the following
questions:
\begin{enumerate}
\item Can we describe ${\cal A}$ more explicitly? That is, can we
characterize when a number $n$ is an al\-most\discretionary{-}{}{-}square\ with a description
that refers only to $n$ itself, rather than all the numbers smaller
than $n$? Can we find a formula for the number of \asq s\ not
exceeding a given positive number $x$?
\item Can we quickly compute the largest al\-most\discretionary{-}{}{-}square\ not exceeding $N$, for
a given number $N$? We will describe more specifically what we mean by
``quickly'' in the next section, but for now we simply say that we'll
want to avoid both brute force searches and computations that involve
factoring integers.
\end{enumerate}
\noindent In the next section, we will find that these questions have
surprisingly elegant answers.
\section{Remarkable Results}\label{resultsec}\noindent
Have you uncovered the pattern of the \asq s? One detail you might have
noticed is that all numbers of the form $m\ifmmode\times\else$\times$\fi m$ and $(m-1)\ifmmode\times\else$\times$\fi m$,
and also $(m-1)\ifmmode\times\else$\times$\fi(m+1)$, seem to be \asq s. (If not, maybe we should
come up with a better name for the elements of~${\cal A}$!) This turns out
to be true, as we will see in Lemma \ref{intuitlem} below. The problem
is that there are other \asq s\ than these---3\tms6, 4\tms7, 5\tms8,
6\tms9, 6\tms10---and the ``exceptions'' seem to become more and more
numerous\dots. Even so, it will be convenient to think of the
particular \asq s\ of the form $m\ifmmode\times\else$\times$\fi m$ and $(m-1)\ifmmode\times\else$\times$\fi m$ as
``punctuation'' of a sort for~${\cal A}$. To this end, we will define a {\it
flock} to be the set of \asq s\ between $(m-1)^2+1$ and $m(m-1)$, or
between $m(m-1)+1$ and $m^2$, including the endpoints in both cases.
If we group the rectangles corresponding to the \asq s\ into flocks
in this way, indicating the end of each flock by a semicolon, we
obtain:
\story{\newcommand{\bc}{;\ \hskip2pt}${\cal A}=\{$1\tms1\bc 1\tms2\bc
1\tms3, 2\tms2\bc 2\tms3\bc 2\tms4, 3\tms3\bc 3\tms4\bc 3\tms5,
4\tms4\bc 3\tms6, 4\tms5\bc 4\tms6, 5\tms5\bc 4\tms7, 5\tms6\bc
5\tms7, 6\tms6\bc 5\tms8, 6\tms7\bc 6\tms8, 7\tms7\bc 6\tms9,
7\tms8\bc 6\tms10, 7\tms9, 8\tms8\bc 7\tms10, 8\tms9\bc 7\tms11,
8\tms10, 9\tms9\bc 8\tms11, 9\tms10\bc 8\tms12, 9\tms11, 10\tms10\bc
9\tms12, 10\tms11\bc 9\tms13, 10\tms12, 11\tms11\bc 10\tms13,
11\tms12\bc 10\tms14, 11\tms13, 12\tms12\bc 10\tms15, 11\tms14,
12\tms13\bc 11\tms15, 12\tms14, 13\tms13\bc 11\tms16, 12\tms15,
13\tms14\bc 12\tms16, 13\tms15, 14\tms14\bc \dots\}}
\label{seglist}
\noindent It seems that all of the rectangles in a given flock have
the same \sp; this also turns out to be true, as we will see in Lemma
\ref{seqlem} below. The remaining question, then, is to determine
which rectangles of the common \sp\ a given flock contains. At
first it seems that all rectangles of the ``right'' \sp\ will be in
the flock as long as their area exceeds that of the last rectangle
in the preceding flock, but then we note a few omissions---2\tms5,
3\tms7, 4\tms8, 5\tms9, 5\tms10---which also become more
numerous if we extend our computations of~${\cal A}$\dots.
But as it happens, this question can be resolved, and we can actually
determine exactly which numbers are \asq s, as our main theorem
indicates. Recall that $\floor x$ denotes the greatest integer not
exceeding~$x$.
\medskip
\noindent{\bf Main Theorem.} \it
For any integer $m\ge2$, the set of \asq s\ between $(m-1)^2+1$ and
$m^2$ (inclusive) consists of two flocks, the first of which is
\begin{equation*}
\{ (m+a_m)(m-a_m-1), (m+a_m-1)(m-a_m), \dots, (m+1)(m-2), m(m-1) \}
\end{equation*}
where $a_m = \floor{(\sqrt{2m-1}-1)/2}$, and the second of which is
\begin{equation*}
\{ (m+b_m)(m-b_m), (m+b_m-1)(m-b_m+1), \dots, (m+1)(m-1), m^2 \}
\end{equation*}
where $b_m = \floor{\sqrt{m/2}}$.\rm
\medskip
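Before wading into the proof, the skeptical reader can check the theorem by machine. The sketch below (illustrative Python; `flocks` and `brute_almost_squares` are our names) generates both flocks from the stated formulas and compares them against a brute-force record computation. For the floors we use integer square roots: $\floor{\sqrt{m/2}}$ equals `isqrt(m // 2)` because $2k^2\le m$ exactly when $k^2\le\floor{m/2}$.

```python
from math import isqrt

def flocks(m):
    # the two flocks of the Main Theorem, for m >= 2, in increasing order
    a_m = (isqrt(2 * m - 1) - 1) // 2     # floor((sqrt(2m-1) - 1)/2)
    b_m = isqrt(m // 2)                   # floor(sqrt(m/2)); see lead-in
    first = [(m + a) * (m - a - 1) for a in range(a_m, -1, -1)]
    second = [(m + b) * (m - b) for b in range(b_m, -1, -1)]
    return first, second

def brute_almost_squares(limit):
    # record-breakers (and -tiers) of F(n) = n/s(n), from the definition
    def s(n):
        return min(d + n // d for d in range(1, isqrt(n) + 1) if n % d == 0)
    record, out = 0.0, []
    for n in range(1, limit + 1):
        f = n / s(n)
        if f >= record:
            record = f
            out.append(n)
    return out

predicted = [1]                           # m = 1 contributes only n = 1
for m in range(2, 15):
    first, second = flocks(m)
    predicted += first + second
assert predicted == brute_almost_squares(14 * 14)   # agreement up to 196
```

For $m=14$ the two flocks come out as $\{176,180,182\}$ and $\{192,195,196\}$, matching the tail of the list in Section~\ref{problemsec}.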
The Main Theorem allows us to easily enumerate the \asq s\ in order,
but if we simply want an explicit characterization of \asq s\ without
regard to their order, there turns out to be one that is extremely
elegant. To describe it, we recall that the {\it triangular numbers\/}
$\{0,1,3,6,10,15,\dots\}$ are the numbers $t_n = {n\choose2} =
n(n-1)/2$ (Conway and Guy \cite{ConGuy} describe many interesting
properties of these and other ``figurate'' numbers). We let $T(x)$
denote the number of triangular numbers not exceeding $x$. (Notice
that in our notation, $t_1 = {1\choose2} = 0$ is the first triangular
number, so that $T(1)=2$, for instance.) Then an alternate
interpretation of the Main Theorem is the following:
\begin{corollary}
The \asq s\ are precisely those integers that can be written in the
form $k(k+h)$, for some integers $k\ge1$ and $0\le h\le T(k)$.
\label{tricor}
\end{corollary}
\noindent It is perhaps not so surprising that the triangular numbers
are connected to the \asq s---after all, adding $t_m$ to itself or to
$t_{m+1}$ yields \asq s\ of the form $m(m-1)$ or $m^2$, respectively
(Figure \ref{tplustfig} illustrates this for $m=6$). In any case, the
precision of this characterization is quite attractive and unexpected,
and it is conceivable that Corollary \ref{tricor} has a direct proof
that doesn't use the Main Theorem. We leave this as an open problem
for the reader.
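While a direct proof remains open, the characterization itself is easy to test by machine. The sketch below (illustrative Python; the function names are ours) builds the set $\{k(k+h)\colon k\ge1,\ 0\le h\le T(k)\}$ and checks it against the record-breaker definition.

```python
from math import isqrt

def T(x):
    # number of triangular numbers t_n = n(n-1)/2 not exceeding x (t_1 = 0 counts)
    count, n = 0, 1
    while n * (n - 1) // 2 <= x:
        count += 1
        n += 1
    return count

def via_corollary(limit):
    # all n = k(k + h) with k >= 1 and 0 <= h <= T(k)
    out, k = set(), 1
    while k * k <= limit:             # n = k(k+h) >= k^2, so larger k contribute nothing
        for h in range(T(k) + 1):
            if k * (k + h) <= limit:
                out.add(k * (k + h))
        k += 1
    return sorted(out)

def from_definition(limit):
    # record-breakers (and -tiers) of F(n) = n/s(n)
    def s(n):
        return min(d + n // d for d in range(1, isqrt(n) + 1) if n % d == 0)
    record, out = 0.0, []
    for n in range(1, limit + 1):
        if n / s(n) >= record:
            record = n / s(n)
            out.append(n)
    return out

assert via_corollary(196) == from_definition(196)
```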
\begin{figure}[htbp]
\smaller\smaller
\hskip.25in\psfig{figure=t65.eps}
\put{$m(m-1)=t_m+t_m$}{.08in}{1.4in}
\put{$m^2=t_m+t_{m+1}$}{.8in}{.5in}
\hskip1.9in\psfig{figure=t66.eps}
\caption{Two triangular integers invoke an al\-most\discretionary{-}{}{-}square}
\label{tplustfig}
\end{figure}
In a different direction, we can use the Main Theorem's precise
enumeration of the \asq s\ in each flock to count the number of \asq s\
quite accurately.
\begin{corollary}
Let $A(x)$ denote the number of \asq s\ not exceeding $x$. Then for
$x\ge1$,
\begin{equation*}
A(x) = \frac{2\sqrt2}3 x^{3/4} + \frac12x^{1/2} + R(x),
\end{equation*}
where $R(x)$ is an oscillating term whose order of magnitude is
$x^{1/4}$.
\label{asymcor}
\end{corollary}
\noindent A graph of $A(x)$ (see Figure \ref{axrxfig}) exhibits a
steady growth with a little bit of a wiggle. When we isolate $R(x)$ by
subtracting the main term $2\sqrt2x^{3/4}/3+x^{1/2}/2$ from $A(x)$,
the resulting graph (Figure \ref{axrxfig}, where we have plotted a
point every time $x$ passes an al\-most\discretionary{-}{}{-}square) is a pyrotechnic, almost
whimsical display that seems to suggest that our computer code needs
to be rechecked. Yet this is the true nature of $R(x)$. When we prove
Corollary \ref{asymcor} (in a more specific and precise form) in
Section \ref{cacsec}, we will see that there are two reasons that the
``remainder term'' $R(x)$ oscillates: there are oscillations on a
local scale because the \asq s\ flock towards the right half of each
interval of the form $((m-1)^2,m(m-1)]$ or $(m(m-1),m^2]$, and
oscillations on a larger scale for a less obvious reason.
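The counts behind Figure \ref{axrxfig} can be reproduced from the Main Theorem without factoring anything. The sketch below (illustrative Python; the function names are ours) counts flock members directly and isolates the remainder $R(x)$.

```python
from math import isqrt

def A(x):
    # number of almost-squares not exceeding x, counted flock by flock
    if x < 1:
        return 0
    total, m = 1, 2                          # n = 1 counted by hand
    while (m - 1) * (m - 1) < x:             # flocks of m live in ((m-1)^2, m^2]
        a_m = (isqrt(2 * m - 1) - 1) // 2
        b_m = isqrt(m // 2)
        members = ([(m + a) * (m - a - 1) for a in range(a_m + 1)] +
                   [(m + b) * (m - b) for b in range(b_m + 1)])
        total += sum(1 for n in members if n <= x)
        m += 1
    return total

def R(x):
    # remainder after removing the smooth main term of the corollary
    return A(x) - (2 * 2 ** 0.5 / 3) * x ** 0.75 - 0.5 * x ** 0.5

assert A(196) == 59       # matches the list of 59 almost-squares up to 196
```

Sampling `R` at many points reproduces the oscillations plotted in the figure.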
\begin{figure}[ht]
\smaller\smaller
\hskip.4cm
\psfig{figure=ax.eps}
\put{$x$}{-.2cm}{.4cm}
\put{$A(x)$}{-3.7cm}{3.9cm}
{\smaller\smaller
\dimen1=-15.25pt\dimen2=-2pt\dimen3=-41.5pt
\put{5000}{\dimen1}{\dimen2}
\advance\dimen1by\dimen3 \put{4000}{\dimen1}{\dimen2}
\advance\dimen1by\dimen3 \put{3000}{\dimen1}{\dimen2}
\advance\dimen1by\dimen3 \put{2000}{\dimen1}{\dimen2}
\advance\dimen1by\dimen3 \put{1000}{\dimen1}{\dimen2}
\dimen1=-228.7pt\dimen2=23.3pt\dimen3=20.9pt
\put{100}{\dimen1}{\dimen2}
\advance\dimen2by\dimen3 \put{200}{\dimen1}{\dimen2}
\advance\dimen2by\dimen3 \put{300}{\dimen1}{\dimen2}
\advance\dimen2by\dimen3 \put{400}{\dimen1}{\dimen2}
\advance\dimen2by\dimen3 \put{500}{\dimen1}{\dimen2}
\advance\dimen2by\dimen3 \put{600}{\dimen1}{\dimen2}
}
\hskip.3cm
\psfig{figure=rx.eps}
\put{$x$}{-.2cm}{.4cm}
\put{$R(x)$}{-6cm}{3.9cm}
{\smaller\smaller
\dimen1=-15.25pt\dimen2=-2pt\dimen3=-41.5pt
\put{5000}{\dimen1}{\dimen2}
\advance\dimen1by\dimen3 \put{4000}{\dimen1}{\dimen2}
\advance\dimen1by\dimen3 \put{3000}{\dimen1}{\dimen2}
\advance\dimen1by\dimen3 \put{2000}{\dimen1}{\dimen2}
\advance\dimen1by\dimen3 \put{1000}{\dimen1}{\dimen2}
\dimen1=-221pt\dimen2=36pt\dimen3=33pt
\put{2}{\dimen1}{\dimen2}
\advance\dimen2by\dimen3 \put{4}{\dimen1}{\dimen2}
\advance\dimen2by\dimen3 \put{6}{\dimen1}{\dimen2}
}
\caption{Superficial steadiness of $A(x)$, mesmerizing meanderings of
$R(x)$}
\label{axrxfig}
\end{figure}
These theoretical results about the structure of the \asq s\ address
question 1 nicely, and we turn our attention to the focus of question
2, the practicality of actually computing answers to questions about
\asq s. Even simple tasks like printing out a number or adding two
numbers together obviously take time for a computer to perform, and
they take longer for bigger numbers. To measure how the computing time
needed for a particular computation increases as the size of the input
grows, let $f(k)$ denote the amount of time it takes to perform the
calculation on a $k$-digit number. Of course, the time could depend
significantly on which $k$-digit number we choose; what we mean is the
worst-case scenario, so that the processing time is at most $f(k)$ no
matter which $k$-digit number we choose.
We say that a computation runs in {\it polynomial time\/} if this
function $f(k)$ grows only as fast as a polynomial in $k$, i.e., if
there are positive constants $A$ and $B$ such that
$f(k)<Ak^B$. Generally speaking, the computations that we consider
efficient to perform on very large inputs are those that run in
polynomial time. (Because we are only concerned with this category of
computations as a whole, it doesn't matter if we write our numbers in
base 10 or base 2 or any other base, since this only multiplies the
number of digits by a constant factor like $\log_210$.)
All of our familiar arithmetic operations $+$, $-$, $\ifmmode\times\else$\times$\fi$, $\div$,
$\sqrt\cdot$, $\floor\cdot$ and so on have polynomial-time
algorithms. On the other hand, performing a calculation on each of the
numbers from 1 to the input $n$, or even from 1 to $\sqrt n$, etc.,~is
definitely not polynomial-time. Thus computing \asq s\ by their
definition, which involves comparing $F(n)$ with all of the preceding
$F(k)$, is not efficient for large $n$. Furthermore, the obvious
method of factoring numbers---testing all possible divisors in
turn---is not polynomial-time for the same reason. While there are
faster ways to factor numbers, at this time there is still no known
polynomial-time algorithm for factoring numbers; so even factoring a
single number would make an algorithm inefficient. (Dewdney
\cite{dewdney} writes about many facets of algorithms, including this
property of running in polynomial time, while Pomerance \cite{pom}
gives a more detailed discussion of factoring algorithms and their
computational complexity.)
Fortunately, the Main Theorem provides a way to compute \asq s\ that
avoids both factorization and brute-force enumeration. In fact, we can
show that all sorts of computations involving \asq s\ are efficient:
\begin{corollary}
There are polynomial-time algorithms to perform each of the following
tasks, given a positive integer $N$:
\begin{enumerate}
\item[{\rm(a)}]determine whether $N$ is an al\-most\discretionary{-}{}{-}square, and if so determine
the dimensions of the optimal rectangle;
\item[{\rm(b)}]find the greatest al\-most\discretionary{-}{}{-}square\ not exceeding $N$, including
the dimensions of the optimal rectangle;
\item[{\rm(c)}]compute the number $A(N)$ of \asq s\ not exceeding $N$;
\item[{\rm(d)}]find the $N$th al\-most\discretionary{-}{}{-}square, including the dimensions of the
optimal rectangle.
\end{enumerate}
\label{polycor}
\end{corollary}
\noindent We reiterate that these algorithms work without ever
factoring a single integer. Corollary~\ref{polycor}, together with our
lack of a polynomial-time factoring algorithm, has a rather
interesting implication: for large values of $N$, it is much faster to
compute the most cost-effective chicken coop (in terms of
area-to-semiperimeter ratio) with area {\it at most\/} $N$ than it is
to compute the most cost-effective chicken coop with area {\it equal
to\/} $N$, a somewhat paradoxical state of affairs! Nobody ever said
farming was easy\dots.
\section{The Theorem Thought Through}\label{ththsec}\noindent
Before proving the Main Theorem, we need to build up a stockpile
of easy lemmas. The first of these simply confirms our expectations
that the most cost-effective rectangle of a given area is the one
whose side lengths are as close together as possible, and also
provides some inequalities for the functions $s(n)$ and $F(n)$. Let us
define $d(n)$ to be the largest divisor of $n$ not exceeding $\sqrt n$
and $d'(n)$ the smallest divisor of $n$ that is at least $\sqrt n$, so
that $d'(n)=n/d(n)$.
\begin{lemma}
The rectangle with integer side lengths and area $n$ that has the
smallest \sp\ is the one with dimensions $d(n)\ifmmode\times\else$\times$\fi
d'(n)$. In other words,
\begin{equation*}
s(n) = d(n) + d'(n).
\end{equation*}
We also have the inequalities
\begin{equation*}
s(n) \ge 2\sqrt n \quad\hbox{and}\quad F(n) \le \sqrt n/2.
\end{equation*}
\label{firstlem}
\end{lemma}
\begin{proof}
For a fixed positive number $n$, the function $f(t)=t+n/t$ has
derivative $f'(t)=1-n/t^2$ which is negative for $1\le t<\sqrt n$, and
therefore $t+n/t$ is a decreasing function of $t$ in that range. Thus
if we restrict our attention to those $t$ such that both $t$ and $n/t$
are positive integers (in other words, $t$ is an integer dividing
$n$), we see that the expression $t+n/t$ is minimized when
$t=d(n)$. We therefore have
\begin{equation*}
s(n)=d(n)+n/d(n)\ge2\sqrt n,
\end{equation*}
where the last inequality follows from the Arithmetic Mean/Geometric
Mean inequality; and the inequality for $F(n)$ then follows directly
from the definition of $F$.
\end{proof}
If we have a number $n$ written as $c\ifmmode\times\else$\times$\fi d$ where $c$ and $d$ are
pretty close to $\sqrt n$, when can we say that there isn't some
better factorization out there, so that $s(n)$ is really equal to
$c+d$? The following lemma gives us a useful criterion.
\begin{lemma}
If a number $n$ satisfying $(m-1)^2<n\le m(m-1)$ has the form
$n=(m-a-1)(m+a)$ for some number $a$, then $s(n)=2m-1$, and
$d(n)=m-a-1$ and $d'(n)=m+a$. Similarly, if a number $n$ satisfying
$m(m-1)<n\le m^2$ has the form $n=m^2-b^2$ for some number $b$, then
$s(n)=2m$, and $d(n)=m-b$ and $d'(n)=m+b$.
\label{nohidelem}
\end{lemma}
\begin{proof}
First let's recall that for any positive real numbers $\alpha$ and
$\beta$, the pair of equations $r+s=\alpha$ and $rs=\beta$ have a
unique solution $(r,s)$ with $r\le s$, as long as the Arithmetic
Mean/Geometric Mean inequality $\alpha/2\ge\sqrt\beta$ holds. This is
because $r$ and $s$ will be the roots of the quadratic polynomial
$t^2-\alpha t+\beta$, which has real roots when its discriminant
$\alpha^2-4\beta$ is nonnegative, i.e., when $\alpha/2\ge\sqrt\beta$.
Now if $n=(m-a-1)(m+a)$, then clearly $s(n)\le(m-1-a)+(m+a)=2m-1$ by
the definition of $s(n)$. On the other hand, by Lemma \ref{firstlem}
we know that $s(n)\ge2\sqrt n>2(m-1)$, and so $s(n)=2m-1$
exactly. We now know that
$d(n)d'(n)=n=(m-a-1)(m+a)$ and
\begin{equation*}
d(n)+d'(n) = s(n) = 2m-1 = (m-a-1)+(m+a),
\end{equation*}
and of course $d(n)\le d'(n)$ as well; by the argument of the previous
paragraph, we conclude that $d(n)=m-a-1$ and $d'(n)=m+a$. This
establishes the first assertion of the lemma, and a similar argument
holds for the second assertion.
\end{proof}
Of course, if a number $n$ satisfies $s(n)=2m$ for some $m$, then $n$
can be written as $n=cd$ with $c\le d$ and $c+d=2m$; and letting
$b=d-m$, we see that $n=cd=(2m-d)d=(m-b)(m+b)$. A similar statement is
true if $s(n)=2m-1$, and so we see that the converse of Lemma
\ref{nohidelem} also holds. We also remark that in the statement of
the lemma, the two expressions $m(m-1)$ can be replaced by
$(m-1/2)^2=m(m-1)+1/4$ if we wish.
Lemma \ref{nohidelem} implies in particular that for $m\ge2$,
\begin{equation*}
s(m^2)=2m, \quad s(m(m-1))=2m-1, \quad\hbox{and } s((m-1)(m+1))=2m,
\end{equation*}
and so
\begin{equation*}
F(m^2) = \frac m2, \quad F(m^2-m) = {m(m-1)\over2m-1}, \quad\hbox{and
} F(m^2-1) = \frac{m^2-1}{2m}.
\end{equation*}
Using these facts, we can verify our theory that these numbers are
always \asq s.
\begin{lemma}
Each positive integer of the form $m^2$, $m(m-1)$, or $m^2-1$ is an
al\-most\discretionary{-}{}{-}square.\label{intuitlem}
\end{lemma}
It is interesting to note that these are precisely those integers $n$
that are divisible by $\floor{\sqrt n}$ (see \cite{sqrtn}), one of
the many interesting things that can be discovered by referring to
Sloane and Plouffe~\cite{sloplo}.
\medskip
\begin{proof}
We verify directly that such numbers satisfy the condition in the
definition (\ref{cdef}) of~${\cal A}$. If $k<m^2$, then by Lemma
\ref{firstlem} we have $F(k)\le\sqrt{k}/2 < m/2=F(m^2)$, and so $m^2$
is an al\-most\discretionary{-}{}{-}square. Similarly, if $k<m(m-1)$, then again
\begin{equation*}
F(k) \le \frac{\sqrt{k}}2 \le \frac{\sqrt{m^2-m-1}}2 <
{m(m-1)\over2m-1} = F(m(m-1)),
\end{equation*}
where the strict inequality can be verified as a ``fun'' algebraic
exercise. Thus $m(m-1)$ is also an al\-most\discretionary{-}{}{-}square. A similar argument shows that
$m^2-1$ is also an al\-most\discretionary{-}{}{-}square.
\end{proof}
Now we're getting somewhere! Next we show that the \sp s\ of the
rectangles corresponding to the \asq s\ in a given flock are all
equal, as we observed at the beginning of Section \ref{resultsec}.
\begin{lemma}
Let $m\ge2$ be an integer. If $n$ is an al\-most\discretionary{-}{}{-}square\ satisfying
$(m-1)^2<n\le m(m-1)$, then $s(n)=2m-1$; similarly, if $n$ is an al\-most\discretionary{-}{}{-}square\
satisfying $m(m-1)<n\le m^2$, then $s(n)=2m$.
\label{seqlem}
\end{lemma}
\begin{proof}
If $n=m(m-1)$, we have already shown that $s(n)=2m-1$. If $n$
satisfies $(m-1)^2<n<m(m-1)$, then by Lemma \ref{firstlem} we have
$s(n)\ge2\sqrt n>2(m-1)$. On the other hand, since $n$ is an al\-most\discretionary{-}{}{-}square\
exceeding $(m-1)^2$, we have
\begin{equation*}
\frac{m-1}2 = F((m-1)^2) \le F(n) = \frac n{s(n)} < {m(m-1)\over s(n)},
\end{equation*}
and so $s(n)<2m$. Therefore $s(n)=2m-1$ in this case.
Similarly, if $n$ satisfies $m(m-1)<n<m^2$, then $s(n)\ge2\sqrt
n\ge2\sqrt{m^2-m+1} > 2m-1$; on the other hand,
\begin{equation*}
{m(m-1)\over2m-1} = F(m(m-1)) \le F(n) = \frac n{s(n)} \le {m^2-1\over
s(n)},
\end{equation*}
and so $s(n)<(m+1)(2m-1)/m<2m+1$. Therefore $s(n)=2m$ in this case.
\end{proof}
Finally, we need to exhibit some properties of the sequences
$a_m$ and $b_m$ defined in the statement of the Main Theorem.
\begin{lemma}
Define $a_m=\floor{(\sqrt{2m-1}-1)/2}$ and
$b_m=\floor{\sqrt{m/2}}$. For any integer $m\ge2$:
\begin{enumerate}
\item[{\rm(a)}]$a_m\le b_m\le a_m+1$;
\item[{\rm(b)}]$b_m=\floor{m/\sqrt{2m-1}}$; and
\item[{\rm(c)}]$a_m+b_m=\floor{\sqrt{2m}}-1$.
\end{enumerate}
\label{floorlem}
\end{lemma}
We omit the proof of this lemma since it is tedious but
straightforward. The idea is to show that in the sequences $a_m$,
$b_m$, $\floor{m/\sqrt{2m-1}}$, and so on, two consecutive terms are
either equal or else differ by 1, and then to determine precisely for
what values of $m$ the differences of 1 occur.
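Since the proof is omitted, a quick computational check may console the skeptical reader. This Python sketch (our own) verifies parts (a)--(c) for all $2\le m\le 10^4$, using exact integer square roots throughout so that no floating-point error can intrude:

```python
from math import isqrt

def a_m(m):
    # a_m = floor((sqrt(2m-1) - 1)/2), in exact integer arithmetic
    return (isqrt(2 * m - 1) - 1) // 2

def b_m(m):
    # b_m = floor(sqrt(m/2)); for integer m this equals isqrt(m // 2)
    return isqrt(m // 2)

def check_floor_lemma(limit):
    for m in range(2, limit + 1):
        # part (b): floor(m/sqrt(2m-1)) = floor(sqrt(m^2/(2m-1))),
        # and floor(sqrt(floor(x))) = floor(sqrt(x))
        ok_b = b_m(m) == isqrt(m * m // (2 * m - 1))
        ok_a = a_m(m) <= b_m(m) <= a_m(m) + 1          # part (a)
        ok_c = a_m(m) + b_m(m) == isqrt(2 * m) - 1     # part (c)
        if not (ok_a and ok_b and ok_c):
            return m        # first counterexample, if any
    return None             # no counterexample found
```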
Armed with these lemmas, we are now ready to furnish a proof of
the Main Theorem.\medskip
\begin{pflike}{Proof of the Main Theorem:}
Fix an integer $m\ge2$. By Lemma \ref{seqlem}, every al\-most\discretionary{-}{}{-}square\ $n$ with
$(m-1)^2<n\le m(m-1)$ satisfies $s(n)=2m-1$; while by Lemma
\ref{nohidelem}, the integers $(m-1)^2<n\le m(m-1)$ satisfying
$s(n)=2m-1$ are precisely the elements of the form $n_a=(m-a-1)(m+a)$
that lie in that interval. Thus it suffices to determine which of the
$n_a$ are \asq s.
Furthermore, suppose that $n_a$ is an al\-most\discretionary{-}{}{-}square\ for some $a\ge1$. Then
$F(n_a)\ge F(n)$ for all $n<n_a$ by the definition of ${\cal A}$, while
$F(n_a)>F(n)$ for all $n_a<n<n_{a-1}$ since we've already concluded
that no such $n$ can be an al\-most\discretionary{-}{}{-}square. Moreover, $n_{a-1}>n_a$ and
$s(n_{a-1})=2m-1=s(n_a)$, so $F(n_{a-1})>F(n_a)$, and thus $n_{a-1}$
is an al\-most\discretionary{-}{}{-}square\ as well. Therefore it suffices to find the largest value
of $a$ (corresponding to the smallest $n_a$) such that $n_a$ is an
al\-most\discretionary{-}{}{-}square.
By Lemma \ref{intuitlem}, we know that $(m-1)^2$ is an al\-most\discretionary{-}{}{-}square, and so we
need to find the largest $a$ such that $F(n_a)\ge F((m-1)^2)$, i.e.,
\begin{equation*}
{(m-a-1)(m+a)\over2m-1} \ge \frac{m-1}2,
\end{equation*}
which is the same as $2a(a+1)+1\le m$. By completing the square and
solving for $a$, we find that this inequality is equivalent to
\begin{equation}
\frac{-\sqrt{2m-1}-1}2 \le a \le \frac{\sqrt{2m-1}-1}2, \label{aineq}
\end{equation}
and so the largest integer $a$ satisfying the inequality is exactly
$a=\floor{(\sqrt{2m-1}-1)/2}=a_m$, as defined in the statement of
the Main Theorem. This establishes the first part of the theorem.
By the same reasoning, it suffices to find the largest value of $b$
such that $F(m^2-b^2)\ge F(m(m-1))$, i.e.,
\begin{equation*}
{m^2-b^2\over2m} \ge {m(m-1)\over2m-1},
\end{equation*}
which is the same as
\begin{equation}
b^2\le m^2/(2m-1) \label{bineq}
\end{equation}
or $b\le\floor{m/\sqrt{2m-1}}$. But by Lemma \ref{floorlem}(b),
$\floor{m/\sqrt{2m-1}}=b_m$ for $m\ge2$, and so the second part of the
theorem is established.
\end{pflike}
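In computational terms, the Main Theorem lists an entire flock outright, with no searching. A Python sketch (the names are ours):

```python
from math import isqrt

def flock(k):
    """Almost-squares with smallest semiperimeter k (k >= 2), increasing."""
    if k % 2:                                    # odd flock, k = 2m - 1
        m = (k + 1) // 2
        a_max = (isqrt(2 * m - 1) - 1) // 2      # a_m from the Main Theorem
        return [(m - a - 1) * (m + a) for a in range(a_max, -1, -1)]
    m = k // 2                                   # even flock, k = 2m
    b_max = isqrt(m // 2)                        # b_m from the Main Theorem
    return [m * m - b * b for b in range(b_max, -1, -1)]
```

For example, `flock(9)` returns `[18, 20]` and `flock(16)` returns `[60, 63, 64]`, and concatenating the flocks for $k=2,\dots,7$ reproduces the almost-squares $1, 2, 3, 4, 6, 8, 9, 12$.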
With the Main Theorem now proven, we remark that Lemma
\ref{floorlem}(c) implies that for any integer $m\ge2$, the number of
\asq s\ in the two flocks between $(m-1)^2+1$ and $m^2$ is exactly
$(1+a_m)+(1+b_m) = 1+\floor{\sqrt{2m}}$, while Lemma \ref{floorlem}(a)
implies that there are either equally many in the two flocks or else
one more in the second flock than in the first.
\section{Taking Notice of Triangular Numbers}\label{tntnsec}\noindent
Our next goal is to derive Corollary \ref{tricor} from the Main
Theorem. First we establish a quick lemma giving a closed-form
expression for $T(x)$, the number of triangular numbers not
exceeding~$x$.
\begin{lemma}
For all $x\ge0$, we have $T(x)=\floor{\sqrt{2x+1/4}+1/2}$. \label{Txlem}
\end{lemma}
\begin{proof}
$T(x)$ is the number of positive integers $n$ such that $t_n\le x$, or
$n(n-1)/2 \le x$. This inequality is equivalent to $(n-1/2)^2 \le
2x+1/4$, or $-\sqrt{2x+1/4}+1/2 \le n \le \sqrt{2x+1/4}+1/2$. The
left-hand expression never exceeds $1/2$, and so $T(x)$ is simply the
number of positive integers $n$ such that $n \le \sqrt{2x+1/4}+1/2$;
in other words, $T(x) = \floor{\sqrt{2x+1/4}+1/2}$ as desired.
\end{proof}
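For integer $x$ the closed form can be evaluated exactly: multiplying through by $2$ gives $T(x)=\floor{(1+\sqrt{8x+1})/2}$, which a Python one-liner (ours) computes with no rounding error:

```python
from math import isqrt

def T(x):
    # number of triangular numbers t_n = n(n-1)/2, n >= 1, not exceeding x;
    # equals floor(sqrt(2x + 1/4) + 1/2) = floor((1 + sqrt(8x+1))/2)
    return (1 + isqrt(8 * x + 1)) // 2
```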
\medskip
\begin{pflike}{Proof of Corollary \ref{tricor}:}
Suppose first that $n=k(k+h)$ for some integers $k\ge1$ and $0\le
h\le T(k)$. Let $k'=k+h$, and define
\begin{equation*}
\oecases{m=k+(h+1)/2 \and a=(h-1)/2}{m=k+h/2 \and b=h/2},
\end{equation*}
so that
\begin{equation*}
\oecases{k=m-a-1 \and k'=m+a}{k=m-b \and k'=m+b}.
\end{equation*}
We claim that
\begin{equation}
\oecases{(m-1)^2 < (m-a-1)(m+a) \le (m-1/2)^2}{(m-1/2)^2 < m^2-b^2 \le
m^2}. \label{segclaim}
\end{equation}
To see this, note that in terms of $k$ and $h$, these inequalities
become
\begin{equation*}
\big( k+\frac{h-1}2 \big)^2 < k(k+h) \le \big( k+\frac h2 \big)^2.
\end{equation*}
A little bit of algebra reveals that the right-hand inequality is
trivially satisfied while the left-hand inequality is true provided
that $h<2\sqrt k+1$. However, from Lemma \ref{Txlem} we see that
\begin{equation*}
T(k) = \floor{\sqrt{2k+1/4}+1/2} \le \sqrt{2k+1/4}+1/2 < 2\sqrt k+1
\end{equation*}
for $k\ge1$. Since we are assuming that $h\le T(k)$, this shows that
the inequalities (\ref{segclaim}) do indeed hold.
Because of these inequalities, we may apply Lemma \ref{nohidelem} (see
the remarks following the proof of the lemma) and conclude that
\begin{equation*}
\oecases{s(n)=2m-1,\, d(n)=m-a-1, \and d'(n)=m+a}{s(n)=2m,\, d(n)=m-b,
\and d'(n)=m+b}.
\end{equation*}
Consequently, the Main Theorem asserts that $n$ is an al\-most\discretionary{-}{}{-}square\ if and
only if
\begin{equation}
\oecases{a\le a_m}{b\le b_m}, \label{abbd}
\end{equation}which by the definitions of $a$, $b$, and $m$ is the same as
\begin{equation*}
\oecases{(h-1)/2 \le \floor{(\sqrt{2m-1}-1)/2} =
\floor{(\sqrt{2k+h}-1)/2}}{h/2 \le \floor{\sqrt{m/2}} =
\floor{\sqrt{k/2+h/4}}}.
\end{equation*}
Since in either case, the left-hand side is an integer, the
greatest-integer brackets can be removed from the right-hand side,
whence both cases reduce to $h\le\sqrt{2k+h}$. From here, more algebra
reveals that this inequality is equivalent to $h\le\sqrt{2k+1/4}+1/2$;
and since $h$ is an integer, we can add greatest-integer brackets to
the right-hand side, thus showing that the inequality (\ref{abbd}) is
equivalent to $h\le T(k)$ (again using Lemma \ref{Txlem}). In
particular, $n$ is indeed an al\-most\discretionary{-}{}{-}square.
This establishes one half of the characterization asserted by
Corollary \ref{tricor}. Conversely, suppose we are given an al\-most\discretionary{-}{}{-}square\ $n$,
which we can suppose to be greater than 1 since 1 can obviously be
written as $1(1+0)$. If we let $h=d'(n)-d(n)$, then the Main Theorem
tells us that
\begin{equation*}
\oecases{n=(m-a-1)(m+a),\, d(n)=m-a-1, \and d'(n)=m+a}{n=m^2-b^2,\,
d(n)=m-b, \and d'(n)=m+b}
\end{equation*}
for some integers $m\ge2$ and either $a$ with $0\le a\le a_m$ or $b$
with $0\le b\le b_m$. If we set $k=d(n)$, then certainly
$n=k(k+h)$. Moreover, the algebraic steps showing that the inequality
(\ref{abbd}) is equivalent to $h\le T(k)$ are all reversible; and
(\ref{abbd}) does in fact hold, since we are assuming that $n$ is an
al\-most\discretionary{-}{}{-}square. Therefore $n$ does indeed have a representation of the form
$k(k+h)$ with $0\le h\le T(k)$. This establishes the corollary.
\end{pflike}
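Corollary \ref{tricor} also makes a convenient computational test. The Python sketch below (our code; it still locates $d(n)$ by trial division, so it is for illustration only) checks whether $h=d'(n)-d(n)$ is at most $T(d(n))$; by the converse half of the corollary, testing this single divisor suffices, since $h$ is smallest and $T(k)$ largest at $k=d(n)$:

```python
from math import isqrt

def T(k):
    # triangular numbers t_n = n(n-1)/2 not exceeding k (closed form above)
    return (1 + isqrt(8 * k + 1)) // 2

def is_almost_square(n):
    # n = k(k+h) with 0 <= h <= T(k); it suffices to test k = d(n),
    # the largest divisor of n not exceeding sqrt(n)
    for k in range(isqrt(n), 0, -1):
        if n % k == 0:
            return n // k - k <= T(k)
```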
We take a slight detour at this point to single out some special
\asq s. Let us make the convention that the $k$th flock refers to the
flock of \asq s\ with \sp\ $k$, so that the first flock is actually
empty, the second and third poor flocks contain only $1=1\tms1$ and
$2=1\tms2$, respectively, the fourth flock contains $3=1\tms3$ and
$4=2\tms2$, and so on. The Main Theorem tells us that $a_m$ and $b_m$
control the number of \asq s\ in the odd-numbered and even-numbered
flocks, respectively; thus every so often, a flock will have one more
al\-most\discretionary{-}{}{-}square\ than the preceding flock of the same ``parity''. We'll let a
{\it pioneer\/} be an al\-most\discretionary{-}{}{-}square\ that begins one of these suddenly-longer
flocks.
For instance, from the division of ${\cal A}$ into flocks on page
\pageref{seglist}, we see that the 4th flock \{1\tms3, 2\tms2\} is
longer than the preceding even-numbered flock \{1\tms1\}, so
$1\tms3=3$ is the first pioneer; the 9th flock \{3\tms6, 4\tms5\} is
longer than the preceding odd-numbered flock \{3\tms4\}, so
$3\tms6=18$ is the second pioneer; and so on, the next two pioneers
being $6\tms10$ in the 16th flock and $10\tms15$ in the 25th
flock. Now if this isn't a pattern waiting for a proof, nothing is!
The following corollary shows another elegant connection between the
\asq s\ and the squares and triangular numbers.
\begin{corollary}
For any positive integer $j$, the $j$th pioneer equals $t_{j+1}\ifmmode\times\else$\times$\fi
t_{j+2}$ (where $t_i$ is the $i$th triangular number), which begins
the $(j+1)^2$-th flock. Furthermore, the ``record-tying'' \asq s\
(those whose $F$-values are equal to the $F$-values of their immediate
predecessors in ${\cal A}$) are precisely the even-numbered pioneers.
\label{piocor}
\end{corollary}
\begin{proof}
First, Lemma \ref{floorlem}(a) tells us that the odd- and
even-numbered flocks undergo their length increases in alternation, so
that the pioneers alternately appear in the flocks of each parity. The
first pioneer $3=1\tms3$ appears in the 4th flock, and corresponds to
$m=2$ and the first appearance of $b_m=1$ in the notation of the Main
Theorem. Thus the $(2k-1)$-st pioneer will equal $m^2-k^2$, where $m$
corresponds to the first appearance of $b_m=k$. It is easy to see that
the first appearance of $b_m=k$ occurs when $m=2k^2$, in which case
the $(2k-1)$-st pioneer is
\begin{equation*}
m^2-k^2 = (2k^2)^2-k^2 = (2k^2-k)(2k^2+k) = \frac{2k(2k-1)}2
\frac{(2k+1)2k}2 = t_{2k}t_{2k+1}.
\end{equation*}
Moreover, the flock in which this pioneer appears is the $2m$-th or
$(2k)^2$-th flock.
Similarly, the $2k$-th pioneer will equal $(m-k-1)(m+k)$, where $m$
corresponds to the first appearance of $a_m=k$. Again one can show
that the first appearance of $a_m=k$ occurs when $m=2k^2+2k+1$, in
which case the $2k$-th pioneer is
\begin{equation*}
(m-k-1)(m+k) = (2k^2+k)(2k^2+3k+1) = \frac{(2k+1)2k}2
\frac{(2k+2)(2k+1)}2 = t_{2k+1}t_{2k+2}.
\end{equation*}
Moreover, the flock in which this pioneer appears is the $(2m-1)$-st
or $(2k+1)^2$-th flock. This establishes the first assertion of the
corollary.
Since the $F$-values of the \asq s\ form a nondecreasing sequence by
the definition of al\-most\discretionary{-}{}{-}square, to look for \asq s\ with equal $F$-values we
only need to examine consecutive \asq s. Furthermore, two consecutive
\asq s\ in the same flock never have equal $F$-values, since they are
distinct numbers but by Lemma \ref{seqlem} their \sp s\ are the
same. Therefore we only need to determine when the last al\-most\discretionary{-}{}{-}square\ in a
flock can have the same $F$-value as the first al\-most\discretionary{-}{}{-}square\ in the
following flock.
The relationship between the $F$-values of these pairs of \asq s\ was
determined in the proof of the Main Theorem. Specifically, the
equality $F((m-1)^2)=F((m-a-1)(m+a))$ holds if and only if the
right-hand inequality in (\ref{aineq}) is actually an equality; this
happens precisely when $m=2a^2+2a+1$, which corresponds to the
even-numbered pioneers as was determined above. On the other hand, the
equality $F(m(m-1))=F(m^2-b^2)$ holds if and only if the inequality
(\ref{bineq}) is actually an equality; but $m^2$ and $2m-1$ are always
relatively prime (any prime factor of $m^2$ must divide $m$ and thus
divides into $2m-1$ with a ``remainder'' of $-1$), implying that
$m^2/(2m-1)$ is never an integer for $m\ge2$, and so the inequality
(\ref{bineq}) can never be an equality. This establishes the second
assertion of the corollary.
\end{proof}
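In code, the corollary produces the pioneers immediately (Python, our names; recall that in this paper $t_i=i(i-1)/2$):

```python
def t(i):
    # i-th triangular number in this paper's convention: t_i = i(i-1)/2
    return i * (i - 1) // 2

def pioneer(j):
    # the j-th pioneer is t_{j+1} * t_{j+2}, and it opens the (j+1)^2-th flock
    return t(j + 1) * t(j + 2)
```

The first four values are $3$, $18$, $60$, and $150=10\tms15$, matching the flocks examined above.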
We know that all squares are \asq s, and so $t_j^2$ is certainly an
al\-most\discretionary{-}{}{-}square\ for any triangular number $t_j$; also, Corollary \ref{piocor}
tells us that the product $t_jt_{j+1}$ of two consecutive triangular
numbers is always an al\-most\discretionary{-}{}{-}square. This led the author to wonder which numbers
of the form $t_mt_n$ are \asq s. If $m$ and $n$ differ by more than 1,
it would seem that the rectangle of dimensions $t_m\ifmmode\times\else$\times$\fi t_n$ is not
the most cost-effective rectangle of area $t_mt_n$, and so the author
expected that these products of two triangular numbers would behave
randomly with respect to being \asq s---that is, a few of them might be
but most of them wouldn't. After some computations, however, Figure
\ref{tmxtnfig} emerged, where a point has been plotted in the $(m,n)$
position if and only if $t_mt_n$ is an al\-most\discretionary{-}{}{-}square; and the table exhibited a
totally unexpected regularity.
\begin{figure}[htbp]
\smaller\smaller
\centerline{
\psfig{figure=tmxtn.eps}
\put{$m$}{-1.58in}{3.05in}
\put{$n$}{-3.2in}{1.48in}
{
\smaller\smaller
\dimen1=-2.255in\dimen2=2.96in\dimen3=.825in
\put{20}{\dimen1}{\dimen2}
\advance\dimen1by\dimen3 \put{40}{\dimen1}{\dimen2}
\advance\dimen1by\dimen3 \put{60}{\dimen1}{\dimen2}
\dimen1=-3.15in\dimen2=2.083in\dimen3=-.825in
\put{20}{\dimen1}{\dimen2}
\advance\dimen2by\dimen3 \put{40}{\dimen1}{\dimen2}
\advance\dimen2by\dimen3 \put{60}{\dimen1}{\dimen2}
}}
\caption{Amazing al\-most\discretionary{-}{}{-}square\ patterns in products of two triangles}
\label{tmxtnfig}
\end{figure}
Of course the symmetry of the table across the main diagonal is to be
expected since $t_mt_n=t_nt_m$. The main diagonal and the first
off-diagonals are filled with plotted points, corresponding to the
\asq s\ $t_m^2$ and $t_mt_{m+1}$; and in hindsight, the second
off-diagonals correspond to
\begin{equation*}
t_mt_{m+2} = \frac{m(m-1)}2\frac{(m+2)(m+1)}2 =
\frac{m^2+m-2}2 \, \frac{m^2+m}2,
\end{equation*}
which is the product of two consecutive integers (since $m^2+m$ is
always even) and is thus an al\-most\discretionary{-}{}{-}square\ as well. But apart from these
central diagonals and some garbage along the edges of the table where
$m$ and $n$ are quite different in size, the checkerboard-like pattern
in the kite-shaped region of the table seems to be telling us
that the only thing that matters in determining whether $t_mt_n$ is an
al\-most\discretionary{-}{}{-}square\ is whether $m$ and $n$ have the same parity!
Once this phenomenon had been discovered, it turned out that the
following corollary could be derived from the prior results in this
paper. We leave the proof of this corollary as a challenge to the
reader.
\begin{corollary}
Let $m$ and $n$ be positive integers with $n-1>m>3n-\sqrt{8n(n-1)}-1$.
Then $t_mt_n$ is an al\-most\discretionary{-}{}{-}square\ if and only if $n-m$ is even.
\end{corollary}
We remark that the function $3n-\sqrt{8n(n-1)}-1$ is asymptotic to
$(3-2\sqrt2)n+(\sqrt2-1)$, which explains the straight lines of slope
$-(3-2\sqrt2)\approx{-0.17}$ and $-1/(3-2\sqrt2)\approx{-5.83}$ that
seem to the eye to separate the orderly central region in Figure
\ref{tmxtnfig} from the garbage along the edges.
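A reader wanting empirical reassurance before attempting the challenge can replicate the computation behind Figure \ref{tmxtnfig} in a few lines of Python (our code, reusing the divisor test from Corollary \ref{tricor}; evaluating the boundary in floating point is harmless at this scale):

```python
from math import isqrt, sqrt

def T(k):
    return (1 + isqrt(8 * k + 1)) // 2

def is_almost_square(n):
    # n = k(k+h) with 0 <= h <= T(k); testing k = d(n) suffices
    for k in range(isqrt(n), 0, -1):
        if n % k == 0:
            return n // k - k <= T(k)

def t(i):
    # this paper's convention: t_i = i(i-1)/2
    return i * (i - 1) // 2

def parity_corollary_holds(n_max):
    # in the region n-1 > m > 3n - sqrt(8n(n-1)) - 1, the product t_m t_n
    # should be an almost-square exactly when n - m is even
    for n in range(3, n_max + 1):
        lower = 3 * n - sqrt(8 * n * (n - 1)) - 1
        for m in range(1, n - 1):                     # ensures n - 1 > m
            if m > lower and is_almost_square(t(m) * t(n)) != ((n - m) % 2 == 0):
                return False
    return True
```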
\section{Counting and Computing}\label{cacsec}\noindent
In this section we establish Corollaries \ref{asymcor} and
\ref{polycor}. We begin by defining a function $B(x)$ that will serve
as the backbone of our investigation of the al\-most\discretionary{-}{}{-}square\ counting function
$A(x)$. Let $\{x\}=x-\floor x$ denote the fractional part of $x$, and
define the quantities $\gamma=\gamma(x) = \{\sqrt2x^{1/4}\}$ and
$\delta=\delta(x) = \{x^{1/4}/\sqrt2\}$. Let $B(x)=B_0(x)+B_1(x)$,
where
\begin{equation}
B_0(x) = \frac{2\sqrt2}3x^{3/4} + \frac12x^{1/2} + \big(
\frac{2\sqrt2}3 + \frac{\gamma(1-\gamma)}{\sqrt2} \big) x^{1/4}
\label{b0def}
\end{equation}
and
\begin{equation*}
B_1(x) = \frac{\gamma^3}6 - \frac{\gamma^2}4 - \frac{5\gamma}{12}
- \frac\delta2 - 1.
\end{equation*}
We remark that $\gamma=\{2\delta\}$ and that $B_1(x^4)$ is a periodic
function of $x$ with period $\sqrt2$, and so it is easy to check that
the inequalities $-2\le B_1(x)\le-1$ always hold. The following lemma
shows how the strange function $B(x)$ arises in connection with the
\asq s.
\begin{lemma}
For any integer $M\ge1$, we have $A(M^2)=B(M^2)$. \label{am2bm2}
\end{lemma}
\begin{proof}
As remarked at the end of Section~\ref{ththsec}, the number of
\asq s\ between $(m-1)^2+1$ and $m^2$ is $\floor{\sqrt{2m}}+1$ for
$m\ge2$. Therefore
\begin{equation*}
A(M^2) = 1 + \sum_{m=2}^M (\floor{\sqrt{2m}}+1) = M-1 + \sum_{m=1}^M
\floor{\sqrt{2m}}.
\end{equation*}
It's almost always a good idea to interchange orders of summation
whenever possible---and if there aren't enough summation signs, find a
way to create some more! In this case, we convert the greatest-integer
function into a sum of 1s over the appropriate range of integers:
\begin{equation*}
\begin{split}
A(M^2) &= M-1 + \sum_{m=1}^M \sum_{1\le k\le\sqrt{2m}} 1 \\
&= M-1 + \sum_{1\le k\le\sqrt{2M}} \sum_{k^2/2\le m\le M} 1 \\
&= M-1 + \sum \begin{Sb}1\le k\le\sqrt{2M} \\ k\text{ odd}\end{Sb}
\big( M-\frac{k^2-1}2 \big) + \sum \begin{Sb}1\le k\le\sqrt{2M} \\
k\text{ even}\end{Sb} \big( M-\big( \frac{k^2}2-1 \big) \big).
\end{split}
\end{equation*}
If we temporarily write $\mu$ for $\floor{\sqrt{2M}}$, then
\begin{equation}
\begin{split}
A(M^2) &= M-1 + \mu\big( M+\frac12 \big) - \frac12 \sum_{k=1}^\mu k^2
+ \sum \begin{Sb}k=1 \\ k\text{ even}\end{Sb}^\mu \frac12 \\
&= M(\mu+1) + \frac\mu2-1 - \frac12\frac{\mu(\mu+1)(2\mu+1)}6 +
\frac12\bfloor{\frac\mu2},
\end{split}
\label{muform}
\end{equation}
using the well-known formula for the sum of the first $\mu$ squares.
Since $\floor{\floor x/n} = \floor{x/n}$ for any real number $x\ge0$
and any positive integer $n$, the last term can be written as
\begin{equation*}
\frac12\bfloor{\frac\mu2} = \frac12\bbfloor{\frac{\floor{\sqrt{2M}}}2}
= \frac12\bbfloor{\sqrt{\frac M2}} = \frac12\sqrt{\frac M2} -
\frac12\bigg\{\sqrt{\frac M2}\bigg\} = \frac12\sqrt{\frac M2} -
\frac{\delta(M^2)}2,
\end{equation*}
while we can replace the other occurrences of $\mu$ in equation
(\ref{muform}) by $\sqrt{2M}-\{\sqrt{2M}\} =
\sqrt{2M}-\gamma(M^2)$. Writing $\gamma$ for $\gamma(M^2)$ and
$\delta$ for $\delta(M^2)$, we see that
\begin{equation*}
\begin{split}
A(M^2) &= M(\sqrt{2M}-\gamma+1) + \frac{\sqrt{2M}-\gamma}2 - 1 \\
&\qquad {}- \frac12
\frac{(\sqrt{2M}-\gamma)(\sqrt{2M}-\gamma+1)(2(\sqrt{2M}-\gamma)+1)}6
+ \frac12\sqrt{\frac M2} - \frac\delta2 \\
&= \frac{2\sqrt2}3M^{3/2} + \frac M2 + \big( \frac{2\sqrt2}3 +
\frac{\gamma(1-\gamma)}{\sqrt2} \big) \sqrt M + \frac{\gamma^3}6 -
\frac{\gamma^2}4 - \frac{5\gamma}{12} - 1 - \frac\delta2 = B(M^2)
\end{split}
\end{equation*}
after much algebraic simplification. This establishes the lemma.
\end{proof}
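As a check on the lemma (and on all the signs above), the following Python sketch (ours) evaluates $B(x)$ in floating point and $A(M^2)$ exactly via equation (\ref{muform}); away from the delicate points where $\sqrt{2M}$ is an integer, $B(M^2)$ rounds to exactly $A(M^2)$:

```python
from math import isqrt
from fractions import Fraction

def B(x):
    # B(x) = B_0(x) + B_1(x), with gamma = {sqrt(2) x^(1/4)}, delta = {x^(1/4)/sqrt(2)}
    q = x ** 0.25
    g = (2 ** 0.5 * q) % 1.0
    d = (q / 2 ** 0.5) % 1.0
    b0 = (2 * 2 ** 0.5 / 3) * x ** 0.75 + x ** 0.5 / 2 \
         + (2 * 2 ** 0.5 / 3 + g * (1 - g) / 2 ** 0.5) * q
    b1 = g ** 3 / 6 - g ** 2 / 4 - 5 * g / 12 - d / 2 - 1
    return b0 + b1

def A_at_square(M):
    # exact integer evaluation of A(M^2), with mu = floor(sqrt(2M))
    mu = isqrt(2 * M)
    v = (Fraction(M) * (mu + 1) + Fraction(mu, 2) - 1
         - Fraction(mu * (mu + 1) * (2 * mu + 1), 12) + Fraction(mu // 2, 2))
    return int(v)    # the combination is always an integer
```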
Now $B(x)$ is a rather complicated function of $x$, but the next lemma
gives us a couple of ways to predict the behavior of $B(x)$. First, it
tells us how to predict $B(x+y)$ from $B(x)$ if $y$ is small compared
to $x$ (roughly speaking, their difference will be $y/\sqrt2x^{1/4}$);
second, it tells us how to predict approximately when $B(x)$ assumes a
given integer value.
\begin{lemma}
There is a positive constant $C$ such that:
\renewcommand{\labelenumi}{{\it(\alph{enumi})}}
\begin{enumerate}
\item for all real numbers $x\ge1$ and $0\le y\le\min\{x/2,3\sqrt
x\}$, we have
\begin{equation}
\big| (B(x+y)-B(x)) - \frac y{\sqrt2x^{1/4}} \big| < C; \label{bxbxy}
\end{equation}
\item if we define $z_j = \frac12(3j)^{2/3} - \frac14(3j)^{1/3}$ for
any positive integer $j$, then for all $j>C$ we have $z_j>2$ and
$B((z_j-1)^2) < j < B(z_j^2)$.
\end{enumerate}
\label{Bxlem}
\end{lemma}
If the proof of Lemma~\ref{floorlem} was omitted due to its
tediousness, the proof of this lemma should be omitted and then
buried\dots. The idea of the proof is to rewrite $B_0(x)x^{-3/4}$
using the new variable $t=x^{-1/4}$, and then expand in a Taylor
series in $t$ (a slight but easily overcome difficulty being that the
term $\gamma(x)(1-\gamma(x))$ is not differentiable when
$\sqrt2x^{1/4}$ is an integer). For the proof of part (b), we also
need to rewrite $z_jj^{-2/3}$ using the new variable $u=j^{-1/3}$ and
expand in a Taylor series in $u$. We remark that the constant $C$ in
Lemma~\ref{Bxlem} can be taken to be quite small---in fact, $C=5$ will
suffice.
With these last lemmas in hand, we can dispatch
Corollaries~\ref{asymcor} and~\ref{polycor} in quick succession.
\medskip
\begin{pflike}{Proof of Corollary \ref{asymcor}:}
Let $x>1$ be a real number and define $R(x) = A(x) - 2\sqrt2x^{3/4}/3
- \sqrt x/2$, as in the statement of the corollary. We will describe
how to prove the following more precise statement:
\begin{equation}
R(x) = \big( {2\sqrt2\over3} + g(\sqrt2x^{1/4}) - h(2\sqrt x) \big)
x^{1/4} + R_1(x),
\label{moreprecise}
\end{equation}
where
\begin{equation*}
g(t) = {\{t\}(1-\{t\})\over\sqrt2} \quad\hbox{and}\quad h(t) =
\begin{cases}
\displaystyle {\{t\}\over\sqrt2}, &\hbox{if\ }
\displaystyle 0\le\{t\}\le\frac12, \\
\displaystyle \sqrt{1-\{t\}} - {1-\{t\}\over\sqrt2}, &\hbox{if\ }
\displaystyle \frac12\le\{t\}\le1
\end{cases}
\end{equation*}
and $|R_1(x)|<D$ for some constant $D$. The functions $g$ and $h$ are
continuous and periodic with period 1, and are the causes of the
oscillations in the error term $R(x)$. The expression
$g(\sqrt2x^{1/4})$ goes through a complete cycle when $x$ increases by
about $2\sqrt2x^{3/4}$ (one can show this using Taylor expansions yet
again!), which causes the large-scale bounces in the normalized error
term $R(x)x^{-1/4}$ shown in Figure~\ref{rxbsfig} below. Similarly,
the expression $h(2\sqrt x)$ goes through a complete cycle when $x$
increases by about $\sqrt x$, which causes the smaller-scale stutters
shown in the (horizontally magnified) right-hand graph in
Figure~\ref{rxbsfig}.
\begin{figure}[htbp]
\smaller\smaller
\hskip.4cm\psfig{figure=rxb.eps}
\put{$x$}{-.2cm}{.15cm}
\put{$R(x)x^{-1/4}$}{-6.8cm}{4.8cm}
{\smaller\smaller
\put{$10^8$}{-0.75cm}{-.2cm}
\put{$9.95\cdot10^7$}{-4.17cm}{-.2cm}
\put{$9.9\cdot10^7$}{-7.03cm}{-.2cm}
\put{$5/6\sqrt2$}{-8.05cm}{0.6cm}
\put{$2\sqrt2/3$}{-8.05cm}{3.2cm}
\put{$19/12\sqrt2$}{-8.35cm}{4.5cm}
}
\hskip.5cm\psfig{figure=rxs.eps}
\put{$x$}{-.1cm}{.2cm}
\put{$R(x)x^{-1/4}$}{-6.8cm}{4.8cm}
{\smaller\smaller
\put{$800^2$}{-7.8cm}{-.2cm}
\put{$800\cdot801$}{-6.25cm}{-.2cm}
\put{$801^2$}{-4.2cm}{-.2cm}
\put{$801\cdot802$}{-2.65cm}{-.2cm}
\put{$802^2$}{-0.6cm}{-.2cm}
}
\caption{Big bounces and small stutters for $R(x)x^{-1/4}$}
\label{rxbsfig}
\end{figure}
To establish the formula~(\ref{moreprecise}), we shrewdly add
$B(x)-B(x)$ to the expression defining $R(x)$, which yields
\begin{equation*}
R(x) = (A(x)-B(x)) + \big( \frac{2\sqrt2}3 + {\gamma(x)(1-\gamma(x))
\over \sqrt2} \big) x^{1/4} + B_1(x)
\end{equation*}
from the definition~(\ref{b0def}) of $B_0(x)$. Now $B_1(x)$ is a
bounded function; and since $\gamma(x)=\{\sqrt2x^{1/4}\}$, the
expression $\gamma(x)(1-\gamma(x))/\sqrt2$ is precisely
$g(\sqrt2x^{1/4})$. So what we need to show is that
$B(x)-A(x)=h(2\sqrt x)x^{1/4} + R_2(x)$, where $R_2(x)$ is another
bounded function.
While we won't give all the details, the outline of showing this last
fact is as follows: suppose first that $x\ge m^2$ but that $x$ is less
than the first al\-most\discretionary{-}{}{-}square\ $(m+1+a_{m+1})(m-a_{m+1})$ in the $(2m+1)$-st
flock, so that $A(x)=A(m^2)$. Since $A(m^2)=B(m^2)$ by
Lemma~\ref{am2bm2}, we only need to show that $B(x)-B(m^2)$ is
approximately $h(2\sqrt x)x^{1/4}$; this we can accomplish with the
help of Lemma~\ref{Bxlem}(a).
Similarly, if $x<m^2$ but $x$ is at least as large as the first al\-most\discretionary{-}{}{-}square\
$(m+b_m)(m-b_m)$ in the $2m$-th flock, the same method works as long
as we take into account the difference between $A(m^2)$ and $A(x)$,
which is $\floor{\sqrt{m^2-x}}$ by the Main Theorem. And if $x$ is
close to an al\-most\discretionary{-}{}{-}square\ of the form $m(m-1)$ rather than $m^2$, the same
method applies; even though $A(m(m-1))$ and $B(m(m-1))$ are not
exactly equal, they differ by a bounded amount.
Notice that the functions $g(t)$ and $h(t)$ take values in
$[0,1/4\sqrt2]$ and $[0,1/2\sqrt2]$, respectively. From this and the
formula~(\ref{moreprecise}) we can conclude that
\begin{equation*}
\mathop{\hbox{lim inf}}_{x\to\infty} {R(x)\over x^{1/4}} =
{5\over6\sqrt2} \and \mathop{\hbox{lim sup}}_{x\to\infty} {R(x)\over
x^{1/4}} = {19\over12\sqrt2}.
\end{equation*}
The interested reader can check, for example, that the sequences
$y_j=4j^4+j^2$ and $z_j=(2j^2+j)^2$ satisfy $\lim_{j\to\infty}
R(y_j)/y_j^{1/4} = 5/6\sqrt2$ and $\lim_{j\to\infty}
R(z_j)/z_j^{1/4} = 19/12\sqrt2$.
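As a quick arithmetic check on these constants (our bookkeeping, using
only facts established above): since
$B(x)-A(x)=h(2\sqrt x)x^{1/4}+R_2(x)$, the function $h$ enters
formula~(\ref{moreprecise}) with a negative sign, so the lim inf pairs
the minimum of $g$ with the maximum of $h$, and the lim sup pairs the
maximum of $g$ with the minimum of $h$:
\begin{equation*}
{2\sqrt2\over3} - {1\over2\sqrt2} = {8-3\over6\sqrt2} = {5\over6\sqrt2}
\quad\hbox{and}\quad
{2\sqrt2\over3} + {1\over4\sqrt2} = {16+3\over12\sqrt2} = {19\over12\sqrt2}.
\end{equation*}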
\end{pflike}
\begin{pflike}{Proof of Corollary \ref{polycor}:}
The algorithms we describe will involve only the following types of
operations: performing ordinary arithmetical calculations $+$, $-$,
$\times$, $\div$; computing the greatest-integer $\floor\cdot$,
least-integer $\ceil\cdot$, and fractional-part $\{\cdot\}$ functions;
taking square roots; and comparing two numbers to see which is
bigger. All of these operations can be easily performed in polynomial
time. To get the ball rolling, we remark that the functions $a_m$,
$b_m$, and $B(x)$ can all be computed in polynomial time, since their
definitions only involve the types of operations just stated.
We first describe a polynomial-time algorithm for computing the number
of \asq s\ up to a given positive integer $N$. Let $M=\ceil{\sqrt N}$,
so that $(M-1)^2<N\le M^2$. Lemma~\ref{am2bm2} tells us that the
number of \asq s\ up to $M^2$ is $B(M^2)$, and so we simply need to
subtract from this the number of \asq s\ larger than $N$ but not
exceeding $M^2$. This is easy to do by the characterization of \asq s\
given in the Main Theorem. If $N>M(M-1)$, then we want to find the
positive integer $b$ such that $M^2-b^2\le N<M^2-(b-1)^2$, except that
we want $b=0$ if $N=M^2$. In other words, we set
$b=\ceil{\sqrt{M^2-N}}$. Then, if $b\le b_M$, the number of \asq s\ up to $N$
is $B(M^2)-b$, while if $b>b_M$, the number of \asq s\ up to $N$ is
$B(M^2)-b_M-1$.
In the other case, where $N\le M(M-1)$, we want to find the positive
integer $a$ such that $(M-a-1)(M+a)\le N<(M-a)(M+a-1)$, except that we
want $a=0$ if $N=M(M-1)$. In other words, we set
$a=\ceil{\sqrt{(M-1/2)^2-N}-1/2}$. Then, if $a\le a_M$, the number of
\asq s\ up to $N$ is $B(M^2)-b_M-1-a$, while if $a>a_M$, the number of
\asq s\ up to $N$ is $B((M-1)^2)$. This shows that $A(N)$ can be
computed in polynomial time, which establishes part (c) of the corollary.
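The bracketing step of this procedure is easy to test against brute
force for small $N$. The sketch below (Python, ours rather than the
author's code) assumes the record-setter definition of almost-squares
-- $n$ is an almost-square when its area-to-semiperimeter ratio
$n/s(n)$, with $s(n)=\min_{d\mid n}(d+n/d)$, strictly exceeds $m/s(m)$
for every $m<n$ -- and checks that $b=\ceil{\sqrt{M^2-N}}$ does satisfy
the defining inequality $M^2-b^2\le N<M^2-(b-1)^2$.

```python
# Brute-force sketch (ours, not the paper's code) for exploring almost-squares.
# Assumed definition: n is an almost-square when its area-to-semiperimeter
# ratio n/s(n), with s(n) = min over divisors d of n of (d + n/d), strictly
# exceeds m/s(m) for every smaller m (i.e. n is a "record-setter").
from math import isqrt

def s(n):
    """Smallest semiperimeter of an integer-sided rectangle with area n."""
    return min(d + n // d for d in range(1, isqrt(n) + 1) if n % d == 0)

def almost_squares(limit):
    """All almost-squares up to limit, under the record-setter definition."""
    records, num, den = [], 0, 1          # current record ratio num/den
    for n in range(1, limit + 1):
        sn = s(n)
        if n * den > num * sn:            # n/sn > num/den, in exact integer arithmetic
            records.append(n)
            num, den = n, sn
    return records

asq = almost_squares(720)
assert 720 in asq                         # 6! is an almost-square, as noted in the text

# Cross-check the bracketing step of the counting algorithm: for
# M(M-1) < N < M^2, b = ceil(sqrt(M^2 - N)) should satisfy
# M^2 - b^2 <= N < M^2 - (b-1)^2.
for N in range(2, 5000):
    M = isqrt(N - 1) + 1                  # M = ceil(sqrt(N))
    if M * (M - 1) < N < M * M:
        b = isqrt(M * M - N - 1) + 1      # integer-safe ceil(sqrt(M^2 - N))
        assert M * M - b * b <= N < M * M - (b - 1) ** 2
print(asq[:12])
```

A brute force like this is of course exponentially slower than the
polynomial-time algorithm described above, but it is handy for
spot-checking small cases.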
Suppose now that we want to compute the $N$th al\-most\discretionary{-}{}{-}square. We compute in any
way we like the first $C$ \asq s, where $C$ is as in Lemma \ref{Bxlem};
this only takes a constant amount of time (it doesn't change as $N$
grows), which certainly qualifies as polynomial time. If $N\le C$ then
we are done, so assume that $N>C$. Let $M=\ceil{z_N}$, where $z_N$ is
defined as in Lemma \ref{Bxlem}(b), so that $M$ is at least 3 by the
definition of $C$. By Lemma \ref{am2bm2},
\begin{equation*}
A(M^2) = B(M^2) \ge B(z_N^2) > N \quad\hbox{and}\quad A((M-2)^2) =
B((M-2)^2) < B((z_N-1)^2) < N,
\end{equation*}
where the last inequality in each case follows from Lemma
\ref{Bxlem}(b). Therefore the $N$th al\-most\discretionary{-}{}{-}square\ lies between $(M-2)^2$ and
$M^2$, and so is either in the $2M$-th flock or one of the preceding
three flocks. If $0 \le B(M^2)-N \le b_M$, then the $N$th al\-most\discretionary{-}{}{-}square\ is in
the $2M$-th flock, and by setting $b=B(M^2)-N$ we conclude that the
$N$th al\-most\discretionary{-}{}{-}square\ is $M^2-b^2$ and the dimensions of the optimal rectangle
are $(M-b)\times(M+b)$. If $1+b_M \le B(M^2)-N \le b_M+1+a_M$, then the
$N$th al\-most\discretionary{-}{}{-}square\ is in the $(2M-1)$-st flock, and so on. This establishes
part (d) of the corollary.
Finally, we can determine the greatest al\-most\discretionary{-}{}{-}square\ not exceeding $N$ by
computing $J=A(N)$ and then computing the $J$th al\-most\discretionary{-}{}{-}square, both of which
can be done in polynomial time by parts (c) and (d); and we can
determine whether $N$ is an al\-most\discretionary{-}{}{-}square\ simply by checking whether this
result equals $N$. This establishes the corollary in its entirety.
\end{pflike}
\section{Final Filibuster}\noindent
We have toured some very pretty and precise properties of the \asq s,
and there are surely other natural questions that can be asked about
them, some of which have already been noted. When Grantham posed this
problem, he recalled the common variation on the original
calculus problem where the fence for one of the sides of the rectangle
is more expensive for some reason (that side borders a road or
something), and suggested the more general problem of finding the most
cost-effective rectangle with integer side lengths and area at most
$N$, where one of the sides must be fenced at a higher cost. This
corresponds to replacing $s(n)$ with the more general function
$s_\alpha(n) = \min_{d\mid n}(d+\alpha n/d)$, where $\alpha$ is some
constant bigger than 1. While the elegance of the characterization of
such ``$\alpha$-\asq s'' might not match that of Corollary
\ref{tricor}, it seems reasonable to hope that an enumeration every
bit as precise as the Main Theorem would be possible to establish.
How about generalizing this problem to higher dimensions? For example,
given a positive integer $N$, find the dimensions of the rectangular
box with integer side lengths and volume at most $N$ whose
volume-to-surface area ratio is largest among all such boxes. (It
seems a little more natural to consider surface area rather than the
sum of the box's length, width, and height, but who knows which
problem has a more elegant solution?) Perhaps these ``almost-cubes''
have an attractive characterization analogous to Corollary
\ref{tricor}; almost certainly a result like the Main Theorem, listing
the almost-cubes in order, would be very complicated. And of course
there is no reason to stop at dimension 3.
In another direction, intuitively it seems that numbers with many
divisors are more likely to be \asq s, and the author thought to test
this theory with integers of the form $n!$. However, computations
reveal that the only values of $n\le500$ for which $n!$ is an al\-most\discretionary{-}{}{-}square\
are $n=1,2,3,4,5,6,7,8,10,11,13,15$. Is it the case that these are
the only factorial \asq s? This seems like quite a hard question to
resolve. Perhaps a better intuition about the \asq s\ is that only
those numbers that lie at the right distance from a number of the form
$m^2$ or $m(m-1)$ are \asq s---more an issue of good fortune than of
having enough divisors.
The reader is welcome to contact the author for the Mathematica code
used to calculate the functions related to \asq s\ described in this
paper. With this code, for instance, one can verify that with
8,675,309 square feet of land at his disposal, it is most
cost-effective for Farmer Ted to build a 2,$919'\tms2$,$972'$
supercoop \dots speaking of which, we almost forgot to finish the Farmer
Ted story:
\story{After learning the ways of the \asq s, Farmer Ted went back to
Builders Square, where the salesman viewed the arrival of his
${\mathbb R}$-rival with trepidation. But Farmer Ted reassured him, ``Don't
worry---I no longer think it's inane to measure fences in ${\mathbb N}$.'' From
that day onward, the two developed a flourishing business
relationship, as Farmer Ted became an integral customer of the store.}
\noindent And that, according to this paper, is that.
\story{{\it Acknowledgements.\/} The author would like to thank
Andrew Granville and the anonymous referees for their valuable
comments which improved the presentation of this paper. The author
would also like to acknowledge the support of National Science
Foundation grant number DMS 9304580 \dots the NSF may or may not wish
to acknowledge this paper.}
\section{Introduction}
The Vela supernova remnant (G$263.9-3.3$) is one of the closest and
brightest supernova remnants. Recent measurements support a distance
of about 350~pc (\cite{dub98} and references therein). Estimates of
its age range from a few thousand years (\cite{sto80}) to 29,000~yr
(Aschenbach, Egger, \& Tr\"umper 1995\nocite{asc95}) with a widely
used value being that given by the characteristic age of its pulsar,
11,400~yr (Reichley, Downs \& Morris 1970\nocite{rei70}). Its
brightness and large angular size ($\sim8$\arcdeg) have made possible
its study at many wavelengths. Yet it has been less intensively
studied than other close remnants such as the plerionic Crab Nebula
and the shell-type Cygnus Loop, although it is arguably the closest
and a key member of the third major class of SNRs, the composite
remnants. Among the many controversies surrounding its nature has been
whether it is a shell or a composite remnant. This paper aims
to show that the Vela SNR can definitely be seen as a composite
remnant on morphological grounds and ought to be considered the
Galactic archetype.
Previous radio studies of the Vela SNR region have mainly used lower
resolution single dish images over a frequency range from 408~MHz to
8.4~GHz (\cite{mil68}; Day, Caswell, \& Cooke 1972\nocite{day72};
\cite{mil80}; \cite{mil95}; \cite{dun96}). The earliest observations
showed the Vela SNR to comprise three main areas of radio emission,
called Vela~X, Y, and Z, within a diameter of 5\arcdeg, corresponding
roughly with the bright, filamentary structure of the nebula Stromlo
16 (\cite{gum55}; \cite{mil68a}). Until recently, this 5\arcdeg\
diameter was thought to indicate the extent of the remnant, with the
nebula Vela~X containing the pulsar (PSR B0833--45), offset to one
side. This pulsar, discovered in 1968, was immediately associated with
the Vela supernova remnant (\cite{lar68}). Calculations by Bailes et
al.\ (1989\nocite{bai89}) indicate that there is only a 0.1\%
probability that the pulsar and the supernova remnant are in chance
superposition. Kundt (1988\nocite{kun88}) deduced from the 408~MHz
survey of Haslam et al.\ (1982\nocite{has82}) that the Vela SNR might
be much larger, in fact about 8\arcdeg\ across, with the pulsar
approximately at the center. ROSAT and radio observations
(\cite{asc92}; \cite{dun96}) reinforced this model. The discovery that
the speed and direction of the pulsar's proper motion indicates that
it was born near the center of the 8\arcdeg\ Vela SNR shell
(\cite{asc95}) solves the offset problem. This is one of only a few
reliable SNR/pulsar associations (\cite{kas96}). The most recent
single-dish observations (\cite{mil80}; \cite{mil95}) began to resolve
Vela~X at higher frequencies (5--8.4~GHz) and uncovered strongly
linearly polarized structure. Higher resolution observations at 327
MHz with the Very Large Array (VLA) by Frail et al.\
(1997\nocite{fra97}) showed a bright filament near the center of
Vela~X, near the X-ray feature called a `jet' by Markwardt \&
\"Ogelman (1995\nocite{mar95}). This feature extends southwards from
the pulsar, which is offset to the north of Vela~X.
The radio spectral index of Vela~X is flatter than that of the rest of
the remnant, leading to the remnant's classification as a composite,
with the Vela~X nebula directly powered by the Vela pulsar
(\cite{dwa91}; \cite{wei88}). This conclusion has been controversial
(\cite{wei80}; \cite{mil86}; \cite{wei88}). The Vela SNR lies close to
the Galactic Plane, leading to difficulties in estimating the
baselevel both in single-dish and interferometer images. To provide
the first evidence supporting the classification as a composite on
morphological grounds, radio observations are presented in this paper
at the highest resolution yet used to image the Vela X region. A
subsequent paper will present a multi-wavelength study of Vela~X and
consider the nature of the central plerion in detail.
These are the first high-resolution radio observations to cover a
large fraction of the entire Vela SNR, and are the first radio
observations of the Vela shell at a resolution compatible with
currently available optical and X-ray images. Radio observations of
the Vela shell published before the present work have been at
relatively low resolution. For example, the observations of Duncan et
al.\ (1996\nocite{dun96}) have a resolution of 10\arcmin, making them
difficult to correlate with high resolution data in other spectral
regimes. In this paper it is possible for the first time to present a
multi-wavelength study of part of a composite remnant's shell at
scales a small fraction of a parsec.
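The "small fraction of a parsec" claim is simple to verify (our
back-of-envelope arithmetic, using the $43''$ resolution and the
$\sim$350~pc distance adopted above):

```python
# Transverse linear scale subtended by the survey beam at the Vela SNR
# distance (our back-of-envelope check; 350 pc is the distance adopted above).
ARCSEC_PER_RADIAN = 206265.0

def linear_scale_pc(theta_arcsec, distance_pc):
    """Transverse size (pc) subtended by angle theta_arcsec at distance_pc."""
    return distance_pc * theta_arcsec / ARCSEC_PER_RADIAN

beam = linear_scale_pc(43.0, 350.0)
print(f"43 arcsec at 350 pc corresponds to {beam:.3f} pc")
```

which indeed comes out near 0.07~pc.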
\section{Observations}
\label{sec:surv_obs}
The Molonglo Observatory Synthesis Telescope (MOST) is an east-west
multi-element interferometer located in New South Wales, Australia
(\cite{rob91}). It consists of two co-linear cylindrical parabolas,
each 11.6~m by 778~m. In a twelve-hour synthesis observation it images
an elliptical field of size $70'\times
70'\mathop{\rm cosec}\nolimits\left|\delta\right|$.\footnote{Recent modifications have
increased the field of view to
$163'\times163'\mathop{\rm cosec}\nolimits\left|\delta\right|$.} Sixty-three of these
synthesis observations covering an area of almost 50 square degrees
comprise this survey. The survey includes the regions of brightest
radio emission from the Vela SNR and covers the majority of the X-ray
remnant as seen by Aschenbach et al.\ (1995\nocite{asc95}). The area
close to the strong H\,\textsc{ii}\ region RCW~38 has been avoided, due to
imaging artefacts. Each observation
was made at a frequency of 843~MHz and a resolution of
$43''\times43''\mathop{\rm cosec}\nolimits\left|\delta\right|$, receiving right-handed
circular polarization in a bandwidth of 3~MHz. The observations were
made over the period 1984 February 3 to 1996 February 3. Complete
coverage with the elliptical field shape necessitates substantial
overlaps in the survey, which have been used to refine the relative
calibration between fields, based on the unresolved sources common to
more than one field. Twenty-seven of the earlier observations were
made for the First Epoch Molonglo Galactic Plane Survey (\cite{whi89};
Green, Cram, \& Large 1998\nocite{gre98}) and some of these data
appear in the MOST supernova remnant catalogue (\cite{whi96}). The
remaining observations, which maintained the same basic grid
separation of 0\fdg9, commenced on 1992 January 17.
Those observations initially made in the vicinity (within about
1$^\circ$) of the Vela pulsar were severely limited in dynamic range
by the presence of the pulsar in the primary beam. The pulsar is a
strong, unresolved continuum source and is variable over time scales
of seconds to hours. Its integrated pulsed emission in ungated
observations was 2.2$\pm$0.4~Jy, averaged over the entire pulsar period. A
source will appear in a MOST image with a symmetric point-spread
function only if its intensity remains constant throughout the
observation. Any time-dependent variation compromises sidelobe
cancellation during synthesis, producing rays within the image
emanating from the source, and confusing nearby faint features. To
improve the imaging of this region a method of pulsar gating was used
which was originally developed by J.~E. Reynolds (personal
communication, 1996) for an observation in 1986.
To make the observations, a predicted pulse arrival time was used to
generate a gating signal of width 20~ms in each pulsar period ($\sim$
89~ms).\footnote{The predicted pulse arrival time was determined from
timing measurements of the pulsar kindly provided by R.~N.
Manchester (ATNF), M.~Bailes (University of Melbourne) and P.~M.
McCulloch (University of Tasmania).} This was displayed on an
oscilloscope simultaneously with the actual detected total power
signal of the pulsar. The half-power pulse width of the pulsar signal
was 4.5~ms, larger than the observed half-power width of 2.3~ms at
632~MHz (\cite{mcc78}) due to the dispersion (2~ms) over the MOST's
bandwidth (following Komesaroff, Hamilton, \& Ables
1972\nocite{kom72}) and to the effect of an integrating low pass
filter present in the signal path before the detection point. The
gating signal was adjusted to suppress data acquisition from 5~ms
before until 15~ms after the peak of the pulse, to allow removal of
almost all of the variable pulsed emission. Four observations were
made, each with the pulsar near the edge of the field to the north,
south, east and west. These observations were incorporated in the
complete mosaic in place of the non-gated data, for this part of the
survey. The observations made with this procedure contain only a
40~mJy (2\%) residual of the pulsed emission, small enough to preclude
associated imaging artefacts. However, some artefacts are present in
more remote fields, mainly due to the grating response of the MOST to
the pulsar. Further details of the pulsar gating procedure are given
by Bock (1997\nocite{boc97}).
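The gating figures can be sanity-checked with a rough sketch (ours,
not the authors'; the Vela pulsar's dispersion measure is not quoted
in the text, so DM\,$\approx$\,68~pc\,cm$^{-3}$ is an assumed value):

```python
# Rough checks on the pulsar-gating figures (ours, not the authors').
# Assumption: Vela pulsar dispersion measure DM ~ 68 pc cm^-3 (not stated
# in the text).

def gate_fraction(gate_ms, period_ms):
    """Fraction of each pulse period blanked by the gate."""
    return gate_ms / period_ms

def dispersion_smear_ms(dm, bandwidth_mhz, freq_ghz):
    """Dispersive smearing across the band: ~8.3 us * DM * BW_MHz / f_GHz^3."""
    return 8.3e-3 * dm * bandwidth_mhz / freq_ghz ** 3

print(f"fraction of data gated out: {gate_fraction(20, 89):.0%}")
print(f"smearing across 3 MHz at 843 MHz: {dispersion_smear_ms(68, 3, 0.843):.1f} ms")
```

The smearing comes out at a few milliseconds, the same order as the
$\sim$2~ms quoted above, and blanking $\sim$22\% of each period is the
price paid for removing the variable pulsed emission.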
\section{Imaging}
\label{sec:mosaic_imaging}
Individual images were synthesized from each of the 63 (12 hour) observations using the
back-projection algorithm (\cite{per79}; \cite{cra84}) as implemented in the
standard MOST reduction software. To provide initial calibration for
the reduction process, several strong unresolved sources were observed briefly
before and after each target observation. These provide a
preliminary gain, phase and beamshape calibration.
The images were deconvolved using the H\"ogbom CLEAN algorithm. Some
images containing stronger sources were adaptively deconvolved as
described by Cram \& Ye (1995\nocite{cra95}). This method is similar
to self-calibration, but with a reduced set of free parameters.
The residual images had pixel rms in the range 1--6~m\hbox{Jy~beam$^{-1}$}. This range
reflects the variation between fields which were essentially noise-limited
and those which were dynamic-range limited.
Preliminary position and flux calibration used short observations of a number (typically eight) of
unresolved calibration sources before and after each Vela target
field. These sources are a subset of the MOST calibrators from
Campbell-Wilson \& Hunstead (1994\nocite{cam94}). A refined calibration was
achieved using unresolved survey field sources which are located in the
overlapped regions. Measurement of these sources produced small corrections
that result in positions mostly consistent to better than $1''$ in right
ascension and $2''$ in declination and flux densities accurate to
within 5\%.\footnote{The uncertainties in the
absolute calibration of MOST images are approximately an additional 5\% in
flux density and 0\farcs5 in RA and Dec. (Hunstead, 1991 and
personal communication, 1997).} The refining corrections could not be
applied to some fields which had too few unresolved sources in common with
nearby fields (for example, those at the edge of the mosaic). The
individual fields were mosaiced into ARC projection (\cite{gre95}), to
facilitate comparison with optical surveys.
The MOST is a redundant array sensitive to the
full range of spatial frequencies within the limits set by its extreme
spacings. The minimum geometric spacing (42.85$\lambda$) implies a
maximum detectable angular scale of about 1.3\arcdeg. However, it has
been found empirically that the actual synthesized beam of the MOST is
best fit with a model including an effective minimum spacing of about
70$\lambda$. This parameter varies between and during observations.
Typically, angular scales less than about 30\arcmin\ are well
imaged. The MOST's synthesized beam can also vary significantly
during an observation due to environmental effects and minor telescope
performance variations with the result that the theoretical beam used
for deconvolution sometimes does not model well the actual beam of an
observation. A combination of these effects produces a negative bowl
artefact around bright extended sources. In the MOST Vela SNR
survey, this effect causes a background level of about
$-10$~m\hbox{Jy~beam$^{-1}$}\ around Vela~X and Puppis~A.
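The quoted angular-scale limits follow directly from the spacings (our
arithmetic, using the rough criterion that the largest well-imaged
scale is $\sim$1/(minimum spacing in wavelengths) radians):

```python
import math

# Largest detectable angular scale for a given minimum baseline (ours),
# using the rough criterion theta_max ~ lambda / d_min, i.e.
# 1 / (spacing measured in wavelengths) radians.

def max_angular_scale_deg(min_spacing_in_wavelengths):
    """Approximate largest detectable angular scale, in degrees."""
    return math.degrees(1.0 / min_spacing_in_wavelengths)

print(f"geometric 42.85-lambda spacing: {max_angular_scale_deg(42.85):.2f} deg")
print(f"effective 70-lambda spacing:    {max_angular_scale_deg(70.0):.2f} deg")
```

The first value reproduces the $\sim$1.3\arcdeg\ geometric limit; the
empirical $\sim$30\arcmin\ figure is tighter still than the
$\sim$0.8\arcdeg\ implied by the effective 70$\lambda$ spacing.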
Extended structure at levels as low as 6~m\hbox{Jy~beam$^{-1}$}\ is clear in the
images. Although the rms in individual pixels is around 2~m\hbox{Jy~beam$^{-1}$},
typical extended structure covers many pixels and is reliably detected
at lower levels than would usually be accepted for point sources. The
confirmation of this low level structure in a VLA image of Vela~X made at
1.4~GHz (\cite{boc98a}) validates similar emission
seen at 843~MHz elsewhere in the survey.
The most common artefacts present in the image are grating rings, which are
due to the periodic nature of the MOST. They are sections of ellipses of
dimension $1\fdg15\times1\fdg15$\,\hbox{$\cosec\left|\delta\right|$}\ with a width dependent on the source
producing them and are, in general, of variable amplitude
since they pass through several individual fields where the grating
rings have different relative gains. The morphology of these artefacts
makes them easily distinguishable from the sky emission. Much of the
survey is dynamic-range limited; in the less complicated regions the
rms noise is of order 1--2~m\hbox{Jy~beam$^{-1}$}.
An additional non-ideality comes from the mosaicing: the image from each of
the 63 individual observations was deconvolved separately, yet
structure outside a given field can contribute to sidelobes in that
field. This manifests itself as discontinuities in the survey image,
at the edges of component images or where
regions containing artefacts have been masked.
\section{Results}
An image of the complete MOST survey of the Vela SNR is shown in
figure~\ref{fig:survey}. To assist in identifying the various objects
and emission regions within the survey, a cartoon covering the same
area is presented in figure~\ref{fig:vela_cartoon}. Key
characteristics of the survey are summarized in Table~\ref{params}.
\begin{table}[t]
\caption{\protect\centering Key parameters of the MOST Vela SNR survey
\label{params}}
\vspace*{0.5mm}
\centering
\begin{tabular}{ll}
\tableline
\tableline
Parameter & Value \\
\tableline
frequency & 843~MHz\\
bandwidth & 3~MHz\\
total area & 50~deg$^2$\\
individual field size ($\alpha\times\delta$) & $70'\times 70'~$\hbox{$\cosec\left|\delta\right|$}\\
resolution ($\alpha\times\delta$) & $43''\times 43''~$\hbox{$\cosec\left|\delta\right|$}\\
max. imaged angular scale&$\sim 30'$\\
typical rms noise & 1--2~m\hbox{Jy~beam$^{-1}$}\\
received polarization & right circular (IEEE)\\
\tableline
\end{tabular}
\end{table}
Morphologically, there are several distinct regions apparent in the
image. Near the center is the bright nebula known as Vela~X, which is
thought to be powered by the Vela pulsar (PSR~B0833$-$45: Large,
Vaughan, \& Mills 1968\nocite{lar68}). The nebula is composed of a
network of complex filaments. Significant smooth extended structure is
also present but goes undetected by the MOST, which has limited
sensitivity to large angular scales. This region is seen more clearly
in subsequent images.
In the north and east of the image there are several filaments from
the shell of the Vela supernova remnant and at least one unrelated
H\,\textsc{ii}\ region, RCW~32 (Rodgers, Campbell, \& Whiteoak
1960\nocite{rod60}). There are also partial elliptical artefacts due
to strong H\,\textsc{ii}\ regions outside the survey area. Broadly speaking, we
can categorize the extended structure in this area on morphological
grounds. To the north-east of the Galactic Plane, much of the
structure is diffuse and randomly oriented and may be Galactic
emission unrelated to the Vela SNR. Most of the extended emission
between the Galactic Plane and the Vela~X nebula \emph{is} due to the
Vela supernova event. These filamentary features have some
correspondence with optical filaments and X-ray emission in the area
(\S~\ref{sec:shell_obs}). They are generally perpendicular to the
direction to the center of the SNR and are presumably related to the
shell. This is the area known in the literature as Vela~Y
(\cite{dwa91}; \cite{mil68}). Directly to the east of Vela~X is the radio
peak known as Vela~Z. This area is confused by the elliptical sidelobe
from the bright H\,\textsc{ii}\ region RCW~38 (\cite{rod60}), which is not
included in the survey. The area around RCW~38 is included in the
First Epoch Molonglo Galactic Plane Survey (\cite{gre98}). On the
southern side of the central nebula, another region of shell-like
emission (08\h32\hbox{$^{\rm m}$}$-49^\circ00'$) is probably also part of the Vela
SNR. This coincides approximately with the southern boundary of the
8\arcdeg\ remnant (\cite{asc92}; \cite{dun96}).
The survey contains many unresolved and barely resolved sources, most
of which are presumably background sources.\footnote{About 39
small-diameter sources above 5~mJy at 843~MHz are found per square
degree away from the Galactic Plane (\cite{lar94}).} However, some
may be density enhancements in the Vela emission or other Galactic objects, such as compact H\,\textsc{ii}\ regions, planetary
nebulae, pulsars or small-diameter SNRs. Several of these sources have
unusual coincidences with extended structures. From the survey it is
unclear whether they are in fact background sources, or whether they
are `knots' in the SNR emission. Follow-up VLA observations
(\cite{boc98a}) of four of these sources have not found evidence for a
Galactic origin.
The unrelated supernova remnant Puppis A is also contained within the
survey. It is an oxygen-rich SNR of age about 3700~yr (\cite{win88})
at an accepted distance of 2~kpc (\cite{are90}). Puppis A has
previously been imaged separately with the MOST (\cite{kes87};
\cite{are90}). It falls approximately on the 8\arcdeg\ X-ray boundary
of the Vela SNR. However, no Vela radio emission is obvious in the
vicinity.
\subsection{The northern shell}
\label{sec:shell_obs}
The present survey of the Vela SNR covers much of the brightest region
of the Vela shell. The image in figure~\ref{fig:shell_most} is part of
the survey showing the northern section of the Vela SNR shell. The
following discussion focuses on this area, where the radio emission
from the shell is most prominent and not confused by emission from
unrelated Galactic objects or by artefacts.
The extended structure in this image forms a series of filamentary
arcs at position angles ranging from 70\arcdeg\ to
170\arcdeg. The structure is generally concave towards the center of
the SNR: there are no significant radial filaments. The majority of
the filaments are resolved by the MOST, with widths ranging from
1\arcmin\ to 6\arcmin\ and peak surface brightnesses up to 20~m\hbox{Jy~beam$^{-1}$}.
These filaments generally have a sharp edge on the side away from the
center of the remnant, while towards the remnant center they may fade
over several arcminutes. The sharper outside profile is
consistent with the `projected sheet' picture of filamentary emission
(\cite{hes87}). The effect may also indicate that the filaments
are in fact edges, spatially filtered by the MOST so that only
the sharp transitions appear.
\subsubsection{A multi-wavelength comparison}
The availability of three datasets of comparable resolution at widely
spaced wavelengths gives us an opportunity to understand the spatial
relationship between the underlying physical processes. In addition to
the 843~MHz survey with the MOST, H$\alpha$\ and soft X-ray data are
available. The radio image shows primarily the non-thermal synchrotron
emission, the optical filaments are line emission resulting from
recombination in cooling processes, while the X-rays are shock heated
thermal radiation.
An H$\alpha$\ image of the northern Vela shell is shown in
figure~\ref{fig:shell_ol}(a), overlaid with a contour (at
approximately the $3\sigma$ level) from the radio image of
figure~\ref{fig:shell_most}. This image is from a test observation
(kindly made and reduced by M.~S. Bessell) for the MSSSO Wide Field
CCD H$\alpha$\ Imaging Survey (Buxton, Bessell, \& Watson
1998\nocite{bux98}).
The observation was made using a
$2048\times2048$ 24~\micron\ pixel CCD through a 400~mm, f/4.5
Nikkor-Q lens, at the 16-inch (0.4~m) telescope facility at Siding
Spring Observatory. Pixel spacing in the image is 12\arcsec, giving a
field size of 7\arcdeg\ square. The portion of this image presented
here is taken from the central ($5^\circ\times5^\circ$) region, where
vignetting in the 1.5~nm filter is not significant. The image has
been derived from two frames with a total exposure time of 1400~s,
which were bias-subtracted and flat-field corrected before averaging.
No correction has been made for the effect of cosmic rays. No
continuum observation was made for subtraction. Consequently, the
image presented here contains stars and a continuum component in the
extended emission. A coordinate system was applied to the image by
comparison with a Digital Sky Survey image (\cite{mor95}) using the
\texttt{KARMA} package (\cite{goo96}). The registration is within the
resolution of the radio data.
The Vela SNR was observed as part of the ROSAT All-sky Survey between
1990 October and 1991 January. An image of the Vela SNR (0.1--2.4~keV,
with angular resolution 1\arcmin) from the survey has been presented
by Aschenbach et al.\ (1995\nocite{asc95}), and part is reproduced in
figure~\ref{fig:shell_ol}(b), overlaid with the same radio contour as
in figure~\ref{fig:shell_ol}(a). In the figure, the top section of
the X-ray image (black) is the Galactic background. The surface
brightness at the edge of the SNR shell is
$7\times10^{-15}~\rm{erg~cm^{-2}~s^{-1}~arcmin^{-2}}$ (\cite{asc95}). To
the south, the surface brightness increases by a factor of 500 to the
brightest part (white), which is the most intense X-ray emission region
in the entire SNR. The first grey area ($\delta=-41^\circ40'$) marks
the edge of the main shock, seen in projection.
\subsubsection{Morphological analysis}
\label{sec:shell_morph}
By considering only the radio and H$\alpha$\ images
(figure~\ref{fig:shell_ol}(a)), it is possible to see immediately the
most striking aspect of the comparison, namely the contrast between
the optical and radio emission regions. As will be discussed below,
this has a simple theoretical basis, but is contrary to the picture
seen in other SNRs in those cases where optical emission has been
compared with well-resolved radio shell structure. The brightest
radio filaments are (as noted earlier) generally oriented
perpendicularly to the direction to the SNR center and are without
optical counterparts. Likewise, many of the optical filaments are
without radio counterparts. However, one of the brightest optical
filaments (with orientation similar to the radio structures), centered
on 08\h36\hbox{$^{\rm m}$}$-$42\arcdeg50\arcmin, does have a faint radio counterpart.
By contrast, the equally bright optical filaments in the south-west
corner of the image are without radio counterparts in the MOST image.
These filaments are generally not oriented perpendicularly to the
direction to the SNR center in the same way as the radio filaments.
In addition to the optical filamentary structure, diffuse optical
emission is also present. This is concentrated to the eastern side of
the image, in the general area of the strong radio filaments, but
there is no obvious correlation between the diffuse optical emission
and the radio filaments. No direct measure of the effect of
extinction on the H$\alpha$\ image is available.
The complete X-ray image (\cite{asc95}) shows by
its near-circular shape that it delineates the projected edge of those
parts of the main shock that are still expanding into a relatively
homogeneous medium. The regions of optical and radio emission
described so far are interior to this main X-ray shell. At the
western side of the main X-ray boundary in figure~\ref{fig:shell_ol}(b),
we see significant optical and radio emission clearly present
close to the X-ray edge. Here the radio and optical emission agree
quite well, in an arc with apex at 08\h35\hbox{$^{\rm m}$}$-$42\arcdeg10\arcmin,
just behind the outer edge of the X-ray
emission.
The X-ray emission is quite different in form to the emission we see
in the optical and radio regimes. Apart from the main edge, it is
relatively diffuse and smooth. By contrast, the radio and optical
images are dominated by filamentary structure. However, we note that
the radio image has reduced sensitivity to smooth structure, due to
the MOST's spatial response.
The bright optical filament at 08\h36\hbox{$^{\rm m}$}$-$42\arcdeg50\arcmin\ traces
the exterior (with respect to the remnant expansion) of the brightest
peaks of the X-ray emission. Yet not all the optical filaments exhibit
this relationship. The diffuse optical component has no obvious
X-ray counterpart and is strongest where
the X-ray emission is not quite so bright, to the east. The radio
filaments are also partially correlated with the X-ray emission.
Several follow changes in X-ray brightness. However, the most central
filament (08\h39\hbox{$^{\rm m}$}$-43^\circ10'$) is less well correlated: it crosses
a bright region of X-ray emission.
\subsubsection{Radiation mechanisms}
In SNRs, optical and X-ray emission are both typically due to thermal
processes. However, quite different physical conditions are involved.
Thermal X-ray emission is the result of fast shocks propagating
through a rarefied medium, with density 0.1--1~cm$^{-3}$, shocked to
temperatures of 10$^6$--10$^7$~K (\cite{loz92}). The optical emission
typically observed is produced by hydrogen recombination of cooling
shocked gas at about $10^4$~K, with density a few times
$10^2$~cm$^{-3}$.
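These temperatures are consistent with the standard strong-shock jump
condition (a textbook relation, not derived in the original text;
$\mu\approx0.6$ is an assumed mean molecular weight for ionized gas):
$$
T_s = \frac{3}{16}\,\frac{\mu m_{\rm H} v_s^2}{k}
\approx 3.4\times10^{6}\left(\frac{v_s}{500~{\rm km\,s^{-1}}}\right)^{2}~{\rm K},
$$
so shock velocities of several hundred to $\sim$1000~km~s$^{-1}$ in the
rarefied medium yield the quoted $10^6$--$10^7$~K, while slower shocks
driven into denser clouds produce gas that cools radiatively through
$\sim$$10^4$~K.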
One model which has had success explaining optical and X-ray
observations of the Cygnus Loop (\cite{hes86}; \cite{gra95a};
\cite{lev96}) invokes large ($\gtrsim 10^{14}$~m) molecular clouds
with which the expanding shock is interacting. The optical emission
comes from the shocked cloud, where the dense material is not heated
to temperatures as high as those which are maintained in the less
dense X-ray emitting regions. This emission is due to recombinative
cooling after the passage of the shock. Behind the optical emission,
the X-ray emission is further brightened by the passage of a reflected
(or reverse) shock due to the density contrast between the cloud and
the less dense inter-cloud material. Where the main shock does not
encounter molecular clouds, we do not expect to see recombinative
cooling. Instead the non-radiative shock may be traced by fainter
Balmer filaments (Hester, Raymond, \& Blair 1994\nocite{hes94}).
The present observations of the northern Vela shell fit nicely into
this picture. If the majority of optical emission was from the main
shock interacting with a relatively uniform medium, but seen here in
projection, we would expect also to see it along the entire edge of
the X-ray shell, where accentuation of sheet-like emission in
projection would be strongest. This is not observed, implying that the
emission is localized and due to interactions with density
inhomogeneities having a filling factor much less than unity. The cloud interaction
model is further supported by the
presence of X-ray brightened regions (figure~\ref{fig:shell_ol}(b))
immediately behind the bright optical filamentary structure centered
on 08\h36\hbox{$^{\rm m}$}$-$42\arcdeg50\arcmin\ in
figure~\ref{fig:shell_ol}(a). We might be seeing this emission in
projection, significantly in front of or behind the plane of the
explosion center transverse to the line of sight. This would indicate
local density enhancements very close to the main shock.
Alternatively, it could be nearly in the plane of the explosion
center, with a shock velocity significantly reduced by interactions
with more dense material. Some of the emission could be from regions
already passed and energized by the main shock.
The digital 60~\micron\ images in the IRAS Sky Survey Atlas
(\cite{whe94}) support the thermal emission model for the X-ray
emission. Much of the X-ray structure does have an infra-red
counterpart. However, infra-red images are generally less useful
thermal diagnostics than X-ray images near the Galactic Plane, since
the infra-red observations are dominated by diffuse Galactic emission
and confusion from other sources (\cite{whi91}).
An alternative model for SNR optical/X-ray emission (\cite{mck75})
explains SNRs with centrally-peaked X-ray emission (\cite{whi91}). In
this model cold dense clouds with a small filling factor have been
passed by the main shock and are evaporating by conductive heating
from the postshock gas.
Both these models rely on molecular clouds to explain the observed
features. Molecular clouds have been detected in the direction of the
Vela SNR (May, Murphy, \& Thaddeus 1988\nocite{may88}). The initial
survey was of $^{12}$CO and $^{13}$CO $J=1\rightarrow0$ line emission
with a resolution of 0\fdg5. Higher resolution follow-up observations
(\cite{mur91}) covered only the eastern part of the Vela SNR shell. A
cloud with a barely resolved peak at 08\h41\hbox{$^{\rm m}$}$-41^\circ20'$ is seen,
with a distance estimated to be 0.5--2.0~kpc, i.e.\ immediately behind
the Vela SNR. However, this cloud appears coincident with a bright
H\,\textsc{ii}\ region seen optically to the north of
figure~\ref{fig:shell_ol}(a), and might not be responsible for the
observed optical features in the Vela shell. H\,\textsc{i}\ may be a better
tracer of density in the Vela shell region. Dubner et al.\
(1998\nocite{dub98}) find a near-circular shell of H\,\textsc{i}\ surrounding
the northern edge of the remnant, with column densities up to
$10^{21}$~cm$^{-2}$, and estimate the pre-shock gas to have had a
density of 1--2~cm$^{-3}$. The H\,\textsc{i}\ shell traces the X-ray edge of the
remnant, enclosing the radio and optical filaments.
In the simple radio emission model for the interaction of supernova
explosions with the ISM (\cite{wol72}), Vela is in the
radiative or snowplow phase of evolution, having swept
up significant matter and dissipated much of the original kinetic energy of
the explosion. A cool dense shell surrounds a hot interior. This model can
account for the faint radio emission seen just behind the X-ray edge,
which indicates the presence of compressed magnetic fields and
accelerated particles, probably from the diffusive shock mechanism
(\cite{ful90}). It does not account for the brighter localized
filaments apparently well behind the main shock.
Duin \& van der Laan (1975\nocite{dui75}) present a consistent picture
for the coincidence of radio and optical emission which is observed in
``middle-aged'' shell remnants. This model, based on observations of
IC443, proposes that the magnetic field required for synchrotron
emission is frozen into condensations forming in the cooling
instabilities which then give rise to the optical emission. We do not
find significant radio/optical coincidence in our Vela observations.
Consequently, if this process is occurring, then we can infer that the
cooling material around the radio filaments is not at an appropriate
temperature for the emission of recombination radiation. One
explanation is that a long period has elapsed since the passage
of the radiative shock, allowing substantial cooling while still
preserving the conditions for synchrotron emission (\cite{bla82}).
Alternatively, where shock-accelerated particles producing the optical
filaments are located, the magnetic field may not be sufficiently
compressed to cause detectable synchrotron emission.
The applicability of these models may be investigated further with
magnetic field information, provided by polarimetry. Polarized
intensity has been observed in the Vela shell (\cite{dun96}), but high
resolution measurements at several frequencies will be required to
examine the magnetic field structure in this area in detail. Blandford
and Cowie (1982\nocite{bla82}) note that individual filaments ought to
be polarized parallel to their longest dimensions, although they may
be too faint to be detected with current instruments.
Good agreement between optical and radio emission has been found in
other middle-aged shell SNRs such as IC443 (\cite{dui75}), the Cygnus
Loop (\cite{str86}) and HB3 (\cite{fes95}). The situation in Vela is
quite different and the reason for this is not apparent. Extinction
may be a culprit, obscuring some of the H$\alpha$\ emission. However, the
coincidence of diffuse optical emission with bright radio filaments,
noted above, argues against massive extinction in this direction.
This initial investigation of the optical/radio/X-ray correlations in
the region indicates that a fuller investigation would be profitable.
A first step would be to obtain optical spectral information to
separate non-radiative and radiative filaments, allowing a detailed
comparison with the model of Hester \& Cox (1986\nocite{hes86}).
\subsection{Vela~X}
\label{sec:most_x}
A view of the central nebula of the Vela SNR is shown as a greyscale image
in figure~\ref{fig:velax.843} and as a ruled surface plot in
figure~\ref{fig:velax_hid}. Each representation emphasizes different
characteristics. The greyscale image gives a good overall view of the
region, while the ruled surface plot helps to show the nature of the
filamentary structure and highlights the small-diameter sources.
The first thing to note in the images is that at the resolution of
these observations the nebula is seen to be composed of many filaments
or wisps, at a variety of orientations and on many angular
scales. Several of the brighter filaments are aligned approximately
north-south. It is important to realize that the flux density detected in
this image is
only a small fraction of the total flux density of the remnant,
because of absent low spatial frequency information. The total flux density
of the extended features in the MOST image of Vela~X is calculated to be
$28\pm2$ Jy, which becomes 130 Jy when correction is made for the negative
bowl artefact surrounding the nebula. This is approximately 12\% of the
estimated single dish flux density of Vela~X (\cite{dwa91}). One benefit of the
MOST acting as a spatial filter is the prominence it gives to smaller scale
structures with size of the order of the X-ray feature seen
by Markwardt and \"Ogelman (1995). In figure~\ref{fig:velax.843}, the
central radio filament overlaid on the X-ray feature by Frail et al.\
(1997) is marked `1'. This filament does not look strikingly different
from other filaments in the region, for example the filament marked
`2'. However, we see in the 8~GHz Parkes image of Milne (1995) that
filament `1' is located at the brightest part of the Vela~X nebula. Frail
et al.\ (1997) have argued that this radio filament may be associated with the
X-ray feature, but it is morphologically indistinguishable from other
filaments in the image. The central radio filament looks so prominent
in the 327~MHz VLA image of Frail et al.\ (1997) partially because
that image is uncorrected for the VLA primary beam attenuation at the
edge of the field. Also, the maximum entropy method of deconvolution used
for the VLA data promotes the flux density at low
spatial frequencies more than the CLEAN algorithm used to deconvolve the
MOST observations.
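As a consistency check on these numbers (simple arithmetic on the
figures quoted above, not an independent measurement): if the
bowl-corrected flux density of 130~Jy is 12\% of the total, the implied
single-dish flux density of Vela~X is
$$
S_{\rm total} \approx \frac{130~{\rm Jy}}{0.12} \approx 1.1\times10^{3}~{\rm Jy}.
$$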
Several further interesting objects in the region should be noted. In
figure~\ref{fig:velax.843} a filament (`3') extends through the pulsar
position (at the head of the arrow) to the south and may connect to
filament `1'. The axis of symmetry of these two filaments is closely
aligned with the direction of motion of the pulsar (\cite{bai89}),
shown on the image with an arrow. Using the proper motion measurement
of Bailes et al., we notice that over its lifetime (assuming an age of
12,000~yr) the pulsar has moved to its present position from a bright
region to the south-east. An excess of high-energy $\gamma$-rays has
been detected from near this putative birthplace (\cite{yos97}).
Greater age estimates (e.g.\ \cite{asc95}; \cite{lyn96}) change this
slightly as they increase the distance moved by the pulsar by up to a
factor of two. Just to the north of the pulsar is a 3\arcmin\
crescent-shaped synchrotron nebula, seen originally by Bietenholz,
Frail, \& Hankins (1991)\nocite{bie91}. They resolved out the
extended structure in the region, whereas here we see that the
crescent is one bright region within more extended emission around the
pulsar. Some very faint structures found around the edge of Vela~X
appear unusual. Object `4' has a shape reminiscent of many shell
supernova remnants, but if it is associated with Vela~X it might be a
blowout from the nebula. Object `5' is a faint streamer apparently
connecting Vela~X to the Vela shell (cf. figure~\ref{fig:survey}). It
might be argued that this is actually a foreground or background
projection of the surface of a shock `bubble', but it is substantially
thinner than any of the shell filaments.
\section{Conclusion}
The radio survey presented in this paper contains the highest
resolution observations yet made of the bulk of the Vela supernova
remnant and resolves the structure of the remnant in more detail than
has been possible for any other composite remnant. The resolution of
this observation of the Vela~X region is a factor of two greater than
that presented by Frail et al.\ (1997\nocite{fra97}), and covers the
entire plerion, unaffected by primary beam attenuation. The Vela
plerion in the radio consists both of diffuse and filamentary
emission. Although the survey does not contain information on the
largest spatial scales, this structure may be inferred from single
dish observations at higher frequencies (\cite{mil95}; \cite{dun96}),
which show that the filamentary emission in the survey covers the same
area as the more diffuse emission from Vela~X seen in total power
images. The region immediately surrounding the Vela pulsar contains
much non-thermal emission in addition to the possible pulsar wind
nebula seen by Bietenholz et al.\ (1991).
The two distinct regions of the Vela SNR, the shell and the plerion,
have in the past been considered separate entities because of their
different spectral indices. This characteristic puts Vela in the
composite class with SNRs such as G$326.3-1.8$ (MSH~$15-56$: Clark,
Green, \& Caswell 1975\nocite{cla75}; \cite{whi96}) and G$0.9+0.1$
(\cite{hel87}). In the images presented in this paper, we now see the
shell and plerion in fine detail and they separately show strong
similarities with what we see in other SNRs, observed at similar
resolutions. The shell filaments are comparable to those seen in the
Cygnus Loop (\cite{gre84}; \cite{str86}), oriented perpendicularly to
the direction to the SNR center, with H$\alpha$\ and X-ray counterparts. By
contrast, the filamentary structure within the plerion is more
nebulous and has a gross alignment approximately north-south. Its
appearance is reminiscent of the filaments in the Crab Nebula
(\cite{bie90}). Thus we can identify both a shell and a plerion
within the Vela SNR, classifying it unambiguously as a composite
remnant.
We propose that it be considered as the archetypal Galactic member of
the composite class. In angular extent, the Vela SNR is respectively
50 and 12 times larger than G$0.9+0.1$ and G$326.3-1.8$, allowing detailed
studies at a variety of wavelengths. It
is now appropriate to use this object as a key laboratory for
studying the properties of SNRs and the interstellar medium.
The investigation of the shell of the Vela SNR in this paper focussed
on its northern side. Like parts of the Cygnus Loop, this region can
be explained by a model of a fast shock heating interstellar material
to X-ray emitting temperatures and interacting with denser clouds to
produce H$\alpha$\ recombination line emission (\cite{hes86};
\cite{gra95a}; \cite{lev96}). The expanding shock also produces
bright non-thermal radio emission not well correlated with these H$\alpha$\
filaments, in contrast with the optical/radio agreement seen in many
other middle-aged SNRs. To continue investigating this region,
observations of other optical emission lines are needed to separate
projected Balmer filaments, produced at the outer shock, from
recombination line emission at molecular cloud interactions.
High-resolution polarization observations of the radio shell filaments
are the obvious next step to investigate the magnetic field associated
with the non-thermal emission.
Further study of the remnant's plerionic component, Vela~X, should
determine how the Vela pulsar transfers its rotational kinetic energy
to the nebula. With an age at least ten times that of the Crab Nebula,
we might expect the Vela plerion to show evolutionary trends in the
relative emission strengths in different wavelength regimes. The
absence of obvious correlations between radio emission and the optical
filaments (Elliott, Goudis, \& Meaburn 1976\nocite{ell76}) already
contrasts Vela~X with the Crab Nebula, where the radio filaments
surround the optical filaments (\cite{bie91a}). The recent discovery
of a possible X-ray `jet', which might be the conduit for energy
transfer to the nebula from the pulsar (\cite{mar95}), further
contrasts Vela~X with other plerions currently known.
\acknowledgements
The Molonglo Observatory Synthesis Telescope is operated by the School
of Physics, with funds from the Australian Research Council and the
Science Foundation for Physics within the University of Sydney. The
authors thank D. A. Frail for useful discussions in the course of this
work; J. E. Reynolds for assistance with the pulsar gating
observations; B. Aschenbach and M. S. Bessell for providing
electronic versions of their images; and M. Bailes, P. M. McCulloch and
R. N. Manchester for providing Vela pulsar timing data.
D. C.-J. B. also acknowledges financial support from an
Australian Postgraduate Award while at the University of Sydney.
\nocite{dwa91,mil68,asc95,bux98}
\section{Conjugacy bounds in the Mapping Class Group}
\label{MCG}
In this section we will apply the hierarchy construction to the
Mapping Class Group $\Mod(S)$. Our main goals will be Theorem
\ref{Quasigeodesic Words}, stating that hierarchies give rise to
quasi-geodesic words in $\Mod(S)$, and Theorem \ref{Conjugacy Bound},
which gives a linear upper bound for the length of the minimal word
conjugating two pseudo-Anosov mapping classes.
We recall first that any generating set for $\Mod(S)$ induces a {\em
word metric} on the group, denoting by $|g|$ the length of the
shortest word in the
generators representing $g\in\Mod(S)$, and by $|g^{-1}h|$ the distance
between $g$ and $h$. The metrics associated to any two
(finite) generating sets are bilipschitz equivalent.
\subsection{Paths in the mapping class group}
\label{paths in MCG}
Our first step is to show how a resolution of a hierarchy into a
sequence of slices gives rise to a word in the mapping class
group. This is a completely standard procedure involving the
connection between groupoids and groups. Theorem
\lref{Efficiency of Hierarchies} will imply that these words are in fact
quasi-geodesics.
Let $\til \MM$ be the graph of complete clean markings of $S$
connected by elementary moves, as in \S\ref{efficiency section}.
The action of $\Mod(S)$ on
$\til \MM$ is not free -- a mapping class can permute the components
of a marking or reverse their orientations -- but it has finite
stabilizers. The quotient $\MM=\til \MM/\Mod(S)$ is a finite graph, and we
let $D$ denote its diameter. Fix a marking $\mu_0\in\til \MM$
and let $\Delta_{j}\subset\Mod(S)$ denote the set of elements $g$ such
that $d_{\til \MM}(\mu_0,g(\mu_0)) \le j$.
Any marking $\mu\in \til \MM$ is at most distance $D$ from some
$\psi(\mu_0)$, with $\psi\in\Mod(S)$ determined up to pre-composition
by elements of $\Delta_{2D}$.
Note that $\Delta_{j}$ is a finite set for any $j$. Now given any
$\psi\in \Mod(S)$,
we can write it as a word in $\Delta_{2D+1}$ as follows: connect
$\mu_0$ to $\psi(\mu_0)$ by a shortest path
$\mu_0,\mu_1,\ldots,\mu_N=\psi(\mu_0)$ in $\til\MM$.
For
each $i<N$ choose $\psi_i$ such that $\psi_i(\mu_0)$ is within $D$ moves
of $\mu_i$, and let $\psi_N = \psi$. Then $\psi_i$ and $\psi_{i+1}$
differ by pre-composition with
an element $\delta_{i+1}$ in $\Delta_{2D+1}$. Thus we can write
$\psi = \delta_1\cdots\delta_{N}$.
This gives an upper bound on the word length of $\psi$, proportional
to $N=d_{\til\MM}(\mu_0,\psi(\mu_0))$. Of course, the word we obtain
here can be translated to a word in any other finite generating set in
the standard way, and its length will only increase by a bounded
multiple (depending on the generating set).
In the other direction,
suppose that $\psi$ can be written
as $\alpha_1\cdots\alpha_{M}$ with $\alpha_i$ in some fixed finite
generating set. Then the sequence of markings
$\{\mu_j=\alpha_1\cdots\alpha_j(\mu_0)\}$ satisfies the property that $\mu_j$
and $\mu_{j+1}$ are separated by some bounded number of elementary
moves, with the bound depending on the generating set. This bounds
$d_{\til\MM}(\mu_0,\psi(\mu_0))$ above linearly in terms of $|\psi|$.
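Summarizing the two directions, there is a constant $c\ge1$ (depending
only on $D$ and the choice of generating set) such that
$$
c^{-1}\, d_{\til\MM}(\mu_0,\psi(\mu_0)) - c \;\le\; |\psi| \;\le\;
c\, d_{\til\MM}(\mu_0,\psi(\mu_0)) + c;
$$
that is, the orbit map $\psi\mapsto\psi(\mu_0)$ is a quasi-isometry
from $\Mod(S)$ with its word metric to $\til\MM$.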
Now let $H$ be any
hierarchy with ${\mathbf I}(H)=\mu_0$ and ${\mathbf T}(H) = \psi(\mu_0)$.
Theorem \lref{Efficiency of Hierarchies} gives upper and lower bounds
on $d_{\til \MM}(\mu_0,\psi(\mu_0))$ in terms of $|H|$, and we
immediately obtain:
\begin{theorem+}{Quasigeodesic Words}
Fix a complete clean marking $\mu_0$ of $S$ and a set of generators for
$\Mod(S)$, and let $|\cdot|$ be the word metric with respect to these
generators. Then there are $c_2,c_3>0$ such that the following holds:
Given any $\psi\in\Mod(S)$ let $H$ be a hierarchy such that ${\mathbf I}(H)=\mu_0$
and ${\mathbf T}(H) = \psi(\mu_0)$.
Then the words in $\Mod(S)$ constructed from resolutions of $H$
via the procedure in this section are quasi-geodesics, and in particular
$$
c_2^{-1} |H| - c_3 \le |\psi| \le c_2|H|.
$$
\end{theorem+}
(We remark that the additive constant $c_3$ can be removed if we
always choose the hierarchy to have length 0 when $\psi={\mathrm
id}$).
Note also that an estimate on the word length $|\psi|$ can be obtained
purely in terms of the quantities $d_Y(\mu_0,\psi(\mu_0))$, using
Theorem \ref{Move distance and projections}.
\subsection{The conjugacy bound}
\label{conjugacy}
We are now ready to prove the main theorem of this section:
\begin{theorem+}{Conjugacy Bound}
Fixing a set of generators for $\Mod(S)$,
there exists a constant $K$
such that if $h_1,h_2\in \Mod(S)$ are conjugate pseudo-Anosovs there
is a
conjugating element $w$ with $|w|\leq K(|h_1|+|h_2|)$.
\end{theorem+}
Let $\delta$ be the hyperbolicity constant for $\CC(S)$.
Say that two geodesics are $c$-fellow travelers
if each is in a $c$-neighborhood of the other (Hausdorff distance $c$)
and their endpoints (if any) can be paired to be within distance $c$
of each other. The following three lemmas are standard for any
hyperbolic metric space.
\begin{lemma}{Bi-infinite fellow traveling} For any $K$, if
$\beta_1,\beta_2$ are two $K$-fellow traveling bi-infinite geodesics,
then they are actually $2\delta$ fellow travelers.
\end{lemma}
\begin{pf}
Let $\bar\beta_1$ be any segment of $\beta_1$. Choose
points $y_1,y_2\in\beta_1\setminus \bar\beta_1$ on either side of
$\bar \beta_1$ whose distance to $\bar\beta_1$ is $2\delta+K+1$ and
points $x_i$ on $\beta_2$ such that $d(x_i,y_i)=K$. The
quadrilateral $[x_1x_2y_2y_1]$ is $2\delta$-thin by hyperbolicity:
that is, each edge is within $2\delta$ of the union of the other three.
By the triangle inequality no point of $\bar\beta_1$ can be within
$2\delta$
of $[x_i,y_i]$ so each point must be within $2\delta$ of
$\beta_2$.
Since $\bar\beta_1$ was arbitrary we are done.
\end{pf}
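The triangle-inequality step can be written out explicitly (a routine
verification, not part of the original argument): any point
$p\in[x_i,y_i]$ satisfies $d(p,y_i)\le d(x_i,y_i)=K$, hence
$$
d(p,\bar\beta_1) \;\ge\; d(y_i,\bar\beta_1) - d(p,y_i)
\;\ge\; (2\delta+K+1) - K \;=\; 2\delta+1 \;>\; 2\delta.
$$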
For the next two lemmas, let $\beta$ denote any bi-infinite geodesic,
and let $\pi=\pi_\beta:\CC(S)\to \beta$ be any map which for each $x$
picks out some closest point on $\beta$. (Note that $\pi$ need not be
uniquely defined.) Let
$[a,b]$ denote any geodesic joining $a$ and $b$, taken to be a segment
of $\beta$ whenever $a$ and $b$ lie on $\beta$.
\begin{lemma}{projection stable}
Let $x\in\CC(S)$ and $z\in \beta$, such that
$d(x,z) \le d(x,\pi_\beta(x)) + k$ for some $k\ge 0$.
Then $d(\pi(x),z) \le k + 4\delta$.
\end{lemma}
\begin{pf}
We may assume $d(\pi(x),z) > 2\delta$. Let $m\in[\pi(x),z]$ be
distance $2\delta+\ep$ from $\pi(x)$, for $\ep>0$.
By $\delta$-hyperbolicity,
$m$ is distance at most $\delta$ from either some $m_1\in[x,\pi(x)]$
or some $m_2 \in [x,z]$. The former case cannot occur, since then
$d(m_1,\pi(x)) \le
\delta$ and hence $d(\pi(x),m) \le 2\delta$, a contradiction.
Thus we have
$d(x,\pi(x)) \le d(x,m_2) + \delta$, and
$d(m,z) \le d(m_2,z) + \delta$.
Adding these together and using the hypothesis, we conclude
that $d(m,z) \le 2\delta + k$ and hence $d(\pi(x),z) \le 4\delta + k +
\ep$. Sending $\ep\to 0$ gives the desired result.
\end{pf}
\begin{lemma}{thin quadrilateral}
Let $x,y$ be any two points in $\CC(S)$, such that
$d(\pi_\beta (x),\pi_\beta( y)) > 8\delta + 2$.
Let $\sigma$ be the subsegment of
$[\pi_\beta(x),\pi_\beta(y)]$ on $\beta$ whose endpoints
are distance $4\delta+1$ from $\pi_\beta(x)$ and $\pi_\beta(y)$ respectively.
Then $\sigma$ is in a $2\delta$ neighborhood of $[x,y]$.
\end{lemma}
\begin{pf}
Form the quadrilateral whose sides are $[x,\pi(x)], [\pi(x),\pi(y)],
[y,\pi(y)]$ and $[x,y]$. By $\delta$-hyperbolicity, any point $z\in
[\pi(x),\pi(y)]$ is at most $2\delta$ from one of the other three
sides. Suppose that this is the side $[x,\pi(x)]$. Then there is some
$m\in[x,\pi(x)]$ such that $d(m,z)\le 2\delta$, and since $d(x,\pi(x))
\le d(x,z)$, we must have $d(m,\pi(x)) \le 2\delta$. It follows that
$d(\pi(x),z)\le 4\delta$. The same argument applies to $[y,\pi(y)]$,
and it follows that if $z\in \sigma$ then it must be distance
$2\delta$ from $[x,y]$.
\end{pf}
\begin{proposition+}{Axis} Let $h$ be a pseudo-Anosov element in
$\Mod(S)$. There
exists a bi-infinite tight geodesic
$\beta$ such that for each $j$, $h^j(\beta)$ and $\beta$ are
$2\delta$ fellow travelers. Moreover there exists a hierarchy
$H$ with main geodesic $\beta$.
\end{proposition+}
\begin{proof}
Pick any $x\in \CC(S)$. Let $\beta_n$ be a tight geodesic
joining $h^{-n}(x)$ and
$h^n(x)$ (extend the endpoints in an arbitrary way to complete
markings ${\mathbf I}(\beta_n)$ and ${\mathbf T}(\beta_n)$),
and let $H_n$ be a hierarchy with main geodesic $\beta_n$.
By Proposition 3.6 of \cite{masur-minsky:complex1}, the sequence
$\{h^k(x),k\in \Z\}$ satisfies
$d(h^k(x),x)\geq c|k|$ for some $c>0$ (independent of $x$ or $h$) so the
sequence is a $\frac{d_0}{c}$-quasi-geodesic, where $d_0=d(h(x),x)$.
By $\delta$-hyperbolicity there is a
constant $c'=c'(c,d_0,\delta)$ so that $\beta_n$ and the
sequence $\{h^j(x)\}_{|j|\le n}$ lie in a $c'$-neighborhood of each other.
This implies that there exist $x_n\in\beta_n$, so
that given any $R$, for $n,m$ sufficiently large, $\beta_n$ and $\beta_m$
are $(2c',R)$-parallel at $x_n,x_m$.
We apply Theorem \lref{Convergence of Hierarchies} to find a geodesic
$\beta$ and a
hierarchy $H$, which is the limit of a subsequence of $H_n$. For
each $j$, $h^j(\beta)$ is a $2c'$-fellow traveler to $\beta$.
Applying Lemma \ref{Bi-infinite fellow traveling}
gives the result.
\end{proof}
We call $\beta$ a {\em quasi-axis} for $h$.
We will need to know the following:
\begin{lemma}{definite translation}
Given $A>0$, there is an integer $N>0$,
independent of $h$, such that for any $x\in\CC(S)$ and $n\ge N$,
$$
d(\pi(x),\pi(h^n(x))) \ge A.
$$
\end{lemma}
\begin{pf}
We first observe that,
if $g$ is any power of $h$, and $\beta$ a quasi-axis for $h$, then
\begin{equation}\label{quasicommute}
d(\pi g(x),g\pi(x)) \le 10\delta
\end{equation}
for any $x\in\CC$. The proof will be given below.
Now using the inequality $d(\pi(x),h^n\pi(x)) \ge
c|n|$, with $c$ independent of $x$ and $h$, from Proposition 3.6 of
\cite{masur-minsky:complex1},
we simply choose $N$ so that $cN > A + 10\delta$.
It remains to prove (\ref{quasicommute}):
Since $\beta$ and $g(\beta)$ are $2\delta$-fellow travelers, we have
$d(g(x),g(\beta)) \le d(g(x),\beta) + 2\delta$, or equivalently, since $g$
is an isometry,
$d(g(x),g\pi( x)) \le d(g(x),\pi g (x)) + 2\delta$. Now
$\pi g \pi (x)$ is on $\beta$, and again by the fellow traveler
property, we have $d(g\pi( x),\pi g \pi (x)) \le 2\delta$. Thus
$d(g(x),\pi g \pi (x)) \le d(g(x),\pi g( x)) + 4\delta$. Applying Lemma
\ref{projection stable} with $k=4\delta$ and $z=\pi g \pi (x)$, we find
that
$d(\pi g (x), \pi g \pi (x)) \le 8 \delta$.
We conclude that $d(\pi g( x), g\pi( x)) \le 10\delta$, as desired.
\end{pf}
\begin{pf*}{Proof of Conjugacy Theorem}
Suppose that $h_2 = w^{-1}h_1w$.
Lemma \ref{definite translation} guarantees that we can choose $N$,
independent of $h_1$ and $h_2$, such that
$d(\pi(x),\pi h_i^n(x)) \ge 40\delta + 24$ for all $x\in\CC$ and
$n\ge N$. Let $g_i=h_i^N$ (for $i=1,2$). In the proof below, let
$C_1,C_2,\ldots$ denote positive constants which are independent of
$h_1$ and $h_2$.
Fix a complete clean marking $\mu_0$ in $S$.
Let
$H_i$ be a hierarchy such that ${\mathbf I}(H_i) = \mu_0$ and ${\mathbf T}(H_i) = g_i(\mu_0)$.
We may also assume
the main geodesic of $H_i$ is a segment
$[v,g_i(v)]$ for a base curve $v$ of $\mu_0$.
By Theorem \lref{Quasigeodesic Words}, we have
$|H_i|\le C_1|g_i| \le NC_1 |h_i|$.
Since $w$ acts by natural isomorphisms on $\CC(S)$ and all the subsurface
complexes, we have a hierarchy $w(H_2)$ with
main geodesic $[w(v),w g_2(v)]=[w(v),g_1 w(v)]$,
and $|w(H_2)|=|H_2|$.
Form a quasi-axis $\beta$ for $g_1$, together with a hierarchy $H$,
and form the segments $$I_0=[\pi(v),\pi g_1(v)]$$
and
$$
I'_m=[\pi g_1^m w(v),\pi g_1^{m+1} w(v)]
$$
on $\beta$.
Each of these has length at least $40\delta + 24$.
Let
$\sigma_0\subset I_0$ and $\sigma'_m\subset I'_m$ be the subsegments
obtained by removing $4\delta+1$-neighborhoods of the endpoints.
The $\{I'_m\}_{m\in\Z}$ tile $\beta$ and therefore the gaps between
$\sigma'_m$ and $\sigma'_{m+1}$ have length $8\delta+2$. It follows
that there exists some $m\in\Z$ such that $\sigma_0$ and $\sigma'_m$
overlap on a segment $\zeta$ of length at least $12\delta + 10$.
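(The constant is elementary to verify: each of $I_0$ and $I'_m$ has
length at least $40\delta+24$, so the trimmed segments satisfy
$$
|\sigma_0|,\ |\sigma'_m| \;\ge\; 40\delta+24 - 2(4\delta+1) \;=\; 32\delta+22.
$$
In the worst case $\sigma_0$ has this minimal length and contains a
single gap of length $8\delta+2$; its intersection with the longer of
the two adjacent $\sigma'_m$ then has length at least
$\frac12\big(32\delta+22-(8\delta+2)\big)=12\delta+10$.)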
Let $w' = g_1^m w$, and note that $w'$ also conjugates $h_1$ and
$h_2$. We will now
bound the word length of $w'$, by bounding
$d_{\til \MM}(\mu_0,w'(\mu_0))$.
By Lemma \ref{thin quadrilateral},
the segment $\zeta$ is in a $2\delta$-neighborhood of both
$[v,g_1(v)]$ and $[w'(v),g_1 w'(v)]$.
Let $x$ be a vertex of $\zeta$ nearest its midpoint, so that
$\zeta$ contains an interval $L$ of radius $6\delta+4$ around $x$,
and let $u,u'$ be vertices on
$[v,g_1(v)]$ and $[w'(v),g_1 w'(v)]$,
respectively, which are nearest to $x$.
Thus the main geodesics of $H$ and $H_1$ are
$(2\delta,6\delta+3)$-parallel at $x$ and
$u$, and similarly for those of $H$ and $w'(H_2)$ at $x$ and $u'$.
This will allow us to apply Lemma \lref{Slice Comparison} below.
Resolve $H_1$ into a sequence of slices. One of them must have bottom
vertex $u$ (see proof of Proposition \ref{Resolution into slices})
-- let $\mu_1$ be a clean marking compatible with this slice.
The resolution gives a bound $d_{\til\MM}(\mu_0,\mu_1) \le C_2|h_1|$,
by Proposition \ref{Resolution into slices} and Lemma \ref{marking
move bound}.
Similarly, resolve $w'(H_2)$, find a slice with bottom vertex
$u'$, let $\mu_2$ be a clean marking compatible with this slice, and
conclude $d_{\til\MM}(\mu_2,w'(\mu_0)) \le C_3|h_2|$.
Let $\mu_3$ be a clean marking associated to a slice of $H$ with
base vertex $x$. Let $W$ be a
hierarchy with ${\mathbf I}(W)=\mu_1$ and ${\mathbf T}(W)=\mu_3$.
Case (2) of Lemma \lref{Slice Comparison} tells us that $W$ is
$(K',M)$-pseudo-parallel to $H_1$ (with $K',M$ depending only on $\delta$).
In particular this means
$|W|\le C_4|H_1|$. Resolving $W$, we obtain
$d_{\til\MM}(\mu_1,\mu_3) \le C_5|h_1|$.
Similarly, join $\mu_3$ to
$\mu_2$ by a hierarchy $W'$.
The same argument as for $W$ gives us
$d_{\til\MM}(\mu_2,\mu_3) \le C_5|h_2|$.
Adding these bounds, we obtain
$d_{\til\MM}(\mu_0,w'(\mu_0)) \le C_6(|h_1|+|h_2|)$, which as in
\S\ref{paths in MCG} gives the desired bound on $|w'|$.
\end{pf*}
\section{Comparison and control of hierarchies}
\label{large link etc}
In this section, we combine the structural results of the previous two
sections with Theorem \lref{Bounded Geodesic Image}, to
prove a number of basic results that
allow us to control the higher-order structure of hierarchies, and
to compare hierarchies whose main geodesics are close.
As applications we prove Theorem \lref{Efficiency of Hierarchies},
which shows that hierarchies give rise to sequences of markings
separated by elementary moves which are close to shortest
possible. These will be used to produce quasi-geodesics in $\Mod(S)$ in
Section \ref{MCG}. We also prove Theorem \lref{Convergence of
Hierarchies} which will allow us to obtain infinite hierarchies as
limits of finite ones. Corollary \lref{Finite Geodesics} states that
between any two points in $\CC(S)$ there are only finitely many tight
geodesics.
Our basic technical result will be Lemma \lref{Sigma Projection},
which simplifies and generalizes the ``short cut and projection''
argument used in the motivating examples in \S\ref{example}. Recall
how we
showed that a large link (long geodesic) in one hierarchy forces a
similar large link in a fellow-traveling hierarchy, by
producing paths forward and backwards
from the given link to its main geodesic, and projecting these back to
the domain of the link. The forward and backward sequences
$\Sigma^\pm$ provide the framework for making this argument work in
general.
Lemmas \lref{Large Link} and \lref{Common Links} will be
straightforward applications of Lemma \ref{Sigma Projection}, and will
generalize what we did in the motivating examples. Lemma \lref{Slice
Comparison} is a more delicate comparison between nearby hierarchies
and requires more work.
\subsection{The forward and backward paths}
The ``forward path'' for a domain $Y$ is built roughly as follows:
Starting on the top geodesic in $\Sigma^+(Y)$ we move forward until
it ends, at which point we have arrived at the position following the
footprint of $Y$ on the next geodesic in $\Sigma^+(Y)$, and we
continue in this way until we get to the bottom geodesic $g_H$. A
``backward path'' is constructed the same way from $\Sigma^-(Y)$.
More precisely: Let
$\sigma$ denote the set of all pairs $(k,v)$
where $k\in\Sigma^\pm(Y)$, and $v$ is a position on $k$ such that
$v|_ Y \ne \emptyset$.
We claim that the partial order $\prec_p$
restricts to a linear order on $\sigma$, making it into a
sequence:
Indeed, each $f_i\in\Sigma^+(Y)$ for $i>0$ contributes
a segment
$\sigma^+_i = \{(f_i,v_i)\prec_p\cdots\prec_p(f_i,{\mathbf T}(f_i))\}$, where
$v_i$ is
the position immediately following $\max\phi_{f_i}(Y)$ (if
$\max\phi_{f_i}(Y)$ is the last vertex then $\sigma^+_i =
\{(f_i,{\mathbf T}(f_i))\}$). Since
$\max\phi_{f_i}(Y) = \max\phi_{f_i}(D(f_{i-1}))$ (Corollary
\ref{Footprints}), we also have $(f_{i-1},{\mathbf T}(f_{i-1})) \prec_p
(f_i,v_i)$.
Thus the union of all $\sigma_i^+$ is linearly ordered.
The same
holds for $\sigma^-_i$, defined as
$\{(b_i,{\mathbf I}(b_i))\prec_p\cdots\prec_p(b_i,u_i)\}$, where $u_i$ is the
last position before $\min\phi_{b_i}(Y)$.
Note that the same geodesic may appear in
$\Sigma^+$ and
$\Sigma^-$, in which case it can contribute both a $\sigma_j^+$
and a
$\sigma_i^-$, one on each side of the footprint.
The top geodesic $h=b_0=f_0$ has
empty $\phi_{h}(Y)$ by Theorem \ref{Structure of Sigma} part (2),
and so all its positions are included in $\sigma$, and
they
follow all the $\sigma_i^-$ and
precede all the $\sigma_i^+$ pairs, for $i>0$.
We denote the sequence of positions of the top geodesic by $\sigma^0$.
We let $\sigma^+$ be the concatenation
$\sigma_1^+ \union \cdots \union \sigma_n^+$ (with the same linear
order), and similarly
$\sigma^- = \sigma_m^- \union \cdots \union \sigma_1^- $.
In case clarification is needed we write $\sigma^+(Y),
\sigma^-(Y,H)$,
etc.
By definition, for each $(k,v)\in\sigma$ the projection
$\pi_Y(v)$ is nonempty.
Let $\pi_Y(\sigma)$ denote the union of these projections, and
similarly for $\pi_Y(\sigma^+)$ and $\pi_Y(\sigma^-)$.
The following property of $\sigma$ forms the basis of all the
proofs in this section:
\begin{lemma+}{Sigma Projection}
There exist constants $M_1,M_2$ depending only on $S$ such that,
for any hierarchy $H$ and domain $Y$ in $S$,
$$\operatorname{diam}_{Y}(\pi_Y(\sigma^+(Y,H))) \le M_1$$
and similarly for $\sigma^-$.
Furthermore, if $Y$ is properly contained in the top domain of
$\Sigma(Y)$, then
$$\operatorname{diam}_{Y}(\pi_Y(\sigma(Y,H))) \le M_2.$$
\end{lemma+}
\begin{pf}
Theorem \lref{Bounded Geodesic Image} bounds the diameter of the
projection to $Y$ of each $\sigma^\pm_i$, and of $\sigma^0$ in the
case where $Y$ is properly contained in the top domain. The
transition from the last position of $\sigma^+_i$ to the first of
$\sigma^+_{i+1}$ just
consists of adding curves to the marking and so projects to a bounded
step in $\CC(Y)$ by Lemma \ref{Lipschitz Projection}.
The same holds for the other
transitions between segments of $\sigma$. Finally,
the number of segments $\sigma^\pm_i$ in each of $\sigma^\pm$ is
bounded by $\xi(S)-\xi(Y)$. These facts together
give the desired diameter bounds.
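In symbols: if $B$ denotes the constant of Theorem \lref{Bounded
Geodesic Image} and $c$ the bound on a single transition coming from
Lemma \ref{Lipschitz Projection}, then a union of $n$ sets, each of
diameter at most $B$, with consecutive ones at distance at most $c$
from each other, has diameter at most $nB+(n-1)c$. Since the number of
segments is at most $\xi(S)-\xi(Y)\le\xi(S)$ (with one extra segment
$\sigma^0$ for the second bound), one may take, for instance,
$$M_1 = \xi(S)(B+c), \qquad M_2 = (2\xi(S)+1)(B+c).$$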
\end{pf}
\subsection{Large links}
The following is an almost immediate consequence of Lemma
\ref{Sigma Projection}:
\begin{lemma+}{Large Link}
If $Y$ is any domain in $S$ and
$$d_Y({\mathbf I}(H),{\mathbf T}(H)) > M_2$$
then $Y$ is the support of a geodesic $h$ in $H$.
Conversely if $h\in H$ is any geodesic with $Y=D(h)$,
$$
\left| |h| - d_Y({\mathbf I}(H),{\mathbf T}(H)) \right | \le 2M_1.
$$
\end{lemma+}
\begin{proof}
The top geodesic $k=b_0=f_0$ of $\Sigma(Y)$ has domain $Z=D(k)$ which
either equals $Y$ or contains it.
If $Y$ does not support a geodesic then
$Z$ properly contains $Y$, and Lemma
\ref{Sigma Projection} implies
$$d_Y({\mathbf I}(H),{\mathbf T}(H)) \le \operatorname{diam}_{Y}(\pi_Y(\sigma)) \le M_2.$$
This proves the first part.
For the second part, if $Y=D(h)$ then by Theorem \ref{Structure of
Sigma} we must have $Z=Y$ and $h=k$. Since $\sigma^+$
contains both ${\mathbf T}(h)$ and ${\mathbf T}(H)$, and $\sigma^-$ contains ${\mathbf I}(h)$
and ${\mathbf I}(H)$, Lemma \lref{Sigma Projection} implies that
$$d_Y({\mathbf I}(h),{\mathbf I}(H)) \le M_1$$ and
$$d_Y({\mathbf T}(h),{\mathbf T}(H)) \le M_1.$$
The second statement of the lemma follows.
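Explicitly: since $h$ is a geodesic in $\CC(Y)$, $|h|$ is the distance
between its endpoints, so the triangle inequality gives
$$\left| |h| - d_Y({\mathbf I}(H),{\mathbf T}(H)) \right| \le
d_Y({\mathbf I}(h),{\mathbf I}(H)) + d_Y({\mathbf T}(h),{\mathbf T}(H)) \le 2M_1.$$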
\end{proof}
\subsection{Fellow traveling}
In a $\delta$-hyperbolic metric space, geodesics whose endpoints are
near each other must stay together for their whole lengths. Our
hierarchies have some similar properties. Before we state them we need
some definitions.
\begin{definition}{separation}
We say that two hierarchies $H$ and $H'$ are {\em $K$-separated at the
ends}
if the markings ${\mathbf I}(H)$ and ${\mathbf I}(H')$ are complete and clean, and are
separated by at most $K$
elementary moves, and similarly for ${\mathbf T}(H)$ and ${\mathbf T}(H')$.
\end{definition}
\begin{definition}{K R parallel}
Given two geodesics $g_1$ and $g_2$ with the same domain,
and $x_i$ a vertex in $g_i$ for $i=1,2$,
we say that
$g_1$ and $g_2$ are {\em $(K,R)$-parallel} at $x_1$ and $x_2$
provided $d(x_1,x_2)\le K$ and for at least one of $i=1$ or $2$,
$x_i$ is the midpoint of a segment $L_i$ of
radius $R$ in $g_i$, and $L_i$ lies in a $K$-neighborhood of $g_{3-i}$.
\end{definition}
\begin{definition}{pseudo-parallel}
We say a hierarchy $H$ is {\em $(K,M)$-pseudo-parallel} to a hierarchy
$H'$ if, for any geodesic $h\in H$ with $|h|\ge M$ there is a geodesic
$h'\in H'$ such
that $D(h) = D(h')$, and $h$ is contained in a $K$-neighborhood of $h'$
in $\CC(D(h))$.
\end{definition}
(Note that the pseudo-parallel relation is not symmetric.)
The following lemma is a generalization of Farb's Bounded Coset
Penetration Property.
\begin{lemma+}{Common Links}
Given $K$ there exist $K', M$ such that,
if two hierarchies $H$ and $H'$ are $K$-separated at the ends
then each of them is $(K',M)$-pseudo-parallel to the other.
\end{lemma+}
\begin{proof}
Let $h$ be any geodesic in $H$, and let $Y=D(h)$.
By Lemma \ref{Large Link}, the hypothesis, and Lemma
\ref{Elementary Move Projections}, we have
$$|h|-2M_1-4K\leq d_Y({\mathbf I}(H'),{\mathbf T}(H'))\leq |h|+2M_1+4K.$$
If we assume $|h|>M_2 + 2M_1 + 4K$, then
the left hand side is greater than $M_2$,
so Lemma \ref{Large Link} implies
that there is a geodesic $h'\in H'$ with $D(h')=D(h)$.
A bound of $2M_1 + 8K$
on $d_Y({\mathbf I}(h),{\mathbf I}(h'))$ and $d_Y({\mathbf T}(h),{\mathbf T}(h'))$ follows
from Lemmas \ref{Sigma Projection} and \ref{Elementary Move Projections}.
It follows by hyperbolicity of $\CC(Y)$ that $h$ and $h'$
remain a bounded distance apart along their whole length.
\end{proof}
In the next lemma we show how to compare slices in a pair of
hierarchies that are $K$-separated at the ends or have parallel
segments. The idea is that two
such slices can be joined by a hierarchy that only has long geodesics
when these are parallel to segments in the original two hierarchies.
This is the closest one can come to saying that two hierarchies are
fellow-travelers.
\begin{lemma+}{Slice Comparison}
Given $K$ there exist $K'$, $M$ so that the following holds:
Let $\tau $ and $\tau'$ be complete slices in two hierarchies $H$ and $H'$
respectively, with bottom vertices $x\in g_H$ and $x'\in g_{H'}$.
Suppose that either
\begin{enumerate}
\item $H$ and $H'$ are $K$-separated at the ends, or
\item $g_H$ and $g_{H'}$ are $(K,3K+4)$-parallel at $x$ and $x'$.
\end{enumerate}
Let $\mu$ and $\mu'$ be clean markings compatible with $\tau$ and
$\tau'$ respectively.
Then any hierarchy $J$ with ${\mathbf I}(J) = \mu$ and ${\mathbf T}(J)=\mu'$ is
$(K',M)$-pseudo-parallel to both $H$ and $H'$.
\end{lemma+}
Before giving the proof of this lemma we need the following two results.
\begin{lemma}{slices cut}
Let $H$ be a complete hierarchy.
Let $\tau$ be a slice in $V(H)$ and $(k,v)$ a pair where $v$ is a
position in $k\in H$. Then exactly one of the following occurs:
\begin{enumerate}
\item $(k,v)\in\tau$,
\item there exists
$(h,u)\in\tau$ such that $(k,v)\prec_p(h,u)$,
\item there exists
$(h,u)\in\tau$ such that $(h,u)\prec_p(k,v)$.
\end{enumerate}
Furthermore, in cases (2) and (3), $h$ may be taken so that $D(k)\subseteq D(h)$.
\end{lemma}
In view of this result, let us write $(k,v) \prec_s \tau$ when case (2)
holds, and $\tau\prec_s(k,v)$ when case (3) holds.
\begin{proof}
Since $\tau\in V(H)$, it is complete and its bottom geodesic is $g_H$.
We will prove the statement of the lemma inductively for any
complete slice whose bottom geodesic $g$ satisfies $D(k)\subseteq
D(g)$. Let $(g,u)$ be the bottom pair of $\tau$. If $g=k$ then the
statement is immediate -- either $v<u$, $u<v$, or $u=v$.
Now suppose $D(k)$ is properly contained in $D(g)$.
By Corollary \ref{easy containment}, $\phi_g(D(k))$ is nonempty.
If $\max\phi_g(D(k)) < u$ or $u<\min\phi_g(D(k))$ then $(k,v)\prec_p
(g,u)$ or $(g,u)\prec_p (k,v)$, respectively, and we are done. If not
then $u\in\phi_g(D(k))$ and there is some component domain $Y$ of $(D(g),u)$
containing $D(k)$. Since $\tau$ is complete there is a pair
$(h,w)\in\tau$ with $D(h) = Y$. The slice $\tau'$ consisting of all
$(h',w')\in\tau$ such that $D(h')\subseteq D(h)$ is itself complete,
and has bottom pair $(h,w)$. Applying induction to $\tau'$, we have
the desired statement.
The fact that the three possibilities are mutually exclusive follows
directly from the fact that any two elements of a slice are not
$\prec_p$-comparable (see proof of Lemma \ref{sprec partial order}).
\end{proof}
\begin{lemma}{sigma and slice}
Fix a complete hierarchy $H$ and a slice $\tau\in V(H)$.
Let $Y$ be any domain in $S$. Then the path $\sigma(Y)$
contains a pair $(k,v)$ which is in $\tau$.
\end{lemma}
\begin{proof}
Let $(k,v)$ and $(k',v')$ be successive pairs in
$\sigma(Y)$. We will show that it is not possible for
$(k,v) \prec_s \tau$ and $\tau\prec_s (k',v')$ to hold simultaneously.
Since the first pair in $\sigma$ is always
$(g_H,{\mathbf I}(g_H))$, for which $(g_H,{\mathbf I}(g_H))\prec_s \tau$ holds,
and the last is $(g_H,{\mathbf T}(g_H))$ for which $\tau\prec_s(g_H,{\mathbf T}(g_H))$ holds,
the statement of the
lemma follows from Lemma \ref{slices cut}.
By definition of $\sigma$, there are three possibilities for the
relation between $(k,v)$ and $(k',v')$:
\begin{enumerate}
\item $k=k'$. Here $v'$ is the position following $v$.
\item $k\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} k'$. Here $v={\mathbf T}(k)$, and $v'$ is the position following
$\max\phi_{k'} (D(k))$.
\item $k\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} k'$. Here $v'={\mathbf I}(k')$, and $v$ is the position preceding
$\min\phi_{k} (D(k'))$.
\end{enumerate}
We will first prove our claim in cases (1) and (2). Suppose that
$(k,v)\prec_s\tau$, and
let $(h,u)\in\tau$ be a pair such that, as in Lemma \ref{slices
cut}, $(k,v)\prec_p(h,u)$ and $D(k)\subseteq D(h)$.
If $k=h$ then $v<u$ and in particular we must be in case (1) since $v$
is not the last position of $k$. Thus $k=k'=h$ and $v'$ is the
successor of $v$, so $v'\le u$. Thus we are done in this case.
If $D(k)$ is properly contained in $D(h)$ then we note that $k\mathrel{\scriptstyle\searrow}
h$. In case (1) we still
have $(k',v') = (k,v') \prec_p (h,u)$, so we are done. In case (2),
we either have $k'=h$ or $k'\mathrel{\scriptstyle\searrow} h$. In the first case, $v'$ is the
successor of $\max\phi_h(D(k))$ and hence $v'\le u$ and we are done. In
the second, we have $\max\phi_h(D(k')) = \max\phi_h(D(k))$ by
Corollary \ref{Footprints}, and so again $(k',v')\prec_p (h,u)$.
To prove our claim in case (3), we just note that it is equivalent to
case (2), with the directions and roles of $k$ and $k'$ reversed.
\end{proof}
\begin{proof}[Proof of Slice Comparison Lemma]
Let $R= 3K+4$.
Let $m_0$ be any geodesic of $J$ with $|m_0|> M$, where the value of
$M$ will be determined below, and let $Y=D(m_0)$.
We first claim that, up to possibly reversing all the directions in
$H'$ (and interchanging ${\mathbf I}(H')$ with ${\mathbf T}(H')$),
\begin{eqnarray}
\label{TT bound}
d_Y({\mathbf T}(H),{\mathbf T}(H')) &\le& M_3,\\
\label{II bound}
d_Y({\mathbf I}(H),{\mathbf I}(H')) &\le& M_3
\end{eqnarray}
for appropriate $M_3$.
(If $H$ is infinite then this holds with ${\mathbf I}(H)$ or ${\mathbf T}(H)$ replaced
by any point of $g_H$ on $\sigma^-(Y)$ or $\sigma^+(Y)$, respectively;
and similarly for $H'$).
In case (1) this is true by the hypothesis and Lemma
\ref{Elementary Move Projections}, provided $M_3\ge 4K$.
In case (2), up to interchanging $H$ and $H'$ we
may assume there is an interval $L$ of radius $R$ centered on $x$
which lies in a $K$-neighborhood of $g_{H'}$.
If $y,z$ are the endpoints of $L$ and $y<x<z$, let $y',z'$ be points of
$g_{H'}$ closest to $y$ and $z$ respectively. Up to reversing all the
directions in $H'$ we may assume $y'<z'$, and then by the triangle
inequality we have $y'<x'<z'$.
Since $Y$ is a domain of $J$, whose main geodesic has
length at most $K$, and $x$ is a curve in ${\mathbf I}(J)$,
we have $d_S(\boundary Y,x) \le K+2$ (thinking of $\boundary Y$ as a
simplex in $\CC(S)$).
We claim that any curve $w$
on a geodesic from $z$ to $z'$ intersects $\boundary Y$.
For if not, $d_S(w,\boundary Y) \le 1$, and so
$d(x,z)\leq d(x,\boundary Y)+d(\boundary Y,w)+d(w,z)\leq 2K+3 < R$, a
contradiction. It follows that we can project $[z,z']$ into $\CC(Y)$ and
conclude $d_Y(z,z') \le 2K$ by Lemma \ref{Lipschitz Projection}.
If $\phi_{g_H}(Y)$ is nonempty,
the triangle inequality similarly gives
$d_S(x,\phi_{g_H}(Y)) \le K+3 < R = d(x,z) $ and hence
$z$ lies to the right of $\phi_{g_H}(Y)$. Similarly
$d_S(x',\phi_{g_{H'}}(Y)) \le K+3 < R-2K \le d(x',z')$ and hence
$z'$ lies to the right of $\phi_{g_{H'}}(Y)$, if that is nonempty.
It follows that $z$ can be connected to ${\mathbf T}(H)$
by a path lying in
$\sigma^+(Y,H)$, and similarly for $z'$ and ${\mathbf T}(H')$.
Lemma \lref{Sigma Projection} then gives a
bound of $M_1$ on
$d_Y(z,{\mathbf T}(H))$ and $d_Y(z',{\mathbf T}(H'))$.
Putting
these together with the bound on $d_Y(z,z')$
gives (\ref{TT bound}), with $M_3=2M_1 + 2K$.
The same argument with $y$ and $y'$ gives (\ref{II bound}).
Next we claim that, for $M_4= 2M_2 + 4M_1 + 4$,
\begin{equation}
\label{mu between}
d_Y(\mu,{\mathbf I}(H)) + d_Y(\mu,{\mathbf T}(H)) \le d_Y({\mathbf I}(H),{\mathbf T}(H)) +M_4
\end{equation}
and similarly for $\mu'$ and $H'$.
Begin by observing that,
by Lemma \ref{sigma and slice}, $\sigma(Y,H)$ contains a pair
$(k,v)\in\tau$. If $D(k)$ is an annulus then $D(k)=Y$ and $v$ is a
transversal of $\mu$ -- otherwise it is in $\operatorname{base}(\mu)$.
Lemma \ref{Lipschitz Projection} then implies that $\pi_Y(\mu)$ is within
distance 2 of $\pi_Y(\sigma(Y,H))$.
By Lemma \lref{Sigma Projection},
if $Y$ does not support a geodesic in $H$ then $\operatorname{diam}_Y(\sigma(Y,H)) \le
M_2$. Hence the left side of
(\ref{mu between}) is at most $2M_2+4$ and the inequality
follows by choice of $M_4$.
If $Y$ supports a geodesic $h\in H$ then Lemma \ref{Sigma Projection}
implies that $\pi_Y(\sigma(Y,H))$ is Hausdorff distance $M_1$ from
$h$ (i.e. each is in an $M_1$-neighborhood of the other). We therefore
have, for $v$ as above,
$d_Y(v,v_0) + d_Y(v,v_{|h|}) \le |h| + 2M_1$,
where $v_0$ and $v_{|h|}$ are the first and last vertices of $h$,
and since $\pi_Y({\mathbf I}(H))$ and $\pi_Y({\mathbf T}(H))$ are within distance $M_1$ of the
respective endpoints of $h$, (\ref{mu between}) again follows
with a bound of $6M_1 + 4$. This is at most $M_4$ since $M_1\le M_2$.
Now by the triangle inequality $d_Y(\mu,\mu') $ is bounded by
both
$$d_Y(\mu,{\mathbf T}(H)) + d_Y({\mathbf T}(H),{\mathbf T}(H')) + d_Y(\mu',{\mathbf T}(H'))$$
and
$$d_Y(\mu,{\mathbf I}(H)) + d_Y({\mathbf I}(H),{\mathbf I}(H')) + d_Y(\mu',{\mathbf I}(H')).$$
Adding these two estimates together and applying (\ref{TT
bound},\ref{II bound}) and (\ref{mu between}), we have
$$
2d_Y(\mu,\mu')\leq
d_Y({\mathbf I}(H),{\mathbf T}(H))+d_Y({\mathbf I}(H'),{\mathbf T}(H')) + 2M_3 + 2M_4.
$$
Now again applying (\ref{TT bound},\ref{II bound}) and the triangle
inequality, we find that
$d_Y({\mathbf I}(H),{\mathbf T}(H))$ and $d_Y({\mathbf I}(H'),{\mathbf T}(H'))$ differ by at most $2M_3$.
This gives
$$d_Y(\mu,\mu')\leq d_Y({\mathbf I}(H),{\mathbf T}(H)) + 4M_3 + 2M_4,$$
and the same inequality for $H'$.
Since $|m_0|> M$, Lemma \lref{Large Link} gives
$d_Y(\mu,\mu')> M - 2M_1$, so
$d_Y({\mathbf I}(H),{\mathbf T}(H))>M-2M_1-4M_3 - 2M_4$.
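(The choice of $M$ in the next sentence is designed to make this lower
bound exactly the threshold $M_2$ of Lemma \ref{Large Link}: indeed
$M - 2M_1 - 4M_3 - 2M_4 = M_2$ when $M = 2M_1 + 4M_3 + 2M_4 + M_2$.)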
If we set $M=2M_1 + 4M_3+2M_4 + M_2$,
Lemma \ref{Large Link} again guarantees that
$Y$ is the domain of a geodesic
$m\in H$. Furthermore, ${\mathbf T}(m_0)$ is within $M_1$ of
$\pi_Y({\mathbf T}(J))=\pi_Y(\mu') $, which is within 4 of
$\pi_Y(\sigma(Y,H))$ by Lemma \ref{sigma and slice}, as above.
Applying Lemma \ref{Sigma Projection} again we find that this is
within $M_1$ of $m$. A similar estimate holds for ${\mathbf I}(m_0)$, and by
$\delta$-hyperbolicity all of $m_0$ lies within
$K'=2M_1 + 4 + 2\delta$ of $m$. This establishes
that $J$ is $(K',M)$-pseudo-parallel to $H$.
The corresponding statement holds for $H'$, and the lemma is proved.
\end{proof}
\subsection{Efficiency}
\label{efficiency section}
In Section \ref{resolution} we saw that a hierarchy $H$ can be
resolved into a sequence of markings of length bounded by its
size $|H|$. Here we will obtain an estimate in the opposite direction.
Let $\til \MM$ be the graph whose vertices are complete, clean
markings in $S$, and whose edges represent elementary moves. Giving
edges length 1, we have for two complete clean markings $\mu,\nu$
their {\em elementary move distance} $d_{\til \MM}(\mu,\nu)$ in this graph.
Proposition \ref{Resolution into slices}
and Lemma \ref{marking move bound} imply that
this graph is connected, but this fact is already well known:
it follows for example from a similar connectedness result
for the graph of pants decompositions in Hatcher-Thurston
\cite{hatcher-thurston} (and see proof in Hatcher \cite{hatcher:pants}).
\begin{theorem+}{Efficiency of Hierarchies}
There are constants $c_0,c_1>0$ depending only on $S$ so that,
if $\mu$ and $\nu$ are complete clean markings and
$H$ is a hierarchy with ${\mathbf I}(H)=\mu$, ${\mathbf T}(H)=\nu$, then
$$
c_0^{-1}|H| -c_1 \le d_{\til\MM}(\mu,\nu) \le c_0|H|.
$$
\end{theorem+}
See Theorem \lref{Quasigeodesic Words}, in Section \ref{MCG},
for the implication of this to words in the
Mapping Class Group.
\begin{proof}
The second inequality is an immediate consequence of
Proposition \ref{Resolution into slices} and Lemma
\ref{marking move bound}.
For the first inequality,
the idea of the argument is as follows. Consider a
shortest path $\{\mu=\mu_0,\ldots,\mu_N=\nu\}$
from $\mu$ to $\nu$ in $\til\MM$.
Each long geodesic $h\in H$
imposes a lower bound of the form $N \ge c|h|$, because the projection
$\pi_{D(h)}(\mu_j)$
moves at bounded speed in $\CC(D(h))$ (Lemma \ref{Elementary Move Projections})
as $j$ goes from
$0$ to $N$, and by Lemma \lref{Large Link} it must travel a distance
proportional to $|h|$. To obtain our desired statement
we must show that the projections of $\mu_j$ cannot
move far in many different domains at once, and hence the lower bounds
for the different geodesics will add. This will be done using
Lemma \ref{Order and projections} below, which relates projections to
time-order.
Let $M_5= 2M_1 +5$ and $M_6 = 4(M_1+M_5+4)$, and let
$\GG$ be the set of geodesics $h\in H$
satisfying $|h|\geq M_6$. Let
$|\GG| = \sum_{h\in\GG} |h|$. Then we have
\begin{equation}
\label{G is enough}
|\GG| \ge d_0|H| - d_1
\end{equation}
for $d_0,d_1$ depending only on $S$ (and the choice of $M_6$). The
proof is a simple counting
argument, using the fact that the number of component domains of any
geodesic is bounded by a constant times its length.
Thus the main point will be to bound $N$ below in terms of $|\GG|$.
For any $h\in\GG$ let us isolate an interval in $[0,N]$ in which the
projections to $Y=D(h)$ of the
$\mu_j$ are ``making the transition'' between being close to
$\pi_Y(\mu_0)$, to being close to $\pi_Y(\mu_N)$.
Let $L=d_Y(\mu_0,\mu_N)$, noting that $L\ge |h|-2M_1 \ge M_6-2M_1$ by
Lemma \ref{Large Link}. The projections of $\mu_j$ to $\CC(Y)$ are a
sequence that moves by bounded jumps
$d_Y(\mu_j,\mu_{j+1}) \le 4$, by
Lemma \ref{Elementary Move Projections}.
Therefore there must be some largest value of $j\in[0,N]$ for which
$d_Y(\mu_0,\mu_j)\in[M_5,M_5+4]$. Let $a_Y$ be this value.
Since $L>2(M_5+4)$, we know that $d_Y(\mu_{a_Y},\mu_N)> M_5+4$.
Therefore there
is a smallest $j\in[a_Y,N]$ for which
$d_Y(\mu_j,\mu_N)\in[M_5,M_5+4]$. Let this be $b_Y$.
Let $J_Y$ be the interval $[a_Y,b_Y]$.
These intervals have the following properties:
\begin{enumerate}
\item For any $j\in J_Y$ we have $d_Y(\mu_0,\mu_j)\ge M_5$ and
$d_Y(\mu_j,\mu_N) \ge M_5$.
\item $|J_{D(h)}|\geq |h|/8$ for any $h\in\GG$.
\item If $h,k\in \GG$ are such that $Y=D(h)$, $Z=D(k)$ have
nonempty intersection and neither is contained in the other,
then $J_Y$ and $J_Z$ are disjoint intervals.
\end{enumerate}
(1) follows immediately from the definition and Lemma
\ref{Elementary Move Projections}.
To prove (2), by the triangle inequality we have $d_Y(\mu_0,\mu_{b_Y}) \ge
L-M_5-4$. Again by Lemma \ref{Elementary Move Projections},
$d_Y(\mu_0,\mu_j)$ changes by at most $4$ with each increment
of $j$, so since $d_Y(\mu_0,\mu_{a_Y}) \le M_5+4$
we conclude that $b_Y-a_Y \ge (L-2M_5-8)/4$. This implies (2), by the
choice of constants and the fact that $|h|\ge M_6$.
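Explicitly, combining $L\ge |h|-2M_1$ with $M_6 = 4(M_1+M_5+4)$:
$$b_Y-a_Y \;\ge\; \frac{L-2M_5-8}{4} \;\ge\; \frac{|h|-2M_1-2M_5-8}{4}
\;=\; \frac{|h|}{8} + \frac{|h|-M_6}{8} \;\ge\; \frac{|h|}{8}.$$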
To prove (3), we will first need the following lemma:
\begin{lemma+}{Order and projections}
Let $H$ be a hierarchy and $h,k\in H$ with $D(h)=Y$ and
$D(k)=Z$. Suppose that $Y\intersect Z \ne \emptyset$, and neither
domain is contained in the other. Then, if $h\prec_t k$ then
\begin{equation}\label{order proj 1}
d_Y(\boundary Z, {\mathbf T}(H)) \le M_1+2
\end{equation}
and
\begin{equation}\label{order proj 2}
d_Z({\mathbf I}(H),\boundary Y) \le M_1+2.
\end{equation}
\end{lemma+}
\begin{proof}[Proof of Lemma \ref{Order and projections}]
Let $m$ be
the geodesic used to compare $h$ and $k$.
It lies in $\Sigma^+(Y)$. Let $v\in
\phi_m(Z)$. Since $v$ lies to the right of $\phi_m (Y)$ the
pair $(m,v)$ is in the sequence $\sigma^+(Y)$.
Lemma \lref{Sigma Projection} now implies
$$
d_Y(v,{\mathbf T}(H)) \le M_1.$$
Since $\boundary Z$ intersects $Y$ essentially by the assumption on
$Y$ and $Z$, and since $\boundary Z$ is disjoint from $v$, by applying
Lemma \ref{Lipschitz Projection} we find that
$$
d_Y(\boundary Z,{\mathbf T}(H)) \le M_1+2$$
as desired. The second inequality is proved in the same way.
\end{proof}
Returning to the proof of Theorem \ref{Efficiency of Hierarchies},
suppose that property (3) is false, so that $Y$ and $Z$ intersect
and are non-nested, but $J_Y$ and $J_Z$ overlap.
Let $j\in J_Y\intersect J_Z$.
Let $H_j$ be a hierarchy such that ${\mathbf I}(H_j) = \mu_0$ and ${\mathbf T}(H_j)=\mu_j$.
By property (1) $d_Y(\mu_0,\mu_j)$ and $d_Z(\mu_0,\mu_j)$ are both at
least $M_5$, so Lemma
\ref{Large Link} implies that $Y$ and $Z$ support geodesics $h_j$
and $k_j$ in $H_j$.
The condition on $Y$ and $Z$ implies that $h_j$ and
$k_j$ are time-ordered in $H_j$ by Lemma \ref{Time Order}, so suppose
without loss of generality that $h_j\prec_t k_j$.
By Lemma \ref{Order and projections}, we have $d_Y(\boundary Z,\mu_j)
\le M_1+2$.
However, $h$ and $k$ must also be time-ordered in $H$, and
applying Lemma \ref{Order and projections} to the hierarchy
$H$, we have either
$$d_Y(\boundary Z,\mu_N) \le M_1+2$$
if $h\prec_t k$ by (\ref{order proj 1}), or
$$d_Y(\mu_0,\boundary Z) \le M_1+2$$
if $k\prec_t h$ by (\ref{order proj 2}). Thus, either $d_Y(\mu_j,\mu_N) \le
2M_1 + 4$ or
$d_Y(\mu_0,\mu_j) \le 2M_1+4$. Either one of these contradicts the
assumption that $j\in J_Y$, since $2M_1+4 < M_5$.
This proves (3).
Thus, the intervals $\{J_{D(h)}: h\in\GG\}$ cover a
subset of $[0,N]$ with multiplicity at most $s$, where
$s$ is the maximal cardinality of a set $D_1,\ldots,D_s$ of domains in
$S$, $\xi(D_i)\ne 3$, for which
any two are
either disjoint or nested. This number depends only on $S$ (in fact it
is easy to show that $s=2\xi(S)-6$).
It follows that
$$
sN \ge \sum_{h\in\GG} |J_{D(h)}|.
$$
Combining this with (2) which gives $\sum_{h\in\GG}|J_{D(h)}| \ge
|\GG|/8$, and then using (\ref{G is enough}), we obtain
$$
N \ge c_0^{-1}|H| - c_1
$$
with suitable constants $c_0,c_1$.
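Tracking the constants through the argument: $sN \ge
\sum_{h\in\GG}|J_{D(h)}| \ge |\GG|/8 \ge (d_0|H|-d_1)/8$, so the first
inequality of the theorem holds with, for instance, $c_0 = 8s/d_0$ and
$c_1 = d_1/(8s)$, after enlarging $c_0$ if necessary so that the upper
bound holds as well.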
\end{proof}
The following corollary of this theorem can be stated without
any mention of hierarchies. It relates elementary-move distance to the
sum of all ``sufficiently large'' projections to subsurfaces in $S$
(including $S$ itself).
\begin{theorem+}{Move distance and projections}
There is a constant $M_6(S)$ such that, given $M\ge M_6$,
there are $e_0,e_1$ for which, if $\mu$ and $\nu$ are any two complete
clean markings then
$$
e_0^{-1} d_{\til \MM}(\mu,\nu) -e_1 \le
\sum_{\substack{
Y\subseteq S\\
d_Y(\mu,\nu)\ge M}} d_Y(\mu,\nu) \le
e_0 d_{\til \MM}(\mu,\nu) + e_1
$$
\end{theorem+}
The proof is simply a rephrasing of the result of Theorem
\ref{Efficiency of Hierarchies}, together with the inequality
(\ref{G is enough}) to restrict consideration to ``long'' geodesics,
and Lemma \ref{Large Link} to relate this to projection diameters.
\subsection{Finiteness and limits of hierarchies}
\label{limits of hierarchies}
In this section we apply the comparison lemmas to the question of when
a sequence of hierarchies converges to a limiting hierarchy. Let us
first discuss what we mean by convergence.
Fix a point $x_0\in \CC(S)$ and let $B_R=B_R(x_0)$ denote the
$R$-neighborhood of $x_0$ in $\CC(S)$. For a tight geodesic $h$ with
$D(h)\subseteq S$, let $h\intersect B_R$ denote the
following:
\begin{enumerate}
\item If $D(h)$ is a component domain of $(S,v)$ for some $v$ then,
if $v\subset B_R$ we let $h\intersect B_R = h$, and otherwise
$h\intersect B_R =\emptyset$.
\item If $D(h)=S$ then $h\intersect B_R$ is the set of all positions
of $h$ that lie in $B_R$.
\end{enumerate}
(In (2) this includes ${\mathbf I}(h)$ and/or ${\mathbf T}(h)$ if their bases are in
$B_R$.)
For a hierarchy $H$, let $H\intersect B_R = \{h\intersect B_R: h\in H\}$.
We say that a sequence $\{H_n\}$ of hierarchies {\em converges to a
hierarchy $H$} if for all $R>0$, $H_n\intersect B_R = H\intersect
B_R$ for large enough $n$. Clearly if $H$ is a finite hierarchy this
just means that eventually $H_n=H$.
It is also easy to see the following: Suppose that for all $R>0$, the
sets $H_n\intersect B_R$ are eventually constant. Then $\{H_n\}$ converges
to a unique hierarchy $H$. We can now prove the following result:
\begin{theorem+}{Convergence of Hierarchies}
Let $\{H_n\}_{n=1}^\infty$ be a sequence of hierarchies such that either
\begin{enumerate}
\item For a fixed $K$ and any $n,m$, $H_n$ and $H_m$ are $K$-separated
at the ends, or
\item There exists $K>0$ and a vertex $x_n$ on each $g_{H_n}$
such that, for each
$R'>0$, there exists $n=n_{R'}$ so that for all $m\ge n$,
$g_{H_n}$ and $g_{H_m}$ are $(K,R')$-parallel at $x_n$ and $x_m$.
\end{enumerate}
Then $\{H_n\}$ has a convergent subsequence.
\end{theorem+}
\begin{proof}
Fix an arbitrary $x_0$ and let $\UU_R$ denote the set of all
vertices in $H_n\intersect B_R(x_0)$ for all $n>0$. We claim that
$\UU_R$ is finite for each $R>0$. The theorem follows immediately
since this implies that $H_n\intersect B_R$ varies in a finite set
of possibilities for each $R$, and so the usual diagonalization step
extracts a subsequence $H_{n_k}$ for which $H_{n_k}\intersect B_R$ is
eventually constant.
To show $\UU_R$ is finite, consider first case (1).
Fix a slice $\tau_1$ in $H_1$, and note that
any vertex $v$ in $H_n$ appears in some complete slice $\tau$ of $H_n$.
Consider a hierarchy $J(v)$
joining clean markings compatible with $\tau$ and $\tau_1$ respectively.
Lemma \lref{Slice Comparison}, case (1), implies that
$J(v)$ is $(K',M)$-pseudo-parallel to
$H_1$, so each geodesic in $J(v)$ has
length bounded either by $M$ or by a constant plus the length of a
geodesic in $H_1$. Since $H_1$ is finite this gives some uniform
bound, so every marking compatible with a slice of $H_n$ can be
transformed to a marking compatible with a slice of $H_1$ in a bounded
number of elementary
moves. This means the set of all base curves that occur in such
markings is finite, and this bounds the set of all vertices occurring
in non-annular geodesics in all $H_n$. The annular geodesics are
determined by their initial and terminal markings, up to a finite
number of choices (by the definition of tightness), and hence those
vertices are finite in number as well. (Note we have actually proved
finiteness for all the vertices in all $H_n$, without mention of $R$).
In case (2), the condition implies that there is some bound
$d(x_0,x_n)\le R_0$ for all $n>0$.
Given $R$, choose $R'=R_0 + R+3K + 8\delta + 5$ and
let $n=n_{R'}$. For $m\ge n$ let $\ell_m$ be the segment of radius $R'$
around $x_m$.
The $(K,R')$-parallel condition means that
$d(x_n,x_m)\le K$ and
either $\ell_m$ is in a $K$-neighborhood of $g_{H_n}$ or
$\ell_n$ is in a $K$-neighborhood of $g_{H_m}$.
In either case, the
triangle inequality and $\delta$-hyperbolicity imply that, if $x\in
g_{H_m}$ and $d(x,x_m)\le R_0+R+1$ then $x$ is the center of a segment
in $g_{H_m}$ of radius $6\delta+4$ contained in a
$2\delta$-neighborhood of $\ell_n$ (we should assume that $K>\delta$,
which entails no loss of generality).
Now any vertex of $H_m\intersect B_R$ occurs in some complete slice
$\tau$ of $H_m$ with bottom vertex $x$ in $B_{R+1}$ (the slice will be
complete because ${\mathbf I}(H_m)$ and ${\mathbf T}(H_m)$ are sufficiently far away
from $B_R$
that they have non-trivial restriction to any domain occurring in
$H_m\intersect B_R$ -- so one can apply Lemma \ref{Subordinate
Intersection 3}). Thus
$d(x,x_m) \le R_0 + R + 1$ by the triangle inequality, and the
previous paragraph implies that, for suitable
$x'\in \ell_n$, $g_{H_m}$ and $g_{H_n}$ are
$(2\delta,6\delta+4)$-parallel at $x$ and $x'$. By case (2) of
Lemma \ref{Slice Comparison} we can again conclude that a
marking compatible with
$\tau$ can be connected to some marking compatible with a slice in
$H_n\intersect B_{R'}$ by a
sequence of elementary moves whose length is bounded only in terms of
$H_n\intersect B_{R'}$. The argument then proceeds as in case (1).
\end{proof}
We have the following immediate consequence of this argument:
\begin{corollary+}{Finite Geodesics}
Given a pair of points $x,y\in \CC_0(S)$ there are only a finite number of
tight geodesics joining them.
\end{corollary+}
\begin{proof}
Fix markings ${\mathbf I}$ and ${\mathbf T}$ containing
$x$ and $y$, respectively. Each tight geodesic connecting $x$ to $y$
can be extended to a hierarchy connecting ${\mathbf I}$ and ${\mathbf T}$, and the
finiteness argument in case (1) of Theorem \ref{Convergence of
Hierarchies} implies this set of hierarchies is finite.
\end{proof}
\section{Complexes and subcomplexes of curves}
\label{defs}
We review here the definitions of the various complexes of curves,
paying particular attention to the way in which subsurfaces of a given
surface give rise to sub-complexes. We will prove Lemma \ref{arcs
to curves} relating arc complexes to curve complexes, define
projections from a complex to its sub-complexes
and prove Lemma \lref{Lipschitz Projection}.
We will also treat the
case of annuli, which are exceptional in various respects, and
conclude with a discussion of markings and elementary moves.
\subsection{Basic definitions and notation}
\label{basic defs}
Let $S=S_{\gamma,p}$ be an orientable surface of finite type, with genus
$\gamma(S)$ and $p(S)$ punctures. It will be convenient to measure
the complexity of $S$ by $\xi(S) = 3 \gamma(S) + p(S)$. Note that
$\xi$ is not equivalent to Euler characteristic, but has the property
that if $T\subset S$ is an incompressible proper subsurface then
$\xi(T)$ is strictly smaller than $\xi(S)$. We will only consider
surfaces with $\xi > 1$, thus excluding the sphere and disk. We will
also exclude the torus (which does not arise as a subsurface
of a hyperbolic surface), so that from now on $\xi(S) = 3$ implies $S$
is the thrice-punctured sphere.
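As a quick illustration of this definition (the following values are direct computations and are not part of the original text):
```latex
% Sample values of the complexity \xi(S) = 3\gamma(S) + p(S):
\xi(S_{0,3}) = 3, \qquad
\xi(S_{1,1}) = \xi(S_{0,4}) = 4, \qquad
\xi(S_{2,0}) = 6.
```
Cutting $S_{2,0}$ along a nonseparating curve, for instance, leaves the subsurface $S_{1,2}$ with $\xi = 5 < 6$, illustrating the strict decrease for incompressible proper subsurfaces.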
The {\em complex of curves} $\CC(S)$, introduced by Harvey
in \cite{harvey:boundary},
is a finite-dimensional and
usually locally infinite simplicial complex defined as follows:
A {\em curve} in $S$ is by definition a nontrivial homotopy class of simple
closed curves, not homotopic into a puncture.
If $\xi(S)>3$ then the set of curves is non-empty, and we let these be
the vertices of $\CC(S)$.
If $\xi(S)>4$ then the $k$-simplices are the sets $\{v_0,\ldots,v_k\}$
of distinct curves that have pairwise disjoint representatives.
One easily checks that $\dim(\CC(S)) = \xi(S) - 4$.
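One way to carry out this check (a standard count, sketched here): a maximal simplex of $\CC(S)$ corresponds to a pants decomposition of $S$, which consists of $3\gamma(S) - 3 + p(S)$ disjoint curves, so
```latex
\dim \CC(S) \;=\; \bigl(3\gamma(S) - 3 + p(S)\bigr) - 1 \;=\; \xi(S) - 4.
```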
When $\xi(S)=4$, $S$ is either a once-punctured torus $S_{1,1}$ or four times
punctured sphere $S_{0,4}$, and the complex as defined above has
dimension 0. In
this case we make an alternate definition: an edge in $\CC(S)$ is a
pair $\{v,w\}$ where $v$ and $w$ have representatives that intersect
once (for $S_{1,1}$) or
twice (for $S_{0,4}$). Thus $\CC(S)$ is a graph, and in fact is
isomorphic to the familiar Farey graph in the plane (see e.g.
Bowditch-Epstein \cite{bowditch-epstein:triang},
Bowditch \cite{bowditch:markoff},
Hatcher-Thurston \cite{hatcher-thurston}, and
Series \cite{series:cfrac}). In particular this graph is a
triangulation of the 2-disk with vertices on the boundary, and the link
of each vertex can be identified with the integers, on which Dehn
twists (or half-twists for $S_{0,4}$) act by translation (see Figure
\ref{farey graph}).
When $\xi(S)=3$, $\CC(S)$ is empty (recall we have excluded the
torus). When $\xi(S)=2$, $S$ is the annulus and this case is
of interest when $S$ appears as a subsurface of a larger surface. We
consider this case further in \S\ref{annulus defs}.
\subsection{Distance geometry and hyperbolicity}
Let $\CC_k(S)$ denote the $k$-skeleton of $\CC(S)$. It is easy to show
that $\CC_k$ is connected for $k\ge 1$,
see e.g. \cite[Lemma 2.1]{masur-minsky:complex1}.
We can make $\CC_k(S)$ into a complete geodesic metric space by giving each
simplex the metric of a regular Euclidean simplex with side-length
1 (see Bridson \cite{bridson:simplicial}). It is easy to see that the
resulting spaces are quasi-isometric for all $k>0$. In
\cite{masur-minsky:complex1} we showed
\begin{theorem+}{Hyperbolicity}
If $\xi(S)\ge 4$ and $k>0$ then $\CC_k(S)$ is an infinite-diameter
$\delta$-hyperbolic metric space for some $\delta>0$.
\end{theorem+}
See e.g.
\cite{cannon:negative,gromov:hypgroups,bowditch:hyperbolicity,ghys-harpe,short:notes}
for background on $\delta$-hyperbolic metric spaces. We recall here
just the definition that a geodesic metric space is $\delta$-hyperbolic if
for any geodesic triangle each edge is in a $\delta $-neighborhood of
the union of the other two edges.
We will usually consider just distances between {\em vertices} in
$\CC(S)$, i.e. points in $\CC_0(S)$,
for which it suffices to consider distances in the graph $\CC_1(S)$,
which we note are integers. Thus by the notation $d_{\CC(S)}(v,w)$, or
even $d_S(v,w)$, we will always mean distances as measured in
$\CC_1(S)$.
Writing $\operatorname{diam}_S$ to mean diameter in $\CC_1(S)$,
we define for subsets $A,B\subset \CC_0(S)$
\begin{equation}
\label{set distance is max}
d_S(A,B) = \operatorname{diam}_S(A\union B).
\end{equation}
We will also usually think of a {\em geodesic} in $\CC_1(S)$ as a
sequence of vertices $\{v_i\}$ in $\CC_0(S)$, such that $d_S(v_i,v_j)
= |i-j|$. In particular $v_i$ and $v_{i+1}$ are always disjoint (when
$\xi(S) > 4$) and $v_i$ and $v_{i+3}$ always fill $S$, in the sense
that the union of the curves they represent, in minimal position, cuts
$S$ into a union of disks and once-punctured disks.
A final abuse of notation throughout the paper is in the usage of the
term ``vertex'': when we introduce the notion of tight geodesics in
\S\ref{hierarchy defs} we will use ``vertex of a geodesic'' to denote
something more general than a point of $\CC_0(S)$, namely a simplex of
$\CC(S)$, representing a multi-component curve (or multicurve).
(One can think of this as a vertex of the first barycentric subdivision).
We will also go back and forth freely between vertices or simplices
and the (multi)curves they represent.
\subsection{Subdomains, links, arc complexes}
\label{subsurfaces}
A {\em domain} (or {\em subdomain}) $Y$ in $S$ will always be taken to mean
an (isotopy class of an) incompressible, non-peripheral, connected
open subsurface. Unless we say {\em proper} subdomain, we include the
possibility that $Y=S$.
We usually omit the mention of isotopy classes for both surfaces and
curves, and to make the discussion clear one might fix a
complete hyperbolic metric on $S$ and consider geodesic
representatives of curves, and surfaces bounded by them.
We also take the word
``intersection'' to mean {\em transverse} intersection.
However, annuli are an exceptional case in several ways; see below.
In particular note that the boundary curves of a surface do not intersect
it.
We immediately obtain an embedding $\CC(Y)\subset \CC(S)$ except when
$\xi(Y)\le 4$. Another complex of interest is the {\em arc complex}
$\CC'(Y)$,
which we define as follows: Suppose again that $\xi(Y)>3$.
An {\em arc} in $Y$ is
a homotopy class of properly embedded paths in $Y$, which cannot be
deformed rel punctures to a point or a puncture.
The vertices of $\CC'(Y)$ are both the arcs and the curves,
and simplices as before are sets of vertices that can be
realized disjointly.
The complex $\CC'(Y)$ naturally arises when we try to ``project''
$\CC(S)$ into $\CC(Y)$ by taking intersections with $Y$ of curves in
$S$.
{\bf Remark:} The punctures of $Y$ can come from either punctures of
$S$ or from boundary components of $Y$ in $S$. In fact,
it is often useful to think of all the punctures of $Y$ as
boundary components, in which case we consider arcs up to homotopy
which allows the endpoints to move on the boundary. These points of
view are equivalent, and we shall go back and forth between them for
convenience.
The next elementary observation is that $\CC(Y)$ embeds in $\CC'(Y)$
as a co-bounded set. More precisely, letting
$\PP(X)$ denote the set of finite subsets of $X$, we have:
\begin{lemma}{arcs to curves}
Let $\xi(Y)>3$. There is a map $\psi=\psi_Y:\CC'_0(Y)\to \PP(\CC_0(Y))$
such that:
\begin{itemize}
\item
$\psi(v) = \{v\}$ for $v\in\CC_0(Y)$,
\item
$d_{\CC'(Y)}(\alpha,\psi(\alpha))\le 1$, and
\item
if $d_{\CC'(Y)}(\alpha,\beta) \le 1$ then
$d_{\CC(Y)}(\psi(\alpha),\psi(\beta)) \le 2.$
\end{itemize}
\end{lemma}
\begin{pf}
If $\alpha$ is an arc,
let $\NN$ be a regular neighborhood in $Y$ of the union of $\alpha$
with the component(s) of $\boundary Y$ on which its endpoints lie, and
consider the frontier of $\NN$ in $Y$. This has either one or two
components, and at least one of them must be both nontrivial and
nonperipheral, since otherwise $Y$ is a disk, annulus or thrice-punctured
sphere, contradicting $\xi(Y)>3$. We let $\psi(\alpha)$ be the union
of the (at most two) nontrivial components
(see figure \ref{psi def fig}). If $\alpha$ is a curve (vertex of
$\CC_0(Y)$), we define $\psi(\alpha) = \{\alpha\}$.
\realfig{psi def fig}{psidef.ps}{The neighborhood $\NN$ is shaded. Note
that its frontier in $Y$ has two components in the first case and one
in the second.}
Let $\alpha $ and $\beta$ be adjacent in $\CC'(Y)$, so they have
disjoint representatives. If either of them is a closed curve
then automatically $d(\psi(\alpha),\psi(\beta)) \le 1$, so assume both
are arcs. Similarly if their endpoints lie on disjoint boundary
components of $Y$ then $\psi(\alpha)$ and $\psi(\beta)$ have disjoint
representatives, so we can
assume from now on that there is at least one boundary component which
touches both of them.
Suppose that the complement of $\alpha \union \beta$ in $Y$ contains a
non-trivial, non-peripheral
simple closed curve $\gamma$. Then $\gamma$ is also
disjoint from $\psi(\alpha)$ and $\psi(\beta)$, and we conclude
$d_{\CC(Y)}(\psi(\alpha),\psi(\beta)) \le 2$.
If there is no such $\gamma$,
then $\alpha$ and $\beta$ cut $Y$ into a union
of (at most 3) disks or punctured disks. The possible cases
can therefore be enumerated explicitly.
\realfig{psi cases}{psicases.ps}{The different cases in the proof
that $d_{\CC(Y)}(\psi(\alpha),\psi(\beta)) \le 2$.}
Let $C$ be a
boundary component of $Y$ meeting both $\alpha$ and $\beta$.
If $C$ meets all the endpoints then there are two possibilities,
according to whether the endpoints
separate each other on $C$. If they separate each other,
$Y$ must be a once or twice
punctured torus, as in cases 1a and 1b of
Figure \ref{psi cases}. In case 1a we have
$d_{\CC(Y)}(\psi(\alpha),\psi(\beta)) = 1$,
and in case 1b, $d_{\CC(Y)}(\psi(\alpha),\psi(\beta)) = 2$, as shown
(note in this case that $\psi(\alpha)$ and $\psi(\beta)$ each have two
components).
If they do not separate then
$Y$ must be a quadruply-punctured sphere (case 1c) and
$d_{\CC(Y)}(\psi(\alpha),\psi(\beta)) = 1$.
(Recall that in the cases where $\xi(Y)=4$, the definition of $d_{\CC(Y)}$ is
slightly different).
Suppose that $\alpha$ has one endpoint on $C$ and one on another
boundary $C'$. In all these cases $Y$ turns out to be a
quadruply-punctured sphere.
If both $\beta$'s endpoints are on $C$ we get case 2a, where
$d_{\CC(Y)}(\psi(\alpha),\psi(\beta)) = 1$. If $\beta$'s other endpoint is on
$C'$ we get 2b, where $d_{\CC(Y)}(\psi(\alpha),\psi(\beta)) = 2$. If
$\beta$'s other endpoint is on a third component $C''$, we get case 2c where
again $d_{\CC(Y)}(\psi(\alpha),\psi(\beta)) = 1$.
\end{pf}
\bfheading{Projections to subsurfaces:}
If $Y$ is a proper subdomain in $S$ with $\xi(Y)\ge 4$ we can define a map
$\pi'_Y : \CC_0(S) \to \PP(\CC'_0(Y))$, simply by taking
for any curve $\alpha$ the union of (homotopy classes of) its
essential arcs of intersection with $Y$.
If $\alpha$ does not meet $Y$ essentially then
$\pi'_Y(\alpha)=\emptyset$, and otherwise it is always a simplex
of $\CC'(Y)$.
Adopting the convention for set-valued maps that $f(A) = \union_{a\in
A}f(a)$, we define $\pi_Y$ by $\pi_Y(\alpha) = \psi_Y(\pi'_Y(\alpha))$.
We also define
\begin{equation}
\label{dY convention}
d_Y(A,B) \equiv d_Y(\pi_Y(A),\pi_Y(B))
\end{equation}
for sets or elements $A$ and $B$ in $\CC_0(S)$, and similarly
we let $\operatorname{diam}_Y(A)$ denote $\operatorname{diam}_{\CC(Y)}(\pi_Y(A))$.
\subsection{Annular domains}
\label{annulus defs}
An annular domain is an annulus $Y$ with incompressible boundary in $S$,
which is not homotopic into a puncture of $S$.
The purpose of defining complexes for such annuli is to keep track
of Dehn twisting around their cores; hence one would like $\CC(Y)$ to
be $\Z$. However there seems to be no natural way to do
this, and we will be content with something more complicated which is
nevertheless quasi-isometric to $\Z$. The
statements made in this subsection are all elementary, and we only
sketch the proofs.
Let $\til Y$ be the annular cover of $S$ to which $Y$ lifts
homeomorphically. There is a natural compactification of $\til Y$ to a
closed annulus $\hhat Y$, obtained in the usual way
from the compactification of the universal
cover $\til S = \Hyp^2$ by the closed disk.
Define the vertices of $\CC(Y)$ to be
the paths connecting the two boundary components
of $\hhat Y$, modulo homotopies that {\em fix the endpoints}.
Put an edge between any two elements of $\CC_0(Y)$ which have
representatives with disjoint interiors. As before we can make
$\CC(Y)$ into a metric space with edge lengths 1.
If $\alpha\in\CC_0(S)$ is the core curve of $Y$ we also write
$\CC(\alpha)=\CC(Y)$, and similarly $d_Y = d_{\alpha }$.
Fixing an orientation on $S$ and
an ordering on the components of $\boundary \hhat Y$, we can define
algebraic intersection number $\alpha\cdot \beta$
for $\alpha,\beta\in\CC_0(Y)$ (only interior intersections count).
It is easy to see by an inductive argument that
\begin{equation}
\label{annulus distance and intersection}
d_Y(\alpha,\beta) = 1 + |\alpha\cdot \beta|
\end{equation}
whenever $\alpha\ne\beta$.
Let us also observe the convenient identity
\begin{equation}
\label{adding twists}
\gamma\cdot\alpha = \gamma\cdot\beta + \beta\cdot\alpha + j
\end{equation}
where $j=0,1$ or $-1$ (the value of $j$ depends on the exact
arrangement of endpoints on $\boundary \hhat Y$).
We claim that $\CC(Y)$ is quasi-isometric to $\Z$ with the standard
metric. In fact define a map $f:\CC_0(Y) \to \Z$ by fixing some
$\alpha\in\CC_0(Y) $ and letting $f(\beta) = \beta\cdot\alpha$. Then
(\ref{adding twists}) and (\ref{annulus distance and intersection}) imply
\begin{equation}
\label{f quasiisometry}
|f(\gamma)-f(\beta)| \le d_Y(\gamma,\beta) \le |f(\gamma)-f(\beta)|
+ 2.
\end{equation}
In particular this implies that $\CC(Y)$ is hyperbolic, so Theorem
\ref{Hyperbolicity} holds for this complex as well.
\bfheading{Projections to annuli:}
We can define $\pi_Y:\CC_0(S) \to \PP(\CC_0(Y))$ as follows:
If $\gamma$ is a simple closed curve in $S$ crossing the core of
$Y$ transversely, then the lift of $\gamma$ to $\til Y$ has at least
one component that connects the two boundaries of $\hhat Y$, and
together these
components make up a (finite) set of diameter 1 in $\CC(Y)$.
Let $\pi_Y(\gamma) $ be this set.
If $\gamma$ does not intersect $Y$ essentially (including the case
that $\gamma$ is the core of $Y$!) then $\pi_Y(\gamma) = \emptyset$,
as in the previous section.
Finally, for consistency we also define $\pi_Y:\CC_0(Y) \to \PP(\CC_0(Y))$
by $v\mapsto \{v\}$, and define $d_Y(A,B)$ and $\operatorname{diam}_Y(A)$ using the
same conventions
(e.g. (\ref{set distance is max}) and (\ref{dY convention})) as for
larger subdomains. If $\alpha$ is the core of $Y$ we also write
$\operatorname{diam}_{\alpha}$ and $\pi_{\alpha}$.
We remark that $\CC(Y)$ is not a subcomplex of $\CC(S)$, but just as
for larger subdomains, any $f\in\Mod(S)$ acts by isomorphism
$f : \CC(Y) \to \CC(f(Y))$, and this fits naturally with the action on
$\CC(S)$ via $\pi_{f(Y)} \circ f = f \circ \pi_Y$.
With these definitions in place we have the following:
\begin{lemma+}{Lipschitz Projection}
Let $Y$ be a subdomain of $Z$.
For any simplex $\rho$ in $\CC(Z)$, if $\pi_Y(\rho)\ne\emptyset$ then
$\operatorname{diam}_Y(\rho) \le 2$. If $Y$ is an annulus and $\xi(Z)>4$ then the
bound is 1.
\end{lemma+}
\begin{proof}
For an annulus $Y$, if $\xi(Z)>4$ the bound is immediate, since any
two disjoint curves in $Z$ lift to disjoint arcs in $\til Y$.
If $\xi(Z) = 4$, one easily checks that Farey neighbors in $\CC(Z)$
lift to curves that intersect at most once in any annulus cover.
For $\xi(Y)\ge 4$, the bound follows from Lemma \ref{arcs to
curves}.
\end{proof}
\bfheading{Dehn twists:}
Let $Y$ be an annulus with core $\alpha$. Let $D_\alpha$ be a positive
Dehn twist in $S$ about $\alpha$, and let $\hhat D_\alpha$ be a
positive Dehn twist in the covering annulus $\hhat
Y$ about its core.
Then $\hhat D_\alpha$ acts on $\CC(Y)$ and it is immediate
for any $t\in\CC_0(Y)$ that $(\hhat D^n_\alpha t)\cdot t = n-1$ if $n>0$
and $n+1$ if $n<0$. Thus we obtain from
(\ref{annulus distance and intersection}) that
$d_Y(\hhat D_\alpha^n(t),t) = |n|$ for all $n\in\Z$.
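Indeed, combining the intersection count above with (\ref{annulus distance and intersection}), for $n \ne 0$:
```latex
d_Y(\hhat D_\alpha^n(t), t)
 \;=\; 1 + |(\hhat D_\alpha^n t)\cdot t|
 \;=\;
 \begin{cases}
   1 + (n - 1) = n,  & n > 0,\\
   1 + (-n - 1) = -n, & n < 0,
 \end{cases}
```
so the distance is $|n|$ in both cases (and trivially $0$ when $n = 0$).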
With a little more thought one can see that, for any curve $\beta$
intersecting $\alpha$ transversely,
\begin{equation}
\label{twist distance}
d_Y(D^n_\alpha(\beta),\beta) = 2+|n|
\end{equation}
for $n\ne 0$. This is because the Dehn twist in $S$ affects every
intersection of the lift of $\beta$ with lifts of $\alpha$ in $\til
Y$, and this shifts the endpoints on $\boundary \hhat Y$ enough to
enable components of $\pi_Y(D^n_\alpha(\beta))$ and
$\pi_Y(\beta)$ to intersect an additional two times.
If $\beta$ intersects $\alpha$ exactly twice, with opposite
orientations, one can apply a {\em
half twist} to $\beta$ to obtain a curve $H_\alpha(\beta)$, which is
equivalent to taking $\alpha\union\beta$ and resolving the
intersections in a way consistent with orientation (see
\cite{luo:intersectionnum}
for a generalization). Then $H^2_\alpha(\beta)=D_\alpha(\beta)$, and
one can also see for $n\ne 0$ that
\begin{equation}
\label{half-twist distance}
d_Y(H^n_\alpha(\beta),\beta) = 2 + \left\lfloor\frac{|n|}{2}\right\rfloor.
\end{equation}
\subsection{Markings}
\label{markings}
Assume $\xi(S)\ge 4$ and let
$\{\alpha_1,\ldots,\alpha_k\}$ be some simplex in
$\CC(S)$. A {\em marking} in $S$ is a set $\mu=\{p_1,\ldots,p_k\}$,
where each $p_i$ is either just $\alpha_i$, or a pair $(\alpha_i,t_i)$
such that $t_i$ is a diameter-1 set of vertices of
the annular complex $\CC(\alpha_i)$.
The $\alpha_i$ are called the base curves and the simplex
$\{\alpha_i\}$ is denoted
$\operatorname{base}(\mu)$. The (possibly empty) set $\{t_i\}$ is called the
set of {\em transversals} and denoted $\operatorname{trans}(\mu)$.
Thus a special case is when $\operatorname{trans}(\mu)=\emptyset$ and then
$\mu=\operatorname{base}(\mu)$.
If $\operatorname{base}(\mu)$ is contained in $\CC(Y)$ for some non-annular
subsurface $Y$, we call $\mu$ a marking in $Y$.
If $Y$ is an essential annulus in $S$ then a marking $\mu$ in $Y$ is any set
of diameter 1 in $\CC_0(Y)$ (typically these sets will have at most
two elements), and we have $\mu=\operatorname{base}(\mu)$ in this case.
If $\operatorname{base}(\mu)$ is maximal and
every curve has a transversal,
the marking is called {\em complete.}
Markings can be very complicated objects, because the transversals,
being arcs in annular covers, can have complicated images in $S$.
Let us therefore define something called a {\em clean marking}:
Given $\alpha\in\CC_0(S)$ a {\em clean transverse curve} for $\alpha$
is a curve $\beta\in\CC_0(S)$ such that
a regular neighborhood of $\alpha\union \beta$ (in minimal position) is a
surface $F$ with $\xi(F)=4$, in which $\alpha$ and $\beta$ are
$\CC(F)$-neighbors (note there are only two possible configurations,
corresponding to $F$ being a 1-holed torus or 4-holed sphere, and
$\alpha$ and $\beta$ intersect once or twice, respectively).
A marking $\mu$ is called {\em clean} if every pair in $\mu$ is of the
form $(\alpha_i,\pi_{\alpha_i}(\beta_i))$ where
$\beta_i$ is a clean transverse curve for $\alpha_i$,
which also misses the other curves in $\operatorname{base}(\mu)$.
Note that if $\mu$ is clean then the curves $\beta_i$ are uniquely
determined by the transversals $t_i=\pi_{\alpha_i}(\beta_i)$.
We note that, up to homeomorphisms of $S$, there are only a finite
number of clean markings.
If $\mu$ is a complete marking, there is an almost
canonical way to select a related clean marking. Let us say
that a clean marking $\mu'$ is {\em compatible} with a marking $\mu$
provided $\operatorname{base}(\mu) = \operatorname{base}(\mu')$, a base curve $\alpha$ has a
transversal $t'=\pi_Y(\beta)$ in $\mu'$ if and only if it has a
transversal $t$ in
$\mu$, and $d_{\alpha}(t,t')$ is minimal among all possible choices of $t'$.
\begin{lemma}{clean markings}
Let $\mu$ be a complete marking of $Y\subset S$. Then there exist at
least one and at most $n_0^b$ complete clean markings $\mu'$
compatible with $\mu$, where $b$ is the number of base curves of
$\mu$, and $n_0$ is a universal constant.
Furthermore,
for each $(\alpha,t)\in \mu$ and
$(\alpha,t')\in \mu'$ we have $d_\alpha(t,t') \le n_1$, where
$n_1$ is a universal constant.
\end{lemma}
\begin{proof}
Fix one clean marking $\mu_0$ with $\operatorname{base}(\mu_0)=\operatorname{base}(\mu)$.
All other clean markings with this base are obtained from $\mu_0$ by
twists and half-twists, so it follows immediately from
(\ref{twist distance},\ref{half-twist distance}) and the
quasi-isometry (\ref{f quasiisometry}) of an annular complex to $\Z$
that for each
$\alpha\in\operatorname{base}(\mu)$ there is a choice
of clean transversal $\beta$ that minimizes
$d_\alpha(t,\pi_\alpha(\beta))$, and that there is a uniform bound on
this minimum.
The fact that the number of choices of $\beta$ is uniformly bounded
for each base curve also follows from (\ref{twist
distance}) and (\ref{half-twist distance}).
\end{proof}
(One can in fact show that $n_0\le 4$ and $n_1=3$, but we
will not need this).
\bfheading{Projections of markings:}
If $Y$ is any subdomain of $S$ and $\mu$ any marking in $S$
we can define $\pi_Y(\mu)$ as follows:
If $Y$ is an annulus whose core is some $\alpha\in\operatorname{base}(\mu)$, and
$\alpha$ has a transversal $t$, we
define $\pi_Y(\mu) = t$. If $\alpha$ has no transversal $\pi_Y(\mu) =
\emptyset$. In all other cases, $\pi_Y(\mu) = \pi_Y(\operatorname{base}(\mu))$.
\bfheading{Elementary moves on clean markings:}
Let $\mu$ be a complete clean marking, with
pairs $(\alpha_i,\pi_{\alpha_i}(\beta_i))$ as above.
There are two types of elementary moves that transform $\mu$ into a
new clean marking.
\begin{enumerate}
\item Twist: Replace $\beta_i$ by
$\beta'_i$, where $\beta'_i$ is obtained from $\beta_i$
by a Dehn twist or half-twist around $\alpha_i$.
\item Flip: Replace $(\alpha_i,\pi_{\alpha_i}(\beta_i))\in\mu$ by
$(\beta_i,\pi_{\beta_i}(\alpha_i))$ to get a
non-clean marking $\mu''$.
Then replace $\mu''$ by a compatible clean marking $\mu'$.
\end{enumerate}
In the first move a twist can be positive or negative. A
half-twist is possible when $\alpha_i$ and $\beta_i$ intersect twice.
The replacement part of the Flip move requires further discussion:
The surface $F$ filled by $\alpha_i$ and $\beta_i$ has $\xi(F)=4$, and its
(non-puncture) boundary components are other elements of
$\operatorname{base}(\mu)$. For each such element $\alpha_j$ there is a transverse
$\beta_j$ which misses $\alpha_i$ but hits $\beta_i$. Thus after interchanging
$\alpha_i$ and $\beta_i$ the marking is no longer clean.
We must therefore replace $\beta_j$ by $\beta'_j$ which
misses $\beta_i$, subject
to the condition that $d_{\alpha_j}(\beta_j,\beta'_j)$ is as small as possible.
Lemma \ref{clean markings} says that this distance is at most
$n_1$, and there are $n_0$ possible choices for each $\beta_j$.
(Actually this situation is more special than that of Lemma \ref{clean
markings}, and one can obtain a distance bound of 2.)
Thus, given $\mu$ there is a finite number of possible elementary
moves on it, depending only on the topological type of $S$.
We conclude with an extension of Lemma \lref{Lipschitz Projection}.
\begin{lemma+}{Elementary Move Projections}
If $\mu,\mu'$ are complete clean markings differing by one elementary
move, then for any domain $Y$ in $S$ with $\xi(Y)\ne 3$,
$$
d_Y(\mu,\mu') \le 4
$$
If $Y$ is an annulus the bound is 3.
\end{lemma+}
\begin{proof}
If $Y$ is an annulus with core curve $\alpha\in\operatorname{base} (\mu)$, then
$\mu $ contains $(\alpha,\pi_\alpha(\beta))$ for a clean transversal curve
$\beta$, and $\pi_Y(\mu) = \pi_\alpha(\beta)$. Then if
$\mu'$ is obtained
by a twist or half-twist on $\alpha$, a bound of 3 follows from
(\ref{twist distance}) and (\ref{half-twist distance}). If $\mu'$ is
obtained by a Flip move, replacing $(\alpha,\pi_\alpha(\beta))$ by
$(\beta,\pi_\beta(\alpha))$, then
$\pi_Y(\mu')=\pi_Y(\operatorname{base}(\mu'))=\pi_\alpha(\beta)$, so the distance is 0.
A similar analysis holds if $Y$ is an annulus with core curve in
$\operatorname{base}(\mu')$.
In all other cases, $\pi_Y(\mu)=\pi_Y(\operatorname{base}(\mu))$ and
$\pi_Y(\mu')=\pi_Y(\operatorname{base}(\mu'))$,
and by definition
$d_Y(\mu,\mu') = \operatorname{diam}_Y(\pi_Y(\operatorname{base}(\mu))\union\pi_Y(\operatorname{base}(\mu')))$.
If $\pi_Y(\operatorname{base}(\mu))$ and $\pi_Y(\operatorname{base}(\mu'))$ have at least one curve in
common, the bound of 4 follows from Lemma \ref{Lipschitz Projection}.
If not, then the move must be a Flip move, and $Y$ meets only the two
base curves $\alpha,\alpha'$ involved in the Flip. Let $F$ be the
surface of $\xi=4$
filled by these curves, which are neighbors in $\CC(F)$.
If $\xi(Y)=4$ then $Y=F$ and we are done, with a bound of 1.
The remaining possibility is that $Y$ is an essential annulus in $F$ meeting
both curves, and then any two lifts of $\alpha$ and $\alpha'$ to
$\til Y$ intersect at most once, giving a bound of 2.
\end{proof}
\section{Introduction}
\label{intro}
In this paper we continue our geometric study
of Harvey's Complex of Curves \cite{harvey:boundary}, a
finite dimensional and locally infinite complex $\CC(S)$ associated to
a surface $S$,
which admits an action by the mapping class group $\Mod(S)$.
The geometry and combinatorics of $\CC(S)$ can be applied to study
group-theoretic properties of $\Mod(S)$, and the geometry of Kleinian
representations of $\pi_1(S)$.
In \cite{masur-minsky:complex1} we showed that, endowed with a natural
metric, $\CC(S)$ is an infinite diameter $\delta$-hyperbolic space in
all but a small number of trivial cases (see Section
\ref{defs} for precise definitions). This result suggests that one try
to apply the techniques of hyperbolic spaces and groups
to study $\CC(S)$ and its $\Mod(S)$-action, considering for
example such questions as the word problem, conjugacy problem and
quasi-isometric rigidity. The barrier to doing this is that
the complex is locally infinite, and hence the distance bounds one
obtains in a typical geometric argument give little a-priori
information.
Our goal in this paper is to develop tools for lifting this barrier.
The organizing philosophy is roughly this: Links of vertices in
$\CC(S)$ are themselves complexes associated to subsurfaces. The
geometry of these links is tied to the geometry of $\CC(S)$ by a family of
{\em subsurface projection maps}, which are analogous to closest-point
projections to horoballs in classical hyperbolic space.
This gives a layered structure to the complex, with
hyperbolicity at each level, and the main construction of our paper is
a combinatorial device used to tie these levels together, which we
call a {\em hierarchy of tight geodesics}.
Using these constructions, we derive a number of properties of
$\CC(S)$ which are similar to those of locally finite complexes, such
as a finiteness result for geodesics with given endpoints (Theorem
\ref{Finite Geodesics}), and a convergence criterion for sequences of
geodesics (Theorem \ref{Convergence of Hierarchies}). We then apply
these ideas to study the
conjugacy problem in $\Mod(S)$, deriving a
linear bound on the shortest word conjugating two pseudo-Anosov
mapping classes (Theorem \ref{Conjugacy Bound}).
Along the way we describe a class of quasi-geodesic words in $\Mod(S)$
(Theorem \ref{Quasigeodesic Words}), whose lengths can be estimated using
the subsurface projection maps (Theorem \ref{Move distance
and projections}).
The rest of Section \ref{intro} gives a more detailed outline of
our results, and works through some explicit examples that
motivate our constructions.
Section \ref{defs} presents our definitions
and notation, and proves some basic lemmas. Section \ref{projection}
proves our fundamental result on subsurface projections, Sections
\ref{hierarchies} and \ref{resolution} develop the machinery of
hierarchies and their resolutions into sequences of markings, Section
\ref{large link etc} proves our basic geometric control theorems, and
Section \ref{MCG} proves the conjugacy bound theorem for $\Mod(S)$.
\subsection{Subsurface Projections}
A basic analogy for thinking about $\CC(S)$ is provided by the
geometry of a family $\FF$ of disjoint, uniformly spaced horoballs in
$\Hyp^n$, for example
the uniform cusp horoballs of a Kleinian group. The non-proper metric
space $X_\FF$
obtained by collapsing each horoball to a point is itself
$\delta$-hyperbolic -- see Farb \cite{farb:relhyp} and Klarreich
\cite{klarreich:thesis} -- and the horoballs play a role similar to
links of vertices in $\CC(S)$.
If $B$
is a horoball and $L$ is a hyperbolic geodesic disjoint from $B$, then
the closest-point projection of $L$ to $B$ has uniformly bounded
diameter, independently of $L$ or $B$. Interestingly, one can sensibly
define a ``projection'' from the collapsed space $X_\FF$ to $B$ which
similarly sends $X_\FF$-geodesics avoiding $B$ to bounded sets. This
turns out to
be a crucial property in understanding the geometry of $X_\FF$ and
its relation to the geometry of $\Hyp^n$.
In our context, vertices of $\CC(S)$ are simple closed curves in $S$ (see
\S\ref{basic defs}) and the link of a vertex $v$ is closely related to
the complexes $\CC(Y)$ for the complementary subsurfaces $Y$ of $v$.
We will define projections $\pi_Y$ from $\CC(S)$ to $\CC(Y)$ as follows:
given a simple closed curve on $S$ take its arcs of
intersection with $Y$ and perform a surgery on them to obtain closed
curves in $Y$. (More precisely $\pi_Y$ sends vertices in $\CC(S)$ to
finite sets in $\CC(Y)$).
We will prove the following analogue to the situation
with horoballs:
\Restate{Theorem}{Bounded Geodesic Image}{
If $Y$ is an essential subsurface of $Z$ and $g$ is a geodesic in
$\CC(Z)$ all of whose vertices intersect $Y$ nontrivially, then the
projected image of $g$ in $\CC(Y)$ has uniformly bounded diameter.
}
The family $\FF$ of horoballs also satisfies the closely related
``bounded coset penetration property'' of Farb \cite{farb:relhyp},
which roughly speaking
is a stability property for paths in $\Hyp^n$ whose images in $X_\FF$ are
quasi-geodesics: if two such paths begin and end near each other, then
up to bounded error they penetrate through the same set of horoballs
in the same way. This property does not hold in our case but a certain
generalization of it does. This will be the content of Lemmas
\lref{Large Link} and \lref{Common Links}, which will be briefly
discussed in \S\ref{hierarchy summary} below.
\subsection{The conjugacy problem}
Fix a set of generators for $\Mod(S)$ and let $|\cdot|$ denote the
word metric. As one application of our techniques,
in Section \ref{MCG} we establish the following
bound:
\Restate{Theorem}{Conjugacy Bound}{Fix a surface $S$ of finite type and a generating set
for $\Mod(S)$. If $h_1, h_2$ are
words describing
conjugate pseudo-Anosov elements, then the shortest conjugating
element $w$ has word length
$$|w| \le C(|h_1|+|h_2|),$$
where the constant $C$ depends only on $S$ and the generating set.
}
This linear growth property for the
shortest conjugating word is shared with word-hyperbolic groups
(see Lys\"enok \cite[Lemma 10]{lysenok:hyperbolic}),
although except in a few low-genus cases
the mapping class group
is not word-hyperbolic since it contains abelian subgroups of
rank at least $2$ generated by Dehn twists about disjoint curves.
Our proof is modeled on an argument that works in the word-hyperbolic case.
The case of general elements of $\Mod(S)$ introduces complications
similar to those that occur for torsion elements of word-hyperbolic
groups. We hope to address the general case in a future paper.
This bound is related to the question of solubility of the conjugacy
problem, since a computable bound on $w$ provides a bounded search
space for an algorithm seeking to establish or refute conjugacy.
Hemion \cite{hemion:conjugacy} proved that
the conjugacy problem for $\Mod(S)$ is soluble, and Mosher
\cite{mosher:conjugacy} gave an explicit algorithm for
determining conjugacy
for pseudo-Anosovs. In both cases, no explicit bound on the complexity
was given (although Mosher's algorithm is fast in practice).
Theorem \ref{Conjugacy Bound} is still short of a good complexity
bound since we have not described an efficient way to search through
the possible conjugating words. However, we are hopeful that the
techniques of this paper can be extended to give a more complete
algorithmic approach.
\subsection{Finiteness results}
In a locally finite graph, there are finitely many geodesics
between any two points. In
$\CC(S)$ this is easily seen to be false even for geodesics of length
2. However we shall introduce a finer notion of {\em tight geodesic}
(\S\ref{hierarchy defs}), for which we can establish:
\Restate{Theorem}{Finite Geodesics}{Between any two vertices in $\CC(S)$
there are finitely many tight geodesics.}
This is part of a collection of results showing that in several useful
ways $\CC(S)$ is like a locally finite complex. Another is Theorem
\lref{Convergence of Hierarchies}, which generalizes the property of a
locally finite complex that any sequence of
geodesics meeting a compact set has a convergent
subsequence.
In the locally finite setting this involves a simple diagonalization
argument, which is replaced here by an application of Theorem \ref{Bounded
Geodesic Image} and the hierarchy construction.
The following is an application of this result:
\Restate{Proposition}{Axis}{Any pseudo-Anosov element $h\in\Mod(S)$ has a
quasi-invariant axis in $\CC(S)$: that is, a bi-infinite geodesic
$\beta$ such that $h^n(\beta)$ and $\beta$ are $2\delta$-fellow
travelers for all $n\in\Z$.}
That a {\em quasi-geodesic} exists which fellow-travels its $h$-translates
is a consequence of work in \cite{masur-minsky:complex1}. The geodesic with
this property is
obtained by a limiting process using Theorem \ref{Convergence of Hierarchies}.
\subsection{Hierarchies of geodesics}
\label{hierarchy summary}
A geodesic in $\CC(S)$ is a sequence of curves in $S$, but words in $\Mod(S)$
are more closely related to sequences of pants decompositions separated
by elementary moves (replacement of one curve at a time). The
hierarchy construction is based on the idea that a geodesic can be
``thickened'' in a natural way to give a family of pants
decompositions. We will illustrate this in one of the simplest
examples, that of the five-holed sphere $S_{0,5}$, below. We will then
give a more general discussion of the construction and state some of
our main results about it. Finally in \S\ref{example} we will give a
more extended, but still relatively simple, collection of examples.
\medskip
A pants decomposition $P$ in $S=S_{0,5}$
is a pair of disjoint curves $\alpha,\beta$; i.e. an edge of $\CC(S)$.
An {\em elementary move} of pants $P\to P'$ fixes one of the
curves, say $\alpha$, and replaces $\beta$ with a curve $\beta'$
which intersects $\beta$ minimally and is disjoint from $\alpha$.
Now given $P = \{\alpha,\beta\}$, take some $\psi\in\Mod(S)$, and
consider ways of connecting $P$ to
$\psi(P)=\{\alpha',\beta'\}$. Choose a geodesic in $\CC(S)$
whose vertices are
$\alpha=\alpha_0,\alpha_1,\ldots,\alpha_N=\alpha'$.
The subsurface $S\setminus \alpha_0$ has two
components, a three-holed sphere, and a four-holed sphere which
contains $\alpha_1$ and $\beta$. Let
$S_{\alpha_0}$ denote the four-holed sphere.
The complex of curves $\CC(S_{\alpha_0})$ is isomorphic to the Farey
graph (see \S\ref{example}),
so let us join $\beta$ to
$\alpha_1$ by a geodesic
$\beta=\gamma_0,\gamma_1,\ldots,\gamma_m=\alpha_1$ in
$\CC(S_{\alpha_0})$. The transition $(\alpha_0,\gamma_i)$ to
$(\alpha_0,\gamma_{i+1})$ is an elementary move in pants. This
path concludes with the pants decomposition
$\{\alpha_0,\alpha_1\}$ (see Figure \ref{fivehole pants}).
Now working in $S_{\alpha_1}$
join $\alpha_0$ to $\alpha_2$ by a geodesic, giving a path of
elementary moves ending with $\{\alpha_1,\alpha_2\}$. We repeat
this procedure, eventually ending with $\psi(P)$.
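Written out, the path of elementary moves constructed above reads
$$\{\alpha_0,\beta\}=\{\alpha_0,\gamma_0\}\to\{\alpha_0,\gamma_1\}\to
\cdots\to\{\alpha_0,\gamma_m\}=\{\alpha_0,\alpha_1\},$$
continuing in the same fashion in $S_{\alpha_1}$ and beyond.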
\realfig{fivehole pants}{fivehole.ps}{A sequence of pants decompositions in the
five-holed sphere. Each edge represents a pants
decomposition. Transitions between edges are elementary moves.}
In this example, the same final pants decomposition $\psi(P)$ could have
been written as $\psi(\theta(P))$ where $\theta$ is any product of
Dehn twists around $\alpha$ and $\beta$.
Thus in order to keep track of elements in the mapping class group,
and not just pants decompositions, we will also need to keep track of
twisting information around pants curves. In fact twisting data is
implicit everywhere in this example: in
the geodesic in $\CC( S_{\alpha_0})$, for each $i\in(0,m)$,
$\gamma_{i-1}$ and
$\gamma_{i+1}$ both intersect $\gamma_i$ minimally, and hence differ by
a product of Dehn twists (or half-twists) about $\gamma_i$. Keeping
track of this information will
require the introduction of complexes associated to annular
subsurfaces (see \S\ref{annulus defs}).
Ultimately we will be considering sequences of {\em complete markings}, which
are pants decompositions together with twisting data,
such that successive markings are separated by
appropriately defined elementary moves.
(More generally the markings need not be complete, but let us assume for
the rest of this discussion that they are.)
In considering such a sequence
carefully, one finds in it segments where some sub-marking is fixed
and all the elementary moves take place in a subsurface of $S$. Thus
one obtains some interlocking structure of paths in subcomplexes of
$\CC(S)$.
{\em Hierarchies of geodesics} will be our method for constructing and
manipulating such structures. Roughly, a hierarchy is
a collection $H$ of
geodesics, each geodesic $h$ contained in a complex $\CC(Y)$ where $Y$ is
the {\em domain} of $h$. The geodesics will satisfy a technical
condition called {\em tightness}, which makes them easier to control.
There will be one ``main geodesic'' whose
domain is all of $S$, and in general the geodesics will interlock
via a relation called ``subordinacy'', which is related to the nesting
of their domains. There will be an {\em initial} and a {\em terminal}
marking, called ${\mathbf I}(H)$ and ${\mathbf T}(H)$, and a partial order on the
geodesics which is related to the linear order of a sequence of
elementary moves connecting ${\mathbf I}(H)$ to ${\mathbf T}(H)$ (the reason for a
partial rather than linear order is that some elementary moves commute
because they take place in disjoint subsurfaces).
Any hierarchy will admit a {\em resolution} of its partial order to a
linearly ordered sequence of markings, separated by elementary moves,
connecting ${\mathbf I}(H)$ to ${\mathbf T}(H)$. This resolution will be nonunique, but
efficient in the following sense.
Let $\til \MM$ be the graph whose
vertices are complete markings and whose edges are elementary
moves. We then have:
\Restate{Theorem}{Efficiency of Hierarchies}{
Any resolution of a hierarchy $H$ into a sequence of
complete markings is a quasi-geodesic in $\til \MM$, with uniform constants.}
In the case where ${\mathbf T}(H) = \psi({\mathbf I}(H))$ for some $\psi\in\Mod(S)$,
a resolution gives rise to a quasi-geodesic word in $\Mod(S)$
(Theorem \ref{Quasigeodesic Words}).
Hierarchies will be constructed inductively, with the main geodesic
chosen first and then further geodesics in subsurfaces determined by
vertices of the previous ones. At every stage a geodesic is not
uniquely determined, although hyperbolicity implies that all choices
are fellow travelers. This is a priori a fairly loose constraint, but
it has the following rigidity property, which is our generalization of
Farb's bounded coset penetration property.
\Restate{Lemma}{Common Links}{Suppose $H$ and $H'$ are hierarchies
whose initial and
terminal markings differ by at most $K$ elementary moves. Then there
is a number $M(K)$ such that, if a geodesic $h$ appears in $H$ and
has length greater than $M$, then $H'$ contains a geodesic $h'$
with the same domain. Furthermore, $h$ and $h'$ are
fellow-travelers with a uniform separation constant. }
This lemma is a consequence of the following lemma, which
characterizes in terms of the subsurface projections $\pi_Y$ when
long geodesics appear in a hierarchy. For two markings $\mu$ and $\mu'$
we let $d_Y(\mu,\mu')$ denote the distance in $\CC(Y)$ of their
projections by $\pi_Y$ (see \S\ref{markings}).
\Restate{Lemma}{Large Link}{
There exists $M=M(S)$ such that,
if $H$ is any hierarchy in $S$ and $d_Y({\mathbf I}(H),{\mathbf T}(H)) \ge M$ for a
subsurface $Y$ in $S$, then $Y$ is the domain of a geodesic $h$ in
$H$.
Furthermore if $h$ is in $H$ with domain $Y$ then its length $|h|$ and
the projection distance $d_Y({\mathbf I}(H),{\mathbf T}(H))$ are within a uniform additive
constant of each other.
}
Both of these results follow from Theorem \lref{Bounded Geodesic
Image} together with the structural properties of hierarchies, which
are summarized by Theorem \ref{Structure of Sigma}.
Applications of Lemmas \ref{Large Link} and \ref{Common Links}
are based on the idea that, whenever
geodesics in a hierarchy have short length, one can apply arguments
that work for locally finite complexes. Whenever geodesics become
long, one has this rigidity for all ``nearby'' hierarchies, and can
work inductively in the shared domains of the long geodesics.
Theorems \lref{Finite Geodesics}, \lref{Convergence of Hierarchies} and
\lref{Axis} are all consequences of this sort of argument.
\subsection{Motivating Examples}
\label{example}
To illustrate the above theorems, we will work through some
more extended low-genus examples.
Let us first take a closer look at the
case where $S$ is a once-punctured torus or
four-times punctured sphere. Then $\CC(S)$ is the Farey graph
(see figure \ref{farey graph} and \S\ref{basic defs}), and in spite of
the fact that the link of every vertex is
infinite we have fairly explicit and rigid control of geodesics. In
particular we note the following phenomenon. Let $h$ be a geodesic and
$v$ a vertex in $h$, preceded by $u$ and followed by $w$. The link of
the vertex $v$ can be identified with $\Z$, and we can measure the
distance between $u$ and $w$ in this link, an integer $d_v(u,w)$.
If $h'$ is a geodesic with the same endpoints as $h$,
then $h'$ must pass through $v$ {\em provided $d_v(u,w)$ is sufficiently
large} (5 will do). In fact the
same holds if $h'$ has endpoints, say, distance 1 from those of $h$.
Furthermore, $h'$ must enter the link of
$v$ at a point within 1 of $u$ and exit within 1 of $w$. All of these
claims are easy to show starting from the basic fact that any edge in
the Farey graph separates it.
\scalefig{farey graph}{3in}{farey.ps}{The complex of curves for a
once-punctured torus or
4-times punctured sphere is the classical Farey graph. Vertices are
labelled by slopes of the corresponding curves relative to some
fixed homology basis.}
This phenomenon, that a large link distance generates strong constraints on
fellow traveling geodesics, persists in higher genus (even though the
separation property of edges does not generalize), and gives rise
to Lemmas \ref{Large Link} and \ref{Common Links}.
Let us now demonstrate this generalized phenomenon, together with the
main features of our hierarchy construction, in the case where $S$ is
a closed genus 2 surface.
\medskip
Let $h$ be a geodesic
in $\CC(S)$ with a segment $\ldots,u,v,w,\ldots$ occurring somewhere in $h$.
Let $h'$ be a fellow traveler of $h$ -- for concreteness suppose the
endpoints of $h$ and $h'$ are distance 1 or less apart, and occur at a
distance at least $2\delta+2$ from $u,v$ and $w$ (where $\delta$ is the
hyperbolicity constant of $\CC(S)$). Hyperbolicity of
$\CC(S)$ implies that $h$ and $h'$ are
$(2\delta+1)$-fellow travelers.
\realfig{genus 2 nonsep}{gen2nonsep.ps}{The short-cut argument for a
genus 2 surface, where $v$ is non-separating. If as shown $h'$ does
not meet $v$ then the dotted rectangle bounds $d_Y(u,w)$.}
Suppose first that the subsurface $Y=S\setminus v$ is connected -- a
two-holed torus (figure \ref{genus 2 nonsep}).
Then $u$ and $w$ give points in $\CC(Y)$ and let us
denote their distance in $\CC(Y)$ by $d_Y(u,w)$.
We can show the following statement:
{\em
If
the ``link distance'' $d_Y(u,w)$ is sufficiently large then
the fellow-traveler $h'$ must also pass through $v$.}
Suppose not -- then every vertex of $h'$ has
nontrivial intersection with $Y$.
Consider a path
beginning at $w$, moving forward in $h$ a distance $2\delta+2$, across
to $h'$ by a path of length at most $2\delta+1$, back along $h'$ and over
to $h$ by another path of length at most $2\delta+1$, which lands at a
point $2\delta+2$ behind $u$, and from there back up to $u$ along $h$.
By the triangle inequality, every point of this path not on $h'$ has
distance at least 2 from $v$ (except the endpoints $u$ and $w$ which
are in $Y$). Together with the assumption about $h'$
we have that every point on the path represents a
curve having nontrivial intersection with $Y$. The length of the
segment on $h'$ is bounded by $8\delta+8$ by the triangle inequality, so
the total length of
the path from $w$ to $u$ is at most $16\delta + 14$.
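For the reader's convenience, the five pieces of this path tally as
$$(2\delta+2)+(2\delta+1)+(8\delta+8)+(2\delta+1)+(2\delta+2)=16\delta+14.$$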
If we replace every curve with one arc of its intersection with $Y$,
we obtain a sequence of properly embedded arcs or curves in $Y$, each
disjoint from the previous. As we will see in Lemma
\ref{arcs to curves}, these
can each be replaced with a simple closed curve, so that each one is
distance at most 2 from its predecessor in $\CC(Y)$. (This is the
subsurface projection $\pi_Y$.)
We therefore obtain a
path in $\CC(Y)$ connecting $w$ to $u$, of length at most
$32\delta+28$. Thus if $d_Y(u,w)>32\delta+28$ we obtain a contradiction,
and $h'$ must pass through the vertex $v$.
In that case, we can say more. Let $u'$ be the predecessor and $w'$
the successor
of $v$ along $h'$. The same kind of argument, applied to the segments of
our path joining $u$ to $u'$ and $w$ to $w'$,
gives an upper bound
for $d_Y(u,u')$ and $d_Y(w,w')$.
Joining $u$ and $w$ by a geodesic $k$ in $\CC(Y)$ and
$u'$ and $w'$ by a geodesic $k'$ in $\CC(Y)$,
we now know by
hyperbolicity of $\CC(Y)$ that $k$ and $k'$ are fellow-travelers.
Now suppose instead that $v$ divides $S$ into components $Y_1$
and $Y_2$, each necessarily a one-holed torus (see figure
\ref{genus 2 sep}).
Since $u,v,w$ is a
geodesic, $u$ and $w$ must intersect nontrivially and hence belong to
the same component, say
$Y_1$. The previous ``short-cut'' argument now implies that, if
$d_{Y_1}(u,w)>32\delta+28$, some
curve $v'$ of $h'$ must miss $Y_1$. We could again have $v'=v$, but
now there is also the possibility that $v'$ lies (nonperipherally) in $Y_2$.
Suppose the latter case occurs. Set $Y'=S\setminus v'$, noting that
it must be a single two-holed torus containing $Y_1$,
and let $u'$ and $w'$ be
the predecessor and successor of $v'$ in $h'$.
We again apply the short-cut argument to conclude that
$d_{Y'}(u',w')\leq 32\delta+28$; for otherwise $v'$ would
appear in $h$, but it is not $u,v$ or $w$ and is distance 1 from $v$,
so this contradicts the fact that $h$ is a geodesic. Let
$m'$ be a geodesic in $\CC(Y')$ joining $u'$ and $w'$.
If every vertex of $m'$ intersects $Y_1$, then using $m'$, and
thus bypassing $v'$, we can find a path of some bounded length joining
$u$ and
$w$, such that every point on it represents a curve that meets $Y_1$.
Thus assuming
$d_{Y_1}(u,w)$ is sufficiently large, $m'$ must pass through a curve
missing $Y_1$. Since such a curve is essential in $Y'$, it can in
fact only be $v$ itself.
Let $y'$ and
$z'$ be the predecessor and successor of $v$ along $m'$. They must
lie in $Y_1$ and now in fact the same argument gives an upper
bound for the distance in $\CC(Y_1)$ between $y'$ and $u$ and
between $z'$ and $w$. Again by hyperbolicity any geodesic $k'$
in $\CC(Y_1)$ joining $y'$ and $z'$ fellow travels the geodesic
$k$ joining $u$ and $w$. This is
essentially the content of Lemma \lref{Common Links} in this case.
\realfig{genus 2 sep}{gen2sep.ps}{When $v$ separates $S$ into $Y_1$
and $Y_2$, $h'$ can pass through $v'$ in $Y_2$, but if $d_{Y_1}(u,w)$ is
large then $m'$, supported in $Y'=S\setminus v'$, must pass through $v$.}
So far, we have constructed over $h'$ a ``hierarchy'' of geodesics:
$m'$ is obtained as a geodesic in the link of $v'$, joining
its predecessor and its successor in $h'$. $k'$ is obtained in the
link of $v$, appearing in $m'$, in the same way. We say that $m'$ is
{\em subordinate} to $h'$, and $k'$ to $m'$.
For the hierarchy over $h$ we have something similar, with the
geodesic $k$ supported in one of the complementary domains $Y_1$ of $v$,
and hence subordinate to $h$, but we have not constructed anything in
the domain $Y_2$. A geodesic in $Y_2$ does arise naturally, in the
following way. Let $U=S\setminus u$ and $W=S\setminus w$, noting that
both of these are two-holed tori containing $Y_2$. There are geodesics
$p$ supported in $U$ and $q$ supported in $W$, so that $p$ joins the
predecessor of $u$ to its successor $v$, and $q$ joins the predecessor
$v$ of $w$ to its successor (see figure \ref{r in Y2} for
schematic). Let $s$ be the vertex of $p$ preceding
$v$, and let $t$ be the vertex of $q$ following $v$. Each is disjoint
from $v$, and therefore must lie in $Y_2$. We therefore may join $s$
to $t$ by a geodesic $r$ in $\CC(Y_2)$. In the notation we will later
develop, $r$ is {\em forward subordinate} to $q$, since it is
supported in the domain of $q$ minus the vertex $v$, and its last
vertex is the successor of $v$. Similarly $r$ is {\em backward
subordinate} to $p$.
\realfig{r in Y2}{inY2.ps}{The geodesic $r$, supported in $Y_2$, arises
naturally after the geodesics $p$ and $q$ are constructed in the
links of $u$ and $w$.}
Let us see how pants decompositions arise in this structure. In the
hierarchy over $h'$, the vertices $v'$ at bottom level (in $h'$), $v$ on
the next level (in $m'$), and any vertex $x$ in the
geodesic $k'$, form a pants decomposition, which we also call a {\em
slice} of the hierarchy. If $x'$ is the successor of $x$ in $k'$
(so $x$ and $x'$ are neighbors in the Farey graph $\CC(Y_1)$),
the transition from $(v',v,x)$ to $(v',v,x')$ is an {\em elementary
move}.
In the hierarchy over $h$ we can see a slice with different
organization: starting with $v$ at bottom level, we take any vertex
$a$ in $k$ and $b$ in $r$, and the triple $(v,a,b)$ makes a pants
decomposition. We can move $a$ and $b$ independently in their
respective geodesics, since their domains ($Y_1$ and $Y_2$) are disjoint.
This kind of idea will give a way to ``resolve'' a hierarchy
(non-uniquely) into a sequence of slices, or markings, which will then
enable us to describe a useful class of words in the mapping class group.
In these examples we have only produced pants decompositions, but
in our final construction there will be complete markings, which
include twisting data around each pants
curve. This will be done using ``annulus complexes,'' which are
analogous to the links of vertices in the Farey graph.
\subsection{Other applications and directions}
We hope that the tools developed here can be used to give an
algorithmic approach to $\Mod(S)$ in which the complexity of
problems such as the conjugacy problem can be computed.
In particular, the conjugacy bound of Theorem \ref{Conjugacy Bound},
together with the quasi-geodesic words constructed from hierarchies,
are a good start provided that one can give an effective algorithm to {\em
construct} hierarchies with a Turing machine.
The word problem, by comparison, admits a quadratic-time solution
because $\Mod(S)$ is known to have an {\em automatic structure} (see Mosher
\cite{mosher:automatic}). A stronger condition known as a {\em
biautomatic structure} (see \cite{epstein-et-al} for definitions of
these terms) would give bounds on the conjugacy problem,
but whether one exists remains open. Finding a biautomatic structure
was an initial motivation for this paper, but significant problems
remain. In particular the paths obtained from resolutions of
hierarchies are {\em not} a bicombing of $\Mod(S)$, because of the
presence of disjoint domains in $S$, whose order of traversal can
differ in different paths. The standard ``diagonalization'' method of moving in
both domains at once runs into some significant technical problems in
our setting. However, we believe that the hierarchy structure
should be powerful enough by itself to give algorithmic results.
A rather different application of our ideas is to questions of
rigidity and classification for hyperbolic 3-manifolds. In
\cite{minsky:torus}, Kleinian representations of the fundamental group
of the punctured torus were studied via the length functions they
induce on its curve complex, the Farey graph. A connection between the
combinatorics of this graph and the geometry of the corresponding
3-manifolds was established, which was a primary ingredient in the
proof of Thurston's Ending Lamination Conjecture in that case. In
general, given a representation $\rho:\pi_1(S)\to PSL(2,\C)$ one can
study the complex translation lengths of conjugacy classes of simple
curves, viewed as a function on $\CC(S)$. In \cite{minsky:lengthfunctions}
some preliminary convexity properties are established for these
functions, which we hope will prove useful in studying the
general classification problem for Kleinian groups.
\section{Tight geodesics and hierarchies}
\label{hierarchies}
This section describes the main construction of our paper, hierarchies
of tight geodesics. After defining these notions in \S\ref{hierarchy
defs}, we prove some existence results, Lemma \ref{Tight geodesics
exist} and Theorem \ref{Hierarchies exist}, in \S\ref{existence results}.
Hierarchies give us the combinatorial framework in which to carry out
the link projection arguments first outlined in the examples in
\S\ref{example} (and done in generality in Section \ref{large link
etc}).
The main ingredient in this is the {\em backward and forward
sequences} $\Sigma^\pm$, whose basic structural properties are
stated in Theorem \ref{Structure of Sigma}. The proof of this theorem
takes up the rest of Section \ref{hierarchies}, and along the way we
will develop a number of results, notably Theorem \ref{Completeness},
which describes when a hierarchy is {\em complete}.
We will also define a ``time order'', which is a partial order on a
hierarchy, generalizing the linear order on vertices of a single
geodesic, that will serve as a basic organizational principle in the
proofs here and in later sections.
\subsection{Definitions}\label{hierarchy defs}
\mbox{}
\bfheading{Tight geodesics.}
The non-uniqueness of geodesics in $\CC(S)$ is already manifested at a
local level, where typically, if $d_\CC(\alpha,\gamma) = 2$,
there can be infinitely many choices for a curve $\beta$ disjoint
from both. The notion of
{\em tightness}, defined below, addresses this local problem, but
more importantly introduces a crucial ingredient of control that makes
our combinatorial description of hierarchies possible. It is worth
noting that the only place where we make direct use of tightness is in
Lemma \ref{contiguous footprint}.
A pair of curves or curve systems $\alpha,\beta$ in a surface $Y$ are
said to {\em fill } $Y$ if all non-trivial non-peripheral curves in $Y$
intersect at least one of $\alpha$ or $\beta$. If $Y$ is a subdomain
of $S$ then it also holds that any curve $\gamma$ in $S$ which intersects a
boundary component of $Y$ must intersect one of $\alpha$ or $\beta$.
Given arbitrary curve systems $\alpha,\beta$ in $\CC(S)$, there
is a unique subsurface $F(\alpha,\beta)$ which they fill: Namely,
thicken the union of the geodesic representatives, and fill in all
disks and once-punctured disks. Note that $F$ is connected if and only
if the union of geodesic representatives is connected.
For a subdomain $X\subseteq Z$ let $\boundary_Z(X)$ denote the
{\em relative boundary} of $X$ in $Z$, i.e. those boundary components
of $X$ that are non-peripheral in $Z$.
\begin{definition}{tight seq def}
Let $Y$ be a domain in $S$. If $\xi(Y)>4$, a sequence of simplices
$\{v_0,\ldots,v_N\}$ in $\CC(Y)$ is called {\em tight} if
\begin{enumerate}
\item For any vertices $w_i$ of $v_i$ and $w_j$ of $v_j$ where $i\ne
j$, $d_{\CC(Y)}(w_i,w_j) = |i-j|$,
\item For each $1\le i \le N-1$, $v_i$ represents the relative
boundary $\boundary_Y F(v_{i-1},v_{i+1})$.
\end{enumerate}
If $\xi(Y)=4$ then a tight sequence is just the vertex sequence of any
geodesic.
If $\xi(Y)=2$ then a tight sequence is the vertex sequence of any
geodesic, with the added condition that the set of endpoints on
$\boundary\hhat Y$ of arcs representing the vertices equals the set of
endpoints of the first and last arc.
\end{definition}
Note that condition (1) of the definition specifies that given any
choice of components $w_i$ of $v_i$ the sequence $\{w_i\}$ is a
geodesic in the original sense. It also implies that $v_{i-1}$ and
$v_{i+1}$ always have connected union, since each vertex of $v_{i-1}$
lies at distance $2$ from, and hence intersects, each vertex of $v_{i+1}$.
In the annulus case, the restriction on endpoints of arcs is of little
importance, serving mainly to guarantee that between any two
vertices there are only finitely many tight sequences.
With this in mind, a {\em tight geodesic} will
be a tight sequence together with some additional data:
\begin{definition}{tight geod def}
A {\em tight geodesic} $g$ in $\CC(Y)$
consists of a tight sequence
$\{v_0,\ldots,v_N\}$, and two markings ${\mathbf I}={\mathbf I}(g)$ and ${\mathbf T}={\mathbf T}(g)$
(in the sense of \S \ref{markings}), called its {\em
initial} and {\em terminal} markings, such that
$v_0$ is
a vertex of $\operatorname{base}({\mathbf I})$ and $v_N$ is a vertex of
$\operatorname{base}({\mathbf T})$.
The number $N$
is called the length of $g$, usually written $|g|=N$.
We refer to each of
the $v_i$ as {\em vertices} of $g$ (by a slight abuse of notation).
$Y$ is called the {\em domain} or {\em support of $g$} and we write $Y=D(g)$.
We also say that $g$ is {\em supported in $D(g)$}.
\end{definition}
\medskip
Finally we will also, occasionally, allow tight geodesics to be
infinite, in one or both directions. If
a tight geodesic $g$ is infinite in the forward direction then ${\mathbf T}(g)$
is not defined, and if it is infinite in the backward direction then
${\mathbf I}(g)$ is not defined.
\medskip
\bfheading{Subordinacy.}
We first saw the relations of
{\em forward subordinacy} and {\em backward subordinacy} in the
simple examples in Section \ref{example}. Let us now introduce a bit
more notation and give the general definitions.
\medskip
\noindent{\em Restrictions of markings:}
If $W$ is a domain in $S$ and $\mu$ is a marking in $S$, then
the {\em restriction} of $\mu$ to $W$, which we write
$\mu|_ W$, is constructed from $\mu$ in the following way:
Suppose first that
$\xi(W)\ge 4$. Recall that for every $p\in\mu$, either
$p=\alpha\in\operatorname{base}(\mu)$ or
$p=(\alpha,t)$ with $t$ a transversal to $\alpha$. We let $\mu|_ W$
be the set of those $p$ whose base curve $\alpha$ meets $W$ essentially.
(Recall that $\alpha$ meets $W$ essentially if it cannot be deformed
away from $W$ -- in particular if $\alpha\subset W$ it must be
non-peripheral).
If $W$ is an annulus ($\xi(W)=2$) then $\mu|_ W$ is just $\pi_W(\mu)$.
Note in particular that,
if all the base curves of $\mu$ which meet $W$ essentially are
actually contained
in $W$, then $\mu|_ W$ is in fact a marking of $W$.
If $W$ is an annulus then $\mu|_ W$ is a marking of $W$ whenever it
is non-empty.
\medskip
\noindent{\em Component domains:}
Given a surface $W$ with $\xi(W)\ge 4$ and a curve system $v$ in $W$ we say
that $Y$ is a
{\em component domain of $(W,v)$} if either: $Y$ is a component of
$W\setminus v$, or $Y$ is an annulus with core a component of $v$.
Note that in the latter case $Y$ is non-peripheral, and thus satisfies
our definition of ``domain''.
Call a subsurface $Y\subset S$ a {\em component domain of $g$} if
for some vertex $v_j$ of $g$, $Y$ is a component domain of
$(D(g), v_j)$.
We note that this determines $v_j$ uniquely.
In such a case, let
$${\mathbf I}(Y,g) = \left\{
\begin{array}{ll}
v_{j-1}|_ Y & \text{if } v_j \text{ is not the first vertex} \\
{\mathbf I}(g)|_ Y & \text{if } v_j \text{ is the first vertex}
\end{array}
\right.
$$
be the {\em initial marking} of $Y$ relative to $g$.
Similarly let
$${\mathbf T}(Y,g) = \left\{
\begin{array}{ll}
v_{j+1}|_ Y & \text{if } v_j \text{ is not the last vertex} \\
{\mathbf T}(g)|_ Y & \text{if } v_j \text{ is the last vertex}
\end{array}
\right.
$$
denote the {\em terminal marking}.
Note in particular that these are indeed markings.
\medskip
\noindent{\em Special cases:}
\begin{enumerate}
\item
The motivating case is that in which $v_j$ is neither first nor last,
and $\xi(D(g))>4$.
If $Y$ is the component of $D(g)\setminus v_j$ which is filled by
$v_{j-1}$ and $v_{j+1}$, then ${\mathbf I}(Y,g) = v_{j-1}$ and ${\mathbf T}(Y,g) =
v_{j+1}$.
If $Y$ is any other component domain of $(D(g),v_j)$ then
${\mathbf I}(Y,g)={\mathbf T}(Y,g) = \emptyset$.
\item
If $Y$ is a thrice punctured sphere ($\xi(Y)=3$) then
always ${\mathbf I}(Y,g) = {\mathbf T}(Y,g) = \emptyset$.
\item
If $\xi(D(g))>4$ and $Y$ is an annulus (whose core curve is a component
of $v_j$), then unless $j=0$ or $j=|g|$, we must have
${\mathbf I}(Y,g) = {\mathbf T}(Y,g) = \emptyset$, since successive curves in $g$ are
disjoint. If e.g. $j=0$, then the core of $Y$ is a base curve of
${\mathbf I}(g)$, so if this curve has a transversal in the marking
${\mathbf I}(g)$ then ${\mathbf I}(Y,g)$ is nonempty.
\item
If $\xi(D(g)) = 4$ then $Y$ must be an annulus, and now
${\mathbf I}(Y,g)$ and ${\mathbf T}(Y,g)$ may be nonempty
because successive curves in $g$ do intersect.
\end{enumerate}
If $Y$ is a component domain of $g$ and ${\mathbf T}(Y,g)\ne\emptyset$ then we
say that $Y$ is {\em directly forward subordinate} to $g$, or $Y\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} g$.
Similarly if
${\mathbf I}(Y,g)\ne\emptyset$ we say that $Y$ is {\em directly backward
subordinate} to $g$, or $g\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} Y$.
\medskip
We can now define subordinacy for geodesics:
\begin{definition}{subordinate def}
If $k$ and $g$ are tight geodesics,
we say that $k$ is {\em directly forward subordinate} to $g$,
or $k\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} g$, provided
$D(k)\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} g$ and ${\mathbf T}(k) =
{\mathbf T}(D(k),g)$.
Similarly we define $g\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} k$ to mean $g \mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} D(k)$ and ${\mathbf I}(k) =
{\mathbf I}(D(k),g)$.
\end{definition}
We denote by {\em forward-subordinate}, or $\mathrel{\scriptstyle\searrow}$,
the transitive closure of $\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex}$,
and similarly for $\mathrel{\scriptstyle\swarrow}$.
We let $h\mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} k$ denote the condition that $h=k$ or
$h\mathrel{\scriptstyle\searrow} k$, and similarly for $k\mathrel{\genfrac{}{}{0pt}{}{\swarrow}{\raise.1ex\hbox{=}}} h$.
We include the notation $Y\mathrel{\scriptstyle\searrow} f$ where $Y$ is a domain
to mean $Y\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f'$ for some $f'$ such that
$f'\mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} f$, and similarly define
$b\mathrel{\scriptstyle\swarrow} Y$.
\bfheading{Hierarchies.}
\begin{definition}{hierarchy def}
A {\em hierarchy of geodesics} is a collection $H$ of
tight geodesics
in $S$ with the following properties:
\begin{enumerate}
\item
There is a distinguished {\em main geodesic} $g_H$ with domain $D(g_H)
= S$. The initial and terminal markings of $g_H$ are
denoted also ${\mathbf I}(H), {\mathbf T}(H)$.
\item
Suppose $b,f\in H$, and $Y\subset S$ is a domain such that $b\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex}
Y$ and $Y\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f$. Then $H$ contains a
unique tight geodesic $k$ such that $D(k)=Y$, $b\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} k$ and $k\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f$.
\item
For every geodesic $k$ in $H$ other than $g_H$, there are $b,f\in H$
such that $b\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} k \mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f$.
\end{enumerate}
\end{definition}
Condition (3) implies that for any $k$ in $H$, there is a sequence
$k=f_0\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex}\ldots\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f_m=g_H$, and similarly
$g_H=b_n\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex}\ldots\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} b_0=k$.
Later we will prove these sequences are unique.
\medskip\par\noindent{\em Infinite hierarchies.}
An infinite hierarchy is one in which the main geodesic $g_H$ is
allowed to be an infinite ray or a line. Note that in this case
${\mathbf I}(H)$ and/or ${\mathbf T}(H)$ may not be defined.
Typically a hierarchy will be finite, but
most of the machinery of the paper will work for infinite hierarchies,
so we will indicate
where relevant how each proof works in the infinite case.
Infinite
hierarchies will arise, as limits, in
\S\ref{limits of hierarchies}, and will be used in Section \ref{MCG}.
\subsection{Existence.}\label{existence results}
In this section we will prove that hierarchies exist. The first step
is the following:
\begin{lemma+}{Tight geodesics exist}
Let $u$ and $v$ be two vertices in $\CC(Y)$. There
exists a tight
sequence $v_0,\ldots,v_N$ such that $v_0=u$ and $v_N=v$.
\end{lemma+}
(Note that whereas $u$ and $v$ are single vertices in the complex
$\CC(Y)$, the interior vertices of the sequence may actually be curve
systems, i.e. simplices of $\CC(Y)$.)
\begin{pf}
If $\xi(Y)= 4$ then the vertex sequence of
any geodesic is tight, by definition.
If $\xi(Y)=2$ then the proof is an easy exercise. For example, one can
start with $u$ and apply Dehn twists in the covering annulus $\hhat Y$
to obtain a sequence of curves with the same endpoints as $u$ on $\boundary
\hhat Y$, arriving at one that has a single intersection with $v$, and then
making one final step.
We now assume $\xi(Y)>4$.
To begin, let $h=\{u=u_0,\ldots,u_N=v\}$ be a regular geodesic connecting
$u$ and $v$. We will describe a process that adjusts $h$ until a tight
sequence is obtained.
Let $v_1,v_2,v_3,v_4$ be any sequence of simplices
satisfying condition (1) of Definition \ref{tight seq def}, and suppose also
that $v_3$ is the boundary of $F(v_2,v_4)$, so that (2) holds for
$v_3$. If we now replace $v_2$ by
$v'_2 = \boundary F(v_1,v_3)$, we want to show that $v_3$ is still
$\boundary F(v'_2,v_4)$. In other words, ``fixing'' $v_2$ so that
condition (2) holds for it will not spoil condition (2) for $v_3$.
Note that (1) still holds for $v_1,v'_2,v_3,v_4$, by the triangle
inequality. In particular each
component of $v'_2$ intersects each component of $v_4$, so that their
union is connected and so is $F(v'_2,v_4)$. Since $v'_2$ is disjoint
from $v_3=\boundary F(v_2,v_4)$, $v'_2$ must be contained in
$F(v_2,v_4)$ and in particular $F(v'_2,v_4) \subseteq
F(v_2,v_4)$. Thus it
suffices to show that $v'_2$ and $v_4$ fill $F(v_2,v_4)$. Let $\alpha$
be any curve in $F(v_2,v_4)$. If $\alpha$ does not intersect $v_4$ then
it must intersect $v_2$, and also $v_1$, since $v_1$ and $v_4$ are at
distance $3$ and hence fill $Y$. Now $v_2$ is disjoint from both $v_1$
and $v_3$, so it cannot be contained in $F(v_1,v_3)$; as $\alpha$ meets
$v_1$, which lies in $F(v_1,v_3)$, as well as $v_2$, which does not,
$\alpha$ must cross $\boundary F(v_1,v_3)$, which is just
$v'_2$. We conclude that $F(v'_2,v_4) = F(v_2,v_4)$.
Now we can adjust the vertices of $h$ in any order: For any
$i\in[1,N-1]$ replace $u_i$ by $\boundary F(u_{i-1},u_{i+1})$.
For the new sequence, condition (2) holds for the $i$-th vertex.
Repeating the process for a new value of $i$ in $ [1,N-1]$, the
previous argument assures us that the condition persists for
previously adjusted values of $i$. Thus after $N-1$ steps we obtain a
tight sequence.
Note that there is no reason to expect a unique tight sequence -- the
process seems to depend on the order in which the indices are chosen.
\end{pf}
We will now show, starting with any two markings in a surface $S$,
how to build a hierarchy connecting them. That is,
\begin{theorem+}{Hierarchies exist}
Let $P$ and $Q$ be two markings in a surface $S$. There exists a
hierarchy $H$ of tight geodesics such that ${\mathbf I}(H)=P$ and ${\mathbf T}(H)=Q$.
\end{theorem+}
\begin{pf}
We say that $H$ is a {\em partial hierarchy} if it satisfies
properties (1) and (3) of Definition \ref{hierarchy def}, and the
uniqueness part of (2), but not
necessarily existence. That is:
\begin{enumerate}
\item[(2')]
Suppose $b,f\in H$, and $Y\subset S$ is a domain such that $b\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex}
Y$ and $Y\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f$. Then $H$ contains {\em at most one}
tight geodesic $k$ such that $D(k)=Y$, $b\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} k $ and $k\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f$.
\end{enumerate}
Of course every hierarchy is also a partial hierarchy.
We begin by choosing vertices $v\in \operatorname{base}(P)$ and $w\in\operatorname{base}(Q)$,
and connecting them with a tight sequence, which exists by Lemma
\ref{Tight geodesics exist}. Define a tight geodesic $g$
by letting its sequence be this one, and setting
${\mathbf I}(g) = P$ and ${\mathbf T}(g)=Q$.
Let $H_0 $ be the partial hierarchy $\{g\}$, and let us construct a finite
sequence of partial hierarchies $H_n$,
the last of which is a hierarchy.
Call a triple $(Y,b,f)$ with domain $Y$ and $b,f\in H_n$ an {\em
unutilized configuration} if
$b\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} Y \mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f$
but $Y$ is not the support of any geodesic $k\in H_n$
such that $b\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} k \mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f$.
Choose $(Y_n,b_n,f_n)$ to be any unutilized configuration in $H_n$.
Again use Lemma \ref{Tight geodesics exist} to construct a
tight geodesic $h_n$ supported in $Y_n$, with ${\mathbf I}(h_n)={\mathbf I}(Y_n,b_n)$ and
${\mathbf T}(h_n)={\mathbf T}(Y_n,f_n)$. Let $H_{n+1} = H_n \union \{h_n\}$.
The only thing to check is that the sequence terminates.
Define a sequence of tuples $M_n = (M_n(1),M_n(2),\ldots,M_n(\xi(S)-2))$ by
letting $M_n(j)$ denote the number of unutilized configurations
$(Y_n,b_n,f_n)$ in $H_n$ with $\xi(Y_n) = \xi(S)-j$.
Then, since for
each $Y_n$ in the above step, all component domains occurring in the
geodesic $h_n$ have complexity $\xi$ strictly smaller than $\xi(Y_n)$,
it follows immediately that the sequence $M_n$ is strictly decreasing in
lexicographic order as $n$ increases.
(Recall that in lexicographic order $(x_1,\ldots,x_k)<(y_1,\ldots,y_k)$
when for some $j\le k$, $x_i=y_i$ for all $i<j$ and $x_j<y_j$.)
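For example (the numbers here are purely illustrative), if $\xi(S)-2=3$,
filling in an unutilized configuration of top complexity might replace
the count tuple $(1,0,0)$ by $(0,2,1)$, and a later step might replace
$(0,2,1)$ by $(0,1,4)$: each step trades one configuration for finitely
many configurations of strictly smaller complexity, so the tuple drops in
lexicographic order no matter how many new entries appear further to the
right. Since the entries are nonnegative integers, this order admits no
infinite strictly decreasing sequence, which is why the construction must
terminate.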
Therefore the
sequence terminates in a partial hierarchy with no unutilized configurations
-- that is, a hierarchy.
\end{pf}
Note that the uniqueness part of property (2) holds automatically: although
the choice of $h_n$ at each stage was arbitrary, we
never put in more than one geodesic for a given configuration $b\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex}
Y \mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f$.
\subsection{Forward and backward sequences.}
Given a domain $Y\subset S$ and a hierarchy $H$, define
$$
\Sigma^+_H(Y) = \{ f\in H: Y\subseteq D(f) \ \ \text{and}\ \
{\mathbf T}(f)|_ Y\ne \emptyset\}
$$
and similarly
$$
\Sigma^-_H(Y) = \{ b\in H: Y\subseteq D(b) \ \ \text{and}\ \
{\mathbf I}(b)|_ Y\ne \emptyset\}
$$
which we also abbreviate by omitting the $H$ or $Y$ when they
are understood.
For infinite hierarchies, we alter the definition by also admitting
$g_H$ into $\Sigma^+$ whenever $g_H$ is infinite in the forward direction,
and into $\Sigma^-$ whenever it is infinite in the backward direction.
We will call $\Sigma^+(Y)$ the {\em forward sequence of $Y$} and
$\Sigma^-(Y)$ the {\em backward sequence of $Y$}. These names will be
justified by the following theorem,
which is perhaps the main point of our construction.
\begin{theorem+}{Structure of Sigma}
Let $H$ be a hierarchy, and $Y$ any domain in its support $S$.
\begin{enumerate}
\item If $\Sigma^+_H(Y)$ is nonempty then it has the form of a
sequence
$$f_0\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex}\cdots\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f_n=g_H,$$
where $n\ge 0$. Similarly,
if $\Sigma^-_H(Y)$ is nonempty then it has the form of a
sequence
$$g_H=b_m\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex}\cdots\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} b_0,$$
where $m\ge 0$.
\item If
$\Sigma^\pm(Y)$ are both nonempty, then $b_0 = f_0$, and
$Y$ intersects every vertex of $f_0$ nontrivially.
\item If $Y$ is a component domain in any geodesic $k\in H$ and
$\xi(Y)\ne 3$, then
$$f\in \Sigma^+(Y) \ \ \iff \ \ Y\mathrel{\scriptstyle\searrow} f,$$
and similarly,
$$b\in \Sigma^-(Y) \ \ \iff \ \ b\mathrel{\scriptstyle\swarrow} Y.$$
If, furthermore, $\Sigma^\pm(Y)$ are both nonempty,
then in fact $Y$ is the support of $b_0=f_0$.
\item Geodesics in $H$ are determined by their supports. That is, if
$D(h)=D(h')$ for $h,h'\in H$ then $h=h'$.
\end{enumerate}
\end{theorem+}
The ingredients for the proof of
Theorem \ref{Structure of Sigma} will be developed throughout the rest
of the section, and the proof will be completed in
\S\ref{structure sigma proof}.
In Section \ref{large link etc}, $\Sigma^\pm$ will be converted into
forward and backward {\em paths} in $\CC(S)$ which will enable us to
generalize the projection arguments in the examples of
\S\ref{example}, and prove Lemmas \ref{Large Link}, \ref{Common Links}
and their relatives.
\subsection{Footprints and subordinacy}
\label{footprints and subordinacy}
We begin with the following basic lemma, which gives one direction of
Part (3) of Theorem \ref{Structure of Sigma}:
\begin{lemma+}{Subordinate Intersection 1}
Let $H$ be a hierarchy in a surface $S$ and $Y$ a domain in
$S$. Let $h$ and $f$ denote geodesics in $H$.
\begin{enumerate}
\item
If $Y\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} h$ then $h\in\Sigma_H^+(Y)$.
\item
If $h\in\Sigma_H^+(Y)$ and $h\mathrel{\scriptstyle\searrow} f$, then
$f\in\Sigma_H^+(Y)$.
\item
If $Y\mathrel{\scriptstyle\searrow} f$ then $f\in\Sigma^+_H(Y)$.
\end{enumerate}
The same holds with $\mathrel{\scriptstyle\searrow}$ replaced by $\mathrel{\scriptstyle\swarrow}$, and $\Sigma^+$
replaced by $\Sigma^-$.
\end{lemma+}
\bfheading{Footprints.}
We will first need one new definition, which will be a basic tool in
all that follows:
\begin{definition}{footprint def}
For a domain $Y\subset S$ and a
tight geodesic $g$ with non-annular support $D(g)\subset S$, let $\phi_g(Y)$
be the set of vertices of $g$ disjoint
from $Y$. We call this the {\em footprint} of $Y$ on $g$.
\end{definition}
If $Y$ is a subdomain of $D(g)$, then immediately
\begin{equation}\label{footprint diam bound}
\operatorname{diam}(\phi_g(Y)) \le 2
\end{equation}
in the curve complex of $D(g)$ (note that if $Y$ is an annulus then by
definition of subdomain it is nonperipheral in $D(g)$).
It is also an immediate consequence of the definition that
\begin{equation}\label{footprint containment}
Y\subseteq Z \implies \phi_g(Z)\subseteq\phi_g(Y).
\end{equation}
Let us record the following elementary but crucial property of
footprints, which
is the only place where the tightness
property is used directly.
\begin{lemma}{contiguous footprint}
If $g$ is a tight geodesic and $Y\subset D(g)$ is a proper subdomain,
then $\phi_g(Y)$ is a sequence of 0, 1, 2 or 3 contiguous vertices of
$g$.
\end{lemma}
\begin{pf}
When $\xi(D(g))=4$, $\phi_g(Y)$ is empty except when $Y$ is an annulus
whose core is some vertex $v$ of $g$. In that case every other vertex
intersects $Y$, so $\phi_g(Y)$ is the single vertex $v$.
Now assume $\xi(D(g))>4$.
The diameter bound (\ref{footprint diam bound}) implies that
the only possibility for $\phi_g(Y)$ other than those mentioned in the
lemma is
that $\phi_g(Y)$ contains some $v_j$ and $v_{j+2}$ but not
$v_{j+1}$. However, since $g$ is a tight geodesic,
if $Y$ intersects $v_{j+1}$ it either intersects $v_j$ or $v_{j+2}$,
since $v_{j+1}=\boundary_{D(g)}F(v_j,v_{j+2})$.
\end{pf}
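To see that the bound of three can be attained, suppose $\xi(D(g))>4$
and $Y$ is an annulus whose core is a component of an interior vertex
$v_j$ of $g$ (recall that, as in the proof above, the core counts as
disjoint from $Y$). The simplices $v_{j-1}$ and $v_{j+1}$ are disjoint
from $v_j$ and hence from $Y$, while for $|i-j|\ge 2$ the condition
$d(v_i,v_j)=|i-j|$ forces every component of $v_i$ to intersect every
component of $v_j$, in particular the core of $Y$. Thus
$\phi_g(Y)=\{v_{j-1},v_j,v_{j+1}\}$, realizing the maximum allowed by
Lemma \ref{contiguous footprint}.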
Denote by $\min\phi_g(Y)$ and $\max\phi_g(Y)$ the
vertices of $\phi_g(Y)$ with lowest and highest index, respectively.
\begin{pf*}{Proof of Lemma \ref{Subordinate Intersection 1}}
Clearly (3) is a consequence of (1) and (2), so we prove them.
We will prove the forward-subordinate case. The backward-subordinate
case proceeds similarly.
To see (1), suppose $Y\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} h$. Then by definition $Y$ is a component
domain of
$(D(h),v_i)$ for some vertex $v_i$ of $h$, and ${\mathbf T}(Y,h)\ne\emptyset$.
If $v_i$ is the last vertex then ${\mathbf T}(Y,h) = {\mathbf T}(h)|_ Y$, so
this is nonempty and $h\in\Sigma^+(Y)$.
If $v_i$ is not the last vertex, we note that $v_i\in\phi_h(Y)$ and
$v_{i+1}$ is not in $\phi_h(Y)$.
It follows, since the footprint is contiguous
(Lemma \ref{contiguous footprint}), that
the last vertex is not in $\phi_h(Y)$, hence
${\mathbf T}(h)|_ Y\ne\emptyset$, and again $h\in\Sigma^+(Y)$.
If $h$ is infinite in the forward direction ($h=g_H$, and ${\mathbf T}(h)$
undefined) then automatically $h\in\Sigma^+(Y)$.
Now to prove (2), if $h\in\Sigma^+(Y)$ we have $Y\subset D(h)$ and
$ {\mathbf T}(h) |_ Y\ne \emptyset$.
Suppose first that $h\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f$ -- then
$D(h)$ is a component domain of $(D(f),v_j)$ for
some vertex $v_j$ of $f$. If $v_j$ is the last vertex then ${\mathbf T}(h) =
{\mathbf T}(f)|_ {D(h)}$, and it follows that
${\mathbf T}(f)|_ Y\ne\emptyset$.
If $v_j$ is not the last vertex then ${\mathbf T}(h) = v_{j+1}|_ {D(h)}$
and since this intersects $Y$, $v_{j+1}\notin \phi_f(Y)$.
Thus also the last vertex of $f$ is not in $\phi_f(Y)$, and we may again
conclude that $ {\mathbf T}(f)|_ Y\ne\emptyset$ (or $f$ is infinite in the
forward direction). In each case we have $f\in\Sigma^+(Y)$.
Part (2) now follows by induction.
\end{pf*}
The proof also gives the following slightly finer statement:
\begin{corollary+}{Footprints}
Let $H$ be a hierarchy, geodesics $h,f,b\in H$ and domain
$Y\subset D(h)$.
If $h\in\Sigma^+_H(Y)$ and $h \mathrel{\scriptstyle\searrow} f$, then
$$\max \phi_f(D(h)) = \max \phi_f(Y).$$
Similarly if $h\in\Sigma^-_H(Y)$ and $b\mathrel{\scriptstyle\swarrow} h$ then
$$\min \phi_b(D(h)) = \min \phi_b(Y).$$
\end{corollary+}
Note, a special case of this is that if $h_1\mathrel{\scriptstyle\searrow} h_2 \mathrel{\scriptstyle\searrow} f$ then
$\max\phi_f(D(h_1)) = \max\phi_f(D(h_2))$ by letting $Y=D(h_1)$.
(The condition $h_2\in\Sigma^+(D(h_1))$ follows from Lemma
\ref{Subordinate Intersection 1}.)
\begin{pf}
Since $h\mathrel{\scriptstyle\searrow} f$ there exists $h'$ such that $h\mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} h'\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f$.
Examining the proof of part (2) in
Lemma \ref{Subordinate Intersection 1} above,
we note that it shows that $D(h')$ is a component domain of $(D(f),v)$
where $v = \max\phi_f(D(h'))$, and if $v$ is not the last vertex in
$f$ then its successor intersects $Y$ and $D(h)$. Hence
$v=\max\phi_f(Y) =\max\phi_f(D(h))$. If $v$ is the last vertex then
automatically $v=\max\phi_f(Y) =\max\phi_f(D(h))$, since
$\phi_f(D(h'))\subseteq\phi_f(D(h))\subseteq\phi_f(Y)$
by (\ref{footprint containment}).
The backward case proceeds similarly.
\end{pf}
\subsection{Uniqueness of descent}
\label{sigma basics}
By virtue of Lemma \ref{Subordinate Intersection 1}, we know that
$\Sigma^+(Y)$ contains any sequence of geodesics $f_0,\ldots,f_n$
satisfying $f_0\in\Sigma^+(Y)$ and $f_i\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f_{i+1}$
(and similarly for $\Sigma^-$).
The goal of the next lemma is to show that in fact $\Sigma^+$ and
$\Sigma^-$ are each just one such sequence, and as a consequence to prove
that geodesics in $H$ are determined by their domains.
\begin{lemma+}{Uniqueness of Descent}
Let $H$ be a hierarchy, and $Y$ any domain in its support $S$.
\begin{enumerate}
\item If $\Sigma^+_H(Y)$ is nonempty then it has the form of a
sequence $f_0\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex}\cdots\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f_n=g_H$, where $n\ge 0$.
Similarly
if $\Sigma^-_H(Y)$ is nonempty then it has the form of a
sequence $g_H=b_m\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex}\cdots\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} b_0$, where $m\ge 0$.
\item If there is some $h\in H$ with $D(h) = Y$, then there is exactly
one such $h$, and $h=f_0=b_0$.
\end{enumerate}
\end{lemma+}
In particular, this gives parts (1) and (4) of Theorem
\lref{Structure of Sigma}.
\begin{pf}
Note that (2) is a consequence of (1) for any given $Y$, since
if $Y=D(h)$ then $h\in \Sigma^+$ and must have the smallest domain of
any member of $\Sigma^+$ -- hence $h=f_0$, and similarly with
$\Sigma^-$ we have $h=b_0$. In particular $h$ is unique.
We will prove (1)
by induction on $\xi(S)- \xi(Y)$.
If $\xi(S)-\xi(Y)=0$ then $Y=S$ and $\Sigma^+=\Sigma^-=\{g_H\}$, hence
(1) holds.
Let $g\in\Sigma^+(Y)$, and suppose that $f\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} g$ for some
$f\in\Sigma^+(Y)$.
We claim that $D(f)$ is uniquely determined by $Y$ and $g$, and in
fact if $\xi(D(f))>\xi(Y)$ then $f$ itself is uniquely determined.
Suppose for a moment that $Y$ is not an annulus.
By definition of $f\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} g$, $D(f)$ is a component domain
for $(D(g),v)$ where the vertex $v$ is $\max\phi_g(D(f))$.
By Corollary \lref{Footprints}, $v$ is also $\max\phi_g(Y)$.
Hence, $D(f) $ is the unique component of $(D(g),v)$ containing
$Y$, which depends only on $Y$ and $g$.
Now if $\xi(D(f)) > \xi(Y)$ then by induction (2) holds for $D(f)$, so
that $f$ is the unique geodesic in $\Sigma^+$ such that $f\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} g$.
If $Y$ is an annulus the same proof goes through verbatim, recalling
that if $D(f)$ is not an annulus it must contain $Y$ as a
nonperipheral annulus (otherwise ${\mathbf T}(f)|_{Y}$ would be empty,
contradicting $f\in\Sigma^+(Y)$),
and is therefore the unique component domain of
$(D(g),v)$ with this property. If $D(f)$ is an annulus it must be
equal to $Y$ so again it is uniquely determined.
If $\Sigma^+\ne\emptyset$, then since any $f\in
\Sigma^+$ is forward-subordinate to $g_H$,
Lemma \ref{Subordinate Intersection 1} implies that
$g_H\in\Sigma^+$. For any $k\in \Sigma^+$ there exists
some $f$ such that $k\mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} f \mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} g_H$, and $f\in \Sigma^+$
again by Lemma \ref{Subordinate Intersection 1}. By the previous
claim, we know that $D(f)$ is independent of $k$, and so is $f$ if
$D(f)\ne Y$.
Thus, replacing $g_H$ with $f$ and
repeating this
argument inductively, we obtain a single sequence
$f_1\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex}\cdots \mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} g_H$ which accounts for all of $\Sigma^+$
except possibly those geodesics $f$ with $D(f)=Y$.
Now repeating this for $\Sigma^-$ we obtain a sequence
$g_H\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex}\cdots\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} b_1$. If $Y$ does not support any geodesic then
we are done (reindexing both sequences to start with $0$).
If $Y$ supports at least one geodesic $h$, then
$h\in\Sigma^+\intersect \Sigma^-$, and by the same logic as above we
have that $h\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f_1$ and $b_1\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} h$. However, by the uniqueness
part of the definition of a hierarchy, there can only be one
such $h$. Setting $f_0=b_0=h$, we are done.
\end{pf}
The following partial converse of Lemma
\ref{Subordinate Intersection 1} is an immediate corollary of Lemma
\ref{Uniqueness of Descent}:
\begin{lemma+}{Subordinate Intersection 2}
Let $k$ and $h$ be geodesics in a hierarchy $H$.
If $h\in \Sigma^+_H(D(k))$ then
$k\mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} h$. Similarly,
if $h\in \Sigma^-_H(D(k))$ then
$h\mathrel{\genfrac{}{}{0pt}{}{\swarrow}{\raise.1ex\hbox{=}}} k$.
\end{lemma+}
Here is an easy corollary of Lemma \ref{Subordinate Intersection 2}.
\begin{corollary}{easy containment}
If $D(k)$ is properly contained in $D(h)$ then $\phi_h(D(k))$ is non-empty.
\end{corollary}
\begin{pf}
If $h$ is infinite then $h=g_H$ and by definition $h\mathrel{\scriptstyle\swarrow} k$, so
$\phi_h(D(k))\ne \emptyset$. Otherwise ${\mathbf I}(h)$ is defined.
If ${\mathbf I}(h)|_{D(k)}$ is empty then $\phi_h(D(k))$ contains the
initial vertex. If not, then $h\in\Sigma^-(D(k))$ and,
by Lemma \ref{Subordinate Intersection 2}, $k$ is
backward-subordinate to $h$, and hence $D(k)$ is contained in a
component domain
for some other vertex, so that again $\phi_h(D(k))$ is non-empty.
\end{pf}
Let us also record the following consequence of these lemmas:
\begin{lemma}{Y direct to top}
Let $Y$ be a domain in $S$ and let $h$ be a geodesic in a hierarchy $H$ such that
$Y\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} h$. Then $h$ is uniquely determined, and in particular,
writing $\Sigma^+_H(Y) =
\{f_0,\ldots\}$ we have
either $h=f_1$ and $Y = D(f_0)$, or $h=f_0$ and $Y$ supports no
geodesic in $H$.
The corresponding statement holds when $h\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} Y$.
\end{lemma}
\begin{pf}
By Lemma \ref{Subordinate Intersection 1}, $h\in\Sigma^+(Y)$.
If $h=f_0$ then $Y$ cannot support a geodesic: such a geodesic would lie
in $\Sigma^+(Y)$ with domain $Y$, whereas $h=f_0$ already has the
smallest domain among elements of $\Sigma^+(Y)$, and $Y$ is properly
contained in $D(h)$.
Suppose $h=f_{i+1}$ where $i\ge 0$. Then by Corollary
\ref{Footprints},
$\max\phi_h(Y) = \max\phi_h(D(f_i))$ and hence both $Y$ and $D(f_i)$
are component domains for the same vertex of $h$. As in the proof of
Lemma \ref{Uniqueness of Descent} we conclude $Y=D(f_i)$ and so $i=0$
since there can be no smaller domain in $\Sigma^+(Y)$.
The case where $h\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} Y$ is similar.
\end{pf}
\subsection{Time order}
\label{time order section}
The vertices of any geodesic admit a linear order from initial to
terminal, and the relations $\mathrel{\scriptstyle\searrow}$ and $\mathrel{\scriptstyle\swarrow}$ are, by definition,
partial orders. It turns out that these can be combined to define a
useful partial order $\prec_t$ on a hierarchy, and a
related partial order $\prec_p$ on the set of ``pointed geodesics'' of
a hierarchy. In this section we define these relations and study their
basic properties. Let a hierarchy $H$ be fixed throughout this section.
\begin{definition}{time order def}
For any $h,h'\in H$,
we say that {\em $h$ precedes $h'$ in time order}, or
$$h\prec_t h',$$
if there exists a geodesic $m\in H$
such that $D(h), D(h')\subset D(m)$, and
$$\max\phi_m(D(h)) < \min\phi_m(D(h')).$$
(In particular $\phi_m(D(h))$ and
$\phi_m(D(h'))$ are disjoint.)
\end{definition}
Note that if this occurs then automatically $h\mathrel{\scriptstyle\searrow} m$ and $m \mathrel{\scriptstyle\swarrow} h'$:
since $\phi_m(D(h))$
must miss the terminal vertex of $m$, $m$ is in $\Sigma^+(D(h))$ and we
may apply Lemma \ref{Subordinate Intersection 2}, and similarly for
$h'$ using $\Sigma^-(D(h'))$.
We call $m$ the {\em geodesic used to compare $h$ and $h'$}, and note that
it is unique: If some $m'$ is also used to obtain $h\prec_t h'$, then
both $m$ and $m'$ appear in the forward sequence of $h$ and the
backward sequence of $h'$. In particular either $m\mathrel{\scriptstyle\searrow} m'$ or
$m'\mathrel{\scriptstyle\searrow} m$; suppose the first, without loss of generality. Then
$\phi_{m'}(D(m))$ is non-empty, and by (\ref{footprint containment})
is contained in both $\phi_{m'}(D(h))$ and $\phi_{m'}(D(h'))$,
contradicting the assumption that they are disjoint.
If either $h\prec_t h'$ or $h'\prec_t h$ then we say $h$ and $h'$ are {\em
time-ordered}. Note, we have not yet shown that these two
possibilities are mutually exclusive,
or indeed that $\prec_t$ is a partial order. Before we do
that let us define a more general relation.
\bfheading{Partial order on pointed geodesics.}
Let $k$ be a tight geodesic with
vertices $v_0,\ldots,v_N$. We generalize slightly the notion of vertex
to a {\em position} on $k$, which is either a vertex
$v_i$, or ${\mathbf I}(k)$ or ${\mathbf T}(k)$. The linear order $v_i<v_j$ when $i<j$
extends to an order on positions where we say ${\mathbf I}(k) < v_0$ if the two
are not the same,
and similarly $v_N<{\mathbf T}(k)$ if the two are not the same.
We can now discuss {\em pointed geodesics}, which are
pairs $(k,v)$ where $v$ is a position in $k$.
We extend the notion of footprint slightly as follows:
Given a pointed geodesic $(k,v)$ and a geodesic $h$ with
$D(k)\subseteq D(h)$, we define
\begin{equation*}
\hat\phi_h(k,v) = \begin{cases}
\phi_h(D(k)) & \text{if}\ \ D(k) \subset D(h), \\
\{v\} & \text{if}\ \ k=h.
\end{cases}
\end{equation*}
Note that $\hat\phi_h(k,v)$ could be $\{{\mathbf I}(k)\}$ or $\{{\mathbf T}(k)\}$ in the second
case, in contrast with regular footprints which can only consist of
vertices.
We now define a relation $\prec_p$ on pairs $(k,v)$:
\begin{definition}{pair order def}
We say
$$(k,v) \prec_p (h,w)$$
if and only if there exists a geodesic $m$ such that
$k\mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} m \mathrel{\genfrac{}{}{0pt}{}{\swarrow}{\raise.1ex\hbox{=}}} h$, and
$$\max\hat\phi_m(k,v) < \min\hat\phi_m(h,w).$$
\end{definition}
Again, it is clear that $m$ is unique. It is also immediate from the
definitions that
\begin{equation}
\label{pprec defines tprec}
k \prec_t h \iff (k,{\mathbf T}(k)) \prec_p (h,{\mathbf I}(h)).
\end{equation}
Indeed, $(k,v)\prec_p(h,w)$ breaks up into four
possible, mutually exclusive, cases:
\begin{itemize}
\item $k\prec_t h$,
\item $k=h$ and $v<w$,
\item $k\mathrel{\scriptstyle\searrow} h$ and $\max \phi_h(D(k)) < w$,
\item $k\mathrel{\scriptstyle\swarrow} h$ and $v < \min \phi_k(D(h))$.
\end{itemize}
We now verify that these relations are partial orders, together with a
number of other properties. The following lemma holds for infinite as
well as finite hierarchies.
\begin{lemma+}{Time Order}
\mbox{}
\begin{enumerate}
\item
If $D(h)\subseteq D(h')$ then $h$ and $h'$ are not time-ordered.
\item
On the other hand if $D(h)\intersect D(h') \ne \emptyset$ and neither
domain is contained in the other, then $h$ and $h'$ are time-ordered.
\item
Suppose $b\mathrel{\genfrac{}{}{0pt}{}{\swarrow}{\raise.1ex\hbox{=}}} k\mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} f$.
Then either $b=f$, $b\mathrel{\scriptstyle\searrow} f$,
$b\mathrel{\scriptstyle\swarrow} f$, or $b\prec_t f$.
\item
Suppose that $k_1\prec_t k_2$. If $h\mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} k_1$ then
$h\prec_t k_2$. Similarly if $k_2\mathrel{\genfrac{}{}{0pt}{}{\swarrow}{\raise.1ex\hbox{=}}} g$ then $k_1\prec_t g$.
\item
The relation $\prec_p$ is a (strict) partial order.
\item
The relation $\prec_t$ is a (strict) partial order.
\end{enumerate}
\end{lemma+}
\begin{pf}
To prove part (1),
suppose $D(h)\subseteq D(h')$. Then in any geodesic $m$ such that $D(m)$
contains $D(h')$,
$\phi_m(D(h'))\subseteq \phi_m(D(h))$. In particular the footprints can
never be disjoint, and hence neither $h\prec_t h'$
nor $h'\prec_t h$ can hold.
\medskip
Next let us prove part (2). Suppose $D(h)\intersect D(h')\ne \emptyset$.
Consider the following assertion:
If $m$ is a geodesic such that $D(m)$ contains $D(h)\union D(h')$, and
also $m\mathrel{\scriptstyle\swarrow} h$, then either
$D(h')\subseteq D(h)$, or $D(h)\subseteq D(h')$, or $h$ and $h'$ are
time-ordered.
We shall prove this by induction on
$\xi(D(m))$.
If $\xi(D(m))=2$,
then $m=h=h'$ and we are done. More generally,
if $\xi(D(m)) = \xi(D(h'))$ then $D(h)\subseteq D(h')$, and again we
are done.
Otherwise, we must have
$\xi(D(m)) > \xi(D(h'))$, so consider the footprints $\phi_m(D(h))$
and $\phi_m(D(h'))$ (the former is non-empty since $m\mathrel{\scriptstyle\swarrow} h$, and the
latter is non-empty by Corollary \ref{easy containment}).
If they are disjoint then $h$ and $h'$ are time-ordered and
again we are done. If
they overlap then, since each is an interval of contiguous
vertices, the minimum of one must be contained in the other.
If the minimum $v$ of $\phi_m(D(h))$ is contained in $\phi_m(D(h'))$, let
$m'$ be the geodesic such that $m\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} m' \mathrel{\genfrac{}{}{0pt}{}{\swarrow}{\raise.1ex\hbox{=}}} h$.
Then $D(m')$ is a
component domain of $(D(m),v)$. Since $v\in \phi_m(D(h'))$,
$v$ does not intersect $D(h')$,
and since $D(h)$ and $D(h')$ intersect, $D(h')$
must be in the same component domain of $(D(m),v)$, namely $D(m')$.
If $h=m'$ then we are done, with $D(h')\subset D(h)$. If not, we may
apply the inductive assumption since $\xi(D(m'))<\xi(D(m))$, and again
we are done.
If $v$ is not in $\phi_m(D(h'))$, then $v'= \min\phi_m(D(h'))$
must be in $\phi_m(D(h))$, and furthermore $v'$ is {\em
not} the first vertex of $m$ since $v$ lies to its left. Thus
$m\in\Sigma^-(D(h'))$ and it follows by
Lemma \ref{Subordinate Intersection 2} that $m\mathrel{\scriptstyle\swarrow} h'$,
and therefore we may reverse the roles of $h$ and $h'$ and apply the
previous paragraph. This concludes the proof of the assertion, and (2)
follows by applying the assertion when $m$ is the main geodesic $g_H$.
\medskip
To prove (3), we may
suppose that $b\mathrel{\scriptstyle\swarrow} k\mathrel{\scriptstyle\searrow} f$ and $b\ne f$, as the cases of equality
are trivial.
By Lemma \ref{Subordinate Intersection 1}, $f\in\Sigma^+(D(k))$.
If $D(b)\subset D(f)$ then
$f\in\Sigma^+(D(b))$ as well, and by Lemma
\ref{Subordinate Intersection 2}, $b\mathrel{\scriptstyle\searrow} f$. Similarly if
$D(f)\subset D(b)$, then $b\mathrel{\scriptstyle\swarrow} f$.
Now suppose that neither $D(b)\subseteq D(f)$ nor
$D(f)\subseteq D(b)$.
Since $D(k) \subset D(b)\intersect D(f)$, part (2) shows that
$b$ and $f$ are time-ordered, so let $D(m)$ contain both
$D(b)$ and $D(f)$ such that their footprints on $m$ are disjoint.
We claim that $\min\phi_m(D(b))=\min \phi_m(D(k))$. If $b$ is
backward-subordinate to $m$ then
this is just Corollary \lref{Footprints}. If $b$ is not backward-subordinate
to $m$ then neither is $k$, so that by Lemma
\ref{Subordinate Intersection 2} both
minima are equal to the first vertex of $m$.
Similarly, $\max\phi_m(D(f))=\max\phi_m(D(k))$. It follows that
$\max\phi_m(D(b))< \min\phi_m(D(f))$
(since we already know they are
disjoint). Thus $b\prec_t f$, as desired.
\medskip
Next we prove (4). Since $k_1\prec_t k_2$, let $m$ be the geodesic used
to compare them. Since $h\mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} k_1\mathrel{\scriptstyle\searrow} m$,
$\max\phi_m(D(h)) = \max\phi_m(D(k_1))$ by Corollary \lref{Footprints}.
Thus $\max\phi_m(D(h))< \min\phi_m(D(k_2))$, so $h\prec_t k_2$. The proof
that $k_1\prec_t g$ is similar.
\medskip
To prove (5), we must in particular show that $\prec_p$
is transitive. Suppose $(k,v) \prec_p (k',v')$, and
$(k',v') \prec_p (k'',v'')$.
Let $b$ be the geodesic used to compare $(k,v)$ and $(k',v')$, and $f$ be the
geodesic used to compare $(k',v')$ and $(k'',v'')$. Then in particular
$k\mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} b \mathrel{\genfrac{}{}{0pt}{}{\swarrow}{\raise.1ex\hbox{=}}} k' \mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} f \mathrel{\genfrac{}{}{0pt}{}{\swarrow}{\raise.1ex\hbox{=}}} k''$.
By (3), either $b=f$, $b\mathrel{\scriptstyle\searrow} f$,
$b\mathrel{\scriptstyle\swarrow} f$, or $b\prec_t f$.
If $b=f$ then the footprints $\hat\phi_b(k,v)$,
$\hat\phi_b(k',v')$, and
$\hat\phi_b(k'',v'')$ are disjoint and linearly ordered from left to
right, so $(k,v) \prec_p (k'',v'')$ immediately.
If $b\mathrel{\scriptstyle\searrow} f$ then neither $k$ nor $k'$ can equal $f$. Thus
$\hat\phi_f(k,v) = \phi_f(D(k))$ and
$\hat\phi_f(k',v') = \phi_f(D(k'))$. By Corollary \lref{Footprints} we
have
$\max\phi_f(D(k)) = \max\phi_f(D(b))$, and by
(\ref{footprint containment}) we have
$\phi_f(D(b)) \subset \phi_f(D(k'))$. Then since $\max\phi_f(D(k')) <
\min \hat\phi_f(k'',v'')$, we conclude $\max\hat\phi_f(k,v) < \min
\hat\phi_f(k'',v'')$ so that $(k,v) \prec_p (k'',v'')$.
The case where $b\mathrel{\scriptstyle\swarrow} f$ follows similarly.
If $b\prec_t f$ then since $k\mathrel{\genfrac{}{}{0pt}{}{\searrow}{\raise.1ex\hbox{=}}} b$
part (4) gives $k\prec_t f$, and since $f\mathrel{\genfrac{}{}{0pt}{}{\swarrow}{\raise.1ex\hbox{=}}} k''$
part (4) gives $k\prec_t k''$. It follows that $(k,v) \prec_p (k'',v'')$.
We have therefore proved that $\prec_p$ is transitive.
It follows from the definition that $(h,v)\prec_p (h,v)$ can never
hold, and so transitivity implies $(h,v)\prec_p(k,w)$ and
$(k,w)\prec_p(h,v)$ are mutually exclusive.
Thus $\prec_p$ is a strict partial order.
\medskip
Part (6) follows immediately from the relation (\ref{pprec defines
tprec}) between $\prec_p$ and $\prec_t$. It is also easy to see it
directly by an argument very similar to the above.
\end{pf}
The next lemma gives a sufficient condition for two
geodesics with disjoint domains {\em not} to be time-ordered.
\begin{lemma}{diagonalizable}
Let $v$ be a vertex of $h$ and suppose that
$D(k)$ and $D(k')$ lie in different component domains of $(D(h),v)$.
Then $k$ and $k'$ are not time-ordered.
\end{lemma}
Remark: we expect that there {\em will} be geodesics
with disjoint domains which are nevertheless time-ordered. This is in
fact one of the serious difficulties in applications of hierarchies.
\begin{pf}
Suppose by way of contradiction that
$k\prec_t k'$, and let $m$ be the geodesic
used to compare them.
Note first $m\ne h$ since the footprints of $k$ and
$k'$ on $m$ are disjoint, and on $h$ they both contain $v$.
Suppose $D(m)\subset D(h)$. Then $\min\phi_h(D(m))=\min\phi_h(D(k'))$
since $m\mathrel{\scriptstyle\swarrow} k'$, just as we argued in the proof of Lemma
\ref{Time Order}, part (3). Similarly $\max\phi_h(D(m))=\max\phi_h(D(k))$. It
follows that $\phi_h(D(m))$
contains $\phi_h(D(k))\intersect \phi_h(D(k'))$, which in particular
contains $v$. Thus $D(m)$ is disjoint from $v$, and since $D(m)$ is
connected, it must lie in one component domain of $(D(h),v)$. This
contradicts the assumption that $D(k)$ and $D(k')$ lie in different
components.
Now suppose $D(h)\subset D(m)$. By Corollary \ref{easy containment},
$\phi_m(D(h))$ is nonempty, and since $D(h)$ contains both $D(k)$ and
$D(k')$, $\phi_m(D(h))\subset \phi_m(D(k))\intersect \phi_m(D(k'))$.
This contradicts the disjointness of footprints of $k$ and $k'$ in
$m$.
Finally if neither $D(h)$ nor $D(m)$ is contained in the other, their
intersection is still non-empty since both contain $D(k)\union D(k')$.
Thus by Lemma \ref{Time Order} part (2), $h$ and $m$ are time-ordered.
Suppose $h \prec_t m$. Since $m\mathrel{\scriptstyle\swarrow} k'$, by
Lemma \ref{Time Order} part (4) we have $h\prec_t k'$. However
$D(k')\subset D(h)$ so this contradicts Lemma \ref{Time Order} part
(1). Similarly if $m\prec_t h$ then since $k\mathrel{\scriptstyle\searrow} m$
we have $k\prec_t h$, again a contradiction.
\end{pf}
\subsection{Complete hierarchies}
\label{complete hierarchy section}
A hierarchy $H$ is {\em complete} if, for every domain $Y$ with $\xi(Y)\ne
3$, which is a
component domain in some geodesic $k\in H$, there is a geodesic $h\in
H$ with $Y=D(h)$.
In this section we will prove:
\begin{theorem+}{Completeness}
If the markings ${\mathbf I}(H)$ and ${\mathbf T}(H)$ (where defined) are complete,
then $H$ is complete.
\end{theorem+}
This will require Lemma \ref{Subordinate Intersection 3} below,
which is also the last and perhaps trickiest piece needed to prove
Theorem \lref{Structure of Sigma}.
This lemma addresses the issue of when a component domain appearing in
a hierarchy is the support of a geodesic in the hierarchy.
If $v$ is a marking in a non-annular $W$, a {\em component domain of
$(W,v)$} is defined to be any component domain of
$(W,\operatorname{base}(v))$. This slight generalization
will be used below for component domains defined by {\em
positions} in geodesics, including the initial or terminal markings.
\begin{lemma+}{Subordinate Intersection 3}
Let $H$ be a hierarchy, and let
$Y$ be a component domain of $(D(k), v)$, where $k\in H$ and
$v$ is a position in $k$. Assume that $\xi(Y) \ne 3$.
If $f\in \Sigma^+(Y)$ then either $D(f)= Y$ or $Y\mathrel{\scriptstyle\searrow} f$.
Similarly if $b\in \Sigma^-(Y)$ then either $D(b) = Y$ or $b\mathrel{\scriptstyle\swarrow} Y$.
\end{lemma+}
Note the similarity of this to Lemma
\lref{Subordinate Intersection 2}, the main difference being that $Y$
is not required to be the support of a geodesic. In fact,
the conclusions $Y\mathrel{\scriptstyle\searrow} f$ and $b\mathrel{\scriptstyle\swarrow} Y$ together imply that $Y$ is
the support of a geodesic, so in particular the
lemma gives a sufficient condition for this to occur.
\begin{pf}
We will prove the forward case. Note that it suffices to show that,
if $\Sigma^+_H(Y) \ne \emptyset$ then there exists $h\in H$ such that
$Y\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} h$. Lemma \ref{Y direct to top} then implies that,
writing $\Sigma^+(Y) = \{f_0,f_1,\ldots\}$,
either $h=f_0$ and $Y$ is not the support
of any geodesic, or $Y=D(f_0)$ and $h=f_1$.
The lemma follows from this.
We argue by induction using the partial order $\prec_p$.
In particular,
let us write our data as $(Y,k,v)$ where $v$ is a position on $k$ such that
$Y$ is a component domain of $(D(k),v)$.
Let $N(Y,k,v)$ be the number of such triples $(Y',k',v')$
in $H$ for which $Y\subseteq Y'$ and $(k,v) \prec_p (k',v')$.
Note that this number is finite even if the hierarchy is infinite,
because $\phi_{g_H}(D(k'))\subset \phi_{g_H}(Y)$, so the candidate triples
are limited to a finite subset of $H$.
Clearly
if $Y\subseteq Y'$ and $(k,v)\prec_p (k',v')$, then $N(Y,k,v)>N(Y',k',v')$,
so we will induct on $N$.
Let us first show that either $Y\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} k$, in which case we are done,
or we can find a certain $(Y',k',v')$ with $Y\subseteq Y'$ and $(k,v)\prec_p
(k',v')$ to which we can apply the inductive hypothesis. In particular
this will take care of the base case $N(Y,k,v)=0$. We will then use $Y'$
to deduce the desired conclusion for $Y$.
For a position $w < {\mathbf T}(k)$ in $k$ let $succ(w)$ denote the next
position in the linear order. The following two cases occur:
\bfheading{1:} If $v<{\mathbf T}(k)$, let $v'=succ(v)$.
If $v'|_ Y$ is nonempty then ${\mathbf T}(Y,k)$ is nonempty and
$Y\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} k$, as desired.
If not, we define
$(Y',k',v')$ by letting
$k'=k$ and letting $Y'$ be the component domain of
$(D(k),v')$ containing $Y$.
(Remarks: If $v={\mathbf I}(k)$ or $v'={\mathbf T}(k)$ then $v$ and $v'$ will share some
base curves. In this case it is possible that $Y=Y'$.
If $Y$ is an annulus then
either $Y'=Y$ or $Y'$ contains $Y$ essentially.)
\bfheading{2:}
If $v={\mathbf T}(k)$ (including the case where ${\mathbf T}(k)$ is the last vertex),
we let $k'$ be
the geodesic such that $k\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} k'$.
Then $D(k)$ is a component domain of some vertex $w$ in $k'$, with $w<{\mathbf T}(k')$.
Letting $w'=succ(w)$, we have
${\mathbf T}(k) = w'|_ {D(k)}$.
Let $v'=w'$ and let $Y'$ be the component domain of $(D(k'),w')$
containing $Y$.
\medskip
Note that in each case, $(k,v) \prec_p (k',v')$: When $k=k'$ we had
$v<v'$, and when $k\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} k'$ we had $\phi_{k'}(D(k)) < v'$.
Recall that the assumption $\Sigma^+(Y) \ne\emptyset$ is equivalent by Lemma
\ref{Uniqueness of Descent} to $ {\mathbf T}(g_H)|_ Y\ne \emptyset$
or $g_H$ infinite in the forward direction.
Thus, since $Y\subseteq Y'$, $\Sigma^+(Y)\ne\emptyset$
implies $\Sigma^+(Y')\ne\emptyset$. Now we can apply the inductive
assumption to $(Y',k',v')$ to conclude that
$Y'\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} h'$ for some $h'$.
Thus if $Y=Y'$, we are done.
From now on let us assume $Y$ is a proper subdomain of $Y'$.
Thus, the relative boundary $\boundary_{Y'}(Y)$ is nonempty.
We claim that $\boundary_{Y'}(Y)$ consists of (base)
curves of $v$ in case 1, and of $w$ in case 2:
In case 1, $\boundary_{D(k)}(Y)$ is in $\operatorname{base}(v)$, and so the claim is
immediate. In case 2, $\boundary_{D(k)}(Y)$ is in $\operatorname{base}({\mathbf T}(k))$ which is
part of $w'$, and $\boundary_{D(k')}(D(k))$ is in $w$. It follows that
$\boundary_{D(k')}(Y)$ is in $w\union w'$. Since no curve of $w'$
is an essential curve of $Y'$, $\boundary_{Y'}(Y)$ must be in $w$.
Noting also that ${\mathbf I}(Y',k') = v|_ {Y'}$ in case 1
and ${\mathbf I}(Y',k') = w|_ {Y'}$ in case 2, we conclude in each case that
${\mathbf I}(Y',k')\ne \emptyset$. Thus,
$k'\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} Y'$. Since we already have $Y'\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} h'$,
by definition of a hierarchy there is
a geodesic $h\in H$ whose support is $Y'$, and
$k'\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} h$. In fact $(k,v) \prec_p
(h,{\mathbf I}(h))$: when $k=k'$ this is because the footprint of $h$ in $k$
has minimum at $v'$. When $k\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} k'$, it is because $\max\phi_{k'}(D(k))
= w < w' = \min\phi_{k'}(D(h))$, so in fact $k\prec_t h$.
Also, $Y$ is a component domain of $(D(h),{\mathbf I}(h))$:
In case 1 this is because ${\mathbf I}(h) = v|_ {Y'}$ and
$\boundary_{Y'}(Y)\subseteq v$, and in case 2 it is because ${\mathbf I}(h) =
w|_ {Y'}$ and $\boundary_{Y'}(Y)\subseteq w$.
Thus, the lemma holds for $(Y,h,{\mathbf I}(h))$ by induction, and we are done.
\end{pf}
\begin{proof}[Proof of Completeness Theorem]
Let $Y$ be any component domain that is not a thrice-punctured sphere.
If ${\mathbf I}(H)$ is defined then it is complete, so ${\mathbf I}(H)|_ Y$ is
nonempty and hence $g_H\in\Sigma^-(Y)$. If ${\mathbf I}(H)$ is undefined then
$g_H$ is infinite in the backward direction so $g_H\in\Sigma^-(Y)$ by
definition. Similarly $g_H\in\Sigma^+(Y)$, so that by
Lemma \ref{Subordinate Intersection 3}
$g_H\mathrel{\scriptstyle\swarrow} Y \mathrel{\scriptstyle\searrow} g_H$. In particular $Y$ must support a geodesic by
definition of a hierarchy.
\end{proof}
Note that, even if ${\mathbf I}(H)$ and ${\mathbf T}(H)$ are pants decompositions with
no transversals at all, we get a hierarchy which is complete except
for the annular domains whose cores are curves of ${\mathbf I}(H)$ or ${\mathbf T}(H)$.
If $H$ has a bi-infinite main geodesic then it is automatically
complete with no further conditions.
\subsection{Proof of the Structural Theorem}
\label{structure sigma proof}
We now have all the ingredients in place to put together a proof of
Theorem \lref{Structure of Sigma}. Parts (1) and (4) were already
shown in Lemma \lref{Uniqueness of Descent}.
For Part (3), one direction of
$$
f\in\Sigma^+(Y) \ \iff \ Y\mathrel{\scriptstyle\searrow} f
$$
follows from
Lemma \lref{Subordinate Intersection 1}, and the other from
Lemma \lref{Subordinate Intersection 3}; similarly for $\Sigma^-$.
If $\Sigma^\pm(Y)$ are both nonempty then, again by Lemma
\ref{Subordinate Intersection 3}, there must be $b$ and $f$
such that $b\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex} Y \mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} f$ and so by definition of a hierarchy $Y$
is the support of a geodesic, which must then be $b_0=f_0$.
It remains to show Part (2), that if
$\Sigma^\pm(Y)$ are both nonempty for any domain $Y$,
then $b_0=f_0$, and $\phi_{f_0}(Y) = \emptyset$.
If $Y=D(h)$ for
some geodesic $h$ then we already know $h=b_0=f_0$,
and then $\phi_{f_0}(Y) = \emptyset$ automatically.
In general, if $\phi_{f_0}(Y)\ne
\emptyset$, let $v=\max\phi_{f_0}(Y)$. Let
$W$ be the component domain of $(D(f_0),v)$ containing $Y$. Recall that
Lemma \ref{Uniqueness of Descent} implies for any domain $X$ that
$\Sigma^+(X)\ne \emptyset$ is equivalent to either ${\mathbf T}(H)|_ X\ne\emptyset$
or $g_H$ infinite in the forward direction, and
similarly for $\Sigma^-$ and ${\mathbf I}(H)$. Thus
$\Sigma^\pm(Y)\ne \emptyset$ implies $\Sigma^\pm(W)\ne \emptyset$.
Since $W$ is also a component domain, Lemma \ref{Subordinate
Intersection 3} then implies that $W$ supports some geodesic
$h$. Letting $v'$ be the successor of $v$, we note that $Y$ intersects
$v'$ nontrivially (this is true even when $v$ is the last vertex of
$f_0$ and $v'={\mathbf T}(f_0)$). But $ v'|_ W$ is just ${\mathbf T}(h)$ by
definition, and so $h\in\Sigma^+(Y)$. This contradicts the fact that
$f_0$ has the smallest domain in $\Sigma^+$.
Thus $\phi_{f_0}(Y)=\emptyset$.
In particular, it follows that $f_0\in \Sigma^-$ since
${\mathbf I}(f_0)|_ Y\ne \emptyset$ (or $f_0$ is infinite in the backward
direction). In fact $f_0$ must
be $b_0$ since $Y$ has nonempty footprint on any other geodesic in
$\Sigma^-$. This concludes the proof.
\section{Projection bounds}
\label{projection}
Our goal in this section will be to
prove Theorem \ref{Bounded Geodesic Image},
which gives strong
contraction properties for the subsurface projections $\pi_Y$.
\begin{theorem+}{Bounded Geodesic Image}
Let $Y$ be a proper subdomain of $Z$ with $\xi(Y)\ne 3$
and let $g$ be a geodesic segment, ray,
or bi-infinite line in $\CC(Z)$, such that $\pi_Y(v)\ne\emptyset$ for
every vertex $v$ of $g$.
There is a constant $M$ depending only on $\xi(Z)$ so
that
$$\operatorname{diam}_Y(g) \le M.$$
\end{theorem+}
The intuition behind the statement is this: as we move in one
direction in $g = \{\ldots,v_1,v_2,\ldots\}$, we expect the
vertices to converge to some foliation in $Z$. Hence their projections
to $Y$ should converge to the intersection with $Y$ of the foliation
leaves. Recalling that $\pi_Y$ identifies parallel arcs, it should
follow that eventually $\pi_Y(v_i)$ should stabilize to a finite
collection of possible arcs.
To make this precise we have to re-introduce the tools of
Teichm\"uller geometry from \cite{masur-minsky:complex1}.
We also emphasize that the statements we prove will be strictly weaker
than this intuitive description, but will suffice for the diameter bound.
\subsection{Quadratic differentials, vertical and horizontal}
Given a finite-type complex structure on $Z$, recall that
a holomorphic quadratic differential $q$ on $Z$ is a
tensor of the form $\varphi(z)dz^2$ in local coordinates, with
$\varphi$ holomorphic. Away from zeroes, a coordinate $\zeta$ can be
chosen so that $q = d\zeta^2$, which determines a Euclidean metric
$|d\zeta^2|$ together with a pair of orthogonal foliations parallel to
the real and imaginary axes in the $\zeta$ plane. These are
well-defined globally and are called the {\em horizontal} and {\em
vertical} foliations, respectively. The zeroes of $q$ are cone points
with cone angle $n\pi$, $n\in\Z$, $n\ge 3$.
(See Gardiner \cite{gardiner} or
Strebel \cite{strebel}.)
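For instance, near a zero of order $k$ one can choose a coordinate in
which $q=z^k\,dz^2$; then
$$
\zeta = \frac{2}{k+2}\,z^{(k+2)/2}, \qquad d\zeta^2 = z^k\,dz^2 = q,
$$
and as $z$ sweeps out an angle of $2\pi$ the coordinate $\zeta$ sweeps out
an angle of $(k+2)\pi$, which exhibits the cone angle at the zero explicitly.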
For a closed curve or arc $\alpha$ in $Z$, denote by
$|\alpha|_q$ its length in the $q$ metric. Let $|\alpha|_{q,h}$
and $|\alpha|_{q,v}$ denote its horizontal and vertical lengths,
respectively, by which we mean the total lengths of the (locally
defined) projections of $\alpha$ to the horizontal and vertical
directions of $q$.
Henceforth assume $q$ has finite area, which means that at the
punctures it has poles of order 1 or less, and equivalently that its
metric completion gives a surface $\hat Z$ which is $Z$ with a cone point
added at each puncture, with cone angle $n\pi$, $n\in\Z$, $n\ge 1$.
Define a {\em straight segment} to be a path in $Z$ which meets no
punctures or zeroes of $q$, and is a straight line in the Euclidean metric.
A geodesic is composed of a finite number of straight segments,
meeting at zeroes with a certain angle condition.
We must slightly generalize the notion of ``geodesic representative''
as follows:
If $Z$ has punctures, the incompleteness of $|q|$ means that
a non-peripheral homotopy class $\alpha$ may not have a geodesic
representative. However, there is a representative in $\hat Z$ which
goes through
the punctures some finite number of times and is geodesic elsewhere,
which we can think of as a limit of geodesic representatives in the
compact surfaces obtained by deleting open disks of $q$-radius $r$
around the punctures, for $r\to 0$. Thus by ``geodesic
representatives'' we will in fact mean representatives in this sense.
Let $\ep,\theta>0$ be some fixed (small) constants. We say that a
straight segment $\alpha$ is {\em almost vertical} with respect to $q$
if it makes an angle of at most $\theta$ with the vertical
foliation. We say a geodesic is almost vertical if it is composed of
straight segments meeting at punctures or zeroes, each of which is
either almost vertical, or has
length at most $\ep$. We define {\em almost horizontal} in the
analogous way.
\begin{lemma}{almost verticals close}
There is a choice of $\ep,\theta$ depending only on $\xi(Z)$ such that
the following holds.
Let $Y$ be a domain in $Z$ with $\xi(Y)\ne 3$, $q$ a unit-area quadratic
differential on $Z$, and
$\alpha$ a boundary component of $Y$ whose $q$-geodesic contains an
almost-horizontal segment $\sigma$ of horizontal length 1. Then
if $\beta$ and $\gamma$ are two almost-vertical curves intersecting
$Y$,
$$
d_Y(\beta,\gamma) \le 4.
$$
\end{lemma}
\begin{pf}
We begin with the case
where $Y$ is not an annulus.
For simplicity, suppose first that $Y$ is isotopic to an embedded surface with
$q$-geodesic boundary. Thus we may assume that $\alpha$ is already geodesic.
Consider the flow starting from $\sigma$ and
moving along the vertical foliation into $Y$ until it returns to
$\sigma$ or meets a singularity. The points corresponding to flow
lines that meet singularities divide $\sigma$ into at most $k_0$
intervals $\{I_j\}$, where $k_0$ depends only on $\xi(Z)$. Each $I_j$
determines a ``flow rectangle'', which is actually a Euclidean
trapezoid or parallelogram with two vertical sides and two almost-horizontal
sides which have slope at most $\tan \theta$. The interior of the
rectangle is embedded, though its top and bottom edges are segments of
$\sigma$ that may overlap.
Since $\sigma $ has horizontal length 1 there must be an interval $I_j$
of horizontal length at
least $1/{k_0}$. Let $R$ denote the corresponding flow rectangle, and
$h$ the average height of $R$. Thus $R$ has
area at least $h/k_0$, and
since $q$ is unit-area, $h\le k_0$.
Suppose that $\ep < 1/k_0$ and that $\theta$ is sufficiently small
that $\cot \theta > k_0^2 + \tan \theta$.
With these choices, we claim that
an almost-vertical geodesic $\beta$ cannot cross $R$ from one
vertical side to the other: Since $R$ has no singularities in its
interior, such
a crossing would have to be a straight segment $\tau$, and
the slope of $\tau$ would be at most
$h k_0 + \tan \theta \le k_0^2 + \tan \theta $, which is less than
$\cot\theta$ by the choice of $\theta$.
Hence $\tau$ could not be almost vertical. Thus it would have
to have length bounded by $\ep$, and hence be shorter than the width
of $R$, again a contradiction.
We conclude that $\beta$ is disjoint from the interior of some arc $a$
in $R$ connecting the top edge
and the bottom edge. Thus, any component of $a\intersect Y$ gives
an element of $\CC'(Y)$ which is distance 1 from each vertex of
$\pi'_Y(\beta)$.
The same argument
applies to $\gamma$, with $a$ in the same homotopy class,
and we conclude $d_{\CC'(Y)}(\pi'_{Y}(\beta),\pi'_{Y}(\gamma))\le 2.$
Lemma \ref{arcs to curves} then gives the desired bound.
Now consider the possibility that $Y$ is not homotopic to an embedded surface
with geodesic boundary $\alpha$. In particular the geodesic
representative of $\alpha$ may traverse one or more geodesic segments
more than once, producing arcs of self-tangency.
However even in this case we obtain a map of $Y$ into $\hat Z$
($Z$ union its punctures)
which is homotopic to the inclusion by a homotopy
that is an embedding until the last moment.
At that moment families of arcs in $Y$ or its complement,
with endpoints on $\boundary Y$, are collapsed to points, producing the
arcs of self-tangency.
It is easy to see that the same argument holds except that the
rectangles $R$ in question may have height zero, with horizontal arcs
on the self-tangencies and vertical arcs collapsed.
This concludes the case where $Y$ is not an annulus.
\medskip
When $Y$ is an annulus, $\alpha$ is in the homotopy class of its core.
Lift $\alpha$ to $\til\alpha$ in the universal cover $\til Z$. We
remind the reader that again the geodesic representative of $\alpha$
may pass through punctures, and as the universal covering is
infinitely branched around punctures the topology is easier to keep
track of if we keep $\alpha$ outside a small neighborhood of the
punctures. At any rate our segment $\sigma$ can be assumed disjoint
from the punctures so we need not worry about this.
If we consider the lines of the vertical flow which start at $\sigma$ and
go in both directions until they hit $\sigma$ again, we obtain at most $2k_0$
rectangles composed of vertical flow lines with $\sigma$ passing
through them, and we choose $R$ to be one which has width
at least $1/(2k_0)$.
Let $\{R_n\}_{n\in\Z}$ denote its lifts
corresponding to the lift of $\alpha$ to $\til\alpha$, so that
$\til\alpha $ passes through the interior of each $R_n$, and the top
and bottom edges of $R_n$ lie on translates of $\alpha$ called
$\til\alpha_n$ and $\til\alpha'_n$, respectively.
(We include also the degenerate possibility that $R$ has height 0 on one
side or the other of $\alpha$
and so the
$\til\alpha_n$, or $\til\alpha'_n$, are each tangent to $\til\alpha$ along
a segment.)
\realfig{twist projection}{twistproj.ps}{}
After an arbitrary choice of orientation for $\alpha$, each $R_n$
has a left and a right vertical
edge. Let $\til\beta$ and $\til\gamma$ be components of the lifts
of $\beta $
and $\gamma$ which cross $\til\alpha$. As argued before, and with
appropriate choice of $\ep,\theta$, neither
$\til\beta$ nor $\til\gamma$ can cross a rectangle $R_n$ from left to
right.
Let $H_n$ and $H'_n$ denote
the halfplanes bounded by
$\til\alpha_n$ and $\til\alpha'_n$, respectively, whose interiors are
disjoint from
$\til\alpha$ (see figure \ref{twist projection}).
Let $U_n$ denote $R_n\union H_n \union H'_n$.
Then neither $\til\beta$ nor $\til\gamma$
can cross through $U_n$ from left to right, because this would involve
either crossing $R_n$ from left to right, or entering and exiting the
interior of $H_n$ or $H'_n$, which a geodesic cannot do.
Thus if $\rho_n$ is the right-hand boundary of $U_n$, $\til\beta$
and $\til\gamma$ can each
cross at most one of the $\rho_n$. If $\rho$, $\hhat\beta$ and
$\hhat \gamma$ are the covering projections of $\rho_n$, $\til\beta$ and
$\til\gamma$, respectively, to the annulus $\hhat Y$, then we obtain
$|\hhat\beta\cdot\rho| \le 1 $ and $|\hhat\gamma\cdot\rho| \le 1 $.
It follows by (\ref{adding twists}) that $|\hhat\beta\cdot\hhat\gamma|\le 3$
and hence $d_Y(\beta,\gamma) \le 4$ by
(\ref{annulus distance and intersection}).
\end{pf}
\subsection{Teichm\"uller geodesics and balancing.}
A Teichm\"uller geodesic in $\TT(Z)$ ``shadows'' a $\CC(Z)$-geodesic
in the following
specific sense, which played a crucial role in
\cite{masur-minsky:complex1}.
Recall that a
Teichm\"uller geodesic $L:\R \to \TT(Z)$ can be described in terms of
a family of quadratic differentials $q_t$ holomorphic on $L(t)$: Each
$q_t$ is obtained from $q_0$ by scaling the horizontal directions by
$e^t$ and the vertical by $e^{-t}$. This determines the conformal
structure $L(t)$.
In \cite{masur-minsky:complex1}, we associate to the geodesic $L$ a
map $F:\R\to \CC_0(Z)$ by letting $F(t)$ be any simple curve of minimal
extremal length with respect to $L(t)$. Furthermore we define a
map $\pi=\pi_q: \CC_0(Z) \to \R\union\{\pm\infty\}$, called a
``balancing projection,'' as follows:
Given any $\alpha\in\CC_0(Z)$, its horizontal length $|\alpha|_{q_t,h}$
has the form $|\alpha|_{q_0,h} e^t$ and
its vertical length $|\alpha|_{q_t,v}$
has the form $|\alpha|_{q_0,v} e^{-t}$. Thus if both of these are non
zero there is a unique point $t$ where they are equal, and we say
$\alpha$ is {\em balanced} at $t$, and set
$\pi_q(\alpha)=t$. If the horizontal length is $0$ ($\alpha$ is parallel
to the vertical foliation) then we let
$\pi_q(\alpha) = +\infty$, and if the vertical length is $0$ we let
$\pi_q(\alpha) = -\infty$.
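Explicitly, when both lengths are nonzero, the balance time is obtained by
solving $|\alpha|_{q_0,h}e^t = |\alpha|_{q_0,v}e^{-t}$ for $t$, namely
$$
\pi_q(\alpha) = \frac{1}{2}\log\frac{|\alpha|_{q_0,v}}{|\alpha|_{q_0,h}}.
$$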
Now suppose we are given $v,w\in\CC_0(Z)$ with $d_{\CC(Z)}(v,w)\ge
3$. Then $v$ and $w$ fill $Z$, and so there is a conformal structure
and a quadratic differential
$q_0$ for which the horizontal and vertical foliations
have closed nonsingular leaves which are isotopic to $v$ and $w$,
respectively. The corresponding Teichm\"uller geodesic is called the
Teichm\"uller geodesic associated to $(v,w)$. We note immediately
that $\pi_q(\alpha) = -\infty$ for $d(\alpha,v)\le 1$
and similarly that $\pi_q(\alpha) = +\infty$ for $d(\alpha,w)\le 1$.
Some basic properties of this projection map are outlined in the
following lemma. Here $d()$
and $\operatorname{diam}()$ refer to
distance and diameter in
$\CC_1(Z)$. The $K_i$ are constants depending only on $\xi(Z)$.
The notation $[s,t]$ refers to the interval with endpoints $s$ and
$t$, regardless of order.
\begin{lemma}{teichmuller properties}
Let $g = \{v_i\}_{i=M}^{N}$ be a geodesic segment in $\CC(Z)$ with
$N-M\ge 3$, and let $L:\R\to\TT(Z)$ be the Teichm\"uller geodesic
associated to $(v_M,v_N)$,
$F:\R\to\CC_0(Z)$ its associated map and $\pi:\CC_0(Z)\to\R$ the
associated projection. There are constants $K_0,K_1,K_2,m_0>0$, depending
only on the surface $Z$, such that:
\begin{enumerate}
\item \label{lipschitz}
(Lipschitz)
If $d(v,w) \le 1$ for $v,w\in\CC_0(Z)$ then
$$\operatorname{diam}(F([\pi(v),\pi(w)])) \le K_0.$$
\item \label{Fellow traveling 1}
(Fellow traveling 1) for any $v_i$ in $g$,
$$d(v_i,F(\pi(v_i))) \le K_1$$
\item \label{Fellow traveling 2}
(Fellow traveling 2)
For all $t\in\R$, there exists some $v_i\in g$ such that
$$\operatorname{diam}(F([t,\pi(v_i)])) \le K_2$$
\item \label{coarse monotonicity}
(Coarse monotonicity)
Whenever $v_i$, $v_j$ are in $g$ with $j> i+m_0$,
$$\pi(v_j)>\pi(v_i)$$
\end{enumerate}
\end{lemma}
\begin{pf}
Part (\ref{lipschitz}) is part of Theorem 2.6 of
\cite{masur-minsky:complex1}, and parts
(\ref{Fellow traveling 1}) and
(\ref{Fellow traveling 2})
follow from Theorem 2.6 together with
the proof of Lemma 6.1 of \cite{masur-minsky:complex1}.
Part (\ref{coarse monotonicity}) is a consequence of parts
(\ref{lipschitz}) and (\ref{Fellow traveling 1}):
Since the last vertex $v_N$ has $\pi(v_N) = +\infty$ by definition,
if we have $\pi(v_{i+m}) < \pi(v_i)$
then there is some $m'\ge m$ for which $\pi(v_i) \in
[\pi(v_{i+m'}),\pi(v_{i+m'+1})]$.
By (\ref{lipschitz}), we then have
$d(F(\pi(v_i)),F(\pi(v_{i+m'}))) \le K_0$. However since
$d(v_i,v_{i+m'}) = m'$, this implies together with (\ref{Fellow
traveling 1}) and the triangle inequality that $m'\le K_0 + 2K_1$.
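Written out, this estimate is
$$
m' = d(v_i,v_{i+m'}) \le d(v_i,F(\pi(v_i))) +
d(F(\pi(v_i)),F(\pi(v_{i+m'}))) + d(F(\pi(v_{i+m'})),v_{i+m'})
\le K_1 + K_0 + K_1.
$$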
Setting $m_0 = K_0 + 2K_1$, we have part (\ref{coarse monotonicity}).
\end{pf}
\subsection{Proof of Theorem \ref{Bounded Geodesic Image}}
Let us first consider the case where $g$ is a finite segment
$\{v_i\}_{i=M}^N$.
Note that at most 3 of the vertices can actually be
contained in $Y$, since they would all be $\CC(Z)$-distance 1 from
$\boundary Y$.
We may assume without loss of generality that $|g|=N-M \ge 3$.
Select a Teichm\"uller geodesic $L: \R \to \TT(Z)$ as above,
associated to $(v_M,v_N)$, as well as the associated map $F$,
family of quadratic differentials $q_t$, and balancing map $\pi$.
Let $\alpha$ be any boundary component of $Y$ (non-peripheral in $Z$).
Let $s_0 = \pi(\alpha)$. Note that possibly $s_0 =
-\infty$ or $+\infty$, if
$\alpha$ is disjoint from $v_M$ or $v_N$ (but not both).
If $s_0\ne \pm \infty$,
then $\alpha$ is balanced at $s_0$. If $s_0=-\infty$ then it is
horizontal at any $q_t$. In either case,
Lemmas 5.3 and 5.6 of
\cite{masur-minsky:complex1} imply that, for $K_3>0$ depending only on
$Z$, there is $s_1\ge s_0$ with
\begin{equation}
\label{s0 s1 bound}
\operatorname{diam}(F[s_0,s_1]) \le K_3,
\end{equation}
such that $\alpha$ is almost-horizontal
with respect to $q_{s}$ whenever $s\ge s_1$, and contains an almost
horizontal segment of
horizontal length $\ep_1$, for some fixed $\ep_1>0$. In fact
we may assume $\ep_1=1$, because horizontal length expands at a
definite exponential rate with distance along the Teichm\"uller
geodesic $L$, and the map $F$ is quasi-Lipschitz
by Lemma 5.1 of \cite{masur-minsky:complex1}.
(The case $s_0=+\infty$ is treated similarly, interchanging horizontal
and vertical).
Lemma 5.7 of \cite{masur-minsky:complex1} implies that, for $K_4>0$
depending only on $Z$,
there exists $s_2>s_1$ such that
\begin{equation}
\label{s1 s2 bound}
\operatorname{diam}(F[s_1,s_2])\le K_4
\end{equation}
and,
for any $\gamma\in\CC(Z)$, if $\pi(\gamma)>s_2$ then $\gamma$ is
almost vertical with respect to $q_{s_1}$.
Again, possibly $s_2 = \infty$.
Let $j_0$ be the index of the vertex of $g$ for which part
(\ref{Fellow traveling 2}) of Lemma \ref{teichmuller properties}
gives
\begin{equation}
\label{j0 s0 bound}
\operatorname{diam}(F([s_0,\pi(v_{j_0})])) \le K_2.
\end{equation}
We will now show that for $i>j_0$ sufficiently large,
$\pi(v_i) > s_2$.
By the coarse monotonicity (\ref{coarse monotonicity}) of Lemma
\ref{teichmuller properties}, if $i>j_0+m_0$
then $\pi(v_i) > \pi(v_{j_0})$.
Thus if $\pi(v_i)\le s_2$, we have $d(F(\pi(v_i)),F(\pi(v_{j_0})))\le
\operatorname{diam}(F([\pi(v_{j_0}),s_2]))$, and the latter is bounded by
$K_2 + K_3 + K_4$ because of the bounds (\ref{s0 s1 bound}), (\ref{s1
s2 bound}) and (\ref{j0 s0 bound}).
Thus, $i-j_0 = d(v_i,v_{j_0}) \le K_2 + K_3 + K_4 + 2K_1$ by (\ref{Fellow
traveling 1}) of Lemma \ref{teichmuller properties} and the triangle
inequality. Letting
$m_1 = 1+ \max (m_0,K_2 + K_3 + K_4 + 2K_1)$, we are therefore assured
that if $i\ge j_0 + m_1$ then $\pi(v_i)>s_2$.
Thus, if $i\ge j_0 + m_1$, then $v_i$ is almost vertical with
respect to $q_{s_1}$.
We can now apply Lemma \ref{almost verticals close} using the
quadratic differential $q_{s_1}$ and the boundary component
$\alpha$. If $j_0+m_1 \le N$ then
for any $i,i'\in [j_0+m_1,N]$ we have by the above that both $v_i$
and $v_{i'}$ are almost vertical with respect to $q_{s_1}$, and thus
$d_Y(v_i,v_{i'})\le 4$.
The same argument, with horizontal and vertical interchanged, applies
to give a bound for $i,i'\in[M,j_0 - m_1]$, if $M\le j_0-m_1$. The
remaining segment between
$\max(M,j_0-m_1)$ and $\min(N,j_0+m_1)$ has a diameter bound of
$2m_1$, so its $\pi_Y$-image has diameter at most $4m_1$ by Lemma
\lref{Lipschitz Projection}. Thus the image
of the full segment $g$ has diameter bounded by $4m_1 + 8$.
Since this bound is independent of $N$ and $M$, it implies a bound
also in the infinite cases, via an exhaustion of $g$ by finite subsegments.
This concludes the proof of Theorem \ref{Bounded Geodesic Image}.
\section{Slices, resolutions, and markings}
\label{resolution}
In this section we will discuss how to resolve a hierarchy $H$ into a
sequence of markings connecting ${\mathbf I}(H)$
to ${\mathbf T}(H)$, so that successive markings are related by
elementary moves. Essentially we must somehow combine the vertex
sequences of the various geodesics in $H$, and their partial orders,
into one large linearly ordered sequence. This process is by no means
unique.
Along the way we will need to develop the notion of a {\em slice},
which roughly speaking is a marking pieced together from variously
nested geodesics in the hierarchy, together with additional
organizational structure. These slices will admit a certain partial
order, and we will then describe an elementary move on slices, which
moves a slice forward in the partial order.
The resulting sequence of slices can then be transformed into a
sequence of clean markings of the surface (in a slightly non-unique
fashion), and we will prove a lemma bounding the length of this
sequence in terms of the size of $H$.
\bfheading{Slices.}
Let us assume from now on that the hierarchy $H$ is complete.
A {\em slice} in $H$ is a set $\tau$ of pairs $(h,v)$ where
$h\in H$ and $v$ is a vertex of
$h$, satisfying the following conditions:
\begin{itemize}
\item[S1:] A geodesic $h$ appears in at most one pair in $\tau$.
\item[S2:] There is a distinguished pair $(h_\tau,v_\tau)$ appearing in $\tau$,
called the bottom pair of $\tau$. We call $h_\tau$ the bottom geodesic.
\item[S3:] For every $(k,w)\in \tau$ other than the bottom pair, $D(k) $
is a component domain of $(D(h),v)$ for some $(h,v)\in\tau$.
\end{itemize}
If in addition this fourth condition holds, we call the slice {\em
complete}:
\begin{itemize}
\item[S4:] Given $(h,v)\in\tau$, for every component domain $Y$ of
$(D(h),v)$ there is a pair $(k,w)\in\tau$ with $D(k)=Y$.
\end{itemize}
Most often $h_\tau$ will just be the main geodesic of $H$.
A slice $\tau$ is called {\em initial} if, for each $(h,v)\in\tau$,
$v$ is the first vertex of $h$. Note that a complete initial slice is
uniquely determined by its bottom geodesic.
The complete initial slice with bottom geodesic $g_H$ is called
{\em the} initial slice of $H$. We similarly
define terminal slices.
\bfheading{Markings associated to a slice:}
To any slice $\tau$ we associate a unique marking $\mu_\tau$ as follows.
It is easy to see by induction that the vertices $v$ appearing in
non-annular geodesics in
$\tau$ are all disjoint and distinct, and hence form a simplex in $\CC(S)$.
We let this be $\operatorname{base}(\mu_\tau)$.
For each
base curve $v$, if $\tau$ contains some $(k,t)$ with $D(k)$ the annulus whose
core is $v$, then we let $t$ be the transversal of $v$ in $\mu_\tau$.
In particular a complete slice determines a complete marking.
Typically $\mu_\tau$ is not clean, so
let us say that a clean marking $\mu'$ is {\em compatible with $\tau$}
if it is compatible with $\mu_\tau$ in the sense of Lemma \ref{clean
markings}.
Lemma \ref{clean markings} then shows that such a $\mu'$ exists,
there are at most $n_0^b$ possibilities where $b=\#\operatorname{base}(\mu_\tau)$,
and any two differ by a bounded number of Twist elementary
moves.
\medskip
Note that, if $\tau$ is the initial slice of $H$, then if the marking
${\mathbf I}(H)$ is clean it is
compatible with $\tau$. The same is true for the terminal slice
and ${\mathbf T}(H)$.
\bfheading{Partial order on slices:}
Consider now the set $V(H)$ of complete slices whose bottom geodesic
equals the main geodesic of $H$. This set admits a partial order $\prec_s$
as follows.
For $\tau, \tau'\in V(H)$, say that $\tau \prec_s
\tau'$ iff $\tau \ne \tau'$ and, for any $(h,v) \in \tau$,
either $(h,v)\in\tau'$ or
there is some $(h',v')\in \tau'$
such that $(h,v) \prec_p (h',v')$.
\begin{lemma}{sprec partial order}
Let $H$ be a complete hierarchy. The relation $\prec_s$ is a strict
partial order on $V(H)$.
\end{lemma}
\begin{pf}
Let us first note the following facts:
\begin{enumerate}
\item $\prec_p$ is a strict partial order,
\item Any two elements $(h,v)$ and $(k,w)$ of a slice $\tau$ are not
$\prec_p$-comparable,
\item If $\tau\subseteq \tau'$ for slices $\tau,\tau'\in V(H)$ then
$\tau = \tau'$.
\end{enumerate}
Fact (1) is Lemma \ref{Time Order} part (5), and
Fact (2) is an application of Lemmas \ref{diagonalizable} and
\ref{Time Order} part (1).
Fact (3) follows from the fact that slices in $V(H)$ are {\em
complete}.
For $\prec_s$ to be a strict partial order it suffices to show that it
is transitive, since by definition it is never reflexive. Let
$\tau_1 \prec_s \tau_2 \prec_s \tau_3$ for $\tau_i\in V(H)$.
By definition of $\prec_s$, given any $p_i\in\tau_i$ (where $i=1,2$
and $p_i$ denotes some pair $(h_i,v_i)$),
there
exists $p_{i+1} \in \tau_{i+1}$ such that either $p_i\prec_p p_{i+1}$
or $p_i = p_{i+1}$.
By fact (1) this implies either $p_1 \prec_p p_3$ or
$p_1 = p_3$. Thus either $\tau_1 \prec_s \tau_3$ or $\tau_1=\tau_3$. To
rule out the latter, note that there is at least one $p_1\in\tau_1$
which is not in $\tau_2$ (by Fact (3)). Thus
$p_1\prec_p p_2$ so $p_1\prec_p p_3$, and $p_3$ cannot lie in $\tau_1$
by Fact (2).
\end{pf}
\bfheading{Forward elementary moves:}
Roughly, an elementary move on a slice $\tau$ consists of incrementing
the vertex $v$ of some $(h,v)$ in $\tau$, and making certain
adjustments to the other pairs to obtain a new slice $\tau'$.
To begin, let $h\in H$ and let $v$ be a vertex of $h$, not the last,
with successor $v'$. These will determine two slices $\sigma$ and
$\sigma'$, not necessarily complete, called the {\em transition
slices} for $v$ and $v'$. The slices $\sigma,\sigma'$ will have the
property that (at least when $\xi(D(h)) > 4$) $\mu_\sigma = \mu_{\sigma'}
= v\union v'$. After constructing these we will extend them to
complete slices $\tau,\tau'$, which will constitute our elementary
move.
Define $\sigma$ as the smallest slice with bottom pair $(h,v)$ such that,
for any $(k,w)\in\sigma$ and $Y$ a component domain of $(D(k),w)$,
\begin{itemize}
\item[E1:] if $v'|_ Y \ne \emptyset$ and $Y$ supports a geodesic
$q$ then $(q,u)\in \sigma$ where $u$ is the {\em last} vertex of $q$.
\item[E2:] if $ v'|_ Y = \emptyset$ then no geodesic in $Y$ is
included in $\sigma$.
\end{itemize}
Note that $\sigma$ is easily built inductively from E1 and E2, and is
uniquely determined. It is also easy to check that it satisfies the
slice properties (S1--S3).
We call the domains appearing in E2 ``unused
domains'' for $\sigma$.
Similarly, define $\sigma'$ as the smallest slice with bottom
pair $(h,v')$, such that for any
$(k,w)\in\sigma'$ and $Y$ a component domain of $(D(k),w)$,
\begin{itemize}
\item[E1':] if $ v|_ Y \ne \emptyset$ and $Y$ supports a geodesic
$q$ then $(q,u)\in \sigma'$ where $u$ is the {\em first} vertex of $q$.
\item[E2':] if $ v|_ Y = \emptyset$ then no geodesic in $Y$ is
included in $\sigma'$.
\end{itemize}
Before continuing let us consider this construction in several special
cases.
\begin{enumerate}
\item $D(h)$ is an annulus. Here $v$ and $v'$ are arcs
in the closed annulus with disjoint interiors, and
$\sigma = \{(h,v)\}$, $\sigma' = \{(h,v')\}$.
\item $D(h)$ is a once-punctured torus. Now $v$ and $v'$ are curves
intersecting once in $D(h)$. Let $k$ be the geodesic supported in the
annulus $Y$ whose core is $v$, and let $k'$ be the geodesic supported in
the annulus $Y'$ whose core is $v'$. Then
$$\sigma = \{(h,v),(k,\pi_Y(v'))\},\ \ \ \sigma' =
\{(h,v'),(k',\pi_{Y'}(v))\}.$$
(If $D(h)$ is a 4-holed sphere then $v$ and $v'$ intersect twice so
$\pi_Y(v')={\mathbf T}(k)$ has two components, only one of which appears as the
last vertex of $k$; and similarly for $\pi_{Y'}(v)$.)
\item $\xi(D(h)) = 5$. Now $v$ and $v'$ are disjoint one-component curves. The
complementary domain $Y$ of $v$ with $\xi(Y)=4$ must contain $v'$, and the
complementary domain $Y'$ of $v'$ with $\xi(Y')=4$ must contain
$v$. Let $k$ and $k'$ be the geodesics supported in $Y$ and $Y'$
respectively. Then
$$\sigma = \{(h,v),(k,v')\}, \ \ \ \sigma' = \{(h,v'),(k',v)\}.$$
Note that the annuli with cores $v$ and $v'$ are not included in these
slices, by E2 and E2'. In particular we observe that $\mu_\sigma =
\mu_{\sigma'} = v\union v'$. The general $\xi>4$ case will be treated
in Lemma \ref{transition slices} below.
\end{enumerate}
\begin{lemma}{transition slices}
Let $v,v'$ be successive vertices in a geodesic $h\in H$, where $H$ is
a complete hierarchy. Let $\sigma,\sigma'$ be the transition slices
associated to $v,v'$. If $\xi(D(h)) > 4$ then
no geodesics in $\sigma$ and $\sigma'$ have annular domains, the associated
markings $\mu_\sigma$ and $\mu_{\sigma'}$ have no transversals and
are both equal to $v\union v'$,
and the unused domains in $\sigma $ and $\sigma'$
are exactly the component domains of $(D(h),v\union v')$.
\end{lemma}
Thus, the move from $\sigma $ to $\sigma'$ in this case involves only
a ``reorganization'', and the underlying curve system is not changed.
\begin{pf}
Since $\xi(D(h))>4$, $v$ and $v'$ are disjoint curve systems.
Consider first a component domain $Y$ of $(D(h),v)$. If $Y$ misses
$v'$ then it is an unused domain of $\sigma$ (case E2) and is also clearly a
component domain of $(D(h),v\union v')$. If $Y$ doesn't miss $v'$,
then by E1 (and completeness of $H$) we have
$(q,u)$ in $\sigma$ where
$D(q) = Y$ and $u$ is the last vertex in $q$.
Since $Y\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} h$,
it follows from Lemma \ref{Y direct to top} that $q\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex} h$
with ${\mathbf T}(q) = {\mathbf T}(Y,h)=v'|_ Y$. Hence in particular $u\subseteq
v'$. Note that $u$ need not be all of $v'|_ Y$.
Now let $Z$ be any component domain of $(D(q),u)$. By the above, the
relative boundary $\boundary_{D(h)}(Z)$ consists of some subset of
$v\union v'$.
Again if $Z$ misses $v'$ it is unused in $\sigma$ and a component
domain of $(D(h),v\union v')$,
and if $v'|_ Z\ne\emptyset$
then $Z\mathrel{\scriptstyle\searrow} q$ with ${\mathbf T}(Z,q) = {\mathbf T}(q)|_ Z =
v'|_ Z$. Thus the same argument works inductively. The process
terminates in an unused domain exactly when this domain is a component
domain of $(D(h),v\union v')$.
The same argument applies to $\sigma'$ as well, reversing directions
as usual.
No annulus whose core is a component of $v\union v'$ has
essential intersection with either $v$ or $v'$. Thus each such annulus is
unused, so the slices $\sigma,\sigma'$ have no annulus-domain geodesics, and
their markings have no transversals.
\end{pf}
We can now define our elementary moves.
Let there be given two slices $\tau$ and $\tau'$, and let $h$ be a
geodesic in $H$ with two successive vertices $v,v'$.
We say that $\tau'$
is related
to $\tau$ by a {\em forward elementary move} along $h$ from $v$ to
$v'$ (or $\tau \to
\tau'$ for short) provided the following holds:
Letting $\sigma,\sigma'$ be the transition slices for $v,v'$, we have
$\sigma\subset \tau$ and $\sigma'\subset \tau'$, and $\tau\setminus\sigma =
\tau'\setminus\sigma'$. The next lemma checks that a forward move in
$V(H)$ really moves forward in terms of the partial order:
\begin{lemma}{forward means forward}
Suppose $\tau$ and $\tau'$ are in $V(H)$, and are related by an
elementary move $\tau\to\tau'$. Then
$\tau \prec_s \tau'$.
\end{lemma}
\begin{pf}
First, $\tau \ne \tau'$ since $\sigma $ and $\sigma'$ differ in their
bottom pair. Now
let $(k,w)\in \tau$ be such that $(k,w)\notin\tau'$. Then
$(k,w)\in\sigma$ and hence $D(k)\subseteq D(h)$, and
$v'|_ {D(k)}\ne \emptyset$, by construction of $\sigma$.
If $k=h$ then $(k,w) = (h,v) \prec_p (h,v')\in \sigma'$ and we are done.
If not then $\phi_h(D(k))$ contains $v$ and not $v'$, so that
$\max\phi_h(D(k)) = v < v'$. We therefore have
$(k,w) \prec_p (h,v')$, and again we are done.
\end{pf}
Next, we should show that in fact a sequence of elementary moves does
exist connecting the initial to the terminal slice of $H$, and
furthermore give a bound for its length.
Let $|H|$ denote the size of the hierarchy $H$, defined as the sum
$\Sigma_{h\in H} |h|$ of the lengths of its geodesics.
\begin{proposition}{Resolution into slices}
Any complete finite hierarchy $H$ admits a sequence of forward elementary
moves
$\tau_0\to\cdots\to\tau_N$ where $\tau_0$ is its initial slice,
$\tau_N$ is its terminal slice, and
$$
N\le |H|.
$$
Such a sequence is called a {\em resolution} of $H$.
\end{proposition}
\begin{pf}
Let us first show that, if $\tau$ is not the terminal slice of $H$,
then there exists some $\tau'$ such that $\tau\to\tau'$.
Indeed, there is at least one $(h,v)\in\tau$ for which $v$ is not the
last vertex
of $h$. Choose $h$ minimal in the sense that if $(k,w)\in\tau$ and
$D(k)\subset D(h)$ then $w$ is the last vertex of $k$.
Let $v'$ denote the successor of $v$ in $h$.
The subset
$$
\sigma = \{(k,w)\in\tau: D(k)\subseteq D(h),
v'|_{D(k)}\ne\emptyset\}
$$
satisfies conditions (E1,E2), by the minimal choice of $h$ and the
fact that $\tau$ is complete.
Construct
$\sigma'$ via (E1') and (E2'), thus obtaining the transition slices
for $v,v'$, and let $\tau' =\sigma'\union(\tau\setminus \sigma)$.
It is easy to check that $\tau'$ satisfies conditions (S1--S3) and is
hence a slice, with $h_{\tau'} = h_\tau = g_H$. To see that it is a
complete slice (S4), consider
any $(k,y)\in \tau'$ and let $Y$ be a component domain of
$(D(k),y)$. If $(k,y)$ is not in $\sigma'$ then by definition it is in
$\tau$, and since $\tau$ is complete, it contains some $(l,z)$ with
$D(l)=Y$. If $(l,z)\notin\sigma$ then again by definition $(l,z)\in
\tau'$ and we are done. If $(l,z)\in\sigma$ then, since it is a
component domain of a pair outside of $\sigma$, it can only be the bottom
pair $(h,v)$ of $\sigma$. But then $Y=D(h)$ and we know that $(h,v')$
is a pair in $\tau'$, so again we are done. Now suppose that
$(k,y)\in\sigma'$. If $Y$ is a
used domain of $\sigma'$ then by definition it supports some geodesic
$l'$ appearing in $\tau'$. If $Y$ is an unused domain then by Lemma
\ref{transition slices} it is also an unused domain of $\sigma$, and
hence supports a
geodesic $q$ appearing in $\tau\setminus \sigma$. Again we conclude
$q$ appears in $\tau'$ and we are done.
We thus have a slice $\tau'$ in $V(H)$, and an elementary forward move
$\tau\to\tau'$. Note that $\tau'$ is not uniquely determined by
$\tau$, as there may have been more than one $h$ to choose from.
Now if $\tau_0$ is the initial slice of $H$, the above implies that
there is a sequence
$\tau_0 \to \tau_1 \to \tau_2 \to \cdots$, which does not terminate at
$\tau_i$ as
long as $\tau_i$ is not the terminal slice.
On the other hand by Lemma \ref{forward means forward},
the sequence is strictly increasing in $\prec_s$. Since the set of
slices is finite, it must terminate for some $\tau_N$, which must then
be the terminal slice of $H$.
All we have left to prove is the bound on the length $N$ of the
resolution.
Suppose a pair $(h,v)$ appears in $\tau_n$ and $(h,w)$ appears in
$\tau_{m}$ for $n<m$. Then $\tau_n \prec_s \tau_m$ as we have seen,
and therefore it must be that $v\le w$.
For if not we would have $(h,w)\prec_p(h,v)$, but by definition of
$\prec_s$ there is some $(k,u)\in\tau_m$ such that
$(h,v)\prec_p(k,u)$.
Hence $(h,w) \prec_p (k,u)$, but
this contradicts the fact that all pairs in a
given slice are not $\prec_p$-comparable (see proof of Lemma \ref{sprec
partial order}).
By definition, a forward move $\tau_n \to \tau_{n+1}$ advances exactly one
geodesic exactly one step,
erases certain pairs of the form $(k,u)$ where $u$ is
the last vertex, creates certain pairs of the form $(k',u')$ where $u'$ is
the first vertex, and keeps the rest of the pairs unchanged.
Since by the previous paragraph
no vertices in any geodesic can be repeated once they have been
incremented or erased,
it follows that the number of forward moves is bounded by
$\Sigma_{h\in H} |h|$, which is $|H|$.
\end{pf}
\bfheading{Conversion to a sequence of clean markings.}
Given a resolution $\tau_0\to\cdots\to\tau_N$ of $H$ into slices,
we may obtain a sequence of clean markings $\mu_0,\ldots,\mu_N$ by
requiring that each $\mu_i$ be compatible with $\tau_i$. Recall that
there may be a finite number of choices for each $\mu_i$.
For convenience we also assume that ${\mathbf I}(H)$ and ${\mathbf T}(H)$ are clean, and
$\mu_0 = {\mathbf I}(H)$ and $\mu_N={\mathbf T}(H)$.
What is left to check is the relationship between $\mu_i$ and
$\mu_{i+1}$. Recall from \S\ref{markings} the definition of the
elementary moves Flip and Twist on clean markings.
We can now establish:
\begin{lemma}{marking move bound}
Let $(\tau_i)$ be a resolution of a complete finite hierarchy $H$, and
let $(\mu_i)$ be
a sequence of complete clean markings compatible with
$(\tau_i)$.
There exists $B>0$ depending only on the topological type of $S$, such
that
$\mu_i$ and $\mu_{i+1}$ differ by at most $B$ elementary
moves.
In particular, assuming ${\mathbf I}(H)$ and ${\mathbf T}(H)$ are clean,
there is a sequence of clean markings $(\hat\mu_j)_{j=0}^M$, successive
ones separated
by elementary moves, such that $\hat\mu_0 = {\mathbf I}(H)$, $\hat \mu_M={\mathbf T}(H)$, and
$M\le B|H|$.
\end{lemma}
\begin{proof}
We have already seen in the beginning of the section that two clean
markings compatible with the
same $\tau_i$ differ by a bounded number
of Twist elementary moves.
Now, recall that $\tau_i\to\tau_{i+1}$ is determined by a
transition $v\to v'$ along some geodesic $h$. If $D(h)$ is an annulus,
$v$ and $v'$ differ by distance one in the annular complex $\CC(D(h))$,
so a bounded number of Twist moves applied to $\mu_i$ yields a
marking $\mu'_{i+1}$
which is compatible with $\tau_{i+1}$. Then as above $\mu'_{i+1}$ and
$\mu_{i+1}$ are related by a bounded number of Twist moves.
Suppose that $\xi(D(h)) = 4$. Then recall that the transition slices
$\sigma_i$ and $\sigma_{i+1}$ can be written as
$\{(h,v),(k,t)\}$ and $\{(h,v'),(k',t')\}$ where $k,k'$ are the
geodesics in the complexes of the annuli $Y$ and $Y'$ with cores $v$
and $v'$ respectively, and $t$ and $t'$ are vertices of $\pi_Y(v')$
and $\pi_{Y'}(v)$ respectively. (If $D(h)$ is a 1-holed torus then
$t=\pi_Y(v')$ and $t'=\pi_{Y'}(v)$).
Thus a clean marking $\mu'_i$ can be constructed compatible with $\tau_i$ and
containing a pair $(v,\pi_Y(v'))$. Now a Flip move on this
marking yields a marking $\mu'_{i+1}$ with the pair $(v',\pi_{Y'}(v))$,
with all other base curves the same, and transversals at distance at most
$n_1$ from those of $\mu'_i$ by Lemma \ref{clean markings}.
It follows that, using a bounded number of Twist moves on each
base curve,
$\mu'_{i+1}$ can be made into $\mu''_{i+1}$ which is
compatible with $\tau_{i+1}$.
Since the previous discussion bounds the number of moves to get from
$\mu_i$ to $\mu'_i$ and from
$\mu_{i+1}$ to $\mu''_{i+1}$, we again have a bound on the number of
moves needed to get from $\mu_i$ to $\mu_{i+1}$.
Finally when $\xi(D(h)) > 4$, $\tau_i$ and $\tau_{i+1}$ have exactly
the same base curves, and the positions on their annulus geodesics are
the same. It follows that any marking compatible with $\tau_i$ is also
compatible with $\tau_{i+1}$, and hence again $\mu_i$ and $\mu_{i+1}$
differ by a bounded number of Twist moves.
\end{proof}
We remark that explicit bounds for this lemma are straightforward, but
somewhat tedious, to compute, so we have elected to leave them out.
\section{Introduction}
It is well known that compact groups
admit no ergodic actions on operator algebras other than the
finite ones (i.e. those with finite traces) \cite{HLS}.
Therefore, the following basic problem arose
(cf.\ p.~76 of \cite{HLS}):
construct an ergodic action of a semisimple compact Lie group on the
Murray-von Neumann $\mathrm{II}_1$ factor $R$. Later,
Wassermann developed a general theory of ergodic actions of
compact groups on operator algebras and
showed that $SU(2)$ cannot act ergodically on $R$
\cite{AWassermann1,AWassermann3},
leaving experts in doubt as to whether semisimple compact Lie groups
admit any ergodic action on $R$ at all.
In \cite{Boca1}, Boca studied the general theory of
ergodic actions of compact quantum groups \cite{Wor5} on $C^*$-algebras
and generalized some basic results on ergodic actions of
compact groups to compact quantum groups. But so far
there is still a lack of non-trivial examples of ergodic actions
of compact quantum groups on operator algebras.
The purpose of the present paper, which is
in some sense opposite to that of Boca \cite{Boca1}, is two-fold. First, we show
that some new phenomena can occur for ergodic actions of
quantum groups. Second, we supply some
general methods to construct ergodic actions of compact
quantum groups on operator algebras and give several non-trivial examples of
such actions. We show that the universal compact matrix quantum
groups $A_u(Q)$ of \cite{W5,W1} admit ergodic actions on both
the (infinite) injective
factors of type III (for $Q \neq c I_n$, $c \in {\Bbb C}^*$)
and the (infinite) Cuntz algebras (for $Q>0$). We construct
an ergodic action of the universal compact matrix quantum group of Kac type
$A_u(n)$ on the hyperfinite factor $R$, which may not admit
ergodic actions of any semisimple compact Lie group \cite{AWassermann3}.
We also study ergodic actions of compact quantum groups
on their homogeneous spaces and show that there are non-homogeneous
{\em classical spaces} that admit ergodic actions of quantum groups.
These results show that compact quantum groups have a much
richer theory of ergodic actions on operator algebras than
compact (Lie) groups.
Unlike Boca \cite{Boca1}, we study actions of compact quantum groups
on both $C^*$-algebras and von Neumann algebras,
not just $C^*$-algebras. Our construction of
ergodic actions of compact quantum groups on von Neumann
algebras comes from their ``measure preserving'' actions on
$C^*$-algebras, just as in the classical situation
(see \thref{inducedaction}). One of our constructions of ergodic actions
(see Sect. 3) uses tensor products of irreducible representations of compact
quantum groups. This method was first used by Wassermann \cite{AWassermann4}
in the setting of Lie groups (instead of quantum groups)
to construct subfactors from their ``product type actions''.
At the other extreme, actions of quantum groups with large fixed
point algebras (i.e. prime actions) have been studied by many authors,
see, e.g. \cite{Cuntz2,CDPR1}.
Generalizing the canonical action of compact Lie groups on the Cuntz
algebras \cite{Cuntz} introduced by Doplicher-Roberts \cite{DopRob6,Dop1},
Konishi et al \cite{KMW} study a (non-ergodic) action of
$SU_q(2)$ on the Cuntz algebra ${\cal O}_2$ and its CAR subalgebra and show
that their fixed point algebras coincide (see also \cite{Marciniak1}).
This result is extended to $SU_q(n)$ by Paolucci \cite{Paolucci1}.
In \cite{Nakagami1}, this action of the quantum group $SU_q(n)$ is
induced to a (non-ergodic) action on
the Powers factor $R_\lambda$ by a rather complicated method;
the same action can be obtained from our result \thref{inducedaction}
in a much simpler and more conceptual manner.
The contents of this paper are as follows. In \secref{induce},
we give a general method for constructing quantum group actions
on von Neumann algebras from their ``measure preserving'' actions
on $C^*$-algebras. Using this and a result of Banica \cite{Banica2}
on the tensor products of
the fundamental representation of $A_u(Q)$, we construct in
\secref{UHF} an ergodic action of the universal quantum group $A_u(Q)$ on the
Powers factor $R_\lambda$ of type $\mathrm{III}_\lambda$
and an ergodic action of $A_u(n)$ on the hyperfinite
$\mathrm{II}_1$ factor $R$. In \secref{fixedpoint}, using results of
Banica \cite{Banica1}, we show that the
fixed point subalgebra of $R$ under the quantum subgroup $A_o(n)$
of $A_u(n)$ is also a factor and that the action of $A_o(n)$ on $R$ is prime.
In \secref{Cuntz},
we construct ergodic actions of $A_u(Q)$ on the Cuntz algebras and on the
injective factor $R_{\infty}$ of type $\mathrm{III}_1$ as well as the other
factors of type $\mathrm{III}$. It is also shown that the (unimodular)
compact quantum group $A_u(n)$ of Kac type acts ergodically
on the injective factor of type $\mathrm{III}_{\frac{1}{n}}$,
a fact rather surprising to us.
In the last section \secref{quotient}, we
study ergodic actions of compact quantum groups on their ``quotient spaces'',
and show that the quantum automorphism group $A_{aut}(X_4)$ acts ergodically
on the classical space $X_4$ with $4$ points, but $X_4$ is not isomorphic
to a quotient space. We point out that instead of using the
fundamental representation of $A_u(Q)$, we can
also use representations of free products of compact quantum groups
\cite{W1} in the examples in \secref{UHF} and \secref{Cuntz} for the
constructions of ergodic actions.
\section{
\label{induce}
Lifting actions on $C^*$-algebras to von Neumann algebras}
In this section, we describe (\thref{inducedaction})
how to construct ergodic actions of compact quantum groups on
von Neumann algebras from ``measure preserving'' actions on
noncommutative topological spaces (i.e. $C^*$-algebras).
To fix notation, we first recall some basic notions concerning actions of
quantum groups on operator algebras (\cite{BS2,Boca1,Pod6,W15}).
For convenience in this paper, we will use the definition given in
\cite{W15} for the notion of actions of compact quantum groups on
$C^*$-algebras. As in \cite{W15},
Woronowicz Hopf $C^*$-algebras are assumed to be full in order to
define morphisms. We adapt the following convention
(see \cite{W1,W5,W15}): when $A=C(G)$ is a Woronowicz Hopf $C^*$-algebra,
we also say that $A$ is a compact quantum group,
referring to the dual object $G$.
\begin{DF} (cf \cite{W15})
\label{qact}
A (left) {\bf action} of a compact quantum group $A$ on a
$C^*$-algebra $B$ is a unital *-homomorphism $\alpha$ from $B$ to
$B \otimes A$ such that
(1). $( id_B \otimes \Phi ) \alpha = ( \alpha \otimes id_A ) \alpha,$
where $\Phi$ is the coproduct on $A$;
(2). $(id_B \otimes \epsilon) \alpha = id_B$,
where $\epsilon$ is the counit on $A$;
(3). There is a dense *-subalgebra $\cal B$ of $B$, such that
$$\alpha ({\cal B}) \subseteq {\cal B} \otimes {\cal A},$$
where $\cal A$ is the canonical dense *-subalgebra of $A$.
\end{DF}
\noindent
{\em Remarks.}
(1).
The definition above is equivalent to the one in Podles \cite{Pod6}.
As in \cite{Pod6}, we do not impose the condition that $\alpha$ is
injective, which is required in \cite{BS2,Boca1},
though the examples constructed in this paper satisfy
this condition. We conjecture that this condition is
a consequence of the other conditions in the definition.
A special case of this conjecture says that
the coproduct of a Woronowicz Hopf $C^*$-algebra
is injective, which is true for both the full Woronowicz Hopf
$C^*$-algebras (because of the counital property) and
the reduced ones (because of Baaj-Skandalis \cite{BS2}).
Even if this conjecture is false, one can
still obtain an injective $\tilde{\alpha}$ from $\alpha$
by passing to the quotient of $B$ by the kernel of $\alpha$.
We leave the verification of the latter as an exercise for the reader.
(2).
The above notion of left action of quantum group $G$ would be
called right coaction of the Woronowicz Hopf $C^*$-algebra $C(G)$ by
some other authors. But we prefer the more geometric term ``action of
quantum group''.
We can similarly define a right action of a quantum group $G$, which
would be called a left coaction of the Woronowicz Hopf
$C^*$-algebra $C(G)$ by some other specialists.
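\medskip
\noindent
{\em Example.}
For orientation we spell out the commutative case, under one common choice
of conventions.
Let $A = C(G)$ for a compact group $G$ and $B = C(X)$ for a compact
Hausdorff left $G$-space $X$. Identifying $B \otimes A \cong C(X \times G)$,
the formula
$$(\alpha f)(x, g) = f(g^{-1} \cdot x), \; \; \; f \in C(X),$$
defines an action in the sense of \dfref{qact}: condition (1) amounts to
the identity $f((g g')^{-1} \cdot x) = f(g'^{-1} \cdot (g^{-1} \cdot x))$,
i.e. to the action axiom for $G$; condition (2) holds since
$\epsilon(a) = a(e)$; and for condition (3) one may take $\cal B$ to be the
dense *-subalgebra of $G$-finite functions in $C(X)$, which is dense by the
Peter-Weyl theorem.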
\begin{DF}
Let $\alpha$ be an action of a compact quantum group $A$ on $B$.
An element $b$ of $B$ is said to be {\bf fixed under $\alpha$}
(or {\bf invariant under $\alpha$}) if
\begin{eqnarray}
\alpha (b) = b \otimes 1_A.
\end{eqnarray}
The {\bf fixed point algebra} $B^\alpha$
(or $B^A$ if no confusion arises) of the action $\alpha$ is
\begin{eqnarray}
B^\alpha = \{ b \in B \; | \; \alpha (b) = b \otimes 1_A \}.
\end{eqnarray}
The action of $A$ is said to be {\bf ergodic}
if $B^\alpha = {\Bbb C} I$.
A continuous functional $\phi$ on $B$ is said to be
{\bf invariant under $\alpha$} if
\begin{eqnarray}
(\phi \otimes id_A) \alpha (b) = \phi (b) 1_A.
\end{eqnarray}
\end{DF}
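\medskip
\noindent
{\em Example.}
The coproduct $\Phi : A \rightarrow A \otimes A$ is itself an action of the
quantum group on $B = A$ (the translation action of the quantum group on
itself), and it is ergodic: if $\Phi(b) = b \otimes 1_A$, then applying
$\epsilon \otimes id_A$ to both sides gives $b = \epsilon(b) 1_A$, so
$B^\Phi = {\Bbb C} 1_A$. Moreover the condition
$$(h \otimes id_A) \Phi(b) = h(b) 1_A, \; \; \; b \in A,$$
is precisely the invariance of the Haar state $h$, so $h$ is invariant
under this action.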
Fix an action $\alpha $ of a compact quantum group $A$ on $B$. Let $h$
be the Haar state on $A$ \cite{Wor5,W1,Daele4}. Then we have
\begin{PROP}
\label{denseB}
(1). The map $E = (1 \otimes h) \alpha$ is a projection of norm one
from $B$ onto $B^\alpha$;
(2). Let
\begin{eqnarray}
{\cal B}^\alpha = \{ b \in {\cal B} \; | \; \alpha (b) = b \otimes 1_A \}.
\end{eqnarray}
Then ${\cal B}^\alpha$ is norm dense in $B^\alpha$. Hence the
action $\alpha$ is ergodic if and only if it is so when restricted to
the dense *-subalgebra ${\cal B}$ of $B$.
\end{PROP}
\pf
(1).
This is an easy consequence of the following form of the invariance of the
Haar state (cf \cite{Wor5}):
$$(id_A \otimes h) \Phi (a) = h(a) 1_A, \; \; \; a \in A.$$
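In detail, write $\alpha(b) = \sum_i b_i \otimes a_i$ (with the sum
understood as a limit for general $b$). Then
\begin{eqnarray*}
\alpha(E(b)) &=& (id_B \otimes id_A \otimes h)(\alpha \otimes id_A) \alpha(b) \\
&=& (id_B \otimes id_A \otimes h)(id_B \otimes \Phi) \alpha(b) \\
&=& \sum_i b_i \otimes (id_A \otimes h) \Phi(a_i)
 = \sum_i h(a_i)\, b_i \otimes 1_A = E(b) \otimes 1_A,
\end{eqnarray*}
so $E(b) \in B^\alpha$. If $b \in B^\alpha$ then clearly $E(b) = b$, so
$E$ is an idempotent onto $B^\alpha$; it is unital and positive, being the
composition of the *-homomorphism $\alpha$ with the slice map
$id_B \otimes h$, and hence has norm one.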
(2).
If $b \in B^\alpha$, then $b$ can be approximated in
norm by a sequence of elements $b_l \in \cal B$. Let ${\bar b}_l$ be the
average of $b_l$:
$$ {\bar b}_l = (1_B \otimes h) \alpha (b_l).$$
Then from part (1) of the proposition, $\bar{b}_l \in B^\alpha$. From
condition (3) of \dfref{qact}, we see that
$\bar{b}_l \in {\cal B}^\alpha$.
Moreover,
\begin{eqnarray*}
\|{\bar b}_l - b \|
&=& \| (1 \otimes h) \alpha(b_l - b)\| \\
&\leq& \| (1 \otimes h) \alpha \| \|b_l - b\| \rightarrow 0.
\end{eqnarray*}
The rest is clear.
\QED
\vspace{4mm}\\
We preserve the notation above. Let $\frak A$ be the von Neumann algebra
generated by the GNS representation $\pi_h$ of $A$ for the state $h$.
Then $\frak A$ is a Hopf von Neumann algebra. For later use,
we need to adapt the definitions above to the situation of
von Neumann algebras.
\begin{DF}
\label{vonNeumanncoact}
A {\bf right coaction} of a Hopf von Neumann algebra $\frak A$ on a
von Neumann algebra $\frak B$ is a normal homomorphism $\alpha$ from
$\frak B$ to ${\frak B} \otimes {\frak A}$ such that
(1). $ ( id_{\frak B} \otimes \Phi ) \alpha =
( \alpha \otimes id_{\frak A} ) \alpha, $
where $\Phi$ is the coproduct on $\frak A$;
(2). $ \alpha ({\frak B}) (1 \otimes {\frak A} ) $
generates the von Neumann algebra ${\frak B} \otimes {\frak A}$.
\end{DF}
The main reason why we use the term ``coaction of Hopf von Neumann algebra''
is that von Neumann algebras are measure-theoretic objects
instead of geometric-topological objects (cf. Remark (2) after \dfref{qact}).
Condition (2) in the above definition is an analog of the density
condition as used in the Hopf $C^*$-algebra setting \cite{BS2,Pod6}.
It is well known that there is no analogue of counit in
the Hopf von Neumann algebra situation simply because a
von Neumann algebra corresponds to
a measure space in the commutative case (the simplest case), and functions
are defined only up to sets of measure zero. Hence we do not have an
analog of condition (2) of \dfref{qact} for coactions of Hopf von
Neumann algebras.
If $\frak A$ comes from the GNS-representation of the Haar state
on a compact quantum group $A$ and $\frak A$
coacts on the right on some von Neumann algebra $\frak B$,
we will abuse the terminology by saying that the quantum group
$A$ acts on $\frak B$.
Other notions such as invariant elements (or functionals), fixed point
algebra and ergodic actions in the
$C^*$-case above can also be carried over to the
von Neumann algebra situation.
The main result of this section is the following
\begin{TH}
\label{inducedaction}
Let $B$ be a $C^*$-algebra endowed with an action $\alpha$ of a compact
quantum group $A$. Let $\tau$ be an $\alpha$-invariant state on $B$.
Then
(1).
$\alpha$ lifts to a coaction $\tilde{\alpha}$ of the Hopf von Neumann
algebra ${\frak A} = \pi_h (A) ^{\prime \prime}$ on the von Neumann algebra
${\frak B} = \pi_{\tau}(B) ^{\prime \prime}$ defined by
\begin{eqnarray}
\tilde{\alpha} (\pi_\tau (b)) = (\pi_\tau \otimes \pi_h) \alpha(b),
\hspace{1cm} b \in B,
\end{eqnarray}
where $\pi_h$ and $\pi_\tau$ are respectively the GNS representations
associated with the Haar state $h$ on $A$ and the state $\tau$ on $B$.
(2).
If $\alpha$ is ergodic, then so is $\tilde{\alpha}$.
\end{TH}
\pf
(1).
We will only show that the natural map $\tilde{\alpha}$ given
on the dense subalgebra $\pi_\tau(B)$ by
$$\tilde{\alpha} (\pi_\tau (b)) = (\pi_\tau \otimes \pi_h) \alpha(b),
\hspace{1cm} b \in B,$$
is well defined and extends to a {\em normal morphism} from $\frak B$
to ${\frak B} \otimes {\frak A}$.
Let $b \in B$ and $a \in A$. Denote by $\tilde b$ and
$\tilde a$ respectively the corresponding elements of the Hilbert spaces
$H=L^2(B, \tau)$ and $K = L^2(A, h)$. Define an operator $U$ on
$H \otimes K$ by
\begin{eqnarray}
U(\tilde{b} \otimes \tilde{a}) = (\pi_\tau \otimes \pi_h) \alpha(b)
(\tilde{1}_B \otimes \tilde{a}).
\end{eqnarray}
Then since $\tau$ is $\alpha$ invariant, we have
\begin{eqnarray*}
<U(\tilde{b} \otimes \tilde{a}), U(\tilde{b} \otimes \tilde{a})>
&=& (\tau \otimes h) (1_B \otimes a^*) \alpha(b^* b) (1_B \otimes a) \\
&=& aha^* ((\tau \otimes id_A) \alpha(b^* b))
= aha^* (\tau (b^* b) 1_A) \\
&=& <\tilde{b} \otimes \tilde{a}, \tilde{b} \otimes \tilde{a}>,
\end{eqnarray*}
where $aha^*$ is the functional on $A$ defined by
$$aha^*(x) = h(a^* x a), \hspace{1cm} x \in A.$$
Hence $U$ is an isometry. Since
$\alpha (B) (1 \otimes A)$ is dense in $B \otimes A,$
$U$ is a unitary operator. We also have
\begin{eqnarray*}
(\pi_\tau \otimes \pi_h) \alpha (b) U (\tilde{b'} \otimes \tilde{a'})
&=& (\pi_\tau \otimes \pi_h) \alpha (b)
(\pi_\tau \otimes \pi_h) \alpha (b') (\tilde{1}_B \otimes \tilde{a'}) \\
&=& (\pi_\tau \otimes \pi_h) \alpha (b b') (\tilde{1}_B \otimes \tilde{a'})
= U (\pi_\tau(b) \tilde{b'} \otimes \tilde{a'}) \\
&=& U (\pi_\tau(b) \otimes 1) (\tilde{b'} \otimes \tilde{a'}).
\end{eqnarray*}
That is
\begin{eqnarray}
\label{covariantaction1}
(\pi_\tau \otimes \pi_h) \alpha(b) = U (\pi_\tau(b) \otimes 1) U^*.
\end{eqnarray}
Condition (1) of \dfref{vonNeumanncoact} follows immediately.
Since $ \alpha (B) (1 \otimes A ) $
is dense in $B \otimes A$ (cf. Remark (1) after \dfref{qact}
and Podles \cite{Pod6}),
Condition (2) of \dfref{vonNeumanncoact} follows.
(2).
Assume $\alpha$ is ergodic. Let $z \in {\frak B}$ be a fixed element
under $\tilde{\alpha}$:
$$\tilde{\alpha} (z) = z \otimes 1_{\frak A}.$$
Let $b_n \in B$ be a net of elements such that $\pi_\tau(b_n) \rightarrow z$
in the weak operator topology. Consider the average of $\pi_\tau(b_n)$
integrated over the quantum group $A$,
$$z_n = (id_{\frak B} \otimes h) \tilde{\alpha} (\pi_\tau (b_n)),$$
where we use the same letter $h$ to denote the faithful normal state
on $\frak A$ determined by the Haar state $h$ on $A$.
Then one can verify that $z_n \rightarrow z$ in the weak operator topology.
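One way to verify this convergence (a sketch; we may assume, by the
Kaplansky density theorem, that the net $\pi_\tau(b_n)$ is bounded, so that
weak operator and $\sigma$-weak convergence agree on it): since
$\tilde{\alpha}$ and the slice map $id_{\frak B} \otimes h$ are normal,
\begin{eqnarray*}
z_n = (id_{\frak B} \otimes h) \tilde{\alpha} (\pi_\tau (b_n))
& \longrightarrow &
(id_{\frak B} \otimes h) \tilde{\alpha} (z) \\
&=& (id_{\frak B} \otimes h) (z \otimes 1_{\frak A})
= h(1_{\frak A}) z = z.
\end{eqnarray*}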
Moreover, using
$$(id_{\frak A} \otimes h) \Phi (a) = h(a) 1_{\frak A}, \; \; \; a \in
{\frak A},$$
where we denote the coproduct on $\frak A$ by the same symbol
as the coproduct $\Phi$ on $A$, we have
\begin{eqnarray*}
\tilde{\alpha} (z_n)
&=& (id_{\frak B} \otimes id_{\frak A} \otimes h)
(\tilde{\alpha} \otimes id_{\frak A}) \tilde{\alpha}(\pi_\tau(b_n)) \\
&=& (id_{\frak B} \otimes id_{\frak A} \otimes h) (id_{\frak B} \otimes \Phi )
\tilde{\alpha} (\pi_\tau(b_n)) \\
&=& (id_{\frak B} \otimes (id_{\frak A} \otimes h) \Phi )
\tilde{\alpha}(\pi_\tau(b_n)) \\
&=& (id_{\frak B} \otimes h) \tilde{\alpha}(\pi_\tau(b_n)) \otimes 1_{\frak A}
= z_n \otimes 1_{\frak A}.
\end{eqnarray*}
That is, each $z_n$ is fixed under $\tilde{\alpha}$.
From part (1) of the theorem, we see
$$z_n = (\pi_\tau \otimes h) \alpha(b_n) = \pi_\tau(\bar{b}_n), $$
where
$$\bar{b}_n = (1 \otimes h) \alpha(b_n) \in B^\alpha$$
is the average of $b_n$.
Since $\alpha$ is ergodic, $\bar{b}_n$ is a scalar. This implies
that each $z_n$ is also a scalar. Consequently, the operator $z$,
as a limit of the $z_n$'s in the weak operator topology,
is a scalar.
\QED
\vspace{4mm}\\
{\em Remarks.}
Define on the Hilbert $A$-module $H \otimes A$ (conjugate linear in
the second variable) an operator $u$ by
\begin{eqnarray}
u(\tilde{b} \otimes a) = (\pi_\tau \otimes 1) \alpha(b)
(\tilde{1}_B \otimes a).
\end{eqnarray}
Then one verifies that $u$ is a unitary representation of the quantum group
$A$ (cf \cite{Wor5,BS2,W1}) and $(\pi_\tau, u)$ satisfies the following
covariance condition in the sense of 0.3 of \cite{BS2}:
\begin{eqnarray}
\label{covariantaction2}
(\pi_\tau \otimes 1) \alpha(b) = u (\pi_\tau(b) \otimes 1) u^*.
\end{eqnarray}
The operator $U$ defined above is given by
$$U = (1 \otimes \pi_h) u.$$
The pair $(\pi_\tau, U)$ along with the
relation \eqref{covariantaction1} can be called a covariant system
in the framework of Hopf von Neumann algebras.
Note that part (1) of the above theorem also gives a conceptual
proof of Proposition 4.2.(i) of \cite{Nakagami1}, where a rather complicated
(and nonconceptual) proof is given.
\vspace{4mm}\\
{\em Notation.} Let $v$ be a unitary representation of
a quantum group $A$ on some finite dimensional Hilbert space $H_v$
\cite{Wor5,BS2,W1}. Define
\begin{eqnarray}
Ad_v (b) = v ( b \otimes 1) v^*, \; \; b \in B(H_v).
\end{eqnarray}
Then using Proposition 3.2 of \cite{Wor5}, we see that $Ad_v$ is an action
of $A$ on $B(H_v)$ (see also the remark after the proof of Theorem 4.1 in
\cite{W15}). It will be called the {\bf adjoint action} of the quantum group
$A$ for the representation $v$. Note that unlike in the case of locally
compact groups, for quantum groups we have in general
\begin{eqnarray}
Ad_{v \otimes_{in} w} \neq Ad_v \otimes_{in} Ad_w,
\end{eqnarray}
where $\otimes_{in}$ denotes the interior tensor product
representations \cite{Wor5,W2}.
For other basic notions on compact quantum groups, we refer the reader
to \cite{Wor5,W1,W2}.
\section{
\label{UHF}
Ergodic actions of $A_u(Q)$
on the Powers factor $R_\lambda$ and the
Murray-von Neumann
factor $R$}
We construct in this section an ergodic action of the universal
quantum group $A_u(Q)$ on the type $\mathrm{III}_\lambda$
Powers factor $R_\lambda$ for a proper choice of $Q$ and an ergodic action
of $A_u(n)$ on the type $\mathrm{II}_1$ Murray-von Neumann factor $R$.
These are obtained as consequences of \thref{main1} below.
Recall \cite{W1,W5,W5'}
that for every non-singular $n \times n$ complex matrix $Q$ ($n > 1$
in the rest of this paper), the universal compact quantum group $(A_u(Q), u)$
is generated by $u_{ij}$ ($i,j = 1 , \cdots, n$) with defining relations
(with $u = ( u_{ij} )$):
\vspace{4mm}\\
$A_u(Q): \; \; \;
u^* u = I_n = u u^*, \; \; \;
u^t Q {\bar u} Q^{-1} = I_n = Q {\bar u } Q^{-1} u^t$;
\vspace{4mm}\\
There is also another related family of quantum groups $A_o(Q)$
\cite{W1,W5,W5',Banica1}:
\vspace{4mm}\\
$A_o(Q): \; \; \;
u^t Q u Q^{-1} = I_n = Q u Q^{-1} u^t, \; \; \;
{\bar u} = u$; (here $Q > 0$)
\vspace{4mm}\\
Part (1) of the next proposition gives a characterization of $A_u(Q)$
in terms of the functional $\phi_Q$ defined below.
\begin{PROP}
\label{traceQ}
Consider the adjoint action $Ad_u$ corresponding to the fundamental
representation $u$ of the quantum group $(A_u(Q), u)$.
(1). The quantum group $(A_u(Q), u)$ is the
largest compact matrix quantum group such that its
action $Ad_u$ on $M_n( {\Bbb C} )$ leaves invariant the functional
$\phi_Q$ defined by
\begin{eqnarray*}
\phi_Q (b) = Tr ( Q^t b), \; \; \; b \in M_n({\Bbb C}).
\end{eqnarray*}
(2). $Ad_u$ is an ergodic action if and only if
$Q = \lambda E$, where $\lambda$ is a nonzero scalar,
$E$ is the positive matrix $(1 \otimes h) u^t \bar{u}$ (cf \cite{W5}),
and $h$ is the Haar state of $A_u(Q)$.
\end{PROP}
\pf
(1). It is a straightforward calculation to verify that the action
$Ad_u$ of $A_u(Q)$ leaves the functional $\phi_Q$ invariant.
Assume that $(A, v)$ is a compact quantum group such that
$Ad_v$ leaves $\phi_Q$ invariant ($v = (v_{ij})_{i,j=1}^n$). Then the
$v_{ij}$'s satisfy the defining relations for $A_u(Q)$. Hence $(A, v)$ is a
quantum subgroup of $A_u(Q)$.
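For the reader's convenience, a sketch of the computation behind part (1):
since $Ad_u(e_{ij}) = \sum_{k,l} e_{kl} \otimes u_{ki} u^*_{lj}$
and $\phi_Q(e_{kl}) = Tr(Q^t e_{kl}) = Q_{kl}$, we have
\begin{eqnarray*}
(\phi_Q \otimes id) Ad_u (e_{ij})
= \sum_{k,l} Q_{kl} u_{ki} u^*_{lj}
= \sum_{k,l} (u^t)_{ik} Q_{kl} (\bar{u})_{lj}
= (u^t Q \bar{u})_{ij},
\end{eqnarray*}
so the invariance condition $(\phi_Q \otimes id) Ad_u (b) = \phi_Q(b) 1$ for
all $b \in M_n({\Bbb C})$ is equivalent to $u^t Q \bar{u} = Q$, i.e.
$u^t Q \bar{u} Q^{-1} = I_n$, which is part of the defining relations
of $A_u(Q)$.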
(2). A matrix $S$ is fixed by $Ad_u$ if and only if $S$ intertwines
the fundamental representation $u$ with itself. Hence the action $Ad_u$
is ergodic if and only if the fundamental representation $u$ is irreducible.
If $Q = \lambda E$, then $A_u(Q) = A_u(E)$.
Since $E$ is positive, $u$ is irreducible (cf. \cite{Banica2}).
On the other hand we have (see \cite{W5})
$$(u^t)^{-1} = E \bar{u} E^{-1},$$
where $E$ is defined as in the proposition.
We also have
$$(u^t)^{-1} = Q \bar{u} Q^{-1}.$$
Hence
$$ E \bar{u} E^{-1} = Q \bar{u} Q^{-1} \; \;
\mbox{and} \; \;
Q^{-1} E \bar{u} = \bar{u} Q^{-1} E .$$
If $u$ is irreducible, then so is $\bar{u}$, and therefore $Q^{-1} E$ is a scalar.
\QED
\vspace{4mm}\\
{\em Note.} The proof of the necessary condition in (2) above was pointed out
to us by Woronowicz; our original proof contained an error.
In general the invariant functional $\phi_Q$ defined above is not
a trace, even if the action $Ad_u$ is ergodic.
However, for ergodic actions of compact groups on operator
algebras, one has the following finiteness theorem of
H{\o}egh-Krohn-Landstad-St{\o}rmer \cite{HLS}:
\begin{TH}
\label{ergodicHLS}
If a von Neumann algebra admits
an ergodic action of a compact group $G$, then
(a). this von Neumann algebra is finite;
(b). the unique $G$-invariant state is
a trace on the von Neumann algebra.
\end{TH}
The proposition above shows that part (b) of this finiteness
theorem is no longer true for compact quantum groups in general.
We now show that part (a) of the above finiteness theorem
fails for compact quantum groups as well: not only can compact quantum
groups act on infinite von Neumann algebras,
they can even act on purely infinite factors (type III factors).
\begin{DF}
\label{compatiblesystem}
Let $(B_i, \pi_{ji})$ be an inductive system of $C^*$-algebras
($i, j \in I$). For each $i \in I$, let $\alpha_i$ be an action of
a compact quantum group $A$ on $B_i$. We say that the actions
$\alpha_i$ are a {\bf compatible system of actions} for
$(B_i, \pi_{ji})$ if for each pair $i \leq j$, the following
holds,
$$ (\pi_{ji} \otimes 1 ) \alpha_i = \alpha_j \pi_{ji}.$$
\end{DF}
The following lemmas will be used in the next theorem. Preserve the
notation in \dfref{compatiblesystem}. Let $\pi_i$ be
the natural embedding of $B_i$ into the inductive limit $B$ of the $B_i$'s.
\begin{LM}
Put for each $i \in I$
$$\alpha \pi_i (b_i) = (\pi_i \otimes 1) \alpha_i (b_i),
\; \; \; b_i \in B_i.$$
Then $\alpha$ induces a well defined action of the quantum group $A$ on $B$.
The action $\alpha$ is ergodic if and only if each $\alpha_i$ is.
Assume further that $\phi_i $ is an inductive system of states on
$B_i$ and that each $\phi_i$ is invariant under $\alpha_i$.
Then $\alpha$ leaves invariant the inductive limit state
$\tau = \lim \phi_i$.
\end{LM}
\pf
Let $j > i$, so $\pi_{ji}(b_i) \in B_j$.
Then by the formula of $\alpha$ given in the lemma, we have
\begin{eqnarray*}
\hspace{2cm}
\alpha \pi_j (\pi_{ji} (b_i) ) = (\pi_j \otimes 1) \alpha_j (\pi_{ji}(b_i)).
\hspace{2cm} (*)
\end{eqnarray*}
Since $ \pi_j \pi_{ji} = \pi_i$, the left hand side of the above is equal to
$$\alpha \pi_i (b_i) = (\pi_i \otimes 1) \alpha_i (b_i).$$
From the compatibility
condition we see that the right hand side of $(*)$ is equal to
\begin{eqnarray*}
(\pi_j \otimes 1) (\pi_{ji} \otimes 1 ) \alpha_i (b_i)
= (\pi_i \otimes 1) \alpha_i (b_i).
\end{eqnarray*}
This shows that $\alpha$ is well defined on the dense subalgebra
${\cal B} = \bigcup \pi_i ({\cal B}_i)$ of $B$, where ${\cal B}_i$
is the dense *-subalgebra of $B_i$ according to \dfref{qact}.
It is also clear that $\alpha$ is bounded and satisfies the conditions
of \dfref{qact}. Hence $\alpha$ induces a well defined action of
the quantum group $A$ on $B$.
Assume that each $\alpha_i$ is ergodic. It is clear that the action
$\alpha$ is ergodic on the dense *-subalgebra $\cal B$.
Hence $\alpha$ is ergodic on $B$ by \propref{denseB}.
Conversely, if $\alpha$ is ergodic, then the restrictions $\alpha_i$
of $\alpha$ to the $B_i$'s are clearly ergodic.
We now show that $\tau$ is invariant under $\alpha$.
Note that $\tau (\pi_i(b_i)) = \phi_i (b_i)$. From this we have
\begin{eqnarray*}
(\tau \otimes 1) \alpha (\pi_i(b_i)) &=&
(\tau \otimes 1) (\pi_i \otimes 1) \alpha_i(b_i) \\
&=& (\phi_i \otimes 1) \alpha_i(b_i) = \phi_i (b_i) \\
&=& \tau (\pi_i (b_i) ).
\end{eqnarray*}
By density of $\cal B$ in $B$, we have
$$(\tau \otimes 1) \alpha (b) = \tau (b), \; \; \; b \in B.$$
This completes the proof of the lemma.
\QED
\vspace{4mm}\\
{\em Note.} Not every action of a compact quantum group $A$
on an inductive limit of $C^*$-algebras arises from a compatible
system of actions of $A$.
\begin{LM}
Let $u_k$ be a unitary representation of a compact quantum group $A$
on $V_k$ for each natural number $k$. Assume that $Ad_{u_k}$ leaves
invariant a functional $\psi_k$ on $B(V_k)$. Then
the action $Ad_{u_1 \otimes_{in} \cdots \otimes_{in} u_k}$ leaves
the functional $\phi^k = \psi_1 \otimes \cdots \otimes \psi_k$ invariant.
\end{LM}
\pf Straightforward calculation.
\QED
\vspace{4mm}\\
Let $Q \in GL(n, {\Bbb C})$ be a positive matrix with trace $1$.
We now construct
a sequence of actions $\alpha_k$ of the compact quantum group
$(A_u(Q), u)$ on $M_n({\Bbb C})^{\otimes k}$. Denote by $u^k$ the $k$-fold
interior tensor product of the representation $u$, i.e.,
$$ u^k = u \otimes_{in} \cdots \otimes_{in} u,$$
see \cite{W2} for the definition of the interior tensor product $\otimes_{in}$.
Put
$$\alpha_k = Ad_{u^k}, \; \; \;
\phi_Q^k = \phi_Q^{\otimes k} = \phi_Q \otimes \cdots \otimes \phi_Q.$$
Let
$$B = \lim_{ k \rightarrow \infty} M_n({\Bbb C})^{\otimes k}, \; \; \;
\tau_Q = \lim_{ k \rightarrow \infty} \phi_Q^k,$$
$$ {\frak B} = \pi_{Q}(B)^{\prime \prime}, \; \; \;
{\frak A} = \pi_h (A_u(Q))^{\prime \prime},
$$
where $\pi_{Q}$ and $\pi_h$ are respectively the GNS-representations for
the positive functional $\tau_Q$ and the Haar state $h$ on $A_u(Q)$.
\begin{TH}
\label{main1}
The actions $\alpha_k$ ($ k = 1, 2, \cdots$) of $A_u(Q)$
form a compatible system of ergodic actions leaving the functionals
$\phi^k_Q$ invariant.
These actions give rise to a natural ergodic action on the UHF algebra $B$
leaving invariant the positive functional $\tau_Q$, which in turn
lifts to an ergodic action on the von Neumann algebra $\frak B$.
\end{TH}
\pf
It is straightforward to verify that the actions $\alpha_k$ are a
compatible system of actions. Since each $u^k$ is irreducible
(cf \cite{Banica2}), we see that the actions $\alpha_k$ are ergodic.
By \propref{traceQ}.(1), $\phi_Q$ is invariant under the action $Ad_u$.
Hence applying the lemmas above, we see that the functionals
$\phi^k_Q$ are invariant under the actions $\alpha_k$, and these actions
gives rise to an ergodic action of the quantum group $A$ on $B$ leaving
$\tau_Q$ invariant.
Applying \thref{inducedaction}, we see that
the action $\alpha$ on $B$ induces an ergodic action
$$\tilde{\alpha}: \; {\frak B} \longrightarrow {\frak B} \otimes {\frak A}$$
at the von Neumann algebra level defined by
$$\tilde{\alpha} (\pi_Q (b)) = (\pi_Q \otimes \pi_h) \alpha(b),$$
where $b \in B$.
\QED
\begin{COR}
\label{Powers}
Take
$$ Q =
\left(
\begin{array}{cc}
a & 0 \\
0 & 1-a
\end{array}
\right), \; \; a \in (0, 1/2).
$$ Then $\tau_Q$ is the Powers state, so
the quantum group $A_u(Q)$ acts ergodically on
the Powers factor $R_\lambda$ of type $\mathrm{III}_\lambda$,
where $\lambda = a /(1-a)$.
\end{COR}
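To see that $\tau_Q$ is indeed the Powers state (a short supplementary
verification): with the above $Q$, the functional $\phi_Q$ of
\propref{traceQ} is the state on $M_2({\Bbb C})$ with density matrix $Q$,
\begin{eqnarray*}
\phi_Q (b) = Tr (Q^t b) = a b_{11} + (1-a) b_{22},
\; \; \; b = (b_{ij}) \in M_2({\Bbb C}),
\end{eqnarray*}
and $\tau_Q = \lim_k \phi_Q^{\otimes k}$ is the corresponding infinite
product state. Writing $\lambda = a/(1-a)$, the eigenvalue list of each
factor state is $(\frac{\lambda}{1+\lambda}, \frac{1}{1+\lambda})$, which
is exactly the Powers state of parameter $\lambda$.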
\begin{COR}
\label{Murray-von Neumann}
(compare \cite{AWassermann3})
Take $ Q = I_n$.
Then $\tau_Q$ is the unique trace on the UHF algebra $B$ of type
$n^\infty$, so the quantum group $A_u(n)=A_u(I_n)$ acts ergodically on
the hyperfinite $\mathrm{II}_1$ factor $R$.
\end{COR}
We will see in \secref{Cuntz} that for an appropriate choice of $Q$, the
quantum group $A_u(Q)$ also acts on the injective factor $R_\infty$ of type
$\mathrm{III}_1$.
It would be interesting to know whether compact quantum groups admit
ergodic actions on factors of type $\mathrm{III}_0$ too.
\section{
\label{fixedpoint}
Fixed point subalgebras of quantum subgroups}
In this section, we show that although the actions of the universal quantum
groups $A_u(Q)$ constructed in the last section are ergodic, when restricted
to some of their non-trivial quantum subgroups,
we obtain interesting large fixed point algebras.
Let
$Q =
\left(
\begin{array}{cc}
a & 0 \\
0 & 1-a
\end{array}
\right)$, as in \corref{Powers}.
Put $q = \lambda^{1/2} = (a /(1-a))^{1/2}$.
Then from the definitions of $SU_q(2)$ and $A_u(Q)$,
we see that $SU_q(2)$ is a quantum subgroup of $A_u (Q)$.
By restriction, we obtain from the action of $A_u(Q)$ an action
of $SU_q(2)$ on $R_\lambda$. The fixed point subalgebra of $R_\lambda$
under the action of $SU_q(2)$ is generated by the Jones projections
$\{ 1, e_1, e_2, \cdots \}$. The restriction of the Powers state $\tau_Q$
to this fixed point algebra is a trace, and its values on the
Jones projections give the Jones polynomial. See the book of
Jones \cite{Jones2}.
Now take $Q = \frac{1}{n} I_n$. We have \corref{Murray-von Neumann}.
For simplicity of notation, let $\tau$ denote the trace $\tau_Q$ on the
UHF algebra $B$. There are two special quantum subgroups of $A_u(n)$:
$SU(n)$ and $A_o(Q) = A_o(n)$.
By 4.7.d. of \cite{GoodmanHarpeJones}, for any closed subgroup $G$ of
$SU(n)$, the fixed point algebra $R^G$ is a II$_1$ subfactor of
$R$. We now show that the same result holds for quantum subgroups
of $A_o(n)$. For this, it suffices to prove the following
\begin{PROP}
The fixed point subalgebra $R^{A_o(n)}$ of $R$ for the quantum subgroup
$A_o(n)$ of $A_u(n)$ is a $\mathrm{II}_1$ factor and the action
of $A_o(n)$ on $R$ is prime.
\end{PROP}
\pf
Put $\beta = n^2$. By \cite{Banica1}, the fixed point subalgebra
of $M_n({\Bbb C})^{\otimes k}$ for the action
$$\alpha_k = Ad_{u^k}$$
is generated by $1, e_1, \cdots, e_{k-1}$, where
$u$ is the fundamental representation of the quantum group $A_o(n)$,
$$e_s = I_{H^{\otimes(s-1)}} \otimes \sum_{i,j} \frac{1}{n} e_{ij} \otimes
e_{ij} \otimes I_{H^{\otimes (k-s-1)}},$$
and $H = {\Bbb C}^n$. The $e_s$'s satisfy the relations:
(i). $e_s^2 = e_s = e^*_s$;
(ii). $e_s e_t = e_t e_s$, $1 \leq s, t \leq k-1$, $|s-t| \geq 2$;
(iii). $\beta e_s e_t e_s = e_s$, $1 \leq s , t \leq k-1$, $|s-t| = 1$.
We now show that the restriction of $\tau$ on the fixed point subalgebra
of $M_n({\Bbb C})^{\otimes k}$ satisfies the Markov
trace condition of modulus $\beta$, where $\tau$ is the
trace on $R$. Namely, we will verify the identity
$$\tau ( w e_{k-1} ) = \frac{1}{\beta} \tau (w)$$
for $w$ in the subalgebra of $M_n({\Bbb C})^{\otimes k}$ generated
by $1, e_1, \cdots, e_{k-2}$. By Theorem 4.1.1 and Corollary 2.2.4 of
Jones \cite{Jones1}, this will complete the proof of the proposition.
It will also follow that the action of $A_o(n)$ on $R$ is prime.
To verify this, it suffices by Proposition 2.8.1 of \cite{GoodmanHarpeJones}
to check the Markov trace condition for $w$ of the form
$$ w = (e_{i_1} e_{i_1-1} \cdots e_{j_1})
(e_{i_2} e_{i_2-1} \cdots e_{j_2}) \cdots
(e_{i_p} e_{i_p-1} \cdots e_{j_p}),$$
where
\begin{eqnarray*}
& & 1 \leq i_1 < i_2 < \cdots < i_p \leq k-2, \\
& & 1 \leq j_1 < j_2 < \cdots < j_p \leq k-2, \\
& & i_1 \geq j_1, \; i_2 \geq j_2, \; \cdots, \; i_p \geq j_p, \\
& & 0 \leq p \leq k-2.
\end{eqnarray*}
If $i_p < k-2$ then it is easy to see that
$$\tau (w e_{k-1}) = \tau (w) \tau(e_{k-1}) = \frac{1}{\beta} \tau (w),$$
noting that $\tau(e_{k-1}) = \frac{1}{\beta}$.
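The value $\tau(e_{k-1}) = \frac{1}{\beta}$ itself follows from the
multiplicativity of the normalized trace over tensor factors (a routine
supplementary check, using $\tau(e_{ij}) = \delta_{ij}/n$ on each factor):
\begin{eqnarray*}
\tau(e_{k-1}) = \frac{1}{n} \sum_{i,j} \tau(e_{ij}) \tau(e_{ij})
= \frac{1}{n} \sum_{i,j} \frac{\delta_{ij}}{n} \cdot \frac{\delta_{ij}}{n}
= \frac{1}{n} \cdot n \cdot \frac{1}{n^2}
= \frac{1}{n^2} = \frac{1}{\beta}.
\end{eqnarray*}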
Hence we can assume $i_p = k-2$. Let $l_w$ be the length of the word $w$.
Then $w$ takes the form
$$w = \sum (\frac{1}{n})^{l_w} ( \cdots ) \otimes e_{ab} \otimes 1,$$
where the summation is over the indices $a, b$ and some other indices
that need not be specified, and the terms in $( \cdots )$ are certain
elements of $M_n({\Bbb C})^{\otimes (k-2)}$ that need not be
specified either (the components in the tensor product of the terms in
$(\cdots)$ are products of $e_{ij}$'s). We then have
\begin{eqnarray*}
\tau (w e_{k-1}) &=& (\frac{1}{n})^{l_w + 1}
\sum_{x,y} \sum \tau ((\cdots) \otimes e_{ab} e_{xy} \otimes e_{xy}) \\
&=& (\frac{1}{n})^{l_w + 1}
\sum_{x,y} \sum \tau ((\cdots) \otimes e_{ab} e_{xy} \otimes 1 )
\tau (I_{H^{\otimes (k-1)}} \otimes e_{xy}) \\
&=& (\frac{1}{n})^{l_w + 1}
\tau (\sum_{x}
\sum ((\cdots) \otimes e_{ab} e_{xx} \otimes 1 ) \frac{1}{n} ) \\
&=& \frac{1}{\beta} \tau (w).
\end{eqnarray*}
The proof is complete.
\QED
\vspace{4mm}\\
{\em Remarks.}
(1).
In view of the above result, fixed point algebras of quantum subgroups
of $A_o(n)$ give examples of subfactors.
Therefore, it would be interesting to classify finite quantum subgroups
of $A_o(n)$ and study them in the light of Jones' theory,
see \cite{GoodmanHarpeJones} for this in the case of the Lie group $SU(2)$.
Note that the quantum group $A_o(n)$ contains the quantum permutation
group $A_{aut}(X_n)$ of the $n$-point space $X_n$ (see \cite{W15})
and many other interesting quantum subgroups (see \cite{W1}).
It would also be interesting to determine the fixed
point subalgebras of the quantum subgroups of $SU_{-1}(n)$
($SU_{-1}(n)$ is a quantum subgroup of $A_u(n)$
because its antipode has period 2 \cite{W1,W5}).
We refer the reader to Banica \cite{Banica5} for some interesting
related results.
(2).
Note that since
\begin{eqnarray*}
Ad_{v \otimes_{in} w} \neq Ad_v \otimes_{in} Ad_w,
\end{eqnarray*}
for unitary representations $v$ and $w$ of $A_o(n)$,
we do not have a commuting square like the one on p.~222 of
\cite{GoodmanHarpeJones} for a given quantum subgroup $G$ of $A_o(n)$.
\section{
\label{Cuntz}
Ergodic actions of $A_u(Q)$ on the Cuntz algebra and the injective factor of
type $\mathrm{III}_1$}
The Cuntz algebra ${\cal O}_n$ is an infinite simple
$C^*$-algebra without trace, hence by \cite{HLS}
it does not admit an ergodic action of a compact group.
Recall that the Cuntz algebra ${\cal O}_n$ is the simple $C^*$-algebra
generated by $n$ isometries $S_k$ ($k=1, \cdots, n$) such that
\begin{eqnarray}
\sum_{k=1}^n S_k S_k^* = 1.
\end{eqnarray}
Just as $U(n)$ does, the compact matrix quantum group $A_u(Q)$ acts
on ${\cal O}_n$ in a natural manner \cite{DopRob6,Dop1,KMW},
where $Q$ is a positive matrix of trace $1$ in $GL(n, {\Bbb C})$:
\begin{eqnarray}
\alpha(S_j) = \sum_{i = 1}^n S_i \otimes u_{ij},
\end{eqnarray}
the dense *-algebra ${\cal B}$ of \dfref{qact} being
the *-subalgebra $^0{\cal O}_n$ of ${\cal O}_n$
generated by the $S_i$'s, see Doplicher-Roberts \cite{DopRob5}.
However, unlike the actions of compact groups on ${\cal O}_n$, we have
\begin{TH}
\label{main2}
The above action $\alpha$ of the quantum group $A_u(Q)$ on
${\cal O}_n$ is ergodic, and the unique $\alpha$-invariant state
on ${\cal O}_n$ is the quasi-free state $\omega_Q$ associated with
$Q$ \cite{Evans1}.
\end{TH}
\pf
Let $H$ be the Hilbert subspace of ${\cal O}_n$ linearly spanned by
the $S_k$'s. Let $(H^s, H^r)$ be the linear span of elements of the form
$S_{i_1} S_{i_2} \cdots S_{i_r} S^*_{j_s} \cdots S^*_{j_2} S^*_{j_1}$.
Then $ ^0{\cal O}_n $ is the linear span of
all the spaces $(H^s, H^r)$, $r, s \geq 0$ (see \cite{DopRob5}).
Observe that each of the spaces $(H^s, H^r)$ is invariant
under the action $\alpha$:
$$
\alpha(S_{i_1} S_{i_2} \cdots S_{i_r} S^*_{j_s} \cdots S^*_{j_2} S^*_{j_1})
= $$
$$
\sum_{k_1, \cdots, k_r, l_1, \cdots, l_s = 1 }^n
S_{k_1} S_{k_2} \cdots S_{k_r} S^*_{l_s} \cdots S^*_{l_2} S^*_{l_1}
\otimes u_{k_1 i_1} u_{k_2 i_2} \cdots u_{k_r i_r} u^*_{l_s j_s}
\cdots u^*_{l_2 j_2} u^*_{l_1 j_1}.
$$
Hence $(id \otimes h) \alpha((H^s, H^r))$ is the space of
the fixed elements of $(H^s, H^r)$ under $\alpha$, where $h$ is the
Haar state on $A_u(Q)$. For $r \neq s$, the tensor product representations
$u^{\otimes r}$ and $u^{\otimes s}$ of the fundamental representation
$u$ of the quantum group $A_u(Q)$ are {\em inequivalent and irreducible}
\cite{Banica2}.
Hence by Theorem 5.7 of Woronowicz \cite{Wor5}, for $r \neq s$,
\begin{eqnarray}
h(u_{k_1 i_1} u_{k_2 i_2} \cdots u_{k_r i_r} u^*_{l_s j_s}
\cdots u^*_{l_2 j_2} u^*_{l_1 j_1}) = 0,
\end{eqnarray}
and therefore $(H^s, H^r)$ has no fixed point other than $0$.
For $r = s$, identifying the elements
$$S_{i_1} S_{i_2} \cdots S_{i_r} S^*_{j_r} \cdots S^*_{j_2} S^*_{j_1}$$
of $(H^r, H^r)$ with the matrix units
$$e_{i_1 j_1} \otimes e_{i_2 j_2} \otimes
\cdots \otimes e_{i_r j_r}$$
of $M_n({\Bbb C})^{\otimes r}$, the action $\alpha$ on $(H^r, H^r)$
is identified with the action $\alpha_r$ on
$M_n({\Bbb C})^{\otimes r}$ of \thref{main1}. Hence the fixed elements
of $(H^r, H^r)$ under $\alpha$ are the scalars.
Consequently, the fixed elements of $^0{\cal O}_n$
under $\alpha$ are the scalars. By
\propref{denseB}, $\alpha$ is ergodic on ${\cal O}_n$.
Let $\phi$ be the (unique) $\alpha$-invariant state
on ${\cal O}_n$. Then for $x \in (H^s, H^r)$ with
$r \neq s$, $r, s \geq 0$, we have
\begin{eqnarray*}
\phi(x) = h((\phi \otimes 1) \alpha(x))
= \phi ((1 \otimes h) \alpha(x)).
\end{eqnarray*}
But
$ (1 \otimes h) \alpha(x) = 0$
according to the computation above.
Hence $\phi(x) = 0$.
From the consideration of the last paragraph,
$\alpha$ restricts to an ergodic action on the subalgebra
$(H^k, H^k)$ of ${\cal O}_n$. Identifying $(H^k, H^k)$
with $M_n({\Bbb C})^{\otimes k}$ as above, we see that
$$\phi(x) = \phi_Q^k(x), \; \; \; x \in (H^k, H^k),$$
where $\phi_Q^k$ is the functional in \thref{main1}.
This shows that $\phi$ is the quasi-free state $\omega_Q$
associated with $Q$ (cf \cite{Evans1}).
\QED
\vspace{4mm}\\
We can assume that $Q = diag(q_1, q_2, \cdots, q_n)$
is a diagonal positive matrix with trace 1, since
$A_u(Q)$ and $A_u(VQV^{-1})$ are similar to each other \cite{W5}.
Let $\beta$ be a positive number. Define numbers $\omega_1,
\omega_2, \cdots, \omega_n$ by
\begin{eqnarray}
diag(e^{-\beta \omega_1}, e^{-\beta \omega_2}, \cdots, e^{-\beta \omega_n})
= diag(q_1, q_2, \cdots, q_n).
\end{eqnarray}
Let $\pi_Q$ be the GNS representation of the $\alpha$-invariant
state $\omega_Q$ of ${\cal O}_n$.
Then by Theorem 4.7 of Izumi \cite{Izumi1} and \thref{inducedaction}, we have
\begin{COR}
If $\omega_1/\omega_k$ is irrational for some $k$, then the compact
quantum group $A_u(Q)$ acts ergodically on the injective
factor $\pi_Q ({\cal O}_n)^{\prime \prime}$ of type $\mathrm{III}_1$.
\end{COR}
\noindent
{\em Remarks.}
(1).
The big quantum semi-group $U_{nc}(n)$ of Brown also acts on ${\cal O}_n$
in the same way as $A_u(Q)$ on ${\cal O}_n$ above. See Brown \cite{Brown1} and
4.1 of Wang \cite{W1} for the quantum semi-group structure on $U_{nc}(n)$.
(2).
If the $\omega_1/\omega_k$'s are rational for all $k$, then
$\pi_Q ({\cal O}_n)^{\prime \prime}$ is an
injective factor of type $\mathrm{III}_\lambda$,
on which $A_u(Q)$ acts ergodically,
where $\lambda$ is determined from an equation involving $q_1, \cdots, q_n$
(see \cite{Izumi1}). In particular, taking $A_u(Q)=A_u(n)$, we see that even
the compact matrix quantum group $A_u(n)$ of Kac type admits ergodic
actions on both the infinite $C^*$-algebra ${\cal O}_n$ and the
injective factor $\pi_Q({\cal O}_n)^{\prime \prime}$ of type
III$_{\frac{1}{n}}$. In view of \corref{Murray-von Neumann},
it would be interesting to solve the following problem:
\vspace{4mm}\\
{\bf Problem:}
Does a compact matrix quantum group of non-Kac type admit an ergodic action
on the hyperfinite $\mathrm{II}_1$ factor $R$?
\section{
\label{quotient}
Ergodic actions on quotient spaces}
In this section, we study ergodic actions of compact quantum groups on
their quantum quotient spaces. We also give an example to show that,
contrary to the classical situation,
not all ergodic actions arise in this way.
Fix a quantum subgroup $H$ of a compact quantum group $G$, which is given
by a surjective morphism $\theta$ of Woronowicz Hopf $C^*$-algebras
from $C(G)$ to $C(H)$. Let $h_H$ and $h_G$ be respectively the Haar
states on $C(H)$ and $C(G)$. Then there is a natural action $\beta$ of the
quantum group $H$ on $G$ given by
\begin{eqnarray}
\beta: C(G) \longrightarrow C(H) \otimes C(G), \; \; \;
\beta = (\theta \otimes 1) \Phi_G,
\end{eqnarray}
where $\Phi_G$ is the coproduct on $C(G)$. The
quotient space $H \backslash G$ is defined by
the fixed point algebra of $\beta$ (cf \cite{Pod6}):
\begin{eqnarray}
C(H \backslash G) = C(G)^\beta =
\{ a \in C(G): (\theta \otimes 1) \Phi_G (a)
= 1 \otimes a \} .
\end{eqnarray}
The restriction of $\Phi_G$ to $C(H \backslash G)$ defines a natural
action $\alpha$ of $G$ on $C(H \backslash G)$:
\begin{eqnarray}
\alpha = \Phi_G |_{C(H \backslash G)} : C(H \backslash G)
\longrightarrow C(H \backslash G) \otimes C(G).
\end{eqnarray}
The dense *-subalgebras of \dfref{qact} for the actions $\beta$
and $\alpha$ are the natural ones.
Note that $E = (h_H \otimes 1) \beta = (h_H \theta \otimes 1) \Phi_G $
is a projection of norm
one from $C(G)$ to $C(H \backslash G)$ (cf \propref{denseB} and \cite{Pod6}).
\begin{PROP}
\label{quotients}
In the situation as above, we have
(1). the action $\alpha$ of $G$ on $C( H \backslash G)$ is ergodic;
(2). $C(H \backslash G)$ has a unique $\alpha$
invariant state $\omega$ satisfying
\begin{eqnarray}
h_G(a) = \omega ((h_H \theta \otimes 1) \Phi_G (a)), \; \; \; a \in C(G).
\end{eqnarray}
Namely, $\omega$ is the restriction of $h_G$ on $C(H \backslash G)$.
\end{PROP}
{\em Note.} Part (2) of the proposition above is the analogue of the
following well known integration formula in the classical situation:
$$\int_G a(g) dg = \int_{H \backslash G} \int_H a(hg) dh d \omega (g) ,
\; \; \; a \in C(G).$$
\pf
(1). Let $a \in C(H \backslash G)$ be fixed under $\alpha$, i.e.,
$$ \hspace{2cm} \alpha (a) = a \otimes 1. \hspace{2cm} (**)$$
Since $\alpha(a) = \Phi_G (a)$ and since (by the definition of
$C( H \backslash G)$)
$$(\theta \otimes 1) \Phi_G (a) = 1 \otimes a ,$$
it follows that
$$( \theta \otimes 1 ) \alpha (a) = 1 \otimes a.$$
Using $(**)$ for the left hand side of the above, we get
$$\theta(a) \otimes 1 = 1 \otimes a.$$
This is possible only for $a = \lambda \cdot 1$ for some scalar $\lambda$.
(2). The general result of the existence and uniqueness of the
invariant state for an ergodic action is proven in \cite{Boca1}.
For the special situation we consider here, we now not only prove the
existence and uniqueness of the invariant state, but also give the precise
formula of the invariant state.
Let $\omega$ be the restriction of $h_G$ on the subalgebra
$C(H \backslash G)$ of $C(G)$. Since $(h_H \theta \otimes 1) \Phi_G$
is a projection from $C(G)$ onto $C(H \backslash G)$ and $\alpha$ is the
restriction of $\Phi_G$ on $C(H \backslash G)$, the invariance of $\omega$ for
the action $\alpha$ follows from the invariance of the Haar state $h_G$.
Conversely, let $\mu$ be any invariant state on $C(H \backslash G)$.
Using again the fact that $(h_H \theta \otimes 1) \Phi_G$
is a projection from $C(G)$ onto $C(H \backslash G)$,
a standard calculation shows that the functional
$$\phi(a) = \mu ((h_H \theta \otimes 1) \Phi_G (a)), \; \; \; a \in C(G)$$
is a right invariant state, i.e.
$$\phi * \psi (a) = \phi(a), \; \; \; a \in C(G),$$
where $\psi$ is an arbitrary state on $C(G)$ and
$\phi * \psi = (\phi \otimes \psi) \Phi_G$
is the convolution operation (cf \cite{Wor5}).
From the uniqueness of the Haar state, it follows that
$$\phi = h_G, \; \; \; \mu = \omega = h_G|_{C(H \backslash G)}.$$
\QED
\vspace{4mm}\\
{\em Remarks.}
(1).
Note that the quantum groups $A_u(Q)$, $A_o(Q)$ and $B_u(Q)$ have many
quantum subgroups. In the light of \propref{quotients} and
\thref{inducedaction}, it would be interesting to study
the corresponding operator algebras and the actions on them.
We leave this to a separate work.
(2).
Going beyond the considerations in \propref{quotients}, if two quantum
groups admit commuting actions on a noncommutative space, then
they act on each other's orbit spaces (not necessarily in an ergodic manner),
just as in the classical situation.
Note that the notion of orbit space corresponds
to fixed point algebra in the noncommutative situation.
\vspace{4mm}\\
{\em An Example.}
Every transitive action of a compact group $G$ on a
topological space $X$ is isomorphic to the natural action of $G$ on
$H \backslash G$, where $H$ is the closed subgroup of $G$ that fixes
some point of $X$. However, this is no longer true for quantum groups,
even if the space on which the quantum group acts is a classical one.
To see this, let $X_n = \{ x_1, \cdots, x_n \}$ be the space with $n$
points. By Theorem 3.1 of \cite{W15}, the quantum automorphism group
$A_{aut}(X_4)$ of $X_4$ contains the ordinary permutation group $S_4$,
hence it acts ergodically on $X_4$. The quantum subgroup of $A_{aut}(X_4)$
that fixes a point, say $x_1$, is isomorphic to $A_{aut}(X_3)$, which is the
same as $C(S_3)$, a (commutative) algebra of dimension $6$.
From \cite{W15}, we know that as a $C^*$-algebra, $A_{aut}(X_n)$
is the same as $C(S_n)$ for $n \leq 3$ and it has
$C^*({\Bbb Z} / 2{\Bbb Z} * {\Bbb Z} / 2{\Bbb Z})$ as a
quotient for $n \geq 4$, where ${\Bbb Z} / 2{\Bbb Z} * {\Bbb Z} / 2{\Bbb Z}$
is the free product of the two-element group
${\Bbb Z} / 2{\Bbb Z}$ with itself, because the entries of the matrix
\begin{eqnarray*}
\left(
\begin{array}{cccc}
p & 1-p & 0 & 0 \\
1-p & p & 0 & 0 \\
0 & 0 & q & 1-q \\
0 & 0 & 1-q & q
\end{array}
\right)
\end{eqnarray*}
satisfy the commutation relations of the algebra $A_{aut}(X_4)$,
where $p, q$ are the projections generating the $C^*$-algebra
$C^*({\Bbb Z} / 2{\Bbb Z} * {\Bbb Z} / 2{\Bbb Z})$: $p=(1-u)/2$ and
$q=(1-v)/2$, $u$ and $v$ being the unitary generators of the
first and second copies of ${\Bbb Z}/2{\Bbb Z}$ in the free product
(cf \cite{RaeburnSinclair}).
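To spell this out, write $u$ for the matrix above. By the defining
relations of $A_{aut}(X_4)$ (see \cite{W15}), the entries of the
fundamental matrix are projections whose rows and columns each sum to $1$:
$$u_{ij}=u_{ij}^{*}=u_{ij}^2, \; \; \;
\sum_{j=1}^4 u_{ij}=1=\sum_{i=1}^4 u_{ij},
\; \; \; 1 \leq i,j \leq 4.$$
These conditions hold for $u$ because $p$ and $q$ are projections and
every row and column of $u$ contains exactly one pair $p, 1-p$ or
$q, 1-q$. Since $p$ and $q$ generate
$C^{*}({\Bbb Z} / 2{\Bbb Z} * {\Bbb Z} / 2{\Bbb Z})$, the matrix $u$
determines a surjection from $A_{aut}(X_4)$ onto this algebra.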
For simplicity of notation, let $C(G) = A_{aut}(X_4)$,
and let ${\cal M}$ be the canonical dense subalgebra of $C(G)$
generated by the coefficients of the fundamental representation
of $G$ (see \cite{Wor5}). Let $H=S_3$, the subgroup of $G$ that
fixes $x_1$, and let ${\cal H} = C(H)$. Let
$\theta$ be the surjection from $C(G)$ to $C(H)$ that embeds
$H$ as a subgroup of $G$ (cf. \cite{W15}).
Let $\beta$ be the action defined in the beginning of this section.
We claim that the coset space $H \backslash G$ is not isomorphic to $X_4$ as
a $G$-space (see Sect. 2 of \cite{W15} for the notion of morphism).
Namely, we have
\begin{PROP}
The $G$-algebras $C(H \backslash G)$
(which is defined to be $C(G)^\beta$) and $C(X_4)$ are not isomorphic to
each other.
\end{PROP}
\pf
Since $C(X_4)$ has dimension $4$,
it suffices to show that $C(H \backslash G)$ is infinite dimensional.
We make ${\cal M}$ into a
Hopf ${\cal H}$-module (i.e. a compatible system of a left ${\cal H}$
comodule and a left ${\cal H}$ module) as follows.
The restriction of $\beta$ to ${\cal M}$ clearly defines a
left ${\cal H}$ comodule structure:
\begin{eqnarray}
\beta: {\cal M} \longrightarrow {\cal H} \otimes {\cal M}.
\end{eqnarray}
The left ${\cal H}$ module structure on ${\cal M}$ is the trivial one
defined by
\begin{eqnarray}
& & {\cal H} \otimes {\cal M} \longrightarrow {\cal M}, \\
& & h \cdot m = \epsilon(h)m, \; \; \; h \in {\cal H}, \; m \in {\cal M}.
\end{eqnarray}
By Theorem 4.1.1 of Sweedler \cite{Sweedler}, we have
an isomorphism of left ${\cal H}$ modules
\begin{eqnarray}
& & {\cal H} \otimes {\cal M}^\beta \cong {\cal M}.
\end{eqnarray}
That is,
\begin{eqnarray}
& & {\cal H} \otimes {\cal A}(H \backslash G) \cong {\cal M} \\
& & h \otimes m' \mapsto h \cdot m',
\; \; \; h \in {\cal H}, \; m' \in {\cal A}(H \backslash G),
\end{eqnarray}
where ${\cal A}(H \backslash G)= {\cal M}^\beta$ is the canonical dense
subalgebra of $C(H \backslash G)=C(G)^\beta$. Since ${\cal M}$ is infinite
dimensional and ${\cal H}$ is finite dimensional, ${\cal A}(H \backslash G)$
and therefore $C(H \backslash G)$ are also infinite dimensional.
\QED
\vspace{4mm}\\
{\bf Acknowledgement.}
The author is indebted to Marc A. Rieffel for continual support.
Part of this paper was written while the author was a
member of the IHES from July 1995 to August 1996. He
thanks the IHES for its financial support and hospitality during this period.
The author also wishes to thank the Department of
Mathematics at UC-Berkeley for its support and hospitality
while he held an NSF Postdoctoral Fellowship there
during the final stage of this paper.
\section{Introduction}
Superdeformed rotational bands were first identified \cite{Twi86} from
transition energies only. At an early stage, crude measurements of
(electric) quadrupole moments were performed \cite{Ben87} which were
an important evidence for the large deformation. Recently, it has
become possible to measure relative quadrupole moments with a much
higher precision \cite{Sav96,Nis97,Hac98}. Since the total quadrupole
moment depends sensitively on the specific orbitals that are occupied,
these quadrupole moments have become an important tool to verify
configurations of different bands. It is the aim of the present paper
to study in detail how the quadrupole moment depends on occupied
single-particle states in superdeformed nuclei.
Adding a particle with a single-particle quadrupole moment, $q_{\nu}$,
to a core causes a change in deformation due to polarization effects.
The size of the deformation change depends on the relative
deformations, or quadrupole moments, of the core and the added
particle. The deformation change can be translated to a change in the
electric quadrupole moment and we shall refer to this change as the
effective quadrupole moment $q_{eff}$. We will consider two different
ways to calculate $q_{eff}$. One (microscopic) way is to calculate the
total quadrupole moment from a sum of single-particle quadrupole
moments at the minimum-energy deformations for the configurations
before and after the particle is added. The other (macroscopic) way is
to calculate the quadrupole moment from a homogeneously charged body
with the appropriate deformation and volume. The change of the
electric quadrupole moment when one particle/hole is added then
constitutes the effective electric quadrupole moment.
Simple models \cite{Mot58} suggest that a near-spherical $Z=N$ nucleus
changes its microscopic electric quadrupole moment by about
$q_{\nu}/2$ due to polarization, and by an additional $q_{\nu}$ if the
added particle is a proton. Generalizing to a deformed nucleus we
shall show that, in the pure harmonic oscillator (HO) model, $q_{eff}$
can be written as
\begin{equation}
\left( q_{eff} \right)_{p,n}=e \left(b_{p,n}(\varepsilon) \cdot q_{\nu}
+ a_{p,n}(\varepsilon) \right).
\end{equation}
The parameters $b_{p,n}$ and $a_{p,n}$ depend on deformation,
$\varepsilon$, and are different for protons $(p)$ and neutrons $(n)$.
The relation (1) will be derived in section~2 and, for the microscopic
as well as the macroscopic methods, analytic expressions will be
given for $b_{p,n}$ and $a_{p,n}$ at superdeformation but also at
other closed-shell configurations corresponding to different
deformations.
When comparing experimental and calculated quadrupole moments, the
modified oscillator (MO) has often been used. In refs.\
\cite{Sav96,Nis97}, the deformation for different superdeformed
configurations was calculated using the Nilsson-Strutinsky cranking
method and the quadrupole moment was then obtained in the
macroscopic way. The agreement between experiment and theory was
generally found to be quite good. Furthermore, these quadrupole
moments come close to those obtained in relativistic mean field
calculations \cite{Afa96}.
In section~3 the quadrupole moment is studied using the MO potential. The
polarization effects from adding one particle (or hole) to the yrast
superdeformed $^{152}$Dy configuration are studied in subsection 3.1
utilizing the macroscopic approach to calculate the quadrupole
moment. The corresponding results from the microscopic approach are
presented in subsection~3.3. In both models we obtain simple relations
corresponding to eq.~(1), but with some modifications. The possible
reasons for these modifications (cranking, hexadecapole deformation,
Strutinsky renormalization, the $l^2$-term or the $\vec{l}\cdot
\vec{s}$-term in the potential) are analysed.
In the models used here, superdeformed bands are understood from
single-particle motion in a rotating deformed potential. Then, it
seems natural to ask if physical quantities can be described by adding
effective contributions from particles in different orbitals. The
relations found in ref.~\cite{Ben88} between high-$N_{osc}$
configurations in superdeformed bands and the ${\cal J}^{(2)}$ moments
of inertia were based on the additivity of single-particle ${\cal
J}^{(2)}$ contributions. An attempt to test the additivity of
experimental ${\cal J}^{(2)}$ moment of inertia was performed in
ref.~\cite{Del89}. For the single-particle angular momentum,
additivity of alignment was first tried in ref.~\cite{Rag91}, then
further tested in refs.~\cite{Rag93,preprint} and found to work
well. Similarly, it was concluded that specific orbitals lead to
well-defined deformation changes corresponding to an additivity for
deformations \cite{Rag93}.
Deformations can be translated into quadrupole moments, whose
additivity was tested in selfconsistent Skyrme-Hartree-Fock
calculations by Satu{\l}a {\it et al}.\ \cite{Sat96}. They extracted
effective quadrupole moments from least square fits over a large
number of configurations in the region of nuclei with $Z=64-67,
N=84-87$, and found that the quadrupole moments of these
configurations could be well described by summing the contributions
from the orbitals involved. In the present paper, somewhat similar
studies are described in subsections~3.2 and 3.4 using the more
simplistic cranked Nilsson-Strutinsky approach. However, our studies
cover the whole $A=152$ superdeformed region from $^{142}$Sm to
$^{152}$Dy where a limited number of low-lying configurations are
investigated in detail. Furthermore, comparisons are made between
extrapolations based on calculated effective and bare single-particle
quadrupole moments, respectively. Indeed, based on our analytical
calculations in the HO, we obtain a microscopic understanding of why
additivity works so well also in realistic nuclear models. In
subsection~3.5 we compare with experimental data. Finally, a short
summary is given in section~4.
\section{Quadrupole moments in the pure oscillator.}
The polarization effect on a deformed core by one particle is studied
in the pure HO potential. There exist two equivalent methods
to calculate this effect (see e.g.\ p.~510 in ref.~\cite{BM2}):
{\em Either} as a renormalization of the quadrupole operator due to the
coupling of the single-particle excitations to the
(isoscalar) giant quadrupole resonance (treated in RPA),
{\em or} by considering the new, self-consistent deformation that results
when one particle is added to a core. In the present paper, only the latter
method is used.
In subsection~2.1 some useful definitions are given, and an effort is
made to calculate the polarization using double-stretched coordinates.
In subsection~2.2 non-stretched coordinates are used and
explicit expressions are derived for the polarization effect on an
axially symmetric $Z=N$ core. In subsection~2.3 the formulae are
generalized to a core with $Z \neq N$.
\subsection{Double-stretched quadrupole moments for one kind of particles}
Consider a HO potential
\begin{equation}
V_{osc}=\frac 12m\left( \omega _x^2x^2+\omega _y^2y^2+\omega _z^2z^2\right).
\end{equation}
A specific orbital $\left| \nu \right\rangle =\left| n_xn_yn_z\right\rangle$
is described by the number of quanta $n_i$ in the three Cartesian
directions. The single-particle energies are given by $e_\nu $ and
the single-particle mass quadrupole moments $q_\nu $ are calculated as
\begin{equation}
q_\nu =\langle \nu \mid 2z^2-x^2-y^2\mid \nu \rangle .
\end{equation}
In the potential, the equipotential surfaces are ellipsoidal with the axes
proportional to $1/\omega _i$.
We will only consider axially symmetric solutions corresponding to $\omega
_x=\omega _y=\omega _{\bot }$. Elongation is then described by the standard
parameter $\varepsilon $ \cite{Nil55},
\begin{eqnarray}
\omega _z &=&\omega _0\left( 1-\frac{2\varepsilon }3\right) \nonumber \\
\omega _x &=&\omega _y=\omega _0\left( 1+\frac \varepsilon 3\right).
\end{eqnarray}
Volume conservation corresponds to
\begin{equation}
\omega _x\omega _y\omega _z=\left( \stackrel{o}\omega _{0}\right) ^3 ,
\end{equation}
where the parameter $\stackrel{o}\omega _{0}$ is determined from the
radius of the core. We now transform the physical coordinates
$\left(x,y,z\right) $ to dimensionless coordinates and furthermore
introduce the stretched coordinate system \cite{Nil55,Lar72},
\begin{eqnarray}
x^{\prime } &=&\xi =\sqrt{\frac{m\omega _x}\hbar }x, \nonumber \\
y^{\prime } &=&\eta =\sqrt{\frac{m\omega _y}\hbar }y,
\label{str} \\
z^{\prime } &=&\zeta =\sqrt{\frac{m\omega _z}\hbar }z, \nonumber
\end{eqnarray}
corresponding to the system where the eigensolutions separate in
the three Cartesian directions. The single-particle quadrupole moment
becomes
\begin{equation}
q_\nu =\frac{\hbar }{m\stackrel{o}\omega _{0}}\cdot
\frac{\stackrel{o}\omega _{0}}{\omega _0}\left(
\frac{2\left( n_z+1/2\right) }{1-2\varepsilon /3}-
\frac{n_x+1/2}{1+\varepsilon /3}-
\frac{n_y+1/2}{1+\varepsilon /3}\right).
\end{equation}
For a system of $Z$ particles (protons) in the potential, we calculate
the total energy $E$ as the sum of the single-particle energies under the
constraint of volume conservation. The energy can be written as
\begin{equation}
E=\hbar\omega _x\Sigma_x+\hbar\omega _y\Sigma_y+\hbar\omega _z\Sigma_z,
\end{equation}
where the $\Sigma$'s measure the total number of quanta in the
different directions,
\begin{equation}
\Sigma _i=\sum_{\nu =1}^{Z} \left( n_i+\frac 12\right) _\nu \quad ;
\quad i=x,y,z.
\label{Sigmai}
\end{equation}
The selfconsistent deformation is obtained by
minimizing the total energy in the ($\varepsilon,\gamma$) deformation space.
The energy minimization is equivalent to a self-consistency between the
potential and the matter distribution \cite{BM2}, namely that the ratio
$\left\langle x^2\right\rangle :$
$\left\langle y^2\right\rangle :$
$\left\langle z^2\right\rangle $ is the same for the two distributions.
This can be expressed as
\begin{equation}
\Sigma _x\omega _x=\Sigma _y\omega _y=\Sigma _z\omega _z.
\end{equation}
From now on we will assume axial symmetry corresponding to an equal
number of quanta in the two perpendicular directions,
\begin{equation}
\Sigma _x = \Sigma_y \equiv \frac{1}{2}\Sigma_{\perp}.
\label{Sigma}
\end{equation}
The axial symmetry corresponds to $\gamma=0$ while $\varepsilon$
is obtained as (see e.g.\ refs.\
\cite{Cer79,Nil95})
\begin{equation}
\varepsilon =\frac{3\left( 2\Sigma_z-\Sigma_{\perp} \right) }
{4\Sigma_z+\Sigma_{\perp} }.
\label{eps1}
\end{equation}
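The expression (\ref{eps1}) follows directly from the self-consistency
condition: inserting $\omega _z=\omega _0\left( 1-2\varepsilon /3\right)$,
$\omega _x=\omega _0\left( 1+\varepsilon /3\right)$ and
$\Sigma _x=\frac 12\Sigma_{\perp}$ into
$\Sigma _x\omega _x=\Sigma _z\omega _z$ gives
\[
\frac 12\Sigma_{\perp} \left( 1+\frac \varepsilon 3\right) =\Sigma _z\left(
1-\frac{2\varepsilon }3\right) \quad \Longrightarrow \quad \varepsilon
\left( \frac{\Sigma_{\perp} }6+\frac{2\Sigma _z}3\right)
=\Sigma _z-\frac{\Sigma_{\perp} }2,
\]
which yields eq.~(\ref{eps1}) upon multiplying through by
$6/\left( 4\Sigma _z+\Sigma_{\perp} \right)$.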
The total microscopic electric quadrupole moment $Q^{mic}$ is calculated as
the sum of the single-particle quadrupole moments
\begin{equation}
Q^{mic}=e\sum_{\nu = 1}^{Z} q_\nu =\frac{\hbar e}{m\stackrel{o}\omega _{0}}
\cdot\frac{\stackrel{o}\omega _{0}}{\omega _0}\left( \frac{2\Sigma _z}
{1-2\varepsilon /3}-\frac{\Sigma_{\perp} }
{1+\varepsilon /3}\right).
\end{equation}
The question which specifically interests us now is how $Q^{mic}$ is changed
if {\em one} particle with a single-particle (mass)
quadrupole moment $q_\nu $ is added to the core.
Adding one particle to a spherically symmetrical HO potential
($\Sigma _x=\Sigma_y=\Sigma _z; Q^{mic}=0$), it is well-known \cite{Mot58}
that the total quadrupole
moment becomes $Q^{mic}=2eq_\nu $.
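The factor of two can be recovered by a short expansion (a sketch, using
$\langle z^2\rangle =\hbar \left( n_z+1/2\right) /m\omega _z$ and writing
$\sigma _z=n_z+1/2$, $\sigma_{\perp} =n_x+n_y+1$ for the quanta of the
added particle). For a spherical core with
$\Sigma _z=\frac 12\Sigma_{\perp} \equiv \Sigma$, eq.~(\ref{eps1}) applied
to the core plus the added particle gives, to leading order in $1/\Sigma $,
\[
\varepsilon \approx \frac{2\sigma _z-\sigma_{\perp} }{2\Sigma },
\]
and expanding the total quadrupole moment to first order in $\varepsilon $,
\[
Q^{mic}\approx \frac{\hbar e}{m\omega _0}\left[ \left( 2\sigma _z-\sigma
_{\perp }\right) +\frac \varepsilon 3\left( 4\Sigma _z+\Sigma_{\perp}
\right) \right] =\frac{2\hbar e}{m\omega _0}\left( 2\sigma _z-\sigma
_{\perp }\right) =2eq_\nu ,
\]
i.e.\ the induced polarization of the core contributes one extra unit of
$eq_\nu $ on top of the bare single-particle moment.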
Starting from an arbitrary axially symmetric deformation of the
deformed HO, defined by the total number of quanta in
the different directions, $\Sigma _z$ and $\Sigma_{\perp},$ and the number of
particles $Z$, a similar relation is found, however only when
expressed in the double-stretched coordinates \cite{Sak89},
\begin{equation}
x^{\prime \prime }=\frac{m\omega _{\bot }}\hbar x,\qquad
y^{\prime \prime }=\frac{m\omega _{\bot }}\hbar y,\qquad
z^{\prime \prime }=\frac{m\omega _z}\hbar z.
\end{equation}
We define the single-particle quadrupole moment in these coordinates,
\begin{equation}
q_\nu ^{\prime \prime }=\langle \nu \mid 2\left( z^{\prime \prime }\right)
^2-\left( x^{\prime \prime }\right) ^2-(y^{\prime \prime })^2\mid \nu
\rangle.
\end{equation}
It is then straightforward to show that at the self-consistent deformation $%
\varepsilon _0$ defined by eq.~(\ref{eps1}),
\begin{equation}
Q^{\prime \prime }=e\sum_\nu q_\nu ^{\prime \prime }=0,
\end{equation}
i.e.\ the matter distribution is always `spherically symmetric' in the
double-stretched system.
When adding a particle with a double-stretched quadrupole moment $%
q_{\nu}^{\prime \prime }$, we find a total double-stretched quadrupole
moment,
\begin{equation}
Q^{\prime \prime }\left( \varepsilon _0\right) =2eq_\nu ^{\prime \prime },
\label{Qtotds}
\end{equation}
where $Q^{\prime \prime }$ is calculated at the `new' self-consistent
deformation, but where $\varepsilon _0$ indicates that the
double-stretched coordinates are now defined with $\omega _{\bot }$
and $\omega _z$ corresponding to the original deformation,
$\varepsilon _0$, which is different from the new deformation obtained
with the added particle. This is thus a generalization of the formula
for spherical shape.
However, it turns out that eq.~(\ref{Qtotds}) is not too useful when
calculating how the physical quadrupole moment $Q$ is influenced by
the addition of a particle. It is straightforward to find linear
relations between $Q$ and $\langle r^2\rangle $ in the two systems
(and also in the single-stretched system, eq.~(\ref{str})), but the
somewhat complicated form of these relations, and the fact that also
$\langle r^2 \rangle $ in the double-stretched system must then be
analyzed, make us conclude that it is more straightforward to work
directly with the formulae in the physical (non-stretched)
coordinates.
\subsection{Quadrupole moments with a $Z=N$ core.}
Consider a system of $Z$ protons (with charge e) and $N$
neutrons (with no charge) in their respective HO potential with the
total number of quanta (see eqs.\ (\ref{Sigmai}, \ref{Sigma}))
described by ($\Sigma_{zp},\Sigma_{\perp p} $) and
($\Sigma _{zn},\Sigma_{\perp n}$),
respectively. The protons and neutrons are coupled in the standard way
used in the MO \cite{Nil69}, namely that protons and neutrons have the
same radius. For nuclear radii to be reproduced, this results in
frequencies varying according to
\begin{equation}
\hbar \stackrel{o}\omega _{0}
=\frac D{A^{1/3}}\left( 1\mp \frac{N-Z}{A}\right)^{1/3}
\approx \frac D{A^{1/3}}\left( 1\mp \frac{N-Z}{3A}\right),
\label{hbarom}
\end{equation}
where the standard value of $D$ is 41~MeV corresponding to
$r_0=1.2$~fm. The rightmost expression has become standard in MO
calculations and will be used by us except when discussing explicit
$Z$ and $N$ dependences. We then note that
\begin{eqnarray}
\frac {1}{A^{1/3}}\left( 1- \frac{N-Z}{A}\right)^{1/3}
&=&\frac{(2Z)^{1/3}}{A^{2/3}} \nonumber \\
\frac {1}{A^{1/3}}\left( 1+ \frac{N-Z}{A}\right)^{1/3}
&=&\frac{(2N)^{1/3}}{A^{2/3}}.
\label{ZNhbarom}
\end{eqnarray}
For the isolated systems of protons or neutrons, we can use
eq.~(\ref{eps1}) to calculate the deformation of an arbitrary state,
and thus the deformation change, $\delta \varepsilon $, if a particle
is added. For simplicity we start by analysing a nucleus with an equal
number of protons and neutrons in the core. By minimizing the total energy,
i.e.\ the sum of the single-particle energies,
of the $Z=N$ proton-neutron system the equilibrium deformation is obtained as
\begin{equation}
\varepsilon =\frac 12\left(\frac{3\left(2\Sigma _{zp}-\Sigma_{\perp p}\right)}
{4\Sigma _{zp}+\Sigma _{\perp p}}+
\frac{3\left( 2\Sigma _{zn}-\Sigma _{\perp n}\right) }
{4\Sigma _{zn}+\Sigma _{\perp n}}\right).
\label{epspn}
\end{equation}
This expression will be derived for the general $Z \neq N$ HO system
in subsect.~2.3 below, see eq.~(\ref{gendef}). Equation (\ref{epspn})
gives the reasonable result that to lowest order in
$\delta \varepsilon$, the addition of either a proton or a neutron
leads to a deformation change $\delta \varepsilon /2$, for the combined
system. This is a simple model which can be studied analytically leading to
closed formulae which should be helpful when considering shape changes
and polarization effects in the more realistic MO model.
Let us first note that the formulae in the previous subsection can easily
be generalized to the $Z=N$ system. Thus, if a particle with a mass
quadrupole moment $q_{\nu}$ is added to a spherical system, then, because
the polarization is `equally divided' between protons and neutrons,
the total charge quadrupole moment will increase by $0.5eq_{\nu}$ if
the added particle is a neutron and by $(1+0.5)eq_{\nu}$ for a proton.
Furthermore, eq.~(\ref{Qtotds}) can be generalized in an analogous way
for the $Z=N$ system.
For a deformed core we assume that the added particle has $n_z$
quanta in the axial direction and $n_x$, $n_y$ quanta in the
perpendicular directions. In an analogous way to the capital $\Sigma
$'s above (cf.\ eq.~(\ref{Sigmai})), describing the total number of
quanta, we then introduce
\begin{eqnarray}
\sigma _z &=&n_z+1/2 \nonumber \\
\sigma_{\perp} &=&n_x+n_y+1.
\end{eqnarray}
The single-particle quadrupole moment can now be written as,
\begin{equation}
q_\nu =\frac{\hbar ^2}{mD}\cdot \frac{\stackrel{o}\omega _{0}}{\omega _0}%
\cdot \frac{A^{1/3}}{1\mp \frac{N-Z}{3A}}\left( \frac{2\sigma _z}{1-\frac{%
2\varepsilon }3}-\frac{\sigma_{\perp} }{1+\frac \varepsilon 3}\right),
\end{equation}
and the total {\em microscopic} electric quadrupole moment as,
\begin{equation}
Q^{mic}=e\sum\limits_{\nu \in prot}q_\nu =
\frac{\hbar ^2 e}{mD}\cdot \frac{\stackrel{o}\omega _{0}}{\omega _0}\cdot
\frac{A^{1/3}}{1- \frac{N-Z}{3A}}
\left( \frac{2\Sigma _{zp}}{1-\frac{2\varepsilon }3}-
\frac{\Sigma_{\perp p} }{1+\frac\varepsilon 3}\right),
\label{Qmic}
\end{equation}
where the volume conservation factor is
\begin{equation}
\frac{\stackrel{o}\omega _{0}}{\omega _0}=\left( 1+\frac \varepsilon
3\right) ^{2/3}\left( 1-\frac{2\varepsilon }3\right) ^{1/3}.
\end{equation}
We can also define a {\em macroscopic} electric quadrupole moment
$Q^{mac}$ calculated from an isotropic charge distribution with the
total charge equal to $Ze.$ For such a spheroid with the symmetry axis
$b$ and the perpendicular axis $a$, the quadrupole moment is given as
$(2/5)Ze(b^2-a^2)$. Then with a radius parameter $r_0,$ i.e.\ a volume
$(4/3)\pi r_0^3A,$ and a quadrupole deformation calculated from the
$\Sigma $'s according to eq.~(\ref{epspn}), the macroscopic electric
quadrupole moment becomes,
\begin{equation}
Q^{mac}=\frac 15ZeA^{2/3}r_0^2\left( \frac{\stackrel{o}\omega _{0}}{\omega _0}%
\right) ^2\left( \frac 2{\left( 1-\frac{2\varepsilon }3\right) ^2}-\frac
2{\left( 1+\frac \varepsilon 3\right) ^2}\right).
\label{Qmac}
\end{equation}
From eqs.\ (\ref{Qmac},\ref{Qmic}), using eq.~(\ref{ZNhbarom}), we
see that both $Q^{mac}$ and $Q^{mic}$ are proportional to $ZA^{2/3}$
as $\Sigma_{\perp p}$ and $\Sigma _{zp}$ increase as $Z^{4/3}$. Since
$\varepsilon $ can be expressed in the $\Sigma $'s, $Q^{mac}$ and
$Q^{mic}$ can be considered as functions of the independent variables
$\Sigma_{\perp} $ and $\Sigma _z$ for protons and neutrons, respectively,
together with the number of protons $Z$ and neutrons $N.$
We now determine how $Q^{mic}$ (eq.~(\ref{Qmic})) and $Q^{mac}$
(eq.~(\ref{Qmac})) are changed from the addition of one proton by
differentiating with respect to $Z$, $\Sigma _{\perp p}$ and $\Sigma _{zp}$,
and from the addition of one neutron by differentiating with respect
to $N$, $\Sigma _{\perp n}$ and $\Sigma _{zn}$. The final expressions can be
simplified quite a lot if instead of the original $\Sigma _{\perp}$ and
$\Sigma _z$ (which are the same for neutrons and protons), we
introduce the axis-ratio $k$ between the z-axis and the perpendicular
axes, which can be written as
\begin{equation}
k=\frac{\Sigma _z}{\frac{1}{2}\Sigma_{\perp}},
\end{equation}
and use $k$ and $\Sigma_{\perp}$ as independent variables. For example, the
volume conservation factor then takes the form
\begin{equation}
\frac{\stackrel{o}\omega _{0}}{\omega _0}=
\frac{3 k^{2/3}}{ 2k+1}.
\end{equation}
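This form is obtained by inserting $\Sigma _z=\frac k2\Sigma_{\perp}$
into eq.~(\ref{eps1}), which gives
\[
\varepsilon =\frac{3\left( k-1\right) }{2k+1},\qquad
1-\frac{2\varepsilon }3=\frac 3{2k+1},\qquad
1+\frac \varepsilon 3=\frac{3k}{2k+1},
\]
so that
\[
\frac{\stackrel{o}\omega _{0}}{\omega _0}=\left( \frac{3k}{2k+1}\right)
^{2/3}\left( \frac 3{2k+1}\right) ^{1/3}=\frac{3k^{2/3}}{2k+1}.
\]
In particular, $k=2$ gives $\varepsilon =0.6$, the superdeformed
$2:1$ shape.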
We will use $A_0$ for the number of particles in the reference nucleus which
thus has $A_0/2$ protons and $A_0/2$ neutrons. Then, with $k$ measuring the
deformation, $q_\nu $ takes the simple form
\begin{equation}
q_\nu =\frac{\hbar ^2}{Dm}k^{-1/3}A_0^{1/3}(2kn_z+k+n_z-1-N_{osc}),\label{qny}
\end{equation}
where instead of $\sigma _z$ and $\sigma_{\perp} ,$ we have used
$n_z$ and $N_{osc}=n_x + n_y + n_z$ to characterize the particle.
We will refer to the change in $Q$ ($Q^{mic}$ or $Q^{mac}$)
when a particle is added as $q_{eff},$
which can generally be expressed in $A_0,$ $\Sigma_{\perp} $ and $k,$ in
addition to
$n_z$ and $N_{osc}$, describing the properties of the added particle. These
expressions are not very complicated but become even simpler if we put
$k=2$, corresponding to an axis ratio of $2:1$, i.e.\ a superdeformed shape:
\begin{eqnarray}
\left( q_{eff}^{mac}\right) _p &=&\left( A_0/2\right) ^{2/3}r_0^2 e
\left( 1.6+\frac{A_0}{\Sigma_{\perp} }\left( 1.2n_z-0.3-0.6N_{osc}\right)
\right) \nonumber \\
\left( q_{eff}^{mac}\right) _n &=&\left( A_0/2\right) ^{2/3}r_0^2 e
\left( 0.4+\frac{A_0}{\Sigma_{\perp} }\left( 1.2n_z-0.3-0.6N_{osc}\right)
\right) \nonumber \\
\left( q_{eff}^{mic}\right) _p &=&\frac{\hbar ^2e}{Dm}\left( A_0/2\right)
^{1/3}\left( 8n_z+0.25-2.5N_{osc}\right) \label{Qeff} \\
\left( q_{eff}^{mic}\right) _n &=&\frac{\hbar ^2e}{Dm}\left( A_0/2\right)
^{1/3}\left(\frac{2 \Sigma_{\perp}}{A_0}+ 3n_z-0.75-1.5N_{osc}
\right). \nonumber
\end{eqnarray}
Note that these formulae are general for a superdeformed $Z=N$ system
in the sense that we do not require
that the $\left( A_0/2\right)$ lowest orbitals are filled; the only
requirement is that $\Sigma_z=\Sigma_{\perp}$ for the core.
An interesting physical situation corresponds to the filling of the
orbitals below the $2:1$ gaps of the HO. It is then important to note
that only for every second gap, the deformation calculated
(eq.~(\ref{eps1})) from the minimum of the sum of the single-particle
energies corresponds to a $2:1$ ratio of the nuclear axes
($\varepsilon =0.6$), i.e.\ it is only for these {\it selfconsistent gaps}
that $\Sigma _z=\Sigma_{\perp}$. These are the $Z=N=g=4,16,40,80,140,\dots$
gaps, where we have introduced $g$ for the particle number at these
selfconsistent gaps. It is clear (see eqs.\ (\ref{Qeff}) and (\ref{qny})) that
if for one of these gaps, $q_{eff}$ is plotted vs. $q_\nu $ with
varying $N_{osc}$ for fixed $n_z$, or with varying $n_z$ for fixed
$N_{osc}$, we obtain straight lines. If both $n_z$ and $N_{osc}$ are
varied, the relation is not as evident, however.
Of main physical interest are the orbitals close to the Fermi surface,
and we note that the orbitals which are degenerate at $2:1$ shape have
a specific relation between $n_z$ and $N_{osc},$ i.e.\ when $N_{osc}$
decreases by one, $n_z$ decreases by 2. Thus, for each of these
bunches of degenerate orbitals, we will again get straight lines when
$q_{eff}$ is plotted vs. $q_\nu .$ The relation between $N_{osc}$ and
$n_z$ is used to reduce the set of variables, and all the needed
information about the number of quanta can be expressed by $N_{sh}$,
which counts the number of shells at a deformation where the ratio
$\omega_z: \omega _{\bot }$ can be expressed by small integers.
Before we write down the simplified expressions, we note that the
relations at $2:1$ deformation can easily be generalized to a $k:1$
deformation where $k$ is a small integer number. Selfconsistent
gaps are then formed
for particle numbers $Z=N=g= 2k, 8k, 20k, 40k, 70k, \dots$, i.e.\ if
the orbitals below these gaps are occupied, then
$\Sigma_z = k\frac{1}{2}\Sigma_{\perp}$.
Furthermore, for degenerate orbitals at $k:1$ deformation, if $N_{sh}$
differs by $(k-1)$, then $n_z$ differs by $k$. For the selfconsistent gaps,
$g$ and $\Sigma _{\perp}$ can be expressed in $N_{sh}$:
\[
g=\frac {1}{3k^2}\left( N_{sh}+1\right) \left( N_{sh}+1+k\right) \left(
N_{sh}+1+2k\right)
\]
\begin{equation}
\Sigma_{\perp} =\frac {1}{6k^3}\left( N_{sh}+1\right) \left( N_{sh}+1+k\right)
^2\left( N_{sh}+1+2k\right),
\end{equation}
i.e.\
\begin{equation}
\frac {g}{\Sigma_{\perp}} =\frac{2k}{\left( N_{sh}+1+k\right) }.
\end{equation}
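As a numerical illustration of these counting formulae, take the
superdeformed case $k=2$ with $N_{sh}=5$, i.e.\ the $g=40$ gap:
\[
g=\frac 1{12}\cdot 6\cdot 8\cdot 10=40,\qquad
\Sigma_{\perp} =\frac 1{48}\cdot 6\cdot 8^2\cdot 10=80,\qquad
\frac g{\Sigma_{\perp} }=\frac 12=\frac{2k}{N_{sh}+1+k},
\]
so that $A_0/\Sigma_{\perp} =2g/\Sigma_{\perp} =1$ for this gap in
eq.~(\ref{Qeff}).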
Using these relations together with eq.~(\ref{Qeff}) in its general form
for an arbitrary $k$-value, we obtain $q_{eff}$ as functions in
$q_\nu $, eq.~(\ref{qny}), for selfconsistent gaps at spherical ($k=1$),
superdeformed ($k=2$), hyperdeformed
($k=3$), etc.\ shape,
\begin{eqnarray}
\label{qefftot}
\left( q_{eff}^{mac}\right) _p &=&\left( \frac{2}{k} \right) ^{1/3}
\frac{2g^{4/3}}{5\Sigma _{\perp}}
\frac{r_0^2 e}{\hbar^2/Dm}q_\nu - \frac{r_0^2 e}{15}
\left(\frac{2g}{k}\right)^{2/3}(k^2-1)
\left(0 \pm \frac {4}{ N_{sh}+k+1 } \right) \nonumber \\
\left( q_{eff}^{mac}\right) _n &=&\left( \frac{2}{k} \right) ^{1/3}
\frac{2g^{4/3}}{5\Sigma _{\perp}}
\frac{r_0^2 e}{\hbar^2/Dm}q_\nu - \frac{r_0^2 e}{15}
\left(\frac{2g}{k}\right)^{2/3}(k^2-1)
\left(6 \pm \frac {4}{ N_{sh}+k+1 }\right) \nonumber \\
\left( q_{eff}^{mic}\right) _p &=&1.5eq_\nu \quad -\frac{\hbar ^2 e}{Dm}%
\left(\frac{2g}{k}\right)^{1/3} \frac{k^2-1}{3k}
\left( N_{sh}+k+1 \pm \frac{1}{2}\right) \\
\left( q_{eff}^{mic}\right) _n &=&0.5eq_\nu \quad -\frac{\hbar ^2 e}{Dm}%
\left(\frac{2g}{k}\right)^{1/3} \frac{k^2-1}{3k} \frac{1}{2}
\left( N_{sh}+k+1 \pm 1\right). \nonumber
\end{eqnarray}
All these functions are seen to be of the linear form
\begin{equation}
q_{eff} = e(b q_\nu +a)
\label{ba}
\end{equation}
with
\begin{equation}
a = a_0 \pm \Delta
\label{adelta}
\end{equation}
where $a_0$ is the average of $q_{eff}$ for a particle and a hole with
$q_\nu =0,$ and $\Delta $ is the small deviation from this average due
to the particle or hole nature of the orbital, i.e.\ a particle added to
one of the degenerate orbitals (with $N_{osc}=N_{sh}+1$) just above
the gap, or removed from one of the degenerate orbitals (with
$N_{osc}=N_{sh}$) just below the gap. For {\em prolate shape $a_0$ as
well as $\Delta$ in eq.~(\ref{adelta}) are negative} and the plus
(minus) sign in this equation corresponds to a particle
(hole).
\begin{figure}[t]
\vspace{-2.5cm}
\psfig{figure=fig1.ps,height=8cm}
\vspace{2.5cm}
\caption{Effective electric quadrupole moments, $q_{eff}$,
versus the single-particle mass quadrupole moment, $q_\nu$, at
superdeformation, derived in the HO in the case when
one particle is added to an orbital just above the gap. Solid and
dashed lines are used for microscopic and macroscopic $q_{eff}$,
respectively. Thick lines are used for the $g=40$ gap and thin lines
for the $g=80$ gap (only drawn for neutrons). Very similar behaviour
appears (not shown) when one particle is removed from an orbital just
below the gap, i.e.\ $\Delta$ in eq.~(\ref{adelta}) is of minor importance.}
\label{fig1}
\end{figure}
Equations (\ref{qefftot}) are valid for arbitrary k values. They are
illustrated for the superdeformed ($k=2$) $g=40$
gap ($N_{sh}=5$) in Fig.~1. There we have primarily chosen to put the
particle in an orbital above the gap with $(N_{osc}, n_z) = (6,6)$,
$(5,4)$, etc. As $\left( q_{eff}^{mac}\right) _p$ has $a_0=0$
(which holds for all selfconsistent gaps) $\Delta ^{mac}$ is directly seen as
the deviation from 0 for $q_{\nu} =0$. All $a$-values are negative,
i.e.\ the $q_{eff}$-values are all negative for $q_{\nu}=0$. This is
easily understood: the quadrupole moment of a particle added to a
superdeformed core must be sufficiently large to induce
an increased deformation.
The slopes for the microscopic $%
q_{eff}$ are the same as for the spherical case, i.e.\ $(b^{mic})_p=1.5$ and
$(b^{mic})_n=0.5$. The slopes, i.e.\ the $b$-values, in
the macroscopic case depend on both $k$ and $g$ but,
as illustrated in Fig.~2, in a way so that they
\begin{figure}[tb]
\vspace{-2.5cm}
\psfig{figure=fig2.ps,height=8cm}
\vspace{2.5cm}
\caption{Values of $b^{mac}$ (i.e.\ the slope of the dashed lines in Fig.~1.)
as functions of the number of particles of each kind ($Z=N$) in the
nucleus. The stars connected by solid lines are the 1st, 2nd, 3rd
... selfconsistent gaps for each of the deformations: spherical ($1:1$),
superdeformed ($2:1$) and hyperdeformed ($3:1$). One can see that the
values converge to an asymptotic value close to one (see text for
details), and that it requires the same number of selfconsistent gaps (i.e.\
more particles at larger deformation) to get the asymptotic value. The
circles show the values of the slopes for deformations ($1:2$, $2:3$
and $3:2$) where there are no selfconsistent gaps. Note that all values are
quite close to the value at the asymptotic limit and that, as
explained in the text, the differences from this value can largely be
understood from the choice of the parameter $\hbar$$\stackrel{o}\omega_{0}$.}
\label{fig2}
\end{figure}
are almost constant and close to one
for all gaps (except for the very lowest ones). The $b^{mac}$-values
are the same e.g.\ for $g=40$ at $2:1$ as for $g=20$ at $1:1$
(and $g=60$ at $3:1$).
These are the third lowest selfconsistent gaps at each deformation.
Using eq.~(\ref{hbarom}) with the standard value of $D=41$ MeV, an
asymptotic value of 0.995 is found for $b^{mac}$ when N$_{sh}
\longrightarrow \infty$. It turns out, however, that the deviation
from 1 is simply related to the value chosen for $D$. This value is
generally determined so that $R_{rms}=1.2 A^{1/3}$ fm, see
e.g.\ \cite{Nil95}, leading to $D=41.2$ MeV with one digit higher
accuracy. Using this value of $D$ instead gives $b^{mac} = 1.000$ in the
limit $N_{sh} \longrightarrow \infty$. Furthermore, if, when deducing
$D$, we do not go to the asymptotic limit but require that $R_{rms}=1.2
A^{1/3}$ fm for each HO configuration, $\hbar$$\stackrel{o}\omega _{0}$
will depend on $g/k$ in such a way that $b^{mac} \equiv 1$ for all
selfconsistent gaps at $k:1$ deformation. Consequently, in these cases Fig.~2
just shows the effect of a constant $\hbar$$\stackrel{o}\omega _{0}$
for all particle numbers at each deformation. However, as $D=41$ MeV is
generally used in MO calculations independent of
particle number, we think it is interesting to illustrate the effect
which this $A$-dependence leads to for the polarization of an added
particle. In particular this means that, in microscopic calculations,
$R_{rms}$ will be larger than $1.2 A^{1/3}$ fm for light nuclei.
At spherical shape $q_{eff}=0$ for $q_\nu =0$, i.e.\ $a=0$, as seen in
eq.~(\ref{qefftot}) ($k=1$). For prolate shape $\left( a^{mic}\right)_{p,n} $
and $(a^{mac})_n$ are negative. In Fig.~3
\begin{figure}[htb]
\vspace{-2.5cm}
\psfig{figure=fig3.ps,height=8cm}
\vspace{2.5cm}
\caption{Dependence of $a^{mac}$ and $a^{mic}$ on deformation $\varepsilon$.
The values are given for $g=40$ but, as the values scale with the
number of particles in the same way as the single-particle quadrupole
moment, $a/q_{\nu}$ is (asymptotically) independent of the number of
particles. The different behaviour for particles and holes ($\Delta$
in eq.~(\ref{adelta})) has been ignored.}
\label{fig3}
\end{figure}
is shown how $a$ varies with quadrupole deformation for a fixed number
of particles. For $1:1$, $2:1$ and $3:1$ deformation we have used the
selfconsistent gaps while for $1:2$, $2:3$ and $3:2$ deformation we have used
the gaps whose deformation at minimal total energy is closest to the
`correct' deformation. For all deformations we have fitted a curve as
a function of particle number (which becomes almost linear) to get the
values at $g=40$.
In the high-$N_{sh}$ limit and for selfconsistent gaps we get the asymptotic
relations defining the macroscopic effective quadrupole moments,
\begin{eqnarray}
\left( b^{mac}\right) _p &=&\left( b^{mac}\right) _n = 0.995 \nonumber \\
\left( a^{mac}\right) _p &=&-\left( 0 \pm 0.423\frac{k^2-1}{k^{4/3}} g^{1/3}
\right) \label{lennart1}\\
\left( a^{mac}\right) _n &=&-\left( 0.914\frac{k^2-1}{k^{2/3}} g^{2/3}
\pm 0.423\frac{k^2-1}{k^{4/3}} g^{1/3} \right), \nonumber \\
\nonumber
\end{eqnarray}
and the microscopic effective quadrupole moments,
\begin{eqnarray}
\left( b^{mic}\right) _p &=&1.5 \nonumber \\
\left( b^{mic}\right) _n &=&0.5 \nonumber \\
\left( a^{mic}\right) _p &=&
-\left(0.613\frac{k^2-1}{k^{2/3}} g^{2/3}
\pm 0.212\frac{k^2-1}{k^{4/3}} g^{1/3} \right) \label{lennart2} \\
\left( a^{mic}\right) _n &=&
-\left(0.306\frac{k^2-1}{k^{2/3}} g^{2/3}
\pm 0.212\frac{k^2-1}{k^{4/3}} g^{1/3} \right). \nonumber \\
\nonumber
\end{eqnarray}
It is clearly seen that $a_0$ (eq.~(\ref{adelta})) increases with
$g^{2/3}$ ($\propto A^{2/3}$), which is the same growth rate as for
$q_{\nu}$, and therefore the relative importance of $a_0$ is
independent of particle number. The polarization difference between
particles and holes, $\Delta$, on the other hand gets less important
the heavier the nucleus is as it only increases as $g^{1/3}$.
Note that although the functions for macroscopic and microscopic
$q_{eff}$ are quite different, the sums of the $q_{eff}$ values
obtained when adding one neutron and one proton are much more similar.
This means that the total electric quadrupole moment for a nucleus
with $Z \approx N$ is approximately the same independent of whether
the microscopic or the macroscopic formula is used.
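This statement can be checked numerically; the following sketch (our own consistency check, not part of the derivation) compares the proton-plus-neutron sums of the asymptotic coefficients in eqs.~(\ref{lennart1}) and (\ref{lennart2}):

```python
# Compare the proton+neutron sums of the asymptotic coefficients from
# eqs. (lennart1) and (lennart2).  The b-sums and the leading g^(2/3)
# a_0-coefficients nearly coincide, so the total electric quadrupole
# moment of a Z ~ N nucleus is almost the same in both approaches.
b_sum_mac = 0.995 + 0.995        # macroscopic: b_p + b_n = 1.99
b_sum_mic = 1.5 + 0.5            # microscopic: b_p + b_n = 2.0

# coefficient of (k^2 - 1) k^(-2/3) g^(2/3) in a_p + a_n (the a_0 part)
a_sum_mac = -0.914               # macroscopic (only the neutron term contributes)
a_sum_mic = -(0.613 + 0.306)     # microscopic: -0.919

print(b_sum_mac, b_sum_mic)
print(a_sum_mac, a_sum_mic)
```

The residual differences (1.99 vs.\ 2.0 and $-0.914$ vs.\ $-0.919$) are at the per-cent level.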
The different behaviour of the macroscopic and microscopic effective
quadrupole moments has the same origin in the deformed case as in the
spherical case, and is caused by the two basically different
assumptions applied. In the {\em macroscopic} approach, the
protons as well as the neutrons are assumed to have a constant matter
distribution inside the potential, i.e.\ the proton and neutron matter
distributions have identical deformations.
Consequently, for a $Z=N$ nucleus the addition of a proton or
neutron has the same polarization effect, except for a constant factor
caused by the added charge, i.e.\ same $b$- but different
$a$-parameters, as seen in eq.~(\ref{qefftot}) for a deformation
corresponding to a $k:1$ prolate shape. Independent of
deformation, we asymptotically obtain $\left( b^{mac}\right) _p
=\left( b^{mac}\right) _n \approx 1$, see eq.~(\ref{lennart1}) (see
also Fig.~1 for the $k=2$ case).
To analyze the {\em microscopic} approach, we start from identical
proton and neutron configurations corresponding to an equilibrium
deformation $\varepsilon_o$. Then if a proton is added, the isolated
proton system will get a new equilibrium deformation which we will
refer to as $\varepsilon_o + \delta \varepsilon$. The deformation
of the combined system is then $\varepsilon_o + \delta
\varepsilon/2$. The equilibrium deformation of the isolated neutron
system is unchanged, $\varepsilon_o$. The electric quadrupole moment
is then calculated at the (non-selfconsistent) deformation
$\varepsilon_o +\delta \varepsilon /2$. From eq.~(12), it is easy to
find out that in a pure proton system, half of the change in the
quadrupole moment will be caused by the changed value of
$\varepsilon$, and half of the change by the change in the
$\Sigma$'s. In the present case with a proton added to a $Z=N$ system,
the change in the $\Sigma$'s for the protons is the same as when a
proton is added to a pure proton system while the change in
deformation for the total system (the polarization) is half of that
for the pure proton system. This gives $\left( b^{mic}\right) _p =
1.5$. If instead a neutron is added, there is no contribution from
the change in the $\Sigma$'s as they refer to the
proton-configuration, but the deformation of the total system will
still be $\varepsilon_o + \delta \varepsilon/2$, and the contribution
from this change in deformation will thus be the same. This gives
$\left( b^{mic}\right) _n = 0.5$, see eq.~(\ref{lennart2}) (see also
Fig.~1 for the $k=2$ case).
The difference between the macroscopic and microscopic
polarizations is thus caused by the lack of selfconsistency when the
isolated proton and neutron systems have different equilibrium
deformations. Then, in the macroscopic approach, it is assumed that the
proton and neutron {\em matter distributions} fully adopt the
common deformation while they only do it `halfway' in the microscopic
approach. With the assumption that the proton-proton, neutron-neutron
and proton-neutron attractions are the same, it seems that the
microscopic approach should come close to a fully selfconsistent
treatment. However, a stronger attraction between unlike particles
would correspond to a polarization in the direction suggested by the
macroscopic approach.
In subsections 3.1 and 3.3 we calculate $q_{eff}$ in
both methods for the modified oscillator potential, and in subsection
3.5 we compare the two methods with experimental data. First, however,
we will study the polarization effects in the HO with a $Z \neq N$
core.
\subsection{Quadrupole moments with a $Z \neq N$ core.}
How will the results from the previous subsection change when the number
of protons differs appreciably from the number of neutrons? We will not
derive all the formulae again but just investigate how different
quantities depend on $Z$ and $N$. To analyse quadrupole moments, the
correct equilibrium deformation of the proton-neutron system has to be
obtained. This is done by minimizing the total energy
$E(\varepsilon,Z,N)$. The energy of the proton system or neutron
system can be written as
\begin{equation}
E_{j}=3 \hbar \stackrel{o}\omega _{0j}
\left(\frac{1}{4}\Sigma_{\perp j}^2\Sigma_{zj}\right)^{1/3} \hspace{1cm};
\hspace{1cm}j=p,n.
\end{equation}
We know that $\hbar$$\stackrel{o}\omega _{0}$ follows eq.~(\ref{ZNhbarom}),
and $\Sigma_{\perp}$ and $\Sigma_z$ are proportional to $Z^{4/3}$ and
$N^{4/3}$ for protons and neutrons, respectively. Altogether the
energy becomes
\begin{eqnarray}
E_{p} &\propto& \frac{Z^{5/3}}{ A^{2/3}} \nonumber \\
E_{n} &\propto& \frac{N^{5/3}}{ A^{2/3}}
\label{prop}
\end{eqnarray}
for the two different systems. In a region around their respective
equilibrium deformation the proton and the neutron energies can be
well approximated with parabolas. The total energy, which is just the
sum of the two energies, can therefore be written
\begin{equation}
E_{tot}=E_{0p}+C_{p}(\varepsilon-\varepsilon_{p})^{2}+
E_{0n}+C_{n}(\varepsilon-\varepsilon_{n})^{2},
\end{equation}
where the equilibrium deformation of each subsystem is
\begin{equation}
\varepsilon_{j}=\frac{3(2\Sigma_{zj}-\Sigma_{\perp j})}
{4\Sigma_{zj}+\Sigma_{\perp j}}
\hspace{1cm}; \hspace{1cm}j=p,n
\end{equation}
and the energies at the minima ($E_{0p}$ and $E_{0n}$) and the
stiffness parameters ($C_{p}$ and $C_{n}$), all with proportionality
according to eq.~(\ref{prop}), determine each parabola. By minimizing the
total energy, $E_{tot}$, the equilibrium deformation of the total system
is obtained as
\begin{equation}
\varepsilon_{0}=\frac{C_{p}\varepsilon_{p}+C_{n}\varepsilon_{n}}{C_{p}+C_{n}}=
\frac{Z^{5/3}\varepsilon_{p}+N^{5/3}\varepsilon_{n}}{Z^{5/3}+N^{5/3}}.
\label{gendef}
\end{equation}
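The weighted-mean form of eq.~(\ref{gendef}) is easily verified numerically. The sketch below (our own check; the subsystem deformations are hypothetical illustrative values, and the stiffnesses are taken proportional to $Z^{5/3}$ and $N^{5/3}$ in accordance with eq.~(\ref{prop})) minimizes the two-parabola energy on a fine grid:

```python
# Verify eq. (gendef): the minimum of
#   E_tot = C_p (e - eps_p)^2 + C_n (e - eps_n)^2
# lies at the stiffness-weighted mean of the subsystem deformations.
Z, N = 66, 86                        # illustrative particle numbers
C_p, C_n = Z**(5/3), N**(5/3)        # stiffnesses ~ Z^(5/3), N^(5/3), eq. (prop)
eps_p, eps_n = 0.55, 0.61            # hypothetical subsystem equilibrium deformations

eps_0 = (C_p * eps_p + C_n * eps_n) / (C_p + C_n)   # eq. (gendef)

E_tot = lambda e: C_p * (e - eps_p)**2 + C_n * (e - eps_n)**2
grid = [0.50 + 1e-5 * i for i in range(12001)]      # scan 0.50 ... 0.62
eps_min = min(grid, key=E_tot)

print(abs(eps_min - eps_0) < 1e-4)   # -> True
```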
The $b$-values describe the part of $q_{eff}$ which depends on the
single-particle properties of the added particle (eq.~(\ref{ba})). They enter
via the change $\delta \varepsilon$ in equilibrium deformation, and
for protons in the microscopic case by a direct contribution of
1~$q_{\nu}$. The change in total quadrupole moment caused by the
deformation change is to first order
\begin{eqnarray}
\delta Q_{def} \propto Z A^{2/3} \delta \varepsilon,
\end{eqnarray}
while the single-particle quadrupole moment is
\begin{eqnarray}
q_{\nu p} &\propto& \frac {A^{2/3}}{Z^{1/3}}f(\sigma_{zp},\sigma_{\perp p})
\nonumber \\
q_{\nu n} &\propto& \frac {A^{2/3}}{N^{1/3}}f(\sigma_{zn},\sigma_{\perp n}),
\end{eqnarray}
where $f(\sigma_{zp},\sigma_{\perp p})$ and $f(\sigma_{zn},\sigma_{\perp n})$
are independent of $Z$ and $N$.
In total we get the expressions
\begin{eqnarray}
\left(\frac{\delta Q_{def}}{q_{\nu}}\right)_{p} &\propto& \frac{Z A^{2/3}
\frac{Z^{5/3}}{Z^{5/3}+N^{5/3}}
\frac{3(2\sigma_{zp}-\sigma_{\perp p})}{4\Sigma_{zp}+\Sigma_{\perp p}} }
{A^{2/3}Z^{-1/3}f(\sigma_{zp},\sigma_{\perp p})} =
\frac{Z^{5/3}}{Z^{5/3}+N^{5/3}}F(\sigma_{zp},\sigma_{\perp p}) \nonumber \\
\left(\frac{\delta Q_{def}}{q_{\nu}}\right)_{n} &\propto& \frac{Z A^{2/3}
\frac{2N^{5/3}}{Z^{5/3}+N^{5/3}}
\frac{3(2\sigma_{zn}-\sigma_{\perp n})}{4\Sigma_{zn}+\Sigma_{\perp n}} }
{A^{2/3}N^{-1/3}f(\sigma_{zn},\sigma_{\perp n}) } =
\frac{Z}{N} \frac{N^{5/3}}{Z^{5/3}+N^{5/3}}F(\sigma_{zn},\sigma_{\perp n}),
\end{eqnarray}
where $F(\sigma_{zp},\sigma_{\perp p})$ and $F(\sigma_{zn},\sigma_{\perp n})$
are independent of $Z$ and $N$. The previously derived $b$-values
(eqs.~(\ref{lennart1},\ref{lennart2})) can now be generalized to any
proton-neutron system
\begin{eqnarray}
\left( b^{mac}(Z,N)\right) _p &=& \frac{2Z^{5/3}}{Z^{5/3}+N^{5/3}}
\left( b^{mac}(Z_{0}=N_{0})\right) _p \nonumber \\
\left( b^{mac}(Z,N)\right) _n &=& \frac{Z}{N} \frac{2N^{5/3}}{Z^{5/3}+N^{5/3}}
\left( b^{mac}(Z_{0}=N_{0})\right) _n \nonumber \\
\left( b^{mic}(Z,N)\right) _p &=& 1+\frac{2Z^{5/3}}{Z^{5/3}+N^{5/3}}
\left(\left( b^{mic}(Z_{0}=N_{0})\right) _p-1\right) \label{scaling}\\
\left( b^{mic}(Z,N)\right) _n &=& \frac{Z}{N} \frac{2N^{5/3}}{Z^{5/3}+N^{5/3}}
\left( b^{mic}(Z_{0}=N_{0})\right) _n \nonumber
\end{eqnarray}
keeping in mind that the $Z=N$ $b$-values were only deduced for selfconsistent
gaps, but should be approximately valid for all $Z$- and $N$-values.
For the nucleus $^{152}$Dy the predictions from the HO are
$b^{mac}_p=0.78$, $b^{mac}_n=0.93$, $b^{mic}_p=1.39$, and
$b^{mic}_n=0.47$.
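These numbers follow directly from eq.~(\ref{scaling}); as a check, they can be reproduced with a few lines of code (our own sketch, using the asymptotic $Z=N$ values $b^{mac}=0.995$, $b^{mic}_p=1.5$, $b^{mic}_n=0.5$):

```python
# Evaluate the generalized b-values of eq. (scaling) for 152Dy (Z=66, N=86).
Z, N = 66, 86
w = Z**(5/3) / (Z**(5/3) + N**(5/3))     # proton weight Z^(5/3)/(Z^(5/3)+N^(5/3))

b_mac_p = 2 * w * 0.995                  # -> 0.78
b_mac_n = (Z / N) * 2 * (1 - w) * 0.995  # -> 0.93
b_mic_p = 1 + 2 * w * (1.5 - 1.0)        # -> 1.39
b_mic_n = (Z / N) * 2 * (1 - w) * 0.5    # -> 0.47

print(round(b_mac_p, 2), round(b_mac_n, 2),
      round(b_mic_p, 2), round(b_mic_n, 2))
```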
\section{Quadrupole moments at superdeformation in the cranked MO
potential.}
We will now continue to analyze polarization effects on quadrupole
moments in the cranked MO model. Starting from the superdeformed
$^{152}$Dy configuration, we shall study how the quadrupole moment is
affected by adding or removing protons or neutrons in specific
orbitals. In contrast to the previous section, all calculations are
performed at $I \approx 40 \hbar$. It has previously been concluded
\cite{Rag80} that at superdeformation, the general properties of
the single-particle orbitals are almost unaffected by spin up to the
highest observed values $I=60-70 \hbar$. Furthermore, our present study
shows that the polarizing properties in the HO at superdeformation are
essentially the same at $I=0 \hbar$ and at $I=40 \hbar$. In
subsections~3.1 and 3.2 we study the macroscopic quadrupole moments,
which have been used in recent comparisons with experiment and seem to
work quite well
\cite{Sav96,Nis97}, while similar calculations using the microscopic
quadrupole moments are carried out in subsections~3.3 and 3.4. We
perform a complete calculation in the cranked MO potential (parameters
from ref.~\cite{Haa93}) with Strutinsky renormalization to get the
proper equilibrium deformation, $\varepsilon $ and $\varepsilon_4$, of
each configuration. The formalism of ref.~\cite{Ben85} is used which
makes it straightforward to study specific configurations defined by
the number of particles of signature $\alpha = 1/2$ and $\alpha =
-1/2$, respectively, in the different $N_{rot}$ shells of the rotating
HO basis. The macroscopic electric quadrupole moment, $Q^{mac}$, is
obtained from an integration over the volume described by the nuclear
potential and the microscopic electric quadrupole moment, $Q^{mic}$, by
adding the contributions $q_{\nu}$ from the occupied proton orbitals
at the proper deformation. One-particle polarization effects are
investigated in subsections~3.1 and 3.3 while the additivity of
several particles is studied in subsections~3.2 and 3.4. In
subsection~3.5 the two theoretical methods are compared and confronted
with experiment.
\subsection{Macroscopic polarization effects of one-particle states}
In the present study, we first calculate the macroscopic quadrupole
moments $Q^{mac}$ in neighbouring nuclei which only differ by one
particle or one hole from the yrast superdeformed configuration in
$^{152}$Dy. From these $Q^{mac}$-values effective electric quadrupole
moments, $q_{eff}$, are obtained for different orbitals. For the
orbitals around the $Z=66$ and $N=86$ gaps, $q_{eff}$ is plotted vs.\
the single-particle mass quadrupole moment, $q_{\nu}$, in Fig.~4 (see
also Table~4 below).
\begin{figure}[bt]
\psfig{figure=fig4.ps,height=8cm}
\caption{Changes in the total macroscopic quadrupole moment, $q_{eff}$,
when a particle is added to or removed from the superdeformed core
plotted vs. the single-particle mass quadrupole moment, $q_{\nu}$. The
circles and crosses are obtained from calculations in the
Strutinsky-renormalized cranked MO with simultaneous
energy-minimization in $\varepsilon$ and $\varepsilon_4$ direction
using $^{152}$Dy as a core (values are given in Table~4). The quantum
number $N \equiv N_{rot}$ is indicated for each state. The solid
lines, whose equations are given in the text, are linear least square
fits to the respective points. The dashed lines are the quadratic
least square fits for the HO potential (with $\varepsilon_4$ deformation
included) with the ($Z=60, N=80$) core, as defined in Fig.~5.}
\label{fig4}
\end{figure}
It is evident that these values define approximate straight lines. By a
least square fit we get the relation
\begin{equation}
q_{eff}=(1.02q_{\nu}+5.4)\; e{\rm fm}^2
\label{macp}
\end{equation}
for protons and
\begin{equation}
q_{eff}=(1.30q_{\nu}-43.7)\; e{\rm fm}^2
\label{macn}
\end{equation}
for neutrons. This should be compared with $b^{mac}_p=0.78$ and
$b^{mac}_n=0.93$ obtained from eqs.~(\ref{lennart1},\ref{scaling}).
It is evident that there are important differences.
Let us analyze the differences between the $Z=N$ HO and the MO results
for $^{152}$Dy by introducing, in the numerical calculations, the
different terms one at a time. Starting with the $Z=N$ HO we get the
expected modifications when combining the HO superdeformed gaps
$Z=60$ and $N=80$: Due to the neutron excess the average slope gets
notably smaller than in the $Z=N$ case and neutrons have a larger slope
than protons, all in agreement with eq.~(\ref{scaling}). Cranking the
system and performing a Strutinsky renormalization
(i.e.\ including the macroscopic liquid-drop energy) have only a minor
effect on the slopes, see dashed lines in Fig.~5.
\begin{figure}[htb]
\psfig{figure=fig5.ps,height=8cm}
\caption{Changes in the total macroscopic quadrupole moment, $q_{eff}$, when
a particle is added to or removed from the $Z=60$ and $N=80$ core
plotted vs. the single-particle mass quadrupole moment, $q_{\nu}$.
The curves are least square fits to the calculations in
Strutinsky-renormalized cranked HO calculations. The dashed lines show the
relations when the energy is minimized only in $\varepsilon$ deformation
with $\varepsilon_4=0$. The solid lines (corresponding proton and neutron data
marked with circles and crosses, respectively) show the relations when
the energy is simultaneously minimized in the $\varepsilon$ and
$\varepsilon_4$ directions.}
\label{fig5}
\end{figure}
In the standard HO only quadrupole deformation $\varepsilon$ is used,
while in the MO the energy is minimized in the quadrupole and
hexadecapole deformation plane. In order to test the importance of the
hexadecapole $\varepsilon_4$-deformation it is now included in the
(cranked and Strutinsky-renormalized) HO, see the solid lines in
Fig.~5. Due to the hexadecapole $\varepsilon_4$-deformation
there is now a curvature in the relation $q_{eff}=q_{eff}(q_{\nu})$,
but the best linear fit is very close to the one obtained for
quadrupole deformation only.
When changing to the MO potential by including the $\vec{l}\cdot
\vec{s}$ and $l^2$-terms, there are large changes, see Fig.~4. The large
differences between the HO and the MO are understood by considering
the stiffness of the core in the two models. Around the equilibrium
quadrupole deformation $\varepsilon _0$ of the core, its energy can be
expressed as
\begin{equation}
E_{core}=E_0(\varepsilon _0)+C(\varepsilon-\varepsilon _0)^2,
\end{equation}
and the energy of the added particle as
\begin{equation}
e_{part}=e_0(\varepsilon _0)+K(\varepsilon-\varepsilon _0).
\end{equation}
The change in deformation of the total system, due to the added
particle, is therefore $K/2C$. The stiffness $C$ of the energy surface
is larger for the HO than for the MO (see below), and this directly
gives a larger deformation change in the MO for the same single
particle quadrupole moment (same $K$). Consequently, in the MO the
total quadrupole moment will increase more for high-$N_{osc}$ orbitals
with a positive deformation change and decrease more for low-$N_{osc}$
orbitals with negative deformation change. The slope of
$q_{eff}(q_{\nu})$ will therefore be considerably larger in the MO
than in the HO.
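Explicitly, minimizing the combined energy with respect to $\varepsilon$ gives
\begin{equation}
\frac{d}{d\varepsilon }\left[ E_{core}+e_{part}\right] =
2C(\varepsilon -\varepsilon _0)+K=0
\quad \Longrightarrow \quad
\varepsilon =\varepsilon _0-\frac{K}{2C},
\end{equation}
i.e.\ the magnitude of the deformation change is $\left| K\right| /2C$,
so a softer core (smaller $C$) responds more strongly to the same
single-particle slope $K$.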
The net result of the change from the HO with a $Z=N$ core to the
cranked MO potential with the $Z=66$, $N=86$ core is thus an increase
in the slope of the $q_{eff}$ vs.\ $q_{\nu}$ relation for neutrons,
but almost no change in the proton slope. Furthermore, the relations
still become approximately linear also in the MO case.
One might ask why the polarization effects are larger in the MO than
in the HO. To investigate this, we carried out calculations in the HO
both with the HO closed shell configuration ($Z=60, N=80$), and for the
configuration corresponding to the yrast superdeformed band in
$^{152}$Dy. It turned out that, when corrected for the difference in
mass, the different configurations had very similar stiffness.
Calculations were then carried through for the $^{152}$Dy configuration
with only the $l^2$-term or the $\vec{l}\cdot \vec{s}$-term included,
indicating that the $l^2$-term is responsible for approximately 60 \%
and the $\vec{l}\cdot \vec{s}$-term for 40 \%
of the change in stiffness.
\subsection{Additivity of macroscopic effective quadrupole moments}
It is now interesting to investigate if the $q_{eff}$ values are additive,
i.e.\ if quadrupole moments of several-particle several-hole configurations
relative to the superdeformed $^{152}$Dy yrast state (with the $Q^{mac}$
value 1893 $e$fm$^2$) can be calculated from the formula
\begin{equation}
Q_{est}=Q\left( ^{152}{\rm Dy}_{yrast}\right)
+\sum\limits_{particle}q_{eff}-\sum\limits_{hole}q_{eff}. \label{Qadd}
\label{Qest}
\end{equation}
The additivity will first be tested for excited states in $^{152}$Dy
where the number of excited particles is the same as the number of excited
holes. We shall then test the additivity in other superdeformed nuclei all
the way down to $^{142}$Sm.
Starting with $^{152}$Dy, we give in Table~1 calculated deformations
and quadrupole moments for a few $n$-particle $n$-hole configurations
with rather low excitation energy. The quadrupole moments are
calculated both by exact integration and by use of eq.~(\ref{Qadd})
with $q_{eff}$ taken from the 1-hole or 1-particle configurations
given in Fig.~4. The agreement between these two methods is good with
a typical difference of 2 $e$fm$^2$.
\begin{table}[htb]
\caption{Calculated deformations and macroscopic quadrupole moments for SD
configurations of $^{152}$Dy. The two values of the quadrupole moment are
obtained from a numerical integration ($Q_{exact}$) and by adding and
subtracting $q_{eff}$ values to the yrast quadrupole moment ($Q_{est}$).
The values of $q_{eff}$ are obtained from 1-particle or 1-hole
configurations relative to the yrast state, see Fig.~4 and Table~4 below.}
\label{tab1}\hspace{-0.1cm}
\par
\begin{tabular}{|ccccc|}
\hline
Configurations of $^{152}$Dy. & $Q_{exact}$ & $Q_{est}$ & $\varepsilon
$ & $\varepsilon _4$ \\
& ($e$fm$^2$) & ($e$fm$^2$) & & \\ \hline
yrast & 1893.2 & & 0.5820 & 0.0166 \\
& & & & \\
$\pi ([651]3/2^-)^{-1}([413]5/2^-) $ & 1826.9 & 1828 & 0.5716 & 0.0211 \\
& & & & \\
$\pi ([301]1/2^-)^{-1}([532]3/2^-) $ & 1960.6 & 1960 & 0.6012 & 0.0282 \\
& & & & \\
$\nu ([770]1/2^+)^{-1}([402]5/2^+) $ & 1749.1 & 1750 & 0.5519 & 0.0135 \\
& & & & \\
$\pi ([651]3/2^-)^{-1}([651]3/2^+)^{-1} $ & & & & \\
$([413]5/2^-)([413]5/2^+) $ & 1765.2 & 1766 & 0.5608 & 0.0241 \\
& & & & \\
$\pi ([651]3/2^-)^{-1}([651]3/2^+)^{-1} $ & & & & \\
$([532]3/2^-)([413]5/2^-) $ & 1809.1 & 1811 & 0.5706 & 0.0259 \\
& & & & \\
$\pi ([651]3/2^-)^{-1}([413]5/2^-) $ & & & & \\
$\nu ([770]1/2^+)^{-1}([402]5/2^-)$ & 1686.4 & 1685 & 0.5415 & 0.0182 \\
& & & & \\
$\pi ([651]3/2^-)^{-1}([651]3/2^+)^{-1}$ & & & & \\
$([532]3/2^-)([413]5/2^+)$ & 1726.9 & 1723 & 0.5601 & 0.0388 \\
$\nu ([770]1/2^+)^{-1}([521]3/2^-)$ & & & & \\ \hline
\end{tabular}
\end{table}
Considering the good agreement between the `exact' and `estimated'
macroscopic quadrupole moments for $^{152}$Dy, one could ask if
similar methods could be used to estimate the quadrupole moment
for other SD nuclei around $^{152}$Dy. Furthermore, considering the
approximately linear relation between $q_{\nu}$ and $q_{eff}$ discussed
above, it is interesting to investigate if these quadrupole
moments can be obtained from a knowledge of $Q$ for the reference nucleus
and $q_{\nu}$ (but not $q_{eff}$) for the active orbitals. Quadrupole moments
for configurations in the nuclei $^{150}$Dy, $^{148}$Dy, $^{150}$Gd,
$^{148}$Gd, $^{144}$Gd, $^{143}$Eu and $^{142}$Sm
were therefore calculated in three different ways:
\begin{itemize}
\item By direct calculation at the appropriate equilibrium
deformation; $Q_{exact}$ in Table~2.
\item From the quadrupole moment of the superdeformed $^{152}$Dy yrast
state and the sum of effective one-hole quadrupole moments;
$Q_{est}(q_{eff})$ in Table~2.
\item From the quadrupole moment of the superdeformed $^{152}$Dy
yrast state and the sum of effective quadrupole moments calculated
from single-particle quadrupole moments by the simple linear relations
eqs.\ (\ref{macp},\ref{macn}); $Q_{est}(q_{\nu})$ in Table~2.
\end{itemize}
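The third method can be sketched in a few lines of code (our own illustration; the $q_{\nu}$ inputs below are placeholders, not the actual Table-4 single-particle moments):

```python
# Q_est(q_nu): effective quadrupole moments from the fitted linear
# relations eqs. (macp, macn), subtracted from the SD 152Dy yrast
# macroscopic value of 1893 e fm^2 for each hole orbital.
Q_YRAST = 1893.0   # e fm^2, SD 152Dy yrast

def q_eff_proton(q_nu):            # eq. (macp)
    return 1.02 * q_nu + 5.4

def q_eff_neutron(q_nu):           # eq. (macn)
    return 1.30 * q_nu - 43.7

def Q_est(proton_holes=(), neutron_holes=()):
    """Estimate Q for a configuration with holes relative to SD 152Dy."""
    return (Q_YRAST
            - sum(q_eff_proton(q) for q in proton_holes)
            - sum(q_eff_neutron(q) for q in neutron_holes))

# Removing two protons with hypothetical q_nu = 70 fm^2 each:
print(Q_est(proton_holes=(70.0, 70.0)))   # -> 1739.4
```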
\renewcommand{\arraystretch}{1.7}
\begin{table}[htb]
\caption{Macroscopic quadrupole moments calculated in three different
ways (see text) for SD configurations in selected nuclei with one or
several holes relative to the $^{152}$Dy reference nucleus.}
\label{tab2}\hspace{-0.1cm}
\par
\begin{tabular}{|clcccc|}
\hline
nucleus & configuration relative & $Q_{exact}$ &
$Q_{est}(q_{eff})$
& $Q_{est}(q_{\nu})$ & $Q_{exact} - Q_{est}(q_{\nu})$\\
& to SD $^{152}$Dy yrast & ($e$fm$^{2}$) &
($e$fm$^{2}$) & ($e$fm$^{2}$) & ($e$fm$^{2}$)\\
\hline
$^{152}$Dy & & 1893 & & &\\
$^{150}$Dy & $\nu 7^{-2}$ & 1725 & 1722 & 1715 & 10 \\
$^{150}$Gd & $\pi 6^{-2}$ & 1735 & 1733 & 1722 & 13 \\
$^{148}$Dy & $\nu 6^{-4}$ & 1701 & -- & 1714 & $-13$ \\
$^{148}$Gd & $\pi 6^{-2}\nu 7^{-2}$ & 1576 & 1562 & 1544 & 32 \\
$^{144}$Gd & $\pi 6^{-2}\nu 7^{-2}6^{-4}$ & 1381 & -- & 1365 & 16 \\
$^{143}$Eu & $\pi 6^{-3}\nu 7^{-2}6^{-4}$ & 1305 & -- & 1277 & 28 \\
$^{142}$Sm & $\pi 6^{-3}([541]1/2)^{-1}\nu 7^{-2}6^{-4}$ & 1232 & --
& 1198 & 34 \\
$^{142}$Sm & $\begin{array}{l} \pi 6^{-2}([541]1/2)^{-1}3^{-1} \\
\nu 7^{-1}6^{-4}([411]1/2)^{-1} \end{array}$
& 1419 & -- & 1410 & 9 \\ \hline
\end{tabular}
\end{table}
When choosing the configurations, we have to make sure that if two
orbitals interact in the superdeformed region, both of these orbitals are
either empty or occupied. This is the reason why the four $N=6$ neutron
orbitals (2 of each signature) are treated as one entity.
In Table~2, $Q_{est}(q_{eff})$ values are presented only for
configurations which have holes in the orbitals used to get the
effective one-hole quadrupole moments plotted in Fig.~4 while
$Q_{est}(q_{\nu})$ are calculated for all configurations. The two
estimates based on $q_{eff}$ and $q_{\nu}$, respectively, differ by up
to 18 $e$fm$^{2}$ for four particles removed. The difference between
the calculated value, $Q_{exact}$, and the single-particle estimate
$Q_{est}(q_{\nu})$ is at most 34 $e$fm$^{2}$. The corresponding
rms-value is 22 $e$fm$^{2}$ for all configurations in Table~2. It is
astonishing that the summation of effective quadrupole moments,
calculated from eqs.\ (\ref{macp}, \ref{macn}), describes the `exactly'
calculated values within 2\% for $^{142}$Sm, which is ten particles
away from $^{152}$Dy, and where the deformation has changed from
$\varepsilon =0.58$ to $\varepsilon =0.48$ and $\varepsilon =0.52$ for
the two studied configurations.
The two $^{142}$Sm configurations included have been measured in
experiment \cite{Hac98} and comparison with these data will be
discussed in subsection~3.5. In the Sm-bands, there are two orbitals
which have to be handled with extra care. Thus, due to the difference
in deformation, the order of some orbitals closest to the Fermi-surface
is different in $^{152}$Dy and $^{142}$Sm. At the deformation of
$^{152}$Dy, the proton $N_{osc}=5$ and neutron $N_{osc}=4$ orbitals closest
to the Fermi-surface are $\pi ([532]5/2)$ and $\nu ([404]9/2)$, i.e.\ the
corresponding points in Fig.~4 are constructed from configurations
with holes in these orbitals. At the deformation of the Sm-bands on
the other hand, the $\pi ([541]1/2)$ and $\nu ([411]1/2)$ orbitals are
higher in energy and therefore, the bands are formed with holes in
these orbitals relative to the $^{152}$Dy bands. Consequently, their
single-particle moments should be used in the relations to get the
quadrupole moments in $^{142}$Sm, see Table~2. The orbitals are
relatively pure in the $^{142}$Sm configurations because, at the
relevant rotational frequencies, the crossings between the orbitals
occur at deformations somewhere between that of $^{152}$Dy and
$^{142}$Sm. Therefore, these configurations are in any case a good test of
how well additivity, based on $q_{\nu}$, works 10 particles away from
$^{152}$Dy.
\subsection{Microscopic polarization effects of one-particle states}
In a similar way as in subsection~3.1 we now calculate the microscopic
quadrupole moments $Q^{mic}=e\sum_{\nu =1}^{Z} q_{\nu}$ in
neighbouring nuclei to superdeformed $^{152}$Dy and plot $q_{eff}$
vs. $q_{\nu}$, see the solid lines in Fig.~6.
\begin{figure}[htb]
\psfig{figure=fig6.ps,height=8cm}
\caption{Changes in the total microscopic quadrupole moment, $q_{eff}$,
when a particle is added to or removed from the superdeformed core
plotted vs. the single-particle mass quadrupole moment, $q_{\nu}$.
The circles and crosses are obtained from calculations in the
Strutinsky-renormalized cranked MO with simultaneous
energy-minimization in the $\varepsilon$ and $\varepsilon_4$ directions
using $^{152}$Dy as a core (values are given in Table~4). The solid
lines are quadratic least square fits to these points. The dashed
lines are obtained from analogous fits for the HO potential with the
($Z=60, N=80$) core, as defined in Fig.~7.}
\label{fig6}
\end{figure}
Using the full MO, the relations are no longer approximately linear
but rather quadratic,
\begin{equation}
q_{eff}=(-0.008q_{\nu}^2+2.28q_{\nu}-23.3)\; e{\rm fm}^2
\label{micpmo}
\end{equation}
for protons and
\begin{equation}
q_{eff}=(-0.009q_{\nu}^2+1.61q_{\nu}-30.0)\; e{\rm fm}^2
\label{micnmo}
\end{equation}
for neutrons. This might suggest that it would be more proper to
express $q_{eff}$ not only as a function of the single-particle
quadrupole moment, but also of the hexadecapole moment. However, as
found below, the relations eqs.~(\ref{micpmo}) and (\ref{micnmo}) seem to
work well in the limited region of superdeformed nuclei with
$A=142-152$, so at present we will make no attempt to generalize eqs.\
(\ref{micpmo}) and (\ref{micnmo}).
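As a small numerical sketch (ours, not part of the original calculation), the quadratic fits of eqs.~(\ref{micpmo}) and (\ref{micnmo}) can be evaluated as:

```python
def q_eff_mo(q_nu, nucleon="proton"):
    """Effective quadrupole moment (e fm^2) from the quadratic MO fits,
    eqs. (micpmo)/(micnmo); q_nu is the single-particle mass quadrupole
    moment in fm^2."""
    a, b, c = (-0.008, 2.28, -23.3) if nucleon == "proton" else (-0.009, 1.61, -30.0)
    return a * q_nu**2 + b * q_nu + c

print(q_eff_mo(0.0))             # -23.3
print(q_eff_mo(0.0, "neutron"))  # -30.0
```

Note that the small negative quadratic coefficients mean the polarization per unit $q_{\nu}$ slowly weakens for the largest single-particle moments.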
As in the macroscopic case, we now analyze why the microscopic
relations are so different from what was found in the $Z=N$ HO
calculations by performing calculations in which the terms of the full
MO beyond the HO are introduced one at a time.
First, allowing a different number of protons and neutrons and
calculating also for non-selfconsistent gaps gives linear relations with changes
in the slopes in accordance with eq.~(\ref{scaling}). Introducing the
Strutinsky renormalization and cranking only has a minor effect on the
$q_{eff}$ vs. $q_{\nu}$ relations.
The result of including hexadecapole deformation $\varepsilon_4$
in the HO is shown in Fig.~7.
\begin{figure}[bt]
\psfig{figure=fig7.ps,height=8cm}
\caption{Changes in the total microscopic quadrupole moment, $q_{eff}$, when
a particle is added to or removed from the $Z=60$ and $N=80$ core
plotted vs. the single-particle mass quadrupole moment, $q_{\nu}$.
The curves are least-squares fits to the
Strutinsky-renormalized cranked HO calculations. The dashed lines show the
relation when the energy is minimized only in $\varepsilon$ deformation
with $\varepsilon_4=0$. The solid lines (corresponding proton and neutron data
marked with circles and crosses, respectively) show the relation when
the energy is simultaneously minimized in the $\varepsilon$ and
$\varepsilon_4$ directions.}
\label{fig7}
\end{figure}
As in the macroscopic case (see Fig.~5), the hexadecapole deformation
introduces a curvature, which is, however, more than twice as large in this
microscopic case. In Fig.~6 the MO result is compared to the HO
result, both including the hexadecapole deformation. The two curves
are seen to be rather similar, although the stronger polarization
effect for the MO than for the HO, discussed above for the macroscopic
case, can be seen also in the microscopic case. Furthermore, there is a small
increase of the curvature of the $q_{eff}$ vs.\ $q_{\nu}$ relation.
Why the introduction of the $\vec{l}\cdot \vec{s}$ and
$l^2$-terms increases the curvature in the microscopic case (Fig.~6) but
removes it in the macroscopic case (Fig.~4) is not understood.
\subsection{Additivity of microscopic effective quadrupole moments}
The additivity in the microscopic case is checked, see Table~3, in the
same way as in the macroscopic case, i.e.\ the electric quadrupole
moment is calculated in three different ways: $Q_{exact}$,
$Q_{est}(q_{eff})$, and $Q_{est}(q_{\nu})$, as described in
subsection~3.2.
The result of this test is that the additivity seems to work
with a similar accuracy in this model as in the macroscopic approach.
This is illustrated in Fig.~8 where
\begin{figure}[htb]
\psfig{figure=fig8.ps,height=8cm}
\caption{The difference between the calculated and estimated
(based on $q_{\nu}$) quadrupole moment as a function of number of
holes relative to the superdeformed $^{152}$Dy core. The upper panel
shows the macroscopic result from Table~2 while the lower panel shows
the corresponding microscopic result from Table~3.}
\label{fig8}
\end{figure}
the difference between the calculated, $Q_{exact}$, and the single
particle estimated quadrupole moments, $Q_{est}(q_{\nu})$, is plotted
as a function of number of holes relative to the superdeformed
$^{152}$Dy core.
For the three configurations where the quadrupole moment has been
estimated based on both $q_{eff}$ and $q_{\nu}$, the maximum deviation
is 7 $e$fm$^{2}$, see Table~3. The rms value for
$Q_{exact}-Q_{est}(q_{\nu})$ is 29 $e$fm$^{2}$ for the eight
configurations considered in Table~3, while the maximum deviation is
53 $e$fm$^{2}$. In the microscopic case, summing effective quadrupole
moments, calculated from eqs.~(\ref{micpmo}, \ref{micnmo}), describes
the two $^{142}$Sm configurations with a 4 \%
accuracy.
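The quoted accuracy figures can be reproduced directly from the last column of Table~3 (a sketch with the deviations transcribed from the table):

```python
import math

# Q_exact - Q_est(q_nu) for the eight configurations of Table 3 (e fm^2)
deviations = [-7, 9, 15, 8, 29, 44, 53, 28]

rms = math.sqrt(sum(d * d for d in deviations) / len(deviations))
max_dev = max(abs(d) for d in deviations)
print(round(rms), max_dev)  # 29 53, as quoted in the text
```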
\renewcommand{\arraystretch}{1.7}
\begin{table}[htb]
\caption{Microscopic quadrupole moments calculated for SD configurations
in selected nuclei with one or several holes relative to the $^{152}$Dy
reference nucleus.}
\label{tab3}\hspace{-0.1cm}
\par
\begin{tabular}{|clcccc|}
\hline
nucleus & configuration relative & $Q_{exact}$ &
$Q_{est}(q_{eff})$
& $Q_{est}(q_{\nu})$ & $Q_{exact} - Q_{est}(q_{\nu})$\\
& SD $^{152}$Dy yrast & ($e$fm$^{2}$) &
($e$fm$^{2}$) & ($e$fm$^{2}$) & ($e$fm$^{2}$)\\
\hline
$^{152}$Dy & & 1810 & & & \\
$^{150}$Dy & $\nu 7^{-2}$ & 1722 & 1725 & 1729 & $-7$ \\
$^{150}$Gd & $\pi 6^{-2}$ & 1607 & 1605 & 1598 & 9 \\
$^{148}$Dy & $\nu 6^{-4}$ & 1675 & -- & 1660 & 15 \\
$^{148}$Gd & $\pi 6^{-2}\nu 7^{-2}$ & 1528 & 1520 & 1517 & 8 \\
$^{144}$Gd & $\pi 6^{-2}\nu 7^{-2}6^{-4}$ & 1396 & -- & 1367 & 29\\
$^{143}$Eu & $\pi 6^{-3}\nu 7^{-2}6^{-4}$ & 1303 & -- & 1259 & 44\\
$^{142}$Sm & $\pi 6^{-3}([541]1/2)^{-1}\nu 7^{-2}6^{-4}$ & 1213 & -- & 1160
& 53\\
$^{142}$Sm & $\begin{array}{l} \pi 6^{-2}([541]1/2)^{-1}3^{-1} \\
\nu 7^{-1}6^{-4}([411]1/2)^{-1} \end{array}$
& 1395 & -- & 1367 & 28 \\ \hline
\end{tabular}
\end{table}
\subsection{Comparison between macroscopic, microscopic and experimental
effective quadrupole moments}
The two models studied give quite simple relations (eqs.~(\ref{macp},
\ref{macn}) and eqs.~(\ref{micpmo}, \ref{micnmo})) between $q_{eff}$ and
$q_{v}$, but their predictions for specific orbitals are rather
different as can be seen in Fig.~9.
\begin{figure}[htb]
\psfig{figure=fig9.ps,height=8cm}
\caption{Effective electric quadrupole moments versus
single-particle mass quadrupole moments calculated in the MO for one
particle/hole outside the superdeformed $^{152}$Dy core. Dashed lines are
used for the macroscopic and solid lines for the microscopic
method. It is clear that for specific combinations of protons and
neutrons removed from the $^{152}$Dy core, the two methods could lead
to rather different results.}
\label{fig9}
\end{figure}
In the macroscopic relations neutrons and protons with high
$q_{\nu}$-value have similar effects, while the microscopic relations
give very different values for protons and neutrons, with the exception
of the lowest $q_{\nu}$-values. In the microscopic case for neutrons
the maximum $q_{eff}$ is not obtained from the maximum $q_{\nu}$. In
Table~4 $q_{\nu}$ and $q_{eff}$-values are given for several different
orbitals close to the Fermi-surface in $^{152}$Dy. The
$q_{eff}$-values are shown both for the macroscopic and the
microscopic case. For comparison values from Skyrme-Hartree-Fock
calculations are also presented. By using these effective quadrupole
moments together with eq.~(\ref{Qest}) (with the scalings discussed
below) electric quadrupole moments can be estimated for a large number
of superdeformed configurations in a rather large region around
$^{152}$Dy. For some orbitals only $q_{\nu}$-values are
given. From these one can estimate effective electric quadrupole
moments by using eqs.~(\ref{macp},\ref{macn}) and
eqs.~(\ref{micpmo},\ref{micnmo}), and through eq.~(\ref{Qest}) get
good estimates of electric quadrupole moments for many more
configurations in this region.
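As an arithmetic illustration of such an estimate (a sketch using values transcribed from Tables~3 and 4; the orbital labels in the dictionary are ours):

```python
# Additivity estimate for SD 148Gd (pi 6^-2 nu 7^-2) from the 152Dy core,
# subtracting the microscopic q_eff (e fm^2) of the four removed orbitals.
Q_core = 1810.0  # Q_exact for SD 152Dy (Table 3)

q_eff_holes = {
    "pi[651]3/2-": 106.9, "pi[651]3/2+": 98.5,  # the two pi N=6 holes
    "nu[770]1/2-": 42.7,  "nu[770]1/2+": 42.3,  # the two nu N=7 holes
}

Q_est = Q_core - sum(q_eff_holes.values())
print(round(Q_est))  # 1520, cf. Q_est(q_eff) = 1520 for 148Gd in Table 3
```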
\renewcommand{\arraystretch}{1.2}
\begin{table}[htb]
\caption{ }
\setbox1=
\vbox{
Effective quadrupole moments calculated for orbitals in the
SD $A=150$ region. The macroscopic and microscopic values, calculated
in $^{152}$Dy $\pm$ 1 particle, are the ones used when producing
Fig.~4 and Fig.~6. For other orbitals only $q_{\nu}$-values are given. They
can be used together with the relations eqs.~(\ref{macp},
\ref{macn}) and eqs.~(\ref{micpmo}, \ref{micnmo}) to estimate macroscopic and
microscopic effective quadrupole moments. The Skyrme-Hartree-Fock
calculations are from ref.~\cite{Sat96}.}
\label{tab4}\hspace{-0.1cm}
\par \scriptsize
\setbox2=
\hbox{
\begin{tabular}{|cccccc|cccccc|}
\hline
orbital & $q_{\nu}$ & $q^{mac}_{eff}$ & $q^{mic}_{eff}$
& $q^{SkP}_{eff}$ & $q^{SkM^{*}}_{eff}$ &
orbital & $q_{\nu}$ & $q^{mac}_{eff}$ & $q^{mic}_{eff}$
& $q^{SkP}_{eff}$ & $q^{SkM^{*}}_{eff}$ \\
& ($e$fm$^{2}$) & ($e$fm$^{2}$) & ($e$fm$^{2}$) & ($e$fm$^{2}$) &
($e$fm$^{2}$) &
& ($e$fm$^{2}$) & ($e$fm$^{2}$) & ($e$fm$^{2}$) & ($e$fm$^{2}$) &
($e$fm$^{2}$) \\ \hline
\multicolumn{6}{|c|}{proton holes} &
\multicolumn{6}{c|}{neutron holes} \\
$\pi ([301]1/2^-)^{-1}$ & -7.8 & -5.7 & -44.8 & -15 & -18 &
$\nu ([404]9/2^-)^{-1}$ & -12.3 & -66.5 & -54.0 & & \\
$\pi ([301]1/2^+)^{-1}$ & -8.6& -5.5 & -46.1 & -18 & -16 &
$\nu ([404]9/2^+)^{-1}$ & -12.3 & -66.8 & -54.3 & & \\
$\pi ([413]7/2^-)^{-1}$ & 8.3 & 13.6 & -3.0 & & &
$\nu ([523]7/2^-)^{-1}$ & 23.0 & -8.5 & 7.9 & & \\
$\pi ([413]7/2^+)^{-1}$ & 8.3 & 13.0 & -3.3 & & &
$\nu ([523]7/2^+)^{-1}$ & 23.0 & -9.3 & 7.3 & & \\
$\pi ([532]5/2^-)^{-1}$ & 46.7 & 54.7 & 63.2 & & &
$\nu ([642]5/2^-)^{-1 (a)}$ & 58.0 & 32.6 & 29.2 & 22 & 22 \\
$\pi ([532]5/2^+)^{-1}$ & 45.9 & 52.3 & 62.4 & & &
$\nu ([642]5/2^+)^{-1 (a)}$ & 70.6 & 61.7 & 36.6 & 24 & 24 \\
$\pi ([651]3/2^-)^{-1}$ & 83.7 & 81.8 & 106.9 & 96 & 96 &
$\nu ([770]1/2^-)^{-1}$ & 102.3 & 88.3 & 42.7 & 59 & 48 \\
$\pi ([651]3/2^+)^{-1}$ & 74.6 & 78.5 & 98.5 & 89 & 88 &
$\nu ([770]1/2^+)^{-1}$ & 102.6 & 82.6 & 42.3 & 57 & 48 \\
$\pi ([541]1/2^-)^{-1}$ & 71.6 & & & & &
$\nu ([411]1/2^-)^{-1}$ & 8.7 & & & 18 & 16 \\
$\pi ([541]1/2^+)^{-1 }$ & 68.3 & & & & &
$\nu ([411]1/2^+)^{-1}$ & 9.4 & & & 15 & 13 \\
$\pi ([660]1/2^-)^{-1}$ & 81.4 & & & & &
$\nu ([651]1/2^-)^{-1 (a)}$ & 78.5 & & & 43 & 28 \\
$\pi ([660]1/2^+)^{-1}$ & 84.6 & & & & &
$\nu ([651]1/2^+)^{-1 (a)}$ & 66.1 & & & 43 & 30 \\ \hline
\multicolumn{6}{|c|}{proton particles} &
\multicolumn{6}{c|}{neutron particles} \\
$\pi ([413]5/2^-)$ & 9.6 & 16.7 & 0.6 & & &
$\nu ([402]5/2^-)$ & -12.0 & -60.4 & -50.2 & -44 & -38 \\
$\pi ([413]5/2^+)$ & 9.6 & 16.7 & 10.6 & & &
$\nu ([402]5/2^+)$ & -12.0 & -60.4 & -50.2 & -44 & -38 \\
$\pi ([532]3/2^-)$ & 48.8 & 61.5 & 70.2 & & &
$\nu ([521]3/2^-)$ & 24.9 & -5.0 & 7.4 & 0 & -1 \\
$\pi ([532]3/2^+)$ & 49.9 & 60.5 & 71.7 & & &
$\nu ([521]3/2^+)$ & 24.9 & -5.0 & 7.4 & 0 & -1 \\
$\pi ([642]5/2^-)$ & 82.4 & 92.9 & 111.5 & & &
$\nu ([640]1/2^-)$ & 44.8 & 5.5 & 18.0 & & \\
$\pi ([642]5/2^+)$ & 60.8 & 64.2 & 83.7 & & &
$\nu ([640]1/2^+)$ & 61.5 & 40.6 & 32.6 & & \\
$\pi ([770]1/2^-)$ & 92.6 & 101.1 & 121.0 & & &
$\nu ([761]3/2^-)$ & 89.2 & 68.4 & 41.5 & 46 & 41 \\
$\pi ([770]1/2^+)$ & 87.9 & 94.1 & 116.4 & & &
$\nu ([761]3/2^+)$ & 95.0 & 72.3 & 41.5 & 41 & 28 \\ \hline
\end{tabular}}
\normalsize
\setbox3=
\vbox{
(a) The orbitals $\nu ([642]5/2)$ and $\nu ([651]1/2)$ are mixed in
the Dy-region and are not pure in $^{152}$Dy. If neither or both orbitals
of the same parity are unoccupied, the quadrupole moment can be correctly
calculated from these values; otherwise extra care should be taken. The
labels are valid for the Skyrme-Hartree-Fock values.}
\centerline{\rotl{1} \rotl{2} \rotl{3}}
\end{table}
\begin{table}[htb]
\caption{Macroscopic and microscopic quadrupole moments are compared with
experiments. The holes are specified relative to the reference,
superdeformed $^{152}$Dy yrast configuration.}
\label{tab5}\hspace{-0.1cm}
\par
\begin{tabular}{|clccccc|}
\hline
nucleus & configuration relative & $Q^{exp}$ &
$0.92 Q^{mac}$ & $0.96 Q^{mic}$ & $Q^{exp}-0.92 Q^{mac}$ &
$Q^{exp}-0.96 Q^{mic}$\\
& SD $^{152}$Dy yrast & ($e$b) & ($e$b) & ($e$b) &
($e$b) & ($e$b) \\
\hline
$^{152}$Dy & & 17.5$^{a)}$ & 17.4 & 17.4 & 0.1 & 0.1\\
$^{151}$Dy & $\nu 7^{-1}$ & 16.9$^{b)}$ & 16.6 & 17.0 & 0.3 & $-0.1$\\
$^{151}$Tb & $\pi 6^{-1}$ & 16.8$^{a)}$ & 16.6 & 16.3 & 0.2 & 0.5 \\
$^{149}$Gd & $\pi 6^{-2}\nu 7^{-1}$ & 15.0$^{a)}$ & 15.2 & 15.1 & $-0.2$
& $-0.1$\\
$^{149}$Gd & $\pi 6^{-2}\nu 6^{-1}$ & 15.6$^{a)}$ & 15.4 & 15.1 & 0.2 & 0.5\\
$^{149}$Gd & $\nu 6^{-1}3^{-1} \nu 7^{-1}$ & 15.2$^{a)}$ & 16.0 & 16.2
& $-0.8$ & $-1.0$\\
$^{149}$Gd$^{d)}$ & $\pi 3^{-2}\nu 4^{-1}$ & 17.5$^{a)}$ & 17.7 & 18.3
& $-0.2$ &
$-0.8$ \\
$^{148}$Gd & $\pi 6^{-2}\nu 7^{-1}6^{-1}$ & 14.6$^{a)}$ & 14.7 & 14.7
& $-0.1$ & $-0.1$ \\
$^{148}$Gd & $\pi 6^{-2}\nu 7^{-1}6^{-1}$ & 14.8$^{a)}$ & 14.7 & 14.7
& 0.1 & 0.1 \\
$^{148}$Gd$^{d)}$ & $\pi 3^{-2}\nu 4^{-2}$ & 17.8$^{a)}$ & 18.0 & 18.5 &
$-0.2$ & $-0.7$ \\
$^{142}$Sm & $\pi 6^{-3}5^{-1}\nu 7^{-2}6^{-4}$ & 11.7$^{c)}$ & 11.3 & 11.6
& 0.4 & 0.1\\
$^{142}$Sm$^{e)}$ & $\pi 6^{-2}5^{-1}3^{-1}\nu 7^{-1}6^{-4}4^{-1}$
& 13.2$^{c)}$ & 13.1 & 13.4 & 0.1 & $-0.2$ \\
\hline
\end{tabular}
The experimental data are from $^{a)}$ ref.~\cite{Sav96}, $^{b)}$ ref.\
\cite{Nis97}, $^{c)}$ ref.~\cite{Hac98}. In the configurations marked
$^{d)}$ the hole is forced to the $\nu ([411]1/2)^{-1}$ orbit in order not
to mix with $\nu ([404]9/2)^{-1}$ orbit while in the configuration
marked $^{e)}$ the hole naturally comes in the $\nu ([411]1/2)^{-1}$ orbit.
\end{table}
In the comparison with experiments, see Table~5, the calculated
quadrupole moments are scaled with factors to give approximately the
same value for $^{152}$Dy as the experimental data. This is partly
motivated by the uncertainties in the absolute values obtained in
experiment, due to the uncertainties in stopping powers. Also in
Woods-Saxon, Hartree-Fock with Skyrme force, and cranked relativistic
mean field calculations \cite{Naz89,Sat96,Afa96} the values are
systematically higher than in experiment. We see that the macroscopic
method reproduces the relative changes with somewhat better accuracy
in this region. The rms-values between the experimental and scaled
theoretical quadrupole moments are 31 $e$fm$^2$ and 48 $e$fm$^2$ for
macroscopic and microscopic models, respectively. The configuration
with the largest discrepancy, $\nu 6^{-1}3^{-1} \nu 7^{-1}$ in
$^{149}$Gd, has the largest discrepancy also in the
Skyrme-Hartree-Fock calculation \cite{Sat96} and the error is
almost the same.
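The quoted rms values can be checked against the last two columns of Table~5 (a sketch; the deviations are transcribed from the table, in $e$b, with 1~$e$b $=100$~$e$fm$^2$):

```python
import math

# Q_exp - 0.92 Q_mac and Q_exp - 0.96 Q_mic, last two columns of Table 5 (e b)
d_mac = [0.1, 0.3, 0.2, -0.2, 0.2, -0.8, -0.2, -0.1, 0.1, -0.2, 0.4, 0.1]
d_mic = [0.1, -0.1, 0.5, -0.1, 0.5, -1.0, -0.8, -0.1, 0.1, -0.7, 0.1, -0.2]

# rms converted to e fm^2
rms = lambda d: 100.0 * math.sqrt(sum(x * x for x in d) / len(d))
print(round(rms(d_mac)), round(rms(d_mic)))  # 31 48
```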
It is also interesting to note that the contributions to the change of
the quadrupole moment coming from protons and neutrons are very different
in the two approaches. In the macroscopic model 48 \%
of the total change in quadrupole moment from the superdeformed yrast
band in $^{152}$Dy to the excited band ($\pi6^{-3}5^{-1}\nu 7^{-2}6^{-4}$)
in $^{142}$Sm comes from adding $q_{eff}$ for the removed protons,
while the corresponding number is 64 \%
in the microscopic model.
In mean-field calculations based on e.g.\ the modified oscillator
or Woods-Saxon potential, the proton and neutron
deformations are required to be exactly the same, i.e.\ the proton and neutron
single-particle potentials are defined for the same deformation
parameters. In the macroscopic method of calculating quadrupole
moments, the matter distribution is then also assumed to have the same
deformation. As discussed in subsect.~2.2 for the HO, this explains why
for a $Z=N$ nucleus $\left( b^{mac}\right) _p =\left(
b^{mac}\right) _n \approx 1$, while the microscopic approach leads to
different deformations and thus to $\left( b^{mic}\right) _p =1.5$
and $\left( b^{mic}\right) _n=0.5$. Then, as seen in Fig.~9, these
expressions are modified by different factors but general features are
still the same in Nilsson-Strutinsky-cranking MO calculations for
$Z<N$ nuclei. The numbers in Table~5 rather support the macroscopic
formula, e.g.\ when comparing $^{152}$Dy and $^{149}$Gd configurations
with configurations in $^{151}$Dy and $^{148}$Gd which differ by one
$N=7$ neutron. This $N=7$ neutron appears to have a large influence on
the measured quadrupole moments in somewhat closer agreement with the
macroscopic than the microscopic calculations. On the other hand,
measured quadrupole moments in $^{131,132}$Ce \cite{Cla96} indicate a
very small polarization for an $N=6$ neutron in this region, even
smaller than suggested by our microscopic calculations. We conclude that
more high-accuracy experimental data, combined with comparisons with
selfconsistent calculations, are required to disentangle the
polarization properties of protons and neutrons, respectively.
\section{Summary}
We have investigated the polarization effects of a particle on a
well-deformed core in the harmonic oscillator (HO) potential as well
as in the modified oscillator (MO) potential. Two different ways to
calculate the quadrupole moment (and thereby the polarization effect)
were considered. In the microscopic approach the electric
single-particle quadrupole moments are summed at the appropriate
equilibrium deformations, while in the macroscopic approach the
quadrupole moment is calculated by considering the nuclear charge as
uniformly distributed over its volume, again at the appropriate
equilibrium deformation. Averaging over protons and neutrons, the two
models were found to give similar results even though the individual
proton and neutron contributions turned out to be rather different.
In the pure HO model, it was found for a $Z=N$ system that the change
of the electric quadrupole moment when a particle (or hole) is added,
$q_{eff}$, can be described by a simple linear relation in the
single-particle mass quadrupole moment, $q_{\nu}$:
$q_{eff}=e(bq_{\nu}+a)$. Analytical expressions were derived for the
deformation and mass dependence of the parameters $a$ and $b$. It
turned out that in the microscopic model, $b=1.5$ for protons and
$b=0.5$ for neutrons while in the macroscopic model, $b$ showed some
variation but was close to one for all deformations and particle
numbers, for both protons and neutrons. These differences were
explained from the way the proton and neutron matter distributions are
assumed to adjust to each other when the equilibrium deformation of
the individual systems are different. Allowing $Z \neq N$, neutron
excess led to a decrease of the $b$-values, especially for protons.
In the macroscopic case, $a$ was essentially equal to zero for
protons. In the other three cases, it was negative for prolate shapes
and positive for oblate shapes, and scaled with mass $A$ approximately
in the same way as the single-particle quadrupole moment, i.e.\
proportional to $A^{2/3}$. The fact that the parameter $a$ is
positive for oblate shapes and negative for prolate shapes is easily
understood. The quadrupole moment of the added particle must overcome
some value in order to increase the core deformation, and this value
is obviously positive for prolate shapes, negative for oblate shapes
and zero for spherical shapes.
In the Nilsson-Strutinsky cranking calculations we used the MO
potential as the microscopic potential, and calculated effective
quadrupole moments around the superdeformed core of $^{152}$Dy. Both
the macroscopic approach and the microscopic approach were used. From
a basic point of view the microscopic way of calculating quadrupole
moments appears most reasonable. On the other hand the macroscopic
approach has been used frequently in previous realistic calculations
and been found to work well. In the macroscopic model numerical
calculations indicated that the linear relation between $q_{eff}$ and
$q_{\nu}$ was valid, and the polarization factors were then numerically
obtained as $b_p=1.02$, $b_n=1.30$, $a_p=5.4$ fm$^{2}$ and $a_n=-43.7$
fm$^{2}$. These numbers were different from the factors deduced from
the pure HO in the same macroscopic way for a $Z=66$, $N=86$
superdeformed nucleus ($b_p=0.78$ and $b_n=0.93$). The reason for
this deviation was explained as being due to the decreased stiffness
of the potential energy surface around the minimum for the MO. In the
microscopic model there appeared to be a stronger dependence on
hexadecapole deformation which led to a need for quadratic
relations.
Additivity of effective quadrupole moments in superdeformed nuclei was
investigated and found to work surprisingly well.
Adding $q_{eff}$-values, calculated from one-hole and one-particle states
outside a superdeformed core of $^{152}$Dy, quadrupole moments could
be well described in an extensive region of superdeformed nuclei.
Similar results using the Skyrme-Hartree-Fock method were previously
obtained by Satu{\l}a {\it et al}.\ \cite{Sat96}.
Furthermore, using the simple relations for $q_{eff}$ as a function of
$q_{\nu}$, quadrupole moments could be estimated in an even larger
region using only the total quadrupole moment of the core $^{152}$Dy
together with $q_{\nu}$-values for the active single-particle
orbitals as input. For example, in the macroscopic case, the 10-hole
configurations describing two observed superdeformed bands in $^{142}$Sm
were both estimated within a 2 \%
accuracy relative to the values obtained from a full calculation for
these bands. In the microscopic case the additivity worked with a
somewhat smaller accuracy and we obtained a 4 \%
accuracy for the two $^{142}$Sm bands.
From the (bare) single-particle quadrupole moments given in Table~4 it
should be possible to estimate total electric quadrupole moments with
reasonable accuracy for configurations in a quite extended region of
superdeformed $A \sim 150$ nuclei.
The experimental data are well reproduced by the theoretical
calculations, with somewhat smaller discrepancies for the macroscopic
method, see also refs.~\cite{Sav96,Nis97,Hac98}.
The surprising accuracy of the additivity suggests the possibility of
a shell-model type description of superdeformed nuclei, utilizing a
superdeformed core and a valence space consisting of superdeformed
one-particle (one-hole) states.
\vspace{1cm}
\noindent We are grateful to A.V.\ Afanasjev for useful comments on this
manuscript. I.R.\ and S.{\AA}.\ thank the Swedish National Research
Council (NFR) for financial support.
\newpage
\section{Introduction and Discussion}
Perturbation theory anomalies have been known for a long time, starting
with the work by Bell and Jackiw \cite{bell} and by Adler \cite{adler}
concerning anomalies in gauge theories. The fact that there are no radiative
corrections to the one loop result for the anomaly has been countlessly
proven or brought into question since the early work of Adler and
Bardeen \cite{adbard}. For this reason, explicit calculations of possible
radiative corrections to the one loop anomaly are of particular interest.
Using a method proposed by Baker and Johnson \cite{kj1}, Erlich and Freedman
recently performed such an explicit calculation for the two loop
contribution of the anomalous correlation function
$\langle A_\mu (x) A_\nu (y) A_\rho (z) \rangle$ of three chiral currents,
in the Abelian Higgs model and in the Standard Model \cite{JF}. Here, we
wish to extend that calculation to the case of a gravitational background.
The Adler-Bell-Jackiw anomaly concerns the divergence of the axial
current in a gauge field background. The calculation of the divergence of
the axial current in a gravitational field background was later performed
by Delbourgo and Salam \cite{del1}, Eguchi and Freund \cite{eguchi} and
Delbourgo \cite{del2}. As in the ABJ case, these authors found an anomaly
associated to the conservation of the axial current, the gravitational
axial anomaly. Later, Alvarez-Gaum\'e and Witten showed the
significance of gravitational anomalies for a wide variety of physical
applications \cite{witt}.
The question of absence of radiative corrections to the one loop result
obtained for the gravitational axial anomaly is an issue
not as well established as it is in the gauge theory case. This
is the reason why we proceed to perform an explicit two loop calculation,
adopting the spirit in \cite{JF}. However, calculating the two loop
contribution to the gravitational axial anomaly is a much longer
task than for the gauge axial anomaly. In this paper we shall
address the first part of the computation, by calculating the abnormal
parity part of the three point function involving one axial vector and
two energy-momentum tensors at a specific two loop order in the Abelian
Higgs model. The reason we choose to work in this model is due to the
recent interest arising from the gauge anomaly case in \cite{JF}, and
also due to the fact that this model is a simplified version of the
Standard Model. In order to set notation, the anomalous correlator we
shall be dealing with is:
$$
\langle A_\alpha(z) T_{\mu\nu}(y) T_{\rho\sigma}(x) \rangle,
\eqno(1.1)
$$
where $A_\alpha$ is the axial current and $T_{\mu\nu}$ the energy-momentum
tensor.
The method of calculation \cite{kj1,JF} is based on conformal properties
of massless field theories, and also involves ideas from the coordinate
space method of differential regularization due to Freedman, Johnson and
Latorre \cite{kj2}. In particular, the correlator (1.1) will be directly
calculated in Euclidean position space and a change of variables suggested
by the conformal properties of the correlator will be used in order to
simplify the internal integrations. The two loop order at which we shall
be working involves no internal photons, but only internal matter
fields (the scalar and spinor fields in the Abelian Higgs model). However,
in this case diagrams containing vertex and self-energy corrections will
require a regularization scale. To handle this technicality we shall
introduce photons in our calculation, as there is a unique choice of gauge
fixing parameter (in the photon propagator) which makes both the self-energies
and vertex corrections finite. These ``finite gauge photons'' are merely
a technical tool employed in the calculation.
The use of conformal symmetry to construct three point functions is
well established. Of particular interest to us is the work by Schreier
\cite{schreier}, where three point functions invariant under conformal
transformations were constructed. For the case of one axial and two vector
currents, it was shown that there is a unique conformal tensor present in the
three point function. More recently, Osborn and Petkos \cite{osborn1} and
Erdmenger and Osborn \cite{osborn2} have used conformal invariance to
compute several three point functions involving the energy-momentum tensor.
However, the case of one axial current and two energy-momentum tensors
was not considered.
What we find here is that, even though at one loop there is only one
conformal tensor present in the correlator (1.1) -- the one that leads to
the contraction of the Riemann tensor with its dual in the expression for
the anomaly --, at two loops there are two independent conformal tensors
present in the correlator. This is unlike the gauge axial anomaly case
where the only possible tensor is the one that leads to the field strength
contracted with its dual in the anomaly equation. Precisely because
of the presence of these two tensors in the two loop result for the
three point function, this correlator does not vanish. Again, this is
unlike the gauge axial anomaly case \cite{JF}.
The two linearly independent conformal tensors present in the anomalous
correlator are the ones in expressions (3.6) and (3.7) below (where the
notation is explained in the paragraphs leading up to these formulas). One
thing we would like to stress is that {\it every} diagram relevant for our
calculation is either a multiple of one of these tensors, or a linear
combination of them both.
Two comments are in order. First, the existence of two independent tensors
in the two loop correlator could seem to indicate the existence of a
radiative correction to the anomaly. Second, however, the fact that the
correlator does not vanish at two loops does not mean that its divergence
(the anomaly) does not vanish at two loops.
Another point of interest is to follow \cite{kj2} and study the differential
regularization of the one loop triangle diagram associated to the
gravitational axial anomaly, Figure 1(a). This is done in the Appendix.
What one finds is that differential regularization entails the introduction
of several different mass scales. Renormalization or symmetry conditions
may then be used to determine the ratios of these mass scales. In the
gauge axial anomaly case it was found that there is only one mass ratio
\cite{kj2}. In this gravitational axial anomaly case, we have shown in the
Appendix that there is more than one mass ratio. This multiplicity of
mass ratios introduces new parameters that could cancel all
potential (new) anomalies. Apart from presenting some of these different
scales we shall not proceed with their study. Here, we shall restrict
ourselves to the calculation of the correlation function, which by itself
constitutes a lengthy project. Extracting the two loop contribution to the
gravitational axial anomaly from our three point function is a question we
hope to report on in the near future.
The structure of this paper is as follows. In section 2 we present the
massless Abelian Higgs model, as well as a review of the basic ideas
involved in the method of calculation we use. This includes the calculation
of the one loop triangle diagram. Then, in section 3 we perform our
two loop calculation, with emphasis on rigorous details. The many
contributing diagrams are organized into separate groups, and then
analyzed one at a time.
\section{The Abelian Higgs Model and Conformal Symmetry}
We shall start by presenting the massless Abelian Higgs model. In four
dimensional Euclidean space, its action is given by:
$$
S=\int d^4x \, \Bigl\{
{1\over4}F_{\mu\nu}F_{\mu\nu}+(D_\mu\phi)^\dagger D_\mu\phi+
{\bar{\psi}}\gamma_\mu D_\mu\psi-f{\bar{\psi}}(L\phi+R\phi^\dagger)\psi-
{\lambda\over4}(\phi^\dagger\phi)^2 \Bigr\}, \eqno(2.1)
$$
where we have used $L={1\over2}(1-\gamma_5)$ and $R={1\over2}
(1+\gamma_5)$, with $\gamma_5=\gamma_1\gamma_2\gamma_3\gamma_4$. The
covariant derivatives are:
$$
D_\mu\phi=(\partial_\mu+ig{\cal A}_\mu)\phi,
$$
$$
D_\mu\psi=(\partial_\mu+{1\over2}ig{\cal A}_\mu\gamma_5)\psi, \eqno(2.2)
$$
so that the theory is parity conserving with pure axial gauge coupling.
Next we introduce a background (external) gravitational field, in order
to properly define the energy-momentum tensors associated to the
scalar and spinor matter degrees of freedom. A simple way to do this is
to couple our model to gravity, so that a spacetime metric $g_{\mu \nu}(x)$ is
naturally introduced in the Lagrangian as a field variable. Then we can
obtain the energy-momentum tensor by varying the Lagrangian with respect
to the metric $g_{\mu \nu}(x)$ as $T_{\mu \nu}(x) = 2 {\delta\over\delta
g^{\mu\nu}(x)} \int d^4x\,\sqrt{-g}\,{\cal L}$, where $T_{\mu \nu}(x)$ is
manifestly symmetric. In addition we have to ensure that it is conserved
and traceless, obtaining finally for the fermion field,
$$
T_{\mu\nu}^{\bf F}={\bar{\psi}}\,\gamma_{(\mu}\partial_{\nu)}\,\psi,
\eqno(2.3)
$$
and for the boson field,
$$
T_{\mu\nu}^{\bf B}={2\over3}\Bigl\{\partial_\mu{\phi^\dagger}\,
\partial_\nu\phi+\partial_\nu{\phi^\dagger}\,\partial_\mu\phi-
{1\over2}\delta_{\mu\nu}\partial_\alpha{\phi^\dagger}\,\partial_\alpha\phi-
{1\over2}(\phi\,\partial_\mu\partial_\nu{\phi^\dagger}+{\phi^\dagger}\,
\partial_\mu\partial_\nu\phi)\Bigr\}, \eqno(2.4)
$$
where $(\mu\nu)\equiv\mu\nu+\nu\mu$.
One should observe that in the two loop calculation we are interested in
computing the order ${\cal O}(gf^2k^2)$ correction to the correlator,
where $g$ is the gauge coupling, $f$ the scalar-spinor coupling, and
$k$ the gravitational coupling. This means that there are no internal
photons in the associated diagrams, as these would be of order
${\cal O}(g^3k^2)$ -- we shall only need photons as the external
axial current, and in order to handle some of the potential
divergences in the calculation (see section 3).
This is why in (2.3) and (2.4) the scalar and spinor matter degrees of
freedom are decoupled from the gauge field.
Conformal symmetry plays a central role in our calculations, as it
motivates a change of variables that simplifies the two
loop integrations. Due to the absence of any scale,
our model is conformally invariant. The conformal group of Euclidean field
theory is $O(5,1)$ \cite{osborn1}.
All transformations which are continuously
connected to the identity are obtained via a combination of rotations
and translations with the basic conformal inversion,
$$
x_\mu={x'_\mu\over{x'}^2},
$$
$$
{\partial x_\mu \over \partial x'_\nu}=x^2
(\delta_{\mu\nu}-{2x_\mu x_\nu \over x^2}) \equiv
x^2 J_{\mu\nu}(x). \eqno(2.5)
$$
The Jacobian tensor, $J_{\mu\nu}(x)$, which is an improper orthogonal matrix
satisfying $J_{\mu\nu}(x)=J_{\mu\nu}(x')$, will play a useful role in the
calculation of the coordinate space Feynman diagrams.
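Since the Jacobian tensor recurs throughout the calculation, its stated properties are easy to confirm numerically. The following small sanity check is our addition (not part of the original text), using a random Euclidean four-vector:

```python
import numpy as np

rng = np.random.default_rng(0)

def J(x):
    # J_{mu nu}(x) = delta_{mu nu} - 2 x_mu x_nu / x^2, cf. (2.5)
    return np.eye(4) - 2.0 * np.outer(x, x) / (x @ x)

x = rng.normal(size=4)
xp = x / (x @ x)                     # inverted image, x'_mu = x_mu / x^2

assert np.allclose(J(x) @ J(x).T, np.eye(4))    # orthogonal
assert np.isclose(np.linalg.det(J(x)), -1.0)    # improper: det J = -1
assert np.allclose(J(x), J(xp))                 # J_{mu nu}(x) = J_{mu nu}(x')
```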
The action (2.1) is invariant under conformal inversions, with the fields transforming as \cite{JF}:
$$
\phi(x) \rightarrow \phi'(x)={x'}^2\phi(x'),
$$
$$
\psi(x) \rightarrow \psi'(x)={x'}^2\gamma_5{x\!\!\! /}'\psi(x'),
$$
$$
{\cal A}_\mu(x) \rightarrow {{\cal A}'}_\mu(x)=-{x'}^2J_{\mu\nu}(x')
{\cal A}_\nu(x'), \eqno(2.6)
$$
while also the following relations hold,
$$
d^4x={d^4x' \over {x'}^8} \qquad {\rm and} \qquad
{x\!\!\! /}'\gamma_\mu{x\!\!\! /}'=-{x'}^2J_{\mu\nu}(x')\gamma_\nu.
\eqno(2.7)
$$
In order to use conformal properties to simplify the two loop Feynman
integrals, one should expect that the relevant Feynman rules
will consist of vertex factors and propagators with simple inversion
properties. In particular for the scalar and spinor propagators we have,
$$
\Delta(x-y)={1 \over 4\pi^2}{1 \over (x-y)^2}={1 \over 4\pi^2}
{{x'}^2{y'}^2 \over (x'-y')^2},
$$
$$
S(x-y)=-{\partial\!\!\! /}\Delta(x-y)={1 \over 2\pi^2}{{x\!\!\! /}-{y\!\!\! /}
\over (x-y)^4}=-{1\over 2\pi^2}{x'}^2{y'}^2 {x\!\!\! /}'
{{x\!\!\! /}'-{y\!\!\! /}' \over (x'-y')^4}{y\!\!\! /}'. \eqno(2.8)
$$
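The inversion property of the scalar propagator in (2.8) can likewise be confirmed numerically. This quick check is ours, with the primed points computed from the inversion (2.5) (the overall $1/4\pi^2$ cancels between the two sides):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=4), rng.normal(size=4)
xp, yp = x / (x @ x), y / (y @ y)    # inverted images x', y'

# 1/(x-y)^2  ==  x'^2 y'^2 / (x'-y')^2, cf. (2.8)
lhs = 1.0 / ((x - y) @ (x - y))
rhs = (xp @ xp) * (yp @ yp) / ((xp - yp) @ (xp - yp))
assert np.isclose(lhs, rhs)
```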
The vertex rules, read from the action (2.1), are:
\begin{figure}[ht]
\centerline{
\put(120,185){$=-fL,$}
\put(320,185){$=-fR,$}
\put(140,105){$=-$ $\large{{1\over2}}$ $ig\,\gamma_\alpha\gamma_5\,
\delta^4(z-z_1)\,\delta^4(z-z_2),$}
\put(140,28){$=ig$ $\large{({\partial\over\partial z_2^\alpha}}$ $-$
$\large{{\partial\over\partial z_1^\alpha})}$
$\delta^4(z-z_1)\,\delta^4(z-z_2),$}
\put(420,185){(2.9)}
\put(414,105){(2.10)}
\put(414,28){(2.11)}
\epsfxsize=6in
\epsfysize=3.5in
\epsffile{vert1.eps}
}
\end{figure}
\noindent
where solid lines are fermions, dashed lines are scalars and wavy lines
are gauge fields.
In addition the energy-momentum tensor insertions (2.3) and (2.4) yield
the following vertices:
\begin{figure}[ht]
\centerline{
\put(140,102){$=k\gamma_{(\mu}\delta_{\nu)\alpha}$ $\large{({\partial\over
\partial z_2^\alpha}}$ $-$ $\large{{\partial\over\partial z_1^\alpha})}$
$\delta^4(z-z_1)\,\delta^4(z-z_2),$}
\put(140,31){$=$ $\large{{2\over3}}$ $k$
$\large{({\partial\over\partial z_2^{(\mu}}{\partial\over\partial
z_1^{\nu)}}}$ $-$ $\large{{1\over2}}$ $\delta_{\mu\nu}$ $\large{{\partial
\over\partial z_2^\alpha}{\partial\over\partial z_1^\alpha}}$ $-$
$\large{{1\over2}\,({\partial^2\over\partial z_2^\mu\partial z_2^\nu}}$ $+$
$\large{{\partial^2\over\partial z_1^\mu\partial z_1^\nu}))}\cdot$}
\put(240,0){$\cdot\delta^4(z-z_1)\,\delta^4(z-z_2),$}
\put(430,102){(2.12)}
\put(430,0){(2.13)}
\epsfxsize=6.5in
\epsfysize=2in
\epsffile{vert2.eps}
}
\end{figure}
\noindent
where the double solid lines represent gravitons.
Let us analyze the conformal properties of the graviton vertices. In
order to do that, we attach the vertices (2.12) and (2.13) to
scalar and spinor legs, and use (2.5), (2.7) and (2.8) to obtain,
$$
S(v-x)\,\gamma_{(\mu}\delta_{\nu)\alpha}\,(
{\overrightarrow{\partial}\over\partial x_\alpha}-
{\overleftarrow{\partial}\over\partial x_\alpha})\,S(x-u)=
$$
$$
=-{x'}^8\,J_{\bar{\mu}(\mu}(x')\,J_{\nu)\bar{\nu}}(x')\,
\Bigl\{\,
{v'}^2{v\!\!\! /}'\,S(v'-x')\,\gamma_{\bar{\mu}}\,(
{\overrightarrow{\partial}\over\partial x'_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial x'_{\bar{\nu}}})\,
S(x'-u')\,{u'}^2{u\!\!\! /}'
\,\Bigr\}, \eqno(2.14)
$$
for the fermionic vertex, where the derivatives {\it only} act inside
the curly brackets. Likewise,
$$
\Delta(v-x)\,(
{\overleftarrow{\partial}\over\partial x_{(\mu}}
{\overrightarrow{\partial}\over\partial x_{\nu)}}-{1\over2}\delta_{\mu\nu}\,
{\overleftarrow{\partial}\over\partial x_\alpha}
{\overrightarrow{\partial}\over\partial x_\alpha}-{1\over2}\,(
{\overleftarrow{\partial^2}\over\partial x_\mu\partial x_\nu}+
{\overrightarrow{\partial^2}\over\partial x_\mu\partial x_\nu}))\,
\Delta(x-u)={x'}^8\,J_{\bar{\mu}\mu}(x')\,J_{\nu\bar{\nu}}(x')\cdot
$$
$$
\cdot\Bigl\{ {v'}^2\Delta(v'-x')\,(
{\overleftarrow{\partial}\over\partial x'_{(\bar{\mu}}}
{\overrightarrow{\partial}\over\partial x'_{\bar{\nu})}}-
{1\over2}\delta_{\bar{\mu}\bar{\nu}}\,
{\overleftarrow{\partial}\over\partial x'_\alpha}
{\overrightarrow{\partial}\over\partial x'_\alpha}-{1\over2}\,(
{\overleftarrow{\partial^2}\over\partial x'_{\bar{\mu}}
\partial x'_{\bar{\nu}}}+
{\overrightarrow{\partial^2}\over\partial x'_{\bar{\mu}}
\partial x'_{\bar{\nu}}}))\,
\Delta(x'-u'){u'}^2\Bigr\}, \eqno(2.15)
$$
for the bosonic vertex, where once again the derivatives only act
inside the curly brackets.
As an illustration of these coordinate space propagators and vertex
rules, we shall now look at the one loop triangle diagram and
perform the conformal inversion on the amplitude's tensor structure.
The relevant one loop triangle diagram is depicted in Figure 1(a) and
its amplitude, $B_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$, can be computed
using the previous rules to be:
$$
B_{\alpha, \mu\nu, \rho\sigma}(z,y,x)=
$$
$$
={1\over2}igk^2\,{\bf Tr}\,\gamma_\alpha\gamma_5\,S(z-y)\,
\gamma_{(\mu}\delta_{\nu)\beta}\,(
{\overrightarrow{\partial}\over\partial y_\beta}-
{\overleftarrow{\partial}\over\partial y_\beta})\,S(y-x)\,
\gamma_{(\rho}\delta_{\sigma)\pi}\,(
{\overrightarrow{\partial}\over\partial x_\pi}-
{\overleftarrow{\partial}\over\partial x_\pi})\,S(x-z).
\eqno(2.16)
$$
Due to translation symmetry we are free to set $z=0$,
while we refer the remaining external points $x$ and $y$ to
their inverted images (2.5). Although this transformation may seem
{\it ad hoc} at this stage, it will later simplify the calculation of the
two loop diagrams \cite{JF}. The result we obtain is,
$$
B_{\alpha, \mu\nu, \rho\sigma}(0,y,x)
=-{igk^2\over8\pi^4}{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')J_{\nu)\bar{\nu}}(y')J_{\bar{\rho}(\rho}(x')
J_{\sigma)\bar{\sigma}}(x')\,{\bf Tr}\,\gamma_\alpha\gamma_5
\gamma_{\bar{\mu}}\,
{\partial^2\over\partial y'_{\bar{\nu}}\partial x'_{\bar{\sigma}}}
\,S(y'-x')\,
\gamma_{\bar{\rho}}. \eqno(2.17)
$$
Taking the fermionic trace one finally gets,
$$
B_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,
=\,{igk^2\over4\pi^6}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')\,
\varepsilon_{\alpha\kappa\bar{\mu}\bar{\rho}}\,
{\partial^2\over\partial y'_{\bar{\nu}}\partial x'_{\bar{\sigma}}}
\,{(x'-y')_\kappa\over(x'-y')^4}. \eqno(2.18)
$$
For separated points (2.16) is fully Bose symmetric and conserved on
all indices. The expected anomaly is a local violation of the
conservation Ward identities which arises because the differentiation
of singular functions is involved \cite{JF}. There are several ways
to obtain the anomaly in this coordinate space approach
\cite{kj2,sonoda,JF}. One way \cite{kj2,JF} to do this is to recognize
that the amplitude (2.16) is too singular at short distances to have a
well defined Fourier transform. One then regulates, which entails the
introduction of several independent mass scales. The regulated
amplitude is well defined, and one can check the Ward identities. Also,
an important aspect of this coordinate space approach to the axial
anomaly is that the well defined amplitude (2.16), for separated points,
determines the fact that there is an anomaly of specific strength \cite{kj2}.
Some more comments about the role of the conformal symmetry in the
calculation of possible radiative corrections to the anomaly are
now in order. At first sight this role could seem questionable:
after all, the introduction of a scale
to handle the divergences of perturbation
theory will spoil any expected conformal properties.
This is true in general,
but our two loop triangle diagrams for this massless Abelian Higgs
model are exceptional. Any primitively divergent amplitude is
exceptional when studied in coordinate space for separated points,
since the internal integrals converge without regularization \cite{JF}.
As we shall see in the next section, there will be 3 non-planar and 3 planar
diagrams, which are primitives. Of course there will be many other
diagrams which contain sub-divergent vertex and self-energy corrections,
and these require a regularization scale. However, we are dealing with
pure axial coupling for the fermion. This means that if we introduce
diagrams containing internal photons, then there is a unique
choice of gauge-fixing parameter $\Gamma$ which makes the one loop
self-energy finite \cite{JF}. Moreover, since the vertex and
self-energy corrections are related by a Ward identity, each vertex
correction is also finite in this same gauge. In conclusion, choosing
this finite gauge makes it possible to obtain a finite two loop
result in our calculation.
\section{The Three Point Function for the Two Loop Gravitational Axial Anomaly}
Let us now proceed to the next loop order in the non-gauge sector, as we are
interested in computing possible corrections to the gravitational axial
anomaly at order ${\cal O}(gf^2k^2)$. At this order we have a total
of 36 diagrams that can possibly contribute. Of these diagrams, 3 are
non-planar, but they actually only correspond to 2 independent
calculations due to reflection symmetry. These are depicted in
Figures 1(b) and 1(c). Then, there are 3 scalar self-energy
diagrams, and another 3 photon self-energy diagrams (as we shall
see, some diagrams involving photons are required in order
to choose the finite gauge and compensate some divergences of the non-gauge
amplitudes). These 6 self-energy diagrams amount to 2 independent
calculations alone, the ones depicted in Figures 1(d) and 1(e). Then, we
have 3 axial current insertion vertex corrections, in Figures 1(f), 1(g)
and 1(h). At the energy-momentum tensor insertion, we also have
vertex corrections. These are 6 diagrams, amounting to the 3 independent
calculations in Figures 1(i), 1(j) and 1(k). There are also 6 diagrams
that identically vanish due to fermionic traces, the ones in Figures 1(l),
1(m) and 1(n). Associated with the aforementioned self-energies there
are 3 diagrams corresponding to local self-energy renormalizations;
they amount to 1 independent calculation, Figure 1(o). Also, associated with
the aforementioned vertex corrections at the energy-momentum insertion there
are 2 diagrams corresponding to local vertex renormalizations; they amount
to 1 independent calculation, Figure 1(p). So, overall, of the 29 initial
two loop diagrams in Figure 1, we are left with 12 independent calculations.
In Figure 2 we have 7 more diagrams, corresponding to 4 independent
calculations. These diagrams are associated to the finite gauge photons and
shall be discussed later. We are thus left with an overall number of 16
independent calculations, out of the initial 36 diagrams. Let us see how to
perform such calculations, one at a time.
\subsection{Diagrams in Figures 1(b) and 1(c)}
We begin with the non-planar diagram depicted in Figure 1(b), which we
shall denote by $N^{(1)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$.
This amplitude is conformally covariant since no issues of subdivergences
or gauge choice arise. The idea \cite{kj1} is to use the inversion,
$u_\alpha={u'}_\alpha/{u'}^2$ and $v_\alpha={v'}_\alpha/{v'}^2$, as a
change of variables in the internal integrals. In order to use the
simple conformal properties of the propagators (2.8) we must also refer
the external points to their inverted images (2.5), as was done
in (2.14), (2.15), and in (2.17), (2.18). If we then
use translation symmetry to place one point at the origin,
say $z=0$, then the propagators attached to that point drop out of
the integral, because the inverted point is now at $\infty$, and the
integrals simplify.
After summing over both directions of Higgs field propagation, and setting
$z=0$, the amplitude for $N^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$
is written as,
$$
N^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)=
-{igf^2k^2\over8\pi^4}\int d^4u\,d^4v\,
({u_\alpha\over u^4v^2}-{v_\alpha\over v^4u^2})\cdot
$$
$$
\cdot{\bf Tr}\,\gamma_5\,
S(v-x)\gamma_{(\rho}\delta_{\sigma)\beta}\,(
{\overrightarrow{\partial}\over\partial x_\beta}-
{\overleftarrow{\partial}\over\partial x_\beta})\,S(x-u)S(u-y)
\gamma_{(\mu}\delta_{\nu)\pi}\,(
{\overrightarrow{\partial}\over\partial y_\pi}-
{\overleftarrow{\partial}\over\partial y_\pi})\,S(y-v). \eqno(3.1)
$$
The change of variables previously outlined can be performed with the
help of (2.7), (2.8), and the Higgs current transformation,
$$
{u_\alpha\over u^4v^2}-{v_\alpha\over v^4u^2}=
{v'}^2{u'}^2\,({u'}_\alpha-{v'}_\alpha). \eqno(3.2)
$$
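The Higgs current transformation (3.2) can also be checked numerically. The script below is our sanity check, not part of the original derivation, with $u'$, $v'$ obtained from the inversion (2.5):

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.normal(size=4), rng.normal(size=4)
up, vp = u / (u @ u), v / (v @ v)    # inverted images u', v'

u2, v2 = u @ u, v @ v
lhs = u / (u2**2 * v2) - v / (v2**2 * u2)   # u_a/(u^4 v^2) - v_a/(v^4 u^2)
rhs = (vp @ vp) * (up @ up) * (up - vp)     # v'^2 u'^2 (u'_a - v'_a)
assert np.allclose(lhs, rhs)                # the identity (3.2)
```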
The spinor propagator side factors ${x\!\!\! /}'$, ${v\!\!\! /}'$, etc.,
collapse within the trace, and the Jacobian $({u'}{v'})^{-8}$ cancels
with factors in the numerator. Performing the algebra we obtain,
$$
N^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)=
-{igf^2k^2\over8\pi^4}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')
\int d^4{u'}\,d^4{v'}\,(v'-u')_\alpha\cdot
$$
$$
\cdot\Bigl\{ {\bf Tr}\,\gamma_5\,
S(v'-x')\gamma_{\bar{\rho}}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
S(x'-u')S(u'-y')\gamma_{\bar{\mu}}\,(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})\,
S(y'-v')\Bigr\}, \eqno(3.3)
$$
where the derivatives are acting only inside the curly brackets.
We see that we obtain the expected transformation
factors for the energy-momentum tensors at $x$ and $y$ times an
integral in which $u'$ and $v'$ each appear in only two
denominators. These convolution integrals can be done in
several different ways. The final relevant formulas are listed
in the Appendix. We begin by using the cyclic property of the trace to move
the $S(y'-v')$ propagator in (3.3) next to the $S(v'-x')$ propagator.
The differential operators are kept fixed, with the understanding that
the $y'$ derivative that seems to be acting on nothing is actually
acting on the propagator $S(y'-v')$, which now sits on the left. As
usual, all derivatives act only inside curly brackets. We can perform the
integrations before making the differentiations, since
the integration variables are well separated from the differentiation ones.
Expanding the product with $(v'-u')_\alpha$, we are led to the
following result:
$$
\int d^4{u'}d^4{v'}(v'-u')_\alpha
{\bf Tr}\gamma_5
S(v'-x')\gamma_{\bar{\rho}}(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})
S(x'-u')S(u'-y')\gamma_{\bar{\mu}}(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})
S(y'-v')=
$$
$$
-{\bf Tr}\gamma_5
\int d^4{v'}v'_\alpha S(v'-y')S(v'-x')\,\gamma_{\bar{\rho}}(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})
\int d^4{u'}S(u'-x')S(u'-y')\,\gamma_{\bar{\mu}}(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})
$$
$$
+{\bf Tr}\gamma_5
\int d^4{v'}S(v'-y')S(v'-x')\,\gamma_{\bar{\rho}}(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})
\int d^4{u'}u'_\alpha S(u'-x')S(u'-y')\,\gamma_{\bar{\mu}}(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}}),
\eqno(3.4)
$$
where the integrals can be directly read off from the Appendix. When
these results are used and substituted within the trace, one finds
the final amplitude,
$$
N^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)=
-{igf^2k^2\over32\pi^8}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')\cdot
$$
$$
\cdot\Bigl\{\varepsilon_{\alpha\kappa{\bar{\rho}}{\bar{\mu}}}\,
{1\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
{(x'-y')_\kappa\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})\Bigr\},
\eqno(3.5)
$$
where the derivatives are to be taken in the same ``trace'' sense as
before: they {\it only} act inside the curly brackets, and the $y'$
derivative that seems to be acting on nothing is actually acting on the
first $(x'-y')$ factor.
We observe that unlike the case in \cite{JF}, this non-planar amplitude
is {\it not} a numerical multiple of the amplitude for the one loop triangle
diagram (2.18). This is because the tensor (derivative) structure
in (3.5) is different from the one in (2.18). To see this, one just
has to explicitly compute both structures and compare them. For
the triangle one has:
$$
{\partial^2\over\partial x'_{\bar{\sigma}}\partial y'_{\bar{\nu}}}
\,{\Delta_\kappa\over\Delta^4}\,=
$$
$$
=\,{\partial\over\partial x'_{\bar{\sigma}}}\Bigl({\Delta_\kappa\over
\Delta^2}\Bigr){\partial\over\partial y'_{\bar{\nu}}}\Bigl({1\over\Delta^2}
\Bigr)\,+\,
{\Delta_\kappa\over\Delta^2}{\partial^2\over\partial x'_{\bar{\sigma}}
\partial y'_{\bar{\nu}}}\Bigl({1\over\Delta^2}\Bigr)\,
+\,{1\over\Delta^2}{\partial^2\over\partial x'_{\bar{\sigma}}
\partial y'_{\bar{\nu}}}\Bigl({\Delta_\kappa\over\Delta^2}\Bigr)\,+\,
{\partial\over\partial x'_{\bar{\sigma}}}\Bigl({1\over\Delta^2}\Bigr)
{\partial\over\partial y'_{\bar{\nu}}}\Bigl({\Delta_\kappa\over\Delta^2}
\Bigr), \eqno(3.6)
$$
while for the non-planar structure one obtains,
$$
{1\over\Delta^2}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
{\Delta_\kappa\over\Delta^2}\,(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})\,=
$$
$$
=\,{\partial\over\partial x'_{\bar{\sigma}}}\Bigl({\Delta_\kappa\over\Delta^2}
\Bigr){\partial\over\partial y'_{\bar{\nu}}}\Bigl({1\over \Delta^2}\Bigr)\,-
\,{\Delta_\kappa\over\Delta^2}{\partial^2\over\partial x'_{\bar{\sigma}}
\partial y'_{\bar{\nu}}}\Bigl({1\over\Delta^2}\Bigr)\,
-\,{1\over\Delta^2}{\partial^2\over\partial x'_{\bar{\sigma}}
\partial y'_{\bar{\nu}}}\Bigl({\Delta_\kappa\over\Delta^2}\Bigr)\,+\,
{\partial\over\partial x'_{\bar{\sigma}}}\Bigl({1\over\Delta^2}\Bigr)
{\partial\over\partial y'_{\bar{\nu}}}\Bigl({\Delta_\kappa\over\Delta^2}
\Bigr), \eqno(3.7)
$$
where we defined $\Delta\equiv(x'-y')$. The reason such a difference
can arise is that while in \cite{JF} there is a unique conformal tensor
structure for the correlator of three axial vector currents, here we
have two conformal tensor structures due to the higher dimensionality of
the correlator of the one axial vector current and the two energy-momentum
tensors. Also, observe that both these structures (3.6) and (3.7) are to be
understood as always attached to the appropriate factors of $J_{\mu\nu}(y')$,
$J_{\rho\sigma}(x')$ and the appropriate powers of $y'$, $x'$. Moreover the
diagrams that give rise to them obey conservation equations for the
energy-momentum tensor insertions. The structure (3.6) is associated with
the one loop diagram in (2.18); it can easily be proved that the
conservation equation is obeyed, a standard result from
\cite{del1,eguchi,del2} (and also from \cite{kj2} once we are aware of
the relation (A.6) from the Appendix). The structure (3.7) is associated
with the two loop diagram in (3.5), and one can also explicitly check the
conservation law for this case. This existence of two conformal structures
is an extra feature in the discussion of these two loop diagrams, relative
to the work in \cite{JF}.
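The difference between the two structures can be made explicit with a short symbolic computation. The sympy sketch below is ours (the sample index values $\kappa=0$, $\bar\sigma=1$, $\bar\nu=2$ are an arbitrary choice): it verifies the Leibniz expansion (3.6) and confirms that (3.6) and (3.7) differ, the two mixed-derivative terms entering with opposite signs:

```python
import sympy as sp

x = sp.symbols('x0:4', real=True)
y = sp.symbols('y0:4', real=True)
d = [xi - yi for xi, yi in zip(x, y)]      # Delta = x' - y'
d2 = sum(di**2 for di in d)                # Delta^2 (Euclidean)

k, s, n = 0, 1, 2                          # sample values of kappa, sigma-bar, nu-bar
f = d[k] / d2                              # Delta_kappa / Delta^2
g = 1 / d2                                 # 1 / Delta^2

dx = lambda e: sp.diff(e, x[s])            # d/dx'_{sigma-bar}
dy = lambda e: sp.diff(e, y[n])            # d/dy'_{nu-bar}

# (3.6): plain Leibniz expansion of the one loop structure
lhs36 = dx(dy(f * g))
rhs36 = dx(f)*dy(g) + f*dx(dy(g)) + g*dx(dy(f)) + dx(g)*dy(f)
assert sp.simplify(lhs36 - rhs36) == 0

# (3.7): same four terms, with the two mixed-derivative terms flipped in sign
rhs37 = dx(f)*dy(g) - f*dx(dy(g)) - g*dx(dy(f)) + dx(g)*dy(f)
assert sp.simplify(rhs36 - rhs37) != 0     # the two structures indeed differ
```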
There are 2 more non-planar diagrams, where the scalar vertex is placed
at $x$ and at $y$. We need to compute them, as they are independent
of the previous result (we have a scalar-scalar-tensor vertex
instead of a scalar-scalar-vector vertex, among other different vertices),
but they amount to 1 independent calculation.
So, we proceed with the non-planar diagram in Figure 1(c), denoted in the
following by $N^{(2)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$.
The method of calculation is very similar to the one for the previous
diagram, and so we shall present it here in somewhat less detail.
After summing over both directions of Higgs field propagation, and setting
$z=0$, the amplitude for $N^{(2)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$
is written as,
$$
N^{(2)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)=
-{igf^2k^2\over12\pi^4}\int d^4u\,d^4v\,
{\bf Tr}\,\gamma_5\,{{u\!\!\! /}\gamma_\alpha{v\!\!\! /}\over u^4v^4}\,
S(v-x)\gamma_{(\rho}\delta_{\sigma)\beta}\,(
{\overrightarrow{\partial}\over\partial x_\beta}-
{\overleftarrow{\partial}\over\partial x_\beta})\,S(x-u)\cdot
$$
$$
\cdot\Delta(u-y)\,(
{\overleftarrow{\partial}\over\partial y_{(\mu}}
{\overrightarrow{\partial}\over\partial y_{\nu)}}-{1\over2}\delta_{\mu\nu}\,
{\overleftarrow{\partial}\over\partial y_\pi}
{\overrightarrow{\partial}\over\partial y_\pi}-{1\over2}\,(
{\overleftarrow{\partial^2}\over\partial y_\mu\partial y_\nu}+
{\overrightarrow{\partial^2}\over\partial y_\mu\partial y_\nu}))\,
\Delta(y-v). \eqno(3.8)
$$
Performing the conformal inversion is now no harder than it was for
the previous diagram. The procedure is essentially the same, and if we
carry out the algebra we obtain,
$$
N^{(2)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)=
-{igf^2k^2\over24\pi^4}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')\cdot
$$
$$
\cdot\int d^4{u'}\,d^4{v'}\,\Bigl\{\,{\bf Tr}\,\gamma_5\gamma_\alpha\,
S(v'-x')\gamma_{\bar{\rho}}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
S(x'-u')\cdot
$$
$$
\cdot\Delta(u'-y')\,(
{\overleftarrow{\partial}\over\partial {y'}_{({\bar{\mu}}}}
{\overrightarrow{\partial}\over\partial {y'}_{{\bar{\nu})}}}-
{1\over2}\delta_{{\bar{\mu}}{\bar{\nu}}}\,
{\overleftarrow{\partial}\over\partial {y'}_\pi}
{\overrightarrow{\partial}\over\partial {y'}_\pi}-{1\over2}\,(
{\overleftarrow{\partial^2}\over\partial {y'}_{\bar{\mu}}
\partial {y'}_{\bar{\nu}}}+
{\overrightarrow{\partial^2}\over\partial {y'}_{\bar{\mu}}
\partial {y'}_{\bar{\nu}}}))\,
\Delta(y'-v')\,\Bigr\}. \eqno(3.9)
$$
Once again the expected structure emerges, and all we have to do is to
perform the integrations. Using the relevant formulas from the Appendix,
we find the final result as,
$$
N^{(2)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)=
{igf^2k^2\over384\pi^8}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')\Bigl\{
\varepsilon_{\alpha\ell{\bar{\rho}}\kappa}\cdot
$$
$$
\cdot
{(x'-y')_\ell\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
{(x'-y')_\kappa\over(x'-y')^2}\,(
{\overleftarrow{\partial}\over\partial {y'}_{({\bar{\mu}}}}
{\overrightarrow{\partial}\over\partial {y'}_{{\bar{\nu})}}}-
{1\over2}\delta_{{\bar{\mu}}{\bar{\nu}}}\,
{\overleftarrow{\partial}\over\partial {y'}_\pi}
{\overrightarrow{\partial}\over\partial {y'}_\pi}-{1\over2}\,(
{\overleftarrow{\partial^2}\over\partial {y'}_{\bar{\mu}}
\partial {y'}_{\bar{\nu}}}+
{\overrightarrow{\partial^2}\over\partial {y'}_{\bar{\mu}}
\partial {y'}_{\bar{\nu}}}))\Bigr\}. \eqno(3.10)
$$
However, one should note the following. In (3.3) both differential
operators were first order in the derivatives, but in (3.9) the
differential operator associated with the vertex (2.13) is actually
second order. As we expect to have at the end a result similar to
(3.5) or (2.18), we have to perform one of the derivatives in order
for both differential operators to become first order. Manipulating
this result through a somewhat lengthy calculation, one finds:
$$
N^{(2)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)=
-{igf^2k^2\over128\pi^8}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')\cdot
$$
$$
\cdot\Bigl\{\varepsilon_{\alpha\kappa{\bar{\rho}}{\bar{\mu}}}\,
{1\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
{(x'-y')_\kappa\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})\Bigr\},
\eqno(3.11)
$$
where the notation is as in (3.5). The structure obtained here is the
same as in (3.5). So, the 3 non-planar
diagrams have the same tensor structure, which is different from the
one associated with the one loop triangle. From this result we immediately
read off the last non-planar diagram, the one with
the vertex involving the scalar fields and energy-momentum
tensor located at $x$. All we have to do is to exchange $x$ with $y$
and $\mu\nu$ with $\rho\sigma$ in (3.11). This actually does not
change the amplitude (3.11), so this third diagram contributes
the same amount as its reflection-symmetric partner.
Finally, we can add these 3 diagrams, and obtain the non-planar
contribution to the two loop correlator. The overall contribution is simply:
$$
N_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,=\,\sum_{i=1}^3\,
N^{(i)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,=
\,-{3igf^2k^2\over64\pi^8}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')\cdot
$$
$$
\cdot\Bigl\{\varepsilon_{\alpha\kappa{\bar{\rho}}{\bar{\mu}}}\,
{1\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
{(x'-y')_\kappa\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})\Bigr\}.
\eqno(3.12)
$$
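As an arithmetic check of ours on the prefactors: in units of $igf^2k^2/\pi^8$, (3.5) contributes $-1/32$ and each of the two diagrams with the structure (3.11) contributes $-1/128$, which sum to the $-3/64$ in (3.12):

```python
from fractions import Fraction

c_N1 = Fraction(-1, 32)    # prefactor of (3.5), in units of i*g*f^2*k^2/pi^8
c_N2 = Fraction(-1, 128)   # prefactor of (3.11); its reflection partner contributes equally
total = c_N1 + 2 * c_N2
assert total == Fraction(-3, 64)   # prefactor of (3.12)
```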
\subsection{Diagrams in Figures 1(d), 1(e) and 1(o)}
We now proceed to the self-energy diagrams. These will be the same as in
the three gauge current case \cite{JF}. We shall see how the finite gauge
mechanism for the one loop self-energies and vertex corrections
comes about: it handles certain divergences by choosing a
gauge in which they vanish \cite{kj1,JF}. For this cancellation of
divergences we have introduced the Abelian gauge field,
which can be decoupled at the end by setting its coupling to zero.
Let us see how all that works, by starting with the
Higgs self-energy diagram in Figure 1(d),
and the photon self-energy diagram in Figure 1(e).
There are 3 diagrams in Figure 1(d) (as we can place the self-energy loop
at any of the 3 sides of the triangle), which amount to 1 independent
calculation, and another 3 diagrams in Figure 1(e) that again amount to
1 independent calculation. If we remove the self-energy leg from the
triangle diagram, and add the Higgs and photon contributions we obtain
\cite{JF},
$$
\Sigma(v-u)={1\over8\pi^4}\,[f^2+{1\over2}g^2(1-\Gamma)]\,
{{v\!\!\! /}-{u\!\!\! /}\over(v-u)^6}+a\,{\partial\!\!\! /}\,\delta^4(v-u),
\eqno(3.13)
$$
where $\Gamma$ is the gauge fixing parameter coming from the photon
propagator \cite{kj1,JF}. In this result, the first term is the part
of the amplitude which is determined by the Feynman rules read from
the diagrams. It has a linearly divergent Fourier transform, but the
crucial point is that this amplitude can be made finite by choosing the
gauge $\Gamma=1+2f^2/g^2$. It then vanishes for separated points.
However, there is a possible local term, the second term in (3.13), which
is left ambiguous by the Feynman rules, and is represented in Figure 1(o).
The constant $a$ will be determined by the Ward identity \cite{JF}.
In order to proceed with the calculation of this constant using the Ward
identity, we first need to look at the following vertex
correction diagrams at the axial current insertion:
Figure 1(f), $T^{(1)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$,
Figure 1(g), $T^{(2)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$, and Figure 1(h),
$T^{(3)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$.
Again, we need a diagram involving photons in order to choose the
previously introduced finite gauge. Also, these 3 diagrams clearly
correspond to 3 distinct calculations.
The amplitudes of the 3 vertex correction subgraphs in these diagrams are
the same as in \cite{JF}. Therefore we already know that each
contribution has a logarithmic divergent Fourier transform, and that
the sum of the divergent contributions from these 3 vertex subgraphs is
proportional to $-2f^2-g^2(1-\Gamma)$, therefore vanishing in the
same gauge that makes the self-energy finite. Henceforth we shall use
this gauge.
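As a check of ours on the finite gauge algebra: substituting $\Gamma=1+2f^2/g^2$ makes both the non-local self-energy coefficient in (3.13) and the summed divergent vertex coefficient vanish:

```python
import sympy as sp

f, g = sp.symbols('f g', positive=True)
Gamma = 1 + 2*f**2/g**2                    # the finite gauge choice

# non-local part of the self-energy (3.13) vanishes:
assert sp.simplify(f**2 + sp.Rational(1, 2)*g**2*(1 - Gamma)) == 0
# divergent part of the summed vertex subgraphs vanishes:
assert sp.simplify(-2*f**2 - g**2*(1 - Gamma)) == 0
```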
Let us then proceed with the Ward identity calculation, by summarizing
the result from \cite{JF}. From the amplitudes for the vertex
subgraphs in the diagrams $T^{(i)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$,
$i=1,2,3$, we obtain the Ward identity for the theory \cite{JF},
$$
{\partial\over\partial z_\alpha}T_\alpha(z,u,v)=-i{1\over2}g\gamma_5
\,(\delta^4(z-u)-\delta^4(z-v))\,\Sigma(u-v), \eqno(3.14)
$$
where $T_\alpha=\sum_{i=1}^3T^i_\alpha$, and $T^i_\alpha$ is the vertex
subgraph in the diagram $T^{(i)}_{\alpha, \mu\nu, \rho\sigma}$.
The constant $a$ in the self-energy (3.13) can be calculated as in
\cite{JF} -- where basically one works out the LHS in (3.14) (in the
finite gauge) in order to find the correct value for (3.13) in the RHS --,
and the final answer is given by
$$
\Sigma(z)={3\over64\pi^2}\,(f^2-{1\over2}g^2)\,
{\partial\!\!\! /}\,\delta^4(z). \eqno(3.15)
$$
Strictly speaking, one should now proceed to verify that the exact same
result is obtained from the Ward identity associated with the vertex
correction diagrams at the energy-momentum tensor insertions, Figures 1(i),
1(j) and 1(k). This is in fact true, but for pedagogical reasons we shall
postpone such a proof for a couple of pages.
It is this result for $\Sigma(v-u)$ which is to be used to evaluate the
local self-energy renormalization, Figure 1(o), therefore yielding the
correct value for $a$ in (3.13).
These again are 3 diagrams that amount to 1 independent calculation as
in Figures 1(d) and 1(e). As (3.15) is purely local, the integral in $u$
and $v$ required for the previous diagram is trivial, simply
yielding a multiple of the one loop triangle amplitude. The final
result is that the sum of the self-energy insertion diagrams, Figures 1(d),
1(e) and 1(o), is a multiple of the one loop amplitude,
$$
\Sigma'_{\alpha, \mu\nu, \rho\sigma}(z,y,x)={3\over64\pi^2}\,
(f^2-{1\over2}g^2)\,B_{\alpha, \mu\nu, \rho\sigma}(z,y,x),
\eqno(3.16)
$$
exactly as in \cite{JF}, since the internal
fields are the same. Now recall that
there is a factor of 3 from the triangular symmetry. There is also a
factor of 2 for the opposite directions of fermion charge flow (such a
factor was absent in the non-planar diagrams). Finally, we are interested in
the ${\cal O}(gf^2k^2)$ corrections, so that the term in $g^2$ in
(3.16) should be discarded. The overall result for the self-energy
contribution to the two loop correlator is finally,
$$
\Sigma_{\alpha, \mu\nu, \rho\sigma}(0,y,x)={9f^2\over32\pi^2}\,
B_{\alpha, \mu\nu, \rho\sigma}(0,y,x),
\eqno(3.17)
$$
where we have set $z=0$ (for coherence with the other diagram calculations).
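As a small arithmetic check of ours on the combinatorial factors: the $f^2$ coefficient $3/64$ in (3.16), multiplied by 3 (triangular symmetry) and 2 (charge-flow directions), gives the $9/32$ in (3.17):

```python
from fractions import Fraction

c_single = Fraction(3, 64)           # f^2 coefficient in (3.16), in units of 1/pi^2
c_total = 3 * 2 * c_single           # triangular symmetry x charge-flow directions
assert c_total == Fraction(9, 32)    # f^2 coefficient in (3.17)
```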
\subsection{Diagrams in Figures 1(f), 1(g) and 1(h)}
We can now proceed with the calculation
of the vertex correction diagrams at the axial current insertion,
$T^{(i)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$, $i=1,2,3$. As for the
self-energy, the calculation of these 3 diagrams follows from \cite{JF}. We
shall regard each virtual photon diagram as the sum of two graphs, one
with the photon propagator in the Landau gauge $\Gamma=1$, and the second
with the inversion covariant pure gauge propagator,
$$
\tilde{\Delta}_{\mu\nu}(u-v)=-{1\over4\pi^2}\,{f^2\over g^2}\,
{1\over(u-v)^2}\,J_{\mu\nu}(u-v). \eqno(3.18)
$$
The Landau gauge diagrams give order ${\cal O}(g^3k^2)$ contributions to
the two loop correlator, while the remainder gives an order ${\cal O}
(gf^2k^2)$ contribution which is what we are interested in. Therefore --
and similarly to what was done from (3.16) to (3.17) -- we shall discard
the Landau gauge diagrams from our final result, and only use (3.18)
for the virtual photon propagator in the finite gauge.
With this in mind we turn to the calculation of the diagrams
$T^{(i)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$.
The method of calculation is similar to the
one used for the non-planar diagrams, and so we shall follow it here
without giving details. After summing over both directions of Higgs field
propagation, and setting $z=0$, the amplitude for
$T^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$ is written as,
$$
T^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)=
-{igf^2k^2\over8\pi^4}\int d^4u\,d^4v\,
\Delta(v-u)\cdot
$$
$$
\cdot{\bf Tr}\,{{u\!\!\! /}\over u^4}\,\gamma_\alpha\gamma_5\,
{{v\!\!\! /}\over v^4}\,
S(v-y)\gamma_{(\mu}\delta_{\nu)\beta}\,(
{\overrightarrow{\partial}\over\partial y_\beta}-
{\overleftarrow{\partial}\over\partial y_\beta})\,S(y-x)
\gamma_{(\rho}\delta_{\sigma)\pi}\,(
{\overrightarrow{\partial}\over\partial x_\pi}-
{\overleftarrow{\partial}\over\partial x_\pi})\,S(x-u), \eqno(3.19)
$$
and performing the conformal inversion we are led to the result,
$$
T^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)=
-{igf^2k^2\over8\pi^4}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')
\int d^4{u'}\,d^4{v'}\,\Delta(v'-u')\cdot
$$
$$
\cdot{\bf Tr}\,\gamma_5\gamma_\alpha\,
S(v'-y')\gamma_{\bar{\mu}}\,(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})\,
S(y'-x')\gamma_{\bar{\rho}}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
S(x'-u'). \eqno(3.20)
$$
As usual the expected tensorial structure emerges. We are left with
the integrations to be performed. However, as we have seen, this
result is divergent; only when we sum the 3 diagrams
$T^{(i)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$, $i=1,2,3$,
will the result be finite, in the finite gauge. So at this stage we should
include in the calculation (3.20) the equivalent results
coming from the diagrams $T^{(2)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$
and $T^{(3)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$ -- where for this last
one we should use {\it only} the inversion covariant pure gauge
propagator (3.18). The result of including the 3 diagrams all
together is to produce an integral of a traceless tensor, which is
convergent, and can be read from the formulas in the Appendix. Hence we
can write for the net sum of vertex insertions at point $z$, {\it i.e.},
$T^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$ plus
$T^{(2)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$ plus
$T^{(3)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$,
$$
T'_{\alpha, \mu\nu, \rho\sigma}(0,y,x)
={igf^2k^2\over256\pi^8}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')\,
\varepsilon_{\alpha\kappa\bar{\mu}\bar{\rho}}\,
{\partial^2\over\partial y'_{\bar{\nu}}\partial x'_{\bar{\sigma}}}
\,{(x'-y')_\kappa\over(x'-y')^4}, \eqno(3.21)
$$
which is a multiple of the triangle one loop amplitude (2.18). Recalling
that there is a factor of 2 for opposite directions of fermion charge
flow, we can finally write for the contribution of the vertex correction
diagrams (at the axial current insertion) to the two loop correlator,
$$
T_{\alpha, \mu\nu, \rho\sigma}(0,y,x)={f^2\over32\pi^2}\,
B_{\alpha, \mu\nu, \rho\sigma}(0,y,x).
\eqno(3.22)
$$
\subsection{Diagrams in Figures 2(a) and 2(b)}
In order to obtain the previous result we had to use finite gauge virtual
photons, as given by the propagator (3.18). If one has a diagram with an
internal virtual photon, one should expect a factor of $g$ from each of the
two internal vertices, and so an overall contribution of order
${\cal O}(g^3k^2)$. However, if one uses an internal finite gauge virtual
photon, there is an extra factor of $f^2/g^2$ from the propagator
(3.18), and we therefore obtain an overall contribution of order
${\cal O}(gf^2k^2)$, which is the order we are interested in. This means that
one now has to include all diagrams with one internal finite gauge photon
(3.18).
In particular we have to include one more vertex in our rules, which
completes (2.9-13). This vertex can be read from the action (2.1) when
coupled to gravity, and is the following,
\begin{figure}[ht]
\centerline{
\put(145,47){$=$ $-$ $\large{{1\over2}}$ $igk\,(\gamma_{(\mu}
\delta_{\nu)\alpha}-$ $\large{{1\over2}}$ $\delta_{\mu\nu}\gamma_\alpha)\,
\gamma_5\,\delta^4(z-z_1)\,\delta^4(z-z_2)\,\delta^4(z-z_3)$,}
\put(437,20){(3.23)}
\epsfxsize=6.5in
\epsfysize=1.3in
\epsffile{vert3.eps}
}
\end{figure}
\noindent
where the notation is as in (2.9-13). Observe that there is no similar vertex
involving two scalar legs (as opposed to the two fermion legs we have), as
such a vertex would give diagrams that do not contribute to the abnormal
parity part of the correlator we are computing.
When this vertex is considered, one finds that there are 7 new diagrams
that must be included in our calculation, the ones presented in Figure 2.
There is 1 primitive diagram in Figure 2(a). There are 2 other primitive
diagrams which only correspond to 1 independent calculation, the one
depicted in Figure 2(b). Then we have energy-momentum insertion vertex
corrections. These are Figure 2(c) and Figure 2(d), 2 independent
calculations corresponding to 4 diagrams due to reflection symmetry. We
shall now proceed to evaluate these diagrams.
We start with the primitive diagram in Figure 2(a),
$P'^{(1)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$. This diagram is easily
evaluated as it involves no integrations. Recall that we have to use (3.18)
alone, whenever one encounters a virtual photon. Setting $z=0$ and
performing the conformal inversion, the amplitude
$P'^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$ becomes,
$$
P'^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)
=-{igf^2k^2\over64\pi^8}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')\,
\varepsilon_{\alpha\kappa\bar{\mu}\bar{\rho}}
\,{(x'-y')_\kappa\over(x'-y')^6}\,J_{\bar{\nu}\bar{\sigma}}(x'-y').
\eqno(3.24)
$$
Unlike all the preceding calculations, this amplitude involves no
derivatives. This is certainly to be expected due to the nature of
vertex (3.23). However, one can manipulate (3.24) in order to write it as
the second derivative of a tensor involving the structures (3.6)
and (3.7) alone. After some calculations, one can show that
(3.24) can be re-written as (including the factor of 2 for opposite directions
of fermion charge flow):
$$
P^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,
=\,{igf^2k^2\over128\pi^8}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')\cdot
$$
$$
\cdot\Bigl\{\varepsilon_{\alpha\kappa{\bar{\mu}}{\bar{\rho}}}\,
{1\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
{(x'-y')_\kappa\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})\Bigr\},
\eqno(3.25)
$$
and this tensor structure is precisely the same as the one in the
non-planar diagrams (3.12). This should not come as a surprise as this
diagram -- like the non-planar ones -- is a primitive.
Let us proceed with the primitive diagram in Figure 2(b),
$P'^{(2)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$. This diagram involves only
one integration, and is therefore different from the ones we previously
calculated (which involved either two integrations or none). Upon setting
$z=0$ the amplitude for $P'^{(2)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$ is,
$$
P'^{(2)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,=
\,-{igf^2k^2\over128\pi^6}\int {d^4u\over(u-y)^2}\,
J_{\tau\beta}(u-y)\cdot
$$
$$
\cdot{\bf Tr}\,\gamma_\beta\,{{u\!\!\! /}\over u^4}\,\gamma_\alpha\gamma_5\,
{{y\!\!\! /}\over y^4}\,(\gamma_{(\mu}\delta_{\nu)\tau}-
{1\over2}\,\delta_{\mu\nu}\gamma_\tau)\,S(y-x)\,
\gamma_{(\rho}\delta_{\sigma)\pi}\,(
{\overrightarrow{\partial}\over\partial x_\pi}-
{\overleftarrow{\partial}\over\partial x_\pi})\,S(x-u), \eqno(3.26)
$$
and performing the conformal inversion one obtains,
$$
P'^{(2)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,=
\,{igf^2k^2\over128\pi^6}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')\cdot
$$
$$
\cdot\int {d^4{u'}\over(u'-y')^2}\,J_{\bar{\nu}\beta}(u'-y')\,
{\bf Tr}\,\gamma_\beta\,\gamma_\alpha\,\gamma_5\,
\gamma_{\bar{\mu}}\,S(y'-x')\gamma_{\bar{\rho}}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
S(x'-u'). \eqno(3.27)
$$
We are left with one integration to perform. However, one should note that
there is only one differential operator in (3.27), and if we are to obtain
a final result for this diagram which involves the tensor structures (3.6)
and (3.7) we shall have to manipulate (3.27) in order to re-write it in such a
way that it involves two differential operators. This is analogous to the
situation we faced from (3.24) to (3.25). After integrating and performing
some calculations, one obtains:
$$
P'^{(2)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,=
\,{igf^2k^2\over128\pi^8}\,{y'}^8{x'}^8\,J_{\bar{\mu}(\mu}(y')\,
J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')\,J_{\sigma)\bar{\sigma}}(x')\,
\varepsilon_{\alpha\kappa\bar{\mu}\bar{\rho}}\cdot
$$
$$
\cdot\Bigl\{{1\over2}\,{\partial^2\over\partial y'_{\bar{\nu}}\partial
x'_{\bar{\sigma}}}\,{(x'-y')_\kappa\over(x'-y')^4}\,+\,{1\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
{(x'-y')_\kappa\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})\Bigr\}.
\eqno(3.28)
$$
It is interesting to observe that this tensor structure is a linear
combination of both (3.6) and (3.7). Also, one must now include in this
result a factor of 2 for opposite directions of fermion charge flow and
another factor of 2 associated with the 2 distinct diagrams related by
reflection symmetry.
Finally, we can add the 3 diagrams in Figures 2(a) and 2(b), in order to
obtain the primitive planar contribution to the two loop correlator,
$$
P_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,=
\,{igf^2k^2\over32\pi^8}\,{y'}^8{x'}^8\,J_{\bar{\mu}(\mu}(y')\,
J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')\,J_{\sigma)\bar{\sigma}}(x')\,
\varepsilon_{\alpha\kappa\bar{\mu}\bar{\rho}}\cdot
$$
$$
\cdot\Bigl\{{1\over2}\,{\partial^2\over\partial y'_{\bar{\nu}}\partial
x'_{\bar{\sigma}}}\,{(x'-y')_\kappa\over(x'-y')^4}\,+\,{5\over4}
\,{1\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
{(x'-y')_\kappa\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})\Bigr\}.
\eqno(3.29)
$$
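The passage to (3.29) is simple bookkeeping, spelled out here as a check: (3.25) already includes its factor of 2, while (3.28) must be multiplied by $2\times2=4$ (charge flow and reflection symmetry). Writing $D_1$ for the $\partial^2$ structure and $D_2$ for the structure with two antisymmetrized derivatives, and suppressing the common factor $igf^2k^2\,{y'}^8{x'}^8\,J\cdots J\,\varepsilon_{\alpha\kappa\bar{\mu}\bar{\rho}}$, the prefactors combine as
$$
\frac{1}{128\pi^8}\,D_2\;+\;\frac{4}{128\pi^8}\Bigl({1\over2}\,D_1+D_2\Bigr)
\;=\;\frac{1}{32\pi^8}\Bigl({1\over2}\,D_1+{5\over4}\,D_2\Bigr),
$$
in agreement with the coefficients ${1\over2}$ and ${5\over4}$ in (3.29).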
\subsection{Diagrams in Figures 1(i), 1(j), 1(k), 1(p), 2(c) and 2(d)}
Next, we proceed with the evaluation of the contributions coming from
the vertex correction diagrams at the energy-momentum tensor insertions.
These amplitudes are presented in Figure 1(i):
$V^{(1)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$,
Figure 1(j): $V^{(2)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$, Figure 1(k):
$V^{(3)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$, Figure 1(p):
$V^{(4)}_{\alpha, \mu\nu, \rho\sigma}$ $(z,y,x)$, and also Figure 2(c):
$V^{(5)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$, and Figure 2(d):
$V^{(6)}_{\alpha, \mu\nu, \rho\sigma}(z,y,x)$.
As in the previous treatment of the axial insertion vertex, we should
insert the photon propagator into our calculation in order to guarantee
finiteness of the energy-momentum insertion vertex. The amplitudes
$V^{(5)}$ and $V^{(6)}$ have a different structure from the other vertex
diagrams as they are semi-local. Moreover, there is a possible local term,
$V^{(4)}$, which is left ambiguous by the Feynman rules. This is analogous
to the situation we faced when dealing with the self-energy diagrams. As
this local term cannot be evaluated by Feynman rules it will be solely
determined from the Ward identity. The role it plays is one of regularizing a
divergence.
The sum of these amplitudes becomes finite in the finite gauge, which also
guaranteed the finiteness of the axial insertion vertex. The finite part of
the vertex subgraphs is a traceless tensor with respect to all three of
its indices, and can be written as,
$$
V^{(finite)}_{\mu\nu}(y,v,u)
={3k(f^2-{1 \over 2} g^2)\over2\pi^4}\,{1\over(y-u)^6}\,\gamma_\kappa\,\Big(
{(y-u)_\mu\,(y-u)_\nu\,(y-u)_\kappa\over(y-u)^2}\,-\,{1\over 6}\,
\delta_{(\mu\nu}\,(y-u)_{\kappa)}\Big). \eqno(3.30)
$$
In addition, it is worthwhile to check the Ward identity connected
to these vertices. For that we express the tensor on the RHS
of (3.30) in terms of the regularized traceless structure of derivatives,
$$
\Big({(y-u)_\mu\,(y-u)_\nu\,(y-u)_\kappa\over(y-u)^2}-{1\over6}\,
\delta_{(\mu\nu}\,(y-u)_{\kappa)}\Big)\,=\,
-\,{1\over48}\,{\partial\over\partial y_\kappa}\,\Big({\partial\over
\partial y_\mu}\,{\partial\over\partial y_\nu}-{1\over4}\,
\delta_{\mu\nu}\,\Box\,\Big)\,{1\over(y-u)^2}. \eqno(3.31)
$$
With this expression it is easy to derive that the following Ward identity
is satisfied for the energy-momentum insertion vertex,
$$
{\partial\over\partial y^{\mu}}\int d^4v \sum_{i=1}^6 V^{(i)}_{\mu\nu}(y,v,u)=
-k\,\partial_{\nu}\,\Sigma(y-u), \eqno(3.32)
$$
where $V^{(i)}_{\mu\nu}$ is the vertex subgraph in the diagram
$V^{(i)}_{\alpha, \mu\nu, \rho\sigma}$.
This Ward identity is essential for the calculation of the amplitude of
Figure 1(i). It suggests that this amplitude will be divergent, and only when
we add together the diagrams in Figures 1(i), 1(j), 1(k), 1(p), 2(c) and 2(d)
shall we obtain a finite answer. After summing over the two possible directions
of Higgs field propagation and setting $z=0$, by translation invariance of the
amplitude, $V^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$ is written as,
$$
V^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)=
-{igf^2k^2\over8\pi^4}\int d^4u\,d^4v\,
\Delta(u-v)\cdot
$$
$$
\cdot{\bf Tr}\,
S(v-y)\gamma_{(\mu}\delta_{\nu)\beta}\,(
{\overrightarrow{\partial}\over\partial y_\beta}-
{\overleftarrow{\partial}\over\partial y_\beta})\,S(y-u)S(u-x)
\gamma_{(\rho}\delta_{\sigma)\pi}\,(
{\overrightarrow{\partial}\over\partial x_\pi}-
{\overleftarrow{\partial}\over\partial x_\pi})\,
{{x\!\!\! /}\over x^4}\,\gamma_\alpha\gamma_5\,
{{v\!\!\! /}\over v^4}. \eqno(3.33)
$$
Using the conformal properties of the theory we can perform the usual
inversion in the spatial variables, which results in,
$$
V^{(1)}_{\alpha, \mu\nu, \rho\sigma}(0,y,x)=
{igf^2k^2\over8\pi^4}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')
\int d^4{u'}\,d^4{v'}\,\Delta(v'-u')\cdot
$$
$$
\cdot{\bf Tr}\,\gamma_5\,
S(v'-y')\gamma_{\bar{\mu}}\,(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})\,
S(y'-u')S(u'-x')\gamma_{\bar{\rho}}\,
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}
\,\gamma_\alpha. \eqno(3.34)
$$
In order to proceed with the $u$ and $v$ integrations we should also add
the diagrams of Figures 1(j), 1(k), 1(p), 2(c) and 2(d), to guarantee
finiteness of the contribution from the energy-momentum insertion vertex.
Note that none of these diagrams contributes a finite part at the order
${\cal O}(gf^2k^2)$ we are interested in; they merely make the diagram of
Figure 1(i) finite. This will make the integrand have a traceless form, as
given in the Appendix. After some manipulations we deduce that,
$$
V''_{\alpha, \mu\nu, \rho\sigma}(0,y,x)
={igf^2k^2\over256\pi^8}\,{y'}^8{x'}^8\,
J_{\bar{\mu}(\mu}(y')\,J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')
\,J_{\sigma)\bar{\sigma}}(x')\,
\varepsilon_{\alpha\kappa\bar{\mu}\bar{\rho}}\,
{\partial^2\over\partial y'_{\bar{\nu}}\partial x'_{\bar{\sigma}}}
\,{(x'-y')_\kappa\over(x'-y')^4}, \eqno(3.35)
$$
which is proportional to the triangle structure. Taking into account the 2
fermionic directions and doubling our answer for the two distinct
diagrams related by reflection symmetry, we finally obtain:
$$
V'_{\alpha, \mu\nu, \rho\sigma}(0,y,x)={2f^2\over32\pi^2}\,
B_{\alpha, \mu\nu, \rho\sigma}(0,y,x), \eqno(3.36)
$$
which is similar to what we obtained in (3.22). So, we can add all the diagrams
that represent vertex corrections (both at axial and energy-momentum
insertions). The overall result of the vertex corrections contribution to
the two loop correlator is,
$$
V_{\alpha, \mu\nu, \rho\sigma}(0,y,x)={3f^2\over32\pi^2}\,
B_{\alpha, \mu\nu, \rho\sigma}(0,y,x). \eqno(3.37)
$$
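Explicitly, (3.37) is just the sum of the axial insertion contribution (3.22) and the energy-momentum insertion contribution (3.36):
$$
\frac{f^2}{32\pi^2}\,B_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\;+\;
\frac{2f^2}{32\pi^2}\,B_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\;=\;
\frac{3f^2}{32\pi^2}\,B_{\alpha, \mu\nu, \rho\sigma}(0,y,x).
$$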
\subsection{Diagrams in Figures 1(l), 1(m) and 1(n)}
Finally, we would like to mention the diagrams that are zero. That these
diagrams vanish can be easily seen either from the fact that the fermion
trace vanishes or from arguments of Lorentz symmetry. We mention these
diagrams for completeness. They are the following: Figure 1(l), which are
3 diagrams that amount to 1 independent calculation, Figure 1(m), which
is only 1 diagram, and Figure 1(n), which are 2 diagrams that amount to 1
independent calculation. We have now completed the calculations
for all the 36 diagrams.
\subsection{The Three Point Function}
The next and final step is to add all diagrams together, and find out what is
the two loop contribution to the three point function at order
${\cal O}(gf^2k^2)$. Adding the results for all our diagrams we obtain the
${\cal O}(gf^2k^2)$ two loop contribution to the correlator
$\langle A_\alpha(z) T_{\mu\nu}(y) T_{\rho\sigma}(x) \rangle$. There are 4
distinct contributions: the one from the non-planar primitive diagrams,
$N_{\alpha, \mu\nu, \rho\sigma}$ $(0,y,x)$ in (3.12); the one from the
self-energy diagrams, $\Sigma_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$ in (3.17);
the one from the planar primitive diagrams,
$P_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$ in (3.29); and the one from the
vertex correction diagrams, $V_{\alpha, \mu\nu, \rho\sigma}(0,y,x)$ in (3.37).
Adding these 4 structures we finally obtain our result: the three point
function does not vanish and consists of two independent conformal tensor
structures,
$$
N_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,+\,
\Sigma_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,+\,
P_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,+\,
V_{\alpha, \mu\nu, \rho\sigma}(0,y,x)\,=
$$
$$
=\,{igf^2k^2\over64\pi^8}\,{y'}^8{x'}^8\,J_{\bar{\mu}(\mu}(y')\,
J_{\nu)\bar{\nu}}(y')\,J_{\bar{\rho}(\rho}(x')\,J_{\sigma)\bar{\sigma}}(x')\,
\varepsilon_{\alpha\kappa\bar{\mu}\bar{\rho}}\cdot
$$
$$
\cdot\Bigl\{\,7\,{\partial^2\over\partial y'_{\bar{\nu}}\partial
x'_{\bar{\sigma}}}\,{(x'-y')_\kappa\over(x'-y')^4}\,+\,{11\over2}
\,{1\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {x'}_{\bar{\sigma}}}-
{\overleftarrow{\partial}\over\partial {x'}_{\bar{\sigma}}})\,
{(x'-y')_\kappa\over(x'-y')^2}\,(
{\overrightarrow{\partial}\over\partial {y'}_{\bar{\nu}}}-
{\overleftarrow{\partial}\over\partial {y'}_{\bar{\nu}}})\,\Bigr\}.
\eqno(3.38)
$$
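The final tally in (3.38) can be checked by bookkeeping (assembled here from the displayed results). Strip off the common factor $igf^2k^2\,{y'}^8{x'}^8\,J\cdots J\,\varepsilon_{\alpha\kappa\bar{\mu}\bar{\rho}}/64\pi^8$, and write $D_1$ for the $\partial^2$ structure and $D_2$ for the structure with two antisymmetrized derivatives. Identifying $B_{\alpha, \mu\nu, \rho\sigma}$ with ${1\over2}\,(32\pi^2/f^2)\,D_1$ in these units, via (3.21-22), the entries for $\Sigma$, $P$ and $V$ follow from (3.17), (3.29) and (3.37), while the entry for $N$ is read off from (3.12) (not displayed in this subsection):
$$
\Sigma\;\to\;{9\over2}\,D_1,\qquad
V\;\to\;{3\over2}\,D_1,\qquad
P\;\to\;D_1+{5\over2}\,D_2,\qquad
N\;\to\;3\,D_2,
$$
so that
$$
{9\over2}+{3\over2}+1\;=\;7,\qquad
{5\over2}+3\;=\;{11\over2},
$$
reproducing the two coefficients in (3.38).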
There is one consistency check that can be performed on this result. Namely,
under the appropriate changes, one can ask: does it reduce to the result
obtained in the axial gauge theory case \cite{JF}? In order to reduce
(3.38) to the gauge theory case of \cite{JF} we first have to discard
all diagrams involving the vertex (3.23). Then, in the other diagrams, one
has to erase all the ``graviton derivatives''. Once this is done we are left
with a unique conformal tensor, {\it i.e.}, we are reducing the structure of
our theory to the one in \cite{JF}. Finally, taking into consideration that
the removal of the ``graviton derivatives'' also includes a factor of $-2$ in
the non-planar contribution (due to the symmetry enhancement of these
diagrams), one finds that the overall result vanishes just as it did in
\cite{JF}. This shows that our result is consistent with the calculations
performed for the gauge axial anomaly.
We can trace back the reason why this radiative correction does not vanish
(as it {\it does} vanish in the gauge theory case \cite{JF}): it is the
contribution of the primitive diagrams, (3.12) and (3.29), which is {\it not}
a multiple of the one loop amplitude. The existence of two different conformal
tensors in our theory is a result of the dimensionality of the correlator
$\langle A_\alpha(z) T_{\mu\nu}(y) T_{\rho\sigma}(x) \rangle$.
\vspace{5 mm}
\noindent
{\bf Acknowledgments:}
We would like to thank Daniel Freedman and Kenneth Johnson for suggesting and
guiding the above investigation, as well as for comments and reading of the
manuscript. We would also like to thank Roman Jackiw for comments and reading
of the manuscript, and Joshua Erlich for helpful remarks. One of us (R.S.) is
partially supported by the Praxis XXI grant BD-3372/94 (Portugal).
\vfill
\eject
\section*{Introduction. }
Every nonsingular projective surface $S$ over $\mathbb C$ defines three
underlying structures
$$ vS, \hspace{1cm} dS,\hspace{1cm} tS,$$
where $tS$ is the topological type of $S$, $dS$ is the underlying smooth 4-manifold and $vS$ is the deformation type of $S$.
I would like to talk about Boris Moishezon's Program on investigation of smooth structures on projective surfaces and their deformation types.
It has three sources: classical (Italian) algebraic geometry including Picard-Lefschetz theory, braid group theory, and the topology of smooth manifolds; and it consists of three components.
The first one coincides with {\it Chisini's Problem}.
Let $S$ be a nonsingular surface in a projective space $\mathbb P^r$ of $\deg S=N$. It is well known that for almost all projections $pr:\mathbb P^r \to \mathbb P^2$ the restrictions $f:S\to \mathbb P^2$ of these projections to $S$ satisfy the following conditions:
$(i)$ $f$ is a finite morphism of $\deg f=\deg S$;
$(ii)$ $f$ is branched along an irreducible curve $B\subset \mathbb P^2$ with ordinary cusps and nodes as its only singularities;
$(iii)$ $f^{*}(B)=2R+C$, where $R$ is irreducible and non-singular, and $C$ is reduced;
$(iv)$ $f_{\mid R}:R\to B$ coincides with the
normalization of $B$. \newline
We shall call such $f$ {\it a generic morphism} and its branch curve will be called {\it the discriminant curve}.
Two generic morphisms $(S_1,f_1)$, $(S_2,f_2)$ with the same discriminant curve $B$ are said to be equivalent if there exists an isomorphism $\varphi : S_1 \to S_2$ such that
$f_1=f_2\circ \varphi $.
The following assertion is known as Chisini's Conjecture.
\begin{gip}
Let $B$ be the discriminant curve of a generic morphism $f:S\to \mathbb P^2$ of degree $\deg f \geq 5$. Then $f$ is uniquely determined by the pair $(\mathbb P^2,B)$.
\end{gip}
It is easy to see that the analogous conjecture for generic morphisms of projective curves to $\mathbb P^1$ is not true. On the other hand, one can show that Chisini's Conjecture holds for the discriminant curves of almost all generic morphisms of any projective surface.
The second part of Moishezon's Program deals with the so called {\it braid monodromy technique}. Let $B$ be an algebraic curve in $\mathbb P^2$ of degree $2d$, where $d\in \frac{1}{2}\mathbb N$ (if $B$ is a discriminant curve, then $\deg B$ is even, i.e. $d\in \mathbb N$). The topology of the embedding $B\subset \mathbb P^2$ is determined by the {\it braid monodromy} of $B$ (see, for example, \cite{Moi1} or \cite{Moi2}), which is described by a factorization of the ``full twist'' $\Delta _{2d}^2$ in the semi-group $B^{+}_{2d}$ of the braid group $B_{2d}$ of $2d$-string braids (in standard generators, $\Delta _{2d}^2=(X_1\cdot ...\cdot X_{2d-1})^{2d}$). If $B$ is a cuspidal curve, then this factorization can be written as follows
\begin{equation}\label{F}
\Delta _{2d}^2=\prod_{i}Q_i^{-1}X_1^{\rho _i}Q_i, \hspace{1cm} \rho _i\in \{1,2,3\},
\end{equation}
where $X_1$ is a positive half-twist in $B_{2d}$.
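One elementary invariant of (\ref{F}) is worth noting (an observation added here for orientation, not part of the original text): the exponent sum in the standard generators is preserved by Hurwitz moves and conjugation, and $\Delta _{2d}^2=(X_1\cdot ...\cdot X_{2d-1})^{2d}$ has exponent sum $2d(2d-1)$, so any factorization (\ref{F}) satisfies
$$
\sum_i \rho _i\;=\;2d(2d-1);
$$
in particular, since each $\rho _i\geq 1$, the number of factors is at most $2d(2d-1)$.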
Let
\begin{equation} \label{f}
h=g_1\cdot ...\cdot g_r
\end{equation}
be a factorization in $B^{+}_{2d}$. The transformation which changes two neighboring factors in (\ref{f}) as follows
$$g_i\cdot g_{i+1} \longmapsto (g_ig_{i+1}g_i^{-1})\cdot g_i, $$
or
$$g_i\cdot g_{i+1} \longmapsto g_{i+1}(g_{i+1}^{-1}g_ig_{i+1}) $$
is called a {\it Hurwitz move}.
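To make the Hurwitz move concrete, here is a minimal computational sketch (added for illustration; the function names are ours). It works in a symmetric group image of the braid group, which suffices to exhibit the key invariance: a Hurwitz move leaves the product $g_1\cdot ...\cdot g_r$ of the factorization unchanged, since $(g_ig_{i+1}g_i^{-1})\cdot g_i=g_i\cdot g_{i+1}$.

```python
# Sketch: Hurwitz moves on a factorization, computed in a symmetric group
# image of the braid group.  Permutations are tuples of images of 0..n-1.

def compose(p, q):
    """Composition (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    """Inverse permutation."""
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def product(factors):
    """Product g_1 . g_2 ... g_r, composed left to right."""
    result = tuple(range(len(factors[0])))
    for g in factors:
        result = compose(result, g)
    return result

def hurwitz_move(factors, i):
    """g_i . g_{i+1}  ->  (g_i g_{i+1} g_i^{-1}) . g_i  at position i."""
    g, h = factors[i], factors[i + 1]
    new = list(factors)
    new[i] = compose(compose(g, h), inverse(g))
    new[i + 1] = g
    return new

# The half-twist generators X_1, X_2 of B_3 map to transpositions in S_3:
a = (1, 0, 2)   # image of X_1: swaps strings 0 and 1
b = (0, 2, 1)   # image of X_2: swaps strings 1 and 2

factors = [a, b, a]
moved = hurwitz_move(factors, 0)
assert product(moved) == product(factors)   # the product is invariant
```

Simultaneous conjugation by $z$ conjugates the product to $z^{-1}hz$; since $\Delta _{2d}^2$ is central in $B_{2d}$, both operations fix the left-hand side of (\ref{F}), which is why the braid factorization type is well defined.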
For $z\in B_{2d}$, we set
$$h_z=z^{-1}g_1z\cdot z^{-1}g_2z\cdot ...\cdot z^{-1}g_rz$$
and say that the factorization expression $h_z$ is obtained from (\ref{f}) by simultaneous conjugation by $z$. Two factorizations are called {\it Hurwitz and conjugation equivalent} if one can be obtained from the other by a finite sequence of Hurwitz moves followed by a simultaneous conjugation. For any algebraic curve $B\subset \mathbb P^2$, any two factorizations of the form (\ref{F}) are Hurwitz and conjugation equivalent. We shall say that two factorizations of the form (\ref{F}) belong to the same {\it braid factorization type} if they are Hurwitz and conjugation equivalent. The main problem in this direction is the following one.
\begin{pro}
Does the braid factorization type of the pair $(\mathbb P^2,B)$ uniquely determine the diffeomorphic type of this pair $(\mathbb P^2,B)$, and vice versa?
\end{pro}
Let $S_1$ and $S_2$ be two non-singular projective surfaces, and let
$\varphi :S_1\to S_2$ be a homeomorphism. The homeomorphism $\varphi $ induces an isomorphism $\varphi ^{*}:H^2(S_2,\mathbb Z)\to H^2(S_1,\mathbb Z)$. Assume that $L_i$, $i=1,2$, is an ample line bundle on $S_i$ such that $f_i:S_i \to \mathbb P^2$, given by a three-dimensional linear subsystem of $|L_i|$, is a generic morphism, and let $\varphi ^{*}(L_2)=L_1$. The third part of Moishezon's Program can be formulated as the following problem.
\begin{pro}
Let $f_i:S_i\to \mathbb P^2$, $i=1,2$, be a generic morphism as above and such that Chisini's Conjecture holds for its discriminant curve $B_i$. Do the diffeomorphic (resp. deformation) types of $S_1$ and $S_2$ coincide if the diffeomorphic (resp. deformation) types of the pairs $(\mathbb P^2,B_1)$ and $(\mathbb P^2,B_2)$ coincide, and vice versa?
\end{pro}
\section{Chisini's Conjecture.}
{\bf 1.1.} Let $B\subset \mathbb P^2$ be an irreducible plane curve with ordinary cusps and nodes as its only singularities. Denote by $2d$ the degree of $B$, and let $g$ be the genus of its desingularization, $c= \# \{ \mbox{cusps of}\, B\}$, and $n= \# \{ \mbox{nodes of} \, B\}$.
Let us fix $p\in \mathbb P^2\setminus B$ and denote by $\pi _1=\pi _1(\mathbb P^2\setminus B,\, p)$ the fundamental group of the complement of $B$. Choose any point $x\in B\setminus Sing\, B$ and consider a line $\Pi =\mathbb P^1\subset \mathbb P^2$ intersecting $B$ transversely at $x$. Let $\gamma \subset \Pi$ be a circle of small radius with center at $x$. If we choose an orientation on $\mathbb P^2$, then it defines an orientation on $\gamma $. Let $\Gamma $ be a loop consisting of a path $L$ in $\mathbb P^2\setminus B$ joining the point $p$ with a point $q\in \gamma$, the circuit in the positive direction along $\gamma $ beginning and ending at $q$, and a return to $p$ along the path $L$ in the opposite direction. Such loops $\Gamma$ (and the corresponding elements in $\pi _1$) will be called {\it geometric generators}. It is well-known that $\pi _1$ is generated by geometric generators, and any two geometric generators are conjugate in $\pi _1$ since $B$ is irreducible.
For each singular point $s_i$ of $B$ we choose a small neighborhood $U_i\subset \mathbb P^2$ such that $B\cap U_i$ is defined (in local coordinates in $U_i$) by the equations $y^2=x^3$, if $s_i$ is a cusp, and $y^2=x^2$, if $s_i$ is a node. Let $p_i$ be a point in $U_i\setminus B$. It is well-known that if $s_i$ is a cusp, then $\pi _1(U_i\setminus B,p_i)$ is isomorphic to the braid group $\mbox{Br}_3$ of 3-string braids and is generated by two geometric generators (say $a$ and $b$) satisfying the following relation
$$aba=bab.$$
If $s_i$ is a node, then $\pi _1(U_i\setminus B,p_i)$ is isomorphic to $\mathbb Z\oplus \mathbb Z$ generated by two commuting geometric generators.
Let us choose smooth paths $\gamma _i$ in $\mathbb P^2\setminus B$ joining $p_i$ and $p$. This choice
defines homomorphisms $\psi _i:\pi _1(U_i\setminus B,p_i)\to \pi _1$. Denote the image $\psi _i(\pi _1(U_i\setminus B,p_i))$ by $G_i$ if $s_i$ is a cusp, and by $\Gamma _i$
if $s_i$ is a node.
A generic morphism of degree $N$ determines a homomorphism $\varphi :\pi _1 \to \mathfrak S_N$, where $\mathfrak S_N$ is the symmetric group. This homomorphism $\varphi $ is determined uniquely up to inner automorphism of $\mathfrak S_N$.
\begin{pred} (\cite{Moi1},\cite{Kul1})\label{prop1}
The set of the non-equivalent generic morphisms of degree $N$ possessing the same discriminant curve $B$ is in one-to-one correspondence with
the set of the epimorphisms $\varphi :\pi _1(\mathbb P^2\setminus B)\to \mathfrak S_N$ (up to inner automorphisms of $\mathfrak S_N$) satisfying the following conditions:
$(i)$ for a geometric generator $\gamma $ the image $\varphi (\gamma )$ is a transposition in $\mathfrak S_N$;
$(ii)$ for each cusp $s_i$ the image $\varphi (G_i)$ is isomorphic to $\mathfrak S_3$ generated by two transpositions;
$(iii)$ for each node $s_i$ the image $\varphi (\Gamma _i)$
is isomorphic to $\mathfrak{S}_2 \times \mathfrak{S}_2$ generated by two commuting transpositions.
\end{pred}
{\bf 1.2.} Moishezon proved the following theorem.
\begin{thm} Let an epimorphism $\varphi :\pi _1(\mathbb P^2\setminus B)\to \mathfrak S_N$ satisfy conditions (($i$)-($iii$)) of Proposition \ref{prop1}. If the kernel $K$ of $\varphi $ is a solvable group, then Chisini's Conjecture holds for $B$.
\end{thm}
In particular, from this theorem, it follows that Chisini's Conjecture holds for the discriminant curves of generic morphisms of $S$ for $S=\mathbb P^2$, $S=\mathbb P^1\times \mathbb P^1$, and for the discriminant curves of generic projections of hypersurfaces in $\mathbb P^3$.
\newline {\bf 1.3.} In \cite{Kul1}, one can find the proof of the following
\begin{thm} \label{K1}
Let $B$ be the discriminant curve of a generic morphism
$f:S\to \mathbb P^2$ of $\deg f = N$. If
\begin{equation}
N > \frac{4(3d+g-1)}{2(3d+g-1)-c}, \label{in}
\end{equation}
then the generic morphism $f$ is uniquely determined by the pair $(\mathbb P^2,B)$ and thus Chisini's Conjecture holds for $B$.
\end{thm}
Theorem \ref{K1} shows that if the degree of a generic morphism with given discriminant curve $B$ is sufficiently large, then this generic morphism is unique for $B$. Almost all generic morphisms interesting from the algebro-geometric point of view satisfy this condition. More precisely, we have the following theorems (\cite{Kul1}).
\begin{thm}
Let $S$ be a projective non-singular surface, let $L$ be an ample divisor on $S$, let $f:S\to \mathbb P^2$ be a generic morphism given by a three-dimensional subsystem $\{ E\} \subset |mL|$, $m\in \mathbb Q$, and let $B$ be its discriminant curve. Then there exists a constant $m_0$ (depending on $L^2, \, (K_S,L), \, K^2_S,\, p_a$) such that $f$ is uniquely determined by the pair $(\mathbb P^2,B)$ if $m\geq m_0$.
\end{thm}
In particular, we have
\begin{thm}
Let $S$ be a surface of general type with ample canonical bundle $K_S$, $f:S\to \mathbb P^2$ a generic morphism given by a three-dimensional linear subsystem of $|E|$, where $E\equiv mK_S$, $m\in \mathbb N$ ($\equiv $ means numerical equivalence). Then $f$ is uniquely determined by the pair $(\mathbb P^2,B)$.
\end{thm}
{\bf 1.4.} The last theorem can be generalized to the case of generic morphisms $f:X\to \mathbb P^2$, where $X$ is the canonical model of a surface $S$ of general type (a surface with Du Val singularities) (\cite{Kul4}).
\begin{thm} (V.S. Kulikov and Vik.S. Kulikov)
Let $S_1$ and $S_2$ be two minimal models of surfaces of general type with $K_{S_1}^2=K_{S_2}^2$ and $\chi (S_1)=\chi (S_2)$, and $X_1$ and $X_2$ their canonical models. Let $B$ be the canonical discriminant curve of two $m$-canonical generic morphisms $f_1:X_1\to \mathbb P^2$ and $f_2:X_2\to \mathbb P^2$, that is, $f_i$, $i=1,2,$ is given by three-dimensional linear subsystems of $|mK_{X_i}|$. If $m\geq 4$, then $f_1$ and $f_2$ are equivalent.
\end{thm}
\section{Braid factorization and smooth equivalence.}
{\bf 2.1.} One can show that the braid factorization of an algebraic
plane curve $B$ uniquely determines the diffeomorphic type of the pair
$(\mathbb P^2,B)$. More precisely, one can prove
\begin{thm} (Vik.S. Kulikov and M. Teicher)
Let $B_1, B_2 \subset \mathbb P^2$ be two projective plane curves with the same braid factorization type. Then there exists a diffeomorphism
$\varphi : \mathbb P^2 \to \mathbb P^2$ such that $\varphi (B_1)=B_2$.
\end{thm}
The converse question remains open.
\begin{pro} \label{pr}
Let $B_1, B_2 \subset \mathbb P^2$ be two projective plane curves such that the pairs $(\mathbb P^2,B_1)$ and $(\mathbb P^2,B_2)$ are diffeomorphic. Do $B_1$ and $B_2$ belong to the same braid factorization type?
\end{pro}
I would like to mention here the following very important problem.
\begin{pro}
Let $\Delta^2_{2d}={\cal{E}}_1$ and $\Delta^2_{2d}={\cal{E}}_2$ be two braid factorizations. Does there exist a finite algorithm to recognize whether these two braid factorizations belong to the same braid factorization type or not?
\end{pro}
\section{Smooth types of surfaces and smooth types \\ of pairs $(\mathbb P^2,B)$.}
{\bf 3.1.}
The set of plane curves of degree $2d$ is naturally parameterized by the points of $\mathbb P^{d(2d+3)}$. The subset of plane irreducible curves of degree $2d$ and genus $g$ with $c$ ordinary cusps and some nodes as the only singularities corresponds to a quasi-projective subvariety ${\cal{M}}(2d,g,c)\subset \mathbb P^{d(2d+3)}$ (\cite{Wah}). One can show that if two non-singular points of the same irreducible component of ${\cal{M}}_{red}(2d,g,c)$ correspond to curves $B_1$ and $B_2$, then the pairs $(\mathbb P^2,B_1)$ and $(\mathbb P^2,B_2)$ are diffeomorphic. In particular, in this case the fundamental groups $\pi _1(\mathbb P^2\setminus B_1)$ and $\pi _1(\mathbb P^2\setminus B_2)$ are isomorphic. Moreover, in this case $B_1$ and $B_2$ have the same braid factorization type.
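The dimension $d(2d+3)$ of the parameter space comes from a monomial count: a plane curve of degree $2d$ is the zero set of a form in three variables, there are $\binom{2d+2}{2}$ such monomials, and projectivizing removes one dimension, so $\binom{2d+2}{2}-1=d(2d+3)$. A quick check of this identity:

```python
from math import comb

def dim_plane_curves(n):
    """Projective dimension of the space of plane curves of degree n:
    the number of degree-n monomials in three variables, minus one."""
    return comb(n + 2, 2) - 1

# Curves of degree 2d are parameterized by the points of P^{d(2d+3)}.
for d in range(1, 20):
    assert dim_plane_curves(2 * d) == d * (2 * d + 3)
print(dim_plane_curves(2))  # 5: the space of plane conics is P^5
```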
The following Proposition is a simple consequence of Proposition \ref{prop1} and some local properties of generic morphisms.
\begin{pred}
Let $(\mathbb P^2,B_1)$ and $(\mathbb P^2,B_2)$ be two diffeomorphic (resp. homeomorphic) pairs. If $B_1$ is the discriminant curve of a generic morphism $(S_1,f_1)$, then
$B_2$ is also the discriminant curve of some generic morphism $(S_2,f_2)$. Moreover, if $(S_1,f_1)$ is unique, i.e. Chisini's Conjecture holds for $B_1$, then the same is true for $(S_2,f_2)$ and $S_1$ and $S_2$ are diffeomorphic (resp. homeomorphic).
\end{pred}
{\bf 3.2.} Let $B$ be the discriminant curve of some generic morphism $f:S\to \mathbb P^2$ of $\deg f = N$. Suppose Chisini's Conjecture holds for $B$. Let $f^{*}(B)=2R+C$ be the preimage of $B$, and suppose that for $\pi _1(S\setminus C)$ there exists a unique (up to an inner automorphism) epimorphism $\varphi :\pi _1(S\setminus C)\to \mathfrak{S}_{N-1}$. In this case, we say that Chisini's Conjecture holds {\it twice} for $B$.
If it is the case, one can prove the following
\begin{thm} (\cite{Kul3})
Let $B_i$, $i=1,2,$ be the discriminant curve of a generic morphism $f_i:S_i\to \mathbb P^2$. Put $f_i^{*}(B_i)=2R_i+C_i$. Suppose Chisini's Conjecture holds twice for $B_i$. Then there exists a diffeomorphism (resp. a homeomorphism) of pairs $\varphi : (\mathbb P^2,B_1) \to (\mathbb P^2,B_2)$ if and only if
there exists a diffeomorphism (resp. a homeomorphism) $\Phi :S_1 \to S_2$ such that $\Phi(R_1)=R_2$, $\Phi(C_1)=C_2$ and such that the following diagram is commutative
$$
\xymatrix{
S_1 \ar[d]_{f_1} \ar[r]^{\Phi } & S_2 \ar[d]^{f_2} \\
\mathbb P^2 \ar[r]_{\varphi } & \mathbb P^2 . }
$$
\end{thm}
{\bf 3.3.} For $S\subset \mathbb P^r$, a projection $f:S\to \mathbb P^2$ is defined by a point of the Grassmannian $\mbox{Gr}_{r+1,r-2}$ (the base locus of the projection). It is well known that the set of generic projections is in one-to-one correspondence with a Zariski-open subset $U_S$
of $\mbox{Gr}_{r+1,r-2}$. A continuous variation of a point in $U_S$ gives rise to a continuous family of generic projections of $S$, whose branch curves belong to the same continuous family of plane cuspidal curves. Therefore the discriminant curves of two generic projections of $S\subset \mathbb P^r$ belong to the same irreducible component of ${\cal{M}}(2d,g,c)$. Moreover, since they belong to the same irreducible component of ${\cal{M}}(2d,g,c)$, it is easy to see that these discriminant curves have the same braid factorization type.
In particular, if two surfaces $S_1$ and $S_2$ of general type with the same $K^2_S=k$ and $p_a=\chi ({\cal{O}}_{S_i}) = p$ are embedded by the $m$th canonical class into the same projective space $\mathbb P^r$ and belong to the same
irreducible component of coarse moduli space ${\cal{M}}_S(k,p)$ of surfaces with given invariants (\cite{Gie}), then there exist generic projections $f_1$ of $S_1$ and $f_2$ of $S_2$ belonging to the same continuous family of generic projections. Therefore, discriminant curves (we will call them {\it $m$-canonical discriminant curves}) of two such generic projections of
$S_1$ and $S_2$, belonging to the same
irreducible component of a moduli space ${\cal{M}}_S(k,p)$, belong to the same irreducible component of ${\cal{M}}(2d,g,c)$ (cf. \cite{Wah}).
By Theorem 2 and by Propositions 5 and 6 in \cite{Kul1}, for a surface of general type with ample canonical class the triple of integers $(m,k,p)$ is uniquely determined by the invariants $(d,g,c)$ of the $m$-canonical discriminant curve, and vice versa. Thus, we have a natural mapping $ir_{k,p,m}$ (resp. $var_{k,p,m}$) from the set of irreducible (resp. connected) components of ${\cal{M}}_S(k,p)$ to the set of irreducible (resp. connected) components of ${\cal{M}}(2d,g,c)$. Hence by Proposition 1 and Theorems 4 and 5, we have
\begin{thm} (\cite{Kul4}) Assume that for all surfaces $S$ of general type with the same $K^2_S=k$ and $p_a=\chi ({\cal{O}}_{S_i})=p$ there exists a three dimensional linear subsystem of $|mK_S|$ which gives a generic morphism $f:S\to \mathbb P^2$. Then
\newline ($i$) $ir_{k,p,m}$ is injective for any $m\in \mathbb N$; \newline ($ii$) $var_{k,p,m}$ is injective if $m\geq 4$.
\end{thm}
{\bf 3.4.}
From Theorems 4, 6 and Proposition 2 it follows
\begin{thm} (Vik.S. Kulikov and M. Teicher) Let $B_i$, $i=1,2,$ be
an $m$-canonical discriminant curve of a generic morphism $f_{m,i}:S_i\to \mathbb P^2$. If $B_1$ and $B_2$ have the same braid factorization type, then $S_1$ and $S_2$ are diffeomorphic.
\end{thm}
Applying Theorems 4 and 5 we obtain
\begin{thm}
The braid factorization type of an $m$-canonical discriminant curve $B_m$ is an invariant of the deformation type of the corresponding surface $S$ of general type.
\end{thm}
\begin{zam}
A negative solution of Problem \ref{pr} implies a negative solution of Diff-Def Problem.
\end{zam}
\begin{zam}
In \cite{Man}, Manetti announced that Diff-Def Problem has a negative solution.
\end{zam}
{\bf 3.5.}
One can prove the following theorem using arguments similar to those in \cite{Kul1}.
\begin{thm} (\cite{Kul3}) Chisini's Conjecture holds twice for the $m$-canonical discriminant curves.
\end{thm}
Applying this theorem and Theorem 7, we have
\begin{thm} \label{dif} (\cite{Kul3})
Let $B_i$, $i=1,2,$ be an $m$-canonical discriminant curve of some generic morphism $f_{m,i}:S_{i}\to \mathbb P^2$. Put $f_i^{*}(B_i)=2R_i+C_i$. Then the pairs $(\mathbb P^2,B_1)$ and $(\mathbb P^2,B_2)$ are diffeomorphic (resp. homeomorphic) if and only if there exists a diffeomorphism (resp. a homeomorphism) $\Phi :S_1 \to S_2$ such that $\Phi(R_1)=R_2$, $\Phi(C_1)=C_2$ and such that the following diagram is commutative
$$
\xymatrix{
S_1 \ar[d]_{f_1} \ar[r]^{\Phi } & S_2 \ar[d]^{f_2} \\
\mathbb P^2 \ar[r]_{\varphi } & \mathbb P^2 , }
$$
where $\varphi $ is a diffeomorphism (resp. a homeomorphism) of the pairs $(\mathbb P^2,B_i)$.
\end{thm}
{\bf 3.6.}
In \cite{Cat1} and \cite{Cat2}, Catanese investigated smooth simple bidouble coverings $\varphi : S\to Q=\mathbb P^1\times \mathbb P^1$ of type $(a,b),(m,n)$; thus, $\varphi $ is a finite $(\mathbb Z/2)^2$ Galois covering branched along two generic curves of respective bidegrees $(2a,2b)$, $(2m,2n)$. He proved that for each integer $k$ there exists at least one $k$-tuple $S_1,...,\, S_k$ of bidouble coverings of $Q$ of respective types $(a_i,b_i),(m_i,n_i)$ satisfying the following conditions:
$(i)$ $S_i$ and $S_j$ are homeomorphic for each $1\leq i,j\leq k $.
$(ii)$ $r(S_i)\neq r(S_j)$ for $i\neq j$ (and therefore $S_i$ and $S_j$ are not diffeomorphic), where $r(S_i)= \max \{ s\in \mathbb N\, \, |\, \, (1/s)K_{S_i}\in H^2(S_i,\mathbb Z)\} $ is the index of $S_i$. \newline We shall call a $k$-tuple of surfaces of general type, satisfying conditions ($i$) and ($ii$), {\it a Catanese $k$-tuple}.
As a corollary of Theorem \ref{dif} we obtain
\begin{thm} (\cite{Kul2})
Let $B_{m,i}$, $i=1,2,$ be $m$-canonical discriminant curves of generic morphisms $f_{m,i}:S_{i}\to \mathbb P^2$, where $(S_1,S_2)$ is a Catanese pair. Then the pairs $(\mathbb P^2,B_{m,1})$ and $(\mathbb P^2,B_{m,2})$ are not homeomorphic. In particular, $B_{m,1}$ and $B_{m,2}$ belong to different braid factorization types.
\end{thm}
Theorem 13 shows that the braid factorization types of $m$-canonical discriminant curves of Catanese's surfaces distinguish the diffeomorphism types of these surfaces.
\section{Moishezon 4-manifolds.}
{\bf 4.1.} In \cite{Moi1}, Moishezon remarked that for any cuspidal factorization $\Delta^2_{2d}={\cal{E}}$ and a projection $pr:\mathbb C^2\to \mathbb C^1$, it is possible to construct a cuspidal curve $\overline B$ and a topological embedding $i:\overline B\to \mathbb P^2$ which is not complex-analytic, but behaves as a complex-analytic one with respect to the rational map $\pi: \mathbb P^2\to \mathbb P^1$, so that the braid monodromy of $i(\overline B)$ with respect to $\pi $ is well defined and is represented by $\Delta^2_{2d}={\cal{E}}$. We shall call such curves $i(\overline B)$ {\it semi-algebraic}.
Assume that there exists an epimorphism $\varphi :\pi _1(\mathbb P^2\setminus \overline B)\to \mathfrak S_N$ satisfying the conditions described in Proposition 1. Then one can construct a smooth 4-manifold $S$ and
a $\cal{C}^{\infty}$-map $f:S\to \mathbb P^2$ which outside of
$i(\overline B)$ is a non-ramified covering of degree $N$ with monodromy homomorphism equal to $\varphi$, and over a neighborhood of $i(\overline B)$ locally behaves like a generic morphism branched along $i(\overline B)$ with local monodromy induced by $\varphi$. We call such an $S$ {\it a Moishezon 4-manifold}.
\begin{gip} The class of Moishezon 4-manifolds coincides with the class of symplectic 4-manifolds.
\end{gip}
{\bf 4.2.} {\bf Problem}. {\it To realize Boris Moishezon's Program for the Moishezon 4-manifolds.}
\section{Introduction}
The Bethe-Salpeter (BS) equation provides a natural basis for the
relativistic treatment of bound $q\bar{q}$ systems in the framework
of the constituent quark model. However, due to the lack of a
probability interpretation of the 4-dimensional (4D) BS amplitude,
as well as due to serious mathematical pathologies inherent
in the BS approach to the bound-state problem, various
3-dimensional (3D) reduction schemes of the original BS equation are
usually used. As is well known, the simplest version of this sort of
reduction immediately arises if the kernel of the BS equation is taken in
the instantaneous (static) approximation. As a result, the Salpeter
equation is obtained. The Salpeter equation was used for the description
of the bound $q\bar{q}$ system without further approximation in
Refs.~\cite{Long,ACK,FBS,FBS1,Lag,FBSA,Spence,RMMP,4A,Olson,Parramore,PJP},
whereas some additional approximations were
made in Refs.~\cite{Mit,GDD}. However, as is well known, the
Salpeter equation itself is not free of drawbacks. Namely, it does not
have a correct one-body limit (the Dirac equation) when the mass of one of
the particles tends to infinity. From the general viewpoint, this property
is expected to be important for the $q\bar{q}$ system with one heavy and one light
quark. In order to avoid the above difficulty, in Refs.~\cite{MW,CJ}
the effective noninteracting 3D Green function for two fermions was
chosen in the form that guarantees the correct one-body limit of 3D
relativistic equations with the static BS kernel. These versions of the 3D
equations will hereafter be referred to as the MW and CJ versions,
respectively. A new version of the effective free propagator for two scalar
particles which also possesses this property was suggested in Ref.~\cite{MNK}.
The effective 3D Green function for two noninteracting fermions
can be constructed from this propagator in a standard way.
Taking into account the fact that the relativistic effects are
important for $q\bar{q}$ systems with quarks from light and light-heavy
sectors, it seems interesting to carry out the investigation of this
sort of systems in the framework of the above-mentioned different versions
of 3D relativistic equations. This will allow one to shed light on the
problem of ambiguity
coming from the choice of a particular 3D reduction scheme of the BS
equation, and to find the characteristics of the bound $q\bar q$ systems,
which are more sensitive to this choice.
For the meson mass spectrum this problem was addressed
in Refs.~\cite{TT1,TT2}, where the MW and CJ versions of 3D
relativistic equations together with the Salpeter equation (Sal. version)
were considered in the configuration space to (partially) avoid
the difficulties related to a highly singular behavior of the
linear confining potential in the momentum space at the zero
momentum transfer.
The version of the 3D reduction of the BS equation suggested
in Ref.~\cite{MNK} significantly differs from the MW and CJ
versions and can be written down only in
the momentum space. Consequently, it seems interesting to study together all
versions in the momentum space
and to investigate a wider class of characteristics of the bound systems,
including the decay characteristics of the mesons, which are sensitive to
the behavior of the meson wave functions, and the meson mass spectrum.
These problems will form the subject of the present paper.
The layout of the present paper is as follows. In Section II, we present different
versions of the 3D bound-state equations in the momentum space, and perform
the partial-wave expansion of the obtained equations. The numerical solution
of these equations with the oscillator-type potential is considered in
Section III. The general structure of the meson mass spectra obtained
from the solution of these equations is discussed in detail. In Section
IV, the leptonic decay characteristics of the pseudoscalar and vector
mesons are calculated using the wave functions obtained from the solution
of these equations.
\section{The relativistic 3D equations}
The relativistic 3D equations for the wave function of the bound
$q\bar{q}$ systems, corresponding to the instantaneous
kernel of the BS equation, i.e. when
$K(P;p,p')\rightarrow K_{st}(\vec p,\vec p~')$,
can be written for all versions considered below in the
center-of-mass frame (c.m.f.) in the common form
\begin{equation}\label{eq1}
\tilde{\Phi}_ {M}(\vec p\,) =
\tilde{G}_{0eff}(M,\vec p\,)
\int\frac{d^3\vec p~'}{(2\pi)^{3}}\,\,
\bigl[\, iK_{st}(\vec p,\vec p~') \equiv
\hat{V}(\vec p,\vec p~')\,\bigr]\,\,
\tilde{\Phi}_{M}(\vec p~')
\end{equation}
\noindent
where $M$ is the mass of the bound system, and the equal-time wave function
$\tilde{\Phi}_{M}(\vec p\,)$ is related to the BS amplitude $\Phi_{P}(p)$ as
\begin{equation}
\tilde{\Phi}_{M}(\vec p\,) =
\int\frac{dp_{0}}{2\pi}\,\Phi_{P=(M,\vec 0\,)}(p)
\end{equation}
\noindent
The effective 3D Green function $\tilde{G}_{0eff}$ of the system of two
noninteracting quarks is defined as
\begin{equation}\label{eq3}
\tilde{G}_{0eff}(M,\vec p\,)=\int\frac{dp_0}{2\pi i}\,
\bigl[\, G_{0eff}(M;p) =
g_{0eff}(M;p)(\not\! p_{1}+m_{1})(\not\! p_{2}+m_{2})\,\bigr]
\end{equation}
\noindent
Here
$g_{0eff}(M;p)$ is the effective propagator of two scalar particles. The
operator $\tilde{G}_{0eff}$ is given in the form
\begin{equation}\label{eq4}
\tilde{G}_{0eff}(M,\vec p\,) =
\sum\limits_{\alpha_{1}=\pm}\sum\limits_{\alpha_{2}=\pm}\,
\frac{D^{(\alpha_{1}\alpha_{2})}(M;p)}{d(M;p)}\,\,
\Lambda_{12}^ {(\alpha_1\alpha_2)}
(\vec p,-\vec p\,)\,\,\gamma_{1}^{0}\gamma_{2}^{0},\,\,\,\,\,\,\,\,
p\equiv|\vec p\,|
\end{equation}
\noindent
where the projection operators $\Lambda_{12}^{(\alpha_1\alpha_2)}$
are defined by
\begin{eqnarray}
\Lambda_{12}^{(\alpha_1\alpha_2)}(\vec p_{1},\vec p_{2})
=\Lambda_{1}^{(\alpha_1)}(\vec p_{1})\otimes
\Lambda_{2}^{(\alpha_2)}(\vec p_{2})&,&\,\,\,\,\,
\Lambda_{i}^{(\alpha_i)}(\vec p_{i})
=\frac{\omega_{i}+\alpha_i\hat h_{i}(\vec p_{i})}{2\omega_{i}}
\nonumber\\[2mm]
\hat h_{i}(\vec p_{i})=\gamma_{i}^{0}(\vec\gamma_{i}\,\vec p_{i})+
m_{i}\gamma_{i}^{0}&,&\,\,\,\,\,\,\,\,
\omega_{i}= \bigl( m_{i}^{2}+\vec p_{i}^{~2}\bigr)^{1/2}
\end{eqnarray}
\noindent
and the functions $D^{(\alpha_{1}\alpha_{2})}(M;p)$ and $d(M;p)$ are
given by the expressions (see Ref.~\cite{BKA})
\begin{eqnarray}
D^{(\alpha_{1}\alpha_{2})}=
\frac{(-1)^{\alpha_{1}+\alpha_{2}}}
{\omega_{1}+\omega_{2}-(\alpha_{1}E_{1}+\alpha_{2}E_{2})},
\,\,\,\,\,
d=1
&\nonumber\\[2mm]
E_{1}+E_{2}=M,\,\,\,\,
E_{1}-E_{2}=\frac{m_{1}^{2}-m_{2}^{2}}{M}\equiv b_0
&{\mbox{\quad(MW version)}}\\[2mm]
D^{(\alpha_{1}\alpha_{2})}=
(E_{1}+\alpha_{1}\omega_{1})(E_{2}+\alpha_{2}\omega_{2})
&\nonumber\\[2mm]
d=2(\omega_{1}+\omega_{2})a,\,\,\,\,a=
E_{i}^{2}-\omega_{i}^{2}=[M^{2}+b_{0}^{2}-2(\omega_{1}^{2}+\omega_{2}^{2})]/4
&\mbox{\quad(CJ version)}
\label{eq7}
\end{eqnarray}
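Since $\hat h_i^2(\vec p_i)=\omega_i^2$, the operators $\Lambda_i^{(\pm)}$ are complementary energy projectors: idempotent, mutually orthogonal, and summing to the identity. A small numerical sanity check in the standard Dirac representation (the momentum and mass values below are arbitrary):

```python
import numpy as np

# Pauli matrices and Dirac alpha, beta in the standard representation
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))
beta = np.block([[I2, Z2], [Z2, -I2]])
alpha = [np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]

def h(p, m):
    """One-particle Dirac Hamiltonian h = alpha.p + beta m (so h^2 = omega^2)."""
    return sum(pi * ai for pi, ai in zip(p, alpha)) + m * beta

def Lam(p, m, sign):
    """Energy projector (omega + sign * h) / (2 omega)."""
    omega = np.sqrt(m**2 + np.dot(p, p))
    return (omega * np.eye(4) + sign * h(p, m)) / (2 * omega)

p, m = np.array([0.3, -0.7, 1.1]), 0.25
Lp, Lm = Lam(p, m, +1), Lam(p, m, -1)
assert np.allclose(Lp @ Lp, Lp)                # idempotent
assert np.allclose(Lp @ Lm, np.zeros((4, 4)))  # orthogonal
assert np.allclose(Lp + Lm, np.eye(4))         # complete
```

The two-particle projectors $\Lambda_{12}^{(\alpha_1\alpha_2)}$ inherit these properties as tensor products of the one-particle ones.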
\noindent
Note that for the case of the CJ version Eq. (\ref{eq7}) is obtained from
Eqs. (\ref{eq3}) and (\ref{eq4})
by using the expression for $g_{0eff}(M;p)$ defined
from the dispersion relation which guarantees the elastic unitarity.
The same condition is satisfied by the expression of
$g_{0eff}(M;p)$ suggested in Ref.~\cite{MNK} (see formula (10) of
that paper). According to this condition,
the particles in the intermediate states are allowed to go off shell
in proportion to their masses, so that when one of the particles becomes
infinitely massive it is automatically kept fully on mass shell
and the equation reduces to the one-body equation. Using this
expression for $g_{0eff}(M;p)$ in Eq. (\ref{eq3}), we derive the expression
for $\tilde{G}_{0eff}$ having the form (\ref{eq4}) where
\begin{eqnarray}
&&
D^{(\alpha_{1}\alpha_{2})}=
(E_{1}+\alpha_{1}\omega_{1})(E_{2}+\alpha_{2}\omega_{2})-\frac{R-b}{2y}
\biggl[\frac{R-b}{2y}
+(E_{1}+\alpha_{1}\omega_{1})-(E_{2}+\alpha_{2}\omega_{2})\biggr]
\nonumber\\[2mm]
&&
d=2RB,
\hspace*{0.5cm}
R=\bigl(b^{2}-4y^{2}a\bigr)^{1/2},
\hspace*{0.5cm}
B=\frac{R-b}{2y}\biggl[\frac{R-b}{2y}+b_{0}\biggr]+a,
\nonumber\\[2mm]
&&
b=M+b_{0}y,
\hspace*{0.5cm}
y=\frac{m_{1}-m_{2}}{m_{1}+m_{2}}
\label{eq8}
\end{eqnarray}
Hereafter this version will be referred to as the MNK version
(for our comments on Ref.~\cite{MNK}, see Ref.~\cite{BKA}).
Using the properties of the projection operators
$\Lambda_{12}^ {(\alpha_1\alpha_2)}$ and Eqs.~(\ref{eq4})-(\ref{eq8}),
the following system of equations can be derived from Eq. (\ref{eq1})
\begin{eqnarray}
&&[M-(\alpha_{1}\omega_{1}+\alpha_{2}\omega_{2})]
\tilde{\Phi}_ {M}^{(\alpha_{1}\alpha_{2})}(\vec p\,)=
\nonumber\\[2mm]
&=&A^{(\alpha_{1}\alpha_{2})}(M;p)\,
\Lambda_{12}^{(\alpha_1\alpha_2)}(\vec p,-\vec p\,)
\int\frac{d^3\vec p~'}{(2\pi)^{3}}\,
\gamma_{1}^{0}\gamma_{2}^{0}\,
\hat{V}(\vec p,\vec p~')
\sum\limits_{\alpha_{1}^{'}=\pm}\,\sum\limits_{\alpha_{2}^{'}=\pm}\,
\tilde{\Phi}_ {M}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(\vec p~')
\label{eq9}
\end{eqnarray}
\noindent
where $\tilde{\Phi}_ {M}^{(\alpha_{1}\alpha_{2})}(\vec p\,)=
\Lambda_{12}^{(\alpha_1\alpha_2)}(\vec p,-\vec p\,)\
\tilde{\Phi}_ {M}(\vec p)$
and the functions $A^{(\alpha_{1}\alpha_{2})}(M;p)$ are given by
\begin{eqnarray}
A^{(\pm\pm)}=\pm 1,
\hspace*{1.cm}
A^{(\pm\mp)}=\frac{M}{\omega_{1}+\omega_{2}}
&&{\mbox{\quad(MW version)}}
\label{eq10}
\\[2mm]
A^{(\alpha_{1}\alpha_{2})}=
\frac{M+(\alpha_{1}\omega_{1}+\alpha_{2}\omega_{2})}
{2(\omega_{1}+\omega_{2})}
&&{\mbox{\quad(CJ version)}}
\label{eq11}
\\[2mm]
A^{(\alpha_{1}\alpha_{2})}=\frac{1}{2RB}\biggl\{
a[M+(\alpha_{1}\omega_{1}+\alpha_{2}\omega_{2})] - \hspace*{4.2cm} &
\nonumber\\[2mm]
-[M-(\alpha_{1}\omega_{1}+\alpha_{2}\omega_{2})]
\frac{R-b}{2y}
\biggl[\frac{R-b}{2y}+(E_{1}+\alpha_{1}\omega_{1})-
(E_{2}+\alpha_{2}\omega_{2})\biggr]\biggr\}
&&{\mbox{\quad(MNK version)}}
\label{eq12}
\end{eqnarray}
As to the Salpeter equation, it can be obtained from the MW version
by putting $A^{(\pm\mp)}=0$ and $\tilde{\Phi}_{M}^{(\pm\mp)}=0$.
Note that for quarks with equal masses $(m_{1}=m_{2}=m)$
and $\omega=\bigl( m^{2}+\vec p^{~2}\bigr)^{1/2}$, from
Eqs. (\ref{eq10}) and (\ref{eq11}) we have
\begin{eqnarray}
A^{(\pm\pm)}=\pm 1,
\hspace*{1.cm}
A^{(\pm\mp)}=\frac{M}{2\omega}
\hspace*{1.cm}
&&{\mbox{(MW version)}}
\label{eq13}
\\[2mm]
A^{(\pm\pm)}=
\frac{M+2\omega}{4\omega}
\hspace*{1.cm}
A^{(\pm\mp)}=\frac{M}{4\omega}
\hspace*{1.cm}
&&{\mbox{(CJ version)}}
\label{eq14}
\end{eqnarray}
One observes from Eqs. (\ref{eq13}) and (\ref{eq14}) that in the
equal-mass limit the bound-state mass $M$ enters multiplicatively in the
coefficients in front of the mixed-energy components
$\tilde{\Phi}_{M}^{(\pm\mp)}(\vec p\,)$ both in the l.h.s. and r.h.s.
of Eq. (\ref{eq9}). Consequently, dividing both sides of the
equations for these components by $M$, one arrives at the (nondynamical)
constraints on all components of the wave function
which must be considered together with the remaining two dynamical equations
for the components $\tilde{\Phi}_{M}^{(\pm\pm)}(\vec p\,)$.
These equations for the bound state mass $M$
are linear in the MW version and are nonlinear in the CJ version due to the fact
that the r.h.s. of the equations depends on the value of $M$ one is looking
for. In the MNK version, taking into account the property of the function
$R$ given by Eq. (\ref{eq8}),
$\lim\limits_{m_{1}\rightarrow m_{2}}
R/y=\lim\limits_{m_{1}\rightarrow m_{2}}M/y$,
we obtain from Eq. (\ref{eq12})
\begin{eqnarray}
A^{(\pm\pm)}=
\frac{M+2\omega}{2\omega}
\hspace*{1.2cm}
A^{(\pm\mp)}=\frac{1}{2}
&&{\mbox{\quad\quad(MNK version)}}
\end{eqnarray}
In this case, it follows from Eq. (\ref{eq9}) that the characteristic features
of the bound-state equations remain
unchanged: again one has a system of equations nonlinear in $M$,
which includes all components of the wave function
$\tilde{\Phi}_{M}^{(\alpha_{1}\alpha_{2})}(\vec p\,)$, as in the case
of quarks with unequal masses.
Further, we write the unknown function
$\tilde{\Phi}_{M}^{(\alpha_{1}^{'}\alpha_{2}^{'})}$
in Eq. (\ref{eq9}) in the form analogous to that used in Ref.~\cite{4A},
where the bound $q\bar{q}$ systems were studied in the framework of the
Salpeter equation
\begin{eqnarray}
&&
\tilde{\Phi}_ {M}^{(\alpha_{1}\alpha_{2})}(\vec p\,)=
N_ {12}^{(\alpha_{1}\alpha_{2})}(p)
\pmatrix{1\cr
\alpha_{1}(\vec \sigma_{1}\vec p\,)/(\omega_{1}+\alpha_{1}m_{1})
}
\otimes
\pmatrix{1\cr
-\alpha_{2}(\vec \sigma_{2}\vec p\,)/(\omega_{2}+\alpha_{2}m_{2})
}
\chi_{M}^{(\alpha_{1}\alpha_{2})}(\vec p\,)
\nonumber\\
&&\label{eq16}
\end{eqnarray}
\noindent where
\begin{equation}\label{eq17}
N_ {12}^{(\alpha_{1}\alpha_{2})}(p)=
\biggl(\frac{\omega_{1}+\alpha_{1}m_{1}}{2\omega_{1}}\biggr)^{1/2}
\biggl(\frac{\omega_{2}+\alpha_{2}m_{2}}{2\omega_{2}}\biggr)^{1/2}
\equiv N_ {1}^{(\alpha_{1})}(p)N_ {2}^{(\alpha_{2})}(p)
\end{equation}
Then, if the $q\bar{q}$ interaction potential operator
$\hat{V}(\vec p,\vec p~')$ is taken in the form \cite{4A}
\begin{equation}\label{eq18}
\hat{V}(x;\vec p,\vec p~')=\gamma_{1}^{0}\gamma_{2}^{0}
\hat{V}_{og}(\vec p - \vec p~')+
[x\gamma_{1}^{0}\gamma_{2}^{0}+(1-x)I_{1}I_{2}]
\hat{V}_{c}(\vec p - \vec p~'),
\hspace*{0.8cm}
(0\leq x \leq1),
\end{equation}
\noindent
the following system of equations for the Pauli $2\otimes2$ wave
functions $\chi_{M}^{(\alpha_{1}\alpha_{2})}$ can be derived
\begin{eqnarray}
&&
[M-(\alpha_{1}\omega_{1} + \alpha_{2}\omega_{2})]
\chi_{M}^{(\alpha_{1}\alpha_{2})}(\vec p\,)=
\nonumber\\[2mm]
&=&A^{(\alpha_{1}\alpha_{2})}(M;p)
\sum\limits_{\alpha_{1}^{'}=\pm}\,\sum\limits_{\alpha_{2}^{'}=\pm}\,
\int\frac{d^3\vec p~'}{(2\pi)^{3}}\,
\hat{V}_{eff}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}
(\vec p,\vec p~',\vec \sigma_{1},\vec \sigma_{2})
\chi_{M}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(\vec p~')
\label{eq19}
\end{eqnarray}
\noindent
where
\begin{eqnarray}
\hat{V}_{eff}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}
(\vec p,\vec p~',\vec \sigma_{1},\vec \sigma_{2})
&=&
N_ {12}^{(\alpha_{1}\alpha_{2})}(p)\biggl[V(1;\vec p-\vec p~')
\hat{B}_{1}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}
(\vec p,\vec p~',\vec \sigma_{1},\vec \sigma_{2})
\nonumber\\[2mm]
&+&V(x;\vec p-\vec p~')
\hat{B}_{2}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}
(\vec p,\vec p~',\vec \sigma_{1},\vec \sigma_{2})
\biggr]
N_ {12}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(p')
\end{eqnarray}
\begin{equation}
\hat{B}_{1}^{(\alpha_{1}\alpha_{2}\alpha_{1}^{'}\alpha_{2}^{'})}=1+
\frac{\alpha_{1}\alpha_{2}\alpha_{1}^{'}\alpha_{2}^{'}
({\vec{\sigma}}_{1}{\vec p}\,)({\vec{\sigma}}_{2}{\vec p}\,)
({\vec{\sigma}}_{1}{\vec p~'})({\vec{\sigma}}_{2}{\vec p~'})}
{(\omega_{1}+\alpha_{1} m_{1})(\omega_{2}+\alpha_{2} m_{2})
(\omega_{1}^{'}+\alpha_{1}^{'} m_{1})(\omega_{2}^{'}+\alpha_{2}^{'} m_{2})}
\end{equation}
\begin{equation}
\hat{B}_{2}^{(\alpha_{1}\alpha_{2}\alpha_{1}^{'}\alpha_{2}^{'})}=
\frac{\alpha_{1}\alpha_{1}^{'} {(\vec{\sigma}}_{1}{\vec p}\,)
({\vec{\sigma}}_{1}{\vec p~'})}{(\omega_{1}+\alpha_{1} m_{1})
(\omega_{1}^{'}+\alpha_{1}^{'} m_{1})}+
\frac{\alpha_{2}\alpha_{2}^{'} ({\vec{\sigma}}_{2}{\vec p}\,)
({\vec{\sigma}}_{2}{\vec p~'})}{(\omega_{2}+\alpha_{2} m_{2})
(\omega_{2}^{'}+\alpha_{2}^{'} m_{2})}
\end{equation}
\begin{equation}
V(x;\vec p-\vec p~')=V_{og}(\vec p-\vec p~')
+(2x-1)V_{c}(\vec p-\vec p~')
\end{equation}
\noindent
Now using the partial-wave expansion
\begin{equation}\label{eq24}
\chi_{M}^{(\alpha_{1}\alpha_{2})} (\vec p\,)=
\sum\limits_{LSJM_{J}}\bigl<\hat{p}\mid LSJM_{J} \bigr>
R_{LSJ}^{(\alpha_{1}\alpha_{2})}(p)
,\,\,\,\,\,\,\,\,\biggl(\hat{p}=\frac{\vec p}{p}\biggr)
\end{equation}
\noindent
\begin{equation}
V(\vec p-\vec p~')=(2\pi)^{3}\sum\limits_{LSJM_{J}}V^{L}(p,p')
\bigl<\hat{p}\mid LSJM_{J} \bigr>\bigl<LSJM_{J}\mid\hat{p}\bigr>
\end{equation}
\noindent
where
\begin{equation}\label{eq26}
V^{L}(p,p^{'})=\frac{2}{\pi}\int_{0}^{\infty}j_{L}(pr)\,V(r)\,j_{L}(p^{'}r)\,r^{2}dr
\end{equation}
\noindent
with $j_{L}(x)$ being the spherical Bessel function,
the following system of equations can
be obtained from Eq. (\ref{eq19})
for the radial wave functions
$R_{LSJ}^{(\alpha_{1}\alpha_{2})}(p)$
\begin{eqnarray}
&&[M-(\alpha_{1}\omega_{1}+\alpha_{2}\omega_{2})]
R_{J(^{0}_{1})J}^{(\alpha_{1}\alpha_{2})}(p)=
A^{(\alpha_{1}\alpha_{2})}(M;p)N_ {12}^{(\alpha_{1}\alpha_{2})}(p)
\times
\nonumber\\[2mm]
&\times&
\sum\limits_{\alpha_{1}^{'}\alpha_{2}^{'}}
\int_{0}^{\infty} p^{'2}dp^{'}\biggl \{\biggl[\biggl(1+a_{12 \otimes 12}^
{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p,p{'})\biggr)
V^{J}(1;p,p^{'})+
\nonumber\\[2mm]
&+&a_{\oplus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}
\alpha_{2}^{'})}(p,p^{'})V^{(^{0}_{1})}_{\oplus J}(x;p,p^{'})\biggr]
R_{J(^{0}_{1})J}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(p^{'})-
\nonumber\\[2mm]
&-&\biggl[a_{\ominus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p,p^{'})
V_{\ominus J}(x;p,p^{'})\biggr]
R_{J(^{1}_{0})J}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(p^{'})\biggr\}
N_ {12}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(p')
\nonumber
\end{eqnarray}
\begin{eqnarray}
&&
[M-(\alpha_{1}\omega_{1}+\alpha_{2}\omega_{2})]
R_{J\pm11J}^{(\alpha_{1}\alpha_{2})}(p)=
A^{(\alpha_{1}\alpha_{2})}(M;p)N_ {12}^{(\alpha_{1}\alpha_{2})}(p)
\times
\nonumber\\[2mm]
&\times&
\sum\limits_{\alpha_{1}^{'}\alpha_{2}^{'}}
\int_{0}^{\infty} p^{'2}dp^{'}\biggl\{\biggl[V^{J\pm1}(1;p,p^{'})
+a_{12 \otimes 12}^{(\alpha_{1}\alpha_{2},
\alpha_{1}^{'}\alpha_{2}^{'})}(p,p{'})V_{J\pm1 1 J}(1;p,p^{'})+
\nonumber\\[2mm]
&+&a_{\oplus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}
\alpha_{2}^{'})}(p,p^{'})V^{J}(x;p,p^{'})\biggr]
R_{J\pm11J}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(p^{'})+
\nonumber\\[2mm]
&+&\biggl[a_{12 \otimes 12}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}
(p,p^{'})\frac{2}{2J+1}V_{\ominus J}(1;p,p^{'})\biggr]
R_{J\mp11J}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(p^{'})\biggr\}
N_ {12}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(p')
\label{eq27}
\end{eqnarray}
\noindent where
\begin{eqnarray}
a_{12 \otimes 12}^{(\alpha_{1}\alpha_{2},
\alpha_{1}^{'}\alpha_{2}^{'})}(p,p{'})
&=&a_{12}^{(\alpha_{1}\alpha_{2})}(p,p)
a_{12}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(p^{'},p^{'})
\nonumber\\[2mm]
a_{^{\oplus}_{\ominus}}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}
(p,p{'})&=&a_{11}^{(\alpha_{1}\alpha_{1}^{'})}(p,p^{'})\pm
a_{22}^{(\alpha_{2}\alpha_{2}^{'})}(p,p^{'})
\nonumber\\[2mm]
a_{ij}^{(\alpha_{i}\alpha_{j})}(p,p^{'})=a_{i}^{\alpha_{i}}(p)
a_{j}^{\alpha_{j}}(p^{'})&,&
\hspace*{1.2cm}
a_{i}^{\alpha_{i}}(p)=\frac{\alpha_{i}p}{\omega_{i}+\alpha_{i}m_{i}}
\end{eqnarray}
and
\begin{eqnarray}
V^{(^{0}_{1})}_{\oplus J}(x;p,p')&=&\frac{1}{2J+1}\bigl[
\pmatrix{J\cr J+1}V^{J-1}(x;p,p')+
\pmatrix{J+1\cr J}V^{J+1}(x;p,p')
\bigr]
\nonumber\\[2mm]
V_{\ominus J}(x;p,p')&=&\frac{\sqrt{J(J+1)}}{2J+1}\bigl[
V^{J-1}(x;p,p')-V^{J+1}(x;p,p')\bigr]
\\[2mm]
V_{J\pm1 1 J}(1;p,p')&=&\frac{1}{(2J+1)^{2}}\bigl[
V^{J\pm1}(1;p,p')+4J(J+1)V^{J\mp1}(1;p,p')\bigr]
\nonumber\\[2mm]
V(x;p,p')&=&V_{og}(p,p')+(2x-1)V_{c}(p,p')
\nonumber
\end{eqnarray}
The main purpose of the present study is to carry out a comparative
qualitative analysis of the different versions of the 3D relativistic equations
(\ref{eq27}), addressing the question of existence of stable solutions
for different values of the scalar-vector
mixing parameter $x$ in the confining part of the potential (Eq. (\ref{eq18})).
Also, we investigate the general structure of the meson mass spectrum
and calculate the leptonic decay characteristics,
namely, the pseudoscalar decay constant $f_{P}(P \rightarrow\mu \bar{\nu})$
and the vector meson decay width $\Gamma (V \rightarrow e^{-} e^{+})$.
For this reason, at the first stage we neglect the one-gluon
exchange potential in (\ref{eq18}). A full analysis of the problem will
be given in forthcoming publications.
Further, following Ref.~\cite{2A}, we use the oscillator
form for the confining potential $V_{c}(r)$,
which is a simplified but justified
form of the more general potential used in Ref.~\cite{4A}
(at least for the quarks from the light and light-heavy sectors,
which are considered in the present paper). Namely, we take
\begin{eqnarray}
V_{c}(r)=\frac{4}{3}\alpha_{s}(m_{12}^{2})\biggl(\frac{\mu_{12}\omega_{0}^{2}}
{2}r^{2}-V_{0}\biggr)
\\[2mm]
\mu_{12}=\frac{m_{1}m_{2}}{m_{12}},
\hspace*{1.cm}
m_{12}=m_{1}+m_{2},
\hspace*{1.cm}
\alpha_{s}(Q^{2})=\frac{12\pi}{33-2n_f}\biggl({\mbox{\rm ln}}\frac{Q^{2}}{\Lambda^{2}}
\biggr)^{-1}
\nonumber
\end{eqnarray}
where $n_f$ denotes the number of flavors. This potential
in the momentum space corresponds to the operator (see (\ref{eq26}))
\begin{equation}\label{eq30}
V_{c}^{L}(p,p^{'})=-\frac{4}{3}\alpha_{s}(m_{12}^{2})
\biggl[\frac{\mu_{12}\omega_{0}^{2}}{2}\bigl(\frac{d^{2}}{dp^{'2}}+
\frac{2}{p^{'}}\frac{d}{dp^{'}}-\frac{L(L+1)}{p^{'2}}\bigr)+V_{0}\biggr]
\frac{\delta(p-p^{'})}{pp'}
\end{equation}
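For orientation, the one-loop running coupling $\alpha_{s}(Q^{2})$ defined above is easy to evaluate numerically. The sketch below assumes illustrative values $n_f=3$ and $\Lambda=0.2$ GeV, which are not the fitted parameters of the present model:

```python
import math

def alpha_s(Q2, n_f=3, lam=0.2):
    """One-loop running coupling:
    alpha_s(Q^2) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2)),
    with Q2 in GeV^2 and lam = Lambda in GeV (illustrative values only)."""
    return 12.0 * math.pi / ((33.0 - 2.0 * n_f) * math.log(Q2 / lam**2))

print(alpha_s(1.0), alpha_s(100.0))  # the coupling falls logarithmically with the scale
```

The logarithmic decrease with $Q^{2}$ (asymptotic freedom) is the reason the coupling is frozen at the scale $m_{12}^{2}$ in the confining potential above.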
Now the system of equations (\ref{eq27}) can be reduced to the
system of equations with the following structure:
\begin{eqnarray}
&&
[M-(\alpha_{1}\omega_{1}+\alpha_{2}\omega_{2})]
R_{J(^{0}_{1})J}^{(\alpha_{1}\alpha_{2})}(p)=
-\frac{4}{3}\alpha_{s}(m_{12}^{2}) A^{(\alpha_{1}\alpha_{2})}(M;p)
\sum\limits_{\alpha_{1}^{'}\alpha_{2}^{'}} \biggl\{ V_{0}
\biggl[B_{\oplus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)+
\nonumber\\[2mm]
&&
+(2x-1)A_{\oplus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)
\biggr]R_{J(^{0}_{1})J}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(p^{'})+
\frac{\mu_{12}\omega_{0}^{2}}{2}\biggl[
\biggl(\hat{D}_{B}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)+
(2x-1)\hat{D}_{A}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)-
\nonumber\\[2mm]
&&
-\frac{1}{p^{2}}
\biggl(B_{\oplus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)J(J+1)
+(2x-1)A_{\oplus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)
\pmatrix{J(J+1)+2\cr J(J+1)}
\biggr)\biggr)
R_{J(^{0}_{1})J}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(p^{'})-
\nonumber\\[2mm]
&&
-(2x-1)\biggl(\frac{2\sqrt{J(J+1)}}{p^{2}}
A_{\ominus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)\biggr)
R_{J(^{1}_{0})J}^{(\alpha_{1}^{'}\alpha_{2}^{'})}(p)\biggr]\biggr\},
\nonumber\\[2mm]
&&
[M-(\alpha_{1}\omega_{1}+\alpha_{2}\omega_{2})]
R_{J\pm11J}^{(\alpha_{1}\alpha_{2})}(p)=
-\frac{4}{3}\alpha_{s}(m_{12}^{2}) A^{(\alpha_{1}\alpha_{2})}(M;p)
\sum\limits_{\alpha_{1}^{'}\alpha_{2}^{'}}\biggl\{V_{0}
\biggl[B_{\oplus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)+
\nonumber\\[2mm]
&&
+(2x-1)A_{\oplus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)
\biggr] R_{J\pm11J}^{(\alpha_{1}\alpha_{2})}(p)
+\frac{\mu_{12}\omega_{0}^{2}}{2}\biggl[
\bigl(\hat{D}_{B}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)+
(2x-1)\hat{D}_{A}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)-
\nonumber\\[2mm]
&&
-\frac{1}{p^{2}}
\biggl(B_{\oplus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)
\biggl(J(J+1)+1\pm\frac{1}{2J+1}\biggr)\pm\frac{4J(J+1)}{2J+1}
B_{\ominus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)+
\nonumber\\[2mm]
&&
+(2x-1)A_{\oplus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)
J(J+1)\biggr)\biggr)R_{J\pm11J}^{(\alpha_{1}\alpha_{2})}(p)+
\nonumber\\[2mm]
&&
+\frac{1}{p^{2}}\frac{2\sqrt{J(J+1)}}{(2J+1)^{2}}\biggl(
B_{\oplus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)-
B_{\ominus}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}(p)\biggr)
R_{J\mp11J}^{(\alpha_{1}\alpha_{2})}(p)\biggr]\biggr\}
\label{eq31}
\end{eqnarray}
\noindent
where $A_{^{\oplus}_{\ominus}}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}$ and
$B_{^{\oplus}_{\ominus}}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}$
are given functions of $p$, and
$\hat{D}_{A(B)}^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}$
are certain second order differential operators in $p$.
\section{Meson mass spectrum}
In order to solve the system of equations (\ref{eq31}) for the bound state mass $M$,
the unknown radial wave functions $R_{LSJ}^{(\alpha_{1}\alpha_{2})}(p)$ are
expanded in the basis of the radial wave functions $R_{nL}(p)$ being the solutions
of the Schr\"{o}dinger radial equation with the oscillator potential in the momentum space
(\ref{eq30}), as it was done in Refs.~\cite{4A,2A}
\begin{equation}\label{eq32}
R_{LSJ}^{(\alpha_{1}\alpha_{2})}(p)=\bigl( 2M(2\pi)^{3}\bigr)^{1/2}
\sum\limits_{n=0}^{\infty}C_{LSJn}^{(\alpha_{1}\alpha_{2})}R_{nL}(p)
\end{equation}
where the multiplier $\bigl( 2M(2\pi)^{3}\bigr)^{1/2}$ is introduced for appropriate
normalization of the wave function and
\begin{eqnarray}
&&
R_{nL}(p)=p_{0}^{-\frac{3}{2}}\biggl[\biggl(\frac{2 \Gamma (n+L+\frac{3}
{2})}{\Gamma (n+1)}\biggr )^{1/2}\frac{z^{L}\exp\bigl(- \frac{z^{2}}{2}\bigr)}
{\Gamma(L+\frac{3}{2})}\,\,_{1}F_{1}(-n,L+\frac{3}{2};z^{2})
\equiv \bar R_{nL}(z)\biggr]
\nonumber\\[2mm]
&&
z=\frac{p}{p_{0}},
\hspace*{1.cm}
p_{0}=\biggl(\mu_{12}\omega_{0}
\biggl(\frac{4}{3}\alpha_{s}(m_{12}^{2})\biggr)^{1/2}\biggr)^{1/2}
\label{eq33}
\end{eqnarray}
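As a cross-check of the basis (\ref{eq33}), the radial oscillator functions can be evaluated with standard special-function routines. The sketch below (assuming SciPy; the normalization coefficient is written with $\Gamma(n+L+\frac{3}{2})$, which is what the orthonormality $\int_{0}^{\infty}R_{nL}R_{n'L}\,p^{2}dp=\delta_{nn'}$ requires) verifies orthonormality numerically for the lowest states:

```python
import numpy as np
from scipy.special import gamma, hyp1f1
from scipy.integrate import quad

def R_nL(p, n, L, p0=1.0):
    """Radial oscillator function in momentum space, z = p/p0.
    Normalization coefficient taken as Gamma(n + L + 3/2)."""
    z = p / p0
    coeff = (2.0 * gamma(n + L + 1.5) / gamma(n + 1)) ** 0.5
    return (p0 ** -1.5) * coeff * z**L * np.exp(-0.5 * z**2) \
        / gamma(L + 1.5) * hyp1f1(-n, L + 1.5, z**2)

def overlap(n, m, L):
    """<R_nL|R_mL> = int_0^inf R_nL(p) R_mL(p) p^2 dp (should be delta_nm)."""
    val, _ = quad(lambda p: R_nL(p, n, L) * R_nL(p, m, L) * p**2, 0.0, np.inf)
    return val

print(overlap(0, 0, 0), overlap(1, 1, 0), overlap(0, 1, 0), overlap(0, 0, 1))
```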
Substituting the expression (\ref{eq32}) into the system of differential equations
(\ref{eq31}), the following system of algebraic equations for the coefficients
$C_{LSJn}^{(\alpha_{1}\alpha_{2})}$ can be obtained
\begin{equation}\label{eq34}
MC_{LSJn}^{(\alpha_{1}\alpha_{2})}=
\sum\limits_{\alpha_{1}^{'}\alpha_{2}^{'}}\sum\limits_{L^{'}S^{'}n^{'}}
H^{(\alpha_{1}\alpha_{2},\alpha_{1}^{'}\alpha_{2}^{'})}_{LSJn,
L^{'}S^{'}n^{'};J}(M)C_{L^{'}S^{'}Jn^{'}}^{(\alpha_{1}^{'}\alpha_{2}^{'})}
\end{equation}
It is necessary to note here that the matrix $H_{\alpha\beta}$ explicitly
depends (except in the Sal. version) on the meson mass $M$ we are looking for.
Consequently, the system of equations (\ref{eq34}) is nonlinear in $M$. Note
also that for the quarks with equal masses $(m_{1}=m_{2}=m)$ part of the
equations from the system (\ref{eq34}), corresponding to $(\alpha_{1}\alpha_{2})=(\pm \mp)$
for the MW and CJ versions, transforms into the nondynamical constraints between
all coefficients
$C_{LSJn}^{(\alpha_{1}\alpha_{2})}$ which should be considered together with
the remaining dynamical equations corresponding to
$(\alpha_{1}\alpha_{2})=(\pm\pm)$.
The numerical algorithm for the solution of the system of nonlinear
equations (\ref{eq34}) in the case of
nonequal mass quarks was discussed in Ref.~\cite{BKA} where the systems
$u\bar{s}$ $(^{3}S_{1}, ^{1}P_{1}, ^{3}P_{0},^{3}P_{1},
^{3}P_{2},^{1}D_{2}, ^{3}D_{1}, ^{3}D_{3})$, $c\bar{u}$ and $c\bar{s}$
$(^{1}S_{0}, ^{3}S_{1}, ^{1}P_{1}, ^{3}P_{2})$ were considered.
In brief, the infinite set of equations (\ref{eq34}) is truncated at
some fixed value $n=N_{max}$ and the eigenvalue problem is solved for
the finite-dimensional matrix $H$. Then, increasing $N_{max}$, one checks the
stability of the resulting spectrum with respect to the variation of
$N_{max}$. Since the r.h.s. of Eq. (\ref{eq34}) depends on $M$, the
solutions are obtained iteratively, starting from some value of $M$.
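The truncation-and-iteration scheme just described can be illustrated on a toy nonlinear eigenvalue problem; the matrix below is a hypothetical stand-in, not the actual $H$ of Eq. (\ref{eq34}):

```python
import numpy as np

def solve_nonlinear_eigenvalue(H_of_M, M0, which=0, tol=1e-12, max_iter=500):
    """Iterate M_{k+1} = (selected eigenvalue of H(M_k)) until self-consistent:
    solve the linear eigenvalue problem at the current mass guess,
    feed the eigenvalue back into H, and repeat."""
    M = M0
    for _ in range(max_iter):
        eigvals = np.sort(np.linalg.eigvalsh(H_of_M(M)))
        M_new = eigvals[which]
        if abs(M_new - M) < tol:
            return M_new
        M = M_new
    raise RuntimeError("fixed-point iteration did not converge")

# Hypothetical M-dependent "Hamiltonian": H(M) = diag(1, 2, 3) + 0.1*M*I.
# The self-consistent ground state obeys M = 1 + 0.1*M, i.e. M = 10/9.
H = lambda M: np.diag([1.0, 2.0, 3.0]) + 0.1 * M * np.eye(3)
print(solve_nonlinear_eigenvalue(H, M0=1.0))
```

Since the $M$-dependence here is a contraction, the iteration converges in a few steps; in the realistic case convergence must be checked state by state, as discussed below.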
In Ref. \cite{BKA}, the existence of
stable solutions of the system of equations (\ref{eq34}) was investigated
for different values of the mixing parameter
$x$ in the confining potential (\ref{eq18}). It was found that
the existence/nonexistence of stable solutions of Eq. (\ref{eq34})
critically depended on the value of $x$, on the value of confining interaction
strength parameter $\omega_{0}$ (\ref{eq30}), and on the particular state
under consideration.
This dependence is different for the different versions of 3D reduction of the BS
equation. The instability is primarily caused by the presence of the mixed
$(+-,-+)$ energy
components of the wave function in the equations for the $q\bar{q}$ bound
system. Namely, for the parameter $\omega_{0}$=710 MeV that leads
to a reasonable description of the meson mass spectrum in the framework
of the Salpeter equation \cite{4A}, stable solutions for CJ, MNK and Sal.
versions simultaneously exist for the values of the parameter $x$
from the interval $0.3\leq x \leq 0.6-0.7$. For the MW version,
in order to provide the existence of stable solutions in the
same interval of $x$, $\omega_{0}$ must be set to a smaller value
(450 MeV). However, in this case the values of masses for all states under
consideration turn out to be smaller than the experimental ones.
Keeping in mind that
the calculated values of masses will further decrease
after adding the one-gluon interaction potential,
we conclude that the MW version seems to describe the meson mass
spectrum poorly. For this reason, along with the Sal. version, below we consider
only the CJ and MNK versions, both having a similar theoretical foundation:
the effective Green function (\ref{eq3}) in these versions is constructed
from the elastic unitarity condition.
The results of calculations are given in Figs. 1,2,3, from which one can see
that the level ordering is similar for all three versions under
consideration. Further, at $x=0.5$ the states $^{3}P_{0},^{3}P_{1},
^{1}P_{1}$ are degenerate and spin-orbit splitting exists only between the
degenerate $^{3}P_{0},^{3}P_{1}$ states and the $^{3}P_{2}$ state. For
$x\neq0.5$ this degeneracy is removed and the calculated level ordering
agrees with the experiment for the
value $x=0.3$, except for the $^{3}P_{2}$ state. For the D-states
$(u\bar{s})$, experimentally there is a degeneracy between the $^{1}D_{2}$ and $^{3}D_{3}$
states. In our calculations we do not find this degeneracy,
but for the MNK and CJ
versions at $x=0.3$ the splitting is very small and increases for $x=0.5$
and $x=0.7$. Note, however, that only for these values of $x$ does the sequence
of the $^{3}D_{1}$ state relative to the other two
D-states agree with the experiment. For the $q\bar{q}$ states with the quarks
from light-heavy sectors $(c\bar{u},c\bar{s})$ the spin-dependence of the energy
levels for all values of $x$ is much weaker than the experimental one,
but at the same time for $x \geq 0.5$ the level ordering agrees with
experimental data.
As to the pseudoscalar $q\bar{q}$ systems (the $^{1}S_{0}$ state) with the quarks from
the light sector ($u\bar{s}$), the calculated masses in the model under
consideration are much larger than the experimental ones, as demonstrated in
Fig.~1. This might serve as an indication that if the constituent
quark model is used for the description of this sort of systems,
the chiral symmetry breaking effect should be taken into
account, at least in a phenomenological manner,
e.g. by the inclusion of the 't Hooft interaction in the kernel
of the Salpeter equation (see Ref.~\cite{RMMP}).
In order to take into account the full content of global QCD symmetries
in a systematic way, a coupled set of Dyson-Schwinger and BS equations
should be considered (see, e.g. \cite{Jain}).
Note that the number of terms ($N_{max}$) in the expansion (\ref{eq32}), which is
necessary to get stable solutions of the system of equations (\ref{eq34}),
varies with the constituent quark masses, with the value of the mixing parameter $x$,
with the state under consideration and is different for the CJ, MNK and Sal.
versions. Namely, when the quark masses increase,
$N_{max}$ decreases. The convergence of the numerical procedure used in the calculations
is better for $x \leq 0.5$ and worse for $x>0.5$. For all values of $x$
the convergence is better for the Sal. version than for the CJ and
MNK versions.
We have also calculated the mass spectrum of $q\bar{q}$ systems with the equal
quark masses from the light quark sector $(u\bar{u},s\bar{s})$.
This problem was the subject of our primary interest in the present study.
The results of calculations are shown in Figs. 4 and 5. Note
that for this sort of systems the convergence of numerical algorithm
used in the calculations
appears to be not so good for the values of the parameters
$\omega_{0}$=710 MeV and $x>0.5$. The convergence becomes
better for smaller values of $\omega_{0}$. Namely, for the $(u\bar{u})$
system for $\omega_{0}$=710 MeV and $x=0.6$, stable solutions in the MNK version
for the $^{3}S_{1}$ state do not exist. In the CJ version such a situation holds
for other states $(^{3}P_{2},^{3}D_{1},^{3}D_{3})$ as well. For smaller values
of the potential strength parameter, e.g. $\omega_{0}$=550 MeV,
the stable solutions exist for all states
(just these results are shown in fig. 4). Further,
in this case the sequence of the energy levels corresponding to the $^{3}P_{J}$
states (the spin-orbit splitting) agrees with the experiment at $x>0.5$ in
all versions, and in the $^{3}D_{1}$ and $^{3}D_{3}$ states the agreement
appears to occur at $x<0.5$.
Consequently, on the basis of the above analysis one can conclude that
none of the 3D equations with the simple oscillator kernel considered in the
present paper simultaneously describes even the general features of the
mass spectrum of all $q\bar q$ systems under study.
\section{Some decay characteristics of mesons}
For the investigation of the meson decay properties, the normalization
condition for the wave function $\tilde{\Phi}_{M}(\vec p\,)= \sum\limits_
{(\alpha_{1}\alpha_{2})}\tilde{\Phi}_{M}^{(\alpha_{1}\alpha_{2})}(\vec p\,)$
is needed.
For the Salpeter wave function this condition is well known \cite{IZ}
\begin{equation}\label{eq35}
\int\frac{d^3\vec p}{(2\pi)^{3}}\,\,\
\biggl[\mid \tilde{\Phi}_{M}^{(++)}(\vec p\,) \mid ^{2}-
\mid \tilde{\Phi}_{M}^{(--)}(\vec p\,) \mid ^{2}\biggr]=2M
\end{equation}
\noindent
As to the MW, CJ and MNK versions, the
normalization condition for the corresponding wave functions can be obtained
with the use of the fact that
the effective Green operators (\ref{eq4}) in Eq. (\ref{eq1})
can be inverted. As a result, the corresponding full 3D Green
operators $\tilde{G}_{0eff}$ can be defined analogously to the 4D case
\begin{equation}
\tilde{G}_{eff}^{-1}(M;\vec p,\vec p~') =
(2\pi)^3\delta^{(3)}(\vec p-\vec p~')\tilde{G}_{0eff}^{-1}(M;\vec p\,)-
\hat{V}(\vec p,\vec p~')
\end{equation}
\noindent
Since $\hat{V}(\vec p,\vec p~')$ does not depend on $M$, the normalization
condition reads
\begin{equation}\label{eq37}
\int\frac{d^3\vec p}{(2\pi)^{3}}\,\,\
\overline{\tilde{\Phi}}_{M}(\vec p\,)
\biggl[\frac{\partial}{\partial M}
\tilde{G}_{0eff}^{-1}(M;\vec p\,)\biggr]
\tilde{\Phi}_{M}(\vec p\,)=2M,
\hspace*{1.2cm}
\overline{\tilde{\Phi}}_{M}(\vec p\,)=
\tilde{\Phi}_{M}^{^{+}}(\vec p\,)
\gamma_{1}^{0}\gamma_{2}^{0}.
\end{equation}
\noindent
Using Eqs. (\ref{eq4})-(\ref{eq8}), from Eq. (\ref{eq37}) one obtains
\begin{equation}\label{eq38}
\sum\limits_{\alpha_{1}\alpha_{2}}\int\frac{d^3\vec p}{(2\pi)^{3}}
\tilde{\Phi}_{M}^{^{+}(\alpha_{1}\alpha_{2})}(\vec p\,)
f_{12}^{(\alpha_{1}\alpha_{2})}(M;p)
\tilde{\Phi}_{M}^{(\alpha_{1}\alpha_{2})}(\vec p)=2M
\end{equation}
\noindent
where
\begin{eqnarray}
f_{12}^{(\alpha_{1}\alpha_{2})}=\frac{\alpha_{1}E_{1}+\alpha_{2}E_{2}}{M}
\hspace*{1.2cm}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\mbox{(MW version)}&
\\[2mm]
f_{12}^{(\alpha_{1}\alpha_{2})}=\frac{\omega_{1}+\omega_{2}}{M}
\frac{\alpha_{1}\omega_{1}E_{2}+\alpha_{2}\omega_{2}E_{1}}
{(E_{1}+\alpha_{1}\omega_{1})(E_{2}+\alpha_{2}\omega_{2})}
\hspace*{1.2cm}
\mbox{(CJ version)}&
\\[2mm]
f_{12}^{(\alpha_{1}\alpha_{2})}=\frac{2}{D^{(\alpha_{1}\alpha_{2})}}
\biggl \{ \biggl[\frac{M}{R}B(1-y^{2})+\frac{M^{2}}{2}-2\biggl(\frac{R-M}
{2y}\biggr)^{2}\biggr]-
&\nonumber\\[2mm]
-\frac{B}{D^{(\alpha_{1}\alpha_{2})}}\biggl[\biggl(M+
\frac{\alpha_{1}\omega_{1}+\alpha_{2}\omega_{2}}{2}R-
-\frac{M^{2}}{2}-2\biggl(\frac{R-M}{2y}\biggr)^{2}+
&\nonumber\\[2mm]
+(\alpha_{1}\omega_{1}-\alpha_{2}\omega_{2})\biggl(\frac{R-M}{2y}
+\frac{M}{2}y\biggr)\biggr]\biggr \}
\hspace*{1.2cm}\,\,\,\,\,\,\,\,\,\,
\mbox{(MNK version)}&
\end{eqnarray}
Note that for $m_{1}=m_{2}=m$ the normalization condition (\ref{eq38})
for the MW version is reduced to (\ref{eq35}) which can be written in
the form of Eq. (\ref{eq38}) where
\begin{eqnarray}
f_{12}^{(\alpha_{1}\alpha_{2})}=\frac{\alpha_{1}+\alpha_{2}}{2}
\hspace*{1.2cm}
\mbox{(Sal. version)}&
\end{eqnarray}
From the normalization condition (\ref{eq38}) for the wave function
given by Eqs. (\ref{eq16}), (\ref{eq17}) one obtains the
normalization condition for the wave functions $R_{LSJ}^{(\alpha_{1}\alpha_{2})}(p)$
with the use of the partial-wave expansion (\ref{eq24})
\begin{equation}\label{eq43}
\int_{0}^{\infty}\frac{p^{2} dp}{(2\pi)^{3}}
\sum\limits_{\alpha_{1}\alpha_{2}}f_{12}^{(\alpha_{1}\alpha_{2})}(M;p)
\biggl[R_{LSJ}^{(\alpha_{1}\alpha_{2})}(p)\biggr]^{2}=2M
\end{equation}
\noindent
The functions $f_{12}^{(--)}$ of the CJ and MNK versions have second order poles at
$p=p_{s}$, where
\begin{equation}
a(p_{s})=E_{i}^{2}-\omega_{i}^{2}(p_{s})=0,
\hspace*{1.2cm}
p_{s}=\frac{1}{2}\bigl( M^{2}+b_{0}^{2}-2(m_{1}^{2}+m_{2}^{2})\bigr)^{1/2}
\end{equation}
The functions $f_{12}^{(\pm\mp)}$ turn out to be finite
($f_{12}^{(++)}(p_{s})$ is apparently finite). Consequently, the normalization
condition (\ref{eq43}) for the CJ and MNK versions involves a singular
integral of the type
\begin{equation}
I(x_{0})=\int_{0}^{\infty}\frac{f(x)dx}{(x-x_{0})^{2}}
\end{equation}
\noindent
which, taking into account the conditions $f(0)=f(\infty)=0$ valid in our case,
can be regularized as
\begin{equation}
\int_{0}^{\infty}\frac{f(x)dx}{(x-x_{0})^{2}}=
\int_{0}^{2x_0}\frac{[f'(x)-f'(x_{0})]dx}{(x-x_{0})}+
\int_{2x_{0}}^{\infty}\frac{f'(x)dx}{x-x_{0}}
\end{equation}
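The regularization prescription above is straightforward to implement. The sketch below (assuming SciPy, with $f'$ supplied analytically) checks it on a test function $f(x)=(x-x_{0})^{2}\,x\,e^{-x}$, for which the singular factor cancels and the regularized integral must equal $\int_{0}^{\infty}x\,e^{-x}dx=1$:

```python
import numpy as np
from scipy.integrate import quad

def finite_part(fprime, x0):
    """Regularized int_0^inf f(x)/(x-x0)^2 dx for f(0) = f(inf) = 0:
    after integration by parts this is the principal-value integral of
    f'(x)/(x-x0), computed with a symmetric subtraction on [0, 2*x0]."""
    I1, _ = quad(lambda x: (fprime(x) - fprime(x0)) / (x - x0),
                 0.0, 2.0 * x0, points=[x0])
    I2, _ = quad(lambda x: fprime(x) / (x - x0), 2.0 * x0, np.inf)
    return I1 + I2

# Test function f(x) = (x-x0)^2 * x * exp(-x); f' is given analytically.
x0 = 1.3
fprime = lambda x: (2.0 * (x - x0) * x + (x - x0)**2 * (1.0 - x)) * np.exp(-x)
print(finite_part(fprime, x0))
```

The subtraction of $f'(x_{0})$ on the symmetric interval $[0,2x_{0}]$ contributes nothing to the principal value, so the two ordinary integrals reproduce the finite part.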
\noindent
Now we can calculate the pseudoscalar ($S=L=J=0$) decay constant
$f_{P}(P \rightarrow\mu \bar{\nu})$ and the leptonic decay width of the vector
($L=0, S=J=1$) meson $\Gamma(V\rightarrow e^{-}e^{+})$ (the corresponding
decay constant is denoted by $f_{V}$). In these calculations, instead of the
functions $\tilde{\Phi}_{M}^{(\alpha_{1}\alpha_{2})}(\vec p\,)$ (\ref{eq10})
given as a column with the components
$\tilde{\Phi}_{aa}^{(\alpha_{1}\alpha_{2})}$,
$\tilde{\Phi}_{ab}^{(\alpha_{1}\alpha_{2})}$,
$\tilde{\Phi}_{ba}^{(\alpha_{1}\alpha_{2})}$,
$\tilde{\Phi}_{bb}^{(\alpha_{1}\alpha_{2})}$,
it is convenient to introduce the wave function
$\Psi^{(\alpha_{1}\alpha_{2})}$ written in the form (see Ref.~\cite{IZ})
\begin{equation}\label{eq47}
\Psi^{(\alpha_{1}\alpha_{2})}=\pmatrix{
\tilde{\Phi}_{aa}^{(\alpha_{1}\alpha_{2})}\,\,
\tilde{\Phi}_{ab}^{(\alpha_{1}\alpha_{2})}\cr
\tilde{\Phi}_{ba}^{(\alpha_{1}\alpha_{2})}\,\,
\tilde{\Phi}_{bb}^{(\alpha_{1}\alpha_{2})}}C=
\pmatrix{
\tilde{\Phi}_{aa}^{(\alpha_{1}\alpha_{2})}\sigma_{y}\,\,
\tilde{\Phi}_{ab}^{(\alpha_{1}\alpha_{2})}\sigma_{y}\cr
\tilde{\Phi}_{ba}^{(\alpha_{1}\alpha_{2})}\sigma_{y}\,\,
\tilde{\Phi}_{bb}^{(\alpha_{1}\alpha_{2})}\sigma_{y}},
\hspace*{1.2cm}
C=i\gamma^{2}\gamma^{0}
\end{equation}
\noindent
where $C$ is the charge conjugation operator. Then, the
decay constants $f_{P}$ and $f_{V}$ are given by the expressions
\begin{equation}\label{eq48}
\delta_{\mu 0} Mf_{P}=\sqrt{3}Tr\biggl[\Psi_{000}(\vec r=0)\gamma^{\mu}
(1-\gamma_{5})\biggr]
\end{equation}
\begin{equation}\label{eq49}
f_{V}(\lambda)=\sqrt{3}Tr\biggl[\Psi_{011\lambda}(\vec r=0)\gamma^{\mu}\biggr]
\varepsilon^{\mu}(\lambda),
\hspace*{0.5cm}
\lambda=0,\pm 1
\end{equation}
\noindent
Here, the factor $\sqrt{3}$ stems from the color part of the wave functions,
$\varepsilon^{\mu}(\lambda)$ is the polarization vector of the meson and
\begin{equation}\label{eq50}
\Psi_{LSJM_{J}}(\vec r=0)=\int\frac{d^3\vec p}{(2\pi)^{3}}\,
\Psi_{LSJM_{J}}(\vec p\,),
\hspace*{1.2cm}
\Psi_{LSJM_{J}}(\vec p\,)=\sum\limits_{\alpha_{1}\alpha_{2}}
\Psi_{LSJM_{J}}^{(\alpha_{1}\alpha_{2})}(\vec p\,)
\end{equation}
\noindent
Using Eqs. (\ref{eq16}), (\ref{eq17}), (\ref{eq24}),
(\ref{eq47}) and (\ref{eq50}), from Eqs. (\ref{eq48}) and (\ref{eq49})
one obtains
\begin{equation}\label{eq51}
f_{P}=\frac{\sqrt{24\pi}}{M}\int_{0}^{\infty}\frac{p^{2} dp}{(2\pi)^{3}}
\sum\limits_{\alpha_{1}\alpha_{2}}\biggl[N_ {1}^{(\alpha_{1})}(p)
N_ {2}^{(\alpha_{2})}(p)-\alpha_{1}\alpha_{2}N_ {1}^{(-\alpha_{1})}(p)
N_ {2}^{(-\alpha_{2})}(p)\biggr]R_{000}^{(\alpha_{1}\alpha_{2})}(p)
\end{equation}
\begin{eqnarray}
f_{V}(\lambda)&=&\biggl \{-\sqrt{24\pi}\int_{0}^{\infty}\frac{p^{2}dp}
{(2\pi)^{3}}\sum\limits_{\alpha_{1}\alpha_{2}}
\biggl[N_ {1}^{(\alpha_{1})}(p)N_ {2}^{(\alpha_{2})}(p)+
\nonumber\\[2mm]
&+&\frac{\alpha_{1}\alpha_{2}}{3}N_ {1}^{(-\alpha_{1})}(p)
N_ {2}^{(-\alpha_{2})}(p)\biggr]R_{011}^{(\alpha_{1}\alpha_{2})}(p)
\biggr \}\delta_{\lambda 0}
\label{eq52}
\end{eqnarray}
\noindent
For a given $f_{V}$ (\ref{eq52}) the leptonic decay width of the vector meson
is given by
\begin{equation}\label{eq53}
\Gamma(V \rightarrow e^{-} e^{+})=\frac{\alpha^{2}_{eff}}{4 \pi M^{3}}
\frac{1}{3}\sum\limits_{\lambda=0,\pm}\mid f_{V}(\lambda) \mid^{2}
\end{equation}
where
$$
\alpha^{2}_{eff}=\frac{1}{137}\biggl(\frac{1}{2},\frac{1}{18},
\frac{1}{9} \biggr)
$$
\noindent
for $\varrho^{0}, \omega$ and $\varphi$ mesons, respectively.
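The three numerical factors multiplying $1/137$ follow from the squared quark-charge combinations in the standard flavor wave functions $\varrho^{0}=(u\bar u-d\bar d)/\sqrt{2}$, $\omega=(u\bar u+d\bar d)/\sqrt{2}$, $\varphi=s\bar s$; a short exact-arithmetic check:

```python
from fractions import Fraction as F

q_u, q_d, q_s = F(2, 3), F(-1, 3), F(-1, 3)  # quark charges in units of e

# Squared charge factors from the flavor wave functions:
#   rho0 = (u ubar - d dbar)/sqrt(2), omega = (u ubar + d dbar)/sqrt(2), phi = s sbar
c_rho = (q_u - q_d) ** 2 / 2
c_omega = (q_u + q_d) ** 2 / 2
c_phi = q_s ** 2
print(c_rho, c_omega, c_phi)  # 1/2 1/18 1/9
```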
Finally, using Eqs. (\ref{eq32}), (\ref{eq33}), (\ref{eq43}),
(\ref{eq51}), (\ref{eq52}) and (\ref{eq53}), we obtain
\begin{eqnarray}
f_{P}&=&\frac{\sqrt{6}p_{0}^{\frac{3}{2}}}{\pi\sqrt{M}} \mid
\sum\limits_{\alpha_{1}\alpha_{2}}\int_{0}^{\infty}z^{2} dz
\biggl[N_ {1}^{(\alpha_{1})}(p_{0},z)N_ {2}^{(\alpha_{2})}(p_{0},z)-
\nonumber\\[2mm]
&-&\alpha_{1}\alpha_{2}N_ {1}^{(-\alpha_{1})}(p_{0},z)
N_ {2}^{(-\alpha_{2})}(p_{0},z)\biggr]\bar R_{000}^{(\alpha_{1}\alpha_{2})}(z)
\mid
\label{eq55}
\end{eqnarray}
\noindent
\begin{eqnarray}
\Gamma(V \rightarrow e^{-} e^{+})&=&
\frac{4\alpha_{eff}^{2}p_{0}}{(2\pi)^{3}M^{2}}
\mid \sum\limits_{\alpha_{1}\alpha_{2}}\int_{0}^{\infty}z^{2}dz
\biggl[N_ {1}^{(\alpha_{1})}(p_{0},z)N_ {2}^{(\alpha_{2})}(p_{0},z)+
\nonumber\\[2mm]
&+&\frac{\alpha_{1}\alpha_{2}}{3}N_ {1}^{(-\alpha_{1})}(p_{0},z)
N_ {2}^{(-\alpha_{2})}(p_{0},z)\biggr]\bar R_{011}^{(\alpha_{1}\alpha_{2})}(z)
\mid ^{2}
\label{eq56}
\end{eqnarray}
\noindent
where the functions
\begin{equation}
\bar R_{LSJ}^{(\alpha_{1}\alpha_{2})}(z)=\sum\limits_{n=0}^{\infty}
C_{LSJn}^{(\alpha_{1}\alpha_{2})}\bar R_{nL}(z)
\end{equation}
\noindent
satisfy the normalization condition
\begin{equation}
\sum\limits_{\alpha_{1}\alpha_{2}}\int_{0}^{\infty}z^{2}dz
f^{(\alpha_{1}\alpha_{2})}_{12}(M;p_{0},z)\biggl[
\bar R_{LSJ}^{(\alpha_{1}\alpha_{2})}(z)\biggr]^{2}=1
\end{equation}
\noindent
The results of numerical calculations of the quantities $f_{P}$ and
$\Gamma$ defined by Eqs. (\ref{eq55}) and (\ref{eq56}), are given in Table I.
We see from Table I that the calculated values of $f_{P}$
in the MNK and CJ versions, as a rule, are smaller than in the Sal. version
and this fact is related to the presence of the contributions from
the "$+-$" and "$-+$" components of the wave function (contribution from
the "$--$" component is negligibly small). Further, the calculated
value of $f_P$ is larger in the CJ version than in the MNK version.
The calculated value of the quantity $\Gamma(V \rightarrow e^{-} e^{+})$
weakly depends on the particular choice of the 3D reduction scheme of
the BS equation. With the increase of the mixing parameter $x$ both the
quantities $f_{P}$ and $\Gamma(V \rightarrow e^{-} e^{+})$ slightly increase.
The calculated values of $f_{P}$ and $\Gamma(V \rightarrow e^{-} e^{+})$
are smaller than the experimental ones.
\vspace*{.6cm}
On the basis of the analysis of the different versions of the 3D
reductions of the bound state BS equation carried out in the
present paper, one arrives at the following conclusions:
The existence/nonexistence of stable solutions of the 3D bound-state
equations critically depends on the value of the scalar-vector mixing parameter
$x$. For all 3D versions (Sal, MNK, CJ) stable solutions coexist
for the value of $x$ from a rather restricted interval
$0.3\leq x \leq 0.6-0.7$. The level ordering
in the mass spectrum is similar for all versions under consideration.
However, the sequence of the calculated energy levels agrees with the
experiment for some states at $ x < 0.5$ and for other states at
$ x > 0.5$.
Consequently, a simultaneous description of even the general features of
the whole meson mass spectrum turns out not to be possible for a given
value of $x$ from the above-mentioned interval.
It is interesting to investigate the dependence of these results on
the form of the confining potential. Also, it is interesting
to study how it changes when the one-gluon exchange potential is taken
into account. This aspect of the problem will be considered at the next
step of our investigation. Further, we plan to include the
't Hooft effective interaction in our potential in order
to (phenomenologically) take into account the effect of spontaneous
breaking of chiral symmetry which is important in the pseudoscalar
($^{1}S_{0}$) sector of the constituent model.
The calculated leptonic decay characteristics of mesons are quite
insensitive to the particular 3D reduction scheme used and
give an acceptable description of experimental data.
In future, we also plan to study the validity of this conclusion
for a wider class of realistic interquark potentials.
{\it Acknowledgments.} One of the authors (A.R.) acknowledges the financial
support from the Russian Foundation for Basic Research under contract
96-02-17435-a.
\section{Introduction}
In the last few years a number of remarkable results
on the non-perturbative behavior in gauge theories and
string theories have appeared. The exact solutions
for supersymmetric gauge theories are striking in their
self-consistency and seem to refute any doubts in
their validity. One of the basic results is the
existence of a non-zero gluino condensate in pure $N=1$ supersymmetric
Yang-Mills theory.
There are different consistent approaches to obtaining
a non-zero gluino condensate, from the Veneziano-Yankielowicz
effective action \cite{vy}
to the instanton calculations \cite{instanton}. Also the existence
of a non-zero gluino condensate is consistently incorporated
in the full picture of supersymmetric gauge and string theories.
Nevertheless, within the framework of QFT there is still
a confusing question: what kind of field configurations
give rise to the gluino condensate itself?
The solution of this problem might be relevant to the
understanding of confinement in the supersymmetric gauge
theories. There has been
an attempt to answer this question in a finite volume
\cite{gomez} by introducing twisted boundary conditions.
To the author's knowledge the only attempt
to answer this question in infinite volume is by introducing
multivalued field configurations \cite{zhit}. In the
present letter we try to give an answer to this
problem
without any direct reference to the ideas of \cite{gomez}
and \cite{zhit}.
The main puzzle is that the existence of a non-zero gluino
condensate requires gauge configurations
with fractional topological charge. This goes
beyond our present understanding of QFT. There are no
smooth gauge field configurations with finite action
which can have a
reduced number of fermionic zero modes. According to standard
wisdom, all gauge configurations are classified by
integer topological charge and the path integral is
just a sum over different topological sectors. The above
problem can be resolved if one is willing to study non-smooth
configurations which nevertheless have finite
action\footnote{In the sense that one has a unique recipe to
define the action for such configurations.}.
We are going to look for singular
configurations with the right properties. We argue that
in fact the best candidate for this role is a configuration with
point support which can be regarded as a zero-size
instanton. We think that
it is an important lesson for gauge theories to understand
that perhaps the path integral is not just a sum over
different topological sectors of smooth configurations
but also a sum over some
singular configurations.
The organization of the paper is as follows. In section
2 we review the basic facts about pure supersymmetric
Yang-Mills theories. We recall the chiral selection
rules and show that a non-zero gluino condensate
requires a gauge configuration with two fermionic zero
modes. In section 3 we consider a QFT with constant
Green's functions (SYM is an example of such a theory). Using
these constant Green's functions we argue
that the configurations with point
support are important in the path integral and that only
the topological part survives
in the action (therefore it is just a number).
In section 4 we consider the configurations with
point support and show that in fact they have the right
number of fermionic zero modes. We point out that the zero-size
instantons are essentially merons, configurations which were
introduced by hand in \cite{gross} to get the area law for
the Wilson loop operator.
In section 5 we give
our conclusions and comment on the status of
the $D$-instanton/YM-instanton correspondence within
$AdS$/CFT correspondence.
\section{Evidence for the fractional configurations}
First of all let us recall the basic relevant facts
about the pure supersymmetric gauge theory in four dimensions.
The SYM Lagrangian describing the gluodynamics of gluons $A_\mu$ and
gluinos $\lambda_\alpha$ with a general compact gauge group has the form
\beq
\label{a1}
{\cal L} = -\frac{1}{4g_0^2}
G_{\mu\nu}^a G_{\mu\nu}^a
+ \frac{i}{g_0^2}\lambda_{\dot\alpha}^\dagger
D^{\dot\alpha\beta}\lambda_\beta
+ \frac{i\theta}{32\pi^2} G_{\mu\nu}^a \tilde{G}_{\mu\nu}^a.
\eeq
where $G_{\mu\nu}^a$ is the gluon field strength tensor,
$\tilde{G}_{\mu\nu}^a$
is the dual tensor and $D^{\dot\alpha \beta}$ is the covariant
derivative and all quantities are defined with respect to the adjoint
representation of the gauge group. This Lagrangian may be written
in terms of the gauge superfield $W_\alpha$
with physical components ($\lambda_\alpha$, $A_\mu$) as follows
\beq
\label{a2}
{\cal L} = \frac{1}{8\pi} Im \int d^2\theta\tau_0 W^{\alpha}W_{\alpha},
\eeq
where the bare gauge coupling $\tau_0$ is defined to be
$\tau_0 = \frac{4\pi i}{g^2_0} + \frac{\theta_0}{2\pi}$.
The model possesses a discrete global $Z_{2C_2}$ symmetry\footnote{$C_2$
denotes the Dynkin index (the quadratic Casimir) with $C_2 = 1/2$
normalization for the fundamental
representations of $SU(N)$, $Sp(N)$ and with $C_2=1$
normalization for
the vector representation of $SO(N)$. So for adjoint
representations we use
the following values: $C_2(SU(N))=N$, $C_2(Sp(2N))=N+1$,
$C_2(SO(N))=N-2$,
$C_2(E_6)=12$, $C_2(E_7)=18$, $C_2(E_8)=30$, $C_2(F_4)=9$,
$C_2(G_2)=4$. }, a residual
non-anomalous subgroup of the anomalous chiral $U(1)$.
The discrete chiral symmetry $Z_{2C_2}$ is spontaneously broken
by a non-zero gluino condensate $\langle \lambda^{a\alpha}
\lambda^a_\alpha \rangle \equiv \langle \lambda \lambda\rangle$.
In terms of the strong coupling scale $\Lambda$ the gluino
condensate has the form
\beq
\label{a3}
\langle \lambda \lambda \rangle = c\, \Lambda^3\,
e^{\frac{2\pi i k}{C_2}},\,\,\,\,\,k\in Z.
\eeq
where the constant $c$ can be absorbed in a redefinition
of $\Lambda$.
Therefore the system has $C_2$ different values of
the gluino condensate and can be in any one of
the $C_2$ vacua. In what follows we suppose that the system
sits in one of the vacua and that all calculations are valid
for this phase. The supersymmetry requires that
the Green's functions of the composite operator $\lambda\lambda$
do not depend on the coordinates. Due to the cluster
decomposition property we have the following equalities
\beq
\label{a4}
\langle \lambda\lambda(x_1) \lambda\lambda(x_2) ...
\lambda\lambda(x_n)\rangle = \Lambda^{3n},\,\,\,\,
\langle \lambda\lambda(x) \rangle = \Lambda^3.
\eeq
We mainly concentrate our attention on SYM
with an $SU(2)$ gauge group.
Now we consider the arguments
supporting the existence of fractional topological
charge. It is believed that the path integral is a sum
over different topological sectors. Let us look at
the Green's function restricted to one topological sector.
Consider a chiral rotation of fermions
$\lambda \rightarrow e^{i\alpha} \lambda$ which
gives the following rotation in the path integral
representation of the relevant Green's functions
\beq
\label{a5}
\langle \lambda\lambda(x_1) \lambda\lambda(x_2) ...
\lambda\lambda(x_n)\rangle = e^{(2in\alpha - i\alpha f)}
\int DA\, D\lambda\,\,\, \lambda\lambda(x_1) \lambda\lambda(x_2) ...
\lambda\lambda(x_n) e^{-S(A,\lambda)}
\eeq
where $f=\nu_{+}-\nu_{-}$ and $\nu_{+}$ ($\nu_{-}$)
is the number of fermionic zero modes with positive (negative)
chirality. The equality (\ref{a5}) is just a change of variables
in the path integral. The exponent in front of the path integral comes
from an explicit rotation of the fermion fields and the anomalous
rotation of the measure.
The last contribution can be derived using Fujikawa's method
\cite{fujikawa}. Since the expression is independent
of $\alpha$ we can take the derivative with respect to $\alpha$
and get the following
chiral selection rules
\beq
\label{a6}
(2 n - f)\langle \lambda\lambda(x_1) \lambda\lambda(x_2) ...
\lambda\lambda(x_n)\rangle = 0.
\eeq
In fact the number of fermionic zero modes is related to
the properties of the gauge field. We are interested in the
continuous gauge configurations which give a finite action. Therefore
all such continuous configurations are characterized by
the third homotopy group $\pi_3(G)$. The corresponding topological
charge is given by the following expression
\beq
\label{a7}
Q= \frac{1}{32\pi^2} \int d^4 x\,Tr(\epsilon^{\mu\nu\rho\sigma}
G_{\mu\nu}^a G_{\rho\sigma}^a)
\eeq
which is called the second Chern or first Pontrjagin number, depending
on the group. The Atiyah-Singer theorem gives the relation
between the number of zero modes of the Dirac operator
in the gauge background and the second Chern number of this background,
$2C_2 Q = f$. With this one can rewrite the expression
(\ref{a6}) as follows
\beq
\label{a8}
(n-C_2 Q)\langle \lambda\lambda(x_1) \lambda\lambda(x_2) ...
\lambda\lambda(x_n)\rangle = 0.
\eeq
Therefore one concludes that the n-point function is
saturated by the gauge configurations with topological number
$Q=n/C_2$. Since we consider fermions in the adjoint
representation, $C_2$ is quite ``big''. The non-vanishing
gluino condensate has $Q=1/C_2$, but $Q$ is supposed to
be integer. In the work on instantons \cite{instanton}, to
avoid contradiction, the following average over degenerate vacua
was taken
\beq
\label{a9}
\sum\limits_{k=1}^{C_2} \langle \lambda\lambda(x_1) \lambda\lambda(x_2) ...
\lambda\lambda(x_n)\rangle = 0, \,\,\,\,\,\,\,\,\frac{n}{C_2}\not\in Z
\eeq
where $k$ labels the different vacua. So these averaged Green's
functions vanish when $Q$ is fractional. From general
principles of QFT we have no right to average Green's
functions over different degenerate vacua. The worlds with
different gluino condensates live completely independent lives
(with no correlations between them). Thus as soon as we
accept the
existence of a non-vanishing gluino condensate and an unbroken
supersymmetry we have to accept the existence of all n-point
Green's functions (\ref{a4}) for the operator $\lambda\lambda$.
We still consider a system which sits in one of
the vacua.
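The cancellation behind the vacuum averaging (\ref{a9}) is just a sum over roots of unity. A minimal numerical sketch (the function name and the sample values of $C_2$ and $n$ are illustrative, not taken from the text):

```python
import cmath

def averaged_green(C2, n, Lambda=1.0):
    """Sum of the n-point function over the C2 degenerate vacua,
    vacuum k contributing Lambda^{3n} * exp(2*pi*i*k*n/C2)."""
    return sum(Lambda**(3*n) * cmath.exp(2j*cmath.pi*k*n/C2)
               for k in range(1, C2 + 1))

# For SU(2) SYM (C2 = 2): the one-point function (n = 1, Q = 1/2)
# averages to zero, while n = 2 (integer Q) survives.
print(abs(averaged_green(2, 1)))  # ~0: fractional n/C2 cancels
print(abs(averaged_green(2, 2)))  # ~2: integer n/C2, no cancellation
```

This makes explicit that the averaging procedure kills exactly the correlators with fractional $n/C_2$, which is the step the text objects to on cluster-decomposition grounds.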
To resolve this problem we have to assume that the gauge
configurations which give rise to the Green's functions are
not continuous (and as a result not smooth). If we accept this
then the topological charge cannot classify these
configurations by definition. To be able to work with such
configurations one has to go to the notion of generalized
functions (distributions). It is an essential starting point
for the axiomatic approach to QFT to think about fields
as operator valued distributions. It can be shown
that considering fields as ordinary functions (objects
defined at each point of space time) leads to
contradictions in QFT or triviality of QFT \cite{logunov}.
Further we will argue that the role of fractional instantons
may be played by field configurations with point support. We
have to understand that such configurations are not instantons
in the usual sense, they are singular and any reasonable calculation
requires regularization of these configurations.
Nevertheless these configurations
satisfy the self-duality conditions (in a generalized sense)
and we may view them as instantons, at least formally.
\section{Constant Green's functions}
In this section we are going to use the property that
the relevant Green's functions are constant.
This property is a result of unbroken supersymmetry.
To simplify the notation we will consider an abstract
QFT and identify the composite operator $\lambda\lambda$
with the fundamental field. All arguments can
be straightforwardly specialized to a situation with a
definite field content.
Let us consider a QFT with some operator $\phi(x)$
which has a constant non-vanishing n-point Green's function
$\langle \phi(x_1) \phi(x_2)...\phi(x_n)\rangle$. Due
to the cluster decomposition property we have the following
equalities
\beq
\label{b1}
\langle \phi(x_1) \phi(x_2)...\phi(x_n)\rangle = A^n,\,\,\,
\,\,\,\langle \phi(x) \rangle =A.
\eeq
The generating functional for the Green's functions is
\beq
\label{b2}
Z[J(x)] = \int D\phi\,\,e^{-S(\phi) + \int d^4x\,\phi(x)J(x)},\,\,\,
\,\,\,\,Z[0]=1
\eeq
which one can expand in the standard way
\beq
\label{b3}
Z[J(x)] = \sum\limits_{n=0}^{\infty}\frac{1}{n!} \int\int ...\int
d^4x_1\, d^4x_2...d^4x_n
J(x_1)J(x_2)...J(x_n) \langle \phi(x_1) \phi(x_2)...\phi(x_n)\rangle
\eeq
Using the equalities (\ref{b1}) we can rewrite the
expression (\ref{b3}) as follows
\beq
\label{b4}
Z[J(x)] = \sum\limits_{n=0}^{\infty}\frac{A^n}{n!} \int d^4x_1
J(x_1) \int d^4x_2 J(x_2) ... \int d^4x_n J(x_n) = e^{A \int d^4x J(x)}
\eeq
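The resummation in (\ref{b4}) can be confirmed by comparing the truncated series with the exponential; a quick sketch with arbitrary sample values of $A$, $J$ and $V$:

```python
import math

def Z_series(A, JV, nmax=60):
    # partial sum of  sum_n A^n (J V)^n / n!  from eq. (b4)
    return sum((A * JV)**n / math.factorial(n) for n in range(nmax))

A, J, V = 0.7, 1.3, 2.0
print(Z_series(A, J * V))   # truncated series
print(math.exp(A * J * V))  # closed form e^{A J V}; the two agree
```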
In general it is very difficult to treat rigorously the path
integral representation of the generating functional since this
object is a non-linear functional of the source. But the
expression (\ref{b4}) is somewhat special. Usually one defines
such constructions on the appropriate Banach space. Let us consider
the generating functional on the space ${\cal L}_1(R^4)$ (the Banach
space with norm $\|J\|_{{\cal L}_1} = \int d^4x |J(x)| < \infty$).
The key observation is that the logarithm of the generating functional
(\ref{b4}) is a linear continuous functional on ${\cal L}_1(R^4)$
($|\log Z[J(x)]| \leq A \|J\|_{{\cal L}_1}$). Therefore we can define
the generating functional on some dense subset of the space and
afterwards extend it uniquely by continuity to the whole space.
We will consider sources which have the following special
form
\beq
\label{b5}
J(x) = \left\{ \begin{array}{l}
J=const,\,\,\,\,\,\,x\in {\cal B};\\
0,\,\,\,\,\,\,\,x\not\in {\cal B}.
\end{array} \right.
\eeq
where ${\cal B}$ is a set of finite volume $V$ in $R^4$ which
is homeomorphic to the ball. The set of linear combinations of
the functions (\ref{b5}) is a dense subset in ${\cal L}_1(R^4)$.
Hence for every function $J(x) \in {\cal L}_1(R^4)$ one can find
a sequence $J_n (x)$ of finite linear combinations of
the sources $J_i$ defined in (\ref{b5})
\beq
\label{b5a}
J(x) = \lim\limits_{n} \sum\limits_i J_i(x)
\eeq
where arbitrary coefficients are absorbed into the definitions of $J_i(x)$. The
limit in (\ref{b5a}) is understood with respect to the topology given
by $\|.\|_{{\cal L}_1}$. Thus we can write the following definition
of the generating functional for an arbitrary function from ${\cal L}_1$
\beq
\label{b5b}
\log Z[J(x)] = \lim\limits_n \sum\limits_i (\log Z[J_i(x)])
\eeq
We conclude that it is enough to consider the
generating functional restricted to the subset of sources of the form
(\ref{b5}).
Let us concentrate our attention on the one-point function.
The integral
over the source (\ref{b5}) is $\int d^4x J(x) = J V$.
We see that the functional
$Z[J, V] = e^{AJ V}$ is essentially a function of $J$ and $V$.
Thus one has the following definition of the relevant Green's functions
\beq
\label{b6}
\langle \phi(x) \rangle =
\frac{1}{V}
\left. \frac{\partial Z}{\partial J}\right|_{J=0} =
\frac{1}{V}
\left. \frac{\partial Z}{\partial J}\right|_{V=0} =
\frac{1}{J}\left.
\frac{\partial Z}{\partial V}\right|_{V=0}.
\eeq
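The agreement of the three derivative definitions in (\ref{b6}) for $Z[J,V]=e^{AJV}$ can be checked symbolically (a sketch using SymPy; the variable names are illustrative):

```python
import sympy as sp

J, V, A = sp.symbols('J V A', positive=True)
Z = sp.exp(A * J * V)  # generating functional on the sources (b5)

d1 = (sp.diff(Z, J) / V).subs(J, 0)       # (1/V) dZ/dJ at J = 0
d2 = sp.limit(sp.diff(Z, J) / V, V, 0)    # (1/V) dZ/dJ at V = 0
d3 = sp.limit(sp.diff(Z, V) / J, V, 0)    # (1/J) dZ/dV at V = 0
print(d1, d2, d3)  # all three evaluate to A
```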
These definitions of constant Green's functions are somewhat
surprising since, after taking derivatives, one has to take the
zero volume limit. Let us look more carefully at the construction
\beq
\label{b7}
Z[J, V] = \int D\phi\,e^{-S(\phi) + J
\int\limits_{\cal B} d^4x\, \phi(x)}
\eeq
where the action is an integral over all of space-time
$S(\phi) = \int\limits_{R^4} d^4x\, {\cal L}(\phi)$. Every function
$\phi(x)$ can be decomposed as $\phi(x)= \phi_1(x) + \phi_2(x)$
where $\phi_1(x)$ has support inside the ball ${\cal B}$
and $\phi_2(x)$ has support
outside ball ${\cal B}$. In the same way we can decompose
the action $S(\phi)=S_1(\phi)+S_2(\phi)$ where the first part is
integral over the ball and second is integral over its complement.
In this separation of the action the values of $\phi(x)$
on the boundary of the ball are important
since non-trivial boundary conditions
can give rise to important surface contributions. We assume that
all possible surface contributions are included in $S_1$.
Therefore the generating functional can be written in the following
form
\beq
\label{b8}
Z[J,V]=\frac{\int D\phi_2\, e^{-S_2(\phi_2)}\,\int D\phi_1\,
e^{-S_1(\phi_1) + J \int\limits_{\cal B} d^4x\, \phi(x)}}
{\int D\phi_2\, e^{-S_2(\phi_2)}\,\int D\phi_1\,
e^{-S_1(\phi_1)}}
\eeq
where in the denominator we have written out the contribution
$Z[J(x)=0]$ which we normalized to one in (\ref{b2}).
In (\ref{b8}) we did not write explicitly the integration
over collective coordinates. The ball ${\cal B}$ has a
center $x_0$ and one has to integrate out this dependence
in the expression (\ref{b8}). In the supersymmetric models
we can put this ball in superspace and thus we have two
collective coordinates $x_0$ and its superpartner $\theta_0$.
Hence we can write the generating functional as follows
\beq
\label{b9}
Z[J,V]=\int d^4x_0\,\int D\phi\,
e^{-\int\limits_{\cal B} d^4x {\cal L} +
J \int\limits_{\cal B} d^4x\, \phi(x)}
\eeq
where in general the integration over the appropriate collective
coordinates is assumed. Now let us use the definition (\ref{b6})
of the Green's functions
\beq
\label{b10}
\langle \phi(x) \rangle =
\frac{1}{V}\left.\frac{\partial Z}{\partial J}\right|_{V=0}=
\int d^4x_0\, \int D\phi\,
\left( \frac{\phi}{V}
\right) \left.e^{- \int\limits_{\cal B} d^4x {\cal L}}
\right|_{V\rightarrow 0}.
\eeq
where $\phi \equiv \int\limits_{\cal B} d^4x\, \phi(x)$. We
have to take the zero volume limit in the expression (\ref{b10}). In
this limit only the topological contributions to the action survive,
which are independent of the volume: $S(\phi) \rightarrow S_{top}$.
In the zero volume limit only the configurations with point support
\footnote{In the expression (\ref{b11}) we do not specify the
delta function
(for example, $\delta^4(x)$ or $\delta(x^2)$). It may depend on
the details of the limiting procedure.}
survive
\beq
\label{b11}
\frac{\phi}{V} \rightarrow \phi\delta(x-x_0)\,\,\,
\Longleftrightarrow\,\,\,\int d^4x \frac{1}{V}
\phi(x) \rightarrow \int d^4x \delta(x-x_0)\phi(x),\,\,\,\,\,\,\,\,\,
V\rightarrow 0.
\eeq
These configurations give rise to constant Green's functions.
Therefore schematically we can write down the following expression
for the Green's functions
\beq
\label{b12}
\langle \phi(x) \rangle = \sum\limits_{top}
\int d^4x_0\,\int d\phi\, \phi\, \left(\delta(x-x_0)\right)\,
e^{-S_{top}}
\eeq
where $\int d\phi\,\phi$ is just the finite dimensional integral
over the relevant moduli. Also the sum over different topological
contributions has to be assumed in the most general situation.
At this point we have to issue
a warning. We cannot regard
the expression (\ref{b12}) as a simple recipe for calculating
the relevant Green's functions. The configurations with point
support are UV singular and any reliable calculation will need
a UV regularization. In the theory of distributions one cannot make
sense of products of delta functions except by regularization.
Also one has to
worry about the proper definition of the topological contribution $S_{top}$
in the action for singular configurations. As we will see in the
case of Green's functions of the composite operator $\lambda\lambda$,
one can define the topological contribution using the
Atiyah-Singer theorem.
In this section we have tried to give
arguments in favour of the
rather intuitively obvious idea that the constant Green's
functions are saturated by the configurations
localized at a point\footnote{This property
is related to the fact that the relevant Green's functions
are a set of disconnected coordinate independent pieces.}.
We have shown that only the configurations
with non trivial topology (the action on these configurations is
proportional to the topological charge) can give rise to the constant
Green's functions.
The space of such configurations
is finite dimensional, therefore the path integral
representation of the relevant Green's functions is reduced
to a finite dimensional integral over the corresponding
collective coordinates. In the next section we will see that
the configurations with point support have the right number
of fermionic zero modes.
\section{Fractional instantons}
In the last section we have argued that the field configurations with
point support give rise to constant Green's functions. Now
let us turn to the discussion of the situation for $SU(2)$ SYM.
As we have seen in Section 2, for saturation
of the gluino condensate one needs two fermionic zero modes and
a gauge configuration with $Q=1/2$. Let us look at the fermionic
zero modes for self-dual (antiself-dual) gauge configurations.
A supersymmetry transformation applied to the bosonic solution
$G^a_{\mu\nu}$ generates two fermionic zero modes
\beq
\label{c1}
\lambda^{a}_{\alpha} \sim G^{a}_{\alpha\beta} \epsilon^{\beta}
\eeq
where $\epsilon^{\beta}$ is the spinor parameter of
transformation. The remaining two zero modes can be obtained by
applying a superconformal transformation to the bosonic
solution
\beq
\label{c2}
\lambda^{a}_{\alpha} \sim G^{a}_{\alpha\beta}(x-x_0)^{\beta
\dot\gamma} \xi_{\dot\gamma}
\eeq
where $(x-x_0)^{\beta\dot\gamma} \xi_{\dot\gamma}$ is a coordinate
dependent parameter of the transformation.
The first two zero modes (\ref{c1}) correspond to a physical symmetry
of the QFT.
The second two modes (\ref{c2}) correspond to a symmetry
of the classical equations of motion only. Therefore we
can sacrifice the superconformal modes.
We can try to solve the problem by brute force,
requiring the superconformal modes to vanish
\beq
\label{c3}
G^{a}_{\alpha\beta}(x-x_0)^{\beta
\dot\gamma} \xi_{\dot\gamma} = 0.
\eeq
This means that the superconformal transformation acts trivially on
the bosonic solution $ G^{a}_{\alpha\beta}$.
Equation (\ref{c3}) has only one solution in terms of
distributions which, due to the dimension of the field,
can be written as follows
$G \sim \delta((x-x_0)^2)$. We have thus found that the field
configuration with point support has precisely two fermionic
zero modes, as needed for a non-zero gluino condensate.
The solution found above can be regarded as the zero size
limit of the one instanton solution
\beq
\label{c4}
G^{a}_{\mu\nu} = -4\eta^{a}_{\mu\nu}\frac{\rho^2}{[(x-x_0)^2
+\rho^2]^2}\,\,\,\,\,\,\,\, \stackrel{\rho \rightarrow 0}{\longrightarrow}
\,\,\,\,\,\,\,\, -4\eta^{a}_{\mu\nu} \delta((x-x_0)^2).
\eeq
where $\eta^{a}_{\mu\nu}$ are the 't Hooft symbols.
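One can check numerically that the profile (\ref{c4}) carries unit topological charge for every $\rho$, so the charge survives the $\rho\to 0$ limit. The sketch below assumes the standard BPST normalization, in which $G^a_{\mu\nu}G^a_{\mu\nu} = 192\rho^4/[(x-x_0)^2+\rho^2]^4$ and, for a self-dual field, $Q = \frac{1}{32\pi^2}\int d^4x\, G^a_{\mu\nu}G^a_{\mu\nu}$; conventions may differ from (\ref{a7}) by normalization:

```python
import math
from scipy.integrate import quad

def topological_charge(rho):
    """Q for the BPST profile, integrated radially: d^4x -> 2 pi^2 r^3 dr."""
    integrand = lambda r: (2 * math.pi**2 * r**3
                           * 192 * rho**4 / (r**2 + rho**2)**4)
    # the integrand falls off like r^{-5}; the tail beyond 100*rho is negligible
    val, _ = quad(integrand, 0, 100 * rho)
    return val / (32 * math.pi**2)

for rho in (1.0, 0.1, 0.01):
    print(topological_charge(rho))  # -> 1.0, independent of rho
```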
The $-4\eta^{a}_{\mu\nu} \delta((x-x_0)^2)$ is the generalized solution
of the self-duality equation. Therefore formally one can write the
expression for the action as follows $S= 8\pi^2 Q/g^2$ where $Q$ is
defined by (\ref{a7}). In the case of zero size instanton we cannot
calculate the action or topological charge directly since there are
singularities (in general expression like
$\delta(x)\delta(x)$ do not make
sense)\footnote{Using the delta-like sequences
one can show that $\delta(x)\delta(x)\rightarrow\delta(x^2)$.}.
The most natural way is to use the Atiyah-Singer theorem as a
definition of the topological charge and the corresponding action.
It is important to note that the zero size instanton is an independent
object in the sense that the standard instanton is not the only
possible regularization of the zero size instanton. There are
infinitely many such regularizations. The consideration in
\cite{zhit} supports the point that such singular configurations
exist independently.
Another very interesting point is that if we consider the gauge
potential for the instanton in the zero size limit
\beq
\label{c4a}
A^a_{\mu} = 2\eta^{a}_{\mu\nu} \frac{(x-x_0)_\nu}{(x-x_0)^2 + \rho^2}
\,\,\,\,\,\,\,\, \stackrel{\rho \rightarrow 0}{\longrightarrow}
\,\,\,\,\,\,\,\, 2\eta^{a}_{\mu\nu} \frac{(x-x_0)_\nu}{(x-x_0)^2}
\eeq
then one can recognize in this expression the meron configuration
introduced in \cite{gross}. There are two remarkable facts about
this configuration. First, in the meron background the Wilson loop
obeys an area law and therefore there is confinement. Second, the meron
is a point-like defect since it can be gauged away everywhere except at
one point $x_0$.
The supersymmetry transformation applied to the zero size instanton
generates two fermionic modes
\beq
\label{c5}
\lambda^{a}_{\alpha} \sim -4\eta^{a}_{\alpha\beta}\epsilon^{\beta}
\delta((x-x_0)^2).
\eeq
which correspond to the two collective coordinates, the center of the
zero size instanton $x_0$ and its superpartner $\theta_0$.
The superconformal transformations act trivially and give zero.
This is natural since the size of the instanton $\rho$ and
its superpartner are absent. In a superspace language
the relevant superfield
configuration has the form
\beq
\label{c6}
W^{\alpha}W_{\alpha} \sim \delta((x-x_0)^4) (\theta -\theta_0)^2.
\eeq
Further one has to integrate over the collective coordinates to get
the gluino condensate
\beq
\label{c7}
\langle W^{\alpha}W_{\alpha} \rangle \sim M^3\int
d^4x_0\,d^2\theta_0\,\,
\delta((x-x_0)^4) (\theta -\theta_0)^2 e^{-\frac{4\pi^2}{g^2}}
\eeq
where $M$ is a parameter with dimension of mass which is needed
to keep the dimensions right. After these calculations
we get the gluino condensate
\beq
\label{c8}
\langle \lambda\lambda\rangle \sim c M^3 e^{-\frac{4\pi^2}{g^2}}
\sim c\Lambda^3
\eeq
where $c$ is a numerical constant and $\Lambda$ is the strong coupling scale.
Of course the above calculations are naive since we did not discuss
the regularization needed for the product of
two delta functions $\delta(x)\delta(x)$. Thus we did not worry
about numerical factors.
In this context the standard instanton calculations \cite{instanton}
can be thought of as a way of introducing a regularization.
The size of the instanton plays the role of the UV cut-off. The
additional fermionic modes have to be introduced to keep
supersymmetry unbroken within these calculations.
\section{Discussion and conclusions}
In the present letter we have tried to answer the question of what kind
of field configurations give rise to the gluino condensate.
We have addressed this problem within the QFT language.
The existence of non-zero gluino condensate and unbroken
supersymmetry gives us two puzzles.
The first is that, from general principles, the non-vanishing matrix
element $\langle \lambda\lambda\rangle$
requires the existence of fractional instantons or, more
precisely, field configurations with a reduced number of fermionic
zero modes. Second, a non-vanishing gluino condensate
gives rise to a tower of constant Green's functions for the composite
operator $\lambda\lambda$. In fact both puzzles have the
same answer. We have shown that the constant Green's functions
can be saturated by field configurations with point
support. At the same time such configurations can be thought of
as zero size instantons and they have exactly the right number of
fermionic zero modes.
Also within this context it is interesting to note
the correspondence between $D$-instantons of Type IIB
superstring on $AdS_5\times S^5$ and YM instantons in
SYM living on the boundary of $AdS_5$ \cite{kogan}.
Within this correspondence the size of the instanton
plays the role of a UV cut-off (the distance of the $D$-instanton
to the boundary of $AdS_5$) and the instanton itself
is an object with point support on the boundary.
In the large N-limit ('t Hooft limit) the instantons do
not survive and only fractional configurations with
action $S \sim 1/N$ can survive. Therefore it is
natural to have configurations with point support (thus
with a reduced number of zero modes) on the boundary.
Hence one can call this the $D$-instanton/fractional
instanton correspondence.
We do not
want to speculate about this subject except to point
out the possible parallels between the different ideas.
It would be interesting to go further on this subject.
For example, it would be nice to gain a better
understanding of how to calculate
different condensates in supersymmetric gauge theories
using configurations with point support without
direct use of instantons. Another problem for further
research could be the understanding of the twenty-year-old
calculation of the Wilson loop \cite{gross} in the light
of the presented arguments.
\begin{flushleft} {\Large\bf Acknowledgments} \end{flushleft}
\noindent The author is grateful to Hans Hansson, Ulf Lindstr\"om
and Myck Schwetz
for useful discussions and comments.
The author was supported by the grants of the Royal Swedish Academy of
Sciences and the Swedish Institute.
\begin{flushleft} {\Large\bf Note added} \end{flushleft}
After completing this work we learned about the work \cite{brodie}
by J.Brodie where similar ideas were discussed from the string theory
point of view. Also we learned about some evidence \cite{nar}
within lattice calculations in favor
of the fractional instantons.
\baselineskip=1.6pt
\section{Introduction}
Lattice simulations of supersymmetric systems usually apply
formulations that reveal the supersymmetry
in the continuum limit but not in the lattice discretized
version \cite{Montvay}. This note
addresses the issue of a lattice action, which directly
displays a continuous form of supersymmetry.
Some related works are listed in Ref. \cite{susylat}.
\subsection{A simple supersymmetric model}
We first consider a simple 2d SUSY model \cite{GS}
given in the continuous Euclidean space by the Lagrangian
\begin{equation}
{\cal L} = \bar \psi \gamma_{\mu} \partial_{\mu} \psi
+ \partial_{\mu} \varphi \partial_{\mu} \varphi \ ,
\end{equation}
where $\psi$ is a real ``Majorana spinor''
\footnote{Strictly speaking, there are no Majorana spinors in
Euclidean space, we just refer to the Euclidean version of the
corresponding formulae in Minkowski space. A definition is
given for instance in Ref. \cite{Gol}, based on a Euclidean
analog of charge conjugation.
Note that $\bar \psi$ and $\psi$ are not independent.}
and $\varphi$ is a real scalar field.
Many qualitative features with respect to a lattice formulation
are the same as in the Wess-Zumino model.
The action $S$ is invariant under the field transformations
\begin{equation} \label{trafo}
\delta \psi = - \gamma_{\mu} \partial_{\mu} \varphi \ \epsilon
\quad , \quad \delta \varphi = \bar \epsilon \ \psi \ ,
\end{equation}
where the two components of the transformation parameter
$\ \epsilon \ $ are real Grassmann variables.
As an important property, we note that the supersymmetric
generator forms a closed algebra with the translation operator,
\begin{equation} \label{transla}
[\delta_{1}, \delta_{2}]\varphi = ( \bar \epsilon_{1} \ \gamma_{\mu} \
\epsilon_{2} \ - \ \bar \epsilon_{2} \ \gamma_{\mu} \ \epsilon_{1} ) \
\partial_{\mu} \varphi \ .
\end{equation}
\subsection{Ansatz for a lattice formulation} \label{ansatz}
Let us consider a rather general ansatz for a lattice discretization
of this free system in momentum space,
\begin{eqnarray}
S &=& \int_{B} \frac{d^{2}p}{(2\pi )^{2}} \ \Big\{
\bar \Psi (-p) [ \gamma_{\mu} \rho_{\mu}(p) + \lambda (p)]
\Psi (p) + \Phi (-p) \Omega (p) \Phi (p) \Big\} \ , \nonumber \\
\delta \Psi (p) &=& - [ \gamma_{\mu} R_{\mu}(p) + L(p)]
\Phi (p) \epsilon \ , \nonumber \\ \label{act}
\delta \Phi (p) &=& \bar \epsilon [u(p)+ \gamma_{\mu}v_{\mu}(p)] \Psi \ .
\end{eqnarray}
Here, $\Psi$ and $\Phi$ are the massless lattice fermion and scalar
fields, respectively, and the new quantities, which we introduce as an ansatz for the
inverse propagators and for the transformation terms ($\rho_{\mu},
\lambda , \Omega ,R_{\mu},L,u,v_{\mu}$) are real in coordinate
space. It is desirable that they are all local, i.e. analytic in
momentum space. We require
the low energy expansion of the action to reproduce the correct
continuum limit, and the inverse propagators obey in coordinate
space the normalization conditions
\footnote{In the first expression, there is no sum over $\mu$.}
\begin{equation} \label{norm}
\sum_{x} x_{\mu} \rho_{\mu}(x) = 1 \ , \ \sum_{x} \lambda (x) = 0
\ , \ \sum_{x} x^{2} \Omega (x) = -4 \ .
\end{equation}
We do not require the lattice transformation terms to correspond
exactly to the continuum transformations (\ref{trafo}).
Hence this ansatz includes a possible ``remnant supersymmetry'' of the
lattice action, similar to the Ginsparg-Wilson relation for
the chiral symmetry \cite{ML}.
The general (remnant) supersymmetry requirement $\delta S =0$
amounts to
\begin{eqnarray} \hspace{-13mm} &&
-R_{\mu}(-p) [\rho_{\mu}(p)-\rho_{\mu}(-p)]+L(-p)[\lambda (-p)-
\lambda (p) ]
+ u(p) [\Omega (p) + \Omega (-p)] = 0 \ , \nonumber \\ \hspace{-10mm}&&
R_{\mu}(-p)[\lambda (-p)-\lambda (p)] + L(-p) [ \rho_{\mu}(-p) -
\rho_{\mu}(p)]
+ v_{\mu}(p) [\Omega (p) + \Omega (-p)] = 0 \ .
\end{eqnarray}
It is sensible to assume the following symmetry properties in the
action: $\rho_{\mu}$ is odd in the $\mu$ component and even in all
other directions, while the Dirac scalars
$\lambda$ and $\Omega$ are entirely even. Then the conditions simplify to
\begin{eqnarray}
-R_{\mu}(-p) \rho_{\mu}(p) + u(p) \Omega (p) &=& 0 \ ,\nonumber \\
- L(-p) \rho_{\mu}(p) + v_{\mu}(p) \Omega (p) &=& 0 \ .
\end{eqnarray}
Remarkably, $\lambda $ no longer occurs in these conditions.
Finally, we assume for the transformation terms $R_{\mu}$ and $L$
the same symmetries as for $\rho_{\mu},\lambda$, respectively,
which leads to
\begin{eqnarray}
-R_{\mu}(p) \rho_{\mu}(p) &=& u(p) \Omega (p) \ ,
\nonumber \\
L (p) \rho_{\mu}(p) &=& v_{\mu}(p) \Omega(p)
\qquad \quad (\mu=1,2) . \label{condgen}
\end{eqnarray}
The translation operator is identified from
\begin{eqnarray} \hspace{-8mm}
[\delta_{1},\delta_{2}] \Phi_{x} &=& \sum_{y} [ \bar \epsilon_{1} \
Q(x-y) \
\epsilon_{2} \ - \ \bar \epsilon_{2} \ Q(x-y) \ \epsilon_{1} ]
\Phi_{y} \ , \nonumber \\ \hspace{-8mm}
Q(x-y) &=& \sum_{z}[u(x-z) + \gamma_{\nu} v_{\nu}(x-z)] \
[\gamma_{\mu} R_{\mu}(z-y) +L(z-y)] \ ,
\end{eqnarray}
but this general form is not immediately instructive.
Let us consider simple solutions for the case
$u=1$, $v_{\mu}=L=0$.
\footnote{Note that the two transformation terms with an unusual Dirac
structure, $L$ and $v_{\mu}$, can only come into play simultaneously.}
The standard lattice action,
$\rho_{\mu}(p) = i \bar p_{\mu} := i \sin p_{\mu}$,
$\Omega (p) = \hat p^{2} := \sum_{\mu} [2 \sin (p_{\mu}/2)]^{2}$,
requires
\begin{displaymath}
R_{\mu}(p) = \frac{\hat p^{2}}{\bar p^{2}} \ i \bar p_{\mu} \ ,
\end{displaymath}
which is singular at $p =(\pi ,0)$, $(0,\pi)$ and $(\pi ,\pi)$,
hence the transformation is non-local.
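Both the condition (\ref{condgen}) (with the sum over $\mu$ understood, $u=1$, $v_\mu=L=0$) and the singular structure of this $R_\mu$ can be checked directly in momentum space. A numerical sketch with arbitrary sample momenta:

```python
import math

def bar(p):                 # \bar p_mu = sin p_mu
    return math.sin(p)

def hat2(p1, p2):           # \hat p^2 = sum_mu [2 sin(p_mu/2)]^2
    return (2 * math.sin(p1 / 2))**2 + (2 * math.sin(p2 / 2))**2

def bar2(p1, p2):           # \bar p^2
    return bar(p1)**2 + bar(p2)**2

def check_condition(p1, p2):
    """-sum_mu R_mu(p) rho_mu(p) should equal u(p) Omega(p) = hat p^2,
    with R_mu = i bar p_mu hat p^2 / bar p^2 and rho_mu = i bar p_mu."""
    lhs = sum(-(1j * bar(p) * hat2(p1, p2) / bar2(p1, p2)) * (1j * bar(p))
              for p in (p1, p2))
    return lhs.real, hat2(p1, p2)

print(check_condition(0.7, -1.2))   # the two numbers agree
# Near p = (pi, 0) the prefactor hat p^2 / bar p^2 blows up -> non-locality:
eps = 1e-3
print(hat2(math.pi - eps, 0.0) / bar2(math.pi - eps, 0.0))  # very large
```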
An obvious concept to simplify $R_{\mu}$
-- and to obtain the same dispersion relation for fermion and scalar --
is the use of the
same lattice differential operator for the scalar and the fermion part
of the action.
One way to do so is to set $\Omega(p)
= \bar p^{2}$, $R_{\mu}(p)=\rho_{\mu}(p)=i\bar p_{\mu}$, which is local but
affected by
doubling, for both the fermion and the scalar.
In the present model, unlike the case of
the Wess-Zumino model \cite{Jap},
we cannot treat them by adding Wilson terms ($(r/2) \cdot\hat p^{2}$),
because terms of this kind alter $\Omega$ but not $\rho_{\mu}$
(the fermionic Wilson term contributes to $\lambda$).
Hence $R_{\mu}$ gets complicated and non-local again,
$R_{\mu}(p) = i \bar p_{\mu}(1 + (r/2)\hat p^{2}/\bar p^{2})$.
If one is ready to accept non-locality, then it looks simpler to
adjust the differential operators the other way round,
$\rho_{\mu}(p)=R_{\mu}(p)= i \hat p_{\mu}$,
as suggested in Ref. \cite{China}, and $\Omega (p) = \hat p^{2}$.
Then the translation operator resulting from the lattice version
of eq. (\ref{transla}) corresponds to a half-lattice shift,
whereas it is a full lattice shift for the option mentioned before.
However, the fermionic inverse propagator has a finite jump
at the edge of the Brillouin zone, so we are dealing with a
non-locality similar to the SLAC fermions. This suggests that also this
approach fails to recover Lorentz invariance in the presence of a
gauge interaction, as was pointed out for the SLAC fermions
at the one-loop level \cite{KS}.
One can construct a number of solutions of this type by hand.
For instance, the standard action together with $u(p)= \prod_{\mu}
\cos (p_{\mu}/2)$ even provides locality, but such hand-made
constructions hardly look satisfactory. Similar
to the problem of chiral fermions on the lattice, they do not
appear promising for the consistent incorporation of interactions.
Hence we are going to follow a different, more systematic strategy.
\section{A perfect supersymmetric lattice action} \label{perfect}
Since we are considering a free theory here, we can construct
a perfect lattice action by ``blocking from the continuum'',
which corresponds to a block variable renormalization group
transformation (RGT) with blocking factor infinity,
\begin{equation} \label{RGT}
e^{-S[ \Psi ,\Phi ]} = \int D \psi D \varphi \
e^{-s[ \psi , \varphi ] - T [ \Psi , \psi , \Phi , \varphi ] } \ ,
\end{equation}
where $S$ is the perfect lattice action (i.e. an action without
lattice artifacts), $s$ the continuum action,
and $T$ the transformation term.
We choose the latter such that the functional integral remains
Gaussian,
\begin{eqnarray}
T &=& \sum_{x,y} [ \bar \Psi_{x} - \int_{C_{x}} \bar \psi (u) du ]
\ (\alpha^{f})^{-1}_{xy} \ [ \Psi_{y} - \int_{C_{y}} \psi (u) du ]
\nonumber \\
&+& \sum_{x,y} [ \Phi_{x} - \int_{C_{x}} \varphi (u) du ] \label{TT}
\ (\alpha^{s})^{-1}_{xy} \ [ \Phi_{y} - \int_{C_{y}} \varphi (u) du ] \ ,
\end{eqnarray}
where $C_{x}$ is the unit square with center $x$, and $\alpha^{f}$,
$\alpha^{s}$ are arbitrary RGT parameters ($\alpha^{s}$ has to be
positive).
The resulting perfect action reads
\begin{eqnarray} \hspace*{-7mm} \hspace{-5mm} &&
S[\Psi ,\Phi ] = \frac{1}{(2\pi )^{2}} \int_{-\pi}^{\pi}
d^{2}p \ \Big\{ \bar \Psi (-p) \Delta^{f}(p)^{-1} \Psi (p)
+ \Phi (-p) \Delta^{s}(p)^{-1} \Phi (p) \Big\} \nonumber \\
\hspace*{-7mm} \hspace{-5mm} &&
\Delta^{f}(p) = \sum_{l \in {\sf Z \!\!\! Z}^{2}} \frac{\Pi (p+2\pi l)^{2}}
{i (p_{\mu} + 2\pi l_{\mu}) \gamma_{\mu}} + \alpha^{f}(p) \ , \quad
\Delta^{s}(p) = \sum_{l \in {\sf Z \!\!\! Z}^{2}} \frac{\Pi (p+2\pi l)^{2}}
{(p + 2\pi l)^{2}} + \alpha^{s}(p) \ , \label{perfact}
\end{eqnarray}
where $\Pi (p) := \prod_{\mu} \hat p_{\mu}/p_{\mu}$.
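One can verify numerically that the scalar part of (\ref{perfact}) reproduces the continuum propagator at small momenta: with $\alpha^s=0$ the truncated sum satisfies $p^2\Delta^s(p)\to 1$ as $p\to 0$. A sketch (the truncation $N$ is an arbitrary choice):

```python
import math

def hat(p):   # \hat p_mu = 2 sin(p_mu / 2)
    return 2 * math.sin(p / 2)

def Pi2(q1, q2):
    # Pi(q)^2 with Pi(q) = prod_mu hat q_mu / q_mu (limit 1 at q_mu = 0)
    out = 1.0
    for q in (q1, q2):
        out *= (hat(q) / q)**2 if q != 0 else 1.0
    return out

def Delta_s(p1, p2, N=50):
    """Truncated sum over l in [-N, N]^2 of Pi(p + 2 pi l)^2 / (p + 2 pi l)^2,
    i.e. the perfect scalar propagator with alpha^s = 0."""
    s = 0.0
    for l1 in range(-N, N + 1):
        for l2 in range(-N, N + 1):
            q1, q2 = p1 + 2 * math.pi * l1, p2 + 2 * math.pi * l2
            s += Pi2(q1, q2) / (q1 * q1 + q2 * q2)
    return s

for p in (0.5, 0.1, 0.01):
    print(p * p * Delta_s(p, 0.0))  # tends to 1 as p -> 0
```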
Locality requires $\alpha^{f}\neq 0$, which naively breaks the chiral
symmetry. However, the latter is still present in the observables
\cite{Schwing}, and a continuous remnant form of it even persists
in the lattice action, if $\alpha^{f}(p)$ is analytic \cite{ML}.
\footnote{In this case, the full chiral symmetry of the fermion propagator
is broken only locally. This is the property denoted as Ginsparg-Wilson
relation.}
Now we consider the SUSY transformation. The variation of the
continuum fields is given in eq. (\ref{trafo}). If we transform
simultaneously the lattice fields as
\begin{equation} \label{ptrafo}
\delta \Psi_{x} = - \gamma_{\mu} \int_{C_{x}} \partial_{\mu} \varphi (u)
du \ \epsilon \ , \quad
\delta \Phi_{x} = \bar \epsilon \int_{C_{x}} \psi (u) du \ ,
\end{equation}
then all the square brackets in the expression for $T$ (eq. (\ref{TT}))
remain invariant -- and so does the continuum action -- hence
$\delta S =0$.
Everything is consistent since we block the fields as well as
their variations from the continuum.
Note that this is not a solution along the lines of section \ref{ansatz},
because the transformations of the
lattice fields are not expressed directly in terms of lattice fields.
(The special case of a $\delta$ function RGT, $\alpha^{s},
\ \alpha^{f} \to 0$, is an exception, see below.)
Hence the solution is, in this sense, implicit.
However, we can re-write the field variations in terms of
lattice variables. First, we define a continuum current
\begin{equation}
j_{\mu} = \gamma_{\mu} \varphi \ .
\end{equation}
We now block this current by integrating the flux through
the face between two adjacent lattice cells,
\begin{equation} \label{curr}
J_{\mu ,x} = \int_{-1/2}^{1/2} j_{\mu}(x+ \frac{1}{2} \hat \mu
+u_{\nu}) \ du_{\nu} \qquad (\nu \neq \mu ,~ \vert \hat \mu \vert =1).
\end{equation}
This is a perfect lattice current \cite{Schwing}. Here we assume
it to be implemented
so that eq. (\ref{curr}) holds exactly.
As an interesting property,
its lattice divergence is equal to the blocked continuum divergence,
\begin{equation}
\nabla_{\mu} J_{\mu ,x} := \sum_{\mu} (J_{\mu ,x}-J_{\mu , x-\hat \mu})
= \int_{C_{x}} \partial_{\mu} j_{\mu}
(u) \ du \qquad ({\rm Gauss'~law}).
\end{equation}
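This Gauss' law is the divergence theorem applied to the unit cell, and can be
checked numerically. The following Python sketch (the test current, quadrature
resolution and helper names are our own illustrative choices, not part of the
construction above; the $\gamma$-matrix structure is omitted) integrates the
flux of a smooth current through the four faces of a cell and compares the
lattice divergence with the blocked continuum divergence:

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoid rule on a uniform grid (last axis)."""
    h = x[1] - x[0]
    return h * (y.sum(axis=-1) - 0.5 * (y[..., 0] + y[..., -1]))

t = np.linspace(-0.5, 0.5, 2001)

# smooth test current j_mu(u) and its continuum divergence
j = (lambda u1, u2: np.sin(u1) * np.cos(u2),
     lambda u1, u2: np.cos(u1) * np.sin(u2))
def div_j(u1, u2):
    return 2.0 * np.cos(u1) * np.cos(u2)

def face_flux(mu, x):
    """J_{mu,x}: flux through the face between cells x and x + mu-hat."""
    if mu == 0:
        return trapz(j[0](x[0] + 0.5, x[1] + t), t)
    return trapz(j[1](x[0] + t, x[1] + 0.5), t)

x = np.array([1.0, 2.0])                      # an arbitrary cell centre
e = np.eye(2)
lat_div = sum(face_flux(mu, x) - face_flux(mu, x - e[mu]) for mu in (0, 1))

# blocked continuum divergence: integral of d_mu j_mu over the cell C_x
U1, U2 = np.meshgrid(x[0] + t, x[1] + t, indexing='ij')
cell_div = trapz(trapz(div_j(U1, U2), t), t)
```

Up to quadrature error, `lat_div` and `cell_div` agree for any smooth current,
exactly as the equation above asserts.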
We are now prepared to write the variation of the lattice fermion
field from eq. (\ref{ptrafo}) in terms of lattice variables,
\begin{equation}
\delta \Psi = - \nabla_{\mu} J_{\mu} \ \epsilon \ , \quad
\delta \bar \Psi = - \bar \epsilon \ \nabla_{\mu} J_{\mu} \ .
\end{equation}
In addition, we introduce the fermionic lattice field
\begin{equation}
\tilde \Psi_{x} := \int_{C_{x}} \ \psi (u) du \ ,
\end{equation}
which allows us to write also $\delta \Phi$ in terms of lattice
quantities,
\begin{equation}
\delta \Phi = \bar \epsilon \ \tilde \Psi \ .
\end{equation}
For a $\delta$ function RGT in the fermionic sector we have
$\Psi = \tilde \Psi$, but for finite $\alpha^{f}$ this
does not hold exactly.\\
A generalization is possible for instance with respect to the blocking
scheme. Instead of the block average scheme we have used so far,
we can start from a more general ansatz
\begin{eqnarray} && \hspace{-20mm}
T = \sum_{x,y} [ \bar \Psi_{x} - \int \Pi^{f}(x-u) \bar \psi (u) du ]
\ (\alpha^{f})^{-1}_{xy} \ [ \Psi_{y} - \int \Pi^{f}(y-u) \psi (u) du ]
\nonumber \\ && \hspace{-15mm}
+ \sum_{x,y} [ \Phi_{x} - \int \Pi^{s}(x-u) \varphi (u) du ]
\ (\alpha^{s})^{-1}_{xy} \ [ \Phi_{y} - \int \Pi^{s}(y-u) \varphi (u) du ] \ ,
\end{eqnarray}
where $\int \Pi^{f}(u) du = \int \Pi^{s}(u) du =1$. Both
convolution functions $\Pi^{f}$, $\Pi^{s}$ are peaked around zero
and decay fast enough to preserve locality on the lattice.
Correspondingly, the variations of the lattice fields turn into
\begin{eqnarray}
\delta \Psi_{x} &=& - \gamma_{\mu} \int \Pi^{f}(x-u) \partial_{\mu}
\varphi (u) du \ \epsilon \ , \nonumber \\
\delta \Phi_{x} &=& \bar \epsilon \int \Pi^{s}(x-u) \psi (u) du
:= \bar \epsilon \ \tilde \Psi_{x} \ ,
\end{eqnarray}
where we have adjusted the definition of $\tilde \Psi$.
If we want to achieve $\Psi = \tilde \Psi$, then we need -- except for
$\alpha^{f} =0$ --
also $\Pi^{f}=\Pi^{s}$.
This generalized scheme does not yield an obvious
lattice current any more; the latter is a virtue of the block average
scheme (characterized by $\Pi^{f}(u)$, $\Pi^{s}(u) =1$ if $u\in
C_{0}$, and 0 otherwise).
In the general case, it is easier to consider the continuum
divergence
\begin{equation}
d(u) = \partial_{\mu} j_{\mu}(u) = \gamma_{\mu} \partial_{\mu}
\varphi (u) \ ,
\end{equation}
and build from it directly the lattice divergence
\begin{equation}
D_{x} = \int \Pi^{f}(x-u) d(u) \ du \ ,
\qquad \delta \Psi = -D \ \epsilon \ .
\end{equation}
Regarding the transformation algebra, we obtain
\begin{equation} \label{ptranslat}
[\delta_{1},\delta_{2}] \ \Phi_{x} = ( \bar \epsilon_{1} \
\gamma_{\mu} \ \epsilon_{2} \ - \ \bar \epsilon_{2} \
\gamma_{\mu} \ \epsilon_{1})
\int \Pi^{s}(x-u) \partial_{\mu} \varphi (u) \ du \ .
\end{equation}
In particular, for the block average scheme this simplifies to
\begin{equation} \label{ptranslatba}
[ \delta_{1} , \delta_{2} ] \ \Phi
= \bar \epsilon_{1} \ \nabla_{\mu}
J_{\mu} \ \epsilon_{2} - \bar \epsilon_{2} \ \nabla_{\mu}
J_{\mu} \ \epsilon_{1} \ .
\end{equation}
We see that the continuum translation operator is inherited by
the perfect lattice formulation in a consistent way:
eqs. (\ref{ptranslat}), (\ref{ptranslatba})
show that the corresponding lattice translation operator
is just the blocked continuum translation operator.
The formula for $[\delta_{1},\delta_{2}] \Psi$ is analogous.
If we require the resulting translation operators to be identical,
then we need $\Pi^{f} = \Pi^{s}$.
Then the algebra of field variations and translation closes
under the blocking integral.
In any case, the fermionic and scalar spectrum are equal, because
they both coincide with the continuum spectrum.\\
It is straightforward to apply this perfect
lattice formulation to more complicated cases, see below. Interactions
can be included perturbatively in the process of blocking
from the continuum. For asymptotically free theories,
at $m=0$ even the classically perfect action
behaves perfectly \cite{Has}. Hence by means of an implicit
(but not just symbolic) definition of the action -- in terms of
classical inverse blocking -- we can also go beyond perturbation
theory in the massless case. This is analogous to the
fixed point formulation of a chiral gauge theory on the lattice
\cite{Schwing,HLN}.
\section{Adding an auxiliary scalar field}
To proceed to the 2d Wess-Zumino model, we include
an auxiliary scalar field.
\footnote{This is even necessary for the general validity
of the resulting translation operator; otherwise it only holds
on-shell. See for instance P.G.O. Freund, ``Introduction to
Supersymmetry'', Cambridge University Press, 1986.}
This equilibrates the number of fermionic and bosonic degrees of
freedom.
The continuum Lagrangian and the field variations read
\begin{eqnarray}
{\cal L} &=& \bar \psi \gamma_{\mu} \partial_{\mu} \psi +
\partial_{\mu} \varphi \partial_{\mu} \varphi + f^{2} \ , \nonumber \\
\delta \psi &=& - (\gamma_{\mu}\partial_{\mu}\varphi +f) \ \epsilon \ ,
\quad
\delta \varphi = \bar \epsilon \ \psi \ , \quad \label{FF}
\delta f = \bar \epsilon \ \gamma_{\mu} \partial_{\mu} \psi \ .
\end{eqnarray}
Now the method presented in Ref. \cite{Jap} is applicable
in an extended form, if we use the following lattice discretization:
\begin{eqnarray}
S &=& \int_{B} \frac{d^{2}p}{(2\pi )^{2}} \ \Big[ \bar \Psi (-p)
i \gamma_{\mu} \bar p_{\mu} \Psi (p) + \Phi (-p) \bar p^{2} \Phi (p)
+ F(-p) F(p) \nonumber \\
&& + \bar \Psi (-p) W^{f}(p) \Psi (p)+2\Phi (-p) W^{s}(p) F(p)
- \Phi (-p) W^{s}(p)^{2} \Phi (p) \Big] \ , \nonumber \\
\delta \Psi (p) &=& -\{ [ i\gamma_{\mu}\bar p_{\mu} +W^{s}(p)]
\Phi (p) + F(p) \} \ \epsilon
\nonumber \\
\delta \Phi (p) &=& \bar \epsilon \ \Psi (p) \nonumber \\
\delta F(p) &=& \bar \epsilon \ [ i \gamma_{\mu} \bar p_{\mu} -W^{s}(p)]
\Psi (p) \ ,
\end{eqnarray}
where $W^{f}(p),\ W^{s}(p)$ are some sort of Wilson terms
(zero at the origin, non-zero at the edges of the Brillouin zone,
local and $2\pi$ periodic, which implies that they are even).
Hence they remove the degeneracy of the physical particles
with their doublers.
The standard form $1/2 \ \hat p^{2}$ is an example, but we can
also insert more general scalar and fermionic Wilson terms
\footnote{The continuum limit of this action does in fact correspond
to eq. (\ref{FF}), as we see if we substitute $F(p) \to \tilde F(p) =
F(p) + \Phi (p) W^{s} (p)$. Then the mixed term of $F$ and $\Phi$
disappears, and we obtain another irrelevant term
$- \Phi (-p) W^{s}(p)^{2} \Phi (p)$.}
and we always arrive at $\delta S =0$.
If we also want the fermion and scalar spectrum to coincide,
then we have to relate $W^{f}$ and $W^{s}$.
The procedure applied in Ref. \cite{Jap} further restores a remnant
chiral symmetry by means of the so-called overlap formalism,
and this could also be done here.\\
Instead we can apply the perfect action machine from section
\ref{perfect}, starting from the continuum system (\ref{FF}).
This also solves the doubling problem and maintains a (remnant)
chiral symmetry. The perfect propagator of the lattice field
$F$ reads
\begin{equation} \label{statprop}
\Delta^{\bar s}(p) = \sum_{l \in {\sf Z \!\!\! Z}^{2}}
\Pi^{\bar s}(p+2\pi l)^{2}+\alpha^{\bar s}(p) \ ,
\end{equation}
where $\alpha^{\bar s}$ is a RGT
parameter analogous to $\alpha^{s}$.
We now introduce {\em two} continuum currents, $\gamma_{\mu}\varphi$ and
$\gamma_{\mu} \psi$, and we construct perfect currents
from them. The explicit formulae are a straightforward extension
of the formulae in section \ref{perfect}. We do not display them here, but
we write them down for the further extension to the 4d Wess-Zumino model
in the appendix.
\section{Conclusions}
We illuminated the problem of a direct construction
of a supersymmetric lattice formulation.
Then we have shown how a construction in terms of
renormalization group transformations can be achieved.
We preserve invariance under a continuous supersymmetric
type of field transformations in a local perfect lattice action,
which has also a remnant chiral symmetry.
This applies to the 2d models discussed above, as well as to
the 4d Wess-Zumino model, see appendix.
We remark that the perfect formulation also cures
the well-known problems related to the Leibniz rule
\cite{susylat}
\footnote{See in particular the first paper by S. Nojiri.} --
which breaks down for usual lattice discretizations --
because here we keep track of the exact continuum differential
operators.
This is manifest in the translation operator, which results from a
commutator of field variations: in the perfect lattice
formulation, we obtain
the consistently blocked continuum translation operator.
Therefore the algebra with the field variations closes.
Moreover, in the perfect lattice formulation, the fermionic and
scalar dispersion relation coincide automatically.
The next step is
the inclusion of the gauge interaction; this work is in progress.
A consistent blocking of the gauge fields from the continuum
leads to a perfect action with all the continuum symmetry properties
in the observables -- and also in the action, if the transformation
term respects these symmetries -- but this construction can only
be performed perturbatively. For asymptotically
free theories in the massless case, a classically
perfect action -- constructed by simplified inverse blocking
(based on minimization) --
is sufficient for the same purpose and enables the step beyond
perturbation theory.\\
{\em Acknowledgment} \ I am indebted to M. Peardon for many helpful
discussions. I also thank him and I. Montvay for reading
the manuscript.
\section{\@startsection{section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\reset@font\Large\scshape}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\reset@font\large\slshape}}
\makeatother
\def\ap#1#2#3 {{Ann.\ Phys.\ (NY)}\ #1 (#2) #3}
\def\ib#1#2#3 {ibid., #1 (#2) #3}
\def\np#1#2#3 {{Nucl.\ Phys.}\ #1 (#2) #3}
\def\pl#1#2#3 {{Phys.\ Lett.}\ #1 (#2) #3}
\def\prep#1#2#3 {{Phys.\ Rep.}\ #1 (#2) #3}
\def\prev#1#2#3 {{Phys.\ Rev.}\ #1 (#2) #3}
\def\prl#1#2#3 {{Phys.\ Rev.\ Lett.}\ #1 (#2) #3}
\def\zp#1#2#3 {{Z.\ Phys.}\ #1 (#2) #3}
\newcommand\thi{\ensuremath{\theta_1}}
\newcommand\thii{\ensuremath{\theta_2}}
\newcommand\phii{\ensuremath{\phi_1}}
\newcommand\phiii{\ensuremath{\phi_2}}
\newcommand\Sphi{\ensuremath{\sin\frac{\phi}{2}}}
\newcommand\SPhi{\ensuremath{\sin\frac{\Phi}{2}}}
\newcommand\Sphii{\ensuremath{\sin\frac{\phi_1}{2}}}
\newcommand\Sphiii{\ensuremath{\sin\frac{\phi_2}{2}}}
\newcommand\Cphi{\ensuremath{\cos\frac{\phi}{2}}}
\newcommand\CPhi{\ensuremath{\cos\frac{\Phi}{2}}}
\newcommand\Cphii{\ensuremath{\cos\frac{\phi_1}{2}}}
\newcommand\Cphiii{\ensuremath{\cos\frac{\phi_2}{2}}}
\newcommand\half{\ensuremath{\frac{1}{2}}}
\newcommand\Tr{\operatorname{Tr}}
\newcommand\sgn{\operatorname{sgn}}
\begin{document}
\thispagestyle{empty}
\begin{center}
July 1998 \hfill \large IFUP-TH 27/98 \\
\hfill hep-lat/9807019
\end{center}
\vspace{1cm plus 1fil}
\begin{center}
\textbf{\huge Wilson loop distributions, higher representations and
centre dominance in SU(2)}
\vspace{1cm plus 1fil}
{\large
{\bfseries P.W. Stephenson \\}
Dipartimento di Fisica dell'Universit\`a and INFN, \\
I-56100 Pisa, Italy \\
Email: \texttt{pws@ibmth.df.unipi.it}}
\vspace{1cm plus 1fil}
\textbf{\Large Abstract}
\end{center}
\noindent To help understand the centre dominance picture of
confinement, we look at Wilson loop distributions in pure SU(2)
lattice gauge theory. A strong coupling approximation for the
distribution is developed to use for comparisons. We perform a
Fourier expansion of the distribution: centre dominance here
corresponds to suppression of odd terms beyond the first. The Fourier
terms correspond to SU(2) representations; hence Casimir scaling
behaviour leads to centre dominance. We examine the positive
plaquette model, where only thick vortices are present. We show that
a simple picture of random, non-interacting centre vortices gives a
string tension about 3/4 of the measured value. Finally, we attempt
to limit confusion about the adjoint representation.
\vspace{1cm}
\noindent PACS codes: 11.15.Ha, 12.38.Aw, 12.38.Gc
\noindent Keywords: SU(2), lattice, confinement, centre vortices,
representations
\newpage
\section{Introduction}
Recently the idea, originally proposed some time
ago~\cite{Co79,Ho79,MaPe79}, that confinement in gauge theories can be
considered as an effect related to the centre Z($N$) of the gauge
group SU($N$) has been undergoing a
revival~\cite{DDFa97a,DDFa97b,DDFa98,KoTo97}. Not the least of this
has been the demonstration~\cite{KoTo97} that if one replaces the
values of Wilson loops with their signs alone (i.e.\ the centre Z(2))
the heavy quark potential in SU(2), which is now very accurately
known~\cite{BaSc95}, can be essentially \textit{completely}
reproduced. This is a spectacular --- and gauge invariant ---
demonstration that something is right about this hypothesis; it is,
however, far from a demonstration that centre vortices are responsible
for confinement.
The key part of the argument for Z(2) to be an ingredient in
confinement is that there are `thick' centre vortices, associated with
the quotient group $\mathrm{SU}(2)/\mathrm{Z}(2) \cong
\mathrm{SO}(3)$, which pierce Wilson loops; their physical effect
depends on how many vortices, modulo 2, pierce a given area. There
are also `thin' vortices associated with Z(2), which appear as chains
of negative plaquettes; to create these requires an action
proportional to the number of flipped plaquettes, so these will not
survive in the continuum limit. By contrast, the thick vortices show
up only in larger Wilson loops, which may be negative while still
surrounding only positive plaquettes, and do survive in the continuum
limit; they are topological in nature, related to the fact that
SU(2)/Z(2) is not simply connected and indeed has fundamental group Z(2).
(For simplicity, we have here ignored hybrid vortices, which combine
the two effects.) We shall return to the distinction between Z(2) and
SU(2)/Z(2) effects towards the end of the article because, as
emphasised in ref.~\cite{KoTo97} where this mechanism is described in
more detail, it is important and has caused much confusion.
We have specialised to SU($N$) for $N=2$ since the arguments are
expected to extend to higher $N$ via the centre Z($N$), though of
course this needs to be checked explicitly. We are also ignoring the
effects of fermions in the vacuum.
\begin{figure}
\begin{center}
\psfig{file=wl_dist.ps,width=3.7in,angle=270}
\end{center}
\caption{Monte Carlo distribution of Wilson loops on a $12^4$
lattice at $\beta=2.5$, showing also the limit of the distribution
for large loops. The area under each curve is normalised to
unity. On this scale errors are negligible.}
\label{fig:wl_dist}
\end{figure}
Because Z(2) commutes with all elements of the gauge group, the sign
of any Wilson loop in the fundamental representation of SU(2) must be
determined by the combined effect of thin and thick vortices: the
total of these objects passing through the loop determines whether the
loop is positive (even number of vortices) or negative (odd number).
In an attempt to see how this appears, we can look at the distribution
of the trace of the Wilson loop, as generated by a Monte Carlo
simulation. Note we are looking at the distribution of individual,
not average, Wilson loops. This was first looked at, in the case of
the plaquette, in refs.~\cite{MaPo82,BeMa84}, where it is referred to
as the `spectral density', and the point was made that it contains
information about all representations. One such set, taken from
10,000 configurations of a $12^4$ lattice at $\beta=2.5$, is shown in
fig.~\ref{fig:wl_dist}. It is clear that it changes smoothly from the
highly asymmetric shape of the plaquette action to some limiting
distribution for large loops. It is less clear how the Wilson loop
expectation value $\langle W_0\rangle$, which is the first moment of
the distribution, behaves; nor is it clear where the special role of
the sign of the loop comes from.
This paper attempts to shed some light on these matters. As the
behaviour of the distribution is largely unfamiliar territory, we
first perform a calculation using a simple approximation: we are not
able to calculate the raw distribution, but we are able to see how it
develops for larger loops. The expectation value in this
approximation has the leading order strong coupling behaviour.
We shall proceed as follows: first (section 2) we shall give a few
basic home truths about Wilson loops. Next (section 3), we shall look
at the approximation that all loops are uncorrelated, corresponding to
strong coupling; in section 4 we discuss the centre dominance picture
in the same spirit, where we also explicitly show the connection
between the shape of the distribution, irreducible representations of
SU(2) and the phenomenon of centre dominance. Then we make some
remarks about centre-projected vortices, the positive plaquette model,
the adjoint representation, and show how a gas of non-interacting Z(2)
vortices gives exact area law confinement. In the last section we
summarise these results concisely.
Section 3 can safely be missed out by anyone not interested in the
details of the distributions of uncorrelated Wilson loops. Of the
formalism in that section we shall only use the formula
relating the distributions for loops of areas $A$ and $2A$,
eqn.~(\ref{eq:fmult}), and its extension to arbitrary multiples of
$A$. The main physical discussion is in section 4.
\section{Wilson loop distributions}
We shall start by assuming the standard Wilson lattice gauge theory in
SU(2). We parametrise an open Wilson loop (i.e.\ the SU(2)
element representing the loop, where no trace has been taken) as
\begin{equation}
W \equiv W_0 \mathbf{1}_2 + i\mathbf{\sigma}.\mathbf{W},
\qquad W_0^2 + W_1^2 + W_2^2 + W_3^2 = 1.
\end{equation}
We now consider the distribution, the word being used in the sense of
a Monte Carlo calculation: we have a large number of configurations,
and consider the spread of values for all Wilson loops of a given
shape on all lattices. Of course, we could equally well talk in terms
of contributions to the path integral, where, because of the
importance sampling built into the Monte Carlo simulations, the
exponential factor is included and there is a uniform measure; this is
entirely equivalent. However, the statistical language fits well here
where we use actual Monte Carlo data for comparison. The distribution
contains all the gauge invariant information about the loops,
including the expectation values of the loops in all representations.
Indeed, in a quantum theory it is natural to consider this
distribution as the basic quantity.
The first matter of interest is the limiting large loop distribution
apparent in fig.~\ref{fig:wl_dist}. This can easily be calculated.
Given the short range behaviour of the force, for sufficiently large
loops different parts of the loop are physically unconnected with one
another. The lack of any overall correlation means the loops are
random: the distribution corresponds to a random walk by $W_i$ over
the 3-sphere of the gauge manifold, so we simply
need to calculate the fraction of the surface available at a given
$W_0$ with the other co-ordinates fixed. The probability that $W_0$
lies in a certain range is
\begin{equation}
\rho_l(W_0)\,dW_0 = S_0 ds_0 / S,
\end{equation}
where here, as throughout, we use the symbol $\rho$ for the
distribution and the suffix $l$ indicates the limiting distribution,
$S_0$ is the 2-surface for a given $W_0$, while between $W_0$ and
$W_0+dW$ a distance $ds_0$ is covered on the surface of the 3-sphere
with the other co-ordinates held constant, and $S$ is the total
surface area of the 3-sphere. Basic geometry gives
\begin{equation}
\rho_l(W_0)\,dW_0
= \frac{4\pi(1-W_0^2)}{2\pi^2}\frac{dW_0}{(1-W_0^2)^{1/2}}
= \frac{2}{\pi}(1-W_0^2)^{1/2}dW_0.
\label{eq:limiting}
\end{equation}
This is the lowest curve in fig.~\ref{fig:wl_dist} and clearly fits
the part. Indeed, loops larger than $4\times4$ are so close to this
that they have been omitted for clarity.
Later, we will see that this curve corresponds to the identity
representation of SU(2); it will then be obvious that convergence
to this limit is inevitable provided that all $j=1/2$ and higher
representation Wilson loops decay monotonically.
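The limiting distribution (\ref{eq:limiting}) can be reproduced by drawing
Haar-random SU(2) elements, i.e.\ uniform points on the 3-sphere. The short
Python check below (sample size, bin count and seed are arbitrary choices of
ours) also uses the fact that the second moment of the semicircle law is
exactly $1/4$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Haar-random SU(2): normalise 4d Gaussian vectors -> uniform points on S^3
q = rng.normal(size=(200_000, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)
w0 = q[:, 0]                                  # W_0 = (1/2) tr W

# compare the histogram with rho_l(W_0) = (2/pi) sqrt(1 - W_0^2)
edges = np.linspace(-1.0, 1.0, 41)
hist, _ = np.histogram(w0, bins=edges, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
rho_l = (2.0 / np.pi) * np.sqrt(1.0 - centres**2)
max_dev = np.max(np.abs(hist - rho_l))

# second moment of the limiting (semicircle) distribution is exactly 1/4
m2 = np.mean(w0**2)
```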
The loop distribution in figure~\ref{fig:wl_dist} is smooth and we
would expect to be able to expand it in a Fourier series. We
adopt the parametrisation $W_0\equiv \Sphi$, where $-\pi\le \phi\le\pi$,
so that the terms are orthogonal over the range of values of $W_0$.
However, our expansion is not quite standard; we use our freedom to
expand the even and odd parts separately to write,
\begin{equation}
\rho_A(W_0) \equiv \rho_A(\Sphi) =
\sum_{n=1}^{\infty} (a_n\cos (n-\half)\phi + b_n\sin n\phi)
\label{eq:fourier}
\end{equation}
as the distribution of $W_0$ at loop area $A$.
The difference from the usual Fourier series is
that this form satisfies the boundary condition that the function
vanishes at $\phi=\pm\pi$. The basis corresponds to the
eigenfunctions of the self-adjoint linear equation
\begin{equation}
\frac{d^2\rho}{d\phi^2} + \nu^2\rho = 0
\end{equation}
with those boundary conditions, so that orthogonality (which may
easily be checked) and completeness are guaranteed by Sturm-Liouville
theory. We will later show that there is a one-to-one correspondence
between these terms and the expectation value in irreducible
representations of the gauge group; this is why we have
picked this form. This means that any distribution which does not
vanish at the boundary does not have an interpretation in terms of the
underlying group theory. The large area limit is already visible as the
$a_1$ term.
As a final piece of preparatory formalism, we also calculate the
average Wilson loop in the fundamental representation from the Fourier
series; this is
\begin{multline}
\langle W_0\rangle = \int_{-1}^1\rho_A(W_0)W_0dW_0
= \int_{-\pi}^{\pi}\rho_A(W_0)\Sphi\,\Cphi\,d\phi/2 \\
= (1/4)\int_{-\pi}^{\pi}(b_1\sin\phi)\sin\phi\,d\phi
= \frac{\pi b_1}{4},
\label{eq:w0b1}
\end{multline}
since by orthogonality only the first odd term in the expansion survives.
Later we will generalise this formula to all representations.
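Both the orthogonality of this mixed basis and the moment formula
(\ref{eq:w0b1}) are easy to verify numerically. In the Python sketch below we
take an arbitrary (not necessarily normalised) combination of basis terms with
$b_1=0.2$; the coefficient values and grid size are our own choices:

```python
import numpy as np

phi = np.linspace(-np.pi, np.pi, 20_001)

def trapz(y, x):
    """Composite trapezoid rule on a uniform grid."""
    h = x[1] - x[0]
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

# the mixed Fourier basis of eq. (fourier): cos((n-1/2)phi) and sin(n phi)
basis = [lambda p, n=n: np.cos((n - 0.5) * p) for n in (1, 2, 3)] \
      + [lambda p, n=n: np.sin(n * p) for n in (1, 2, 3)]
gram = np.array([[trapz(f(phi) * g(phi), phi) for g in basis] for f in basis])
# gram should be pi times the identity: orthogonal basis, each norm^2 = pi

# a test distribution with a_1 = 0.3, b_1 = 0.2, b_2 = 0.1
rho = 0.3 * np.cos(0.5 * phi) + 0.2 * np.sin(phi) + 0.1 * np.sin(2 * phi)

# <W_0> = int rho W_0 dW_0 with W_0 = sin(phi/2), dW_0 = (1/2) cos(phi/2) dphi
w0_avg = trapz(rho * np.sin(0.5 * phi) * 0.5 * np.cos(0.5 * phi), phi)
# this equals pi * b_1 / 4, independently of the other coefficients
```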
\section{The uncorrelated loop approximation}
\label{sec:uncor_loop}
We now claim that due to gauge invariance we can always write an
individual Wilson loop (the qualification is important)
chosen from the distribution as
\begin{equation}
W \equiv W_0 \mathbf{1}_2 + i\sqrt{1 - W_0^2}\,\mathbf{\sigma}.\mathbf{e_r},
\label{eq:wdist}
\end{equation}
where $\mathbf{e_r}$ is a unit vector in a random direction in the
3-space inhabited by $\mathbf{W}$. This is essentially a
consequence of Elitzur's theorem. To see this, consider any Wilson
loop of a given shape with its opening at a particular point on a
lattice. If the distribution of $\mathbf{W}$ is not as shown, then
there must be some preferred direction(s) for $\mathbf{W}$. However,
we can perform a local gauge transformation at the site of the
opening. It is easy to see that this leaves $W_0$ invariant, while
rotating $\mathbf{W}$ to an arbitrary angle. As we have complete
freedom to do this, with the single exception that rotations of Wilson
loops with their openings on the same site are correlated, we can move
the preferred direction of $\mathbf{W}$. (We can avoid the exception
here by simply considering only sets of translated Wilson loops
without rotations, which gives a perfectly valid distribution.) Thus
there is no preferred direction after all, and the distribution is the
one claimed.
\begin{figure}
\begin{center}
\psfig{file=wl_mult.eps,angle=270,width=4in}
\end{center}
\caption{Multiplication of adjacent Wilson loops. We can move the
opening using a gauge transformation and hence create loops of
any size in this fashion.}
\label{fig:wl_mult}
\end{figure}
We now take two Wilson loops, $W^1(A)$ and $W^2(A)$, of the same
shape and area $A$, but chosen in such a way that their opening lies on
the same site, fig.~\ref{fig:wl_mult}. It will also be convenient to
choose rectangular loops with a side in common traversed in such a way
that this cancels out in the product of the loops. Thus we produce a
Wilson loop of twice the area. The $W_0$ element of this loop, by the
standard SU(2) multiplication rule, is:
\begin{equation}
W_0(2A) = W_0^1(A)W_0^2(A) -
(1-{W_0^1(A)}^2)^{1/2}(1-{W_0^2(A)}^2)^{1/2} \cos\xi,
\label{eq:wmult}
\end{equation}
where $\cos\xi \equiv \mathbf{e}_r^1.\mathbf{e}_r^2$; $\xi$ is the
angle between the two unit three vectors in the
directions $\mathbf{W}^1(A)$ and $\mathbf{W}^2(A)$.
The correlations between the diagonal elements $W_0^i(A)$ in
eqn.~(\ref{eq:wmult}) together
with the angle $\xi$ contain all the information about correlations
between adjacent loops; without them we simply have loops from a random
distribution multiplied together. If we were to go on and consider
larger loops still, we would have to consider the cumulative effect of
these correlations on the larger loops, and the expressions would
rapidly become unmanageable.
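The composition rule (\ref{eq:wmult}) is nothing but SU(2) matrix
multiplication in the parametrisation above, and can be checked directly with
Pauli matrices. In this Python sketch the particular values of $W_0^i$, the
random directions and the helper names are arbitrary illustrative choices:

```python
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def su2(w0, e):
    """W = W_0 1 + i sqrt(1 - W_0^2) sigma.e for a unit 3-vector e."""
    return (w0 * np.eye(2)
            + 1j * np.sqrt(1 - w0**2) * np.einsum('i,ijk->jk', e, sigma))

rng = np.random.default_rng(1)
def rand_dir():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

w1, w2 = 0.73, -0.41                  # arbitrary diagonal parts W_0^1, W_0^2
e1, e2 = rand_dir(), rand_dir()

prod = su2(w1, e1) @ su2(w2, e2)      # the loop of area 2A
lhs = 0.5 * np.trace(prod).real       # its W_0
rhs = w1 * w2 - np.sqrt(1 - w1**2) * np.sqrt(1 - w2**2) * np.dot(e1, e2)
```

`lhs` and `rhs` agree to machine precision, and the product is again in SU(2).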
Instead, we make the approximation that both the angular and the
diagonal correlations are negligible for loops larger than some area
$A=A_d$. We shall later see explicitly that this gives the leading
strong coupling value for $\langle W_0\rangle$, although in our case
expanded in terms of smaller Wilson loops rather than the coupling
itself. That such behaviour is expected is due to the random nature
of the choice of loops from the distribution.
A random distribution for $\cos\xi$ is simply a flat distribution,
which follows from the fact that the usual measure for the polar angle
on a 2-sphere (here the one containing $\mathbf{e}_r^i$) is
$\sin\xi\,d\xi=d\cos\xi$. We are assuming, however, that the loop
distribution $\rho(W_0)$ itself has not reached the large area limit.
The formula (\ref{eq:wmult}) can now be used to multiply loops of
different sizes, and we can build up larger Wilson loops by repeated
application. Moving the opening in the Wilson loop simply corresponds
to a gauge transformation, leaving the only remaining variable $W_0$
invariant, so there is no difficulty in constructing Wilson loops of
any area which is a multiple of $A_d$ by this method.
Our procedure is to take the measured Monte Carlo distribution of
$W_0$ on loops of area $A=A_d$ as our input and hence use the
approximation to calculate the distribution of Wilson loops for larger
areas. In our examples, we shall in fact start from the plaquette,
i.e.\ take $A_d=1$ in lattice units and the measured plaquette
distribution, but this is not forced on us. This method is obviously
something of a hybrid, since the initial distribution includes all
correlations, but it will allow us to perform explicit calculations
using the loop distribution. Further, we will be able to predict the
behaviour, starting from the appropriate expectation value for the
plaquette, in every representation of the gauge group.
\subsection{Calculation of the product distribution}
Suppose that the probability distribution of $W_0$ for loops of the
initial size $A_d$ is given by $\rho_{A_d}(W_0)dW_0$, normalised so
that the integral between $W_0=-1$ and 1 is unity. To produce the
distribution of larger loops we must integrate over the smaller ones,
using the equation~(\ref{eq:wmult}) as a constraint. It will be
helpful at this point to change to angular variables: take
$W_0(2A_d)\equiv\cos\Theta$, where $0\le\Theta\le\pi$, and likewise
$W_0^i(A_d)\equiv\cos\theta_i$ for $0\le\theta_i\le\pi$. The new
distribution is given by
\begin{equation}
\rho_{2A_d}(W_0) dW_0 = \frac{1}{2}\iiint
\rho_{A_d}(\cos\thi)\rho_{A_d}(\cos\thii)\sin\thi d\thi
\sin\thii d\thii\sin\xi d\xi,
\label{eq:intprod}
\end{equation}
where the extra factor of $1/2$ in eqn.~(\ref{eq:intprod}) has appeared
because in covering the full range of $\thi$ and $\thii$ we
cover the $\Theta$ range exactly twice. This can be seen by
simultaneously exchanging $\thi\leftrightarrow\pi-\thi,
\thii\leftrightarrow\pi-\thii$ in eqn.~(\ref{eq:constraint}), which
leaves $W_0$ invariant.
The range in this notation is taken from eqn.~(\ref{eq:wmult})
which becomes
\begin{equation}
W_0 \equiv \cos\Theta = \cos\thi\cos\thii - \sin\thi\sin\thii\cos\xi.
\label{eq:constraint}
\end{equation}
Note that with the correct normalisation of $\rho_{A_d}(\cos\theta)$,
the new distribution will also be correctly normalised, i.e.
$\int\rho_{2A_d}(W_0) dW_0=1$, regardless of the form of $\rho$; this
is nothing more than a basic consistency condition for the integrals
we are performing.
We remove $d\xi$ from eqn.~(\ref{eq:intprod}) using
eqn.~(\ref{eq:constraint}), $\sin\thi\sin\thii\sin\xi d\xi = dW_0$,
giving
\begin{equation}
\rho_{2A_d}(\cos\Theta) = \frac{1}{2}\iint_\mathrm{constraint}
\rho_{A_d}(\cos\thi)\rho_{A_d}(\cos\thii) d\thi d\thii
\end{equation}
with the constraint of eqn.~(\ref{eq:constraint}).
The considerable simplification due to the approximation that
$\cos\xi$ has a random distribution is now evident.
The explicit evaluation of the integral is rather uninteresting. We
shall simply point out that it is easiest to change variables to the
$\phi$ used in the Fourier series (\ref{eq:fourier}), so that $W_0
\equiv \cos\theta \equiv \Sphi$. We then insert the Fourier series
for the initial distribution $\rho_{A_d}$ into the integral, and after
some even more tedious algebra best left to computers we obtain,
\begin{equation}
\rho_{2A_d}(\SPhi) =
\sum_{n=1}^\infty\frac{(-1)^{n-1}\pi}{4}
\left(\frac{a_n^2}{n-\half}\cos(n-\half)\Phi
+ \frac{b_n^2}{n}\sin n\Phi \right).
\label{eq:fmult}
\end{equation}
Both odd and even sets of terms remain separate; as we have already
hinted, this is no coincidence.
The relationship between $b_1$ and the loop expectation value was
given in eqn.~(\ref{eq:w0b1}). We now read off the coefficient of
$\sin \Phi$ which is $\pi b_1^2/4$, hence the new expectation value is
$(\pi b_1/4)^2$. Because the terms in the expansion remain separate,
we can immediately extend the result to N multiplications and a loop
size $A\equiv N+1$,
\begin{equation}
\langle W_0(A)\rangle = (\pi b_1/4)^{1+N} = \langle W_0(1)\rangle^{A},
\label{eq:arealaw}
\end{equation}
an area law behaviour with string tension $-\log\langle W_0(1)\rangle$.
Eqn.~(\ref{eq:arealaw}) is equivalent to the lowest order strong
coupling result, even though in this case no mention of the coupling
has been made; indeed, it is renormalised due to the fact that we
start from the physical plaquette rather than the strong coupling
prediction, although the behaviour for larger loops in terms of the
plaquette is the same.
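As a cross-check of the area law~(\ref{eq:arealaw}), the following Python sketch (an added illustration, not the calculation performed in this paper; it replaces the plaquette distribution by a single fixed class angle $\theta_0$) multiplies SU(2) class elements with Haar-random relative orientations, mimicking uncorrelated loops. By character orthogonality the expected half-trace of the product is the product of the individual expectations, i.e.\ it falls exponentially in the number of factors:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_su2(n):
    # Haar-random SU(2) matrices from uniformly distributed unit quaternions
    q = rng.normal(size=(n, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    a, b, c, d = q.T
    U = np.empty((n, 2, 2), dtype=complex)
    U[:, 0, 0] = a + 1j*b
    U[:, 0, 1] = c + 1j*d
    U[:, 1, 0] = -c + 1j*d
    U[:, 1, 1] = a - 1j*b
    return U

def mean_half_trace(theta0, nfactors, nsamples=200_000):
    # Multiply nfactors copies of the class element diag(e^{i th0}, e^{-i th0}),
    # each conjugated by an independent Haar-random rotation: this mimics
    # multiplying uncorrelated loops with random relative orientations.
    D = np.diag([np.exp(1j*theta0), np.exp(-1j*theta0)])
    prod = np.tile(np.eye(2, dtype=complex), (nsamples, 1, 1))
    for _ in range(nfactors):
        R = haar_su2(nsamples)
        prod = prod @ R @ D @ np.conj(np.swapaxes(R, 1, 2))
    return 0.5*np.einsum('nii->n', prod).real.mean()

# area law: the mean half-trace of the product approaches cos(theta0)**nfactors
w3 = mean_half_trace(0.5, 3)
print(w3, np.cos(0.5)**3)
```

With $\theta_0=0.5$ and three factors, the Monte Carlo mean agrees with $\cos^3(0.5)$ within statistical errors, i.e.\ the single-factor expectation simply gets raised to the power of the number of factors.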
\begin{figure}
\begin{center}
\psfig{file=compare.ps,angle=270,width=4in}
\end{center}
\caption{Comparison of Wilson loop distributions from Monte Carlo
and those from the uncorrelated loop approximation (where the
noise is clearly visible). The measured plaquette distribution
has been used as a starting point for the latter.}
\label{fig:wl_compare}
\end{figure}
In fig.~\ref{fig:wl_compare}, we compare distributions from
uncorrelated loops with the measured values taken from
fig.~\ref{fig:wl_dist}; as already stated, we have started from the
measured plaquette distribution to generate the uncorrelated loop
distributions. The multiplications in this case were done by Monte
Carlo integration of the rule~(\ref{eq:intprod}), which was rather more
convenient than applying the analytical formulae, thus accounting for
the noise in this distribution.
Not surprisingly, we immediately see the measured shape start to
diverge from the uncorrelated loop shape, so it is not at all clear
that this simple approximation is useful in explaining the physics.
However, later we will see that the terms in eqn.~(\ref{eq:fmult})
correspond to irreducible representations of SU(2) and that the
individual terms separately obey an area law. Thus the calculation
has the useful effect of demonstrating the leading strong coupling
behaviour in all representations.
\section{Centre dominance}
As explained in the introduction, the centre dominance picture applied
directly to Wilson loops has been shown to be quantitatively
successful, indeed in very good agreement with the full
values~\cite{KoTo97}. In this approach any Wilson loop obtained from
Monte Carlo results in SU(2) is approximated by its sign. It should
be pointed out that in a weak sense it is not surprising that this
reflects aspects of the dynamics: if one divides $\rho(W_0)$ into odd
and even parts, only the odd parts can contribute to $\langle
W_0\rangle$ because that comes from the first moment of the
distribution. We shall return to this point below. The result that
the sign \textit{alone} contains the essential dynamics is stronger
and deserves examination.
In terms of the distribution, the centre dominance picture requires
that we count $-1$ for a loop with $-\pi\le\phi\le0$ and $+1$ for
$0\le\phi\le\pi$ with $W_0\equiv\sin(\phi/2)$, i.e.\ a step
distribution. We use the Fourier expansion of the Wilson loop
distribution in eqn.~(\ref{eq:fourier}) and obtain for the expectation
value in this picture,
\begin{multline}
\langle W_0^{Z(2)}\rangle = \int_0^\pi\rho\Cphi \frac{d\phi}{2} -
\int_{-\pi}^0 \rho\Cphi \frac{d\phi}{2} \\
= \frac{1}{2}\sum_n\left(\int_0^\pi b_n\sin n\phi\Cphi d\phi
- \int_{-\pi}^0b_n\sin n\phi\Cphi d\phi\right)
= \sum_n \frac{n b_n}{n^2-1/4}.
\label{eq:cdexpn}
\end{multline}
The series here is to be compared with the exact value $\pi b_1/4$.
The important difference is the presence of all the higher $b_n$, with
decreasing coefficients, in the centre dominance formula; without
these the results would differ only by an overall factor of $3\pi/16$,
which cancels in the ratios required for the heavy quark potential
(see eqn.~(\ref{eq:potdef}), below). Why eqn.~(\ref{eq:cdexpn})
nevertheless gives the correct value can therefore be clarified by
looking at the values of $b_n$ in the real distribution.
Some support for this result is given by the uncorrelated loop
picture, eqn.~(\ref{eq:fmult}), where in doubling the area of a loop
the new coefficients $b_n$ are suppressed by a factor $1/n$, which in
combination with the factor $n/(n^2-1/4)$ in eqn.~(\ref{eq:cdexpn})
shows that higher terms become progressively less important both for
large $A$ and for large $n$. If the other $b_n$ are already smaller
than $b_1$ for small loops this effect is enhanced because of the
powers of $b_n$ involved.
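The elementary integrals behind eqn.~(\ref{eq:cdexpn}) are easy to confirm numerically. The following Python fragment (added purely as a sanity check) verifies $\int_0^\pi \sin n\phi\cos(\phi/2)\,d\phi = n/(n^2-1/4)$ by the midpoint rule, and evaluates the constant $3\pi/16$ relating the $b_1$-only Z(2) value $4b_1/3$ to the exact loop $\pi b_1/4$:

```python
import numpy as np

# midpoint rule for  int_0^pi sin(n phi) cos(phi/2) dphi  =  n/(n^2 - 1/4)
npts = 200_000
phi = (np.arange(npts) + 0.5)*np.pi/npts
dphi = np.pi/npts
for n in range(1, 6):
    val = np.sum(np.sin(n*phi)*np.cos(phi/2))*dphi
    assert abs(val - n/(n**2 - 0.25)) < 1e-6

# keeping only b_1: Z(2) value 4 b_1/3 versus exact pi b_1/4
print((np.pi/4)/(4/3), 3*np.pi/16)   # the same constant, 3 pi/16
```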
We have calculated the coefficients $b_n$ by binning Wilson loop data
from Monte Carlo simulations and performing the integral for the
Fourier coefficients,
\begin{equation}
b_n = \frac{1}{\pi}\int_{-\pi}^\pi\rho\sin n\phi\,d\phi,
\end{equation}
numerically on the data at the end of the run. Errors have been estimated
by averaging over values of $b_n$ obtained in this way from several
runs. The results here use 500 bins; we have compared this with the
result from 200 bins in order to check that the
discretisation error due to the finite number of bins is small. This
error increases with the order $n$, as is usual when sampling higher
frequencies.
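A minimal version of this binning estimator can be sketched in Python (a toy illustration on a synthetic distribution, not the Monte Carlo data analysed here):

```python
import numpy as np

rng = np.random.default_rng(1)

def fourier_b(phi, nmax, nbins=500):
    # b_n = (1/pi) int rho(phi) sin(n phi) dphi, from binned samples of phi
    hist, edges = np.histogram(phi, bins=nbins, range=(-np.pi, np.pi),
                               density=True)
    centers = 0.5*(edges[:-1] + edges[1:])
    dphi = edges[1] - edges[0]
    return np.array([np.sum(hist*np.sin(n*centers))*dphi/np.pi
                     for n in range(1, nmax + 1)])

# toy check: rejection-sample phi from rho(phi) = (1 + sin phi)/(2 pi),
# for which b_1 = 1/(2 pi) and all higher b_n vanish
cand = rng.uniform(-np.pi, np.pi, 2_000_000)
phi = cand[rng.uniform(0.0, 2.0, cand.size) < 1.0 + np.sin(cand)]
b = fourier_b(phi, 3)
print(b, 1/(2*np.pi))
```

The discretisation bias from 500 bins is far below the statistical error here, consistent with the comparison against 200 bins described in the text.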
\begin{figure}
\begin{center}
\psfig{file=fc_against_n.ps,width=4in}
\end{center}
\caption{The ratios of Fourier coefficients $\lvert b_n/b_1\rvert$
plotted against $n$ for the three smallest loop areas. The
squares show the same values in the uncorrelated loop approximation.}
\label{fig:fc_against_n}
\end{figure}
In fig.~\ref{fig:fc_against_n} we show $\lvert b_n/b_1\rvert$ for the
three smallest areas (given in terms of the lattice spacing $a$)
plotted against $n$ on a logarithmic scale: $\lvert b_4/b_1\rvert$ at
area three is omitted because of large errors. The data is from a
$12^4$ lattice at $\beta=2.5$. Plotted in this fashion, the data
shows two things. First, $b_n$ drops off faster for larger $n$.
Second, the rate at which this happens increases for larger loops, as
witnessed by the increased slope. Even by area $3a^2$ the value for
$n=3$ is down by a factor of over a hundred relative to $n=2$.
Thus the $b_1$ term is the only one contributing to the centre
dominance result, eq.~(\ref{eq:cdexpn}), for loops of even moderate
size and hence the result is the same, up to a trivial normalisation
factor, as that from the full Wilson loop for all medium and large
loops.
In fig.~\ref{fig:fc_against_n}, we have also shown the ratios for
$n=2$ and 3 using the uncorrelated loop approximation, for comparison.
As we have used the value in the plaquette distribution as the
starting point for the higher coefficients, the points for area $1a^2$
are chosen to be the same as in the full case. However, the trend
thereafter is similar, too. This trend has two sources: first, the
intrinsic behaviour of the model seen above, where higher $b_n$ are
more strongly suppressed in larger loops; secondly, the fact that the
initial coefficients $b_n$ for higher $n$ are already smaller, so that
powers of higher $b_n$ disappear faster. Only the first of the two is
an explicit prediction of the model.
\subsection{Behaviour in higher representations}
It turns out we are able to explain the results in the previous
section as the effect of higher representation loops. As an
alternative to eqn.~(\ref{eq:cdexpn}), where we expanded the
expectation value in the centre dominance picture in terms of Fourier
coefficients, we could equally well have performed an expansion in
terms of the characters of various representations; our conclusion
would then have been that only the lowest half-odd-integer
representation contributed for large loops. In fact,
the expansions are the same up to constant factors.
To show this, we shall define the character of the
representations in terms of the corresponding traces of the Wilson
loop as
\begin{equation}
\chi_m(W_0) \equiv (m+1)\Tr_{\frac{m}{2}}W,
\end{equation}
where the representation has isospin $m/2$, so that the $\Tr$ operator
is defined to contain the normalisation factor usually used in Monte
Carlo calculations. The standard recurrence relation between the
characters for SU(2) is
\begin{equation}
\chi_{m+1}(W_0) = \chi_m(W_0)\chi_1(W_0) - \chi_{m-1}(W_0).
\label{eq:charrec}
\end{equation}
We shall now show that using our variable $\phi$, defined in
terms of the fundamental representation as
$\Tr_{1/2}W\equiv W_0 \equiv \Sphi$,
the characters for odd and even $m$ can be written,
\begin{equation}
\begin{aligned}
\chi_{2(n-1)}(W_0) &\equiv
(-1)^{n-1}\frac{\cos(n-\frac{1}{2})\phi}{\Cphi}, \\
\chi_{2n-1}(W_0) &\equiv (-1)^{n-1}\frac{\sin n\phi}{\Cphi}.
\end{aligned}
\label{eq:charphi}
\end{equation}
It is to be assumed that the value of the character at
$\phi=\pm\pi$ is found by taking the limit, although we have already
noted that the distribution must vanish there from group-theoretic
considerations. Note that $\chi_0(W_0) \equiv 1$ and $\chi_1(W_0)
\equiv 2\Sphi$ as expected. Now we must prove that the following
versions of eqn.~(\ref{eq:charrec})
hold,
\begin{equation}
\begin{aligned}
\chi_{2n}(W_0) + \chi_{2n-2}(W_0) &= \chi_{2n-1}(W_0)\chi_1(W_0), \\
\chi_{2n+1}(W_0) + \chi_{2n-1}(W_0) &= \chi_{2n}(W_0)\chi_1(W_0).
\end{aligned}
\end{equation}
This can easily be done by insertion of the formulae~(\ref{eq:charphi})
into the respective left hand sides, whence standard trigonometric
relations give the required results. Hence by induction the
equations~(\ref{eq:charphi}) are true for all $n\ge1$.
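The induction can also be checked numerically. The Python fragment below (illustrative only) builds the characters from the recurrence~(\ref{eq:charrec}), using $\chi_0=1$ and $\chi_1=2\sin(\phi/2)$, and compares them against the closed forms of eqns.~(\ref{eq:charphi}):

```python
import numpy as np

phi = np.linspace(-3.0, 3.0, 601)    # avoid phi = +-pi, where cos(phi/2) = 0
S, C = np.sin(phi/2), np.cos(phi/2)

def chi_closed(m):
    # closed forms: odd m = 2n-1 gives the sin form, even m = 2(n-1) the cos form
    if m % 2:
        n = (m + 1)//2
        return (-1)**(n - 1)*np.sin(n*phi)/C
    n = m//2 + 1
    return (-1)**(n - 1)*np.cos((n - 0.5)*phi)/C

# characters from the recurrence chi_{m+1} = chi_m chi_1 - chi_{m-1}
chi = [np.ones_like(phi), 2*S]
for m in range(1, 8):
    chi.append(chi[m]*2*S - chi[m - 1])

for m in range(9):
    assert np.allclose(chi[m], chi_closed(m))
print("closed forms agree with the recurrence for m = 0..8")
```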
Now we work out the expectation values of the traces using the
distribution. For the representations with $j=n-1/2$, $n\ge1$,
\begin{equation}
\langle\Tr_{n-\half}W\rangle = \frac{1}{2n}
\int_{-\pi}^{\pi} \rho(W_0) \chi_{2n-1}(W_0)\Cphi\frac{d\phi}{2}
= \frac{(-1)^{n-1}\pi b_n}{4n},
\label{eq:repexpect}
\end{equation}
after inserting the Fourier expansion of eqn.~(\ref{eq:fourier}) and
the second of eqns.~(\ref{eq:charphi}).
Likewise for the representations $j=n-1$, $n\ge1$, we have,
\begin{equation}
\langle\Tr_{n-1}W\rangle = \frac{(-1)^{n-1}\pi a_n}{4(n-1/2)},
\end{equation}
where $n=1$ and 2 correspond to the identity and
adjoint representations respectively. Hence, as claimed, there is a
one-to-one correspondence between the Fourier terms and the
representations of the gauge group. (The basic point about the
relationship between the sign of the loop and the character expansion
was made in ref.~\cite{AmGr98}.)
This means that the behaviour of the coefficients seen in
fig.~\ref{fig:fc_against_n} is entirely consistent with the Casimir
scaling hypothesis, recently discussed in the context of centre
dominance~\cite{FaGr97}, which notes that higher representations
$j=m+1/2$ have a string tension roughly proportional to the quadratic
Casimir operator,
\begin{equation}
K_j \approx K_C \times j(j+1).
\label{eq:casscale}
\end{equation}
A larger string tension causes a stronger area law fall off, and hence
from eqn.~(\ref{eq:repexpect}) the higher Fourier coefficients $b_n$ decay
faster in the loop area. Combined with eqn.~(\ref{eq:cdexpn}), it can
be seen that Casimir scaling implies centre
dominance in the $j=1/2$ Wilson loops. This result would be rigorous
if within Casimir scaling one included the assumption that all
loops, not just those in the region of linear confinement, were
suppressed by a similar factor. As the ratio of string
tensions between $j=3/2$ and $j=1/2$ is already 5 in
eqn.~(\ref{eq:casscale}), it is not surprising that centre dominance
holds so well.
Combining these results with the uncorrelated loop calculation in
eqn.~(\ref{eq:fmult}), we see that in the strong coupling limit all
half-odd-integer representations have an area law behaviour similar to
eqn.~(\ref{eq:arealaw}),
\begin{equation}
\langle \Tr_{n-\half}W(A)\rangle = \left(\frac{(-1)^{n-1}\pi
b_n}{4n}\right)^A.
\label{eq:halfreparea}
\end{equation}
Indeed, we have also an area law in the integer representations,
\begin{equation}
\langle \Tr_{n-1}W(A)\rangle = \left(\frac{(-1)^{n-1}\pi
a_n}{4(n-\half)}\right)^A.
\label{eq:wholereparea}
\end{equation}
This is perhaps more of a surprise, as one often encounters the
statement `strong coupling predicts a perimeter law behaviour in the
adjoint representation'. The point is simply that the random
plaquette model we have used sees only the planar contribution to the
adjoint Wilson loop~\footnote{I am grateful to Jeff Greensite for
making this point clear to me.}. The perimeter law term comes from
a closed tube of plaquettes surrounding the loop, to which the planar
plaquettes do not contribute in this approximation. In real loops one
expects both to be present, hence at large $R$ and $T$ the perimeter
law will eventually dominate. We shall later return to the question
of centre dominance in adjoint loops.
\subsection{Centre projection and positive plaquettes}
In the last few years, detailed investigations of the importance of
Z(2) degrees of freedom have been made by projecting the full SU(2)
links to Z(2). One then makes plaquettes from these projected links,
creating so-called projection vortices~\cite{DDFa97a,DDFa97b}; a
planar Wilson loop is simply a product of these over the minimal area.
As we have shown (see for example eqn.~(\ref{eq:w0b1})), any
observable contributing to the Wilson loop expectation value must come
from the odd part of the distribution, so that it changes sign when
the relevant loop does. It is natural to suppose the projection
vortices are related, by some effect of the projection,
to signs of Wilson loops, since that is where the effect of the
centre appears physically. If this is the case, then the observation
that no vortices means no area law is trivial: as we saw in discussing
the loop distributions, no odd-sign behaviour means, in analytical
terms, no contribution to the real expectation value. This does not
mean that the relationship between the vortices and
confining behaviour is trivial --- the claim is that the
physical string tension can be seen at particularly short distances in
centre projection --- merely that the presence of both positive and
negative signs is necessary to observe the dynamics.
To make this a little clearer, consider the graphs showing the
expectation value of Wilson loops separated according to whether they
contain odd or even numbers of projection vortices in, for example,
fig.~8 of ref.~\cite{DDFa97a}: the Wilson loops separated in this way
show no confining behaviour. Indeed, the fact that the Wilson loops
go to zero at all is because the projection vortices do not exactly
correspond to the signs of the full loops: suppose we look instead at
the latter. As we have already seen, the distribution approaches
(\ref{eq:limiting}) for large loops, and if we consider the part of
the distribution with a positive value of the Wilson loop, we can see
that large Wilson loops approach a constant value,
\begin{equation}
\langle \lvert W_{0l}\rvert\rangle = \int_{-1}^1 \lvert W_0\rvert
\rho_l(W_0)dW_0
=\frac{2}{\pi} \int_{-1}^1 \lvert W_0\rvert(1-W_0^2)^{1/2}dW_0
=\frac{4}{3\pi},
\label{eq:evlimit}
\end{equation}
and the negative loops taken alone correspondingly approach the limit
$-4/(3\pi)$. This is a property of the even part of the distribution,
unrelated to the odd part where $\langle W_0\rangle$ is to be found.
(However, the adjoint string tension lives in the even part of the
distribution: see below.)
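The limit in eqn.~(\ref{eq:evlimit}) is easy to confirm by sampling: for a Haar-random SU(2) element the half-trace is distributed exactly according to the limiting semicircle form, so the mean absolute value should reproduce $4/(3\pi)\approx0.424$. A short illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
# uniform points on S^3 give Haar-random SU(2); the half-trace is the first
# quaternion component, whose marginal is the semicircle (2/pi)sqrt(1 - x^2)
q = rng.normal(size=(1_000_000, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)
w0 = q[:, 0]
print(np.mean(np.abs(w0)), 4/(3*np.pi))
```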
The relationship between thick vortices and the sign of the loop is
clearer in the positive plaquette model~\cite{MaPi82}. Here,
plaquettes are constrained to have a positive value: this can be
implemented by a simple accept/reject test in Monte Carlo updating.
This model appears to have all the right properties to be an
alternative lattice regularisation of SU(2)~\cite{FiHe94}. There
are no thin vortices, which are simply lines of negative
plaquettes, so that through the commutativity property of the
centre any Wilson loop with a positive (negative) sign must contain
an even (odd) number of thick vortices.
We have investigated the Wilson loop distribution in this theory, and
show the results in fig.~\ref{fig:wl_ppm_dist}. The coupling here is
$\beta=1.9$, found to be in the scaling region for this
theory~\cite{FiHe94}. The approach to the large loop region is,
unsurprisingly, very similar to that shown in standard SU(2) in
fig.~\ref{fig:wl_dist}.
The sign of the Wilson loop here counts thick vortices exactly and so
the expectation values of Wilson loops with only even or odd numbers
of thick vortices must tend to $\pm 4/(3\pi)$, irrespective of
whether the vortices are responsible for the dynamics, i.e.\ %
irrespective of centre dominance. Our point here is simply that one
\textit{must} consider loops with odd and even numbers of vortices
together, as separately they can have no interpretation in terms of
$\langle W_0\rangle$.
We can also use the positive plaquette model to examine the question
of whether the results from the Z(2) part agree with the full results
when only thick vortices are present, or whether, on the other hand,
both thick and thin vortices are required to produce the full result.
Fig.~\ref{fig:ppm_pot} shows the potential between a heavy quark and
an antiquark calculated from 150 configurations on a $16^4$ lattice at
$\beta=1.8$ for both the full results and those using only the signs
of the Wilson loops. This figure corresponds to figures 7 and 8 of
ref.~\cite{KoTo97} which show results in standard pure SU(2); our
method is the same, i.e.\ $V(R)$ is calculated from
\begin{equation}
V(R) = \lim_{T\to\infty} -\log\frac{W(R,T+1)}{W(R,T)}
\label{eq:potdef}
\end{equation}
with the signal improved by using fuzzing on spatial links only ---
this does not affect the presence of vortices in the $(R,T)$ plane.
To enable a comparison with the standard case we have also made a fit
to the form
\begin{equation}
V(R) = V_0 + KR - e/R;
\end{equation}
no correction for lattice artifacts has been made and
the single correlated fit included all data with $T \ge 3$ and
$R \ge 2.5$ for the full loops only (i.e.\ not using the Z(2)
projected values). We obtained $V_0=0.572(6)$, $K=0.0410(9)$ and $e=
0.261(11)$, corresponding to $\beta$ slightly below 2.5 for standard
SU(2).
It is clear from fig.~\ref{fig:ppm_pot} that, as with standard SU(2)
in ref.~\cite{KoTo97}, the Z(2) part carries the physics; a similar
picture (not shown) holds for $\beta=1.9$ where the curvature is more
pronounced. Thus centre dominance is present across the whole size
range, from the Coulomb to the confining region, and works even with no
thin vortices present. This is something of a relief, as the division
into thin and thick vortices is an artifact of the cut-off. To see
this, consider taking a negative plaquette, i.e.\ part of a thin
vortex, and halving the lattice spacing: on the new lattice, it is no
longer clear whether the loop corresponding to the former plaquette is
(say) part of a thick vortex surrounding four positive plaquettes on
the new lattice, or is negative by virtue of one of those
plaquettes being negative. In other words, the new lattice does not
preserve the division into thin and thick vortices of the old one.
\begin{figure}
\begin{center}
\psfig{file=wl_ppm_dist.ps,width=3.7in,angle=270}
\end{center}
\caption{Monte Carlo distribution of Wilson loops on a $12^4$
lattice at $\beta=1.9$ for the positive plaquette model,
corresponding otherwise to figure~\protect\ref{fig:wl_dist}.}
\label{fig:wl_ppm_dist}
\end{figure}
\begin{figure}
\begin{center}
\psfig{file=ppm_pot_1.8.ps,width=3.7in,angle=270}
\end{center}
\caption{The heavy quark-antiquark potential from the positive
plaquette model at $\beta=1.8$, in lattice units. The dashed line
is a three-parameter fit.}
\label{fig:ppm_pot}
\end{figure}
\subsection{Confinement from non-interacting vortices}
We will show that a random distribution of non-interacting thick
centre vortices, with no other dynamics, can give an area law in SU(2)
with a string tension of about the right magnitude. This is the
simplest possible treatment, although it is very much in the spirit of
Nielsen--Olesen flux vortices~\cite{NiOl73,NiOl79}; random vortices
and their confining nature are not new and were considered for example
in~\cite{Ol82}. However it is now possible to make a quantitative
test of the idea using results from centre vortices. A similar (but
not identical) calculation appeared in ref.~\cite{FaGr97}; the version
here has the advantage that it makes no reference to the lattice, even
if the underlying assumptions are too simplistic for it to be a true
continuum theory.
We assume that the value of a Wilson loop of area $A$ depends only on
the number of vortices inside, i.e.\ $W_0(A) = W_C(-1)^n$, where $n$
vortices pass through the loop $W_0$ and $W_C$ depends only on the
coupling: this is just the centre dominance proposal, which we have
seen works well in practice. Although the assumption in this form does
not fit in with the Wilson loop distributions we have shown, the
proposal is that the sign alone determines the dynamics, so that the
effect of the rest of the distribution on $\langle W_0\rangle$ can be
absorbed into $W_C$.
We also assume that the vortices are independent from one another, so
that any vortex can pass at a random point through a given area
regardless of the presence of another: this second assumption is
clearly an oversimplification, as there is indeed interaction between
the vortices~\cite{EnLa98}. We are also assuming the vortices are
infinitely thin, so that there are no edge effects; this, too, is
unphysical, as we have already stressed that the vortices must be
thick to survive in the continuum limit.
Finally, we suppose that the only relevant physical quantity is the
mean density of vortices passing through unit area, which we denote by
$p_A$. In addition, we invoke translational invariance, so that a
vortex is equally likely to pierce any region of the same area. We are
also essentially ignoring the fact
that the vortices appear from the gauge dynamics.
\begin{figure}
\begin{center}
\psfig{file=rand_vort.eps,angle=270,width=4.5in}
\end{center}
\caption{The random non-interacting vortex model, as described in
the text.}
\label{fig:rand_vort}
\end{figure}
Consider an area very much larger than $A$,
$A' = AN$ for $N\gg 1$, which encloses $A$ and through which
exactly $p_AA'$ vortices are assumed to pass, randomly distributed
across the area: a three-dimensional slice is shown in
fig.~\ref{fig:rand_vort}. For each vortex
independently, there is a probability $1/N$ that it lies inside
$A$. The probability for $n$ of them to lie in the area $A$ is then
given by a binomial distribution,
\begin{equation}
\Pr(\text{$n$ vortices in $A$}) = \binom{p_AA'}{n}\left(\frac{1}{N}\right)^n
\left(1-\frac{1}{N}\right)^{p_AA'-n}.
\end{equation}
The contribution to the Wilson loop is
\begin{equation}
\langle W_0(A) \rangle = W_C \sum_{n=0}^{p_AA'}\binom{p_AA'}{n}(-1)^n
\left(\frac{1}{N}\right)^n\left(1-\frac{1}{N}\right)^{p_AA'-n}
= W_C \left(1 - \frac{2}{N}\right)^{p_AAN}.
\end{equation}
Letting $N\to\infty$ and noting that $(1-2/N)^N \to e^{-2}$, we have
\begin{equation}
\langle W_0(A)\rangle = W_C e^{-2p_AA},
\end{equation}
which is an area law with string tension $K=2p_A$. This result was
obtained in a different approximation by ref.~\cite{DDFa98}; in this case
we stress that its validity is based only on the notion of
randomly distributed, non-interacting and (unfortunately) thin
vortices.
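The binomial construction can be simulated directly; the sketch below (an added illustration, with $W_C$ set to 1) draws vortex counts for a unit-area loop inside a much larger region and compares $\langle(-1)^n\rangle$ with the Poisson-limit prediction $e^{-2p_AA}$:

```python
import numpy as np

rng = np.random.default_rng(3)
p_A, A, N = 1.9, 1.0, 1000            # vortex density (fm^-2), loop area, A'/A
nvort = int(round(p_A*A*N))           # vortices piercing the large area A' = N A
nconf = 400_000
# each vortex lands inside A independently with probability 1/N
n_in = rng.binomial(nvort, 1.0/N, size=nconf)
w = np.mean((-1.0)**n_in)
print(w, np.exp(-2*p_A*A))
```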
A string tension of $K = (440\,\text{MeV})^2$ would hence require
$p_A\approx 2.5\>\text{fm}^{-2}$. Determining this quantity in a gauge
invariant fashion is difficult, so we shall use the value calculated
directly from Z(2)-projected vortices in ref.~\cite{LaRe97}: $p_A =
(1.9\pm0.2)\>\text{fm}^{-2}$, close to what we require. Further, the
same value of the string tension was assumed in that calculation, so
in fact the ratios of the two numbers are independent of the
experimental value of $K$. Hence the random vortex model predicts a
value of $K$ about three quarters of the measured value (giving for
$\sqrt{K}$ some 85\% of the measured value), the difference
corresponding to three standard deviations. This is surprisingly good
considering the model in question contained very little mathematics
and did not even take into account the extended nature of the
vortices. That the centre vortices are physical is supported by the
result of ref.~\cite{LaRe97} that the distribution is renormalisation
group invariant.
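The numbers quoted above follow from $K=2p_A$ and the conversion $\hbar c\approx197.3$\,MeV\,fm; as an arithmetic check (added here for convenience):

```python
hbar_c = 197.327                      # MeV fm
K_phys = (440.0/hbar_c)**2            # (440 MeV)^2 in fm^-2, about 4.97
p_needed = K_phys/2                   # vortex density required for full tension
K_model = 2*1.9                       # prediction from measured p_A = 1.9 fm^-2
print(p_needed, K_model/K_phys, (K_model/K_phys)**0.5)
```

This reproduces the required density of about $2.5\>\text{fm}^{-2}$, a predicted $K$ near three quarters of the measured value, and $\sqrt{K}$ near 85--87\% of it.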
\subsection{The adjoint representation}
It has long been argued, and has to some extent entered the folklore,
that the observation of a string tension in the adjoint representation
of SU(2) cannot easily be explained by centre vortices. This claim is
based on the point that the centre Z(2) of SU(2) is not seen by
adjoint fields. This is, of course, correct, but this Z(2) is
associated with \textit{thin} centre vortices, which are not expected
to be involved with confinement in the continuum limit --- and as we
saw from the positive plaquette model are not necessary even for
lattice physics. The surviving symmetry of the adjoint representation
is SU(2)/Z(2), which is just that required for \textit{thick} centre
vortices. The mathematics of confinement relates to the fact that
with SU(2)/Z(2) one has two types of path in the gauge manifold, one
contractible and one not; the latter are closed due to the
identification of opposite ends of a diameter of the manifold. This
distinction survives in the adjoint representation.
In the adjoint representation, up to constants irrelevant to the
argument, the action depends on the square of the plaquette value, and
the sign of the loop's trace is not seen. However, the fact that one
cannot measure the signs of the loops in no way blurs the distinction
between the topologically different windings of the field. One has
simply lost a useful but --- if the theory of thick centre vortices is
correct --- physically unnecessary label, namely the association
between the sign of the loop and the presence of a vortex.
This last remark explains the difficulty with projection vortices,
where factors of $-1$ presumably correspond to a thick centre vortex
with an increasing admixture of thin vortices for stronger coupling.
Such vortices certainly are not present in the adjoint representation,
making investigation of the mechanism considerably more difficult, but
the underlying topology of the gauge manifold which forms the basis
for confinement in the theory is still present.
The point made here is slightly different from the
suggestion~\cite{FaGr97} for reconciling the adjoint string tension
with the vortex picture involving sign flips. However, the finite
width of the vortices enters in both cases: in that mechanism it was
the origin of string behaviour in higher representations, while here
we have stressed the SU(2)/Z(2) nature of the loops, which must be
spread over a large area so as to be present in the continuum limit.
Thus there may be a connection between the two.
We can show the correlations between the behaviour of adjoint Wilson
loops and the presence of vortices directly. We have run a simulation
on our $12^4$ lattice at $\beta=2.5$ in which the adjoint loops are
separated according to whether the fundamental loop was positive or
negative, i.e.\ whether there is an even or odd number of vortices
passing through the loop. This is just the process we criticised
above for the fundamental case; however, here the sign of $W_0$ itself
is not seen by the adjoint trace,
\begin{equation}
W_0^\mathrm{adj} = (4W_0^2-1)/3,
\label{eq:adjdef}
\end{equation}
so that any correlation with the
sign involves the SU($N$)/Z($N$) topology associated with the
thick vortices. Indeed, the adjoint loop probes the even part of the
distribution $\rho(W_0)$ whereas, as we have made clear,
the expectation values of fundamental loops and their signs only
involve the odd part.
\begin{figure}
\begin{center}
\psfig{file=log_adj.ps,angle=270,width=4in}
\end{center}
\caption{Logarithms of adjoint Wilson loops plotted against area.
Both the complete data, and those loops associated with a positive
fundamental loop, are shown.}
\label{fig:log_adj}
\end{figure}
In fig.~\ref{fig:log_adj}, we show the logarithms of the adjoint
Wilson loops, both for the complete data and those with positive
fundamental trace, once more at $\beta=2.5$. There is clear sign of
an area law behaviour in the former, i.e.\ an adjoint string tension.
The latter have a consistently shallower gradient, in other words the
string tension is significantly lower in those loops with only even
numbers of vortices. A perfect or near-perfect correlation is not to
be expected since from eqn.~(\ref{eq:adjdef}) that would require all
loops with $W_0 < 0$ also to have $W_0^2 < 1/4$, and the
other way around for positive $W_0$.
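The bound used here follows from eqn.~(\ref{eq:adjdef}): the adjoint trace is negative precisely when $W_0^2<1/4$, independently of the sign of $W_0$, so a perfect sign correlation is impossible. A one-line numerical confirmation (illustrative):

```python
import numpy as np

w0 = np.linspace(-1.0, 1.0, 2001)
w_adj = (4*w0**2 - 1)/3
# the adjoint trace changes sign at W0^2 = 1/4, regardless of sign(W0)
assert np.all((w_adj < 0) == (w0**2 < 0.25))
print("adjoint trace negative exactly for W0^2 < 1/4")
```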
Phenomenologically, the source of the correlation is not hard to see
from the fundamental distribution in fig.~\ref{fig:wl_dist}, bearing
in mind that the expectation value we want is just
eqn.~(\ref{eq:adjdef}) integrated over this distribution. For smaller
loops, the distribution peaks at positive $W_0$ and tails off towards
negative values. Therefore, large absolute values of $W_0$, and hence
positive contributions to $W_0^\mathrm{adj}$, tend to come
predominantly from positive fundamental loops. However, the source of
the adjoint string tension itself is rather less easy to fathom by
this method, and for that we appeal to the strong coupling result
of eqn.~(\ref{eq:wholereparea}) with $n=2$.
As this work was being completed, ref.~\cite{FaGr98} appeared, showing
similar correlations between the presence of vortices and the adjoint
string tension.
\section{Summary}
The major results of this paper are as follows:
\begin{itemize}
\item We considered the distribution $\rho(W_0)dW_0$ of Wilson loops
in SU(2); for loops larger than the correlation length of the field
$\rho(W_0)$ reaches the limiting form $(2/\pi)\sqrt{1-W_0^2}$.
\item We applied the Fourier decomposition in eqn.~(\ref{eq:fourier});
the expectation value of the loop is $\langle W_0\rangle=\pi
b_1/4$, in other words it only probes the first odd coefficient of
the distribution in this parametrisation; the even portions of
$\rho(W_0)$ do not contribute.
\item In the simple approximation where all Wilson loops are
uncorrelated, and we use the measured plaquette distribution as
input, the coefficient $b_1$ and hence $\langle W_0\rangle$ obey the
strong coupling behaviour, eqn.~(\ref{eq:arealaw}). Larger
coefficients $b_n$ are suppressed by an
additional factor $1/n$ for each multiplication by a Wilson loop of
the same size. This approximation also shows an area law behaviour
for the adjoint representation.
\item The Z(2) expectation value contains all $b_n$, not just $b_1$.
We found that for $n>2$ the coefficients were strongly suppressed
in Monte Carlo results, showing the validity of centre dominance.
\item The Fourier terms correspond, term by term, with
the expectation value of the Wilson loop in the gauge group
representations. The faster decay of higher representation loops
therefore gives an explanation for centre dominance.
\item We stressed that care needs to be taken when looking at results
in Z(2)-projection (whether the links are projected or we simply
look at the sign of the loops): the expectation values of the
negative and positive parts separately are trivially seen to be
non-confining, from the basic properties of the distribution.
\item We showed by considering the positive plaquette model that the
absence of thin vortices, which are chains of negative plaquettes,
does not change the centre dominance behaviour.
\item We showed that even a simple physical model of a random gas of
centre vortices with no interactions gave three quarters of the
observed string tension from the measured vortex density.
\item We pointed out that there is no essential difficulty in having
\textit{thick} centre vortices explain an adjoint string tension, as
both the thick vortices and the adjoint representation are
associated with an SU($N$)/Z($N$) symmetry --- the theory of centre
dominance does not directly involve the centre Z($N$), as has
sometimes been incorrectly assumed.
\item We showed that there is a correlation between the adjoint
string tension and the sign of the corresponding fundamental Wilson
loop, even though the latter does not directly affect the adjoint
loop: this makes it plausible that thick vortices are involved here,
too. This correlation is easily seen by looking at the Wilson loop
distributions.
\end{itemize}
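The random-gas estimate in the summary above can be illustrated numerically. Under the simplifying assumption of uncorrelated thin vortices, the number of piercings of a planar loop of area $A$ is Poisson-distributed with mean $\rho A$, and each piercing flips the sign of the fundamental Wilson loop, so $\langle W\rangle = e^{-2\rho A}$ and the string tension of the toy gas is $\sigma = 2\rho$. A minimal sketch (function names and parameter values are ours, not from the paper):

```python
import numpy as np

def wilson_loop_sign_average(rho, area, n_samples=400_000, seed=0):
    """Random gas of thin centre vortices: the number of piercings of a
    planar loop of the given area is Poisson-distributed with mean
    rho*area, and each piercing flips the sign of the fundamental
    Wilson loop."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(rho * area, size=n_samples)
    return np.mean((-1.0) ** n)

# For N ~ Poisson(lam), E[(-1)^N] = exp(-2*lam), so <W> = exp(-2*rho*A)
# and the string tension of the uncorrelated gas is sigma = 2*rho.
rho, area = 0.1, 4.0
mc = wilson_loop_sign_average(rho, area)
exact = np.exp(-2.0 * rho * area)
print(mc, exact)
```

Correlations between vortices (finite thickness, finite range) reduce this estimate, consistent with the three-quarters figure quoted above.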
These results seem to show that there is indeed physics contained in
the centre dominance picture. Of course, the results for other
pictures of confinement need to be borne in mind. For example, a lot
of physics has also emerged from the maximally Abelian monopole
condensation (Abelian dominance, or dual superconductor vacuum)
picture~\cite{abmono}; see e.g.\ the proceedings of Lattice
'97~\cite{Latt97} for recent progress. A way in which Abelian
monopoles might be related to centre vortices has been
proposed~\cite{DDFa97b}. It is interesting to note that both models
are based on topological mechanisms using homotopy classes, those of
SU(2)/Z(2) or U(1), not manifest in the full gauge group, and one can
speculate that there is a deeper connection. There are of course
other models, too, such as the anti-ferromagnetic vacuum~\cite{Po96}.
The arguments for the fundamental importance of Z(N) in SU(N) given by
't~Hooft twenty years ago~\cite{Ho78} are still valid and should be
borne in mind, but the debate looks set to continue.
For further progress, one needs a better way of asking which
properties are the most fundamental. Work is currently in progress by
another group to try to understand the correlations between the
monopole picture and instanton effects~\cite{IlMa98}; centre dominance
is presumably another ingredient which needs to be taken into account.
How far our results here really reflect dynamics involving the centre
of the gauge group can presumably be clarified by looking at SU(3).
Recent results~\cite{KoTo98} suggest the corresponding behaviour of
Wilson loops for the centre Z(3) is indeed found in the SU(3) case.
\section{INTRODUCTION}
\label{sec1}
Research on ultra-relativistic heavy ion collisions has from its
beginning been motivated by the search for quark matter: a bulk of
deconfined color charges.
The first attempts to describe hadronization assumed
that this quark matter consists of non-interacting massless quarks
and gluons \cite{Bir1,Raf,Knoll}.
In place of this early picture of free quarks and gluons (a plasma),
which was based on thermodynamical studies of pure nonabelian
gauge theory on the lattice,
the picture of a quark matter containing effective
propagators and interaction vertices has gradually emerged \cite{high-TQCD,Kamp,Lev1}.
At the characteristic energy scale of CERN SPS heavy ion
experiments ($\sqrt{s}/2 \approx 10$ GeV/nucleon) the dressed gluons
are heavier than the quarks
($M_g \approx 600-800$ MeV, $M_q \approx 150-300$ MeV) \cite{Lev1},
and both values are
bigger than the temperature ($T \approx 160$ MeV).
Therefore the quarks outnumber the gluons and
can be treated non-relativistically.
On the basis of these theoretical indications we expect
that in the CERN heavy ion experiments not an ideal quark-gluon plasma
but a constituent quark plasma ({\bf CQP}), which contains antiquarks too,
is formed in some
intermediate state of the reaction (see the ALCOR model,
Ref.~\cite{Bir2,Zim3}).
For the description of heavy ion reaction products at SPS
a purely
hadronic interpretation of the bulk experimental results
was also suggested
\cite{Stach}. This alternative interpretation,
however,
has problems in explaining how the new hadrons are created,
especially on the short time scale.
Namely, the hadronic processes have characteristic
times of several
tens of fm/c \cite{Knoll,Zim1},
while the typical heavy ion reaction time at SPS energy
is about $1-2$ fm/c \cite{Bjorken}.
Processes on the quark level on the
other hand have equilibration times of the order of
$0.1 - 2$ fm/c \cite{Bir1,Raf,Svetitsky+McLerran}.
The {\bf CQP} picture also has some advantages
in comparison with the quark-gluon plasma scenario:
in the hadronization of a quark-gluon plasma
the principles of color confinement and entropy generation
can both be satisfied only by assuming an extreme
growth of the reaction volume \cite{Zim2}.
Further, a slow first order transition
through near-equilibrium states
would take too long (up to $50$ fm/c),
due to the re-heating \cite{Csernai+Kapusta}.
In this paper we assume
that a massive quark matter, the {\bf CQP}, is formed
in heavy ion reactions at SPS energy, which is
in thermal but not necessarily in chemical
equilibrium.
The evolution of this {\bf CQP} is followed through a set
of coupled time dependent differential equations for
the temperature and chemical composition.
The construction of the equations requires relations between
the different equilibrium particle numbers at the actual temperature
as input.
These input values are determined by the equation
of state.
We mention here that in the ALCOR model
\cite{Bir2,Zim3} the hadronization problem is treated with
an algebraic approximation, in contrast to the method
of the present paper, where we follow the complete time
evolution of the system. The inclusion of a confining equation of state
in the present model is a very important difference from the ALCOR model,
and to the best of our knowledge this type of phenomenological equation
of state with confining character has not been discussed in the
literature until now.
The paper is structured as follows: in Section~\ref{sec_eos} we discuss
the equation of state of the mixture of hadrons
and interacting quarks.
In Section~\ref{sec_dyn} we describe in detail
the dynamics of the hadronization.
In Section~\ref{sec_num} we discuss our numerical results.
The conclusion is drawn in Section~\ref{sec_con}.
In Appendix A we collect the relevant hydrodynamical,
thermodynamical and statistical expressions used in the paper.
\section{EQUATION OF STATE OF THE MIXTURE OF HADRONS AND INTERACTING QUARKS}
\label{sec_eos}
We assume that at the beginning of the hadronization
the matter consists of massive quarks and anti-quarks.
In the time evolution of the system quarks and anti-quarks
form diquarks, anti-diquarks, mesons, baryons and anti-baryons.
We assume that the mixture of all of these particles is in thermal
equilibrium, characterized by a common temperature.
For the representation of the interaction of the colored particles
we introduce an extra term into the free energy, which is inspired
by the string picture.
We note that there is an important difference from the canonical
approach to the color confinement transition: in an (ideal) mixture of
quarks and hadrons the occupied volume, $V$, is the same for both
components, $V_q=V_h=V$,
and the pressure
contributions $p_q$ and $p_h$ are additive. In the application
of the Gibbs criteria of phase co-existence, on the other hand,
the volumes $V_q$ and
$V_h$ are additive, $V_q+V_h={\rm constant}$, and the partial pressures
are equal, $p_q=p_h=p$, in phase equilibrium.
In our physical picture of hadronization there is
no phase coexistence and the Gibbs criteria do not apply. Colored particles
and color-neutral clusters (pre-hadrons) are distributed in a common
reaction volume, and chemical reactions eventually convert
the quark matter into purely hadronic matter.
During this hadronization process the reaction volume
expands and cools.
Due to the change of the multi-particle composition this expansion
is not adiabatic: some heat can be produced (or consumed) by
quarkochemical processes.
The expansion law of an ideal mixture
follows from eq.(\ref{COOL}) and eq.(\ref{PRESSURE})
in Appendix A:
\begin{equation}
\sum_i m_i \dot{N}_i + \frac{3}{2} T \sum_i \dot{N}_i
+ \frac{3}{2} \dot{T} \sum_i N_i + T \frac{\dot{V}}{V} \sum_i N_i = 0.
\label{STAR}
\end{equation}
Due to the ongoing hadronization the number of particles decreases,
\begin{equation}
\sum_i \dot{N}_i < 0,
\label{REHEAT}
\end{equation}
therefore this process re-heats the system. Cooling effects are due to
the expansion \hbox{($\dot{V}/V=\partial_{\mu}u^{\mu} > 0$)}
and rest mass creation
\hbox{($\sum_i m_i \dot{N}_i > 0$).} The latter is compatible with
(\ref{REHEAT}) only if hadron masses are larger than
the sum of the masses of their constituents.
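The expansion law (\ref{STAR}) can be solved algebraically for $\dot{T}$ and checked numerically. A minimal sketch, with function and variable names of our choosing and natural units (GeV, fm) assumed:

```python
def tdot_ideal(T, V, Vdot, m, N, Ndot):
    """Temperature rate from the ideal-mixture expansion law (eq. STAR):
    sum_i m_i Ndot_i + (3/2) T sum_i Ndot_i + (3/2) Tdot sum_i N_i
      + T (Vdot/V) sum_i N_i = 0, solved for Tdot."""
    sN = sum(N)
    sNdot = sum(Ndot)
    sMdot = sum(mi * ni for mi, ni in zip(m, Ndot))
    return -(sMdot + 1.5 * T * sNdot + T * (Vdot / V) * sN) / (1.5 * sN)

# Pure expansion (Ndot = 0) cools: Tdot = -(2/3) T Vdot/V.
T, V, Vdot = 0.18, 431.0, 200.0
assert abs(tdot_ideal(T, V, Vdot, [0.15], [1000.0], [0.0])
           - (-(2.0 / 3.0) * T * Vdot / V)) < 1e-12
# Fusion that removes particles (sum Ndot < 0) at fixed volume re-heats:
print(tdot_ideal(T, V, 0.0, [0.15, 0.3], [1000.0, 0.0], [-20.0, 10.0]))
```

The second call illustrates eq.(\ref{REHEAT}): a net loss of particles drives $\dot{T} > 0$.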
At the present level, however, the physical process of color
confinement by the hadronization is not yet taken into account.
In particular, due to the dilution during the expansion,
not all quarks or diquarks find a partner to hadronize with.
In order to avoid this effect we supplement the model
by the following confinement principle:
all particles carrying color charge
(quarks, diquarks, anti-quarks and anti-diquarks) will be penalized
by a free energy contribution stemming from strings.
The number of strings is proportional to
a weighted sum of the number of color charges,
\begin{equation}
Q = \sum_i q_i N_i.
\end{equation}
Here $q_i=0$ for hadrons,
$q_i = 1/2$ for quarks and anti-quarks, and $q_i = 3/4$
for diquarks and anti-diquarks.
The higher effective charge of diquarks reflects a possibly higher
number of in-medium partners, to which a string is stretched.
The average length $L$ of a string depends on the
density of colored objects as $L=n_c^{-1/3}$, where
\begin{equation}
n_c = \sum_{i \in c} N_i/V,
\label{eq_nc}
\end{equation}
and the summation $i \in c$ excludes color-neutral particles (hadrons).
So the free energy of the ideal quark matter - hadron matter mixture,
$F = F_{{\rm id}} + \Delta F,$
is supplemented by the following contribution of strings:
\begin{equation}
\Delta F = \sigma_s n_c^{-1/3} Q,
\label{DELTA}
\end{equation}
with the effective string tension $\sigma_s \approx 1.0$~GeV/fm.
We note here that if all colored particles carried the same color charge,
the effective string tension would be equal for all, and the interaction
free energy would become
\begin{equation}
\Delta F = \sigma_{s, {\rm eff}} n_c^{2/3} V.
\end{equation}
In order to achieve baryon production and diquark elimination
properly we shall, however, need to use the more complicated ansatz
eq.(\ref{DELTA}).
This additional free energy constitutes the non-ideality of the equation of
state we use. Since this addition is proportional to the volume $V$ while
the rest depends on densities only, it satisfies the thermodynamical
consistency requirements \cite{Toneev} by construction.
While there is no new contribution to the entropy,
\begin{equation}
S = S_{{\rm id}},
\label{ENTR}
\end{equation}
the pressure, the energy and the chemical potentials of colored
($q_i \ne 0$) particles receive important modifications:
\begin{eqnarray}
p &=& p_{{\rm id}} - \frac{1}{3} \sigma_s n_c^{-1/3} \frac{Q}{V}, \nonumber \\
E &=& E_{{\rm id}} + \sigma_s n_c^{-1/3} Q, \nonumber \\
\mu_i &=& \mu_{i,{\rm id}} + \sigma_s n_c^{-1/3}
(q_i - \frac{1}{3} \overline{q}),
\label{NONID}
\end{eqnarray}
with $\overline{q} = Q/(Vn_c)$. Hadronic chemical potentials have no
modifications at all.
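The modifications in eqs.(\ref{DELTA}) and (\ref{NONID}) can be evaluated together. The following sketch uses our own function name and illustrative particle numbers, with natural units (GeV, fm) assumed; note that hadrons ($q_i=0$, outside the colored sum) indeed receive no chemical-potential shift:

```python
SIGMA_S = 1.0  # effective string tension, GeV/fm (value from the text)

def string_corrections(q, N, V):
    """Non-ideal string contribution of eqs. (DELTA) and (NONID):
    Delta F = sigma_s * n_c^(-1/3) * Q, with Q = sum_i q_i N_i and
    n_c the density of colored particles (those with q_i != 0)."""
    Q = sum(qi * Ni for qi, Ni in zip(q, N))
    n_c = sum(Ni for qi, Ni in zip(q, N) if qi != 0.0) / V
    s = SIGMA_S * n_c ** (-1.0 / 3.0)
    dF = s * Q                # free-energy shift
    dp = -s * Q / (3.0 * V)   # pressure shift (negative)
    qbar = Q / (V * n_c)
    # hadrons (q_i = 0) receive no chemical-potential shift
    dmu = [s * (qi - qbar / 3.0) if qi != 0.0 else 0.0 for qi in q]
    return dF, dp, dmu

# quarks (q=1/2), diquarks (q=3/4) and hadrons (q=0) in V = 431 fm^3:
dF, dp, dmu = string_corrections([0.5, 0.75, 0.0], [800.0, 50.0, 100.0], 431.0)
print(dF, dp, dmu)
```

By construction $\Delta p = -\Delta F/(3V)$, matching the first line of eq.(\ref{NONID}).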
This non-ideal completion of the equation of state influences both
the expansion and cooling and the changes of particle composition.
Since the entropy is not changed by the introduction
of the interaction term (see eq.~(\ref{ENTR})), we have
\begin{equation}
TdS = TdS_{{\rm id}} \ .
\end{equation}
From this equation, using eq.~(\ref{PRINC}) we obtain
\begin{equation}
dE + pdV - \sum_i \mu_i dN_i =
\left( dE + pdV \right)_{{\rm id}}
- \sum_i \mu_{i,id} dN_i \ .
\end{equation}
Applying eq.~(\ref{COOL}),
the non-ideal cooling law becomes
\begin{equation}
\left( dE + pdV \right)_{{\rm id}} +
\sum_i \left( \mu_i - \mu_{i, {\rm id}} \right) dN_i = 0 \ .
\label{NONCOOL}
\end{equation}
Accordingly eq.(\ref{STAR}) is supplemented by a new generic term
due to the non-ideal equation of state,
\begin{eqnarray}
\frac{\dot{T}}{T} &=& - \frac{2}{3} \frac{\dot{V}}{V}
- \frac{\sum_i \dot{N}_i}{\sum_i N_i}
- \frac{2}{3}\frac{\sum_i (m_i/T) \dot{N}_i}{\sum_i N_i} \nonumber \\
&-&\frac{2}{3}\frac{\sum_i (\mu_i/T - \mu_{i,{\rm id}}/T)\dot{N}_i}{\sum_i N_i}
\end{eqnarray}
The additional term has the form
\begin{equation}
\sum_i \frac{\mu_i- \mu_{i,id}}{T} \dot{N}_i =
\frac{\sigma_s n_c^{-1/3}}{T}
\sum_{i \in c} (q_i - \frac{1}{3}\overline{q}) \dot{N}_i \ .
\end{equation}
This term is negative if color charges are eliminated from the mixture.
Therefore color confinement, causing an extra suppression of the
equilibrium numbers of quarks and similar particles, re-heats the
expanding fireball in addition to the ``normal'' chemistry of the
ideal quark--hadron mixture.
The more one suppresses color charges with respect to an ideal
mixture, the more re-heating occurs during hadronization. This, of course,
works against the hadronization process.
The only physical effect, besides a fast expansion (which has
kinematical limits stemming from scaling relativistic expansion),
that can cool the mixture sufficiently is rest-mass production.
The color charge eliminating hadronization therefore must be
accompanied by the production of heavy hadron resonances.
(Rest mass production due to quark pair or gluon creation would not
reduce color.)
Since in our equation of state we have an explicit interaction energy
between the quarks, our effective quark masses should be less
than those given in Ref.\cite{Lev1}. We shall use the following values:
$ m_u=[m_{u0}^2 + m_{th}^2]^{1/2} $,
$ m_d=[m_{d0}^2 + m_{th}^2]^{1/2} $,
$ m_s=[m_{s0}^2 + m_{th}^2]^{1/2} $,
with thermal mass $ m_{th} = 0.15 $ GeV and
$m_{u0}=m_{d0}\approx 0$, $m_{s0}=0.15$ GeV.
The mass of each cluster is the sum of its valence quark masses plus
the average mass excess of the two lowest-lying
hadron multiplets: the pseudoscalar and vector meson nonets for mesons,
and the baryon octet and decuplet for baryons, respectively.
\section{DYNAMICS OF HADRONIZATION}
\label{sec_dyn}
\subsection{Initial state}
\label{subs_ini}
The initial energy density --- distributed along the beam direction
between $-\tau_0 \sinh \eta_0$ and $\tau_0 \sinh \eta_0$ --- can be related
to the center of mass bombarding energy $\sqrt{s}$ in the experiment,
\begin{equation}
\varepsilon_0 = \frac{\sqrt{s}}{\pi R_0^2 \tau_0 \, 2 \sinh \eta_0}.
\end{equation}
On the other hand the initial invariant volume dual to $d\tau$
at constant $\tau=\tau_0$ is given by
\begin{equation}
V_0 = \pi R_0^2\tau_0 2\eta_0.
\end{equation}
The initial internal energy (i.e. the energy without the collective
flow of a fluid cell) at $\tau=\tau_0$ is therefore
less than $\sqrt{s}$ for finite $\eta_0$:
\begin{equation}
E_0 = \varepsilon_0 V_0 = \frac{\eta_0}{\sinh\eta_0} \sqrt{s}.
\end{equation}
At the CERN SPS experiment $\eta_0 \approx 1.75$ (due to some stopping),
$R_0 \approx 7$ fm, $\tau_0 \approx 0.8$ fm and
we obtain $V_0 \approx 431$ fm$^3$ and $E_0 \approx 2.13$ TeV.
Compared to the total energy of about $\sqrt{s} = 3.4$ TeV
(carried by about $390$ participating nucleons in a central
Pb-Pb collision)
approximately two thirds of the energy is invested into the
rest mass of newly produced particles and into thermal motion,
and one third into the flow.
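The quoted numbers can be reproduced directly from the formulas above (parameter values from the text; fm for lengths, GeV for energies):

```python
import math

# Longitudinal Bjorken-type initial state, parameters from the text:
sqrt_s = 3.4e3                    # GeV, total energy of the participants
eta0, R0, tau0 = 1.75, 7.0, 0.8   # rapidity width, radius (fm), proper time (fm)

V0 = math.pi * R0**2 * tau0 * 2.0 * eta0                      # invariant volume
eps0 = sqrt_s / (math.pi * R0**2 * tau0 * 2.0 * math.sinh(eta0))
E0 = eps0 * V0                    # equals (eta0/sinh eta0) * sqrt(s)
print(round(V0), round(E0))       # ~431 fm^3, ~2132 GeV
```

The internal fraction $\eta_0/\sinh\eta_0 \approx 0.63$ is the "approximately two thirds" of the text.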
Comparing this with an alternative expression for the thermal energy
of an ideal massive quark matter,
\begin{equation}
E_0 = \sum_i N_i(0) (m_i + \frac{3}{2} T),
\end{equation}
one can estimate the initial temperature at the beginning of hadronization.
Using our standard values for the incoming quark numbers \cite{SQM97}
$N_u(0) = 544,$ $N_d(0) = 626$, further, assuming that 400 $u \overline u$,
400 $ d \overline d $ and, with $f_s=0.21 $,
168 $ s \overline s$ quark anti-quark
pairs are created in one central collision,
we get from the above equation $T_0 = 0.18$ GeV.
We use these numbers for the newly produced quark pairs
in order to arrive at the experimentally measured hadron and
strange particle numbers.
\subsection{Hadronization processes}
\label{subs_hadr}
Initially we do not assume chemical equilibrium, but rather a definite
over-saturation of quarks in the reaction volume.
The initially missing color-neutral hadron states - mesons and
baryons - are formed due to quark fusion processes in a non-relativistic
Coulomb potential. The rates for different flavor compositions
differ mainly due to the different reduced masses of quark anti-quark
or quark diquark pairs. First of all this influences the Bohr radius
in the Coulomb potential \cite{Bir2}.
Of course, the presence of a medium -
which establishes the necessary momentum balance after the fusion -
also influences the hadronization rate. The cross section for
such a $2 \rightarrow 1$ process in medium is
\begin{equation}
\sigma = \left(\frac{\rho}{a}\right)^3
\frac{16M^2\sqrt{\pi}\alpha^2}{\left( \vec{p}^2 + 1/a^2 \right)^2}
\end{equation}
with $a = 1/(\alpha m)$ the Bohr radius of the $1s$ state in the
Coulomb potential and $\rho$ the Debye screening length
\cite{Bir2}. Here $\vec{p}$ is the relative momentum of the
hadronizing precursors, $m$ is their reduced mass and $M$ is the
total mass.
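The cross section formula can be coded directly. A sketch (function name and the illustrative parameter values are ours; natural units assumed, with the Bohr radius $a = 1/(\alpha m)$ as above):

```python
import math

def fusion_cross_section(p, m_red, M, alpha, rho):
    """In-medium 2 -> 1 fusion cross section quoted in the text:
    sigma = (rho/a)^3 * 16 M^2 sqrt(pi) alpha^2 / (p^2 + 1/a^2)^2,
    with Bohr radius a = 1/(alpha * m_red)."""
    a = 1.0 / (alpha * m_red)
    return ((rho / a) ** 3 * 16.0 * M * M * math.sqrt(math.pi) * alpha ** 2
            / (p * p + 1.0 / a ** 2) ** 2)

# sanity check: far above the inverse Bohr radius the cross section
# falls off roughly as p^(-4)
s1 = fusion_cross_section(1.0, 0.075, 0.3, 1.4, 0.25)
s2 = fusion_cross_section(2.0, 0.075, 0.3, 1.4, 0.25)
print(s1, s2)
```

The $(\rho/a)^3$ prefactor carries the medium dependence exploited later by the dynamic confinement mechanism.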
The coupling constant we are using is a function of the
relative momentum according to the formula
\begin{equation}
\frac{1}{\alpha(p)} = \frac{1}{\alpha_0} +
\beta_{{\rm QCD}} \log \frac{p^2}{\Lambda_{{\rm QCD}}^2}
\label{ALFA}
\end{equation}
as long as $\alpha(p) < \alpha_0$. At smaller momenta
it levels off at $\alpha_0$.
Here
\begin{equation}
\beta_{{\rm QCD}} = \frac{1}{4\pi} \left( 11 - \frac{2}{3} N_F \right)
\end{equation}
is the beta-parameter of the one loop beta-function
occurring in the charge renormalization process in perturbative
QCD. The function (\ref{ALFA}) is a good phenomenological approximation
and has the correct infrared
and ultraviolet asymptotics. The value $\alpha_0 \approx 1.4$
is taken in order to fit the gluon condensate strength
in vacuum.
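The levelling-off prescription for eq.(\ref{ALFA}) can be sketched as follows. We use the standard one-loop coefficient $\beta_{\rm QCD} = (11 - 2N_F/3)/(4\pi)$ for SU(3); the value $\Lambda_{\rm QCD} = 0.2$ GeV is our assumption, not from the text:

```python
import math

ALPHA0 = 1.4          # infrared value, fitted to the vacuum gluon condensate
LAMBDA_QCD = 0.2      # GeV; an assumed, typical value
N_F = 3
BETA_QCD = (11.0 - 2.0 * N_F / 3.0) / (4.0 * math.pi)  # one-loop coefficient

def alpha(p):
    """Phenomenological running coupling, eq. (ALFA):
    1/alpha = 1/alpha0 + beta * log(p^2 / Lambda^2),
    levelling off at alpha0 for small relative momenta."""
    inv = 1.0 / ALPHA0 + BETA_QCD * math.log(p * p / LAMBDA_QCD**2)
    a = 1.0 / inv if inv > 0 else float("inf")
    return min(a, ALPHA0)

print(alpha(0.1), alpha(1.0), alpha(10.0))
```

The function is constant at $\alpha_0$ in the infrared and decreases logarithmically in the ultraviolet, as stated in the text.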
The medium effect is parameterized in all cross sections
by $\rho \approx 0.2 - 0.3$ fm in a flavor independent way.
If the hadronization process is fast
enough, the prehadron numbers (i.e. hadron numbers before resonance decay)
become proportional to the two-body reaction rates. Since quark fusion
does not change the number of valence quarks during the
hadronization, the fast hadronization limit leads to
the ALCOR model \cite{Bir2}.
The relative momenta are taken from a random Gaussian
distribution,
\begin{equation}
d{\cal P}(\vec{p}) \propto e^{-p^2/2mT} d^3p,
\end{equation}
at temperature $T$ and reduced mass $m$.
This method allows us either to simulate thermally averaged
hadronization rates,
\begin{equation}
R = \langle \sigma \frac{|\vec{p}|}{m} \rangle,
\label{RATE}
\end{equation}
or to follow the event by event variation of the quark - hadron
composition. For a fast estimate of thermally averaged rates
about $3-5$ points in the relative phase space suffice for each reaction
at each time instant.
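The thermal averaging of eq.(\ref{RATE}) over the Gaussian relative-momentum distribution can be sketched as below (function names ours; natural units assumed). A hypothetical constant cross section serves as a sanity check, since then $R = \sigma_0 \langle|\vec p|\rangle/m$ with $\langle|\vec p|\rangle = \sqrt{8mT/\pi}$:

```python
import numpy as np

def thermal_rate(sigma, m, T, n_samples=5, seed=1):
    """Monte Carlo estimate of the thermally averaged rate
    R = < sigma(p) |p|/m > over the Gaussian relative-momentum
    distribution dP ~ exp(-p^2/2mT) d^3p (a few samples per reaction
    suffice for a fast estimate, as stated in the text)."""
    rng = np.random.default_rng(seed)
    p = rng.normal(0.0, np.sqrt(m * T), size=(n_samples, 3))
    pabs = np.linalg.norm(p, axis=1)
    return np.mean(sigma(pabs) * pabs / m)

# Hypothetical constant cross section as a sanity check:
m, T, sigma0 = 0.075, 0.18, 1.0
R = thermal_rate(lambda p: sigma0 * np.ones_like(p), m, T, n_samples=100_000)
exact = sigma0 * np.sqrt(8.0 * m * T / np.pi) / m
print(R, exact)
```

With only $3$-$5$ samples the same routine gives a noisy but unbiased estimate, which is what the event-by-event mode of the text exploits.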
Although the above corrections to the equation of state
help to reduce the equilibrium
ratio of free quarks to those confined in hadrons, total color
confinement would only occur after a long time when the mixture
is cool and dilute.
This is obviously not the case in relativistic
heavy ion collisions. Therefore we also apply a {\em dynamic confinement}
mechanism in our model: the medium screening length $\rho$
occurring in the hadronization cross section will be related to
strings pulled by color charges trying to leave the reaction zone.
This way the screening length $\rho$ is increased as the color
density decreases: we keep, however, the product $\rho^3 N_c/V$
constant,
\begin{equation}
\rho(t) = \rho(0) \left( N_c(0)/N_c(t) \right)^{1/3}.
\end{equation}
Here $N_c$ stands for the number of colored objects
(each in triplet or anti-triplet representation),
\begin{equation}
N_c = N_Q+N_{\overline{Q}}+N_D+N_{\overline{D}}.
\end{equation}
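The dynamic confinement prescription is a one-liner (function name and numbers ours):

```python
def screening_length(rho0, Nc0, Nc):
    """Dynamic confinement: the product rho^3 * N_c / V is held fixed,
    so the screening length grows as colored objects disappear."""
    return rho0 * (Nc0 / Nc) ** (1.0 / 3.0)

# when 7/8 of the colored objects have hadronized, rho has doubled:
print(screening_length(0.2, 800.0, 100.0))  # ~0.4 fm
```

Through the $(\rho/a)^3$ factor in the fusion cross section this enhances hadronization precisely when the color density becomes small.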
As we shall discuss in the next section, both color confinement
mechanisms are necessary in order to achieve a purely hadronic
composition in a short time via trans-chemical processes,
while satisfying the requirements of entropy growth
and energy conservation.
\subsection{Reaction network}
\label{subs_net}
What remains to complete the model is the system of rate equations describing
the transformation of quark matter into hadronic matter.
We consider $N_F=3$ light quark flavors $u$, $d$ and $s$.
There are $N_F(N_F+1)/2=6$ possible diquark flavors and the same
number of anti-diquark flavors. The number of quark anti-quark
flavor combinations is $N_F^2=9$ while that of quark or
anti-quark triplet combinations is $N_F(N_F+1)(N_F+2)/6=10$.
In the hadronizing quark matter we deal with altogether
$2\cdot3+2\cdot6+9+2\cdot10=47$ sorts of particles.
Let us generally denote quarks by $Q$, diquarks by $D$,
mesons by $M$ and baryons by $B$.
The possible fusion reactions are:
\begin{eqnarray}
Q + Q & \longrightarrow & D, \nonumber \\
\overline{Q} + \overline{Q} & \longrightarrow & \overline{D}, \nonumber \\
Q + \overline{Q} & \longrightarrow & M, \nonumber \\
Q + D & \longrightarrow & B, \nonumber \\
\overline{Q} + \overline{D} & \longrightarrow & \overline{B}.
\end{eqnarray}
Accordingly the number of reaction channels,
$2\cdot10+9+2\cdot6=41$, is $2N_F=6$ less than the number of
particle sorts. Therefore in chemical equilibrium,
when the above reactions are balanced
by the respective decays,
the numbers of all particles are under-determined.
This is fortunately not a problem, because
the hadronization reactions under consideration do not create or
annihilate elementary quark flavors, they just
produce new combinations.
There are therefore exactly $2N_F=6$ quark and anti-quark
flavor numbers, which are conserved by these reactions.
They are determined by the initial state.
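The flavor counting above can be verified mechanically (the enumeration code is ours):

```python
from itertools import combinations_with_replacement, product

FLAVORS = ("u", "d", "s")
N_F = len(FLAVORS)

diquarks = list(combinations_with_replacement(FLAVORS, 2))  # flavors i <= j
mesons = list(product(FLAVORS, repeat=2))                   # quark, anti-quark
baryons = list(combinations_with_replacement(FLAVORS, 3))   # i <= j <= k

# particle sorts: q, qbar, D, Dbar, M, B, Bbar -> 2*3 + 2*6 + 9 + 2*10 = 47
sorts = 2 * N_F + 2 * len(diquarks) + len(mesons) + 2 * len(baryons)
# fusion channels: D, Dbar, M, B, Bbar production -> 2*6 + 9 + 2*10 = 41
channels = 2 * len(diquarks) + len(mesons) + 2 * len(baryons)
print(sorts, channels, sorts - channels)
```

The deficit of $2N_F = 6$ equations is exactly compensated by the six conserved quark and anti-quark flavor numbers.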
Our model is completed by the system of rate equations.
Considering a general reaction of type
$$ i + j \longrightarrow k$$
we account for the changes
\begin{equation}
dN_i = dN_j = - Adt, \qquad dN_k = +Adt,
\label{COMP}
\end{equation}
cumulatively in each reaction. Here
\begin{equation}
A = R_{ij\rightarrow k} N_i N_j
\left( 1 - e^{\frac{\mu_k}{T} - \frac{\mu_j}{T} - \frac{\mu_i}{T} } \right)
\label{BACK}
\end{equation}
with a thermally averaged rate
$ R_{ij\rightarrow k}$ (cf. eq.(\ref{RATE})).
The changes stemming from different reactions accumulate to a
total change of each particle sort in a time-step $dt$.
Inspecting the expression (\ref{BACK}) it is transparent that an extra
increase of a chemical potential suppresses the production or enhances
the decay of the corresponding particle. This behavior ensures that
\begin{equation}
\dot{S} = - \sum_i \frac{\mu_i}{T} \dot{N}_i > 0,
\end{equation}
for any complicated network of reactions. The approach to chemical
equilibrium always produces entropy.
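A single Euler step of one channel of eqs.(\ref{COMP},\ref{BACK}) can be sketched as follows (function name and the illustrative chemical potentials are ours, in GeV):

```python
import math

def reaction_step(Ni, Nj, Nk, mui, muj, muk, R, T, dt):
    """One Euler step of the fusion reaction i + j -> k, using the
    detailed-balance factor of eq. (BACK):
    A = R * Ni * Nj * (1 - exp((mu_k - mu_i - mu_j)/T))."""
    A = R * Ni * Nj * (1.0 - math.exp((muk - mui - muj) / T))
    return Ni - A * dt, Nj - A * dt, Nk + A * dt

# At mu_i + mu_j = mu_k the channel is balanced (eq. CHEMEQ): no change.
assert reaction_step(100.0, 80.0, 10.0, 0.25, 0.25, 0.5, 1e-4, 0.16, 0.1) \
       == (100.0, 80.0, 10.0)
# Far from equilibrium the reaction runs forward and produces hadrons:
Ni, Nj, Nk = reaction_step(100.0, 80.0, 0.0, 0.25, 0.25, 0.125, 1e-4, 0.16, 0.1)
print(Ni, Nj, Nk)
```

The sign of the detailed-balance factor is what makes each step dissipate towards eq.(\ref{CHEMEQ}) and hence produce entropy.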
\subsection{Chemical equilibrium}
Chemical equilibrium is defined by the requirement that all chemical
rates vanish (cf. eq.\ref{BACK}). It leads to relations like
\begin{equation}
\mu_i^{{\rm eq}} + \mu_j^{{\rm eq}} = \mu_k^{{\rm eq}}
\label{CHEMEQ}
\end{equation}
for each reaction channel. The correspondence between equilibrium
chemical potentials and equilibrium number densities is, however,
in the general case not as simple as for a mixture of ideal gases.
From eq.(\ref{NONID}) we obtain an implicit equation for the
equilibrium densities of colored particles,
\begin{equation}
n_i^{{\rm eq}} = n_{i}^{\rm th} \,
\exp \left[ {\frac{\mu_{i}^{{\rm eq}}-b_i(n^{{\rm eq}}_c)-m_i}{T} } \right] \ ,
\label{eq_implic}
\end{equation}
where $n_c^{{\rm eq}}$ is the color charge density in equilibrium,
(see Eq.(\ref{eq_nc})), $n_{i}^{\rm th}=N_{i}^{\rm th}/V$
from Eq.(\ref{MaxBolt}) and
\begin{equation}
b_i(n_c) =
\sigma_s n_c^{-1/3}
\left( q_i - \frac{1}{3} \overline{q} \right) \ .
\end{equation}
We call attention to the fact that no solution of Eq.(\ref{eq_implic})
exists below a critical temperature.
\vspace{0.5cm}
Since the number of reactions is less than the number of particle sorts,
the equilibrium state is not fully determined by these conditions alone.
The missing information is contained in the value of conserved numbers.
Applying eq.(\ref{NONID}),
the non-equilibrium chemical potentials, and hence the essential factors,
$e^{-\mu_i/T}$, in the chemical rates (eq.(\ref{BACK})) can be expressed
as
\begin{equation}
e^{-\mu_i/T} = e^{-\mu_i^{\rm eq}/T} \cdot
e^{\frac{b_i(N^{{\rm eq}})-b_i(N)}{T}} \cdot
\frac{N_i^{{\rm eq}}}{N_i}.
\end{equation}
In the combinations appearing in the detailed balance factor of the
rate equations using eq.(\ref{CHEMEQ}) we obtain
\begin{equation}
1 - e^{\frac{\mu_k-\mu_i-\mu_j}{T}} = 1 -
\frac{N_i^{{\rm eq}}}{N_i} \frac{N_j^{{\rm eq}}}{N_j} \frac{N_k}{N_k^{{\rm eq}}}
e^{\frac{\Delta\mu_k-\Delta\mu_i-\Delta\mu_j}{T} },
\end{equation}
with
\begin{equation}
\Delta\mu_i = b_i(N) - b_i(N^{{\rm eq}}).
\end{equation}
The corrections $b_i(N)$ in the non-equilibrium chemical potentials
may in general depend on the number densities of several other
components of the mixture.
\vspace{0.5cm}
At this point we note that the extra $e^{-\Delta\mu_i/T}$ factors
occur for non-ideal equations of state where the correction to
the free energy density is a nonlinear function of the number
densities.
\vspace{0.5cm}
\subsection{Detailed rate equations}
For better understanding we enumerate certain types of the rate
equations given above in the compressed form of eqs.(\ref{COMP},\ref{BACK}).
Let $\{u, d, s, \ldots \}$ be an ordered set of flavor indices.
We denote
quarks by flavor indices $i,j$ or $k$,
the anti-quarks by their overlined versions,
diquarks consisting of a flavor pair satisfying $i\le j$ by $[ij],$
a meson made from a flavor $i$ quark and a flavor $\overline{j}$
anti-quark by $[i\overline{j}],$
and finally a baryon made from quark flavors $i, j$ and $k$
satisfying $i\le j\le k$ by $[ijk]$.
In fusion reactions we create primarily
mesons, (anti)diquarks and (anti)baryons.
The rate equations for these reactions
are straightforward to encode on a computer, but
writing them out explicitly leads to somewhat clumsy
expressions:
\begin{eqnarray}
V \frac{dN_i}{dt} & = & - \, \sum_{j\ge i}
R_{i+j \rightarrow [ij]} \NN{i}{j}{[ij]} \nonumber \\ \nonumber \\
& - & R_{i+i \rightarrow [ii]} \NN{i}{i}{[ii]} \nonumber \\ \nonumber \\
& - & \sum_{\overline{j}}
R_{i+\overline{j} \rightarrow [i\overline{j}] }
\NN{i}{\overline{j}}{[i\overline{j}]} \nonumber \\ \nonumber \\
& - & \sum_{[jk]}
R_{i+[jk] \rightarrow [ijk]} \NN{i}{[jk]}{[ijk]}
\end{eqnarray}
\noindent for quarks,
\begin{eqnarray}
V \frac{dN_{[ij]}}{dt} & = &
R_{i+j \rightarrow [ij]} \NN{i}{j}{[ij]}_{i \le j} \nonumber \\ \nonumber \\
& - & \sum_k R_{[ij]+k \rightarrow [ijk]} \NN{[ij]}{k}{[ijk]}
\end{eqnarray}
for diquarks,
\begin{equation}
V \frac{dN_{[i\overline{j}]}}{dt} =
R_{i+\overline{j} \rightarrow [i\overline{j}] }
\NN{i}{\overline{j}}{[i\overline{j}]}
\end{equation}
for mesons and finally
\begin{eqnarray}
V \frac{dN_{[ijk]}}{dt} & = &
R_{i+[jk] \rightarrow [ijk]} \NN{i}{[jk]}{[ijk]}_{i\le j \le k} \nonumber \\ \nonumber \\ & + &
R_{j+[ik] \rightarrow [ijk]} \NN{j}{[ik]}{[ijk]}_{i < j \le k} \nonumber \\ \nonumber \\ & + &
R_{k+[ij] \rightarrow [ijk]} \NN{k}{[ij]}{[ijk]}_{i \le j < k}
\end{eqnarray}
for baryons.
We obtain similar equations for anti-quarks, anti-diquarks and
anti-baryons.
\vspace{0.5cm}
We would like to point out that the novel form of the
above rate equations is a consequence of the equation of state
we are using. This form goes beyond the usual expression
multilinear in $N_i/N_i^{{\rm eq}}$, and reflects the fact that
the hadronization reactions are strongly influenced by the
presence of colored particles not taking part in the particular
two-body process directly.
\vspace{0.5cm}
The rate equation for baryon production is especially complicated.
The conditions indicated as lower indices on the
bracket expressions or in the summation signs rule out multiple
counting of baryons which contain quarks of equal flavor.
For example the reaction $u+[uu]\rightarrow[uuu]$ makes only
one $[uuu]$ baryon (pre-$\Delta^{++}$) and not three.
On the other hand $[uds]$ can be constructed from three
different quark--diquark fusion processes
\begin{eqnarray}
u + [ds] &\longrightarrow & [uds], \nonumber \\ \nonumber \\
d + [us] &\longrightarrow & [uds], \nonumber \\ \nonumber \\
s + [ud] &\longrightarrow & [uds].
\end{eqnarray}
The only easy and transparent notation we found for these
reactions is the compact eq.(\ref{COMP}).
Since we calculate the evolution of the number of a given type of quark
group (e.g. $uud$), at the final time we distribute this number between
the corresponding multiplets ---
the lowest lying pseudoscalar and vector nonets for the mesons and
lowest lying octet and decuplet for baryons (antibaryons) ---
according to the spin degeneracy. (E.g.
the number of $uud$ quark groups is distributed between the $ p^+ $
and the $ \Delta^+$ in the ratio two to four, while $uuu$ populates only the
$\Delta^{++}$ resonance.)
\subsection{Hadronic decays}
The set of rate equations describes the time evolution of the
number of all involved particles. In order to get the final
hadron numbers we integrate these equations until the
number of colored particles becomes negligible.
This way we obtain a number of hadronic resonances
(in the present version the vector meson nonet and baryon
decuplet).
Finally, hadronic decays are taken into account with
the dominant
branching ratios obtained from the Particle Data Table \cite{PDT}.
We assume that secondary hadron-hadron
interactions have a negligible effect on the
finally observed hadronic composition.
The time evolution of the entropy and temperature
is obtained by simultaneous integration of eqs.(\ref{ENTROPY})
and (\ref{STAR}). The structure of the rate equations
containing both creation and decay terms ensures
that the entropy never decreases during the
transition \cite{Thank_Knoll}.
On the other hand the energy conservation is established by
the effective cooling law eq.(\ref{STAR}).
The numerical study of this complex system of equations
is presented in the next section.
\section{ Numerical results and discussion}
\label{sec_num}
In this section we present the results of the numerical solution
of the set of $41+2$ coupled differential equations. In the
Figures the calculated time evolution of the temperature,
entropy, internal energy and the
different particle and antiparticle numbers are presented for a
158 GeV/nucleon Pb + Pb central collision. For the numerical solution
the initial condition and the values of the parameters have to be
specified. For the parameters describing the initial state we used those
given in subsection \ref{subs_ini}, while
for the parameters determining the dynamics of the
hadronization we used the
following values: $\rho = 0.2$~fm, $\alpha = 1.4$.
Both the quarks and the diquarks
vanished only if we used the value $ q_d = 1.5 $ for the
effective diquark color string tension. The time
evolution of the different quantities
calculated with this initial condition and these parameters
is shown in the Figures.
From Fig.~1a we can see that at the beginning of the hadronization
there is a rapid decrease in the temperature due to the rest mass formation
of the hadrons. Shortly after that, the re-heating starts as an effect
of color confinement (see eqs.(28) and (32)).
Eq.(28) shows that by removing
two colored objects to produce a colorless hadron,
the associated string energy is also removed,
and it has to appear in the thermal energy. Since this energy is inversely
proportional to the cube root of the color density, the
effect is stronger at smaller color density. Finally, as the
hadronization is completed,
the expansion leads to the cooling of the system. Fig.~1b shows that the
total entropy is monotonically increasing during the hadronization,
while the contribution of colored particles (CQP) is gradually
eliminated.
In Fig.~1c one can
observe an interesting pattern in the time evolution of the pressure.
The partial pressure
of the interacting CQP rapidly decreases as the number of quarks
decreases. As the color density drops, this pressure even becomes negative.
The increasing hadron partial pressure, however, overcompensates this negative
value.
At later times, with the expansion of the ideal hadron gas, the total
pressure decreases. Due to this interplay, the total pressure
falls steeply in the early period, stays almost constant for a short while
in the re-heating period, and finally drops slowly
in the late expansion stage.
For an initial condition leading
to slower hadronization this balance is not as effective and
a local minimum appears in the time evolution of the pressure.
The partial and total internal energy evolution,
displayed in Fig.~1d, shows that the hadronization is completed
about $2$ fm/c after the beginning of the process. The decrease of the
internal energy is compensated by the
work of the pressure driving the flow (cf. eq.(\ref{COOL}));
the total energy density drops due to the volume expansion
like $\varepsilon \propto 1/\tau$ only at very late times,
when $p=0$.
Fig.~2 shows the time evolution of the different colored particles.
The quark and antiquark numbers are monotonically decreasing functions
of time. The diquarks are produced from the quarks and then
rapidly contribute to the formation of baryons, which keeps their number
always at a low level. In Fig.~3 the evolution of the hadron
numbers is shown. Here we did not take into account the decay
of the resonances; these time evolutions are actually
those of the constituent quark clusters, i.e.
primary hadronic resonances, from which the experimentally
observed particles emerge.
It is clear from these Figures that the mesons are formed
faster than the baryons. This is understandable if we consider that
the mesons are produced in a one-step process, while the baryons are
formed in a two-step process: first the formation of diquarks, and then
the formation of the baryon in a diquark--quark reaction.
In the ALCOR model the ratios of the hadronic species are determined
by the ratios of the steepnesses of these curves. Since these curves do not
cross each other, one can understand why the algebraic ALCOR
approach to the solution of the rate equations is a good approximation.
The difference that at the very beginning the meson
curves increase linearly with time while the baryon curves start
quadratically was taken into account in the ALCOR
model by one single common factor, the so-called baryon
suppression factor.
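The one-step versus two-step distinction above can be illustrated with a toy set of rate equations (a schematic sketch only; the reactions $q\bar{q}\to M$, $qq\to D$, $Dq\to B$ follow the text, but the common rate constant and its value are our own illustrative assumptions, not the transchemistry rates):

```python
# Toy rate equations: one-step meson formation vs. two-step baryon formation.
#   q + qbar -> M   (mesons, one step)
#   q + q    -> D   (diquarks, first step)
#   D + q    -> B   (baryons, second step)
# The rate constant k and the initial numbers are hypothetical illustration values.
def evolve(k=0.1, dt=1e-3, steps=100):
    q, qbar, D, M, B = 1.0, 1.0, 0.0, 0.0, 0.0
    history = []
    for _ in range(steps):
        m_rate = k * q * qbar      # meson formation rate
        d_rate = k * q * q         # diquark formation rate
        b_rate = k * D * q         # baryon formation rate
        q    += dt * (-m_rate - 2.0 * d_rate - b_rate)
        qbar += dt * (-m_rate)
        D    += dt * (d_rate - b_rate)
        M    += dt * m_rate
        B    += dt * b_rate
        history.append((M, B))
    return history

hist = evolve()
M1, B1 = hist[49]    # t = 0.05
M2, B2 = hist[99]    # t = 0.10
# Early times: the meson curve grows roughly linearly (ratio near 2),
# the baryon curve roughly quadratically (ratio near 4).
print(M2 / M1, B2 / B1)
```

Doubling the time roughly doubles the meson number but roughly quadruples the baryon number, which is the linear-versus-quadratic onset absorbed by the ALCOR baryon suppression factor.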
In our model we calculate the number of all produced hadrons.
In Table 1 the hadron numbers obtained with the Transchemistry
model and those obtained with the ALCOR \cite{SQM97}
and the RQMD \cite{RQMD} models are shown
together with the few published experimental data.
The present results of the ALCOR model are slightly different
from the previously published ones \cite{SQM97}, because here we
considered the feeding of $\Lambda^0$
(and $\overline{\Lambda^0}$) particle multiplicities from
$\Sigma^0$, $\Xi^-$, $\Xi^0$, $\Omega^-$
(and $\overline{\Sigma^0}$,
$\overline{\Xi}^+$, $\overline{\Xi}^0$,
$\overline{\Omega}^+$
respectively), as it was obtained in the experiments.
The fit parameters for this new case are:
$N_{q{\overline q}}= 391$, $N_{s{\overline s}}= 172$
and $\alpha_{eff}=0.97$, compared to the earlier
$N_{q{\overline q}}= 398$, $N_{s{\overline s}}= 175$
and $\alpha_{eff}=1.03$.
Table 2 shows a comparison
for the multi-strange baryon ratios. While in
many cases there are intriguing agreements, in some other cases
there are discrepancies.
These may originate from two sources: i) the
experimental data quoted here are production ratios within
the overlap window of the detector acceptances.
Thus, if the momentum distributions of the two particle sorts
are not the same,
then these ratios are not equal to the ratios of the
total numbers. ii) the calculated
values are more sensitive to the simplifying assumption that
the hadronization proceeds into
the lowest-lying baryon octet and decuplet and the two lowest-lying
meson nonets. We made this simplification because the
inclusion of other hadron multiplets
would have multiplied the workload in the computer simulation.
In our model there are no parameters
with which the different particle ratios could be manipulated
independently. The degree of overall agreement between our results
and the recent
experimental data seems promising and could thus motivate
the search for further signatures of the {\bf CQP} state.
\section{Conclusion}
\label{sec_con}
In this paper we presented a new model for the hadronization
of a constituent quark plasma ({\bf CQP}) based on rate equations in a
quark matter - hadron matter mixture.
The color confinement was taken into account by
using consistently a plausible equation of state
motivated by the string model. The role of the different
physical processes entering into the hadronization was discussed.
Our results presented in the Figures clearly show a very fast
hadronization.
This is mainly a consequence of the large hadronization
rates, $R_{ij \rightarrow k}$ and of the fact that in the initial state
the system is very far from equilibrium.
Observing the shape of the time evolution of the different
hadron multiplicities, it becomes understandable why
the simpler algebraic approximation, applied in the
ALCOR model, works so well. The comparison with the existing
experimental data indicates that it is possible that
in the Pb+Pb collision at SPS a piece of matter is formed, inside which
the massive quarks and anti-quarks interact with a string-like
mean field.
Finally we emphasize that this type of phenomenological
investigation
is necessary as long as the hadronization of the quark matter,
as a non-equilibrium, non-static,
non-perturbative process, cannot be described with
the methods of elementary field theory.
\insertplot{qmfig1u.eps}
\begin{center}
\begin{minipage}[t]{13.054cm}
{ {\bf Figure 1.}
The time evolution of the temperature $T$ (a), the
entropy $S$ (b),
the pressure $p$ (c) and the energy density $\varepsilon$ (d)
of the system together with the partial contributions of quarks and hadrons.}
\end{minipage}
\end{center}
\newpage
\insertplot{qmfig2u.eps}
\begin{center}
\begin{minipage}[t]{13.054cm}
{ {\bf Figure 2.}
The time evolution of the number of colored particles.
The line styles of different flavor compositions are
indicated in the respective figures. }
\end{minipage}
\end{center}
\newpage
\insertplot{qmfig3u.eps}
\begin{center}
\begin{minipage}[t]{13.054cm}
{ {\bf Figure 3.}
The time evolution of the number of color neutral
clusters consisting of
quarks and/or anti-quarks indicated on the figures.
}
\end{minipage}
\end{center}
\newpage
\begin{center}
\begin{tabular}{||c||c||c|c|c||}
\hline {\bf Pb+Pb} & { NA49}
& {TrCHEM.} & { ALCOR } & { RQMD } \\
\hline
\hline
$h^{-}$ & $680^a$
&677.0 &679.8 & \\
\hline
\hline
$\pi^+$ &
&581.7 &590.6 &692.9 \\
\hline
$\pi^0$ &
&617.0 &605.9 &724.9 \\
\hline
$\pi^-$ &
&613.1 &622.0 &728.8 \\
\hline
$K^+$ & $76^*$
&\ 79.58 &\ 78.06 &\ 79.0 \\
\hline
$K^0$ &
&\ 79.58 &\ 78.06 &\ 79.0 \\
\hline
${\overline K}^0$ &
&\ 39.47 &\ 34.66 &\ 50.4 \\
\hline
$K^-$ & $\{32\}^b $
&\ 39.47 &\ 34.66 &\ 50.4 \\
\hline
\hline
$p^+$ &
&158.7 &153.2 &199.7 \\
\hline
$n^0$ &
&175.8 &170.5 &217.6 \\
\hline
$\Sigma^+$ &
&\ \ 8.38 &\ \ 9.16 &\ 12.9 \\
\hline
$\Sigma^0$ &
&\ \ 9.79 &\ \ 9.76 &\ 13.1 \\
\hline
$\Sigma^-$ &
&\ \ 9.79 &\ 10.39 &\ 13.3 \\
\hline
$\Lambda^0$ &
&\ 46.79 &\ 48.85 & \ 35.3 \\
\hline
$\Xi^0$ &
&\ \ 4.40 &\ \ 4.89 & \ \ 4.2 \\
\hline
$\Xi^-$ &
&\ \ 4.43 &\ \ 4.93 & \ \ 4.2 \\
\hline
$\Omega^{-}$ &
&\ \ 0.42 &\ \ 0.62 & \\
\hline
\hline
${\overline p}^-$ &
&\ \ 8.98 &\ \ 6.24 &\ 27.9 \\
\hline
${\overline n}^0$ &
&\ \ 8.93 &\ \ 6.24 &\ 27.9 \\
\hline
${\overline \Sigma}^-$ &
&\ \ 1.00 &\ \ 0.91 &\ \ 4.6 \\
\hline
${\overline \Sigma}^0$ &
&\ \ 1.09 &\ \ 0.91 &\ \ 4.6 \\
\hline
${\overline \Sigma}^+$ &
&\ \ 0.99 &\ \ 0.91 &\ \ 4.6 \\
\hline
${\overline \Lambda}^0$ &
&\ \ 5.24 &\ \ 4.59 &\ 10.7 \\
\hline
${\overline \Xi}^0$ &
&\ \ 1.08 &\ \ 1.12 &\ \ 2.0 \\
\hline
${\overline \Xi}^+$ &
&\ \ 1.08 &\ \ 1.12 &\ \ 2.0 \\
\hline
${\overline \Omega}^{+}$ &
&\ \ 0.22 &\ \ 0.35 & \\
\hline
\hline
$K^0_{S}$ &$ \{54\}^{b,c}$
&\ 59.66 &\ 56.36 & \ 63.5 \\
\hline
$p^+-{\overline p}^-$ &$\{145\}^a$
&149.7 &147.0 &171.8 \\
\hline
$\Lambda^0$-like &$\{50\pm 10\}^b$
&\ 56.58 &\ 69.07 & 56.8 \\
\hline
${\overline \Lambda}^0$-like &$\{8\pm 1.5\}^b$
&\ 6.34 &\ 8.12 & 19.3 \\
\hline
\hline
\end{tabular}
\end{center}
\vskip 0.5cm
\noindent {\bf Table 1:}
Total hadron multiplicities for $Pb+Pb$ collision
at 158 GeV/nucleon bombarding energy. The displayed
experimental results are from the
NA49 Collaboration: estimated result ${}^a$ is from \cite{NA49QM96};
${}^b$ is from \cite{NA49S97}; ${}^c$ is from
\cite{NA49S96}; ${}^*$ is estimated from $\{K^-\}$ and
$\{K^0_S\}$. Theoretical results are from
the Transchemistry, ALCOR
and RQMD \cite{RQMD}
("ropes + no rescattering" version) models. Here
$\Lambda^0-{\rm like} \equiv\Lambda^0+\Sigma^0+\Xi^-+\Xi^0+\Omega^-$.
\newpage
\begin{center}
\begin{tabular}{||c||c|c|c||}
\hline {\bf Pb+Pb} & {\bf WA97} & {\bf TrCHEM} & {\bf ALCOR}
\\
\hline
\hline
${\overline {\Lambda}}^0/\Lambda^0$
&$0.128 \pm 0.012$ &0.112&0.117 \\
\hline
${\overline {\Xi}}^+/\Xi^-$
&$0.266 \pm 0.028$ &0.243&0.227 \\
\hline
${\overline {\Omega}}^+/\Omega^-$
&$0.46 \pm 0.15$ &0.529&0.564 \\
\hline
$\Xi^-/\Lambda^0$
&$0.093 \pm 0.007$ &0.078&0.071 \\
\hline
${\overline {\Xi}}^+/{\overline {\Lambda}}^0$
&$0.195 \pm 0.023$ &0.170&0.138 \\
\hline
$\Omega^-/\Xi^-$
&$0.195 \pm 0.028$ &0.095&0.125 \\
\hline
\hline
\end{tabular}
\end{center}
\vskip 0.8cm
\noindent {\bf Table 2:}
Strange baryon and anti-baryon ratios measured by
WA97 Collaboration \cite{WA97QM97}
and obtained from Transchemistry and ALCOR model
for $Pb+Pb$ collisions at 158 GeV/nucleon bombarding energy.
The experimental data are the production ratios
at mid-rapidity for $p_T>0$ GeV.
\section*{Acknowledgments}
Stimulating discussions with J. Knoll, A. A. Shanenko and V. Toneev are acknowledged.
This work was supported by the Hungarian Science Fund grants T024094 and
T019700, by the US-Hungarian Science and Technology Joint Fund No. 652/1998,
and by a common project of the Deutsche Forschungsgemeinschaft
and the Hungarian Academy of Sciences, DFG-MTA 101/1998.
\section*{APPENDIX A: \\
THE RELEVANT HYDRODYNAMICAL AND \\ THERMODYNAMICAL EXPRESSIONS}
\label{sec_hydro}
The familiar energy-momentum conservation
\begin{equation}
\partial_{\nu}T^{\mu\nu}=0,
\end{equation}
with
\begin{equation}
T^{\mu\nu} = (\varepsilon + p) u^{\mu} u^{\nu} - p g^{\mu\nu},
\end{equation}
being the energy-momentum tensor of a perfect fluid with flow
four-velocity $u_{\mu}$, local energy density $\varepsilon$ and
pressure $p$, has two interesting projections: one is parallel
to the four-velocity,
\begin{equation}
u_{\mu}\partial_{\nu}T^{\mu\nu} =
\partial_{\nu}(\varepsilon u^{\nu}) + p (\partial_{\nu}u^{\nu}) = 0,
\label{ENERG-FLOW}
\end{equation}
and the other is orthogonal to this,
\begin{equation}
\left( u^{\mu}(u^{\nu}\partial_{\nu}) - \partial^{\mu}\right) p
+ (\varepsilon + p) (u^{\nu}\partial_{\nu}) u^{\mu} = 0.
\label{PRESS-FLOW}
\end{equation}
The former -- expressing local energy conservation --
can be cast into the suggestive form
\begin{equation}
dE+pdV=0
\label{COOL}
\end{equation}
using the infinitesimal volume element, $dV=(\partial_{\mu} u^{\mu})Vd\tau$,
the total internal energy, $E=\varepsilon V$,
and the derivative with respect to the proper-time variable
in the co-moving fluid cell,
\begin{equation}
\frac{d}{d\tau} = u^{\nu}\partial_{\nu} \ .
\end{equation}
Starting from the first law of thermodynamics
\begin{equation}
dE = TdS - pdV + \sum_i \mu_i dN_i,
\label{PRINC}
\end{equation}
and using eq.(\ref{COOL}) we arrive at the following rate of change
of the total entropy:
\begin{equation}
\dot{S} = - \sum_i \frac{\mu_i}{T} \dot{N}_i,
\end{equation}
where the `dot' stands for the proper-time derivative ($\dot{S}=dS/d\tau$).
The chemical potentials $\mu_i$ can generally be derived from
the equation of state.
We consider a flow pattern given by the Bjorken-flow
\begin{equation}
u_{\mu} = \left( \frac{t}{\tau}, \frac{z}{\tau}, 0, 0 \right).
\label{FLOW-ANSATZ}
\end{equation}
Here
\begin{equation}
\tau^2 = t^2 - z^2,
\end{equation}
and the four-velocity is normalized to $u_{\mu}u^{\mu}=1$ and its
four-divergence is
\begin{equation}
\partial_{\mu} u^{\mu} = \frac{1}{\tau} \ .
\end{equation}
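The normalization $u_\mu u^\mu = 1$ and the four-divergence $\partial_\mu u^\mu = 1/\tau$ of the Bjorken flow can be checked symbolically; a minimal sketch using sympy (with the mostly-minus metric implicit in taking $u_\mu u^\mu = (u^0)^2 - (u^3)^2$):

```python
import sympy as sp

t, z = sp.symbols('t z', positive=True)
tau = sp.sqrt(t**2 - z**2)

# Bjorken flow four-velocity, eq. (FLOW-ANSATZ): time and longitudinal components
u0 = t / tau
u3 = z / tau

# normalization u_mu u^mu = (u^0)^2 - (u^3)^2
norm = sp.simplify(u0**2 - u3**2)

# four-divergence d_mu u^mu = d(u^0)/dt + d(u^3)/dz
div = sp.simplify(sp.diff(u0, t) + sp.diff(u3, z))

print(norm)                        # normalization of the flow
print(sp.simplify(div - 1 / tau))  # vanishes when div = 1/tau
```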
The energy density $\varepsilon$ and the pressure $p$ depend only on
$\tau$, and the flow is stationary.
Corrections stemming from a mild radial flow can also be worked out,
but we do not use these corrections here.
The effective dimensionality of the flow is $1$ in this case.
Since we deal with a system of rate equations for particle
species numbers $N_i$ occupying a common reaction volume $V$
at temperature $T$, the generating thermodynamical functional
is the free energy, $F(V,T,N_i)$. The chemical potentials are
given by
\begin{equation}
\mu_i = \frac{\partial F}{\partial N_i},
\end{equation}
the pressure by
\begin{equation}
p = - \frac{\partial F}{\partial V},
\end{equation}
the entropy by
\begin{equation}
S = - \frac{\partial F}{\partial T},
\end{equation}
and finally the internal energy is
\begin{equation}
E = F + TS.
\end{equation}
As a starting point we consider a mixture of ideal gases of massive
quarks, diquarks, mesons and baryons, and their respective anti-particles.
The corresponding free energy is
\begin{equation}
F_{{\rm id}} = \sum_i T N_i \left( \ln \frac{N_i}{N_{i}^{\rm th}} - 1 \right)
\, + \, \sum_i m_iN_i,
\end{equation}
with Maxwell-Boltzmann statistics for the non-relativistic
massive matter:
\begin{equation}
N_i^{{\rm th}} = V d_i \int \! \frac{d^3p}{(2\pi)^3} \,
e^{-p^2/(2m_iT)}.
\label{MaxBolt}
\end{equation}
Here $d_i=(2s_i+1)c_i$ are spin and color degeneracy factors.
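Carrying out the Gaussian momentum integral in eq.(\ref{MaxBolt}) gives the familiar closed form $N_i^{\rm th} = V d_i \left( m_i T / 2\pi \right)^{3/2}$, which can be verified symbolically (a sketch using sympy; the factor $V d_i$ is dropped):

```python
import sympy as sp

p, m, T = sp.symbols('p m T', positive=True)

# thermal number per unit (V * d_i): the momentum-space Gaussian integral
integral = sp.integrate(
    4 * sp.pi * p**2 * sp.exp(-p**2 / (2 * m * T)), (p, 0, sp.oo)
) / (2 * sp.pi)**3

# expected closed form (m T / 2 pi)^(3/2)
closed_form = (m * T / (2 * sp.pi))**sp.Rational(3, 2)

print(sp.simplify(integral - closed_form))  # vanishes when the closed form holds
```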
The chemical potentials in an ideal gas mixture are
\begin{equation}
\mu_{i,{\rm id}} = T \ln \frac{N_i}{N_{i}^{\rm th}} + m_i.
\end{equation}
The total entropy is given by
\begin{equation}
S_{{\rm id}} = \sum_i N_i \left( \frac{5}{2} -
\ln \frac{N_i}{N_{i}^{\rm th}} \right).
\label{ENTROPY}
\end{equation}
The energy and pressure of such an ideal, non-relativistic mixture
are given by
\begin{eqnarray}
E_{{\rm id}} & = & \sum_i \left( m_i + \frac{3}{2}T \right) N_i,
\nonumber \\
p_{{\rm id}} & = & \sum_i N_i \frac{T}{V}.
\label{PRESSURE}
\end{eqnarray}
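For a single species, the identities $S=-\partial F/\partial T$, $p=-\partial F/\partial V$ and $E=F+TS$ indeed reproduce eqs.(\ref{ENTROPY}) and (\ref{PRESSURE}); a symbolic check (sympy sketch, sum over species dropped):

```python
import sympy as sp

T, V, N, m, d = sp.symbols('T V N m d', positive=True)

# thermal occupation of one species: the closed form of eq. (MaxBolt)
N_th = V * d * (m * T / (2 * sp.pi))**sp.Rational(3, 2)

# ideal-gas free energy of a single species (one term of F_id)
F = T * N * (sp.log(N / N_th) - 1) + m * N

S = sp.simplify(-sp.diff(F, T))   # entropy
p = sp.simplify(-sp.diff(F, V))   # pressure
E = sp.simplify(F + T * S)        # internal energy

print(sp.simplify(S - N * (sp.Rational(5, 2) - sp.log(N / N_th))))  # entropy check
print(sp.simplify(p - N * T / V))                                   # pressure check
print(sp.simplify(E - (m + sp.Rational(3, 2) * T) * N))             # energy check
```

All three differences vanish, confirming the expressions in the text.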
\section{Introduction}
Quantum error correction is one of the fundamental requirements for practical large-scale quantum computation \cite{terhal2015quantum,gottesman1997stabilizer,preskill1998reliable}. The main idea of quantum error correction is encoding quantum information in a large quantum system. One effective way is storing redundant information in qubit-based quantum systems where logical qubits are encoded by a large number of physical qubits. Many experiments have demonstrated quantum error correction with multiple qubits in various platforms such as superconducting \cite{reed2012realization,
kelly2015state,corcoles2015demonstration,takita2017experimental,
andersen2020repeated,gong2019experimental,
ai2021exponential}, ion traps \cite{nigg2014quantum,
egan2020fault,ryan2021realization} and nuclear magnetic resonance (NMR) \cite{cory1998experimental,
moussa2011demonstration} systems. However, scaling up the number of qubits in these systems is extremely challenging \cite{corcoles2019challenges}.
A popular alternative way for quantum error correction is the bosonic error correction code \cite{albert2018performance,cai2021bosonic}. In the bosonic architecture, the logical qubits are encoded in a continuous-variable system, i.e., bosonic modes. Bosonic modes provide infinite-dimensional Hilbert space for quantum information encoding. The representative bosonic codes based on a single bosonic mode include the cat code \cite{ofek2016extending}, binomial code \cite{hu2019quantum} and Gottesman-Kitaev-Preskill (GKP) code \cite{gottesman2001encoding,grimsmo2021quantum}.
Recently, with the development of quantum hardware technology, the GKP code has aroused extensive attention to realize practical bosonic error correction \cite{campagne2020quantum}. There are three main options for the GKP code error correction, the Steane type scheme \cite{steane1996error,gottesman2001encoding,terhal2016encoding}, the Knill-Glancy type scheme \cite{glancy2006error,wan2020memory} and the teleportation-based scheme \cite{walshe2020continuous,noh2021low}, in which the Steane type error correction scheme is most widely mentioned. However, when the ancilla qubits are noisy, the conventional Steane scheme does not provide the optimal solution for the error correction. Consider an example where the data qubit and the ancilla qubit have the error shift $u_1$ and $u_2$ respectively. Suppose the variances of $u_1$ and $u_2$ are equal, the conventional Steane scheme applies the correction $q_{cor}=(u_1+u_2)\,{\rm mod} \sqrt{\pi}$ in data qubits. A simple analysis shows that replacing $u_1$ with $-u_2$ contributes nothing to the error correction, as $u_1$ and $u_2$ have equal variances. Actually, as noted in Section \ref{pre}, the Steane type error correction scheme with a maximum-likelihood estimation (ME-Steane scheme) \cite{fukui2019high} gives a better shift correction $q^{\scriptscriptstyle (ME)}_{cor}=\frac{1}{2}[(u_1+u_2)\,{\rm mod} \sqrt{\pi}]$ in this special case.
Since the GKP code error correction schemes are only used to correct small shift errors in the position and momentum quadratures of an oscillator, it is required to concatenate the GKP code with a stabilizer code to protect against larger errors \cite{menicucci2014fault}. For instance, the concatenation of GKP code with the toric code \cite{vuillot2019quantum} or surface code \cite{yamasaki2020polylog,noh2020fault,noh2021low} has been proposed, some of which is analyzed thoroughly under the circuit-level error model \cite{
noh2020fault,noh2021low}.
This paper is intended to study the GKP code concatenated with another topological stabilizer code -- color code \cite{bombin2006topological,bombin2007exact}. It is well-known that the logical Clifford group \cite{gottesman1998heisenberg} can be implemented transversally in color code \cite{fowler2011two} which is deemed to be a merit in the competition with the surface code. However, decoding the color code is generally considered much harder than the toric code or surface code since it
requires hypergraph matching, for which efficient algorithms are
unknown \cite{wang2009graphical}. Several efficient decoders achieve a threshold for the 2D color code of around $8\%$ \cite{sarvepalli2012efficient,delfosse2014decoding}, clearly below the threshold of the 2D toric code of over $10\%$ \cite{dennis2002topological}. Fortunately, the Restriction Decoder shows good performance for the 8,8,4 color code (i.e., the 2D color code on the square-octagon lattice), with a threshold of $10.2\%$ \cite{kubica2019efficient}.
This gives us a promising prospect for decoding the color-GKP code with an efficient algorithm.
In this paper, we first decode the color-GKP code under perfect measurements by using the Restriction Decoder \cite{kubica2019efficient,chamberland2020triangular} with the continuous-variable GKP information, and find the threshold improved from $\bar{p}\approx 10.2\%$ to $13.3\%$ $(\sigma \approx 0.59)$. Secondly, to decode the color code with noisy measurements, the generalized Restriction Decoder is introduced. More specifically, the minimum-weight perfect matching (MWPM) algorithm is applied in the 3D space-time graph. When data qubits and stabilizer measurements are noisy but GKP error correction is perfect, the threshold reaches $\sigma\approx 0.46$, and when GKP error correction is also noisy, the threshold reaches $\sigma\approx 0.24$. These results show the performance of the color-GKP code is on par with the toric-GKP code under the same error models. We attribute this to the good performance of the generalized Restriction Decoder. To support this conclusion, the generalized Restriction Decoder is used to decode the normal 2D color code, and shows a threshold of $3.1\%$ under the phenomenological error model. This threshold compares quite well with the results given by less efficient decoders \cite{landahl2011fault}.
The rest of the paper is organized as follows. Section \ref{s2} starts with reviewing some basic aspects of GKP error correction code and introduces color-GKP code on the square-octagon lattice. Section \ref{pre} discusses ME-Steane type GKP error correction scheme and compares it with the conventional scheme. Section \ref{s3} states two error models and the decoding strategies in detail. The numerical simulation results of decoding color-GKP code are presented in Section \ref{s4} where we compare our results with previous work. Lastly, the conclusion of this paper and the outlook for future work are described in Section \ref{s5}.
\section{The color-GKP code}\label{s2}
In this section, we introduce the color-GKP code, i.e., the single-mode GKP code concatenated with the color code. In the first layer of concatenation, the GKP code encodes qubits into the Hilbert space of harmonic oscillators. In the second layer, the standard square GKP qubits are used as the data qubits of the 2D color code. Our work only considers the color-GKP code on the square-octagon lattice, since the 8,8,4 color code has a high threshold using an efficient decoding algorithm \cite{kubica2019efficient}.
\subsection{The GKP error correction code}
The GKP error correction code was first proposed by Gottesman, Kitaev, and Preskill in 2001 \cite{gottesman2001encoding}; it encodes a qubit into a harmonic oscillator. For a harmonic oscillator, the position and momentum operators are defined as:
\begin{equation}
\hat{q}=\frac{1}{\sqrt{2}}(\hat{a}+\hat{a}^\dag),\quad
\hat{p}=\frac{i}{\sqrt{2}}(\hat{a}-\hat{a}^\dag),
\end{equation}
where $\hat{a}$ and $\hat{a}^\dag$ are annihilation and creation operators satisfying $[\hat{a},\hat{a}^\dag]=1$. The code space is stabilized by two commuting stabilizer operators
\begin{equation}
\hat{S_p}=e^{-i2\sqrt{\pi}\hat{p}},\quad
\hat{S_q}= e^{i2\sqrt{\pi}\hat{q}},
\end{equation}
which can be regarded as $2\sqrt{\pi}$ displacement in the $\hat{q}$ and $\hat{p}$ quadratures respectively. Then the logical Pauli operators are:
\begin{equation}
\bar{X}=e^{-i\sqrt{\pi}\hat{p}},\quad
\bar{Z}= e^{i\sqrt{\pi}\hat{q}}.
\end{equation}
One can easily check that $\bar{X}$ (or $\bar{Z}$) commutes with $\hat{S_p}$ and $\hat{S_q}$, and satisfies $\bar{X}^2=\hat{S_q}$ and $\bar{Z}^2=\hat{S_p}$. Ideally, the logical states can be written as
\begin{equation}\label{eq4}
\begin{aligned}
&|\bar{0}\rangle \propto
\sum_{n\in \mathbb{Z}}\delta(q-2n\sqrt{\pi}) |{q}\rangle
=\sum_{n\in \mathbb{Z}} |{q}=2n\sqrt{\pi} \rangle,\\
&|\bar{1}\rangle \propto
\sum_{n\in \mathbb{Z}}\delta(q-(2n+1)\sqrt{\pi}) |{q}\rangle
=\sum_{n\in \mathbb{Z}} |{q}=(2n+1)\sqrt{\pi} \rangle .
\end{aligned}
\end{equation}
The most relevant quantum gates to the quantum error correction are Clifford gates, the generators of which are Hadamard gate $H$, phase gate $S$, and control-not gate ${\rm{CNOT}}_{ij}$ in control qubit $i$ and target qubit $j$. The Clifford gates of the GKP code can be performed by interactions that are at most quadratic in the creation and annihilation operators \cite{grimsmo2021quantum}, therefore $H$, $S$ and $\mathrm{CNOT}_{ij}$ have the following forms:
\begin{equation}
\begin{aligned}
&H=e^{i\pi\hat{a}^\dag\hat{a}/2},\quad S=e^{i\hat{q}^2/2},\\
&\mathrm{CNOT}_{ij} =e^{-i\hat{q_i}\hat{p}_j}.
\end{aligned}
\end{equation}
However, the logical states defined in Eq.$\,$(\ref{eq4}) are not physical, since these ideal states are not normalizable and would require infinite squeezing energy. In a realistic situation, the $\delta$ functions in Eq.$\,$(\ref{eq4}) are replaced by finitely squeezed Gaussian states weighted by a Gaussian envelope. Therefore the noisy GKP states become
\cite{wang2019quantum}:
\begin{equation}
\begin{aligned}
&|\tilde{0}\rangle \propto \sum_{n\in \mathbb{Z}} \int_{-\infty}^\infty
e^{-\frac{\Delta^2}{2}(2n)^2\pi}e^{{-\frac{1}{2\Delta^2}({q}-2n\sqrt{\pi})^2}}
|{q}\rangle dq,\\
&|\tilde{1}\rangle \propto \sum_{n\in \mathbb{Z}} \int_{-\infty}^\infty
e^{-\frac{\Delta^2}{2}(2n+1)^2\pi}e^{{-\frac{1}{2\Delta^2}({q}-(2n+1)\sqrt{\pi})^2}}
|{q}\rangle dq.
\end{aligned}
\end{equation}
An arbitrary noisy GKP state $|\tilde{\psi}\rangle=\alpha|\tilde{0}\rangle+\beta|\tilde{1}\rangle$ can be regarded as the ideal GKP state
$|\bar{\psi}\rangle$ suffering the position and momentum shift errors with the Gaussian distribution:
\begin{equation}\label{eq7}
\begin{aligned}
|\tilde{\psi}\rangle\propto\iint e^{-\frac{u^2+v^2}{2\Delta^2}} e^{-iu\hat{p}}e^{iv\hat{q}} |\bar{\psi}\rangle du dv.
\end{aligned}
\end{equation}
The proof of Eq.$\,$(\ref{eq7}) is shown in Appendix \ref{aa}.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{fig1.eps}
\caption{\justify Quantum circuit of the Steane type error correction. The Steane type error correction scheme can correct small shift errors in the $\hat{q}$ and $\hat{p}$ quadratures and requires two high-quality ancilla GKP qubits. Here we suppose $\bar{\ket{+}}$ and $\bar{\ket{0}}$ are the ideal GKP states and $\tilde{\ket{\psi}}$ is the noisy GKP state. The correction operator $U_{cor}=\exp(iq_{cor}\hat{p})\exp(ip_{cor}\hat{q})$, where $q_{cor}=$ $q_{out}\,{\rm mod}\sqrt{\pi}$ and $p_{cor}=p_{out}\,{\rm mod}\sqrt{\pi}$.}\label{fig1}
\end{figure}
By utilizing Pauli twirling approximation \cite{katabarwa2017dynamical}, these errors can be viewed as coming from a Gaussian shift error channel $ \mathcal{N}$ with variance $\sigma^2$ acting on the ideal GKP state:
\begin{equation}\label{ee8}
\mathcal{N}(\rho)\equiv \iint P_\sigma (u) P_\sigma (v) e^{-iu\hat{p}} e^{iv\hat{q}} \rho e^{-iv\hat{q}} e^{iu\hat{p}}du dv,
\end{equation}
where $P_\sigma (x)=\frac{1}{\sqrt{2\pi \sigma^2}}e^{-\frac{x^2}{2\sigma^2}}$ is Gaussian distribution function with variance $\sigma^2$.
Note that this error channel is the result after using the Pauli twirling approximation, where the coherent displacement errors are replaced by the incoherent mixture of displacement errors. One is unable to distinguish between the pure state $|\tilde{\psi}\rangle$ and the mixed state $\mathcal{N}(|\bar{\psi}\rangle\langle\bar{\psi}|)$ by only measuring $\hat{q}$ or $\hat{p}$.
The incoherent mixture of displacement errors will simplify our analysis but is noisier than the coherent displacement errors. The Pauli twirling approximation also increases the average photon number
of a GKP state.
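The action of the twirled channel in Eq.$\,$(\ref{ee8}) on one quadrature can be sampled directly: draw $u\sim P_\sigma$, and, assuming ideal error correction, a logical $\bar{X}$ error occurs when $u$ lies closer to an odd multiple of $\sqrt{\pi}$ than to an even one. A small Monte-Carlo sketch (our own illustration; the function names are not from the text):

```python
import math, random

SQRT_PI = math.sqrt(math.pi)

def logical_x_rate_mc(sigma, n=200000, seed=1):
    """Monte-Carlo estimate: a q-shift u flips the logical qubit when the
    nearest integer multiple of sqrt(pi) is odd."""
    random.seed(seed)
    flips = sum(round(random.gauss(0.0, sigma) / SQRT_PI) % 2 for _ in range(n))
    return flips / n

def logical_x_rate_exact(sigma, kmax=50):
    """Gaussian weight of the 'odd' decision intervals ((k - 1/2, k + 1/2) * sqrt(pi))."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return sum(
        Phi((k + 0.5) * SQRT_PI / sigma) - Phi((k - 0.5) * SQRT_PI / sigma)
        for k in range(-kmax, kmax + 1) if k % 2
    )

print(logical_x_rate_mc(0.5), logical_x_rate_exact(0.5))
```

The sampled rate agrees with the analytic Gaussian-interval sum, illustrating why the twirled (incoherent) channel is convenient for numerics.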
To correct small Gaussian shift errors in the $\hat{p}$ and $\hat{q}$ quadratures, the Steane type error correction scheme is required. Fig.$\,${\ref{fig1}} shows the quantum circuit of the Steane type error correction, which is composed of CNOT gates, homodyne measurements and ideal GKP ancilla qubits. Many works under the circuit-based error model consider inverse-CNOT gates \cite{vuillot2019quantum,noh2020fault,noh2021low}. Nevertheless, the circuit in Fig.$\,${\ref{fig1}} does not involve inverse-CNOT gates, since the CNOT gate and the inverse-CNOT gate are equivalent in our following discussion. In Section \ref{pre}, we will discuss the complete process of the Steane type error correction.
\subsection{The GKP code concatenated with color code}
Color code is a CSS stabilizer code constructed in a three-colorable trivalent graph \cite{bombin2013introduction}. Our work considers color code on the square-octagon lattice, i.e., the 8,8,4 color code \cite{fowler2011two}.
As shown in Fig.$\,$\ref{fig2a}, the data qubits lie on the vertices in the graph, and each face $f$ corresponds to two stabilizers $X_f$ and $Z_f$, which are the tensor products of $X$ and $Z$ operators of each qubit incident on face $f$, respectively.
\begin{figure}
\centering
\subfigure[]{\label{fig2a}
\includegraphics[width=3.7cm]{fig2a.eps}}
\hspace{0.1in}
\subfigure[]{\label{fig2b}
\includegraphics[width=3.7cm]{fig2b.eps}}
\caption{\justify (Color online) The 8,8,4 color code and its dual lattice. (a) The qubit layout of the 8,8,4 color code. Black dots indicate the data qubits and red, blue, green faces indicate syndrome qubits. The boundary faces are not complete since the whole graph embeds on the torus.
(b) The dual lattice of the color code on a torus with the parameters $\llbracket{72,4,6}\rrbracket$. In the dual lattice, the triangular faces are data qubits and the vertices are syndrome qubits. We also label a logical error of the color code which is the $X$ (or $Z$) tensor product of the shaded (yellow) qubits.}\label{f2}
\end{figure}
The following discussion focuses on color code in the square-octagon lattice embedded on a torus with the parameters $\llbracket 2d^2,4,d \rrbracket$. Here $d$ is the code distance and logical qubit number 4 comes from the redundancy of the stabilizers:
\begin{equation}
\begin{aligned}
&\prod_{f \in red}X_f=\prod_{f \in blue}X_f=\prod_{f \in green}X_f ,\\
&\prod_{f \in red}Z_f=\prod_{f \in blue}Z_f=\prod_{f \in green}Z_f.
\end{aligned}
\end{equation}
For decoding purposes, it is beneficial to transform the color code to the dual lattice \cite{sarvepalli2012efficient} in Fig.$\,$\ref{fig2b}. The three-color vertices represent
the stabilizers (or syndrome qubits) and the triangular faces represent data qubits. Each red vertex connects with 4 faces and each blue or green vertex connects with 8 faces, which corresponds to the Pauli weight of the stabilizer. If the stabilizer checks are accurate, a Pauli error on a single data qubit flips the three connecting syndrome qubits. Fig.$\,$\ref{fig2b} also shows a logical error of the 8,8,4 color code which cannot be detected by stabilizer checks.
Now let us consider the GKP code concatenated with the color code. The two-dimensional layout of the color-GKP code follows the layout of the color code in Fig.$\,$\ref{fig2a}. Each GKP data qubit connects with ancilla GKP qubits for the GKP error correction in the first layer. And in the second layer, these data qubits connect with syndrome qubits to implement the stabilizer checks of the color code. The circuit for the stabilizer checks is presented in Fig.$\,$\ref{fig3}. Note that the $\pm$ sign of a stabilizer check depends on the $\hat{q}$ or $\hat{p}$ measurement result, which is a continuous variable. If $q_{out}$ is closer to an even multiple of $\sqrt{\pi}$, in other words, $|q_{out}\,{\rm mod}\, 2\sqrt{\pi}| <\frac{\sqrt{\pi}}{2}$, the stabilizer check is $+1$; otherwise it is $-1$. In this paper, the range of $a\,{\rm mod}\, b$ is from $-\frac{b}{2}$ to $\frac{b}{2}$.
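The centered-mod convention and the sign of a stabilizer check can be written compactly; a small sketch (the function names are our own):

```python
import math

SQRT_PI = math.sqrt(math.pi)

def centered_mod(a, b):
    """a mod b, mapped into the interval [-b/2, b/2)."""
    return (a + b / 2.0) % b - b / 2.0

def stabilizer_sign(q_out):
    """+1 if q_out is closer to an even multiple of sqrt(pi), else -1."""
    return +1 if abs(centered_mod(q_out, 2.0 * SQRT_PI)) < SQRT_PI / 2.0 else -1

# With ideal GKP states, q_out is an integer multiple of sqrt(pi):
print(stabilizer_sign(4 * SQRT_PI))   # even multiple -> +1
print(stabilizer_sign(3 * SQRT_PI))   # odd multiple  -> -1
```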
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{fig3.eps}
\caption{\justify The stabilizer check of the color-GKP code. The top is the stabilizer check of $Z_1Z_2Z_3Z_4$, and the bottom is the stabilizer check of $X_1X_2...X_7X_8$. The result of the stabilizer check depends on the measurement of the ancilla qubit. If the measurement result $q_{out}$ satisfies $|q_{out}\,{\rm mod} 2\sqrt{\pi}| <\frac{\sqrt{\pi}}{2}$, the stabilizer check is $+1$; otherwise it is -1. Note that when all the GKP states are ideal, the measurement result $q_{out}$ (or $p_{out}$) can only be the integer multiples of $\sqrt{\pi}$.}\label{fig3}
\end{figure}
\section{ME-Steane type GKP error correction scheme}\label{pre}
Before introducing the ME-Steane scheme \cite{fukui2019high}, we first briefly review the conventional Steane error correction scheme. Our discussion is based on the result after the Pauli twirling approximation, which means the noisy GKP state is regarded as a mixed state:
\begin{equation}\label{ee10}
\int du \int dv P_\sigma (u) P_\sigma (v) e^{-iu\hat{p}} e^{iv\hat{q}} |\bar{\psi}\rangle\langle\bar{\psi}| e^{-iv\hat{q}} e^{iu\hat{p}}.
\end{equation}
For the sake of simplicity, let us concentrate on the error correction in the $\hat{q}$ quadrature, which needs only the part of the quantum circuit in Fig.$\,$\ref{fig1} involving one ancilla qubit, one CNOT gate and the $\hat{q}$ measurement. Also, all components of the circuit are noiseless except the initial states.
Assume that the input state is
\begin{equation}
e^{-iu_1p_1} e^{-iu_2p_2}|\bar{\psi} \rangle|\bar{+} \rangle,
\end{equation}
where the probabilities of $u_1$ and $u_2$ obey the Gaussian distribution functions $P_{\sigma_1}(u_1)=\frac{1}{\sqrt{2\pi \sigma_1^2}}\exp(-\frac{u_1^2}{2\sigma_1^2})$ and $ P_{\sigma_2}(u_2)=\frac{1}{\sqrt{2\pi \sigma_2^2}}\exp(-\frac{u_2^2}{2\sigma_2^2})$, respectively. After the CNOT gate, the output state is
\begin{equation}
\begin{aligned}
&e^{-iu_1p_1} e^{-i(u_1+u_2)p_2}|\bar{\psi} \rangle|\bar{+} \rangle\\
=&\frac{1}{\sqrt{N}}\sum_{n\in \mathbb{Z}} e^{-iu_1p_1} |\bar{\psi} \rangle|q=n\sqrt{\pi } +u_1+u_2\rangle.
\end{aligned}
\end{equation}
Here $N$ is the normalization constant and the measurement result of the ancilla qubit is $q_{out}= n\sqrt{\pi} +u_1+u_2$. The conventional Steane error correction scheme applies the correction $q_{cor}= q_{out}\,{\rm mod}\, \sqrt{\pi}$. Under the premise that $\sigma_2$ is much smaller than $\sigma_1$, the shift error $u_2$ can be ignored, and the shift error $u_1$ is then corrected successfully under the condition:
\begin{equation}
\begin{aligned}
q_{cor}=u_1+2k\sqrt{\pi},
\end{aligned}
\end{equation}
for some integer $k$. As a result, the conditional error probability for the given $q_{out}$ is
\begin{equation}\label{e9}
\begin{aligned}
p(\bar{X}|q_{out})=&1-\frac{\sum_{k}P_{\sigma_1}(q_{out}-2k\sqrt{\pi})}
{\sum_{k}P_{\sigma_1}(q_{out}-k\sqrt{\pi})}.
\end{aligned}
\end{equation}
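Eq.$\,$(\ref{e9}) can be evaluated numerically by truncating the sums over $k$; a minimal sketch (the truncation range \texttt{kmax} is our choice):

```python
import math

SQRT_PI = math.sqrt(math.pi)

def gaussian(x, sigma):
    """Gaussian density P_sigma(x) with zero mean and standard deviation sigma."""
    return math.exp(-x**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

def p_xbar_given_qout(q_out, sigma, kmax=20):
    """Conditional logical-X error probability, truncating the sums over k."""
    num = sum(gaussian(q_out - 2 * k * SQRT_PI, sigma) for k in range(-kmax, kmax + 1))
    den = sum(gaussian(q_out - k * SQRT_PI, sigma) for k in range(-kmax, kmax + 1))
    return 1.0 - num / den
```

A measurement near an even multiple of $\sqrt{\pi}$ yields a probability near zero, while one near an odd multiple yields a probability near one.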
However, when $\sigma_2$ is comparable to $\sigma_1$, the effect of the error shift $u_2$ cannot be ignored. Hence the conventional Steane scheme is not always satisfactory, because it leaves a residual error shift $-u_2$. To find the best choice of the correction $q_{cor}$, the ME-Steane scheme considers the distribution of the error shift $u_1$ conditioned on the measurement result $q_{out}$ as follows:
\begin{equation}
\begin{aligned}
f(u_1,q_{out})
=\frac{P_{\sigma_1}(u_1) \sum_k P_{\sigma_2}(q_{out}-k\sqrt{\pi}-u_1)}
{\sum_k P_{\sigma_{12}}(q_{out}-k\sqrt{\pi}) },
\end{aligned}
\end{equation}
where $\sigma_{12}=\sqrt{\sigma_1^2+\sigma_2^2}$ is the standard deviation of
the variable $w=u_1+u_2$. Note that $f(u_1,q_{out})$ can be decomposed into Gaussian functions with different weights:
\begin{equation}
\begin{aligned}
f(u_1,q_{out})=\frac{\sum_k
w_k
\exp[-\frac{(u_1-\eta(q_{out}-k\sqrt{\pi}))^2}{2\sigma^2}]}
{2\pi\sigma_1\sigma_2\sum_k P_{\sigma_{12}}( q_{out}-k\sqrt{\pi})},
\end{aligned}
\end{equation}
where $\eta=\frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}$, $\sigma^2=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}$ and $w_k=\exp[-\frac{(\eta-\eta^2)(q_{out}-k\sqrt{\pi})^2}{2\sigma^2}]$. One can easily find that these Gaussian functions peak at $u_1=\eta (q_{out}-k\sqrt{\pi})$ and that the maximum of $f(u_1,q_{out})$ lies at $u_1=\eta\, (q_{out}\,{\rm mod}\,\sqrt{\pi})$. Therefore the maximum-likelihood estimate of the shift error $u_1$ is $q^{\scriptscriptstyle (ME)}_{cor}=\eta\, (q_{out}\,{\rm mod}\,\sqrt{\pi})$ \cite{fukui2019high}.
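The two corrections differ only by the rescaling factor $\eta$; a small sketch (function names are our hypothetical choices) makes the relation explicit:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def centered_mod(a, b):
    """a mod b in the interval (-b/2, b/2]."""
    r = a - b * round(a / b)
    return r + b if r <= -b / 2 else r

def steane_correction(q_out):
    """Conventional correction: q_out mod sqrt(pi)."""
    return centered_mod(q_out, SQRT_PI)

def me_steane_correction(q_out, sigma1, sigma2):
    """Maximum-likelihood correction: rescale by eta = s1^2/(s1^2 + s2^2)."""
    eta = sigma1**2 / (sigma1**2 + sigma2**2)
    return eta * steane_correction(q_out)
```

For $\sigma_2\rightarrow 0$ the two corrections coincide, and for $\sigma_1=\sigma_2$ the ME correction is exactly half the conventional one, as stated in the text.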
\begin{figure}[t]
\centering
\includegraphics[width=8.2cm]{fig4.eps}
\caption{\justify $\Delta(\pi',\pi)$ of the Steane type error correction with different $q_{cor}$ in Eq.$\,$(\ref{eq20}). Here $q^{\scriptscriptstyle (ME)}_{cor}=\eta ((u_1+u_2)\,{\rm mod}\,\sqrt{\pi})$ is the correction proposed in the ME-Steane scheme. We estimate $\Delta(\pi',\pi)$ with different $q_{cor}$ by Monte Carlo simulation, in which Gaussian random numbers $u_1$ and $u_2$ are drawn to compute $\pi(u_1)$ and $\pi'(u_1)$, and this process is repeated to obtain the average of $|(\pi'(u_1)-\pi(u_1))\,{\rm mod}\, 2\sqrt{\pi}|$.
The variance $\sigma_2^2$ of the ancilla qubits is fixed at 0.056, corresponding to the highest squeezing of GKP states achieved with current experimental techniques \cite{campagne2020quantum}. The result shows that $q_{cor}=q^{\scriptscriptstyle (ME)}_{cor}$ is exactly the optimal correction minimizing $\Delta(\pi',\pi)$.}\label{fig4}
\end{figure}
In the ME-Steane scheme, the conditional $\bar{X}$ error probability is
\begin{equation}\label{e20}
\begin{aligned}
p'(\bar{X}|q_{out})=
1-\sum_k\int_{2k\sqrt{\pi}-\frac{\sqrt{\pi}}{2}}^{2k\sqrt{\pi}+\frac{\sqrt{\pi}}{2}}f(u_1+q^{\scriptscriptstyle (ME)}_{cor},q_{out})du_1.
\end{aligned}
\end{equation}
Likewise, the conditional $\bar{X}$ error probability in conventional scheme is
\begin{equation}
\begin{aligned}
p(\bar{X}|q_{out})=
1-\sum_k\int_{2k\sqrt{\pi}-\frac{\sqrt{\pi}}{2}}^{2k\sqrt{\pi}+\frac{\sqrt{\pi}}{2}}f(u_1+q_{cor},q_{out})du_1,
\end{aligned}
\end{equation}
where $q_{cor}=q^{\scriptscriptstyle (ME)}_{cor}/\eta= q_{out}\,{\rm mod}\, \sqrt{\pi}$.
As mentioned in the introduction, in the special case $\sigma_2=\sigma_1$ $(\eta=\frac{1}{2})$, the correction $q^{\scriptscriptstyle (ME)}_{cor}$ is half of $q_{cor}$ \cite{noh2020encoding}. When $\sigma_2\ll\sigma_1$, we have $\eta\approx1$ and the two schemes give the same correction. In fact, the ME-Steane scheme reduces to the conventional scheme in the limit $\sigma_2\ll\sigma_1$: one can check that Eq.$\,$(\ref{e9}) and Eq.$\,$(\ref{e20}) give the same error probability when $\sigma_2\rightarrow 0$. Hence, in the following sections, the conventional Steane scheme will continue to be used whenever the GKP ancilla qubits are assumed noiseless.
Then let us introduce a function to measure the performance of a GKP error correction scheme. A GKP error correction with perfect ancilla qubits corresponds to a mapping of the shift error $u_1$ in the data qubit:
\begin{equation}\label{e21}
\pi(u_1)=\left\{
\begin{array}{rl}
0, & {\rm if} \ |u_1\,{\rm mod}2\sqrt{\pi}|< \frac{\sqrt{\pi}}{2};\\
\sqrt{\pi}, & {\rm otherwise}.
\end{array}
\right.
\end{equation}
This means the data-qubit error $u_1$ turns into $\pi(u_1)$ after a perfect GKP error correction. Any sufficiently small error $u_1$ is corrected, while a larger $u_1$ may lead to a $\sqrt{\pi}$ displacement (an $\bar{X}$ error), which spreads to the concatenated stabilizer code in the next layer. Likewise, when the ancilla qubits are noisy, the mapping $\pi'(u_1)$ of a Steane type error correction scheme is
\begin{equation}\label{eq20}
\pi'(u_1)=u_1-q_{cor}.
\end{equation}
Then we define a function $\Delta(\pi',\pi)$ as
\begin{equation}
\begin{aligned}
\Delta(\pi',\pi)= \left\langle |(\pi'(u_1)-\pi(u_1))\,{\rm mod}\, 2\sqrt{\pi}| \right\rangle ,
\end{aligned}
\end{equation}
to measure the difference between a perfect GKP error correction and a noisy GKP error correction scheme, where the angle brackets denote the Gaussian-weighted average over the variables $u_1$ and $u_2$. To show that the correction $q^{\scriptscriptstyle (ME)}_{cor}$ proposed by the ME-Steane scheme is optimal, we simulate $\Delta(\pi',\pi)$ with different $q_{cor}$ and find that $q^{\scriptscriptstyle (ME)}_{cor}$ is exactly the best choice to minimize $\Delta(\pi',\pi)$ (see Fig.$\,$\ref{fig4}).
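The Monte Carlo estimate of $\Delta(\pi',\pi)$ described above can be sketched as follows (sample size and seed are our choices; setting \texttt{eta=1} recovers the conventional correction, while the default is the ME-Steane choice):

```python
import math
import random

SQRT_PI = math.sqrt(math.pi)

def centered_mod(a, b):
    r = a - b * round(a / b)
    return r + b if r <= -b / 2 else r

def pi_perfect(u1):
    """Mapping of a perfect GKP error correction: 0 or a sqrt(pi) displacement."""
    return 0.0 if abs(centered_mod(u1, 2 * SQRT_PI)) < SQRT_PI / 2 else SQRT_PI

def delta(sigma1, sigma2, eta=None, n=200_000, seed=7):
    """Monte Carlo estimate of Delta(pi', pi) for q_cor = eta*((u1+u2) mod sqrt(pi))."""
    if eta is None:  # ME-Steane choice
        eta = sigma1**2 / (sigma1**2 + sigma2**2)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u1 = rng.gauss(0.0, sigma1)
        u2 = rng.gauss(0.0, sigma2)
        q_cor = eta * centered_mod(u1 + u2, SQRT_PI)
        total += abs(centered_mod((u1 - q_cor) - pi_perfect(u1), 2 * SQRT_PI))
    return total / n
```

For comparable noise levels the ME-Steane rescaling yields a smaller residual displacement than the conventional correction, consistent with Fig.$\,$\ref{fig4}.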
One may wonder whether the maximum-likelihood estimate of the shift error survives under coherent noise. In fact, considering the pure state of Eq.$\,$(\ref{eq7}) gives the same correction $q^{\scriptscriptstyle (ME)}_{cor}$ as above \cite{seshadreesan2021coherent}. We give a simplified proof in Appendix \ref{ab}.
\section{Decoding strategies}\label{s3}
The threshold of a quantum error correction code largely depends on the error model. Therefore, in this section, we first clarify two error models and then introduce our decoding strategies for the color-GKP code. The key part of our decoding is the MWPM algorithm, which can be executed in polynomial time.
\subsection{Error model}
In this work, all the errors are assumed to come from the Gaussian error shift channels $ \mathcal{N}_1, \mathcal{N}_2$ and $ \mathcal{N}_m$ in Fig.$\,$\ref{fig5}. The other components of the circuits are noiseless, which means the CNOT gates and the initial ancilla qubits are both perfect. Here the ancilla qubits are used not only in the GKP error correction (as GKP ancilla qubits) but also in the stabilizer checks (as syndrome qubits). Moreover, after accounting for the Gaussian error shift channels, every measurement can be treated as a perfect homodyne measurement in the $\hat{q}$ or $\hat{p}$ quadrature.
Since the color code is a CSS code, one can correct $X$ and $Z$ errors independently \cite{calderbank1996good,steane1996multiple}. Here we only discuss Gaussian shift errors in the $\hat{q}$ quadrature and the $Z$ stabilizer checks of the color code.
Two kinds of error models are considered in the color-GKP error correction protocol. In the first model, the data GKP qubits are noisy but the measurements are perfect; in the second model, both the data GKP qubits and the measurements are noisy.
In the first error model, the measurements in the GKP error correction and the stabilizer checks are perfect, that is, $ \mathcal{N}_2=I$ and $\mathcal{N}_m=I$, where $I$ is the identity operator. This corresponds to the code capacity error model \cite{landahl2011fault}, in which only data qubit errors are considered. The decoding process is implemented after a single round of $X$ or $Z$ stabilizer checks.
It is worthwhile to analyze the circuit of Fig.$\,$\ref{fig5} in detail. In the beginning, every data GKP qubit suffers a Gaussian shift error $u_1$ with standard deviation $\sigma$ from the Gaussian error shift channel $\mathcal{N}_1$. Then the conventional Steane error correction is performed on every data GKP qubit. The shift $u_1$ propagates to the GKP ancilla qubit through the CNOT gate, so the measurement result of the ancilla qubit is $q_{out} = u_1+n\sqrt{\pi}$, where $n$ can be any integer. Next the $\hat{q}$ correction $q_{cor}=(q_{out}\,{\rm mod}\, \sqrt{\pi})\in (-\frac{\sqrt{\pi}}{2},\frac{\sqrt{\pi}}{2}]$ is applied to the GKP data qubit. If
$|u_1\,{\rm mod}\, 2\sqrt{\pi}| <\frac{\sqrt{\pi}}{2}$,
the correction is successful. Otherwise, it produces an odd multiple of $\sqrt{\pi}$ displacement in the $\hat{q}$ quadrature, i.e., a logical $\bar{X}$ error of the GKP code.
After that, we do stabilizer checks and extract the syndromes by the output measurement results $q_{m}$. Note that since the measurements are perfect, $q_{m}$ can only be integer multiples of $\sqrt{\pi}$. So the result of a stabilizer check depends on whether $q_{m}/\sqrt{\pi}$ is odd or even.
In the second error model, in addition to the Gaussian error shift channel $\mathcal{N}_1$ on the data qubits, the ancilla qubits used in the measurements also suffer $ \mathcal{N}_2$ and $\mathcal{N}_m$. The Gaussian error shift channels $ \mathcal{N}_2$ and $\mathcal{N}_m$ act after the CNOT gates, so errors do not propagate from the ancilla qubits to the data qubits, which corresponds to the phenomenological error model \cite{dennis2002topological,landahl2011fault}.
After applying ME-Steane error correction, every data GKP qubit is not completely corrected but carries the shift error $u_1-\eta(u_1+u_2)$, where $u_2$ is the error of the GKP ancilla qubit from $\mathcal{N}_2$.
Since the stabilizer checks are faulty, the stabilizer check process is typically repeated $d$ times, where $d$ is the color code distance. In each round of stabilizer checks, we consider the effect of the Gaussian error shift channels $\mathcal{N}_1, \mathcal{N}_2$ and $\mathcal{N}_m$.
Meanwhile, the measurements in the final round are assumed to be perfect, both in the GKP error correction and in the stabilizer checks. This assumption ensures that the final GKP states return to the code space, so that one either decodes the code successfully or produces exactly a logical error $X_L$ of the color code.
\begin{figure}[t]
\centering
\includegraphics[width=8.2cm]{fig5.eps}
\caption{\justify The stabilizer check circuits with Gaussian error shift channels. The Gaussian error shift channels $ \mathcal{N}_1$, $ \mathcal{N}_2$ and $ \mathcal{N}_m$ have the same form as Eq.$\,$(\ref{ee8}). The $EC$ represents the Steane type GKP error correction with $U_{cor}=\exp(iq_{cor}\hat{p})$. The ME-Steane scheme and the conventional scheme give different $q_{cor}$ (see Section \ref{pre} for more details).}\label{fig5}
\end{figure}
\subsection{The Restriction Decoder with perfect measurements}\label{s3.2}
Now we analyze the decoding of the color-GKP code in the first error model.
The aim of the decoding can be described as follows:
for a given syndrome set (a set of vertices in Fig.$\,$\ref{fig6}),
the decoder finds a correction (a set of faces) that
produces the same syndromes. We hope the corrected
qubits match the error qubits up to
a stabilizer; otherwise the decoding creates a logical
error $X_L$. In general, a better decoder finds
the qubit error that occurs with a higher
probability for the fixed syndromes. The ideal decoder is the maximum-likelihood
decoder (MLD) \cite{bravyi2014efficient}. However, an
efficient MLD for the color code under a general
error model has not yet been found.
In our work, an efficient
decoder, the Restriction Decoder \cite{kubica2019efficient,chamberland2020triangular}, is applied; it achieves a high threshold for the 8,8,4 color code by using
the MWPM algorithm in polynomial time. Here we decode the color-GKP
code by the Restriction Decoder combined with
the continuous-variable information from the GKP
code.
\begin{figure*}[htbp]
\centering
\subfigure[]{
\label{fig6a}
\includegraphics[width=4.8cm]{fig6a.eps}}
\hspace{0.1in}
\subfigure[]{
\label{fig6b}
\includegraphics[width=4.8cm]{fig6b.eps}}
\hspace{0.1in}
\subfigure[]{
\label{fig6c}
\includegraphics[width=4.8cm]{fig6c.eps}}
\caption{\justify (Color online) Restriction Decoder. (a) The error qubits (gray faces) and syndrome (amplified vertices) on the whole lattice $\mathcal{L}$. (b) The restricted lattices $\mathcal{L}_{RG}$ is obtained from $\mathcal{L}$ by removing all blue vertices, and all the edges and faces incident to the blue vertices. The bold
(blue) lines indicate the matching results on the restricted lattices $\mathcal{L}_{RG}$. (c) Combining the matching results on $\mathcal{L}_{RG}$ and $\mathcal{L}_{RB}$ and applying the lifting procedure on each red vertex in the bold lines, one can get the correction (shaded faces). Note that the difference between corrected qubits and error qubits is just a stabilizer.
}\label{fig6}
\end{figure*}
The measurement results in the bosonic framework are continuous variables rather than binary values, which provides extra information
about the logical $\bar{X}$ error rate. One can compute the $\bar{X}$ error probability of GKP qubit $i$ conditioned on $q_{out,i}$ by
Eq.$\,$(\ref{e9}). In contrast, the average error probability over all GKP qubits is
\begin{equation}
\bar{p}=1-\sum_{n\in \mathbb{Z}}\int ^{2n\sqrt{\pi}+\frac{\sqrt{\pi}}{2}}
_{2n\sqrt{\pi}-\frac{\sqrt{\pi}}{2}}P_ {\sigma}(x)dx.
\end{equation}
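$\bar{p}$ is a sum of Gaussian integrals and can be computed with the error function; a short sketch (the truncation \texttt{nmax} is our choice):

```python
import math

SQRT_PI = math.sqrt(math.pi)

def phi(x, sigma):
    """Gaussian CDF with mean 0 and standard deviation sigma."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def average_xbar_rate(sigma, nmax=20):
    """Average logical-X rate: mass outside the windows around even multiples of sqrt(pi)."""
    inside = sum(
        phi(2 * n * SQRT_PI + SQRT_PI / 2, sigma) - phi(2 * n * SQRT_PI - SQRT_PI / 2, sigma)
        for n in range(-nmax, nmax + 1)
    )
    return 1.0 - inside
```

This reproduces the correspondences quoted later in the paper, e.g. $\sigma\approx0.59\mapsto\bar{p}\approx13.3\%$ and $\sigma\approx0.542\mapsto\bar{p}\approx10.2\%$.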
Clearly, using the conditional error rate rather than $\bar{p}$
helps us find the maximum-likelihood correction.
Then let us follow the steps of the
Restriction Decoder (Fig.$\,$\ref{fig6}). First, we construct the
restricted lattices $\mathcal{L}_{RB}$ and $\mathcal{L}_{RG}$, each obtained from the whole lattice $\mathcal{L}$ by removing all green (blue) vertices of $\mathcal{L}$, together with all the edges and faces incident to the removed vertices.
Using the MWPM
algorithm \cite{edmonds1965paths,higgott2021pymatching}, the syndrome vertex pairs on $\mathcal{L}_{RB}$ and $\mathcal{L}_{RG}$ can be connected. The matching results of $\mathcal{L}_{RB}$ and $\mathcal{L}_{RG}$ are two sets of edges $\rho_{RB}$ and $\rho_{RG}$. Finally, the lifting procedure \cite{kubica2019efficient} is implemented in the edge set $\rho=\rho_{RB}\cup \rho_{RG}$
and returns the correction. Specifically, the lifting
procedure is to find all the red vertices
in edges in $\rho$ and decode the part around
each red vertex locally.
To assign a proper weight using the conditional error rate, we introduce a probabilistic
matching weight for each edge (see details in Appendix \ref{ac}). Furthermore, the
shortest path connecting a syndrome vertex pair cannot be obtained
directly, since the weight is not directly related to the distance between two vertices.
So we use the Dijkstra algorithm to find
the shortest path for each possible
syndrome vertex pair before running the MWPM algorithm.
The whole decoding process can be summarized as:
(1) input $\sigma$, $q_{out,i}$, syndromes;
(2) compute conditional error probability $p_i$ by Eq.$\,$(\ref{e9});
(3) construct restricted lattices $\mathcal{L}_{RB}$ and $\mathcal{L}_{RG}$;
(4) use Dijkstra algorithm and MWPM algorithm to match syndrome pairs;
(5) implement the lifting procedure and output the decoding result.
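Steps (2)-(4) can be illustrated on a toy graph. The sketch below assumes a log-likelihood edge weight of the form $w=-\log\frac{p}{1-p}$ (the precise horizontal-edge weights are given in Appendix \ref{ac}); the four-vertex line graph and its per-edge error rates are purely hypothetical:

```python
import heapq
import math

def edge_weight(p):
    """Log-likelihood matching weight from a conditional error probability p < 1/2."""
    return -math.log(p / (1.0 - p))

def dijkstra(adj, src):
    """Shortest weighted distances from src; adj maps vertex -> list of (neighbor, weight)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, math.inf):
            continue
        for u, w in adj[v]:
            nd = d + w
            if nd < dist.get(u, math.inf):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return dist

# Hypothetical toy restricted lattice: 4 vertices in a line with per-edge error rates.
rates = [0.05, 0.30, 0.05]
adj = {i: [] for i in range(4)}
for i, p in enumerate(rates):
    w = edge_weight(p)
    adj[i].append((i + 1, w))
    adj[i + 1].append((i, w))
print(dijkstra(adj, 0))
```

Edges with a higher conditional error rate get a lower weight, so the shortest paths found here preferentially route through likely error locations before they are fed to the MWPM step.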
\subsection{The Generalized Restriction Decoder with noisy measurements}
\begin{figure}[b]
\centering
\includegraphics[width=6.5cm]{fig7.eps}
\caption{\justify Three-dimensional space-time graph. One data qubit error may cause syndromes in different rounds, so we match syndrome pairs in the 3D graph and project the results to the bottom layer to implement the lifting procedure.}
\label{fig7}
\end{figure}
We now turn our attention to the second error model, where noisy measurements exist in both the GKP error corrections and the stabilizer checks. Since the results of the stabilizer checks are unreliable, repeated measurements are necessary. The syndromes in different rounds can be used to construct a three-dimensional (3D) space-time graph. In its original version, the Restriction Decoder is only used to decode the color code with
perfect measurements on a 2D lattice \cite{kubica2019efficient}. Another important work adapts the Restriction Decoder to color codes with boundaries and circuit-level noise \cite{chamberland2020triangular}.
Here we introduce the generalized Restriction Decoder on the 3D space-time graph to decode the color-GKP code with noisy measurements.
\begin{figure*}[t]
\centering
\subfigure[]{
\label{fig8a}
\includegraphics[width=8.6cm]{fig8a.eps}}
\hspace{0.1in}
\subfigure[]{
\label{fig8b}
\includegraphics[width=8.1cm]{fig8b.eps}}
\caption{\justify Thresholds of the color-GKP code with perfect measurements. The logical error rates are estimated by Monte Carlo simulations. We use different colors to mark the data with different code distances $d$. (a) The threshold reaches $\sigma\approx0.59$ ($\bar{p}\approx 13.3\%$) when using the continuous-variable information. (b) As a comparison, the threshold reaches $\sigma\approx0.542$ ($\bar{p}\approx 10.2\%$) without using the continuous-variable information. }\label{f8}
\end{figure*}
To construct the 3D space-time graph, we first extract syndromes from the measurement results $q_{m,j}$ of the syndrome qubits. If $|q_{m,j}\,{\rm mod}\, 2\sqrt{\pi}| <\frac{\sqrt{\pi}}{2}$, the stabilizer check is $+1$; otherwise it is $-1$. In the beginning, suppose all stabilizer checks are $+1$; then a vertex is labeled as a syndrome whenever its stabilizer check differs from that in the previous round. For example, if the stabilizer checks in the first two rounds are
$-1,-1$, we only label the first check, since the second check does not change.
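The syndrome-labeling rule above can be sketched as follows, with an implicit $+1$ round preceding the first measurement (the function name is ours):

```python
def detection_events(check_rounds):
    """Label a syndrome vertex whenever a stabilizer outcome differs from the
    previous round. check_rounds lists the +/-1 outcomes of one stabilizer,
    round by round; an implicit +1 round precedes the first measurement."""
    events = []
    prev = +1
    for s in check_rounds:
        events.append(s != prev)
        prev = s
    return events
```

For the example in the text, outcomes $-1,-1$ produce a syndrome vertex only in the first round.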
Next, the conditional error rates of the data qubits and stabilizer checks are required. In the conventional Steane scheme, these conditional error rates take the form of Eq.$\,$(\ref{e9}). The error rate $p_i$ of data qubit $i$ is obtained by simply replacing $\sigma$ in Eq.$\,$(\ref{e9}) with $\sqrt{\sigma_1^2+2\sigma_2^2}$, where one $\sigma_2^2$ comes from $\mathcal{N}_2$ in the current round and the other from the last round.
For the error rate $p_{m,j}$ of stabilizer check $j$, $\sigma$ is replaced by $\sqrt{4\sigma_2^2+\sigma_m^2}$ or $\sqrt{8\sigma_2^2+\sigma_m^2}$, depending
on the Pauli weight of the
stabilizer. We also replace $q_{out,i}$ by the measurement result $q_{m,j}$
of the syndrome qubit.
However, when taking the noise of the GKP ancilla qubits into account, the data qubit error rate is given by Eq.$\,$(\ref{e20}). After the ME-Steane type GKP error correction, every data qubit carries a shift error with variance $\eta\sigma_2^2$, where $\eta$ need not be the same in different rounds. So we replace $\sigma$ with $\sqrt{4\eta\sigma_2^2+\sigma_m^2}$ or $\sqrt{8\eta\sigma_2^2+\sigma_m^2}$ and use Eq.$\,$(\ref{e9}) to compute the error rate of the stabilizer check.
Then the Dijkstra and MWPM algorithms are performed on the 3D restricted lattices in the same way as in Section \ref{s3.2}. The matching weight of a horizontal
edge follows the setting in Appendix$\,$\ref{ac}, and the weight of a vertical edge is $w_u=-\log\frac{p_{m,j}}{1-p_{m,j}}$, where $j$ is the vertex at the bottom of edge $u$. To apply the lifting procedure, we project the matching results to the bottom layer (see Fig.$\,$\ref {fig7}). Note that projections landing on the same edge an even number of times cancel, since an $\bar{X}$ error acting on one qubit twice is equivalent to the identity.
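The projection step with parity cancellation can be sketched as follows (the edge representation is our hypothetical choice; vertical edges are represented by a \texttt{None} id and project to nothing):

```python
from collections import Counter

def project_matching(matched_edges):
    """Project 3D matching results to the bottom layer, cancelling horizontal
    edges hit an even number of times (an X error applied twice is identity).
    matched_edges holds (horizontal_edge_id, round) pairs; vertical edges use
    edge_id None and contribute nothing to the projection."""
    counts = Counter(edge_id for edge_id, _round in matched_edges
                     if edge_id is not None)
    return {e for e, c in counts.items() if c % 2 == 1}
```

Only the edges with odd multiplicity survive and are handed to the lifting procedure.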
The whole decoding process can be summarized as:
(1) input $\sigma_1$, $\sigma_2$, $\sigma_m$, $q_{out,i}$ and $q_{m,j}$;
(2) compute conditional error probability $p_i$ and $p_{m,j}$;
(3) construct 3D restricted lattices $\mathcal{L}_{RB}$ and $\mathcal{L}_{RG}$;
(4) use Dijkstra algorithm and MWPM algorithm to match syndrome pairs;
(5) project the matching result to the bottom;
(6) implement the lifting procedure and output the decoding result.
\section{Numerical results}\label{s4}
This section shows our numerical simulation results of the decoding. We use the Monte Carlo simulation to estimate the logical error rates after our decoding, and determine the threshold by the common intersection point. A Gaussian random number is created in every Gaussian error shift channel $ \mathcal{N}_1$, $\mathcal{N}_2$ or $\mathcal{N}_m$ to simulate the shift errors in the circuit. It should be pointed out again that the coherent displacement errors are replaced by the incoherent mixture of displacement errors for the convenience of simulation.
The threshold of the color-GKP code with perfect measurements is shown in Fig.$\,$\ref{fig8a} at $\sigma\approx 0.59$, corresponding to the average error rate $\bar{p}\approx 13.3\%$, which is close to the threshold $\sigma\approx 0.60$ of the toric-GKP code in the same error model \cite{vuillot2019quantum}. Fig.$\,$\ref{fig8b} also presents, as a comparison, the result without using the continuous-variable information from the GKP code, which shows the threshold at $\sigma\approx 0.542$, corresponding to $\bar{p}\approx 10.2\%$. This threshold is the same as the result from the original version of the Restriction Decoder, which obtained a
threshold of the normal 8,8,4 color code at ${p}\approx 10.2\%$ \cite{kubica2019efficient}.
\begin{figure*}[t]
\centering
\subfigure[]{
\label{fig9a}
\includegraphics[width=8.7cm]{fig9a.eps}}
\hspace{0.1in}
\subfigure[]{
\label{fig9b}
\includegraphics[width=8cm]{fig9b.eps}}
\hspace{0.1in}
\subfigure[]{
\label{fig9c}
\includegraphics[width=8.7cm]{fig9c.eps}}
\hspace{0.1in}
\subfigure[]{
\label{fig9d}
\includegraphics[width=8cm]{fig9d.eps}}
\caption{\justify Numerical results. (a) The threshold of the color-GKP code reaches $\sigma=0.46$ with noisy stabilizer checks and perfect GKP error corrections ($\sigma_1 =\sigma_m\equiv\sigma$, $\sigma_2=0$). (b) In the case $\sigma_1 =\sigma_m=\sigma_2\equiv\sigma$, we use the ME-Steane GKP error correction scheme and obtain the threshold at $\sigma=0.24$. (c) Also in the case $\sigma_1 =\sigma_m=\sigma_2\equiv\sigma$, if we use the conventional Steane error correction scheme, the threshold drops to $\sigma=0.231$. (d) By using the generalized Restriction Decoder, the threshold of the 8,8,4 color code without concatenating the GKP code reaches $p=3.1\%$ under the phenomenological error model. We use different colors to mark the data with different code distances $d$.}
\label{fig9}
\end{figure*}
For the second error model, there are three independent noisy parameters $\sigma_1$, $\sigma_2$ and $\sigma_m$. So we just simulate two special cases: (1)
$\sigma_1 =\sigma_m\equiv\sigma$, $\sigma_2=0$; (2) $\sigma_1 =\sigma_2 = \sigma_m\equiv\sigma$. Note that the average error rates of data qubit and stabilizer measurement are equal in the first case, which corresponds to the standard phenomenological error model. The thresholds are $\sigma \approx 0.46$ in the first case and
$\sigma \approx 0.24$ in the second case (see Fig.$\,$\ref {fig9}).
Fig.$\,$\ref {fig9b} and Fig.$\,$\ref {fig9c} also compare the ME-Steane GKP error correction scheme and the conventional scheme in the case $\sigma_1 =\sigma_2 = \sigma_m$, which shows the threshold of the color-GKP code at $\sigma\approx0.24$ and $0.231$, respectively. The distinction becomes clearer if one transforms $\sigma$ into the average $\bar{X}$ error rate $\bar{p}$: the two schemes then give thresholds
at $\bar{p}\approx 0.222\%$ and $0.125\%$. Note that in Fig.$\,$\ref {fig9b} and Fig.$\,$\ref {fig9c}, the logical error rates are almost equal at these two thresholds. This confirms that the advantage comes from the ME-Steane scheme, which removes roughly half of the $\bar{X}$ errors compared with the conventional scheme.
With noisy measurements, the thresholds in the two cases approach or even exceed
those of the toric-GKP code in the same settings \cite{vuillot2019quantum}. We attribute this good performance to the generalized Restriction Decoder on the 8,8,4 color code. To demonstrate this, we test the threshold of the normal 8,8,4 color code under the phenomenological error model. By using the Restriction Decoder on the 3D space-time graph, the threshold reaches $p=3.1\%$ (see Fig.$\,$\ref {fig9d}).
Compared with Ref.$\,$\cite{landahl2011fault}, our decoder is efficient and gives
a higher threshold.
\section{Conclusion and discussion}\label{s5}
In this paper, we study the concatenation of the GKP code with the 2D color code. By applying the Restriction Decoder with continuous-variable information, the threshold of the color-GKP code is improved under perfect measurements. Our work also generalizes the Restriction Decoder to the 3D space-time lattice to decode the color-GKP code with
noisy measurements. Lastly, the good performance of the generalized decoder is also demonstrated on the normal 8,8,4 color code under the phenomenological error model.
We notice that Ref.$\,$\cite{walshe2020continuous} proposes a teleportation-based GKP error correction scheme using a balanced beam-splitter. Ref.$\,$\cite{noh2021low} analyzes its advantages over the Steane type error correction scheme, under the assumption that the finite squeezing of the GKP states is the only noise source.
For this scheme (see Ref.$\,$\cite{noh2021low} for more details), one can also define $\pi_t(u_1)$ and $\pi'_t(u_1)$ to characterize a perfect and a noisy GKP error correction. If the GKP ancilla qubits are perfect, $\pi_t(u_1)$ has the same form as Eq.$\,$(\ref{e21}). Suppose the three initial GKP states in the teleportation carry the shift errors $u_1$, $u_2$, $u_3$; then we have
\begin{equation}
\pi'_t(u_1)=\frac{u_2+u_3}{\sqrt{2}}+\pi(u_1-\frac{u_2-u_3}{\sqrt{2}}),
\end{equation}
where $\pi(x)$ is defined in Eq.$\,$(\ref{e21}).
Fig.$\,$\ref {fig10} compares $\Delta(\pi',\pi)$ (or $\Delta(\pi_t',\pi_t)$) of the Steane type schemes and the teleportation-based scheme. As mentioned above, $\Delta(\pi',\pi)$ (or $\Delta(\pi_t',\pi_t)$) is a quantity measuring the difference between a perfect GKP error correction and a noisy error correction scheme. In all three schemes, we assume the shift errors $u_1$, $u_2$, $u_3$ of the three initial GKP states have the same variance $\sigma^2\equiv\sigma_1^2=\sigma_2^2=\sigma_3^2$, and that the other components in the circuits are noiseless. Here $u_1$, $u_2$, $u_3$ are shift errors in the $\hat{p}$ quadrature if we use the circuit in Fig.$\,${\ref{fig1}}, and then
\begin{equation}
\pi'(u_1)=u_1-u_2-p_{cor}.
\end{equation}
We apply the noiseless error correction circuit of each scheme and simulate the error propagation. The result shows that the ME-Steane scheme yields a smaller $\Delta(\pi',\pi)$ than the teleportation-based scheme for $\hat{p}$ error correction. Note that Steane type error correction schemes are not symmetric for $\hat{q}$ and $\hat{p}$ errors, depending on the order of the CNOT gates in Fig.$\,${\ref{fig1}}. Therefore, a better application scenario for the ME-Steane scheme may involve asymmetric GKP states. Asymmetric GKP codes have been
studied in several works on GKP concatenated codes \cite{hanggli2020enhanced,noh2021low}.
Moreover, Ref.$\,$\cite{baragiola2019all} points out that Gaussian elements alone suffice to distill GKP magic states. So, using GKP magic states combined with transversal Clifford gates in the color-GKP code, one can in principle achieve universal fault-tolerant quantum computation.
To connect with experiments more closely, it is indispensable to consider
the circuit-level error model. A study on the surface-GKP code gives a threshold of 18.6 dB after considering the circuits in detail \cite{noh2020fault}.
But the threshold of the color-GKP code under the circuit-level error model is
still unknown, which is an interesting open question for further study. Meanwhile, estimating the overhead of the color-GKP code on realistic quantum hardware is another direction in which to extend this work. Several similar studies have been presented for the surface-GKP code \cite{noh2021low} and the cat-surface code \cite{chamberland2020building}.
\begin{figure}[b]
\centering
\includegraphics[width=8.2cm]{fig10.eps}
\caption{\justify $\Delta(\pi',\pi)$ of three GKP error correction schemes with different $\sigma$. We assume the variance of the three initial GKP states obey $\sigma_1^2=\sigma_2^2=\sigma_3^2=\sigma^2$. The ME-Steane scheme is closer to the perfect GKP error correction than the other two schemes.}\label{fig10}
\end{figure}
\acknowledgments
We thank Xiaosi Xu for the helpful discussion. This work was supported by the National Key Research and Development Program of China (Grant No. 2016YFA0301700) and the Anhui Initiative in Quantum Information Technologies (Grant No. AHY080000).