The Lyman-α Forest Power Spectrum from the Sloan Digital Sky Survey
Abstract
We measure the power spectrum of the transmitted flux in the Ly forest using 3035 high redshift quasar spectra from the Sloan Digital Sky Survey. This sample is almost two orders of magnitude larger than any previously available data set, yielding small statistical errors on both the overall amplitude and the logarithmic slope of the power spectrum. This unprecedented statistical power requires a correspondingly careful analysis of the data and of possible systematic contamination in it. For this purpose we reanalyze the raw spectra to make use of information not preserved by the standard pipeline. We investigate the details of the noise in the data, the resolution of the spectrograph, sky subtraction, the quasar continuum, and metal absorption. We find that background sources such as metals contribute significantly to the total power and must be subtracted properly. We also find clear evidence for SiIII correlations with the Ly forest and suggest a simple model to account for this contribution to the power. While it is likely that our newly developed analysis technique does not eliminate all systematic errors below the level of the statistical errors, our tests indicate that any residual systematics in the analysis are unlikely to affect the inference of cosmological parameters from the measurement. These results should provide an essential ingredient for all future attempts to constrain models of structure formation, cosmological parameters, and theories for the origin of primordial fluctuations.
1 Introduction
Although the Ly forest was discovered many decades ago (Lynds, 1971), it has only recently emerged as one of the prime tracers of large scale structure in the Universe. High resolution measurements using the Keck HIRES spectrograph (Vogt et al., 1994) have been largely reproduced using hydrodynamic simulations (Cen et al., 1994; Zhang et al., 1995; Hernquist et al., 1996; Theuns et al., 1998) and semi-analytic models (Gnedin & Hui, 1998). The picture that has emerged from these studies is one in which the neutral gas responsible for the absorption is in a relatively low density, smooth environment, which implies a simple connection between the gas and the underlying dark matter. The neutral fraction of the gas is determined by the interplay between the recombination rate (which depends on the temperature of the gas) and ionization by ultraviolet photons. Photoionization heating and expansion cooling cause the gas density and temperature to be tightly related, except where mild shocks heat up the gas. This leads to a tight relation between the absorption and the gas density. Finally, the gas density is closely related to the dark matter density on large scales, while on small scales the effects of thermal broadening and Jeans smoothing must be included. In the simplest picture described here, all of the physics ingredients are known and can be modeled. The fact that one can trace the fluctuations over a range of redshifts (using ground-based spectrographs) and over a range of scales, typically smaller than the scales probed by other tracers, is the main strength of this method. It becomes particularly powerful when combined with cosmic microwave background (CMB) anisotropies or other tracers that are sensitive to larger scales. Such a combination is sensitive to the shape of the primordial spectrum of fluctuations, which is one of the few observationally accessible probes of the early universe.
These observations therefore directly test models of the early universe, such as inflation.
Ly forest observations and constraints on cosmology have been explored by several groups in the past. Most of the analyses focused on the power spectrum of the fluctuations, δ_F, in the Ly forest flux,
δ_F(λ) = F(λ)/F̄ − 1 ,  where F = e^(−τ) ,   (1)
where τ is the optical depth to Ly absorption and F̄ is the mean transmitted flux fraction. The first such work was by Croft et al. (1998), followed by McDonald et al. (2000), Croft et al. (2002), and Kim et al. (2003). These groups were limited to a few dozen spectra at most. Recent theoretical analyses, in addition to the above, have been performed by Gnedin & Hamilton (2002), Zaldarriaga et al. (2001), and Seljak et al. (2003). The latter two of these papers emphasized the degeneracy between the amplitude and slope of the primordial power spectrum and the normalization of the optical depth–density relation [most sensitive to the intensity of the UV background, and typically parametrized in terms of the mean transmitted flux fraction, F̄]; this degeneracy leads to a significant expansion of the allowed range of cosmological parameters relative to what one would have inferred from the errors on the flux power spectrum alone. Seljak et al. (2003) have shown that the current Ly forest constraints are consistent with the ΛCDM model favored by recent CMB data, testing it in a regime of redshift and length scale not probed by other measurements, but that within the ΛCDM framework they do not add much leverage on parameter values beyond that afforded by the CMB data alone.
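To make equation (1) concrete, here is a minimal sketch (our own illustrative code, not the paper's pipeline) of forming the fluctuation field δ_F = F/F̄ − 1 from a transmitted-flux array and estimating its 1D power spectrum with an FFT. A real measurement must additionally handle noise subtraction, resolution, and masked pixels, as described in the rest of this paper:

```python
import numpy as np

def flux_power_1d(flux, dv):
    """Estimate the 1D flux power spectrum from a transmitted-flux array
    sampled on a uniform velocity grid of spacing dv (km/s).

    Returns wavenumbers k (s/km) and power estimates (km/s)."""
    # Flux fluctuation field: delta_F = F/<F> - 1 (equation 1 of the text).
    delta = flux / flux.mean() - 1.0
    n = delta.size
    # FFT convention: P(k) = |delta~(k)|^2 * L / n^2, with L = n * dv,
    # so P = |rfft(delta)|^2 * dv / n.
    dk = np.fft.rfft(delta)
    power = np.abs(dk) ** 2 * dv / n
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dv)
    return k[1:], power[1:]  # drop the k = 0 mode

# Toy example: Gaussian optical-depth field -> transmitted flux F = exp(-tau).
rng = np.random.default_rng(0)
tau = np.clip(0.3 + 0.1 * rng.standard_normal(4096), 0.0, None)
k, p = flux_power_1d(np.exp(-tau), dv=70.0)
```

The toy optical-depth field and the 70 km/s pixel spacing are placeholders; only the δ_F definition and the FFT normalization convention are the point here.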
An important practical implication of the theoretical breakthroughs of the 1990s is that large scale structure in the Ly forest can be effectively studied with moderate resolution spectra. Once the spectrum is modeled as a continuous phenomenon rather than a collection of discrete lines, there is no need to resolve every feature. Some of the studies cited above use high resolution spectra, some use moderate resolution spectra, and some use a mix of the two.
The goal of this paper is to present a new measurement of the Ly forest power spectrum, based on Sloan Digital Sky Survey (SDSS; York et al. 2000) spectra, which probe the Ly forest at moderate resolution. This sample is almost two orders of magnitude larger than anything previously available. As such it greatly increases the statistical power of the Ly forest, making it comparable to the CMB from WMAP. At the same time, the required tolerance for systematic errors tightens by the same amount. This demands a careful investigation of all sources of systematic error, and a large portion of this paper is devoted to possible systematics in the data and their influence on the parameters of interest. We also discuss how the analysis we perform and the results we obtain differ from what can be done using the standard spectral pipeline outputs in the public SDSS data. In part because of the practicalities of work in a large, multi-institutional collaboration, and in part because of the importance of obtaining an accurate measurement with well understood statistical and systematic errors, the Ly forest power spectrum has been pursued by two independent groups within the SDSS, one led by P. McDonald and U. Seljak, and the other by L. Hui and A. Lidz. The methods employed are different and have been developed independently. Results of the alternative analysis will be presented elsewhere (Hui et al., in preparation).
In the current paper we present only the observational measurement of the SDSS Ly forest power spectrum. Independent of any theoretical interpretation, this basic result should be robust on the scales for which we give results.
The common usage of the term Ly forest is to describe the Ly absorption by neutral hydrogen in the relatively low density bulk of the IGM. In this paper we include damped Ly systems (DLAs) in the definition of the “forest”, so it includes all H I Ly absorption. We could try to remove DLAs before measuring the power spectrum, because they are more difficult to simulate than the lower optical depth absorption; however, we believe the advantage of removal is illusory. If the DLAs were located randomly within the IGM (which they certainly are not, completely), it would be simple to include them in the theory using their known column density distribution. If they are not located randomly, the regions obscured by DLAs in the spectra are special, so the effect of removing the DLAs would still have to be understood using simulations. We leave the handling of the effects of DLAs as a problem for the theory, which we will address elsewhere.
Absorption by metals is also difficult to simulate accurately, so we would like to remove this contribution to the measured power. This is relatively easy to do for transitions with rest wavelength longward of Ly, but it is basically impossible for shorter wavelength transitions, because those metal features always appear mixed with H I Ly absorption. We will subtract the power measured in a rest wavelength range redward of Ly emission from our measurement of the power in the forest, which removes the effect of the longer wavelength transitions, but we leave the shorter wavelength transitions as part of the forest. The only significant contaminant of this kind that we can identify is SiIII absorption at 1206.5 Å, and we develop a simple and effective way to account for this in the theory. We refer to our final background-subtracted power spectrum as P_F, and use P_raw for the raw power measured in the forest interval, which we restrict to a fixed rest wavelength range within the Ly forest.
The outline of this paper is as follows. In §2, we describe the selection of our data set and the preparation of spectra for the measurement of . In §3 we describe the method used to measure the power spectrum and estimate the error bars, test the procedure, and give the basic results. We perform consistency checks on the results and discuss systematic errors in §4, which is followed by a brief recipe for using our results in §5, and conclusions in §6.
2 Data Selection and Preparation
We describe the sample of quasar spectra that we use in §2.1. In §2.2 we explain how we remove broad absorption line (BAL) quasars from the sample. In §2.3 we explain how we combine spectra from different exposures of the same quasar and use the differences between exposures to understand the noise in the data. We discuss the resolution of the spectra in §2.4. Finally, in §2.5 we describe how we divide each spectrum by an estimate of the quasar continuum, the expected mean absorption level in the spectrum, and a spectral calibration vector (see below), to produce the vector of transmission-fluctuation estimates for each quasar, from which we will measure the power spectrum.
2.1 SDSS Observations and Sample Selection
The Sloan Digital Sky Survey (York et al., 2000) uses a drift-scanning imaging camera (Gunn et al., 1998) and a 640 fiber double spectrograph on a dedicated 2.5 m telescope. It is an ongoing survey to image 10,000 sq. deg. of the sky in the SDSS AB magnitude system (Fukugita et al., 1996; Stoughton et al., 2002) and to obtain spectra for galaxies and quasars. The astrometric calibration is good to a small fraction of an arcsecond rms per coordinate (Pier et al., 2003), and the photometric calibration is accurate to 3% or better (Hogg et al., 2001; Smith et al., 2002). The data sample used in this paper was compiled in Summer 2002 and is a combination of data releases one (Abazajian et al., 2003) and two (Abazajian et al., 2004).
About 13% of the spectroscopic survey targets are quasar candidates selected based on their colors (Richards et al., 2002). UV-excess objects are targeted to a bright magnitude limit, while additional high-redshift candidates are targeted to a somewhat fainter limit. Fibers are allocated according to a tiling algorithm (Blanton et al., 2003), with the galaxy sample and the quasar sample being the top priorities. The remaining 8% of fibers serve calibration purposes.
SDSS spectra are obtained using plates holding 640 fibers, each of which subtends 3 arcsec on the sky; the spectra cover the optical range, from the detector cutoff near 3800 Å redward. The pixel width is a slowly varying function of wavelength. The resolution also varies, but is typically comparable to the pixel width (rms), i.e., there are roughly two pixels per FWHM resolution element. All quasars have multiple exposures, usually taken one after the other (within a fraction of an hour), so quasar variability can be ignored (otherwise it would act as an additional source of noise). The coadded spectra in the official SDSS release use local spline interpolation onto a uniform grid of pixels and do not guarantee uncorrelated noise; we therefore redo this step starting from the individual exposures, as discussed in more detail below. Spectral flux errors per pixel are in most cases of order 10^-17 erg s^-1 cm^-2 Å^-1. Redshifts are automatically assigned by the SDSS spectral classification algorithm, which is based on fitting templates to each spectrum (Schlegel et al., in preparation).
We limit ourselves to sufficiently high redshift quasars when measuring the power in the Ly forest region of the spectra, so that each spectrum contains a significant stretch of the Ly forest above the detector cutoff at 3800 Å. We use the sample compiled in Summer 2002, cut down to 3035 spectra by eliminating some plates of questionable quality, some spectra where two different redshift estimation codes disagree, and some BAL quasars (see below). Figure 1 shows the redshift distribution of the data.
The dashed, red histogram shows the distribution of quasar redshifts. The solid, black histogram shows the distribution of Ly forest pixels used in the analysis. Note that there is a gap in the quasar redshift distribution, which is due to the stellar locus crossing the quasar locus in the 5-color SDSS photometry (Richards et al., 2002). Figure 2 shows an example SDSS spectrum of a quasar.
This spectrum is unusually good: most spectra have lower S/N, and most quasars are at lower redshift.
We employ an additional sample of lower redshift spectra, so that we can study the full observed wavelength range outside the confusion of the Ly forest. As we discuss in §3.4, we compute a non-negligible background power term (probably mostly metal absorption) by measuring the power in a rest wavelength region redward of Ly emission. Using only the primary sample, we would not be able to compute this term for the shortest observed wavelengths.
We remove a handful of wavelength regions from our analysis because of calibration problems, most of which are due to strong sky lines; two of the removed regions have no direct effect on the results we present.
2.2 BAL Removal
Our sample was initially examined by eye, and the most extreme broad absorption line (BAL) quasars were removed (see Hall et al. (2002) for a discussion of BALs). When we first measured the background power in the region redward of Ly emission, we found that the most extreme outliers in power were still obvious BALs (this was not true of the Ly forest region). To test the importance of these systems for our Ly forest power measurement, we removed a further 147 quasars using the following automated method: each spectrum is smoothed by a Gaussian. The continuum within the test region is redefined by dividing by the mean flux-to-continuum ratio in the region. A quasar is identified as a BAL quasar if the test region contains a long continuous run of pixels that all fall more than 20% above or below our estimated continuum (we initially identified wide regions with flux above the continuum out of simple curiosity, but found that these are in practice almost always obvious BAL quasars where the continuum has been biased low by the BAL feature). We iterate the continuum redefinition twice, computing the new mean after throwing out pixels more than 20% below the previous mean, but this makes almost no difference to the results. The smoothing was applied to allow easier identification of BALs in noisy spectra. As we show below, this BAL cut makes essentially no difference to our Ly forest result, although it does have a noticeable effect on the power measured redward of Ly emission.
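The automated BAL cut described above can be sketched as follows. This is a toy implementation under assumptions of our own (uniform log-wavelength pixels, an illustrative smoothing width, deviation threshold, and run-length threshold), not the authors' code:

```python
import numpy as np

def flag_bal(wavelength, flux, continuum, width_kms=1000.0, thresh=0.2,
             min_run_kms=2000.0):
    """Toy version of the automated BAL cut: smooth the flux-to-continuum
    ratio, renormalize by its mean, and flag the quasar if a long contiguous
    run of pixels deviates from the continuum by more than `thresh`.
    All parameter values here are illustrative, not the paper's."""
    ratio = flux / continuum
    # Velocity per pixel, assuming a uniform log-lambda grid (c = 3e5 km/s).
    dv = 3e5 * np.median(np.diff(np.log(wavelength)))
    # Gaussian smoothing kernel of rms width width_kms.
    sigma_pix = max(width_kms / dv, 1e-3)
    x = np.arange(-int(4 * sigma_pix) - 1, int(4 * sigma_pix) + 2)
    kern = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kern /= kern.sum()
    smooth = np.convolve(ratio, kern, mode="same")
    # Redefine the continuum by the mean ratio (one iteration for brevity;
    # the text iterates twice with outlier rejection).
    smooth = smooth / smooth.mean()
    # Find the longest contiguous run of pixels deviating by more than thresh.
    bad = np.abs(smooth - 1.0) > thresh
    run, longest = 0, 0
    for b in bad:
        run = run + 1 if b else 0
        longest = max(longest, run)
    return longest * dv > min_run_kms
```

A flat spectrum is not flagged, while a broad, deep trough is; tuning the thresholds to match the paper's 20% criterion on real spectra would require the actual pixelization and smoothing width.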
2.3 Combining Exposures and Calibrating the Noise
SDSS obtains multiple (at least 3) exposures of each quasar. We combine the individual exposure spectra to produce a single spectrum, using a nearest-grid-point method that produces uncorrelated noise and a reasonably well-defined sampling window. For each pixel we record estimates of wavelength, quasar flux, resolution, sky flux, read noise, and two different total noise estimates. The first noise estimate, which we will call the pipeline estimate, is computed using the error array given for the exposure spectra by the spectral reduction pipeline. The second, which we call the component estimate, is computed by summing the read noise and the noise implied by estimates of the number of photons corresponding to the quasar flux and sky flux. The two noise estimates generally do not agree, but this is not a problem for us because we ultimately recalibrate the noise (see below). Finally, we record a χ² for each pixel, computed by treating the determination of the combined flux value for each pixel as a one-parameter fit to the measurements given by the different exposures. Examples of the more important of these quantities in Ly forest regions are shown in Figure 3.
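The per-pixel one-parameter fit used to combine exposures and record a χ² might look like the following. Inverse-variance weighting is our assumption here; the real pipeline additionally handles wavelength gridding and flux calibration:

```python
import numpy as np

def combine_exposures(fluxes, ivars):
    """Combine exposures pixel by pixel, treating the combined flux in each
    pixel as a one-parameter (weighted mean) fit to the individual
    exposures, and recording the chi^2 of that fit per pixel.

    fluxes, ivars : (n_exposures, n_pixels) arrays, ivar = 1/noise variance.
    Returns (combined flux, combined inverse variance, chi2 per pixel,
    degrees of freedom)."""
    w = ivars.sum(axis=0)
    mean = (ivars * fluxes).sum(axis=0) / w          # weighted-mean fit
    chi2 = (ivars * (fluxes - mean) ** 2).sum(axis=0)
    dof = fluxes.shape[0] - 1                        # one parameter per pixel
    return mean, w, chi2, dof
```

A pixel whose exposures scatter more than the stated errors allow will show χ² well above the degrees of freedom, which is how the exposure-combination χ² is used as a diagnostic later in the text (§4.4).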
For comparison to the sky and quasar flux levels, we have converted the Gaussian read noise into the flux of photons that would contribute the same noise variance. Several elements of Figure 3 (e.g., the estimation of the quasar continuum) will be described later in this paper.
The noise estimate from the standard SDSS pipeline is only approximate; the accuracy required for our purpose is much higher than anticipated when the pipeline was developed. For this reason we use the differences between single-exposure spectra of the same quasar to determine the noise properties of the data. We construct difference spectra by combining the flux-calibrated exposures with alternating sign for each exposure, i.e., we use exactly the same procedure that we normally use to produce combined spectra from the exposures, except that half of the exposures are subtracted instead of added, so the mean result is zero (we drop the last exposure when there is an odd number; this is not the most efficient method possible, but we do not need it to be). The result is a direct measure of the exposure-to-exposure changes. We measure the power spectrum of these difference spectra using the method described in §3.1, including noise subtraction based on the pipeline noise estimates for the pixels. The result is shown in Figure 4 (points with error bars).
We obtain a clear detection of power, where there should be none if the spectra differed only by the pipeline noise, which is being subtracted. If we assume that the noise has been underestimated by a constant factor, and fit for that factor using the error covariance matrix estimated by bootstrap resampling, we find a decent fit for 107 degrees of freedom (formally, the fit is not good, because a χ² this high is unlikely by chance). This fit uses our usual set of points in k.
A k dependence of the excess noise different from that expected for white noise could be a problem for us, so we check for this by fitting a power law in k to the difference-spectrum power (white noise corresponds to zero slope), and we find a statistically significant tilt (the χ² is now a reasonable 123.3 for 106 degrees of freedom). Allowing a running of the power law does not improve the fit. The slope we find corresponds to a modest fractional change, at the extremes of our k range, in the roughly 16% excess noise component, i.e., only a small fraction of the total noise power, which is itself a relatively small fraction of the Ly forest power except at the highest k (see Figure 11 below). We henceforth assume that the extra noise follows this tilted power law rather than being white (this makes little difference in the final results, except for the highest k, lowest redshift point, where the difference is 2%).
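A simplified stand-in for the power-law fit to the excess difference-spectrum power: a plain log-space least-squares fit, rather than the covariance-weighted fit used in the text (the pivot scale and error model are our choices):

```python
import numpy as np

def fit_powerlaw(k, p, perr):
    """Fit p = A * (k/k0)^alpha by weighted least squares in log space.
    perr is the 1-sigma error on p; sigma(log p) ~ perr/p, so the
    polyfit weights are p/perr (np.polyfit expects w = 1/sigma)."""
    k0 = np.exp(np.mean(np.log(k)))   # pivot scale: geometric mean of k
    x = np.log(k / k0)
    y = np.log(p)
    alpha, lnA = np.polyfit(x, y, 1, w=p / perr)
    return np.exp(lnA), alpha
```

White noise corresponds to alpha = 0, so a fitted alpha inconsistent with zero is the kind of tilt detection described above.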
How accurate is this noise estimate based on differences between exposures? Our difference spectra will contain a component of the Ly forest power if the calibration between exposures is not perfect. The power in this term would be suppressed relative to the Ly forest power by the square of the fractional calibration error, so it would be very small unless the exposure-to-exposure calibration errors were quite large. The fact that a simple one-parameter extra-noise model fits reasonably well, in the face of variation in redshift, noise amplitude, and scale, argues against calibration errors being a big problem. More convincingly, we measure nearly the same excess noise contribution and slope in the region redward of Ly emission as we do in the Ly forest, which argues against any connection to leaking Ly forest power. Note that the effective absolute level of noise in that region is about half that in the Ly forest region, so this test also shows that the fraction of extra noise does not depend strongly on the noise level itself.
Pixels in different exposures are not perfectly aligned, and misalignment can allow Ly forest power to leak into our difference spectra. To test this alternative explanation for the apparent excess noise, we split the spectra into two groups with approximately equal weight, based on the rms misalignment in the forest region (the alignment is known from the wavelength calibration of the exposures, which is thought to be practically perfect). We find the same excess noise power in both the poorly aligned group and the better aligned group, suggesting that the excess power is not due to misalignment. Furthermore, the presence of a similar level of excess noise power outside the forest region again argues against leakage. We therefore believe that our noise estimate is considerably more accurate than the noise estimate from the SDSS pipeline.
In our initial power spectrum analysis we multiplied the noise power estimated from the pipeline errors by the factor 1.16 for all spectra; however, when we split the data based on the mean value of χ² for the exposure combination (see §4.4), we found that the large-χ² and small-χ² subsamples disagreed significantly on the results. We eliminated this problem by estimating the noise-correction factor individually for each spectrum, by fitting to the power in the difference spectrum for that quasar. The mean extra power from these fits is still close to 16%, but there is considerable scatter. When we use these individual estimates, the correlation between the measured power and χ² disappears, i.e., the mean value of χ² for a spectrum's exposure combination was a good indicator of the amount by which the noise in that spectrum was misestimated. Note that there are statistical errors in these noise estimates for each spectrum, of the same order as the error for which we are trying to correct; however, there is no systematic bias associated with these errors, and the random error they contribute is automatically included in our final bootstrap errors. In fact, including the spectrum-by-spectrum noise estimate reduces the bootstrap errors slightly on small scales, verifying that these estimates are on average more accurate than the original ones. It is not known why the noise is misestimated by the standard pipeline; tests at this level have not been done before.
Our final data product will be a measurement of the flux power spectrum binned in wavenumber k and redshift z, i.e., a matrix where one index labels the redshift bins and the other labels the bins in k. We will also give the noise power that was subtracted in the same bins. We suggest allowing a 5% rms freedom in the noise amplitude in each redshift bin when performing model fits, i.e., for each bin, subtract a free fraction of the tabulated noise power from the measurement and penalize that fraction with a Gaussian prior of rms 0.05 in the fit's χ². This is probably overly conservative for any one bin, but it implies a tighter combined freedom (for 9 bins) on an overall noise misestimation. This seems prudent, even though it is not really required by any test we have performed.
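The suggested noise freedom can be written as a χ² with one nuisance amplitude per redshift bin. This is a sketch of the recipe, not the authors' fitting code:

```python
import numpy as np

def chi2_with_noise_freedom(p_data, p_model, p_noise, sigma, alpha,
                            prior_rms=0.05):
    """Chi^2 implementing the suggested noise freedom: for each redshift
    bin, shift the subtracted noise power by a fraction alpha and penalize
    alpha with a Gaussian prior of rms 5%.

    p_data, p_model, p_noise, sigma : (n_z, n_k) arrays.
    alpha : (n_z,) array of per-redshift-bin nuisance amplitudes."""
    resid = p_data - alpha[:, None] * p_noise - p_model
    return np.sum((resid / sigma) ** 2) + np.sum((alpha / prior_rms) ** 2)
```

In a fit one would minimize over alpha along with the model parameters; each alpha at its 1-sigma prior value contributes exactly one unit of χ².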
2.4 Accuracy of the Resolution
The resolution of the SDSS spectra is estimated using lines from calibration lamps mounted on the telescope structure. Shifts of the detector pixel grid relative to a fixed observed wavelength frame during an exposure, which we will call flexure, are expected to be the dominant source of error in this spectral resolution estimate. We estimated the rate of shifting for each pixel by differencing the wavelength calibrations of adjacent exposures (this calibration is determined very precisely for each exposure using the positions of sky lines). The implied extra smoothing changes the power only slightly, even at our highest k bin.
The strong sky line at 5577 Å provides a good opportunity to measure the resolution more directly (note that the spectral wavelengths are vacuum and heliocentric, so this and other sky lines generally appear shifted from their standard wavelengths). We measure the power spectrum of the sky spectra in a narrow wavelength range around the line. If the sky line has negligible width and the smoothing kernel is Gaussian, the power spectrum should be proportional to the squared Fourier window of the Gaussian resolution and the pixelization, where the pixelization term depends on the pixel width and is subdominant but not negligible. In Figure 5 we show the measured power averaged over all the sky spectra, after dividing each individual measurement by this expectation, evaluated using the local values of the resolution and pixel width (both are to a good approximation constant over the range we are looking at), and also dividing each measurement by its value at a low k where the resolution should not have any effect.
The result is remarkably close to unity, indicating that the estimated resolution is an accurate representation of the true resolution. What are the small wiggles? Figure 6 shows an example of the region we Fourier transform to measure the power.
We believe the small features to the sides of the main line are OH lines at 5564 and 5589 Å (Slanger et al., 2003). We test this explanation for the wiggles by constructing mock sky spectra that simply have a delta function at 5579 Å plus two more lines, with 0.003 times the main line's amplitude, at 5566 and 5591 Å, convolved with the resolution and pixelization (the value 0.003 was chosen to give the best fit to the wiggles). The red, dotted line in Figure 5 shows the same resolution test applied to the mocks. The wiggles are essentially perfectly reproduced. In conclusion, the resolution profile appears to be perfectly Gaussian, with exactly the width expected from the given resolution; there is apparently no room for even a 2% level effect from flexure. We are prevented from performing the same kind of measurement using other sky lines by similar features, which are always much larger relative to the central line.
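The mock sky-line test can be reproduced analytically: the Fourier power of a main line plus two weak satellites, divided by the single-line expectation, leaves small wiggles around unity. The window model (Gaussian times pixelization sinc) follows the description above, but the specific offsets, amplitudes, and units below are illustrative:

```python
import numpy as np

def resolution_test_ratio(k, sigma, dpix,
                          satellites=((-13.0, 0.003), (12.0, 0.003))):
    """Expected power of a mock sky spectrum: a delta-function main line
    plus weak satellites (offset, relative amplitude), all smoothed by a
    Gaussian of rms `sigma` and pixels of width `dpix` (offsets in the
    same length units; k in radians per that unit).  Returns the ratio to
    the single-line expectation, i.e. the 'wiggles' around unity."""
    # Squared window: Gaussian resolution times pixelization sinc.
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(k*dpix/2) = np.sinc(k*dpix/(2*pi)).
    window2 = np.exp(-(k * sigma) ** 2) * np.sinc(k * dpix / (2 * np.pi)) ** 2
    # Complex Fourier transform of the line pattern (main line at offset 0).
    ft = np.ones_like(k, dtype=complex)
    for offset, amp in satellites:
        ft += amp * np.exp(-1j * k * offset)
    power = np.abs(ft) ** 2 * window2   # mock "measured" power
    return power / window2              # divide by single-line expectation
```

With 0.003 amplitude satellites, the ratio oscillates within roughly a percent of unity, consistent with the small wiggles in Figure 5.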
We suggest that fits to our measured power include a multiplicative uncertainty on the overall power, in the form of a k-dependent correction to the resolution kernel controlled by a single parameter subject to a Gaussian rms constraint. This allows for a small change in the smoothing kernel at our highest k, similar to our estimate of the error from flexure. This error estimate is somewhat arbitrary, but the evidence we have presented suggests that the true error should be smaller, so our estimate is conservative.
Note that this resolution test, and the noise calibration, cannot be used directly with the standard pipeline spectra, where the exposures are combined in a different way. The reader may be confused at this point about how our spectra differ from the standard publicly available set, so we give the following summary:

Our nearest-grid-point combination of the exposures produces uncorrelated noise in pixels (to the extent that the noise in the exposures was uncorrelated, which is expected from the way they are extracted), while the standard pipeline uses a local splining procedure that does a good but not perfect job of preventing noise correlations.

When combining exposures we record the effect of different pixel sizes, misalignment of the pixels, and flexure of the detector during exposures, which can influence the effective resolution.

We record the contributions of quasar flux, sky flux, and read noise to the total noise in each pixel. Knowing the contribution from quasar flux is important if pixel-by-pixel noise weighting is to be used, because the correlation between flux level and noise amplitude can lead to biases (see the end of §2.5).

We correct for the bias in the exposure combination associated with the cross-correlation between the noise variance level in exposure pixels and the quasar flux in the pixel (a different incarnation of the problem alluded to in the previous point).

The noise is recalibrated for each spectrum by differencing the exposures. The noise variance in the standard pipeline exposure spectra is underestimated by 16% on average, on top of any error related to the exposure combination, and the power measured in the difference spectra is slightly tilted relative to white noise.
The last point is the most important.
2.5 Determination of the Continuum and Mean Absorption Level
Our goal is to measure the power spectrum of the fluctuations in the transmitted flux fraction through the IGM, F = e^(−τ), where τ is the Ly forest optical depth (as defined in §1). However, the spectrum of each quasar is the product of F and the quasar continuum (note that we use “continuum” to refer to all the flux emitted by the quasar, including emission lines), further complicated by errors in the detector calibration and absorption by longer wavelength transitions. The details of the procedure we use to separate these contributions will be presented elsewhere; here we give the basic idea and the key results relevant for the flux power spectrum determination.
We use an iterative procedure to determine the components of the data model
f_i = A · C̄(λ_rest) [1 + δ_C] · B(λ) [1 + δ_B] · F̄(z) [1 + δ_F] + n_i ,   (2)
where f_i is the raw flux in pixel i, n_i is the noise, A is the overall normalization of the quasar spectrum, C̄ is the mean quasar continuum shape, δ_C describes fluctuations around the mean continuum, B is a mean generalized calibration vector (this includes wavelength dependent calibration errors in the SDSS spectra, but also the mean absorption by metal lines with resonance wavelength longward of Ly), δ_B describes fluctuations around B, such as individual metal lines or variable calibration errors, F̄(z) is the mean Ly forest absorption at a given redshift, and δ_F describes the fluctuations in Ly forest absorption. Note that here, as in most places in this paper, z is the redshift of gas that would produce Ly absorption in the pixel, not the redshift of the quasar. Briefly, we determine A for each spectrum, the global functions C̄, B, and F̄, and a set of principal component analysis (PCA) eigenvectors that describe δ_C, by computing each component of the model, in turn, from all the spectra while holding all the others fixed. E.g., F̄ is estimated in bins of z by averaging over all the pixels in the Ly forest that fall in each bin. We separate B from C̄ by measuring B in a rest wavelength range redward of Ly emission, i.e., outside the Ly forest. A few iterations suffice to determine all the components of the model independently. Three examples of the results are shown in Figure 3. The full details of this procedure will be presented in a separate paper focused on a precise determination of F̄(z).
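The alternating estimation described above can be illustrated with a two-factor toy model (normalization times mean continuum). The real iteration includes the mean absorption, the calibration vector, and PCA terms, each re-estimated the same way while the others are held fixed:

```python
import numpy as np

def separate_components(f, n_iter=20):
    """Toy version of the iterative scheme of equation (2): model each
    spectrum as f[i, j] = A[i] * C[j] (per-quasar normalization times mean
    continuum) and solve by alternately re-estimating each factor while
    holding the other fixed.  The overall-scale degeneracy is fixed by
    normalizing <C> = 1."""
    n_q, n_pix = f.shape
    A = f.mean(axis=1)                      # initial guess: mean flux per quasar
    C = np.ones(n_pix)
    for _ in range(n_iter):
        C = (f / A[:, None]).mean(axis=0)   # continuum from all spectra
        C /= C.mean()                       # fix the A*C degeneracy
        A = (f / C[None, :]).mean(axis=1)   # normalization per spectrum
    return A, C
```

For noiseless multiplicative data the iteration recovers the factors exactly; with noise and the extra factors of equation (2), convergence within a few iterations is the behavior claimed in the text.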
In preparation for measuring the power spectrum, we divide each quasar spectrum by our estimate of the product of the normalization, the mean continuum, the calibration vector, and the mean absorption. The power in the δ_C and δ_B terms is completely negligible (always a small percentage of the measured power and usually much less). The continuum fluctuations δ_C are represented for each quasar by N PCA eigenvectors. We have tried several different values of N ranging from 0 to 13, and find that the power spectrum results depend slightly (but not critically) on the value, as we discuss below. For our final results we use N = 0, i.e., we only divide by a mean continuum, although we will also show how using N > 0 affects the cosmological fit results. We do not use the PCA continua because we are not satisfied with their robustness, and division by them frequently actually increases the resulting power slightly. This may indicate that the error introduced by allowing additional freedom in the continuum is larger than the continuum fluctuations that we are trying to remove. Our study of PCA continua was primarily aimed at determining F̄(z) rather than the power spectrum, so we cannot be certain that the PCA method could not be used productively in a power spectrum measurement if it were more carefully optimized for that application. Because we know that our continuum estimate (which involves an extrapolation from outside the Ly forest region) is not perfect within the Ly forest, we further divide each chunk of spectrum that will be used to make a power spectrum estimate by its mean (optimally computed considering both observational noise and absorption variance). We call the resulting observed data vector δ_F + δ_n, where δ_n is the normalized noise fluctuation and we are ignoring the cross-terms between δ_F, δ_C, and δ_B. As we describe in detail below, the noise is treated as a random background, and its statistical properties are determined by measuring the power spectrum in a rest wavelength range redward of Ly emission, where δ_F = 0.
A small but nonnegligible detail of our procedure is hidden within our description of the normalization of the spectra. When we estimate the mean to divide by, we weight the computation optimally using the covariance matrix, C, of the pixels (C is discussed in more detail in our explanation of the power spectrum measurement below). C includes Ly forest fluctuations and measurement noise. We do not use our best estimate of the measurement noise directly for the weighting, because the noise variance estimate is correlated with the measured flux in the pixel, which leads to a bias: the mean is underestimated because lower flux pixels have lower noise. The original noise estimate is based on the photon counts, i.e., on f_q + f_sky, where f_q is the flux from the quasar and f_sky is the flux from the sky, with a conversion factor between the units of flux and photons (this description is slightly idealized, since the reduction of two-dimensional CCD data to a spectrum introduces some complications). To remove the correlation between flux and noise, we subtract the quasar-flux term from the noise variance used for weighting and add back a flux-independent estimate of it (see Figure 3 for some comparisons of the noise estimates). The conversion factor we have from the spectral reduction pipeline is not perfect, so our replacement of the correlated part of the noise amplitude is imperfect. We make a final, very small, correction to the mean estimation based on a direct computation of the cross-correlation between the flux and noise amplitude. We use the same decorrelated noise amplitudes for weighting the power spectrum extraction (discussed below); however, the bias is completely insignificant in that case (i.e., the power computed using the original noise estimates for weighting is practically identical).
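The inverse-covariance weighting used for the mean estimate can be sketched as follows. This is an illustration in Python/numpy under our own naming, not the pipeline code:

```python
import numpy as np

def optimal_mean(d, C):
    """GLS estimate of the mean of a data vector d with pixel covariance C.

    m_hat = (1^T C^{-1} d) / (1^T C^{-1} 1); for a diagonal C this reduces
    to the familiar inverse-variance weighted average.
    """
    w = np.linalg.solve(C, np.ones_like(d))  # C^{-1} 1
    return w @ d / w.sum()

# Toy check: with a diagonal covariance the estimate is the
# inverse-variance weighted mean of the pixels.
d = np.array([1.0, 2.0, 4.0])
var = np.array([1.0, 1.0, 2.0])
m = optimal_mean(d, np.diag(var))  # -> 2.0
```

With off-diagonal Ly forest correlations included in C, the weights are no longer simply 1/variance, which is the point of using the full covariance.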
3 Power Spectrum Determination
The high precision of the measurement obtainable using the SDSS data sample requires unprecedented (in this field) care in the design and testing of the procedure used to produce it. We describe the basic method that we use to extract a power spectrum and estimate the errors in §3.1. In §3.2 we present the test of the full method as implemented in our code, using mock data sets. In §3.3 we give the raw result for the measurement of power in the Ly forest region.
We aim to measure the power spectrum of δ_F, representing the correlation of fluctuations in the Ly forest absorption only; however, the covariance matrix of the data vector δ is
C_{ij} \equiv \langle \delta_i \delta_j \rangle = \langle \delta_{F,i}\,\delta_{F,j} \rangle + \langle \delta_{B,i}\,\delta_{B,j} \rangle + \langle \delta_{n,i}\,\delta_{n,j} \rangle \qquad (3)
(the three components of δ as we have defined it should be uncorrelated). The noise term in equation (3) is relatively easy to compute and subtract. We estimate and subtract most of the δ_B power by defining
P_F(k, z) = P_{\rm forest}(k, z) - P_{1268-1380}(k, z) \qquad (4)
where z is always defined by the observed wavelength, so that we are subtracting power measured in the same observed wavelength ranges, not the same quasar spectrum (we remind the reader that we have defined P_{1268-1380} to mean the power measured in the rest wavelength range 1268–1380 Å). As it appears in P_{1268-1380}, z is the redshift of gas that would produce Ly absorption in this part of the quasar spectrum, if it were not at a higher redshift than the quasar, i.e., z is really just an indicator of observed wavelength. The subtraction in equation (4) will completely remove the power due to transitions with rest wavelength longer than 1380 Å, including SiIV (a doublet absorbing at rest wavelengths 1393.75 and 1402.77 Å) and CIV (another doublet at 1548.20 and 1550.78 Å). Note that this subtraction of metal power is exact, not an approximation, because we are determining the metal power in exactly the same observed wavelength range as the Ly forest power from which it is being subtracted, i.e., the same gas, at the same redshift, is doing the absorbing both inside and outside the forest, so the absorption will have identical statistical properties. This background subtraction will also remove any strictly observed-wavelength-dependent power introduced by the detector, such as spectrum-to-spectrum variations in the calibration of the detector. We implement it in §3.4.
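As a minimal sketch of this bookkeeping (assuming the standard Ly rest wavelength of 1215.67 Å; function names are ours), the mapping from observed wavelength to absorber redshift and the per-band background subtraction look like:

```python
import numpy as np

LYA = 1215.67  # Ly rest wavelength in Angstroms

def z_absorber(lambda_obs):
    """Redshift of gas that would produce Ly absorption at lambda_obs."""
    return lambda_obs / LYA - 1.0

def subtract_background(p_forest, p_background):
    """Band-by-band subtraction of background power measured at the same
    observed wavelengths, i.e., in the same (k, z) bins."""
    return np.asarray(p_forest) - np.asarray(p_background)

# Observed wavelengths of ~3646 and ~4863 Angstroms correspond to
# Ly absorber redshifts of about 2 and 3.
zs = z_absorber(np.array([3646.0, 4863.0]))
```

The key point encoded here is that z is keyed to observed wavelength, so the forest band and the background band being differenced always refer to the same gas redshifts.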
3.1 Core Method
In this subsection we describe our method for extracting the power spectrum, , from any selected rest wavelength range (§3.1.1), and estimating its statistical uncertainty (§3.1.2).
3.1.1 Band-Power Estimation
We estimate P_F(k) using the quadratic estimation method, which is essentially a fast iterative implementation of the maximum likelihood estimator (we follow the expressions as given in Seljak (1998)). This method is optimal for a Gaussian probability distribution. While the power spectrum estimates are not Gaussian distributed, the deviations are small, as shown below. We measure the power in flat bands with edges given by a fixed grid in k, indexed from 0 to 30 (to produce 29 bands), although we will not give results for some of the large- and small-scale bands when we think they are unreliable. Defining δ = δ_F + δ_n, where δ_F are the fluctuations we are measuring (e.g., within the forest) and δ_n are the normalized noise fluctuations, a band-power estimate, P_α, for each chunk of spectrum is given by
\hat{P}_\alpha = \frac{1}{2} \sum_\beta F^{-1}_{\alpha\beta} \left( \delta^T C^{-1} C_{,\beta}\, C^{-1} \delta - b_\beta \right) \qquad (5)
where δ is the data vector for the chunk, C = ⟨δ δ^T⟩ is its covariance matrix, C_{,α} ≡ ∂C/∂P_α, N = ⟨δ_n δ_n^T⟩ is the noise covariance,
F_{\alpha\beta} = \frac{1}{2}\, \mathrm{Tr}\!\left( C^{-1} C_{,\alpha}\, C^{-1} C_{,\beta} \right) \qquad (6)
is the Fisher matrix and the noise bias is
b_\alpha = \mathrm{Tr}\!\left( C^{-1} C_{,\alpha}\, C^{-1} N \right) \qquad (7)
Note that we could include the background power explicitly in these equations as a noise source when measuring the power in the Ly forest region, but we ignore this because its contribution is too small to change the weighting significantly. We will subtract it from the estimates later. The noise subtraction term, b_α, is computed using the pipeline noise estimates (not the decorrelated versions used for weighting), with the amplitude corrected as discussed above based on the differences between exposures. In principle, C in these equations should be the true covariance matrix; however, as we discuss below, we use the measured covariance from a previous iteration of the power spectrum determination instead.
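A toy numpy implementation of the estimator in equations (5)–(7) can be checked for unbiasedness analytically, by feeding in the expectation value of the quadratic term instead of noisy data; this is our own sketch, not the production code:

```python
import numpy as np

def quadratic_estimate(templates, N, C, q):
    """Quadratic band-power estimate P_a = (1/2) sum_b Finv_ab (q_b - b_b),
    with F_ab = Tr(Cinv C,a Cinv C,b)/2 and b_a = Tr(Cinv C,a Cinv N).
    The caller supplies q_a = d^T Cinv C,a Cinv d (or its expectation)."""
    Cinv = np.linalg.inv(C)
    n = len(templates)
    F = np.empty((n, n))
    b = np.empty(n)
    for a, Ta in enumerate(templates):
        Ma = Cinv @ Ta @ Cinv
        b[a] = np.trace(Ma @ N)
        for c, Tc in enumerate(templates):
            F[a, c] = 0.5 * np.trace(Ma @ Tc)
    return 0.5 * np.linalg.solve(F, q - b)

# Unbiasedness check: build C from two rank-2 cosine band templates plus
# white noise, and use the *expectation* E[q_a] = Tr(Cinv C,a Cinv C).
x = np.arange(12.0)
templates = [np.cos(k * (x[:, None] - x[None, :])) for k in (0.5, 1.5)]
p_true = np.array([0.5, 0.3])
N = np.eye(12)
C = p_true[0] * templates[0] + p_true[1] * templates[1] + N
Cinv = np.linalg.inv(C)
q_exp = np.array([np.trace(Cinv @ T @ Cinv @ C) for T in templates])
p_hat = quadratic_estimate(templates, N, C, q_exp)  # recovers p_true
```

Since E[q_a] − b_a = Σ_c 2F_ac P_c, the estimator returns the input band powers exactly, which is the defining property of equations (5)–(7).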
Except in a few cases that we will identify as they arise, when we set out to measure the power in a defined rest-wavelength region (e.g., the Ly forest region) we first use equation (5) to estimate the power separately in the two halves of the region in each spectrum. Our choice of half-spectra is a compromise between competing desires for resolution in redshift and wavenumber. The full length of the forest in a spectrum corresponds to a redshift interval that is unnecessarily large. While the precision of the measured power spectrum would support smaller-than-half-spectrum chunks to give finer redshift resolution, the shorter chunks would limit the k-space resolution. Note that we could have used full-spectrum chunks and still achieved the same redshift resolution by more carefully applying the estimator equation, as we discuss below, but this would increase the computational time without much improvement in the final errors on the scales of relevance.
After computing estimates for each half-spectrum, we perform a weighted average to determine P_F(k, z) in a regular grid of redshift bins. Each bin is the average of the power in all the half-spectra for which the redshift of the central pixel falls within half a bin width of the bin center (we discuss below how we correct for an asymmetric distribution of data within a bin). We combine sets of estimates using the Fisher matrices (equation 6) for the weighting. In practice this means that we sum the quantity in parentheses in equation (5) over all estimates and multiply the result by the inverse of the sum of the Fisher matrices for the individual estimates. Our procedure would be optimal for Gaussian data, which the Ly forest is not; however, when we use the Gaussian approximation to compute the errors on the measured power the results are not much different from the more accurate bootstrap errors (see §3.1.2), so we conclude that our method is not far from optimal.
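The Fisher-weighted combination step can be sketched as follows (illustrative names; in the text the pre-Fisher quantities are summed directly, which is algebraically the same thing):

```python
import numpy as np

def combine_estimates(F_list, P_list):
    """Fisher-weighted combination: P = (sum_i F_i)^-1 sum_i (F_i P_i)."""
    F_tot = np.sum(F_list, axis=0)
    rhs = np.sum([F @ P for F, P in zip(F_list, P_list)], axis=0)
    return np.linalg.solve(F_tot, rhs)

# For scalar (single-band) estimates this reduces to the usual
# inverse-variance weighted mean: (1*2 + 3*4) / (1 + 3) = 3.5.
F_list = [np.array([[1.0]]), np.array([[3.0]])]
P_list = [np.array([2.0]), np.array([4.0])]
P = combine_estimates(F_list, P_list)
```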
Whenever we have a finite length of spectrum, there will be mixing between the power in different bins. Variable noise or gaps in the data will produce more mixing. This mixing is described in terms of a window matrix, which is given by the Fisher matrix in equation (6). In our standard procedure, the power spectrum estimates in equation (5) are multiplied by the inverse of the Fisher matrix and are thus deconvolved with the window, which removes the mixing of other modes into the bin one is estimating (however, the bins are still correlated). This method thus produces a diagonal window matrix, so each combined estimate of P_F represents exactly the range of k corresponding to its bin. Our tests below show that there is no practical problem with instability in the Fisher matrix inversion (the window matrices are close to diagonal to begin with). A diagonal window matrix is desirable from a theoretical standpoint because our ability to compute the power spectrum from simulations is limited at both low k (by limited box size) and high k (by simulation resolution and complexity of physics). In the few cases where we use the power without deconvolution, we are using the undeconvolved estimator of Seljak (1998).
To compute the weight matrix C^{-1}, we need an estimate of C, i.e., of the power spectrum we are trying to measure. We solve this problem by computing C iteratively, starting from a crude initial guess. This procedure converges quickly (the difference between the initial guess and a reasonable estimate of C is significant, but once C is in the right ballpark it does not matter what it is exactly). We add a large constant (10.0) to all elements of the covariance matrix before inversion, to remove all direct sensitivity of our power measurement to the mean of the chunk. This makes very little difference to the results on the scales we present. We are, however, still sensitive to the mean estimate from when we divided the spectrum by it. Even if the mean estimates are correct on average, the statistical error on the mean for each spectrum can still lead to a bias. If the errors on the mean estimate were small and uncorrelated with the fluctuations in the flux field, the bias would be a simple multiplicative factor determined, to lowest order, by the square of the fractional error in the mean. We divide each estimate by this factor as part of our standard procedure; however, as we discuss below when we test our code on mock spectra, this approximation is not sufficient and we will need to include another small, scale-dependent factor determined numerically using the mock spectra (this is the only use of the mock spectra other than for testing).
The reader may at this point be wondering what redshift the resulting P_F(k, z_j) should be taken to represent, i.e., z_j is not necessarily the center of weight of the data, and neither is the mean redshift of the pixels in the bin, considering the rather complicated weighting in equation (5). In fact, the effective redshift is not even the same for each k bin in the same redshift bin. We resolve this question – P_F(k, z_j) represents the power spectrum at precisely z_j (to first order) – in our construction of C and C_{,α}, where each pair of pixels carries its own redshift and the chunk is assigned to a redshift bin centered at z_j. To account for the evolution from the pixel-pair redshifts to z_j, we define a power spectrum growth factor through the logarithmic derivative of the power with redshift, where
\nu(k, z_j) \equiv \frac{d \ln P_F(k, z)}{dz}\bigg|_{z_j} \simeq \frac{\ln P_F(k, z_{j+1}) - \ln P_F(k, z_{j-1})}{z_{j+1} - z_{j-1}} \qquad (8)
(we use a one-sided derivative estimate instead of equation 8 for the first and last redshift bins). Now the elements of C and C_{,α} for a pixel pair are scaled from the bin center to the pair's effective redshift using this growth factor, rather than being computed as if the pixels were located at the center of the bin. This correction may be difficult to understand intuitively at first, but it is really quite simple. The modification of C_{,α} just corrects the power spectrum estimate for the excess (dearth) of power that we expect for pixels in the high (low) redshift ends of the bin. The correction to C affects the weighting, simply producing a more accurate C at the redshift of the pixels in question.
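A sketch of the finite-difference growth factor and the implied rescaling, with a toy power-law evolution as a check (all names ours):

```python
import numpy as np

def dlnP_dz(z_bins, P_bins, j):
    """Two-sided estimate of dlnP/dz at bin j (one-sided at the ends)."""
    lo, hi = max(j - 1, 0), min(j + 1, len(z_bins) - 1)
    return (np.log(P_bins[hi]) - np.log(P_bins[lo])) / (z_bins[hi] - z_bins[lo])

def evolve_power(P_center, nu, z_pair, z_center):
    """Scale a bin-center power to the effective redshift of a pixel pair,
    assuming locally exponential evolution ln P ~ nu * (z - z_center)."""
    return P_center * np.exp(nu * (z_pair - z_center))

# Toy check: for P proportional to (1+z)^4 the analytic logarithmic
# derivative at z = 2.6 is 4/(1+z) = 1.111...
z_bins = np.array([2.2, 2.6, 3.0])
P_bins = (1.0 + z_bins) ** 4
nu = dlnP_dz(z_bins, P_bins, 1)
```

At the bin center itself the rescaling is the identity, so the combined estimate still represents the power at the bin-center redshift.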
Note that an alternative method would be to treat the points as simply parameters of a continuous power spectrum defined by some form of interpolation. This would mean the covariance would have a nonzero derivative with respect to more than one of the power spectrum bins (e.g., usually two for linear interpolation). This method would be elegant, and would probably produce narrower effective window functions in the redshift direction; however, it would increase the correlation between measurement errors in that direction, because the same pixels would contribute to more than one power spectrum point. Since this more sophisticated method would allow long chunks of spectra to be used without degrading our resolution, it would be most useful if we were trying to measure the power on even larger scales.
How does our method compare to the straightforward Fourier transform (FT) method? The basic FT method is to project the data vector, δ, onto a set of modes of the form e^{ikx}, and to simply compute the variance of the amplitudes of all the modes with k in some bin, i.e.,
\hat{P}(k) = \frac{\Delta x}{N} \left\langle \left| \sum_{j=1}^{N} \delta_j\, e^{i k x_j} \right|^2 \right\rangle_{k_1 \le k < k_2} \qquad (9)
where k_1 and k_2 define the bin, and the discrete spacing of k is somewhat arbitrary (the natural spacing is 2π/L, where L is the length of the spectrum, but nothing prevents one from choosing more finely spaced k's). Our estimator, equation (5), can be cast in a similar form, i.e., as a projection of the data vector onto a set of modes, and a sum of the squares of the mode amplitudes. We require that the mode amplitudes be statistically independent, which makes their computation equivalent to a computation of Karhunen-Loève eigenmodes (see, e.g., Tegmark et al. (1997)). Figure 7 shows the two most important modes for one of our bins.
In this case two modes differing primarily by a phase shift, analogous to sin(kx) and cos(kx), contain most of the information, because the bin width is comparable to the natural mode spacing. We see that the difference between our modes and a simple sine wave is not dramatic – there is a little bit of edge tapering (downweighting the edges to make the effective window on the data more compact in Fourier space) and some straightforward downweighting of the most noisy pixels. Curiously, there seems to be an additional effect where pixels adjacent to an edge are given extra weight, possibly as a way of compensating for missing data (this is seen more clearly in spectra where a narrow gap is present in the middle of the data). The picture is similar for bins with larger k, except of course that there are increasingly many important modes as the width of our bins increases (the bins have a fixed logarithmic width, while the relevant mode width is fixed in k). For more discussion of the quadratic estimator see, e.g., Tegmark (1997).
The method we adopt is optimal for Gaussian fields, so for nearly Gaussian data no other method can do significantly better. An additional advantage is that within this formalism the window and covariance matrices are automatically computed. For continuous spectra with few gaps and near-uniform noise one does not necessarily expect an FT method to be significantly worse. In practice the noise level is slowly varying across the spectrum, so averaging all the pixels uniformly is not optimal and degrades performance. Another advantage of our method is that each pixel pair has its own effective redshift, and the correlations for a given pair are then interpolated to the redshift of interest using the appropriate evolution. In the FT method the whole spectrum is Fourier transformed first, so the redshift information is preserved only in an averaged sense, and a priori it is not clear how this average is defined.
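For comparison, the naive FT estimator can be sketched in a few lines (illustrative only; the normalization convention here is one common choice, not necessarily the paper's):

```python
import numpy as np

def ft_power(d, dx):
    """Naive FT band-power: P(k) = |FFT(d)|^2 * dx / N, uniform weighting,
    no treatment of gaps, variable noise, or redshift evolution."""
    n = len(d)
    k = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi
    P = np.abs(np.fft.fft(d)) ** 2 * dx / n
    return k, P

# Sanity check: unit-variance white noise has flat power P = sigma^2 * dx,
# so the mean over all modes should be close to 1 for dx = 1.
rng = np.random.default_rng(0)
d = rng.normal(0.0, 1.0, 4096)
k, P = ft_power(d, dx=1.0)
P_mean = P.mean()
```

The shortcomings listed in the text (uniform pixel weighting, averaged-out redshift information) are visible here: every pixel enters with equal weight and the transform spans the whole chunk at once.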
3.1.2 Bootstrap Error Estimation
While the Fisher matrix obtained during the estimation process would give the error matrix for P_F if the data were Gaussian, we cannot reliably assume this. Our solution is to compute a bootstrap error matrix by the standard procedure (Press et al. 1992). From our data set of spectra, we form a bootstrap data set by selecting the same number of spectra at random, with replacement. The covariance matrix of P_F is taken to be C_{ij} = ⟨ΔP_i ΔP_j⟩, where ΔP_i = P_i − ⟨P_i⟩, P_i is an estimate of the power in the ith bin from a bootstrap data set, and ⟨…⟩ means an average over bootstrap realizations. We generally use 4000 realizations, after checking that this produces convergence in the result. We assume that the error correlations extend only one bin off-diagonal in the redshift direction, because the spectrum of a single quasar practically never contributes to nonadjacent bins.
We have no compelling reason to believe that this method of computing the error bars will give rigorously correct results. Considering the large number of off-diagonal elements that must be estimated, one worry is that a particular linear combination of the bins may accidentally vary very little in our data set, so it will appear to have an unrealistically small error. Our tests on mock spectra (§3.2.2) show no sign of this problem. Still, to be conservative we apply one tweak to the bootstrap covariance matrix after it is computed, in an attempt to inoculate it against this possibility. We perform a singular-value decomposition on the matrix, which produces a set of independent vectors and their variances. We then compute the variances of the same vectors under the Gaussian approximation, using the Fisher matrix. If the bootstrap variance is smaller than the Gaussian variance we replace it with the Gaussian variance. Finally, we transform back to the original basis. The tests on mock samples described below give us confidence that our procedure is reliable.
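The bootstrap-with-Gaussian-floor idea can be sketched as follows. We use a symmetric eigendecomposition, which is equivalent to an SVD for a symmetric covariance matrix; all names and the toy inputs are ours:

```python
import numpy as np

def bootstrap_cov(P, n_boot, rng):
    """Bootstrap covariance of the mean band powers.

    P: (n_spectra, n_bins) array of per-spectrum band-power estimates."""
    n = len(P)
    means = np.array([P[rng.integers(0, n, n)].mean(axis=0)
                      for _ in range(n_boot)])
    return np.cov(means, rowvar=False)

def apply_gaussian_floor(C_boot, C_gauss):
    """Decompose C_boot into independent directions; in each direction,
    replace the bootstrap variance by the Gaussian one if it is smaller."""
    var, U = np.linalg.eigh(C_boot)                    # C_boot = U diag(var) U^T
    var_g = np.einsum('ia,ij,ja->a', U, C_gauss, U)    # u_a^T C_gauss u_a
    return (U * np.maximum(var, var_g)) @ U.T

rng = np.random.default_rng(1)
P = rng.normal(0.0, 1.0, (200, 3))      # toy per-spectrum estimates
C_b = bootstrap_cov(P, 500, rng)
C_f = apply_gaussian_floor(C_b, np.eye(3) * 0.1)
```

After the floor is applied, no direction of the covariance matrix has less variance than the Gaussian approximation assigns it, which is exactly the conservative behavior described above.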
3.2 Tests on Mock Data Sets
We validate our procedure as implemented in code by applying it to mock data sets. Many iterations of these tests were required to produce results that show no serious problems in the error estimation or the power spectrum estimation itself. Testing the results on realistically created mock data is absolutely essential for measurements of such high precision. In §3.2.1 we describe our procedure for generating the mock spectra. We test our bootstrap error estimates in §3.2.2. Finally, we test the power spectrum estimation procedure for systematic errors in §3.2.3.
3.2.1 Generating Mock Spectra
We generate mock spectra by combining the auxiliary information we have for each observed spectrum (e.g., our continuum estimate, noise estimate, sky estimate, etc.) with a simplified version of the Bi et al. (1992) model for the transmitted flux field, which results in realistic-looking spectra.
For each observed quasar we start with the estimate of everything we divide by before computing the power spectrum (see equation 2). We multiply this by a mock transmitted flux field (generated as described below), smooth the result using the resolution from the observed spectrum, and sample the result onto the observed grid of pixels. This produces a noise-free version of the flux we would observe coming from this quasar. We add flux from the sky as estimated for the observed spectrum, and transform the total flux to the number of photons that would be expected in each pixel. We generate a Poisson deviate with this mean, add the appropriate Gaussian read noise for each pixel, transform back to the original flux units, and subtract the sky flux estimate to obtain an observed (noisy) quasar spectrum. The results of this procedure for each observed quasar are written into files in the same format as the observed spectra, so exactly the same code can be used to measure the power in the mock spectra.
To generate the mock flux fields we use a simple model that is arranged to give roughly the correct power spectrum as a function of k and z, and the correct mean absorption as a function of redshift. For each observed spectrum, we start by generating a Gaussian random field on a very long, relatively finely spaced grid (65536 cells, to be precise), with power spectrum
(10) 
where the parameters were chosen after some experimentation because they produce a final flux power spectrum with approximately the same k dependence as the observed one. An arbitrary cell in this grid is chosen to correspond to the redshift of the quasar, and the evolution of the amplitude of the power spectrum with redshift is imposed by a redshift-dependent rescaling of the field, with the form of the rescaling chosen so that the final flux power spectrum would evolve like the observed one. Next we make the squared lognormal transformation of the field, with the variance computed from the input power spectrum, including the amplitude factor (a compensating factor in the exponential just fixes the mean of the lognormal field to 1). We smooth the field with a Gaussian filter and multiply it by a redshift-dependent factor to produce an optical-depth-like field (this redshift evolution factor produces roughly the observed redshift evolution of the mean absorption). The mock transmitted flux in each grid cell is then the exponential of minus this field, which is sampled as described above.
The procedure described above leads to realistic-looking spectra of the Ly forest. We have verified that it generates a bispectrum that is within a factor of 2 of the one measured in N-body simulations. The main advantages of this procedure over the N-body simulation approach when generating the mock spectra are that it is faster, so one can make an arbitrary number of independent realizations, and that the simulated spectra can be of arbitrary length, which is important for eliminating any periodicity effects (this would be impossible with simulations, where a typical box size is much shorter than the total length of a single spectrum). Both of these advantages are critical for a high precision test. We determine the true power spectrum of the mocks by a simple FFT of extremely long fields (without redshift evolution).
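A stripped-down illustration of this style of mock generation (not the paper's exact recipe; the transfer function, smoothing, and amplitude below are arbitrary choices of ours):

```python
import numpy as np

def lognormal_flux(n_cells, sigma_g, tau_amp, seed=0):
    """Toy lognormal mock: correlated Gaussian field -> mean-one lognormal
    optical-depth-like field -> transmitted flux F = exp(-tau_amp * n)."""
    rng = np.random.default_rng(seed)
    white = rng.normal(size=n_cells)
    # Impose correlations with an (arbitrary) Gaussian transfer function in k.
    k = np.fft.rfftfreq(n_cells)
    delta = np.fft.irfft(np.fft.rfft(white) * np.exp(-(k / 0.05) ** 2), n_cells)
    delta *= sigma_g / delta.std()
    # Squared-lognormal field; the -2*sigma^2 term fixes the mean near 1.
    n = np.exp(2.0 * delta - 2.0 * sigma_g ** 2)
    return np.exp(-tau_amp * n)

F = lognormal_flux(65536, 0.3, 0.5)
```

By construction the flux lies strictly between 0 and 1, and redshift evolution (ignored here) would enter as slow drifts of the amplitude and of tau_amp along the grid.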
3.2.2 Tests of the Error Estimates
Without accurate statistical errors it is difficult to identify systematic problems, so we first test our bootstrap procedure for estimating the errors. Note that there is no reason to expect bootstrap errors to be perfect (there is even some ambiguity in how exactly the bootstrapping should be done when the data do not consist of statistically identical objects). Regardless of systematic errors in the method, the only difference between the power spectra measured from two mock data sets that differ only in the random seed that was used to create them should be the statistical errors that we estimate. We test our error bars by generating ten different sets of mock data and computing χ² for the differences between each of them and their error-weighted mean, using the bootstrap error bars and the 108 points in k and z. This is effectively a fit of 108 parameters to 1080 data points, with 972 degrees of freedom. The total χ² is 939, perfectly consistent with a random fluctuation around the mean, and strongly disfavoring an underestimation of the errors by more than a couple percent.
3.2.3 Tests of the Power Estimates
We can now search for systematic errors. To enhance the statistical significance of any errors, we average our ten sets of mock spectra to form a single, more precise measurement. The result is shown in Figure 8.
The results look reasonably good; however, we find an unacceptably bad χ² for the comparison between our measured and the true power spectrum (there are 108 degrees of freedom). To quantify the systematic problem, we first assume the bias has a simple amplitude and slope form and find the parameter values that minimize χ² in the comparison. We find small but statistically significant amplitude and slope corrections, with an acceptable χ² for 106 degrees of freedom [the pivot point was chosen to make the errors independent; the amplitude coefficient would be larger if we were not already dividing by the mean-estimation bias factor, as explained in §3.1.1]. The combination of slope and amplitude errors corresponds to a 3.1% excess of power at our largest scale and a 1.3% underestimate at the smallest scales we use. We find some less significant dependencies by generalizing the fitting formula even further and fitting for the values of
(11) 
where , with . The parameters are , , , , and , with . Where does this bias come from? We expect some bias related to the division of each chunk of spectrum by its overall mean (not because of an integral constraint suppression of large-scale power – our estimator should take care of that – but because of statistical error in the estimate of the mean that we divide by). When we measure the power without this division by the mean, which we can only do using mock spectra, we find significantly smaller corrections – small enough to ignore when model fitting.
We expect that this bias should be present when we use real observed spectra, so we will correct for it by dividing the measured power by the fitted bias function. We describe its effect on the amplitude and slope of the power spectrum below (Table 1).
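The correction step can be illustrated with a simple amplitude-plus-slope fit in ln k (hypothetical numbers; np.polyfit stands in for whatever fitting machinery is actually used):

```python
import numpy as np

def fit_bias(k, ratio, k_pivot):
    """Fit ratio(k) ~ b0 + b1 * ln(k / k_pivot) and return the model."""
    b1, b0 = np.polyfit(np.log(k / k_pivot), ratio, 1)
    return lambda kk: b0 + b1 * np.log(kk / k_pivot)

# Synthetic measured/true ratio: 2% amplitude excess with a small tilt.
k = np.logspace(-3, -1, 20)
ratio = 1.02 - 0.01 * np.log(k / 0.01)
bias = fit_bias(k, ratio, 0.01)

# The correction: divide the measured power by the fitted bias.
P_corrected = 1.0 / bias(k)   # for a measured power identically equal to 1
```

With noiseless synthetic input the fit recovers the amplitude at the pivot (1.02 here) and the logarithmic slope exactly.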
3.3 Raw Power
Figure 9 shows the raw power measured in our standard Ly forest rest wavelength range.
All the figures in this subsection show , not the background subtracted power . Our normalization convention is:
P_F(k) = \int_{-\infty}^{\infty} dx\, e^{ikx} \left\langle \delta_F(\tilde{x})\, \delta_F(\tilde{x}+x) \right\rangle \qquad (12)
We usually plot the dimensionless quantity Δ²(k) = k P_F(k)/π, the contribution to the variance of δ_F per unit ln k.
Figure 10 shows the fractional errors on all of the measured points.
The errors are less than 5% for most of the points, and frequently as small as 3%. If we were only estimating a single amplitude parameter by combining all these points, its error would be 0.6%. An overall logarithmic slope would be determined with comparably high precision. The errors on the largest scales are increased somewhat by the diagonalization of the window matrix.
Figure 11 shows the ratio of subtracted noise power to measured signal power for each point.
The noise power is significant (20–30%) on all scales, and its relative contribution diverges at high k, where the resolution suppresses the absorption power. The lowest redshift bin has the most noise, due to the lower Ly forest power combined with extra noise at the short wavelength end of the spectra.
Figure 12 shows our window matrix (at ), which we proceed to diagonalize.
The matrix is reasonably close to diagonal already, with large contributions only from adjacent bins. It is useful to diagonalize the matrix at this stage, rather than waiting until the model-fitting stage, because this allows us to compute bootstrap errors directly for the final bins (the bootstrap error calculation and window matrix diagonalization do not perfectly commute).
Figure 13 shows the ratio of the bootstrap errors to the errors estimated assuming the data are Gaussian.
We did not apply the Gaussian floor to the bootstrap errors when making this figure. Typically the bootstrap errors are 0–20% larger than the Gaussian errors. Figure 14 shows examples of the estimated correlation between the errors.
We note that diagonalizing the window matrix noticeably reduces the error correlations. The bootstrap errors are, in contrast to the Gaussian errors, noticeably correlated across the full range. These differences between bootstrap and Gaussian errors are not necessarily an indication of intrinsic non-Gaussianity in the absorption fluctuations. Possible alternative explanations for the differences include the uncertainty in the mean flux value that each chunk of spectrum is divided by and the uncertainty in the noise-subtraction term for each chunk, neither of which is included in the Gaussian estimate and both of which would increase the error in a way that is correlated across bins.
3.4 Background Subtraction
Our background subtraction term is the power measured in the rest wavelength range 1268–1380 Å. Figure 15 shows this power together with the Ly forest power for comparison.
One bump in the background power is probably due to the CIV doublet, at the scale of its velocity separation; a second bump may be due to the SiIV doublet at its corresponding separation. Figure 16 shows the background power compared to the Ly forest power.
We see that, even though the background power is a small fraction of the Ly forest power, it is quite significant when compared to the small size of the errors on the Ly forest power. It is important to remember that even a small overall systematic error can be very significant if it covers many data points (e.g., a 1σ error over 100 points shifts the mean by 10σ).
We are going to subtract the power in the range 1268–1380 Å from the Ly forest power, but it is informative to measure the power at other places in the quasar rest frame for comparison. The range 1409–1523 Å includes CIV absorption (at 1548.20 and 1550.78 Å) but excludes SiIV (at 1393.75 and 1402.77 Å) and shorter wavelength transitions. Figure 17 shows the power measured in this range.
If all of the power were coming from metal line absorption, the power in the range 1409–1523 Å should always be less than the power in the range 1268–1380 Å. As we see in Figure 18, which shows the difference in the background fractions, the power in 1268–1380 Å is indeed greater than in 1409–1523 Å, except on large scales.
The difference on large scales suggests that there is a tiny amount of power left in the quasar continua (in spite of our division by the mean continuum), which is larger in the range 1409–1523 Å than in the range 1268–1380 Å. Finally, Figure 19 shows the power in the range 1558–1774 Å, past the wavelength of CIV absorption.
The reduction of power relative to shorter wavelengths is dramatic, but not surprising, since CIV is the most common metal absorber. It does suggest, however, that most of the power is due to metals and not continuum fluctuations, unless the continuum in the range 1558–1774 Å has significantly less power relative to other intervals studied here (which is admittedly not inconceivable). It seems likely, although we are not certain, that the background power has a noticeable contribution from measurement-related problems, because the alternative is a very sudden increase in metal absorption power.
What is the upshot from these studies? The metal absorption appears to contribute a small but significant amount of power, which should also appear in the Ly forest region. We subtract this power from the power measured in the forest. There is some indication of measurementrelated problems in our lowest redshift bin. The power contributed by deviations of the quasar continua from their mean appears to be small.
While the idea that the background power contains almost exclusively power due to simple metal absorption seems plausible at this point, when we perform consistency checks in §4.4 we find evidence that this is not the case. Splitting the data set used to measure the background power in half based on the noise level in each spectrum, we find that the power in the two halves is significantly different, by as much as 50% in some bins. Splits based on several other properties of the data (e.g., sky-to-quasar flux ratio) also show significant differences, but we find that these differences can all be accounted for by the difference in the basic noise level in the subsamples. Splits of the Ly forest data set show similar trends with the splitting parameters, although the fractional differences are much smaller. While we do not know the source of this noise dependence, it is not hard to imagine relatively benign reasons for it. For example, if sky subtraction is imperfect, this would add an increasing amount of power as the sky flux, and thus the noise level, increases relative to the quasar flux. The procedure we describe next would remove this power.
Since we know that the background power depends on noise, it seems logical to subtract the value corresponding to the level of noise in the forest, rather than the best measured value, which is dominated in practice by data with less noise. If we had simply misestimated the noise by an overall factor, for example, the errors in the forest and background regions would cancel for this form of subtraction. To implement this idea, we model the background subtraction term as a linear function of the noise level,
P_{1268-1380}(k, z, \bar{n}) = a(k, z) + b(k, z)\, \bar{n} \qquad (13)
where n̄ is the mean noise level in the data, computed in the same way as the mean flux level (this is the mean of the normalized noise, i.e., after division by continuum and mean flux). The choice of a linear relation is arbitrary, but it does the job (see §4.4) better than the alternatives we tried. We compute the intercept and slope for each value of k and z using a linear fit to the full sample of spectra that probe that bin, weighted by the Gaussian estimate of the error on each point for each spectrum. When the time comes to subtract the background from the forest power to obtain P_F(k, z), we use n̄ computed in the Ly forest wavelength range to compute the appropriate subtraction term. Figure 20 shows the extra power subtracted through equation (13), beyond what we would subtract if we simply used the mean background power from Figure 16.
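A sketch of the equation-(13)-style fit and its evaluation at the forest noise level (synthetic inputs, our own function names):

```python
import numpy as np

def fit_noise_model(nbar, P_bg, sigma):
    """Weighted least-squares fit P_bg ~ a + b * nbar across spectra.

    nbar:  mean normalized noise level per spectrum
    P_bg:  background band power per spectrum (one (k, z) bin)
    sigma: Gaussian error estimate on each P_bg point
    """
    b, a = np.polyfit(nbar, P_bg, 1, w=1.0 / np.asarray(sigma))
    return a, b

# Synthetic, noiseless relation: intercept 0.01, slope 0.05.
nbar = np.array([0.1, 0.2, 0.3, 0.4])
P_bg = 0.01 + 0.05 * nbar
a, b = fit_noise_model(nbar, P_bg, np.full(4, 0.001))

# Subtraction term evaluated at the forest-region noise level (0.25 here).
P_subtract = a + b * 0.25
```

Evaluating the fitted line at the forest noise level, rather than at the (typically lower) mean noise of the background sample, is exactly the point of the model.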
It is typically less than 4% of the Ly forest power, but rises to 10% at the highest k for the lowest redshift.
The reader who is paying attention may complain that we have no compelling reason to believe that this source of noise-dependent power that we do not understand depends on noise in the same way inside and outside the Ly forest region. This would be true, except that when we follow this prescription for background subtraction the differences in the measured forest power between subsamples are removed (see §4.4). This would be a remarkable coincidence if our model for the subtraction were not substantially correct.
Note that the background power has much smaller (absolute) statistical errors than $P_F(k)$, mostly because there is simply less power, but also because there are more quasars probing a given redshift interval.
4 Consistency Checks
In §4.1 we describe how we use fits of theoretical models to the $P_F(k)$ results to help understand the importance of any systematic errors. We plot the correlation function of the Lyα forest in §4.2 and use it to identify a significant contribution to $P_F(k)$ from SiIII absorption. In §4.3 we examine the effects of modifications of our procedure. In §4.4 we break the data set up in many different ways to look for dependencies of $P_F(k)$ that should not exist. In §4.5 we discuss the possibility that continuum fluctuations contribute significant power. Finally, we compare our results to past measurements in §4.6.
4.1 Rudimentary Fitting of Theoretical Models
The ultimate purpose of measuring the Lyα forest power spectrum is to determine cosmological parameters by comparing the observed $P_F(k)$ to the predictions for different cosmological models. For the $\Lambda$CDM models supported by present observations, the universe is nearly Einstein–de Sitter at $z \gtrsim 2$, so cosmology influences the Lyα forest almost entirely through the linear theory power spectrum of the mass fluctuations, $P_L(k)$, at the relevant redshift and scale (roughly 1 comoving $h\,{\rm Mpc}^{-1}$, depending somewhat on the model). We usually parameterize $P_L(k)$ by its amplitude, $\Delta^2_L$, slope, $n_{\rm eff}$, and curvature, $\alpha_{\rm eff}$, where we use fixed values $z_p$ and $k_p$ as the pivot points.
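The amplitude–slope–curvature parameterization just described amounts to a quadratic expansion of the log power spectrum about the pivot scale. A minimal sketch, with hypothetical function and argument names:

```python
import numpy as np

def lnP_linear(k, k_p, amp, n_eff, alpha_eff):
    """Quadratic expansion of the log linear power spectrum about the
    pivot wavenumber k_p:

        ln P_L(k) = ln(amp) + n_eff * x + 0.5 * alpha_eff * x**2,

    with x = ln(k / k_p).  Here n_eff and alpha_eff are the local
    slope d ln P / d ln k and curvature d^2 ln P / d ln k^2 at the
    pivot, and amp is the power at the pivot itself.
    """
    x = np.log(k / k_p)
    return np.log(amp) + n_eff * x + 0.5 * alpha_eff * x**2
```

By construction, the three parameters decouple cleanly: the amplitude is read off at $k = k_p$, and the slope and curvature are the first and second logarithmic derivatives there.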
A full discussion of the details of theoretical modeling of $P_F(k)$ using numerical simulations is beyond the scope of this paper. Furthermore, the theory of the Lyα forest is perhaps less certain than the observations, so we want to present the observational results untarnished by theoretical interpretation. However, it is very useful to interpret the possible systematic errors in the appropriate context of cosmological model fitting. In other words: without model fitting, it is difficult to know how important a given change in $P_F(k)$ is.
In this paper we take a cautious approach to the theoretical model fitting: we perform fits to different estimates of $P_F(k)$, computed using modifications of the extraction procedure or different subsamples of the data; however, we do not give the central result, only the deviations of the fitted values from those obtained with our preferred $P_F(k)$. These deviations in fitting results should give the reader a useful indication of the importance of systematic effects in the data, regardless of the reader’s opinion of the theory.
The simulations and fitting procedure that we use are described in McDonald et al. (2004), where we present the final result. We use a $\Lambda$CDM transfer function and perform the fit with the amplitude and slope of $P_L(k)$ as free parameters (because the curvature is not tightly constrained by the present Lyα forest data alone, we fix the primordial running, not to be confused with the effective curvature $\alpha_{\rm eff}$, to zero). Unless otherwise specified, we perform the fits using the 108 points in our standard $k$ and $z$ ranges. We allow for some error in our noise estimate by permitting the noise subtraction terms to vary independently in each redshift bin by 5% (9 extra free parameters, constrained by a Gaussian likelihood function with this rms width). We also allow a single overall parameter describing the squared resolution error to vary, subject to a Gaussian rms constraint.
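The treatment of the noise and resolution freedom above is a standard Gaussian-prior construction: each nuisance offset contributes a quadratic penalty to the fit statistic. A minimal sketch, with illustrative names:

```python
import numpy as np

def chi2_total(residual, cov_inv, nuisance, priors):
    """Total chi^2 for a fit with Gaussian-constrained nuisance
    parameters (a sketch; the decomposition into data and prior
    terms follows the description in the text).

    residual : data-minus-model band powers
    cov_inv  : inverse covariance of the measured P_F(k)
    nuisance : current offsets, e.g. a fractional noise-subtraction
               error in each redshift bin plus one overall resolution
               parameter
    priors   : rms width of the Gaussian constraint on each offset,
               e.g. 0.05 for the 5% per-bin noise freedom
    """
    chi2_data = residual @ cov_inv @ residual
    # Each constrained nuisance parameter adds a quadratic penalty.
    chi2_prior = sum((x / s) ** 2 for x, s in zip(nuisance, priors))
    return chi2_data + chi2_prior
```

Minimizing this total $\chi^2$ over both the cosmological and nuisance parameters lets the data pull the noise and resolution corrections away from zero only when the improvement in the data term outweighs the prior penalty.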
The Lyα forest model in the simulations is controlled by the externally constrained functions: $\bar{F}(z)$, the mean absorption; $T_{1.4}(z)$, the temperature at overdensity 1.4; $\gamma(z)-1$, the logarithmic slope of the temperature-density relation; and a reionization parameter that we will call $x_{\rm rei}$. $\bar{F}(z)$ is described in our fits by the 10 parameter formula $\bar{F}(z_i) = A\, f_i$, where $i$ labels our 9 redshift bins, $f_i$ gives the arbitrarily normalized redshift dependence, and $A$ is an overall normalization. We have performed a preliminary analysis using the formalism in §2.5 to measure $\bar{F}(z)$ from SDSS data and we use this to constrain the parameters $f_i$ (with a Gaussian error on each redshift bin). Because the SDSS analysis does not constrain the overall normalization, we leave $A$ free except for the additional constraint that we require $\bar{F}(z)$, interpolated to the relevant redshifts, to match the HIRES constraints (see McDonald et al. (2000) – we have modified the numbers slightly and increased the errors to allow for systematic uncertainties, as discussed in Seljak et al. (2003)). We parameterize $T_{1.4}(z)$ and $\gamma(z)-1$ by quadratic functions of $z$ (3 parameters each), with external constraints on their values (see McDonald et al. (2001) – we added 2000 K in quadrature to the temperature errors to allow for systematic errors). Note that there are other, sometimes more precise, measurements of the mean absorption (Schaye et al., 2003; Bernardi et al., 2003) and the temperature-density relation (Schaye et al., 2000; Ricotti et al., 2000) in the literature – our choice of McDonald et al. (2000) and McDonald et al. (2001) for this example is simply for convenience. The redshift of reionization and the post-reionization temperature of the gas influence Lyα forest predictions because the smoothing of the gas on small scales depends on its pressure history. At the level of precision we care about, this dependence can be captured by a single parameter. In our modeling, we use $x_{\rm rei}$ to interpolate between two reasonable boundaries: reionization heating of the gas to 25,000 K at a relatively late epoch, or to 50,000 K at a higher redshift.
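The thermal parameterizations and the single-parameter pressure-history interpolation described above can be sketched as follows. The function names, the pivot redshift, and the linear blend are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def quadratic_in_z(z, c0, c1, c2, z_pivot=3.0):
    """Quadratic parameterization in redshift (3 parameters) for a
    thermal quantity such as T_1.4(z) or gamma(z) - 1.  The pivot
    redshift z_pivot = 3.0 is an assumed convention."""
    dz = np.asarray(z) - z_pivot
    return c0 + c1 * dz + c2 * dz ** 2

def blend_reionization(pred_boundary_a, pred_boundary_b, x_rei):
    """Single-parameter interpolation between predictions computed for
    two boundary reionization histories: x_rei = 0 selects one boundary
    and x_rei = 1 the other (a simple linear blend, assumed here)."""
    return (1.0 - x_rei) * pred_boundary_a + x_rei * pred_boundary_b
```

With these ingredients, the external constraints enter the fit as Gaussian priors on the quadratic coefficients (via their values at the pivot) and $x_{\rm rei}$ is left free within its bracketing range.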