SSJ  3.3.1 Stochastic Simulation in Java
Package umontreal.ssj.hups

Highly Uniform Point Sets. More...

## Classes

class  AntitheticPointSet
This container class provides antithetic versions of the contained points. More...

class  BakerTransformedPointSet
This container class embodies a point set to which a baker's transformation (also called a tent transform) is applied (see, e.g., [49], [84], [156]). More...

class  CachedPointSet
This container class caches a point set by precomputing and storing its points locally in an array. More...

class  ContainerPointSet
This acts as a generic base class for all container classes that contain a point set and apply a specific kind of transformation to the coordinates $$u_{i,j}$$ when the points are generated by the iterator. More...

class  CycleBasedLFSR
Linear feedback shift register (LFSR) random number generators [147], [124], [187], produce numbers by generating a sequence of bits from a linear recurrence modulo 2, and forming fractional numbers by taking blocks of successive bits. More...

class  CycleBasedPointSet
This abstract class provides the basic structures for storing and manipulating a point set defined by a set of cycles. More...

class  CycleBasedPointSetBase2
Similar to CycleBasedPointSet, except that the successive values in the cycles are stored as integers in the range $$\{0,\dots,2^k-1\}$$, where $$1\le k \le31$$. More...

class  DigitalNet
This class provides the basic structures for storing and manipulating linear digital nets in base $$b$$, for an arbitrary base $$b\ge2$$. More...

class  DigitalNetBase2
A special case of DigitalNet for the base $$b=2$$. More...

class  DigitalNetBase2FromFile
This class permits one to read the parameters that define a digital net in base 2 either from a file, or from a URL address. More...

class  DigitalNetFromFile
This class allows us to read the parameters defining a digital net either from a file, or from a URL address on the World Wide Web. More...

class  DigitalSequence
This abstract class describes methods specific to digital sequences. More...

class  DigitalSequenceBase2
This abstract class describes methods specific to digital sequences in base 2. More...

class  EmptyRandomization
This class implements an empty umontreal.ssj.hups.PointSetRandomization. More...

class  F2wCycleBasedLFSR
This class creates a point set based upon a linear feedback shift register sequence. More...

class  F2wCycleBasedPolyLCG
This class creates a point set based upon a linear congruential sequence in the finite field $$\mathbb F_{2^w}[z]/P(z)$$. More...

class  F2wNetLFSR
This class implements a digital net in base 2 starting from a linear feedback shift register generator. More...

class  F2wNetPolyLCG
This class implements a digital net in base 2 starting from a polynomial LCG in $$\mathbb F_{2^w}[z]/P(z)$$. More...

class  F2wStructure
This class implements methods and fields needed by the classes umontreal.ssj.hups.F2wNetLFSR, umontreal.ssj.hups.F2wNetPolyLCG, umontreal.ssj.hups.F2wCycleBasedLFSR and umontreal.ssj.hups.F2wCycleBasedPolyLCG. More...

class  FaureSequence
This class implements digital nets or digital sequences formed by the first $$n = b^k$$ points of the Faure sequence in base $$b$$. More...

class  HaltonSequence
This class implements the sequence of Halton [76] , which is essentially a modification of Hammersley nets for producing an infinite sequence of points having low discrepancy. More...

class  HammersleyPointSet
This class implements Hammersley point sets, which are defined as follows. More...

class  IndependentPointsCached
Similar to IndependentPoints, but the points are all generated and stored (cached) when the point set is randomized. More...

class  KorobovLattice
This class implements a Korobov lattice, which represents the same point set as in class LCGPointSet, but implemented differently. More...

class  KorobovLatticeSequence

class  LatinHypercube
Implements Latin Hypercube Sampling (LHS) with $$n$$ points in the $$s$$-dimensional unit hypercube. More...

class  LCGPointSet
Implements a recurrence-based point set defined via a linear congruential recurrence of the form $$x_i = a x_{i-1} \mod n$$ and $$u_i = x_i / n$$. More...

class  LMScrambleShift
This class implements a umontreal.ssj.hups.PointSetRandomization that performs a left matrix scrambling and adds a random digital shift. More...

class  NestedUniformScrambling
This class implements a PointSetRandomization that performs Owen's nested uniform scrambling [191], [195] . More...

class  NiedSequenceBase2
This class implements digital sequences constructed from the Niederreiter sequence in base 2. More...

class  NiedXingSequenceBase2
This class implements digital sequences based on the Niederreiter-Xing sequence in base 2. More...

class  PaddedPointSet
This container class realizes padded point sets, constructed by taking some coordinates from a point set $$P_1$$, other coordinates from a point set $$P_2$$, and so on. More...

class  PointSet
This abstract class represents a general point set. More...

interface  PointSetIterator
This is the interface for iterators that permit one to go through the points of a #PointSet and the successive coordinates of these points. More...

interface  PointSetRandomization
This interface is for a randomization that can be used to randomize a umontreal.ssj.hups.PointSet. More...

class  RadicalInverse
This class implements basic methods for working with radical inverses of integers in an arbitrary base $$b$$. More...

class  RandomShift
This class implements a umontreal.ssj.hups.PointSetRandomization. More...

class  RandomStart
This class implements a umontreal.ssj.hups.PointSetRandomization that randomizes a sequence simply by taking a random starting point. More...

class  RandShiftedMod1PointSet
This container class embodies an arbitrary point set and its iterator adds a random shift modulo 1 to all the points, when producing the coordinates. More...

class  Rank1Lattice
This class implements point sets specified by integration lattices of rank 1. More...

class  RQMCPointSet
This class is used for randomized quasi-Monte Carlo (RQMC) simulations [125], [126], [192], [193]. More...

class  SMScrambleShift
This class implements a umontreal.ssj.hups.PointSetRandomization that performs a striped matrix scrambling [195] and adds a random digital shift. More...

class  SobolSequence
This class implements digital nets and digital sequences in base 2 formed by the first $$n = 2^k$$ points of a Sobol’ sequence [221], [222]. More...

class  SortedAndCutPointSet
This class is useful for the Array-RQMC method, in the situation where the Markov chain has a multidimensional state, the RQMC points are sorted once for all, based on their first $$\ell$$ coordinates, then these coordinates are removed and only the remaining coordinates are kept and randomized at each step. More...

class  StratifiedUnitCube
This class implements a stratification of the unit cube in rectangular boxes of same size and orientation. More...

class  StratifiedUnitCubeAnti
This class implements a stratification of the unit cube in rectangular boxes of same size and orientation, similar to StratifiedUnitCube. More...

class  SubsetOfPointSet
This container class permits one to select a subset of a point set. More...

## Detailed Description

Highly Uniform Point Sets.

## Monte Carlo and quasi-Monte Carlo

This package provides classes implementing highly uniform point sets (HUPS) over the $$s$$-dimensional unit hypercube $$[0,1)^s$$, and tools for their randomization. The terminology low-discrepancy sequence (LDS) is often used for infinite sequences of points such that the discrepancy between the distribution of the first $$n$$ points of the sequence and the uniform distribution converges to zero at a certain rate as $$n\to\infty$$ [187]. HUPS and LDS are used for quasi-Monte Carlo integration, as we now briefly explain. For more details, see for example [49], [69], [79], [126], [151], [156], [194], [187], [188], [219]. A short applied tutorial can be found in [160].

Suppose we want to estimate the integral of a function $$f$$ defined over the $$s$$-dimensional unit hypercube,

$\mu= \int_{[0,1)^s} f(\mathbf{u}) d\mathbf{u}. \tag{mu}$

Practically any mathematical expectation that can be estimated by simulation can be written in this way, usually for a very complicated $$f$$ and sometimes for $$s=\infty$$. Indeed, the source of randomness of stochastic simulations is usually a stream of real numbers $$\mathbf{u}= (u_0,u_1,u_2,…)$$ whose purpose is to imitate i.i.d. $$U(0,1)$$ random variables. These real numbers are transformed in complicated ways to produce the estimator. Thus, the dimension $$s$$ of the integral in (mu) represents the number of calls to the uniform random number generator if that number is deterministic. If it is random and unbounded, we take $$s = \infty$$. In the latter case, we shall assume that the actual number of calls is finite with probability one (otherwise the simulation may never end).

We consider an estimator of $$\mu$$ of the form

$Q_n = \frac{1}{n} \sum_{i=0}^{n-1} f(\mathbf{u}_i), \tag{Qn}$

which is the average of $$f$$ over the point set $$P_n = \{\mathbf{u}_0,…,\mathbf{u}_{n-1}\} \subset[0,1)^s$$. With the Monte Carlo (MC) method, the $$\mathbf{u}_i$$’s are i.i.d. random vectors uniformly distributed over $$[0,1)^s$$. Then, $$Q_n$$ is an unbiased estimator of $$\mu$$ with variance $$\sigma^2/n$$, where

$\sigma^2 = \int_{[0,1)^s} f^2(\mathbf{u}) d\mathbf{u}- \mu^2,$

and it obeys a central-limit theorem if $$\sigma^2 < \infty$$.
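As a concrete illustration of ( Qn ) with crude MC, here is a minimal self-contained Java sketch (using `java.util.Random` in place of an SSJ `RandomStream`; the class and method names are illustrative, not part of SSJ) that estimates $$\mu= \int_{[0,1)^2} u_1 u_2 \,d\mathbf{u}= 1/4$$:

```java
import java.util.Random;

public class CrudeMC {
    // Crude Monte Carlo estimate of mu = int_{[0,1)^2} u1*u2 du = 1/4.
    // Each point u_i uses two successive uniforms from the stream.
    public static double estimate(int n, long seed) {
        Random rng = new Random(seed);   // stands in for a stream of i.i.d. U(0,1)
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double u1 = rng.nextDouble();
            double u2 = rng.nextDouble();
            sum += u1 * u2;              // f(u_i)
        }
        return sum / n;                  // Q_n
    }
}
```

With $$n = 10^5$$, the estimate should fall well within $$0.01$$ of $$1/4$$, since the standard deviation of $$Q_n$$ here is about $$0.0007$$.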

Quasi-Monte Carlo (QMC) methods use point sets $$P_n$$ that are more evenly distributed over the unit hypercube than typical random points. We call them highly uniform point sets (HUPS) or QMC point sets. The aim is to reduce the size of the integration error $$Q_n - \mu$$. Among the most important classes of methods for constructing such point sets, we find digital nets, integration lattices in the real space, polynomial integration lattices, Hammersley points, Halton sequences, etc.; see [49], [126], [187], [219]. All the methods named above are implemented in this package, in various flavors.

The deterministic QMC points can also be randomized in a way that each point has the uniform distribution in the unit cube while the point set as a whole keeps its high uniformity. This gives rise to randomized QMC (RQMC), which provides an unbiased estimator of $$\mu$$ whose variance can be estimated via independent replications. This is further discussed below.

## Elementary constructions

To give an idea of how HUPS and LDS can be constructed, we start with a simple one-dimensional example. If $$s=1$$ and $$n$$ is fixed, very simple highly uniform constructions are the point sets $$P_n = \{0, 1/n, …, (n-1)/n\}$$ and the shifted version $$P’_n = \{1/(2n), 3/(2n), …, (2n-1)/(2n)\}$$.

In $$s > 1$$ dimensions, the simplest extensions would be as follows. Let $$n = d^s$$ for some integer $$d$$ and define $$P_n$$ as the Cartesian product of $$s$$ copies of the one-dimensional sets $$P_d$$; that is, $$P_n = \{(u_0,…,u_{s-1}) : u_j \in\{0, 1/d, …, (d-1)/d\}$$ for each $$j\}$$, and similarly for $$P’_n$$. The point sets thus obtained are regular rectangular grids. Unfortunately, this approach breaks down rapidly when $$s$$ gets large, because $$n$$ must increase exponentially fast with $$s$$ for fixed $$d$$. Another important drawback is that when $$P_n$$ is projected over lower-dimensional subspaces, several points are projected onto each other and become redundant [126].

A better idea is to construct a point set $$P_n$$ in $$s$$ dimensions such that each one-dimensional projection of $$P_n$$ is the set of values $$\{0, 1/n, …, (n-1)/n\}$$. Of course, these values should not be visited in the same order for all coordinates, because otherwise all the points would lie on the diagonal line going from $$(0,…,0)$$ to $$(1,…,1)$$. In other words, for each coordinate $$j$$, $$0\le j < s$$, we must define a different permutation of the integers $$\{0,…,n-1\}$$ and visit the values $$\{0, 1/n, …, (n-1)/n\}$$ in the order determined by that permutation. The trick is to select those permutations in a way that $$P_n$$ itself is highly uniform over $$[0,1)^s$$ in a well-defined sense (there are many ways to define it). This is what most construction methods attempt to achieve. Before looking at concrete ways of defining such permutations, we introduce a related issue: what to do if $$n$$ is not fixed.

For $$s=1$$, a simple way of filling up the unit interval $$[0,1)$$ uniformly is via the low-discrepancy sequence 0, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, 1/16, 9/16, …, called the van der Corput sequence in base 2. More generally, select an integer $$b \ge2$$, called the base. The radical inverse function in base $$b$$, $$\psi_b : \mathbb{N}\to[0,1)$$, is defined as follows. If $$i$$ is a $$k$$-digit integer in base $$b$$ with digital $$b$$-ary expansion

$i = a_0 + a_1 b + …+ a_{k-1} b^{k-1},$

then

$\psi_b(i) = a_0 b^{-1} + a_1 b^{-2} + \cdots+ a_{k-1} b^{-k}.$

For a given $$b$$, $$\psi_b(0), \psi_b(1), \psi_b(2), …$$ is called the van der Corput sequence in base $$b$$. This sequence fills up the unit interval $$[0,1)$$ quite uniformly. For example, for $$b=2$$ we obtain the sequence mentioned above and for $$b=3$$ we obtain 0, 1/3, 2/3, 1/9, 4/9, 7/9, 2/9, 5/9, 8/9, 1/27, 10/27, 19/27, …. Moreover, for two relatively prime bases $$b_1$$ and $$b_2$$, the two sequences have no value in common except 0.
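The radical inverse has a direct implementation: peel off the base-$$b$$ digits of $$i$$ from the least significant end and reflect them across the radix point. A minimal self-contained sketch in plain Java (not SSJ's own implementation):

```java
public class RadInv {
    // psi_b(i): reverse the base-b digits of i across the radix point.
    public static double psi(int b, int i) {
        double inv = 0.0;
        double q = 1.0 / b;
        while (i > 0) {
            inv += (i % b) * q;   // digit a_l contributes a_l * b^{-(l+1)}
            i /= b;
            q /= b;
        }
        return inv;
    }
}
```

For example, $$\psi_2(3) = 0.75$$ and $$\psi_3(5) = 2/3 + 1/9 = 7/9$$, matching the sequences listed above.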

For $$s > 1$$, one could either take different (relatively prime) bases for the different coordinates, or take the same base $$b$$ but permute the successive values using a different permutation for each coordinate. These permutations are usually selected in a way that for every integer $$k$$, the first $$b^k$$ values that are enumerated remain the same (they are the values of $$\psi_b(i)$$ for $$i=0,…,b^k-1$$), but they are enumerated in a different order. Several digital net constructions (to be defined later) fit this framework.

If we decide to take different bases, the most natural choice is to take the $$j$$th smallest prime, $$b_j$$, as a base for coordinate $$j-1$$; that is, base 2 for coordinate 0, base 3 for coordinate 1, base 5 for coordinate 2, and so on. The infinite sequence thus defined, where point $$i$$ is

$\mathbf{u}_i = (\psi_{b_1}(i),\psi_{b_2}(i),…, \psi_{b_s}(i)) \tag{Halton-point}$

for $$i \ge0$$, was proposed in [76]  and is called the Halton sequence. One drawback of this sequence is that for large $$s$$, the base $$b_s$$ becomes quite large.

In the case where $$n$$ is fixed, we can always take $$i/n$$ as the first coordinate of point $$i$$. In particular, the Hammersley point set with $$n$$ points in $$s$$ dimensions contains the points

$\mathbf{u}_i = (i/n,\psi_{b_1}(i),\psi_{b_2}(i),…, \psi_{b_{s-1}}(i)), \tag{Hammersley-point}$

for $$i=0,…,n-1$$ [78] . Historically, Halton sequences were defined as extensions of Hammersley point sets. Hammersley points and Halton sequences are implemented in the classes HammersleyPointSet and HaltonSequence.
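Both constructions reduce to evaluating radical inverses in distinct prime bases. The following self-contained sketch (the prime table and method names are illustrative, not SSJ's API) builds points ( Halton-point ) and ( Hammersley-point ):

```java
public class HaltonHammersley {
    // The first few prime bases b_1, b_2, ... (extend as needed for larger s).
    static final int[] PRIMES = {2, 3, 5, 7, 11, 13};

    // Radical inverse psi_b(i) in base b.
    static double psi(int b, int i) {
        double inv = 0.0, q = 1.0 / b;
        while (i > 0) {
            inv += (i % b) * q;
            i /= b;
            q /= b;
        }
        return inv;
    }

    // Point i of the Halton sequence in s dimensions (bases 2, 3, 5, ...).
    public static double[] haltonPoint(int i, int s) {
        double[] u = new double[s];
        for (int j = 0; j < s; j++)
            u[j] = psi(PRIMES[j], i);
        return u;
    }

    // Point i of the Hammersley set with n points: i/n, then s-1 radical inverses.
    public static double[] hammersleyPoint(int i, int n, int s) {
        double[] u = new double[s];
        u[0] = i / (double) n;
        for (int j = 1; j < s; j++)
            u[j] = psi(PRIMES[j - 1], i);
        return u;
    }
}
```

For instance, Halton point 1 in three dimensions is $$(1/2,\, 1/3,\, 1/5)$$.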

## Digital nets

Digital nets and sequences are an important class of HUPS and LDS constructions. Most concrete implementations, e.g., those proposed by Sobol’, Faure, Niederreiter, Niederreiter and Xing, Dick, etc., are linear digital nets and sequences, defined as follows (see also [49], [126], [187], [230]).

Let $$b\ge2$$ be an arbitrary integer (usually a prime number), called the base. A net that contains $$n = b^k$$ points in $$s$$ dimensions is defined via $$s$$ generator matrices $$\mathbf{C}_0,…,\mathbf{C}_{s-1}$$, which are (in theory) $$\infty\times k$$ matrices whose elements are in $$\mathbb{Z}_b = \{0,\dots,b-1\}$$. The matrix $$\mathbf{C}_j$$ is used for coordinate $$j$$ of all the points, for $$j\ge0$$. To define the $$i$$th point $$\mathbf{u}_i$$, for $$i=0,…,b^k-1$$, write the digital expansion of $$i$$ in base $$b$$ and multiply the vector of its digits by $$\mathbf{C}_j$$ to obtain the digits of the expansion of $$u_{i,j}$$, the $$j$$th coordinate of $$\mathbf{u}_i$$. That is,

\begin{align} i & = \sum_{\ell=0}^{k-1} a_{i,\ell} b^{\ell}, \tag{digital-i} \\ \begin{pmatrix} u_{i,j,1} \\ u_{i,j,2} \\ \vdots \end{pmatrix} & = \mathbf{C}_j \begin{pmatrix} a_{i,0} \\ a_{i,1} \\ \vdots \\ a_{i,k-1} \end{pmatrix} , \tag{digital-Cj} \\ u_{i,j} & = \sum_{\ell=1}^{\infty}u_{i,j,\ell} b^{-\ell}, \tag{digital-uij} \\ \mathbf{u}_i & = (u_{i,0},…,u_{i,s-1}). \tag{digital-ui} \end{align}

In practice, the expansion in ( digital-uij ) is truncated to the first $$w$$ digits for some positive integer $$w$$, so each matrix $$\mathbf{C}_j$$ is actually truncated to a $$w\times k$$ matrix. Typically $$w$$ is equal to $$k$$, or is slightly larger, or is selected so that $$b^w$$ is near or equal to the largest representable integer, e.g., $$2^{31}$$ on a 32-bit processor, and perhaps $$2^{53}$$ or larger on a 64-bit processor, to take advantage of the precision of floating-point numbers in double for the $$u_{i,j}$$'s.
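In base 2 with $$w$$-digit truncation, the matrix-vector product in ( digital-Cj ) reduces to XORing together the columns of $$\mathbf{C}_j$$ that correspond to the nonzero binary digits of $$i$$. A minimal self-contained sketch (not SSJ's actual DigitalNetBase2 implementation, which packs columns similarly):

```java
public class DigitalNetB2 {
    // Coordinate j of point i for a base-2 digital net.  Column l of the
    // truncated generator matrix C_j is packed into a w-bit integer whose
    // most significant bit holds the first output digit u_{i,j,1}.
    public static double coord(int[] columns, int w, int i) {
        int x = 0;
        for (int l = 0; i > 0; l++, i >>= 1)
            if ((i & 1) == 1)
                x ^= columns[l];          // add column l of C_j, modulo 2
        return x / (double) (1L << w);    // u_{i,j} truncated to w digits
    }
}
```

With $$k = w = 4$$, the identity matrix packs to columns $$\{8,4,2,1\}$$ and reproduces the van der Corput sequence (e.g., point 3 gives $$0.75$$), while the reflected identity $$\{1,2,4,8\}$$ outputs $$i/16$$ in order.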

Usually, the first $$k$$ lines of each $$\mathbf{C}_j$$ form a nonsingular $$k\times k$$ matrix. Then, the $$n$$ output values for coordinate $$j$$, $$u_{0,j},\dots, u_{n-1,j}$$, when truncated to their first $$k$$ fractional digits in base $$b$$, are a permutation of the numbers $$0, 1/n,\dots, (n-1)/n$$. Different coordinates would use different permutations, implemented via the matrices $$\mathbf{C}_j$$.

When the first $$k$$ lines of $$\mathbf{C}_j$$ form the identity and the other lines are zero, the first $$n$$ output values are the first $$n$$ elements of the van der Corput sequence in base $$b$$. If we reverse the order of the columns of that matrix $$\mathbf{C}_j$$ (i.e., column $$c$$ will contain a one in line $$k-c$$ and zeros elsewhere, for $$0\le c < k$$), we obtain the output values $$0, 1/n, …, (n-1)/n$$ in that order. With a slight abuse of language, we shall call this first matrix (with the identity followed by lines of zeros) the identity and the second one (with the columns in reverse order) the reflected identity. It is customary to take $$\mathbf{C}_0$$ as the identity for digital sequences, and often for digital nets as well. But for digital nets (where $$n$$ is fixed in advance), one can take $$\mathbf{C}_0$$ as the reflected identity instead, then $$\mathbf{C}_1$$ as the identity, and so on. That is, the matrix $$\mathbf{C}_j$$ for the digital net is taken as the matrix $$\mathbf{C}_{j-1}$$ of the digital sequence. The hups package often gives the choice.

For digital sequences, the matrices $$\mathbf{C}_j$$ actually have an infinite number of columns, although only the first $$k$$ columns are needed to generate the first $$b^k$$ points. So in practice, we never need to store more than a finite number of columns at a time. When we need more than $$b^k$$ points for the current value of $$k$$, we can simply increase $$k$$ and add the corresponding columns to the matrices $$\mathbf{C}_j$$, assuming that we can compute them.

The classes umontreal.ssj.hups.DigitalNet and umontreal.ssj.hups.DigitalSequence implement generic digital nets and sequences. Specific instances are constructed in subclasses of these two classes. In particular, umontreal.ssj.hups.DigitalNetBase2 implements digital nets in base 2, which are the most popular because computations in binary arithmetic are generally much faster than in other bases. Among those, we find Sobol’ sequences and Sobol’ nets, for instance; see SobolSequence. Polynomial lattice rules (see below) are special cases of digital nets and in practice, to generate the points, we implement them as digital nets.

## Lattice Rules

An integration lattice is a discrete (but infinite) subset of $$\mathbb{R}^s$$ of the form

$L_s = \left\{\mathbf{v}= \sum_{j=1}^s h_j {\mathbf{v}_j} \mbox{ such that each } h_j\in\mathbb{Z}\right\},$

where $$\mathbf{v}_1,…,\mathbf{v}_s \in\mathbb{R}^s$$ are linearly independent over $$\mathbb{R}$$ and $$\mathbb{Z}^s \subseteq L_s$$. This last condition means that $$L_s$$ must contain all integer vectors, and this implies that $$L_s$$ is periodic with period 1 along each of the $$s$$ coordinates. The approximation of $$\mu$$ by $$Q_n$$ with the point set $$P_n = L_s \cap[0,1)^s$$ is called a lattice rule [82], [113], [125], [219]. The value of $$n$$ is the number of points of the lattice that are in the unit hypercube $$[0,1)^s$$.

Let $$\mathbf{V}$$ be the matrix whose rows are the basis vectors $$\mathbf{v}_1,\cdots,\mathbf{v}_s$$ and $$\mathbf{V}^{-1}$$ its inverse. One has $$\mathbb{Z}^s\subseteq L_s$$ if and only if all entries of $$\mathbf{V}^{-1}$$ are integer. When this holds, $$n = \det(\mathbf{V}^{-1})$$ and all entries of $$\mathbf{V}$$ are multiples of $$1/n$$. The rank of the lattice is the smallest $$r$$ such that one can find a basis of the form $$\mathbf{v}_1,\dots, \mathbf{v}_r,\mathbf{e}_{r+1},\cdots,\mathbf{e}_s$$, where $$\mathbf{e}_j$$ is the $$j$$th unit vector in $$s$$ dimensions. In particular, a lattice rule of rank 1 has a basis of the form $$\mathbf{v}_1 = (a_1, \dots, a_s)/n$$ and $$\mathbf{v}_j = \mathbf{e}_j$$ for $$j>1$$, where $$a_j \in\mathbb{Z}_n$$ for each $$j$$. Lattice rules of rank 1 are implemented in Rank1Lattice.
The class KorobovLattice implements Korobov lattice rules, which occur when $$\mathbf{v}_1$$ has the special form $$\mathbf{v}_1 = (1,\; a,\; a^2 \mod n,\; \dots \; a^{s-1} \mod n)/n$$ for some $$a\in\mathbb{Z}_n$$. The point set $$P_n$$ of a Korobov lattice rule can also be written as $$P_n = \{(x_1,…,x_s)/n \mbox{ such that } x_1\in\mathbb{Z}_n \mbox{ and } x_j = a x_{j-1} \mod n \mbox{ for all } j > 1\}$$. This is the set of all vectors of successive values produced by a linear congruential generator (LCG) with modulus $$n$$ and multiplier $$a$$, from all possible initial states, including 0. In this case, the points are easy to enumerate by using the recurrence, and this is what we do in LCGPointSet.
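The recurrence-based enumeration used in LCGPointSet can be sketched in a few lines of self-contained Java (illustrative code, not SSJ's actual implementation):

```java
public class KorobovPoints {
    // Point i of a Korobov lattice rule with n points and multiplier a:
    // coordinates (x_1, ..., x_s)/n, where x_1 = i and x_j = a*x_{j-1} mod n.
    public static double[] point(int i, int s, int n, int a) {
        double[] u = new double[s];
        long x = i;                       // long avoids overflow in a*x
        for (int j = 0; j < s; j++) {
            u[j] = x / (double) n;
            x = (a * x) % n;              // LCG recurrence
        }
        return u;
    }
}
```

For example, with $$n=7$$ and $$a=3$$, point 1 in three dimensions is $$(1/7,\, 3/7,\, 2/7)$$.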

Uniformity criteria for lattice rules, and methods and software to search for good parameters $$a_1,\dots, a_s$$ for rank 1 lattice rules for any given $$s$$ and $$n$$ and various types of criteria, can be found in [128], [127], [129], and [188], for example.

## Polynomial Lattice Rules

Integration lattices defined in a space of polynomials instead of in the real space provide another very effective way of constructing QMC points [51], [49], [131], [152], [156], [163], [161], [186], [188]. These lattices are similar to the ordinary integration lattices, but are defined in different spaces. They also turn out to be special cases of digital nets. The presentation below follows [156].

To define a polynomial integration lattice, we first select an integer $$b \geq 2$$ called the base, let $$\mathbb{Z}_b$$ denote the ring of integers modulo $$b$$, $$\mathbb{Z}_b[z]$$ the ring of polynomials with coefficients in $$\mathbb{Z}_b$$, and $$\mathbb{L}_b$$ the ring of formal Laurent series with coefficients in $$\mathbb{Z}_b$$, which have the form $$\sum_{\ell=\omega}^\infty x_\ell z^{-\ell}$$, where $$x_\ell\in\mathbb{Z}_b$$. The lattice is defined as

$\tag{eq:cLs} {\mathcal{L}_s} = \left\{\mathbf{v}(z) = \sum_{j=1}^s q_j(z) \mathbf{v}_j(z) \mbox{ such that each } q_j(z) \in \mathbb{Z}_b[z]\right\},$

where $$\mathbf{v}_j(z) = \mathbf{a}_j(z)/P(z) \in \mathbb{L}_b^s$$ for $$j=1,\dots,s$$, $$P(z) = z^k + \alpha_1 z^{k-1} + \cdots + \alpha_k \in \mathbb{Z}_b[z]$$, and each $$\mathbf{a}_j(z)$$ is a vector of $$s$$ polynomials of degree less than $$k$$. We have $$(\mathbb{Z}_b[z])^s \subseteq \mathcal{L}_s$$. The output mapping $$\varphi : \mathbb{L}_b \to \mathbb{R}$$ is defined by

${\varphi}\left(\sum_{\ell=\omega}^\infty x_\ell z^{-\ell}\right) = \sum_{\ell=\omega}^\infty x_\ell b^{-\ell}.$

The polynomial lattice rule uses the node set $$P_n = \varphi(\mathcal{L}_s) \cap [0,1)^s = \varphi(\mathcal{L}_s \cap \mathbb{L}_{b,0})$$, where $$\mathbb{L}_{b,0} = \mathbb{L}_b$$ mod $$\mathbb{Z}_b[z]$$. Most properties of ordinary lattice rules have counterparts for the polynomial rules [50], [49], [152], [161], [188].

In SSJ, the polynomial lattice rules are implemented as digital nets, because this provides a faster way to generate the points than working directly in polynomial arithmetic, especially in base $$b = 2$$.
The user can construct a DigitalNet object from a polynomial lattice rule using .......

• ...... explain what we have in hups for that.... to be done *

## Cycle-based point sets

Certain types of QMC point sets are defined pretty much like random number generators, in the sense that the successive coordinates of each point follow a simple linear recurrence over a finite state space $$\mathcal{S}$$, with a transition function $$f : \mathcal{S}\to\mathcal{S}$$, and an output function $$g : \mathcal{S}\to[0,1)$$. The point set is defined as

$P_n = \{\mathbf{u}= (u_0,u_1,…) : s_0\in\mathcal{S}, s_j = f(s_{j-1}), \mbox{ and } u_j = g(s_j) \mbox{ for all } j\}.$

This is the set of all vectors of successive output values produced by the recurrence defined by $$f$$ and the output function $$g$$, from all possible initial states. The value of $$n$$ is the cardinality of $$\mathcal{S}$$ and the dimension $$s$$ is infinite. We could also have $$n = \infty$$ (an infinite sequence) if $$\mathcal{S}$$ is infinite but denumerable and ordered (so we know in which order to enumerate the points).

Let us assume that $$n$$ is finite and that for each $$s_0\in\mathcal{S}$$, the recurrence $$s_j = f(s_{j-1})$$ is purely periodic, i.e., there is always an integer $$j$$ such that $$s_j = s_0$$. The smallest such $$j$$, called the period length, depends in general on $$s_0$$. Thus, the state space $$\mathcal{S}$$ is partitioned into a finite number of cycles. The successive coordinates of any point $$\mathbf{u}\in P_n$$ are periodic with period length equal to the length of the cycle that contains $$s_0$$ (and the following $$s_j$$’s).

One way of implementing such a point set without recomputing $$f$$ and $$g$$ each time a coordinate is needed is to store explicitly all the cycles of the recurrence, in the form of a list of cycles. We can store either the successive $$u_j$$’s directly, or the successive $$s_j$$’s, over each cycle. The class umontreal.ssj.hups.CycleBasedPointSet provides the framework for doing that. For example, a Korobov lattice point set is defined via the recurrence $$x_j = a x_{j-1} \bmod n$$ and output function $$u_j = x_j/n$$. If $$n$$ is prime and $$a$$ is a primitive element modulo $$n$$, then there are two cycles: one of period 1 that contains only 0, and the other of period $$n-1$$. For more general $$n$$ and $$a$$, there will be more cycles. The class umontreal.ssj.hups.LCGPointSet constructs this type of point set and stores explicitly the successive values of $$u_j$$ over the different cycles.
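Enumerating the cycles of such a recurrence is a simple marking pass over the state space. A self-contained sketch for the LCG case (illustrative, not SSJ's CycleBasedPointSet code; it assumes the recurrence is purely periodic, i.e., $$a$$ is invertible modulo $$n$$):

```java
import java.util.ArrayList;
import java.util.List;

public class LcgCycles {
    // Partitions the state space {0,...,n-1} of x <- a*x mod n into cycles.
    // Assumes a is invertible mod n, so every orbit returns to its start.
    public static List<List<Integer>> cycles(int n, int a) {
        boolean[] seen = new boolean[n];
        List<List<Integer>> all = new ArrayList<>();
        for (int s0 = 0; s0 < n; s0++) {
            if (seen[s0]) continue;      // state already on a stored cycle
            List<Integer> cycle = new ArrayList<>();
            int x = s0;
            do {
                seen[x] = true;
                cycle.add(x);
                x = (a * x) % n;
            } while (x != s0);           // purely periodic: we come back to s0
            all.add(cycle);
        }
        return all;
    }
}
```

For $$n=7$$ and $$a=3$$ (a primitive element mod 7), this yields exactly the two cycles described above: $$\{0\}$$ and one cycle of length 6.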

There are cases where $$n$$ is a power of two, say $$n = 2^k$$, and where the state $$s_j$$ is represented as a $$k$$-bit string. Concrete instances are usually based on linear recurrences modulo 2 and they include the Korobov-type polynomial lattice rules in base 2 [49], [149], [124], [131], [161], [188], [196]. In that context, it is often more convenient to store the successive states $$s_j$$’s instead of the successive $$u_j$$’s, over the set of cycles (e.g., if a random digital shift in base 2 is to be applied to randomize the points, it can be performed by applying a bitwise xor directly to $$s_j$$). When generating the coordinates, the $$s_j$$’s can be interpreted as $$k$$-bit integers and multiplied by $$2^{-k}$$ to produce the output. This is supported by the class umontreal.ssj.hups.CycleBasedPointSetBase2.

## Point set implementations and enumeration tools

The base class for point sets is the abstract class umontreal.ssj.hups.PointSet. It has several predefined subclasses. Let $$\mathbf{u}_i = (u_{i,0}, u_{i,1}, \dots, u_{i,s-1})$$ be the elements of the point set $$P_n$$, for $$i=0,\dots,n-1$$ (the point and coordinate indexes both start at 0). The number of points $$n$$ and the dimension $$s$$ can be finite or infinite. Conceptually, the point set can be viewed as a two-dimensional array whose element $$(i,j)$$ contains $$u_{i,j}$$, the coordinate $$j$$ of point $$i$$. In the implementations of typical point sets, the values $$u_{i,j}$$ are not stored explicitly in a two-dimensional array, but relevant information is organized so that the points and their coordinates can be generated efficiently. One notable exception is umontreal.ssj.hups.CachedPointSet, in which all (randomized) points are stored explicitly. This is required for certain types of randomizations such as stratified sampling and Latin hypercube sampling, for example.

To enumerate the successive points or the successive coordinates of a given point, we use point set iterators, which resemble the iterators defined in Java collections, except that they loop over two-dimensional sets. Their general behavior is defined in the interface umontreal.ssj.hups.PointSetIterator. It contains methods to traverse a point set: one can retrieve a single coordinate or $$t$$ successive coordinates at a time, change the current coordinate and current point indices, reset the iterator, and so on. Several independent iterators can coexist at any given time for the same point set. Each one maintains a current point index and a current coordinate index, which are incremented by 1 when the iterator advances to the next point or to the next coordinate. Both are initialized to 0. Each subclass of umontreal.ssj.hups.PointSet has its own implementation of umontreal.ssj.hups.PointSetIterator and has a method iterator() that creates and returns a new specialized point set iterator of the correct type, allowing efficient access to the coordinates.

An important feature of the umontreal.ssj.hups.PointSetIterator interface is that it extends the umontreal.ssj.rng.RandomStream interface. This means that any point set iterator can be used in place of a random stream that is supposed to generate i.i.d. $$U(0,1)$$ random variables, anywhere in a simulation program. This makes it very easy to replace the (pseudo)random numbers by the coordinates $$u_{i,j}$$ of a deterministic (or randomized) HUPS without changing the internal code of the simulation program.
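The iterator concept can be illustrated with a self-contained toy class (not the actual SSJ interfaces; all names here are illustrative) that walks a cached two-dimensional array point by point and exposes coordinates the way a stream exposes uniforms:

```java
public class MiniPointSetIterator {
    private final double[][] points;    // points[i][j] = u_{i,j}
    private int curPoint = 0;           // current point index i
    private int curCoord = 0;           // current coordinate index j

    public MiniPointSetIterator(double[][] points) {
        this.points = points;
    }

    // Returns the next coordinate of the current point, playing the role
    // of RandomStream.nextDouble() in a simulation program.
    public double nextDouble() {
        return points[curPoint][curCoord++];
    }

    // Advances to the beginning of the next point.
    public void resetToNextPoint() {
        curPoint++;
        curCoord = 0;
    }

    // Resets the iterator to coordinate 0 of point 0.
    public void resetStartStream() {
        curPoint = 0;
        curCoord = 0;
    }
}
```

A simulation that consumes `nextDouble()` calls can thus be fed one QMC point per run, advancing with `resetToNextPoint()` between runs, which is precisely how the RandomStream view of a point set iterator is used.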

## Randomized quasi-Monte Carlo

The HUPS described so far are deterministic in their original versions, and the corresponding QMC methods give a deterministic integration error that is difficult to estimate. In randomized QMC methods, $$P_n$$ is randomized, preferably in a way that it retains its high uniformity over $$[0,1)^s$$ when taken as a set, while each of its points has the uniform distribution over $$[0,1)^s$$ when taken individually. Then, $$Q_n$$ becomes an unbiased estimator of $$\mu$$, hopefully with smaller variance than the standard MC estimator. To estimate the variance and compute a confidence interval on $$\mu$$, one can apply $$m$$ independent randomizations to the same $$P_n$$, and compute $${\bar{X}_m}$$ and $${S_{m,x}^2}$$, the sample mean and sample variance of the $$m$$ corresponding (independent) copies of $$Q_n$$. Then, $$E[\bar{X}_m] = \mu$$ and $$E[S_{m,x}^2] = \mathrm{Var}[Q_n] = m\mathrm{Var}[\bar{X}_m]$$ [125], [192], [195].
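The replicate-based estimation of $$\bar{X}_m$$ and $$S_{m,x}^2$$ is straightforward; a self-contained sketch (illustrative helper, not part of SSJ):

```java
public class RqmcStats {
    // Sample mean and (unbiased) sample variance of m independent
    // RQMC replicates q[0], ..., q[m-1] of Q_n.
    public static double[] meanAndVariance(double[] q) {
        int m = q.length;
        double mean = 0.0;
        for (double v : q) mean += v;
        mean /= m;
        double s2 = 0.0;
        for (double v : q) s2 += (v - mean) * (v - mean);
        s2 /= (m - 1);
        return new double[] { mean, s2 };   // {X-bar_m, S^2_{m,x}}
    }
}
```

A confidence interval on $$\mu$$ then uses $$\bar{X}_m \pm t_{m-1,1-\alpha/2}\, S_{m,x}/\sqrt{m}$$.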

Two examples of such randomizations are the random shift modulo 1, proposed in [37] and implemented in class umontreal.ssj.hups.RandShiftedMod1PointSet, and the random digital shift in base $$b$$, described and implemented in class umontreal.ssj.hups.DigitalNet. In the random shift modulo 1, we generate a single point $$\mathbf{u}$$ uniformly over $$[0,1)^s$$ and add it to each point of $$P_n$$, coordinate-wise, modulo 1. Since all points of $$P_n$$ are shifted by the same amount, the set retains most of its structure and uniformity. For the random digital shift in base $$b$$, we again generate a single $$\mathbf{u}= (u_0,\dots,u_{s-1})$$ uniformly over $$[0,1)^s$$, write the digital expansion in base $$b$$ of each of its coordinates, say $$u_j = \sum_{\ell=1}^{\infty}d_{j,\ell} b^{-\ell}$$, then add $$d_{j,\ell}$$ modulo $$b$$ to the $$\ell$$th digit of the digital expansion in base $$b$$ of the $$j$$th coordinate of each point $$\mathbf{u}_i\in P_n$$. For $$b=2$$, the digit-wise addition modulo $$b$$ becomes a bitwise exclusive-or, which is fast to perform on a computer.
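Both shifts are cheap to apply coordinate-wise. A self-contained sketch (illustrative code, not SSJ's implementation; for the base-2 digital shift, coordinates are represented as $$w$$-bit integers):

```java
public class Shifts {
    // Random shift modulo 1: the same shift is added to every point of P_n,
    // coordinate-wise, and the fractional part is kept.
    public static double shiftMod1(double coord, double shift) {
        double v = coord + shift;               // both in [0,1)
        return v >= 1.0 ? v - 1.0 : v;          // reduce modulo 1
    }

    // Random digital shift in base 2 on w-digit expansions: adding each digit
    // modulo 2 is a bitwise exclusive-or of the w-bit integer representations.
    public static double digitalShiftB2(double coord, double shift, int w) {
        long x = (long) (coord * (1L << w));    // first w digits of coord
        long d = (long) (shift * (1L << w));    // first w digits of shift
        return (x ^ d) / (double) (1L << w);
    }
}
```

For instance, shifting $$0.75$$ by $$0.5$$ gives $$0.25$$ under both shifts when $$w=4$$, but the two operations differ in general (the digital shift never carries between digits).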

An important property of the digital shift in base $$b$$ is the following. Suppose the hypercube $$[0,1)^s$$ is partitioned into $$b^{q_1 + \cdots+ q_s}$$ rectangular boxes of the same size by partitioning the $$j$$th axis into $$b^{q_j}$$ equal parts for each $$j$$, for some integers $$q_j \ge0$$; such a partition is called a $$\mathbf{q}$$-equidissection in base $$b$$ of the unit hypercube, where $$\mathbf{q}= (q_1,\dots,q_s)$$. Then the number of boxes that contain $$m$$ points, for each integer $$m$$, is unchanged by the randomization. In particular, if each box contains the same number of points of $$P_n$$ before the randomization, then it also does after the randomization. In this case, we say that $$P_n$$ is $$\mathbf{q}$$-equidistributed in base $$b$$. Several other randomization methods exist, and most are adapted to special types of point sets; for example, they may randomize the generator matrices of a digital net [195].
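This invariance can be checked directly on a small example. The sketch below (plain Java, independent of SSJ; all names are ours) builds the 8-point two-dimensional Hammersley net in base 2, whose 3-bit integer coordinates are $$x_i = i$$ and $$y_i = \text{bit-reverse}(i)$$ and which has exactly one point per box in the $$(1,2)$$-equidissection; it then applies an arbitrary digital shift by XORing the coordinates with fixed masks and recounts the points per box.

```java
// Illustration: a digital shift in base 2 preserves the number of points
// in each box of a q-equidissection.
public class EquidistributionDemo {
    // Reverse the 3 low-order bits of v.
    static int bitrev3(int v) {
        return ((v & 1) << 2) | (v & 2) | ((v >> 2) & 1);
    }
    // Count the points in each box of the (q1,q2)-equidissection; the box of a
    // point is given by the q1 (resp. q2) most significant bits of x (resp. y).
    static int[] boxCounts(int[] x, int[] y, int q1, int q2) {
        int[] counts = new int[1 << (q1 + q2)];
        for (int i = 0; i < x.length; i++)
            counts[((x[i] >> (3 - q1)) << q2) | (y[i] >> (3 - q2))]++;
        return counts;
    }
    public static void main(String[] args) {
        int n = 8;
        int[] x = new int[n], y = new int[n];
        for (int i = 0; i < n; i++) { x[i] = i; y[i] = bitrev3(i); }
        System.out.println(java.util.Arrays.toString(boxCounts(x, y, 1, 2)));
        // Apply an arbitrary digital shift: XOR each coordinate with a fixed mask.
        for (int i = 0; i < n; i++) { x[i] ^= 0b101; y[i] ^= 0b011; }
        System.out.println(java.util.Arrays.toString(boxCounts(x, y, 1, 2)));
        // Both lines print [1, 1, 1, 1, 1, 1, 1, 1]: still one point per box.
    }
}
```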

In the hups package, viewed from the outside, randomization methods for QMC point sets are defined in classes that implement the umontreal.ssj.hups.PointSetRandomization interface. Each subclass of PointSetRandomization defines a type of randomization. By combining a PointSet object with a PointSetRandomization object, one can obtain an umontreal.ssj.hups.RQMCPointSet object. However, not every type of randomization is compatible with every type of point set. For example, umontreal.ssj.hups.LMSScrambleShift and umontreal.ssj.hups.NestedUniformScrambling apply only to DigitalNet point sets. The usual (and recommended) way of doing RQMC in SSJ is to construct a PointSet and a compatible PointSetRandomization, combine them into an RQMCPointSet, and use the latter to run RQMC experiments. The class umontreal.ssj.hups.RQMCExperiments offers methods that perform such experiments for simple Monte Carlo models. Examples are given in the tutorial.

Conceptually, one could think of the PointSetRandomization objects as filters that transform all the QMC points $$\mathbf{u}_i$$ after they have been computed. However, most randomizations are not implemented that way; for efficiency, they are incorporated directly into the computation of the points $$\mathbf{u}_i$$ by the point set iterators in the PointSet subclasses. In fact, the randomize method of a PointSetRandomization applied to a PointSet object p can only affect the point set p, which means that the randomization must be incorporated in p in some way. For example, a randomization that changes the generator matrices of a DigitalNet is implemented by changing those generator matrices directly in the DigitalNet object before generating the points. The old matrices are saved so that the change can be reverted. The random shifts and random digital shifts are also applied directly inside the PointSet object when the points are generated. Each PointSet has a method addRandomShift that takes a RandomStream and generates a random shift to be applied to all the points. Depending on the type of point set, it can be either a digital shift or a shift modulo 1; this must be specified and defined inside each subclass of PointSet. By default, for all DigitalNet point sets, this random shift is a random digital shift, whereas for ordinary lattice rules it is a random shift modulo 1. For any given type of point set, one should check the documentation to see what the addRandomShift method really does. A umontreal.ssj.hups.RandomShift is a type of randomization that directly invokes this internal addRandomShift facility.

One complication arises for point sets with an unbounded number of coordinates, such as a CycleBasedPointSet, whose points have an infinite number of coordinates. In this case, the random shift must be generated for a finite number of coordinates, but it can always be extended later if more (randomized) coordinates are needed for any given point. This is one of the main reasons for having a version of the addRandomShift method that generates the random shift only over an arbitrary range of coordinates, say from d1 to d2-1. It can be used to extend the current random shift if needed. This extension is usually performed automatically by the iterator, using the same RandomStream that was used to produce the previous random shift (this stream is saved internally for this purpose).

We recommend always using a PointSetRandomization object (such as a RandomShift) and invoking its randomize(PointSet p) method rather than calling addRandomShift directly, because this permits one to change the randomization externally without changing the randomize call inside the simulation program. The tutorial and the code of some methods in umontreal.ssj.hups.RQMCExperiments provide examples of this.

With the current implementation, in which the randomizations are integrated in the PointSet object itself and not in the iterator, different iterators operating in parallel on the same randomized point set will all enumerate the same randomized points. To have independent randomizations for the different iterators, the randomizations would have to be implemented in the iterators. Currently, to use different randomizations of the same point set in parallel, one can simply construct several instances of the same point set and randomize them independently.

## Transformed point sets and containers

Aside from the PointSetRandomization subclasses, the hups package also offers tools to transform arbitrary point sets in various ways, either deterministically or randomly, by external filters. Some deterministic transformations can be applied to eliminate some points or coordinates (i.e., selecting subsets), or to concatenate point sets (padding), or to take an antithetic version of a point set, for example. Some types of random transformations can be used for RQMC. When a point set is transformed, we usually want to keep the original as well, and we may want to apply different types of transformations in succession to the same point set.

This is achieved via container point sets, which are defined in terms of another point set to which they keep a reference and to which they apply certain transformations. umontreal.ssj.hups.ContainerPointSet is the base class for such containers. The contained point set can itself be a container, recursively, but many levels of nesting can obviously slow down the generation of the points.
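The container pattern can be sketched with a minimal hypothetical interface of our own; SSJ's actual ContainerPointSet and its iterators are much richer, but the idea is the same: the container holds a reference to the inner point set and transforms each coordinate on the fly, leaving the original intact.

```java
// Hypothetical minimal point set abstraction (not the SSJ interface).
interface SimplePointSet {
    double coordinate(int i, int j);   // coordinate j of point i, in [0,1)
}

// A container that applies the antithetic transformation u -> 1 - u to every
// coordinate of the point set it wraps, without modifying the inner set.
class AntitheticContainer implements SimplePointSet {
    private final SimplePointSet inner;
    AntitheticContainer(SimplePointSet inner) { this.inner = inner; }
    public double coordinate(int i, int j) {
        return 1.0 - inner.coordinate(i, j);
    }
}
```

Containers nest naturally: wrapping twice with `AntitheticContainer` returns the original coordinates, but each level adds one indirection per coordinate access, which is why deep nesting slows down point generation.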

One example of a ContainerPointSet that performs a randomization is umontreal.ssj.hups.RandShiftedMod1PointSet, which applies a random shift modulo 1 to the point set that it contains, whatever it is. It can be used, for instance, if one wishes to apply a random shift modulo 1 to a DigitalNet.
Another one is umontreal.ssj.hups.AntitheticPointSet, which permits one to generate the antithetic coordinates for any given point set, i.e., return $$1-u_{i,j}$$ instead of $$u_{i,j}$$. A third example is umontreal.ssj.hups.BakerTransformedPointSet, which applies the baker (or tent) transformation to the contained points. Such container point sets implement their own iterators, which invoke the iterators of the contained point sets and apply the transformation on the fly; the points are accessed almost as efficiently as if the contained point set iterators were used directly.
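For reference, the two coordinate transformations can be written as small pure functions. This is a standalone sketch of the underlying maps, not the SSJ implementations:

```java
// The coordinate maps applied by antithetic and baker-transformed containers.
public class TransformDemo {
    // Antithetic: each coordinate u is replaced by 1 - u.
    static double antithetic(double u) { return 1.0 - u; }
    // Baker (tent) transformation: u -> 2u for u <= 1/2, and 2(1 - u) otherwise.
    static double baker(double u) {
        return u <= 0.5 ? 2.0 * u : 2.0 * (1.0 - u);
    }
    public static void main(String[] args) {
        System.out.println(antithetic(0.25));  // 0.75
        System.out.println(baker(0.25));       // 0.5
        System.out.println(baker(0.75));       // 0.5 (the tent folds [1/2,1) back)
    }
}
```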

The class umontreal.ssj.hups.SubsetOfPointSet allows one to constrain a point set’s size or dimension, for example to limit the dimension of a cycle-based point set to a finite integer $$s$$. The class umontreal.ssj.hups.PaddedPointSet gathers two or more point sets of the same cardinality and juxtaposes (pads) them to construct a point set whose dimension is the sum of the dimensions of the padded components.
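The padding operation itself is straightforward. A minimal sketch (our own helper, not the SSJ class) juxtaposes two point sets with the same number of points:

```java
// Pad two point sets with n points each: the result has n points whose
// dimension is the sum of the two component dimensions.
public class PaddingDemo {
    static double[][] pad(double[][] a, double[][] b) {
        int n = a.length, s1 = a[0].length, s2 = b[0].length;
        double[][] out = new double[n][s1 + s2];
        for (int i = 0; i < n; i++) {
            System.arraycopy(a[i], 0, out[i], 0, s1);    // coordinates 0..s1-1
            System.arraycopy(b[i], 0, out[i], s1, s2);   // coordinates s1..s1+s2-1
        }
        return out;
    }
}
```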

## Cached points for stratified sampling, Latin hypercube sampling, sorted points, etc.

In the class umontreal.ssj.hups.CachedPointSet, all coordinates of all the points are stored internally in a matrix, and can then be accessed very efficiently. Storing the points explicitly like this is necessary for certain types of point sets or sampling, for which the points cannot be recovered from a smaller amount of information. One trivial example is a set of $$n$$ independent random points, implemented in umontreal.ssj.hups.IndependentPointsCached. This implementation can be useful when one has a program to simulate a system with RQMC points and wishes to try it with independent points for comparison.

Stratified sampling over the unit cube can be implemented as follows. One partitions the unit cube into $$n$$ rectangular boxes of the same size and generates one point in each box, uniformly and independently across boxes. The class umontreal.ssj.hups.StratifiedUnitCube implements this. Here, the randomized points must be stored explicitly, so this is implemented as a subclass of CachedPointSet. Another subclass is umontreal.ssj.hups.LatinHypercube, which implements Latin hypercube sampling. Yet another one is umontreal.ssj.hups.SortedAndCutPointSet, in which the points of the contained point set are sorted by the first few (one or more) coordinates, then these coordinates are removed and the remaining ones are cached in their new order. These types of constructions are used in the Array-RQMC method [139], [143].
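Both constructions are easy to state in code. The sketch below is plain Java with java.util.Random, not the SSJ classes (all names are ours): it generates a stratified sample over the unit square with $$k^2$$ strata, and a Latin hypercube sample in which each one-dimensional projection has exactly one point in each of the $$n$$ intervals $$[\ell/n, (\ell+1)/n)$$.

```java
import java.util.Random;

// Standalone sketches of stratified sampling and Latin hypercube sampling.
public class StratifiedLhsDemo {
    // One uniform point in each of the k*k equal subsquares of [0,1)^2.
    static double[][] stratified2D(int k, Random rng) {
        double[][] pts = new double[k * k][2];
        for (int i = 0; i < k; i++)
            for (int j = 0; j < k; j++) {
                pts[i * k + j][0] = (i + rng.nextDouble()) / k;
                pts[i * k + j][1] = (j + rng.nextDouble()) / k;
            }
        return pts;
    }
    // Latin hypercube: for each coordinate, take a random permutation of the
    // n intervals and place one uniform point inside each interval.
    static double[][] latinHypercube(int n, int s, Random rng) {
        double[][] pts = new double[n][s];
        for (int j = 0; j < s; j++) {
            int[] perm = new int[n];
            for (int l = 0; l < n; l++) perm[l] = l;
            for (int l = n - 1; l > 0; l--) {        // Fisher-Yates shuffle
                int m = rng.nextInt(l + 1);
                int t = perm[l]; perm[l] = perm[m]; perm[m] = t;
            }
            for (int i = 0; i < n; i++)
                pts[i][j] = (perm[i] + rng.nextDouble()) / n;
        }
        return pts;
    }
}
```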

## Examples

See the tutorial.