The functions in this section perform reduction of lattice bases.
For each reduction algorithm there are three functions:
a function which takes a basis matrix, a function which takes a Gram matrix
and a function which takes a lattice.
The Lenstra-Lenstra-Lovász algorithm [LLL82] was first described in 1982
and was immediately used by the authors to provide a polynomial-time algorithm
for factoring integer polynomials, for solving simultaneous diophantine
equations and for solving the integer programming problem.
It very quickly received much attention
and in the last 25 years has found an incredible range of applications,
in such areas as
computer algebra, cryptography, optimisation, algorithmic number theory,
group theory, etc. However, the original LLL algorithm is mainly of
theoretical interest, since, although it has polynomial time complexity,
it remains quite slow in practice. Rather, floating-point variants are used,
where the underlying Gram-Schmidt orthogonalisation is performed with
floating-point arithmetic instead of rational arithmetic (which produces
huge numerators and denominators). Most of these floating-point variants
are heuristic, but here we use the provable Nguyen-Stehlé
algorithm [NS09a] (see also [Ste09] for
more details about the implementation).
Let δ ∈ (1/4, 1] and η ∈ [1/2, Sqrt(δ)).
An ordered set of d linearly independent
vectors b_1, b_2, ..., b_d ∈ R^n is said to be
(δ, η)-LLL-reduced if
the two following conditions are satisfied:
- (a)
- For any i > j, we have |μ_{i,j}| ≤ η,
- (b)
- For any i < d, we
have δ |b_i*|^2 ≤ |b_{i+1}* + μ_{i+1,i} b_i*|^2,
where μ_{i,j} = (b_i, b_j*)/|b_j*|^2 and b_i* is the i-th
vector of the Gram-Schmidt orthogonalisation of (b_1, b_2, ..., b_d).
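To make the definition concrete, the following Python sketch (an illustration of the mathematics only, not Magma code; the function names are ours) computes the Gram-Schmidt data over the rationals and tests the two conditions exactly:

```python
from fractions import Fraction

def dot(u, v):
    # Exact rational inner product.
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(basis):
    # Return the Gram-Schmidt vectors b_i* and the coefficients mu_{i,j}.
    bstar, mu = [], [[Fraction(0)] * len(basis) for _ in basis]
    for i, b in enumerate(basis):
        v = [Fraction(x) for x in b]
        for j in range(i):
            mu[i][j] = dot(b, bstar[j]) / dot(bstar[j], bstar[j])
            v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def is_lll_reduced(basis, delta=Fraction(3, 4), eta=Fraction(501, 1000)):
    bstar, mu = gram_schmidt(basis)
    d = len(basis)
    # (a) size reduction: |mu_{i,j}| <= eta for all i > j.
    if any(abs(mu[i][j]) > eta for i in range(d) for j in range(i)):
        return False
    # (b) Lovász condition; since b_{i+1}* is orthogonal to b_i*,
    # |b_{i+1}* + mu_{i+1,i} b_i*|^2 = |b_{i+1}*|^2 + mu_{i+1,i}^2 |b_i*|^2.
    return all(
        delta * dot(bstar[i], bstar[i])
        <= dot(bstar[i + 1], bstar[i + 1])
        + mu[i + 1][i] ** 2 * dot(bstar[i], bstar[i])
        for i in range(d - 1)
    )
```

For example, `is_lll_reduced([[1, 0], [0, 1]])` holds, while `[[100, 1], [1, 0]]` fails the Lovász test.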
LLL-reduction classically refers to the case where
η=1/2 and δ=3/4 since it was
the choice of parameters originally made in [LLL82], but the closer
that η and δ are to 1/2 and 1, respectively,
the more reduced the lattice
basis should be. In the classical LLL algorithm, the polynomial-time
complexity is guaranteed as long as δ<1, whereas the floating-point LLL
additionally requires that η>1/2.
A (δ, η)-LLL-reduced basis (b_1, b_2, ..., b_d) of a
lattice L has the following useful properties:
- (a)
- |b_1| ≤ (δ - η^2)^(-(d-1)/4) (det L)^(1/d),
- (b)
- |b_1| ≤ (δ - η^2)^(-(d-1)/2)
min_{b ∈ L \ {0}} |b|,
- (c)
- ∏_{i=1}^{d} |b_i| ≤ (δ - η^2)^(-d(d-1)/4) (det L),
- (d)
- For any j < i, |b_j*| ≤ (δ - η^2)^((j-i)/2) |b_i*|.
The default Magma parameters are δ = 0.75 and η = 0.501, so
that (δ - η^2)^(-1/4) < 1.190 and (δ - η^2)^(-1/2) < 1.416.
The four previous bounds can be reached, but in practice one usually obtains
better bases.
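The two constants quoted above are easily recomputed from δ and η; a quick sketch (the function name is ours):

```python
def lll_quality_constants(delta, eta):
    # Per-dimension constants from bounds (a) and (b) above:
    # (delta - eta^2)^(-1/4) and (delta - eta^2)^(-1/2).
    c = delta - eta * eta
    return c ** -0.25, c ** -0.5

a, b = lll_quality_constants(0.75, 0.501)   # the Magma defaults
# a < 1.190 and b < 1.416, matching the figures quoted above
```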
It is possible to obtain lattice bases satisfying these four conditions without
being LLL-reduced, by using the so-called Siegel condition [Akh02]; this
useful variant, though available, is not the default one.
Internally, the LLL routine may use up to eight different LLL variants, one of
them being de Weger's exact integer method [dW87], and
the seven others relying upon diverse kinds of
floating-point arithmetic. All but one of these variants are heuristic and
implement diverse ideas from [SE94], Victor Shoup's
NTL [Sho] and [NS06]. The heuristic variants
may loop forever, but when this happens it is usually detected
and a more reliable variant is used. For a given input, the LLL routine
tries to find out which variant should be the fastest, and may switch
to other variants during the execution: either because it
deems that the current variant is looping forever or because a faster variant
could be used with the thus-far reduced basis. This machinery remains
unknown to the user (except when the LLL verbose mode is activated) and
makes the Magma LLL routine the fastest currently available, with the
additional advantage of being fairly reliable.
The floating-point variants essentially differ on the two following points:
- (a)
- whether or not the matrix basis computations are performed with the
help of a complete knowledge of the Gram matrix of the vectors (the matrix
of the pairwise inner products),
- (b)
- the underlying floating-point arithmetic.
In theory, to get a provable variant, one needs the Gram matrix and
arbitrary precision floating-point numbers (as provided by the
MPFR-based [Pro] Magma real numbers). In practice, to get an efficient
variant, it is desirable not to maintain the exact knowledge of the
Gram matrix (maintaining the Gram matrix represents asymptotically a large
constant fraction of the total computational cost), while it is desirable to
use the machine processor floating-point arithmetic (for instance, C doubles).
Three of the seven variants use the Gram matrix, while the four others do not.
In each group (either the three or the remaining four), one variant relies
on arbitrary precision floating-point arithmetic, another on C doubles
and another on what we call doubles with extended exponent.
These last variants are important because they are almost as fast
as the variants based on C doubles, and can be used for much wider families
of inputs: when the initial basis is made of integers larger than
approximately 500 bits, overflows can occur with the C doubles.
To be precise, a "double with extended exponent" is a pair made of a
C double and a C integer, the latter being the extended exponent. So far, we have
described six variants. The seventh does not rely on the Gram matrix
and is based on an idea similar to "doubles with extended exponents".
The difference is that for a given vector of dimension n, instead of
having n pairs of doubles and integers, one has n doubles and only
one integer: the extended exponent is "factorised". This variant is
quite often the one to use in practice, because it is reasonably efficient
and not too unreliable.
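The representations just described can be mimicked in a few lines. The sketch below is our own illustration (not Magma's internal code): it splits a large non-negative integer into a double-sized mantissa and an integer exponent, and also shows the "factorised" per-vector exponent of the seventh variant:

```python
import math

def to_extended(n):
    # Represent the non-negative integer n as m * 2**e, where m fits in a
    # C double's 53-bit mantissa and e is the extended exponent.
    e = max(n.bit_length() - 53, 0)
    return float(n >> e), e

def vector_to_extended(v):
    # "Factorised" exponent: one shared e for the whole vector, so a vector
    # of dimension n is stored as n doubles plus a single integer.
    # (Non-negative entries assumed for brevity.)
    e = max(max(x.bit_length() for x in v) - 53, 0)
    return [float(x >> e) for x in v], e

m, e = to_extended(1 << 1000)   # far beyond the range of a plain C double
assert math.log2(m) + e == 1000
```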
Important warning. By default, the LLL routine is entirely
reliable. This implies that the default provable variant can be much
slower than the heuristic ones. By setting Proof to false,
a significant run-time improvement can be gained without taking much
risk: even in that case, the LLL routine remains more reliable than
any other implementation based on floating-point arithmetic. For any
application where guaranteed LLL-reduction is not needed, we recommend
setting Proof to false.
Recommended usage. The formal description of the function LLL
below explains all the parameters in detail, but we first note here some
common situations which arise.
The desired variant of LLL is often not the default one.
Since the LLL algorithm is used in many different contexts and with
different output requirements,
it seems impossible to define a natural default variant, and
so using the LLL routine efficiently
often requires some tuning. Here we consider three main-stream situations.
- -
- It may be desired to obtain the main LLL inequalities (see the
introduction of this subsection), without paying much attention to
the δ and η reduction parameters. In this case one should
activate the Fast parameter, possibly
with the verbose mode to know afterwards which parameters have been used.
- -
- It may be desired to have the main LLL inequalities for a given
pair of parameters (δ, η). In this case one should set the parameters
Delta and Eta
to the desired values, and set SwapCondition to "Siegel".
- -
- It may be desired to compute a very well LLL-reduced basis.
In this case one should set Delta to 0.9999, Eta to 0.5001,
and possibly also activate the DeepInsertions option.
In any case, if you want to be absolutely sure of the quality of the output,
you need to keep the Proof option activated.
This example is meant to show the differences between the three main
recommended usages above. Assume that we are interested in the lattice
generated by the rows of the matrix B defined as follows.
> R:=RealField(5);
> B:=RMatrixSpace(IntegerRing(), 50, 51) ! 0;
> for i := 1 to 50 do B[i][1] := RandomBits(10000); end for;
> for i := 1 to 50 do B[i][i+1] := 1; end for;
The matrix B is made of 50 vectors of length 51. Each entry of the
first column is approximately 10000 bits long. Suppose first that
we use the default variant.
> time C:=LLL(B:Proof:=false);
Time: 11.300
> R!(Norm (C[1]));
5.1959E121
The output basis is (0.75, 0.501)-reduced. Suppose now
that we only wanted to have a basis that satisfies the main LLL
properties with δ=0.75 and η=0.501. Then we could have used the
Siegel swapping condition.
> time C:=LLL(B:Proof:=false, SwapCondition:="Siegel");
Time: 10.740
> R!(Norm (C[1]));
6.6311E122
Notice that the quality of the output is quite often worse with the Siegel
condition than with the Lovász condition, but the main LLL properties are
satisfied anyway. Suppose now that we want a very well reduced basis. Then
we fix δ and η close to 1 and 0.5 respectively.
> time C:=LLL(B:Proof:=false, Delta:=0.9999, Eta:=0.5001);
Time: 19.220
> R!(Norm (C[1]));
1.8056E121
This is of course more expensive, but the first output vector is
significantly shorter.
Finally, suppose that we only wanted to "shorten" the entries of the input
basis, very efficiently and without any particular preference for the reduction
factors δ and η. Then we could have used the Fast
option, which tries to choose such parameters so as to optimise the
running time.
> time C:=LLL(B:Proof:=false, Fast:=1);
Time: 8.500
> R!(Norm (C[1]));
6.2746E121
By activating the verbose mode, one can see for which parameters the
output basis is reduced.
> SetVerbose ("LLL", 1);
> C:=LLL(B:Proof:=false, Fast:=1);
[...]
The output basis is (0.830,0.670)-reduced
LLL(X) : ModMatRngElt -> ModMatRngElt, AlgMatElt, RngIntElt
LLL(X) : AlgMatElt -> AlgMatElt, AlgMatElt, RngIntElt
Al: MonStgElt Default: "New"
Proof: BoolElt Default: true
Method: MonStgElt Default: "FP"
Delta: RngElt Default: 0.75
Eta: RngElt Default: 0.501
InitialSort: BoolElt Default: false
FinalSort: BoolElt Default: false
StepLimit: RngIntElt Default: 0
TimeLimit: RngElt Default: 0.0
NormLimit: RngIntElt Default:
UseGram: BoolElt Default: false
DeepInsertions: BoolElt Default: false
EarlyReduction: BoolElt Default: false
SwapCondition: MonStgElt Default: "Lovasz"
Fast: RngIntElt Default: 0
Weight: SeqEnum Default: [0, ...,0]
Given a matrix X belonging to the matrix module S=HomR(M, N) or
the matrix algebra S=Mn(R), where R is a subring of the real field,
compute a matrix Y whose non-zero rows are a LLL-reduced basis
for the Z-lattice
spanned by the rows of X (which need not be Z-linearly independent).
The LLL function returns three values:
- (a)
- A LLL-reduced matrix Y in S whose rows span the same
lattice (Z-module) as that spanned by the rows of X.
- (b)
- A unimodular matrix T in the matrix ring over Z whose degree
is the number of rows of X such that TX=Y;
- (c)
- The rank of X.
Note that the returned transformation matrix T is always over
the ring of integers Z, even if R is not Z.
The input matrix X does not need to have linearly independent rows: this
function performs a floating-point variant of what is usually known as
the MLLL algorithm of M. Pohst [Poh87].
By default the rows of the returned matrix Y are sorted so that all the zero
rows are at the bottom. The non-zero vectors are sorted so that they form
a LLL-reduced basis (except when FinalSort is true; see below).
A detailed description of the parameters now follows.
See the discussion and the example above this function for recommended
parameters for common cases.
By default, the Proof option is set to true, which means that
the result is guaranteed. It is possible, and usually faster,
to switch off this option: the same calculations are performed,
but without the final check, via the L2 algorithm, that the output
is indeed reduced. For the vast majority of cases, we recommend setting
Proof to false.
By default the δ and η constants used in the LLL
conditions are set respectively to 0.75 and 0.501
(except when Method is "Integral", see below). The
closer δ and η are to 1 and 0.5, respectively, the shorter
the output basis will be. In the original description of the LLL algorithm,
δ is 0.75 and η is 0.5. Making δ
and η vary can have a significant influence on the running time.
If Method is "Integral", then by default η is set
to 0.5 and δ to 0.99. With this value of Method, the
parameter δ can be chosen arbitrarily in (0.25, 1],
but when δ is 1, the running time may increase dramatically.
If Method is not "Integral", then δ may be
chosen arbitrarily in (0.25, 1), and η may be chosen arbitrarily
in (0.5, Sqrt(δ)). Notice that the parameters δ and η
used internally will be slightly larger and smaller than the
given ones, respectively, to take floating-point inaccuracies into account
and to
ensure that the output basis will indeed be reduced for the expected
parameters. In all cases, the output basis will be (δ, η)-LLL-reduced.
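The admissible parameter ranges just described can be encoded directly; a small sketch (ours, not a Magma API):

```python
import math

def admissible(delta, eta, method="FP"):
    # Parameter ranges for the Delta/Eta options, as described above.
    if method == "Integral":
        # Integral method: delta in (0.25, 1], eta fixed at 0.5.
        return 0.25 < delta <= 1 and eta == 0.5
    # Floating-point methods: delta in (0.25, 1), eta in (0.5, sqrt(delta)).
    return 0.25 < delta < 1 and 0.5 < eta < math.sqrt(delta)
```

For instance, the default pair (0.75, 0.501) and the strong pair (0.9999, 0.5001) are both admissible for the floating-point methods, whereas η = 0.5 is not.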
For matrices over Z or Q, there are two main methods for handling
the Gram-Schmidt orthogonalisation variables used in the algorithm:
the Nguyen-Stehlé floating-point method (called L2)
and the exact integral method described by de Weger in [dW87].
The Nguyen-Stehlé algorithm is implemented as described
in the article, along with a few faster heuristic variants. When Method
is "FP", these faster variants will be tried first,
and when they are considered as misbehaving, they will be followed
by the proved variant automatically. When Method is "L2",
the provable L2 algorithm is used directly. Finally, when
Method is "Integral", the de Weger integral method is used.
In this case, the DeepInsertions, EarlyReduction,
Siegel SwapCondition, NormLimit, Fast and
Weight options
are unavailable (the code is older and has not been updated for these).
Furthermore, the default value for Eta is then 0.5.
The default FP method is nearly always very
much faster than the other two methods.
By default, it is possible (though it happens very infrequently) that
the returned basis is not of the expected quality. In order to be sure
that the output is indeed correct, Method can be set to
either "Integral" or "L2".
Since V2.26, a parallel multi-threaded version of the integral
method (selected by Method := "Integral") is now supported.
One can call SetNthreads beforehand to specify the number
of threads to be used. For inputs in larger dimension, this
may lead to a significant speedup.
The parameter InitialSort specifies whether the vectors should
be sorted by length before the LLL reduction. When FinalSort
is true, the vectors are sorted by increasing length (and
alphabetical order in case of equality) after the LLL reduction
(this parameter was called Sort before V2.13). The resulting
vectors may therefore not be strictly LLL-reduced, because of the permutation
of the rows.
The parameter UseGram specifies that for the floating-point
methods (L2 and FP) the computation should be
performed by computing the Gram matrix F, using the LLLGram
algorithm below, and by updating the basis matrix correspondingly.
Magma will automatically do this internally for the floating-point
method if it deems that it is more efficient to do so, so this parameter
just allows one to stipulate that this method should or should not be used.
For the integral method, using the Gram matrix never improves
the computation, so the value of this parameter is simply ignored in
this case.
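The Gram matrix referred to here is simply F = X X^tr, the matrix of pairwise inner products of the rows of X; as a one-line sketch (ours):

```python
def gram_matrix(X):
    # F[i][j] = inner product of rows i and j of X, i.e. F = X * X^tr.
    return [[sum(a * b for a, b in zip(u, v)) for v in X] for u in X]
```

For example, `gram_matrix([[1, 2], [3, 4]])` is `[[5, 11], [11, 25]]`, a symmetric matrix.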
Setting the parameter StepLimit to a positive integer s will cause
the function to terminate after s steps of the main loop.
Setting the parameter TimeLimit to a positive real number t will
cause the function to terminate after the algorithm has been executed
for t seconds of process user time. Similarly, setting the parameter
NormLimit to a non-negative integer N will cause the algorithm to
terminate after a vector of norm less than or equal to N has been found.
In such cases, the current reduced basis is
returned, together with the appropriate transformation matrix and an upper
bound for the rank of the matrix. Nothing precise can then be said
about the reduced basis (in general it will of course not be LLL-reduced),
but it will at least be reduced to a certain extent.
When the value of the parameter DeepInsertions is true,
Schnorr-Euchner's deep insertion algorithm is used [SE94]. It
usually provides better bases, but can take significantly more time. In
practice, one may first LLL-reduce the basis and then use the deep insertion
algorithm on the computed basis.
When the parameter EarlyReduction is true and when
any of the two floating-point methods are used,
some early reduction steps are inserted inside the execution of the
LLL algorithm. This sometimes makes the entries of the basis smaller very
quickly. It occurs in particular for lattice bases built from minimal
polynomial or integer relation detection problems. The speed-up
is sometimes dramatic, especially if the reduction in length makes it possible
to use C integers for the basis matrix (instead of multiprecision integers)
or C doubles for the floating-point calculations (instead of
multiprecision floating-point numbers or doubles with additional exponent).
The parameter SwapCondition can be set either
to "Lovasz" (default)
or to "Siegel". When its value is "Lovasz", the classical Lovász
swapping condition is used, whereas otherwise the
so-called Siegel condition is used. The Lovász condition
tests whether the
inequality |b_i* + μ_{i,i-1} b_{i-1}*|^2 ≥ δ |b_{i-1}*|^2
is satisfied or not, whereas in the Siegel case the
inequality |b_i*|^2 ≥ (δ - η^2) |b_{i-1}*|^2 is
used (see [Akh02] for more details).
In the Siegel case, the execution may be significantly
faster. Though the output basis may not be LLL-reduced, the classical LLL
inequalities (see the introduction of this subsection) will be fulfilled.
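In terms of the squared Gram-Schmidt lengths, the two swap tests can be sketched as follows (function names ours). On a size-reduced basis (|μ| ≤ η) the Siegel test triggers a swap less often than the Lovász test, which is why it is faster but may give slightly worse bases:

```python
def lovasz_swap(nprev, ncur, mu, delta):
    # Swap b_{i-1}, b_i when |b_i* + mu_{i,i-1} b_{i-1}*|^2 < delta |b_{i-1}*|^2,
    # where nprev = |b_{i-1}*|^2, ncur = |b_i*|^2 and mu = mu_{i,i-1}.
    return ncur + mu * mu * nprev < delta * nprev

def siegel_swap(nprev, ncur, delta, eta):
    # Swap when |b_i*|^2 < (delta - eta^2) |b_{i-1}*|^2.
    return ncur < (delta - eta * eta) * nprev
```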
When the option Fast is set to 1 or 2, all the other parameters may be
changed internally to end the execution as fast as possible. Even if the
other parameters are not set to the default values, they may be ignored.
This includes the parameters Delta, Eta and SwapCondition,
so that the output basis may not be LLL-reduced. However, if the verbose
mode is activated, the chosen values of these parameters will be printed, so
that the classical LLL inequalities (see the introduction of this subsection)
may be used afterwards. When Fast is set to 2, the chosen parameters
are such that the classical LLL inequalities will be at least as strong
as for the default parameters Delta and Eta.
If Weight is the list of integers [x_1, ..., x_n]
(where n is the degree of the lattice), then the LLL
algorithm uses a weighted Euclidean inner product: the inner product
of the vectors (v_1, v_2, ..., v_n) and (w_1, w_2, ..., w_n)
is defined to be ∑_{i=1}^{n} v_i · w_i · 2^(2 x_i).
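A direct transcription of this weighted inner product (our sketch):

```python
def weighted_inner_product(v, w, x):
    # sum_i v_i * w_i * 2^(2*x_i), with x the Weight sequence [x_1, ..., x_n].
    return sum(vi * wi * 2 ** (2 * xi) for vi, wi, xi in zip(v, w, x))
```

With all weights zero this recovers the ordinary Euclidean inner product.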
By default, the Al parameter is set to New.
If it is set to Old, then the former Magma LLL is used (prior
to V2.13); notice that in this case, the execution is not guaranteed
to finish and that the output basis may not be reduced for the given LLL
parameters.
This variant is not maintained anymore.
Note 1: If the input basis has many zero entries, then try to place them
in the last columns; this will slightly improve the running time.
Note 2: If the elementary divisors of the input matrix are fairly
small, relative to the size of the matrix and the size of its entries, then
it may be very much quicker to reduce the matrix to Hermite form first (using
the function HermiteForm) and then to LLL-reduce this matrix.
This case can arise: for example, when one has a "very messy" matrix for
which it is known that the lattice described by it has a relatively short
basis.
Note 3: Sometimes, the EarlyReduction variant can significantly
decrease the running time.
Since V2.21 (2014), Magma also supports an analogue for LLL-reduction
for matrices over K[x]. The algorithm is similar to that described
in [Pau98]. Note that unlike the integer case,
the LLL-reduction of a matrix in the K[x] case is always unique,
with the rows sorted by degree.
BasisReduction(X) : ModMatRngElt -> ModMatRngElt, AlgMatElt, RngIntElt
BasisReduction(X) : AlgMatElt -> AlgMatElt, AlgMatElt, RngIntElt
This is a shortcut for LLL(X:Proof:=false).
LLLGram(F) : AlgMatElt -> AlgMatElt, AlgMatElt, RngIntElt
Isotropic: BoolElt Default: false
Given a symmetric matrix F belonging to the
matrix module S=HomR(M, M) or the matrix algebra S=Mn(R),
where R is a subring of the real field,
so that F = X X^tr for some matrix X over the real field,
compute a matrix G which is the Gram matrix corresponding to
a LLL-reduced form of the matrix X.
The rows of the corresponding generator matrix X need not be Z-linearly
independent, in which case F will be singular.
This function returns three values:
- (a)
- A LLL-reduced Gram matrix G in S of the Gram matrix F;
- (b)
- A unimodular matrix T in the matrix ring over Z whose degree
is the number of rows of F such that G = T F T^tr.
- (c)
- The rank of F (which equals the dimension of the lattice generated
by X in the definite case).
The options available are the same as for the LLL routine, except of
course the UseGram option, which is meaningless here. The
Weight parameter should not be used either. The routine can be
used with any symmetric matrix (not necessarily positive definite): Simon's
indefinite LLL variant (which uses absolute values in the Lovász condition)
is used (see [Sim05]).
When the option Isotropic is true, some attempt to deal with
isotropic vectors is made. (This is only relevant when working on
an indefinite Gram matrix). In particular, if the determinant is
squarefree, hyperbolic planes are split off iteratively.
Given a lattice L with basis matrix B, return the LLL basis matrix
B' of L, together with the transformation matrix T such that B'=TB.
The LLL basis matrix B' is simply defined to be a LLL-reduced form of B;
it is stored in L when computed and subsequently used
internally by many lattice functions.
The LLL basis matrix will be created automatically internally as needed
with δ=0.999 by default (note that this is different from the
usual default of 0.75);
by the use of parameters to this function one can ensure that
the LLL basis matrix is created in a way which is different to
the default.
The parameters (not listed here again) are the same as for the function
LLL (q.v.), except that the limit parameters are illegal.
Given a lattice L with Gram matrix F, return the LLL Gram matrix
F' of L, together with the transformation matrix T such that
F' = T F T^tr. F' is simply defined to be B'(B')^tr, where
B' is the LLL basis matrix of L (see the function LLLBasisMatrix).
The parameters (not listed here again) are the same as for the function
LLL (q.v.), except that the limit parameters are illegal.
Given a lattice L with basis matrix B, return a new lattice L' with
basis matrix B' and a transformation matrix T so that L' is equal
to L but B' is LLL-reduced and B'=TB. Note that the inner product
used in the LLL computation is that given by the inner product matrix of
L (so, for example, the resulting basis may not be LLL-reduced with respect
to the standard Euclidean norm).
The LLL basis matrix of L is used, so calling this function with
argument L is completely equivalent
(ignoring the second return value) to the invocation
LatticeWithBasis(LLLBasisMatrix(L), InnerProductMatrix(L)).
The parameters (not listed here again) are the same as for the function
LLL (q.v.), except that the limit parameters are illegal.
The BasisReduction shortcut to turn off the Proof option is
also available.
This is a shortcut for LLL(L:Proof:=false).
(Procedure.)
Set the verbose printing level for the LLL algorithm to
be v. Currently the legal values for v are true, false, 0, 1, 2 and 3
(false is the same as 0, and true is the same as 1).
The three non-zero levels notify when the maximum LLL-reduced rank of the
LLL algorithm increases, level 2 also prints the norms of the
reduced vectors at such points, and level 3 additionally
gives some current status information every 15 seconds.
We demonstrate how the LLL algorithm can be used to find very good
multipliers for the extended GCD of a sequence of integers.
Given a sequence Q = [x_1, ..., x_n] of integers we wish to find the GCD g
of Q and integers m_i for 1 ≤ i ≤ n
such that g = m_1·x_1 + ... + m_n·x_n.
For this example we set Q to a sequence of n=10 integers of varying bit
lengths (to make the problem a bit harder).
> Q := [ 67015143, 248934363018, 109210, 25590011055, 74631449,
> 10230248, 709487, 68965012139, 972065, 864972271 ];
> n := #Q;
> n;
10
We next choose a scale factor S large enough and then create the
n x (n + 1) matrix X = [C | I_n], where C is the column vector
whose i-th entry is S·Q[i].
> S := 100;
> X := RMatrixSpace(IntegerRing(), n, n + 1) ! 0;
> for i := 1 to n do X[i][i + 1] := 1; end for;
> for i := 1 to n do X[i][1] := S * Q[i]; end for;
> X;
[6701514300 1 0 0 0 0 0 0 0 0 0]
[24893436301800 0 1 0 0 0 0 0 0 0 0]
[10921000 0 0 1 0 0 0 0 0 0 0]
[2559001105500 0 0 0 1 0 0 0 0 0 0]
[7463144900 0 0 0 0 1 0 0 0 0 0]
[1023024800 0 0 0 0 0 1 0 0 0 0]
[70948700 0 0 0 0 0 0 1 0 0 0]
[6896501213900 0 0 0 0 0 0 0 1 0 0]
[97206500 0 0 0 0 0 0 0 0 1 0]
[86497227100 0 0 0 0 0 0 0 0 0 1]
Finally, we compute a LLL-reduced form L of X.
> L := LLL(X);
> L;
[ 0 0 1 0 -15 -6 3 1 2 -3 -3]
[ 0 -3 5 -3 -11 -1 0 8 -14 4 3]
[ 0 -5 -2 2 -2 14 8 -6 8 1 -4]
[ 0 6 1 -3 -10 -3 -14 5 0 8 8]
[ 0 -1 -2 0 -2 -13 8 6 8 10 -2]
[ 0 -9 -3 -11 5 7 -1 1 9 -11 -2]
[ 0 16 0 3 3 9 -3 0 -1 -4 -11]
[ 0 -6 1 -16 4 -1 6 -6 -5 7 -7]
[ 0 9 -3 -10 7 -3 1 -7 8 0 18]
[-100 -3 -1 13 -1 -4 2 3 4 5 -1]
Notice that the large weighting on the first column forces the LLL algorithm
to produce as many vectors as possible at the top of the matrix with a
zero entry in the first column (since such vectors will have shorter
norms than any vector with a non-zero entry in the first column).
Thus, up to sign, the GCD of the entries in Q is the entry in the bottom left
corner of L divided by S; here this gives ±1, so the GCD is 1. The last n
entries of the last row of L give a sequence of multipliers M
for the extended GCD algorithm. Also, taking the last n entries of each of
the first n - 1 rows of L gives independent null-relations for the
entries of Q (i.e., the kernel of the corresponding column matrix).
We check that M gives a correct sequence of multipliers.
> M := Eltseq(L[10])[2 .. n+1]; M;
[ 3, 1, -13, 1, 4, -2, -3, -4, -5, 1 ]
> &+[Q[i]*M[i]: i in [1 .. n]];
-1
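The arithmetic behind this check is easy to reproduce outside Magma. The sketch below verifies that Q indeed has GCD 1 and that multipliers of the kind returned above combine to ±1 (the sign depends on the orientation of the reduced basis):

```python
from functools import reduce
from math import gcd

Q = [67015143, 248934363018, 109210, 25590011055, 74631449,
     10230248, 709487, 68965012139, 972065, 864972271]
M = [3, 1, -13, 1, 4, -2, -3, -4, -5, 1]   # multipliers from the session above

g = sum(q * m for q, m in zip(Q, M))
assert abs(g) == 1             # so the GCD of Q divides 1 ...
assert reduce(gcd, Q) == 1     # ... and is indeed 1
```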
PairReduce(X) : AlgMatElt -> AlgMatElt, AlgMatElt
Given a matrix X belonging to the
matrix module S=HomR(M, N) or the matrix algebra S=Mn(R),
where R is Z or Q, compute a matrix Y whose rows form a pairwise
reduced basis for the Z-lattice spanned by the rows of X.
Being pairwise reduced (i.e., 2|(v, w)| ≤ min(|v|^2, |w|^2)
for all pairs of
basis vectors) is a much simpler criterion than being LLL-reduced,
but often yields sufficiently good results very quickly. It can also be used
as a preprocessing for LLL or in alternation with LLL to obtain better bases.
The rows of X need not be Z-linearly independent.
This function returns two values:
- (a)
- The pairwise reduced matrix Y row-equivalent to X as a
matrix in S;
- (b)
- A unimodular matrix T in the matrix ring over Z whose degree
is the number of rows of X such that TX=Y.
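A single pairwise-reduction step amounts to subtracting the nearest integer multiple of one vector from the other, which guarantees 2|(v, w)| ≤ |v|^2 afterwards. A sketch (ours, over the rationals for exactness):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reduce_against(v, w):
    # Replace w by w - q*v, with q the nearest integer to (v, w)/(v, v),
    # so that afterwards 2|(v, w)| <= (v, v).
    q = round(Fraction(dot(v, w), dot(v, v)))
    return [wi - q * vi for vi, wi in zip(v, w)]

w = reduce_against([2, 0], [5, 1])           # q = round(10/4) = 2
assert w == [1, 1]
assert 2 * abs(dot([2, 0], w)) <= dot([2, 0], [2, 0])
```

Full pairwise reduction repeats this step over all pairs of basis vectors until the criterion holds everywhere.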
PairReduceGram(F) : AlgMatElt -> AlgMatElt, AlgMatElt, RngIntElt
Check: BoolElt Default: false
Given a symmetric positive semidefinite matrix F belonging to the
matrix module S=HomR(M, M) or the matrix algebra S=Mn(R),
where R is Z or Q,
so that F = X X^tr for some matrix X over the real field,
compute a matrix G which is the Gram matrix corresponding to
a pairwise reduced form of the matrix X.
The rows of the corresponding matrix X need not be Z-linearly independent.
This function returns two values:
- (a)
- The pairwise reduced Gram matrix G of F as a matrix in S;
- (b)
- A unimodular matrix T in the matrix ring over Z whose degree
is the number of rows of F which gives the corresponding transformation:
G = T F T^tr.
The matrix F must be a symmetric positive semidefinite matrix;
if it is not the results are unpredictable.
By default, Magma does not check this since it may be expensive in higher
dimensions and in many applications will be known a priori.
The checking can be invoked by setting Check := true.
Given a lattice L with basis matrix B, return a new lattice L' with
basis matrix B' and a transformation matrix T so that L' is equal
to L but B' is pairwise reduced and B'=TB. Note that the inner product
used in the pairwise reduction computation is that given by the inner product
matrix of L (so, for example, the resulting basis may not be pairwise reduced
with respect to the standard Euclidean norm).
Seysen(X) : AlgMatElt -> AlgMatElt, AlgMatElt
Given a matrix X belonging to the
matrix module S=HomR(M, N) or the matrix algebra S=Mn(R),
where R is Z or Q, compute a matrix Y whose
rows form a Seysen-reduced basis for the Z-lattice spanned by the rows
of X. The rows of X need not be Z-linearly independent.
The Seysen-reduced matrix Y is such that the entries of the corresponding
Gram matrix G = Y Y^tr and its inverse G^(-1) are simultaneously reduced.
This function returns two values:
- (a)
- The Seysen-reduced matrix Y corresponding to X as a
matrix in S;
- (b)
- A unimodular matrix T in the matrix ring over Z whose degree
is the number of rows of X such that TX=Y.
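One standard way to quantify "G and G^(-1) simultaneously reduced" is Seysen's measure S(G) = ∑_i G_ii (G^(-1))_ii, which Seysen reduction tries to keep small; this measure is our choice of illustration, not something the routine exposes. A 2 x 2 sketch over the rationals:

```python
from fractions import Fraction

def seysen_measure_2x2(G):
    # S(G) = sum_i G[i][i] * (G^-1)[i][i] for a 2x2 symmetric matrix G.
    (a, b), (c, d) = G
    det = a * d - b * c
    inv_diag = (Fraction(d, det), Fraction(a, det))   # diagonal of G^-1
    return a * inv_diag[0] + d * inv_diag[1]
```

The identity matrix gives the minimum value 2 in dimension 2, while off-diagonal mass increases the measure: `seysen_measure_2x2([[2, 1], [1, 1]])` is 4.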
SeysenGram(F) : AlgMatElt -> AlgMatElt, AlgMatElt, RngIntElt
Check: BoolElt Default: false
Given a symmetric positive semidefinite matrix F belonging to the
matrix module S=HomR(M, M) or the matrix algebra S=Mn(R),
where R is Z or Q,
so that F = X X^tr for some matrix X over the real field,
compute a matrix G which is the Gram matrix corresponding to
a Seysen-reduced form of the matrix X.
The rows of the corresponding matrix X need not be Z-linearly independent.
The Seysen-reduced Gram matrix G is such that the entries of G
and its inverse G^(-1) are simultaneously reduced.
This function returns two values:
- (a)
- The Seysen-reduced Gram matrix G of F as a matrix in S;
- (b)
- A unimodular matrix T in the matrix ring over Z whose degree
is the number of rows of F which gives the corresponding transformation:
G = T F T^tr.
The matrix F must be a symmetric positive semidefinite matrix;
if it is not the results are unpredictable.
By default, Magma does not check this since it may be expensive in higher
dimensions and in many applications will be known a priori.
The checking can be invoked by setting Check := true.
Given a lattice L with basis matrix B, return a new lattice L' with
basis matrix B' and a transformation matrix T so that L' is equal
to L but B' is Seysen-reduced and B'=TB. Note that the inner product
used in the Seysen-reduction computation is that given by the inner product
matrix of L (so, for example, the resulting basis may not be Seysen-reduced
with respect to the standard Euclidean norm). The effect of the reduction
is that the basis of L' and the dual basis of L' will be simultaneously
reduced.
We demonstrate how the function Seysen can be used on the Gram
matrix of the Leech lattice to obtain a Gram matrix S for the lattice
so that both S and S^(-1) are simultaneously reduced.
Note that all three reduction methods yield a basis of vectors of minimal
length, but the Seysen reduction also has a dual basis of vectors of norm 4.
This is of interest in representation theory, for example, as the entries of
the inverse of the Gram matrix control the size of the entries in a
representation on the lattice.
> F := GramMatrix(Lattice("Lambda", 24));
> [ F[i][i] : i in [1..24] ];
[ 8, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 ]
> [ (F^-1)[i][i] : i in [1..24] ];
[ 72, 8, 12, 8, 10, 4, 4, 8, 8, 4, 4, 8, 4, 4, 4, 4, 4, 4, 4, 4, 8, 4, 4, 8 ]
> L := LLLGram(F);
> P := PairReduceGram(F);
> S := SeysenGram(F);
> [ L[i][i] : i in [1..24] ];
[ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 ]
> [ P[i][i] : i in [1..24] ];
[ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 ]
> [ S[i][i] : i in [1..24] ];
[ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 ]
> [ (L^-1)[i][i] : i in [1..24] ];
[ 6, 4, 8, 4, 32, 4, 4, 8, 18, 4, 4, 8, 8, 8, 12, 4, 6, 4, 20, 4, 8, 4, 14, 4 ]
> [ (P^-1)[i][i] : i in [1..24] ];
[ 72, 40, 12, 8, 10, 4, 4, 8, 8, 4, 4, 8, 4, 4, 4, 4, 4, 4, 4, 4, 8, 4, 4, 8 ]
> [ (S^-1)[i][i] : i in [1..24] ];
[ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 ]
An ordered set of d linearly independent
vectors b1, b2, ..., bd ∈Rn is said to be
Hermite-Korkine-Zolotarev-reduced (HKZ-reduced for short) if the
three following conditions are satisfied:
- (a)
- For any i>j, we have |μi, j| ≤0.501,
- (b)
- The vector b1 is a shortest non-zero vector in the lattice
spanned by b1, b2, ..., bd,
- (c)
- The vectors b2 - μ2, 1 b1, ..., bd - μd, 1 b1
are themselves HKZ-reduced,
where μi, j = (bi, bj * )/|bj * |2 and bi * is the i-th
vector of the Gram-Schmidt orthogonalisation of (b1, b2, ..., bd).
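The size-reduction condition (a) can be checked directly from the Gram-Schmidt coefficients μi, j. The following Python sketch (illustrative only, not Magma code; the function names are ours) computes the coefficients with exact rational arithmetic and tests condition (a):

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalisation over the rationals.

    Returns the orthogonal vectors b* and the coefficients
    mu[i][j] = (b_i, b*_j) / |b*_j|^2 used in the reduction conditions.
    """
    def dot(u, v):
        return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

    star, mu = [], []
    for i, b in enumerate(basis):
        coeffs = [dot(b, star[j]) / dot(star[j], star[j]) for j in range(i)]
        v = [Fraction(x) for x in b]
        for j, m in enumerate(coeffs):
            v = [x - m * y for x, y in zip(v, star[j])]
        star.append(v)
        mu.append(coeffs)
    return star, mu

def is_size_reduced(basis, eta=Fraction(501, 1000)):
    """Check condition (a): |mu_{i,j}| <= eta for all i > j."""
    _, mu = gram_schmidt(basis)
    return all(abs(m) <= eta for row in mu for m in row)

print(is_size_reduced([[1, 0], [0, 1]]))  # True: orthogonal basis
print(is_size_reduced([[1, 0], [7, 1]]))  # False: mu_{2,1} = 7
```

The bound 0.501 (rather than 1/2) leaves slack for the floating-point arithmetic used by the implementation.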
HKZ(X) : ModMatRngElt -> ModMatRngElt, AlgMatElt
Proof: BoolElt Default: true
Unique: BoolElt Default: false
Prune: SeqEnum Default: [ ...,[1.0, ...,1.0], ... ]
Given a matrix X over a subring of the real field, compute a
matrix Y whose non-zero rows form an HKZ-reduced basis for the
Z-lattice spanned by the rows of X (which need not be
Z-linearly independent). The function returns the matrix Y, whose
rows span the same lattice (Z-module) as the rows of X, together
with a unimodular matrix T in the matrix ring over Z whose degree
is the number of rows of X, such that TX=Y.
Although the implementation relies on floating-point arithmetic, the
result can be guaranteed to be correct. By default, the Proof
option is set to true, thus guaranteeing the output. The
run-time might be improved by switching off this option.
A given lattice may have many HKZ-reduced bases. If the Unique
option is turned on, a uniquely determined HKZ-reduced basis is
computed. This basis is chosen so that: for any i>j, we
have μi, j ∈[ - 1/2, 1/2); for any i, the first non-zero
coordinate of the vector bi * is positive; and for any i, the
vector bi * is the shortest among possible vectors for the
lexicographical order.
The implementation solves the Shortest Vector Problem for lattices of
dimensions 1 to d, where d is the number of rows of the input
matrix X. These instances of the Shortest Vector Problem are
solved with the same enumeration algorithm as is used for computing
the minimum of a lattice, which can be expressed as a search within a
large tree. The i-th table of the Prune optional parameter is
used to prune the i-dimensional enumeration. The default value is:
[[1.0: j in [1..i]]: i in [1..NumberOfRows(X)]].
See the introduction of
Section Minima and Element Enumeration for more details on the Prune
option.
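As a point of reference, the default Prune table assigns the coefficient 1.0 to every level of every enumeration dimension, which disables pruning entirely. In Python-like terms (an illustrative rendering, not Magma code; the function name is ours):

```python
def default_prune_table(d):
    """Build the no-pruning table for a d-row input: for each
    dimension i = 1..d, a list of i coefficients all equal to 1.0
    (i.e. the enumeration tree is searched in full)."""
    return [[1.0] * i for i in range(1, d + 1)]

print(default_prune_table(4))
# [[1.0], [1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]]
```

Replacing some coefficients by values below 1.0 shrinks the corresponding levels of the search tree, trading a small probability of missing the optimum for a faster enumeration.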
Proof: BoolElt Default: true
Prune: SeqEnum Default: [ ...,[1.0, ...,1.0], ... ]
Given a symmetric positive semidefinite matrix F over a subring
of the real field so that F=XXtr for some matrix X over the
real field, compute a matrix G which is the Gram matrix
corresponding to an HKZ-reduced form of the matrix X.
The function returns the HKZ-reduced Gram matrix G of F,
and a unimodular matrix T in the matrix ring over Z whose degree
is the number of rows of F which gives the corresponding transformation:
G=TFTtr.
Proof: BoolElt Default: true
Prune: SeqEnum Default: [ ...,[1.0, ...,1.0], ... ]
Given a lattice L with basis matrix B, return a new lattice L' with
basis matrix B' and a transformation matrix T so that L' is equal
to L but B' is HKZ-reduced and B'=TB.
(Procedure.)
Set the verbose printing level for the HKZ algorithm to
be v. Currently the legal values for v are true, false, 0 and 1
(false is the same as 0, and true is the same as 1). More
information on the progress of the computation can be obtained by
turning on the "Enum" verbose flag.
GaussReduce(X) : ModMatRngElt -> ModMatRngElt, AlgMatElt
GaussReduceGram(F) : ModMatRngElt -> ModMatRngElt, AlgMatElt
GaussReduce(L) : Lat -> Lat, AlgMatElt
Restrictions of the HKZ functions to lattices of rank 2.
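In rank 2, HKZ reduction amounts to the classical Lagrange-Gauss algorithm. The following Python sketch of that algorithm (illustrative only, not Magma's implementation; the function name is ours) works directly on basis vectors:

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a rank-2 lattice basis (u, v).

    Repeatedly subtracts the nearest-integer multiple of the shorter
    vector from the longer one; terminates with |u| <= |v|, yielding
    a basis that is both HKZ- and Minkowski-reduced.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # m is the nearest integer to (u, v) / |u|^2; Python's round
        # breaks ties to even, which is harmless here.
        m = round(dot(u, v) / dot(u, u))
        v = [a - m * b for a, b in zip(v, u)]
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u

print(gauss_reduce([1, 0], [7, 1]))  # ([1, 0], [0, 1])
```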
HKZ-reduced bases are much harder to compute than LLL-reduced bases,
but provide a significantly more powerful representation of the
spanned lattice. For example, computing all short lattice vectors is
more efficient if one starts from an HKZ-reduced basis.
> d:=60;
> B:=RMatrixSpace(IntegerRing(), d, d)!0;
> for i,j in [1..d] do B[i][j]:=Random(100*d); end for;
> time C1 := LLL(B);
Time: 0.020
> time C2 := HKZ(B);
Time: 1.380
> m := Norm(C2[1]);
> time _:=ShortVectors(Lattice(C1), 11/10*m);
Time: 1.750
> time _:=ShortVectors(Lattice(C2), 11/10*m);
Time: 0.850
> time _:=ShortVectors(Lattice(C1), 3/2*m);
Time: 73.800
> time _:=ShortVectors(Lattice(C2), 3/2*m);
Time: 32.220
The notion of block Korkine-Zolotarev reduction (BKZ) allows one to
interpolate between LLL and HKZ. One specifies a block size n, and at
step i the algorithm HKZ-reduces the projection of the sublattice spanned
by the vectors bi, ..., bi + n - 1 orthogonally to the first i - 1 basis
vectors. In practice, taking (say) n=20 already achieves most of the
possible reduction.
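The projection used at each BKZ step can be made concrete via Gram-Schmidt. The following Python sketch (illustrative, floating-point, not Magma's implementation; the function name is ours) computes the projection of a block of basis vectors orthogonally to the preceding ones:

```python
def project_block(basis, i, n):
    """Project basis vectors b_i .. b_{i+n-1} (1-based indices)
    orthogonally to the span of b_1 .. b_{i-1}, as used at step i
    of a BKZ tour."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def reduce_against(v, star):
        # Subtract the components of v along each orthogonal vector.
        for s in star:
            m = dot(v, s) / dot(s, s)
            v = [x - m * y for x, y in zip(v, s)]
        return v

    # Gram-Schmidt vectors for the first i-1 basis vectors.
    star = []
    for b in basis[:i - 1]:
        star.append(reduce_against(list(map(float, b)), star))

    # Project each block vector against that orthogonal family.
    return [reduce_against(list(map(float, b)), star)
            for b in basis[i - 1:i - 1 + n]]

# With i = 1 nothing is projected out: the block is just b_1 .. b_n.
print(project_block([[2, 0], [1, 3]], 1, 2))  # [[2.0, 0.0], [1.0, 3.0]]
```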
Delta: FldReElt Default: 0.75
Given a basis matrix M and a parameter n, return a BKZ-reduction.
As above, but with a lattice (over the integers, rationals, or reals) as input.
We create a 40-dimensional lattice, then take a random transformation of it.
Then LLL is applied, but this does not give a great basis (at least
with the chosen Delta value). Finally BKZ is used to make
the basis a bit better.
> L := Lattice(LatticeDatabase(),"HZ40");
> L := LLL(L : Delta:=0.999);
> Max(Diagonal(GramMatrix(L)));
8
> R := RandomSLnZ(40,80,320);
> M := R*GramMatrix(L)*Transpose(R);
> A := LLLGram(M : Delta:=0.75);
> Max(Diagonal(A));
13
> LAT := BKZ(LatticeWithGram(A),20);
> Max(Diagonal(GramMatrix(LAT)));
6
Given a lattice L, an ordered set of d linearly independent
lattice vectors b1, b2, ..., bd ∈L is Minkowski-reduced if, for all
1 ≤i ≤d, the vector bi has minimal norm among all lattice vectors v
such that b1, ..., bi - 1, v can be extended to a basis of L.
MinkowskiGramReduction(M) : Mtrx -> Mtrx, Mtrx
Canonical: BoolElt Default: false
Returns a lattice L' isometric to L
but with a Minkowski-reduced basis, together with
the corresponding transformation matrix T such that
M' = TMTtr, where M and M' are the Gram matrices
of L and L' respectively.
If Canonical is true, returns a canonical
representative L', so that two lattices L1 and L2 are isometric
if and only if their canonical Minkowski-reduced
representatives are equal. This is only implemented for
lattices of rank at most 4.
The Gauss reduction algorithm for binary lattices gives rise to a greedy algorithm,
as described in [NS09b].
Given a lattice L, an ordered set of d linearly independent
lattice vectors b1, b2, ..., bd ∈L is greedy-reduced if for all
2 ≤i ≤d and for all x1, x2, ..., xi - 1 ∈Z, we have
|bi| ≤|bi + x1 b1 + ... + xi - 1 bi - 1|.
Clearly, any Minkowski-reduced basis is greedy-reduced.
In fact, for lattices of rank at most 4, these notions are equivalent.
But for larger rank they may considerably differ.
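The greedy-reduced condition can be probed by brute force over a small box of integer coefficients. The following Python sketch is illustrative only (the function name is ours, and a finite box can refute the condition but not fully certify it in general):

```python
from itertools import product

def is_greedy_reduced(basis, coeff_range=2):
    """Brute-force check of the greedy-reduced condition: for every
    i >= 2, |b_i| <= |b_i + x_1 b_1 + ... + x_{i-1} b_{i-1}| for all
    integer coefficients x_j in a small box around 0."""
    def norm2(v):
        return sum(x * x for x in v)

    for i in range(1, len(basis)):
        b = basis[i]
        for xs in product(range(-coeff_range, coeff_range + 1), repeat=i):
            w = list(b)
            for x, u in zip(xs, basis[:i]):
                w = [a + x * c for a, c in zip(w, u)]
            if norm2(w) < norm2(b):
                return False
    return True

print(is_greedy_reduced([[1, 0], [0, 1]]))  # True
print(is_greedy_reduced([[1, 0], [3, 1]]))  # False: b2 - 2 b1 is shorter
```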
Al: MonStgElt Default: "Nguyen-Stehle"
Returns a lattice L' isometric to L
but with a greedy-reduced basis, together with
the corresponding transformation matrix T such that
M' = TMTtr, where M and M' are the Gram matrices
of L and L' respectively.
The parameter Al specifies which algorithm to use
for reduction. The default algorithm is the greedy Gauss
reduction, described in [NS09b]. Another option is
to set Al := "Kohel", for a variant based on methods implemented
by Kohel.
If M is a symmetric matrix on the boundary of a Minkowski domain,
returns its orbit using a greedy algorithm.
Given a basis S of a finite index sublattice of the
lattice L spanned by the rows of the integral matrix B, return a
matrix C whose rows span L and are not much longer than those
of S. Specifically, the algorithm described in Lemma 7.1
of [MG02] is implemented, and the rows of the output matrix satisfy the
following properties:
- (a)
- For any i, |ci| ≤Sqrt(i) |si|,
- (b)
- For any i, |ci * | ≤|si * |,
where ci * (respectively si * ) denotes the i-th vector of the
Gram-Schmidt orthogonalisation of (c1, c2, ..., cd)
(respectively (s1, s2, ..., sd)).