# AbsPPTConstraints

• Builds the eigenvalue matrices that determine whether or not a state is absolutely PPT
• Other toolboxes required: none
• Related functions: IsAbsPPT, Ball of separability

AbsPPTConstraints is a function that computes the matrices of eigenvalues that determine whether or not a given density matrix $\rho$ is "absolutely PPT" (that is, whether or not $U\rho U^\dagger$ has positive partial transpose for every unitary matrix $U$). The state $\rho$ is absolutely PPT if and only if every matrix that this function produces as output is positive semidefinite. The matrices that characterize when a density matrix is absolutely PPT were derived in [1].

## Syntax

• L = AbsPPTConstraints(LAM)
• L = AbsPPTConstraints(LAM,DIM)
• L = AbsPPTConstraints(LAM,DIM,ESC_IF_NPOS)
• L = AbsPPTConstraints(LAM,DIM,ESC_IF_NPOS,LIM)

## Argument descriptions

• LAM: A bipartite density matrix or a vector containing the eigenvalues of a bipartite density matrix.
• DIM (optional, default has both subsystems of equal dimension): A specification of the dimensions of the subsystems that the density matrix acts on. DIM can be provided in one of two ways:
• If DIM is a scalar, the first subsystem is assumed to have dimension DIM and the second subsystem dimension length(LAM)/DIM.
• If DIM is a 1-by-2 vector, its entries are the dimensions of the two local subsystems.
• ESC_IF_NPOS (optional, default 0): A flag, either 1 or 0, indicating whether the function should stop computing constraint matrices as soon as it finds one that is not positive semidefinite (i.e., as soon as it finds proof that the density matrix in question is not absolutely PPT).
• LIM (optional, default 0): A non-negative integer specifying the maximum number of constraint matrices to compute. If LIM equals 0 then all constraint matrices are computed.

## Examples

### Qutrit-qutrit states

In the qutrit-qutrit case, a state $\rho$ with eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_9 \geq 0$ is absolutely PPT if and only if the following two matrices are positive semidefinite: $\begin{bmatrix}2\lambda_9 & \lambda_8-\lambda_1 & \lambda_7-\lambda_2 \\ \lambda_8-\lambda_1 & 2\lambda_6 & \lambda_5-\lambda_3 \\ \lambda_7-\lambda_2 & \lambda_5-\lambda_3 & 2\lambda_4\end{bmatrix} \quad \text{and} \quad \begin{bmatrix}2\lambda_9 & \lambda_8-\lambda_1 & \lambda_6-\lambda_2 \\ \lambda_8-\lambda_1 & 2\lambda_7 & \lambda_5-\lambda_3 \\ \lambda_6-\lambda_2 & \lambda_5-\lambda_3 & 2\lambda_4\end{bmatrix}.$

We can compute these two matrices for a given set of eigenvalues as follows:

>> rho = RandomDensityMatrix(9);
>> lam = eig(rho);
>> L = AbsPPTConstraints(lam,3) % compute the two constraint matrices

L =

[3x3 double]    [3x3 double]

>> L{1}

ans =

0.0013   -0.3100   -0.2579
-0.3100    0.0679   -0.1159
-0.2579   -0.1159    0.2102

>> L{2}

ans =

0.0013   -0.3100   -0.2381
-0.3100    0.0282   -0.1159
-0.2381   -0.1159    0.2102

>> IsPSD(L{1})

ans =

0 % L{1} is not positive semidefinite, so rho is not absolutely PPT
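Outside of MATLAB, the two qutrit-qutrit constraint matrices shown above can be built directly from a sorted spectrum. The following is a minimal NumPy sketch (illustrative only, not part of QETLAB; the function names are ours):

```python
import numpy as np

def qutrit_abs_ppt_constraints(lam):
    """Build the two 3x3 constraint matrices for a qutrit-qutrit spectrum.

    lam must be sorted in descending order: lam[0] >= ... >= lam[8] >= 0
    (0-indexed, so lam[k] corresponds to lambda_{k+1} in the formulas above).
    """
    l = np.asarray(lam, dtype=float)
    L1 = np.array([
        [2*l[8],    l[7]-l[0], l[6]-l[1]],
        [l[7]-l[0], 2*l[5],    l[4]-l[2]],
        [l[6]-l[1], l[4]-l[2], 2*l[3]],
    ])
    L2 = np.array([
        [2*l[8],    l[7]-l[0], l[5]-l[1]],
        [l[7]-l[0], 2*l[6],    l[4]-l[2]],
        [l[5]-l[1], l[4]-l[2], 2*l[3]],
    ])
    return L1, L2

def is_psd(M, tol=1e-10):
    """Numerical positive-semidefiniteness check via eigenvalues."""
    return bool(np.linalg.eigvalsh(M).min() >= -tol)

# The maximally mixed state (all eigenvalues 1/9) is absolutely PPT:
L1, L2 = qutrit_abs_ppt_constraints(np.ones(9) / 9)
print(is_psd(L1) and is_psd(L2))  # True
```

For a maximally mixed spectrum both matrices reduce to (2/9) times the identity, so they are trivially positive semidefinite; a pure-state spectrum (1, 0, ..., 0) fails the check, as expected.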

### Can be used with CVX

This function can be used with CVX to perform optimizations over the set of absolutely PPT states. In particular, because each of the absolutely PPT constraints requires that a certain matrix be positive semidefinite, we can use semidefinite programming to maximize or minimize a linear function over the set of eigenvalues of all absolutely PPT states. For example, the following code maximizes the largest eigenvalue over all absolutely PPT states in a $(4 \otimes 4)$-dimensional system:

>> d = 4;
>> cvx_begin sdp quiet
    variable lam(d^2);
    L = AbsPPTConstraints(lam,d); % compute the absolutely PPT constraints

    maximize lam(1);
    subject to
        sum(lam) == 1; % eigenvalues of a density matrix sum to 1
        lam >= 0; % eigenvalues of a density matrix are non-negative
        for j = 1:length(L)
            L{j} >= 0; % all of the absolutely PPT constraints (there are 10 of them in this case)
        end
        for j = 1:d^2-1
            lam(j) >= lam(j+1); % eigenvalues must be in descending order
        end
cvx_end
cvx_optval

cvx_optval =

0.1667

Indeed, this agrees with Proposition 8.2 of [2], which says that the maximum value of the largest eigenvalue of an absolutely PPT state in a $(k \otimes n)$-dimensional system is $3/(2+nk)$.
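As a quick sanity check of that formula in plain Python (the helper name is ours, not part of QETLAB):

```python
# Proposition 8.2 of [2]: the maximum possible largest eigenvalue of an
# absolutely PPT state on a (k x n)-dimensional system is 3/(2 + n*k).
def max_abs_ppt_eigenvalue(k, n):
    return 3.0 / (2 + n * k)

print(round(max_abs_ppt_eigenvalue(4, 4), 4))  # 0.1667, matching cvx_optval above
```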

IMPORTANT: If you use this function within a CVX optimization problem, make sure that you enforce lam(1) >= lam(2) >= ... >= 0 as one of the constraints in CVX!

## Notes

• For all practical purposes, this function can only compute all constraint matrices when at least one of the local dimensions is 6 or less. When both local dimensions are 7 or higher, the LIM argument should be specified (otherwise the computation may take days, weeks, or longer).
• This function actually computes slightly more constraints than is optimal. It computes the optimal set of constraints when one of the local dimensions is 5 or less, but when the smallest local dimension is 6, it computes 2612 constraint matrices rather than the optimal set of 2608 constraint matrices: the additional 4 matrices are redundant, but do not cause any harm.
• The optimal number of constraint matrices, when the smaller of the two local dimensions is 2, 3, 4, ..., is given by the sequence 1, 2, 10, 114, 2608, ... (A237749 in the OEIS).
• Absolutely PPT states are sometimes said to be "PPT from spectrum".
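When the smaller local dimension is 2, there is exactly one constraint matrix (the first term of the sequence above), which the source code handles as a hard-coded special case. A NumPy sketch of that matrix (illustrative only; the function name is ours):

```python
import numpy as np

def qubit_abs_ppt_constraint(lam):
    """The single constraint matrix when the smaller local dimension is 2.

    lam: spectrum sorted in descending order, length 2*n with n >= 2.
    Mirrors the hard-coded p == 2 case in the MATLAB source:
      [2*lam(end), lam(end-1)-lam(1); lam(end-1)-lam(1), 2*lam(end-2)]
    """
    l = np.asarray(lam, dtype=float)
    off = l[-2] - l[0]
    return np.array([[2*l[-1], off], [off, 2*l[-3]]])

# The two-qubit maximally mixed state gives 0.5*I, which is clearly PSD:
M = qubit_abs_ppt_constraint(np.ones(4) / 4)
print(M.tolist())  # [[0.5, 0.0], [0.0, 0.5]]
```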

## Source code


%%  ABSPPTCONSTRAINTS    Builds the eigenvalue matrices that determine whether or not a state is absolutely PPT
%   This function has one required input argument:
%     LAM: a vector of length prod(DIM) that contains the eigenvalues of
%          the state
%
%   L = AbsPPTConstraints(LAM) is a cell containing the constraint
%   matrices constructed in [1] that characterize whether or not a mixed
%   state with eigenvalues LAM is "absolutely PPT" (see [1] for
%   definition). In particular, a state with eigenvalues LAM is absolutely
%   PPT if and only if every matrix in L is positive semidefinite.
%
%   These matrices can be used as constraints in a CVX optimization
%   problem, allowing the user to solve SDPs whose feasible region is the
%   set of absolutely PPT states (see online documentation for examples).
%
%   This function has three optional input arguments:
%     DIM (default has both subsystems of equal dimension)
%     ESC_IF_NPOS (default 0)
%     LIM (default 0)
%
%   L = AbsPPTConstraints(LAM,DIM,ESC_IF_NPOS,LIM) is as above, but if LIM
%   > 0 then at most LIM matrices are computed. The two local dimensions
%   are specified by the 1-by-2 vector DIM. Additionally, if ESC_IF_NPOS =
%   1 then the function stops computing as soon as a matrix in L is not
%   positive semidefinite.
%
%   URL: http://www.qetlab.com/AbsPPTConstraints
%
%   References:
%   [1] R. Hildebrand. Positive partial transpose from spectra. Phys. Rev.
%       A, 76:052325, 2007. E-print: arXiv:quant-ph/0502170

%   requires: IsPSD.m, opt_args.m, perm_inv.m
%   author: Nathaniel Johnston (nathaniel@njohnston.ca)
%   package: QETLAB
%   last updated: December 2, 2014

function L = AbsPPTConstraints(lam,varargin)

    sz = size(lam);
    if(min(sz) > 1) % LAM is a density matrix: get its eigenvalues
        lam = eig(lam);
    end
    lam_len = max(sz);
    dim_guess = round(sqrt(lam_len));

    % set optional argument defaults: dim=dim_guess, esc_if_npos=0, lim=0
    [dim,esc_if_npos,lim] = opt_args({ dim_guess, 0, 0 },varargin{:});

    % Allow the user to enter a single number for DIM.
    if(length(dim) == 1)
        dim = [dim,dim];
    end

    % Now do some error checking.
    if(prod(dim) ~= lam_len)
        error('AbsPPTConstraints:InvalidDim','The dimensions provided by DIM do not match the length of the vector LAM.');
    end

    % Make sure that LAM is sorted in descending order (if possible).
    if(isa(lam,'cvx') == 0)
        lam = sort(lam,'descend');
    end

    % No simple formula is known for the number of matrices in L that will
    % be returned, so just increase L's size every step. The speed
    % reduction from re-allocating memory is minimal compared to the time
    % it takes to do the recursion anyway.
    L = cell(0);

    % Begin by setting some preliminary values that will let the search
    % work.
    esc_now = 0;
    p = min(dim);
    X = (p*(p+1)/2+1)*ones(p); % set each entry to anything larger than p*(p+1)/2
    num_pool = ones(p*(p+1)/2,1);
    ct = 1;

    % If p <= 2, hard-code the results (the recursion will get cranky).
    if(p == 1)
        return;
    elseif(p == 2)
        L{1} = [2*lam(end), lam(end-1)-lam(1); lam(end-1)-lam(1), 2*lam(end-2)];
        return;
    end

    % The two top-left and bottom-right entries of the matrix are always
    % the same, so fix them.
    X(1,1) = 1;
    X(1,2) = 2;
    X(p,p) = p*(p+1)/2;
    X(p-1,p) = p*(p+1)/2 - 1;

    % Now recursively construct the absolutely PPT constraints.
    fill_matrix(1,3,3);

    % This function does the recursion itself. It is nested because it
    % needs to have access to variables like num_pool and X.
    function fill_matrix(i,j,low_lim)
        up_lim = min(j*(j+1)/2 + i*(p-j), p*(p+1)/2 - 2);
        for k = low_lim:up_lim
            if(num_pool(k) == 1)
                X(i,j) = k;
                num_pool(k) = 0;

                if(check_ordered(X,i,j) == 1)
                    if(i == p-1 && j == p-1) % we have completely filled X: check if it is valid
                        if(check_cross(p,X) == 1)
                            % This ordering is valid, so let's use the
                            % values in the lam vector to actually
                            % construct the matrix of eigenvalues from X.
                            L{ct} = eigen_from_order(X,lam,dim);
                            if((esc_if_npos == 1 && ~IsPSD(L{ct})) || (lim > 0 && ct >= lim))
                                esc_now = 1;
                                return;
                            end
                            ct = ct + 1;
                        end
                    elseif(j == p) % we are at the end of a row: move to the next row
                        fill_matrix(i+1,i+1,3);
                    else % we are somewhere in the middle of the matrix: move to the next column
                        fill_matrix(i,j+1,k+1);
                    end
                    if(esc_now == 1)
                        return;
                    end
                end
                num_pool(k) = 1;
            end
        end
        X(i,j) = p*(p+1)/2+1; % anything larger than p*(p+1)/2
    end
end

% This function makes sure that the entry of X that is currently being
% placed does not violate the fact that the columns of X must be ascending
% (the rows will automatically be ascending by the method of construction
% of X).
function is_o = check_ordered(X,i,j)
    if(i > 1 && X(i-1,j) > X(i,j))
        is_o = 0;
    else
        is_o = 1;
    end
end

% This function makes sure that X does not contain any of the "criss-cross"
% suborderings described in the "A Better Upper Bound" section of
% http://www.njohnston.ca/2014/02/counting-the-possible-orderings-of-pairwise-multiplication/
function cc = check_cross(p,X)
    for i=1:p % i: row
        for j=1:p % j: column
            for k=1:p % k: row
                for l=1:p % l: column
                    for m=1:p % m: row
                        for n=1:p % n: column
                            if(X(min(i,j),max(i,j)) > X(min(k,l),max(k,l)) && X(min(l,n),max(l,n)) > X(min(j,m),max(j,m)) && X(min(i,n),max(i,n)) < X(min(k,m),max(k,m)))
                                cc = 0;
                                return
                            end
                        end
                    end
                end
            end
        end
    end

    cc = 1;
end

% This function converts an ordering matrix to the corresponding eigenvalue
% constraint matrix. For example, the ordering matrix [1 2 3;* 4 5;* * 6]
% corresponds to the following eigenvalue constraint matrix:
% [2*lam(9), lam(8)-lam(1), lam(7)-lam(2);
%    *     , 2*lam(6)     , lam(5)-lam(3);
%    *     ,     *        , 2*lam(4)]
% For details of how these eigenvalue constraint matrices are constructed,
% see the Hildebrand paper.
function L_mat = eigen_from_order(X,lam,dim)
    prod_dim = prod(dim);

    % Do some pre-processing to extract information about the ordering X
    % that will be useful for us.
    Y = triu(X,1);
    X = triu(X) + Y.';
    sparsity_pattern = logical(Y);

    % Reduce the entries in Y so that they are 1, 2, ..., (p-1)*p/2, but in
    % the same order as the original entries of Y.
    [~,ind] = sort(nonzeros(Y(sparsity_pattern).'));
    Y(sparsity_pattern) = perm_inv(ind); % Y now contains the indices of the eigenvalues that are subtracted in the off-diagonal entries
    Y = Y + Y' + diag(prod_dim + 1 - diag(X));

    % We have now built all of the pieces we need: put them together to
    % make the eigenvalue matrix in three steps.
    L_mat = lam(prod_dim + 1 - X); % this places all of the eigenvalues that have a positive coefficient
    L_mat = L_mat + 2*diag(diag(L_mat)); % this triples the diagonal (we only really want to double it, but one copy of it will be subtracted at the next step)
    L_mat = L_mat - lam(Y); % this takes off all of the eigenvalues that have a negative coefficient
end

## References

1. R. Hildebrand. Positive partial transpose from spectra. Phys. Rev. A, 76:052325, 2007. E-print: arXiv:quant-ph/0502170
2. M. A. Jivulescu, N. Lupa, I. Nechita, and D. Reeb. Positive reduction from spectra. E-print: arXiv:1406.1277 [quant-ph]