AbsPPTConstraints

Builds the eigenvalue matrices that determine whether or not a state is absolutely PPT

Other toolboxes required | none
Related functions | IsAbsPPT
Function category | Ball of separability
AbsPPTConstraints is a function that computes the matrices of eigenvalues that determine whether or not a given density matrix $\rho$ is "absolutely PPT" (that is, whether or not $U\rho U^\dagger$ has positive partial transpose for every unitary matrix $U$). The state $\rho$ is absolutely PPT if and only if every matrix that this function produces as output is positive semidefinite. The matrices that characterize when a density matrix is absolutely PPT were derived in [1].
Syntax
- L = AbsPPTConstraints(LAM)
- L = AbsPPTConstraints(LAM,DIM)
- L = AbsPPTConstraints(LAM,DIM,ESC_IF_NPOS)
- L = AbsPPTConstraints(LAM,DIM,ESC_IF_NPOS,LIM)
Argument descriptions
- LAM: A bipartite density matrix or a vector containing the eigenvalues of a bipartite density matrix.
- DIM (optional, default has both subsystems of equal dimension): A specification of the dimensions of the subsystems that the density matrix acts on. DIM can be provided in one of two ways:
- If DIM is a scalar, it is assumed that both subsystems have dimension DIM (so LAM must have length DIM^2).
- DIM can be a 1-by-2 vector containing the dimensions of the local subsystems.
- ESC_IF_NPOS (optional, default 0): A flag, either 1 or 0, indicating whether the function should stop computing constraint matrices as soon as it finds one that is not positive semidefinite (i.e., as soon as it finds proof that the density matrix in question is not absolutely PPT).
- LIM (optional, default 0): A non-negative integer specifying how many constraint matrices should be computed. If this equals 0 then all constraint matrices will be computed.
Examples
Qutrit-qutrit states
In the qutrit-qutrit case, a state $\rho$ with eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_9 \geq 0$ is absolutely PPT if and only if the following two matrices are positive semidefinite: \[\begin{bmatrix}2\lambda_9 & \lambda_8-\lambda_1 & \lambda_7-\lambda_2 \\ \lambda_8-\lambda_1 & 2\lambda_6 & \lambda_5-\lambda_3 \\ \lambda_7-\lambda_2 & \lambda_5-\lambda_3 & 2\lambda_4\end{bmatrix} \quad \text{and} \quad \begin{bmatrix}2\lambda_9 & \lambda_8-\lambda_1 & \lambda_6-\lambda_2 \\ \lambda_8-\lambda_1 & 2\lambda_7 & \lambda_5-\lambda_3 \\ \lambda_6-\lambda_2 & \lambda_5-\lambda_3 & 2\lambda_4\end{bmatrix}.\]
We can compute these two matrices for a given set of eigenvalues as follows:
>> rho = RandomDensityMatrix(9);
>> lam = eig(rho);
>> L = AbsPPTConstraints(lam,3) % compute the two constraint matrices

L = 

    [3x3 double]    [3x3 double]

>> L{1}

ans =

    0.0013   -0.3100   -0.2579
   -0.3100    0.0679   -0.1159
   -0.2579   -0.1159    0.2102

>> L{2}

ans =

    0.0013   -0.3100   -0.2381
   -0.3100    0.0282   -0.1159
   -0.2381   -0.1159    0.2102

>> IsPSD(L{1}) % L{1} is not positive semidefinite, so rho is not absolutely PPT

ans =

     0
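For readers without MATLAB, the two qutrit-qutrit constraint matrices above can also be built directly from the sorted eigenvalues. The following Python sketch (numpy assumed; `qutrit_abs_ppt_matrices` and `is_psd` are illustrative helper names, not part of QETLAB) constructs both matrices under the convention $\lambda_1 \geq \cdots \geq \lambda_9$:

```python
import numpy as np

def qutrit_abs_ppt_matrices(lam):
    """Build the two 3x3 constraint matrices for a qutrit-qutrit state.

    lam: the 9 eigenvalues, sorted so that lam[0] >= lam[1] >= ... >= lam[8].
    The state is absolutely PPT iff both returned matrices are PSD.
    """
    l = lam  # 0-indexed: l[0] is lambda_1, l[8] is lambda_9
    M1 = np.array([[2*l[8],    l[7]-l[0], l[6]-l[1]],
                   [l[7]-l[0], 2*l[5],    l[4]-l[2]],
                   [l[6]-l[1], l[4]-l[2], 2*l[3]]])
    M2 = np.array([[2*l[8],    l[7]-l[0], l[5]-l[1]],
                   [l[7]-l[0], 2*l[6],    l[4]-l[2]],
                   [l[5]-l[1], l[4]-l[2], 2*l[3]]])
    return M1, M2

def is_psd(M, tol=1e-10):
    """Check positive semidefiniteness via the smallest eigenvalue."""
    return np.min(np.linalg.eigvalsh(M)) >= -tol

# The maximally mixed state (all eigenvalues 1/9) is absolutely PPT:
M1, M2 = qutrit_abs_ppt_matrices(np.ones(9)/9)
print(bool(is_psd(M1) and is_psd(M2)))  # True
```

Both matrices are diagonal with entries $2/9$ for the maximally mixed state, so the check passes, as expected.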
Can be used with CVX
This function can be used with CVX in order to perform optimizations over the set of absolutely PPT states. In particular, because each of the absolutely PPT constraints requires that a certain matrix be positive semidefinite, we can use semidefinite programming to maximize or minimize a linear function over the set of eigenvalues of all absolutely PPT states. For example, the following code maximizes the largest eigenvalue of all absolutely PPT states in a $(4 \otimes 4)$-dimensional system:
>> d = 4;
>> cvx_begin sdp quiet
     variable lam(d^2);
     L = AbsPPTConstraints(lam,d); % compute the absolutely PPT constraints
     maximize lam(1);
     subject to
       sum(lam) == 1; % eigenvalues of a density matrix sum to 1
       lam >= 0; % eigenvalues of a density matrix are non-negative
       for j = 1:length(L)
         L{j} >= 0; % all of the absolutely PPT constraints (there are 10 of them in this case)
       end
       for j = 1:d^2-1
         lam(j) >= lam(j+1); % eigenvalues must be in order
       end
   cvx_end
>> cvx_optval

cvx_optval =

    0.1667
Indeed, this agrees with Proposition 8.2 of [2], which says that the maximum possible largest eigenvalue of an absolutely PPT state in a $(k \otimes n)$-dimensional system is $3/(2+nk)$.
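As a quick sanity check (a sketch, not part of QETLAB; `max_abs_ppt_eigenvalue` is an illustrative helper name), the closed-form bound from Proposition 8.2 reproduces the SDP value:

```python
# Largest possible maximal eigenvalue of an absolutely PPT state on a
# (k x n)-dimensional system, per Proposition 8.2 of [2]: 3/(2 + n*k).
def max_abs_ppt_eigenvalue(k, n):
    return 3 / (2 + n * k)

print(round(max_abs_ppt_eigenvalue(4, 4), 4))  # 0.1667, matching cvx_optval above
```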
IMPORTANT: If you use this function within a CVX optimization problem, make sure that you enforce lam(1) >= lam(2) >= ... >= 0 as one of the constraints in CVX!
Notes
- For all practical purposes, this function can only compute all constraint matrices when at least one of the local dimensions is 6 or less. When both local dimensions are 7 or higher, the LIM argument should be specified (otherwise the computation may take days, weeks, or longer).
- This function actually computes slightly more constraints than is optimal. It computes the optimal set of constraints when one of the local dimensions is 5 or less, but when the smallest local dimension is 6, it computes 2612 constraint matrices rather than the optimal set of 2608 constraint matrices: the additional 4 matrices are redundant, but do not cause any harm.
- The optimal number of constraint matrices, when the smaller of the two local dimensions is 2, 3, 4, ..., is given by the sequence 1, 2, 10, 114, 2608, ... (A237749 in the OEIS).
- Absolutely PPT states are sometimes said to be "PPT from spectrum".
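When the smaller local dimension is 2, there is only a single constraint matrix, which the source code hard-codes rather than deriving recursively. The following Python sketch (numpy assumed; `qubit_abs_ppt_matrix` is an illustrative helper name, not part of QETLAB) mirrors that 2x2 matrix:

```python
import numpy as np

def qubit_abs_ppt_matrix(lam):
    """Single constraint matrix for a (2 x n)-dimensional system.

    lam: eigenvalues of the state, sorted so lam[0] >= lam[1] >= ...
    The state is absolutely PPT iff this matrix is positive semidefinite,
    i.e. iff lam[0] <= lam[-2] + 2*sqrt(lam[-1]*lam[-3]).
    """
    return np.array([[2*lam[-1],        lam[-2] - lam[0]],
                     [lam[-2] - lam[0], 2*lam[-3]]])

# The maximally mixed two-qubit state is absolutely PPT:
M = qubit_abs_ppt_matrix(np.full(4, 0.25))
print(bool(np.all(np.linalg.eigvalsh(M) >= -1e-12)))  # True
```

Since a 2x2 matrix is PSD exactly when its diagonal entries and determinant are non-negative, this recovers the scalar condition in the docstring.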
Source code
%% ABSPPTCONSTRAINTS Builds the eigenvalue matrices that determine whether or not a state is absolutely PPT
% This function has one required input argument:
% LAM: a vector of length prod(DIM) that contains the eigenvalues of
% the state
%
% L = AbsPPTConstraints(LAM) is a cell containing the constraint
% matrices constructed in [1] that characterize whether or not a mixed
% state with eigenvalues LAM is "absolutely PPT" (see [1] for
% definition). In particular, a state with eigenvalues LAM is absolutely
% PPT if and only if every matrix in L is positive semidefinite.
%
% These matrices can be used as constraints in a CVX optimization
% problem, allowing the user to solve SDPs whose feasible region is the
% set of absolutely PPT states (see online documentation for examples).
%
% This function has three optional input arguments:
% DIM (default has both subsystems of equal dimension)
% ESC_IF_NPOS (default 0)
% LIM (default 0)
%
% L = AbsPPTConstraints(LAM,DIM,ESC_IF_NPOS,LIM) is as above, but if LIM
% > 0 then at most LIM matrices are computed. The two local dimensions
% are specified by the 1-by-2 vector DIM. Additionally, if ESC_IF_NPOS =
% 1 then the function stops computing as soon as a matrix in L is not
% positive semidefinite.
%
% URL: http://www.qetlab.com/AbsPPTConstraints
%
% References:
% [1] R. Hildebrand. Positive partial transpose from spectra. Phys. Rev.
% A, 76:052325, 2007. E-print: arXiv:quant-ph/0502170
% requires: IsPSD.m, opt_args.m, perm_inv.m
% author: Nathaniel Johnston (nathaniel@njohnston.ca)
% package: QETLAB
% last updated: December 2, 2014
function L = AbsPPTConstraints(lam,varargin)
    sz = size(lam);
    if(min(sz) > 1) % LAM is a density matrix: get its eigenvalues
        lam = eig(lam);
    end
    lam_len = max(sz);
    dim_guess = round(sqrt(lam_len));

    % set optional argument defaults: dim=dim_guess, esc_if_npos=0, lim=0
    [dim,esc_if_npos,lim] = opt_args({ dim_guess, 0, 0 },varargin{:});

    % Allow the user to enter a single number for DIM.
    if(length(dim) == 1)
        dim = [dim,dim];
    end

    % Now do some error checking.
    if(prod(dim) ~= lam_len)
        error('AbsPPTConstraints:InvalidDim','The dimensions provided by DIM do not match the length of the vector LAM.');
    end

    % Make sure that LAM is sorted in descending order (if possible).
    if(isa(lam,'cvx') == 0)
        lam = sort(lam,'descend');
    end

    % No simple formula is known for the number of matrices in L that will
    % be returned, so just increase L's size every step. The speed
    % reduction from re-allocating memory is minimal compared to the time
    % it takes to do the recursion anyway.
    L = cell(0);

    % Begin by setting some preliminary values that will let the search
    % work.
    esc_now = 0;
    p = min(dim);
    X = (p*(p+1)/2+1)*ones(p); % set each entry to anything larger than p*(p+1)/2
    num_pool = ones(p*(p+1)/2,1);
    ct = 1;

    % If p <= 2, hard-code the results (the recursion will get cranky).
    if(p == 1)
        return;
    elseif(p == 2)
        L{1} = [2*lam(end), lam(end-1)-lam(1); lam(end-1)-lam(1), 2*lam(end-2)];
        return;
    end

    % The two top-left and bottom-right entries of the matrix are always
    % the same, so fix them.
    X(1,1) = 1;
    X(1,2) = 2;
    X(p,p) = p*(p+1)/2;
    X(p-1,p) = p*(p+1)/2 - 1;

    % Now recursively construct the absolutely PPT constraints.
    fill_matrix(1,3,3);

    % This function does the recursion itself. It is nested because it
    % needs to have access to variables like num_pool and X.
    function fill_matrix(i,j,low_lim)
        up_lim = min(j*(j+1)/2 + i*(p-j), p*(p+1)/2 - 2);
        for k = low_lim:up_lim
            if(num_pool(k) == 1)
                X(i,j) = k;
                num_pool(k) = 0;
                if(check_ordered(X,i,j) == 1)
                    if(i == p-1 && j == p-1) % we have completely filled X: check if it is valid
                        if(check_cross(p,X) == 1)
                            % This ordering is valid, so let's use the
                            % values in the lam vector to actually
                            % construct the matrix of eigenvalues from X.
                            L{ct} = eigen_from_order(X,lam,dim);
                            if((esc_if_npos == 1 && ~IsPSD(L{ct})) || (lim > 0 && ct >= lim))
                                esc_now = 1;
                                return;
                            end
                            ct = ct + 1;
                        end
                    elseif(j == p) % we are at the end of a row: move to the next row
                        fill_matrix(i+1,i+1,3);
                    else % we are somewhere in the middle of the matrix: move to the next column
                        fill_matrix(i,j+1,k+1);
                    end
                    if(esc_now == 1)
                        return;
                    end
                end
                num_pool(k) = 1;
            end
        end
        X(i,j) = p*(p+1)/2+1; % anything larger than p*(p+1)/2
    end
end
% This function makes sure that the entry of X that is currently being
% placed does not violate the fact that the columns of X must be ascending
% (the rows will automatically be ascending by the method of construction
% of X).
function is_o = check_ordered(X,i,j)
    if(i > 1 && X(i-1,j) > X(i,j))
        is_o = 0;
    else
        is_o = 1;
    end
end
% This function makes sure that X does not contain any of the "criss-cross"
% suborderings described in the "A Better Upper Bound" section of
% http://www.njohnston.ca/2014/02/counting-the-possible-orderings-of-pairwise-multiplication/
function cc = check_cross(p,X)
    for i=1:p % i: row
        for j=1:p % j: column
            for k=1:p % k: row
                for l=1:p % l: column
                    for m=1:p % m: row
                        for n=1:p % n: column
                            if(X(min(i,j),max(i,j)) > X(min(k,l),max(k,l)) && X(min(l,n),max(l,n)) > X(min(j,m),max(j,m)) && X(min(i,n),max(i,n)) < X(min(k,m),max(k,m)))
                                cc = 0;
                                return
                            end
                        end
                    end
                end
            end
        end
    end
    cc = 1;
end
% This function converts an ordering matrix to the corresponding eigenvalue
% constraint matrix. For example, the ordering matrix [1 2 3;* 4 5;* * 6]
% corresponds to the following eigenvalue constraint matrix:
% [2*lam(9), lam(8)-lam(1), lam(7)-lam(2);
% * , 2*lam(6) , lam(5)-lam(3);
% * , * , 2*lam(4)]
% For details of how these eigenvalue constraint matrices are constructed,
% see the Hildebrand paper.
function L_mat = eigen_from_order(X,lam,dim)
    prod_dim = prod(dim);

    % Do some pre-processing to extract information about the ordering X
    % that will be useful for us.
    Y = triu(X,1);
    X = triu(X) + Y.';
    sparsity_pattern = logical(Y);

    % Reduce the entries in Y so that they are 1, 2, ..., (p-1)*p/2, but in
    % the same order as the original entries of Y.
    [~,ind] = sort(nonzeros(Y(sparsity_pattern).'));
    Y(sparsity_pattern) = perm_inv(ind); % Y now contains the indices of the eigenvalues that are subtracted in the off-diagonal entries
    Y = Y + Y' + diag(prod_dim + 1 - diag(X));

    % We have now built all of the pieces we need: put them together to
    % make the eigenvalue matrix in three steps.
    L_mat = lam(prod_dim + 1 - X); % this places all of the eigenvalues that have a positive coefficient
    L_mat = L_mat + 2*diag(diag(L_mat)); % this triples the diagonal (we only really want to double it, but one copy of it will be subtracted at the next step)
    L_mat = L_mat - lam(Y); % this takes off all of the eigenvalues that have a negative coefficient
end
References
- [1] R. Hildebrand. Positive partial transpose from spectra. Phys. Rev. A, 76:052325, 2007. E-print: arXiv:quant-ph/0502170
- [2] M. A. Jivulescu, N. Lupa, I. Nechita, and D. Reeb. Positive reduction from spectra. E-print: arXiv:1406.1277 [quant-ph]